It is time to look at interactivity in Processing. You can program Processing to work with a range of input devices, such as microphones, cameras, gamepads, or even something you have built with an Arduino board. For this lesson, though, we’ll stick to plain-old keyboard and mouse input. You will look at building basic interfaces for painting freely and drawing faces. In the process, you will discover that Processing’s standard functions are not exactly purpose-designed for constructing user interfaces. However, the lesson also includes an introduction to the ControlP5 graphical user interface library. ControlP5 provides a suite of essential control widgets, such as buttons, checkboxes, sliders, toggles, and text-fields, thereby saving you the time and effort of having to create them from scratch.
We will also touch on a few game development concepts, specifically collision detection and delta time.
Some User Interface History
It may be hard to believe, but there was a time when computers had no video displays. We’ll skip over that early chapter of computing history, though, and begin at the Command Line Interface (CLI). The first computer monitors couldn’t display much more than text and basic graphics, but this was enough to support a handy CLI. By typing a series of commands, one could instruct a computer to perform its various functions.
You may be surprised to hear that the CLI is far from dead and buried. While it may no longer be the predominant means of interfacing with computing devices, system administrators and programmers still rely on it for many daily computing tasks, and it’s remarkable how much can be accomplished just by typing instructions. As anybody who has mastered the command line can testify, it’s more efficient in various situations, particularly where repetitive tasks and batch processing are involved.
In the above example, you can spot two $ symbols; each is referred to as a prompt, although the symbol displayed can vary between operating systems. The prompt signifies that the computer is ready to accept input. Two commands have been used here: cd, for changing directory; and wget, for downloading a file from a web server. In this case, I’m downloading the command-line version of Processing.py to my Desktop. That’s right – you can run Processing sketches from the command line without opening the editor.
A Text-based User Interface (TUI) is a kind of blend between the CLI and the modern graphical interface. Take w3m, for example – a text-mode web browser. Using the arrow keys, one can navigate websites, albeit with limited styling and no images.
For richer text-based interfaces, many old systems included semigraphics. You can think of semigraphics as extra characters that allow you to ‘draw’ with type. Modern systems have adopted many of these characters; for instance, you can copy-paste these symbols straight from your web browser into any text document: ♠ ♥ ♦ ♣. Additionally, Unicode (basically, a collection of all of the characters a computer can display) includes over a hundred box-drawing characters for constructing TUI interfaces.
In text-mode, computer displays are measured in characters as opposed to pixels. For instance, the ZX Spectrum, released in 1982, managed 32 columns by 24 rows of characters on a screen with a resolution of 256×192 pixels. Because text-mode environments rely on mono-spaced characters, box-drawing characters will always align perfectly.
It is important to mention, though, that many CLI- and TUI-based systems could also render raster graphics; there were text and graphics modes that a system could switch between. Take games, for instance. Text-mode titles – like the dungeon crawler NetHack – ran entirely in text mode, but for games with graphics, the computer would switch to addressing individual pixels. Even today, PCs still boot in text mode before shifting to graphics mode to load the desktop environment.
A Graphical User Interface (GUI) allows for interaction through the manipulation of graphical elements. You routinely make use of such interfaces to interact with your file manager, web-pages, application software, and mobile phone. To narrow down GUIs a bit, I’d like to focus on WIMP interfaces. The Windows/Icons/Menus/Pointer paradigm was developed by Xerox PARC in 1973 and popularised by Apple’s Macintosh in 1984. This has been massively influential on graphical user interface design, and the WIMP-meets-desktop environment has remained fundamentally unchanged since its inception. The desktop metaphor was particularly intuitive as it mimicked the very items that computers sought to replace – documents, folders, notepads, and the forgiving trashcan for retrieving deleted files. With a GUI, gestures and menus replace CLI commands. For example, rather than typing mv commands, a user can drag-and-drop files to move them between folders (directories).
Apple licensed certain GUI features to Microsoft for use in Windows 1.0 but sued when features like overlapping windows appeared in Windows 2.0. The district court ruled in favour of Microsoft. Regardless of the legal outcome, Windows 1.x and 2.x were slow, clumsy, and poorly received. Most Microsoft users elected to stick with MS-DOS, the company’s text-mode environment. With VGA colour, fonts, mouse support, and lightning-fast performance thanks to text mode, MS-DOS TUIs grew to become remarkably advanced.
Many significant hard- and software developments paved the way for WIMP environments. Arguably, though, it was the invention of the mouse that set the process in motion. It was Douglas Engelbart – in collaboration with computer engineer Bill English – who created the first mouse prototype in 1964.
In reality, the development of GUIs involved many people over many years. As the field developed, it spawned new disciplines. Human-Computer Interaction (HCI) researchers emerged in the early 1980s. Bill Moggridge and Bill Verplank coined Interaction Design (IxD) in the mid-1980s to describe the practice of designing interactive digital products – Moggridge felt this was an improvement over his earlier term, Soft-Face. Since then, User Experience (UX) designers, User Interface (UI) designers, and Information Architects (IA) have all entered the scene. I’d imagine that some labyrinthine, mutant Venn diagram exists somewhere to help explain how all of these disciplines relate to one another.
Of course, advances in interaction design are not limited to software. Touchpads found their niche in laptops (as well as MP3 players and nifty music synthesisers). Touchscreens hit it big with tablets and smartphones. Then there is gesture recognition, force feedback, GPS, and augmented reality. Voice recognition has gained newfound traction thanks to enhanced natural language processing. In some respects, speech interfaces represent a coming full circle – instead of typing in commands at the CLI, we now issue them with our voice!
Although we will stick to keyboard/mouse input in this lesson, you are encouraged to explore other means of interaction in your own time. GUI programming features prominently in many software and web development projects, so there are plenty of GUI toolkits out there. HTML, for example, is purpose-built for constructing web-pages. For Python, there’s PyQt, Tkinter, and Kivy, to name but a few. You’ll discover that programming basic buttons without any readymade widgets is painful enough, not to mention checkboxes, sliders, drop-down lists, text-fields, and windows. I’ll try to provide a few tips on good user interface design in the process, but this field really requires a book (or several) of its own to cover in any proper detail.
Create a new file and save it as “mouse_toy”. Add the following setup code:
Run the sketch and move your mouse pointer about the display window. The mouseX and mouseY system variables are used to print the x/y-coordinates to the Console. These same values govern the x/y position of each ellipse (circle) drawn. The frameRate is relatively slow (20 fps), so rapid mouse movement results in circles distributed at larger intervals. There will always be a circle in the top-left corner because the pointer is assumed to be at (0,0) until the mouse moves into the display window.
The pmouseX and pmouseY system variables hold the pointer’s x/y position from the previous frame. In other words, if mouseX is equal to pmouseX (and mouseY to pmouseY), you know that the mouse hasn’t moved since the last frame. As per the code below: add the two new global variables (one of which is sw), comment out the previous draw lines, and add the four new lines at the bottom of the draw() function. The stroke() line rotates the stroke colour each new frame. The line() function draws a line between the current and previous frame’s mouse coordinates. Recall that rapid mouse movement increases the distance between the x/y coordinates captured in successive frames. Run the sketch. As you move your mouse about, a multicoloured line traces your path; you can gauge the speed of mouse movement by the length of each alternating band of rainbow colour.
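If you’d like to tinker with the colour-rotation logic on its own, here is a plain-Python sketch of it; the palette values and the modulo scheme are assumptions about how the stroke() line cycles through hues:

```python
# Assumed rainbow palette -- the book's actual colour values may differ.
RAINBOW = ['#FF0000', '#FF9900', '#FFFF00',
           '#00FF00', '#0099FF', '#6633FF']

def rotate_stroke(framecount):
    """Return the stroke colour for a given frame, cycling the palette."""
    return RAINBOW[framecount % len(RAINBOW)]
```

Each frame advances one step through the palette, so slow mouse movement produces short bands of colour and fast movement produces long ones.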
Currently, you have no means of controlling the flow of colour. To turn the brush on and off, we will add some code that activates it only while the mouse’s left-click button is held down.
While any mouse button is held down, the mousePressed system variable is equal to True; the mouseButton variable can be used to determine which button that is – either LEFT, RIGHT, or CENTER. However, the mousePressed variable reverts to False once you have released, whereas mouseButton retains its value until another button is clicked. For this reason, it’s best to use these two variables in combination with one another. Insert the following if statement to control when the line function is active.
Run the sketch to test how the left mouse button works.
Now restructure the if statement to accommodate a centre-click that sets the stroke-weight to 3, and a right-click that incrementally increases the stroke thickness.
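If you get stuck, the branching can be modelled in plain Python like this; the string constants below merely stand in for Processing’s LEFT/CENTER/RIGHT values:

```python
# Stand-ins for Processing's mouseButton constants.
LEFT, CENTER, RIGHT = 'LEFT', 'CENTER', 'RIGHT'

def brush_weight(button, sw):
    """Return the new stroke-weight for a given mouse button."""
    if button == CENTER:
        return 3        # centre-click resets the weight to 3
    if button == RIGHT:
        return sw + 1   # right-click thickens the stroke incrementally
    return sw           # left-click (painting) leaves the weight as-is
```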
The lines need not persist. Play around to see what interesting effects you can create. As an example, I have added this code to the draw() function. The background now changes colour as you move towards different corners: the x mouse position shifts the hues, while the y position adjusts the saturation. Colourful rectangles appear as you move the mouse about, then fade progressively as the frames advance. The noCursor() function hides the mouse pointer while it is over the display window. The right- and centre-click functions now adjust the size of the squares.
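The hue and saturation shifts rely on linearly remapping the mouse position onto colour ranges – exactly what Processing’s map() function does. As a plain-Python sketch of that calculation (the 300-pixel display and 360-degree hue range here are assumptions):

```python
def remap(value, start1, stop1, start2, stop2):
    """Linearly remap value from one range to another (Processing's map())."""
    return start2 + (value - start1) * (stop2 - start2) / float(stop1 - start1)
```

For example, a pointer halfway across a 300-pixel-wide window would land halfway around a 360-degree hue wheel.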
Processing offers a selection of mouse event functions – which somewhat overlap in functionality with the mouse variables – but are placed outside of the draw() function. These are: mousePressed(), mouseReleased(), mouseWheel(), mouseClicked(), mouseDragged(), and mouseMoved(). We will combine the first three to create a simple paint app that features a panel for selecting and adjusting brush properties. These functions listen for specific mouse events, and once triggered, execute some code in response. Once you’ve grasped a few event functions, it’s easy enough to look up and figure out the others. We will also be controlling Processing’s draw() behaviour manually, as opposed to having it automatically repeat per the frame rate.
Create a new sketch and save it as “paint_app”. Download the font, Ernest (by Marc André ‘mieps’ Misman) from DaFont; extract it; then place the “Ernest.ttf” file in your data sub-directory:
Add the following setup code:
The noLoop() function prevents Processing from continually executing the code within the draw() function. If you run the sketch, the Console displays a single “1”, confirming that draw ran just once. This may seem odd. After all, if you wanted to avoid repeating frames, why include a draw() at all? Well, there is also a loop() function to reactivate the standard draw behaviour. As you will come to see, controlling the draw behaviour with mouse functions makes for a neat approach to building the app.
Add some global variables. It shouldn’t matter whether you place these above or below the setup() code, as long as the lines are flush against the left edge of the editor – although somewhere near the top of your code probably makes the most sense. These variables will be used to adjust and monitor the state of the brush.
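As a rough guide, the brush state might be tracked with globals along these lines; the names and starting values are assumptions based on how the variables are used later in the lesson, not the book’s listing:

```python
# Hypothetical starting state for the brush -- names and values assumed.
painting = False        # True while a stroke is being laid down
paintmode = 'free'      # 'free' for painting; 'select' over the panel
brushcolor = '#FFFFFF'  # active swatch colour
brushsize = 8           # stroke weight, adjusted with the scroll wheel
clearall = False        # set True when the clear button is pressed
```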
The mousePressed() function is called once with every press of a mouse button. If you need to establish which button has been pressed, you can use it in combination with the mouseButton variable. Add the code below, ensuring that the lines are flush left and that you have not placed them within the draw() function. Run the sketch. The moment you left-click within the display window, numbers begin to count up in the Console. To stop this upon release of the mouse button, use a mouseReleased() function; this is called once every time a mouse button is released.
When you run the sketch, the frame-count only counts up in the Console while you are holding the left mouse button down. Excellent! Now add some painting code to the draw() function.
Run the sketch and have a play. It works, but there are some issues.
The first point you lay connects to the top-left corner via a straight line. This is because pmouseX and pmouseY grabbed their last x/y coordinates on frame 1, before your mouse reached into the display window – hence the line’s initial position of (0,0). Also, if you paint for a bit, release the mouse button, then click again to paint elsewhere, the app draws a straight line from where you last left off to your new starting position. While the mouse button is raised, the draw() code ceases to execute, so pmouseX and pmouseY hold coordinates captured prior to the loop’s suspension. Make the necessary adjustments to resolve these bugs:
Run the sketch to confirm that everything works. Read over these edits while simulating the process in your mind, paying careful attention to when painting is in a true or false state. The if not painting… statement draws a line from the current x/y coords to the current x/y coords (not the previous ones), and the frameCount > 1 part solves the initial (0,0) problem. The paintmode variable will become relevant later, when we begin adding different paint-modes.
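Pulling these pieces together, one workable arrangement of the event functions and the draw() logic might look like this – a hedged sketch, not the definitive listing; the painting and paintmode names are assumptions:

```python
# Hypothetical arrangement of the painting logic described above.
painting = False
paintmode = 'free'

def mousePressed():
    if mouseButton == LEFT:
        loop()       # resume the draw loop while the button is down

def mouseReleased():
    global painting
    painting = False
    noLoop()         # suspend drawing again on release

def draw():
    global painting
    if paintmode == 'free' and frameCount > 1:
        if not painting:
            # first frame of a new stroke: draw a point at the current
            # position rather than a line back from stale pmouse coords
            line(mouseX, mouseY, mouseX, mouseY)
            painting = True
        else:
            line(mouseX, mouseY, pmouseX, pmouseY)
```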
The next step is to provide a panel from which the user can select colours and other brush features. Add the code below to the draw() loop. It places a black panel against the left edge, and within it, selectable colour swatches. The panel code is placed below the paint code; this way, Processing draws the panel last, so that no paint strokes appear over it.
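A hypothetical version of the panel-drawing code, assuming a 60-pixel-wide panel and a two-column grid of 30-pixel swatches (the palette colours are placeholders):

```python
# Placeholder palette -- the book's swatch colours may differ.
PALETTE = ['#FF0000', '#FF9900', '#FFFF00',
           '#00FF00', '#0099FF', '#6633FF']

def swatch_cell(i, size=30):
    """Top-left corner of swatch i in a two-column grid."""
    return ((i % 2) * size, (i // 2) * size)

def draw_panel():
    noStroke()
    fill('#000000')
    rect(0, 0, 60, height)            # panel against the left edge
    for i, col in enumerate(PALETTE):
        x, y = swatch_cell(i)
        fill(col)
        rect(x, y, 30, 30)
```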
Selecting buttons is where things get a little clumsy. When you are programming with GUI libraries, every element in your interface is something to which you can attach an event handler. Consider your red button:
Now suppose that you were using some GUI library. The same code might look something like this:
The position, size, and fill parameters are all handled in a single createButton function. That’s neat, but it gets better! There will be dedicated methods that listen for events – for example, something like a click() method that can be attached to any buttons you have created:
To reiterate: this is not real code. However, we will look at one such library (ControlP5) further into this lesson. What I wish to highlight here is that there is no need to detect the mouse position when event listeners are handling things for you. As this sketch employs no such library, we will adopt a similar approach to that of the four-square task (lesson 03); that is, detecting within which square a pointer is positioned. Overhaul your mousePressed() function as follows.
The < 30 and < 60 conditions separate the panel area into two columns; the sub-conditions isolate the row. Run the sketch. You can now select different colours for painting.
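The same column/row logic can be factored into a small helper – a sketch, assuming the 60-pixel panel and 30-pixel swatches described above:

```python
def swatch_at(x, y, swatches=6, size=30):
    """Return the index of the swatch under (x, y), or None if outside."""
    rows = (swatches + 1) // 2
    if x < size * 2 and y < rows * size:
        col = 0 if x < size else 1   # the < 30 / < 60 column checks
        row = int(y) // size         # integer division isolates the row
        return row * 2 + col
    return None
```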
Next, we will add a feature for resizing the brush, mapping the function to the scroll wheel. In addition, there will be a profile of the brush below the swatches; this profile will reflect the active brush’s colour, size, and shape. Locate the last line you wrote in the draw() function, and add the brush preview code beneath it.
The last line does nothing for now, but it will be important for the next (sizing) step. The app now renders a brush preview in the panel. Although the size cannot be adjusted yet, the colour of the dot changes as you click different swatches.
The mouseWheel() event function returns positive or negative values depending on the direction in which the scroll wheel is rotated. Add the following lines to the very bottom of your code.
This code requires some explanation. Firstly, there is the e argument within the mouseWheel() brackets. You may use any name you like for this argument; it serves as a variable to which all of the event’s details are assigned. Note how the Console displays something like this each time the scroll wheel rotates:
<MouseEvent WHEEL@407,370 count:1 button:0>
From this output, one can establish the type of mouse event (WHEEL), the x/y coordinates at which it occurred (@407,370), and the number of scroll increments (count:1). If you added an e argument to one of the other mouse functions – e.g. mouseReleased() – the button value would be some integer. For example, a mousePressed(e) upon left-click would hold something like:
<MouseEvent PRESS@407,370 count:1 button:37>
We do not want to paint while adjusting the brush size, so the paintmode is switched to select; this way, it can be switched back once the adjustment is complete (the switch-back happens inside the mousePressed() function). The e.count is used to retrieve the number of scroll increments from the mouse event. It is necessary, however, to include some checks (if statements) to ensure that the new size remains within an acceptable range. The redraw() function executes the draw() code just once – in contrast to loop(), which would set it to repeat continuously.
Run the sketch to confirm that you can resize the brush using the scroll wheel.
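Stripped of the Processing specifics, the resizing logic amounts to adding e.count and clamping the result; the minimum and maximum sizes below are placeholder values, not the book’s:

```python
def resize_brush(size, count, smallest=5, largest=50):
    """Apply a mouseWheel e.count value, keeping the size within range.
    The smallest/largest bounds here are assumptions."""
    size += count
    if size < smallest:
        size = smallest
    if size > largest:
        size = largest
    return size
```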
There is one problem, though: when selecting swatches with a large brush, a discernible blob of colour extends into the canvas area. To resolve this issue, add an if statement to the draw() function that disables painting while the mouse is over the panel. Use the paintmode variable to control this.
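The guard can be expressed as a one-liner; a sketch, assuming the 60-pixel panel width:

```python
def mode_for(mousex, panel_width=60):
    """Select-mode while the pointer is over the panel; free painting elsewhere."""
    return 'select' if mousex < panel_width else 'free'
```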
Next, add a clear button that wipes everything from the canvas. This requires a new clearall variable, as well as some additional code for the draw() function.
The clear button has no hover effect; that is to say, when you position the mouse cursor above it, there is no visible change. It’s good practice to always provide hover and pressed states for clickable interface elements. This gives the user visual feedback, indicating when they have activated something or are about to select something. A ‘while pressed’ state may seem redundant, but most buttons fire off their instructions only when the user releases the click. In other words, you can click on any interface element and – provided you keep your mouse button held down – move out of the clickable zone and release without triggering anything. Try it on this link:
We could add hover effects to this paint app’s interface, but it’s going to get too messy. I’ve tried to keep things orderly, but the code is beginning to turn into spaghetti. Once again, this is where it helps to use a proper user-interface toolkit, markup language, or GUI library.
Another small tweak that will improve the interface is a custom mouse cursor. Processing’s cursor() function can switch the standard pointer for an image. Download the PNG file below and add it to your data sub-directory. Then add the following code to the end of your draw() function.
There are six predefined cursor arguments: ARROW, CROSS, HAND, MOVE, TEXT, and WAIT. In this case, a crosshair (CROSS) will appear for any brush sized less than 15 pixels. For anything larger, the PNG image cursor (an empty circle) appears instead, to help gauge the brush size. The appearance of the predefined cursors will vary depending on your operating system. If you ever need to hide the mouse cursor altogether, use the noCursor() function.
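As an illustration – with a hypothetical filename and hotspot maths – the cursor-switching might be arranged like this:

```python
# Hypothetical end-of-draw() cursor logic; 'circle.png' is an assumed name.
def update_cursor(brushsize):
    if brushsize < 15:
        cursor(CROSS)    # crosshair for small brushes
    else:
        # centre the circular cursor image on the pointer
        img = loadImage('circle.png')
        cursor(img, brushsize / 2, brushsize / 2)
```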
In the next section, you will explore keyboard interaction. After that, you may want to add some shortcut keys to your drawing app and maybe even some new features?
Computers inherited their keyboard designs from typewriters. In adapting, keyboards spawned various new keys – like the arrows for navigating text-based interfaces, escape and function keys, and a number pad for more efficient numeric entry. Of course, computers could also perform a more diverse range of tasks, and this warranted the inclusion of further modifier keys (Alt, Ctrl, ⌘, Fn) to be used in conjunction with existing keys to perform specific operations. The Z, X, C, and V keys, for example, when combined with Ctrl or ⌘, perform undo/cut/copy/paste operations. Each modifier key, essentially, doubles the range of input with the addition of a single key. The typewriter’s shift key, though, could be credited as the original modifier key; it got its name from how it physically shifted a substantial part of the typewriting mechanism into position for typing capital letters.
Over the years, keyboard layout and usage have evolved in interesting ways. The ubiquitous QWERTY arrangement was devised to avoid characters jamming on mechanical typewriters, so arguably there is room for some optimisation in computer designs. On a typewriter, backspace literally tracked backwards a space for placing diacritical marks above letters, i.e. typing e, then backspace, then ´, resulted in an é. On computers, however, the backspace key deletes characters to the left of the cursor; conversely, the delete key eliminates characters to the right of the cursor (although it formerly punched holes in stiff paper cards). To make things more confusing, the backspace key is often labelled ‘delete’. The escape (Esc) key – originally included for controlling devices using “escape sequences” – was commandeered by programmers looking to stop or abort (‘escape from’) an active process.
Arrow keys were popular for early computer games, but as more titles began to combine the mouse and keyboard, players discovered that a WASD configuration provided a more ergonomic arrangement for right-handed mouse users. Today, keyboard manufacturers offer a plethora of gaming-specific designs, including single-handed variations with less than half the complement of standard keys.
One can utilise keyboard input in many creative ways. For example, the rhythm game, Frets on Fire, relies on the F1–F5 and Enter keys to emulate the form of a guitar. The mascot on the game’s menu screen provides a good idea of how to hold the keyboard.
In ALPHABET, a game by Keita Takahashi and Adam Saltsman, each letter is controlled by its corresponding key. The goal is to get all of the letters to the end of a wacky obstacle course.
Keyboard interaction in Processing works similarly to mouse interaction. There are a series of system variables – key, keyCode, and keyPressed – as well as event functions – keyPressed(), keyReleased(), and keyTyped().
We will create a simple game that controls a basic character using keyboard input. The closest game I can think of is Snake, although “Snake” is really more of a genre than a game. Many (most?) people are familiar with this game, largely thanks to the version Nokia preinstalled on its hugely successful mobile phones of the late nineties. Our game will be far simpler though, missing many key features. For this reason, it will be named Sna.
Create a new sketch and save it as “sna”. Create a “data” sub-directory and place a copy of the Ernest font within it. Add the following code to get started.
Run the sketch. Confirm that you have a white square sitting in the middle of a blue background.
To control the movement of the square – or, if you use your imagination, the ‘snake’ – we’ll use keyboard input. Add a keyTyped() function; this will be called every time any key is pressed. Holding down a key results in repeated calls, the frequency of which is determined by your operating system. From here, you can establish exactly which key has been pressed by printing the key system variable; this always holds the most recent key you have used (whether currently pressed or released).
Run the sketch. Whichever key you press appears in the Console. However, there will be specific keys that fail to register, and these include the arrow keys. You will see why this is and how to work around it shortly.
For now, though, we will use the W key for moving up. One approach is to place the keyPressed system variable inside of the draw loop, then use an if statement to monitor when it’s True. Instead, though, we’ll employ a keyPressed() event function. Think of it this way: mousePressed is to mousePressed() as keyPressed is to keyPressed().
Add the following code to the end of your working file:
Ensure that this code is flush against the left edge (not indented within another function). The if statement tests the key variable to determine whether it is equal to 'w'. Run the sketch. Pressing the w-key sends the ‘snake’ heading off in an upward direction. The yspeed variable – formerly equal to zero – is assigned a value of -4, which is in turn added to the y coordinate with each new frame drawn.
The square passes straight through the top of the display window, never to be seen again. We will add some wrap-around walls so that, if the square exits at a given edge, it reappears on the opposite side. Add some if statements to the draw() function to reposition the cube upon breaching a boundary.
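The boundary logic boils down to a small wrap function, shown here in plain Python (the stage size is passed in as a parameter, since the sketch’s dimensions aren’t fixed here):

```python
def wrap(v, limit):
    """Re-enter at the opposite edge after crossing a boundary."""
    if v < 0:
        return limit   # exited top/left: reappear at bottom/right
    if v > limit:
        return 0       # exited bottom/right: reappear at top/left
    return v           # still within the stage
```

Applying it to both x (with the width) and y (with the height) each frame gives the teleporting behaviour.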
Test the game. The cube will now teleport as it exits the display window. Adding left/right/down movement shouldn’t be a challenge for you, but rather than relying on A/D/S, we will employ the arrow keys. Recall that the key variable registers any letter keys but ignores the arrow and some other special keys. For detecting these, one uses the keyCode system variable. Add a line to print key-codes.
Run the sketch. Every key that you press produces a corresponding number; the arrow codes range from 37 (left) up to 40 (down). You can now use these numbers within if statements to check for special keys. To make things more readable, Processing provides some keyword alternatives to the number codes, such as UP and DOWN. Add some code for arrow-key movement.
The game can now handle four-way (but not diagonal) movement.
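The arrow handling can be condensed into a lookup; the numeric values below match the Java key codes that Processing’s UP/DOWN/LEFT/RIGHT constants represent, and the 4-pixel step mirrors the w-key speed used earlier:

```python
# Stand-ins for Processing's arrow keyCode constants (Java key codes).
LEFT, UP, RIGHT, DOWN = 37, 38, 39, 40

def velocity_for(keycode, step=4):
    """Map an arrow keyCode to an (xspeed, yspeed) pair, or None."""
    return {UP:    (0, -step),
            DOWN:  (0,  step),
            LEFT:  (-step, 0),
            RIGHT: ( step, 0)}.get(keycode)
```

A keyPressed() handler could call this and, on a non-None result, overwrite the sketch’s xspeed/yspeed globals.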
Note, however, that certain keys – TAB among them – are not ‘special’ keys, and are therefore held by the key variable (along with other ‘non-special’ keys). If you wish, you can add the following code to your keyPressed() function to test this out.
By using a pair of round brackets with the if statement, one can break up conditional expressions across multiple lines. This technique can help improve code readability.
So far, it’s not the most advanced game. Each feature we add must be programmed from the bottom-up, whereas a proper game framework would typically include (at the very least) a built-in selection of pre-programmed rendering, physics, collision detection, audio, animation, and perhaps AI features. Processing has the renderer already, as well as some support for other essentials, like event handlers and graphics. What it lacks, though, can be made up through the inclusion of various libraries.
In my experience, many people get excited about developing a game when introduced to handling mouse and keyboard interaction. So, we will press on a little further, adding some simple collision detection to the Sna sketch. This will (a) provide insight into some further game programming concepts, and (b) help you appreciate all the heavy lifting a game library can do for you.
To establish whether two or more shapes have intersected within a game, one performs collision detection tests. There are many algorithms for this – the more accurate types, though, are more demanding on your system (and your coding skills). We’ll look at one of the most basic collision detection techniques, namely axis-aligned bounding boxes (or AABBs).
With AABB collision testing, a rectangular bounding box encapsulates each collide-able element. Of course, many game assets are not perfectly rectangular, so one must sacrifice some accuracy.
We can attempt to improve the perceived accuracy by shrinking the bounding box, using multiple boxes, or employing a different yet comparably performant shape – like a circle. You could even combine bounding boxes and circles. Be aware, though, that each obstacle, item, and enemy on screen is tested for collisions with every other obstacle, item, and enemy. Complex bounding volumes can cause a significant increase in processing overhead, and as a result, slow or jerky performance.
In a few chapters’ time, we will take a look at circular collision volumes. For even greater accuracy, there are polygonal bounding volumes that can accommodate just about any shape, but these require a heap of involved math!
To begin with AABBs, add a collectable item – a red square – to the stage:
The collision test will be handled using a single if statement, and we will build up the conditional expression one piece at a time. The snake’s trail will not trigger any collisions – just the solid white square at its ‘head’. Add a new if statement to the draw() function.
If part of the head is anywhere to the right of the red square, a hit is registered. The rect() function draws squares from the top-left corner across-and-down, so it is necessary to use x+10 (the x-coordinate plus the width of the head) to ascertain the x-coordinate of the head’s right edge. Run the sketch to confirm that this is working. Watch for the “HIT!” that appears in the top-left corner of the display window. The shaded green area in the image below highlights the ‘collision’ zone as it operates currently.
To refine this further, expand on the condition to test whether the player has ventured too far rightwards to trigger any possible collision.
playerx+10 >= itemx checks if the right edge of the head is overlapping the left edge of the red item; playerx <= itemx+10 checks if the left edge of the head is overlapping the right edge of the red item.
This constrains the hit-zone to a vertical band as wide as the item.
The head no longer registers a hit once it has passed the right edge of the item. However, as indicated by the green area in the image, anywhere directly above or below the item reports a collision. To resolve this, add additional checks for the y-axis.
The result is an axis-aligned bounding-box that conforms perfectly to the red item.
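Factored into a plain-Python function – with 10-pixel squares assumed for both the head and the item – the complete test reads:

```python
def aabb_hit(px, py, ix, iy, size=10):
    """Axis-aligned bounding-box overlap between the head and the item.
    Both squares are assumed to share the same size."""
    return (px + size >= ix and px <= ix + size and
            py + size >= iy and py <= iy + size)
```

The x-axis pair constrains the hit zone to a vertical band; the y-axis pair then clips that band down to the item itself.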
The collision detection is now functioning correctly. From here, you could make the item disappear and apply a power-up. For example, perhaps the snake’s speed could increase upon collecting the red square? Then, maybe after a short period, a new item could appear at some random new location? Before you begin trying anything, though, let’s look at another important game programming concept: delta time.
Films run at a constant frame rate. Games attempt to run at a constant frame rate, yet there is often fluctuation. Your Sna game is ticking over at 30 fps, as specified in the setup function. Your computer is powerful enough to check for key input, render the snake’s new position, and detect possible collisions – each and every frame – without producing any noticeable lag. However, there are instances where a game must perform many additional interframe computations. For example, there may be twenty collectable items scattered about the stage; in such a scenario, a further nineteen AABB collision tests must take place before a new frame can be displayed. More likely, though, it would take thousands of collision tests per frame to produce any perceivable slow-down.
Adjust the yspeed variable so that the snake immediately heads upward when the sketch runs. In addition to this edit, add an if statement to the bottom of your draw function to record the total milliseconds elapsed once the snake reaches the top edge. The if statement detects when the snake is somewhere below its starting position – in other words, just as the head teleports to the lower half of the stage, but before it is rendered at the opposite edge.
Run the sketch. The snake heads off as soon as the display window opens. Upon reaching the top edge, the noLoop() call halts everything and the millisecond count is displayed.
The fastest possible time in which the snake can reach the boundary is 2500 milliseconds. My computer managed 2833 milliseconds, but your system could be slower or faster. The snake has 300 ÷ 2 = 150 pixels to cover, travelling at a speed of 2 pixels per frame; that’s 150 pixels ÷ 2 pixels-per-frame = 75 frames to reach the edge. Recall that the game is running at 30 frames per second, so 75 frames ÷ 30 fps = 2.5 seconds, or 2500 milliseconds. Why can’t it manage 2500 milliseconds flat? Well, the very first frame takes some extra time because Processing needs to set up a few things.
To measure the time elapsed between the drawing of each new frame, add the following code:
The currframe variable is used to record the current time – which can then be compared with the
lastframe variable. The difference between these two values is assigned to the
deltatime variable. Run the sketch. Once the snake has reached the top edge, scroll back up through the Console output. The
deltatime averages around 33 milliseconds – because 1000 milliseconds divided by 30 (the frame rate) is 33.3 recurring. The exception is the very first value, as the first frame takes significantly longer to process.
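You can mimic this frame-timing technique outside of Processing using plain Python’s time module. The variable names below mirror the sketch’s (currframe, lastframe, deltatime), but time.sleep() stands in for the draw cycle:

```python
import time

lastframe = time.monotonic() * 1000   # current time in milliseconds
deltas = []

for frame in range(3):
    time.sleep(1 / 30.0)                 # stand-in for one 30 fps draw cycle
    currframe = time.monotonic() * 1000
    deltatime = currframe - lastframe    # ms elapsed since the previous frame
    deltas.append(deltatime)
    lastframe = currframe

print([round(d) for d in deltas])        # each value hovers around 33
```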
To emulate some heavier processing loads, as if there were thousands of collisions to test, add a highly demanding (if pointless) computational task to the end of your
draw loop just before the
lastframe = currframe line:
The for loop does nothing useful. It performs a bunch of intense trigonometry calculations only to discard the values when complete. All of this extra trig-crunching should slow things down. Run the sketch to see what happens.
You should experience a noticeable reduction in frame rate. Note, however, that the loop employs a random function; the lag effect is hence erratic, as the loop may run anywhere between zero and 900 times in a single
draw. In other words, the snake will move smoothly, but then randomly struggle before speeding up again. My computer clocked 5985 milliseconds for the boundary sprint, but yours could be much slower or faster. If you find that your computer is grinding to a near-halt, reduce the
900 to something a bit more manageable. Conversely, if everything seems to be running about as smoothly as before, try doubling this value. You’ll want to find some number that, roughly speaking, halves the snake’s average speed.
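If you would like to see the erratic-workload idea outside of Processing, here is a plain-Python stand-in for the trig-crunching loop; the busy_work function and its seeded random are illustrative, not part of the sketch:

```python
import math
import random

def busy_work(limit=900):
    """Burn CPU time on pointless trigonometry, much like the sketch's
    filler loop; returns how many outer iterations (randomly) ran."""
    runs = int(random.random() * limit)
    for i in range(runs):
        for j in range(100):
            math.atan2(random.random() * 400, random.random() * 400)
    return runs

random.seed(1)   # seeded only so this demonstration is repeatable
workloads = [busy_work() for _ in range(4)]
print(workloads)   # a different (erratic) workload each 'frame'
```

Because the workload differs every frame, so does the time each frame takes to draw.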
You will also notice that the
deltatime (the milliseconds elapsed between each frame) values are now far more erratic and generally larger.
This is where delta time proves useful. The time between frames can be used to calculate where the snake’s head should be, as opposed to where it managed to reach. To calculate the projected
playery position, multiply each yspeed step by
deltatime divided by the required frame interval (33.3 milliseconds).
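The projection can be sketched in plain Python. The frame durations below are made up, but they show that the head’s final position is governed by elapsed time rather than by how many frames were drawn:

```python
TARGET = 1000 / 30.0   # the ideal interval between frames (~33.3 ms)
yspeed = 2             # pixels the head should cover per ideal frame

playery = 150.0
frame_times = [33, 70, 33, 120, 45]   # hypothetical, uneven frame durations (ms)

for deltatime in frame_times:
    # A slow frame yields a proportionally larger leap, so the overall
    # speed stays tied to elapsed time rather than frame count.
    playery -= yspeed * (deltatime / TARGET)

# The position depends only on the total elapsed time:
expected = 150 - yspeed * sum(frame_times) / TARGET
print(round(playery, 2), round(expected, 2))
```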
Run the sketch. The snake reaches the top edge in around 2500 milliseconds, even slightly under, as if there were no lag at all. However, rather than rendering each successive head two pixels apart, the head ‘leaps’ in larger, unevenly sized increments. The size of each leap is dependent on how much time is required to catch up. This results in a longer trail, as the starting position is now fewer frames from the ending position. Moreover, some discernible gaps may appear in the trail, although this will depend on how much your system struggles to match 30 frames per second.
You can now adjust the loop’s
900 value as you wish and the snake still reaches the top edge in around 2500 milliseconds (give or take a few hundred).
Delta time, thus, helps maintain a constant game speed despite variations in frame rate. We are ‘dropping’ frames to keep apace, but, ultimately, delta time helps smooth out the movement values. It can also be used to limit frame rates in cases where a game may run too fast. Generally speaking, any positioning, rotation, and scaling motions should incorporate delta time. On the other hand, games can behave very strangely if physics calculations mix with variable frame rates. Many game engines, hence, include fixed and variable time-step functions – like
draw() – to separate out physics and graphics code.
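To illustrate the fixed time-step idea, here is a minimal plain-Python sketch of the ‘accumulator’ pattern that many engines use; the run_physics function and the frame durations are hypothetical, not Processing or engine API:

```python
FIXED_DT = 1000 / 60.0   # physics always advances in fixed ~16.7 ms steps

def run_physics(frame_ms, accumulator):
    """Consume one (possibly erratic) frame's duration in fixed-size
    physics steps; returns (steps_run, leftover_ms)."""
    accumulator += frame_ms
    steps = 0
    while accumulator >= FIXED_DT:
        accumulator -= FIXED_DT
        steps += 1   # one deterministic physics update per step
    return steps, accumulator

acc = 0.0
steps_per_frame = []
for frame_ms in [16, 40, 5, 100]:   # uneven rendering-frame durations
    steps, acc = run_physics(frame_ms, acc)
    steps_per_frame.append(steps)

print(steps_per_frame)   # rendering stutters, yet every physics step is uniform
```

The rendering code draws whenever it can, while the physics code always advances in identical increments, which keeps simulations stable.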
If you wish to move the player around freely again, be sure to remove the
if playery > 145 code.
That is as deep as we will venture into game development concepts. If it’s games you are serious about, then you’ll need to explore further using other resources. That said, the concepts and techniques covered in the previous and upcoming tutorials are integral to any journey towards game development.
ControlP5 is a feature-packed GUI library, full of options for building and customising user interfaces. It provides an extensive set of control widgets that include buttons, sliders, knobs, toggles, textfields, checkboxes, accordions, charts, timers, drop-downs, tab- and window-interfaces, and more.
To begin using ControlP5, one must first install it. From the Processing menu bar, select Sketch > Import Library… > Add Library…
This raises the Contribution Manager window; under the Libraries tab, locate and install ControlP5.
Once you have the library installed, create a new sketch named “identikit”. An identikit – or facial composite – is a portrait image reconstructed from the memory of one or more eyewitnesses. You have probably seen criminal identikits in police stations or on the news. These are produced in various ways: by sketch artists, using a system of overlaid transparencies, or with computer software. Our identikit program will be of no use for any real-world application, but it’s fun to play with, nonetheless. To get started, add the following code.
Processing requires the
add_library() line for loading in ControlP5. In the
setup block, a new ControlP5 instance is assigned to a variable named
cp5. From here on, one can access ControlP5 features with a
cp5. prefix; this will make sense a little further along when we begin to add controllers. The
axis coordinate runs through the centre of the
ellipse; in other words, it marks the horizontal centre of the circular ‘face’. The face is positioned to the left of the display to make room for control widgets on the right.
The first widget will be a textfield. The controller is added to the setup block.
The brackets surrounding the
cp5.addTextfield() lines may look odd, but this is necessary to break the chain of methods over multiple lines. Alternatively, you could write this all on a single line, but you likely won’t find it as readable. Whenever you create a new controller, specify a name in the first argument – in this case, I have used
'alias'. This name is used to reference the controller further along and also serves as the default label for the field.
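Incidentally, the bracket technique is ordinary Python: surrounding parentheses allow a chain of method calls to span multiple lines. This toy Widget class (not ControlP5 itself, although the method names echo its setters) demonstrates the same pattern:

```python
class Widget:
    """A toy fluent-interface class mimicking how ControlP5
    controllers chain their setter methods."""
    def __init__(self, name):
        self.name = name
        self.position = (0, 0)
        self.size = (0, 0)

    def setPosition(self, x, y):
        self.position = (x, y)
        return self   # returning self is what makes chaining possible

    def setSize(self, w, h):
        self.size = (w, h)
        return self

# Wrapping the chain in brackets lets it break cleanly across lines.
field = (Widget('alias')
         .setPosition(205, 15)   # placeholder values
         .setSize(85, 20)
         )
print(field.name, field.position, field.size)
```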
Within the draw function, retrieve the captured input and render it using a
text() function. You use the
getController() method to access the properties of any
cp5 controller (by its name), and chain a
getText() onto this to isolate the text value.
Test out the input field. The alias you enter will appear beneath the face.
Next, we will add widgets for controlling the eyes. The additional methods,
setRange() and setValue(), set the lower/upper value range and the initial value, respectively.
The problem is that the default position for any slider label is to the right of the widget, which doesn’t fit nicely in this layout. Add some code that adjusts the alignment and padding to reposition the label at the bottom left.
Before drawing the eyes on the face, add three more control widgets – a knob and two toggles:
To draw the eyes, add the following lines to your
draw function. You will notice that a
getController() is used to retrieve each of the controller properties – but, unlike the textfield – there are
getValue() methods in place of getText().
Run the sketch and have a play with the various eye features.
For the nose, we will add a 2D slider; for the mouth, a standard slider but with tick marks for set increments.
The 2D slider holds two values in a list, hence the
.getArrayValue() with square brackets for indexing. If you are confused about what the different methods control, try adjusting the arguments to see what effect this has.
We will add one final button widget that will save the image as a TIFF file. Using the
.addButton() method, place a new button at the lower right of the display window.
One can attach event handlers much like any other chained method. These require a Python lambda – but for now, all you need to know is where to write the lambda. We will not review lambdas in these lessons, but if you wish to explore them further, wait until after the next lesson on functions (they’ll make far more sense after that).
The e variable serves the same role as in the mouse event examples from earlier. That is, it holds all of the properties related to the event (in this case, an
.onClick). You may also name
e whatever you wish. To provide some insight into what these properties are, we will print them to the console.
Of course, we wish to save an image, so change the lambda line, replacing everything after the colon with a
save() function that uses the alias input for a filename.
.onClick( lambda e: save(cp5.getController('alias').getText()) )
That is as far as we will venture into ControlP5. There’s plenty more to explore, though. For instance, by holding down the Alt key while click-and-dragging, you can move controllers about the display. You can also hide all of the controllers using Alt+Shift+H. To activate these shortcut features, add a
cp5.enableShortcuts() line somewhere in the setup block.
For more examples of how to use ControlP5, refer to the File > Examples… menu. In the window that pops up, you’ll find an extensive selection of sample sketches. Be warned, though: almost all of these are written in Processing’s Java language. Even so, the code should be similar enough for you to understand and translate to Python.
That’s all for this lesson. Feel welcome to experiment with and add additional features to the tasks you have completed.
You will often find that you repeat the same, or very similar, lines of code within a single sketch. Moreover, as your programs grow more complex, repetition tends to creep in. For more modular and reusable code, one can employ functions. In the next chapter, you will look at how to define and work with functions. As a concept, you should grasp functions without much trouble, especially considering what you have managed thus far. That said, I’ll still be throwing in some crunchy tasks to keep you challenged!
Begin Lesson 08: Functions