Today’s guest tutorial comes to you courtesy of Matt Webster, a.k.a. “HoraceBury.” Matt is a Development Manager working in central London. He has 15 years of experience building enterprise sites in .NET and Java, but he prefers Corona to build games and physics-based apps. As a Corona Labs Ambassador, Matt has organized two London meet-ups and looks forward to doing more in 2013. Matt’s first game in Corona was Tiltopolis, a hybrid twist on the classics Columns and Tetris.
First, please download the project files so you can follow along with the code samples. Each of the “sampleX” modules is a functioning mini-project and should be worked through one at a time within main.lua. Uncomment only one require statement at a time to follow the workings of the logic. The last module, sample11.lua, is the complete pinch-zoom-rotate module, which you can incorporate into your own projects. Enjoy!
Most applications (more than you’d expect) can perform perfectly fine with just one touch point. If you consider the large number of apps out there, you can see that many have a huge feature set but still get by with just a single point of input because they are designed around buttons or individual swipe actions, etc.
Take “Angry Birds,” for example. This game requires that every tap, drag, and swipe is performed by one finger. Navigating the menu, opening the settings, and firing the aforementioned birds with attitude is all done with one finger, and rightly so. It makes for a simple, intuitive, and engrossing game. However, even this most basic interface requires one simple trick learned from iOS: using two fingers to “pinch” zoom in and out of the parallax-scrolling action.
So, that’s simple, isn’t it? The rule is: when one finger is used, perform the action for the object being touched. When two fingers are used, perform a gentle scaling of the top-level parent display group.
This tutorial aims to show you how to handle these multitouch scenarios with as little hassle as possible. It will also try to provide some insight into the oft-requested pinch zoom.
If you’re reading this tutorial, you probably already have some experience with the Corona touch model, so I will just highlight the core tenets.
addEventListener() is used to listen to a particular display object for user touches.
There are two types of touch events: touch and tap.
The touch event consists of distinct phases: began, moved, ended, and cancelled.
Listening to one display object for both touch and tap events will fire the touch event phases before the tap event fires.
Returning true from an event function stops Corona from passing that event to any display objects beneath the object.
system.activate("multitouch") enables multitouch.
Once a touch event has begun, future touch phases are directed to the same listener by calling display.getCurrentStage():setFocus().
setFocus can only be called once per object per event (without cancellation).
Calling dispatchEvent() on display objects fires artificial events.
Events fired with dispatchEvent do not propagate down the display hierarchy.
The Tap Problem
As described above, touch events have a number of phases which describe the user’s interaction with the device: putting the finger on the screen, moving it around, and letting go.
When it is listened for, the tap event fires if the above touch phases occur within a given time span — iOS uses roughly 350 milliseconds — and within a small distance — roughly 10 pixels — between the began and ended locations.
This means that if you are listening for both touch and tap events, you need to detect the tap within your touch listener function just to know that your tap listener is about to be called. And if you’re already detecting taps there, you may as well not attach a tap listener at all. For the purposes of this tutorial, that’s exactly what we’ll do: we will leave out tap events, because they would only complicate our code.
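As an illustration of that point, here is a hedged sketch of detecting a tap inside a touch listener, using the approximate thresholds quoted above (the object name, thresholds, and handling code are all illustrative, not part of the tutorial’s sample files):

```lua
-- Sketch: detect a tap inside a touch listener, using the rough
-- thresholds quoted above (~350 ms, ~10 px). Names are illustrative.
local obj = display.newRect(100, 100, 80, 80)

function obj:touch(e)
    if e.phase == "began" then
        self.startTime = system.getTimer() -- when the touch began
        return true
    elseif e.phase == "ended" then
        local elapsed = system.getTimer() - (self.startTime or 0)
        local dx, dy = e.x - e.xStart, e.y - e.yStart
        -- 100 is 10 pixels squared, avoiding a math.sqrt() call
        if elapsed < 350 and (dx * dx + dy * dy) < 100 then
            print("tap detected") -- handle the tap here; no tap listener needed
        end
        return true
    end
    return true
end

obj:addEventListener("touch", obj)
```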
To demonstrate the typical touch event, let’s create a display object with a standard touch listener and use it to move the display object around.
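The full listing is in the project files; a minimal sketch of such a listener, assuming the standard Corona display APIs (the `rect` name is illustrative), might look like this:

```lua
-- Create a display object and attach a touch listener to it.
local rect = display.newRect(100, 100, 80, 80)

function rect:touch(e)
    if e.phase == "began" then
        -- Claim this touch: all further phases come straight to this object.
        display.getCurrentStage():setFocus(e.target)
        self.hasFocus = true
        -- Remember where the object was when the touch started.
        self.markX, self.markY = self.x, self.y
        return true
    elseif self.hasFocus then
        if e.phase == "moved" then
            -- Drag the object by the distance the finger has travelled.
            self.x = self.markX + (e.x - e.xStart)
            self.y = self.markY + (e.y - e.yStart)
        else -- "ended" or "cancelled"
            -- Release the focus so other objects can receive touches.
            display.getCurrentStage():setFocus(nil)
            self.hasFocus = false
        end
        return true
    end
    return false -- touches which did not begin on this object propagate onward
end

rect:addEventListener("touch", rect)
```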
The above function handles touch events when multitouch is not activated. This isn’t the simplest possible touch listener, but it’s practical and safe. It’s also not the most complex we could build; any further work it needs to perform should be delegated to functions it calls. It caters for the following situations:
The touch starts on the object.
The touch is used to move the object.
Touches which start “off” the object are ignored.
Handled touches do not get passed to other display objects.
Ignored touches get propagated to other display objects.
The display object has its own :touch(e) function, not a global function.
Note that the object will ignore touches which start elsewhere, because hasFocus is only set when a touch begins on the object, and the later phases check for it. Also, the object will not lose the touch once it acquires it, because setFocus tells Corona to direct all further input from that touch to this object.
Fortunately, converting this function to be used by multiple display objects is not difficult. The catch with setFocus is that each display object can only listen for one touch because all other touch events are ignored on that object after it begins handling a touch.
To demonstrate multitouch we will convert the above code to create multiple objects which will handle one touch each.
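A sketch of the converted code (again assuming the standard Corona APIs; the `newCircle` wrapper name is illustrative) might look like this:

```lua
-- Enable multitouch so several touches can be tracked at once.
system.activate("multitouch")

-- Wrap object creation so it can be called repeatedly.
local function newCircle(x, y)
    local circle = display.newCircle(x, y, 50)

    function circle:touch(e)
        if e.phase == "began" then
            -- Pass the touch ID so only this touch is focused on this object.
            display.getCurrentStage():setFocus(e.target, e.id)
            self.hasFocus = true
            self.markX, self.markY = self.x, self.y
            return true
        elseif self.hasFocus then
            if e.phase == "moved" then
                self.x = self.markX + (e.x - e.xStart)
                self.y = self.markY + (e.y - e.yStart)
            else -- "ended" or "cancelled"
                -- Passing nil as the ID releases this object's touch.
                display.getCurrentStage():setFocus(e.target, nil)
                self.hasFocus = false
            end
            return true
        end
        return false
    end

    circle:addEventListener("touch", circle)
    return circle
end

-- Create five circles, each movable by an independent touch.
for i = 1, 5 do
    newCircle(60 + i * 50, 200)
end
```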
Note the key differences in this code:
We have activated multitouch.
We have wrapped the display object creation so that it can be called repeatedly.
setFocus accepts a specific touch ID to differentiate between user screen contacts.
When ending the touch, setFocus accepts nil to release the object’s touch input.
With the code above, we should be able to create five large circles, each of which can be moved independently. Note that, as before, due to setting hasFocus, and with setFocus now accepting a specific touch ID, the display objects will ignore touches which start elsewhere, and they will not lose a touch once it begins.
The Multitouch Problem
Remember that the strength of the code above is that it can distinguish between multiple touches easily. This is because objects will not lose their touch once they acquire it. This is both a huge bonus and a bit of a problem.
The bonus is that setFocus allows us to say, “Send every move this user’s touch makes to my object’s event listener and nowhere else.”
The slight problem is that setFocus also stops our display object from receiving any other touch events.
If we have not yet called setFocus, checking hasFocus conveniently allows our object to ignore touches which don’t begin on it. This is useful because users often make an accidental swiping gesture on the background or an inactive part of the screen and swipe across our object, and we want it to ignore touches which don’t begin on it. So, the question is: how do we convince Corona to let our objects receive multiple touches, when the very functions which give us this ease of use prevent exactly that? The answer is to create a tracking object in the began phase.
With a small change to the code above, we can create a single object which spawns multiple objects in its began phase. These objects will then track each touch individually. We will also change the code further to remove the tracking object when the touch ends. The complete code will have one function to listen for the touch event began phase and another to listen for moved, ended and cancelled phases. These two functions will be added to the target listening object and the tracking dot objects, respectively.
Spawning Tracking Dots
First, we need to create an object which will handle the began phase as before, but this time it will call a function to create a tracking dot.
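As a sketch (the `rect` and `newTrackDot` names are illustrative; the tracking-dot function itself is created in the next step):

```lua
-- A single "receiver" object which spawns a tracking dot per touch.
local rect = display.newRect(100, 100, 60, 60)
rect:setFillColor(0, 0, 1) -- blue

function rect:touch(e)
    if e.phase == "began" then
        -- Hand this touch off to a newly created tracking dot.
        newTrackDot(e)
        return true
    end
    return false -- only the began phase is handled here
end

rect:addEventListener("touch", rect)
```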
This is pretty straightforward. It just creates a display object which listens for the began phase of any unhandled touch events. When it receives a touch with a began phase, it calls the function which will create a new display object. This new object will be able to track the touch by directing the future touch phases to itself (instead of “rect”) by calling setFocus. Note that we are not setting the hasFocus value because multitouch objects only need to handle the began phase.
Next, we need to create the tracking dot. This code is almost identical to the previous multitouch function.
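A sketch of the tracking-dot function (the `newTrackDot` name and dimensions are illustrative):

```lua
-- Creates a white circle which takes over the touch that began elsewhere.
function newTrackDot(e)
    local circle = display.newCircle(e.x, e.y, 30)

    function circle:touch(e)
        if e.phase == "began" then
            -- Steal the touch away from the object it began on.
            display.getCurrentStage():setFocus(self, e.id)
            self.markX, self.markY = self.x, self.y
        elseif e.phase == "moved" then
            self.x = self.markX + (e.x - e.xStart)
            self.y = self.markY + (e.y - e.yStart)
        else -- "ended" or "cancelled"
            display.getCurrentStage():setFocus(self, nil)
            display.remove(self) -- remove the dot when the touch ends
        end
        return true
    end

    circle:addEventListener("touch", circle)

    -- The circle never received the began phase itself, so fire it manually.
    circle:touch(e)

    return circle
end
```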
Note that the only two changes we’ve made to this function are:
We call circle:touch(e) because the circle has only been created and has not actually received the touch event’s began phase. Calling this allows the circle object to take control of the touch event away from the “rect” object and handle all future touch phases.
At the start of the :touch() function we also switch to using the circle as the target, because the e.target property actually refers to the “rect” object (where the touch began).
When this code is used with the code above we will see a small blue rectangle which can create multiple white circles. Each circle is moved by an independent touch. It is this mechanism which we can use to direct all of the touch information to our blue “rect” and pretend that it is receiving multitouch input.
Faking Multitouch Input
Our blue “rect” object is going to become the recipient of multiple touch inputs. To do this we need to first modify its touch listener function. At first we will simply add some print() statements for the moved, ended and cancelled phases. Here is the modified :touch() listener function for the small blue rectangle:
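A sketch of that modified listener, assuming an illustrative `newTrackDot` helper that spawns the tracking dots described above:

```lua
-- Sketch: the "rect" listener now prints the later phases it receives.
-- Those moved/ended/cancelled events will be dispatched to it artificially
-- by the tracking dots, since setFocus diverts the real touch events to them.
function rect:touch(e)
    if e.phase == "began" then
        print("began", e.id)
        newTrackDot(e) -- spawn a tracking dot to follow this touch
        return true
    elseif e.phase == "moved" then
        print("moved", e.id, e.x, e.y)
    elseif e.phase == "ended" or e.phase == "cancelled" then
        print(e.phase, e.id)
    end
    return false
end
```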