Tutorial: Implementing Pinch-Zoom-Rotate

Tutorial: Implementing Pinch-Zoom-Rotate

Today’s guest tutorial comes to you courtesy of Matt Webster, a.k.a. “HoraceBury.” Matt is a Development Manager working in central London. He has 15 years of experience building enterprise sites in .NET and Java, but he prefers Corona to build games and physics-based apps. As a Corona Labs Ambassador, Matt has organized two London meet-ups and looks forward to doing more in 2013. Matt’s first game in Corona was Tiltopolis, a hybrid twist on the classics Columns and Tetris.


First, please download the project files so you can follow along with the code samples. Each of the “sampleX” modules are functioning mini-projects and should be worked through one at a time within main.lua. Uncomment only one require statement at a time to follow the workings in the logic. The last module, sample11.lua, is the entire pinch-zoom-rotate module which you can incorporate into your own project. Enjoy!


Most applications (more than you’d expect) can perform perfectly fine with just one touch point. If you consider the large number of apps out there, you can see that many have a huge feature set but still get by with just a single point of input because they are designed around buttons or individual swipe actions, etc.

Take “Angry Birds,” for example. This game requires that every tap, drag, and swipe is performed by one finger. Navigating the menu, opening up settings, and firing the afore-mentioned birds with attitude is all done with one finger, and rightly so. It makes for a simple, intuitive and engrossing game. However, even this most basic interface requires one simple trick learned from iOS. This involves using two fingers to “pinch” zoom in and out of the parallax-scrolling action.

So, that’s simple, isn’t it? The rule is: when one finger is used, perform the action for the object being touched. When two fingers are used, perform a gentle scaling of the top-level parent display group.

This tutorial aims to show you how to handle these multitouch scenarios with as little hassle as possible. It will also try to provide some insight into the oft-requested pinch zoom.

Touch Basics

If you’re reading this tutorial, you probably already have some experience with the Corona touch model, so I will just highlight the core tenets.

  • addEventListener() is used to listen to a particular display object for user touches.
  • There are two types for touch events: touch and tap.
  • The touch event is comprised of phases: began, moved and ended.
  • Listening to one display object for both touch and tap events will fire the touch event phases before the tap event fires.
  • Returning true from an event function stops Corona from passing that event to any display objects beneath the object.
  • system.activate(“multitouch”) enables multitouch.
  • Once a touch event has begun, future touch phases are directed to the same listener by calling display.getCurrentStage():setFocus().
  • setFocus can only be called once per object per event (without cancellation).
  • Calling dispatchEvent() on display objects fires artificial events.
  • Events fired with dispatchEvent do not propagate down the display hierarchy.

The Tap Problem

As described above, touch events have a number of phases which literally describe the users interaction with the device: putting the finger on the screen, moving it around, and letting go.

When it is listened for, the normal tap event is fired if the above event phases occur within a given time span — iOS employs about 350 milliseconds — and with a distance of less than ~10 pixels between the began and ended locations.

This means that if you are listening for both touch and tap events you need to actually detect a tap within your touch listener function to know that your tap listener function is going to be called. So, if you’re already detecting taps you may as well not attach a tap listener at all. For the purposes of this tutorial that’s exactly what we’ll do: we will leave out tap events because they simply complicate our code.

Single Touch

To demonstrate the typical touch event, let’s create a display object with a standard touch listener and use it to move the display object around.


[gist id=”4589007″]

The above function handles touch events when multitouch is not activated. This isn’t the “simplest” touch listener, but it’s practical and safe. It’s also not the most complex that we could build, but any other work it can perform should be done by functions it can call. It caters for the following situations:

  • The touch starts on the object.
  • The touch is used to move the object.
  • Touches which start “off” the object are ignored.
  • Handled touches do not get passed to other display objects.
  • Ignored touches get propagated to other display objects.
  • The display object has its own :touch(e) function, not a global function.

Note that the object will ignore touches which start elsewhere. This is because setting hasFocus indicates that the object should accept touch phases after began. Also, it will not lose the touch once it acquires it because setFocus tells Corona to direct all further input to this object.

Multiple Touches

Fortunately, converting this function to be used by multiple display objects is not difficult. The catch with setFocus is that each display object can only listen for one touch because all other touch events are ignored on that object after it begins handling a touch.

To demonstrate multitouch we will convert the above code to create multiple objects which will handle one touch each.


[gist id=”4589149″]

Note the key differences in this code:

  • We have activated multitouch.
  • We have wrapped the display object creation so that it can be called repeatedly.
  • setFocus accepts a specific touch ID to differentiate between user screen contacts.
  • When ending the touch, setFocus accepts nil to release the object’s touch input.

With the code above, we should be able to create 5 large circles, each of which can be moved independently. Note that, as before, due to setting hasFocus, and with setFocus now accepting a specific touch ID, the display objects will ignore touches which start elsewhere and they will not lose a touch once it begins.

The Multitouch Problem

Remember that the strength of the code above is that it can distinguish between multiple touches easily. This is because objects will not lose their touch once they acquire it. This is both a huge bonus and a bit of a problem.

  • The bonus is that setFocus allows us to say, “Send every move this user’s touch makes to my object’s event listener and nowhere else.”
  • The slight problem is that setFocus also stops our display object from receiving any other touch events.

If we have not yet called setFocus, using hasFocus conveniently allows our object to ignore touches which don’t begin there. This is useful because users often make a swiping gesture (by accident) on the background or inactive part of the screen and swipe across our object. We want it to ignore touches which don’t begin on it. So, the question is “how do we convince Corona to let our objects receive multiple touches?” when the functions which give us this great ease-of-use stop exactly that? The answer is to create a tracking object in the began phase.

The Concept

With a small change to the code above, we can create a single object which spawns multiple objects in its began phase. These objects will then track each touch individually. We will also change the code further to remove the tracking object when the touch ends. The complete code will have one function to listen for the touch event began phase and another to listen for moved, ended and cancelled phases. These two functions will be added to the target listening object and the tracking dot objects, respectively.

Spawning Tracking Dots

First, we need to create an object which will handle the began phase as before, but this time it will call a function to create a tracking dot.


[gist id=”4589411″]

This is pretty straightforward. It just creates a display object which listens for the began phase of any unhandled touch events. When it receives a touch with a began phase, it calls the function which will create a new display object. This new object will be able to track the touch by directing the future touch phases to itself (instead of “rect”) by calling setFocus. Note that we are not setting the hasFocus value because multitouch objects only need to handle the began phase.

Next, we need to create the tracking dot. This code is almost identical to the previous multitouch function.

[gist id=”4589441″]

Note that the only two changes we’ve made to this function are:

  • We call circle:touch(e) because the circle has only been created and has not actually received the touch event’s began phase. Calling this allows the circle object to take control of the touch event away from the “rect” object and handle all future touch phases.
  • At the start of the :touch() function we also change to using the circle as the target because the e.target property is actually the “rect” object (where the touch began).

When this code is used with the code above we will see a small blue rectangle which can create multiple white circles. Each circle is moved by an independent touch. It is this mechanism which we can use to direct all of the touch information to our blue “rect” and pretend that it is receiving multitouch input.

Faking Multitouch Input

Our blue “rect” object is going to become the recipient of multiple touch inputs. To do this we need to first modify its touch listener function. At first we will simply add some print() statements for the moved, ended and cancelled phases. Here is the modified :touch() listener function for the small blue rectangle:


[gist id=”4589514″]

The major change here is the addition of the moved, ended and cancelled phases. Doing this allows the tracking dots to call the :touch() function of the blue rectangle, passing in the event parameter received by the white circle’s touch function.

The elseif statement is also important here — if the tracking dots pass the event parameter to the rectangle, the e.target will be a reference to the dot, not the rectangle. We will store the reference to the rectangle in the .parent property. This way, the rect:touch() function can determine if it is the rightful recipient of the touch event. Of course, we haven’t changed the circle’s touch function to call the rectangle’s :touch() yet. Before we do that, we need to make sure that each circle keeps a reference to the rectangle object so that it can call the rect:touch() function and pass it the event parameter.

Here is the start of the newTrackDot() function, which needs to make a local copy of the original .target property of the event parameter.

[gist id=”4589568″]

Keeping a reference to the object which received the original began event phase allows our tracking dots to send the multitouch events back to it. Now, we don’t need our tracking dots to send the began phase event parameter to the “rect” because it has already received that event. What we do need is to call rect:touch(e) in the :touch() function of the tracking dot so that the other phases get sent to our “rect” object.

[gist id=”4590059″]

Pretty simple. We now have a rectangle which creates a tracking dot for each touch it detects. Each of those dots also send their touch information back to the rectangle, using its original touch handler function. The rectangle will also know that it is the proper target.

The trick now is to make use of this multitouch information!

Employing Multitouch

We now have an object which can detect the start of multiple touch points. It spawns tracking dots for each point and receives touch events.

To make some basic use of this multitouch information we will position the “rect” display object at the centre of the touch points. This can all happen within the :touch() function of the rectangle object. To position the “rect” object at the centre of our multiple touch points we first need to find the average x and y of all the touch points. We’ll use a separate function for that.


[gist id=”4590295″]

In order to call this function, “rect” needs to keep a list of the tracking dots it creates. We will add this list to the rectangle as a property, directly after we create it.

[gist id=”4590320″]

Now we’ll get the average centre of those dots and update the x and y position of “rect”:

[gist id=”4590358″]

Run this code and you’ll see a small blue rectangle. Touch the rectangle and it produces a white circle. Moving this first circle will cause the blue rectangle to follow it precisely. Release the touch and create another white circle and you’ll see that the blue rectangle now stays at the midpoint between the two white circles. Create yet another and it will stay between the three circles, and so on.

Debugging and Devices

We now have a good Simulator debugger for multitouch capable display objects. You’ll notice, however, that when you release your touch from one of the tracking dots, the dot does not disappear. This is really great for debugging with the Simulator because you can pretend to have multiple touch points. This is not so great on the device because you’re filling up the screen with white circles.

To fix this, if it’s running on a physical device, the rect:touch() function needs to remove the tracking dots in the ended phase. First, however, we need to store a variable at the start of our code which indicates whether we are running on a device.

-- which environment are we running on?
local isDevice = (system.getInfo("environment") == "device")

The isDevice variable will be true if the code is running on a real, physical device and it can be used to automatically remove the tracking dot when the user lifts their finger.


[gist id=”4590468″]

Notice that or e.numTaps == 2 is used. This allows the tracking dot to have a tap listener which also calls the rect:touch() function so that in the Simulator we can use a double tap to remove the tracking dot.

The tap listener should only listen for taps if the code is running in the Simulator, so we’ll use the isDevice variable again. The tap listener is added inside the newTrackDot() function which creates tracking dots.

[gist id=”4590508″]

Note that we also:

  • Check for two taps, so that only a double tap will remove a tracking dot.
  • Set the .parent property, just as we do in the touch function.
  • Only attach the tap listener if the code is running on the Simulator.

Making it Useful

The code so far is useful but doesn’t do very much. We can move a small, blue rectangle around with more than one finger. The beauty of multitouch input devices is that the real world has an impact on the virtual. If all we want to do is move an image or collection of display objects around we can add this code to those objects and have them respond to the user’s touch. If we want it to be a bit more realistic, we should add some rotation and scaling.

Relative Motion

Before we do that, however, take a look at how the rectangle moves when you use one finger. It centres itself directly under the touch point. To be more believable, it should really move relative to the motion of the touch point. Unfortunately, this is not as simple a change as it would appear, because we need to cater for removing a touch point. We now need to move some code into the moved and ended phases.

To illustrate the complete change and to lay out the full rect:touch(e): code — it has changed a lot, after all — here’s the whole function:


[gist id=”4590589″]

The fairly significant change here is to:

  • Calculate the centre of all touches and store it for reference in the began phase.
  • Add the difference between the previous and current touch centres to the rect.x and rect.y in the moved phase.
  • Update the stored touches centre in the ended phase so that removing a finger does not throw off the next moved phase.

The user can now place any number of fingers on the rectangle, even change them, and move it around as if shifting a photo on a table. Of course, what it doesn’t do (yet) is rotate with their touch.


With multitouch control of a display object, each transformation we want to apply to it requires taking the average of all the tracking dots and applying that to the image at the midpoint (the average location) of the display object.

For scaling, this means that the mathematical process is:

  • Sum the distances between the midpoint and the tracking dots.
  • Get the average distance by dividing the sum distance by the number of dots.
  • Get the same average distance for the previous location of the tracking dots.
  • Take the difference between the previous and the current average distance.
  • Apply the difference as a multiplication to the display object’s .xScale and .yScale.

This is only slightly more advanced compared to how we applied the average transition of the display object when moving multiple tracking dots. To help us get these scaling values we’ll need some basic library functions. The following function calculates the distance between two points on the screen. This is a very typical trigonometry function and widely used.


[gist id=”4590651″]

To get the midpoint of the tracking dots we’ll use the calcAvgCentre() function above. To get and store the average distance between the midpoint and the tracking dots we’ll use these functions. The first of these gets the current distance for each dot, stores it in the tracking dot and also saves the previously known distance. The second function calculates the difference between the previous and current set of distances.

[gist id=”4590666″]

Using these functions is simple. For the began and ended phases of the rect:touch() we just call them and they update our tracking dots with the appropriate values. Here is the additional update call for the began and ended phases:

[gist id=”4590689″]

The moved phase is a little more complex because this is where the real work is done. Fortunately, all we need to do here is update the tracking dots again and only apply the scaling if there is more than one tracking dot.

[gist id=”4590703″]

Above, we’ve made the following changes to the moved phase:

  • Declared variables to work with the forthcoming transformation values.
  • Called updateTracking to refresh the stored distance values of the tracking dots.
  • Used those distance values to calculate the average change in tracking scaling.
  • Applied that scaling to the display object “rect”.

The display object now translates (moves) and scales (zooms) along with our tracking dots (touch points).


To rotate our display object, the basic logic follows that we work out how much each tracking dot has rotated around the midpoint (of all the tracking dots), get the average, and add the difference between that and the previous amount to our object’s .rotation value. This requires adding some more general math functions to our code.

Because of an oddity in angle calculations, we will also need a function which can determine the smallest angle between two points on the perimeter of a circle. This is important because when we’re using the angle for which a tracking dot has rotated, and we may accidentally end up with an angle that represents the larger angle, say, between 10 degrees and 260 degrees. What we need is the angle 90 degrees, not 260 degrees.


[gist id=”4590736″]

As in the calcAvgScaling function, we’ll make use of the above function in the calcAvgRotation function to determine the average amount that all of the tracking dots have rotated around the midpoint. We also want to update the difference between the tracking dot angles — and their previous angles at the same time. Fortunately, we’re already doing this for tracking dot distances from the midpoint, so we can add code for that as well.

[gist id=”4590837″]

Now, due to this small addition of code, the rect:touch() function is already updating the appropriate values in the began and ended phases. All we have to do is apply rotation to the “rect” display object in the moved phase. Of course, we only need to do this if there is more than one tracking dot. So, we simply call the functions described earlier to calculate the average amount of rotation around the tracking dots’ midpoint and apply it to the display object.

[gist id=”4590859″]

Pinch Centre Translation

Run the code now and you’ll notice that while the display object rotates, scales and moves with the tracking dots, it doesn’t quite shift with the tracking dots. This is because, unless the user is very lucky (or not paying attention), they will never quite get the midpoint to be the very centre of the display object being manipulated.

To solve this, we can’t just heed the basic translation, scaling, and rotation to the display object — we also need to apply it to the centre point location of the display object. This means that:

  • Scaling should be applied to the distance between the midpoint and the “rect” centre.
  • Rotation should be applied to the “rect” centre, rotated around the tracking dot midpoint.
  • But, fortunately, we’re already applying translation, so that can be ignored.

Ok, so what standard library maths functions do we need? Well, we want to rotate a point around another point, so we need the following math helpers. Additionally, the moved phase also needs some new additions.


[gist id=”4591987″]

The moved phase is now doing a number of things, whether there’s one tracking dot or many:

  • pt is declared to use as a working space for the display object’s position.
  • The midpoint translation is applied to the working object.
  • The distance between the midpoint and the display object centre is scaled.
  • The centre of the display object is rotated around the midpoint.

Run the code now and no matter where you place your fingers, real or virtual (in the Simulator), as long as the touch (tracking dot) is started on the display object it will pinch-zoom with the touch points.

The effect is most obvious when using two fingers because the tracking points stay precisely relative to their starting location on the display object, but more can be used and the result is the same, just a little more averaged across the touch points.

And Finally…

Everything so far has relied on a single display object being manipulated. When does that happen in the real world? Realistically, a program will need a group of objects to be pinch-zoomed. More importantly, what use is a complex function if it can’t be re-used?

To re-use the :touch() function so that it can be attached to any display object — image or group — simply change the references it uses. To show that, let’s create a display group with a number of objects contained, and attach a touch listener and function to that group!


[gist id=”4592012″]

In Summary

And there we have it: a touch listener module which can be applied to any display object or group to implement multitouch pinch-zoom-rotation.

If you didn’t already download the project from the link at top, you can get it here.

As usual, please participate in the conversation by posting your questions and comments below.

Brent Sorrentino
  • David Rangel
    Posted at 13:30h, 22 January

    Huge thanks to Matt for writing this fantastic tutorial.

  • Chris Leyton
    Posted at 01:44h, 23 January

    Excellent tutorial – plenty of substance and thoughtfully explained. More like this please.

  • Thomas Vanden Abeele
    Posted at 06:01h, 23 January

    Thank you for this extensive and very well laid out tutorial. It’s been WAY too long (since Jon left, I think) since the last time we had a tutorial of substance in my opinion, so this is nice to see and read.

    Brent, if I could make a request: what a lot of people in the community really need is a definitive guide on object oriented programming in Lua, because no matter how hard you search online, documentation on this is either sorely lacking, or thoroughly confusing “because there are many ways to do OOP in Lua”. If there are many ways, then please give us a full and complete description on how to do them all!

    I can practically guarantee you that a lot of Lua programmers will Google their way to Corona if you would have this content on offer with the same in-depth style as this tutorial.


    • Brent
      Posted at 22:50h, 23 January

      Hi Thomas,
      Thanks for the feedback. Rob and I try to mix it up when thinking up subject matter for the weekly tutorials, in order to appeal to the overall community. We generally focus on the most-requested features or “tricks” that aren’t necessarily obvious from just reading through documentation.

      That being said, I fully agree that a “proper” OOP tutorial would be useful… actually, well beyond useful! But as you’ve seen, nobody can seem to agree on what “proper” means when it comes to an OOP-type methodology in Lua.

      You’ve prompted me to look harder though… to “solve” this issue in a cohesive, well-documented manner would be awesome. 😉

  • Krystian
    Posted at 04:25h, 24 January

    That is one hell of a tutorial!

  • russell
    Posted at 09:46h, 24 January

    This will be excellent tool for point an click adventures or hidden object type games. I suppose you will have to use a higher resolution background pic for the close up shots.

  • John Nagle
    Posted at 11:16h, 26 January

    One problem I have with using this is that in my app, I want to be able to apply this to the group but also to items within the group, such that if you scale the group all items scale. Then, if you touch an item in the group, it should scale as well.

    In the current implementation, the scaled coordinates for items inside the group cause the items to translate incorrectly.


  • Matt Webster
    Posted at 12:56h, 29 January

    Hi John, you’re right and this is something I want to address in a follow up post. I can’t say when I’ll get to write this however, so I’ll see if I can help right here.
    Essentially, the problem is that the position of the objects (possibly including the tracking dots, if they’re added to a group) are all relative to the (0,0) of the display group, not the world coordinate, but the touch coordinates are relative to the screen (0,0).
    To get round this, you need to convert the world coordinates into group-relative coordinates. In the touch listener use object:contentToLocal( e.x,e.y ) – this will take the touch event x and y and convert it to an x and y value relative to the display group your tracking dots or pinch-zoomed display object are contained within.
    Good luck; Its certainly an interesting idea to have items which can be manipulated individually and also as a group. This is also one of the main problems with expanding the tutorial into a “How to Use” tutorial – its just too long to incorporate all the different scenarios and problems. I’ll do my best though!

  • Matthew Webster
    Posted at 09:24h, 30 January

    Just thought I should post a link showing how it should look once completed. This is also what the final code sample provides if you run it on its own:


    • antonio
      Posted at 12:59h, 14 May

      Hi Matt,
      thank you very much for the tutorial, it’s great.
      just 1 question, in sample 10, how I can do to add a second rect and assign the same event??

  • Arivan Bastos
    Posted at 12:57h, 06 February

    Very nice tutorial and module.

    Just a question: how to use this module with scrollView widget? I want be able to drag the scrollContent (as usual) and use the multitouch pinch-zoom-rotation feature with each image inside the scroll container.

    I’m trying hard, but I can’t figure out how combine the “scrollWidget:takeFocus()” method and your approach.

  • Matthew Webster
    Posted at 10:11h, 07 February

    Hmm, good one. I’m not sure that you want to be associating the functionality between them…

    If you let the (any) scroll (view, widget, etc) take the focus, you will lose the ability to control the handling of the following multiple touches. This implicitly stops your multitouch effect from ever starting.

    Well, while writing this, I’ve taken a stab at modifying the final ‘sample11.lua’ file to add a simple, full-screen scrollView widget and add the multi-coloured object from the demo…

    If you open ‘sample11.lua’, go to line 214 and add the following code (note: I’ve left in the surrounding lines to show where to place the insert) you should see the multi-colour object in a light blue scrollView:

    — create display group to listen for new touches
    local group = display.newGroup()

    — insert this…
    local widget = require “widget”
    local scrollView = widget.newScrollView{
    width = display.contentWidth,
    height = display.contentHeight,
    scrollWidth = display.contentWidth,
    scrollHeight = display.contentHeight*2,
    — inserted code ends.

    scrollView:insert( group )

    — populate display group with objects
    local rect = display.newRect( group, 200, 200, 200, 100 )

    Now, I’ve just built this and tried it on my iPad and it works. There is one issue which I’ve mentioned in a previous post, on this page:

    As the scrollView moves (when you use one touch to scroll it) the relative position of the object being pinched effectively changes. Because of this, you would need to begin making use of the :contentToLocal() and :localToContent() functions. These would allow the touch points to take effect in the relative space of the scrolled display group (the scrollView, in this case) rather than the absolute world coordinates of the screen.

    I will be working on this and I hope to provide a follow-up tutorial on making use of pinch-zoom in a real world application, but for now I’ll leave this as an exercise for the reader.

    Please leave your comments and questions and I’ll do my best to help out.

  • Matthew Webster
    Posted at 10:17h, 07 February

    The code got a little distorted, so I’ve moved it to a forum thread:


  • Nathan
    Posted at 17:46h, 17 February

    Great tutorial Brent – this helped me a lot.

    One quick question – you talked about implementing “tap” inside the touch function, but then excluded to reduce complexity of the example. I have both but they are getting a little confused (sometimes both get triggered – resulting in the item moving a little when tapped).

    How can I get my touch function to either detect the simultaneous tap (and not do the touch work), or to handle the tap itself?

    I do need to catch the tap either way, as I don’t want it passed through to the item behind the one that is the current focus.


  • Matt
    Posted at 07:13h, 18 February

    I would recommend that you don’t attach a tap listener and that you simply handle the touch as a tap.

    The process is basically to check in the ended phases whether the touch has moved beyond a certain distance (“threshold”) and if it is released within a certain time (“window”).

    I recommend a threshold of about 10px and a window of 320 milliseconds.

    I did not handle this scenario in this tutorial because it is a bit beyond the requirements for a pinch-zoom handler. I do intend to address it when I write a more general multi-touch tutorial, but there are a lot of complex scenarios to cover.

    • Nathan
      Posted at 18:21h, 18 February

      Thanks Matt – I’ll try that, but I’m assuming you meant “handle the tap as a touch”, not “handle the touch as a tap” as written?

      I’ll also have to make sure there is only one finger “tapping” too.

      (Sorry for calling you Brent too!)

      • Matthew Webster
        Posted at 08:16h, 09 May

        I’ve been intending to write a more involved tap/touch/swipe tutorial for some time, but I need to improve my own knowledge before getting that far and this year simply hasn’t provided me with the time. Keep an eye out for it, but don’t hold your breath, as they say…

  • Kerem
    Posted at 22:03h, 07 May

    Matt, this is terrific! Thank you so very much for your time and effort.

    I am working on an festival app and need to display a map (png file) of festival grounds. The app uses StoryBoard scenes and the map is to go into a scene. Rotation is not so much of a necessity but zoom/pinch and move is. Your sample gets me real close to what I need to do.

    I am able to adapt the Sample 10 to show my map and move it / zoom etc but I am a little lost in transitioning this code to live within my StoryBoard scene and coexist with the other elements. There is a tab bar taking some space at the bottom so the image display area for the map should honor that too…

    Have ever applied this code to a StoryBoard? I would really appreciate it if you have another sample like that. Thank you very very much!

    • Matt
      Posted at 08:19h, 09 May

      If I understand you correctly, you want to be able to use the sample as provided, but within a display group parent which may have moved from 0,0 on the screen. This is possible but would need the input coordinates to be adjusted. As mentioned above, I’ll be working on this challenge and did make some headway a few weeks ago. I will need to revisit it, but (as also mentioned) this year is not providing me with much personal time. Keep your eyes peeled!

  • Kerem
    Posted at 09:20h, 09 May

    Hi Matt,

    Thanks for your kind response. I was able to get Sample 10 adapted for my needs. Basically I have a StoryBoard driven app where one scene is a png based map image. I switch scenes using a tabbar at the bottom so the map snaps in right above the tabbar.

    I pretty much crammed all of Sample 10 into the create scene event and call a purge scene on didExit event. It is not the most ideal spread but it works!

    I do want to get to using the group method you outline in the sample 11 though. This will allow me to overlay other images such as arrows pointing out to something the user might choose in an earlier scene. I know I can do this using your multiple objects moving and zooming together approach in sample 11.

    My question, if you have a moment at some point, is how you would take your sample 11 and spread its contents into a StoryBoard Scene. I can most likely whack it all into create scene and it’ll probably work but if you were doing it how would you do it. No rush.

    Thanks much for all your help. This is fantastic.

    • Matthew Webster
      Posted at 00:51h, 10 May

      To be honest, I’ve not really tried my code with storyboard, but I did put it into a scroll view, as you’ve seen.

      I don’t see why the storyboard approach would be problematic, as it doesn’t affect how the display objects are moved around.

      I think that you are battling with how to handle the touch listeners and various display objects used in the code sample though.

      When putting code into a storyboard function which you need to keep a reference to it is best to start thinking about your code structure up front. Do you want to have references outside the scene object, in the module, or do you want to create everything in the create function and run the whole scene as if it never leaves that function.

      One approach I’ve used in a recent (unfinished) game is to put almost everything into the create scene function and even attach the listeners within it. All the other functions are internal to the create function as well. This means that writing code within the other functions, such as enterScene, have access to the variables created in the createScene function. The one listener which can’t be attached within that function is the createScene listener, of course. And it is very important to clean out the scene when it exits.

      I would post the code here, but it’s long and this comment box won’t render it properly. If you like, I can post it in the forum, but be prepared to adjust your view on storyboard code somewhat.

  • Kerem
    Posted at 12:14h, 10 May


    Thank you very much for your continued input, support and willingness to share. I did exactly what you mention above. Placed almost everything in Sample 10 in createScene and it worked well. You have to remember createScene gets called once per run so it might be a problem but in my case I wanted to purge the scene since the map is so large so this is not a problem.

    I would love to see the code sample so if you could post it in the forum that would be terrific. You had a forum discussion spawned from this tutorial but it appears that it is on the old and now shutdown forum. It would be lovely to start a similar one in the new forum.

    Alternatively you can post it in response to my forum post titled : Image move, zoom, pinch & overlay question. I was looking for exactly what you teach here so you can imagine my delight in coming across your tutorial. Thank you once again.


  • Alan
    Posted at 05:18h, 10 July

    Hi Matt, great tutorial, but I have one question.

    I have used your code to rotate/zoom etc, but I want to limit the amount that the user can zoom in/out.

    If I do something like this:

    if (#rect.dots > 1) then
    — calculate the average rotation of the tracking dots
    rotate = calcAverageRotation( rect.dots )

    — calculate the average scaling of the tracking dots
    scale = calcAverageScaling( rect.dots )

    — apply rotation to rect
    rect.rotation = rect.rotation + rotate

    — apply scaling to rect
    rect.xScale, rect.yScale = rect.xScale * scale, rect.yScale * scale

    –add some zoom restrictions
    if rect.xScale > 2 then
    rect.xScale, rect.yScale = 2, 2
    elseif rect.xScale < 0.3 then
    rect.xScale, rect.yScale = 0.3, 0.3

    My zooming is restricted the way I want, but if I keep on pinching/stretching, the object will continue to move as if it was zooming (although it won't be scaling).
    It's kind of a hard one to explain, if someone could copy my "–add some zoom restrictions" part in, they will see what I mean.

    Is there a nice easy way to fix this?

  • Matthew Webster
    Posted at 23:35h, 10 July

    In the section “Pinch Centre Translation” you’ll also need to modify the code which scales around the pinch centre. This is because it is impossible to place your fingers equal distances from the centre of the display object, so we also scale the distance between the centre of the tracking points and the display object. Apply the same limits to this scaling operation as you have to the scaling of the display object and you’ll be done.

    • Zayed
      Posted at 11:22h, 05 September

      I was in the exact same situation as Alan, used his code and did as Mathew Webster suggested. The following shows a visual:

      — add some zoom restrictions
      if object.xScale > 1 then
      object.xScale, object.yScale = 1, 1

      elseif object.xScale 1 then
      object.xScale, object.yScale = 1, 1

      elseif object.xScale < 0.42 then
      object.xScale, object.yScale = 0.42, 0.42


      Now my issue is trying to prevent the user to allow the area that is not within the rectangle group appear on the screen. I want to have a wall around the group so that the maximum you could zoom out by pinching is the screen shape itself revealing only what exists within the group.

  • poolem
    Posted at 10:49h, 25 September

    This was very helpful in implementing a solution that would allow the user to resize the display group due to device screen limitations.

    I used this reference:
    to turn this into “multitouch.lua” library.

    I then added a ‘rotate’ property to the display group so that this feature can be disabled if desired.

    To implement this in main.lua:

    local touchGroup = require(“multitouch”) — this library
    touchGroup.rotate = false — new property to enable/diable rotation

    local screen = display.newImage( “assets/mycdss-bg.png”, 0, 0, display.contentWidth, display.contentHeight ) — something to place on the device screen that should be multitouchable

    touchGroup:insert( screen)

    Here is the modified lua file that became my library:


    — one more thing

    — turn on multitouch

    — which environment are we running on?
    local isDevice = (system.getInfo(“environment”) == “device”)

    — returns the distance between points a and b
    function lengthOf( a, b )
    local width, height = b.x-a.x, b.y-a.y
    return (width*width + height*height)^0.5

    — returns the degrees between (0,0) and pt
    — note: 0 degrees is ‘east’
    function angleOfPoint( pt )
    local x, y = pt.x, pt.y
    local radian = math.atan2(y,x)
    local angle = radian*180/math.pi
    if angle 180) then
    a = a – 360
    elseif (a 1) then
    — calculate the average rotation of the tracking dots
    if (rect.rotate) then
    rotate = calcAverageRotation( rect.dots )
    — calculate the average scaling of the tracking dots
    scale = calcAverageScaling( rect.dots )

    — apply rotation to rect
    rect.rotation = rect.rotation + rotate

    — apply scaling to rect
    rect.xScale, rect.yScale = rect.xScale * scale, rect.yScale * scale

    — declare working point for the rect location
    local pt = {}

    — translation relative to centre point move
    pt.x = rect.x + (centre.x – rect.prevCentre.x)
    pt.y = rect.y + (centre.y – rect.prevCentre.y)

    — scale around the average centre of the pinch
    — (centre of the tracking dots, not the rect centre)
    pt.x = centre.x + ((pt.x – centre.x) * scale)
    pt.y = centre.y + ((pt.y – centre.y) * scale)

    — rotate the rect centre around the pinch centre
    — (same rotation as the rect is rotated!)
    pt = rotateAboutPoint( pt, centre, rotate, false )

    — apply pinch translation, scaling and rotation to the rect centre
    rect.x, rect.y = pt.x, pt.y

    — store the centre of all touch points
    rect.prevCentre = centre
    else — “ended” and “cancelled” phases
    print( e.phase, e.x, e.y )

    — remove the tracking dot from the list
    if (isDevice or e.numTaps == 2) then
    — get index of dot to be removed
    local index = table.indexOf( rect.dots, e.target )

    — remove dot from list
    table.remove( rect.dots, index )

    — remove tracking dot from the screen

    — store the new centre of all touch points
    rect.prevCentre = calcAvgCentre( rect.dots )

    — refresh tracking dot scale and rotation values
    updateTracking( rect.prevCentre, rect.dots )
    return true

    — if the target is not responsible for this touch event return false
    return false

    — attach pinch zoom touch listener
    touchGroup.touch = touch
    touchGroup.rotate = true
    — listen for touches starting on the touch object

    return touchGroup

    -Thank you for providing this example!

  • poolem
    Posted at 10:55h, 25 September

    by previous post sponged the example code.
    Not worth posting – please discard my previous post.

    Thank you,

  • Eric
    Posted at 10:23h, 13 November

    If I wanted to have the whole screen touchable rather than just the display objects, what would be the most effective way to do this?

    • Matt
      Posted at 00:25h, 14 November

      There’s really only two ways to handle that. Either attach listeners to every display object or have one big object sitting in front if everything, with a listener. That object would also need to be invisible and has isHitTestable=true

      That’s really the solution whether you’re talking about regular touch listeners or the pinch zoom code above.

      • Tomaz
        Posted at 07:47h, 27 November


        I’m also very grateful for this tutorial. Just one question. As soon as I apply this code in my game, all other eventListeners stop wokring, they become unresponsive. Only your eventListener is working. How can I change this? Thank you for your help, Matt.


        • Matt
          Posted at 00:05h, 03 December

          Without seeing code, I can’t really tell. Have you created a forum thread for this?

          I would guess that you are either using single touch mode or inadvertently creating a touch layer on top of all the other layers, which is blocking touches from going through. If that is the case, each touch would potentially be creating a new tracking dot, rather than going to the target object.


  • David Gross
    Posted at 01:17h, 29 July

    This is great… but if I turn off the double-tap in the circle touch handler, then the second touch of the double-tap doesn’t capture the target anymore.

  • Dewey
    Posted at 22:53h, 29 July

    How would one apply this technique to scaling and positioning a photo image WITHIN it’s existing mask??

    I’m guessing that each time you move the image, you’d need to move the mask by the same amt negated (x -1)
    And same with the scale…each time you scale the object one direction, you need to scale it’s mask in the opposite to preserve it’s original dimensions.

    Am I thinking of this the right way?

    Or perhaps you find some way to keep the image and mask separate and only apply this technique to the image itself?

  • Jonathan
    Posted at 18:55h, 14 March


    This is just a fabulous tutorial. You’ve taken a complicated interaction and explained it both conceptually and practically in a perfect series of progressive samples. I love the way you can turn the samples on and off as require files, that’s very elegant!

    Having purposely avoided multitouch until now, I had built my own clunky single-touch, zoom-rotation interface. However now, within a day (and bunch of contentToLocal and localToContent calls later ) I’ve improved my App’s UI immensely, making it both more intuitive and visually appealing for the user. THANKS!

    – Jonathan