
Note: Graphics 2.0 is now in public beta for all Corona SDK Pro and Enterprise subscribers. Please see Walter’s recent blog piece on the news.


Today, I want to show you an experiment that demonstrates how our new engine will unify graphics in an unprecedented way. At Adobe, I always felt the walls between Illustrator, Photoshop, and After Effects didn’t need to exist.

Well, now I can show you a glimpse of the amazing effects that will be possible in Graphics 2.0.

In the following video, you’ll see an experiment in which we hook the camera up to our rendering pipeline. The idea is that the frames captured by the camera become the texture source for a polygon. In this case, I used a circle, but in our engine it could be any vector shape.

We drop the circles from the sky and let Box2D handle the rest. I also attached a touch joint to one of the objects, so in the middle of the video you can see me swinging a circle around, knocking away the other objects.

The code for hooking up a camera to texture a shape is trivial. We’re piggy-backing on the ‘fill’ property that I talked about previously:

local radius = 20 + math.random( 40 )  -- pick a random size for each circle
local crate = display.newCircle( 60 + math.random( 160 ), -50, radius )
crate.fill = { type="camera" }
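
For the curious, here’s a minimal sketch of the full drop-and-swing setup described above. It assumes the standard Corona physics APIs (physics.addBody and the “touch” joint); the camera fill is the experimental syntax shown above, and the spawn rate, sizes, and bounce are just illustrative:

local physics = require( "physics" )
physics.start()

local function spawnCircle()
    local radius = 20 + math.random( 40 )
    local circle = display.newCircle( 60 + math.random( 160 ), -50, radius )
    circle.fill = { type="camera" }  -- live camera feed as the texture
    physics.addBody( circle, "dynamic", { radius=radius, bounce=0.4 } )

    -- Touch joint: drag a circle around to knock the others away
    local joint
    circle:addEventListener( "touch", function( event )
        if event.phase == "began" then
            display.getCurrentStage():setFocus( event.target )
            joint = physics.newJoint( "touch", circle, event.x, event.y )
        elseif event.phase == "moved" and joint then
            joint:setTarget( event.x, event.y )
        elseif event.phase == "ended" or event.phase == "cancelled" then
            display.getCurrentStage():setFocus( nil )
            if joint then joint:removeSelf(); joint = nil end
        end
        return true
    end )
end

-- Drop a new circle from the sky every half second
timer.performWithDelay( 500, spawnCircle, 0 )

The touch joint is what makes the swinging work: it pulls the physics body toward the finger rather than teleporting the display object, so collisions with the other circles stay physically plausible.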

I should emphasize that while other features I’ve shown you are very nearly alpha/beta-quality, this feature is still in the research phase. Hopefully, you can see the power of grand unification!



15 Responses to “Augmented Reality Experiment”

  1. Jason Schroeder

    Very cool! May I take this to mean that in Graphics 2.0 there is no longer a complete separation between native display objects and OpenGL display objects? Will this also apply to map views, web views, etc.? (Or you could just add me to the beta and I’ll figure it out for myself… *hint, hint*) ;-)

  2. Chris

    That’s awesome.
    I guess we can do it vice versa as well – using a background rect as a cameraOverlay and adding our own GUI, e.g. a “Snapshot” button and maybe a head silhouette.

    Very nice.
    Have been waiting for that!

    Now let’s get some calculateBearing functions and do some AR magic ;)
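
    A minimal sketch of that idea, assuming the experimental camera fill from the post plus the standard widget library and display.captureScreen:

    local widget = require( "widget" )

    -- Fullscreen rect textured with the live camera feed
    local backdrop = display.newRect( display.contentCenterX, display.contentCenterY,
        display.contentWidth, display.contentHeight )
    backdrop.fill = { type="camera" }

    -- Our own GUI on top, e.g. a "Snapshot" button
    local snapshotButton = widget.newButton{
        label = "Snapshot",
        x = display.contentCenterX,
        y = display.contentHeight - 50,
        onRelease = function()
            display.captureScreen( true )  -- save the composed frame to the photo album
        end,
    }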

    • David

      JF – what is showing in the circles is a live camera feed, so this is a combination of both the “real world” and computer-generated graphics – which is basically the definition of augmented reality. No?

      • Jason Adair

        Not really. Augmented reality would be pointing your camera at something and the app giving you info in a heads-up display, or drawing something into the camera scene that appears to fit in with reality. This is nothing more than using the camera as a live texture. I had the same thought, JF.

        • Walter

          Right, as I said it’s an experiment, and my focus was on the “drawing something into the camera scene” aspect, and having this be a live texture means you could draw something on top of the camera very easily. Obviously, there’s no analysis of the scene or computer vision happening here.

    • Walter

      I was using the term with respect to the UI part — it’s a live texture in the OpenGL layer, so you are free to draw display objects on top.

      Clearly, there’s nothing here with respect to the computer-vision portion of analyzing frames.

  3. Krystian

    This looks very promising…
    But to do proper augmented reality, we would need access to the camera feed itself to analyze the frames and react to changes.
    Will this be possible?

  4. Chris

    This looks great! A simple polygon in the background with fill=”camera” and then accelerometer-based graphics on top of it… very nice.
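
    For the accelerometer part, a minimal sketch; xGravity and yGravity are the standard accelerometer event fields, and the scale factor is arbitrary:

    -- Nudge a circle around in response to device tilt
    local bubble = display.newCircle( display.contentCenterX, display.contentCenterY, 30 )

    Runtime:addEventListener( "accelerometer", function( event )
        bubble.x = bubble.x + event.xGravity * 10
        bubble.y = bubble.y - event.yGravity * 10  -- screen y grows downward
    end )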

    Is access to the camera buffer available, so that pixel changes could be detected? That way you could “swat” the spheres.

