This post is going to be a little different from usual. What we present here is behind-the-scenes material from the making of the Corona SDK, but we hope the information goes beyond satisfying simple academic curiosity and will be directly useful in your own projects. The target audience also goes beyond our normal demographic. In addition to Corona developers, we are reaching out to all Xcode/iOS/Mac developers, all Android developers, all Lua developers, and anybody interested in automated testing and software reliability. Also, as a consequence of our solution, people interested in AppleScript, Scripting Bridge, and/or LuaCocoa may find things of interest. Because the topic is vast, not every single item is going to apply to you, but the intent is that there is something for everybody. The amount of content we need to cover is quite large, so this post is being broken up into multiple parts which will refer to one another.
Part 1: The Overview: A crazy walkthrough of Xcode, Scripting Bridge, LuaCocoa, lua-TestMore, adb, shell scripts, sockets, and lots of trial & error
For those not familiar with it, the Corona SDK is a cross-platform SDK that helps users write native applications for iOS and Android. Corona also provides Mac- and Windows-based simulators to promote more rapid development, so you don’t always have to build/install/run on a physical device.
If you think about it, this means we essentially have at least four different code bases (iOS, Android, Mac, and Windows) in order to provide native access to each platform. That creates a lot of opportunity for bugs to appear between platforms. Even on the same platform, we need to test different hardware configurations, e.g. an iPhone vs. an iPad, a Nexus S vs. a Galaxy Tab, etc. And every time we make a code change, we risk breaking something that used to work.
With the number of different devices and computers and operating system versions, the testing matrix is a nightmare. For Corona official stable releases, we do a lot of manual testing and have an internal matrix of tests we run on a whole bunch of different devices and operating systems. But this is a very slow and painful process. Our Daily Build process gives our subscribers first access to our latest build, but the Daily Builds don’t go through this rigorous manual testing process. Instead, our subscribers in a sense become our testers which helps improve the quality of our software. But this can only get us so far.
To improve the quality of our software, we have been investing time in building an automated testing suite. To start with, we weren’t trying to be overly clever or fancy. We just wanted something that works. But to our dismay, we found that even accomplishing just the basics was an ordeal. Something as simple as automatically installing an app and launching it without manual intervention was surprisingly complex. And to our surprise, very few developers actually have looked into automation on iOS and Android even though we all have the same testing matrix hell.
The purpose of this article is to share what we’ve learned, in hopes that other developers might improve upon it. Also, many of our problem points can only be addressed by Apple and Google, so getting a critical mass of people complaining about the same limitations is good for the community as a whole. This article covers mostly iOS and Android, though we also do automation on our Mac simulator. Windows is currently the odd man out, though we are obviously interested in incorporating it into our automated test suite as well.
Some of you might jump the gun and ask, ‘Why aren’t you using Apple’s UIAutomation framework?’ The short answer is that UIAutomation doesn’t solve how to get an app installed on a device in the first place.
1) Defining the type of tests
To start with, we are not trying to be overly ambitious. We just want to run automated tests that require no user interaction. So for now, we are ignoring things like touch-event testing, visual rendering layout and quality, and so forth. The tests we currently focus on are things like:
‘Did this API function return the correct value?’
‘Did this function calculate the correct result?’
‘Did we successfully send a file over the network with the correct checksum?’
‘Did this callback function actually fire?’
We would eventually like to do the other more advanced things like touch event testing and visual output verification, but that will be in future iterations.
2) How to write our tests
Corona developers write code in Lua. This seemed to be the perfect way to write our test code as well, since each test then runs like any real-world app that our users build. So the idea is that we build a separate app for each test program we write, then build/install/run that app on each device.
We do need additional functionality to:
1) Assert/Verify results (essentially an assert() function)
2) Report results
So we found a project called lua-TestMore. Test::More is a popular library for writing tests, particularly in Perl, and somebody wrote an implementation for Lua. Its test output format (TAP) is standard enough that there are tools available to help parse and interpret it. Using a standardized output format was appealing to us because we use another tool called Jenkins (formerly known as Hudson) to help automate our building and deployment process. Jenkins’s built-in unit test reporting reads xUnit XML reports, and none of the Lua unit test suites we found generated compatible XML, but we found a tap-to-junit-xml script that converts between the formats (http://github.com/jmason/tap-to-junit-xml). There has recently been a TAP plugin made for Jenkins, but we haven’t explored it yet.
lua-TestMore is written in Lua so we only need to include the Lua scripts in our Corona project. This is also potentially really convenient for our Corona developers because they can also use lua-TestMore for their own projects. We’ll cover lua-TestMore in a lot more detail in Part 3.
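To give a flavor before Part 3, here is roughly what a lua-TestMore test script looks like (once required, lua-TestMore exposes functions like plan, ok, and is). The shell heredoc wrapper mirrors how our build scripts drop a main.lua into each generated test project; the specific assertions are just illustrative.

```shell
# Write out a small example test script, the way a build script might place
# a main.lua into a test project. The Lua below uses lua-TestMore's standard
# functions; the individual tests here are only for illustration.
cat > main.lua <<'EOF'
require 'Test.More'

plan(3)                                      -- declare the number of tests

is(1 + 1, 2, "arithmetic works")
ok(type(print) == "function", "print is available")
is(string.upper("corona"), "CORONA", "string.upper behaves")
EOF

echo "wrote $(wc -l < main.lua | tr -d ' ') lines of test script"
```

Running this app would emit TAP output: a plan line (1..3) followed by one ok/not ok line per test.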
2b) Redirecting Test Output
lua-TestMore was written to report to stdout/stderr or a file. The problem with on-device testing (iOS and Android) is that we need to get the results back to our testing server for processing. Our solution was to modify lua-TestMore to also support writing to a socket. Right before we build our product, we inject additional information into our scripts specifying the IP address and port of a server waiting to receive the data, so when we run the test, the app knows where to send its output.
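The injection step can be sketched in shell. The file name testconfig.lua and the variable names here are hypothetical, not our actual build scripts; the idea is just to bake the collector's address into the project before building.

```shell
#!/bin/sh
# Hypothetical sketch: before building, write a small Lua config into the
# test project so the on-device app knows where to send its TAP output.
# TAP_HOST/TAP_PORT and testconfig.lua are invented names for illustration.
TAP_HOST="${TAP_HOST:-10.0.1.5}"   # machine running the collecting server
TAP_PORT="${TAP_PORT:-8181}"

cat > testconfig.lua <<EOF
-- generated by the build script; read by the modified lua-TestMore output code
return { host = "$TAP_HOST", port = $TAP_PORT }
EOF

cat testconfig.lua
```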
3) Building the apps
Corona is designed to build apps, so a lot of the engineering we need is already there. But, unfortunately, the Corona simulators were not designed with automation in mind, so using the simulator to build a large number of test programs is impractical. (I want to rewrite the Mac simulator backend to support AppleScript/Scripting Bridge, but that is down the road.)
But internally, we have Xcode projects and build scripts that allow us to build Corona apps without going through the simulator, so we use these. We just need to add additional shell script code around these tools to instruct our build tools to build the specified test projects we want and invoke any special modifications we need (such as the IP address and port for output redirection).
4) Installing the apps
This is the hard part on iOS. Apple provides no supported way to install apps from the command line. Furthermore, iTunes and the Xcode Organizer, which do let you install apps via drag-and-drop, lack the UI infrastructure to let you script an installation via AppleScript/Scripting Bridge. (Drag-and-drop is not an operation you can write AppleScript for.)
After struggling with this for a long time, we noticed that Xcode alone had the ability to build a project and launch it on an attached device. Xcode also has a scripting dictionary, so our solution was to use Scripting Bridge to instruct Xcode to build our project (the previous step) and also install and launch it (the next step).
Unfortunately, the Scripting Bridge dictionary was very fragile and a lot of things didn’t work. But we eventually got something that worked for a basic case, and we were really happy. Then Xcode 4 shipped and the scripting was completely broken. We will elaborate on this in Part 2.
For Android, since there really are no GUI tools, Android’s own backwardness actually makes this easy. We just need to use ‘adb install’.
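One gotcha worth coding around: older adb builds exit 0 even when an install fails, printing a "Failure [...]" line instead, so the install output has to be inspected rather than trusting the exit status. A minimal sketch:

```shell
# adb (at least in older versions) exits 0 even when an install fails, so
# we check its output for "Success" rather than trusting the exit status.
check_install_output() {
  case "$1" in
    *Success*) return 0 ;;
    *)         return 1 ;;
  esac
}

# Usage with a real device attached (MyTest.apk is a placeholder name):
#   OUT=$(adb -d install -r MyTest.apk 2>&1)
#   check_install_output "$OUT" || echo "install failed: $OUT" >&2

check_install_output "Success" && echo "install ok"
```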
5) Launching/running the apps
On iOS, this is already handled by how we install the app in Xcode.
On Android, we had to track down a magic invocation to launch it via command line:
```
adb -d shell am start -a android.intent.action.MAIN -n com.ansca.test.Corona/com.ansca.corona.CoronaActivity
```
The Mac simulator responds to secret command line parameters we added to aid automated testing.
The Xcode iOS Simulator has a private framework which people have documented and posted utilities for, and we take advantage of these for automation.
6) Terminating the apps
This turned out to be very hard and tricky, because there is no user available to hit the home key or back button on the device. There doesn’t seem to be a programmatic way to kill a process remotely via Xcode or any Android tool (without rooting), so it is completely up to the app to quit itself. But guaranteeing a program will always halt is a hard problem. (In fact, this is the classic Halting Problem of computer science.)
There are actually several different cases for termination.
6a) App completes the test program normally
This is the easiest case. We call Lua’s os.exit(), which just calls the C exit() function; on all platforms, this terminates the app immediately. (Note that on Mac, we normally trap calls to os.exit() in our simulator so users don’t see the whole simulator die when they terminate their simulation. But for our Mac simulator automated tests, we have a special flag to re-enable it, because it is convenient to kill the process after the test runs.)
6b) The app triggers internal exceptions/errors
It is possible that some code triggers an exception. In Lua particularly, it is easy to trigger a runtime error, such as concatenating a string with nil. In Corona, we trap Lua errors with pcall; that part of the Lua execution ends, but the core application may continue running, in some cases without ever knowing a problem existed.
But sometimes this leaves the app stuck: the code path that was expected to eventually reach our os.exit() call is never executed, so our app never ends.
So our solution was to modify our exception traps to call exit() when triggered. We pass an error code to exit() so we know, for our own information, that we hit this case.
6c) The app runs away
It’s possible that the app doesn’t go down the code path you expect, and the ultimate call to os.exit() is never reached. One easy example is a callback notification. If your app is expecting a callback to fire, such as audio finishing playback, but the callback code is broken and the callback is never invoked, your app will just continue to run without halting.
To handle this case, we set up a self-destruct timer at the launch of the program. The timer is designed to fire at a predesignated time longer than the test should take to run (say, 2 minutes) and call exit(). We set a different exit code so we can distinguish this case as well.
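On the Mac simulator, where the controlling script sees the process exit status directly, the cases above can be distinguished by exit code. A sketch of the host-side interpretation; the specific numeric codes here are invented for illustration, not Corona's actual values:

```shell
# Map a process exit status to the termination cases described above.
# Codes 17 and 18 are hypothetical; only 0 (normal exit) is standard.
interpret_exit() {
  case "$1" in
    0)  echo "test completed normally" ;;
    17) echo "runtime error trapped; exception handler called exit()" ;;
    18) echo "self-destruct timer fired; test ran away" ;;
    *)  echo "unexpected exit status: $1" ;;
  esac
}

interpret_exit 0    # -> test completed normally
```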
6d) The app crashes or never launches
In this case, termination is also easy, since the device returns to its home screen. But the TAP output we expect will be cut short. Parsing and interpreting the TAP output can reveal that a test did not run the planned (declared) number of tests. However, if the app crashes or never launches before the plan is declared, we may get no data at all, so we have to check for this possibility as well and make sure to interpret it as a test failure.
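Comparing the plan line against the number of test lines actually received catches this truncation. A minimal TAP sanity check might look like this (a sketch assuming a simple "1..N" plan line; real TAP streams have more cases, such as deferred plans and skips):

```shell
# check_tap: read TAP on stdin; fail if any test failed, if no plan was
# declared, or if fewer tests ran than the plan promised (truncated output
# usually means the app crashed or never launched).
check_tap() {
  awk '
    /^1\.\./    { plan = substr($0, 4) + 0 }
    /^not ok /  { ran++; failed++; next }
    /^ok /      { ran++ }
    END {
      if (plan == 0)   { print "no plan declared";        exit 1 }
      if (failed > 0)  { print failed " test(s) failed";  exit 1 }
      if (ran != plan) { print "ran " ran " of " plan;    exit 1 }
      print "all " ran " tests passed"
    }'
}

printf '1..2\nok 1 - first\nok 2 - second\n' | check_tap   # -> all 2 tests passed
```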
7) Processing the results
Our server which controls the whole build/install/run process also sets up a simple server to receive test output as a test is running. After the test completes, we post-process the TAP output and integrate with our Jenkins build server which among other things gives us a nifty report page showing us whether things passed or failed.
8) Rinse and repeat for each test app
We wrote additional custom shell scripts to pull a repository containing all our tests and iterate through each test.
9) Integration into our normal build process
We have integrated this into our official and nightly build process, which includes the Daily Builds. Currently, only the Mac simulator tests will block a build from being put up: we see enough random device problems that trigger failures (such as the tools failing to launch the app) that we don’t want the device tests to block. Instead, we recently changed the device tests so that if they fail, we repeat them a few more times to make sure the failure wasn’t some intermittent device or tool problem outside our control.
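The retry wrapper is simple; a sketch of the shape we use (the per-test driver script name is hypothetical):

```shell
# run_with_retries N cmd [args...]: run cmd up to N times, stopping at the
# first success. Useful for flaky on-device test runs where intermittent
# tool failures shouldn't count as a real test failure.
run_with_retries() {
  max="$1"; shift
  attempt=1
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $attempt of $max failed; retrying" >&2
    attempt=$((attempt + 1))
  done
  return 1
}

# e.g. run_with_retries 3 ./run_device_test.sh SomeTest   (hypothetical driver)
run_with_retries 2 true && echo "passed"
```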
Other Gotchas, Bugs, etc
- Remember to clear off space on your device so you don’t run out of disk during your tests
- Apps sometimes fail to install, so check for that possibility rather than assuming the install succeeded
- Xcode gets stuck sometimes. We periodically quit Xcode every so many runs to help avoid this.
- adb just hangs sometimes, and you can’t do anything but manually yank the cable and plug it back in. (Killing and restarting adb does nothing to fix the problem.)
- The app or toolchain (Xcode, adb) hangs. We use the tool gtimeout in the master script that orchestrates the entire test run, to kill processes that take longer than we think they should.
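gtimeout is GNU coreutils' `timeout` (the g-prefixed name comes from installing coreutils via MacPorts or Homebrew). A sketch of how a step can be wrapped; the run_step helper and its time budget are illustrative:

```shell
# Kill any step that exceeds its time budget: SIGTERM at the limit, then
# SIGKILL 10 seconds later if it still won't die. GNU timeout exits with
# status 124 when the limit is hit, so the caller can tell a hang from a
# normal failure. Pick whichever binary exists (gtimeout on a Mac).
TIMEOUT_BIN=$(command -v gtimeout || command -v timeout)

run_step() {   # run_step SECONDS cmd [args...]
  limit="$1"; shift
  "$TIMEOUT_BIN" --kill-after=10 "$limit" "$@"
}

run_step 120 sleep 0 && echo "step finished in time"
```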
To give you a sense of what this looks like, here are some videos:
Video 1: The first video is running through part of our automated test suite on the Corona Mac Simulator. The Mac simulator is very fast because there is no compile/linking phase and no lengthy overhead in copying things to a physical device. The Mac simulator can run through all our tests in a few minutes as opposed to 20 minutes to an hour for our on-device tests.
Video 2: This is an excerpt from automated tests being run on an iPad. Scripting Bridge is being used to automate Xcode. This is discussed in more detail in Part 2.
Video 3: This is an excerpt from automated tests being run on the Xcode iOS Simulator. Launching the simulator is controlled by talking to private frameworks, but skin changing, resetting, and quitting are controlled by Scripting Bridge. We discuss this in more detail in Part 4.
Video 4: This is an excerpt from automated tests being run on an Android Nexus S. This is discussed in more detail in Part 5.
The following parts are going to cover the above points in a lot more detail and provide example scripts and code.
Please stay tuned. Our hope is that you will be able to learn from and reproduce what we’ve done in your own projects, whether they are Corona-based projects, general Lua projects, iOS projects, Android projects, etc.