Mobile applications CI on the Cloud

The process of packaging and shipping your mobile app is one of the most critical ones, as well as one of the most delicate and complex. Let’s break it down into its atomic components:

  1. Developing features
  2. Writing tests
  3. Building the application
  4. Running tests
  5. Deploying the application

But is it really all there is to it? Let’s refactor:

  1. Developing features
    • Creating a feature branch to allow different developers to focus on different features
    • Integrating features into the master branch via merge
  2. Writing tests
    • Developing tests using automation and unit testing frameworks
  3. Building the application
    • Build machine
      • Required dependencies
      • Building on one’s own machine
  4. Running tests
    • A suite of devices of various makes, OSes and OS versions

As you can see, the process is composed of subprocesses, which are in turn composed of tasks and subtasks. Setting up, managing and monitoring all of the above can, and ultimately will, turn into an overhead nightmare. On top of the overhead, you have to be cautious about introducing bugs into your project. Have a look at requirement 1.b: can you foresee an integration that introduces bugs? And can you afford the time and effort entailed by procuring and maintaining so many devices? Remember the overhead: you want to minimize maintenance as much as possible so you can focus your effort and resources on development and QA.

What you want to do is set up, configure and automate the process so that there is little to no manual intervention. As the title of this post suggests, a good option is to set up a CI pipeline in the cloud. Let’s go through the components and methodologies that will help you build your framework of “Mobile Applications CI on the Cloud”.

The Cloud

An amorphous concept in and of itself, but a solid one nevertheless. To simplify things, the cloud is just a remote service that frees you from the need to set up services and environments locally. With high uptime, stability, and a consistent development and testing environment, the cloud offers your developers a dependable place to build and test.

CI in the Cloud

CI stands for “Continuous Integration” and spells out the need to integrate code without breaking the build or introducing bugs. A successful build and a minimal number of bugs (because let’s be honest, no bugs is utopia and we are not about utopia in this post) constitute “Continuous Integration”, because they allow you to continually push code if and only if it meets the two most important conditions:

  1. Successful build
  2. Tests pass

Failure to meet either of these two conditions will prevent the code from being integrated, saving you hours of debugging and tracking down the faulty code. If the code doesn’t fit, it’s not going to be admitted.

CI Services

With the cloud in mind, you should aim to use cloud-based CI services. The list includes Jenkins, Travis CI, CircleCI, TeamCity and more. With these services you define your CI pipeline: a set of instructions that make up the CI process. They include spinning up an environment with the required dependencies, building the app or compiling the code, running tests, publishing results and eventually integrating the code into the current branch. Remember that integration is only allowed if the build succeeds and the tests pass.

Devices in The Cloud

Defining and setting up the CI template is the starting point of the CI process. Once you have the CI pipeline figured out, you need a set of devices on which your tests will run. Given the time and effort involved in obtaining these devices, connecting them to your local machines, and maintaining them (keeping them updated, clean and fully charged at all times), it is much better to have these devices hosted and maintained elsewhere.

Putting Everything Together

If you take the three elements discussed above (CI in the cloud, CI services, and devices in the cloud) and consider them carefully, it should start getting clear what we are aiming at here. Think of it as a flowchart (we will review one in a bit). First, identify the cloud services you want to use. Then, plan your CI route. Finally, execute. It is that simple.

Let’s look at a simple diagram that demonstrates the concept.

A diagram breakdown is as follows:

  1. Developer pushes code
  2. The repository notifies the CI service
  3. The CI service pulls the latest changes to the branch and starts building the app
  4. If the build is successful, the CI service installs the app on cloud devices (that’s where we come in). If the build fails, the whole process stops and the developer is notified.
  5. The CI service then continues on to run pre-defined tests on the newly installed application.
  6. If the tests pass, great. The code is integrated by way of merge or push. If the tests fail, the process stops and the developer is notified.

As simple as that. Once you get the whole pipeline set up, it is just a matter of tweaking the pipeline script.

Real-World Example

It’s all nice in theory, but what about putting it into practice? Clouds, CI services, devices, apps: where do they come into play, and how do we intertwine them in the pipeline? Let’s look at a real-world example.

Setting up the scenario and requirements

In the middle of a sprint, Dave and I are developing new features for our mobile application. We have already branched off master to a feature branch, and I now branch off the feature branch to my own branch in order to work on the features I am responsible for. My goal in branching off to my own branch is to prevent any disruption to Dave’s work in the form of failing builds.

With much effort and a lot of coffee, I finally finished introducing the new features. Now I want to test them. What I want to do next is the following:

  1. Develop my tests by combining an automated testing framework with a unit testing framework. I use Appium, Java and JUnit.
  2. Set up my CI job (I’m using Jenkins):
    • Create a pipeline CI job
    • Parameterize the test (parameters are added as environment variables):
      • Add the cloud access key
      • Add the git repository username and password
  3. Configure the pipeline with a Jenkinsfile (written in Groovy)
    • Pull the repo at the feature branch
    • Build the app
    • Upload the app to the cloud
    • Run the tests
      • Install the application on the selected devices
      • Run the tests
      • Return test results
  4. Based on the results of step 3, either:
    • Merge my branch into the feature branch, or
    • Fail the job and send a notification with the test results

CI Example Using Jenkins

We have a webinar on the topic coming soon, stay tuned!

The Anatomy of Testing Frameworks

Unit Testing is the practice of validating the functionality of your program or app. It is part of the TDD (Test-Driven Development) method, in which the code gradually evolves around test cases. The idea behind TDD is that test cases are planned and written before any new features are coded. If run, the test cases will obviously fail while the feature has yet to be implemented, but that is exactly the purpose and intention of Unit Testing. Consider each feature as a unit, or a set of units. Because the development of test cases precedes the actual coding of new features, the developer gets a clear notion of what the feature should do, how it should behave, and how it should handle errors and edge cases. Test cases thus achieve a double goal: not only do they serve as tests, but they also help the developer solidify their intentions regarding a feature and how it should be implemented. In our context, Unit Testing comes in really handy when running a suite of tests on mobile devices, especially if testing requires you to run the tests on multiple devices.

Consider unit testing your last line of defense. You work on your code to make sure the feature meets the requirements, but you must also make sure it passes all tests. If it does, you can introduce it into a working production version of the app. This is where Continuous Integration comes into play, which we covered earlier in this post.

Structure Of Unit Testing

The classical structure of Unit Testing consists of three stages:

  1. Setup – the setup stage prepares the environment for testing. The setup step can include setting local variables, retrieving environment variables, initializing instances of helper classes, setting test conditions (pass / fail ratio, timeout), and defining the file into which test results will be logged.
  2. Test – the test stage is where the actual test takes place.
  3. Teardown – the teardown stage serves as the final stage of the test. It cleans up the environment, logs the final results in the log file, and releases any agents and clients used in the test. One of its most crucial roles is to make sure the test is seen all the way through to the end, even if it failed. Many test failures are the result of exceptions thrown during the test, and without a teardown stage the program (test case) would simply stop running. An abrupt stop can leave processes running even though the test has halted, and leave the test result logs incomplete. The teardown stage ensures that no matter what happens during the test, failures and errors will not prevent the end-of-test requirements from being fulfilled.
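The three stages can be sketched framework-free in plain Java; the try/finally construct plays the role of a framework’s teardown hook, guaranteeing cleanup even if the test throws. All names here are illustrative.

```java
public class ThreeStageTest {
    static final StringBuilder log = new StringBuilder();

    static void setUp() {
        // setup stage: prepare the environment for testing
        log.append("setup;");
    }

    static void test() {
        // test stage: the actual assertion
        if (2 * 3 != 6) {
            throw new AssertionError("multiplication is broken");
        }
        log.append("test;");
    }

    static void tearDown() {
        // teardown stage: clean up and log results -- must run even on failure
        log.append("teardown;");
    }

    public static void main(String[] args) {
        setUp();
        try {
            test();
        } finally {
            // finally guarantees the teardown stage, just like a framework's
            // after-class hook would
            tearDown();
        }
        System.out.println(log); // prints: setup;test;teardown;
    }
}
```

Even if `test()` threw, the `finally` block would still append `teardown;`, which is the whole point of the teardown stage.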

There are many unit testing frameworks: JUnit and TestNG for Java, unittest for Python, NUnit for C# and more. However, going through each of them and how to work with them is beyond the scope of this post; I believe the example given here will help solidify the concept. If you want the practical side of using these unit testing tools, have a look at our documentation, where we created sample tests for Native and Web apps on both Android and iOS.

Let’s have a look at some sample unit testing code, written in Java and using TestNG as the unit testing framework.


Test Class:
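The original listing is not included here, so the following is a reconstruction consistent with the description below; class and member names are assumptions. It shows the Multiplier class and a TestNG test class with a setup method, three tests (the second deliberately failing), and a teardown that reports results via TestNG’s ITestContext.

```java
import org.testng.Assert;
import org.testng.ITestContext;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

// Takes two numbers, multiplies them, returns the result
class Multiplier {
    private final int a;
    private final int b;

    Multiplier(int a, int b) {
        this.a = a;
        this.b = b;
    }

    int multiply() {
        return a * b;
    }
}

public class MultiplierTest {
    private Multiplier first, second, third;

    @BeforeClass
    public void setup() {
        // runs once, before any test method in this class
        first  = new Multiplier(2, 3);
        second = new Multiplier(4, 5);
        third  = new Multiplier(6, 7);
    }

    @Test
    public void firstTest() {
        Assert.assertEquals(first.multiply(), 6);
    }

    @Test
    public void secondTest() {
        // deliberately wrong expectation -- this test is meant to fail
        Assert.assertEquals(second.multiply(), 0);
    }

    @Test
    public void thirdTest() {
        Assert.assertEquals(third.multiply(), 42);
    }

    @AfterClass
    public void tearDown(ITestContext context) {
        // runs once after all test methods, even if some failed
        System.out.println("Passed: " + context.getPassedTests().size());
        System.out.println("Failed: " + context.getFailedTests().size());
        context.getFailedTests().getAllMethods()
               .forEach(m -> System.out.println("Failed method: " + m.getMethodName()));
    }
}
```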

The Multiplier class simply takes two numbers, multiplies them and returns the result. Your own classes will have more elaborate calculations and much more advanced functionality, but for the sake of demonstration the Multiplier class should suffice. The test class instantiates three instances of the Multiplier class and then runs a few tests to see if the calculation carries through correctly. The Multiplier class is pretty straightforward, so let’s break down the test class.

The setup method

The setup method prepares everything we need before the tests can run; in this case, instantiating three instances of the Multiplier class. However, this could be anything from creating a file into which you log results to initiating helper classes such as DB connection helpers. Note that we are using the @BeforeClass annotation, which tells the setup method to run only once, when the tests are initiated. If there is anything you want to run before each test commences, say deleting some data or resetting connections, you can use the @BeforeMethod annotation instead; it tells the method that follows it to run before each test method.


The test methods

Tests are actual methods, annotated with the @Test annotation. This annotation tells TestNG which methods are the actual tests. Annotating these methods allows TestNG to log results and also allows you to extract information at the end of the test cycle. In these tests we use the well-known assertEquals method. It takes two arguments and compares them, that’s it. But the comparison has to take place in the context of the test itself and produce an assertion error (note that this is not an exception). The occurrence of an assertion error is what allows TestNG to mark a test as failed. In our scenario, we’ve set the second test to fail, and we will see that in the results we print out in the tearDown method.

Tear Down

The tearDown method is annotated with @AfterClass. Remember that we annotated the setup method with @BeforeClass; naturally, the @AfterClass annotation sets the method that follows it to run only at the end of the class, when all test methods have run. In our case, the tearDown method receives a parameter of type ITestContext, which contains information such as the failed and passed tests, the names of the methods that passed or failed, and other useful details. What we did here was count the number of failed and passed tests and print the name of the method that failed. The tearDown scenario here is quite simple, but note that you can harness its power: the tearDown method can close DB connections, create or destroy log files, and send email or Slack notifications. You name it. It can even call other functions or classes according to the test results; if I counted the failed tests and found there were none, I could then call some other function and even invoke other tests.

Results Output

If we ran the set of tests we created, the output would be as follows:

We can see that Java alerts us that an AssertionError took place. Java will also output the stack trace, but for our purposes it is not relevant.

The output above is generated by the tearDown method. If you recall, we tapped into the ITestContext, from which we extracted information about the test run. Using it, we are able to count the number of passed and failed tests and output the name of the failed test method, which is secondTest.

This final bit of output is generated by TestNG. It lists how many tests ran in total, how many failed and how many were skipped.

Parameterized Tests

One of the strongest features of unit testing, and TestNG in particular, is the ability to parameterize the tests. There are numerous parameters that you can inject into a test, but we will focus on just a few in order to demonstrate the concept. Parameters are placed in parentheses following the annotation. Let’s look at a few examples:

invocationCount parameter – this parameter allows us to define how many times a test method will be invoked. If we did something like @Test(invocationCount = 4), the test method that followed the @Test annotation would be invoked 4 times.

dependsOnMethods parameter – this parameter lets us define method dependencies: we specify that a given method depends on another. It is important to note that if we specify method A’s dependency on method B, A will only run if B passed. So if we specified:
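The missing snippet presumably looked something like the following, reusing the method names and fields from the earlier test class:

```java
@Test
public void secondTest() {
    // deliberately failing assertion from the earlier example
    Assert.assertEquals(second.multiply(), 0);
}

@Test(dependsOnMethods = { "secondTest" })
public void firstTest() {
    // skipped by TestNG, because secondTest fails
    Assert.assertEquals(first.multiply(), 6);
}
```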

firstTest would never run because, if you recall, we purposely set secondTest to fail.

dataProvider parameter – this parameter allows us to define another method as the data provider of the test method. Let’s say that we want to utilize multiple assertions in one test method. We can create a data provider method and set it as the data provider of the test method.

First, we annotate the data provider method with the @DataProvider annotation. The method must return Object[] or Object[][]. The method will return an object that contains an array of 4 numbers.

We then specify the dataProvider parameter in the @Test annotation. This will invoke the test method once for each number in the array, a total of 4 invocations.
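The missing listing presumably resembled the following; the specific numbers and the assertion are placeholders chosen so that three of the four invocations fail, matching the output described below.

```java
@DataProvider(name = "provideNumbers")
public Object[][] provideNumbers() {
    // 4 numbers; the test method is invoked once per entry
    return new Object[][] { { 2 }, { 6 }, { 19 }, { 22 } };
}

@Test(dataProvider = "provideNumbers")
public void numberTest(int number) {
    // placeholder assertion: passes for 2, fails for 6, 19 and 22
    Assert.assertTrue(number < 3);
}
```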

The method provideNumbers provides a set of 4 numbers that are passed on to the test method. Look at the code above and trace the way the data travels: the test method is annotated with @Test and is given the method provideNumbers as its data provider. This setup causes TestNG to pass the numbers in provideNumbers sequentially, one per test invocation. By declaring int number as a parameter of the test method, we are basically telling TestNG to take the next number from the object we defined in provideNumbers and pass it into the test method. So every time the test method is invoked, the next number in the list is passed in as a parameter, and we can then use it by plugging it into the assert method.

The output of running the tests:

There are three failure instances, and TestNG logs the data that caused each failure (look at the java.lang.AssertionError part before each stack trace). The stack trace also shows the line in the code where the assertion error took place.

Use Unit Testing for Mobile Applications Testing

Unit testing really fits the flow of mobile application testing or mobile website testing. Let’s see how the structure of unit testing helps in designing and executing test cases on mobile applications and websites.

Setup Stage – Every test that is run using Appium requires predefined capabilities that determine the nature of the test. By specifying capabilities, you tell Appium what app you want to test, or what browser you want to test on. You can tell Appium that you want to install an app, keep the app alive at the end of a test, set command timeout and more. The setup stage of unit testing is the perfect place to set these test capabilities. The setup stage is also where you initialize the driver instance that will serve you throughout the test.

Test – Once capabilities are set and the driver has been initialized, you can start testing the application. Recall the setup stage; it ensures that when we actually reach the test stage, everything has already been configured to support the test case. From there on you can test the login and logout functions of your app, payment, friend invites or alerts of all kinds. You can design complex tests that verify and validate a sign-up and login process followed by actions performed in the app, or you can settle for the simplest test of all by just viewing and taking a screenshot of each page of your mobile website to verify that it is mobile-ready.

Teardown – In the context of mobile testing, the teardown stage is the cleanup and regrouping stage. Here you will release the driver, log results and print the results, or rerun the test if you need to.

It’s really simple if you think about it. You have a testing template into which you plug in everything that will make your test cover every scenario of your feature and even edge cases that might be introduced into the flow.

Let’s look at an example of testing the login and payment flow of a demo iOS app.
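Since the listing is not shown, here is a hedged sketch of what such a test could look like with the Appium Java client and TestNG. The element locators, device name, app path and server URL are all hypothetical; inspect your own app for the real identifiers.

```java
import java.net.URL;
import io.appium.java_client.ios.IOSDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginPaymentTest {
    private IOSDriver driver;

    @BeforeClass
    public void setUp() throws Exception {
        // capabilities tell Appium which device and app to target
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "iOS");
        caps.setCapability("deviceName", "iPhone 8");          // placeholder device
        caps.setCapability("app", "/path/to/DemoApp.ipa");     // placeholder app path
        driver = new IOSDriver(new URL("http://localhost:4723/wd/hub"), caps);
    }

    @Test
    public void loginTest() {
        // hypothetical locators for the demo app's login screen
        driver.findElement(By.id("username")).sendKeys("dave@example.com");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).click();
        Assert.assertTrue(driver.findElement(By.id("homeScreen")).isDisplayed());
    }

    @Test(dependsOnMethods = { "loginTest" })
    public void paymentTest() {
        // payment only makes sense after a successful login
        driver.findElement(By.id("payButton")).click();
        Assert.assertTrue(driver.findElement(By.id("paymentConfirmed")).isDisplayed());
    }

    @AfterClass
    public void tearDown() {
        // release the driver no matter how the tests ended
        if (driver != null) {
            driver.quit();
        }
    }
}
```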

The TestNG XML

The TestNG XML file is probably the strongest trait of the TestNG framework. It allows you to specify test suites, pass parameters, run tests in parallel, and even run multiple instances of classes and methods in parallel. The ability to run multiple instances and methods in parallel is a key feature for test scalability. The best way to demonstrate the strength of this feature is to show how it can be harnessed to execute Appium tests in parallel on a large set of devices.

Let’s have a look at a sample TestNG XML file:
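Since no listing is shown, here is a minimal sketch of what such a suite file could look like; the suite, test and class names are placeholders. parallel="tests" tells TestNG to run the two test nodes concurrently.

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Mobile suite" parallel="tests" thread-count="2">
    <test name="Android test">
        <classes>
            <class name="com.example.tests.LoginPaymentTest"/>
        </classes>
    </test>
    <test name="iOS test">
        <classes>
            <class name="com.example.tests.LoginPaymentTest"/>
        </classes>
    </test>
</suite>
```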

The structure of the TestNG XML allows us to specify the names of nodes (suites, tests, classes and methods) and which ones we want to run.

The easiest way to parallelize your test cases is at the test level, by running the XML's test nodes in parallel. There are other ways, such as running classes, instances and methods in parallel, but these are a bit more advanced and require some code changes. We will focus on tests in parallel.

It’s a bit overloaded but once you get the hang of it, it’s simple to see what it does. If we ran this XML the results would be as follows:

Note that all the tests run in parallel. This allows me to run a test case on both Android and iOS, and on multiple devices in the cloud.

Parameterization in TestNG XML

Another powerful thing TestNG lets you do is pass parameters from the TestNG XML into the test class. In the context of our Appium tests, we could pass the UDID of the specific device we want to test on, or the application version we want to test. Let’s see an example of specifying the device ID.
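A minimal sketch, assuming a parameter named udid defined in the suite XML; the value and the surrounding setup code are placeholders.

```java
// In the TestNG XML, inside the <test> node (the value is a placeholder):
//   <parameter name="udid" value="A1B2C3D4E5F6"/>

@Parameters({ "udid" })
@BeforeClass
public void setUp(String udid) throws Exception {
    DesiredCapabilities caps = new DesiredCapabilities();
    // the udid capability pins the test to one specific cloud device
    caps.setCapability("udid", udid);
    // ...remaining capabilities and driver initialization as before
}
```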

The @Parameters annotation passes the udid parameter as a variable from the TestNG XML.

It is important to keep consistent naming throughout, otherwise the test method will not be able to pick up the parameter and its value.

If we combine parallel execution with the ability to choose the devices we test on, we get double the power: not only do we get to choose which devices we run our tests on, but we can also run them all at once, saving ourselves time and effort.

To conclude, unit testing is your tool of choice when you want to make sure features are bullet-proof. Unit testing is one of the most crucial stages of development because it allows you to make sure a feature works as intended and to avoid shipping faulty code into production. The structure of unit testing lets you define test prerequisites, the test stages themselves, and after-test actions for logging and reporting. Combined with development and integration, unit testing is the missing piece of a well-formed development process.