The Guide to Mobile Testing Automation

July 15, 2023
10 min

Mobile applications have been shown to have higher conversion rates and engagement than mobile websites. Integration into the native OS opens up many opportunities for branding, customization, UI optimization, and device-specific features like push notifications.

Mobile environments run in an entirely different context than their web counterparts. Web applications rely on a persistent connection to a server, while mobile applications are compiled and run natively on the device, often with intermittent or no network connectivity. Despite these differences, the fundamental requirements and challenges remain the same in mobile contexts. Code coverage, test code standardization, and the optimization of test suites remain vital. 

This article introduces readers to the world of mobile application testing, highlighting its differences and ways to harness the power of the mobile ecosystem for comprehensive testing. Among the many testing frameworks, this article will illustrate concepts using Playwright, an end-to-end testing automation tool. You can expect to learn how to build a comprehensive testing strategy, from initial planning to later maintenance stages.

Summary of key mobile testing automation concepts

  • Device and OS fragmentation: One mobile app release can be downloaded in hundreds of combinations of versions, operating systems, and devices, each of which may need to be addressed through automation strategies. Plan for this reality and establish full coverage in testing.
  • Real-device testing: While emulators offer scalable, cost-effective testing, it is also beneficial to test on physical mobile devices. This helps detect hardware-specific issues and accurately simulate real-world conditions.
  • Test suite design: Determine a testing strategy that covers test data management, app state, data operations, and network requests. Plan for device-level considerations in mobile contexts like timeouts, interrupts, OS quirks, and the app lifecycle.
  • Manual mobile testing: Manual testing is the first step in building a test suite. Be conscientious in writing scalable tests and building a proper foundation for your test suite. Effective mobile test cases require careful design to cover critical user flows, different device orientations, and mobile-specific interactions like gestures and permissions.
  • Automating mobile testing: Automation converts manual processes into repeatable scripts and schedules those scripts to run at specified times. Continuous integration and deployment techniques are the foundation of modern approaches to automation.
  • Ongoing test management: Some maintenance will ensure that a test suite remains stable even after automation is established. Use techniques like regression testing to proactively catch issues in the test environment. Periodically review test suite results for accuracy, performance, and consistency.
  • AI testing: AI tools reduce the overhead traditionally associated with software testing. Instead of manually maintaining a test suite, tests can be created and updated automatically over time.

Introduction to mobile testing

Mobile testing builds on the fundamental software testing concepts in other environments, such as web and desktop applications. Concepts like unit, integration, and UI testing can be translated to the mobile domain, and techniques like behavior-driven development and end-to-end testing remain applicable.

The principal difference between web/desktop and mobile lies in the underlying mobile device operating system (OS) and how it manages applications. Mobile applications have their own lifecycles (start, sleep, and shutdown cycles) and background communications. The vast mobile device marketplace also introduces complexity.

Mobile-specific testing challenges

Mobile testing presents a unique set of challenges that are not as prevalent in other types of software testing; they are outlined below, along with potential solutions.

  • Device fragmentation: The sheer number of devices in use, with varying screen sizes, resolutions, and hardware capabilities, complicates testing. Solution: Playwright supports device emulation, allowing you to simulate various device profiles; pair this with a cloud testing service for real-device coverage.
  • Operating system (OS) variations: Different versions of operating systems like Android and iOS require thorough testing to ensure compatibility across the board. Solution: Use Playwright’s emulation to cover a wide range of device profiles, browsers, and viewports, including Android and iOS browser profiles.
  • Network conditions and connectivity: Mobile applications must perform well under different network conditions, including varying signal strengths and offline scenarios. Solution: Intercept and modify network requests to simulate degraded connections, and use context.setOffline(true) to test offline scenarios.
  • User interface and gestures: Mobile apps rely heavily on touch interactions, so testing must account for gestures, swipes, and other mobile-specific inputs. Solution: Playwright can tap elements when touch support is enabled, and many gestures can be approximated with mouse events; for more complex gestures, use page.evaluate() to dispatch custom touch events.
  • Application lifecycles and interruptions: Unlike desktop applications, mobile apps must gracefully handle interruptions such as incoming calls, notifications, and app switching. Solution: Test app state management by using page.reload() to simulate app restarts and context.newPage() to simulate app switching; for notifications, use page.evaluate() to trigger custom events that mimic notification behavior.
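To make a couple of these solutions concrete, here is a minimal Playwright sketch of offline simulation and a swipe approximated with mouse events. The URLs and selectors (#refresh, .offline-banner, .slide-2) are hypothetical placeholders:

const { test, expect } = require('@playwright/test');

test('shows an offline banner when connectivity drops', async ({ page, context }) => {
  await page.goto('https://example.com'); // placeholder URL
  await context.setOffline(true); // cut the emulated network connection
  await page.click('#refresh'); // trigger an action that requires the network
  await expect(page.locator('.offline-banner')).toBeVisible();
});

test('carousel advances on swipe', async ({ page }) => {
  await page.goto('https://example.com/carousel'); // placeholder URL
  // Approximate a left swipe with mouse events: press, drag, release.
  await page.mouse.move(300, 400);
  await page.mouse.down();
  await page.mouse.move(100, 400, { steps: 10 });
  await page.mouse.up();
  await expect(page.locator('.slide-2')).toBeVisible();
});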

Key concepts in mobile testing automation

Device and OS fragmentation

There are thousands of devices available worldwide with varying operating system types and versions. Different operating systems render content to the screen differently, leading to small differences in the user experience that engineers must account for.

A few years ago, there were two very common business solutions to this problem:

  • Outsourcing to providers: Using third-party infrastructure providers to schedule testing with hundreds of physical mobile devices 
  • In-house management: Organizing a collection of mobile devices and manually writing scripts to perform testing operations on each

Fortunately, using physical mobile devices is no longer the only option—in fact, cloud emulation is now the most common form of mobile testing. Emulation creates a virtual browser, and test scripts are executed within the isolated virtual environment. Cloud-based testing platforms abstract away infrastructure details, allowing users to focus on maintaining their testing coverage.

Emulation should not be confused with simulation. Simulation is a way to run a program incompatible with the current OS and can be done using a first-party tool like Xcode or Android Studio. Simulation is efficient, but developers are restricted in which devices they can simulate; for instance, developers can only simulate specific iOS devices at approved versions using Xcode.

This is where emulation shines. Playwright, a popular end-to-end testing framework, avoids this limitation by employing an open-source model: it can emulate a wide variety of browsers and devices.

Here’s an example of configuring a set of testing devices:

import { defineConfig, devices } from '@playwright/test'; // import devices

export default defineConfig({
  projects: [
    {
      name: 'Desktop Chrome',
      use: {
        ...devices['Desktop Chrome'],
        // It is important to define the `viewport` property after destructuring `devices`,
        // since device profiles also define a `viewport`.
        viewport: { width: 1280, height: 720 },
      },
    },
    {
      name: 'iPhone 13',
      use: {
        ...devices['iPhone 13'],
      },
    },
    {
      name: 'Kindle',
      use: {
        ...devices['Kindle Fire HDX landscape'],
      },
    }
  ],
});
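Each configured project can then be run on its own, for example with npx playwright test --project="iPhone 13".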

Real-device testing

Real-device testing is the practice of testing using physical mobile devices. Despite the proliferation of emulation-based testing, real-device testing remains relevant for prototyping and manual testing. It can also be relevant in device-specific testing scenarios like battery usage or memory management.

Testing on a local device is less prevalent, but many mobile teams still use a physical mobile device to verify significant code releases. Using “smoke testing,” engineers can check whether core app functionality loads as usual after an impactful deployment. Other teams may periodically compare snapshots from UI testing results to the UI displayed on their actual devices.

Beyond the occasional sanity check, real-device testing may be unnecessary for standard testing operations. However, it is worth confirming that expectations continue to match reality, and this consideration can factor into any ongoing test management strategy.

{{banner-large-dark="/banners"}}

Test suite design

In mobile environments, the golden rule is to test as early as possible. Establishing a testing strategy is the first step. Different teams have different ways of building a testing strategy, but there are common themes.

At a minimum, consider the following approaches:

  • Establish goals and requirements: Start with predefined goals in existing SLAs, OKRs, and KPIs. Determine the scope of testing and prioritize features, versions, or configurations. Determine what browsers, devices, or operating systems need to be covered in testing.
  • Define target browsers and environments: Configure which browsers and mobile environments will be emulated in testing. Hybrid apps (having both a web and mobile interface) benefit from combining both to maximize test coverage.
  • Decide what types of testing to use: Determine which types of testing are most appropriate for your codebase. Common forms of testing, such as smoke, unit, integration, and UI testing, still have relevance in mobile contexts. 
  • Add reporting capabilities: Use reports to monitor ongoing test suite performance. Reporting is a good entryway into ongoing analysis and history management.
  • Add an error alerting system: Use a notification system like PagerDuty to alert team members when tests fail in production environments.
  • Choose acceptable thresholds: Determine when a test should be considered failed. What should the default timeout and number of retries be for each request? How many times should a failing test be retried before being marked a failure? (A configuration sketch follows this list.)
  • Select your testing interval: This refers to the testing frequency; for example, is the app tested hourly, daily, or weekly? This may not be needed in codebases where automated tests are run for each release version.
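Many of these decisions, including thresholds, retries, and reporting, can be encoded directly in a Playwright configuration. The following is a minimal sketch; the specific values are placeholders, not recommendations:

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  timeout: 30_000, // fail any single test that runs longer than 30 seconds
  retries: 2, // retry a failing test up to twice before marking it failed
  expect: { timeout: 5_000 }, // per-assertion timeout
  reporter: [
    ['html'], // browsable report for ongoing analysis
    ['junit', { outputFile: 'results.xml' }], // machine-readable output for CI history
  ],
});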

Manual mobile testing

Manual testing is necessary when initially setting up a testing suite. At this stage, developers must answer basic questions about functionality:

  • Does the app load in the testing environment? 
  • Do necessary network requests run, and are non-necessary requests disabled for testing?
  • Does the testing environment use “fake” data instead of manipulating real system data?

Answering these questions should be one of the first testing steps.

Test cases for backend testing are almost always written directly in the codebase. In contrast, tools exist to help developers visually create tests for front-end and end-to-end testing. Playwright allows developers to load UIs from within an emulated mobile device, where they can record their in-app interactions and generate new tests based on their recorded actions. This is a faster way of testing UIs than manually writing test code. 
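For example, Playwright’s test generator can be launched with a built-in device profile so that interactions are recorded in an emulated mobile viewport (the URL is a placeholder):

npx playwright codegen --device="iPhone 13" https://example.com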

A full test may look like the following example.

Sign in as a test user, and then verify that the logged-in view appears:

const { test, expect } = require('@playwright/test');

test('user can login successfully', async ({ page }) => {
  // Navigate to login page
  await page.goto('https://example.com/login');
  
  // Fill in login form
  await page.fill('#email', 'test@example.com');
  await page.fill('#password', 'password123');
  
  // Click login button
  await page.click('#login-button');
  
  // Verify successful login
  await expect(page.locator('.welcome-message')).toBeVisible();
});

Example Playwright test script verifying that a user can log in successfully.

For real-device testing, Playwright supports Android automation through ADB (Android Debug Bridge). This can be used in many ways: an app can be run natively, through a browser, or by launching separate clients and servers. This flexibility allows one or many devices to connect to the same local applications.

An Android-based test might look like the following example, which shows how to manually test a web app using Android WebView functionality:

const { _android: android } = require('playwright');

(async () => {
  // Connect to the device.
  const [device] = await android.devices();
  console.log(`Model: ${device.model()}`);
  console.log(`Serial: ${device.serial()}`);

  // Take screenshot of the whole device.
  await device.screenshot({ path: 'device.png' });

  {
    // --------------------- WebView -----------------------

    // Launch an application with WebView.
    await device.shell('am force-stop org.chromium.webview_shell');
    await device.shell('am start org.chromium.webview_shell/.WebViewBrowserActivity');

    // Get the WebView.
    const webview = await device.webView({ pkg: 'org.chromium.webview_shell' });

    // Fill the input box.
    await device.fill({
      res: 'org.chromium.webview_shell:id/url_field',
    }, 'github.com/microsoft/playwright');
    await device.press({
      res: 'org.chromium.webview_shell:id/url_field',
    }, 'Enter');

    // Work with WebView's page as usual.
    const page = await webview.page();
    await page.waitForNavigation({ url: /.*microsoft\/playwright.*/ });
    
    // Verify that the correct page title is displayed.
    console.log(await page.title());
  }

  // Close the device.
  await device.close();
})();

Automating mobile testing

Tests may grow in complexity at this stage, so test code organization becomes increasingly important. As more of the codebase becomes standardized, test code tends to become repetitive, and standard best practices dictate that it follow efficiency principles like the “don’t repeat yourself” (DRY) rule.

Automation requires some up-front time investment but pays off quickly in developer time saved managing app lifecycles. CI/CD integrations are fundamental to modern automation solutions: they allow developers to flexibly arrange how their code quality checks are run. Among many benefits, tools like GitHub Actions enable developers to customize their testing workflows and operationalize code analysis cheaply.
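For illustration, a minimal GitHub Actions workflow that runs a Playwright suite on every push might look like the following sketch, adapted from Playwright’s standard CI setup (adjust the Node and action versions to your environment):

name: Playwright Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
      - name: Run tests
        run: npx playwright test
      - name: Upload report for later reference
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/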

CI/CD scripts can pull browser and device settings from existing test suite configurations. Beyond general setup, further considerations arise in automated contexts, some of which are described below.

  • Reporting: Personnel should have a way to reference historical test suite runs. Solution: Save test logs from GitHub Actions runs, whether testing manually or using workflows.
  • Monitoring: Ensure ongoing observation of test suite health and performance in automated environments. Solution: Integrate monitoring tools like Datadog or New Relic to track test suite performance metrics over time.
  • Alerting: Team members should be notified when test suite failures occur in production environments. Solution: Use tools like Opsgenie or PagerDuty for alert notifications based on CI/CD test failures.
  • Acceptable thresholds: Define allowable rates of errors, retries, and timeouts that won’t trigger an alert or halt the deployment process. Solution: Using tools like Jenkins or GitHub Actions, set thresholds for test failures, retry attempts, and allowable timeouts within CI/CD pipelines.
  • Regression management: Ensure that new changes do not introduce failures in previously passing test cases (i.e., prevent regressions). Solution: Implement version control strategies and continuous regression testing in CI/CD workflows to catch regressions quickly.
  • Test code organization: Structuring test code to be maintainable, readable, and reusable becomes critical as tests grow. Solution: Apply the DRY principle, modularize tests, and maintain consistent naming conventions.
  • Test instability (“flakes”): Intermittent test failures, or “flaky” tests, can occur due to unstable environments or dependencies. Solution: Implement retries for flaky tests, review root causes, and isolate tests to identify environmental causes.

These needn’t all be solved up front, but they should factor into any long-running test management strategy.

{{banner-small-1="/banners"}}

Preparing to transition from manual to automated testing

As teams mature their testing practices, they often repeat the same manual test cases. This repetition presents natural opportunities for automation. Here’s how to identify and transition manual tests to automated ones:

  1. Identify automation candidates: These could be frequently run tests, critical user paths, or time-consuming manual processes.
  2. Start small: Automate a single, simple test case. Two common starting points are user logins and form submissions.
  3. Document code and processes: Before automating a manual test, document its steps and any notable code so that the context remains clear later.

Ongoing test management

As an app evolves, so will its tests. By the time an app has been running for several years, coding standards may shift significantly within an organization. Here are some factors to consider:

  • Creating and organizing test cases: Organize test cases logically, grouping them by functionality, user flow, or other relevant criteria. Traditionally, engineering teams dedicate several hours per week (per person) to testing; Qualiti’s AI-powered platform reduces this to minutes per week spent reviewing test code.
  • Editing test case metadata and steps: Regularly update test cases to reflect application changes and ensure that they remain relevant. Regression testing is common for addressing these changes. Beyond that, test metadata shouldn’t be forgotten. Naming conventions, test organization practices, and the like should all remain consistent to promote code clarity. 
  • Running single test cases and test plans: Depending on the testing phase and goals, running core test cases for each feature can be more strategic than running comprehensive test plans to verify functionality.
  • Skipping and archiving tests: Manage the test suite by archiving outdated and disabled (“skipped”) tests that are irrelevant to the current testing cycle. Often, disabled tests are removed together with the outdated functionality that led to them being skipped in the first place. (A sketch of skipping in Playwright follows this list.)
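Skipping is built into Playwright’s test API. Here is a minimal sketch, with hypothetical test names and conditions:

const { test } = require('@playwright/test');

// Unconditionally skip a test tied to functionality slated for removal.
test.skip('legacy checkout flow renders', async ({ page }) => {
  // ...
});

// Mark a known-broken test so it is reported as "fixme" rather than failing the suite.
test.fixme('notification banner appears on resume', async ({ page }) => {
  // ...
});

// Skip conditionally, e.g., on an engine where the gesture is unsupported.
test('gesture-driven carousel advances', async ({ page, browserName }) => {
  test.skip(browserName === 'webkit', 'Carousel gestures are unreliable on WebKit');
  // ...
});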

Once the testing suite is fully stabilized and growing, look into advanced ways to optimize the testing process. Advanced testing considerations include prioritizing components of the testing ecosystem under different conditions, handling “minor failover” situations in which small, infrequent errors arise, and determining how best to speed up testing.

Qualiti provides the following solutions to each of these questions:

  • Test prioritization: Identifying the key tests that can be used for targeted feature testing.
  • Triaging: Automatic review of test suite run results, leaving only real issues for the user to review.
  • Test instability mitigation: Removing flaky tests, selector issues, or failures from minor changes.
  • Parallelization: Running tests in parallel to improve test suite run times (a self-managed equivalent is sketched below).
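For teams managing parallelization themselves, Playwright exposes equivalent controls. A minimal sketch, where the worker count is an arbitrary placeholder:

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true, // run tests within each file in parallel
  workers: process.env.CI ? 4 : undefined, // cap workers on CI; use the default locally
});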

Qualiti’s approach to mobile testing automation

Qualiti is designed to automatically maintain software testing suites, reducing the repetitive work often involved in test maintenance. The platform’s AI tools generate test code and automatically update tests as code changes. Test suite maintenance is seamlessly managed through scheduled runs or CI triggers.

Qualiti can generate entire test suites in minutes, organizing tests by in-app functionality. Unlike traditional methods that rely solely on scripts or prompts, Qualiti’s AI engine analyzes page contents in the test environment to create comprehensive tests.

[In this example, we generate full tests based on Qualiti’s on-page analysis]

The platform then organizes these tests into collections, performing automatic maintenance as code changes are detected. Engineers can directly update test scripts, while non-technical users can add new tests through plain English prompts. All users can create test plans and scheduled testing workflows.

[In the Test Library, users can find tests by folder or by status]

Instead of requiring hours of effort each week, regression testing becomes an occasional approval process. Non-functional performance tests can be completely automated. 

Some of the key characteristics of Qualiti’s approach to mobile testing automation include:

  • Low-code/no-code accessibility: Qualiti’s low-code/no-code approach makes mobile testing accessible to all team members, fostering collaboration and enhancing the overall quality of mobile applications.
  • Automatic test environment setup: Qualiti’s Test Plans feature simplifies the setup and management of test environments, organizing tests on the user’s behalf from creation through maintenance and triaging. Qualiti can create new tests, keep them organized as the app evolves, and prompt test updates in response to code changes.
  • Continuous integration and deployment: Qualiti seamlessly integrates into existing CI/CD pipelines, empowering engineering teams to launch their test suites in minutes.

Incorporating Qualiti alongside tools like Playwright can optimize your automated mobile testing strategy and ensure reliable and efficient testing practices.

{{banner-small-2="/banners"}}

Last thoughts

Mobile applications exist in a unique environment, and mobile testing must follow suit. In this article, we have explored: 

  • The key differences between mobile environments and others
  • The power of automation frameworks like Playwright in simplifying each step of the test automation process
  • Best practices for automating mobile testing and recommendations for ongoing test management strategies

To get started, establish a baseline plan. Identify and prioritize critical user flows. Choose a testing automation framework, like Playwright, that suits project needs and integrates with your CI/CD pipeline. Start by manually testing the code, then automate testing as early as possible once enough core behavior has been verified. Where applicable, balance using real devices and emulators to optimize cost and accuracy.

These best practices are crucial for ensuring consistent, high-quality mobile applications that meet the demands of today’s users.