How to do automation testing on an existing web application: introduction
Let’s imagine that you have a legacy web application you need to change. Of course, it has never been tested, and you should write tests before starting to work with the code. Understanding how to do automation testing is crucial in this scenario, as it will guide you through setting up effective test cases and integrating automation into your workflow.
Should you start with web testing? That is time-consuming, and there is never enough time in software development. Perhaps you’d rather start with security testing or performance testing, which help you cover the most critical issues? Or maybe it’s better to write unit tests, since they are faster?
Working with legacy code is a non-trivial situation. In this article, we will consider the possible approaches to automation testing on an existing web app and decide which of them work better in which situations.
To approach this challenge effectively, it helps to combine several strategies. Starting with web testing ensures that the user interface and functionality work as expected. Security testing helps identify vulnerabilities, while performance testing verifies that the application can handle load and stress. Unit tests, in turn, provide quick feedback on the correctness of individual components. By combining these strategies, you can cover the application comprehensively and ensure it meets quality standards.
What is a legacy system?
A legacy system is software written with outdated methods and technologies, often long before current standards and practices took hold. By some estimates, 70% of corporate business applications are legacy ones, and around 60% of the IT budget is spent on legacy system maintenance.
Legacy applications are maintained by very few people who still possess knowledge of the corresponding technologies and business processes. Usually, earlier versions of the software stay in use: companies avoid upgrading the app because of the risk that everything will break.
“Most legacy apps don’t have proper documentation, which makes it challenging to understand how they work.” (Mykhailo Poliarush, CEO, ZappleTech Inc.)
Given the above factors, replacing a legacy web application may be a risky affair. With existing technologies, migration may be possible, but it involves time, maintenance, and thorough testing of the application. Since legacy software is mainly written without security or performance testing in mind, it’s a good idea to start by introducing those practices into the app’s life cycle. Let’s consider what can be done for legacy system modernization.
Moreover, adopting a continuous integration/continuous deployment (CI/CD) pipeline can streamline the testing and deployment process. This approach allows for automated testing at various stages of development, ensuring that changes are continuously validated. Additionally, leveraging modern tools and frameworks for automation testing can significantly reduce the manual effort involved. By understanding how to do automation testing effectively, teams can maintain and improve legacy systems with greater confidence and efficiency, ultimately extending their lifecycle and enhancing their performance.
How to automate website testing?
It is always challenging to write performance or usability test cases for a legacy application, possibly one that also targets mobile devices. Where do you start? How much should you automate? What is the best test automation strategy? Besides, there is never enough time and money, so you can’t test each and every module. Understanding how to do automation testing involves prioritizing critical functionality, focusing on the areas with the highest risk, and leveraging automated testing tools to cover repetitive, time-consuming tasks. By developing a strategic approach, you can maximize the efficiency of your testing efforts and ensure robust coverage of your legacy application.
To decide on the first steps, you need to set priorities:
- What are the most critical areas of the application?
- Which key functionality brings the most money to the company?
- What are the biggest risks for the app’s operation?
- If you could improve only one thing in an application, what would it be?
What you shouldn’t do is retrospectively write tests for parts of the system that already function successfully. What you do need is a set of key scenarios that check the system end-to-end, to ensure that future development and maintenance won’t threaten the system’s functionality.
How to start legacy app testing
Here are some guidelines for testing an existing web app (including security testing and performance testing), finding the key scenarios, and expanding them into test cases.
1) Exploring
The first step toward usability testing or compatibility testing is getting thoroughly acquainted with the web app and its features. Begin by exploring the structure of the website, its pages, and the behavior of each feature. For your convenience, you can create a mind map to visualize the website’s structure, which helps in understanding how the pages are interconnected.
To dive deeper into usability testing, start by identifying the primary user flows and scenarios that need to be tested. This includes common tasks that users will perform on the site, such as navigating through different sections, filling out forms, or completing transactions. By mapping out these user journeys, you can ensure that all critical paths are tested for usability issues.
Compatibility testing requires an understanding of the various environments in which the web app will be used. This includes different browsers, operating systems, and devices. Begin by listing the most commonly used browsers and devices by your target audience. Use tools like BrowserStack or Sauce Labs to simulate these environments and ensure that the web app performs consistently across all of them.
When it comes to automation testing itself, it’s essential to select tools that match your testing requirements. Tools like Selenium, TestCafe, or Cypress can be used to automate the testing of web applications. Start by writing automated test scripts for the most critical user journeys identified earlier, and ensure these scripts cover various scenarios, including edge cases and potential error conditions.
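As a concrete illustration, here is a minimal sketch of such a script in Cypress. The URL, selectors, and credentials are hypothetical placeholders, not taken from any real application; adapt them to the journeys from your own mind map.

```typescript
// login.spec.ts -- a minimal Cypress sketch for one critical journey.
describe('Customer login (critical journey)', () => {
  it('logs in with valid credentials and lands on the account page', () => {
    cy.visit('https://example.com/login');
    cy.get('input[name="email"]').type('user@example.com');
    cy.get('input[name="password"]').type('correct-horse-battery');
    cy.get('button[type="submit"]').click();
    // Assert the journey completed: the URL changed and a greeting is shown.
    cy.url().should('include', '/account');
    cy.contains('Welcome back').should('be.visible');
  });

  it('rejects invalid credentials (edge case)', () => {
    cy.visit('https://example.com/login');
    cy.get('input[name="email"]').type('user@example.com');
    cy.get('input[name="password"]').type('wrong-password');
    cy.get('button[type="submit"]').click();
    cy.contains('Invalid email or password').should('be.visible');
  });
});
```

Note that the same spec covers both the happy path and an error condition; keeping them together makes the journey’s expected behavior easy to read.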
Next, set up a continuous integration (CI) pipeline to run these automated tests regularly. Tools like Jenkins, GitLab CI, or CircleCI can be used to integrate automated testing into your development workflow. This ensures that any new code changes are automatically tested, helping to identify issues early in the development process.
Finally, regularly review and update your test cases and scripts to keep them relevant as the web app evolves. This includes adding new test cases for new features and retiring tests for deprecated functionality. By maintaining an up-to-date and comprehensive test suite, you can ensure ongoing usability and compatibility of the web app, effectively leveraging the power of automation testing.
Through these steps, you will not only understand how to do automation testing but also ensure that your web application remains user-friendly and accessible across various platforms and devices.
2) Gathering metrics
Gathering metrics involves analyzing how the web app is used by leveraging data from the marketing or analytics team. Typically, applications utilize tools like Google Analytics to track user actions, providing valuable insights into user behavior. By examining this data, you can identify common user journeys and behaviors, which are essential for building effective test cases.
Understanding user behavior through analytics helps you prioritize which test scenarios to automate first, ensuring maximum value in minimal time. Start by reviewing metrics such as page views, bounce rates, and conversion rates to pinpoint areas of the web app that are frequently accessed or where users encounter issues. This data-driven approach allows you to focus automation efforts on the critical user paths and functionalities that impact the overall user experience and business goals.
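To make this concrete, here is a toy TypeScript sketch of such data-driven prioritization. The metric fields and the weighting formula are illustrative assumptions, not a standard; in practice you would feed this from an analytics export rather than hard-coded values.

```typescript
// prioritize.ts -- a toy sketch of data-driven test prioritization.
interface PageMetric {
  path: string;
  pageViews: number;
  errorRate: number;   // fraction of sessions hitting an error, 0..1 (assumed field)
  conversion: boolean; // does this page sit on a revenue path?
}

// Score pages so the riskiest, most-trafficked ones are automated first.
function score(m: PageMetric): number {
  const trafficWeight = Math.log10(m.pageViews + 1); // dampen huge counts
  const riskWeight = 1 + m.errorRate * 10;           // errors raise urgency
  const revenueWeight = m.conversion ? 2 : 1;        // money paths first
  return trafficWeight * riskWeight * revenueWeight;
}

function prioritize(metrics: PageMetric[]): PageMetric[] {
  return [...metrics].sort((a, b) => score(b) - score(a));
}

const ranked = prioritize([
  { path: '/checkout', pageViews: 12000, errorRate: 0.04, conversion: true },
  { path: '/blog', pageViews: 50000, errorRate: 0.001, conversion: false },
  { path: '/search', pageViews: 30000, errorRate: 0.02, conversion: true },
]);
console.log(ranked.map((m) => m.path)); // pages to automate first
```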
To implement automation effectively, collaborate closely with the analytics team to gather relevant metrics and insights. Identify key performance indicators (KPIs) that align with usability and compatibility goals, such as load times, error rates, and user engagement metrics. By integrating these metrics into your automation testing strategy, you can validate the performance and functionality of the web app across different user scenarios and environments.
Regularly monitor and update your automation test suite based on new data and insights from ongoing analytics. This iterative process ensures that your tests remain aligned with user behavior trends and evolving business requirements. By leveraging analytics for automation testing, you not only streamline testing efforts but also enhance the overall quality and usability of your web application, driving continuous improvement and user satisfaction.
3) Key scenarios automation
Once you have conducted this research, start automating the core scenarios of the web app as test cases. For example, a typical user path in an e-commerce app is:
Homepage –> Search results –> Product details –> Customer login / Register –> Payment details –> Order confirmation
For us, it is important to check that this chain works well and that the user is able to place and pay for the order. At this stage, we don’t need to check the page functionality in detail.
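Here is a minimal sketch of that chain as a single Cypress test. Every route, selector, and piece of test data below is a hypothetical placeholder; the structure, one test walking the whole chain to a single final confirmation, is the point.

```typescript
// checkout.spec.ts -- an end-to-end sketch of the chain above:
// Homepage -> Search -> Product -> Login -> Payment -> Confirmation.
describe('Order placement (end-to-end chain)', () => {
  it('lets a user find a product, pay, and see a confirmation', () => {
    cy.visit('/');                                       // Homepage
    cy.get('input[name="q"]').type('coffee mug{enter}'); // Search results
    cy.contains('.search-result', 'Coffee Mug').click(); // Product details
    cy.get('button#add-to-cart').click();
    cy.get('a#checkout').click();

    cy.get('input[name="email"]').type('user@example.com'); // Customer login
    cy.get('input[name="password"]').type('s3cret');
    cy.get('button[type="submit"]').click();

    cy.get('input[name="card-number"]').type('4242424242424242'); // Payment
    cy.get('button#pay-now').click();

    // The only assertion that matters at this stage: the chain completed.
    cy.contains('Your order is confirmed').should('be.visible');
  });
});
```

Detailed per-page checks come later, in the feature-coverage stage.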
Key scenarios for automation testing involve verifying critical user actions such as placing and paying for orders, ensuring the smooth functioning of essential user journeys without delving into detailed page functionalities. This approach focuses on validating the end-to-end flow of key processes that directly impact user experience and business outcomes.
When prioritizing automation of these key scenarios, begin by identifying the primary paths that users take to place and complete orders. This includes navigating through product selection, adding items to the cart, entering payment details, and completing the transaction. By automating these workflows, you can systematically verify that the entire chain—from product selection to order confirmation—is robust and error-free.
To effectively automate these scenarios, select automation tools like Selenium, TestCafe, or Cypress that support scripting for web application interactions. Develop test scripts that simulate user actions across different scenarios, including normal flows, edge cases (e.g., incorrect payment details), and error handling (e.g., network timeouts). These scripts should cover various conditions to ensure comprehensive coverage of the order placement and payment processes.
Integrate these automated tests into your CI/CD pipeline to run them regularly, ideally after each code deployment or on a scheduled basis. This continuous testing approach helps detect issues early in the development lifecycle, reducing the risk of critical bugs impacting users post-release.
Additionally, monitor key metrics related to order placement and payment processes, such as transaction success rates and average transaction times. Use these metrics to gauge the effectiveness of your automated tests and identify areas for further optimization or refinement.
By focusing on automating key scenarios like order placement and payment, you not only ensure the reliability of essential user interactions but also optimize testing efforts to deliver maximum value efficiently. This strategic approach to automation testing aligns with business priorities while enhancing overall software quality and user satisfaction.
4) Increasing the feature coverage
Increasing feature coverage in automation testing involves expanding the smoke pack into a more comprehensive regression suite. Using the mind map created earlier, apply state transition and compatibility testing techniques to build detailed scenarios and test cases.
Start by identifying entry points into the web application, which could include landing pages, product details pages, or SEO-optimized pages. These entry points serve as critical starting points for users and are essential for testing various features and functionalities.
Once entry points are identified, focus on pinpointing the specific features that users interact with. This includes elements like drop-down boxes, search fields, user detail forms, and clickable links. Each of these features represents a potential test scenario that should be automated to ensure consistent performance across different user interactions and environments.
To implement this strategy effectively, leverage automation tools that support detailed scripting and interaction with web elements. Tools like Selenium WebDriver allow for precise control over user interactions and can simulate complex user scenarios across different browsers and devices.
Apply state transition testing techniques to verify how the application behaves as users navigate through different states or conditions. This includes testing transitions between pages, error handling during form submissions, and responses to user inputs.
As you expand your automation testing suite, integrate these new scenarios into your existing CI/CD pipeline. Regularly execute these tests to validate the stability and functionality of newly developed features and to ensure backward compatibility with existing functionalities.
Monitor test results closely and prioritize fixing any issues identified through automated testing. Use metrics such as test coverage, defect density, and test execution time to assess the effectiveness of your automation efforts and make informed decisions for further improvements.
By increasing feature coverage through automation testing and applying state transition and compatibility testing techniques, you enhance the reliability and usability of your web application while optimizing testing efforts. This systematic approach not only accelerates the release cycle but also strengthens overall software quality and user satisfaction.
The next actions are as follows:
- Recording the initial state of the feature in your test cases.
- Triggering the feature (some features reload the same page with different data; others transfer us to a different page). Here we check that the trigger works.
- Once the state of the application has changed, making an assertion to check the new feature state and recording it in the test cases.
Then, you can continue similarly on a different page, or go back to the initial state and interact with a different feature. Repeat these actions until you cover all the important features in your mind map; this is the basic principle of web app testing. A minimal sketch of this record-trigger-assert loop follows below.
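The sketch applies the three steps to one hypothetical feature, a sort drop-down on a search results page, in Cypress. All selectors and option labels are placeholders for entries from your own mind map.

```typescript
// feature-state.spec.ts -- the record / trigger / assert pattern from the
// steps above, applied to a hypothetical sort drop-down.
describe('Search results: sort drop-down (state transition)', () => {
  it('re-orders results when the user picks "Price: low to high"', () => {
    cy.visit('/search?q=mug');

    // 1) Record the initial state of the feature.
    cy.get('.result-price').first().invoke('text').as('firstPriceBefore');

    // 2) Trigger the feature (same page, different data).
    cy.get('select#sort').select('Price: low to high');

    // 3) Assert the new state of the application.
    cy.get('@firstPriceBefore').then((before) => {
      cy.get('.result-price')
        .first()
        .invoke('text')
        .should('not.equal', String(before));
    });
  });
});
```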
Test automation strategy for existing applications
We have covered the general principles of introducing testing, such as database testing or functional testing, into a legacy web app. In this part of the article, we will talk about test data and the concrete steps and tools for automating a web application.
Step 1 – Static analysis
Static analysis is an essential part of how to do automation testing, involving the automated examination of source code without executing the program. This critical process occurs early in the development stage, preceding database testing, to identify and rectify coding errors and weaknesses promptly.
Static code analysis tools, such as SonarQube or ESLint, analyze code syntax, structure, and semantics to detect issues like typos, potential security vulnerabilities, and adherence to coding standards. By leveraging these tools, developers can ensure code quality and consistency across the project, minimizing the risk of bugs and performance issues later in the development lifecycle.
During static analysis, the focus is on identifying common programming mistakes, such as uninitialized variables, unused variables or functions, and inconsistent formatting. These issues, if left unresolved, can lead to runtime errors or system failures when the application is deployed.
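For illustration, here is a small TypeScript snippet containing exactly these kinds of mistakes, with comments naming the ESLint rules that would typically flag them (the function itself is a made-up example):

```typescript
// lint-demo.ts -- mistakes that static analysis flags before any test runs.
export function applyDiscount(price: number, rate: number): number {
  const promoLabel = 'promo'; // no-unused-vars: declared but never read
  let total;                  // assigned below but its value is never used
  if (rate = 0.1) {           // no-cond-assign: '=' where '===' was meant
    total = price;
  }
  return price * (1 - rate);
}
```

None of these would necessarily crash a quick manual demo, which is precisely why catching them mechanically is worth doing before dynamic testing starts.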
Integrating static code analysis into the automation testing process enables early detection and resolution of coding errors, optimizing development efficiency and maintaining codebase health. By addressing issues at this stage, developers can prevent costly rework and ensure a solid foundation for subsequent testing phases.
To enhance the effectiveness of static analysis, establish coding guidelines and configure analysis tools to enforce these standards automatically. Regularly review and update these guidelines to align with best practices and industry standards, ensuring continuous improvement in code quality and maintainability.
By incorporating static analysis as a foundational step in automation testing, teams can foster a proactive approach to software development, fostering robust, reliable applications that meet performance and security requirements from the outset. This systematic approach not only enhances developer productivity but also contributes to delivering high-quality software products that meet user expectations.
Tools for static analysis include:
- Linters
A lint tool (linter) is an automated checker of your code for programming and stylistic errors. Linting is a basic stage of static code analysis.
A linter can make the code more readable:
– insert the necessary tabs, spaces, semicolons, and brackets
– build a clean nested structure of tags
– move inline styles from tags into a separate <style> block
– eliminate odd spaces and blank lines.
Advanced linters can not only format the code but also check its logic, for example, to find dependencies that occupy memory but are never used. Some of them can find potential memory leaks, circular dependencies, incomplete calls, and clumsy classes.
There are many linters depending on your programming language. For example: ESLint, PC-Lint, Pylint, JSLint etc.
- Formatters
A code formatter parses your code and re-prints it according to a set of formatting rules. It can:
– enforce a maximum line length
– make sure you don’t mix single and double quotes
– add trailing commas at the end of multi-line items
– fix other formatting issues.
A code formatter doesn’t touch the code’s functionality, only its formatting. A well-known example is Prettier.
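A quick before-and-after sketch of what this looks like in practice (the statements are invented; the “after” shape reflects Prettier’s default settings):

```typescript
// Before formatting -- one dense line with mixed quote styles:
//   const items=[{name:"mug",price:12},{name:'plate',price:9}];const names=items.map(i=>i.name)

// After `prettier --write` with defaults -- same behavior, tidy layout:
const items = [
  { name: "mug", price: 12 },
  { name: "plate", price: 9 },
];
const names = items.map((i) => i.name);
console.log(names); // ["mug", "plate"]
```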
- Type checkers
Type checkers are static tools that detect errors even in rarely executed code paths and enable conveniences like auto-completion in the editor. They track which data types are used where, and in compiled languages this information can also help produce faster, more memory-efficient code.
Static type checkers are able to verify that type constraints hold across all possible variants of program execution, which eliminates the need to repeat type checking during each program run.
Examples of type checkers: TypeScript, Flow.
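Here is a small TypeScript example of the kind of defect a type checker stops at compile time; the `Order` shape and the calls are hypothetical:

```typescript
// price.ts -- a defect a static type checker catches before any test runs.
interface Order {
  id: string;
  total: number;
}

function formatTotal(order: Order): string {
  return `$${order.total.toFixed(2)}`;
}

// Compile-time error if uncommented: Argument of type
// '{ id: string; total: string; }' is not assignable to parameter of type 'Order'.
// formatTotal({ id: 'A-1', total: '19.99' });

console.log(formatTotal({ id: 'A-1', total: 19.99 })); // OK: "$19.99"
```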
Step 2 – E2E testing
End-to-end (E2E) testing is a crucial aspect of automation testing, particularly for complex or legacy applications, where exhaustively testing every single feature may be impractical or impossible. End-to-end testing focuses on validating the entire flow of an application from start to finish, simulating real user scenarios and interactions.
In E2E testing, the emphasis is on testing critical user journeys and scenarios that span multiple layers of the application, including the user interface, backend services, and databases. This approach ensures that all integrated components function correctly together, identifying potential issues that may arise from interactions between different modules or systems.
To effectively conduct E2E testing, define key user workflows and scenarios that represent typical usage patterns of the application. This includes scenarios such as user registration, product purchase, checkout processes, and data submission forms. Automate these workflows using tools like Selenium WebDriver or Cypress, which can simulate user actions across different browsers and devices.
During E2E testing, verify that data flows correctly between frontend and backend components, ensuring data integrity and consistency throughout the application. Validate the accuracy of calculations, data validations, and interactions with external systems or APIs to ensure seamless operation under various conditions.
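One way to check that data flow, sketched below in Cypress, is to drive the frontend and then cross-check the backend through its API. The form fields and the /api/contacts endpoint are hypothetical placeholders for your application’s own.

```typescript
// data-flow.spec.ts -- a Cypress sketch that checks data integrity across
// layers: submit through the UI, then confirm the backend stored it.
describe('Contact form: frontend-to-backend data flow', () => {
  it('persists exactly what the user submitted', () => {
    const email = `e2e-${Date.now()}@example.com`; // unique per run

    cy.visit('/contact');
    cy.get('input[name="email"]').type(email);
    cy.get('textarea[name="message"]').type('Please call me back.');
    cy.get('button[type="submit"]').click();
    cy.contains('Thank you').should('be.visible');

    // Cross-check the UI action against the backend's view of the data.
    cy.request(`/api/contacts?email=${encodeURIComponent(email)}`).then((resp) => {
      expect(resp.status).to.equal(200);
      expect(resp.body[0].message).to.equal('Please call me back.');
    });
  });
});
```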
Integrate E2E tests into your CI/CD pipeline to automate testing as part of the deployment process. By running E2E tests regularly, you can detect regressions and integration issues early in the development cycle, facilitating faster feedback and resolution of issues before they impact users.
Monitor test results and metrics such as test coverage, test execution time, and test failure rates to assess the effectiveness of your E2E testing strategy. Use insights from these metrics to refine test scenarios, improve test coverage, and optimize testing efforts over time.
By prioritizing E2E testing in automation testing strategies, teams can mitigate risks associated with complex application interactions and dependencies. This systematic approach not only enhances software quality but also accelerates time-to-market by identifying and addressing issues early in the development lifecycle.
“Once the most important user path is tested, write test scenarios for the additional ones. But if you do not have such an opportunity, you have already done an important job by testing the main one.” (Sergey Almyashev, COO, ZappleTech Inc.)
Step 3 – Unit testing
Once the main user paths are thoroughly tested, the next critical phase in how to do automation testing involves unit testing. Unit tests are essential components of automated testing that focus on verifying the functionality of individual units or modules within an application. Unlike integration or end-to-end testing, which validate the application as a whole, unit tests isolate and validate specific parts of code in isolation.
The primary challenge in testing an existing web application lies in determining where to begin with unit testing, especially in complex and interconnected systems. A strategic approach is to start with a single pure function—a function that operates solely based on its input parameters and returns deterministic output without relying on external dependencies or state.
To initiate unit testing effectively, select a pure function within your application that encapsulates a clear and discrete functionality. Pure functions are ideal candidates for unit testing as they facilitate predictable and repeatable test outcomes, making it easier to establish baseline expectations and assert behavior.
Set up the necessary tools and frameworks for unit testing, such as JUnit, NUnit, or Mocha, depending on the programming language and environment of your application. Configure test environments and mock dependencies as needed to simulate various scenarios and edge cases that the function may encounter during execution.
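For instance, here is what a first unit test for a hypothetical pure function might look like, using Mocha with Node’s built-in assert module (the discount logic is an invented stand-in for code you would extract from the legacy app):

```typescript
// discount.test.ts -- a first unit test for a pure function (Mocha + assert).
import assert from 'node:assert';

// Pure: output depends only on inputs; no external state is touched.
export function applyDiscount(price: number, rate: number): number {
  if (rate < 0 || rate > 1) throw new RangeError('rate must be in [0, 1]');
  return Math.round(price * (1 - rate) * 100) / 100;
}

describe('applyDiscount', () => {
  it('reduces the price by the given rate', () => {
    assert.strictEqual(applyDiscount(100, 0.25), 75);
  });

  it('is deterministic: same inputs, same output', () => {
    assert.strictEqual(applyDiscount(19.99, 0.1), applyDiscount(19.99, 0.1));
  });

  it('rejects rates outside [0, 1]', () => {
    assert.throws(() => applyDiscount(100, 1.5), RangeError);
  });
});
```

Because the function is pure, the tests need no setup, mocks, or teardown, which is exactly why pure functions are the easiest entry point.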
Once the initial unit test for the selected function is implemented and validated, progressively extend testing to cover additional functions, methods, or classes within the application. Aim to achieve comprehensive test coverage across critical components that contribute to the overall functionality and performance of the application.
Integrating unit tests into your automation testing strategy offers several advantages. It helps detect and address defects early in the development lifecycle, improves code quality by enforcing modular design principles, and facilitates continuous integration and delivery practices. Unit tests also serve as documentation that outlines expected behaviors and functionality, aiding in code maintenance and refactoring efforts over time.
By prioritizing unit testing as a foundational aspect of automation testing, teams can enhance software reliability, streamline development processes, and deliver high-quality applications that meet performance and user experience expectations. This systematic approach not only ensures robustness in individual code units but also contributes to overall application stability and maintainability.
Step 4 – Integration testing
Integration testing is a pivotal stage in how to do automation testing, following the completion of unit testing and other preliminary phases. Integration testing involves verifying the interaction and data exchange between different modules or components of an application when they are integrated as a unified system.
Unlike unit testing, which focuses on testing individual units or functions in isolation, integration testing evaluates how these units work together as a cohesive unit. The primary goal is to detect defects, inconsistencies, or communication issues that may arise when integrating various components, ensuring seamless functionality across the entire application.
To commence integration testing effectively, leverage the testing environment and tools already configured during earlier phases, such as continuous integration (CI) pipelines and testing frameworks like Selenium, JUnit, or TestNG. These tools facilitate automated execution of integration tests, allowing you to simulate real-world interactions between integrated modules and verify their interoperability.
Focus on defining test scenarios that cover critical integration points and interactions between modules. Test cases should encompass scenarios where data flows between frontend and backend systems, API interactions, database operations, and communication between microservices, if applicable.
Implement both positive and negative test scenarios to validate expected behaviors and edge cases. Positive tests confirm that integrated components function correctly under normal conditions, while negative tests simulate unexpected inputs or conditions to identify potential vulnerabilities or failure points.
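A minimal sketch of this idea in TypeScript with Mocha: two hypothetical modules, a cart and a price catalog, are exercised together with no mocks, covering one positive and one negative scenario.

```typescript
// cart-integration.test.ts -- an integration sketch: the cart module is
// tested together with a real in-memory catalog rather than a mock.
import assert from 'node:assert';

class PriceCatalog {
  private prices = new Map<string, number>([['mug', 12], ['plate', 9]]);
  priceOf(sku: string): number {
    const p = this.prices.get(sku);
    if (p === undefined) throw new Error(`unknown sku: ${sku}`);
    return p;
  }
}

class Cart {
  private skus: string[] = [];
  constructor(private catalog: PriceCatalog) {}
  add(sku: string): void {
    this.catalog.priceOf(sku); // validates against the catalog on add
    this.skus.push(sku);
  }
  total(): number {
    return this.skus.reduce((sum, s) => sum + this.catalog.priceOf(s), 0);
  }
}

describe('Cart + PriceCatalog integration', () => {
  it('positive: totals prices across both modules', () => {
    const cart = new Cart(new PriceCatalog());
    cart.add('mug');
    cart.add('plate');
    assert.strictEqual(cart.total(), 21);
  });

  it('negative: rejects data the downstream module cannot handle', () => {
    const cart = new Cart(new PriceCatalog());
    assert.throws(() => cart.add('spoon'), /unknown sku/);
  });
});
```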
Integrate integration tests into your CI/CD pipeline to automate test execution and ensure that new code changes do not disrupt existing functionality or dependencies. Regularly run integration tests as part of the deployment process to detect integration issues early in the development lifecycle, minimizing the risk of regressions in production.
Monitor test results and metrics, such as integration test coverage, test success rates, and performance benchmarks, to assess the effectiveness of your integration testing strategy. Use insights from these metrics to refine test scenarios, improve test coverage, and optimize application performance and reliability.
By prioritizing integration testing as a critical step in automation testing, teams can validate the integrity and interoperability of their application components, ensuring robustness and reliability across integrated systems. This systematic approach not only enhances software quality but also supports agile development practices by enabling faster feedback and iteration cycles.
What are the main mistakes that can be made when providing automation testing on an existing web application?
If the team needs to conduct web application test automation for an existing solution, experts may encounter difficulties, primarily because they didn’t lead the project from scratch and lack access to the initial results of product analysis.
Issues can still arise even when technical documentation, test cases, algorithms, and scenarios are available. These issues are linked to the fact that testing methodologies differ significantly. However, the peculiarities of website automation testing are not solely limited to this.
Unclear or misguided concept
In certain cases, the project might lack a clear Product Vision, which poses obstacles to developers and QA professionals. Essentially, you’re tasked with launching web application testing without clearly understanding what and why needs to be verified. Working for the sake of work is counterproductive; the focus should be on achieving results that satisfy both the target audience and stakeholders.
To address this problem, it’s essential to work on the concept of the IT solution, establish its goals, and use them as a basis for initiating web application test automation.
Misunderstanding project priorities
“Find errors and bugs in the software because we’re not confident they’re absent” – this is a fairly common requirement from QA service clients. It lacks specificity, focus, priorities, values, and goals. This is a problem both in the early stages of product creation and in processes like maintenance and scalability.
To mitigate such issues, the proper solution would involve analyzing the software and independently defining test cases for web application test automation. Even if some bugs remain, you’ll be confident that critical issues for the target audience have been identified, localized, and addressed.
Incorrect tech stack
You might find yourself stuck when you join a project previously managed by another QA team. The reason is quite simple: the client insists on using the same toolkit as the previous team. This creates a dissonance. On the one hand, it’s a logical request, but on the other, it’s not necessarily correct. The effectiveness of the previous website automation testing approach is unknown.
In this situation, you have two options: either agree to work according to the established template or convince the client that their solution for web application test automation was ineffective.
What is not so simple at first glance in this kind of testing?
When you start working with a finished product, meaning you initiate website automation testing on an existing solution, certain challenges await the team: for instance, a lack of understanding of the concept the verification should follow, of the action algorithm, or of the overall QA approach on the project.
The issue lies in the fact that a previous team had already tailored the processes to its own needs. The effectiveness of those processes can vary, so when third-party experts from an automation testing company are hired, inheriting the old setup wholesale might not be the best approach. We recommend paying immediate attention to several aspects of web application test automation and making decisions on further project actions based on the results of their analysis.
Structure of web application test automation
From individual to collective: this is the typical pyramid when website automation testing is conducted in multiple stages. Cases are formed for small tests that gradually integrate into a single end-to-end scenario. Ideally, this concept is applied throughout the entire QA process.
Your task in a project with a finished product is to rebuild the testing pyramid and verify all its components. During web application test automation, you will uncover errors whose roots trace back to the initial development stages.
Algorithms for website automation testing
Chances are, you already have your own web application test automation algorithm. Ensure it aligns with the specific product and can effectively analyze it. Key note: use the project’s values as a foundation upon which you’ll test the IT solution.
This will enable you to set QA goals correctly and achieve maximum results.
Roadmap for web application test automation
Analyze the product and technical documentation before starting website automation testing, so you can devise a proper plan for verifying the IT solution.
“Web application test automation, especially in later stages (maintenance), is a rather complex process, considering the project’s scale and status.” (Mikhail Bodnarchuk, CDO, ZappleTech Inc.)
Therefore, you’ll need a roadmap to focus efforts on critical points, gradually shifting towards less important issues.
In-house vs. outsourced automation web application testing: making the right choice
We hope we don’t need to explain the cost of errors during website automation testing in the later stages. It’s quite significant, so it’s unlikely that the Product Owner (PO) can afford the risk of subpar QA.
The question lies in how to conduct the web application test automation process: using in-house resources or a dedicated QA team. Let’s compare the benefits of both approaches and select the optimal one.
Pros and cons of in-house web application test automation
Let’s assume you already have an established QA team within the company. They perform their tasks excellently, regularly test products, and so on. But do they have the experience required for this particular project?
Advantages of working with in-house professionals:
- Knowledge of the product and its specifics.
- High motivation and result-oriented focus.
- Understanding of the target audience and project values.
Disadvantages:
- Limited skill set.
- Costs of maintenance and technical support.
- Lack of experience.
A decent option for ongoing tasks or small projects within the context of a larger goal.
Pros and cons of outsourced web application test automation
Typically, outsourcing provides access to a wide range of diverse talents. You can find experts of any profile and even save on their hiring.
Advantages:
- Strong knowledge base.
- Multi-faceted experience.
- Relatively lower cost.
- Minimal additional expenses.
Disadvantages:
- Language barrier (possible).
- Somewhat challenging progress monitoring.
- Difference in activity hours.
When working with external experts, the disadvantages can usually be mitigated. Therefore, considering the financial and expertise aspects, working with outsourcing teams is an ideal option for both SMBs and corporations.
Tips for choosing a good contractor
You don’t need to sacrifice your time, budget, and nerves when choosing a contractor for your project. We’ve prepared a short guide to help you quickly find an expert performer with minimal expenses.
Setting priorities and budget
Regardless of how top-notch a QA service company is, don’t spend your entire budget hiring them. Considering the industry’s specifics, you’ll need additional services in the future. So, focus on current priorities and a provider with a reasonable price.
Trust us, it’s the best solution if you have a limited budget.
Finding experts and evaluating their experience
We recommend relying on platforms like G2, TrustPilot, etc., where clients post reviews about the work of specific teams. From there, you can learn whether the company works in your niche and how well it performs the assigned tasks.
However, this is just the initial stage; you must communicate with their representatives afterward.
Portfolio review and interview
Visit the team’s website and evaluate their portfolio. Check for case studies that match your niche or software type. If you’re satisfied with what and how the company is doing, arrange communication with managers, team leads, etc.
This is the only way to determine if a certain contractor suits your project.
Why it’s better to order this kind of service from ZappleTech
If you don’t want to waste your time and nerves, you can take advantage of the web test automation services from ZappleTech professionals.
For over 10 years, our team of experts has been executing QA projects for nearly all regions of the world, business industries, and client budgets. This is your chance to receive quality assistance with minimal expenses.
Need a preliminary consultation? Contact a company manager and discuss the collaboration terms!