Test automation is no longer a new concept. Most teams now have some form of automated tests, but for the vast majority of companies automation simply means Selenium WebDriver, with the focus on automating regression tests. The problem is that such monolithic automation is rigid, fragile, expensive and difficult to maintain.
We break automation into different layers: unit, functional, system, end-to-end and smoke. We also apply a high degree of abstraction and re-usability throughout.
We automate the full stack, from backend to frontend, infrastructure, performance and compatibility. And within each stack, we break tests down further, for example into DAO, service and controller layers when testing at the unit level.
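As a sketch of this layered split (the `UserService` and its DAO here are illustrative, not from any particular codebase), a unit test can target the service layer alone by substituting a mock for the DAO beneath it:

```python
from unittest.mock import Mock

class UserService:
    """Hypothetical service layer; it depends on a DAO it does not construct,
    which is what makes it testable in isolation."""
    def __init__(self, dao):
        self.dao = dao

    def display_name(self, user_id):
        user = self.dao.find_by_id(user_id)
        if user is None:
            raise KeyError(user_id)
        return f"{user['first']} {user['last']}"

def test_display_name_formats_first_and_last():
    # The DAO is mocked, so this unit test exercises the service layer only.
    # DAO tests would use an in-memory database; controller tests a stub service.
    dao = Mock()
    dao.find_by_id.return_value = {"first": "Ada", "last": "Lovelace"}
    assert UserService(dao).display_name(42) == "Ada Lovelace"
    dao.find_by_id.assert_called_once_with(42)

test_display_name_formats_first_and_last()
```

Each layer's tests then fail for exactly one reason, which keeps diagnosis fast.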
We implement automated tests such that each one is atomic and independent: no test relies on any other test passing or failing. Each test has its own test data and its own starting and finishing state. When a test fails, you know why it failed and can quickly rectify the problem.
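The pattern above can be sketched with a per-test fixture (the account table here is a made-up example): each test builds its own starting state and tears it down afterwards, so tests can run alone or in any order.

```python
import sqlite3
import unittest

class AccountRepositoryTest(unittest.TestCase):
    """Each test owns its data: a fresh in-memory database is created in
    setUp and destroyed in tearDown, so nothing leaks between tests."""

    def setUp(self):
        # Known starting state, rebuilt for every single test.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

    def tearDown(self):
        # Known finishing state: nothing carries over to the next test.
        self.db.close()

    def test_deposit_increases_balance(self):
        self.db.execute("INSERT INTO accounts VALUES ('alice', 100)")
        self.db.execute(
            "UPDATE accounts SET balance = balance + 50 WHERE name = 'alice'")
        (balance,) = self.db.execute(
            "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
        self.assertEqual(balance, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AccountRepositoryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because no test inherits state from a predecessor, a failure points directly at the behaviour under test rather than at an earlier test's leftovers.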
Infrastructure as code
With DevOps becoming standard practice, this area is normally outside the reach of the QA team. Most QA teams and professionals do not really understand, or get involved with, what DevOps teams do, leaving this part of the stack uncovered.
This is a fundamental issue, because all code is a source of defects and the impact of a defect in infrastructure as code can be significant. We engage with every team and ensure that infrastructure is covered by the same QA principles as any other artefact.
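One way to apply those principles is to assert properties of the infrastructure definitions themselves. The deployment descriptor and the rules below are invented for illustration; in practice the input would be your Terraform, Kubernetes or Ansible sources (or their rendered output) and the rules would come from your own policies.

```python
import json

# Hypothetical deployment descriptor standing in for real IaC output.
config_text = """
{
  "services": [
    {"name": "api", "port": 8080, "replicas": 2, "healthcheck": "/status"},
    {"name": "web", "port": 443,  "replicas": 3, "healthcheck": "/ping"}
  ]
}
"""

def validate(config):
    """Treat infrastructure code like application code: each rule here is a
    class of defect caught before deployment rather than in production."""
    errors = []
    for svc in config["services"]:
        if "healthcheck" not in svc:
            errors.append(f"{svc['name']}: missing healthcheck")
        if svc.get("replicas", 0) < 2:
            errors.append(f"{svc['name']}: no redundancy (replicas < 2)")
        if svc.get("port") == 22:
            errors.append(f"{svc['name']}: must not expose SSH")
    return errors

errors = validate(json.loads(config_text))
assert errors == [], errors
```

Checks like these run in the same pipeline as the application tests, so infrastructure changes get the same fail-fast feedback.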
Choice of technology
The choice of test automation technology and tools is a major factor in its success or failure. Here at Fullstack we apply various criteria to make the choice, but fundamentally we keep the programming language the same as the main development language. This is very important in ensuring automation is not just a task for the QA members but for the whole team. We guide development teams in implementing and maintaining robust automation by providing the framework, libraries, strategy, structure, test data and templates. Left unguided, developers will normally let their imagination run wild with automated tests, and you will soon discover a suite full of duplicates, with no re-usability or abstraction, focused on solving one-off problems.
This type of automation usually fails: tests become fragile and people spend more time fixing them than testing. To release on time, there is then a tendency to fall back on manual testing, which defeats the whole purpose of automation.
We embed automation into the build process, including local builds and continuous integration tools like Jenkins, GoCD, TeamCity or CircleCI. This requires a careful strategy to get the most value and the fastest feedback. For example, you want developers to get quick feedback during implementation, so you make unit tests part of every build while functional or integration tests run on demand. You also make it possible to run tests by tags, suites or groups, so that people can first check the areas they are working on. It is about testing early and failing fast.
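The tag-based selection mentioned above is built into real frameworks (pytest markers, TestNG groups, JUnit 5 tags); the minimal sketch below just shows the idea, with the test names and tags invented for illustration:

```python
# Minimal tag-based test selection: a decorator records each test's tags,
# and a runner executes only the tests matching the requested tag.
REGISTRY = []

def tag(*tags):
    def wrap(fn):
        fn.tags = set(tags)
        REGISTRY.append(fn)
        return fn
    return wrap

@tag("unit", "fast")
def test_price_rounding():
    assert round(19.999, 2) == 20.0

@tag("integration")
def test_checkout_flow():
    assert True  # placeholder for a slower cross-component test

def run(selected):
    """Run only the tests carrying the selected tag: e.g. 'unit' on every
    local build, 'integration' on demand or in the nightly CI pipeline."""
    ran = []
    for fn in REGISTRY:
        if selected in fn.tags:
            fn()
            ran.append(fn.__name__)
    return ran

print(run("unit"))  # → ['test_price_rounding']
```

In CI the selected tag simply becomes a pipeline parameter, so fast suites gate every commit and slow suites run on their own schedule.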
Abstraction & re-usability
Here at Fullstack QA we build automated tests with a high level of abstraction and re-usability. Concepts like page object models, error wrappers, server response handlers, JSON/XML extraction, test data, and database or file operations are all standardized and modularized, so team members can focus on implementing robust automated tests rather than fixing plumbing issues.
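The page object model is the clearest example of this abstraction. The sketch below uses a fake driver so it runs anywhere; a real page object would receive an actual Selenium WebDriver instance, and the locators and page behaviour here are invented:

```python
class FakeDriver:
    """Stand-in for a Selenium WebDriver, just enough for the sketch to run."""
    def __init__(self):
        self.fields = {}
        self.url = None

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # Pretend the login succeeds only for the known password.
        if locator == "login-button" and self.fields.get("password") == "s3cret":
            self.url = "/dashboard"

class LoginPage:
    """Page object: tests call intent-level methods and never touch locators.
    If the page markup changes, only this class needs updating."""
    USER, PASSWORD, SUBMIT = "username", "password", "login-button"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self.driver.url

driver = FakeDriver()
assert LoginPage(driver).login("ada", "s3cret") == "/dashboard"
```

Because every test goes through `LoginPage.login`, a changed locator is a one-line fix instead of a hunt through dozens of tests.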
In-sprint automation
Automation as an afterthought is normally the first sign that your QA has failed to adopt agile testing. True agile teams include implementing automated tests in their definition of done, because they want the benefit of automation. Quite often you find defects in the system while writing automated tests, and those defects require fixing. Implementing automated tests is therefore another testing activity that helps reduce defects, and one you certainly want to complete before declaring a feature done.
We engage with the scrum team, take an active part in story grooming and sprint planning to determine upfront the requirements for automated tests, and ensure they are included in the list of subtasks.