Recently, there has been a trend towards abandoning manual testing in favor of total automation. Here we discuss when that is possible and when it is reasonable. We also analyze the pros and cons of both types of testing and their implementation from the customer’s point of view.
Automated testing – obvious advantages and less obvious disadvantages
To begin with, load testing and stress testing are nearly always automated. Whether it is JMeter, LoadRunner, or custom scripts and applications written to test a specific product, these are all automated tests. As a rule, it is practically impossible to generate the required backend load manually, even with a swarm of engineers.
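As a rough illustration, here is a minimal load script using the open-source Locust tool; the /api/status endpoint, the host, and the timings are assumptions made for the sketch, not details of any project discussed here:

```python
from locust import HttpUser, task, between

class BackendUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task
    def get_status(self):
        # Hypothetical endpoint; replace with the service under test.
        self.client.get("/api/status")
```

A single machine can then simulate, say, a thousand concurrent users with `locust -f loadtest.py --headless -u 1000 -r 100 --host https://example.com`, a load no team of engineers could produce by hand.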
Once written, automated tests run with the same speed and quality every time, until the behavior of the system under test changes. This makes autotests all but indispensable for regression testing, whose scope grows rapidly in dynamically developing products.
One more field where scripts are preferable is testing that requires precise measurements: checking the specific color of elements, verifying element positions with one-pixel accuracy, timing an animation or another event in milliseconds, and so on. Though some of this can be done manually, a script will do it faster and more precisely.
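A minimal sketch of such pixel- and color-accurate checks with Selenium; the URL, the #logo locator, and the expected values are all hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    logo = driver.find_element(By.CSS_SELECTOR, "#logo")

    # Element position verified with one-pixel accuracy.
    assert logo.location == {"x": 16, "y": 8}

    # Exact computed color of the element.
    assert logo.value_of_css_property("color") == "rgba(0, 0, 0, 1)"
finally:
    driver.quit()
```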
Despite these advantages, autotests have drawbacks. They require expensive specialists to write and maintain and take longer to develop. Moreover, if the product changes, the regression tests must be updated, which can take a lot of time. A small flaw in the product architecture (e.g., page elements without unambiguous locators) can hinder the development of autotests. A small flaw in the architecture of the tests themselves (e.g., sequences or locators hardcoded in place instead of factored into functions or parameters) can require a lot of effort to update the tests after the product functionality changes. This means the product should be architected for automated testing from the start, which adds development costs.
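To make the architectural point concrete, here is a sketch of locators factored into one place, a simplified Page Object, so that a UI change is fixed on one line rather than in every test; the page and its locators are hypothetical:

```python
from selenium.webdriver.common.by import By

class LoginPage:
    """Simplified Page Object: every locator lives in one place, so a
    UI change is corrected here once instead of in every test."""

    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# A test now reads as a scenario and survives locator changes:
#   LoginPage(driver).log_in("alice", "secret")
```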
Automated tests sometimes yield incorrect results for the very reason that the testing tool runs on the equipment under test. For example, the tool can consume processor time and memory, slowing down the application or service under test, and can conflict with it when accessing files, databases, and hardware. This sometimes causes tests to fail in places where everything works normally under ordinary conditions. Sometimes the problem comes out of left field: the running testware consumes enough system resources that asynchronous code happens to execute correctly, while on an unloaded system the timing shifts and errors appear. Thus an autotest can return a positive result where manual testing would find the problem right away.
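A toy illustration of this timing sensitivity (not taken from any real project): an unsynchronized counter whose failure depends entirely on how the scheduler interleaves the threads, something system load can shift either way:

```python
import threading

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        # Unsynchronized read-modify-write: a race condition. Whether
        # updates are lost depends on thread interleaving, which varies
        # with system load; a loaded test host may therefore hide the bug.
        value = counter
        counter = value + 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; a smaller number means updates were lost in the race.
print(counter)
```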
Another interesting example of a test case that cannot easily be automated: the user of the security control system under test is shown a modal view to interact with, but no other application can see this view. This is intentional, so that no other application can interfere with the security system or hide a message before the user sees it.
One more evident limitation of autotests is scenarios requiring physical actions by the test engineer. For example, it is sometimes necessary to toggle switch connectors, power equipment on or off, physically move the equipment, change the state of its temperature or humidity sensors, etc. What is easy to do manually can be prohibitively costly to automate: it may require additional relays and a control system for them, servo drives for repositioning, and complex solutions for controlling the temperature and humidity of the environment. Such investments pay off only with constant use over long periods, which suits an equipment manufacturing plant far better than a company developing software for it.
Finally, an inevitable and grave disadvantage of autotests is that they find only the problems their author foresaw. During manual testing, the engineer always takes in the whole environment of the test scenario and can notice broken layout, problems with screen transitions, and errors elsewhere in the application. For most types of software, it is currently practically impossible to create automated tests that foresee all such errors. This gives rise to situations where an extensive, high-quality test suite run on a regular basis creates a false sense of security while overlooking defects that a “manual” test engineer would detect at once.
Strengths and weaknesses of manual testing
Manual testing is almost always cheaper for short-term projects (lasting less than a year). “Manual” testers are usually cheaper than “automation” testers, and the low entry barrier to the profession makes finding them much simpler. Also, an experienced engineer can start exploratory testing right after delivery of the product, with only a minimal set of instructions and documentation; preliminary results and even some defects can sometimes be obtained on the first day of work, which is practically impossible with automated testing.
During manual testing, the engineer looks not only at what is described in the test scenarios but also checks additional use cases missing from the test plan, drawing on intuition and experience. As a result, manual testing performs more checks than are written in the test cases, and the range of detected errors is much wider than an automated script could discover.
Sometimes the test result cannot be strictly defined. For example, without an altitude test chamber, it is useless to write a test case requiring a pressure sensor to read exactly 746 mm Hg, as the real atmospheric pressure constantly fluctuates. At most, the test can require that the sensor reading agree with an independent certified pressure gauge to a specified accuracy.
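Such a tolerance-based check is easy to express once a reference instrument is available; a minimal pytest sketch, where the readings and the 1 mm Hg tolerance are illustrative stand-ins:

```python
import pytest

TOLERANCE_MM_HG = 1.0  # accuracy required by the specification (assumed)

def read_sensor_mm_hg():
    return 746.3  # stand-in for the API of the device under test

def read_reference_gauge_mm_hg():
    return 746.0  # stand-in for the independent certified gauge

def test_pressure_sensor_matches_reference():
    # No fixed expected value: atmospheric pressure fluctuates, so the
    # sensor is compared against a reference within a tolerance.
    assert read_sensor_mm_hg() == pytest.approx(
        read_reference_gauge_mm_hg(), abs=TOLERANCE_MM_HG
    )
```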
Finally, only manual testing is possible for usability checks and for reviewing documentation such as user guides.
The drawbacks of manual testing are quite evident and understandable. Manual testing is much slower than automated testing: what an automated script does in an hour can sometimes take two weeks of manual work, though in other cases the difference is not as large. A well-written script does not make mistakes and does not miss defects out of day-to-day monotony, the way an engineer can. And it is almost impossible to manually perform test scenarios requiring a very fast response or extremely precise timing.
Summing up
Based on the above, the boundaries of the two approaches can be drawn clearly. Automation is needed for load and stress testing, for the ever-growing variety of regression tests, and for huge volumes of near-identical tests. It requires expensive engineers, considerable time before the first results and, often, changes to the product itself, increasing the scope of work for the development team. Moreover, automated tests impose specific requirements on the product architecture and demand a lot of attention when test scenarios are written.
Manual testing is needed for checking usability, documentation, cases with indeterminate expected results, exploratory testing, etc. It is cheaper and can yield results right after the work starts, and maintaining manual tests after product functionality changes is much simpler and faster. On the other hand, manual tests take more time to execute and carry the risk of the human factor: people get tired and distracted.
Automated tests are better at tasks of the kind “verify that the product complies with the requirements,” while manual tests handle “find the problems and bottlenecks” tasks more efficiently. So the two approaches should never be used to the exclusion of one another; they should supplement each other and be applied where appropriate. Even if regression tests for some part of the functionality always run automatically, it is worth verifying that part manually from time to time.
Vyacheslav Vanyulin, CEO of Auriga Inc., shared his opinion on manual testing in the healthcare industry:
“Although I’m personally an advocate of automation, manual testing plays an important, even critical, role in some tasks. When testing usability or documentation, handling tasks with indeterminate requirements or, say, doing exploratory testing, one cannot do without manual tests. Moreover, as a rule, manual testers are cheaper and can deliver results right after the work starts. And by employing highly qualified specialists, you get more than just manual testing. But how can we tell a professional from one ‘who has never seen combat’?
A test engineer cannot do without three key qualities:
- being a confident business user who can verify the work of the main functionality;
- understanding of internal product architecture and interdependencies;
- understanding that after reading the release notes one should go deep into the details regarding the latest changes, including interdependencies.
This applies to any more-or-less serious project. However, when we talk about testing medical equipment that will soon stand in an intensive care unit sustaining a patient’s life, test engineers with these traits alone are not enough. At Auriga, we use the term Intelligent Testing. This approach presupposes that, within the project scope, engineers are trained in the specifics of the hardware and the industry standards; they should know the dashboard ecosystem, the different kinds of sounds, the dependencies, the critical value indices, etc. Usually they also act as qualified business users (in this case, medical personnel), because verifying the equipment requires understanding the product’s main operating scenarios. A well-balanced team is one of the most important success factors for the whole project. We at Auriga are proud of our expertise in building and managing such teams, relieving our customers of this burden.”
How does it all work in practice?
The situation on a project for one of Auriga’s major customers offers a good illustration of why manual tests remain useful even on a product that has been under development for about twenty years.
The first and major reason, mentioned by everyone at once, is legacy software and tests. Initially the project was tested only manually, so a huge pool of regression tests accumulated, and the customer was in no hurry to swap manual testing for automation.
Nonetheless, on closer inspection it turns out that the project already has automation, covering a substantial share of the tests previously run manually. Comparing the remaining manual tests with the automated ones reveals the main difference. The autotests differ only in data, reusing the same script and the same code fragments: a thousand test cases share just a few lines of code, with the rest being configuration parameters; another thousand adds one more code fragment and, again, configuration. The remaining manual tests, by contrast, are each a unique set of operations and checks. Everything that was commercially effective and reliable to automate (which also spares the test engineers monotonous work) has already been automated.
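This data-driven pattern is easy to picture; a minimal pytest sketch in which the cases and the process() function are illustrative stand-ins for the project’s configuration files and product interface:

```python
import pytest

# In the real project, thousands of such entries live in configuration
# files; three inline cases stand in for them here.
CASES = [
    {"input": "10", "expected": 10},
    {"input": "0x1f", "expected": 31},
    {"input": "0b101", "expected": 5},
]

def process(text):
    # Stand-in for the product functionality under test.
    return int(text, 0)

@pytest.mark.parametrize("case", CASES, ids=lambda c: c["input"])
def test_regression_case(case):
    # One small code fragment covers every case; the cases differ
    # only in data, as in the project described above.
    assert process(case["input"]) == case["expected"]
```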
Among the remaining manual tests there is a package covering physical interaction with the equipment: connecting and disconnecting units, placing the equipment into the temperature test chamber, etc. Even though there are not many such tests, they must be performed manually, as automating them would not be cost-efficient given the required investment in test equipment.
Besides, the processing capacity of the equipment under test is rather small and mostly consumed by the software being tested, so the autotests run on a lightweight framework developed specially within the project scope. Using more powerful, feature-rich tools (e.g., with image recognition) could push the load on the hardware to critical levels and produce invalid test results; this, too, restricts the possibilities for test automation. Of course, automation with external equipment that physically presses buttons and recognizes images on the screen is possible, but the cost of developing, implementing, and supporting such autotests would be unreasonably high.
To supplement the test plan, engineers always perform ad-hoc testing within every test cycle. This is a very good tradition, revealing defects no one would ever expect.
Notably, in recent years most defects found in the project have come not from failures of specific test cases but from ad-hoc testing or from observing the system beyond the planned scenarios: for example, while tests run on one program unit, a short-lived failure shows up in another. It is practically impossible to foresee all such checks when developing automated tests, which is why manual testing plays an important role in the development of a high-quality software product.
Finally, it is up to the test team and its manager to find the right balance between manual and automated testing in each particular situation, taking into account the technical specifics of the project.