OK, let’s talk about the current fad: some teams think that once their automated tests reach 100% coverage, they can rest on their laurels. Here is why automated testing cannot solve all your QA needs.
- Robots test your code, but your users aren’t robots. If you are developing software designed to be used by humans, you want to make sure it looks good and works reliably.
- Automated testing can take more time than manual testing. Sounds ridiculous, but if your interfaces change often or your project lifecycle is relatively short, you may find that developing and maintaining your automated test code wastes more time than testing the same things manually would.
- Exploratory testing cannot be simulated. Let’s face it: our users usually don’t read manuals or follow the formal workflows we designed the software around. They intuitively explore the software and learn to use it that way. We cannot predict and document all user workflows, so we have to do exploratory testing manually.
- False alarms and false positives. Automated tests must be well tested themselves to make sure they do what we intend. They also need to be reviewed periodically, especially if you start getting too few or too many alarms.
- If you use it, you know it. Testers who test software manually know it better (workflows, UI, workarounds, features, etc.). Remember how many times your devs have asked you how the application they are developing actually works? That’s because they look at the code most of the time, while you look at the user interface.
- 100% coverage is a myth. We may define tons of requirements for our app and even try to cover all of them, but there is always a risk that we missed something, or that new requirements appear after the release. While this applies to both manual and automated testing, bear in mind that the first 30% of coverage is easy to automate, the next 30% is relatively hard, and I suppose 10–40% of requirements cannot be automated at all, for various reasons.
- Automated testing frameworks cannot predict bugs. Nor can they consult you about possible bottlenecks in the architecture you are about to implement. Some bugs share a common cause but appear in different places; others are hard to reproduce but easy to explain. A human can provide deep analysis based on practice and experience.
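The "false alarms" point is worth making concrete. Here is a minimal Python sketch (all names are invented for illustration, not taken from any real project): a test that asserts on wall-clock timing can fail on a loaded CI machine even when the code under test is fine. Tests like this are exactly what periodic review should catch.

```python
import time


def save_document() -> bool:
    """Stand-in for the operation under test (hypothetical)."""
    time.sleep(0.01)  # pretend to do some I/O
    return True


def test_save_is_fast():
    start = time.monotonic()
    ok = save_document()
    elapsed = time.monotonic() - start
    assert ok
    # Brittle assertion: on a slow or heavily loaded runner this can
    # fail even though save_document() works correctly -- a false alarm.
    assert elapsed < 0.5
```

The fix is usually to assert on behavior (the document was saved) rather than on timing, and to move performance checks into a dedicated, tolerant benchmark suite.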
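The coverage myth can also be shown in a few lines of Python (function and test names are invented for illustration). A suite can execute every line of a function, so a coverage tool happily reports 100%, while a whole class of inputs remains untested; this is the same shape as the font-size crash described below.

```python
def font_size(raw: str) -> int:
    """Parse the font-size field; fall back to 11 on empty input."""
    if raw == "":
        return 11
    return int(raw)  # raises ValueError on non-numeric input, e.g. "abc"


# These two tests execute every line above, so line coverage is 100%...
def test_empty_defaults():
    assert font_size("") == 11


def test_numeric():
    assert font_size("12") == 12


# ...yet nothing exercises non-numeric input: font_size("abc") still
# raises an unhandled ValueError, and the coverage report says all is well.
```

In other words, 100% line coverage tells you which code ran, not which behaviors were checked.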
In conclusion: although there are far fewer open positions today that require manual testing skills only, and automated test development is becoming mandatory knowledge, we still need manual testing in our SDLC, and we will need it as long as people continue to use computers. Automated testing may reduce the amount of routine in our day-to-day duties, but we should still keep a human eye on what we deliver to our customers, even when our auto-tests report a 100% pass rate.
P.S. The picture shows an issue found in MS Office 2010: if you type text instead of the expected number into the font size field and then switch to any other field, the app crashes and you lose your work. This bug has been fixed in later versions of MS Office.