There’s a constant stream of articles on automated testing, but the topic of manual testing mostly gets ignored. So in this article, I’ll outline my personal take on how I incorporate manual UI testing into my day-to-day work as a professional frontend developer.
We can all agree that a comprehensive automated test suite is essential for building confidence in a product, allowing us to ship fast without requiring extensive manual testing. However, this confidence can only be as good as the quality of our tests.
Let’s face it: Most user-facing apps have complex business logic and fairly complicated UIs with lots of state management. Writing tests that cover all of these apps’ expected use cases can easily become time-consuming. Understandably, we don’t have all the time in the world when building our products, and we need to balance perfection with shipping to end consumers in a reasonable amount of time. This means it’s often neither practical nor feasible to cover the entire app’s UI with integration/acceptance tests.
Another area where automated tests have limitations is the user experience itself, such as whether animations feel smooth. They also don’t help with discovering problems in unanticipated (we can even say “creative”) user interactions with our apps. That said, it’s a good idea to have repeatable automated metrics that quantify at least parts of the general user experience. An example of this could be performance metrics, such as statistics for dropped frames.
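To make the dropped-frames idea concrete, here’s a minimal, framework-free sketch of such a repeatable metric: the share of frames that missed the 60 Hz frame budget. The class and method names are illustrative, and on a real Android app the frame durations would come from something like the JankStats library or `Window.OnFrameMetricsAvailableListener` rather than a hardcoded list.

```java
import java.util.List;

// Sketch of a repeatable "smoothness" metric: the fraction of frames
// whose render time exceeded the 60 Hz frame budget. All names and the
// sample data are illustrative, not taken from any real tooling.
public class FrameStats {
    static final double FRAME_BUDGET_MS = 1000.0 / 60.0; // ~16.67 ms at 60 Hz

    // Returns the fraction of frames that took longer than the budget.
    static double droppedFrameRatio(List<Double> frameDurationsMs) {
        if (frameDurationsMs.isEmpty()) return 0.0;
        long dropped = frameDurationsMs.stream()
                .filter(d -> d > FRAME_BUDGET_MS)
                .count();
        return (double) dropped / frameDurationsMs.size();
    }

    public static void main(String[] args) {
        // Two of these five frames (33 ms and 40 ms) miss the budget.
        List<Double> frames = List.of(8.0, 12.0, 33.0, 16.0, 40.0);
        System.out.printf("dropped frame ratio: %.2f%n",
                droppedFrameRatio(frames));
    }
}
```

Tracking a number like this across builds turns “the app feels janky” into a regression you can actually spot automatically.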
After our apps gain a certain complexity, full coverage with automated tests is almost certainly not feasible, especially in the UI department. That’s no reason to write fewer (or worse) automated tests, but it does mean we shouldn’t get rid of the occasional manual testing; we just need to get pragmatic about it. Manual testing should be incrementally replaced with automated test cases once the missing areas of test coverage are identified.
The first line of defense when you employ manual tests should be your actual development process. Initially, you’ll mainly want to test the very thing you’re working on. Once you’re done with testing the obvious use cases, try to identify the not-so-obvious ones:
Distance yourself from the implementation details and try to get into your user’s mindset. What helps me personally is to perform the more in-depth manual testing only after some time has passed since the initial implementation: this can be a night (sleep on it), a weekend, or even longer. Experiment with what works best for you.
Don’t focus only on functional issues. User experience is arguably as important as a bug-free app, and feature implementation is the right time to steer your design approach in the proper direction, as changing it later can be very expensive.
Be creative in breaking your apps in unexpected ways. Whether you’re building a mobile or a web app, try to test it on various devices to catch environment- and device-specific issues early.
I recommend identifying which use cases aren’t covered by your automated tests and writing these missing tests after you’re done with the manual testing. During manual testing, I usually keep detailed notes that are easy to convert to concrete TODOs for the missing tests.
Finally, if your manual testing setup is time-consuming — for example, when you need to set up the testing data or prepare your environment on multiple devices — try to not only focus on the task you’re currently working on, but also keep an eye out for other possible issues you’ll encounter while testing your app. Don’t fix them right away, though; make a note and decide on actionable steps (create an issue, schedule the fix, etc.) only after you’re done with the testing run. Batching issues like this keeps each run focused, and it lets you distribute the manual QA effort across your whole team, so your app gets tested in various environments by people with vastly different mindsets.
An important part of any successful developer’s workflow is the review. Reviews usually focus mainly on code quality itself and skip the manual testing part. The reasons vary; in the case of native apps (Android, iOS, or even desktop), the problem can be that they’re fairly cumbersome to build and deploy for testing.
Manual testing should be a mandatory part of your reviews, and you should invest in making these tests as hassle-free as possible for your reviewers. For example, it’s usually a great idea to build and deploy your pull requests to make manual testing faster for your reviewers. In our experience, doing just that let the review process catch issues early, so they could be resolved before the change ever reached QA.
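As one way to set this up, a CI job can build every pull request and attach an installable artifact for reviewers. The following is a rough sketch assuming GitHub Actions and an Android Gradle project; the workflow, module paths, and names are hypothetical, not a description of any particular team’s actual setup:

```yaml
# Hypothetical CI workflow: build a debug APK for every pull request and
# attach it as a downloadable artifact, so reviewers can install and
# manually test the change without building it themselves.
name: pr-build
on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: "17"
      - name: Build debug APK
        run: ./gradlew assembleDebug
      - name: Upload APK for reviewers
        uses: actions/upload-artifact@v4
        with:
          name: pr-debug-apk
          path: app/build/outputs/apk/debug/app-debug.apk
```

Lowering the cost of “just install the build and poke at it” is often what turns manual review testing from a nice idea into something reviewers actually do.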
Manual testing during review is mostly similar to the manual testing you do on your own code during development. The biggest advantage is that reviewers don’t need to distance themselves from the implementation, since in most cases they didn’t work on it. This is also why, in my reviews, I manually test features before reading any code: I don’t want to cloud my judgement with preconceptions about the inner workings of the code being tested.
The feature is now reviewed, reviewers are happy, and we can move on to the next brand-new shiny task… Not so fast! We have a whole team of highly motivated people who enjoy destroying our apps: the QA team, which is responsible for automated and manual testing before each release.
Here at PSPDFKit, our QA team makes sure that all new features work without major issues even before they’re merged with the upstream production code. This makes it easy for us to identify most regressions as soon as they’re introduced.
The main part of our QA team’s responsibilities is Human Acceptance Testing (HAT). Whenever we want to release, we freeze development and prepare a release candidate for the QA team to test in advance (usually up to a week before the actual release).
The HAT process consists of testing the actual new features and bug fixes of a given release, as well as performing a general test for the most important areas of the app. Dev teams are responsible for writing a concise document with the HAT testing instructions for a given release. The QA team maintains a set of use cases that are tested to make sure no major regressions are introduced in the rest of the app.
Besides assuring the quality of our products, another important asset the QA team brings to the table is its objectivity and open-mindedness. QA has no preexisting in-depth knowledge of the application code, and it isn’t involved in implementation details or in the UX and UI decisions. This fresh perspective lets us make the product better for the end users.
I’ve always considered the quality of the products I’m involved in a personal matter, and I take pride not only in the code I produce but in the product as a whole. Since I’m predominantly an Android developer, you can always find apps I’m involved with on my main phone, which I use as a daily driver, and I make a habit of using them on a regular basis.
Working at PSPDFKit makes this habit easier since we showcase our comprehensive PDF framework as a complete app for working with PDF documents: PDF Viewer for iPad, iPhone, Mac, and Android. I use PDF Viewer almost daily for my reading needs, and a lot of the team does too. We mostly run on internal nightly builds, which lets us identify bugs before shipping to our customers. This also brings a lot of internal insight and ideas that help improve our main framework products with what we learn.
If you’re working on a frontend application, I strongly recommend adopting the same strategy on your team. Is there a better way to care for (and manually test) your product than by using it yourself?
I believe we should all learn to test our apps effectively. Stop looking at manual testing as something bad; it’s merely another tool that, when put to good use, can significantly improve product quality. Manual testing is a skill that requires experience, especially because it takes a great deal of empathy to think like your users would. I hope that over time, you’ll develop a testing approach that works for you.
So give it a go and practice testing some apps today! Of course, please report any found bugs to their respective owners. 😉