I’ve heard engineers scoff at manual testing. Derogatory things like “anyone can push buttons on a screen” get thrown around when talking about manual testing and testers. But manual testing is a hugely important part of shipping great software. Done correctly, it uncovers both software defects and UX irregularities that would annoy the sh*t out of your users. It’s a huge service to developers, designers, and PMs alike.
Start with the user story. Dissect the sh*t out of the user story the PM gives you for a feature. Does the story have enough acceptance criteria to understand how a user is meant to use the feature? Does it make sense, or is it a convoluted jumble of words? Are the acceptance criteria something that can be validated and verified? If not, it’s time to set up a meeting with the PM, go over the story, and make it better. Together.
Create tangible expectations. The core of every test is an actual result (what sh*t actually happens when you test) and an expected result (what you want your software to do). An expectation like ‘I expect the button to turn gray when clicked’ is inarguable; the button is either gray or it isn’t (ok, if you’re using multiple shades of gray, specify those specific shades, smartass). An expectation like ‘I expect the site to load smoothly,’ however, isn’t objective. Depending on the expectations of whoever is testing, the same behavior could either pass or fail. Defining metrics such as a maximum acceptable lag or load time for the intangibles of a UI helps make sure performance standards are actually met.
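To make that concrete, here’s a minimal sketch in TypeScript using Playwright. The URL, button name, exact shade, and 3-second budget are all assumptions for illustration, not anyone’s real spec:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical page and selectors -- swap in your own app's details.
test('button turns gray when clicked', async ({ page }) => {
  await page.goto('https://example.com/form');
  const submit = page.getByRole('button', { name: 'Submit' });
  await submit.click();
  // Inarguable: the button is this exact shade of gray, or the test fails.
  await expect(submit).toHaveCSS('background-color', 'rgb(128, 128, 128)');
});

// "Loads smoothly" pinned down to a number (3 seconds is an assumed budget).
test('page loads within the performance budget', async ({ page }) => {
  const start = Date.now();
  await page.goto('https://example.com/form');
  expect(Date.now() - start).toBeLessThan(3000);
});
```

Either test pins the expectation to something measurable: a specific color value or a specific number of milliseconds, so two different testers can’t disagree about whether it passed.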
Keep the test cases simple (and modifiable). For example, if you’re testing the blogging feature of a site like this one, you could start by testing the ability to stylize text. The test section could be called ‘Change styling of normal paragraph text’. You can then have a test case that checks whether selecting bold turns the desired text bold, and that it can be returned to normal. Then the same for italic. Then the same for strikethrough. So within the section you’d have three modular test cases (sketched below). This keeps you organized and breaks the tests down enough for automation. And by keeping the cases truly separate, it’s easier to find and modify the right test case when something changes in the product and you need to update your tests.
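A sketch of that section, assuming a hypothetical editor route, a test ID, toolbar button names, and that bold text is wrapped in a `strong` tag:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical editor page -- '/editor' is relative to a configured baseURL.
test.describe('Change styling of normal paragraph text', () => {
  test('bold can be applied and removed', async ({ page }) => {
    await page.goto('/editor');
    const paragraph = page.getByTestId('paragraph');
    await paragraph.selectText();
    await page.getByRole('button', { name: 'Bold' }).click();
    await expect(paragraph.locator('strong')).toBeVisible();
    await page.getByRole('button', { name: 'Bold' }).click();
    await expect(paragraph.locator('strong')).toHaveCount(0);
  });

  // 'italic' and 'strikethrough' each get their own identically shaped case,
  // so a change to one style only ever touches one test.
});
```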
Define different types of test runs, and run them when appropriate. If you have a massive suite of 300 test cases, you probably don’t want to run all of them every day. Instead, define runs on different cadences, such as daily, weekly, or before releases, and include different kinds of tests in each. Perhaps your daily run covers only your most critical features. You can also define runs around high-risk areas whenever there’s active development in that area.
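One lightweight way to carve up a suite (a sketch using Playwright’s `--grep` filter; the tag names are made up) is to tag cases in their titles and filter per run:

```typescript
import { test } from '@playwright/test';

// Tag the critical path in the title; the daily run filters on the tag.
test('checkout completes end to end @critical', async ({ page }) => {
  // ... the actual steps live here
});

// Untagged cases only run in the full weekly or pre-release suite.
test('promo banner rotates on the homepage', async ({ page }) => {
  // ...
});
```

The daily run becomes `npx playwright test --grep @critical`, while the weekly or pre-release run drops the filter and executes everything.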
Exploratory testing is your chance to be a bad, bad normal user. DO NOT BYPASS THIS STEP. Most users won’t be the perfect user you want them to be. They’re going to impatiently click/press/hold buttons on your UI. They’re going to put freaking emojis into your text fields. They’re going to visit your sites on iPhones, Alexas, watches, tablets, Notes XXL, flip phone Motorolas, big-ass monitors, TVs, and more. They’re going to press forward six times and press back once. They’re going to remove their headphones while a sound is playing. Go and check everything, even if it feels like you’re being really mean, a relentless edge-case hunter. People working on a product tend to live in a bubble and forget how people unfamiliar with it may interact with it.
Example: I once found a bug in a Find my Device style application where the notification sound would no longer play after the screen (and some underlying hardware) had gone fully idle (after an hour or so). The notification would, however, play once the user unlocked their phone. HOW THE HELL WOULD THAT BENEFIT A USER WHO LOST THEIR PHONE?! It wouldn’t. Was I being mean? Yes. Was I going outside the scope of my smoke test? F*ck yes! Did I find some sh*t that would’ve sucked for the user? You know it!
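Exploratory testing itself is manual by nature, but once this kind of abuse turns up a bug, it’s worth pinning the nasty input down as a quick regression check so it can never sneak back in. A sketch, where the route, field names, and expected behavior are all assumptions:

```typescript
import { test, expect } from '@playwright/test';

test('comment field survives emoji and an impatient user', async ({ page }) => {
  await page.goto('/post/123'); // hypothetical route
  const field = page.getByRole('textbox', { name: 'Comment' });
  await field.fill('great post 🙃🔥💯'); // the freaking emojis
  const submit = page.getByRole('button', { name: 'Post comment' });
  await submit.click({ clickCount: 3 }); // click/press/hold, impatiently
  // One comment should appear -- not three, and not a mangled string.
  await expect(page.getByText('great post 🙃🔥💯')).toHaveCount(1);
});
```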
That brings me to my last point. DOCUMENT YOUR SH*T. Take a screen recording. Take a video with your iPhone of the sh*t you’re seeing on screen. Write down the build, the serial number, the version number, whatever identifies the device you’re using, user agents and sh*t if needed. Of course give them the steps to reproduce, including the expected and actual results, but make sure you have this other stuff too. You know why? The first thing a developer is going to say when you find something nasty and hand them the STR is “Are you sure it’s as bad as you say it is?” Then you send them the video and they’re like “fuq”. Bonus points if the PM sees it too and puts a big, fat, red BLOCKER stamp on it. Extra bonus points if you show it during a meeting on a big screen so everyone stares at the bug with a horrified expression. Go celebrate. You’ve done your job.
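For reference, here’s a sketch of what that write-up might look like. The details are made up, loosely modeled on the Find my Device bug above:

```
Title:    Alert sound never plays after device fully idles
Build:    2.4.1 (1083)         App version: 2.4.1
Device:   Pixel 7, Android 14  Serial: [redacted]
Steps to reproduce:
  1. Trigger a "find my device" alert from the web dashboard.
  2. Leave the target phone untouched for 60+ minutes (screen fully idle).
  3. Trigger the alert again.
Expected: Alert sound plays on the idle phone.
Actual:   Nothing plays until the phone is unlocked.
Attachments: screen recording (dashboard), video of the phone, device logs.
```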