There’s a particular type of testing that isn’t logically or computationally intensive, but can yield some pretty good results if done right.
Back-to-Back Testing (BBT) works by finding differences between the outputs of a program across versions. It works like those “Spot the Difference” pictures: it’s not looking for a specific difference, just whether one is there.
It’s best suited for standalone applications that take input and generate output. Web services would work too.
That may rule out UIs, but if after reading this you know a way, or have a creative approach, I’m all ears.
How it Works
The way it works is, you make or extract certain pieces of test data, run it through the application, and save the output. Carefully note the results, then determine whether the application is behaving properly.
If it is, this becomes your baseline.
The next time that code changes, use the same data, and get the results again. This time, compare the new results to the old ones, and note any differences.
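That loop is easy to script. Here’s a minimal sketch in Python — the program name (`./myapp`) and the data and baseline directories are placeholders for whatever your setup actually uses:

```python
import subprocess
from pathlib import Path

# Placeholders: swap in your real program, data, and baseline locations.
APP_CMD = ["./myapp"]              # the application under test (hypothetical)
INPUT_DIR = Path("test_data")      # the pieces of test data you made or extracted
BASELINE_DIR = Path("baselines")   # saved known-good outputs

def run_app(input_file: Path) -> str:
    """Run the application on one input file and capture its output."""
    result = subprocess.run(APP_CMD + [str(input_file)],
                            capture_output=True, text=True, check=True)
    return result.stdout

def back_to_back(update_baseline=False):
    """Compare current outputs to the baseline; return names of inputs that differ."""
    BASELINE_DIR.mkdir(exist_ok=True)
    differing = []
    for input_file in sorted(INPUT_DIR.iterdir()):
        output = run_app(input_file)
        baseline = BASELINE_DIR / (input_file.name + ".out")
        if update_baseline or not baseline.exists():
            baseline.write_text(output)        # first run: this becomes the baseline
        elif baseline.read_text() != output:
            differing.append(input_file.name)  # a difference, not necessarily a bug
    return differing
```

Once you’ve decided the differences are acceptable, running it again with `update_baseline=True` promotes the new outputs to the baseline.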
An added benefit of this approach is that you have the data that caused the difference in hand, which is good for reproducing the problem and gives other people the ammo to help diagnose and fix it.
BBT is also a good way to fill in the cracks in what to test. You don’t need a specific test to cover every possible case, just data. And you don’t have to think up specific tests to write; that can come later, it just doesn’t have to happen right now. So it buys you some time, if you’re trying to carve some out for your team.
What Happens If I Get a Difference?
Again, though: this type of test doesn’t necessarily find bugs, just differences. That’s a very important distinction.
In the picture above, the clock on the wall is different. That’s not a “problem” because it’s still a clock.
But for the cookies on the cookie sheet, one of them is actually a floppy disk. That’s a “problem”. Probably shouldn’t eat that.
They’re both differences, but it’s up to a human to distinguish what’s ok and what’s not.
In the same way, if your test detects a difference, don’t run off and report a bug just yet. A known enhancement may have been made to the code, or maybe an outstanding bug has been fixed and this is the result.
They’re both differences, but it’s up to a human to distinguish what needs fixing, and what needs to be considered the new standard to compare against.
(An added bonus of this type of test is it will foster communication across teams, just saying).
If the differences are ok, then you have yourself a new baseline! Grats!
- If you can, combine your data with a code coverage tool, so you at least know how much code gets hit. Most coverage tools report a percentage of lines hit, so if that number dips, you know to find or create new data to hit those lines of code.
- Keep your data small. It might be tempting to use pre-recorded data, but hitting the same common lines of code hundreds or thousands of times won’t give you better coverage, and will bloat your output.
- It’s also important to keep your data in a versioning repository so you don’t lose it. That’s another reason to keep the data small.
- Data is generally cheaper than code to maintain. It takes some brainwork to figure out what data to use, but once it’s done, it doesn’t suffer from the same kind of bit-rot that code can. At least, that’s been my experience anyway.
- If you can get the outputs in a standardized format like CSV, XML, or JSON, there are likely lots of tools out there already that can tell you the differences between files of the same format. Some can even ignore the order of the data, which matters for formats like XML and JSON where ordering isn’t always significant.
- If you can’t, don’t be afraid to write a tool that can tell you what the differences are between outputs.
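For JSON, for example, an order-insensitive comparison only takes a few lines. This is a sketch: it canonicalizes by sorting both object keys and array elements, and whether array order should really be ignored depends on your format.

```python
import json

def canonical(value):
    """Recursively sort dict keys and list elements so ordering doesn't matter."""
    if isinstance(value, dict):
        return sorted((k, canonical(v)) for k, v in value.items())
    if isinstance(value, list):
        # Sorting arrays assumes their order is insignificant -- adjust if not.
        return sorted((canonical(v) for v in value), key=repr)
    return value

def same_output(old_text: str, new_text: str) -> bool:
    """True if two JSON documents carry the same data, ignoring order."""
    return canonical(json.loads(old_text)) == canonical(json.loads(new_text))
```

Anything `same_output` flags as different is then a candidate for the human judgment call described above: new baseline, or bug?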
Lifehack: For “spot the differences” pictures like the one above, try crossing your eyes until two similar parts of both pictures overlay perfectly. A good example is the clock. The parts of the pictures that are different will appear to “flicker”, since each eye is getting different information for that part of the picture.