So I bought around 40 pounds of Lego. They were a great price and I couldn’t pass them up.
I guess I can let our 4 girls play with them too 🙂
The first thing I did was to get all of them out and sort out all the non-Lego pieces.
Now, I can remember my (modest by comparison) collection, and what pieces I had.
It was possible for me to think about what I wanted to build, and kinda figure out what all would be involved, before I dumped out my bucket of Lego.
But after looking through 40 pounds worth of Lego I realized: holy decision fatigue, Batman, there’s, like, 20 different sets worth of these things in here.
These new sets of Lego they’ve been coming out with have so many specialized pieces that now it’s really hard to tell what’s possible.
And I realized why: before I can figure out what I want to build, I have to be aware of what all the different pieces are.
Turns out this is a great object lesson for test automation frameworks.
I’m an avid user of Behavior Driven Development tools. Usually I measure the complexity of a test framework by the number of step definitions it employs: the more there are, the higher the complexity.
Complexity is a bad thing. And yet I’ve seen frameworks that have upwards of 2000 step definitions.
That’s basically like trying to build Lego models with at least 2000 kinds of pieces.
It’s just about impossible. There’s no way to know about each kind of piece and how it interacts with other pieces. And yet if you look at pictures of Legoland, they build life-size elephants out of basic gray pieces.
And in the same sense, there’s no way to know about that many step definitions and how they interact with other steps or other parts of the system.
Here’s what I see happen (some or all of these) as a new test gets built:
- Looking for a similar test to copy/modify
- Searching for relevant test steps
- Asking people for help (collaboration is good but let’s be honest: this does distract others)
- Analysis paralysis from not wanting to reinvent the wheel
- Reinventing the wheel anyway and creating almost-duplicate steps
This can become a pretty big problem. Complexity equals maintenance equals cost. I hate that. FRITZ SMASH!
So here are some strategies to prevent this from happening and keep frameworks simple enough that less thought needs to be devoted to the framework, and more can be devoted to the product:
Review and get rid of unnecessary code. Combine multiple similar code blocks into one. Remove tests that hit the system the same way.
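As a minimal sketch of what combining similar code blocks can look like (the role names and permission values here are invented for illustration, not from any real framework):

```python
# Before: three copy-pasted helpers differing only in a string.
# def login_as_admin(session): ...
# def login_as_guest(session): ...
# def login_as_editor(session): ...

# After: one parameterized helper replaces all three.
def login_as(session, role):
    """Log in with the given role; a single code path to maintain."""
    session["user"] = role
    session["permissions"] = {"admin": "all", "editor": "write", "guest": "read"}[role]
    return session

session = login_as({}, "editor")
print(session["permissions"])
```

Anyone reading the test code now has one helper to learn instead of three, and a fix to the login flow lands in one place.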
Specifically for BDD, use regexes to boost the mileage of your step definitions and offer more flexibility to your users. This prevents step definitions that differ only grammatically.
Do you need to write a bunch of similar methods, or can you employ tricks that let the code change what it does at runtime?
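One such trick is a lookup table of small functions chosen at runtime, instead of a pile of near-identical methods. A hedged sketch (the check names and response shape here are invented for illustration):

```python
# Instead of separate check_status, check_header, check_body steps that
# differ only in one comparison, register the comparisons in a dict and
# pick one at runtime.
CHECKS = {
    "status": lambda response, expected: response["status"] == int(expected),
    "header": lambda response, expected: expected in response["headers"],
    "body":   lambda response, expected: expected in response["body"],
}

def verify(response, kind, expected):
    """Single entry point: look up the right checker by name at runtime."""
    try:
        check = CHECKS[kind]
    except KeyError:
        raise ValueError(f"Unknown check: {kind!r}") from None
    return check(response, expected)

response = {"status": 200, "headers": ["Content-Type"], "body": "hello world"}
print(verify(response, "status", "200"))  # True
print(verify(response, "body", "hello"))  # True
```

Adding a new kind of check is now one dict entry, not another copy-pasted method for test writers to discover.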
Avoid decision fatigue by simply limiting the number of decisions you have to make when writing test automation.
What other strategies do you use? Share in the comments below!