I empathize with Ash Winter after reading his blog post. Being both a tester and a developer myself, I interact with both “worlds,” which is why I also empathize with the developer who said it.
So why do so many developers see the art, science and craft of software testing as work for the unskilled? Why do so many see software testing as a step down the career ladder?
There are many reasons, one of them being that testing in itself tends to be incorrectly seen as a chore — something you have to do even if you do not want to.
But I believe there are other reasons that perpetuate this misconception.
In particular, the misconception most likely does not stem from you being unskilled (if you are, then start educating yourself!). On the contrary: it most likely stems from outside groups, groups you need to be aware of whether you are a tester or a developer.
You might be the most spectacular tester they have met, but your reputation as a tester has already been tarnished by loud people with bad ideas and worse advice on testing.
It is my belief that you, me, our software community and the testing industry as a whole have been negatively affected by certain groups, and the loudest (and most well-funded) of them all is the so-called Context-Driven Testing school, to which many testers subscribe.
I’ll explain my rationale as a developer.
I personally value exploratory testing as an important testing technique, particularly structured exploratory testing.
Just like Elisabeth Hendrickson, I champion exploratory testing when the context is right. But I do not believe that exploratory testing is the one and only technique that brings all the value to the table of software testing for all projects.
This is the first collision I have with the CDT group.
In particular, the whole point of being a context-driven tester, as defined by the CDT manifesto, is to apply the right methodology in the right context. The first two items state:
1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
This sounds great, and it implies that scripted testing will be best for some contexts, automated testing for others, and that exploratory testing might not be best for every context, right?
In my experience, I like to use exploratory testing for the initial phases right after a sprint or iteration, when I have a deliverable to try out (think in the context of smoke testing as well as usability testing).
Now, unfortunately, for some reason, the only methodology the CDT group seems to evangelize and use everywhere and anywhere (i.e. for all contexts) is exploratory testing.
Yet what the CDT group fails to mention is that exploratory testing is not better than scripted testing at finding functional defects in software. In fact, it sometimes achieves slightly worse results.
How do I know this? Because a few intelligent people have actually taken the time to think, to see through the hype and to question the consultants who charge thousands of dollars for useless CDT sessions, sessions whose exclusive focus is, ironically, exploratory testing and nothing else!
These people who have seen through the hype, and who have nothing to gain from being for or against exploratory testing, have written down their experiences as well as their research results.
That’s right! There has been study after study after study after study after study that shows that exploratory testing does not perform better than scripted testing in all cases. In fact, ET can also worsen a software project’s technical debt!
In fact, several prominent companies in the software industry have noticed the same thing. For example, Microsoft gave ET a try and was not impressed. In a study driven by BJ Rollison, one of Microsoft’s top testing architects, they found that ET does not outperform scripted testing as advertised by the CDT consultants.
Microsoft hired James Bach (a top CDT consultant) to learn about ET, but when it was time for the test teams to actually apply such “wisdom,” they were naturally disappointed. Needless to say, Microsoft considered Bach’s teachings a flop and started to investigate on their own.
Of course, any attempt at a constructive debate is annihilated on the spot by the CDT consultants and their proponents with the argument that “different paradigms cannot possibly coexist” (James Bach’s favorite retort). That is a rather backwards argument to use nowadays, particularly in software, a field that was established by different ways of thinking coexisting in the first place (think Turing machines vs. lambda calculus).
Indeed, CDT sessions from these consultants are just that: Cash-Driven testing sessions.
Care to have a discussion with them about why they think that “different paradigms cannot possibly coexist”? Unfortunately, attempts to initiate conversations about other methodologies and different ways of thinking are quickly defused so as to prevent any discussion (and any loss of potential paying clients). Examples are here, here, here, here, here, here and here, just to name a few.
In a response that James Bach has since deleted, I challenged him and asked why he did not prove such “faulty” studies wrong. Since he already knows the errors of other people’s ways and has first-hand knowledge of how not to get “ET wrong,” I believe he (and other CDT consultants) is in the best position to do so.
Alas, no reply. Perhaps those CDT consultants really are all talk.
My second collision with the CDT group is regarding automation.
As a developer, I can all but guarantee that you value automation and consider it a best practice in the seemingly endless universe of software development practices.
The reason we value automation is because it enables:
- Repetition of tests
- which gives a safety net for the quality of our code
- which enables continuous delivery
- and therefore affords agility
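To make that cascade concrete, here is a minimal sketch of an automated check that runs unchanged on every build. The `apply_discount` function and its expected values are hypothetical, invented purely for illustration:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # Written once, repeated on every commit: this repetition is
    # the safety net that makes continuous delivery practical.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
    # Invalid input must be rejected, not silently accepted.
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for invalid percent")


test_apply_discount()
```

Nothing here is clever; the point is that the check costs nothing to re-run, so it can guard every single build.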
Automation has many benefits beyond running a specific test for a specific situation.
In particular, the coupling effect tells us that complex bugs are coupled to simpler ones, bugs that might have been introduced by our colleagues or even by ourselves (the related competent programmer hypothesis says that programmers write code that is close to correct, so most faults are small).
Automation is therefore great at catching these “satellite” or “peripheral” bugs, whose symptoms result from coupling and whose root causes are nowhere near the lines of code targeted by the test itself. This has certainly been my experience as well, and it strengthens the safety-net quality of automated tests.
In other words, having piles of automated tests not only allows us to detect such bugs as regressions in our code, but also to detect many other bugs that we did not plan for in our tests. It’s a win-win!
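Here is a hypothetical sketch of that safety-net effect (all names and values are invented for illustration): a test written for an invoice total also exercises a shared rounding helper, so a bug later introduced in the helper fails this test even though the test never targets the helper directly.

```python
def round_money(amount: float) -> float:
    # Imagine a colleague later "optimizes" this into a truncation
    # such as int(amount * 100) / 100; the test below would catch it
    # even though it was never written with this helper in mind.
    return round(amount, 2)


def invoice_total(line_items: list, tax_rate: float) -> float:
    """Sum the line items, apply tax, and round to cents."""
    subtotal = sum(line_items)
    return round_money(subtotal * (1 + tax_rate))


def test_invoice_total():
    # Targets invoice_total, but exercises round_money as a side
    # effect, so a regression in the helper also fails here.
    assert invoice_total([100.0, 50.0], 0.07) == 160.5


test_invoice_total()
```

The test's "blast radius" is larger than its stated target, which is exactly the peripheral-bug detection described above.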
So here comes another clear example of contradiction and lack of knowledge from the CDT group of consultants in the form of a video called “Skills based testing” with one of their leaders, Paul Holland.
In that video, Paul Holland uses a rather bizarre and primitive analogy between bugs and Easter eggs, an analogy that left me wondering how these guys are seen as authorities in testing.
The problem starts at minute 2:41, so go watch the video or keep reading along. At that point, Paul Holland is (as usual for CDT consultants) speaking against testing methodologies that are not exploratory:
Well guess what, you already found that egg and unless somebody comes along and puts that egg back in the exact same spot, it doesn’t matter that you’ve now added a new script because that egg is found
That way of thinking is simply absurd!
The goal of the sentence (and the video) is to downplay automated testing (and all the advantages I mentioned above), only to introduce their CDT “teachings” in the style of a “You really need this new product which will solve all your software testing problems” infomercial.
And, of course, people pay them. Yikes!
But do we really need their product to solve all our software testing problems?
Automated tests are scripted tests by definition (something the CDT group really hates for no logical reason). Further, automation enables regression testing. Regression testing helps detect bugs (or eggs, as Paul likes to think of them) that were once fixed but have been re-introduced into the SUT as the system evolves.
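A hypothetical sketch of what regression testing pins down: a bug that was fixed once gets an automated test, so re-introducing it is caught immediately instead of being re-discovered by hand. The `parse_version` function and its bug history are invented for illustration:

```python
def parse_version(tag: str) -> tuple:
    """Parse a git-style tag like 'v1.2.3' into (1, 2, 3).

    Regression note: an earlier (hypothetical) version crashed on
    tags without the leading 'v'; the test below keeps that fix
    in place permanently.
    """
    # lstrip("v") drops any leading 'v' characters before splitting.
    return tuple(int(part) for part in tag.lstrip("v").split("."))


def test_parse_version_regression():
    # The originally reported bug: tags without a 'v' prefix.
    assert parse_version("1.2.3") == (1, 2, 3)
    # The common case still works after the fix.
    assert parse_version("v2.0.10") == (2, 0, 10)


test_parse_version_regression()
```

Once this test exists, the egg cannot quietly reappear: anyone who re-introduces the old crash breaks the build the same day.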
So Paul is clearly speaking against regression testing. In doing so, he is ignoring and negating the benefits of automated and regression testing that are confirmed by sound research as well as experiences that span decades.
Developers value automation and particularly regression testing. This bizarre, invalid perspective from the CDT consultants raises big, bright red flags to me as a developer, flags that indicate:
WARNING: These people have NO idea what they are saying!
A third reason that being a tester is seen as requiring no skill is the vacuous terms that consultants around the CDT school tend to come up with: terms that have apparently (and sadly) impressed many testers, but have not impressed developers, for good reason.
All these fancy phrases are smoke and mirrors meant to appear smart and intelligent. However, developers catch on quickly and know in a heartbeat that these terms are worthless. By association, testers who are seen reading or using these empty terms get labeled the same, and rightly so.
That’s to be expected from people who do not know what they are doing but have built a million-dollar business around it and want to protect it by trying to hide the truth. And they don’t seem to care, since many people are paying them to do it anyway.
But with companies like Microsoft and researchers becoming aware that the teachings of these consultants are not the holy grail those consultants make them out to be, and with people publishing their experiences and studies, good change is coming.
I think this is where things take a turn for the better for our software industry; when they do, being a tester will no longer seem like a step down.