The testing of software products or any software artifact, after the fact (i.e. inspection to find defects versus inspection to prevent defects), adds no value to the business providing the product or to the customer using the software. In this regard, we can say that testing is waste.
What!? Should I stop testing, then?
Well, no, but let’s discuss it from a couple of perspectives.
Am I cheating my customer?
The story goes like this:
- My customer wants a high-quality product
- Because we are human, we make mistakes and therefore no software is bug-free
- We test in order to catch problems before they reach the user.
So far so good, but now comes the difficult part. Determining how much to test is an inexact science. I could test infinitely, driving project costs ever upward. I could also let the client decide how much to test (I am agile, after all), but beware! His decisions could result in tons of technical debt in the form of untested code. Or I could carefully explain the needs, alternatives and tradeoffs of the various testing approaches and how much automation to incorporate; this would all be a piece of cake. In fact, I have just convinced him to pay for the testing, and now you tell me that testing is waste. Am I making my customer pay for waste?
We are professionals that take pride in our work. We strive not only to be effective but to also be efficient. Our aim is to perform only those activities that contribute to the final outcome. Thinking we are wasting customer money is disturbing. Just bear with me a little longer.
Not my fault, this is a new concept
I could just tell my customer “I’m sorry but I just realized that testing is a waste.”
Truthfully, the idea of testing as something to avoid has been with us for quite a long time. In Software Process Improvement (SPI), the story of “testing is waste” goes as follows.
To build a piece of software, we first gather requirements, analyze them, design the solution, program and implement the design, and finally deliver it. Now suppose that the user finds a defect after using the product. How much would it cost?
Let me introduce cost of quality (CoQ) and its taxonomy. CoQ covers both the costs incurred due to a lack of quality and the costs incurred in achieving quality. The cost of lacking quality comes from failures before delivery (internal) and after delivery (external), plus the cost of fixing the defects. The cost of achieving quality includes appraisal (inspection) and prevention costs.
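To make the taxonomy concrete, here is a minimal sketch of how the four CoQ categories add up; all figures are invented purely for illustration:

```python
# Hypothetical cost-of-quality breakdown for one release (numbers invented).
costs = {
    "prevention": 5_000,         # training, design reviews, coding standards
    "appraisal": 12_000,         # inspections and testing
    "internal_failure": 8_000,   # defects fixed before delivery
    "external_failure": 30_000,  # defects found by users after delivery
}

cost_of_quality = sum(costs.values())
cost_of_poor_quality = costs["internal_failure"] + costs["external_failure"]

print(f"total CoQ: {cost_of_quality}, of which poor quality: {cost_of_poor_quality}")
```

The first two categories are the cost of achieving quality; the last two are the cost of lacking it.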
Some defects can be traced to a problem that originated during the programming or design stages. Could we have detected them closer to the moment they originated? How much would it have cost in that case?
Barry Boehm published in ’81 a much-cited study showing that a change in the requirements phase costs 1x, in analysis 10x, in design 100x, and so on. In Boehm’s context (bureaucratic environments, fixed-scope projects, mostly the US Government and its contractors), most changes came from defects. Taking quality upstream therefore makes economic sense.
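As a back-of-the-envelope illustration (the uniform 10x-per-phase factor is a simplification of Boehm’s findings, not his exact data, and the phase names are generic):

```python
# Relative cost of fixing a defect, assuming it grows 10x per phase it survives.
PHASES = ["requirements", "analysis", "design", "coding", "operation"]

def relative_fix_cost(phase: str) -> int:
    """Cost multiplier for a defect first fixed in `phase` (requirements = 1x)."""
    return 10 ** PHASES.index(phase)

for phase in PHASES:
    print(f"{phase:>12}: {relative_fix_cost(phase):>6}x")
```

A defect that slips all the way to operation costs four orders of magnitude more to fix than one caught while the requirement is still on paper.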
CoQ was used to justify Big Requirement Up Front and the whole BxUF, where x is also analysis, design, etc. If you do it right from the beginning, what could go wrong? This is the infamous waterfall approach.
But you as well as everyone else know that the universe throws us curve balls all the time in the form of changing requirements. As a result, we have to iterate. Iteration means rework, and rework is bad, but what can we do?
What about automating the whole development process itself? You know, Computer-Aided Software Engineering (CASE) and all that stuff. Old-fashioned, and surely it had its problems, but remember: the goal was to get it right from the start. This would reduce the CoQ.
You have to be politically careful in attacking CoQ; it is used as an economic justification for SPI initiatives including CMMI projects, but also to justify agile methods.
The exact factors influencing each phase of the cost-of-change curve are not collectively agreed upon, but Alistair Cockburn argues that the curve holds even in XP. So the tool is not bad per se, but it can be used to justify wildly different solutions.
The machine that changed the world
You are probably thinking “Moving testing upstream seems reasonable; we want to make every task as efficient as possible. That still doesn’t mean that the testing will add value!”
Sometime we find ourselves arguing just because we assign different meaning to the same words. Value, Waste … let’s agree what they mean to me, but first I will digress.
Japanese car manufacturers (starting with Toyota) use a set of principles and practices called the Toyota Production System, currently known as Lean Production, which radically changed the way production is organized. The Lean principles are:
- Eliminate waste
- Amplify learning
- Decide as late as possible
- Deliver as fast as possible
- Empower the team
- Build integrity in
- See the whole
Just-In-Time and Stop-the-Line (Jidoka) practices, which originate from Lean Production, were counter-intuitive when first proposed, but nowadays these and other Lean Production practices are widely accepted and considered to have had such a positive impact that they are featured in The Machine That Changed the World.
But Lean Production is about manufacturing. Software development is not manufacturing. Does Lean Production apply to software product development? Not directly; you must take a look at The Toyota Product Development System to see how the principles are applied. And finally, there are differences between developing cars and software.
You may already know Lean Software Development and Mary and Tom Poppendieck’s work. They applied Lean principles to software development. This is a fairly new area, still in motion, but it is already having a positive impact on the Agile community, triggering many interesting discussions.
Value Stream Mapping
In Lean, optimization must be global (“See the whole” principle).
One of the Lean tools for detecting opportunities for improvement maps the flow of a “piece” through the value-adding system (from concept to cash), noting the time the piece spends in each step of the flow and whether any queue of pieces or work in process (WIP) exists. The flow’s end-to-end time is called lead time. With the map in hand, you can identify improvements by applying two main principles: eliminate waste and shorten the end-to-end time (“Deliver as fast as possible”). The nice thing is that shortening the lead time forces you to eliminate waste.
What is a piece in software development? You can think of a new piece of functionality, from the moment it is conceived to the moment it starts being used by a user in a production environment.
After identifying all kinds of waste, you can measure the efficiency of your process as the ratio between the time spent in value adding tasks and the lead time.
Eliminating waste is a never-ending activity. Processes will never reach 100% efficiency.
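The value stream map boils down to simple arithmetic. The sketch below (step names and hours are invented for illustration) computes the lead time and the efficiency ratio described above:

```python
# Value-stream efficiency sketch: each step records value-adding hours
# and the hours the piece spends waiting in a queue before/around it.
steps = [
    ("write user story", 2, 40),  # (name, value-adding hours, waiting hours)
    ("develop", 16, 8),
    ("test", 4, 72),
    ("deploy", 1, 24),
]

value_adding = sum(v for _, v, _ in steps)          # 23 hours of real work
lead_time = sum(v + w for _, v, w in steps)          # 167 hours end to end
efficiency = value_adding / lead_time

print(f"lead time: {lead_time} h, efficiency: {efficiency:.0%}")
```

Even in this modest example, most of the lead time is queuing, not work; attacking the 72-hour wait in front of testing shortens the lead time far more than speeding up any single task.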
Waste is everything that doesn’t add value as perceived by the user.
There are many kinds of waste, and some common patterns: every queue and waiting step is waste, and so is every piece of rework or any product not used or sold. In software, for instance, there are 7 types of waste:
- Partially Done Work
- Extra Features
- Lost Knowledge
- Task Switching
- Handoffs
- Delays
- Defects
But even if defects can be classified as waste (they are both a queue and a signal that rework is needed), you can still argue that detecting them through testing is not waste.
No, no. You must work in such a way that defects don’t happen in the first place (the “Build quality in” principle). And if you can’t do that, at least build in increments so small that you detect the defect almost as it happens, and fix it (and its cause) immediately. You know where I’m going: Test-Driven Development (TDD) and Acceptance Test-Driven Development (ATDD). The catch is that testing after the fact (inspection after the defect occurs) is waste, while testing to prevent the defect is not.
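In code, the difference between after-the-fact inspection and prevention is largely the order of the work. A minimal TDD-style sketch (the `apply_discount` function is a made-up example, not from any real project):

```python
# Step 1: write the failing test first. It pins down the expected behavior
# before any production code exists, so a defect is caught the moment it is
# introduced rather than weeks later in a separate testing phase.
def test_discount():
    assert apply_discount(price=100.0, percent=10) == 90.0
    assert apply_discount(price=100.0, percent=0) == 100.0

# Step 2: write just enough code to make the test pass.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

test_discount()  # passes silently; a regression would fail immediately
```

The test here is not inspection of a finished product; it is an executable statement of intent that existed before the code did.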
This is easier to understand in manufacturing, but what about software development?
There is consensus that TDD is not testing but rather design that can be executed and validated automatically, so it is not waste. But ten years ago most of us thought that a good set of UML diagrams was the best representation of a design and that there was value in keeping it updated. Who knows, in ten years we might know the “proper” way of doing design, and the tests associated with TDD will be waste used only by dinosaurs.
… and Value
Software testing is a measurement of the qualitative attributes of the product. Once tested, you’ll have a better understanding of the product, but from the customer’s point of view, it is still the same product.
In some cases the customer may request and pay for a particular measurement (test). Is this a case where testing adds value? The answer is no. It is more likely a lack of trust which causes the customer to request and pay for this wasteful behavior.
After all, what good is testing?
Some care must be taken. Saying that something is waste doesn’t mean that you can just get rid of it. After analyzing the value stream map of your organization, you might find some quick benefits, like removing duplicate tasks (e.g. testing by the development teams and by an external test team), but many times you can remove waste only after changes occur in other activities or tasks. Take, for example, the lack-of-trust situation. Removing tests is only possible once the customer trusts us, and that will take time. Other common examples: TDD or ATDD can decrease the amount of testing needed at the end, but you have to change how you manage requirements and how you design.
Is every test waste?
Brian Marick proposes a classification of tests into four quadrants. Along one dimension, tests either “support the team” or “critique the product”. Along the other, they are either “technology-facing” or “business-facing”.
Most (or all?) tests that support the team can be automated and written test-first. If you find yourself doing this kind of test after the fact or manually, it’s waste.
But what about tests that critique the product? While doing them (e.g. exploratory testing), you are learning the emergent properties of the product and finding ways to improve it. It is just too easy to say that those are not waste but value-adding activities. Don’t. Read Agile Testing, by Lisa Crispin and Janet Gregory. Run some experiments. Some of those tests could be done before having a product.
I will lose my job!
If testing is in fact waste, what does that mean for us poor testers? It can’t be true; or at least not always true; or surely not everybody will do the right thing.
I’m trying to shake you a little, but is your current way of developing software (“process” has a bad name lately) bad?
I think that usually the answer is no. It is the best way … until now. You can always improve it.
Don’t give up hope, read Testers: The hidden resource by Lisa Crispin and Janet Gregory. There are many ways in which a person coming with a testing background could contribute to an agile team.
Quality is paramount in agile development; we need quality to have effective and efficient teams. But quality is not testing. Testing is commonly done by people with a great commitment to quality, but it is the least efficient moment to add quality to a product.
The team needs quality champions, but those people (testers like me) can feel out of place on an agile team. As testers, we need to leverage our skills and help the team find ways to build in quality early on. As testers, we have to work hard toward a team that doesn’t need after-the-fact testing, cannibalizing our own expertise the same way successful companies do with their products.
We are living in interesting times; we are in the middle of a paradigm change (not to mention some financial problems). If you define your value as a professional based on your ability to do a particular task, you are at risk. I try to identify how I contribute to software development at a higher level, and I keep learning. I hope that this is the answer!
What is your answer?
(Juan’s follow up article provides a potential answer to this dilemma.)
About the author: Juan Gabardini has more than 20 years of experience in IT management, IT product development, teaching and consulting in financial services, retail, telecommunications, medical equipment and manufacturing sectors.
Juan is currently focused on testing and coaching the Agile way, and building an Argentinian and Latam Agile community (Agile Open Tour and Ágiles 200x).
Juan is a member of the IEEE, the Agile Alliance and the Scrum Alliance, and holds degrees in Systems Analysis and Electronic Engineering from the University of Buenos Aires, where he also teaches. You can find him (in Spanish) at http://softwareagil.blogspot.com and http://www.agiles.org