Ivory Consulting’s CEO Scott Thacker provides advice and counsel to equipment lessors and lenders on the best ways to improve customer satisfaction and profitability using modeling and pricing techniques. His colleague David Holmgren contributed to this article.
In my role as CEO of Ivory Consulting, one of the three most important facets of my job is to ensure that our quality assurance activities are executed to the best of our ability. Toward that end, we have an initiative at our company known as QA First. It means that everyone must engage our quality assurance group at the beginning of each customer or internal project – it’s the most efficient way to ensure our quality assurance is the best it can be.
David Holmgren has led Ivory Consulting’s quality assurance efforts since he started working with us in 1996. Each month, he writes an internal blog known as the QA Corner. As David goes into his 10th year of blogging, I asked him to share with all of us one of his favorite posts. I hope you’ll enjoy it.
Infallibility – Don’t Count on It!
Recently a client reported a bug in their custom version of SuperTRUMP. It had to do with the effect of a passive loss adjustment. Upon investigation, it turned out that our own testing had missed it because we had relied on a test tool that had long been regarded as infallible: the termination value comparison report, which checks control values against equation-derived values. It is a great test because a zero in the far-right column indicates that the control value equals the equation-derived value for that row, and any non-zero in that column immediately signals an error. The report was thought to be infallible because the two calculations, control and equation, are completely separate. Although the calculations are separate, the reporting of the results turned out to be misleading. The report shows only that the two values match, not whether they are correct. In this case, they were both wrong in exactly the same way; both lacked the passive loss adjustment.
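The flaw can be illustrated with a small sketch. This is hypothetical code, not SuperTRUMP's actual calculations; the function names, inputs, and the passive loss formula are invented for illustration. It shows two independently written calculations that agree with each other, so a comparison report prints all zeros, yet both omit the same adjustment and are wrong against a separately validated expected value.

```python
# Hypothetical illustration only (not SuperTRUMP code): two independent
# termination-value calculations that match each other yet share the
# same bug -- each omits the passive loss adjustment.

def termination_value_control(book_value, accrued_rent, passive_loss):
    # "Control" path: should subtract passive_loss, but omits it (the bug).
    return book_value + accrued_rent

def termination_value_equation(book_value, accrued_rent, passive_loss):
    # Independent "equation" path: written separately, same omission.
    return accrued_rent + book_value

def comparison_report(cases):
    # The far-right column of the report: control minus equation per row.
    return [termination_value_control(*c) - termination_value_equation(*c)
            for c in cases]

cases = [(1000.0, 50.0, 25.0), (2000.0, 75.0, 40.0)]

# The comparison report shows all zeros, so the test appears to pass.
assert all(d == 0.0 for d in comparison_report(cases))

# But a separately validated expected value, which includes the
# passive loss adjustment, reveals that both paths are wrong.
expected = [bv + ar - pl for bv, ar, pl in cases]
actual = [termination_value_control(*c) for c in cases]
assert expected != actual  # agreement between paths is not correctness
```

The point of the sketch is that the comparison only detects *divergence* between the two paths; a defect common to both is invisible to it, which is exactly what happened here.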
The moral of the story is that even “infallible” tests need to be separately validated. The quality assurance analyst should have considered all of the side effects; because a change to termination values was not anticipated, our tests quietly preserved the incorrect behavior, and their reputation for reliability made the error all the more insidious.
When creating test cases we need to maintain the same level of scrutiny as, and perhaps more than, we apply to the software code itself. We must be aware of our limitations, and use imagination to look beyond them.
Be suspicious if someone tells you a test is infallible—that’s a mighty high standard. The follow-up question might properly be, “within what limits?”