
One of the companies I worked for had a rule that dictated zero bugs in each release. Any bug found during the formal QA process had to be fixed before the release could go into production. This sounds like a good idea at first, but it is fraught with problems:

  1. Some reported bugs have very low impact. For example, a label might be misaligned by a few pixels, or a certain bug might only be seen by internal users. Fixing bugs takes time away from new features, which may be more important. New features drive application development, and focusing on minor bugs slows down the project.
  2. The requirement to fix every bug made people afraid to report new bugs, because they knew we would have to spend time fixing each one before cutting the release. We did not want the person who found a bug deciding whether or not to report it. All bugs should be reported, and the business should prioritize them and decide which ones are worth fixing.

These problems stemmed from the fact that fixing bugs once the software reached formal QA was expensive. Each bug first had to be fixed by developers. Then a tester had to verify the fix (possibly by pushing a new build to a signoff or local QA environment). Once the fix was verified, we had to release a new version of the software in order to promote it to the formal QA environment. Finally, the formal QA team had to verify the fix. This entire process took at least half a day, and could take much longer.

This process was obviously a pain point. The time and people involved meant we should only fix the bugs worth fixing, and the business sponsors had the final say.
