How to use dogfood testing
A number of companies use the term dogfood testing to describe part of their software testing process. I believe it’s a powerful and essential part of testing, as I mention in My hierarchy of testing. However, it means different things to different people, so I wanted to describe how I think it should be used to make the most of it.
As great as QA engineers can be, it’s sometimes impossible to cover the breadth of a product and ensure high quality. Dogfood testing is the practice of installing reasonably current builds of a product on your own device(s) and living on them. Everyone has their own unique way of using their devices, so it can be a great way to tease out bugs that would otherwise be very difficult to find. Historically, many of the most important bugs I’ve seen have come out of this process.
An important aspect of dogfood testing is that it’s an integral part of the development process. Since people doing dogfood testing are also going about their lives, it’s important that it provide great value without too much loss of productivity for the tester.
It’s important to note that dogfood tester is not a job title. Ideally, it’s mostly people who are NOT in QA, and potentially not even in engineering, who volunteer some of their time to contribute.
I’ve seen many, many development processes over the years, and I’ll describe one here and show how dogfood testing could fit in. It makes a few assumptions for the sake of a simpler explanation:
- QA engineers sit directly on engineering teams, and there are also QA engineers who do more global testing.
- Submissions to a new version of the product happen at the end of the day.
- A new build of the product is available in the morning.
So, here is how the process could work. Note that I’m also interjecting some things that I consider best practices:
- Engineering would be working on bugfixes or features.
- Each engineering team would determine when they have something worth submitting, rather than forcing submissions on a fixed schedule.
- QA engineers embedded on the engineering team would do some focused testing on what was going to be submitted that evening. Perhaps they would watch commits or talk to engineers to identify important new functionality.
- Issues found would be reported to engineering immediately and sometimes the submission would be aborted or a fix would be added.
- The next day, the greater QA organization would install the new build and run it through a sanity check to test fundamental functionality or new features that they wanted people to try out.
- Based on the feedback they provided, a determination would be made as to whether the build was good enough for people to install. The key criteria would be livability and testability.
- Any good builds would be made available outside QA.
- Dogfood testing would start on any good builds.
Since both pre- and post-submission testing would precede anyone installing a build for dogfood testing, there would be a reasonable expectation that core functionality was in a state that was at least testable (for the wider QA organization) and livable (for dogfood testers).
Ideally, QA engineers doing sanity checks could create a quick summary of their findings. This helps in cases where a build might be deemed livable but some relatively small feature is not working. If someone relies on that small feature, they shouldn’t be dogfood testing that day.
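To make the summary idea concrete, here’s a minimal sketch (in Python, purely illustrative) of what a sanity-check report and a tester’s “should I install today?” decision could look like. The BuildReport fields, the livable/testable flags, and the should_install helper are assumptions for the sake of the example, not a description of any real tool or format.

```python
from dataclasses import dataclass, field


@dataclass
class BuildReport:
    """Quick summary a QA engineer might post after the morning sanity check."""
    build_id: str
    testable: bool                 # good enough for the wider QA organization to test
    livable: bool                  # good enough for dogfood testers to live on
    broken_features: list[str] = field(default_factory=list)  # known-broken areas
    notes: str = ""


def should_install(report: BuildReport, features_i_rely_on: set[str]) -> bool:
    """Decide whether a dogfood tester should install this build today."""
    if not report.livable:
        return False
    # A build can be livable overall while one small feature is broken; anyone
    # who relies on that feature should skip this build for the day.
    return not (features_i_rely_on & set(report.broken_features))


if __name__ == "__main__":
    report = BuildReport(
        build_id="2024-05-14a",               # hypothetical build identifier
        testable=True,
        livable=True,
        broken_features=["calendar sync"],    # hypothetical feature name
        notes="Calendar sync fails on first launch; fix already submitted.",
    )
    print(should_install(report, {"mail", "search"}))          # True
    print(should_install(report, {"calendar sync", "mail"}))   # False
```

In practice the summary could just be an email or a wiki page; the point is that it records livability, testability, and anything known to be broken so that each tester can make a quick, informed call.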
I’m not arguing that this process would result in no bad builds, but it would result in more builds that don’t waste the time of QA or of the wider dogfood testing audience.
Ramifications
The effects of ignoring the important parts of the engineering-testing cycle I described above can be quite significant. I’ve provided a long list of them here.
Valuable time is wasted
Instead of QA finding the serious bugs ahead of time in pre- and post-submission testing, dogfood testers are now QA testers. They find obvious bugs that could have been caught by embedded QA and had a quick fix applied. If many people all report the same issue, it just wastes everyone’s time.
Less feeling of accomplishment
It’s great for an engineer to have lots of people living on (and hopefully loving) what they have been working on. It makes them feel that progress is being made and gives them a taste of what it will be like when the release goes out to customers.
One, it’s great to be able to live on the changes you have made, and you lose that when builds aren’t livable. Sure, you can always custom install a build, but it’s not the same.
Two, there is no time to polish up features. It’s during the polish phase that you get to refine what you’re working on, and it’s often when the “little things” that make our software special can be worked on. Plus, when the build contains things that, from your perspective, aren’t completely finished, it’s harder to enjoy what you’ve worked on.
Important bugs are missed
If dogfood testing generates tons of bug reports, it’s easier for important bugs to be missed. There are also many bugs hidden behind the bugs that should’ve been caught earlier. These are the kinds of bugs that dogfood testing is perfectly suited to find, but if a build is not testable, you won’t uncover them until the ship date gets uncomfortably close. Processing a high volume of bug reports also takes time away from root causing and prioritizing.
Frustration and aggravation
With dogfood testing, there’s an expectation that a lot of things won’t work perfectly. It can sometimes be frustrating. But a high level of aggravation and frustration week after week tends to wear people down. No one wants to work around a large number of people who are frustrated or aggravated. A handful of broken things are expected and tolerated, but stumbling on aggravating things minute after minute is just not good for mental health. There should be a lot of things that work that we can enjoy and be happy about.
Abandonment of dogfood testing
If the quality is not high enough at the time a build is released, you risk annoying dogfood testers to the point where they no longer want to volunteer for this type of testing. Then your customers become your only dogfood testers.
Lower morale
It’s demoralizing to work hard every day, load up the fruits of your labors, and be disappointed or utterly depressed to see that even the most basic functionality doesn’t work and that the overall level of quality is low. We should want to take recent builds for a spin and feel great that they have so many cool features to play around with.
Also, dogfood testers want to report issues and feel like they are contributing. They want to find things that others haven’t found for the maximum impact. Let them find the edge cases. Give them a build without obvious bugs.
Less respect for other engineers
I believe that low overall build quality leads to less respect for coworkers and other teams. On your own team, you generally know what works and what doesn’t, so you avoid the latter. And if you find something bad, you or a teammate can fix it quickly. So, your own team’s software typically looks much better to you. It’s everyone else that seems to be incompetent, right? I think this leads to people assuming that their fellow engineers just aren’t particularly talented or perhaps don’t care as much about their work. This makes it much more difficult to establish good relationships with other teams.
Poor quality death spiral
When people are accustomed to chronically poor builds, it becomes far easier to lower the bar for submitting new code. If what you submitted doesn’t work right, who’s going to notice? If they do, it’s easy to say “well, this whole build is shit” instead of taking responsibility for it. The reason this relates to dogfood testing is that, with so many people living on poor builds, the current state of quality is much more obvious.
Hopefully there is something of use here. Note that this mostly applies to larger companies, but you could adapt it to a small company environment by making all QA embedded.