A few months ago I wrote about how I was humbled trying to record screen capture demos of our product for internal training. My original attempt wasn't very productive; I was probably overly ambitious and set my standards too high. But recently I decided to give it another shot, with the opposite expectations. I lowered my standards and decided to release whatever I created in one take. Well, I was late for dinner, so I had to do it in one take.
This attempt was more successful, and I was able to relay information about the product to our sales team, who turned it into a polished client presentation.
This gave me an idea. We had recently discussed improving communication between our development and QA teams. We are a pretty small firm, but the idea was floated to create detailed specs for every feature in the system. Now, I fall somewhere between the camp that believes every feature requires a detailed functional specification and the camp that uses the software itself as the specification.
There are times when a specification not only clarifies the feature in the project manager's mind, but also gives precise direction to development on how a specific feature should be implemented. But in most cases, I verbally describe the feature, document it with a bullet point in the notes for a development iteration, and give the developers some leeway in implementing it. I believe this is one place where a small team with good communication can far outperform larger teams, and this formula has worked for us in the past. But even if the QA team is involved in the discussions where the feature is communicated to development, QA isn't involved in the day-to-day implementation and might miss some context from the background discussion, which can lead to confusion about the behavior of the application.
With this problem in mind, I considered compromise solutions that would provide more documentation to QA but also save me from writing detailed specifications that would likely be misinterpreted anyway. The idea I came up with was to create a screen capture video of each feature once we felt it was in a completed state, as a way to communicate the feature's behavior to QA. This also provided a way to point out areas we felt required more attention. So this is what I set off to do earlier this week, but the end result was not quite what I expected. Here's the lesson I learned:
Demonstrating a feature as a user would use it in a recorded demo reveals a lot of bugs.
One of our senior QA guys said something to me recently that caused me to reconsider their role. QA doesn't expect their job to be like "shooting fish in a barrel." They expect their job to be hard. They don't expect the typical use cases to fail. They have enough work to do on the exceptional cases that they don't want to spend a lot of time documenting bugs in the fundamental behavior of a feature. It is the job of development to ensure the application works in typical cases. I have to agree. If I, as a developer, implement a feature, but it doesn't work in the most typical cases, have I really done anything?
Well, it turns out that when I started recording the feature we considered "ready for testing," the application started feeling a lot like a barrel of fish, and my mouse was the gun. I personally fixed three or four bugs. As a project manager, I want to make QA's job hard. I don't want there to be a lot of obvious bugs in a feature when they start testing it. So I've decided to add this to our QA procedure: all new features will be documented with a screen capture demo before we consider them "ready for testing." While the jury is still out on using the screen capture as a functional specification (stay tuned for that one), my (admittedly limited) experience now shows that the bug count will be lower before the application goes to QA, and ultimately that makes for higher productivity and a better final product.