CAST 2011 – Testing competition with Happy Purples
First of all, I'll have to report a bug in James' blog post: we only got $23 for the worst bug report award. =)
I would also like to thank James for the fun competition he set up. It was really a learning experience, and in retrospect I would maybe have put even more effort into the learning parts throughout the exercise. My ability to concentrate may also have been impaired by the time of day (after 6 pm, following a long conference day) and by the fact that I was still jet-lagged. But enough whining; here is the story and my learnings, which will hopefully help me make better decisions in the future.
First off, I was going to enter the competition on my own, but then I got to know Jay and Sandra and we formed Happy Purples (even written in purple on the registration flip chart). It felt quite good to be a team instead of working solo, especially since we had the Miagi-do all-star team at the table next to us, and they were plenty.

The start of the competition could have been better. While we knew we were already crippled by having only two computers in a team of three (with two iPads as a complement), we thought we would be OK since we were told we would get a web link to the software to test. Here was the first setback: we realized it was a Windows application to test on our two Macs and iPads. Dual-booted Win XP worked on one of the laptops, while my own Parallels Win 7 instance had not been running for a while and did not really want to cooperate.
Keep your options open so you can test on whatever platform is needed. Also, make sure you know the platform requirements. Here we really should have questioned the statement "you will soon get a web link to the software" instead of assuming it meant a web application to test, which is exactly the assumption I made right before the competition.
Later I got to know why my Parallels instance of Windows did not want to cooperate during the competition. It was due to a performance bug in Parallels that causes the Mac OS X dock to hog the CPU when the Windows applications folder is kept in the dock.
The next setback was the slow download of the software. The organizers solved this by handing out the installer on a thumb drive, but half an hour into the competition we still had only one instance running. This is where, in frustration, we filed the "download takes long time" bug.
While I kept trying to get my Windows running, the others started testing the product and reading up on the documentation provided. I also scrolled through the readme file while observing the initial exploration of a quite unstable product. When getting my Windows up and running seemed like a dead end, I left it and started reporting the first bugs. This is where our quite poor standard of bug reports was established. The problem was our unstructured way of testing and note-taking while having only one running instance of the application under test. Because of the stress factor in the room, and because the bugs just jumped at us like bullets in The Matrix, the team was overwhelmed by information overload.
It is important to have enough instances of the application under test so that everyone can get a good view of what is going on.
When I finally got the application running on my machine (2.5 hours into the competition), I verified behavior as correct that looked wrong on the other laptop, because the data on my installation had not been tampered with.
Faced with the initial information overload of bugs/issues/problems/questions at the start of testing, we should have invested some time in stepping back from the product, thinking about what was really important, and focusing on that. In our situation, on a first encounter with the product, the only sensible thing to do would have been to go talk to the developer.
Documentation of the product is important, but after reviewing it and getting acquainted with the product, opening a dialog with the developer should have been the priority in the setting we were in.
Once up and running, we actually tried several approaches to reporting all the bugs found. Since I did not have the application running, most of the reports were written as short notes that I then submitted to the tracker. The problem with this was that I sometimes had a hard time getting the context of the bugs, and did not completely succeed in conveying that context when the reports were entered. It was very much like the whispering game, since we all pressed on at a really stressed pace.
James came by with his notebook and asked how we were doing our testing and what approaches we had. This was just when we had gotten a little up and running but were still not very organized in our work. We told him about /some/ status, and realizing that would not suffice, I swiftly grabbed one of Ben Kelly's pieces of advice from an earlier session: "Since we are in the middle of working, can I get back to you with the answer to that question?" A quite good answer in a hectic environment when you need to keep your head above water, but unfortunately I later realized that I never got back to James about it.
While we had James at our disposal, I realized it was a good opportunity to interview him about his expectations for the test report. That was a quite good move, since we learned how important the report actually was. Our main focus until then had been on finding bugs that would be the foundation for the report, but there was more to it than just that.
There is always more information to get about the context; you just have to ask the right questions of the right person.
When the reporting was brought to our attention, we started thinking about what the outline could look like. One of us started on an outline while the others continued testing and reporting bugs. This is something that would have needed more attention, and not something to plan /during/ our test session. That became even more obvious when we all directed our focus to the report and needed to do most of it from scratch.
It is important to set the stage as a group before starting to solve a problem like this competition. I think our group would have benefited from at least some structure in our work, and in the heat of the action we underestimated that need. It is especially important to have some kind of statement of work when you are a group of more than two people. This showed over and over in all the activities we performed, yet we did not cope with it at all, I guess because of the stress and because we did not realize the need for it within the short time frame of the competition. We even observed the structured way the Miagi-do team set up their SBTM on flip charts.
About the worst bug report award
The two bugs mentioned by James are of course questionable. The tooltip inconsistency was actually filed on an assumption of consistency, but we later learned from the developer that bugs of that type were not important. So getting feedback on bugs from the developer actually changed the type of bugs we filed later on; we improved. But it was still not possible to edit or delete bugs that had already been filed.
As for the "slow download of application under test" bug, it relates to a discussion I had with Michael Bolton on my blog a while back. I would consider it a testability issue not to get the application delivered in a timely fashion, especially if we were going to get more deliveries of it during the competition time frame. In retrospect, it might have been more rational NOT to file a bug report on this, but only to include it in our final test report, as it delayed our testing quite a bit.
This post was also posted on our team blog.