A new team's encounter with DET/TET as a framework for testing – Part 3: Running with the basics!
First of all, I want to be clear that this series of posts has nothing to do with my work at Atlassian. These posts are experiences from previous assignments with Jayway, and I am just now gathering my thoughts on them.
The basic principles of DET (Session, People, Focus and Reporting) are pretty easy to follow, so even a basic approach will give some value. A disclaimer is in order here, of course: running with the basics is similar to Scrum, with easy-to-follow rules that are just not so easy to follow. The sections below cover some aspects of the basics that we tweaked to fit the context.
With quite a large team, the sessions needed a stated agenda to explicitly highlight the time box. Here is an example:
9.00 Introduction, mission and focus areas
11.00 End session
At first we dedicated Friday mornings 10-12 to sessions, but this was revised to 9-11 with an extra buffer for decisions afterwards. Changing the start time to 9 also removed the Friday daily standups (usually 9-9.15), but those were pure waste anyway, since everyone was testing on those days.
Test time was revised from 1h to 1h15min after a couple of sessions, since the overall feeling of not finishing improved with just 15 extra minutes. In a later reflection, the dev manager attributed the need for extra time mainly to getting used to the whole format of testing. However, I think there are more variables at play:
- Complexity of focus area
- Depth of testing
- Form of reporting and note taking.
A big part of running the sessions is the people involved, and assigning pairs to focus areas. The most important thing here is the pairing, and its biggest benefit is the focus it gives the people involved. Both persons in a pair share the responsibility to actually start testing, which gives the session a kick-start, and keeping the focus on testing throughout the whole session is a win as well.
A hard thing was to actually know which people were taking part in testing, so that the pairs could be planned. We tried both opt-in and opt-out solutions, but neither works well without discipline from the team. Fortunately, the development manager kept pushing people to take part in testing, preferring the opt-out alternative.
Since we had quite a few people involved in testing, we discovered that this team's limit was 12-14 people testing at the same time. This is quite a lot of people, including 2 facilitators and a maximum of 5-6 test pairs/triplets. The limit was mostly set by the amount of test results (bugs/questions/notes) we could handle and act upon in the debrief. A possible solution is to follow up the debrief with a smaller and more focused triage meeting, which we did on some occasions.
Because of the nature of the product, it was never hard to find new areas to focus on. The bigger problem was prioritising the most important area of the week, which depends on many different factors such as:
- Areas where code has changed, development focus
- Areas where it is always important that no new issues arise, customer focus
- Areas where findings will be acted upon, management focus
We also learned that focusing a DET session on new functionality places higher demands on the facilitator:
- The area needs to be smoke-tested by the facilitator
- A short demo is needed when introducing the testing
- Findings don’t necessarily become issues in the bug tracker; they might be handled differently.
Debrief meetings can take some time when you are not used to them, but after about 2-3 sessions they kept to the time box.
Because of the big group of people attending the debrief, we did it in two steps. First, every pair got to say just a couple of things to summarise their session. After that we went through every pair's session notes and discussed bugs and improvements. It proved valuable to keep the discussion to the same vocabulary for issue types as used in the bug tracker, i.e. bugs, improvements and documentation. We also considered adding new types at a later stage to fit other purposes that we saw.
The company uses the Atlassian tool suite extensively, so it was natural to keep the written reporting from sessions on the wiki and the bugs in Jira.
If there is anything above that needs clarification, please leave a comment. Or you can continue with the next post: