CAST 2012: My session notes
Starting my stay in San Jose with Test Coach Camp, my head was already full of ideas before the conference even began. But that hardly mattered once I got into the swing of conferring with my peers at the conference.
This was my second CAST. Last year I met so many new people that I have stayed in contact with since, online and at other conferences. The difference this year was obvious: I must already have known, or at least met and talked to, a third of this year's crowd. That was a comforting thought as I prepared to go to San Jose. As it turned out, large parts of the other two thirds were highly interesting people as well, so if I had to regret anything from this year's conference, it would be that I did not talk enough to the people I already know. That is a pity when you don't meet very often. But I made so many new friends instead.
Another difference from last year was that this year I was a speaker. Although I now have a couple of conferences under my belt, I was very nervous about performing in as critically thinking an environment as CAST. But it is also a very safe environment, as I found out throughout the conference. It actually turned out quite well. I got good feedback and many interesting and interested questions about Developers Exploratory Testing, which I talked about. I will publish more background and writing on this after I have presented it at Agile 2012 in August.
In most of the sessions I attended, I tried to keep up by listening and structuring my thoughts in mind maps. I am still practicing, and I realized that when I am scribing I tend to focus more on what is written on the slides than on what is being said. But I want to share the maps I have anyway. Be aware: there was just too much great stuff shared at this conference, and choosing which sessions to attend was really hard. On the second day I also took some time off to sleep in and practice my own talk.
Emerging topics: Ilari Aegerter
Ilari showed some nice examples of observation as a huge part of testing. I especially liked the “Lazy people” heuristic. Are you really a better tester if you are better at observing? Video found here.
Emerging topics: Scott Allman
Scott talked about how causality relates to software testing. We often say the cause of a bug was copy-paste, but is it really? Video found here.
Emerging topics: Claire Moss
I had already talked to Claire about her presentation at Test Coach Camp, but I was keen for more discussion of it. She has done a lot in her team to raise the overall understanding of testing. Two things I liked: how her bug board evolved from making testing visible to being a tool for the whole team, and the idea of user personas for test efforts and documentation, although the latter is something I prefer to do more subtly through discussion and common understanding. Video found here.
This session touched on something that is still evolving within development groups and among programmers. DevOps is a reality for some nowadays, which places huge demands on testers' understanding of their products. Although some product contexts have a long way to go before they are anywhere near continuous deployment, testers need to be aware of, and able to cope with, situations that require it. I think TestOps will become a growing aspect of software development. So if you as a tester do not already have an ongoing conversation with your operations team, I advise you to start one. They ARE one of the stakeholders who depend on your testing, whether deployment is continuous or not.
Keynote: Tripp Babbitt
The keynote as a whole can be found here. It was a good keynote from Tripp, with many references to Deming, whom I still have not gotten around to reading much of. For the systems thinking approach to managing IT, I want to point to John Seddon's keynote at Öredev in 2010, where he raised many of the points Tripp did in this session. I feel that many of the challenges we run into in software development and testing stem from bad management practices, the very ones taught in MBA programs. That is a huge problem Tripp brought up, and it is good that these things surface. But when will that thinking feed back into academia? This relates to how my university class in software testing also consisted mostly of old-fashioned factory approaches to testing. If you haven't heard Tripp Babbitt or John Seddon, I recommend both videos.
Iain told us about practices used in medicine for diagnosis and decision making. The way he explained indications and contraindications, and how these can relate context factors to which approaches might seem reasonable, really appealed to me. Although we have not structured it that way, this might actually be in line with the organization-wide test strategy we are currently working on. We have identified aspects of our projects where certain approaches are needed. By then distilling why those approaches work well in those projects, we should be able to spot patterns in new projects where an approach might be needed. I really like the ideas I got from this. However, just as Iain stated himself, I am also a little skeptical about creating a framework wider than one's own organization for this. Maybe it is applicable within certain domains?
Chris Blain and Ben Kelly
They talked about working environments and how these affect both creativity and the testing itself. Testers are themselves equally responsible for their own environment, to make sure testing can be done. That is an important statement, since I have seen way too many testers simply accept their situation and thus fail to deliver valuable testing to their organization. If change is needed and the people around you don't get it, it is important to visualize it.
Keynote: Elisabeth Hendrickson
The keynote slides can be seen here, so I won't describe it in words. It was a compilation of the aspects needed to grasp what “the thinking tester” might mean. Some takeaways that resonated with me:
Testing is not dead: But the context continues to evolve, and so do we.
Testing = Checked + Explored
Testing: Any activity that yields empirical evidence about the extent to which our intentions, our implementation, and the actual business needs are aligned
A Thinking tester is Analytical, Relentlessly curious, Observant, Skeptical, Empiricist, Critical thinker, Investigator
Every bug filed is a defined requirement. Decision made by the tester.
I admire the depth of this talk and how Elisabeth walks the edges together with the context-driven community. However, I don't really agree with her analysis of job ads and the claim that they are increasingly heading toward programmer/testers. I actually think that is more of a general HR/recruitment problem than a sign of the kind of testers companies really need. Of course testers need technical skills, but I don't equate those with programming skills.
That is a whole different story, though.
My talk was about Developers Exploratory Testing – Raising the bar. More will be published later, but here is my overview picture.