Archive

Posts Tagged ‘exploratory testing’

A new team's encounter with DET/TET as a framework for testing – Part 3-Running with the basics!

June 24, 2013 2 comments

First of all, I want to be clear that this series of posts has nothing to do with my work at Atlassian. These posts are experiences from previous assignments with Jayway, and I am just now gathering my thoughts on them.

Be sure to read the previous posts, which provide some background:
Part 1-Context
Part 2-Intro workshop

Basics

The basic principles of DET (Session, People, Focus and Reporting) are pretty easy to follow, so just doing it basic style will give some value. This is of course the place for a disclaimer: running with the basics here is similar to Scrum, with easy-to-understand rules that are just not so easy to follow. The sections below are about some aspects of the basics that we tweaked to fit the context. Read more…

Agile Testing – Context driven testing perspective

January 15, 2013 20 comments

The context-driven school of testing is a good representation of my normal state of mind when it comes to my profession. I really like to hang out with like-minded people who are outspoken about belonging to this community of testers. I also recognize that many of the people within this community are interested in, and practicing, good Agile testing. When I started to write this post I realized I had given it a shot three years ago, when I still hadn't wrapped my head around context-driven. Although my knowledge and experience have changed a lot during that time, I still think that post is valid in some sense.

Before continuing, read my previous posts in this series:
Bridging communities
Agile Testing – Traditional testing perspective
Agile Testing – Agile perspective
Agile Testing – Programmer perspective
Agile Testing – Project management perspective

Context driven testers

Similar to the Agile movement, the context-driven school has a set of principles (http://context-driven-testing.com/), followed by explanations that clarify them. Read more…

DET – The original mindmap

June 19, 2012 Leave a comment

I admit it: inspired by the recent flood of mind maps from the Ministry of Testing, I also wanted to post one.

The mindmap

Developers Exploratory Testing was created by my colleague Davor Crnomat. Over the last few years it has evolved as a practice in our company. Since I am presenting my experiences with it, I wanted to create an overview of the original rules. The first time Davor presented them, they were just a list of long sentences, so to create the mind map I have shortened and grouped them. I have also realized that not all of them appear in the latest articles written about it, but then again, it is the original set that I have drawn. Read more…

Elaborating on DET – ETDD evolving?

January 5, 2012 11 comments

I got some positive responses to what I wrote about DET on my blog and in my CAST session proposal, so I thought I would elaborate a little on where I think this could be going. I will probably cover more hands-on aspects in the coming weeks, but I really want to explain a vision I have for it first.

How about including the business stakeholders?

In my current project I started involving our main business stakeholders in our test sessions from the moment I joined, and they have become really excited about attending once a week. Read more…

Developers exploratory testing – Expanding its value

December 17, 2011 6 comments

This post was also published on my company blog.

There is a common practice in our company of performing Developers Exploratory Testing sessions, explained by my colleague Davor here. The cool thing is that this way of performing higher-level testing has actually become accepted by our developers, and they really enjoy it.

In my current work of developing our organization-wide practices for quality, I have made a deep dive into how DET is carried out on a regular basis. What I have seen is that DET is accepted and acknowledged as a valuable practice; however, it is not really carried out to its full potential. There are many details and aspects of it to work on, especially regarding reporting and follow-up.

The other day I was asked to help one of our teams with a DET session. Read more…

CAST 2011 – Testing competition with Happy Purples

August 22, 2011 1 comment

First of all, I'll have to report a bug in James' blog post: we only got $23 for the worst bug report award. =)

I would also like to thank James for the fun competition he set up. It was really a learning experience, and in retrospect I would maybe have put even more effort into the learning parts throughout the exercise. This, and my ability to concentrate, may also have been impaired by the time of day (after 6 pm, at the end of a long conference day) and the fact that I was still jet-lagged. But enough whining; here is the story and my learnings, which will hopefully help me make better decisions in the future. Read more…

Collective note taking – More value from your test notes?

June 15, 2011 4 comments

For quite some time now I have struggled with making note taking a natural part of my personal progress while testing. I can say that it has really made an impact on many other aspects of my work in other situations as well. I am actually quite proud to say that it has greatly improved how I perform in general, and how easy it is to follow up when I am done with anything. Now I would like to take this a few steps further and explore how notes can give more value to the whole team in a project setting, both my own notes and the team's collective notes.

Read more…

iPhone testing and bugs

September 14, 2010 2 comments

Some 1–1.5 years back, when the iPhone 3G had become such a success in Sweden, developing apps for it became a hot topic in the software development business. Everyone wanted an iPhone app for their product or company, and we were quick to pick up on this need and make it happen. Now we are pretty good at developing Cocoa apps.

From the testing perspective, I started to look at whether testing on the iPhone was different from any other software testing. I started off by asking a senior app developer for his view on this, and got this answer in an email:

The iPhone has very many ready-made components, which means that the developer doesn't have to reinvent the wheel all the time. These components are well tested by Apple.

Focus in iPhone app testing should be on these levels:
1. Application logic
2. Verifying that the Apple interface guidelines are followed
3. Input validation

As a tester, this “easy to develop, easy to test” mentality should, from my perspective, set off all kinds of warning bells.
Read more…

Logging: Test tool and system under test in one

February 2, 2010 2 comments

James Bach brings up a system's logging as a tool for the exploratory tester. I really like this, and I am going to see if I can adapt his list to fit my specific project and situation. I have been thinking about this for some time now, but mostly about the information within the log entries, not the logging mechanism itself.

The system I am testing at the moment has extensive logging features, or at least it logs a great many things at multiple levels. I use the logging roughly the way he mentions, but without relying on it as extensively as it sounds in his post. The system is pretty straightforward: one input creates one single output. Apart from that, there is only logging and status management to keep track of events.

In our case, when testing a part of the system and keeping my eyes on the logging, it struck me that there was a line “check that x has not failed”. This line was very common, since it was logged for every successfully created event. But thinking about it, was this a useful entry? NO! It was apparently so unremarkable that none of the developers had given it a thought, but it was a horrible entry. Why? One of our statuses for events was “FAILED”, so if an event failed, the obvious thing to do would be to grep for that word in the logs. But having every successful event log the word failed would be devastating for any operator trying to find the cause of a failed event.

What I would like to extend in James' list is bullet 5: “- Event type description: short, unique human readable label” and “- Event information: any data associated with the event that may be useful for customer service or assessing test coverage; this data may be formatted in ways specific to that event type.”

I want to point out the value and importance of keeping these string fields from interfering across different types of log entries. As in my example above, keeping the word ‘failed’ out of successful events at all times is crucial. The problem is that this is so easy to get wrong. Developers need to think about it when coding, and not only log what the code does but write the human-readable log entry so that it is consistent from the user/operator perspective. If something succeeds, it does not matter that the code is “checkJobNotFailed(job)”; the logged entry should state the success.
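To make the point concrete, here is a minimal sketch in Java (the class, method and field names are hypothetical, not taken from the actual system; it uses the standard java.util.logging API). The check the code performs stays the same, but the message written on success never contains the word ‘failed’, so grepping the log for “FAILED” only matches events that actually failed.

```java
import java.util.logging.Logger;

// Hypothetical types for illustration only.
enum Status { CREATED, FAILED }

class Job {
    final String id;
    final Status status;
    Job(String id, Status status) { this.id = id; this.status = status; }
}

public class EventProcessor {
    private static final Logger LOG = Logger.getLogger(EventProcessor.class.getName());

    public void process(Job job) {
        if (job.status != Status.FAILED) {
            // Bad: writes the word "failed" for every successful event, so an
            // operator grepping the log for failures is drowned in noise.
            // LOG.info("check that " + job.id + " has not failed");

            // Better: describe the outcome from the operator's perspective;
            // "failed" only ever appears when something actually failed.
            LOG.info("Event " + job.id + " processed successfully");
        } else {
            LOG.warning("Event " + job.id + " FAILED");
        }
    }

    public static void main(String[] args) {
        EventProcessor p = new EventProcessor();
        p.process(new Job("42", Status.CREATED));
        p.process(new Job("43", Status.FAILED));
    }
}
```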

Another type of risk with logging content is of course usernames and passwords. If this kind of information is logged in readable form at any logging level, the log files must be handled according to the security policy that governs that information.
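As a sketch of how that risk can be reduced at the source (again with hypothetical names, and assuming the security policy allows logging a masked form at all), sensitive fields can be masked or dropped before the entry is ever written:

```java
import java.util.logging.Logger;

public class LoginAudit {
    private static final Logger LOG = Logger.getLogger(LoginAudit.class.getName());

    // Keep a small hint of the value for troubleshooting without exposing it.
    static String mask(String value) {
        if (value == null || value.length() < 4) {
            return "****";
        }
        return value.substring(0, 2) + "****";
    }

    public void logLoginAttempt(String username, String password, boolean success) {
        // Bad: writes the password in clear text into the log file.
        // LOG.info("Login for " + username + " with password " + password);

        // Better: never log the password at all, and mask the username if the
        // security policy treats it as sensitive information.
        LOG.info("Login attempt for user " + mask(username)
                + (success ? " succeeded" : " was rejected"));
    }
}
```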

I have given two examples to point out less-than-good logging events. So, what other examples of “good vs bad” log entries are there? Remember that I am not talking about the logging mechanism and its good and bad practices, like the ones presented by James, but about the content of the event entry messages.

Please enlighten me with more input.

Afternoon bugs

January 21, 2010 6 comments

How often do you find a really serious bug first thing on Monday morning? Or any morning for that matter, when you have just started looking at it?

How often is it the other way around: the first thing you do is try to remember what you did on Friday to cause that serious issue you found at 4:30, just before you were leaving for home and for the weekend?

For me, it is always the latter. Not only on Fridays but on any day of the week, the most common time for me to find bugs is really late in the afternoon, preferably when I have an appointment or something right after work. =) But why is that?

I have been thinking about three possible explanations:

1. You could be testing a really hard-to-set-up system that takes half the day to set up before it is possible to test even a little bit. This is the reason for me every now and then. Recently we have been running load tests during the night, and they have failed a couple of times, so the mornings have been devoted to investigating logs for failure explanations, and then it has taken some time to reset databases and get up and running again, usually with a bug fix or configuration tweak that also takes some time to verify.

2. Exploring the product and its features and areas all day has made you more sensitive to the weaknesses, and poking the system in these spots makes the really tricky bugs appear. These might be the most severe afternoon bugs, or the opposite: the ones that are too complex to be a threat to the product and are just hard to fix, with no value added if fixed.

3. You are unintentionally defocusing from the tasks at hand because it is getting late in the day, and the lack of energy is carrying your senses in completely different directions from what you have been doing all day. Defocusing is actually a strategy that comes up in rapid testing contexts for finding other types of bugs than you normally do; the lack of energy just puts you in this mode automatically during the afternoons. These bugs are all too often hard to recreate for investigation, since in your low-energy state you may fail to observe the reason behind the bug.

Have you experienced the afternoon bug heuristic? What has been causing you to find them? And specifically, why do you find them at that time?