
Good programmers check, Great ones test as well

I wrote a comment on Janet Gregory's post about programmers as testers, and I still feel I need to elaborate on that comment a little more, using the concept of testing vs checking and how the two differ. While Janet has already clarified in a more recent comment that we are talking about two different things, I think it is very important to keep these differences in mind when talking about testing, especially in the Agile development context, where the word testing has taken on many different meanings in different discussions.

So what is the difference between testing and checking?

Michael Bolton wrote a really good (and quite long) blog post series about testing vs checking, followed by a lot of feedback and elaboration from others (links below). Since then, I have taken every opportunity I got to discuss this with other developers. And when the thinking behind this whole discussion is laid out in a simple and pedagogical manner, there has really been less resistance to these ideas than I expected.

Let me explain

Often, when programmers are asked if they test, they hopefully say they do. But when asked what kind of testing they carry out, the answer is mostly about writing and running what they call automated unit or integration tests. Considering the discussions in the linked blog posts, these would really be checks. This is how Michael defines a check in the comments on Aaron's post (original post):

  1. Is there an observation being made?
  2. Is there a decision rule linked to that observation?
  3. Could the determination be made by a machine?

If the answer to all three questions is Yes, then you’re looking at a check; if not, it’s a test, or neither a test nor a check.

This is the approach I have taken in my discussions with developers. The answers to these questions will always be yes when talking about the unit/integration checking code that my developer friends call tests. The most important thing to add here is that the evaluation of the observation really has to be made by a thinking human being for the whole process to be called testing.
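
To make the three questions concrete, here is a minimal sketch of the kind of unit check I have in mind, written in Python; the add() function and the whole example are made up for illustration and are not taken from any of the linked posts:

  # A made-up example of a "check": an observation, a decision rule,
  # and a verdict that a machine can reach entirely on its own.
  import unittest


  def add(a, b):
      """The trivial code under check (hypothetical example)."""
      return a + b


  class AdditionChecks(unittest.TestCase):
      def test_add_two_numbers(self):
          result = add(2, 3)           # 1. an observation is being made
          self.assertEqual(result, 5)  # 2. a decision rule is linked to it
          # 3. the pass/fail verdict requires no human judgement


  if __name__ == "__main__":
      unittest.main()

Designing that assert takes human thinking, and that part is testing; but once it runs unattended and we just accept the green bar, what the machine performs is checking.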

Does this distinction matter?

As a tester myself, it matters a lot to me that the software delivered for testing has some level of quality. And experience shows that many things can happen to a software product when its modules are put together in their context. If the modules have only been checked, in isolation and in integration, against the predefined and expected criteria built into an automated check suite, there are usually quite a few issues left that are easy to discover. Programmers who are aware of this would not deliver the product to test without first running it themselves and exploring the functionality at the product level (running from the IDE doesn't always count).

Programmer's tip: do some real testing yourself and discover those easily-found, easily-fixed bugs after checking is done, but before declaring the implementation completely Done.


As an example of good practice, my colleague Davor Crnomat introduced DET, Developer Exploratory Testing, in his Scrum team. Not only did the programmers perform testing, but with proper coaching they were able to perform good testing as well. And they enjoyed it.

Some more blog posts in this valuable discussion of the concept:

Markus Gärtner speaks of active vs passive testing, which is another way of looking at it. However, I see a disadvantage in using that wording in my developer discussions when I want to make the point that having a unit check suite is not testing at all.

James Bach points out the need to go through a debate/discussion in order to understand or acknowledge differences between words at this level. He also brings up the important aspect of being able to differentiate automated testing from automated checking, where the testing/checking distinction makes it easier to talk about tool-aided testing vs tool-executed checking.

Scott Barber writes about being able to grasp and accept new discussions like this when they come up, rather than dismissing them because they don't fit your context, and about the value of these kinds of discussions. He also has some good comments on Aaron's post.

Aaron Evans uses another pair of words, verification and exploration, to distinguish the two meanings. I like them when used in a discussion between testers. However, from my perspective, when speaking to programmers about these things, those words would not really suffice for my purpose of clarifying the distinction.

Btw, I wonder if we will ever see a first unit check framework, using the word 'check' instead of 'test' as the trigger for checking input against predefined output. That would really be something. =)
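
If someone were to build one, it might not take much more than renaming the trigger word in an existing xUnit framework. A rough, hypothetical sketch in Python, where the CheckCase and CheckLoader names are my own invention and not an existing library:

  # Hypothetical sketch: ordinary unittest machinery, but with 'check'
  # as the trigger word instead of 'test'. Nothing here is an existing framework.
  import unittest


  class CheckLoader(unittest.TestLoader):
      # Discover methods named check_* instead of test_*.
      testMethodPrefix = "check"


  class CheckCase(unittest.TestCase):
      """Same mechanics as TestCase, just a more honest name."""


  class DivisionChecks(CheckCase):
      def check_integer_division_rounds_down(self):
          # Predefined input, predefined expected output: machine-decidable.
          self.assertEqual(7 // 2, 3)


  if __name__ == "__main__":
      unittest.main(testLoader=CheckLoader())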

Update:

I found Anders Dinsen's post on why acceptance tests (automated checks at a higher level) are not enough.

 

  1. March 29, 2011 at 23:34

    Great reminder about the discussion. I'd forgotten about it in my day-to-day testing and tools development.

    One goal I have is to create tooling that allows for the integration of "checking" and "testing" activities, because at the end of the day they produce the same artifact: a bug (though that might not be exactly precise).

    In practice, the idea is to allow for the mapping of exploratory work to defect and requirements tracking tools. The general idea being to provide integration points for automation, traditional test case definition, and exploratory sessions into coverage management and regression reporting.

    • Sigge
      March 29, 2011 at 23:50

      Yes, this discussion started some time ago, but it is still just as valid. And I think it is important to carry the discussion to a broader audience in software development, not just amongst testers. There is always a need to establish some common ground on these things.

      I like the idea of creating tools that let checking and testing activities merge and somehow produce common reporting and follow-up. I would really like it if you could give me some reading or background on where you are at the moment with this.

      In my current project I actually do some ET follow-up and reporting in a very traditional test management tool, but it is not optimal. We do get some requirements coverage metrics for management. However, I am quite sceptical of those types of numbers in general.

    • Sigge
      March 29, 2011 at 23:54

      Btw, I can't seem to get into your http://qa-site.com/

  2. Ruud Rietveld
    March 30, 2011 at 06:46

    (This is a developer speaking.) To me as a developer it seems that you are trying to say that our automated test suite is sort of worthless, as it is only checking. When I saw the title on Twitter I interpreted it as "Good programmers check (i.e. manually verify that their code works by doing some test runs), Great ones test as well (i.e. write a suite of automated tests)". Here you can see that my view is almost the opposite of yours. But I'm coming from projects where I was the only one to write automated tests at all (unit or UI).

    I see the two activities you describe as "automated testing" and "manual testing". I believe most things can and should be tested automatically, but you cannot forgo the manual testing, as that discovers a whole different range of bugs. People who write automated tests and people who write manual tests should work together so that not too many of the same things are checked. (Note that I said people on purpose; I don't care whether it is programmers or testers, we're all developers.)

    If you want to spread these ideas outside the testers' world, be careful how you word things.

    • Sigge
      March 30, 2011 at 18:54

      Hi Ruud,

      Thank you for commenting. I am happy that you, as a developer, make the effort to get me to sort things out. That way I can do a better job of explaining this in text, rather than only in the face-to-face discussions I am talking about. It is hard for me to know which context-related pieces are missing. First of all, I fully agree with you that a developer can be any type of person working on a development project.

      I want to point out that I am not saying that an automated check suite is worthless. Having a good check suite like that brings confidence when restructuring code for new development. However, the distinction I would like to make is that when you are developing a future check, you as a developer are really doing testing. James Bach explains it really well in the first comment he got:

      “Strictly speaking you are “doing testing” by “writing checks”, but not actually “writing tests.” If you run the checks unattended and accept the green bar as is, then that is not testing. It requires absolutely no testing skill to do that, just as you wouldn’t say someone is doing programming just because they invoke a compiler and verify that the compile was successful. If the bar is NOT green, the process of investigating is testing, as well as debugging, programming, etc.”

      The most important thing here is the human process of evaluating results or observations. If you are making a trivial assert on a predefined input against an expected output, that is something the machine can do for you: checking.

      About the distinction you make between manual and automated testing, here too I see a difference in our wording. Again I would like to refer to James Bach's post, and the last paragraphs there:

      I now see a difference between automated testing and automated checking, for instance: automated testing means testing supported by tools; automated checking means specific operations, observations, and verifications that are carried out entirely with tools. Automated testing may INCLUDE automated checking as one component, however automated checking does NOT include testing.

      Making this distinction is exactly like distinguishing between a programmer and a compiler. We do not speak of a compiler “writing a program” in assembly language when it compiles C++ code. We do not think that we can fire the programmers because the compiler provides “automated programming.” The same thing goes for testing. Or… does that blow your mind?

      I am sorry if, as you say, my wording is unclear, but I try to strike a balance between writing a book about the context I am talking about and getting a good discussion going on the key points that matter when testers and programmers discuss this.
      Thank you!

  3. emil h
    March 30, 2011 at 18:43

    It seems to me that there can't be any tests by your definition of checks. Anything that can be tested by a human can be checked by a machine. The thing is that many tests are just not feasible to automate in this manner, since the checks would be fragile (they need to be respecified when requirements change) or would be very costly. Can you give a counter-example of a test that couldn't possibly be done by a machine?

    • Sigge
      March 30, 2011 at 20:51

      Hi Emil,

      Let me rephrase that the way I see it. No, there are no checks that can be made by humans that cannot be made by machines.

      But anything else can be tested as well. One big thing in testing is framing your tests, since you can never test everything. If you apply similar thinking to checks, you will have to work quite hard to specify checks for every possible input to a system. This is where testing has to take over: a human explores and investigates the most probable failure scenarios and risks while exercising the product in a more production-like context. That context is very hard to grasp fully while developing the product, let alone to create checks for every possible scenario. And in my experience, those are the scenarios that never get checked.

      As an example, I would put it this way. Suppose you check everything (every line of code, business rule, decision path, etc.) in your software. Those checks will be runnable by the machine. But then add the possibility that, while coding all of this meticulously, you misunderstood a customer requirement. That is, you check everything you coded, but you will have to test it with human knowledge of the customer's wishes to actually verify that it is what the customer wanted. Apart from this, as you already stated, everything that you cannot check is a possible bug that needs testing to be found.

      Let me know if this did not answer your question completely.
      Thank you for your comment!

  4. Sigge
    March 30, 2011 at 20:56

    As a comment to both Ruud and Emil about manual checks.

    I actually try to limit all manual checks, as they are really time-consuming both to specify and then to follow. I call this scripted testing: the test is first designed and documented, and then followed for the check. So really that would be scripted checking. But as a human being it is really hard to do that in a pure checking manner, since you always observe things that are not included in the script. Those observations are rarely ignored; if something is a strange behaviour, it really should be filed as a bug. Such surrounding observations cannot be evaluated when checks are made by a machine.

  5. David
    March 31, 2011 at 00:37

    Not sure any of this will make sense, but here are my thoughts on the subject.

    In essence, when a test is defined it becomes a check, regardless of verification vs validation, automatic and/or manual. One could then argue that the checks are tests, as they are the outcome; however, not all checks are the outcome of tests. Checks created without objectivity shouldn't be considered tests but rather validation. Defining objectivity isn't easy, but one definition is that the person developing the code isn't objective, since she will have behind-the-scenes knowledge.

    To sum it all up: whether something is a test or a check is a matter of opinion and definition concerning objectivity. The challenge for the programmer regarding testing is to be objective, and that requires both discipline and imagination; objectivity is the main advantage a tester has over a programmer. If the programmer can't stay objective, all she will produce is checks.

    /Good night.

  6. David
    March 31, 2011 at 01:07

    Damn you 🙂 I was hoping to get some sleep, but I can't seem to relax until I've written yet another entry.

    An example: look at TDD. One reason for defining the "tests" first is that you are likely to be more objective, since you haven't yet written the code you're supposed to verify.

  7. David
    March 31, 2011 at 04:40

    I really shouldn’t write posts late at night; it only results in a grammatical mess 🙂

    • Sigge
      April 2, 2011 at 14:35

      David,

      If I try to use your own words, the shortest explanation of a check would probably be to equate it with validation, as you put it. As in doing data validation against a schema.

      As I wrote to Ruud, the activity of creating that schema would be one type of testing activity, but it is not the one I intended the title of the blog post to refer to. The testing I refer to is the activity of running the software as a user/tester before the programmer sets development to Done for that specific code, so as not to rely only on the checks created and run before handing over to test.

      Regarding objectivity when performing tests or checks, I don't really see the importance of striving for objectivity, as long as you are aware of that bias. Of course, the check suite is completely objective, but it will never reveal those really important bugs or issues.

  8. Ruud Rietveld
    March 31, 2011 at 07:15

    Aha, I see what you mean now.
    A check is just following a script and 'checking' whether the predefined input matches the predefined output. This can be manual or automated.
    A test, however, is a session of testing.
    Testing is an activity in which you basically create the check scripts, where you have to decide what to check for and what can safely be skipped.

    If this understanding is correct, I think we are discussing technicalities and linguistics…?

  9. Ruud Rietveld
    March 31, 2011 at 07:34

    …hmm… and about the title of this blog: it now implies to me that good programmers run the tests (= checking), and great programmers also write them and investigate the results when they are not green?
    If that interpretation is correct, I would go further and say: dumb programmers not fit for the job check, the rest of the programmers test as well. No programmer that I know delivers his code without doing at least *some* testing…

    • Sigge
      April 2, 2011 at 14:12

      A check is just following a script and 'checking' whether the predefined input matches the predefined output. This can be manual or automated.
      A test, however, is a session of testing.

      Here you understood me correctly. A manual or automated check of predefined input against output is checking. This is what David calls validation, as I understood it from his programmer background as well.

      Testing is an activity in which you basically create the check scripts, where you have to decide what to check for and what can safely be skipped.

      This part I don't fully agree with. And I realize that I myself was not really clear about it. Yes, the activity of creating check scripts could be said to be a testing activity. But for me, it constitutes only one type of testing activity.

      And about the title, I now realize that I wasn't completely clear. The testing activity of creating the checks is not the testing I am implying in the title. The testing in the title means some kind of smoke testing session for programmers to execute after the check suite has run through OK, but before setting development to Done.

      For me, that activity cannot be automated and needs human guidance, experience and evaluation. The programmer runs the application as a user, just as the tester will, and in this way finds the easy and really obvious bugs in that scenario. In my opinion, it is not until those have been washed out by the programmer that the software should be considered ready for test.

      • Ruud Rietveld
        April 4, 2011 at 12:28

        Most developers I know then do at least some testing, and no checking, since they do not plan – let alone automate – their tests.
        A quick smoke test, seeing whether the program works on one run with one on-the-spot chosen set of input data, is enough for a lot of developers (the mediocre ones, imho). It is the rigorous creation of test scripts, from unit level to UI tests, that makes the difference. But even then you should never forget to run the app and see how it looks, since a tool cannot check that.

