Sonntag, 23. Juni 2013

Lightweight Bugtracking

Thanks to Joel Spolsky, today the vast majority of software development teams know how to track bugs. "Painless Bugtracking", as Spolsky calls it, means that you enter categories, severities, reproducibility and stuff like that. That takes time. That is not a bad thing in general, and for some stages of development it is well suited - but not for all. Let's see.

Early stages

In the early stages of development, products tend to be full of bugs, unimplemented functions, inconsistent design and so on. What you need most here is good feedback: do the workflows make sense? Is the UI nice? Is the program stable? But not every piece of feedback must be implemented, and not every bug needs to be reproduced, since many will become obsolete through other changes.

Late development stages

When your product or a new feature gets mature, less feedback is needed. It's time to harden the program and drive the bugs out.


When the product is released and bugs occur, you spend much more time per issue on reproducing and fixing than in earlier stages. That makes sense, because such bugs are probably more complex, and fixes could severely impact other parts of the program.

How traditional bugtracking works

Most bugtrackers are web based and see every bug as a solitary entity. You have an enormous number of fields for categorizing bugs, for example the OS in which a bug occurred. This is important since you may pile up hundreds of bugs, if not more, and you need to manage those masses. You may even have workflows for accepting and assigning bugs.
Traditional bugtracking doesn't think about how bugs are actually discovered. It doesn't distinguish between the bugs a tester finds in a test session and a single bug discovered in production.

How lightweight bugtracking works

Lightweight bugtracking works at the test session level. A test run is conducted against a work item such as a user story, because the test should focus on a local area. The test session produces a test protocol with entries similar to issues in a bugtracking system. The difference is that you don't need to enter as many details: just the issue text, some screenshots and the severity. That is much faster. This alone wouldn't make sense, though. It pays off when the developer fixes those bugs, preferably exactly one developer, in one batch: all bugs get fixed at once. Only the bugs that cannot be fixed in one fixing session get piled up in the traditional bugtracker.

Quality Spy's test protocols:

When conducting a test session - either based on a test plan or exploratory - you note everything you find worth noting. This includes errors, crashes and bad UI layout, but also things you like and find positive. All this is added to the protocol:

The color coding represents the severity of the issue. Red represents an error that should be fixed; orange marks a weak point that should or could be improved if time or budget allows.
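A protocol entry as described above could be modeled roughly like this. This is a minimal sketch with made-up names, not Quality Spy's actual data model, and the green color for positive notes is my own assumption:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    ERROR = "red"          # an error that should be fixed
    WEAK_POINT = "orange"  # could be improved if time or budget allows
    POSITIVE = "green"     # something the tester liked (color assumed)

@dataclass
class ProtocolEntry:
    text: str
    severity: Severity
    screenshots: list = field(default_factory=list)  # images pasted from the clipboard
    fixed: bool = False

@dataclass
class TestProtocol:
    work_item: str  # e.g. a user story
    entries: list = field(default_factory=list)

    def note(self, text, severity):
        self.entries.append(ProtocolEntry(text, severity))

protocol = TestProtocol("US-42: checkout workflow")
protocol.note("Crash when submitting an empty cart", Severity.ERROR)
protocol.note("Buttons misaligned on summary page", Severity.WEAK_POINT)
```

Note how little a tester has to type per entry compared to a full bugtracker form.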

Images play an important role, since most bugs are visible on the screen. They can be added directly from the clipboard. (Compare that with the capture, save-to-file, upload procedure in a bugtracker...)

When the test session is complete, the protocol can be handed over to the developer responsible for the feature, who can investigate issues, fix them and mark progress in the document. For that purpose the protocol can be filtered just like the lists in a bugtracker:
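That filtering step is conceptually very simple, something like this sketch with plain dicts (field names and values are illustrative, not Quality Spy's format):

```python
# A protocol is just a list of entries; filtering it is trivial.
entries = [
    {"text": "Crash when submitting an empty cart", "severity": "error", "fixed": False},
    {"text": "Buttons misaligned on summary page", "severity": "weak_point", "fixed": False},
    {"text": "Progress indicator is a nice touch", "severity": "positive", "fixed": False},
]

def open_issues(entries, severity=None):
    """Entries not yet fixed, optionally restricted to one severity."""
    return [e for e in entries
            if not e["fixed"] and (severity is None or e["severity"] == severity)]

remaining_errors = open_issues(entries, "error")
```

The developer works through `remaining_errors`, marks entries as fixed, and the list shrinks to empty.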

So the biggest difference is that bugs don't get fixed one at a time, but as a series, together.

Why should that be better than bugtrackers? I don't even have a bug count, and what about an archive? I would argue that bugs that occurred early in a development stage aren't very useful for long-term archiving. Bugs should be detected early and fixed early; that's it.

When to use it and when not

So lightweight bugtracking is perfect for testing early and fixing early. Later in the game, the traditional bugtracking process as described by Spolsky is perfect and desirable.
Of course, it will only work when the new feature was developed by exactly one developer, but this should be the case anyway; otherwise something is broken in the development process. When the thing is too big, it should be broken down into valuable chunks that can again be seen as features, not just technical parts.
When used correctly, lightweight bugtracking will save time and money, but more importantly, testers and developers will feel more productive, do less tedious work and are allowed to concentrate on the important stuff - their project - not on managing issues.

Cool Presentation of Quality Spy

I created some cool slides that introduce Quality Spy:

Sonntag, 2. Juni 2013

More than just Pass/Fail

One thing that always seemed strange to me about existing test management tools such as TestLink was the "pass/fail" model. Can a test really just pass or fail? That depends on how you structure your tests, but when I want to conduct things like scenario or usability testing and save the results, "pass/fail" is inappropriate. A much better scale would be "good/bad/fatal".

That's why Quality Spy now comes with completely customizable result types.
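Conceptually, a result scheme is just a named list of allowed result values, with a suite-level choice overriding the project default. A sketch of that idea (names are made up for illustration, not Quality Spy's API):

```python
# Result schemes: each maps a name to the allowed result types.
SCHEMES = {
    "pass-fail": ["pass", "fail"],
    "usability": ["good", "bad", "fatal"],
}

def allowed_results(suite_scheme=None, project_default="pass-fail"):
    """A scheme assigned to a test suite overrides the project default."""
    return SCHEMES[suite_scheme or project_default]

allowed_results()             # ["pass", "fail"]
allowed_results("usability")  # ["good", "bad", "fatal"]
```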

Obviously, the first step is customizing your result types. This can be done for every project:

After you have done that, you can assign schemes as the default for the project or for a certain test suite. For example, I can create a test suite for usability tests and assign the appropriate scheme:

That's it. After that you can execute the tests as usual:

Montag, 29. April 2013

Test Plan Design: Productivity Counts!

For years I've been working with TestLink. It's actually a pretty good tool, but using it just sucks.

It's so darn slow, especially for creating test plans. Designing test suites is a creative task; productivity is important for that, and TestLink is really poor here.

Since I didn't have another tool available in the past (no budget, and TestLink was the only good open-source option I knew), I stuck with a workaround. I created the test plan structure in Microsoft Word, and when I was done, I entered all that stuff into TestLink.

When I started the development of Quality Spy - my own test management tool - I wanted productivity and simplicity to be the key design drivers.

Compare the sluggish web interface of TestLink with this:
  • A native typing feeling, almost as in Microsoft Word (ENTER, type, ENTER, type), with in-place-editing
  • Full drag & drop support
It looks like that:

When a test suite is selected:
  • ENTER creates a new test suite next to the current
  • CTRL+ENTER creates a new child
  • CTRL+LEFT moves the test suite "in" one level (makes it a child of the previous)
  • CTRL+RIGHT moves the test suite "out" one level (makes it a child of the parent's parent)
  • CTRL+UP moves the test suite up
  • CTRL+DOWN moves the test suite down
  • F2 toggles in-place-editing
When a test case is selected:
  • ENTER creates a new test case next to the current
  • CTRL+UP moves the test case up
  • CTRL+DOWN moves the test case down
  • F2 toggles in-place-editing
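The "in"/"out" moves are the interesting part of that list. A simplified model of what they do to the tree (my own sketch, not Quality Spy's actual implementation):

```python
class Node:
    """A test suite in the tree."""
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.parent = parent
        if parent:
            parent.children.append(self)

    def move_in(self):
        """CTRL+LEFT: make this node a child of its previous sibling."""
        siblings = self.parent.children
        i = siblings.index(self)
        if i == 0:
            return  # no previous sibling to move under
        siblings.pop(i)
        new_parent = siblings[i - 1]
        new_parent.children.append(self)
        self.parent = new_parent

    def move_out(self):
        """CTRL+RIGHT: make this node a sibling of its parent."""
        grandparent = self.parent.parent
        if grandparent is None:
            return  # already at the top level
        self.parent.children.remove(self)
        idx = grandparent.children.index(self.parent)
        grandparent.children.insert(idx + 1, self)
        self.parent = grandparent

root = Node("Test plan")
ui = Node("UI tests", root)
perf = Node("Performance tests", root)
perf.move_in()   # "Performance tests" becomes a child of "UI tests"
perf.move_out()  # and back out to the top level
```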
The latest version can be found on sourceforge.

Samstag, 27. April 2013

Risk-Based Testing Cartoon

Cartoon Tester explains what Risk-based testing is all about:

I must mention: Quality Spy also includes some simple tooling for Risk-based testing.

Montag, 22. April 2013

Quality Spy now Supports Risk-Based Testing and a Graphical Outline

Quality Spy now includes a risk-based testing feature as described by testing guru James Bach.
It should be used before designing the actual test plan and allows you to identify (bug) risks. One benefit is that you can focus scarce testing resources on high-probability, high-impact risks first. I highly recommend this article by James Bach for the full background.

I'm also proud of the new outlining feature. It allows you to add a graphic file to your test project. It should be used to describe the test plan and process in a graphical way.

Both features are optional and must be enabled per project:

A risk analysis could look like that:

For example, I know that testing with a non-admin user account can find certain bugs, but I think that for the application under test such bugs are unlikely to occur, so I wouldn't invest time in that.

And this is what the outline looks like with a practical example:

It's really just an image you can link here. It may overlap a little with the test strategy, but you could also document which component should be tested by which tester (group), or highlight key components, and so on.

So it's really versatile.

The latest version can be downloaded from sourceforge.