10 Out Of 10

How to File a Perfect Bug Report

My criteria for a perfect bug report…

  1. The report is filed correctly.
  2. The issue identified affects the latest version of the software.
  3. The report includes a test script illustrating the problem
  4. … which is self-contained
  5. … and is minimal
  6. … and conforms to the Test Anything Protocol.
  7. The report includes an explanation.
  8. The report includes a patch
  9. … which is well-written
  10. … and obeys coding conventions.

Obviously all reports of genuine bugs are welcome, but that doesn’t mean all bug reports are equal; some are better than others. Getting 10 out of 10 is a lot of work, but even 6 out of 10 is better than average in my experience.

Why is writing a good bug report important? Because the better the bug report, the faster the issue can be solved; and the faster the issue is solved, the sooner you can use the fixed software.

So let’s look at those criteria a little more closely.

File the Report Correctly

My CPAN releases (indeed most CPAN releases) include a link to a bug tracker fairly prominently in the documentation. If you file a bug report anywhere other than that bug tracker, it has not been filed correctly.

There are occasionally reasons to follow up the report with additional communication through other channels (email, IRC, etc), but if a bug isn’t filed on the bug tracker, then I’m likely to lose track of it. One reason to follow up by e-mail might be that you have an example of some input data that some code of mine is choking on, but you don’t want this data in the public domain.

Another part of filing the report correctly is categorising it correctly. If you’re filing a feature request, mark it as “wishlist”, not “critical”.

Check which Releases are Affected

Make sure you have the latest released version installed and that the issue can still be observed. Except in rare cases where there are multiple officially supported branches of the software, most developers of open source software are unlikely to be interested in bug reports that only affect historic releases.

For extra brownie points, grab a copy of the current development code from the project’s source code repository and see if this is affected too.

In your bug report, mention which versions you have tested the issue against.
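
If you’re not sure exactly which version you have installed, a quick one-liner will tell you (Some::Module here is just a placeholder for the module in question):

perl -MSome::Module -le 'print Some::Module->VERSION'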

Write a Test Script

Here’s where you can really pick up the points: four points for a good test script. A test script is important for the following reasons:

  • it proves that the issue exists;
  • once the issue has been fixed, it tells the developer that the fix works; and
  • if the developer retains the test script as part of the project’s test suite, it should prevent the issue from reoccurring in the future.

So what is a good test script? In the context of a bug report, it’s a script that currently “fails” (whatever is meant by failure… crashing or outputting the text “not ok” are good) but would pass if the issue were fixed.

If possible, the script should be self-contained. It shouldn’t, for instance, rely on files that might not exist on my system. It shouldn’t download stuff from the Internet: if it needs a copy of some HTML, then that can be hard-coded into the test script; if it needs to perform actual HTTP requests to illustrate the problem, then it should launch a local Test::HTTP::Server instance.
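
Here’s a sketch of what I mean by self-contained: the HTML is hard-coded in a heredoc rather than fetched from the web. (My::Link::Counter and its count_links function are invented purely for illustration.)

use strict;
use warnings;
use Test::More tests => 1;
use My::Link::Counter qw(count_links);   # hypothetical module

# Hard-code the input rather than fetching it from the Internet.
my $html = <<'HTML';
<html>
  <body>
    <a href="http://example.com/one">one</a>
    <a href="http://example.com/two">two</a>
  </body>
</html>
HTML

is(count_links($html), 2, 'count_links finds both links');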

The script should be minimal. By which I mean it shouldn’t include anything not necessary to illustrate the problem. I do not mean you should be playing keystroke golf. Keep: stuff that is necessary for it to be self-contained; stuff that is necessary for clarity; stuff that illustrates the problem; stuff that illustrates due diligence (e.g. “use strict; use warnings”).

This is where this article gets Perl-specific. You should use Test::More or another module that produces TAP-compliant output. Why? Because this makes it easy to add your test case to the project’s test suite, helping the developer avoid future regressions.

Here’s an example test case.

use strict;
use warnings;
use Test::More tests => 1;
use My::Prime::Number::Checker qw(check_prime);

my $result = eval { check_prime(0) };
ok(!defined $result, "check_prime(0) dies or returns undef")
    or note <<'EXPLAIN';
It is nonsensical to ask whether a non-natural number is prime.
The check_prime function should return undef, or die if asked to
check such a number.
EXPLAIN

(Thanks Vyacheslav Matyukhin for the fix to the above test script!)

Test scripts should avoid producing unnecessary output to the screen except when they’re failing. In a large test suite, output from noisy passing tests can obscure the real failures.
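
With Test::More that generally means preferring note over diag for informational output (note is hidden by a non-verbose test harness, while diag is always shown), and only emitting diagnostics when a test actually fails. For example (the values here are stand-ins):

use strict;
use warnings;
use Test::More tests => 1;

my $result = 42;   # stand-in for whatever the real test computes

# note() output is hidden unless the harness runs in verbose mode;
# diag() output is always shown, so save it for failures.
note "testing with input 42";

ok($result == 42, 'got the expected result')
    or diag 'unexpected result: ', explain($result);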

Explain the Issue

So you’ve written a test script; a script that currently fails, but ought to pass. Explain why it should pass. Convince me that the issue you’re experiencing isn’t intended behaviour.

Good arguments:

  • Current behaviour violates a specification from a recognised standards body. If I’m aiming to implement a published standard, then a test case showing my implementation falls short is very persuasive.
  • Current behaviour impedes interoperability with other software.
  • Current behaviour contradicts the documentation. Of course, that might just mean that the documentation needs changing.

For extra brownie points, embed at least an abbreviated form of this explanation into the test script.

Supply a Patch

This will not always be possible. Not every user of software has the necessary skills to modify that software. But if you have the ability, the inclination and the time to supply a patch, then this will greatly increase the speed at which the software maintainer is able to release a fixed version.

That said, if the patch is badly written it may not be much help. Things to avoid are:

  • unreadable code;
  • slow or otherwise inefficient algorithms;
  • massive refactoring where a one- or two-line fix would do; and
  • introducing unnecessary additional dependencies.

Try to match the surrounding code with regard to indentation and spacing, naming conventions and so forth.
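
To give a rough idea of the sort of patch that’s easy to apply, here’s what a fix for the imaginary check_prime bug above might look like as a unified diff (the file name and surrounding code are invented for illustration):

--- a/lib/My/Prime/Number/Checker.pm
+++ b/lib/My/Prime/Number/Checker.pm
@@ -10,5 +10,8 @@
 sub check_prime {
     my $n = shift;
+
+    # Primality is only defined for natural numbers.
+    return undef if $n < 1;
     return !!0 if $n < 2;
     my $limit = int sqrt $n;
     for my $i (2 .. $limit) {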

You’ve got to roll with it

Now, there is always a danger that even if you’ve got 10 out of 10 for your bug report, the software maintainer just doesn’t agree with your arguments.

Say I’ve written an implementation of the FooBar 1.0 specification, and I don’t implement the xyzzy feature correctly. This may be deliberate: perhaps the technically incorrect implementation is actually more interoperable with other FooBar implementations, or maybe I think the xyzzy part of FooBar is plain dumb.

But in these cases I could probably be persuaded to accept a patch which can be used conditionally. For instance:

my $parser = Foo::Bar->new(xyzzy_compliance => $boolean);
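
Inside the module, such an option might amount to nothing more than a conditional on the new attribute. This is purely an imaginary sketch; Foo::Bar and its internals don’t exist:

package Foo::Bar;

# The xyzzy_compliance flag simply selects between the strict
# (spec-compliant) and loose (interoperable) code paths.
sub _handle_xyzzy {
    my ($self, $input) = @_;
    return $self->{xyzzy_compliance}
        ? $self->_handle_xyzzy_strict($input)
        : $self->_handle_xyzzy_loose($input);
}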

So if you’ve had an issue (especially a feature request) rejected as “not an issue”, be prepared to change your request. Don’t get too wedded to your original patch. Can the code be changed to make your feature request into an optional feature? Maybe, in the case of our Foo::Bar example, the implementation could actually detect at run time whether the “compliant” or “loose” xyzzy implementation was more appropriate and intelligently take the correct path.

The Exceptions

There are some kinds of trivial issues for which all of the above is overkill. Here are some examples:

  • Typographical errors in documentation.
  • Typographical errors in warnings and other messages emitted by the software.
  • Factual errors in documentation.
  • Dependencies not listed in META.yml.

While you should still be reporting these on the correct issue tracker, and checking that the latest version is affected, a test case would almost certainly be overkill.

This article was originally published on my blog at http://tobyinkster.co.uk/blog/2012/07/12/bug-reporting/.