The common programming challenge, FizzBuzz, is usually phrased something like this:

Write a program which does the following.

For each of the integers from 1 to 100 inclusive:

  • If the integer is divisible by 3, print "Fizz".
  • If the integer is divisible by 5, print "Buzz".
  • If nothing else has been printed, print the integer.
  • Print a line break, "\n".

If you are just starting to learn a programming language, or just starting to learn to program in general, FizzBuzz tests a few extremely basic concepts: variables, conditionals, loops, output and escaping. I'm teaching Perl again, so I've assigned FizzBuzz as a nominal piece of homework.
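For reference, here is one straightforward Perl solution — one of many possible shapes — which follows the four bullet points of the specification literally:

```perl
use strict;
use warnings;

# One line of FizzBuzz output for a given integer, per the spec above.
sub fizzbuzz_line {
    my ($n) = @_;
    my $out = '';
    $out .= "Fizz" if $n % 3 == 0;    # divisible by 3
    $out .= "Buzz" if $n % 5 == 0;    # divisible by 5
    $out = $n if $out eq '';          # nothing else has been printed
    return $out;
}

print fizzbuzz_line($_), "\n" for 1 .. 100;
```

Note that handling both bullets in order gives "FizzBuzz" for multiples of 15 without any special case.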

Unfortunately, manually checking every FizzBuzz implementation to see that it does the correct thing is a little tiresome. And simply eyeballing the output is prone to human error. Wouldn't it be better if I had another Perl script which could mark FizzBuzz submissions for me?

Introducing MetaFizzBuzz

MetaFizzBuzz is a Perl script which accepts the name of another Perl script as input, executes it, and tells you whether it implements FizzBuzz correctly or not.

MetaFizzBuzz returns a score out of 100, based on how many lines of output are correct. For example, printing "Fizz" instead of "FizzBuzz" when N is 15, 30, 45, 60, 75 or 90 loses a point each time, so a script with that bug would score 94/100.

There are also some bonuses and penalties:

  • 1 point lost for each incorrect result printed, as mentioned.
  • 1 point lost if the script exits with a non-zero return code.
  • 1 point lost for using the wrong separator (e.g. spaces).
  • 1 point lost for stopping before 100 or continuing past 100 (e.g. fencepost errors).
  • 1 point lost if the script doesn't begin with use strict; use warnings;.
  • 1 bonus point for a script which fits in a tweet (140 characters or fewer).
  • 1 bonus point for a script which doesn't use conditionals (if, unless or the conditional operator, ?:).

(The bonus points are there for people who already know programming pretty well and want a challenge. The maximum possible score is 102/100 and yes, it is completely possible.)
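The core of the per-line marking can be sketched as follows. This is a simplification, not MetaFizzBuzz's actual code: it assumes the candidate's output has already been captured into a list of chomped lines, and it folds the per-line check and the fencepost penalty together while ignoring the other bonuses and penalties:

```perl
use strict;
use warnings;

# Sketch of the marking loop: compare captured output lines against the
# expected FizzBuzz lines. @got is assumed to already hold the candidate
# script's chomped output.
sub score_lines {
    my @got = @_;
    my @expected;
    for my $n (1 .. 100) {
        my $line = '';
        $line .= 'Fizz' if $n % 3 == 0;
        $line .= 'Buzz' if $n % 5 == 0;
        $line = $n if $line eq '';
        push @expected, $line;
    }
    my $score = 100;
    for my $i (0 .. 99) {
        $score-- if !defined $got[$i] or $got[$i] ne $expected[$i];
    }
    $score-- if @got > 100;    # continued past 100: fencepost penalty
    return $score;
}
```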

Future work

MetaFizzBuzz has some open issues and there is limitless room for greater intelligence in evaluating results. At the time of writing:

  • MetaFizzBuzz performs very basic pattern matching to detect the use of conditionals in the input implementation. This could easily yield false positives. It would be preferable to actually parse the input Perl code and build an abstract syntax tree. Unfortunately, Perl cannot be parsed.
  • MetaFizzBuzz searches for use strict; use warnings; at the very beginning of the program, and doesn't take into account the possibility of leading comments, such as a hashbang line. And, again, only simple pattern matching is used here. It's entirely possible to engineer a false positive.
  • Strictly speaking, byte sequences (files) longer than 140 bytes may still "fit in a tweet". Naively using length to determine this is likely to give false negatives in some edge cases.
  • MetaFizzBuzz is currently helpless in the face of non-halting input programs. Although, notice how the specification gives no time frame for the output, and does not even specify that the input program must halt!
  • MetaFizzBuzz can get quite confused when individual terms ("7", "Fizz") in the expected output and the actual output stop lining up (e.g. if N = 15 is rendered as "Fizz Buzz" rather than, as the specification requires, "FizzBuzz"). This single error has a knock-on effect on all later lines of output.
  • There should be test cases for each of MetaFizzBuzz's bonus and penalty conditions.
  • MetaFizzBuzz executes arbitrary code from untrusted sources -- namely, Perl students. Erm, SECURITY??!
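For illustration, naive source checks along these lines might look like the following. These are sketches, not MetaFizzBuzz's actual regexes, and -- as noted above -- pure pattern matching like this is easy to fool (the word "if" inside a string literal still counts as a conditional here). The pragma check shown does at least skip leading comment lines such as a hashbang:

```perl
use strict;
use warnings;

# Does the source appear to use conditionals (if, unless, or ?:)?
# Purely textual; false positives are easy to engineer.
sub uses_conditionals {
    my ($source) = @_;
    return $source =~ /\b(?:if|unless)\b|\?.*:/s ? 1 : 0;
}

# Does the source begin with "use strict; use warnings;"?
# Leading comment lines (including a hashbang) are skipped first.
sub starts_with_strict {
    my ($source) = @_;
    $source =~ s/\A(?:\s*#[^\n]*\n)*\s*//;
    return $source =~ /\Ause strict;\s*use warnings;/ ? 1 : 0;
}
```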

And, of course, this whole thing is Perl-only at the moment. It's not possible to use it to mark a Java implementation, for example.

The real reason

The real reason for me creating MetaFizzBuzz is that MetaFizzBuzz itself is an interesting programming challenge, which I intend to give to my Perl students as a future piece of homework. Implementing MetaFizzBuzz tests new areas such as executing external programs, capturing output from the same, opening and reading files, and pattern matching.
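For instance, the "execute and capture" step which students would need might be sketched like this. The sub name is illustrative, not part of any specification; $^X is the path of the currently running perl interpreter:

```perl
use strict;
use warnings;

# Run a candidate Perl script, capturing its standard output and exit code.
# A non-zero exit code costs a point under the rules above.
sub capture_output {
    my ($script) = @_;
    open my $fh, '-|', $^X, $script
        or die "Couldn't run $script: $!";
    my @lines = <$fh>;
    close $fh;
    my $exit_code = $? >> 8;
    return (\@lines, $exit_code);
}
```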

Of course, now I run into a different problem. It seems that MetaFizzBuzz's own specification ("it should tell me whether another FizzBuzz implementation is good or not") is far too fuzzy. I'll have to lock that down before continuing.

And then maybe I'll need a program that can mark MetaFizzBuzz implementations too. Gee, I wonder what that could be called.


Discussion (8)

2013-09-14 19:04:23 by rgoro:

I claim an extra point for correct indentation instead of spaghetti code.

use strict;
use warnings;
    print "$_\n"

2013-09-14 19:05:28 by rgoro:

Oops. Forgot to change the parser, sorry.

2013-09-16 12:32:22 by DSimon:


Take as first input a non-negative integer n.
If n is 0, implements FizzBuzz for the first 100 natural numbers.
Otherwise, print "OK" only if second input is a script that behaves the same as FizzBuzz^(n-1).

2013-09-16 16:07:21 by Dan:

"And then maybe I'll need a program that can mark MetaFizzBuzz implementations too. Gee, I wonder what that could be called."

The AutoGrader.
During my years at university, I was a tutor for the first (or second, depending on student experience) programming class that would be taken by incoming computer science majors. It was a ten week course, during which students had to write their own implementations of seven different data structures in three different languages (two projects in C, two in C++, three in Java). There were two sections of 30-something students each, and we absolutely did not feel like testing all that code by hand. Being a bunch of programmers, we wrote a unit test harness.

AutoGrader, as a fairly standard unit tester, would:
--Run everyone's code at once or re-execute a single student as required (all student code was organized into folders like /course_number/student_id/assignment_number/)
--Check for unauthorized I/O calls and fail if it noticed something it didn't like.
--Check for shell calls and fail if it was unhappy.
--Run the student's code against an arbitrary set of test cases and check the output against the correct answers (both input and correct output for test cases were specified in a flatfile we would provide). We were nice and compacted spaces/newlines in student output to allow for a little variation, but the formatting requirements were pretty strict.
--Tally up test case successes into a couple of CSV files (one with grades for each student and one with test-case-specific stats for all students so we could see if some specific test case was failing commonly and focus on teaching that bug).

Unfortunately, I don't still have the code for this lovely piece of machinery, but it was ridiculously useful.

2013-10-06 19:50:54 by Elaborate:

Here's my try:

use strict;
use warnings;
my @a="Fizz";
my @b="Buzz";
my @c="FizzBuzz";
for (1..100) {
  print $c[$_%15]//$a[$_%3]//$b[$_%5]//$_,"\n";
}

I always liked Perl's "//" operator...

2013-10-12 23:27:27 by AF:

use strict;
use warnings;

for (1..100) {
    print ( ($_,"Fizz","Buzz","FizzBuzz")[!($_ % 3) + 2 * !($_ % 5)], "\n" );
}

2014-08-05 21:49:35 by mutecebu:

Ibra oniki sam. FIZZBUZZ FIZZBUZZ alef a sam.

2016-11-19 03:24:50 by xyz:

i would have just implemented fizzbuzz myself once (or used a known good implementation) to generate a file containing the correct output and `diff -q` it with the output of the students' versions. should return a list of all the 'broken' ones on stdout.
one could check for the 'use strict;use warnings;' with `grep` i suppose.