Marking Entries

1. Preparation for Marking

Marking the multitude of detailed submissions from teams is an intensive manual undertaking, and the setter should plan for this activity following the entry closing date in mid-to-late January.

As soon as teams have submitted their entries, they will be waiting impatiently for both the answers and the results. Whilst most will sympathise with the amount of time this activity takes, the setter should obviously try to publish both their Solution and the results as soon as possible, certainly within one month of the closing date unless there are mitigating circumstances. The post-Hunt Prize Giving evening should also be organised so that it quickly follows the publication of results. Much longer than a month, and the momentum and interest are in danger of being lost.

Certain tasks can be performed in advance to help speed along the marking process:

  1. Produce the complete Solution document before you start marking, perhaps even before the Hunt is published. Apart from obviously enabling you to publish the Solution as soon as you wish, this will help you clarify your thoughts regarding exactly where teams can earn marks, and what answers you will accept. You may need to modify the Solution slightly in light of team submissions, but hopefully these will be only minor tweaks.
  2. Set up the Marking Scoresheet, most importantly listing all the questions, observations, puzzles and other bits and pieces for which teams can earn marks. It is important that you define this in advance so that all teams are marked against a constant framework. It is also good practice to include reference IDs in the Solution document against which the Marking Scoresheet can cross-reference, e.g. [REF1].

As ever, Pablo provides us with a fine example of how to produce both a Solution document and a Marking Scoresheet that includes these easy-to-follow cross-references.

2. Publishing the Solution

It is up to the Setter whether they issue the Solution after submissions are received, or wait to publish it along with the results. The latter approach reduces the risk of any gaffes being left in your Solution that might be brought to light by marking the entries.

Alternatively, if you envisage that the marking may not be completed for some time, you might choose to publish the Solution at some point during your marking, once confident that it is gaffe-free, in order to keep people's interest up in the interim. On the other hand, you may take the view that publishing the results is a bit of an anti-climax if the Solution has been devoured a couple of weeks before.

3. Team Submissions

You will receive submissions in all formats, and certainly most will not match the neat structure that would optimise your marking process. Typically submissions come in the form of Microsoft Word documents or Excel spreadsheets. You will have your own ideas on the most efficient way to mark papers. One method is to print out each submission (double-sided and two sheets to a page so as not to waste too much paper!) and mark whilst reading it. You can also quickly record marks in pen on this printed sheet in case of any subsequent queries or discrepancies raised by teams.

The advances in team collaboration facilities over the internet have certainly enhanced the options for how teams can communicate during a Hunt. Online message boards, Wiki pages, and other mechanisms have all been used. As the setter, you may be asked by a team whether they can simply provide you with access to these in order to mark their submission, primarily to save them having to 'write up' their entry. Such requests have been refused in the past, partly because the setter wanted a version of the team's solution submitted by the closing date that could not subsequently be updated, and partly because of the burden of wading through a team's 'workings' as their online communications developed. However, if satisfactory means are put in place to ensure that no changes are made after the closing date, and the submitted answers are clear and easy to find, then you may opt to be more flexible.

4. Marking Scheme

The traditional marking scheme is a relatively simple approach that has stood the test of time: it scales marks according to the actual difficulty of a question, puzzle, or other markable Hunt element, as measured by how many teams answered it correctly. It is strongly recommended that this approach be maintained.

In algebraic terms, the number of points [P] awarded to each of the [N] teams who correctly answer a question is calculated as P = T - N, where T is the total number of teams entering the Hunt. Thus if all teams correctly answer a question, zero points are awarded to all which reflects the relative ease of that question. At the other end of the scale, if only one team correctly solves a particular puzzle, they receive the maximum number of marks (T-1) in recognition of that item's perceived difficulty.
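The scoring rule above can be sketched in a few lines of code (the function name and example team counts here are illustrative, not part of any official scoresheet):

```python
def question_points(total_teams, correct_teams):
    """Points awarded to each team that answered this question correctly:
    P = T - N, where T is the total number of entering teams and N is the
    number of teams that answered correctly."""
    return total_teams - correct_teams

# Example: suppose 10 teams entered the Hunt.
T = 10
print(question_points(T, 10))  # every team got it right -> 0 points each
print(question_points(T, 1))   # only one team solved it -> 9 points (T - 1)
```

Note that a harder question automatically becomes worth more, with no need for the setter to weight questions in advance.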

The only downside of this scheme is that a highly competent team might correctly answer the bulk of the Hunt but, because their rivals have too, end up with a total score that looks deceptively low. It is worth explaining this side-effect when publishing the results, especially for the benefit of new teams.

5. Marking Scoresheets

Once more Pablo has ridden to the Setter's rescue by producing an Excel marking spreadsheet that can be used to calculate team scores simply and accurately when marking. Having prepared the Scoresheet by identifying all elements for which teams can earn marks, you can then mark each team's submission using the Scoresheet by recording a '1' where a question, puzzle, observation etc. has been correctly answered.

This scoresheet was extended when marking the 2007 Hunt, adding a second sheet that produces a sorted results table that can be used for publishing the results on this website. It also has an extra row to allow for recording any discretionary bonus points.
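To illustrate the calculation the Scoresheet performs, here is a small sketch in code. This is not Pablo's actual spreadsheet logic, just the same idea expressed in Python: a grid of 1s per markable element, per-element points of T − N, an optional discretionary bonus per team, and a results table sorted by score. All team names, element names, and marks are hypothetical.

```python
# marks[team][element] == 1 where that element was answered correctly.
marks = {
    "Team A": {"Q1": 1, "Q2": 1, "Q3": 1},
    "Team B": {"Q1": 1, "Q2": 1, "Q3": 0},
    "Team C": {"Q1": 1, "Q2": 0, "Q3": 0},
}
bonus = {"Team B": 2}  # discretionary bonus points (the 'extra row')

T = len(marks)                      # total number of entering teams
elements = ["Q1", "Q2", "Q3"]       # all markable elements, fixed in advance

# N per element: how many teams answered it correctly.
n_correct = {e: sum(m[e] for m in marks.values()) for e in elements}
# Each element is worth P = T - N points.
points = {e: T - n_correct[e] for e in elements}

scores = {
    team: sum(points[e] * m[e] for e in elements) + bonus.get(team, 0)
    for team, m in marks.items()
}

# Sorted results table, highest score first.
for team, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(team, score)
```

In this toy example Q1 is answered by everyone and so is worth nothing, while Q3, solved by a single team, is worth the maximum T − 1 points.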

You can download both versions using the following links:

Hopefully, using the Marking Scoresheet is relatively intuitive. Perhaps the quickest way to see how it should be used is to look at an example: the completed 2005 (Standard) or 2007 (Extended) Marking Scoresheets. A few pointers are also provided below:

You may well spot further improvements to one or both Marking Scoresheets. Feel free to apply these, but it is recommended that you create a new version in doing so, and update the download links above accordingly.