Tuesday, March 3, 2015

Why Bother With the BQC?

By Abby Whiteley
Introduction (feat. Batman).
Why bother with the British Quidditch Cup (BQC), anyway? This probably sounds self-evident to the point of ridiculousness, but I mean this in an almost deep, spiritual way. Because of the BQC's timing and the restrictions placed by the European Quidditch Cup (EQC) committee, the BQC will not fully be a qualifying tournament for the EQC. (Only the two highest-placing teams who haven't already qualified will be given a spot at the EQC based on the results of the BQC.) The BQC will be similar to the way La Coupe de France was, or the way USQ regional championships act as selection criteria for the World Cup. So what's the point?

Part I: The deep and inimitable satisfaction of beating your friends.
As to what people have to gain on the quidditch front, the answer is clearly bragging rights: eternal glory. Oh, sure, if you win the EQC then you're technically champions of an entire continent, and you get to defeat all those pesky mainlanders. But there is no victory sweeter than winning over your mates. Beating people you've been on the same mercenary team with, people you got drunk with and, in quite a few cases, shared a bed with, is what makes the victory so satisfying. It's like beating your brother at Monopoly. It just means so much more.

That said, it's not all about the winning. Winning games is pretty great, but it's not the only important thing. The BQC also acts as a barometer by which we can judge each other, mutual judgement being the glue which forms the bonds of siblinghood we share. Everyone knows that it doesn't matter if you lose to someone at Southern Cup or Highlander Cup or whatever, as long as you beat them at the BQC. Especially for teams who aren't going to the EQC, the BQC is the one chance they get to secure the right to gloat over opponents for a whole year.

Part II: Why it matters.
Furthermore, for the majority of the next year (certainly for the remainder of the season) people will be referring to your placement at the BQC to make snap assessments of your team, and it is generally assumed that these placements are near-absolute. Even if you win every tournament you go to for a year, if you placed eighth at the BQC, you will be tied to that eighth position in the QUK hive-mind until it is readdressed at the next BQC. For teams' reputations and perceptions of themselves, it's actually incredibly important. Even the Challenge Shield trophy would, for some, be cold comfort for a team that underperformed at the BQC, so strong is the hold it has over our collective consciousness.
It's obvious why this is: it's satisfying and convenient to see the skill of teams quantified in the way a comprehensive tournament like the BQC allows. It's a contrived simplicity, of course, but one which is intuitively appealing. Many communities recognise the need for ranking systems which are at least grounded in a reasonably objective and fair means of assessment. You see this in university rankings, in awards systems in the arts, and in sports of all kinds, from rugby to chess to electronic sports. People look to championships as the true means of understanding the relative skill of competitors, and invest enormous amounts of time in preparing for them. This is not an ideal system. Many sports use computerised ranking systems to assess the full picture rather than the results of a single event. (NCAA football is a particularly good example, with one of the most extensive rating systems in the world.) This produces a far more comprehensive view than we currently have, which is rather linear and has severe weaknesses.

Part III: The Flaws in the BQC system.
One of the most obvious flaws of using the BQC as an overall assessment of teams' abilities is its singularity. A single win or loss can determine your placement, which will then be assumed to be your default position with little or no regard for other performances in the season. All performances above that are an 'overachievement,' and all performances falling short are a 'disappointment,' even if the original placement was a fluke or achieved under extenuating circumstances. This is with the possible exception of the month or so preceding the following BQC, at which point its predecessor is considered obsolete even though it still has a huge effect on teams' reputations. This singularity is addressed in the Challenge Shield ranking system, but we'll talk more about that in a minute.

The desire for a ranking system is understandable, and totally consistent with our expectations of sporting achievement. That said, it is clear that the BQC leaves much to be desired in terms of formal rankings. It is true that, for the first time, the upper tier of BQC rankings will be supplemented by a second major club tournament in very quick succession to it (the EQC), which will allow for much-needed rematches and a clearer view of how the top British teams stack up against one another. The waters will be muddied slightly by the inclusion of mainland European teams, but we will still be able to glean useful data. That said, it remains an imperfect solution to the BQC problem, and will make no difference whatsoever to mid- and low-ranked teams, who do not get a second chance to consolidate their position. Even if people take non-BQC events into account when considering a certain team's profile, which could provide a more rounded perspective, these would not provide a trustworthy assessment of the team, as the results would not be transferable.

There is a gap in the market for a structured and comprehensive ranking system that does not give disproportionate emphasis to single tournaments, or to single matches. A system like this would allow us to take into account not only final positions in a narrow set of tournaments, but also QPD, snitch catch percentages, strength of schedule, and so on. All of this information, collated and ranked, would create a far more reliable and continuous assessment of the teams in the UK than vague assumptions based on single tournaments. The best extant example is the USQ algorithm, the most comprehensive ranking system applied to quidditch on a large scale; the Canadian one is less complex, but still takes into account a great deal of detail which QUK has thus far overlooked. Something like this, endorsed by QUK, would be an excellent addition to reporting on teams, and would provide us with a system far superior to the one that exists.
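To make this concrete, here is a minimal sketch of what such a composite ranking could look like. The statistics named (win percentage, QPD, snitch catch percentage, strength of schedule) come from the paragraph above, but the weights, the QPD normalisation range, and the function names are all invented for illustration; a real QUK system would need to tune these against actual match data.

```python
# Hypothetical composite ranking sketch. The stat categories are from the
# article; the weights and scaling choices below are illustrative assumptions.

def composite_score(win_pct, qpd_per_game, catch_pct, sos,
                    w_win=0.4, w_qpd=0.3, w_catch=0.1, w_sos=0.2):
    """Combine season-long stats into a single 0-1 score.

    win_pct      -- fraction of games won (0-1)
    qpd_per_game -- average quaffle point differential per game,
                    clamped here to [-150, 150] and rescaled to 0-1
    catch_pct    -- fraction of snitch catches secured (0-1)
    sos          -- strength of schedule, already scaled 0-1
    """
    qpd_norm = (max(-150, min(150, qpd_per_game)) + 150) / 300
    return (w_win * win_pct + w_qpd * qpd_norm
            + w_catch * catch_pct + w_sos * sos)

def rank_teams(stats):
    """Rank a dict of {team: (win_pct, qpd, catch_pct, sos)} by score, best first."""
    scored = {team: composite_score(*s) for team, s in stats.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

The appeal of a weighted sum like this is that a single bad tournament moves the inputs only slightly, so no one event dominates a team's standing the way a BQC placement currently does.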

Part IV: Thoughts on the Challenge Shield as a Ranking System.
While the Challenge Shield (CS) offers a different angle on the capacities of UK teams, it is not a substitute for the kind of ranking I envision. While it is a better representation of individual teams' abilities in relation to one another, thanks to its multiple-match format, it does not take into account any of the aforementioned data sets, and therefore shares a weakness with the BQC. Similarly, its participation is not comprehensive enough. A team that placed second at the BQC and has demonstrated its strength in other tournaments, but does not take part in the Challenge Shield, would have no representation in the Challenge Shield rankings. And while this would clearly be the team's fault, and its absence a sort of punitive measure for not participating, it would still mean that the Challenge Shield picture was incomplete. The most considerable failing of the Challenge Shield as a rating system, however, is that it follows the calendar year rather than the regular season. This means that its data is not wholly transferable, and it gives disproportionate emphasis to matches played after major roster changes. For example:

May: Team A (consistently ranked No. 1 throughout this CS season) narrowly beats Team B, leaving Team A at No. 1 and Team B at No. 2.
June: Team C plays Team B and loses, placing Team C at No. 3 and maintaining Team B’s slot at No. 2.
July: Team A loses several major players due to graduation.
November: Team C plays Team A and wins, taking its place at No. 1 and winning the CS.

The above situation would result in a ranking that presumes Team C is capable of beating Team B, even if this were not the case, because Team C capitalised on weaknesses Team A experienced directly as a result of the timeframe in which the Challenge Shield occurs. This result is incompatible with what we would expect to see at the following BQC, and therefore non-transferable. We would depend on Team B playing Team C before the next BQC, in a rather tight timeframe, for the Challenge Shield to offer a ranking congruous with what the normal season offers, and this is not ideal. No continuous assessment system should bridge such enormous shifts in the units it is assessing. For that reason I do not see the Challenge Shield as equivalent to a full ranking system, even though it provides us with useful and interesting data, and remains an excellent, innovative and necessary way of formalising extra-tournament play. The Challenge Shield is a good means of continuous assessment within its own context, but does not wholly perform the role of the complete ranking I outlined above.

Part V: Coaches Polls as an Alternative.
Another method of ranking teams is one that has not gained much ground in the UK, but which is fairly prevalent in the US: media and coaches polls. The Eighth Man and the Quidditch Post regularly carry out polls of experts and coaches, respectively, to assess people's opinions of teams. These can be extremely useful as a qualitative means of ranking teams because they demonstrate wider feelings about teams' reputations beyond isolated conversations. The Quidditch Post carried out a small poll of the top 10 teams in the UK (editor's note: check back later this week for an updated poll), but polls like this are rare in European quidditch. Opinion polls provide benefits which algorithm-based systems do not because, whilst it is important to have access to raw numbers, communities respond best to qualitative assessments, as they provide opportunities for discussion and more comprehensive analysis. Statistics, whilst useful, offer little in the way of meaningful analysis, nor can they provide hypotheses about matchups we have not yet witnessed. The UK community would benefit greatly from some kind of rankings committee, or at least some bigger coaches polls, to supplement our understanding of individual teams and of the greater national picture. We will have to wait, however, until we have larger tournaments and more people have seen more teams, so that there is enough knowledge to draw upon when making assessments.

I believe that both the BQC (with the EQC alongside it for further elaboration on top-tier teams) and the Challenge Shield provide us with invaluable information with regard to team rankings. I think it's wonderful that we have these structures to turn to, when last year only two UK teams made it to the EQC, there was no such thing as the Challenge Shield, and the BQC was a much smaller affair. That said, I maintain that there is space for a system of team assessment which takes into account more raw numbers and contextual data, ideally supplemented with qualitative assessments from a committee. Unless QUK is willing to take on some of the excellent statisticians in our community who are willing to take up this project, I do not think we will see this for some time yet, but I live in hope.

1 comment:

  1. Intriguingly, the Pointstreak system isn't the Canadian ranking system. It's standings that are solely tied to wins and losses. For Canadian rankings, there's more math (although still less than USQ's).

    The Quidditch Canada ranking formula:

    (Success factor) x (Games played factor) x (Strength factor)

    More specifically…
    (Win percentage) * ((2n - 1) / 2n) * ((SOV + SOV + SOS) / 3)

    Where win percentage = the percentage of games played that were won
    n = number of games played
    SOV = average win percentage for teams your team defeated
    SOS = average win percentage for teams your team played
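    The commenter's formula can be sketched in a few lines of code. This is a direct transcription of the three factors as written above, including the double-counting of SOV in the strength factor and the (2n - 1)/2n reading of the games-played factor; the function name is my own, and the example inputs are invented.

    ```python
    # Sketch of the Quidditch Canada rating formula as described in the
    # comment above. SOV appears twice in the strength factor, as written.

    def qc_rating(win_pct, n, sov, sos):
        """Rating = success factor x games played factor x strength factor.

        win_pct -- fraction of games played that were won (0-1)
        n       -- number of games played (must be > 0)
        sov     -- average win percentage of teams your team defeated
        sos     -- average win percentage of teams your team played
        """
        success = win_pct
        games_played = (2 * n - 1) / (2 * n)  # approaches 1 as n grows
        strength = (sov + sov + sos) / 3      # SOV double-weighted, as written
        return success * games_played * strength
    ```

    Note how the games-played factor discounts teams with few results: a team that wins its only game is rated well below a team with the same win percentage over twenty games, which is exactly the kind of sample-size awareness the article argues a QUK system should have.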