
That Was Good? The Execution of All Things


Everyone’s favorite avuncular analyst and UTRS expert (so, gymnecologist?) recently recalled a post I wrote in June comparing execution scores between US and international judges, which I then promptly forgot about. What, am I supposed to remember everything I say?

Basically, it amounts to the idea that we often assume international judging is some paragon of strictness that would never be as lax and charitable as the US judges, when in fact, over the last few years, the international judges have stayed within a believable range of the national judges on execution scores. So, I began to wonder whether that will continue this year and whether the World judges will mimic what we have seen so far in 2013, which brings us to an analysis of execution scores at this year’s Nationals.

You probably had a lot of thoughts during last weekend’s P&G Championships, ranging from “Hey, fewer of these hairstyles look like shanty towns” to “Hey, that’s not a switch 1/2” to “Hey, so did Nastia kill Elfi?” and all of them are completely understandable. I bet you weren’t thinking, “Hey, this is some historically excellent execution.” But you know who was thinking that? The judges. Yeah. Deal with it.

The average execution score across the whole senior competition was an 8.515 this year. Guess what that’s higher than? 2012 Nationals. And 2011 Nationals. And 2010 Nationals. And 2009 Nationals. In fact, the only recent competition that beats that number is 2012 Olympic Trials, which is to be expected. Trials should contain only the very best athletes at the peak of their Olympic preparation, not these barely-qualified, happy-to-be-there types who are getting 8.1s for hit routines.
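
For the fellow spreadsheet dorks, the tallying here is nothing fancier than a straight average over every routine. Here’s a minimal sketch of that calculation in Python, with made-up placeholder scores standing in for the actual results (the real figure covers every senior routine, falls included):

```python
# Minimal sketch of the averaging above. These scores are invented
# placeholders; the real tally uses every senior routine from the
# competition, calamities included.
e_scores_2013 = [8.9, 8.1, 9.05, 7.3, 8.75, 8.6]  # hypothetical sample

def average_e_score(scores):
    """Average execution score across all routines."""
    return sum(scores) / len(scores)

print(f"2013 average E-score: {average_e_score(e_scores_2013):.3f}")
```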

Let’s also take a deeper look by event. (Numbers in parentheses indicate rank.)

Aside from sucking the light fantastic on bars (and being the worst bars year in the United States is quite an accomplishment), 2013 Nationals saw remarkably high execution scores compared to recent competitions. The vault scores in particular are interesting. The vaults this year were okay, but two to three tenths better than in recent years? Really? Would the new code alone justify such a bump?

It should be noted that I included all routines in these averages, even calamities in the 6s, which occurred at least a few times in every competition. But lest you think the other years are being dragged down misleadingly by falls, the differences exist quite clearly at the top of the scoring range. Let’s take beam as an example. In 2013, 18% of beam routines received an execution score in the 9s. Compare that to 9.5% at 2012 Nationals, 10% in 2011, 5% in 2010, and 16% in 2009 (the only really comparable year). Is this truly the best beam group we’ve seen in the last five years? I don’t think so.
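
Those “in the 9s” percentages are just a threshold count, for the record. Another minimal sketch, again with hypothetical placeholder scores standing in for the real beam results:

```python
# Sketch of the "share of E-scores in the 9s" comparison. The score
# lists below are hypothetical stand-ins for the actual beam results.
beam_e_scores = {
    2013: [9.1, 8.4, 9.25, 8.8, 9.0],   # placeholder values
    2012: [8.6, 8.9, 9.05, 8.2, 8.5],   # placeholder values
}

def share_in_the_nines(scores):
    """Fraction of routines scoring 9.0 or higher on execution."""
    return sum(1 for s in scores if s >= 9.0) / len(scores)

for year, scores in sorted(beam_e_scores.items()):
    print(f"{year}: {share_in_the_nines(scores):.0%} of beam E-scores in the 9s")
```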

So, what’s up? Why are these execution scores significantly higher than in other years? I think we can all agree that the standard of performance was not appreciably higher than in, say, 2011, when the average execution score was over .250 lower. This is normally the part where I would provide conclusions, but I don’t have an answer to the question. I’m just compiling the data and inviting analysis. I’m all Tycho Brahe up in this piece. I’m legitimately curious as to why this is.

As mentioned, we have a new code, so we can certainly expect the evaluation of routines to change, as it always does. However, the major changes to the code came in the D-Score department. Were there enough significant changes to execution evaluation to account for these multiple-tenth increases in execution average? Have the judges been instructed to make a point of going softer this year across the events, or did it just happen?

And how will this increase affect the comparison between US and World evaluation, which has lately been rather consistent?

Clearly, I have a lot of questions. 
