There’s been a lot of discussion lately in circles I move in about measuring athletic performance — how to do it, why to do it, and what we do with the information. This article is the first of a 3-part series — the next will deal with benchmark tests for teams, and the last will address individual evaluations and goal-setting.
With combines for pro leagues coming up and players already starting to prep for club tryouts, making decisions about what kinds of testing should be done is a complicated matter. Hopefully I’ll be able to help you get a handle on the issue!
Why test?
Are you including athletic testing in your tryout because you think you have to? Lots of people with stopwatches and clipboards sure looks official, and it feels like it adds a sense of legitimacy – this isn’t a bad thing, but it shouldn’t be the only reason you do it. Do you intend to use the information gained in the testing in order to form the team? How much weight will the test results have? If you’re not sure how you’re going to interpret the data to give you a clearer picture of who should make the team, or to help guide a player’s training once they make the team, give it a bit more thought! It’ll make explaining the testing to the participants that much easier if you and the others running the tryout know the value of the data.
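If you do decide to weigh test results in roster decisions, one simple, defensible approach is to convert each athlete's raw scores to z-scores so different tests (sprint times, where lower is better; jump heights, where higher is better) sit on the same scale and can be combined. Here's a minimal sketch in Python — the names, numbers, and 50/50 weighting are all made up purely for illustration, and you'd swap in your own tests and priorities:

```python
from statistics import mean, stdev

# Hypothetical raw results: sprint time in seconds (lower is better),
# vertical jump in inches (higher is better).
results = {
    "Alex":  {"sprint": 4.9, "jump": 28.0},
    "Blake": {"sprint": 5.2, "jump": 33.0},
    "Casey": {"sprint": 4.7, "jump": 25.0},
}

def z_scores(results, test, lower_is_better=False):
    """Convert one test's raw scores to z-scores, with positive = better."""
    values = [r[test] for r in results.values()]
    mu, sigma = mean(values), stdev(values)
    sign = -1.0 if lower_is_better else 1.0
    return {name: sign * (r[test] - mu) / sigma for name, r in results.items()}

sprint_z = z_scores(results, "sprint", lower_is_better=True)
jump_z = z_scores(results, "jump")

# Combine with whatever weights match your team's priorities.
combined = {name: 0.5 * sprint_z[name] + 0.5 * jump_z[name] for name in results}
for name, score in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.2f}")
```

The point isn't the math — it's that deciding *in advance* how scores will be compared and weighted forces you to answer the "why test?" question before anyone steps on the line.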
Which tests should be used?
There has been a rise in athletic testing at ultimate combines/tryouts in the last few years — the need for some standardization across teams in the pro leagues has been part of that, I think, and there's also more interest in and understanding of the role of athleticism in ultimate. But are we using the right tests in the right way? There are a lot of factors that go into a successful athletic testing structure, from personnel to equipment to setting up rotating stations of athletes. What are we actually measuring during a test, and does it sufficiently predict/mimic on-field ultimate performance? If you don't know what you're measuring, or if there isn't enough sport applicability in the test, or if the test is too complicated for the athletes to grasp, then you should use a different test!
Let’s break it down!
I’m using a rating structure to evaluate some well-known tests along a couple different indices:
Equipment: Almost all tests require cones, but make sure you know how many clipboards/tryout lists/stopwatches you need. Also, if you’re doing a vertical jump or a yo-yo test, you’ll need special equipment for that.
Number of Testers: How many people do you have helping? This, more than anything else, will make or break your testing structure. Too few means that things run slower, mistakes get made, and overall integrity of the results goes down. You may need to limit the amount of testing you do in order to make sure you’re getting accurate results. Otherwise, what’s the point?
Number of Testees: Most of the testing is individual, which means only one person can go at a time. Consider the amount of time athletes spend waiting around to test and figure out how to best structure the testing.
Human Error Factor: How much of what you’re measuring depends upon variables that you’re able to control? If cones need to be moved because the ground is getting muddy, or if the same person is blowing a whistle, timing, and recording scores, chances are that mistakes will be made that will dilute your data.
Ease of Execution: How difficult is the test to perform? Are there lots of directions athletes have to remember? There’s also the matter of test familiarity – if an athlete has done this test before/practiced those movements often, do they have an advantage?
Indicator of Ultimate Performance: Does this test measure skills/athleticism/energy systems that are used in ultimate at the level it will be played on this team? Some tests are great for straight-line speed but tell you nothing about deceleration or change of direction, and others are swell tests if used at the right point in the year (but not so much at others).
Here are my favorite tests for tryouts:
Cut Test (MLU) – The athlete sprints 5 yards in one direction, then turns and sprints 30 yards in the opposite direction.
Equipment: cones, stopwatch
Number of Testers: 2-4 (one at the first cone to ensure they cross the line, one timer at the end). It's best if there's also someone at the start to blow a whistle or signal the timer to start. Having a recorder the timer can read scores off to is also helpful.
Number of Testees: 1 at a time
Human Error Factor: Low (as long as everyone's paying attention and the stopwatch works)
Ease of Execution: Medium. This won’t accurately measure the top speed of athletes from track and field, or others who aren’t used to change of direction. But…
Indicator of Ultimate Performance: High. COD and speed are measured, both of which are crucial to on-field performance.
Note: I’d increase the sprint distance from 30 yards to 40 yards so athletes have a chance to really reach their flying speed.
Vertical Jump (standing and 3-step approach)
Equipment: Vertec-style jump tester (Tandem or equivalent)
Number of Testers: 2 (one to stand on a chair and reset the sticks, one to watch takeoff and record the scores)
Number of Testees: 1 at a time
Human Error Factor: Low. Make sure your testers understand how to set up/reset the equipment.
Ease of Execution: Medium. The 3-step approach is more game-like, so there’s not a clear testing advantage, but there’s technique to the two-footed jump that can be practiced.
Indicator of Ultimate Performance: High. How high you can jump is important in the game, but somewhat dependent on position. This will probably be less of a decision-making factor for handlers than cutters.
Yo-Yo Endurance Test (variation of the Beep Test) – used by US women's soccer
Potentially more ultimate-friendly version of the beep test, with a quicker starting interval and 10-second rest every 40 meters (back and forth). The only max work capacity test I’m including on this list.
Equipment: cones, beep test app, portable speaker
Number of Testers: 2-3 (one to judge when someone is out, 1-2 to record scores, one at each end if a large group is going)
Number of Testees: Group
Human Error Factor: Low (as long as testers are paying attention)
Ease of Execution: Medium (test familiarity and ability to change directions quickly help)
Indicator of Ultimate Performance: High, but not very indicative if done too early in the season (not the best test for pro league combines, as many players have been focusing on strength/power and haven't yet done as much aerobic conditioning)
These next tests are runners-up because they either have a higher human error factor or less applicability to ultimate, or both. They're still really great tests, however, and there are definitely a lot of teams that would benefit from running them!
Serpentine – Tim Morrill
I LOVE this cone pattern – I run variations of this drill with athletes all the time! This is a good test to run instead of a 40-yard dash to measure speed and some change of direction.
Equipment: cones, stopwatch
Number of Testers: 2-3, one watching cones, one timer, one recorder
Number of Testees: 1 at a time
Human Error Factor: Low. Best done on turf to avoid having to move cones and disrupting test integrity.
Ease of Execution: Medium. Practice with the Serpentine pattern will improve testing.
Indicator of Ultimate Performance: Medium. While good for testing speed and running mechanics, the athlete only makes 90-degree turns; on-field turns are often sharper.
40-yard Dash
This is an often-used test at ultimate tryouts – it’s easy to set up and you can run people through it pretty quickly. It’s not my first choice for our sport though.
Equipment: cones, stopwatch
Number of Testers: 2-4 (one at the first cone to ensure a good start, one timer at the end). It's best if there's also someone at the start to blow a whistle or signal the timer to start. Having the timer read scores off to a recorder is also helpful.
Number of Testees: 1 at a time
Human Error Factor – Medium. Make sure it’s actually 40 yards. Muddy combines where you have to move the cones several times can hugely skew the results, and because of the distance from start line to finish line, the timing will likely be slightly off/inconsistent. In my opinion, the best place to run this test is on lined turf. Also, you need to standardize how you trigger the start so the timing is accurate.
Ease of Execution: Medium. Those used to running a 40 will do better; many ultimate athletes struggle with finding a good start position, as it's not natural to the sport.
Indicator of Ultimate Performance: Medium. Speed is measured, but COD is not. This is a good test for catching diamonds in the rough: great athletes who may have track experience but not much ultimate experience, and who can develop into great players!
T-Test and Pro-Agility
First of all, the standard version of the T-test has the subject backpedaling to the final cone, which isn't a much-used ultimate movement. Secondly, in both tests, touching the base of the cone/the line before you change directions wastes time and requires weird postural stuff that cuts down efficiency. We're not football players; we don't need to touch the ground.
Equipment: cones, stopwatch
Number of Testers: 2-3 (one watching cones, one timing, one recorder)
Human Error Factor: Medium (tough to make sure the testee crosses all the cones; it's hard not to cut corners)
Ease of Execution: Medium (practice with the pattern will improve testing)
Indicator of Ultimate Performance: Medium. With some revisions, like crossing lines/cones instead of touching them, these tests measure COD pretty well, but familiarity with the pattern gives a significant advantage.
What about other “strength endurance” and “max work capacity” tests?
Well, in my opinion, max push-up/burpee tests are exhausting, nearly impossible to quality-control, and don't tell you anything meaningful about who should or shouldn't be on your team. Max work capacity tests built on non-ultimate movements don't provide super helpful data – let your participants save their energy for scrimmaging and a couple of tests that will better reflect performance potential! There's a place for tests like this, but a tryout isn't it.
Stay tuned for the next testing installment: benchmark tests for teams! It really is super exciting to be working with an emerging sport and helping develop protocol. I’m grateful every day for the opportunity to work with the ultimate community. Please give me feedback on these ideas, let’s get a discussion going!