XC Race Tire Thread

My buddy picked up a set at Sea Otter and used them for the XL. For his sample size of one ride in non-technical terrain he enjoyed them, thought they worked well and had no issues.


Let me ask this… Would you run them at Heil and/or Bitterroot (i.e. frontside of Hall)? I easily flatted on Kimmons.

Not sure I would ride my Epic XC bike at Heil, but I have ridden those tires many times at Lair o the Bear and Centennial Cone, which are high speed and pretty rocky.

I had a ton of trouble comparing tests from different days as well and switched to comparing just 2 tires at a time, alternating each run. I try for 4 laps on each tire, about 14 min per lap, all at the same average power.

Curiously, how I reach the average power makes less of a difference than you’d think. But with a lot of laps (like 150) I’m getting pretty consistent, so even that is less of a variable. I have a Racing Ray and Rush SCT on the way, hope to test them next month.



Thank you for sharing the data Joe. I do appreciate you trying this experiment and taking the time to share the data. I mean that.

Unfortunately, the data is not good enough to draw any firm conclusions from. There are many reasons why, but this plot is probably the simplest way to demonstrate it.

Look at the top and bottom of each vertical line. Those are the max and min values taken directly from your data. Every tire’s range overlaps every other tire’s at 35.7 seconds: each tire measured both above and below that value. So if every tire can be that fast, your data cannot definitively show that one is faster than another.

The reason for this is that there’s too much variation between runs on the same tire relative to the total time for each run. Your data has, for the same tire, differences between runs as large as 2.6 sec against only a 35.4 sec average time. That’s about 7% variation on the same tire over just 10 runs. And if you ran 100 runs, you would have seen values even further from the average, so even 7% is a low estimate of the error.
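The overlap check above is easy to sketch in code. The run times below are hypothetical placeholders for illustration, not the actual test data:

```python
# Hypothetical run times (seconds) for two tires -- illustrative only.
tire_a = [35.7, 34.9, 36.2, 35.1]
tire_b = [35.3, 36.0, 34.8, 35.9]

def summary(runs):
    """Return (average, min-to-max spread, spread as % of average)."""
    avg = sum(runs) / len(runs)
    spread = max(runs) - min(runs)
    return avg, spread, 100 * spread / avg

for name, runs in [("A", tire_a), ("B", tire_b)]:
    avg, spread, pct = summary(runs)
    print(f"Tire {name}: avg {avg:.1f}s, spread {spread:.1f}s ({pct:.1f}%)")

# If the [min, max] ranges overlap, the raw times alone can't rank the tires.
overlap = max(min(tire_a), min(tire_b)) <= min(max(tire_a), max(tire_b))
print("Ranges overlap:", overlap)
```

When the spread within one tire is a meaningful fraction of the gap between tires, the ranges will almost always overlap like this.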

Rolling resistance testing is very hard, especially for MTB. I’m not going to dive into all the nitty gritty on the various noise factors that could be at play. It doesn’t matter. The times for the tires overlap, so there’s no meaningful conclusions to draw from the times. Nothing personal, it’s just a fact of your testing.

However, riding the tires on the terrain and at the speeds that matter to you does have relevance. If you find that you feel more confident on a certain tire, chances are you will brake less, which is the most important thing to optimize for among similar rolling resistance tires. Braking is throwing away both speed and momentum, so it should be minimized. That and avoiding a flat are going to make a bigger difference than saving 2 watts of Crr.
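To put a rough number on what braking throws away, here is a back-of-envelope kinetic energy calculation. The mass, speeds, and recovery power are hypothetical example values, not measurements from anyone’s testing:

```python
# Rough cost of one braking event: the kinetic energy you must buy back.
mass = 85.0            # kg, hypothetical rider + bike + gear
v_in, v_out = 8.0, 5.0 # m/s, speed before and after braking

# Kinetic energy difference: 0.5 * m * (v_in^2 - v_out^2)
energy = 0.5 * mass * (v_in**2 - v_out**2)
print(f"Energy thrown away: {energy:.0f} J")

# Time to recover it at a hypothetical 300 W of extra effort
recovery_power = 300.0  # W
print(f"Recovery time at {recovery_power:.0f} W: {energy / recovery_power:.1f} s")
```

One unneeded brake check can cost more energy than a couple of watts of Crr saves over several minutes, which is the point about confidence mattering more than marginal rolling resistance.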


The more interesting point to me is how his tests differ from the BRR data, not whether they’re statistically conclusive or whether you can differentiate between the tires in his tests. (And I think that’s one of the reasons he tested: not to prove that one of these tires was necessarily faster.)

For example, in the BRR tests, the Rush tests 6.5 watts (per tire) higher than the Race King @ 25 psi, with the difference increasing as the pressure drops. And, the Race King is also a lighter tire so should accelerate (very marginally) faster.
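For context, a watt gap like that maps onto a Crr difference through the standard rolling resistance relation, power = Crr × mass × g × speed. The Crr values below are hypothetical placeholders, not BRR’s published numbers:

```python
# Back-of-envelope: what a Crr gap means in watts at race speed.
# Crr values are hypothetical, chosen only to illustrate the arithmetic.
g = 9.81        # m/s^2
mass = 85.0     # kg, hypothetical rider + bike + gear
speed = 5.5     # m/s, roughly 20 km/h

def rr_watts(crr):
    # Power lost to rolling resistance for the whole system weight
    return crr * mass * g * speed

crr_race_king = 0.020
crr_rush = 0.023
delta = rr_watts(crr_rush) - rr_watts(crr_race_king)
print(f"System-level difference: {delta:.1f} W")
```

Note a per-tire figure like BRR’s is roughly half the system-level number, since the bike’s weight is split across two tires.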

Yet, from his tests, they’re pretty much the same.

Again, not drawing conclusions, but it does say to me that maybe we should all be doing some more testing and not taking the BRR rolling resistance at face value.

Anyways, I’m still on team Race King for my marathon events because they test well pretty much everywhere, and have solid puncture resistance scores too.


this 100%


The BRR testing is pure rolling resistance on a smooth drum, not real world (rolldown or time trial testing). Real world has many factors at play besides rolling resistance (aerodynamics, shock fluid temperature, exact line choice, precision of the timing and reaction time to trigger it, etc).

Since we don’t know how large of an effect each of them had on the tests, we can’t say if the reason for the results being close is because the tires are very similar in performance or that the other factors had a bigger influence and hid the tire difference. We just don’t know.

On the other hand, the BRR test results are very repeatable, and the data follows very obvious trends consistently within the same tire and across different tires, even across brands and from road tires through fat bike tires. It just comes with the caveat of being on a smooth drum, which does have limitations for the real world.

Certainly real world testing has importance and catches things that a drum test never would (hysteresis losses in the human body, traction on loose surfaces, etc). But those same factors make the testing very noisy. That’s why it’s so important to be extremely careful about what data you let inform your decisions. Bad data makes for bad conclusions. That’s why I’m saying this data isn’t useful to me.


While not conclusive, not useless. I think it’s a good data point to add to the discussion and adds some context, even if more testing is required. Especially because it disagrees with BRR, and because there’s a lot of other “anecdata” from people who have done similar testing that disagrees too. Loop tests, modified Chung tests, rolldown tests.

And I’m also not surprised that we had someone technical jump in to discredit it; I’m surprised it took as long as it did. Engineers just can’t help themselves :rofl: (And I say that as one, who does it from time to time)


And just to make sure we’ve covered everything on the BRR side, he doesn’t publish any confidence intervals or ranges with his tests.

While BRR is a great, consistent resource, it does provide an artificial level of precision. I would seriously doubt that those tests are all repeatable to within 0.1 watts. A lot of less experienced engineers believe that the precision of decimals is more valuable than the accuracy of ranges.

So in addition to the BRR tests being on a smooth drum, not in the real world, we also don’t know how large of a difference between tests actually matters and is repeatable.


:person_raising_hand: Me!

I found mine for $35… I think they are still under $40? Amazing tire for the price with good grip and fast rolling. Perfect XC tire (for me).

I have tried the Aspen, Ikon, and Renegade, and they all roll great, but I am not faster on them personally. My favorite trail tire is the Ground Control (all my DH bests are on them) but I go Fast Trak for XC racing.


Do mine, do mine! Does this tell us anything?



and now that we have the Racing Ray done and dusted…are we throwing in with the Wicked Will? It is in winning position ATM. I reeeeally want to go with the majority here to get more MTB tires tested. We do have YET ANOTHER version of the conti grand prix in winning position too. Because apparently we haven’t tested that tire nearly enough LOL.



I’d like to see the Wicked Will tested, because in my anecdotal riding/testing…I like them better than the Ray/Ralph combo.

How do you get the team issue Aspens tested? The 170 tpi edition that the pros have been using forever but is just now available to the general public. I think the rolling resistance of those would be more interesting for XC racers than a trail tire.


If I may, 35 seconds is a very short time, so the measurement tool and its associated error are a relevant factor.
If it’s a stopwatch, then most of the differences measured can be attributed to reaction time.
Not trying to be mean to Joe. Personally, I have a 20-25 min lap I like to use for “real life” testing. Far from perfect, but it works for me.


The best way to check whether you can conclude anything from your test results is to apply what’s called the ‘Student’s T-Test’ to the data, which checks for statistical significance. It’s very easy to do in MS Excel with the T.TEST function.

The box and whisker plots shown by @huges84 are a nice way to visualise the variability/spread of a dataset, but they don’t say anything about the confidence in the average value, or whether the difference between two average values obtained from tire testing occurs simply by chance alone. Other metrics like standard errors, confidence intervals and T-Tests are ways to quantify those things. They consider not only the spread of the data (the standard deviation) but also the number of measurements (the sample size) as well as the size of the difference between the two average values. The T-Test gives a probability that the difference in the average values happened simply by chance. A low probability value is good, because it shows that a real difference has occurred. A probability value of 5% is often used as a threshold for statistical significance.
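The same calculation is available outside Excel. Here is a sketch using SciPy’s two-sample t-test; the run times below are hypothetical illustrations, not the actual test data:

```python
# Python equivalent of Excel's T.TEST, using SciPy.
# Run times (seconds) are hypothetical, for illustration only.
from scipy import stats

tire_a = [35.1, 35.4, 34.9]
tire_b = [35.9, 36.2, 35.8]

# Two-sample t-test; equal_var=False gives Welch's version, which is
# safer when the two tires' run-to-run variances may differ.
t_stat, p_value = stats.ttest_ind(tire_a, tire_b, equal_var=False)
print(f"p = {p_value:.3f}")

if p_value < 0.05:
    print("Difference unlikely to be chance alone")
else:
    print("Cannot rule out chance")
```

With only three runs per tire, the test is very sensitive to run-to-run consistency, which is why a tight testing protocol matters so much.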

For your Thunder Burt versus Booster Pro tests, even though there were only three runs per tyre, the T-Test probability that you measured that difference in average times just by chance is quite low, 6% if I’ve calculated it correctly, so there’s a very good chance the TB is actually faster and the result isn’t ‘within the noise’. That’s pretty good considering the difference in average times is only 2.5%. It’s an indication that your test has good repeatability, with quite small % variation between runs, and I think you mentioned in a previous post that you’ve got that dialled.

Statistical significance checks aren’t everything though. The design of the test is important. Your A-B-A-B style of testing is a good technique, because it reduces biases coming from evolving trail/wind/temperatures. Doing a loop or an out-and-back test helps too, because it somewhat reduces biases that occur from variable wind conditions, which single direction tests are prone to.


Awesome, thanks for the review of the numbers and the Student T-Test info, I’ll definitely check that out and use it going forwards!


With how often the pros flat, I don’t think the RR benefit is worth the risk. For those jumping in local races, how much time will it realistically save you, and is it worth blowing up your race over?


Right on! I just mounted this setup this morning and will get out for a few long rides here in CO and then in Bella Vista/Bentonville next week with them.
