Crowd Source Testing

@hubcyclist

Following on from the thought about crowd-sourcing experiments.

I'd think most folks are set for the season, but maybe we'd get some takers in the fall? Of course, this is a global forum, so there's year-round potential. If one could set a few plans in place, then ask people to do them and report certain data, one could end up with a rolling "study" where data comes in as people progress and perform tests. At some point the study could become adaptive.

It would be interesting to come up with, say, 6-, 12-, 18- and 24-week plans and compare them. We'd have to think about inclusion/exclusion criteria, other confounders, and the tests to be completed.
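To make "inclusion/exclusion" concrete, here's a toy Python screen. Every criterion in it is a placeholder we'd have to agree on before recruiting anyone:

```python
# Toy eligibility screen -- all criteria here are placeholders for discussion.
def is_eligible(rider: dict) -> bool:
    """Return True if a rider meets the (hypothetical) inclusion criteria."""
    return (
        rider.get("weekly_hours", 0) >= 4              # enough available training time
        and rider.get("has_power_meter", False)        # needed for the power tests
        and not rider.get("currently_injured", False)  # exclusion: current injury
        and rider.get("months_consistent_training", 0) >= 3  # some training history
    )


print(is_eligible({"weekly_hours": 6, "has_power_meter": True,
                   "months_consistent_training": 12}))  # True
```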

Let's collect interesting ideas and see where this goes.

I’ll think more before typing more…

-Mark

Polarized vs sweet spot for fast twitch vs slow twitch athletes.

The measurement metric is improvement in performance on a 20-minute FTP test (or, better, one of the Kolie Moore FTP test variants).

My hypothesis is that for fast-twitch athletes, sweet spot is more effective than polarized.

And maybe polarized is more effective than sweet spot for slow-twitch athletes.

Ideally, we'd find athletes who know they clearly skew one way or the other, fast vs. slow twitch.
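To pin down the metric, here's a minimal Python sketch of how the improvement could be computed, assuming the common convention of estimating FTP as 95% of 20-minute average power; the longer Kolie Moore style tests aim to read FTP off more directly, so the 0.95 scaling is only an assumption for the 20-minute format:

```python
def ftp_from_20min(avg_power_w: float, scale: float = 0.95) -> float:
    """Estimate FTP from a 20-minute test using the common 95% convention.

    The 0.95 factor is an assumption; longer test formats would skip it.
    """
    return avg_power_w * scale


def pct_improvement(baseline_ftp: float, post_ftp: float) -> float:
    """Percent change in FTP from the baseline test to the post-plan test."""
    return 100.0 * (post_ftp - baseline_ftp) / baseline_ftp


# Hypothetical numbers purely for illustration.
before = ftp_from_20min(280)
after = ftp_from_20min(295)
print(f"{before:.0f} W -> {after:.0f} W, {pct_improvement(before, after):+.1f}%")
```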

Dave,

I think that is interesting. The fast/slow distinction may be hard to sort out, though.

My guess (hypothesis) is that if we compared a TR type plan to someone doing POL we would find:

The TR (SST) plan would produce a superior 20-minute power test, but POL would produce a longer TTE.

I'm assuming the POL-type plan will be weighted more toward longer rides and the SST plan will be more time-crunched.

I'm interested in comparing six-week blocks of LT1-type training vs. the SST base plans. I think to keep people engaged we'd need to start as a group and then report back results at regular intervals to see what is happening. That goes against blinding rules, but it would be more fun.

Without doing any power calculations… could we find 6-8 subjects per group, one set willing to do a TR plan and another willing to go a different route? Report in every 6 weeks (tests TBD).
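For what it's worth, the back-of-the-envelope power calculation is quick to run before recruiting. A sketch using statsmodels is below; the effect size (the between-group difference in FTP improvement, in standard-deviation units) is a pure guess we'd have to argue about, but it shows what 6-8 riders per group can and can't detect:

```python
# Rough power calculation for a two-group comparison of FTP improvement.
# Requires: pip install statsmodels
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed standardized effect size (Cohen's d) for the difference in
# FTP improvement between the two plans -- this number is a guess.
effect_size = 0.8  # "large" by convention

# Power achieved with 8 riders per group at alpha = 0.05, two-sided test.
power = analysis.solve_power(effect_size=effect_size, nobs1=8, alpha=0.05)
print(f"Power with n=8 per group: {power:.2f}")

# Or flip it around: riders per group needed for 80% power.
n_needed = analysis.solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
print(f"Riders per group for 80% power: {n_needed:.1f}")
```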

Need to think on what tests we want to see.

Mark

Don't TR basically already have this data? They have a big sample set over multiple years, not just of people following the SSB and TB plans at high, medium, or low volume, but also all the people who make up their own plans using TR workouts, or who don't really use TR at all but have their non-TR workouts (indoor or out) automatically syncing to TR. It's not a controlled experiment, but it's arguably better, since you've got a much bigger sample size than you would ever get with volunteers and more than enough data points to pull out all sorts of analysis with the right tools. My assumption is that this is already happening behind the scenes and that changes to the plans, as well as future software features, will all be informed by the data they already have.

What they don't have is whether people are naturally "fast or slow twitch". But this seems like a highly subjective and unreliable data point to me, unless you do DNA testing. And even then I'm not sure how good the current science is; it seems we're still a product of our environment as much as our genes.

Of course TR have all this information, and it's their prerogative how to handle and release findings. People are curious to know what the data reveals, and some of us who are pretty research-minded wouldn't mind doing the analysis if people are willing to participate.

There have been a few folks making a stink in some threads about TR not releasing more findings to "prove" stuff about their training methodology. This crowd-sourcing process isn't nearly as nice as tapping into the whole TR universe, but if people are truly curious about comparing, say, polarized vs. purely sweet spot, this would give interested people an opportunity to share their data and do those comparisons.

Don't expect TR to release their data sets, as it's a competitive advantage for them. As it should be.

I might argue (in the intellectual sense of argue) that their data is going to be asymmetrical. They will have a large number of individuals who have done SSB I then II then Build then maybe Specialty. I expect the number of people that actually complete that full cycle and do it more than once is small.

With enough members, though, they can filter through to find completers and show that the program works. But "works" will be defined in terms of improvement from a baseline (starting ramp test or FTP for most participants vs. ramp tests along the way).

What they may not know is whether Program A beats B beats C, and they probably won't have good data on any metric other than FTP as tested by the ramp test. Meaning I will bet that, of the completers defined above, very few will have done the right tests to flesh out a full power curve, and TR data won't readily show time to exhaustion.
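If anyone ever did get an export like that, the completer filter itself is trivial. A hypothetical pandas sketch, with column names invented for illustration and rows assumed to be in chronological order per athlete:

```python
import pandas as pd

# Hypothetical export: one row per athlete per completed plan phase,
# in chronological order. Column names and values are invented.
plans = pd.DataFrame({
    "athlete_id": [1, 1, 1, 2, 2, 3],
    "phase":      ["SSB I", "SSB II", "Build", "SSB I", "SSB II", "SSB I"],
    "start_ftp":  [230, 238, 245, 210, 214, 260],
    "end_ftp":    [238, 245, 252, 214, 216, 262],
})

required = {"SSB I", "SSB II", "Build"}

# Keep only athletes who completed the full SSB I -> SSB II -> Build cycle.
phases_done = plans.groupby("athlete_id")["phase"].apply(set)
completers = phases_done[phases_done.apply(lambda p: required <= p)].index
full_cycle = plans[plans["athlete_id"].isin(completers)]

# Improvement from baseline: first recorded FTP vs. last recorded FTP.
summary = full_cycle.groupby("athlete_id").agg(
    baseline=("start_ftp", "first"), final=("end_ftp", "last"))
summary["pct_gain"] = 100 * (summary["final"] - summary["baseline"]) / summary["baseline"]
print(summary)
```

That only gets you improvement vs. a baseline, which is exactly the point: nothing in it says anything about TTE or the rest of the power curve.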

Basically, when your primary metric is FTP defined by short tests, that is what you get. But there is a lot more to being good at riding a bike than FTP defined by short tests. FTP tests are a proxy, and while good, not great.

OK…

Here is something we could do:

"I'm a 45+ year-old masters athlete and I feel knackered and unable to complete the TR plans."

How about we compare the as written plans to plans with different work:recovery ratios?

Group A does SSB I+II then General Build as written.

Group B does the same plan with 3:1 and 2:1 work:recovery ratios and may substitute one of the weekend SST rides for a Z2 ride.

We can measure FTP according to the plans, and we can ask some more subjective questions. Something we could do that TR can't easily do is ask the participants to record other, more subjective data: fatigue and RPE, for example.

At the end of the 20 weeks everyone will have done some good training (so everyone benefits), nobody “wastes” 20 weeks doing something too goofy, and we’d have good info about power curves (and we could do a full set of power tests before and after) and completability.
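To make the reporting concrete, each check-in could just be one row in a shared sheet. A possible shape for that row, sketched as a Python dataclass with field names that are only a suggestion:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CheckIn:
    """One check-in per participant every 6 weeks; fields are a suggestion."""
    participant_id: str
    group: str                                  # "A" (as written), "B" (3:1 / 2:1), "C" (POL)
    week: int                                   # 6, 12, 18, 20
    ftp_w: float                                # from the plan's scheduled test
    five_min_power_w: Optional[float] = None    # optional power-curve points
    twenty_min_power_w: Optional[float] = None
    avg_session_rpe: Optional[float] = None     # 1-10, subjective
    fatigue_score: Optional[float] = None       # 1-10, subjective
    workouts_completed_pct: Optional[float] = None  # completability


# Example row -- the numbers are placeholders.
print(CheckIn(participant_id="rider_01", group="B", week=6, ftp_w=252,
              avg_session_rpe=6.5, fatigue_score=4, workouts_completed_pct=92))
```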

NOTE: I hypothesize that being able to complete the program for 20 weeks is a huge factor in improving.

If we could find participants, a Group C could be a POL type plan. Need someone to write out what that looks like.

If we had a good group of experienced cyclists we could also compare outcomes to their historic highs as a reference. TR won’t have that data.

I’m thinking out loud here so this is not to be taken as anything other than a conversation that might lead to something fun.

-Mark

I do like the idea of crowd-sourced science. I think the most difficult part of it would be to define a good research question. In my understanding, crowd science works well when you can get a lot of people to do something very simple. Because of the large number of data points, reliability doesn't matter as much (statistics can identify outliers, for example). This is basically the opposite of having a small, well-defined control group. I'd think that it would be difficult to exclude enough outside parameters to do a well-controlled study when coordinating it on here and relying on a lot of self-reported data.
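On the outlier point, even a simple robust screen goes a long way with self-reported numbers. A sketch using the median absolute deviation; the 3.5 cutoff is just a common rule of thumb:

```python
import numpy as np

def flag_outliers(values, threshold=3.5):
    """Flag values far from the median using a modified z-score based on
    the median absolute deviation (MAD). The 0.6745 constant rescales MAD
    to be comparable to a standard deviation for roughly normal data."""
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    if mad == 0:
        return np.zeros(values.shape, dtype=bool)
    modified_z = 0.6745 * (values - median) / mad
    return np.abs(modified_z) > threshold


# Placeholder FTP-improvement percentages, including one implausible entry.
gains = [3.1, 4.7, 2.8, 5.0, 3.9, 25.0]
print(flag_outliers(gains))  # only the 25% "improvement" gets flagged
```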