TSS and IF are simple mathematical formulas. It might be useful to take a step back and realize how much experience you are using to derive meaning from them.
Yes, to someone who is familiar with TSS and IF. But the fact remains that these measures describe neither the difficulty nor the training load of a ride, and they do nothing to describe the targeted energy system.
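For reference, here are the standard TrainingPeaks-style definitions, as a quick Python sketch. They collapse an entire ride into a couple of scalars built from Normalized Power and FTP, which is exactly why they can't say much about difficulty or the energy system being targeted:

```python
def intensity_factor(np_watts: float, ftp_watts: float) -> float:
    """IF = Normalized Power / FTP."""
    return np_watts / ftp_watts


def training_stress_score(duration_s: float, np_watts: float, ftp_watts: float) -> float:
    """TSS = (seconds * NP * IF) / (FTP * 3600) * 100."""
    if_factor = intensity_factor(np_watts, ftp_watts)
    return (duration_s * np_watts * if_factor) / (ftp_watts * 3600) * 100


# A 1-hour ride at NP equal to FTP scores exactly 100 TSS, whether it was
# a set of VO2max intervals or a steady tempo grind.
print(training_stress_score(3600, 250, 250))  # 100.0
```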
I get that, but what influence would it have on the AI and ML? As long as the early end isn't categorized as a failure, it shouldn't affect the magic behind the scenes. They will surely offer a time-crunched or other option to select.
Yup.
I get that, but they could still release it anyway. You could simply ignore the AT bit while others could already benefit from it. Delaying the release until it delivers value for everybody seems like a waste to me.
That’s all good. Though it would still be beneficial.
Haha, yeah, I think we're pretty much on the same page now. Enough to close the debate, anyway. Apologies, I see how my statement reads; I'm just saying you can't force the new platform on people if all rides aren't accounted for. By all means fire on and get it out before that, though, as long as the legacy option is still available; I think it will be useful for the TR-only athletes.
Here is one question I have for the smart people here in regards to the inclusion of outdoor rides: do you think weather conditions could "muddy the waters", so to speak, for the ML? Because in the datasets there is essentially only heart rate and/or power (and I guess maybe temperature, depending on the head unit). Presumably performance is quite impacted by weather conditions (it sure feels that way), so I wonder how much that impacts the ability to implement outdoor rides into the system?
Needs a long answer, but the more variability in the data, the harder it may be to fit a model and produce useful outputs (training guidance). If a rider's data is all over the map, it will be hard to tell what is going on. You are perhaps describing a rare ride? It would perhaps be easiest to exclude that data and not use outliers or rare events to guide training plan modifications.
By analogy, say I have a race and have an amazing day and produce a 20-minute power PR. I am very happy with that performance, but I don't immediately take 95% of that number and call it my new FTP. It's an outlier ride, not representative of my training conditions.
Perhaps a different question is how much variability can the current model adapt to? If it works great for a homogeneous indoor training setting (meaning a well-controlled environment), that is a great step. A huge accomplishment. The next questions are how much and what type of data and environment variability can the model tolerate without breaking? Or rather, how much variability can it take and still be valid and useful?
TR should be able to produce guidance for what good data input looks like. Riders wanting to use the early iterations should probably stick to those guidelines. TR can automate some aspects of data cleaning. Part of asking the rider questions post-ride is to help with that data cleaning or data weighting process.
As the algorithm or model evolves, the applicable use cases and acceptable data inputs will (or may) expand. Without throwing shade, that's the important GIGO (garbage in, garbage out) discussion.
First step of a journey - it's going to be very interesting.
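To make that data-cleaning idea concrete, here's a minimal sketch of a crude outlier screen over per-ride summaries. The field names, sample values, and z-score threshold are all illustrative assumptions, not anything TR has described:

```python
import statistics

# Hypothetical per-ride summaries (normalized power, duration, temperature).
rides = [
    {"np_watts": 210, "duration_s": 3600, "temp_c": 21},
    {"np_watts": 215, "duration_s": 3500, "temp_c": 22},
    {"np_watts": 205, "duration_s": 3700, "temp_c": 20},
    {"np_watts": 310, "duration_s": 1200, "temp_c": 35},  # race-day outlier
]


def zscore_filter(rides, key, threshold=1.5):
    """Drop rides whose value for `key` sits more than `threshold`
    standard deviations from the mean. The tight threshold is only
    because this toy sample is so small."""
    values = [r[key] for r in rides]
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return list(rides)
    return [r for r in rides if abs(r[key] - mean) / stdev <= threshold]


cleaned = zscore_filter(rides, "np_watts")
# The 310 W race-day ride is screened out; training guidance would be
# built from the remaining, more representative rides.
```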
Everything you listed should be accounted for today with AT.
But of course, if someone wants to do what they want to do, then we can’t stop them.
As far as other people doing "AI Training" goes, I like to think of the cooking analogy.
Machine learning output is a combination of data + data engineering.
Just like a good meal is a combination of good ingredients and a good chef.
Data = Ingredients
Chef = Data engineering
You need both to have a great outcome. You can’t simply say “Well, this restaurant cooks food too! It’s already been done!” and expect it to be the same.
The goal is to beat a coach in terms of picking workouts.
We’ve already seen this in the closed beta. A coach had assigned a “breakthrough” workout to an athlete. The progression data was shown to the coach, and they adjusted the workout to what AT suggested.
Coaches will still be very valuable for accountability, skills, racing tactics, and a sounding board. I don’t see them going away but I see them being used in tandem with AT.
This already happens with the progression system. Except instead of changing your FTP, it puts you at a different point in your progression.
This is happening in our internal system with FTP predictions. I suspect we’ll run future models to optimize certain levels in specific workout zones. This will automatically take into account repeatability which is awesome.
This is what the post-workout survey data is for. If you stop early, we get to know why you stopped. This is also why we can’t have the system tell if you’re overtraining at the moment: without that answer, we can’t tell why you stopped a workout.
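A hedged sketch of how those survey answers could be used to label early stops for downstream cleaning; the categories, field names, and rule below are hypothetical illustrations, not TR's actual schema:

```python
from dataclasses import dataclass
from enum import Enum, auto


class StopReason(Enum):
    """Hypothetical post-workout survey categories."""
    COMPLETED = auto()
    TOO_INTENSE = auto()       # possible fatigue/failure signal
    RAN_OUT_OF_TIME = auto()   # scheduling, not fitness
    EQUIPMENT_ISSUE = auto()
    FEELING_ILL = auto()


@dataclass
class WorkoutRecord:
    planned_s: int
    completed_s: int
    stop_reason: StopReason


def is_struggle_signal(workout: WorkoutRecord) -> bool:
    """Treat an early end as a fitness signal only when the rider says it
    was too intense; time-crunched or mechanical stops shouldn't count
    against them."""
    ended_early = workout.completed_s < workout.planned_s
    return ended_early and workout.stop_reason is StopReason.TOO_INTENSE
```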
There is orders of magnitude more data from people not following plans directly, or not following plans at all.
Very rigid plan compliance over a season is extremely rare, and half of our users don’t follow a plan at all. Then we have all of the outside data, plus people leaving and rejoining TR seasonally.
What?! Please link me to where we said this, and I will preempt and say that if we did say this, we were wrong.
I’m going to share some SSB HV data tomorrow on the podcast. It’s very good at making some athletes faster. It’s also only for around 7% of our athletes.
We definitely have a problem with athletes picking plans that have too much volume for them. We need to put more guard rails into the system to help them self-select down.
There is a vibe on the internet that “if the plan didn’t work for me, then it doesn’t work for anyone.” That is not the case.
I think you should think of it as a “Ramp Test Result” rather than your FTP. That helps us get out of all of these “what is your FTP and how to measure it” debates.
It’s more of “What can you do, and how can you improve it?” The ramp test gets you on the way to getting there, and it’s fairly repeatable.
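As a concrete, hedged example of treating it as a test result rather than a philosophical FTP: the ramp test estimate is commonly described as roughly 75% of your best one-minute power from the test. The fraction below is that commonly cited figure, used here as an assumption rather than an official constant:

```python
def ramp_test_result(best_1min_power_w: float, fraction: float = 0.75) -> float:
    """Estimate the 'Ramp Test Result' from the best one-minute power
    reached during the test. The 0.75 fraction is the commonly cited
    figure; treat it as an assumption, not an official constant."""
    return best_1min_power_w * fraction


# Topping out with a best one-minute power of 360 W gives a ~270 W result.
print(ramp_test_result(360))  # 270.0
```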