I’m sure that works fine, but it’s not unreasonable to want a “why” for the changes a system makes.
When it’s just a black box, you blindly follow it and hope for the best. At least with a coach you can ask them for the reasoning behind any changes. Hopefully that is something we see in TR at some point, as it might help a lot of people have more confidence that it knows what it’s doing.
So I just swapped out today’s workout (which I haven’t even done yet) for a shorter but harder one at a higher PL.
It’s now downgraded other upcoming threshold work. There obviously is a reason for this (I’m guessing preventing long-term fatigue), but it would be good to understand what that decision was based on. Is it because when I did something similar in the past I then got sick? That could have been down to a multitude of factors, including work/home life, which are now very different. Then I’d be able to make a judgement call on it: “Oh yes, I remember being exhausted”, or “Ah, it’s OK, because earlier in the year work was horrendous, but this time I’ve been on annual leave this week and have a couple of easy work weeks coming up”.
The power of AI is pretty much unlimited, so although it seems like a big ask, being able to “discuss” changes with the system would be an obvious future step.
Could there be an internal TR AI model experiment going on?
In the spring they silently released a TrainNow update that ignored Progression Levels and proposed workouts that TN “thought” the athlete could complete. Most of us were either excited or terrified to see PL 9+ anaerobic workouts popping up.
Maybe the pendulum has now swung the other way, and many of us find ourselves yawning at workouts with half the PL we’ve already achieved?
One way or another, unexpected recommendations without any reasoning don’t build trust.
But really, what’s the difference between a TR-generated workout and ChatGPT? While TR’s promo hints that its AI/ML is a great coach, the bottom line is that there is no adaptation to you and no learning about you. It’s all based on the average of the 200 million workouts, or whatever the number is. We’re treated as being within the bell curve. As an Olympian, should my generated workouts be different from Joe the high school football star’s? Maybe, maybe not. TR doesn’t distinguish, so… perhaps not. Or should it?
That fundamentally misunderstands how ML models work. It is using your workout history to make predictions about the future. It is not averaging you nor is it simply looking at the fat part of the bell curve.
Obviously, I don’t have any knowledge of TR’s models specifically, but I have used ML models in industry and have just wrapped up my course on probability theory and statistics (as the lecturer). ML models need to be trained on diverse training data, where diverse means athletes who span the gamut in terms of gender, age, and ability. That includes athletes who are similar to you. The closer you are to the mean, the more data there is, and the easier it is to get good predictions.
Not knowing the models, I can’t say which factors go into deciding which athletes are similar to you. But you won’t be lumped in with an Olympian unless your performance merits it.
Moreover, your own workout history will be crucial: it will inform the model what you are (not) capable of, what training load you can take, etc.
Also, TR has a base plan that you are on. It seems like the AI is adjusting the parameters up and down.
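Purely to make the “similar athletes plus your own history” idea concrete, here is a toy sketch of similarity-weighted prediction. This is my own illustration, not TR’s actual model; every feature and number in it is made up.

```python
# Toy sketch (not TrainerRoad's actual model): estimate whether an athlete will
# complete a proposed workout from the outcomes of *similar* athletes, where
# similarity is measured on the athlete's own numbers. Everything here is made up.
import numpy as np

# Each row: [FTP in W/kg, recent weekly TSS, current Progression Level]
others = np.array([
    [3.1, 350, 5.2],   # athletes with a known outcome for this workout
    [3.0, 320, 4.8],
    [4.9, 700, 8.5],   # an "Olympian-like" athlete, nothing like me
    [2.9, 300, 4.5],
])
completed = np.array([1.0, 1.0, 1.0, 0.0])   # 1 = finished it, 0 = bailed

me = np.array([3.0, 330, 5.0])

# Normalise each feature so no single unit (e.g. TSS) dominates the distance,
# then weight by inverse distance: athletes unlike me contribute almost nothing.
scale = others.std(axis=0)
dists = np.linalg.norm((others - me) / scale, axis=1)
weights = 1.0 / (dists + 1e-6)
p_complete = np.average(completed, weights=weights)
print(f"Estimated chance I complete this workout: {p_complete:.2f}")
```

The real system will be far more sophisticated, but the point stands: the Olympian-like row barely moves the estimate, while the rows that look like your own numbers dominate it.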
ChatGPT has read every plan, bullshit or not, ever posted to the internet and is regurgitating some version of them back to you. ChatGPT isn’t looking at your power duration curve or anything else.
TR has trained its models on the previous data it has. My guess is that ChatGPT hasn’t been trained specifically on ride data. ChatGPT is a large language model; I don’t think it would read .fit files even if it found them.
As always, the rider is the coach and TR is a software training aid. If you are a former Olympian, it’s on you to maybe pick workout alternatives at a higher training load, if you can handle that.
Sorry, but at this time, based on what I’ve done this year, I’m not convinced by TR’s recommendations (TrainNow, Red Light, etc.). That’s where I stand. I appreciate your opinion; it just doesn’t match my experience.
That’s like asking what the difference is between a young, inexperienced nurse at your annual checkup giving you nutrition advice based on your basic metrics (weight, body fat, height, heart rate, etc) and an Associated Press article on health trends in the population.
The young, inexperienced nurse can give you some bad or mediocre advice. And her ability to adapt what she’s learned to your specific case is limited both by the quality of her training and by how little she knows about you.
But there is ZERO overlap between that and the AP article, and no meaningful comparison. You are utterly and totally misunderstanding the difference between a system that attempts to offer some limited level of individualization based on machine learning (the TR case) and a large language model whose ONLY capability (I repeat, ONLY capability) is generating text that looks like what you want to hear, based on the question and everything it has read before (ChatGPT or others).
If that sounds a little harsh, please note that’s not my intent: I’m trying to be helpful. It’s just that you’re comparing apples and freight trains.
You told ChatGPT you did those workouts and your FTP went up 10 watts, a few minutes after it suggested them, when you couldn’t possibly have done them. It believed you anyway, and it never asked how you knew your FTP had gone up 10 watts. Doesn’t sound like a great coach to me.
I get the point you’re trying to make, and I do think that AI will eventually get there, but using your specific example, I don’t really like what it did after your FTP went up.
I remember Coach Chad saying AI couldn’t easily replace the human part of the coaching and planning, and I think this is a good example. It treated you like a machine that can run hard constantly, without pointing out that you need periods of recovery and maintenance. It continued to push up your TiZ the very first week after your FTP increase, even though it went up a large amount. Shouldn’t it have backed off on the TiZ to start the next cycle of progression and ease you into your newer, and much higher, FTP? Also, it never recommended any kind of recovery week or warned you to take a break/listen to your body, etc., after 3 weeks of building and an FTP test. (Granted, that might have been in there and you left it out of the screenshots)
We have had several members who have experimented with ChatGPT in the past and the results were bad-to-mixed. I remember one training plan throwing the kitchen sink at the problem, completely ignoring basics like recovery.
I have students using ChatGPT at uni, and for more advanced problems in specific domains, it fails spectacularly. Think an 85+ % failure rate. Without expertise, you can’t separate the wheat from the chaff.
Moreover, ChatGPT doesn’t know anything about ramp rates and the like. It regurgitates what others have written on the topic. An LLM can “explain”/summarize training fundamentals, but it cannot vary ramp rates based on your workout history. Large Language Models work very differently from the type of Machine Learning models TR uses. ML-based models answer very specific questions and are designed so that similar inputs will yield similar outputs. They cannot explain training fundamentals to you; those need to be baked into the model.
An LLM is made to anticipate the most likely output from a given input. That is how it works on the inside. It cannot, by design, weigh evidence or account for the differences and limitations between studies. LLMs are not intelligent; they do not and cannot reason. They work based on statistics.
What this means is that their training data contained sports science papers (good ones and bad ones), good and bad training plans, and all of that is mixed up.
With the exception of something like CoachCat, it isn’t trained on a known-good dataset for cycling. So LLMs are known to spit out garbage at times; the proverbial glue in your pizza recipe.
LLMs cannot quantify differences between two or more training plans. LLMs cannot make logical deductions; they can only do the right thing statistically.
That is particularly deceptive, as LLMs can regurgitate general training principles. That doesn’t mean they understand them or know how to implement them.
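Just to make “the statistically most likely continuation” concrete, here is a deliberately crude toy sketch of my own. Real LLMs are transformers trained on enormous corpora, but the objective is the same next-word prediction; nothing in this understands training.

```python
# Crude toy sketch of the LLM idea: emit the statistically most likely
# continuation of the input text. Real LLMs are vastly more capable, but the
# objective is the same; nothing here "understands" training.
from collections import Counter, defaultdict

corpus = (
    "base training builds aerobic fitness . "
    "sweet spot training builds aerobic fitness fast . "
    "polarized training builds aerobic fitness too ."
).split()

# Count which word tends to follow which (a simple bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

text = "training"
for _ in range(4):
    last = text.split()[-1]
    text += " " + following[last].most_common(1)[0][0]

print(text)  # plausible-sounding words, regardless of whether it is good advice
```

It will happily string sentences about ramp rates together without any notion of what a ramp rate actually does to you.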
In contrast, an ML algorithm needs a score function and constraints as inputs. If those or the data are bad, you will get junk too, but you will get an optimal solution to a given problem.
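Again purely as an illustration (my own made-up score and numbers, nothing to do with how TR actually does it), this is the flavour of “score function plus constraints gives you an optimal, inspectable answer”:

```python
# Toy sketch of the contrast: an optimization/ML approach needs an explicit
# score function and explicit constraints, and returns the best feasible answer.
# The score, the constraints, and the numbers are all made up for illustration.

last_week_tss = 400
candidates = range(300, 651, 25)          # possible next-week training loads

def score(tss):
    # Hypothetical "fitness gain": rewards load, penalizes big week-over-week jumps.
    ramp = tss - last_week_tss
    return 0.1 * tss - 0.0005 * max(ramp, 0) ** 2

def feasible(tss):
    # Explicit constraint: cap the ramp rate in both directions.
    return -100 <= tss - last_week_tss <= 75

best = max((t for t in candidates if feasible(t)), key=score)
print(f"Recommended next-week TSS: {best}")   # the +75 ramp cap is what binds here
```

Garbage in still means garbage out, but at least you can point at the score and the constraints and ask why they are what they are, which is exactly the “show me the reasoning” complaint from the top of the thread.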
We don’t have a scientific basis for making decisions based on HRV yet, so how is an LLM supposed to make good decisions? It literally can’t, as its output is based on the knowledge that was put into it.
As soon as you use the same yardstick we use for TR, it all comes crashing down, i.e. when you insist on asking “how much easier/harder should my workouts be as a consequence?”.
You seem to have a habit of just saying things with conviction rather than researching them. There are plenty of recent research papers on exactly this topic, and HRV response is very well researched as an indicator of sympathetic or parasympathetic state, which in turn is well researched as a marker of readiness to train at intensity.
This has gone off topic a bit, but I think it does quite clearly differentiate what AI/ML means to most people (à la the AI Endurance example, whether it is actually practical/effective to use or not) versus TR building some big data models to inform their choice of algorithm implementation and calling it AI/ML on podcasts to make the sauce sound more special.