I get your point here, but in the beginning, getting the levels correct (or at least close) is essential; otherwise every workout you do is overly easy. If you start with a bunch of Level 1.0 or even 3.0 ratings, but the last training block you did at the exact same FTP had a bunch of rides that were more like 5.0, then it literally takes months to get back to the correct levels when marking a ride as “Moderate” only moves you up 0.2 per ride.
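A back-of-envelope sketch of that complaint, assuming a flat +0.2 bump per “Moderate” response (a simplified model for illustration, not TR's actual mechanics):

```python
def rides_to_target(start, target, gain=0.2):
    """Rides needed at a flat +`gain` per survey response (simplified model, not TR's logic)."""
    # work in tenths of a level as integers to avoid float rounding drift
    steps_needed = round((target - start) * 10)
    step = round(gain * 10)
    return -(-steps_needed // step)  # ceiling division

print(rides_to_target(1.0, 5.0))  # 20 rides
```

Twenty rides at two or three structured workouts a week is roughly two to three months, which is where the “literally takes months” frustration comes from.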
Maybe, in the sense that we now have 8 metrics to worry about (FTP + 7 PLs).
It’s possible some will game the surveys to kick their levels up. But from what I see, many of us just want the function that TR promised: do a workout, answer a simple survey, and let AT guide the way.
That of course takes good data (lacking from the stuff AT ignores at the moment), good rider feedback (clouded by all the uncertainty of the current surveys), and our trust in the system to do the right thing (see points 1 & 2).
Yep - that table of descriptions is also completely useless. All it says is to rate it “how it actually felt”, and then we’re back to all the comments made in hundreds of posts above and elsewhere…
Nowhere does it say ‘Hard = these feelings’, ‘Very hard = these different feelings’, etc.
Sure, and I am all for them listening, learning & responding. We just need to know how we need to adapt compared to the original guidance (and knowing what they are changing would be icing, but not necessary).
I thought the “faces of nate” idea that @mcneese.chad came up with was brilliant. I think we’re all saying we wish it was that simple. And it should be.
I’ve got a slightly different take. My gut feeling is that post-workout you can send one of two signals:
- Easy/medium/hard if levels are generally correct, and you are giving a normal signal to AT regarding how training is coming along.
- The red alert button to tell TR we have a serious problem with levels. In other words, my level is completely wrong, either too hard or too easy.
Right, it may have been ambiguous, but my point was that we DID get something in the first place.
The confusion from that system is clear and obvious to anyone who has followed this for more than a week.
So, to a larger point: how might this confusion affect the beta’s progress? If people are actively working outside the designed approach, whether from lack of confidence in the system or because they aren’t getting the results they anticipated, and are responding in ways TR didn’t anticipate (“gaming” ratings based on what they think they should be seeing, etc.), what risk does that introduce that the system is being degraded rather than refined when “false” data is interpreted in the ML environment? I’m not an ML/tech guy. However, if I were a TR senior exec I’d be all over this internally to understand the implications, and I’d be aggressively communicating with the AT beta user base. I think the implications could go beyond the AT beta, with ML also being part of the production TrainNow capability.
Or I may be totally wrong and it’s all part of the larger plan. But I guess I’d want to explicitly say that to my tester cohort and anyone who is paying attention to what TR is doing.
I have not read it yet, but we have an update on the survey:
Yeah, it’s a +0.0. I just wondered if, given I was starting from 1.0, it might be worth at least a point-something since the session was rated a breakthrough. Oh well, no big deal… it’s given me a good idea of where I am anyway.
Did you get the “struggle” survey? Or the “looks like you cut it short” thing?
It was cut short, then I picked stress / motivation. I might have had a different result if I’d said time, but I recall it doing much the same on an otherwise successful session that I’d cut short for that reason.
Even though I’ve filled them out, I’m not sure how many different surveys can be presented to a participant. It would be good to see the surveys themselves in the article that was updated.
This and the other AT threads are really interesting discussions. How quickly we can find flaws! No wonder companies do customer-focussed beta testing.
However, this particular discussion about survey responses overlooks two things.
First, there is some objective information available to the machine about how the workout actually was for me. That is the power/HR graphs. Were the targets not met? Were there breaks? Did the power trace become ragged in the later intervals? Did HR get up to near the maximum seen in the past month or so? I’m no sports scientist, but these all seem to me to be indications that I struggled. I don’t know how clever the machine is at reading these signs.
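To make that concrete, here is a hypothetical sketch of how software could flag a struggle from those traces. These heuristics, thresholds, and field names are entirely my own invention for illustration, not anything TR has published:

```python
def looks_like_a_struggle(intervals, recent_max_hr):
    """Flag a workout as a probable struggle from per-interval summaries.

    intervals: list of dicts with avg_power, target_power, max_hr, power_stddev
    (hypothetical fields, chosen for illustration).
    """
    # Were the targets not met? Count intervals more than 5% under target.
    missed = sum(i["avg_power"] < 0.95 * i["target_power"] for i in intervals)
    # Did the power trace become ragged in the later intervals?
    late = intervals[len(intervals) // 2:]
    ragged = any(i["power_stddev"] > 0.10 * i["target_power"] for i in late)
    # Did HR get up near the maximum seen recently?
    near_max_hr = max(i["max_hr"] for i in intervals) > 0.97 * recent_max_hr
    return missed >= 2 or (ragged and near_max_hr)
```

A workout with fading power in the back half and HR pinned near a recent max would trip this; a clean, on-target session would not. The real question in the post stands: we don’t know how much of this kind of reading the machine actually does.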
Second, AT is meant to be a personalisation machine. We are not told exactly what that means. There are two possibilities:
- Completing this workout at that PL with a rating of “moderate” means that any person’s PL goes up to that workout level and their future workout levels are increased.
- Completing this workout at that PL with a rating of “moderate” means that my PL goes up to that workout level and my future workout levels are increased; for @mcneese.chad, it would take a rating of “hard” for this change to occur.
In other words, are the responses to the survey rated individually, or are they rated uniformly for everyone?
If the responses are rated individually, then the precise interpretation of the terms in the survey is less important than consistency in responding. If the responses are rated uniformly, then we all have to mean the same thing by “hard”, “moderate” and the like.
Thanks for sharing @mcneese.chad. I actually ignored that thread because I thought it was just another post with a rehash of this topic.
I’d like to see some of the guesswork around surveys taken out of the AT process. I would propose that AT make a default assumption that some PL progression should happen, and only launch the survey if there is a superpass or a struggle. The user could also have the option of prompting a survey if workout performance doesn’t tell the full story. Otherwise, PL progression should be automated…
Then how would AT know that you are on the verge of failing or digging yourself into a hole?
For example, you have a workout that should be achievable, so you’d rate it moderate or maybe hard, but you complete the workout (barely) and rate it very hard. That most likely indicates you are fatigued, under-fueled, or not sleeping well, and giving you a harder next workout probably isn’t advisable.
A follow-up question on this… Will the decay rate fix apply retroactively, or only moving forward? Today I observed another plunge that defies logic (or even what I had previously observed), where the level didn’t revert even to the last highest completed workout in that retroactive rolling 10-14 day window, but fell well below a number of successfully completed workouts… especially illogical in the endurance energy system.
If they are smart (they are…), the algorithms will see our individual biases and start to course-correct over time. The weight of an answer will need to be modified based on the variability of that rider’s answers over time.
Said another way - someone who never clicks 5 all out should eventually have 4 become their new 5.
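One way to sketch that idea (purely illustrative; nothing suggests this is what TR actually does) is to rescale each rider’s answers against their own historical range, so a rider who never uses the top of the scale has their personal ceiling treated as “all out”:

```python
def calibrate(rating, history):
    """Map a 1-5 survey answer onto 1-5 relative to this rider's own use of the scale."""
    lo, hi = min(history), max(history)
    if hi == lo:
        return 3.0  # no spread in the history yet; treat the answer as neutral
    return 1.0 + 4.0 * (rating - lo) / (hi - lo)

# a rider who only ever answers 2-4: their 4 calibrates to a full 5
print(calibrate(4, [2, 3, 3, 4, 2]))  # 5.0
```

A real system would presumably weight by frequency rather than just min/max, but the min/max version shows the mechanism: “4 becomes the new 5” for the rider who never clicks 5.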
I need to confirm with the team, but I believe this is just a ‘moving forward’ fix. After the update is released, and as you complete more workouts and Ramp Tests, things should pretty quickly re-align in a way that makes more sense.