Adaptive Training Closed Beta Update

Sure, and I am all for them listening, learning & responding. We just need to know how we need to adapt compared to the original guidance (and knowing what they are changing would be icing, but not necessary).


I thought the “faces of nate” idea that @mcneese.chad came up with was brilliant. I think we’re all saying we wish it was that simple. And it should be.


I’ve got a slightly different take. My gut feeling is that post-workout you can send one of two signals:

  1. Easy/medium/hard if levels are generally correct, and you are giving a normal signal to AT regarding how training is coming along.
  2. The red alert button to tell TR we have a serious problem with levels. In other words, my level is completely wrong, either too hard or too easy.

Right, it may have been ambiguous, but my point was that we DID get something in the first place.

The confusion from that system is clear and obvious to anyone who has followed this for more than a week.


So, to a larger point, how might this confusion affect the beta's progress? If people are actively working outside the designed approach, whether from a lack of confidence in the system or from not getting the results they anticipated, and are responding in ways TR didn't anticipate ('gaming' ratings based on what they think they should be seeing, etc.), what risk does that introduce? Rather than being refined, could the system be degraded when that "false" data is interpreted in the ML environment? I'm not an ML/tech guy. However, if I were a TR senior exec, I'd be all over this internally to understand the implications, and I'd be aggressively communicating with the AT beta user base. I think the implications could go beyond the AT beta, with ML also being part of the production TrainNow capability.

Or I may be totally wrong and it’s all part of the larger plan. But I guess I’d want to explicitly say that to my tester cohort and anyone who is paying attention to what TR is doing.


I have not read it yet, but we have an update on the survey:


Yeah, it’s a +0.0. I just wondered if maybe given I was starting from 1.0 it might be at least worth a point something given it was a breakthrough rated session. Oh well, no big deal… it’s given me a good idea where I am anyway.


Did you get the “struggle” survey? Or the “looks like you cut it short” thing?

Was cut short, then picked stress / motivation. Might have had a different result if I’d said time, but I recall it doing much the same on an otherwise successful session that I’d cut short for that reason.

Even though I've filled some out, I'm not sure how many different surveys can be presented to a participant. It would be good to see the surveys themselves in the article that was updated.

This and the other AT threads are really interesting discussions. How quickly we can find flaws. No wonder companies do customer-focussed beta testing.

However, this particular discussion about survey responses overlooks two things.

First, there is some objective information available to the machine about how the workout actually was for me. That is the power/HR graphs. Were the targets not met? Were there breaks? Did the power trace become ragged in the later intervals? Did HR get up to near the maximum seen in the past month or so? I’m no sports scientist, but these all seem to me to be indications that I struggled. I don’t know how clever the machine is at reading these signs.
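The signs listed above could, in principle, be checked mechanically. Below is a purely illustrative sketch of that kind of heuristic, written against made-up per-second power/HR traces. Every function name, threshold, and data shape here is my own assumption, not anything TR has described:

```python
# Hypothetical sketch: flagging "struggle" indicators from a workout's
# power and HR traces. Thresholds (0.95, 10 W, 0.97, etc.) are guesses.

def struggle_signals(target_w, actual_w, hr, recent_max_hr):
    """Return a list of struggle indicators from per-second traces."""
    signals = []

    # Targets not met: average power noticeably below the average target.
    if sum(actual_w) / len(actual_w) < 0.95 * (sum(target_w) / len(target_w)):
        signals.append("under_target")

    # Breaks: any stretch of near-zero power suggests a backpedal or pause.
    if any(w < 10 for w in actual_w):
        signals.append("backpedal_or_pause")

    # Ragged power late in the workout: variance in the final third is
    # much higher than in the first third.
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    third = len(actual_w) // 3
    if variance(actual_w[-third:]) > 2 * variance(actual_w[:third]):
        signals.append("ragged_finish")

    # HR close to the maximum seen in recent weeks.
    if max(hr) > 0.97 * recent_max_hr:
        signals.append("near_max_hr")

    return signals
```

Whether the machine actually reads these signs, and how cleverly, is exactly the open question.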

Second, AT is meant to be a personalisation machine. We are not told exactly what that means. There are two possibilities. [1] Completing this workout at that PL with a rating of “moderate” means that any person’s PL goes up to that workout level and their future workout levels are increased. [2] Completing this workout at that PL with a rating of “moderate” means that my PL goes up to that workout level and my future workout levels are increased; for @mcneese.chad, it would take a rating of “hard” for this change to occur. In other words, are the responses to the survey rated individually or are they rated uniformly for everyone?

If the responses are rated individually, then the precise interpretation of the terms in the survey is less important than consistency in responding. If the responses are rated uniformly, then we all have to mean the same thing by “hard”, “moderate” and the like.


Thanks for sharing @mcneese.chad. I actually ignored that thread because I thought it was just another post with a rehash of this topic.

I'd like to see some of the guesswork regarding surveys taken out of the AT process. I would propose that AT make a default assumption that some PL progression should happen, and only launch the survey if there is a superpass or a struggle. The user could also have the option of prompting a survey if workout performance doesn't tell the full story. Otherwise, PL progression should be automated…


Then how would AT know that you are on the verge of failing or digging yourself into a hole?

For example, you have a workout that is achievable, so this should feel moderate or maybe hard, but you complete the workout (barely) and rate it very hard. That most likely indicates you are fatigued, not fueling, or not sleeping well, and giving you a harder next workout probably isn't advisable.

A follow-up question on this… Will the decay rate fix address it retroactively, or just moving forward? Today, I observed another plunge that defies logic (or even what I had previously observed): it didn't revert to even the last highest completed workout in that retroactive rolling 10-14 day window, but fell well below a number of successfully completed workouts. That seems especially illogical in the endurance energy system.


If they are smart (they are…), the algos will see our individual biases and start to course-correct over time. The weight of an answer will need to be modified based on the variability of answers over time.

Said another way - someone who never clicks 5 all out should eventually have 4 become their new 5.
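That "your 4 becomes your 5" idea could be done with a simple percentile calibration against a rider's own answer history. To be clear, this is pure speculation on my part, not anything TR has said they do; the function name and approach are my own assumptions:

```python
# Hypothetical sketch of per-user rating calibration: interpret a 1-5 survey
# answer relative to the rider's own history, not on an absolute scale.

def calibrated_rating(history, new_rating):
    """Map a 1-5 survey answer to a 0.0-1.0 "effective hardness".

    A rider who never answers 5 will have their 4s land near the top
    of their personal scale.
    """
    if not history:
        # No history yet: fall back to the raw 1-5 scale.
        return (new_rating - 1) / 4
    at_or_below = sum(1 for r in history if r <= new_rating)
    return at_or_below / len(history)
```

So a rider whose past answers are mostly 2s and 3s who reports a 4 would score near 1.0, while a rider who regularly answers 5 would score the same 4 much lower.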

I need to confirm with the team, but I believe this is just a 'moving forward' fix. After the update is released, and as you complete more workouts and Ramp Tests, things should pretty quickly re-align in a way that makes more sense.


Something I am still kinda confused on is, how much does the level really matter? If you have your FTP set correctly, and you do a .95 IF workout but you level down and end up with a .94 IF workout, do you really think you are losing fitness? What if the workout AFTER that .94 ends up being a .96 with a lower level? Do you still feel you are losing out on fitness gains?

I definitely agree that the survey needs to be described better, and so should what the levels mean. I don’t really care what level it gives me. If I am struggling on a VO2 workout, but get another VO2 workout that is “easier” I still expect to struggle as long as the FTP setting is the same.
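For what it's worth, the standard Training Stress Score formula (TSS = hours × IF² × 100) backs up the point about .94 vs .95: a 0.01 difference in IF barely moves the training stress of an hour-long workout. A quick worked example:

```python
# Standard Training Stress Score formula, used here only to sanity-check
# how much a 0.01 IF difference actually matters.

def tss(hours, intensity_factor):
    """Training stress for a steady effort at the given intensity factor."""
    return hours * intensity_factor ** 2 * 100

# A 1-hour workout at IF 0.95 vs IF 0.94:
# ~90.25 TSS vs ~88.36 TSS, a difference of under 2 TSS points.
```

In other words, the difference between those two workouts is roughly 2% of the training stress, which supports the view that the level number itself matters far less than having FTP set correctly.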


Maybe this has been addressed, but I wonder what AT does when you adjust the workout intensity. I did a VO2 max workout that was too easy. I ended up increasing the intensity by +5%. Should I rate how the workout felt at that +5%, or how it felt at its original intensity?

How the workout you completed felt.

That includes any modifications you made via Workout Intensity adjustment, or exceeding / missing power targets in Res or Std modes.
