Under the hood of Adaptive Training: Analysis in four graphs

With the introduction of the Adaptive Training (AT) beta, questions have been raised about its underlying algorithms. One recurring response has been to assume AT is a black box controlled by artificial intelligence:

It must be nice to be an AI and not have to explain yourself. Just spit out a result and produce a “new baseline. Trust it. Bleep!” But nobody does, yet. It’s going to take TR a long time to wean us off the explanations everybody is asking for.

It feels counter-intuitive, though, that modern training logic is suddenly obsolete and replaced by a black-box AI that cannot be explained. While acknowledging that this is a beta release with some known bugs, it is still interesting to explore the underlying logic. Below is a best-effort attempt in four graphs.

  1. Progression Levels
    The TR team added Progression Level (PL) scores to existing and new workouts. Note that these scores do not change with FTP and are a fixed workout parameter, similar to TSS and IF. As TR plans were historically based on gradually increasing TSS over time, it is reasonable to expect a correlation between PL and TSS. The graph below indeed shows a strong linear correlation between the two parameters for sweet spot (SS) and threshold (TH) workouts. PL is not the same as TSS, though, as it is also used to scale workouts appropriately across zones and workout durations.

  2. Adaptations
    When plotting the progression levels recommended by AT as a function of time (taken from the calendar), another linear correlation is visible. AT is programmed to increase PL (and hence TSS) linearly over time for the SS and TH zones. Missing several workouts for a specific zone leads to a step down (orange arrow); hitting a breakthrough workout for a specific zone leads to a step up. Note that, contrary to the previous graph, this correlation is likely user-dependent: one would expect a steeper progression slope for users with a lower-than-average W/kg, and vice versa.

  3. Effect of updating FTP
    This is where the beta release looks pretty buggy, as changing FTP sometimes leads to a full reset of all PLs. But when it does work, it looks like the graph below. Starting from an FTP of 250W – and having successfully completed workouts at this PL – adjustments were made in both directions. When FTP is lowered, the SS PL does not change. This makes sense given the history of completed workouts at 250W FTP. Although the SS PL does not change, the same workouts are performed at lower average power and kJ and are easier to complete, which was the intent of reducing FTP in this situation.
    When FTP is increased, the SS PL is adjusted downwards, again along a linear slope. The slope seems to be chosen such that the total energy (kJ) of a SS workout at the recommended PL is roughly equal before and after the FTP increase. In the example below, the SS workout Newcomb (60 min, 65 TSS, 0.81 IF, PL 3.1) at 250W FTP (640 kJ) is replaced by Mount Field (60 min, 53 TSS, 0.73 IF, PL 1.6) at 260W FTP (637 kJ).

  4. The link between PL and nominal FTP
    Based on the previous graph one can also visualize how changes in SS progression levels and FTP are correlated. The mid-point of the PL scale (1-10) is 5.5. Let’s assume a user who starts at this mid-point with an FTP of 250W, and who over time improves the SS PL to a score of 9.1. As seen in the graph, this equates to a nominal FTP of 110%, or 275W. Updating FTP from 250W to 275W will trigger the AT algorithm to reduce the SS PL back to the midpoint of 5.5.
    In this example an increase in SS PL of +1 translates to +2.8% FTP, which is a surprisingly steep slope. But from the second graph above one can see that progression runs at about +1 SS PL per two weeks; extrapolating gives +3 SS PL over a 6-week plan. That equals an 8.4% improvement in FTP, which is close to the 7.9% average gain per plan reported by users (link). It means users on average should be able to successfully complete several plans without changing FTP, and not changing FTP after every plan could actually be beneficial, as it leads to a larger variety of workouts. The steep curve also makes it easier to accommodate users with high PL variability across zones.
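Pulling the numbers from sections 3 and 4 together, the implied relationship can be sketched in a few lines of Python. The 2.8%-per-PL slope, the 5.5 midpoint, and the two workouts' kJ figures are taken from the graphs above; the function name and constants are my own labels, not anything TR has published.

```python
# Sketch of the implied PL-to-nominal-FTP mapping, using numbers estimated
# from the graphs above. Names and structure are hypothetical, not TR's.

PL_MIDPOINT = 5.5    # mid-point of the 1-10 PL scale
PCT_PER_PL = 0.028   # +1 SS PL ~ +2.8% FTP (slope estimated from the graph)

def nominal_ftp(base_ftp: float, ss_pl: float) -> float:
    """Nominal FTP implied by a sweet spot PL, relative to the 5.5 midpoint."""
    return base_ftp * (1 + PCT_PER_PL * (ss_pl - PL_MIDPOINT))

# A user at 250W who progresses to SS PL 9.1 has a nominal FTP of ~275W (110%):
print(round(nominal_ftp(250, 9.1)))  # -> 275

# Energy check for the FTP-increase example: Newcomb at 250W FTP and its
# replacement Mount Field at 260W FTP have nearly identical total energy.
newcomb_kj, mount_field_kj = 640, 637
print(abs(newcomb_kj - mount_field_kj) / newcomb_kj < 0.01)  # -> True
```

If the energy-matching hypothesis holds, the same check should work for other workout swaps observed after an FTP change.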

In conclusion, the linear logic used to assign progression levels to SS and TH workouts, and to adapt those levels over time, seems in line with TR’s previous plan implementation, while providing more flexibility and a more gradual increase in training load over the course of a plan.
Let me know your thoughts.


Why is your profile hidden?


Why is yours?

It is? Weird, as I did not set it up that way, and thanks for letting me know. I was genuinely interested in why OP was posting with a hidden profile, so we cannot look back at posting history.

It is. Can you see mine?

Yours and his both say the same when I click on them- that the profile is hidden.

Not sure if there is a setting we have control over in our profiles?

Interesting, and thanks. Will look at it. I don’t mind people looking at my post history but I do post quite a bit. Take care

I get a flavor of that from the forum.

However, during the AT intro podcast (February?) I basically heard Nate/Jonathan/et al say what you wrote. Without going back and re-listening, after ignoring the hype I was left thinking something like:

  • TR has new off-the-shelf plans with less intensity than previous plans, and smoother progressions
  • each plan has specific goals in terms of developing fitness across the Coggan Power Training Zones
  • AT will use completed workouts and athlete feedback to adjust workout difficulty
  • AT also uses coaching logic coupled with the goals of the plan in an attempt to keep fitness progression on a good track

All of that is consistent with your charts, and the fundamentals of progressive overload that form one of the foundations of TR’s approach.


Yes, and I like all that.

But it’s not artificial intelligence.

Which doesn’t mean there’s no AI in Adaptive Training. But this isn’t it.


I don’t think TR uses the AI language; their term is Machine Learning. I don’t know how to define either:)

It’s just a ton of IF/ELSE statements…



Great reverse engineering!

The way I see it, there are a number of different levels that machine learning could be used here. One classic (and relatively simple) application is “classification problems”. I have no doubt that machine learning is being used to classify a workout’s success or failure, and that it was/is also used to assign progression levels to workouts.

Whether it’s used to change a plan’s ramp rate or to pick the exact adaptations or to decide how to adjust PLs after an FTP test - I think these are higher level problems, and I don’t expect TR to reveal their proprietary work in any of these areas. I tend to think that these parts could in future involve a combination of ML and custom logic to bring ML outcomes together in the desired way. Right now, perhaps it’s more logic based and will evolve over time.

For example, a plan says I should have an “achievable” SS workout today. It’s fairly simple logic to pick one from the workout library based on my PLs. No real need for ML there - but perhaps in future the exact structure and ramp rate of the plan will change based on the user, and that’s the kind of thing that seems to be under discussion here. I don’t know whether that’s in place now, but if not, it doesn’t mean that ML isn’t being used at all. Deciding what my current PLs are has certainly used ML - by classifying my workout outcomes appropriately and probably also feeding my survey responses into a model.
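To make the "classification problem" idea concrete, here is a deliberately naive rule-based sketch in Python. The features (completion fraction, survey answer) and thresholds are my own guesses for illustration; TR's actual system presumably learns such boundaries from labeled workout data rather than using hand-written rules like these.

```python
# Naive rule-based stand-in for the workout outcome classifier discussed above.
# Feature names and thresholds are illustrative guesses, not TR's actual logic;
# a real ML classifier would learn these boundaries from training data.

def classify_outcome(completion: float, survey: str) -> str:
    """Classify a workout from the fraction completed and the survey answer."""
    hard_answers = {"very hard", "all out"}
    if completion < 0.75:
        return "struggle"                  # large chunk of the workout missed
    if completion < 0.95 and survey in hard_answers:
        return "struggle"                  # backpedaled and rated it very hard
    return "success"

print(classify_outcome(0.99, "moderate"))  # -> success
print(classify_outcome(0.60, "all out"))   # -> struggle
```

The point is only that "did this workout succeed?" is a well-posed classification task once you pick features; which features and boundaries TR actually uses is the proprietary part.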


Really helpful. Thank you for the great info.

Thanks. Writing on how humans learn, including how human learning differs from machine learning, is fascinating to me. It is way outside the realm of what I do as a civil engineer. Do you have books you would recommend on human or machine learning for a general audience?

I have to say I’m no expert! Nothing specific to recommend. I was just posting the way I think it may work, and I could be totally off the mark. I have a computer science degree but it’s fairly dated now, things have moved on a lot in the last decade.

One question that often comes up is - how do you know WHY an AI model came up with a certain answer? If you google “AI transparency” there’s a lot of interesting stuff around that, and I think Adaptive Training will bring up some of these questions for users.

There have been cases of models analysing lung scans for pneumonia or COVID, where it turned out the AI was making inferences based on the font of the text on the scan (ie: which hospital it came from) rather than the lung scan itself! Or the position the patient was lying when the scan was taken, etc (if taken lying down, it’s more likely from a seriously ill patient). Definitely interesting stuff.


That is interesting, especially the simple underlying progression model. I suppose what is not captured is the speed at which someone moves through those levels, and the effect plan volume has. Do threshold progressions have the same gradient on low-volume and high-volume plans?

What I find interesting is what will happen when someone hits a performance plateau. Will you end up in an endless loop of the same workouts, that you then fail (drop down a little, progress, stall, drop down, etc)? Or will there be a recommendation to “switch it up” and push eg vo2max progression instead?

I think the next step would be personalised recommendations, but at the moment a lot of that is outside the data TR is capturing. For example, if someone is struggling with long workouts, TR could recommend doing shorter ones, and perhaps adding an extra workout day.


you do realize the underlying progression model has always existed in the plans, right?

For example here is an analysis of SSB MV 1 that I did a couple years ago:

It was a first-cut analysis of the underlying progressions, focused on the number of minutes in zone per workout and per week. Notice that my analysis was on time-in-zone and not on TSS or IF.

Anyways, scan down a day (column), and also the totals per week. The progressions pop out when you look at time-in-zone, and this approach works well for analyzing progressions of low aerobic (zone2) and high aerobic (SS) work.
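The time-in-zone tally described above is straightforward to compute from a workout's interval structure. A minimal sketch, assuming intervals given as (minutes, fraction of FTP); the zone boundaries are rough approximations of the Coggan zones and the sample workout is invented:

```python
# Minimal time-in-zone tally from a workout's interval structure.
# Zone boundaries loosely follow the Coggan zones; the workout is invented.

from collections import defaultdict

# (upper bound as fraction of FTP, zone label), checked in order
ZONES = [(0.55, "Z1"), (0.75, "Z2"), (0.90, "Z3/SS"), (1.05, "Z4"), (99.0, "Z5+")]

def time_in_zone(intervals):
    """intervals: list of (minutes, fraction_of_ftp) -> minutes per zone."""
    totals = defaultdict(float)
    for minutes, pct in intervals:
        zone = next(name for upper, name in ZONES if pct <= upper)
        totals[zone] += minutes
    return dict(totals)

# Invented SS workout: warm-up, 3 x 12 min at 88% with easy recoveries, cool-down
workout = [(10, 0.50)] + [(12, 0.88), (3, 0.40)] * 3 + [(5, 0.50)]
print(time_in_zone(workout))  # -> {'Z1': 24.0, 'Z3/SS': 36.0}
```

Summing these tallies per week is then just a loop over the plan's workouts.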


First of all, thanks for the great work. Please don’t take my post the wrong way, it’ll seem nitpicky and overly critical, but I’m in science mode right now.

I’m not quite sure where you get this from. Think of it in terms of chess: computers can play chess much better than any human these days, and many of the best engines use ML (e.g. AlphaZero). That doesn’t mean ML will run counter to modern training logic; in many or most cases the two will agree — unless certain criteria lie outside the purview of TR’s model.

1. Progression levels

Did you graph only versions of similar workouts here? If yes, it isn’t very surprising. I know you did not graph VO2max workouts, but I think PLs become useful when you compare unlike workouts, e.g. 15-15s vs. something like 3 x 3 x 3 minutes at 115-120% of FTP. Is the determining factor mostly time in zone?

2. Adaptations

I agree with you that heuristically you would expect the slope to depend on the individual’s W/kg and perhaps even training history (are you close to your historical maximum and are expected to only make small gains?). I don’t think this is resolved in your graphs.

3. Effect of updating the FTP

According to TR support, they still use a global model to adapt your PLs after updating your FTP, i. e. they do not take your personal training history into account. I like that you figured out that total energy seems to be a key here.

I think for small changes of FTP, i. e. those that can be mapped onto PLs, your data looks convincing — with the obvious caveat that the slope will likely depend on the individual.

Yes, it is called a Taylor expansion: you can approximate any sufficiently nice function linearly in a small enough vicinity, so if you have a few data points that lie close together, linear regression often works very well. The slopes will likely differ, though, between individuals with very different FTPs, different training histories, and different genders.
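For reference, the first-order Taylor expansion being invoked here is standard calculus, not anything TR-specific:

```latex
f(x) \approx f(a) + f'(a)\,(x - a) \quad \text{for } x \text{ near } a
```

Applied here: whatever the true PL-to-FTP relationship is, it will look linear over a small enough range of FTP changes, which is why the linear fits above can work locally without the global relationship being linear.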

IMHO this is where much of the secret sauce lies.


Like @bbarrera wrote, this is how structured training works: progressive overload. If you do it right, you will likely fail some workouts or at least be just at the cusp of failing workouts in some phases.

Simply put, TR replaced TSS with Progression Levels and is dynamically adapting progression levels to the user. TSS doesn’t distinguish between time spent in different zones and has other problems (e.g. that 4x10 minutes at FTP yields the same TSS as 1x40 minutes).

Consensus is that you need to switch it up, i. e. you need to change the stimulus.

To some degree it is already doing that. E. g. if you fail a workout or if you find workouts easy (on the questionnaire), it’ll adapt your workout schedule. What it doesn’t do is adapt your schedule around constraints (e. g. if you have to move your Thursday workout to Friday, should you lower the intensity?). Or it doesn’t take your sleep into account. This might eventually be on the horizon, but the problem is whether you can actually make use of the additional data.

Was it the piece where @Nate_Pearson said he expects that in the far future (>10 years) workout selection will be left to algorithms and coaches will focus on other things?

If that is what you refer to, then I interpreted @Nate_Pearson’s words differently, namely that coaches will just be fine, but that something like TR’s algorithms will be a tool in their toolbox. Designing training plans from scratch would be like re-inventing the wheel.

Not the hype part, rather the substance of what AT is now.
