Under the hood of Adaptive Training: Analysis in four graphs

Perhaps I am not getting what you are saying, but I think the statement by @Janhein is not quite accurate: I don’t think anyone claimed that “modern training logic” (I assume that refers to the state-of-the-art as practiced by best-of-breed coaches) is obsolete and should be replaced by a black-box algorithm that cannot be explained.

I think TR has distilled the state-of-the-art (as TR sees it at least) into its ML model. We can disagree whether TR actually did use the state-of-the-art or whether at present we have sufficient knowledge to implement what TR tries to implement.

And in principle the algorithm can of course be understood and explained, unless we start a philosophical discussion of what “explained” or “understood” means. Since this is TR’s secret sauce™, I don’t expect that they will share their knowledge or that many people would be able to understand it. But in principle, I believe it can be.

More important, in my mind, is whether AT’s recommendations can be understood by users. And in my experience at least, it has reacted exactly the way I would have expected, with a few exceptions. I haven’t explored the logic systematically or done an analysis like @Janhein, but all the adjustments went in the right direction.

The only issue I had with it was that it was too conservative at times and that it did not know how well I had slept (because at present it can’t). And even that was something I could understand given what AT is: TR has to work for a wide variety of athletes, and erring on the side of caution makes sense.

I remember from one of the early podcasts talking about AT that the internal testers took a while to get their heads around how much the progressions were reined in and how conservative they were compared to the existing plans.

I did Neural Networks/Machine Learning as my final year project at university. That was over twenty years ago so likely now well out of date, so please, please correct me if I’ve got this wrong.

Generally you are trying to find an “optimal” solution, one where, if you deviate from it, you get pulled back to it. One way of thinking about this is a heavy ball rolling across a surface. The potential solutions are pits in that surface; if the ball drops into a pit then gravity will keep it there, since the ball always wants to roll downhill. The problem is that you don’t know whether the pit you are in is the deepest (in this case deeper equates to more optimal), so these systems add random perturbations that can kick the ball out of a pit and send it on its way again.

Not too hard to conceptualise in three dimensions, but NN systems usually work in many more; TR’s AT has twenty or so input parameters. Neural networks also have multiple layers of multiple surfaces, each with subtly different starting conditions/input combinations. It’s a bit like each of those surfaces saying “this is what I think the solution is”, and the next layer using combinations of those intermediate answers to come up with better solutions. Eventually you end up at a “solution”.
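The ball-and-pits picture can be made concrete with a toy one-dimensional example. This is purely illustrative (nothing to do with TR’s actual model): the loss surface below has a shallow pit near x ≈ 1.81 and a deeper pit at x = −1. Plain gradient descent settles in whichever pit is nearest, while random restarts (playing the role of the perturbations described above) let you find the deeper one.

```python
import random

def loss(x):
    # Toy surface: a deep pit at x = -1 (loss 0) and a shallower one near x = 1.81.
    return (x + 1) ** 2 * (x - 2) ** 2 + 0.5 * (x + 1) ** 2

def grad(x, h=1e-6):
    # Numerical slope of the surface at x (central difference).
    return (loss(x + h) - loss(x - h)) / (2 * h)

def descend(x, lr=0.01, steps=2000):
    # The "heavy ball": keep rolling downhill until it settles in a pit.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Plain descent from x = 4 settles in the nearby, shallower pit...
stuck = descend(4.0)            # ends up near x = 1.81

# ...but random restarts (the "perturbations") can find the deeper pit.
random.seed(0)
best = min((descend(random.uniform(-4, 6)) for _ in range(10)), key=loss)
# best ends up near x = -1, the bottom of the deepest pit
```

Real networks do this over many dimensions at once, but the principle is the same: downhill motion plus randomness to avoid committing to the first pit found.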

Reverse engineering back through the layers of nodes (surfaces) is nigh on impossible, since the original inputs have been lost in the manipulation by each layer. Each node may have gone through 30 or 40 thousand iterations to find its answer, not just a few IF/ELSE statements.

Neural networks and their like solve a single problem but don’t then “learn”; they are essentially static from the point they leave their learning phase. Machine Learning makes that more generic and also provides feedback, so that the original solution is constantly updated with each iteration (in this case an iteration being a workout). I.e., it doesn’t just provide an answer but uses that answer to help solve the next question.


I do remember it being said in one of the podcasts that after TrainerRoad applied a progression score to each workout, they realised that the old plans did not progress particularly well with regards to progression levels. This was a big reason the plans were updated prior to the Adaptive Training rollout.

From observation I think the TR team also took the opportunity to take out some intensity as a reaction to some of the criticism. For example, on an MV plan, Wednesday is a recovery ride rather than an endurance ride, and your Sunday sweet spot session is very easy, too. I don’t have enough experience yet to see the impact on my endurance, but FTP-wise their plans are still working for me: my FTP is still trending up. Well, that’s my N=1 anecdata :slight_smile:

Yeah, for HV, Wed and Fri are (in my opinion) too easy and I’ve been swapping them for standard endurance sessions, like 90-120min, keeping it at 70% and below. I made a topic recently and the tone of some suggested I was going to blow up from accumulating additional fatigue on a “HV” plan that only has 8:15 planned in it. I did 12:15 this past week lol

They didn’t just reduce the intensity; the key thing for me was smoothing out the progressions, as there were some notoriously tough week-to-week jumps in the old plans. Bringing in PLs made that super clear. Now the week-to-week increases in workout PLs are much more consistent.


Random people on the forum make those types of comments. Some jokingly say “I submit to our AI Overlords” and similar dystopian statements (that will trigger someone!). However, Nate painted that as a future state in 10-20 years. That is the hype part I ignored. Nate and crew then went through points 1-4, but in their own way, without those graphs, while making similar points (smoother progressions in plans, PLs support adaptations, etc.). That was my takeaway from that podcast, from what seems so long ago now.

I am fairly sure, from what the TR guys have said and my own experience, that there is something in that. They’ve mentioned they look at cadence, heart rate, etc. when categorising pass/failure.

There also seems to be some learning with regards to survey responses vs actual performance. I.e., if your default response is 2-Moderate, it learns that and scales back adaptations when you answer 3-Hard; whereas if you answer 3-Hard more regularly, it doesn’t seem to do that.

I think there may be more complexity to it than that as well: I suspect it takes into account your survey response plus the data from the pass/fail categorisation. There have been some slightly oblique references to that from the TR guys. But obviously, if that’s the case, that’s their main IP, so it’s no surprise they would not give away too much about how it works.
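The calibration idea described above can be sketched as follows. To be clear, this is a hypothetical toy, not TR’s actual logic: scale the adaptation by how a survey answer compares to the athlete’s *usual* answer, rather than by its absolute value. The score table, `base_bump`, and the 0.2 scaling are all made-up numbers.

```python
# Hypothetical survey-calibration sketch (assumed, not TR's actual logic).
SCORES = {"Easy": 1, "Moderate": 2, "Hard": 3, "Very Hard": 4}

def adaptation(response, history, base_bump=0.3):
    """Scale the progression bump by how surprising this answer is
    relative to the athlete's usual survey answers."""
    baseline = sum(SCORES[r] for r in history) / len(history)
    surprise = baseline - SCORES[response]   # positive = easier than usual
    return base_bump + 0.2 * surprise

# A habitual "Moderate" answerer gets reined in when they say "Hard"...
habitual_moderate = adaptation("Hard", ["Moderate"] * 10)   # ~0.1, reduced
# ...while a habitual "Hard" answerer gets the normal bump.
habitual_hard = adaptation("Hard", ["Hard"] * 10)           # 0.3, unchanged
```

The same answer produces different adaptations depending on the athlete’s history, which matches the behaviour described in the posts above.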

Just an individual case, but last week I did a 5.0 anaerobic workout, marked it as easy, and got recommended a stretch 6.0 workout this week (which I did yesterday and marked moderate). My next anaerobic workout in a couple of weeks is getting bumped from the originally scheduled 6.4 to 6.6. So in my case, in this particular energy system, I’m getting about a 0.2 increase when I rate something moderate, and I got a 1.0 bump when I marked it easy. Of course, I’m sure the higher you go with the progression levels, the smaller the jumps would be if you rate something as easy.
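One way the numbers in this anecdote could arise is if the bump scales with the headroom left below the PL ceiling. This is a guessed toy formula, not TR’s; the `0.25`/`0.05` factors and the ceiling of 10 are assumptions chosen to roughly reproduce the jumps reported above.

```python
# Toy headroom-scaled bump model (assumed, not TR's formula).
def next_level(level, rating, max_level=10.0):
    """Bigger bumps for 'easy' ratings, shrinking as the level climbs."""
    factor = {"easy": 0.25, "moderate": 0.05}[rating]
    return level + factor * (max_level - level)

next_level(5.0, "easy")       # 5.0 + 0.25 * 5.0 = 6.25, roughly the 1.0 bump seen
next_level(6.0, "moderate")   # 6.0 + 0.05 * 4.0 = 6.2, roughly the 0.2 bump seen
```

Under a rule like this the jumps automatically shrink near the top: an "easy" rating at level 9 would only add 0.25, consistent with the intuition in the post.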

I do think it’s funny how we analyze this stuff when part of the goal is to take some of the thinking out of scheduling workouts lol


This about sums it up hey? TR just cannot win :grinning_face_with_smiling_eyes:


But I will just point out that Adaptive Training is still in beta, and part of being in beta is that users determine a) whether it works and b) whether it does what it is intended to do … not just UI issues (alpha testing)


IMHO this is where much of the secret sauce lies.

The secret sauce isn’t their method, i.e. whether they use XGBoost, some neural net, or whatever else. It lies in their data, how they defined their loss functions, and the features they built to use in the model. 90% of that is their data, however. None of this can be done without an enormous data set.
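To make the “features” point concrete: raw ride data only becomes useful once domain knowledge turns it into inputs a model can learn from. The features below are hypothetical examples of that kind of engineering, not TR’s actual feature set.

```python
# Hypothetical feature engineering from raw ride data (not TR's features).
def build_features(ride):
    """ride: dict with per-second 'power' samples and the athlete's 'ftp'."""
    power, ftp = ride["power"], ride["ftp"]
    mean_p = sum(power) / len(power)
    return {
        "intensity": mean_p / ftp,                              # fraction of FTP
        "time_above_ftp": sum(1 for w in power if w > ftp),     # seconds over threshold
        "variability": max(power) / mean_p if mean_p else 0.0,  # surge-iness proxy
    }

features = build_features({"power": [150, 250, 300], "ftp": 250})
# intensity just under 1.0, one sample above FTP, peak well above the mean
```

Any competitor could use the same model class; without the data to learn which features predict adaptation, and a loss function that targets the right outcome, the model class alone buys you little.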

So far the exalted mountains of TR data have proven of zero value to me.


TR also probably built some kind of workout classifier that interprets which systems a workout covers and at what levels. This could be a nifty application of ML.

Jon and Pete mentioned something about having to add and test a bunch of new workouts. It would make sense that TR would have to add a bunch of workouts to make sure all levels were covered. The classifier would show what types of workouts need to be added.

This also tracks with what it sounds like they’re trying to do with understanding outside workouts and unstructured rides.
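One plausible shape for such a classifier is bucketing a workout by where its hardest sustained efforts sit relative to FTP. The cutoffs below are coarse, Coggan-style simplified zones, and the whole thing is an assumption: TR’s real version is presumably a learned model over many features, not a handful of rules.

```python
# Assumed rule-based sketch of a workout classifier (not TR's actual model).
def classify(intervals, ftp):
    """intervals: list of (watts, seconds) pairs for the work portions."""
    peak = max(watts / ftp for watts, _ in intervals)
    if peak < 0.76:
        return "Endurance"
    if peak < 0.95:
        return "Sweet Spot"
    if peak < 1.06:
        return "Threshold"
    if peak < 1.21:
        return "VO2 Max"
    return "Anaerobic"

classify([(220, 1200), (230, 1200)], ftp=250)   # 2x20min at ~90% -> "Sweet Spot"
```

Run over the whole library, something like this would also expose the coverage gaps mentioned above: any system/level combination with no matching workouts is a hole to fill.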

How exactly would you expect it to work for you?

  • AFAIK, for nearly one year now, you have been following a custom training program via individual coaching with prescribed workouts and performance review entirely outside the TR universe.

  • You may be running some workouts from the TR app, and logging all of your non-TR rides/workouts on your calendar. But since you aren’t attempting to follow any TR plan, I don’t know how you’d expect AT to work for you.


It’s why I’ve said you either buy into TR plans or not. The TR plans and Plan Builder are acceptable or preferred by a lot of people, but there is no plan B. Going back in time to February 2020, simply switching to a FasCat off-the-shelf plan for 4 months revealed that changing intensity distribution was a better approach to increasing my fitness versus TR’s take on SSB. Maybe someday TR will try to assess an athlete’s strengths/weaknesses and offer a more custom plan. Right now it’s variations on progressing through base/build/specialty.


To be clear then, you are judging the current TR offering (revised training plans + AT beta) without actually using it for any length of time, but based on your past experience with the old plans without AT, and in light of your new experiences with other training resources.

I’m all for people training in any and all ways that work for them. But your comment seems a bit disingenuous to me when you haven’t even tried to use it in its present form.

“So far the exalted mountains of TR data have proven of zero value to me.”

Again, fine to claim and choose however you like but some context seems worthwhile here.


So be it.

A large data set just means you have a large data set. It doesn’t mean that you understand that data or see the patterns that you should, and it often leads you astray (if users are getting faster by doing Sufferfest workouts in TR, you have to make sure that you pick that up). Hence the saying: there are lies, damned lies, and statistics. Also, virtually every user syncs to Strava. I bet if I had, say, Zwift’s user base, I could access Strava, spot the users who have “Bald Knob”, match across other workouts with the same name, and quickly “have” TR’s dataset. Strava has an even bigger dataset.

Also, this was always the explanation for TR’s progression rate, before they changed it and introduced AT.

Data isn’t the answer, and if it does contain the answer, you need to be sure the answer matches the question.


That is true for literally everything in science. Great machine learning model? Well, that doesn’t mean anything if you don’t know how to use it. Great microscope? Well, that doesn’t mean anything if you don’t know how to use it.

Also, I didn’t say it was 100% data, did I? If they do not understand their data, they are not using the correct loss functions nor are they building the correct features.

The point about Strava is irrelevant. Moreover, most people who link TR to Strava also upload their Strava rides to TR. As such, TR has Strava rides and Zwift rides as well.

I don’t even know what you are talking about here. I responded to someone saying the secret sauce lies in some Taylor expansion. I disagree.

Yes, the answer should answer the question.