Under the hood of Adaptive Training: Analysis in four graphs

For sure! This had to be the first step, I would think. That way they can go back, assign the right progression level to all of the workouts, and analyze failure rates, how PLs were associated with FTP increases, and so on.

My apologies, I thought I saw somebody making the “TR is right because they have the data to prove it” argument, and I should have taken more time to read what you were actually saying.

1 Like

I see. I agree with that, and while I am excited because I think I can see where the puck is going, AT is v1.0. Think Windows 95, iPhone OS 1.0 or Mac OS 10.1 — early times, but you can recognize that the fundamental ideas are good.

1 Like

I am entering the last leg of my first season on the new plans and mostly on AT. It definitely takes a while to get to know the system and build trust, and IMHO that is an essential component that simply takes time: you need to understand how AT reacts to certain inputs and then see whether those reactions work out in the end. For example, I finished 7 out of 10 VO2max intervals today on a level 7.0 workout. I earned a PL credit of 0.1. Is that deserved? I don’t know. Time will tell.

Those data sets are not the same. Strava, for example, does not know whether you are adhering to a plan or which plan you are adhering to. What is your compliance rate? Strava doesn’t know that. I think with a lot of effort they might be able to reverse engineer some of that, but I don’t think that is feasible in the long run. Likewise, Zwift and Peloton also have big datasets, but they attract a different type of user. So IMHO TR’s data set is unique and probably the largest in the world.

1 Like

Edit: this text is in response to your comment that you went to a coach and changed your approach outside TR to make your recent improvements.

A slight sidebar is needed to make a point. But generally, you may be on to one reason coaches will not be put out of business by AT, a claim I hear stated here and there. The same goes for the complaints being lofted, which are likely just recognition that coaches will stay put, though their job will evolve for cases like yours and for working within the new corral of AT.

For reference, ML and “Adaptive” are not new ideas: the traffic industry has used ML and the term Adaptive to make real-time adjustments to traffic signals for decades. And by the way, timing plans at signals get changed too. The similarity between those systems and the conversations on this thread makes me smile, a lot. Though adaptive traffic systems have been operational for decades, Traffic Engineers are still needed to create timing patterns and to set up the systems. There are no fewer Traffic Engineers around; in fact demand is high and there are more of them, because there are more opportunities for improvement and more demand to solve new problems. If what you have found is true, I think where TR AT might consider evolving is to support coaches in defining some of the macro aspects of the plan itself, with the AT adaptations and the 3,000-workout library underneath. That seems like tremendous value for coaches and riders.

2 Likes

As somebody who provides large datasets of patient/population demographics, I don’t think I would have any trouble getting most of this done quite quickly if I had access to Strava’s dataset. I don’t, but people like Xert and intervals.icu do. Since TR recommends a ramp test every 4–6 weeks, you have some great anchors before you even start analyzing the data, and most people don’t edit the name of their workout, so deviation and noise could quickly be filtered out. I’m not saying it would be easy, and it would take a few passes to clean the data, but I don’t think it would be as hard as suggested. Of course AT makes that a lot harder, but we are talking pre-AT (since we are talking about TR’s large dataset allowing them to implement AT).

But this is exactly my point: don’t get blinded by the fact that you have a large dataset. First off, where is your control group? Second, your data is incomplete. If you have two users and one is getting faster than the other, is it because they are complying with the plan, or because they are running, skiing, or walking 4,000 miles a day to get to the gym … data which somebody like intervals.icu might have, but which is missing from your calendar in TR.

IMHO, third: Strava and Garmin are larger data sources, probably contain the TR data as a subset, and, like Nate said in the announcement video, contain things like steps, which have a big influence on fitness. And like I keep saying, don’t get blinded by the size of your dataset; unique isn’t the same as complete, and it doesn’t mean you have all (or enough) of the unique data.

But they’re not the ones of interest. It’s people like The Breakaway (Review: The Breakaway App: A Glimpse of Future Zwift? | Zwift Insider), and I am not commenting on their app, who are doing (or attempting) analysis of training data (like Xert), who are not constrained by their own dataset, and who use Strava, Garmin, etc. to supply their engines.

After all, TR is a small company that specialises in making you faster, not in data/statistical analytics, so I think it’s wrong not to question either.

1 Like

First of all, good to hear the perspective of an expert. I’m not one; I’ve just been close to people who are for quite a while. So take what I write with a grain of salt. Overall, you are right that other data sets could be mined to extract the same type of information.

Oh I agree that it can be done, and that you would be able to extract some of the information rather easily. But I think the effort required to extract all the relevant information TR has is more substantial, so I am a bit skeptical about how easy it is. For example, one core point, I think, is that you need to figure out what kind of plan the athlete is following to deduce what kind of improvements they are after — and therefore, by what metrics you should judge the success of a training plan.

The second thing is that Strava is only observing whereas TR can intervene. AFAIK what I describe is not implemented, but they could roll out two or more competing strategies to subsets of users and benchmark them against each other. That is, in principle TR’s data is not static, and its user base could be harnessed to create active feedback loops that refine its algorithms.
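To illustrate what such a feedback loop could look like in the crudest form, here is a minimal sketch of deterministic user bucketing. The strategy names and the bucketing scheme are pure invention on my part, nothing TR has described:

```python
import hashlib

# Hypothetical competing adaptation strategies (names invented for illustration).
STRATEGIES = ["conservative_ramp", "aggressive_ramp"]

def assign_strategy(user_id: str) -> str:
    """Deterministically bucket a user into one of the competing strategies.

    Hashing the user id keeps the assignment stable across sessions, so each
    user keeps seeing the same variant for the duration of the experiment.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return STRATEGIES[int(digest, 16) % len(STRATEGIES)]

# Later you would compare an outcome metric (e.g. FTP change over a block)
# between the buckets to decide which strategy to keep.
print(assign_strategy("user-12345"))
```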

That is true for TR, the lack of certain data is a blind spot. But at present none of the platforms I know of easily logs information I’d consider relevant: sleep, nutrition, menstrual cycle for women, etc. I know that TP can log some of this info daily, but their app looks like a website designed by someone in 1997. And it doesn’t hook into e.g. Apple Health to extract that info automatically.

You are right that Strava is more complete than TR in this respect, something that is highly relevant for duathletes or triathletes, for example, but for people who are primarily cyclists, I think this is less of an issue.

I don’t know Zwift very well and I am not keeping up with the day-to-day improvements they are adding. However, just judging from how the people I ride with use Zwift, none of them do what I would consider structured training with Zwift. Some use Zwift as part of their training (e. g. by doing specific rides or races), but I don’t think any of them is following a training plan. Ditto for Peloton. Both appeal to different audiences, so I reckon the data they do have is very different.

Perhaps they are not big data experts yet, but they could become experts in their niche. I know quite a few people in the Big Data/ML/AI realm, and when you are talking about very specialized applications, from pro sports to processing of satellite imagery, you don’t need a very big team. TR as a company is certainly big enough. The problem is mostly your ability to attract top people. But one way to attract them is closeness to cutting-edge research. (That was the motivation of a former student of mine.)

Just to be clear, The Breakaway app has absolutely nothing to do with Zwift. Absolutely nothing: it doesn’t link to Zwift, doesn’t get its data from Zwift, and doesn’t offer Zwift workouts. The article about the app was just hosted on the Zwift Insider web site, which is independent from Zwift but often talks about AI. So I think you missed the point I was making when you started talking about Zwift and structured training, something I wasn’t talking about and never mentioned (nor did I mention Zwift).

Have a good one, I’m off

1 Like

I think what most people misunderstand (because TR sells it to them like that) is that the neural network is NOT what picks workouts for them. I might be wrong, but from my experience with machine learning, that’s not how these tools work. You need lots of well-defined input for any “question” you ask.

TR uses some machine learning for classification problems (workout levels, fail/pass, maybe soon classifying unstructured workouts), but other than that, it’s probably just some heuristics (if->then) that someone manually put in and tweaked a bit.
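As a caricature of what I mean by if->then heuristics, something along these lines; every threshold and step size below is made up by me, not taken from TR:

```python
def next_progression_level(current_pl: float, passed: bool, survey: str) -> float:
    """Toy hand-tuned rule for nudging a progression level after a workout.

    'survey' stands for the post-workout difficulty rating. The step sizes
    are invented for illustration and are not TR's actual numbers.
    """
    if not passed:
        return max(1.0, current_pl - 0.5)   # struggled: step back down
    if survey in ("easy", "moderate"):
        return current_pl + 0.3             # comfortable pass: bigger bump
    if survey == "hard":
        return current_pl + 0.1             # barely passed: small bump
    return current_pl                       # "all out": hold steady
```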

Of course they “learned” about attainable TSS ramp rates over the past years, but that’s most likely not ML.

OP is doing us all a great favor by experimenting with his data to find exactly these heuristics that pick your next workout.

There is no magic black box. AT does not “learn” the individual athlete.

The first real challenge will be the categorization of unstructured workouts. That’s actually a hard problem and I’m curious how they will figure it out. We can see from the not-so-great fail/pass algorithm that it might take a long time before this works well.

3 Likes

It’s really genius by TR.

Make everyone believe an artificial intelligence picks your workouts when in fact it does not really. Because when you use some ML to set up your algorithm, you can phrase it like that, without lying.

I see tons of posts where users have the wildest speculations about what AT picked up on to give them a certain workout. It’s funny and sad at the same time.

1 Like

TR does not oversell this; this is how machine learning algorithms work.

With machine learning there is a computationally expensive learning phase, at the end of which you receive a set of parameters for the algorithm. Simply put, you pick a suitable algorithm that has lots of free parameters. Then you tell the algorithm what to optimize for and throw training data at it. Sounds easy, but it is not, because you need to, e.g., choose the right algorithm type for the problem, choose the right thing to optimize for, and then make sure you don’t have any inherent bias in your sample set. The last bit in particular is a lot harder than it sounds.

The second stage, applying the trained model, is simpler, because you “just” take the parameters you have obtained from the first step. Since this still involves a lot of matrix multiplications and such, it benefits from specialized hardware; all modern smartphone chips include hardware to accelerate ML workloads these days.
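As a deliberately generic sketch of those two stages (a toy pass/fail classifier in scikit-learn; the features, the tiny data set and the choice of model are my assumptions, not anything TR has confirmed):

```python
import numpy as np
from joblib import dump, load
from sklearn.neural_network import MLPClassifier

# --- Stage 1: expensive training, done once on big iron ---
# Each row: [workout level, rider progression level, fraction of intervals completed]
X_train = np.array([[5.0, 5.2, 1.00],
                    [7.0, 5.5, 0.70],
                    [6.0, 6.1, 0.95],
                    [8.0, 6.0, 0.50]])
y_train = np.array([1, 0, 1, 0])  # 1 = pass, 0 = fail (toy labels)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
dump(model, "pass_fail_model.joblib")   # ship the learned parameters

# --- Stage 2: cheap application, done per ride ---
model = load("pass_fail_model.joblib")
print(model.predict([[7.0, 6.8, 0.70]]))  # classify a new ride as pass/fail
```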

A lot of people use the buzzwords AI and machine learning (and formerly Big Data) interchangeably, even though they can mean subtly different things to experts. For sure, ML algorithms will not lead to an AI in the sense that non-experts think of it.

ML has become quite standard; e.g. if you have an iPhone or iPad, there are APIs for all of that which make it easier. You don’t need to be an ML expert to train an algorithm; many things are ready-made.

2 Likes

Thanks for your great input! Let’s speculate some. What exactly do you think the smartphone computes? I think we both agree that the training of the neural networks (or whatever they use) happens on their end. But I think the application of it (e.g. to classify a workout as fail/pass) probably happens on their servers too, not on people’s phones.

For workout adaptations or the adjustment of progression levels, I doubt that any neural network is executed; that should be just simple IF/ELSE logic, as @Joelrivera put it. But that’s speculation on my part.

You seem to have some knowledge in the area. I use ML myself for work, but wouldn’t call myself an expert.

I agree with both! They do not spread lies about their product, but it must be very convenient for them that many people vastly overestimate the current capabilities of their product and the timeline of higher-level capabilities. I have not seen them dispute an exaggerated user speculation in the forum yet.

2 Likes

I’m not sure what you mean here.

It does not matter where these computations happen; an algorithm remains the same algorithm independently of where it runs. E.g. Google and Apple both have image recognition algorithms. Google’s tend to run on their servers while Apple’s tend to run on the users’ devices. That doesn’t make Apple’s approach “true ML” and Google’s “fake ML” (or vice versa). We can argue which algorithms work better or which approach is better, but the fundamental nature of the algorithms is the same. That is what matters.

Reading your post, I get the impression that you think “traditional programming” (which is what I take your IF/ELSE statements to mean) is easier than ML. These days that is not the case. Put plainly: what you consider the easier approach is actually the harder approach for this type of problem.

Just look at the evolution of chess engines. In the past, the most powerful chess engines relied on huge databases of old games and lots of what you’d consider IF/ELSE programming, where e.g. the algorithm for evaluating positions was hand-crafted by humans.

Turns out, it is easier to scrap all that and use ML: you set the rules of the game and then let the computer play against itself millions and millions of times. Google’s AlphaZero trounced the best traditional chess engine when it came out. And Google used the same techniques to create a Go engine, a problem traditionally considered harder than chess.

And precisely because you have robust ML APIs that run on all of the major platforms, you can do things like image recognition and data analysis with these tools. When I first heard of these techniques in the mid-to-late 2000s, they required expert programmers from the field and had no hardware acceleration. Oh, and computers were much slower, too.

Just because the end result looks like a bunch of simple linear regressions for a single user does not mean that TR could just replace AT with a bunch of linear regressions.

I have enough knowledge to be dangerous, but I am not an expert in the field of Big Data/AI/ML and I don’t use these techniques. But I have a good math background, so I understand the fundamentals. And by luck I met lots of experts in the field a few years ago.

If you just listen to the official pronouncements, e.g. on the podcast or in this forum, I think they have been quite apt. I think managing people’s expectations correctly is hard, as is meeting deadlines for software products. But I don’t see the malintent that you seem to infer in TR’s actions and pronouncements. If some people participate in the beta and expect the polish of a final release, and perhaps some features that are a few years down the road, then this is not necessarily TR’s problem. (I’m saying that as someone with extensive beta testing experience.) Managing expectations for upcoming releases properly is hard for software and hardware vendors, but I don’t see that TR has failed at it here. People who don’t listen to the TR podcast or read this forum, which I assume is the majority of its users, likely still don’t even know about AT.

Lastly, I don’t even think anyone can say how well AT works, because no one (outside of TR, possibly) has trained a full season on it. It is still in beta. I switched to it in the middle of a block, and it seems to do what it is supposed to do. Since I haven’t traveled in about two years (with one exception), I can’t stress test it when it comes to rescheduling, though.

1 Like

I do not think that, sorry if I gave that impression. It is of course super application-dependent.

Here, I think I just don’t understand what the data set would look like that you would use to train an ML tool that suggests workout adaptations. You can’t just tell an ML tool “find the best sweet spot workout for next Tuesday for a person with XYZ progression levels”, because there is no data set to train that tool on. What would that even be, a random forest regressor? So I am guessing that these decisions are just “traditionally” coded, as you put it.
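To make my confusion concrete, a supervised-learning framing would need rows roughly like the ones below, and it is the last column I cannot see how anyone would fill in. Everything here is invented by me for illustration:

```python
# Hypothetical rows for a "suggest the next workout" training set.
# The feature columns are easy to log; the target column is the problem:
# there is no ground truth for "was this the best possible workout?".
training_rows = [
    {"ftp": 250, "sweet_spot_pl": 5.4, "days_to_event": 60,
     "last_survey": "moderate", "suggested_workout": "Antelope",
     "target_was_optimal": None},   # nobody can label this
    {"ftp": 252, "sweet_spot_pl": 5.7, "days_to_event": 53,
     "last_survey": "hard", "suggested_workout": "Galena +1",
     "target_was_optimal": None},   # nobody can label this
]
```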

You identify why machine learning for something like chess, Go, the reconstruction of an image of two colliding black holes (yes, that’s an actual application, very exciting) or Adaptive Training is still hard even if “the algorithms are known”. I think this is TR’s secret sauce, i.e. a secret they won’t share.

If I had to speculate, at this point I reckon that the plans give a rough progression and specific workout types, a skeleton of the training plan, and AT “just” modifies the progression values. That would fit what happened: they introduced a bunch of new workouts, which are essentially versions of one type of workout at different PLs. Then they updated the plans so that they ramp via PLs rather than TSS. It would also constrain the problem space to variations of a given set of workouts.
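If that guess is right, the selection step itself could be as mundane as a nearest-match lookup over the library, something like the sketch below. The workout names are real TR workouts, but the levels and the lookup logic are just my guess at how it might be wired:

```python
# Toy workout library: (name, workout type, progression level).
# The levels here are invented, not the real ones from TR's library.
LIBRARY = [
    ("Carson",    "sweet_spot", 3.5),
    ("Antelope",  "sweet_spot", 4.9),
    ("Galena +1", "sweet_spot", 5.8),
    ("Eclipse",   "sweet_spot", 6.7),
]

def pick_workout(workout_type: str, target_pl: float):
    """Pick the library workout of the requested type closest to the target PL."""
    candidates = [w for w in LIBRARY if w[1] == workout_type]
    return min(candidates, key=lambda w: abs(w[2] - target_pl))

# The plan skeleton says "sweet spot on Tuesday"; AT supplies the target PL.
print(pick_workout("sweet_spot", 5.6))  # -> ('Galena +1', 'sweet_spot', 5.8)
```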

You are totally right that a key ingredient is what you would consider success and how to quantify it. I have no idea to be honest. I could spitball, but I’d just be shooting in the dark. But this is something you’d have to use as an input when you train your ML algorithm to rank different versions during the training phase.

1 Like

Skipping the ML chat which goes over my head, I don’t think I fully understand the chart below.

What is meant by “nominal FTP”? Is it just that a +1 increase in PL tends to give you a workout with +2.8% intensity? Or is it that cyclists who do a 9.1 SS workout then tend to see a 10% increase in intensity?

It seems strange the way this is framed around “FTP increase” rather than just “workout intensity”.

With perhaps a goto or default switch case at the end of the FTP updated handler to reset all progression levels to 1.0.

1 Like

That is what is depicted in the 4th graph above.

1 Like