Please don’t put words in my mouth. Of course, I don’t think that, and I never wrote that.
Yes, which is why applying ML is difficult and fraught with risks. Even independently of what techniques you use, you want to distill some essential information from a whole host of data. Deciding which metrics are important and which aren’t is key.
For example, as best I can tell, the purpose behind PLs is not to quantify performance, but to select workouts and achieve progressive overload at prescribed rates. I suspect this is the reason why TR hasn’t done much with them yet (at least publicly). For this limited purpose, PLs work well for me even without AT.
Of course, this means AT’s functionality is quite limited at present, and I am missing good analysis tools to judge my progress.
I’m not quite sure what you mean. Yes, the public version of AT cannot ingest unstructured rides at present, but I struggle to understand the rest of your point.
Can you be more precise? What can multiple programs predict currently? And from what data?
Yes, and? TTE is not necessarily a relevant measure. For shorter VO2max and higher efforts, you might want repeatability rather than TTE. Certain smart watches also estimate your VO2max, but what does that number tell you? How does it inform your training? Ditto for TTE: why should I care, i. e. how does it relate to performance outcomes that are relevant to me?
To be honest, that’s one important issue we haven’t talked about in this thread at all: what information do you expose to the user? I have ranted about TR’s poor performance analysis tools in several other threads (e. g. here), but I understand the problem isn’t simple.
Basically, people can only track a few metrics, and it is the choice of metrics that is the tricky bit. This is really where a good coach can help an athlete: they find out where the strengths and weaknesses lie, what the athlete wants (e. g. be good at a certain cycling discipline) and then weigh whether to focus on strengths or weaknesses (limiters). TR could (and should) take a stab at this, but simply predicting numbers from data might not be helpful at all.
Ideally, I want TR to analyze an athlete’s past performance, identify strengths and weaknesses, and tell athletes what a particular plan emphasizes. Athletes should know why they should track certain numbers (and not others). Perhaps it would surface TTE as a metric for people choosing the 40k TT plan or the tri plan, but expose other metrics to athletes from other disciplines.
I completely agree, and I wrote as much above. End users don’t care whether “the computer” got the result by a traditional algorithm written by a human, an ML-based algorithm or a Ouija board.
ML should also be used with a lot of caution. If you listen to the talk I linked to, a vast part of the research went into ensuring that the algorithms were not biased, i. e. that they did not simply reconstruct what we expect the event horizon of a black hole to look like just because we want to see a reconstructed image of a black hole.
I know full well it is not a panacea: Amazon used ML carelessly when pre-screening applications: they trained their algorithms to reproduce sexist bias, where e. g. being a member of a women’s chess club was counted as a negative. Another example is Google’s image recognition snafu: they apparently had too few (or no) black people in their training set, and black people were categorized as gorillas. I also don’t like it when colleagues cannot explain what ML did to their data and why it was necessary.
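To make the mechanism concrete, here is a minimal sketch (with synthetic, hypothetical data, not Amazon’s actual system) of how a model trained to reproduce biased historical decisions picks up that bias: if qualified candidates were systematically rejected whenever an irrelevant proxy feature was present, the fitted model assigns that proxy a negative weight.

```python
# Sketch: a model trained on biased labels learns the bias.
# All data and feature names here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
skill = rng.uniform(0, 1, n)      # a genuinely job-relevant feature
proxy = rng.integers(0, 2, n)     # an irrelevant proxy (e.g. club membership)

# Biased historical labels: qualified candidates (skill > 0.5) were
# rejected whenever the proxy feature was present.
hired = ((skill > 0.5) & (proxy == 0)).astype(int)

X = np.column_stack([skill, proxy])
clf = LogisticRegression().fit(X, hired)

# The model rewards skill but penalizes the proxy feature,
# faithfully reproducing the historical bias.
print(clf.coef_)
```

Nothing in the fitting step “knows” the proxy is irrelevant; the model just compresses the historical decisions, bias included. Auditing the learned weights (or the training labels) is the only way to catch this.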
However, treading carefully and resisting the urge to simply produce numbers (e. g. TTE or whatever other metric you want) without thinking about whether and how they are useful to the user seem like good ideas.
Most likely. From what I remember, they have a Quantifier that judges how hard workouts in different energy zones are, although if memory serves there was some human input as well. So in this sense, it is also a matter of semantics whether AI FTP is just a feature of AT or not.