I’m just exploring the updated platform and it’s clear that a lot of work has gone into it. Bravo team!
I’m wondering whether there is any concern over the potential negative effects of ‘AI Predicted Difficulty’. We know that psychological priming is a real and powerful effect: seeing someone enjoy a new type of food, for example, makes us more likely to find it pleasurable than seeing them find it unpleasant. This isn’t an effect we can simply ‘opt out’ of by questioning ourselves more carefully. Like the placebo effect, it still happens even when we know about it!
As a part of this update I’m now being told in advance whether I’m likely to find a workout hard, very hard, all out, etc. We already know from discussions on this forum that many users find the feedback survey challenging because of its ambiguity, but now we’re being heavily primed to give a particular response. I believe we’re more likely to give the guided response in these cases, even if we’re thinking very carefully about it and trying not to!
This risks creating a problem for the AI, which measures its accuracy against feedback that it has itself influenced.
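To make the worry concrete, here’s a toy simulation (purely illustrative: the five-point scale, the ‘priming strength’ and every number in it are my own assumptions, nothing to do with how TR actually works). If some fraction of riders simply echo the label they were shown, the model’s measured agreement with the surveys goes up even though the riders’ underlying experience hasn’t changed:

```python
import random

# Toy illustration only: made-up scale and numbers, not TrainerRoad's model.
# Ratings on a 0-4 scale (0 = Easy ... 4 = All Out).

def unprimed_rating(true_difficulty):
    """What a rider might report with no prediction shown (noisy self-report)."""
    return true_difficulty + random.choice([-1, 0, 0, 1])

def primed_rating(true_difficulty, shown_prediction, priming_strength=0.5):
    """Same rider, but sometimes they simply echo the label they were shown."""
    if random.random() < priming_strength:
        return shown_prediction
    return unprimed_rating(true_difficulty)

random.seed(1)
true_difficulty, prediction = 2, 1      # rider would call it ~Hard, AI predicts Moderate
trials = 10_000

agree_unprimed = sum(unprimed_rating(true_difficulty) == prediction
                     for _ in range(trials)) / trials
agree_primed   = sum(primed_rating(true_difficulty, prediction) == prediction
                     for _ in range(trials)) / trials

print(f"Apparent accuracy without priming: {agree_unprimed:.0%}")
print(f"Apparent accuracy with priming:    {agree_primed:.0%}")
```

The prediction looks far more ‘accurate’ in the primed case, and if those survey answers then feed back into the model, the loop reinforces itself.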
I’m wondering what research the team did before building what looks like a potential priming feedback loop into AI Difficulty Prediction?
Thanks for that question! I experience the same problem: all my current endurance workouts are predicted as easy. Having ridden some of them, I’d say they’re not all easy, and I tend to rate them as moderate, simply because riding 1.5 hours at 60–75% FTP in ERG mode isn’t something I’d call really easy. But rating them moderate causes AI FTP Detection to lower my predicted FTP progression and reduce the level of subsequent workouts, because it seems to say: hey, you’re supposed to be doing easy workouts! If they don’t feel easy, we’ll reduce the effort you have to do.
But I have no problem riding the workouts as prescribed; they just don’t feel as easy as pure recovery rides do. So now I tend to rate them as easy simply to avoid a reduction in my training.
And another thing:
Endurance 4.1 / 1.5hrs / TSS 65 / IF 0.66 is predicted as easy
Endurance 2.8 / 0:45hrs / TSS 34 / IF 0.67 is also predicted as easy
But compared with each other, the first is definitely more ‘challenging’ than the second, yet both are labeled as easy. Not really intuitive.
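For what it’s worth, if I read the second duration as 0:45 (0.75 h), both TSS figures drop straight out of the standard formula, so the two rides really are the same intensity held for very different lengths of time:

$$\mathrm{TSS} = \text{hours} \times \mathrm{IF}^2 \times 100:\qquad 1.5 \times 0.66^2 \times 100 \approx 65,\qquad 0.75 \times 0.67^2 \times 100 \approx 34.$$

Same intensity at double the duration gives roughly double the TSS, which is exactly why the longer ride feels more challenging even though both sit in the easy band.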
I’ve found it’s not as simple as always getting a downward AI FTP adjustment after rating an endurance ride “moderate”; sometimes the opposite happens.
If you click into the predicted difficulty graph you will probably see that these are not nailed-on “easy” rides; mine are usually around 70% easy / 30% moderate, so it follows that it’s normal to rate roughly a third of these rides “moderate”.
Easy is the lowest rating there is. If a 1.5-hour endurance ride is predicted to be easy, it seems intuitive to me that an easier ride would also be labeled easy. What else would you label it as?
Just to clarify: there are two separate issues here, which overlap in our experience of the product.
The first is the perennial issue of how we rate perceived effort: this was easy but not without effort, so is it moderate? If I rate something as very hard, will I get easier workouts? The TR team have spoken a lot about this.
The other, which I haven’t heard them speak about, is how being told in advance that we’re likely to find a workout moderate, easy, or hard primes us to judge it as such. This is a new element introduced with the recent update, and I’m interested in how they have thought about it, and how it interacts (or doesn’t) with the self-refinement we find in machine learning of all sorts.