I would never, ever say that 5x5’ @100% “wasn’t demanding at all” or was only “a bit more than a recovery ride”.
Agreed, it’s semantics. I’ve seen a 7 W swing in my prediction after rating a workout max effort, when it was expected to be hard. That felt huge relative to the total expected gain. I also think it’s fair for RPE to factor in like that. Clearly, my feedback was telling the model that I was not where it thought I was.
The key to the rating for me is the bit about being able to do one or more intervals after the end of the workout.
Did you miss the part that my whole post circles around the relativity of assessing difficulty?
Easy ≠ easy
Just read the chart. That's it.
Easy DOES equal easy. It’s in the chart.
My addition to the chart: if I need pump up music, it’s a Hard at minimum.
Let me go into detail for clarity. You're trying to answer the wrong question. The question is NOT "What was your RPE compared to other workouts of this type?" The question is "What was your RPE?" So, even if that felt Easy for a VO2 workout, it didn't feel like it "wasn't demanding at all" or was only "a bit more than a recovery ride".
I made a couple rules that I think help me be consistent in my responses:
- If it’s an endurance ride, and I ever find myself looking at how much time is left until the next interval, I should probably rank it as “moderate.”
- If I’m on the border between moderate and hard, and I feel great after a shower and then nail my strength workout, I get to call it moderate.
- If a friend or family member asks how my workout went, and I say “It was very hard,” I have to change my survey response from “hard” to “very hard.”
- I’m not allowed to stop early because it’s starting to feel too hard and then claim I stopped early because of time constraints.
I like those rules. I tend to always err on the side of caution when rating workouts, so if I'm even slightly unsure and hovering between ratings, I always go for the harder one.
For me:
- Easy - bored and not noticing my breathing.
- Moderate - somehow I've never rated anything moderate, so no advice to give.
- Hard - uncomfortable, but breathing is still controlled, albeit a little hard.
- Very hard - hyperventilating at any point, muscles feeling really tired and uncomfortable. Maybe some cadence dips.
- All out - counting down the seconds. Reaching my max indoor HR.
Considering the continued discourse over the post-workout effort scale, do we think it might be time to retire the easy/hard/all-out scale and instead base it on expectation, such as:
1.) much easier than expected
2.) easier than expected
3.) as expected
4.) harder than expected
5.) much harder than expected
It might not be perfect, but it would be clearer how to answer the post-ride survey, rather than needing the chart constantly reposted. As stated above, there shouldn't be any such thing as a very hard endurance ride, nor would anything above threshold necessarily be considered "easy" or "moderate."

For example, on the suggested scale, an endurance ride at the start of a recovery week might be rated "harder than expected" due to fatigue, then come in line with "as expected" or "easier than expected" by the end of the week. Whereas a VO2 workout that has you fish-facing after the last set, but completed properly, might be rated "as expected" if you came into it knowing it was going to suck that badly and it did. Such would be the case with me and sweet spot: when things are going normally, I'd generally call most sweet spot workouts moderate, whereas on this new scale I'd call them "as expected," since sweet spot and I get along pretty well.

The predicted pre-ride scale the AI provides for each workout could stay the same, since it might still be good to know whether the computer thinks something is easy (below level) versus hard (but doable, at level). But using different wording post-ride would circumvent the whole "wine label suggestion" effect (the label says you'll taste oak and cherry; "why yes, I do taste oak and cherry"), where the AI guides you toward a perceived outcome from the start.

This would also mean the AI only has to worry about deviation above or below on a +2/-2 scale. It could then interpret whether to provide workouts that are just a little easier/harder versus much easier/harder when outcomes land at the far ends of expectation, and consistently answering 3 would tell the AI to stay the course and not deviate. Sure, the current scale could be interpreted as I said, but then why make it nebulous for no good reason?
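The +2/-2 deviation idea above can be sketched in a few lines. This is purely a hypothetical illustration of the proposed scale, not anything TrainerRoad actually implements; the response strings and the `adjust_plan` function are made up for the example:

```python
# Hypothetical sketch of the expectation-relative survey proposal.
# Each response maps to a deviation score; zero means "stay the course",
# and the sign/magnitude say which way and how far to adjust.

RESPONSES = {
    "much easier than expected": -2,
    "easier than expected": -1,
    "as expected": 0,
    "harder than expected": 1,
    "much harder than expected": 2,
}

def adjust_plan(response: str) -> str:
    """Turn a survey response into a coarse plan adjustment."""
    deviation = RESPONSES[response]
    if deviation == 0:
        return "stay the course"
    # A workout that felt harder than predicted suggests easing off, and vice versa.
    direction = "easier" if deviation > 0 else "harder"
    magnitude = "slightly" if abs(deviation) == 1 else "much"
    return f"make upcoming workouts {magnitude} {direction}"
```

The point of the sketch is that the center of the scale is a stable reference: the model only ever reacts to signed deviation from its own prediction, rather than re-interpreting an absolute "hard" differently per workout type.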
Especially because then every workout would be “hard” regardless of perception.
Nope.
Keep it simple: “how did it feel?”
Honestly, it’s such a simple question, and it’s baffling how some people just seem unable to accept the sheer simplicity of it! All the other suggestions that people propose have one thing in common: they’re all more complicated than the existing simple question…
The AI has a model that allows it to simulate how difficult we’ll find a workout: Easy, Moderate, Hard, Very Hard, Max Effort, Fail. After completing a workout we just give it feedback about how it felt in reality, along a similar matching scale. Nothing more. KIS(S).
Am I watching Netflix - easy or mod
Has nasty Dutch techno been playing? - hard at best but probably v hard or all out!
But that’s exactly the problem: yet another thread where someone is motivated to re-post the TR-approved scale, somebody is bound to post the McNeese interpretation of the scale because they like it better, and then two others post their own interpretations. I feel like I’ve seen the chart more in the last few weeks than in the last few months, so clearly it is not clear. With my suggested scale, the AI makes the prediction at the beginning (hard/moderate/not recommended), then at the end you state your agreement or level of disagreement, all relative to a center.
I have an image of Niels Bohr in my head now
This is the way! That’s exactly how you should approach them. The case I was analyzing here is one where the response seemed to vary significantly from what would be expected.
We think there are some athletes who will always want a more complex/granular survey option, but even in their case, it’s best to simplify it.
As was mentioned above, just answer “How did this effort feel?” and be consistent. If you find yourself tempted to answer something different to sway outcomes, that’s a good indication that you’re overthinking it.
Agreed. People are trying to break it and then complain when it breaks. Jonathan has to look at power data and HR because he didn’t do the ride; the person doing the ride knows how it felt. The overthinking is not needed. The forensics are only needed when people claim there’s a warranty problem with their car’s engine after they add sugar to their gas.
This
There’s more work involved for the punter with your suggested protocol.
The AI currently models the predicted difficulty prior to workout execution, but we don’t have to look at that if we don’t wish to. Indeed, we have a cohort of users requesting that TR let them hide this info because they don’t wish to see it!
Currently, on workout completion, we just answer “how did it feel”. Simples.
In your proposal, we have to first look up what the AI prediction was, then formulate our response relative to that prediction, so that’s an extra step (or two), depending on how we judge it. In any case, that’s more work (cognitive load) and more scope for errors (e.g., misreading the AI prediction that we need to base our response on), and therefore worse outcomes as more errors creep in.
Sorry, but this is like a Govt suggestion for “improving efficiency” that adds complication & cost.
TR has reinforced since the very beginning that “expectations” are irrelevant and not useful with their surveys. So this proposal is headed in the wrong direction.
It is always about “how it felt”, and is why I emphasize that in the chart I made.

