AT: Easy vs Moderate vs Hard vs ... [Pass / Success Survey Responses]

Is there a guide to what TR considers easy/moderate/hard/very hard? It seems that if an endurance ride is hard or a threshold ride is easy, something must be set wrong, or maybe I’m misunderstanding. I could use some guidance on what to respond when the intervals seemed just right for where I’m at (i.e., threshold intervals were hard during the interval, but the entire workout was manageable and left me ready for whatever else I had to do that day).

Moderator edit: Here is the official TrainerRoad support site document that covers workout surveys.



This is via The Other Chad :grin:, his take on it. It has worked well for me, so I see no reason not to go by it.
Post 1209 in the Beta thread - worth a read


For clarity, that is my personal list that I shared. It is not official or ‘TR Approved’ in any way.

Notably, based on some comments from people who have discussed these ratings with actual TR reps, I am pretty sure my list does not align exactly with the comments TR shared.

I’d really appreciate it if TR would make an official rating list. There is constant confusion about them and the ‘just rate it how it feels’ comments of the past aren’t helpful when I compare my own list (above) to the info I’ve seen via TR rep discussion with others.

Maybe @IvyAudrain can look into getting clear and official guidance here.


(04:24:09 PM) Bryce L: I understand, and I think better messaging can be around that.
1 - Easy - Easier than expected
2 - Moderate - As expected
3 - Hard - A bit of a challenge to finish successfully
4 - Very Hard - Had to take some extra recovery, really struggled but made it through

I have issues with the official rating info shared, when we have heard countless times from Nate and others that things like Sweet Spot workouts are frequently “Hard but doable”. That just seems counter to 2-Moderate being “as expected”, based on so much of what the team has shared and discussed over the years.


Sorry Chad - I was just adding that and you jumped in before I’d finished


Much appreciated for the edit :+1:

What I’m curious to know is how AT takes each answer. From the other thread, if you answer “Very Hard” (and presumably “All Out”) you will get a decrease in the PL of the next workout. That doesn’t sound right to me, at least for the “Very Hard”. Maybe I had a stressful day at work and the workout felt extra hard today. Now for the next workout, it’ll suggest something easier (from a 4.7 to a 3.5 in the guy’s example). If I then have a normal day, the 3.5 should feel pretty easy if I’m completing 4.7s. Seems like I’d get a better workout if I did at least something in the mid-4s.
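To make the concern above concrete, here is a minimal sketch of the *hypothesized* behavior being described. This is not TR’s actual algorithm; the function name and the step sizes are made up purely to mirror the 4.7 → 3.5 example from the thread:

```python
def next_progression_level(current_pl: float, survey: str) -> float:
    """Hypothetical sketch of how a survey response might shift the next
    suggested Progression Level. Step sizes are invented for illustration
    (chosen so that 4.7 with "Very Hard" reproduces the thread's 3.5 example);
    they are not TR's real adjustment logic."""
    if survey in ("Very Hard", "All Out"):
        return round(current_pl - 1.2, 1)  # big step back after a struggle
    if survey == "Easy":
        return round(current_pl + 0.5, 1)  # easier than expected, push on
    return current_pl  # "Moderate"/"Hard": hold steady

# The worry in the post: one stressful day rated "Very Hard" drops the
# next suggestion well below what the rider can normally complete.
next_progression_level(4.7, "Very Hard")  # 3.5 in the thread's example
```

Under this (assumed) model you can see the poster’s point: a single off-day rating pulls the next workout more than a level below what they routinely finish.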


This isn’t really going to work. For example: I expect Pettit to be easy. If I do it and it is easy, then that is as expected. So I’m supposed to mark it as moderate? There has to be some objective reality here; I think that @mcneese.chad has nailed the difference between easy and hard.


IMHO this is why TrainingPeaks has separate surveys for:

  • how did you feel?
  • perceived exertion

Seems to me that TR is trying to simplify to the point where you need to game the system.


Thanks. I did a search, but never found that.

Agree that TR needs to clarify whether the rating is an absolute rating or a relative one, where “relative” means compared to the workout’s intended difficulty. E.g., if you are doing a Stretch workout, which theoretically should be hard to very hard, but it didn’t feel too hard, you’d rate it Moderate. Conversely, if you are doing an Achievable workout, which should be moderate to easy, and it felt harder than that, you’d rate it Hard.


If their data scientists are on the ball it shouldn’t matter as long as you are consistent with your own ratings.


No. That is the whole point of having different ratings for different types of workouts. Someone could have VO2Max as a real strength, and able to execute VO2Max workouts with ease. Someone else could be a complete diesel, able to hold aerobic power for hours without HR drifting at all, but lack high end “snap”. Different types of workouts will be challenging for different athletes - it doesn’t mean anything is set wrong.

I think that’s what I gather from that list. Though like you, I was marking rides like Pettit as Easy before. Sounds like I need to change my rating concept too. I imagine if we did Pettit the Tuesday after a rest week then maybe it’s Easy. You know those early days where everything seems to just spin effortlessly. But Pettit gets to Moderate in 2-3 weeks?

My thought is the ML is really after those outliers, like Very Hard markings, to know if it needs to ratchet back, or Moderate and Easy on over-unders to know if it needs to ratchet up. Personally I’ve been manually ratcheting my over-unders down from my plan, knowing I’m still around 3-4 and not 5-6. But that’s me. I’m curious if anyone else is seeing that auto-adjust.

I agree with the need for clarity. If AT is trying to quantify adaptations out of the ratings, then we need to know what the expected measures are.

In defense of Customer Support, this advice did help me. I was rating “as expected” Threshold & SS workouts as “hard” and a couple of “finished fine but had to push” as “very hard” and I would get no progressions or even downgrades. Shifting to what CS recommended has helped me progress at a reasonable pace with no failed workouts.

I like @mcneese.chad’s definitions as well; however, it all comes down to how AT is using the ratings. From what I can tell, they seem to have a significant impact. For example, the issue in question was that I substituted a more difficult threshold session and completed it fine, but it was tough. I rated it Very Hard based on how I was personally rating. This caused a slight downgrade, which seemed off since I completed the workout fine. The way we fixed it was simply changing the response.


Except we already know if you choose all out, you get a survey that indicates AT thinks the workout was too hard, and doesn’t give you PL credit. So if hypothetically I shifted everything two places: Easy → Hard, Moderate → Very Hard, etc. I would “break” AT. So there is some assumption with the way TR implemented the rating system and how it impacts AT.

This is the problem with ML: if the builders put a bias into the data set used for training, then the model is biased. I’m not saying TR purposely biased the model, but I’m pretty close to 100% sure that the model and training data are biased around how the ratings should be used. TR had to use survey responses to tune the model, and these initial responses came from TR employees (hypothesis). I’m guessing there was some discussion initially, and the TR employee set isn’t a good proxy for the general set of TR users, so that dataset is skewed.

Yes, I’m stringing together a lot of assumptions to reach a conclusion, but I really think this is one place where TR could help people, and itself, by providing more direction around the intent of the survey responses and how to use them.


For me in relation to 1-10 (easier scale for me personally in terms of thinking about the effort):
Easy: 1-3
Moderate: 4-5
Hard: 6-7
Very Hard: 8-9
All-out: 10

I know that is all subjective, but this is my translation. For example, 4x20 @ 92% usually falls into Moderate/Hard, depending on the day, and 4x15 @ 97% is usually Hard. VO2 max is Very Hard, and All-out is for testing. I treat All-out as “I have fallen off the bike and am lying in the corner crying and cursing myself” - a good example would be my “favourite” 5 min test. I rarely rate a workout All-out; if I have not finished a workout, there are usually some other factors.
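The personal translation above amounts to a simple threshold lookup. A minimal sketch, assuming this poster’s subjective 1-10 boundaries (not official TR guidance):

```python
def survey_rating(rpe: int) -> str:
    """Translate a 1-10 RPE into this poster's personal survey choice.
    The thresholds are one rider's subjective mapping, not TR's definitions."""
    if not 1 <= rpe <= 10:
        raise ValueError("RPE must be between 1 and 10")
    if rpe <= 3:
        return "Easy"
    if rpe <= 5:
        return "Moderate"
    if rpe <= 7:
        return "Hard"
    if rpe <= 9:
        return "Very Hard"
    return "All-out"

# e.g. 4x20 @ 92% around RPE 5 -> "Moderate"; a 5 min test at RPE 10 -> "All-out"
```

The point of writing it out is that everyone’s thresholds differ, which is exactly the consistency problem the thread is circling.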


I asked customer service this very question, and it is not based on the perceived effort of the ride (i.e., RPE), so you can’t just mark rides easy if they felt easy and hard if they were hard. The survey is meant to be answered relative to what you expected the ride to be, so endurance rides can be Hard (if you’re exhausted) and threshold rides can be Easy (if you’re well rested, have increased fitness/FTP, etc.).

This thread, to me at least, highlights the need for TR support team to have a staff only thread or FAQ help page.

There are a few repeated questions in the beta thread, missed information, and here we have a few misunderstandings about personal feedback. The bug tracker page doesn’t really cover things like best practices or how results work, mainly because that’s not a bug, just training.


This is a good thread. I was thinking of posting the same question myself. I have been answering the AT question with an interpretation exactly like that from @mcneese.chad , so it’s interesting to hear that AT might be working with a different definition. I’m sure it’s important for everyone to be using a consistent definition.
