AT: Easy vs Moderate vs Hard vs ... [Pass / Success Survey Responses]

Yes, I’m confused. I think I’ve been completing it differently from how TR is advising.
e.g. a VO2 max workout felt hard to me (though I completed it successfully), so I marked it as Hard.
Based on what I’m reading above, if I expect it to be hard, and it was hard, I should mark it as Moderate?

1 Like

The rating doesn’t matter globally. A 2 for me will not be the same as a 2 for anyone else, and with machine learning it does not need to be. Machine learning is just looking at whether this is an up or a down, and as long as you are consistent it will score it relative to YOUR other scores, not anyone else’s.

I think you should find a reasonable classification, such as Chad’s above, to make it easier to be consistent with your choices. I like using Hard for most productive workouts because that leaves somewhere to go up if the workout is harder than normal, plus one notch higher still for when it’s way above your pay grade. Having two notches below normal is also a smart choice for workouts where you felt completely unchallenged or that were pure recovery rides. A 5-star rating like this seems like the simplest way for folks to gauge anything.
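To make the “consistent with yourself” idea concrete, here is a minimal sketch (my own toy example, not anything TR has published) of how per-rider normalization could make two differently calibrated riders look identical to a model. The function name and the use of z-scores are assumptions purely for illustration.

```python
# Toy illustration only - not TrainerRoad's actual model.
# If each rider's 1-5 survey responses are normalized against that rider's
# own history, two riders with different personal baselines produce the same
# signal, as long as each rider is internally consistent.
from statistics import mean, stdev

def normalize_ratings(ratings):
    """Convert a rider's raw 1-5 ratings into z-scores relative to that
    rider's own rating history."""
    mu, sigma = mean(ratings), stdev(ratings)
    return [round((r - mu) / sigma, 2) for r in ratings]

# Rider A habitually answers "Hard" (4) for productive workouts;
# Rider B habitually answers "Moderate" (3).
rider_a = [4, 4, 5, 4, 3]
rider_b = [3, 3, 4, 3, 2]

print(normalize_ratings(rider_a))  # [0.0, 0.0, 1.41, 0.0, -1.41]
print(normalize_ratings(rider_b))  # [0.0, 0.0, 1.41, 0.0, -1.41]
# Identical output: what matters is the deviation from your own baseline,
# not the absolute label you happen to favor.
```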

1 Like

It should be this: an official communiqué saying “it doesn’t really matter as long as you are consistent with yourself.” Pretty sure one of the podcasts said as much anyway.

1 Like

What is the point of “how I expected it to feel”? If I expected it to be hard (let’s say 4x15 @ 97% FTP) and it felt moderate, so easier than I expected, how should I score it? If I score it Moderate, that is based on RPE; if I score it Hard, I am giving a contradictory score.

I basically do not care, as I do not use plans/Plan Builder etc., but I am just curious.

3 Likes

I am attempting to pool some info here for reference, along with my thoughts on it, which I hope TR can review and comment on.

I have more comments to add and I would like to see what else we can learn. As of now, this is more confusing than helpful to me.

21 Likes

Yes, it is all based on some measure of perceived effort — it is just a question of what it is relative to. My understanding is that RPE is relative only to your perceived level of exertion (an example would be rating how fast the Ford Mustang felt to drive), but TR is asking us to rate the perceived level of exertion compared to what we are used to for that type of workout (how fast did the Mustang feel compared to a Toyota Camry). For your exact question, I would rate it as something easier than Hard, depending on how much easier it felt.

2 Likes

FYI, here is the communication I had with customer service.

My question: Are the survey responses supposed to be similar to RPE (i.e., for example, you should always expect an endurance workout to rate lower than threshold), or is it relative to how hard you expect an endurance or threshold workout to be?

Customer Service Answer: As for the survey responses, they should be answered relative to how you expect a ride in that zone to be, and not related to the RPE scale.

4 Likes

And that is why I have a problem with the TR comments. The one prior says 2-Moderate = as expected. But what if you are doing a challenging workout like Leconte (painful over-unders)?

I have never been anywhere close to “Moderate” on that workout. I have been a 4 at best when everything was on point, and I have given it a 5 more often than I’d like with that beast. I guess we should “expect” it to be miserable and then rate it a 2, if I understand that logic?

Again, that is crazy to me if that is the intended use, given that there are 3 ratings harder than that 2-Moderate. Based on all I have learned over the years, and in light of the new Workout Levels, I know that workout in particular is in Breakthrough/Not Recommended territory for me more often than not. So maybe that just points to that example being something that should not have been on my schedule.

All that is to say that I still don’t think I like or align with what we are learning from TR with respect to the “expected” use of the surveys and the likely impact as a result. That is especially so since the initial and repeated comments from the start of the beta were to “rate it how you feel” and that the system would “learn from us”.

Maybe that is true, and my ratings, despite not initially aligning with what we seem to be learning from TR, are somehow meshing and becoming a proper rating system. But I have serious doubts based on their more recent comments and on looking back at the PL changes that resulted from my ratings.

This is pretty frustrating: being told to just “go with it” in the beginning, only to learn that there may well be a more specific definition they have on hand and use, without sharing it until we pull harder to get the info.

8 Likes

Maybe they are still figuring out how to make AT work best for more users… “just go with it” was all they had at the time, and they later realized it led to inferior adaptations.

1 Like

Yeah, the two different comments seem to contradict each other. To me, the easiest way to have set this up would have been to simply use the traditional RPE scale, but based on the feedback I got, that does not seem to be the case. We just need some clarification at this point, because we are getting different apparent answers from TR.

3 Likes

Yeah, that sure could be the case. If so, it is great that they are learning and working to improve.

But we need actual guidance from them if those prior instructions are no longer correct. And that should be shared far and wide, as opposed to a few people getting it via direct contact with TR Support. We only know what we know now because beta users are sharing (and I am thankful for that).

We need a clear and direct set of instructions (be it more clarity on the “old” system or whatever “new” system may be in place) from TR in these threads, the FAQ and such.

6 Likes

When I was trying to figure out which scenarios supported progression level updating, someone from TR was going to write up an FAQ. Did that happen?

1 Like

I think I saw it posted in another thread that a 4 will slow and possibly even slightly reduce your progression rate.

Coming off a rest week this morning, I jumped back into SSB-HV II with Antelope. Prior to riding, AT had adapted my plan to my Sweet Spot PL (8.3). During the workout I needed a little extra rest between the last two sets, so I rated it a 4. AT adapted the plan after the ride and made the subsequent workouts a little easier (PLs 6-7).
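Purely to illustrate the behavior described above (and emphatically not TrainerRoad’s actual Adaptive Training logic), here is a toy sketch in which a harder-than-intended survey answer pulls the next workouts’ Progression Levels down, while easier answers let them climb. The rating labels match the survey, but every step size is invented for the example.

```python
# Hypothetical sketch of the behavior observed above - the real Adaptive
# Training logic is unknown to me, and all step sizes here are made up.
def next_progression_level(current_pl: float, survey_rating: int) -> float:
    """Return an illustrative Progression Level for the next workout in the
    same zone, given a 1-5 post-workout survey rating."""
    adjustments = {
        1: +0.6,   # Easy: progress faster
        2: +0.3,   # Moderate: progress as planned
        3: +0.1,   # Hard: keep progressing, a bit more cautiously
        4: -1.5,   # Very Hard: back off noticeably
        5: -2.5,   # All Out: back off a lot
    }
    return round(max(1.0, current_pl + adjustments[survey_rating]), 1)

# Mirroring the post above: Sweet Spot PL 8.3, workout rated a 4, and the
# subsequent workouts land in the 6-7 range.
print(next_progression_level(8.3, 4))  # -> 6.8
```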

Yes:

1 Like

Maybe you should revisit the notion of “threshold”. By definition, threshold is the hardest effort you can sustain for a given time frame. If it’s easy, it isn’t threshold.

All in all, I wish the post-ride poll were in terms of expectations - i.e. easier than expected, as expected, harder than expected, etc. THAT would be clear as a bell. No second-guessing involved.

5 Likes

@mcneese.chad, as usual, great observations and nice work jumping in with the chart. Thank you!

I was one of the posters a while back asking for FAQs, a users’ guide, etc. from TR. A beta would be a very good time to leverage participants for input and feedback so it’s ready to go when AT goes into production.

And yes, the survey ratings seem to be a key factor in helping people get faster more quickly and more consistently, so they should get appropriate attention.

2 Likes

I think the problem is they do not have a set of “best practices” yet. I think they have way more issues to figure out than they anticipated, and one of them is communication around the surveys.

2 Likes

I agree.

I, and I think plenty of other beta users, including yourself, have been responding to the survey in accordance with the simple, specific question being asked, which is:

“How did this effort feel?”

The question isn’t “how did it feel compared to what you expected” or any other possible variation that’s been discussed; it’s “How did this effort feel?”. There’s no wiggle room there - it’s a simple question, and I think many of us have been answering (correctly :wink: ) in accordance with the straightforward question being asked.

It’s enlightening that a few people have posted some of the feedback they’ve received from TR support staff, because if those support staff are correct (and perhaps they’re not on script and have made an error?), it implies that the question we’re currently being asked in the survey is perhaps nonsense and needs to be re-worded. And it feels a little like the rug has been pulled from under us.

The question wording is crucially important so that we know what question to answer - doh! - and not have to attempt to game our responses in order to engineer an outcome that we think is correct. Pretty fundamental stuff…

Responding to the survey should be straightforward without requiring any over-thinking, second-guessing or gaming. The wording needs to be aligned with how TR interpret our responses and it needs to be clear, otherwise our responses are just feeding back garbage.

Hopefully this issue will be clarified, and if TR really is asking us something different from the (current) simple “How did this effort feel?” then they need to (a) inform us of that and (b) change the wording of the question so that we know what we’re being asked. This is - or should be - basic stuff.

13 Likes

Great post!!! :+1:

I added the actual survey question into my Google sheet, because I think stepping back to that reality is important!


1 Like

I don’t think I know my cycling self well enough to know how I expected it to feel. I always answer with respect to RPE. I hate RPE.
Imagine asking someone new to exercise, “How did you expect it to feel?” Huh???

2 Likes