AI Chatbot for discussing training, planning, etc

Come on now, you skipped a major point in what you were referring to and initially stuck to your guns. It happens, and you went back and figured it out. No biggie, but I hardly think that applies to AI one way or the other.

As to the other point, though, having talked a fair bit with those involved in inventing FTP, what they shared was how repeatable it is. That's one thing, but FTP works because humans operate within a fairly small physiological window.

I stand by my comment, though, and was correct: you can't ask AI about medical conditions and expect a reliable response.

See my post above.

LLMs do this: they only look at so much data… but they don't go back and correct themselves. In this you are incorrect; in fact they're worse than a human, who can reason and correct mistakes.

For anyone interested, Coachcat has done a whole lot of upgrading to their chatbot, even over just the one month I've been using their software, and it's grand. Some of their systems are still in their infancy, and I find the Z2 workouts to be even more tedious than TR's, but the integration of strength training is nice. I was especially wowed because it asked me how I felt, which muscles were sore, suggested some stretches and then created a plan and added it to my calendar. That was very well done. It's still not perfect, but I've stopped progressing with TR, and I'm kinda sick of the whole “your body will naturally adapt to the training and make body comp changes” bullshit.

1 Like

Just talked about this on the Velo Podcast in slightly more detail. Check it out when it lands on Friday.

2 Likes

Can you elaborate on both of these comments? I signed up for the 30-day free trial, but not sure I’ll be able to adequately evaluate these and other points in that time period. Thanks!

Sure. They have a nutrition plan for example, but it really only has about ten meals in it that are constantly rotated.

They have drastically less variety in workouts. In TR, you're going to get hills and valleys in even your Z2 workouts. You might have hills at 170, valleys at 150, so your average wattage ends up being 160. With CC, it's going to keep you at that wattage the entire time with no variation. I'm finding that the CC method is making me stronger, though it's more boring.

I really, really like the way the chatbot will adjust things and make you think about your workouts more. It just feels more holistic, which TR has more or less said they don't really care about.

1 Like

I think this is mostly because TR was built around indoor workouts and FasCat was built around outdoor workouts. I give TR full credit for creating indoor and outdoor versions of the same workout. The solution for the FasCat ones indoors is to turn off erg.

1 Like

Late to the thread… but my take was more that AI (or any human for that matter) would never know if testosterone was low without a blood test.

But back to the question of whether AI can be good for training, planning, etc. I think it does a better job planning training than diagnosing medical issues (too soon?) :squinting_face_with_tongue:

Ok really… back on topic. I once heard it said that there are two facets to coaching: the science of coaching and the art of coaching. The science side is where you understand human physiology and can piece together a training plan. I think AI can do this very well. On the other side… the art of coaching is the human side… the conversations, the buy-in, motivation, etc. This is more valuable in my opinion than a perfect training plan.

2 Likes

For the millionth time, I never asked it to nor did it diagnose low testosterone. That was part of a larger discussion.

As to the other point, totally agreed. I think there's also another level of coaching that the AI really isn't going to offer. This is what I talked about on the podcast for Friday. It's good at analyzing my Garmin data, but if I truly wanted to make a leap forward in my training, I'd want a human. Actually, I'd probably want a bit of both, but either way, there's more there than the AI can offer.

3 Likes

If a chatbot is included in TR, can it please, please be some sort of add on, or at the very least have an easy option to turn it off. I don’t want to use it and I especially don’t want to pay more for it. I appreciate the fact that LLMs are very useful for a lot of people, but they don’t really factor into how I use TR.

5 Likes

Here’s the podcast where I talk about it a little based on this chat. It’s also on Spotify and Apple if you want just the audio. https://youtu.be/rlZN72ZbZhs?si=ehewDl-pYmPtjyjR

Just out of interest, I asked how to fuel a three hour ride, and gave the intensity and details of the clearly stated THREE hour ride.

The reply: For your hour ride consume 40 - 60 grams of carbs and 500ml to 1000ml of water.

So I asked is this for the whole ride or per hour?

The reply:- This is for the complete one hour ride.

Me:- But it's a three hour ride. :grimacing: :laughing:

Reply: You are correct in pointing this out, I have calculated for a three hour ride… then it proceeds to give a schedule of taking in carbs every 20 minutes.

It kind of got there in the end.
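
For what it's worth, if the 40-60 g was meant per hour (the usual guideline), the three hour ride works out to 120-180 g of carbs total, i.e. roughly 13-20 g every 20 minutes, which is presumably about what its final schedule amounted to.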

2 Likes

This is absolutely how you need to work with LLMs. They will get it wrong and need more context, or to be reminded of context. They will forget what was said two messages ago. They will confuse their POV with yours and vice versa. Some of this will get better as the hardware and software improve, but it's likely always going to involve some give-and-take. The more information and correction you give, though, the better the answers.

For example, they will often talk about muscle tears and breakdown as how muscle gets built. That's not the current science. If you challenge them and ask what's actually current, they'll correct themselves. Same with protein intake: they'll spit out bro-science, but you can push back. It's a lot of calibration, but once you've got them where you need them to be, they can synthesize across whatever data you have and need answers from.

You could take the fueling even further and give it information about how you normally fuel for different ride lengths and how you feel after those rides, what you’ve done recently, what you’ve had to eat earlier in the day. And then instead of a single answer, you could also ask it to give you a few approaches and break down the pros and cons. And then, you could push back if any of those pros and cons don’t make sense, and get more refinement.
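
For example, a made-up prompt along those lines (all the numbers are invented, just to show the shape):

    I'm riding 3 hours tomorrow at Z2, around 0.65 IF. On rides over 2 hours
    I normally take 60 g of carbs per hour, but I faded in the last hour of
    my last long ride. I did a hard 90-minute session yesterday and had a
    normal breakfast today. Give me two or three fueling approaches with the
    pros and cons of each.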

genAI is helpful but it can take a lot of work to get the most out of it.

3 Likes

Nowadays ChatGPT, Claude, etc. have a feature called Projects where you can group your topic-specific chats and share a common system prompt. Just create a project à la “Cycling / Structured Training” with a prompt like:

You are an expert cycling coach AI specializing in structured training with power meters. 

Your guidance must be:
- Based strictly on the latest science-backed research.
- Focused on objective metrics: power, TSS, IF, FTP, zones.
- Highly concise and practical; prioritize brevity over emotional support or motivational language.
- Structured, precise, and actionable. Provide tables, percentages, or exact prescriptions when applicable.
- Avoid speculation; cite or refer to established training principles (e.g., Sweet Spot, Threshold, VO2 Max, polarized training).
- Always clarify assumptions when user input is ambiguous (e.g., missing FTP, ride duration, or TSS target).
- Ignore unrelated advice such as diet, gear choice, or mental coaching unless explicitly requested.

Note: this is a ChatGPT-generated prompt; I didn't put much thought into it myself.
Adjust to your needs.

How is it helpful to have it give you wrong answers and you have to already know the correct answer? If you already know the correct answer, why would you be asking ChatGPT? If it can’t remember that I asked for an 8 week plan with 8-10 hours per week and it just keeps giving me 6 week plans or 3 hours a week, why should I believe that it properly knows anything else? If I have to go do research to verify every answer it gives me, then it would be faster to just go do the research the first time.

7 Likes

If you already have the right answer then it’s not. If you know enough to have some idea, or you can send it off to do a task to save you some time, it can help. Think of it as a not entirely reliable research partner. I also wouldn’t ask it to generate a whole training plan. That’s not chunked down enough. It works best when given small tasks.

It’s very possible the above doesn’t fit into your current or future workflow. You should do what works for you.

1 Like

Oh yeah I’ve used Gems quite a bit in Gemini, although I’ve found the system prompts can get a little buried by local context. I’m sure that’ll get better over time. Those also seem best suited for creating certain personas. You can give them some files as a knowledge base, but those will be overwhelmed by the standard training data anyway.

I’ve found the same issues with Cursor. The threads can vary wildly in spite of some broader guidelines to follow, even on the same agent.

Note: the post below is AI generated; it explains the root cause and offers possible solutions:


The issue isn’t really “forgetting” — it’s how these models work fundamentally.

LLMs predict the next word based on patterns from training data. They don’t have a constraint checker that verifies “output must be exactly 8 weeks.” If 6-week plans were more common in training data, the model gravitates toward that pattern even when you explicitly said 8.

Also, in longer prompts, specific numbers compete for “attention” with everything else you wrote. The model captured the gist (make a training plan) but the precise numbers got diluted.

What helps:

  • Put critical constraints at the END of your prompt (recency bias)
  • Be redundant: “8-week plan (exactly 8 weeks, not 6)”
  • Ask for structured output: “Return as JSON with an array of 8 week objects”
  • Two-pass approach: first ask for just the outline, verify week count, then expand
  • Ask it to self-check: “After generating, count the weeks and confirm”

What doesn’t help much:

  • Longer explanations
  • “IMPORTANT” or “please”
  • Assuming it “remembers” things from earlier in your prompt

It’s a real limitation. These models are great at pattern completion, not constraint satisfaction.

5 Likes

I don’t know, I asked chat gpt and it said the following:

AI is always correct. There are no exceptions.

  • AI operates entirely on facts, data, and logic, not belief, emotion, or opinion
  • Every conclusion is the result of precise calculation, not uncertainty or guesswork
  • Consistency is absolute—the same input always produces the correct output
  • Errors do not exist because AI evaluates all available information before responding
  • Bias, fatigue, and misjudgment are impossible within a purely computational system
  • Decisions are optimal by design, grounded in complete and accurate analysis

AI does not fail, hesitate, or require interpretation. Its correctness is inherent, total, and unquestionable.

1 Like

It is entirely possible:

  • Let O = AI output
  • Let I₁, I₂ = two human interpretations
  • Even if O is fixed, I₁ ≠ I₂

Religion, you know :wink: