AI Chatbot for discussing training, planning, etc

That’s what I am doing and I am loving it. For one thing, it solved the years-long challenge I’ve had of feeling awful when I switch from summer riding to winter riding (long-distance Zone 2 vs. short intense rides). But more than that, it’s flexible in a way that MyFitnessPal wasn’t. With MFP I had to be either incredibly rigid or have no idea where I stood. Now if my wife decides she wants to go out or something, I can post the menu and ask what I should eat based on my goals.

No, I haven’t created a gem. I just keep the whole discussion in one thread.

1 Like

Wait - you decided (?!?!) you have low testosterone, but never measured it to see whether the “decision” was right or not?

2 Likes

I had to go and look and double check. I thought I phrased something incorrectly. Nope. Never said that at all.

I said I was concerned about low testosterone. I was considering getting some blood work but didn’t end up needing to. It would still be a good idea, but I’ve been running this experiment for a week now and I’ve definitely turned things around. I’ve actually shifted the focus to slightly different goals, but that’s not important to this discussion.

What I will say is that, depending on the level you ride at, you’ll likely experience low testosterone at certain points. Personally I experience it after a hard 100-mile ride, and certainly on rides significantly beyond that. I also experience it when I do what Garmin considers a 2:40 VO2 max workout.

Your body should be able to recover though. I wasn’t recovering and decided it had reached a point where it was drastic enough I needed to investigate beyond just dealing with it for a few months. I was able to solve my own issue with AI analysis of my Garmin data which is pretty damn amazing.

Tonight I used it to analyze my Garmin data then select a trainerroad workout for tomorrow based on that. Also pretty amazing to me.

1 Like

I admire your trust in these corporate machines, but maybe it’s a good idea to get that blood checked still :wink:

5 Likes

It would definitely be good to check. That said I’m not trusting something blindly. It’s telling me exactly what I already should have caught. I should have known what the issue is and I’m embarrassed I missed it. It’s also nice to have some help.

It is telling you what you expect; that doesn’t make it valid, true or accurate.

:exploding_head:

5 Likes

Yeah, very sorry to have to say this, but this is confirmation bias - the only way to get actual confirmation would have been an actual testosterone test.

6 Likes

@joshross I can save you a lot of time arguing/discussing whether LLMs are untrustworthy and useless or an amazing resource: you’re not convincing the other side. You’d have better luck changing their political affiliation or religion, lol.

My experience is that, if used correctly, LLMs are phenomenal for tracking, organizing and prioritizing all the variables surrounding training. My n=1 experience after:

…is that the worst training I’ve ever done was TR’s old plans (2017-2020). I followed them religiously and it led to improvements but also burnout nearly every year. From 2021-2024, I stayed with TR (only because of legacy pricing) and used the platform for AIFTP, the calendar and a workout player. Even with all the TR updates, their current plans don’t fit my needs/wants and I disagree with their training plan philosophy. For the past 4 years I’ve made my own plans and shoehorned their workouts into my own training plan, along with using TrainingPeaks. Throughout 2025 I continued to use my self-made training plans but introduced ChatGPT as a sounding board. I’ve had my best results yet and it’s kept me from my worst enemy, myself. No matter how much I know, learn and all the experience I have, it’s always helpful to have someone/thing give it an unbiased look, if for no other reason than to remove your blind spots. This is where a coach would do the same. Absent a coach, this is a great proxy.

Also, LLMs have helped me navigate through some difficult medical issues while keeping as much training going as possible. Did I trust AI with all my health and fitness? No, of course not. However, I let it guide me to experts (doctors) with appropriate questions or avenues of approach. Same with training: I literally loaded an entire season of workouts, races, FTP, CTL, TSS etc., then asked it why/when I performed well or when I felt I underperformed. It showed that I biased myself towards workouts I preferred and left out key blocks that would likely have produced better fitness. It showed me that prior to some key races my training and taper weren’t ideal.
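For anyone unfamiliar with the metrics in that data dump, TSS and CTL follow standard published definitions. A minimal sketch in Python (standard Coggan-style formulas; the numbers are illustrative, not from anyone’s actual data):

```python
def tss(duration_hr, normalized_power, ftp):
    """Training Stress Score: 100 points is roughly one hour at FTP."""
    intensity = normalized_power / ftp
    return duration_hr * intensity ** 2 * 100

def ctl_series(daily_tss, time_constant=42):
    """Chronic Training Load: exponentially weighted average of daily TSS."""
    ctl, out = 0.0, []
    k = 1 / time_constant
    for day_tss in daily_tss:
        ctl += (day_tss - ctl) * k
        out.append(ctl)
    return out

# One hour ridden exactly at FTP scores 100 TSS.
print(round(tss(1.0, 250, 250)))  # 100
```

Under a constant daily load, CTL climbs towards that load with a ~42-day time constant, which is why a sudden jump in volume takes weeks to show up fully in "fitness".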

In my opinion and experience, it has made huge improvements in my training and racing, not to mention health. I’ve also seen it do something similar for my wife. So LLMs are not the answer, but they can lead you to better questions and solutions as long as you’re willing to follow through.

2 Likes

@joshross - sorry you’re getting attacked for sharing your experience. I appreciate you sharing it, along with your CoachCat review. It’s the internet, and people love to attack one sentence and disregard the rest of the post, which was actually never really focused on diagnosing your T levels in the first place. (Btw - I’m guilty of grabbing one sentence sometimes too, not trying to point fingers)

Anyway, just wanted to encourage you to keep sharing. It’s interesting stuff.

1 Like

Just adding some more of my own usage here because the thread’s gotten pretty spicy.

As a software developer I’ve integrated LLMs pretty extensively into my workflow. As a result I routinely see how terrible they are at most things, including code generation and thinking about software. The classic 101 analogy for programming is trying to tell a computer how to make a peanut butter and jelly sandwich: break everything into small steps, correctly, or the program will fail. GenAI still needs everything broken into small steps, and even then it often gets things wrong. It’s like having a knowledgeable, fast, over-confident junior developer available at all times who doesn’t learn or improve and struggles with anything truly novel.

That sounds like a bad situation, but it’s actually really helpful. The tools aren’t going anywhere and they will get better. The job is becoming much less about building the walls and much more about oversight and architecture at every level. Folks in my field are divided. There are junior developers and some senior developers “vibe coding”, trusting the LLMs too much and generating hard-to-maintain code they don’t really understand. There are those who reject the tools entirely. There are also a lot of us looking for where the tools are applicable as another layer of abstraction. The history of software development is in part the history of increasing abstraction: getting further away from the metal, as it were, from the hardware, in how problems and solutions are modeled in code.

I’ve also used LLMs in other areas, like identifying how to fix something in my house from a photo, or thinking over plans for a home renovation. It’s very similar in that as long as the problem is broken down into very small chunks it gives me a leg up. It’s nowhere near the same as having an actual expert. If I’m not aware of the right question to ask I may not get the right answer, it might have outdated information anyway, or my question may be leading in a way that I don’t realize. But it gives me better footing to ask the right questions of the experts, or ask dumb questions that yield helpful answers, or google and find articles based on the Q&A with genAI.

For fitness, LLMs have been immensely helpful with some work I’ve been doing on flexibility, so long as they’re drawing on physiology, exercise science and related fields. I’ve done enough PT that I can provide very clear feedback about what I feel in response to an exercise, and it can help me come up with more targeted routines that evolve as my range of motion and strength improve. Would an actual trained expert be even more helpful? Probably, but they’re not around to walk through a routine with me whenever I want.

I’ve used them to rubber-duck my meals, much like other folks here, and to go over recent fitness plans. Asking them to generate an entire plan generally isn’t going to go well, but asking them to analyze an existing plan, or to generate a single workout based on an existing plan and one’s current state, will often go a lot better. With that approach they have a single small responsibility and a lot more context to work from.
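That narrow-scope approach amounts to prompt assembly: hand the model one small job plus explicit context. A hypothetical sketch (every function name and field here is invented for illustration, not any real API):

```python
# Sketch: build a tightly scoped prompt for "pick tomorrow's workout",
# rather than asking the model to design an entire training plan.
def build_workout_prompt(plan_summary, recent_rides, goal):
    # Summarize recent training as compact bullet lines of context.
    context = "\n".join(
        f"- {r['date']}: {r['duration_min']} min, TSS {r['tss']}"
        for r in recent_rides
    )
    return (
        f"Current plan: {plan_summary}\n"
        f"Goal: {goal}\n"
        f"Recent rides:\n{context}\n"
        "Task: suggest ONE indoor workout for tomorrow, with rationale."
    )

prompt = build_workout_prompt(
    "base phase, ~8 h/week",
    [{"date": "Mon", "duration_min": 90, "tss": 75}],
    "improve FTP without overreaching",
)
print(prompt)
```

The design point is simply that the model gets one responsibility and all the relevant numbers up front, instead of being asked to invent both the context and the answer.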

This past weekend I was questioning which rides I should do because some new high-rep lifting had tired out my legs. Gemini helped me with fueling, with deciding when it was reasonable to push through, and with choosing an appropriate trainer ride given my current training stress.

The technology is slowly getting better too, and there’s a lot to be excited about around the corner, like probabilistic computers, which could give LLMs a tremendous boost.

So in the context of TrainerRoad I’d love that tighter integration. I can use Gemini for that right now. I tell it what my other training has looked like, how I feel, and it can provide the tweaks. But I suspect a tightly integrated LLM would be even better.

2 Likes

The way you’ve stated this really gets to the heart of what I’ve found but didn’t clearly state. You have to break it into tiny tasks and ask knowledgeable questions. In that context it will expand your own abilities.

I know how to train. I know how to fuel. I also have tons of data available. I am not always perfect at synthesising all of that correctly in every situation. Last night I realized I was really just applying machine learning to my Garmin data. Garmin should offer it directly, but I can do the same thing with Gemini and Garmin Connect screenshots. The fuelling analysis is one more abstraction from that data, but it’s still analyzing my data and outputting results based on well-established physiology. We humans aren’t so unique, and even if it’s a few percentage points off, it’s better than my own judgement in many situations.
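To make the fuelling arithmetic concrete, here is a rough sketch. It assumes mechanical kJ on the bike roughly equals kcal burned (gross efficiency and the kcal/kJ conversion approximately cancel) and carbohydrate at about 4 kcal/g; real needs vary a lot by rider and intensity, so treat this as an illustration, not advice:

```python
def carb_grams_per_hour(avg_power_w, carb_fraction=0.6):
    """Very rough carbohydrate-use estimate for a steady ride.

    Assumes mechanical work in kJ ~ kcal burned (efficiency and the
    kcal/kJ conversion roughly cancel) and that carb_fraction of the
    energy comes from carbohydrate at ~4 kcal per gram.
    """
    kj_per_hour = avg_power_w * 3600 / 1000   # watts * seconds -> kJ
    kcal_per_hour = kj_per_hour               # approximation noted above
    return kcal_per_hour * carb_fraction / 4  # grams of carbohydrate

print(round(carb_grams_per_hour(200)))  # ~108 g/h at a steady 200 W
```

Simple arithmetic like this is exactly the kind of thing worth checking by hand, since it is also where LLM answers can silently drift.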

1 Like

Not sure if intentionally trolling there but I like it :joy:

This is very negative and not balanced or helpful.

I could quote the post but people can read it for themselves.

I agree in general that people cutting up posts and replying can be a misrepresentation. But I don’t think that is, in the main, what’s happening here.

It is a well-known fact that LLMs repeat back and add to confirmation bias.

Put the information @joshross gave into an AI (okay, they might have given much more detail) and you will get very different results depending on where you live. T levels and treatment are not a thing in the UK unless severe and diagnosed by a doctor; you can’t get medication except illegally.

Surprise, surprise, not even mentioned in my search…

Anemia
Diabetes
Thyroid
Myalgic encephalomyelitis / chronic fatigue syndrome
Stress
Glycogen depletion
Diet
and more…

LLMs currently only work well with very small bits of information, and they can’t even reliably do simple sums, which we can.

I’m not doing anything with testosterone treatment. That’s the point. I am also hedging that it’s quite likely I should investigate that more given that I’m 45 and train at a fairly high level. That wasn’t really what this discussion was about though.

I have had low testosterone when I pushed it low by not eating enough and being too lean. I know what it feels like and when I noticed that feeling I was ready to head to the doctor.

What ended up happening instead is that I pretty drastically changed how I was fuelling for shorter and more intense riding, and I feel much better. I don’t feel the need to go to the doctor and do bloodwork, despite that still being a good idea for most people, myself included.

The story here is about how Gemini steered me away from my initial hypothesis and towards eating more and resting more, by analyzing my Garmin data. I’ve also talked about how AI is good at solving specific problems when you break things down. I was feeling fine at around 200 miles a week with frequent 7-9 hour rides. I changed to indoor riding and felt like crap. I used AI to solve that small problem; it pointed out something obvious I’d overlooked.

2 Likes

I think that’s exactly what happened. It was a post about using LLMs for training, with an intro about wondering whether the cause might be low T based on personal history, that turned into “I CAN’T BELIEVE YOU LET AI TELL YOU YOU DON’T HAVE T PROBLEMS”. That wasn’t what happened or was discussed at all.

3 Likes

Apologies, I see I missed this.

2 Likes

Fair point but mine is that it can extend your knowledge further than it would otherwise stretch.

This thread illustrates the problems with LLMs: just like humans, they summarise and come to incorrect conclusions.
They are very good at repeating known, repeatable tasks, though.

1 Like

For me that is the main issue: how do you know it’s extending your knowledge rather than just misrepresenting things?

I put my area of expertise into AI and what it returns is embarrassingly wrong, not even close to known facts. To someone who doesn’t have a background in the subject it sounds credible. Have they expanded their knowledge, or do they just think they have?

Same when I asked about training: it often states things that are common myths. Unless you’re using an AI or LLM where you know what it’s trained on, use it with caution, or fact-check and sanity-check the output.

It’s getting there, but it has a long way to go, IMO.

1 Like