I admire your trust in these corporate machines, but maybe it’s still a good idea to get that blood checked!
It would definitely be good to check. That said, I’m not trusting something blindly. It’s telling me exactly what I should have caught myself. I should have known what the issue was, and I’m embarrassed I missed it. It’s also nice to have some help.
Yeah, very sorry to have to say this, but this is confirmation bias - the only way to get actual confirmation would have been an actual blood test measuring testosterone.
@joshross - sorry you’re getting attacked for sharing your experience. I appreciate you sharing it, along with your CoachCat review. It’s the internet, and people love to attack one sentence and disregard the rest of the post, which was actually never really focused on diagnosing your T levels in the first place. (Btw - I’m guilty of grabbing one sentence sometimes too, not trying to point fingers)
Anyway, just wanted to encourage you to keep sharing. It’s interesting stuff.
Just adding some more of my own usage here because the thread’s gotten pretty spicy.
As a software developer I’ve integrated LLMs pretty extensively into my workflow. As a result I routinely see how terrible they are at most things, including code generation and thinking about software. The 101 analogy for programming is trying to tell a computer how to make a peanut butter and jelly sandwich: break everything into small steps, correctly, or the program will fail. GenAI still needs everything broken into small steps, and even then it often gets things wrong. It’s like having a knowledgeable, fast, over-confident junior developer available at all times who doesn’t learn or improve and struggles with anything truly novel.

That sounds like a bad situation, but it’s actually really helpful. The tools aren’t going anywhere, and they will get better. The job is becoming much less about building walls and much more about oversight and architecture at every level. Folks in my field are divided. There are junior developers, and some senior developers, “vibe coding”: trusting the LLMs too much and generating hard-to-maintain code they don’t really understand. There are those who reject the tools entirely. And there are a lot of us looking for where the tools are applicable as another layer of abstraction. The history of software development is in part the history of increasing abstraction - getting further away from the metal, as it were, from the hardware, in how problems and solutions are modeled in code.
I’ve also used LLMs in other areas, like figuring out how to fix something in my house from a photo, or thinking over plans for a home renovation. It’s very similar: as long as the problem is broken down into very small chunks, it gives me a leg up. It’s nowhere near the same as having an actual expert. If I’m not aware of the right question to ask I may not get the right answer, it might have outdated information anyway, or my question may be leading in a way I don’t realize. But it gives me better footing to ask the right questions of the experts, or ask dumb questions that yield helpful answers, or google and find articles based on the Q&A with genAI.
For fitness, LLMs have been immensely helpful in some work I’ve been doing on flexibility, so long as they’re trained on physiology, exercise science, and related fields. I’ve done enough PT that I can provide very clear feedback about what I feel in response to an exercise, and it can help me come up with more targeted routines that evolve as my range and strength improve. Would an actual trained expert be even more helpful? Probably, but they’re not around to walk through a routine with me whenever I want.
I’ve used them to rubber duck my meals, much like other folks here, and to go over recent fitness plans. Asking them to generate an entire plan generally isn’t going to go super well, but asking them to analyze an existing plan, or to generate a workout based on an existing plan and one’s current state, will often go a lot better. With that approach they have a single small responsibility and a lot more context to work from.
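That “single small responsibility plus lots of context” pattern can be sketched as a simple prompt builder. This is just an illustration of the idea; the function, field names, and example data are all made up, not any real API:

```python
# Sketch of the "one small task, lots of context" prompting pattern.
# Everything here is illustrative: no real LLM API is called.

def build_workout_prompt(existing_plan: str, current_state: str) -> str:
    """Compose a prompt that gives the LLM full context but only
    one narrow responsibility: suggest today's single workout."""
    return (
        "Here is my current training plan:\n"
        f"{existing_plan}\n\n"
        "Here is how I feel today:\n"
        f"{current_state}\n\n"
        # Narrow the task: one workout, not a whole new plan.
        "Based on the plan and my current state, suggest ONE workout "
        "for today. Do not redesign the plan."
    )

prompt = build_workout_prompt(
    existing_plan="Tue VO2 intervals, Thu threshold, Sat 3h endurance",
    current_state="Legs sore from high-rep lifting; slept well",
)
print(prompt)
```

The point is simply that the model gets handed all the context up front and is asked for exactly one thing, instead of being asked to invent a whole plan from scratch.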
This past weekend I was questioning what rides I should do because some new high-rep lifting had tired out my legs. Gemini helped me with fueling, with pushing through when it was reasonable to do so, and with picking an appropriate trainer ride given my current training stress.
The technology’s slowly getting better too, and there’s a lot to be excited about around the corner, like probabilistic computing hardware, which could give LLMs a real boost.
So in the context of TrainerRoad I’d love that tighter integration. I can use Gemini for that right now. I tell it what my other training has looked like, how I feel, and it can provide the tweaks. But I suspect a tightly integrated LLM would be even better.
The way you’ve stated this really gets to the heart of what I’ve found but didn’t clearly state. You have to break it into tiny tasks and ask knowledgeable questions. In that context it will expand your own abilities.
I know how to train. I know how to fuel. I also have tons of data available. I am not always perfect at synthesizing all of that correctly in every situation. Last night I realized I was really just applying machine learning to my Garmin data. Garmin should offer it directly, but I can do the same thing with Gemini and Garmin Connect screenshots. The fuelling is one more abstraction from that data, but it’s still analyzing my data and outputting results based on well-established physiology. We humans aren’t so unique, and even if it’s a few percentage points off, it’s better than my own judgement in many situations.
Not sure if intentionally trolling there, but I like it!
I’m not doing anything with testosterone treatment. That’s the point. I’ll also grant that it’s quite likely I should investigate that more, given that I’m 45 and train at a fairly high level. That wasn’t really what this discussion was about, though.
I have had low testosterone when I pushed it low by not eating enough and being too lean. I know what it feels like and when I noticed that feeling I was ready to head to the doctor.
What ended up happening instead is that I pretty drastically changed how I was fuelling for shorter, more intense riding, and I feel much better. I don’t feel the need to go to the doctor and do bloodwork, despite that still being a good idea for most people, myself included.
The story here is about how Gemini steered me away from my initial hypothesis and towards eating more and resting more by analyzing my Garmin data. I’ve also talked about how AI is good at solving specific problems when you break things down. I was feeling fine at around 200 miles a week with frequent 7-9 hour rides. I changed to indoor riding and I felt like crap. I used AI to solve that small problem, and it pointed out something obvious I’d overlooked.
I think that’s exactly what happened. It was a post about using LLMs for training, with an intro about wondering if the cause might possibly be low T based on personal history, that turned into “I CAN’T BELIEVE YOU LET AI TELL YOU YOU DON’T HAVE T PROBLEMS”. That wasn’t what happened or was discussed at all.
Fair point but mine is that it can extend your knowledge further than it would otherwise stretch.
Come on now, you skipped a major point in what you were referring to, then stuck to your guns initially. It happens, and you went back and figured it out. No biggie, but I hardly think that applies to AI one way or the other.
As to the other point, though: having talked a fair bit to those involved in inventing FTP, what they shared was how repeatable it is. That’s one thing, but FTP works because humans operate within a fairly small window.
For anyone interested, CoachCat has done a whole lot of upgrading to their chatbot, even over just the one month I’ve been using their software, and it’s grand. Some of their systems are still in their infancy, and I find the Z2 workouts to be even more tedious, but the integration of strength training is nice. I was especially wowed because it asked me how I felt and which muscles were sore, suggested some stretches, and then created a plan and added it to my calendar. That was very well done. It’s still not perfect, but I’ve stopped progressing with TR, and I’m kinda sick of the whole “your body will naturally adapt to the training and make body comp changes” bullshit.
Just talked about this on the Velo Podcast in slightly more detail. Check it out when it lands on Friday.
Can you elaborate on both of these comments? I signed up for the 30-day free trial, but not sure I’ll be able to adequately evaluate these and other points in that time period. Thanks!
Sure. They have a nutrition plan for example, but it really only has about ten meals in it that are constantly rotated.
They have drastically less variety in workouts. In TR, you’re going to get hills and valleys even in your Z2 workouts. You might have hills at 170, valleys at 150, so your average wattage ends up being 160. With CC, it’s going to keep you at that wattage the entire time with no variation. I’m finding that the CC method is making me stronger, though it’s more boring.
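The averaging point checks out with quick arithmetic on the example numbers above (the segment counts are made up for illustration; only the wattages come from the post):

```python
# A TR-style Z2 workout alternating 170 W "hills" and 150 W "valleys"
# in equal-length segments averages out to the same 160 W that CC
# would just hold constantly the whole time.
tr_segments = [170, 150] * 10          # twenty equal-length segments
tr_average = sum(tr_segments) / len(tr_segments)

cc_constant = 160                      # CC holds the average throughout

print(tr_average)  # 160.0
```

Same average load either way; the difference is purely in how monotonous the ride feels.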
I do really really like the way the chatbot will adjust things and make you think about your workouts more. It just feels more holistic, which TR have more or less said they don’t really care about.
I think this is mostly because TR was built around indoor workouts and FasCat was built around outdoor workouts. I give TR full credit for creating indoor and outdoor versions of the same workout. The solution for the FasCat ones indoors is to turn off erg.
Late to the thread… but my take was more that AI (or any human for that matter) would never know if testosterone was low without a blood test.
But back to the question of whether AI can be good for training, planning, etc. I think it does a better job planning training than diagnosing medical issues (too soon?)
Ok really… back on topic. I once heard it said that there are two facets to coaching: the science of coaching and the art of coaching. The science side is where you understand human physiology and can piece together a training plan. I think AI can do this very well. On the other side… the art of coaching is the human side… the conversations, the buy-in, motivation, etc. This is more valuable in my opinion than a perfect training plan.
For the millionth time, I never asked it to nor did it diagnose low testosterone. That was part of a larger discussion.
As to the other point, totally agreed. I think there’s also another level of coaching that the AI really isn’t going to offer. This is what I talked about on the podcast for Friday. It’s good at analyzing my Garmin data, but if I truly wanted to make a leap forward in my training, I’d want a human. Actually, I’d probably want a bit of both, but either way, there’s more to coaching than the AI can offer.
If a chatbot is included in TR, can it please, please be some sort of add on, or at the very least have an easy option to turn it off. I don’t want to use it and I especially don’t want to pay more for it. I appreciate the fact that LLMs are very useful for a lot of people, but they don’t really factor into how I use TR.
Here’s the podcast where I talk about it a little based on this chat. It’s also on Spotify and Apple if you want just the audio. https://youtu.be/rlZN72ZbZhs?si=ehewDl-pYmPtjyjR