Just adding some more of my own usage here because the thread’s gotten pretty spicy.
As a software developer I’ve integrated LLMs pretty extensively into my workflow, so I routinely see how bad they are at most things, including code generation and reasoning about software. The classic CS 101 analogy for programming is trying to tell a computer how to make a peanut butter and jelly sandwich: break everything into small, correct steps or the program fails. GenAI still needs everything broken into small steps, and even then it often gets things wrong. It’s like having a knowledgeable, fast, overconfident junior developer available at all times who doesn’t learn or improve and struggles with anything truly novel.

That sounds like a bad deal, but it’s actually really helpful. The tools aren’t going anywhere and they will get better. The job is becoming less about building the walls yourself and more about oversight and architecture at every level.

Folks in my field are divided. There are junior developers, and some senior developers, “vibe coding”: trusting the LLMs too much and generating hard-to-maintain code they don’t really understand. There are those who reject the tools entirely. And there are a lot of us looking for where the tools fit as another layer of abstraction. The history of software development is, in part, the history of increasing abstraction, getting further from the metal, as it were, in how problems and solutions are modeled in code.
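To make the “small steps” point concrete, here’s a minimal sketch of the shape of that workflow. This assumes the OpenAI Python SDK purely for illustration; the helper, model name, and prompts are my invention, not a prescription:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """One narrowly scoped request per call, so each answer stays small and reviewable."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works; this one is just an example
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Instead of "build me a CSV importer" in one shot:
schema = ask("Propose a Python dataclass for a CSV with columns: id, name, joined_date")
parser = ask(f"Write a function that parses one CSV row into this dataclass:\n{schema}")
tests = ask(f"Write pytest cases for that parser, including malformed dates:\n{parser}")
# ...and you read, run, and correct each piece before asking for the next step.
```

The point isn’t the specific prompts, it’s that every step is small enough to verify before you build on it.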
I’ve also used LLMs in other areas, like figuring out how to fix something in my house from a photo, or thinking through plans for a home renovation. It’s very similar: as long as the problem is broken down into small chunks, they give me a leg up. It’s nowhere near the same as having an actual expert. If I’m not aware of the right question to ask I may not get the right answer, the information might be outdated anyway, or my question may be leading in a way I don’t realize. But it gives me better footing to ask the right questions of the experts, or ask dumb questions that yield helpful answers, or to google and find articles based on the Q&A with the genAI.
For fitness, LLMs have been immensely helpful in some work I’ve been doing on flexibility, so long as they’re trained on physiology, exercise science, and related material. I’ve done enough PT that I can give very clear feedback about what I feel in response to an exercise, and they can help me come up with more targeted routines that evolve as my strength and range of motion improve. Would an actual trained expert be even more helpful? Probably, but they’re not around to walk me through a routine whenever I want.
I’ve used them to rubber duck my meals, much like other folks here, and to go over recent fitness plans. Asking them to generate an entire plan generally doesn’t go great, but asking them to analyze an existing plan, or to generate a single workout based on an existing plan and your current state, often goes a lot better. With that approach they have a single small responsibility and a lot more context to work from.
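As a toy illustration of that pattern, one narrow ask plus plenty of context, the prompt ends up shaped something like this (every plan and state detail here is made up for the example):

```python
# Toy illustration of "single small responsibility, lots of context".
# All of the plan and state details below are invented for the example.
existing_plan = "Tue: VO2max intervals, Thu: threshold, Sat: 3h endurance ride"
current_state = "High-rep squats yesterday; legs heavy, sleep and fueling fine"

prompt = (
    f"My training plan this week: {existing_plan}\n"
    f"How I feel today: {current_state}\n"
    "Don't rewrite the plan. Suggest one workout for today and "
    "explain what to watch for."
)
print(prompt)  # paste into whatever chat model you prefer
```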
This past weekend I was second-guessing which rides to do because some new high-rep lifting had tired out my legs. Gemini helped me with fueling, with deciding when it was reasonable to push through, and with picking an appropriate trainer ride given my current training stress.
The technology is slowly getting better too, and there are interesting developments around the corner, like probabilistic computing hardware, which could give LLMs a tremendous boost.
So in the context of TrainerRoad, I’d love that tighter integration. I can approximate it with Gemini right now: I tell it what my other training has looked like and how I feel, and it suggests tweaks. But I suspect an LLM tightly integrated with the platform would be even better.