AI Chatbot for discussing training, planning, etc

Is anyone actually arguing this here? LLMs are a tool. Like all tools, your results depend heavily on your skill with the tool. Are they great for every use? Nope, absolutely not. They’re never a substitute for using your brain. But they can be really handy for a lot of things when you know when and how to use them.

Kitchen knives can be super sharp and make everyday cooking tasks significantly easier, but you don’t swear them off because your neighbor somehow managed to cut off his thumb.

I don’t think he’s arguing. That’s the actual response from the LLM.

Presenting fallacies as facts is a known bug/feature of LLMs, aka AI hallucinations or AI slop.

In the end, the AI companies don’t care what’s presented, as long as you keep generating them cash by clicking their affiliate links.

Is it? I thought it might have been sarcasm?

We’re so far through the looking glass with this stuff I’m not even sure which way is up. Otherwise, pretty much agreed with you on all points. Especially on Altman, he’s another Zuckerberg waiting to happen: focus on engagement above all else, and consequences are for other people.

This is a tad pessimistic.

Agreed, the current level of investment flowing into AI (more precisely, into building data centers) looks like a bubble. Too much capacity with no clear return on investment.

At the same time, the long-term transformation is real. Whether it is the currently fashionable LLM/SLM approach or something else later on does not really matter.

I am working on best practices for AI-powered development teams. My guess is that senior developers will mostly work at the architectural level: defining constraints, writing guardrails (expert knowledge) for agents, and stopping hallucinations and general slop. Mid and junior developers will mainly use AI tools to generate code, tests, documentation, etc., and over time will also grow into architectural roles. AI does not remove the career path; it shifts where the work happens.

How this ties to the current topic: coaches document expert knowledge, and athletes use an app where a capable agent interprets their ambiguous input, passes it through a simpler rule-based expert system, and presents it in a structured, easy-to-use form. If you use current web-based conversational AI, then it is entirely up to you to be the expert and ask the right questions.
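
For concreteness, here is a minimal sketch of that split in TypeScript: the agent’s only job is to turn ambiguous athlete input into a structured check-in, and a deterministic pass over coach-authored rules decides what to do with it. Every type name, rule, and threshold below is made up for illustration, not taken from any real product.

```ts
// Hypothetical sketch: coach-documented guardrails behind an agent.

// Structured form the agent is asked to extract from ambiguous input
// like "legs felt heavy, slept badly, big interval day on the calendar".
interface AthleteCheckIn {
  fatigue: "low" | "moderate" | "high";
  sleepQuality: "good" | "poor";
  plannedIntensity: "recovery" | "endurance" | "threshold" | "vo2max";
}

// One coach-authored rule: a guardrail the agent cannot override.
interface CoachRule {
  description: string;
  applies: (c: AthleteCheckIn) => boolean;
  recommendation: string;
}

// Expert knowledge documented by the coach as plain, auditable rules.
const coachRules: CoachRule[] = [
  {
    description: "High fatigue plus poor sleep: no hard intervals.",
    applies: (c) => c.fatigue === "high" && c.sleepQuality === "poor",
    recommendation: "Swap today's session for an easy recovery spin.",
  },
  {
    description: "Moderate fatigue before VO2max work: reduce volume.",
    applies: (c) =>
      c.fatigue === "moderate" && c.plannedIntensity === "vo2max",
    recommendation: "Keep the intervals but cut the number of sets.",
  },
];

// Deterministic rule pass: no hallucination can sneak in here.
function evaluate(checkIn: AthleteCheckIn): string[] {
  const hits = coachRules.filter((r) => r.applies(checkIn));
  return hits.length > 0
    ? hits.map((r) => r.recommendation)
    : ["No guardrail triggered: proceed with the planned session."];
}

// What the agent might extract from the ambiguous input above.
console.log(
  evaluate({
    fatigue: "high",
    sleepQuality: "poor",
    plannedIntensity: "vo2max",
  })
);
```

The agent stays useful for parsing messy human input, while anything safety-relevant lives in rules a coach can read and audit.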

I don’t want to discourage you from fighting the good fight, but a look back over the past 10 years shows that truth doesn’t matter in the click economy, only what drives the most eyeballs.

You speak of guardrails, but every guardrail that has existed in the name of maintaining some semblance of reality has been systematically demolished so that nobody can be held accountable. The goal of the LLMs is no longer to lead people to the right answers; it’s to be convincing enough that the user doesn’t go off to a competitor. Again, noting that truth doesn’t necessarily need to be a part of that cycle.

Meta and Twitter/X killed off their entire content moderation teams not because AI replaced them, but because they no longer cared.

We are talking from different perspectives:

  • our common understanding is that AI generates something from our input
  • your opinion comes from conversational AI that is driven by company prompts to keep you engaged, i.e., it provides the company a return on its investment if it manages to cash in on your time somehow
  • my opinion comes from using it as a generator driven by my expert prompts to get specific results, i.e., it provides a return on my investment of time if it saves me time on specific tasks

Swapping out my glib hat for my serious hat, I think this is a misreading of the market. There are several tiers of models, and Llama/Grok are not in the top tier, with Grok mostly a punchline, caught between MechaHitler and models that are overfit for benchmarks and bad for most actual tasks.

The real money isn’t in clicks, it’s in the enterprise, and that’s where the top tier models (Gemini/Claude/GPT) are focused. This environment requires accuracy and repeatability to whatever degree possible in the non-deterministic world of LLMs. Google and Anthropic in particular are highly focused on solid models that aren’t going to start spouting crazy shit in medium to high risk environments.

OpenAI, I think, is a weird one. They have the best brand recognition, but they’re starting to fall behind, and as overleveraged as they are from building data centers, they’re coming for the spare change in your couch cushions along with any other revenue they can scare up. They’ll be the first to actually start doing ads, pushing vendor links, and the other stuff you’re concerned about, I suspect.

I’m skeptical of the value of using an LLM for training purposes, but this thread gave me an excuse to sit down and learn about MCPs, so I built one that combines activity data from Intervals.icu, sleep and recovery from Whoop, and upcoming workouts from TrainerRoad, and makes them available to an LLM. It’s very rough around the edges, but it seems to work.
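
In case anyone is curious what the tool side of an MCP server looks like, here’s a rough sketch of one such tool using the official TypeScript SDK (@modelcontextprotocol/sdk). The tool name, env vars, and the Intervals.icu endpoint and auth details are illustrative assumptions from memory of their docs, not lifted from my actual code:

```ts
// Hypothetical sketch of a single MCP tool; verify the Intervals.icu
// endpoint and auth scheme against their API docs before relying on it.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "training-data", version: "0.1.0" });

server.tool(
  "get_recent_activities",
  // The LLM client fills in these parameters when it calls the tool.
  { days: z.number().int().min(1).max(90).default(7) },
  async ({ days }) => {
    const oldest = new Date(Date.now() - days * 86_400_000)
      .toISOString()
      .slice(0, 10);
    // Assumption: Intervals.icu uses HTTP Basic auth with the literal
    // username "API_KEY" and your key as the password.
    const auth = Buffer.from(
      `API_KEY:${process.env.INTERVALS_API_KEY}`
    ).toString("base64");
    const res = await fetch(
      `https://intervals.icu/api/v1/athlete/${process.env.INTERVALS_ATHLETE_ID}/activities?oldest=${oldest}`,
      { headers: { Authorization: `Basic ${auth}` } }
    );
    if (!res.ok) {
      return {
        content: [{ type: "text", text: `Intervals.icu error: ${res.status}` }],
      };
    }
    // Hand the raw JSON to the model as text; it's good at digesting it.
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Expose the server over stdio so a desktop LLM client can launch it.
await server.connect(new StdioServerTransport());
```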

On what platform did you build your MCP?

It’s a TypeScript app, hosted on Fly.io. I didn’t add any multi-tenancy to it but the app is Dockerized so if you’re a software dev it shouldn’t be too difficult to fork it and run it locally or host it somewhere else with your own credentials: https://github.com/gesteves/domestique

My thing is already obsolete :joy:

My use of it as a more general tool, most often for fuelling and general life as it relates to training, probably won’t change much even with TrainerRoad getting better.

On Christmas Day I took a picture of a platter of food I was sharing with the fam for lunch, and it picked out specific options on that platter I should focus on, with an eye to the ride I had planned in the afternoon. Pretty damn cool, and a huge upgrade in my life from the years I spent being incredibly rigid about what I ate because that was the only way to control fuelling.