Consensus - ChatGPT for Science

Potentially an interesting tool, given the natural interest in physiology and endurance training in this forum.

Ask a scientific question and Consensus AI will deliver a summary of top papers. papers. Naturally, powered by GPT-4.

Registration required.

1 Like

Did you intentionally write papers twice? Sorry…couldn’t resist.



I don’t have an account. Out of curiosity, what does it say if the question is ‘Is pyramidal training better for recreational athletes?’

1 Like

No useful result.

*Not enough relevant results. Try asking a well-researched question in the format ‘Does x cause y?’*


(no - not intentionally - lol)

1 Like

Unclear how relevant this is, but an attorney in my office has been playing around with ChatGPT for motion writing, and they reported that it does a great job with one major caveat: it also generates its own made-up case law, making it useless.

If it does the same thing with summaries of science research, it would be similarly useless.

This could be total user error on my colleague’s part, but I thought it was interesting. Sooner or later it’ll definitely replace a good portion of my work, which will be really nice.


No doubt that better prompting also leads to better answers. That being said, ‘AI hallucination’ is very much a feature of current-generation Large Language Models.

By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with a Google executive identifying hallucination reduction as a “fundamental” task for ChatGPT competitor Google Bard.[6][23] A 2023 demo for Microsoft’s GPT-based Bing AI appeared to contain several hallucinations that went uncaught by the presenter.[6]

Hallucinations are not well understood. This is going to be quite a ride…

The hallucination phenomenon is still not completely understood.[2] Therefore, there is still ongoing research to try to mitigate its occurrence.[25] In particular, it has been shown that language models not only hallucinate but also amplify hallucinations, even models that were designed to alleviate the issue.[26]

I’ve been playing with Google Bard a little, and it’s interesting to ask it things that may or may not be controversial and that I already have an opinion about. Examples: ‘Is a low-carb diet good for cyclists?’ or ‘What is the ideal weight for a cyclist who is 6’4”?’

It’s kind of fun. Its answers read like common knowledge, despite what some sections of the internet would have you believe, but if I were asking about something I wasn’t familiar with, I’m not sure how much I would trust the results.

I guess it’s an interesting thing to play with, but I think once people start using this stuff for work or to replace people, we are going to see news stories about glaring and obvious errors.

1 Like

Hahaha :smiley:


Easier to try GPT SCI

No registration is required.

ChatGPT did my ramp test, but now I have to deal with a 678 FTP.


Very early days still for these Large Language Models and the tools being developed to leverage them. They will become more useful over time, and the pace of development could be very impressive.

That said, the roadmap for ChatGPT doesn’t seem to involve actually pushing the pedals yet, so we humans will still have something to do in order to make our bikes move more quickly for longer…

I’ve tried ChatGPT for various technical things, like writing code for network routers, and it writes a load of plausible-looking stuff that would fool anyone who didn’t know it was basically talking a load of crap.


I do a ton of technical report writing for my job (environmental consultant). One of my colleagues has been obsessed with ChatGPT, saying it’s saving him tons of time, etc. Yesterday he asked me to review a discussion section for a report. It was absolute garbage, completely unacceptable to send to a client. When I asked him, he came clean and said he had just used ChatGPT. Same experience as you: it provides lots of plausible- and informed-sounding sentences, but if you have deep knowledge of the subject matter, it’s trash.

1 Like

It does a poor job drawing conclusions and summarizing, but if you have it elaborate on each point, one by one in separate prompts, and provide it with some of your own expertise, it often figures things out and aligns. It can be extremely powerful used this way.

It needs better prompting and better followup prompting.

For example, if you tell it “write me some database code to do blah blah blah”, it’ll spit out something within the token limit that may or may not be shit.

But if instead you say, “Write me a MySQL query using PDO prepared statements in PHP. Here’s my table schema: ______. Validate the inputs with PHP’s filter_var,” it’ll actually create some very usable code that can save you hours every day.
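For anyone curious what that kind of prompt tends to get you back: here’s a rough sketch of the same two ideas (a prepared statement plus input validation), written in Python with sqlite3 as an analogue of the PHP/PDO pattern described above. The table and column names are made up for illustration.

```python
import sqlite3

def find_user_by_email(conn, email):
    # Crude validation before the query, standing in for PHP's
    # filter_var(..., FILTER_VALIDATE_EMAIL)
    if "@" not in email or " " in email:
        raise ValueError("invalid email address")
    # Parameterized query: the driver binds the value, so user input is never
    # spliced into the SQL string - the same idea as a PDO prepared statement
    cur = conn.execute("SELECT id, name FROM users WHERE email = ?", (email,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com')")

print(find_user_by_email(conn, "alice@example.com"))  # (1, 'Alice')
```

The point is the shape of the answer, not the language: when the prompt names the technique (prepared statements, input validation) and supplies the schema, the model has far less room to improvise.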

1 Like

This is the best explainer I’ve seen for LLMs for people who don’t have a technical (read: machine learning) background: ChatGPT Is a Blurry JPEG of the Web

And yes, “AI hallucination” is a thing. It would be better if we called it what it is: LLMs making up information based on statistical relationships between words. But this is where we are right now…
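To make the “statistical relationships between words” point concrete, here’s a toy sketch: a tiny bigram model trained on a made-up corpus (nothing like a real LLM in scale or method) that generates fluent-looking word sequences with no notion of whether they’re true.

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in a tiny corpus,
# then generate text purely from those statistics. The output looks fluent
# but has no grounding in truth - a miniature of why LLMs can "hallucinate".
corpus = ("the cyclist rode the climb . the cyclist won the race . "
          "the race was hard .").split()

bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(bigrams[word])  # pick a statistically plausible successor
    out.append(word)
print(" ".join(out))
```

Run it a few times with different seeds and it will happily assert things the corpus never said, like the climb winning the race.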

Totally agree.

I am using ChatGPT within my field of work. ChatGPT can certainly produce a lot of plausible-sounding garbage without the right prompting.

With better and more specific prompting (including the use of system personas), the quality improves drastically. But asking the right questions in a field where you don’t have much knowledge can be tricky.
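For anyone unfamiliar with “system personas”: it just means seeding the conversation with a role-setting system message ahead of your actual question. A minimal sketch, using the common role/content message convention for chat-style LLM APIs (the persona text is invented, and the actual client call depends on your provider):

```python
def build_messages(persona: str, question: str) -> list[dict]:
    # A "system" message sets the persona before the "user" question;
    # the resulting list is what gets sent to a chat-style LLM API.
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ]

messages = build_messages(
    "You are a senior environmental consultant writing for a client report. "
    "Be precise, use only the data supplied in the prompt, and flag uncertainty.",
    "Summarise the groundwater sampling results pasted below.",
)
print(messages[0]["role"])  # system
```

The persona narrows the space of plausible answers in the same way the detailed coding prompt earlier in the thread does.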