
A Trusted Advisor

Don’t Worship the Bot; It’s Not an Oracle

People whine about Artificial Intelligence hallucinations. They ridicule the mistakes their pet bot makes and point fingers when an answer leads them down a rabbit hole. Sure, you can get into deep trouble taking AI outputs as gospel. But why do that? The distress comes from unrealistic expectations mixed with a heaping helping of poor prompting technique: asking AI for answers rather than engaging it in a dialog.

It’s well known that Large Language Models (LLMs), and AI tools in general, make mistakes. They hallucinate. They can’t be fully trusted, at least not the way you trust a calculator or a well-sourced encyclopedia. That doesn’t mean they aren’t trustworthy in the right contexts.

Context is critical. What type of work are you doing? LLMs cannot replace search engines. They’re not built to give reliable answers to deep research questions in science, engineering, or law. Never use them for critical decisions in finance, safety, or healthcare.

In fact, asking AI to give final answers in anything (including Jeopardy) is risky.

Here’s a personal experience to illustrate my point. I am dealing with a chronic medical problem, and AI has been invaluable in figuring out what is going on and how to deal with it. Here’s the skinny (it’s kind of gross; leave now if you’re fixated on good manners). I have been coughing for a year and a half. I came home from my Thanksgiving holiday in 2023 and started coughing. It went on day and night, accompanied by a lot of reflux, mucus, and spitting up.

It took six months to get an initial diagnosis: a hiatal hernia, a condition in which the stomach pushes up through the diaphragm and stomach juices enter the esophagus as non-acidic reflux. Irritation from the reflux was causing my cough. It was bad; one night I coughed non-stop for four hours. Eventually they had to repair the hernia, which, it turns out, is major surgery.

It took three months to recover from the operation, and the surgery fixed about eighty percent of the cough. I would cough non-stop for only half an hour at night, plus frequently during the day. Another round of doctor appointments brought a sinusitis diagnosis, for which I took antibiotics. The infection cleared up, but the cough didn’t go away.

At this point I was beyond frustrated. My Primary Care Physician (PCP), who has been treating me for 30 years and saved my life at least once, was also baffled. He suggested I go to a university medical center for a full workup. That didn’t go well; no new ideas.

I had to figure this out by myself, and my go-to resource was ChatGPT 4o.

I typed my history (in much more detail) into a prompt and asked the LLM to question me about it. I did not ask for an answer. The technique is close to flipped interaction, a prompting pattern I’ve written about.
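For readers who drive their LLM sessions from a script, the setup can be sketched as a message list in the generic chat-completion format. This is a hedged illustration only: the instruction wording, the function name, and the condensed history are my inventions, not the actual prompt from my session.

```python
# A minimal sketch of a flipped-interaction prompt, assuming the generic
# chat-completion message format (role/content dicts). The system wording
# and the condensed history below are illustrative, not the real session.

def flipped_interaction_messages(history: str) -> list[dict]:
    """Build messages that ask the model to interview the user
    instead of delivering a final answer."""
    system = (
        "Act as a consultant, not an oracle. Do not give a final answer. "
        "Ask me one clarifying question at a time based on my history, "
        "and maintain a short list of possible explanations that you "
        "refine as I answer."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"My history:\n{history}"},
    ]

msgs = flipped_interaction_messages(
    "Chronic cough for 18 months; non-acidic reflux; hiatal hernia "
    "repaired surgically; sinusitis treated with antibiotics; cough persists."
)
print(msgs[0]["role"], "->", msgs[1]["content"].splitlines()[0])
```

The essential move is in the system message: the model is told to ask, not answer, which keeps the human in the loop as the source of observations and the judge of what fits.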

It was a long dialog; it went on for days. I noted every detail: how I reacted to each medication, what helped a little, what didn’t help at all, and what changed on good or bad nights.

ChatGPT suggested five possible conditions that aligned with my presentation. I continued to report what was going on. Without ruling out any of the candidates, the dialog made some look like more likely culprits than others.

I asked ChatGPT to summarize the entire session in a letter to my PCP, which I sent to my doctor. He got back to me the next morning, scheduled an appointment, and we discussed the findings.
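The wrap-up step above amounts to one more prompt at the end of the session. A minimal sketch, with wording I made up for illustration rather than the prompt I actually used:

```python
# Sketch of the wrap-up prompt: condense the whole dialog into a letter
# a physician can scan quickly. The wording is illustrative only.

def summary_letter_prompt(transcript: str) -> str:
    """Ask the model to summarize the session as a letter to a doctor."""
    return (
        "Summarize our entire discussion as a one-page letter to my "
        "primary care physician. Include the symptom timeline, the "
        "medications tried and their effects, the candidate conditions "
        "we discussed (most to least likely), and open questions.\n\n"
        "Transcript:\n" + transcript
    )

prompt = summary_letter_prompt("Day 1: reported cough history...")
print(prompt.splitlines()[-1])
```

The point of the structure is that the letter ends up organized the way a doctor reads, not the way the chat unfolded.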

I’m still coughing. I do have a new plan. Apparently, the constant irritation from the reflux, sinus condition, and abundance of mucus could have created a hypersensitivity of the nerves in my throat and larynx. My symptoms closely align with this diagnosis. I see another specialist soon for an opinion and treatment plan.

A friend of mine who has done a fair amount of consulting work defines his job as someone who is paid a lot of money to give advice that is ignored.

The point here is that I used ChatGPT as a trusted advisor, a consultant, not an oracle. I didn’t take its suggestions as gospel. I checked its findings with my doctors. I read online about the condition. That’s how it’s supposed to work. AI contributes to solutions. By itself, it isn’t a solution. Like any expert you consult, its advice needs to be evaluated and verified. The decisions, and their consequences, are yours alone.



Byte Sized Tech Tips offers short and focused blog posts and videos aimed at AI users while also exploring the meta world of human thinking opened by the rapid development and deployment of AI tools. It helps to be comfortable with the idea of machines that can think, or closely approximate thinking. Subscribe, Like, Comment – you know the drill.

Byte Sized Tech Tips is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Thanks for reading Byte Sized Tech Tips! This post is public so feel free to share it.
