On mass-market generative AI

“ChatGPT can make mistakes. Check important info.”

“Claude can make mistakes. Please double-check responses.”

“Gemini can make mistakes, so double-check it.”

ChatGPT wants me to check. Claude and Gemini want me to double-check. Anything these tools tell me could be wrong, so I, the end user, need to check things.

Where should I go to check things? Perhaps I could Google something to double-check it. The first thing I’ll encounter: Google’s AI Overview, which will tell me that “AI responses may include mistakes”.

I could find another source on Google: an article that specifically addresses my question, one a writer has painstakingly constructed to please Google’s ranking algorithm. It is “search engine-optimized”.

I could see if there’s a book on the topic. I could go to the library—assuming that my local, state, or federal government continues to fund libraries—and I could find a book. I could use what they used to call a primary source to check. 

But I’m not going to do any of these things. In fact, I’m not going to check at all. These warnings, written in six-point font, adjacent to the text input field, are designed to be ignored. It doesn’t matter that these tools sometimes get things wrong; they are designed that way, and it’s just a given risk of using them.

These tools get things wrong sometimes. This makes them unreliable sometimes.

Imagine a wonderful restaurant. Everyone raves about the restaurant, and says it’s among the most incredible restaurants in the world. There’s just one thing you should know before you go there. Every once in a while, the chef will make a few dishes that will give you the worst food poisoning imaginable. You won’t be able to sit up straight for weeks. Everything will hurt and you’ll question your judgment, fighting through tears, hunched over the toilet bowl. But otherwise, it’s a phenomenal restaurant, and you should spend your disposable income eating there.

I suppose it boils down to risk tolerance. How much of a risk are you willing to take? Will you check your food before eating it for any signs that it’s spoiled? Would you know what to check for? 

“These models are just going to keep getting better.” I keep hearing this sentence. Will they ever get things right 100 percent of the time? On the topic of AI dominance, artificial general intelligence, or superintelligence, this is the only question that matters. Will they sometimes be wrong, but continue to masquerade as all-knowing?

Sometimes, journalists or AI companies will use the word “hallucinate”. The warnings on the tools themselves use the phrase “make mistakes”. Which one is it? When we hallucinate, we see something that isn’t there. Is that what these tools are doing? I don’t think so. And I don’t think they’re making mistakes, either. When we make a mistake, we recognize that something’s gone awry. There is an awareness of the right answer and the wrong answer. ChatGPT, Claude, and Gemini do not have this awareness.

Then there’s the term “confabulation”, used in the context of Alzheimer’s disease or memory loss, when individuals recall false information. It is a symptom of cognitive dysfunction.

Even “bullshitting” feels wrong, because a bullshitter knows when he’s bullshitting you.

These tools are great at some things. They organize information for me. They provide me with alternate phrasing. I use them quite a bit. But I wish that they could be honest with me.
