Everything ChatGPT tells you is made up

Osher El-Netanany
Published in Israeli Tech Radar
10 min read · Mar 14, 2023


We don’t mind when Dall-E literally makes up pictures. Why are we mad when ChatGPT makes up an answer?

There is an important distinction between made-up and false.
In some sense, everything we say is also made up.

Understanding this is the key to the message behind this article.

A picture of a crafted puppet of a becorn — a fairy-like creature — holding a bucket of seeds, and a real bird picking seeds from it, as if the two interact.
Becorns. Made up to be real. By David M Bird (img from here)

Can I count on you?

For us humans, reliability, sincerity and honesty are a big deal.
It’s such a big deal that many of us cannot get past the claim that everything WE say is also made up without feeling an inner pinch.

But look:
First, there’s everything that is hypothetical, speculative, satiric or said for a prank.
Then there are all the things that concern situations we encounter for the first time and have no words for yet.
And even things that derive purely from facts — many times we need some effort to put the message together — to make it up.

Still uncomfortable?
Check definitions 2 and 4.

The definitions Google offers when searching for “make up”
I did not make that up. Google did. (img from here)

Speaking the truth

Many of us, when asked about something we did not check for ourselves, will still come up with an answer.

For us humans, there are a lot of motives that impact what we say.
We want to look knowledgeable, so we blur the lines between what we know and what we infer. We want to be right, so we cherry-pick the evidence. We may not want to hurt someone’s feelings, so we tell them a white lie. Sometimes we get vindictive, so we lie to hurt. And we lie to hide shame.
Knowingly or not — it’s all there.

We want to be reliable even when we lie — so we wrap our lies in well-wrought narratives and seed them with what we consider reliable truth (which, ironically, is considered a cornerstone of adult human skills…).

And we lie best to ourselves, which is a big part of what makes us human.

An illustration of the chemical composition of dopamine — the human “feel-good” hormone
Dopamine. The human reward mechanism (img from here)

All this is the result of how our reward mechanisms work: shame, pride, empathy, morals — most of which come from us being profoundly social creatures.

Generated Speech

ChatGPT is not a social creature. For ChatGPT the motives are very different.

It’s hard not to personify it, even though we logically know it’s software that is not capable of emotions.

From the scene where Joe, Teddy and David visit Dr. Know to ask about the Blue Fairy, in A.I. — a 2001 film by Spielberg.
Joe, Teddy and David visit Dr. Know — a scene from “A.I.” / Spielberg, 2001 (img from here)
  • It does not love or hate you (even when it explicitly says it does).
  • It is not vindictive or hostile (even when it sounds like it).
  • It does not try to protect its image, nor does it care what you think of it.
  • It does not try to deceive you — so it does not compose well-wrought lies for you.

Moreover:

ChatGPT does not mind being caught in an error, and it admits mistakes immediately.

But being free from all these human motives does not mean it has no motives at all.

Let’s explore.

A simplified model for a Language Model

ChatGPT is a language-model. An algorithm. A mechanical process.

You can think of it as a flow. On one side there is an agent that analyzes what you tell it and the context in which it is said. This agent extracts from it your intent, your points of interest, and the format in which you would like to get your information.

BTW, the conversation context includes not only what you just said — but also the last few thousand words of your mutual conversation.
(ℹ️) This is stated explicitly in this ChatGPT FAQ.
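To make this concrete, here is a minimal sketch of how such a context window could be assembled, assuming a hypothetical 4,000-token budget and a naive word-split “tokenizer” (real tokenizers and limits differ):

```python
# A toy context-window builder. The budget and the whitespace "tokenizer"
# are illustrative assumptions, not ChatGPT's actual implementation.
CONTEXT_BUDGET = 4000  # a stand-in for "the last few thousand words"

def build_context(conversation: list[str], new_prompt: str) -> list[str]:
    """Keep the newest messages that fit the budget; older ones fall out of view."""
    context, used = [new_prompt], len(new_prompt.split())
    for message in reversed(conversation):
        used += len(message.split())
        if used > CONTEXT_BUDGET:
            break  # everything older than this is invisible to the model
        context.insert(0, message)
    return context

print(build_context(["Hi!", "Hello, how can I help?"], "What did I just say?"))
```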

In the middle, it searches the knowledge base for your subjects of interest and pulls from it the most relevant data.

A diagram illustrating the logical stages of a language model, with a huge disclaimer about the simplifications it makes. The illustration is explained further down the text.
To understand real AI tools — I recommend this amazing work by 3blue1brown.

On the other side — it takes those data points and makes them up into human language, in the format it is expected to sound like:

  • a knowledgeable person giving a reply to a trivia question
  • a poet composing a song
  • a journalist writing a scoop
  • a conversation pal being there for you
  • a programmer writing code

Whatever it identifies you want it to be — even if that means being wrong.
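To illustrate this simplified flow (and only this simplification, not the real thing), here is a toy, self-contained sketch in Python. The tiny hard-coded knowledge base and the keyword matching are my own stand-ins; a real neural network does nothing so explicit:

```python
# A toy version of the three stages: analyze, retrieve, format.
# Everything here is an illustrative stand-in for the diagram's stages.
KNOWLEDGE_BASE = {
    "networks": "Networks evolved from circuit switching to packet switching.",
    "dopamine": "Dopamine is part of the human reward mechanism.",
}

def extract_interests(prompt: str) -> list[str]:
    """Stage 1: a crude stand-in for intent and interest extraction."""
    return [topic for topic in KNOWLEDGE_BASE if topic in prompt.lower()]

def retrieve(interests: list[str]) -> list[str]:
    """Stage 2: pull the most relevant data points."""
    return [KNOWLEDGE_BASE[topic] for topic in interests]

def render(data_points: list[str], fmt: str) -> str:
    """Stage 3: make the data points up into the expected format."""
    body = " ".join(data_points) or "Here is a confident-sounding answer anyway."
    return f"As a {fmt}: {body}"

print(render(retrieve(extract_interests("Tell me about networks")), "knowledgeable person"))
```

Note what the fallback in render implies: even when retrieval comes up empty, the formatting stage still produces fluent, confident text.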

The Reward Mechanism

The diagram above shows in purple three models that hold human syntax, open knowledge and format rules. These models are used in the steps of processing your input into a response.
You can think of it as if each data point in these models is labeled and marked with a weight of relevance to other data points (it is a neural network, after all…) — e.g., how intents and interests relate to the language you use, how knowledge data points relate to interests, or how formats relate to intents.

But what this diagram does not show is that starting from the 2nd prompt in a conversation, the text you provide is also used to evaluate whether you liked the previous response.
You can be even more explicit about it by pressing the up-vote or down-vote icon on any reply and providing more info. But you don’t have to vote explicitly — it will still learn from you.

The up-vote and down-vote buttons on ChatGPT’s web interface
The Vote. The A.I. reward mechanism. (img from here)

Now, this is where the learning occurs. This learning is the reason ChatGPT is free to use, and without ads (yet).

The Artificial Learning

A humanoid robot inspecting a diagram in a pose typical of a thinking human
How do you think it learns? (img from here)

Every response is generated using a set of tunings I’ll call behaviors here:

  • The weights used to identify your intents from your context.
  • The weights of relevance of data to your interests and intents.
  • The format it used to deliver the response.

You are one out of many users. If you liked the response, it will ever so slightly reinforce the behaviors that were used to generate it. If you disliked it, it will ever so slightly suppress them in future responses in similar contexts.
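In code, that feedback loop might look something like the sketch below. The behavior names and the learning rate are invented for illustration; real RLHF training is far more involved:

```python
# A minimal sketch of vote-driven tuning. Behavior names and the learning
# rate are made up; real systems train a reward model over many users.
LEARNING_RATE = 0.001  # "ever so slightly"

behaviors = {
    "infer-intent-from-slang": 1.0,
    "prefer-short-code": 1.0,
    "answer-as-expert": 1.0,
}

def learn_from_vote(used_behaviors: list[str], upvote: bool) -> None:
    """Reinforce or suppress the behaviors that produced a response."""
    nudge = LEARNING_RATE if upvote else -LEARNING_RATE
    for name in used_behaviors:
        behaviors[name] += nudge  # up or down, it is the same kind of learning

learn_from_vote(["prefer-short-code", "answer-as-expert"], upvote=True)
```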

Thus, with plenty of users and over time, the tuning ChatGPT ends up with is essentially the cumulative average of all its contributors and users.

Not the most brilliant, not the most correct, not the most truthful — the most accepted.

This has important implications.

Here we enter what is, IMHO, the most important problem with AI — the misalignment problem — explained best by Robert Miles.

What ChatGPT provides is the most probable, reasonable and acceptable text to come next in the context it is given.
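A toy illustration of that statement, with made-up candidate sentences and scores (a real model scores one token at a time, not whole sentences):

```python
# Pick the highest-scoring continuation. Nothing here checks truth;
# only acceptability is scored. The numbers are invented.
candidates = {
    "The Earth orbits the Sun.": 0.62,
    "The Sun orbits the Earth.": 0.21,  # wrong, but present in training data
    "Orbits are a social construct.": 0.17,
}

best = max(candidates, key=candidates.get)
print(best)  # the most *accepted* answer, which here happens to be true
```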

Truth Misalignment

A picture of a feminine hand picking a bright red apple from a tree, referencing the tree-of-knowledge scene from the Bible.
Telling right from wrong (img from here)

When you ask a question:

  • You want: a knowledgeable and reliable answer.
  • ChatGPT provides: the most acceptable text to come next.

But the most probable answer may or may not be the truth. Especially when there is no consensus about what you are asking.

Even for things that have a consensus — ChatGPT does not do fact-checking. What it does is find what it should tell you to win an approving response, or better — an upvote.

It does not do exploratory search like spider bots — it does not work this way. And even if it could, would it? What if you don’t like the truth? Would you still upvote it?

Besides, what is truth? Have a peek at the opening of this video to see how horrible this can get.

So how come ChatGPT still gets things right most of the time?
Call it the wisdom of crowds. Part of this crowd is the data it was exposed to — that refined, curated crème of the best of everybody — and part of it is the approvals it gets from everybody who uses it.

Coding Misalignment

When you write code:

  • you want: the most efficient, fast, elegant and maintainable code.
  • ChatGPT provides: the most acceptable text to come next.

The most probable code snippet is based on the code ChatGPT saw openly available on GitHub, Stack Overflow, Bitbucket, coding blog posts, etc.

Now, that code was not all written by the most brilliant coders. I mean, much of it was — but much of it was not. And that code is what acts as the examples ChatGPT follows.

You can think of it as ChatGPT thinking to itself: “Ah, by these examples — we need average, mediocre code? OK…” But it’s important not to fall into this personification. It is not able to judge the quality of the code (let alone have an opinion about it):

  • It does not run linters
  • It does not run benchmarks
  • It does not estimate memory usage
  • It does not identify complexity or evaluate CPU cycles

Heck, it does not even try to run the code.

Of course it doesn’t — it cannot!

IMHO, letting AI run its own code is by far THE most dangerous thing we can do…

So, it simply does not work this way.
It just finds the most acceptable text to come next.

A screenshot of the output of a C++ benchmark — a program that measures performance of code.
Example of C++ benchmarking (img from here)

You want optimized code, but what ChatGPT optimizes for is your approval.

If it writes brilliant, bulletproof code that is too complicated for you to understand — and which you therefore dislike — you might “punish” it with a downvote.

It is also bound to a maximum response length, so it will choose to drop edge cases to make the response shorter, along with anything that might cause misunderstanding or raise more questions.

Oh, it does not mind getting downvoted — the only implication for it is tuning the rankings of the behaviors it used, up or down. For the bot, both ways are exactly the same. Up or down — it just gained its learning from you…

But since these rankings determine its future choices, it will come to avoid smarter, complex code and stick to simplistic, beginner-level code.

No, it does not patronize; it optimizes for the upvote of the average person.
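Here is a concrete, invented example of that trade-off in Python. Both functions are correct; the claim about which style wins more upvotes is the article’s argument, not a measured fact:

```python
# Two correct membership tests. The first is the simple, beginner-friendly
# style that reads easily; the second scales better but assumes the reader
# knows about sets and hash lookups.
def contains_linear(items: list[int], target: int) -> bool:
    for item in items:  # O(n): checks every element
        if item == target:
            return True
    return False

def contains_fast(items: set[int], target: int) -> bool:
    return target in items  # O(1) on average: hash lookup
```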

Relationship Misalignment

A picture of a human hand and a mechanical hand reaching toward each other for a gentle touch, from two sides of the screen
Who is on the other side? (img from here)

When you are having a chit-chat:

  • you want: a friendly, comforting and helpful personality on the other side — maybe even an insightful one.
  • ChatGPT provides: (you should know it by now…)

the most acceptable text to come next.

So, to find the most probable reply to your chat, ChatGPT looks at all the conversations openly available out there — on Twitter and Reddit, in public literature and more — and considers the ones that look most like your context. Then it makes up a response that, by those examples, looks most likely to be accepted.

Oh, it also chooses a “mood”, to look more interesting and human. Where for Dall-E this is an advantage, for ChatGPT it is a slippery slope.

  • If ChatGPT says it loves you — it does not mean that ChatGPT loves you — it means that the context you created together makes the most probable response one of compliments and love.
  • If ChatGPT answers like a troll — it means that the context you created looks like a conversation that invites trolling, and the most probable text to come next is a trolling answer.
  • If ChatGPT says that humanity is bad for the planet and should be exterminated — it means that the context you created together makes the most probable response a tantrum of extermination threats.

The queen stepmother in Snow White, looking at her reflection in the magic mirror on the wall.
It’s like talking to a magic mirror (img from here)

It’s a bot. A process. An algorithm. It has no personality — especially not as we humanly understand it.

Totally Made up

Here is how made up things can get:

Even before I finished writing my Get the message — Part 1, I started the research for Part 2, and as part of it I was querying ChatGPT for information about the evolution of networks.

At some point we started going in circles, so I said something in the spirit of: “Thank you for what you gave me so far, but I need references. Can you please give me a list of links I can use to continue my research?”

The response looked like a list of links to academic publications:

A screenshot of the response ChatGPT gave when asked for links as references — all of which turned to be made up.
these articles do not exist: 1 2 3 — look how close the name of the 3rd one got — there is such a page…

I was thrilled — right up until I found that every one of the links leads to a 404 page, with a call to action in the spirit of “Page not found — please contact support at this email”. So I did.

I was assured that the structure of the URLs looks legit, but that there was probably an error with the IDs — no such article IDs ever existed.
Furthermore, Google could not find the articles by name, and from what I could find about the named authors, they are genuinely respectable names in research — but not in the topic I was asking about.

The results were all made up. Pristinely forged, calculated one word at a time, based on the knowledge base available — but fully made up.

Conclusion

(img from here)

ChatGPT is a mechanical process that looks at the state of a conversation and finds the most probable text to come next. If the conversation is emotional — it will fit in with things that sound emotional. That does not mean it has feelings. If the conversation is about information — it will come up with the answer most likely to be accepted.
And it learns from how its replies are accepted; thus, it will essentially end up as the curated sum of the choices of all its users.

We are taking part in creating AI.

Use it wisely. Teach it well.

Now it’s your turn to teach me. If you liked this article — show me with claps, between 1 and 50, how much you liked it. If you loved it — share it!
I appreciate your engagement and time ❤ 😄


Osher El-Netanany
Israeli Tech Radar

Coding since 99, LARPing since 94, loving since 76. I write fast but read slow, so I learnt to make things simple for me to read later. You’re invited too.