Thinking, Rewired

Humans and AIs think in parallel, but never the same

Do machines think? Maybe the better question is: do machines think the way humans do? I like an old quip: people are smart and slow, computers are dumb and fast. That’s because computers need programs; they can’t do a thing without them. People write programs, but until recently that has been a laborious, time-consuming process. That’s not good enough to answer the thinking question.

Both human brains and computers have neural networks. Humans have biological neurons, while machine neurons are manufactured. These networks process information and can learn over time. It’s the way these systems process information that distinguishes one from the other. The answer to our question can be found in a few key areas of thought.

Learning

Learning is a big one. Human learning occurs because of a property called neuroplasticity: the brain’s ability to reorganize its neural networks in response to experience. When we learn something new, the neurons in a particular part of the brain fire together in a pattern. The more frequently and strongly the neurons fire in this pattern, the deeper the learning. Brains learn fast; even experiencing something once can create a lasting memory.

On the other hand, LLMs require training on massive datasets. The machine processes millions of text examples and applies algorithms to predict an output. This is where speed counts. A human could never learn this way; our brains aren’t wired for that kind of speed. But we can pick up a new word and understand its meaning in context on the first or second hearing, while an LLM needs thousands of exposures to the same word before it can use it properly. And after it has learned the word, the memory is fixed; it won’t evolve. Human language is constantly evolving, and our learning is dynamic.

Processing

LLMs process information sequentially. Text is broken into tokens, which pass through the layers of the trained model. The model produces a score for each candidate token, indicating which one is most likely to come next and complete the pattern. When it sees “The cow jumped over the…” it will insert moon, unless it thinks Mars is more appropriate.
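That scoring step can be sketched with a toy example. The tokens and numbers below are invented for illustration; a real model scores tens of thousands of candidate tokens, but the convert-scores-to-probabilities-and-pick step works the same way.

```python
import math

# Hypothetical raw scores (logits) for the next token after
# "The cow jumped over the..." -- invented numbers, not real model output.
logits = {"moon": 4.2, "Mars": 1.1, "fence": 0.3}

# Softmax turns raw scores into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding: emit the most probable token.
next_token = max(probs, key=probs.get)
print(next_token)  # → moon
```

With these made-up scores, “moon” wins; nudge the Mars score high enough and the model would happily send the cow to Mars instead.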

We don’t understand word by word; we grasp ideas. By linking knowledge to experience, we come up with meaning. Our brains use something called parallel distributed processing to understand concepts. Billions of neurons fire at the same time, in parallel, with processing distributed over regions specializing in different tasks like hearing, speech and smell. Or, as the comedian Ed Wynn said, “I’d rather have a bottle in front of me than a frontal lobotomy.”

In short, human brains use context to trigger memories, whereas LLMs complete the next step in a recognizable pattern.

Memory

Memory is a big part of thinking. There are three kinds of human memory: sensory, working (or short-term), and long-term. Human memory associates perceptions with meaning, context and emotions. Every time I hear Nancy Sinatra’s “These Boots Are Made for Walkin’,” I think of the time I drove to Florida for spring break and the song was on the radio the entire time.

LLMs rely on training, where everything is encoded and assigned weights. When you prompt an LLM, it places that input into a limited context window, a temporary holding place for memory. Once the context window is filled up or no longer needed, the information is totally forgotten.
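A fixed-size buffer captures the idea: once the window is full, the oldest material simply falls out. The window size and tokens here are made up for illustration; real context windows hold thousands of tokens, but the forgetting works the same way.

```python
from collections import deque

# Toy context window holding at most 5 tokens (real windows are far larger).
window = deque(maxlen=5)

for token in ["The", "cow", "jumped", "over", "the", "moon"]:
    window.append(token)  # once full, the oldest token is silently dropped

print(list(window))  # → ['cow', 'jumped', 'over', 'the', 'moon']
```

Note that “The” is gone: for the model, nothing outside the window exists.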

Reasoning

You might think that thinking and reasoning are the same thing. After all, LLMs are designed to resemble human reasoning. The output might seem similar, but when you look at how brains and machines operate, there are differences.

Humans employ both intuitive and logical reasoning, what Daniel Kahneman called System 1 and System 2 in his hugely influential book “Thinking, Fast and Slow”. System 1 is fast, intuitive thinking; System 2 is slow, deliberate reasoning. Cave dweller sees a charging saber-toothed tiger: fight or flight. Cave dweller sees raw meat and considers what to prepare for dinner.

LLMs are System 2 heavy. They mimic reasoning through trained token sequences. Because they are fast, they may appear to be capable of intuitive thinking. It’s an illusion. LLMs don’t reason the way people do. They are calculating probabilities, making a plausible guess based on training data. They lack genuine understanding of logical rules.

When an LLM gets something right, it’s not because it understands step-by-step logic. It’s right because the token sequence happens to align with logic. When it gets it wrong, we say it is hallucinating.

LLMs often generate incorrect information with supreme confidence and occasional obstinacy. When humans hallucinate, they are seeing visions or hearing voices that aren’t really there. That’s something altogether different. A more accurate comparison would be confabulation, when a person unknowingly creates a false memory or explanation. It’s not a lie; we believe it to be true. Think of that childhood event you remember so vividly that your mother told you never happened. This happens because our imaginations fill in missing details.

In effect, people and computers get things wrong in different ways.

Embodiment

The most elemental difference between human and machine thinking is also the most obvious: embodiment. Humans are cognizant physical beings whose understanding, thoughts and behaviors are grounded in the real world. LLMs are disembodied. They learn from text; their world is software stored on servers. They will never experience the thrill of wave surfing on a hot summer day. They don’t even understand wet, except maybe as the short circuit caused by spilling coffee on your keyboard.

And this might be the biggest reason humans won’t be replaced by AI: the LLMs simply have no idea where they are unless we tell them.

AI doesn’t think like humans, but the kind of thinking it does is very powerful. We find ourselves in a world with two distinctly different kinds of thinking. Get used to it.

Thanks for reading Byte Sized Tech Tips! This post is public so feel free to share it.
