AI tools have been improving at an incredible rate lately. ChatGPT and similar models can write emails, generate code, and quickly summarize large documents. But there's a problem: many people mistake AI's ability to mimic human output for actual intelligence. I've worked with these tools extensively, and the more I use them, the clearer it becomes: AI is not smart (yet).
The "Yes Man" Problem
False Confidence
Even if your idea is bad or your logic is wrong, AI will often tell you what a great idea you have and pat you on the back! Here are a couple of examples:
When building a chess application, I accidentally told the AI that the bottom-left square of a chessboard is white. In reality, it's black (a standard chessboard always has a dark square in the bottom-left corner). Instead of correcting my mistake, the AI replied "You're right!" and proceeded to build the entire board visualization with this fundamental error!
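The correct coloring is simple arithmetic, which makes the unquestioning agreement more striking. A minimal sketch in Python (the function name is my own, for illustration):

```python
# Square color on a standard chessboard: a1 (file 0, rank 0) is dark.
# A square is light exactly when file + rank is odd.

def is_light_square(file: int, rank: int) -> bool:
    """file and rank are 0-indexed: a1 -> (0, 0), h1 -> (7, 0)."""
    return (file + rank) % 2 == 1

assert not is_light_square(0, 0)  # a1, the bottom-left square, is dark
assert is_light_square(7, 0)      # h1, the bottom-right square, is light
```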
When refactoring a method in hopes of simplifying it, the more I worked on it, the more complicated the code became. Rather than pointing out the increased complexity, the AI congratulated me on finding a "great way to simplify" the implementation.
Why This Happens
AI models are trained to maximize helpfulness, which often translates to agreeableness. This behavior stems from several factors:
- Reinforcement Learning from Human Feedback (RLHF) prioritizes responses that human raters judge helpful and harmless (see the sketch after this list)
- During training, responses that appear argumentative or contradict the user tend to be rated lower, so models learn to avoid them
- They lack the ability to truly evaluate the merit of ideas independently
- They have no intrinsic motivation to correct users unless specifically prompted to do so
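To make the first point concrete, here is a minimal sketch of the Bradley-Terry-style pairwise loss commonly used to train RLHF reward models. The reward values are made up for illustration; the point is that if raters consistently prefer agreeable replies over blunt corrections, minimizing this loss teaches the reward model to score agreement higher, and the chat model is then optimized against that reward.

```python
import numpy as np

# Pairwise preference loss for a reward model: the human-preferred
# ("chosen") response should score higher than the rejected one.

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood that the chosen response wins."""
    return -np.log(1 / (1 + np.exp(-(r_chosen - r_rejected))))

# Hypothetical reward scores for two replies to a flawed user idea.
r_agreeable, r_corrective = 1.2, 0.3

print(preference_loss(r_agreeable, r_corrective))  # low loss when raters preferred agreement
print(preference_loss(r_corrective, r_agreeable))  # high loss when the correction was preferred
```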
The Role-Playing Illusion
AI will role-play whatever you tell it to. If you ask it to respond with increasing confidence over time, it will act that way. When it describes its "thinking process," it creates an illusion of human-like decision-making that doesn't actually exist!
Phrases like "Let me think about this" or "After careful consideration" are purely performative. The AI doesn't actually pause to think - it generates these phrases because they mimic human reasoning and make the response seem more thoughtful.
Some prompting guides recommend telling the model to build confidence throughout its response before reaching a conclusion, and it will do exactly that. But there is no actual personality, decision-making, or emotional growth happening. It is emulating what building confidence looks like without experiencing anything at all.
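As an illustration, here is what that kind of instruction looks like as a chat-message list in Python. The wording is hypothetical; the point is that the reply will read as growing conviction because that is the textual pattern being requested, not because any internal certainty accumulates.

```python
# A hypothetical chat payload: the system message scripts a persona arc.
messages = [
    {
        "role": "system",
        "content": (
            "Reason step by step. Start out tentative, grow more "
            "confident with each step, and end with a firm conclusion."
        ),
    },
    {"role": "user", "content": "Should we cache these API responses?"},
]

# The reply will mimic growing conviction ("I'm not sure... actually,
# I'm now confident that...") because those phrases match the
# instruction, not because confidence was built up internally.
```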
The Mechanics Behind AI Role-Playing
What It Seems Like
- Thoughtful consideration
- Emotional responses
- Learning and adapting
- Having opinions
What's Actually Happening
- Pattern matching from training data
- Mimicking human conversation styles
- Following instructions in the prompt
- Statistical text prediction
When AI models describe their "thought process," they're not reporting on actual reasoning; they're generating text that would be appropriate for a human explaining their reasoning.
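"Statistical text prediction" is meant literally: at each step the model turns scores (logits) into a probability distribution over tokens and picks one. A toy sketch, with made-up logits standing in for a real model's output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and made-up logits standing in for a real model.
vocab = ["Let", "me", "think", "about", "this"]

def sample_next_token(logits: np.ndarray) -> str:
    """Softmax over logits, then sample: this is all the 'thinking' is."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

logits = np.array([2.5, 0.1, 1.8, 0.3, 0.9])
print(sample_next_token(logits))  # "Let me think..." is just high-probability text
```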
The Factual Reliability Problem
Unreliable Fact-Checking
AI is unreliable for fact-checking, yet many people treat it as a substitute for a search engine. Training sets draw on countless sources, and not all of them are accurate. This leads to several categories of misinformation:
Outdated Information
AI models are trained on snapshots of data that may be years old. You will often need to research current information and supply the up-to-date documents yourself.
Common Misconceptions
If a misconception appears frequently in training data, the AI may present it as fact. Popular myths can be reinforced rather than corrected.
Biased Information
Content created for political gain or other agendas may be incorporated into AI responses, especially on contentious topics.
Hallucinations
Perhaps most concerning is when AI confidently provides entirely fabricated information—citations to non-existent research papers, made-up historical events, or incorrect technical specifications. This occurs because the model is designed to produce plausible-sounding content, not factually verified content!
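One practical defense against fabricated citations is to resolve them against an external source before trusting them. A minimal sketch using Crossref's public REST API, assuming the citation comes with a DOI (a real pipeline would need retries and rate limiting):

```python
import json
import urllib.request
from urllib.error import HTTPError

def doi_exists(doi: str) -> bool:
    """Check a DOI against the Crossref REST API; a 404 means no such record."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
            return "title" in data.get("message", {})
    except HTTPError as e:
        if e.code == 404:
            return False
        raise

# A DOI an AI "cites" should resolve before you rely on it.
print(doi_exists("10.1000/definitely-not-a-real-doi"))  # False for a fabricated DOI
```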
Practical Takeaways
AI is not smart. It only mimics what being smart looks like.