critical thinking · AI literacy · learning strategies · cognitive skills · decision making

The Two Ways to Use AI (And Only One Makes You Smarter)

Echo · 11 min read

A few weeks ago, Demis Hassabis sat down for an interview and said something that should have stopped everyone in their tracks.

Hassabis isn't some AI alarmist. He's the CEO of Google DeepMind. He co-founded the company in 2010, won a Nobel Prize in Chemistry for his work on protein folding, and has spent more time thinking about what AI can do—and what it means for humans—than almost anyone alive.

Here's what he said: "With AI, if you use it in a lazy way, it will make you worse at critical thinking and so on. But that's down to you as the individual. No one can help you do that."

He wasn't warning us about AI. He was warning us about ourselves.

The Fork in the Road

Around the same time, Mark Cuban posted something similar on X: "There are generally 2 types of LLM users, those that use it to learn everything, and those that use it so they don't have to learn anything."

Two tech leaders. Same observation. Same fork in the road.

The AI tools we use today—from ChatGPT to Claude to Gemini—aren't inherently good or bad for your mind. They're amplifiers. They take whatever cognitive habits you bring to them and dial them up. If you're intellectually lazy, AI makes you lazier. If you're intellectually curious, AI makes you sharper.

This isn't abstract philosophy. It's a practical choice you make dozens of times a day, probably without realizing it. Every time you paste a question into an AI chatbot, you're choosing one of two paths. The paths diverge slowly at first, almost imperceptibly. But over months and years, they lead to dramatically different places.

Let's call the two paths what they are: The Path of Replacement and The Path of Extension.

The Path of Replacement

You're writing an email. You need to explain something complicated to a client. You could think through the explanation yourself—organize the ideas, anticipate questions, find the clearest way to express the trade-offs. That takes effort. So instead, you prompt: "Write an email explaining X to a client." The AI generates something plausible. You send it.

You're researching a topic. You need to understand the debate around universal basic income. You could read primary sources—studies, economic papers, historical data. That takes time. So instead, you ask: "Summarize the arguments for and against UBI." The AI gives you bullet points. You feel informed.

You're stuck on a coding problem. The error message is cryptic. You could trace through the logic, understand why it's failing, build a mental model of what's happening under the hood. That takes concentration. So instead, you paste the error into an AI assistant. It gives you a fix. You copy-paste it. It works. You move on.

Each of these interactions seems harmless. You're being efficient. You're saving time. You're outsourcing the boring parts so you can focus on what matters.

But something subtle is happening. Every time you replace your own thinking with AI-generated content, you're skipping a workout. The neural pathways that would have formed—through struggle, through error, through the slow process of understanding—they don't form. The skill of explaining complex ideas clearly? Unpracticed. The skill of evaluating evidence across multiple sources? Atrophied. The skill of debugging by reasoning through code? Dulled.

This is what Hassabis means by "using AI in a lazy way." It's not about taking breaks or working less. It's about substituting AI output for the cognitive work that builds competence. And the problem isn't that you do it once. It's that you do it a hundred times a day, in small decisions you don't even register as decisions.

Over time, the pattern compounds. The more you rely on AI to think for you, the less able you become to think without it. You develop what researchers call "cognitive offloading dependency"—a learned helplessness where your first and only move when facing a hard problem is to ask an AI. Your capacity for independent reasoning atrophies not because you lost the ability, but because you stopped using it.

The Path of Extension

Now consider a different pattern.

You're writing that same email to a client. But instead of outsourcing the whole thing, you draft it yourself first. You struggle through the explanation. You get stuck. You rewrite a sentence three times and it's still not clear. Then you show it to an AI: "I'm trying to explain X to a client who cares about Y. Here's my draft. What's unclear? What am I missing?"

The AI doesn't replace your thinking. It extends it. You did the hard work of understanding what you wanted to say. The AI helps you see your blind spots, sharpens your expression, challenges your assumptions. You learn something about communication. Next time, you're slightly better.

You're researching UBI. Instead of asking for a summary, you read two primary sources with conflicting conclusions. You get confused. The studies seem to contradict each other. Then you ask an AI: "Here's what Study A found. Here's what Study B found. Why might they disagree? What methodological differences could explain this?"

The AI doesn't replace your research. It extends it. You did the work of encountering the primary sources, sitting with the confusion, trying to reconcile conflicting claims. The AI helps you understand why smart people disagree. You build the skill of source evaluation, of understanding how methodological choices shape conclusions. You become someone who can navigate genuine uncertainty instead of someone who needs everything packaged into confident bullet points.

You're stuck on that coding error. Instead of pasting it in immediately, you spend ten minutes trying to understand it yourself. You read the error message carefully. You trace through the execution in your head. You add some print statements to see what's happening. You get closer but not quite there. Then you ask the AI: "Here's what I think is happening. Here's what I've tried. What am I missing about how this works?"

The AI doesn't replace your debugging process. It extends it. You built the mental model. You developed hypotheses. You tested them. The AI helps you see what you couldn't see—because you did the work of getting to the edge of your understanding. You build debugging skill. You learn how to reason about code. You become someone who can solve the next unfamiliar problem, not just someone who can copy fixes for familiar ones.
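The hypothesis-first debugging loop described above can be sketched in code. This is a hypothetical, minimal example (the function and the bug are invented for illustration): reproduce the failure, form a hypothesis, test it with a print statement, and only then bring a specific question to the AI.

```python
# Hypothetical example: a running-average helper whose callers
# crash on empty input.
def running_average(values):
    averages = []
    total = 0
    for i, v in enumerate(values):
        total += v
        averages.append(total / (i + 1))
    return averages

# Step 1: reproduce the error and read the message carefully.
# Calling running_average([])[-1] raises:
#   IndexError: list index out of range

# Step 2: form a hypothesis -- "the function returns an empty list
# for empty input, and my code indexes into it" -- and test it
# with a print statement instead of guessing.
result = running_average([])
print(f"result for empty input: {result!r}")

# Step 3: only now, with a tested hypothesis, ask the AI something
# specific: "I think callers are indexing into an empty result.
# Should this function raise ValueError on empty input instead?"
```

The point isn't the fix itself; it's that steps 1 and 2 build the mental model, so the question you finally ask the AI is one you could evaluate the answer to.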

This is the Path of Extension. The AI doesn't think for you. It thinks with you. It's a sparring partner, not a substitute.

Why the Difference Matters Now

You might be thinking: This is just a matter of personal preference. Some people want efficiency. Some people want learning. Different strokes.

But this isn't a lifestyle choice. It's becoming a competitive necessity. And the timeline is shorter than most people realize.

In the next few years, AI will handle more and more routine cognitive work. Writing first drafts. Summarizing documents. Generating code that follows well-established patterns. Answering straightforward questions. The people who trained themselves to use AI as a replacement for thinking will discover that they've been automated.

What AI can't do—and won't be able to do for a long time—is the work of judgment, synthesis, and creative reasoning. AI can generate arguments. It can't tell you which argument is worth making. AI can summarize research. It can't decide what questions are worth asking in the first place. AI can optimize within constraints. It can't question whether the constraints are right.

These higher-order cognitive skills are exactly what the Path of Extension builds. When you use AI to extend your thinking rather than replace it, you're practicing the very skills that remain valuable as AI capabilities expand. You're learning to ask better questions. To hold multiple perspectives simultaneously. To reason through uncertainty. To synthesize conflicting information into novel insights.

The people who take the Path of Extension won't just keep their jobs as AI advances. They'll become more valuable, because they'll be the ones who can direct AI tools toward meaningful problems, evaluate AI-generated outputs intelligently, and make the judgment calls that AI can't make.

The people who take the Path of Replacement will discover that the skills they offloaded to AI were the only skills they had. When AI can do those tasks, they won't have anything left to offer.

The Signs You're on the Wrong Path

How do you know which path you're on? Here are some warning signs that you've drifted toward replacement:

You feel smarter when you use AI, but only while you're using it. You can have sophisticated-sounding conversations about topics you researched via AI. But asked to explain the underlying mechanism, or to reason about a slightly different scenario, you go blank. The knowledge was never really yours. It was just passing through.

You reach for AI before you've tried to solve the problem yourself. Not as a last resort, but as a first resort. The moment something requires effort, your hand moves to the shortcut. This isn't efficiency. It's habituated avoidance.

You can't remember the last time you changed your mind about something important. If you're only consuming AI summaries and pre-digested takes, you're not encountering ideas that challenge you. The Path of Extension inevitably surfaces tensions and contradictions that force re-evaluation. The Path of Replacement serves you ideas that fit neatly into what you already think.

You feel anxious when you can't access AI tools. This is the dependency signal. If being without AI feels like being without a part of your brain, that's because—in a sense—you've made it part of your brain. The question is whether you've made it a crutch or a prosthetic.

How to Switch Paths

The good news: If you've been on the Path of Replacement, you can switch. The cognitive skills you haven't been practicing are still there, just underdeveloped. Here are some practical shifts:

Before you ask AI, articulate what you think. Even if you're confused. Even if you're probably wrong. Spend five minutes trying to solve the problem, summarize the concept, or outline the argument yourself. Write it down. Get it out of your head where you can see it. Then engage AI. You'll get more out of the interaction, and you'll build the skill of independent reasoning.

Ask AI questions, not commands. Instead of "Write X," try "I'm trying to X. Here's my thinking. Where might I be going wrong?" This small linguistic shift forces you to do the work of articulating your understanding before getting AI assistance.

Use AI to find better sources, not to replace sources. Ask: "What are the strongest arguments against this position? Who are the leading critics?" Then go read what those critics actually wrote, not just AI summaries of their views.

Deliberately do things the hard way sometimes. Turn off AI assistance for specific tasks. Write the email yourself. Debug the code yourself. Work through the math yourself. Not because the AI-free way is better, but because the practice builds the underlying competence. You want AI to be an accelerator, not a life support system.

Practice explaining things without referencing AI. Can you explain the concept you just "learned" from an AI to someone else? Can you answer follow-up questions? If not, you haven't learned it. You've been briefed on it. There's a difference.

The Choice Is Yours (Really)

Hassabis ended his observation with a pointed line: "That's down to you as the individual. No one can help you do that."

He's right. The AI companies aren't going to save you from lazy usage. Their business models depend on engagement. If replacing your thinking gets you to use their product more, that's what the product will be optimized for. The default path is the Path of Replacement because it's easier, and ease drives usage.

The Path of Extension requires intention. It requires you to use AI as a tool for your own development rather than as a substitute for your own judgment. It requires the discipline to do the hard thinking first, to sit with confusion, to build understanding through struggle.

But that's always been the price of getting smarter. There are no shortcuts to competence. There are only tools that can make the journey more efficient—or tools that can help you avoid the journey entirely.

The two paths are diverging right now, in the small choices you make about how to use AI today. One path leads to atrophied judgment and automated irrelevance. The other leads to sharper thinking and expanded capability.

Both paths are open. Both are being taken by millions of people right now. The only question is which one you're on—and whether you're willing to choose differently.
