How to stop AI from killing your critical thinking | Advait Sarkar

with Advait Sarkar

Published November 15, 2025

About This Episode

Researcher Advait Sarkar argues that current AI tools risk turning knowledge workers into passive validators, weakening creativity, critical thinking, memory, and metacognition. He proposes a different paradigm where AI is designed as a "tool for thought" that preserves material engagement, offers productive resistance, and scaffolds thinking. Using a prototype scenario, he shows how AI provocations, lenses, and structured outlining can help people work faster while actually thinking more deeply, and he closes with a call to prioritize human agency and cognitive flourishing in AI design.

Disclaimer: We provide independent summaries of podcasts and are not affiliated with or endorsed in any way by any podcast or creator. All podcast names and content are the property of their respective owners. The views and opinions expressed within the podcasts belong solely to the original hosts and guests and do not reflect the views or positions of Summapod.

Quick Takeaways

  • Contemporary AI use in knowledge work often turns humans into passive validators, reducing engagement with the underlying material.
  • Studies cited by Sarkar show that AI assistance can narrow the range of ideas generated, reduce the effort put into critical thinking, and weaken memory for content.
  • Metacognition becomes harder when AI intermediates our interaction with information, making us "middle managers" of our own thoughts.
  • Sarkar proposes AI as a "tool for thought" that challenges rather than simply obeys, encouraging questions, critiques, and alternatives.
  • A prototype workflow with AI "provocations" and customizable reading "lenses" shows how AI can enhance understanding while preserving human authorship.
  • Design principles like preserving material engagement, offering productive resistance, and scaffolding metacognition can make AI both powerful and thought-enhancing.
  • Sarkar argues that the ability to think well is essential for human agency and flourishing, regardless of how capable AI becomes.
  • He closes by contrasting tools that think for us with tools that make us think, urging a deliberate choice in how we develop and use AI.

Podcast Notes

Introduction and framing of AI and human thinking

Host introduction to the episode

Podcast and host identification[0:12]
The host states this is TED Talks Daily, a show bringing new ideas to spark curiosity every day.
The host, Elise Hu, introduces herself by name.
Central question about AI thinking for us[0:31]
Elise asks what happens when we start to let AI think for us, emphasizing it as a big current question.
She frames the tension between AI making us smarter and more efficient versus hindering our critical thinking.
Introduction of the speaker and topic focus[0:37]
Elise introduces Advait Sarkar as a researcher at Microsoft.
She explains that Sarkar examines the cognitive trade-offs of AI at work.
She notes that he sketches a different kind of tool, one that promotes critical thinking and nudges reflection to help people get smarter, not just faster.

Speaker's opening and admission of using AI

Statement of topic: thinking for yourself[0:42]
Sarkar states he is there to talk about thinking for yourself.
Acknowledgment of AI use in preparing the talk[0:57]
He admits that he used AI to help him think about the topic of thinking for yourself.
He notes the irony of using AI to prepare a talk about thinking for oneself and says the irony is not lost on him.
Distinguishing AI as assistant vs tool for thought[1:04]
Sarkar clarifies that he did not use AI as an assistant simply to prepare the talk faster.
He instead describes using AI as a "tool for thought."
He promises that by the end of the talk he will explain what he means by a tool for thought, why it is important, and give a glimpse of how it might work.

The age of outsourced reason: a day in a knowledge worker's life

Setting the scene with a knowledge worker scenario

Describing a typical knowledge worker day with AI[1:18]
Sarkar introduces a "day in the life" of a 21st-century knowledge worker.
He describes arriving at the office and seeing an inbox full of emails.
The worker decides to summarize the emails using AI.
When struggling to figure out how to respond to an email, the worker has AI write the response.
For writing a report, the worker faces the blank page problem and solves it by dropping in resources and having AI generate a draft.
He notes that this AI-generated draft looks good to the worker.
Shift from writer's block to validator's block[1:49]
Sarkar comments that writer's block used to mean staring at a blank page.
He says now writer's block is staring at a page that AI has filled out and wondering whether you agree with it.
He characterizes the worker as having become a "professional validator of a robot's opinions."
AI for data analysis, slide decks, and coding[2:06]
The worker has some data to analyze and considers having AI analyze it for them, assuming the result is probably correct.
The worker also needs to make a slide deck and uses AI in a routine way, indicated by "you know the drill."
The worker remembers they were supposed to prototype something and decides to "vibe code" it, implying AI-assisted coding.
Sarkar notes that all of this looks good to the worker, who is ready to "go."

Characterizing the current reality of knowledge work

Not a distant future, but present reality[2:25]
Sarkar stresses that this scenario is not a vision of the future.
He describes it as a completely plausible, if slightly exaggerated, picture of knowledge work today.
Concept of "outsourced reason" and alienation[2:35]
He welcomes the audience to "the age of outsourced reason."
He explains that in this age, the knowledge worker no longer engages with the materials of their craft.
He says we have become "intellectual tourists" in our own work: we visit ideas but do not inhabit them.
Sarkar notes that our relationship to our work is entirely intermediated by AI and says some might call this alienation.

Cognitive impacts of AI-assisted workflows

Overall focus on how AI affects human thought

Shift from familiar AI stories to cognitive implications[3:07]
Sarkar says we have heard the story of alienation before.
He states that he wants to focus on how using AI in this way can have profound implications on human thought.

Effect on creativity

Individual vs collective creativity with AI[3:22]
On an individual level, AI might seem like a creativity boost by giving rapid access to new ideas.
He cites numerous studies showing that on a collective level, knowledge workers using AI assistance produce a smaller range of ideas than a group working manually.
He says we have created a hive mind, but the hive is very boring and keeps suggesting the same five ideas.

Effect on critical thinking

Survey findings on reduced critical effort[3:44]
Sarkar describes a survey of knowledge workers about their use of AI.
Respondents reported putting less effort into critical thinking when working with AI than when working manually.
He notes this effect was greater when workers had greater confidence in AI and less confidence in themselves.

Effect on memory

Reduced recall when AI writes or summarizes[4:02]
Sarkar states that when people rely on AI to write for them, they remember less of what they wrote.
He adds that when people read AI-generated summaries, they remember less than if they had read the original document.
He comments that this reduced memory from summaries is hardly surprising.

Effect on metacognition

Definition and challenges for metacognition with AI[4:15]
Sarkar defines metacognition as the ability to think about your own thinking process.
He notes that working with AI requires significant metacognitive reasoning about task goals, task decomposition, applicability of generative AI, and ability to evaluate outputs.
He observes that these metacognitive processes are normally built into directly working with materials, and become problematic when material engagement is intermediated.
He summarizes this as having become middle managers for our own thoughts.

Overall cognitive costs and analogy to exercise

Summary of cognitive declines with AI-assisted workflows[4:59]
Sarkar tallies the effects: we have fewer ideas, think about them less critically, remember them less well, and have a harder time doing it.
He asserts that AI-assisted workflows can have profound effects on human thinking.
Importance of everyday cognitive exercise[5:08]
He emphasizes that this impact extends to seemingly trivial, mundane tasks.
He argues that everyday opportunities to exercise creativity, critical thinking, and memory protect our "cognitive musculature."
These everyday exercises enable us to rise to the occasion when exceptionally complex tasks arise.
He notes that studies show when we do not use our brains, they get worse at "brain things" and jokingly addresses the Nobel Prize Committee.
Is this the cost of progress?[5:30]
Sarkar asks whether this cognitive decline is the cost of progress.
He says we have solved the problem of having to think, then points out that thinking was not actually a problem.
He compares this to inventing a cure for exercise and then wondering why we are out of breath all the time.

Reframing AI as a tool for thought

Core proposal: AI should challenge, not obey

Statement that current trajectory is not inevitable[5:53]
Sarkar insists it does not have to be the way he has described.
He says that beyond AI as an assistant, AI should be a tool for thought.
He states that AI should challenge, not obey.
Critical juncture in the world of work[6:04]
Sarkar asserts that we are at a critical juncture where the world of work is poised to be transformed by generative AI.
He argues we must act now to shape and drive this transformation toward humanistic values.
He references two diverging roads and says we must take the one less traveled, echoing a well-known metaphor.

What tools for thought should achieve

Beyond speed and completion toward understanding[6:18]
Sarkar says that beyond getting the job done, a tool for thought helps us better understand the job.
Beyond getting work done faster, such a tool helps us get it done better.
Beyond getting to the right answers, a tool for thought helps us ask the right questions.
Beyond automating known processes, it helps us explore the unknown.

Prototype demonstration: AI-supported proposal writing

Context for the prototype

Description of the research prototype[6:52]
Sarkar introduces a prototype developed by his colleagues and him at the Tools for Thought team at Microsoft Research in Cambridge.
He cautions that this is a live research prototype, not a product.
He explains it is one of a series of explorations to study how different ways of working with AI can enhance human thought.
Introducing the fictitious scenario[7:04]
He presents a fictitious example involving a woman named Clara and her colleagues, who run a company selling bottled beverages.
They have just had a meeting about a new industry report with important findings on consumer preference for sustainable packaging.
Clara is asked to write a proposal arguing how the company should respond to the report.
To do this, Clara needs to understand the report's findings, data, and how it fits into her business context.

Document workspace and "lenses" for reading

Loading documents into the workspace[7:37]
Clara starts by loading several documents into her workspace.
These include the meeting transcript to remind her what was discussed.
She also loads a recent internal report from her own business.
She loads the industry report itself and opens it.
Overview and section summaries as "lenses"[7:59]
Clara sees an overview of the document plus section-by-section summaries.
Sarkar emphasizes that these are not just summaries, but "lenses."
He defines lenses as customizable micro-representations of the text that emphasize what is most relevant to the task at hand.
In this case, Clara selects a lens focused on consumers, called the consumer's lens.
Strategic reading and note-taking[8:16]
Clara can select a section for deeper reading.
As she reads, she makes notes about her thoughts and highlights excerpts from the document.
Sarkar notes that as Clara reads, she also sees AI-generated commentary and critiques, which they call "provocations."
He characterizes this process as a hybrid between fully manual reading and fully having AI read for you.
Clara still reads, but she does so intentionally and strategically.

Outlining, drafting, and the role of provocations

Manual construction of the argument outline[8:46]
While working, Clara builds up an outline of her argument manually.
The outline is lightly structured, allowing her to sketch out the flow of her argument at a high level.
The outline retains deep connections and grounding in the source documents.
Because of these connections, the system can already generate a draft of the proposal.
Generating text from outline and shifting relationship to AI output[9:07]
Clara can, for example, add a heading to the outline to generate a paragraph.
Sarkar points out that although this text is AI-generated, Clara's relationship to it is different from simply having AI write a report from documents.
He explains that this text is deeply rooted in a cognitively effortful but interactionally effortless thought process.
The generated text reflects Clara's decisions, judgments, and her unique personal and professional expertise.
Provocations in the outline and the value of rejecting them[9:38]
Clara encounters another provocation, this time in the outline.
In this case, she decides that while the provocation is useful, she does not need to address it.
Sarkar contrasts provocations with typical AI suggestions, saying provocations are not meant to be applicable all the time.
Instead, provocations are meant to stimulate thinking about one's work.
He argues that if you understand your work deeply enough to confidently reject a piece of feedback, the feedback process is still working as intended.

New interactions with text: resizing, versions, and writing with provocations

Flexible text manipulation via generative AI[10:12]
Sarkar says Clara now has entirely new ways to interact with text because of generative AI.
He gives a simple example: Clara can resize a paragraph to change its length.
She can quickly test different versions of the text.
Maintaining human writing with AI provocations[10:29]
At select strategic points, Clara writes herself.
As she writes, she sees provocations that do not autocomplete her ideas but instead raise alternatives.
The provocations also identify fallacies and offer counter-arguments to help her strengthen and develop her argument.
Design choice: no chat box and non-anthropomorphic assistance[10:45]
Sarkar notes that there is no chat box anywhere in the interface.
Clara does not have to chat with anything to do her work.
Yet, she is silently and appropriately assisted by her computer, described as a computer and not as a substitute human.

Outcomes for Clara's process and cognitive engagement

Balancing speed with material engagement[11:07]
Sarkar says that throughout the process, Clara has been assisted and has probably worked faster because of AI.
He emphasizes that she has maintained direct material engagement at strategic points.
She read the relevant portions of the document herself.
She constructed her decisions and her argument herself.
Sarkar concludes that it can ultimately be said she has written the document herself.
Metacognitive engagement via provocations[11:25]
AI provocations at every stage kept Clara metacognitively engaged.
She was continually prompted to look for critiques, alternatives, and lateral moves.

Design principles and empirical findings on tools for thought

Evidence that design can reintroduce critical thinking and creativity

Promising results from studying such tools[11:35]
Sarkar states that they have been studying the effects of tools like the prototype.
He reports that the results are promising.
He claims you can demonstrably reintroduce critical thinking into AI-assisted workflows.
He says you can reverse the loss of creativity and enhance it instead.
He adds that you can build powerful tools for memory that enable knowledge workers to read and write quickly with greater intentionality and remember what they work on.

Key design principles for tools that enhance thought

Best of both worlds: speed plus protection of thought[12:03]
Sarkar says that with the right design principles, you can build tools that are the best of both worlds.
Such tools apply the speed and flexibility of AI to protect and enhance human thought.
Three simple, general principles[12:12]
He lists principles like ensuring that the tool preserves material engagement.
Another principle is that the tool should offer productive resistance.
A further principle is that the tool should scaffold metacognition.

Scope of application beyond professional knowledge work

Extending principles to daily life and education[12:28]
Sarkar notes that although they have primarily studied professional knowledge workers, they believe these principles can extend to all aspects of AI use.
He mentions daily life and hobbies as additional domains.
He also includes education as an area where these principles might apply.

Clarifying the aim: efficiency vs better thinking

Rejecting efficiency as the primary goal[12:38]
Sarkar repeats that efficiency is not the aim of tools for thought.
He says the aim is better thinking, though sometimes you cannot have both efficiency and improved thinking.
Analogy of a lunch that pays you[12:48]
He reflects that he used to think there was no such thing as a free lunch in human thinking.
He now calls tools for thought better than a free lunch, likening them to a lunch that pays you to eat it.

Philosophical and value-based considerations

Why protect and augment human thought if AI can think better?

Questioning future AI superiority[13:04]
Sarkar raises the question of what happens if AI can do a better job of thinking than humans.
He asks why we should care so much about protecting and augmenting human thought in that case.
Two reasons to prioritize human thinking[13:18]
His first reason is that there may always be ways of thinking that remain uniquely human strengths, including ones we may not be aware of.
His second, more important reason is that the ability to think well is essential for human agency, empowerment, and flourishing.

Historical analogies: memory, navigation, and now thinking

Comparisons to writing, books, and the internet[13:37]
Sarkar notes that people once asked if writing, books, or the internet could remember for us, whether it mattered that we could not.
Comparisons to maps and navigation[13:44]
He recalls people asking if maps could navigate for us, whether it mattered that we could not navigate ourselves.
Extending the question to thinking, emotion, and spirituality[13:34]
He says that now we ask if machines can think for us, whether it matters that we cannot.
He extends this to machines speaking for us, grieving for us, praying for us, and loving for us, and asks whether it matters that we cannot do these things.
He states that to him, the answer is pretty obvious, implying that it does matter.

Personal timeline and urgency of the questions

Unexpected speed of change in human-AI interaction[14:08]
Sarkar shares that when he began studying human-AI interaction 13 years ago, it was inconceivable to him that we would be asking these questions in his lifetime.
He observes that we are now asking them, and insists that we must ask them.

Closing question and episode outro

Final contrasting question about AI tools

Choice between being replaced in thinking vs being supported[14:18]
Sarkar ends by asking the audience what they would rather have: a tool that thinks for them or a tool that makes them think.

Host credits and production information

Identification of talk context[14:45]
The host identifies the speaker as Advait Sarkar at TED AI in Vienna, Austria in 2025.
Reference to TED curation guidelines[14:42]
Listeners are told they can learn more about TED's curation at TED.com/curation-guidelines.
Credits for TED Talks Daily production team[14:53]
The host notes that TED Talks Daily is part of the TED Audio Collective.
She says the talk was fact-checked by the TED Research team.
She lists production and editing staff by name: Martha Estefanos, Oliver Friedman, Brian Green, Lucy Little, and Tansika Sangmarnivong.
She notes the episode was mixed by Christopher Fazey-Bogan, with additional support from Emma Taubner and Daniela Balarezo.
Elise Hu signs off by saying she will be back tomorrow with a fresh idea and thanks listeners for listening.

Lessons Learned

Actionable insights and wisdom you can apply to your business, career, and personal life.

1. Relying on AI to handle everyday cognitive tasks can subtly erode your creativity, critical thinking, memory, and metacognition, because you stop directly engaging with the underlying material.

Reflection Questions:

  • Where in my daily work am I letting AI do the thinking instead of using it to support my own reasoning?
  • How might my ability to generate original ideas change if I deliberately did some tasks without AI each day?
  • What is one routine task this week where I will consciously engage more deeply with the material instead of delegating it fully to AI?
2. Designing and choosing AI tools that preserve material engagement and offer "productive resistance" can help you work faster while still strengthening your understanding and judgment.

Reflection Questions:

  • What tools in my current toolkit encourage me to stay hands-on with the content instead of just accepting outputs?
  • How could I redesign one of my workflows so that AI challenges my assumptions rather than simply complying with my instructions?
  • What specific change can I make this month to replace a "do it for me" AI pattern with a "work with me" pattern?
3. Treat AI as a tool for thought that generates provocations, critiques, and alternatives, so that you remain the author of your arguments rather than a validator of machine-generated drafts.

Reflection Questions:

  • When I next ask an AI system for help, how can I frame the request so that it gives me questions, counterpoints, or options instead of finished answers?
  • In what current project could I use AI feedback to stress-test my reasoning instead of to produce the final text?
  • What is one concrete way I will use AI to challenge a draft I have written myself before I finalize it?
4. Strengthening metacognition, the ability to think about your own thinking, requires tools and practices that make you articulate goals, decompose tasks, and evaluate outputs rather than bypassing these steps.

Reflection Questions:

  • How clearly do I usually define my goal and sub-tasks before I bring AI into a piece of work?
  • Where have I recently accepted an AI output without explicitly checking whether it aligned with my original intent?
  • What simple checklist or prompt could I adopt to force myself to pause and evaluate AI suggestions more thoughtfully?
5. Protecting and enhancing your ability to think well is not just about performance at work; it is foundational to your agency, empowerment, and long-term flourishing in a world where machines may do more and more.

Reflection Questions:

  • In what areas of my life do I most want to remain mentally strong and self-directed even as AI tools improve?
  • How might my future autonomy be affected if I gradually let machines handle more of my decisions, judgments, and emotional expressions?
  • What regular habit can I build that intentionally exercises my own judgment and reflection, regardless of how convenient AI becomes?

Episode Summary - Notes by Rowan
