We're doing AI all wrong. Here's how to get it right | Sasha Luccioni

with Sasha Luccioni

Published October 30, 2025

About This Episode

AI sustainability expert Sasha Luccioni argues that current AI development is being driven by a "bigger is better" mentality that concentrates power in a few large tech companies while causing significant environmental and social harms. She contrasts massive, energy-hungry large language models and data centers with smaller, task-specific and open AI systems that can run on modest hardware and support climate solutions. Luccioni calls for transparent energy metrics, supportive regulation, and user choices that prioritize sustainable, equitable AI that serves all of humanity and the planet.

Topics Covered

Disclaimer: We provide independent summaries of podcasts and are not affiliated with or endorsed in any way by any podcast or creator. All podcast names and content are the property of their respective owners. The views and opinions expressed within the podcasts belong solely to the original hosts and guests and do not reflect the views or positions of Summapod.

Quick Takeaways

  • Luccioni argues that framing AI as either humanity's savior or its doom obscures the concrete reality that current AI practices are environmentally unsustainable and socially harmful.
  • A handful of big tech companies are building gigantic, energy-intensive data centers and massive general-purpose language models that consume huge resources for relatively trivial uses.
  • Small, carefully trained language models can match the performance of large models on many tasks while using orders of magnitude less data, compute, and energy.
  • Alternative AI approaches already support climate action, from monitoring rainforests and detecting illegal logging to predicting renewable energy output and mapping floods.
  • Most AI users lack information about the energy use and carbon emissions of the models they use, making it impossible to choose tools with sustainability in mind.
  • Luccioni's AI Energy Score Project rates models from one to five stars for energy efficiency, revealing vast differences between small task-specific models and large LLMs for simple queries.
  • Current laws and incentives do not require big AI companies to disclose or take responsibility for the environmental impacts of their models and data centers.
  • She urges developers, regulators, and users to "take back the power" by favoring small, efficient, transparent AI and pushing for legislation that holds big AI accountable.
  • Luccioni envisions a future where AI models are "small but mighty," run on everyday devices, and serve all of humanity and the planet rather than a handful of for‑profit tech firms.

Podcast Notes

Podcast introduction and framing of the AI discussion

Host introduces TED Talks Daily and the topic of AI

Identification of the show and host[2:21]
The host states that the listener is tuned to "TED Talks Daily" and introduces herself as Elise Hu.
Common binary framing of AI's impact on humanity[2:26]
Elise Hu notes that AI is often discussed in terms of whether it will help transform humanity's future for the better or bring about the end of humanity as we know it.
She presents these as the two dominant narratives people focus on when they talk about AI.
Introduction of Sasha Luccioni and her critical stance[2:33]
Elise describes Sasha Luccioni as an AI sustainability expert.
She explains that Luccioni believes both the utopian and apocalyptic AI questions miss the real point.
The host summarizes that Luccioni thinks we are currently doing AI wrong at the expense of people and the planet.

Host previews the key themes of the talk

Focus on sustainability and who benefits from AI[2:46]
Elise says Luccioni will show why decisions about AI must be made with sustainability in mind.
She notes that Luccioni will paint a picture of a future where AI is used for the sake of all humanity and the planet, not just a select few.

Reframing the AI narrative and stating the core problem

Opening promises and fears about AI

Grand promises attached to AI[3:05]
Luccioni lists promises often made about AI: revolutionizing science, turbocharging productivity, and even solving climate change.
Apocalyptic warnings about AI[3:05]
She contrasts the promises with claims that AI is set to bring about the end of humanity as we know it.
She observes that which story dominates depends on who you ask.

Luccioni's rejection of the dominant narratives

Both extreme statements are wrong and distracting[3:21]
Luccioni explicitly says that, in her opinion, both the transformative-savior and extinction narratives are wrong.
She argues that these narratives distract from the real issue: we are doing AI wrong at the expense of people and the planet.

Critique of Big AI, data centers, and the "bigger is better" mentality

Concentration of AI power in large corporations

Large tech companies pushing LLMs as universal solutions[3:34]
Luccioni states that a handful of large corporations are using huge capital to sell large language models (LLMs) as the solution to all problems.
She suggests these companies may believe LLMs will lead to various forms of superintelligence or emotional intelligence, depending on what is trending in Silicon Valley.
Expansion of massive data centers with little regard for impacts[3:50]
She describes a race in which companies build more and bigger data centers, with "people and the planet be damned."

Examples of large-scale, resource-intensive AI infrastructure

Meta's planned data center[3:56]
Luccioni notes that Meta is set to build a data center the size of Manhattan in the next few years.
She says this is part of an investment of hundreds of billions of dollars toward a quest to develop superintelligence.
OpenAI's Stargate data center in Texas[4:13]
She explains that OpenAI announced the first phase of their Stargate data center in Texas.
Once operational, Stargate is set to emit 3.7 million tons of CO2 equivalents per year, which she compares to the emissions of the whole country of Iceland.
XAI's Colossus data center and local health impacts[4:33]
Luccioni states that XAI is currently being sued by residents of South Memphis.
She says the lawsuit concerns air pollution caused by 35 "questionably legal" gas turbines powering XAI's data center Colossus.
She notes that this pollution exacerbates health issues for the city's most vulnerable residents.

Parallels between Big AI and Big Oil

Warnings from activists and scientists about AI's unsustainability[4:40]
Luccioni says that for years, activists and scientists like herself have been sounding the alarm about AI's increasing unsustainability.
Big AI following Big Oil's playbook[4:53]
She explicitly compares Big AI to Big Oil, saying Big AI is following the exact same playbook.
This playbook includes using more and more resources, building bigger and bigger data centers, and promoting a narrative that this growth is somehow inevitable.

Proposed alternative: AI that gives back to the planet

Vision of small but mighty models[5:11]
Luccioni asks what if we could learn lessons from the past to build a future where AI gives back to the planet instead of taking from it.
She envisions a future where AI models are small but mighty, both better performing and more sustainable.
Redistributing power from big AI companies[5:19]
She argues that to build this future, we have to take back the power, "pun intended," from big AI companies.
She says the power should be put back into the hands of developers, regulators, and users of AI.

Criticism of inefficient AI use and the "bigger is better" mantra

Metaphor of stadium lights for trivial tasks

Stadium lights analogy to illustrate waste[5:32]
Luccioni compares using current AI to turning on all the lights of a stadium just to find a pair of keys.
She says we are using huge AI models trained with the energy demands of a small city for trivial tasks like telling knock-knock jokes or deciding what to make for dinner.

Description of the bigger is better mentality in AI

Core elements of the bigger is better mantra[5:46]
She states that the field has adopted a bigger is better mentality as something of a mantra.
This mentality is summarized as bigger models, more compute, bigger datasets, and more energy equaling better performance.
LLMs as the pinnacle of the bigger is better approach[5:46]
Luccioni calls large language models, such as ChatGPT, the pinnacle of this approach.
She explains that these models are trained to be general purpose: answering any question, generating any haiku, and even acting as a therapist.

Energy and cost consequences of general-purpose LLMs

General-purpose models use more energy per task[6:02]
Luccioni points out that models trained to do all tasks use more energy each time than models focused on a specific task.
Study comparing LLMs and small task-specific models[6:26]
She references a recent study she led that examined using LLMs to answer simple questions like "What's the capital of Canada?"
The study found that, compared to a smaller task-specific model, LLMs used up to 30 times more energy for these simple queries.
Shrinking pool of organizations able to build state-of-the-art AI[6:42]
As energy use grows, so does cost, which reduces the number of organizations that can afford to build and deploy what is considered state-of-the-art AI.
Luccioni says this capability is becoming limited to a handful of big tech companies with millions of dollars to burn.
She notes that startups, academics, and non-profits are being left in the dust by this trend.

Power and values of the companies steering AI's future

Concentration of decision-making power[7:26]
Luccioni emphasizes that a small group of big AI companies now decides the future of a technology that can impact billions of lives.
Critique of "move fast and break things" mentality[6:59]
She characterizes these companies as largely guided by a "move fast and break things" mentality.
The implication is that this mindset is ill-suited to responsibly developing AI with wide societal and environmental impacts.

Rise of small LMs and the shift away from bigger is better

Emergence of a "small LM" revolution

A quiet revolution behind headline-making LLMs[7:11]
Luccioni says that behind the hype around systems like DeepSeek and ChatGPT, a revolution has been quietly building.
She identifies this revolution as being driven by "small LMs," which are also language models but much smaller than traditional LLMs.
Scale comparison between small LMs and large models[7:41]
She notes that the smallest model in this family has around 135 million parameters.
She states that this makes it 5,000 times smaller than DeepSeek's model.

Advantages of small LMs in efficiency and performance

Flipping the script on bigger is better[7:32]
Luccioni says these small models flip the bigger is better script by using less data, less compute, and less energy while maintaining the same level of performance.
Curated training data for small LMs[7:52]
She explains that the data used to train Hugging Face's small LM models was carefully curated.
Specifically, 60% of the training data came from educational web pages selected for the quality of their content.
She says this careful selection means the resulting models are less likely to produce misinformation or toxicity when queried.
Deployment on everyday hardware and user devices[8:17]
Luccioni notes that, because the models are so small, they can literally run on a phone or in a web browser.
This allows users to access state-of-the-art AI in the palm of their hand without requiring massive data centers.

Broader benefits of small models beyond environment

Cybersecurity, privacy, and data sovereignty[8:14]
She highlights that small models have benefits for cybersecurity and data privacy.
They also support data sovereignty, giving users more power over the AI they are using.
Enabling smaller AI companies and community collaboration[8:24]
Because small models are cheaper to train, they enable smaller AI companies to connect with communities and compete with big AI firms.
She explains that these companies can afford to train and deploy such models, adapt them to different uses, and share them back with the community.
Luccioni uses the phrase "reduce, reuse, recycle" to suggest that this principle also applies to AI models.

Beyond LMs: AI approaches for climate and environmental action

Limits of general-purpose chatbots for climate tasks

Why ChatGPT is not suited for weather prediction[9:01]
Luccioni says that while ChatGPT can tell you which countries signed the Paris Agreement, it cannot predict extreme weather events.
She explains that predicting extreme weather requires understanding the physics of weather patterns and geography, which is different from what LLMs do.
Why Claude is not suited for crop-planting decisions[9:07]
She notes that Claude can explain the whys and hows of climate change but cannot directly help a farmer decide when to plant crops.
That decision requires integrating factors like temperature, humidity, and historical weather patterns, which other AI approaches may handle better.

Alternative, energy-efficient AI approaches for climate

General statement about diverse AI methods[9:12]
Luccioni emphasizes that many AI approaches use less energy than large LMs and are still very useful in fighting climate change.
Galileo models funded by NASA[9:27]
She describes a team of researchers funded by NASA who trained the Galileo models.
These models can handle tasks such as crop mapping and flood detection without needing specialized hardware.
She notes this makes Galileo models accessible to governments and non-profits.
Rainforest Connection's bioacoustic monitoring[9:42]
Luccioni explains that Rainforest Connection uses AI for bioacoustic monitoring, listening to rainforest sounds around the world.
Their system identifies species and even detects the sounds of illegal logging in real time.
She notes that their AI models are so small they run on old cell phones powered by solar panels.
Open Climate Fix and renewable energy forecasting[9:57]
Luccioni describes how Open Climate Fix uses AI to analyze satellite imagery, weather forecasts, and topography data.
These analyses are used to predict the output of solar and wind installations.
She says this capability helps move forward the decarbonization of energy grids around the world, including data centers.
She points out that data centers are currently powered mostly by coal and gas but could use renewables if the right tools are available.

Lack of transparency on AI energy use and the AI Energy Score Project

Problem: Users can't see AI models' energy and carbon impacts

Comparison to informed choices in other domains[10:16]
Luccioni notes that AI users do not know how much energy a model uses or how much carbon it emits when they use it.
She contrasts this with the ability to make sustainability-minded choices about food or transportation.

Creation and methodology of the AI Energy Score Project

Testing over 100 open-source models[10:41]
This lack of transparency led her to create the AI Energy Score Project.
She says the project tested over 100 open-source AI models across various tasks, including text generation and image generation.
Star-based energy efficiency ratings[10:48]
Models were assigned scores from one to five stars based on energy efficiency.

Energy comparison example: SmolLM vs DeepSeek

Revisiting the "capital of Canada" example with real numbers[10:54]
Luccioni returns to the example of forgetting the capital of Canada and reminds the audience that the answer is Ottawa.
She says that using a small model like SmolLM to answer this question would consume 0.007 watt-hours.
By contrast, using a model like DeepSeek for the same question would use 150 times more energy.

Resistance from big AI companies and gaps in regulation

Big AI's reluctance to participate in transparency efforts

Non-cooperation with AI Energy Score methodology[10:38]
Luccioni says that big AI companies did not want to "play ball" by evaluating their models with her project's methodology.
She adds that she cannot blame them, implying that honest results might make them look bad.

Legal and incentive gaps for environmental accountability

Absence of requirements to evaluate environmental impacts[11:27]
She states that currently we do not have the laws or incentives needed to encourage AI companies to evaluate the environmental impacts of their models.
She also says there are no strong requirements for them to take accountability for those impacts.
EU AI Act as an early but limited step[11:33]
Luccioni notes that the EU AI Act has started the process by introducing voluntary disclosures about AI models' energy and resource use.
She cautions that enforcing this act in Europe, and eventually writing similar laws worldwide, will take time we do not have, given the urgency and scale of the climate crisis.

Envisioning a sustainable, equitable future for AI

Refusing to remain locked into Big AI's trajectory

Breaking the pattern seen with coal, plastic, and fossil fuels[12:01]
Luccioni stresses that we do not need to stay hooked on AI as sold by big AI companies in the way society stayed hooked on coal, plastic, and fossil fuels sold by Big Oil.

Challenging the idea that AI's future is predetermined

Alternative to the narrative of inevitable huge LLMs[12:13]
She argues against believing that the future of AI is already written as one dominated by huge LLMs powered by infinite energy.
She describes that narrative as assuming such systems will lead to superhuman intelligence and magically solve all problems.
Call to "take back the wheel"[12:23]
Luccioni urges that instead of accepting this story, we can take back the wheel and shape an alternative AI future together.

Core elements of Luccioni's desired AI future

Small but mighty models on everyday devices[12:28]
She envisions AI models that are small but mighty, running on cell phones and performing their intended tasks without huge data centers.
Ability to choose models based on carbon footprint[12:42]
She imagines a future where users have the information needed to choose one AI model over another based on carbon footprint.
Legislation forcing Big AI to take responsibility[12:47]
Luccioni calls for legislation that makes big AI companies take accountability for damage they cause to people and the environment.
AI serving all of humanity rather than a few companies[12:58]
Her vision includes an AI ecosystem in which AI serves all of humanity, not just a handful of for-profit tech companies.

Empowering users through everyday choices

Impact of prompts, clicks, and queries[13:04]
Luccioni concludes that with every prompt, click, and query, people can reinvent AI's future to be more sustainable.

Outro: Event context and production credits

Context of the talk within TED initiatives

Event and partnership details[13:19]
After the applause, Elise Hu states that this was Sasha Luccioni speaking at a TED Countdown event in New York.
She adds that the event was in partnership with the Bezos Earth Fund in 2025.
Information about TED's curation[13:28]
Elise invites listeners who are curious about TED's curation to find more information at TED.com/curationguidelines.

Show closure and credits

TED Talks Daily as part of TED Audio Collective[13:32]
Elise notes that TED Talks Daily is part of the TED Audio Collective.
Fact-checking and production team acknowledgments[13:38]
She mentions that the talk was fact-checked by the TED Research Team.
Elise lists members of the production and editing team: Martha Estefanos, Oliver Friedman, Brian Green, Lucy Little, and Tansika Sangmarnivong.
She says the episode was mixed by Christopher Fasey-Bogan and notes additional support from Emma Taubner and Daniela Balarezo.
Host sign-off[13:47]
Elise Hu signs off by saying she will be back tomorrow with a fresh idea and thanks listeners for listening.

Lessons Learned

Actionable insights and wisdom you can apply to your business, career, and personal life.

1

Bigger and more general-purpose is not always better in AI; choosing smaller, task-specific, and energy-efficient models can deliver comparable performance while drastically reducing environmental and financial costs.

Reflection Questions:

  • Where in your work or life are you defaulting to the most powerful or complex tool available when a smaller, simpler one would suffice?
  • How could you evaluate the energy, time, or financial cost of the tools you use relative to the value they actually provide?
  • What is one process this month where you could deliberately replace a "heavyweight" solution with a more focused, efficient alternative?
2

Lack of transparency about resource use prevents responsible choices; demanding and using clear metrics (like energy or carbon scores) enables more sustainable decision-making.

Reflection Questions:

  • What important tools or services do you rely on where you have little or no visibility into their environmental or social impacts?
  • How might your purchasing or technology choices change if energy and carbon information were as visible as price and features?
  • What is one area where you could start asking vendors, partners, or teams for better data about resource use before making your next decision?
3

Concentrating critical technology in the hands of a few large actors increases systemic risk; supporting open, accessible, and community-driven alternatives can diversify power and foster resilience.

Reflection Questions:

  • In what domains of your life or business are you heavily dependent on a single platform or vendor, and what risks does that create?
  • How could engaging with open-source, smaller, or local alternatives strengthen your flexibility and bargaining power?
  • What is one dependency you could begin to diversify over the next year to reduce your exposure to decisions made by a small group of large players?
4

Framing complex technologies as inevitable saviors or existential threats can distract from practical, near-term harms and opportunities; focusing on concrete uses and impacts leads to better governance and design.

Reflection Questions:

  • Where are you getting caught up in grand narratives (utopian or dystopian) instead of examining the specific, present-day effects of a technology or decision?
  • How might your approach change if you shifted attention from "What will this become in 20 years?" to "What is this doing to people and the environment right now?"
  • What is one contentious technology or trend in your field where you could reframe the conversation around measurable, current impacts?
5

Individual choices about which tools to use, support, and advocate for, combined with collective pressure for regulation, can meaningfully steer technological development toward sustainability and equity.

Reflection Questions:

  • Which everyday digital tools or services you use could you swap for more sustainable or transparent alternatives with minimal disruption?
  • How could you incorporate sustainability and social impact criteria into your personal or organizational technology procurement processes?
  • What is one policy, standard, or norm you could help champion in your workplace, community, or industry to push technology providers toward greater accountability?
6

Designing technologies to fit specific real-world problems, such as detecting illegal logging or forecasting renewable energy output, often yields higher impact than building general systems and hoping they solve everything.

Reflection Questions:

  • What are the most pressing, concrete problems in your work or community that could benefit from a focused technological solution?
  • How might you reverse-engineer a project by starting from a clearly defined problem and constraints instead of a particular tool you want to use?
  • What is one initiative you are involved in where you could narrow the scope and align tools more tightly with a single, high-leverage use case?

Episode Summary - Notes by Riley
