What if you could talk to your favorite character in a movie? | Christoph Lassner

with Christoph Lassner

Published October 28, 2025

About This Episode

AI engineer Christoph Lassner introduces a taxonomy of digital content he calls Content 1.0, 2.0, and 3.0, and explains how generative AI is enabling the next phase. He describes Content 3.0 as media that is dynamically generated with and for each individual viewer, allowing them to co-create stories, interact with characters, and explore worlds without preset narrative boundaries. He also discusses the technical underpinnings, creative possibilities, and economic implications of this shift for storytellers and the entertainment industry.


Disclaimer: We provide independent summaries of podcasts and are not affiliated with or endorsed in any way by any podcast or creator. All podcast names and content are the property of their respective owners. The views and opinions expressed within the podcasts belong solely to the original hosts and guests and do not reflect the views or positions of Summapod.

Quick Takeaways

  • Lassner defines three generations of digital content: 1.0 (professionally produced and just viewed), 2.0 (personally uploaded and shared), and 3.0 (generated with the viewer using AI).
  • Massive amounts of digital data have enabled generative AI models that can create text, images, music, and increasingly convincing video.
  • Content 3.0 aims to build entire interactive worlds, not just clips, where viewers can explore from any angle and meaningfully engage with characters.
  • In a Content 3.0 future, creators design settings, characters, and rules while AI generates individualized narratives on the fly for each viewer.
  • Characters in these experiences could break the fourth wall and converse directly with each audience member, turning them into active participants.
  • The economics of Content 3.0 will differ from traditional media and social media because dynamically generated experiences are more costly per viewer.
  • New storytelling tools historically take decades to mature, so early Content 3.0 experiments may be awkward but can eventually yield enduring art.
  • Lassner expects that in future Content 3.0 experiences, viewers will not just watch but be inside stories as characters or interactive participants.

Podcast Notes

Show introduction and context

Host introduces TED Talks Daily and the episode topic

Elise Hu sets up the central question about talking to James Bond while watching a film[3:01]
She asks listeners to imagine having a chat with James Bond mid-movie as a way into the topic of interactive entertainment
Introduction of Christoph Lassner and his focus[3:04]
Elise describes him as an AI engineer discussing the future of entertainment in the age of generative AI
She summarizes that he will explain a concept he calls Content 3.0 and how it could transform famous films into personalized interactive experiences

Defining Content 3.0 and the evolution of digital content

Examples of AI-generated content as a starting point

Visual examples that were created by AI[3:43]
Lassner mentions an image of the Mona Lisa with a twist, "TED AI" written on the surface of the moon, and a 3D fantasy world
He states that these were not created by humans in the traditional sense but generated by AI
Definition of Content 3.0[3:50]
He calls this moment the very beginning of what he terms "Content 3.0"
Content 3.0 opens up possibilities for co-creating content between storytellers, artists, creators, and viewers
He highlights interacting with creations, talking with characters, exploring worlds, and each viewer leaning into what they enjoy most
Expected impact of Content 3.0[4:22]
Lassner expects Content 3.0 to have as profound an impact on media and entertainment as the shift from scheduled broadcasting to on-demand streaming
He calls it a true paradigm shift that will make experiences more engaging than ever

Taxonomy of digital content: Content 1.0, 2.0, 3.0

Overview of the taxonomy[4:41]
He proposes a taxonomy of digital content to unpack his vision
Content 1.0 is defined as content that is just viewed by you, with "view" generalized to all types of content you can experience
Content 2.0 is content that is possibly uploaded by you, such as modern social media
Content 3.0 is content generated by you; professionals train models and prepare settings, but the viewer becomes part of creating it

Characteristics and economics of Content 1.0

Professional, high-cost production[5:15]
Content 1.0 is professional; many people come together to create the best experiences for their audience
He lists books, articles, music, movies, and games as examples
Content 1.0 makes a lot of money on the internet but is very expensive to make
Entire companies are built around creating Content 1.0, owning distribution channels, and aiming to shape pop culture

Characteristics and evolution of Content 2.0

Personal nature of Content 2.0[5:42]
Content 2.0 is described as personal
It started with simple homepages for everyone, memes, and blogs
As consumer recording devices became more prevalent and media sharing platforms more powerful, content fidelity increased
He notes that some videos have over 100 million views, which would have been unthinkable for most traditional movies or shows
Culmination in modern social media[6:14]
Content 2.0 culminates in modern social media, with billions of people creating and sharing content

Scale of Content 1.0 and 2.0 data

Projected data volume[6:25]
Between Content 1.0 and 2.0, he states we expect to create over 100 zettabytes of data in 2025 alone
HD camera thought experiment to convey scale[6:37]
He imagines traveling back in time and setting up an HD camera that records 3 gigabytes per hour, a typical HD video stream
To record 100 zettabytes, the camera would have had to start rolling 3.8 billion years in the past
He notes there would be little motion at first because this is far before the dawn of complex life on Earth
If this timeframe were a 20-episode TV show, the entire period of dinosaurs would be maybe one episode
He adds that 300,000 years of human history would barely register in the credits, perhaps only as a teaser for the next big thing
He uses this analogy to emphasize how unbelievably large that amount of content is, and notes this is just what is expected to be added in 2025
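The camera analogy is straightforward back-of-envelope arithmetic. A quick sketch using the two figures from the talk (100 zettabytes of data, 3 gigabytes per hour of HD video) confirms the 3.8-billion-year estimate:

```python
# Back-of-envelope check of the talk's HD-camera analogy.
# Figures assumed from the talk: 100 zettabytes total, 3 GB per hour of HD video.
ZETTABYTE = 10**21  # bytes
GIGABYTE = 10**9    # bytes

total_bytes = 100 * ZETTABYTE
bytes_per_hour = 3 * GIGABYTE

hours = total_bytes / bytes_per_hour          # ~3.3e13 hours of footage
years = hours / (24 * 365.25)                 # convert hours to years

print(f"{years:.2e} years")  # roughly 3.8 billion years
```

The result lands at about 3.8 billion years, matching the figure Lassner cites, which is indeed far before the dawn of complex life on Earth.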

Generative AI and the technical foundation of Content 3.0

From massive data to generative models

Data powering Content 3.0[7:27]
He explains that this great amount of data and content powers the dawn of AI models and Content 3.0
Content 3.0 is characterized as generative
With such data volumes, AI models can learn to understand content, reproduce it, and produce new content
Large language models as an example[7:41]
He notes that large language models need no introduction today
He says GPT-3 was trained on hundreds of billions of words from articles, books, social media posts, and code
When prompted, such a model relies on what it has seen before to create a likely continuation
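The "likely continuation" idea can be illustrated with a toy bigram model. This is not how GPT-3 actually works (it is a neural network trained on subword tokens), but the principle of predicting a probable next word from previously seen data is the same; the corpus and function names here are purely hypothetical:

```python
from collections import defaultdict

# Toy bigram model over a made-up corpus: like a language model, it
# continues a prompt with a likely next word based on what it has seen.
corpus = "the spy drove the car and the spy found the villain".split()

# Count how often each word follows another in the corpus
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continuation(word, steps=3):
    """Greedily extend the prompt with the most frequent follower."""
    out = [word]
    for _ in range(steps):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(continuation("the"))  # "the spy drove the"
```

Real models sample from a probability distribution over tens of thousands of tokens rather than greedily picking one follower, but the mechanism of "relying on what it has seen before" is the same in spirit.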
Beyond text: multi-modal generative AI[8:01]
He states that Content 3.0 goes beyond text, and other models can do similar things for code, images, and music
He notes that outputs in these modalities are starting to look pretty good
He adds that even video is becoming interesting as a generative medium

From clips to entire worlds: spatial intelligence and World Labs

Vision of generative worlds instead of single viewpoints[8:15]
Lassner asks what if, instead of creating a video clip from a single viewpoint, we could create an entire world to explore from every angle
He imagines being able to look around every corner in such a generated world
Introduction of World Labs and spatial intelligence work[8:34]
He explains that as co-founder of World Labs he is working on this problem
World Labs is described as building systems that let users interact with spatial intelligence
Spatial intelligence includes understanding, reasoning about, and generating spatial data for artists, creators, and developers
He notes these systems can also be used simply to enjoy a beautiful, fantastical environment
Spatial interpretation of data[8:54]
He says their models can generate these environments because they are trained to interpret vast amounts of data in a spatial way

Content 3.0 as a new medium, not just a tool

Content 3.0 models as puzzle pieces and beginnings

Current models are only the start[9:08]
Lassner asks whether the described models represent everything Content 3.0 will be
He says he sees these models as pieces of a puzzle and the beginning of something rather than its conclusion
Using Content 3.0 to augment earlier content types[9:02]
He acknowledges that Content 3.0 models can be used as tools to create more Content 1.0 and 2.0
He invites listeners to imagine a streaming service where you describe your dream movie and it appears in minutes
He also imagines a social platform where you upload a video as just a prompt and it is immediately transformed into a professional influencer clip
He says these scenarios may become reality but still do not realize the full potential of Content 3.0

Vision: new kinds of storytelling and interactive worlds

Dreaming of novel narrative forms[10:13]
He says this is where he starts to dream about new kinds of storytelling, interactive worlds, and experiences
He envisions characters we can interact with in more natural ways than just dialogue boxes
He imagines worlds we can explore with no artificial boundaries
He speaks of narratives that are not predetermined
Personal background and trajectory[10:24]
He recalls being a teenager, playing 3D games, and developing mods, levels, and characters
He says his journey has taken him further in this direction over time
He believes now there is an opportunity to take a huge leap forward thanks to Content 3.0

Key enabling change: creating content at or above consumption rate

Creation rate surpassing consumption[10:42]
He states that we can now create interesting content at or above the rate of consumption
He says the impact of this fact cannot be overstated
Analogies with Michelangelo and Ian Fleming[10:47]
He asks listeners to imagine Michelangelo painting an image in the blink of an eye, faster than you can look at it
He asks them to imagine Ian Fleming writing James Bond novels while a reader is reading them and even producing a full movie at the same time
He concludes that this capability means narrative can be developed on the fly for every individual viewer
He emphasizes that this has never been possible before at any point in human history

Comparison to improv and individualized narratives

Improv theater as a reference point

Nature of improv performance[11:17]
He says improv is interesting and fun because actors perform without a plan, improvising what happens next on stage
He notes that improv is limited to one narrative per group of actors, shared across the whole audience

New model: creators set the stage, AI and viewers co-create

Producers define world elements rather than full scripts[11:35]
He suggests artists and producers can create a stage for a story: a villain with an agenda, a hero with an interesting backstory, and a visual style for the world
He says that from there, the story develops between the world and every individual viewer
He points out how different this is from Content 1.0, even games, where stories are prepared in advance and characters are scripted
He notes that this prior approach gives the viewer far less agency

Breaking the fourth wall for individual viewers

Individualized character interactions[12:13]
He says characters can break the fourth wall for every individual viewer, something never possible before
He invites us to imagine an actor reaching out to every single viewer and having a meaningful conversation or reacting to them
He also imagines a viewer reaching out to a character to tell them where the villain is or suggest a new plot twist
James Bond interactive example[12:44]
He proposes that a future James Bond, between hunting supervillains and racing sports cars, might turn to the viewer and have a casual chat
He jokes about asking Bond about his Wiener schnitzel recipe and says he would stick with martinis
He suggests viewers could strategize with Bond about where the villain can be found
He concludes that this opens an entirely new and currently underexplored medium

Business models and economics of Content 1.0, 2.0, and 3.0

Economics of Content 1.0

Production and target audience[13:48]
He describes Content 1.0 as involving producers coming together to create content in a broad sense: writers, actors, directors, artists, and others
Content 1.0 pieces are created once and must appeal to as many people as possible
It has high fixed costs, with producers paying for production and distribution in advance
Because of this, they must aim for big successes at low risk with a very large target audience

Economics of Content 2.0

Viewers as producers and distribution via social networks[13:53]
He says Content 2.0 changes the equation because viewers are also producers
There are many smaller content productions at lower cost
A social network shoulders the cost of distribution and matches content to the right audience
Everyone has an incentive to create viral content, but content can still be niche
He notes that fans of niche content are also producing that content
He states that this model fits perfectly with an ad-driven business model

Economics and structure of Content 3.0

Producers focus on models and settings[14:22]
He says Content 3.0 can change these dynamics again
Producers focus on creating a model, which does not have to be final content but can be the setting, stage, or world
They define background stories of characters and their situation
The content is then co-created with each viewer individually, playing out live
He says this blurs lines between classical linear media like movies and interactive media like games, and goes beyond both
Cost structure and early-stage challenges[14:29]
He warns that Content 3.0 will not be easy; initial attempts may fail or look awkward
He reminds the audience it took several decades after the invention of movies to make films that stand the test of time
Similarly, it took decades after computers were invented to create computer games that stand the test of time
He notes that in recent years we have only seen the very first experiments with these new tools
He observes that AI tools are reducing costs for static content production
However, for fully on-the-fly generated dynamic experiences, the cost per viewer is much higher than for Content 1.0 and 2.0
He concludes that the economics of making hits may look quite different for Content 3.0 compared to earlier content types
He describes this area as rapidly evolving and says he is excited to see how it develops in the coming years

Summary of the content taxonomy and future participation

Recap of Content 1.0, 2.0, and 3.0

Concise restatement of definitions[15:55]
He summarizes that Content 1.0 is just viewed by you
Content 2.0 is possibly uploaded by you
Content 3.0 is generated by you or together with you

Future: viewers inside stories

Increased viewer embodiment and interaction[15:33]
He says that going forward, he expects viewers will be in the content and part of the story or interacting with it

Role of storytellers and new tools

Storytellers adapting to new media[16:19]
He asserts that the best storytellers of a generation will always tell the best stories with the tools they have
He says Content 3.0 gives them an entirely new set of tools that are just beginning to be understood and explored
Need for new skills and new artists[16:34]
He notes these tools will require new skills to master
He anticipates a new generation of artists will show what Content 3.0 can look like
He ends his talk with a "Thank you"

TED context and production credits

Talk context and curation guidelines

Event and location information[16:48]
Lassner identifies himself as speaking at TED AI in Vienna, Austria in 2025
TED curation reference[16:53]
Listeners curious about TED's curation are directed to TED.com/curation-guidelines

Podcast closing and production team

Host closing remarks[16:56]
Elise Hu says that's it for the day and notes TED Talks Daily is part of the TED Audio Collective
Fact-checking and production credits[17:04]
She says the talk was fact-checked by the TED Research Team
She lists members of the production and editing team: Martha Estefanos, Oliver Friedman, Brian Green, Lucy Little, and Tansika Sangmarnivong
She notes the episode was mixed by Christopher Fasey-Bogan, with support from Emma Taubner and Daniela Balarezo
She closes by saying she will be back tomorrow with a fresh idea and thanks listeners for listening

Lessons Learned

Actionable insights and wisdom you can apply to your business, career, and personal life.

1. New technologies like generative AI are not just tools for faster production; they can enable entirely new mediums where audiences co-create and inhabit stories rather than passively consume them.

Reflection Questions:

  • Where in your work or hobbies could you shift from creating finished products for others to designing systems or spaces that people can actively shape themselves?
  • How might inviting your audience, customers, or teammates into the creation process change the kinds of experiences or outcomes you produce together?
  • What small experiment could you run this month to turn a passive experience you offer into a more interactive, co-created one?
2. Thinking in generations of content (1.0, 2.0, 3.0) clarifies how production, participation, and economics change with each technological shift and helps you position yourself for the next wave.

Reflection Questions:

  • If you map your current projects onto the 1.0, 2.0, and 3.0 framework, where are you mostly operating today and why?
  • How could understanding the changing cost structures and participation patterns of your field influence the bets you make over the next few years?
  • What is one concrete step you could take to move a key project you care about one "generation" forward in terms of participation or personalization?
3. Massive data and real-time generation make it possible to personalize narratives for each individual, but they also demand rethinking business models and cost structures instead of forcing old models onto new media.

Reflection Questions:

  • In your domain, where are you still applying old business or success metrics to new kinds of products or experiences?
  • How might per-user or real-time costs change the way you design and price anything you offer that is highly customized?
  • What new metric or economic model could you experiment with to better align with increasingly personalized or dynamic experiences in your work?
4. Historically, powerful new storytelling technologies take decades to mature, so early awkwardness is a normal part of the process rather than a signal to abandon experimentation.

Reflection Questions:

  • What current project or tool feels "awkward" or immature that you might be judging too harshly because you're expecting polished results too soon?
  • How could you reframe early failures or clumsy prototypes as necessary steps toward long-term mastery in your craft or business?
  • What long-term experiment (measured in years, not months) could you commit to, knowing that enduring mediums like film and games also took decades to refine?
5. Designing generative systems means focusing on rich settings, characters, and rules rather than fixed scripts, which can be a powerful mental model for building flexible strategies and organizations.

Reflection Questions:

  • In your current role, are you spending more time writing "scripts" for others to follow or defining the environment and rules that let good outcomes emerge?
  • How might you redesign one process, project, or team structure so that people have more agency to improvise within clear boundaries?
  • What core parameters (values, constraints, resources) do you need to clarify so that others can generate diverse, high-quality results without constant direct instructions?
