Neil deGrasse Tyson, Gary O'Reilly, and Chuck Nice talk with physicist and author Adam Becker about how tech billionaires envision the future through ideas like AGI, space colonization, transhumanism, and digital immortality. Becker explains why many of these visions are scientifically dubious or incoherent, how they misread science fiction as literal blueprints rather than cautionary tales, and how extreme wealth concentrates power over humanity's technological trajectory. The episode closes with a reflection on the need for wisdom and ethical guardrails alongside scientific and technological ingenuity.
Stephen speaks with technology ethicist Tristan Harris about how incentives in the tech industry led from social media harms to a new wave of powerful AI systems, and why current AI development is on a trajectory most people would not choose if they saw it clearly. Tristan explains the race toward artificial general intelligence (AGI), the private beliefs and fears of AI leaders, the likely impacts on jobs, politics, and social fabric, and the emerging risks from AI companions and therapy bots. They conclude by outlining potential governance, design, and civic responses that could steer AI onto a narrower, safer path if enough people act in time.
Joe Rogan and Chris Williamson discuss how smartphones, social media, and emerging AR technologies shape attention, mental health, and groupthink, and contrast that with the value of time, presence, and physical experience. They debate climate change activism, pollution, perverse incentives around green funding, and why some protest tactics may backfire, then broaden into existential risks like AI, engineered pandemics, and nuclear war alongside concerns about censorship and the UK online safety regime. The conversation also covers trans athletes and fairness in women's sports, high‑stakes boxing matchmaking, hypnosis and memory reliability, and what it means to pursue greatness while trying to remain happy and authentic in an AI‑mediated world.
Host Elise Hu introduces AI futurist Akram Awad, who explores how artificial intelligence may not only displace jobs but also trigger a deeper crisis of identity and purpose. Awad argues that as AI automates more work, societies must decouple human worth from economic productivity and build new systems that value contribution, connection, and meaning. He proposes a framework of future human roles (guardians, adapters, and pioneers) and outlines changes needed in compensation, education, emotional infrastructure, and cultural norms to support purpose in the age of AI.
Joe Rogan describes an unusually vivid dream involving humanoid beings and uses it as a springboard to ask Brett about what dreams are and how lucid dreaming works. They then move into an extended discussion of artificial intelligence as an emergent, biology-like phenomenon, its potential to manipulate humans, and its interaction with social media, sexuality, education, and governance. The conversation also covers intelligence agencies, systemic corruption, pedophilia and blackmail, COVID-19 policy and vaccines, pharmaceutical incentives, wealth, socialism versus markets, academic resistance to paradigm shifts, and whether there is a viable path from the current crisis to a healthier societal structure.
Josh and Chuck explain how extinction works, distinguishing between slow background extinctions and rare but catastrophic mass extinction events. They walk through the history of scientific ideas about extinction, the Big Five mass extinctions in Earth's history, and evidence that we are likely entering a human-driven sixth mass extinction. The episode also touches on de-extinction efforts, ecological cascades from species loss, and a listener letter about how interrogation settings can make innocent people appear guilty.
Joe Rogan and Elon Musk discuss topics ranging from extreme human physiques and giant strongmen to SpaceX's Starship program, reusable rockets, and the vision of building cities on Mars and bases on the Moon. They examine government corruption and incentives, including homelessness policy, immigration, Social Security fraud, and how political parties allegedly exploit these systems, and they revisit controversial deaths such as an AI whistleblower and Jeffrey Epstein. Musk also explains his concerns about the "woke mind virus" in media and AI, outlines his work on X/Twitter and Grok, and describes a potential future of AI-driven universal high income, deep automation, and even the possibility that reality is a simulation.
Joe Rogan talks with Andrew, a scientist and author of "Death by Astonishment," about the phenomenology and neuroscience of DMT and why he believes the DMT state is one of the deepest mysteries in science. They explore how the brain constructs reality, how DMT experiences differ from dreams and ordinary hallucinations, and the possibility that DMT may allow contact with non-human intelligences or post-biological civilizations. The conversation also covers near-death experiences, artificial superintelligence, simulation-like views of reality, Japanese urban culture, and a new continuous-infusion DMT research approach known as DMTX.
Avi Loeb discusses the anomalous interstellar object 3I/ATLAS, arguing that its unusual trajectory, mass, and composition warrant serious consideration of technological or otherwise non-standard explanations rather than automatic classification as a normal comet. He contrasts the scientific community's resistance and institutional inertia with the high potential stakes of discovering alien technology, and describes his own efforts such as the Galileo Project and an expedition to recover fragments of an interstellar meteor. The conversation also explores AI-driven societal risks, philosophical humility about humanity's place in the cosmos, and concrete proposals for systematically searching for extraterrestrial intelligence and technosignatures.
Neil deGrasse Tyson and co-host Chuck interview YouTube science communicator Jake Roper in a Cosmic Queries episode focused on aliens in movies and TV. They discuss the plausibility of alien diseases, energy weapons, and iconic movie aliens, as well as how humanity might react to first contact, whether governments would hide evidence of intelligent life, and why self-replicating machines are a likely form of extraterrestrial visitors. Throughout, they compare cinematic depictions with basic physics, biology, and astrobiology concepts to assess what could and could not work in reality.
Joe Rogan speaks with atmospheric scientist Richard Lindzen and physicist Will Happer about climate science, the history of climate narratives, and how they believe politics and funding have distorted the field. They discuss CO2, water vapor, ice ages, solar variability, and climate models, while arguing that the current climate crisis narrative is exaggerated and tightly tied to financial and political incentives. The conversation also explores historical analogies like eugenics and the Salem witch trials, structural issues in academia and peer review, and the psychological and societal impacts of climate alarmism.
Neil deGrasse Tyson, co-host Matt Kirshen, and astrophysicist Charles Liu explore the science and cultural meaning of monsters, from Godzilla, dragons, King Kong, and Frankenstein to zombies and black holes. They discuss how physics, biology, and scaling laws constrain what monsters could exist, and how stories about monsters reflect human fears, technological change, and environmental anxieties. Throughout, they argue that the real "monsters" are often human hubris and ignorance, and that science can both demystify and reframe these fears.
Host Preston Pysh and guest Seb Bunney discuss Karen Hao's book "Empire of AI: Dreams and Nightmares of Sam Altman's OpenAI," using it as a springboard to explore Sam Altman's biography, the founding and evolution of OpenAI, and the opaque 2023 boardroom crisis that briefly ousted Altman. They examine OpenAI's unusual nonprofit/for‑profit hybrid structure, its partnership with Microsoft, tensions between AI safety and competitive speed, and the hidden labor and economic costs of training large AI models. The conversation also touches on AGI definitions, human-AI interaction, other labs like Anthropic and DeepMind, NVIDIA's role in AI, and briefly previews their next book on longevity.