Kara Swisher and Scott Galloway open with personal banter about Las Vegas, aging, relationships, and Kara's upcoming trip to Korea to film a show about demographic aging. They then discuss the nationwide No Kings protests against Trump, the Trump administration's proposed Compact for Academic Excellence and the coordinated pushback from universities, and the White House's conflict with Anthropic over AI regulation amid broader concerns about regulatory capture by big tech. The hosts also cover GLP-1 drugs like Ozempic and Trump's claim about cutting their price, a major Chinese-linked cyberattack on F5 and the U.S. infrastructure vulnerabilities it exposed, and the externalities of AI data centers. They close with wins and fails, including the protests, George Santos' commuted sentence, and debates over billionaire influence and philanthropy.
The episode explores how scams and cybercrime are being transformed by AI, deepfakes, and global connectivity, with cybersecurity expert Bogdan Botezatu explaining the scale of financial losses and the sophisticated business structures behind modern scams. The conversation covers deepfake-driven fraud, psychological manipulation tactics such as pig-butchering romance scams, technical tools like honeypots, and vulnerabilities in critical infrastructure such as solar inverters. The episode also examines the challenges of detecting deepfakes, the role of law enforcement partnerships, and why reporting scams is crucial despite the stigma victims often feel.
The episode explores two ways artificial intelligence is reshaping criminal activity: AI-powered voice cloning scams targeting individuals and banks, and AI-driven trading bots that can destabilize or manipulate financial markets. In the first half, the hosts demonstrate a voice deepfake scam, talk with a fraud-prevention entrepreneur and a bank executive about weaknesses in voice authentication and the shift to layered security, and discuss how consumers can better protect themselves. In the second half, experts explain how increasingly autonomous trading algorithms can unintentionally collude, raising hard questions about liability, regulation, and the broader risks AI poses to market integrity.