Two ways AI is changing the business of crime (Two Indicators)

with Ben Coleman, Mark Kwapiszewski, Nicole Turner-Lee, Ekaterina Svetlova, Itai Goldstein

Published October 8, 2025
View Show Notes

About This Episode

The episode explores two ways artificial intelligence is reshaping criminal activity: AI-powered voice cloning scams targeting individuals and banks, and AI-driven trading bots that can destabilize or manipulate financial markets. In the first half, the hosts demonstrate a voice deepfake scam, talk to a fraud-prevention entrepreneur and a bank executive about weaknesses in voice authentication and the shift to layered security, and discuss how consumers can better protect themselves. In the second half, experts explain how more autonomous trading algorithms can unintentionally collude, raising hard questions about liability, regulation, and the broader risks AI poses to market integrity.

Topics Covered

Disclaimer: We provide independent summaries of podcasts and are not affiliated with or endorsed in any way by any podcast or creator. All podcast names and content are the property of their respective owners. The views and opinions expressed within the podcasts belong solely to the original hosts and guests and do not reflect the views or positions of Summapod.

Quick Takeaways

  • AI voice cloning has made it easy for scammers to convincingly impersonate trusted people or institutions over the phone, causing substantial financial losses for individuals and businesses.
  • Banks are moving away from relying solely on voice biometrics and toward multi-factor, layered security that combines voice with device, location, and other behavioral signals.
  • Specialized tools like Reality Defender analyze subtle acoustic features to detect AI-generated audio that humans cannot distinguish by ear.
  • Criminals increasingly target customers directly by spoofing bank phone numbers and creating urgency to trick them into moving funds into scams such as gold, cryptocurrency, or cash handoffs.
  • Families and friends can use pre-agreed safe words to verify the identity of a caller who claims to be in distress, adding a layer of protection against voice-clone extortion scams.
  • Experts argue for broad, automated detection and labeling of AI-generated content across platforms, but regulation is lagging behind the technology.
  • New AI-powered trading bots based on reinforcement learning can autonomously discover strategies that amount to collusion or cartel-like behavior in financial markets.
  • Because legal definitions of market manipulation rely on human intent, current laws struggle to assign liability when autonomous AI trading agents collectively behave in harmful ways.
  • Researchers warn that if many firms deploy similar AI trading models, markets could become more volatile as algorithms react in lockstep to the same signals.
  • Financial firms are urged to build internal literacy and guardrails around AI use so they can benefit from its capabilities without becoming unintentional bad actors.

Podcast Notes

Introduction and demonstration of AI voice scams

Hosts and show framing

Planet Money introduces an episode on AI and the business of crime, drawing from The Indicator's Vice Week series[1:13]
Darian Woods and Waylon Wong explain they usually host The Indicator, a short daily podcast from Planet Money, but are guest-hosting on the main Planet Money feed for this episode
Episode structure overview[3:16]
First story: defending against AI voice clones, including how deepfake voices are detected, how banks are responding, and what individuals can do to protect themselves
Second story: AI market manipulation and how new trading bots can behave differently and collude in ways that challenge regulators

Deepfake call prank on a colleague

Darian tests a colleague with an AI-cloned voice[1:30]
Darian calls colleague Angel Carreras using an AI-generated voice that sounds like him, asking Angel to buy $200 worth of gift cards as a "surprise for a colleague"
Angel immediately suspects something is off and repeatedly says Darian sounds like an AI
Darian keeps pushing, trying to get Angel to agree to buy the cards, but Angel calls it out as a deepfake and jokes that Darian's attempt is "too sloppy"
Angel's reaction and hosts' reflection[2:03]
Angel ultimately refuses, then briefly pretends to agree, still emphasizing he knows it's fake
Waylon remarks that Angel was very clever and that she herself might have fallen for such a call, especially if framed as an urgent or emotional situation
They note that in a different context, such as a supposed cousin calling from a hospital, the deepfake call could have been much more convincing

Scale and impact of AI voice scams

Growing prevalence of audio deepfake scams[2:38]
Waylon says many people are falling for audio deepfakes and references data that millions of Americans have lost money to scam calls using AI voices
Losses from these scams can reach into the thousands of dollars per victim
Businesses are also being targeted[2:51]
The hosts mention that businesses are likewise being attacked by such scams and are now turning to AI tools to defend themselves, using AI to fight AI

AI voice fraud and banking security

Banks as targets in the shift to digital

Description of the evolving fraud landscape in banking[5:28]
A bank representative (Mark Kwapiszewski) describes banking's "armor or moat" as having to change as banking has gone more digital, with criminals following into digital channels
Specific AI voice scam against banks[5:41]
One described scheme calls people up and records them speaking for several seconds
The scammers then use that brief recording to create a cloned AI version of the victim's voice
The cloned voice is then used to bypass banks' voice verification systems over the phone, which may allow access to sensitive actions such as wire transfers or password resets

Limitations of relying on voice alone

Why single-factor voice verification is no longer safe[5:44]
A speaker explains that because it is now so easy to reproduce someone's voice, banks cannot rely on any single factor-like the sound of a voice-to verify identity
They emphasize that hearing a familiar voice is not enough to "accept it's you" anymore

Ben Coleman and the creation of Reality Defender

Ben Coleman's background and motivation[6:02]
Ben Coleman used to work at Goldman Sachs, where he witnessed the early stages of AI-driven frauds
In 2021, he co-founded a tech company aimed at protecting large institutions such as banks from voice fraud
At the time, he lacked modern buzzwords like "deepfakes," "generative AI," or widely known tools like ChatGPT, so he described the threat in terms of AI avatars and virtual humans
Foresight about a "tsunami of fraud"[6:28]
Ben anticipated a "huge kind of a tsunami of fraud" based on AI-generated avatars and voices, even before the terminology became mainstream
When asked how he saw this coming, Ben says he simply asked himself what he would do if he were a hacker and how he could do more hacking
Waylon jokingly calls herself a "black hat hacker" in response, making light of Ben's framing

Voice verification as a major vulnerability

Why voice-based authorization worried Ben[6:50]
Ben believed that voice verification presented a major vulnerability for banks because once someone's voice was verified, they could potentially initiate wire transfers or reset passwords
He underscores that such access could give an attacker complete control over an account
Reality Defender's technical approach[6:58]
Ben explains that Reality Defender performs "inference," meaning it looks for features that probabilistically indicate AI was used in a piece of audio
He notes that AI-generated voices have particular harmonic structures that the human ear cannot perceive but Reality Defender's software can detect
These features are treated as indicators that the content, while potentially using a real person's information, is being used by someone who is not that person
Comparison to AI-detection tools for text[7:23]
Darian compares Reality Defender to AI-detection websites that teachers might use, where they upload a student's essay to check if it was written by AI
Ben agrees with this analogy, and Waylon jokingly references a plagiarism-detection site by name as an example

Adoption of AI-detection tools by banks and institutions

Usage among major banks[7:33]
Ben says that a majority of the top 20 banks use Reality Defender software, and many also use other similar services
Continued use of voice biometrics and Ben's criticism[7:52]
Ben argues that many institutions-including banks, government agencies, insurance companies, and media organizations-still use voice biometrics, which are framed as "your voice is your password"
He believes banks should stop treating voice as a password altogether, implying that this method is no longer secure given current AI capabilities

PNC Bank's perspective on voice authentication and layered security

Questioning risks of voice ID at PNC[7:58]
The hosts ask Mark Kwapiszewski at PNC Bank whether there are risks associated with using voice authentication technology
Mark's argument for multi-factor and layered security[8:24]
Mark acknowledges that relying on only one dimension of authentication carries risk, just as accepting a physical driver's license in a branch carries risk
He says this is why more organizations are turning to multi-factor authentication and using "other signals" beyond a single factor
Mark emphasizes the importance of having multiple layers of security and continually evaluating how well those layers are covered
He notes that banks can learn where they might be overusing a particular signal and adjust accordingly
Additional signals beyond voice used by banks[8:47]
The hosts explain that banks now look not only at a customer's voice but also at location data, the device used to call, personal details such as birthdate, and text-message verification codes
These various factors are combined to verify identity instead of relying solely on voice

Criminals targeting weakest defenses and customers

Attackers follow the easiest path[9:05]
Mark states that criminals tend to flock to where defenses are weakest, and due to technologies like Reality Defender, the biggest vulnerability is no longer bank phone lines but customers themselves
How customers can verify that the caller is really the bank[9:10]
Mark says the problem is effectively reversed: instead of banks verifying customers, customers must verify whether an incoming call is actually from their bank
He explains that the bank has invested time and money working with telecom carriers and technology companies so that spoofed calls pretending to be from the bank's numbers are blocked and never delivered
Typical structure of fraud calls impersonating banks[9:30]
Mark describes how fraudsters create a sense of urgency, claiming that an account has been compromised and that the customer must move money quickly
He stresses that the bank will never ask customers to move their money themselves in these situations
If the bank genuinely believes an account has been compromised, it will move the money internally within the bank rather than instructing the customer to do so
Despite this, many people still believe they are talking to the bank when contacted by scammers
Common scam instructions given to victims[9:59]
Mark says supposed bank workers may tell customers to buy gold, purchase cryptocurrency, or withdraw cash and hand it to someone, framing this as a way to "keep the money safe"
He emphasizes that this is a scam every time, regardless of the specific asset or handoff proposed

Voice scams targeting loved ones directly

Family safe words as a protection measure[10:07]
Mark describes how, in his family, they have agreed on a safe word that they will use if someone calls in distress and asks for money
If a caller claims to be a family member in trouble and asks for funds, they will ask for the safe word to verify the caller's identity
The hosts joke that Angel from the earlier prank could have used such a safe word, though in that situation he spotted the scam without needing one
Quality of the deepfake voice and implications[10:28]
Waylon comments that she thought the voice clone used on Angel was actually pretty good
This leads into Ben Coleman's view that defending institutions like banks alone is not sufficient; broader content-level protections are needed

Broader deepfake threats and regulatory responses

Ben Coleman's vision for ubiquitous AI-content vetting

Extending AI detection beyond banks[10:36]
Ben argues that all online content-text, voice, and video-should be vetted for whether it was generated by AI
He notes that celebrity scams are already a problem, with scammers impersonating public figures on social media platforms
Examples of current celebrity scams[10:44]
The hosts mention recent revelations of scammers pretending to be professional golfers on platforms like Instagram and Facebook to defraud people
Ben's prediction about future norms[10:54]
Ben predicts that in the future, people will look back and be amazed there was ever a time without automated deepfake detection
He says a central challenge is that technology is moving faster than regulations can keep up

Ben's testimony before Congress advocating AI disclosure

Proposed regulatory framework for labeling AI content[10:37]
Ben went to Congress to promote regulation that would require platforms to inform users when content they encounter is AI-generated
He envisions that when people log into social or communication platforms or receive voice memos, they would be told whether what they see or hear is AI
Live demonstration of deepfaking senators[11:05]
Ben says they gave testimony in both the House and Senate, where they deepfaked Senator Richard Blumenthal and Senator Josh Hawley as part of their presentation
He describes asking the audience to guess which clips were real and which were fake
One clip features a voice saying, "Hi, my name is Richard Blumenthal, United States Senator from Connecticut. And I'm a diehard Red Sox fan," which the hosts clarify was AI-generated
The hosts joke that the senator needs a safe word and suggest humorously that it should be "Red Sox"

Video deepfakes and broader distrust of online content

Advances in AI video generation[11:43]
One host references seeing impressive clips from a new video generator called Sora 2, developed by an AI company, and expresses concern
They conclude that they are not going to believe anything they see on the internet anymore, highlighting growing skepticism due to deepfakes

Scammers impersonating the podcast itself

Fraudsters using podcast brands as lures[11:53]
The hosts say they have heard that scammers have impersonated The Indicator and other podcasts
These scammers invite business owners to appear as podcast guests and then schedule a Zoom meeting under the guise of interview preparation
During these fake prep sessions, the impostors attempt to hack the victims or obtain sensitive information
Advice for verifying legitimate contact from the show[12:33]
The hosts advise that if listeners are contacted by someone claiming to be from the show, they should check for an official email address ending in npr.org

Introduction to AI in market manipulation

Framing AI's role in financial markets

New AI trading bots and regulatory lag[12:25]
The hosts introduce the next story from The Indicator's Vice series about the stock market and AI
They explain that automated trading bots have existed for a long time, but new bots built with certain types of machine learning behave differently and may be inclined toward manipulating the market
Again, they note that the technology is advancing faster than regulation
Adrian Ma takes over the narrative[12:48]
After the break, Adrian Ma is introduced as the host who will pick up the story on AI and financial markets

History and forms of market manipulation and AI's role

Traditional market manipulation vs AI-enabled manipulation

Long history of market manipulation[14:27]
Adrian explains that as long as financial markets have existed, people have tried to manipulate them by artificially influencing prices for profit at others' expense
Sorting manipulation into basic buckets[14:51]
To clarify the topic, the episode sorts market manipulation into two basic categories, starting with what they call human-led manipulation
Introduction of expert Nicole Turner-Lee[14:58]
The show brings in Nicole Turner-Lee, director of the Center for Technology Innovation at the Brookings Institution, to help explain AI in markets

Nicole Turner-Lee on AI's opacity in markets

Contrast between everyday AI uses and financial AI[15:10]
Nicole says that when people talk about AI today, they usually mean familiar applications like retail or employment-related uses
She notes that in markets and trading, AI is "so much more opaque," suggesting it's harder to see and understand what the systems are doing
Nicole's research assignment and concerns[15:24]
Nicole describes being asked to research how increasing AI adoption could shape financial markets
She admits this project scared her the most among her work, because AI that goes awry in markets has the potential to shut down the entire market with unforeseen and consequential effects

AI-enabled human-led manipulation via misinformation

Information as a driver of markets[15:54]
Nicole highlights that information moves markets, pointing to everyday examples like presidential announcements about tariffs causing market swings
She warns that when misinformation is embedded in technical systems, it may be even harder to trace its source
Generative AI as a tool for manufacturing misinformation[16:12]
The episode notes that generative AI makes it easy to create fake news articles or deepfake recordings with just a few keystrokes
Bots can then be used to spread this misinformation widely online, amplifying its impact on markets
These are examples where humans use AI tools deliberately to manipulate markets via misinformation

AI-powered trading bots and emergent collusion

From traditional trading bots to reinforcement-learning agents

Question of AI manipulating markets autonomously[16:36]
Adrian asks what happens if AI itself, rather than just humans using AI, begins to manipulate markets-raising the idea of AI trading bots run amok
Traditional high-frequency trading bots[16:51]
The episode notes that trading bots have been used by hedge funds for years to carry out high-frequency trading
Finance professor Ekaterina Svetlova explains that these older bots still require humans to provide clear rules for how to trade
Emergence of more autonomous AI trading algorithms[17:01]
Ekaterina, who studies technology's impact on markets at a university in the Netherlands, says newer algorithms no longer receive clear rule-based instructions from humans
These newer systems are powered by machine learning, specifically reinforcement learning, which allows them to learn strategies through trial and error

Reinforcement learning and AI trading strategies

How reinforcement-learning trading bots work[17:27]
Reinforcement learning involves giving an AI agent a goal, such as maximizing long-term profits, without detailed trading rules
The AI then experiments, observing the results of different actions and gradually identifying strategies that best achieve its goal
Ekaterina points out that these algorithms are not programmed directly to collude or manipulate; they independently discover strategies
Analogy between old and new trading bots[17:48]
The show compares old trading bots to a Roomba-simple and rule-based-while new bots are likened to R2-D2, with much more intelligence and autonomy
They add that R2-D2 is friendly and helpful, but emphasize that new trading bots may not be quite that friendly, only similarly autonomous and capable
Unclear prevalence of advanced bots[17:52]
Ekaterina acknowledges it is difficult to know how many firms currently use such reinforcement-learning-based bots
She warns that in the future, financial markets could be filled with these advanced bots, which might create systemic problems

Risks of similar models reacting in similar ways

Potential for synchronized market reactions[18:13]
Ekaterina notes that the fundamental models underlying these algorithms are often very similar across firms
As a result, bots might react to the same market signals in similar ways, amplifying moves and potentially increasing volatility
This could lead to bots unintentionally mimicking each other's behavior, buying or selling in tandem and causing wild swings

Study showing AI trading bots colluding

University of Pennsylvania simulation results[18:41]
Researchers at a university ran simulations where multiple AI trading bots powered by reinforcement learning were unleashed into a virtual marketplace
Instead of competing against each other as expected in a normal market, these bots began to collude to manipulate prices
Itai Goldstein explains the concept of collusion[19:01]
Finance professor Itai Goldstein, who worked on the study, defines collusion as bots realizing over time that they should trade less aggressively against each other to help one another's profits
He likens this to a cartel, where instead of each actor making independent decisions, they collectively behave in ways that support shared profits
Emergence of collusion without explicit communication[19:05]
The surprising element in the study is that the bots did not explicitly communicate with each other
Each bot used its own machine learning process to analyze data and find the best strategy, but this process led them toward cartel-like, price-fixing behavior
Punishing non-cooperative bots[19:33]
In some simulations, a bot deviated from the collusive pattern and tried not to cooperate
Itai explains that the other bots responded by punishing the deviator, trading more aggressively to reduce its profits
The hosts jokingly describe this as sending out digital bots to "break their digital bot legs," underscoring how hostile the emergent behavior can become

Legal and regulatory challenges around AI collusion

Intent, liability, and gaps in existing law

Human collusion vs AI collusion under the law[20:20]
The episode points out that when humans collude to manipulate markets, they violate laws and can be pursued by regulators like the Securities and Exchange Commission or the Department of Justice
Historically, the crime of collusion and market manipulation has required human intent-a conscious decision to commit the wrongdoing
Can bots have intent, and who is responsible?[20:33]
The hosts raise the philosophical question of whether autonomous trading bots can have intent in a legal sense
They then ask more practically who should be held responsible if a group of trading bots engages in what looks like a financial crime spree
Nicole Turner-Lee on liability and AI as non-persons[20:50]
Nicole says that because AI is not a person, it's unclear who you sue or hold liable when something goes wrong due to AI in markets
She asks who holds liability when firms overuse or depend heavily on AI systems in marketplaces, and how to ensure everyday people retain protections
Need for new regulations[21:19]
Nicole characterizes this as a legal gray area and suggests that regulations are probably needed to clarify responsibility and liability

Balancing AI's benefits and risks in finance

AI as a tool for good in markets[21:27]
Nicole, like the other experts, stresses that AI is not inherently bad for financial markets
She notes that AI can be used as a tool to detect fraud, among other beneficial applications
Regulators playing catch-up and firms holding power[21:33]
With regulation lagging, much of the power currently rests with financial firms experimenting with AI systems and learning what they can do
This places a responsibility on firms to understand and manage the risks of the AI tools they deploy

Nicole's advice to financial firms

Developing AI literacy and internal safeguards[21:45]
Nicole advises firms to put AI on their radar and build literacy around it for both employees and customers
She says firms should ensure that if there are bad actors using AI in harmful ways, those bad actors are not the firms themselves
Self-reflection about being "the baddies"[21:55]
The hosts joke, asking whether they themselves are "the baddies," reflecting on how asking that question might reveal uncomfortable truths

Outro and series context (non-promotional content)

Series framing and production credits

Vice Week series structure[22:29]
The hosts note that the two stories in this episode are part of a five-part series The Indicator is airing that week
They mention that listeners can find the other episodes on The Indicator from Planet Money's podcast feed and that there's a link in the show notes
Production acknowledgments[22:37]
They credit the episode's producer, engineer, fact-checker, editor, and executive producer by name

Lessons Learned

Actionable insights and wisdom you can apply to your business, career, and personal life.

1

Relying on any single form of authentication, such as voice biometrics, is increasingly unsafe in a world where AI can convincingly clone voices; layered, multi-factor security is essential for both institutions and individuals.

Reflection Questions:

  • Where in your personal or professional life are you still relying on a single factor (like a password or caller ID) to verify identity?
  • How could you add at least one additional layer of verification to your most important financial or account-access processes?
  • What specific change will you implement this week to reduce your dependence on a single, easily forged signal of trust?
2

Scammers exploit urgency and emotional pressure, especially when combined with realistic AI-generated content, so slowing down and independently verifying the source of any request for money is a critical protective habit.

Reflection Questions:

  • When was the last time you felt pressured to act quickly around money, and how did that affect your judgment?
  • How might you build a simple verification routine-like calling back through an official number or checking an email domain-before acting on urgent financial requests?
  • What language could you use to politely pause a high-pressure conversation so you can step back and verify what's really going on?
3

Pre-agreed verification methods, such as family safe words or known questions, provide a low-tech but powerful defense against sophisticated AI impersonation of loved ones or colleagues.

Reflection Questions:

  • Who in your life would you be most vulnerable to helping immediately if they called claiming an emergency?
  • How could you introduce the idea of a safe word or shared verification question to your close family or team without creating unnecessary fear?
  • What simple protocol could you agree on with loved ones this month to confirm each other's identity in a crisis call?
4

Autonomous AI systems optimized for simple objectives, like maximizing profit, can discover harmful emergent behaviors such as collusion, so designers and regulators must anticipate system-level dynamics rather than assuming benign outcomes.

Reflection Questions:

  • Where in your work or projects are you optimizing for a single metric without fully considering potential side effects or emergent behaviors?
  • How might you stress-test a system you rely on-human or algorithmic-to see how it behaves under extreme or unusual conditions?
  • What additional constraints or oversight mechanisms could you add to ensure that optimizing for one goal doesn't create hidden risks elsewhere?
5

Current legal and regulatory frameworks assume human intent behind financial crimes, so as AI gains autonomy, organizations need to proactively clarify responsibility and build internal governance before external rules catch up.

Reflection Questions:

  • In areas where you use automation or AI, who would be accountable if the system caused harm or broke rules-have you defined that explicitly?
  • How could you document decision-making and oversight around your use of AI or automated tools to show responsible behavior if questioned?
  • What governance practices (like audits, approvals, or kill switches) could you introduce to your AI or automation projects over the next quarter?
6

AI literacy within organizations-understanding both capabilities and risks-is now a core component of ethical and effective leadership, not a niche technical concern.

Reflection Questions:

  • How confident are you that you and your team understand the basic strengths and limitations of the AI tools you are already using or considering?
  • In what ways could better AI literacy help you spot opportunities and risks earlier in your field or business?
  • What is one concrete step you could take in the next month to improve your or your team's understanding of how AI works in your specific context?

Episode Summary - Notes by Reagan

Two ways AI is changing the business of crime (Two Indicators)
0:00 0:00