Deepfakes and the War on Truth with Bogdan Botezatu

Published October 17, 2025

About This Episode

The episode explores how scams and cybercrime are being transformed by AI, deepfakes, and global connectivity, with cybersecurity expert Bogdan Botezatu explaining the scale of financial losses and the sophisticated business structures behind modern scams. The conversation covers deepfake-driven fraud, psychological manipulation tactics like pig butchering romance scams, technical tools such as honeypots, and vulnerabilities in critical infrastructure like solar inverters. They also discuss the challenges of detecting deepfakes, the role of law enforcement partnerships, and why reporting scams is crucial despite the stigma victims often feel.

Disclaimer: We provide independent summaries of podcasts and are not affiliated with or endorsed in any way by any podcast or creator. All podcast names and content are the property of their respective owners. The views and opinions expressed within the podcasts belong solely to the original hosts and guests and do not reflect the views or positions of Summapod.

Quick Takeaways

  • Global scams are estimated to cause around $1 trillion in losses annually, within a broader cybercrime ecosystem of roughly $9 trillion, with many incidents never reported.
  • Scammers increasingly use AI and deepfakes of trusted public figures to run one-to-many fraud campaigns via social media ads and compromised high-subscriber accounts.
  • Most scams rely far more on psychology than technology, exploiting curiosity, loneliness, fear, and greed to override victims' critical thinking.
  • Romance-based pig butchering scams can last months, use deepfaked photos and videos, and often cause devastating emotional harm in addition to massive financial losses.
  • Deepfakes are becoming harder to spot through visual artifacts alone, so assessing whether the behavior and message fit the real person is more reliable than looking for glitches.
  • Cybercrime groups operate like corporations, with product, translation, web development, and call center divisions, and even use lie detector tests when hiring.
  • IoT and renewable energy infrastructure, such as internet-connected solar inverters, create new national security risks by exposing parts of the power grid to remote compromise.
  • Law enforcement and cybersecurity firms increasingly collaborate to dismantle cybercrime rings, but only a small fraction of scams (around 7%) are reported, limiting effective response.
  • Reporting scams helps allocate proper resources and combats stigma, as even highly educated and prominent individuals can fall victim to sophisticated attacks.
  • Future defenses are likely to focus less on detecting whether content is AI-generated and more on identifying malicious intent and educating users about red flags.

Podcast Notes

Framing the conversation: AI, scams, and the "end of the world" tone

Hosts introduce the theme of AI-enabled scams and deepfakes

They joke about Frankenstein's monster now looking tame compared to AI and cybercrime[1:07]
They preview an exploration of how technology might accelerate societal problems, framed humorously as "going to hell in a handbasket"[1:14]

StarTalk Special Edition setup

Show is identified as StarTalk Special Edition focusing on "scams in the age of AI"[1:33]
Hosts Chuck Nice and Gary O'Reilly emphasize that scams now feel constant via texts, calls, and emails[1:45]
They question whether this is just their perception or genuinely a global issue
They point out that technology and AI have supercharged scams and raise the question "what is real" online[2:09]
They tease topics like deepfakes, the "dead internet theory," and whether people are failing daily Turing tests in a bot-dominated internet[2:24]

Guest introduction and role

Introducing Bogdan Botezatu and Bitdefender

Gary introduces guest Bogdan Botezatu, highlighting his title "Director of Threat Research and Reporting" at Bitdefender[4:15]
Bitdefender is described as a Romania-based company with a goal of protecting the world, with offices beyond Romania[4:24]
Bogdan joins the show and jokes that people may think his appearance is itself a deepfake[4:55]
He notes he prefers to be called "Bob" because it is easier for everyone to pronounce[5:08]
Hosts joke that "easier for everyone" really means easier for Americans who often shorten unfamiliar names

Scale and economics of global scams and cybercrime

Difficulties in measuring global scam losses

Bogdan explains that it is very hard to quantify global scams because most are unreported or not aggregated internationally[5:44]
He cites GASA, the Global Anti-Scam Alliance, which estimates about $1 trillion in scam-inflicted losses for 2024[6:00]
He clarifies this $1 trillion figure is an estimation rather than a complete, fully reported total
He puts scams in context of a broader global cybercrime market of roughly $9 trillion[6:49]
Within that $9 trillion, he says $1 trillion in scams is a conservative and "reasonable" figure

Underreporting and shame among scam victims

Bogdan notes that many victims are ashamed to admit they lost money, so cases never reach authorities[7:09]
He emphasizes that victims often lose hundreds of thousands of dollars, not just small amounts like $100 or $500[7:18]
He highlights that scams can run for a very long time, with criminals building trust before inflicting maximum damage[7:30]
While agencies like the FBI have U.S. stats, many affected countries do not centrally report scam incidents, adding to uncertainty[7:42]

Tools and channels used by scammers

Preferred attack avenues: instant messaging, calls, and email

Bogdan says attackers have a wide variety of channels but prefer instant messaging and direct phone calls because they are immersive[8:18]
Those channels allow scammers to apply pressure and create urgency, making compliance more likely
Email is described as more "static" because recipients can pause, think, and decide not to respond[8:34]
He gives an example of being woken at night by a message claiming to be from a bank reporting live account depletion and urging immediate contact[8:49]
This scenario illustrates how fear and time pressure are exploited to override skepticism
He lists channels: instant messaging, SMS, phone calls, mass communications, mass advertising, and compromise of business social media accounts[8:18]

Mysterious silent phone calls and possible motives

Technical explanation for silent calls

Bogdan offers a technical theory: scammers use complex VoIP and call-center software to spoof numbers and origin countries, which can glitch[12:23]
Glitches can lead to calls hanging, operators being placed on pause, or failures that cause silence on the line[12:23]

Speculative explanation: harvesting voice samples

He shares a more speculative concern: scammers might be building massive databases of recorded words like "yes" and confirmations[13:36]
He observes that in many European countries people answer the phone with "yes" instead of "hello"[13:09]
He suggests recorded acknowledgments could later be used to bypass voice-based authentication or to simulate contractual consent[13:53]
He notes that voice is a biometric, and in some contexts saying "yes" can substitute for a signature
He asks listeners to consider how, 10 years ago, few would have believed that a two-minute voice sample could enable long-form voice spoofing[13:57]

Deepfakes in scams: prevalence and mechanics

One-on-one versus one-to-many scams

Bogdan distinguishes one-on-one scams (e.g., instant messaging approaches) from one-to-many mass scams[14:59]
He says deepfakes are especially powerful in one-to-many scams, where a single fake video can reach huge audiences[14:46]

Use of recognizable public figures in deepfakes

Cybercriminals create deepfakes of globally recognized and trusted figures: influencers, politicians, and doctors[15:27]
He notes hosts themselves are vulnerable because criminals can train algorithms on abundant online footage of them[15:37]
Deepfakes are then used to promote scams ranging from medical supplements to "huge crypto investments"[16:00]
Distribution channels include stolen YouTube accounts and paid social media ads featuring the deepfake videos[16:16]
He describes this as leveraging the "trust" audiences place in the impersonated figure to drive scam conversions

Scale of deepfake ad campaigns

Bogdan reports seeing tens of thousands of such deepfake ads running on social networks[16:59]
He mentions large YouTube channels being compromised and repurposed as billboards for crypto scams[17:06]
One compromised account he cites had 28 million subscribers, giving scammers access to an audience larger than Romania's population[17:09]

Personal deepfake anecdotes and the full scam pipeline

Chuck's experience falling for a deepfake endorsement

Chuck recounts being fooled by a deepfake of Sam Harris promoting a type of product (not a specific brand)[17:18]
After seeing the deepfake, he searched the product, and then received increasingly targeted ads, eventually leading him to buy it[17:31]
He describes the process as a back-and-forth amplification between his search behavior and ad targeting systems

Bogdan on AI-powered scam "corporations"

Bogdan describes Chuck's experience as "AI going full circle": AI creates the billboard and platform algorithms profile the user to serve ads optimally[19:09]
He notes that cybercriminals often operate like corporations with organized divisions[19:33]
He lists "product" teams that build the deepfakes, translation teams for multilingual content, web developers for scam infrastructure, and QA and sales support
Through cooperation with law enforcement, Bitdefender has learned that some cybercrime groups run call centers that onboard victims and sign them up for fraudulent schemes[20:16]
Employees in these criminal call centers can be polygraphed before hiring to ensure they are not undercover police and will not betray the operation[20:13]
He characterizes this as "cybercrime incorporated," emphasizing this is not just a lone scammer in a basement but a serious investment-driven business model[20:38]

Who gets targeted: demographics, regions, and specialization

Lack of centralized targeting but high specialization

Bogdan says everyone is "welcome" as a victim; criminals are happy to take anyone's money[21:06]
He clarifies there is no unionized "scamming syndicate" coordinating global demographic targeting[21:31]
Instead, distinct cybercrime groups specialize in scams that convert well in their specific regions and contexts[21:45]

Regional differences in valuable data and scam types

He contrasts how leaking a social security number is hugely damaging in some countries, but nearly valueless or pseudo-public in parts of Europe[21:51]
Attackers adapt to what type of information or scam motif leads to successful conversions in each region[21:43]

Romance scams and gender tendencies

He notes some groups specialize in romance scams and tend to target men more than women[22:22]
According to Bogdan, men are generally more careless when it comes to sharing information with partners in these contexts[22:38]
He says women tend to be more reserved and slower to advance the relationship but, when they do fall for a scam, may suffer more profound impacts[22:52]
He stresses that the high volume of daily scam attempts can create the illusion of a single organized entity, while it is actually multiple groups attacking simultaneously[23:13]

Reputational deepfakes targeting public figures

Neil deGrasse Tyson's experiences with deepfake impersonations

Neil describes a video where he was deepfaked narrating content about the Big Bang, which was about 85% correct and 15% misleading or wrong[23:52]
The video showed him in a podcast-like setting and gained many views online[23:52]
A friend, Terry Crews, texted Neil praising the video, believing it was real, illustrating how convincing it was even to people who know him[24:18]
Neil notes that the fake lacked the rhythm and vocal timbre he recognizes in his own speech, and that his lower registers were missing, which helped him personally identify it as fake[24:46]
He mentions another deepfake of him allegedly commenting on a video game release, using language and vulgarity that do not match his public persona[25:26]
The video portrayed him as someone who "likes sitting in my mother's basement" while playing a video game, which he finds comically out of character

Dual-victim nature of such deepfake scams

Bogdan frames these impersonations as crimes with two distinct victims: the impersonated figure and the scammed audience[26:04]
For the public figure, the harm is reputational; they become unwitting accessories to a scam and may disseminate misinformation under their name[26:10]
For the audience, the primary harm is financial loss when they heed the fake endorsement or call to action[27:03]

Recommended responses for impersonated individuals

Bogdan advises not to call Bitdefender for individual deepfake impersonations but to contact the hosting platform to have the video removed[26:25]
He suggests using one's public outreach to inform audiences about the impersonation and encourage due diligence[27:33]
He emphasizes the educational role of public figures in warning that anyone can be deepfaked and that hidden agendas may underlie seemingly authentic content[28:00]

Detecting deepfakes: current tells and future approaches

Technical artifacts as imperfect signals

Bogdan acknowledges that some deepfakes still show poor lip sync or visual artifacts, such as earlier AI struggles with teeth and correct numbers of fingers[28:46]
He notes, however, that these obvious glitches are progressively being ironed out as technology improves[27:37]

Focusing on plausibility and behavioral consistency

He argues users should rely less on spotting technical defects and more on judging the likelihood that what they see or hear is real[29:11]
When analyzing Neil's fake videos, his team focused on inconsistencies like Neil discussing topics he normally would not, using atypical language, or recommending products[29:36]
He suggests future defensive tech may incorporate knowledge bases about public figures: what they would realistically endorse or discuss[30:14]
Example criteria: Neil is known as a science communicator, not a vulgar game reviewer or commercial product pitchman, so such content should raise suspicion
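The behavioral-consistency idea can be sketched as a simple rule-based check: compare a piece of content against a small profile of what a public figure plausibly discusses. The profile fields and keyword lists below are illustrative assumptions for the sake of the sketch, not Bitdefender's actual method.

```python
# Illustrative sketch of a behavioral-consistency check for suspected
# deepfakes: score content against a known profile of the public figure.
# The profile data and red-flag keywords are hypothetical examples.

PROFILE = {
    "role": "science communicator",
    "typical_topics": {"astrophysics", "cosmology", "science education"},
    "red_flags": {"crypto", "investment", "supplement", "giveaway"},
    "endorses_products": False,
}

def consistency_flags(topics, mentions_product, text):
    """Return a list of reasons the content looks out of character."""
    flags = []
    off_topic = set(topics) - PROFILE["typical_topics"]
    if off_topic:
        flags.append(f"atypical topics: {sorted(off_topic)}")
    if mentions_product and not PROFILE["endorses_products"]:
        flags.append("product endorsement from a non-endorser")
    hits = [w for w in PROFILE["red_flags"] if w in text.lower()]
    if hits:
        flags.append(f"scam-associated terms: {hits}")
    return flags

# A fake "crypto investment" clip raises several flags,
# while a normal cosmology clip raises none.
fake = consistency_flags(["crypto trading"], True,
                         "Double your money with this crypto investment!")
real = consistency_flags(["cosmology"], False,
                         "The cosmic microwave background tells us...")
print(len(fake) > 0, len(real) == 0)  # → True True
```

The point of the sketch is that this kind of check needs no image forensics at all: it operates purely on whether the message fits the person, which tracks Bogdan's suggestion of knowledge bases about what public figures would realistically endorse.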

Psychology of scams versus technology

Scams as brain hacking

Bogdan estimates that 90% of scams are about psychology and only 10% about technology and science[31:42]
He says scamming is essentially hacking the human brain by pushing emotional buttons[31:50]

Examples of emotional triggers exploited

Failed package delivery scams create curiosity about what the package is and where it came from, distracting from phishing risks[32:04]
Romance scams prey on loneliness by targeting people who are willing to spend all day talking to a "stranger" who contacted them via a supposed wrong number[32:40]
Investment scams tap into natural greed with promises to multiply money quickly, without explaining the economic impossibility[32:57]
He concludes that technology primarily amplifies reach and efficiency but the core manipulations are psychological[32:39]

Technology as an enabler: translation and automation

Bogdan describes getting a scam message in Romanian and replying in Finnish, which he uses as a kind of Turing test[32:49]
The scammer deleted their original message and replied in Finnish, then sometimes reverted to Romanian, deleted, and replaced with Finnish translations in near real time[34:15]
This demonstrated to him that scammers leverage instant translation tools to reach niche languages like Finnish, dramatically broadening their target markets
He explains that APIs let computers control messaging platforms to broadcast to dozens or thousands of victims simultaneously[33:45]
He notes universal payment mechanisms like credit cards and cryptocurrencies further simplify cross-border exploitation[34:59]
He expands API as Application Programming Interface, a way to hook applications like instant messaging into systems for mass communication[35:11]
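As a rough illustration of the one-to-many amplification described here, the pattern reduces to a short loop: an automated client fills in a template and fans it out programmatically. The `MessagingClient` below is a hypothetical stand-in, not any real platform's API.

```python
# Illustrative sketch of how API access turns one message into a mass
# campaign. MessagingClient is a hypothetical stub for demonstration.

class MessagingClient:
    """Stand-in for a messaging-platform API client."""
    def __init__(self):
        self.sent = []

    def send(self, recipient, text):
        self.sent.append((recipient, text))

def broadcast(client, recipients, template, name_of):
    # One template, personalized and fanned out programmatically:
    # the "one-to-many" pattern that lets a single operator reach
    # dozens or thousands of targets at once.
    for r in recipients:
        client.send(r, template.format(name=name_of(r)))
    return len(client.sent)

client = MessagingClient()
count = broadcast(client, ["+1555", "+4072"],
                  "Hi {name}, is this still your number?",
                  name_of=lambda r: "there")
print(count)  # → 2
```

Combined with the instant-translation step Bogdan describes, the same loop could emit each message in the recipient's language, which is what broadens the target market to niche languages like Finnish.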

Pig butchering and honeypot concepts

Pig butchering scams explained

Bogdan says pig butchering is a scam type popular in Southeast Asia, named for fattening a pig before slaughter[36:57]
In these scams, criminals build targets' trust over weeks or months before inflicting large financial losses[37:16]
He outlines a common pattern: a stranger texts with a wrong-number pretext, e.g., a woman saying she is at the airport waiting to be picked up[37:32]
When the recipient says it's the wrong number, the scammer thanks them, mentions visiting the city, and asks for local recommendations, initiating an ongoing conversation
Over time, scammers exchange photos and videos, often generated with deepfake technology, to strengthen the illusion of a real relationship[38:25]
Eventually, the scammer claims success through cryptocurrency, saying they want to share their secret and guide the victim into high-return "investments"[37:11]
By the time financial fraud begins, the victim has often fallen in love despite never meeting the person in real life[39:03]
Bogdan recounts speaking with victims who lost hundreds of thousands of dollars and said they cared less about the money than losing the person they texted every morning[38:59]
He emphasizes the psychological damage and loneliness that make these scams particularly cruel

Honeypots as a cybersecurity research tool

Bogdan explains that in cybersecurity, a honeypot is a system that pretends to be a victim machine to attract attackers[41:04]
When criminals attempt to hack it, the honeypot records every step, allowing researchers to reconstruct attack methods[41:44]
Honeypots help reveal tools, tactics, and procedures, and the early-stage signs that can be used to block attacks in real environments[41:59]
He notes honeypots are used to collect virus samples, study IoT device hijacking for botnets, and record scam conversations to extract red flags[42:16]
He frames honeypots as a way for "good guys" to stay up to date with the latest hacking techniques[42:58]
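In miniature, a honeypot is just a listener that pretends to be a service and records everything a connecting "attacker" does. The sketch below, using only Python's standard library, connects to itself to show the recording step; real honeypots emulate vulnerable services in far more detail, and the fake banner and payload here are arbitrary.

```python
# Minimal honeypot sketch: a TCP listener that records whatever a
# connecting "attacker" sends. Port, banner, and payload are arbitrary.
import socket
import threading

captured = []  # every step the attacker takes gets recorded here

def honeypot(server_sock):
    conn, addr = server_sock.accept()
    with conn:
        conn.sendall(b"220 fake-ftp ready\r\n")  # pretend to be a service
        data = conn.recv(1024)
        captured.append((addr[0], data.decode(errors="replace")))

server = socket.socket()
server.bind(("127.0.0.1", 0))        # OS picks a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=honeypot, args=(server,))
t.start()

# Simulated attacker probing the fake service:
attacker = socket.create_connection(("127.0.0.1", port))
attacker.recv(64)                    # read the fake banner
attacker.sendall(b"USER admin\r\n")  # a typical brute-force first step
attacker.close()
t.join()
server.close()

print(captured[0][1].strip())  # → USER admin
```

The `captured` log is the whole value of the technique: from many such recordings, researchers reconstruct tools, tactics, and the early-stage signs that production defenses can then block.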

Dead internet theory and human vs bot content

Reality of humans and bots online today

Gary asks whether anything online is real or whether the internet is dominated by bots and scripts[45:02]
Bogdan responds that nearly everything is "real" in the sense that critical systems like nuclear plants, financial flows, and communications are now internet-connected[45:34]
He says there are still far more humans than bots on the internet, though jokes that this may be what bots want people to think[46:11]
He points out that most social media videos, even if trivial, are still created by humans putting in effort[45:57]
Bots and AI are heavily used to scrape content and train algorithms rather than fully replace all human content creation at present[46:44]

AI influencers and future trends

Bogdan notes there are already famous online influencers whose personas are entirely AI-generated, with millions of followers[46:50]
He anticipates growing automation and a flood of AI-generated content but believes that many consumers of that content will still be human[47:20]
He differentiates between the "dead internet theory" narrative and his observation that user-generated content is still predominant today[47:02]

Defending against AI and deepfake-driven threats

Shift from traditional malware defense to AI-era risks

Bogdan says the deepfake and AI front is relatively new compared to long-standing issues like malware and phishing links[49:03]
Historically, defenses were built around blocking malicious links, executable malware, and endpoint compromises[49:21]
He notes that there are now dozens of AI-run influencer accounts, showing how AI-generated content has already become normalized[49:26]

Focusing on malicious intent rather than AI detection

Bogdan argues the more important task is not detecting AI per se but detecting malicious goals behind content or interactions[50:22]
He frames this in context of hybrid warfare, where disinformation and deepfakes aim to sow uncertainty and erode trust rather than always steal money directly[50:25]
He highlights deepfakes used to push political messages, impersonate leaders, and dilute people's ability to distinguish true from false[50:07]
He warns that if people cannot tell truth from lies, they may become apathetic and stop caring about messages altogether, which itself serves adversaries' goals[51:28]

What Bitdefender and similar tools do in practice

Evolution from antivirus to broader cybersecurity

Bogdan explains that cybersecurity is now fundamental to everyday technology use[53:14]
He says Bitdefender started as antivirus in the 1990s when home computers and the internet were booming[52:32]
Over time, needs expanded beyond simple virus protection to securing companies, data, and more complex attack surfaces[53:38]
He notes that talk of "antivirus is dead" is inaccurate; modern products have become full security suites rather than disappearing[53:23]

Addressing scams in modern security solutions

He reiterates that scams now account for about one-ninth of total cybercrime losses, making them a major focus for defenders[54:05]
Bitdefender includes features that automatically detect scam messages and classify them as such[54:05]
They also offer advisory tools where users can describe what they are seeing or upload screenshots and ask an AI assistant if it looks dangerous[53:30]
The AI assistant assesses the likelihood of a scam and points out red flags so users learn to recognize similar patterns in the future[55:02]
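The red-flag approach can be sketched as a transparent heuristic: scan a message for the psychological pressure points the episode lists (urgency, fear, greed, a pretext) and report each match so the user learns the pattern. The trigger phrases below are illustrative assumptions, not Bitdefender's detection logic, which would be far richer.

```python
# Sketch of a red-flag scam-message checker. The trigger phrases are
# illustrative examples keyed to the emotional hooks discussed above.

RED_FLAGS = {
    "urgency": ["act now", "immediately", "within 24 hours"],
    "fear": ["account suspended", "unauthorized", "legal action"],
    "greed": ["guaranteed returns", "double your money", "free prize"],
    "pretext": ["failed delivery", "wrong number", "verify your identity"],
}

def explain_red_flags(message):
    """Return {category: matched phrases} for a suspicious message."""
    text = message.lower()
    found = {}
    for category, phrases in RED_FLAGS.items():
        hits = [p for p in phrases if p in text]
        if hits:
            found[category] = hits
    return found

msg = ("Your account suspended due to unauthorized access. "
       "Verify your identity immediately to avoid legal action.")
print(sorted(explain_red_flags(msg)))  # → ['fear', 'pretext', 'urgency']
```

Returning the matched phrases, rather than a bare verdict, mirrors the educational goal described above: the user sees which emotional button the message is pushing and can recognize the pattern next time.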

Systemic risks: national security, war, and IoT vulnerabilities

Deepfakes in active war zones: Zelensky example

Bogdan situates Romania near the eastern NATO and EU borders and notes the ongoing war in neighboring Ukraine[55:45]
He recalls a deepfake video of Ukrainian President Zelensky calling on armed forces to lay down their weapons because Ukraine had allegedly surrendered[57:06]
Ukrainian security services quickly countered the fake, but he stresses it could have had severe consequences if parts of the army believed it[56:42]

IoT, solar inverters, and grid vulnerability

Bogdan explains that Bitdefender has an IoT security research wing focused on devices like smart assistants, appliances, and lights[56:52]
He highlights solar inverters as a newer IoT category: devices that convert solar panel output, manage storage, and feed energy into the grid[57:03]
These inverters are often internet-connected and frequently manufactured in China, raising security concerns[57:29]
In August of the prior year, his team examined popular inverters in Europe and found that an attacker could potentially seize control of every inverter made by a specific brand[58:39]
This vulnerability could have given an attacker influence over about 140 gigawatts of electricity[58:43]
He admits he is not an energy professional but says that figure is "a lot by any standard" and could enable large blackouts
He says it is unclear whether this was an accidental software bug or a deliberately hidden backdoor that could be used by a rival nation-state[58:02]
Germany is cited as a state that has taken cybersecurity in the inverter space seriously, recognizing the potential grid-level impact[58:56]
He notes that historically power grids were isolated from the internet, but now millions of homes have internet-connected devices that interface with the grid[58:54]
He characterizes this as millions of entry points into something that directly affects national security[1:00:07]

Prospects, law enforcement cooperation, and the importance of reporting scams

Cat-and-mouse dynamic and reasons for hope

Asked for hopeful news, Bogdan says the future will remain a cat-and-mouse game between attackers and defenders[1:01:19]
He expresses confidence that, as with malware, defenders will progressively find ways to proactively protect against deepfakes and scam tactics[1:01:42]
He notes that most online interactions remain safe, indicating that defenses are already working in many cases[1:01:58]
He emphasizes that Bitdefender is not alone; other security vendors and law enforcement agencies are powerful partners in this space[1:02:19]

Successful collaborations with law enforcement

Bogdan describes multiple cases where Bitdefender and law enforcement jointly opened investigations and dismantled cybercrime rings[1:02:25]
He says police agencies worldwide now take cybercrime extremely seriously[1:02:57]
He notes that cybersecurity experts provide technical expertise while law enforcement possesses the authority to execute arrests, metaphorically "kicking down doors"[1:03:01]

Stigma, victimhood, and underreporting of scams

Neil raises the issue of embarrassment when famous or wealthy individuals are scammed and asks whether stigma will diminish as people realize they are not alone[1:03:16]
Bogdan insists cyberattacks can happen to everyone, and victims are not at fault for enabling them[1:03:36]
He points out that even highly respected individuals have had their accounts compromised despite best efforts[1:04:03]
He emphasizes the large "surface to defend": email, messaging apps, wearables, home devices, and more, making perfect security extremely difficult[1:04:11]

Why reporting scams matters

Bogdan strongly urges victims to report scams to local law enforcement[1:04:39]
He explains that reporting can sometimes lead to assistance and also helps authorities gauge the scale of cybercrime phenomena[1:04:43]
He cites an estimate that only about 7% of scams are reported, leaving police underbudgeted and unable to properly assess impact on communities[1:05:00]
Chuck paraphrases the lesson: by not reporting, victims effectively enable and help the perpetrators by withholding crucial information[1:05:25]
Bogdan agrees, using the analogy that if a tree falls in a forest and no one hears it, it might as well not have fallen; similarly, unreported scams are invisible to authorities[1:05:44]
He says talking about scams pushes them onto local agendas and gives agencies a reason and mandate to act[1:05:58]

Closing reflections by hosts

Balancing depression and usefulness of the information

Chuck and Gary describe the conversation as depressing but necessary and ultimately helpful for listeners[1:06:42]
They joke about wanting to burn their computers, return to abacuses, or start writing letters again in response to the threats described[1:06:37]

Final sign-off

Neil jokes that AI will be our overlords and will take our money, then pulls back to say he exaggerates[1:08:03]
He ends with his signature encouragement to "keep looking up" despite the challenges discussed[1:08:09]

Lessons Learned

Actionable insights and wisdom you can apply to your business, career, and personal life.

1. Most successful scams exploit human psychology (curiosity, fear, loneliness, and greed) far more than technical sophistication, so strengthening your emotional awareness is as important as upgrading your software.

Reflection Questions:

  • In what kinds of situations do I notice my curiosity, fear, or desire for quick gain overriding my usual skepticism online?
  • How could I build a personal checklist or pause routine to engage my critical thinking before clicking links or responding to emotionally charged messages?
  • What is one recurring emotional trigger (e.g., urgency about money, fear of missing out, loneliness) that I can consciously watch for over the next month when I'm online?
2. Judging the plausibility and context of a message or video (whether the speaker would realistically say or do that) is often more reliable than hunting for visual glitches when evaluating potential deepfakes.

Reflection Questions:

  • When I see a surprising or extreme statement attributed to a public figure, do I pause to ask whether it fits their usual behavior and values?
  • How might my online habits change if I made it standard practice to cross-check unusual claims against at least one independent source before reacting or sharing?
  • What signals could I define for myself (e.g., product endorsements, vulgarity, off-brand topics) that would immediately prompt me to question the authenticity of a piece of content?
3. Cybercrime is now a large, organized industry that can target anyone, so experiencing a scam attempt is not a sign of personal foolishness but a structural reality that requires systemic and individual defenses.

Reflection Questions:

  • Where am I currently assuming "this could never happen to me" in my digital life, and how might that assumption be leaving me exposed?
  • How could I reframe scam attempts or security incidents from personal embarrassment into useful signals for improving my defenses and habits?
  • What specific steps can I take this week, such as updating passwords, enabling two-factor authentication, or educating family members, to treat cyber risk more like an ongoing business risk than a one-off event?
4. New technologies like IoT devices and AI systems greatly expand the attack surface, so adopting technology should be paired with a conscious assessment of the new entry points and dependencies it creates.

Reflection Questions:

  • What internet-connected devices in my home or workplace have I never really thought about from a security perspective (e.g., routers, cameras, smart appliances, energy systems)?
  • How might mapping out the critical systems I rely on (communications, power, finance) change the way I choose, configure, or update connected devices?
  • What is one concrete action I could take in the next two weeks to reduce unnecessary exposure, such as disabling unused remote access, segmenting networks, or changing default passwords?
5. Reporting scams and cyber incidents, even when embarrassing, is a civic act that helps allocate law enforcement resources, improve defenses, and reduce stigma for other victims.

Reflection Questions:

  • Have I ever chosen not to report a scam attempt or security issue because I felt it was minor or embarrassing, and what impact might that have had beyond me?
  • How would I handle it differently next time if I thought of reporting not as admitting failure but as contributing to a shared defense effort?
  • What channels (local police, consumer protection agencies, platform reporting tools) do I need to familiarize myself with now so I can act quickly and calmly if I or someone close to me is targeted?
6. Education and ongoing awareness (understanding common scam patterns, psychological hooks, and emerging technologies) are among the most effective long-term defenses in a fast-evolving cat-and-mouse landscape.

Reflection Questions:

  • What are the top two or three scam patterns (e.g., romance scams, failed delivery, fake bank alerts) that I and my family should be able to recognize instantly?
  • How might setting a recurring reminder to spend even 15 minutes a month updating myself on new online threats improve my resilience over the next year?
  • Who in my personal or professional circle is most at risk from these threats, and how could I share what I've learned with them in a practical, non-alarming way?

Episode Summary - Notes by Devon
