TECH006: Open-Source AI That Protects Your Privacy w/ Mark Suman (Tech Podcast)

with Mark Suman

Published October 29, 2025

About This Episode

Host Preston Pysh interviews Maple AI founder Mark Suman about building privacy-preserving, verifiable AI using trusted execution environments and secure enclaves. They discuss the cultural importance of privacy at Apple, the risks of feeding proprietary AI systems with intimate personal data, and how verifiable, open-weight models can mitigate manipulation and data leakage. The conversation also covers Maple's architecture, AI memory, the open-source vs proprietary model race, AI-assisted software development, and the potential future of running personal AI servers at home.

Topics Covered

Disclaimer: We provide independent summaries of podcasts and are not affiliated with or endorsed in any way by any podcast or creator. All podcast names and content are the property of their respective owners. The views and opinions expressed within the podcasts belong solely to the original hosts and guests and do not reflect the views or positions of Summapod.

Quick Takeaways

  • Mark frames the core AI privacy goal as verifiability: users should be able to inspect code, models, and data handling rather than simply trusting providers.
  • He warns that proprietary AI systems can capture a user's unique thought process and memories, creating a risk of subtle long-term manipulation he calls "subconscious censorship."
  • Maple AI uses secure enclaves and trusted execution environments so users can cryptographically verify that the server is running the open-source code they see.
  • Open models have rapidly closed much of the performance gap with proprietary systems, and in many real-world uses a 90-95% quality match is sufficient.
  • Maple's roadmap includes hybrid local-cloud inference and a sovereign AI memory that users can inspect, edit, and encrypt with their own private keys.
  • Mark expects the AI race to shift from ever-bigger general models toward specialized models orchestrated by routing systems.
  • He reports that 90-95% of Maple's code is now written by AI, with humans guiding, reviewing, and integrating it, leading to a roughly 10x productivity gain.
  • Long term, he would like to see personal home servers running users' AI and data, but notes that convenience has historically kept most people on centralized services.
  • He views Nostr-like public/private key identity as a powerful piece of a broader verifiable communication and AI ecosystem.

Podcast Notes

Show introduction and framing of decentralized, private AI

Framing decentralized inference as separating AI from big tech

Preston introduces decentralized inference as analogous to Bitcoin separating money from the state[0:05]
He describes a "quiet revolution" shifting control of intelligence from centralized data centers to individuals and small developers
Trusted execution environments highlighted as key enabling technology[0:29]
Preston says they will unpack how trusted execution environments and secure hardware can protect both data and computation

Podcast and host introduction

Show is identified as "Infinite Tech" by the Investors Podcast Network, hosted by Preston Pysh[0:42]
Preston notes the show explores Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money[0:51]

Guest introduction and Mark's background in privacy and Apple

Preston welcomes Mark and stresses importance of the topic

Preston calls the topic "crazy important" and predicts people will realize its importance over the next 5-10 years[1:28]
Mark says he has listened to the show a lot and is excited to be on[1:43]

Mark's early career and lifelong focus on privacy

Mark explains he began his career working on online backup software in the early 2000s[2:27]
The goal was to back up people's computers to "this new cloud thing" in a way that preserved privacy
They used client-side encryption with private keys so users could encrypt data before sending to the cloud[2:51]
He contrasts this with backing up to "someone's computer" where the operator could see all the data
Mark says privacy has "always been part of who I was" professionally[2:55]

Apple's internal privacy culture and its impact on Mark

On his first day at Apple, his manager tasked him with building a retail store system that had to be "totally private"[3:08]
He says privacy is one of Apple's core tenets and that he saw this from the inside
Within the third week he was working closely with a privacy lawyer embedded in the project[3:18]
Apple refused common industry practices like directly capturing faces, identities, and detailed banking transactions[3:33]
Instead, they had to separate data and build systems that were "totally privacy-preserving"
They had to invent new tools for tagging and annotating AI and machine learning training data in privacy-preserving ways[3:52]

Apple's AI posture, privacy constraints, and big-company dynamics

Preston's question about Apple's slow visible entry into the AI race

Preston notes that Google, OpenAI, xAI, and Apple are major players, but Apple appears slow and "on the sidelines" in AI infrastructure buildout[4:32]
He asks whether Apple's privacy focus is slowing them down versus issues like leadership[4:41]

Mark's view on Apple's privacy-driven AI approach and internal politics

Mark clarifies he is not speaking on behalf of Apple[4:51]
He believes privacy is definitely an element in Apple's slower AI rollout[5:02]
Apple announced "Private Cloud Compute" about two years prior, relying on secure enclaves and verifiable approaches
Apple does not open source their code, so users must trust third-party auditors who may only see images of server code[5:16]
He says Apple is trying to do AI in a way that "cares about the user" and is responsible in a non-censorship sense[5:38]
Mark also cites Apple's large size (100,000+ employees) and internal turf wars, duplicated products, and funding decisions as slowing factors[5:42]

Concept of verifiable AI and threats from centralized proprietary models

Defining the right lens for privacy-preserving AI

Asked how people should think about protecting their data with AI, Mark chooses the word "verifiable" as the core concept[6:35]
He explicitly connects this to the Bitcoin ethos of "don't trust, verify"
Verifiable is described as an ideology rather than a specific technology[6:51]
Examples of verifiability include open-source code, running LLMs locally, and using secure enclaves in the cloud with mathematical proofs
Goal is to inspect and verify what is happening with the model and user data rather than accept hidden processes[6:57]

Long-term threat of giving proprietary AIs your thought process

Preston admits heavy, habitual use of OpenAI and acknowledges it likely knows a lot about him[8:02]
Mark says convenience is why such tools are widely used and notes he himself has OpenAI and Grok accounts[8:09]
A friend framed the risk as giving away your thought process to a proprietary service that can store and copy it indefinitely[8:27]
Once given, that data can be reused, manipulated, or trained into models without any realistic way to "get it back"
Mark argues that a person's most unique aspect is their thought process and memories, more than their physical appearance[8:54]
He worries we might be handing over what makes us uniquely human to proprietary systems[9:22]

Subconscious censorship and manipulation potential

Mark introduces a concept he is writing about called "subconscious censorship"[10:21]
He defines it as proprietary systems capturing your memories and thought processes, then being able to alter or steer them toward more mainstream or other desired views
Preston elaborates that such models could shape influential users' thinking by gradually nudging them into cognitive "ruts"[10:46]
Mark relates this to social media algorithms that already influence emotional states by ordering posts[11:15]
He notes that by showing an anger-inducing post right before good news, platforms can downplay the impact of the good news
He argues AI could do this in a more intimate way because it knows users deeply and can embed false anchors and emotional triggers in outputs[12:02]
He emphasizes the subtlety and repeatability of such guidance over weeks, months, and years, making it hard for users to notice
Mark says he does not currently suspect large models like OpenAI or xAI are being used to deliberately manipulate users psychologically[12:38]
He and Preston agree that government involvement could rapidly make such manipulation scenarios more concerning[13:54]

Verifiable AI as mitigation and current signs of commercialization

Mark reiterates that he loves technology and uses AI daily but wants to highlight vulnerabilities[13:45]
He presents verifiable AI (open datasets, inspectable weights, open code, local or enclave execution) as the mitigation path[13:27]
He notes early signs of AI commercialization like shopping integrations and ChatGPT working while you sleep to present recommendations[14:02]
He interprets "ChatGPT will work while you sleep" as effectively meaning it will line up advertisements and product recommendations for the morning
Mark says he prefers not to "give himself" to that system and instead pursue openness and verifiability[14:21]

Maple AI: architecture, verification, and user experience goals

Need for privacy plus convenience and Maple's positioning

Mark observes it's a "tough sell" to get people to sacrifice convenience purely for privacy[16:16]
He says Maple aims to build a ChatGPT-like experience that is as good or better but with privacy at the core[16:52]
He emphasizes Maple will not harvest user data for other business purposes[16:48]

Secure enclaves, open-source code, and attestation in Maple

Mark explains Maple builds everything in the open and publishes all code online before deploying updates[18:08]
They run code in secure enclaves in the cloud that provide an attestation: a mathematical proof linking the running code to the open-source version[17:57]
This is meant to solve the gap where many "private AI" services claim privacy but can't actually be independently checked
Users see a green verified checkmark in Maple and can click to inspect details of the attestation[18:08]
Mark compares this to the HTTPS lock icon in browsers and coins his own term "HTTPS-E" for secure enclaves[18:36]
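The attestation idea described above can be sketched in a few lines. This is an illustrative toy, not Maple's actual verification code: the function names are hypothetical, and checking the hardware signature on the attestation document itself is omitted.

```python
import hashlib

def measure(code: bytes) -> str:
    # A reproducible build of the published open-source code yields a
    # deterministic "measurement" (here simply a SHA-256 digest).
    return hashlib.sha256(code).hexdigest()

def verify_attestation(reported_measurement: str, open_source_build: bytes) -> bool:
    # The enclave reports a measurement of the code it is actually running;
    # the client recomputes the expected measurement from the published
    # source and compares the two.
    return measure(open_source_build) == reported_measurement

build = b"maple-server v2.0 source tree"
attested = measure(build)                        # what the enclave reports
assert verify_attestation(attested, build)       # match: show the checkmark
assert not verify_attestation(attested, b"tampered build")
print("attestation verified")
```

In a real deployment the measurement is signed by the enclave hardware vendor's key, so a client can trust the reported value without trusting the service operator.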

Local vs cloud AI and Maple's hybrid model vision

Mark calls local AI "the most private AI" because it can run entirely on your phone or computer with the internet off[29:19]
He notes that powerful models currently cost tens of thousands of dollars to run, making pure local use impractical for most people[29:17]
Maple's design encrypts data locally using a private key and then sends it to a cloud secure enclave for processing[31:07]
Inside the enclave, data is decrypted with the user's key, processed by the AI, re-encrypted with the same key, and sent back
Mark says Maple, as the service operator, cannot see data "in the middle"; only the enclave sees plaintext and its code is auditable[33:20]
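The encrypt-locally, decrypt-only-inside-the-enclave flow can be illustrated with a minimal round trip. This is a deliberately simplified toy cipher (SHA-256 in counter mode with XOR); a real system would use an authenticated cipher such as AES-GCM, and none of these names correspond to Maple's actual code.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream derived from the user's key and a per-message nonce.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

# The client encrypts a prompt with its own key before upload; in transit
# the operator sees only ciphertext, and only code holding the key (here
# standing in for the enclave) can recover the plaintext.
user_key = secrets.token_bytes(32)
prompt = b"summarize my medical records"
blob = encrypt(user_key, prompt)
assert blob[16:] != prompt
assert decrypt(user_key, blob) == prompt
print("round trip ok")
```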

Model selection and future automated routing in Maple

Currently Maple presents a model picker, which some users find anxiety-inducing because they must choose the "best" model[33:18]
Planned 2.0 features include modes like "big brain thinking" or "quick trivia" that map to suitable models[33:30]
Longer term, Mark wants an "auto" mode where a classifier inspects prompts and automatically routes to the appropriate model[34:34]
Advanced users will still be able to override and manually select models if they wish
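The planned "auto" mode amounts to a classifier in front of a model pool. A minimal sketch of the idea follows; the keyword rules and model names are placeholders invented here, not Maple's actual routing logic or lineup, and a production router would likely use a small model rather than keywords.

```python
def route(prompt: str) -> str:
    # Toy "auto" mode: inspect the prompt and route it to a model tier.
    p = prompt.lower()
    if any(k in p for k in ("design", "analyze", "strategy", "prove")):
        return "big-brain-model"      # deep reasoning
    if any(k in p for k in ("who won", "what year", "capital of")):
        return "quick-trivia-model"   # fast, cheap lookup
    return "general-model"            # sensible default

assert route("What year did Apollo 11 land?") == "quick-trivia-model"
assert route("Design a sharding strategy for our database") == "big-brain-model"
assert route("Write a haiku about autumn") == "general-model"
```

An override for advanced users is then just letting the caller pass an explicit model name that bypasses `route`.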

Open-source vs proprietary models, performance, and business dynamics

Performance gap between open-weight and proprietary models

Preston notes that in his experience, the latest proprietary models substantially outperform open-weight models[24:13]
Mark acknowledges that early open models were far worse but says they have rapidly improved over the last 2-3 years[24:24]
He describes a progression from roughly 50% as good, to 75%, to now around 90+% of proprietary performance
He mentions the Qwen3 Coder model, which scores similarly to proprietary models on some programming benchmarks[24:39]
Mark argues that most people do not need the last few percentage points of model quality to get high value[24:51]

Signals that proprietary progress may be slowing and open is catching up

He characterizes GPT-5 as an incremental improvement over GPT-4, with some users preferring 4 and asking for it back[25:08]
He notes analyses suggesting GPT-5 still routes many queries through GPT-4 under the hood[25:20]
He mentions OpenAI open-sourced "GPT-OSS", which he says is essentially based on GPT-4[25:28]
He points to Chinese models like DeepSeek and Qwen as competitive open options that will likely remain open to gain adoption[25:50]

Business model questions for open models and geopolitical angles

Preston asks how open providers make money and what incentives they have to open their models[25:50]
Mark admits he does not have a complete answer but speculates that if a country like China lacks chip access, it may open models to spread its worldview[26:22]
He suggests open release lets them compete by distributing their perspective widely when they cannot match proprietary incumbents directly[25:48]

Specialized models, AI memory, and user-controllable context

Shift from bigger general models toward specialization and routing

Mark says it is hard to predict even 2 years ahead but expects progress in ever-larger general models to slow[27:44]
He expects more specialized models in domains like coding, medicine, and law, with general models acting as routers or general contractors[27:44]
He notes that a coding model fine-tuned on software engineering already outperforms general models for programming tasks[27:53]

System prompts vs long-term memory and how Maple will handle both

Mark distinguishes between custom system prompts (like ChatGPT's GPTs) and AI memory[31:35]
He explains system prompts define personality or role (e.g., pirate voice, contract lawyer) and are proactive instructions
Maple already exposes a system prompt users can edit and he plans to support multiple presets such as a "legal mode"[32:22]
For memory, Maple plans an open-source memory component that shows exactly what the AI remembers about the user[32:40]
Users will be able to inspect, edit, and add to this memory, and it will be encrypted and tied to their private key
Mark likens AI memory to sitting down with a biographer who learns your life story and thought patterns[33:08]
He contrasts this with proprietary systems that show a limited "things we know about you" UI that may not reflect all stored data[33:22]

Technical challenges of AI memory dominating future conversations

Preston worries that a strong memory might inappropriately dominate new conversations on unrelated topics[38:03]
Mark agrees this is a big challenge: current AI memories often overweight a few stored items because their memory corpus is small[38:27]
He says the solution involves better annotation and categorization of memories by topic (e.g., finance, health, mountain biking)[39:19]
The system should then selectively ignore irrelevant memories for a given chat instead of pulling everything in
He wants transparency into which memories the AI is suppressing or using and why, instead of opaque, proprietarily chosen behavior[39:43]
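The topic-annotated memory selection Mark describes can be sketched as a filter that returns both the memories used and the memories suppressed, so the behavior stays inspectable. This is an illustrative sketch; the `Memory` schema and function name are assumptions, not Maple's design.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    topic: str   # e.g. "finance", "health", "mountain biking"
    text: str

def select_memories(memories: list[Memory], chat_topic: str):
    # Pull in only memories annotated with the current chat's topic, and
    # report the suppressed ones so the user can see exactly what the AI
    # is using or ignoring for this conversation.
    used = [m for m in memories if m.topic == chat_topic]
    suppressed = [m for m in memories if m.topic != chat_topic]
    return used, suppressed

store = [
    Memory("finance", "prefers dollar-cost averaging"),
    Memory("health", "training for a half marathon"),
    Memory("mountain biking", "rides a hardtail"),
]
used, suppressed = select_memories(store, "finance")
assert [m.text for m in used] == ["prefers dollar-cost averaging"]
assert len(suppressed) == 2
```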

AI infrastructure, inference economics, and competitive landscape

Inference vs training and user experience as future moat

Mark explains training is far more resource-intensive than inference, analogous to the long human learning phase before conversation[41:24]
He expects inference costs to keep dropping as chips get more efficient even as models get bigger[40:58]
He argues the main competitive moat will be user experience and the apps built on top of the inference layer[42:06]
He describes a hybrid approach where small local models handle initial processing and sensitive data, then efficiently prompt cloud models[42:32]

Custom hardware, massive capital flows, and possible AI bubble

Preston cites reports that xAI is building custom ASICs for inference that are 10-20x faster than top GPUs for that task[43:48]
They discuss how capital-intensive it is to build such hardware stacks and large AI companies[44:24]
Mark mentions reports of Broadcom and OpenAI deals where press releases and stock moves effectively create the capital needed to fund chips[45:09]
He notes Department of Defense and other government contracts awarding around $200 million each to multiple major AI labs[45:50]
He expects overinvestment and a bubble-like pattern similar to the early internet, followed by retrenchment where the strongest players remain[46:55]

AI-assisted software development and Maple's startup challenges

Competing with giants and prioritizing features

Mark says Maple's challenge is keeping up and reaching feature parity with giants like OpenAI[47:23]
With only two people, they must selectively implement the most important ChatGPT-style features first[46:44]
They plan to raise money to hire a few more people but will never match big tech team sizes[47:09]

Using AI to build AI: productivity gains in coding

Mark says they are "using AI against them": leveraging AI tools to build faster[47:51]
He recalls prior startups where getting a product to market took close to a year, whereas Maple shipped and grew revenue in about nine months[48:26]
He estimates 90-95% of Maple's code is written by AI, with humans directing, guiding, and reviewing it[51:15]
He feels his effective productivity has increased on the order of 10x compared to pre-AI tooling, although he has not measured precisely[52:04]

Role of "vibe coding" for non-engineers vs production software

Mark cautions that there is a lot of salesmanship in demos where someone types one paragraph and gets a complete app[50:05]
He distinguishes proof-of-concept apps from production systems with millions of users and full edge-case coverage[50:27]
He sees huge value in non-engineers using AI to prototype and refine ideas before approaching engineers[49:47]
Such proof-of-concepts can reveal whether an idea is "a piece of garbage" or worth iterating, improving communication with developers
For professional engineers, he views AI as an accelerant rather than a replacement[50:58]

Maple's internal AI development workflow

They run local coding environments with AI integrations such as GitHub Copilot-like tools and other coding assistants[50:48]
They experiment with tools from multiple providers, including Claude and Qwen3 Coder plugged into IDEs[51:04]
On each GitHub pull request, two different AI models act as code review agents and leave comments[51:13]
They then ask Claude to respond to those comments, creating an AI-assisted code-review loop before human final review
Humans still provide the final approval and may override AI-proposed approaches when necessary[53:39]

Future of home AI, sovereignty, Nostr-like identity, and closing thoughts

Prospects for home AI servers and comparison to email

Preston asks whether home AI servers could become as common as appliances, especially for privacy-conscious users like Bitcoin node operators[53:19]
Mark says he would love a world where everyone has a home server handling AI for all their devices[53:48]
He believes that in 10 years hardware and UX could make plug-and-play home AI boxes feasible[53:58]
However, he notes that with email, people could run home servers but overwhelmingly choose convenient hosted services like Google Workspace[54:55]
He hopes AI might be the line where people decide "you can have our emails, but you can't have our brains" and keep their cognitive data local[55:09]

Nostr, public/private keys, and verifiable communication

Preston notes that Nostr is often described as an identity layer built on public/private key pairs[56:09]
Mark says the power of Nostr-like systems is verifiability that a message truly came from a given person[56:03]
He wants to integrate signing into Maple's pipeline, such as signing builds with a private key[56:39]
He views the core value as verifying communications, not simply replicating a Twitter-like social feed[57:39]

AI as a toolbox and Maple as a privacy tool

Mark encourages people to think of AI as a toolbox with multiple tools rather than abandoning ChatGPT entirely[57:14]
He suggests adding Maple as an extra tool to use when conversations involve sensitive information like children's names or family details[58:04]
He emphasizes the "refreshing" feeling of a private room with an AI where no one else is recording or selling the information[57:45]
He invites listeners to try a free account at TryMaple.ai and optionally upgrade to support development[57:58]

Closing appreciation and importance of privacy-preserving AI

Preston tells Mark he believes Maple is working on one of the most important problems in the world right now[58:13]
He reiterates the URL TryMaple.ai and says the importance of this topic will only grow[58:24]
Mark thanks Preston for having him on the show[58:39]

Lessons Learned

Actionable insights and wisdom you can apply to your business, career, and personal life.

1

Treat AI systems through the lens of verifiability rather than blind trust: insist on being able to inspect code, models, and data flows, or at least rely on architectures (like secure enclaves) that provide strong cryptographic proofs of what is running.

Reflection Questions:

  • Where in your current tech stack or workflow are you implicitly trusting opaque systems with sensitive data?
  • How could you move at least one critical workflow this quarter to a more verifiable setup, such as open-source tools or services that provide attestations?
  • When you evaluate a new AI tool, what specific questions will you add to your checklist to assess how verifiable it really is?
2

Convenience is powerful, but handing proprietary AI full access to your thought process and personal history creates long-term risks of subtle manipulation and lock-in.

Reflection Questions:

  • What kinds of prompts, documents, or personal stories are you currently feeding into proprietary AI that you might later wish you had kept more private?
  • How might your beliefs or decisions change over years if the system guiding you could quietly steer what information it emphasizes or omits?
  • What is one concrete boundary you could set this week around what you will and will not share with closed, centralized AI services?
3

You can get many of the benefits of advanced AI without sacrificing privacy by using a toolbox approach: combine different models and providers, and route the most sensitive tasks through more private or verifiable systems.

Reflection Questions:

  • Which of your recurring AI use cases actually require the absolute top proprietary model, and which would be fine on a strong open-weight model?
  • How could you redesign your daily workflow so that private or identity-sensitive tasks are automatically handled by more privacy-preserving tools?
  • What is one workflow you could split in two: using a mainstream AI for generic work and a verifiable/private AI for the sensitive parts?
4

AI dramatically amplifies developer productivity when used as a collaborator rather than an autopilot, with humans still responsible for direction, review, and integration.

Reflection Questions:

  • In your own work, where could you delegate more of the rote implementation or drafting to AI while keeping yourself in charge of design and quality control?
  • How might adding AI-based review or feedback loops (like code review agents or document critics) change the speed and robustness of what you ship?
  • What guardrails or review practices do you need to put in place so AI-generated work never bypasses your own critical thinking?
5

If you don't intentionally design for sovereignty now (owning your keys, your models, and your memory), you may find your most human asset (your way of thinking) effectively captured inside someone else's black box.

Reflection Questions:

  • Which parts of your digital life today are truly under your control (keys, data, configurations) versus merely accessed by you on someone else's terms?
  • How could you start building a personal "sovereign stack" over the next year, whether that's local storage, self-hosted services, or key-based identity?
  • When you imagine yourself 5-10 years from now, what would it mean for your future self if your long-term AI memory and identity are owned and governed by you instead of a corporation?

Episode Summary - Notes by Finley
