How It Works: Zero Data Retention
Without zero data retention, privileged client communications could become someone else’s evidence. Learn how AI foundation models handle your data, why ZDR agreements are critical for protecting attorney-client privilege, and how to evaluate platforms.
You upload a client’s sensitive contract to your AI legal research platform. The AI analyzes it, identifies key issues, and suggests relevant case law. But what happens to that contract now? Is it being retained by the AI foundation models that power the platform? Being used to train those models? Potentially discoverable if the foundation model provider gets sued?
The answers to those questions determine whether you can protect attorney-client privilege, and whether you're meeting your professional obligations under ABA Model Rule 1.1.
In our previous How It Works articles, we explored the different types of legal AI, agentic AI, and generative AI. Today, we’re examining zero data retention—the security architecture that determines what happens to your confidential client information when it’s processed by AI foundation models.
What Zero Data Retention Is
The Technical Definition (In Plain Language)
Zero data retention (ZDR) means that AI foundation models—the large language models like GPT-4 or Claude that power legal AI platforms—don’t store, save, cache, or learn from your confidential information. When you use a legal AI platform that has ZDR agreements with its foundation model providers, your data is processed through specialized zero data retention API endpoints, then immediately discarded. The foundation models don’t keep copies, don’t create backups, and don’t use your information for training.
The ZDR agreement is the contractual commitment from the foundation model provider, while the ZDR API is the technical implementation that enforces it. Together, they provide both legal and technical assurance that your data isn’t retained.
Here’s the critical distinction: Your legal AI platform (like Vincent) stores your chat history and documents in your account—that’s both normal and necessary—but the foundation models don’t. Vincent acts as your agent, similar to your case management software or email system. The privilege concern arises with platforms that allow foundation models to retain your data, because those foundation model providers are third parties outside your attorney-client relationship.
Think of it like this: Remember the “convenient amnesia paralegal” from our first article? Zero data retention is exactly that—but specifically for the AI foundation models processing your requests. Your legal AI platform remembers your work (just like your case management system stores files), but the third-party AI models powering that platform forget everything the moment they’re done processing.
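To make that division of labor concrete, here is a minimal Python sketch of how a legal AI platform might implement the split. Every name in it is hypothetical: the endpoint URL, the `chat_history` table, and the request and response shapes are illustrations, not Vincent's actual code or any provider's real API.

```python
import sqlite3

import requests  # third-party HTTP client: pip install requests

# Hypothetical ZDR endpoint. Real providers grant zero data retention
# through account-level agreements and dedicated API configurations.
ZDR_ENDPOINT = "https://api.model-provider.example/v1/zdr/completions"


def save_chat(db: sqlite3.Connection, user_id: str, prompt: str, answer: str) -> None:
    """The PLATFORM persists chat history in its own store. This is the
    normal, necessary retention: your agent keeping your work product."""
    db.execute(
        "INSERT INTO chat_history (user_id, prompt, answer) VALUES (?, ?, ?)",
        (user_id, prompt, answer),
    )
    db.commit()


def ask_model(prompt: str, api_key: str) -> str:
    """The FOUNDATION MODEL is reached through a ZDR endpoint: the request
    is processed and discarded provider-side, with no storage, caching,
    or training use."""
    resp = requests.post(
        ZDR_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["answer"]


db = sqlite3.connect("platform.db")
db.execute("CREATE TABLE IF NOT EXISTS chat_history (user_id TEXT, prompt TEXT, answer TEXT)")
prompt = "Summarize the indemnification clause in the attached contract."
answer = ask_model(prompt, api_key="...")     # stateless third-party call
save_chat(db, "attorney_01", prompt, answer)  # retention stays on the platform side
```

The point the sketch isolates: the only retention happens in the platform's own store, inside your agent relationship, while the third-party model call is stateless by contract (the ZDR agreement) and by implementation (the ZDR endpoint).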
If foundation model providers are learning from your privileged client communications, they’re retaining them. And retention by a third party creates serious privilege waiver concerns.
What Zero Data Retention Is NOT
AI companies often use similar-sounding language that doesn’t provide the same protection:
“We won’t sell your data” is not zero data retention. They’re still keeping your data—just not monetizing it directly.
“Your data is encrypted” is not zero data retention. Encryption protects data while stored or transmitted, but the data still exists. Zero data retention means there’s nothing to encrypt after processing completes.
Think of it like this: Zero data retention is like writing a message in disappearing ink that vanishes after being read. Encryption is like locking a message in a safe. Both provide security, but only one ensures the message ceases to exist. For attorney-client privilege, you need the foundation models to forget entirely.
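For readers who think in code, here is a toy contrast between the two, using the widely used Python cryptography package for the "safe" and a stand-in function for the "disappearing ink." The `process_with_zdr` function is an illustration, not any provider's real API.

```python
from cryptography.fernet import Fernet  # pip install cryptography

privileged = b"Privileged: client concedes exposure under Section 4.2"

# Encryption: the locked safe. A ciphertext artifact still exists and can
# be decrypted, subpoenaed, or stolen along with its key.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(privileged)
assert Fernet(key).decrypt(ciphertext) == privileged  # the message still exists


def process_with_zdr(document: bytes) -> str:
    """Stand-in for ZDR inference: the input produces a result and no copy
    is written anywhere, so nothing remains to encrypt or discover."""
    return f"analysis complete ({len(document)} bytes reviewed)"


# Zero data retention: the disappearing ink. Once processing returns,
# there is no stored artifact at all on the provider side.
print(process_with_zdr(privileged))
```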
“Security is the foundation of Vincent,” says Ed Walters, vLex Chief Strategy Officer. “It’s baked into the architecture from the start to make sure that nobody—not vLex, OpenAI, or anyone—is training a model based on the work lawyers do on the platform.”

Real Consequences When Zero Data Retention Is Missing
The OpenAI Discovery Case: Privilege Waiver in Action
Recent legal proceedings where OpenAI faced broad discovery requests for user data illustrate why zero data retention matters. In May 2025, a federal court ordered OpenAI to preserve all ChatGPT output log data—including deleted chats—in connection with The New York Times’ copyright lawsuit. If OpenAI retained attorney-client communications in its systems when users accessed ChatGPT directly, could those privileged communications be preserved and potentially discovered in litigation against OpenAI?
The answer revealed a stark divide. Users whose data was never stored by OpenAI in the first place (those with explicit zero data retention agreements) found their data unaffected by the preservation order. Not because the court exempted them, but because there was no data to preserve. When foundation models don’t retain your data, there’s nothing for a court to order preserved.
In a June 5, 2025 blog post, OpenAI’s Chief Operating Officer Brad Lightcap explained, “If you are a business customer that uses our Zero Data Retention (ZDR) API, we never retain the prompts you send or the answers we return. Because it is not stored, this court order doesn’t affect that data.”
Meanwhile, everyone else’s data stored in OpenAI’s systems became subject to indefinite retention, despite OpenAI’s standard 30-day deletion policy and despite users’ deletion requests. The court prioritized evidence preservation over privacy commitments.
This is privilege waiver in real time. When foundation model providers retain your data, your privileged communications could become opposing counsel’s evidence through normal legal discovery processes.
What this means: If you used AI tools that allowed foundation models to retain your client communications, and that foundation model provider gets sued, your privileged communications could be subject to discovery. Your client’s confidential information becomes part of someone else’s legal case.

Other Potential Privilege Waiver Scenarios
Data breaches: If a foundation model provider suffers a security breach and attackers access stored data, any privileged communications retained in those systems are exposed. The privilege is waived through failure to maintain confidentiality.
Future litigation involving foundation model providers: Discovery requests in any litigation could reach your client files if they’re retained in foundation models. You have no control over what legal disputes AI providers might face.
The permanence problem: Once privilege is waived, you cannot recover it. The opposing party gets access to your entire legal strategy, advice, case analysis, and communications. The damage cannot be undone.
How Zero Data Retention Works With Other Security Measures
Defense in Depth: Multiple Layers of Protection
Zero data retention doesn’t work in isolation. The most secure legal AI platforms combine ZDR agreements with other protective measures to create a “defense in depth.”
The essential security layers:
1. Zero Data Retention: Foundation models don’t retain data after processing. It is immediately and permanently discarded from their systems.
2. Encryption: Data is encrypted in transit and at rest, so it remains protected at every stage of processing.
3. Third-Party Certifications: Independent auditors verify security claims through rigorous testing—not just vendor promises.
4. Data Residency Controls: You control where your data is processed geographically, ensuring compliance with jurisdictional requirements.
5. Access Controls: Strict limitations on who can access data internally, with detailed audit trails.
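Here is a compressed Python sketch of how those layers compose around a single request. The allowlist, logger, and stand-in model call are all hypothetical; in real deployments, layers 2 through 4 live in transport and infrastructure configuration rather than application code.

```python
import hashlib
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

AUTHORIZED_USERS = {"attorney_01"}  # layer 5: a hypothetical internal allowlist


def handle_request(user_id: str, document: bytes) -> str:
    # Layer 5: access controls. Reject anyone not explicitly authorized.
    if user_id not in AUTHORIZED_USERS:
        raise PermissionError(f"{user_id} is not authorized")

    # Layer 5, audit trail: record who did what and when, but never the
    # content itself (only a fingerprint of it).
    audit_log.info(
        "user=%s action=analyze doc_sha256=%s at=%s",
        user_id,
        hashlib.sha256(document).hexdigest()[:12],
        datetime.now(timezone.utc).isoformat(),
    )

    # Layers 2-4 (encryption in transit and at rest, independently audited
    # infrastructure, data residency) belong to the transport and deployment
    # configuration, e.g. TLS to a ZDR endpoint pinned to your chosen region.
    result = f"analysis complete ({len(document)} bytes)"  # stand-in for the model call

    # Layer 1: zero data retention. Return the result and keep nothing.
    return result


print(handle_request("attorney_01", b"confidential contract text"))
```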
Why Each Layer Matters: The Indiana Jones Principle
Think of it like this: Remember the opening of Raiders of the Lost Ark? Someone trying to reach your client’s data is like Indiana Jones navigating that temple. First, they have to dodge the poison darts (encryption). Then avoid falling into the hidden pit (access controls). Then outrun that massive rolling boulder (monitoring systems and audit trails). Each trap is a different security layer, and they have to get past all of them to reach the chamber where your client’s data would be stored in the foundation model’s systems.
But there’s a twist: Even if they somehow survive every trap and reach the chamber... the golden idol isn’t there. Zero data retention means the data doesn’t exist. It was processed and immediately discarded from the foundation model’s systems, leaving an empty pedestal. There’s nothing to steal.
You need all the layers working together—the traps make it nearly impossible to reach the data, and ZDR ensures that even if someone does, there’s nothing there to find.
How Vincent Implements Zero Data Retention
Vincent maintains strict zero data retention agreements with its AI model providers. When you upload documents or submit research queries, Vincent processes your requests through those providers’ ZDR API endpoints, the specialized technical implementations that immediately discard your data after processing. This dual layer of protection provides both contractual commitment and technical enforcement, ensuring that your confidential legal documents and queries are never stored or used for training by foundation models.
Beyond zero data retention agreements, Vincent implements comprehensive security measures:
- FIPS 140-2 compliant encryption with unique encryption keys per customer
- SOC 2 Type II and ISO 27001 certifications
- Data residency controls that let you choose processing locations (US, Australia, or EU)
- Regular penetration testing and detailed audit logs
- Flexible architecture options, including multi-tenant and single-tenant deployment
vLex’s Senior Product Manager, Alex Shaffer, explains, “Attorneys bear responsibility for protecting their clients’ information. Zero data retention is not just a feature—it needs to be the standard when interfacing with sensitive data.”

Building Your Security Understanding
Understanding data security has become an essential component of legal competence. ABA Model Rule 1.1 makes clear that lawyers must keep abreast of “the benefits and risks associated with relevant technology” to maintain the requisite knowledge and skill for competence.
Make zero data retention a non-negotiable requirement. This means asking the right questions: Does your legal AI platform have zero data retention agreements with its foundation model providers? Are those agreements backed by technical implementations using ZDR API endpoints? Are those commitments in writing?
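One way to make that due diligence systematic is to treat the questions as a pass/fail checklist, as in this small sketch; the wording is drawn from this article, not from any bar association standard.

```python
ZDR_CHECKLIST = {
    "Written ZDR agreements with every foundation model provider": False,
    "Agreements enforced technically via ZDR API endpoints": False,
    "Commitments documented in the contract, not just marketing copy": False,
}


def vendor_passes(checklist: dict[str, bool]) -> bool:
    """Zero data retention is non-negotiable: every item must be true."""
    return all(checklist.values())


print(vendor_passes(ZDR_CHECKLIST))  # False until every answer is verified in writing
```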
The lawyers who ignore these concepts will face increasingly difficult questions about how they’re protecting client confidences in an AI-enabled practice.
Experience Security-First Legal AI
Vincent is engineered with zero data retention agreements and comprehensive security measures at its foundation because Vincent is engineered for lawyers. Every feature is built to protect attorney-client privilege while delivering the efficiency and capability that modern legal practice demands.
Ready to experience AI built with security and confidentiality at its core? Start your free trial of Vincent today.
Authored by
Sierra Van Allen