For authoritative documentation, please consult the official GLEIF vLEI trainings and the ToIP Glossary.
Privacy in KERI refers to the ability of entities to control disclosure of their identity metadata and communication patterns, operating within the PAC Theorem constraint that one can achieve any two of Privacy, Authenticity, and Confidentiality at the highest level, but not all three simultaneously.
Privacy in the KERI ecosystem represents the ability of individuals and organizations to control disclosure of identity metadata and communication patterns while maintaining cryptographic verifiability of their identifiers and credentials. Unlike traditional privacy models that treat privacy as an absolute property, KERI recognizes privacy as existing within a fundamental trade-space defined by the PAC Theorem.
The PAC Theorem (Privacy, Authenticity, Confidentiality) establishes that an identifier system can achieve any two of these three properties at the highest level, but not all three simultaneously. This theoretical constraint shapes KERI's entire approach to privacy, requiring explicit prioritization and architectural decisions about which properties to emphasize.
KERI's privacy model distinguishes between two complementary definitions of privacy: control over what is disclosed (the content of an interaction) and control over who is known to have participated (the metadata surrounding it). This dual definition recognizes that privacy encompasses both content and metadata, and that different technical and legal mechanisms are required for each.
Traditional identity systems have struggled with privacy because they conflate these distinct concerns, treating content disclosure and metadata exposure as a single problem. Administrative identity systems (such as DNS/CA) provide weak privacy because identifiers and their bindings are registered with, and published by, third-party administrators, so the identified entity never fully controls disclosure of its own identifier metadata. KERI instead places that control with the identifier's controller, which translates into a set of practical implementation guidelines:
Contextual Isolation: Implement separate AIDs for different contexts to prevent correlation. Each AID should have its own KEL and witness configuration.
AID Lifecycle: Consider the privacy implications of AID rotation and delegation. Delegated AIDs can leak information about organizational structure.
UUID Generation: Use cryptographically secure random number generators for UUID fields. These must have sufficient entropy (128 bits minimum) to prevent rainbow table attacks.
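A minimal sketch of this guidance is shown below; it is illustrative only and does not reproduce keripy's CESR salt encoding.

```python
# Minimal sketch: generating a high-entropy blinding value for an ACDC
# UUID ("u") field. Illustrative only; real implementations encode such
# salts in CESR rather than hex.
import secrets

def make_blinding_salt(nbytes: int = 16) -> str:
    """Return at least 128 bits of CSPRNG output as a hex string."""
    if nbytes < 16:
        raise ValueError("use at least 128 bits of entropy")
    return secrets.token_bytes(nbytes).hex()
```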
Compact Credentials: When issuing ACDCs, provide both compact and full variants. Allow holders to choose disclosure level based on context.
Selective Disclosure: Structure attribute sections to enable fine-grained selective disclosure. Group related attributes that should be disclosed together.
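The sketch below illustrates the compact-versus-full idea; the attribute names are hypothetical, and BLAKE2b stands in for the CESR SAID derivation (typically Blake3-256) that real ACDCs use.

```python
# Illustrative compact vs. full ACDC attribute sections. Field names are
# hypothetical; real ACDCs commit to the attribute block via a CESR SAID,
# not the BLAKE2b digest shown here.
import hashlib
import json
import secrets

full_attributes = {
    "u": secrets.token_bytes(16).hex(),    # high-entropy blinding UUID
    "legalName": "Example Corp",
    "registrationDate": "2024-03-01",
}

# The compact variant discloses only a digest commitment; the holder can
# later reveal the full block, which the verifier re-digests to confirm.
serialized = json.dumps(full_attributes, sort_keys=True).encode()
compact_credential = {"a": hashlib.blake2b(serialized, digest_size=32).hexdigest()}
full_credential = {"a": full_attributes}
```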
Rules Sections: Include explicit privacy disclaimers in ACDC rules sections. Reference applicable privacy regulations (GDPR, CCPA, etc.).
Chain-Link Obligations: Ensure contractual language creates obligations that transfer to downstream recipients. This requires legal review and jurisdiction-specific adaptation.
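As an illustration, the sketch below shows what a rules section carrying such language might look like; the clause names and legal text are placeholders rather than any vLEI schema's actual wording, with "l" used for the human-readable legal language as in ACDC rules clauses.

```python
# Hypothetical ACDC rules section expressing a privacy disclaimer and a
# chain-link confidentiality clause. Clause names and legal text are
# placeholders for illustration only.
rules_section = {
    "privacyDisclaimer": {
        "l": "Disclosed data is provided solely for the stated purpose and "
             "remains subject to applicable privacy regulation (e.g. GDPR, CCPA)."
    },
    "chainLinkConfidentiality": {
        "l": "The recipient agrees not to disclose this data further except "
             "under terms at least as restrictive as those accepted here."
    },
}
```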
Privacy Policies: Establish clear policies for how disclosed data may be collected, retained, shared onward, and deleted across the ecosystem.
Compliance Monitoring: Implement mechanisms to verify that privacy protections are being honored by all ecosystem participants.
Caching Strategies: Balance privacy (minimizing cached data) with performance (reducing repeated cryptographic operations).
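One hedged way to strike this balance is a short-lived cache keyed by a credential's digest rather than its raw content, as in the sketch below (the cache policy and names are illustrative, not drawn from any KERI implementation).

```python
# Illustrative short-TTL cache of verification results, keyed by a
# credential digest so raw credential data is not retained. The TTL and
# structure are example policy choices, not a KERI requirement.
from __future__ import annotations

import time

class VerificationCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._entries: dict[str, tuple[float, bool]] = {}

    def put(self, said: str, verified: bool) -> None:
        self._entries[said] = (time.monotonic(), verified)

    def get(self, said: str) -> bool | None:
        entry = self._entries.get(said)
        if entry is None:
            return None
        stored_at, verified = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._entries[said]   # expire entries to limit data retention
            return None
        return verified
```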
Batch Operations: When processing multiple credentials, consider privacy implications of batch processing that might enable correlation.
Correlation Testing: Verify that compact credentials cannot be correlated across presentations without additional information.
Linkability Analysis: Test whether contextual information captured during presentations enables re-identification.
Regulatory Compliance: Ensure implementations meet privacy regulation requirements for data minimization and purpose limitation.
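A toy version of such a correlation test is sketched below; it reuses the illustrative salted-digest construction from above (hypothetical fields, BLAKE2b in place of CESR SAIDs) and simply checks that freshly blinded compact forms of the same attributes do not match.

```python
# Toy correlation check: the same attributes blinded with fresh salts
# should yield unlinkable compact digests. Illustrative only; real ACDCs
# use CESR SAIDs and issuer-controlled blinding.
import hashlib
import json
import secrets

def compact_form(attributes: dict) -> str:
    blinded = {"u": secrets.token_bytes(16).hex(), **attributes}
    data = json.dumps(blinded, sort_keys=True).encode()
    return hashlib.blake2b(data, digest_size=32).hexdigest()

attrs = {"legalName": "Example Corp"}
assert compact_form(attrs) != compact_form(attrs), "compact forms must not correlate"
```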
Blockchain-Based Systems face different privacy challenges: every transaction is written to a shared public ledger, so identifier activity is permanently visible and globally correlatable, and that permanence sits uneasily with data-minimization and erasure requirements.
Traditional Verifiable Credentials attempted to address privacy through mechanisms such as selective disclosure and zero-knowledge proofs.
However, these approaches often failed to account for contextual linkability - the ability to re-identify individuals through statistical correlation of disclosed attributes with contextual information captured at the point of disclosure.
KERI explicitly acknowledges the PAC Theorem constraint and establishes a clear priority ordering following Trust over IP (ToIP) design goals: authenticity first, confidentiality second, and privacy third.
This prioritization reflects a pragmatic recognition that cryptographic verifiability and secure attribution are prerequisites for any meaningful privacy guarantees. Without authenticity, privacy claims cannot be verified; without confidentiality, privacy protections are meaningless.
KERI's Autonomic Identifiers (AIDs) provide privacy advantages over traditional identifiers:
Cryptonymous by Default: AIDs are cryptographically derived identifiers that contain no inherent personal information. Unlike email addresses or usernames, they reveal nothing about the controller's identity.
Contextual Isolation: Controllers can create multiple AIDs for different contexts, preventing correlation across different relationships or use cases. Each AID maintains its own independent Key Event Log (KEL), with no cryptographic linkage between AIDs controlled by the same entity.
Selective Disclosure: AIDs can be disclosed selectively - a controller might share one AID with a business partner while using a completely different AID with a healthcare provider, maintaining privacy through compartmentalization.
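A sketch of contextual isolation using keripy's habbing module is shown below; the API reflects common keripy usage but may differ between versions, so treat it as illustrative rather than authoritative.

```python
# Sketch: one controller, two context-isolated AIDs, each with its own
# key event log. Uses keripy's habbing module; details may vary by version.
from keri.app import habbing

with habbing.openHby(name="controller", temp=True) as hby:
    work_hab = hby.makeHab(name="work")       # AID shared with a business partner
    health_hab = hby.makeHab(name="health")   # AID shared with a healthcare provider

    # Distinct prefixes, independent KELs, no cryptographic linkage.
    assert work_hab.pre != health_hab.pre
```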
KERI's Authentic Chained Data Container (ACDC) specification implements sophisticated privacy mechanisms:
Graduated Disclosure: ACDCs support progressive revelation of credential information, moving from metadata-only and compact forms to partial and finally full disclosure as trust between the parties is established.
This graduated approach enables principle of least disclosure - sharing only what is necessary at each stage of an interaction.
UUID-Based Blinding: ACDCs include UUID fields (high-entropy pseudorandom strings) that serve as "salty nonces" to prevent rainbow table attacks. These UUIDs make it computationally infeasible to correlate compact ACDCs across different presentations, even when the same credential is presented multiple times.
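The toy example below (ordinary hashing only, not the actual CESR SAID computation) shows why the high-entropy UUID matters: a commitment to a low-entropy attribute can be brute-forced unless it is salted.

```python
# Why the "u" field matters: an unsalted digest of low-entropy data can
# be reversed by exhaustive search; a 128-bit salt makes that infeasible.
# BLAKE2b stands in for the real CESR SAID derivation.
import hashlib
import secrets

def digest(data: bytes) -> str:
    return hashlib.blake2b(data, digest_size=32).hexdigest()

# Unsalted commitment to a birth year: trivially brute-forced.
target = digest(b"1987")
recovered = next(y for y in range(1900, 2026) if digest(str(y).encode()) == target)
assert recovered == 1987

# Salted commitment: the search space grows by a factor of 2**128.
salt = secrets.token_bytes(16)
salted_target = digest(salt + b"1987")   # infeasible to reverse without the salt
```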
Contractually Protected Disclosure: KERI implements chain-link confidentiality mechanisms in which data is disclosed only after the recipient accepts contractual terms, and those terms obligate the recipient to impose equivalent restrictions on any further disclosure.
This approach recognizes that cryptographic solutions alone cannot sustainably protect privacy - legal and contractual frameworks are essential complements.
KERI explicitly addresses the contextual linkability threat - the most critical privacy vulnerability in credential systems:
The Problem: Even with perfect selective disclosure and zero-knowledge proofs, verifiers can capture sufficient contextual information (location, time, device fingerprints, behavioral patterns) to statistically correlate disclosed attributes with existing datasets, enabling re-identification.
KERI's Mitigation: rather than relying on cryptography alone, KERI combines graduated disclosure (limiting what is revealed at each stage) with contractually protected disclosure (limiting what recipients may do with the contextual information they capture).
KERI maintains a clear distinction between privacy and confidentiality:
Confidentiality (second priority) protects content - what was said in communications. This is achieved through encryption of exchanged content, so that only the intended parties can read it.
Privacy (third priority) protects metadata - who participated in communications. This is achieved through mechanisms such as pseudonymous AIDs, contextual isolation, and contractually protected disclosure.
This separation enables precise reasoning about security properties and explicit trade-offs between different protection mechanisms.
The SPAC (Secure Privacy, Authenticity, and Confidentiality) framework provides the theoretical foundation for KERI's privacy approach:
Cold War vs. Hot War Model: authenticity and confidentiality can be won decisively ("hot war") with cryptographic mechanisms, whereas protecting metadata privacy is an ongoing "cold war" of measures and countermeasures against correlation.
This distinction explains why KERI prioritizes authenticity and confidentiality over privacy: the first two can be solved definitively through cryptography, while privacy requires continuous evolution of both technical and legal mechanisms.
Effective Privacy Strategy: Rather than attempting absolute privacy (which is impossible), KERI aims for effective privacy through pseudonymous identifiers, graduated and selective disclosure, and contractual (chain-link) protections that raise the cost of correlation rather than claiming to eliminate it.
Healthcare Credentials: A patient can, for example, use a healthcare-specific AID that is not linked to identifiers used elsewhere and present credentials in compact form, disclosing only the attributes a clinic actually needs.
Business Transactions: An organization can maintain distinct AIDs for different trading relationships and disclose only the credential attributes a given counterparty requires, keeping the remainder undisclosed in compact form.
vLEI Ecosystem: GLEIF's verifiable Legal Entity Identifier system demonstrates privacy in practice:
Cryptographic Foundation: Privacy protections are built on verifiable cryptographic primitives rather than trust in intermediaries.
Regulatory Compliance: The framework supports GDPR and other privacy regulations through data minimization (graduated disclosure), purpose limitation (contractually protected disclosure), and clear allocation of responsibility among ecosystem participants.
Practical Usability: Unlike pure zero-knowledge proof systems, KERI's privacy mechanisms rely on ordinary digests, salts, and signatures, which keeps them comparatively simple to implement, audit, and explain.
Privacy vs. Authenticity: The PAC Theorem constraint means maximizing privacy requires accepting some limitations on authenticity or confidentiality. KERI's choice to prioritize authenticity means that verifiable key event logs and issuance records necessarily exist as observable artifacts, so metadata privacy cannot be absolute and must instead be managed through contextual isolation and contractual protections.
Complexity: Privacy-preserving mechanisms add complexity for issuers, holders, and verifiers, who must manage multiple AIDs, multiple credential variants, and the contractual agreements that accompany disclosure.
Performance: Privacy features have computational costs, such as generating high-entropy salts, computing and verifying additional digests, and maintaining separate key event logs for each context.
However, KERI's "minimally sufficient means" philosophy ensures these costs remain acceptable for practical deployment.
KERI's privacy model requires governance frameworks that define how disclosed data may be collected, retained, and further shared, and that make chain-link confidentiality obligations enforceable across all participants.
The vLEI Ecosystem Governance Framework demonstrates this in practice, establishing clear policies for information trust, privacy protection, and data handling across the entire credential ecosystem.