This comprehensive explanation has been generated from 13 GitHub source documents.
Last updated: October 7, 2025
Note: In rare cases it may contain LLM hallucinations.
For authoritative documentation, please consult the official GLEIF vLEI trainings and the ToIP Glossary.
A mechanism that can unambiguously assess whether information is and continues to be whole, sound, and unimpaired through cryptographic verification, without requiring comparison to previous versions or reference data.
Verified integrity represents a cryptographic mechanism that provides unambiguous, continuous assessment of whether information remains complete, consistent, and unmodified. Unlike traditional integrity checking that relies on comparing current data against stored reference versions, verified integrity enables independent verification through cryptographic primitives embedded within the data structures themselves.
The concept distinguishes itself from general integrity by emphasizing the verification mechanism rather than just the property itself. While integrity describes the state of being whole and unimpaired, verified integrity describes the capability to prove that state cryptographically at any point in time.
Key properties of verified integrity include verification that is embedded in the data itself rather than held in external references, independence from stored reference copies or separate trusted channels, assurance that can be checked at any point in time, and immediate evidence of tampering.
The scope of verified integrity in KERI is deliberately narrow and technical. It does not address semantic correctness or veracity of information content—only whether the data structure itself remains cryptographically intact and internally consistent.
Traditional integrity verification in distributed systems has relied on several approaches:
Comparison-based verification: Systems maintain reference copies of data and verify integrity by comparing received data against these stored versions. This approach requires significant storage overhead and assumes the reference copy itself maintains integrity.
Hash-based checksums: Computing cryptographic hashes of data enables detection of modifications, but requires storing or transmitting the original hash through a separate trusted channel.
Digital signatures: Public key cryptography enables verification that data originated from a specific source and hasn't been modified, but traditional implementations often separate the signature from the data structure itself.
Merkle trees: Hierarchical hash structures enable efficient verification of large datasets, but typically require external storage of root hashes and verification paths.
These traditional approaches share common limitations: the integrity proof is separated from the data it protects, verification depends on reference copies or separate trusted channels, and the external infrastructure holding those references must itself be trusted to remain intact.
Implementing verified integrity requires understanding the verification workflow for each KERI component:
KEL Verification: Starting from the inception event, check that each event's commitment to its predecessor holds and that the attached signatures are valid for the key state established by the most recent establishment event.
ACDC Verification: Recompute the SAID over the credential's content and compare it to the embedded identifier; any mismatch means the container has been altered.
CESR Stream Verification: Parse the stream and confirm that each primitive converts losslessly between its text and binary representations according to the CESR code tables.
Caching verified segments: Once a KEL segment has been verified back to the root-of-trust, cache the verification result and the checkpoint. Future verifications can start from this checkpoint rather than re-verifying the entire history.
Parallel verification: KEL events can be verified in parallel up to the point where key state changes. Establishment events create verification dependencies, but interaction events between establishments can be verified concurrently.
Incremental SAID computation: For large ACDCs, compute SAIDs incrementally as data is received rather than buffering the entire structure. This reduces memory requirements and enables streaming verification.
Verified integrity mechanisms should fail closed: any verification failure, whether a broken hash chain, an invalid signature, or a SAID mismatch, should result in rejection of the data rather than partial acceptance. A minimal sketch combining incremental digest computation with fail-closed rejection follows.
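The sketch below is illustrative plain Python, not the KERI reference implementation: the function name, the use of SHA3-256 from hashlib, and the bare hex digest are assumptions for the example, whereas real KERI verification works with CESR-encoded Blake3-256 SAIDs and attached signatures.

```python
import hashlib
import hmac
from typing import Iterable

def verify_stream_digest(chunks: Iterable[bytes], expected_hex: str) -> bool:
    """Incrementally digest a chunked payload and fail closed on any problem.

    Hypothetical helper for illustration; real KERI uses CESR-encoded
    Blake3-256 SAIDs rather than bare SHA3-256 hex digests.
    """
    hasher = hashlib.sha3_256()
    try:
        for chunk in chunks:           # stream the data; no full buffering needed
            hasher.update(chunk)
    except Exception:
        return False                   # any read or parse error means rejection
    # Any mismatch also means rejection; constant-time compare avoids timing leaks.
    return hmac.compare_digest(hasher.hexdigest(), expected_hex)
```

The helper never returns a partial result: either every chunk digests cleanly and the digest matches, or the data is rejected.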
KERI implements verified integrity through three complementary mechanisms across its protocol stack, each addressing different aspects of the verification challenge.
At the protocol level, verified integrity is achieved through two fundamental properties of Key Event Logs (KELs) and Transaction Event Logs (TELs):
Internal consistency: Each event in a KEL cryptographically commits to the previous event through hash chaining, creating a tamper-evident append-only log. The inception event establishes the initial key state, and subsequent establishment events and interaction events maintain cryptographic continuity. Any attempt to modify historical events breaks the hash chain, making tampering immediately detectable.
Duplicity detection: KERI's witness infrastructure enables detection of conflicting versions of event logs. When a controller attempts to create multiple inconsistent versions of their KEL (duplicitous behavior), witnesses and watchers can detect this through comparison of signed events. The KAACE (KERI's Agreement Algorithm for Control Establishment) consensus mechanism ensures that duplicity becomes evident to validators.
This dual approach—internal consistency within a single log and duplicity detection across distributed copies—provides comprehensive verified integrity for identifier control authority.
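A minimal sketch of the internal-consistency check follows, assuming a simplified event format in which each event is a dict whose "p" field holds the digest of the prior event (mirroring the KERI field label), with SHA3-256 standing in for the CESR-encoded digests a real KEL carries. Signature and witness-receipt checks are omitted.

```python
import hashlib
import json

def event_digest(event: dict) -> str:
    # Stand-in for the CESR-encoded digest of the serialized event.
    raw = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha3_256(raw).hexdigest()

def chain_is_intact(kel: list) -> bool:
    """Every event must commit to its predecessor via the prior-digest field."""
    for prior, current in zip(kel, kel[1:]):
        if current.get("p") != event_digest(prior):
            return False      # broken chain: tampering with history is evident
    return True
```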
Authentic Chained Data Containers (ACDCs) implement verified integrity through Self-Addressing Identifiers (SAIDs). A SAID is simultaneously the identifier of the container and a cryptographic digest of its content, embedded within the container itself.
This self-referential design creates "verified integrity at all times by design." The SAID serves as both identifier and integrity proof—any modification to the ACDC's content would require recomputing the SAID, which would then no longer match the original identifier. This makes tampering not just detectable but impossible without changing the identifier itself.
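A compact sketch of the self-addressing pattern, with SHA3-256 hex digests standing in for the Blake3-256, CESR-encoded digests that real SAIDs use, and "d" as the digest field following ACDC convention; the helper names are illustrative only.

```python
import hashlib
import json

PLACEHOLDER = "#" * 64   # dummy value with the same length as the final hex digest

def saidify(data: dict) -> dict:
    """Embed a self-addressing digest in the 'd' field (simplified sketch)."""
    filled = dict(data, d=PLACEHOLDER)           # digest is computed over a placeholder
    raw = json.dumps(filled, sort_keys=True).encode()
    filled["d"] = hashlib.sha3_256(raw).hexdigest()
    return filled

def said_matches(data: dict) -> bool:
    """Recompute with the placeholder and compare to the embedded identifier."""
    return data.get("d") == saidify(data)["d"]

doc = saidify({"type": "example-credential", "value": 42})
assert said_matches(doc)        # intact: recomputed digest equals the identifier
doc["value"] = 43
assert not said_matches(doc)    # any edit breaks the match, so tampering is evident
```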
The SAID mechanism provides several advantages: verification is self-contained and needs no registry or issuer lookup, tampering is evident by design, and the identifier is permanently bound to the exact content it names.
Composable Event Streaming Representation (CESR) achieves verified integrity through a unique property: round-trip composability. CESR primitives can be converted between text and binary representations without loss of information.
The verification mechanism is elegant: if data can successfully toggle between the text and binary domains and back again, this transformation itself proves integrity. The ability to compose and decompose data across domains demonstrates that the encoded primitives are well-formed and that no information has been lost or altered along the way.
CESR's code tables define precise encoding rules for cryptographic primitives. Successful parsing and domain conversion proves that data conforms to these specifications, providing verified integrity for streaming data without requiring comparison to reference versions.
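A toy illustration of the round-trip idea follows, using plain Base64URL in place of CESR's actual code tables and 24-bit alignment rules; it shows only the principle that a lossless text/binary round trip is itself evidence of an intact encoding.

```python
import base64

def to_text(raw: bytes) -> str:
    # Simplified text-domain encoding; real CESR uses typed code tables
    # and 24-bit alignment rather than plain Base64URL.
    return base64.urlsafe_b64encode(raw).decode()

def to_binary(text: str) -> bytes:
    return base64.urlsafe_b64decode(text.encode())

def round_trips(raw: bytes) -> bool:
    """Composability check: text -> binary -> text must reproduce the input exactly."""
    text = to_text(raw)
    recovered = to_binary(text)
    return recovered == raw and to_text(recovered) == text
```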
KERI implements complementary integrity verification, a mechanism that verifies integrity independently without requiring access to previous instances of the data. This is achieved through public key cryptography: the data's controller signs the data, so its integrity can be established without consulting earlier copies.
For example, once a KEL has been verified back to its root-of-trust at a specific point in time, that verified portion (the "tail") no longer requires re-verification. Future integrity checks can proceed from the established checkpoint rather than re-verifying the entire history. This enables:
Credential verification: When a verifier receives an ACDC credential, they can immediately verify its integrity by computing the SAID and comparing to the embedded identifier. No need to contact the issuer or access a registry—the credential itself contains its integrity proof.
Event log validation: Validators receiving a KEL can verify its internal consistency by checking hash chains and signatures. If witnesses have signed events, the validator can detect any duplicitous behavior by the controller without trusting any single witness.
Streaming data integrity: Systems processing CESR-encoded event streams can verify integrity continuously by testing composability. Successful parsing and domain conversion proves data integrity in real-time.
Distributed consensus: KERI's witness pools use verified integrity mechanisms to reach agreement on key state without requiring a blockchain or central authority. The combination of internal consistency and duplicity detection enables Byzantine fault-tolerant consensus.
Trustless verification: Verified integrity eliminates the need to trust intermediaries or infrastructure providers. Cryptographic proofs enable anyone to verify data integrity independently.
Efficiency: Complementary integrity verification reduces storage and computational requirements by eliminating redundant verification of historical data.
Ambient verifiability: The self-contained nature of KERI's integrity mechanisms enables ambient verifiability—anyone, anywhere, at any time can verify data integrity.
Composability: Verified integrity mechanisms work across different protocol layers (KERI, ACDC, CESR) and can be composed to verify complex data structures.
Scalability: The ability to prune verified log segments and avoid redundant verification enables KERI systems to scale to large numbers of identifiers and long operational lifetimes.
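The checkpoint idea behind this scalability benefit (cache how far a KEL has been verified and resume from there, as described earlier) can be sketched as follows; the class and field names are illustrative and not part of any KERI library.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    prefix: str   # identifier whose KEL this checkpoint covers
    sn: int       # sequence number of the last verified event
    said: str     # digest of that event, anchoring the verified tail

class VerifiedTailCache:
    """Track how far each KEL has been verified so later validation can
    resume from the checkpoint instead of re-verifying the whole history."""

    def __init__(self) -> None:
        self._tails = {}

    def record(self, cp: Checkpoint) -> None:
        prev = self._tails.get(cp.prefix)
        if prev is None or cp.sn > prev.sn:
            self._tails[cp.prefix] = cp

    def resume_from(self, prefix: str) -> int:
        """First sequence number that still needs verification."""
        cp = self._tails.get(prefix)
        return cp.sn + 1 if cp else 0
```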
Narrow scope: KERI's verified integrity focuses exclusively on cryptographic properties. It does not address semantic correctness, business logic validation, or content veracity. Systems must implement separate mechanisms for these concerns.
Computational requirements: Cryptographic verification requires computation—hashing, signature verification, and in some cases, verification of multiple witness signatures. While efficient, this is not zero-cost.
Complexity: Understanding and correctly implementing verified integrity mechanisms requires cryptographic expertise. The self-referential nature of SAIDs and the composability requirements of CESR add conceptual complexity.
Limited to structured data: Verified integrity mechanisms work with structured data formats (event logs, JSON-based ACDCs, CESR primitives). Unstructured data requires additional wrapping or transformation.
No protection against authorized modification: Verified integrity detects unauthorized tampering but cannot prevent a legitimate controller from modifying their own data. Systems requiring immutability must implement additional controls (such as non-transferable identifiers or external anchoring).
The fundamental trade-off is between the narrow, technical focus of verified integrity (which enables objective, automatable verification) and the broader concerns of data quality, semantic correctness, and business logic validation (which require contextual judgment and cannot be fully automated through cryptography alone).
Systems should log verification failures with sufficient detail for forensic analysis while avoiding information leakage that could aid attackers.
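As a sketch of that guidance (illustrative field names, not a prescribed format): record which identifier, sequence number, and check failed, but not key material, signatures, or full event bodies.

```python
import logging

logger = logging.getLogger("verifier")

def report_verification_failure(prefix: str, sn: int, reason: str) -> None:
    """Enough context for forensics (who, where in the log, which check failed)
    without echoing keys, signatures, or full event payloads."""
    logger.warning("verification failed: pre=%s sn=%d reason=%s", prefix, sn, reason)
```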