This explanation has been generated from 169 GitHub source documents.
Last updated: October 7, 2025
Note: In rare cases it may contain LLM hallucinations.
For authoritative documentation, please consult the official GLEIF vLEI trainings and the ToIP Glossary.
In KERI/ACDC, validate refers to the process of evaluating whether data, credentials, or key event logs meet specific requirements for a particular use case, encompassing both cryptographic verification and policy-based assessment to determine fitness for purpose.
Validation in the KERI/ACDC ecosystem represents a comprehensive evaluation process that extends beyond simple cryptographic verification to include contextual assessment of whether data structures, credentials, or identifiers satisfy specific requirements for their intended use. The term encompasses multiple layers of evaluation: cryptographic integrity checking, structural compliance verification, policy-based assessment, and fitness-for-purpose determination.
The KERI/ACDC vocabulary explicitly acknowledges that 'validate' has "extra diverse meanings" beyond the general eSSIF-Lab definition, including both 'evaluate' and 'verify' operations. This semantic richness reflects the protocol's need to distinguish between different levels of trust assessment - from basic cryptographic proof validation to complex business logic evaluation.
A critical distinction exists between validation and verification in KERI systems. Verification establishes cryptographic correctness (signatures are valid, digests match, structural commitments hold), while validation determines whether verified data is appropriate for a specific use case. As stated in the validator glossary entry: "a necessary but insufficient condition for a valid KEL is it is verifiable" - meaning cryptographic verification is required but not sufficient for validation.
The concept of validation in identity systems has evolved from simple credential checking to sophisticated multi-layered assessment. Traditional PKI systems conflated validation with verification, treating a valid signature as sufficient proof of trustworthiness. This approach proved inadequate for complex trust decisions requiring contextual evaluation.
Verification vs. Validation APIs: Implementations should provide separate API methods for verification (cryptographic checking) and validation (policy-based assessment). This separation lets callers distinguish "cryptographically sound" from "fit for this purpose" and apply different policies to the same verified data, as the sketch below illustrates.
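A minimal sketch of that split, assuming hypothetical Credential and Policy types - none of these names are keripy APIs:

```python
# Hypothetical verify/validate split; all names are illustrative stand-ins.
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    issuer: str
    schema: str
    expired: bool
    signature_ok: bool   # stands in for real signature verification

@dataclass(frozen=True)
class Policy:
    trusted_issuers: frozenset
    required_schema: str

def verify(cred: Credential) -> bool:
    """Cryptographic layer only: signatures, digests, structure."""
    return cred.signature_ok

def validate(cred: Credential, policy: Policy) -> bool:
    """Policy layer: is this verified data fit for *this* purpose?"""
    return (verify(cred)                        # necessary...
            and cred.issuer in policy.trusted_issuers
            and cred.schema == policy.required_schema
            and not cred.expired)               # ...but not sufficient alone
```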
Validation Context: Validators must maintain context about the intended use case, the applicable policy, and the current key state of the identifiers involved, since data validity is relative to a party's purpose rather than absolute.
Error Reporting: Validation failures should provide detailed information about which validation stage failed and why, enabling callers to distinguish cryptographic failures from policy failures and to respond appropriately.
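One possible shape for such staged reporting, again with hypothetical types and plain dicts rather than real ACDC structures:

```python
# Hypothetical staged result: reports which validation stage failed and
# why, so callers can tell cryptographic failures from policy failures.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValidationResult:
    ok: bool
    stage: Optional[str] = None    # e.g. "signature", "schema", "policy"
    reason: Optional[str] = None

def validate_staged(cred: dict, policy: dict) -> ValidationResult:
    if not cred.get("signature_ok"):
        return ValidationResult(False, "signature", "signature does not verify")
    if cred.get("schema") != policy["required_schema"]:
        return ValidationResult(False, "schema", "schema mismatch")
    if cred.get("issuer") not in policy["trusted_issuers"]:
        return ValidationResult(False, "policy", "issuer not acceptable here")
    return ValidationResult(True)
```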
Performance Considerations: Validation can be computationally expensive, especially when it involves replaying long key event logs, resolving chains of linked credentials, or querying multiple watchers.
Implementations should consider caching strategies and asynchronous validation where appropriate.
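One common strategy is to cache derived key state by sequence number; the sketch below assumes a hypothetical resolve_key_state helper, not a keripy API.

```python
# Sketch of one caching strategy. resolve_key_state is a hypothetical
# stand-in for replaying a KEL to derive the key set at a sequence number.
from functools import lru_cache

@lru_cache(maxsize=1024)
def resolve_key_state(aid: str, sn: int) -> tuple:
    # A real implementation would replay KEL events 0..sn here.
    return (aid, sn, "derived-key-set")
```

Caching keyed on (aid, sn) is safe because an event accepted at a given sequence number is immutable under the first-seen rule; caching an unqualified "latest" state would not be.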
Policy Management: Validation policies should be explicit, configurable, and auditable rather than hard-coded, so that the criteria behind each validation decision can be inspected and independently reviewed.
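As one sketch of that idea, validation criteria can live in configuration that is versioned and reviewed independently of the validator code; every field name and value below is an illustrative assumption.

```python
# Sketch of policy-as-data: criteria live in reviewable configuration,
# not in code. All field names and values are illustrative assumptions.
import json

POLICY = json.loads("""
{
  "trusted_issuers": ["<issuer AID>"],
  "required_schema": "<schema SAID>",
  "min_witness_receipts": 3
}
""")
```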
Watcher Integration: For KEL validation, implementations should consult one or more independent watchers to cross-check key event logs and surface duplicity before trusting a controller's key state.
ACDC-Specific Validation: For credential validation, implementations should check revocation status, schema compliance, and the integrity of any chained credentials in addition to verifying signatures.
The eSSIF-Lab framework established a foundational definition: validation is "the act, by or on behalf of a party, of determining whether or not data is valid to be used for some specific purpose(s) of that party." This definition emphasizes the contextual, purpose-driven nature of validation - data validity is not absolute but relative to specific use cases and party requirements.
NIST standards complemented this with a more formal definition: "Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled." This highlights the need for objective evidence and requirement fulfillment as validation criteria.
KERI implements a sophisticated multi-stage validation architecture that separates concerns while maintaining cryptographic rigor:
In KERI, a validator is "any entity or agent that evaluates whether or not a given signed statement as attributed to an identifier is valid at the time of its issuance." This role involves determining the current authoritative key set for an identifier from at least one key event log, then applying additional criteria to assess validity for specific use cases.
The validation process follows a clear hierarchy: cryptographic verification of signatures and digests comes first, followed by structural and consistency checks, and finally policy-based assessment of fitness for the validator's specific purpose.
For Key Event Logs, validation involves evaluating whether the log can be trusted for establishing control authority. A valid KEL must be, at a minimum, verifiable - every event correctly signed and each event committed to the digest of its predecessor - and consistent with the first-seen version; verification alone, however, does not make it fit for a given purpose.
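A minimal sketch of the chaining check behind "verifiable", assuming toy dict events and using hashlib in place of KERI's actual self-addressing digest scheme:

```python
# Sketch of the chaining check: hashlib stands in for KERI's
# self-addressing digests, and plain dicts stand in for real events.
import hashlib
import json

def digest(event: dict) -> str:
    raw = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha3_256(raw).hexdigest()

def chained(kel: list) -> bool:
    """Each event must commit to the digest of its predecessor."""
    return all(curr["p"] == digest(prev)        # "p": prior-event digest
               for prev, curr in zip(kel, kel[1:]))
```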
Critically, KERI's duplicity detection mechanisms enable validators to identify when a controller has published multiple inconsistent versions of their KEL. This "duplicity evident" property allows validators to make informed trust decisions even in adversarial scenarios.
For Authentic Chained Data Containers (ACDCs), validation extends to schema compliance, issuer authority, revocation status, and the integrity of any chained credentials referenced by the container's edges.
The ACDC specification's authorization system demonstrates this layered approach. The Authorizer class implements four filter types (credential filters, chain filters, edge filters, attribute filters) that progressively validate credentials beyond basic cryptographic verification.
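In the spirit of those four filter types, a progressive pipeline might look like the following sketch; the predicates are placeholders, not the ACDC Authorizer's implementation.

```python
# Sketch of progressive filtering inspired by the four filter types;
# every predicate here is a hypothetical placeholder.
def resolves(cred: dict) -> bool:
    # Stub: a real chain filter would dereference each edge by its SAID.
    return all(e.get("resolved") for e in cred.get("edges", {}).values())

filters = [
    lambda c: c.get("schema") == "<schema SAID>",          # credential filter
    resolves,                                              # chain filter
    lambda c: "issuer" in c.get("edges", {}),              # edge filter
    lambda c: c.get("attrs", {}).get("role") == "agent",   # attribute filter
]

def authorized(cred: dict) -> bool:
    return all(f(cred) for f in filters)   # each stage narrows further
```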
KERI's watcher infrastructure provides ambient validation capabilities. Watchers maintain copies of KELs and apply the "first-seen wins" rule, enabling validators to cross-check a controller's key events against multiple independent observations and to detect duplicity when inconsistent versions appear.
This distributed validation model removes single points of failure while maintaining cryptographic verifiability.
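A toy illustration of the "first-seen wins" rule and how it makes duplicity evident; this is a sketch keyed on (AID, sequence number), not a real watcher implementation.

```python
# Sketch of the first-seen rule: the first version of an event at each
# sequence number is kept, and any conflicting later copy is flagged.
class FirstSeenLog:
    def __init__(self):
        self.seen = {}                      # (aid, sn) -> event digest

    def observe(self, aid: str, sn: int, dig: str) -> str:
        key = (aid, sn)
        if key not in self.seen:
            self.seen[key] = dig            # first version wins
            return "accepted"
        if self.seen[key] == dig:
            return "duplicate"              # same event again: harmless
        return "duplicitous"                # conflicting version: evidence
```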
The distinction between validation and verification has significant practical consequences:
When a holder presents an ACDC credential to a verifier, the verifier first verifies the presentation cryptographically - signatures, digests, and structural commitments - and then validates it against its own policy: an acceptable issuer, an unexpired credential, and sufficient attributes for the purpose at hand.
A credential can be perfectly verified (cryptographically sound) yet invalid for a specific use case (wrong issuer, expired, insufficient attributes).
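Made concrete, with plain dicts standing in for real ACDC structures (every name below is illustrative):

```python
# The verified-but-invalid case: cryptographically sound, wrong issuer.
def verify(cred: dict) -> bool:              # cryptographic layer (simulated)
    return cred["signature_ok"]

def validate(cred: dict, trusted: set) -> bool:   # policy layer
    return verify(cred) and cred["issuer"] in trusted and not cred["expired"]

cred = {"issuer": "<unknown AID>", "signature_ok": True, "expired": False}
assert verify(cred)                              # cryptographically sound
assert not validate(cred, {"<trusted AID>"})     # still unfit for this use
```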
When establishing control authority over an AID, a validator first verifies the key event log cryptographically, then applies its own criteria - such as the number of witness receipts or the absence of duplicity - before accepting the derived key state.
A validator might accept a KEL for low-stakes interactions but require additional witness confirmations for high-value transactions - same verification, different validation criteria.
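That policy difference can be expressed directly; the tier names and thresholds below are assumptions for illustration.

```python
# Same verification, different validation criteria: required witness
# receipts per risk tier. Tier names and thresholds are assumptions.
THRESHOLDS = {"low-stakes": 1, "high-value": 5}

def accept_kel(verified: bool, receipts: int, tier: str) -> bool:
    return verified and receipts >= THRESHOLDS[tier]

assert accept_kel(True, 2, "low-stakes")         # accepted here
assert not accept_kel(True, 2, "high-value")     # same KEL rejected here
```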
KERI implementations must carefully distinguish validation from verification in their APIs and documentation. The validator role requires access to key event logs (directly or via watchers), credential registries for revocation status, and the relying party's validation policies.
The validation process should be transparent and auditable, with clear documentation of which criteria were applied and why specific validation decisions were made. This transparency enables trust without requiring trust in the validator itself - the validation logic can be independently verified.