Loading vLEI.wiki
Fetching knowledge base...
Fetching knowledge base...
This comprehensive explanation has been generated from 2 GitHub source documents. All source documents are searchable here.
Last updated: September 21, 2025
This content is meant to be consumed by AI agents via MCP. Click here to get the MCP configuration.
Note: In rare cases it may contain LLM hallucinations.
For authoritative documentation, please consult the official GLEIF vLEI trainings and the ToIP Glossary.
In KERI/ACDC/CESR context, a clone refers to a duplicated or replicated instance of a key event log, identifier state, or cryptographic data structure that maintains identical content and verification properties while potentially existing at different network locations or storage systems.
Within the KERI (Key Event Receipt Infrastructure), ACDC (Authentic Chained Data Container), and CESR (Composable Event Streaming Representation) ecosystem, a clone represents a functionally identical replica of cryptographic data structures, key event logs, or identifier states that preserves all verification properties and cryptographic integrity while potentially existing across distributed network locations.
Unlike traditional computing clones that focus on behavioral equivalence, KERI clones maintain cryptographic authenticity through preserved digital signatures, hash chains, and verifiable data structures. The clone concept is fundamental to KERI's distributed trust model, enabling redundant storage and verification across witness networks and watcher infrastructure.
KERI clones operate on several distinct data structure types:
Signature Verification Order: Always verify signatures before storing clones. Signature verification is computationally expensive but essential for security.
Hash Chain Validation: Verify the complete hash chain, not just individual event SAIDs. A broken chain indicates tampering or corruption.
Witness Threshold Management: Ensure witness threshold requirements are met before accepting clones. Insufficient witnesses compromise security guarantees.
BADA Policy Implementation: Implement BADA policies consistently across all clone operations. Inconsistent policies lead to network fragmentation.
Parallel Signature Verification: Use thread pools or async workers for signature verification across multiple events.
Incremental Synchronization: Only sync new events since last known sequence number to minimize bandwidth.
Clone Caching: Cache verified clones with appropriate TTL to reduce redundant verification overhead.
CESR Streaming: Use streaming CESR parsers for large clones to minimize memory usage.
Clone Quarantine: Quarantine clones that fail integrity checks until manual review.
Witness Diversity: Use geographically and organizationally diverse witnesses to prevent collusion.
Audit Logging: Log all clone operations with cryptographic proofs for forensic analysis.
Key Rotation Handling: Ensure clones properly handle key rotation events and pre-rotation commitments.
Property-Based Testing: Use property-based tests to verify clone invariants across random event sequences.
Network Partition Simulation: Test clone behavior under various network partition scenarios.
Byzantine Fault Injection: Simulate malicious witnesses to test consensus mechanisms.
Performance Load Testing: Test clone synchronization under high event rates and large KEL sizes.
KEL Clone Structure:
{
"v": "KERI10JSON00011c_", // Version string (22 chars)
"t": "icp|rot|ixn|dip|drt", // Event type
"d": "EBfdlu8R27Fbx-ehrqwImnK-8Cm79sqbAQ4MmvEAYqao", // SAID
"i": "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM", // AID prefix
"s": "0", // Sequence number (hex)
"p": "EZ-i0d8JZAoTNZH3ULaU6JR2nmwyZ-vYAfSVPzhzS6b5CM", // Prior event SAID
"kt": "1", // Key threshold
"k": ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"], // Keys
"nt": "1", // Next key threshold
"n": ["EZ-i0d8JZAoTNZH3ULaU6JR2nmwyZ-vYAfSVPzhzS6b5CM"], // Next key digests
"bt": "2", // Backer threshold
"b": ["BGKVzj4ve0VSd8z_AmvhLg4lqcC_9WYX90k03q-R_Ydo"], // Backers
"c": [], // Configuration
"a": [] // Anchors
}
ACDC Clone Integrity:
{
"v": "ACDC10JSON00197_", // ACDC version
"d": "EBfdlu8R27Fbx-ehrqwImnK-8Cm79sqbAQ4MmvEAYqao", // SAID
"i": "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM", // Issuer AID
"ri": "EZ-i0d8JZAoTNZH3ULaU6JR2nmwyZ-vYAfSVPzhzS6b5CM", // Registry ID
"s": "EBfdlu8R27Fbx-ehrqwImnK-8Cm79sqbAQ4MmvEAYqao", // Schema SAID
"a": { // Attribute section
"d": "EYAfSVPzhzS6b5CMaU6JR2nmwyZ-i0d8JZAoTNZH3UL",
"i": "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
"LEI": "5493001KJTIIGC8Y1R17"
},
"e": { // Edge section
"d": "EAfSVPzhzS6b5CMaU6JR2nmwyZ-i0d8JZAoTNZH3ULY",
"qvi": {
"n": "EZ-i0d8JZAoTNZH3ULaU6JR2nmwyZ-vYAfSVPzhzS6b5CM",
"s": "EBfdlu8R27Fbx-ehrqwImnK-8Cm79sqbAQ4MmvEAYqao"
}
}
}
KERI clone synchronization follows a multi-phase protocol ensuring cryptographic consistency:
Discovery Phase
Verification Phase
Clone Verification Algorithm:
function verifyClone(originalKEL, cloneKEL) {
for (event in cloneKEL.events) {
// Verify SAID integrity
computedSAID = blake3Hash(serialize(event))
if (computedSAID !== event.d) return false
// Verify signature authenticity
keyState = deriveKeyState(originalKEL, event.s)
if (!verifySignature(event, keyState.keys)) return false
// Verify hash chain integrity
if (event.s > 0 && event.p !== previousEvent.d) return false
}
return true
}
Reconciliation Phase
Clone Synchronization Flow:
Controller → Witness₁: KEL Event
Controller → Witness₂: KEL Event
Controller → Witness₃: KEL Event
Witness₁ → Watcher: Receipt + KEL Clone
Witness₂ → Watcher: Receipt + KEL Clone
Witness₃ → Watcher: Receipt + KEL Clone
Watcher: Verify Clone Consistency
Watcher: Apply BADA Policy
Watcher → Verifier: Validated Clone
Clones must preserve the cryptographic hash chain that provides tamper evidence:
Hash Chain Verification:
H(Event₀) = SAID₀
H(Event₁ || SAID₀) = SAID₁
H(Event₂ || SAID₁) = SAID₂
...
H(Eventₙ || SAIDₙ₋₁) = SAIDₙ
Where H() = Blake3-256 hash function
Each cloned event must maintain signature validity:
Signature Verification Matrix:
For Multi-sig threshold t of n:
∀ signature sᵢ ∈ signatures:
Ed25519.verify(sᵢ, serialize(event), pubkeyᵢ) = true
Count(valid_signatures) ≥ threshold
Clones must preserve pre-rotation commitments:
Pre-rotation Integrity:
next_keys_digest = Blake3(serialize(next_keys))
rotation_event.n = [next_keys_digest]
// Later rotation must reveal:
rotation_event.k = next_keys
Blake3(serialize(rotation_event.k)) = previous_event.n[0]
GET /kel/{aid}
Response: 200 OK
Content-Type: application/json
{
"kel": [...], // Complete KEL clone
"receipts": [...], // Witness receipts
"escrows": [...], // Escrowed events
"duplicity": [...] // Duplicity evidence
}
GET /kel/{aid}?sn={sequence}
Response: 200 OK
// Returns KEL clone up to specified sequence number
GET /kel/{aid}/clone-status
Response: 200 OK
{
"witnesses": {
"BGKVzj4ve0VSd8z_AmvhLg4lqcC_9WYX90k03q-R_Ydo": {
"last_seen": "2024-01-15T10:30:00Z",
"sequence": "5",
"consistent": true
}
},
"watchers": [...],
"integrity": "verified"
}
POST /validate-clone
Content-Type: application/json
{
"original": {...}, // Reference KEL
"clone": {...}, // Clone to validate
"policy": "strict" // Validation policy
}
Response: 200 OK
{
"valid": true,
"discrepancies": [],
"integrity_score": 1.0,
"witness_consensus": true
}
class KELCloneManager:
def __init__(self, db: Database, crypto: CryptoSuite):
self.db = db
self.crypto = crypto
self.validators = []
self.reconcilers = []
async def create_clone(self, aid: str, source: str) -> KELClone:
"""Create verified clone from source"""
source_kel = await self.fetch_kel(source, aid)
clone = KELClone(aid=aid, events=source_kel.events)
# Verify cryptographic integrity
if not await self.verify_clone_integrity(clone):
raise CloneIntegrityError("Hash chain verification failed")
# Validate signatures
if not await self.verify_signatures(clone):
raise CloneSignatureError("Signature validation failed")
# Store with metadata
await self.db.store_clone(clone, metadata={
'source': source,
'created': datetime.utcnow(),
'verified': True
})
return clone
async def sync_clones(self, aid: str) -> SyncResult:
"""Synchronize clones across witnesses"""
witnesses = await self.get_witnesses(aid)
clones = []
for witness in witnesses:
try:
clone = await self.fetch_clone(witness, aid)
clones.append((witness, clone))
except Exception as e:
logger.warning(f"Failed to fetch from {witness}: {e}")
# Apply BADA reconciliation
reconciled = await self.reconcile_clones(clones)
return SyncResult(canonical=reconciled, sources=clones)
class CESRCloneEncoder:
"""Encode/decode clones using CESR format"""
def encode_clone(self, clone: KELClone) -> bytes:
"""Encode clone to CESR stream"""
stream = bytearray()
# Group code for KEL clone
stream.extend(self.encode_group_code('KEL', len(clone.events)))
for event in clone.events:
# Event type and length
event_bytes = json.dumps(event, separators=(',', ':')).encode()
stream.extend(self.encode_count_code(len(event_bytes)))
stream.extend(event_bytes)
# Signatures
for sig in event.signatures:
stream.extend(self.encode_signature(sig))
return bytes(stream)
def decode_clone(self, stream: bytes) -> KELClone:
"""Decode CESR stream to clone"""
parser = CESRParser(stream)
# Parse group code
group_code, count = parser.parse_group_code()
if group_code != 'KEL':
raise CESRError(f"Expected KEL group, got {group_code}")
events = []
for _ in range(count):
event_len = parser.parse_count_code()
event_data = parser.read_bytes(event_len)
event = json.loads(event_data.decode())
# Parse signatures
signatures = []
while parser.has_signature():
sig = parser.parse_signature()
signatures.append(sig)
event['signatures'] = signatures
events.append(event)
return KELClone(events=events)
CREATE TABLE kel_clones (
id UUID PRIMARY KEY,
aid VARCHAR(44) NOT NULL, -- Base64 AID
source VARCHAR(255) NOT NULL, -- Source endpoint
sequence_number INTEGER NOT NULL, -- Latest sequence
events JSONB NOT NULL, -- Event array
receipts JSONB, -- Witness receipts
integrity_hash VARCHAR(44), -- Overall integrity hash
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
verified BOOLEAN DEFAULT FALSE,
INDEX idx_aid (aid),
INDEX idx_sequence (aid, sequence_number),
INDEX idx_source (source),
UNIQUE(aid, source)
);
CREATE TABLE clone_sync_status (
aid VARCHAR(44) PRIMARY KEY,
last_sync TIMESTAMP,
witness_count INTEGER,
consensus_achieved BOOLEAN,
discrepancies JSONB,
FOREIGN KEY (aid) REFERENCES kel_clones(aid)
);
clone_manager:
sync_interval: 300s # Clone synchronization interval
max_clone_age: 3600s # Maximum acceptable clone age
witness_timeout: 30s # Witness query timeout
integrity_check_interval: 900s # Integrity verification interval
bada_policy:
prefer_witnessed: true # Prefer witnessed events
require_threshold: 0.67 # Minimum witness consensus
max_sequence_gap: 10 # Maximum sequence gap tolerance
storage:
compression: gzip # Clone storage compression
retention_days: 90 # Clone retention period
backup_interval: 86400s # Backup interval
Clone Relationship Matrix:
KEL Clone ←→ Witness Network
├─ Receipt validation
├─ Consensus verification
└─ Duplicity detection
KEL Clone ←→ Watcher Network
├─ Clone distribution
├─ Integrity monitoring
└─ BADA policy enforcement
KEL Clone ←→ OOBI System
├─ Endpoint discovery
├─ Bootstrap resolution
└─ Service advertisement
ACDC Clone ←→ Registry System
├─ Credential validation
├─ Revocation checking
└─ Schema verification
Clone Propagation Flow:
1. Controller creates event
2. Event signed and SAIDed
3. Event sent to witnesses
4. Witnesses create receipts
5. Receipted events become clonable
6. Watchers fetch and verify clones
7. Clones distributed to verifiers
8. BADA policy applied for conflicts
Storage Analysis:
Base KEL Event: ~500 bytes (JSON)
Signatures: ~64 bytes × signature_count
Receipts: ~128 bytes × witness_count
Metadata: ~200 bytes
Total per event: ~500 + (64 × sigs) + (128 × witnesses) + 200
For 1000 events, 3 sigs, 5 witnesses: ~1.46 MB per clone
Network Performance:
CESR Encoding Efficiency:
- JSON: ~40% overhead vs binary
- CESR: ~15% overhead vs binary
- Compression: Additional 60-80% reduction
Sync Bandwidth (per clone):
- Initial: Full KEL transfer
- Incremental: New events only
- Witness receipts: ~640 bytes per event
Performance Benchmarks (1000-event KEL):
Clone Creation: 45ms ± 5ms
Integrity Verification: 120ms ± 15ms
Signature Validation: 890ms ± 50ms
CESR Encoding: 25ms ± 3ms
Database Storage: 35ms ± 8ms
Total Clone Processing: ~1.1s ± 0.08s
Throughput: ~900 clones/second (parallel)
class CloneInconsistencyError(Exception):
"""Raised when clones show cryptographic inconsistencies"""
def __init__(self, aid: str, discrepancies: List[Discrepancy]):
self.aid = aid
self.discrepancies = discrepancies
super().__init__(f"Clone inconsistency for {aid}: {len(discrepancies)} issues")
# Recovery procedure
async def handle_clone_inconsistency(error: CloneInconsistencyError):
# 1. Quarantine inconsistent clones
await quarantine_clones(error.aid)
# 2. Re-fetch from authoritative sources
authoritative_clone = await fetch_from_controller(error.aid)
# 3. Re-verify against witness network
consensus = await verify_witness_consensus(authoritative_clone)
# 4. Update local clones if consensus achieved
if consensus.achieved:
await update_local_clones(error.aid, consensus.canonical)
async def handle_witness_unavailability(aid: str, unavailable_witnesses: List[str]):
"""Handle witness network partitions"""
available_witnesses = await get_available_witnesses(aid)
threshold = await get_witness_threshold(aid)
if len(available_witnesses) < threshold:
# Insufficient witnesses for consensus
raise InsufficientWitnessError(
f"Only {len(available_witnesses)} of {threshold} witnesses available"
)
# Proceed with available witnesses
partial_consensus = await achieve_partial_consensus(available_witnesses)
# Mark clone as "partial consensus" pending full witness recovery
await mark_clone_status(aid, CloneStatus.PARTIAL_CONSENSUS)
class CloneSyncLock:
"""Prevent concurrent clone modifications"""
def __init__(self):
self._locks = {}
async def acquire(self, aid: str) -> AsyncContextManager:
if aid not in self._locks:
self._locks[aid] = asyncio.Lock()
return self._locks[aid]
# Usage
async def sync_clone_safely(aid: str):
async with clone_sync_lock.acquire(aid):
# Atomic clone synchronization
current_clone = await get_current_clone(aid)
new_events = await fetch_new_events(aid, current_clone.sequence)
if new_events:
updated_clone = await apply_events(current_clone, new_events)
await store_clone_atomically(updated_clone)
KERI Version Compatibility Matrix:
KERI 1.0: Full clone support
KERI 1.1: Enhanced BADA policies
KERI 1.2: Optimized CESR encoding
KERI 2.0: Post-quantum clone signatures
Backward Compatibility:
- KERI 1.x clones readable by 2.0
- Signature verification maintained
- CESR encoding forward-compatible
Production Clone Infrastructure:
Witness Tier:
- 3+ geographically distributed witnesses
- Load balancing with health checks
- Automated failover mechanisms
- Clone consistency monitoring
Watcher Tier:
- Regional watcher networks
- Clone validation pipelines
- BADA policy enforcement
- Duplicity detection systems
Storage Tier:
- Replicated clone databases
- Automated backup systems
- Integrity monitoring
- Performance optimization
Clone Health Metrics:
- clone_sync_latency_seconds
- clone_integrity_check_failures_total
- witness_consensus_achievement_rate
- clone_storage_size_bytes
- bada_policy_violations_total
- signature_verification_duration_seconds
Alerts:
- Clone inconsistency detected
- Witness consensus failure
- Storage capacity threshold exceeded
- Signature verification timeout
# Clone health check
kli clone status --aid EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM
# Force clone resync
kli clone resync --aid EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM --witnesses all
# Verify clone integrity
kli clone verify --aid EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM --deep
# Clone backup
kli clone backup --aid EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM --output /backup/