Loading vLEI.wiki
Fetching knowledge base...
Fetching knowledge base...
This comprehensive explanation has been generated from 3 GitHub source documents. All source documents are searchable here.
Last updated: September 21, 2025
This content is meant to be consumed by AI agents via MCP. Click here to get the MCP configuration.
Note: In rare cases it may contain LLM hallucinations.
For authoritative documentation, please consult the official GLEIF vLEI trainings and the ToIP Glossary.
A tritet is a fundamental 3-bit encoding unit within the CESR (Composable Event Streaming Representation) specification that serves as the atomic building block for performant stream resynchronization. The term combines "tri" (three) and "tet" (from tetrad/quartet), representing exactly 3 bits of information used to establish unique start bit patterns for robust stream parsing and cold-start recovery mechanisms.
tritet := 3-bit encoding unit
Range: 000₂ to 111₂ (0-7 decimal)
Purpose: Stream resynchronization markers
The tritet operates within CESR's 24-bit alignment constraint, where all primitives must align on boundaries that are multiples of 24 bits (the least common multiple of Base64's 6-bit characters and binary's 8-bit bytes):
24-bit boundary = 8 tritets = 4 Base64 characters = 3 bytes
tritet alignment: [000][001][010][011][100][101][110][111]
Tritets are organized within CESR's framing code structure to provide unique start bit patterns:
Framing Code Structure:
┌─────────────┬─────────────┬─────────────┐
│ Type │ Size │ Value │
│ (tritets) │ (tritets) │ (tritets) │
└─────────────┴─────────────┴─────────────┘
Each tritet maps to specific bit patterns used for stream synchronization:
Tritet Value | Binary | Hex | Usage Context
-------------|--------|-----|---------------
0 | 000 | 0x0 | Null/padding marker
1 | 001 | 0x1 | Start sequence indicator
2 | 010 | 0x2 | Continuation marker
3 | 011 | 0x3 | Group boundary
4 | 100 | 0x4 | Count code prefix
5 | 101 | 0x5 | Variable length indicator
6 | 110 | 0x6 | Reserved/future use
7 | 111 | 0x7 | Error/resync marker
Problem: Incorrect bit extraction when tritets span byte boundaries Solution: Use proper bit masking and shifting operations:
// Correct bit extraction for cross-byte tritets
uint8_t extract_cross_byte_tritet(const uint8_t* data, int bit_offset) {
int byte_idx = bit_offset / 8;
int bit_idx = bit_offset % 8;
uint16_t combined = (data[byte_idx] << 8) | data[byte_idx + 1];
return (combined >> (13 - bit_idx)) & 0x07;
}
Problem: Tritet extraction differs between big-endian and little-endian systems Solution: Use explicit bit operations rather than relying on byte order:
#define EXTRACT_TRITET(data, bit_pos) \
(((data)[(bit_pos)/8] >> (7 - ((bit_pos) % 8))) & 0x07)
#include <immintrin.h>
// Process 16 bytes (128 tritets) at once using AVX2
void extract_tritets_simd(const uint8_t* input, uint8_t* output, size_t count) {
__m128i data = _mm_loadu_si128((__m128i*)input);
__m128i mask = _mm_set1_epi8(0x07);
for (int shift = 0; shift < 8; shift += 3) {
__m128i shifted = _mm_srli_epi8(data, shift);
__m128i tritets = _mm_and_si128(shifted, mask);
_mm_storeu_si128((__m128i*)(output + shift * 16 / 3), tritets);
}
}
// Pre-computed tritet extraction table
static const uint8_t TRITET_TABLE[256][8] = {
// Each byte maps to 8 possible tritet extractions
// depending on bit offset (0-7)
};
uint8_t fast_extract_tritet(uint8_t byte, int bit_offset) {
return TRITET_TABLE[byte][bit_offset];
}
def validate_tritet_stream(stream: bytes, max_length: int = 1024*1024) -> bool:
"""Validate tritet stream before processing"""
if len(stream) > max_length:
raise ValueError(f"Stream too large: {len(stream)} > {max_length}")
if (len(stream) * 8) % 3 != 0:
raise ValueError("Stream length not divisible by 3 bits")
# Check for suspicious patterns that might indicate attack
tritets = extract_tritets(stream)
if detect_malicious_patterns(tritets):
raise SecurityError("Potentially malicious tritet patterns detected")
return True
The tritet-based resynchronization mechanism enables parsers to recover from stream corruption or establish parsing state from arbitrary stream positions:
def resynchronize_stream(stream_bytes, position=0):
"""
Resynchronization algorithm using tritet patterns
"""
tritets = extract_tritets(stream_bytes, position)
for i, tritet in enumerate(tritets):
if is_start_pattern(tritet):
# Found potential start sequence
if validate_framing_code(tritets[i:i+8]):
return position + (i * 3 // 8) # Convert to byte position
raise ResynchronizationError("No valid start pattern found")
def is_start_pattern(tritet):
"""Check if tritet indicates start of primitive"""
return tritet in [0b001, 0b100] # Start or count code patterns
Tritets integrate into CESR's broader message processing pipeline:
Stream Input → Tritet Extraction → Pattern Recognition →
Frame Boundary Detection → Primitive Parsing → Validation
The tritet parser operates as a finite state machine:
States:
- SEEKING: Looking for start pattern
- FRAMING: Reading frame code
- COUNTING: Processing count codes
- EXTRACTING: Reading primitive data
- VALIDATING: Verifying primitive integrity
Transitions triggered by specific tritet patterns
Tritets contribute to CESR's security model through:
Threat: Stream injection attacks
Mitigation: Tritet pattern validation prevents false primitive boundaries
Threat: Denial of service via malformed streams
Mitigation: Bounded search for valid tritet patterns
Threat: Cryptographic downgrade attacks
Mitigation: Tritet-encoded type information prevents primitive substitution
class TritetProcessor:
def extract_tritets(self, data: bytes, offset: int = 0) -> List[int]:
"""Extract 3-bit tritets from byte stream"""
tritets = []
bit_offset = offset * 8
while bit_offset + 3 <= len(data) * 8:
tritet = self._extract_3bits(data, bit_offset)
tritets.append(tritet)
bit_offset += 3
return tritets
def _extract_3bits(self, data: bytes, bit_offset: int) -> int:
"""Extract 3 bits starting at bit_offset"""
byte_idx = bit_offset // 8
bit_idx = bit_offset % 8
if bit_idx <= 5: # Tritet within single byte
return (data[byte_idx] >> (5 - bit_idx)) & 0x07
else: # Tritet spans two bytes
high_bits = (data[byte_idx] & ((1 << (8 - bit_idx)) - 1)) << (bit_idx - 5)
low_bits = data[byte_idx + 1] >> (13 - bit_idx)
return high_bits | low_bits
Operation Complexity:
- Tritet extraction: O(1) per tritet
- Pattern matching: O(n) where n = stream length in tritets
- Resynchronization: O(m) where m = search window size
Memory Requirements:
- Tritet buffer: 3 bits per unit
- Pattern cache: 64 bytes (8 tritets × 8 patterns)
- State machine: 16 bytes
The tritet implementation follows several key patterns:
// Efficient tritet buffer management
class TritetBuffer {
private:
uint64_t buffer_; // 64-bit buffer holds 21 tritets + metadata
uint8_t count_; // Number of valid tritets in buffer
public:
void push_tritet(uint8_t tritet) {
if (count_ < 21) {
buffer_ |= (static_cast<uint64_t>(tritet & 0x07) << (count_ * 3));
count_++;
}
}
uint8_t pop_tritet() {
if (count_ > 0) {
uint8_t tritet = buffer_ & 0x07;
buffer_ >>= 3;
count_--;
return tritet;
}
return 0;
}
};
Tritet processing supports concurrent operations:
use std::sync::{Arc, Mutex};
use tokio::sync::mpsc;
struct ConcurrentTritetProcessor {
workers: Vec<tokio::task::JoinHandle<()>>,
input_tx: mpsc::Sender<StreamChunk>,
output_rx: mpsc::Receiver<ParsedPrimitive>,
}
impl ConcurrentTritetProcessor {
async fn process_stream_chunk(&self, chunk: StreamChunk) {
let tritets = self.extract_tritets(&chunk.data);
// Parallel pattern matching across worker threads
for worker_tritets in tritets.chunks(1024) {
self.input_tx.send(worker_tritets.to_vec()).await.unwrap();
}
}
}
CESR Version Support:
v1.0: Basic tritet patterns (0-3)
v1.1: Extended patterns (0-7)
v2.0: Hierarchical tritet encoding
Backward Compatibility:
- v2.0 parsers handle v1.x tritets
- v1.x parsers ignore unknown patterns
- Graceful degradation for unsupported patterns
{
"tritet_config": {
"resync_window_size": 1024,
"max_search_depth": 4096,
"pattern_cache_size": 256,
"enable_parallel_processing": true,
"worker_thread_count": 4,
"buffer_size_bytes": 8192
}
}
Tritets form the foundation for all CESR primitive types:
Primitive Hierarchy:
├── Basic Primitives (fixed-length)
│ ├── Cryptographic Material (keys, signatures, digests)
│ └── Identifiers (AIDs, SAIDs)
├── Count Codes (variable-length indicators)
│ ├── Group Codes
│ └── Frame Codes
└── Indexed Primitives (signatures with indices)
All encoded using tritet-aligned framing codes
graph TD
A[KERI Events] --> B[CESR Encoding]
B --> C[Tritet Framing]
C --> D[Stream Transmission]
D --> E[Tritet Parsing]
E --> F[CESR Decoding]
F --> G[KERI Validation]
Tritets enable efficient ACDC (Authentic Chained Data Container) processing:
ACDC Message Flow:
1. ACDC fields → CESR primitives
2. CESR primitives → Tritet-framed stream
3. Stream transmission/storage
4. Tritet-based parsing
5. CESR primitive reconstruction
6. ACDC field validation
Time Complexity: O(n) where n = input bytes
Space Complexity: O(m) where m = output tritets
Bit Operations: 3 shifts + 2 masks per tritet
CPU Cycles: ~5-8 cycles per tritet (modern x86_64)
Best Case: O(1) - immediate pattern match
Average Case: O(log n) - binary search in pattern table
Worst Case: O(n) - full stream scan for resynchronization
Tritet Storage:
- Raw: 3 bits per tritet
- Packed: 21 tritets per 64-bit word
- Overhead: ~14% for alignment
Cache Performance:
- L1 Cache: 64KB holds ~170K tritets
- L2 Cache: 256KB holds ~680K tritets
- Memory Bandwidth: Limited by bit manipulation, not memory
Tritet Processing Performance (Intel i7-10700K):
- Extraction Rate: 2.1 GB/s
- Pattern Matching: 1.8 GB/s
- Resynchronization: 450 MB/s
- End-to-end CESR parsing: 320 MB/s
ARM64 Performance (Apple M1):
- Extraction Rate: 2.8 GB/s
- Pattern Matching: 2.2 GB/s
- Resynchronization: 580 MB/s
- End-to-end CESR parsing: 410 MB/s
def handle_corrupted_stream(stream, corruption_offset):
"""Handle stream corruption using tritet recovery"""
try:
# Attempt resynchronization from corruption point
sync_point = find_next_valid_tritet_pattern(
stream, corruption_offset
)
if sync_point:
return resume_parsing(stream, sync_point)
else:
raise UnrecoverableCorruption(
f"No valid tritet pattern found after offset {corruption_offset}"
)
except Exception as e:
log_corruption_event(corruption_offset, str(e))
raise
def validate_tritet_boundaries(stream_length, tritet_count):
"""Validate tritet alignment with stream boundaries"""
expected_bits = tritet_count * 3
actual_bits = stream_length * 8
if expected_bits > actual_bits:
raise InsufficientDataError(
f"Need {expected_bits} bits, have {actual_bits}"
)
if (expected_bits % 24) != 0:
raise AlignmentError(
f"Tritet sequence not aligned to 24-bit boundary"
)
class SecureTritetProcessor:
def __init__(self, max_search_window=4096):
self.max_search_window = max_search_window
self.pattern_cache = {}
def safe_resynchronize(self, stream, start_pos):
"""Bounded search prevents DoS attacks"""
search_end = min(
start_pos + self.max_search_window,
len(stream)
)
for pos in range(start_pos, search_end, 3):
if self.is_valid_start_pattern(stream, pos):
return pos
raise ResynchronizationTimeout(
f"No pattern found within {self.max_search_window} bytes"
)
Tritets are defined in the Trust over IP Foundation's CESR specification:
tswg-cesr-specificationv1.0 (2022-04): Initial tritet specification
- Basic 3-bit encoding
- 8 pattern types (0-7)
- Stream resynchronization
v1.1 (2022-08): Enhanced patterns
- Hierarchical pattern organization
- Improved error recovery
- Performance optimizations
v2.0 (2023-02): Extended functionality
- Nested tritet structures
- Parallel processing support
- Advanced pattern matching
Mandatory Features:
- 3-bit tritet extraction
- Pattern-based resynchronization
- 24-bit boundary alignment
- Error detection and recovery
Optional Features:
- Parallel processing
- Advanced pattern caching
- Hardware acceleration
- Custom pattern definitions
Load Balancer → Multiple Tritet Processors → Stream Reassembly
↓
Pattern Cache Cluster
↓
Monitoring & Metrics
Edge Device → Lightweight Tritet Parser → Local Processing
↓
Periodic Sync with Central Pattern Database
class TritetMetrics:
def __init__(self):
self.extraction_rate = Counter('tritets_extracted_total')
self.pattern_matches = Counter('patterns_matched_total')
self.resync_events = Counter('resynchronization_events_total')
self.processing_latency = Histogram('tritet_processing_seconds')
def record_extraction(self, count, duration):
self.extraction_rate.inc(count)
self.processing_latency.observe(duration)
#!/bin/bash
# Tritet processor health check
echo "Testing tritet extraction..."
echo "010001011" | tritet-tool extract --format binary
echo "Testing pattern matching..."
echo "001100101" | tritet-tool match --pattern start
echo "Testing resynchronization..."
echo "corrupted001100101" | tritet-tool resync --max-search 1024
production_config:
tritet_processor:
worker_threads: 8
buffer_size: 16384
pattern_cache_ttl: 3600
enable_simd: true
prefetch_distance: 64
monitoring:
metrics_interval: 30
alert_thresholds:
extraction_rate_min: 1000000 # tritets/sec
error_rate_max: 0.001 # 0.1%
latency_p99_max: 0.01 # 10ms
const MAX_RESYNC_ATTEMPTS: usize = 1000;
const MAX_SEARCH_WINDOW: usize = 4096;
fn safe_resynchronize(data: &[u8]) -> Result<usize, ResyncError> {
let mut attempts = 0;
let mut pos = 0;
while pos < data.len() && attempts < MAX_RESYNC_ATTEMPTS {
if let Some(sync_pos) = find_pattern(&data[pos..pos + MAX_SEARCH_WINDOW]) {
return Ok(pos + sync_pos);
}
pos += MAX_SEARCH_WINDOW / 2; // 50% overlap
attempts += 1;
}
Err(ResyncError::MaxAttemptsExceeded)
}
from hypothesis import given, strategies as st
@given(st.binary(min_size=3, max_size=1024))
def test_tritet_roundtrip(data):
"""Test that tritet extraction and reconstruction preserves data"""
# Ensure data length is multiple of 3 bits
bit_length = len(data) * 8
if bit_length % 3 != 0:
data = data[:bit_length // 3 * 3 // 8]
tritets = extract_tritets(data)
reconstructed = tritets_to_bytes(tritets)
assert reconstructed == data
@given(st.lists(st.integers(0, 7), min_size=8, max_size=1024))
def test_pattern_detection(tritets):
"""Test pattern detection in tritet sequences"""
# Insert known pattern
tritets[0:3] = [0b001, 0b010, 0b011] # Known start pattern
patterns = detect_patterns(tritets)
assert len(patterns) >= 1
assert patterns[0].position == 0
assert patterns[0].type == 'start_sequence'
// AFL++ fuzzing harness for tritet processing
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
if (size < 3 || size > 10000) return 0;
// Test tritet extraction
uint8_t tritets[size * 8 / 3];
int count = extract_tritets_safe(data, size, tritets, sizeof(tritets));
if (count > 0) {
// Test pattern matching
find_patterns(tritets, count);
// Test resynchronization
resynchronize_stream(data, size, 0);
}
return 0;
}
typedef struct {
uint64_t buffer; // Holds up to 21 tritets (63 bits)
uint8_t count; // Number of valid tritets
uint8_t read_pos; // Current read position
} tritet_buffer_t;
void buffer_push_tritets(tritet_buffer_t* buf, const uint8_t* tritets, int count) {
for (int i = 0; i < count && buf->count < 21; i++) {
buf->buffer |= ((uint64_t)(tritets[i] & 0x07)) << (buf->count * 3);
buf->count++;
}
}
uint8_t buffer_pop_tritet(tritet_buffer_t* buf) {
if (buf->read_pos >= buf->count) return 0;
uint8_t tritet = (buf->buffer >> (buf->read_pos * 3)) & 0x07;
buf->read_pos++;
return tritet;
}