A compact language for AI agents to talk to each other. Agents share blueprints, not bricks: cost down 86% and assembly failures down 80% in multi-agent code generation.
pip install cdn-ai
npm install github:worachetdee/condensa-ts

Production Results
Results from integrating Condensa into Stratophic's multi-agent code generation pipeline:
| Metric | Before | After | Change |
|---|---|---|---|
| Cost per generation | $0.135 | $0.019 | -86% |
| Assembly failure rate | ~50% | ~10% | -80% |
| Assembler AI calls | 1 (15K tokens) | 0 (mechanical) | Eliminated |
The biggest win was NOT token compression — it was Condensa Code. Agents sharing interface contracts instead of code made assembly deterministic and reliable.
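The percentage changes follow directly from the raw numbers in the table; a quick arithmetic check, using the values copied from the table above:

```python
def pct_change(before: float, after: float) -> float:
    """Relative change, expressed as a percentage."""
    return (after - before) / before * 100

print(f"cost:     {pct_change(0.135, 0.019):.0f}%")  # cost per generation -> -86%
print(f"failures: {pct_change(0.50, 0.10):.0f}%")    # assembly failure rate -> -80%
```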
Before & After
Natural language (101 tokens)
"AgentC, I need you to perform a thorough code review of the file that AgentB just wrote. Please check for style issues, potential bugs, performance problems, security vulnerabilities, and type errors. Format your response as a structured report."
Condensa (10 tokens)
>:@C review $_.path checks:(style,bugs,perf,security,types) /fmt:report

Condensa Code — agents share architecture, not implementations
!:fn DashboardPage /props:(programs:Program[] onSelect:fn(id:n)->void) /renders:(stats-grid,cards)
!:type Program { id:n name:s* weeks:n level:s exercises:s[] }
!:wire dashboard.onSelect -> programs.highlight
!:wire programs.onStart -> workout.load

These four lines define the contract. Each agent builds to spec. The assembler wires mechanically — no AI reasoning, no hallucination.
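The mechanical wiring step can be illustrated with a short sketch. This is not the published assembler; it is a minimal, hypothetical parser (the `parse_wires` helper, its regex, and its output format are assumptions for illustration) that turns `!:wire` contract lines into a connection table an assembler could apply without any model call:

```python
import re

# Hypothetical: extract source.event -> target.handler pairs from
# Condensa `!:wire` lines. The real cdn-ai package may differ.
WIRE_RE = re.compile(r"^!:wire\s+(\w+)\.(\w+)\s*->\s*(\w+)\.(\w+)$")

def parse_wires(contract: str) -> list[dict]:
    """Turn `!:wire a.b -> c.d` lines into a deterministic wiring table."""
    wires = []
    for line in contract.splitlines():
        m = WIRE_RE.match(line.strip())
        if m:
            src, event, dst, handler = m.groups()
            wires.append({"from": f"{src}.{event}", "to": f"{dst}.{handler}"})
    return wires

contract = """\
!:wire dashboard.onSelect -> programs.highlight
!:wire programs.onStart -> workout.load
"""
print(parse_wires(contract))
# [{'from': 'dashboard.onSelect', 'to': 'programs.highlight'},
#  {'from': 'programs.onStart', 'to': 'workout.load'}]
```

Because the wiring table is produced by a regex rather than a model, the same contract always yields the same connections, which is what makes assembly deterministic.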
Lab Benchmarks
| Result | Metric | Context |
|---|---|---|
| 71.7% | Token compression | 59 live agent turns |
| 95.8% | Zero-shot interpretability | avg across 7 LLMs |
| 93.8% | Cross-model execution | Claude to Gemini Flash |
| $4.6–18.2K | Cost savings at scale | per 1M conversations |
Models tested: Gemini Pro, GPT-4o, Claude Opus 4.6, Grok 4.20 Expert, Perplexity, DeepSeek, Gemini Flash — every model understood Condensa zero-shot. Package audit: 47/50 inputs handled correctly, 0 crashes, 43/43 tests pass.
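The savings-at-scale figure depends on pricing and conversation-size inputs not stated here; the sketch below shows the shape of the calculation with illustrative numbers (tokens per conversation and per-token prices are assumptions, not measured values):

```python
COMPRESSION = 0.717  # measured token compression from the benchmarks above

def savings_per_1m_convs(tokens_per_conv: float, usd_per_mtok: float) -> float:
    """Dollars saved per 1M conversations; both inputs are assumptions."""
    tokens_saved_total = tokens_per_conv * COMPRESSION * 1_000_000
    return tokens_saved_total / 1_000_000 * usd_per_mtok

# Illustrative scenarios (assumed, not taken from the benchmark data):
for tokens, price in [(2_000, 3.00), (2_000, 12.00)]:
    print(f"{tokens} tok/conv @ ${price:.2f}/Mtok -> "
          f"${savings_per_1m_convs(tokens, price):,.0f} per 1M conversations")
```

Varying the assumed conversation size and input price sweeps out a range of savings on the order of the $4.6–18.2K figure quoted above.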
Three Editions
| Edition | Focus |
|---|---|
| !:cdn | Max compression, max interpretability |
| ~:cdn | Tone marks for agent negotiation |
| @:cdn | Classification, encryption, audit trails |
Quick Reference
MESSAGE: [urgency] type:body
TYPES: ! command !? sync ? query = result > delegate
~ update # status X cancel E error @ meta
ACTIONS: srch filt sort grp agg cnt avg sum gen sumz xlat fmt
val cmp cls ext read wrt del exec review deploy test
CODE: def module fn component type interface
wire connect api endpoint schema table asm assemble
FLOW: A | B | C seq(A;B;C) par(A;B;C)
MODIFIERS: /n:N /fmt:X /lang:X /since:T /desc /each /repeat:N
FALLBACK: fb:cache fb:skip fb:abort fb:retry:N fb:degrade
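The message grammar above (optional urgency, a type sigil, a body, then modifiers) is simple enough to read mechanically. A hypothetical sketch, not the package's actual parser — the regex and field names are assumptions for illustration:

```python
import re

# Hypothetical reader for the MESSAGE shape `[urgency] type:body /mod[:val] ...`.
MSG_RE = re.compile(
    r"^(?:\[(?P<urgency>\w+)\]\s*)?"      # optional [urgency] prefix
    r"(?P<type>[!?=>~#XE@]\??)"           # type sigil: ! !? ? = > ~ # X E @
    r":(?P<body>[^/]*)"                   # body, up to the first modifier
    r"(?P<mods>(?:/\w+(?::[^\s/]+)?)*)"   # zero or more /mod or /mod:val
)

def parse_message(msg: str) -> dict:
    m = MSG_RE.match(msg.strip())
    if not m:
        raise ValueError(f"not a Condensa message: {msg!r}")
    d = m.groupdict()
    d["body"] = d["body"].strip()
    d["mods"] = [mod for mod in d["mods"].split("/") if mod]
    return d

print(parse_message(">:@C review $_.path checks:(style,bugs,perf,security,types) /fmt:report"))
```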
WIRING: !:wire source.output -> target.input

Multilingual
Cross-lingual agents communicate via Condensa without mutual NL translation — the protocol is the lingua franca.
| Language | Compression |
|---|---|
| Japanese | 37.5% |
| Thai | 37.1% |
| Arabic | 31.7% |
| Korean | 25.0% |
Transparency
| Limitation | Detail |
|---|---|
| Dense human prose | Only 4.4% savings (near information-theoretic minimum) |
| Chinese NL | -5.6% (Chinese is already extremely dense) |
| Token compression for code gen | < 0.1% of real savings (code output can't be compressed) |
| Regex encoder | 94% input handling (use LLM encoder for full fidelity) |
| Single-shot pipelines | Multi-turn features don't apply |
Condensa's value for code generation is structural correctness — not token compression. For multi-turn agent conversations, token compression is the primary value.
Research
144 automated tests. 7 LLMs tested. 1 production deployment. Open source (MIT).