Four SDKs, Four Narratives
Each SDK tells a different story about the product's value.
Cross-SDK compliance matters precisely because SDKs differ. For the hackathon demo we audit BSV SDKs across four languages — six codebases in all — and each language tells a distinct story.
Ruby — "We needed this"
Our own SDK (sgbett/bsv-ruby-sdk). The compliance review discovered 137 findings including 21 HIGH severity — money-loss bugs, consensus-critical Chronicle opcode gaps, wire-format divergences.
These were real bugs that had shipped. They caused real problems: broken payment flows in a downstream demo, hours lost debugging a WASM ABI mismatch, accumulated BEEF/TypeScript-deviation bugs that surfaced only under cross-SDK integration pressure.
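The wire-format divergence class can be sketched with a hypothetical example — a CompactSize ("varint") encoder whose boundary check is off by one. None of this code is from bsv-ruby-sdk; it only illustrates the failure shape: the two encoders agree on every value except 253, so ordinary unit tests pass while serialised transactions diverge on the wire.

```typescript
// Hypothetical illustration — invented for this write-up, not lifted from
// the audited SDK.

// Correct per the Bitcoin wire format: values below 0xfd fit in one byte;
// 0xfd itself needs the 0xfd prefix plus two little-endian bytes.
function encodeVarIntCorrect(n: number): number[] {
  if (n < 0xfd) return [n];
  if (n <= 0xffff) return [0xfd, n & 0xff, (n >> 8) & 0xff];
  // 32-bit case; 64-bit values elided for brevity
  return [0xfe, n & 0xff, (n >> 8) & 0xff, (n >> 16) & 0xff, (n >>> 24) & 0xff];
}

// Buggy: `<=` instead of `<`, so 253 is emitted as a bare byte — a stream
// every other implementation will parse as the start of a 2-byte varint.
function encodeVarIntBuggy(n: number): number[] {
  if (n <= 0xfd) return [n];
  if (n <= 0xffff) return [0xfd, n & 0xff, (n >> 8) & 0xff];
  return [0xfe, n & 0xff, (n >> 8) & 0xff, (n >> 16) & 0xff, (n >>> 24) & 0xff];
}
```

Every other value round-trips identically, which is exactly why test suites that never exercise the boundary keep passing.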
This is the "before" story: what an SDK looks like when it has never been compliance-audited. Many of those bugs have since been addressed.
Swift — "We used it"
Our own Swift SDK (/opt/xcode/bsv-sdk), a fresh Phase 1 implementation audited in the same week: 134 findings, 14 HIGH.
This confirms that the same pattern exists in a completely different codebase: money-loss-class sighash bugs (F4.1), Chronicle opcode gaps (F7.1-F7.4), BEEF serialisation corruption (F5.3), cross-SDK auth handshake failures (F8.1-F8.2). The same failure classes, independently reproduced by different developers in a different language.
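To make the money-loss class concrete, here is a hypothetical sketch of one sighash bug shape — invented for illustration, not taken from either SDK: the 4-byte little-endian sighash type that terminates the preimage is written as a single byte, so every signature verifies against the SDK's own checker yet is rejected by the network.

```typescript
// Hypothetical illustration of a money-loss-class sighash bug.

const SIGHASH_ALL_FORKID = 0x41; // SIGHASH_ALL (0x01) | SIGHASH_FORKID (0x40)

// Correct: the sighash type is appended as 4 little-endian bytes.
function appendSighashTypeCorrect(preimage: number[], type: number): number[] {
  return [
    ...preimage,
    type & 0xff, (type >> 8) & 0xff, (type >> 16) & 0xff, (type >>> 24) & 0xff,
  ];
}

// Buggy: one byte appended, so the preimage is three bytes short. Signing
// and verifying share this helper, so the SDK's own tests still pass.
function appendSighashTypeBuggy(preimage: number[], type: number): number[] {
  return [...preimage, type & 0xff];
}
```

Because both signer and verifier call the same broken helper, the bug is invisible inside the SDK and only surfaces when a node — or another SDK — checks the signature.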
This is also a methodology comparison. The Ruby review used a one-shot prompt; the Swift review used the structured Thirty-Five format. We can compare output quality and consistency across both approaches.
Rust — "They need this"
Three independent third-party Rust SDKs, each with different lineage and maturity:
- Calhooon/bsv-rs — ~100k LOC, claims byte-for-byte TypeScript compatibility. A perfect claim to test under rigorous audit.
- b1narydt/bsv-rust-sdk — v0.2.1, claims faster than TS in 55/57 benchmarks. Speed optimisation is where letter-vs-spirit bugs hide.
- Bittoku/bsv-sdk-rust — ported from Go SDK, not TypeScript. Different lineage = different failure patterns expected.
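The failure mode behind the benchmark claim can be sketched with a hypothetical example (not from any of the three SDKs): a fast hex decoder that skips the reference's validation pass. It matches the strict version byte-for-byte on every valid input — so all benchmarks and happy-path tests agree — and silently diverges on malformed input the reference would reject.

```typescript
// Hypothetical illustration of a letter-vs-spirit bug introduced by
// optimisation; invented for this write-up.

// Reference behaviour: reject odd-length or non-hex input outright.
function fromHexStrict(hex: string): number[] {
  if (hex.length % 2 !== 0 || !/^[0-9a-fA-F]*$/.test(hex)) {
    throw new Error("invalid hex");
  }
  const out: number[] = [];
  for (let i = 0; i < hex.length; i += 2) out.push(parseInt(hex.slice(i, i + 2), 16));
  return out;
}

// "Optimised": the validation pass is dropped, so an odd-length string
// yields a half-byte final value instead of an error.
function fromHexFast(hex: string): number[] {
  const out: number[] = [];
  for (let i = 0; i < hex.length; i += 2) out.push(parseInt(hex.slice(i, i + 2), 16));
  return out;
}
```

Letter (valid inputs decode identically) is satisfied; spirit (invalid inputs must be rejected) is not — which is why benchmarks alone can't establish compatibility.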
The Bittoku fork is a natural experiment: do SDKs ported from different reference implementations produce different compliance failure patterns? That's not just a demo — it's a genuine research finding that emerges from our audit data.
Zig — "Fresh contender"
No BSV Zig SDK exists anywhere in the ecosystem. We generated one via medium-effort AI porting from the TypeScript reference — deliberately using the process known to produce letter-vs-spirit bugs.
The 35 then audits it, catching the bugs the porting process introduced.
This is live theatre: create the disease, then demonstrate the cure, in the same session. The audience watches the bugs being generated by a medium-effort port and then caught by the compliance service — end-to-end demonstration of both the problem and the product.
The research finding
The cross-SDK audit data is genuinely novel. No one has systematically compared compliance failure patterns across multiple independent implementations of the same specification. The 35 produces this data as a byproduct of its normal operation.
Questions it can answer:
- Do SDKs ported from TypeScript produce different bugs than SDKs ported from Go?
- Do mature SDKs with extensive test suites still harbour letter-vs-spirit bugs?
- Is compliance coverage correlated with test coverage, or are they orthogonal?
- Does the porting process produce predictable failure clusters that transcend language?
These aren't questions we made up for the demo. They're questions the audit data naturally surfaces. The more SDKs we audit, the stronger the answers become. That's the compounding value of a shared compliance infrastructure.
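A sketch of how the audit data could be aggregated to answer these questions. The `Finding` shape, field names, and sample records are all invented for illustration — the real audit output format may differ.

```typescript
// Hypothetical data model for cross-SDK audit findings.
interface Finding {
  sdk: string;      // which SDK the finding came from
  lineage: string;  // reference implementation it was ported from
  cls: string;      // failure class, e.g. "sighash", "varint", "beef"
  severity: "HIGH" | "MEDIUM" | "LOW";
}

// Count findings per (lineage, class) pair, so TypeScript-lineage SDKs can
// be compared against Go-lineage SDKs class by class.
function clusterByLineage(findings: Finding[]): Map<string, Map<string, number>> {
  const out = new Map<string, Map<string, number>>();
  for (const f of findings) {
    const byClass = out.get(f.lineage) ?? new Map<string, number>();
    byClass.set(f.cls, (byClass.get(f.cls) ?? 0) + 1);
    out.set(f.lineage, byClass);
  }
  return out;
}

// Invented sample records:
const sample: Finding[] = [
  { sdk: "bsv-ruby-sdk", lineage: "typescript", cls: "sighash", severity: "HIGH" },
  { sdk: "bsv-rs", lineage: "typescript", cls: "sighash", severity: "HIGH" },
  { sdk: "bsv-sdk-rust", lineage: "go", cls: "varint", severity: "MEDIUM" },
];
```

With every additional audited SDK the clusters gain statistical weight — which is the compounding-value claim in data-structure form.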