o1js Goes Native: Faster Kimchi Proving on Node

Native proving is now available—unlocking faster proofs, chunking, and up to 4× larger circuits in o1js.


We built o1js so that writing proofs in JavaScript feels immediate: iterate, prove, ship. But as circuits grew and use cases got heavier, a hard ceiling kept showing up. The community has been asking for native proving in Node.js, both for speed and for the ability to handle bigger circuits without the WASM memory ceiling. And we’ve been working hard to deliver.

Today, we’re excited to share that native access to the Kimchi prover is now available in a first prerelease. It keeps the same developer-facing API while switching to a native Node-API implementation when you run on Node.

The practical outcome is simple: more headroom and faster iteration. With native memory and the same rayon-powered prover core, we see roughly 2x faster proving and, with chunking now enabled, circuits up to 4x larger. As a result, developers get faster proofs, larger circuits, and a smoother path for production workloads.

Chunking is a scaling strategy for large circuits: the circuit is split into pieces that are proved and accumulated together, at the cost of a somewhat larger proof. In practice, it lets you push to much larger circuit sizes without rewriting your app as multiple smaller circuits. In WASM, chunking was often blocked by the 4GB linear-memory ceiling; in native Node, it becomes a reliable tool you can actually use to go big.
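As a rough sketch of the trade-off (the chunk capacity below is illustrative, not an o1js internal): a circuit larger than one chunk's capacity is split into several chunks, so circuits can grow well past the single-chunk limit while the proof grows with the chunk count.

```typescript
// Illustrative chunk-count arithmetic -- not the o1js implementation.
// A backend with a hypothetical single-chunk capacity of 2^16 rows
// handles a 2^18-row circuit by splitting it into 4 chunks.
function chunksNeeded(circuitRows: number, chunkCapacity: number): number {
  return Math.ceil(circuitRows / chunkCapacity);
}

const capacity = 2 ** 16; // hypothetical single-chunk row limit

console.log(chunksNeeded(2 ** 16, capacity)); // 1: fits without chunking
console.log(chunksNeeded(2 ** 18, capacity)); // 4: a 4x larger circuit
```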

What’s new

WASM has served us well, but it comes with the 4GB linear-memory limit and a tedious internal conversion layer. In proving workflows, both matter: memory grows quickly with circuit size, and conversions across the JS ↔ WASM boundary add overhead. Going native removes the memory ceiling and reduces that overhead, because o1js runs on your machine's actual architecture and uses your system's available RAM. Native machine code is better suited to the heavy field arithmetic at the core of Kimchi's algorithms, and system RAM typically offers far more capacity than the WASM limit.

o1js continues to ship a WASM backend for browsers and cross-platform portability. For Node, we now attempt to load a native backend built from the same Rust proof system. If it’s available, o1js routes prover calls to the native implementation. If it’s not, o1js gracefully falls back to WASM. That means no code changes are required from developers, and the behavior stays consistent across environments.
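The loading behavior described above can be sketched as a generic try-native-else-WASM pattern. The names below (`loadNative`, `loadWasm`, `selectBackend`) are hypothetical stand-ins to illustrate the idea, not actual o1js APIs.

```typescript
// Generic sketch of backend selection with graceful fallback.
// All names here are illustrative, not o1js internals.
type ProverBackend = {
  kind: 'native' | 'wasm';
  prove: (input: string) => string;
};

function loadNative(): ProverBackend {
  // In a real setup this would require() a platform-specific Node-API
  // addon; here we simulate a platform where the addon is unavailable.
  throw new Error('native addon not found for this platform');
}

function loadWasm(): ProverBackend {
  return { kind: 'wasm', prove: (input) => `wasm-proof(${input})` };
}

function selectBackend(): ProverBackend {
  try {
    return loadNative(); // prefer native when the prebuilt addon is present
  } catch {
    return loadWasm(); // graceful fallback keeps behavior consistent
  }
}

const backend = selectBackend();
console.log(backend.kind); // 'wasm' in this simulation
```

Because both backends expose the same interface, calling code never needs to know which one was loaded, which is how the developer-facing API stays unchanged.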

Why native matters

This is not just a performance bump. It changes the kind of zkApps that are practical to build. The WASM backend forced teams to trim features, split circuits, or pause projects when they hit memory and proving‑time ceilings. Native proving lifts those constraints so the design can lead again.

For community‑driven projects, this is a reopening of doors that had started to close. Experiments like zkEmail and zkPassport have been stalled by proving limits, and native headroom makes reviving and scaling those ideas realistic again.

For in‑house development, native proving makes larger, more ambitious zkApps feasible: EdDSA verification inside zkApps becomes practical for proving off‑chain signatures, bridge‑style circuits with multiple checks and signatures fit comfortably, and there is finally room to add more cryptographic primitives at the o1js layer without immediately hitting a memory wall.

How it works

The native backend is built with Napi-RS on top of Node-API, so Rust types are exposed directly to Node in a stable, well-supported way. We still preserve the existing Kimchi bindings interface, but the conversion layer is now tailored for native data. Curves, fields, proofs, and vectors move across the boundary using Node buffers and structured JS objects, and the conversion logic in o1js handles the rest. The goal is to keep this fast and predictable while staying fully compatible with existing o1js APIs.
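To give a feel for the kind of boundary crossing involved, here is a sketch of round-tripping a field element through a fixed-width little-endian Node buffer. This illustrates the pattern only; the real Kimchi bindings define their own layouts and conversion logic.

```typescript
// Sketch: moving a field element across the JS <-> native boundary as a
// 32-byte little-endian buffer. Illustrative only, not the bindings code.
const FIELD_BYTES = 32;

function fieldToBuffer(x: bigint): Buffer {
  const buf = Buffer.alloc(FIELD_BYTES);
  let v = x;
  for (let i = 0; i < FIELD_BYTES; i++) {
    buf[i] = Number(v & 0xffn); // least-significant byte first
    v >>= 8n;
  }
  return buf;
}

function fieldFromBuffer(buf: Buffer): bigint {
  let v = 0n;
  for (let i = FIELD_BYTES - 1; i >= 0; i--) {
    v = (v << 8n) | BigInt(buf[i]); // rebuild from most-significant byte
  }
  return v;
}

const x = 123456789012345678901234567890n;
console.log(fieldFromBuffer(fieldToBuffer(x)) === x); // true
```

Fixed-width buffers like this are cheap to hand to a Node-API addon, which is one reason the native conversion layer can stay fast and predictable.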

Native artifacts are produced as part of the Kimchi bindings build, and each supported platform/architecture gets a prebuilt package. At runtime, o1js loads the platform-specific package automatically if it’s present. You can also enforce native mode via an environment flag, while the browser and fallback paths remain WASM-based.
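Prebuilt Node-API addons are commonly resolved from the current platform and architecture, in the style of Napi-RS prebuilds. The package prefix below is hypothetical; only the platform/arch naming pattern is the point.

```typescript
// Sketch: deriving a platform-specific addon package name, as
// Napi-RS-style prebuilds typically do. The prefix is hypothetical.
function nativePackageName(platform: string, arch: string): string {
  return `example-native-prover-${platform}-${arch}`;
}

// On the current machine this yields something like
// 'example-native-prover-linux-x64' or 'example-native-prover-darwin-arm64'.
console.log(nativePackageName(process.platform, process.arch));
```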

With native proving in place, chunking in o1js becomes a practical tool for scaling circuits. You can now target circuits of size 2^18 without running into the WASM memory ceiling, 4x larger than the previous single-chunk limit of 2^16. Leveraging smaller-degree gates, our experiments show this limit can be pushed even further, up to 2^20 for generic constraints alone. For larger applications, this unlocks designs that were previously constrained by memory rather than by proof-system design.
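The headroom factors quoted above come straight from the row counts, as a quick sanity check:

```typescript
// Row counts behind the circuit-size claims in this section.
const wasmSingleChunk = 2 ** 16; // previous single-chunk ceiling
const nativeChunked = 2 ** 18;   // now reachable with chunking
const genericOnly = 2 ** 20;     // experimental, generic constraints only

console.log(nativeChunked / wasmSingleChunk); // 4  -> the "4x larger" claim
console.log(genericOnly / wasmSingleChunk);   // 16 -> headroom for generic gates
```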

Where this leads

The upgrade is about scale and developer experience. Faster proving tightens the dev loop, while larger circuits mean fewer workarounds and more expressive programs. Combined with chunking, it gives you a clearer path to complex proofs that run inside Node without fighting memory limits.

We’re excited to see what the community builds in TypeScript with native proving. If you want to try the native prover prerelease, check out https://www.o1js.dev/prerelease-native-prover for setup instructions and a quick overview. Just a heads-up: this is still a prerelease and we’re actively working on it, so expect a few rough edges and changes as we keep improving things.