Verified safety without the runtime cost.
Z3-proven contracts at build time.
No GC. No borrow checker.
Mathematical certainty.
package main

use std.collections.HashMap

fn main() {
    let mut map = HashMap<StringView, i64>::new();
    map.insert("hello", 1);
    map.insert("world", 2);

    // Pipe + optional chaining
    map.get("hello") |?> println(f"Found: {_}");

    // Iterate with f-strings
    for entry in map.iter() {
        println(f"{entry.key}: {entry.value}");
    }
}
Salt was developed to solve foundational challenges in high-performance computing. From optimizing LLM inference loops to structuring bare-metal unikernels, the language has been refined through application in critical systems. The following projects demonstrate Salt’s capability to handle complex, resource-constrained environments.
A complete Llama 2 inference engine. Weights are memory-mapped directly from disk, matrix multiplications are tiled through MLIR's affine optimizer, and every compute kernel carries a Z3 proof that its array accesses are in bounds. The whole thing is about 600 lines of Salt.
// Z3-verified matrix multiply
fn mat_mul(
    out: Ptr<f32>, a: Ptr<f32>, b: Ptr<f32>,
    m: i64, n: i64, d: i64
) requires(m > 0, n > 0, d > 0) {
    for i in 0..m {
        for j in 0..n {
            let mut sum: f32 = 0.0;
            for k in 0..d {
                sum = sum + a[i*d+k] * b[k*n+j];
            }
            out[i * n + j] = sum;
        }
    }
}
// Lattice kernel entry point
@no_mangle
pub fn kmain(magic: u32, mb: &u8) {
    serial.init();
    gdt.init();
    idt.init();
    pit.init();
    scheduler.init();
    serial.print("LATTICE BOOT [OK]\n");
    scheduler.start(); // Preemptive scheduling is live
    ring_of_fire.run_benchmark();
}
A unikernel that boots on real x86 hardware — written from scratch in Salt. It has its own GDT, IDT, PIT timer, preemptive scheduler, physical memory manager, and serial driver. No C runtime, no libc, no external dependencies at all. Benchmarked on bare-metal AWS instances with KVM.
Intel Xeon Platinum 8151 (Skylake), AWS z1d.metal, QEMU/KVM. -enable-kvm -cpu host.
Full methodology →
A neural network that trains on MNIST handwritten digits — entirely in Salt. Weight matrices are arena-allocated, every array access is Z3-verified through Slice bounds, and training time is competitive with the equivalent C implementation. The entire trainer is 188 lines.
// Arena + Slice — the safety bridge
let arena = Arena::new(4194304);
let w1 = Slice<f32>::new(
    arena.alloc_array::<f32>(HIDDEN * INPUT),
    HIDDEN * INPUT
);

for epoch in 0..8 {
    for sample in 0..TRAIN_COUNT {
        matvec(w1, input, pre_relu, HIDDEN, INPUT);
        relu(hidden.as_ptr(), HIDDEN);
        softmax_cross_entropy_grad(...);
    }
    println(f"Epoch {epoch}: {accuracy}%");
}
A Redis-compatible in-memory data store. It handles 234K ops/sec — roughly 2× the throughput of Redis itself — using zero-copy RESP parsing, an arena-backed hash table, and a kqueue event loop. 567 total lines of Salt, zero external dependencies.
// The entire event loop
fn run(&mut self) {
    let poller = Poller::new();
    poller.add(self.listener.fd, 0);
    loop {
        let events = poller.wait(64);
        for ev in events {
            if ev.fd == self.listener.fd {
                self.accept_client(&poller);
            } else {
                self.handle_client(ev.fd);
            }
        }
    }
}
A full-stack 2D rendering engine — from Bézier curve flattening to Metal
compute shaders. Five composable layers (raster, window, compositor,
UI, GPU), and every pixel write carries a Z3 proof that it's within the
framebuffer. Algorithm-identical to the C reference: Salt's MLIR codegen
matches clang -O3 at 457 fps.
// Z3-verified pixel writes
pub fn set_pixel(
    &mut self, x: i32, y: i32,
    r: u8, g: u8, b: u8, a: u8
) requires(
    x >= 0 && x < self.width &&
    y >= 0 && y < self.height
) {
    let off = (y as i64) * self.stride + (x as i64) * 4;
    self.pixels.offset(off).write(r);
    // Z3 proves: off in [0, stride*height)
}
High-connection networking benchmarks measuring how Salt handles raw I/O. Echo servers, arena-based connection pools, noisy-neighbor isolation. We test Salt against C, Rust, and Node on identical kqueue event loops — Salt matches C's throughput.
// kqueue echo server in Salt
fn main() -> i32 {
    let listen_fd = echo_server_init(PORT);
    let kq = echo_kq_create(listen_fd);
    let buf = malloc(BUF_SIZE);
    while true {
        let n = echo_kq_poll(
            kq, listen_fd, buf, BUF_SIZE
        );
        if n > 0 {
            total_bytes = total_bytes + n;
        }
    }
}
Every benchmark uses dynamic inputs and prints results to prevent dead-code elimination.
Same hardware, same optimization flags, no tricks. Salt matches or outperforms
clang -O3 on all 22 benchmarks we track.
Platform: Apple M4, macOS ARM64. Baselines: C (clang -O3), Rust (--release).
Salt compiled with default flags. We publish everything.
Full methodology and raw data →
Salt looks like a normal language. Under the hood, three ideas make it unusual.
You write requires() and ensures() annotations on your functions.
The compiler hands these to Z3, a world-class theorem prover, and either confirms they hold
for all possible inputs or tells you exactly why they don't. Nothing is checked at runtime —
the proof is the artifact.
Instead of garbage collection or Rust-style borrow checking, Salt uses arenas — you allocate into a region and free the whole region at once, in O(1). The compiler statically proves that no reference outlives its arena. It's simple enough that you stop thinking about it.
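Salt's Arena API appears in the samples on this page; as an illustration of the underlying idea, here is a minimal bump-allocator arena in C (names and details are ours, not Salt's): allocation advances a cursor, and freeing the arena releases everything at once.

```c
#include <stdlib.h>

// A bump allocator: each allocation advances a cursor into one
// region; freeing the arena releases every allocation in O(1).
typedef struct {
    char  *base;
    size_t used;
    size_t cap;
} arena_t;

arena_t arena_new(size_t cap) {
    arena_t a = { malloc(cap), 0, cap };
    return a;
}

void *arena_alloc(arena_t *a, size_t n) {
    n = (n + 7) & ~(size_t)7;           // round up to 8-byte alignment
    if (a->used + n > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

void arena_free(arena_t *a) {           // frees the whole region at once
    free(a->base);
    a->base = NULL;
    a->used = a->cap = 0;
}
```

What C cannot do, and Salt's compiler does, is prove statically that no pointer handed out by `arena_alloc` is used after `arena_free`.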
Salt emits code through MLIR, the same compiler infrastructure behind TensorFlow and
PyTorch. Tight loops lower to hardware-optimized instructions automatically — our matrix
multiply runs 6.8× faster than clang -O3 because the compiler picks the
right tiling strategy for your chip.
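The tiling the compiler chooses can be sketched by hand. A cache-blocked matrix multiply in C (illustrative only — the fixed tile size here is arbitrary, whereas MLIR's affine passes derive it per target):

```c
#include <string.h>

#define T 4  // tile size — illustrative; a real compiler tunes this per chip

// Cache-blocked matmul: out (m x n) = a (m x d) * b (d x n).
// Walking the iteration space in T x T tiles keeps each tile's
// working set in cache instead of streaming whole rows repeatedly.
static void mat_mul_tiled(float *out, const float *a, const float *b,
                          int m, int n, int d) {
    memset(out, 0, sizeof(float) * (size_t)m * (size_t)n);
    for (int ii = 0; ii < m; ii += T)
      for (int kk = 0; kk < d; kk += T)
        for (int jj = 0; jj < n; jj += T)
          for (int i = ii; i < ii + T && i < m; i++)
            for (int k = kk; k < kk + T && k < d; k++) {
              float aik = a[i * d + k];
              for (int j = jj; j < jj + T && j < n; j++)
                out[i * n + j] += aik * b[k * n + j];
            }
}
```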
Data flows left to right. Chain transformations with |>, propagate
errors with |?>, and use _ placeholders to bind arguments
wherever you want. It reads the way you think.
Raw pointer arithmetic lives inside unsafe blocks. Foreign function
calls are marked @trusted. Everything outside these boundaries is
statically verified. You always know where the unverified code is.
sp new, sp build, sp add. Artifacts are
content-addressed and cached globally. When you add a dependency, its Z3 contracts
are verified against your call sites automatically. Inspired by Cargo.
The same task, in C, Rust, and Salt.
// C — manual, error-prone
char *buf = malloc(4096);
if (!buf) return -1;
read_data(buf, 4096);
process(buf);
free(buf);      // forget this → leak
process(buf);   // use-after-free → UB
// Rust — safe, but verbose
let mut buf: Vec<u8> = Vec::with_capacity(4096);
read_data(&mut buf)?;
process(&buf);       // borrow — ok
let owned = buf;     // move — buf is gone
process(&buf);       // ✗ compile error: use of moved value
// Arena — allocate freely, free once
let arena = Arena::new(4096);
let buf = arena.alloc(4096);
read_data(buf, 4096);
process(buf);
process(buf);   // still valid — arena owns it
// arena drops here → everything freed
// C — return codes, manual checking
int fd = open(path, O_RDONLY);
if (fd < 0) return -1;
int n = read(fd, buf, 4096);
if (n < 0) { close(fd); return -1; }
int rc = parse(buf, n);
if (rc < 0) { close(fd); return -1; }
// Rust — safe, but ceremony adds up
let mut file = File::open(path)?;
let mut buf = vec![0u8; 4096];
let n = file.read(&mut buf)?;
let result = parse(&buf[..n])
    .map_err(|e| io::Error::new(
        ErrorKind::InvalidData, e
    ))?;

.map_err + trait bounds add boilerplate.

// Pipe operator — reads left to right
let data = path
    |> File::open()
    |> read_all()
    |> parse();

// Or use Result + match — your choice
match data {
    Result::Ok(val) => process(val),
    Result::Err(e) => println(f"error: {e}")
}
// C — nothing stops you
int divide(int a, int b) {
    return a / b;   // b == 0? UB. good luck.
}
int get(int *arr, int i) {
    return arr[i];  // out of bounds? UB.
}
// Rust — runtime panics
fn divide(a: i32, b: i32) -> i32 {
    a / b           // b == 0? panic at runtime.
}
fn get(arr: &[i32], i: usize) -> i32 {
    arr[i]          // out of bounds? panic at runtime.
}
// Contracts — proved before your code runs
fn divide(a: i32, b: i32) -> i32
    requires(b != 0)
{
    return a / b;
}
fn get(arr: &[i32; 10], i: i32) -> i32
    requires(i >= 0 && i < 10)
{
    return arr[i];
}
// Z3 checks every call site at compile time.
// If it can't prove safety, the build fails.
// C — format strings, no type checking
char msg[256];
sprintf(msg, "%s scored %d in %dms",
        name, score, elapsed);
// wrong format specifier → UB
// buffer overflow → UB
// Rust — safe, but allocates
let msg = format!(
    "{} scored {} in {}ms",
    name, score, elapsed
);
// Heap allocation every time.
// Display trait needed for custom types.

format! always allocates.

// F-strings — just works
let msg = f"{name} scored {score} in {elapsed}ms";
println(f"result: {x |> square() |> double()}");
// Inline expressions, type-safe,
// no format specifiers, no heap allocation.
Over 70 modules, zero external dependencies.
Everything here is written in Salt.
Salt ships with sp, a package manager designed around the same
principles as the language: fast, ergonomic, and verified by default.
# Create a project — instant, no prompts
$ sp new my_app
✨ Created project 'my_app'

# Build & run
$ sp run
🧂 Running my_app...

# Add a dependency — contracts are shown
$ sp add crypto
✨ Added crypto v0.2.1
📋 Contracts: encrypt requires(key.len() >= 32)

# Verify all contracts across package boundaries
$ sp check
✅ All contracts verified (0.2s)
Build artifacts are keyed by the hash of their source, compiler version, and dependencies. If nothing changed, nothing rebuilds. Caches are shared across every project on your machine.
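A hedged sketch of what such a cache key can look like — a 64-bit FNV-1a hash folded over source, compiler version, and dependency hashes (sp's actual scheme is not specified here; function names are illustrative):

```c
#include <stdint.h>

// FNV-1a: a simple, stable 64-bit string hash.
static uint64_t fnv1a(uint64_t h, const char *s) {
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;          // FNV prime
    }
    return h;
}

// A content-addressed build keys each artifact by
// hash(source ++ compiler version ++ dependency hashes):
// if none of the inputs changed, the key is identical and
// the cached artifact is reused instead of rebuilt.
static uint64_t cache_key(const char *source,
                          const char *compiler_version,
                          const char *dep_hashes) {
    uint64_t h = 14695981039346656037ULL;   // FNV offset basis
    h = fnv1a(h, source);
    h = fnv1a(h, compiler_version);
    h = fnv1a(h, dep_hashes);
    return h;
}
```

A production system would use a cryptographic hash; FNV-1a just keeps the sketch short.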
When a library declares requires(key.len() >= 32), the compiler verifies
that condition at every call site in every downstream package. Safety contracts compose
across dependency boundaries.
Monorepos work out of the box. One lockfile, parallel builds, and the compiler only recompiles what actually changed.
Three commands, thirty seconds. That's it.
$ sp new hello && cd hello && sp run
✨ Created project 'hello'
🧂 Hello from hello!