Traverse 0.6.0 is out — and alongside it we're releasing Silicon, a unikernel build that runs the entire graph database as a lightweight virtual machine. No Linux, no containers. A single 10 MB binary boots on a KVM hypervisor and starts serving queries in under 100 milliseconds.
What's new in 0.6.0
This release focused on production stability, embedded language bindings, and broadening how you deploy and integrate Traverse.
Stability and memory fixes
We fixed several critical memory issues that surfaced under sustained production workloads in 0.5.0:
- Fixed a main-heap memory leak on database unload — databases were not fully reclaimed after hot-swap cycles.
- Fixed mimalloc RSS ratchet where the allocator would not return pages to the OS, causing monotonically increasing resident memory.
- Added a memory ceiling during database load and a pre-flight memory check to prevent OOM crashes.
- Fixed bulk-free incorrectly triggering for non-DB-allocated databases.
- Fixed server total memory reporting on Windows.
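Traverse's actual pre-flight logic isn't published, but the idea behind the memory ceiling and pre-flight check is simple to sketch. A toy version in Python (the `preflight_check` function and its parameters are hypothetical, not the real API; `SC_AVPHYS_PAGES` is a Linux sysconf key):

```python
import os

def available_memory_bytes() -> int:
    """Rough estimate of currently free physical memory (Linux)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_AVPHYS_PAGES")

def preflight_check(db_size_bytes: int, ceiling_bytes: int) -> bool:
    """Refuse a database load that would blow past the configured
    ceiling or the memory actually available on the host."""
    if db_size_bytes > ceiling_bytes:
        return False
    return db_size_bytes <= available_memory_bytes()

# A 1 MB database against a 64 MB ceiling passes on any host
# with at least 1 MB free; a 128 MB database is rejected.
print(preflight_check(1 << 20, 64 << 20))
print(preflight_check(128 << 20, 64 << 20))
```

Failing fast here turns a would-be OOM kill mid-load into a clean, reportable error before any pages are touched.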
Embedded language bindings
Traverse can now run as an in-process database in five languages. Each binding wraps the native Traverse engine with no external dependencies:
- Python (PyO3), Java (JNI), Node.js (napi-rs 3), Go (FFI/syscall, no CGo), .NET (Traverse.Embedded)
- Typed HTTP clients for Python, Java (Jackson 3.1), Node.js (native fetch), and Go (stdlib only)
Server
- HTTP API and WebSocket now enabled by default on port 7691.
- Built-in MCP server for AI integration — let AI assistants query and explore your graph databases directly.
- Bolt 6.0 protocol support.
- Lifecycle semaphore preventing concurrent load/unload/drop operations.
- Fixed upload hanging on duplicate database name.
Studio
- Conditional node and edge styling with if/then/else rules based on property values.
- Library reference panel for all language bindings and HTTP clients.
- Fixed files tab being empty after login and memory popover not showing all databases.
Upgrade by downloading the latest release from the Traverse product page or following the getting started guide.
Silicon: the database is the VM
Traditional database deployments stack layers: hardware, hypervisor, operating system, container runtime, container image, and finally the database. Each layer adds latency, memory overhead, and attack surface. Silicon removes all of them except the hypervisor. The database binary is the entire virtual machine.
Silicon boots directly on Cloud Hypervisor 44.0+ or Firecracker v1.15.0+. It brings its own TCP/IP stack, its own filesystem, and its own scheduler. There is no Linux kernel, no system libraries, no shell, no SSH, no package manager. The attack surface is the database protocol and the HTTP API — nothing else.
Boot time: 84 ms
Silicon boots in under 100 milliseconds — from power-on to accepting database queries. Measured end-to-end on a 4-vCPU VM:
| Phase | Time |
|---|---|
| Kernel initialization (GDT, IDT, memory, heap, scheduler) | 15 ms |
| PCI discovery + disk format + SMP (4 cores) | 42 ms |
| Traverse startup (scan databases, ready to serve) | 27 ms |
| Total: power-on to serving queries | 84 ms |
For comparison, a container typically takes 2–5 seconds to start. A traditional VM with a full OS takes 10–30 seconds. Silicon enables instant scaling and instant recovery — a new instance is serving queries before a container would have finished pulling its image layers.
Why not containers?
Containers share the host kernel. Every container on a machine runs inside the same Linux kernel, separated only by namespaces and cgroups — software boundaries that have been repeatedly bypassed by container escape vulnerabilities. A single kernel exploit compromises every container on the host.
Silicon runs inside a hardware-enforced VM boundary. Each instance has its own virtual CPUs, its own memory, and its own block device. The isolation is enforced by the CPU itself (Intel VT-x), not by software policies.
| | Container | Silicon |
|---|---|---|
| OS kernel | Shared Linux kernel | None |
| System libraries | glibc, openssl, etc. | None (statically linked) |
| Shell / SSH | /bin/sh, bash, openssh | None |
| Package manager | apt, dpkg | None |
| Process table | Multiple PIDs | Single process |
| Container runtime | containerd + runc | None |
| Filesystem | OverlayFS layers | SiliconFS (native) |
| Image size | 200–500 MB | 10 MB |
There is no shell to exec into. No SSH to brute-force. No process table to inspect. No sudo, no setuid, no capability escalation.
Performance without a kernel
In a container, every network packet and every disk write crosses the kernel boundary via a system call. Each system call costs roughly 100 nanoseconds — a CPU pipeline flush, a privilege transition, a TLB invalidation. For a database handling thousands of operations per second, these add up.
Silicon eliminates system calls entirely. Network I/O, disk I/O, and memory allocation are function calls within a single address space. There is no privilege transition because the entire VM runs at a single privilege level.
| Operation | Container (Linux kernel) | Silicon |
|---|---|---|
| System call overhead | ~100 ns | 0 ns (function call) |
| Network packet processing | 1–5 µs (kernel TCP stack) | 100–500 ns (direct) |
| Disk write | 10–50 µs (VFS + block layer) | 1–5 µs (direct virtio) |
| Context switch | Frequent | None |
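You can observe the kernel-crossing cost from user space. A rough Python micro-benchmark comparing a user-space C-level call against `os.getpid()`, which issues a real system call on Linux (absolute numbers here are dominated by interpreter overhead and vary by CPU; this only illustrates the gap, not Silicon's internals):

```python
import os
import time

def ns_per_call(fn, n=200_000):
    """Average wall-clock nanoseconds per invocation of fn."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) * 1e9 / n

# Both are direct C-level calls from Python; only one crosses the kernel.
user_ns = ns_per_call("x".__len__)   # stays in user space
kernel_ns = ns_per_call(os.getpid)   # enters and exits the kernel

print(f"user-space call: {user_ns:.0f} ns, system call: {kernel_ns:.0f} ns")
```

In a unikernel, the second number simply doesn't exist: every I/O path looks like the first.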
SiliconFS: crash-safe from the ground up
Silicon includes its own filesystem designed specifically for database workloads on virtual block devices.
Every write follows a strict copy-on-write protocol: new data goes to fresh blocks, a flush barrier ensures durability, metadata is journaled with CRC32 integrity checks, and a second flush barrier commits the transaction. If the VM crashes at any point, the filesystem recovers to a consistent state. Either the write completed fully, or it didn't happen at all.
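SiliconFS's on-disk format isn't public, but the commit protocol described above can be modeled in a few lines. A toy in-memory sketch in Python (class and field names are invented for illustration; `flush()` stands in for a virtio flush barrier):

```python
import zlib

class ToyCowFs:
    """Toy copy-on-write commit: data lands in fresh blocks first, then
    a CRC-protected journal record, so a crash never mixes old and new."""
    def __init__(self):
        self.blocks = {}    # committed block map: name -> block id
        self.store = {}     # block id -> bytes
        self.journal = []   # (record bytes, crc32)
        self.next_id = 0

    def flush(self):
        pass  # stands in for a flush barrier to the block device

    def write(self, name, data: bytes):
        # 1. New data goes to a fresh block; the old block is untouched.
        bid = self.next_id
        self.next_id += 1
        self.store[bid] = data
        self.flush()                      # barrier: data is durable
        # 2. Journal the metadata change with an integrity check.
        rec = f"{name}->{bid}".encode()
        self.journal.append((rec, zlib.crc32(rec)))
        self.flush()                      # barrier: the commit point
        # 3. Only now does the block map point at the new data.
        self.blocks[name] = bid

fs = ToyCowFs()
fs.write("node-pages", b"v1")
fs.write("node-pages", b"v2")
print(fs.store[fs.blocks["node-pages"]])  # latest committed version
```

A crash before the second barrier leaves the map pointing at the old block; a crash after it is repaired by journal replay. Either way the write is all-or-nothing.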
Every data block on disk is checksummed. Every read verifies the checksum before returning data. If a block is corrupted — by a disk error, a firmware bug, or a bit flip — the filesystem detects it instead of silently delivering corrupt data. The superblock is stored in duplicate so a single-block corruption cannot prevent mounting.
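The read-side guarantee is easy to demonstrate in miniature. A sketch of per-block checksum verification in Python, using CRC32 as SiliconFS does (the `ChecksummedStore` class is a hypothetical stand-in, not the real implementation):

```python
import zlib

class ChecksummedStore:
    """Every block carries a CRC32; reads verify before returning data."""
    def __init__(self):
        self.disk = {}  # block id -> (payload bytearray, crc32)

    def write(self, bid, data: bytes):
        self.disk[bid] = (bytearray(data), zlib.crc32(data))

    def read(self, bid) -> bytes:
        payload, crc = self.disk[bid]
        if zlib.crc32(bytes(payload)) != crc:
            raise IOError(f"block {bid}: checksum mismatch")
        return bytes(payload)

store = ChecksummedStore()
store.write(7, b"graph data")
store.disk[7][0][0] ^= 0x01   # simulate a single bit flip on disk
try:
    store.read(7)
except IOError as e:
    print(e)                  # corruption is detected, not returned
```

CRC32 is guaranteed to catch any single-bit error, so silent corruption is converted into a loud, actionable failure.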
On mount, SiliconFS replays the journal and runs a full consistency check: superblock integrity, inode validation, block allocation cross-referencing, and data checksum verification. Detected inconsistencies are automatically repaired. The filesystem self-heals without manual intervention.
10 MB, everything included
The entire Silicon binary is 10 MB compressed. It contains the Traverse database engine, the Bolt protocol server, the HTTP/WebSocket API, a TLS stack, and the TCP/IP networking stack. On a host running many tenants, Silicon achieves 10–20x higher density than containers — more database instances per gigabyte of storage and per gigabyte of RAM.
Getting started with Silicon
Silicon runs on any Linux host with KVM enabled. Download the release and launch with Cloud Hypervisor or Firecracker:
```shell
# Create a storage disk (formatted automatically on first boot)
truncate -s 32G storage.raw

# Launch with Cloud Hypervisor
sudo cloud-hypervisor \
  --kernel traverse-silicon \
  --cpus boot=4 --memory size=32G --serial tty --console off \
  --disk path=storage.raw \
  --net tap=tap0,mac=52:54:00:12:34:56 \
  --cmdline "TRAVERSE_HTTP_LISTEN=0.0.0.0:7691 TRAVERSE_LICENSE_KEY=<your-key>"
```
The VM acquires an IP via DHCP and starts accepting connections on ports 7690 (Bolt) and 7691 (HTTP). Every Neo4j driver works without changes — point it at the VM's IP and go.
See the getting started guide for full instructions including Firecracker setup, networking, and configuration options.
What's next
Silicon is the deployment model we've been building toward: a graph database that boots instantly, runs in complete isolation, and fits in 10 MB. We're continuing to push on query engine performance, Cypher coverage, and making Silicon the simplest way to run a production graph database.
Get started or reach out if you want a demo.