Why We Chose Rust for Production Infrastructure

How a small consultancy ended up betting on Rust for critical infrastructure — from audio streaming to endpoint security — and what we have learned after shipping production systems.

KeyQ is a small shop. We do not have the luxury of throwing engineers at problems or tolerating systems that need constant babysitting. When we build infrastructure for our clients, it needs to run reliably with minimal operational overhead. Over the past couple of years, Rust has become our default choice for performance-critical and reliability-critical systems. Here is why.

What We Have Built in Rust

We currently maintain three production Rust projects that span very different domains:

An audio streaming server that replaced Liquidsoap for a client’s broadcasting infrastructure. It decodes multiple audio formats, normalizes and crossfades tracks, encodes to MP3, and streams to multiple Icecast servers simultaneously. It runs 24/7 with automatic reconnection and built-in Prometheus monitoring.
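The crossfade step can be sketched in a few lines. This is an illustrative equal-power fade over interleaved f32 samples, not the server's actual implementation — function and variable names here are invented for the example:

```rust
/// Equal-power crossfade between the tail of the outgoing track and the
/// head of the incoming one. Assumes both slices hold interleaved f32
/// samples; a sketch, not the production code.
fn crossfade(outgoing: &[f32], incoming: &[f32], out: &mut Vec<f32>) {
    let n = outgoing.len().min(incoming.len());
    for i in 0..n {
        // Progress through the fade, 0.0 -> 1.0.
        let t = i as f32 / n as f32;
        // Equal-power gains keep perceived loudness roughly constant
        // through the transition.
        let fade_out = (1.0 - t).sqrt();
        let fade_in = t.sqrt();
        out.push(outgoing[i] * fade_out + incoming[i] * fade_in);
    }
}

fn main() {
    let tail = vec![1.0_f32; 4];
    let head = vec![1.0_f32; 4];
    let mut mixed = Vec::new();
    crossfade(&tail, &head, &mut mixed);
    println!("{:?}", mixed);
}
```

A linear fade would be simpler, but it dips in loudness at the midpoint; the equal-power curve is the usual fix for that.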

An endpoint security agent for Windows that performs malware scanning, DNS-level phishing protection, and encrypted quarantine management. It is structured as an 11-crate Rust workspace covering everything from the scanning engine to the system tray icon.

A real-time threat visualization tool built on the Bevy game engine that renders global threat intelligence data at 60 FPS for demonstration and monitoring purposes.

These are not hobby projects or rewrites of things that already worked. Each one replaced a system that was either unreliable, too complex to maintain, or did not exist in a form that met our requirements.

The Case for Rust at a Small Company

The usual arguments for Rust — performance, memory safety, fearless concurrency — are well documented. What gets less attention is why Rust makes particular sense for a small team.

Rust programs do not wake you up at 3 AM. Our streaming server has been running continuously for months. No memory leaks, no gradual performance degradation, no mysterious crashes that require restarting the process. The compiler eliminates entire categories of bugs at build time, which means fewer production incidents and less time spent debugging.

Compare this to a Python or Node.js service where a missing null check, an unhandled promise rejection, or a slow memory leak can take down a long-running process weeks after deployment. For a team where every engineer is already stretched thin, the upfront cost of satisfying the Rust compiler pays for itself in operational peace.

Single binary deployment is operationally simple. Each of our Rust projects compiles to a single binary with no runtime dependencies (other than system libraries). There is no package manager to run on the server, no virtual environment to maintain, no version conflicts between projects. Copy the binary, write a systemd unit file, and it runs. For Docker deployments, the multi-stage build produces a minimal image — our streaming server’s runtime image is about 200 MB including the MP3 encoder.
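A multi-stage build for a server like this looks roughly as follows. The crate name, base images, and the LAME package are illustrative assumptions, not the actual build file:

```dockerfile
# Build stage: compile a release binary (crate name is a placeholder).
FROM rust:1.75 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Runtime stage: a slim image with only the binary and the system
# libraries it links against. libmp3lame0 stands in for the MP3 encoder
# mentioned above -- an assumption for this sketch.
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends libmp3lame0 \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/streaming-server /usr/local/bin/
ENTRYPOINT ["streaming-server"]
```

The builder image with its toolchain and build cache never ships; only the final stage becomes the runtime image.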

The type system catches integration bugs. When your streaming server handles audio at multiple sample rates, bitrates, and channel configurations, the type system helps ensure you do not accidentally pass a stereo buffer to a mono encoder. When your security agent handles cryptographic operations — Ed25519 signatures for updates, XChaCha20-Poly1305 for quarantine encryption — Rust’s type system makes it much harder to misuse a key or confuse plaintext with ciphertext.
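The stereo-versus-mono point can be made concrete with the newtype pattern. The types below are invented for illustration — the real codebase's types are surely richer — but the mechanism is the same:

```rust
// Illustrative newtypes: same underlying Vec<f32>, but the compiler
// treats them as distinct, incompatible types.
struct MonoBuffer(Vec<f32>);
struct StereoBuffer(Vec<f32>); // interleaved L/R samples

struct MonoEncoder;

impl MonoEncoder {
    // Accepts only MonoBuffer; passing a StereoBuffer is a compile
    // error, not a runtime surprise.
    fn encode(&self, buf: &MonoBuffer) -> usize {
        buf.0.len()
    }
}

/// Downmixing stereo to mono must be an explicit, named step,
/// never an accident.
fn downmix(stereo: &StereoBuffer) -> MonoBuffer {
    MonoBuffer(
        stereo
            .0
            .chunks_exact(2)
            .map(|lr| (lr[0] + lr[1]) * 0.5)
            .collect(),
    )
}

fn main() {
    let stereo = StereoBuffer(vec![0.2, 0.4, 0.6, 0.8]);
    let encoder = MonoEncoder;
    // encoder.encode(&stereo);  // <- would not compile
    let frames = encoder.encode(&downmix(&stereo));
    println!("encoded {frames} mono samples");
}
```

The same trick — distinct wrapper types over the same raw bytes — is what keeps a signing key from being used where an encryption key is expected.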

The Ecosystem Is Ready

A common objection to Rust is ecosystem maturity. Three years ago, this was a real concern. Today, the libraries we depend on are production-grade:

Tokio powers all of our async I/O — HTTP servers, network clients, timers, and task scheduling. It is the de facto async runtime and has been battle-tested at scale by companies far larger than us.

Axum (from the Tokio team) handles our REST APIs with a type-safe, ergonomic design that makes it hard to write incorrect request handlers.

Symphonia decodes audio formats in pure Rust, eliminating the C library dependencies that made FFmpeg integration fragile.

yara-x (from VirusTotal) provides YARA rule evaluation in pure Rust, removing the dependency on C-based libyara and its associated build complexity.

Serde and TOML handle serialization and configuration parsing so well that we take them for granted.

The pattern here is telling: many of these libraries are pure Rust replacements for C libraries. This matters for security-sensitive applications (like our endpoint agent) where every C dependency is a potential source of memory safety vulnerabilities.

The Workspace Pattern for Larger Projects

Our security agent is structured as an 11-crate Rust workspace:

sentinel-cli              # CLI + Windows service entry point
sentinel-engine           # Core scanning (hash + YARA)
sentinel-quarantine       # Encrypted quarantine vault
sentinel-updater          # Signed update client
sentinel-policy           # Configuration management
sentinel-transport        # HTTP client + event spool
sentinel-platform         # OS adapters (Windows, macOS, Linux)
sentinel-license          # License validation
sentinel-phishing         # URL/domain phishing detection + DNS proxy
sentinel-notify           # IPC notification system
sentinel-tray             # System tray helper

Each crate owns a feature domain with a clear interface. Shared dependencies are specified once at the workspace level, ensuring version consistency. This structure lets us compile and test individual crates in isolation, which keeps build times manageable even as the project grows.
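The top-level manifest for such a workspace looks roughly like this (member list abbreviated; the specific dependencies and versions are illustrative):

```toml
[workspace]
members = [
    "sentinel-cli",
    "sentinel-engine",
    "sentinel-quarantine",
    # ... remaining crates ...
]
resolver = "2"

# Declared once here; each member crate opts in with
# `tokio = { workspace = true }` in its own [dependencies] table,
# so every crate resolves to the same version.
[workspace.dependencies]
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
```

Workspace dependency inheritance (stabilized in Rust 1.64) is what makes the "specified once" guarantee enforceable rather than a convention.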

The workspace pattern is one of Rust’s underappreciated features for larger projects. It provides the organizational benefits of a monorepo with the compilation isolation of separate packages. When you change the phishing detection crate, only it and its dependents recompile — not the entire project.

What We Would Not Use Rust For

Rust is not our answer to everything. We still reach for other tools when they are a better fit:

Web frontends — React with TypeScript. The browser is JavaScript’s domain, and fighting that is not productive.

CRUD APIs with simple business logic — NestJS (TypeScript) or Cloudflare Workers. When the hardest problem is “save this to a database and return it,” Rust’s compile times are not justified.

Rapid prototypes — Python or TypeScript. When we need to validate an idea in a day, Rust’s upfront investment is too high.

Control planes and admin dashboards — Go or TypeScript. For services where request latency is measured in hundreds of milliseconds and development velocity matters more than raw performance.

The dividing line for us is: does this system need to run reliably for months without intervention? Does it handle untrusted input, binary data, or cryptographic operations? Is performance a feature, not just a nice-to-have? If yes, Rust. If no, we pick whatever lets us ship fastest.

The Learning Curve Is Real — But Overblown

Rust’s reputation for being difficult to learn is deserved but overstated. The borrow checker is genuinely confusing for the first few weeks. After that, it becomes an ally rather than an obstacle. The compiler error messages are the best in the industry — they tell you what went wrong, why, and often how to fix it.

Our experience has been that a developer who is comfortable with typed languages (TypeScript, Go, Java) can become productive in Rust within 4-6 weeks. Not expert-level — that takes longer — but productive enough to contribute meaningful code to an existing project.

The key is starting with the right project. A CLI tool or a small API server is a much better first Rust project than a complex async system. Build confidence with the language before tackling the concurrency primitives.

The Bottom Line

For a small team building infrastructure that needs to run reliably in production, Rust has been a genuine force multiplier. The upfront investment in writing Rust is higher than Python or Go, but the payoff comes in systems that we deploy and largely forget about. In a consultancy where our time is our product, the ability to build something once and have it just work is worth more than almost any other technical advantage.

We are not Rust evangelists — we are pragmatists who found that Rust solves the specific problems we face better than the alternatives. If your infrastructure demands performance, reliability, and low operational overhead, it might solve yours too.


KeyQ builds production infrastructure in Rust, Go, and TypeScript for clients who need systems that scale and stay up. Get in touch if you are working on something ambitious.