RUST - Multiplayer Game Backend Deep Dive

Key Insights

  • Vet Your Networking Middleware: Rust's adoption of RakNet showed that an open-source fork of a proven library can carry hidden bugs. Validate the specific version you ship, not just the library's reputation.

  • One Codebase, Two Targets: Facepunch compiled server and client from a single shared codebase using preprocessor flags, eliminating drift between the two and reducing long-term maintenance overhead.

  • Invest in Editor Testing Early: Building a listen server mode for the Unity editor removed a painful multi-step testing workflow and made netcode iteration dramatically faster — a principle that applies to any multiplayer project.

  • Deployment Infrastructure Is Not Optional: When Rust went viral, the team had to rebuild their entire build pipeline under pressure; the experience is a direct argument for automating deployments before launch, not after.

  • PVS Culling Keeps Large Worlds Scalable: Rust's grid-based visibility system ensures players only receive state updates for nearby cells, keeping bandwidth and CPU costs proportional to local population density rather than total server size.

Garry Newman, founder of Facepunch Studios, documented Rust's networking and backend architecture in a series of posts on his personal blog at garry.net. Written between 2013 and 2016 — during the game's turbulent early development and its surprise viral launch — the posts cover the networking stack choices, server/client code architecture, editor testing workflows, and the deployment pipeline that had to be rebuilt almost overnight. The two most relevant posts are "Rust's Networking" and "The Rust backend", supplemented by Facepunch's official devblog entries from the same period.

Indirectly, the posts surface decisions that any multiplayer studio, from small indie teams to growing live-service operations, will face sooner or later.

Picking a Networking Library (and Trusting It Blindly)

When Newman was building Rust's networking layer, he reached for RakNet. "Raknet is a tried and tested networking solution," he wrote, noting its long track record across shipped titles. Oculus had acquired it and made it open source, which seemed like good news at the time.

It wasn't, quite.

Newman noticed bugs in the public repository and suspected the open-source version "had to have a bunch of shit ripped out" relative to the internal build that had powered all those shipped games. A library with a strong industry reputation had arrived in its community form in an uncertain state.

The practical lesson isn't to avoid third-party networking libraries. It's to validate the specific version you're actually shipping against your actual use case, rather than relying on the library's historical reputation.

Open-source forks of commercial tools deserve extra scrutiny. Newman himself noted plans to eventually replace RakNet with Valve's Steam Networking Sockets, a useful reminder that networking stacks should be treated as replaceable components from the start.
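
One way to act on that mindset is to keep the networking library behind a thin abstraction so it can be swapped without touching gameplay code. The sketch below is illustrative only; the interface and class names are hypothetical, not Facepunch's:

```csharp
// Hypothetical sketch: isolate the networking library behind a small interface
// so the concrete transport (RakNet today, Steam Networking Sockets later)
// can be replaced without rewriting gameplay code.
public interface INetworkTransport
{
    void Connect(string address, int port);
    void Disconnect();
    void Send(byte[] payload, bool reliable);
    // Returns null when no message is waiting.
    byte[] Receive();
}

// Gameplay code depends only on the interface; the concrete transport
// is chosen once, at startup.
public sealed class NetworkClient
{
    private readonly INetworkTransport transport;

    public NetworkClient(INetworkTransport transport)
    {
        this.transport = transport;
    }

    public void SendChat(string text)
    {
        transport.Send(System.Text.Encoding.UTF8.GetBytes(text), reliable: true);
    }
}
```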

It's also worth noting that this post was written in 2016, and the landscape has changed considerably since then. Unity developers in particular now have a wide range of accessible, well-maintained netcode options to choose from. For a current overview of what's available, Edgegap's guide to Unity netcode solutions is a useful starting point.

Sharing Code Between Server and Client

One of the more durable architectural decisions in Rust was treating the server and client not as separate codebases, but as two compilation targets of the same code. The codebase is compiled with either SERVER or CLIENT defined, and logic specific to one side is gated behind those preprocessor flags. Newman borrowed the pattern from Valve's Source engine, where he had spent his formative years as a developer.
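
In Unity C#, the pattern looks roughly like the following. This is a sketch under assumed names, not Rust's actual classes; SERVER and CLIENT would be scripting define symbols set per build target:

```csharp
// Illustrative sketch of the shared-codebase pattern: one file, two targets.
// SERVER and CLIENT are scripting define symbols set per build; an editor
// listen server defines both at once.
public class HealthComponent
{
    // Shared logic compiles into both builds, so it can never drift.
    public float Health { get; private set; } = 100f;

    public bool IsAlive => Health > 0f;

#if SERVER
    // Authoritative path: only the server changes health, then replicates it.
    public void ApplyDamage(float amount)
    {
        Health = System.Math.Max(0f, Health - amount);
        // e.g. queue a network update for subscribed clients here
    }
#endif

#if CLIENT
    // Presentation path: the client only consumes replicated state.
    public void OnHealthReplicated(float serverValue)
    {
        Health = serverValue;
        // e.g. refresh the health bar UI here
    }
#endif
}
```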

This sidesteps a common pain point: two diverging codebases that gradually drift apart as the game grows. Shared systems stay shared. Bug fixes propagate to both targets at once. The mental model of "one game, two views" maps cleanly onto how multiplayer systems actually work.

The tradeoff is discipline.

Code has to be written with constant awareness of which context it runs in. Edge cases require care: in editor listen server mode, for instance, a physics object exists in both server and client form at the same time, and they must be prevented from interacting with each other. These problems are solvable. But they don't solve themselves.
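
For that particular physics case, one possible approach in Unity (a sketch, not necessarily how Facepunch handled it) is to keep server-side and client-side instances on separate physics layers and disable collisions between them; the layer names here are hypothetical:

```csharp
using UnityEngine;

// Sketch of one way to keep the server and client copies of the same physics
// object from interacting in an editor listen server. Layer names are
// illustrative and must exist in the project's layer settings.
public static class ListenServerPhysicsSetup
{
#if SERVER && CLIENT
    [RuntimeInitializeOnLoadMethod]
    private static void SeparateWorlds()
    {
        int serverLayer = LayerMask.NameToLayer("ServerEntities");
        int clientLayer = LayerMask.NameToLayer("ClientEntities");

        // Server objects collide with server objects, client with client,
        // but never across the boundary.
        Physics.IgnoreLayerCollision(serverLayer, clientLayer, true);
    }
#endif
}
```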

Listen Servers: Removing Friction From Testing

The original Rust development workflow for testing networked code was, as Newman put it, "a fucking nightmare." Testing any change required compiling a standalone server build, launching it in the background, then connecting to it from the editor client. The workflow was so deeply embedded in the old architecture that it couldn't easily be changed, and it was one of the main reasons the team chose to restart the game from scratch.

With the rebuilt version, listen server support in the Unity editor became a high priority from day one. The solution was to define both SERVER and CLIENT simultaneously, so the editor hosts and joins its own game without any external process. Newman was careful to note that the implementation had to emulate real networking faithfully. Shortcuts that let server and client share variables directly would hide bugs that only surfaced in actual network builds.
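
A minimal way to honor that constraint (again a sketch under assumed names, not Facepunch's code) is a loopback transport that still serializes every message to bytes, so the editor listen server exercises the same codepath as a real network build:

```csharp
using System.Collections.Generic;

// Hypothetical loopback transport for an editor listen server: messages are
// still serialized and copied into queues, never passed as shared object
// references, so serialization bugs surface in the editor too.
public sealed class LoopbackTransport
{
    private readonly Queue<byte[]> toServer = new Queue<byte[]>();
    private readonly Queue<byte[]> toClient = new Queue<byte[]>();

    public void ClientSend(byte[] payload) => toServer.Enqueue((byte[])payload.Clone());
    public void ServerSend(byte[] payload) => toClient.Enqueue((byte[])payload.Clone());

    public byte[] ServerReceive() => toServer.Count > 0 ? toServer.Dequeue() : null;
    public byte[] ClientReceive() => toClient.Count > 0 ? toClient.Dequeue() : null;
}
```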

The underlying principle holds up well even if the specific tooling has moved on. Compiling and deploying a dedicated server for local testing is a much more tractable problem today than it was in 2013. Containerization and purpose-built plugins have absorbed most of the pain. Edgegap's Unity plugin includes a build tool that reduces dedicated server compilation to a few minutes even for large projects, and the Unreal Docker Extension sidesteps building Unreal from source entirely, cutting what used to be a multi-hour process down to around 8 minutes. The goal is the same as Newman's listen server: get a working server in front of a developer as fast as possible, so netcode bugs surface during iteration rather than after launch.

Building a Deployment Pipeline Before You Need It

After Rust went viral in late 2013, the team spent a week rebuilding their entire deployment process from the ground up. "Previously, deploying new versions was slow and bug ridden," Newman wrote. Builds were compiled on "some cunt's computer," the whole process "an ordeal."

The replacement system used Jenkins CI/CD, automatically triggered by SVN commits. Separate jobs for the server and client ran in parallel, cutting build times from around 40 minutes to under 10. Assets were divided into 60+ bundles, each kept under 5 MB so browsers could cache them like images. Each bundle was named using a CRC hash, so a file only triggered a re-download when its content had actually changed, not on every deploy. As Newman explained, "We don't want the player to have to download 300mb each time."
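
A rough sketch of that content-addressed naming scheme follows; the script is illustrative, not Facepunch's build code. Because the CRC of the bundle's bytes is embedded in its file name, an unchanged bundle keeps its name and stays cached, while a changed one gets a new name and a fresh download:

```csharp
using System;
using System.IO;

// Illustrative sketch of content-addressed bundle naming: the CRC32 of each
// bundle's bytes goes into its file name, so a bundle only triggers a
// re-download when its contents actually change.
public static class BundleNamer
{
    public static string NameFor(string bundlePath)
    {
        byte[] data = File.ReadAllBytes(bundlePath);
        uint crc = Crc32(data);
        string baseName = Path.GetFileNameWithoutExtension(bundlePath);
        return $"{baseName}_{crc:x8}{Path.GetExtension(bundlePath)}";
    }

    // Standard reflected CRC-32 (polynomial 0xEDB88320), bitwise variant.
    private static uint Crc32(byte[] data)
    {
        uint crc = 0xFFFFFFFF;
        foreach (byte b in data)
        {
            crc ^= b;
            for (int i = 0; i < 8; i++)
                crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }
}
```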

The system worked. But it had to be built in a scramble, under pressure, after a viral launch nobody had fully anticipated.

Newman's description of the before state is as instructive as the after. Builds were compiled on one developer's personal machine, the whole process an ordeal. The tools that replaced it were simple: Jenkins, SVN, S3, CRC hashes. Nothing exotic. What made the difference was having them in place and running automatically, so the team could ship updates at the pace the situation demanded.

PVS Culling: Only Send What Players Can See

One of the more technically instructive entries in Facepunch's early devblogs described the introduction of a Potential Visibility Set (PVS) culling system into Rust's netcode. The approach is grid-based: the world is divided into cells, and each player only receives state updates for their own cell and the surrounding ones.

The concept is straightforward. The implementation is not. You need to track objects moving between cells, send enter and leave notifications to every affected player, handle players straddling multiple cells at once, and manage all the edge cases that arise when entities are in motion. As Newman wrote in the devblog, "there's a lot of moving parts."
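
As a rough illustration of the grid bookkeeping (names and cell size are hypothetical, not Rust's actual netcode), each player subscribes to their own cell plus its eight neighbours, and crossing a boundary produces "entered" and "left" sets that drive spawn and despawn messages:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of grid-based PVS bookkeeping: a player sees a 3x3
// block of cells around them; diffing old vs new visibility yields the
// cells to start and stop replicating.
public static class GridVisibility
{
    public const float CellSize = 100f; // world units per cell; illustrative value

    public static (int x, int z) CellOf(float worldX, float worldZ)
        => ((int)Math.Floor(worldX / CellSize), (int)Math.Floor(worldZ / CellSize));

    public static HashSet<(int x, int z)> VisibleCells((int x, int z) cell)
    {
        var cells = new HashSet<(int x, int z)>();
        for (int dx = -1; dx <= 1; dx++)
            for (int dz = -1; dz <= 1; dz++)
                cells.Add((cell.x + dx, cell.z + dz));
        return cells;
    }

    // Cells the player must now be told about (entered) and can forget (left).
    public static (HashSet<(int, int)> entered, HashSet<(int, int)> left) Diff(
        HashSet<(int x, int z)> oldCells, HashSet<(int x, int z)> newCells)
    {
        var entered = new HashSet<(int, int)>(newCells);
        entered.ExceptWith(oldCells);
        var left = new HashSet<(int, int)>(oldCells);
        left.ExceptWith(newCells);
        return (entered, left);
    }
}
```

In a full system, the entered set would drive "create entity" messages for everything in those cells and the left set would drive removals, which is where the enter/leave bookkeeping Newman describes comes in.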

The payoff is significant. Without visibility culling, a server broadcasting full world state to every connected player hits bandwidth and CPU ceilings fast. PVS keeps those costs proportional to local density rather than total server population. That is what makes large-world survival games viable at scale, and it is a technique applicable well beyond Rust's specific genre. Any multiplayer game with a large explorable world should have some form of state relevance filtering before worrying about more exotic optimizations.

This article is based on and cites the original blog posts by Garry Newman, published on garry.net (link1 and link2), and Facepunch's official Rust devblog.

All rights in the original content are owned by their respective owners.

Written by the Edgegap Team
