
Destiny's Network Architecture - Multiplayer Game Deep Dive

Minimize Server Simulation: Bungie reduced cloud instance size to ~45 MB by hosting only the state that session logic strictly requires, enabling nearly 5,000 instances per server.
Declare Before You Deploy: A discipline of declaring upfront what state a script actually needs (and discarding everything else) is a transferable optimization principle for any multiplayer architecture.
Tick Rate is a Cost Dial: Running server-side session logic at 10 Hz instead of matching client simulation rates directly cut CPU and bandwidth spend without compromising the player experience.
Reconciliation Must Be Designed In: Building state consistency into the architecture from day one avoided an entire class of progression-breaking bugs that are expensive to hunt down post-launch.
Audit What Runs Where: When mission-critical logic silently migrates to the client without server oversight, it creates both exploit vectors and stability risks, making architectural audits part of ongoing optimization, not a one-time exercise.
Justin Truman, now Studio Head at Bungie, gave this talk at GDC 2015 as one of the engineering leads on Destiny, walking through the networked mission architecture the team built to support a seamless, always-online shared world at launch scale.
Indirectly, this highlights best practices that any game studio, big or small, can apply to reduce cloud infrastructure costs without sacrificing the player experience.
The Core Question: What Does the Server Actually Need to Simulate?
The most transferable insight from Destiny's architecture isn't about peer-to-peer networking or world design. It's a simpler, harder question that most teams don't ask early enough: what does your server actually need to simulate, and what can live somewhere else?
Bungie's answer was the activity host, a stripped-down cloud process that ran only the mission-critical state that activity scripts depended on. Things like whether a squad had spawned, how many enemies remained alive, whether an objective had been triggered. Not bullet trajectories. Not animation state. Not world-space positions. Those lived elsewhere.
The result was an activity host executable of around 45 MB, achieved by taking the full Destiny binary and removing everything that wasn't needed for that specific logic layer. At 10 Hz, a single 40-core server could run close to 5,000 of those instances simultaneously. As Truman explained, that translated to roughly 1 million concurrent users supported by just a few hundred machines, with safety headroom to spare. A full-simulation dedicated server approach, by contrast, would have required something closer to half a million headless processes running the entire game.
The math changes fast when you stop simulating things you don't need to simulate.
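The capacity math above can be sketched in a few lines. The ~5,000 instances per server and the full-simulation comparison come from the talk; the players-per-instance value below is a simplifying assumption for illustration, not a figure Bungie gave.

```python
import math

def servers_needed(ccu, players_per_instance, instances_per_server):
    """Estimate physical servers required for a given concurrent player count."""
    instances = math.ceil(ccu / players_per_instance)
    return math.ceil(instances / instances_per_server)

# Figures from the talk: ~5,000 stripped-down activity hosts per 40-core server.
# players_per_instance = 1 is an assumption (worst case: every player solo).
print(servers_needed(1_000_000, 1, 5_000))  # 200

# A hypothetical full-simulation approach fitting only a handful of heavyweight
# headless processes per machine needs orders of magnitude more hardware.
print(servers_needed(1_000_000, 1, 10))  # 100000
```

The exact per-server density will vary with hardware, but the shape of the result holds: shrinking per-instance state is multiplicative across the whole fleet.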
Sensors: A Pattern for Declaring What You Need
The mechanism that made this possible was what Bungie called sensors. Before an activity script could reference a piece of game state, it had to declare that dependency explicitly. A script tracking a squad of enemies didn't store individual health bars or world positions. It tracked discrete facts: how many are alive, have they spawned, are they using this firing area. That's it.
This declaration-first approach had a direct impact on infrastructure cost. Because the server only persisted what scripts formally declared they needed, the state surface stayed small and the per-instance footprint stayed low. Sensor state updates were sent atomically across all sensors simultaneously, keeping bandwidth predictable and partial-state inconsistencies off the table.
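A minimal sketch of the sensor idea, with all names and data shapes invented for illustration: a script declares the discrete facts it depends on, and the host rejects anything undeclared rather than silently tracking it.

```python
class SquadSensor:
    """Tracks only declared discrete facts about a squad, never full entity state."""
    DECLARED_FACTS = ("alive_count", "has_spawned", "in_firing_area")

    def __init__(self):
        self.facts = {"alive_count": 0, "has_spawned": False, "in_firing_area": False}

    def apply_update(self, update):
        # Updates are applied atomically: either every field is accepted or none.
        undeclared = set(update) - set(self.DECLARED_FACTS)
        if undeclared:
            raise ValueError(f"script did not declare: {sorted(undeclared)}")
        self.facts.update(update)

sensor = SquadSensor()
sensor.apply_update({"has_spawned": True, "alive_count": 6})
print(sensor.facts["alive_count"])  # 6

# Health bars, positions, and animation state never reach the activity host:
# sensor.apply_update({"world_position": (1, 2, 3)})  # raises ValueError
```

The point of the rejection path is cultural as much as technical: adding state to the server requires an explicit declaration, so server bloat can't accumulate by default.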
The pattern is applicable well beyond Destiny's specific architecture. Before adding state to your authoritative server, ask: does this script or session logic actually need this? If the answer is "not really, but it's easier to just put it there," that's where server bloat quietly starts. For teams looking to reduce vCPU and egress costs, a systematic audit of what state lives on the server and why is often the highest-leverage starting point. Edgegap's server profiling guide for Unreal Engine is a practical place to start that audit.
Tick Rate as a Cost Lever
Bungie ran their activity hosts at 10 Hz. Not because 10 Hz was ideal for every layer of the simulation, but because the logic running on those hosts didn't require anything faster. Mission script state (squad counts, objective flags, event triggers) doesn't need to update fifty times a second. Running it at a lower frequency was a deliberate cost decision, not a compromise.
This is a point worth internalizing. Tick rate isn't a single dial you set for your entire game server. It's a per-layer decision, and the right answer depends entirely on what that layer is doing. Combat and physics simulation may need high-frequency updates to feel responsive. Session logic and script state often don't. Conflating the two and running everything at your highest tick rate is a reliable way to overpay for server capacity.
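The per-layer split can be sketched with a standard fixed-timestep pattern: one loop stepping at the highest frequency, with the low-frequency layer firing on a divisor of those ticks. The rates below are illustrative, not Destiny's actual values beyond the 10 Hz session figure.

```python
COMBAT_HZ = 60    # high-frequency layer: combat, physics
SESSION_HZ = 10   # low-frequency layer: mission scripts, objective flags
TICKS_PER_SESSION_STEP = COMBAT_HZ // SESSION_HZ  # session steps every 6th tick

combat_ticks = 0
session_ticks = 0

# Simulate one second of server time.
for frame in range(1, COMBAT_HZ + 1):
    combat_ticks += 1  # combat state advances every frame
    if frame % TICKS_PER_SESSION_STEP == 0:
        session_ticks += 1  # session logic advances six times less often

print(combat_ticks, session_ticks)  # 60 10
```

Every session tick skipped is CPU and bandwidth not spent, multiplied across thousands of instances per machine.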
For a deeper look at how tick rate choices affect both gameplay precision and infrastructure cost, Edgegap's breakdown of game server tick rate covers the tradeoffs in detail.
Reconciliation as Architecture, Not Afterthought
One of the more expensive lessons from Halo Reach, Destiny's predecessor, was what happens when you don't design for state consistency from the start. Host migrations in Reach caused black screens, duplicate flags, and broken progression: bugs that were hard to reproduce, timing-sensitive, and expensive to fix close to ship.
Bungie's response with Destiny was to treat reconciliation as a core architectural concern rather than a problem to be patched per-script. Because the activity host was always the authority over declared state, any new physics host receiving a reconcile call could be brought into a consistent simulation state automatically. The reconciliation logic reused the same code paths as normal state updates. It wasn't a special case. It was just the system working as designed.
The upfront engineering investment was real. Writing sensors cost more than exposing a simple function to script. But that cost was paid once per sensor type, not once per script that touched it. Over a live game with hundreds of missions, that trade-off compounds significantly in favor of the architectural approach.
Building reconciliation in from the start also means you're not discovering an entire class of progression-blocking bugs in QA, or worse, in production.
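A hedged sketch of the idea that reconciliation is "just the system working as designed": the authority pushes a full snapshot of declared state through the same update path that ordinary per-tick deltas use. All class and function names here are illustrative, not Bungie's.

```python
class ActivityHost:
    """Authoritative store for declared mission state."""
    def __init__(self):
        self.state = {"objective_triggered": False, "enemies_alive": 12}

    def apply_update(self, update):
        self.state.update(update)  # normal per-tick delta path

    def snapshot(self):
        return dict(self.state)

class PhysicsHost:
    """A peer that can become host mid-mission and must be brought up to date."""
    def __init__(self):
        self.state = {}

    def apply_update(self, update):
        self.state.update(update)  # same update path as routine deltas

def reconcile(authority, newcomer):
    # A reconcile is a full snapshot sent through the ordinary update path:
    # no special-case migration code to write, test, or debug separately.
    newcomer.apply_update(authority.snapshot())

host = ActivityHost()
host.apply_update({"enemies_alive": 7})

migrated = PhysicsHost()
reconcile(host, migrated)
print(migrated.state == host.state)  # True
```

Because the migration path exercises the same code as every normal tick, it gets tested implicitly all the time, instead of only during rare host-migration events.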
The Cost of Untracked Client Logic
The clearest example of what goes wrong when architecture isn't enforced consistently was the Crota raid exploit. Players discovered they could trigger a favorable game state by pulling their network cable at a specific moment during the raid boss encounter. The reason it worked: a mission-critical portion of the boss logic had been built entirely using client-side systems, bypassing the activity host's authoritative state entirely.
As Truman put it, "I view this case as an engineering failure for not paying close enough attention to and auditing the more complex raid scripts."
This isn't just a security concern. It's an optimization concern. Logic that drifts onto the client without being tracked server-side doesn't show up in your server profiling. It doesn't contribute to the instance footprint you're paying for. But it also doesn't give you the consistency guarantees you think you have. The result is a gap between your intended architecture and the actual one.
Regular audits of what runs where, cross-referencing script logic against server-tracked state, are part of keeping an architecture honest. The more complex the game, the more easily that gap opens without anyone noticing until something breaks in a live match.
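Such an audit can be as simple as diffing the state a script references against what the server formally tracks. The sketch below is hypothetical: the data shapes and the example state keys are invented for illustration, not drawn from Bungie's tooling.

```python
# State the activity host formally declares and tracks (illustrative).
SERVER_DECLARED = {"boss_phase", "enemies_alive", "objective_triggered"}

def audit_script(script_name, referenced_state):
    """Return the state keys a script uses that the server never sees."""
    untracked = sorted(set(referenced_state) - SERVER_DECLARED)
    if untracked:
        print(f"{script_name}: client-only state {untracked} bypasses the host")
    return untracked

# A raid script that quietly built critical logic on client-side state
# (key names are hypothetical):
print(audit_script("raid_encounter", ["boss_phase", "client_buff_state"]))
# A script whose dependencies are all server-authoritative passes cleanly:
print(audit_script("strike_objective", ["enemies_alive"]))  # []
```

Run as part of CI or a periodic review, a check like this turns "what runs where" from tribal knowledge into something the build can flag.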
This article is based on and cites the original GDC talk by Justin Truman, presented at GDC 2015 and published on the GDC YouTube channel. All rights in the original content are owned by their respective owners.
Written by
the Edgegap Team









