AWS GameLift Now Supports Containers. Is That Enough for Your Game?

As of November 2024, Amazon Web Services (AWS) GameLift supports fully managed containers, a capability many in the industry, including Edgegap, had flagged as a gap in the platform for years. Developers can now package their game servers as containers, such as Docker images, and deploy them on GameLift's orchestration to host and scale multiplayer game servers.

Let's break that down, and more importantly, ask if adding containers is enough to genuinely improve multiplayer game orchestration and online play experience for players.

What Are AWS GameLift Containers for Multiplayer Games?

By using Amazon Elastic Container Service (ECS) integrated with GameLift's orchestration, game developers can manage fluctuating player loads and streamline deployment.

This setup allows developers to customize game hosting environments, automate scaling, and handle complex networking demands essential for multiplayer games.

GameLift also enhances player matchmaking and session management, providing tools to optimize player experiences. With this container-based service architecture, developers gain operational flexibility, focusing more on gameplay and less on infrastructure.

What to Look Closely at with ECS Containers on GameLift for Multiplayer Games

AWS GameLift supports containers, but for game studios focused on reducing hosting costs and maximizing player experience, several areas of the setup warrant closer examination, namely allocation, scaling, distribution in regions, matchmaking, and integration complexity.

Allocation

Allocation refers to how much of a virtual machine's (VM) resources your game servers occupy. While AWS documents vCPU and memory allocation per container group, the significant integration effort required from game studios means it can still be a challenge in practice to optimize the "fill" of your game servers on each VM and maximize your usage per vCPU.

This complexity means studios can end up paying for unused capacity, along with the added DevOps cost of managing backfilling.
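To make the "fill" problem concrete, here is a minimal sketch of how container requests pack onto a VM. All vCPU and memory figures below are illustrative assumptions, not AWS quotas or recommendations:

```python
# Sketch: estimating game-server "fill" on a VM.
# All resource figures are assumed for illustration only.

VM_VCPUS = 4             # e.g. a 4-vCPU instance
VM_MEMORY_MIB = 8192     # 8 GiB of memory

SERVER_VCPU = 0.5        # vCPU requested per game server container
SERVER_MEMORY_MIB = 1200 # memory requested per game server container

def servers_per_vm(vm_vcpus, vm_mem, srv_vcpu, srv_mem):
    """How many game server containers fit on one VM, and which
    resource (CPU or memory) is the binding constraint."""
    by_cpu = int(vm_vcpus // srv_vcpu)
    by_mem = int(vm_mem // srv_mem)
    fit = min(by_cpu, by_mem)
    constraint = "cpu" if by_cpu <= by_mem else "memory"
    return fit, constraint

fit, constraint = servers_per_vm(VM_VCPUS, VM_MEMORY_MIB,
                                 SERVER_VCPU, SERVER_MEMORY_MIB)
cpu_fill = fit * SERVER_VCPU / VM_VCPUS
print(f"{fit} servers per VM, bound by {constraint}, CPU fill: {cpu_fill:.0%}")
```

With these numbers, memory is the binding constraint and a quarter of the vCPUs go unused: exactly the kind of stranded capacity you still pay for.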

GameLift's architecture also requires pre-warming virtual machines so that containers are ready to launch, meaning a portion of fleet capacity must be held in reserve as a buffer at all times. Setting this buffer too low risks extended scale-up delays that directly affect player experience. Worth noting: GameLift's default buffer is set to 10%, which is typically lower than what studios need in production, especially for games with variable or spiky player demand. Most studios will need to increase this value, which raises the effective cost of running their fleet. You can use Edgegap's calculator to compare how buffer capacity choices affect overall hosting costs under different traffic scenarios.
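As a rough sketch of how buffer choices translate into cost, with an assumed per-VM hourly price (not a quoted AWS rate):

```python
# Sketch: how pre-warmed buffer capacity inflates effective fleet cost.
# The hourly VM price below is an assumption for illustration only.

VM_HOURLY_USD = 0.10   # assumed price per VM-hour

def effective_hourly_cost(active_vms, buffer_fraction, vm_price=VM_HOURLY_USD):
    """Fleet cost including idle pre-warmed capacity.
    buffer_fraction is the share of extra VMs held in reserve
    (e.g. 0.10 for the 10% default)."""
    total_vms = active_vms * (1 + buffer_fraction)
    return total_vms * vm_price

for buf in (0.10, 0.25, 0.50):
    cost = effective_hourly_cost(active_vms=100, buffer_fraction=buf)
    print(f"buffer {buf:.0%}: ${cost:.2f}/hour")
```

The pattern is linear but unavoidable: every point of buffer you add to protect scale-up times is capacity you pay for whether or not players ever touch it.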

If your servers end up over capacity due to misconfigured buffers, players will experience latency spikes and dropped frames, undermining the core value of dedicated server orchestration.

At Edgegap, we pride ourselves on enabling game developers to optimize their game servers and fractionalize their vCPU usage to lower their overall costs.

Scaling

GameLift's documentation does cover scaling for container fleets, including target-based auto-scaling via the console or SDK. However, several practical questions remain underdocumented for studios looking to optimize container fleet scaling behavior, particularly around how scaling interacts with container pre-warming at the instance level.

If containers are pre-launched on instances, how does GameLift determine when to scale since containers are already consuming instance resources? If containers are deployed on the fly, how fast is that in practice (vertical scaling), and how does it behave across multiple regions simultaneously (horizontal scaling)?
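To illustrate what target-based scaling generally looks like, here is a generic target-tracking sketch. It is not the AWS implementation; note that the metric has to be session occupancy rather than raw CPU, precisely because pre-launched containers already consume instance resources:

```python
# Generic target-tracking sketch for a game server fleet.
# Not the AWS algorithm; a simplified illustration of the idea.
import math

def desired_instances(current, used_sessions, sessions_per_instance,
                      target_utilization):
    """Scale the fleet so session utilization tracks a target.
    Uses session occupancy, not CPU, since pre-warmed containers
    consume CPU even while idle."""
    total_capacity = current * sessions_per_instance
    utilization = used_sessions / total_capacity
    # Proportional rule: desired = current * actual / target.
    return max(1, math.ceil(current * utilization / target_utilization))

# 10 instances, 8 sessions each, 72 sessions in use, 80% target:
print(desired_instances(10, 72, 8, 0.8))  # 12
```

Even with a clean rule like this, the open questions above remain: how quickly new capacity actually comes online, and how the rule behaves when applied across many regions at once.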

Edgegap provides on-the-fly container deployment that guarantees up to 40 game server deployments, sustained for 60 minutes. AWS is one of Edgegap's providers, but Edgegap also draws on 16 providers (as of writing) to scale your game vertically in the locations it needs.

Additionally, Edgegap's average "cold start" time for a game server is 3 seconds, which means more play and less waiting for game servers to deploy. This is a meaningful improvement in player wait times compared to the cold-start durations commonly reported by developers on other platforms.

Unlike AWS, Edgegap deploys your game server to all of its 615+ locations worldwide on-demand. While it's great to scale in AWS-US-East for players on the east coast of the United States, the reality is your game hosting needs to scale up and down across the world to deliver a great online play experience to players globally.

Distribution in Regions

Public cloud, including AWS, charges for virtual machines whether or not game servers are running on them, and you pay separately for each region where you want a presence.

Want 10 regions, the minimum for an "AA" or "triple-I" indie game to keep latency low enough for players to enjoy the game with minimal complaints? You need to pay for 10 c5.large VMs, at a minimum.

With this traditional approach to hosting, even when your game servers only partially fill a VM, you pay for the whole machine, and that cost structure can multiply your hosting expenses significantly across regions. Edgegap's pricing page provides a direct comparison.

New orchestration services for game server hosting, like Edgegap, use pay-per-use instead. This means you only pay when your players play (i.e., during a deployment) worldwide at the same price every time. This represents a meaningful shift from the reserved-capacity pricing model that traditional public cloud orchestration requires.
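The difference between the two models comes down to simple arithmetic. In this sketch, every dollar figure and the usage profile are illustrative assumptions, not quoted AWS or Edgegap rates:

```python
# Sketch: always-on multi-region VMs vs pay-per-use.
# All prices and usage numbers below are assumptions for illustration.

REGIONS = 10
VM_HOURLY_USD = 0.085           # assumed c5.large-class on-demand price
HOURS_PER_MONTH = 730

SESSION_HOURS_PER_MONTH = 2000  # assumed game-server hours actually played
PER_SESSION_HOUR_USD = 0.05     # assumed pay-per-use rate

# Reserved: every region's VM runs all month, played or not.
reserved = REGIONS * VM_HOURLY_USD * HOURS_PER_MONTH

# Pay-per-use: billed only while sessions are deployed, any region.
pay_per_use = SESSION_HOURS_PER_MONTH * PER_SESSION_HOUR_USD

print(f"reserved (always-on, {REGIONS} regions): ${reserved:,.2f}/month")
print(f"pay-per-use: ${pay_per_use:,.2f}/month")
```

The crossover point depends entirely on how full your reserved VMs actually are; the emptier your off-peak hours, the more the always-on model costs per hour of real play.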

Additionally, as Edgegap taps into the world's largest edge network, it is able to deliver 58% latency reduction on average versus a traditional public cloud orchestration.

Use with Matchmaking / Lobbies

Another challenge is that your matchmaker or lobby needs to pick which location is best. You will have to define regions in your matchmaker, and the more regions you add in AWS to push latency down, the longer your matchmaking queue times become, since you won't be able to make cross-region matches.

Edgegap's matchmaker is one of the only matchmakers with native latency-based parameters by default (as of writing), which helps you deploy game servers with the lowest latency for your players.
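Here is a minimal sketch of what latency-based placement means in practice. The region names and ping values are hypothetical; this selects the location that minimizes the worst player's latency (a minimax rule), one common way to frame the problem:

```python
# Sketch: latency-based region selection for a match.
# Region names and ping values (ms) are hypothetical.

def best_region(player_pings):
    """Pick the deploy location that minimizes the worst player's
    latency (minimax), so no one in the match gets a bad connection."""
    regions = set.intersection(*(set(p) for p in player_pings))
    return min(regions, key=lambda r: max(p[r] for p in player_pings))

players = [
    {"montreal": 20, "frankfurt": 110, "tokyo": 190},
    {"montreal": 95, "frankfurt": 35, "tokyo": 240},
]
print(best_region(players))  # montreal: worst ping 95 vs frankfurt's 110
```

Doing this per match, rather than binning players into a fixed region list up front, is what lets a matchmaker trade off queue time against latency dynamically.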

Ease of Integration and Use

AWS's own onboarding documentation for this setup spans multiple guides covering ECS, ECR, CodeBuild, and CloudFormation configuration, which reflects the genuine complexity involved in standing up a production-ready container fleet.

Getting everything configured is itself a significant undertaking, and once the integration work is complete, it's only the start: you will need at least one engineer or DevOps professional to monitor and manage the ups and downs of your traffic.

"Just-in-time" game servers, which load custom data such as user-generated content (UGC) at deployment, sidestep this complexity entirely. For example, Six Days in Fallujah loads its entirely procedurally generated map during game server deployment, as does HIBERWORLD for its hundreds of thousands of UGC-developed game types and maps.

Are Containers Enough for AWS GameLift?

For games already using GameLift that want to move to containers for their ease of use, that may be sufficient for their situation.

If you want a more flexible solution that enables you to leverage hundreds of locations in a just-in-time, pay-as-you-go manner without having to manage any of it yourself, you should look at the new generation of game server orchestration services, including Edgegap.

Written by

the Edgegap Team

Get your Game Online Easily & in Minutes

Start Integrating Now!
