Multiplayer games deserve more than AWS GameLift containers as their orchestration service
Starting this November, Amazon Web Services (AWS) GameLift supports fully managed containers (a full five years after we complained it lacked them). This finally lets developers package their game servers as containers, such as Docker containers, and deploy them on GameLift's orchestration to host and scale multiplayer game servers.
Let’s break that down and, more importantly, ask whether adding containers is enough to genuinely improve multiplayer game orchestration and the online play experience for players.
What Are AWS GameLift Containers for Multiplayer Games?
By using Amazon Elastic Kubernetes Service (EKS) or Elastic Container Service (ECS) integrated with GameLift’s orchestration, game developers can manage fluctuating player loads and streamline deployment.
This setup allows developers to customize game hosting environments, automate scaling, and handle complex networking demands essential for multiplayer games.
GameLift also enhances player matchmaking and session management, providing tools to optimize player experiences. With this container-based service architecture, developers gain operational flexibility, letting them focus more on gameplay and less on infrastructure.
What AWS Doesn’t Say About ECS Containers on GameLift for Multiplayer Games
AWS GameLift supports containers, but it still lacks transparency in areas critical to optimizing game server orchestration – the areas that help studios reduce hosting costs and maximize the online play experience – namely allocation, scaling, regional distribution, matchmaking, and ease of integration and use.
Allocation
First, AWS is unclear about how allocation works in GameLift.
Allocation is how much space game servers take up on the virtual machine (VM). Beyond being a painful integration process for game studios, this opacity makes it challenging to optimize the “fill” of your game servers on virtual machines and maximize usage per vCPU.
This lack of transparency means studios pay for unused capacity, plus the additional DevOps cost of managing backfilling.
Second, you still have a minimum of 30% overhead, since GameLift must prewarm virtual machines so the fleet can start the containers. While you can set a lower value in GameLift, doing so results in long (read: player-experience-breaking) periods during scale-up when there won’t be enough capacity.
Watch out for GameLift’s calculator: it defaults to a 10% overhead, which we’ve never seen achieved in the field, along with spot instances, which require your game binary (and game mode) to tolerate being shut off at a few minutes’ notice.
Concretely, this means that if you aren’t using a large enough server, players get a poor experience such as latency and dropped frames when the server is over capacity. At that point, you may as well use peer-to-peer (P2P) networking instead.
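As a back-of-envelope illustration of what prewarm overhead does to your bill (the rates below are hypothetical numbers of our own, not AWS pricing):

```python
# Back-of-envelope illustration with hypothetical numbers (not AWS pricing):
# with prewarm overhead, only part of the fleet's vCPUs run game servers,
# so the effective cost per *usable* vCPU rises accordingly.

def effective_cost_per_vcpu(hourly_cost, vcpus, overhead):
    """Cost per vCPU actually serving players, given idle prewarm overhead."""
    usable_vcpus = vcpus * (1 - overhead)
    return hourly_cost / usable_vcpus

# Example fleet: a 2-vCPU VM at a hypothetical $0.085/hour.
nominal = effective_cost_per_vcpu(0.085, 2, 0.0)     # no overhead
realistic = effective_cost_per_vcpu(0.085, 2, 0.30)  # 30% prewarm overhead

print(f"nominal:   ${nominal:.4f}/vCPU-hour")
print(f"realistic: ${realistic:.4f}/vCPU-hour ({realistic / nominal - 1:.0%} more)")
```

At a 30% overhead, every usable vCPU effectively costs about 43% more than the sticker price suggests.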
At Edgegap, we pride ourselves on giving game developers the ability to optimize their game servers and fraction their vCPU usage to lower their overall costs.
Scaling
AWS doesn’t clarify how the scaling of its VMs works. GameLift’s main value proposition is the ability to scale VMs on demand, but its documentation doesn’t mention how this works for containers at all.
If they are pre-launching containers, we’re back to the allocation question: how do they know when to scale, since the containers are already taking up space on the machine?
If they are deploying containers on the fly, how fast is it (vertical scaling of game servers), and how does it work across the selected regions (horizontal scaling)?
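Whatever AWS does internally, container fleet scalers typically work off free-slot headroom. Here is a minimal sketch of such a decision; all names and thresholds are our own illustration, not GameLift’s actual behavior or API:

```python
# Minimal sketch of a headroom-based scaling decision for a container fleet.
# Names and thresholds are illustrative, not GameLift's actual behavior.

def scaling_decision(total_slots, used_slots, slots_per_vm, target_headroom=0.15):
    """Return how many VMs to add (positive) or remove (negative)."""
    free = total_slots - used_slots
    desired_free = int(total_slots * target_headroom)
    if free < desired_free:
        # Not enough headroom: add VMs to restore the buffer.
        deficit = desired_free - free
        return -(-deficit // slots_per_vm)  # ceiling division
    if free - slots_per_vm >= desired_free:
        # A whole VM's worth of slack above the buffer: scale in.
        return -((free - desired_free) // slots_per_vm)
    return 0

print(scaling_decision(100, 90, 8))  # below 15% headroom -> add a VM
print(scaling_decision(100, 50, 8))  # large surplus -> remove VMs
```

The tension the sketch exposes is exactly the question above: the bigger the headroom buffer (to survive player spikes), the more idle capacity you pay for.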
Edgegap provides on-the-fly container deployment that guarantees up to 40 game server deployments, sustained for 60 minutes. AWS is one of Edgegap’s providers, but Edgegap also uses 16 providers (as of writing) to scale your game vertically in the locations it wants.
Additionally, Edgegap’s “cold start” for a server is, on average, 3 seconds – which means more play and less waiting for game servers to deploy. A massive quality-of-life upgrade for any online game versus the 10+ seconds reported elsewhere.
Unlike AWS, Edgegap deploys your game server on demand to any of its 615+ locations worldwide. While it’s great to scale in AWS-US-East for players on the east coast of the United States, the reality is that your game hosting needs to scale up and down across the world to deliver a great online play experience to players across the globe.
Distribution in Regions
With public cloud, including AWS, you pay for the virtual machines regardless of whether game servers are running on them, and you pay for each region where you want a presence.
Want 10 regions – the minimum for a “AA” or “triple-I” indie game to keep latency low enough for players to enjoy the game with minimal complaints? You need to pay for 10 c5.large VMs, at a minimum.
This traditional approach to hosting means that even though you are a co-tenant on servers you fully pay for, your costs can easily be 5-8x higher.
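To make that multiplier concrete with hypothetical rates (actual prices vary by vendor and region), compare ten always-on regional VMs against paying only for the hours actually played:

```python
# Hypothetical comparison (rates are illustrative, not vendor pricing):
# 10 regions of always-on VMs vs. pay-per-use billed only while sessions run.

HOURS_PER_MONTH = 730

vm_hourly_rate = 0.085      # hypothetical always-on VM rate per region
regions = 10
always_on_cost = regions * vm_hourly_rate * HOURS_PER_MONTH

per_session_hourly = 0.10   # hypothetical pay-per-use rate
played_hours = 1200         # actual game-server hours across all regions
pay_per_use_cost = per_session_hourly * played_hours

print(f"always-on:   ${always_on_cost:,.2f}/month")
print(f"pay-per-use: ${pay_per_use_cost:,.2f}/month")
print(f"ratio:       {always_on_cost / pay_per_use_cost:.1f}x")
```

With these illustrative numbers the always-on fleet comes out roughly 5x more expensive, and the gap widens with every additional region you keep warm.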
New orchestration services for game server hosting, like Edgegap, use pay-per-use instead, which means you only pay when your players play (i.e., during a deployment), worldwide, at the same price every time. A true game changer in terms of pricing.
Additionally, as Edgegap taps into the world’s largest edge network, it delivers a 58% latency reduction on average versus traditional public cloud orchestration.
Use with Matchmaking / Lobbies
Another challenge is that your matchmaker or lobby needs to pick which location is best. You will have to create regions in your matchmaker, and the lower you try to push latency (by adding regions in AWS), the longer your matchmaking queue times will be, since you won’t be able to make cross-region matches.
Edgegap’s matchmaker is the only matchmaker with latency-based parameters by default, which helps you deploy game servers with the lowest latency for your players.
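In practice, latency-based placement boils down to picking the deployment location that minimizes the worst latency any player in the match would see. A hypothetical sketch (the player data, location names, and function are our own illustration, not any vendor’s API):

```python
# Hypothetical sketch of latency-based server placement: given each player's
# measured latency to each candidate location, pick the location that
# minimizes the worst-case latency for the match. Names are illustrative.

def best_location(player_latencies_ms):
    """player_latencies_ms: {player_id: {location: latency_ms}}."""
    locations = set.intersection(
        *(set(lat.keys()) for lat in player_latencies_ms.values())
    )
    # Minimize the worst latency any player in the match would experience.
    return min(locations,
               key=lambda loc: max(lat[loc] for lat in player_latencies_ms.values()))

match = {
    "alice": {"montreal": 20, "frankfurt": 110, "tokyo": 190},
    "bob":   {"montreal": 95, "frankfurt": 35,  "tokyo": 210},
}
print(best_location(match))  # montreal: worst case 95 ms vs 110 ms in frankfurt
```

Note that this only works if every candidate location can actually be deployed to on demand; with fixed per-region fleets, the matchmaker is limited to regions you have pre-paid for.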
Ease of Integration and Use
AWS’s definition of simplicity is a blog post that takes 10 pages to explain how it works.
The unfortunate reality is that it requires you to integrate and configure GameLift, which is itself a challenge, plus a whole ECS stack to orchestrate the containers.
Once the integration work is complete, that’s just the start: you will need at least one engineer or DevOps person to watch it all and manage the ups and downs of your traffic.
Additionally, it is unclear whether you can customize individual containers, e.g., by injecting environment variables.
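When per-container customization is available, it typically arrives as environment variables that the game server reads at boot. A minimal sketch of the server-side half, with variable names of our own invention:

```python
# Minimal sketch of a game server reading per-deployment configuration from
# environment variables at boot. Variable names are illustrative.
import os

def load_server_config():
    """Build the session config from injected environment variables."""
    return {
        "map": os.environ.get("GAME_MAP", "default_map"),
        "mode": os.environ.get("GAME_MODE", "deathmatch"),
        "max_players": int(os.environ.get("MAX_PLAYERS", "16")),
    }

config = load_server_config()
print(config)
```

This is the mechanism that makes “just in time” servers custom per deployment: the orchestrator injects which map, mode, or UGC payload this particular instance should load.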
The upside of “just in time” game servers, which load data to be fully custom, like user-generated content (UGC), is that they sidestep this issue completely. For example, Six Days in Fallujah loads its entirely procedurally generated map during game server deployment, and HIBERWORLD serves hundreds of thousands of UGC-developed game types and maps.
Are Containers Enough for AWS GameLift?
For games currently using GameLift that want to move to containers for their ease of use, maybe that’s enough for your situation.
Unfortunately, if you want a more flexible solution that lets you leverage hundreds of locations in a just-in-time, pay-as-you-go manner (and not have to manage any of it), you should probably look at the new generation of game server orchestration services – including Edgegap.
Written by
the Edgegap Team