Does using Kubernetes Agones make economic sense?

Jun 13, 2023


You probably want to think twice if you're looking at Kubernetes Agones to manage your multiplayer game servers. Here's why. Agones was born out of the need to manage game servers with Kubernetes. It is the set of CRDs (Custom Resource Definitions) we know today, which can be applied to an existing Kubernetes cluster, giving you new, game-specific methods and capabilities. The main feature is the ability to "allocate" a running pod, reserving it for a match, which you can't do with standard Kubernetes.
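For context, allocation in Agones is expressed through its own resource type. A minimal, illustrative request against a hypothetical fleet named `my-game-fleet` looks roughly like this (field names per recent Agones releases; check the version you install):

```yaml
# Ask Agones to pick a Ready game server from the fleet and mark it
# Allocated, so scaling down won't reap it mid-match.
apiVersion: allocation.agones.dev/v1
kind: GameServerAllocation
spec:
  selectors:
    - matchLabels:
        agones.dev/fleet: my-game-fleet
```

Your matchmaker would typically submit this through the Kubernetes API or Agones' allocator service each time a match needs a server.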

To get Agones, you will first have to get Kubernetes running. If you've never done that, it's no minor feat. Multiple components are needed, and many network requirements come with them. Getting a standalone version running on your laptop with kubeadm is probably a few command lines away, but installing an actual production-grade environment with auto-scaling and resiliency is a whole different story. Setting aside the Ops side (which you have to handle yourself with a "free" open-source solution like this), installing Kubernetes the right way and getting the most out of it requires months of training, plus multiple engineers to operate it once it's live. You will also have to pay for hosting hardware and network traffic. Are you really saving with a free, open-source solution once you add the engineering resources?

Now let's look at the Agones CRDs. Once you have Kubernetes running, you install Agones, which enables those new possibilities. It adds another layer of complexity, with API calls that may not be supported by the external tools you use to manage Kubernetes. You will also need to set up an ingress proxy, which is not trivial: most are still in beta and require more workforce to understand, install, and manage. The main problem with Agones, and Kubernetes in general, is that it forces you as a game studio to decide in advance where your game servers will run (because of Kubernetes' highly centralized design), and you will have to keep standby instances running, waiting for players to join. What looked like a cheap solution quickly becomes $15-$20k per month even if you don't have a single player. This is because you need 3-4 engineers to manage the platform, at least three servers for the control plane plus one worker node per location, and supporting QA teams and players worldwide requires at least six sites, if not more. We are looking at 6 locations x 4 servers, plus the engineers' salaries.
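To make that back-of-the-envelope math concrete, here is a small sketch. Every dollar figure and ratio below is an illustrative assumption, not a quote:

```python
# Rough monthly cost for a self-managed Agones footprint.
# All figures are illustrative assumptions, not real pricing.

LOCATIONS = 6              # sites to cover QA teams and players worldwide
SERVERS_PER_LOCATION = 4   # 3-node control plane + 1 worker node
SERVER_COST = 250          # assumed monthly cost per server (hardware + traffic)
ENGINEERS = 4              # assumed headcount keeping the platform alive
ENGINEER_COST = 12_000     # assumed fully loaded monthly cost per engineer
PLATFORM_SHARE = 0.25      # assumed share of each engineer's time on the platform

def monthly_cost() -> float:
    infra = LOCATIONS * SERVERS_PER_LOCATION * SERVER_COST   # 6_000
    people = ENGINEERS * ENGINEER_COST * PLATFORM_SHARE      # 12_000
    return infra + people

print(monthly_cost())  # 18000.0 -- before a single player connects
```

Tweak the assumptions to match your own quotes and salaries; the point is that the fixed floor exists whether or not anyone is playing.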

You will only find this out once you and your team have spent six months and a lot of money investigating that path. There are alternatives.

A managed container-as-a-service solution is probably the best path forward. For example, AWS Fargate, ECS, or Google Cloud Run will let you connect your CI/CD pipeline to automatically push a new game server image and start game servers on demand. You could build a simple orchestration layer to start and stop instances and manage allocations from your matchmaker. Another alternative is to look at services similar to Fargate but tailored for game servers, like Edgegap.
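The "simple orchestration" mentioned above can be sketched as a small in-memory allocator. The class and method names here are illustrative; a real version would replace the stubbed start/release calls with your container platform's API (e.g. ECS `RunTask`/`StopTask`):

```python
import uuid
from dataclasses import dataclass

@dataclass
class GameServer:
    server_id: str
    region: str
    allocated: bool = False

class Orchestrator:
    """Hands out game servers to the matchmaker, scaling up on demand."""

    def __init__(self) -> None:
        self.servers: dict[str, GameServer] = {}

    def start_server(self, region: str) -> GameServer:
        # Real version: call the container platform (Fargate, Cloud Run, ...).
        server = GameServer(server_id=str(uuid.uuid4()), region=region)
        self.servers[server.server_id] = server
        return server

    def allocate(self, region: str) -> GameServer:
        # Reuse an idle server in the region, or start a new one.
        for server in self.servers.values():
            if server.region == region and not server.allocated:
                server.allocated = True
                return server
        server = self.start_server(region)
        server.allocated = True
        return server

    def release(self, server_id: str) -> None:
        # Real version: stop the task so you stop paying for it.
        self.servers.pop(server_id, None)
```

The matchmaker calls `allocate(region)` when a match forms and `release(server_id)` when it ends; nothing runs, and nothing costs money, until players actually show up.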

Edgegap removes the need for your matchmaker to manage allocation, and you no longer need your own orchestration layer, since the orchestrator is integrated into the hosting platform.