History of Containers: The Future of the Gaming Industry

Aug 27, 2020

A lot has been written about the history of containers over the years, and it is still surprising to me that many of the folks we talk to in the gaming industry have not yet started looking for ways to use them in their efforts for scalability and automation. However, it’s understandable: they focus on what matters to them, which is their games, the fun players have, and keeping the lights on.

In the meantime, however, app and website developers have been taking full advantage of this technology’s scalability and cost-saving potential. I’ll share my (very humble) perspective on the history of containers, and explain why this is all about to change for game studios in the next few months.

The History of Containers starts with Virtualization

The concept of virtualization is a great one: create a layer of abstraction to use physical resources in a better way. I first heard about it in 2003 when I was at Bell Mobility, looking at launching EVDO, from a rep at Sun Microsystems who had this “new tech” he wanted to show. That was VMware, a company we did not know at the time, which had just launched VirtualCenter & VMotion.

As geeks, we had a lot of interest in the technology, but the entire engineering team had significant doubts. Those doubts came from the fact that we were already pushing our hardware to its limit. CDMA traffic was skyrocketing, people were starting to use their phones for data, and we had to grow at a crazy pace. Carving up this precious resource to “split” servers into multiple VMs did not bring much value. On top of that, application vendors prevented us from moving to VMs for “support reasons.” It simply did not make much sense for us back then.

Challenges of Virtual Machines

We now know what happened to VMware and similar technologies. SDN/NFV, OpenStack and the like became the norm, allowing much more flexible application management through this abstraction layer. Hat tip to the engineers at Amazon who saw this wave coming. Virtual machines brought us ease of use and a plethora of tools that made life much easier for sysadmins, developers & DevOps teams. Servers became much more powerful, so suddenly the resources consumed by virtualization were a small price to pay for the benefits it could bring.

One problem persisted with this virtual layer: the operating system. For each virtual machine, we were forced to package the OS according to the application’s requirements. Even for two instances of the same application, you would have to store the operating system twice, run every OS component twice, and add yet another layer of unnecessary elements for the end user. Solving this problem is the next chapter in the history of containers.

The Solution

Three years after I first heard about VMware, some smart folks at Google started to work on something called cgroups (control groups). Their goal was to isolate and limit the resources used by groups of processes. This work, in conjunction with namespace isolation, created what we now call containers.

One of the key benefits of containers is that you no longer duplicate an entire OS for each instance of a given application. They let you use CPU & memory in an optimized way, so you stop paying for resources that do nothing to improve your service. Along with cost savings, they bring many other benefits: rapid deployment, ease of use for developers, faster testing (plus automation through pipelines), near-unlimited migration, and many more.
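To make this concrete, here is a minimal sketch (assuming the Docker SDK for Python and a local Docker daemon; the image and limits are illustrative, not recommendations) of capping a container’s CPU and memory, which Docker enforces through the same cgroups mechanism described above:

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Run a throwaway container pinned to half a CPU core and 256 MB of RAM.
# Docker translates these limits into cgroup settings under the hood,
# so this container cannot starve its neighbours on the same host.
container = client.containers.run(
    "alpine:3.12",          # illustrative image
    "sleep 30",
    detach=True,
    nano_cpus=500_000_000,  # 0.5 CPU, expressed in billionths of a CPU
    mem_limit="256m",       # hard memory cap
)

print(container.name, container.status)
```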

What is a Container?

Think of a container as a cooking recipe. You list everything your application needs, and the end service, the thing you actually care about, comes up running. “I need this kind of OS, this specific release, please include those packages, change X and Y, add my application, and run it like that.” Once your recipe is complete, you can build it into an image and start a running container from it.
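As a sketch of that recipe idea (the base image, tag, and command below are placeholders I picked for illustration), here is a small “recipe” written as a Dockerfile and built straight from Python with the Docker SDK:

```python
import io
import docker

# The “recipe”: pick an OS and release, add the packages you need,
# and say how to run the application. All names here are illustrative.
dockerfile = """
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends python3
CMD ["python3", "-c", "print('game server up')"]
"""

client = docker.from_env()

# Build an image from the recipe; every container started from this image
# is identical, which is exactly what makes containers so reproducible.
image, build_logs = client.images.build(
    fileobj=io.BytesIO(dockerfile.encode("utf-8")),
    tag="my-game-server:latest",  # hypothetical tag
)
print(image.tags)
```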

The nature of a container is to be stateless. This is key to understanding how to use them. The typical analogy between VMs and containers is “cats vs. cattle.” If a VM is a cat and your cat is sick, you bring it to the vet for a checkup (the same way you log in to your VM to fix it). If a container crashes, like a cow on a large farm, you do not try to save it; you replace it with a younger one.

The Stateless Nature of Containers

That’s where the stateless nature comes into play. Storing information from a container is supported by various mechanisms (mapping a stateful volume into the container, extracting/pushing the data outside, etc.). If you write anything in the container without those mechanisms, once the container is killed, you lose this information.
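For instance, here is a hedged sketch of the first mechanism, mapping a stateful volume into an otherwise stateless container (the volume name, path, and image are made up for illustration), again with the Docker SDK for Python:

```python
import docker

client = docker.from_env()

# Anything written under /data lands in the named volume "match-results",
# so it survives when this container is killed and replaced.
# Writes anywhere else vanish with the container.
client.containers.run(
    "alpine:3.12",
    "sh -c 'echo score:42 >> /data/results.log'",
    volumes={"match-results": {"bind": "/data", "mode": "rw"}},
    remove=True,  # the container itself stays disposable
)
```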

Containers are typically not “restarted”; they are shut down, and a new one is started when you want the service back. We’ve had customers who were writing log files inside their virtual machines and retrieving them daily for analytics. Converting them to a container-based solution required changing this process so that logs are pushed out in real time to prevent losing them. This was not a significant change, and it brought a few benefits on top of leveraging containers, but it shows that what looks like a walk in the park may still need some planning.
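A minimal sketch of that real-time log-shipping idea looks like this (the image is a stand-in for a game server, and the print is a stand-in for the push to your analytics pipeline):

```python
import docker

client = docker.from_env()

# A container that emits a log line every second, standing in for a game server.
container = client.containers.run(
    "alpine:3.12",
    "sh -c 'while true; do date; sleep 1; done'",
    detach=True,
)

# Stream stdout/stderr as it is produced instead of collecting files
# from inside the container after the fact; ship each chunk out here.
for chunk in container.logs(stream=True, follow=True):
    print(chunk.decode().rstrip())  # replace with a push to your log collector
```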

The History of Containers is Still Being Written

This new technology brought new capabilities around mobility and scalability, which created a whole new ecosystem. Kubernetes has been a hot topic for the last few years, and we now have a slew of solutions and alternatives built around it.

Note that I have not talked about Docker yet. That is because Docker is only one of many technologies for running containers. Here at Edgegap, we typically use Docker, as it is the most popular, but we’ve had to deal with others like LXC, containerd and rkt. Each has pros and cons, and we’ve seen that some markets “prefer” one over another. At Edgegap, we’ve been using Docker for game workloads with a lot of success, and the customers who were already using containers were mainly leveraging Docker as well.

The Debate About Userspace vs Kernel Space

Not everything around containers is rosy; we’ve heard a few objections over the last two years. One concern we hear a lot involves the core of the technology: containers run in user space rather than kernel space. Kernel space is where your operating system’s heart runs and where resources like memory are managed. User space is where applications typically run. There is a perceived risk in sharing a kernel this way, especially when you have two applications from different customers in a shared environment.

This can be true if the environment is not configured correctly. Whether it is a poorly allocated shared resource (quotas not enforced), containers running with more privileges than needed (root, anyone?), sloppy image management, or the virtual network, there is a series of best practices that must be followed. Edgegap is proud to say that we follow them and actively watch the market to make sure we patch 0-day attacks and apply best practices.
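As a hedged illustration of a few of those practices (a sketch, not a complete hardening guide; the image, user ID, and limits are placeholders), the same run call can drop root, drop capabilities, and enforce quotas:

```python
import docker

client = docker.from_env()

# Run the workload as an unprivileged user with quotas enforced, so a
# compromised or runaway container cannot take over the shared host.
client.containers.run(
    "alpine:3.12",
    "id",
    user="1000:1000",                    # no root inside the container
    cap_drop=["ALL"],                    # drop every Linux capability
    security_opt=["no-new-privileges"],  # block privilege escalation
    read_only=True,                      # immutable root filesystem
    mem_limit="128m",                    # memory quota actually enforced
    pids_limit=64,                       # cap the number of processes
    remove=True,
)
```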

Containers for Games?

Major game studios have been using Windows virtual machines for years. Some of them moved to Linux-based images but are still using VMs. Be it on VMware-based or OpenStack-based infrastructure, games more than two years old will mainly run in VMs. It has been like that for as long as cloud vendors have been on the market.

Multiple tools emerged over the years to manage those virtual machines. For example, AWS GameTech has a tool to scale your VM fleet up and down based on past traffic. It leverages their highly centralized data centers and can be seen as a patch for a problem that has been around for years.

The history of containers for games is just starting

Google created a project called Agones, a plugin on top of Kubernetes that turns it into a game server manager. This is a step in the right direction, as it helps studios move to containers while leveraging existing infrastructure like matchmakers.

The communication flow remains the same, with game server allocations, clusters and such. The downside is that you still have to use “clusters,” i.e. a highly centralized environment. You cannot get closer to your players to provide lower latency, and you have to “reserve” resources and pay for some of them even when they are not used.

The real power of a container is that it can be started only when needed, stopped when it is no longer used, and moved around as if it were a tiny LEGO block. Forget the “hot/cold warm-up” of virtual machines: starting a container takes a few seconds, if not milliseconds.
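Here is a small sketch of that start-on-demand pattern (assuming the image is already cached locally; timings vary by host): start a server only when a match is requested, then throw it away the moment the match is over.

```python
import time
import docker

client = docker.from_env()

# Start a server only when a match is requested...
start = time.perf_counter()
container = client.containers.run("alpine:3.12", "sleep 300", detach=True)
print(f"container up in {time.perf_counter() - start:.3f}s")

# ...and tear it down as soon as the session ends. No idle VM to pay for.
container.stop(timeout=1)
container.remove()
```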

All the studios we’ve met in the last two years have told us they are interested in the benefits of container-based technologies. The question is not “if” containers will be added to the toolset of multiplayer game developers worldwide, but “when.”

Need help?

At Edgegap, we specialize in container-based solutions such as microservices. Our platform and our team help studios provide an improved online experience to players worldwide, increasing retention and monetization for live service games. We’re here to help you migrate your services to a container-based solution and leverage our platform’s strengths to get the most out of the value containers can bring to your studio.

Get Your Game Online, Easily & in Minutes

Get Started
