AWS re:Invent 2020 Day 8: Building a Multiplayer Game Architecture with Global Accelerator Custom Routing

Luc van Donkersgoed

Custom Routing for Global Accelerator is a niche feature: your company might not need it, but the companies that do need fine-grained routing over a global network are going to be extremely happy that Amazon did the heavy lifting for them. In this post we will use Global Accelerator Custom Routing to build an architecture for game lobbies: making sure users in the same multiplayer game are hosted on the same EC2 instance.


Let’s quickly recap the basic Global Accelerator functionality. An Accelerator consists of two static IP addresses, which are announced with BGP from many edge locations around the world. When a user connects to these IP addresses their traffic is routed to the closest edge location, and onto the AWS global backbone. This reduces latency and congestion. In my earlier post AWS Global Accelerator: Terrible Name, Awesome Service I wrote:

It not only accelerates your traffic; it also makes your ALBs more useful, makes multi-region setups easy as pie, your environments and traffic policies more secure, it enables point and click blue/green deployments and allows for client affinity between different protocols.

Global Accelerator acts as a load balancer: it routes the traffic to the closest available region. Within one region, the traffic will commonly be routed to an Elastic Load Balancer, which further distributes the traffic over a group of instances.

Through these multiple layers of load balancing it becomes impossible to predict to which instances requests and traffic will be routed. When your application architecture requires fine-grained control over traffic flow, Custom Routing for Global Accelerator comes into play.

Example use case: game lobbies

Multiplayer games often have very strict latency requirements: when one player makes a move in a shooter or racing game, the server and the other players should be processing that move within milliseconds. To achieve this, the move cannot be processed by a cluster of servers - the network latency between the servers would slow down the game too much. To group a number of players on the same server, multiplayer games often use lobbies. A lobby is hosted on a single server, so players connecting to the same lobby are connecting to the same instance.

One server instance will generally be able to host multiple lobbies. Below, we will assume one server hosts ten lobbies. Global Accelerator Custom Routing requires each of these lobbies to be available on its own network port. Our server's lobbies will be available on ports 2000 to 2009.

[Image: Global Accelerator Lobbies]
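To make this concrete, here is a minimal sketch (in Python, with purely hypothetical lobby logic) of a game server exposing ten lobbies on ports 2000 through 2009, one listener per lobby:

```python
import asyncio

LOBBY_PORTS = range(2000, 2010)  # ten lobbies, ports 2000-2009


async def handle_lobby_connection(reader, writer, port):
    # Hypothetical lobby handler: a real game server would speak the game's
    # own protocol here; this sketch just reports which lobby was reached.
    peer = writer.get_extra_info("peername")
    print(f"Player {peer} connected to lobby on port {port}")
    writer.write(f"Welcome to lobby {port - 2000}\n".encode())
    await writer.drain()
    writer.close()
    await writer.wait_closed()


async def main():
    servers = []
    for port in LOBBY_PORTS:
        # One TCP listener per lobby; Custom Routing will later map each of
        # these instance ports to a dedicated port on the accelerator.
        server = await asyncio.start_server(
            lambda r, w, p=port: handle_lobby_connection(r, w, p),
            host="0.0.0.0",
            port=port,
        )
        servers.append(server)
    await asyncio.gather(*(s.serve_forever() for s in servers))


if __name__ == "__main__":
    asyncio.run(main())
```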

Matchmaking

In a non-Global Accelerator setup this server would have a public IP address, and some external system would keep track of users and lobbies. This external system is called the matchmaking service.

Let’s say a player named Axel wants to host a new game. They click ‘new multiplayer game’ in the main screen and call the game “Axel’s game”. The matchmaking service assigns them an empty lobby, which happens to be available at port 2002. This lobby is now accessible at 18.123.67.242:2002 and Axel is added to that lobby. The matchmaking service also adds an entry for this game in a database.

[Image: New Game]
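A minimal sketch of what this 'new game' step could look like, assuming a hypothetical DynamoDB table called game_lobbies; the table name, attributes, and helper function are illustrative only:

```python
import time
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
lobbies_table = dynamodb.Table("game_lobbies")  # hypothetical table


def host_new_game(game_name, host_player, server_ip, lobby_port):
    """Register a new game in the matchmaking database.

    server_ip and lobby_port are whatever the matchmaking service allocated,
    e.g. 18.123.67.242 and 2002 in the example above.
    """
    game = {
        "game_id": str(uuid.uuid4()),
        "game_name": game_name,
        "host": host_player,
        "server_ip": server_ip,
        "lobby_port": lobby_port,
        "players": [host_player],
        "created_at": int(time.time()),
    }
    lobbies_table.put_item(Item=game)
    return game


# Axel hosts a new game on the lobby the matchmaker assigned (port 2002).
axels_game = host_new_game("Axel's game", "Axel", "18.123.67.242", 2002)
```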

Bjorn knows Axel and wants to join his game. He is browsing available multiplayer games, and spots Axel’s game at the top of the list. He presses the ‘join game’ button, and the matchmaking service retrieves the IP and port for Axel’s lobby. Bjorn’s game client connects to this address, and Axel and Bjorn are now hosted on the same server. When the game starts, their moves will be processed by the same instance, and latency will be as low as possible.
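The join flow is then a lookup against the same hypothetical table; again, the attribute names and join_game helper are illustrative only:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
lobbies_table = dynamodb.Table("game_lobbies")  # same hypothetical table as above


def join_game(game_id, player):
    """Look up the lobby for an existing game and return its endpoint."""
    item = lobbies_table.get_item(Key={"game_id": game_id})["Item"]
    # Record the joining player in the lobby's player list.
    lobbies_table.update_item(
        Key={"game_id": game_id},
        UpdateExpression="SET players = list_append(players, :p)",
        ExpressionAttributeValues={":p": [player]},
    )
    return item["server_ip"], int(item["lobby_port"])


# Bjorn joins Axel's game (using the game_id created in the previous sketch);
# his client then connects to this IP and port.
server_ip, lobby_port = join_game("axels-game-id", "Bjorn")
```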

Scaling out

Of course, ten lobbies aren’t going to be nearly enough for our super popular game. So let’s scale out! We add many game servers, but retain a single matchmaking service.

[Image: Multiple Servers]

In this example we have three game servers, each with a public IP address. The matchmaking service knows about all of them, and allocates lobbies to players as it sees fit.

Reducing latency with Global Accelerator

As we’ve discussed, achieving the lowest latency is essential for multiplayer games. This goes for the server architecture, but the delays between the client (the gamer) and the server are just as important.

Without Global Accelerator, clients connecting to your game server might travel across many hops and congested networks to reach the multiplayer server:
[Image: Global Accelerator Post - No GA]

Global Accelerator can theoretically solve this issue by allowing traffic to enter the AWS backbone earlier. The AWS backbone is highly efficient and congestion-free, which improves the user experience.
[Image: Global Accelerator Post - GA Enabled]

Before the release of Custom Routing for Global Accelerator (AWS News - AWS Blog) this remained a theoretical solution. Because Global Accelerator had no way to pin traffic to a single server, game traffic might end up at any server. This made the implementation of lobbies impossible.

So how does Custom Routing solve this issue?

Introducing Custom Routing

Simply put, Custom Routing for Global Accelerator dedicates a single port on its public IP addresses to every single lobby port on every EC2 instance. So if you have two game servers with four lobbies each, Global Accelerator will have eight ports:
[Image: Custom Routing]

If more servers are added, or each server hosts more lobbies, the number of ports on Global Accelerator needs to increase as well. For example, 1,000 servers with 20 lobbies each would result in 20,000 open ports on Global Accelerator. The maximum number of supported ports is 65,535.

Through this port mapping mechanism, game developers can use the Global Accelerator network for low-latency access to their game servers, while retaining precise control over the destination port and server for each client.
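To give an impression of the moving parts, the sketch below sets up a custom routing accelerator with boto3. The names, regions, subnet ID, and port ranges are placeholders; note that the endpoint added to the group is the VPC subnet that contains the game servers, and the destination configuration defines which instance ports (our lobby ports 2000-2009) traffic may be routed to:

```python
import uuid

import boto3

# The Global Accelerator API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# 1. Create the custom routing accelerator itself.
accelerator = ga.create_custom_routing_accelerator(
    Name="game-lobbies",
    IdempotencyToken=str(uuid.uuid4()),
    Enabled=True,
)["Accelerator"]

# 2. Create a listener with a large block of accelerator ports; Global
#    Accelerator maps these to (instance IP, lobby port) combinations in the
#    subnet. 10000-29999 gives us 20,000 accelerator ports to work with.
listener = ga.create_custom_routing_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    PortRanges=[{"FromPort": 10000, "ToPort": 29999}],
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# 3. Create an endpoint group that allows routing to the lobby ports
#    (2000-2009) on instances in the chosen region.
endpoint_group = ga.create_custom_routing_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    DestinationConfigurations=[
        {"FromPort": 2000, "ToPort": 2009, "Protocols": ["UDP", "TCP"]}
    ],
    IdempotencyToken=str(uuid.uuid4()),
)["EndpointGroup"]

# 4. Add the VPC subnet that contains the game servers as an endpoint.
ga.add_custom_routing_endpoints(
    EndpointGroupArn=endpoint_group["EndpointGroupArn"],
    EndpointConfigurations=[{"EndpointId": "subnet-0123456789abcdef0"}],  # placeholder
)

# 5. By default no traffic is allowed to flow; explicitly allow it here.
ga.allow_custom_routing_traffic(
    EndpointGroupArn=endpoint_group["EndpointGroupArn"],
    EndpointId="subnet-0123456789abcdef0",  # placeholder
    AllowAllTrafficToEndpoint=True,
)
```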

Maintaining the mapping

At this point the game servers no longer need to be supplied with public IP addresses. In fact, every server, in whichever region it is placed, will be accessible on the static Global Accelerator IP addresses. This allows us to remove the IP address column from the lobbies database entirely, and just register ports.

If your application needs to know which port on the Global Accelerator maps to a specific port on an EC2 instance, you can use the ListCustomRoutingPortMappingsByDestination API. Conversely, to find out what EC2 instance and port a specific Global Accelerator port connects to, you can use the ListCustomRoutingPortMappings API.
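Both lookups might look something like this with boto3; the ARN, subnet ID, addresses, and ports are placeholders, and in production you would paginate the results with NextToken:

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Which accelerator IP and port front a specific lobby port on a specific
# instance? The instance is identified by its private IP in the subnet endpoint.
by_destination = ga.list_custom_routing_port_mappings_by_destination(
    EndpointId="subnet-0123456789abcdef0",  # placeholder subnet endpoint
    DestinationAddress="10.0.1.42",         # placeholder instance private IP
)
for mapping in by_destination["DestinationPortMappings"]:
    if mapping["DestinationSocketAddress"]["Port"] == 2002:
        for sock in mapping["AcceleratorSocketAddresses"]:
            print("Lobby port 2002 is reachable at", sock["IpAddress"], sock["Port"])

# Conversely: which instance and port does a given accelerator port map to?
mappings = ga.list_custom_routing_port_mappings(
    AcceleratorArn="arn:aws:globalaccelerator::123456789012:accelerator/example",  # placeholder
)
for mapping in mappings["PortMappings"]:
    if mapping["AcceleratorPort"] == 10042:  # hypothetical accelerator port
        dest = mapping["DestinationSocketAddress"]
        print("Accelerator port 10042 maps to", dest["IpAddress"], dest["Port"])
```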

Caveats

Custom routing accelerators only support EC2 instances as targets. Routing to load balancers and Elastic IPs is not supported.

CloudFormation is also not supported, nor is Bring Your Own IP (BYOIP).

Health checks and failovers are not supported either. This means you will need to implement your own health checks and remove an unhealthy instance (and its ports) from Global Accelerator through in-house code.
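One possible pattern is a small watchdog that probes each game server and calls DenyCustomRoutingTraffic for instances that fail the check. The probe below is a naive TCP connect, and the ARNs and addresses are placeholders; a real health check would be game-specific, and the matchmaking service would also need to stop allocating lobbies on the unhealthy instance:

```python
import socket

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

ENDPOINT_GROUP_ARN = "arn:aws:globalaccelerator::123456789012:endpoint-group/example"  # placeholder
SUBNET_ENDPOINT_ID = "subnet-0123456789abcdef0"                                        # placeholder
GAME_SERVERS = ["10.0.1.42", "10.0.1.43", "10.0.1.44"]                                 # placeholder private IPs
LOBBY_PORTS = list(range(2000, 2010))


def is_healthy(ip, port=2000, timeout=2.0):
    """Naive TCP health check against the first lobby port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False


for server_ip in GAME_SERVERS:
    if not is_healthy(server_ip):
        # Stop the accelerator from routing any traffic to this instance's
        # lobby ports until it recovers.
        ga.deny_custom_routing_traffic(
            EndpointGroupArn=ENDPOINT_GROUP_ARN,
            EndpointId=SUBNET_ENDPOINT_ID,
            DestinationAddresses=[server_ip],
            DestinationPorts=LOBBY_PORTS,
        )
        print(f"Denied accelerator traffic to unhealthy server {server_ip}")
```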

Conclusion

Custom Routing broadens the audience for Global Accelerator. Where Global Accelerator was previously only able to provide latency and stability gains for distributed and scalable systems, Custom Routing now unlocks the same benefits for more constrained architectures. However, this comes at a cost of reliability, as health checks and failovers are no longer supported.

Releases like these show how Amazon continues to take on the undifferentiated heavy lifting for many use cases. And as about 90% of Amazon's roadmap is driven by customer demand, quite a few companies have likely asked for this feature.

This article is part of a series published around re:Invent 2020. If you would like to read more about re:Invent 2020, check out my other posts:

I share posts like these and smaller news articles on Twitter; follow me there for regular updates! If you have questions or remarks, or would just like to get in touch, you can also find me on LinkedIn.

Luc van Donkersgoed