AWS Global Accelerator: Terrible Name, Awesome Service

March 30, 2020

I recently had my first encounter with Global Accelerator. Before that, I thought it was irrelevant to me; I don’t deploy many global applications, and networking in AWS is generally plenty fast. I figured Global Accelerator was one of those “yeah, that exists too” services I would never need or use. Boy, was I wrong.

This blog explains the features of Global Accelerator and the ways it has blown me away. It will also explain why it has a terrible name.

What is Global Accelerator?

On its face, Global Accelerator is a service that provides two static IP addresses. You can configure Global Accelerator to route any traffic sent to these IP addresses to one or more resources in AWS. The IP addresses are announced (more details about this later) from multiple edge locations around the world, allowing your traffic to enter the AWS backbone as close to you as possible, instead of traversing large parts of the public internet.
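
To make this concrete, here is a minimal boto3 sketch that creates an accelerator and reads back its two static IP addresses. The name is made up and I’m assuming valid AWS credentials; note that the Global Accelerator control plane is only available in us-west-2.

```python
# A minimal sketch, assuming boto3 and valid AWS credentials; the name below is
# hypothetical. The Global Accelerator control plane lives in us-west-2.
import uuid

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="my-first-accelerator",        # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)["Accelerator"]

# The two static anycast IP addresses are returned in the accelerator's IP sets.
for ip_set in accelerator["IpSets"]:
    print(ip_set["IpAddresses"])
```

These two addresses stay attached to the accelerator for as long as it exists, no matter how its listeners and endpoints change later on.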

Let’s visualize this. In the image below we’re hosting an app in Ireland and not using Global Accelerator. The red lines denote traffic from different visitors over the public internet.

Global Accelerator Post - No GA

Now let’s look at the same app, but with Global Accelerator enabled. The green lines denote traffic within Amazon’s private networks. The green circles indicate Global Accelerator edge locations.

Global Accelerator Post - GA Enabled

As you can see, the traffic traverses a much smaller portion of the public internet. Compared to the example without Global Accelerator, traffic enters the Amazon global network a lot sooner, after which it can be routed efficiently to its destination.

Now you might say: that’s not fair! Amazon doesn’t have these straight fiber lines from all over the world to their data centers! And you would be correct. However, they do have dedicated fiber lines and a very efficient backbone that carries traffic between countries and continents with far fewer network hops and without congestion. The resulting speed increase is visualized as the straight green lines.

But then Global Accelerator is a good name!

Yes, if this were all Global Accelerator did, it would be a good name. The thing is that Global Accelerator does a lot more. We will look at the other things it does in a bit. But first:

How does it work?

Global Accelerator uses IPv4 anycast to perform its magic. With anycast (not to be confused with unicast), multiple hosts on the internet have the same public IP address. In the case of Global Accelerator, the edge locations all have the IP addresses for all accelerators, which are announced via the Border Gateway Protocol (BGP).

A BGP announcement is - ignoring its many technical intricacies - a way for routers to announce to other routers ‘I am able and willing to receive traffic for these IP addresses’. When another router, for example at your internet provider, receives traffic for one of these addresses, it will forward it to the router that announced it can process that traffic. Each time a packet moves from one router to the next is called a hop. Every hop costs processing time, so fewer hops means lower latency.

When using anycast, multiple locations announce that they can receive traffic for an IP address. This means there are multiple paths to route traffic to its destination. When this is the case, a router will determine which path is the shortest (or fastest) and send the traffic over that path.

Let’s visualize this again. The blue circles are the hops between the client and the edge locations. Going through Brazil would take three hops, while connecting to the US edge location takes only one hop. A router determining the traffic’s route would thus prefer the red line.

Global Accelerator Post - Routes

TCP termination at the edge

Apart from reducing the number of hops and maximizing the use of the congestion-free Amazon backbone, AWS recently introduced a third way Global Accelerator improves your traffic: TCP termination at the edge.

A TCP session starts with a three-way handshake. I’ve described the technical details in a previous blog post, where we analyzed an HTTP request with Wireshark. If the distance between your client and your application is very large (like a client in Japan and an application in Ireland), the three packets for this handshake need to travel:

  1. All the way from Japan to Ireland…
  2. All the way back from Ireland to Japan…
  3. And again from Japan to Ireland…

before the actual request can take place. With TCP termination at the edge, this handshake takes place between the client in Japan and the edge location in Japan (low latency), and almost simultaneously between the edge location and your application (also low latency). The result is that, with Global Accelerator, the client can start talking to the application sooner than it otherwise could.
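
If you want to get a feel for this effect yourself, a rough way is to time how long a plain TCP connect takes from your machine, once directly to your application and once via the accelerator. This is only a sketch; both addresses below are hypothetical placeholders.

```python
# A rough sketch: measure the time a TCP connect (i.e. the three-way handshake)
# takes from the client's point of view. Both addresses are placeholders.
import socket
import time

def tcp_connect_ms(host: str, port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # the three-way handshake has completed once the connection exists
    return (time.perf_counter() - start) * 1000

print("Direct to the application:", tcp_connect_ms("origin.example.com"), "ms")
print("Via the accelerator:      ", tcp_connect_ms("198.51.100.10"), "ms")
```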

This is very close to networking black magic and one of the things about Global Accelerator that blows me away.

Other things 1: static IP addresses for Application Load Balancers

So, enough about how Global Accelerator accelerates your traffic. I was making a point about it being terribly named. Let’s look at the other things Global Accelerator does. Case in point 1: how Global Accelerator unlocks static IP addresses for Application Load Balancers.

Global Accelerator can currently forward traffic to four types of targets:

  1. Elastic IP addresses
  2. Network Load Balancers
  3. EC2 Instances
  4. Application Load Balancers

The first target is a static IP address. You allocate it once, you can shift it between resources in a region, and the IP address will not change.

The second target (the NLB) can optionally have static IP addresses assigned to it. Actually, you can assign an Elastic IP address per availability zone the NLB is in, so it’s much the same as the previous point.

The third target (an EC2 instance) can also have an Elastic IP address assigned to it.

The fourth one, the Application Load Balancer (ALB), has never had an option for static IP addresses. There are technical reasons for this: when the load balancer scales in or out, its compute resources might be replaced, and the ALB’s DNS name will point to the IP addresses of the new compute resources. That’s why you will often hear “Always use the DNS name of an ALB, never its underlying IP addresses. They might change.”
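
You can see this for yourself by resolving the ALB’s DNS name and looking at the addresses it returns; run the sketch below twice with some time in between and the list may differ. The DNS name is a hypothetical placeholder.

```python
# A tiny sketch that resolves an ALB DNS name (hypothetical below) and prints
# the IP addresses currently behind it; they may change over time.
import socket

alb_dns_name = "my-alb-123456789.eu-west-1.elb.amazonaws.com"

addresses = sorted({
    info[4][0]
    for info in socket.getaddrinfo(alb_dns_name, 443, proto=socket.IPPROTO_TCP)
})
print(addresses)
```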

However, there are two common cases where you really want an ALB with static IP addresses:

  1. Your DNS is hosted outside of AWS and you want your domain apex (http://example.com) to point to an ALB using a CNAME. The ISC explains why that is impossible, and what workarounds you could apply. Many of these workarounds are clumsy, and it would greatly help if the ALB just had static IP addresses you could point your A records to.
  2. Some secure system needs to access your ALB. This system uses an outbound firewall with an IP whitelist to restrict which resources on the internet can be accessed. Now they need to either whitelist all of AWS, or you need static IP addresses.

The official workaround to get an ALB with static addresses was to put an NLB with static IPs in front of your ALB, point the NLB target group to the (changing) IP addresses of the ALB, then have a Lambda check the ALB’s IP addresses and update the target group when necessary. This is quite a bad and overly complex solution.

With Global Accelerator, we finally have a comprehensive and easy solution for this problem. As discussed before, every accelerator has two static IP addresses, and ALBs are a supported target. It takes about ten clicks in the console to set up.
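
For those who prefer the API over the console, a minimal boto3 sketch of the same setup could look like this, assuming the accelerator from earlier already exists. Both ARNs below are hypothetical placeholders.

```python
# A minimal sketch: add a TCP/443 listener to an existing accelerator and point
# it at an ALB. The accelerator and load balancer ARNs are hypothetical.
import uuid

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

listener = ga.create_listener(
    AcceleratorArn="arn:aws:globalaccelerator::123456789012:accelerator/abcd1234",
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# For load balancers, the endpoint ID is the load balancer's ARN.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                      "loadbalancer/app/my-alb/0123456789abcdef",
        "Weight": 128,
    }],
    IdempotencyToken=str(uuid.uuid4()),
)
```

From that point on, the accelerator’s two static addresses effectively are the static IP addresses of your ALB.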

Other things 2: transparently routing traffic to the nearest region

With AWS it’s easy to deploy resources all over the world. For example, you could define your application in CloudFormation templates and deploy copies of your application in the US, Europe and Asia.

You could then use Route 53 with a Geolocation routing rule to direct traffic to its nearest region. But the different regions would have their own IP addresses; Route 53 would tell clients the addresses it thinks are closest to them, but if a client chose to connect directly to the IP addresses of another region, it could.

With Global Accelerator, the number of regions and resources becomes invisible to the client. Clients just connect to the static Global Accelerator IP addresses, and their traffic will be transparently routed to the closest region.
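
A minimal sketch of such a multi-region setup with boto3, assuming the listener already exists; the two regions match the test below (us-east-1 for Virginia, eu-central-1 for Frankfurt), and the ARN and instance IDs are hypothetical.

```python
# A minimal sketch: one endpoint group per region behind a single listener.
# Global Accelerator routes each client to the closest healthy group.
import uuid

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")
listener_arn = "arn:aws:globalaccelerator::123456789012:accelerator/abcd1234/listener/0123abcd"

for region, instance_id in [("us-east-1", "i-0aaaaaaaaaaaaaaaa"),
                            ("eu-central-1", "i-0bbbbbbbbbbbbbbbb")]:
    ga.create_endpoint_group(
        ListenerArn=listener_arn,
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": instance_id, "Weight": 128}],
        IdempotencyToken=str(uuid.uuid4()),
    )
```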

Let’s test this with a simple setup: one EC2 instance in Virginia, one in Frankfurt and a continuous curl to the Global Accelerator address. During the curl loop I will turn on a VPN to Canada for a few seconds.

curl

As you can see, Global Accelerator switches over to another region because my traffic now originates in Canada.
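
If you would rather script this test than type a shell loop, a rough Python equivalent could look like this, assuming each instance serves a small page that returns its own region; the accelerator IP is a placeholder.

```python
# A rough Python equivalent of the curl loop; the IP address is a placeholder
# and each instance is assumed to respond with its own region name.
import time
import urllib.request

while True:
    with urllib.request.urlopen("http://198.51.100.10/", timeout=5) as response:
        print(response.read().decode().strip())  # e.g. "us-east-1" or "eu-central-1"
    time.sleep(1)
```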

We can further tweak this behavior with Traffic Dials. A Traffic Dial on an endpoint group is like a filter: it only allows a portion of the traffic through. For example, setting the Traffic Dial for my Frankfurt region to 10% will only allow 1 in every 10 requests through, with the remaining requests being redirected to my other region.
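
Setting a Traffic Dial through the API is a single call; here is a minimal sketch with a hypothetical endpoint group ARN.

```python
# A minimal sketch: dial one endpoint group down to 10% of its traffic.
# The endpoint group ARN is a hypothetical placeholder.
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

ga.update_endpoint_group(
    EndpointGroupArn=(
        "arn:aws:globalaccelerator::123456789012:accelerator/abcd1234"
        "/listener/0123abcd/endpoint-group/efgh5678"
    ),
    TrafficDialPercentage=10.0,  # let 10% through, redirect the rest elsewhere
)
```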

traffic dial console

Now my curl loop from the Netherlands results in this:

traffic dial

Because Global Accelerator also performs health checks, this is a great feature for pilot light scenarios. Additional use cases are A/B testing and safely deploying applications in new regions. All while using static IP addresses and without any DNS changes.

Other things 3: transparent blue/green deployments

The previous section focused on balancing traffic between regions. Global Accelerator also allows you to balance traffic between resources in a single region using weights.

For example, you could have two Application Load Balancers in one region. One hosts your old v1 application, the other one hosts the completely refactored v2 application.

With Global Accelerator, you can transparently route a small percentage of traffic to the new application. When your tests succeed you can gradually increase traffic, until the old application is completely cut off. Again, completely transparently behind two static IP addresses and without DNS changes.

The console for configuring endpoint weights looks like this:

weights

As you can see, the second instance is configured to receive 5% of this region’s traffic. But as its health check is failing, no traffic will currently be sent to it.

After making sure the instance is healthy, a curl loop looks like this:
curl weight

The traffic dials from the previous section are processed before the weights. So if you’re only accepting 10% of traffic in eu-central-1, and only sending 5% of the traffic within eu-central-1 to the second instance, only 0.5% of all global traffic will be routed there.
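
A minimal sketch of such a 95/5 weighted split through the API; the endpoint group ARN and instance IDs are hypothetical placeholders.

```python
# A minimal sketch of a weighted blue/green split within one endpoint group.
# Weights are relative values between 0 and 255; the IDs below are placeholders.
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

ga.update_endpoint_group(
    EndpointGroupArn=(
        "arn:aws:globalaccelerator::123456789012:accelerator/abcd1234"
        "/listener/0123abcd/endpoint-group/efgh5678"
    ),
    EndpointConfigurations=[
        {"EndpointId": "i-0aaaaaaaaaaaaaaaa", "Weight": 95},  # old v1 application
        {"EndpointId": "i-0bbbbbbbbbbbbbbbb", "Weight": 5},   # new v2 application
    ],
)
```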

Other things 4: client affinity

Because multiple edge locations announce the same IP address, it might occur that two paths from a client to different edge locations have equal lengths. In this case, a few packets might take path A, and a few packets take path B.

To ensure that these packets end up at the same endpoint, Global Accelerator calculates a five-tuple hash consisting of the source IP, source port, destination IP, destination port, and the protocol. Every packet that matches this hash will be sent to the same endpoint, regardless of its path.

Additionally, Global Accelerator offers an option to enable Client Affinity for a listener. When enabled, Global Accelerator will not use the five-tuple hash, but instead a two-tuple hash of the source IP and destination IP. This results in all traffic from a specific client (TCP, UDP, changing source ports) being routed to the same endpoint.

This unlocks completely new use cases. For example, imagine a game server. A client can request leaderboards over an HTTP API on port 443 (using TCP), while the real-time game engine uses UDP on port 9000. With client affinity, you can now guarantee that these two traffic flows end up at the same server or server group, making sure they stay in sync.
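
A minimal sketch of the two listeners for this game-server scenario; the accelerator ARN is a hypothetical placeholder.

```python
# A minimal sketch: a TCP/443 listener and a UDP/9000 listener on the same
# accelerator, both with client affinity enabled. The ARN is a placeholder.
import uuid

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")
accelerator_arn = "arn:aws:globalaccelerator::123456789012:accelerator/abcd1234"

for protocol, port in [("TCP", 443), ("UDP", 9000)]:
    ga.create_listener(
        AcceleratorArn=accelerator_arn,
        Protocol=protocol,
        PortRanges=[{"FromPort": port, "ToPort": port}],
        ClientAffinity="SOURCE_IP",  # two-tuple hash: source IP + destination IP
        IdempotencyToken=str(uuid.uuid4()),
    )
```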

Other things 5: multiple listeners and targets

A single accelerator has two IP addresses. On those addresses, you can configure multiple listeners: for example, TCP on port 80, TCP on port 443 and UDP on port 9000. Each of these listeners has its own endpoint groups, endpoints, weights and traffic dials. This allows you to manage multiple traffic streams behind the same pair of IP addresses, without worrying about congestion, DNS or external routing protocols.

Other things 6: a great user experience

The console for Global Accelerator is really simple. There is an almost Apple-esque usability to it: there are four steps, each with a very small number of options. Setting up a new accelerator takes about a minute.

Hidden below this polished service is a giant feat of engineering, one of the most impressive implementations of software defined networking (SDN) I’ve ever seen. The ten clicks you apply in the console result in IP addresses being announced all over the world, traffic flows being defined and shaped, dynamic failover being set up, and health checks being deployed, all on a global scale. As Arthur C. Clarke said:

Any sufficiently advanced technology is indistinguishable from magic.

Well, even though I understand the principles and most of the implementation details of Global Accelerator, it still feels like magic to me.

Other things 7: low cost

With all of the above being said, Amazon could easily have charged a hefty price for it, something like AWS Shield Advanced, which goes for $3,000 per month. But they don’t.

For every accelerator you have running, you pay $0.025 per hour. That’s $18 per 30 days. Apart from that, you pay per gigabyte traveling over the AWS global network. The price depends on the source and destination regions. A few examples:

  • USA to USA: $0.015 /GB
  • USA to Europe: $0.015 /GB
  • Australia to USA: $0.070 /GB

Generally (and logically), local transfers are cheap and long-distance transfers are expensive. This means you might need to deploy some resources in a local area (like Australia) to make sure the traffic is handled locally. It will probably be cheaper than paying the inter-region Global Accelerator cost. And as we’ve shown in this post, that’s relatively easy to achieve using Global Accelerator’s traffic policies.
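
As a back-of-the-envelope example using the rates above (a sketch only, with a hypothetical traffic volume; check the current pricing page before relying on these numbers):

```python
# A back-of-the-envelope cost estimate using the example rates from this post.
# The 500 GB per month of USA-to-Europe traffic is a hypothetical assumption.
hours_per_month = 24 * 30
fixed_fee = 0.025 * hours_per_month      # $18.00 per accelerator per month

monthly_gb = 500
premium_per_gb = 0.015                   # USA to Europe, from the list above
transfer_fee = monthly_gb * premium_per_gb

print(f"Estimated monthly cost: ${fixed_fee + transfer_fee:.2f}")  # $25.50
```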

What else?

If you would like to know more about Global Accelerator, I would advise starting with the Global Accelerator FAQs. They provide concise answers to questions like:

  • Where is AWS Global Accelerator deployed today?
  • How does AWS Global Accelerator work together with Elastic Load Balancing (ELB)?
  • How is AWS Global Accelerator different from Amazon CloudFront?

If that doesn’t satisfy your thirst for knowledge, continue with the Global Accelerator Documentation.

But most of all, my advice would be to build a few test setups with Global Accelerator. Deploy some EC2 instances in different regions, set up an accelerator, run some ping and curl loops. Play with the traffic dials and weights. There is no better way to learn than getting experience.

Conclusion: a terrible name, an awesome service

I think my post has shown many use cases and an impressive set of features for Global Accelerator. It not only accelerates your traffic; it also makes your ALBs more useful, makes multi-region setups easy as pie, makes your environments and traffic policies more secure, enables point-and-click blue/green deployments, and allows for client affinity across different protocols. And all of that at a rate that’s affordable for even the smallest startup.

And that’s my problem with Global Accelerator. The name does not cover its function(s). Now I’m glad they didn’t go for something abstract like Fargate, Sagemaker, Neptune or Macie. But I didn’t look at Global Accelerator properly because of its name.

If I can make a suggestion: rename it to AWS Holy Grail™.

Luc van Donkersgoed
