When your Lambdas are living on the edge: Lambda@Edge


Kah Tang

We all love our Lambdas on AWS. They’re fast, easy to use and, best of all, you only pay for the times they’re invoked. It’s a great, flexible product with integrations into almost every part of AWS.

Today I want to talk about Lambda@Edge. It’s a funny name, because it is Lambda, but it’s also not. It is Lambda because it’s still serverless and you still only pay when it’s invoked. It’s also not entirely Lambda, because it comes with some limitations that do not apply to normal Lambdas.

What is it?

First of all, what is Lambda@Edge? We all know CloudFront as AWS’ content delivery network for speeding up requests. It does so by serving content from edge locations around the world, close to where requests originate. An edge location can serve the content directly if it has been cached there.

But what if you want to run some logic before the caching happens? You might want to run an A/B test, add some extra headers to the request, or perhaps resize an image before serving it.

These are the kinds of things that can be done with Lambda@Edge. It is essentially a Lambda function that is deployed to the edge locations of a CloudFront distribution and runs when a request is made.

The Lambdas can talk to AWS’ other serverless products as well. For example, you can fetch or store files on S3, make DynamoDB requests, or retrieve secrets from Secrets Manager and values from the Systems Manager Parameter Store.


Great, let’s run Lambdas everywhere. You might ask: what’s the catch? Well, yes, there are some gotchas about Lambda@Edge that need to be taken into account.

Deploy Lambda to us-east-1

Here is the first and biggest gotcha: the Lambda itself MUST be deployed to N. Virginia (us-east-1). Even when the rest of your stack lives in another region, you must first deploy the Lambda in us-east-1 and refer to it by its ARN from the CloudFront behaviour.

When a request arrives at an edge location that needs to run the Lambda, CloudFront makes sure the function has been replicated to that edge location before running it.

This part is a bit annoying when you deploy everything with CloudFormation, since it requires you to create a separate stack just to deploy the Lambda to us-east-1, which creates a dependency between that region and the region of your main stack. You can, however, use things like Custom Resources or the CDK to manage this dependency.

Version must be used

When you refer to a Lambda in your CloudFront behaviour, you MUST refer to a specific version of the Lambda; $LATEST is not allowed. Needless to say, it needs to be a published version of the Lambda.

Of course, you should not forget to update the ARN in the CloudFront behaviour to the newest version whenever the Lambda changes, otherwise it will keep using the old version.


Triggers

When you configure Lambda@Edge in CloudFront, you also need to specify the trigger. It can be one of four values:

  • Viewer Request: When the viewer makes a request, before CloudFront checks the cache.
  • Origin Request: When the request is not cached and is forwarded to the origin.
  • Origin Response: When the response is received from the origin, just before it gets cached.
  • Viewer Response: When the response is returned to the viewer, regardless of whether it was cached.


There are some Lambda quotas that differ per trigger. This mainly concerns the Viewer Request and Viewer Response triggers.

The Lambdas for these triggers cannot have a timeout longer than 5 seconds and cannot use more than 128 MB of memory. Also, the compressed size of the Lambda package cannot exceed 1 MB. Last but not least, the total size of a generated response cannot exceed 40 KB.

For Origin Request and Origin Response triggers, the normal Lambda quotas apply.

Price difference

There is a price difference between normal Lambda functions and Lambda@Edge functions. Where a normal Lambda costs $0.20 per 1 million requests and roughly $0.0000021 per 128 MB-second, Lambda@Edge costs $0.60 per 1 million requests and $0.00000625125 per 128 MB-second, about three times as much.
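To put the difference in perspective, here is a back-of-the-envelope comparison, assuming $0.0000021 per 128 MB-second for regular Lambda and $0.00000625125 for Lambda@Edge (check the AWS pricing pages for current values):

```javascript
'use strict';

// Rough monthly cost for a 128 MB function: per-request fee plus duration fee.
// The rates are assumptions based on published AWS prices at the time of
// writing, not authoritative figures.
function monthlyCost({ requests, avgDurationSeconds, perMillionRequests, per128MbSecond }) {
  const requestCost = (requests / 1000000) * perMillionRequests;
  const durationCost = requests * avgDurationSeconds * per128MbSecond;
  return requestCost + durationCost;
}

// Example workload: 10 million requests per month, 50 ms average duration.
const regular = monthlyCost({
  requests: 10000000,
  avgDurationSeconds: 0.05,
  perMillionRequests: 0.20,
  per128MbSecond: 0.0000021,
});
const edge = monthlyCost({
  requests: 10000000,
  avgDurationSeconds: 0.05,
  perMillionRequests: 0.60,
  per128MbSecond: 0.00000625125,
});

console.log(regular.toFixed(2)); // prints "3.05"
console.log(edge.toFixed(2));    // prints "9.13"
```

So for this workload the edge variant costs roughly three times as much, which is usually still negligible unless you run it on every request of a high-traffic distribution.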

If you don’t want it to run on ALL requests, you can attach the function to a CloudFront behaviour that only matches requests with a certain path pattern.

No VPC access

Since the Lambdas won’t run in your VPC, but are deployed to edge locations around the world, they won’t have private access to any of the resources in your VPC either. So: no RDS, no EC2, etc.

But, as stated before, you can access resources outside the VPC, such as S3, Secrets Manager or the SSM Parameter Store.

Node.js and Python only

At the time of writing, only Node.js and Python are supported, and only certain versions: 12, 10, 8 and 6 for Node.js, and 3.8 and 3.7 for Python. So make sure your Lambda@Edge code is tested against one of these versions before you deploy it, to prevent any issues.

No Environment Variables

This can be a bit upsetting for the 12-factor app enthusiasts among us: Lambda@Edge does not support environment variables. I repeat, Lambda@Edge does not support environment variables.

There are some options to work around this, though. For Origin Request and Origin Response Lambdas, you can store these values as origin custom headers in the CloudFront origin settings.

Another option is to store them in the Systems Manager Parameter Store and retrieve them from there. Since these values are often environment-specific, it makes sense to store them there.
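A minimal sketch of that approach, assuming an AWS SDK for JavaScript v2 style SSM client; the client is injected so the caching logic can be tested without AWS access, and the cache lives outside the handler so it survives warm invocations:

```javascript
'use strict';

// Module-level cache: survives across invocations of a warm Lambda, so the
// Parameter Store is only hit on cold start (per parameter name).
const cache = new Map();

// ssmClient is assumed to follow the AWS SDK v2 shape:
// ssmClient.getParameter({ Name, WithDecryption }).promise()
async function getParameter(ssmClient, name) {
  if (!cache.has(name)) {
    const result = await ssmClient
      .getParameter({ Name: name, WithDecryption: true })
      .promise();
    cache.set(name, result.Parameter.Value);
  }
  return cache.get(name);
}

module.exports = { getParameter };
```

The Lambda’s execution role still needs `ssm:GetParameter` permission on the parameter, and every cold start pays the latency of one Parameter Store round trip.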

Lambda@Edge logs are… everywhere

As stated before, the Lambdas run at the edge location where the request arrives. This also means that logging happens in the AWS region of that edge location, which can make it challenging to figure out in which region you need to open CloudWatch to find the logs for your Lambda@Edge function.

Luckily, AWS has made this a bit easier by introducing the CloudFront Monitoring tab, which shows in which regions a Lambda@Edge function has been running. Still, debugging your application across the different regions can be quite a game of whack-a-mole.



It’s great to work with Lambda@Edge. Just make sure you know what its limitations are and understand how it’s deployed.