The first virtual re:Invent has officially kicked off with Monday Night Live and Andy Jassy’s keynote. In this article we’ll highlight the 5 most important announcements and releases for day 1.
EC2 instances on AWS have run on Linux and Windows for the longest time. Now AWS is adding the third big operating system, Apple’s Mac OS. There are two reasons Mac OS hasn’t been available before.
Customer demand: Most workloads in the cloud run on Linux, followed by Windows. The use cases for Mac OS in the cloud are far more limited. At the same time, enabling Mac OS in the cloud requires a significant investment. Only now, with AWS’ scale and reach, does it make sense to build a solution with limited use cases like this.
Mac OS licensing: The other reason why running Mac OS in the cloud is not commonly available is Apple’s licensing terms: you may only run and virtualize Mac OS on Apple hardware. To comply, AWS has deployed thousands of Mac mini computers in its data centers and connected them to the AWS Nitro System through Thunderbolt 3. This also explains why the only available instance type is mac1.metal: you get the entire physical machine to yourself.
So if most workloads in the cloud run Linux or Windows, what do we need Mac OS for? Well, some processes only run on Mac OS, like building and testing applications for iOS, macOS, tvOS and Safari. If you’re building applications for these platforms at scale, you will need to do a lot of testing: for multiple versions of Mac OS, multiple hardware platforms, multiple screen sizes, and so on. Cumulatively, this can lead to hundreds or thousands of tests and builds per application version.
Previously, developers needed to maintain their own Mac hardware to support these builds and tests. With Mac OS in the cloud, you get all the conveniences and security features of EC2 - without purchasing any hardware.
There were three big announcements for Lambda yesterday:
For many people, the biggest announcement will be the billing granularity. This release will reduce costs for every single Lambda function automatically, right now. Previously, the minimum billable duration for a Lambda function was 100 milliseconds. So even if your function only ran for 10ms, you would be billed for 100ms - an overhead of 90%! Starting today, these functions will actually be billed per single ms. Workloads with quickly-processing Lambda functions will benefit the most, which brings me to announcement two.
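To put numbers on the savings, here is a back-of-envelope sketch in Python. The $0.0000166667 per GB-second rate is the published x86 compute price at the time; request charges and the free tier are ignored for simplicity:

```python
def lambda_cost_usd(invocations, duration_ms, gb_memory,
                    price_per_gb_second=0.0000166667,
                    minimum_billed_ms=1):
    """Approximate monthly Lambda compute cost (compute only)."""
    billed_ms = max(duration_ms, minimum_billed_ms)
    gb_seconds = invocations * (billed_ms / 1000.0) * gb_memory
    return gb_seconds * price_per_gb_second

# A 10 ms, 512 MB function invoked 10 million times per month:
old = lambda_cost_usd(10_000_000, 10, 0.5, minimum_billed_ms=100)  # old 100 ms minimum
new = lambda_cost_usd(10_000_000, 10, 0.5)                         # new per-ms billing
# The new bill is exactly a tenth of the old one for this function.
```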
Lambda functions can now be configured with up to 10 GB of RAM, up from the previous maximum of roughly 3 GB. CPU power scales proportionally with memory and can now reach up to 6 vCPUs. If you optimize your functions for multithreading (and the new support for the AVX2 instruction set), this can drastically reduce your execution time. Combine this with the previous point, and Lambda-heavy workloads can look forward to large cost reductions.
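For Python functions, spreading CPU-bound work across the extra vCPUs takes a little care: Lambda’s execution environment has no /dev/shm, so multiprocessing.Pool fails there, while Process plus Pipe works. A minimal sketch — the handler name, event shape and chunking are illustrative, not an official pattern:

```python
import multiprocessing
import os

def _chunk_sum(conn, chunk):
    # CPU-bound work for one worker process: sum of squares over its chunk.
    conn.send(sum(x * x for x in chunk))
    conn.close()

def handler(event, context=None):
    """Hypothetical handler that fans work out to one process per vCPU.
    Uses Process + Pipe because multiprocessing.Pool needs /dev/shm,
    which Lambda's environment does not provide."""
    data = event["numbers"]
    workers = os.cpu_count() or 1
    chunks = [data[i::workers] for i in range(workers)]
    jobs = []
    for chunk in chunks:
        parent_conn, child_conn = multiprocessing.Pipe()
        proc = multiprocessing.Process(target=_chunk_sum,
                                       args=(child_conn, chunk))
        proc.start()
        jobs.append((parent_conn, proc))
    total = 0
    for parent_conn, proc in jobs:
        total += parent_conn.recv()
        proc.join()
    return {"sum_of_squares": total}
```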
The third release adds container images as a packaging format for Lambda. This will smooth the transition from existing container-based workloads to Lambda. Additionally, a container image can provide up to 10 GB of packages and supporting libraries for your functions, where the previous deployment packages (with or without Lambda Layers) were limited to 256 MB. This is a big win for machine learning inference, for example, where packages like scikit-learn would easily break size limits.
Amazon announced Aurora Serverless v1 at re:Invent 2017. Although cool in concept, it had quite a few issues in real-world applications. These included very slow scaling, high costs at any realistic load, and missing features like multi-AZ deployments.
Aurora Serverless v2 promises to solve these issues. It’s currently in preview, but will likely become generally available (GA) in 2021.
Relational databases are commonly the least flexible component in an elastic environment. For example, your EC2 Auto Scaling groups might scale between 2 and 200 instances, and DynamoDB offers dynamic capacity between 1 and 1,000 WCUs, but your Aurora cluster is always configured at a static instance size (an r5.xlarge, maybe). Manual scaling is possible, but might take hours, depending on your database’s size. Additionally, you need to manage the infrastructure for this database.
Aurora Serverless v2 can scale up and down in a fraction of a second, matching your requests and users as they ebb and flow. This leads to a better user experience, reduced costs and less operational stress.
Amazon has released DockerHub. I can’t put it any more concisely than that. Previously, ECR repositories were always private and could only be used by authorized systems and accounts. For public images, users generally used non-AWS products like DockerHub. With this new release, Amazon enables users to use ECR for public images as well.
On November 20th, DockerHub introduced new rate limits, which have led to many issues for users depending on public images. With ECR Public Repositories, Amazon is obviously offering an alternative solution, which in turn might act as a gateway into the broader AWS ecosystem for new users.
There are three major releases and announcements for EBS:
The new gp3 volume type is 20% cheaper than the current gp2 volume type. Additionally, it provides a higher baseline performance (in both IOPS and throughput), and it is the first general-purpose volume type that allows you to configure IOPS independently of disk size. This last feature is great for small-size, high-IOPS workloads, which previously required you to either deploy very large volumes (with very low usage) or use the expensive io2 volume type.
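To sketch why IOPS-independent sizing matters, compare a small, high-IOPS volume under both types. The prices and the gp2 3-IOPS-per-GB ratio below are assumptions based on the us-east-1 pricing page at the time; check the current EBS pricing before relying on them:

```python
# Assumed us-east-1 rates: gp2 $0.10/GB-month with 3 IOPS per GB;
# gp3 $0.08/GB-month, 3,000 IOPS baseline, $0.005 per extra IOPS-month.
GP2_GB_MONTH = 0.10
GP3_GB_MONTH = 0.08
GP3_IOPS_MONTH = 0.005
GP3_BASELINE_IOPS = 3000

def gp2_cost_for_iops(iops, data_gb):
    # On gp2, IOPS are tied to size: you must provision enough GB.
    required_gb = max(data_gb, iops / 3)
    return required_gb * GP2_GB_MONTH

def gp3_cost_for_iops(iops, data_gb):
    # On gp3, size and IOPS are priced independently.
    extra_iops = max(0, iops - GP3_BASELINE_IOPS)
    return data_gb * GP3_GB_MONTH + extra_iops * GP3_IOPS_MONTH

# 100 GB of data that needs 9,000 IOPS:
# gp2 forces a 3,000 GB volume -> $300/month
# gp3 stays at 100 GB          -> $38/month
```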
Speaking of io2: you can now run io2 volumes on the next generation of EBS hardware, EBS Block Express. With io2 Block Express, you can provision volumes with up to 256,000 IOPS, 4,000 MB/s of throughput and 64 TB in size. All three limits are 4x higher than those of regular io2 volumes, which max out at 64,000 IOPS, 1,000 MB/s and 16 TB. The new io2 Block Express volumes also offer 99.999% durability (a 0.001% annual failure rate), up from 99.8% - 99.9% for all other volume types. This is a game changer for workloads with high reliability requirements, which previously depended on strong backup mechanisms.
The third announcement is a price reduction for io2 volumes. Previously there was only one tier for provisioned IOPS, for example $0.065/provisioned IOPS-month. With the new tiered pricing structure, this cost applies to the first 32,000 IOPS, but the next tier (32,001 to 64,000 IOPS) is priced 30% lower ($0.046/provisioned IOPS-month in our example). With io2 Block Express, you can configure even higher performance, up to 256,000 IOPS. This last tier is priced 30% lower than tier 2, at $0.032/provisioned IOPS-month in our example.
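The tiered structure is easy to express in code. A small sketch using the example rates from above:

```python
def io2_iops_cost(iops, tier1=0.065, tier2=0.046, tier3=0.032):
    """Monthly provisioned-IOPS cost under the tiered io2 structure:
    tier 1 up to 32,000 IOPS, tier 2 from 32,001 to 64,000 (30% lower),
    tier 3 above 64,000 (another 30% lower, io2 Block Express only)."""
    cost = min(iops, 32_000) * tier1
    if iops > 32_000:
        cost += (min(iops, 64_000) - 32_000) * tier2
    if iops > 64_000:
        cost += (iops - 64_000) * tier3
    return cost

# 64,000 IOPS: 32,000 * $0.065 + 32,000 * $0.046 = $3,552/month,
# instead of 64,000 * $0.065 = $4,160/month under flat pricing.
```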
Simple Storage Service got a few releases that were not covered during the keynote. These are significant enough to warrant their own blog post. Read all about it here: AWS re:Invent 2020 Day 1: S3 Announcements
We’re one day into re:Invent and I’m already overloaded. It would be good for my health if the releases slow down a bit… but I guess we’re simply in for a wild ride. You can expect these posts almost every day in the next few weeks.
I share posts like these and smaller news articles on Twitter, follow me there for regular updates! If you have questions or remarks, or would just like to get in touch, you can also find me on LinkedIn.