In our day-to-day practice we notice organizations often underestimate the shift of moving infrastructure to the cloud. In this article, we’ll share some of our lessons learned to help you understand how to take full advantage of moving to the cloud.
1. Infrastructure-as-Click instead of Infrastructure-as-Code
Cloud providers like Microsoft, Amazon, and Google offer web portals to provision resources easily. In a matter of minutes, you can create a network, a virtual machine, and a database to deploy your application. This makes creating a functional application quick and easy, but every additional click adds to a patchwork of undocumented changes, complicating the underlying infrastructure. Everything works fine until one click accidentally breaks your environment and you realize there is no “undo” button.
All cloud providers offer a way to describe the desired infrastructure in a text file - Infrastructure as Code. To provision resources, you send the text file to the cloud provider. Text files can easily be version controlled, allowing you to track who changed what and when - and to roll back in case of issues. Infrastructure as Code also allows you to quickly provision or update multiple resources at once. Editing text files beats clicking in a web portal.
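The core idea behind Infrastructure as Code is declarative: you describe the desired end state, and tooling works out which changes to make. As a hedged illustration (the resource names and sizes below are made up, and real tools like Terraform, Bicep or CloudFormation do this at far greater scale), the reconciliation step might be sketched like this:

```python
# Desired infrastructure, as it would live in a version-controlled text file.
desired = {
    "vm-web-01": {"size": "Standard_B2s", "region": "westeurope"},
    "vm-db-01":  {"size": "Standard_D4s", "region": "westeurope"},
}

# What actually exists in the cloud right now.
current = {
    "vm-web-01": {"size": "Standard_B1s", "region": "westeurope"},
    "vm-old-99": {"size": "Standard_A1",  "region": "westeurope"},
}

def plan(desired, current):
    """Compute the create/update/delete actions that reconcile current with desired."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(plan(desired, current))
# [('update', 'vm-web-01'), ('create', 'vm-db-01'), ('delete', 'vm-old-99')]
```

Because the plan is computed from the files, every change is reviewable before it is applied - and reverting a bad change is just reverting a commit.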
2. Manual installations instead of automatic deployments
Installing and updating applications manually is time-consuming and error-prone.
Instead, manage everything in code. Your deployments become quick, repeatable and consistent across environments.
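What makes a scripted deployment safer than a manual one is idempotence: a step either brings the system to the desired state or leaves it untouched, so the whole script can be rerun after a failure without side effects. A minimal sketch of that property (the config store here is just a dict standing in for a real server):

```python
def ensure_config(store: dict, key: str, value: str) -> bool:
    """Set key to value; return True only if something actually changed."""
    if store.get(key) == value:
        return False          # already in the desired state, do nothing
    store[key] = value
    return True

server = {"app_version": "1.3"}
print(ensure_config(server, "app_version", "1.4"), server)
# True {'app_version': '1.4'}
print(ensure_config(server, "app_version", "1.4"), server)
# False {'app_version': '1.4'}   rerunning the same step is harmless
```

Tools such as Ansible, Chef and the cloud providers' own deployment services are built around exactly this "ensure" pattern.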
3. Testing in production instead of using a separate environment
Even a small parameter change can break your environment.
To reduce the risk of downtime and costly rework, make sure to:
- Apply the four-eyes principle
- Automate testing
- Standardize the use of Development, Test, Acceptance and Production (DTAP) environments as part of your DevOps processes
- Plan using “What-if scenarios” that address the impact of deployment on the existing infrastructure
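The "automate testing" and "what-if" steps above can be combined: validate a proposed change against simple invariants before it ever touches production. The rules below are purely illustrative assumptions, not a real provider's policy set:

```python
def validate_change(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the change looks safe to apply."""
    problems = []
    if config.get("environment") == "production" and config.get("instance_count", 0) < 2:
        problems.append("production must run at least 2 instances")
    if config.get("tls", True) is False:
        problems.append("TLS may not be disabled")
    return problems

# A proposed change, e.g. taken from a pull request against your IaC repository.
proposed = {"environment": "production", "instance_count": 1, "tls": True}
print(validate_change(proposed))
# ['production must run at least 2 instances']
```

Running such checks in a pipeline gives you an automated second pair of eyes alongside the human four-eyes review.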
4. Piling up resources instead of organizing to trace back cost
As your cloud environment grows, so does its financial complexity. When you spend thousands of euros on resources across various applications and their DTAP environments, it’s important to maintain control over the costs.
To stay in control of your costs and gain valuable insights:
- Set up a subscription per environment
- Enforce naming conventions
- Apply tags to be able to break down costs per application, environment, server, customer or whatever your business requires
- Enable chargeback
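Once every resource carries tags, breaking down a cost export becomes a simple aggregation. As a hedged sketch (the resources, tags and figures below are invented; real cost exports from Azure, AWS or GCP have many more columns):

```python
from collections import defaultdict

# Invented cost-export lines; real exports come from your provider's billing API.
cost_lines = [
    {"resource": "vm-web-01", "cost": 120.0, "tags": {"app": "shop", "env": "prod"}},
    {"resource": "vm-web-02", "cost": 120.0, "tags": {"app": "shop", "env": "test"}},
    {"resource": "db-crm-01", "cost": 340.0, "tags": {"app": "crm",  "env": "prod"}},
]

def cost_by_tag(lines, tag):
    """Total the cost per value of the given tag; untagged resources stand out."""
    totals = defaultdict(float)
    for line in lines:
        totals[line["tags"].get(tag, "untagged")] += line["cost"]
    return dict(totals)

print(cost_by_tag(cost_lines, "app"))   # {'shop': 240.0, 'crm': 340.0}
print(cost_by_tag(cost_lines, "env"))   # {'prod': 460.0, 'test': 120.0}
```

The same breakdown per tag is what makes chargeback to individual teams or customers possible in the first place.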
5. Peak performance instead of leveraging cost saving measures
The cloud charges only for usage. This can be a continuous financial advantage, as long as your environment uses only what it needs. For most organizations, it’s unnecessary to keep all virtual machines running 24/7 at peak performance.
Stop virtual machines when you don’t need them and start them when you do. Spin up extra capacity when you need it, and shrink when you don’t. With the right insights and tooling, these processes can easily be automated.
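The heart of such automation is often just a schedule. As an illustrative sketch (the office hours are an assumption, and in practice this logic would run inside a scheduled automation service rather than a script), deciding whether a non-production VM should be up might look like:

```python
from datetime import datetime

def should_run(now: datetime, start_hour: int = 7, stop_hour: int = 19) -> bool:
    """Keep development VMs up on weekdays between start_hour and stop_hour."""
    is_weekday = now.weekday() < 5              # Monday=0 .. Friday=4
    in_office_hours = start_hour <= now.hour < stop_hour
    return is_weekday and in_office_hours

print(should_run(datetime(2024, 3, 6, 10)))    # Wednesday 10:00 -> True
print(should_run(datetime(2024, 3, 9, 10)))    # Saturday 10:00 -> False
print(should_run(datetime(2024, 3, 6, 22)))    # Wednesday 22:00 -> False
```

With VMs stopped evenings and weekends, a development environment on this schedule runs roughly 60 of 168 weekly hours, about a third of the always-on cost.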
Understanding (and accurately predicting) what resources you need and when is essential to staying in control of your costs. You can even get a discount by reserving virtual machines months or years in advance.
Explore whether you can use spot instances (cheap, but the cloud provider can shut them down at any time) or serverless principles. With the latter, you pay exactly for what you use.
6. Paralyzing governance instead of enabling
Governance should enable your organization, not disable it. We’ve seen overenthusiastic cloud architects and security engineers lock down an entire platform “with great success”, and in doing so block anyone from deploying anything without prior consent.
Facilitate your developers with a playground environment with clear set budgets. Make sure that environment is completely isolated from your corporate network and data, so you don’t have to worry about potential data losses while allowing them to experiment and grow. Within Sentia, our developers also leverage these playgrounds for knowledge-sharing opportunities, such as Learning Friday sessions, webinars and workshops.
7. Focusing on just technology instead of involving people and processes
Oftentimes, organizations still carry the assumption that focusing on IT is enough. However, even with flawlessly performing technology, an organization’s digital transformation can only really succeed when processes are well-organized and workable for people. Not involving business units, Finance, Operations, HR, Security and Compliance will set you up for failure.
Security and Compliance
Many times, we see the Security and Compliance department struggle the most with the rapid digital changes. Previously, they relied on traditional controls, frameworks and physical layered networks. But the vast and flexible cloud environment brings new dimensions to data protection. Safeguarding a “software-defined cloud” is a new and challenging concept. Automate security controls and utilize off-the-shelf security policies and baselines, provided by your cloud partner, to ensure a safe transition.
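The "automate security controls" advice usually takes the form of policy as code: express the baseline as data and check every resource definition against it. A hedged sketch under invented rules (real baselines, such as the off-the-shelf policies your cloud partner provides, are far richer):

```python
# An illustrative security baseline, not a real provider policy set.
BASELINE = {
    "public_access": False,      # storage must not be publicly readable
    "encryption_at_rest": True,  # disks and databases must be encrypted
}

def violations(resource: dict) -> list[str]:
    """List every baseline rule the resource definition fails to meet."""
    return [
        rule for rule, required in BASELINE.items()
        if resource.get(rule) != required
    ]

storage = {"name": "logs-archive", "public_access": True, "encryption_at_rest": True}
print(violations(storage))
# ['public_access']
```

Run against every resource in your Infrastructure as Code repository, a check like this turns the security baseline from a document into an enforced gate.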
Finance
Unpredictable costs aren’t the average Finance employee’s favorite topic. “What do you mean, you don’t know how much it will cost; we’re agile and we’ll find out as we go?” Give them clear insight into how costs are calculated, as well as predicted expenditure over the coming months. Make sure to include their needs in your cloud strategy and SLAs, for instance with monitoring dashboards and weekly updates.
Developers
Even the most technical developer can struggle with the shift to the cloud, as it requires a new approach to truly leverage its power. Especially in the beginning, without proper guidance, they can feel limited rather than enabled. Provide best (and worst) practices and adoption workshops, so they too get a sprint start.
It can take a while to get every involved party on board. One of the lead developers at an Independent Software Vendor customer strongly opposed a cloud transition. He feared planning and refactoring a mission critical application would take at least three years. In reality, with trial and error, we were able to refactor the main modules of the application together within three months. With that, we did not just “lift and shift” but replatformed to containers and webapps and gained a performance increase of about 170%.
While a solid cloud strategy is essential, sometimes it’s also important to just dive in. Thanks to our “Designed for failure” approach, with IaC and testing environments, we were able to successfully refactor and improve a complex application. Seeing proof, rather than concepts, has since turned that lead developer into one of the biggest advocates for fully cloud-based IT.
8. Relying on technology to “just work” instead of expecting the unexpected
When moving to the cloud, especially to platform services, you will often find that the overall availability of the services is already much higher than in your own data centre. Infrastructure is spread across availability zones and can even be set up cross-region. Nevertheless, don’t just rely on the cloud’s extraordinary availability and call it a day.
Design and build your applications for failure. When you calculate for components to break or lose availability, you guarantee resilience in the long run. For instance, focus on minimizing the impact of a broken function or module. Decouple your dependencies, cache where you can, and introduce retry-mechanisms so you can rely on auto-healing when things don’t work. Shift your focus from “Keeping the infrastructure up and running” to optimizing your applications to be resilient under any circumstance.
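The retry mechanism mentioned above is a small amount of code with outsized effect: a short outage in a dependency no longer cascades into a failure of your own application. A minimal sketch with exponential backoff (the failing service here is simulated; real code would also add jitter and a cap on the delay):

```python
import time

def with_retries(call, attempts=4, base_delay=0.01):
    """Run call(); on failure, wait base_delay * 2**attempt and try again."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                      # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)

failures = {"left": 2}                     # simulate two transient failures

def flaky_service():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("temporarily unavailable")
    return "ok"

print(with_retries(flaky_service))         # ok, after two retried failures
```

Combined with decoupled dependencies and caching, retries let the platform's auto-healing do its work instead of surfacing every hiccup to your users.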
Would you like to benefit from our broad experience in moving enterprise architectures to the cloud and avoid many more mistakes? Contact us to discuss your cloud plans.