Three must-haves for your multi-cloud architecture

Your preflight checklist should have centralized account management, resource management, and asset normalization.

Most cloud architects are finding that their world is suddenly heterogeneous. Where once we could focus on a single public cloud provider, today we may have as many as four in the mix. The architecture patterns have moved from intra-cloud to inter-cloud, which is where complexity and risk come in.

As a result, architects, including myself, have put together processes to ensure that most bases are covered—much like a pilot uses a preflight checklist. These include items such as cross-cloud governance, security, operations, etc. However, a few things that are vital for success are often forgotten. Here are my top three:

Cross-cloud, centralized user account management. 

If you’re looking for real success with multicloud, you need to treat the group of public cloud providers as a single cloud as much as possible. There should be a common user management layer to add, remove, or change user accounts from a single point of control capable of talking to each cloud natively.

Besides making user management much less demanding, centralized account management improves security by making the identities presented to each cloud provider consistent. Identity and access management systems will be more consistent as well, and cloud security will be, well, more secure.
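To make the idea concrete, here is a minimal sketch of what a single point of control fanning out to per-cloud identity adapters could look like. The adapter interface, class names, and in-memory stand-ins are all hypothetical; a real implementation would call each provider's native identity API behind the same interface.

```python
from abc import ABC, abstractmethod

# Hypothetical adapter interface: one adapter per cloud, each speaking
# that cloud's native identity API. Names here are illustrative only.
class IdentityAdapter(ABC):
    @abstractmethod
    def create_user(self, username: str) -> None: ...
    @abstractmethod
    def delete_user(self, username: str) -> None: ...

class InMemoryAdapter(IdentityAdapter):
    """Stand-in for a real cloud-specific adapter."""
    def __init__(self, cloud: str):
        self.cloud = cloud
        self.users: set[str] = set()
    def create_user(self, username: str) -> None:
        self.users.add(username)
    def delete_user(self, username: str) -> None:
        self.users.discard(username)

class CentralUserManager:
    """Single point of control that fans every change out to all clouds."""
    def __init__(self, adapters: list[IdentityAdapter]):
        self.adapters = adapters
    def add_user(self, username: str) -> None:
        for adapter in self.adapters:
            adapter.create_user(username)
    def remove_user(self, username: str) -> None:
        for adapter in self.adapters:
            adapter.delete_user(username)

aws = InMemoryAdapter("aws")
azure = InMemoryAdapter("azure")
mgr = CentralUserManager([aws, azure])
mgr.add_user("alice")  # one call, identical identity in every cloud
```

The design point is that the consistency comes from the fan-out layer, not from hoping each cloud's console was updated the same way.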

Cross-cloud resource management. 

This category can include AIops tools, cloud management platform tools, or anything that monitors the use of resources such as storage and compute (including provisioning). Most important is automated de-provisioning, which returns resources to the pool and stops the cloud provider from billing for them.

I get a call a month from somebody in a panic because they allocated a huge amount of cloud resources and never shut them down. The bills are enormous, and it’s tough to get the cloud providers to forgive them, mistake or no. Multicloud means more to keep track of and a greater chance of costly mistakes. 
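A minimal sketch of the de-provisioning idea: track when each resource was last used and reclaim anything idle past a TTL. The resource names and the `deprovision` hook are hypothetical; in practice the hook would call the provider's delete/release API.

```python
import time

# Illustrative policy: anything unused for longer than the TTL is reclaimed
# so the provider stops billing for it. The TTL is an assumed example value.
IDLE_TTL_SECONDS = 7 * 24 * 3600  # one week

def find_idle(resources: dict[str, float], now: float,
              ttl: float = IDLE_TTL_SECONDS) -> list[str]:
    """Return resource IDs whose last-used timestamp is older than the TTL."""
    return [rid for rid, last_used in resources.items() if now - last_used > ttl]

def deprovision(resources: dict[str, float], rid: str) -> None:
    """Stand-in for the cloud provider's delete/release API call."""
    resources.pop(rid, None)

now = time.time()
fleet = {
    "vm-prod-web": now - 3600,                  # used an hour ago: keep
    "vm-forgotten-test": now - 30 * 24 * 3600,  # idle a month: reclaim
}
for rid in find_idle(fleet, now):
    deprovision(fleet, rid)
```

Running a sweep like this on a schedule is what turns "somebody forgot to shut it down" from a five-figure bill into a non-event.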

Normalization of assets.

Let’s say that you’re using the same database brand in each cloud within your multi-cloud. This is neither cost- nor operationally efficient: you’re likely paying more than you should in license costs, and one cloud will probably run the same resources for much less than the others.

IT departments often think that using the same database in more than one cloud is redundancy—not keeping all of your data eggs in the same public cloud basket. If one cloud provider “breaks bad” on you, you can move to the same database on another cloud.

Although I’m certainly down with risk reduction, it may not be the best approach to run production databases on the same technology and brand in more than a single cloud provider. Other methods reduce risk just as well, are less complex, and cost less to run. Again, this is just a checklist item to define better ways to solve the same set of business problems.

Building multi-cloud is not easy. I suspect we’ll get much better at it during the next few years by learning from others’ mistakes. For now, let’s avoid being the ones who make the mistakes.

The multi-cloud challenge: Building the future everywhere

Fear of the cloud has evaporated. Instead, most companies now use at least several public clouds, from AWS to Azure to Salesforce to Slack. Hence the ascendance of the term “multi-cloud,” which now encompasses not just the management of IaaS and SaaS clouds but also private clouds of virtualized on-prem resources.

The low barrier to entry of the cloud has been both a blessing and a curse. The ability to simply open a cloud account and start using an application or building one has delivered unprecedented agility. But it also makes it easy for stakeholders to go off in their own directions, sometimes with too little regard for cost or security risks.

Multicloud’s problem is as old as IT: the problem of governance. For some, that’s an ugly word because it smacks of a bureaucracy that stands squarely in the way of getting things done, as in: Fill out your request in triplicate, and you’ll get a couple of cloud VMs in six weeks if you’re lucky. But few would advocate anarchy, either – you don’t want developers running around building cloud applications on a whim using, say, pricey AI/ML services and live customer data.

In a recent CIO Think Tank, Thomas Sweet, vice president of IT solutions at GM Financial, introduced a well-chosen phrase: “minimum viable governance.” Instead of bashing people with prohibitions or elaborate approval processes, give them lightweight cloud “guardrails” to prevent duplicate efforts or poor cloud security. Couple those with cost ceilings and a catalogue of pre-approved cloud services, and developers or enterprising LoB managers have the freedom they need to experiment and innovate.

In particular, the big three IaaS clouds – AWS, Google Cloud Platform, and Microsoft Azure – provide environments where innovation can flourish, in part because they’re cauldrons of emerging technology, from serverless computing to AR/VR app dev platforms. For many organizations, “multi-cloud” really refers to adopting two or more big-three IaaS clouds, mainly because the second or third cloud offers a new or better cloud service others lack. Wrapping guardrails around that diversity is an endless governance challenge.

But that’s where the future is pointing: toward a world where we assemble hundreds of cloud services from multiple providers into the applications our customers and we need, iterating and innovating as we go. This collection of articles from InfoWorld, CIO, Computerworld, CSO, and Network World explains how forward-looking organizations are moving toward that goal and the lessons they’re learning along the way.

Using OPA for multi-cloud policy and process portability

As multi-cloud strategies become fully mainstream, companies and dev teams must figure out how to create consistent approaches among cloud environments. Multicloud itself is ubiquitous: Among companies in the cloud, a full 93% have multi-cloud strategies—meaning they use more than one public cloud vendor such as Amazon Web Services, Google Cloud Platform, or Microsoft Azure. Furthermore, 87% of those companies have a hybrid cloud strategy, mixing public cloud and on-premises environments.

Companies move to the cloud in the first place to improve the performance, availability, scalability, and cost-effectiveness of computing, storage, network, and database functions. Organizations then adopt a multi-cloud strategy largely to avoid vendor lock-in.

But multicloud also presents a second alluring possibility, an extension of that original cloud-native logic: the ability to abstract cloud computing architectures so they can port automatically and seamlessly (or at least quickly) between cloud providers to maximize performance, availability, and cost savings—or at least maintain uptime if one cloud vendor happens to go down. Cloud-agnostic platforms like Kubernetes, which run the same in any environment—whether that’s AWS, GCP, Azure, private cloud, or anywhere else—offer a tantalizing glimpse of how companies could achieve this kind of multi-cloud portability.

But while elegant in theory, multi-cloud portability is complicated in practice. Dependencies like vendor-specific features, APIs, and difficult-to-port data lakes make true application and workload portability a complicated journey. In practice, multi-cloud portability only really works—and works well—when organizations achieve consistency across cloud environments. For that, businesses need a level of policy abstraction that works across said vendors, clouds, APIs, and so on—enabling them to easily port skills, people, and processes across the cloud-native business. While individual applications may not always port seamlessly between clouds, the organization’s overall approach should.

Using OPA to create consistent policy and processes across clouds

One of the tools that have become popular, precisely because it’s domain agnostic, is Open Policy Agent (OPA). Developed by Styra and donated to the Cloud Native Computing Foundation, OPA is an open-source policy engine that lets developer teams build, scale, and enforce consistent, context-aware policy and authorization across the cloud-native realm. Because OPA lets teams write and enforce policies across any number of environments, at any number of enforcement points—for cloud infrastructure, Kubernetes, microservices APIs, databases, service meshes, application authorization, and much more—it allows organizations to take a portable approach to policy enforcement across multi-cloud and hybrid cloud environments.

Moreover, as a policy-as-code tool, OPA enables organizations to take the policies that otherwise live in company wikis and people’s heads and codify them into machine-processable policy libraries. Policy as code not only lets organizations automatically enforce policy in any number of clouds but also shift left and inject policies upstream, closer to the development teams working across clouds, to catch and prevent security, operational, and compliance risks sooner.
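OPA policies themselves are written in Rego; the Python sketch below only illustrates the policy-as-code idea in spirit: rules that once lived in a wiki become functions a pipeline can run automatically against any resource, in any cloud. The rule names and input shape are invented for illustration.

```python
# Each rule is a codified policy: it takes a resource description and
# returns a list of violations. These rules and fields are hypothetical.

def deny_public_storage(resource: dict) -> list[str]:
    if resource.get("type") == "storage_bucket" and resource.get("public", False):
        return ["storage buckets must not be public"]
    return []

def require_owner_tag(resource: dict) -> list[str]:
    if "owner" not in resource.get("tags", {}):
        return ["every resource needs an 'owner' tag"]
    return []

# The "policy library": machine-processable, versionable, reusable anywhere.
POLICY_LIBRARY = [deny_public_storage, require_owner_tag]

def evaluate(resource: dict) -> list[str]:
    """Run every codified rule and collect all violations."""
    return [v for rule in POLICY_LIBRARY for v in rule(resource)]

violations = evaluate({"type": "storage_bucket", "public": True, "tags": {}})
```

Because the same library runs in CI, in admission control, and on a developer laptop, the enforcement point can move without the policy changing.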

Pairing OPA with Terraform and Kubernetes

As one example, many developers now use OPA in tandem with infrastructure-as-code (IaC) tools like Terraform and AWS CDK. Developers use IaC tools to make declarative changes to their vendor-hosted cloud infrastructure—describing the desired state of how they want their infrastructure configured and letting Terraform figure out which changes need to be made. 

Developers then use OPA, a policy-as-code tool, to write policies that validate Terraform's changes and test for misconfigurations or other problems before they are applied to production.

At the same time, OPA can automatically approve routine infrastructure changes to cut down on the need for manual peer review (and the potential for human error that comes with it). This creates a vital safety net and sanity check for developers and allows them to experiment risk-free with different configurations. While the cloud infrastructure itself is not portable between vendors, the approach is, by design.
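As a concrete illustration of validating infrastructure changes before they are applied: in practice OPA evaluates the JSON produced by `terraform show -json`, but a simplified check over a plan-like structure shows the shape of the idea. The rule below (rejecting security group rules open to the whole internet) and the minimal plan structure are illustrative assumptions, not the exact Terraform schema.

```python
# Sketch of a pre-apply plan check. The dict loosely mirrors Terraform's
# JSON plan output ("resource_changes" with a "change.after" state); a real
# setup would express this rule in Rego and run it in OPA.

def violations_in_plan(plan: dict) -> list[str]:
    """Flag any security group rule opened to 0.0.0.0/0 before apply."""
    problems = []
    for change in plan.get("resource_changes", []):
        after = (change.get("change") or {}).get("after") or {}
        if change.get("type") == "aws_security_group_rule" \
                and "0.0.0.0/0" in after.get("cidr_blocks", []):
            problems.append(f"{change['address']}: open to 0.0.0.0/0")
    return problems

plan = {"resource_changes": [{
    "address": "aws_security_group_rule.ssh",
    "type": "aws_security_group_rule",
    "change": {"after": {"cidr_blocks": ["0.0.0.0/0"]}},
}]}
found = violations_in_plan(plan)
```

An empty violations list lets the change auto-approve; a non-empty one routes it to human review, which is exactly the safety-net-plus-automation split described above.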

Similarly, developers also use OPA to control, secure and operationalize Kubernetes across clouds and even across various Kubernetes distributions. Kubernetes has become a standard for deploying, scaling and managing fleets of containerized applications. Just as Kubernetes is portable, so, too, are the OPA policies that you run on top of it.

There are many Kubernetes use cases for OPA. One popular use case, for example, is to use OPA as a Kubernetes admission controller to ensure containers are deployed correctly, with appropriate configuration and permissions. Developers can also use OPA to control Kubernetes ingress and egress decisions, for example, writing policies that prohibit ingresses with conflicting hostnames to ensure that applications never steal each other’s internet traffic. Most important for multicloud, perhaps, is the ability to ensure that each Kubernetes distribution, across clouds, is provably in compliance with enterprise-wide corporate security policies.
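The conflicting-hostname check can be sketched as a simple admission decision: reject a new Ingress whose hostname is already claimed. In a real cluster this rule would be written in Rego and evaluated by OPA at admission time; the Python below only mirrors the logic, using a trimmed-down Ingress shape.

```python
# Minimal Ingress-like objects: {"spec": {"rules": [{"host": "..."}]}}.
# The structure is a simplified stand-in for the real Kubernetes resource.

def existing_hosts(ingresses: list[dict]) -> set[str]:
    """Collect every hostname already claimed in the cluster."""
    return {rule["host"] for ing in ingresses
            for rule in ing["spec"].get("rules", [])}

def admit(new_ingress: dict, ingresses: list[dict]) -> bool:
    """Allow the new Ingress only if none of its hosts are already taken."""
    taken = existing_hosts(ingresses)
    return all(rule["host"] not in taken
               for rule in new_ingress["spec"].get("rules", []))

current = [{"spec": {"rules": [{"host": "shop.example.com"}]}}]
conflicting = {"spec": {"rules": [{"host": "shop.example.com"}]}}
fresh = {"spec": {"rules": [{"host": "blog.example.com"}]}}
```

Because the rule depends only on the Ingress objects themselves, the same policy runs unchanged on any distribution in any cloud.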

Creating standard cloud-native building blocks

Before companies can port applications seamlessly across public clouds, they must first create standard building blocks for developers across every cloud-native environment. Along these lines, developers use OPA to create policy and automate the enforcement of security, compliance, and operations standards across the CI/CD pipeline. This enables repeatable scale for any multi-cloud deployment while speeding development and reducing manual errors.

OPA’s enabling of policy as code means that companies can use tools like Terraform for their public clouds and OPA for policy, Kubernetes for container management and OPA for policy, plus any number of microservices API and app authorization tools and OPA for policy, while running those same OPA policies in the CI/CD pipeline, or on developers’ laptops.

In short, organizations need not waste any time reverse-engineering applications for multi-cloud portability. Instead, they can focus on building a repeatable process, using common skills, across the entire cloud-native stack.

Multicloud is not really about clouds anymore.

Most think of multicloud just how it sounds: an architecture that leverages multiple public and/or private clouds simultaneously, in support of best-of-breed cloud services. In other words, we use multicloud as a path to access the cloud services that are the best fit.

As multicloud becomes the norm, I’ve observed that the design and deployment of multicloud-based architectures are not really about the underlying clouds. There are a few reasons for this:

First, technology to manage multiclouds should sit above and separate from the cloud-native resources it is managing. 

It does not matter if the tools are for AIops, identity and access management, network monitoring, metadata management, etc. When deploying multiclouds, it’s always better to leverage technology that spans the clouds and is not limited to operating a single branded platform.   

The past common pattern was to leverage cloud-native tools and technology for each cloud services provider in a multicloud configuration, but this means that your multicloud deployment will have too many parts. Using specific tools for each specific cloud leads to too much complexity, and the operational costs of running a multi-cloud deployment with excessive complexity will be high.

Second, cloud services providers are becoming abstractable. 

We can view storage systems, databases, platforms, or even security systems through common interfaces that remove us from dealing with all of the cloud-native interfaces for the specific providers in our multicloud. This has arisen in the past few years and did not work well until this year.  

The idea is that if you can look at several very different cloud service providers through abstraction (such as abstraction of cloudops using AIops tools, or abstraction of development and security using devops tools), you’ll be able to leverage those clouds as similar resources that span providers. A common notion of data storage, process integration, and orchestration makes multicloud much simpler and thus more valuable.

The focus on multi-cloud shouldn’t be about how individual cloud services providers play a role; it should be about the software, tools, and other technology that sit above those cloud resources to make multiclouds viable for most enterprises. When multicloud is no longer about clouds and becomes about configuring technology into a multicloud solution—that’s something new.

Frequently Asked Questions

What is a multi-cloud strategy?

A multi-cloud strategy is an approach that operates any combination of private, public, and hybrid clouds. An organization may have multiple public and private clouds, or multiple hybrid clouds, all either connected or not.

What is a multi-cloud architecture?

A multi-cloud architecture is one that includes two or more clouds of the same type. Some organizations use multiple private clouds to deliver services, while others use multiple public clouds from different vendors – these are both examples of multi-cloud architectures.

What are the benefits of a multi-cloud strategy?

A multi-cloud strategy gives companies the freedom to use the best possible cloud for each workload. In contrast, single-cloud stacks impose a significant cost: where there could be greater power drawn from the unique capabilities of every cloud, there is instead the limitation of a single proprietary system.
