What is Cloud Computing? A Beginners Guide

This article is a beginner's guide for anyone who wants to understand cloud computing, its benefits, and the different cloud computing layers.

Before moving further, let's first understand:

What is Cloud Computing?

Cloud Computing can be defined as the on-demand delivery of computing services over the internet.

These services can be broadly classified into:

  1. Infrastructure as a Service (IaaS)
  2. Platform as a Service (PaaS)
  3. Software as a Service (SaaS)

We will look at these services in more detail later in this article.

Benefits of using Cloud Computing

Flexibility: Businesses with fluctuating bandwidth demands need the flexibility of cloud computing. If you need high bandwidth, you can scale your cloud capacity up; when you no longer need it, you can scale back down. There is no need to be tied to an inflexible, fixed-capacity infrastructure.

Disaster Recovery: Cloud Computing provides robust backup and recovery solutions that are hosted in the cloud. Due to this, there is no need to spend extra resources on homegrown disaster recovery. It also saves time in setting up disaster recovery.

Automatic Software Updates: Most cloud providers apply software updates automatically. This removes the recurring task of installing new software versions and constantly keeping up with the latest releases.

Low Capital Expenditure: Cloud computing uses a pay-as-you-go model. This means very little upfront capital expenditure; instead, there is a variable payment based on usage.

Collaboration: In a cloud environment, applications can be shared between teams. This increases collaboration and communication among team members.

Remote Work: Cloud solutions provide the flexibility of working remotely. Employees are not tied to an on-site workstation; one can connect from anywhere and start working.

Security: Well-managed cloud environments are often more secure than typical on-site setups. Data stored on local servers and computers is prone to security attacks, and such setups tend to have more loose ends; cloud providers invest heavily in providing a secure working environment for their users.

Document Control: Once the documents are stored in a common repository, it increases the visibility and transparency among companies and their clients. Since there is one shared copy, there are fewer chances of discrepancies.

Competitive Pricing: The cloud market has multiple players competing with one another, which keeps pricing attractive. This often works out much cheaper than the alternatives.

Environment-Friendly: Cloud computing also conserves environmental resources, since shared infrastructure means servers and bandwidth are not locked up by a single organization and left sitting idle.

On-demand computing in Cloud Computing

On-demand computing is a recent model in enterprise systems and is closely related to cloud computing: IT resources are provided on demand by a cloud provider.

In an enterprise system, the demand for computing resources varies from time to time. In such a scenario, On-demand computing makes sure that servers and IT resources are provisioned to handle the increase/decrease in demand.

A cloud provider maintains a pool of resources: networks, servers, storage, applications, and services. This pool can serve the varying resource and computing demands of many enterprise clients. Several concepts, such as grid computing, utility computing, and autonomic computing, are similar to on-demand computing.

This is currently the most popular trend among computing models.
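The on-demand idea above can be sketched in a few lines of Python. This is an illustrative toy, not any provider's real API: `ResourcePool` and `capacity_per_server` are made-up names, and real providers add cooldowns, quotas, and billing on top.

```python
import math

class ResourcePool:
    """Toy model of a provider pool that grows and shrinks with demand."""

    def __init__(self, capacity_per_server=100):
        self.capacity_per_server = capacity_per_server
        self.servers = 1  # always keep at least one server provisioned

    def rebalance(self, current_demand):
        """Provision or release servers so capacity tracks current demand."""
        self.servers = max(1, math.ceil(current_demand / self.capacity_per_server))
        return self.servers

pool = ResourcePool()
print(pool.rebalance(350))  # demand spike: scales up to 4 servers
print(pool.rebalance(40))   # demand drops: scales back down to 1
```

The client pays only for the servers provisioned at any moment, which is exactly the pay-as-you-go benefit described earlier.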

Different layers of Cloud computing

The three main layers of Cloud computing are as follows:

  1. Infrastructure as a Service (IaaS): IaaS providers give low-level abstractions of physical devices. Amazon Web Services (AWS) is an example of IaaS: AWS provides EC2 for compute, S3 buckets for storage, and so on. The resources in this layer are mainly hardware: memory, processor speed, network bandwidth, etc.

An IaaS provider can offer physical resources, virtual resources, or both. These resources are used to build a cloud.

The IaaS provider also handles security and backup recovery for these services. The main resources in IaaS are servers, storage, routers, switches, and other related hardware.

  2. Platform as a Service (PaaS): PaaS providers offer managed environments such as Rails or Django stacks. A good example of PaaS is Google App Engine. These are environments in which developers can build sophisticated software with ease.

Developers just focus on developing software, while the PaaS provider handles scaling and performance.

A PaaS provider offers a platform on which clients can develop, run and manage applications without building the infrastructure.

In PaaS, clients save time by not having to create and manage the infrastructure environment associated with the app they want to develop.

  3. Software as a Service (SaaS): A SaaS provider offers an actual working software application to clients. Salesforce and GitHub are two good examples of SaaS. They hide the underlying details of the software and provide only an interface to work with the system. Behind the scenes, the software version can be changed easily.

The main benefit of SaaS is that a client can add more users on the fly based on its current needs, and the client does not need to install or maintain any software on its premises to use it.

Different deployment models in Cloud computing

Cloud computing supports the following deployment models:

  1. Private Cloud: Some companies build their own private cloud. A private cloud is a fully functional platform owned, operated, and used by only one organization.

The primary reason for a private cloud is security: many companies feel more secure with one. Other reasons for building a private cloud are strategic decisions or control over operations.

There is also the concept of a Virtual Private Cloud (VPC). In a VPC, the private cloud is built and operated by a hosting company but is used exclusively by one organization.

  2. Public Cloud: Some companies offer cloud platforms that are open for use and deployment by the general public and by large companies alike, e.g. Google Apps and Amazon Web Services.

Public cloud providers focus on layers and applications such as cloud applications and infrastructure management. In this model, resources are shared among different organizations.

  3. Hybrid Cloud: The combination of public and private clouds is known as a hybrid cloud. This approach provides the benefits of both, making it a very robust platform.

A client gets the functionalities and features of both cloud platforms. With a hybrid cloud, an organization can create its own cloud and also hand control of parts of it to a third party.

Why do companies now prefer Cloud Computing architecture over Client-Server architecture?

In Client-Server architecture, there is one-to-one communication between client and server. The server is often in an in-house data centre, and clients access that same server from anywhere. If a client is at a remote location, communication can have high latency.

In Cloud Computing, there can be multiple servers in the cloud, with a cloud controller that directs requests to the right server node. Clients can access a cloud-based service from any location and be directed to the server nearest to them.

Another reason for Cloud computing architecture is high availability: since there are multiple servers behind the cloud, if one server goes down, another can serve clients seamlessly.
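The controller behaviour described above can be sketched as follows. The node names, regions, and latency figures are invented for illustration; a real controller would use health checks and live latency measurements.

```python
# Toy cloud controller: route each request to the nearest healthy server node.

def route_request(client_region, nodes):
    """Pick the healthy node with the lowest latency to the client's region."""
    healthy = [n for n in nodes if n["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy nodes available")
    return min(healthy, key=lambda n: n["latency_ms"][client_region])["name"]

nodes = [
    {"name": "eu-node",  "healthy": True,  "latency_ms": {"eu": 10, "us": 90}},
    {"name": "us-node",  "healthy": False, "latency_ms": {"eu": 90, "us": 12}},
    {"name": "us-node2", "healthy": True,  "latency_ms": {"eu": 95, "us": 15}},
]

# us-node is down, so a US client is transparently served by the next-best node.
print(route_request("us", nodes))  # us-node2
print(route_request("eu", nodes))  # eu-node
```

Note how the failed `us-node` is simply skipped: this is the high-availability property in miniature.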

Cloud Transformation: Application Rearchitect and Rewrite approaches in the cloud

Cloud adoption has been high for the last few years, and enterprises of every size have aggressively started their cloud journeys.

Enterprises are working to move key applications to the public cloud using the holistic migration approaches defined by cloud providers in their cloud adoption frameworks, or recommended by cloud practitioners.

Most migrations follow Rehosting ("Lift and Shift") or Refactoring, where the existing application is migrated as-is or with minimal code and configuration changes. This gets rid of on-premise data-centre overhead but adopts only a few cloud benefits.

Very few enterprises consider modernizing their workloads to exploit the majority of cloud features. On this cloud transformation path we see approaches like Rewrite (Rebuild) and Rearchitect.

These approaches require significant effort to make the application fully cloud-native. Teams must understand the real business functionality of the legacy application as well as its monolithic architecture, complexity, integration points, data model, and so on. With that detailed analysis, business requirements can be translated into scalable, secure, resilient, and reliable solutions that exploit cloud features to the fullest and migrate seamlessly to the cloud.

Rearchitect: Choose this migration approach when business functionality does not need to change and the current application can be split into small services, or the monolithic application can be hosted using containerization technology.

The major benefits of the rearchitect approach are reuse of the existing code base and lower operational cost. However, it uses only a limited set of cloud features, so the OPEX benefit is smaller than with the Rewrite approach. Example: deploying a monolithic application in Docker containers with minimal design changes.

Rewrite: Rewriting an application means a complete change in architecture style. The monolithic application is replaced by a new architecture style such as microservices, using a CaaS (Container as a Service) approach or a serverless architecture.

Why Rewrite a legacy application?

  • Retiring legacy technologies like VB, Excel macros, Lotus, COBOL, etc.
  • Rewriting the application to incorporate major business-functionality changes
  • Caveats around the current application's scalability, availability, security, and maintenance
  • Achieving an omnichannel experience is challenging in a legacy application architecture
  • A complex data model and a tangled, bloated application design

In the Rewrite scenario, the current code base is completely scrapped and the application is rewritten from scratch, incorporating business-functionality changes and adopting the latest architecture styles, programming languages, design patterns, and principles. Methodologies like the Twelve-Factor App were created for exactly this kind of development.
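One Twelve-Factor practice mentioned above (Factor III: store config in the environment) can be sketched in Python. The variable names `DATABASE_URL` and `PORT` are conventional examples, not requirements:

```python
import os

def load_config():
    """Read deploy-specific settings from the environment, with safe defaults.

    The code base stays identical across dev, staging, and production;
    only the environment differs per deploy.
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        "port": int(os.environ.get("PORT", "8080")),
    }

os.environ["PORT"] = "9000"   # a deploy changes config without touching code
print(load_config()["port"])  # 9000
```

Keeping config out of the code is what lets the same rewritten application run unchanged in containers or on a serverless platform.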

Cloud-native development can adopt this methodology to build best-in-class cloud-native applications, particularly SaaS apps.

Benefits of rewriting applications

Cloud-native approaches such as containerization or serverless computing are usually more cost-effective than migrating to VMs (Azure VMs, AWS EC2, or any cloud provider's IaaS platform).

  • Streamlined deployment through DevOps practices
  • Better resilience
  • Built-in scalability
  • Enhanced monitoring and logging through cloud capabilities
  • Enterprise-scale security and compliance
  • Full advantage of cloud capabilities

Enterprises should perform an IT-landscape portfolio assessment to capture all server and infrastructure stacks, technology stacks, dependencies, and the business functionality of applications, engaging the right cloud adoption assessment specialists and cloud practitioners to determine which of the "R"s each application fits into.

The cloud provides an opportunity to modernize business-critical applications by leveraging cloud-native capabilities and the latest architecture styles, principles, and design methodologies.

Transition to Public Cloud?! Use public networks!

Most organizations are busy with digital transformation and, in most cases for the same underlying reasons, with a transition to the cloud or the use of cloud functionality, and they have a cloud strategy. Unfortunately, most organizations apply "traditional" thinking to that cloud strategy, which undermines their success.

In the traditional days, there was a data centre with four walls and a minimum number of entry points, if possible one or maybe two. All incoming traffic went through the firewall. Things from outside were dangerous, and things within the walls were safe.

Things from inside were trusted; things from outside were checked, filtered, and considered untrusted.

Some organizations had dual or twin data centres, or some other concept involving more than one location. In that case the network connections between them would be protected; or, better said, the walls were extended beyond the single data centre to cover the whole area, creating a new "safe inside" and "unsafe outside" world.

Of course, this already led to discussion in the past, so in some cases network segments were created, VLANs were introduced, and the physical network was split into smaller chunks. Traffic was restricted between these segments, though in general the chunks were still quite large.

With the rise of the public cloud, many organizations still apply this model and extend their network "into the cloud", which is a very bad idea. They use products like Direct Connect and ExpressRoute to do this, thinking it is convenient: the walls are simply moved, and applications and functionality in the cloud can easily connect with those on-premises and vice versa.

But this approach seriously undermines the goals of the cloud strategy. The first problems with the traditional approach:

Attackers, in most cases, come from "inside": an employee, a contractor, a subcontractor.

There are no longer "one or two points of access"; there are multiple proxies, VPNs, and networks, and there may even be a weakly protected WiFi connection that somehow has access to the network in order to use an application or reach a resource.

Once "in", there are no significant new boundaries to further access.

Scaling is difficult with a multi-cloud strategy: pulling in a new "segment" such as a new cloud service provider is not a walk in the park and brings additional expense.

Apart from the previous point, scaling is difficult if all traffic must travel over "private routes" such as an ExpressRoute or Direct Connect connection. It will soon become too much, because the amount of data exchanged between functions, applications, and other endpoints is growing exponentially.

Agility and flexibility were primary goals, yet setting up or moving these "outside walls" takes serious effort and time.

Giving a subcontractor access to an application reached through a public network, for maintenance or a change, is a far lighter onboarding process than giving someone access to your whole network.

Digital business innovates at software speed, but networking has innovated at hardware speed (Gartner, 2019). Digital transformation is about speed, team autonomy, cost reduction, and perhaps most importantly, new business models and designs: new interactions between businesses, people, and, last but not least, "things". There will be more traffic to the public cloud than to the on-premises data centre, and more sensitive data in the public cloud than on-premises.

The data centre is no longer the pivot at the centre of your organization; it is just another ecosystem the organization uses. With public networks, scaling is easier and no extra steps are involved. It does mean that the whole traditional model must change: from trusting your network to a zero-trust policy, which is also not a walk in the park. A zero-trust model needs proper identification (an IP address or physical location is not enough) and encryption of (sensitive) data. Identification applies to everything: users, applications, and devices.
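The identification requirement above can be sketched with a toy HMAC-signed request check: every request carries a signature derived from a per-client secret, so the server verifies the caller's identity instead of trusting the network the request came from. Real zero-trust deployments use mTLS or signed tokens (e.g. JWTs) rather than a raw shared secret; this example is illustrative only.

```python
import hashlib
import hmac

SECRET = b"per-client-secret"  # hypothetical per-client key, never hard-coded in practice

def sign(payload: bytes) -> str:
    """Compute the client's signature over the request payload."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Server-side check: accept only requests from an identified caller."""
    return hmac.compare_digest(sign(payload), signature)

msg = b'{"action": "read", "resource": "report-42"}'
sig = sign(msg)
print(verify(msg, sig))          # True: the caller is identified
print(verify(b"tampered", sig))  # False: rejected regardless of which network it came from
```

The point is that the decision depends on cryptographic identity, not on whether the packet arrived over a "trusted" private route.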

After taking this step, it no longer matters whether something is routed over a public network; the full flexibility of the cloud can be used, from SaaS providers to features offered by cloud service providers. Are you using the Microsoft Graph API? No problem: these kinds of services already have a public endpoint and can safely be used over public networks, and that kind of flexibility is wanted for everything.

APIs can be used not only to expose business and application functions to the outside world but also as a safe way to provide them to other ecosystems (a multi-cloud approach: one cloud environment holding the functionality, used both from on-premises and from another cloud provider's environment).

This way, the full flexibility, speed, agility, and probably many more of the cloud goals are achieved, and the full potential is released. The heavy maintenance of the "traditional" model and way of working is eliminated. Implementing zero trust is a challenge, but the payoff is huge.

Frequently Asked Questions

With cloud computing, resources are available in minutes, which means companies can respond to new market developments much more rapidly. Dovetailed with the inherent agility of cloud resources is DevOps, which realigns software development and deployment to create continuous integration and continuous delivery.

Cloud computing is named as such because the information being accessed is found remotely in the cloud or a virtual space. Companies that provide cloud services enable users to store files and applications on remote servers and then access all the data via the Internet. ... Cloud computing can be both public and private.

It has allowed us to quickly adapt and cater to the ever-changing needs of businesses and their employees. Cloud computing can process large volumes of data and facilitate global deployment, allowing businesses to create more innovative and dynamic ways of working.
