
AWS vs Azure vs GCP – Computing

What is compute?

Compute is often one of the first resources organizations deploy to get started with cloud computing. In this post, we examine the computing options available from Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

If you’ve been in IT for any amount of time, you’re probably familiar with servers. Over the years, they have progressed from standalone white boxes to power-hungry stacks of super-dense computers that can make a room hotter than the sun (if not cooled properly).

As time went on, we virtualized the OS on the hardware to increase server density and better utilize that hardware.

The natural progression is to move those servers into the cloud, removing the need to manage power, cooling, and all the tasks that come with server hardware — freeing IT professionals to do the important work that adds value to the organization. And watch cat videos.

If you’d like to understand how Azure, Amazon Web Services and Google Cloud Platform implement these compute resources, keep reading!

An intro to compute

Before we begin our comparison, let’s start with a quick overview of platform as a service (PaaS) and infrastructure as a service (IaaS). This is important to know because management responsibility differs between these two options for deploying services in the cloud.

Deploying a web server with a cloud provider to host a website is an example of IaaS, while deploying that website to a hosted web service is an example of PaaS.

Shared responsibility model

The shared responsibility model outlines who is responsible for managing different aspects of services deployed in the cloud. For example, an organization doesn’t have to worry about physical servers and virtualization software when deploying IaaS servers on Azure; however, OS patching and software updates are still part of ongoing management.

This is important to understand when talking about compute with cloud services, because in most circumstances, compute is an IaaS service and much of the management responsibility falls on the customer.

Compute options

At the heart of IaaS compute are the virtual machine sizes on offer, and the specific sizes and features change frequently with each provider.

  • AWS has Elastic Compute Cloud or EC2.
  • GCP refers to their service as Compute Engine.
  • Azure has Virtual Machines to provide compute resources similar to what we’ve all enjoyed with our on-premises VMs.

Coming up, we’ll explore the size and performance options available with each, as well as the operating systems, availability and scalability, and some cost-saving options each provider offers.

AWS

  • Amazon EC2 offers a general purpose VM for common workloads, a compute optimized type for applications that require high-performance processing, and a memory optimized option for applications that benefit from large amounts of memory, such as applications that process data in memory.
  • AWS also offers an accelerated compute option for hardware-accelerated processing, leveraging GPUs, and a storage optimized option for workloads that require high sequential read and write access to datasets.
  • It also supports burstable performance instances, a cost-effective option for applications with low baseline CPU usage. A short launch sketch follows this list.
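
To make the families concrete, here’s a minimal sketch using boto3, AWS’s Python SDK. The AMI ID, region, and the specific instance type names are placeholders for illustration; check what’s actually offered in your region and account before running anything like this.

    # Launch an EC2 instance, picking the type by workload profile.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Example mappings from workload profile to instance family (illustrative only).
    instance_types = {
        "general_purpose": "m5.large",     # balanced CPU and memory
        "compute_optimized": "c5.xlarge",  # high-performance processing
        "memory_optimized": "r5.xlarge",   # in-memory data processing
        "burstable": "t3.micro",           # low baseline CPU with occasional bursts
    }

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType=instance_types["burstable"],
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "demo-burstable"}],
        }],
    )
    print(response["Instances"][0]["InstanceId"])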

Google Cloud

  • GCP Compute Engine also supports general purpose VMs, along with compute optimized VMs that offer high performance per CPU core.
  • There’s a memory optimized option and accelerator-optimized VMs, and just like AWS, a shared-core (burstable) VM option is available.
  • At the time of publication, GCP does not offer a storage optimized option. The short sketch after this list shows how the machine-type families appear in a zone.
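
If you want to see how those families show up in practice, here’s a rough sketch with the google-cloud-compute Python client that lists the machine types in a zone and groups them by family prefix. The project ID and zone are placeholders.

    # List Compute Engine machine types in a zone and bucket them by family.
    from google.cloud import compute_v1

    client = compute_v1.MachineTypesClient()
    for machine_type in client.list(project="my-project", zone="us-central1-a"):
        family = machine_type.name.split("-")[0]  # e.g. "e2", "c2", "m2", "a2"
        if family in {"e2", "c2", "m2", "a2"}:
            print(machine_type.name, machine_type.guest_cpus, machine_type.memory_mb)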

Microsoft Azure

  • Just like the others, Azure offers general purpose VMs as well as compute and memory optimized VMs. Accelerator-optimized VMs are also available, referred to as the GPU type.
  • Azure offers storage optimized virtual machines, as well as burstable options for workloads that don’t require consistent access to the CPU. A size-listing sketch follows this list.
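
Here’s a similar sketch for Azure using the Azure SDK for Python: listing the VM sizes available in a region, which is where the different series (D for general purpose, F for compute optimized, E and M for memory optimized, L for storage optimized, N for GPU, B for burstable) appear. The subscription ID and region are placeholders.

    # List the VM sizes offered in an Azure region.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    compute_client = ComputeManagementClient(
        DefaultAzureCredential(), "<subscription-id>"
    )
    for size in compute_client.virtual_machine_sizes.list(location="eastus"):
        print(size.name, size.number_of_cores, size.memory_in_mb)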

Supported operating systems

Let’s move on to the operating systems available with each provider. The supported operating systems differ slightly between cloud providers, but all three offer ready-to-use images for multiple versions of Windows and Linux distributions.

In addition, AWS offers macOS. And Azure supports a multi-user edition of Windows 10 (Windows 10 Enterprise multi-session).

All providers offer images for a wide variety of Linux distributions. Some providers supply their own flavor of Linux, such as AWS with Amazon Linux. Specific distribution versions differ between providers.
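
As a quick example of working with those ready-to-use images, here’s a hedged boto3 sketch that looks up recent Amazon-owned Amazon Linux 2023 AMIs. The name filter is an assumption about the image naming pattern; adjust it for the distribution and architecture you need.

    # Find the most recently created Amazon Linux 2023 AMI in a region.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[
            {"Name": "name", "Values": ["al2023-ami-*-x86_64"]},
            {"Name": "state", "Values": ["available"]},
        ],
    )["Images"]

    latest = max(images, key=lambda image: image["CreationDate"])
    print(latest["ImageId"], latest["Name"])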

Keeping services available is important, and running services in the cloud provides greater flexibility in how and where services are deployed. Let’s look at the options for availability and scalability next.

Availability and scalability

Each cloud provider distributes workloads across regions. A region is a grouping of one or more data centers. Distributing workloads across regions provides high availability should one region experience an outage.

Each region is divided into multiple independent, but well-connected zones. Think of these as separate data centers located in the same city. They’re referred to as “zones” in GCP and “availability zones” in Azure and AWS.

Regions and zones are important when planning compute services in the cloud for two reasons.

  • First, they provide high availability. If workloads are distributed across multiple regions and zones, the application or service is resilient in the event of a region or zone failure.
  • Second, regions allow us to put the workload closer to the customer. Latency is a factor in application performance, and until we can change the laws of physics, physical placement will be a consideration when deploying compute and other services in the cloud.

Azure, GCP and AWS all offer multiple regions and zones, but not every region supports every type of virtual machine. Some applications, such as machine learning or seasonal order processing, benefit from dynamically increasing the number of VMs as the load increases.

The ability to dynamically add more nodes to a resource pool when needed and deallocate those nodes when not in use provides capacity on demand while cutting costs when the resource is not needed.

AWS

  • For this type of workload, Amazon EC2 offers Auto Scaling, allowing the number of EC2 instances to automatically scale out as demand increases and scale back in when demand decreases. This can be done based on metrics such as CPU utilization.
  • EC2 also offers a feature called Fleet Management, which extends Auto Scaling to maintain availability by automatically replacing unhealthy or unavailable instances.
  • In addition, Predictive Scaling uses machine learning to predict and add capacity based on usage patterns.
  • Auto Scaling groups can span multiple zones, referred to as “availability zones” in AWS, as in the sketch after this list.
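
Here’s a hedged sketch of what that looks like with boto3: an Auto Scaling group spread across two availability zones with a target-tracking policy on average CPU. The launch template name and subnet IDs are placeholders for resources you’d create separately.

    # Create an Auto Scaling group spanning two AZs, then add a CPU-based policy.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # subnets in two AZs
    )

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 60.0,  # add or remove instances to hold roughly 60% CPU
        },
    )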

Google Cloud

  • GCP has groups of VMs called “managed instance groups” or “MIGs”. A MIG can be managed as a single entity, and it supports autoscaling to increase and decrease the number of VM instances based on load. Scaling policies can be based on CPU utilization, load-balancing capacity, or custom metrics.
  • Schedule-based autoscaling is also an option with GCP, providing increased capacity during times of known heavy workloads.
  • Another option called Auto-Healing can recreate VMs that fail a health check.
  • Managed instance groups can be deployed to a single zone or across multiple zones in the same region. A small autoscaler sketch follows this list.
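
Here’s a rough sketch, using the google-cloud-compute client, of attaching a CPU-based autoscaler to an existing zonal MIG. The project, zone, and group name are placeholders, and the policy simply targets about 60% CPU utilization.

    # Attach an autoscaler to an existing managed instance group.
    from google.cloud import compute_v1

    project, zone, mig_name = "my-project", "us-central1-a", "web-mig"

    autoscaler = compute_v1.Autoscaler(
        name="web-mig-autoscaler",
        target=(
            "https://www.googleapis.com/compute/v1/"
            f"projects/{project}/zones/{zone}/instanceGroupManagers/{mig_name}"
        ),
        autoscaling_policy=compute_v1.AutoscalingPolicy(
            min_num_replicas=2,
            max_num_replicas=10,
            cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
                utilization_target=0.6,
            ),
        ),
    )

    operation = compute_v1.AutoscalersClient().insert(
        project=project, zone=zone, autoscaler_resource=autoscaler
    )
    operation.result()  # wait for the autoscaler to be created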

Microsoft Azure

  • Azure, just like the others, provides a centrally managed pool of identical VMs called a “scale set.”
  • The number of VMs in a scale set can increase and decrease automatically with autoscale policies based on VM utilization. An autoscale schedule can scale out and in at times of anticipated high utilization, and an automatic instance repair option can replace instances in a scale set if they become unhealthy or unavailable.
  • Scale sets can be deployed across availability zones in Azure for high availability within a region. An autoscale-settings sketch follows this list.
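
For Azure, autoscale rules for a scale set live in Azure Monitor. The following is a rough sketch, assuming the azure-mgmt-monitor package: it adds one scale-out rule that increases capacity when average CPU stays above 70%. The subscription ID, resource group, and scale set resource ID are placeholders.

    # Define an autoscale setting for an existing virtual machine scale set.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient

    subscription_id = "<subscription-id>"
    vmss_id = (
        "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/"
        "Microsoft.Compute/virtualMachineScaleSets/web-vmss"
    )

    monitor = MonitorManagementClient(DefaultAzureCredential(), subscription_id)
    monitor.autoscale_settings.create_or_update(
        "my-rg",
        "web-vmss-autoscale",
        {
            "location": "eastus",
            "target_resource_uri": vmss_id,
            "enabled": True,
            "profiles": [{
                "name": "cpu-based",
                "capacity": {"minimum": "2", "maximum": "10", "default": "2"},
                "rules": [{
                    "metric_trigger": {
                        "metric_name": "Percentage CPU",
                        "metric_resource_uri": vmss_id,
                        "time_grain": "PT1M",
                        "statistic": "Average",
                        "time_window": "PT5M",
                        "time_aggregation": "Average",
                        "operator": "GreaterThan",
                        "threshold": 70,
                    },
                    "scale_action": {
                        "direction": "Increase",
                        "type": "ChangeCount",
                        "value": "1",
                        "cooldown": "PT5M",
                    },
                }],
            }],
        },
    )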

Licensing

We’ve talked a lot about hardware, but what about software? Let’s move on to options for licensing the OS running on the virtual machine.

  • With open-source Linux servers, you pay only for the virtual hardware running the OS.
  • For the Windows OS, there’s an option to pay for the server OS along with the hardware. This makes a Windows VM cost more than a comparable Linux VM. The advantage of attaching the license to the VM, however, is that the VM is in compliance with Microsoft licensing.

Organizations with Microsoft volume licensing and an active Software Assurance contract may qualify for hybrid use benefits. This is a bring-your-own-license option that applies existing Microsoft volume licenses to the virtual machine. The price of the VM is then comparable to that of a Linux VM.
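
On Azure, the bring-your-own-license choice shows up as a single property on the VM definition. Here’s a tiny sketch with the Azure SDK for Python; everything except the license_type setting is trimmed down to placeholders.

    # Partial VM definition: license_type applies the hybrid use benefit,
    # so the Windows VM is billed at the base compute rate.
    windows_vm_parameters = {
        "location": "eastus",
        "license_type": "Windows_Server",  # bring-your-own Windows Server license
        "hardware_profile": {"vm_size": "Standard_D2s_v3"},
        # os_profile, storage_profile, and network_profile would follow here
    }
    # compute_client.virtual_machines.begin_create_or_update(
    #     "my-resource-group", "win-vm", windows_vm_parameters)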

Costs

There are other money-saving options for both Windows and Linux VMs. Each provider has a way of reducing the cost of a VM in exchange for a commitment to run the VM for a given amount of time.

  • With GCP, a committed use discount provides up to 70% off the pay-as-you-go price in exchange for a one- or three-year commitment.
  • Both AWS and Azure have a similar option called reserved instances. A reserved instance applies a discount to the virtual machine price with a one- or three-year commitment of service.
  • Azure, AWS and GCP have another class of VMs that provide a deep discount, up to 90% in some cases, but with a catch: they can be shut down at any time when the provider needs the capacity for on-demand customers. This is an excellent option for stateless and fault-tolerant applications that can be interrupted and resumed once the resource is back online, or for dev/test environments that don’t need the availability of a production environment. A Spot request sketch follows this list.
    • In GCP, these are referred to as Preemptible VM instances.
    • In AWS and Azure, these are referred to as spot instances.
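
Requesting this interruptible capacity is a small change to a normal launch. Here’s a brief boto3 sketch of an EC2 Spot request; the AMI ID is a placeholder.

    # Launch a Spot Instance by adding InstanceMarketOptions to run_instances.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="c5.large",
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {
                "SpotInstanceType": "one-time",
                "InstanceInterruptionBehavior": "terminate",
            },
        },
    )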

And there’s another option available for applications that don’t need consistent access to the CPU. These are virtual machine types for workloads that sit idle with an occasional burst of activity.

  • In GCP, these are referred to as shared-core VMs. Shared-core VMs use time slicing on the physical processor and allow conditional bursting for short periods of time.
  • Azure B-series and AWS T instances have similar functionality. They provide a baseline level of CPU performance, and the VMs build credits during times of low utilization. These credits can then be used for periods of high CPU usage. The baseline and the amount of credit depend on the specific size within the VM family; a toy credit calculation follows this list.
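
To see how the credit model plays out, here’s a toy calculation with made-up numbers rather than any provider’s published rates: an instance with a hypothetical 10% baseline banks credits while it idles and spends them during a burst.

    # Toy CPU-credit arithmetic (illustrative numbers only).
    baseline = 0.10                    # hypothetical baseline: 10% of one vCPU
    earn_rate = baseline * 60          # credits earned per hour (1 credit = 1 vCPU-minute)

    banked = earn_rate * 8             # credits banked over 8 mostly idle hours = 48

    burn_rate = (1.0 - baseline) * 60  # net credits spent per hour at 100% CPU = 54
    burst_minutes = banked / burn_rate * 60
    print(f"~{burst_minutes:.0f} minutes of full-speed burst")  # about 53 minutes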

Sometimes you don’t even need the VMs to be available 24/7; think of dev and test environments, or batch processing that only runs occasionally. All three providers have the option to stop or terminate a running VM. Stopping deallocates the VM from the service and stops the related compute charges; other associated charges, such as disk storage, still apply.

Both AWS and GCP offer another option: the ability to pause a running VM, referred to as “hibernate” in AWS and “suspend” in GCP. This option pauses the VM while preserving the memory state and associated settings, such as the IP address, kind of like closing the lid on a laptop.
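
In code, the “close the laptop lid” option is a small variation on a normal stop. Here’s a short boto3 sketch (the instance ID is a placeholder, and the instance must have been launched with hibernation enabled); GCP’s equivalent call is shown as a comment.

    # Hibernate an EC2 instance instead of doing a plain stop.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Plain stop: deallocates compute, memory state is lost, disks persist.
    # ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])

    # Hibernate: saves the memory state to disk before stopping.
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)

    # GCP's "suspend" with the google-cloud-compute client would look like:
    # from google.cloud import compute_v1
    # compute_v1.InstancesClient().suspend(
    #     project="my-project", zone="us-central1-a", instance="web-1")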

All three providers offer a wide variety of VM sizes and options to fit the needs of any project. Along with the sizes, there are also cost-saving options with reserved instances, shared and burstable CPUs, and bring-your-own licensing.

   