Overview of Amazon Elastic Compute Cloud (EC2)

Amazon Elastic Compute Cloud – usually called EC2 – is the most fundamental compute service from AWS. It is the building block for compute on AWS, and a large number of other AWS services use it, in one form or another, as their underlying compute component.


When we think of servers in the context of AWS, we think of EC2 instances. Often, the first activity beginners start with is creating an EC2 instance – sometimes even before creating an IAM user with the Root account (as one should). AWS has made it so simple to get a server (or set of servers) up and running that you can have a simple website running in literally five minutes (setting aside proper setup of IAM roles, a custom VPC, subnets, NACLs, etc.).


Key Components of EC2

  • Processor – AWS provides a fair variety of processor choices – Intel, AMD, and AWS Graviton (AWS’ custom-built processor) – along with specialized processors such as GPUs from NVIDIA and AWS Inferentia, AWS’ custom-built machine learning chip. The variety helps you select the right amount of compute power and the right processing type for your needs.
  • Storage – each instance is associated with a root device volume, which can be either an EBS volume or an instance store volume. Elastic Block Store (EBS) is the most commonly attached volume type. Alternatively, depending on the instance type, an Instance Store – which is physically present on the same host hardware as the EC2 instance itself – may be attached.
  • Enhanced Virtualization – AWS has built its own platform (the AWS Nitro System) to make EC2 instances more efficient. The Nitro System does so by offloading various virtualization functions to dedicated hardware and software.
  • Enhanced Networking – AWS offers enhanced networking with up to 100 Gbps of Ethernet bandwidth on supported EC2 instances.


Instance Types

AWS has streamlined the options for EC2 instances into categories to make it easier to pick the most suitable compute architecture for your application needs.


General Purpose

General Purpose instance types are meant to provide a balance of compute, memory and networking resources for most common workloads.

  • A1
    • Arm (architecture) based instances
    • Custom-built AWS Graviton processor with 64-bit Arm Neoverse cores
      • Neoverse is optimized for cloud-native workloads
    • EBS-optimized by default


  • T2, T3, T3a
    • Burstable CPU capacity – provides a baseline level of CPU performance, with the ability to burst above the baseline at any time for as long as required
    • AWS handles burst capacity with a CPU credit system – you earn credits when CPU consumption is below the baseline threshold, and you spend those credits when you burst above it
    • T3a instances are the AMD-processor-based variation of T3 instances
    • T2 is the predecessor to T3 and is still in use. The two generations have similar features; however, T3 generally has a higher vCPU count than T2 at the same size (nano, micro, small)
  • M4, M5, M5a, M5n, M6g
    • Dedicated CPU capacity
    • EBS-optimized by default
    • M4, M5, M5n are Intel Xeon based
    • M5a is AMD EPYC 7000 series based
    • M6g is AWS Graviton2 Processor based
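The credit mechanics described for the T family above can be sketched with a simplified model. One CPU credit equals one vCPU running at 100% for one minute. The figures below correspond to a t3.micro (2 vCPUs, 10% per-vCPU baseline, hence 0.10 × 2 × 60 = 12 credits earned per hour), but treat them as illustrative – check current AWS documentation for actual values:

```python
# Simplified model of the T2/T3 CPU credit system.
# One CPU credit = one vCPU at 100% utilization for one minute.
# The defaults below match a t3.micro (2 vCPUs, 10% baseline per
# vCPU); they are illustrative, not authoritative pricing data.

def credits_after_hour(balance, avg_utilization, vcpus=2, baseline=0.10):
    """Return the credit balance after one hour at a given average
    per-vCPU utilization (expressed as a fraction, e.g. 0.5 = 50%)."""
    earned = baseline * vcpus * 60          # credits accrued over the hour
    spent = avg_utilization * vcpus * 60    # credits consumed over the hour
    return balance + earned - spent

# An idle instance accrues credits; sustained bursting drains them.
idle = credits_after_hour(0, avg_utilization=0.0)     # 12.0
burst = credits_after_hour(100, avg_utilization=0.5)  # 100 + 12 - 60 = 52.0
print(idle, burst)
```

When the balance hits zero, a T3 instance in standard mode is throttled back to the baseline (or, in unlimited mode, incurs extra charges).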


Compute Optimized

Compute Optimized instances are meant for applications that require higher computational performance. Well suited for batch processing workloads, high performance application servers, scientific modeling, dedicated gaming servers, etc.

  • C4, C5, C5a, C5n, C6g
    • EBS-optimized by default
    • Higher networking performance
    • C4, C5 and C5n are Intel Xeon based
    • C5a is AMD EPYC 7002 series based
    • C6g is AWS Graviton2 processor based
    • C5n provides higher network bandwidth and memory compared to C5


Memory Optimized

Memory Optimized instances are meant for applications that process large amounts of data in memory.

  • R4, R5, R5a, R5n, R6g, X1, X1e
    • R4, R5, R5n, X1 and X1e are Intel Xeon based
    • R5a is AMD EPYC 7000 series based
    • R6g is Graviton2 processor based
    • EBS-optimized by default
    • Support for enhanced networking


  • High Memory
    • High Memory instances are meant to run large in-memory databases (such as SAP HANA)
    • Offers up to 24 TiB of instance memory – the highest of any EC2 instance type
    • Offers bare metal performance with direct access to host hardware
    • EBS-optimized by default


  • z1d
    • z1d instances offer both high memory as well as high compute capacity
    • Offers up to 48 vCPU capacity and 384 GiB memory
    • Intel Xeon based


Accelerated Computing

Accelerated Computing instances offer the most compute power of any instance type by leveraging hardware accelerators, or co-processors, to perform functions like floating point calculations and graphics processing more efficiently.

  • F1, G3, G4, Inf1, P2, P3
    • All of them are Intel Xeon based (of various models)
    • G3, G4, P2 and P3 use NVIDIA GPUs; F1 uses FPGAs; Inf1 uses custom-built AWS Inferentia chips


Storage Optimized

Storage Optimized instances are meant for applications that perform heavy read / write operations on large data sets in local storage.

  • H1, D2, I3, I3en
    • All of them are Intel Xeon based (of various models)
    • H1 and D2 use HDD-based storage, providing high disk throughput
    • I3 and I3en use NVMe SSDs, providing high random I/O performance and high sequential read throughput


Storage options for EC2

When you create an instance, you specify what type of block device you would like to attach to it. There must be at least one, to act as the root device. You can optionally add more block devices to the instance.
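For illustration, when launching an instance (for example via the AWS CLI or an SDK), the attached devices are described with a block device mapping. A hypothetical mapping requesting an 8 GiB root EBS volume plus an additional 100 GiB data volume might look like this (device names and defaults vary by AMI and instance type):

```json
[
  {
    "DeviceName": "/dev/xvda",
    "Ebs": { "VolumeSize": 8, "VolumeType": "gp2", "DeleteOnTermination": true }
  },
  {
    "DeviceName": "/dev/xvdb",
    "Ebs": { "VolumeSize": 100, "VolumeType": "gp2", "DeleteOnTermination": false }
  }
]
```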


There are four types of storage used with EC2: Instance Store, EBS, EFS and S3.


EC2 Storage Options

Image courtesy of AWS


Instance Store

Instance Store is block storage that resides on the same host hardware where the associated EC2 instance runs.

Key points:

  • Instance Store storage is ephemeral.
  • Enables very high I/O performance
  • May only be attached at launch time – you cannot add this type of storage after launch
  • Data persists across an instance reboot, but is lost when the instance is stopped or terminated


Elastic Block Store (EBS)

EBS is durable block storage that resides on hardware remote from the host where the associated EC2 instance runs. EBS can be used as the primary storage device for an instance, especially when the instance performs frequent data updates. It’s well suited (and recommended) for databases that you want to run on instances yourself (as opposed to using managed RDS / other database services).

Key points:

  • One or more EBS volumes may be attached to an EC2 instance.
  • An EBS volume may be detached from one EC2 instance and attached to another. However, at any given time it can be attached to only one instance
  • An EBS volume is specific to an Availability Zone (AZ). You can create a copy in another AZ (or even another Region) by first creating a snapshot, and then creating a new volume from that snapshot in the target AZ
  • AWS replicates each EBS volume within its AZ to provide high durability
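As a toy illustration of the attachment rules above (one attachment at a time, volume bound to its AZ) – this is a model for intuition only, not an AWS API:

```python
# Toy model of EBS attachment rules: a volume lives in exactly one
# AZ and can be attached to at most one instance at a time, and that
# instance must be in the same AZ. Purely illustrative.

class Volume:
    def __init__(self, az: str):
        self.az = az
        self.attached_to = None

    def attach(self, instance_id: str, instance_az: str) -> None:
        if self.attached_to is not None:
            raise RuntimeError("volume already attached; detach first")
        if instance_az != self.az:
            raise RuntimeError("instance and volume must share an AZ")
        self.attached_to = instance_id

    def detach(self) -> None:
        self.attached_to = None

vol = Volume("us-east-1a")
vol.attach("i-111", "us-east-1a")   # OK: same AZ, not attached yet
vol.detach()
vol.attach("i-222", "us-east-1a")   # OK again after detaching
print(vol.attached_to)
```

Moving the volume to a different AZ is the one operation the model forbids outright; as described above, the real-world workaround is snapshot and restore.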


Elastic File System (EFS)

EFS is a fully managed, POSIX-compliant file storage system that uses the NFS protocol (v4.0 and v4.1). It resides on remote hardware and is accessed from EC2 by mounting the file system.

Key points:

  • Fully managed – scales up and down as needed. You pay per usage.
  • Regional service – data is stored in one or more AZs
    • Instances can access EFS across Availability Zones (AZs) and Regions
    • On-premises data centers can access EFS over VPN or AWS Direct Connect
  • Highly scalable – can scale into petabytes
  • Highly available and durable
  • Ideal candidate when multiple instances need to access the same set of data – each instance (that needs the data) can mount the same EFS file system and read from or write to it
  • Compatible with all Linux-based AMIs for EC2s
    • Does not support Windows AMI
  • Two storage classes
    • Standard – designed for active file system workloads
    • Infrequent Access storage class (EFS IA) – cost-optimized, best suited for data that is not accessed every day
      • You can set up a Lifecycle policy to move data from Standard to EFS IA based on access patterns
      • Files smaller than 128 KiB are not eligible for Lifecycle management, and will always stay on Standard class
  • Two Throughput modes
    • Bursting Throughput – throughput scales with the size of the file system, dynamically bursting as needed
    • Provisioned Throughput – supports dedicated throughput and can be configured independently of the amount of data on the file system
  • Supports data encryption at rest and in transit
  • You can use AWS DataSync (a managed service) to transfer data between on-premises storage and EFS
    • Works across Regions
    • Can be done over Direct Connect, VPN, or the internet
  • AWS Backup may be used to perform backups
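The Lifecycle management rule above (files under 128 KiB always stay on Standard) can be sketched as a simple eligibility check. The 30-day default below is one of the selectable policy windows; the function itself is a simplified illustration, not the actual EFS implementation:

```python
# Sketch of the EFS Lifecycle eligibility rule described above:
# a file moves from Standard to EFS IA only if it is at least
# 128 KiB AND has not been accessed for the policy's configured
# number of days. Simplified illustration only.

MIN_IA_SIZE = 128 * 1024  # files smaller than 128 KiB stay on Standard

def moves_to_ia(size_bytes: int, days_since_access: int, policy_days: int = 30) -> bool:
    return size_bytes >= MIN_IA_SIZE and days_since_access >= policy_days

print(moves_to_ia(64 * 1024, 90))    # False: under the 128 KiB threshold
print(moves_to_ia(1024 * 1024, 45))  # True: large enough and idle long enough
```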


Simple Storage Service (S3)

S3 is an Object storage service that offers a very low-cost storage solution.

Key points:

  • Highly scalable – virtually unlimited storage
  • Data stored on S3 is highly durable – 99.999999999% (11 9’s)
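To put eleven 9’s of durability in perspective, it corresponds to an expected average annual loss rate of one object in 10^11 – a statistical expectation, not a guarantee:

```python
# Back-of-the-envelope reading of S3's 99.999999999% (11 9's)
# design durability: expected average object loss per year.

durability = 0.99999999999
annual_loss_rate = 1 - durability          # ~1e-11 per object per year

objects_stored = 10_000_000
expected_losses_per_year = objects_stored * annual_loss_rate
print(expected_losses_per_year)            # ~1e-4: one object every ~10,000 years
```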



Pricing options for EC2

Amazon EC2 has a per-hour rate, but is billed down to the number of seconds an instance has been up (with a minimum of 60 seconds each time it starts).
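The per-second billing with a 60-second minimum can be sketched as follows (the $0.10/hour rate is a hypothetical figure for illustration, not an actual AWS price):

```python
# Sketch of EC2's billing granularity: rates are quoted per hour,
# but usage is billed per second with a 60-second minimum per run.
# The 0.10/hour default rate is hypothetical, purely illustrative.

def run_cost(seconds_up: float, hourly_rate: float = 0.10) -> float:
    billable = max(seconds_up, 60)          # 60-second minimum per run
    return billable * hourly_rate / 3600

print(round(run_cost(30), 6))    # 30 s of uptime is billed as 60 s
print(round(run_cost(3600), 6))  # a full hour at 0.10/hour -> 0.1
```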

There are five ways to commit to and / or use EC2, and pricing varies accordingly:

  • On-demand
  • Reserved Instances
  • Spot Instances
  • Savings Plans
  • Dedicated Hosts


Also, pricing varies by the underlying instance type.


External Resources