

# Amazon EBS volume types
<a name="ebs-volume-types"></a>

Amazon EBS provides the following volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. 

**Important**  
There are several factors that can affect the performance of EBS volumes, such as instance configuration, I/O characteristics, and workload demand. To fully use the IOPS provisioned on an EBS volume, use [EBS–optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html). For more information about getting the most out of your EBS volumes, see [Amazon EBS volume performance](ebs-performance.md).

For more information about pricing, see [Amazon EBS Pricing](https://aws.amazon.com/ebs/pricing/).

**Volume types**
+ [Solid state drive (SSD) volumes](#vol-type-ssd)
+ [Hard disk drive (HDD) volumes](#vol-type-hdd)
+ [Previous generation volumes](#vol-type-prev)

## Solid state drive (SSD) volumes
<a name="vol-type-ssd"></a>

SSD-backed volumes are optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS. SSD-backed volume types include **General Purpose SSD** and **Provisioned IOPS SSD**. The following is a summary of the use cases and characteristics of SSD-backed volumes.


|  | [Amazon EBS General Purpose SSD volumes](general-purpose.md) | [Amazon EBS General Purpose SSD volumes](general-purpose.md) | [Amazon EBS Provisioned IOPS SSD volumes](provisioned-iops.md) | [Amazon EBS Provisioned IOPS SSD volumes](provisioned-iops.md) | 
| --- | --- | --- | --- | --- | 
| Volume type | gp3 ⁶ | gp2 | io2 Block Express ⁵ | io1 | 
| Durability | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) | 99.999% durability (0.001% annual failure rate) | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) | 
| Use cases |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html)  |  Workloads that require: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html)  | 
| Volume size | 1 GiB - 64 TiB  | 1 GiB - 16 TiB  | 4 GiB - 64 TiB  | 4 GiB - 16 TiB  | 
| Max IOPS | 80,000 ³ (64 KiB I/O ⁴) | 16,000 (16 KiB I/O ⁴) | 256,000 ³ (16 KiB I/O ⁴)  | 64,000 (16 KiB I/O ⁴) | 
| Max throughput | 2,000 MiB/s | 250 MiB/s ¹ | 4,000 MiB/s | 1,000 MiB/s ² | 
| Amazon EBS Multi-attach | Not supported | Not supported | Supported | Supported | 
| NVMe reservations | Not supported | Not supported | Supported | Not supported | 
| Boot volume | Supported | Supported | Supported | Supported | 

¹ The throughput limit is between 128 MiB/s and 250 MiB/s, depending on the volume size. For more information, see [`gp2` volume performance](general-purpose.md#gp2-performance). Volumes created before **December 3, 2018** that have not been modified since creation might not reach full performance unless you [modify the volume](ebs-modify-volume.md).

² To achieve maximum throughput of 1,000 MiB/s, the volume must be provisioned with 64,000 IOPS and attached to a [Nitro-based instance](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). Volumes created before **December 6, 2017** that have not been modified since creation might not reach full performance unless you [modify the volume](ebs-modify-volume.md).

³ [Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) support volumes provisioned with up to 256,000 IOPS. Other instance types can be attached to volumes provisioned with up to 64,000 IOPS, but can achieve at most 32,000 IOPS.

⁴ The I/O size required to reach maximum IOPS within the volume's throughput limit.

⁵ `io2` Block Express volumes are designed to deliver an average latency of under 500 microseconds for 16 KiB I/O operations.

⁶ On Outposts, gp3 volumes support sizes up to 16 TiB, IOPS up to 16,000, and throughput up to 1,000 MiB/s.

For more information about the SSD-backed volume types, see the following:
+ [Amazon EBS General Purpose SSD volumes](general-purpose.md)
+ [Amazon EBS Provisioned IOPS SSD volumes](provisioned-iops.md)

## Hard disk drive (HDD) volumes
<a name="vol-type-hdd"></a>

HDD-backed volumes are optimized for large streaming workloads where the dominant performance attribute is throughput. HDD volume types include **Throughput Optimized HDD** and **Cold HDD**. The following is a summary of the use cases and characteristics of HDD-backed volumes.


|  | [Throughput Optimized HDD volumes](hdd-vols.md#EBSVolumeTypes_st1) | [Cold HDD volumes](hdd-vols.md#EBSVolumeTypes_sc1) | 
| --- | --- | --- | 
| Volume type | st1 | sc1 | 
| Durability | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) | 99.8% - 99.9% durability (0.1% - 0.2% annual failure rate) | 
| Use cases |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html)  | 
| Volume size | 125 GiB - 16 TiB | 125 GiB - 16 TiB | 
| Max IOPS per volume (1 MiB I/O) | 500 | 250 | 
| Max throughput per volume | 500 MiB/s | 250 MiB/s | 
| Amazon EBS Multi-attach | Not supported | Not supported | 
| Boot volume | Not supported | Not supported | 

For more information about HDD volumes, see [Amazon EBS Throughput Optimized HDD and Cold HDD volumes](hdd-vols.md).

## Previous generation volumes
<a name="vol-type-prev"></a>

Magnetic (`standard`) volumes are previous generation volumes that are backed by magnetic drives. They are suited for workloads with small datasets where data is accessed infrequently and performance is not of primary importance. These volumes deliver approximately 100 IOPS on average, with burst capability of up to hundreds of IOPS, and they can range in size from 1 GiB to 1 TiB.

**Tip**  
Magnetic is a previous generation volume type. If you need higher performance or performance consistency than previous-generation volumes can provide, we recommend using one of the current generation volume types.

The following table describes previous-generation EBS volume types.


|  | Magnetic | 
| --- | --- | 
| Volume type | standard | 
| Use cases | Workloads where data is infrequently accessed | 
| Volume size | 1 GiB - 1 TiB | 
| Max IOPS per volume | 40–200 | 
| Max throughput per volume | 40–90 MiB/s | 
| Boot volume | Supported | 

# Amazon EBS General Purpose SSD volumes
<a name="general-purpose"></a>

General Purpose SSD (gp2 and gp3) volumes are backed by solid-state drives (SSDs). They balance price and performance for a wide variety of transactional workloads, including virtual desktops, medium-sized single-instance databases, latency-sensitive interactive applications, development and test environments, and boot volumes. We recommend these volumes for most workloads.

Amazon EBS offers the following types of General Purpose SSD volumes:

**Topics**
+ [General Purpose SSD (gp3) volumes](#gp3-ebs-volume-type)
+ [General Purpose SSD (gp2) volumes](#EBSVolumeTypes_gp2)

## General Purpose SSD (gp3) volumes
<a name="gp3-ebs-volume-type"></a>

General Purpose SSD (gp3) volumes are the latest generation of General Purpose SSD volumes, and the lowest cost SSD volume offered by Amazon EBS. This volume type helps to provide the right balance of price and performance for most applications. It also helps you to scale volume performance independently of volume size. This means that you can provision the required performance without needing to provision additional block storage capacity. Additionally, gp3 volumes offer a 20 percent lower price per GiB than General Purpose SSD (gp2) volumes.

gp3 volumes provide single-digit millisecond latency and 99.8 percent to 99.9 percent volume durability with an annual failure rate (AFR) no higher than 0.2 percent, which translates to a maximum of two volume failures per 1,000 running volumes over a one-year period. AWS designs gp3 volumes to deliver their provisioned performance 99 percent of the time.

**Tip**  
For latency-sensitive workloads, we recommend that you use `io2` Block Express volumes. `io2` Block Express volumes are designed to deliver an average latency of under 500 microseconds for 16 KiB I/O operations. They also deliver better outlier latency compared to General Purpose volumes, reducing the frequency of I/Os exceeding 800 microseconds by over 10 times. For more information, see [Provisioned IOPS SSD (`io2`) Block Express volumes](provisioned-iops.md#io2-block-express).

**Topics**
+ [gp3 volume performance](#gp3-performance)
+ [gp3 volume size](#gp3-sie)
+ [Migrate to gp3 from gp2](#migrate-to-gp3)

### gp3 volume performance
<a name="gp3-performance"></a>

**Tip**  
gp3 volumes do not use burst performance. They can indefinitely sustain their full provisioned IOPS and throughput performance.

**IOPS performance**  
gp3 volumes deliver a consistent baseline IOPS performance of 3,000 IOPS, which is included with the price of storage. You can provision additional IOPS (up to a maximum of 80,000) for an additional cost at a ratio of 500 IOPS per GiB of volume size. Maximum IOPS can be provisioned for volumes 160 GiB or larger (500 IOPS per GiB × 160 GiB = 80,000 IOPS).

**Throughput performance**  
gp3 volumes deliver a consistent baseline throughput performance of 125 MiB/s, which is included with the price of storage. You can provision additional throughput (up to a maximum of 2,000 MiB/s) for an additional cost at a ratio of 0.25 MiB/s per provisioned IOPS. Maximum throughput can be provisioned at 8,000 IOPS or higher and 16 GiB or larger (8,000 IOPS × 0.25 MiB/s per IOPS = 2,000 MiB/s).

**Note**  
On Outposts, gp3 volumes support sizes up to 16 TiB, IOPS up to 16,000, and throughput up to 1,000 MiB/s.
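The limits above can be checked with a quick calculation. The following Python sketch (the helper names are illustrative, not part of any AWS SDK) applies the 500 IOPS-per-GiB and 0.25 MiB/s-per-IOPS ratios; the lower Outposts limits in the note are not modeled:

```python
# Sketch of the gp3 provisioning limits described above. The baselines
# (3,000 IOPS and 125 MiB/s) are included with the price of storage;
# the caps come from the documented ratios and maximums.

def gp3_max_iops(size_gib):
    """Maximum provisionable IOPS for a gp3 volume of the given size."""
    return max(3000, min(80_000, 500 * size_gib))

def gp3_max_throughput(iops):
    """Maximum provisionable throughput (MiB/s) for a given IOPS setting."""
    return max(125, min(2000, 0.25 * iops))

print(gp3_max_iops(160))         # 160 GiB is the smallest size for 80,000 IOPS
print(gp3_max_throughput(8000))  # 8,000 IOPS is the smallest setting for 2,000 MiB/s
```

Note that the "16 GiB or larger" condition for maximum throughput falls out of the composition: reaching 8,000 IOPS at 500 IOPS per GiB requires at least 16 GiB.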

### gp3 volume size
<a name="gp3-sie"></a>

A gp3 volume can range in size from 1 GiB to 64 TiB.

### Migrate to gp3 from gp2
<a name="migrate-to-gp3"></a>

If you are currently using gp2 volumes, you can migrate them to gp3 by using [Amazon EBS Elastic Volumes operations](ebs-modify-volume.md), which let you modify the volume type, IOPS, and throughput of your existing volumes without interrupting your Amazon EC2 instances. When you use the console to create a volume or to create an AMI from a snapshot, General Purpose SSD `gp3` is the default volume type. In other cases, `gp2` is the default; in those cases, you can select `gp3` as the volume type instead.

To find out how much you can save by migrating your gp2 volumes to gp3, use the [Amazon EBS gp2 to gp3 migration cost savings calculator](https://d1.awsstatic.com/product-marketing/Storage/EBS/gp2_gp3_CostOptimizer.dd5eac2187ef7678f4922fcc3d96982992964ba5.xlsx).

## General Purpose SSD (gp2) volumes
<a name="EBSVolumeTypes_gp2"></a>

General Purpose SSD (`gp2`) volumes offer cost-effective storage that is ideal for a broad range of transactional workloads. With `gp2` volumes, performance scales with volume size.

**Tip**  
`gp3` volumes are the latest generation of General Purpose SSD volumes. They offer more predictable performance scaling and prices that are up to 20 percent lower than `gp2` volumes. For more information, see [General Purpose SSD (gp3) volumes](#gp3-ebs-volume-type).   
To find out how much you can save by migrating your `gp2` volumes to `gp3`, use the [Amazon EBS gp2 to gp3 migration cost savings calculator](https://d1.awsstatic.com/product-marketing/Storage/EBS/gp2_gp3_CostOptimizer.dd5eac2187ef7678f4922fcc3d96982992964ba5.xlsx).

`gp2` volumes provide single-digit millisecond latency and 99.8 percent to 99.9 percent volume durability with an annual failure rate (AFR) no higher than 0.2 percent, which translates to a maximum of two volume failures per 1,000 running volumes over a one-year period. AWS designs `gp2` volumes to deliver their provisioned performance 99 percent of the time.

**Topics**
+ [`gp2` volume performance](#gp2-performance)
+ [`gp2` volume size](#gp2-size)

### `gp2` volume performance
<a name="gp2-performance"></a>

**IOPS performance**  
Baseline IOPS performance scales linearly between a minimum of 100 and a maximum of 16,000 at a rate of 3 IOPS per GiB of volume size. IOPS performance is provisioned as follows:
+ Volumes 33.33 GiB and smaller are provisioned with the minimum of 100 IOPS.
+ Volumes larger than 33.33 GiB are provisioned with 3 IOPS per GiB of volume size up to the maximum of 16,000 IOPS, which is reached at 5,334 GiB (3 × 5,334).
+ Volumes 5,334 GiB and larger are provisioned with 16,000 IOPS.

`gp2` volumes smaller than 1 TiB (and that are provisioned with less than 3,000 IOPS) can **burst** to 3,000 IOPS when needed for an extended period of time. A volume's ability to burst is governed by I/O credits. When I/O demand is greater than baseline performance, the volume **spends I/O credits** to burst to the required performance level (up to 3,000 IOPS). While bursting, I/O credits are not accumulated and they are spent at the rate of IOPS that is being used above baseline IOPS (spend rate = burst IOPS - baseline IOPS). The more I/O credits a volume has accrued, the longer it can sustain its burst performance. You can calculate **Burst duration** as follows:

```
                        (I/O credit balance)
Burst duration  =  ------------------------------
                   (Burst IOPS) - (Baseline IOPS)
```

When I/O demand drops to baseline performance level or lower, the volume starts to **earn I/O credits** at a rate of 3 I/O credits per GiB of volume size per second. Volumes have an **I/O credit accrual limit** of 5.4 million I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for at least 30 minutes.

**Note**  
Each volume receives an initial I/O credit balance of 5.4 million I/O credits, which provides a fast initial boot cycle for boot volumes and a good bootstrapping experience for other applications.

The following table lists example volume sizes and the associated baseline performance of the volume, the burst duration (when starting with 5.4 million I/O credits), and the time needed to refill an empty I/O credits balance.


| Volume size (GiB) | Baseline performance (IOPS) | Burst duration at 3,000 IOPS (seconds) | Time to refill empty credit balance (seconds) | 
| --- | --- | --- | --- | 
|  1 to 33.33  |  100  |  1,862  | 54,000 | 
|  100  |  300  |  2,000  | 18,000 | 
|  334 (min size for max throughput)  | 1,002 |  2,703  |  5,389  | 
|  750  |  2,250  |  7,200  | 2,400 | 
|  1,000  |  3,000  |  N/A \* |  N/A \* | 
|  5,334 (min size for max IOPS) and larger  |  16,000  |  N/A \* |  N/A \* | 

\* The baseline performance of the volume exceeds the maximum burst performance.
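The burst durations and refill times in the table follow directly from the credit model. A minimal Python sketch (the names are illustrative; per the table, credits refill at the volume's baseline IOPS rate, which has a 100 IOPS floor):

```python
CREDIT_CAP = 5_400_000  # I/O credit accrual limit (also the initial balance)

def gp2_baseline_iops(size_gib):
    """3 IOPS per GiB of volume size, with a 100 IOPS floor and a 16,000 IOPS cap."""
    return min(max(3 * size_gib, 100), 16_000)

def burst_duration_s(size_gib, burst_iops=3000):
    """Seconds a full credit balance sustains the burst rate, or None
    when the baseline already meets or exceeds it (the N/A rows)."""
    baseline = gp2_baseline_iops(size_gib)
    if baseline >= burst_iops:
        return None
    return CREDIT_CAP / (burst_iops - baseline)

def refill_time_s(size_gib):
    """Seconds to refill an empty credit balance at the baseline accrual rate."""
    return CREDIT_CAP / gp2_baseline_iops(size_gib)

print(round(burst_duration_s(100)))  # 2000 seconds, matching the table
print(round(refill_time_s(100)))     # 18000 seconds
```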

You can monitor the I/O credit balance for a volume using the Amazon EBS `BurstBalance` metric in Amazon CloudWatch. This metric shows the percentage of I/O credits for `gp2` remaining. For more information, see [Amazon EBS I/O characteristics and monitoring](ebs-io-characteristics.md). You can set an alarm that notifies you when the `BurstBalance` value falls to a certain level. For more information, see [ Creating CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).

**Throughput performance**  
`gp2` volumes deliver throughput between 128 MiB/s and 250 MiB/s, depending on the volume size. Throughput performance is provisioned as follows:
+ Volumes that are 170 GiB and smaller deliver a maximum throughput of 128 MiB/s.
+ Volumes larger than 170 GiB but smaller than 334 GiB can burst to a maximum throughput of 250 MiB/s.
+ Volumes that are 334 GiB and larger deliver 250 MiB/s.

Throughput for a `gp2` volume can be calculated using the following formula, up to the throughput limit of 250 MiB/s:

```
Throughput in MiB/s = IOPS performance × I/O size in KiB / 1,024
```
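For example, the formula can be checked with a couple of values (a Python sketch; the function name is illustrative):

```python
def gp2_throughput_mib(iops, io_size_kib, limit_mib=250):
    """Throughput = IOPS × I/O size / 1,024, capped at the volume's limit."""
    return min(iops * io_size_kib / 1024, limit_mib)

print(gp2_throughput_mib(3000, 16))   # 46.875 MiB/s at 16 KiB I/O
print(gp2_throughput_mib(1000, 256))  # 250 MiB/s -- capped at the limit
```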

### `gp2` volume size
<a name="gp2-size"></a>

A `gp2` volume can range in size from 1 GiB to 16 TiB. Keep in mind that volume performance scales linearly with the volume size.

# Amazon EBS Provisioned IOPS SSD volumes
<a name="provisioned-iops"></a>

Provisioned IOPS SSD volumes are backed by solid-state drives (SSDs). They are the highest performance Amazon EBS storage volumes designed for critical, IOPS-intensive, and throughput-intensive workloads that require low latency. Provisioned IOPS SSD volumes deliver their provisioned IOPS performance 99.9 percent of the time.

**Topics**
+ [Provisioned IOPS SSD (`io2`) Block Express volumes](#io2-block-express)
+ [Provisioned IOPS SSD (`io1`) volumes](#EBSVolumeTypes_piops)

## Provisioned IOPS SSD (`io2`) Block Express volumes
<a name="io2-block-express"></a>

`io2` Block Express volumes are built on the next generation of Amazon EBS storage server architecture, which is purpose-built to meet the performance requirements of the most demanding I/O-intensive applications that run on [instances built on the Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). With the highest durability and lowest latency, Block Express is ideal for running performance-intensive, mission-critical workloads such as Oracle, SAP HANA, Microsoft SQL Server, and SAS Analytics.

Block Express architecture increases performance and scale of `io2` volumes. Block Express servers communicate with [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) using the Scalable Reliable Datagram (SRD) networking protocol. This interface is implemented in the Nitro Card dedicated for Amazon EBS I/O function on the host hardware of the instance. It minimizes I/O delay and latency variation (network jitter), which provides faster and more consistent performance for your applications.

`io2` Block Express volumes are designed to provide 99.999 percent volume durability with an annual failure rate (AFR) no higher than 0.001 percent, which translates to a single volume failure per 100,000 running volumes over a one-year period. `io2` Block Express volumes are suited for workloads that benefit from a single volume that provides consistent sub-millisecond latency, and supports higher IOPS and throughput than gp3 volumes.

When attached to [Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html), `io2` Block Express volumes are designed to deliver an average latency of under 500 microseconds for 16 KiB I/O operations. `io2` Block Express volumes also deliver better outlier latency compared to General Purpose volumes, reducing the frequency of I/Os exceeding 800 microseconds by over 10 times.

**Topics**
+ [Considerations](#io2-bx-considerations)
+ [Performance](#io2-bx-perf)

### Considerations
<a name="io2-bx-considerations"></a>
+ `io2` Block Express volumes are available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions.
+ As of **April 30, 2025**, all new and previously created `io2` volumes are `io2` Block Express volumes.
+ [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) support volumes provisioned with up to 256,000 IOPS. Other instance types can be attached to volumes provisioned with up to 64,000 IOPS, but can achieve up to 32,000 IOPS.

### Performance
<a name="io2-bx-perf"></a>

`io2` Block Express volumes have the following characteristics:
+ Average latency under 500 microseconds for 16 KiB I/O size. Better outlier latency compared to General Purpose volumes, reducing the frequency of I/Os exceeding 800 microseconds by over 10 times.
+ Storage capacity up to 64 TiB (65,536 GiB).
+ Provisioned IOPS up to 256,000, with an IOPS:GiB ratio of 1,000:1. Maximum IOPS can be provisioned with volumes 256 GiB and larger (1,000 IOPS × 256 GiB = 256,000 IOPS).
**Note**  
You can achieve up to 256,000 IOPS with [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). On other instances, you can achieve up to 32,000 IOPS.
+ Volume throughput up to 4,000 MiB/s. Throughput scales proportionally at a rate of 0.256 MiB/s per provisioned IOPS. Maximum throughput can be achieved at 16,000 IOPS or higher.
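The IOPS and throughput scaling above can be expressed directly. A Python sketch (the function names are illustrative, not part of any AWS SDK):

```python
def io2bx_max_iops(size_gib):
    """1,000 IOPS per GiB, up to the 256,000 IOPS ceiling."""
    return min(1000 * size_gib, 256_000)

def io2bx_max_throughput(iops):
    """0.256 MiB/s per provisioned IOPS, up to 4,000 MiB/s."""
    return min(0.256 * iops, 4000)

print(io2bx_max_iops(256))          # 256 GiB is the smallest size for 256,000 IOPS
print(io2bx_max_throughput(16000))  # 4000 -- maximum throughput reached
```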

![\[Throughput limits for io2 Block Express volumes\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/io2_bx.png)


## Provisioned IOPS SSD (`io1`) volumes
<a name="EBSVolumeTypes_piops"></a>

Provisioned IOPS SSD (`io1`) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time.

`io1` volumes are designed to provide 99.8 percent to 99.9 percent volume durability with an annual failure rate (AFR) no higher than 0.2 percent, which translates to a maximum of two volume failures per 1,000 running volumes over a one-year period.

`io1` volumes are available for all Amazon EC2 instance types.

**Performance**  
`io1` volumes can range in size from 4 GiB to 16 TiB and you can provision from 100 IOPS up to 64,000 IOPS per volume. The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1. For example, a 100 GiB `io1` volume can be provisioned with up to 5,000 IOPS.

The maximum IOPS can be provisioned for volumes that are 1,280 GiB or larger (50 × 1,280 GiB = 64,000 IOPS).
+ `io1` volumes provisioned with up to 32,000 IOPS support a maximum I/O size of 256 KiB and yield as much as 500 MiB/s of throughput. With the I/O size at the maximum, peak throughput is reached at 2,000 IOPS.
+ `io1` volumes provisioned with more than 32,000 IOPS (up to the maximum of 64,000 IOPS) yield a linear increase in throughput at a rate of 16 KiB per provisioned IOPS. For example, a volume provisioned with 48,000 IOPS can support up to 750 MiB/s of throughput (16 KiB per provisioned IOPS × 48,000 provisioned IOPS = 750 MiB/s).
+ To achieve the maximum throughput of 1,000 MiB/s, a volume must be provisioned with 64,000 IOPS (16 KiB per provisioned IOPS × 64,000 provisioned IOPS = 1,000 MiB/s).
+ You can achieve up to 64,000 IOPS only on [ Nitro-based instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html). On other instances, you can achieve up to 32,000 IOPS.

The following graph illustrates these performance characteristics:

![\[Throughput limits for io1 volumes\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/io1_throughput.png)
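The `io1` rules above can be sketched in Python (illustrative names; sizes in GiB, throughput in MiB/s):

```python
def io1_max_provisioned_iops(size_gib):
    """50:1 IOPS-to-size ratio, 100 IOPS minimum, 64,000 IOPS ceiling."""
    return min(max(50 * size_gib, 100), 64_000)

def io1_max_throughput(iops):
    """Maximum throughput (MiB/s) for a given provisioned IOPS setting."""
    if iops <= 32_000:
        # Up to 256 KiB per I/O, capped at 500 MiB/s (reached at 2,000 IOPS)
        return min(iops * 256 / 1024, 500)
    # Above 32,000 IOPS, throughput grows at 16 KiB per provisioned IOPS
    return iops * 16 / 1024

print(io1_max_provisioned_iops(100))  # 5000, the 100 GiB example above
print(io1_max_throughput(48_000))     # 750.0 MiB/s
print(io1_max_throughput(64_000))     # 1000.0 MiB/s
```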


Your per-I/O latency experience depends on the provisioned IOPS and on your workload profile. For the best I/O latency experience, ensure that you provision IOPS to meet the I/O profile of your workload.

# Amazon EBS Throughput Optimized HDD and Cold HDD volumes
<a name="hdd-vols"></a>

The HDD-backed volumes provided by Amazon EBS fall into these categories:
+ Throughput Optimized HDD — A low-cost HDD designed for frequently accessed, throughput-intensive workloads.
+ Cold HDD — The lowest-cost HDD designed for less frequently accessed workloads.

**Topics**
+ [Limitations on per-instance throughput](#throughput-limitations)
+ [Throughput Optimized HDD volumes](#EBSVolumeTypes_st1)
+ [Cold HDD volumes](#EBSVolumeTypes_sc1)
+ [Performance considerations when using HDD volumes](#EBSVolumeTypes_considerations)
+ [Monitor the burst bucket balance for volumes](#monitoring_burstbucket-hdd)

## Limitations on per-instance throughput
<a name="throughput-limitations"></a>

Throughput for `st1` and `sc1` volumes is always determined by the smaller of the following:
+ Throughput limits of the volume
+ Throughput limits of the instance

As for all Amazon EBS volumes, we recommend that you select an appropriate EBS-optimized EC2 instance to avoid network bottlenecks.

## Throughput Optimized HDD volumes
<a name="EBSVolumeTypes_st1"></a>

Throughput Optimized HDD (`st1`) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable `st1` volumes are not supported. 

Throughput Optimized HDD (`st1`) volumes, though similar to Cold HDD (`sc1`) volumes, are designed to support *frequently* accessed data.

**Note**  
This volume type is optimized for workloads involving large, sequential I/O, and we recommend that customers with workloads performing small, random I/O use [Amazon EBS General Purpose SSD volumes](general-purpose.md) or [Amazon EBS Provisioned IOPS SSD volumes](provisioned-iops.md). For more information, see [Inefficiency of small read/writes on HDD](#inefficiency).

Throughput Optimized HDD (`st1`) volumes attached to EBS-optimized instances are designed to offer consistent performance, delivering at least 90 percent of the expected throughput performance 99 percent of the time in a given year.

### Throughput credits and burst performance
<a name="ST1ThroughputBurst"></a>

Like `gp2`, `st1` uses a burst bucket model for performance. Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits. Volume size also determines the burst throughput of your volume, which is the rate at which you can spend credits when they are available. Larger volumes have higher baseline and burst throughput. The more credits your volume has, the longer it can drive I/O at the burst level.

The following diagram shows the burst bucket behavior for `st1`.

![\[st1 burst bucket\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/st1-burst-bucket.png)


Subject to throughput and throughput-credit caps, the available throughput of an `st1` volume is expressed by the following formula:

```
(Volume size) × (Credit accumulation rate per TiB) = Throughput
```

For a 1-TiB `st1` volume, burst throughput is limited to 250 MiB/s, the bucket fills with credits at 40 MiB/s, and it can hold up to 1 TiB-worth of credits.

Larger volumes scale these limits linearly, with throughput capped at a maximum of 500 MiB/s. After the bucket is depleted, throughput is limited to the baseline rate of 40 MiB/s per TiB. 

On volume sizes ranging from 0.125 TiB to 16 TiB, baseline throughput varies from 5 MiB/s to a cap of 500 MiB/s, which is reached at 12.5 TiB as follows:

```
            40 MiB/s
12.5 TiB × ---------- = 500 MiB/s
             1 TiB
```

Burst throughput varies from 31 MiB/s to a cap of 500 MiB/s, which is reached at 2 TiB as follows:

```
         250 MiB/s
2 TiB × ---------- = 500 MiB/s
          1 TiB
```

The following table states the full range of base and burst throughput values for `st1`.


| Volume size (TiB) | ST1 base throughput (MiB/s) | ST1 burst throughput (MiB/s) | 
| --- | --- | --- | 
| 0.125 | 5 | 31 | 
| 0.5 | 20 | 125 | 
| 1 | 40 | 250 | 
| 2 | 80 | 500 | 
| 3 | 120 | 500 | 
| 4 | 160 | 500 | 
| 5 | 200 | 500 | 
| 6 | 240 | 500 | 
| 7 | 280 | 500 | 
| 8 | 320 | 500 | 
| 9 | 360 | 500 | 
| 10 | 400 | 500 | 
| 11 | 440 | 500 | 
| 12 | 480 | 500 | 
| 12.5 | 500 | 500 | 
| 13 | 500 | 500 | 
| 14 | 500 | 500 | 
| 15 | 500 | 500 | 
| 16 | 500 | 500 | 
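These values follow from the 40 MiB/s-per-TiB baseline and 250 MiB/s-per-TiB burst rates, each capped at 500 MiB/s. A quick Python check (illustrative names; the table rounds fractional results down):

```python
def st1_baseline_mib(size_tib):
    """st1 baseline throughput: 40 MiB/s per TiB, capped at 500 MiB/s."""
    return min(40 * size_tib, 500)

def st1_burst_mib(size_tib):
    """st1 burst throughput: 250 MiB/s per TiB, capped at 500 MiB/s."""
    return min(250 * size_tib, 500)

print(st1_baseline_mib(1), st1_burst_mib(1))  # 40 250
print(st1_baseline_mib(12.5))                 # 500.0 -- baseline cap reached
```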

The following diagram plots the table values:

![\[Comparing st1 base and burst throughput\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/st1_base_v_burst.png)


**Note**  
When you create a snapshot of a Throughput Optimized HDD (`st1`) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress.

For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see [Monitor the burst bucket balance for volumes](#monitoring_burstbucket-hdd).

## Cold HDD volumes
<a name="EBSVolumeTypes_sc1"></a>

Cold HDD (`sc1`) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than `st1`, `sc1` is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, `sc1` provides inexpensive block storage. Bootable `sc1` volumes are not supported.

Cold HDD (`sc1`) volumes, though similar to Throughput Optimized HDD (`st1`) volumes, are designed to support *infrequently* accessed data.

**Note**  
This volume type is optimized for workloads involving large, sequential I/O, and we recommend that customers with workloads performing small, random I/O use [Amazon EBS General Purpose SSD volumes](general-purpose.md) or [Amazon EBS Provisioned IOPS SSD volumes](provisioned-iops.md). For more information, see [Inefficiency of small read/writes on HDD](#inefficiency).

Cold HDD (`sc1`) volumes attached to EBS-optimized instances are designed to offer consistent performance, delivering at least 90 percent of the expected throughput performance 99 percent of the time in a given year.

### Throughput credits and burst performance
<a name="SC1ThroughputBurst"></a>

Like `gp2`, `sc1` uses a burst bucket model for performance. Volume size determines the baseline throughput of your volume, which is the rate at which the volume accumulates throughput credits. Volume size also determines the burst throughput of your volume, which is the rate at which you can spend credits when they are available. Larger volumes have higher baseline and burst throughput. The more credits your volume has, the longer it can drive I/O at the burst level.

![\[sc1 burst bucket\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/sc1-burst-bucket.png)


Subject to throughput and throughput-credit caps, the available throughput of an `sc1` volume is expressed by the following formula:

```
(Volume size) × (Credit accumulation rate per TiB) = Throughput
```

For a 1-TiB `sc1` volume, burst throughput is limited to 80 MiB/s, the bucket fills with credits at 12 MiB/s, and it can hold up to 1 TiB-worth of credits.

Larger volumes scale these limits linearly, with throughput capped at a maximum of 250 MiB/s. After the bucket is depleted, throughput is limited to the baseline rate of 12 MiB/s per TiB. 

On volume sizes ranging from 0.125 TiB to 16 TiB, baseline throughput varies from 1.5 MiB/s to a maximum of 192 MiB/s, which is reached at 16 TiB as follows:

```
           12 MiB/s
16 TiB × ---------- = 192 MiB/s
            1 TiB
```

Burst throughput varies from 10 MiB/s to a cap of 250 MiB/s, which is reached at 3.125 TiB as follows:

```
             80 MiB/s
3.125 TiB × ----------- = 250 MiB/s
              1 TiB
```

The following table states the full range of base and burst throughput values for `sc1`:


| Volume size (TiB) | SC1 base throughput (MiB/s) | SC1 burst throughput (MiB/s) | 
| --- | --- | --- | 
| 0.125 | 1.5 | 10 | 
| 0.5 | 6 | 40 | 
| 1 | 12 | 80 | 
| 2 | 24 | 160 | 
| 3 | 36 | 240 | 
| 3.125 | 37.5 | 250 | 
| 4 | 48 | 250 | 
| 5 | 60 | 250 | 
| 6 | 72 | 250 | 
| 7 | 84 | 250 | 
| 8 | 96 | 250 | 
| 9 | 108 | 250 | 
| 10 | 120 | 250 | 
| 11 | 132 | 250 | 
| 12 | 144 | 250 | 
| 13 | 156 | 250 | 
| 14 | 168 | 250 | 
| 15 | 180 | 250 | 
| 16 | 192 | 250 | 
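As with `st1`, the values follow from the per-TiB rates: a 12 MiB/s baseline and an 80 MiB/s burst rate, with burst capped at 250 MiB/s. A quick Python check (illustrative names):

```python
def sc1_baseline_mib(size_tib):
    # 12 MiB/s per TiB; the 16 TiB maximum size yields the 192 MiB/s peak
    return 12 * size_tib

def sc1_burst_mib(size_tib):
    # 80 MiB/s per TiB, capped at 250 MiB/s (reached at 3.125 TiB)
    return min(80 * size_tib, 250)

print(sc1_baseline_mib(16))  # 192
print(sc1_burst_mib(3.125))  # 250.0 -- burst cap reached
```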

The following diagram plots the table values:

![\[Comparing sc1 base and burst throughput\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/sc1_base_v_burst.png)


**Note**  
When you create a snapshot of a Cold HDD (`sc1`) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress.

For information about using CloudWatch metrics and alarms to monitor your burst bucket balance, see [Monitor the burst bucket balance for volumes](#monitoring_burstbucket-hdd).

## Performance considerations when using HDD volumes
<a name="EBSVolumeTypes_considerations"></a>

For optimal throughput results using HDD volumes, plan your workloads with the following considerations in mind.

### Comparing Throughput Optimized HDD and Cold HDD
<a name="ST1vSC1"></a>

The `st1` and `sc1` bucket sizes vary according to volume size, and a full bucket contains enough tokens for a full volume scan. However, larger `st1` and `sc1` volumes take longer for the volume scan to complete because of per-instance and per-volume throughput limits. Volumes attached to smaller instances are limited to the per-instance throughput rather than the `st1` or `sc1` throughput limits.

Both `st1` and `sc1` are designed for performance consistency of 90 percent of burst throughput 99 percent of the time. Non-compliant periods are approximately uniformly distributed, targeting 99 percent of expected total throughput each hour.

In general, scan times are expressed by this formula:

```
 Volume size
------------ = Scan time
 Throughput
```

For example, taking the performance consistency guarantees and other optimizations into account, an `st1` customer with a 5-TiB volume can expect to complete a full volume scan in 2.91 to 3.27 hours. 
+ Optimal scan time

  ```
     5 TiB            5 TiB
  ----------- = ------------------ = 10,486 seconds = 2.91 hours 
   500 MiB/s     0.00047684 TiB/s
  ```
+ Maximum scan time

  ```
    2.91 hours
  -------------- = 3.27 hours
   (0.90)(0.99) <-- From expected performance of 90% of burst 99% of the time
  ```

Similarly, an `sc1` customer with a 5-TiB volume can expect to complete a full volume scan in 5.83 to 6.54 hours.
+ Optimal scan time

  ```
     5 TiB             5 TiB
  ----------- = ------------------- = 20,972 seconds = 5.83 hours 
   250 MiB/s     0.000238418 TiB/s
  ```
+ Maximum scan time

  ```
    5.83 hours
  -------------- = 6.54 hours
   (0.90)(0.99)
  ```
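The scan-time arithmetic for both volume types can be packaged into a small helper. This sketch assumes a full burst bucket and sufficient instance throughput; the 0.90 × 0.99 factor applies the performance-consistency guarantee described earlier, and the function name is illustrative.

```python
MIB_PER_TIB = 1024 * 1024

def scan_time_hours(size_tib, burst_mib_s, worst_case=False):
    """Hours to scan a full volume at the given burst throughput, assuming
    a full bucket and no instance-level throttling. With worst_case=True,
    apply the 90%-of-burst-99%-of-the-time consistency guarantee."""
    hours = size_tib * MIB_PER_TIB / burst_mib_s / 3600
    if worst_case:
        hours /= 0.90 * 0.99
    return round(hours, 2)

print(scan_time_hours(5, 500))                   # st1 optimal: 2.91 hours
print(scan_time_hours(5, 500, worst_case=True))  # st1 maximum: 3.27 hours
print(scan_time_hours(5, 250))                   # sc1 optimal: 5.83 hours
print(scan_time_hours(5, 250, worst_case=True))  # sc1 maximum: 6.54 hours
```

The same helper reproduces the scan times in the table that follows when given each volume's burst throughput.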

The following table shows ideal scan times for volumes of various size, assuming full buckets and sufficient instance throughput.


| Volume size (TiB) | ST1 scan time with burst (hours)\* | SC1 scan time with burst (hours)\* | 
| --- | --- | --- | 
| 1 | 1.17 | 3.64 | 
| 2 | 1.17 | 3.64 | 
| 3 | 1.75 | 3.64 | 
| 4 | 2.33 | 4.66 | 
| 5 | 2.91 | 5.83 | 
| 6 | 3.50 | 6.99 | 
| 7 | 4.08 | 8.16 | 
| 8 | 4.66 | 9.32 | 
| 9 | 5.24 | 10.49 | 
| 10 | 5.83 | 11.65 | 
| 11 | 6.41 | 12.82 | 
| 12 | 6.99 | 13.98 | 
| 13 | 7.57 | 15.15 | 
| 14 | 8.16 | 16.31 | 
| 15 | 8.74 | 17.48 | 
| 16 | 9.32 | 18.64 | 

\* These scan times assume an average queue depth (rounded to the nearest whole number) of four or more when performing 1 MiB of sequential I/O.

Therefore, if you have a throughput-oriented workload that needs to complete scans quickly (up to 500 MiB/s), or that requires several full volume scans a day, use `st1`. If you are optimizing for cost, your data is accessed relatively infrequently, and you don’t need more than 250 MiB/s of scanning performance, use `sc1`.

### Inefficiency of small read/writes on HDD
<a name="inefficiency"></a>

The performance model for `st1` and `sc1` volumes is optimized for sequential I/Os, favoring high-throughput workloads, offering acceptable performance on workloads with mixed IOPS and throughput, and discouraging workloads with small, random I/O.

For example, a random I/O request of 1 MiB or less consumes a full 1 MiB I/O credit. However, small sequential I/Os are merged into 1 MiB blocks, so each merged 1 MiB block consumes only a single 1 MiB I/O credit.
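A simplified credit-accounting model makes the cost of small random I/O concrete. This sketch is illustrative, not AWS's actual accounting: it merges contiguous requests into sequential runs and charges one 1 MiB credit per 1 MiB block (or fraction thereof).

```python
import math

MIB_KIB = 1024  # KiB per MiB

def credits_consumed(requests):
    """Estimate I/O credits (in MiB) for a list of (offset_kib, size_kib)
    requests under a simplified st1/sc1 model: contiguous requests merge
    into sequential runs, and each run is charged one credit per 1 MiB
    block (or fraction thereof)."""
    credits = 0
    run_kib = 0        # size of the current sequential run
    prev_end = None    # end offset of the previous request
    for offset, size in requests:
        if prev_end is not None and offset == prev_end:
            run_kib += size          # contiguous: extend the run
        else:
            if run_kib:
                credits += math.ceil(run_kib / MIB_KIB)
            run_kib = size           # start a new run
        prev_end = offset + size
    if run_kib:
        credits += math.ceil(run_kib / MIB_KIB)
    return credits

# Four sequential 256 KiB reads merge into one 1 MiB block: 1 credit.
print(credits_consumed([(0, 256), (256, 256), (512, 256), (768, 256)]))
# Four random 256 KiB reads each cost a full 1 MiB credit: 4 credits.
print(credits_consumed([(0, 256), (10000, 256), (20000, 256), (30000, 256)]))
```

In this model the same 1 MiB of data costs four times as many credits when read randomly as when read sequentially, which is the behavior the paragraph above describes.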

## Monitor the burst bucket balance for volumes
<a name="monitoring_burstbucket-hdd"></a>

You can monitor the burst bucket level for `st1` and `sc1` volumes using the Amazon EBS `BurstBalance` metric available in Amazon CloudWatch. This metric shows the throughput credits for `st1` and `sc1` remaining in the burst bucket. For more information about the `BurstBalance` metric and other metrics related to I/O, see [Amazon EBS I/O characteristics and monitoring](ebs-io-characteristics.md). CloudWatch also allows you to set an alarm that notifies you when the `BurstBalance` value falls to a certain level. For more information, see [Creating CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).
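As a sketch, such an alarm can be defined programmatically. The parameters below follow the CloudWatch `PutMetricAlarm` API shape for the `BurstBalance` metric in the `AWS/EBS` namespace; the volume ID, threshold, alarm name, and SNS topic ARN are placeholder values.

```python
# Parameters for a CloudWatch alarm on an EBS volume's BurstBalance metric.
# With boto3 installed, pass this dict to the API call:
#     boto3.client("cloudwatch").put_metric_alarm(**alarm)
# The volume ID, threshold, and SNS topic ARN below are placeholders.
alarm = {
    "AlarmName": "sc1-burst-balance-low",
    "Namespace": "AWS/EBS",
    "MetricName": "BurstBalance",
    "Dimensions": [{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                  # evaluate over 5-minute periods
    "EvaluationPeriods": 1,
    "Threshold": 20.0,              # notify when the balance falls to 20%
    "ComparisonOperator": "LessThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ebs-alerts"],
}
```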