Boto3 Response to JSON String in Python

When working programmatically with AWS infrastructure in Python, we need to use Boto3. Most of the time we will need to check the output of a Boto3 client call by printing it to the terminal. Unfortunately, printing the boto3 response directly produces output that is not easy to understand.

The best way to work with the boto3 response is to convert it to a JSON string before printing it to the terminal.

Printing Boto3 Client response

To see how we can output a boto3 client's response to the terminal, we will initially use the describe_regions function of the boto3 EC2 client.

See the Python script below.

import boto3

client = boto3.client('ec2')

response = client.describe_regions(RegionNames=['us-east-1'])
print(response)

Running the Python script will result in the long, single-line output below.

{'Regions': [{'Endpoint': '', 'RegionName': 'us-east-1', 'OptInStatus': 'opt-in-not-required'}], 'ResponseMetadata': {'RequestId': '9437271e-6132-468f-b19d-535f9d7bda09', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '9437271e-6132-468f-b19d-535f9d7bda09', 'cache-control': 'no-cache, no-store', 'strict-transport-security': 'max-age=31536000; includeSubDomains', 'content-type': 'text/xml;charset=UTF-8', 'content-length': '449', 'date': 'Sat, 03 Apr 2021 08:30:15 GMT', 'server': 'AmazonEC2'}, 'RetryAttempts': 0}}


With the output above, it is hard to understand the response or search for the specific part of it that you are looking for.

Note: The response object of the boto3 client is a dictionary (dict). The examples below will work even if you replace the response object with any other dictionary.

Converting Boto3 Response to JSON String

To more easily understand the returned response of the boto3 client, it is best to convert it to a JSON string before printing it. To convert the response to a JSON string, use the json.dumps function.
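As a minimal sketch, this is what the conversion looks like. The response dictionary below uses illustrative values rather than a live AWS call, so it can be run without credentials:

```python
import json

# Illustrative response dictionary, similar in shape to what the
# boto3 EC2 client returns (values here are made up for the example).
response = {
    'Regions': [{'Endpoint': 'ec2.us-east-1.amazonaws.com',
                 'RegionName': 'us-east-1',
                 'OptInStatus': 'opt-in-not-required'}],
    'ResponseMetadata': {'HTTPStatusCode': 200},
}

# indent=4 pretty-prints the JSON string; default=str stringifies
# values json cannot serialize (e.g. the datetime objects that some
# boto3 responses contain).
print(json.dumps(response, indent=4, default=str))
```

The default=str argument is optional, but it avoids a TypeError when the response contains datetime values.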

Continue reading Boto3 Response to JSON String in Python

Listing AWS Regions using boto3 Python

If you want to programmatically retrieve the AWS Regions, one of the best ways to do it is via the Python boto3 package.

In boto3, specifically the EC2 client, there is a function named describe_regions, which retrieves the Regions that are available in AWS.

See different uses of the describe_regions function below.

Basic code for retrieving the AWS Regions using boto3

Here is the basic code for getting the AWS Regions in boto3.

import boto3

ec2_client = boto3.client('ec2')

response = ec2_client.describe_regions()
print(response)


response is a dictionary that contains all active regions in the AWS Account.

The output of the Python script above can be seen below. I decided to format the output for easier reading, and to show the whole response so it is easy to understand how to manipulate it.
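As a quick sketch of how the response can be manipulated (using an illustrative, trimmed response dictionary instead of a live call), the Region names can be pulled out of the 'Regions' list:

```python
# Illustrative describe_regions response (trimmed, made-up values).
response = {
    'Regions': [
        {'Endpoint': 'ec2.us-east-1.amazonaws.com', 'RegionName': 'us-east-1'},
        {'Endpoint': 'ec2.eu-west-1.amazonaws.com', 'RegionName': 'eu-west-1'},
    ]
}

# Each item in 'Regions' is a dictionary; collect just the names.
region_names = [region['RegionName'] for region in response['Regions']]
print(region_names)  # → ['us-east-1', 'eu-west-1']
```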

Continue reading Listing AWS Regions using boto3 Python

Understanding gp3 IOPS – EBS Volumes

In my goal to understand when gp3 is cheaper than gp2 volumes, I need to know what the performance settings of each Elastic Block Store (EBS) Volume Type are.

I have already discussed the Throughput of gp3 and gp2 in other posts. If you do not know what Throughput is, then I suggest reading the post about EBS Volumes Throughput.

Let’s go straight to discussing gp3 IOPS.

What is the IOPS of gp3 Volumes?

From the AWS documentation the IOPS of gp3 Volumes has a minimum of 3,000 IOPS and a maximum of 16,000 IOPS.

Minimum IOPS: 3,000 IOPS
Maximum IOPS: 16,000 IOPS

The advantage of gp3 Volumes over gp2 is that you can set their IOPS regardless of the volume size. This is very unlike gp2, where the IOPS and Throughput are highly dependent on the volume size.

With gp3 Volume types we also no longer have to think about bursting IOPS, or about sustaining high-IOPS operations for an extended period of time, as long as you set your gp3 IOPS properly.

Maximum IOPS per Volume Size

You may think that whatever the volume size, from 1 GiB (minimum) to 16 TiB (maximum), you can assign any IOPS as long as it is between 3,000 IOPS and 16,000 IOPS.
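There is a catch: per the AWS EBS documentation, gp3 provisioned IOPS are capped at a ratio of 500 IOPS per GiB, so small volumes cannot reach the 16,000 IOPS maximum. A rough sketch of that ceiling:

```python
def gp3_max_iops(size_gib):
    """Approximate gp3 IOPS ceiling: 500 IOPS per GiB of volume size,
    never below the 3,000 IOPS baseline and never above 16,000 IOPS."""
    return max(3000, min(16000, size_gib * 500))

print(gp3_max_iops(8))   # 4000  — an 8 GiB volume tops out at 4,000 IOPS
print(gp3_max_iops(32))  # 16000 — 32 GiB is enough to reach the maximum
print(gp3_max_iops(1))   # 3000  — the free baseline still applies
```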

Continue reading Understanding gp3 IOPS – EBS Volumes

gp2 vs gp3 Cost Comparison – EBS Volumes

Amazon Web Services (AWS) has launched the gp3 type EBS Volume and they are saying that it is 20% cheaper compared to gp2 types. Since the gp3 has a different pricing model compared to the gp2, I decided to go down this rabbit hole to see if the statement that gp3 is 20% cheaper than gp2 is always true.

My goal here is to confidently know that moving from gp2 to gp3 will give us significant savings without degraded performance.

TL;DR – Most of the time gp3 is cheaper than gp2. There are scenarios where the gp3 is more expensive than the gp2.

gp2 and gp3 EBS Pricing Models

Based on the Amazon EBS Pricing page this is the pricing of gp2 and gp3 Volumes.

gp2 Pricing

gp2 Volume Pricing: $0.10 per GB-month of provisioned storage

gp3 Pricing

gp3 Storage Pricing: $0.08/GB-month
gp3 IOPS Pricing: 3,000 IOPS free; $0.005/provisioned IOPS-month over 3,000
gp3 Throughput Pricing: 125 MB/s free; $0.04/provisioned MB/s-month over 125

Note: Pricing above is based on the N. Virginia Region. Different regions have different pricing, but the concepts and percent discount will be the same.

The 20% Cheaper Statement

If we only look at the pricing of gp3 in terms of storage size ($0.08/GB-month) compared to the pricing of gp2 ($0.10/GB-month), we can definitely say that gp3 is 20% cheaper than gp2 Volumes.
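As a sketch of the arithmetic, using the N. Virginia prices quoted above: a volume that stays within the free gp3 allowances (3,000 IOPS and 125 MB/s) is exactly 20% cheaper, but provisioning extra IOPS can flip the comparison.

```python
size_gib = 500  # example volume size

# gp2: a single price per GB-month.
gp2_cost = size_gib * 0.10  # ≈ $50.00

# gp3 at the free baseline (3,000 IOPS and 125 MB/s included).
gp3_baseline = size_gib * 0.08  # ≈ $40.00 → 20% cheaper than gp2

# gp3 with 6,000 provisioned IOPS: each IOPS beyond the free 3,000
# costs $0.005 per month.
gp3_high_iops = size_gib * 0.08 + (6000 - 3000) * 0.005  # ≈ $55.00 → more than gp2

print(gp2_cost, gp3_baseline, gp3_high_iops)
```

This illustrates the TL;DR above: the storage-only comparison always favors gp3, but heavily provisioned IOPS or throughput can make gp3 cost more than gp2.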

Continue reading gp2 vs gp3 Cost Comparison – EBS Volumes

gp2 Throughput Explained – EBS Volumes

I wanted to investigate whether the statement of Amazon Web Services (AWS) that “gp3 is 20% cheaper than gp2” is always true. That is why I am creating this series of posts, to investigate when we can say that gp3 is really cheaper than gp2.

I have already written about Throughput of Elastic Block Store (EBS) Volumes and Throughput of gp3 EBS Volume type.

In this post, I will be discussing the Throughput of gp2 Volumes.

What is the Throughput of gp2 Volumes?

A quick look in the AWS documentation, here are the details of gp2 Volumes.

Minimum Size: 1 GiB
Maximum Size: 16 TiB
Price*: $0.10 per GB-month of provisioned storage
Maximum IOPS: 16,000 IOPS
Maximum Throughput: 250 MiB/s

* price is based on the N. Virginia Amazon Web Services (AWS) Region.

The table in the Solid State Drive (SSD) part of the documentation only mentions the Maximum Throughput; it never really says what the actual Throughput is, or what conditions affect the Throughput of gp2 volumes.

Continue reading gp2 Throughput Explained – EBS Volumes

gp3 Throughput Explained – EBS Volumes

In order for me to understand when a gp3 volume becomes cheaper than a gp2 volume, I need to understand the difference between the two EBS Volume types.

We have discussed before what is Throughput and how it affects performance of EBS Volumes. In this post we will be discussing the Throughput of gp3 volumes.

What is the Throughput of gp3 Volumes?

gp3 volumes have the following details.

Minimum Size: 1 GiB
Maximum Size: 16 TiB
Price of Size*: $0.08/GiB-month
Minimum IOPS: 3,000 IOPS
Maximum IOPS: 16,000 IOPS
Price of IOPS*: 3,000 IOPS free; $0.005/provisioned IOPS-month over 3,000 IOPS
Minimum Throughput: 125 MiB/s
Maximum Throughput: 1,000 MiB/s
Price of Throughput*: 125 MiB/s free; $0.04/provisioned MiB/s-month over 125 MiB/s

* price is based on the N. Virginia Amazon Web Services (AWS) Region.

For gp3 EBS Volumes, the throughput you set is sustained throughout its life unless you modify it.

Unlike gp2, which has a Throughput that is dependent on the size of the volume and its burst credits, gp3’s Throughput is dependent on what you set and how much you are willing to pay for it.
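As a sketch of the arithmetic, using the pricing above: only the throughput provisioned beyond the free 125 MiB/s is billed.

```python
provisioned_mib_s = 500   # example gp3 throughput setting
free_mib_s = 125          # included at no charge
price_per_mib_s = 0.04    # $/provisioned MiB/s-month (N. Virginia)

# Only the amount above the free allowance is billed.
monthly_throughput_cost = max(0, provisioned_mib_s - free_mib_s) * price_per_mib_s
print(monthly_throughput_cost)  # ≈ $15.00 per month
```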

Continue reading gp3 Throughput Explained – EBS Volumes

EBS Volumes Throughput Explained

I have been using gp2 type Elastic Block Store (EBS) Volumes by default, so when the gp3 type was launched I was really curious about the difference between the two.

This led me into a rabbit hole, trying to look beyond the statement of Amazon Web Services (AWS) that “gp3 is 20% cheaper than gp2 EBS Volume types”.

I realized that I have a limited understanding of Throughput compared to IOPS, so in this post I will go into the details of the Throughput of EBS Volumes.

What is Throughput?

For an EBS Volume, Throughput is the total amount of data that the storage can read or write per second.

In the AWS documentation the unit for Throughput is MiB/s.

Here are some simple examples of how to compute Throughput.
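As one back-of-the-envelope sketch: Throughput is roughly the number of I/O operations per second multiplied by the size of each operation (the workload numbers below are made up for illustration).

```python
# Sketch: throughput ≈ IOPS × I/O size.
iops = 1000        # operations per second
io_size_kib = 256  # size of each read/write operation in KiB

throughput_mib_s = iops * io_size_kib / 1024  # convert KiB/s to MiB/s
print(throughput_mib_s)  # → 250.0 MiB/s
```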

Continue reading EBS Volumes Throughput Explained