Amazon S3 with Python Boto3 Library

Amazon S3 is the Simple Storage Service provided by Amazon Web Services (AWS) for object-based file storage. With the growth of big data applications and cloud computing, storing data in the cloud has become essential, since it puts the data right where cloud applications can process it.

In this tutorial, you will learn how to use the Amazon S3 service via the Python library Boto3. You will learn how to create S3 buckets and folders, and how to upload files to and access files from S3 buckets. By the end, you will have Python code that you can run on an EC2 instance to access your data stored in the cloud.

Introduction

Amazon Simple Storage Service (Amazon S3) is the data storage service provided by Amazon Web Services (AWS), used by many companies across different domains. It supports use cases such as data lakes and analytics, disaster recovery, data archiving, cloud-native application data, and data backup.

Why use S3 over EC2 for data storage?

  • S3 is highly scalable. EC2 requires explicit scaling when data volumes grow.
  • S3 is durable. Data in S3 is replicated across multiple data centers to protect against loss and failure. On EC2, you must take snapshots of the EBS volume to keep the data durable; any data that has not been snapshotted is lost once the EC2 instance is terminated.
  • S3 has security built in. EC2 requires you to install various software, depending on the OS, to keep the data secure.
  • S3 is pay-as-you-go. You pay only for the storage you consume and for requests to store and retrieve the data. EC2 is provisioned: you must define EBS volumes before you can launch an instance, and you pay for the entire volume even if only a fraction of it is used.
  • S3 makes file sharing much easier by providing direct download links. Sharing data on EC2 requires VPN configuration.
  • For large amounts of data that multiple applications need and that must be replicated, S3 is much cheaper than EC2, whose main purpose is computation.

AWS CLI Installation and Boto3 Configuration

In order to access S3 via Python, you will need to install the AWS CLI and configure the Boto3 Python library. I have already explained that in my previous post.

Follow along on how to Install AWS CLI and how to Configure and Install Boto3 Library from that post.


S3 Client

First, import the Boto3 library and create the S3 client.
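A minimal sketch, assuming your AWS credentials have already been configured through the AWS CLI:

    import boto3

    # Create a low-level client for the S3 service
    s3_client = boto3.client('s3')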

Getting a Response

Create a response variable and print it.
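For example, listing all the buckets in the account:

    # Ask S3 for every bucket this account owns and print the raw response
    response = s3_client.list_buckets()
    print(response)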

You get a JSON-style response (a Python dictionary).
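It looks roughly like this (abridged; the dates and owner fields are placeholders):

    {
        'ResponseMetadata': {...},
        'Buckets': [
            {'Name': 'testbuckethp', 'CreationDate': datetime.datetime(...)},
            {'Name': 'testbuckethp2', 'CreationDate': datetime.datetime(...)}
        ],
        'Owner': {'DisplayName': '...', 'ID': '...'}
    }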


Use the following function to extract the bucket names and creation dates into a DataFrame. You need to import Pandas first.
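One possible helper (the function name and layout are my own choices):

    import pandas as pd

    def list_s3_buckets(client):
        """Return a DataFrame with the name and creation time of each bucket."""
        response = client.list_buckets()
        # response['Buckets'] is a list of dicts with 'Name' and 'CreationDate'
        return pd.DataFrame(response['Buckets'], columns=['Name', 'CreationDate'])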

You can invoke the function as follows.
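For example (the variable name status is my own; it is reused later in the tutorial):

    # Collect the bucket listing into a DataFrame and display it
    status = list_s3_buckets(s3_client)
    print(status)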

As shown, I have two S3 buckets named testbuckethp and testbuckethp2. Bucket names must be globally unique across all of AWS, so you may need to put in some effort to come up with a name that is still available.


Create an S3 Bucket

Let us create a bucket from the Python terminal.
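A minimal sketch, assuming the default region us-east-1 (other regions require a CreateBucketConfiguration with a LocationConstraint):

    # Bucket names must be globally unique; this call fails if the name is taken
    response = s3_client.create_bucket(Bucket='testbuckethp3py')
    print(response)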

Output:
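The printed response looks roughly like this (abridged):

    {'ResponseMetadata': {...}, 'Location': '/testbuckethp3py'}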


Let us re-check the status DataFrame that lists all the buckets and their creation times.
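For example:

    # Re-run the helper to pick up the newly created bucket
    status = list_s3_buckets(s3_client)
    print(status)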

As you can see, I now have three buckets: testbuckethp, testbuckethp2, and the newly created testbuckethp3py.


Upload a File into the Bucket

You need to specify the path of the file you want to upload, the bucket name, and the name you want the file to have inside the bucket.
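A minimal sketch using the client's upload_file method:

    # upload_file(local_path, bucket_name, object_key)
    s3_client.upload_file('testfile.txt', 'testbuckethp3py', 'testfile_s3.txt')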

In this case, there is a file called testfile.txt in the same directory as the Python script, and it is uploaded to the newly created S3 bucket under the name testfile_s3.txt. The call returns nothing on success, so if it completes without raising an error, the upload worked.


Creating Folder Structure

S3 does NOT have a folder structure at all, even though the AWS web console has a button to create a folder, and even the official documentation titles the operation “Create Folder”.

In fact, S3 is simply a key-value storage system. Each object is given a key that is unique within the bucket, which makes object access faster than directory-level file access. Within a key, the / character is interpreted as a directory separator, so you can specify as many nested directory levels as you like without actually creating any of them.

Putting an object is very similar to uploading a file, except that it takes the body (contents) of the file rather than the file path.
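A sketch, assuming testfile.txt is in the working directory:

    # Read the raw bytes of the local file
    with open('testfile.txt', 'rb') as f:
        data = f.read()

    # The '/' in the key makes the console display a testdir folder
    s3_client.put_object(Bucket='testbuckethp3py',
                         Key='testdir/testfile.txt',
                         Body=data)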

The code first gets the body of the file by reading it. Next, it creates the directory-like structure in the bucket, as specified by the key ‘testdir/testfile.txt’.

In the S3 console, the bucket now shows a testdir folder, and inside that folder is the file testfile.txt.

This way, you can structure your data however you desire.

S3 Application in Data Science

In order to understand the application of S3 in data science, let us upload some data to S3. For this tutorial, I am using US City Population data from data.gov, which can be found here.

I have extracted a small piece of the data, containing New York State data only. Just as a text file can be uploaded as an object, the CSV file can be uploaded as well.
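A sketch; the local file name new_york_population.csv is my own placeholder, while the ‘New York/’ key prefix produces the folder described below:

    # Upload the extracted New York subset under a 'New York/' prefix
    with open('new_york_population.csv', 'rb') as f:
        csv_data = f.read()

    response = s3_client.put_object(Bucket='testbuckethp3py',
                                    Key='New York/new_york_population.csv',
                                    Body=csv_data)
    print(response)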

Output:
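Roughly (abridged; the ETag value is a placeholder):

    {'ResponseMetadata': {..., 'HTTPStatusCode': 200, ...}, 'ETag': '"..."'}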

As you can see, it creates a new folder-like structure (New York), and inside that folder is the CSV file.

File Access from S3

In order to access the file, you need the resource object rather than the client object.

Create the resource object.
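A minimal sketch:

    # The resource API is a higher-level, object-oriented wrapper around the client
    s3_resource = boto3.resource('s3')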

In order to access the object, you need the right bucket name and the right key.
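For example, pointing at the CSV uploaded above (the key uses the placeholder file name from earlier):

    # An Object handle is lazy: no data is fetched until you call get()
    obj = s3_resource.Object('testbuckethp3py', 'New York/new_york_population.csv')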

From this object, you then read the body.
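A sketch:

    # get() retrieves the object; 'Body' is a StreamingBody you can read as bytes
    body = obj.get()['Body'].read()
    print(body)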

The output is the raw byte string containing the contents of the CSV file.

Conclusion

S3 provides a secure, durable, and highly available solution for data storage in the cloud. Through the Boto3 Python library, you can access the data programmatically and build seamless applications with high data-retrieval rates.


Complete Project Code
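Below is a consolidated sketch of the snippets above. The bucket name, file names, and the helper function are the examples and placeholders used in this tutorial; adjust them to your own setup.

    import boto3
    import pandas as pd


    def list_s3_buckets(client):
        """Return a DataFrame with the name and creation time of each bucket."""
        response = client.list_buckets()
        return pd.DataFrame(response['Buckets'], columns=['Name', 'CreationDate'])


    # --- S3 client and bucket listing ---
    s3_client = boto3.client('s3')
    print(list_s3_buckets(s3_client))

    # --- Create a bucket (us-east-1; other regions need a LocationConstraint) ---
    s3_client.create_bucket(Bucket='testbuckethp3py')

    # --- Upload a file ---
    s3_client.upload_file('testfile.txt', 'testbuckethp3py', 'testfile_s3.txt')

    # --- Put an object under a folder-like key ---
    with open('testfile.txt', 'rb') as f:
        s3_client.put_object(Bucket='testbuckethp3py',
                             Key='testdir/testfile.txt',
                             Body=f.read())

    # --- Upload the CSV subset ---
    with open('new_york_population.csv', 'rb') as f:
        s3_client.put_object(Bucket='testbuckethp3py',
                             Key='New York/new_york_population.csv',
                             Body=f.read())

    # --- Read the object back via the resource API ---
    s3_resource = boto3.resource('s3')
    obj = s3_resource.Object('testbuckethp3py', 'New York/new_york_population.csv')
    print(obj.get()['Body'].read())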

