S3 Bucket Security Issues Part 2: The Risks of Misconfigured S3 Buckets and What You Can Do About Them

Noga Yam Amitai
Wednesday, Jun 2nd, 2021

In the first part of this series, we provided an overview of AWS's cloud storage service, S3. We discussed the three components of an S3 object – the content, the identifier, and the metadata – as well as how AWS evaluates access to objects within a bucket, including the risks involved. If you missed part one, you can check it out here.

In the second installment of this two-part series, we will be discussing the specific risks of misconfigured S3 buckets, including diving into our own research that has uncovered a cross-account attack path you may not be aware of.

Let’s get started.

Risks Involving Misconfigured Buckets

Amazon S3 takes its name from the words Simple Storage Service – three S's, hence S3! There are two main risks to consider with misconfigured S3 buckets: public read access and public write access. First, a misconfigured S3 bucket that allows public read access can lead to a customer data breach. Second, a misconfigured S3 bucket that allows public write access can be used to serve or control malware, deface a website hosted on the storage service, store any amount of data at your expense, and even encrypt your files for the purposes of demanding a ransom. Suddenly there's nothing so "simple" about Simple Storage Service, eh?

Examples of These Kinds of S3 Bucket Security Issues

There are a number of high-profile examples of access management issues, customer data breaches, and security incidents caused by misconfigured S3 buckets.

Booz Allen Hamilton, a leading U.S. government contractor, is infamous for a data breach that involved misconfigured buckets. The firm left sensitive data on AWS S3 publicly accessible, exposing 60,000 files related to the Department of Defense. You would expect at least default encryption, but the huge amount of data in the S3 bucket was stored with no encryption at all. The company claimed it had no need to encrypt the data because the data itself was not classified, yet it included credentials to sensitive government systems, credentials belonging to a senior Booz Allen Hamilton engineer, and more.

Another example is Verizon, the American wireless network operator, which failed to properly secure its buckets. The company suffered two data breaches only a few months apart, exposing more than 6 million customer accounts. That's a huge amount of data! Both breaches were caused by S3 misconfigurations.


Researching into the Current Status of AWS Buckets

To understand the scope of the issue, we started by searching a few AWS customers' environments for their public S3 buckets and their "objects can be public" S3 buckets. (See part one for definitions.)

The results were quite interesting: on average, almost 4% of a company's buckets are public, and around 42% are in the "objects can be public" state.

This means that almost 50% of a company’s buckets could potentially be misconfigured!

To describe our research in more detail, here is what we tried to accomplish.

We wanted to get a complete picture of the possible outcomes of the Block Public Access settings. We simulated every combination, tried to access the bucket and its objects, and documented each result.
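The test matrix above can be sketched in a few lines. Note this is a minimal illustration, not our actual research harness: the flag names follow the S3 PublicAccessBlockConfiguration API, but `apply_combination` would need real AWS credentials and a test bucket to run.

```python
# Sketch: enumerate every Block Public Access combination (2^4 = 16)
# and, hypothetically, apply each one to a test bucket before probing access.
from itertools import product

FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def all_combinations():
    """Yield every possible Block Public Access configuration."""
    for values in product((True, False), repeat=len(FLAGS)):
        yield dict(zip(FLAGS, values))

def apply_combination(bucket_name, config):
    """Apply one combination to a real bucket (requires AWS credentials)."""
    import boto3  # only needed when actually applying a configuration
    boto3.client("s3").put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=config,
    )

combos = list(all_combinations())  # 16 configurations to document
```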

The most interesting cases were those that AWS evaluated as "objects can be public" – and as mentioned in part 1, that's not surprising, because this is the most confusing definition.

Because AWS evaluates status at the bucket level and does not take the objects' ACLs into consideration, companies are left exposed. Understanding this gap is the key to knowing where they might be vulnerable.

Our New Python Tool

As you can see from the statistics, the "objects can be public" status is very common. This left us wanting to provide a solution that can help in dealing with the issue.

We have therefore created an open-source Python tool that you can use to get a better grasp of your buckets and objects. It takes the public evaluation a step further and tells you which buckets and objects are actually publicly accessible, with no "can be public" about it. You can find the tool here: Free S3 buckets scanner.
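An object-level check of this kind looks roughly like the following sketch. The grant structure follows the S3 `GetObjectAcl` response shape; the bucket name is hypothetical, and listing a real bucket requires AWS credentials. This is a simplified illustration in the spirit of the tool, not its actual source.

```python
# Sketch: find objects whose own ACL grants access to everyone,
# regardless of what the bucket-level evaluation says.
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_public_grant(grant):
    """Return True if an ACL grant gives access to all internet users."""
    grantee = grant.get("Grantee", {})
    return grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS_URI

def public_objects(bucket_name):
    """Yield keys of objects that are publicly accessible via their ACL."""
    import boto3  # requires AWS credentials when actually run
    s3 = boto3.client("s3")
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket_name):
        for obj in page.get("Contents", []):
            acl = s3.get_object_acl(Bucket=bucket_name, Key=obj["Key"])
            if any(is_public_grant(g) for g in acl["Grants"]):
                yield obj["Key"]
```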

Our Unexpected Find! New Cross-Account Attack on S3 Buckets

While performing the research, we found ourselves analyzing a lot of bucket policy examples, so that we could fully understand which ones allow public access.

It was then that we came across an interesting case, where the policy grants the AWS Config and CloudTrail services access to a bucket. These two services use S3 buckets to store their output, and while setting them up, the user must choose whether to create a new bucket, use an existing one from their own account, or use an existing one from another account.

CloudTrail Example

Config Example

Many users ease the process of specifying several different accounts as a list by creating a general path instead. A general path can be:

  • arn:aws:s3:::{bucket-name}/*
  • arn:aws:s3:::{bucket-name}/AWSLogs/*
  • arn:aws:s3:::{bucket-name}/AWSLogs/*/Config/*
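A bucket policy using one of these general paths might look like the following sketch. The bucket name is hypothetical, and the statement is simplified to the delivery permission; the small helper shows how to spot a wildcard sitting where an account ID belongs.

```python
# Sketch: a bucket policy granting AWS Config write access via a
# general (wildcard) resource path. Bucket name is made up.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSConfigBucketDelivery",
            "Effect": "Allow",
            "Principal": {"Service": "config.amazonaws.com"},
            "Action": "s3:PutObject",
            # The '*' in the account-ID position is what opens the door:
            "Resource": "arn:aws:s3:::my-logging-bucket/AWSLogs/*/Config/*",
        }
    ],
}

def account_segment(resource_arn):
    """Return the path segment that should hold the account ID, if any."""
    marker = "/AWSLogs/"
    if marker not in resource_arn:
        return None
    return resource_arn.split(marker, 1)[1].split("/", 1)[0]
```

If `account_segment` returns `"*"`, the policy accepts log delivery on behalf of any account rather than a specific list.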

This case is common when using AWS Organizations and Control Tower, which create a dedicated account for logging and auditing. When you have only a few accounts, a bucket policy with a separate resource entry per account can work. However, when you have 100 or 1,000 accounts, wildcards are commonly used instead, which can open you up to risk.

The tricky thing in this case specifically is that, because the principal is an AWS service, the source account is evaluated from the resource path, as opposed to how normal bucket policies are evaluated.

Each of the general resource path patterns above enables any AWS account to define the bucket as its Config bucket and, by doing so, use it to store data at your expense. It does not allow reading objects already stored in the bucket, only writing new ones – making it the second category of risk we discussed above.

This configuration opens up your bucket for unauthorized writes from any Amazon Web Services account.
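To see why the wildcard matters, recall that these services write logs under a per-account prefix, so a `*` in the account position matches every account's prefix. The sketch below approximates the wildcard matching with Python's `fnmatch` (whose `*` also matches across `/`); the account IDs and object keys are made up for illustration.

```python
# Sketch: a '*' in the account-ID position of the allowed path matches
# the delivery prefix of ANY AWS account, not just yours.
from fnmatch import fnmatch

allowed_pattern = "AWSLogs/*/Config/*"

# Your own account's Config delivery key (account ID is fictional):
your_account_key = "AWSLogs/111111111111/Config/us-east-1/2021/6/2/ConfigSnapshot.json"

# An attacker who points their Config recorder at your bucket writes here:
attacker_key = "AWSLogs/999999999999/Config/us-east-1/2021/6/2/ConfigSnapshot.json"

# Both keys fall inside the allowed pattern, so both writes are permitted.
```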

Note: In our research, we tested AWS CloudTrail and AWS Config, but this issue may be valid for other AWS services that use S3 buckets to store their data by default. In other cases, the permission may also be a read permission, which could lead to a serious data breach.

Tips for Securing S3 Buckets

You wouldn't ignore your physical security risks, and this is no different. Here are some top tips for staying on top of file storage in Amazon S3, bucket misconfigurations, and access management.

  1. Try our tool! Our Python tool will show you all the buckets and objects that are publicly accessible. We'd love to hear what you find!
  2. Keep alert. Continuously assess the S3 bucket states in your account and your access control lists – you can use AWS Config rules or AWS "Access Analyzer for S3" for that. Note that "Access Analyzer for S3" does not cover the two main cases we discussed: the cross-account attack, and the bucket with "objects can be public" status that has publicly accessible objects.
  3. Think smart. Never assume that buckets are private. To deny public access, it is preferable to use the Block Public Access settings rather than a deny policy.
  4. Avoid wildcards. Your buckets and objects are your responsibility. Make your policies as specific as you can.
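Tip 3 can be sketched in a few lines with boto3. The bucket name is hypothetical, and applying the configuration requires credentials with the appropriate S3 permissions; the four flag names follow the S3 PublicAccessBlockConfiguration API.

```python
# Sketch: enable all four Block Public Access settings on a bucket,
# rather than relying on a deny policy.
BLOCK_ALL = {
    "BlockPublicAcls": True,       # reject new public ACLs
    "IgnorePublicAcls": True,      # ignore existing public ACLs
    "BlockPublicPolicy": True,     # reject new public bucket policies
    "RestrictPublicBuckets": True, # restrict access via existing public policies
}

def block_public_access(bucket_name):
    """Apply the block-everything configuration (requires AWS credentials)."""
    import boto3
    boto3.client("s3").put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=BLOCK_ALL,
    )
```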

For more advice on securing your Amazon Web Services or general cloud environment, get in touch to schedule a demo of the Panoptica platform.
