cloud security · 10 min read · January 20, 2026

AWS S3 Security: The Misconfiguration That Keeps Happening

I Googled a client's company name last month and found their S3 bucket indexed on page one. Here's how to make sure that never happens to you.


Last month, a startup I was consulting for had a small panic. Someone Googled their company name and found a publicly accessible S3 bucket on the first page of results. Inside: database backups, internal documentation, and a spreadsheet of customer email addresses. Nobody had intentionally made it public. A developer had toggled a setting during debugging weeks earlier and forgot to revert it.

This isn't unusual. AWS S3 security misconfigurations have been behind some of the most high-profile data breaches in recent history. Capital One's 2019 breach exposed roughly 106 million customer records; a misconfigured WAF let the attacker reach the instance metadata service, obtain IAM credentials, and pull data out of S3. Facebook had hundreds of millions of records sitting in public buckets managed by third-party app developers. These aren't obscure companies with tiny security budgets — if it can happen to them, it can happen to anyone.

Why S3 Misconfiguration Is So Common

Here's the thing: S3 buckets are private by default now. AWS even added multiple warning banners when you try to make a bucket public. So why does this keep happening?

Because "making the bucket public" isn't the only way to expose data. Overly permissive IAM policies, misconfigured bucket policies that grant access to `"Principal": "*"`, pre-signed URLs with absurdly long expiration times, and cross-account access gone wrong — these are the real culprits. The "public bucket" checkbox is the obvious threat. The subtle policy misconfiguration is what actually gets you.
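To make the subtle case concrete, here's a hypothetical bucket policy of the kind that gets teams in trouble. It never touches the "public" checkbox, yet it grants read access to anyone on the internet (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "LooksHarmless",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-app-data/*"
  }]
}
```

A public access block with `BlockPublicPolicy=true` (Step 1 below) rejects a policy like this at creation time, which is exactly why it's worth applying everywhere.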

I've also seen teams disable the public access block "temporarily" to test something, then forget to re-enable it. Infrastructure drift is real, and it's insidious.

Step 1: Block All Public Access (The Nuclear Option)

Start here. Even if you think your bucket is private, apply this explicitly. Belt and suspenders.

```bash
# Apply the "nuclear option" — block all public access
aws s3api put-public-access-block \
  --bucket my-sensitive-bucket \
  --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```

Or in Terraform (which you should be using for anything beyond a personal project, because clickops is how drift happens):

```hcl
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

Gotcha: You can also apply public access blocks at the AWS account level, which prevents any bucket in the account from being made public. For accounts that should never host public content, this is the smarter move — it removes the possibility of someone "temporarily" opening a bucket.
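In Terraform, the account-level block is its own resource. A minimal sketch — it applies to whichever account the provider is configured against:

```hcl
# Prevents every bucket in the account from being made public,
# regardless of per-bucket settings.
resource "aws_s3_account_public_access_block" "account" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```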

But wait — what if you legitimately need a public bucket for static assets or a website? That's a valid use case. The answer is: use a separate AWS account (or at minimum, a separate bucket with explicit documentation) for public assets, and lock down everything else. Don't mix public and private data in the same bucket, and definitely not in the same account if you can avoid it.

Step 2: Enable Encryption

Data at rest should be encrypted. This won't prevent access from someone who has IAM permissions to read the bucket, but it protects against physical media theft and certain classes of AWS-internal access.

```hcl
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.mykey.arn
    }
    bucket_key_enabled = true
  }
}
```

Using KMS (as opposed to SSE-S3) gives you an additional layer of access control — you can restrict who has permission to use the encryption key, adding another gate beyond just IAM bucket permissions. It costs a tiny bit more per request, but for sensitive data, it's worth it.

Gotcha: `bucket_key_enabled = true` is important for cost. Without it, every object-level operation makes a separate KMS API call, and those add up fast on high-traffic buckets. Bucket keys reduce KMS costs significantly by reusing a bucket-level key.

Step 3: Enforce HTTPS Only

This one is surprisingly overlooked. By default, S3 accepts both HTTP and HTTPS requests. An attacker performing a man-in-the-middle attack could intercept unencrypted data in transit. This bucket policy denies any request made over plain HTTP:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyHTTP",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::my-bucket",
      "arn:aws:s3:::my-bucket/*"
    ],
    "Condition": {
      "Bool": { "aws:SecureTransport": "false" }
    }
  }]
}
```

If you've already read our TLS 1.3 post, you know why transport encryption matters. This policy enforces it at the S3 level so you're not depending on clients to do the right thing.
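If you manage the bucket in Terraform, the same deny-HTTP policy can live next to the bucket definition. A sketch, assuming the `aws_s3_bucket.example` resource from earlier:

```hcl
resource "aws_s3_bucket_policy" "deny_http" {
  bucket = aws_s3_bucket.example.id

  # Same statement as the JSON above, rendered via jsonencode
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyHTTP"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.example.arn,
        "${aws_s3_bucket.example.arn}/*"
      ]
      Condition = {
        Bool = { "aws:SecureTransport" = "false" }
      }
    }]
  })
}
```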

Step 4: Enable Versioning and Access Logging

This isn't about preventing breaches — it's about detecting and recovering from them.

Versioning ensures that if someone (or some compromised credential) deletes or overwrites objects, you can recover previous versions. Access logging gives you a trail of who accessed what and when, which is critical during incident response.

Neither of these is enabled by default. Both should be.
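In Terraform, both are small additions. A sketch, assuming a separate `aws_s3_bucket.logs` bucket exists to receive the access logs (you shouldn't log a bucket to itself):

```hcl
resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_logging" "example" {
  bucket = aws_s3_bucket.example.id

  # Deliver access logs to a dedicated logging bucket
  target_bucket = aws_s3_bucket.logs.id
  target_prefix = "s3-access-logs/"
}
```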

Step 5: Audit What You Already Have

Here's the part most guides skip. You can set up perfect security for new buckets, but what about the 47 buckets that already exist in your account?

```bash
# Check public access block status for a specific bucket
aws s3api get-public-access-block --bucket my-bucket

# List all buckets and check each one
aws s3 ls | awk '{print $3}' | while read bucket; do
  echo "--- $bucket ---"
  aws s3api get-public-access-block --bucket "$bucket" 2>&1
done
```

Better yet, enable AWS Config with the `s3-bucket-public-read-prohibited` and `s3-bucket-public-write-prohibited` rules. This gives you continuous compliance monitoring — any bucket that becomes public will trigger an alert. This catches the "temporary debug change that becomes permanent" scenario I mentioned at the start.

The Trade-Off Nobody Talks About

Locking down S3 aggressively can break things. Pre-signed URLs stop working if your policies are too restrictive. CloudFront distributions need specific access patterns. Third-party integrations that expect public read access will fail. Lambda functions that write to S3 need the right execution role.

The answer isn't "make everything permissive so nothing breaks." The answer is: lock everything down, then methodically grant the minimum access each service needs. Yes, it's slower. Yes, developers will complain. But "it works" and "it's secure" are not mutually exclusive — they just require more thoughtful configuration.
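For instance, a Lambda that only ever writes uploads doesn't need `s3:*`. A hypothetical minimal statement for its execution role might grant just this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::example-app-uploads/*"
  }]
}
```

When the function later needs to read objects back, you add `s3:GetObject` then, not preemptively.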

If I Had to Pick One Thing

Run `aws s3api get-public-access-block` on every bucket in your account today. Right now. If any of them don't have all four settings set to `true`, fix it or document exactly why it's an exception. That single command, run today, is worth more than reading ten articles about S3 security.
