coderain blog

Troubleshooting AWS S3 SDK Error: 'You Have Attempted to Create More Buckets Than Allowed' in Test Environments

Amazon S3 (Simple Storage Service) is a cornerstone of cloud storage for countless applications, offering scalability, durability, and flexibility. In development and test environments, S3 buckets are often created dynamically—for example, to simulate data storage in unit tests, integration tests, or CI/CD pipelines. However, a common roadblock developers face is the error: "You Have Attempted to Create More Buckets Than Allowed".

This error occurs when your AWS account hits the default limit of 100 S3 buckets per account. While production environments typically manage buckets carefully, test environments are prone to this issue due to automated workflows, parallel testing, and incomplete cleanup. This post will demystify the error, explore its root causes in test environments, and provide actionable steps to troubleshoot and prevent it.

2026-01

Table of Contents#

  1. Understanding the Error
  2. Common Causes in Test Environments
  3. Step-by-Step Troubleshooting
  4. Preventive Measures to Avoid Recurrence
  5. Conclusion
  6. References

Understanding the Error#

What Triggers the Error?#

The error You Have Attempted to Create More Buckets Than Allowed is returned by Amazon S3 when your AWS account already holds the maximum number of buckets permitted. By default, AWS enforces a limit of 100 buckets per account (the limit can be raised by requesting an increase through Service Quotas, but it is not unlimited).

In code, the error surfaces like this (e.g., using Boto3 for Python, which raises it as a ClientError with error code TooManyBuckets):

botocore.exceptions.ClientError: An error occurred (TooManyBuckets) when calling the CreateBucket operation: You have attempted to create more buckets than allowed.

Key Notes About S3 Bucket Limits#

  • Account-Level Limit: The 100-bucket limit applies per AWS account, not per region or user.
  • Global Uniqueness: Bucket names are globally unique across all AWS accounts, but this does not affect the per-account count limit.
  • Deleted Buckets: Deleting a bucket immediately frees up space in your limit, but the bucket name may be unavailable for reuse for a short period (minutes to hours).
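Rather than string-matching on the error message, code can branch on the error code in the parsed response. A minimal sketch (the `is_bucket_limit_error` helper and the retry strategy are illustrative, not part of any SDK):

```python
def is_bucket_limit_error(error_response):
    """Return True when a parsed S3 error response says the account's
    bucket limit was hit (error code 'TooManyBuckets')."""
    return error_response.get("Error", {}).get("Code") == "TooManyBuckets"

# Hypothetical usage with Boto3 (needs AWS credentials, so shown as a comment):
# try:
#     s3.create_bucket(Bucket=name)
# except botocore.exceptions.ClientError as err:
#     if is_bucket_limit_error(err.response):
#         ...  # clean up stale test buckets, then retry
#     else:
#         raise
```

Branching on the code rather than the message keeps the check stable even if AWS rewords the error text.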

Common Causes in Test Environments#

Test environments are far more likely to trigger this error than production, due to their dynamic, iterative nature. Below are the most frequent culprits:

1. Uncleaned-Up Buckets from Automated Tests#

Tests (unit, integration, or end-to-end) often create temporary S3 buckets to simulate storage. If tests fail or lack proper teardown logic, these buckets are not deleted. Over time, orphaned buckets accumulate and consume the account limit.

Example: A CI pipeline runs 50 tests daily, each creating a new bucket. If only 10% of tests fail to clean up, 5 buckets are orphaned daily—leading to 150+ buckets in a month.

2. Parallel Test Execution#

Modern testing frameworks (e.g., pytest, JUnit) support parallel test runs to save time. If each parallel test creates its own bucket, the total number of buckets can spike rapidly, exceeding the limit in minutes.

Example: A test suite with 20 parallel workers, each creating 5 buckets, results in 100 buckets created in a single run—hitting the limit immediately.

3. Hardcoded or Non-Unique Bucket Names#

Tests may use hardcoded bucket names (e.g., my-test-bucket). If multiple tests or users try to create the same name, AWS returns a "BucketAlreadyExists" error. Retries (common in automation) then create new buckets with variations (e.g., my-test-bucket-1, my-test-bucket-2), ballooning the count.
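One way to sidestep both collisions and retry ballooning is to generate names that are unique by construction while still satisfying S3's naming rules. A small sketch (the `unique_test_bucket_name` helper and its prefix are assumptions, not an AWS API):

```python
import re
import uuid

# S3 naming rules (simplified): 3-63 chars, lowercase letters, digits,
# and hyphens; must start and end with a letter or digit.
S3_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def unique_test_bucket_name(prefix="test"):
    """Build a bucket name that is unique per call and valid under S3 rules."""
    name = f"{prefix}-{uuid.uuid4().hex[:12]}".lower()
    return name[:63].rstrip("-")
```

Because every call yields a fresh name, a retry after a transient failure never trips over "BucketAlreadyExists" from a previous attempt.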

4. Shared AWS Accounts for Testing#

Many teams share a single AWS account for development and testing. If multiple developers or CI pipelines run tests simultaneously, the shared bucket limit is quickly exhausted.

5. Lack of Monitoring#

Without visibility into bucket creation events, teams may not realize they’re approaching the limit until the error occurs.

Step-by-Step Troubleshooting#

If you encounter the "too many buckets" error, follow these steps to resolve it:

Step 1: Verify Current Bucket Count#

First, confirm how many buckets your account currently has. Use one of these methods:

Via AWS S3 Console#

  • Navigate to the S3 Console.
  • The total bucket count is displayed at the top of the page (e.g., "100 Buckets").

Via AWS CLI#

Run this command to list all buckets and count them:

aws s3api list-buckets --query "length(Buckets)" --output text  

Via AWS SDK (Python Example)#

import boto3  
 
s3 = boto3.client('s3')  
response = s3.list_buckets()  
print(f"Total buckets: {len(response['Buckets'])}")  

Step 2: Identify Recent Bucket Creation Events#

Use AWS CloudTrail to track who/what is creating buckets.

Via CloudTrail Console#

  • Go to the CloudTrail Console.
  • Filter events by EventName = CreateBucket and check the Username, SourceIPAddress, or UserAgent to identify the source (e.g., a CI pipeline or test script).

Via AWS CLI#

aws cloudtrail lookup-events \  
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateBucket \  
  --start-time 2024-01-01T00:00:00Z \  
  --query "Events[].{Time:EventTime, User:Username, Bucket:RequestParameters.bucketName}"  

Step 3: Locate Orphaned or Unused Buckets#

Identify buckets created by tests but not deleted. Look for:

  • Buckets with test-specific prefixes (e.g., test-, ci-, temp-).
  • Buckets with timestamps or random suffixes (e.g., my-test-202401011234).
  • Buckets with no recent activity (check "Last modified" in the S3 Console).
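The filtering itself can be sketched independently of any API call: `list_buckets` returns records with `Name` and `CreationDate`, which is enough to flag old, test-prefixed buckets (the prefixes and age threshold below are assumed conventions):

```python
from datetime import datetime, timedelta, timezone

TEST_PREFIXES = ("test-", "ci-", "temp-")  # assumed team conventions

def stale_test_buckets(buckets, max_age_days=7, now=None):
    """From Boto3-shaped records ({'Name': ..., 'CreationDate': ...}),
    return names of test-prefixed buckets older than max_age_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        b["Name"]
        for b in buckets
        if b["Name"].startswith(TEST_PREFIXES) and b["CreationDate"] < cutoff
    ]
```

Feed it `boto3.client('s3').list_buckets()['Buckets']`. Note that `CreationDate` reflects creation, not last activity, so confirm a bucket is truly unused before deleting it.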

Step 4: Resolve the Immediate Issue#

Option 1: Delete Unused Buckets#

Delete orphaned or obsolete buckets to free up space.

Warning: Ensure buckets are empty before deletion (AWS refuses to delete non-empty buckets). For versioned buckets, all object versions and delete markers must also be removed; `aws s3 rm --recursive` alone is not enough.

Via AWS CLI (Delete a Single Bucket):

# Empty the bucket first  
aws s3 rm s3://my-orphaned-test-bucket --recursive  
 
# Delete the bucket  
aws s3api delete-bucket --bucket my-orphaned-test-bucket  

Via Script (Bulk Delete Test Buckets):
Use a script to delete all buckets with a test prefix (e.g., test-):

#!/bin/bash  
PREFIX="test-"  
for bucket in $(aws s3api list-buckets --query "Buckets[?starts_with(Name, '${PREFIX}')].Name" --output text); do  
  echo "Emptying and deleting $bucket..."  
  aws s3 rm s3://$bucket --recursive  
  aws s3api delete-bucket --bucket $bucket  
done  

Option 2: Request a Temporary Limit Increase#

If deletion isn’t feasible (e.g., buckets are in use), request a limit increase through the Service Quotas console (or the AWS Support Center on older accounts): choose the Amazon S3 service, select the bucket quota, and justify the need (e.g., "Temporary increase for test environment").

Option 3: Optimize Test Bucket Usage#

Temporarily reduce parallelism in tests or reuse existing buckets to avoid creating new ones.
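A trivial guard can enforce this decision in test setup code, leaving headroom below the account limit (the function and its defaults are illustrative, not an AWS API):

```python
def may_create_bucket(current_count, limit=100, headroom=10):
    """Allow a fresh bucket only while the account stays comfortably below
    its limit; otherwise the caller should reuse an existing bucket."""
    return current_count < limit - headroom
```

Wire `current_count` to the bucket count from Step 1 (`aws s3api list-buckets --query "length(Buckets)"`).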

Preventive Measures to Avoid Recurrence#

Once resolved, implement these practices to prevent the error from recurring:

1. Automate Bucket Cleanup#

Ensure tests delete buckets unconditionally, even if they fail.

In Test Code (Python Example with try/finally)#

import uuid

import boto3

def test_s3_operation():
    s3 = boto3.client('s3')
    bucket_name = f"test-bucket-{uuid.uuid4()}"  # Unique name
    s3.create_bucket(Bucket=bucket_name)

    try:
        # Run test logic...
        assert s3.list_objects_v2(Bucket=bucket_name)['KeyCount'] == 0
    finally:
        # Teardown: delete all objects, then the bucket, even if the test fails
        objects = s3.list_objects_v2(Bucket=bucket_name).get('Contents', [])
        if objects:
            s3.delete_objects(
                Bucket=bucket_name,
                Delete={'Objects': [{'Key': obj['Key']} for obj in objects]},
            )
        s3.delete_bucket(Bucket=bucket_name)

In CI Pipelines#

Add a post-test cleanup step (e.g., in GitHub Actions, GitLab CI) to delete all test buckets:

# GitHub Actions example  
jobs:  
  test:  
    runs-on: ubuntu-latest  
    steps:  
      - name: Run tests  
        run: pytest tests/  
      - name: Cleanup test buckets  
        if: always()  # Run even if tests fail  
        run: ./cleanup-test-buckets.sh  # Script from Step 4  

2. Reuse Buckets Instead of Creating New Ones#

Instead of creating a new bucket per test, reuse a single bucket for all tests in a run. Prefix object keys with unique identifiers (e.g., test case ID) to avoid collisions.

Example: store objects under shared-test-bucket/test-run-1234/object-1 instead of creating a separate bucket test-bucket-1234 for each run.
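The key-prefixing scheme can be captured in a tiny helper (the `<run>/<test>/<object>` layout is one possible convention, not an AWS requirement):

```python
import uuid

RUN_ID = f"test-run-{uuid.uuid4().hex[:8]}"  # one identifier per test session

def scoped_key(test_name, object_name):
    """Namespace object keys inside one shared bucket so parallel tests
    never collide: <run>/<test>/<object>."""
    return f"{RUN_ID}/{test_name}/{object_name}"
```

Every worker writes under its own prefix, so a single long-lived bucket serves the whole suite and the bucket count stays flat.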

3. Use Local S3 Mocks for Unit Tests#

For unit tests, use tools like LocalStack or moto to mock S3. This avoids creating real AWS buckets entirely.

Example with moto (Python; moto ≥ 5 uses the unified mock_aws decorator, older versions use mock_s3):

from moto import mock_aws  # `mock_s3` in moto < 5
import boto3

@mock_aws
def test_s3_mock():
    s3 = boto3.client('s3', region_name='us-east-1')
    s3.create_bucket(Bucket='test-bucket')  # Mocked; no real bucket created
    assert len(s3.list_buckets()['Buckets']) == 1

4. Limit Parallelism and Monitor Bucket Count#

  • Restrict parallel test workers to a safe number (e.g., 10 workers, each reusing 1 bucket = 10 buckets per run).
  • S3 does not publish an account-level bucket-count metric, so publish one yourself: a scheduled job (e.g., a cron task or Lambda) pushes the list-buckets count to a custom CloudWatch metric, and an alarm on that metric notifies you when the count exceeds a threshold (e.g., 80 buckets).

Custom Metric and Alarm via AWS CLI:

# Publish the current bucket count to a custom namespace (run on a schedule)
COUNT=$(aws s3api list-buckets --query "length(Buckets)" --output text)
aws cloudwatch put-metric-data \
  --namespace Custom/S3 \
  --metric-name BucketCount \
  --value "$COUNT"

# Alarm when the published count crosses the threshold
aws cloudwatch put-metric-alarm \
  --alarm-name S3BucketCountHigh \
  --metric-name BucketCount \
  --namespace Custom/S3 \
  --statistic Maximum \
  --period 300 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:BucketAlerts

5. Use Separate AWS Accounts for Testing#

Isolate test workloads into a dedicated AWS account to avoid shared limits with production or other teams. AWS Organizations makes it easy to manage multiple accounts.

6. Enforce IAM Restrictions#

Use IAM policies to limit bucket creation in test accounts (e.g., only allow buckets with a test- prefix, or restrict creation to specific roles).

Example IAM Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyTestPrefixedBuckets",
      "Effect": "Allow",
      "Action": "s3:CreateBucket",
      "Resource": "arn:aws:s3:::test-*"
    }
  ]
}

Conclusion#

The "You Have Attempted to Create More Buckets Than Allowed" error in AWS S3 is a common pain point in test environments, driven by automated testing, parallel execution, and poor cleanup practices. By troubleshooting bucket counts, identifying orphaned resources, and implementing preventive measures like automated cleanup and bucket reuse, you can avoid this error and keep your test workflows running smoothly.

Remember: The key is to treat S3 buckets in tests as ephemeral resources—create them intentionally, monitor their usage, and delete them promptly.

References#