Set up S3 bucket using Docker / Compose

I’m having trouble getting started with an S3 bucket using the LocalStack Docker image (tag s3-latest). I am definitely struggling with the lack of documentation. So far I can manually create a bucket, an IAM user, and an access token.

Is there a more convenient way to create a bucket with fixed credentials to connect to?

  1. can you create a bucket / credentials on startup via environment variables without creating them manually?
  2. are there any default access credentials I can attach to the bucket?
  3. can I create fixed access credentials or must they be generated?
  4. is there a good way to pass the access credentials to my service programmatically (using Docker Compose)?
  5. are buckets private or public by default?

To avoid asking an XY question: I am trying to spin up an S3 bucket for local development so as not to require a real bucket. I am running my stack using Docker Compose, and S3 credentials are passed to the app via environment variables, so in order for this to work with LocalStack I need to do one of the following (option 3 is sketched after the list):

  1. make the bucket public (undesirable, not realistic)
  2. pass default access credentials
  3. create fixed access credentials and pass those
  4. create dynamic access credentials and hack them into the app’s environment variables somehow
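
For concreteness, here is a rough sketch of option 3 on the app side, assuming the app uses boto3 and that Compose passes the credentials and endpoint through environment variables (the variable names and the localstack hostname here are my own illustration, not anything LocalStack prescribes):

import os

import boto3

# App-side client: Compose would set these variables on the app service.
# "localstack" is assumed to be the Compose service name of the container.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("S3_ENDPOINT_URL", "http://localstack:4566"),
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],          # e.g. "test"
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],  # e.g. "test"
)

print(s3.list_buckets()["Buckets"])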

Hello @aentwist!

By using the s3-latest tag, you are using the S3-only LocalStack image. This image does not include any other AWS services, such as IAM.

To use LocalStack, you don’t need to set up any credentials directly: for S3, we would advise using an Access Key ID of test and a Secret Access Key of test as well. You can read more about credentials in LocalStack here: Credentials | Docs

By default, buckets will be public. LocalStack does not yet have a way to handle unauthenticated requests: it will fall back to a default account, which will then have access to your buckets.
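
For example, here is a minimal sketch of what that fallback means in practice, assuming LocalStack is running on localhost:4566 (as in the sample below); it uses botocore’s UNSIGNED config so the request carries no credentials at all:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous client: the request is not signed, and LocalStack falls
# back to a default account instead of rejecting it.
anonymous_s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    config=Config(signature_version=UNSIGNED),
)

print(anonymous_s3.list_buckets())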

To create the bucket at startup, you could make use of initialization hooks: Initialization Hooks | Docs

I created a small sample that creates an S3 bucket when the container starts.

docker-compose.yml

version: "3.8"

services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack-main}"
    image: localstack/localstack:s3-latest
    ports:
      - "127.0.0.1:4566:4566"
    volumes:
      - "./init-s3.py:/etc/localstack/init/ready.d/init-s3.py"  # ready hook

In the same folder as the docker-compose.yml file, place this Python file:
init-s3.py

import boto3

# Runs inside the LocalStack container as a "ready" init hook.
s3_client = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

s3_client.create_bucket(Bucket="your-bucket")

Now, when starting LocalStack with this Docker Compose file, a bucket named your-bucket will be available when you target the LocalStack container from your applications.

For example, if you have the awscli installed on your host machine, you could run the following:

export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_DEFAULT_REGION="us-east-1"

aws --endpoint-url=http://localhost:4566 s3api list-buckets

And it would return the following:

{
    "Buckets": [
        {
            "Name": "your-bucket",
            "CreationDate": "2023-12-07T07:11:35.000Z"
        }
    ],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
    }
}

For more information about how to use the AWS CLI with LocalStack, see AWS Command Line Interface | Docs.


Thank you for taking the time to make such a detailed reply. The credentials page and the mysterious s3-latest tag are what I previously found confusing. After working with the init hooks page today I can say that page is absolutely fantastic, some of the best documentation I’ve ever read.

My apologies, you are right: I was using the latest image when I had success creating a user and access token, not s3-latest. I will use s3-latest.

So if I understand what you are saying,

  1. Use an init script; the s3-latest image has no special handling for this - totally understandable
  2, 3, 4. Use any credentials; they are required in requests, but they are currently ignored by LocalStack
  5. Public only; credentials are currently ignored by LocalStack

Thanks to your help especially with the credentials concepts I was able to get this working today. Here are my notes:

  • The s3-latest image does not include the awslocal CLI, so init scripts have to talk to the S3 REST API. An SDK like boto3 (since Python init scripts are supported) makes this easier, so we use that to create the bucket instead of shell + REST API. A sketch of the raw REST alternative follows this list.
  • When creating the bucket, the credentials can be omitted completely, but they are probably good to leave in: they self-document that the bucket should theoretically be private, in case you really need that feature for some reason and LocalStack decides to support it in the future.
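
To make the first note concrete, here is a rough sketch of the raw REST alternative, written with Python’s urllib rather than a shell tool; it assumes path-style addressing and relies on LocalStack not enforcing credentials, as discussed above:

import urllib.request

# S3 CreateBucket is a plain path-style PUT; no SDK and no signing,
# since LocalStack does not enforce credentials.
request = urllib.request.Request("http://localhost:4566/your-bucket", method="PUT")
with urllib.request.urlopen(request) as response:
    print(response.status)  # 200 on success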

Hello @aentwist and thank you for the great feedback!

Yes, I apologize again for the s3-latest image; we will be adding documentation about it very soon.

Also, yes, about credentials: they are not exactly “ignored”, but we do not enforce them. We have IAM enforcement as part of our paid offering.

About your notes: exactly, it’s true that the s3-latest image does not include the awscli (for image size reasons), and as boto3 is included in the image’s Python environment, we can make use of it. This limitation will be written down in the documentation.

For credentials, yes, they could be omitted, but it’s good to use an explicit default like test/test, so that we’re sure we are not using real AWS credentials and it’s explicit that we’re in the test environment.

Thanks again for the feedback!