Broken pipe (Write failed) error when uploading a file from one container to LocalStack S3

I have multiple containers running locally, and I am trying to upload an 8 KB file from my test container to the S3 service in the LocalStack container. Below is my code:

(ns com.my.service.resources.s3
  (:require [clojure.tools.logging :refer [error info infof]]
            [clojure.java.io :as io])
  (:import
   [java.io File FileInputStream FileOutputStream]
   java.util.UUID
   [com.amazonaws ClientConfiguration]
   [com.amazonaws Protocol]
   [com.amazonaws.services.s3 AmazonS3Client AmazonS3URI]
   [com.amazonaws.services.s3.model DeleteObjectRequest
    GetObjectRequest ObjectMetadata PutObjectRequest]
   [com.amazonaws.services.s3.model AmazonS3Exception]))

(defn- s3-client
  []
  ;; Configure the ClientConfiguration before constructing the client:
  ;; the SDK initializes its underlying HTTP client from this object at
  ;; construction time, so settings applied afterwards may be ignored.
  (let [client-configuration (doto (ClientConfiguration.)
                               (.setMaxErrorRetry 3)
                               (.setConnectionTimeout (* 50 1000))
                               (.setSocketTimeout (* 50 1000))
                               (.setProtocol Protocol/HTTP))
        s3-client (AmazonS3Client. client-configuration)]
    (.setEndpoint s3-client "http://localstack:4566")
    s3-client))
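As a sketch (not from the original post), the SDK v1 builder API can construct the client with path-style access enabled. Many LocalStack setups need this so that requests go to http://localstack:4566/&lt;bucket&gt;/&lt;key&gt; rather than the virtual-hosted-style http://&lt;bucket&gt;.localstack:4566/&lt;key&gt;, which the localstack hostname cannot serve:

```clojure
;; Sketch using the AWS SDK v1 builder API; the endpoint and region
;; values mirror the ones in the question.
(import '[com.amazonaws.client.builder AwsClientBuilder$EndpointConfiguration]
        '[com.amazonaws.services.s3 AmazonS3ClientBuilder])

(defn- s3-client-path-style
  []
  (-> (AmazonS3ClientBuilder/standard)
      ;; path-style: bucket name goes in the URL path, not the hostname
      (.withPathStyleAccessEnabled true)
      (.withEndpointConfiguration
       (AwsClientBuilder$EndpointConfiguration. "http://localstack:4566" "us-east-1"))
      (.build)))
```

This replaces the deprecated `AmazonS3Client.` constructor with the builder, which is the recommended construction path in SDK v1.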

(defn upload
  "Upload a given file to s3 and return the path"
  [id user file filename size content-type]
  ;; metrics, registry, BASE, and BUCKET are defined elsewhere in the project.
  (metrics/time
   @registry
   {"operation" "upload" "service" "s3"}
   (let [key (str BASE "/" user "/" id ".tmp")
         ;; NB: these bindings shadow the size and content-type parameters
         size (.length file)
         content-type "application/shapefile"
         in (FileInputStream. file)
         metadata (doto (ObjectMetadata.)
                    (.setContentDisposition filename)
                    (.setContentLength size)
                    (.setContentType content-type))
         request (PutObjectRequest. BUCKET key in metadata)]
     (try
       (infof "uploading to s3://%s/%s" BUCKET key)
       (.putObject (s3-client) request)
       (format "s3://%s/%s" BUCKET key)
       (finally
         (try
           (.close in)
           (catch Exception ex nil)))))))
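To check whether the failure is specific to the SDK client or affects LocalStack's S3 API in general, the same upload can be attempted with the AWS CLI from inside the test container. The bucket name and file path below are placeholders, not values from the original post:

```shell
# Create an 8 KB test file, then upload it through the same endpoint.
# Substitute the real BUCKET value from the code for my-bucket.
dd if=/dev/zero of=/tmp/sample.bin bs=1024 count=8
aws --endpoint-url=http://localstack:4566 s3 cp /tmp/sample.bin s3://my-bucket/test/sample.bin
```

If this CLI upload succeeds while the SDK call fails, the problem is on the client side (e.g. addressing style or request signing) rather than in LocalStack itself.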

and here is my docker-compose.yaml file:

  localstack:
    image: xxx.ecr.us-east-1.amazonaws.com/hub/localstack/localstack:latest
    container_name: localstack
    ports:
      - "4566:4566"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=
      - DEFAULT_REGION=us-east-1
      - AWS_REGION=us-east-1
      - AWS_DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=fake
      - AWS_SECRET_ACCESS_KEY=fake
      - HOSTNAME=localstack
      - HOSTNAME_EXTERNAL=localstack
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${PWD}/docker-entrypoint-initaws.d:/docker-entrypoint-initaws.d"

  
  test:
    image: xxx.ecr.us-east-1.amazonaws.com/build-images/build-image:0.12.0
    container_name: test
    environment:
      - DEFAULT_REGION=us-east-1
      - AWS_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=fake
      - AWS_SECRET_ACCESS_KEY=fake
      - JAVA_OPTS=-Dwebdriver.chrome.whitelistedIps=
      - ARTIFACTORY_SVC_ACCT_PWD=${ARTIFACTORY_SVC_ACCT_PWD}
      - ARTIFACTORY_SVC_ACCT=${ARTIFACTORY_SVC_ACCT}
    ports:
      - "46050:46050"
    volumes:
      - .:/tmp
    depends_on:
      - db
    entrypoint:
      - tail
      - -f
      - /dev/null
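As a debugging aid (not part of the original setup), recent LocalStack versions expose a health endpoint; querying it from inside the test container can show whether the S3 service is still alive after the failed upload:

```shell
# Requires the containers to be running; older LocalStack versions
# used /health instead of /_localstack/health.
curl -s http://localstack:4566/_localstack/health
```

The 500 responses you see after the failure suggest the LocalStack S3 backend is crashing, and this endpoint (together with the DEBUG=1 container logs) can help confirm that.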

However, when I ran this, I got the following error:

com.amazonaws.SdkClientException: Unable to execute HTTP request: Broken pipe (Write failed)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1219)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1165)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5520)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5467)
    at com.amazonaws.services.s3.AmazonS3Client.access$300(AmazonS3Client.java:422)
    at com.amazonaws.services.s3.AmazonS3Client$PutObjectStrategy.invokeServiceCall(AmazonS3Client.java:6601)
    at com.amazonaws.services.s3.AmazonS3Client.uploadObject(AmazonS3Client.java:1891)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1851)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
    at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
......
Caused by: java.net.SocketException: Broken pipe (Write failed)
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
    at org.apache.http.impl.io.SessionOutputBufferImpl.streamWrite(SessionOutputBufferImpl.java:126)
    at org.apache.http.impl.io.SessionOutputBufferImpl.flushBuffer(SessionOutputBufferImpl.java:138)
    at org.apache.http.impl.io.SessionOutputBufferImpl.flush(SessionOutputBufferImpl.java:146)
    at org.apache.http.impl.io.ContentLengthOutputStream.close(ContentLengthOutputStream.java:95)
    at org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:159)
    at org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:162)
    at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:237)
    at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doSendRequest(SdkHttpRequestExecutor.java:63)
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:122)
    at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
    at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1346)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157)
    ... 128 common frames omitted

One thing I noticed (not sure if it is helpful) is that when I exec into the test container right after startup and run

aws --endpoint-url=http://localstack:4566 s3 ls

I can see the bucket list from LocalStack S3. However, after I run the code and get the error above, running the same command again gives

An error occurred (500) when calling the ListBuckets operation (reached max retries: 4): Internal Server Error

But if I exit the test container and re-enter it, I can list the buckets again.

Any idea? Please help.

Hi @zhazi,

I would suggest starting with a fresh docker-compose file, with endpoints that match the ones in our documentation:
localstack/docker-compose.yml at master · localstack/localstack (github.com)
S3 | Docs (localstack.cloud)
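For reference, the documented compose file looks approximately like the following (reproduced here as a sketch; check the linked file for the current version):

```yaml
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"            # main edge port
      - "127.0.0.1:4510-4559:4510-4559"  # external service port range
    environment:
      - DEBUG=${DEBUG:-0}
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
```

Note that most of the environment variables in your current file (SERVICES, DATA_DIR, HOSTNAME, HOSTNAME_EXTERNAL) are legacy settings that recent LocalStack images no longer need.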