ETag for s3api head-object is the same for all objects

For any file I run an "s3api head-object" call on, I get the same ETag value.
Note below that the files have different lengths, so there is no way the ETags should be the same.

{
    "AcceptRanges": "bytes",
    "LastModified": "2023-05-26T18:22:02+00:00",
    "ContentLength": 71904,
    "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
    "ContentType": "binary/octet-stream",
    "Metadata": {}
}

{
    "AcceptRanges": "bytes",
    "LastModified": "2023-05-26T18:23:43+00:00",
    "ContentLength": 113722,
    "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
    "ContentType": "binary/octet-stream",
    "Metadata": {}
}

This causes the AWS SDK for Java v1 to report that the file is corrupt, since it compares the returned ETag with an MD5 digest computed on the client side.
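One clue worth noting: the ETag returned for every object, d41d8cd98f00b204e9800998ecf8427e, is the MD5 of zero bytes, which suggests the backend is hashing empty content instead of the object body. A minimal sketch of the client-side check (not the SDK's actual code, just the same comparison it performs):

```java
import java.security.MessageDigest;

public class EtagCheck {
    // Hex-encode a byte array.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // MD5 of a payload, hex-encoded — the value S3 returns as the ETag
    // for non-multipart uploads.
    static String md5Hex(byte[] data) throws Exception {
        return toHex(MessageDigest.getInstance("MD5").digest(data));
    }

    public static void main(String[] args) throws Exception {
        // The ETag LocalStack returned for every object is the MD5 of an
        // EMPTY input:
        System.out.println(md5Hex(new byte[0]));
        // -> d41d8cd98f00b204e9800998ecf8427e

        // Any non-empty payload hashes to something different, so the
        // SDK's ETag-vs-local-MD5 comparison fails for every real file.
        System.out.println(md5Hex("hello".getBytes("UTF-8")));
        // -> 5d41402abc4b2a76b9719d911017c592
    }
}
```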

This happens with both the legacy and the v2 S3 providers. I am launching LocalStack Pro with docker-compose and a trial key, using S3_DIR for the S3 mount.

environment:
  - DEBUG=${DEBUG-}
  - PERSISTENCE=${PERSISTENCE-}
  - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-}  # required for Pro
  - DOCKER_HOST=unix:///var/run/docker.sock
  - S3_DIR=/tmp/s3-buckets
  - PROVIDER_OVERRIDE_S3=legacy
volumes:
  - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
  - "/var/run/docker.sock:/var/run/docker.sock"
  - "/opt/niab/s3-buckets:/tmp/s3-buckets"

localstack_main | LocalStack version: 2.0.3.dev20230522172943
localstack_main | LocalStack Docker container id: 19675eb04d9a
localstack_main | LocalStack build date: 2023-05-23
localstack_main | LocalStack build git hash: 1ce5b73

Is this a known issue? It seems the Java SDK could never work against LocalStack in this case.

There is a workaround: the SDK must be told to skip MD5 validation for GETs and PUTs:

System.setProperty(SkipMd5CheckStrategy.DISABLE_GET_OBJECT_MD5_VALIDATION_PROPERTY, "true");
System.setProperty(SkipMd5CheckStrategy.DISABLE_PUT_OBJECT_MD5_VALIDATION_PROPERTY, "true");
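For context, a sketch of how the workaround might be wired up without importing the SDK class. The literal property names below are assumed to match the values of the SkipMd5CheckStrategy constants in SDK v1 — verify them against your SDK version. They are JVM-wide system properties, so they must be set before any S3 GET or PUT is issued:

```java
public class DisableMd5Checks {
    // ASSUMED literal values of SkipMd5CheckStrategy.DISABLE_GET_OBJECT_MD5_VALIDATION_PROPERTY
    // and DISABLE_PUT_OBJECT_MD5_VALIDATION_PROPERTY; check your SDK version.
    static final String GET_PROP = "com.amazonaws.services.s3.disableGetObjectMD5Validation";
    static final String PUT_PROP = "com.amazonaws.services.s3.disablePutObjectMD5Validation";

    // Call once, early (e.g. at application startup), before building the S3 client.
    public static void disable() {
        System.setProperty(GET_PROP, "true");
        System.setProperty(PUT_PROP, "true");
    }

    public static void main(String[] args) {
        disable();
        System.out.println(System.getProperty(GET_PROP)); // true
    }
}
```

Note this disables integrity checking globally for the JVM, so it is best treated as a stopgap for local testing against LocalStack rather than something to ship to production.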

Hi @jake,

It’s possible that the S3_DIR option is not working as expected. Could you upload the files to S3 through the usual API calls and check whether the ETags are then reported correctly?

We are currently implementing enhancements to the S3_DIR option, which should address these issues and improve performance.

Thanks,
Marcel

Sure, I’ll give it a try and see what it looks like.