I created a Lambda that is triggered by placing a file in an S3 bucket, and I expected it to receive an S3Event, as happens in AWS and as many LocalStack examples also show. Instead, I got a LinkedHashMap, which seems to contain the necessary information but makes it hard to find things like the bucket name and object key. Moreover, if I want the code I developed against LocalStack to work in AWS later, the only approach I have thought of is to declare the parameter as Object, check its class at runtime, and extract the information depending on the class – obviously, not having to deal with two different possibilities would be better. Could this be because I am using the wrong version of LocalStack (I am running it in Docker), or is it caused by a configuration choice I made? I am fairly sure the problem is not in the Lambda itself.
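For concreteness, this is roughly the dual-path workaround I was considering (just a sketch, not something I want to keep; the Map keys follow the standard S3 event notification JSON shape, and the class/method names are otherwise my own):

```java
import java.util.List;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

public class UploadHandler implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object event, Context context) {
        // Path 1: AWS (and a correctly configured LocalStack) passes a typed S3Event.
        if (event instanceof S3Event) {
            return ((S3Event) event).getRecords().get(0).getS3().getBucket().getName();
        }
        // Path 2: what I actually get from LocalStack – raw JSON deserialized
        // into nested maps, keyed like the S3 event notification document.
        if (event instanceof Map) {
            Map<?, ?> root = (Map<?, ?>) event;
            List<?> records = (List<?>) root.get("Records");
            Map<?, ?> s3 = (Map<?, ?>) ((Map<?, ?>) records.get(0)).get("s3");
            return (String) ((Map<?, ?>) s3.get("bucket")).get("name");
        }
        throw new IllegalArgumentException("Unexpected event type: " + event.getClass());
    }
}
```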
Please try our new lambda provider by setting PROVIDER_OVERRIDE_LAMBDA=asf in the LocalStack environment variables, and please mount your Docker socket into the container for it to work. (If you use docker-compose, that means adding
- "/var/run/docker.sock:/var/run/docker.sock" to the volumes section of the localstack service.)
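Putting those two pieces together, a minimal docker-compose sketch might look like this (service name, image tag, and ports are assumptions; adapt them to your existing setup):

```yaml
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
    environment:
      # Opt in to the new lambda provider
      - PROVIDER_OVERRIDE_LAMBDA=asf
    volumes:
      # Mount the host Docker socket so the provider can run Lambdas in containers
      - "/var/run/docker.sock:/var/run/docker.sock"
```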
With this new provider, the event should be correctly cast, provided you use S3Event as the event type in your handler.
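In other words, the handler can then be written exactly as it would be for AWS. A minimal sketch, assuming the aws-lambda-java-events dependency is on the classpath (class name is mine):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;

public class S3UploadHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) {
        // With the typed event, bucket name and object key are directly accessible
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();
        context.getLogger().log("Received " + key + " in " + bucket);
        return bucket + "/" + key;
    }
}
```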
Your situation mostly happens when you use the local Lambda executor of the current lambda provider, which has a lot of shortcomings, especially for Java.