I'm using docker-compose to run a Python API and a LocalStack instance in two separate containers for local development.
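For context, the compose setup looks roughly like this (a minimal sketch; the service names api and localstack and the port numbers are assumptions, not my actual file):

    # docker-compose.yml (illustrative sketch)
    version: "3.8"
    services:
      api:
        build: .
        ports:
          - "8000:8000"          # API exposed to the host browser
        depends_on:
          - localstack
      localstack:
        image: localstack/localstack
        ports:
          - "4566:4566"          # LocalStack edge port exposed to the host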
The API has an endpoint that generates a presigned AWS S3 URL and redirects the user to it, so images load directly from S3.
In local development, the API instantiates a boto3 client using the address of the localstack container as a custom endpoint URL (i.e. boto3.client("s3", endpoint_url="http://localstack:4566")), which allows the API to access resources within the localstack container.
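Roughly, the endpoint looks like this (a simplified sketch; Flask and the bucket/key names are assumptions, not my actual code):

    # Sketch of the redirect endpoint; Flask, bucket and key are placeholders.
    import boto3
    from flask import Flask, redirect

    app = Flask(__name__)

    # The endpoint URL uses the localstack container's hostname on the compose
    # network, which the browser on the host cannot resolve.
    s3 = boto3.client("s3", endpoint_url="http://localstack:4566")

    @app.route("/images/<key>")
    def get_image(key):
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "images", "Key": key},
            ExpiresIn=3600,
        )
        # The presigned URL points at http://localstack:4566/..., which fails in the browser.
        return redirect(url)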
The problem is that the presigned URL returned by the boto3 client uses the localstack address, and the browser cannot load it, since the LocalStack resources are exposed to the host machine at http://localhost:4566.
If I try to set the AWS endpoint URL to localhost in the boto3 client instantiation, then the API, which is running inside a container, will look for AWS resources on its OWN CONTAINER's localhost, and not on the host machine, where the LocalStack resources are exposed.
Is there any way to access LocalStack resources, running in a Docker container, from both the host machine's browser AND a different container, using the same address?
[Edit] I'm using Docker on Mac, in case that changes anything. [/Edit]
The service is probably only listening on 127.0.0.1 on your host. Any solution will likely require it to listen on all interfaces instead (0.0.0.0). I was writing up a longer answer, but as I don't have access to a Mac running Docker there ended up being too much conjecture. Because Docker on a Mac (or Windows) runs inside a virtual machine, the networking is different than, e.g., on my Linux host.
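If the port mapping is what restricts it, one thing to check (a sketch only; I can't verify this on Docker for Mac) is whether the compose file pins the published port to a single host IP, since omitting the IP publishes it on all host interfaces:

    # Port-mapping sketch (assumption: the compose file controls how 4566 is published).
    services:
      localstack:
        image: localstack/localstack
        ports:
          - "4566:4566"              # published on all host interfaces (same as 0.0.0.0:4566:4566)
          # - "127.0.0.1:4566:4566"  # loopback only; unreachable from other interfaces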