The environment variable XXX_FILE is not available. #58
Using a custom Dockerfile makes it work properly:

```dockerfile
FROM efrecon/s3fs:1.94
RUN sed -i \
    -e '69s/.*/    AWS_S3_ACCESS_KEY_ID=$(cat "${AWS_S3_ACCESS_KEY_ID_FILE}")/' \
    -e '75s/.*/    AWS_S3_SECRET_ACCESS_KEY=$(cat "${AWS_S3_SECRET_ACCESS_KEY_FILE}")/' \
    /usr/local/bin/docker-entrypoint.sh
```
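The effect of the `sed` edit can be sketched outside the image. A minimal comparison of the two ways a `*_FILE` secret can be loaded into a shell variable, assuming the entrypoint originally used the `read` builtin and the patch swaps in `cat` (the file path and contents here are made up for illustration):

```shell
# A temp file stands in for a Docker secret such as
# /run/secrets/aws_secret_access_key (hypothetical path).
secret_file=$(mktemp)
printf 'example-secret-key\n' > "$secret_file"

# Variant 1: the read builtin, with the file redirected into it.
read -r KEY_FROM_READ < "$secret_file"

# Variant 2: command substitution with cat, as in the patched entrypoint.
KEY_FROM_CAT=$(cat "$secret_file")

echo "read: $KEY_FROM_READ"
echo "cat:  $KEY_FROM_CAT"
rm -f "$secret_file"
```

On a readable file both variants produce the same value; they differ in how errors surface when the file is unreadable or when `read` is invoked incorrectly.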
I think this is because, when run from Compose, the user ID inside the container is not allowed to access the secrets file, as it is readable only by …
I replaced the `read` command with `cat` and it worked normally, so it's not a permissions issue, right? https://github.com/efrecon/docker-s3fs-client/issues/58#issuecomment-2391565851
I am unable to reproduce this. Can you set …
Unable to start, no logs |
I've been testing for a long time, but it still won't start normally, and there are no prompts or messages.

- With version 1.94, the message is: `s3fs exited with code 1`
- With version 1.89, the message is: `read: '/run/secrets/aws_secret_access_key': bad variable name`

When using the custom Dockerfile, the logs display as follows:

All tests were run with Docker Compose.
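For what it's worth, the 1.89 message looks like the error a POSIX shell prints when `read` receives something that is not a valid variable name, for example a file path passed where a variable name is expected. A hedged reproduction of that failure mode (this is an assumption about the cause, not the actual entrypoint code):

```shell
# If a path is passed where read expects a variable name, the builtin
# rejects it; shells report this as "bad variable name" (ash/dash)
# or "not a valid identifier" (bash).
if read -r "/run/secrets/aws_secret_access_key" < /dev/null 2>/dev/null; then
    result="accepted"
else
    result="rejected"
fi
echo "read $result the path as a variable name"
```

This would explain why substituting `cat "${FILE}"` (which takes a path argument) works where `read` (which takes a variable name) fails.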