Unable to Push to Google Container Registry (access denied) - google-cloud-platform

When I tried to push a container image to Container Registry, I got the following error:
denied: Token exchange failed for project 'my-proj-123'. Caller does not have permission 'storage.buckets.create'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
I had to follow the Bucket Name Verification process to be able to create the artifacts.my-proj-123.appspot.com bucket. Now when I try to push the Docker image, it no longer complains about the storage.buckets.create permission but only gives:
denied: Access denied.
I don't know which user I need to give access to. I gave Storage Admin access to the Compute Engine default service account to no avail. How can I fix it?

I was able to push a Docker image to Container Registry from a Container Optimized OS.
If you are having permission problems, I recommend giving the Compute Engine default service account at least project editor permissions, just for testing purposes. Even if you only target Cloud Storage, other parts of the process may need more permissions. Once you finish testing, you can create a new service account with fewer permissions and fine-tune it for your needs.
Also, there is an alternative to gcloud for authentication. You can try the following:
First, download docker-credential-gcr with:
VERSION=1.5.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs
curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" \
| tar xz --to-stdout ./docker-credential-gcr \
> /usr/bin/docker-credential-gcr && chmod +x /usr/bin/docker-credential-gcr
After that execute docker-credential-gcr configure-docker
Download the Compute Engine default service account json key.
Execute cat [your_service_account_credentials.json] | docker login -u _json_key --password-stdin https://[HOSTNAME]
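For reference, once the login above succeeds, a push is just the usual tag-and-push pair. This is only a minimal sketch; the image name is a placeholder, my-proj-123 is the project ID from the question, and gcr.io is assumed as the [HOSTNAME]:
# tag the local image with the registry host, project ID and image name
docker tag myimage:latest gcr.io/my-proj-123/myimage:latest
# push it using the credentials configured by docker login / docker-credential-gcr
docker push gcr.io/my-proj-123/myimage:latest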

I hit a similar issue while trying to upload a Docker image to GCR from a Container-Optimized OS. I ran the following sequence of commands:
Created a service account and assigned Storage Admin privileges.
Downloaded the JSON key
Executed docker-credential-gcr configure-docker
Logged in with docker command - docker login -u _json_key -p "$(cat ./mygcrserviceaccount.JSON)" https://gcr.io
Tried pushing the image to GCR - docker push gcr.io/project-id/imagename:tage01
It failed with the following error:
denied: Token exchange failed for project 'project-id'. Caller does not have permission 'storage.buckets.create'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
I tried giving every possible permission to my service account through IAM roles, but it would fail with the same error.
After reading this issue, I made the following changes:
Removed the docker config directory - rm -rf ~/.docker
Executed docker-credential-gcr configure-docker
Stored the JSON key path in a variable named GOOGLE_APPLICATION_CREDENTIALS:
GOOGLE_APPLICATION_CREDENTIALS=/path/to/mygcrserviceaccount.JSON
Logged in with docker command - docker login -u _json_key -p "$(cat ${GOOGLE_APPLICATION_CREDENTIALS})" https://gcr.io
Executed docker push command - docker push gcr.io/project-id/imagename:tage01
Voila, it worked like a charm!

Related

Where and how to catch gsutil errors during deployment of my website?

I have a personal website hosted on Google Cloud Storage. The way I am deploying my website to my bucket is the following:
GitHub Actions runs make deploy when I push to the develop branch
make deploy runs a shell script called bin/deploy.sh
I have a billing issue on my Google Cloud account, so I am not able to modify anything in my GCS bucket. In fact, if I run make deploy locally, I get this error log:
AccessDeniedException: 403 The project to be billed is associated with a delinquent billing account.
CommandException: 29 files/objects could not be copied/removed.
make: *** No rule to make target `do', needed by `deploy'. Stop.
My GitHub Actions pipeline succeeded and did not report any error.
When and how should I catch a gcloud error?
deploy.sh
# set website config
gsutil web set -m index.html -e 404.html gs://pierre-alexandre.io
# add user permissions
gsutil iam ch allUsers:legacyObjectReader gs://pierre-alexandre.io
# copy the website files!
gsutil -m rsync -d -r public_html gs://pierre-alexandre.io
Makefile
deploy: $(shell ./bin/deploy.sh)
.github/workflows/main.yml
name: CI
on:
  push:
    branches: [ develop ]
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Deployment to production server
        run: |
          echo deploying new version on pierre-alexandre.io ...
          echo make deploy
Apart from what @Mousumi provided, please find an example of how you can catch gsutil errors by writing some shell script.
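For instance, a minimal sketch of bin/deploy.sh (same bucket and steps as in the question) that stops on the first failing gsutil command and surfaces the error:
#!/bin/bash
# abort the script as soon as any command (including gsutil) returns a non-zero status
set -euo pipefail

gsutil web set -m index.html -e 404.html gs://pierre-alexandre.io
gsutil iam ch allUsers:legacyObjectReader gs://pierre-alexandre.io

# or test the exit status explicitly to react to a specific failure
if ! gsutil -m rsync -d -r public_html gs://pierre-alexandre.io; then
    echo "rsync to the bucket failed (for example a 403 from a delinquent billing account)" >&2
    exit 1
fi
Note that a non-zero exit status only reaches GitHub Actions if the workflow actually runs the command; the workflow above only does echo make deploy, so it can never fail.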
The issue is related to your billing account being suspended. To reinstate the billing account, update the payment method, settle the outstanding balance, and reopen the account so you can use the projects linked to it.
To update the payment method, kindly refer to the steps on this page.
To reopen the billing account, kindly refer to the steps on this page.

AWS EB docker-compose deployment from private registry access forbidden

I'm trying to get docker-compose deployment to AWS Elastic Beanstalk working, in which the docker images are pulled from a private registry hosted by GitLab.
The strange thing is that the initial deployment works perfectly; it pulls the image from the private registry, starts the containers using docker-compose, and the webpage (served by Django) is accessible through the host.
Deploying a new version using the same docker-compose and the same docker image will result in an error while pulling the docker image:
2021/03/16 09:28:34.957094 [ERROR] An error occurred during execution of command [app-deploy] - [Run Docker Container]. Stop running the command. Error: failed to run docker containers: Command /bin/sh -c docker-compose up -d failed with error exit status 1. Stderr:Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating network "current_default" with the default driver
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest(registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 09:28:34.957104 [INFO] Executing cleanup logic
Setup
AWS Elastic Beanstalk 64bit Amazon Linux 2/3.2
GitLab registry credentials are stored in an S3 bucket, with the filename .dockercfg and the following content:
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "base64 encoded username:personal_access_token"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.03.1-ce (linux)"
  }
}
The repository contains a v3 Dockerrun.aws.json file to refer to the credential file in S3:
{
  "AWSEBDockerrunVersion": "3",
  "Authentication": {
    "bucket": "gitlab-dockercfg",
    "key": ".dockercfg"
  }
}
Reproduce
Set up a docker-compose.yml that uses a service with a private docker image (which can be pulled with the credentials set up in the dockercfg within S3).
Create a new application that uses the docker platform.
eb init testapplication --platform=docker --region=eu-west-1
Note: region must be the same as the S3 bucket containing the dockercfg.
Initial deployment (this will succeed)
eb create testapplication-test --branch_default --cname testapplication-test --elb-type=application --instance-types=t2.micro --min-instances=1 --max-instances=4
The initial deployment shows that the image is available and can be started:
2021/03/16 08:58:07.533988 [INFO] save docker tag command: docker tag 5812dfe24a4f redis:alpine
2021/03/16 08:58:07.533993 [INFO] save docker tag command: docker tag f8fcde8b9ae2 mysql:5.7
2021/03/16 08:58:07.533998 [INFO] save docker tag command: docker tag 1dd9b65d6a9f registry.gitlab.com/company/spikes/dockertest:latest
2021/03/16 08:58:07.534010 [INFO] Running command /bin/sh -c docker rm `docker ps -aq`
Without changing anything in the local repository or the remote docker image on the private registry, let's do a redeployment, which will trigger the error:
eb deploy testapplication-test
This will fail with the following output:
...
2021-03-16 10:02:28 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2021-03-16 10:02:29 ERROR Unsuccessful command execution on instance id(s) 'i-0dc445d118ac14b80'. Aborting the operation.
2021-03-16 10:02:29 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
And logs of the instance show (/var/log/eb-engine.log):
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest (registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 10:02:25.902479 [INFO] Executing cleanup logic
Steps I've tried to debug or solve the issue
Rename dockercfg to .dockercfg on S3 (mentioned somewhere on the internet as a possible solution)
Use the 'old' docker config format instead of the one generated by Docker 1.7+. But later on I figured out that Amazon Linux 2 instances are compatible with the new format together with Dockerrun v3
An incorrectly formatted dockercfg on S3 will cause a deployment error about the malformed file (so it actually does something with the dockercfg from S3)
Documentation
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
I'm out of debug options, and I've no idea where to look any further to debug this problem. Perhaps someone can see what is going wrong here?
First of all, the issue described above is a bug confirmed by Amazon. To get the deployment working on our side, we contacted Amazon support.
They have a fix in place which should be released this month, so keep an eye on the changelog of the Elastic Beanstalk platform: https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/relnotes.html
Although the upcoming release should have the fix, there is a workaround available to get the docker-compose deployment working.
Elastic Beanstalk allows hooks to be executed during the deployment, which can be used to fetch the .dockercfg from an S3 bucket to authenticate against the private registry.
To do so, create the following file and directories from the root of the project:
File location: .platform/hooks/predeploy/docker_login
#!/bin/bash
aws s3 cp s3://{{bucket_name_to_use}}/.dockercfg ~/.docker/config.json
Important: Add execution rights to this file (for example: chmod +x .platform/hooks/predeploy/docker_login)
To support instance configuration changes, please symlink the hooks directory to confighooks:
ln -s .platform/hooks/ .platform/confighooks/
Updating configuration requires the .dockercfg credentials to be fetched too.
This should enable continuous deployments to the same EB instance without the authentication errors, because the hook will be executed before the docker images are pulled.
Some background:
The docker daemon reads credentials from ~/.docker/config.json by default on traditional Linux systems. On the initial deploy this file exists on the Elastic Beanstalk instance, but on the next deployment it is removed. Unfortunately, the .dockercfg is not refetched on that deployment, therefore the docker daemon does not have the correct credentials to authenticate with.
I was dealing with the same errors while trying to pull images from a privately hosted GitLab instance. I was able to resolve them by including, alongside the auth field in the .dockercfg file, the email address that was associated with the generated token.
The following file format worked for me:
"registry.gitlab.com" {
"auth": "base64 encoded username:personal_access_token",
"email": "email for personal access token"
}
In my case I used a Project Access Token, which has an e-mail address associated with it once it is created.
The Elastic Beanstalk documentation for the authentication file (here) indicates that this is the required file format, though the Docker versions it says this format is required for are almost certainly outdated, since we are running Docker 19+.

Permission Error Running Container in AWS CodeBuild

I'm attempting to run the following command in CodeBuild:
- docker run --rm -v $(pwd)/SQL:/flyway/SQL -v $(pwd)/conf:/flyway/conf flyway/flyway -enterprise -url=jdbc:postgresql://xxx.xxx.us-east-1.rds.amazonaws.com:5432/hamshackradio -dryRunOutput="/flyway/SQL/output.sql" migrate
I get the following error:
ERROR: Unable to use /flyway/SQL/output.sql as a dry run output:
/flyway/SQL/output.sql (Permission denied) Caused by:
java.io.FileNotFoundException: /flyway/SQL/output.sql (Permission
denied)
The goal is to capture the output.sql file. Running the exact same command locally on Windows (adjusting the paths of course) works without error. The issue isn't Flyway or the overall command structure. It's something to do with the internals of running the Docker container on Ubuntu on AWS CodeBuild and permissions there (maybe permissions, maybe something else, I'm open).
Does anyone have a good idea on how to address this?
The container doesn't have write access to the host. You could try the following, which saves the artifact to the container and uses docker cp to copy the artifact to the host.
container=$(docker create -v <flywaymigrationspathonhost>:/flyway/sql flyway/flyway migrate -dryRunOutput=/flyway/reports/changes.sql -schemas=dbo -user=youruser -password=yourpass -url=<yourjdbcurl> -licenseKey=<licensekey>)
docker start -a ${container}
docker cp ${container}:/flyway/reports/changes.sql <hostpath>
You need to enable Privileged Mode for your CodeBuild project.

Sops unable to gcp kms decrypt file on Circleci despite GOOGLE_APPLICATION_CREDENTIALS successfully set to service account json

I am trying to configure a job on my local CircleCI (using the docker executor, image: google/cloud-sdk:latest), and that job requires a sops GCP KMS encrypted file to be decrypted. I have set up a Google service account for the GCP KMS decrypt service (I can run the script that the CircleCI job is supposed to run successfully on my local machine, decrypting the sops file via the service account, so I know the service account setup is valid). Here is how I am running my job.
1- I base64 encode the Google service account JSON file: base64 path/to/service_account_file.json
2- I run circleci job, setting GCLOUD_SERVICE_KEY environment variable on circleci, with the base64 encoded content from the previous step: circleci local execute --env GCLOUD_SERVICE_KEY='<Base64EncodedServiceAccountJsonFileContent>' --job '<MyJob>'
3- Here is my circleci config:
- run:
    name: <MyJob>
    command: |
      apt-get install -y docker
      apt-get install -y sudo
      cd $(pwd)/path/to/jobcode
      echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/<MyGoogleServiceAccountJsonFile.json>
      export GOOGLE_APPLICATION_CREDENTIALS="${HOME}/<MyGoogleServiceAccountJsonFile.json>"
      gcloud auth activate-service-account --key-file ${HOME}/<MyGoogleServiceAccountJsonFile.json>
      echo $GOOGLE_APPLICATION_CREDENTIALS
      ls -halt $GOOGLE_APPLICATION_CREDENTIALS
      cat $GOOGLE_APPLICATION_CREDENTIALS
      sudo ./<RunJob.sh>
4- I get the following error when I execute the job:
Failed to get the data key required to decrypt the SOPS file.
Group 0: FAILED
projects/<MyProject>/locations/<MyLocation>/keyRings/<MySopsKeyring>/cryptoKeys/<MyKey>: FAILED
- | Cannot create GCP KMS service: google: could not find
| default credentials. See
| https://developers.google.com/accounts/docs/application-default-credentials
| for more information.
Recovery failed because no master key was able to decrypt the file. In
order for SOPS to recover the file, at least one key has to be successful,
but none were.
5- Further, from the console output:
a- I can see that the service account was successfully activated: Activated service account credentials for: [<MyServiceAccount>@<MyProject>.iam.gserviceaccount.com]
b- The GOOGLE_APPLICATION_CREDENTIALS environment variable is set to the service account json's path: /path/to/service_account.json
c- The above file has been correctly base64 decoded and contains valid json:
{
  "client_x509_cert_url": "<MyUrl>",
  "auth_uri": "<MyAuthUri>",
  "private_key": "<MyPrivateKey>",
  "client_email": "<ClientEmail>",
  "private_key_id": "<PrivateKeyId>",
  "client_id": "<ClientId>",
  "token_uri": "<TokenUri>",
  "project_id": "<ProjectId>",
  "type": "<ServiceAccount>",
  "auth_provider_x509_cert_url": "<AuthProviderCertUrl>"
}
6- Some other things I have tried:
a- Tried setting google project name in environment variables, but still same error.
b- Tried setting GOOGLE_APPLICATION_CREDENTIALS to file's content, instead of file path, but again same result.
c- Tried setting GOOGLE_APPLICATION_CREDENTIALS by providing file path without quotes or single quotes, but still no difference.
d- Tried setting $BASH_ENV by doing echo 'export GOOGLE_APPLICATION_CREDENTIALS=path/to/service_account.json' >> $BASH_ENV, but same error
Please help.
Five options that could work:
1- Try to run the following command: gcloud auth application-default login
2- Try this command to set the env var: echo 'export GOOGLE_APPLICATION_CREDENTIALS=/tmp/service-account.json' >> $BASH_ENV
3- I see that RunJob.sh is running under root. It could be that the GCP credentials are not visible under sudo by default. Either run the script without sudo or run the preceding commands with sudo (see the sketch after this list).
4- As a last resort (these options worked for me, it could be different in your scenario): { echo 1; echo 1; echo n; } | gcloud init
5- Run gcloud components update; this sometimes works when the SDK is outdated. Then set the project with gcloud config set project [PROJECT_NAME].
You can also check the active accounts with: gcloud auth list
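Regarding the sudo point in option 3, here is a quick sketch of why the variable disappears and two ways to keep it (the file and script names are the placeholders from the question; whether -E is honoured depends on the sudoers policy):
export GOOGLE_APPLICATION_CREDENTIALS="${HOME}/MyGoogleServiceAccountJsonFile.json"
# sudo resets the environment by default, so the variable is usually gone here:
sudo printenv GOOGLE_APPLICATION_CREDENTIALS
# keep the caller's environment for the child process:
sudo -E ./RunJob.sh
# or pass just the one variable through explicitly:
sudo GOOGLE_APPLICATION_CREDENTIALS="${GOOGLE_APPLICATION_CREDENTIALS}" ./RunJob.sh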

What is the best way to pass AWS credentials to a Docker container?

I am running a docker container on Amazon EC2. Currently I have added the AWS credentials to the Dockerfile. Could you please let me know the best way to do this?
A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.
First, specifically with AWS credentials on containers already running inside of the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then add one more plus one to his answer and skip the rest of this.
Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:
Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and, most importantly, they appear in clear text when you inspect the container (see the quick example after this list).
In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
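As a quick illustration of the first point (a throwaway container, the secret value is made up):
# start a container with a "secret" in its environment
docker run -d --name demo -e AWS_SECRET_ACCESS_KEY=supersecret alpine sleep 300
# the value shows up in clear text for anyone with access to the docker API
docker inspect -f '{{ .Config.Env }}' demo
docker rm -f demo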
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, and then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
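A rough sketch of Option A (the file names and the S3 path are made up for illustration): the credentials only ever exist in a throwaway build stage, and only the fetched artifact is copied into the stage that gets tagged and pushed.
# build stage: has the secret, never leaves the build server's cache
FROM python:3 AS build
RUN pip install awscli
COPY aws_credentials /root/.aws/credentials
RUN aws s3 cp s3://example-bucket/private-asset.tar.gz /asset.tar.gz

# release stage: contains the artifact but not the credentials
FROM python:3
COPY --from=build /asset.tar.gz /asset.tar.gz
As noted above, the secret still sits in the build cache of the first stage, so treat this as a last resort.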
Option B: Also during build time, if you can use BuildKit which was released in 18.09, there are currently experimental features to allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during build without worrying it will be pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials on the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For a docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you'd have:
version: '3'
services:
  app:
    image: your_image
    volumes:
      - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:
version: '3.7'
secrets:
  aws_creds:
    external: true
services:
  app:
    image: your_image
    secrets:
      - source: aws_creds
        target: /home/user/.aws/credentials
        uid: '1000'
        gid: '1000'
        mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
I often version my secrets using a script from: https://github.com/sudo-bmitch/docker-config-update
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
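As a hedged sketch of what that can look like with Vault's AWS secrets engine (assuming the engine is mounted at the default aws/ path and a role named my-role has already been configured; the address and token are placeholders):
export VAULT_ADDR=https://vault.example.com:8200
vault login <your-token>
# ask Vault for short-lived AWS credentials tied to the "my-role" configuration
vault read aws/creds/my-role
# the output contains a temporary access key / secret key pair that expires on its own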
The best way is to use an IAM role and not deal with credentials at all (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
Credentials can be retrieved from http://169.254.169.254..... Since this is a link-local address, it is accessible only from EC2 instances.
All modern AWS client libraries "know" how to fetch, refresh, and use credentials from there, so in most cases you don't even need to think about it. Just run the EC2 instance with the correct IAM role and you are good to go.
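If you want to see what the SDKs pick up, you can query the instance metadata service from the instance or from a container running on it (the role name below is a placeholder):
# list the IAM role attached to the instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# fetch the temporary credentials for that role (AccessKeyId, SecretAccessKey, Token, Expiration)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-instance-role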
As an option, you can pass them at runtime as environment variables (e.g. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).
You can access these environment variables by running printenv at the terminal.
Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (like boto3 or the AWS SDK for Java) look for the default profile in the ~/.aws/credentials file.
If you want to use other profiles, you just need to export the AWS_PROFILE variable before running the docker-compose command.
export AWS_PROFILE=some_other_profile_name
version: '3'
services:
  service-name:
    image: docker-image-name:latest
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
    volumes:
      - ~/.aws/:/root/.aws:ro
In this example, I used the root user inside the container. If you are using another user, just change /root/.aws to that user's home directory.
:ro stands for a read-only docker volume.
This is very helpful when you have multiple profiles in the ~/.aws/credentials file and are also using MFA. It is also helpful when you want to test a container locally before deploying it to ECS, where you have IAM roles, but locally you don't.
Another approach is to pass the keys from the host machine to the docker container. You may add the following lines to the docker-compose file.
services:
  web:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v$HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Please note that for advanced use cases you might need read-write permissions, so omit the :ro (read-only) restriction when mounting the .aws volume: use -v$HOME/.aws:/root/.aws instead of -v$HOME/.aws:/root/.aws:ro.
Volume mounting is noted in this thread, but as of docker-compose v3.2+ you can use a bind mount.
For example, if you have a file named .aws_creds in the root of your project:
In your service in the compose file, do this for volumes:
volumes:
  # normal volume mount, already shown in thread
  - ./.aws_creds:/root/.aws/credentials
  # way 2, note this requires docker-compose v3.2+
  - type: bind
    source: .aws_creds # from local
    target: /root/.aws/credentials # to the container location
Using this idea, you can publicly store your docker images on Docker Hub, because your AWS credentials will not physically be in the image. To have them associated, you must have the correct directory structure locally where the container is started (e.g. after pulling from Git).
You could create ~/aws_env_creds as follows:
touch ~/aws_env_creds
chmod 777 ~/aws_env_creds
vi ~/aws_env_creds
Add these values (replace the keys with yours):
AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
Press "esc" to save the file.
Run and test the container:
my_service:
  build: .
  image: my_image
  env_file:
    - ~/aws_env_creds
If someone still faces the same issue after following the instructions in the accepted answer, make sure you are not passing environment variables from two different sources. In my case I was passing environment variables to docker run both via a file and as parameters, which caused the variables passed as parameters to have no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the AWS credentials into the mentioned env.list file helped.
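In other words (a minimal sketch, reusing the dummy values from the command above), env.list is a plain VAR=value file and the credentials live only there:
# env.list
AWS_ACCESS_KEY_ID=ABCD
AWS_SECRET_ACCESS_KEY=PQRST

docker run --env-file ./env.list IMAGE_NAME:v1.0.1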
For a PHP Apache docker container, the following command works:
docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache
Based on some of the previous answers, I built my own setup as follows.
My project structure:
├── Dockerfile
├── code
│   └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt
My docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
volumes:
- ./credentials:/root/.aws/credentials
- ./code:/home/app
My Dockerfile:
FROM python:3.8-alpine
# install the AWS CLI so the app can use the mounted credentials
RUN pip3 --no-cache-dir install --upgrade awscli
# working directory where docker-compose mounts ./code
RUN mkdir -p /home/app
WORKDIR /home/app
CMD python main.py