When I try pushing an image using the Drone plugin for Amazon ECR, I get the following message:
"no basic auth credentials"
My .drone.yml pipeline:
publish-to-ecr:
  image: plugins/ecr
  repo: foo
  registry: xxx.dkr.ecr.us-west-1.amazonaws.com
  dockerfile: ./Dockerfile
  tags:
    - latest
  access_key: xxx
  secret_key: xxx
  region: xxx
I am using the same credentials to push from my local environment, and there it works.
The problem was that the role I attached to the machine was not also granted access on the repository side.
Go to the repository and, under Permissions, grant the role the following permissions: PutImage, CompleteLayerUpload, InitiateLayerUpload.
And it worked.
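For reference, the same grant can be applied from the CLI as a repository policy. This is only a sketch: the repository name matches the example above, but the account ID and role name are placeholders, and in my experience a push also needs UploadLayerPart and BatchCheckLayerAvailability alongside the three permissions above:
aws ecr set-repository-policy --repository-name foo --policy-text '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowPushFromBuildRole",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:role/my-build-role" },
    "Action": [
      "ecr:PutImage",
      "ecr:InitiateLayerUpload",
      "ecr:UploadLayerPart",
      "ecr:CompleteLayerUpload",
      "ecr:BatchCheckLayerAvailability"
    ]
  }]
}'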
Related
I am using Single sign-on (SSO) authentication with AWS.
In the terminal, I sign into my SSO account, successfully:
aws sso login --profile dev
Navigating to the directory of the docker-compose.yml file, and using Docker in an Amazon ECS context, the command docker compose up -d fails with:
NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I have deleted the old (non-SSO) access keys and profiles in:
~/.aws/config
~/.aws/credentials
So now all that is present in those files is my SSO profile.
Before SSO (using IAM users), docker compose up -d worked as expected, so I believe the problem is that Docker is having difficulty connecting to AWS via SSO on the CLI.
Any help here is much appreciated.
Docs on Docker ECS integration: https://docs.docker.com/cloud/ecs-integration/
The docker-compose.yml file looks like this:
version: "3.4"
x-aws-vpc: "vpc-xxxxx"
x-aws-cluster: "test"
x-aws-loadbalancer: "test-nlb"
services:
test:
build:
context: ./
dockerfile: Dockerfile
target: development
image: xxx.dkr.ecr.eu-west-1.amazonaws.com/xxx:10
environment:
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- ENABLE_SWAGGER=${ENABLE_SWAGGER:-true}
- LOGGING_LEVEL=${LOGGING_LEVEL:-INFO}
ports:
- "9090:9090"
I have a Bitbucket pipeline that needs multiple AWS credentials for different duties.
In the first lines, I use a custom ECR image. To pull it, I created an AWS user with only ECR read-only permissions; the access-key and secret-key parameters are that user's keys.
In this ECR image, I embedded another AWS user's credentials to do the rest of the work (image push etc.). But somehow the credentials I used for pulling the base image are also applied in the steps, so the image push is denied.
I tried to use export AWS_PROFILE=deployment but it doesn't help.
Are the credentials for the base-image pull applied pipeline-wide?
And how can I get around this?
Thank you.
image:
  name: <ECR Image>
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

pipelines:
  default:
    - step:
        name: "Image Build & Push"
        services:
          - docker
        script:
          - export AWS_PROFILE=deployment
          - export ENVIRONMENT=beta
          - echo "Environment is ${ENVIRONMENT}"
          - export DOCKER_IMAGE_BUILDER="${BITBUCKET_REPO_SLUG}:builder"
          - make clean
          - make build BUILD_VER=${BITBUCKET_TAG}.${BITBUCKET_BUILD_NUMBER} APP_NAME=${BITBUCKET_REPO_SLUG} DOCKER_IMAGE_BUILDER=${DOCKER_IMAGE_BUILDER}
          - make test
          - docker tag ....
          - docker push .....
What I would do here instead of baking credentials into the images:
Use one credential for pulling, tagging, and pushing the image: why not use the same user for pushing that you use for pulling?
If that is something you don't want to do:
Create an IAM role with permission to tag/push the images and assume that role from the credentials already exported earlier. There is no need to bake credentials into the images.
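As a rough sketch of that role-assumption approach inside the step script (the role ARN, region, and registry are placeholders, and it assumes jq is available in the build image):
CREDS=$(aws sts assume-role --role-arn arn:aws:iam::111122223333:role/ecr-push-role --role-session-name bitbucket-push --query Credentials --output json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)
# log in to ECR with the assumed role, then tag and push as before
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.eu-west-1.amazonaws.com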
I found the following example in the documentation:
script:
  # build the image
  - docker build -t my-docker-image .
  # use the pipe to push to AWS ECR
  - pipe: atlassian/aws-ecr-push-image:1.2.2
    variables:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
      AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
      IMAGE_NAME: my-docker-image
      TAGS: '${BITBUCKET_TAG} latest'
OpenID Connect is also a nice option: https://support.atlassian.com/bitbucket-cloud/docs/deploy-on-aws-using-bitbucket-pipelines-openid-connect/
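A rough sketch of what the OIDC route can look like in bitbucket-pipelines.yml, assuming you have already created an IAM OIDC identity provider for Bitbucket and a role that trusts it (the role ARN and registry are placeholders):
- step:
    name: "Push with OIDC"
    oidc: true
    script:
      - export AWS_ROLE_ARN=arn:aws:iam::111122223333:role/bitbucket-ecr-push
      - export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token
      - echo "${BITBUCKET_STEP_OIDC_TOKEN}" > "${AWS_WEB_IDENTITY_TOKEN_FILE}"
      - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.eu-west-1.amazonaws.com
With AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE set, the AWS CLI assumes the role via the web identity token, so no long-lived keys live in the pipeline.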
I am running a docker compose network on AWS CodeBuild and I need to pass AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to the docker containers as they need to interact with AWS SSM. What is the best way to get these credentials from CodeBuild and pass them to the docker containers?
Initially, I thought of mounting the credentials directory from CodeBuild as a volume by adding this to each service in the docker-compose.yml file
volumes:
  - '${HOME}/.aws/credentials:/root/.aws/credentials'
but that did not work, as the ${HOME}/.aws/ folder in the CodeBuild environment does not contain any credentials.
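An approach that avoids keys entirely, sketched below with a placeholder service name, is to forward CodeBuild's container-credentials endpoint instead: CodeBuild exposes its role's temporary credentials at http://169.254.170.2 through the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable, and AWS SDKs pick that up automatically, so passing the variable into each container is usually enough (networking permitting):
services:
  myservice:
    environment:
      # forwarded from the CodeBuild environment; the SDK inside the container
      # then fetches temporary credentials from 169.254.170.2 on its own
      - AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}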
Using Docker secrets, you may create your secrets:
docker secret create credentials.cnf credentials.cnf
Define your keys in the credentials.cnf file and reference it in your compose file as below:
services:
  example:
    image: <your image>
    secrets:
      - credentials.cnf   # mounted at /run/secrets/credentials.cnf inside the container

secrets:
  credentials.cnf:
    file: ./credentials.cnf   # contains your AWS key and secret
You can view your secrets with docker secret ls.
In the environment section of the CodeBuild project, you also have the option to set an environment variable from a value stored in Parameter Store.
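If you go the Parameter Store route, the buildspec.yml equivalent is roughly this (the parameter names are placeholders); CodeBuild resolves them into environment variables before the build phases run:
env:
  parameter-store:
    AWS_ACCESS_KEY_ID: /myapp/aws-access-key-id
    AWS_SECRET_ACCESS_KEY: /myapp/aws-secret-access-key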
I'm using CircleCI to deploy my project to my AWS S3 bucket.
After many attempts I was finally able to make my config.yml work, and according to the CircleCI interface everything runs successfully.
The problem is that when I access my bucket there's nothing there.
I already tried this:
- run:
    command: "aws s3 sync myAppPath s3://myBucketName"
Could anyone help? I have no errors and everything finishes successfully, but there are no files in my bucket.
Thanks in advance
You have to add credentials.
Add environment variables in the project: https://circleci.com/docs/2.0/env-vars/
Then configure .circleci/config.yml:
# deploy to aws s3
deploy:
  docker:
    - image: cibuilds/aws:1.15.73
  environment:
    aws_access_key_id: $AWS_ACCESS_KEY_ID
    aws_secret_access_key: $AWS_SECRET_ACCESS_KEY
  steps:
    - attach_workspace:
        at: ./workspace
    - run:
        name: Deploy to S3 if tests pass and branch is develop
        command: aws s3 sync workspace/public s3://your.bucket/ --delete
Also, just to let you know: to debug the AWS CLI, use the CircleCI CLI. Once you connect to the CircleCI job over SSH, try in your terminal:
aws s3 sync workspace/public s3://your.bucket/ --debug
I'm configuring CircleCI and trying to sync GitHub to AWS EC2.
When I committed and pushed to the GitHub repo, CircleCI showed an error like this.
Here's the circle.yml config:
test:
  override:
    - exit 0

deployment:
  staging:
    branch: develop-citest
    region: ap-northeast-1
    codedeploy:
      w***r-ma:
        application_root: /home/adbase/
        revision_location:
          revision_type: S3
          s3_location:
            bucket: w****r-vm-dev
            key_pattern: w****r-{BRANCH}-{SHORT_COMMIT}
        deployment_group: staging-instance-group
        deployment_config: CodeDeployDefault.AllAtOnce
What should I do about this problem?