NoCredentialProviders error when running "docker compose up" with AWS ECS integration

I'm getting the following error when I try to run docker compose up to deploy my infrastructure to AWS using Docker's ECS integration. Note that I'm running this on Pop!_OS 21.10, which is based on Ubuntu.
NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Things I've tried, based on an exhaustive search of SO and other sites:
Verified that my ~/.aws/config and ~/.aws/credentials files are formatted correctly, are in the proper place, and have the correct permissions
Verified that the aws cli works fine
Verified that AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are all set correctly
Tried copying the config and credentials to /root/.aws
Tried setting AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION in the root user's environment
Created /etc/systemd/system/docker.service.d/aws-credentials.conf and populated it with:
[Service]
Environment="AWS_ACCESS_KEY_ID=********************"
Environment="AWS_SECRET_ACCESS_KEY=****************************************"
Ran docker -l debug compose up (the only extra information it provides is DEBUG deploying on AWS with region="us-east-1")
I'm running out of options. If anyone has any other ideas to try, I'd love to hear it. Thanks!
Update: I've also now tried the following, with no luck:
Tried setting Environment="AWS_SHARED_CREDENTIALS_FILE=/home/kespan/.aws/credentials"
Tried setting Environment="AWS_SHARED_CREDENTIALS_FILE=/home/kespan/.aws/credentials" in /etc/systemd/system/docker.service.d/override.conf
After remembering my IAM account has MFA enabled, generated a token and added Environment="AWS_SESSION_TOKEN=..." to override.conf
Also to note - each time after I've added/modified files under /etc/systemd/system/docker.service.d/ I've run:
sudo systemctl daemon-reload
sudo systemctl restart docker
Edit:
Here's one of the Dockerfiles (both the scraper and scheduler use an identical Dockerfile):
FROM denoland/deno:alpine
WORKDIR /app
USER deno
COPY deps.ts .
RUN deno cache --unstable --no-check deps.ts
COPY . .
RUN deno cache --unstable --no-check mod.ts
RUN mkdir -p /var/tmp/log
CMD ["run", "--unstable", "--allow-all", "--no-check", "mod.ts"]
Here's my docker-compose (some bits redacted):
version: '3'
services:
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana:/var/lib/grafana
    deploy:
      replicas: 1
  scheduler:
    image: scheduler
    x-aws-pull-credentials: "arn..."
    container_name: scheduler
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1
  scraper:
    image: scraper
    x-aws-pull-credentials: "arn..."
    container_name: scraper
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1
volumes:
  grafana:

Have you attempted to use the Amazon ECS Local Container Endpoints tool that AWS Labs provides? It allows you to create an override file for your docker-compose configurations, and it will simulate the ECS endpoints and IAM roles you would be using in AWS.
This is done using the local AWS credentials you have on your workstation. More information is available on the AWS Blog.
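As a rough illustration, here is a minimal docker-compose.override.yml sketch along the lines of the AWS blog example - the credentials_network addressing and the ecs-local-endpoints service come from that example, the scheduler service name is taken from your compose file, and everything else should be adjusted to your setup:

version: "3"
networks:
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
          gateway: 169.254.170.1
services:
  # Simulates the ECS task metadata/credentials endpoints locally,
  # using the AWS profile mounted from your workstation.
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      - /var/run:/var/run
      - $HOME/.aws/:/home/.aws/
    environment:
      HOME: "/home"
      AWS_PROFILE: "default"
    networks:
      credentials_network:
        ipv4_address: "169.254.170.2"
  # Your service then resolves credentials from the simulated endpoint
  # instead of hard-coded keys.
  scheduler:
    depends_on:
      - ecs-local-endpoints
    networks:
      credentials_network:
        ipv4_address: "169.254.170.3"
    environment:
      AWS_DEFAULT_REGION: "us-east-1"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"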

Related

Docker CLI does not Understand AWS CLI SSO Credentials

I am using Single sign-on (SSO) authentication with AWS.
In the terminal, I sign into my SSO account, successfully:
aws sso login --profile dev
Navigating to the directory of the docker-compose.yml file, and using Docker in an Amazon ECS context, the command docker compose up -d fails with:
NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I have deleted the old (non-SSO) access keys and profiles in:
~/.aws/config
~/.aws/credentials
So now all that is present in the above files is my SSO profile.
Before SSO (using IAM users), docker compose up -d worked as expected, so I believe the problem is that Docker is having difficulty connecting to AWS via SSO on the CLI.
Any help here is much appreciated.
Docs on Docker ECS integration: https://docs.docker.com/cloud/ecs-integration/
The docker-compose.yml file looks like this:
version: "3.4"
x-aws-vpc: "vpc-xxxxx"
x-aws-cluster: "test"
x-aws-loadbalancer: "test-nlb"
services:
test:
build:
context: ./
dockerfile: Dockerfile
target: development
image: xxx.dkr.ecr.eu-west-1.amazonaws.com/xxx:10
environment:
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- ENABLE_SWAGGER=${ENABLE_SWAGGER:-true}
- LOGGING_LEVEL=${LOGGING_LEVEL:-INFO}
ports:
- "9090:9090"

AWS EBS - How to pull environment name into .ebextensions script

I have a grails app that I deploy to AWS Elastic Beanstalk through Jenkins. I want to add a splunk forwarder to my project so I can keep track of my logs outside of AWS and set up easy notifications.
The problem is, I have multiple environments of the app running (dev, pre-prod, prod, etc.), which is fine because you can just change the environment name for the forwarder and be able to easily sort through that in Splunk.
However, the same .ebextensions file has to be used between all the environments, so I need a way to set the environment name to whatever AWS has it as. Is there an easy way to do this that I'm overlooking?
Start of the script:
container_commands:
  01install-splunk:
    command: /usr/local/bin/install-splunk.sh
  02set-splunk-outputs:
    command: /usr/local/bin/set_splunk_outputs.sh
    env:
      SPLUNK_SERVER_HOST: "splunk.host"
  03add-inputs-to-splunk:
    command: /usr/local/bin/add-inputs-to-splunk.sh
    env:
      ENVIRONMENT_NAME: "Development"
    cwd: /root
    ignoreErrors: false
That ENVIRONMENT_NAME variable I'm setting that's passed to the 3rd script is what I want to be able to change based on what environment is being deployed. Can I set this in Jenkins or pull it through AWS somehow?
You can try the steps below:
Configure your AWS Elastic Beanstalk environment with the environment variable ENVIRONMENT_NAME set to 'Development', 'QA', or 'Prod' (see the official AWS docs for how to set environment properties).
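For example, a hedged sketch of setting that environment property from the command line - the environment name my-app-dev is a placeholder:

# With the EB CLI (applies to the current/default environment):
eb setenv ENVIRONMENT_NAME=Development

# Or with the AWS CLI:
aws elasticbeanstalk update-environment \
  --environment-name my-app-dev \
  --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=ENVIRONMENT_NAME,Value=Development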
Then update config as below:
container_commands:
  01install-splunk:
    command: /usr/local/bin/install-splunk.sh
  02set-splunk-outputs:
    command: /usr/local/bin/set_splunk_outputs.sh
    env:
      SPLUNK_SERVER_HOST: "splunk.host"
  03add-inputs-to-splunk:
    command: /usr/local/bin/add-inputs-to-splunk.sh
    env:
      ENVIRONMENT_NAME: "$ENVIRONMENT_NAME"
    cwd: /root
    ignoreErrors: false
Hope this works for you.

Deploy Applications on Amazon ECS Using docker compose

I'm trying to deploy a docker container with multiple services to ECS. I've been following this article which looks great: https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I can get my container to run locally, and I can connect to the ECS context using the AWS CLI; however in the basic example from the article when I run
docker compose up
to deploy the image to ECS, I get the error:
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Can't seem to make heads or tails of this. My Docker is logged in to ECR using
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
The default IAM user on my aws CLI has AmazonECS_FullAccess as well as "ecs:ListAccountSettings" and "cloudformation:ListStackResources"
I read in mikemaccana's answer to "pull access denied repository does not exist or may require docker login" that after Nov 2020 authentication may be required in your YAML file to allow AWS to pull from hub.docker.io (i.e., give AWS your Docker Hub username and password), but I can't get the 'auth' syntax to work in my YAML file. This is my YAML file that runs Tomcat and MariaDB locally:
version: "2"
services:
database:
build:
context: ./tba-database
image: tba-database
# set default mysql root password, change as needed
environment:
MYSQL_ROOT_PASSWORD: password
# Expose port 3306 to host. Not for the application but
# handy to inspect the database from the host machine.
ports:
- "3306:3306"
restart: always
webserver:
build:
context: ./tba-webserver
image: tba-webserver
# mount point for application in tomcat
volumes:
- ./target/testPROJ:/usr/local/tomcat/webapps/ROOT
links:
- database:tba-database
# open ports for tomcat and remote debugging
ports:
- "8080:8080"
- "8000:8000"
restart: always
Author of the blog here (thanks for the kind comment!). I haven't played much with the build side of things, but I suspect what's happening here is that when you run docker compose up we ignore the build phase and only leverage the image field. What happens next is that the containers being deployed on ECS/Fargate try to pull the image tba-database (which is what the deployment is complaining about, because it doesn't exist there). You need extra steps to push your images to either GitHub or ECR before you can bring them to life using docker compose up in the ecs context.
You also probably need to change the compose version ("2" is very old).
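A hedged sketch of that extra push step - the account ID, region, and repository name below are placeholders:

# Create an ECR repository and push the locally built image to it:
aws ecr create-repository --repository-name tba-webserver
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag tba-webserver:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/tba-webserver:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/tba-webserver:latest

Then point the image field in the compose file at the pushed ECR URI instead of the bare local tag.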

How to get AWS kops based kubernetes cluster IP address to connect with gitlab CICD pipeline

I am trying to create a basic GitLab CI/CD pipeline that deploys my Node.js-based backend to an AWS kops-based k8s cluster. For that I have created a gitlab-ci.yml file that drives the whole CI/CD pipeline; however, I am confused about how to get the Kubernetes cluster IP address so I can use it in gitlab-ci.yml as kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS",
where I want CLUSTER_ADDRESS to be configured as a GitLab variable used in gitlab-ci.yml.
Any help would be appreciated.
variables:
  DOCKER_DRIVER: overlay2
  REGISTRY: $CI_REGISTRY
  IMAGE_TAG: $CI_REGISTRY_IMAGE
  K8S_DEPLOYMENT_NAME: deployment/$CI_PROJECT_NAME
  CONTAINER_NAME: $CI_PROJECT_NAME
stages:
  - build
  - build-docker
  - deploy
build-docker:
  image: docker:latest
  stage: build-docker
  services:
    - docker:dind
  tags:
    - privileged
  only:
    - Test
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY
    - docker build --network host -t $IMAGE_NAME:$IMAGE_TAG -t $IMAGE_NAME:latest .
    - docker push $IMAGE_NAME:$IMAGE_TAG
    - docker push $IMAGE_NAME:latest
deploy-k8s-(stage):
  image:
    name: kubectl:latest
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build-docker
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=default
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
If your current kubeconfig context is set to the cluster in question, you can run the following to get the cluster address you want:
kubectl config view --minify --raw \
--output 'jsonpath={.clusters[0].cluster.server}'
You can add --context <cluster name> if not.
In most cases this will be https://api.<cluster name>.
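For instance, a small sketch of capturing that value for use as the GitLab CI variable (the server URL in the comment is hypothetical):

# Grab the API server address of the current kubeconfig context:
CLUSTER_ADDRESS=$(kubectl config view --minify --raw \
  --output 'jsonpath={.clusters[0].cluster.server}')
echo "$CLUSTER_ADDRESS"   # e.g. https://api.mycluster.example.com

You can then store that value as a CI/CD variable in the GitLab project settings rather than hard-coding it in gitlab-ci.yml.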

how to deploy helm chart from gitlab to eks?

I created a Kubernetes cluster and linked it with EKS.
I also created a Helm chart and a .gitlab-ci.yml.
I want to add a new step to deploy my app to the cluster using Helm, but I can't find a recent tutorial. All the tutorials use GitLab Auto DevOps.
The image is hosted on GitLab.
How can I achieve this?
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: test
  USER_GITLAB: kosted
  APP_NAME: mebooks
  REPO: gara-mebooks
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
stages:
  - deploy
k8s-deploy:
  stage: deploy
  image: dtzar/helm-kubectl:3.1.2
  only:
    - develop
  script:
    # Read certificate stored in $KUBE_CA_PEM variable and save it in a new file
    - echo $KUBE_URL
    - kubectl config set-cluster gara-eks-cluster --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM"
    - kubectl get pods
In the GitLab console I got:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running after_script 00:01
Uploading artifacts for failed job 00:02
ERROR: Job failed: exit code 1
1 - Create an IAM role or user from your AWS console.
2 - Connect to your bastion and add the role/user ARN to the aws-auth ConfigMap (a sketch of such an entry is shown after the job examples below).
You can follow this to understand how it works (see the "you are not the creator of the cluster" paragraph): https://aws.amazon.com/fr/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/
3 - In your GitLab CI you just have to add this if it is a user you created:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - aws --profile default configure set aws_access_key_id "your access id"
    - aws --profile default configure set aws_secret_access_key "your secret"
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3
    - helm upgrade init
    - helm upgrade --install my-chart ./my-chart-folder
If you created a role, not a user, you just have to do:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3 --role-arn YOUR-ROLE-ARN
    - helm upgrade init
    - helm upgrade --install my-chart ./my-chart-folder
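As referenced in step 2, a hedged sketch of what the aws-auth entry might look like (edit it with kubectl edit -n kube-system configmap/aws-auth; the ARN, username, and group here are placeholders):

# Inside the aws-auth ConfigMap; use mapRoles/rolearn instead if you created a role.
data:
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/gitlab-deployer
      username: gitlab-deployer
      groups:
        - system:masters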
Here I am adding my method, which is generic and can be used in any K8S environment without the AWS CLI.
First, you need to convert your Kube Config to a base64 string:
cat ~/.kube/config | base64
Add the resulting string as a variable in your project's or group's CI/CD pipeline settings. In my example I used kube_config. Read more on how to add variables here.
Here is my CI YAML file:
stages:
  # - build
  # - test
  - deploy
variables:
  KUBEFOLDER: /root/.kube
  KUBECONFIG: $KUBEFOLDER/config
k8s-deploy-job:
  stage: deploy
  image: dtzar/helm-kubectl:3.5.0
  before_script:
    - mkdir ${KUBEFOLDER}
    - echo ${kube_config} | base64 -d > ${KUBECONFIG}
    - helm version
    - helm repo update
  script:
    - echo "Deploying application..."
    - kubectl get pods
    #- helm upgrade --install my-chart ./my-chart-folder
    - echo "Application successfully deployed."
Inspired by:
https://about.gitlab.com/blog/2017/09/21/how-to-create-ci-cd-pipeline-with-autodeploy-to-kubernetes-using-gitlab-and-helm/