I am using Docker on Windows (Docker Desktop).
I have a docker-compose.yml in which I want to enable the awslogs logging driver:
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
    logging:
      driver: awslogs
      options:
        awslogs-region: eu-west-1
        awslogs-group: zookeeper-logs
Under %USERPROFILE%\.aws I have valid, working AWS credentials:
C:\Users\catalin.gavan\.aws
├── config
└── credentials
When I try to build and run the containers, I get the following error:
C:\Users\catalin.gavan\Work\DockerApp> docker-compose up
Creating network "dockerapp_default" with the default driver
Creating zookeeper ... error
ERROR: for zookeeper Cannot start service zookeeper: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
ERROR: for zookeeper Cannot start service zookeeper: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
ERROR: Encountered errors while bringing up the project.
The CloudWatch zookeeper-logs log group already exists. The AWS profile which I am using has full access and has already been tested with different scenarios.
The problem seems to be caused by the Docker Desktop (Windows) daemon, which cannot read the .aws credentials.
The same problem has been reported:
https://forums.docker.com/t/awslogs-logging-driver-issue-nocredentialproviders-no-valid-providers-in-chain/91135
NoCredentialProviders error with awslogs logging driver in docker at mac
Awslogs logging driver issue - NoCredentialProviders: no valid providers in chain
It's important to remember that this credentials file needs to be made available to the Docker engine, not the client. It's the engine (the daemon) that is going to connect to AWS.
If you create that file as a user, it may not be available to the engine. If you're running docker-machine and the engine is in the VM, you'll need to move that credentials file into the VM for the root user. Docker Desktop on Windows likewise runs the daemon inside a VM, so a credentials file in your Windows user profile is not visible to it.
Here's how you can pass credentials to the daemon: https://wdullaer.com/blog/2016/02/28/pass-credentials-to-the-awslogs-docker-logging-driver-on-ubuntu/
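For reference, on a Linux host the approach in that post boils down to a systemd drop-in for the Docker service, along these lines (a sketch; the key values are placeholders):

# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX"
Environment="AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

Then reload and restart the daemon so it picks up the new environment:

sudo systemctl daemon-reload
sudo systemctl restart docker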
Related
I am working on a POC using the Confluent platform, trying to connect Kinesis in my AWS account to send data to Kafka running on the Confluent platform (set up using Docker Compose). I have used the AWS Kinesis connector available from Confluent. I am using the trial version of the connector, valid for 30 days.
I have set up the KinesisSourceConnector plugin from https://www.confluent.io/hub/confluentinc/kafka-connect-kinesis
The source connector configuration has credential settings for AWS Access Key Id and AWS Secret Key Id.
However, it does not have a configuration parameter for AWS Session Token. Is there any way to set this up, since my AWS account can only be accessed using STS?
I have tried adding an additional property aws_access_key_id, but with no success.
Error description:
The provided credentials are invalid: The security token included in the request is invalid. (Service: AWSSecurityTokenService; Status Code: 403; Error Code: InvalidClientTokenId; Request ID: d893039b-d4f3-4de3-95ef-ede233b0885c)
Thanks to @OneCricketeer for helping find an answer:
Add environment variables to the Connect server's Java process for security reasons, or have a ~/.aws/credentials file on the Connect worker servers.
1. Create a .env file in the folder where you will run Kafka Connect (see the sketch after these steps)
2. Set up the AWS credentials in the .env file (AWS_SESSION_TOKEN, AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, AWS_DEFAULT_REGION)
3. Modify the docker-compose.yml file to add the environment variables for Kafka Connect:
connect:
  image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
  hostname: connect
  container_name: connect
  depends_on:
    - broker
    - schema-registry
  ports:
    - "8083:8083"
  environment:
    ...
    AWS_SESSION_TOKEN: '${AWS_SESSION_TOKEN}'
    AWS_SECRET_ACCESS_KEY: '${AWS_SECRET_ACCESS_KEY}'
    AWS_ACCESS_KEY_ID: '${AWS_ACCESS_KEY_ID}'
    AWS_DEFAULT_REGION: '${AWS_DEFAULT_REGION}'
4. Restart Kafka Connect
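For reference, the .env file from step 1 might look like the sketch below (docker-compose reads it automatically from the directory you run it in; all values are placeholders):

# .env
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
AWS_SESSION_TOKEN=xxxxxxxxxxxxxxxxxxxx
AWS_DEFAULT_REGION=eu-west-1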
I'm getting the following error when I try to run docker compose up to deploy my infrastructure to AWS using Docker's ECS integration. Note that I'm running this on Pop!_OS 21.10, which is based on Ubuntu.
NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Things I've tried, based on an exhaustive search of SO and other sites:
Verified that my ~/.aws/config and ~/.aws/credentials files are formatted correctly, are in the proper place, and have the correct permissions
Verified that the aws CLI works fine
Verified that AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are all set correctly
Tried copying the config and credentials to /root/.aws
Tried setting AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION in the root user's environment
Created /etc/systemd/system/docker.service.d/aws-credentials.conf and populated it with:
[Service]
Environment="AWS_ACCESS_KEY_ID=********************"
Environment="AWS_SECRET_ACCESS_KEY=****************************************"
Ran docker -l debug compose up (the only extra information it provides is DEBUG deploying on AWS with region="us-east-1")
I'm running out of options. If anyone has any other ideas to try, I'd love to hear it. Thanks!
Update: I've also now tried the following, with no luck:
Tried setting Environment="AWS_SHARED_CREDENTIALS_FILE=/home/kespan/.aws/credentials"
Tried setting Environment="AWS_SHARED_CREDENTIALS_FILE=/home/kespan/.aws/credentials" in /etc/systemd/system/docker.service.d/override.conf
After remembering my IAM account has MFA enabled, generated a token and added Environment="AWS_SESSION_TOKEN=..." to override.conf
Also to note - each time after I've added/modified files under /etc/systemd/system/docker.service.d/ I've run:
sudo systemctl daemon-reload
sudo systemctl restart docker
Edit:
Here's one of the Dockerfiles (both the scraper and scheduler use an identical Dockerfile):
FROM denoland/deno:alpine
WORKDIR /app
USER deno
COPY deps.ts .
RUN deno cache --unstable --no-check deps.ts
COPY . .
RUN deno cache --unstable --no-check mod.ts
RUN mkdir -p /var/tmp/log
CMD ["run", "--unstable", "--allow-all", "--no-check", "mod.ts"]
Here's my docker-compose (some bits redacted):
version: '3'
services:
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana:/var/lib/grafana
    deploy:
      replicas: 1

  scheduler:
    image: scheduler
    x-aws-pull-credentials: "arn..."
    container_name: scheduler
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1

  scraper:
    image: scraper
    x-aws-pull-credentials: "arn..."
    container_name: scraper
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1

volumes:
  grafana:
Have you attempted to use the Amazon ECS Local Container Endpoints tool that AWS Labs provides? It allows you to create an override file for your docker-compose configuration, and it will simulate the ECS endpoints and IAM roles you would be using in AWS.
This is done using the local AWS credentials you have on your workstation. More information is available on the AWS Blog.
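Based on the AWS blog post, a docker-compose override for the tool might look roughly like this sketch (the subnet, profile name, and service name app are assumptions to adapt to your setup):

# docker-compose.override.yml (sketch)
version: "3.4"
networks:
  # The credentials endpoint must be reachable at 169.254.170.2
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
          gateway: 169.254.170.1
services:
  # Simulates the ECS task metadata and credentials endpoints locally
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      - /var/run:/var/run
      - $HOME/.aws/:/home/.aws/
    environment:
      HOME: "/home"
      AWS_PROFILE: "default"
    networks:
      credentials_network:
        ipv4_address: "169.254.170.2"
  # Your own service resolves credentials through the simulated endpoint
  app:
    depends_on:
      - ecs-local-endpoints
    environment:
      AWS_DEFAULT_REGION: "us-east-1"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
    networks:
      credentials_network:
        ipv4_address: "169.254.170.3"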
I'm trying to deploy a docker container with multiple services to ECS. I've been following this article which looks great: https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I can get my container to run locally, and I can connect to the ECS context using the AWS CLI; however, in the basic example from the article, when I run
docker compose up
in order to deploy the image to ECS, I get the error:
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Can't seem to make heads or tails of this. My Docker is logged in to ECR using
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
The default IAM user on my aws CLI has AmazonECS_FullAccess as well as "ecs:ListAccountSettings" and "cloudformation:ListStackResources"
I read in mikemaccana's answer to pull access denied repository does not exist or may require docker login that after Nov 2020 authentication may be required in your YAML file to allow AWS to pull from Docker Hub (e.g. give AWS your Docker Hub username and password), but I can't get the 'auth' syntax to work in my YAML file. This is my YAML file that runs Tomcat and MariaDB locally:
version: "2"
services:
  database:
    build:
      context: ./tba-database
    image: tba-database
    # set default mysql root password, change as needed
    environment:
      MYSQL_ROOT_PASSWORD: password
    # Expose port 3306 to host. Not for the application but
    # handy to inspect the database from the host machine.
    ports:
      - "3306:3306"
    restart: always

  webserver:
    build:
      context: ./tba-webserver
    image: tba-webserver
    # mount point for application in tomcat
    volumes:
      - ./target/testPROJ:/usr/local/tomcat/webapps/ROOT
    links:
      - database:tba-database
    # open ports for tomcat and remote debugging
    ports:
      - "8080:8080"
      - "8000:8000"
    restart: always
Author of the blog here (thanks for the kind comment!). I haven't played much with the build side of things, but I suspect what's happening here is that when you run docker compose up we ignore the build phase and only leverage the image field. What happens next is that the container being deployed on ECS/Fargate tries to pull the image tba-database, which doesn't exist in any registry it can reach, and that is what the deployment is complaining about. You need extra steps to push your image to either GitHub or ECR before you can bring it to life using docker compose up in the ecs context (see the sketch below).
You also probably need to change the compose version ("2" is very old).
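As a sketch of those extra steps (the account ID, region, and repository name are placeholders), pushing the locally built image to ECR might look like:

# Create the repository once
aws ecr create-repository --repository-name tba-database

# Authenticate the Docker CLI against your private ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the full ECR URI and push it
docker tag tba-database 123456789012.dkr.ecr.us-east-1.amazonaws.com/tba-database:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/tba-database:latest

After that, point the image field in the compose file at the full ECR URI so the ecs context can pull it.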
This is my first time trying to deploy a Django app to Elastic Beanstalk. The application uses Django Channels.
These are my config files:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: "dashboard/dashboard/wsgi.py"
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: "dashboard/dashboard/settings.py"
    PYTHONPATH: /opt/python/current/app/dashboard:$PYTHONPATH
  aws:elbv2:listener:80:
    DefaultProcess: http
    ListenerEnabled: 'true'
    Protocol: HTTP
    Rules: ws
  aws:elbv2:listenerrule:ws:
    PathPatterns: /websockets/*
    Process: websocket
    Priority: 1
  aws:elasticbeanstalk:environment:process:http:
    Port: '80'
    Protocol: HTTP
  aws:elasticbeanstalk:environment:process:websocket:
    Port: '5000'
    Protocol: HTTP

container_commands:
  00_pip_upgrade:
    command: "source /opt/python/run/venv/bin/activate && pip install --upgrade pip"
    ignoreErrors: false
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
  03_wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
When I run eb create django-env I get the following logs:
Creating application version archive "app-200617_112710".
Uploading: [##################################################] 100% Done...
Environment details for: django-env
Application name: dashboard
Region: us-west-2
Deployed Version: app-200617_112710
Environment ID: e-rdgipdg4z3
Platform: arn:aws:elasticbeanstalk:us-west-2::platform/Python 3.7 running on 64bit Amazon Linux 2/3.0.2
Tier: WebServer-Standard-1.0
CNAME: UNKNOWN
Updated: 2020-06-17 10:27:48.898000+00:00
Printing Status:
2020-06-17 10:27:47 INFO createEnvironment is starting.
2020-06-17 10:27:49 INFO Using elasticbeanstalk-us-west-2-041741961231 as Amazon S3 storage bucket for environment data.
2020-06-17 10:28:10 INFO Created security group named: sg-0942435ec637ad173
2020-06-17 10:28:25 INFO Created load balancer named: awseb-e-r-AWSEBLoa-19UYXEUG5IA4F
2020-06-17 10:28:25 INFO Created security group named: awseb-e-rdgipdg4z3-stack-AWSEBSecurityGroup-17RVV1ZT14855
2020-06-17 10:28:25 INFO Created Auto Scaling launch configuration named: awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingLaunchConfiguration-H5E4G2YJ3LEC
2020-06-17 10:29:30 INFO Created Auto Scaling group named: awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingGroup-1I2C273N6RN8S
2020-06-17 10:29:30 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2020-06-17 10:29:30 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-west-2:041741961231:scalingPolicy:8d4c8dcf-d77d-4d18-92d8-67f8a2c1cd9e:autoScalingGroupName/awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingGroup-1I2C273N6RN8S:policyName/awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingScaleDownPolicy-1JAUAII3SCELN
2020-06-17 10:29:30 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:us-west-2:041741961231:scalingPolicy:0c3d9c2c-bc65-44ed-8a22-2f9bef538ba7:autoScalingGroupName/awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingGroup-1I2C273N6RN8S:policyName/awseb-e-rdgipdg4z3-stack-AWSEBAutoScalingScaleUpPolicy-XI8Z22SYWQKR
2020-06-17 10:29:30 INFO Created CloudWatch alarm named: awseb-e-rdgipdg4z3-stack-AWSEBCloudwatchAlarmHigh-572C6W1QYGIC
2020-06-17 10:29:30 INFO Created CloudWatch alarm named: awseb-e-rdgipdg4z3-stack-AWSEBCloudwatchAlarmLow-1RTNBIHPHISRO
2020-06-17 10:33:05 ERROR [Instance: i-01576cfe5918af1c3] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
2020-06-17 10:33:05 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-06-17 10:34:07 ERROR Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
ERROR: ServiceError - Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
The error is extremely vague, and I have no clue as to what I'm doing wrong.
I had a similar issue. I used psycopg2-binary instead of psycopg2 and created a new environment. The health status is now OK.
Since this is getting some attention, I suggest you check your Elastic Beanstalk logs in the AWS console, since the error is completely generic and could be anything. Check mainly the command execution and activity logs.
In my case, it was because I had the following listed in requirements.txt, and they failed to install on EC2:
mkl-fft==1.1.0
mkl-random==1.1.0
mkl-service==2.3.0
pypiwin32==223
pywin32==228
Removing those from requirements.txt fixed the issue. (pywin32 and pypiwin32 are Windows-only packages, so pip cannot install them on the Linux instances Elastic Beanstalk launches.)
It is most likely a connection error. Make sure the instance has access to the internet, and that you have VPC endpoints for SQS/CloudFormation/CloudWatch/S3/elasticbeanstalk/elasticbeanstalk-health. Also make sure the security groups for these endpoints allow access from your instance.
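If you go the VPC endpoint route, creating one from the CLI looks roughly like this (a sketch; all IDs and the region are placeholders):

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-west-2.sqs \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0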
I have a few Docker containers running with docker-compose on an AWS EC2 instance. I am looking to get the logs sent to AWS CloudWatch. I was also having issues getting the logs from Docker containers to AWS CloudWatch from my Mac running Sierra, so I've moved over to EC2 instances running the Amazon AMI.
My docker-compose file:
version: '2'
services:
  scraper:
    build: ./Scraper/
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "eu-west-1"
        awslogs-group: "permission-logs"
        awslogs-stream: "stream"
    volumes:
      - ./Scraper/spiders:/spiders
When I run docker-compose up I get the following error:
scraper_1 | WARNING: no logs are available with the 'awslogs' log driver
but the container is running. No logs appear on the AWS CloudWatch stream. I have assigned an IAM role to the EC2 instance that the Docker containers run on.
I am at a complete loss now as to what I should be doing and would appreciate any advice.
The awslogs driver works without using ECS.
You need to configure the AWS credentials (the user should have the appropriate IAM permissions for CloudWatch Logs).
I used this tutorial, and it worked for me: https://wdullaer.com/blog/2016/02/28/pass-credentials-to-the-awslogs-docker-logging-driver-on-ubuntu/
I was getting the same error, but when I checked CloudWatch I was able to see the logs there. Did you check whether the log group has been created in CloudWatch? Docker doesn't support console logging when we use custom logging drivers.
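To double-check from the CLI whether the log group exists (using the group name from the compose file above):

aws logs describe-log-groups --log-group-name-prefix permission-logs --region eu-west-1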
The section on limitations here says that the docker logs command is only available for the json-file and journald drivers, and that's true for the built-in drivers.
When trying to get logs from a driver that doesn't support reading, nothing hangs for me; docker logs prints this:
Error response from daemon: configured logging driver does not support reading
There are 3 main steps involved:
1. Create an IAM role/user
2. Install the CloudWatch agent
3. Modify the docker-compose file or docker run command (see the sketch below)
I have linked an article here with the steps to send Docker logs to AWS CloudWatch.
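For step 3, the docker run equivalent of the compose logging block would be something like this sketch (the group name and image are placeholders; awslogs-create-group additionally requires the logs:CreateLogGroup permission):

docker run --detach \
  --log-driver=awslogs \
  --log-opt awslogs-region=eu-west-1 \
  --log-opt awslogs-group=my-log-group \
  --log-opt awslogs-create-group=true \
  nginx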
The AWS logs driver you are using, awslogs, is for use with EC2 Container Service (ECS). It will not work on plain EC2. See the documentation.
I would recommend creating a single-node ECS cluster. Be sure the EC2 instance(s) in that cluster have a role, and that the role provides permissions to write to CloudWatch Logs.
From there, anything in your container that logs to stdout will be captured by the awslogs driver and streamed to CloudWatch Logs.
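A minimal policy for that instance role would be along these lines (a sketch; scope the Resource down as appropriate):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}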