I am working on a POC using Confluent Platform, trying to connect Kinesis in my AWS account to send data to Kafka running on Confluent Platform (set up using Docker Compose). I have used the AWS Kinesis connector available from Confluent, on the trial version of the connector, valid for 30 days.
I have set up the KinesisSourceConnector plugin from https://www.confluent.io/hub/confluentinc/kafka-connect-kinesis
The source connector configuration has credential parameters for the AWS Access Key ID and AWS Secret Access Key.
However, it does not have a configuration parameter for the AWS Session Token. Is there any way to set this up, since my AWS account can only be accessed using STS?
I have tried adding an additional property aws_access_key_id but with no success.
Error description -
The provided credentials are invalid: The security token included in the request is invalid. (Service: AWSSecurityTokenService; Status Code: 403; Error Code: InvalidClientTokenId; Request ID: d893039b-d4f3-4de3-95ef-ede233b0885c)
Thanks to @OneCricketeer for helping find an answer.
For security reasons, pass the credentials as environment variables to the Connect server's Java process (rather than putting them in the connector config), or place an ~/.aws/credentials file on the Connect worker servers.
Create a .env file in the folder from which you will run Kafka Connect.
Set up the AWS credentials in the .env file (AWS_SESSION_TOKEN, AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, AWS_DEFAULT_REGION).
Modify the docker-compose.yml file to add the environment variables for Kafka Connect:
connect:
  image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
  hostname: connect
  container_name: connect
  depends_on:
    - broker
    - schema-registry
  ports:
    - "8083:8083"
  environment:
    ...
    AWS_SESSION_TOKEN: '${AWS_SESSION_TOKEN}'
    AWS_SECRET_ACCESS_KEY: '${AWS_SECRET_ACCESS_KEY}'
    AWS_ACCESS_KEY_ID: '${AWS_ACCESS_KEY_ID}'
    AWS_DEFAULT_REGION: '${AWS_DEFAULT_REGION}'
Restart Kafka connect
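For reference, the .env file is just KEY=VALUE pairs read by docker-compose for variable substitution; a sketch (the values are placeholders for your own STS session credentials, and the region here is an assumption):

```
AWS_ACCESS_KEY_ID=<your access key id>
AWS_SECRET_ACCESS_KEY=<your secret access key>
AWS_SESSION_TOKEN=<your session token>
AWS_DEFAULT_REGION=eu-west-1
```

docker-compose automatically picks up a file named .env in the directory it is run from, so the ${AWS_...} references in the compose file resolve without any extra flags.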
Related
I'm using the OpenTelemetry Collector, running via docker-compose, to send telemetry data from an application to GCP, among other exporters. Very new to GCP and OpenTelemetry!
I added GCP as an exporter in my OpenTelemetry configuration, and in the service/telemetry pipeline I have set it up to export logs, traces and metrics.
To authenticate to GCP, I downloaded and installed the Google Cloud SDK and ran gcloud auth application-default login to generate credentials. I made them accessible to the container by mounting the generated application_default_credentials.json file as a volume and setting its path as the value of the GOOGLE_APPLICATION_CREDENTIALS environment variable.
otel-collector:
  image: otel/opentelemetry-collector-contrib:0.61.0
  container_name: monitoring-collector
  restart: unless-stopped
  command: ["--config=/etc/otel-collector-config.yaml"]
  volumes:
    - ./collector/otel-collector-config.yaml:/etc/otel-collector-config.yaml
    - ~/.config/gcloud/application_default_credentials.json:/etc/otel/key.json
    - /<MyFilePath>:/etc/otel/config.yaml
  environment:
    - GOOGLE_APPLICATION_CREDENTIALS=/etc/otel/key.json
When I inspect the container, I can see the credentials are present. However, I'm getting an error which reads as follows from my container:
stackdriver: no project found with application default credentials
Does anyone know why this might be happening? To me, it implies the credentials have been picked up, but that they can't be bound to my existing project on GCP. I tried to read further into the documentation but went down a rabbit hole... At this point, I'm wondering whether further permissions are required to complete the authentication on GCP's end, or whether Application Default Credentials (ADC) are simply not supported with our current project.
Any thoughts would be appreciated!
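One thing worth checking (an assumption, not a confirmed fix): credentials generated by gcloud auth application-default login don't necessarily carry a project ID, so the exporter may need the project spelled out explicitly in the collector config. A sketch, where my-gcp-project is a placeholder for your real project ID:

```yaml
# otel-collector-config.yaml (fragment): name the project explicitly on the
# googlecloud exporter instead of relying on ADC to infer it.
exporters:
  googlecloud:
    project: my-gcp-project
```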
I am using Single sign-on (SSO) authentication with AWS.
In the terminal, I sign into my SSO account, successfully:
aws sso login --profile dev
Navigating to the directory of the docker-compose.yml file, and using Docker in an Amazon ECS context, the command docker compose up -d fails with:
NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I have deleted the old (non-SSO) access keys and profiles in:
~/.aws/config
~/.aws/credentials
So now all that is present in the above directories is my SSO account.
Before SSO (using IAM users), docker compose up -d worked as expected, so I believe the problem is that Docker is having difficulty connecting to AWS via SSO on the CLI.
Any help here is much appreciated.
Docs on Docker ECS integration: https://docs.docker.com/cloud/ecs-integration/
The docker-compose.yml file looks like this:
version: "3.4"
x-aws-vpc: "vpc-xxxxx"
x-aws-cluster: "test"
x-aws-loadbalancer: "test-nlb"
services:
  test:
    build:
      context: ./
      dockerfile: Dockerfile
      target: development
    image: xxx.dkr.ecr.eu-west-1.amazonaws.com/xxx:10
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - ENABLE_SWAGGER=${ENABLE_SWAGGER:-true}
      - LOGGING_LEVEL=${LOGGING_LEVEL:-INFO}
    ports:
      - "9090:9090"
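One workaround sketch for the situation above (assuming AWS CLI v2, where the aws configure export-credentials subcommand is available): export the SSO session's temporary credentials as environment variables so the ${AWS_...} substitutions in the compose file resolve:

```
aws sso login --profile dev
# Turn the cached SSO session into AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY /
# AWS_SESSION_TOKEN variables in the current shell
eval "$(aws configure export-credentials --profile dev --format env)"
docker compose up -d
```

The exported credentials are short-lived, so this has to be re-run when the SSO session expires.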
I'm trying to deploy a docker container with multiple services to ECS. I've been following this article which looks great: https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I can get my container to run locally, and I can connect to the ECS context using the AWS CLI; however in the basic example from the article when I run
docker compose up
in order to deploy the image to ECS, I get the error:
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Can't seem to make heads or tails of this. My Docker client is logged in to ECR using
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
The default IAM user on my AWS CLI has AmazonECS_FullAccess, as well as "ecs:ListAccountSettings" and "cloudformation:ListStackResources".
I read here, in mikemaccana's answer to pull access denied repository does not exist or may require docker login, that after Nov 2020 authentication may be required in your YAML file to allow AWS to pull from hub.docker.io (i.e. give AWS your Docker Hub username and password), but I can't get the auth syntax to work in my YAML file. This is the YAML file that runs Tomcat and MariaDB locally:
version: "2"
services:
  database:
    build:
      context: ./tba-database
    image: tba-database
    # set default mysql root password, change as needed
    environment:
      MYSQL_ROOT_PASSWORD: password
    # Expose port 3306 to host. Not for the application but
    # handy to inspect the database from the host machine.
    ports:
      - "3306:3306"
    restart: always
  webserver:
    build:
      context: ./tba-webserver
    image: tba-webserver
    # mount point for application in tomcat
    volumes:
      - ./target/testPROJ:/usr/local/tomcat/webapps/ROOT
    links:
      - database:tba-database
    # open ports for tomcat and remote debugging
    ports:
      - "8080:8080"
      - "8000:8000"
    restart: always
Author of the blog here (thanks for the kind comment!). I haven't played much with the build side of things, but I suspect what's happening is that when you run docker compose up we ignore the build phase and only use the image field. The containers being deployed on ECS/Fargate then try to pull the image tba-database, which is where the deployment seems to be complaining, because that image doesn't exist in any registry. You need the extra step of pushing your images to either GH or ECR before you can bring them to life with docker compose up in the ecs context.
You also probably need to change the compose version ("2" is very old).
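A sketch of those extra push steps for ECR (aws_account_id and region are placeholders, as elsewhere in this thread, and it assumes the two repositories already exist in ECR):

```
# Build locally, then tag and push each image to an ECR repository
# that the ecs context can pull from.
docker compose build
docker tag tba-database aws_account_id.dkr.ecr.region.amazonaws.com/tba-database:latest
docker push aws_account_id.dkr.ecr.region.amazonaws.com/tba-database:latest
docker tag tba-webserver aws_account_id.dkr.ecr.region.amazonaws.com/tba-webserver:latest
docker push aws_account_id.dkr.ecr.region.amazonaws.com/tba-webserver:latest
```

After pushing, the image: fields in the compose file would need to point at the full ECR URIs so Fargate pulls the pushed images rather than the local tags.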
I am using Docker on Windows (Docker Desktop).
I have a docker-compose.yml on which I want to enable awslogs logging driver:
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
    logging:
      driver: awslogs
      options:
        awslogs-region: eu-west-1
        awslogs-group: zookeeper-logs
Under %userprofile%\.aws I have valid, working aws credentials:
C:\Users\catalin.gavan\.aws
├── config
└── credentials
When I try to build and run the containers, I get the following error:
C:\Users\catalin.gavan\Work\DockerApp>
docker-compose up
Creating network "dockerapp_default" with the default driver
Creating zookeeper ... error
ERROR: for zookeeper Cannot start service zookeeper: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
ERROR: for zookeeper Cannot start service zookeeper: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
ERROR: Encountered errors while bringing up the project.
The CloudWatch zookeeper-logs log group already exists. The AWS profile I am using has full access, and has already been tested in different scenarios.
The problem seems to be caused by Docker Desktop (Windows) daemon, which cannot read the .aws credentials.
The same problem has been reported:
https://forums.docker.com/t/awslogs-logging-driver-issue-nocredentialproviders-no-valid-providers-in-chain/91135
NoCredentialProviders error with awslogs logging driver in docker at mac
Awslogs logging driver issue - NoCredentialProviders: no valid providers in chain
It's important to remember that this credentials file needs to be made available to the Docker engine, not the client. It's the engine (the daemon) that is going to connect to AWS.
If you create that file as a user, it may not be available to the engine. If you're running docker-machine and the engine is in a VM, you'll need to move that credentials file into the VM for the root user.
Here's how you can pass credentials to daemon https://wdullaer.com/blog/2016/02/28/pass-credentials-to-the-awslogs-docker-logging-driver-on-ubuntu/
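On a systemd-based Linux host, the approach in that post boils down to handing the AWS variables to the daemon process itself; a sketch (the values are placeholders, and note this does not apply directly to Docker Desktop on Windows/Mac, where the daemon runs inside a VM you don't manage via systemd):

```
# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=<key id>"
Environment="AWS_SECRET_ACCESS_KEY=<secret>"
```

Then reload and restart the daemon so it picks up the variables: sudo systemctl daemon-reload && sudo systemctl restart docker.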
I am running a docker compose network on AWS CodeBuild and I need to pass AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to the docker containers as they need to interact with AWS SSM. What is the best way to get these credentials from CodeBuild and pass them to the docker containers?
Initially, I thought of mounting the credentials directory from CodeBuild as a volume by adding this to each service in the docker-compose.yml file
volumes:
- '${HOME}/.aws/credentials:/root/.aws/credentials'
but that did not work; it seems the ${HOME}/.aws/ folder in the CodeBuild environment did not have any credentials in it.
Using Docker secrets, you may create your secret:
docker secret create credentials.cnf credentials.cnf
Define your keys in the credentials.cnf file, and include it in your compose file as below:
services:
  example:
    image: example-image
    secrets:
      - credentials.cnf
secrets:
  credentials.cnf:
    file: ./credentials.cnf
You can view your secrets with docker secret ls
In the Environment section of the CodeBuild project, you have the option to set an environment variable from a value stored in Parameter Store.
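That Parameter Store mapping can also be declared directly in the buildspec; a sketch (the parameter names /myapp/... are assumptions, not real paths in your account):

```yaml
# buildspec.yml (fragment): CodeBuild resolves these from SSM Parameter Store
# and exposes them as environment variables, which docker-compose then
# substitutes into the containers' environment.
env:
  parameter-store:
    AWS_ACCESS_KEY_ID: /myapp/aws-access-key-id
    AWS_SECRET_ACCESS_KEY: /myapp/aws-secret-access-key
phases:
  build:
    commands:
      - docker compose up -d
```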