Can't start new thread in AWS CLI Dockerfile

I am trying to run an AWS S3 CLI command during docker build and it is giving me the error "can't start new thread". I tried setting max_concurrent_requests to 1 to limit threads to 1.
...
RUN yarn build
RUN aws configure set default.s3.max_concurrent_requests 1
RUN aws sts get-caller-identity
RUN aws s3 rm s3://public-assets/build/_next/static --recursive
...
The last s3 rm command is giving the error.

For anyone who stumbles into this exact same issue: it is not an AWS CLI-specific issue in any way. I was using an Alpine image and then I switched to Ubuntu. Once I switched to the Ubuntu base image, everything worked the way it was intended to.
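A minimal sketch of that base-image switch (the package install line and image tag are illustrative, not from the original Dockerfile; only the two AWS commands below come from the question):

# Switching from an Alpine (musl) base to an Ubuntu (glibc) base avoided the
# "can't start new thread" error during docker build.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y awscli && rm -rf /var/lib/apt/lists/*
...
RUN aws configure set default.s3.max_concurrent_requests 1
RUN aws s3 rm s3://public-assets/build/_next/static --recursive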

Related

sam build botocore.exceptions.NoCredentialsError: Unable to locate credentials

I have been trying to deploy my machine learning model with SAM for a couple of days and I am getting this error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have also made sure that my AWS config is fine;
the "aws s3 ls" command works fine for me. Any help will be useful, thanks in advance.
I've read through this issue, which seems to have been addressed in v1.53: SAM Accelerate issue
Reading that seemed to imply that it might be worth trying
sam deploy --guided --profile mark
--profile mark is the new part and mark is just the name of the profile.
I'm using v1.53 but still have to pass in the profile to avoid the problem you're having (and I was having), so they may not have fixed the issue as thoroughly as intended, but at least --profile seems to solve it for me.
If you are using Linux, this error can be caused by a misalignment between a docker root installation and user-level AWS credentials.
Amazon documentation recommends adding credentials using the aws configure command without sudo. However, when you install docker on Linux, it requires a root-level installation. This ultimately results in the user being forced to use sudo for the SAM CLI build and deploy commands, which leads to the error.
There are two different solutions that will fix the issue:
Allow non-root users to manage docker. If you use this method, you will not need to use sudo for your SAM CLI commands. This fix can be accomplished by using the following commands:
sudo groupadd docker
sudo usermod -aG docker $USER
OR
Use sudo aws configure to add AWS credentials to root. This fix requires you to continue using sudo for your SAM CLI commands.
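A small sketch tying the two options together (the newgrp step is an addition not mentioned above; group membership changes only take effect after re-login or newgrp):

# Option 1: after the groupadd/usermod commands above, pick up the new group,
# then run SAM without sudo
newgrp docker
sam build && sam deploy --guided

# Option 2: give root its own credentials and keep using sudo
sudo aws configure          # prompts for access key, secret key, and region
sudo sam build && sudo sam deploy --guided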

How can I install aws cli, from WITHIN the ECS task?

Question:
How can I install aws cli, from WITHIN the ECS task ?
DESCRIPTION:
I'm using a docker container to run the logstash application (it is part of the elastic family).
The docker image name is "docker.elastic.co/logstash/logstash:7.10.2"
This logstash application needs to write to S3, thus it needs AWS CLI installed.
If aws is not installed, it crashes.
# STEP 1 #
To avoid the crash, when I used this application only as a standalone Docker container, I ran it in a way that delayed the 'logstash start' until after the container had started.
I did this by adding a "sleep" command to an external docker-entrypoint file, before it starts logstash.
This is how it looks in the docker-entrypoint file:
sleep 120
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
  exec logstash "$@"
else
  exec "$@"
fi
# EOF
# STEP 2 #
Run the container with the "--entrypoint" flag so it will use my entrypoint file:
docker run \
-d \
--name my_logstash \
-v /home/centos/DevOps/psifas_logstash_docker-entrypoint:/usr/local/bin/psifas_logstash_docker-entrypoint \
-v /home/centos/DevOps/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /home/centos/DevOps/logstash.yml:/usr/share/logstash/config/logstash.yml \
--entrypoint /usr/local/bin/psifas_logstash_docker-entrypoint \
docker.elastic.co/logstash/logstash:7.10.2
# STEP 3 #
Install the AWS CLI and configure it from the server hosting the container:
docker exec -it -u root <DOCKER_CONTAINER_ID> yum install awscli -y
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_access_key_id <MY_aws_access_key_id>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_secret_access_key <MY_aws_secret_access_key>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set region <MY_region>
This worked for me.
Now I want to "translate" this flow into an AWS ECS task.
In ECS I will use parameters instead of running the above 3 "aws configure" commands.
MY QUESTION
How can I do my 3rd step, installing the AWS CLI, from WITHIN the ECS task? (Meaning: not running it on the EC2 server hosting the ECS cluster.)
When I was working on the standalone container I also thought of these options for using the AWS CLI:
find an official Elastic Docker image containing both logstash and the AWS CLI. <-- I did not find one.
create such an image myself and use it. <-- I prefer not to, because I want to avoid the maintenance of building new custom images whenever needed (e.g. when a new version of the logstash image is available).
Eventually I chose the 3 steps above, but I'm open to suggestions.
Also, my tests showed that running 2 containers within the same ECS task:
logstash
awscli (image "amazon/aws-cli")
with the logstash container using the aws cli container, is not working.
THANKS A LOT IN ADVANCE :-)
Your option #2, creating the image yourself, is really the best way to do this. Anything else is going to be a "hack". Also, you shouldn't be running aws configure for an image running in ECS; you should be assigning an IAM role to the task, and the AWS CLI will pick that up and use it.
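For illustration, assigning a task role with the CLI would look roughly like this (the role ARN, family name, and container settings are placeholders, not from the question):

aws ecs register-task-definition \
  --family logstash-task \
  --task-role-arn arn:aws:iam::123456789012:role/myLogstashTaskRole \
  --container-definitions '[{"name":"logstash","image":"docker.elastic.co/logstash/logstash:7.10.2","memory":2048,"essential":true}]'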
Mark B, your answer helped me to solve this. Thanks!
Writing the solution here in case it helps somebody else.
There is no need to install the AWS CLI in the logstash Docker container running inside the ECS task.
Inside the logstash container (from image "docker.elastic.co/logstash/logstash:7.10.2") there is an AWS SDK used to connect to S3.
The only thing required is to allow the ECS task execution role access to S3.
(I attached the AmazonS3FullAccess policy.)
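For reference, attaching that policy from the CLI would look roughly like this (the role name is a placeholder; the post only says the policy was attached to the ECS task execution role):

aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess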

AWS EC2 userdata SSL verify error (unique)

So I created my own vanilla CentOS image and installed the AWS CLI tools. All commands, including s3, work fine as either ec2-user or root.
My issue: for some reason, only when I launch a server and run just a simple aws s3 cp command in the user data, I get the ssl_verify_certificate error.
I understand userdata runs as root. I've reinstalled the tools, still the same issue. Any help would be appreciated.
Running CentOS 7.9.

Error when logging into ECR with Docker login: "Error saving credentials... not implemented"

I'm trying to log in to AWS ECR with the Docker login command. I can get a password with the AWS CLI with the command aws ecr get-login-password but when piping this into the docker login command I get the following error:
Error saving credentials: error storing credentials - err: exit status 1, out: `not implemented`
The command I am running is the one recommended in the AWS ECR documentation:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin account_id_redacted.dkr.ecr.us-east-1.amazonaws.com/blog-project
I'm running the latest version of AWS CLI as of this question, 2.0.57.
I'm running Docker version 2.4.0 on macOS 10.14.6
Has anyone else run into this issue, and if so have they found a solution?
I've definitely achieved this in the past, but I wonder if there is an issue between the latest versions of Docker and the AWS CLI...
I'm not 100% sure what the issue was here, but it was something to do with the Docker credentials helper.
I installed the Docker credential helper for macOS and changed the credsStore parameter in ~/.docker/config.json to osxkeychain. That fixed the issue.
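A sketch of that fix, assuming the docker-credential-osxkeychain helper is already on your PATH (it ships with Docker Desktop for Mac); note this overwrites any other settings in the file:

cat > ~/.docker/config.json <<'EOF'
{
  "credsStore": "osxkeychain"
}
EOF
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin account_id_redacted.dkr.ecr.us-east-1.amazonaws.com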
I had a similar issue; it seems my ~/.docker/config.json was totally messed up after working with multiple repos / hubs.
So I just wiped out all the content in this file, leaving it empty, and reran aws ecr get-login-password | docker login ..., which automatically populated the config with appropriate values.
I had this issue on macOS. From .docker/config.json, remove:
"credsStore" : "ecr-login"
This resolved the issue for me.
If anybody has the same problem on Windows, go to the C:\Users folder and, in the .docker folder, remove the config.json file.
It might fix your problem.
I believe this is the intended result (sorta). The point of using amazon-ecr-credential-helper is to not need to use docker login. You should instead configure the AWS CLI with your profile credentials (mine: myprofile). Then, you would just need to slightly modify your scripts.
For example, in ECR the AWS-given steps to upload a Docker image are:
1. Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com
Note: If you receive an error using the AWS CLI, make sure that you have the latest version of the AWS CLI and Docker installed.
2. Build your Docker image using the following command. For information on building a Dockerfile from scratch, see the instructions here. You can skip this step if your image is already built:
docker build -t toy_project .
3. After the build completes, tag your image so you can push the image to this repository:
docker tag toy_project:latest XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
4. Run the following command to push this image to your newly created AWS repository:
docker push XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
However, you would want to skip step 1. The reason is that if you configured the AWS CLI (i.e. aws configure --profile myprofile) then your credentials are already stored, so you can skip to step 2.
On the 4th step, you simply need to add AWS_PROFILE, just like below:
AWS_PROFILE=myprofile docker push XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
With amazon-ecr-credential-helper, you no longer need to use docker login or worry about storing credentials; that is the point of amazon-ecr-credential-helper. However, this may not be the best solution for you if you need to actively use docker login in your scripts.
Note: my ~/.docker/config.json looks like
{
"credsStore": "ecr-login"
}
I was getting the same error while running this command on macOS.
The error possibly occurred because that particular location didn't have the appropriate read/write/execute permissions for users.
Also, while I was running
% docker ps
It was giving an error as: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
What I did:
% sudo chmod 777 /var/run/docker.sock
This gave all the required permissions to that location.
Hope it would help!

aws command not found error even after installing aws cli on jenkins windows slave when running a jenkins job

I have installed AWS CLI on my windows slave in Jenkins. To verify the same, I run the following command in the command line of the windows machine and get this as the output
C:> aws --version
aws-cli/1.11.122 Python/2.7.9 Windows/2008ServerR2 botocore/1.5.85
I am running an AWS CLI command in the "Execute Windows batch command" step of the Jenkins job, and the job is failing for the following reason:
C:\Users\ADMINI~1\AppData\Local\Temp\2\hudson1929374596375903011.sh: line 6:
aws: command not found
Build step 'Execute shell' marked build as failure
The aws command I am running is
aws cloudformation validate-template --template-body file://file1.json
I also checked the PATH variable on the Windows machine and it contains the AWS CLI path.
My goal is to run AWS CLI command via Jenkins job. Can somebody help me with this?
It's possible that Jenkins has a different %PATH% than the one you see when you are logged in.
Try finding your path via Jenkins: create a job and, in the script that it runs, echo out your %PATH% to see what Jenkins thinks your path is.
You can modify Jenkins' environment variables, including %PATH%; see https://stackoverflow.com/a/5819768/8207662
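A minimal sketch of that check (the failing step above is reported as "Execute shell", so sh syntax is shown; in an "Execute Windows batch command" step you would use echo %PATH% instead):

echo "PATH seen by Jenkins: $PATH"
command -v aws || echo "aws is not on PATH for this build step"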