So I created my own vanilla CentOS image and installed the AWS CLI tools. All commands, including s3, work fine as either ec2-user or root.
My issue: for some reason, only when I launch a server and run a simple aws s3 cp command from the user data do I get an ssl_verify_certificate error.
I understand user data runs as root. I've reinstalled the tools but still hit the same issue. Any help would be appreciated.
Running CentOS 7.9
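The user data is essentially just something like this (bucket and key here are placeholders, not my real ones):
#!/bin/bash
# cloud-init runs this as root on first boot
aws s3 cp s3://example-bucket/bootstrap.tar.gz /tmp/bootstrap.tar.gz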
I have been trying to deploy my machine learning model with SAM for a couple of days, and I keep getting this error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have also made sure that my AWS config is fine.
The "aws s3 ls" command works fine for me. Any help will be useful, thanks in advance.
I've read through this issue, which seems to have been addressed in v1.53: SAM Accelerate issue
Reading it seemed to imply that it might be worth trying:
sam deploy --guided --profile mark
--profile mark is the new part and mark is just the name of the profile.
I'm using v1.53 but still have to pass in the profile to avoid the problem you're having (and I was having), so they may not have fixed the issue as thoroughly as intended, but at least --profile seems to solve it for me.
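For anyone unsure what the profile refers to: it is just a named section in ~/.aws/credentials, along these lines (placeholder values):
[mark]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx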
If you are using Linux, this error can be caused by a misalignment between a docker root installation and user-level AWS credentials.
Amazon's documentation recommends adding credentials using the aws configure command without sudo. However, Docker on Linux is installed at the root level, which ultimately forces the user to run the SAM CLI build and deploy commands with sudo, and that leads to the error.
There are two different solutions that will fix the issue:
Allow non-root users to manage Docker. If you use this method, you will not need sudo for your SAM CLI commands. This fix can be accomplished with the following commands (a quick verification sketch follows the two options):
sudo groupadd docker
sudo usermod -aG docker $USER
OR
Use sudo aws configure to add AWS credentials to root. This fix requires you to continue using sudo for your SAM CLI commands.
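If you go with the first option, the group change only applies to new login sessions; a quick check afterwards (assuming Docker and the SAM CLI are already installed) could look like this:
# after logging out and back in (or running: newgrp docker)
docker run hello-world              # should now work without sudo
sam build && sam deploy --guided    # should now find your user-level ~/.aws/credentials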
I have deployed a TeamCity server and agent to AWS using the JetBrains stack template (https://www.jetbrains.com/help/teamcity/running-teamcity-stack-in-aws.html)
All seems to be good: my server starts, the agent is functional, I have created several builds, etc.
I've reached the point where I want to deploy my application to an AWS environment using aws-cli commands.
I am struggling to enable/install aws-cli on the agent. My build steps are erroring out with aws: command not found.
Does anyone have any ideas?
My progress so far: I have connected to the agent EC2 machine via an SSH bastion, and I am able to invoke aws --version as ec2-user, but the build agent cannot see aws.
It turns out my TeamCity agent runs on AWS ECS via the docker image https://hub.docker.com/r/jetbrains/teamcity-agent
What I ended up doing was creating my own docker image, using the JetBrains one as a base.
I uploaded my docker image to an AWS ECR repository. Afterwards I created a new revision of the original task definition. The new revision uses my image instead of the original one, so aws-cli is available there.
I then added my AWS profile to the EC2 host machine and added a volume to the docker container (via the task definition) so that the container could access the .aws/credentials file.
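A task-definition fragment along these lines does that mapping (the host and container paths below are illustrative assumptions, not necessarily the exact ones from my setup):
{
  "volumes": [
    { "name": "aws-creds", "host": { "sourcePath": "/home/ec2-user/.aws" } }
  ],
  "containerDefinitions": [
    {
      "name": "teamcity-agent",
      "mountPoints": [
        { "sourceVolume": "aws-creds", "containerPath": "/root/.aws", "readOnly": true }
      ]
    }
  ]
}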
Dockerfile looks like this:
FROM jetbrains/teamcity-agent
# Install pip, then the AWS CLI via pip
RUN apt-get update && apt-get install -y python-pip
RUN pip install awscli --upgrade --user
# ENV does not expand ~; pip --user installs to /root/.local/bin since the build runs as root
ENV PATH="/root/.local/bin:${PATH}"
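Building and pushing that image to the registry goes roughly like this (account ID, region and repository name are placeholders):
aws ecr create-repository --repository-name teamcity-agent-aws
docker build -t teamcity-agent-aws .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag teamcity-agent-aws:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-aws:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-aws:latest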
I added the AWS CLI to the TeamCity agent over a remote desktop connection, since I use a Windows TeamCity agent. In the build steps I used the Command Line runner type and executed the aws commands.
For more information you can refer to the link below, where I answered the question:
How to deploy to AWS Elastic Beanstalk on successful Teamcity build
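For example, a Command Line runner step can just invoke the CLI directly; the bucket and paths below are hypothetical:
aws --version
aws s3 cp build\output\app.zip s3://my-deploy-bucket/app.zip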
I'm attempting to deploy an app from one Elastic Beanstalk instance to another. Running pip install awsebcli --upgrade --user doesn't install the eb CLI tool, for some odd reason, on the EC2 machine.
Does anyone know the equivalent of eb deploy using only the aws cli options?
This question is a bit confusing. Are you attempting to move code between EC2 instances in your Beanstalk environment?
If I'm understanding correctly, you've pulled/changed your code on one Beanstalk host, and now you're trying to propagate that change to the other instances using the EB CLI. That's not a best practice; Beanstalk has its own mechanism to deploy your code to all instances.
The EB CLI is meant to be run from your workstation to push code from your IDE/editor to the Beanstalk hosts in AWS.
Beanstalk keeps a copy of that code revision in S3, and if the Beanstalk environment is load balanced, all instances will be running the same application version when scaling events or deployments occur, because they pull your code from a common source.
But to answer your question:
Does anyone know the equivalent of eb deploy using only the aws cli options?
You'll want to ZIP your code, upload it to S3, and note the S3 bucket and key where it's located.
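For example, from your project root (using the same placeholders as the commands below):
% zip -r "<NEW_VERSION_LABEL>.zip" . -x "*.git*"
% aws s3 cp "<NEW_VERSION_LABEL>.zip" "s3://<S3_BUCKET_NAME>/<S3_KEY>"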
Then create a new application version.
% aws elasticbeanstalk create-application-version --application-name="<APPLICATION_NAME>" --version-label="<NEW_VERSION_LABEL>" --source-bundle="{\"S3Bucket\": \"<S3_BUCKET_NAME>\",\"S3Key\": \"<S3_KEY>\"}"
Then deploy your new application version to the running environment.
% aws elasticbeanstalk update-environment --environment-id="<ENVIRONMENT_ID>" --version-label="<NEW_VERSION_LABEL>"
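And if you want to confirm that the new version actually rolled out, something like this will show the label the environment is running:
% aws elasticbeanstalk describe-environments --environment-ids "<ENVIRONMENT_ID>" --query "Environments[0].VersionLabel"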
Reading is hard...
Linux requires you to "[a]dd the path to the executable file to your PATH variable"
export PATH=~/.local/bin:$PATH
eb --version now works
I had a Jenkins 2.46 installation running on an EC2 box, associated with an IAM role through an instance profile.
Jenkins was able to do various tasks requiring AWS credentials (e.g. run terraform, upload files to S3, access CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command in a shell on the instance as the jenkins user still works fine (e.g. sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists the bucket's files; running the same command in Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89 but I did not find anything relevant.
Jenkins was installed and updated through yum, the aws cli was installed using the bundled installer provided by AWS.
I am trying to log in to Quay from ECS.
Quay is a private docker registry.
I followed this documentation, but I also get a 403 error: "{\"error\": \"Permission Denied\"}".
I put this code in /etc/ecs/ecs.config:
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://quay.io/": { "username": "xxxxxx","password":"xxxxx","email": "."}}
And I've rebooted the ECS services, but it's not working.
Any ideas?
The documentation points to a slightly different content for /etc/ecs/ecs.config:
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://quay.io": {"auth": "YOURAUTHTOKENFROMDOCKERCFG", "email": "user@example.com"}}
It uses dockercfg and a token rather than username/password.
The dockercfg is described in the documentation page "I'm authorized but I'm still getting 403s"
docker stores the credentials it uses for push and pull in a file typically placed at $HOME/.dockercfg.
If you are executing docker in another environment (scripted docker build, virtual machine, makefile, virtualenv, etc), docker will not be able to find the .dockercfg file and will fail.
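For reference, with the legacy credential store, docker login quay.io writes a ~/.dockercfg roughly like this, and the auth value (the base64 of username:password) is what goes into ECS_ENGINE_AUTH_DATA above:
docker login quay.io
cat ~/.dockercfg
{"https://quay.io": {"auth": "<base64 of username:password>", "email": "user@example.com"}}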
As the OP Mathieu Perochon comments below, this is also linked to the version of the Amazon Machine Image:
I have upgraded my AMI (Amazon ECS-Optimized Amazon Linux) and it's working
Link to the AMI: https://aws.amazon.com/marketplace/pp/B00U6QTYI2/