Jenkins ecs Command not found - amazon-web-services

I installed a third-party tool (ecs-deploy, via pip install ecs-deploy). When I deploy with the command ecs deploy demo-cluster demo-service at the command prompt it works fine, but when I try to deploy through Jenkins I get this error:
/tmp/jenkins5062380414579854312.sh: line 13: ecs: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE

The Jenkins service typically runs under the user jenkins.
You installed the package as the ec2-user, which means the jenkins user may not have the package on its PATH or may lack permission to execute the file.
You can correct this in one of two ways:
Use sudo to elevate permissions and install the package globally, then set the path in /etc/environment.
Log in interactively as the jenkins user and install it under that account. A sketch of both options follows below.
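A minimal sketch of both options, assuming pip is already available to each user:

# Option 1: global install with elevated permissions
sudo pip install ecs-deploy
# then make sure the install location (e.g. /usr/local/bin) is on the PATH set in /etc/environment

# Option 2: install under the jenkins account
sudo su - jenkins
pip install --user ecs-deploy
# and make sure ~/.local/bin is on the jenkins user's PATH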

You need to run the full AWS CLI command:
aws ecs deploy --cluster demo-cluster --service demo-service

Related

sam build botocore.exceptions.NoCredentialsError: Unable to locate credentials

I have been trying to deploy my machine learning model with SAM for a couple of days and I keep getting this error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have also made sure that my AWS config is fine: the aws s3 ls command works for me. Any help would be useful, thanks in advance.
I've read through this issue, which seems to have been fixed in v1.53: SAM Accelerate issue
Reading that seemed to imply it might be worth trying:
sam deploy --guided --profile mark
--profile mark is the new part, and mark is just the name of the profile.
I'm using v1.53 but still have to pass in the profile to avoid the problem you're having (and I was having), so they may not have fixed the issue as thoroughly as intended, but at least --profile seems to solve it for me.
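For context, --profile selects a named profile from your AWS credentials file (~/.aws/credentials); a minimal example, with placeholder values:

[mark]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>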
If you are using Linux, this error can be caused by a misalignment between a root-level Docker installation and user-level AWS credentials.
Amazon's documentation recommends adding credentials with the aws configure command, without sudo. However, installing Docker on Linux requires root-level access, which ultimately forces you to run the SAM CLI build and deploy commands with sudo, and that mismatch leads to the error.
There are two different solutions that will fix the issue:
Allow non-root users to manage Docker. If you use this method, you will not need sudo for your SAM CLI commands. This fix can be accomplished with the following commands (see the verification sketch after these options):
sudo groupadd docker
sudo usermod -aG docker $USER
OR
Use sudo aws configure to add AWS credentials to root. This fix requires you to continue using sudo for your SAM CLI commands.
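Note that group membership is only re-evaluated at login, so after running the commands above you need to log out and back in (or start a new group session). A quick way to verify, assuming the docker service is running:

newgrp docker   # start a shell with the new group applied (or log out and back in)
docker ps       # should now list containers without sudo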

AWS CodePipeline + CodeDeploy to EC2 with docker-compose

Hi, I've been trying to get auto-deployment working on AWS: whenever there's a merge or commit to the repo, CodePipeline detects it and has CodeDeploy update the tagged EC2 instance with the new changes. The app is a simple Node.js app that I want to start with docker-compose. I have already installed docker and docker-compose on the EC2 instance and enabled the CodeDeploy agent.
I tested the whole process and it mostly works, except that CodeDeploy fails the deployment because it is unable to run the command docker-compose up -d in the ApplicationStart portion of my appspec.yml. I get an error that docker-compose cannot be found, which is odd because the BeforeInstall script downloads and installs docker and docker-compose and sets all the permissions. Is there something I'm missing, or is this just not meant to work with CodeDeploy and EC2?
I can confirm that when I SSH into the EC2 instance and run docker-compose up -d in the project root directory it works, but as soon as I run the docker-compose command from the script portion of the appspec.yml it fails.
The project repo is here, just in case there's anything I missed: https://github.com/c3ho/simple_crud
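For reference, a minimal sketch of the kind of ApplicationStart script being described (the file name and paths here are assumptions, not taken from the repo). One common cause of this symptom is that the CodeDeploy agent runs hooks as root with a restricted PATH, so referencing docker-compose by absolute path is worth trying:

#!/bin/bash
# scripts/start.sh (hypothetical name), invoked from the ApplicationStart hook in appspec.yml
cd /home/ec2-user/app                  # assumed deploy destination
/usr/local/bin/docker-compose up -d    # absolute path, since the agent's PATH may not include it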

AWS SAM: Build Failed Error: Docker is unreachable. Docker needs to be running to build inside a container

I'm running AWS SAM and using sam build --use-container, and I get the following error:
Starting Build inside a container
Building function 'SamTutorialFunction'
Build Failed
Error: Docker is unreachable. Docker needs to be running to build inside a container
I ran sudo service docker start beforehand and still get the same error.
I had the same issue. The problem was that Docker was installed to run as the root user, while AWS SAM tries to access it as your logged-in user. You can let a non-root user run Docker (without sudo) by adding your user to the docker group. See https://docs.docker.com/engine/install/linux-postinstall/
If you are running Ubuntu on WSL2, you need to enable integration between Docker and WSL2 in order to run
sam build --use-container
Steps:
Download Docker Desktop: https://desktop.docker.com/win/main/amd64/Docker%20Desktop%20Installer.exe
Go to Settings => Resources => WSL integration.
Check "Enable integration with additional distros".
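Once the integration is enabled, you can verify from inside the WSL2 distro that the daemon is reachable (assuming Docker Desktop is running):

docker info   # should print client and server details rather than a connection error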

TeamCity Agent - AWS CLI

I have deployed a TeamCity server and agent to AWS using the JetBrains stack template (https://www.jetbrains.com/help/teamcity/running-teamcity-stack-in-aws.html).
All seems good: the server starts, the agent is functional, I have created several builds, etc.
I have reached the point where I want to deploy my application to an AWS environment using aws-cli commands.
I am struggling to install aws-cli on the agent. My build steps error out with aws: command not found
Does anyone have any ideas?
My progress so far: I have connected to the agent EC2 machine via an SSH bastion, and I can invoke aws --version as ec2-user, but the build agent cannot see aws.
It turns out my TeamCity agent runs in AWS ECS via the docker image https://hub.docker.com/r/jetbrains/teamcity-agent
What I ended up doing was creating my own docker image, using the JetBrains one as a base.
I uploaded my docker image to an AWS ECR repository. Afterwards, I created a new revision of the original task definition; this revision uses my image instead of the original one, so aws-cli is available there.
I then added my AWS profile to the EC2 host machine and added a volume to the docker container (via the task definition) so that the container can access the .aws/credentials file.
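A sketch of the relevant task definition fragment for that volume (the volume name and host path are assumptions):

"volumes": [
  { "name": "aws-creds", "host": { "sourcePath": "/home/ec2-user/.aws" } }
],
"containerDefinitions": [
  {
    "mountPoints": [
      { "sourceVolume": "aws-creds", "containerPath": "/root/.aws", "readOnly": true }
    ]
  }
]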
Dockerfile looks like this:
FROM jetbrains/teamcity-agent
RUN apt-get update && apt-get install -y python-pip
RUN pip install awscli --upgrade --user
# "~" is not expanded in ENV; pip install --user as root installs to /root/.local/bin
ENV PATH="/root/.local/bin:${PATH}"
I added the aws-cli to the TeamCity agent over a remote desktop connection, since I use a Windows TeamCity agent. In the build steps I used the Command Line runner type and executed the aws commands.
For more information you can refer to the link below, where I answered the question:
How to deploy to AWS Elastic Beanstalk on successful Teamcity build

aws command not found error even after installing aws cli on jenkins windows slave when running a jenkins job

I have installed the AWS CLI on my Windows slave in Jenkins. To verify it, I ran the following command on the Windows machine and got this output:
C:\> aws --version
aws-cli/1.11.122 Python/2.7.9 Windows/2008ServerR2 botocore/1.5.85
I am running an AWS CLI command in an Execute Windows batch command step in the Jenkins job, and the job fails with the following:
C:\Users\ADMINI~1\AppData\Local\Temp\2\hudson1929374596375903011.sh: line 6:
aws: command not found
Build step 'Execute shell' marked build as failure
The aws command I am running is
aws cloudformation validate-template --template-body file://file1.json
I also checked the PATH variable on the Windows machine, and it contains the AWS CLI path.
My goal is to run AWS CLI commands via a Jenkins job. Can somebody help me with this?
It's possible that Jenkins runs with a different %PATH% than the one you see when logged in.
Try finding the path via Jenkins: create a job and, in the script it runs, echo %PATH% to see what Jenkins thinks your path is.
You can also modify Jenkins' environment variables, including %PATH%; see https://stackoverflow.com/a/5819768/8207662
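A minimal check, assuming an Execute Windows batch command build step:

echo %PATH%
where aws
rem "where" prints the full path of aws.exe if the build step can see it; no result confirms a PATH problem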