Jenkins on AWS EC2 instance unable to use instance profile after upgrade - amazon-web-services

I had a Jenkins 2.46 installation running on an EC2 box, associated with an IAM role through an instance profile.
Jenkins was able to perform various tasks requiring AWS credentials (e.g. run terraform, upload files to S3, access CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command in a shell on the instance as the jenkins user still works fine (e.g. running sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists bucket files; running the same command in Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89 but did not find anything relevant.
Jenkins was installed and updated through yum; the AWS CLI was installed using the bundled installer provided by AWS.
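
One way to narrow this down is to compare what identity each context actually resolves to, and whether the Jenkins process carries stray AWS_* variables that shadow the instance profile. A diagnostic sketch (the bucket name comes from the question; the pgrep pattern assumes a typical yum install running jenkins.war):

# from a shell: confirm the instance profile still resolves for the jenkins user
sudo -u jenkins /usr/bin/aws sts get-caller-identity
# check whether the running Jenkins process carries AWS_* environment variables
# (e.g. an old key pair injected by a plugin) that would override the profile
sudo cat /proc/$(pgrep -f jenkins.war | head -n1)/environ | tr '\0' '\n' | grep '^AWS_'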

Related

TeamCity Agent - AWS CLI

I have deployed a TeamCity server and agent to AWS using the JetBrains Stack Template (https://www.jetbrains.com/help/teamcity/running-teamcity-stack-in-aws.html).
All seems to be good: my server starts, the agent is functional, I have created several builds, etc.
I came to a point where I want to deploy my application to an AWS environment using aws-cli commands.
I am struggling to install the aws-cli on the agent. My build steps are erroring out with aws: command not found.
Does anyone have any ideas?
My progress so far: I have connected to the agent EC2 machine via an SSH bastion, and I am able to invoke aws --version as ec2-user, but the build agent cannot see aws.
It turns out my TeamCity agent runs in AWS ECS via the docker image https://hub.docker.com/r/jetbrains/teamcity-agent.
What I ended up doing is creating my own docker image, using the jetbrains one as a base.
I uploaded my docker image to the AWS ECR registry. Afterwards I created a new revision of the original task definition. This new revision uses my image instead of the original one, so it has the aws-cli available.
I then added my AWS profile to the EC2 host machine and added a volume to the docker container (via the task definition) so that the container can access the .aws/credentials file; a standalone-docker sketch of that mount follows.
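For illustration only, outside of ECS the same mount could be expressed with plain docker (the host path and in-container target are assumptions; the target depends on which user the agent process runs as inside the container):

# mount the host's credentials read-only into the container (paths assumed)
docker run -d -v /home/ec2-user/.aws:/root/.aws:ro jetbrains/teamcity-agent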
Dockerfile looks like this:
FROM jetbrains/teamcity-agent
# install pip, then the AWS CLI as a user install
RUN apt-get update && apt-get install -y python-pip
RUN pip install awscli --upgrade --user
# "~" is not expanded in ENV; use the absolute path of root's user-install bin dir
ENV PATH="/root/.local/bin:${PATH}"
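
To build the image and push it to ECR, something like the following should work (a sketch: the account ID, region, and repository name are placeholders, and aws ecr get-login is the AWS CLI v1 syntax matching the CLI generation used here):

# authenticate docker to ECR; get-login prints a docker login command, which $() executes
$(aws ecr get-login --no-include-email --region us-east-1)
docker build -t teamcity-agent-aws .
docker tag teamcity-agent-aws:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-aws:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-aws:latest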
I added the aws-cli to the TeamCity agent over a Remote Desktop connection, since I was using a Windows agent. In the build steps I used the Command Line runner type and executed the aws commands there.
For more information, refer to the question below, where I answered it:
How to deploy to AWS Elastic Beanstalk on successful Teamcity build

How do I link a project source folder to an existing Elastic Beanstalk application?

I've been using the AWS console to upload a WAR file for deployment. Now I want to do it from the command line. I've been following this guide and found eb init; I read the help with eb init --help and eb --help, but the only option is to create a new application.
usage: eb init <application_name> [options ...]
Initializes your directory with the EB CLI. Creates the application.
positional arguments:
  application_name      application name
How do I link my local source project directory to an existing application in the AWS console?
I would expect a command like eb link or something similar to how, with Heroku, you can just add a Git remote and automatically link an existing project to an existing app.
When you run eb init in the directory containing your source code, eb will prompt you for an application name and an environment name. This way you can link your source code to whatever application/environment is deployed on Beanstalk.
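A sketch of that flow (the directory, application, and environment names are hypothetical):

cd ~/my-app              # directory containing your source code
eb init                  # prompts for a region, then lets you pick the existing application
eb use my-existing-env   # optionally pin this directory to an existing environment
eb deploy                # deploy the current source to that environment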
It worked after I got the AWS CLI keys for the project and ran aws configure. I had old keys in ~/.aws/ from a different project, perhaps a decade old, that used a different format. Once I got new keys with permission for these particular apps, ran aws configure, and set the region, eb init presented a menu of applications to choose from. The command aws elasticbeanstalk describe-applications has to work before eb can work. I was expecting it would ask for a username and password, like Heroku does.
Install the aws and eb command line tools:
Install the awscli
Get keys from your AWS admin/devops
Run aws configure (example region: us-east-1)
Run aws elasticbeanstalk describe-applications to verify access
Install Python
pip install awsebcli --upgrade --user
Add eb to your PATH, probably %USERPROFILE%\AppData\Roaming\Python\Python37\Scripts
eb init
eb list / eb logs / eb ssh / eb status / eb config / eb help
Beanstalk differs from Heroku in this workflow, unless you are using CodeCommit. I am assuming you are just using S3 to store your application versions.
The AWS CLI command to do this is:
aws elasticbeanstalk create-application-version
You can specify an application, a version label, and either a CodeCommit repository, a CodeBuild build, or a source bundle in S3. API docs
You will need to run a separate command first to upload your bundle to your S3 bucket.
Using the CLI:
aws s3 cp <filename> s3://<bucket>/<key>
API docs
You can also use the console.
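Putting the two steps together (a sketch; the bucket, key, application name, and version label are placeholders):

# upload the source bundle, then register it as an application version
aws s3 cp app-v1.zip s3://my-deploy-bucket/app-v1.zip
aws elasticbeanstalk create-application-version \
    --application-name my-app \
    --version-label v1 \
    --source-bundle S3Bucket=my-deploy-bucket,S3Key=app-v1.zip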
It seems that guide skips initializing your local git repository. To link your local source project to Beanstalk, make sure you have initialized a local git repository. Then you can link your workspace and application using eb init. More about EB CLI and Git
Based on my understanding, you have a project directory on your PC and run your app on localhost, and now you want to run it in AWS Elastic Beanstalk to make it public.
If you have created an EB application in the EB management console and uploaded your bundled source code, the source code becomes an application version; you then need to deploy it into one of your environments using the EB management console, like this:
Figure of the management console.
The EB platform (container) will then take care of it and run your server automatically, as long as you set up the command your app uses to run the server, the proxy, and other configuration, either through the EB management console -> [Your environment] -> Configuration or via an .ebextensions file.
If everything is well, you can visit your app's home page through the environment URL.

Enable AWS Batch in AWS CLI

I am working in US-East-1 (N. Virginia) and have even configured the default region name to us-east-1 using the command aws configure.
But I am not able to access Batch using the CLI. Batch is not even listed as one of the available services in aws help.
Any ideas how to enable Batch in the AWS CLI? I have administrative access in the IAM console, so permissions don't seem to be the issue.
The batch service is relatively new, so its commands only exist in fairly new versions of the aws CLI.
Commands for batch in the latest cli documentation: http://docs.aws.amazon.com/cli/latest/reference/batch/index.html?highlight=batch
If you are running Windows, simply download the updated installer: https://aws.amazon.com/cli/
If you are using OSX or Linux, use pip: pip install --upgrade awscli
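To confirm the upgrade took effect, a quick check (assumes credentials and region are already configured):

aws --version                  # should now report a recent CLI version
aws batch describe-job-queues  # succeeds (possibly with an empty list) once batch is available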

Ansible AWS deployment: Can we use IAM role instead of Keys?

I have an Ansible master running on an Ubuntu EC2 server with an IAM role that has full permission on EC2 and nothing else. All the instances deployed using this Ansible master are created but immediately end up in a terminated state.
However, while testing another approach, I created a new master and provided my authentication keys, which belong to a root user with full permissions.
Is there a problem with the IAM role's permissions, or is deployment known not to work with IAM roles?
It works as expected for me:
root@test-node:~# cat /etc/issue
Ubuntu 14.04.4 LTS \n \l

root@test-node:~# ansible --version
ansible 2.1.2.0
  config file =
  configured module search path = Default w/o overrides
root@test-node:~# pip list | grep boto
boto (2.42.0)
If no credentials are specified in environment variables or config files, Boto (the library Ansible uses to connect to AWS) will try to fetch credentials from the instance metadata.
You may try to fetch them manually with:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
and pass the AccessKeyId and SecretAccessKey from the response to Ansible via environment variables to test that the role's permissions are correct.
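For example (a sketch; the role name is hypothetical, and the values come from the JSON returned by the metadata endpoint):

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-ec2-role
export AWS_ACCESS_KEY_ID=...      # AccessKeyId from the response
export AWS_SECRET_ACCESS_KEY=...  # SecretAccessKey from the response
export AWS_SECURITY_TOKEN=...     # Token; instance-profile credentials are session credentials and need it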
Keep in mind that:
the IAM role should be attached to the EC2 instance before it starts;
the region should always be defined, either via module parameters or via an environment variable (a one-line example follows).
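For instance (us-east-1 here is just an example value):

export AWS_REGION=us-east-1   # Ansible's EC2 modules also honor EC2_REGION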

Linking github repository with my Amazon EC2 Instances AWS

I am new to GitHub and AWS. I want to deploy my code (a simple 'hello world' HTML page) directly from my GitHub repository onto my EC2 instance. I was following this tutorial: http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ-tutorial.html. However, I am stuck on step 4.
It says: after you have 'launched the instance and verified the AWS CodeDeploy agent is running, go to the next step'.
But how do I verify the AWS CodeDeploy agent is running? It says to follow this link, but I am lost with it: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html#how-to-run-agent-install-windows (Windows Server)
Where do I enter these commands? And do I need the AWS SDK first?
Thanks
You can check whether the CodeDeploy agent is running with the command:
sudo service codedeploy-agent status
If the command returns an error, the AWS CodeDeploy agent is not installed. Install it as described in "To install, uninstall, or reinstall the AWS CodeDeploy agent for Amazon Linux or RHEL".
If the AWS CodeDeploy agent is installed and running, you should see a message like "The AWS CodeDeploy agent is running".
If you see a message like "error: No AWS CodeDeploy agent running", start the service by running the following two commands, one at a time:
sudo service codedeploy-agent start
sudo service codedeploy-agent status
See http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html for information on other OS types.
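Since the question mentions Windows Server: there the agent runs as a Windows service, so the equivalent check (a sketch, run from an elevated command prompt) is:

powershell.exe -Command Get-Service -Name codedeployagent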