Linking a GitHub repository with my Amazon EC2 instance (AWS) - amazon-web-services

I am new to GitHub and AWS. I want to deploy my code (a simple 'hello world' HTML page) directly from my GitHub repository onto my EC2 instance. I was following this tutorial: http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ-tutorial.html However, I am struggling on step 4.
It says: after you have 'launched the instance and verified the AWS CodeDeploy agent is running, go to the next step'.
But how do I verify the AWS CodeDeploy agent is running? It says to follow this link, but I am lost with it: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html#how-to-run-agent-install-windows (Windows Server)
Where do I enter these commands? And do I need the AWS SDK first?
Thanks

You can check whether the CodeDeploy agent is running with the command:
sudo service codedeploy-agent status
If the command returns an error, the AWS CodeDeploy agent is not installed. Install it as described in "To install, uninstall, or reinstall the AWS CodeDeploy agent for Amazon Linux or RHEL".
If the AWS CodeDeploy agent is installed and running, you should see a message like "The AWS CodeDeploy agent is running."
If you see a message like "error: No AWS CodeDeploy agent running", start the service by running the following two commands, one at a time:
sudo service codedeploy-agent start
sudo service codedeploy-agent status
See http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html if you want info for another OS type.
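Since the question is about Windows Server: there the agent runs as a Windows service named codedeployagent, so (per the linked guide) you can check it from an RDP session on the instance, in PowerShell or a command prompt, with something like:
powershell.exe -Command Get-Service -Name codedeployagent
If the service exists but is stopped, Start-Service -Name codedeployagent should bring it up. No AWS SDK is needed just to check the service.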

Related

AWS CodePipeline + CodeDeploy to EC2 with docker-compose

Hi, I've been trying to get auto-deployment working on AWS. Whenever there's a merge or commit to the repo, CodePipeline detects it and has CodeDeploy update the tagged EC2 instance with the new changes. The app is a simple Node.js app which I want to start with docker-compose. I have already installed docker-compose + docker on the EC2 instance and enabled the CodeDeploy agent.
I tested the whole process and it mostly works, except that CodeDeploy fails the deployment because it is unable to run the command docker-compose up -d in the ApplicationStart portion of my appspec.yml. I get an error that docker-compose cannot be found, which is odd because the BeforeInstall script downloads and installs docker + docker-compose and sets all the permissions. Is there something I'm missing, or is this just not meant to happen with CodeDeploy and EC2?
I can confirm that when I SSH into the EC2 instance and run docker-compose up -d in the project root directory it works, but as soon as I try to run the docker-compose command from the script portion of the appspec.yml it fails.
The project repo is here, just in case there's anything I missed: https://github.com/c3ho/simple_crud
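A common cause of this symptom (not confirmed from the post) is that CodeDeploy runs hook scripts in a minimal, non-interactive environment, typically as root, so /usr/local/bin (the usual docker-compose install location) may not be on PATH even though it is in your SSH session. A minimal ApplicationStart script that sidesteps this, assuming docker-compose lives in /usr/local/bin and that the files section of appspec.yml deploys the app to /home/ec2-user/simple_crud (both paths are assumptions, not taken from the repo), might look like:
#!/bin/bash
# scripts/application_start.sh - hypothetical hook script referenced from the
# ApplicationStart section of appspec.yml
set -e
# Use the absolute path because CodeDeploy's hook environment may not include
# /usr/local/bin on PATH
COMPOSE=/usr/local/bin/docker-compose
# Assumed deployment destination from the appspec.yml files section
cd /home/ec2-user/simple_crud
$COMPOSE up -d
Alternatively, echoing $PATH and which docker-compose from the failing hook script usually shows quickly whether PATH is the issue.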

TeamCity Agent - AWS CLI

I have deployed a TeamCity server and agent to AWS using the JetBrains stack template (https://www.jetbrains.com/help/teamcity/running-teamcity-stack-in-aws.html).
All seems to be good: my server starts, the agent is functional, I have created several builds, etc.
I have come to the point where I want to deploy my application to an AWS environment using aws-cli commands.
I am struggling to install/enable the AWS CLI on the agent. My build steps error out with aws: command not found.
Does anyone have any ideas?
My progress so far: I have connected to the agent EC2 machine via an SSH bastion, and I am able to invoke aws --version as ec2-user, but the build agent cannot see aws.
It turns out my TeamCity agent runs in AWS ECS via the Docker image https://hub.docker.com/r/jetbrains/teamcity-agent
What I ended up doing is creating my own Docker image, using the JetBrains one as a base.
I uploaded my Docker image to an AWS ECR repository. Afterwards I created a new revision of the original task definition. This new revision uses my image instead of the original one, so the AWS CLI is available there.
I then added my AWS profile to the EC2 host machine and added a volume to the Docker container (via the task definition) so that the container can access the .aws/credentials file.
The Dockerfile looks like this:
# Start from the official JetBrains TeamCity agent image
FROM jetbrains/teamcity-agent
# Install pip, then use it to install the AWS CLI for the current user
RUN apt-get update && apt-get install -y python-pip
RUN pip install awscli --upgrade --user
# pip --user installs into ~/.local/bin; "~" is not expanded during PATH lookups, so an absolute path (e.g. /root/.local/bin if the build runs as root) is safer
ENV PATH="~/.local/bin:${PATH}"
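For context, getting a custom image like this into ECS looks roughly like the following; the account ID (123456789012), region (us-east-1), and repository name (teamcity-agent-awscli) are placeholders, not values from the original setup:
# Create an ECR repository and log Docker in to it (placeholder values)
aws ecr create-repository --repository-name teamcity-agent-awscli
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Build, tag, and push the custom agent image
docker build -t teamcity-agent-awscli .
docker tag teamcity-agent-awscli:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-awscli:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-awscli:latest
# Then register a new task definition revision that points at this image
# (adding a volume that mounts the host's .aws directory into the container)
# and update the ECS service to use the new revision.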
I added the AWS CLI to the TeamCity agent over a Remote Desktop connection, since I use a Windows TeamCity agent. In the build steps I used the Command Line runner type and executed the aws commands.
For more information, you can refer to the link below, where I answered the question:
How to deploy to AWS Elastic Beanstalk on successful Teamcity build
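For example, once the CLI is on the agent's PATH, a Command Line build step can invoke it directly (the bucket name and file path here are only placeholders):
aws --version
aws s3 cp build\app.zip s3://my-example-bucket/app.zip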

Kinesis agent failing to start

I am trying to set up the Kinesis agent on an Amazon EC2 instance, where it is supposed to be preinstalled.
But when I run the command:
sudo service aws-kinesis-agent start
it gives an error. Can someone help?
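The post doesn't include the actual error, so only general pointers: assuming the stock aws-kinesis-agent package, the service status, the agent log, and the agent configuration file are the usual places to look, for example:
# Check whether the agent service is installed and in what state
sudo service aws-kinesis-agent status
# Startup errors are written to the agent log (path used by the stock package)
sudo tail -n 50 /var/log/aws-kinesis-agent/aws-kinesis-agent.log
# A malformed /etc/aws-kinesis/agent.json is a common cause of start failures,
# so check that it is at least valid JSON
sudo python -m json.tool /etc/aws-kinesis/agent.json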

Issue installing Kubernetes on AWS EC2 / ubuntu 16.04

I want to test Kubernetes for GitLab CI, so I want to create my first k8s cluster on AWS.
So I follow the docs:
sudo snap install conjure-up --classic
# re-login may be required at that point if you just installed snap utility
conjure-up kubernetes
In the install process, I choose:
Canonical Distribution of Kubernetes
Helm
AWS
my credentials
us-east-2
Juju-as-a-Service (JaaS) Free Controller
Then I must log into JaaS. I log in with my Ubuntu One account, but it always fails:
Login failed, please try again: ERROR cannot log into "jimm.jujucharms.com": cannot get user details for "https://login.ubuntu.com/+id/W8KzXrQ":
not found
What am I forgetting?

Jenkins on AWS EC2 instance unable to use instance profile after upgrade

I had a Jenkins 2.46 installation running on an EC2 box, associated with an IAM role through an instance profile.
Jenkins was able to do various tasks requiring AWS credentials (e.g. run Terraform, upload files to S3, access CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command in a shell on the instance as the jenkins user still works fine (e.g. sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists the bucket files; running the same command from Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89, but I did not find anything relevant.
Jenkins was installed and updated through yum; the AWS CLI was installed using the bundled installer provided by AWS.
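Not an explanation of what changed between 2.46 and 2.89, but one sanity check: you can confirm the instance profile itself is still serving credentials by querying the instance metadata endpoint from the box (the role name in the second command is whatever the first command returns):
# List the role attached to the instance profile
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Fetch the temporary credentials for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
If that works, the problem is more likely in how the upgraded Jenkins (or its plugins) resolves the credential chain than in the instance profile itself.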