I've set up everything according to this article:
https://aws.amazon.com/tw/blogs/apn/announcing-atlassian-bitbucket-support-for-aws-codedeploy/
Here is my env:
Instance (free tier with Amazon Linux)
- Apache 2.4 installed
Security group
- only ports 22 (accessible from my IP only) and 80 are open
Iptables stopped
2 roles are set
- one for linking S3 <-> Bitbucket
(attached custom policy)
- one role is for deployment group
(attached AWSCodeDeployRole policy)
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "codedeploy.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
The application I tried to deploy is:
https://s3.amazonaws.com/aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip
Permissions
/var/www/* is owned by ec2-user with 755 permissions
Agent
service codedeploy-agent status =
The AWS CodeDeploy agent is running as PID 7200
Clues:
A zip file is uploaded to my S3 bucket for every deploy.
Error code: HEALTH_CONSTRAINTS
Does anyone have an idea what causes the deployment to fail?
Update 1: After I re-launched the instance with the IAM instance profile, the application can be deployed, but it still fails. When I click View Events, there is a log as follows:
Error Code: ScriptFailed
Script Name: scripts/install_dependencies
Message: Script at specified location: scripts/install_dependencies run as user root failed with exit code 1
Log Tail: LifecycleEvent - BeforeInstall
Script - scripts/install_dependencies
[stdout]Loaded plugins: priorities, update-motd, upgrade-helper
[stdout]Resolving Dependencies
[stdout]--> Running transaction check
[stdout]---> Package httpd.x86_64 0:2.2.31-1.8.amzn1 will be installed
[stdout]--> Processing Dependency: httpd-tools = 2.2.31-1.8.amzn1 for package: httpd-2.2.31-1.8.amzn1.x86_64
[stdout]--> Processing Dependency: apr-util-ldap for package: httpd-2.2.31-1.8.amzn1.x86_64
[stdout]--> Running transaction check
[stdout]---> Package apr-util-ldap.x86_64 0:1.4.1-4.17.amzn1 will be installed
[stdout]---> Package httpd-tools.x86_64 0:2.2.31-1.8.amzn1 will be installed
[stdout]--> Processing Conflict: httpd24-2.4.23-1.66.amzn1.x86_64 conflicts httpd < 2.4.23
[stdout]--> Processing Conflict: httpd24-tools-2.4.23-1.66.amzn1.x86_64 conflicts httpd-tools < 2.4.23
[stdout]--> Finished Dependency Resolution
[stderr]Error: httpd24-tools conflicts with httpd-tools-2.2.31-1.8.amzn1.x86_64
[stderr]Error: httpd24 conflicts with httpd-2.2.31-1.8.amzn1.x86_64
[stdout] You could try using --skip-broken to work around the problem
[stdout] You could try running: rpm -Va --nofiles --nodigest
Does anyone know what the problem is?
The error code HEALTH_CONSTRAINTS means more instances failed than expected, which is defined by the deployment configuration.
For more information about why the deployment failed, open the deployment console at https://region.console.aws.amazon.com/codedeploy/home?region=region#/deployments and click on the failed deployment ID. This redirects to the deployment details page, which lists all of the instances included in that deployment together with each instance's lifecycle events. Click View Events; if there is a View Logs link, you can see the reason why that instance's deployment failed.
If the console doesn't have enough information, the log on the instance can be read with less /var/log/aws/codedeploy-agent/codedeploy-agent.log. It contains the logs for the most recent deployments.
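If you want the script output itself (stdout/stderr of your lifecycle hooks) rather than the agent's own log, here is a small sketch assuming the agent's default root directory:
# agent log, as above
less /var/log/aws/codedeploy-agent/codedeploy-agent.log
# per-deployment hook output under the default root_dir /opt/codedeploy-agent/deployment-root
sudo less /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log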
This happens because CodeDeploy checks the health of the EC2 instances by hitting them. Before deployment, you need to run the bash script below on the instances and check that it worked; the httpd service must be started. Then reboot the instance.
#!/bin/bash
# Run this script as root (e.g. via sudo); a bare "sudo su" inside a script
# would not make the remaining commands run as root.
yum update -y
yum install -y httpd
yum install -y ruby
yum install -y aws-cli
cd ~
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto
echo 'hello world' > /var/www/html/index.html
hostname >> /var/www/html/index.html
chkconfig httpd on
service httpd start
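As a quick sanity check afterwards (a sketch, not part of the original script), you can confirm that both httpd and the agent are actually up before retrying the deployment:
# Apache should answer with HTTP/1.1 200 OK for the hello world page
curl -sI http://localhost/
service httpd status
# should print: The AWS CodeDeploy agent is running as PID ...
service codedeploy-agent status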
It depends on your deployment configuration, but basically one or more instance deployments failed.
HEALTH_CONSTRAINTS: The deployment failed on too many instances to be
successfully deployed within the instance health constraints specified
http://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ErrorInformation.html
Check your deployment configuration settings. The overall failure/success of the deployment is based on these settings. Try the CodeDeployDefault.AllAtOnce, and dial in as needed.
Also, double check AWS CodeDeploy Instance Health settings, especially minimum-healthy-hosts
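For illustration, a hedged sketch with the AWS CLI (the application and deployment group names are placeholders): you can inspect what a built-in configuration requires and point your deployment group at a more permissive one:
# CodeDeployDefault.AllAtOnce requires a minimum of 0 healthy hosts
aws deploy get-deployment-config --deployment-config-name CodeDeployDefault.AllAtOnce
aws deploy update-deployment-group \
    --application-name MyApp \
    --current-deployment-group-name MyDeploymentGroup \
    --deployment-config-name CodeDeployDefault.AllAtOnce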
It seems there is a conflict between one of the dependencies you asked to install via your appspec file's scripts and the httpd24/httpd24-tools packages already installed on the instance.
[stderr]Error: httpd24-tools conflicts with httpd-tools-2.2.31-1.8.amzn1.x86_64
[stderr]Error: httpd24 conflicts with httpd-2.2.31-1.8.amzn1.x86_64
[stdout] You could try using --skip-broken to work around the problem
So try to solve the dependency installation problem first. Install the dependencies manually on your EC2 instance and find a fix for this conflict; once you have solved it, bring the fix back into the script referenced by your appspec file so that the dependencies are installed via CodeDeploy.
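As a minimal sketch of what a fixed scripts/install_dependencies might look like, assuming (as the log suggests) that the Apache 2.4 packages are already on the instance, so the hook should only install httpd when it is genuinely missing:
#!/bin/bash
# BeforeInstall hook: install Apache only if it isn't present, preferring
# the httpd24 package on Amazon Linux so it doesn't conflict with the
# already-installed httpd24/httpd24-tools packages.
if ! command -v httpd >/dev/null 2>&1; then
    yum install -y httpd24 || yum install -y httpd
fi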
I am following a tutorial to deploy a Flask application with Docker to AWS Elastic Beanstalk (EB). I created an AWS Elastic Container Registry (ECR) repository and ran some commands which successfully pushed the Docker image to ECR:
docker build -t app-backend .
docker tag app-backend:latest [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
docker push [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
Then I tried to deploy to EB:
eb init (selecting a Docker EB application I created on the AWS GUI)
eb deploy
On "eb init" I get the error "Cannot setup CodeCommit because there is no Source Control setup, continuing with initialization", but I assume this can be ignored as it otherwise looked fine. On "eb deploy" though, the deployment fails. In "eb-engine.log" (found in the AWS GUI), I see error messages like:
[ERROR] An error occurred during execution of command [app-deploy] - [Docker Specific Build Application]. Stop running the command. Error: failed to pull docker image: Command /bin/sh -c docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest failed with error exit status 1. Stderr:failed to register layer: Error processing tar file(exit status 1): write /root/.cache/pip/http/5/e/7/3/b/[long number]: no space left on device
When I manually run the pull command that the error references (locally, not from the EB instance), the command seems to respond as expected:
docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
What could be causing this deployment failure?
My Dockerrun.aws.json file looks like this:
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "[URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend",
"Update": "true"
},
"Ports": [
{
"ContainerPort": 5000,
"HostPort": 5000
}
]
}
I solved this by following how to prevent error "no space left on device" when deploying multi container docker application on AWS beanstalk?.
Basically, you find your Elastic Beanstalk instance in the EC2 AWS GUI and modify its volumes to add space to the EB instance. Then you follow the link in that Stack Overflow post to repartition your EB instance: SSH into it with eb ssh, use commands like df -H and lsblk to see how much space is in each partition, and then use commands like:
sudo growpart /dev/xvda 1
sudo xfs_growfs -d /
to grow the partition and filesystem so as to use all the new space you added in the AWS EC2 GUI. You can check with df -H and lsblk to see if the repartitioning gave you more space.
Then the eb deploy command should work. If SSH isn't setup yet, you may have to do eb ssh --setup first.
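If you prefer the CLI over the EC2 console for the first step (growing the EBS volume itself), here is a rough sketch with placeholder instance and volume IDs:
# find the volume attached to the EB instance
aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=i-0123456789abcdef0
# grow it, e.g. to 30 GiB, then repartition as described above
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 30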
As I wasn't particularly satisfied with only being able to use Amazon Linux (I wanted to use Amazon Linux 2 as well), I created two instances using both OS versions and added the same script:
mkdir /etc/codedeploy-agent/
mkdir /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS
aws_secret_access_key: SECRET
iam_user_arn: arn:aws:iam::525221857828:user/GeneralUser
region: eu-west-2
EOT
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
The difference I noted between the two is that on the Amazon Linux 2 instance, the folder /etc/codedeploy-agent/conf/ has only one file,
while on Amazon Linux it has two files.
Knowing this, I created a new file in the Amazon Linux 2 instance with the same name
touch codedeployagent.yml
, changed its permissions from
-rw-r--r-- 1 root root 261 Oct 2 10:43 codedeployagent.yml
to
-rwxr-xr-x 1 root root 261 Oct 2 10:43 codedeployagent.yml
, and added the same content
:log_aws_wire: false
:log_dir: '/var/log/aws/codedeploy-agent/'
:pid_dir: '/opt/codedeploy-agent/state/.pid/'
:program_name: codedeploy-agent
:root_dir: '/opt/codedeploy-agent/deployment-root'
:verbose: false
:wait_between_runs: 1
:proxy_uri:
:max_revisions: 5
and then rebooted the machine. Still, this didn't fix the issue, as when I run
sudo service codedeploy-agent status
I still get
Redirecting to /bin/systemctl status codedeploy-agent.service Unit
codedeploy-agent.service could not be found.
I also ensured all the updates were in place and rebooted the machine, but that didn't work either.
I can provide details of my setup for Amazon Linux 2 instances to deploy CodeDeployGitHubDemo (based on a past question).
1. CodeDeploy agent
I used the following as UserData (you may need to adjust the region if not us-east-1):
#!/bin/bash
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
It did not require hard-coding credentials. The above works perfectly fine on the Amazon Linux 2 instances that I've used.
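If you'd rather not hard-code the region, here is a sketch of a variant that derives it from the instance metadata service (assuming the regional bucket naming above holds for your region):
#!/bin/bash
yum update -y
yum install -y ruby wget
# availability zone minus its trailing letter is the region, e.g. us-east-1a -> us-east-1
REGION=$(curl -s 169.254.169.254/latest/meta-data/placement/availability-zone/ | sed 's/[a-z]$//')
cd /home/ec2-user
wget https://aws-codedeploy-$REGION.s3.$REGION.amazonaws.com/latest/install
chmod +x ./install
./install auto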
2. Instance role
Your instance needs a role suitable for CodeDeploy. I used an EC2 instance role with the policy listed here:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:Get*",
"s3:List*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
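If you'd rather create that role from the CLI than the console, here is a rough sketch (the role, policy, and instance profile names are made up; s3-read-policy.json holds the policy above):
cat > ec2-trust.json <<'EOT'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOT
aws iam create-role --role-name CodeDeployInstanceRole --assume-role-policy-document file://ec2-trust.json
aws iam put-role-policy --role-name CodeDeployInstanceRole --policy-name s3-read --policy-document file://s3-read-policy.json
aws iam create-instance-profile --instance-profile-name CodeDeployInstanceProfile
aws iam add-role-to-instance-profile --instance-profile-name CodeDeployInstanceProfile --role-name CodeDeployInstanceRole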
3. Deployment group
I had three instances for tests in an AutoScaling group called myasg.
4. Deployment
I deployed from S3 without a load balancer.
5. Results
No issues were found and the deployment was successful.
And the website was running (you need to open port 80 in the security groups).
Update
For manual installation on Amazon Linux 2, you can sudo su - to become root after login, then run:
mkdir -p /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS
aws_secret_access_key: SECRET
iam_user_arn: arn:aws:iam::525221857828:user/GeneralUser
region: eu-west-2
EOT
yum install -y wget ruby
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
env AWS_REGION=eu-west-2 ./install rpm
To check its status:
systemctl status codedeploy-agent
With this, you should get something like this:
● codedeploy-agent.service - AWS CodeDeploy Host Agent
   Loaded: loaded (/usr/lib/systemd/system/codedeploy-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-10-03 07:18:57 UTC; 3s ago
  Process: 3609 ExecStart=/bin/bash -a -c [ -f /etc/profile ] && source /etc/profile; /opt/codedeploy-agent/bin/codedeploy-agent start (code=exited, status=0/SUCCESS)
 Main PID: 3623 (ruby)
   CGroup: /system.slice/codedeploy-agent.service
           ├─3623 codedeploy-agent: master 3623
           └─3627 codedeploy-agent: InstanceAgent::Plugins::CodeDeployPlugin::CommandPo...

Oct 03 07:18:57 ip-172-26-8-137.eu-west-2.compute.internal systemd[1]: Starting AWS Cod...
Oct 03 07:18:57 ip-172-26-8-137.eu-west-2.compute.internal systemd[1]: Started AWS Code...
Hint: Some lines were ellipsized, use -l to show in full.
If you run
sudo service codedeploy-agent status
you'll get (meaning it's working as expected)
The AWS CodeDeploy agent is running as PID 3623
To start if not running:
systemctl start codedeploy-agent
I have deployed a TeamCity server and agent to AWS using the JetBrains stack template (https://www.jetbrains.com/help/teamcity/running-teamcity-stack-in-aws.html).
All seems to be good: my server starts, the agent is functional, I have created several builds, etc.
I have come to a point where I want to deploy my application to an AWS environment using aws-cli commands.
I am struggling to enable/install the aws-cli on the agent. My build steps are erroring out with aws: command not found.
Does anyone have any ideas?
My progress so far: I have connected to the agent EC2 machine via an SSH bastion, and I am able to invoke aws --version as ec2-user, but the build agent cannot see aws.
Turns out, my TeamCity agent runs in AWS ECS via the Docker image https://hub.docker.com/r/jetbrains/teamcity-agent
What I ended up doing is creating my own Docker image, using the JetBrains one as a base.
I uploaded my Docker image to an AWS ECR repository. Afterwards, I created a new revision of the original task definition. This new revision uses my image instead of the original one, so I have the aws-cli there.
I then added my AWS profile to the EC2 host machine and added a volume to the Docker container (via the task definition) so that the container can access the .aws/credentials file.
Dockerfile looks like this:
FROM jetbrains/teamcity-agent
RUN apt-get update && apt-get install -y python-pip
# --user puts the binaries under the build user's ~/.local/bin
RUN pip install awscli --upgrade --user
# ~ is not expanded inside ENV, so spell the path out (adjust if your base image builds as a non-root user)
ENV PATH="/root/.local/bin:${PATH}"
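For completeness, here is a sketch of building and publishing that image so the new task definition revision can reference it (the repository name teamcity-agent-aws and the [URL_ID] registry ID are placeholders):
docker build -t teamcity-agent-aws .
docker tag teamcity-agent-aws:latest [URL_ID].dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-aws:latest
# log in to ECR first (AWS CLI v2 syntax)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin [URL_ID].dkr.ecr.us-east-1.amazonaws.com
docker push [URL_ID].dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-aws:latest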
I added the aws-cli to the TeamCity agent using a Remote Desktop connection, as I used a Windows agent. In the build steps I used Runner Type: Command Line and executed the aws commands.
For more information, you can refer to the link below, where I answered the question:
How to deploy to AWS Elastic Beanstalk on successful Teamcity build
I want to test Kubernetes for gitlab-ci, so I want to create my first k8s cluster on AWS.
So I follow the docs:
sudo snap install conjure-up --classic
# re-login may be required at that point if you just installed snap utility
conjure-up kubernetes
In the install process, I choose:
Canonical Distribution of Kubernetes
Helm
AWS
my credentials
us-east-2
Juju-as-a-Service (JaaS) Free Controller
Then I must log into JaaS. I log in with my Ubuntu One account, but it always fails:
Login failed, please try again: ERROR cannot log into "jimm.jujucharms.com": cannot get user details for "https://login.ubuntu.com/+id/W8KzXrQ":
not found
What am I forgetting?
I am currently migrating my config management on AWS to Terraform to make it more pluggable. What I like is the possibility to manage rolling updates to an Autoscaling Group where Terraform waits until the new instances are in service before it destroys the old infrastructure.
This works fine with the "bare" infrastructure, but I ran into a problem when updating the actual app instances.
The code is deployed via AWS CodeDeploy, and I can tell Terraform to use the generated name of the new Autoscaling Group as the deployment target, but it doesn't deploy the code to the new instances on startup. When I manually select "deploy changes to the deployment group", the deployment starts successfully.
Any ideas how to automate this step?
https://www.terraform.io/docs/provisioners/local-exec.html might be able to do this. A couple of assumptions:
- You've got something like the aws-cli installed where you're running Terraform.
- You've got your dependencies set up so that your CodeDeploy step would be one of the last things executed. If that's not the case, you can play with depends_on: https://www.terraform.io/intro/getting-started/dependencies.html#implicit-and-explicit-dependencies
Once your code has been posted, you would just add a
resource "something" "some_name" {
# Whatever config you've setup for the resource
provisioner "local-exec" {
command = "aws deploy create-deployment"
}
}
FYI, the aws deploy create-deployment command above is not complete, so you'll have to play with it in your environment until you've got the values needed to trigger the rollout, but hopefully this is enough to get you started.
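As a rough illustration only (the application, deployment group, and bucket names below are placeholders), the filled-in command inside that provisioner might look something like:
aws deploy create-deployment \
    --application-name my-application \
    --deployment-group-name my-deployment-group \
    --s3-location bucket=my-artifact-bucket,key=app.zip,bundleType=zip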
You can trigger the deployment directly in your user-data in the
resource "aws_launch_configuration" "my-application" {
name = "my-application"
...
user_data = "${data.template_file.node-init.rendered}"
}
data "template_file" "node-init" {
template = "${file("${path.module}/node-init.yaml")}"
}
Content of my node-init.yaml, following the recommendations of this documentation: https://aws.amazon.com/premiumsupport/knowledge-center/codedeploy-agent-launch-configuration/
write_files:
- path: /root/configure.sh
content: |
#!/usr/bin/env bash
REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone/ | sed 's/[a-z]$//')
yum update -y
yum install ruby wget -y
cd /home/ec2-user
wget https://aws-codedeploy-$REGION.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto
# Add the following line for your node to update itself
aws deploy create-deployment --application-name=<my-application> --region=ap-southeast-2 --deployment-group-name=<my-deployment-group> --update-outdated-instances-only
runcmd:
- bash /root/configure.sh
In this implementation the node is responsible for triggering the deployment itself. This has been working perfectly for me so far, but it can result in deployment failures if the ASG is creating several instances at the same time (in that case the failed instances will be terminated quickly because they are not healthy).
Of course, you need to add sufficient permissions to the role associated with your nodes so they can trigger the deployment.
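Here is a rough sketch of granting that with an inline policy (the role and policy names are placeholders, and the exact set of CodeDeploy actions may need tuning for your setup):
cat > codedeploy-self-deploy.json <<'EOT'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codedeploy:CreateDeployment",
        "codedeploy:GetDeploymentConfig",
        "codedeploy:GetDeploymentGroup",
        "codedeploy:GetApplicationRevision",
        "codedeploy:RegisterApplicationRevision"
      ],
      "Resource": "*"
    }
  ]
}
EOT
aws iam put-role-policy --role-name my-node-role --policy-name allow-self-deploy --policy-document file://codedeploy-self-deploy.json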
This is still a workaround, and if someone knows a solution that behaves the same way as cfn-init, I am interested.