As I wasn't particularly satisfied with only being able to use Amazon Linux (I wanted to use Amazon Linux 2 as well), I created two instances, one with each OS version, and added the same script:
mkdir /etc/codedeploy-agent/
mkdir /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS
aws_secret_access_key: SECRET
iam_user_arn: arn:aws:iam::525221857828:user/GeneralUser
region: eu-west-2
EOT
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
The difference I noted between the two is that on the Amazon Linux 2 instance the folder /etc/codedeploy-agent/conf/ contains only one file, while on the Amazon Linux instance it contains two.
Knowing this, I created a new file on the Amazon Linux 2 instance with the same name:
touch codedeployagent.yml
changed its permissions from
-rw-r--r-- 1 root root 261 Oct 2 10:43 codedeployagent.yml
to
-rwxr-xr-x 1 root root 261 Oct 2 10:43 codedeployagent.yml
and added the same content:
:log_aws_wire: false
:log_dir: '/var/log/aws/codedeploy-agent/'
:pid_dir: '/opt/codedeploy-agent/state/.pid/'
:program_name: codedeploy-agent
:root_dir: '/opt/codedeploy-agent/deployment-root'
:verbose: false
:wait_between_runs: 1
:proxy_uri:
:max_revisions: 5
and then rebooted the machine. Still, this didn't fix the issue: when I run
sudo service codedeploy-agent status
I still get
Redirecting to /bin/systemctl status codedeploy-agent.service Unit
codedeploy-agent.service could not be found.
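A quick way to check whether the install script registered the agent and its systemd unit at all (these paths are the agent's defaults, so treat this as a sketch):
ls /opt/codedeploy-agent/bin/codedeploy-agent          # is the agent installed?
ls /usr/lib/systemd/system/codedeploy-agent.service    # was a systemd unit created?
systemctl list-unit-files | grep -i codedeploy         # does systemd know about it?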
I also made sure all updates were in place and rebooted the machine, but that didn't work either.
I can provide details of my setup for Amazon Linux 2 instances to deploy CodeDeployGitHubDemo (based on your past question).
1. CodeDeploy agent
I used the following as UserData (you may need to adjust the region if it is not us-east-1):
#!/bin/bash
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
It did not require hard-coding credentials. The above works perfectly fine on the Amazon Linux 2 instances I've used.
2. Instance role
Your instance needs a role suitable for CodeDeploy. I used an EC2 instance role with the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
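If you build the role from the CLI instead of the console, a minimal sketch looks like this; the role, profile, and file names (codedeploy-ec2-role, codedeploy-ec2-profile, trust.json, s3-read.json) are placeholders, and s3-read.json holds the policy shown above:
# trust policy so EC2 can assume the role
cat > trust.json <<'EOT'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOT
aws iam create-role --role-name codedeploy-ec2-role --assume-role-policy-document file://trust.json
aws iam put-role-policy --role-name codedeploy-ec2-role --policy-name codedeploy-s3-read --policy-document file://s3-read.json
aws iam create-instance-profile --instance-profile-name codedeploy-ec2-profile
aws iam add-role-to-instance-profile --instance-profile-name codedeploy-ec2-profile --role-name codedeploy-ec2-role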
3. Deployment group
I had three test instances in an Auto Scaling group called myasg:
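If you prefer the CLI to the console for this step, the rough equivalent is shown below; the application name and service role ARN are placeholders for whatever you created:
aws deploy create-deployment-group \
    --application-name CodeDeployGitHubDemo-App \
    --deployment-group-name myasg-group \
    --auto-scaling-groups myasg \
    --service-role-arn arn:aws:iam::ACCOUNT_ID:role/CodeDeployServiceRole \
    --deployment-config-name CodeDeployDefault.OneAtATime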
4. Deployment
I deployed from S3 without a load balancer:
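The CLI equivalent is roughly the following; the bucket and key are placeholders for wherever the revision bundle lives:
aws deploy create-deployment \
    --application-name CodeDeployGitHubDemo-App \
    --deployment-group-name myasg-group \
    --s3-location bucket=MY_BUCKET,key=CodeDeployGitHubDemo.zip,bundleType=zip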
5. Results
No issues were found and deployment was successful:
And the website was running (you need to open port 80 in the security group):
Update
For manual installation on Amazon Linux 2, you can sudo su - to become root after login and then run:
mkdir -p /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS
aws_secret_access_key: SECRET
iam_user_arn: arn:aws:iam::525221857828:user/GeneralUser
region: eu-west-2
EOT
yum install -y wget ruby
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
env AWS_REGION=eu-west-2 ./install rpm
To check its status:
systemctl status codedeploy-agent
With this, you should get something like this:
● codedeploy-agent.service - AWS CodeDeploy Host Agent
Loaded: loaded (/usr/lib/systemd/system/codedeploy-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2020-10-03 07:18:57 UTC; 3s ago
Process: 3609 ExecStart=/bin/bash -a -c [ -f /etc/profile ] && source /etc/profile; /opt/codedeploy-agent/bin/codedeploy-agent start (code=exited, status=0/SUCCESS)
Main PID: 3623 (ruby)
CGroup: /system.slice/codedeploy-agent.service
├─3623 codedeploy-agent: master 3623
└─3627 codedeploy-agent: InstanceAgent::Plugins::CodeDeployPlugin::CommandPo...
Oct 03 07:18:57 ip-172-26-8-137.eu-west-2.compute.internal systemd[1]: Starting AWS Cod...
Oct 03 07:18:57 ip-172-26-8-137.eu-west-2.compute.internal systemd[1]: Started AWS Code...
Hint: Some lines were ellipsized, use -l to show in full.
If you run
sudo service codedeploy-agent status
you'll get the following (meaning it's working as expected):
The AWS CodeDeploy agent is running as PID 3623
To start if not running:
systemctl start codedeploy-agent
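To confirm it will come back after a reboot (the install above already enables the unit, as the Loaded: line shows):
systemctl is-enabled codedeploy-agent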
Related
I installed Docker to install Keycloak on AWS EC2. However, the following error occurs when creating an instance node.
$ docker-machine create --driver amazonec2 aws-node1
Running pre-create checks...
Creating machine...
(aws-node1) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Error creating machine: Error running provisioning: error installing docker:
[ec2-user@ip-172-31-43-97 ~]$
The full installation procedure is as follows.
$ sudo yum -y install docker
$ docker -v
Docker version 20.10.17, build 100c701
$ sudo service docker start
$ sudo usermod -aG docker ec2-user
$ sudo curl -L https://github.com/docker/compose/releases/download/1.25.0-rc2/docker-compose-`uname -s`-`uname -m` \
    -o /usr/local/bin/docker-compose
$ docker-compose -v
docker-compose version 1.25.0-rc2, build 661ac20e
$ base=https://github.com/docker/machine/releases/download/v0.16.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine &&
sudo install /tmp/docker-machine /usr/local/bin/docker-machine
$ docker-machine -v
docker-machine version 0.16.0, build 702c267f
$ aws configure
AWS Access Key ID [None]: [My Access Key ID]
AWS Secret Access Key [None]: [My Secret Access Key]
Default region name [None]: ap-northeast-2
Default output format [None]:
$ docker-machine create --driver amazonec2 aws-node1
Running pre-create checks...
Creating machine...
(aws-node1) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Error creating machine: Error running provisioning: error installing docker:
I set up "aws configure" by creating an Access Key ID and Secret Access Key for the CLI. However, Docker instance node is not created no matter what.
EC2 was created with "Amazon Linux 2 AMI (HVM) - Kernel 5.10, SSD Volume Type".
I want to install Docker on my EC2 instance.
sudo yum install docker -y
I came to know that this command automatically creates a group 'docker',
which has root privileges by default, so I added my ec2-user to this group to execute commands without 'sudo':
sudo usermod -aG docker ec2-user
Now this means ec2-user has root privileges.
But if I want to start the Docker service, why should I use
sudo systemctl start docker
instead of simply
systemctl start docker
The above command gives me an error:
Failed to start docker.service: The name org.freedesktop.PolicyKit1 was not provided by any .service files
See system logs and 'systemctl status docker.service' for details.
Please help!
Because Docker runs as a system service, you must use sudo (or run the command as the root user) to manage it.
Or you can use
sudo systemctl enable docker
and after every reboot the Docker service will start automatically.
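In other words, membership in the docker group only controls who may talk to the Docker daemon's socket; managing the service itself goes through systemd and therefore still needs root. A small illustration (assuming you have logged out and back in so the group change is active):
docker ps                     # works without sudo once ec2-user is in the docker group
sudo systemctl start docker   # still needs sudo, because this talks to systemd, not the docker socket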
I am new to AWS and I am trying to deploy using AWS CodeDeploy from Github.
For that, I created my instance named CodeDeployDemo and attached the role and policy to the instance.
Policy ARN arn:aws:iam::378939197253:policy/CE2CodeDeploy9
My policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
and also attached the policy named AmazonEC2RoleforAWSCodeDeploy.
I also installed the CodeDeploy agent on my Ubuntu instance step by step as follows:
$ chmod 400 Code1.pem
$ ssh -i "Code1.pem" ubuntu@54.183.22.255
$ sudo apt-get update
$ sudo apt-get install awscli
$ sudo apt-get install ruby2.0
$ cd /home/ubuntu
$ sudo aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
$ sudo chmod +x ./install
$ sudo ./install auto
and then I created my application and deployed from GitHub to CodeDeploy using CodeDeployDefault.OneAtATime.
But at the final stage it shows the following error:
Deployment failed: Because too many individual instances failed deployment,
too few healthy instances are available for deployment,
or some instances in your deployment group are experiencing problems.
(Error code: HEALTH_CONSTRAINTS)
NOTE: Only one instance is running while my deployment runs; I stopped the other instances.
Please help me find a solution for this. Thanks in advance!
This happens because CodeDeploy checks the health of the EC2 instances by hitting them. Before deployment, you need to run the bash script below on the instances and check that it worked; the web server (apache2) must be running. Then reboot the instance.
#!/bin/bash
sudo su
apt-get update -y
apt-get install apache2 -y
apt-get install -y ruby2.0
apt-get install -y awscli
cd ~
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto
echo 'hello world' > /var/www/html/index.html
hostname >> /var/www/html/index.html
update-rc.d apache2 defaults
service apache2 start
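A quick sanity check before retrying the deployment could look like this (a sketch; it assumes port 80 is open in the security group):
sudo service apache2 status            # web server should be running
sudo service codedeploy-agent status   # should report the agent's PID
curl -s http://localhost/              # should print 'hello world' plus the hostname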
I've set up everything according to this article:
https://aws.amazon.com/tw/blogs/apn/announcing-atlassian-bitbucket-support-for-aws-codedeploy/
Here is my env:
Instance (free tier with Amazon Linux)
- Apache 2.4 installed
Security group
- only ports 22 (restricted to my IP) and 80 are open
iptables stopped
Two roles are set:
- one for linking S3 <-> Bitbucket
  (custom policy attached)
- one for the deployment group
  (AWSCodeDeployRole policy attached, with this trust relationship):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "codedeploy.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The sample I tried to deploy is
https://s3.amazonaws.com/aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip
Permission
/var/www/* is owned by ec2-user with 755 permission
Agent
service codedeploy-agent status returns:
The AWS CodeDeploy agent is running as PID 7200
Clues:
A zip file is uploaded to my S3 bucket for every deploy.
Error code: HEALTH_CONSTRAINTS
Does anyone have an idea what causes the deployment to fail?
Update 1: After I re-launched the instance with an IAM instance profile, the application can be deployed. But it still fails; when I click View Events, there is a log as follows:
Error Code: ScriptFailed
Script Name: scripts/install_dependencies
Message: Script at specified location: scripts/install_dependencies run as user root failed with exit code 1
Log Tail: LifecycleEvent - BeforeInstall
Script - scripts/install_dependencies
[stdout]Loaded plugins: priorities, update-motd, upgrade-helper
[stdout]Resolving Dependencies
[stdout]--> Running transaction check
[stdout]---> Package httpd.x86_64 0:2.2.31-1.8.amzn1 will be installed
[stdout]--> Processing Dependency: httpd-tools = 2.2.31-1.8.amzn1 for package: httpd-2.2.31-1.8.amzn1.x86_64
[stdout]--> Processing Dependency: apr-util-ldap for package: httpd-2.2.31-1.8.amzn1.x86_64
[stdout]--> Running transaction check
[stdout]---> Package apr-util-ldap.x86_64 0:1.4.1-4.17.amzn1 will be installed
[stdout]---> Package httpd-tools.x86_64 0:2.2.31-1.8.amzn1 will be installed
[stdout]--> Processing Conflict: httpd24-2.4.23-1.66.amzn1.x86_64 conflicts httpd < 2.4.23
[stdout]--> Processing Conflict: httpd24-tools-2.4.23-1.66.amzn1.x86_64 conflicts httpd-tools < 2.4.23
[stdout]--> Finished Dependency Resolution
[stderr]Error: httpd24-tools conflicts with httpd-tools-2.2.31-1.8.amzn1.x86_64
[stderr]Error: httpd24 conflicts with httpd-2.2.31-1.8.amzn1.x86_64
[stdout] You could try using --skip-broken to work around the problem
[stdout] You could try running: rpm -Va --nofiles --nodigest
Does anyone know what the problem is?
The error code HEALTH_CONSTRAINTS means more instances failed than the deployment configuration allows.
For more information about why the deployment failed, open the deployment console at https://region.console.aws.amazon.com/codedeploy/home?region=region#/deployments and click the failed deployment ID. That takes you to the deployment details page, which lists every instance included in the deployment along with its lifecycle events. Click View Events, and if there is a View Logs link you can see why the deployment failed on that instance.
If the console doesn't have enough information for what you need, the log on the instance can be found at /var/log/aws/codedeploy-agent/codedeploy-agent.log (for example, less /var/log/aws/codedeploy-agent/codedeploy-agent.log). It contains the logs for the most recent deployments.
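For example, on the instance itself (the second path is where the agent normally keeps per-deployment script output, under the root_dir from its configuration):
# the agent's own log
less /var/log/aws/codedeploy-agent/codedeploy-agent.log
# output of the lifecycle scripts for recent deployments
less /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log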
This happens because CodeDeploy checks the health of the EC2 instances by hitting them. Before deployment, you need to run the bash script below on the instances and check that it worked; the httpd service must be started. Then reboot the instance.
#!/bin/bash
sudo su
yum update -y
yum install httpd -y
yum install -y ruby
yum install -y aws-cli
cd ~
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto
echo 'hello world' > /var/www/html/index.html
hostname >> /var/www/html/index.html
chkconfig httpd on
service httpd start
It depends on your deployment configuration, but basically one or more instance deployments failed.
HEALTH_CONSTRAINTS: The deployment failed on too many instances to be
successfully deployed within the instance health constraints specified
http://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ErrorInformation.html
Check your deployment configuration settings; the overall failure or success of the deployment is based on them. Try CodeDeployDefault.AllAtOnce and dial it in as needed.
Also, double-check the AWS CodeDeploy instance health settings, especially minimum-healthy-hosts.
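If you want to inspect or tune those thresholds from the CLI, something like the following works; the custom configuration name is just an example:
# look at a built-in configuration
aws deploy get-deployment-config --deployment-config-name CodeDeployDefault.AllAtOnce
# or define your own minimum-healthy-hosts threshold
aws deploy create-deployment-config \
    --deployment-config-name MyOneBoxConfig \
    --minimum-healthy-hosts type=HOST_COUNT,value=0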
It seems there is a conflict between one of the dependencies your scripts/install_dependencies script tries to install (httpd 2.2) and the httpd24 packages already present on the instance:
[stderr]Error: httpd24-tools conflicts with httpd-tools-2.2.31-1.8.amzn1.x86_64
[stderr]Error: httpd24 conflicts with httpd-2.2.31-1.8.amzn1.x86_64
[stdout] You could try using --skip-broken to work around the problem
So try to solve the dependency installation problem: install the dependencies manually on your EC2 instance, work out a fix for this conflict, and once you have it, bring the fix back into the scripts referenced by your appspec.yml so CodeDeploy installs the dependencies the same way.
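For instance, if scripts/install_dependencies currently runs yum install -y httpd, a sketch of a version that avoids the 2.2/2.4 conflict on this AMI (package names taken from the error output above) would be:
#!/bin/bash
# scripts/install_dependencies (sketch)
# The AMI already carries the Apache 2.4 packages (httpd24), which conflict with
# the older 'httpd' 2.2 package, so install httpd24 instead.
yum install -y httpd24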
I am stuck accessing an nfs4 share inside a Docker container running on Elastic Beanstalk.
Netshare is up and running on the EC2 instance running the Docker container. Mounting the NFS share manually on the instance works; I can access the share on the EC2 instance without problems.
However, when I run a container and try to mount an nfs4 volume, the files do not appear inside the container.
Here is what I do. First, start the netshare daemon on the Docker host:
sudo ./docker-volume-netshare nfs
INFO[0000] == docker-volume-netshare :: Version: 0.18 - Built: 2016-05-27T20:14:07-07:00 ==
INFO[0000] Starting NFS Version 4 :: options: ''
Then, on the Docker host, start the docker container. Use -v to create a volume mounting the nfs4 share:
sudo docker run --volume-driver=nfs -v ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates:/home/ec2-user/xxx -ti aws_beanstalk/current-app /bin/bash
root@0a0c3de8a97e:/usr/src/app#
That worked, according to the netshare daemon:
INFO[0353] Mounting NFS volume ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates on /var/lib/docker-volumes/netshare/nfs/ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates
So I try listing the contents of /home/ec2-user/xxx inside the newly launched container, but it's empty:
root@0a0c3de8a97e:/usr/src/app# ls /home/ec2-user/xxx/
root@0a0c3de8a97e:/usr/src/app#
Strangely enough, the nfs volume has been mounted correctly on the host:
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo ls -lh /var/lib/docker-volumes/netshare/nfs/ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates | head -3
total 924K
drwxr-xr-x 5 ec2-user ec2-user 4,0K 29. Dez 14:12 file1
drwxr-xr-x 4 ec2-user ec2-user 4,0K 9. Mai 17:20 file2
Could this be a permission problem? Both the nfs server and client are using the ec2-user user/group. The docker container is running as root.
What am I missing?
UPDATE
If I start the container in --privileged mode, mounting the NFS share directly inside the container becomes possible:
sudo docker run --privileged -it aws_beanstalk/current-app /bin/bash
mount -t nfs4 ec2-xxxx-xxxx-xxxx-xxxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates /mnt/
ls -lh /mnt | head -3
total 924K
drwxr-xr-x 5 500 500 4.0K Dec 29 14:12 file1
drwxr-xr-x 4 500 500 4.0K May 9 17:20 file2
Unfortunately, this does not solve the problem, because Elastic Beanstalk does not allow privileged containers (unlike ECS).
UPDATE 2
Here's another workaround (see the script sketch below):
- mount the nfs share on the host into /target
- restart docker on the host
- run the container: docker run -it -v /target:/mnt image /bin/bash
/mnt is now populated as expected.
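Wrapped into a single script, that workaround looks roughly like this; the host path /target and the image name are placeholders, and the NFS export is the one from the question:
#!/bin/bash
# mount the NFS share on the host
sudo mkdir -p /target
sudo mount -t nfs4 ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates /target
# restart docker so the daemon sees the mount
sudo service docker restart
# bind-mount the already-mounted directory into the container
sudo docker run -it -v /target:/mnt aws_beanstalk/current-app /bin/bash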
#sebastian's "UPDATE 2" got me on the right track (thanks #sebastian).
But for others who may reach this question via Google like I did, here's exactly how I was able to automatically mount an EFS (NFSv4) file system on Elastic Beanstalk and make it available to containers.
Add this .config file:
# .ebextensions/01-efs-mount.config
commands:
  01umount:
    command: umount /mnt/efs
    ignoreErrors: true
  02mkdir:
    command: mkdir /mnt/efs
    ignoreErrors: true
  03mount:
    command: mount -t nfs4 -o vers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).EFS_FILE_SYSTEM_ID.efs.AWS_REGION.amazonaws.com:/ /mnt/efs
  04restart-docker:
    command: service docker stop && service docker start
  05restart-ecs:
    command: docker start ecs-agent
Then eb deploy. After the deploy finishes, SSH to your EB EC2 instance and verify that it worked:
ssh ec2-user@YOUR_INSTANCE_IP
ls -la /mnt/efs
You should see the files in your EFS filesystem. However, you still need to verify that the mount is readable and writable within containers.
sudo docker run -v /mnt/efs:/nfs debian:jessie ls -la /nfs
You should see the same file list.
sudo docker run -v /mnt/efs:/nfs debian:jessie touch /nfs/hello
sudo docker run -v /mnt/efs:/nfs debian:jessie ls -la /nfs
You should see the file list plus the new hello file.
ls -la /mnt/efs
You should see the hello file outside of the container as well.
Finally, here's the equivalent of -v /mnt/efs:/nfs for your Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "image": "AWS_ID.dkr.ecr.AWS_REGION.amazonaws.com/myimage:latest",
      "memory": 128,
      "mountPoints": [
        {
          "containerPath": "/nfs",
          "sourceVolume": "efs"
        }
      ],
      "name": "myimage"
    }
  ],
  "volumes": [
    {
      "host": {
        "sourcePath": "/mnt/efs"
      },
      "name": "efs"
    }
  ]
}