How do I get my EC2 Instance to connect to ECS Cluster?

I have an ECS cluster defined in AWS and an Auto Scaling Group that I use to add/remove instances to handle tasks as necessary. The ASG is set up so that it creates the EC2 instances at the appropriate time, but they won't connect to the ECS cluster unless I manually go in and disable/enable the ECS service.
I am using the Amazon Linux 2 AMI on the EC2 machines, and everything is in the same region/account, etc.
I have included my user data below.
#!/bin/bash
yum update -y
amazon-linux-extras disable docker
amazon-linux-extras install -y ecs
echo "ECS_CLUSTER={CLUSTERNAME}" >> /etc/ecs/ecs.config
systemctl enable --now ecs
As mentioned, this installs the ECS service and sets the config file properly, but the enable doesn't actually connect the machine to the cluster. Running the same disable/enable commands on the machine once it is up connects without problem. What am I missing?

First, the correct syntax is:
#!/usr/bin/env bash
echo "ECS_CLUSTER=CLUSTER_NAME" >> /etc/ecs/ecs.config
Once you update the config, it is better to restart the ECS agent:
#!/usr/bin/env bash
echo "ECS_CLUSTER=CLUSTER_NAME" >> /etc/ecs/ecs.config
# Update the ECS agent package; helpful when using a custom AMI
sudo yum update -y ecs-init
/usr/bin/docker pull amazon/amazon-ecs-agent:latest
# Restart Docker and the ECS agent
# (on Amazon Linux 2 the agent runs under systemd)
sudo service docker restart
sudo systemctl restart ecs
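To confirm the agent actually registered with the cluster, you can query the ECS agent's introspection endpoint from the instance:
curl -s http://localhost:51678/v1/metadata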

I ended up solving this using the old adage: turn it off and on again.
I added shutdown -r 0 to the bottom of the user data script to restart the machine after it was "configured", and it connected right away.
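For reference, a minimal sketch of the complete user data with the reboot workaround applied, keeping the {CLUSTERNAME} placeholder from the question:
#!/bin/bash
yum update -y
amazon-linux-extras disable docker
amazon-linux-extras install -y ecs
echo "ECS_CLUSTER={CLUSTERNAME}" >> /etc/ecs/ecs.config
systemctl enable --now ecs
# Reboot once configured so the agent registers with the cluster
shutdown -r 0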

Related

How can I install aws cli, from WITHIN the ECS task?

DESCRIPTION:
I'm using a Docker container to run the logstash application (it is part of the Elastic family).
The Docker image name is "docker.elastic.co/logstash/logstash:7.10.2".
This logstash application needs to write to S3, so it needs the AWS CLI installed; if aws is not installed, it crashes.
# STEP 1 #
To avoid the crash, back when I used this application only as a plain Docker container, I ran it in a way that delayed the logstash start until after the container had started.
I did this by adding a "sleep" command to an external docker-entrypoint file, before it starts logstash.
This is how it looks in the docker-entrypoint file:
sleep 120
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
  exec logstash "$@"
else
  exec "$@"
fi
# EOF
# STEP 2 #
Run the container with the "--entrypoint" flag so it uses my entrypoint file:
docker run \
-d \
--name my_logstash \
-v /home/centos/DevOps/psifas_logstash_docker-entrypoint:/usr/local/bin/psifas_logstash_docker-entrypoint \
-v /home/centos/DevOps/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /home/centos/DevOps/logstash.yml:/usr/share/logstash/config/logstash.yml \
--entrypoint /usr/local/bin/psifas_logstash_docker-entrypoint \
docker.elastic.co/logstash/logstash:7.10.2
# STEP 3 #
Install and configure the AWS CLI from the server hosting the container:
docker exec -it -u root <DOCKER_CONTAINER_ID> yum install awscli -y
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_access_key_id <MY_aws_access_key_id>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_secret_access_key <MY_aws_secret_access_key>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set region <MY_region>
This worked for me.
Now I want to "translate" this flow into an AWS ECS task.
In ECS I will use parameters instead of running the above 3 "aws configure" commands.
MY QUESTION
How can I do my 3rd step, installing the AWS CLI, from WITHIN the ECS task? (Meaning: not running it on the EC2 server hosting the ECS cluster.)
When I was working on the docker I also thought of these options to use the aws cli:
find an official elastic docker image containing both logstash and aws cli. <-- I did not find one.
create such an image by myself and use. <-- I prefer not , because I want to avoid the maintenance of creating new custom images when needed (e.g when new version of logstash image is available).
Eventually I choose the 3 steps above, but I'm open to suggestion.
Also, my tests showed that running 2 containers within the same ECS task:
logstash
awscli (image "amazon/aws-cli")
with the logstash container using the aws cli container, is not working.
THANKS A LOT IN ADVANCE :-)
Your option #2, creating the image yourself, is really the best way to do this. Anything else is going to be a "hack". Also, you shouldn't be running aws configure for an image running in ECS; you should be assigning an IAM role to the task, and the AWS CLI will pick that up and use it.
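For illustration, a minimal sketch of registering a task definition with a task role attached; the role name, account ID, and memory value here are placeholders, not anything from the question:
aws ecs register-task-definition \
  --family logstash \
  --task-role-arn arn:aws:iam::123456789012:role/ecsLogstashTaskRole \
  --container-definitions '[{
    "name": "logstash",
    "image": "docker.elastic.co/logstash/logstash:7.10.2",
    "memory": 1024,
    "essential": true
  }]'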
Mark B, your answer helped me to solve this. Thanks!
Writing the solution here in case it helps somebody else.
There is no need to install the AWS CLI in the logstash Docker container running inside the ECS task.
Inside the logstash container (from image "docker.elastic.co/logstash/logstash:7.10.2") there is an AWS SDK that can connect to S3.
The only thing required is to give the ECS task execution role access to S3.
(I attached the AmazonS3FullAccess policy.)
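A sketch of that policy attachment, assuming the default role name ecsTaskExecutionRole:
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess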

CloudWatch logs with Amazon Linux 2

I am upgrading my Elastic Beanstalk environment to use Amazon Linux 2.
In my old environment, I could monitor my Spring Boot application logs by watching the log group using cw.exe /aws/elasticbeanstalk/myapp/var/log/eb-docker/containers/eb-current-app/stdouterr.log
Now, however, no logs are displayed for the new application, and furthermore I notice that the stdouterr.log stream in /eb-current-app/ now seems to have the instance ID prepended.
What do I need to do to restore the previous behavior so I can monitor my logs?
The CloudWatch Logs agent is now available as a yum package in Amazon Linux 2:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
sudo yum install -y awslogs
You might have to edit this file /etc/awslogs/awscli.conf too to change the region.
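For example, a sketch assuming the stock file, which points at us-east-1 (eu-west-1 here is illustrative):
sudo sed -i 's/region = us-east-1/region = eu-west-1/' /etc/awslogs/awscli.conf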
Finally, you will need to start and enable the service:
sudo systemctl start awslogsd
sudo systemctl enable awslogsd.service
And set it all up as a config in the file .ebextensions/cloudwatch.config.
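To ship the log file from the question, a sketch of the stanza to append to /etc/awslogs/awslogs.conf; the log group name mirrors the old environment's and is an assumption:
sudo tee -a /etc/awslogs/awslogs.conf > /dev/null <<'EOF'
[/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
file = /var/log/eb-docker/containers/eb-current-app/stdouterr.log
log_group_name = /aws/elasticbeanstalk/myapp/var/log/eb-docker/containers/eb-current-app/stdouterr.log
log_stream_name = {instance_id}
EOF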

How to run codedeploy agent installation script in AWS ECS?

I have an AWS ECS cluster defined with a service that uses the Replica service type. It creates an EC2 instance with a Docker container. I can access it through the browser and all that...
The issue is that I have to connect via SSH to the EC2 instance and run:
sudo yum update -y
sudo yum install -y ruby
sudo yum install -y wget
cd /home/ec2-user
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
This installs the CodeDeploy agent, so I can connect GitHub to the instance and CI/CD code.
I would like to set this up automatically on every server that the ECS definition creates. For example, if I stop the EC2 instance, the cluster raises a new EC2 instance, which doesn't have this agent...
I saw that I should configure the Amazon ECS container instance with user data, but I am not able to find this option, and I am not quite sure whether it runs on the EC2 instance or in the Docker container itself.
Based on the comments, the solution was to use a Launch Template or a Launch Configuration.
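For example, the launch template's user data could carry the install commands from the question (user data runs as root, so sudo is unnecessary):
#!/bin/bash
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
./install auto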

How to create a health check for XRAY Daemon Task

I'm trying to implement X-Ray for our AWS ECS Spring Boot application. To do so I'm creating a new task with a separate Dockerfile just for the X-Ray daemon, as suggested by the AWS documentation and in the answer to another question I asked about the daemon setup.
However, when I try to deploy this to AWS, a health check endpoint is required for the load balancer to be able to determine that the service has been deployed successfully.
There is no health check functionality in the daemon itself. There's a thread on the AWS forums as well as an issue on the github repo related to this.
My initial idea is to create an application (probably spring-boot) that is able to determine if the daemon is running and expose a URL that the elb can hit to do a health check on the daemon. I can then deploy it along with the daemon.
Is there a better way to go about doing this? I worry about the need to create a separate application just for a health check. There may also be some hackiness required to run two entrypoint commands in the Dockerfile.
Any ideas on a better way to accomplish this?
You don't need a load balancer at all for the X-Ray daemon container, since traffic comes only from the cluster's EC2 containers. A health check for the X-Ray container can be done with the ECS container health check itself.
Based on the forum answer, you can configure a netstat-based container health check, which ensures that if the UDP port is not opened by the daemon container, the ECS agent will restart the container.
Below is the health check command you provide in the ECS task definition:
CMD-SHELL, netstat -aun | grep 2000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;
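For reference, a sketch of how this looks in a container definition registered via the AWS CLI; the family, container name, image, and memory values here are illustrative:
aws ecs register-task-definition \
  --family xray-daemon \
  --container-definitions '[{
    "name": "xray-daemon",
    "image": "amazon/aws-xray-daemon",
    "memory": 256,
    "essential": true,
    "portMappings": [{"containerPort": 2000, "protocol": "udp"}],
    "healthCheck": {
      "command": ["CMD-SHELL", "netstat -aun | grep 2000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;"],
      "interval": 30,
      "retries": 3
    }
  }]'
The image must contain netstat for this check to pass (see the note below).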
Note:
If you are building the X-Ray Docker image yourself, please make sure you include the netstat utility in the Dockerfile, otherwise the health command will fail.
For example, if you are using the Dockerfile given in this documentation, you need to add the net-tools package to your X-Ray container image.
Following is my updated Dockerfile, which adds net-tools to the image:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --force-yes --no-install-recommends apt-transport-https curl ca-certificates wget net-tools && apt-get clean && apt-get autoremove && rm -rf /var/lib/apt/lists/*
RUN wget https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.deb
RUN dpkg -i aws-xray-daemon-3.x.deb
EXPOSE 2000/udp
CMD ["/usr/bin/xray", "--bind=0.0.0.0:2000"]
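A quick local smoke test of the health command against this image (image and container names are illustrative):
docker build -t xray-daemon:net-tools .
docker run -d --name xray-test xray-daemon:net-tools
docker exec xray-test sh -c 'netstat -aun | grep 2000'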

Get Advanced Details provided while creating AWS EC2 instance

I was creating a new AWS EC2 instance. In Step 1 I selected an Amazon Linux AMI, and in Step 2, after some basic details, I provided the following in Advanced Details:
#!/bin/bash
yum install httpd -y
yum update -y
service httpd start
chkconfig httpd on
echo "<html><h1>Hello Test Page!</h1></html>" > /var/www/html/index.html
Somehow this script did not execute after the EC2 instance was ready. I have the following questions:
Can we get a log of what exactly happened when executing this script?
Also, from the console, is it possible to see what values were specified in Advanced Details while setting up the EC2 instance?
Log in to your EC2 instance and check /var/log/cloud-init-output.log for any errors.
To check the user data that was specified, you can view it in the console by selecting the instance and choosing Actions > Instance settings > Edit user data, or verify it via http://169.254.169.254/latest/user-data/ after logging into the instance.
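For example, from inside the instance (the second form is needed when IMDSv2 is enforced):
curl http://169.254.169.254/latest/user-data/
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/user-data/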