Start Amazon CloudWatch Agent at instance startup - amazon-web-services

I have an Amazon Linux AMI Instance.
I installed the CloudWatch Agent and started the service using this command:
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
I can check the status of the CloudWatch Agent using this command and see that it's running:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
However, I cannot find the service using:
service amazon-cloudwatch-agent status
I want to set things up so that the CloudWatch Agent process starts every time the instance starts, something along the lines of systemctl enable amazon-cloudwatch-agent, but I don't have systemctl installed on my machine.
Is there any way I can set this up so that the CloudWatch Agent starts every time the instance starts?
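One approach that works without systemctl is a sketch like the following, assuming Amazon Linux 1 where /etc/rc.local exists and is executed at boot: append the same fetch-config command to /etc/rc.local so the agent is started on every boot. This is a boot-configuration fragment, not something to run blindly.

```shell
# Sketch: append the agent start command to /etc/rc.local so it runs at each boot.
# Assumes /etc/rc.local exists and is honored at boot (true on Amazon Linux 1).
cat <<'EOF' | sudo tee -a /etc/rc.local
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 \
    -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
EOF
sudo chmod +x /etc/rc.local
```

After the next reboot, the status command from above should report the agent as running.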


How do I get my EC2 Instance to connect to ECS Cluster?

I have an ECS cluster defined in AWS and an Auto Scaling Group that I use to add/remove instances to handle tasks as necessary. I have the ASG set up so that it creates the EC2 instances at the appropriate time, but they won't connect to the ECS cluster unless I manually go in and disable/enable the ECS service.
I am using the Amazon Linux 2 AMI on the EC2 machines and everything is in the same region/account, etc.
I have included my user data below.
#!/bin/bash
yum update -y
amazon-linux-extras disable docker
amazon-linux-extras install -y ecs
echo "ECS_CLUSTER={CLUSTERNAME}" >> /etc/ecs/ecs.config
systemctl enable --now ecs
As mentioned, this installs the ECS service and sets the config file properly, but the enable doesn't actually connect the machine. Running the same disable/enable commands on the machine once it is running connects without a problem. What am I missing?
First thing, the correct syntax is
#!/usr/bin/env bash
echo "ECS_CLUSTER=CLUSTER_NAME" >> /etc/ecs/ecs.config
Once you update the config, it is better to restart the ECS agent:
#!/usr/bin/env bash
echo "ECS_CLUSTER=CLUSTER_NAME" >> /etc/ecs/ecs.config
sudo yum update -y ecs-init
# This updates the ECS agent; helpful when using a custom AMI
/usr/bin/docker pull amazon/amazon-ecs-agent:latest
# Restart Docker and the ECS agent (Amazon Linux 2 uses systemd)
sudo systemctl restart docker
sudo systemctl restart ecs
I ended up solving this using the old adage: turn it off and on again.
I added shutdown -r 0 to the bottom of the user data script to restart the machine after it was "configured", and it connected right away.
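Putting it together, the user data from the question with the reboot appended would look like this sketch (the {CLUSTERNAME} placeholder is kept as-is):

```shell
#!/bin/bash
yum update -y
amazon-linux-extras disable docker
amazon-linux-extras install -y ecs
echo "ECS_CLUSTER={CLUSTERNAME}" >> /etc/ecs/ecs.config
systemctl enable --now ecs
# Reboot once after configuration; the instance joins the cluster on the way back up
shutdown -r 0
```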

CloudWatch logs with Amazon Linux 2

I am upgrading my Elastic Beanstalk environment to use Amazon Linux 2.
In my old environment, I could monitor my Spring Boot application logs by watching the log group using cw.exe /aws/elasticbeanstalk/myapp/var/log/eb-docker/containers/eb-current-app/stdouterr.log
Now, however, no logs are displayed for the new application, and furthermore I notice that the stdouterr.log in /eb-current-app/ seems to have the instance ID prepended to the log name.
What do I need to do to restore the previous behavior so I can monitor my logs?
CloudWatch Logs is now available as a yum package in Amazon Linux 2:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
sudo yum install -y awslogs
You might also have to edit /etc/awslogs/awscli.conf to change the region.
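For example, the region line can be changed non-interactively with sed. This sketch works against a throwaway copy; on the instance the real file is /etc/awslogs/awscli.conf, and us-west-2 is just an assumed target region:

```shell
# Create a throwaway copy mimicking /etc/awslogs/awscli.conf
conf=$(mktemp)
printf '[plugins]\ncwlogs = cwlogs\n[default]\nregion = us-east-1\n' > "$conf"

# Point the agent at the desired region (us-west-2 is an assumption for illustration)
sed -i 's/^region = .*/region = us-west-2/' "$conf"

cat "$conf"
```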
Finally, you will need to start and enable the service:
sudo systemctl start awslogsd
sudo systemctl enable awslogsd.service
And set it all up as a config in .ebextensions/cloudwatch.config.
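A minimal sketch of such a config, assuming the awslogs package and awslogsd service names from above (which log files to stream would still need to be configured for your app):

```yaml
packages:
  yum:
    awslogs: []

services:
  sysvinit:
    awslogsd:
      enabled: true
      ensureRunning: true
```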

How to run codedeploy agent installation script in AWS ECS?

I have an AWS ECS cluster defined with a service that uses the Replica service type. It creates an EC2 instance with a Docker container. I can access it through the browser and all this stuff...
The issue is that I have to connect through ssh to the EC2 instance and run:
sudo yum update -y
sudo yum install -y ruby
sudo yum install -y wget
cd /home/ec2-user
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
This installs the CodeDeploy agent, so I can connect GitHub to the instance and CI/CD the code.
I would like to set this up automatically on every server that the ECS definition creates. For example, if I stop the EC2 instance, the cluster raises a new EC2 instance, which doesn't have this agent...
I saw that I should configure my Amazon ECS container instance with user data, but first of all I am not able to find this option, and I am not quite sure if it runs on the EC2 instance or in the Docker container itself.
Based on the comments, the solution was to use a Launch Template or a Launch Configuration.
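In practice that means putting the same commands from the question into the launch template's user data, so every instance the ASG brings up installs the CodeDeploy agent on first boot. A sketch (user data runs as root, so sudo is unnecessary; the eu-west-1 URL is the one from the question and may need to match your region):

```shell
#!/bin/bash
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-eu-west-1.s3.eu-west-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
```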

How to automatically start, execute and stop EC2?

I want to test my Python library in GPU machine once a day.
I decided to use AWS EC2 for testing.
However, the fee for a GPU machine is very high, so I want to stop the instance after the tests end.
Thus, I want to do the following once a day, automatically:
Start the EC2 instance (which is set up manually)
Execute command (test -> push logs to S3)
Stop EC2 (not remove)
How to do this?
It is very simple...
Run script on startup
To run a script automatically when the instance starts (every time it starts, not just the first time), put your script in this directory:
/var/lib/cloud/scripts/per-boot/
Stop instance when test has finished
Simply issue a shutdown command to the operating system at the end of your script:
sudo shutdown now -h
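Combining the two, a per-boot script sketch could look like the following (the test command, log path, and bucket name are placeholders/assumptions; the script runs as root at boot, so no sudo is needed):

```shell
#!/bin/bash
# Save as /var/lib/cloud/scripts/per-boot/run-tests.sh and make it executable.

# Run the test suite (placeholder command) and capture its output
python -m pytest /home/ec2-user/mylib/tests > /tmp/test.log 2>&1

# Push the log to S3 (bucket name is an assumption)
aws s3 cp /tmp/test.log "s3://my-test-logs/$(date +%F).log"

# Power off so the GPU instance stops accruing charges
shutdown now -h
```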
You can push script logs to custom CloudWatch namespaces. For example, when the process ends, publish a state to CloudWatch. In CloudWatch, create alarms based on the state of the process, so that a completed state triggers an AWS Lambda function that stops the instance after your job completes.
Also, if you want to start and stop at specific times, you can use the EC2 Instance Scheduler to start/stop instances. It works like a cron job at specific intervals.
You can use the AWS CLI.
To start an instance you would do the following:
aws ec2 start-instances --instance-ids i-1234567890abcdef0
and to stop the instance you would do the following:
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
To execute commands inside the machine, you will need to SSH into it and run the commands that you need. Then you can use the AWS CLI to upload files to S3:
aws s3 cp test.txt s3://mybucket/test2.txt
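Those pieces can be chained into one wrapper script. A sketch that starts the instance, waits until it is running, runs the test over SSH, and stops it again (the instance ID, key path, hostname, and remote command are all placeholders):

```shell
#!/bin/bash
INSTANCE_ID=i-1234567890abcdef0   # placeholder

aws ec2 start-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

# Run the test remotely; sshd may need a few extra seconds after the
# instance reaches the running state
ssh -i ~/.ssh/mykey.pem ec2-user@my-instance-host './run_tests.sh'

aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
```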
I suggest reading the AWS CLI documentation; you will find most, if not all, of what you need to automate AWS commands there.
I created a shell script to start an EC2 instance (if it is not already running), connect via SSH and, if you want, run a command.
https://gist.github.com/jotaelesalinas/396812f821785f76e5e36cf928777a12
You can use it in three different ways:
./ec2-start-and-ssh.sh -i <instance id> -s
will show status information about your instance: running state and private and public IP addresses.
./ec2-start-and-ssh.sh -i <instance id>
will connect and leave you inside the default shell.
./ec2-start-and-ssh.sh -i <instance id> <command>
will run whatever command you specify, e.g.:
./ec2-start-and-ssh.sh -i <instance id> ./run.sh
./ec2-start-and-ssh.sh -i <instance id> sudo poweroff
I use the last two commands to run periodic jobs minimizing billing costs.
I hope this helps!

How to create a health check for XRAY Daemon Task

I'm trying to implement X-Ray for our AWS ECS Spring Boot application. To do so, I'm creating a new task with a separate Dockerfile just for the X-Ray daemon, as suggested by the AWS documentation and when I asked another question on the daemon setup.
However, when I try to deploy this to AWS, a health check endpoint is required for the load balancer to be able to determine that the service has been deployed successfully.
There is no health check functionality in the daemon itself. There's a thread on the AWS forums as well as an issue on the GitHub repo related to this.
My initial idea is to create an application (probably spring-boot) that is able to determine if the daemon is running and expose a URL that the elb can hit to do a health check on the daemon. I can then deploy it along with the daemon.
Is there a better way to go about doing this? I worry about the need of creating a separate application just for creating a health check. There may be some hackiness required in order to run two entrypoint commands in the docker file as well.
Any ideas on a better way to accomplish this?
You don't need to use a Load Balancer at all for the X-Ray daemon container, since traffic comes only from the cluster's EC2 container instances. The health check for the X-Ray container can be done using the ECS health check itself.
Based on the forum answer, you can configure netstat in the container health check, which ensures that if the UDP port is not opened by the daemon container, the ECS agent will restart the container.
Below is the health check command you provide in the ECS task definition:
CMD-SHELL, netstat -aun | grep 2000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;
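In a task definition JSON, that health check would be expressed roughly as follows (the interval, timeout, and retries values are just example choices):

```json
"healthCheck": {
    "command": [
        "CMD-SHELL",
        "netstat -aun | grep 2000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;"
    ],
    "interval": 30,
    "timeout": 5,
    "retries": 3
}
```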
Note:
If you are building the X-Ray Docker image, please make sure you include the netstat utility in the Dockerfile; otherwise the health command will fail.
For example, if you are using the Dockerfile given in this documentation, you need to add the net-tools package to your X-Ray container image.
The following is my updated Dockerfile, which adds net-tools to the image:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --force-yes --no-install-recommends apt-transport-https curl ca-certificates wget net-tools && apt-get clean && apt-get autoremove && rm -rf /var/lib/apt/lists/*
RUN wget https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.deb
RUN dpkg -i aws-xray-daemon-3.x.deb
EXPOSE 2000/udp
CMD ["/usr/bin/xray", "--bind=0.0.0.0:2000"]