Running a script automatically on EC2 launch during cluster creation - amazon-web-services

I am creating an ECS cluster (EC2 launch type) using the ecs-cli. I want to run a script that modifies the vm.max_map_count setting in /etc/sysctl.conf once the EC2 instance is created. At the moment I am doing it manually by SSH'ing into the instance and running the script as sudo.
Is it possible to run an automation script on the EC2 instance created as part of cluster creation? Any reference/documentation would be really helpful.
Thanks

Since you've tagged your question with amazon-cloudformation, I assume you are defining your ECS container instances using CloudFormation.
If so, you can use UserData in your AWS::EC2::Instance to execute commands when the instances are launched:
Running commands on your Linux instance at launch
You are probably already using it to specify the cluster name for the ECS agent running on your instances, so you probably already have something similar in your UserData:
echo ECS_CLUSTER=${ClusterName} >> /etc/ecs/ecs.config
echo ECS_BACKEND_HOST= >> /etc/ecs/ecs.config
You can extend the UserData with extra commands that would modify /etc/sysctl.conf.
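For example, a minimal sketch of an extended UserData script (the 262144 value is only illustrative; use whatever your workload requires):
#!/bin/bash
# Join the ECS cluster
echo ECS_CLUSTER=${ClusterName} >> /etc/ecs/ecs.config
# Persist the setting across reboots, then apply it to the running kernel now
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -w vm.max_map_count=262144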
There are some other possibilities, such as using SSM State Manager to perform actions when your instances launch.
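As a rough sketch of the State Manager route (the tag key and value are hypothetical placeholders), an association can run a shell command on matching instances:
# Apply AWS-RunShellScript to instances tagged Cluster=my-cluster
aws ssm create-association \
    --name "AWS-RunShellScript" \
    --targets "Key=tag:Cluster,Values=my-cluster" \
    --parameters '{"commands":["sysctl -w vm.max_map_count=262144"]}'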

Related

Terminate AWS EC2 instance when SSM Run Command status changes

I would like to (1) launch an AWS EC2 instance, (2) run a shell script (that sends output to an S3 bucket) and (3) terminate the instance automatically when the script terminates, all remotely without logging into the instance. I have managed to get parts (1) and (2) working using the AWS CLI commands aws ec2 run-instances and aws ssm send-command. I am struggling with part (3) - getting the instance to terminate automatically when the script completes.
I have seen in the AWS docs that you can use CloudWatch to monitor the SSM Run Command status, and I thought that this might be a solution: when the status changes, terminate the instance. Is this a feasible option? If so, how do you implement it using the AWS CLI?
Within the SSM script, you can issue a command to the operating system to shut down the computer. If you launched the instance with a shutdown behavior of Terminate, this will terminate the instance.
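For example (the AMI ID is a placeholder):
# Launch so an OS-initiated shutdown terminates the instance instead of stopping it
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t3.micro \
    --instance-initiated-shutdown-behavior terminate
# ...and end the SSM script with:
shutdown -h now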
Alternatively, the script can retrieve the Instance ID of the instance it is running on, and issue the aws ec2 terminate-instances command, specifying its own Instance ID.
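A minimal sketch of that approach (it assumes the instance's IAM role allows ec2:TerminateInstances; on IMDSv2-only instances you would need to fetch a session token first):
# Discover this instance's own ID from instance metadata, then terminate it
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID" --region us-east-1   # region is a placeholder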
See: Self-Terminating AWS EC2 Instance?

No ECS agent docker container in ECS optimised instance

I launched an ECS-optimized instance in the ap-south-1 region of AWS from AMI ID ami-0a8bf4e187339e2c1 using the link https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html, but there is no ECS agent present. Even the /var/log/ecs directory is not present, so I cannot check logs. I have the correct cluster name configured in /etc/ecs/ecs.config.
If you look at the instances in the EC2 console in AWS, can you see the AMI ID? Is it the AMI ID you expect?
Just to have a point of comparison, I just SSH'd to an ECS-optimized EC2 instance; I can see ecs-agent in a docker ps listing and I can see /var/log/ecs, so my first instinct is that this EC2 instance didn't end up using the AMI you expected it to.
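If you can get onto the instance, a few quick checks will tell you whether the agent is there at all:
docker ps --filter name=ecs-agent            # the agent container should be listed
ls /var/log/ecs                              # agent logs on the ECS-optimized AMI
curl -s http://localhost:51678/v1/metadata   # agent introspection endpoint; reports the cluster it joined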
If you want to check logs, go to Tasks, click on the task whose logs you want to see, and then click on Logs; you will see the logs of your container.

ECS migration from AL1 to AL2 - ECS service not starting

I have recently changed the AMI on which my ECS EC2 instances run from Amazon Linux to Amazon Linux 2 (in both cases using ECS-optimized images). I am deploying my instances using CloudFormation, and it is a real headache: the new instances sometimes come up successfully and sometimes don't (same stack, no updates, same code).
On the failed instances there is an issue with the ECS service itself: after executing ecs-logs-collector.sh, I see in the ecs log file "warning: The Amazon ECS Container Agent is not running". The directory /var/log/ecs doesn't even exist!
I have the correct IAM role attached to the instance.
As mentioned, it is the same code being run, and it fails with the ECS service on about 75% of attempts. I have no more ideas where else to look for issues/logs/errors.
AMI: ami-0650e7d86452db33b (eu-central-1)
Solved. In case someone else runs into this issue, adding the following to my user data helped. Removing the After=cloud-final.service line seems to break a systemd ordering cycle between cloud-init and ecs.service that can otherwise cause systemd to drop ecs.service from the boot transaction:
# Copy the unit file so the packaged copy stays untouched
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
# Drop the ordering dependency on cloud-final.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload
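After a reboot you can confirm the agent unit actually started:
systemctl status ecs   # should report active (running)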

Adding an ECS instance in AWS - where to set the cluster name

I have a cluster "my-cluster"
If I try to add an ECS instance, there are none available. However, if I create a cluster named "default", then I have an instance available.
I have deleted the file /var/lib/ecs/data/ecs_agent_data.json as suggested here:
Why can't my ECS service register available EC2 instances with my ELB?
Where can I change my instance/load balancer to allow me to use an EC2 instance in "my-cluster" rather than having to use the "default" cluster?
Per the ECS Agent Configuration docs:
If you are manually starting the Amazon ECS container agent (for non-Amazon ECS-optimized AMIs), you can use these environment variables in the docker run command that you use to start the agent with the syntax --env=VARIABLE_NAME=VARIABLE_VALUE. For sensitive information, such as authentication credentials for private repositories, you should store your agent environment variables in a file and pass them all at once with the --env-file path_to_env_file option.
One of the environment variables in the list is ECS_CLUSTER. So start the agent like this:
docker run -e ECS_CLUSTER=my-cluster ...
If you're using the ECS-optimized AMI, the agent is started for you and reads its configuration from /etc/ecs/ecs.config, so you can set the cluster name there instead.
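A minimal sketch, typically placed in the instance's user data so it runs before the agent starts:
#!/bin/bash
# Point the pre-installed ECS agent at your cluster instead of "default"
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config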

How to auto create new hosts in Logentries for AWS EC2 autoscaling group

What's the best way to send logs from Auto Scaling groups (of EC2) to Logentries?
I previously used the EC2 platform to set up log monitoring for all of my EC2 instances created by an Auto Scaling group. However, under Auto Scaling rules, a new instance will spin up if a current one is destroyed.
Now, how do I automate Logentries creating a new host and starting to get logs? I've read https://logentries.com/doc/linux-agent-with-chef/#updating-le-agent, but I'm stuck at override['le']['pull-server-side-config'] = false since I don't know anything about Chef (I just took the training from their site).
For an Auto Scaling group, you need this either baked into an AMI or scripted to run on startup. You can get an EC2 instance to run commands on startup (via user data), once you've figured out which script to run.
The Logentries Linux Agent installation docs have setup instructions for an Amazon AMI (under Installation > Select your distro below > Amazon AMI).
Run the following commands one by one in your terminal; you will need to provide your Logentries credentials when prompted to link the agent to your account:
sudo -s
# Add the Logentries yum repository
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update
# Install the agent and register this host (prompts for your credentials)
yum install logentries
le register
# Install the daemon that follows and ships the configured logs
yum install logentries-daemon
I recommend trying that script once to see whether it works properly for you; then you could include it in the user data for your Auto Scaling launch configuration.
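As a rough sketch, a non-interactive variant for user data might look like the following. Note that le register normally prompts for credentials; the --account-key flag used here to skip the prompt is an assumption (check le register --help for your agent version), and YOUR_ACCOUNT_KEY is a placeholder:
#!/bin/bash
# Hypothetical user data for an Auto Scaling launch configuration:
# the same steps as above, made non-interactive for unattended boots.
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update -y
yum install -y logentries
le register --account-key=YOUR_ACCOUNT_KEY   # assumed flag; verify with "le register --help"
yum install -y logentries-daemon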