No ECS agent docker container in ECS optimised instance

I launched an ECS Optimised instance in the ap-south-1 region of AWS from AMI ID ami-0a8bf4e187339e2c1, using the link https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html, but there is no ECS agent present. Even the /var/log/ecs directory is not present, so I cannot check logs. I have the correct cluster name configured in /etc/ecs/ecs.config.

If you look at the instances in the EC2 console in AWS, can you see the AMI ID? Is it the AMI ID you expect?
Just to have a point of comparison, I just SSH'd to an ECS-optimized EC2 instance and I can see ecs-agent in a docker ps listing and I can see /var/log/ecs, so my first instinct is that this EC2 instance didn't end up using the AMI you expected it to.
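If you can get onto the instance, a quick way to check is to ask the instance metadata service which AMI it actually booted from and look for the agent directly. A rough sketch (assumes IMDSv1 is reachable; with IMDSv2 enforced you would need to fetch a session token first):
curl -s http://169.254.169.254/latest/meta-data/ami-id   # which AMI did this instance really use?
docker ps --filter name=ecs-agent                        # is the agent container there at all?
sudo systemctl status ecs                                # agent service on Amazon Linux 2
sudo status ecs                                          # agent service on Amazon Linux 1 (upstart)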

If you want to check logs, go to Tasks, click on the task whose logs you want to see, and then click on Logs; you will see the logs of your container.
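If you prefer the CLI and your task definition uses the awslogs log driver, the same container logs are in CloudWatch Logs. A sketch, assuming AWS CLI v2 and a hypothetical log group /ecs/my-task (use whatever awslogs-group your task definition actually sets):
aws logs tail /ecs/my-task --follow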

Related

How to determine if Fargate is using Spot Instances

Background: I'm running docker-compose ecs locally and need to ensure I use Spot instances due to my hobbyist budget.
Question: How do I determine and guarantee that instances are running as Fargate Spot instances?
Evidence:
I have setup the default capacity provider strategy as FARGATE_SPOT
I have both the default-created capacity providers 'FARGATE' and 'FARGATE_SPOT'
You can see the capacity provider in the web console when you view a specific task.
To find this page, click on your cluster within ECS, then go to the "Tasks" tab and click on the task ID.
You can also see this through the aws cli:
aws ecs describe-tasks --cluster <your cluster name> --tasks <your task id> | grep capacityProviderName
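If you want to guarantee Spot rather than just check for it, you can make FARGATE_SPOT the only provider in the strategy. A sketch, assuming a cluster called my-cluster and a task definition called my-task (both placeholders):
# make FARGATE_SPOT the cluster's default capacity provider strategy
aws ecs put-cluster-capacity-providers \
    --cluster my-cluster \
    --capacity-providers FARGATE FARGATE_SPOT \
    --default-capacity-provider-strategy capacityProvider=FARGATE_SPOT,weight=1

# or pin a single run-task call to Spot explicitly (subnet ID is a placeholder)
aws ecs run-task \
    --cluster my-cluster \
    --task-definition my-task \
    --capacity-provider-strategy capacityProvider=FARGATE_SPOT,weight=1 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}' \
    --count 1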

How to know EC2 instance stopped time?

I really need to know the stopped time of AWS EC2 instances. I have checked with AWS CloudTrail, but it's not easy to find the exact stopped EC2 instance. Is it possible to see the exact stop time of EC2 instances via aws-cli commands or any boto3 script?
You can get this info from the StateTransitionReason field in the describe-instances AWS CLI output when you filter for stopped instances:
aws ec2 describe-instances --filter Name=instance-state-name,Values=stopped --query 'Reservations[].Instances[*].StateTransitionReason' --output text
Example output:
User initiated (2020-12-03 07:16:35 GMT)
AWS Config keeps track of the state of resources as they change over time.
From What Is AWS Config? - AWS Config:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
Thus, you could look back through the configuration history of the Amazon EC2 instance and extract times for when the instance changed to a Stopped state.
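If Config recording is enabled for the account, the history is also available from the CLI; each configuration item carries a configurationItemCaptureTime, and the configuration blob inside it includes the instance state, so you can see roughly when it flipped to stopped. A sketch with a placeholder instance ID:
aws configservice get-resource-config-history \
    --resource-type AWS::EC2::Instance \
    --resource-id i-0123456789abcdef0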
Sometimes the time is missing from StateTransitionReason. In that case you can use CloudTrail and search for Resource Name = instance ID to find the StopInstances API calls.
By default you can track back 90 days, or indefinitely if you create your own trail.
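For the CloudTrail route from the CLI, something like this should surface the StopInstances calls for a given instance (the instance ID is a placeholder):
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=ResourceName,AttributeValue=i-0123456789abcdef0 \
    --query "Events[?EventName=='StopInstances'].[EventTime,Username]" \
    --output text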

ECS migration from AL1 to AL2 - ECS service not starting

I have recently changed the AMI my ECS EC2 instances run on from Amazon Linux to Amazon Linux 2 (in both cases I am using ECS-optimized images). I am deploying my instances using CloudFormation and having a real headache, as the new instances sometimes start up successfully and sometimes do not (same stack, no updates, same code).
On the failed instances I see that there is an issue with the ECS service itself: after executing ecs-logs-collector.sh I see in the ecs log file "warning: The Amazon ECS Container Agent is not running". Also, the directory /var/log/ecs doesn't even exist!
I have the correct IAM role attached to the instance.
Also, as mentioned, it is the same code being run, and on about 75% of attempts it fails with the ECS service. I have no more ideas where else to look for issues/logs/errors.
AMI: ami-0650e7d86452db33b (eu-central-1)
Solved. If someone else runs into this issue, adding this to my user data helped:
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload
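For context, the sed line drops the unit's After=cloud-final.service ordering, which can hold up the agent during boot while user data is still running. A rough sketch of the same fix as bash user data (assumes the ECS-optimized Amazon Linux 2 AMI):
#!/bin/bash
# copy the unit so the packaged file stays untouched, drop the
# After=cloud-final.service ordering, and reload systemd
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload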

Adding an ECS instance in AWS - where to set the cluster name

I have a cluster "my-cluster"
If I try to add an ECS instance, there are none available. However, if I create a cluster "default", then I have an instance available.
I have deleted the file /var/lib/ecs/data/ecs_agent_data.json as suggested here:
Why can't my ECS service register available EC2 instances with my ELB?
Where can I change my instance/load balancer to allow me to use an EC2 instance in "my-cluster" rather than having to use the "default" cluster?
Per the ECS Agent Configuration docs:
If you are manually starting the Amazon ECS container agent (for non-Amazon ECS-optimized AMIs), you can use these environment variables in the docker run command that you use to start the agent with the syntax --env=VARIABLE_NAME=VARIABLE_VALUE. For sensitive information, such as authentication credentials for private repositories, you should store your agent environment variables in a file and pass them all at once with the --env-file path_to_env_file option.
One of the environment variables in the list is ECS_CLUSTER. So start the agent like this:
docker run -e ECS_CLUSTER=my-cluster ...
If you're using the ECS-optimized AMI, you can alternatively set the cluster name in the agent configuration file /etc/ecs/ecs.config (for example from the instance's user data), as sketched below.
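A minimal user data sketch for that approach (the cluster name is just an example); the agent on the ECS-optimized AMI reads /etc/ecs/ecs.config at startup:
#!/bin/bash
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config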

How to auto create new hosts in Logentries for AWS EC2 autoscaling group

What's the best way to send logs from Auto Scaling groups (of EC2) to Logentries?
I previously used the EC2 platform to create EC2 log monitoring for all of my EC2 instances created by an Auto Scaling group. However, per the Auto Scaling rules, a new instance will spin up if a current one is destroyed.
Now, how do I automate Logentries so it creates new hosts and starts getting logs? I've read https://logentries.com/doc/linux-agent-with-chef/#updating-le-agent but I'm stuck at override['le']['pull-server-side-config'] = false since I don't know anything about Chef (I just took the training from their site).
For an Autoscaling group, you need to get this baked into an AMI, or scripted to run on startup. You can get an EC2 instance to run commands on startup, after you've figured out which script to run.
The Logentries Linux Agent installation docs has setup instructions for an Amazon AMI (under Installation > Select your distro below > Amazon AMI).
Run the following commands one by one in your terminal:
You will need to provide your Logentries credentials to link the agent to your account.
sudo -s
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update
yum install logentries
le register
yum install logentries-daemon
I recommend trying that script once and seeing if it works properly for you, then you could include it in the user data for your Autoscaling launch configuration.
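A rough user data sketch of the same steps for the launch configuration, assuming an Amazon Linux AMI. Note that le register is normally interactive, so you would have to pass your account key non-interactively; the --account-key and --name options below are assumptions, so verify them with le --help for your agent version:
#!/bin/bash
# set up the Logentries yum repo (same repo file as above)
tee /etc/yum.repos.d/logentries.repo <<'EOF'
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon$releasever/$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF

yum update -y
yum install -y logentries
# non-interactive registration: option names assumed, check `le --help`
le register --account-key=YOUR_ACCOUNT_KEY --name="$(hostname)"
yum install -y logentries-daemon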