Currently using an auto scaling group (ASG) on AWS, and sometimes a Docker container running in an EC2 instance exits for some ambiguous reason and the instance may get removed from the ASG. This makes debugging the failure difficult, since the ASG terminates the instance and thereby erases all the evidence of what went wrong.
So, is there a way to write the Docker logs to S3 before the container exits?
You can send the logs to CloudWatch Logs and export them to S3 if needed.
Below is the process.
Add your credentials to
/etc/init/docker.override
with
env AWS_ACCESS_KEY_ID=
env AWS_SECRET_ACCESS_KEY=
and restart your Docker service.
docker run -it --log-driver="awslogs" \
  --log-opt awslogs-region="us-east-1" \
  --log-opt awslogs-group="log-group" \
  --log-opt awslogs-stream="log-stream" \
  ubuntu:14.04 bash
This way Docker sends all container logs to CloudWatch.
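If you later need the logs in S3, CloudWatch Logs can export a time range of the log group to a bucket. A rough example (the bucket name is a placeholder, and its bucket policy must already allow CloudWatch Logs to write to it):
aws logs create-export-task \
  --log-group-name "log-group" \
  --from 1610000000000 --to 1610086400000 \
  --destination "my-log-archive-bucket" \
  --destination-prefix "docker-logs"
The --from and --to values are example timestamps in milliseconds since the epoch.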
Hope it helps.
Related
Is there an easy way to run an ECS task attached, or to follow the logs only while the container is running (i.e. detach after displaying all of the logs associated with it)?
Using the AWS CLI (1.17.0) and ecs-cli (1.21.0), I have gotten decently close with the following two commands:
aws ecs run-task --cluster "mycluster" --task-definition testhelloworldjob --launch-type FARGATE --network-configuration etc.etc.etc.
ecs-cli logs --task-id {TASK_ID_HERE_FROM_OUTPUT_OF_PREVIOUS_COMMAND} --follow
I currently have two issues with the above approach:
There is a race condition, in that the logs are not available while the task is in a pre-"running" state. Instead of ecs-cli logs waiting for the logs to exist, an error is thrown immediately.
Even after waiting for the task to reach a running state and issuing ecs-cli logs, the command refuses to detach even AFTER the task is finished and in a post-running status.
For the first issue I could poll until the task is past the activating/pending status before calling logs. For the second issue I could draft some kind of threaded call that polls and stops following the log once the container in question is no longer running... but there has to be an easier way?
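For what it's worth, here is a rough sketch of that polling approach using the waiters the AWS CLI already ships with (cluster and task definition names mirror the commands above; the network configuration is elided just like in the original command):
#!/bin/bash
CLUSTER="mycluster"
# start the task and capture its ARN
TASK_ARN=$(aws ecs run-task --cluster "$CLUSTER" --task-definition testhelloworldjob \
  --launch-type FARGATE --network-configuration etc.etc.etc. \
  --query 'tasks[0].taskArn' --output text)
# wait until the task (and therefore its log stream) is running
aws ecs wait tasks-running --cluster "$CLUSTER" --tasks "$TASK_ARN"
# follow the logs in the background, then stop following once the task exits
ecs-cli logs --task-id "${TASK_ARN##*/}" --follow &
LOGS_PID=$!
aws ecs wait tasks-stopped --cluster "$CLUSTER" --tasks "$TASK_ARN"
kill "$LOGS_PID"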
To clarify, I am coming from numerous other container orchestration tools/technologies that seemingly support this very seamlessly. Here are some examples of tools and the commands that would yield my intended results:
Docker CLI:
docker run hello-world
Docker-Compose Yaml:
docker-compose up
Kubernetes (K8s) kubectl YAML:
kubectl apply -f ./hello-k8.yaml && kubectl logs --follow hello-world
I think ecs-cli is the best option available at the moment.
Apart from that, you can change the log driver of the AWS ECS task to syslog and then watch the log file from the terminal after SSHing into the EC2 container instance on which it is running.
Another thing you can do is SSH into the EC2 container instance where it was running, run the container of that AWS ECS task yourself using docker run, and once the testing is done, stop and remove that container and start the task again via AWS ECS.
Note: You can use AWS SSM Session Manager in order to avoid using an EC2 key pair and adding an inbound rule for SSH.
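For reference, a rough sketch of reaching the container instance without a key pair, assuming the SSM agent is running on it and its instance profile allows Session Manager (the cluster name and task ID are placeholders, and the Session Manager plugin must be installed for the AWS CLI):
# find the EC2 instance the task is (or was) running on
CI_ARN=$(aws ecs describe-tasks --cluster mycluster --tasks <TASK_ID> \
  --query 'tasks[0].containerInstanceArn' --output text)
INSTANCE_ID=$(aws ecs describe-container-instances --cluster mycluster \
  --container-instances "$CI_ARN" \
  --query 'containerInstances[0].ec2InstanceId' --output text)
# open a shell without SSH keys or inbound rules
aws ssm start-session --target "$INSTANCE_ID"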
Question:
How can I install the AWS CLI from WITHIN the ECS task?
DESCRIPTION:
I'm using a Docker container to run the Logstash application (it is part of the Elastic family).
The Docker image name is "docker.elastic.co/logstash/logstash:7.10.2".
This Logstash application needs to write to S3, so it needs the AWS CLI installed.
If the AWS CLI is not installed, it crashes.
# STEP 1 #
To avoid the crash, back when I used this application only as a plain Docker container, I delayed the Logstash start until after the container was up.
I did this by adding a "sleep" command to an external docker-entrypoint file, before it starts Logstash.
This is how it looks in the docker-entrypoint file:
sleep 120
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
exec logstash "$@"
else
exec "$@"
fi
# EOF
# STEP 2 #
Run the container with the "--entrypoint" flag so it uses my entrypoint file:
docker run \
-d \
--name my_logstash \
-v /home/centos/DevOps/psifas_logstash_docker-entrypoint:/usr/local/bin/psifas_logstash_docker-entrypoint \
-v /home/centos/DevOps/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /home/centos/DevOps/logstash.yml:/usr/share/logstash/config/logstash.yml \
--entrypoint /usr/local/bin/psifas_logstash_docker-entrypoint \
docker.elastic.co/logstash/logstash:7.10.2
# STEP 3 #
Install and configure the AWS CLI from the server hosting the container:
docker exec -it -u root <DOCKER_CONTAINER_ID> yum install awscli -y
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_access_key_id <MY_aws_access_key_id>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_secret_access_key <MY_aws_secret_access_key>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set region <MY_region>
This worked for me.
Now I want to "translate" this flow into an AWS ECS task.
In ECS I will use parameters instead of running the above 3 "aws configure" commands.
MY QUESTION
How can I do my 3rd step, installing the AWS CLI, from WITHIN the ECS task? (meaning not running it on the EC2 server hosting the ECS cluster)
When I was working with the plain Docker setup, I also considered these options for using the AWS CLI:
find an official Elastic Docker image containing both Logstash and the AWS CLI. <-- I did not find one.
create such an image by myself and use it. <-- I prefer not to, because I want to avoid the maintenance of creating new custom images whenever needed (e.g. when a new version of the Logstash image is available).
Eventually I chose the 3 steps above, but I'm open to suggestions.
Also, my tests showed that running 2 containers within the same ECS task:
logstash
awscli (image "amazon/aws-cli")
with the logstash container using the awscli container, is not working.
THANKS A LOT IN ADVANCE :-)
Your option #2, create the image yourself, is really the best way to do this. Anything else is going to be a "hack". Also, you shouldn't be running aws configure for an image running in ECS; you should be assigning an IAM role to the task, and the AWS CLI will pick that up and use it.
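For illustration, a rough sketch of registering a task definition with a task role attached; the family name, account ID and role name are placeholders, and the role with the needed S3 permissions is assumed to exist already:
aws ecs register-task-definition \
  --family logstash-s3 \
  --task-role-arn arn:aws:iam::123456789012:role/logstash-task-role \
  --container-definitions '[{"name":"logstash","image":"docker.elastic.co/logstash/logstash:7.10.2","memory":2048,"essential":true}]'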
Mark B, your answer helped me to solve this. Thanks!
Writing the solution here in case it helps somebody else.
There is no need to install the AWS CLI in the logstash Docker container running inside the ECS task.
Inside the logstash container (from image "docker.elastic.co/logstash/logstash:7.10.2") there is an AWS SDK that connects to S3.
The only thing required is to allow the ECS task execution role access to S3.
(I attached the AmazonS3FullAccess policy.)
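For completeness, attaching that policy can also be done from the CLI; a minimal sketch, where the role name is a placeholder for whichever role your task uses:
aws iam attach-role-policy \
  --role-name my-ecs-task-execution-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess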
$ terraform version
Terraform v0.14.4
I'm using Terraform to create an AWS Auto Scaling group, and it successfully launches an EC2 instance via a launch template, also created by the same Terraform plan. I added the following user_data definition in the launch template. The AMI I'm using already has Docker configured and has the Docker image that I need.
user_data = filebase64("${path.module}/docker_run.sh")
and the docker_run.sh file simply contains
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
However, when I ssh to the EC2 instance, the container is NOT running. What am I missing?
Update:
Per Marcin's comment, I see the following in /var/log/cloud-init-output.log:
Jan 11 22:11:45 cloud-init[3871]: __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'docker run -p 80:3000 -d...'
From the AWS docs and what you've posted, the likely reason is that you are missing the #!/bin/bash shebang line in your docker_run.sh:
User data shell scripts must start with the #! characters and the path to the interpreter you want to read the script (commonly /bin/bash).
Thus your docker_run.sh should be:
#!/bin/bash
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
If this still fails, please check /var/log/cloud-init-output.log on the instance for errors.
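If you want to dig further, a small sketch of checks you can run on the instance (the metadata call assumes IMDSv1 is still allowed; with IMDSv2 enforced you would need to fetch a token first):
# confirm exactly what user data the instance received
curl -s http://169.254.169.254/latest/user-data
# look for cloud-init warnings or errors from the last boot
grep -iE 'warning|error' /var/log/cloud-init-output.log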
I want to test my Python library on a GPU machine once a day.
I decided to use AWS EC2 for testing.
However, the fee for a GPU machine is very high, so I want to stop the instance after the test ends.
Thus, I want to do the following once a day, automatically:
Start the EC2 instance (which is set up manually)
Execute a command (test -> push logs to S3)
Stop the EC2 instance (not remove it)
How can I do this?
It is very simple...
Run script on startup
To run a script automatically when the instance starts (every time it starts, not just the first time), put your script in this directory:
/var/lib/cloud/scripts/per-boot/
Stop instance when test has finished
Simply issue a shutdown command to the operating system at the end of your script:
sudo shutdown -h now
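Putting the two together, a rough sketch of such a per-boot script; the library path, test command and bucket name are placeholders, and the file must be made executable:
#!/bin/bash
# e.g. saved as /var/lib/cloud/scripts/per-boot/run-gpu-tests.sh (hypothetical name)
LOG=/tmp/test-$(date +%F).log
# run the test suite and capture all output
cd /home/ec2-user/mylib && python -m pytest tests/ > "$LOG" 2>&1
# push the log to S3
aws s3 cp "$LOG" s3://my-test-logs-bucket/
# stop the instance so billing stops (per-boot scripts run as root, so no sudo needed)
shutdown -h now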
You can push script results to custom CloudWatch namespaces, e.g. publish a state metric to CloudWatch when the process ends. In CloudWatch, create alarms based on the state of the process, so that when it reaches a completed state an AWS Lambda function is triggered that stops the instance after your job completes.
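For example, publishing a completion state at the end of your script could look roughly like this (the namespace and metric name are placeholders):
aws cloudwatch put-metric-data \
  --namespace "MyTestJobs" \
  --metric-name "JobComplete" \
  --value 1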
Also, if you want to start and stop at specific times, you can use the EC2 Instance Scheduler to start/stop instances. It works like a cron job at specific intervals.
You can use the AWS CLI.
To start an instance you would do the following
aws ec2 start-instances --instance-ids i-1234567890abcdef0
and to stop the instance you would do the following
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
To execute commands inside the machine, you will need to SSH into it and run the commands that you need; then you can use the AWS CLI to upload files to S3:
aws s3 cp test.txt s3://mybucket/test2.txt
I suggest reading the AWS CLI documentation; you will find most, if not all, of what you need to automate AWS commands there.
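Putting those pieces together, a rough sketch of a daily script (the instance ID, key path, user, test command and bucket are all placeholders):
#!/bin/bash
INSTANCE_ID=i-1234567890abcdef0
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
# look up the current public IP (it changes between starts unless you use an Elastic IP)
HOST=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
sleep 30   # give sshd a moment to come up after the instance reports "running"
ssh -i ~/.ssh/my-key.pem ec2-user@"$HOST" \
  'cd mylib && ./run_tests.sh > test.log 2>&1; aws s3 cp test.log s3://mybucket/test.log'
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"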
I created a shell script to start an EC2 instance (if not already running), connect via SSH and, if you want, run a command.
https://gist.github.com/jotaelesalinas/396812f821785f76e5e36cf928777a12
You can use it in three different ways:
./ec2-start-and-ssh.sh -i <instance id> -s
will show status information about your instance: running state and private and public IP addresses.
./ec2-start-and-ssh.sh -i <instance id>
will connect and leave you inside the default shell.
./ec2-start-and-ssh.sh -i <instance id> <command>
will run whatever command you specify, e.g.:
./ec2-start-and-ssh.sh -i <instance id> ./run.sh
./ec2-start-and-ssh.sh -i <instance id> sudo poweroff
I use the last two commands to run periodic jobs minimizing billing costs.
I hope this helps!
I have recently changed the AMI on which my ECS EC2 instances are running from Amazon Linux to Amazon Linux 2 (in both cases using the ECS-optimized images). I am deploying my instances using CloudFormation and having a real headache, as those new instances sometimes run successfully and sometimes not (same stack, no updates, same code).
On the failed instances I see that there is an issue with the ECS service itself. After executing ecs-logs-collector.sh I see in the ecs log file "warning: The Amazon ECS Container Agent is not running". Also, the directory "/var/log/ecs" doesn't even exist!
I have the correct IAM role attached to the instance.
Also, as mentioned, it is the same code being run, and on 75% of attempts it fails with the ECS service. I have no more ideas where else to look for issues/logs/errors.
AMI: ami-0650e7d86452db33b (eu-central-1)
Solved. In case someone else runs into this issue, adding this to my user data helped:
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload
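For context, this is roughly how it looks as a complete user data script; the ECS_CLUSTER line is only needed if your instances join a non-default cluster, and the cluster name is a placeholder:
#!/bin/bash
# register the instance with your cluster (placeholder name), then apply the workaround above
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload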