Is it possible to dynamically set values in AWS ECS task definitions? For example, I have the following volume definition:
"volumes": [
{
"host": {
"sourcePath": "/tmp/logs/registrations"
},
"name": "logs"
}
],
I would like to do something like /tmp/logs/<container_id>.
I am trying to do this because I have multiple containers running the same app. I need to mount each container's log directory for a Sumo Logic collector. The problem is that if the directories aren't namespaced by container, the mounts will conflict.
CloudWatch Logs is another option if saving to the file system is not a requirement.
If you are using the ECS-optimized AMI, it's already configured; all you need to do is turn it on in the container task definition:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
You would also need to configure log groups or log streams for each container.
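As a rough sketch of what that looks like (the log group name, region, and prefix below are placeholders, not values from the question), the container definition would include something like:
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/registrations",
        "awslogs-create-group": "true",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "registrations"
    }
}
With a stream prefix set, each container gets its own log stream under the group, which also sidesteps the namespacing conflict from the question; the awslogs-create-group option can create the group automatically if the role has logs:CreateLogGroup permission.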
Is saving logs to the file system a hard requirement? If not, Sumo Logic has a pretty good article about another option: Update On Logging With Docker.
You run the Sumo Logic Collector for Docker in one container and your application in another. You configure Docker to send logs from your application container to the Sumo Logic collector. This is built into Docker, so your application shouldn't need to change. The collector container will then send those logs to Sumo Logic.
I've seen this pattern called the sidecar, in case you're looking for other examples of it.
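For illustration only (the address and port are assumptions, not from the article), the sidecar wiring could use Docker's built-in syslog driver in the application container's log configuration, pointing at the collector container:
"logConfiguration": {
    "logDriver": "syslog",
    "options": {
        "syslog-address": "tcp://127.0.0.1:514"
    }
}
This sketch assumes the collector container publishes a syslog listener on the host's port 514; the Sumo Logic article covers the collector-side setup.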
Related
I have a monolith that is currently being transferred to AWS (using ECS/Fargate), and will later be broken up into microservices. It uses an Amazon Linux 1 image provisioned with Apache, PHP, and all of my production website data. Currently, it sends logs to several files in /etc/httpd/logs and /var/www/vhosts/logs.
Supposedly there's some configuration I can do in ECS task definitions with log configurations and volumes, but I haven't been able to find anything that explains the details of how to do so.
In the case of a container, I would never suggest writing logs to a file; it is better to write logs to the container's stdout and stderr.
Another consideration: how will you deal with log files if you move to Fargate? So do not write logs to files, and do not treat the container like an instance machine.
The beauty of the awslogs log driver is that it pushes logs to CloudWatch Logs, and from CloudWatch it is also easy to push them on to ELK.
Go for the awslogs log driver, and design your entrypoint so that it writes logs to stdout and stderr in the container. Normally this is easy: when you run the process in the foreground, it automatically writes its logs to the container's stdout.
Just add this block to your task definition and attach a role with CloudWatch Logs permissions:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-wordpress",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-example"
}
}
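A minimal sketch of the CloudWatch Logs permissions that role needs (scope the Resource down in real use; the wildcard here is just for illustration):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}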
Once it is configured, you will see the logs in CloudWatch.
Using the awslogs Log Driver
You can configure the containers in your tasks to send log information to CloudWatch Logs. If you are using the Fargate launch type for your tasks, this allows you to view the logs from your containers. If you are using the EC2 launch type, this enables you to view different logs from your containers in one convenient location, and it prevents your container logs from taking up disk space on your container instances. This topic helps you get started using the awslogs log driver in your task definitions.
Note
The type of information that is logged by the containers in your task depends mostly on their ENTRYPOINT command. By default, the logs that are captured show the command output that you would normally see in an interactive terminal if you ran the container locally, which are the STDOUT and STDERR I/O streams. The awslogs log driver simply passes these logs from Docker to CloudWatch Logs. For more information on how Docker logs are processed, including alternative ways to capture different file data or streams, see View logs for a container or service in the Docker documentation.
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
I need to collect custom metrics from my ECS instances, and according to the documentation these are the steps I need to follow (a sketch of step 3 follows this list):
Install the AWS CloudWatch agent
Install the collectd daemon
Configure the CloudWatch agent to get the metrics from the collectd daemon
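As a rough sketch of step 3 (the values shown are the commonly documented defaults; adjust them for your setup), the CloudWatch agent's configuration file tells it where to listen for the metrics collectd sends over the network:
{
    "metrics": {
        "metrics_collected": {
            "collectd": {
                "service_address": "udp://127.0.0.1:25826",
                "metrics_aggregation_interval": 60
            }
        }
    }
}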
It seems I can:
Dockerize the CloudWatch agent (which seems to be already done, but with little documentation)
Dockerize the collectd daemon
Why dockerize, and why not use the awslogs driver for collecting metrics?
Currently we already have some services running as Docker containers managed by Amazon ECS and configured to use the awslogs driver to send logs to Amazon CloudWatch Logs.
But in order to collect more custom metrics from the services (e.g. the number of requests per particular user from service A), the only solution AWS suggests is to use collectd with the curl plugin along with the CloudWatch agent.
Due to some scaling issues, instead of running the CloudWatch agent and collectd on an instance, I want to run them as containers.
Question:
Is there any way to run the CloudWatch agent in a Docker container so that it can read metrics from a collectd daemon that runs in a different container on the same machine?
You do not need to run the CloudWatch agent in your application container; do not forget the rule of thumb: one process per container.
All you need to do is push application logs to the container's stdout or stderr, and the Docker daemon will take care of the rest.
Important configuration:
All you need to do is set the log driver to awslogs in the task definition.
Amazon CloudWatch Logs logging driver
The awslogs logging driver sends container logs to Amazon CloudWatch Logs. Log entries can be retrieved through the AWS Management Console or the AWS SDKs and Command Line Tools.
Specifying a Log Configuration in your Task Definition
Before your containers can send logs to CloudWatch, you must specify the awslogs log driver for containers in your task definition. This section describes the log configuration for a container to use the awslogs log driver. For more information, see Creating a Task Definition.
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-mysql",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-example"
}
}
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
We can add tags to EC2 instances to help us better track billing usage and to manage instances.
Is there a way to achieve this when deploying containers in ECS? I would like the running container to be able to know what tags it currently has attached.
It really depends on what you're ultimately trying to visualize after the fact. I'll share a few off-the-cuff thoughts below, and maybe you can extrapolate on these to build something that satisfies your needs.
As you probably are aware, ECS Tasks themselves don't support the notion of tags; however, there are some workarounds that you could consider. For example, depending on how you're logging your application's behavior (e.g. batching logs to CloudWatch Logs), you could create a Log Stream name, for each ECS Task, that contains a delimited array of tags.
As part of a POC I was building recently, I used the auto-generated computer name to dynamically create CloudWatch Log Stream names. You could easily append or prepend the tag data that you embed in your container images, and then query the tag information from the CloudWatch Log Streams later on.
Another option would be to simply log a metric to CloudWatch Metrics, based on the number of ECS Tasks running from each unique Task Definition.
You could build a very simple Lambda function that queries your ECS Tasks, on each cluster, and writes the Task count, for each unique Task Definition, to CloudWatch Metrics on a per-minute basis. CloudWatch Event Rules allow you to trigger Lambda functions on a cron schedule, so you can customize the period to your liking.
You can use this metric data to help drive scaling decisions about the ECS Cluster, the Services and Tasks running on it, and the underlying EC2 compute instances that support the ECS Cluster.
Hope this helps.
Just found this while trying to work out the current situation. For future searchers: I believe tagging was added some time after this question, in late 2018.
I've not yet worked out if you can set this up in the Console or if it's a feature of the API only, but, for example, the Terraform AWS provider now lets you set service or task definition tags to 'bubble through' to tasks (including Fargate ones) via propagate_tags.
I've just enabled this and it works, but it forces a new ECS service; I guess this is related to it not being obviously editable in the Console UI.
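As a sketch of the equivalent setting in the raw ECS CreateService API (all names below are placeholders), propagateTags controls whether tags are copied from the service or the task definition onto each running task:
{
    "cluster": "my-cluster",
    "serviceName": "my-service",
    "taskDefinition": "my-task-def:1",
    "desiredCount": 2,
    "propagateTags": "TASK_DEFINITION",
    "tags": [
        {"key": "team", "value": "payments"}
    ]
}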
I have a classic Scala app; it produces three different logs in these locations:
/var/log/myapp/log1/mylog.log
/var/log/myapp/log2/another.log
/var/log/myapp/log3/anotherone.log
I containerized the app and it is working fine; I can get those logs via a Docker volume mount.
Now the app/container will be deployed in AWS ECS with an Auto Scaling group. In this case, multiple containers may run on one single ECS host.
I would like to use CloudWatch to monitor my application logs.
One solution could be to put the AWS logs agent inside my application container.
Is there any better way to get those application logs from the container to CloudWatch Logs? Help is very much appreciated.
When using docker, the recommended approach is to not log to files, but to send logs to stdout and stderr. Doing so prevents the logs from being written to the container's filesystem, and (depending on the logging driver in use), allows you to view the logs using the docker logs / docker container logs subcommand.
Many applications have a configuration option to log to stdout/stderr, but if that's not an option, you can create a symlink to redirect output; for example, the official NGINX image on Docker Hub uses this approach.
Docker supports logging drivers, which allow you to send logs to (among others) AWS CloudWatch. After you have modified your image to make it log to stdout/stderr, you can configure the awslogs logging driver.
More information about logging in Docker can be found in the "logging" section of the documentation.
You don't need a log agent if you can change the code.
You can directly publish custom metric data to CloudWatch, as this page describes: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-cloudwatch-publish-custom-metrics.html
My Java application appears to be hitting the "open files" limit when running in a Docker container in AWS ECS. Upon further investigation, I found that the open files limit defaults to 1024.
Typically, in Linux I would edit /etc/security/limits.conf. That does not appear to take effect when I modify that file in my Docker container.
I know I can also pass command line ulimit parameters to docker run as documented here. But I do not have direct access to the Docker command line in ECS. There has to be a way to do it via a task definition. How do I accomplish this?
The ECS task definition JSON allows you to set ulimits. The ulimits in an ECS task definition correspond to the ulimit options of docker run.
Please refer to the following page for more information:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
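As a sketch (the limit values are arbitrary examples, not recommendations), the container definition would gain a block like:
"ulimits": [
    {
        "name": "nofile",
        "softLimit": 65536,
        "hardLimit": 65536
    }
]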
Via the Console, navigate to the task definitions, and under the "Resource Limits" section you can set the NOFILE soft/hard limits.
For CloudFormation, you can set it via the task definition > container definition > Ulimits:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-containerdefinitions.html#cfn-ecs-taskdefinition-containerdefinition-ulimits
Be mindful of the fact that you should keep the hard limit below one tenth of the memory (in kilobytes) afforded to this task.