I need to collect custom metrics from my ECS instances. From the documentation, these are the steps I need to follow:
1. Install the AWS CloudWatch agent
2. Install the collectd daemon
3. Configure the CloudWatch agent to read the metrics from the collectd daemon (see the sketch after this list)
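Based on the agent documentation, step 3 amounts to adding a collectd section to the agent's configuration file. A minimal sketch; the address and interval shown are the commonly documented defaults, so verify them against your agent version:

{
    "metrics": {
        "metrics_collected": {
            "collectd": {
                "service_address": "udp://127.0.0.1:25826",
                "metrics_aggregation_interval": 60
            }
        }
    }
}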
It seems I can:
1. Dockerize the CloudWatch agent (which seems to be done already, but documentation is lacking)
2. Dockerize the collectd daemon (a configuration sketch follows this list)
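If collectd runs in its own container, its network plugin can forward metrics to the CloudWatch agent container over UDP. A sketch, assuming the agent container is reachable under the Docker network alias cloudwatch-agent (an assumed name, not something AWS defines):

# /etc/collectd/collectd.conf (fragment)
LoadPlugin network
<Plugin network>
    # "cloudwatch-agent" is an assumed container name / network alias
    Server "cloudwatch-agent" "25826"
</Plugin>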
Why dockerize instead of using the awslogs driver for collecting metrics?
We already have some services running as Docker containers managed by Amazon ECS and configured to use the awslogs driver to send logs to Amazon CloudWatch Logs.
But in order to collect more custom metrics from the services (e.g. the number of requests per particular user from service A), the only solution AWS suggests is to use collectd with the curl plugin along with the CloudWatch agent.
Due to some scaling issues, instead of running the CloudWatch agent and collectd directly on an instance, I want to run them as containers.
Question:
Is there any way to run the CloudWatch agent in a Docker container so that it can read metrics from a collectd daemon that runs in a different container on the same machine?
You do not need to run the CloudWatch agent in your container. Do not forget the rule of thumb: one process per container.
All you need to do is push application logs to the container's stdout or stderr, and the Docker daemon will take care of the rest.
Important Configuration:
All you need to do is set the log driver to the awslogs driver in the task definition.
Amazon CloudWatch Logs logging driver
The awslogs logging driver sends container logs to Amazon CloudWatch
Logs. Log entries can be retrieved through the AWS Management Console
or the AWS SDKs and Command Line Tools.
Specifying a Log Configuration in your Task Definition
Before your containers can send logs to CloudWatch, you must
specify the awslogs log driver for containers in your task definition.
This section describes the log configuration for a container to use
the awslogs log driver. For more information, see Creating a Task
Definition.
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-mysql",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-example"
}
}
AmazonECS-using_awslogs
Related
My understanding is that the CloudWatch agent is available both as a Linux binary and as a Kubernetes DaemonSet.
I am aware that EKS container logs can be forwarded to CloudWatch using the CloudWatch agent that runs as an EKS DaemonSet.
I have a query about how to send OS logs from EKS nodes to CloudWatch. Would the CloudWatch agent DaemonSet be able to send the OS logs to CloudWatch, or is the Linux binary required to be run on the EKS nodes to send OS logs?
I believe that either can be used independently.
You can set up Fluentd or Fluent Bit as a DaemonSet to send logs to CloudWatch Logs (a configuration sketch follows).
Alternatively, you can install the CloudWatch agent on Amazon EKS nodes using Distributor and State Manager. You may also consider including it in the launch template to automate installation, because the EKS-optimized AMI does not include the agent by default.
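For the Fluent Bit route, a minimal sketch of an output section using its cloudwatch_logs plugin; the group and stream names below are illustrative, and the region should match yours:

[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-west-2
    log_group_name    /eks/node-os-logs
    log_stream_prefix node-
    auto_create_group true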
I have a monolith that is currently being transferred to AWS (using ECS/Fargate) and will later be broken up into microservices. It uses an Amazon Linux 1 image provisioned with Apache, PHP, and all of my production website data. Currently, it sends logs to several files in /etc/httpd/logs and /var/www/vhosts/logs.
Supposedly there's some stuff I can do in ECS task definitions with log configurations and volumes, but I haven't been able to find anything that explains the details of how to do so.
In the case of a container, I will never suggest writing logs to a file; it is better to write logs to the container's stdout and stderr.
Another thing to consider: how will you deal with log files once you have moved to Fargate? So do not write logs to files, and do not treat the container like an instance machine.
The beauty of the AWS log driver is that it pushes logs to CloudWatch Logs, and from CloudWatch it is also super easy to push them on to ELK.
Go for the AWS log driver, and design your entrypoint so that it writes logs to stdout and stderr in the container. Normally this is super easy: when you run the process in the foreground, it automatically writes logs to the container's stdout. A sketch of this approach follows.
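A minimal sketch for the Apache container, assuming the log paths from the question; several official Docker Hub images use the same symlink trick:

# Dockerfile fragment: redirect Apache's log files to the container's streams
RUN ln -sfT /dev/stdout /etc/httpd/logs/access_log \
 && ln -sfT /dev/stderr /etc/httpd/logs/error_log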
Just add this snippet to your task definition and attach a CloudWatch role:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-wordpress",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-example"
}
}
Once it is configured, you will see the logs in CloudWatch Logs.
Using the awslogs Log Driver
You can configure the containers in your tasks to send log information
to CloudWatch Logs. If you are using the Fargate launch type for your
tasks, this allows you to view the logs from your containers. If you
are using the EC2 launch type, this enables you to view different logs
from your containers in one convenient location, and it prevents your
container logs from taking up disk space on your container instances.
This topic helps you get started using the awslogs log driver in your
task definitions.
Note
The type of information that is logged by the containers in your task
depends mostly on their ENTRYPOINT command. By default, the logs that
are captured show the command output that you would normally see in an
interactive terminal if you ran the container locally, which are the
STDOUT and STDERR I/O streams. The awslogs log driver simply passes
these logs from Docker to CloudWatch Logs. For more information on how
Docker logs are processed, including alternative ways to capture
different file data or streams, see View logs for a container or
service in the Docker documentation.
aws-ecs-log-driver
In Spring Boot, logs go to stdout by default. That's a nice standard: less config, no directory configuration, etc. But I want to build a Docker image and run it on AWS.
How can I get all the logs from the dockerized Spring Boot app's stdout? Does CloudWatch support it? Is there a simple solution, or do I have to switch to logging to a file, mounting Docker volumes, etc.?
It depends on what your architecture looks like and what you want to do with the logs.
Nowadays you can use a myriad of tools to read logs. You can use AWS CloudWatch Logs, and through CloudWatch itself you can configure alerting.
In order to use it, you can configure your slf4j backend:
<appender name="cloud-watch" class="io.github.dibog.AwsLogAppender">
    <awsConfig>
        <credentials>
            <accessKeyId></accessKeyId>
            <secretAccessKey></secretAccessKey>
        </credentials>
        <region></region>
        <clientConfig class="com.amazonaws.ClientConfiguration">
            <proxyHost></proxyHost>
            <proxyPort></proxyPort>
        </clientConfig>
    </awsConfig>
    <createLogGroup>false</createLogGroup>
    <queueLength>100</queueLength>
    <groupName>group-name</groupName>
    <streamName>stream-name</streamName>
    <dateFormat>yyyyMMdd_HHmm</dateFormat>
    <layout>
        <pattern>[%thread] %-5level %logger{35} - %msg %n</pattern>
    </layout>
</appender>
Obviously it depends on your architecture: if you have, for example, Filebeat, you can configure Filebeat to use CloudWatch.
If you use the ECS-optimized AMI for the EC2 instances (the ECS agent should be at least version 1.9.0), you can also use the awslogs log driver for your containers:
1. Before launching the ECS agent, you must edit /etc/ecs/ecs.config and adjust ECS_AVAILABLE_LOGGING_DRIVERS to include ["json-file","awslogs"] (see the snippet after this list)
2. Activate the auto-configuration feature to create log groups for ECS tasks (you can also create the groups manually, but I think you want more automation here)
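The snippet for step 1 looks like this (edit the variable in /etc/ecs/ecs.config on the container instance):

# /etc/ecs/ecs.config
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]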
For more information about the awslogs log driver, see the AWS documentation:
AWS Logs Driver
Install ECS Agent
How can I configure ECS Fargate + CloudWatch to include a specific file location?
I have app-access.log, where my framework puts all of my access logs.
CloudWatch currently consumes logs from my server command's I/O only. How can I tell ECS Fargate to use app-access.log as well?
PS: I am using CloudFormation.
ECS and CloudWatch don't watch files in the container. ECS integrates with Docker's logging: if the Docker container emits log lines from access.log, they will be available to CloudWatch. That's why you're only seeing the I/O of your server command.
So it's not about ECS but rather about how Docker logging works. See here for more details on Docker logging.
You have to make sure every log line is written to STDOUT or STDERR.
One method is to symlink /path/to/app-access.log -> /dev/stdout, as in the sketch below.
But usually it's easier to make sure there's a console appender for your service.
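A minimal sketch of the symlink method in a Dockerfile; /path/to/app-access.log is the placeholder from above, so substitute the path your framework actually writes to:

# Dockerfile fragment: make the framework's file log land on the container's stdout
# /path/to/app-access.log is a placeholder, not a real path
RUN ln -sf /dev/stdout /path/to/app-access.log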
I have a classic Scala app; it produces three different logs in these locations:
/var/log/myapp/log1/mylog.log
/var/log/myapp/log2/another.log
/var/log/myapp/log3/anotherone.log
I containerized the app and it is working fine; I can get those logs with a Docker volume mount.
Now the app/container will be deployed to AWS ECS with an Auto Scaling group; in this case, multiple containers may run on one single ECS host.
I would like to use CloudWatch to monitor my application logs.
One solution could be to put the AWS Logs agent inside my application container.
Is there any better way to get those application logs from the container to CloudWatch Logs?
Help is very much appreciated.
When using Docker, the recommended approach is not to log to files but to send logs to stdout and stderr. Doing so prevents the logs from being written to the container's filesystem and (depending on the logging driver in use) allows you to view the logs using the docker logs / docker container logs subcommands.
Many applications have a configuration option to log to stdout/stderr, but if that's not an option, you can create a symlink to redirect output; for example, the official NGINX image on Docker Hub uses this approach.
Docker supports logging drivers, which allow you to send logs to (among others) AWS CloudWatch. After you have modified your image to make it log to stdout/stderr, you can configure the AWS logging driver; a sketch follows.
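As a sketch with plain Docker (on ECS you would set the same options in the task definition's logConfiguration instead; the group and image names here are illustrative):

docker run --log-driver=awslogs \
    --log-opt awslogs-region=us-west-2 \
    --log-opt awslogs-group=my-scala-app \
    --log-opt awslogs-create-group=true \
    my-scala-app:latest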
More information about logging in Docker can be found in the "logging" section of the documentation.
You don't need a log agent if you can change the code.
You can publish custom metric data directly to CloudWatch, as described on this page: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-cloudwatch-publish-custom-metrics.html
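A sketch along the lines of that page, using the v1 Java SDK; the namespace, metric, and dimension names are illustrative, picked to match the requests-per-user example from earlier in this thread:

import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
import com.amazonaws.services.cloudwatch.model.StandardUnit;

public class PublishCustomMetric {
    public static void main(String[] args) {
        final AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();

        // Illustrative dimension: which user the request count belongs to
        Dimension dimension = new Dimension()
                .withName("UserId")
                .withValue("user-123");

        // One data point for a custom "RequestsPerUser" metric (illustrative name)
        MetricDatum datum = new MetricDatum()
                .withMetricName("RequestsPerUser")
                .withUnit(StandardUnit.Count)
                .withValue(1.0)
                .withDimensions(dimension);

        // "SERVICE-A/TRAFFIC" is an illustrative custom namespace
        PutMetricDataRequest request = new PutMetricDataRequest()
                .withNamespace("SERVICE-A/TRAFFIC")
                .withMetricData(datum);

        cw.putMetricData(request);
    }
}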