AWS ECS container logs design pattern - amazon-web-services

I have a classic Scala app that produces three different logs at these locations:
/var/log/myapp/log1/mylog.log
/var/log/myapp/log2/another.log
/var/log/myapp/log3/anotherone.log
I containerized the app and it works fine; I can get at those logs with a Docker volume mount.
Now the app/container will be deployed to AWS ECS with an Auto Scaling group, so multiple containers may run on a single ECS host.
I would like to use CloudWatch to monitor my application logs.
One solution could be to put the AWS logs agent inside my application container.
Is there a better way to get those application logs from the container to CloudWatch Logs?
Help is very much appreciated.

When using Docker, the recommended approach is not to log to files but to send logs to stdout and stderr. Doing so prevents the logs from being written to the container's filesystem and (depending on the logging driver in use) lets you view the logs with the docker logs / docker container logs subcommands.
Many applications have a configuration option to log to stdout/stderr, but if that's not an option, you can create symlinks to redirect the output; for example, the official NGINX image on Docker Hub uses this approach.
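For the paths mentioned in the question, a minimal sketch of that symlink trick (these commands would typically run from a RUN instruction in your Dockerfile; adjust which files go to stdout vs. stderr to taste):

# Redirect the app's file logs to the container's stdout/stderr,
# the same approach the official NGINX image uses.
ln -sf /dev/stdout /var/log/myapp/log1/mylog.log
ln -sf /dev/stdout /var/log/myapp/log2/another.log
ln -sf /dev/stderr /var/log/myapp/log3/anotherone.log

With the symlinks in place, whatever the app writes to those paths ends up in docker logs instead of on the container's filesystem.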
Docker supports logging drivers, which allow you to send logs to (among others) AWS CloudWatch. After you have modified your image to log to stdout/stderr, you can configure the awslogs logging driver.
More information about logging in Docker can be found in the "logging" section of the documentation.
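As a rough sketch of the awslogs driver when running the container by hand (the log group name and region below are placeholders, and the group must already exist or be created, e.g. with awslogs-create-group=true plus suitable IAM permissions):

# Send the container's stdout/stderr to CloudWatch Logs via the awslogs driver
docker run --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myapp-logs \
  my-scala-app:latest

On ECS the equivalent goes into the task definition's logConfiguration (logDriver set to awslogs) rather than onto the docker run command line.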

You don't need a log agent if you can change the code.
You can publish custom metric data directly to CloudWatch, as this page describes: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-cloudwatch-publish-custom-metrics.html
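The linked page uses the Java SDK; the same call is also available from the AWS CLI, which makes for a quick sketch (the namespace and metric name here are made up for illustration):

# Publish a single custom data point to CloudWatch metrics
aws cloudwatch put-metric-data \
  --namespace "MyApp" \
  --metric-name "ErrorCount" \
  --value 1 \
  --unit Count

Note that this publishes metrics rather than log lines; for the full log text you would still want CloudWatch Logs.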

Related

What's the proper way to forward ECS service logs to AWS CloudWatch?

So my understanding is that when I deploy a new service to ECS using AWS Copilot, logs are forwarded to CloudWatch automatically by default.
Copilot creates log groups for each service, I can see that in CloudWatch Logs.
However, according to the AWS docs, logging can also be implemented using Copilot sidecars and AWS FireLens, which uses Fluentd or Fluent Bit to collect logs and then forwards them to CloudWatch.
I don't understand why this is necessary. I mean, why create a sidecar for logging to CloudWatch when logging seems to work automatically, without any sidecar?
https://aws.github.io/copilot-cli/docs/developing/sidecars/
There is an example here for logging via FireLens. What's the benefit of doing this over the logging mechanism that just works by default?
Thanks in advance!
AWS Copilot builds an image for your application that is already configured to forward logs to CloudWatch; however, you might want to deploy other images to ECS that don't have this set up. For example, if you wanted to deploy an nginx container to ECS, you might choose to use a sidecar to forward logs instead of customizing the nginx image.

AWS Elastic Beanstalk logs? Access more detailed logs?

I'm currently deploying a couple of apps with Elastic Beanstalk and have some open questions. One thing that bugs me about EB is the logs. I can run eb logs or request the logs from the GUI, but the result is kind of confusing to me, since I can't find a way to access the normal stdout of a running process. I'm running a Django app, and the logs only seem to show messages explicitly logged at Warning priority.
In Django, there are a lot of errors that seem to slip through the log system (e.g. failed migrations, failed custom commands, etc.).
Is there any way to access more detailed logs, or to access the stdout of my main process? It would also be OK if they streamed to my terminal, or if I had to SSH into the machine.
I suggest using the CLI to enable CloudWatch Logs with eb logs --cloudwatch-logs enable --cloudwatch-log-source all. This will let you see the streaming output of web.stdout.log along with all of your other logs individually.
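Assuming an Amazon Linux 2 platform, where Beanstalk names the CloudWatch log groups after the instance file paths, following just the stdout log afterwards could look roughly like this (the environment name is a placeholder):

# Enable CloudWatch streaming for all log sources, then follow only web.stdout.log
eb logs --cloudwatch-logs enable --cloudwatch-log-source all
eb logs --stream --log-group /aws/elasticbeanstalk/my-env/var/log/web.stdout.log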

How to collect logs on AWS from a dockerized Spring Boot app?

In Spring Boot, logs go to stdout by default. That's a nice standard: less config, no directory configuration, etc. But I want to build a Docker image and run it on AWS.
How can I get all the logs from the dockerized Spring Boot app's stdout? Does CloudWatch support that? Is there a simple solution, or do I have to switch to logging to a file, set up Docker volume mounts, etc.?
It depends on what your architecture looks like and what you want to do with the logs.
Nowadays you can use a myriad of tools to read logs. You can use AWS CloudWatch Logs, and through it you can configure alerting in CloudWatch itself.
To use it, you can configure your SLF4J backend with a CloudWatch appender, for example:
<appender name="cloud-watch" class="io.github.dibog.AwsLogAppender">
    <awsConfig>
        <credentials>
            <accessKeyId></accessKeyId>
            <secretAccessKey></secretAccessKey>
        </credentials>
        <region></region>
        <clientConfig class="com.amazonaws.ClientConfiguration">
            <proxyHost></proxyHost>
            <proxyPort></proxyPort>
        </clientConfig>
    </awsConfig>
    <createLogGroup>false</createLogGroup>
    <queueLength>100</queueLength>
    <groupName>group-name</groupName>
    <streamName>stream-name</streamName>
    <dateFormat>yyyyMMdd_HHmm</dateFormat>
    <layout>
        <pattern>[%thread] %-5level %logger{35} - %msg %n</pattern>
    </layout>
</appender>
Obviously it depends on your architecture: if you have, for example, Filebeat, you can configure Filebeat to ship to CloudWatch.
If you use the ECS-optimized AMI for the EC2 instances (the ECS agent should be at least 1.9.0), you can also use the awslogs log driver for your containers:
1. Before launching the ECS agent, you must edit /etc/ecs/ecs.config and set ECS_AVAILABLE_LOGGING_DRIVERS to ["json-file","awslogs"] (see the sketch below).
2. Activate the auto-configuration feature to create log groups for ECS tasks (you can also create the groups manually, but I assume you want more automation here).
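A sketch of step 1 on the container instance; the restart commands assume the classic Amazon Linux ECS-optimized AMI, where the agent runs as the ecs service:

# Allow the awslogs driver in addition to the default json-file driver
echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]' | sudo tee -a /etc/ecs/ecs.config

# Restart the ECS agent so it picks up the new setting
sudo stop ecs && sudo start ecs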
For more information about the awslogs log driver, see the AWS documentation:
AWS Logs Driver
Install ECS Agent

How can I configure Elastic Beanstalk to show me only the relevant log file(s)?

I'm an application developer with very limited knowledge of infrastructure. At my last job we frequently deployed Java web services (built as WAR files) to Elastic Beanstalk, and much of the infrastructure had already been set up before I ever started there, so I got to focus primarily on the code and not how things were tied together. One feature of Elastic Beanstalk that often came in handy was the button to "Request Logs," where you can select either the "Last 100 Lines" or the "Full Logs." What I'm used to seeing when clicking this button is to directly view the logs generated by my web service.
Now, at the new job, the infrastructure requirements are a little different, as we have to Dockerize everything before deploying it. I've been trying to stand up a Spring Boot web app inside a Docker container in Elastic Beanstalk, and have been running into trouble with that. And I also noticed a bizarre difference in behavior when I went to "Request Logs." Now when I choose one of those options, instead of dropping me into the relevant log file directly, it downloads a ZIP file containing the entire /var/log directory, with quite a number of disparate and irrelevant log files in there. I understand that there's no way for Amazon to know, necessarily, that I don't care about X log file but do care about Y log file, but was surprised that the behavior is different from what I was used to. I'm assuming this means the EB configuration at the last job was set up in a specific way to filter the one relevant log file, or something like that.
Is there some way to configure an Elastic Beanstalk application to only return one specific log file when you "Request Logs," rather than a ZIP file of the /var/log directory? Is this done with ebextensions or something like that? How can I do this?
Not too sure about the Beanstalk console, but using the EB CLI, if you enable CloudWatch log streaming for your Beanstalk instances (note that storing logs in CloudWatch costs extra), you can run:
eb logs --stream --log-group <CloudWatch logGroup name>
The above command gives you the logs for your instance specific to the file/log group you specified. For it to work, you need to enable CloudWatch log streaming first:
eb logs --stream enable
As an aside, to determine which log groups your environment presently has, perform:
aws logs describe-log-groups --region <region> | grep <beanstalk environment name>

How to get docker app logs to S3 bucket

Is there any way to stream/push Docker app logs to an S3 bucket?
I know the following two ways:
Configure CloudWatch Logs/streams - all logs (both info and error logs) get merged together in this approach.
Configure Graylog2 to push every log message, collect them, and then push to an S3 bucket - need to maintain the Graylog2 app.
I am looking for an easy way to push Docker app/error logs to an S3 bucket.
Thanks
A possible solution, though it's hard to tell for your case, is to run Logstash in a separate container and have your app direct logs to it. Since Logstash's own logging framework is based on Log4j 2, it will likely be familiar to you. A plugin already exists for Logstash to push to S3 on your behalf.
You can configure your existing Log4j 2 setup to emit to a port that Logstash is listening on.
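As a rough sketch of that setup (bucket, region, port, image tag, and the pipeline path are all placeholders), you could run Logstash in its own container with a pipeline that listens on a TCP port and batches output to S3:

# Minimal Logstash pipeline: read JSON lines from TCP, write them to S3.
cat > pipeline/logstash.conf <<'EOF'
input {
  tcp {
    port  => 5000
    codec => json_lines
  }
}
output {
  s3 {
    bucket => "my-app-logs-bucket"
    region => "us-east-1"
    prefix => "docker-app/"
  }
}
EOF

# Run Logstash with that pipeline and expose the TCP port to the app container
docker run -d --name logstash -p 5000:5000 \
  -v "$(pwd)/pipeline:/usr/share/logstash/pipeline" \
  docker.elastic.co/logstash/logstash:8.12.2

Your Log4j 2 configuration would then point a Socket appender (with a JSON layout) at port 5000 of the Logstash container.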
If even this is considered too much maintenance for you, your best option is probably just setting up a cron job that periodically copies the log files to the bucket.
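For example (bucket name and path are placeholders; aws s3 sync is used here because rsync cannot write to S3 directly):

# Every 5 minutes, copy new or changed log files to the bucket
*/5 * * * * aws s3 sync /var/log/myapp s3://my-app-logs-bucket/myapp/ --only-show-errors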