In Spring Boot, logs go to stdout by default. That's a nice standard: less configuration, no log directories to manage, etc. But I want to build a Docker image and run it on AWS.
How can I get all the logs from the dockerized Spring Boot app's stdout? Does CloudWatch support that? Is there a simple solution, or do I have to switch to logging to a file, mounting Docker volumes, etc.?
It depends on what your architecture looks like and what you want to do with the logs.
Nowadays there is a myriad of tools for reading logs. One option is AWS CloudWatch Logs, which also lets you configure alerting through CloudWatch itself.
To use it, you can configure your SLF4J backend, for example with the third-party io.github.dibog.AwsLogAppender for Logback:
<appender name="cloud-watch" class="io.github.dibog.AwsLogAppender">
    <awsConfig>
        <credentials>
            <accessKeyId></accessKeyId>
            <secretAccessKey></secretAccessKey>
        </credentials>
        <region></region>
        <clientConfig class="com.amazonaws.ClientConfiguration">
            <proxyHost></proxyHost>
            <proxyPort></proxyPort>
        </clientConfig>
    </awsConfig>
    <createLogGroup>false</createLogGroup>
    <queueLength>100</queueLength>
    <groupName>group-name</groupName>
    <streamName>stream-name</streamName>
    <dateFormat>yyyyMMdd_HHmm</dateFormat>
    <layout>
        <pattern>[%thread] %-5level %logger{35} - %msg %n</pattern>
    </layout>
</appender>
Obviously it depends on your architecture: if you already run Filebeat, for example, you can configure Filebeat to ship logs to CloudWatch.
If you use the ECS-optimized AMI for your EC2 instances (the ECS agent should be at least 1.9.0), you can also use the awslogs log driver for your containers:
1. Before launching the ECS agent, edit /etc/ecs/ecs.config and set ECS_AVAILABLE_LOGGING_DRIVERS to ["json-file","awslogs"].
2. Activate the auto-configuration feature to create log groups for ECS tasks (you can also create the groups manually, but I assume you want more automation here); a sketch of the corresponding task definition fragment follows below.
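As an illustration, the log configuration inside a container definition of your task definition might look roughly like this (the group name, region, and stream prefix are placeholders; awslogs-create-group enables the automatic group creation mentioned in step 2):

"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "my-log-group",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-app",
        "awslogs-create-group": "true"
    }
}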
For more information about the awslogs driver, see the AWS documentation:
AWS Logs Driver
Install ECS Agent
So my understanding is that when I deploy a new service to ECS using AWS Copilot, logs are forwarded to CloudWatch automatically by default.
Copilot creates log groups for each service, I can see that in CloudWatch Logs.
However, according to the AWS docs, logging can also be implemented using Copilot sidecars and AWS FireLens, which uses Fluentd or Fluent Bit to collect logs and then forwards them to CloudWatch.
I don't understand why this is necessary. I mean, why create a sidecar for logging to CloudWatch when logging seems to work automatically, without any sidecar?
https://aws.github.io/copilot-cli/docs/developing/sidecars/
There is an example here for logging via FireLens. What's the benefit of doing this over the logging mechanism that just works by default?
Thanks in advance!
AWS Copilot builds an image for your application that already has an agent configured to forward logs to CloudWatch; however, you might want to deploy other images to ECS that don't have this agent installed. For example, suppose you wanted to deploy an nginx container to ECS: you might choose to use a sidecar to forward logs instead of customizing the nginx image.
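For illustration, the wiring a FireLens sidecar adds to a raw ECS task definition looks roughly like this (the image tag, group name, region, and prefix are placeholders; Copilot generates the equivalent for you when you declare a logging sidecar in the manifest):

"containerDefinitions": [
    {
        "name": "log_router",
        "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
        "firelensConfiguration": { "type": "fluentbit" }
    },
    {
        "name": "nginx",
        "image": "nginx:latest",
        "logConfiguration": {
            "logDriver": "awsfirelens",
            "options": {
                "Name": "cloudwatch",
                "region": "us-east-1",
                "log_group_name": "/copilot/my-service",
                "log_stream_prefix": "nginx-"
            }
        }
    }
]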
I am trying to build an on-demand AWS JMeter instance (it could be any testing tool, such as SoapUI or Selenium) using Jenkins. I am not looking for a server/client JMeter distributed architecture.
This is to provide a cost-effective solution for spawning an on-demand JMeter instance (not containerization) using Jenkins. The new instance needs JNLP or a Jenkins agent to establish connectivity with the Jenkins master.
Can someone point me to documentation and code (CLI) to spin up an AWS instance, with or without an AMI?
You can use the AWS CLI to manage instances (create, launch, shut down, terminate, etc.).
An example command would be:
aws ec2 run-instances \
    --image-id your_image_id \
    --count how_many_instances_you_want \
    --instance-type desired_EC2_instance_type \
    --key-name your_key_pair \
    --security-groups your_EC2_security_group_name
Make sure that the security group allows the following ports:
the port you define as server_port, by default 1099
the port you define as server.rmi.localport
the port(s) you define as client.rmi.localport
More information:
Remote hosts and RMI configuration
Apache JMeter Properties Customization Guide
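Since the goal is a cost-effective on-demand setup, the matching cleanup call after the test run would be (the instance ID is a placeholder):

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0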
I am not sure if you are looking for this kind of setup.
Use Terraform (infrastructure as code). You will be able to spawn all the resources that are required for your test. The steps go like this:
Create a JMeter Docker image
Push it to ECR
Create a cluster in ECS
Create a task definition
Create a service in the ECS cluster that uses the JMeter image; you can use Fargate (serverless).
On top of all the above you can use Jenkins CI/CD to trigger your Terraform code; a rough CLI equivalent of the steps is sketched below.
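The answer above uses Terraform; purely as an illustration, the same steps expressed with the AWS CLI would look roughly like this (the account ID, region, names, subnet, and task definition file are placeholders):

# Build the JMeter image and push it to ECR
aws ecr create-repository --repository-name jmeter
aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t jmeter .
docker tag jmeter:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/jmeter:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/jmeter:latest

# Create the cluster, register the task definition, and start a Fargate service
aws ecs create-cluster --cluster-name jmeter-cluster
aws ecs register-task-definition --cli-input-json file://jmeter-taskdef.json
aws ecs create-service --cluster jmeter-cluster --service-name jmeter \
    --task-definition jmeter --desired-count 1 --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123abcd],assignPublicIp=ENABLED}"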
I'm looking for a way to publish custom metrics over the StatsD protocol from Amazon Elastic Container Service. I've found documentation on how to set up the Amazon CloudWatch agent on EC2, and it works well. However, I'm failing to find the correct configuration for the Dockerfile. Quite probably some set of custom IAM permissions will also be required there.
Is it possible to have Docker containers running on AWS ECS report custom metrics to AWS CloudWatch using StatsD?
Rather than building your own container, you can use the one provided by Amazon. This article explains how, including a link to an example daemon service task configuration.
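If you do end up configuring the agent yourself, the StatsD section of the CloudWatch agent configuration file looks roughly like this (the port and intervals are illustrative defaults):

{
    "metrics": {
        "metrics_collected": {
            "statsd": {
                "service_address": ":8125",
                "metrics_collection_interval": 10,
                "metrics_aggregation_interval": 60
            }
        }
    }
}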
How can I configure ECS Fargate + CloudWatch to include a specific file location?
I have app-access.log, where my framework puts all of my access logs.
CloudWatch currently consumes logs from my server's command IO only. How can I tell ECS Fargate to use app-access.log as well?
PS: I am using CloudFormation.
Neither ECS nor CloudWatch watches files inside the container. ECS integrates with Docker logging: if the Docker container emits the contents of access.log on its output, they will be available to CloudWatch. That's why you're only seeing the command IO.
So it's not about ECS but rather about how Docker logging works. See here for more details on Docker logging.
You have to make sure every log line is written to STDOUT or STDERR.
One method is to symlink /path/to/app-access.log -> /dev/stdout.
But usually, it's easier to make sure there's a console appender for your service.
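For example, if the service logs through Logback, a console appender along these lines would do (the pattern and level are just examples):

<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>[%thread] %-5level %logger{35} - %msg %n</pattern>
    </encoder>
</appender>
<root level="INFO">
    <appender-ref ref="CONSOLE"/>
</root>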
I have a classic Scala app that produces three different logs in these locations:
/var/log/myapp/log1/mylog.log
/var/log/myapp/log2/another.log
/var/log/myapp/log3/anotherone.log
I containerized the app and it works fine; I can get those logs via a Docker volume mount.
Now the app/container will be deployed on AWS ECS with an auto-scaling group. In this case, multiple containers may run on a single ECS host.
I would like to use cloud watch to monitor my application logs.
One solution could be to put the AWS logs agent inside my application container.
Is there any better way to get those application logs from the container to CloudWatch Logs?
Help is very much appreciated.
When using docker, the recommended approach is to not log to files, but to send logs to stdout and stderr. Doing so prevents the logs from being written to the container's filesystem, and (depending on the logging driver in use), allows you to view the logs using the docker logs / docker container logs subcommand.
Many applications have a configuration option to log to stdout/stderr, but if that's not an option, you can create a symlink to redirect output; for example, the official NGINX image on Docker Hub uses this approach.
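For the three files above, that symlink trick in a Dockerfile might look like this (note that it only works if the app appends to the existing files rather than deleting and recreating them):

# Redirect the app's log files to the container's stdout,
# the same approach the official nginx image uses
RUN ln -sf /dev/stdout /var/log/myapp/log1/mylog.log \
 && ln -sf /dev/stdout /var/log/myapp/log2/another.log \
 && ln -sf /dev/stdout /var/log/myapp/log3/anotherone.log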
Docker supports logging drivers, which allow you to send logs to (among others) AWS CloudWatch. After you have modified your image to log to stdout/stderr, you can configure the AWS logging driver.
More information about logging in Docker can be found in the "logging" section of the documentation.
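As an illustration, running the container with the awslogs driver might look like this (the region and group name are placeholders, and the log group must already exist unless you also set awslogs-create-group):

docker run --log-driver=awslogs \
    --log-opt awslogs-region=us-east-1 \
    --log-opt awslogs-group=my-app-logs \
    my-scala-app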
You don't need a log agent if you can change the code.
You can publish custom metric data directly to CloudWatch, as described on this page: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-cloudwatch-publish-custom-metrics.html
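The linked page shows the Java SDK; the equivalent one-off call via the AWS CLI, purely for illustration (the namespace and metric name are made up):

aws cloudwatch put-metric-data \
    --namespace "MyApp" \
    --metric-name "PageViewCount" \
    --unit Count \
    --value 1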