How to log on Amazon Elastic Beanstalk with a Spring Boot application

I created two applications with Spring Boot and deployed them on Amazon Elastic Beanstalk. One is deployed in a Java environment, the other one in Tomcat.
Tomcat has its catalina.out log, where I can find the logs written by my Spring application with log4j. The Java application has a web-1.log log file, but it is rolled every hour, and I can only find the last 5 rotated files.
Is there a better way to log, or to store the old logs (maybe on S3), or to change the retention policy?

You can rotate logs to S3. You also have the ELK stack option, but that requires some effort.
If you want a more AWS-native solution, you can use CloudWatch: for example, set up your logger with a custom appender that ships logs to CloudWatch.
CloudWatch also gives you a friendlier way to check your logs.
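As an illustration of that custom-appender idea, here is a minimal sketch (not the definitive way to do it) of a Logback appender that forwards each event to CloudWatch Logs using the AWS SDK for Java v2. The log group and stream names are placeholder assumptions, and a real appender would buffer and batch events rather than call the API once per event.

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
import software.amazon.awssdk.services.cloudwatchlogs.model.CreateLogGroupRequest;
import software.amazon.awssdk.services.cloudwatchlogs.model.CreateLogStreamRequest;
import software.amazon.awssdk.services.cloudwatchlogs.model.InputLogEvent;
import software.amazon.awssdk.services.cloudwatchlogs.model.PutLogEventsRequest;
import software.amazon.awssdk.services.cloudwatchlogs.model.ResourceAlreadyExistsException;

// Sketch only: forwards every Logback event to CloudWatch Logs.
public class CloudWatchAppender extends AppenderBase<ILoggingEvent> {

    private static final String LOG_GROUP = "my-spring-boot-app"; // placeholder
    private static final String LOG_STREAM = "web-1";             // placeholder

    private CloudWatchLogsClient client;

    @Override
    public void start() {
        client = CloudWatchLogsClient.create();
        try {
            client.createLogGroup(CreateLogGroupRequest.builder()
                    .logGroupName(LOG_GROUP).build());
        } catch (ResourceAlreadyExistsException ignored) {
            // log group already exists
        }
        try {
            client.createLogStream(CreateLogStreamRequest.builder()
                    .logGroupName(LOG_GROUP).logStreamName(LOG_STREAM).build());
        } catch (ResourceAlreadyExistsException ignored) {
            // log stream already exists
        }
        super.start();
    }

    @Override
    protected void append(ILoggingEvent event) {
        // One API call per event keeps the sketch short; a production appender
        // would buffer events and flush them in batches on a background thread.
        InputLogEvent logEvent = InputLogEvent.builder()
                .timestamp(event.getTimeStamp())
                .message(event.getFormattedMessage())
                .build();
        client.putLogEvents(PutLogEventsRequest.builder()
                .logGroupName(LOG_GROUP)
                .logStreamName(LOG_STREAM)
                .logEvents(logEvent)
                .build());
    }

    @Override
    public void stop() {
        if (client != null) {
            client.close();
        }
        super.stop();
    }
}

You would then reference the appender from your Logback configuration, and the instance profile needs the logs:CreateLogGroup, logs:CreateLogStream and logs:PutLogEvents permissions. Ready-made CloudWatch appenders for Logback and Log4j also exist, so writing your own is mainly worthwhile when you need custom behaviour.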

Related

If you are using AWS to autoscale spot instances of your application, how do you handle logging?

We are looking into adding autoscaling for a portion of our application using AWS Simple Queue Service (SQS), which would launch EC2 on-demand or spot instances based on the queue backlog.
One question I had is: how do you deal with collecting logs from autoscaled instances? New instances are spun up from an image, but they are shut down when their work is complete. Currently, if one of our services crashes, we have a system to automatically restart it, and the logs and core dump files are still there to review. If we switch to an autoscaling system where new instances are spun up, how do you get logs and core dump files when there is a failure, particularly if the instance has been spun down?
Good practice is to ship these logs somewhere else and aggregate them, and there are many services, such as Datadog and Rapid7, that will do this for you at a cost.
AWS, however, provides CloudWatch Logs, which gives you a central place to store and view logs. It also allows you to give users access to logs in the AWS console without them having to SSH onto a server.
Shipping your logs to CloudWatch Logs requires installing the CloudWatch agent on your server and specifying in its configuration which log files to ship.
You could install the CloudWatch agent once and create an AMI of that server to use in your autoscaling group, or install and configure the CloudWatch agent in the instance user data every time a server is spun up.
All the information you need to get started can be found here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html

How to visualize AWS Elastic Beanstalk application logs

We are using AWS Elastic Beanstalk to deploy applications. Currently we have two Elastic Beanstalk applications and two worker processes (which pick messages from an AWS SQS queue and process them).
What are the best tools to view the combined logs from the Elastic Beanstalk applications and workers, plus a few more on-premises applications in the future?
Throw the logs into AWS Elasticsearch and then use Kibana, which comes with Elasticsearch, to visualize them.
I followed the suggestion and configured CloudWatch Logs, Elasticsearch, and Kibana, but I am not getting all the logs and insights. I can see httpd access and error logs, and EBS access and error logs. It also involves a lot of AWS services and configuration, and since I am very new to AWS, I am having trouble setting things up.
Alternatively, as suggested by my boss, I tried New Relic. It was very simple to configure, and I can see a lot of insights into my EBS application in the New Relic console. I can also monitor my browser, iOS app, Android app, and AWS infrastructure (AWS services) in one New Relic console. Some details are missing in the New Relic console, such as error stack traces and request params in POST requests, but I also don't want to share such details with New Relic, so that is OK.
For now I will use New Relic and CloudWatch Logs (for real-time investigation of failing HTTP REST services), but I will explore more options inside AWS: Elasticsearch and Kibana.
Many Thanks

How to keep logs in AWS if the application restarts?

I run a Spring Boot application in AWS with Docker. Sometimes Amazon has to restart the underlying hardware. The instance's Environment Health in Beanstalk then goes to Degraded and Warning, and the instance restarts.
I want my application logs from the last 7 days, but the instance was restarted due to unforeseen AWS hardware issues, so I lost that information. How can I avoid this and make AWS keep all my logs even after a restart?
It is true that archiving logs to S3 would work for the most part, but you may want to consider installing and configuring the CloudWatch Logs agent - http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
This will stream logs directly to CloudWatch and preserve them after termination. You could also consider numerous other solutions for this, such as Sumo Logic, ELK, Splunk, etc.
You should always build solutions so that you are ready even when hardware crashes. One possible solution is to send log files to an S3 bucket as they are rotated; you can create a cron job to do this, or do it from the application itself, as sketched below.
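A rough sketch of that "ship rotated logs to S3" idea in the Spring Boot context of this thread, using an in-application scheduled task instead of an external cron job and the AWS SDK for Java v2. The bucket name, log directory and rotated-file pattern below are placeholder assumptions.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Sketch only: copies rotated log files to S3 once an hour and deletes the local copies.
@Component
public class RotatedLogArchiver {

    private static final Logger log = LoggerFactory.getLogger(RotatedLogArchiver.class);

    // All three values are assumptions for illustration.
    private static final String BUCKET = "my-app-log-archive";
    private static final Path LOG_DIR = Paths.get("/var/log/my-app");
    private static final String ROTATED_GLOB = "*.log.*"; // e.g. app.log.2019-01-01

    private final S3Client s3 = S3Client.create();

    @Scheduled(cron = "0 0 * * * *") // top of every hour
    public void archiveRotatedLogs() {
        try (DirectoryStream<Path> rotated = Files.newDirectoryStream(LOG_DIR, ROTATED_GLOB)) {
            for (Path file : rotated) {
                s3.putObject(PutObjectRequest.builder()
                        .bucket(BUCKET)
                        .key("logs/" + file.getFileName())
                        .build(), RequestBody.fromFile(file));
                Files.delete(file); // only remove the local copy once it is safely in S3
            }
        } catch (IOException e) {
            log.warn("Could not archive rotated logs; will retry on the next run", e);
        }
    }
}

This assumes @EnableScheduling is declared on a configuration class and that the instance role allows s3:PutObject on the bucket; a plain cron job running the AWS CLI's aws s3 cp against the same directory would achieve the same result.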

Getting Cloudwatch EC2 server health monitoring into ElasticSearch

I have an AWS account with several EC2 servers and an Elasticsearch domain set up to take the syslogs from these servers. However, in CloudWatch, and when investigating a specific server instance in the EC2 console, I see metrics and graphs for things like CPU, memory load, storage use, etc. Is there some way I can pipe this information into my Elasticsearch as well?
Set up Logstash and use this plugin: https://github.com/EagerELK/logstash-input-cloudwatch
Or go the other way: use the CloudWatch Logs agent to put your syslogs into CloudWatch and stop using Elasticsearch.
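If you would rather not run Logstash, a small poller built on the AWS SDK can do a similar job: read metric datapoints from CloudWatch and index them into Elasticsearch over its HTTP API. Below is a rough sketch (a different technique from the Logstash plugin above), assuming the AWS SDK for Java v2, Java 11's HttpClient, and an Elasticsearch endpoint that accepts unsigned requests; the endpoint URL, index name and instance ID are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;
import java.util.Locale;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Datapoint;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.GetMetricStatisticsRequest;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

// Sketch only: pulls one EC2 metric from CloudWatch and indexes the datapoints into Elasticsearch.
public class Ec2MetricsToElasticsearch {

    // Placeholders for illustration.
    private static final String ES_ENDPOINT = "https://my-es-domain.example.com/ec2-metrics/_doc";
    private static final String INSTANCE_ID = "i-0123456789abcdef0";

    public static void main(String[] args) throws Exception {
        CloudWatchClient cloudWatch = CloudWatchClient.create();
        HttpClient http = HttpClient.newHttpClient();

        // Last hour of average CPU utilisation for one instance, in 5-minute datapoints.
        GetMetricStatisticsRequest request = GetMetricStatisticsRequest.builder()
                .namespace("AWS/EC2")
                .metricName("CPUUtilization")
                .dimensions(Dimension.builder().name("InstanceId").value(INSTANCE_ID).build())
                .startTime(Instant.now().minus(Duration.ofHours(1)))
                .endTime(Instant.now())
                .period(300)
                .statistics(Statistic.AVERAGE)
                .build();

        for (Datapoint dp : cloudWatch.getMetricStatistics(request).datapoints()) {
            String doc = String.format(Locale.ROOT,
                    "{\"instanceId\":\"%s\",\"metric\":\"CPUUtilization\",\"timestamp\":\"%s\",\"average\":%.2f}",
                    INSTANCE_ID, dp.timestamp(), dp.average());
            HttpRequest post = HttpRequest.newBuilder(URI.create(ES_ENDPOINT))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(doc))
                    .build();
            http.send(post, HttpResponse.BodyHandlers.ofString());
        }
    }
}

For a domain locked down with IAM you would also have to sign the requests, and in practice you would run this on a schedule rather than as a one-off main method.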

Easier way to access Elastic Beanstalk EC2 log files

I am programming a Jersey service on Tomcat via EBS with a load balancer. I am finding it very cumbersome to get at the EC2 instances' catalina files in S3. Currently I need to determine the EC2 instance(s), work my way to each of the S3 locations, and download the files before I can diagnose anything.
The snapshot doesn't help: given the volume of requests that come in, it doesn't hold enough information, and by the time I get the snapshot, what I need has "rolled" off it.
Two questions:
1) Is there an easier approach to the log files via AWS? (Increasing the time before rotation, which I don't believe is supported as of now, scripts, etc.)
2) Is there any software or script to access all the logs under the load balancer? I basically want to say "give me all logs for this EBS" and have it fetch that day's logs from all servers behind that load balancer (up or down). The clincher is "down": the problem becomes more complex when the load balancer takes an instance down right when the issue occurs.
Thanks!
As an immediate solution to your problem, you can follow the approach suggested in this answer: essentially, you can modify the logrotate configuration via ebextensions so that logs rotate at a larger size.
Then snapshot logs should work for you.
Let me know if you need more clarifications on this approach.
AWS released CloudWatch Logs just last week, which enables you to monitor and troubleshoot your systems and applications using your existing system, application, and custom log files:
You can send your existing system, application, and custom log files to CloudWatch Logs and monitor these logs in near real-time. [...] you can store your logs using highly durable, low-cost storage for later access.
See the introductory blog post Store and Monitor OS & Application Log Files with Amazon CloudWatch for an illustrated walkthrough, which already touches on using Elastic Beanstalk with CloudWatch Logs; this is further detailed in Using AWS Elastic Beanstalk with Amazon CloudWatch Logs.