I run a Spring Boot application in AWS with Docker. Sometimes Amazon has to restart the underlying hardware. When that happens, the Environment Health of the instance in Beanstalk goes to Degraded, then Warning, and the instance restarts.
I want my app logs from the last 7 days, but the instance was restarted due to unforeseen AWS hardware issues, so I lost that information. How can I avoid this and make AWS keep all my logs even after a restart?
It is true that archiving logs to S3 would work for the most part, but you may also want to consider installing and configuring the CloudWatch Logs agent - http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
This streams logs directly to CloudWatch as they are written, so they survive instance termination. You could also consider numerous other solutions for this, such as Sumo Logic, ELK, Splunk, etc.
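For illustration, a minimal sketch of the classic CloudWatch Logs agent configuration that guide describes; the log path, group name, and timestamp format below are placeholders you would adapt to your application:

```ini
# /etc/awslogs/awslogs.conf -- minimal sketch; paths and names are placeholders
[general]
state_file = /var/lib/awslogs/agent-state

# one section per log file to ship
[/var/log/my-app/application.log]
file = /var/log/my-app/application.log
log_group_name = my-app-logs
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S
```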
You should always design your solution to keep working even when hardware crashes. One possible approach is to ship log files to an S3 bucket as they are rotated; you can create a cron job to do this.
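A minimal sketch of such a cron job, assuming the AWS CLI is available on the instance and the instance profile can write to the bucket; the log directory and bucket name are placeholders:

```bash
#!/bin/bash
# /usr/local/bin/ship-logs.sh -- sync already-rotated (compressed) logs to S3.
# Sketch only: /var/log/my-app and my-log-bucket are placeholders.
aws s3 sync /var/log/my-app/ "s3://my-log-bucket/$(hostname)/" \
    --exclude "*" --include "*.gz"

# Example crontab entry to run it hourly:
# 0 * * * * /usr/local/bin/ship-logs.sh >> /var/log/ship-logs.log 2>&1
```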
Related
We have a couple of applications running on AWS. Currently we redirect all our logs to a single bucket. However, for ease of access for users, I am thinking of installing an ELK stack on an EC2 instance.
I would like to check whether there is an alternative where I don't have to maintain this stack myself.
Scaling won't be an issue, as this is only for logs generated by applications running on AWS, so no heavy ingestion or processing is required; they are mostly log4j logs.
You can go for either the managed Elasticsearch service available in AWS or set up your own on an EC2 instance.
It usually comes down to the price involved and the amount of time you have on hand for setting up and maintaining your own cluster.
With your own setup, you can do a lot more configuration than the managed service allows, and it can also help reduce cost.
You can find more info on this blog.
We are looking into adding autoscaling for a portion of our application using AWS Simple Queue Service (SQS), which would launch EC2 on-demand or spot instances based on queue backlog.
One question I had is how you deal with collecting logs from autoscaled instances. New instances are spun up from an image, but they are shut down when their work is complete. Currently, if there is an issue with one of our services that causes it to crash, we have a system to automatically restart the service, but the logs and core dump files are still there to review. If we switch to an autoscaling system, where new instances are spun up, how do you get the logs and core dump files when there is a failure? Particularly if the instance has already been spun down.
Good practice is to ship these logs off the instance and aggregate them somewhere else, and there are many services such as DataDog and Rapid7 which will do this for you at a cost.
AWS, however, provides CloudWatch Logs, which gives you a central place to store and view logs. It also allows you to give users access to logs in the AWS console without them having to SSH onto a server.
Shipping your logs to CloudWatch Logs requires installing the CloudWatch agent on your server and specifying in its config which log files to ship.
You could install the CloudWatch agent once and create an AMI of that server to use in your autoscaling group, or install and configure the CloudWatch agent in user data each time a server is spun up (see the sketch after the link below).
All the information you need to get started can be found here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
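As a rough illustration of the user-data route, a sketch for an Amazon Linux 2 instance; the log path and log group name are made-up placeholders, and the package and agent paths may differ on other distributions:

```bash
#!/bin/bash
# EC2 user data -- install and start the CloudWatch agent (sketch for Amazon Linux 2;
# the log path and group name below are placeholders).
yum install -y amazon-cloudwatch-agent

cat > /opt/aws/amazon-cloudwatch-agent/etc/config.json <<'EOF'
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/my-app/application.log",
            "log_group_name": "my-app-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
EOF

# Load the config and start the agent
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s
```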
Is there any way to stream/push Docker app logs to an S3 bucket?
I know of the following two ways:
Configure CloudWatch Logs log groups/streams - all logs (both info and error logs) get merged together in this approach.
Configure Graylog2 to collect every log message and then push it to an S3 bucket - but then I need to maintain the Graylog2 app.
I am looking for an easy way to push Docker app/error logs to an S3 bucket.
Thanks
A possible solution, though it's hard to tell whether it fits your case, is to run Logstash in a separate container and have your app direct its logs to Logstash. Since Logstash's own logging framework is based on Log4j 2, it will likely feel familiar to you. An output plugin already exists for Logstash to push logs to S3 on your behalf.
You can configure your existing log4j2 setup to emit to a port that Logstash is listening on (a sketch of the Logstash side follows below).
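A rough sketch of the Logstash pipeline, assuming the app uses a log4j2 Socket appender with a JSON layout pointed at Logstash; the port, bucket, region, and prefix below are placeholders:

```
# logstash.conf -- sketch; port, bucket, region and prefix are placeholders
input {
  tcp {
    port  => 5000
    codec => json_lines   # matches a log4j2 Socket appender using a JSON layout
  }
}
output {
  s3 {
    bucket => "my-log-bucket"
    region => "us-east-1"
    prefix => "docker-app-logs/"
    codec  => "json_lines"
  }
}
```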
If even this is considered too much maintenance for you, your best option is probably just setting up a cron job to run rsync.
I have an AWS Elastic Beanstalk environment with some Amazon Linux instances running a Tomcat 8 server. I have enabled log rotation from the Beanstalk console, and I can see the logs getting published to S3 every hour.
I would like to reduce the frequency of the rotation from 1 hour to maybe 12 hours (or something customizable that I can decide later - if customization is limited, I can fall back to daily). The only related pointers that I've found in the documentation are that the logrotate configuration is at /etc/logrotate.elasticbeanstalk.hourly/ and that the cron job runs hourly as defined in /etc/cron.hourly/.
The default logrotate configuration for Tomcat is set to rotate at size 10 MB, but the force flag in the cron task basically ignores this and ends up rotating the log file much sooner (I don't have a whole lot of traffic). Too many log files make it very annoying to do any sort of debugging later.
How can I go about changing the logrotate configuration and overriding the cron job? Is the recommended option to overwrite these config files via a script in the .ebextensions folder?
When an instance is terminated (replaced by another one during rolling updates, or for any other reason), does Elastic Beanstalk automatically back up the pending logs to S3, or do we lose them? What should I change (or avoid changing) in the above configuration to ensure that all logs are uploaded to S3?
I am programming a Jersey service on Tomcat via Elastic Beanstalk with a load balancer. I am finding it very cumbersome to get each EC2 instance's catalina files out of S3. Currently I need to determine the EC2 instance(s), then work my way to each of the S3 locations and download the files before I can diagnose anything.
The snapshot doesn't help: with the volume of requests that come in, it doesn't hold enough information, and by the time I grab the snapshot, what I need has already "rolled" off it.
Two questions:
1) Is there an easier approach to the log files via AWS? (Increasing the time before rotation, which I don't believe is supported as of now, scripts, etc.)
2) Is there any software or script to access all the logs under the load balancer? I basically want to say "give me all the logs for this Beanstalk environment" and have it fetch all logs for that day from every server behind that load balancer (up or down). The clincher is "down": the problem becomes more complex when the load balancer takes down an instance right when the issue occurs.
Thanks!
As an immediate solution to your problem, you can follow the approach suggested in this answer: essentially, you can modify the logrotate configuration to rotate at a bigger log size using .ebextensions (a sketch follows below).
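A sketch of what that .ebextensions override could look like; the exact file name under /etc/logrotate.elasticbeanstalk.hourly/ varies by platform version, so check it on a running instance first, and the size/rotate values here are only illustrative:

```yaml
# .ebextensions/logrotate.config -- sketch; the target file name and the
# size/rotate values are assumptions, verify them on your instance.
files:
  "/etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.tomcat8.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      /var/log/tomcat8/catalina.out {
        size 100M
        missingok
        notifempty
        rotate 9
        compress
        copytruncate
      }
```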
Then snapshot logs should work for you.
Let me know if you need more clarifications on this approach.
AWS released CloudWatch Logs just last week, which enables you to monitor and troubleshoot your systems and applications using your existing system, application, and custom log files:
You can send your existing system, application, and custom log files to CloudWatch Logs and monitor these logs in near real-time. [...] you can store your logs using highly durable, low-cost storage for later access.
See the introductory blog post Store and Monitor OS & Application Log Files with Amazon CloudWatch for an illustrated walk-through, which already touches on using Elastic Beanstalk with CloudWatch Logs - this is further detailed in Using AWS Elastic Beanstalk with Amazon CloudWatch Logs.