We have been facing latency issues with our Redis setup lately.
While trying to debug what's going on, I came across a post that suggested going over the Redis logs to investigate how often the DB is saved in the background (i.e., using BGSAVE).
I did some research on how to access the Redis log file but couldn't find anything on how to find it on AWS ElastiCache. I also tried running the MONITOR command from the redis-cli, but it doesn't give me information about things like backing up the database.
How can I access such logs?
Apparently, there is no way to access the Redis server-side logs ('yet').
src: https://forums.aws.amazon.com/thread.jspa?threadID=219210
Update: this feature is available starting with Redis version 6.
You can now publish logs from your Amazon ElastiCache for Redis clusters to CloudWatch Logs or Kinesis Data Firehose by enabling slow logs in the ElastiCache console.
You can read more details here
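If you'd rather script this than click through the console, a minimal boto3 sketch of enabling slow-log delivery might look like the following (the replication group ID and log group name are placeholders, and the CloudWatch log group is assumed to already exist):

```python
# Sketch: enable slow-log delivery to CloudWatch Logs for an ElastiCache
# for Redis (6.x+) replication group. All names below are placeholders.
import boto3

elasticache = boto3.client("elasticache")

response = elasticache.modify_replication_group(
    ReplicationGroupId="my-redis-cluster",  # placeholder
    ApplyImmediately=True,
    LogDeliveryConfigurations=[
        {
            "LogType": "slow-log",
            "DestinationType": "cloudwatch-logs",
            "DestinationDetails": {
                "CloudWatchLogsDetails": {
                    "LogGroup": "/elasticache/my-redis-slow-log"  # placeholder
                }
            },
            "LogFormat": "json",
            "Enabled": True,
        }
    ],
)
print(response["ReplicationGroup"]["Status"])
```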
Related
I have been searching the internet for quite some time but could not find any useful information on this, because every page talks about ElastiCache for Redis but not about how I can migrate from AWS ElastiCache to Azure Redis. On Azure I can see that I can migrate from another Redis cache to Azure Cache for Redis, but not from AWS ElastiCache. Can someone point me to where I should start my research? It's taking up so much time with no luck.
Thanks in advance.
I'm new to RDS; previously I administered non-cloud databases. There it's common to monitor the database error log and watch its text. But when it comes to RDS for PostgreSQL, there is no native service that monitors the log file.
(I know RDS for MySQL/MariaDB can now publish to CloudWatch Logs, but RDS for PostgreSQL still cannot.)
I guess the basic scenario for monitoring RDS log files within AWS services is to create a Lambda function that downloads the error log files periodically and saves them to an S3 bucket, then parses them and, on finding an error message, notifies some chat service (like Slack); a rough sketch of that polling loop follows below.
But that is not real-time and would make a lot of API calls.
I'm wondering how people deal with monitoring log files.
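For illustration, that polling approach can be sketched with boto3's RDS log APIs (the instance identifier is a placeholder, and the chat notification is reduced to a print):

```python
# Sketch of the polling approach described above: list the instance's log
# files, page through their contents, and flag lines containing ERROR.
# The instance identifier is a placeholder; the Slack call is left out.
import boto3

rds = boto3.client("rds")
INSTANCE = "my-postgres-instance"  # placeholder

for f in rds.describe_db_log_files(DBInstanceIdentifier=INSTANCE)["DescribeDBLogFiles"]:
    marker = "0"  # start at the beginning of the file
    while True:
        portion = rds.download_db_log_file_portion(
            DBInstanceIdentifier=INSTANCE,
            LogFileName=f["LogFileName"],
            Marker=marker,
        )
        for line in portion.get("LogFileData", "").splitlines():
            if "ERROR" in line:
                print(f"{f['LogFileName']}: {line}")  # notify Slack here
        if not portion["AdditionalDataPending"]:
            break
        marker = portion["Marker"]
```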
Since December 2018, Amazon supports publishing logs from RDS for PostgreSQL databases to Amazon CloudWatch Logs. Supported logs include PostgreSQL system logs and upgrade logs. [1]
[1] https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-rds-supports-postgresql-logfiles-publish-to-amazon-cloudwatch-logs/
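If you want to enable the export programmatically rather than through the console, a minimal boto3 sketch might look like this (the instance identifier is a placeholder):

```python
# Sketch: publish PostgreSQL system and upgrade logs to CloudWatch Logs.
# The instance identifier is a placeholder.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",  # placeholder
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["postgresql", "upgrade"],
    },
)
```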
Currently, I'm trying to monitor an AWS RDS DB instance (MySQL/MariaDB) with Zabbix, but I'm running into several problems:
I have the script placed in the externalscripts folder on the Zabbix server and the template (https://github.com/datorama/zabbix_rds_template) with my AWS access data properly filled in. The host is added as well, but Zabbix doesn't retrieve any data from the AWS RDS instance (all graphs display no data).
How could I check whether the Zabbix server is able to reach the RDS instance, so I can start ruling things out?
Does anyone know the correct way to add an AWS RDS host in Zabbix?
Any suggestion or advice is always welcome.
Thanks in advance
Kind Regards
KV.
Have you checked the Zabbix log? Maybe the user you are running as doesn't have enough permissions. Try running the script in the Zabbix server shell.
Is your Zabbix server in AWS? If so, assign it a role with read access to CloudWatch and RDS.
Also, I don't like using credentials in a script; I prefer to configure them with the AWS CLI.
http://docs.aws.amazon.com/es_es/cli/latest/userguide/cli-chap-getting-started.html
Here is another example, which I use myself, of monitoring arbitrary AWS CloudWatch metrics with Zabbix:
https://www.dbigcloud.com/cloud-computing/230-integrando-metricas-de-aws-cloudwatch-en-zabbix.html
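The general pattern such external scripts follow can be sketched in a few lines of boto3. This assumes credentials come from the AWS CLI configuration or an instance role, as suggested above, and that Zabbix passes the instance identifier and metric name as arguments (all names here are placeholders):

```python
# Sketch of the external-script pattern: pull one RDS metric from CloudWatch
# and print the latest value so Zabbix can ingest it.
import sys
from datetime import datetime, timedelta, timezone

import boto3

instance_id, metric = sys.argv[1], sys.argv[2]  # e.g. mydb CPUUtilization

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName=metric,
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
    StartTime=now - timedelta(minutes=10),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
# Zabbix expects a single value on stdout.
points = sorted(stats["Datapoints"], key=lambda p: p["Timestamp"])
print(points[-1]["Average"] if points else 0)
```

A Zabbix item of type "external check" would then invoke it with the host and metric as parameters (the script name above is hypothetical). Running it by hand in the Zabbix server shell, as suggested earlier, is also a quick way to rule out permission problems.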
I run a Spring Boot application in AWS with Docker. Sometimes Amazon has to restart the underlying hardware; the Environment Health of the instance in Beanstalk then goes to Degraded, then Warning, and the instance restarts.
I want my app logs from the last 7 days, but the instance was restarted due to unforeseen AWS hardware issues, so I lost that information. How can I avoid this and make AWS keep all my logs even across restarts?
It is true that archiving logs to S3 would work for the most part, but you may want to consider installing and configuring the CloudWatch Logs agent - http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
This will stream logs directly to CloudWatch and preserve them even after the instance terminates. You could also consider numerous other solutions for this, such as Sumo Logic, ELK, Splunk, etc.
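Once the agent is shipping logs, a few lines of boto3 are a quick way to sanity-check that events are actually arriving (the log group name is a placeholder):

```python
# Sketch: confirm log events are arriving in the log group the agent
# writes to. The group name is a placeholder.
import boto3

logs = boto3.client("logs")

resp = logs.filter_log_events(
    logGroupName="/my-app/spring-boot.log",  # placeholder
    limit=10,
)
for event in resp["events"]:
    print(event["timestamp"], event["message"].rstrip())
```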
You should always build solutions so they stay usable even when hardware crashes. One possible solution is to ship log files to an S3 bucket as they are rotated; you can create a cron job to do this, as sketched below.
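A minimal sketch of that idea with boto3, assuming placeholder bucket and paths; a cron entry would run it right after logrotate:

```python
# Sketch: upload rotated, compressed log files to S3, then delete the local
# copies. Bucket name and paths are placeholders; run this from cron.
import glob
import os

import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-log-archive"  # placeholder

for path in glob.glob("/var/log/myapp/*.log.*.gz"):  # rotated logs (placeholder path)
    key = f"logs/{os.path.basename(path)}"
    s3.upload_file(path, BUCKET, key)
    os.remove(path)  # safe to delete once archived
```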
I am building a Jersey service on Tomcat via Elastic Beanstalk with a load balancer. Getting the EC2 instances' catalina files out of S3 is very cumbersome: currently I have to determine the EC2 instance(s), then work my way to each of the S3 locations and download the files before I can diagnose anything.
The snapshot doesn't help: given the volume of requests coming in, it doesn't hold enough information, and by the time I get the snapshot, the relevant entries have "rolled" off it.
Two questions:
1) Is there an easier approach to log files via AWS? (Increasing the time before rotation, which I don't believe is supported as of now, scripts, etc.)
2) Is there any software or script to access all the logs under the load balancer? I basically want to say "give me all logs for this Beanstalk environment" and have it fetch that day's logs from all servers behind that load balancer, up or down. The clincher is "down": the problem becomes more complex when the load balancer takes an instance down right when the issue occurs.
Thanks!
As an immediate solution to your problem, you can follow the approach suggested in this answer: essentially, you can modify the logrotate configuration to rotate at a bigger log size using ebextensions.
Then snapshot logs should work for you.
Let me know if you need more clarification on this approach.
AWS released CloudWatch Logs just last week, which enables you to monitor and troubleshoot your systems and applications using your existing system, application, and custom log files:
You can send your existing system, application, and custom log files to CloudWatch Logs and monitor these logs in near real-time. [...] you can store your logs using highly durable, low-cost storage for later access.
See the introductory blog post Store and Monitor OS & Application Log Files with Amazon CloudWatch for an illustrated walkthrough, which already touches on using Elastic Beanstalk with CloudWatch Logs; this is further detailed in Using AWS Elastic Beanstalk with Amazon CloudWatch Logs.
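Once instances stream to a shared log group, the "all logs for that day, from all servers, up or down" query from the question reduces to a single filter call. A minimal boto3 sketch, with placeholder group name and date:

```python
# Sketch: fetch one day's events from a shared log group, across every log
# stream (i.e. every instance, including terminated ones). Group name and
# date are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

logs = boto3.client("logs")

day_start = datetime(2024, 1, 15, tzinfo=timezone.utc)  # placeholder date
start_ms = int(day_start.timestamp() * 1000)
end_ms = int((day_start + timedelta(days=1)).timestamp() * 1000)

paginator = logs.get_paginator("filter_log_events")
for page in paginator.paginate(
    logGroupName="/my-env/catalina.out",  # placeholder
    startTime=start_ms,
    endTime=end_ms,
):
    for event in page["events"]:
        print(event["logStreamName"], event["message"].rstrip())
```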