How does everyone monitor the RDS PostgreSQL error log? - amazon-web-services

I'm new to RDS; previously I administered non-cloud databases, where it's common to monitor the database error log and watch its contents. But when it comes to RDS Postgres, there is no native service that monitors the log files.
(I know RDS MySQL/MariaDB now has the ability to publish logs to CloudWatch Logs, but RDS Postgres still cannot do it.)
I guess the basic scenario, if we want to monitor RDS log files within AWS services, is a Lambda function that periodically downloads the error log files, saves them to an S3 bucket, parses them, and notifies a chat service (like Slack) when it finds an error message.
But that is not real-time and is going to make a lot of API calls.
I'm wondering how people deal with monitoring log files.
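For reference, here's a rough sketch of the polling approach I have in mind, using boto3 (the instance identifier, bucket name, and the notify_slack helper are placeholders, and error handling is omitted):

    import boto3

    rds = boto3.client("rds")
    s3 = boto3.client("s3")

    DB_INSTANCE = "my-postgres-instance"   # placeholder
    BUCKET = "my-log-archive-bucket"       # placeholder

    def archive_and_scan_logs():
        # List the log files currently available for the instance.
        files = rds.describe_db_log_files(DBInstanceIdentifier=DB_INSTANCE)
        for f in files["DescribeDBLogFiles"]:
            name = f["LogFileName"]
            # Download the file in portions; the API pages via Marker.
            marker, chunks = "0", []
            while True:
                part = rds.download_db_log_file_portion(
                    DBInstanceIdentifier=DB_INSTANCE,
                    LogFileName=name,
                    Marker=marker,
                )
                chunks.append(part.get("LogFileData", ""))
                if not part["AdditionalDataPending"]:
                    break
                marker = part["Marker"]
            text = "".join(chunks)
            # Archive to S3, then flag anything that looks like an error.
            s3.put_object(Bucket=BUCKET, Key=name, Body=text.encode())
            errors = [line for line in text.splitlines() if "ERROR" in line]
            if errors:
                notify_slack(errors)   # hypothetical helper posting to a webhook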

Since December 2018, Amazon RDS supports publishing logs from RDS for PostgreSQL databases to Amazon CloudWatch Logs. Supported logs include PostgreSQL system logs and upgrade logs. [1]
[1] https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-rds-supports-postgresql-logfiles-publish-to-amazon-cloudwatch-logs/
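If you are on an engine version that supports it, the export can be enabled from the console or with a call along these lines (a boto3 sketch; the instance name is a placeholder):

    import boto3

    rds = boto3.client("rds")

    # Publish the PostgreSQL and upgrade logs to CloudWatch Logs.
    # The log group ends up named /aws/rds/instance/<instance-name>/postgresql.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-postgres-instance",   # placeholder
        CloudwatchLogsExportConfiguration={
            "EnableLogTypes": ["postgresql", "upgrade"],
        },
        ApplyImmediately=True,
    )

Once the logs are in CloudWatch Logs, a metric filter plus an alarm (or a subscription that triggers a Lambda which posts to Slack) replaces the polling approach described in the question.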

Related

How to access redis logs on AWS ElastiCache

We have been facing latency issues with our Redis lately.
We are trying to debug what's going on. I came across this post, and it mentioned going over the Redis logs to investigate how often the DB is saved in the background (i.e. using bgsave).
I did some research on how to access the Redis log file but couldn't find anything on how to find it on AWS ElastiCache. I also tried running the monitor command from the redis-cli, but it doesn't give me information about things like backing up the database.
How can I access such logs?
Apparently, there is no way to access the Redis server-side logs ('yet').
src: https://forums.aws.amazon.com/thread.jspa?threadID=219210
Update: this feature is available starting with Redis version 6.
You can now publish logs from your Amazon ElastiCache for Redis clusters to CloudWatch Logs and Kinesis Data Firehose by enabling slow logs in the ElastiCache console.
You can read more details here
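If you prefer to script it, enabling slow-log delivery looks roughly like this boto3 sketch (cluster ID and log group are placeholders; the cluster needs Redis 6.0 or newer, and for a replication group the analogous modify_replication_group call is used):

    import boto3

    elasticache = boto3.client("elasticache")

    # Deliver the Redis slow log to a CloudWatch Logs group in JSON format.
    elasticache.modify_cache_cluster(
        CacheClusterId="my-redis-cluster",   # placeholder
        ApplyImmediately=True,
        LogDeliveryConfigurations=[
            {
                "LogType": "slow-log",
                "DestinationType": "cloudwatch-logs",
                "DestinationDetails": {
                    "CloudWatchLogsDetails": {"LogGroup": "/elasticache/my-redis"}   # placeholder
                },
                "LogFormat": "json",
                "Enabled": True,
            }
        ],
    )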

Monitor an AWS RDS DB instance with Zabbix

Currently, I'm trying to monitor an AWS RDS DB instance (MySQL/MariaDB) with Zabbix, but I'm running into several problems:
I have the script placed in the externalscripts folder on the Zabbix server and the template (https://github.com/datorama/zabbix_rds_template) with my AWS access data properly filled in. The host is added as well, but Zabbix doesn't retrieve any data from the AWS RDS instance (all graphs display no data).
How could I check whether the Zabbix server is able to reach the RDS instance, so I can start ruling things out?
Does anyone know the correct way to add an AWS RDS host in Zabbix?
Any suggestion or advice is always welcome.
Thanks in advance.
Kind regards,
KV.
Have you checked the Zabbix log? Maybe the user you are running as doesn't have enough permissions. Try running the script from a shell on the Zabbix server.
Is your Zabbix server in AWS? If yes, give it a role with read access to CloudWatch and RDS.
Also, I don't like using credentials in a script; I prefer to configure them with the AWS CLI.
http://docs.aws.amazon.com/es_es/cli/latest/userguide/cli-chap-getting-started.html
Here is another example of monitoring AWS CloudWatch metrics with Zabbix that I use:
https://www.dbigcloud.com/cloud-computing/230-integrando-metricas-de-aws-cloudwatch-en-zabbix.html
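For what it's worth, the external script usually boils down to a single CloudWatch query; here is a minimal sketch in Python/boto3 (metric, namespace, and period are just examples) that prints a value Zabbix can ingest as a numeric item:

    #!/usr/bin/env python3
    # Hypothetical external script: print the average CPU of an RDS instance
    # over the last five minutes so Zabbix can read it as an item value.
    import sys
    from datetime import datetime, timedelta

    import boto3

    def main(instance_id):
        cloudwatch = boto3.client("cloudwatch")
        now = datetime.utcnow()
        resp = cloudwatch.get_metric_statistics(
            Namespace="AWS/RDS",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
            StartTime=now - timedelta(minutes=5),
            EndTime=now,
            Period=300,
            Statistics=["Average"],
        )
        datapoints = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
        print(datapoints[-1]["Average"] if datapoints else 0)

    if __name__ == "__main__":
        main(sys.argv[1])

Running it by hand on the Zabbix server, as the same user Zabbix runs under, is also a quick way to confirm that credentials and network access are fine.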

How to log on Amazon Elastic Beanstalk with a Spring Boot application

I created two applications with Spring Boot and deployed them on Amazon Elastic Beanstalk. One is deployed in a Java environment, the other in Tomcat.
Tomcat has its catalina.out log, where I can find the logs written by my Spring application with log4j. The Java application has a log, web-1.log, but it is rolled every hour and I can only find the last 5 log files.
Is there a better way to log, or to store the old logs (maybe on S3), or to change the retention policy?
You can apply log rotation to S3. You also have the ELK stack option, but that requires some effort.
If you want a more AWS-native solution, you can use CloudWatch: for example, set up your logger with a custom appender that ships the logs to CloudWatch Logs.
Using CloudWatch gives you a friendlier way to check your logs.
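I can't vouch for any particular log4j appender library, but conceptually the appender just calls the CloudWatch Logs API on each batch of log events. A minimal Python/boto3 illustration of those calls (group and stream names are placeholders; a real Java appender does the same thing for you):

    import time
    import boto3

    logs = boto3.client("logs")

    GROUP = "/myapp/spring-boot"   # placeholder log group (create once)
    STREAM = "web-1"               # placeholder log stream

    logs.create_log_group(logGroupName=GROUP)
    logs.create_log_stream(logGroupName=GROUP, logStreamName=STREAM)

    # What an appender effectively does for each batch of log lines:
    logs.put_log_events(
        logGroupName=GROUP,
        logStreamName=STREAM,
        logEvents=[{"timestamp": int(time.time() * 1000), "message": "application started"}],
    )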

How to log from multiple EC2 instances (load balanced) to a common server using AWS

How can I log from multiple EC2 instances (load balanced) to a common server using AWS?
I have multiple images of EC2 instances with Apache servers, and I want to send all of their log data to a common server.
Does AWS provide any tools for doing this?
Amazon CloudWatch has this feature: you can collect logs from multiple servers and monitor them through the CloudWatch console. See the steps here:
http://cloudacademy.com/blog/centralized-log-management-with-aws-cloudwatch-part-1-of-3/
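The rough idea: install the CloudWatch Logs agent on every instance and point it at the Apache log files; each instance then ships to the same log group, with one stream per instance. A minimal unified-agent config could look like this (file paths and group names are assumptions):

    {
      "logs": {
        "logs_collected": {
          "files": {
            "collect_list": [
              {
                "file_path": "/var/log/httpd/access_log",
                "log_group_name": "apache/access",
                "log_stream_name": "{instance_id}"
              },
              {
                "file_path": "/var/log/httpd/error_log",
                "log_group_name": "apache/error",
                "log_stream_name": "{instance_id}"
              }
            ]
          }
        }
      }
    }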

Getting Cloudwatch EC2 server health monitoring into ElasticSearch

I have an AWS account with several EC2 servers and an Elasticsearch domain set up to take the syslogs from these servers. However, in CloudWatch, and when investigating a specific server instance in the EC2 console, I see specific metrics and graphs for things like CPU, memory load, and storage use. Is there some way I can pipe this information into my Elasticsearch domain as well?
Set up Logstash and use this plugin https://github.com/EagerELK/logstash-input-cloudwatch
Or go the other way: use the AWS CloudWatch Logs agent to put your syslogs into CloudWatch and stop using Elasticsearch.
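If you would rather not run Logstash, the same idea (poll CloudWatch, index into Elasticsearch) can be scripted directly. A minimal sketch with boto3 and the elasticsearch Python client, assuming a hypothetical endpoint, instance ID, and index name (elasticsearch-py 8.x uses document=; older versions use body=):

    from datetime import datetime, timedelta

    import boto3
    from elasticsearch import Elasticsearch

    cloudwatch = boto3.client("cloudwatch")
    es = Elasticsearch("https://my-es-domain.example.com:443")   # hypothetical endpoint

    now = datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],   # placeholder
        StartTime=now - timedelta(minutes=15),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )

    for point in resp["Datapoints"]:
        # One document per datapoint so it can be graphed over time in Kibana.
        es.index(
            index="cloudwatch-metrics",
            document={
                "metric": "CPUUtilization",
                "instance": "i-0123456789abcdef0",
                "value": point["Average"],
                "timestamp": point["Timestamp"].isoformat(),
            },
        )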