Are there failover logs in Google Cloud SQL?

I had a failover on my PostgreSQL instance on GCP. I have logs and metrics for the instance and everything looks fine, but there is no log entry giving the reason for the failover (such as a network or zone failure). Is there any way to find out why the failover happened?

This information is only available if you have a (paid) support service. The steps are:
1. Purchase a support plan.
2. Open a ticket asking for the failover details.
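Even without a support plan, you can at least narrow down the time window and symptoms yourself. A sketch, assuming the gcloud CLI is authenticated and `PROJECT_ID`/`INSTANCE_ID` are replaced with your own values; these logs show what happened around the failover, though usually not the root cause:

```shell
# List recent operations on the instance; failovers appear here with timestamps.
gcloud sql operations list \
  --instance=INSTANCE_ID \
  --project=PROJECT_ID \
  --limit=20

# Pull the instance's Cloud Logging entries around the failover window
# (adjust the timestamp filter to bracket the event).
gcloud logging read \
  'resource.type="cloudsql_database" AND resource.labels.database_id="PROJECT_ID:INSTANCE_ID" AND timestamp>="2023-01-01T00:00:00Z"' \
  --project=PROJECT_ID --limit=50
```

The actual trigger (e.g. a zonal incident on Google's side) is internal telemetry, which is why the support ticket is needed for a definitive answer.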

Related

Extract gcloud VM Instance Monitoring Data to BigQuery

Outline
We are running an ecommerce platform on Google Cloud on a dedicated VM instance. Most of our traffic happens on Monday, as that is when we send our newsletters to our customer base. Because of that we have huge traffic peaks each Monday.
Goal
Because of this peak traffic we need to make sure we understand how much server load a single user generates on average. To achieve this, we want to correlate our VM instance monitoring data with our Google Analytics data in Google Data Studio, to get a better understanding of the dynamics mentioned above.
Problem
As far as we are aware (based on the docs), Google Data Studio cannot consume data directly from the gcloud SDK. Given that, we tried to extract the data via BigQuery, but there we also couldn't find a way to access the monitoring data of our VM instance.
Therefore we are looking for a way to get the monitoring data of our VM instances into Google Data Studio (preferably via BigQuery). Thank you for your help.
Here is Google's official solution for monitoring export.
That page describes how to export monitoring metrics to a BigQuery dataset.
The solution's deployment uses Pub/Sub, App Engine, Cloud Scheduler and some Python code.
I think you only need to export the metrics listed here.
Once the export process completes successfully, you can use Google Data Studio to visualize your metric data.
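If you just want a one-off extract rather than the full scheduled pipeline, a rough sketch is to pull time series from the Monitoring API and load them with `bq`. This is not the official solution; `PROJECT_ID`, `DATASET` and `TABLE` are placeholders, and the JSON needs flattening to newline-delimited form before loading:

```shell
# Fetch CPU utilization time series for a time window from the Monitoring API.
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/timeSeries?filter=metric.type%3D%22compute.googleapis.com%2Finstance%2Fcpu%2Futilization%22&interval.startTime=2023-01-01T00:00:00Z&interval.endTime=2023-01-02T00:00:00Z" \
  > cpu.json

# After converting cpu.json to newline-delimited JSON (one row per point):
bq load --source_format=NEWLINE_DELIMITED_JSON DATASET.TABLE cpu_rows.json
```

Once the rows are in BigQuery, Data Studio can use the table as a native data source.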

Centralized Logs Storage for Load Balanced Managed Instance Group

I have set up a Managed Instance Group with 3 initial instances (I installed Lumen inside, and the web server auto-starts) to be used with the GCP load balancer. The LB works great.
However, whenever I need to trace Lumen logs, I have to SSH into every single instance to view them. Is there a best practice for one centralized log store I can use?
Can I mount the Lumen logs onto a centralized disk, e.g. a GCP Filestore volume or a Google Cloud Storage bucket, or use Fluentd to ship my logs into GCP Logging?
I'd like to know the best industry practice. Thanks
Stackdriver is the right option for your case.
https://cloud.google.com/logging/docs/agent/installation#joint-install
Install the Stackdriver Logging agent on your Compute Engine instances. You can follow your logs live, and you can also build visualizations and useful analyses on top of them. Stackdriver is the de facto standard for people using GCP, but keep the pricing in mind; please check the pricing details.
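As a sketch, installation on a Debian/Ubuntu instance follows the linked docs, and you then point the agent (which is Fluentd-based) at Lumen's log file. The Lumen log path below is an assumption; adjust it to where your app actually writes:

```shell
# Install the Cloud Logging (Stackdriver) agent as per the docs.
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install

# Tell the agent to tail Lumen's log file (path is an assumption).
sudo tee /etc/google-fluentd/config.d/lumen.conf <<'EOF'
<source>
  @type tail
  path /var/www/lumen/storage/logs/lumen.log
  pos_file /var/lib/google-fluentd/pos/lumen.pos
  tag lumen
  format none
</source>
EOF

sudo service google-fluentd restart
```

Bake these steps into your instance template's startup script (or a custom image) so every instance the MIG launches ships its logs automatically, and you can then search all instances' logs in one place in the Logs Explorer.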

Monitor an AWS RDS DB instance with Zabbix

Currently, I'm trying to monitor an AWS RDS DB instance (MySQL/MariaDB) with Zabbix, but I'm experiencing several problems:
I have the script placed in the externalscripts folder on the Zabbix server, and the template (https://github.com/datorama/zabbix_rds_template) has my AWS access data properly filled in. The host has been added as well, but Zabbix doesn't retrieve any data from the AWS RDS instance (all graphs display no data).
How could I check whether the Zabbix server is able to reach the RDS instance, so I can start ruling things out?
Does anyone know the correct way to add an AWS RDS host in Zabbix?
Any suggestion or advice is always welcome.
Thanks in advance
Kind Regards
KV.
Have you checked the Zabbix log? Maybe the user you are running as doesn't have enough permissions. Try running the script from the Zabbix server shell.
Is your Zabbix server in AWS? If so, give it an IAM role with read access to CloudWatch and RDS.
Also, I don't like using credentials in a script; I prefer configuring them with the AWS CLI.
http://docs.aws.amazon.com/es_es/cli/latest/userguide/cli-chap-getting-started.html
Here is another example of monitoring AWS CloudWatch metrics with Zabbix that I use.
https://www.dbigcloud.com/cloud-computing/230-integrando-metricas-de-aws-cloudwatch-en-zabbix.html
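To rule things out, you can run the same checks the template relies on directly from the Zabbix server shell. A sketch, assuming the AWS CLI is configured there; `mydbinstance` and the endpoint hostname are placeholders:

```shell
# 1) Can this host reach the RDS API and see the instance at all?
aws rds describe-db-instances --db-instance-identifier mydbinstance

# 2) Can it pull the same CloudWatch metric the template graphs?
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name CPUUtilization \
  --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T01:00:00Z \
  --period 300 --statistics Average

# 3) Is the MySQL port itself reachable (endpoint is an example)?
nc -zv mydbinstance.abc123.us-east-1.rds.amazonaws.com 3306
```

If step 1 or 2 fails, the problem is credentials/IAM, not Zabbix; if they succeed but the graphs stay empty, look at the item configuration and the external script's permissions.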

Getting Cloudwatch EC2 server health monitoring into ElasticSearch

I have an AWS account with several EC2 servers and an ElasticSearch domain set up to receive the syslogs from these servers. However, in CloudWatch, and when investigating a specific server instance in the EC2 console, I see specific metrics and graphs for things like CPU, memory load, storage use, etc. Is there some way I can pipe this information into my ElasticSearch as well?
Set up Logstash and use this plugin: https://github.com/EagerELK/logstash-input-cloudwatch
Or go the other way: use the AWS Logs agent to put your syslogs into CloudWatch and stop using ElasticSearch.
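For the Logstash route, a minimal sketch of the setup: install the plugin, then run a pipeline that polls CloudWatch and writes into your existing domain. The ElasticSearch endpoint is a placeholder, and the exact plugin option names should be checked against the plugin's README for your version:

```shell
# Install the CloudWatch input plugin into Logstash.
bin/logstash-plugin install logstash-input-cloudwatch

# Minimal pipeline: poll EC2 CPU metrics, index them into ElasticSearch.
cat > cloudwatch.conf <<'EOF'
input {
  cloudwatch {
    namespace => "AWS/EC2"
    metrics   => ["CPUUtilization"]
    region    => "us-east-1"
    interval  => 300
  }
}
output {
  elasticsearch {
    hosts => ["https://my-es-domain.us-east-1.es.amazonaws.com:443"]
    index => "cloudwatch-%{+YYYY.MM.dd}"
  }
}
EOF

bin/logstash -f cloudwatch.conf
```

Note that metrics like memory and disk usage are not hypervisor-visible, so they only appear in CloudWatch if an agent on the instance publishes them as custom metrics first.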

Amazon ec2 set up best practices for Rails app with mysql or postgres

I have to set up EC2 for a medium-sized Rails app running on Apache2, MySQL, Capistrano and a few background services. I would like to know the best practices developers usually follow when setting up a Rails app. I'm looking for a setup that is easy to scale and covers at least:
auto deployment
security
regular data backup and an easy and quick way to restore the data
server recovery
fault tolerance
I am also interested in how to monitor server status and performance; any other kind of best practice would also be helpful.
PS: also take into account that my app's database will grow fast.
I think a good look at the AWS docs, and in particular the Architecture Center, would be the best place to start. However, let me address as many of your questions as I can.
Database
The easiest way to get a scalable, fault-tolerant database on AWS is to use the Relational Database Service. You should read the docs and best practices to ensure you get the most out of it, i.e. a Multi-AZ deployment.
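As a sketch, creating a Multi-AZ MySQL instance via the AWS CLI looks like this; the identifier, instance class, sizes and credentials are all placeholders to adapt:

```shell
# Create a MySQL RDS instance with a synchronous standby in a second AZ.
aws rds create-db-instance \
  --db-instance-identifier myapp-db \
  --engine mysql \
  --db-instance-class db.t3.medium \
  --allocated-storage 50 \
  --master-username admin \
  --master-user-password 'CHANGE_ME' \
  --multi-az
```

With `--multi-az`, RDS maintains a standby replica and fails over automatically, which also covers part of your fault-tolerance and recovery requirements for the database tier.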
EC2 Servers
The most recommended way to structure your servers is to decouple them into Web Servers (serve HTML to users) and App Servers (application logic; usually return JSON or XML, etc.). See this architecture example.
However, the key is to use an AutoScaling group behind an Elastic Load Balancer.
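A minimal sketch of that pattern with the AWS CLI, using a classic ELB as in the linked example; the AMI ID, names and AZs are placeholders:

```shell
# Define what each instance in the group looks like.
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc \
  --image-id ami-12345678 \
  --instance-type t2.micro

# Create the group behind an existing load balancer, spread across two AZs.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 2 --max-size 6 --desired-capacity 2 \
  --load-balancer-names my-elb \
  --availability-zones us-east-1a us-east-1b
```

Scaling policies (e.g. on CPU utilization) can then grow and shrink the group automatically around your Monday-style traffic peaks.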
Automation
If you want to use Capistrano, just install it on your servers. You could create a pre-configured AMI with it installed, along with whatever else you want. Alternatively, you could install it in a deployment script. However, the most recommended method for this kind of thing is to use the AWS OpsWorks service, which is essentially Chef in the cloud.
Server Recovery & Fault Tolerance
If you use EC2 Auto Scaling and a server becomes unavailable (e.g. its hardware fails or it stops replying to EC2 health checks), Auto Scaling will automatically terminate it and launch a replacement.
With the addition of the ELB and ELB health checks, instances that stop responding to web requests can be brought out of service by the ELB.
You need to read the docs for more info on this.
Backup and Recovery
For backing up data on EBS volumes attached to EC2 instances, use EBS Snapshots. However, the best architectures keep EC2 instances stateless: they store nothing except application code, so if they died it wouldn't matter. In such setups all data, including user files, can be stored on S3. On S3 you have a number of backup options, such as Cross-Region Replication and/or data archiving to Glacier.
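Both pieces can be driven from the CLI. A sketch, with the volume ID and bucket name as placeholders:

```shell
# Snapshot a data volume (run from cron for regular backups).
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "nightly backup"

# Archive objects older than 90 days in an S3 bucket to Glacier.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-user-files \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-files",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]
    }]
  }'
```

Restoring from a snapshot is just creating a new volume from it and attaching that to an instance, which covers the "easy and quick way to restore" requirement for EBS-hosted data.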
Monitoring
AWS provides CloudWatch, which gives you hypervisor-visible metrics such as network in/out, CPU utilization and more. If you want more data, you can push custom metrics for things like memory usage. In addition to CloudWatch, you could use a server-level monitoring tool.
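Pushing a custom metric is a one-liner from the instance itself. A sketch for memory usage on Linux, suitable for cron; the `MyApp` namespace and metric name are our own choices:

```shell
# Compute used-memory percentage from `free` and publish it to CloudWatch.
USED=$(free -m | awk '/^Mem:/ {printf "%.1f", $3*100/$2}')
aws cloudwatch put-metric-data \
  --namespace "MyApp" \
  --metric-name MemoryUtilization \
  --unit Percent \
  --value "$USED" \
  --dimensions InstanceId=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
```

Tagging the data point with the instance ID lets you graph and alarm on memory per instance alongside the built-in metrics.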
Deployment
I recommend AWS Code Deploy.
Security
Use Security Groups to open only the ports you want users to be able to connect on. Also, use security groups to lock down sensitive ports, e.g. 22, to only a specific set of IPs. You can also use Network ACLs to block undesired traffic. AWS provides more information and suggestions here.
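For example (the group ID and office CIDR below are placeholders):

```shell
# HTTPS open to the world for your users.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# SSH restricted to a single trusted address.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.10/32
```

The same idea extends to the tiers above: give the database's security group an inbound rule that only accepts MySQL traffic from the app servers' security group rather than from any IP range.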
I also recommend you read this Whitepaper.