Is this normal for GCP Cloud SQL disk usage - google-cloud-platform

I created a Cloud SQL database for learning purposes a while ago and have basically never used it for anything. Yet the storage / disk space keeps climbing:
[Updated screenshot: disk usage over time]
I've updated the image to show the timescale: this climb seems to happen within just a few hours!
Is this normal? If not, how do I troubleshoot and prevent this steady climb? The only operations against the database seem to be backup operations; I'm not running anything myself (as far as I know).
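One rough way to check what the instance is actually doing, and how its storage is configured, is from the gcloud CLI; a minimal sketch, assuming a placeholder instance name of my-learning-db:

    # List recent operations on the instance (scheduled backups show up here).
    gcloud sql operations list --instance=my-learning-db --limit=20

    # See whether automatic storage increase, backups and binary logging / PITR
    # are enabled; any of these can add to the reported disk size with no traffic.
    gcloud sql instances describe my-learning-db \
      --format="yaml(settings.storageAutoResize,settings.dataDiskSizeGb,settings.backupConfiguration)"

If binary logging or point-in-time recovery is enabled, log retention alone can contribute to a steady climb even on an otherwise idle instance.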

Related

GCP Cloud SQL PITR, how long can it take?

We had a rather disastrous event with a database at work (a MySQL database managed on Cloud SQL), and thankfully we have point-in-time recovery enabled.
We went ahead and cloned the production database to a point in time just before the disaster in order to recover the data, but the clone has now been running for more than 5 hours for a 69 GB database (the size advertised on the GCP Cloud SQL panel, so the real size of the DB is probably less than that).
Does anyone have experience with this?
The status of the operation, when queried with the gcloud CLI, is "RUNNING". We checked the instance's logs to see if anything was off, but there are no log entries at all.
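For what it's worth, you can watch a long-running clone/restore operation from the CLI instead of the console; a minimal sketch, where my-clone and OPERATION_ID are placeholders:

    # Find the long-running operation on the target (cloned) instance.
    gcloud sql operations list --instance=my-clone --limit=5

    # Inspect its status, operation type and any error detail.
    gcloud sql operations describe OPERATION_ID

    # Or block until it finishes (raise the timeout for large restores).
    gcloud sql operations wait OPERATION_ID --timeout=unlimited

This won't make the restore any faster, but it at least confirms whether the operation is still RUNNING or has finished with an error.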

Google Cloud SQL - Database instance storage size increased dramatically everyday

I have a database instance (MySQL 8) on Google Cloud, and for the last 20 days the instance's storage usage just keeps increasing (approximately 2 GB every single day!).
But I couldn't find out why.
What I have done:
Took a look at the "Point-in-time recovery" option; it's already disabled.
Binary logging is not enabled.
Checked the actual database size; my database is only 10 GB.
The innodb_file_per_table flag isn't set, so it's using whatever the default is.
Storage usage chart: [screenshot]
Database flags: [screenshot]
The actual database size is 10 GB, but the storage usage is now up to 220 GB! That's a lot of money!
I couldn't resolve this issue; please give me some tips. Thank you!
I had the same thing happen to me about a year ago. I couldn't determine any root cause of the huge increase in storage size. I restarted the server and the problem stopped. None of my databases experienced any significant increase in size. My best guess is that some runaway process caused the binlog to blow up.
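If you suspect the binary log, you can check its footprint directly from a MySQL client before resorting to a restart; a minimal sketch with placeholder connection details:

    # List the binary log files the instance is keeping, with their sizes.
    mysql --host=10.0.0.3 --user=root -p -e "SHOW BINARY LOGS;"

    # Compare that with the logical size of the schemas themselves.
    mysql --host=10.0.0.3 --user=root -p -e "
      SELECT table_schema,
             ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS size_gb
      FROM information_schema.tables
      GROUP BY table_schema;"

If the binlog total dwarfs the table sizes, the growth is log retention rather than data.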
It turned out the problem was in a WordPress theme function called "related_products", which reads and writes a record for every product a user comes across (millions per day) and made the database physically blow up.

Restore metrics after Google Cloud SQL (Postgres) crash

A couple of days ago our Google Cloud SQL instance "crashed", or at least stopped responding. It recovered and works again, and Query Insights and so on are back.
However, most metrics, like CPU utilization, storage usage and memory usage, are currently not available. I thought those would recover automatically as well, but after 2 days I wonder whether something needs to be done manually.
Is there something I can do other than restarting the database (which would only be my last resort)?
Okay, after waiting around 3 days the metrics are working again.
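In case it happens again: one way to tell whether the data points are missing or just not rendering in the console is to query Cloud Monitoring directly; a minimal sketch using a placeholder project ID and GNU date:

    PROJECT=my-project
    # Pull raw disk-usage samples for the last 6 hours from the Monitoring API.
    curl -s -G -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://monitoring.googleapis.com/v3/projects/${PROJECT}/timeSeries" \
      --data-urlencode 'filter=metric.type="cloudsql.googleapis.com/database/disk/bytes_used"' \
      --data-urlencode "interval.startTime=$(date -u -d '-6 hours' +%Y-%m-%dT%H:%M:%SZ)" \
      --data-urlencode "interval.endTime=$(date -u +%Y-%m-%dT%H:%M:%SZ)"

If the API returns time series but the console charts stay empty, the problem is on the dashboard side rather than with the instance itself.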

CouchDB with the Clouseau plugin is taking more storage than expected

I've been using an AWS instance running CouchDB as a backup of my application's IBM Cloudant database (using replication).
Everything seems to work fine, but I've noticed a permanent increase in the volume size on the AWS instance (it fills up all the time, with the annoying chore of growing the volume whenever the partition runs out of space).
[Screenshot: actual storage use]
The data in the screenshot is using almost 250 GB. I would like to know the possible reason for this; my guess is that the Clouseau plugin is using extra space for the search indexes.
As I'm not an expert with this database, could anyone explain why this is happening and how I could mitigate the issue?
My best regards!
If you are only backing up a Cloudant database to a CouchDB instance via replication, you should not need Clouseau enabled.
Clouseau is only required for search indexes, and if you are not running search queries against your backup database you can disable Clouseau there. The indexes are not backed up by replication anyway.
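If you want to confirm where the space is actually going before disabling anything, the database info endpoint reports the on-disk size of the .couch files, and the Lucene indexes Clouseau writes live in a separate directory; a minimal sketch where the credentials, database name and paths are only guesses (check local.ini and clouseau.ini for the real directories):

    # On-disk vs. active size of the database itself.
    curl -s http://admin:password@localhost:5984/mydb | jq '.sizes'

    # Database shards and view index files (database_dir from local.ini).
    du -sh /opt/couchdb/data/shards /opt/couchdb/data/.shards

    # Lucene indexes written by Clouseau (the "dir" setting in clouseau.ini).
    du -sh /srv/search_index

Comparing those three numbers should show whether the growth comes from the data, the view indexes, or the search indexes.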

Data-intensive process on EC2 - any tips?

We are trying to run an ETL process on a High I/O instance on Amazon EC2. The same process run locally on a very well equipped laptop (with an SSD) takes about 1/6th the time. The process basically transforms data (30 million rows or so) from flat tables into a third-normal-form schema within the same Oracle instance.
Any ideas on what might be slowing us down?
Another option is to simply move off of AWS and rent beefy boxes (raw hardware) with SSDs from something like Rackspace.
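Before moving anywhere, it may be worth confirming that disk I/O really is the bottleneck by benchmarking the EC2 volume against the laptop's SSD; a minimal fio sketch, where /oracle_data is a placeholder for a directory on the same volume as the Oracle data files:

    # Random 8K writes, roughly resembling redo/undo-heavy ETL activity.
    fio --name=randwrite --directory=/oracle_data --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=8k --size=2G --numjobs=4 --iodepth=32 --group_reporting

    # Large sequential reads, resembling full scans of the flat source tables.
    fio --name=seqread --directory=/oracle_data --ioengine=libaio --direct=1 \
        --rw=read --bs=1M --size=2G --numjobs=1 --iodepth=16 --group_reporting

If the EC2 numbers come out far below the laptop's, the storage layer is the problem; if they're comparable, look at CPU, memory, or how the transformation itself is written.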
We have moved most of our ETL processes off of AWS/EMR. We host most of them on Rackspace and get a lot more CPU/storage/performance for the money. Don't get me wrong, AWS is awesome, but there comes a point where it's not cost-effective. On top of that, you never really know how they are managing/virtualizing the hardware underneath your specific application.
My two cents.