Google SQL Instance

I created a Google Cloud SQL instance to evaluate the service. I created one database, and after my trial expired, my SQL instance was suspended even though I have since upgraded my account. Is there any way to recover my database?
Best Regards,
Mazhar

I recommend looking over the documentation, as it depends on the resource and how long it took you to upgrade your account. If it's been longer than 30 days, reaching out to Support is your best option, but there's no guarantee they can recover it. Once you've upgraded, you can also check whether the instance itself still exists (see the sketch after the quoted docs below).
End of Trial
Any data you stored in Compute Engine is marked for deletion and might be lost. Learn more about data deletion on Google Cloud.
Your Cloud Billing account enters a 30-day grace period, during which you can recover resources and data you stored in any Google Cloud services during the trial period.
Deletion Timeline
Google Cloud Platform commits to delete Customer Data within a maximum period of about six months (180 days). This commitment incorporates the stages of Google's deletion pipeline described above, including:
Stage 2 - Once the deletion request is made, data is typically marked for deletion immediately and our goal is to perform this step within a maximum period of 24 hours. After the data is marked for deletion, an internal recovery period of up to 30 days may apply depending on the service or deletion request.
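If you just want to confirm whether the instance survived the grace period once the account is upgraded, a minimal sketch using the SQL Admin API might look like this (assuming application-default credentials and the google-api-python-client package; my-project is a placeholder project ID):

    from googleapiclient import discovery

    # Build a client for the Cloud SQL Admin API.
    service = discovery.build("sqladmin", "v1beta4")

    # List every SQL instance the project still knows about.
    response = service.instances().list(project="my-project").execute()

    for instance in response.get("items", []):
        # An instance that survived shows up here; "state" tells you
        # whether it is RUNNABLE, SUSPENDED, etc.
        print(instance["name"], instance["state"])

If the instance no longer appears at all, it has left the recovery window and Support is the only remaining avenue.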

Related

AWS Backup Not Transitioning to Cold Storage

Our AWS costs are increasing at a steady rate each month. Looking into it, I found that none of our backups are transitioning to cold storage, even though every plan has a transition period set and the retention in cold storage is configured well past the required 90 days.
I have read the documentation and cannot see where I am going wrong. Any ideas?
Here is what is in the Vault; every snapshot taken says the same.
It turned out I was trying to transition AMIs, when only EFS is supported.
Each lifecycle rule contains an array of transition objects specifying how long in days before a recovery point transitions to cold storage, or is deleted. As of now, the transition to cold storage is ignored for all resources except for Amazon EFS.
From here: https://aws.amazon.com/blogs/storage/automating-backups-and-optimizing-backup-costs-for-amazon-efs-using-aws-backup/
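For reference, a hedged boto3 sketch of the lifecycle structure that quote describes; plan, rule, and vault names are placeholders, and MoveToColdStorageAfterDays only takes effect for EFS recovery points:

    import boto3

    backup = boto3.client("backup")

    response = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "efs-cold-storage-plan",  # placeholder
            "Rules": [
                {
                    "RuleName": "daily-to-cold",
                    "TargetBackupVaultName": "Default",
                    "ScheduleExpression": "cron(0 5 * * ? *)",  # daily, 05:00 UTC
                    "Lifecycle": {
                        # Honored only for EFS; other resource types
                        # silently stay in warm storage.
                        "MoveToColdStorageAfterDays": 30,
                        # Must be at least 90 days after the transition.
                        "DeleteAfterDays": 120,
                    },
                }
            ],
        }
    )
    print(response["BackupPlanId"])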

How to restrict the credits allocated for a user under an organization / domain in GCP?

Is it possible to set a maximum quota (of credits) usable by a user?
For example, setting a limit of 1000 USD for a particular user per month or year.
What I have looked at so far:
1. Guide to Cloud Billing Resource Organization & Access Management
2. Working with Quotas
3. How-to Guides -- Cloud Billing
4. Resource Quotas
I believe none of these articles has the answer, so please at least let me know whether it's possible.
No, it's not possible per user. You can only set budget alerts per project, with several thresholds for notification, and only for notification: the project won't be shut down when the limit is reached.
You can also send these alerts to Pub/Sub and then perform some special operation, such as deactivating billing on the project (and thus stopping the project entirely). There is an example of this pattern in the Cloud Billing documentation, and a sketch follows below.
This workaround isn't perfect, in particular when users use BigQuery: if one user overspends, the whole project is impacted. Per-user limits on billed bytes for BigQuery may be released one day.
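A hedged sketch of that pattern: a Pub/Sub-triggered Cloud Function that detaches the billing account once cost exceeds budget. It follows the shape of the Cloud Billing cost-control examples rather than copying them; PROJECT_ID and the function name are placeholders, and detaching billing stops every service in the project, not just the overspending user:

    import base64
    import json

    from googleapiclient import discovery

    PROJECT_ID = "my-project"  # placeholder
    PROJECT_NAME = f"projects/{PROJECT_ID}"

    def stop_billing(event, context):
        """Entry point for a budget notification delivered via Pub/Sub."""
        data = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        if data["costAmount"] <= data["budgetAmount"]:
            return  # still under budget, nothing to do

        billing = discovery.build("cloudbilling", "v1", cache_discovery=False)
        # Detaching the billing account halts all billable usage.
        billing.projects().updateBillingInfo(
            name=PROJECT_NAME,
            body={"billingAccountName": ""},
        ).execute()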

BigQuery dataset complete deleted/vanished

I've been storing analytics in a BigQuery dataset for over 1.5 years now, and have hooked up Data Studio and other tools to analyse the data. However, I very rarely look at this data. Now I logged in to check it, and it's just completely gone: no trace of the dataset, and no audit log anywhere showing what happened. I've tracked down when it disappeared via the billing history, and it seems it was mysteriously deleted in November last year.
My question to the community is: is there any hope that I can find out what happened? I'm thinking audit logs, etc. Does BigQuery have any table-level logging? For how long does GCP store these things? I understand the data is probably deleted, since it was last seen so long ago; I'm just trying to understand whether we were hacked in some way.
I mean, ~1 TB of data can't just disappear without leaving any traces?
Usually, Cloud Audit Logging is used for this:
Cloud Audit Logging maintains two audit logs for each project and organization: Admin Activity and Data Access. Google Cloud Platform services write audit log entries to these logs to help you answer the questions of "who did what, where, and when?" within your Google Cloud Platform projects.
Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. They are always enabled. There is no charge for your Admin Activity audit logs.
Data Access audit logs record API calls that create, modify, or read user-provided data. To view the logs, you must have the IAM roles Logging/Private Logs Viewer or Project/Owner. ... BigQuery Data Access logs are enabled by default and cannot be disabled. They do not count against your logs allotment and cannot result in extra logs charges.
The problem for you is the retention period for Data Access logs: 30 days (Premium Tier) or 7 days (Basic Tier). Of course, for longer retention, you can export audit log entries and keep them for as long as you wish. So if you did not do this, those entries are lost, and your only option is to contact Support, I think.
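If any entries are still within the retention window, a minimal sketch for searching them might look like this (assuming the google-cloud-logging package and the legacy BigQuery audit-log method name; my-project is a placeholder):

    from google.cloud import logging

    client = logging.Client(project="my-project")

    # Audit entries recording BigQuery dataset deletions.
    log_filter = (
        'resource.type="bigquery_resource" '
        'AND protoPayload.methodName="datasetservice.delete"'
    )

    for entry in client.list_entries(filter_=log_filter):
        # Each entry records who issued the call and when.
        print(entry.timestamp, entry.payload)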

Google Cloud Storage network usage

I am using GCS to store images for my Android application.
I was searching the Google Cloud Platform console but haven't found network usage or anything that shows how many people uploaded/downloaded how many files/bytes of data. They tell you how they calculate the price, using network and Class A/B operations, but I can't find a place to track this data myself.
You have to export these logs to BigQuery; you can't find them in the GCP console (see the sketch after the link below).
Storage logs are generated once a day and contain the storage usage for the previous day. They are typically created before 10:00 am PST.
Usage logs are generated hourly when there is activity to report in the monitored bucket. Usage logs are typically created 15 minutes after the end of the hour.
https://cloud.google.com/storage/docs/access-logs
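A hedged sketch of that workflow: enable usage/storage logging on the bucket, then load the delivered log objects into BigQuery for analysis. Bucket, prefix, and table names are placeholders; it assumes the google-cloud-storage and google-cloud-bigquery packages:

    from google.cloud import bigquery, storage

    storage_client = storage.Client()

    # 1. Point the monitored bucket's usage/storage logs at a log bucket.
    bucket = storage_client.get_bucket("my-app-images")        # placeholder
    bucket.enable_logging("my-app-logs", object_prefix="log")  # placeholder
    bucket.patch()

    # 2. Later, load the delivered usage-log CSVs into BigQuery.
    bq = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # usage logs ship with a header row
        autodetect=True,      # the docs also publish a fixed schema
    )
    load_job = bq.load_table_from_uri(
        "gs://my-app-logs/log_usage_*",  # placeholder prefix
        "my_dataset.gcs_usage",          # placeholder table
        job_config=job_config,
    )
    load_job.result()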

Amazon Redshift Automated Snapshot Trigger

What actually triggers an automatic incremental backup/snapshot for Amazon Redshift? Is it time-based? The site says it "periodically takes snapshots and tracks incremental changes to the cluster since the last snapshot", and I know that whenever I modify the cluster itself (delete it, resize it, or change the node type), a snapshot is taken. But what about when a database on the cluster is altered? I have inserted, loaded, and deleted many rows, but no automatic snapshot has been taken. Would I just have to do manual backups then?
I have asked around and looked online, and no one has been able to give me an answer. I am trying to figure out an optimal backup strategy for my workload.
Automated backups are taken every 8 hours or every 5 GB of inserted data, whichever happens first.
Source: I work for AWS Redshift.
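To verify that cadence against your own cluster, a hedged boto3 sketch; cluster and snapshot identifiers are placeholders:

    import boto3

    redshift = boto3.client("redshift")

    # Inspect what the automated schedule has actually produced.
    snapshots = redshift.describe_cluster_snapshots(
        ClusterIdentifier="my-cluster",  # placeholder
        SnapshotType="automated",
    )
    for snap in snapshots["Snapshots"]:
        print(snap["SnapshotIdentifier"], snap["SnapshotCreateTime"])

    # Take a manual snapshot when the 8-hour / 5-GB cadence isn't enough.
    redshift.create_cluster_snapshot(
        SnapshotIdentifier="my-cluster-pre-load",  # placeholder
        ClusterIdentifier="my-cluster",
    )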