Google Cloud Persistent Disk Backup Strategy - google-cloud-platform

We are setting up a GCP environment with some Windows servers. Our traditional approach is to back up our data daily. I know that I can take a disk snapshot:
gcloud compute disks snapshot diskname --project=projectid --zone=zonename --snapshot-names=snapshotname
I also understand that snapshots are forever-incremental. However, I want the ability to schedule this, and I am not sure what the best approach is, or whether this is even the best way to do it.
I appreciate any guidance on backing up instances. I have this set up in AWS using Lambda; I am just not sure how to do it in GCP.

Google App Engine can schedule tasks with its Cron Service. You could use this to invoke a handler in a GAE app that calls the snapshot API.

You can now set a snapshot schedule to back up persistent disks:
https://cloud.google.com/compute/docs/disks/scheduled-snapshots
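As a minimal sketch (project, region, zone, disk, and schedule names below are placeholders), the schedule is created as a resource policy and then attached to the disk:

```shell
# Create a daily snapshot schedule, keeping each snapshot for 14 days
# (all names are placeholders).
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --project=projectid \
    --region=us-central1 \
    --daily-schedule \
    --start-time=04:00 \
    --max-retention-days=14

# Attach the schedule to an existing disk.
gcloud compute disks add-resource-policies diskname \
    --project=projectid \
    --zone=us-central1-a \
    --resource-policies=daily-backup
```

Once attached, Compute Engine takes the snapshots for you; no Cloud Function or external scheduler is needed.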

Related

In GCP, how to add a disk to a snapshot schedule with Ansible?

I'm using Ansible to deploy our stuff in GCP.
I'm looking to put a backup strategy in place for the instances in our GCP projects using scheduled snapshots.
I found how to create the disk (https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_compute_disk_module.html#ansible-collections-google-cloud-gcp-compute-disk-module) and how to create the snapshot schedule (https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_compute_resource_policy_module.html#ansible-collections-google-cloud-gcp-compute-resource-policy-module), but I did not find a way to create the link between the two.
Do I need to change my glasses, or is there really no Ansible way to do this?
Any help will be appreciated.
jcr
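One way to make the missing link is the gcloud CLI, which a playbook can call from an Ansible `command` task. A sketch of the underlying command (disk, policy, project, and zone names are all placeholders):

```shell
# Attach an existing snapshot schedule (a resource policy) to an
# existing disk; this is the step the two modules above don't cover.
gcloud compute disks add-resource-policies my-disk \
    --project=my-project \
    --zone=us-central1-a \
    --resource-policies=my-snapshot-schedule
```

Note this makes the task non-idempotent by Ansible standards; guarding it with a check of the disk's current `resourcePolicies` would be needed for clean repeated runs.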

How to Deploy a docker container with volume in Cloud Run

I am trying to publish an application I wrote in .NET Core with Docker and a mounted volume. I can't figure out or find any clear solution to my issue that will be cheap (it's for a university project).
I tried running a docker-compose via a cloudbuild.yml linked in this post with no luck. I also tried to put my db file in a Firebase project and access it from the program, but that didn't work. I also read in the GCP documentation that I could probably use Filestore, but the pricing is way out of budget for me. I need to publish an SQLite database so my server can work correctly, that's it.
Any help would be really appreciated!
Basically, you can't mount a volume in Cloud Run. It's a stateless environment and you can't persist data on it. You have to use external storage to persist your data. See the runtime contract.
With the second-generation execution environment, you can now mount a Cloud Storage bucket with GCSFuse, and a Filestore path with NFS.
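A rough sketch of the Cloud Storage option (service, image, project, and bucket names are placeholders; assumes a recent gcloud release with volume-mount support):

```shell
# Deploy on the gen2 execution environment with a Cloud Storage bucket
# mounted at /mnt/data (all names are placeholders).
gcloud run deploy my-service \
    --image=gcr.io/my-project/my-image \
    --execution-environment=gen2 \
    --add-volume=name=data,type=cloud-storage,bucket=my-bucket \
    --add-volume-mount=volume=data,mount-path=/mnt/data
```

One caveat for the SQLite use case: GCSFuse does not provide full POSIX semantics (file locking in particular), so a writable SQLite database on it can misbehave; it is safest with a single instance and light write traffic.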

Backup and restore files in ECS Tasks in AWS

Description
I have an ECS cluster that has multiple tasks. Every task is a WordPress website. These tasks automatically start and stop based on some Lambda functions. To persist the files when a task goes down for some reason I tried using EFS, but that is very slow once the burst credits run out.
Now I use the volume type Bind Mount (just the normal filesystem, nothing fancy here). The websites are a lot faster, but not persisted anymore. When an instance goes down, the files of that website are gone. ECS starts the task again, but without the files the websites break.
First solution
My first solution is to run an extra container in the task that makes a backup once a day and stores it in S3. All files are automatically packed into a .tar.gz and uploaded to S3. This all works fine, but I don't yet have a way to restore these backups. These things should be considered:
When a new task starts: check whether the current task/website already has a backup
If the latest backup should be restored: download the .tar.gz from S3 and unzip it
To realize this I think it should be a bash script, or something like it, run on startup of a task?
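The restore-on-startup step could be sketched as an entrypoint wrapper along these lines (bucket name, site variable, and web root are hypothetical; assumes the AWS CLI is available inside the container):

```shell
#!/bin/sh
set -e
BUCKET="my-backup-bucket"               # hypothetical bucket name
SITE="${SITE_NAME:?SITE_NAME not set}"  # one prefix per website
WEBROOT="/var/www/html"

# Find the newest backup for this site, if any. Keys sort correctly
# when named like backup-YYYYMMDD.tar.gz.
LATEST=$(aws s3 ls "s3://${BUCKET}/${SITE}/" | awk '{print $4}' | sort | tail -n 1)

if [ -n "$LATEST" ]; then
    aws s3 cp "s3://${BUCKET}/${SITE}/${LATEST}" /tmp/restore.tar.gz
    tar -xzf /tmp/restore.tar.gz -C "$WEBROOT"
fi

exec "$@"   # hand off to the container's normal command (e.g. apache)
```

Setting this script as the image's ENTRYPOINT makes the check-and-restore run automatically on every task start.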
Possible second solution
Another solution, which I think is a lot cleaner, is to mount EFS on each task and sync data between the Bind Mount and EFS, instead of having an extra container doing daily backups. This way EFS becomes a backup storage location rather than the working file system for my websites. Other pros: the tasks/websites will have more recent backups, and I have more CPU and memory left on the EC2 instances in my ECS cluster for other tasks.
Help?
I would like some opinions on the solutions above, and maybe some advice on whether the second solution is any good and some tips on how to implement it. Any other advice would be helpful too!

Daily automatic backup of a MySQL database from an Amazon RDS snapshot to another server via SSH

I would like to copy/clone the database inside the newest RDS snapshot on Amazon RDS to a specific server outside of Amazon. I'm looking for a way to back up its MySQL database. I would also like a daily cron job that triggers a mysqldump from the newest RDS snapshot directly and copies this MySQL dump to a location on a different server via SSH or FTP.
At the moment the steps are way too time-consuming: you need to restore the snapshot to a new RDS instance, access the database, dump it on the local PC, and then upload it to another server.
Is there perhaps a different approach or alternative? Thanks already for any good hint, advice, or help!
RDS snapshots are an AWS-specific thing.
What you are describing is a traditional database dump, which is pretty much your only option, since you cannot download an RDS snapshot. There are products that offer this, but what you are doing is already the most common way. The only other option is to use an AWS instance to do it on a cron.
If you really want the data from an RDS snapshot, it will add time, but you could clone the RDS snapshot into a temporary instance and back that up to a file.
There is no way to do a mysqldump directly from an RDS snapshot. You have to restore the snapshot to a new server instance and then take a mysqldump of that new server instance.
It sounds like you need to forget about the RDS snapshots and just do daily mysqldumps directly from the database server, via a cron job.
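That daily-dump approach could be sketched like this (hostname, database, user, and destination are placeholders; assumes the machine running cron can reach the RDS endpoint and has the password in `DB_PASSWORD`):

```shell
#!/bin/sh
set -e
# Nightly dump straight from the live RDS MySQL endpoint (placeholder names).
HOST="mydb.abc123.us-east-1.rds.amazonaws.com"
DB="mydatabase"
OUT="/tmp/${DB}-$(date +%Y%m%d).sql.gz"

# --single-transaction gives a consistent dump of InnoDB tables
# without locking them for the duration of the dump.
mysqldump -h "$HOST" -u backup_user -p"$DB_PASSWORD" \
    --single-transaction "$DB" | gzip > "$OUT"

# Ship the compressed dump to the external server over SSH.
scp "$OUT" backup@backup.example.com:/backups/
```

A crontab entry such as `0 3 * * * /usr/local/bin/rds-dump.sh` then runs it daily, skipping the snapshot-restore round trip entirely.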

Steps to create AWS snapshot automatically

I checked this answer, but the steps to follow to create an automatic AWS snapshot from an EBS volume are not clear.
In the AWS documentation they talk about creating a new snapshot, but not about how to do it automatically.
Can anyone tell me the steps to take snapshots automatically on a regular basis?
You can use the AWS CLI and create a scheduled task locally on your computer (Windows Task Scheduler or cron) to run this command daily or weekly, for example:
aws ec2 create-snapshot --volume-id vol-xxxxxxxxx --description "xxxxxxxxxxxxxxx"
(The older EC2 API tools offered the same operation as ec2-create-snapshot.)
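On Linux, the scheduling half is a one-line crontab entry (the volume ID stays a placeholder; assumes the AWS CLI is installed and configured with credentials):

```shell
# Take a snapshot of the volume every day at 02:30.
30 2 * * * aws ec2 create-snapshot --volume-id vol-xxxxxxxxx --description "daily backup"
```

Note that old snapshots accumulate (and are billed) until deleted, so a companion job to prune snapshots past a retention window is usually wanted as well.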
You may want to try this versatile script https://github.com/evannuil/aws-snapshot-tool