I've been tasked with ensuring all volumes are covered by a snapshot policy. I have multiple organizations. What's the best/easiest way to make sure everything is getting backed up via a snapshot policy?
I can list snapshots and snapshot policies, but how do I make sure nothing's failing to get backed up?
gcloud compute resource-policies list
gcloud compute snapshots list
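One approach that should work: snapshot schedules are attached to disks as resource policies, so any disk whose resourcePolicies field is empty is not covered. A minimal sketch in Python over the parsed JSON output of `gcloud compute disks list --format=json` (the disk names below are made up for illustration):

```python
import json

def disks_without_policy(disks):
    """Given parsed JSON from `gcloud compute disks list --format=json`,
    return the names of disks with no resource policy attached."""
    return [d["name"] for d in disks if not d.get("resourcePolicies")]

# Sample data standing in for real gcloud output (hypothetical names):
sample = json.loads("""
[
  {"name": "disk-a",
   "resourcePolicies": ["regions/us-central1/resourcePolicies/daily-backup"]},
  {"name": "disk-b"}
]
""")
print(disks_without_policy(sample))  # ['disk-b']
```

You'd run this once per project across your organizations. You may also be able to push the filtering server-side with something like `gcloud compute disks list --filter="-resourcePolicies:*"`, but I'd verify that filter expression against your gcloud version first.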
Does anyone know of an AWS CLI command that will list any running instance (run against a particular region) that doesn't have a snapshot available?
The closest command I've found to try would be something like:
aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[]' --region=us-east-1
I didn't actually get any return on it - just:
-------------------
|DescribeSnapshots|
+-----------------+
This is supposed to name every EC2 snapshot for each instance, so I would have to subtract these from the entire EC2 inventory to reveal the instances without snapshots.
Hence, I would like a command that shows running EC2 instances without any snapshots available, so I can put something in place going forward.
Amazon EBS Snapshots are associated with Amazon EBS Volumes, which are associated with Amazon EC2 instances.
Therefore, you would need to write a program using an AWS SDK (I'd use Python, but there are many available) that would:
Obtain a list of all EBS Snapshots (make sure you use the equivalent to --owner-ids self), in which the return data will include the associated EBS VolumeId
Obtain a list of all EBS Volumes, in which the return data will include Attachments.InstanceId
Obtain a list of all running EC2 instances
Do a bit of looping logic to find Volumes without Snapshots, and then determine which instances are associated to those Volumes.
Note that rather than finding "instances without snapshots" it has to find "instances that have volumes without snapshots".
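The looping logic above can be sketched as a pure function over data shaped like the EC2 API responses (the IDs below are made up; in a real script you'd feed it the results of boto3's `describe_snapshots(OwnerIds=['self'])`, `describe_volumes()`, and `describe_instances()`):

```python
def instances_with_unsnapshotted_volumes(running, volumes, snapshots):
    """running: ids of running instances; volumes/snapshots: lists shaped
    like the EC2 DescribeVolumes / DescribeSnapshots result items."""
    snapshotted = {s["VolumeId"] for s in snapshots}
    flagged = set()
    for v in volumes:
        if v["VolumeId"] in snapshotted:
            continue  # this volume has at least one snapshot
        for att in v.get("Attachments", []):
            flagged.add(att["InstanceId"])
    # only report instances that are actually running
    return sorted(flagged & set(running))

# Hypothetical sample data in the API's shape:
running = ["i-aaa", "i-bbb"]
volumes = [
    {"VolumeId": "vol-1", "Attachments": [{"InstanceId": "i-aaa"}]},
    {"VolumeId": "vol-2", "Attachments": [{"InstanceId": "i-bbb"}]},
]
snapshots = [{"VolumeId": "vol-1"}]
print(instances_with_unsnapshotted_volumes(running, volumes, snapshots))
```

Here `i-bbb` is reported, because its volume `vol-2` has no snapshot, which matches the "instances that have volumes without snapshots" framing above.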
I don't think there is by default a CLI command that will allow you to do this. You can tag your snapshots with your instance ids for example then can query snapshots by filtering on the tags. Or you will have to use AWS SDK and create a custom script to allow you the get all instances and then check their volume ids if they have snapshots created or not.
Well, I have a couple of questions. I have an Aurora cluster with a single MySQL RDS instance which has 450 GB of data. We use this cluster only when we are doing some specific testing, so I want to delete this cluster but keep its data available to me, so I can make a new cluster whenever we need any testing to be done.
There are a couple of ways this can be done, as far as I know:
take a snapshot of the cluster and restore the cluster from the snapshot whenever required.
back up the cluster to S3 and restore the cluster from S3 when required.
Which way is faster, and which one is more cost-efficient?
Can an entire cluster be restored from S3? If so, what are the steps involved? I found the AWS documentation a bit too messy.
If we stop an Aurora cluster, it automatically restarts within 7 days. Is there a way to prevent this automatic restart, keep it stopped when it is not required, and start it when required?
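For the snapshot option, the round trip can be sketched with boto3 (the cluster and snapshot identifiers are hypothetical, and `rds` stands for `boto3.client("rds")`; note that the cluster's DB instances have to be deleted separately, and that after restoring you also need to create a new DB instance inside the restored cluster):

```python
def park_cluster(rds, cluster_id, snapshot_id):
    """Take a manual cluster snapshot, then delete the cluster.
    The cluster's DB instances must be deleted first, separately."""
    rds.create_db_cluster_snapshot(
        DBClusterSnapshotIdentifier=snapshot_id,
        DBClusterIdentifier=cluster_id,
    )
    rds.delete_db_cluster(
        DBClusterIdentifier=cluster_id,
        SkipFinalSnapshot=True,  # we just took a manual snapshot above
    )

def revive_cluster(rds, cluster_id, snapshot_id):
    """Re-create the cluster from the snapshot.
    A DB instance must then be created in the cluster before use."""
    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier=cluster_id,
        SnapshotIdentifier=snapshot_id,
        Engine="aurora-mysql",  # assumption: match your cluster's engine
    )
```

As far as I know there is no supported way to keep a stopped cluster stopped beyond 7 days; the usual workarounds are a scheduled job that stops it again, or the delete/restore cycle above, which also stops the storage-plus-instance billing in favor of snapshot storage only.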
I have an AWS account which is used for development. Because the developers are in one timezone, we switch off the resources after hours to conserve usage.
Is it possible to temporarily switch off nodes in an ElastiCache cluster? All I found in the CLI reference was 'delete cluster':
http://docs.aws.amazon.com/cli/latest/reference/elasticache/index.html
ElastiCache clusters cannot be stopped. They can only be deleted and recreated. You can use this pattern to avoid paying for time when you're not using the cluster.
If you are using a Redis ElastiCache cluster, you can create a snapshot as the cluster is being deleted. Then, you can restore the cluster from the snapshot when you create it. This way, you preserve the data in the cluster.
The cluster endpoints are derived from a combination of
the cluster IDs,
the region,
the AWS account.
So as long as you delete and re-create clusters with those parts being constant, then the clusters will maintain the same endpoint.
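The delete-with-final-snapshot / recreate cycle might look like this with boto3 (Redis only; the cluster id, snapshot name, and node type below are placeholders, and `elasticache` stands for `boto3.client("elasticache")`):

```python
def shutdown_overnight(elasticache, cluster_id, snapshot_name):
    """Redis only: request a final snapshot as the cluster is deleted."""
    elasticache.delete_cache_cluster(
        CacheClusterId=cluster_id,
        FinalSnapshotIdentifier=snapshot_name,
    )

def bring_back_up(elasticache, cluster_id, snapshot_name):
    """Same cluster id + same account + same region => same endpoint."""
    elasticache.create_cache_cluster(
        CacheClusterId=cluster_id,
        SnapshotName=snapshot_name,
        Engine="redis",
        CacheNodeType="cache.t3.micro",  # placeholder: pick your node type
        NumCacheNodes=1,
    )
```

A pair of scheduled jobs (e.g. cron or EventBridge invoking these) would cover the after-hours pattern described in the question.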
At this time there is not a way to STOP an EMR cluster in the same sense you can with EC2 instances. The EMR cluster uses instance-store volumes, and the EC2 start/stop feature relies on the use of EBS volumes, which are not appropriate for high-performance, low-latency HDFS utilization.
The best way to simulate this behavior is to store the data in S3, ingest it as a startup step of the cluster, and then save it back to S3 when done.
Documentation Reference:
https://forums.aws.amazon.com/thread.jspa?threadID=149772
Hope it helps.
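The ingest-at-startup / save-on-shutdown pattern can be expressed as EMR steps. A sketch of building an `s3-dist-cp` step definition, shaped for boto3's `add_job_flow_steps` (the bucket paths are placeholders):

```python
def s3_dist_cp_step(name, src, dest):
    """Build one EMR step that runs s3-dist-cp to copy between S3 and HDFS."""
    return {
        "Name": name,
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["s3-dist-cp", "--src", src, "--dest", dest],
        },
    }

# Ingest when the cluster comes up, persist before tearing it down:
ingest = s3_dist_cp_step("ingest", "s3://my-bucket/data", "hdfs:///data")
persist = s3_dist_cp_step("persist", "hdfs:///data", "s3://my-bucket/data")
# emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[ingest])  # boto3 call
```

The `ingest` step can also simply be included in the `Steps` list of `run_job_flow` when the cluster is created.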
EDIT1:
If you want to maintain the same DNS name, you can use the API/CLI to update the Elasticsearch domain configuration.
Reference:
http://docs.aws.amazon.com/cli/latest/reference/es/update-elasticsearch-domain-config.html
I would like to setup a batch process as follows on Amazon AWS:
take snapshot of volumes tagged "must_backup"
share those snapshots with account B
make a copy of those snapshots within account B
The purpose of this is to protect the backups in case the first AWS account gets compromised.
I know how to automate steps 1 & 3, however I cannot find a commandline example on how to perform step 2.
The official documentation (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html) does not provide any sample and does not clearly state how to specify the target account on the command line.
I've double-checked the previous solution and it's not OK. Basically, "sharing" a snapshot means allowing other accounts to create a volume from that snapshot.
This implies adding a value to the createVolumePermission attribute:
aws ec2 modify-snapshot-attribute --snapshot-id snap-<id> --user-ids <user-id-without-hyphens> --attribute createVolumePermission --operation add
The operation might take some time (minutes?). After that, you'll be able to query the attribute this way:
aws ec2 describe-snapshot-attribute --snapshot-id snap-<id> --attribute createVolumePermission
PS: for the purposes mentioned in the question this is probably not enough, since the 'destination' account will not be able to see any of the tags from the source account; it will thus be impossible to perform a correct backup if the source account shares multiple snapshots with the same size.
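Putting both steps together in boto3 form (`ec2_a` and `ec2_b` stand for EC2 clients built with account A and account B credentials respectively; the ids are placeholders):

```python
def share_snapshot(ec2_a, snapshot_id, account_b_id):
    """Step 2: account A grants account B permission to use the snapshot."""
    ec2_a.modify_snapshot_attribute(
        SnapshotId=snapshot_id,
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=[account_b_id],
    )

def copy_snapshot_in_account_b(ec2_b, snapshot_id, source_region):
    """Step 3: run under account B credentials; the copy belongs to
    account B, so it survives even if account A is compromised."""
    return ec2_b.copy_snapshot(
        SourceRegion=source_region,
        SourceSnapshotId=snapshot_id,
    )
```

As noted above, account B will not see account A's tags on the shared snapshot, so if the discrimination matters you'd pass identifying details via the copy's `Description` or re-tag the copy after creation.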
Example command for the AWS CLI to share an EC2 snapshot with another account:
aws ec2 modify-snapshot-attribute --snapshot-id snap-1234567890 --attribute createVolumePermission --operation add --user-ids other-amazon-account-id
I am trying to clean up some old AWS accounts to reduce costs, and need to get rid of unnecessary EBS volumes that are not attached to anything.
aws ec2 describe-volumes --volume-ids "blah" "blah"
does return a list of volumes, but doesn't provide info on which VM each one might have been attached to.
Before I delete a whole bunch and we lose data that might be needed, I was wondering if there is a good approach to get that info?
I looked into cross-referencing with snapshot IDs and seeing what was there, but almost all have been deleted. Only one volume so far has tags that describe the old VM.
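One way to gather the evidence before deleting anything: a sketch that, given data shaped like `aws ec2 describe-volumes` output, pulls out the unattached volumes together with whatever Name tag survives (the volume ids below are made up):

```python
def unattached_volumes(volumes):
    """Return (volume_id, name_tag) pairs for volumes with no attachments."""
    out = []
    for v in volumes:
        if v.get("Attachments"):
            continue  # still attached to an instance, leave it alone
        tags = {t["Key"]: t["Value"] for t in v.get("Tags", [])}
        out.append((v["VolumeId"], tags.get("Name")))
    return out

# Hypothetical sample in the API's shape:
sample = [
    {"VolumeId": "vol-1", "Attachments": [{"InstanceId": "i-aaa"}]},
    {"VolumeId": "vol-2", "Attachments": [],
     "Tags": [{"Key": "Name", "Value": "old-web-server"}]},
    {"VolumeId": "vol-3", "Attachments": []},
]
print(unattached_volumes(sample))
```

You can also narrow the listing server-side with `aws ec2 describe-volumes --filters Name=status,Values=available`. For volumes with no tags at all, CloudTrail's event history (roughly the last 90 days) may still show DetachVolume events naming the instance they came off of; beyond that window the association is likely unrecoverable, so snapshotting before deleting is a cheap insurance policy.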