I have a task of implementing a basic backup and recovery system within a django app. I have heard of pyvmomi, but never used it before.
My specific tasks at hand are:
1) make a call to a vCenter, pass the vm name, and request to make a snapshot
2) obtain the file location of the snapshot
3) and upload the snapshot file into an OpenStack Swift object store
What is the actual syntax of creating a vm snapshot using pyvmomi?
Also - what is the syntax to request the actual snapshot file from vCenter?
https://github.com/rreubenur/vmware-pyvmomi-examples/blob/master/create_and_remove_snapshot.py
This should be helpful
The snapshot task result itself contains a MoRef to the snapshot that was created, so you can get a reference to the new snapshot from it.
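For reference, here is a minimal sketch of that call with pyvmomi; the vCenter host, credentials and VM name are placeholders, and the VM is located with a simple container view:
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connect to vCenter (host and credentials are placeholders).
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)
content = si.RetrieveContent()

# Find the VM by name via a container view.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-vm-name")

# Create the snapshot; the completed task's result is the MoRef of the new snapshot.
task = vm.CreateSnapshot_Task(name="backup-snapshot",
                              description="created via pyvmomi",
                              memory=False,
                              quiesce=False)
WaitForTask(task)
snapshot = task.info.result  # vim.vm.Snapshot managed object reference
print("Created snapshot:", snapshot)

Disconnect(si)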
Is it possible to create a Terraform module that updates a specific resource which is created by another module?
Currently, I have two modules...
linux-system: which creates a Linux VM with boot disks
disk-updater: which I'm planning to use to update the disks created by the first module
The reason behind this is that I want to create a pipeline that will perform disk operation tasks, such as disk resizing, via Terraform.
data "google_compute_disk" "boot_disk" {
name = "linux-boot-disk"
zone = "europe-west2-b"
}
resource "google_compute_disk" "boot_disk" {
name = data.google_compute_disk.boot_disk.name
zone = data.google_compute_disk.boot_disk.zone
size = 25
}
I tried to use a data block to retrieve the existing disk details and pass them to the resource block, hoping to update the same disk, but it seems Terraform just tries to create a new disk with the same name, which is why I'm getting this error:
Error creating Disk: googleapi: Error 409: The resource ... already exists, alreadyExists
I think I'm doing it wrong. Can someone advise me on how to proceed without using the first module I built? By the way, I'm a newbie when it comes to Terraform.
updates a specific resource which is created by another module?
No. You have to update the resource using its original definition.
The only way to update it from another module is to import it into that module, which is bad design, as you will now have two definitions for the same resource, resulting in out-of-sync state files.
I have run the OSRM server successfully on an AWS EC2 instance, after running the following steps:
1. osrm-extract --profile ../profiles/car.lua all1.osm.pbf
2. osrm-partition all1.osrm
3. osrm-customize all1.osrm
4. osrm-routed --algorithm=MLD all1.osrm
Then I copied the volume of that instance (containing all generated OSRM files, e.g. all1.osrm) to another instance. On the new instance I only try to run command 4, osrm-routed --algorithm=MLD all1.osrm, but I get the following error:
terminate called after throwing an instance of 'osrm::util::exception'
what(): Could not find any metrics for MLD in the data. Did you load the right dataset?
What prevents me from running the same steps again on the new instance is that it has lower capacity, and processing the OSM files again would take much longer.
So, what did I do wrong here? And is there a way to move the OSRM configuration from one machine/instance to another without having to re-run the steps?
Thanks,
I'm trying to restore data in EFS from recovery points managed by AWS Backup. It seems AWS Backup does not support destructive restores and will always restore to a directory in the target EFS file system, even when creating a new one.
I would like to sync the data extracted from such a recovery point to another volume, but right now I can only do this manually, as I need to look up the directory name that is used by the start-restore-job operation (e.g. aws-backup-restore_2022-05-16T11-01-17-599Z), as stated in the docs:
You can restore those items to either a new or existing file system. Either way, AWS Backup creates a new Amazon EFS directory (aws-backup-restore_datetime) off of the root directory to contain the items.
Looking further through the documentation, I can't find either of:
an option to set the name of the directory used
the directory name returned by any call (either start-restore-job or describe-restore-job)
I have also checked how the datetime portion of the directory name maps to the creationDate and completionDate of the restore job, but it seems neither matches (completionDate is very close, but it's not the exact same timestamp).
Is there any way for me to do one of these two things? With both of them missing, restoring a file system from a recovery point in an automated fashion is very hard.
Is there any way for me to do one of these two things?
As it stands, no.
However, since we know that the directory will always be in the root, doing find . -type d -name "aws-backup-restore_*" should return the directory name to you. You could also further filter this down based on the year, month, day, hour & minute.
You could have something polling the job status on the machine that has the EFS file system mounted, finding the correct directory and then pushing that to AWS Systems Manager Parameter Store for later retrieval. If restoring to a new file system, this of course becomes more difficult but still doable in an automated fashion.
If you're not mounting this on an EC2 instance, then running a Lambda function with the EFS file system mounted, for example, will let you obtain the directory and then push it to Parameter Store for retrieval elsewhere. The Lambda service mounts EFS file systems while the execution environment is being prepared - in other words, during the 'cold start' - so there is no extra cost in invocation time and, as such, this would be the cheapest option.
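A minimal sketch of that polling approach with boto3, assuming the restore job ID is already known and the file system is mounted at /mnt/efs (the mount path and the Parameter Store name are placeholders):
import os
import time
import boto3

backup = boto3.client("backup")
ssm = boto3.client("ssm")

restore_job_id = "..."        # as returned by start-restore-job
efs_mount_point = "/mnt/efs"  # wherever the target file system is mounted

# Poll the restore job until it reaches a terminal state.
while True:
    job = backup.describe_restore_job(RestoreJobId=restore_job_id)
    if job["Status"] in ("COMPLETED", "ABORTED", "FAILED"):
        break
    time.sleep(30)

if job["Status"] == "COMPLETED":
    # The restore directory is always created off the root of the file system.
    candidates = [
        d for d in os.listdir(efs_mount_point)
        if d.startswith("aws-backup-restore_")
        and os.path.isdir(os.path.join(efs_mount_point, d))
    ]
    # Pick the most recently created one in case earlier restores are still around.
    restore_dir = max(candidates,
                      key=lambda d: os.path.getctime(os.path.join(efs_mount_point, d)))
    # Publish the directory name for later retrieval, e.g. by the sync job.
    ssm.put_parameter(Name="/backup/efs/last-restore-dir",
                      Value=restore_dir,
                      Type="String",
                      Overwrite=True)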
There's no built-in way via the APIs, however, to obtain the directory name or to configure it, so you're stuck there.
It's a failure on AWS's part that they neither return the directory name they use in any way, nor does any of the returned metadata (creationDate / completionDate) exactly match the timestamp used to name the directory.
If you're an enterprise customer, suggest this as a missing feature to your TAM or SA.
I am completely new to Amazon Web Services; however, I did get an account and I am able to browse our list of servers. I am trying to create a database backup programmatically using .NET. I have installed the AWS SDK for .NET, and I have built and run the sample Empty console program.
I can see that I can create an instance of the RDS service with the following line:
AmazonRDS rds = AWSClientFactory.CreateAmazonRDSClient(RegionEndpoint.USEast1);
However, I notice that rds.CreateDBSnapshot() needs a request object, but I don't see anything like CreateDBSnapshotRequest in the referenced .dll. Can anyone help with a working example?
Like you said, CreateDBSnapshotRequest is the parameter you have to pass to this function.
CreateDBSnapshotRequest is defined in the Amazon.RDS.Model namespace within the AWSSDK.dll assembly (version 1.5.25.0)
Within CreateDBSnapshotRequest you must pass the DB instance identifier (for example mydbinstance-1) that you defined when you invoked CreateDBInstance (or one of its related methods), and the identifier for the snapshot you wish to generate (for example my-snapshot-id) for this DB instance.
edit / example
Well, there are a couple of ways to achieve this; here's one example. Hope it clears up your doubts:
using Amazon.RDS;
using Amazon.RDS.Model;
...
...
//gets the credentials from the default configuration
AmazonRDS rdsClient = AWSClientFactory.CreateAmazonRDSClient();
CreateDBSnapshotRequest dbSnapshotRequest = new CreateDBSnapshotRequest();
dbSnapshotRequest.DBInstanceIdentifier = "my-oracle-instance";
dbSnapshotRequest.DBSnapshotIdentifier = "daily-snapshot";
rdsClient.CreateDBSnapshot(dbSnapshotRequest);
Don't forget that the DB instance (in the example, my-oracle-instance) must exist (duh :) and must be in the available state.
I have two databases, one on my local machine and one on my Amazon EC2 instance. What I do now is run a Python program on my local machine which makes changes to the database on my local machine. I want these changes to be reflected periodically onto the database on the Amazon EC2 instance. I want to do this in Python: a script that logs onto the Amazon server, establishes a connection with the database there and makes the changes.
I came across some modules like pexpect, fabric and paramiko, but I am struggling with the key authentication.
The way I SSH from my terminal is ssh -i my_rsa_file.pem username@ip_address. There is no password. How do I go about this?
Also, I want to know whether simply using Popen in subprocess to execute the login command would work?
The Boto EC2 documentation here describes the EC2 instance object, of which "key_pair" is an attribute. Look about 3/4 of the way down, under "boto.ec2.instance".
http://boto.readthedocs.org/en/latest/ref/ec2.html
So, e.g., you could run some instances as follows, and then store the first instance as "inst":
reservation = conn.run_instances(...)
inst = reservation.instances[0]
To retrieve your key-pair name as a unicode string, just use:
kp_name = inst.key_name
You can then retrieve the corresponding Boto object using get_key_pair:
kp_obj = conn.get_key_pair(kp_name)
Of course, this is a silly example, since I would have needed my key pair name to run_instances in the first place. May you find a more fruitful application!
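As for the key-authentication part of the question, here is a minimal paramiko sketch, assuming the my_rsa_file.pem path and username from the question; the hostname and the remote command are placeholders:
import paramiko

# Load the same private key used with `ssh -i my_rsa_file.pem ...`.
key = paramiko.RSAKey.from_private_key_file("my_rsa_file.pem")

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Key-based authentication: no password, just the username and the key.
client.connect(hostname="ec2-public-ip-or-dns",  # placeholder host
               username="username",
               pkey=key)

# Run whatever command applies the changes to the remote database (placeholder).
stdin, stdout, stderr = client.exec_command("python update_remote_db.py")
print(stdout.read().decode())

client.close()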