I have set up the automatic creation of snapshots using the following link:
https://blog.powerupcloud.com/2016/02/15/automate-ebs-snapshots-using-lambda-function/
As written, the code filters VMs based on their tags. Instead of tagging each VM with a Backup or backup tag, I want to create snapshots of all VMs except a few named ones.
I do not want to add extra tags to my VMs. Instead, I want to write an if condition in my filters: I would provide the names of my test VMs, and if a VM's tag matches one of them, no snapshot would be created; if it does not match, snapshots should be created. Can I do that?
Ex: I have four VMs in my account.
VM 1 --> Prod1,
VM 2 --> Prod2,
VM 3 --> Prod3,
VM 4 --> Test1.
According to this example, I need to be able to write an if condition that includes my test VM tag 'Test1'. If the tag matches it, the snapshot should not be created; if it does not match, snapshots should be created.
So, to do this, how should I change my code?
You just need to create a tag with the key 'Backup' on all three of your production servers. The script filters the instances on the tag key names only.
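For example, a quick way to add that tag with boto3 (the region and instance IDs here are placeholders):
import boto3

ec = boto3.client('ec2', region_name='us-east-1')  # example region
# The tag value is arbitrary; the script's filter matches on the key name only
ec.create_tags(
    Resources=['i-aaaa1111', 'i-bbbb2222', 'i-cccc3333'],  # your three prod instances
    Tags=[{'Key': 'Backup', 'Value': 'True'}]
)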
The piece of code that picks up which VMs need to be backed up is this:
reservations = ec.describe_instances(
    Filters=[
        {'Name': 'tag-key', 'Values': ['Backup', 'True']},
    ]
).get('Reservations', [])
As you can see, it uses boto's describe_instances, and a filter limits the number of instances that will be processed. If you would like to back up everything except the non-prod machines in your environment, you could consider tagging your non-prod instances with something like Backup=NO.
To backup all servers except those marked with a tag:
Get a list of all servers
Get a list of servers with the 'do not backup' flag and remove them from the first list
Do the backup
It will require two calls to describe_instances().
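A rough sketch of that two-call approach with boto3 (assuming non-prod instances are tagged Backup=NO as suggested above; the region is a placeholder):
import boto3

ec = boto3.client('ec2', region_name='us-east-1')  # example region

# First call: collect every instance
to_backup = {}
for r in ec.describe_instances().get('Reservations', []):
    for i in r['Instances']:
        to_backup[i['InstanceId']] = i

# Second call: find the instances flagged 'do not backup' and drop them
for r in ec.describe_instances(
        Filters=[{'Name': 'tag:Backup', 'Values': ['NO']}]
        ).get('Reservations', []):
    for i in r['Instances']:
        to_backup.pop(i['InstanceId'], None)

# to_backup now holds only the instances whose volumes should be snapshotted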
I have DataDog with the Amazon AWS RDS integration configured.
Is it possible to create a graph and use a tag to exclude some hosts from the result? Say I have 100 hosts with the tag environment:live, and 10 of them are also tagged importance:ignore. I need to create a graph that includes metrics for the 90 hosts that have the first tag but are not tagged with the second one. Is that possible?
Yes, you can configure widgets to exclude results by tags. You can do this by applying a tag prepended with a ! to signify "not".
So in your case, you can set up your widget scoped over importance:ignore and then hit the little </> button on the right to expose the underlying query, and sneak a ! in front to make it !importance:ignore.
This doc has a nice example (although it's for notebooks, it works the same in dashboards as well).
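With the tags from the question, the edited query scope would then look something like this (the metric name is illustrative):
avg:aws.rds.cpuutilization{environment:live,!importance:ignore}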
How can I use a Lambda Python function to name my Auto Scaling group instances?
I want to name them in increasing order, like hello1, hello2, hello3, etc. Can anyone tell me how to use a Lambda function to name my Auto Scaling group instances?
I want to create instances, and I want a function that creates them and gives them a Name tag: the first instance's Name tag should be "hello1", the second's "hello2", and so on. If an instance gets terminated, say hello2, then the Auto Scaling group (minimum number of instances: 2) will create a new instance, and I want it named hello2.
One way to do this would be to write a script that is executed when the instance starts. Put the script in the instance's User Data, which is automatically run when the instance launches.
The script would:
Call DescribeInstances() to obtain a list of EC2 instances
Filter the list down to the instances within the Auto Scaling group
Count the number of instances (including itself)
Perform the necessary logic to figure out which number should be assigned
Create a Name tag on the new instance (effectively tagging itself)
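A minimal sketch of such a User Data script (Python 3 with boto3; the group name, tag prefix, and region are assumptions for this example):
import boto3
import urllib.request

ASG_NAME = 'my-asg'   # example Auto Scaling group name
PREFIX = 'hello'      # example Name tag prefix

# Ask the instance metadata service for our own instance ID (IMDSv1 for brevity)
my_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id').read().decode()

ec2 = boto3.client('ec2', region_name='us-east-1')  # example region

# List the instances that belong to the Auto Scaling group
reservations = ec2.describe_instances(
    Filters=[
        {'Name': 'tag:aws:autoscaling:groupName', 'Values': [ASG_NAME]},
        {'Name': 'instance-state-name', 'Values': ['pending', 'running']},
    ]
).get('Reservations', [])
instances = [i for r in reservations for i in r['Instances']]

# Collect the numbers already taken by existing Name tags
taken = set()
for i in instances:
    for tag in i.get('Tags', []):
        if tag['Key'] == 'Name' and tag['Value'].startswith(PREFIX):
            suffix = tag['Value'][len(PREFIX):]
            if suffix.isdigit():
                taken.add(int(suffix))

# Pick the lowest free number and tag ourselves with it
n = 1
while n in taken:
    n += 1
ec2.create_tags(Resources=[my_id],
                Tags=[{'Key': 'Name', 'Value': PREFIX + str(n)}])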
Please note that the numbers might not stay consecutive. For example:
Start 4 instances (1, 2, 3, 4)
Auto Scaling might remove instances 2 & 3
Auto Scaling might add an instance (call it #2)
The current instances are: 1, 2, 4
Bottom line: You really shouldn't fixate on numbering instances that are ephemeral (that is, they can be removed at any time). Simply be aware of how many instances are in the Auto Scaling group. If you really do need a unique ID, use the InstanceId.
I am creating a Terraform file so I can set up some VMs in GCP to build my own Kubernetes platform (yes, Google has its own engine, but I want to use some custom items). I have been able to create the .tf file to build the whole stack, just like the other setups in the Kubespray project, and much like what you would do to terraform VMs on AWS.
The last part I need to automate is the creation of the host file for Ansible.
I create the masters and workers using a resource called google_compute_region_instance_group, which places each instance in a different AZ within GCP. Now I need to get the hostname and IP given to these instances. The problem is that they are dynamically created resources, so to pull this information out I use a data source to grab the info.
Here is what I have now.
data.google_compute_region_instance_group.data_masters.instances
[
  {
    "instance" = "https://www.googleapis.com/compute/v1/projects/appportablityphase2/zones/us-east1-c/instances/k8-masters-4r2f"
    "named_ports" = []
    "status" = "RUNNING"
  },
  {
    "instance" = "https://www.googleapis.com/compute/v1/projects/appportablityphase2/zones/us-east1-d/instances/k8-masters-qh64"
    "named_ports" = []
    "status" = "RUNNING"
  },
  {
    "instance" = "https://www.googleapis.com/compute/v1/projects/appportablityphase2/zones/us-east1-b/instances/k8-masters-w9c8"
    "named_ports" = []
    "status" = "RUNNING"
  },
]
As you can see, the output is a list of maps. I am able to get just the instance self link URL with this line:
lookup(data.google_compute_region_instance_group.data_masters.instances[0], "instance")
https://www.googleapis.com/compute/v1/projects/appportablityphase2/zones/us-east1-c/instances/k8-masters-4r2f
I can then split that to get the instance name. This is the hard part that I cannot figure out with Terraform: in the line above I have to use [0] to select a single instance, but I need to iterate through all of the instances, of which there may be 3 or more.
I cannot find a way to do this with this data source type. I have tried count.index, but it is only supported in a resource, not a data source. I have also tried the splat syntax, and it has not worked.
I don't think manually generating the inventory is the right approach, although it is possible.
You could instead try GCP dynamic inventory, which generates an inventory from running instances based on their network tags.
For instance, if instance A has the tag foo, and instance B has the tags foo and bar, the generated inventory will be:
[tag_foo]
A
B
[tag_bar]
B
Script is available at this address: https://github.com/ansible/ansible/blob/devel/contrib/inventory/gce.py
Configuration file here: https://github.com/ansible/ansible/blob/devel/contrib/inventory/gce.ini
And usage is ansible-playbook -i gce.py site.yml
I have shared a bunch of AMIs from one AWS account to another.
I used EC2conn1.modify_image_attribute(AMI_id, operation='add', attribute='launchPermission', user_ids=[second_aws_account_id]) to do it.
But by only adding the launch permission for the 2nd account, I can launch an instance, yet I cannot copy the shared AMI to another region [in the 2nd account].
Only when I tick the "create volume" checkbox in the UI of the 1st account can I copy the shared AMI from the 2nd.
I can modify the launch permissions using the modify_image_attribute function from boto.
The documentation says attribute (string) – The attribute you wish to change, but as I understand it, it can only change the launch permissions to add an account.
Yet get_image_attribute has 3 valid choices: launchPermission, productCodes, and blockDeviceMapping.
So, is there a way to change this programmatically from the API along with the launch permissions, or has it not been implemented yet?
The console uses the API, so there's almost nothing you can do in the console that you can't do using the API.
Remember that an AMI is just a configuration entity -- basic launch configuration, linked to (not containing) one or more backing snapshots, which are technically separate entities.
The console is almost certainly making an additional request to the ModifySnapshotAttribute API when it offers to optionally "add Create Volume permissions to the following associated snapshot."
See also http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
Presumably, copying a snapshot to another region relies on the same "Create Volume" permission (indeed, you'll see that a copied snapshot has a fake source volume ID, presumably an artifact of the copying process).
Based on the accepted answer, this is the code I wrote for anyone interested.
# Add copy permission to the image's snapshot
# Find the snapshots of the specific AMI
image_object = EC2conn.get_image(AMI_id)

# Grab the block device mapping dynamically
ami_devices = []
for key in image_object.block_device_mapping.iterkeys():
    ami_devices.append(key)

for ami_device in ami_devices:
    snap_id = image_object.block_device_mapping[ami_device].snapshot_id
    # Add the createVolumePermission for the second account
    EC2conn.modify_snapshot_attribute(snap_id, attribute='createVolumePermission',
                                      operation='add', user_ids=[second_aws_account_id])
    print "{0} [{1}] Permission added to snapshot".format(AMI_name, snap_id)
I use tags to keep track of my EC2 instances, such as (Project, Environment). I have a use case where I need to filter only those instances that belong to a specific project and to a specific environment.
When I use a filter with boto and pass these two values, I get a result that does an OR rather than an AND of the filters, so I am receiving a list of instances that belong to different projects but the same environment.
I could use two lists and compare the instances in each to get the desired set, but is there a better way to do this?
Here is what I am doing:
conn = ec2.EC2Connection('us-east-1',aws_access_key_id='XXX',aws_secret_access_key='YYY')
reservations = conn.get_all_instances(filters={"tag-key":"project","tag-value":<project-name>,"tag-key":"env","tag-value":<env-name>})
instances = [i for r in reservations for i in r.instances]
Now the instance list that I am getting contains all the instances from the specified project irrespective of the environment, plus all the instances from the specified environment irrespective of the project.
You can use the tag:key=value syntax to do an AND search on your filters.
import boto.ec2
conn = boto.ec2.connect_to_region('us-east-1',aws_access_key_id='xx', aws_secret_access_key='xx')
reservations = conn.get_all_instances(filters={"tag:Name" : "myName", "tag:Project" : "B"})
instances = [i for r in reservations for i in r.instances]
print instances
See EC2 API for details
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html
The problem with the syntax you used is that a Python dict has unique keys, so the second tag-key entry overwrites the first one :-(
While the documentation does not specifically say what happens with multiple filters, the ORing may be by design. In that case, apply the required attributes in sequence, passing the result of the previous invocation into the next one (via the instance_ids parameter). Each call restricts the results with one additional filter, so applying the attributes in sequence returns the ANDed result you want.
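A rough sketch of that chained approach in boto (the tag values 'myproject' and 'prod' are placeholders; credentials omitted):
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# First call: narrow down by project
reservations = conn.get_all_instances(filters={'tag:project': 'myproject'})
ids = [i.id for r in reservations for i in r.instances]

# Second call: restrict the previous result set by environment
reservations = conn.get_all_instances(instance_ids=ids,
                                      filters={'tag:env': 'prod'})
instances = [i for r in reservations for i in r.instances]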