Restore instance from AWS snapshot using Ansible - amazon-web-services

I have taken a snapshot of an instance and I just want to restore it using Ansible.
Please suggest a solution; my Ansible version is 1.9.4.

You can make use of the ec2_vol module:
http://docs.ansible.com/ansible/ec2_vol_module.html
Note: keep an eye on the options it supports and the versions they were added in.
- name: Detach the old volume
  ec2_vol:
    region: "{{ aws_region }}"
    id: "{{ get_id.volume_id }}"
    instance: None
  register: detach_vol

- name: Creating a volume from a snapshot
  ec2_vol:
    snapshot: "{{ snap_id }}"
    region: "{{ aws_region }}"
    volume_size: 40
    instance: "{{ instance_id }}"
  register: ec2_vol
  tags: attach

- name: Attach the created volume to an instance
  ec2_vol:
    instance: "{{ instance_id }}"
    id: "{{ ec2_vol.volume_id }}"
    device_name: /dev/sda1
    delete_on_termination: yes
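Note that the detach task refers to get_id.volume_id, which is assumed to be registered by an earlier lookup of the volume currently attached to the instance. A minimal sketch of one way to do that lookup, assuming the list state of ec2_vol (added in Ansible 1.8) and a single attached volume, could be:

- name: Look up the volumes attached to the instance
  ec2_vol:
    region: "{{ aws_region }}"
    instance: "{{ instance_id }}"
    state: list
  register: vol_list

You would then reference vol_list.volumes[0].id (or register the lookup under whatever name your playbook expects) where the detach task above uses get_id.volume_id.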

Related

ansible: gcp_compute_disk - problem creating disk from a snapshot

I am new to Ansible and I am trying to figure out how to create a GCP disk from a GCP snapshot, using the gcp_compute_disk module. I am using the following documentation: https://docs.ansible.com/ansible/2.10/collections/google/cloud/gcp_compute_disk_module.html
I created the playbook below, but it only creates a new empty disk, not a disk from the snapshot. My Ansible version is 2.9.20.
- name: Create GCP snapshots
  hosts: localhost
  gather_facts: yes
  vars:
    gcp_project: test-project
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /etc/ansible/keys/ansible-test-project-service-account-key.json
    zone: "us-central1-a"
    region: "us-central1"
    instancename: "test-instance"
    snapshot:
      selfLink: https://www.googleapis.com/compute/v1/projects/test-project/global/snapshots/test-snapshot1
  tasks:
    - name: create data disk from a snapshot
      gcp_compute_disk:
        name: "{{ instancename }}-data-1"
        description: "{{ instancename }}-data-1"
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        source_snapshot: "{{ snapshot }}"
        labels:
          usage: test-label
        type: "https://www.googleapis.com/compute/v1/projects/test-project/zones/us-central1-b/diskTypes/pd-standard"
        state: present
      register: disk_data
I have also tried to create the snapshot first with gcp_compute_snapshot, then registered that snapshot (register: disksnapshot), and then used that dictionary to reference the snapshot (source_snapshot: "{{ disksnapshot }}"). The result is the same.
Thanks in advance for your help.
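For reference, a minimal sketch of the register-and-reference pattern described in the question, using names taken from it (test-snapshot1, disksnapshot) and a hypothetical source_disk variable; whether this resolves the empty-disk behaviour is not confirmed here:

- name: create the snapshot and register the resource
  gcp_compute_snapshot:
    name: test-snapshot1
    source_disk: "{{ source_disk }}"  # hypothetical reference to the disk being snapshotted
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: disksnapshot

- name: create data disk from the registered snapshot
  gcp_compute_disk:
    name: "{{ instancename }}-data-1"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    source_snapshot: "{{ disksnapshot }}"
    state: present

The module documentation also allows source_snapshot to be given as a dictionary with a selfLink key, which is the form used in the playbook above.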

Stopping A Google Compute Resource with Ansible

It is great that you can create a new GCP instance with Ansible, but how do you terminate the instance?
I don't see a command to do this.
OK, so it appears that all you have to do is set the state of the resource to "absent".
So it would look something like this:
- name: destroy instance
  gcp_compute_instance:
    state: absent
    name: "{{ servername }}"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"

Associate elastic IPs to ec2 instances

I have some (i.e. 3) existing elastic IPs created in AWS earlier. I am trying to provision 3 AWS EC2 instances and associate those IPs with the newly created instances. I need to use those existing elastic IPs because they are whitelisted with my external partner for payment processing. I am not sure how to do that. I have the playbook below to create the EC2 instances:
- name: Provision a set of instances
  ec2:
    key_name: "{{ key_name }}"
    group_id: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    wait: true
    exact_count: "{{ instances }}"
    count_tag:
      Name: Demo
    instance_tags:
      Name: "{{ application }}"
    region: "{{ region }}"
  register: ec2_instances

- name: Store EC2 instance IPs to provision
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: ec2_instance_ips
  with_items: "{{ ec2_instances.tagged_instances }}"
The second task adds the new hosts to a group so they are ready to be configured.
I just need to associate the EIPs with those instances next.
Thanks,
Philip
Here you go, pulled from one of my roles.
- name: associate with our instance
  ec2_eip:
    reuse_existing_ip_allowed: true
    instance_id: "{{ ec2_instance }}"
    public_ip: "{{ eip }}"
    state: present
    region: "{{ ec2_region | default('us-east-1') }}"
    in_vpc: true
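If you need to pair each of your pre-existing EIPs with each newly created instance, one option (a sketch only; elastic_ips is an assumed variable holding your whitelisted addresses, and tagged_instances comes from the ec2 result registered earlier) is to loop with with_together:

- name: Associate the existing EIPs with the new instances
  ec2_eip:
    instance_id: "{{ item.0.id }}"
    public_ip: "{{ item.1 }}"
    in_vpc: true
    state: present
    region: "{{ region }}"
  with_together:
    - "{{ ec2_instances.tagged_instances }}"
    - "{{ elastic_ips }}"

Each instance in tagged_instances is paired with the EIP at the same position in elastic_ips, so keep the two lists the same length.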

Ansible EC2 with security groups across VPC peering connections

I have 3 separate VPCs on AWS and am using Ansible to handle deploys. My problem is that a few of my environments use security groups from another VPC.
Here is my ec2 task:
- name: Create instance
  ec2:
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    aws_access_key: "{{ aws_access_key_id }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    key_name: "{{ key_name }}"
    instance_tags:
      Name: "{{ name }}"
      Environment: "{{ env }}"
      Product: "{{ product }}"
      Service: "{{ service }}"
    region: "{{ region }}"
    volumes:
      - device_name: "{{ disk_name }}"
        volume_type: "{{ disk_type }}"
        volume_size: "{{ disk_size }}"
        delete_on_termination: "{{ delete_on_termination }}"
    # group: "{{ security_group_name }}"
    group_id: "{{ security_group_id }}"
    wait: true
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    count: "{{ instance_count }}"
    monitoring: "{{ detailed_monitoring }}"
    instance_profile_name: "{{ iam_role }}"
    assign_public_ip: "{{ assign_public_ip }}"
    termination_protection: "{{ termination_protection }}"
  register: ec2
When I pass in a security group id from another VPC, I get this -
"msg": "Instance creation failed => InvalidParameter: Security group sg-e7284493 and subnet subnet-19d97e50 belong to different networks."
Is there a workaround in Ansible for this?
You can't attach a security group from another VPC to an EC2 instance; the security groups assigned to an instance must belong to the same VPC as the instance's subnet.
The way to do this is to create a security group in the VPC where your EC2 lives that allows access from the foreign security group, then apply that new group to your EC2.
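For illustration, a sketch of that approach with the ec2_group module; the group name, the HTTPS port, and reusing the sg-e7284493 id from the error above are assumptions, and it presumes your VPC peering setup permits referencing the peer group in a rule:

- name: Security group in the instance's VPC that trusts the peered group
  ec2_group:
    name: allow-from-peered-vpc
    description: Allow traffic from the security group in the peered VPC
    vpc_id: "{{ vpc_id }}"
    region: "{{ region }}"
    rules:
      - proto: tcp
        from_port: 443
        to_port: 443
        group_id: sg-e7284493  # the foreign security group across the peering connection
  register: local_sg

You would then pass local_sg.group_id to the ec2 task's group_id parameter instead of the foreign id.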

Ansible ec2 module ignores "volumes" parameter

I'm trying to get Ansible to bring up new EC2 boxes for me with a volume size larger than the default ~8 GB. I've added the volumes option with volume_size specified, but when I run with that, the volumes option seems to be ignored and I still get a new box with ~8 GB. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?
So, I just updated Ansible to 1.9.4 and now all of a sudden it works. So there's my answer: the code above is fine; the older Ansible release was just broken.