I am new to Ansible and I am trying to figure out how to create a GCP disk from a GCP snapshot, using the gcp_compute_disk module. I am using the following documentation: https://docs.ansible.com/ansible/2.10/collections/google/cloud/gcp_compute_disk_module.html
I created the playbook below, but it only creates an empty new disk, not a disk from the snapshot. My Ansible version is 2.9.20.
- name: Create GCP snapshots
  hosts: localhost
  gather_facts: yes
  vars:
    gcp_project: test-project
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /etc/ansible/keys/ansible-test-project-service-account-key.json
    zone: "us-central1-a"
    region: "us-central1"
    instancename: "test-instance"
    snapshot:
      selfLink: https://www.googleapis.com/compute/v1/projects/test-project/global/snapshots/test-snapshot1
  tasks:
    - name: create data disk from a snapshot
      gcp_compute_disk:
        name: "{{ instancename }}-data-1"
        description: "{{ instancename }}-data-1"
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        source_snapshot: "{{ snapshot }}"
        labels:
          usage: test-label
        type: "https://www.googleapis.com/compute/v1/projects/test-project/zones/us-central1-b/diskTypes/pd-standard"
        state: present
      register: disk_data
I have also tried to create the snapshot first with gcp_compute_snapshot, then registered that snapshot (register: disksnapshot), and then used that dictionary to reference the snapshot (source_snapshot: "{{ disksnapshot }}"). The result is the same.
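For reference, that second attempt looked roughly like this (a sketch rather than my exact playbook; the disk reference and the resource names are placeholders):

- name: create a snapshot of the data disk
  gcp_compute_snapshot:
    name: "{{ instancename }}-snapshot-1"
    source_disk: "{{ disk_data }}"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: disksnapshot

- name: create data disk from the registered snapshot
  gcp_compute_disk:
    name: "{{ instancename }}-data-2"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    source_snapshot: "{{ disksnapshot }}"
    state: present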
Thanks in advance for your help.
It is great that you can create a new GCP instance with Ansible, but how do you terminate the instance?
I don't see a command to do this.
OK, so it appears that all you have to do is set the state of the resource to "absent".
So it would look something like:
- name: destroy instance
  gcp_compute_instance:
    state: absent
    name: "{{ servername }}"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
I'm using the following code:
- name: create a instance
  gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'false'
        boot: 'true'
        source: "{{ disk }}"
    metadata:
      startup-script-url:
      cost-center:
    labels:
      environment: production
    network_interfaces:
      - network: "{{ network }}"
        access_configs:
          - name: External NAT
            nat_ip: "{{ address }}"
            type: ONE_TO_ONE_NAT
    zone: us-central1-a
    project: test-12y38912634812648
    auth_kind: serviceaccount
    service_account_file: "~/programming/gcloud/test-1283891264812-8h3981f3.json"
    state: present
I saved the file as create2.yml. Then I run ansible-playbook create2.yml and get the following error:
ERROR! 'gcp_compute_instance' is not a valid attribute for a Play
The error appears to be in '/Users/xxx/programming/gcloud-test/create2.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: create a instance
^ here
I followed the documentation. What am I doing wrong and how do I fix it?
You haven't created a playbook; you've just created a file with a task, which won't run on its own, as you've discovered.
A playbook is a list of plays, and each play ties a group of hosts to the tasks (or roles) that should run on them. You should start with the playbook documentation:
Playbook Documentation
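As a minimal illustration, your failing file becomes a playbook once the task is nested under a play (a sketch; the play-level values below are assumptions, adjust them to your setup):

- name: create a vm
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: create a instance
      gcp_compute_instance:
        # ... the same module arguments as in create2.yml go here ...
        state: present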
For GCP, here's a working example to create a network, external IP, disk and VM.
- name: 'Deploy gcp vm'
  hosts: localhost
  connection: local
  become: false
  gather_facts: no
  vars:
    gcp_project: "671245944514"
    gcp_cred_kind: "serviceaccount"
    gcp_cred_file: "/tmp/test-project.json"
    gcp_region: "us-central1"
    gcp_zone: "us-central1-a"

  # Roles & Tasks
  tasks:
    - name: create a disk
      gcp_compute_disk:
        name: disk-instance
        size_gb: 50
        source_image: projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts
        zone: "{{ gcp_zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: disk

    - name: create a network
      gcp_compute_network:
        name: network-instance
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: network

    - name: create a address
      gcp_compute_address:
        name: address-instance
        region: "{{ gcp_region }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: address

    - name: create a instance
      gcp_compute_instance:
        name: vm-instance
        project: "{{ gcp_project }}"
        zone: "{{ gcp_zone }}"
        machine_type: n1-standard-1
        disks:
          - auto_delete: 'true'
            boot: 'true'
            source: "{{ disk }}"
        labels:
          environment: testing
        network_interfaces:
          - network: "{{ network }}"
            access_configs:
              - name: External NAT
                nat_ip: "{{ address }}"
                type: ONE_TO_ONE_NAT
        auth_kind: serviceaccount
        service_account_file: "{{ gcp_cred_file }}"
        state: present
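Saved as a file (the name below is just an example), it runs the same way you already tried:

ansible-playbook deploy-gcp-vm.yml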
I have written two roles with Ansible. The first role (i.e. provision) is executed locally on an instance that has the required IAM permissions to provision EC2 instances (see below):
- name: Provision "{{ count }}" ec2 instances in "{{ region }}"
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    ...
    exact_count: "{{ count }}"
    count_tag: "{{ count_tag }}"
    instance_tags:
      ...
  register: ec2
I then add the private IP addresses to the local hosts file.
- name: Add the newly created EC2 instances to the local host file
  local_action: lineinfile
                dest="./hosts"
                regexp={{ item.private_ip }}
                insertafter="[sit]" line={{ item.private_ip }}
  with_items: "{{ ec2.instances }}"
I wait for SSH to be available.
- name: Wait for SSH process to be available on "{{ sit }}"
  wait_for:
    host: "{{ item.private_ip }}"
    port: 22
    delay: 60
    timeout: 320
    state: started
  with_items: "{{ ec2.instances }}"
The second role (i.e. setupEnv) sets up environment variables on the 'sit' hosts, such as user/group directories. I attempt to run the roles sequentially (see the main.yml playbook below):
- hosts: local
  connection: local
  gather_facts: false
  user: svc_ansible_lab
  roles:
    - provision

- hosts: sit
  connection: ssh
  gather_facts: true
  user: ec2-user
  roles:
    - setupEnv
However, only the first role gets executed on the local host. Ansible waits until SSH is available on the provisioned instances, and then the process finishes without attempting the setupEnv role.
Is there a way I can make sure the second role is executed on the sit hosts once SSH is available?
The inventory file will not be automatically re-sourced in between the plays.
Instead of modifying the inventory file, use the add_host module and the in-memory inventory:
- name: Add the newly created EC2 instances to the in-memory inventory
  add_host:
    hostname: "{{ item.private_ip }}"
    groups: sit
  with_items: "{{ ec2.instances }}"
Alternatively, you might use the meta module with the refresh_inventory parameter to force Ansible to re-read the inventory file:
- meta: refresh_inventory
I have taken a snapshot of an instance, and I just want to restore it using Ansible.
Please provide any solution; my Ansible version is 1.9.4.
You can make use of the ec2_vol module:
http://docs.ansible.com/ansible/ec2_vol_module.html
Note: Keep an eye on the options it supports and the version they were added in.
- name: Detach the old volume
  ec2_vol:
    region: "{{ aws_region }}"
    id: "{{ get_id.volume_id }}"
    instance: None
  register: detach_vol

- name: Creating a Volume from a snapshot
  ec2_vol:
    snapshot: "{{ snap_id }}"
    region: "{{ aws_region }}"
    volume_size: 40
    instance: "{{ instance_id }}"
  register: ec2_vol
  tags: attach

- name: Attach the Created volume to an instance
  ec2_vol:
    instance: "{{ instance_id }}"
    id: "{{ ec2_vol.volume_id }}"
    device_name: /dev/sda1
    delete_on_termination: yes
I'm trying to get Ansible to bring up new EC2 boxes for me with a volume size larger than the default ~8 GB. I've added the volumes option with volume_size specified, but when I run with that, the volumes option seems to be ignored and I still get a new box with ~8 GB. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?
So, I just updated Ansible to 1.9.4 and now, all of a sudden, it works. So there's my answer: the code above is fine; Ansible was just broken.