It is great that you can create a new GCP instance with Ansible, but how do you terminate the instance?
I don't see a command to do this.
Ok, so it appears that all you have to do is set the state of the resource to "absent".
So it would look something like:
- name: destroy instance
  gcp_compute_instance:
    state: absent
    name: "{{ servername }}"
    zone: "{{ zone }}"
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
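For context, a minimal self-contained playbook around that task might look like the sketch below; all variable values are placeholders you would replace with your own:

```yaml
- name: destroy a gcp instance
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    # placeholder values -- substitute your own
    servername: "test-instance"
    zone: "us-central1-a"
    gcp_project: "my-project"
    gcp_cred_kind: "serviceaccount"
    gcp_cred_file: "/path/to/service-account-key.json"
  tasks:
    - name: destroy instance
      gcp_compute_instance:
        state: absent
        name: "{{ servername }}"
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
```

Note that the identifying parameters (name, zone, project) must match the values used when the instance was created, so the module can find the resource to delete.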
I am new to Ansible and I am trying to figure out how to create a GCP disk from a GCP snapshot, using the gcp_compute_disk module. I am using the following documentation: https://docs.ansible.com/ansible/2.10/collections/google/cloud/gcp_compute_disk_module.html
I created the playbook below, but it only creates a new empty disk, not a disk from the snapshot. My Ansible version is 2.9.20.
- name: Create GCP snapshots
  hosts: localhost
  gather_facts: yes
  vars:
    gcp_project: test-project
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /etc/ansible/keys/ansible-test-project-service-account-key.json
    zone: "us-central1-a"
    region: "us-central1"
    instancename: "test-instance"
    snapshot:
      selfLink: https://www.googleapis.com/compute/v1/projects/test-project/global/snapshots/test-snapshot1
  tasks:
    - name: create data disk from a snapshot
      gcp_compute_disk:
        name: "{{ instancename }}-data-1"
        description: "{{ instancename }}-data-1"
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        source_snapshot: "{{ snapshot }}"
        labels:
          usage: test-label
        type: "https://www.googleapis.com/compute/v1/projects/test-project/zones/us-central1-b/diskTypes/pd-standard"
        state: present
      register: disk_data
I have also tried to create the snapshot first with gcp_compute_snapshot, then registered that snapshot (register: disksnapshot), and then used that dictionary to reference the snapshot (source_snapshot: "{{ disksnapshot }}"). The result is the same.
Thanks in advance for your help.
I'm using the following code:
- name: create a instance
  gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'false'
        boot: 'true'
        source: "{{ disk }}"
    metadata:
      startup-script-url:
      cost-center:
    labels:
      environment: production
    network_interfaces:
      - network: "{{ network }}"
        access_configs:
          - name: External NAT
            nat_ip: "{{ address }}"
            type: ONE_TO_ONE_NAT
    zone: us-central1-a
    project: test-12y38912634812648
    auth_kind: serviceaccount
    service_account_file: "~/programming/gcloud/test-1283891264812-8h3981f3.json"
    state: present
and I saved the file as create2.yml.
Then I run ansible-playbook create2.yml and I get the following error:
ERROR! 'gcp_compute_instance' is not a valid attribute for a Play
The error appears to be in '/Users/xxx/programming/gcloud-test/create2.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: create a instance
^ here
I followed the documentation. What am I doing wrong and how do I fix it?
You haven't created a playbook, you've just created a file containing a task, which won't run on its own, as you've discovered.
A playbook is a list of plays, and each play runs a set of tasks against a group of hosts. You should start with the playbook documentation:
Playbook Documentation
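The distinction can be sketched as follows; the play and task names here are placeholders:

```yaml
# A bare task file (what the question has) is NOT a playbook.
# A playbook is a list of plays; each play names its hosts and holds tasks:
- name: example play
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: an example task
      debug:
        msg: "tasks belong under a play's tasks: section"
```

The error message points at line 1 because Ansible tried to parse the top-level task as a play and found `gcp_compute_instance`, which is not a valid play keyword.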
For GCP, here's a working example to create a network, external IP, disk and VM.
- name: 'Deploy gcp vm'
  hosts: localhost
  connection: local
  become: false
  gather_facts: no
  vars:
    gcp_project: "671245944514"
    gcp_cred_kind: "serviceaccount"
    gcp_cred_file: "/tmp/test-project.json"
    gcp_region: "us-central1"
    gcp_zone: "us-central1-a"
  # Roles & Tasks
  tasks:
    - name: create a disk
      gcp_compute_disk:
        name: disk-instance
        size_gb: 50
        source_image: projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts
        zone: "{{ gcp_zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: disk
    - name: create a network
      gcp_compute_network:
        name: network-instance
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: network
    - name: create an address
      gcp_compute_address:
        name: address-instance
        region: "{{ gcp_region }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: address
    - name: create an instance
      gcp_compute_instance:
        name: vm-instance
        project: "{{ gcp_project }}"
        zone: "{{ gcp_zone }}"
        machine_type: n1-standard-1
        disks:
          - auto_delete: 'true'
            boot: 'true'
            source: "{{ disk }}"
        labels:
          environment: testing
        network_interfaces:
          - network: "{{ network }}"
            access_configs:
              - name: External NAT
                nat_ip: "{{ address }}"
                type: ONE_TO_ONE_NAT
        auth_kind: serviceaccount
        service_account_file: "{{ gcp_cred_file }}"
        state: present
I have taken a snapshot of an instance and I just want to restore it using Ansible.
Please suggest a solution; my Ansible version is 1.9.4.
You can make use of the ec2_vol module:
http://docs.ansible.com/ansible/ec2_vol_module.html
Note: Keep an eye on the options it supports and the version they were added in.
- name: Detach the old volume
  ec2_vol:
    region: "{{ aws_region }}"
    id: "{{ get_id.volume_id }}"
    instance: None
  register: detach_vol
- name: Creating a Volume from a snapshot
  ec2_vol:
    snapshot: "{{ snap_id }}"
    region: "{{ aws_region }}"
    volume_size: 40
    instance: "{{ instance_id }}"
  register: ec2_vol
  tags: attach
- name: Attach the Created volume to an instance
  ec2_vol:
    instance: "{{ instance_id }}"
    id: "{{ ec2_vol.volume_id }}"
    device_name: /dev/sda1
    delete_on_termination: yes
I've seen several examples, but setting the IPs from the results of launching EC2 instances keeps failing. Does anyone have an idea why?
I am using Ansible 2.0.1.0.
The task to launch 3 instances in 3 different subnets works correctly, as follows:
tasks:
  - name: elastic instance provisioning
    local_action:
      module: ec2
      region: "{{ region }}"
      key_name: "{{ key }}"
      instance_type: "{{ instance_type }}"
      image: "{{ image }}"
      user_data: "{{ lookup('file', '/etc/ansible/host_vars/elasticsearch/user_data') }}"
      wait: yes
      count: 1
      group: ["{{ main_sg }}", "{{ jenkins_sg }}"]
      instance_tags:
        Name: elastic-test-cluster
        class: database
        environment: staging
      vpc_subnet_id: "{{ item }}"
      assign_public_ip: no
    with_items:
      - "{{ private_subnet_1 }}"
      - "{{ private_subnet_2 }}"
      - "{{ private_subnet_3 }}"
    register: ec2
  - debug: msg="{{ ec2.results[0].instances[0].private_ip }}"
I can debug and get the expected result:
TASK [debug]
ok: [localhost] => {
"msg": "10.1.100.190"
}
But this next part in the playbook fails.
- name: Add Ip for each Server
  set_fact:
    instance_private_ip0: "{{ ec2.results[0].instances[0].private_ip }}"
    instance_private_ip1: "{{ ec2.results[1].instances[1].private_ip }}"
    instance_private_ip2: "{{ ec2.results[2].instances[2].private_ip }}"
  register: result
- debug: var=result
The result from the debug is the following; I'm not sure what to make of it:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "list object has no element 1"}
Each iteration of your launch task creates a single instance (count: 1), so every entry in ec2.results has only instances[0]; ec2.results[1].instances[1] therefore doesn't exist, which is what "list object has no element 1" means. Instead of indexing by hand, you can loop over the results of the previous task:
- name: Add Ip for each Server
  set_fact:
    instance_private_ip{{ item.0 }}: "{{ item.1.instances[0].private_ip }}"
  with_indexed_items: "{{ ec2.results }}"
Don't be confused here about item.0 and item.1. The with_indexed_items loop provides two items per iteration: item.0 is the index (0, 1, 2, ...) and item.1 is the actual content.
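To see the index/content pairing, a quick debug task can print both per iteration; this sketch assumes the registered ec2 variable from the launch task above:

```yaml
# Print each loop index alongside the instance IP it maps to.
# item.0 -> the index; item.1 -> one entry of ec2.results
- name: show index and content per iteration
  debug:
    msg: "index={{ item.0 }} -> ip={{ item.1.instances[0].private_ip }}"
  with_indexed_items: "{{ ec2.results }}"
```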
I'm trying to get Ansible to bring up new EC2 boxes for me with a volume size larger than the default ~8 GB. I've added the volumes option with volume_size specified, but when I run with that, the volumes option seems to be ignored and I still get a new box with ~8 GB. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?
So, I just updated Ansible to 1.9.4 and now, all of a sudden, it works. So there's my answer: the code above is fine; Ansible was just broken.