I have 3 separate VPCs on AWS and am using Ansible to handle deploys. My problem is that a few of my environments use security groups from another VPC.
Here is my EC2 task:
- name: Create instance
  ec2:
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    aws_access_key: "{{ aws_access_key_id }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    key_name: "{{ key_name }}"
    instance_tags:
      Name: "{{ name }}"
      Environment: "{{ env }}"
      Product: "{{ product }}"
      Service: "{{ service }}"
    region: "{{ region }}"
    volumes:
      - device_name: "{{ disk_name }}"
        volume_type: "{{ disk_type }}"
        volume_size: "{{ disk_size }}"
        delete_on_termination: "{{ delete_on_termination }}"
    # group: "{{ security_group_name }}"
    group_id: "{{ security_group_id }}"
    wait: true
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    count: "{{ instance_count }}"
    monitoring: "{{ detailed_monitoring }}"
    instance_profile_name: "{{ iam_role }}"
    assign_public_ip: "{{ assign_public_ip }}"
    termination_protection: "{{ termination_protection }}"
  register: ec2
When I pass in a security group ID from another VPC, I get this:
"msg": "Instance creation failed => InvalidParameter: Security group sg-e7284493 and subnet subnet-19d97e50 belong to different networks."
Is there a workaround in Ansible for this?
You can't assign a security group from one VPC to an EC2 instance in a different VPC: every security group attached to an instance must belong to the VPC the instance lives in.
The way around this is to create a security group in the VPC where your EC2 instance lives that allows the foreign security group access, then apply that newly created security group to your instance.
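For example, a minimal sketch with the ec2_group module, assuming the two VPCs are peered so the foreign group can be referenced by ID (the group name, port, and the foreign_security_group_id variable are illustrative):

- name: Create a local security group that trusts the foreign one
  ec2_group:
    name: allow-foreign-sg          # illustrative name
    description: Allow access from the peered VPC's security group
    vpc_id: "{{ vpc_id }}"          # the VPC your instance lives in
    region: "{{ region }}"
    rules:
      - proto: tcp
        from_port: 443              # illustrative port
        to_port: 443
        group_id: "{{ foreign_security_group_id }}"  # the sg-... from the other VPC
  register: local_sg

# Then pass local_sg.group_id to the ec2 task's group_id parameter above.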
I am trying to provision AWS infrastructure using Ansible. My simplified playbook vpc.yml, for illustration, is as follows:
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    vpc_name: "Test VPC"
    vpc_cidr_block: "10.0.0.0/16"
    aws_region: "ap-east-1"
    subnets:
      test_net_1a:
        az: "ap-east-1a"
        cidr: "10.0.1.0/24"
      test_net_1b:
        az: "ap-east-1b"
        cidr: "10.0.2.0/24"
  tasks:
    - name: Create VPC
      ec2_vpc_net:
        name: "{{ vpc_name }}"
        cidr_block: "{{ vpc_cidr_block }}"
        region: "{{ aws_region }}"
        state: "present"
      register: my_vpc

    # Save VPC id in a new variable.
    - name: Set VPC ID in variable
      set_fact:
        vpc_id: "{{ my_vpc.vpc.id }}"

    - name: Create Subnets
      ec2_vpc_subnet:
        state: "present"
        vpc_id: "{{ vpc_id }}"
        cidr: "{{ item.value.cidr }}"
        az: "{{ item.value.az }}"
        region: "{{ aws_region }}"
        resource_tags:
          Name: "{{ item.key }}"
      loop: "{{ subnets | dict2items }}"
Now I try to test my playbook with ansible-playbook vpc.yml --check. However, the playbook fails, because with --check the Create VPC task doesn't actually run, so my_vpc only contains:

"changed": true,
"failed": false

and my_vpc.vpc.id is undefined when the Set VPC ID task executes. Apparently --check cannot be used to preview AWS provisioning changes with Ansible, so how do I test my playbook during development without making any actual infrastructure changes?
I have some (i.e. 3) existing Elastic IPs created in AWS earlier. I am trying to provision 3 AWS EC2 instances and associate those IPs with the newly created instances. I need to use those existing Elastic IPs because they are whitelisted with my external partner for payment processing. I am not sure how to do that. I have the playbook below to create the EC2 instances:
- name: Provision a set of instances
  ec2:
    key_name: "{{ key_name }}"
    group_id: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    wait: true
    exact_count: "{{ instances }}"
    count_tag:
      Name: Demo
    instance_tags:
      Name: "{{ application }}"
    region: "{{ region }}"
  register: ec2_instances

- name: Store EC2 instance IPs to provision
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: ec2_instance_ips
  with_items: "{{ ec2_instances.tagged_instances }}"
The second task gets those hosts ready so I can configure the instances.
I just need to associate the EIPs with those instances next.
Thanks,
Philip
Here you go, pulled from one of my roles.
- name: associate with our instance
  ec2_eip:
    reuse_existing_ip_allowed: true
    instance_id: "{{ ec2_instance }}"
    public_ip: "{{ eip }}"
    state: present
    region: "{{ ec2_region | default('us-east-1') }}"
    in_vpc: true
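Applied to your three instances, you could zip the registered instances with your whitelisted addresses. A minimal sketch, assuming a variable existing_eips that lists your three Elastic IPs in the order you want them assigned:

- name: Associate the existing EIPs with the new instances
  ec2_eip:
    instance_id: "{{ item.0.id }}"
    public_ip: "{{ item.1 }}"
    in_vpc: true
    state: present
    region: "{{ region }}"
  with_together:
    - "{{ ec2_instances.tagged_instances }}"
    - "{{ existing_eips }}"   # assumed var, e.g. ['52.0.0.10', '52.0.0.11', '52.0.0.12']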
I am trying to spin up a number of EC2 instances using Ansible in different availability zones, and hence different subnets. What I am confused about is how to pass the right subnet corresponding to the right zone.
Assume I am passing my subnet variables as:
subnet_id_a: "subnet-9c3e38f8"
subnet_id_b: "subnet-88d171ff"
Now, these subnets are in different AZs, and I need to create some n number of instances spread across those AZs.
I am trying to use:
- name: Create ES Master Node instances
  ec2:
    key_name: "{{ aws_key_name }}"
    instance_type: "{{ aws_instance_type }}"
    image: "{{ aws_ami }}"
    wait: yes
    wait_timeout: 500
    count: "{{ master_instance_count }}"
    instance_tags:
      Name: "{{ master_tag_name }}"
    volumes:
      - device_name: /dev/sda1
        volume_type: gp2
        volume_size: 100
    vpc_subnet_id: "{{ subnet_id }}"
    zone: "{{ aws_region }}{{ item.0 }}"
    region: "{{ aws_region }}"
    group: "{{ aws_sec_group_name }}"
  with_items:
    - [ 'a', 'b' ]
  register: ec2_details
But I am not sure how to pass the corresponding subnets so that each instance gets spun up in a different AZ. Please help!
You can refactor your variables like this:

subnet_ids:
  a: subnet-9c3e38f8
  b: subnet-88d171ff

And in your task:

...
vpc_subnet_id: "{{ subnet_ids[item] }}"
...
with_items: [a, b]
I suppose zone is not necessary, because a subnet is already bound to a specific AZ.
And you don't need to nest your loop list like - [ 'a', 'b' ]; use just [a, b] to avoid item.0 and reference item directly, as shown in the sketch below.
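Putting it together, the whole task could look like this (a sketch based on the variables in your question):

- name: Create ES Master Node instances
  ec2:
    key_name: "{{ aws_key_name }}"
    instance_type: "{{ aws_instance_type }}"
    image: "{{ aws_ami }}"
    wait: yes
    wait_timeout: 500
    count: "{{ master_instance_count }}"
    instance_tags:
      Name: "{{ master_tag_name }}"
    vpc_subnet_id: "{{ subnet_ids[item] }}"   # picks the subnet bound to this AZ suffix
    region: "{{ aws_region }}"
    group: "{{ aws_sec_group_name }}"
  with_items: [a, b]
  register: ec2_details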
I've seen several examples, but setting the IPs from the results of launching EC2 instances keeps failing. Anyone have an idea why?
I am using Ansible 2.0.1.0.
The task to launch 3 instances in 3 different subnets works correctly, as follows:
tasks:
  - name: elastic instance provisioning
    local_action:
      module: ec2
      region: "{{ region }}"
      key_name: "{{ key }}"
      instance_type: "{{ instance_type }}"
      image: "{{ image }}"
      user_data: "{{ lookup('file', '/etc/ansible/host_vars/elasticsearch/user_data') }}"
      wait: yes
      count: 1
      group: ["{{ main_sg }}", "{{ jenkins_sg }}"]
      instance_tags:
        Name: elastic-test-cluster
        class: database
        environment: staging
      vpc_subnet_id: "{{ item }}"
      assign_public_ip: no
    with_items:
      - "{{ private_subnet_1 }}"
      - "{{ private_subnet_2 }}"
      - "{{ private_subnet_3 }}"
    register: ec2

  - debug: msg="{{ ec2.results[0].instances[0].private_ip }}"
I can debug and get the expected result:

TASK [debug]
ok: [localhost] => {
    "msg": "10.1.100.190"
}
But this next part in the playbook fails.
- name: Add Ip for each Server
  set_fact:
    instance_private_ip0: "{{ ec2.results[0].instances[0].private_ip }}"
    instance_private_ip1: "{{ ec2.results[1].instances[1].private_ip }}"
    instance_private_ip2: "{{ ec2.results[2].instances[2].private_ip }}"
  register: result

- debug: var=result
The result from the debug is the following; I'm not sure what to make of it.
fatal: [localhost]: FAILED! => {"failed": true, "msg": "list object has no element 1"}
Each iteration of your provisioning task launched a single instance (count: 1), so every entry in ec2.results only has instances[0]; indexing instances[1] and instances[2] is what produces "list object has no element 1". You can instead loop over the results of the previous task:
- name: Add Ip for each Server
  set_fact:
    instance_private_ip{{ item.0 }}: "{{ item.1.instances[0].private_ip }}"
  with_indexed_items: "{{ ec2.results }}"
Don't be confused here about item.0 and item.1. The with_indexed_items loop provides two items per iteration. item.0 is the index (0, 1, 2, ...) and item.1 is the actual content.
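Once the loop has run, the three facts exist by name and can be checked, for example:

- debug:
    msg: "{{ instance_private_ip0 }} / {{ instance_private_ip1 }} / {{ instance_private_ip2 }}"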
I'm trying to get Ansible to bring up new EC2 boxes for me with a volume size larger than the default ~8 GB. I've added the volumes option with volume_size specified, but when I run with that, the volumes option seems to be ignored and I still get a new box with ~8 GB. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?
So, I just updated Ansible to 1.9.4 and now all of a sudden it works. So there's my answer: the code above is fine; Ansible was just broken.