I've seen several examples, but setting the IP from the results of launching EC2 instances is failing. Anyone have an idea why?
I am using Ansible 2.0.1.0.
The task to launch 3 instances in 3 different subnets works correctly, as follows:
  tasks:
    - name: elastic instance provisioning
      local_action:
        module: ec2
        region: "{{ region }}"
        key_name: "{{ key }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        user_data: "{{ lookup('file', '/etc/ansible/host_vars/elasticsearch/user_data') }}"
        wait: yes
        count: 1
        group: ["{{ main_sg }}", "{{ jenkins_sg }}"]
        instance_tags:
          Name: elastic-test-cluster
          class: database
          environment: staging
        vpc_subnet_id: "{{ item }}"
        assign_public_ip: no
      with_items:
        - "{{ private_subnet_1 }}"
        - "{{ private_subnet_2 }}"
        - "{{ private_subnet_3 }}"
      register: ec2

    - debug: msg="{{ ec2.results[0].instances[0].private_ip }}"
I can debug and get the expected result:
TASK [debug]
ok: [localhost] => {
"msg": "10.1.100.190"
}
But this next part in the playbook fails.
- name: Add Ip for each Server
  set_fact:
    instance_private_ip0: "{{ ec2.results[0].instances[0].private_ip }}"
    instance_private_ip1: "{{ ec2.results[1].instances[1].private_ip }}"
    instance_private_ip2: "{{ ec2.results[2].instances[2].private_ip }}"
  register: result

- debug: var=result
The result from the debug task is the following; I'm not sure what to make of it.
fatal: [localhost]: FAILED! => {"failed": true, "msg": "list object has no element 1"}
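The error follows from the shape of the registered variable: each iteration of the launch task created a single instance (`count: 1`), so every entry in `ec2.results` has an `instances` list with exactly one element. `instances[1]` and `instances[2]` therefore don't exist, which is what "list object has no element 1" is saying. A fixed version of the task, indexing `instances[0]` in all three lines:

```yaml
- name: Add Ip for each Server
  set_fact:
    instance_private_ip0: "{{ ec2.results[0].instances[0].private_ip }}"
    instance_private_ip1: "{{ ec2.results[1].instances[0].private_ip }}"
    instance_private_ip2: "{{ ec2.results[2].instances[0].private_ip }}"
```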
You can also loop over the results of the previous task:
- name: Add Ip for each Server
  set_fact:
    instance_private_ip{{ item.0 }}: "{{ item.1.instances[0].private_ip }}"
  with_indexed_items: "{{ ec2.results }}"
Don't be confused here by item.0 and item.1: the with_indexed_items loop provides two values per iteration, where item.0 is the index (0, 1, 2, ...) and item.1 is the actual list element.
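On Ansible 2.5 and later, the same pattern can be written with the `loop` keyword and `loop_control`, since the `with_indexed_items` form is considered legacy. A sketch of the equivalent task:

```yaml
- name: Add Ip for each Server
  set_fact:
    instance_private_ip{{ idx }}: "{{ item.instances[0].private_ip }}"
  loop: "{{ ec2.results }}"
  loop_control:
    index_var: idx   # exposes the loop index as "idx"
```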
Related
I'm a toddler in the Ansible sea. I'm trying to run an AWS provisioning playbook on macOS; the default output format for my AWS profile is JSON.
I'm following this step-by-step guide:
forem.dev/foremteam/forem-aws-setup-on-macos-hom
github.com/forem/selfhost#aws
aws.yml:
---
- name: Deploy Forem to AWS
  hosts: all
  become: false
  collections:
    - amazon.aws
    - community.aws
    - community.general
  vars:
    fcos_arch: x86_64
    fcos_platform: aws
    fcos_format: vmdk.xz
    fcos_stream: stable
    fcos_aws_region: us-east-1
    fcos_aws_size: t3a.small
    fcos_aws_ebs_size: 100
    fcos_aws_profile: forem-selfhost
    butane_cleanup: true
    ssh_key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"
  roles:
    - preflight
  tasks:
    - name: Get FCOS facts
      include_role:
        name: fcos
        tasks_from: facts

    - name: Convert butane file to an Ignition file
      include_role:
        name: butane
        tasks_from: butane
      vars:
        butane_input_template: "../templates/forem.yml.j2"
        butane_aws_s3: true
        butane_aws_s3_url: "https://forem-selfhost-{{ app_domain |replace('.', '-') }}-ign.s3.{{ fcos_aws_region }}.amazonaws.com/forem.ign"

    - amazon.aws.ec2_vpc_net_info:
        filters:
          "isDefault": "true"
        region: "{{ fcos_aws_region }}"
        profile: "{{ fcos_aws_profile }}"
      register: forem_vpc_info

    - name: Set forem_vpc_id fact
      ansible.builtin.set_fact:
        forem_vpc_id: "{{ forem_vpc_info['vpcs'][0]['vpc_id'] }}"

    - name: Gather info about VPC subnets
      amazon.aws.ec2_vpc_subnet_info:
        filters:
          vpc-id: "{{ forem_vpc_id }}"
          availability-zone: "{{ fcos_aws_region }}a"
        region: "{{ fcos_aws_region }}"
        profile: "{{ fcos_aws_profile }}"
      register: forem_subnet_info

    - name: Gather info about VPC AZs
      amazon.aws.aws_az_info:
        region: "{{ fcos_aws_region }}"
        profile: "{{ fcos_aws_profile }}"
      register: forem_az_info

    - name: "Get route table facts for {{ forem_vpc_id }}"
      community.aws.ec2_vpc_route_table_info:
        region: "{{ fcos_aws_region }}"
        filters:
          vpc-id: "{{ forem_vpc_id }}"
        profile: "{{ fcos_aws_profile }}"
      register: forem_vpc_route_table

    - name: "Generate list of route tables for {{ forem_vpc_id }}"
      set_fact:
        forem_vpcd_route_table_ids: "{{ forem_vpc_route_table.route_tables|map(attribute='id')|list }}"

    - name: "Create S3 VPC endpoint in {{ forem_vpc_id }}"
      community.aws.ec2_vpc_endpoint:
        state: present
        region: "{{ fcos_aws_region }}"
        vpc_id: "{{ forem_vpc_id }}"
        service: "com.amazonaws.{{ fcos_aws_region }}.s3"
        route_table_ids: "{{ forem_vpcd_route_table_ids }}"
        profile: "{{ fcos_aws_profile }}"
      register: forem_vpc_s3_endpoint

    - name: Set forem_vpc_s3_endpoint_id fact
      set_fact:
        forem_vpc_s3_endpoint_id: "{{ forem_vpc_s3_endpoint.result.vpc_endpoint_id }}"

    - name: Wait for S3 VPC Endpoint
      pause:
        seconds: 30

    - name: Create FCOS ignition bucket
      amazon.aws.s3_bucket:
        name: "forem-selfhost-{{ app_domain |replace('.', '-') }}-ign"
        state: present
        encryption: "AES256"
        region: "{{ fcos_aws_region }}"
        profile: "{{ fcos_aws_profile }}"
        policy: |
          {
            "Version": "2012-10-17",
            "Id": "VPCEaccesstoignitionbucket",
            "Statement": [
              {
                "Sid": "VPCE-access-to-ign-bucket",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Effect": "Allow",
                "Resource": ["arn:aws:s3:::forem-selfhost-{{ app_domain |replace('.', '-') }}-ign/*"],
                "Condition": {
                  "StringEquals": {
                    "aws:sourceVpce": "{{ forem_vpc_s3_endpoint_id }}"
                  }
                }
              }
            ]
          }

    - name: "Upload butane_ignition_stdout to forem-selfhost-{{ app_domain |replace('.', '-') }}-ign"
      amazon.aws.aws_s3:
        bucket: "forem-selfhost-{{ app_domain |replace('.', '-') }}-ign"
        object: "/forem.ign"
        content: "{{ butane_ignition_stdout | to_json | string }}"
        mode: put
        region: "{{ fcos_aws_region }}"
        profile: "{{ fcos_aws_profile }}"
      register: forem_ign_s3

    - name: Create Forem SSH key
      amazon.aws.ec2_key:
        name: "forem-{{ app_domain }}"
        key_material: "{{ ssh_key }}"
        profile: "{{ fcos_aws_profile }}"
        region: "{{ fcos_aws_region }}"

    - name: "Create Forem security group for {{ app_domain }}"
      amazon.aws.ec2_group:
        name: "forem-{{ app_domain }}"
        description: "Forem security group for {{ app_domain }}"
        vpc_id: "{{ forem_vpc_id }}"
        region: "{{ fcos_aws_region }}"
        profile: "{{ fcos_aws_profile }}"
        tags:
          "Name": "forem-{{ app_domain }}"
        rules:
          - proto: tcp
            ports:
              - 22
            cidr_ip: "{{ local_wan_ip_address }}/32"
            rule_desc: "Allow SSH access from {{ local_wan_ip_address }}"
          - proto: tcp
            ports:
              - 80
              - 443
            rule_desc: "Allow HTTP and HTTPS access from 0.0.0.0/0"
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: "all"
            from_port: 0
            to_port: 65535
            cidr_ip: "0.0.0.0/0"
            rule_desc: "Allow outbound access to 0.0.0.0/0"
      register: forem_security_group

    - name: "Launch Forem instance for {{ app_domain }}"
      amazon.aws.ec2_instance:
        key_name: "forem-{{ app_domain }}"
        region: "{{ fcos_aws_region }}"
        profile: "{{ fcos_aws_profile }}"
        group: "forem-{{ app_domain }}"
        instance_type: "{{ fcos_aws_size }}"
        image: "{{ fcos_aws_image }}"
        wait: yes
        wait_timeout: 500
        vpc_subnet_id: "{{ forem_subnet_info.subnets | map(attribute='id') | list | first }}"
        volumes:
          - device_name: /dev/xvda
            volume_type: gp2
            volume_size: "{{ fcos_aws_ebs_size }}"
            encrypted: yes
            delete_on_termination: no
        monitoring: yes
        assign_public_ip: yes
        user_data: "{{ butane_boot_ignition_stdout | to_json | string }}"
        instance_tags:
          App: "forem"
          Domain: "{{ app_domain }}"
          Name: "forem-{{ app_domain }}"
        count_tag:
          App: "forem"
          Domain: "{{ app_domain }}"
          Name: "forem-{{ app_domain }}"
        exact_count: 1
      register: forem_ec2_instance

    - name: Wait 300 seconds for port 22 to become open
      wait_for:
        port: 22
        host: "{{ forem_ec2_instance.tagged_instances | map(attribute='public_ip') | list | first }}"
        delay: 10
      connection: local

    - name: "Delete object forem-selfhost-{{ app_domain |replace('.', '-') }}-ign/forem.ign from S3"
      amazon.aws.aws_s3:
        bucket: "forem-selfhost-{{ app_domain |replace('.', '-') }}-ign"
        object: "/forem.ign"
        mode: delobj
        region: "{{ fcos_aws_region }}"
        profile: "{{ fcos_aws_profile }}"

    - name: Output EC2 setup message
      ansible.builtin.debug:
        msg:
          - "The public IPv4 IP Address for {{ app_domain }} is {{ forem_ec2_instance.tagged_instances | map(attribute='public_ip') | list | first }}"
          - "Please add an A entry for {{ app_domain }} that points to {{ forem_ec2_instance.tagged_instances | map(attribute='public_ip') | list | first }}"
          - "Example:"
          - " {{ app_domain }} IN A {{ forem_ec2_instance.tagged_instances | map(attribute='public_ip') | list | first }}"
          - "Once you have DNS resolving to this EC2 instance please read the Forem Admin Docs: https://admin.forem.com/"
setup.yml (I don't know the correct technical term for this file's inner functionality; it is structured as an Ansible inventory):
---
all:
  hosts:
  vars:
    ssh_key: "{{ lookup('file', '~/.ssh/forem.pub') }}"
    app_protocol: https://
    database_pool_size: 10
    force_ssl_in_rails: "true"
    lang: en_US.UTF-8
    node_env: "{{ forem_environment }}"
    rack_env: "{{ forem_environment }}"
    rack_timeout_service_timeout: 300
    rack_timeout_wait_timeout: 300
    rails_env: "{{ forem_environment }}"
    rails_log_to_stdout: "true"
    rails_serve_static_files: enabled
    redis_sessions_url: redis://localhost:6379
    redis_sidekiq_url: redis://localhost:6379
    redis_url: redis://localhost:6379
    session_expiry_seconds: 1209600
    web_concurrency: 2
    forem_context: selfhost
    forem_container_tag: quay.io/forem/forem:latest
  children:
    forems:
      hosts:
        forem:
          ansible_connection: local
          ansible_python_interpreter: /usr/bin/python3  # on macOS, this may need to be /usr/local/bin/python3
          # CHANGE_REQUIRED - forem_domain_name: example.com
          forem_domain_name: site.com
          # CHANGE_REQUIRED - default_email: your_email@example.com
          default_email: email@gmail.com
          forem_subdomain_name: www  # can be a subdomain, i.e. "community" in community.mainwebsite.com
          forem_server_hostname: host  # You may change this to something else if you choose (i.e. server, srv, etc.)
          # CHANGE_OPTIONAL - strict-origin-when-cross-origin enables embedded youtube video playback
          referrer_policy: "same-origin"
          # referrer_policy: "strict-origin-when-cross-origin"
          app_domain: "{{ forem_subdomain_name }}.{{ forem_domain_name }}"
          secret_key_base: "{{ vault_secret_key_base }}"
          session_key: _FOREMSELFHOST_Session
          imgproxy_key: "{{ vault_imgproxy_key }}"
          imgproxy_salt: "{{ vault_imgproxy_salt }}"
          forem_version: latest
          forem_environment: production
          dd_api_key: "{{ vault_dd_api_key }}"
          honeybadger_api_key: "{{ vault_honeybadger_api_key }}"
          honeybadger_js_api_key: "{{ vault_honeybadger_js_api_key }}"
          honeycomb_api_key: "{{ vault_honeycomb_api_key }}"
          postgres_user: forem_production
          postgres_password: "{{ vault_forem_postgres_password }}"
          postgres_host: localhost
          pusher_app_id: "{{ vault_pusher_app_id }}"
          pusher_beams_id: "{{ vault_pusher_beams_id }}"
          pusher_beams_key: "{{ vault_pusher_beams_key }}"
          pusher_cluster: us2
          pusher_key: "{{ vault_pusher_key }}"
          pusher_secret: "{{ vault_pusher_secret }}"
          recaptcha_secret: "{{ vault_recaptcha_secret }}"
          recaptcha_site: "{{ vault_recaptcha_site }}"
          sendgrid_api_key: "{{ vault_sendgrid_api_key }}"
          sendgrid_api_key_id: "{{ vault_sendgrid_api_key_id }}"
          slack_channel: "#forem-activity"
          slack_webhook_url: "{{ vault_slack_webhook_url }}"

          # Required Ansible Vault secret variables
          # Use the example commands below in a terminal to generate the required variables with Ansible Vault encrypt_string
          # These commands should be run in the selfhost directory, since the
          # ansible.cfg identifies the vault password which will be used to decrypt
          # if ansible-vault prompts for a password, something is not right
          # See this URL to learn more about ansible-vault:
          # https://docs.ansible.com/ansible/latest/user_guide/vault.html#encrypting-individual-variables-with-ansible-vault
          vault_secret_key_base: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            #secret key
          vault_imgproxy_key: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            #secret key
          vault_imgproxy_salt: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            #secret key
          vault_forem_postgres_password: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            #secret key

          # Optional Ansible Vault variables
          # echo -n foobarbaz | ansible-vault encrypt_string --stdin-name vault_my_cool_vaulted_var
          vault_cloudinary_api_key:
          vault_cloudinary_api_secret:
          vault_dd_api_key:
          vault_honeybadger_api_key:
          vault_honeybadger_js_api_key:
          vault_honeycomb_api_key:
          vault_pusher_app_id:
          vault_pusher_beams_id:
          vault_pusher_beams_key:
          vault_pusher_key:
          vault_pusher_secret:
          vault_recaptcha_secret:
          vault_recaptcha_site:
          vault_sendgrid_api_key:
          vault_sendgrid_api_key_id:
          vault_slack_webhook_url:
I'm getting the following error when it reaches the "Launch Forem instance for {{ app_domain }}" task in aws.yml:
fatal: [forem]: FAILED! => changed=false
msg: 'argument ''image'' is of type <class ''str''> and we were unable to convert to dict: dictionary requested, could not parse JSON or key=value'
PLAY RECAP *************************************************************************************************************************************************************************************************
forem : ok=40 changed=20 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
Since this is the last demanding task, you can assume that all required packages are installed correctly.
I do not know how to get past this error, and I tried looking for it online to no avail. My first thought is that maybe it is not defined in the vars up above. Any ideas or guidance as to how to get past this are
highly welcomed and appreciated.
Thanks
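The error message itself points at the likely fix: in amazon.aws.ec2_instance (unlike the older ec2 module) the `image` parameter is a dictionary, and a plain AMI ID string is passed via `image_id` instead. A hedged sketch of the two accepted forms, using the variable from the playbook above:

```yaml
# Either pass the AMI ID as a string via image_id...
- amazon.aws.ec2_instance:
    image_id: "{{ fcos_aws_image }}"
    # ...remaining parameters unchanged

# ...or keep the image parameter, but give it a dict with an id key
- amazon.aws.ec2_instance:
    image:
      id: "{{ fcos_aws_image }}"
    # ...remaining parameters unchanged
```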
I'm using the following code
- name: create a instance
  gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'false'
        boot: 'true'
        source: "{{ disk }}"
    metadata:
      startup-script-url:
      cost-center:
    labels:
      environment: production
    network_interfaces:
      - network: "{{ network }}"
        access_configs:
          - name: External NAT
            nat_ip: "{{ address }}"
            type: ONE_TO_ONE_NAT
    zone: us-central1-a
    project: test-12y38912634812648
    auth_kind: serviceaccount
    service_account_file: "~/programming/gcloud/test-1283891264812-8h3981f3.json"
    state: present
and I saved the file as create2.yml.
Then I run ansible-playbook create2.yml and get the following error:
ERROR! 'gcp_compute_instance' is not a valid attribute for a Play
The error appears to be in '/Users/xxx/programming/gcloud-test/create2.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: create a instance
^ here
I followed the documentation. What am I doing wrong and how do I fix it?
You haven't created a playbook; you've just created a file with a task, which won't run on its own, as you've discovered.
A playbook is a collection of plays, each of which runs tasks against a set of hosts. You should start with the playbook documentation:
Playbook Documentation
For GCP, here's a working example to create a network, external IP, disk and VM.
- name: 'Deploy gcp vm'
  hosts: localhost
  connection: local
  become: false
  gather_facts: no
  vars:
    gcp_project: "671245944514"
    gcp_cred_kind: "serviceaccount"
    gcp_cred_file: "/tmp/test-project.json"
    gcp_region: "us-central1"
    gcp_zone: "us-central1-a"

  # Roles & Tasks
  tasks:
    - name: create a disk
      gcp_compute_disk:
        name: disk-instance
        size_gb: 50
        source_image: projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts
        zone: "{{ gcp_zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: disk

    - name: create a network
      gcp_compute_network:
        name: network-instance
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: network

    - name: create a address
      gcp_compute_address:
        name: address-instance
        region: "{{ gcp_region }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        state: present
      register: address

    - name: create a instance
      gcp_compute_instance:
        name: vm-instance
        project: "{{ gcp_project }}"
        zone: "{{ gcp_zone }}"
        machine_type: n1-standard-1
        disks:
          - auto_delete: 'true'
            boot: 'true'
            source: "{{ disk }}"
        labels:
          environment: testing
        network_interfaces:
          - network: "{{ network }}"
            access_configs:
              - name: External NAT
                nat_ip: "{{ address }}"
                type: ONE_TO_ONE_NAT
        auth_kind: serviceaccount
        service_account_file: "{{ gcp_cred_file }}"
        state: present
The following Ansible playbook runs fine, no errors at all, but the URL just doesn't resolve/load afterwards. If I use the public IP created for the instance, the page loads.
---
- name: Provision an EC2 Instance
  hosts: local
  remote_user: ubuntu
  become: yes
  connection: local
  gather_facts: false
  vars:
    instance_type: t2.micro
    security_group: "Web Subnet Security Group"
    image: ami-0c5199d385b432989
    region: us-east-1
    keypair: demo-key
    count: 1
  vars_files:
    - keys.yml
  tasks:
    - name: Create key pair using our own pubkey
      ec2_key:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        name: demo-key
        key_material: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
        region: us-east-1
        state: present

    - name: Launch the new EC2 Instance
      ec2:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        assign_public_ip: yes
        vpc_subnet_id: subnet-0c799bda2a466f8d4
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        keypair: "{{ keypair }}"
        count: "{{ count }}"
        state: present
      register: ec2

    - name: Add tag to Instance(s)
      ec2_tag:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        resource: "{{ item.id }}"
        region: "{{ region }}"
        state: present
        tags:
          Name: demo-webserver
      with_items: "{{ ec2.instances }}"

    - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
      lineinfile:
        path: "./hosts"
        line: "{{ item.public_ip }}"
        insertafter: '\[demo-webserver\]'
        state: present
      with_items: "{{ ec2.instances }}"

    - name: Pause for 2 minutes
      pause:
        minutes: 2

    - name: Write the new ec2 instance host key to known hosts
      connection: local
      shell: "ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts"
      with_items: "{{ ec2.instances }}"

    - name: Waiting for the instance to come up
      local_action:
        module: wait_for
        host: "{{ item.public_ip }}"
        delay: 10
        connect_timeout: 300
        state: started
        port: 22
      with_items: "{{ ec2.instances }}"

    - name: Install packages
      delegate_to: "{{ item.public_ip }}"
      raw: bash -c "test -e /usr/bin/python || (apt -qqy update && apt install -qqy python-minimal && apt install -qqy apache2 && systemctl start apache2 && systemctl enable apache2)"
      with_items: "{{ ec2.instances }}"

    - name: Register new domain
      route53_zone:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com

    - name: Create new DNS record
      route53:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
        record: ansible-demo-domain.com
        type: A
        ttl: 300
        value: "{{ item.public_ip }}"
        state: present
        overwrite: yes
        private_zone: no
        wait: yes
      with_items: "{{ ec2.instances }}"

    - name: Create new DNS record
      route53:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        zone: ansible-demo-domain.com
        record: www.ansible-demo-domain.com
        type: CNAME
        ttl: 300
        value: ansible-demo-domain.com
        state: present
        overwrite: yes
        private_zone: no
        wait: yes
Appreciate your help in pointing out what I'm missing and where. I usually wait at least 5 minutes before testing the URL, but it really doesn't resolve/load.
Thank you!
20190301 update: here's how the hosted zone looks after provisioning:
[screenshot: hosted zone after provisioning, with its associated TTLs]
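One thing worth checking in a case like this: route53_zone creates the hosted zone and Route 53 assigns it a set of name servers, but nothing in the playbook updates the delegation at the domain registrar, so the zone's records won't resolve publicly until the registrar points at those name servers. A hedged sketch (the `hosted_zone_id` variable is hypothetical, and the module name is the current community.aws one) that lists the zone's record sets, including the assigned NS records, so they can be compared against the registrar's settings:

```yaml
- name: List the record sets in the new hosted zone
  community.aws.route53_info:
    query: record_sets
    hosted_zone_id: "{{ hosted_zone_id }}"  # hypothetical: the id of the zone created above
  register: zone_records

- debug:
    var: zone_records
```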
I have 3 separate VPCs on AWS and am using Ansible to handle deploys. My problem is that a few of my environments use security groups from another VPC.
Here is my ec2 task:
- name: Create instance
  ec2:
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    aws_access_key: "{{ aws_access_key_id }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    key_name: "{{ key_name }}"
    instance_tags:
      Name: "{{ name }}"
      Environment: "{{ env }}"
      Product: "{{ product }}"
      Service: "{{ service }}"
    region: "{{ region }}"
    volumes:
      - device_name: "{{ disk_name }}"
        volume_type: "{{ disk_type }}"
        volume_size: "{{ disk_size }}"
        delete_on_termination: "{{ delete_on_termination }}"
    # group: "{{ security_group_name }}"
    group_id: "{{ security_group_id }}"
    wait: true
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    count: "{{ instance_count }}"
    monitoring: "{{ detailed_monitoring }}"
    instance_profile_name: "{{ iam_role }}"
    assign_public_ip: "{{ assign_public_ip }}"
    termination_protection: "{{ termination_protection }}"
  register: ec2
When I pass in a security group ID from another VPC, I get this:
"msg": "Instance creation failed => InvalidParameter: Security group sg-e7284493 and subnet subnet-19d97e50 belong to different networks."
Is there a workaround in Ansible for this?
You can't assign a foreign security group to an EC2 instance in a different VPC: the security groups attached to an instance must belong to the VPC the instance lives in.
The way to do this would be to create a security group in the VPC where your EC2 instance lives that allows access from the foreign security group, then apply that newly created group to your instance.
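A hedged sketch of that approach (the group name, port, and variables are hypothetical): create a group in the instance's own VPC whose rule references the foreign group by ID, then attach the new group to the instance. Note that referencing a group in another VPC inside a rule generally requires VPC peering between the two networks.

```yaml
- name: Security group in the instance's own VPC
  ec2_group:
    name: app-cross-vpc-sg            # hypothetical name
    description: Allow traffic from the peer VPC's group
    vpc_id: "{{ vpc_id }}"            # the VPC the EC2 instance lives in
    rules:
      - proto: tcp
        from_port: 443
        to_port: 443
        group_id: sg-e7284493         # the foreign group from the error message
  register: app_sg

- name: Create instance with the local group
  ec2:
    group_id: "{{ app_sg.group_id }}"
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    # ...remaining parameters as in the task above
```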
I'm trying to get Ansible to bring up new EC2 boxes for me with a volume size larger than the default ~8 GB. I've added the volumes option with volume_size specified, but when I run with that, the volumes option seems to be ignored and I still get a new box with ~8 GB. The relevant part of my playbook is as follows:
- name: provision new boxes
  hosts: localhost
  gather_facts: False
  tasks:
    - name: Provision a set of instances
      ec2:
        group: "{{ aws_security_group }}"
        instance_type: "{{ aws_instance_type }}"
        image: "{{ aws_ami_id }}"
        region: "{{ aws_region }}"
        vpc_subnet_id: "{{ aws_vpc_subnet_id }}"
        key_name: "{{ aws_key_name }}"
        wait: true
        count: "{{ num_machines }}"
        instance_tags: "{{ tags }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 15
      register: ec2
What am I doing wrong?
So, I just updated Ansible to 1.9.4 and now all of a sudden it works. So there's my answer: the code above is fine, Ansible was just broken.