Using Ansible, I'm trying to upload an AWS CloudFormation template to an S3 bucket, then run the template using the URL that results.
- name: create s3 bucket
  s3_bucket:
    name: '{{ s3.bucket_name }}'
    state: present
  register: s3_bucket

- name: stage cloudformation template
  aws_s3:
    bucket: '{{ s3_bucket.name }}'
    object: cloudformation-template-vpc.yaml
    src: ../files/cloudformation-template-vpc.yaml
    mode: put
  register: s3_file

- name: 'call cloudformation with state {{ vpc.state }}'
  cloudformation:
    stack_name: '{{ vpc.stack_name }}'
    state: '{{ vpc.state | default("present") }}'
    template: '{{ s3_file.url }}'
  register: vpc_ref
But when doing that I get an error:
IOError: [Errno 2] No such file or directory:
'https://mybucket.s3.amazonaws.com/cloudformation-template-vpc.yaml?AWSAccessKeyId=<access-key>&Expires=<a-number>&Signature=<signature>'
(I've obviously modified the URL, but the real one is in that format.)
Obviously there's an object at that URL. Ansible just created it, I'm using the value it gave me back, and I've looked in the S3 bucket via the web console to verify it's there. The object exists. There is "such file or directory".
This might be a permissions issue. I mean, I'm running one continuous Ansible play: upload the file, get the URL back, and run the stack using that URL. It's all being done as the same IAM role. So I don't see why it would be a permissions issue, but then again I don't really understand S3 permissions, public, private, etc.
Why would this be failing? Why can't Ansible see the object after uploading it?
The source of the problem lies in the error: "No such file or directory". Sometimes you have to take an error more literally than you normally would. In this case, Ansible is right. There is no file or directory with a name that starts with "https://". That's a URL, not a file or directory.
Q: Egads, could it be that dumb?
A: Who you callin' dumb, buddy? The "template" parameter, according to Ansible docs, is for "The local path of the cloudformation template." There's another parameter called "template_url", which is for "Location of file containing the template body. The URL must point to a template located in an S3 bucket." Which is what you're trying to do.
Okay, okay, maybe dumb was too harsh. I just would have expected that you'd specify something for "template" and it would be smart enough to know if it was a URL versus a local path. Lesson learned. Always RTFM.
So given that I originally called the cloudformation module like this:
- name: 'call cloudformation with state {{ vpc.state }}'
  cloudformation:
    stack_name: '{{ vpc.stack_name }}'
    state: '{{ vpc.state | default("present") }}'
    template: '{{ s3_file.url }}'
  register: vpc_ref
...the way to correct this problem is to change that second to last line like this:
- name: 'call cloudformation with state {{ vpc.state }}'
  cloudformation:
    stack_name: '{{ vpc.stack_name }}'
    state: '{{ vpc.state | default("present") }}'
    template_url: '{{ s3_file.url }}'
  register: vpc_ref
The second to last line in that block becomes "template_url".
Related
I am using this Deployment Manager Jinja template to create a bucket and upload files into it.
resources:
- type: storage.v1.bucket
  name: {{ properties['bucket_name'] }}
  properties:
    location: {{ properties['region'] }}
- name: {{ properties['build_name'] }}
  action: gcp-types/cloudbuild-v1:cloudbuild.projects.builds.create
  metadata:
    runtimePolicy:
    - CREATE
  properties:
    steps:
    - name: gcr.io/cloud-builders/git
      args: ['clone', 'https://<token>@github.com/{{ properties['username'] }}/{{ properties['repo_name'] }}.git']
    - name: gcr.io/cloud-builders/gsutil
      args: ['-m', 'cp', '-r', '{{ properties['repo_name'] }}/{{ properties['file_path_name_in_repo'] }}*', 'gs://{{ properties['bucket_name'] }}/']
    timeout: 120s
I took the idea for the above code from How to write a file to Google Cloud Storage using Deployment Manager?
Running
gcloud deployment-manager deployments create name --config=file.yaml
works fine, but when executing
gcloud deployment-manager deployments delete name
it reports that it cannot delete a bucket that contains files. Also, the
gcloud deployment-manager deployments update name --config=file.yaml
command is not working; it says that the update option is not supported for Cloud Build.
My goal is to create a bucket and upload files into it from GitHub, and also, if required, to delete the bucket and update it (to push updated files) using the Deployment Manager template.
I have been told to do this using a Jinja or YAML template.
It would be great if you could take a look at this and help me clear up the problem.
I'm currently trying to further automate VM provisioning by not having to include the IP address in the variables file. I found the dig lookup (an nslookup-style plugin), but I feel I'm going about this the wrong way. For example, here is the variables file that is read upon creation:
# VMware Launch Variables
# If this is a test deployment you must ensure the vm is terminated after use.
vmname: agent5
esxi_datacenter: Datacenter
esxi_cluster: Cluster
esxi_datastore: ds1 # Do not change value.
esxi_template: template-v2
esxi_folder: agents # Folder must be pre-created
# Static IP Addresses
esxi_static_ip: "{{ lookup('dig', '{{ vmname }}.example.com.') }}"
esxi_netmask: 255.255.252.0
esxi_gateway: 10.0.0.1
What I was hoping to do was have "esxi_static_ip" pulled on the fly from a dig lookup. This, however, does not work in its current state.
What is happening is either the VM launches without an ipv4 address or more often it fails with the following error:
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Failed to create a virtual machine : A specified parameter was not correct: spec.nicSettingMap.adapter.ip.ipAddress"}
I get the IP and pass it along, which works when I hard-code esxi_static_ip: in my vmware-lanch-vars.yml file. However, if I use a lookup (including the examples above), it fails.
The newvm variable is registered when I run my vmware_guest playbook.
- name: Make virtual machine IP persistent
  set_fact:
    newvm_ip_address: '{{ newvm.instance.ipv4 }}'

- name: Add host to in memory inventory
  add_host:
    hostname: "{{ newvm_ip_address }}"
    groups: just_created
    newvm_ip_address: "{{ newvm.instance.ipv4 }}"
When I run with -vvvv I can see no IP is being attached:
"networks": [
{
"device_type": "vmxnet3",
"gateway": "0.0.0.01",
"ip": "",
"name": "Network",
"netmask": "255.255.252.0",
"type": "static"
}
],
UPDATE 3
When I created a simple playbook, it works; it just doesn't when I put it into my regular flow. The following works:
---
- hosts: localhost
  vars:
    vmname: "apim-sb-ng1-agent2"
    vm_dig_fqdn: "{{ vmname }}.example.com."
    esxi_static_ip: "{{ lookup('dig', vm_dig_fqdn) }}"
  tasks:
    - debug: msg="{{ esxi_static_ip }}"
I am not sure this is the first problem you are facing (see my comment above), but your Jinja2 template expression is wrong.
You cannot use Jinja2 expression expansion while already inside a Jinja2 expression expansion.
In this case, you have to concatenate your variable and string with the + operator:
esxi_static_ip: "{{ lookup('dig', vmname + '.example.com.') }}"
If you prefer to use Jinja2 expansion everywhere, you can separate this into different variables, e.g.:
vm_dig_fqdn: "{{ vmname }}.example.com."
esxi_static_ip: "{{ lookup('dig', vm_dig_fqdn) }}"
I was just trying some stuff with Ansible, but I'd really appreciate it if anyone could reproduce this, or at least explain it to me.
I'm trying to deploy instances on AWS with Ansible. I'm using Ansible (2.9.6) from a virtual machine deployed with Vagrant on a Windows 10 host.
I wrote this playbook:
---
- name: Configuring the EC2 instance
  hosts: localhost
  connection: local
  vars:
    count: '{{ count }}'
    volumes:
      - device_name: /dev/sda1
        volume_size: '{{ volume_size }}'
  tasks:
    - name: Launch Instances
      ec2:
        instance_type: '{{ instance_type }}'
        image: '{{ ami }}'
        region: '{{ region }}'
        key_name: '{{ pem }}'
        count: '{{ count }}'
        group_id: '{{ sec_grp }}'
        wait: true
        volumes: '{{ volumes }}'
      register: ec2

    - name: Associating after allocating eip
      ec2_eip:
        in_vpc: yes
        reuse_existing_ip_allowed: yes
        state: present
        region: '{{ region }}'
        device_id: '{{ ec2.instance_ids[0] }}'
      register: elastic_ip

    - name: Adding tags to the instance
      local_action:
        module: ec2_tag
        region: '{{ region }}'
        resource: '{{ item.id }}'
        state: present
      with_items: '{{ ec2.instances }}'
      args:
        tags:
          Name: '{{ tag_name }}'
          Env: '{{ tag_env }}'
          Type: AppService
      register: tag
I execute the following command with --extra-vars:
ansible-playbook ec2.yml --extra-vars instance_type=t2.micro -e ami=ami-04d5cc9b88f9d1d39 -e region=eu-west-1 -e pem=keyname -e count=1 -e sec_grp=sg-xx -e volume_size=10 -e tag_name=prueba -e tag_env=dev -vvv
ami-04d5cc9b88f9d1d39 = Amazon Linux 2 AMI (HVM), SSD Volume Type for eu-west-1
When the playbook finishes running successfully, I can see the instance booting in my AWS console and changing to the running state (initializing), but then it suddenly changes to terminating and finally to the stopped state.
In my terminal the playbook runs fine and I get:
PLAY RECAP **************************
localhost : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I spent the whole day trying to fix this, changing state: running and adding new tasks to make the instance explicitly running. Well, in the end the problem was the AMI. I changed it to an Ubuntu AMI, ami-035966e8adab4aaad, in the same region, and it works fine; it's still running.
I've used this AMI (Amazon Linux 2) before with CloudFormation and Terraform, and nothing like this ever happened; it always booted fine.
Well, if anybody has any idea why this is happening, or if there's something I'm missing, I'd really like to know.
Take care!
I was able to reproduce this issue. I tried everything you mentioned: I copied the YAML file you provided and executed it with the exact same values (except the key pair and security group). The EC2 instance goes into the running state and then stops.
To isolate the issue, I did the same thing in another region (ap-south-1), with the same Amazon Linux 2 AMI, but the same error was reproduced.
This is because the Amazon Linux 2 AMI expects its root EBS volume to be mounted at /dev/xvda, not at /dev/sda1; /dev/sda1 is used when the AMI is Ubuntu.
Since the AMI and the mount path for the root EBS volume are not compatible, the EC2 instance goes into the stopped state after initializing.
Refer to this SO issue: AWS EC2 Instance stopping right after start using boto3
Update the volumes part of the YAML and it should work fine.
I have a playbook that I am testing which should create an S3 bucket if one doesn't already exist, and do nothing if it does. When I try to test it, I get the error:
ERROR! 'aws_s3' is not a valid attribute for a Play
I have all the requirements specified in the docs installed (https://docs.ansible.com/ansible/2.4/aws_s3_module.html), so why is this happening?
- name: Check s3 bucket for test_bucket exists
  aws_s3:
    bucket: test-bucket
    mode: geturl
    ignore_nonexistent_bucket: yes
    region: {{ region }}
  register: asset_url

- name: Create s3 bucket for test_bucket library
  aws_s3:
    bucket: test-bucket
    mode: create
    region: {{ region }}
  when: asset_url is defined
I am testing because I'm unsure whether this will work at all, but then I ran into this other problem of not being able to run the playbook.
I found the problem: I was confusing 'roles' and 'playbooks', and was trying to run a role as a playbook. What I should have done is have a playbook that calls this role, as sketched below.
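For illustration, here is a minimal sketch of that structure; the playbook file name and role name are hypothetical:

# site.yml (hypothetical) -- a playbook that applies the role containing the aws_s3 tasks above
- hosts: localhost
  connection: local
  roles:
    - s3_bucket_check   # hypothetical role name; its tasks/main.yml contains the two aws_s3 tasks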
I'm using Ansible to deploy my application.
I've come to the point where I want to upload my Grunt-built assets to a newly created bucket. Here is what I have done:
{{hostvars.localhost.public_bucket}} is the bucket name, and
{{client}}/{{version_id}}/assets/admin is the path to a folder containing multi-level subfolders and the assets to upload:
- s3:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
    object: "{{client}}/{{version_id}}/assets/admin"
    src: "{{trunk}}/public/assets/admin"
    mode: put
Here is the error message:
fatal: [x.y.z.t]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "s3"}, "module_stderr": "", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 2868, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 561, in main\r\n upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 307, in upload_s3file\r\n key.set_contents_from_filename(src, encrypt_key=encrypt, headers=headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1358, in set_contents_from_filename\r\n with open(filename, 'rb') as fp:\r\nIOError: [Errno 21] Is a directory: '/home/abcd/efgh/public/assets/admin'\r\n", "msg": "MODULE FAILURE", "parsed": false}
I went through the documentation and didn't find a recursion option for the Ansible s3 module.
Is this a bug, or am I missing something?
As of Ansible 2.3, you can use the s3_sync module:
- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/
Note: If you're using a non-default region, you should set region explicitly, otherwise you get a somewhat obscure error along the lines of: An error occurred (400) when calling the HeadObject operation: Bad Request
Here's a complete playbook matching what you were trying to do above:
- hosts: localhost
  vars:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
  tasks:
    - name: Upload files
      s3_sync:
        aws_access_key: '{{aws_access_key}}'
        aws_secret_key: '{{aws_secret_key}}'
        bucket: '{{bucket}}'
        file_root: "{{trunk}}/public/assets/admin"
        key_prefix: "{{client}}/{{version_id}}/assets/admin"
        permission: public-read
        region: eu-central-1
Notes:
You could probably remove region; I just added it to illustrate the point above.
I've just added the keys to be explicit. You can (and probably should) use environment variables for this:
From the docs:
If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION
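As an illustration, here is a hedged sketch of the same upload task with the key parameters omitted, assuming AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are exported in the controller's environment:

- name: Upload files (credentials and region come from environment variables)
  s3_sync:
    bucket: '{{ bucket }}'
    file_root: "{{trunk}}/public/assets/admin"
    key_prefix: "{{client}}/{{version_id}}/assets/admin"
    permission: public-read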
The Ansible s3 module does not support directory uploads or any recursion.
For this task, I'd recommend shelling out to the AWS CLI; check the syntax below.
command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"
Since you're using Ansible, it looks like you wanted something idempotent, but Ansible doesn't yet support S3 directory uploads or any recursion, so you should probably use the AWS CLI to do the job, like this:
command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"
I was able to accomplish this using the s3 module by iterating over a listing of the directory I wanted to upload. The little inline Python script I'm running via the command module just outputs the full list of file paths in the directory, formatted as JSON.
- name: upload things
  hosts: localhost
  connection: local
  tasks:
    - name: Get all the files in the directory I want to upload, formatted as a JSON list
      command: python -c 'import os, json; print(json.dumps([os.path.join(dp, f)[2:] for dp, dn, fn in os.walk(os.path.expanduser(".")) for f in fn]))'
      args:
        chdir: ../../styles/img
      register: static_files_cmd

    - s3:
        bucket: "{{ bucket_name }}"
        mode: put
        object: "{{ item }}"
        src: "../../styles/img/{{ item }}"
        permission: "public-read"
      with_items: "{{ static_files_cmd.stdout|from_json }}"