How can I launch (purchase) a reserved EC2 instance using Ansible with the EC2 module? I've googled terms like 'ec2 reserved instance ansible' but no joy.
Or should I use AWS CLI instead?
Or you can create your own Ansible module.
There are also existing modules that you can use as examples in ansible-modules-extras/cloud/amazon.
PS:
Modules can be written in any language and are found in the path
specified by ANSIBLE_LIBRARY or the --module-path command line option.
By default, everything that ships with ansible is pulled from its
source tree, but additional paths can be added.
The directory "./library", alongside your top level playbooks, is also
automatically added as a search directory.
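For example, a hypothetical layout (the module name is made up for illustration) that Ansible would pick up automatically:

site.yml
library/
    my_reserved_instance_module.py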
I just made a PR which might help you.
You could use it as follows:
- name: Purchase reserved instances
  boto3:
    name: ec2
    region: us-east-1
    operation: purchase_reserved_instances_offering
    parameters:
      ReservedInstancesOfferingId: 9a06095a-bdc6-47fe-a94a-2a382f016040
      InstanceCount: 3
      LimitPrice:
        Amount: 123.0
        CurrencyCode: USD
  register: result

- debug: var=result
If you're interested in this feature, feel free to vote the PR up. :)
I looked into the Cloud modules list and found there aren't any modules out of the box that support reserved instances. I think you could build a wrapper over the AWS CLI or the Python boto SDK (or any SDK).
This is pseudo code for the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: 'Calling Python Code to reserve instance'
      raw: python reserve-ec2-instance.py args
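If you'd rather not maintain a separate script, a rough sketch of the same wrapper idea is to call the AWS CLI directly from a task (the offering ID and instance count are placeholders, and this assumes the AWS CLI is installed and credentials are configured on the control host):

- name: Purchase a reserved instances offering via the AWS CLI
  command: >
    aws ec2 purchase-reserved-instances-offering
    --reserved-instances-offering-id <offering_id>
    --instance-count 1
  register: purchase_result

- debug: var=purchase_result.stdout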
I need to change this setting on my AutoScaling group in my Beanstalk environment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentconfig-autoscaling-healthchecktype.html
I'm doing exactly what the example shows, but it just doesn't work. Nothing happens.
The file content:
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
The structure:
my_project/
  .ebextensions/
    autoscaling.config
  src/
  ...
Is there any log where I can check if the file is even being read or not?
If you use eb deploy to deploy your changed code, it will not deploy your local working tree but your last git commit, so make sure your .ebextensions changes are committed before deploying.
You can try to find something in the logs in /var/log, for example in eb-engine.log or messages, or even in the system journal ($ journalctl), if you have configured SSH access to the machine.
You can also download the logs from the web console.
If you don't find anything, can you give some more information about the platform (Java, Node.js, Python) and the way you are deploying?
Bye,
Dirk
Okay, I found the problem. I'm using blue/green environments to deploy my application, so the new changes were being uploaded to the green environment, which was terminated afterwards.
For the new config to take effect I had to rebuild the blue environment.
From now on, all new deploys will have the new config.
I have a v3 app that I want to deploy to 2 different environments. The app name and some definitions vary from env to env, but the structure of the manifest is the same. For example:
# manifest_test.yml
applications:
- name: AppTest
  processes:
  - type: web
    command: start-web.sh
    instances: 1
  - type: worker
    command: start-worker.sh
    instances: 1

# manifest_prod.yml
applications:
- name: AppProd
  processes:
  - type: web
    command: start-web.sh
    instances: 3
  - type: worker
    command: start-worker.sh
    instances: 5
Instead of keeping duplicate manifests with only minor changes in variables, I wanted to use a single manifest with variable substitution. So I created something like this:
# manifest.yml
applications:
- name: App((env))
  processes:
  - type: web
    command: start-web.sh
    instances: ((web_instances))
  - type: worker
    command: start-worker.sh
    instances: ((worker_instances))
However, it seems like cf v3-apply-manifest doesn't have an option to provide variables for substitution (as cf push did).
Is there any way around this, or do I have to keep using a separate manifest for each environment?
Please try one of the cf v7 CLI beta releases. I haven't tested it, but the output of cf7 push -h shows flags for --vars and --vars-file. It also uses the v3 APIs, so it will support things like rolling deploys.
For what it's worth, if you're looking to use CAPI v3 features, you should probably use the cf7 beta releases going forward. That will get you the latest and greatest support for CAPI v3.
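As a rough sketch, assuming the manifest.yml from the question, a per-environment vars file (vars-prod.yml is just a made-up name) could look like this:

# vars-prod.yml
env: Prod
web_instances: 3
worker_instances: 5

and would be passed with something like cf7 push -f manifest.yml --vars-file vars-prod.yml.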
Hope that helps!
Currently I'm using Cloud Build to produce some artifacts that I need to deploy to a GCE instance. I've tried to use the gcloud builder for this purpose with the following args:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'scp', '--zone=<zone_id>', '<local_path>', '<google compute engine instance name>:<instance_path>']
and the build fails with the following error:
ERROR: (gcloud.compute.scp) Could not SSH into the instance. It is
possible that your SSH key has not propagated to the instance yet. Try
running this command again. If you still cannot connect, verify that
the firewall and instance are set to accept ssh traffic.
I've already opened port 22 on my instance, but that hasn't helped.
Could you help me solve this problem?
What do I need to check or fix in my build definition?
Or maybe you can give me advice on which builder, instead of gcloud, I can use to deliver my data from the Cloud Build container to the GCE instance?
A few things to try:
1. Make sure you can SSH into the instance normally this way; see Troubleshooting SSH if step one fails.
2. Try changing the SSH target from 'instancename' to 'username@instancename' in order to indicate the name of the user inside the VM, e.g.
username@InstanceName
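Applied to the build step from the question, that would look something like this (placeholders kept from the question):

- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'scp', '--zone=<zone_id>', '<local_path>', 'username@<google compute engine instance name>:<instance_path>']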
You must find a way to generate and locate the SSH key files for the builder to connect to the GCE instance:
google_compute
google_compute.pub
google_compute_known_hosts
They are identical to the ones you use to connect directly to the instance from Cloud Shell or from your local computer, but this time the connection has to be made by the builder itself.
Create those files interactively, as explained in SSH Key Generation, in the builder's identity path (check it with cd ~ && pwd; usually /builder/home/.ssh).
Once a connection has been made, copy these files to Google Cloud Storage via gsutil. This step only needs to be done once.
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', '-rP', '${_BUILDER_HOME}', 'gs://${_BUCKET_NAME}/builder/']
substitutions:
  _BUCKET_NAME: <bucket_name>
  _BUILDER_HOME: <builder_home>
timeout: "60s"
You can copy those key files into your workspace, or, if you prefer, leave them in the storage bucket.
They are needed to reconnect to the instance, because each time the builder starts it is reset to its default state, so the files no longer exist.
Once the key files are ready, you can do the scp transfer like below:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', '-rP', 'gs://${_BUCKET_NAME}/builder/.ssh', '${_BUILDER_HOME}']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'scp', '--recurse', '--zone', '${_ZONE}', '${_LOCAL_PATH}', '${_USER_NAME}@${_INSTANCE_NAME}:${_INSTANCE_PATH}']
substitutions:
  _ZONE: <zone>
  _USER_NAME: <user_name>
  _LOCAL_PATH: <local_path>
  _BUCKET_NAME: <bucket_name>
  _BUILDER_HOME: <builder_home>
  _INSTANCE_NAME: <instance_name>
  _INSTANCE_PATH: <instance_path>
timeout: "60s"
Note: use the '--recurse' flag to copy a directory, or omit it to copy a single file.
Using Cloud Foundry, is there a way to define a custom DNS search domain so short host names are resolved?
We are using an Ubuntu stemcell and need to reach an external server. Using an FQDN this works, but we would prefer to use the host name only. Normally this is set in resolv.conf on a Unix/Linux box, but I wasn't sure how to define it in Cloud Foundry.
One option here would be a Bosh add-on. A Bosh add-on will run on all VMs managed by your Bosh Director. Here are some example add-ons.
You'll want to use the os-conf-release for your add-on. It has a job called search_domain which lets you set the search domain on all of the Bosh deployed VMs.
I haven't tested it, but I believe a manifest like this should work.
releases:
- name: os-conf
  version: 12

addons:
- name: search-domain
  jobs:
  - name: search_domain
    release: os-conf
    properties:
      search_domain: my.domain.com
That would add my.domain.com to the list of search domains in resolv.conf. Hope that helps!
I am using Ansible to deploy to Amazon EC2, and I have ec2.py and ec2.ini set up such that I can retrieve a list of servers from Amazon. I have my server at AWS tagged rvmdocker:production, and ansible all --list returns my tag as ec2_tag_rvmdocker_production. I can also run:
ansible -m ping tag_rvmdocker_production
and it works. But if I have that tag in a static inventory file, and run:
ansible all -m ping -i production
it returns:
tag_rvmdocker_production | UNREACHABLE! => {
    "changed": false,
    "msg": "ERROR! SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue",
    "unreachable": true
}
Here is my production inventory file:
[dockerservers]
tag_rvmdocker_production
It looks like Ansible can't resolve tag_rvmdocker_production when it's in the static inventory file.
UPDATE
I followed ydaetskcoR's advice and am now getting a new error message:
$ ansible-playbook -i production app.yml
ERROR! ERROR! production:2: Section [dockerservers:children] includes undefined group: tag_rvmdocker_production
But I know the tag exists, and it seems like Ansible and ec2.py know it:
$ ansible tag_rvmdocker_production --list
hosts (1):
12.34.56.78
Here is my production inventory:
[dockerservers:children]
tag_rvmdocker_production
And my app.yml playbook file:
---
- name: Deploy RVM app to production
  hosts: dockerservers
  remote_user: ec2-user
  become: true
  roles:
    - ec2
    - myapp
In the end, I'd love to be able to run the same playbook against development (a VM on my Mac), staging, or production, to start an environment. My thought was to have static inventory files that pointed to tags or groups on EC2. Am I even approaching this the right way?
I had a similar issue to this, and resolved it as follows.
First, I created a folder to contain my inventory files, and put in there a symlink to my /etc/ec2.ini, a copy (or symlink) of the ec2.py script (marked executable), and a hosts file as follows.
$ ls amg-dev/*
amg-dev/ec2.ini -> /etc/ec2.ini
amg-dev/ec2.py
amg-dev/hosts
My EC2 instances are tagged with a Type = amg_dev_web
The hosts file contains the following information; the empty first group is important here.
[tag_Type_amg_dev_web]

[webservers:children]
tag_Type_amg_dev_web

[all:children]
webservers
Then when I run ansible-playbook, I specify only the folder name as the inventory, which makes Ansible read the hosts file and execute the ec2.py script to interrogate AWS.
ansible-playbook -i amg-dev/ playbook.yml
Inside my playbook, I refer to these hosts as webservers, as follows:
- name: WEB | Install and configure relevant packages
  hosts: webservers
  roles:
    - common
    - web
Which seems to work as expected.
As discussed in the comments, it looks like you've misunderstood the use of tags in a dynamic inventory.
The AWS EC2 dynamic inventory script allows you to target groups of servers by a tag key/value combination. So to target your web servers you might have a tag called Role that is set to web, which you would then target as a dynamic group with tag_Role_web.
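For example, a minimal play targeting that dynamic group might look like this (the Role=web tag is hypothetical):

- hosts: tag_Role_web
  tasks:
    - name: Check connectivity to all instances tagged Role=web
      ping: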
You can also have static groups that contain dynamic groups as children. This is much the same as how you normally use groups of groups in an inventory file, which might look like this:
[web-servers:children]
front-end-web-servers
php-web-servers

[front-end-web-servers]
www-web-1
www-web-2

[php-web-servers]
php-web-1
php-web-2
This allows you to generically target, or set group variables for, all of the web servers above simply by using the more generic web-servers group, and then configure the specific types of web servers using the more specific front-end-web-servers or php-web-servers groups.
However, if you put an entry under a group that isn't defined with :children, then Ansible will assume that entry is a host and will attempt to connect to it directly.
If you have a uniquely tagged instance that you are trying to reach via dynamic inventory then you simply use it as if it was a group (it just happens to currently only have one instance in it).
So if you want to target or set variables for the dockerservers group, which then includes an instance tagged with the key/value combination rvmdocker: production, you would just do this:
[dockerservers:children]
tag_rvmdocker_production
[tag_rvmdocker_production]