We are using eb_deployer to deploy to Elastic Beanstalk and we would like to provision each node using .ebextensions and Ansible.
A package created for eb_deployer looks something like this (simplified); it is assembled on the control node with Ansible:
- Procfile
- application.jar
- .ebextensions
  - ansible.config
  - provision.yml
  - roles
    - appdynamics
      - tasks
        - main.yml
ansible.config installs Ansible on the Beanstalk node and runs a single playbook:
packages:
  python:
    ansible: []
container_commands:
  ansible:
    command: "ansible-playbook .ebextensions/provision.yml"
provision.yml (simplified) only includes a single role:
- name: provision eb instance
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - role: appdynamics
      controller_host: "example.com"
      controller_port: 443
Now the problem is that the appdynamics role uses a variable, appdynamics_accesskey, which is stored in the vault, but the vault password file is stored on the control node.
We would like to avoid copying the vault password file from the control machine into .ebextensions, onto the S3 bucket, and then onto the Beanstalk node.
What would you do in such scenario? Maybe there are other tools which are more appropriate in this case?
It appears that one way to solve this issue is to launch a temporary instance, configure it with Ansible running only on the control machine, create an image with the ec2_ami Ansible module, and use that image as the custom AMI for the Auto Scaling group.
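With that approach the vault password never leaves the control node; only a finished AMI reaches AWS. A minimal sketch of the bake step, run on the control node (the region, instance ID, and AMI name are placeholders):

```yaml
# Runs on the control node, where the vault password file lives.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Bake an AMI from the already-provisioned temporary instance
      ec2_ami:
        region: us-east-1                  # placeholder region
        instance_id: i-0123456789abcdef0   # placeholder: the temporary instance
        name: app-provisioned-ami          # placeholder AMI name
        wait: yes
      register: baked_ami

    - name: Show the AMI id to use as the custom image for the Auto Scaling group
      debug:
        var: baked_ami.image_id
```

The temporary instance would be provisioned beforehand by the same roles (appdynamics etc.), with vault decryption happening locally on the control machine.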
Related
I am trying to deploy a web application in AWS Fargate as well as AWS Beanstalk.
My docker-compose file looks like this (just an example; please focus on the ports):
services:
  application-gateway:
    image: "gcr.io/docker-public/application:latest"
    container_name: application-name
    ports:
      - "443:9443"
      - "8443:8443"
**Issue with AWS Fargate**
I need to know how to map these ports - Bridge network mode doesn't get enabled, and I see only the host port. How do I change the host port?
I can see that once I deploy the public Docker image it gets deployed in Fargate; however, how do I access the application's DNS URL?
**Issue facing in AWS Beanstalk**
I was able to deploy the application in a single-instance environment; however, I am unable to deploy it in an application load-balanced environment. Again I suspect the issue is with the ports on the load balancer, even though I have opened these ports in the security group.
Thanks,
I am deploying a compose file to an AWS ECS context with the following docker-compose.yml:
x-aws-loadbalancer: "${LOADBALANCER_ARN}"

services:
  webapi:
    image: ${DOCKER_REGISTRY-}webapi
    build:
      context: .
      dockerfile: webapi/Dockerfile
    environment:
      ASPNETCORE_URLS: http://+:80
      ASPNETCORE_ENVIRONMENT: Development
    ports:
      - target: 80
        x-aws-protocol: http
When I create a load balancer using these instructions, it is assigned the default security group of the default VPC. That apparently doesn't match the ingress rules for the Docker services, because if I go and look at the task in ECS, I see it being killed over and over for failing an ELB health check.
The only way to fix it is to go into the AWS Console and assign the security group that docker compose created to represent the default network to the load balancer. But that's insane.
How do I create a loadbalancer with the correct minimum access security group so it will be able to talk to later created compose generated services?
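One way to pre-create the load balancer with its own security group, instead of inheriting the default one, is a small CloudFormation template. A sketch under assumptions (the VPC ID, subnet IDs, and port 80 are placeholders; the ingress port must match the service's target port, and the compose-generated task security group must accept traffic from this group):

```yaml
Resources:
  LoadBalancerSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP in; default egress lets the LB reach the tasks
      VpcId: vpc-xxxxx              # placeholder: VPC the compose services use
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80              # must match the service's target port
          ToPort: 80
          CidrIp: 0.0.0.0/0
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: application
      SecurityGroups:
        - !Ref LoadBalancerSG
      Subnets:
        - subnet-aaaaa              # placeholder subnet ids
        - subnet-bbbbb
Outputs:
  LoadBalancerArn:
    Value: !Ref LoadBalancer        # pass this in as LOADBALANCER_ARN
```

The stack's output ARN is what `x-aws-loadbalancer` expects.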
I have the following Ansible playbook:
---
- name: Ansible playbook to create a new aws dev instance
  hosts: localhost
  roles:
    - aws

- name: Set up the dev server
  hosts:
  roles:
    - services
In the aws role, I am creating an EC2 instance and registering it as ec2_instance. How can I use the public IP of that newly created instance in the hosts of the second play?
Should I use something like hosts: ec2_instance.public_ip?
You may consider using add_host. Put this in your first play (after getting the IP of the new VM):
- name: Adding a new host in inventory file.
  add_host: name=someName ansible_ssh_host="{{ your_ip }}" ansible_ssh_pass=*** groups=new_group
and then use this group in the second play:
- name: Set up the dev server
  hosts: new_group
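Putting the two plays together with the ec2_instance variable from the question, a sketch might look like this (it assumes the aws role registered ec2_instance from the ec2 module, whose instances list carries public_ip):

```yaml
- name: Ansible playbook to create a new aws dev instance
  hosts: localhost
  roles:
    - aws                # registers ec2_instance inside the role
  tasks:
    - name: Add the new instance to an in-memory group
      add_host:
        name: "{{ ec2_instance.instances[0].public_ip }}"
        groups: new_group

- name: Set up the dev server
  hosts: new_group
  roles:
    - services
```

The add_host entry only lives for the duration of the playbook run; nothing is written to the inventory file.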
I know how to create an AWS instance using Ansible. Now what I want to achieve is to configure that instance as a web server by installing nginx, using the same playbook that created the instance.
The goal of the playbook will be:
Create an AWS instance.
Configure the instance as Web server by setting up the Nginx server.
Is it possible with Ansible?
Read http://www.ansible.com/blog/ansible-ec2-tags - it details how to spin up an EC2 instance (or multiple) and then run tasks against it (i.e. install nginx).
If you want to jump straight to the example playbook: https://github.com/chrismeyersfsu/playbook-ec2_properties/blob/master/new_group.yml
Bring up the EC2 instance
Wait for SSH
Add the EC2 instance to a dynamically created Ansible host group, with the associated EC2 .pem file (so you can SSH to it)
Call an example play with a ping task to show everything works
Note: you would replace the ping task with your set of tasks to install nginx
@Bidyut How to reference the EC2 IP address:
Look at Line 27 and note the use of register: ec2. Then at Line 46 the EC2 IP address is "extracted": {{ ec2.results[item.0]['instances'][0]['public_ip'] }}. Note that the example calls register within a loop. If you are just creating one EC2 instance then the IP address reference would look like {{ ec2['instances'][0]['public_ip'] }}.
Here is a working example that might help you.
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Create the EC2 Instance
      ec2:
        region: us-east-1
        group: sg-xxxxx # Replace your Security Group here
        keypair: test-key # Replace Key here
        instance_type: t2.micro
        image: ami-xxxxx # Replace AMI here
        vpc_subnet_id: subnet-xxxxx # Replace Subnet here
        assign_public_ip: yes
        wait: yes
        wait_timeout: 600
        instance_tags:
          Name: "My-EC2-Instance"
      register: ec2

    - name: Create SSH Group to login dynamically to EC2 Instance
      add_host:
        hostname: "{{ item.public_ip }}"
        ansible_ssh_private_key_file: path/to/test-pair.pem
        groupname: ec2_server
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        state: started
      with_items: "{{ ec2.instances }}"

- hosts: ec2_server
  become: yes
  # Use ec2-user if you are using CentOS/Amazon server
  remote_user: ubuntu # for Ubuntu server
  gather_facts: yes
  roles:
    - webserver
Yes, you can use a single playbook to launch an instance and install nginx. Use the add_host module to add the IP of the just-launched instance to the in-memory inventory, then write a play for the new host:
Launch an EC2 instance using the ec2 module and register the result
Use the add_host module to add the new instance to the host inventory
Write a new play with hosts set to the just-registered host and call apt to install nginx
Try it and if you need code snippet, let me know.
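For completeness, the second play from the steps above might look like this (the group name and remote user are assumptions matching the earlier example, and an Ubuntu AMI is assumed since apt is used):

```yaml
- hosts: ec2_server          # group populated by add_host in the first play
  become: yes
  remote_user: ubuntu        # assumed Ubuntu AMI
  gather_facts: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Ensure nginx is running and enabled on boot
      service:
        name: nginx
        state: started
        enabled: yes
```

On a CentOS/Amazon Linux AMI you would swap apt for yum and adjust remote_user accordingly.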
Ansible - 1.9.3
Dynamic inventory is used.
I am bootstrapping an EC2 instance and adding it to the inventory using add_host, but the plays for the newly created host are skipped. When I run the playbook a second time, it finds the host and starts executing.
Here is the snippet,
- hosts: localhost
  tasks:
    - name: something
      ec2: # this module will create instances
    - name: adding hosts
      add_host: name=(name of the new instance)

- hosts: new host
  tasks:
    - something
The above is just an example; this is the scenario.
I've run into this frustrating issue before. You need to add the host to a group as well. The add_host module says the groups argument is optional, but it doesn't seem to be. Once you do that, you should be able to target the group you added the host to.
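In other words, a sketch of the fix (the group name and the variable holding the new IP are arbitrary placeholders):

```yaml
- hosts: localhost
  tasks:
    - name: Add the new instance to a named group
      add_host:
        name: "{{ new_instance_ip }}"   # placeholder: IP registered from ec2
        groups: launched                # the part that was missing

- hosts: launched
  tasks:
    - name: Prove the play is no longer skipped
      ping:
```

Targeting the bare hostname from add_host without a group is what tends to leave the second play with no matching hosts on the first run.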