I use Ansible for spinning up EC2 instances and deploying services to them. I would like to re-associate an elastic IP which is already associated with an existing EC2 instance to a new instance with as little down time as possible. Do I get it right that this only works in two steps with the Ansible ec2_eip module?
Disassociate it from the old instance (state: absent)
Associate it with the new instance (state: present)
There is no way to do it in one step as with the allow-reassociation option of the ec2-associate-address CLI command, right?
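For reference, the two-step approach I have in mind looks roughly like this (a sketch with placeholder instance IDs, Elastic IP, and region; ec2_eip parameter names vary between Ansible versions):
- name: Disassociate the EIP from the old instance
  ec2_eip:
    device_id: i-0aaaaaaaaaaaaaaaa   # placeholder: old instance ID
    ip: 203.0.113.10                 # placeholder: the Elastic IP
    region: us-east-1                # placeholder region
    state: absent
- name: Associate the EIP with the new instance
  ec2_eip:
    device_id: i-0bbbbbbbbbbbbbbbb   # placeholder: new instance ID
    ip: 203.0.113.10
    region: us-east-1
    state: present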
I just made a pull request which might help you.
This PR adds two modules, boto3 and boto3_wait, which let you call any Boto 3 client operation.
You could use it like so:
- name: Re-associate Elastic IP
  boto3:
    name: ec2
    region: us-east-1
    operation: associate_address
    parameters:
      AllowReassociation: yes
      InstanceId: i-xxxxxxxxxxxxxxxxx
      AllocationId: eipalloc-xxxxxxxx
If you're interested in this feature, feel free to vote it up on the PR. :)
I am using a YAML template with the CloudFormation service on AWS. The template creates an EC2 server with a public IP, plus a stack output in CloudFormation containing the IP of the EC2 server it just created. The output, however, is not something we use a lot, so I would like to stop the server from time to time and start it again whenever I need it. The problem is that every time I launch the EC2 server again, its public IP changes, and the IP in the stack output doesn't change with it.
I found a way to make it static, using an Elastic IP address from the EC2 service. However, I can't seem to find a way to select that IP address when choosing properties while creating the stack, so I would like some assistance with this.
You cannot define the IP address yourself, but you can extract it after it has been generated.
In your CloudFormation template, add an Outputs section like the following:
Outputs:
  myEc2IP: # just some identifier
    Value: !GetAtt myInstance.PublicIp # assuming that "myInstance" is your resource
Then, after deploying your stack, you can use the AWS CLI to extract the value:
aws cloudformation describe-stacks --stack-name $YOUR_STACK \
--query 'Stacks[0].Outputs[?OutputKey==`myEc2IP`].OutputValue' \
--output text
You can even load this into a shell variable with something like:
export MY_EC2_IP="$(aws cloudformation describe-stacks …)"
Learn more about Outputs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html
See other possible output/return values for EC2 instances in CloudFormation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html (in the “Return values” section).
I can't seem to find a way to select that IP address when choosing properties while creating the stack, so I would like some assistance with this.
You can't select the IP address that you will get. Elastic IP addresses are chosen from Amazon's pool of IP addresses.
If you want to specify a specific IP address that you own, you can use Bring your own IP addresses (BYOIP) in Amazon EC2. That will allow you to specify the IP that you own to be used by your EC2 instances.
I'm writing a Flask API in PyCharm. When I run my code locally, requests that use boto3 to get secrets from Secrets Manager take less than a second. However, when I run my code on an EC2 instance, they take about 3 minutes (tried on both t2.micro and m5.large).
At first I thought it could be a Python issue, so I ran it on my EC2 instances through the AWS CLI using:
aws secretsmanager get-secret-value --secret-id secretname
It still took about 3 minutes. Why does this happen? Shouldn't this, in theory, be faster on an EC2 instance than on my local machine?
EDIT: This only happens when the EC2 instance is inside a VPC other than the default VPC.
After fighting with this same issue on our local machines for almost two months, we finally had some forward progress today.
It turns out the problem is related to IPv6.
If you're using IPv6, the Secrets Manager domain will resolve to an IPv6 address. For some reason the CLI is unable to make a secure connection over IPv6. After it times out, the CLI falls back to IPv4 and then succeeds.
To verify whether you're resolving to an IPv6 address, just ping secretsmanager.us-east-1.amazonaws.com. Don't worry about the ping response; you just want to see which IP address the domain resolves to.
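For example (the resolved address shown is hypothetical, and the output format varies by OS; what matters is whether the address in parentheses is IPv6 or IPv4):
$ ping secretsmanager.us-east-1.amazonaws.com
PING secretsmanager.us-east-1.amazonaws.com (2600:xxxx:xxxx::53) 56 data bytes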
To fix this problem, you now have 3 options:
Figure out your networking issues. This could be something on your machine or router. If you're in an AWS VPC, check your routing tables and security groups, and make sure you allow outbound IPv6 traffic (::/0).
Reduce the CLI connect timeout to make the IPv6 call fail faster, so that the IPv4 fallback happens sooner. You may want to tune the timeout value, but the general idea is to add something like --cli-connect-timeout 1 (see the example after this list).
Disable IPv6. You can either disable IPv6 on your machine/router altogether, or you can adjust your machine to prefer IPv4 for this specific address (See: https://superuser.com/questions/436574/ipv4-vs-ipv6-priority-in-windows-7).
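Using the command from the question, option 2 would look like this (the secret name is the question's placeholder):
aws secretsmanager get-secret-value --secret-id secretname --cli-connect-timeout 1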
Ultimately, option 1 is the real solution, but since it is so broad, the others might be easier.
Hopefully this helps someone else maintain a bit of sanity when they hit this.
I had this issue when working from home through the Cisco AnyConnect VPN client. Apparently it blocks anything IPv6.
The solution for me was to disable IPv6 altogether on my laptop.
To do so on macOS:
networksetup -setv6off Wi-Fi # wireless
networksetup -setv6off Ethernet # wired
To re-enable:
networksetup -setv6automatic Wi-Fi # wireless
networksetup -setv6automatic Ethernet # wired
I ran the following commands from my own computer and from an Amazon EC2 t2.nano instance in the ap-southeast-2 region:
aws secretsmanager create-secret --name foo --secret-string 'bar' --region ap-southeast-2
aws secretsmanager get-secret-value --secret-id foo --region ap-southeast-2
aws secretsmanager delete-secret --secret-id foo --region ap-southeast-2
In both cases, each command returned within a second.
Additional:
To test your situation, I did the following (in the Sydney region):
Created a new VPC using the VPC Wizard (just a public subnet)
Launched a new Amazon EC2 instance in the new VPC, with a Role granting permission to access Secrets Manager
Upgraded the AWS CLI on the instance (the installed version didn't know about secretsmanager)
Ran the above commands
They all returned immediately.
Therefore, the problem lies with something to do with your instances or your VPC.
I created a hotspot from my phone and connected through it, and it worked.
I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS and also created a hosted zone and attached it to my default VPC.
Creating my Kubernetes cluster through kops succeeds. However, when I try to validate my cluster using kops validate cluster, it fails with the following error:
unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host
I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.
From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because Kops is trying to access the cluster API from your machine, which is outside the VPC, but private hosted zones only respond to requests coming from within the VPC. Specifically, the hostname api.ucla.dt-api-k8s.com is where the Kubernetes API lives, and is the means by which you can communicate and issue commands to the cluster from your computer. Private Hosted Zones wouldn't allow you to access this API from the outside world (your computer).
A way to resolve this is to make your hosted zone public. Kops will automatically create a VPC for you (unless configured otherwise), but you can still access the API from your computer.
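If you end up recreating the cluster against a public zone, the create command looks roughly like this (a sketch only; the state store and zones are placeholders, and the --dns flag may differ between kops versions):
kops create cluster --name=ucla.dt-api-k8s.com --state=s3://your-kops-state-store --zones=us-east-1a --dns public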
I encountered this last night using a kops-based cluster creation script that had worked previously. I thought switching regions might help, but it didn't. This morning it is working again. This feels like an intermittent problem on the AWS side.
So the answer I'm suggesting is:
When this happens, you may need to give it a few hours to resolve itself. In my case, I rebuilt the cluster from scratch after waiting overnight. I don't know whether or not it was necessary to start from scratch -- I hope not.
This is all I had to run:
kops export kubecfg (cluster name) --admin
This imports the "new" kubeconfig needed to access the kops cluster.
I came across this problem on an Ubuntu box. What I did was add the DNS record from the Route 53 hosted zone to /etc/hosts.
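A minimal sketch of such an entry, assuming the API record in the hosted zone resolves to 203.0.113.10 (a hypothetical address; the hostname is the one from the question):
203.0.113.10   api.ucla.dt-api-k8s.com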
Here is how I resolved the issue:
It looks like there is a bug in kops: even after waiting 10-15 minutes, kops validate cluster still shows
Validation failed: unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api ...
but behind the scenes the Kubernetes cluster is actually up. You can verify this by SSHing into the master node of your cluster:
Go to the EC2 console page where your k8s instances are running
Copy the "Public IPv4 address" of your master k8s node
From a command prompt, log in to the master node:
ssh ubuntu@<"Public IPv4 address" of your master k8s node>
Verify that you can see all the nodes of the k8s cluster with the command below; it should list your master and worker nodes:
kubectl get nodes
I'm creating an Ansible Playbook and I have created a new AWS EC2 instance. I am now wanting to SSH into this instance and run some commands within the shell of that instance. How would I go about doing this? Is there a way to generate a keypair via ansible, or is it best to use an existing one?
I have looked at the online resources for Ansible's ec2 module (create, terminate, start or stop an instance in ec2, http://docs.ansible.com/ansible/latest/ec2_module.html), as well as online blogs, but I haven't been able to figure out how to SSH into the instance or find an example of it online.
Using:
- name: Wait for SSH to come up
  wait_for:
    host: "{{ item.public_ip }}"
    port: 22
    delay: 60
    timeout: 320
    state: started
  with_items: "{{ ec2.instances }}"
from the ansible-playbook documentation generates the following error:
"msg": "Timeout when waiting for :22"
The instance is also created without a public DNS name that I could use to SSH into it from the CLI.
Any help on how to SSH into the instance via ansible-playbook, or on generating a public DNS name for the instance, would be greatly appreciated.
It would seem you have a fundamental misunderstanding of how AWS instances work. When an instance is created, it has a key pair assigned to it for the default user (e.g. for an Amazon Linux instance the user will be ec2-user; Ubuntu images use the ubuntu user).
This key pair can be seen in the EC2 console in the instance's details. All the existing key pairs can be seen under the Key Pairs section of the EC2 console.
To be able to ssh into an instance that you are starting with the key you have just created, you will need to do a few things:
Generate the key pair locally (use shell: ssh-keygen ...)
Create the ec2 keypair from the locally generated key pair (use ec2_key: ... )
Start the instance using the named ec2 key pair (use ec2: ...)
Call the instance just started in the same playbook using the key generated in step 1.
Steps 1-3 should be run as hosts: 127.0.0.1.
Step 4 will need to be done as a separate hosts: play in the same playbook and is not as easy as it seems. You will need some way of specifying the newly created instance: in the hosts file, via the Ansible group_vars path, using the add_host module, and/or by finding its IP address somehow (possibly by using instance tags).
Once the instance is found, the Ansible private_key_file variable can be used to point at the key from step 1 and SSH into the instance, roughly as in the sketch below.
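A minimal sketch of steps 3-4 combined, assuming the ec2 task registered its result as "ec2", that the key from step 1 was written to ~/.ssh/new_instance_key (a hypothetical path), and that the default user is ec2-user (an Amazon Linux assumption):
- hosts: 127.0.0.1
  connection: local
  tasks:
    # ... ec2_key and ec2 tasks from steps 2-3 go here, with "register: ec2" on the ec2 task ...
    - name: Add the new instance to an in-memory group
      add_host:
        name: "{{ item.public_ip }}"
        groups: launched
        ansible_user: ec2-user                                  # assumption: Amazon Linux default user
        ansible_ssh_private_key_file: ~/.ssh/new_instance_key   # hypothetical path to the key from step 1
      with_items: "{{ ec2.instances }}"
- hosts: launched
  tasks:
    - name: Run a command on the new instance
      command: uptime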
Not that it can't be done, but due to the difficulty and impracticality of doing this for the sake of having a new key pair each time you shell into the instance, I would advise against this unless absolutely essential. It would be better just to have proper key rotation policies in place if it is a security concern.
Ansible connects to instances using SSH and then uses Python on the remote host for most of its execution.
You can bootstrap a host using the raw and shell modules to do things like install Python 2, and then proceed to execute using the AWS modules.
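A minimal sketch of such a bootstrap, assuming a Debian/Ubuntu-based image and a placeholder group name "launched":
- hosts: launched
  gather_facts: no
  tasks:
    - name: Install Python so that regular modules can run (assumes an apt-based image)
      raw: test -e /usr/bin/python || (apt-get -y update && apt-get install -y python-minimal)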
Something you need to understand about Ansible, however, is that it exists to execute against many hosts, as specified in an inventory file, not a single one. For this reason it is not possible to "SSH into an instance with Ansible" interactively, as that would serve no practical purpose for what Ansible does. When provisioning hundreds of servers, the admin should not have to SSH into them; instead, the process of creating an environment, perhaps running containers for a service, should all be handled at a higher level.
As has already been mentioned, if your intent is to create an EC2 instance with Ansible that can be SSH'd into, you should use the ec2_key module and create the key BEFORE creating the instance. Then, when you create the instance, specify the SSH key through the key_name field.
Make sure that the security group you specify allows incoming connections on port 22, otherwise you will not be able to communicate with the instance; a sketch of such a group follows.
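A minimal sketch with the ec2_group module (the group name "webserver" matches the ec2 example below; the region, VPC ID, and wide-open CIDR are placeholders you would normally tighten):
- ec2_group:
    name: webserver
    description: Allow inbound SSH
    region: us-east-1
    vpc_id: vpc-xxxxxxxx
    rules:
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 0.0.0.0/0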
If you would like Ansible to automatically report the public DNS address, look at ec2_remote_facts. This returns JSON which can be parsed to report the public DNS name, as sketched below.
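A hedged sketch with ec2_remote_facts (the filter value is a placeholder, and the exact return structure can vary by module version):
- ec2_remote_facts:
    region: us-east-1
    filters:
      instance-id: i-xxxxxxxxxxxxxxxxx
  register: ec2_facts
- debug:
    msg: "{{ ec2_facts.instances[0].public_dns_name }}"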
Use the ec2_key module first:
- ec2_key:
    name: example2
    key_material: 'ssh-rsa AAAAxyz...== me@example.com'
    state: present
Then launch the instance with that key:
- ec2:
    key_name: example2
    instance_type: t2.micro
    image: ami-123456
    wait: yes
    group: webserver
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
I am using Chef to create Amazon EC2 instances inside a VPC. I have allotted an Elastic IP to the new instance using the --associate-eip option of knife ec2 server create. How do I bootstrap it without a gateway machine? It gets stuck at "Waiting for sshd" because it uses the private IP of the newly created server to SSH into it, even though it has an Elastic IP allocated.
Am I missing anything? Here is the command I used.
bundle exec knife ec2 server create --subnet <subnet> --security-group-ids <security_group> \
  --associate-eip <EIP> --no-host-key-verify --ssh-key <keypair> \
  --ssh-user ubuntu --run-list "<role_list>" \
  --image ami-59590830 --flavor m1.large --availability-zone us-east-1b \
  --environment staging --ebs-size 10 --ebs-no-delete-on-term \
  --template-file <bootstrap_file> --verbose
Is there any other work-around/patch to solve this issue?
Thanks in advance
I finally got around the issue by using the --server-connect-attribute option, which is normally meant to be used along with a --ssh-gateway attribute.
Add --server-connect-attribute public_ip_address to the above knife ec2 server create command, which will make knife use the public_ip_address of your server.
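The amended command would look roughly like this (the "..." stands for the remaining options from the question):
bundle exec knife ec2 server create --subnet <subnet> --security-group-ids <security_group> \
  --associate-eip <EIP> --server-connect-attribute public_ip_address ...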
Note: This hack works with knife-ec2 (0.6.4). Refer to def ssh_connect_host here.
Chef will always use the private IP when registering EC2 nodes. You can get this working by having your Chef server inside the VPC as well, though that is definitely not a best practice.
The other workaround is to let your Chef server sit outside the VPC and, instead of bootstrapping the instance with the knife ec2 command, follow the instructions over here.
That way you bootstrap your node from the node itself and not from the Chef server/workstation.