Why do the hostname and IP change after restarting an EC2 instance? - amazon-web-services

After restarting an AWS EC2 instance, the hostname and public IP change.
Remote Docker clients are affected, as they rely on these public names (export DOCKER_HOST).
How can this dynamic public IP problem with EC2 be resolved?

By default, AWS-assigned public IP addresses and hostnames are ephemeral, meaning they are released back to the pool when the instance is stopped and started again. If you really need a persistent IP address, you can use Elastic IPs, but bear in mind there is a per-region limit (five by default, though you can request an increase).
Note: I'd still recommend evaluating whether you actually need a public IP from the IPv4 pool, as they are a scarce resource. Most of the time you can get by with the right combination of security groups and private IPs, along with Route53 hosted zones for friendly naming, assuming the instances are in the same VPC or can communicate via VPC peering.
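If you do go the Elastic IP route, a minimal boto3 sketch of allocating one and attaching it to an instance might look like this (the region and instance ID below are placeholders, not values from the question):

```python
# Sketch: allocate an Elastic IP and associate it with an instance (boto3).
# Region and instance ID are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate a new Elastic IP in the VPC scope
allocation = ec2.allocate_address(Domain="vpc")

# Associate it with the instance; the association survives stop/start cycles
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    AllocationId=allocation["AllocationId"],
)

print("Elastic IP:", allocation["PublicIp"])
```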

Related

Amazon MQ - Does the private IP change after a reboot?

I'm using the Amazon MQ managed service and have a question as to how MQ behaves on a reboot.
Will the private IP of the broker change or is it static?
I'm using Amazon MQ inside of a VPC.
Assuming you're using a single-instance broker, it will most likely stay the same. I couldn't find a direct documentation reference for this, but Amazon MQ broker nodes are managed EC2 instances, and by default an EC2 instance retains its private IP inside a VPC over its lifecycle.
The problem is that you don't control the lifecycle of the instance. If the instance is broken beyond repair, Amazon MQ may set up a new instance for you, which will get a different private IP address inside the VPC, but that should be rare. After a simple reboot that would be very unlikely.
If you're using an active/standby cluster, what I said concerning the IPs of the individual nodes should still be true, but which node is the active one may change.
If you need a hard guarantee that the IP addresses don't change, you can set up a private Network Load Balancer in front of your cluster. From the docs (emphasis mine):
When you create an internal load balancer, you can optionally specify one private IP address per subnet. If you do not specify an IP address from the subnet, Elastic Load Balancing chooses one for you. These private IP addresses provide your load balancer with static IP addresses that will not change during the life of the load balancer. You cannot change these private IP addresses after you create the load balancer.
For most services in AWS you want to use the DNS name or CNAME to a service instead of any IP address unless there's a static IP address attached to it.
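For illustration, a hedged boto3 sketch of creating such an internal NLB with fixed private IPs could look like the following (the load balancer name, subnet IDs, and addresses are placeholders; listeners and target registration are a separate step):

```python
# Sketch: create an internal Network Load Balancer with fixed private IPs (boto3).
# Subnet IDs and addresses are placeholders; pick unused IPs from each subnet's CIDR.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

response = elbv2.create_load_balancer(
    Name="mq-internal-nlb",            # hypothetical name
    Type="network",
    Scheme="internal",
    SubnetMappings=[
        {"SubnetId": "subnet-0123456789abcdef0", "PrivateIPv4Address": "10.0.1.50"},
        {"SubnetId": "subnet-0fedcba9876543210", "PrivateIPv4Address": "10.0.2.50"},
    ],
)

# The DNS name is what clients should actually use
print(response["LoadBalancers"][0]["DNSName"])
```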

AWS elastic IP vs public IP

I am new to AWS and tried to create an EC2 instance.
I have a domain and ready to modify the A record to the associated EC2 instance.
I found an article that said an elastic IP is required for associating a production domain.
But AWS provides a public IP and it is accessible on the public internet too (I know it changes after a restart; I'm just assuming it's okay to modify the A record after the machine is restarted - actually it is not restarted very often).
In this case, is it a must to assign an Elastic IP to the instance (this instance involves no load balancing, it is only a simple single instance)?
If yes, why is it necessary?
An Elastic IP (EIP) is not necessary provided that you understand the limitations of public IPs. You may not reboot your instance, but AWS might for any number of reasons. This means that the public IP address could change when you are not expecting it.
When an EIP is assigned to a running EC2 instance, there are no charges for the EIP, i.e. it is free. So why go through the hassle of having to monitor your public IP address?
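If you do stick with the plain public IP, one option is to refresh the domain's A record whenever the address changes. A rough boto3 sketch, where the instance ID, hosted zone ID, and record name are all placeholders:

```python
# Sketch: refresh a Route53 A record with an instance's current public IP (boto3).
# Instance ID, hosted zone ID, and record name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
route53 = boto3.client("route53")

# Look up the instance's current public IP
reservations = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
public_ip = reservations["Reservations"][0]["Instances"][0]["PublicIpAddress"]

# Upsert the A record so the domain follows the (changing) public IP
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",          # placeholder zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",        # placeholder record name
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": public_ip}],
            },
        }]
    },
)
```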

Set up Couchbase on EC2 across multiple availability zones

I am trying to set up a Couchbase cluster on AWS. I want my nodes to be distributed across multiple availability zones.
EC2 instances within an availability zone are able to access each other using the IP (private DNS) which is assigned to them during creation and which does not change even if we restart the machine.
I am not able to access an EC2 instance from another AZ using this private DNS. One way this can be done is by using Elastic IPs, which have a per-region limit.
The question here is: how can I access one EC2 instance from another EC2 instance in a different AZ without an Elastic IP?
You do not want to use Elastic IP for this. Your statement that Elastic IP is a solution to your issue is not correct. You want to use the Private IP assigned to the instance when you created it.
The private IP will not change for the life of the instance (even across stop/start) as long as the instances are deployed inside a VPC.
You have to use the private IP in order to keep all network traffic inside the VPC. Then you just need to make sure your Security Groups are configured correctly to allow traffic between the instances.
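As a hedged example, a rule that lets instances sharing the same security group reach each other on the Couchbase admin port could be added with boto3 like this (the group ID is a placeholder, and 8091 is just one of the ports a cluster needs):

```python
# Sketch: allow Couchbase traffic between instances that share a security group (boto3).
# The group ID is a placeholder; repeat for the other ports your cluster uses.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

cluster_sg = "sg-0123456789abcdef0"   # placeholder security group ID

# Allow members of the same security group to reach each other on port 8091
# (Couchbase admin/REST port)
ec2.authorize_security_group_ingress(
    GroupId=cluster_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8091,
        "ToPort": 8091,
        "UserIdGroupPairs": [{"GroupId": cluster_sg}],
    }],
)
```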
Amazon Web Services operates split-horizon DNS (also known as split-brain DNS). The best practice when deploying Couchbase onto EC2 is to use hostnames, not IP addresses; see http://developer.couchbase.com/documentation/server/current/install/cloud-deployment.html . Amazon will automatically return a different IP when resolving the hostname, depending on whether the source of the request is internal or external.
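As a quick illustration of that split-horizon behaviour, resolving the same public DNS name from inside and outside the VPC returns different addresses (the hostname below is a placeholder):

```python
# Sketch: observe split-horizon resolution of an EC2 public DNS name.
# Run this from inside the VPC and from outside and compare the results.
import socket

hostname = "ec2-203-0-113-25.compute-1.amazonaws.com"  # placeholder public DNS name
print(hostname, "resolves to", socket.gethostbyname(hostname))
# Inside the VPC this returns the private IP; from the internet, the public IP.
```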

Communication between AWS VPC instances via public IP

We have two AWS instances (Instance A and Instance B) which are running in the same VPC. There is an internet facing service on Instance A which is restricted (via security group) to a subset of IP addresses. Instance A has a DNS entry so the service can be accessed via someservice.example.org.
When trying to access the service from Instance B, it works correctly if we use the VPC-internal IP address; however, we cannot seem to get the correct security group configuration to allow this instance access via the public DNS name.
We have added the 'default' VPC security group to Instance A but we're still unable to access this service directly. We also have the same problem trying to configure access to Instance A from other VPCs.
I know that we can create a private DNS for the VPC which could solve the problem when we are in the same VPC but this doesn't get around the problem when running in another VPC.
This sounds like a DNS resolution issue. If you are using Route53 for DNS the easiest way to fix this is to create a private Route53 DNS zone for your VPC and add something like:
an A record for 'someservice.example.org' that points to the instance's internal IP address (note that a CNAME cannot point to an IP address, so an A record is needed here).
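As a rough boto3 sketch of that approach (the VPC ID and the private IP below are placeholders; the zone and record names follow the question's example):

```python
# Sketch: create a private hosted zone for the VPC and point the service name
# at the instance's private IP (boto3). VPC ID and IP are placeholders.
import time
import boto3

route53 = boto3.client("route53")

zone = route53.create_hosted_zone(
    Name="example.org",
    CallerReference=str(time.time()),   # must be unique per request
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
    HostedZoneConfig={"Comment": "private zone for in-VPC resolution", "PrivateZone": True},
)

route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "someservice.example.org",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "10.0.1.25"}],   # instance's private IP
        },
    }]},
)
```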
Note that you really want to use the internal private IP address whenever possible. It will keep the network traffic within your VPC, which will be much faster and more secure. It may also be cheaper for you, at least if the instances are also within the same availability zone. You can read more about that on the EC2 pricing page in the Data Transfer section.
Also note that you can't open up the security group to allow only instances from your VPC/security group to access something via the public IP. This is because the traffic hitting the public IP is seen as coming "from the internet", not from your VPC. You would have to grant access to the servers via their public IP addresses instead of their security groups.
You mention also using a second VPC, but that would be a separate problem that could be addressed via VPC Peering.

Is it possible to re-associate an Elastic IP with an EC2 instance after a reboot if the Elastic IP is associated with another running EC2 instance?

We have a setup where 3 EC2 instances are each associated with an Elastic IP on their primary network interface eth0, so incoming requests can be served by these instances.
Each of these instances has a secondary network interface eth1, so that in the event of a failure/crash/reboot of an instance, the Elastic IP associated with that instance would be associated with one of the remaining running EC2 instances on that interface. This is a failover mechanism: we always want those Elastic IPs to be served by some running instance so we don't lose any incoming requests.
The problem I have experienced is specifically on reboot of an instance. When an instance reboots, it cannot get back the public IP it had, because that public IP belongs to the Elastic IP that is now associated with another instance. Thus this instance cannot access the internet unless I manually re-assign the Elastic IP back to it.
Is it possible to automatically reclaim/re-associate the elastic ip it once had onto its eth1 interface on reboot? If not, do you have suggestions for a workaround?
Reboot is necessary as we would be doing unattended upgrades on the instances.
Update:
Also note that I need to use these Elastic IPs because they are the ones allowed in the firewall of a partner company we integrate with. Using ELBs won't work, as their IPs change over time.
So here's how I finally solved this problem. What I missed out on was that Amazon only provides a new public IP to an instance under two conditions:
Its elastic IP is detached
It has just one network interface
So based on this, on startup I configure the instance with two network interfaces but detach the secondary eth1 interface. This makes the instance eligible for a new public IP (if it reboots for any reason).
Now for failover: once one of the running instances detects that an instance has gone offline from the cluster (in this case, let's say it rebooted), it attaches the secondary interface on the fly and associates the Elastic IP with it. Hence, the Elastic IP is now being served by at least one of the running instances. The effect is instant.
Now when the failed instance comes back up after the reboot, Amazon has already provided it a new non-elastic public IP. This is because it fulfilled the two conditions: it had just one network interface, and its Elastic IP had been disassociated and re-associated with another running instance. Hence, this rebooted instance now has a new public IP, can connect to the internet on startup, can do the tasks it needs to configure itself, and can re-join the cluster. After that it re-associates the Elastic IP it needs to have.
Also, when the running instance that took over the Elastic IP detects that a new instance or the rebooted instance has come online, it detaches the secondary interface again so that it too is eligible to get a new public IP if it reboots.
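A rough boto3 sketch of that takeover step, with placeholder IDs, might look like this:

```python
# Sketch of the takeover step described above (boto3): attach a standby ENI and
# move the failed node's Elastic IP onto it. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach the pre-created secondary interface as eth1 on the surviving instance
attachment = ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",   # placeholder standby ENI
    InstanceId="i-0123456789abcdef0",             # placeholder surviving instance
    DeviceIndex=1,
)

# Re-point the failed node's Elastic IP at that interface
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",    # placeholder EIP allocation
    NetworkInterfaceId="eni-0123456789abcdef0",
    AllowReassociation=True,                      # take it over from the failed instance
)

# Later, when the rebooted node rejoins, detach eth1 again so this instance
# stays eligible for a fresh public IP on its own reboot:
# ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"])
```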
This is how I handle the failover and make sure the Elastic IPs are always served. However, this solution is not perfect and can be improved. It can scale to handling N failed/rebooted instances, provided N network interfaces can be used for failover!
However, if the instance that attached the secondary interface(s) during failover reboots, it will not get a new public IP and will remain disconnected from the cluster, but at least the Elastic IPs will still be served by the remaining live instances. This is only in the case of reboots.
BTW, at least from all that I read, these conditions for getting a new public IP weren't clearly mentioned in the Amazon docs.
It sounds like you would be better served by using an elastic load balancer (ELB). You could just use one ELB and it would serve requests to your 3 application servers.
If one goes down, the ELB detects that and stops routing requests there. When it comes back online, the ELB detects that and adds it to the routing group again.
http://aws.amazon.com/elasticloadbalancing/