AWS: Elastic IP vs ENI - amazon-web-services

As far as high availability goes, what is the difference between using an Elastic IP and an Elastic Network Interface to mask instance failure? Is the only difference that ENIs can be used for private instances while Elastic IPs can't?
I'm trying to explain the advantages of both, so if someone can help me with this, I would appreciate it!

To achieve High Availability, you need the ability to redirect traffic in the case of instance failure. There are several options:
1. Use an Elastic Load Balancer
This is the preferred way to provide High Availability.
Run multiple Amazon EC2 instances, preferably in different Availability Zones (AZs). Users connect to the ELB (via its supplied DNS name), which distributes traffic among the EC2 instances. If an instance fails, the ELB notices via its regular health checks and directs traffic only to the healthy instances.
Auto Scaling can be used to create these multiple instances across multiple Availability Zones, and it can also update the Load Balancing service when it adds/removes instances.
2. Redirect an Elastic IP address
Run multiple instances (preferably across multiple Availability Zones). Point an Elastic IP address to the instance you desire. Users connect via the Elastic IP address and are directed to the instance. If the instance fails, reassociate the Elastic IP address to a different instance, which will then start receiving the traffic immediately.
This method is not recommended because only one instance is receiving all the traffic while the other instance(s) are sitting idle. It also requires a mechanism to detect failure and reassociate the Elastic IP (which you must do yourself).
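The reassociation step itself is a single EC2 API call, AssociateAddress. Here is a minimal sketch of the parameters involved, using hypothetical IDs; with boto3 you would pass the resulting dict to boto3.client("ec2").associate_address(**params).

```python
# Sketch of the EC2 AssociateAddress call used to fail an Elastic IP over to a
# standby instance. The IDs below are hypothetical placeholders.

def associate_address_params(allocation_id, instance_id):
    """Build parameters for EC2 AssociateAddress.

    AllowReassociation lets the Elastic IP move even though it is still
    associated with the failed instance (VPC only).
    """
    return {
        "AllocationId": allocation_id,   # the Elastic IP's allocation ID
        "InstanceId": instance_id,       # the healthy standby instance
        "AllowReassociation": True,      # take the EIP away from the failed instance
    }

params = associate_address_params("eipalloc-0123456789abcdef0",  # hypothetical
                                  "i-0fedcba9876543210")         # hypothetical
```

Note that the failure detection itself (health checking the active instance and deciding when to trigger this call) is entirely up to you, which is exactly the drawback described above.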
3. Reassign an Elastic Network Interface (ENI)
All EC2 instances have a primary ENI. They can optionally have additional ENIs.
It is possible to direct traffic to a secondary ENI and then move that secondary ENI to another instance. This is similar to reassigning an Elastic IP address.
This method is not recommended for the same reason as reassociating an Elastic IP address (above), but also because an ENI can only be reassigned within the same AZ. It cannot be used to direct traffic to an EC2 instance in a different AZ.
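For illustration, moving a secondary ENI is a DetachNetworkInterface followed by an AttachNetworkInterface. A sketch with hypothetical IDs (remember: the ENI and both instances must be in the same Availability Zone); with boto3 these dicts would go to ec2.detach_network_interface(**detach) and ec2.attach_network_interface(**attach).

```python
# Sketch of moving a secondary ENI from a failed instance to a standby.
# All IDs are hypothetical placeholders.

def move_eni_calls(attachment_id, eni_id, target_instance_id):
    """Build the two EC2 calls needed to move a secondary ENI."""
    detach = {
        "AttachmentId": attachment_id,  # current attachment on the failed instance
        "Force": True,                  # detach even if the instance is unresponsive
    }
    attach = {
        "NetworkInterfaceId": eni_id,
        "InstanceId": target_instance_id,
        "DeviceIndex": 1,               # attach as eth1 (the secondary interface)
    }
    return detach, attach

detach, attach = move_eni_calls("eni-attach-0aa11bb22cc33dd44",  # hypothetical
                                "eni-0123456789abcdef0",
                                "i-0fedcba9876543210")
```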
Bottom line: Use an Elastic Load Balancer. It provides true High Availability and can do it automatically.
See documentation: What Is Elastic Load Balancing?

Related

Setup couchbase in ec2 across multiple availability zones

I am trying to set up a Couchbase cluster on AWS. I want my nodes to be distributed across multiple availability zones.
EC2 instances within an availability zone can access each other using the IP (private DNS name) assigned to them at creation, which does not change even if the machine is restarted.
I am not able to access an EC2 instance in another AZ using this private DNS name. One way this can be done is by using an Elastic IP, but those have a limit per region.
The question here is: how can one EC2 instance access another EC2 instance in a different AZ without an Elastic IP?
You do not want to use Elastic IP for this. Your statement that Elastic IP is a solution to your issue is not correct. You want to use the Private IP assigned to the instance when you created it.
The private IP will not change as long as the instances are deployed inside a VPC.
You have to use the private IP in order to keep all network traffic inside the VPC. Then you just need to make sure your Security Groups are configured correctly to allow traffic between the instances.
Amazon Web Services operates split-horizon DNS (also known as split-brain DNS). The best practice when deploying Couchbase onto EC2 is to use hostnames, not IP addresses; see http://developer.couchbase.com/documentation/server/current/install/cloud-deployment.html . Amazon will automatically resolve the hostname to a different IP depending on whether the source of the request is internal or external.

Is it possible to re-associate an elastic ip with an ec2 instance after reboot if the elastic ip is associated to another running ec2 instance?

We have a setup where 3 ec2 instances each are associated with an elastic ip on its primary network interface eth0 so incoming requests can be served by these instances.
Each of these instances has a secondary network interface eth1 where in the event of a failure/ crash/ reboot of an instance, the elastic ip associated with that instance would be associated to one of the remaining running ec2 instances on that interface. This is some sort of failover mechanism as we always want those elastic ips to be served by some running instance so we don't lose any incoming requests.
The problem I have experienced is specifically on reboot of an instance. When an instance reboots, it cannot get back the public IP it had, since that public IP is the Elastic IP now associated with another instance. Thus this instance cannot access the internet unless I manually re-assign the Elastic IP back to it.
Is it possible to automatically reclaim/re-associate the elastic ip it once had onto its eth1 interface on reboot? If not, do you have suggestions for a workaround?
Reboot is necessary as we would be doing unattended upgrades on the instances.
Update:
Also note that I need to use these Elastic IPs because they are the ones allowed through the firewall of a partner company we integrate with. Using ELBs won't work, as their IP addresses change over time.
So here's how I finally solved this problem. What I missed was that Amazon only provides a new public IP to an instance under two conditions:
Its Elastic IP is detached
It has just one network interface
So based on this, on startup I configure the instance with two network interfaces but detach the secondary eth1 interface. This makes the instance eligible for a new public IP if it reboots for any reason.
Now for failover: once one of the running instances detects that an instance has gone offline from the cluster (let's say it rebooted), it attaches the secondary interface on the fly and associates the Elastic IP with it. The Elastic IP is therefore always served by at least one running instance, and the effect is instant.
When the failed instance comes back up after the reboot, Amazon has already provided it a new non-Elastic public IP, because it fulfilled the two conditions: it had just one network interface, and its Elastic IP had been disassociated and re-associated with another running instance. The rebooted instance can therefore connect to the internet on startup, do the configuration tasks it needs, and re-join the cluster. After that, it re-associates the Elastic IP it is supposed to have.
Also, when the running instance that took over the Elastic IP detects that a new instance or the rebooted instance has come online, it detaches the secondary interface again, so it too remains eligible for a new public IP if it reboots.
This is how I handle failover and make sure the Elastic IPs are always served. The solution is not perfect and can be improved: it scales to handling N failed/rebooted instances, provided N network interfaces can be used for failover.
However, if an instance that attached secondary interface(s) during failover then reboots itself, it will not get a new public IP and will remain disconnected from the cluster; at least the Elastic IPs would still be served by the remaining live instances. This applies only to reboots.
BTW, at least from all that I read, these conditions for getting a new public IP weren't clearly mentioned in the Amazon docs.
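To make the moving parts concrete, here is a condensed sketch of the failover sequence described above. The IDs are hypothetical placeholders, and the function only records the EC2 API calls a real implementation (e.g. with boto3) would issue, in order.

```python
# Sketch of the failover sequence a standby node runs when a peer goes down.
# Each tuple names a real EC2 API action plus the parameters it would take;
# all IDs are hypothetical placeholders.

def failover_plan(standby_instance, spare_eni, allocation_id):
    """Return the ordered API calls the standby node makes on peer failure."""
    return [
        # 1. Attach the spare secondary interface (eth1) to the standby node.
        #    (At boot, each node detached its own eth1 so it stays eligible
        #    for a fresh public IP after a reboot.)
        ("AttachNetworkInterface", {
            "NetworkInterfaceId": spare_eni,
            "InstanceId": standby_instance,
            "DeviceIndex": 1,
        }),
        # 2. Move the failed node's Elastic IP onto that interface.
        ("AssociateAddress", {
            "AllocationId": allocation_id,
            "NetworkInterfaceId": spare_eni,
            "AllowReassociation": True,
        }),
    ]

plan = failover_plan("i-0fedcba9876543210",      # hypothetical standby instance
                     "eni-0123456789abcdef0",    # hypothetical spare ENI
                     "eipalloc-0123456789abcdef0")
```

When the rebooted node re-joins, the reverse happens: the Elastic IP is re-associated back and the standby detaches the spare ENI again, restoring the one-interface state.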
It sounds like you would be better served by using an elastic load balancer (ELB). You could just use one ELB and it would serve requests to your 3 application servers.
If one goes down, the ELB detects that and stops routing requests there. When it comes back online, the ELB detects that and adds it to the routing group again.
http://aws.amazon.com/elasticloadbalancing/

aws auto scaling, rather lost how to

I am rather lost on how to implement AWS Auto Scaling in my usage scenario.
I have an EC2 instance with an Elastic IP, in a VPC, as my web server. This Elastic IP is mapped to my website address in Route 53. Now if I create an Auto Scaling group with the same AMI I used to create my first EC2 instance, with say two instances, then two new instances are created with new IP addresses. How can these new instances share the traffic?
If I delete the original instance and use the IP address of one of these instances in Route 53, how can I ensure that this particular instance, whose IP address I am using in Route 53, will survive a scale-down?
Look into creating an Elastic Load Balancer (ELB):
http://aws.amazon.com/elasticloadbalancing/
The DNS record for your site will point to the ELB, and the ELB will spread the traffic between all the instances. When an instance is created or destroyed in an ASG, it will automatically register or de-register from the ELB.
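As a sketch, the registration wiring is simply part of the Auto Scaling group definition. All names here are hypothetical; with boto3 this dict would go to boto3.client("autoscaling").create_auto_scaling_group(**asg_params).

```python
# Sketch of an Auto Scaling group attached to a (classic) ELB.
# All names are hypothetical placeholders.
asg_params = {
    "AutoScalingGroupName": "web-asg",      # hypothetical group name
    "LaunchConfigurationName": "web-lc",    # hypothetical, built from your AMI
    "MinSize": 2,
    "MaxSize": 4,
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],
    # Instances launched by this group auto-register with the ELB below,
    # and are deregistered when the group terminates them.
    "LoadBalancerNames": ["web-elb"],
}
```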
You don't need AWS's ELB to use Auto Scaling, but you do need some sort of load balancer to perform that distribution. It can be a load balancer you run yourself on an EC2 instance (or instances) in the VPC. It can be tough to separate the true "must haves" and prescriptive architecture elements (or reference architectures) from the wide range of alternative solutions.

Does it make sense to have an Amazon Elastic Load Balancer with just one EC2 instance?

My question is simple. Does it make sense to have an Amazon Elastic Load Balancer (ELB) with just one EC2 instance?
If I understood right, ELB will switch traffic between EC2 instances. However, I have just one EC2 instance. So, does it make sense?
On the other hand, I'm using Route 53 to route my domain requests example.com, and www.example.com to my ELB, and I don't see how to redirect directly to my EC2 instance. So, do I need an ELB for routing purposes?
Using an Elastic Load Balancer with a single instance can be useful. It can provide your instance with a front-end to cover for a disaster situation.
For example, if you use an auto-scaling group with min=max=1 instance, with an Elastic Load Balancer, then if your instance is terminated or otherwise fails:
auto-scaling will launch a new replacement instance
the new instance will appear behind the load balancer
your user's traffic will flow to the new instance
This will happen automatically: no need to change DNS, no need to manually re-assign an Elastic IP address.
Later on, if you need to add more horsepower to your application, you can simply increase your min/max values in your autoscaling group without needing to change your DNS structure.
It's much easier to configure SSL on an ELB than on an EC2 instance: just a few clicks in the AWS console. You can even hand-pick the SSL protocols and ciphers.
It's also useful that you can associate different security groups with the actual EC2 instance and the forefront ELB. You can leave the ELB in the DMZ and protect your EC2 instance from being publicly accessible and potentially vulnerable to attacks.
There is no need to use a Load Balancer if you are only running a single Amazon EC2 instance.
To point your domain name to an EC2 instance:
In the EC2 Management Console, select Elastic IP
Allocate New Address
Associate the address with your EC2 instance
Copy the Elastic IP address and use it in your Route 53 sub-domain
The Elastic IP address can be re-associated with a different EC2 instance later if desired.
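For completeness, here is a sketch of the Route 53 change that points a sub-domain at the Elastic IP. The zone ID, record name, and address below are hypothetical placeholders; with boto3, the resulting dict would go to boto3.client("route53").change_resource_record_sets(**change).

```python
# Sketch of a Route 53 ChangeResourceRecordSets request that points an
# A record at an Elastic IP. All identifiers are hypothetical placeholders.

def a_record_upsert(zone_id, record_name, elastic_ip, ttl=300):
    """Build an UPSERT (create-or-update) change for a simple A record."""
    return {
        "HostedZoneId": zone_id,
        "ChangeBatch": {
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": elastic_ip}],
                },
            }],
        },
    }

change = a_record_upsert("Z0123456789ABCDEFGHIJ",  # hypothetical hosted zone ID
                         "www.example.com.",
                         "203.0.113.10")           # documentation-range IP
```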
Later, if you wish to balance between multiple EC2 instances:
Create an Elastic Load Balancer
Add your instance(s) to the Load Balancer
Point your Route 53 sub-domain to the Load Balancer
With NO ELB:
Less secure (DoS attacks are possible, as HTTP port 80 will be open to all instead of only to the ELB)
You won't have the freedom of terminating an instance to save EC2 hours without worrying about remapping your Elastic IP (not a big deal, though)
If you don't use an ELB and your EC2 instance becomes unhealthy/terminates/goes down:
Your site will remain down (it would remain up if you used ELB + scaling policies)
You will have to remap your Elastic IP
You pay for the time your Elastic IP is not pointing to an instance, around $0.005/hr
You get 750 hours of Elastic Load Balancing plus 15 GB of data processing with the free tier, so why not use it along with a min=1, max=1 scaling policy?
On top of the answer about making SSL support easier by putting a load balancer in front of your EC2 instance, another potential benefit is HTTP/2. An Application Load Balancer (ALB) will automatically handle HTTP/2 traffic and convert up to 128 parallel requests to individual HTTP/1.1 requests across all healthy targets.
For more information, see: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration
It really depends on what you are running on the EC2 instance.
While with only one EC2 instance it's not necessary to use an ELB (all your traffic will go to that instance anyway), if your EC2 service has to scale in the near future, it's not a bad idea to invest some time now and get familiar with ELB.
This way, when you need to scale, it's just a matter of firing up additional instances, because you have the ELB part done.
If your EC2 service won't scale in the near future, don't worry too much!
About the second part: you definitely can route directly to your EC2 instance; you just need the EC2 instance's IP. Take a look at the Amazon Route 53 docs. Mind that if your IP is not static (you don't set up an Amazon Elastic IP), you'd need to change the IP mapping every time the EC2 IP changes.
You can also use an ELB in front of an EC2 instance if, for example, you want it to be publicly reachable without having to use up an Elastic IP address. As said previously, they work well with ASGs too.

How Amazon ELB identifies new instances added

I am working on using an Elastic Load Balancer along with AWS Auto Scaling. I have a setup in which instances are scaled up/down automatically based on NetworkIn, and it is working fine. I have a couple of questions regarding ELB.
How is a freshly launched Auto Scaling instance registered with the ELB automatically? I know we give the load balancer name while creating the Auto Scaling group; I need to know the real "how".
Can we have multiple private IPs of instances running different applications, all of them visible to the ELB?
Explanation for 2): Let's say I configure the instances so that they have multiple private IPs at launch. Could these be exposed to the ELB, rather than the public IP of the machine? Can the ELB read the private IPs of the instances launched under it?
How is a freshly launched Auto Scaling instance registered with the ELB automatically? I know we give the load balancer name while creating the Auto Scaling group; I need to know the real "how".
My guess is that it makes the RegisterInstancesWithLoadBalancer API call. You can do that in your own code too; it does not have to be through Auto Scaling.
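A minimal sketch of that call's parameters (classic ELB API; the names and IDs are hypothetical placeholders — with boto3 this dict would go to boto3.client("elb").register_instances_with_load_balancer(**reg)):

```python
# Sketch of the RegisterInstancesWithLoadBalancer call that Auto Scaling makes
# on your behalf when a new instance launches. Name and ID are hypothetical.
reg = {
    "LoadBalancerName": "web-elb",                         # hypothetical ELB name
    "Instances": [{"InstanceId": "i-0fedcba9876543210"}],  # instance(s) to add
}
```

The inverse operation, DeregisterInstancesFromLoadBalancer, takes the same shape of parameters and is what Auto Scaling calls when it terminates an instance.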
Can we have multiple private IPs of instances running different applications, all of them visible to the ELB?
Well, ELB does not care about the IP address at all. It goes by the instance ID, unless the instance is in a VPC and uses an ENI. However, ELB routes traffic only to the IP address attached to eth0.
Update:
Note:
When you register a multi-homed instance (an instance that has an elastic network interface (ENI) attached) with your load balancer, the load balancer will route traffic to the primary IP address of the instance (eth0).
Source: ELB Developer Guide