AWS Multi-zone IP?

Is it possible to create an IP address that can be used in multiple availability zones?
For example:
VPC: 10.0.0.0/16
Subnet1: 10.0.0.0/24
Subnet2: 10.0.1.0/24
2 Elastic Network Interfaces: ENI-1 and ENI-2
The 'source/dest check' will be disabled on the 2 ENIs.
If I take a virtual IP (e.g. 10.1.1.1/32) and modify the route tables:
route table 1 (net 10.0.0.0/24)
10.1.1.1/32 via ENI-1
route table 2 (net 10.0.1.0/24)
10.1.1.1/32 via ENI-2
I launch 2 instances (each in different subnets) and assign ENI-1 to the first instance. ENI-2 will be added to the second instance.
Afterwards I would use, for example, Heartbeat on Linux to bring the IP 10.1.1.1 live on the first instance.
Would such a setup work? I want to create a multi-zone highly available setup without using DNS failover.
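For clarity, this is roughly how I would create those routes and ENI settings with boto3 (the route table and ENI IDs below are placeholders for the resources in my example):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs standing in for the resources described above
ROUTE_TABLE_1 = "rtb-11111111"   # route table for subnet 10.0.0.0/24
ROUTE_TABLE_2 = "rtb-22222222"   # route table for subnet 10.0.1.0/24
ENI_1 = "eni-11111111"
ENI_2 = "eni-22222222"

# Disable source/dest check on both ENIs so they can handle the VIP traffic
for eni in (ENI_1, ENI_2):
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni,
        SourceDestCheck={"Value": False},
    )

# Point the virtual IP at a different ENI in each route table
ec2.create_route(RouteTableId=ROUTE_TABLE_1,
                 DestinationCidrBlock="10.1.1.1/32",
                 NetworkInterfaceId=ENI_1)
ec2.create_route(RouteTableId=ROUTE_TABLE_2,
                 DestinationCidrBlock="10.1.1.1/32",
                 NetworkInterfaceId=ENI_2)
```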

As a multi-zone alternative, you can put both instances behind the same Elastic Load Balancer and abstract the endpoint that way, although this won't work for multi-region.

No, what you've described is not possible since a subnet cannot exist in multiple availability zones. When you create an ENI, you choose a subnet for it, and any IP you assign to that ENI must be in the range of that subnet. ENIs in two different AZs thus could not both be assigned the same address.
Instead, you can use either an ELB or a service such as HAProxy to avoid DNS-based failover.
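For example, a minimal boto3 sketch of the ELB approach, registering one instance per AZ behind an internal load balancer (all names and IDs here are hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical IDs for the two subnets/instances described in the question
SUBNET_1, SUBNET_2 = "subnet-11111111", "subnet-22222222"
INSTANCE_A, INSTANCE_B = "i-0aaaaaaaaaaaaaaaa", "i-0bbbbbbbbbbbbbbbb"

# Internal load balancer spanning both AZs
lb = elbv2.create_load_balancer(
    Name="ha-internal-lb",
    Subnets=[SUBNET_1, SUBNET_2],
    Scheme="internal",
    Type="application",
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="ha-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-11111111",
    TargetType="instance",
)["TargetGroups"][0]

# Register one instance per AZ; the load balancer handles failover between them
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": INSTANCE_A}, {"Id": INSTANCE_B}],
)

elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```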

Related

AWS create an Internal Network Load Balancer: instructions are contradictory

OK I'm trying to create an internal Network Load Balancer.
On the console, it says:
Mappings
Select at least two Availability Zones and one subnet per zone.
And at the same time it also says:
Your internal load balancer must have a private subnet.
I have created a new subnet (NLB-subnet, i.e. subnet-084f41a2d64bd25ad) in my VPC, just for the NLB.
When you create a new subnet, you must choose the zone in which your subnet will reside. And you can only choose one in the AWS console. So I did, and I chose ap-northeast-1a.
However, when it asks me to "Select at least two Availability Zones and one subnet per zone", I am confused like a 2-year-old:
I have selected the AZ ap-northeast-1a for the NLB mapping, and that's where my new subnet resides, no problem.
But then I have to select a second AZ???
The second AZ has no subnet just for the NLB, because you can only choose one AZ for the subnet!
What does it want me to do?
Do I have to create a new private subnet in every one of the 3 Availability Zones, just for the NLB?
what? why?
You don't need to place your NLB in two AZs if you don't want to. An NLB works fine in a single AZ as well. Only an ALB is required to have two AZs. From the docs:
You enable one or more Availability Zones for your load balancer when you create it. If you enable multiple Availability Zones for your load balancer, this increases the fault tolerance of your applications.
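For what it's worth, creating an internal NLB with a single subnet (and therefore a single AZ) is straightforward with boto3; the subnet ID below is the one from the question, and the name is made up:

```python
import boto3

elbv2 = boto3.client("elbv2")

response = elbv2.create_load_balancer(
    Name="my-internal-nlb",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-084f41a2d64bd25ad"],  # one subnet -> one AZ; enough for an NLB
)
print(response["LoadBalancers"][0]["DNSName"])
```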

Multiple ENIs on EC2 use case

I created 2 private subnets, PRIVATEA and PRIVATEB, in a custom VPC. These subnets are in different availability zones. I added an EC2 instance in PRIVATEA. The instance already has an ENI eth0 attached to it. Next I created an ENI in the other subnet, PRIVATEB, and attached it to the EC2 instance. The setup was successful. I basically followed a blog tutorial for this setup, which said that the secondary interface will allow traffic for another group, i.e. management.
But I am not able to relate any use case to it. Could anyone please explain when we would use such a setup? Is this the correct forum to ask this question?
Thanks
An Elastic Network Interface (ENI) is a virtual network card that connects an Amazon EC2 instance to a subnet in an Amazon VPC. In fact, ENIs are also used by Amazon RDS databases, AWS Lambda functions, Amazon Redshift databases and any other resource that connects to a VPC.
Additional ENIs can be attached to an Amazon EC2 instance. These extra ENIs can be attached to different subnets in the same Availability Zone. The operating system can then route traffic out to different ENIs.
Security Groups are actually associated with ENIs (not instances). Thus, different ENIs can have different rules about traffic that goes in/out of an instance.
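As a rough illustration, attaching an additional ENI from a second subnet (in the same AZ as the instance) with its own security group might look like this in boto3 (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs: a "management" subnet in the SAME AZ as the instance,
# and a security group with management-only rules
eni = ec2.create_network_interface(
    SubnetId="subnet-0aaaaaaaaaaaaaaaa",
    Groups=["sg-0bbbbbbbbbbbbbbbb"],
    Description="management interface",
)["NetworkInterface"]

# Attach as eth1 (device index 1); eth0 is the instance's primary ENI
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,
)
```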
An example for using multiple ENIs is to create a DMZ, which acts as a perimeter through which traffic must pass. For example:
Internet --> DMZ --> App Server
In this scenario, all traffic must pass through the DMZ, where traffic is typically inspected before being passed onto the server. This can be implemented by using multiple ENIs, where one ENI connects to a public subnet to receive traffic and another ENI connects to a private subnet to send traffic. The Network ACLs on the subnets can be configured to disallow traffic passing between the subnets, so that the only way traffic can flow from the public subnet to the private subnet is via the DMZ instance, since it is connected to both subnets.
Another use-case is software that attaches a license to a MAC address. Some software products do this because MAC addresses are (meant to be) unique across all networking devices (although some devices allow it to be changed). Thus, they register their software under the MAC address attached to a secondary ENI. If that instance needs to be replaced, the secondary ENI can be moved to another instance without the MAC address changing.

Which subnet's NACL applies to an RDS instance in a subnet group?

If an EC2 instance is spun up in a subnet, that subnet's NACL rules apply to the instances in that subnet. But in the case of RDS, a "subnet group" is attached to the RDS instance. If I have 2 subnets in the subnet group, which subnet's NACL rules are applied to the RDS instance?
When you launch an RDS instance, each instance is launched in a single subnet only; a cluster, on the other hand, will spread instances across the subnets, i.e. read replicas and the Multi-AZ standby.
Each instance, if you look at its properties, will show an Availability Zone; using this you can narrow things down to the AZ of the host. Assuming you only have a single subnet per AZ in your subnet group, you can then identify the subnet.
If you have multiple subnets per AZ, you would need to dig (or ping) the RDS instance's hostname to get its IP address, then work out which subnet's range that address falls in.
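For example, a small Python sketch of that last step, using a made-up endpoint and subnet CIDRs:

```python
import socket
from ipaddress import ip_address, ip_network

# Hypothetical values: the RDS endpoint and the CIDRs of the two subnets
# in the DB subnet group
endpoint = "mydb.abcdefghijkl.us-east-1.rds.amazonaws.com"
subnets = {
    "subnet-a": ip_network("10.0.0.0/24"),
    "subnet-b": ip_network("10.0.1.0/24"),
}

addr = ip_address(socket.gethostbyname(endpoint))  # same answer dig would give
for name, cidr in subnets.items():
    if addr in cidr:
        print(f"{endpoint} ({addr}) is in {name}, so {name}'s NACL applies")
```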

AWS Network Load Balancer doesn't allow traffic to a target instance from that same instance

I have an ECS cluster consisting of 2 instances in different AZs. One of the many services I run is an SMTP relay. I want to use a Network Load Balancer in front of this service to easily configure other applications to use the relay.
Once I got everything in place, I faced the following issue:
If the container is present on instance 'A', only instance 'B' is able to access it, and vice versa; otherwise it times out. So the Network Load Balancer seems to prevent access to a service that lives on the same instance.
Is there something I'm missing here? Is anyone aware of this and have a workaround?
Update:
When scaling the service to 2 instances it started to work. I now tend to believe it's related to the Availability Zones.
I experienced a similar issue.
Here is my setup:
A VPC spread over 3 AZs.
3 public subnets (one in each AZ)
1 instance in a public subnet in AZ-a
3 private subnets (one in each AZ)
1 NLB spread over the 3 private subnets.
A cluster of ECS instances. 1 instance in each private subnet. (instance-a in AZ-a, instance-b in AZ-b, instance-c in AZ-c)
A service running on each instance; in total 3 healthy services spread over the 3 private subnets registered to the NLB.
A route 53 Alias record to map "myservice.example.com" to the NLB DNS name.
Below the tests executed:
Query initiated from an instance in a private subnet
Test1: From instance-a (in AZ-a), query "myservice.example.com".
Result1: The query hits the NLB on one of its private IPs. If the IP is in the same subnet as instance-a, the query will time out. If the IP is in a different subnet, the query will succeed.
Test2: Same as Test1 but query from instance-b (in AZ-b).
Result2: The query hits the NLB on one of its private IPs. If the IP is in the same subnet as instance-b, the query will time out. If the IP is in a different subnet, the query will succeed.
Similar result with a query initiated from instance-c.
Query initiated from an instance in a public subnet AZ-a
Test3: From the instance in public subnet in AZ-a, query "myservice.example.com".
Result3: The query hits the NLB on one of its private IPs. The query always succeeds, regardless of which private IP was hit.
Query initiated from an extra instance (instance-a2) in private subnet AZ-a
Test4: I have launched an additional instance (instance-a2) in the private subnet in AZ-a. Then, from instance-a2, query "myservice.example.com". IMPORTANT: This instance does not run any service and therefore can never be selected by the NLB to route any request.
Result4: The query succeeds all the time! Even when hitting a target that is in the private subnet A (same subnet as instance-a2).
Conclusions:
With Test1 and Test2, I could experience the same issue as Laurent Jalber Simard when querying from an instance that was hosting the target service.
Per Test3, the issue does not seem to come from requests coming from the same AZ as the target service.
With Test4, it appears that the issue cannot be reproduced if the query comes from an instance that is different from the instance hosting the target service; even if they are in the same subnet.
Therefore, my conclusion so far is that the NLB will time out if the source IP of the request and the destination IP of the target selected by the NLB are the same.
I couldn't find this issue/limitation documented in AWS NLB docs and so far nothing comes up in a Google search.
Is there anybody out there reaching the same conclusion?
Solution
If you would like to keep the containers on the same instance and still use an NLB, you need to use the "awsvpc" network mode in your task definition and change the target group type to "ip" (not instance ID).
Explanation
The NLB doesn't support hairpinning of requests. When you register targets by instance ID, the source IP addresses of clients are preserved. When you try to connect to the NLB from a backend instance, a loopback is created; this is not allowed by the NLB because the source and destination address are the same, so the connection times out. If an instance is a client of an internal load balancer that it is registered with by instance ID, the connection succeeds only if the request is routed to a different instance.
Some extra info: https://aws.amazon.com/premiumsupport/knowledge-center/target-connection-fails-load-balancer/
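As a rough sketch of the fix (the names and VPC ID are hypothetical; the important parts are networkMode="awsvpc" and TargetType="ip"):

```python
import boto3

ecs = boto3.client("ecs")
elbv2 = boto3.client("elbv2")

# Task definition using the awsvpc network mode, so each task gets its own ENI and IP
ecs.register_task_definition(
    family="smtp-relay",
    networkMode="awsvpc",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "smtp-relay",
        "image": "my-smtp-relay:latest",
        "memory": 256,
        "portMappings": [{"containerPort": 25, "protocol": "tcp"}],
    }],
)

# Target group registering task IPs rather than instance IDs, which avoids the hairpin problem
elbv2.create_target_group(
    Name="smtp-relay-ip-targets",
    Protocol="TCP",
    Port=25,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)
```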

How many subnets required in a VPC

I wish to implement my website in AWS virtual private cloud (VPC) with the following requirement:
The web tier will use an Auto Scaling group across multiple Availability Zones (AZs).
The database will use Multi-AZ RDS MySQL and should not be publicly accessible.
What is the minimal number of subnets required?
I assume one subnet = one AZ. Having said that, I will need 2 subnets for the RDS instance and one for my web tier, which might have to sit in a public subnet. So 3 minimum in total?
You have two options:
Do everything in Public Subnets, using Security Groups to protect your database, or
Use Public & Private Subnets
In both options, you would need:
An Amazon VPC
An Internet Gateway (which connects the VPC to the Internet)
An Elastic Load Balancer
An Auto Scaling group of Amazon EC2 instances running your web tier
An Amazon RDS Multi-AZ database -- you have indicated a preference for MySQL
Also, you would create three security groups (sketched in code after this list):
A Load Balancer security group, permitting inbound traffic from the Internet (0.0.0.0/0) for HTTP (port 80) and presumably HTTPS (port 443)
A Web Tier security group, permitting inbound traffic from the Load Balancer security group on the same ports
A Database security group, permitting inbound traffic from the Web Tier security group on port 3306 (MySQL)
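A rough boto3 sketch of those three security groups, using a placeholder VPC ID:

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder VPC ID

def make_sg(name, description):
    """Create an empty security group in the VPC and return its ID."""
    return ec2.create_security_group(
        GroupName=name, Description=description, VpcId=VPC_ID
    )["GroupId"]

def tcp_rule(port, cidr=None, source_sg=None):
    """Build one ingress permission, from either a CIDR or another security group."""
    rule = {"IpProtocol": "tcp", "FromPort": port, "ToPort": port}
    if cidr:
        rule["IpRanges"] = [{"CidrIp": cidr}]
    if source_sg:
        rule["UserIdGroupPairs"] = [{"GroupId": source_sg}]
    return rule

lb_sg = make_sg("lb-sg", "Load Balancer security group")
web_sg = make_sg("web-sg", "Web Tier security group")
db_sg = make_sg("db-sg", "Database security group")

# Load Balancer: HTTP/HTTPS from the Internet
ec2.authorize_security_group_ingress(GroupId=lb_sg, IpPermissions=[
    tcp_rule(80, cidr="0.0.0.0/0"), tcp_rule(443, cidr="0.0.0.0/0")])

# Web Tier: the same ports, but only from the Load Balancer security group
ec2.authorize_security_group_ingress(GroupId=web_sg, IpPermissions=[
    tcp_rule(80, source_sg=lb_sg), tcp_rule(443, source_sg=lb_sg)])

# Database: MySQL only from the Web Tier security group
ec2.authorize_security_group_ingress(GroupId=db_sg, IpPermissions=[
    tcp_rule(3306, source_sg=web_sg)])
```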
Option 1: Do everything in public subnets
In this option, you can put all services inside a Public Subnet (which is defined as a subnet connected to the Internet via an Internet Gateway).
You wish to implement a multi-AZ solution, so you will need one subnet per AZ. If you choose to use two AZs, this means you will need two subnets. (You could choose to use more than two AZs/subnets, if they are available in your region.)
Deploy your Load Balancer in both subnets. Create your Auto Scaling group to use both subnets. Create an Amazon RDS DB Subnet Group across both subnets for use by the multi-AZ database and launch the database into that DB Subnet Group.
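As a sketch with placeholder subnet and security group IDs, the DB Subnet Group and Multi-AZ database part might look like this in boto3:

```python
import boto3

rds = boto3.client("rds")

# Placeholder subnet IDs for the two subnets (one per AZ)
rds.create_db_subnet_group(
    DBSubnetGroupName="web-db-subnet-group",
    DBSubnetGroupDescription="Two subnets, one per AZ",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
)

# Multi-AZ MySQL database launched into that DB Subnet Group
rds.create_db_instance(
    DBInstanceIdentifier="web-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,
    DBSubnetGroupName="web-db-subnet-group",
    VpcSecurityGroupIds=["sg-0ccccccccccccccc0"],  # the Database security group
    PubliclyAccessible=False,
)
```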
The security groups will ensure that only the Load Balancer is exposed to the Internet. Nothing else will be publicly accessible.
Option 2: Use Public & Private Subnets
Some people prefer using Private Subnets to ensure resources are not exposed to the Internet. This is mostly to remain compatible with traditional on-premises architecture that does not have the concept of a Security Group.
This option would involve:
A Public Subnet in each AZ: Put your Load Balancer in these subnets
A Private Subnet in each AZ: Put your Web Tier Auto Scaling group and your database in these subnets (defined via the DB Subnet Group)
Use the same Security Groups as option 1
There is no requirement for 3 subnets. If you put 2 subnets in different AZs, with 1 public and 1 private, they can still communicate with each other, because within one VPC instances in different subnets can communicate. But if you put 3 subnets in different AZs, as you said, that is better for security and for accessibility from the web server.