Adding an additional static IP to elastic beanstalk instances - amazon-web-services

I have a few Elastic Beanstalk applications on the same VPC (this could also be reduced to one application), and I'd like them to be accessible both via a single IP address (for both inbound and outbound traffic) and via their own URLs. I've seen that this can be done via NAT, but I haven't found documentation on whether that covers all traffic (in both directions) and whether it can be done alongside the original endpoints. Another question is whether there is a better way to do this.

NAT is used to provide internet access for instances in private subnets. In this case all instances in the subnet will have the same external IP, but you won't be able to access your private instances using that IP; it is only for outbound traffic.
In your case I'd go with an ELB. Following best practices, keep the instances running your applications in private subnets and:
Have an external facing ELB in public subnets (you'll need at least 2 public subnets in different AZs).
Create a Target Group and add your instances with running apps to it.
Assign the Target Group to the listener on your ELB.
Configure the security groups on the ELB and the app instances to allow traffic on the port the applications are serving (often 8080).
As a result you'll have your instances accessible via the ELB's URL. If you want a prettier URL, you can configure it in Route 53 with an alias to the ELB's DNS name.
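A minimal Terraform sketch of these steps, for illustration only. The var.* references (VPC, public subnets, instance ID, hosted zone), the 8080 app port and the app.example.com record are assumptions, not values from the question, and are expected to be declared elsewhere:

    # Security group for the load balancer: open to the internet
    resource "aws_security_group" "alb" {
      name   = "alb-sg"                      # hypothetical names throughout
      vpc_id = var.vpc_id
      ingress {
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]          # clients on the internet
      }
      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }

    # Security group for the app instances: only the ELB may reach the app port
    resource "aws_security_group" "app" {
      name   = "app-sg"
      vpc_id = var.vpc_id
      ingress {
        from_port       = 8080
        to_port         = 8080
        protocol        = "tcp"
        security_groups = [aws_security_group.alb.id]
      }
    }

    # Internet-facing ALB in at least two public subnets in different AZs
    resource "aws_lb" "public" {
      internal           = false
      load_balancer_type = "application"
      security_groups    = [aws_security_group.alb.id]
      subnets            = var.public_subnet_ids
    }

    # Target group and instance registration
    resource "aws_lb_target_group" "app" {
      port     = 8080
      protocol = "HTTP"
      vpc_id   = var.vpc_id
    }

    resource "aws_lb_target_group_attachment" "app" {
      target_group_arn = aws_lb_target_group.app.arn
      target_id        = var.app_instance_id
      port             = 8080
    }

    # Listener that forwards to the target group
    resource "aws_lb_listener" "http" {
      load_balancer_arn = aws_lb.public.arn
      port              = 80
      protocol          = "HTTP"
      default_action {
        type             = "forward"
        target_group_arn = aws_lb_target_group.app.arn
      }
    }

    # Optional pretty URL: alias a Route 53 record to the load balancer's DNS name
    resource "aws_route53_record" "app" {
      zone_id = var.hosted_zone_id
      name    = "app.example.com"
      type    = "A"
      alias {
        name                   = aws_lb.public.dns_name
        zone_id                = aws_lb.public.zone_id
        evaluate_target_health = false
      }
    }

The key point is the security group chain: the ALB accepts traffic from the internet, and the instances accept traffic only from the ALB's security group.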

It's not possible using the AWS-provided NAT gateway/cluster, but it can be achieved by hosting a box that runs both a load balancer and NAT on the same instance with an EIP. Map your domain to that IP for incoming traffic; for outgoing traffic, configure that instance as the target of the 0.0.0.0/0 route in the private app subnet's route table. However, this is not the recommended approach, since the front-facing instance becomes a single point of failure (SPOF).
The recommended way is to use an ELB as the front-facing entry point and a NAT gateway/cluster for outgoing traffic, for high availability.

Related

AWS EC2 Internet access from behind Load Balancer

Using Terraform to set up a VPC with two EC2s in private subnets. The setup needs SSH access to the EC2s to install package updates from the Internet and to install the application software. To do this there is an IGW and a NAT-GW in a public subnet. Both EC2s can access the Internet at this point, as both private subnets route to the NAT-GW. Terraform and SSH access to the private subnets are done via Client VPN.
One of the EC2s is going to host a web service, so a Classic Load Balancer is added and configured to target the web server EC2. Classic is used because I can't find a way to make Terraform build Application Load Balancers. The Load Balancer requires the instance to be in a subnet that routes to the IGW, so the subnet is changed from routing to the NAT-GW to routing to the IGW. At this point the Load Balancer comes online, with the EC2 responding, and the public Internet can access the web service using the DNS endpoint supplied for the LB.
But now the web server EC2 can no longer access the Internet itself. I can't curl google.com or get package updates.
I would like to find a way to let the EC2 access the Internet from behind the LB and not use CloudFront at this time.
I would like to keep the EC2 in a private subnet because a public subnet causes the EC2 to have a public IP address, and I don't want that.
Looking for a way to make the LB work without switching subnets, as that would make the EC2 web service unavailable when doing updates.
Not wanting any iptables or firewalld tricks. I would really like an AWS solution that is distro agnostic.
A few points/clarifications about the problems you're facing:
Instances in a public subnet do not need a NAT Gateway; they can initiate outbound requests to the internet via the IGW. An NGW is for allowing outbound IPv4 connections from instances in private subnets.
The load balancer itself needs to be in a public subnet. The instances that the LB routes to do not; they can be in the same subnets or different ones, public or private, as long as the traffic is allowed through security groups.
You can create instances without a public IP in a public subnet; however, they won't be able to receive traffic from, or send traffic to, the internet.
Terraform supports ALBs. The resource is aws_lb with load_balancer_type set to "application" (this is the default option).
That said, the public-private configuration you want is entirely possible.
Your ALB and NAT Gateway need to be on the public subnet, and EC2 instances on the private subnet.
The private subnet's route table needs to have a route to the NGW, to facilitate outbound connections.
EC2 instances' security group needs to allow traffic from the ALB's security group.
It sounds like you got steps 1 and 2 working, so the connection from ALB to EC2 is what you have to work on. See the documentation page here as well - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
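A rough Terraform sketch of that layout; the var.* references are placeholders assumed to be declared elsewhere, and the security groups are assumed to exist. It shows the aws_lb resource with load_balancer_type = "application" mentioned above, the NAT Gateway route for the private subnet, and the SG-to-SG rule from the ALB to the instances:

    # ALBs are supported in Terraform via aws_lb with load_balancer_type = "application"
    resource "aws_lb" "web" {
      internal           = false
      load_balancer_type = "application"
      security_groups    = [var.alb_security_group_id]
      subnets            = var.public_subnet_ids       # public subnets only
    }

    # NAT Gateway in a public subnet so the private EC2s keep outbound internet access
    resource "aws_eip" "nat" {
      domain = "vpc"          # "vpc = true" on provider versions before 5.x
    }

    resource "aws_nat_gateway" "ngw" {
      allocation_id = aws_eip.nat.id
      subnet_id     = var.public_subnet_ids[0]
    }

    # The private subnet routes its default traffic through the NAT Gateway
    resource "aws_route" "private_default" {
      route_table_id         = var.private_route_table_id
      destination_cidr_block = "0.0.0.0/0"
      nat_gateway_id         = aws_nat_gateway.ngw.id
    }

    # Instances only accept web traffic from the ALB's security group
    resource "aws_security_group_rule" "alb_to_web" {
      type                     = "ingress"
      security_group_id        = var.web_security_group_id
      from_port                = 80
      to_port                  = 80
      protocol                 = "tcp"
      source_security_group_id = var.alb_security_group_id
    }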

AWS Load Balancer access from EKS worker nodes in private subnets

I have an EKS cluster with worker nodes in private subnets. The worker nodes can access the internet via the NAT gateway. I have a Route53 hosted zone record routing traffic (alias) to a load balancer.
When I try to access the URL (the Route53 record) from a pod within the EKS cluster, it times out. I tried allowing the worker nodes' security group in the inbound rules of the load balancer security group, but it does not work. The only thing that works is allowing the public IP of the NAT gateway in the inbound rules of the load balancer security group.
I am sure this setup is very common. My question is: is allowing the NAT gateway's public IP in the inbound rules of the LB SG the correct way, or is there a better, cleaner way to allow the access?
Based on what you have described here, it seems like you have an internet-facing load balancer and are trying to access it from a pod. In this case the traffic needs to go out to the internet (through the NAT gateway) and come back to the load balancer, which is why it only works when you add the public IP of the NAT gateway to the load balancer's SG.
Now, in terms of the solution, it depends on what you are trying to do here:
If you only need to consume the service inside the cluster, you can use the DNS name created for that service inside the cluster. In this case the traffic will stay inside the cluster. You can read more here.
If you need to make the service available to other clusters in the same VPC, you can use a private (internal) load balancer and add the security group of the worker nodes to the load balancer's SG, as sketched below.
If the service needs to be exposed to the internet, then your solution works, but you have to open the SG of the public load balancer to all the public IPs that access the service.
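For the second option, a hedged Terraform sketch of the security group relationship; the var.* references are placeholders assumed to be declared elsewhere. In EKS the internal load balancer would more commonly be created by a Service of type LoadBalancer with the internal scheme annotation; this is just the standalone equivalent:

    # Security group for the internal load balancer
    resource "aws_security_group" "internal_lb" {
      name   = "internal-lb-sg"
      vpc_id = var.vpc_id
    }

    # Let the EKS worker nodes' security group reach the internal LB
    resource "aws_security_group_rule" "workers_to_lb" {
      type                     = "ingress"
      security_group_id        = aws_security_group.internal_lb.id
      from_port                = 443
      to_port                  = 443
      protocol                 = "tcp"
      source_security_group_id = var.worker_node_sg_id
    }

    # Internal ALB, reachable only from inside the VPC
    resource "aws_lb" "internal" {
      internal           = true
      load_balancer_type = "application"
      security_groups    = [aws_security_group.internal_lb.id]
      subnets            = var.private_subnet_ids
    }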

AWS NLB in public subnets with EC2 in private subnets

Has anyone configured an NLB in the public subnets of their VPC to route traffic to EC2 instances that are in private subnets?
When using an ELB, a good solution is to create a Security Group for the ELB and then create another Security Group for the private EC2 instances, allowing incoming traffic from that ELB Security Group, as explained here:
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
"You can also add a rule on the instance’s security group to allow traffic from the security group assigned to the load balancer. For example, if the security group on the load balancer is sg-1234567a, make the following changes on the security group associated with the private instances"
Since you cannot associate a Security Group with an NLB, how could you accomplish this with the same type of security?
Thanks!
Since you cannot associate a Security Group with an NLB, how could you accomplish this with the same type of security?
The security aspect does not change.
The NLB is a different beast; it is not the same as the Classic Load Balancer. With Classic Load Balancers, from the point of view of your instances, traffic appears to come from inside the VPC. From the outside, traffic goes to a (random and changing) list of IP addresses, resolved via the DNS record that AWS provides to you.
Network Load Balancers are completely different. From the point of view of your instances, they are completely invisible. If it is an external Network Load Balancer, traffic appears to come directly from clients on the internet (even though this is an illusion). Therefore, if you want to accept traffic from everyone on the internet, 0.0.0.0/0 is what you open it to.
This is, in fact, what the documentation says:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#target-security-groups
Recommended inbound rules:
Source: Client IP addresses | Port range: instance listener port | Comment: Allow traffic from clients on the instance listener port
Source: VPC CIDR | Port range: health check port | Comment: Allow traffic from the load balancer on the health check port
Client IP addresses means whatever your client IPs are. If they are on the open internet, 0.0.0.0/0 it is. Adding the NLB's private IP address, as I saw in other responses, accomplishes nothing; traffic is not coming from there, as far as the instances are concerned.
On the security angle, nothing changes. Since your instances are in private subnets, traffic cannot flow directly to them, as there is a NAT gateway in the middle; it can only flow from them to the internet (through the NAT gateway, then the internet gateway). Even if you allow all traffic from everywhere, traffic still won't arrive directly. It has to come through another way; in your case, that way is the NLB, which listens on a fixed set of ports and only sends traffic to the destination ports on the instances you specify.
If you are moving from Classic Load Balancers to NLBs, move the security group rules from the load balancer to your instances. Or better yet, since you can have multiple security groups, just add the SG you currently have on the Classic LB to the instances (and update any ASGs as needed). Your security posture will be exactly the same, with the added benefit that your applications won't need things like proxy protocol to figure out where traffic is coming from; it is no longer obfuscated by the load balancer.
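A small Terraform sketch of the recommended rules above, applied directly to the instances' security group; the ports and the var.* references are examples assumed to be declared elsewhere, not values from the question:

    # Client traffic arrives with the original source IP, so allow the client range
    # on the instance listener port (0.0.0.0/0 for an internet-facing NLB)
    resource "aws_security_group_rule" "clients" {
      type              = "ingress"
      security_group_id = var.instance_security_group_id
      from_port         = 443
      to_port           = 443
      protocol          = "tcp"
      cidr_blocks       = ["0.0.0.0/0"]
    }

    # Health checks come from the NLB nodes' private addresses inside the VPC
    resource "aws_security_group_rule" "health_checks" {
      type              = "ingress"
      security_group_id = var.instance_security_group_id
      from_port         = 80                     # assumed health check port
      to_port           = 80
      protocol          = "tcp"
      cidr_blocks       = [var.vpc_cidr]         # e.g. "10.0.0.0/16"
    }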
That is indeed true, as per the AWS documentation:
Network Load Balancers do not have associated security groups. Therefore, the security groups for your targets must use IP addresses to allow traffic from the load balancer.
So if you do not want to grant access to the entire VPC CIDR, you can grant access to the private IP addresses used by the load balancer nodes. There is one IP address per load balancer subnet.
For the NLB, there is one network interface per load balancer subnet, listed under Network Interfaces in the EC2 console. From there:
On the Details tab for each network interface, copy the address from Primary private IPv4 IP.
You can use these private IP addresses by adding them to the SG of the EC2 instances.
Please refer to the AWS documentation.
Tail your HTTP access logs and you will see that the source IP address is not changed by the Network Load Balancer, which means you need to allow 0.0.0.0/0 on the endpoint's security group if the internet needs access to your endpoint.
This is only OK if the server is in a private subnet, so be careful if you have this server in a public subnet, as this approach would not be advisable there. In that case, just use an Application Load Balancer. You can still set up the same listener and configure a target group by instance as well. The Application Load Balancer rewrites the source IP address to its own private address, as you can see if you tail the access logs. The advantage of this is that you only need to allow HTTPS traffic to the app load balancer, and then you can accept HTTP from the load balancer on the target group if you like.
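A short Terraform sketch of that ALB setup, terminating HTTPS at the load balancer and forwarding plain HTTP to an instance target group; the certificate ARN, the load balancer ARN and the ports are placeholders assumed to be declared elsewhere:

    # Target group registers instances on plain HTTP
    resource "aws_lb_target_group" "web" {
      port        = 80
      protocol    = "HTTP"
      target_type = "instance"
      vpc_id      = var.vpc_id
    }

    # HTTPS terminates at the ALB; traffic to the targets stays HTTP
    resource "aws_lb_listener" "https" {
      load_balancer_arn = var.alb_arn              # ARN of the existing ALB (placeholder)
      port              = 443
      protocol          = "HTTPS"
      ssl_policy        = "ELBSecurityPolicy-2016-08"
      certificate_arn   = var.certificate_arn      # placeholder ACM certificate
      default_action {
        type             = "forward"
        target_group_arn = aws_lb_target_group.web.arn
      }
    }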

Inter-Elastic Beanstalk communication within the same VPC [duplicate]

We are trying to use Elastic Load Balancing in AWS with auto-scaling so we can scale in and out as needed.
Our application consists of several smaller applications, they are all on the same subnet and the same VPC.
We want to put our ELB between one of our apps and the rest.
The problem is that we want the load balancer to work both internally, between the different apps via an API, and as an internet-facing load balancer, because our application still has some usage that should happen externally and not through the API.
I've read this question, but I could not figure out exactly how to do it from there; it does not really specify any steps, or maybe I did not understand it very well.
Can we have an ELB that is both internal and external?
For the record, I can only access this network through a VPN.
It is not possible for an Elastic Load Balancer to have both a public IP address and a private IP address. It is one or the other, but not both.
If you want your ELB to have a private IP address, then it cannot listen to requests from the internet.
If your ELB is public-facing, you can still call it from your internal EC2 instances using the public endpoint. However, there are some caveats that go with this:
The traffic will exit your VPC and re-enter it. It will not be the direct instance-to-ELB connection that a private IP address would afford you.
You also cannot use security groups in your security group rules.
There are 3 alternative scenarios:
Duplicate the ELB and EC2 instances, one dedicated to private traffic, one dedicated to public traffic.
Have 2 ELBs (one public, one private) that share the same back-end EC2 instances.
Don't use an ELB for either private or public traffic, and instead use an Elastic IP address (if public) or a private IP address (if private) on a single EC2 instance.
I disagree with #MattHouser's answer. Actually, in a VPC, your ELB has all of its interfaces listed under Network Interfaces, each with a public IP AND a primary private IP.
I've tested the private IP of my public ELB and it works exactly like the external one.
The problem is: these IPs are not listed anywhere in an up-to-date manner, the way they are behind a private ELB's DNS name. So you have to keep track of them yourself.
I've made a little POC script for this, with an internal Route53 hosted zone: https://gist.github.com/darylounet/3c6253c60b7dc52da927b80a0ae8d428
I made a Lambda function that checks which private IPs are assigned to the load balancer and updates a Route53 record when they change: https://github.com/Bramzor/lambda-sync-private-elb-ips
Using this function, you can easily make use of the ELB for private traffic. I personally use it to connect multiple regions to each other over inter-region VPC peering without needing an additional ELB.
The standard AWS solution would be to have an extra internal ELB for this.
Looks like #DaryL has an interesting workaround, but it could fail for 5 minutes if the DNS is not updated. Also, there is no way to have a separate security group for the internal IPs, since they share the ENIs and the security group of the external IP of the ELB.
I faced the same challenge and I can confirm the best solution so far is to have two different ALBs, one internet-facing and the other internal. You can attach both ALBs to a single Auto Scaling group so they serve the same cluster.
Make sure the networking options (subnets, security groups) of both ALBs are the same, so that both can reach the same cluster instances. Auto Scaling and the Launch Configuration work seamlessly with both ALBs attached to the same Auto Scaling group. This also works with ALBs created from Elastic Beanstalk environments.
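A hedged Terraform sketch of that arrangement, with one Auto Scaling group registered against target groups from both the internet-facing and the internal ALB; all var.* references are placeholders assumed to be declared elsewhere:

    resource "aws_autoscaling_group" "app" {
      min_size            = 2
      max_size            = 4
      vpc_zone_identifier = var.private_subnet_ids

      launch_template {
        id      = var.launch_template_id
        version = "$Latest"
      }

      # Register the same instances with both load balancers
      target_group_arns = [
        var.public_alb_target_group_arn,     # target group on the internet-facing ALB
        var.internal_alb_target_group_arn    # target group on the internal ALB
      ]
    }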

Static IP using Elastic Beanstalk

I need the static IP to allow access to a firewalled network not on the AWS network.
Is it possible to get a static IP for a load balanced app using Elastic Beanstalk? I'm following the AWS docs regarding using Route 53 to host my app with a domain name, but from what I've read, this does not ensure a static IP because it is essentially using a CNAME allowing the IP behind the scenes to change. Is that the right understanding? Is it possible at all?
This post helped me get a static IP for outgoing requests by using a NAT Gateway, and routing specific requests through it.
I needed this static IP in order to be whitelisted from an external API provider.
I found this way much easier than the one provided by AWS, without the need to create a new VPC with private and public subnets.
Basically, what I did was:
Create a new subnet to host the NAT Gateway.
Create the NAT Gateway in the above subnet, and assign a new Elastic IP. This one will be our outgoing IP for hitting external APIs.
Create a route table for the NAT subnet. In this route table, all outbound traffic (0.0.0.0/0) should be routed through the Internet Gateway, so the NAT Gateway itself can reach the internet. Assign the created subnet to use the new route table.
Modify the main route table (the one that handles all our EC2 instances requests), and add the IP(s) of the external API, setting its target to the NAT Gateway.
This way we can route any request to the external API IPs through the NAT Gateway. All other requests are routed through the default Internet Gateway.
As the post says, this is not a Multi-AZ solution, so if the AZ that holds our NAT Gateway fails, we may lose connection to the external API.
Update:
See #TimObezuk comment to make this a Multi-AZ solution.
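A rough Terraform equivalent of steps 1-4 above, assuming placeholder names, a placeholder subnet CIDR and a placeholder API address (203.0.113.10); only the external API's IP is routed through the NAT Gateway in the main route table, while everything else keeps the default route to the Internet Gateway. The var.* references are assumed to be declared elsewhere:

    # Small subnet dedicated to the NAT Gateway
    resource "aws_subnet" "nat" {
      vpc_id     = var.vpc_id
      cidr_block = "172.31.64.0/28"        # placeholder CIDR inside the VPC
    }

    # Elastic IP for the NAT Gateway; this is the address the API provider whitelists
    resource "aws_eip" "nat" {
      domain = "vpc"
    }

    resource "aws_nat_gateway" "egress" {
      allocation_id = aws_eip.nat.id
      subnet_id     = aws_subnet.nat.id
    }

    # The NAT subnet's route table sends its default traffic to the Internet Gateway,
    # so the NAT Gateway itself can reach the internet
    resource "aws_route_table" "nat" {
      vpc_id = var.vpc_id
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = var.internet_gateway_id
      }
    }

    resource "aws_route_table_association" "nat" {
      subnet_id      = aws_subnet.nat.id
      route_table_id = aws_route_table.nat.id
    }

    # In the main route table, only the external API's address goes through the NAT Gateway
    resource "aws_route" "api_via_nat" {
      route_table_id         = var.main_route_table_id
      destination_cidr_block = "203.0.113.10/32"       # placeholder external API IP
      nat_gateway_id         = aws_nat_gateway.egress.id
    }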
Deploy your Beanstalk environment in a VPC and, with the right configuration, a static IP for outbound traffic is easy.
In this setup, your instances all relay their outbound traffic through a single machine, to which you can assign an Elastic IP address. All of the inside-originated, Internet-bound traffic from all of the instances behind it will appear, from the other network's perspective, to be using that single Elastic IP.
The RDS portion of the following may be irrelevant to your needs but the principles are all the same.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo-vpc-rds.html