I've spent the majority of today reading Google results and documentation on how to connect AWS API Gateway to EC2 instances (created by Elastic Beanstalk) in a private subnet. I know that API Gateway requires targets to be publicly addressable, so...
I manually created an Application Load Balancer that listens for (and terminates) HTTPS at a public IP address;
I created a VPC with two subnets: one public (holds load balancer) and one private (holds EC2 instances); and,
I believe I have to create security groups that allow everyone/everywhere to connect to the load balancer, but only entities in my public subnet to connect to my EC2 instances.
Unfortunately I'm unable to view the sample Beanstalk application via the load balancer's DNS name. The connection just times out.
Can someone please confirm I've identified all the steps? Is there any way I can trace my requests to see where they're failing? Or (even better) why they're failing? Thanks!
Check your security groups to make sure that HTTPS traffic is allowed from the internet to the load balancer, and from the load balancer to your EC2 instances (see the sketch after this list)
Make sure your Network ACLs are allowing traffic from your load balancer to your EC2 instances
Check your VPC routes to ensure there is a route from your load balancer to your EC2 instance
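For illustration, here is a minimal Terraform-style sketch of what those security group rules might look like. Every name, the VPC reference, and the instance port are assumptions for the example, not values from your environment (you may well have built the same thing in the console):

```hcl
# Security group for the public-facing load balancer: HTTPS in from anywhere.
resource "aws_security_group" "alb" {
  name   = "alb-sg"          # illustrative name
  vpc_id = aws_vpc.main.id   # assumes a VPC defined elsewhere as aws_vpc.main

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Security group for the EC2 instances: only accept traffic from the ALB's SG.
resource "aws_security_group" "instances" {
  name   = "instance-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 80                          # assumes the app listens on port 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id] # reference the LB's SG, not a CIDR
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

The key point is the instance SG referencing the load balancer's SG rather than a CIDR range, which matches "only entities fronted by the load balancer can connect."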
Using Terraform to set up a VPC with two EC2s in private subnets. The setup needs to SSH to the EC2s to install package updates from the Internet and install the application software. To do this there is an IGW and a NAT-GW in a public subnet. Both EC2s can access the Internet at this point, as both private subnets are routing to the NAT-GW. Terraform and SSH access to the private subnets is done via Client VPN.
One of the EC2s is going to host a web service, so a Classic Load Balancer is added and configured to target the web server EC2. I'm using Classic because I can't find a way to make Terraform build Application Load Balancers. The Load Balancer requires the instance to be using a subnet that routes to the IGW, so that subnet is changed from routing to the NAT-GW to routing to the IGW. At this point, the Load Balancer comes online with the EC2 responding, and the public Internet can access the web service using the DNS endpoint supplied for the LB.
But now the web server EC2 can no longer access the Internet itself. I can't curl google.com or get package updates.
I would like to find a way to let the EC2 access the Internet from behind the LB and not use CloudFront at this time.
I would like to keep the EC2 in a private subnet because a public subnet causes the EC2 to have a public IP address, and I don't want that.
Looking for a way to make the LB work without switching subnets, as that would make the EC2 web service unavailable when doing updates.
Not wanting any iptables or firewalld tricks. I would really like an AWS solution that is distro-agnostic.
A few points/clarifications about the problems you're facing:
Instances on a public subnet do not need a NAT Gateway. They can initiate outbound requests to the internet via IGW. NGW is for allowing outbound IPv4 connections from instances in private subnets.
The load balancer itself needs to be on a public subnet. The instances that the LB will route to do not. They can be in the same subnet or different subnets, public or private, as long as traffic is allowed through security groups.
You can create instances without a public IP, on a public subnet. However, they won't be able to receive or send traffic to the internet.
Terraform supports ALBs. The resource is aws_lb with load_balancer_type set to "application" (this is the default option).
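For reference, a minimal sketch of an application-type aws_lb with a target group and listener; the subnet, security group, and VPC references are placeholders you would swap for your own resources:

```hcl
resource "aws_lb" "web" {
  name               = "web-alb"                  # illustrative name
  load_balancer_type = "application"              # the default, shown for clarity
  internal           = false                      # internet-facing
  security_groups    = [aws_security_group.alb.id]
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id] # ALBs need >= 2 AZs
}

resource "aws_lb_target_group" "web" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.web.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}
```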
That said, the public-private configuration you want is entirely possible.
1. Your ALB and NAT Gateway need to be on the public subnet, and EC2 instances on the private subnet.
2. The private subnet's route table needs to have a route to the NGW, to facilitate outbound connections.
3. EC2 instances' security group needs to allow traffic from the ALB's security group.
It sounds like you got steps 1 and 2 working, so the connection from ALB to EC2 is what you have to work on. See the documentation page here as well - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
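A minimal Terraform sketch of steps 2 and 3, assuming the VPC, subnets, NAT Gateway, and the two security groups already exist under the names shown (they are placeholders, not your resource names):

```hcl
# Step 2: the private subnet routes its outbound traffic through the NAT Gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id   # NAT Gateway assumed to live in a public subnet
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id       # the subnet holding the web server EC2
  route_table_id = aws_route_table.private.id
}

# Step 3: the instance security group admits traffic from the ALB's security group.
resource "aws_security_group_rule" "alb_to_instance" {
  type                     = "ingress"
  from_port                = 80                # assumes the web service listens on port 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = aws_security_group.instance.id
  source_security_group_id = aws_security_group.alb.id
}
```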
I have an EKS cluster with worker nodes in a private subnet. The worker nodes can access the internet via the NAT gateway. I have a Route53 hosted zone record routing traffic (alias) to a load balancer.
When I try to access the URL (Route53 record) from a pod within the EKS cluster, it times out. I tried allowing the worker nodes' security group in the inbound rules of the load balancer security group, but it does not work. The only thing that works is if I allow the public IP of the NAT gateway in the inbound rules of the load balancer security group.
I am sure this setup is very common. My question is: is allowing the NAT gateway public IP in the inbound rules of the LB SG the correct way, or is there a better, cleaner way to allow the access?
Based on what you have described here, it seems like you have an internet-facing load balancer and are trying to access it from the pod. In this case, the traffic needs to go out to the internet (through the NAT gateway) and come back to the load balancer; that is why it only works when you add the public IP of the NAT gateway to the load balancer's SG.
Now, in terms of the solution, it depends on what you are trying to do here:
If you only need to consume the service inside the cluster, you can use the DNS name created for that service inside the cluster. In this case the traffic will stay inside the cluster. You can read more here.
If you need to make the service available to other clusters in the same VPC, you can use a private load balancer and add the security group of the worker nodes to the load balancer's SG (see the sketch after this list).
If the service needs to be exposed to the internet, then your solution works, but you have to open the SG of the public load balancer to all public IPs accessing the service.
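For the second option, here is a rough Terraform-style sketch of a private (internal) load balancer whose security group admits only the worker nodes' security group. Every name is a placeholder, and the worker node SG is assumed to be defined elsewhere:

```hcl
# Internal ALB: reachable only from inside the VPC, no public IPs.
resource "aws_lb" "internal" {
  name               = "internal-alb"
  load_balancer_type = "application"
  internal           = true
  security_groups    = [aws_security_group.internal_lb.id]
  subnets            = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}

# Only the EKS worker nodes' security group may reach the internal ALB.
resource "aws_security_group" "internal_lb" {
  name   = "internal-lb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.eks_workers.id] # worker node SG, assumed to exist
  }
}
```

With this, pod-to-service traffic stays inside the VPC and never hairpins through the NAT gateway.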
So I have just set up an Application Load Balancer, but I need a static IP to whitelist my database connection. I found that Global Accelerator can do the job, so I have set it up and assigned it to the ALB. Everything shows fine in the console, but when I ping my domain (www.example.com), I don't see either of the 2 static IPs assigned... and when I whitelist both IPs my application still cannot connect.
Am I doing something wrong?
Edit: My database is MongoDB hosted on the Atlas Cloud. In my staging environment I have secured the connection to a single server instance using that server's IP address. Now that I'm moving to a production environment with a load balancer, I'm not quite sure how I would achieve the same result, since I have multiple EC2 instances which can be created/destroyed via autoscaling. My thinking is that I need to whitelist the load balancer IP address rather than individual instances.
I am assuming that your architecture is:
Domain name pointing to an Application Load Balancer in AWS
Load Balancer points to an Auto Scaling group of Amazon EC2 instances
The EC2 instances point to your MongoDB database hosted on the Atlas Cloud
You want a static IP address so that the database can permit access from the Amazon EC2 instances
While incoming traffic to the EC2 instances goes through the Load Balancer, please note that an EC2 instance connecting to the database establishes a separate, outbound connection. This traffic does not go through the Load Balancer. The only traffic coming 'out' of a Load Balancer is the response to requests that came 'in'.
The typical way to implement this architecture is:
Load Balancer in public subnets
Auto-Scaled Amazon EC2 instances in private subnets
A NAT Gateway in the public subnet(s)
This way, the instances in the private subnets can access the Internet via the NAT Gateway, yet they are fully isolated from traffic coming in from the Internet. It has the additional benefit that the NAT Gateway has a static IP address. All traffic going through the NAT Gateway to the Internet will 'appear' to be coming from this IP address.
For fault tolerance, it is recommended to put a NAT Gateway in at least two Availability Zones. Each will have its own static IP address.
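As a rough Terraform sketch of that piece, the Elastic IP attached to the NAT Gateway is the fixed address you would whitelist in Atlas. The subnet reference is a placeholder, and older AWS provider versions use `vpc = true` instead of `domain` on aws_eip:

```hcl
# Elastic IP that becomes the NAT Gateway's fixed public address.
resource "aws_eip" "nat" {
  domain = "vpc"   # on older provider versions: vpc = true
}

# NAT Gateway sits in a public subnet; private-subnet instances route out through it.
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id   # assumed public subnet

  # Outbound traffic from the private instances will appear to Atlas as
  # coming from aws_eip.nat.public_ip — that is the address to whitelist.
}

output "nat_static_ip" {
  value = aws_eip.nat.public_ip
}
```

For two Availability Zones you would create one EIP and one NAT Gateway per AZ and whitelist both addresses.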
Oh, and you could consider moving your database to Amazon DocumentDB (with MongoDB Compatibility), which would reduce latency between the application servers and the database.
I would like to create a Lambda (in a VPC) which would access resources in the VPC and make requests to services (REST APIs) via a public Application Load Balancer. I found out that a VPC endpoint is a better solution than creating a NAT gateway.
I have created a VPC endpoint for elasticloadbalancing (by following the steps at https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint) and given full access in the policy. I could not find how to access it from the Lambda; what would be the URL to make the request?
Edit:
Thanks to John for the info that the VPC endpoint is used to connect to the ELB API. So a VPC endpoint would not solve our issue.
We have our infra in a VPC, which includes a database (accessible within the VPC only) and application servers running behind the ELB. For certain tasks we want to run a Lambda which will read the database (for this reason the Lambda has to be inside the VPC) and make API calls to our application using the ELB. Since the ELB is accessible from its public DNS only, the Lambda is not able to connect to the ELB.
I have read that setting up a NAT gateway is a solution. Are there other, simpler ways?
Yes, a NAT Gateway would allow the traffic from a private subnet to go out of the VPC and come back in to the Load Balancer's public IP addresses (via its Public DNS Name).
Alternatively, you could create an additional Internal Load Balancer that could accept traffic from within the VPC and send it to the Amazon EC2 instances.
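If you go the NAT Gateway route, the Lambda side of it is just a matter of which subnets the function is attached to. A hedged Terraform sketch, where the function name, role, handler, deployment package, and subnet/security-group references are all placeholders:

```hcl
# Lambda attached to private subnets whose route table points at a NAT Gateway,
# so its outbound calls can reach the load balancer's public DNS name.
resource "aws_lambda_function" "api_caller" {
  function_name = "api-caller"                 # placeholder
  role          = aws_iam_role.lambda.arn      # execution role assumed to exist
  handler       = "index.handler"
  runtime       = "python3.12"
  filename      = "lambda.zip"                 # placeholder deployment package

  vpc_config {
    subnet_ids         = [aws_subnet.private_a.id, aws_subnet.private_b.id]
    security_group_ids = [aws_security_group.lambda.id]
  }
}
```

The internal Load Balancer alternative needs no change on the Lambda at all; you would simply call the internal LB's DNS name instead of the public one.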
My utility is deployed on AWS Elastic Beanstalk and pushes some data to one of our client's DB servers. The client wants the public IP of the Beanstalk instance for whitelisting, but the problem is that Beanstalk is in autoscaling mode and spawns instances dynamically.
Is there any way to get rid of this situation?
One way is to assign a static IP from a pool and associate it at launch time via user data. I would not recommend this approach, though it seems easy to bolt onto the existing setup: all you need is to create some Elastic IPs and update the user data in the Elastic Beanstalk instance configuration.
using-features-user-data
But you may be interested in the recommended approach here:
How do I assign a static source IP address for all instances in a load balanced Elastic Beanstalk environment?
Short Description
You can use a network address translation (NAT) gateway to map multiple IP addresses into a single publicly exposed IP address. When your Elastic Beanstalk environment uses a NAT gateway, the backend instances in your environment are launched in private subnets. All outbound traffic from these instances is routed through the NAT gateway. All outbound traffic originating from your backend instances can be uniquely identified by an Elastic IP address, which is a static IP address required by the NAT gateway.
Resolution
In the following steps, your Amazon Elastic Compute Cloud (Amazon EC2) instances are launched in a private subnet that uses a NAT gateway, with an attached Elastic IP address, as a default route. The load balancer is in a public subnet, and all external traffic to and from the load balancer is routed through an internet gateway.
For the Network card, choose Modify.
For VPC, choose your VPC.
In the Load balancer settings section, for Visibility, choose Public.
In the Load balancer subnets table, choose the public subnets.
In the Instance settings section, clear Public IP address.
In the Instance subnets table, choose only private subnets with the NAT gateway that you set up earlier.
For more details, you can look into this: elastic-beanstalk-static-IP-address
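If you manage the environment as code, the same layout from the quoted console steps can be expressed with the aws:ec2:vpc option settings. A hedged Terraform sketch; the application reference, solution stack string, and subnet IDs are placeholders, not values from your environment:

```hcl
resource "aws_elastic_beanstalk_environment" "web" {
  name                = "web-env"                                        # placeholder
  application         = aws_elastic_beanstalk_application.web.name       # assumed to exist
  solution_stack_name = "64bit Amazon Linux 2023 v4.0.0 running Python 3.11" # example stack string only

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = aws_vpc.main.id
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"                    # instance subnets: private, behind the NAT gateway
    value     = aws_subnet.private_a.id
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBSubnets"                 # load balancer subnets: public
    value     = aws_subnet.public_a.id
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "AssociatePublicIpAddress"   # no public IPs on the instances
    value     = "false"
  }
}
```

The client then whitelists the NAT gateway's Elastic IP rather than any individual instance IP.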