I have an Elastic Beanstalk service (HTTP) with an Elastic IP address assigned.
I need the service to have an SSL certificate, so I created an Application Load Balancer.
Application Load Balancer (HTTPS) >> EC2 (HTTP)
Is it possible to have public static IP addresses for my HTTPS service?
No, if you are terminating SSL on your load balancer this is not possible.
It may be possible to use a Network Load Balancer (NLB) with a proxy behind it which would allow you to use static IPs, but this seems overly complicated. Why do you need static IPs?
The architecture would look like:
NLB --TCP--> Proxy Layer --TCP--> ELB(SSL) --HTTP--> Back End
The NLB layer can have static IPs.
The proxy layer (HAProxy) in an Auto Scaling group (ASG) forwards connections to the ELB.
The ELB does the SSL termination.
Finally, the back-end services run in their own ASG.
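A minimal sketch of the proxy layer's HAProxy configuration, assuming the ELB's internal DNS name below is a placeholder for your own:

```
# Hypothetical HAProxy config for the proxy layer:
# accept TCP from the NLB and forward, still as TCP, to the internal ELB.
frontend nlb_in
    bind *:443
    mode tcp
    default_backend elb_out

backend elb_out
    mode tcp
    # Placeholder DNS name -- substitute your internal ELB's name.
    # In practice you would also configure a 'resolvers' section,
    # since ELB node IPs change over time.
    server elb internal-my-elb-123456.us-east-1.elb.amazonaws.com:443 check
```

Because everything up to the ELB is plain TCP, the TLS session passes through the NLB and proxy untouched and is only terminated at the ELB.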
I'm not sure if this would be possible with Beanstalk though.
How can I restrict an Elastic Beanstalk web app using security groups? I tried allowing HTTP/HTTPS inbound rules from my IP address plus the EB load balancer's IPs, but I get a 504 Gateway Timeout error.
I got the IP addresses by looking up the network interfaces associated with the particular EB load balancer under EC2 > Network & Security > Network Interfaces.
The IP addresses of the load balancer are subject to change. You have to allow inbound connections from the security group ID of the load balancer, not from its IP addresses.
I solved it by adding the load balancer's security group as a source in the EB instances' security group, and then adding my IP to the load balancer's security group.
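For reference, the same setup can be expressed with the AWS CLI; the security group IDs and IP address below are placeholders:

```shell
# Hypothetical IDs -- replace with your own.
LB_SG=sg-0lbexample        # load balancer's security group
EB_SG=sg-0ebexample        # EB instances' security group
MY_IP=203.0.113.10/32      # your IP (documentation range used here)

# Allow the load balancer's SG to reach the instances on HTTP.
aws ec2 authorize-security-group-ingress \
    --group-id "$EB_SG" --protocol tcp --port 80 --source-group "$LB_SG"

# Allow only your IP to reach the load balancer on HTTP/HTTPS.
aws ec2 authorize-security-group-ingress \
    --group-id "$LB_SG" --protocol tcp --port 80 --cidr "$MY_IP"
aws ec2 authorize-security-group-ingress \
    --group-id "$LB_SG" --protocol tcp --port 443 --cidr "$MY_IP"
```

Using `--source-group` rather than a CIDR is what keeps the rule valid when the load balancer's IPs change.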
The goal: assign Elastic/static IPs to a load balancer (LB) serving EC2 instances that handle DNS (port 53), HTTPS (port 443), and HTTP (port 80).
Static IPs are needed to correctly configure DNS records (namely A records). TLS termination on the backend/server is needed to serve an unlimited and ever-changing number of SSL certificates, hence avoiding AWS Certificate Manager (ACM), which has limits.
A Classic Load Balancer would allow custom security rules and permit SSL termination on the EC2 instances. The problem is that static IPs cannot be assigned to a Classic LB, only to the individual instances behind it, which doesn't balance the load.
To have static IPs assigned we could use an Application Load Balancer (ALB) with Global Accelerator or a Network Load Balancer (NLB), but both seem to force TLS termination and prevent the instances from serving SSL certs.
Am I missing a slice? I don't even want to eat the cake, I want to share it around. Does anyone have a solution?
Use the Network Load Balancer. It would be configured the following way:
DNS - either a UDP or TCP listener, depending on how it's queried
HTTP - a TCP listener
HTTPS - a TCP listener
Yes, the Network Load Balancer does support a TLS listener for SSL termination, but you can use a TCP listener instead so the servers become responsible for SSL termination.
You would attach a static IP address for each Availability Zone of your Network Load Balancer.
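A sketch of that setup with the AWS CLI, where every subnet, allocation, and ARN is a placeholder; each subnet mapping pins one Elastic IP to that AZ's load balancer node:

```shell
# Placeholders -- substitute your own subnet and EIP allocation IDs.
aws elbv2 create-load-balancer \
    --name my-nlb --type network --scheme internet-facing \
    --subnet-mappings \
        SubnetId=subnet-aaaa1111,AllocationId=eipalloc-aaaa1111 \
        SubnetId=subnet-bbbb2222,AllocationId=eipalloc-bbbb2222

# Plain TCP listener on 443, so the instances terminate TLS themselves.
aws elbv2 create-listener --load-balancer-arn <nlb-arn> \
    --protocol TCP --port 443 \
    --default-actions Type=forward,TargetGroupArn=<tg-arn>
```

A UDP listener on port 53 for DNS would be created the same way with `--protocol UDP`.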
I have an ECS service running in AWS and I am going to create an Application Load Balancer for it. I have read through this doc: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html, but what I don't quite understand is how to specify an entry endpoint IP address for my load balancer. This IP address will be used by clients to send requests to my service. Based on my understanding, the IP should be configured on the load balancer, not in my ECS service's task.
Using an IP address to connect to an Elastic Load Balancer is a bad idea. ELBs are elastic, which means there are multiple nodes behind a single load balancer for fault tolerance. That's the reason AWS recommends using the hostname instead of an IP address.
If you still want to test connectivity using a load balancer IP address, you can try the nslookup command:
nslookup yourELBPublicDNS
This will give you multiple addresses back, and you can try to hit one. But keep in mind that those IP addresses may change. The reason is simple: if the underlying host for the load balancer fails, it will be replaced by a new one, which will most likely have a new IP. What remains constant is the domain name, so using the hostname is recommended.
As mentioned in the other answer, an IP is a bad idea, but not if it's a static IP. An NLB supports static IPs, while an Application Load Balancer does not.
If you are looking for a static IP, you need to place a Network Load Balancer in front of the Application Load Balancer: the ALB communicates with the backend ECS services, while the NLB faces the client. The client will be able to connect using the static IP of the NLB, which will not change.
For each Availability Zone, the NLB has a static IP; you can check the integration details here.
If you are looking to allow only specific IPs to use your endpoint, then you need AWS WAF (web application firewall).
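As an illustration, an IP allow-list in AWS WAF (v2) starts from an IP set; the name and address range below are placeholders:

```shell
# Create an IP set holding the client IPs you want to allow.
aws wafv2 create-ip-set \
    --name allowed-clients --scope REGIONAL \
    --ip-address-version IPV4 \
    --addresses 203.0.113.0/24
# The IP set is then referenced from a rule in a web ACL, and the
# web ACL is associated with the Application Load Balancer.
```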
I am trying to create a load balancer on GCP that will route HTTP and HTTPS traffic to my single instance (I'm just testing things out, so I have a single instance that serves HTTP traffic).
My instance will serve many domains, and these domains are not owned by me but by my clients. I will simply manage the Let's Encrypt SSL certificates for those domains. They will point their domains at my service via a DNS record such as service.example.com.
Can I still use GCP load balancers for HTTPS traffic with the above considerations? I essentially need the load balancer to pass all SSL traffic through to my instances.
I can't seem to figure out how to create a load balancer that will pass SSL traffic to my instances. Is this possible?
If your goal is to create a load balancer that passes HTTPS (and HTTP) traffic straight through to your backend instance(s), use the TCP Load Balancer.
Step 1. Create a "regional" static IP address before creating the load balancer. Create the IP address in the same region as your instance.
Step 2: Create a TCP Load Balancer. I will skip the minor details that are obvious.
Backend configuration:
Select Single region only. This will allow you to bypass having instance groups.
Select existing instances -> Select your vm.
Frontend configuration:
Protocol TCP. IP: select the static IP address that you created. Port: 80. Click Done.
Add another frontend. Protocol TCP. IP: same IP address. Port: 443. Click Done.
Once you create the load balancer, wait 5 or 10 minutes for everything to configure and startup.
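The console steps above roughly correspond to the following gcloud commands; the names, region, and zone are placeholders, and this uses a target instance as one way to get an equivalent passthrough setup without an instance group:

```shell
# 1. Regional static IP in the same region as the instance.
gcloud compute addresses create lb-ip --region us-central1

# 2. Target instance wrapping the single VM (no instance group needed).
gcloud compute target-instances create my-target \
    --instance my-vm --zone us-central1-a

# 3. One forwarding rule per port, both on the same static IP.
gcloud compute forwarding-rules create lb-http \
    --region us-central1 --address lb-ip \
    --ip-protocol TCP --ports 80 \
    --target-instance my-target --target-instance-zone us-central1-a
gcloud compute forwarding-rules create lb-https \
    --region us-central1 --address lb-ip \
    --ip-protocol TCP --ports 443 \
    --target-instance my-target --target-instance-zone us-central1-a
```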
Now your HTTP and HTTPS traffic will be passed directly to your backend instance(s). Note that this configuration does not use autoscaling, managed instance groups, health checks, etc.
You will manage your SSL certificates on your backend instance(s) (your Compute Engine VMs). The load balancer just passes traffic through with no SSL offload.
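On the backend, serving many domains' certificates from one instance typically relies on TLS SNI. A minimal Python sketch, assuming the standard Let's Encrypt directory layout (the paths are an assumption; a web server like nginx would normally do this for you):

```python
import os
import ssl

# Assumed layout, mirroring Let's Encrypt's default directory structure.
CERT_ROOT = "/etc/letsencrypt/live"

def pick_cert(ssl_socket, server_name, initial_context):
    """SNI callback: swap in the per-domain certificate if one exists on disk."""
    if server_name is None:
        return None  # client sent no SNI; keep the default certificate
    chain = os.path.join(CERT_ROOT, server_name, "fullchain.pem")
    key = os.path.join(CERT_ROOT, server_name, "privkey.pem")
    if os.path.exists(chain) and os.path.exists(key):
        domain_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        domain_ctx.load_cert_chain(chain, key)
        ssl_socket.context = domain_ctx  # use this cert for the handshake
    return None  # returning None signals success to the TLS layer

def make_sni_context(default_cert=None, default_key=None):
    """Build a server-side SSLContext that selects certificates per SNI hostname."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if default_cert and default_key:
        ctx.load_cert_chain(default_cert, default_key)
    ctx.sni_callback = pick_cert
    return ctx
```

The key point is that this only works because the load balancer passes the TLS handshake through untouched; with SSL offload at the LB, the SNI hostname never reaches your instance.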
I have a load balancer created in my VPC with two subnets. Now I want to open a firewall rule from within my company intranet. I have no control over this firewall, and to open a rule the firewall team accepts only IP addresses, not DNS names.
Since the IP address of the load balancer keeps changing, I can't give it to the firewall team. That's where I am stuck.
How can I open a firewall rule to an AWS load balancer from within my intranet?
You are correct that an Application Load Balancer does not provide static IP addresses.
You might be able to change to a Network Load Balancer:
Elastic Load Balancing creates a network interface for each Availability Zone you enable. Each load balancer node in the Availability Zone uses this network interface to get a static IP address. When you create an Internet-facing load balancer, you can optionally associate one Elastic IP address per subnet.
It is also possible to put a Network Load Balancer in front of an Application Load Balancer to gain the benefits of both.
See: Using static IP addresses for Application Load Balancers | AWS Networking & Content Delivery Blog
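An NLB can forward to an ALB directly through an ALB-type target group. A CLI sketch of that combination, where every ARN and ID is a placeholder:

```shell
# Target group whose single target is the ALB itself.
aws elbv2 create-target-group \
    --name alb-tg --target-type alb \
    --protocol TCP --port 443 --vpc-id vpc-0example

aws elbv2 register-targets \
    --target-group-arn <alb-tg-arn> \
    --targets Id=<application-load-balancer-arn>

# TCP listener on the NLB forwarding to that target group.
aws elbv2 create-listener --load-balancer-arn <nlb-arn> \
    --protocol TCP --port 443 \
    --default-actions Type=forward,TargetGroupArn=<alb-tg-arn>
```

Clients then hit the NLB's Elastic IPs, while the ALB behind it still provides HTTP routing rules.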