I'm using HAProxy for load balancing Tomcat applications. Since we moved to AWS, I want to use a managed load balancing service (Network Load Balancer) instead of an HAProxy EC2 instance.
Everything works except for two Tomcat microservices which both listen on port 8080. In HAProxy this was simply a matter of setting path_beg (like below), but with ELB I'm not able to find a way to put both services on port 8080 behind the same load balancer.
frontend app
    bind *:8080
    acl tool_tomcat path_beg /tool
    use_backend tool_app_backend if tool_tomcat
    acl approval_tomcat path_beg /approval
    use_backend apr_app_backend if approval_tomcat
Network Load Balancer operates at Layer 4 and is not aware of HTTP paths. What you want is the Application Load Balancer, which operates at Layer 7 and supports path-based routing on its listeners.
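For reference, the two HAProxy ACLs above map directly onto ALB listener rules. A minimal boto3 sketch, assuming the ALB listener and one target group per Tomcat service already exist (all ARNs below are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/..."  # placeholder

# Equivalent of: acl tool_tomcat path_beg /tool
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/tool*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": "arn:...:targetgroup/tool-tg/..."}],
)

# Equivalent of: acl approval_tomcat path_beg /approval
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/approval*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": "arn:...:targetgroup/approval-tg/..."}],
)

Both target groups can register the same instances on port 8080; the ALB then picks the backend from the request path, much like path_beg did.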
I would like to expose the endpoint of a tool that uses port 8545, through AWS Route 53, an Application Load Balancer and ECS Fargate. I've created a Dockerfile with the following:
FROM trufflesuit/ganache-cli:latest
EXPOSE 8546
CMD ["--fork", "https://Infura_node_URL"]
For the target group, I've been using protocol HTTP, port 8546;
For the Application Load Balancer, I've set HTTP:80 to be redirected to 443;
For the ECS task definition, I've set the container port as 8545.
When I run the script that connects to this container, an error occurs:
Error: Connection refused or URL couldn't be resolved: https://Infura_node_URL
If I browse to the Route 53 URL I've configured, it keeps loading until it eventually times out.
I am relatively new to networking, but I believe there might be something wrong with the protocol or the port I've set. Can someone please help?
*If I run this Docker container locally, http://localhost:8546 shows '400 Bad Request', which is the expected response.
The problem here is that the Fargate service is not allowing traffic from the load balancer. Make sure to add a rule to the Fargate service's security group allowing HTTP traffic from the ALB's security group; the source of the rule will be the ALB's security group ID in this case.
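A minimal boto3 sketch of that rule, where both security group IDs are placeholders and the port should match whatever port your target group actually uses:

import boto3

ec2 = boto3.client("ec2")

# Allow the ALB's security group to reach the Fargate tasks
# on the target group port (8546 in this example).
ec2.authorize_security_group_ingress(
    GroupId="sg-0fargate1234567890",   # placeholder: Fargate service SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8546,
        "ToPort": 8546,
        "UserIdGroupPairs": [{"GroupId": "sg-0alb123456789abcd"}],  # placeholder: ALB SG
    }],
)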
First, let me say that this is the first time I have written an ASP.NET Core 3.1 web app and first time learning AWS with Elastic Beanstalk. So if it seems like I'm confused... it's because I am. ;-)
I have two AWS environments - one is Staging and one is Production. The Staging environment has no SSL certificate and no load balancer. It only listens on port 80.
Production has a load balancer set up with my SSL certificate, and is set up to redirect all port 80 traffic to port 443.
Port 80 = Redirect to https://#{host}:443/#{path}?#{query} (Status code: HTTP_301)
Port 443 = Forward to my-target-group: 1 (100%) (Group-level stickiness: Off)
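For reference, that port 80 redirect rule can also be created programmatically; a rough boto3 sketch, with a placeholder load balancer ARN:

import boto3

elbv2 = boto3.client("elbv2")

# HTTP:80 listener whose default action 301-redirects to HTTPS:443
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-lb/...",  # placeholder
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",
        },
    }],
)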
When I generated the new web app in VS 2019, I opted in to HTTPS/HSTS by checking "Configure for HTTPS". So it has this in Startup.cs:
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Home/Error");
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}

app.UseHttpsRedirection();
I am getting this error in my Windows event log in Staging and Production: “Failed to determine the https port for redirect”
I tried the suggestion from Enforce HTTPS in ASP.NET Core:
services.AddHttpsRedirection(options =>
{
    options.HttpsPort = 443;
});
But that messed up the Staging environment because there's nothing listening on port 443.
Since Staging is only using HTTP, and Production is redirecting to HTTPS at the load balancer, should I just remove UseHsts() and UseHttpsRedirection() altogether from my Startup? Will that pose any security problems? I do want traffic encrypted over the internet, but I don't think it's necessary between the load balancer and the EC2 instance, correct?
Or do I need Forwarded headers, as suggested at Configure ASP.NET Core to work with proxy servers and load balancers?
I do want traffic encrypted over the internet but I don't think it's necessary between the load balancer and the EC2 instance, correct?
Correct. That's how it is usually set up. You would have SSL termination on your load balancer (LB), and from the LB to your instances it would be regular HTTP traffic:
Client----(https)---->LB----(http)---->instances
does my app still need UseHttpsRedirection() and UseHsts()?
No, as your app is receiving only HTTP traffic from the LB.
I have an ELB with multiple EC2 instances registered in target groups. I have a PHP application which is running properly, and it has SSL.
I want to use port 8000 for my Node.js application. What I would like to do is forward my-elb-address:8000 to any-ec2-ip:8000, so when I access the domain attached to the ELB with port 8000 it forwards the request to an EC2 instance on port 8000. How can I accomplish this? Is there any other way of having the ELB listen on and forward multiple ports?
I have added listeners for ports 80, 443 and 8000 to my ELB. Please help.
Classic ELB
Using the "classic" ELB you can define custom rules for forwarding the ports in the AWS dashboard:
Mind that requests will be forwarded to all registered instances, which means (supposing PHP is running on port 80 and Node.js on port 8000) every instance must have both services running. If the services instead run on different instances, you will need two different load balancers, one per port.
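If you'd rather script the listener mapping than click through the dashboard, a minimal boto3 sketch (the load balancer name is a placeholder):

import boto3

elb = boto3.client("elb")  # the classic ELB API

# Forward LB port 80 -> instance port 80, and LB port 8000 -> instance port 8000
elb.create_load_balancer_listeners(
    LoadBalancerName="my-classic-elb",  # placeholder
    Listeners=[
        {"Protocol": "HTTP", "LoadBalancerPort": 80,
         "InstanceProtocol": "HTTP", "InstancePort": 80},
        {"Protocol": "HTTP", "LoadBalancerPort": 8000,
         "InstanceProtocol": "HTTP", "InstancePort": 8000},
    ],
)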
Application ELB
Another option is to use an "application" ELB (ALB).
This option gives you a single load balancer with fine-grained rules that, for each listener, forward requests to a specific set of instances:
create a "default" ALB
add a new target group (see entry under the Load Balancing section in the sidebar) listening on your custom port
register the instances running your node.js application (right click on the target group)
bind the target group to the listeners of your ALB
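A rough boto3 equivalent of steps 2-4, assuming the ALB and VPC from step 1 already exist (all names, IDs and ARNs are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Step 2: create a target group for the Node.js port in your VPC
tg = elbv2.create_target_group(
    Name="nodejs-tg",                # placeholder
    Protocol="HTTP",
    Port=8000,
    VpcId="vpc-0123456789abcdef0",   # placeholder
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Step 3: register the instances running the Node.js application
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],  # placeholders
)

# Step 4: bind the target group to a new listener on the ALB
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...",  # placeholder
    Protocol="HTTP",
    Port=8000,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)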
Another solution could be to use only one port (443) and, with path-based rules, forward only the requests under /to_nodejs to port 8000.
I am using the Google Cloud Platform to implement a REST API which is accessible through HTTPS only, using a load balancer.
My setup looks like this:
VM instances:
2 instances which run the same Node.js server. One outputs "server1", the other outputs "server2".
Instance groups:
One instance group which contains both VMs.
Back-end services:
One back-end service which uses the instance groups and a simple health check.
Load balancing:
One load balancer.
Frontend: HTTPS PUBLIC_IP:443 my-ssl-certificate
Backend: My back-end service
Host and path rules: All unmatched (default) => My back-end service (default)
I configured my domain's (api.domain.com) DNS with an A record pointing to PUBLIC_IP. The output of https://api.domain.com successfully switches between "server1" and "server2". The load balancer and the HTTPS certificate my-ssl-certificate are working great! my-ssl-certificate is a Let's Encrypt certificate for my domain api.domain.com.
Question: Do I need two more certificates for my two VM instances for their communication with the load balancer? Or is this communication internal and therefore requires no further SSL certificates? If I need those certificates, how do I set them up with IPs?
I ask because accessing my VM instances directly via https://VM1_PUBLIC_IP results in a Chrome warning that the certificate is not valid.
If you are using a load balancer with SSL certificates, then there is no need for public-facing VMs. You should keep them in private subnets, and communication between the LB and the VMs should happen over private IPs. SSL terminates at the load balancer, so the backends don't need certificates of their own.
I created an AWS Elastic Beanstalk environment, which comes with a default URL (my-env.something.ap-south-1.elasticbeanstalk.com) pointing to the load balancer on port 80. This is served by the default Apache that runs on the instance, I suppose.
On the instances, I also have Nginx running, listening on port 8001 (for my Django + Gunicorn app). When I use the above URL with port 8001 (http://my-env.something.ap-south-1.elasticbeanstalk.com:8001) in the browser, Nginx never gets the request. If I use the public IP of an instance instead, it works fine.
Is what I am trying to do even supported? Can the load balancer URL go to any port on the EC2 instances? Or do I need to create a new load balancer pointing to 8001 and use that instead? And how would I tell my Beanstalk configuration to use both load balancers?
Just added a new listener to the existing load balancer (from the EC2 management console), with listener port 8001 and instance port 8001. Also made sure the security groups of the load balancer and the instances allow traffic on that port.
The load balancer URL now works with both the default HTTP port and 8001.