I created an AWS Elastic Beanstalk environment, which comes with the default URL (my-env.something.ap-south-1.elasticbeanstalk.com) pointing to the load balancer on port 80. I suppose this is served by the default Apache that runs on the instance.
On the instances I also have Nginx running, listening on port 8001 (for my Django+Gunicorn app). When I use the above URL with port 8001 (http://my-env.something.ap-south-1.elasticbeanstalk.com:8001) in the browser, Nginx never gets the request. If I use the public IP of an instance instead, it works fine.
Is what I am trying to do even supported? Can the load balancer URL reach an arbitrary port on the EC2 instances? Or do I need to create a new load balancer pointing to 8001 and use that instead? And how would I then tell my Beanstalk configuration to use both load balancers?
I just added a new listener to the existing load balancer (from the EC2 management console), with listener port 8001 and instance port 8001. I also made sure the security groups of the load balancer and the instances allow this traffic.
The load balancer URL now works on both the default HTTP port and 8001.
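For reference, the same listener can also be added from the AWS CLI. This is a minimal sketch assuming the default Classic Load Balancer that Beanstalk created, with a placeholder load balancer name (my-elb-name):

# Forward load balancer port 8001 to instance port 8001 over HTTP
aws elb create-load-balancer-listeners \
    --load-balancer-name my-elb-name \
    --listeners "Protocol=HTTP,LoadBalancerPort=8001,InstanceProtocol=HTTP,InstancePort=8001"

The security groups still have to allow inbound traffic on 8001 to the load balancer, and from the load balancer to the instances, as noted above.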
I have created an Ubuntu EC2 instance and a load balancer that points to that EC2 instance. The listener rules on the load balancer look OK (ports 80 and 443). I can access the EC2 instance's Apache2 HTTPD server in a browser using the EC2 IP address and domain name (only port 80 is working, no HTTPS).
The inbound rules for the security group look OK, i.e. port 80 and port 443.
The health check is checking the server every 30 seconds, and is showing as healthy every time.
The main problem is that when I try to connect to the webserver in a browser using the DNS name for the load balancer, the page times out, and I do not see the request hit the Apache2 server logs. However, I can connect when using the EC2 instance domain name, and I also see the request hitting the Apache2 server logs.
Has anyone else had this issue with the load balancer DNS name not reaching the EC2 instance?
Many thanks,
Martin
EDIT: This was resolved by setting the correct security group.
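For anyone hitting the same timeout: the usual cause is that the instance's security group does not allow traffic from the load balancer's security group. A hedged sketch of the rules with the AWS CLI, using placeholder group IDs (sg-lb-example for the load balancer, sg-web-example for the instance):

# Let the internet reach the load balancer on 80 and 443
aws ec2 authorize-security-group-ingress --group-id sg-lb-example --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-lb-example --protocol tcp --port 443 --cidr 0.0.0.0/0
# Let the load balancer's security group reach the instance on 80
aws ec2 authorize-security-group-ingress --group-id sg-web-example --protocol tcp --port 80 --source-group sg-lb-example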
First, let me say that this is the first time I have written an ASP.NET Core 3.1 web app and first time learning AWS with Elastic Beanstalk. So if it seems like I'm confused... it's because I am. ;-)
I have two AWS environments - one is Staging and one is Production. The Staging environment has no SSL certificate and no load balancer. It only listens on port 80.
Production has a load balancer set up with my SSL certificate, and is set up to redirect all port 80 traffic to port 443.
Port 80 = Redirect to https://#{host}:443/#{path}?#{query}
Status code: HTTP_301
Port 443 = Forward to my-target-group: 1 (100%)
Group-level stickiness: Off
When I generated the new web app in VS 2019, I opted in to HTTPS/HSTS by checking "Configure for HTTPS". So it has this in Startup.cs:
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Home/Error");
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}

app.UseHttpsRedirection();
I am getting this error in my Windows event log in Staging and Production: “Failed to determine the https port for redirect”
I tried the suggestion from Enforce HTTPS in ASP.NET Core:
services.AddHttpsRedirection(options =>
{
    options.HttpsPort = 443;
});
But that messed up the Staging environment because there's nothing listening on port 443.
Since Staging only uses HTTP, and Production redirects to HTTPS at the load balancer, should I just remove UseHsts() and UseHttpsRedirection() from my Startup altogether? Would that pose any security problems? I do want traffic encrypted over the internet, but I don't think it's necessary between the load balancer and the EC2 instance, correct?
Or do I need Forwarded headers, as suggested at Configure ASP.NET Core to work with proxy servers and load balancers?
I do want traffic encrypted over the internet but I don't think it's necessary between the load balancer and the EC2 instance, correct?
Correct. That's how it is usually set up. You would typically have SSL termination at your load balancer (LB), and from the LB to your instances it would be plain HTTP traffic:
Client----(https)---->LB----(http)---->instances
does my app still need UseHttpsRedirection() and UseHsts()?
No, as your app only receives HTTP traffic from the LB.
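If the app still needs to know that the original request arrived over HTTPS (for generating absolute links, redirects, and so on), the forwarded-headers middleware mentioned in the question is the usual approach. A minimal sketch, assuming the ALB sends the standard X-Forwarded-For / X-Forwarded-Proto headers (it does by default):

using Microsoft.AspNetCore.HttpOverrides;

// In Startup.ConfigureServices: trust the forwarded headers from the load balancer.
// KnownNetworks/KnownProxies are cleared because the ALB's private address is not known in advance.
services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});

// In Startup.Configure, early in the pipeline:
app.UseForwardedHeaders();
// No app.UseHttpsRedirection() or app.UseHsts() here; the load balancer already
// redirects port 80 to 443 in Production.

This way Request.Scheme reflects what the client used, even though the traffic between the LB and Kestrel is plain HTTP.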
I have my server application deployed on AWS with Elastic Beanstalk.
I'm using Beanstalk with an Application Load Balancer.
Beanstalk is very handy in auto-configuring everything for me and I like using it. However, every Beanstalk instance currently runs NGINX to proxy requests, and since I already have a load balancer that forwards requests to my server and handles the SSL certificates, I don't see why I need NGINX. I want to remove it from the configuration (or at least not use it between the load balancer and the application server).
Moreover, during load testing under high load, NGINX causes me trouble (it takes a lot of CPU time and complains about worker_connections).
But I can't find any option to use Beanstalk with a load balancer and without NGINX.
I fixed my problem by configuring the load balancer in my Elastic Beanstalk environment. My Java application was listening on port 5000, NGINX proxied port 80 to 5000, and the load balancer sent all requests to port 80.
So I had the following configuration by default:
LB->80:NGINX->5000:Java server
I changed the port in the load balancer's Processes settings from 80 to 5000, so the configuration now looks like this: LB->5000:Java server, and the load balancer forwards all requests directly to my service.
You can see the configuration details in the documentation, under the Processes section.
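For completeness, the same change can be scripted instead of done in the console. A hedged sketch with the AWS CLI, assuming the environment is named my-env and uses the default process:

# Point the default process (and thus the ALB target group) at port 5000 instead of 80
aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --option-settings Namespace=aws:elasticbeanstalk:environment:process:default,OptionName=Port,Value=5000

Note that NGINX is still installed on the instances after this; requests simply bypass it because the target group now sends traffic straight to port 5000.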
I have an EC2 cluster with just one EC2 instance, where two services are running:
api1, listening at port 8080
api2, listening at port 9090
If I make requests against the EC2 instance on those ports, both APIs work fine.
Now, I want to create a load balancer so I can make requests against http://{load_balancer_ip}/api1 and http://{load_balancer_ip}/api2, but I'm not able to.
I have created two target groups, both with just one instance (the only one I have):
TargetGroup1: Port 8080 and the EC2 instance registered on port 8080
TargetGroup2: Port 9090 and the EC2 instance registered on port 9090
Then, I have created a load balancer with one listener on port 80 and these two path rules:
When /api1, forward to TargetGroup1
When /api2, forward to TargetGroup2
When I make requests against http://{load_balancer_ip}/api1 or http://{load_balancer_ip}/api2 nothing happens; I don't get any response.
What am I missing?
Ok, I found what's happening thanks to this question's first comment:
AWS Application Load Balancer (ALB) path based routing not functioning as expected
The load balancer is not rewriting the URL: my APIs are listening at /, but the load balancer forwards the whole path, including the /api1 prefix.
Solved!
(I couldn't mark it as a duplicate because the question above does not have an accepted answer.)
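For completeness, this is roughly what such a forwarding rule looks like from the CLI (ARNs are placeholders; the rule for /api2 is analogous). Note that a path pattern of /api1 matches only that exact path, so a wildcard like /api1* is needed for subpaths, and the backend still receives the full path including the /api1 prefix:

aws elbv2 create-rule \
    --listener-arn <listener-arn> \
    --priority 10 \
    --conditions Field=path-pattern,Values='/api1*' \
    --actions Type=forward,TargetGroupArn=<TargetGroup1-arn>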
I'm using HAProxy for load balancing Tomcat applications. Since we moved to AWS, I want to use a managed load balancing service (Network Load Balancer) instead of an HAProxy EC2 instance.
Everything works except for two Tomcat microservices that both listen on port 8080. In HAProxy it was as simple as setting path_beg (like below), but with ELB I'm not able to find a way to put both services, on port 8080, behind the same load balancer.
frontend app *:8080
    acl tool_tomcat path_beg /tool
    use_backend tool_app_backend if tool_tomcat
    acl approval_tomcat path_beg /approval
    use_backend apr_app_backend if approval_tomcat
A Network Load Balancer operates at layer 4 and is not aware of paths. What you want is an Application Load Balancer, which operates at layer 7 and does support path-based routing on its listeners.
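A rough ALB equivalent of the two path_beg ACLs, sketched with the CLI under the assumption of one target group per Tomcat service, both registered on port 8080 (listener and target group ARNs are placeholders):

# /tool* -> tool backend (equivalent of the tool_tomcat ACL)
aws elbv2 create-rule \
    --listener-arn <alb-listener-arn> \
    --priority 10 \
    --conditions Field=path-pattern,Values='/tool*' \
    --actions Type=forward,TargetGroupArn=<tool-target-group-arn>

# /approval* -> approval backend (equivalent of the approval_tomcat ACL)
aws elbv2 create-rule \
    --listener-arn <alb-listener-arn> \
    --priority 20 \
    --conditions Field=path-pattern,Values='/approval*' \
    --actions Type=forward,TargetGroupArn=<approval-target-group-arn>

As with path_beg, the ALB only matches on the path and does not rewrite it, so the Tomcat apps still see the /tool and /approval prefixes on incoming requests.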