I am working on a POC to prove out AWS path-based routing through an Application Load Balancer to a set of very basic "hello world" Node.js applications using Express. Without path-based routing in place, using multiple listeners (one listener per application), each listener and application works as expected, and the targets within the target groups have passed health checks and show as healthy. However, when I switch to the path-based routing implementation on one of the listeners (deleting the other, now unnecessary, listener), I get the following error for both applications:
Cannot GET /expressapp
Cannot GET /expressapp2
I have gone through the following documentation to try to figure out the issue:
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#path-conditions
What am I missing? Any troubleshooting ideas?
I believe that you are getting this error because the services in question do not expect to receive paths prefixed with /expressapp and /expressapp2. When the ALB forwards traffic to your service, the path remains intact.
The ALB cannot strip the prefix off for you. If you don't have access to the source code of the apps, you will need to use some kind of reverse proxy, like nginx, to rewrite the URLs before passing requests on to the app.
If you do have access to the source code, Express supports changing the base URL without hard-coding it: read the URL prefix in as an environment variable and configure each service's environment accordingly.
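A minimal sketch of that approach, assuming Express 4 (BASE_PATH is a hypothetical variable name; set it to /expressapp or /expressapp2 in each service's environment):

// app.js: mount all routes under a prefix read from the environment
const express = require("express");

const app = express();
const router = express.Router();

router.get("/", (req, res) => {
  res.send("hello world");
});

// With BASE_PATH=/expressapp, GET /expressapp now resolves instead of 404ing
app.use(process.env.BASE_PATH || "/", router);

app.listen(3000);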
I would flip the two rules, i.e. make the expressapp2 rule #1 and the expressapp rule #2, for it to work like you want it to.
The ALB evaluates rules in priority order, and a request path of /expressapp2 still matches a /expressapp* pattern, so the expressapp rule fires first unless the more specific rule is given higher priority.
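If you'd rather script the reordering than drag rules around in the console, a sketch using the AWS SDK for JavaScript v3 (the rule ARNs are placeholders you'd look up first, e.g. with DescribeRules):

// reorder-rules.js: give the more specific /expressapp2* rule the higher priority
const {
  ElasticLoadBalancingV2Client,
  SetRulePrioritiesCommand,
} = require("@aws-sdk/client-elastic-load-balancing-v2");

const client = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

// Lower priority numbers are evaluated first
client.send(new SetRulePrioritiesCommand({
  RulePriorities: [
    { RuleArn: "arn:aws:elasticloadbalancing:<region>:<account>:rule/app/<expressapp2-rule>", Priority: 1 },
    { RuleArn: "arn:aws:elasticloadbalancing:<region>:<account>:rule/app/<expressapp-rule>", Priority: 2 },
  ],
})).catch(console.error);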
I'm looking for guidance on the proper tools/tech to accomplish what I assume is a fairly common need.
Suppose there exists a web service, https://www.ExampleSaasWebService.com/, and customers can add vanity domains/subdomains to white-label or resell the service under their own domain name. There needs to be a reverse proxy that terminates TLS for the vanity domains and routes the traffic to the statically defined (HTTPS) back-end service on the original, non-vanity domain. There is essentially one "back-end" server, somewhere else on the internet rather than on the local network, that accepts all incoming traffic no matter the incoming domain. Essentially:
"Customer A" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from example.customerA.com.
"Customer B" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from customerB.com and www.customerB.com.
etc...
I (surprisingly) haven't found anything that does this out of the box, but looking at Traefik (2.x) I'm seeing some promising capabilities, and it seems like the most capable tool to accomplish this, primarily because of the Let's Encrypt integration and the ability to reconfigure without restarting the service.
I initially considered AWS's native certificate management and load balancing, but I see there is a limit of ~25 certificates per load balancer, which seems like a non-starter; presumably there could be thousands of vanity domains in place at any time.
Some of my Traefik specific questions:
Am I correct in understanding that you can get away without provisioning an explicit list of vanity domains to produce TLS certificates for in the config files? That is, they can be determined on the fly and provisioned from Let's Encrypt based on the SNI of the incoming requests?
E.g. If a request comes to www.customerZ.com and there is not yet a certificate for that domain name, one can be generated on the fly?
I found this note on the OnDemand flag in the v1.6 docs, but I'm struggling to find the equivalent documentation in the (2.x) docs.
Using AWS services, how can I easily share "state" (config/dynamic certificates that have already been created) between multiple servers to share the load? My initial thought was EFS, but it appears EFS may not work here because Traefik depends on file-change watch notifications, which do not work on NFS-mounted file systems?
It seemed like it would make sense to provision an AWS NLB (with a static IP and an associated DNS record) that delivered requests to a fleet of 1 or more of these Traefik proxies with a universal configuration/state that was safely persisted and kept in sync.
Like I mentioned above, this seems like a common/generic need. Is there a configuration file sample or project that might be a good starting point that I overlooked? I'm brand new to Traefik.
When routing requests to the back-end service, will the original Host name still be identifiable somewhere in the headers? I assume it can't remain in the Host header, as the back-end receives requests addressed to an HTTPS hostname as well.
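A sketch of what I'd expect the back-end to do, assuming the proxy passes the original host in the conventional X-Forwarded-Host header (Express-style, purely illustrative):

// On the back-end: prefer the forwarded vanity host when present
const express = require("express");
const app = express();

app.get("*", (req, res) => {
  const vanityHost = req.headers["x-forwarded-host"] || req.headers.host;
  res.send(`Serving white-label content for ${vanityHost}`);
});

app.listen(8443);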
I will continue to experiment and post any findings back here, but I'm sure someone has setup something like this already -- so just looking to not reinvent the wheel.
I managed to do this with Caddy. It's very important that you configure the ask, interval, and burst options to avoid possible DDoS attacks.
Here's a simple reverse proxy example:
# https://caddyserver.com/docs/caddyfile/options#on-demand-tls
{
    # General Options
    debug
    on_demand_tls {
        # will check for "?domain=" return 200 if domain is allowed to request TLS
        ask "http://localhost:5000/ask/"
        interval 300s
        burst 1
    }
}

# TODO: use env vars for domain name? https://caddyserver.com/docs/caddyfile-tutorial#environment-variables
qrepes.app {
    reverse_proxy localhost:5000
}

:443 {
    reverse_proxy localhost:5000
    tls {
        on_demand
    }
}
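The ask URL above points at an endpoint you have to provide: Caddy calls it with ?domain=<hostname> and only issues a certificate if it gets a 200 back. A minimal sketch of such an endpoint in Express (the allow-list is illustrative; in practice it would be a lookup against your customer records):

// ask-server.js: approves or rejects on-demand certificate issuance
const express = require("express");
const app = express();

// Illustrative allow-list; really a database of customer vanity domains
const allowed = new Set([
  "example.customerA.com",
  "customerB.com",
  "www.customerB.com",
]);

app.get("/ask/", (req, res) => {
  // Caddy sends the candidate hostname as ?domain=
  res.sendStatus(allowed.has(req.query.domain) ? 200 : 403);
});

app.listen(5000);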
I have a fairly comprehensive application load balancer set up that routes based on host name.
However, I'm trying to introduce the following rules but can't get the path routing to work.
I have them in this order:
licence.example.com/api -> Target Group B
licence.example.com -> Target Group A
What I'm seeing is everything is routed to Target Group A.
I have Rule 1 set to host licence.example.com, path: /api/*
And Rule 2 set to host licence.example.com
I've tried changing the order by swapping them around, and I've tried adding a path of /* to rule 2, but it doesn't work.
Is the AWS load balancer not capable of this most basic configuration?
Am I going to have to throw it out and use nginx?
Two problems.
The order displayed in the UI is important. Rules higher up the list have higher priority, so first I had to ensure the rule with the path condition was evaluated first.
Requests to /api/* come through to the application with the /api/ prefix still included; there is no rewrite, as there is with nginx, that would strip it off. So the fix was to make a small change to the app behind Target Group B to expect the /api/ path. I made this a config value (see the sketch below) and then it all worked.
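A minimal sketch of that change, assuming an Express app (API_BASE is a hypothetical config name; /api behind the ALB, / in local development):

// Target Group B app: mount the API router under a configurable base path
const express = require("express");
const app = express();
const api = express.Router();

api.get("/status", (req, res) => res.json({ ok: true }));

app.use(process.env.API_BASE || "/api", api);

app.listen(8080);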
My client wants to move to a load-balanced ColdFusion environment for better availability and scalability of the site. I know how to set up clusters and instances in the ColdFusion Administrator. We should also use J2EE session management for sticky sessions.
But I am not sure what other code-level changes are required when moving from a single server to a load-balanced environment.
Does anyone with experience have suggestions, or any helpful links?
Skipping the session-scope issues you're bound to enjoy, I'll focus on less common code-level strategies.
You will have 2+ isolated application scopes, which creates synchronization challenges. Examine the app code for writes to the application scope: should some condition require updating an app-scoped value, that change must be reflected in all sibling application scopes.
Know that each instance will have its own onApplicationStart() & onApplicationEnd() events. Depending on what happens in the code, it could cause mischief.
Be aware of things like FuseBox (framework) when load balancing. FuseBox generates files locally that are not replicated on other server instances.
When logging, emailing errors, etc., use an instance identifier so you'll know which server you're working with.
Should your app need the originating IP address of a request, you may need to enable X-Forwarded-For HTTP headers within your load balancer. Otherwise, you could get the IP of the load balancer on every request.
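(The X-Forwarded-For mechanics are the same whatever the stack; as a quick illustration in Node.js terms, the originating client is the first entry in the comma-separated list:)

// X-Forwarded-For arrives as "client, proxy1, proxy2"; take the first entry
function clientIp(req) {
  const xff = req.headers["x-forwarded-for"];
  return xff ? xff.split(",")[0].trim() : req.socket.remoteAddress;
}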
Verify Identical on EACH Instance:
Security implementations
ColdFusion & Java versions
Datasources
Mappings
Virtual directories
Shared resource locations, etc.
CF Admin settings: site-wide error handling, etc.
CF account privileges (important!)
Consider using the ColdFusion Server Manager to assist consistification. ;)
I'm currently deploying a WSO2 AS cluster and am facing a strange problem with URL mapping.
I have set up two worker nodes (named was0 and was1), a manager node (named mgt) and an ELB (named elb).
The installation seems to be working fine, as I'm able to call URLs mapped on the load balancer like the following: http://was0.domain/services/..., was0.domain being mapped to the load balancer IP on the station accessing this address (outside the cluster).
When I call services on this endpoint, load balancing works, as I can see that my WSDL has endpoints based on was0 and was1. The two worker nodes are correctly detected as application nodes on the ELB.
The problem I encounter, however, is that when I use the was0-based URL it works fine, but when I try to use the was1-based one, the load balancer returns a blank page, and I don't see any error in the logs. I have both hosts, was1 and was0, defined in my cluster configuration as application members for AS.
If I try, from the ELB node, to access the was1-based web service directly on the AS, I can reach it without problem (so the service is working on the was1 node, and this node is also detected and registered inside the cluster, but it is not accessible through the cluster).
In the end, one call works when round robin targets was0, and the next call fails when it targets was1.
So I'm currently wondering whether I have understood the cluster behavior correctly: should it work for both application servers' mapped URLs, or is it normal that only the first one, was0, responds successfully? How could I force the generated WSDL to return a valid endpoint URL?
What I understood from reading the documentation is that I need to map the AS URLs on the ELB, and that the ELB will then balance across all AS servers, but it doesn't seem to work like that.
Please tell me if you need some configuration excerpts, a diagram, or an example; I didn't paste them here because they're quite big :)
For information, I had the same problem when balancing across 2 WSO2 ESB worker nodes, but was able to solve it by forcing the WSDL URL prefix to the first node's URL (esb0) with the WSDLEPRPrefix setting in the ESB configuration. As I don't have such a setting in WSO2 AS, I don't know how to control the URL returned in the WSDL.
Thank you in advance for your help,
BOUCNIAUX Benjamin
I'm designing a website/web service to be hosted in the cloud (specifically AWS although that's mostly irrelevant), and I'm spending a lot of time thinking about "designing for failure". I want my system to seamlessly handle node failures, i.e. without any significant user impact or engineer intervention.
In most cases, it's easy to see how to handle sudden node failure. If my app has an API handled by 4 servers behind a load balancer, polled by AJAX or an iPhone app, the poller can simply detect the failed TCP/IP transmission and retry... assuming the load balancer behaves correctly, it will hit a healthy instance.
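As a sketch, that client-side retry can be as simple as a loop with backoff (names illustrative):

// Retry a polling request so a single failed node behind the LB goes unnoticed
async function pollWithRetry(url, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url);
      if (res.ok) return await res.json();
    } catch (err) {
      // Network-level failure, e.g. a node died mid-request; retry below
    }
    await new Promise((r) => setTimeout(r, 500 * (i + 1))); // simple backoff
  }
  throw new Error("all retries failed");
}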
If the app is more processing-oriented, a queue service like SQS can be used to allow stateless nodes to pick up where the failed nodes left off.
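The property doing the work there is SQS's visibility timeout: a received message is hidden rather than deleted, and reappears for another node if the first one dies before finishing. A sketch with the AWS SDK for JavaScript v3 (the queue URL and job handler are placeholders):

// worker.js: pick up work; a crashed node's message becomes visible again
const { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } = require("@aws-sdk/client-sqs");

const client = new SQSClient({ region: "us-east-1" });
const QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"; // placeholder

async function processJob(body) {
  console.log("processing", body); // placeholder for the actual work
}

async function workLoop() {
  while (true) {
    const { Messages } = await client.send(new ReceiveMessageCommand({
      QueueUrl,
      MaxNumberOfMessages: 1,
      WaitTimeSeconds: 20, // long polling
    }));
    for (const msg of Messages ?? []) {
      await processJob(msg.Body);
      // Only delete after success; if we crash first, SQS redelivers the message
      await client.send(new DeleteMessageCommand({ QueueUrl, ReceiptHandle: msg.ReceiptHandle }));
    }
  }
}

workLoop().catch(console.error);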
The difficulty I see is with "points of entry", where no retry/polling is possible because the application hasn't been loaded yet, and where a failure means the app never starts. For example, the index.html on a webpage... if a node fails while transmitting that file, the user's browser will likely hang and not automatically retry (they will need to refresh).
The Load Balancer is also a single "point of entry/failure". However, in this case it appears we can solve the problem by creating multiple Load Balancers, and load balancing them using DNS Load Balancing as described here: http://blog.rightscale.com/2012/10/23/dns-load-balancing-and-using-multiple-load-balancers-in-the-cloud/
Is this a solution that would work for the simpler index.html case? Overall, how can we create redundancy where polling/retrying/queuing is not possible?
EDIT: Another idea is to have the index.html hosted statically on a CDN, S3, etc. (where resource availability is more dependable), although that prevents serving dynamic content directly. Dynamic content could be added if the page populates itself using JS, but that adds a dependency on JS as well as latency for the user.