I need a scalable and cost-effective architecture for a web design service (multiple clients). I'm currently using the architecture below, and I would like to know its shortcomings.
Background: Nuxt.js based server rendered application that is fronted by nginx reverse proxy.
The app container and the proxy containers are deployed onto AWS ECS instances. The proxy containers are registered to an ALB (application load balancer) via listeners that map from a dynamic container port to a static ELB port.
So, suppose we have two clients: www.client-1.com and www.client-2.com.
When a request is made to www.client-1.com, the request is 301 redirected (with masking) to PORT 80 of the ALB. When the request hits ALB:80 it maps to instance_ip:3322 (where 3322 is a dynamic container port) via the listener-for-client-1 that is configured. And the response is sent back to the client.
When a request is made to www.client-2.com, the request is 301 redirected (with masking) to PORT 81 of the ALB. When the request hits ALB:81 it maps to instance_ip:3855 (where 3855 is a dynamic container port) via the listener-for-client-2 that is configured.
As you can see, this model allows me to share an ELB across multiple clients. This model is tested and working for me.
Do you think the 301 domain forwarding is a terrible idea? Can you recommend an affordable alternative that does not require an ELB per client?
What other downsides do you see ?
Thanks!
Domain masking is always a terrible idea. Problems are inevitable, particularly when the browser is expected to access a non-standard port.
But none of this is necessary. ALB supports multiple applications (customers) on a single balancer.
You can now create Application Load Balancer rules that route incoming traffic based on the domain name specified in the Host header. Requests to api.example.com can be sent to one target group, requests to mobile.example.com to another, and all others (by way of a default rule) can be sent to a third.
https://aws.amazon.com/blogs/aws/new-host-based-routing-support-for-aws-application-load-balancers/
Despite the fact that this example uses subdomains of example.com, ALB has no restriction requiring that the domains be related. You can attach 26 different SSL certificates to a single ALB and route, by hostname, from the standard ports 80 and 443 to unique backend targets for each request's Host header -- up to 100 rules per balancer.
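For illustration, here is roughly what those host-header rules look like when created with boto3 (a sketch; all ARNs and hostnames are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical listener on the shared ALB -- substitute your own ARN.
LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/..."

# One rule per client hostname, each forwarding to that client's target group.
rules = [
    ("www.client-1.com", "arn:aws:elasticloadbalancing:region:account:targetgroup/client-1/..."),
    ("www.client-2.com", "arn:aws:elasticloadbalancing:region:account:targetgroup/client-2/..."),
]
for priority, (host, target_group_arn) in enumerate(rules, start=1):
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{"Field": "host-header", "Values": [host]}],
        Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )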
Related
Is there a way to create forwarding rules that redirect to a different host?
For example, I want to set up a load balancer with a rule such that if the host is xyz.com, it forwards to host abc.com. Is this type of setup possible?
Let me help you with this.
Forwarding rules
A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer.
Note: Forwarding rules are also used for protocol forwarding, Classic VPN gateways, and Traffic Director to provide forwarding information in the control plane.
Each forwarding rule references an IP address and one or more ports on which the load balancer accepts traffic. Some Google Cloud load balancers limit you to a predefined set of ports, and others let you specify arbitrary ports.
The forwarding rule also specifies an IP protocol. For Google Cloud load balancers, the IP protocol is always either TCP or UDP.
Depending on the load balancer type, the following is true:
A forwarding rule specifies a backend service, target proxy, or target pool.
A forwarding rule and its IP address are internal or external.
Also, depending on the load balancer and its tier, a forwarding rule is either global or regional.
As mentioned, the forwarding rule specifies a backend service, which can help you reach your deployment.
Additionally, I want to share with you the following information about URL maps, which may help you too.
URL maps
Google Cloud HTTP(S) load balancers and Traffic Director use a Google Cloud configuration resource called a URL map to route requests to backend services or backend buckets.
For example, with an external HTTP(S) load balancer, you can use a single URL map to route requests to different destinations based on the rules configured in the URL map:
Requests for https://example.com/video go to one backend service.
Requests for https://example.com/audio go to a different backend service.
Requests for https://example.com/images go to a Cloud Storage backend bucket.
Requests for any other host and path combination go to a default backend service.
URL maps are used with the following Google Cloud products:
External HTTP(S) Load Balancing (global and regional)
Internal HTTP(S) Load Balancing
Traffic Director
There are two types of URL map resources available: global and regional. The type of resource that you use depends on the product's load balancing scheme.
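To make the example above concrete, here is a rough sketch of building such a URL map with the google-cloud-compute Python client (the project, service, and bucket names are placeholders, and the field set is simplified):

from google.cloud import compute_v1

# Route /video, /audio and /images to different backends; everything else
# falls through to the default backend service.
url_map = compute_v1.UrlMap(
    name="example-url-map",
    default_service="projects/my-project/global/backendServices/default-svc",
    host_rules=[compute_v1.HostRule(hosts=["example.com"], path_matcher="media")],
    path_matchers=[
        compute_v1.PathMatcher(
            name="media",
            default_service="projects/my-project/global/backendServices/default-svc",
            path_rules=[
                compute_v1.PathRule(paths=["/video/*"],
                                    service="projects/my-project/global/backendServices/video-svc"),
                compute_v1.PathRule(paths=["/audio/*"],
                                    service="projects/my-project/global/backendServices/audio-svc"),
                compute_v1.PathRule(paths=["/images/*"],
                                    service="projects/my-project/global/backendBuckets/images-bucket"),
            ],
        )
    ],
)
compute_v1.UrlMapsClient().insert(project="my-project", url_map_resource=url_map)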
There is another solution named "HTTP-to-HTTPS redirect" to redirect all requests from port 80 (HTTP) to port 443 (HTTPS).
HTTPS uses TLS (SSL) to encrypt HTTP requests and responses, making it safer and more secure. A website that uses HTTPS has https:// in the beginning of its URL instead of http://.
But I am not sure whether the HTTP-to-HTTPS redirect fits your description.
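In case it does, the usual pattern, as I understand it, is a second URL map whose only job is the redirect, attached to the HTTP (port 80) frontend; a rough sketch with the same Python client (names are placeholders):

from google.cloud import compute_v1

# A URL map that only redirects; attach it to the port 80 target HTTP proxy.
redirect_map = compute_v1.UrlMap(
    name="http-to-https-redirect",
    default_url_redirect=compute_v1.HttpRedirectAction(
        https_redirect=True,  # rewrite the scheme to https
        strip_query=False,    # keep the query string intact
        redirect_response_code="MOVED_PERMANENTLY_DEFAULT",
    ),
)
compute_v1.UrlMapsClient().insert(project="my-project", url_map_resource=redirect_map)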
I hope this information helps you choose the best option for your deployment.
I have a backend server hosted in the cloud (AWS) and a front-end which is just a Docker container running NGINX routed to an index.html. The backend is a cluster of containers behind a Load Balancer (ALB), so from my understanding this would be considered one "domain" because I access the backend through the ALB's DNS address. The frontend runs on a separate EC2 instance from the cluster. In this situation, the backend is on a different domain from the frontend, which means I would need to enable CORS to allow resources to be shared from the backend to the frontend. Once I get the domain setup, would it be possible to have the two ends be on the same domain so that CORS is no longer needed? Will they be considered to be under the same domain even though they are on two different IP's?
Yes. The precise definition is in section 4 of RFC 6454, which in most common cases comes down to the origin of a resource having three components:
Scheme part of the URL, e.g. https
Host part of the URL, e.g. example.com
Port number, e.g. 443
The IP address used to access the host does not enter into consideration here, so in the scenario you describe the frontend and the backend would have the same origin.
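A quick way to convince yourself is to compare the (scheme, host, port) triples directly; a small Python sketch:

from urllib.parse import urlsplit

def origin(url: str) -> tuple:
    # Return the (scheme, host, port) triple that defines the origin.
    # Assumes an http or https URL.
    parts = urlsplit(url)
    default_port = {"http": 80, "https": 443}[parts.scheme]
    return (parts.scheme, parts.hostname, parts.port or default_port)

# Same origin: the explicit :443 equals the https default, so no CORS is needed.
assert origin("https://example.com/app") == origin("https://example.com:443/api/users")

# Different origin: the port differs.
assert origin("https://example.com") != origin("https://example.com:8443")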
This is the first time I am using a load balancer... I have spent quite a bit of time going through the documentation and I am still quite confused.
I want to host my website. My website supports HTTPS only. I want to put my backend servers behind an Application Load Balancer.
I am using AWS' default VPC, I have created an ALB (myALB) and installed my SSL certificate on it. I have also created 2 EC2 instances (myBackEndServer1 & myBackEndServer2).
Questions:
1. Should the communication between the backend servers and myALB be through HTTP or HTTPS?
2. I have created an HTTPS listener on myALB; do I also need an HTTP listener on myALB? What I want is to redirect any HTTP request to HTTPS (I believe this should happen on myALB).
3. I want to use external ID login (using Facebook). I have set up Facebook login to work with HTTPS only. Does the communication between Facebook and my backend servers go through myALB? I mean, I either need HTTPS on my backend servers, or the communication with Facebook should go through myALB.
I would appreciate any general advice.
You can use both HTTP and HTTPS listeners.
Yes, you can achieve that with ALB. You can add a rule that permanently redirects any request coming in on port 80 to port 443. Check out rules for ALB.
If you make a request from your instances to Facebook, whether the communication is encrypted depends on Facebook, because in that case you are the client. However, if you set up a webhook, Facebook is now the client, and to let it communicate with you, you're going to give it your load balancer's DNS name. And due to point 2 in this list, Facebook will be forced to use TLS.
I'm not sure I fully understood your question number three, but here's something you may also find useful. ALB has a feature that allows it to authenticate users with Cognito. It explicitly says that your EC2 instances can be abstracted away from any authentication, even when it makes use of Facebook ID or Google ID or whatever. Never tried it, though.
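For the redirect in point 2, a minimal boto3 sketch (the ALB ARN is a placeholder):

import boto3

elbv2 = boto3.client("elbv2")

# HTTP listener whose only job is a permanent redirect to HTTPS.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:region:account:loadbalancer/app/myALB/...",
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",  # permanent redirect
        },
    }],
)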
We are implementing a micro-services architecture in AWS. We have several EC2 instances which has the micro-services deployed on different ports. We also have an internet facing Application Load Balancer, which routes to different services based on the port.
eg:
xxxx-xx.xx.elb.amazonaws.com:8080/ go to microservice 1
xxxx-xx.xx.elb.amazonaws.com:8090/ go to microservice 2
We need to have a domain name instead of the ELB hostname, and the port should not be exposed through the domain name either. Almost all the resources I found regarding Route 53 use an alias, which does the following:
xx.xxxx.co.id -> xxxx-xx.xx.elb.amazonaws.com or
xx.xxxx.co.id -> 111.111.111.11 (static ip)
1) Do we need separate domains for each micro service?
2) How to use alias to point domains to a specific port of the ELB?
3) Is it possible to use this setup if the domains are from a provider other than AWS?
Important Update
Since this answer was originally written, Application Load Balancer introduced the capability for ALB to route requests to a specific target group based on the Host header of the incoming request.
The incoming host header can now be used to route requests to specific instances and ports.
Additionally, ALB introduced SNI support, allowing you to associate multiple TLS (SSL) certificates with a single balancer; the correct certificate is automatically selected based on the SNI hostname presented by the client when TLS is negotiated. Multi-domain and wildcard certs from AWS Certificate Manager also work with ALB.
Based on these factors, no separate ports or different listeners are needed -- simply assign hostnames and/or path prefixes for each service, and map those patterns to the appropriate target group of instances.
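For the certificate side, the extra certs are attached to the existing HTTPS listener; a boto3 sketch (ARNs are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Attach additional ACM certificates to one HTTPS listener; ALB selects the
# right one per connection from the SNI hostname the client presents.
elbv2.add_listener_certificates(
    ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/...",
    Certificates=[
        {"CertificateArn": "arn:aws:acm:region:account:certificate/svc-one"},
        {"CertificateArn": "arn:aws:acm:region:account:certificate/svc-two"},
    ],
)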
The original answer is no longer accurate, but is included below.
1.) Do we need separate domains for each micro service?
No, this won't help you. ALB does not interpret the hostname attached to the incoming request.
Separate hostnames in the same domain won't directly accomplish your objective, either.
2.) How to use alias to point domains to a specific port of the ELB?
Domains do not point to ports. Hostnames do not point to ports. DNS is only used for address resolution. This is true everywhere on the Internet.
3.) Is it possible to use this setup if the domains are from a provider other than AWS?
This is not a limitation of AWS. DNS simply does not work this way.
A service endpoint is unaware of the DNS records that point to it. The DNS entry itself is strictly used for discovering an IP address that can be used to access the endpoint. After that, the endpoint does not actually know anything about the DNS, and there is no way to tell the browser, via DNS, to use a different port.
For HTTP, the implicit port is 80. For HTTPS, it is 443. Unless a port is provided in the URL, these are the only usable ports.
However, in HTTP and HTTPS, each request is accompanied by a Host: header, sent by the web browser with each request. This is the hostname in the address bar.
To differentiate between requests for different hostnames arriving at a device (such as ELB/ALB), the device at the endpoint must interpret the incoming Host header and route the request to a back-end system providing that service.
ALB does not currently support this capability.
ALB does, however, support choosing endpoints based on a path prefix. So microservices.example.com/api/foo could route to one set of services, while microservices.example.com/api/bar could route to another.
But ALB does not directly support routing by host header.
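A path-prefix rule of that kind can be expressed with boto3 roughly as follows (ARNs are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Send /api/foo* to one target group; other paths fall through to later
# rules or to the listener's default action.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/...",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/foo*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/foo/..."}],
)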
In my infrastructure, we use a combination of ELB or ALB, but the instances behind the load balancer are not the applications. Instead, they are instances that run HAProxy load balancer software, and route the requests to the backend.
A brief example of the important configuration elements looks like this:
frontend main
    bind *:80
    use_backend svc1 if { hdr(Host) -i foo.example.com }
    use_backend svc2 if { hdr(Host) -i bar.example.com }

backend svc1
    server foo-a 192.168.2.24:8080
    server foo-b 192.168.12.18:8080

backend svc2
    ....
The ELB terminates the SSL and selects a proxy at random and the proxy checks the Host: header and selects a backend (a group of 1 or more instances) to which the request will be routed. It is a thin layer between the ELB and the application, which handles the request routing by examining the host header or any other characteristic of the request.
This is one solution, but is a somewhat advanced configuration, depending on your expertise.
If you are looking for an out-of-the-box, serverless, AWS-centric solution, then the answer is actually found in CloudFront. Yes, it's a CDN, but it has several other applications, including as a reverse proxy.
For each service, choose a hostname from your domain to assign to that service, foo.api.example.com or bar.api.example.com.
For each service, create a CloudFront distribution.
Configure the Alternate Domain Name of each distribution to use that service's assigned hostname.
Set the Origin Domain Name to the ELB hostname.
Set the Origin HTTP Port to the service's specific port on the ALB, e.g. 8090.
Configure the default Cache Behavior to forward any headers you need. If you don't need the caching capability of CloudFront, choose Forward All Headers. Also enable forwarding of Query Strings and Cookies if needed.
In Route 53, create foo.api.example.com as an Alias to that specific CloudFront distribution's hostname, e.g. dxxxexample.cloudfront.net.
Your problem is solved.
You see what I did there?
For each hostname you configure, a dedicated CloudFront distribution receives the request on the standard ports (80/443) and -- based on which distribution the host header matches -- CloudFront routes the request to the same ELB/ALB hostname, but on a custom port number.
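The final Route 53 step looks like this in boto3 (the zone ID and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted-zone ID used for all CloudFront aliases):

import boto3

route53 = boto3.client("route53")

# Alias foo.api.example.com to its dedicated CloudFront distribution.
route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE123",  # your example.com hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "foo.api.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's alias zone
                "DNSName": "dxxxexample.cloudfront.net",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)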
I think there is a possibility you can build what you're describing. I was in the same boat for a while; here are some options for you to consider:
In R53, create a hosted zone and point your domain at it.
Optional step: create ALIAS records. You can do this for each subdomain or app. Leave the ALIAS field blank if using the root domain.
Create a record set using the SRV type, which is a service lookup for port redirection. Try to point this to your LB's port 80, and alias the sub-domains.
Change your load balancer's listeners to listen on port 80, then redirect app traffic based on your app's port settings.
I haven't used SRV records myself, but this should definitely point you in that direction.
I have some RESTful APIs deployed on AWS, mostly on Elastic Beanstalk.
My company is gradually adopting a Microservices architecture, and, therefore, I want to start managing these APIs in a more professional and automated way. Hence, I want to adopt some kind of API Manager to provide standard functionalities such as routing and discovery.
In addition, I wish to use such API Manager to expose some of my APIs to the Internet. The manager would be exposed to the Internet through SSL only and should require some sort of authentication from external consumers before routing their requests to the internal APIs. For my use case, a simple API Key in the Authorization header of every request would suffice.
I'm currently considering two products as API managers: WSO2 and Kong. The latter is a somewhat new open source project hosted on GitHub.
In all the deployment scenarios that I am considering, the API Managers would have to be deployed on AWS EC2 instances. Moreover, they would have to be deployed on, at least, two different availability zones and behind an Elastic Load Balancer (ELB) to provide high availability to the managed APIs.
Most of my APIs adhere to the HATEOAS constraints. Therefore, many of their JSON responses contain links to other resources, which must be built dynamically based on the original request.
For instance:
If a user sent a request from the Internet through the exposed API Manager, the URL would look like:
https://apimanager.mycompany.com/accounts/123
As a result, the user should receive a JSON response containing an Account resource with a link to, let's say, a Subscription resource.
The link URL should be based on the protocol, host and port of the original request, and, therefore, would look like: https://apimanager.mycompany.com/subscriptions/789.
In order to meet the dynamic link generation requirements mentioned above, my APIs rely on the X-Forwarded-Proto, X-Forwarded-Host and X-Forwarded-Port HTTP headers. These should contain the protocol (http or https), the host name and the port used by the consumer in the original request, regardless of how many proxies the request passed through.
However, I noticed that when requests pass through ELBs, the X-Forwarded-Proto and X-Forwarded-Port headers are changed to values that refer to the last ELB the request passed through, instead of the values that were in the original request.
For instance: If the original request hits the API Manager through HTTPS, the Manager forwards the request to the internal API through HTTP; thus, when the request hits the second ELB, the ELB changes the X-Forwarded-Proto header to "http". As a result, the original "https" value of the X-Forwarded-Proto header is lost. Hence, the API is unable to build proper links with the "https" protocol in the URLs.
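Concretely, the link building I am describing amounts to something like this (a simplified sketch):

def external_base_url(headers: dict) -> str:
    # Rebuild the base URL the original consumer used, from the
    # X-Forwarded-* headers set by the proxies in front of the API.
    proto = headers.get("X-Forwarded-Proto", "http")
    host = headers.get("X-Forwarded-Host", headers.get("Host", ""))
    port = headers.get("X-Forwarded-Port", "")
    default_port = "443" if proto == "https" else "80"
    if port and port != default_port:
        return f"{proto}://{host}:{port}"
    return f"{proto}://{host}"

# Building the Subscription link for the Account response:
base = external_base_url({"X-Forwarded-Proto": "https",
                          "X-Forwarded-Host": "apimanager.mycompany.com",
                          "X-Forwarded-Port": "443"})
link = f"{base}/subscriptions/789"  # https://apimanager.mycompany.com/subscriptions/789

When the second ELB overwrites X-Forwarded-Proto with "http", this function produces http:// links, which is exactly the problem.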
Apparently, ELBs can't be configured to behave in any other way. I couldn't find any setting that could affect this behavior in AWS's documentation.
Moreover, there doesn't seem to be any better alternative to AWS's ELBs. If I choose to use another product like HAProxy, or do the load balancing through the API Manager itself, I would have to install it on a regular EC2 instance, and, therefore, create a single point of failure.
I'm including an informal diagram to better convey my point of view.
Furthermore, I couldn't find any relevant discussion about deployment scenarios for WSO2 or Kong that would address these matters in any way. It's not clear to me how these products should relate to AWS's ELBs.
Comments from others with similar environments will be very welcome.
Thank you.
Interesting question/challenge - I'm not aware of a way to configure an Elastic Load Balancer's X-Forwarded-* header behavior. However, you might be able to work around this by leveraging ELB's different listener types for the two supported network layers of the OSI Model:
TCP/SSL Listener without Proxy Protocol
Rather than using an HTTP listener (OSI layer 7), which makes sense for terminating SSL etc., you could just use the non-intrusive TCP/SSL listener (OSI layer 4) for your internal load balancers; see Protocols:
When you use TCP (layer 4) for both front-end and back-end connections, your load balancer forwards the request to the back-end instances without modifying the headers. [...] [emphasis mine]
I haven't tried this, but would expect the X-Forwarded-* headers added by the external HTTP/HTTPS load balancer to be passed through unmodified by the internal TCP/SSL load balancer in this scenario.
TCP/SSL Listener with Proxy Protocol
Alternatively, you could also leverage the more advanced/recent Proxy Protocol Support for Your Load Balancer right away; see the introductory blog post Elastic Load Balancing adds Support for Proxy Protocol for more on this:
Until today, ELB allowed you to obtain the client's IP address only if you used HTTP(S) load balancing, which adds this information in the X-Forwarded-For headers. Since X-Forwarded-For is used in HTTP headers only, you could not obtain the client's IP address if the ELB was configured for TCP load balancing. Many of you told us that you wanted similar functionality for TCP traffic, so we added support for Proxy Protocol. It simply prepends a human readable header with the client's connection information to the TCP data sent to your server. [...] Proxy Protocol is useful when you are serving non-HTTP traffic. Alternatively, you can use it if you are sending HTTPS requests and do not want to terminate the SSL connection on the load balancer. [...]
Unlike the X-Forwarded-* headers, proxy protocol handling can be explicitly enabled and disabled. On the flip side, your backend layers might not handle the proxy protocol automatically yet and might need to be adapted accordingly.
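To give an idea of the adaptation needed: the version 1 proxy protocol header is a single human-readable line prepended to the connection, which a backend could parse along these lines (a rough sketch, ignoring the UNKNOWN protocol case):

def parse_proxy_protocol_v1(first_line: bytes) -> dict:
    # Parses a Proxy Protocol v1 header line such as:
    #   b"PROXY TCP4 203.0.113.7 10.0.0.5 45678 8080\r\n"
    fields = first_line.rstrip(b"\r\n").decode("ascii").split(" ")
    if fields[0] != "PROXY":
        raise ValueError("not a proxy protocol v1 header")
    proto, src_ip, dst_ip, src_port, dst_port = fields[1:6]
    return {
        "protocol": proto,             # TCP4 or TCP6
        "client_ip": src_ip,           # the original client's address
        "client_port": int(src_port),
        "proxy_ip": dst_ip,
        "proxy_port": int(dst_port),
    }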