I have a subdomain, api.example.com, that has 2 REST endpoints, endpoint1 and endpoint2, that ideally would be hosted on different servers (Think EC2 instances for example). Is there a way to configure the DNS record (I am using Amazon Route 53) such that api.example.com/endpoint1 and api.example.com/endpoint2 can each point to their own server? I don't think that is possible, but I just wanted to double check. If it is indeed not possible, is there another way to point the 2 endpoints to the different servers (ideally using AWS)?
You can't do this with DNS, but you can accomplish it with the Application Load Balancer.
Create an ALB, and point DNS at it.
Next, create two target groups, one for each endpoint, and deploy your instances (or autoscaling groups) to the appropriate target group.
Then add path-based listener rules so that requests for /endpoint1 and /endpoint2 are forwarded to their respective target groups. The ALB takes care of the routing for you, and you can size and scale each endpoint fleet independently.
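Assuming the ALB, VPC, and HTTP listener already exist, the target groups and path rules might be sketched with the AWS CLI like this (all names, IDs, and ARNs below are placeholders):

```shell
# Create one target group per endpoint fleet (VPC ID is a placeholder)
aws elbv2 create-target-group --name endpoint1-tg --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0
aws elbv2 create-target-group --name endpoint2-tg --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0

# Route by URL path on the ALB listener (listener and target group ARNs are placeholders)
aws elbv2 create-rule --listener-arn arn:aws:elasticloadbalancing:...:listener/... \
    --priority 10 \
    --conditions Field=path-pattern,Values='/endpoint1*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/endpoint1-tg/...
aws elbv2 create-rule --listener-arn arn:aws:elasticloadbalancing:...:listener/... \
    --priority 20 \
    --conditions Field=path-pattern,Values='/endpoint2*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/endpoint2-tg/...
```

Finally, point api.example.com at the ALB with a Route 53 alias record, and both paths resolve through the same hostname.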
I have two VMs (in the AWS cloud) connected to a single DB. Each VM runs the same application. I want to load-balance those two VMs and route based on traffic (i.e., if one VM instance is overloaded, requests should go to the other VM).
Currently I access the two instances at two different IP addresses over HTTP. Now I want to access those two VMs over HTTPS, under the same DNS name, e.g. (https://dns name/service1/),
(https://dns name/service2/)
How can I do load balancing, e.g. using an nginx ingress?
I am new to the AWS cloud. Can someone help me, or suggest some appropriate references for getting to a solution?
AWS offers an Elastic Load Balancing service.
From What is Elastic Load Balancing? - Elastic Load Balancing:
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to the vast majority of workloads.
You can use this ELB service instead of running another Amazon EC2 instance with nginx. (Charges apply.)
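As a rough sketch of how the ELB route might look with the AWS CLI, assuming you already have an ACM certificate for your domain (subnet IDs and ARNs below are placeholders):

```shell
# Create an Application Load Balancer across two subnets (subnet IDs are placeholders)
aws elbv2 create-load-balancer --name my-alb \
    --subnets subnet-aaaa1111 subnet-bbbb2222

# Terminate HTTPS at the ALB using an ACM certificate (ARNs are placeholders)
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/... \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=arn:aws:acm:...:certificate/... \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/...
```

Path-based listener rules for /service1/ and /service2/ can then be added on top of this listener, and a single DNS name pointed at the ALB covers both services over HTTPS.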
Alternatively, you could configure your domain name on Amazon Route 53 to use Weighted routing:
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
This distributes the traffic at DNS-resolution time rather than through a load balancer. It's not quite the same, because DNS responses are cached: the same client will keep being directed to the same server until its cached record expires. However, it is practically free to use.
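A weighted record pair for the two VMs might look like this with the AWS CLI (hosted zone ID and IP addresses are placeholders; here roughly 25% of lookups resolve to the second instance):

```shell
# Two weighted A records for the same name; Weight controls the share of
# DNS responses each record receives (zone ID and IPs are placeholders)
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC \
    --change-batch '{
  "Changes": [
    {"Action": "UPSERT", "ResourceRecordSet": {
      "Name": "app.example.com", "Type": "A", "TTL": 60,
      "SetIdentifier": "vm-1", "Weight": 75,
      "ResourceRecords": [{"Value": "203.0.113.10"}]}},
    {"Action": "UPSERT", "ResourceRecordSet": {
      "Name": "app.example.com", "Type": "A", "TTL": 60,
      "SetIdentifier": "vm-2", "Weight": 25,
      "ResourceRecords": [{"Value": "203.0.113.20"}]}}
  ]
}'
```

A short TTL keeps the cached-response skew mentioned above to a minimum, at the cost of more DNS queries.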
I'm not sure if this is the right place to ask this. If it's not, kindly refer me to the most appropriate place.
I need to have customized domain names for my clients but only one instance of the web app. Is this possible? How do I go about this?
The answer is most likely yes, but it's not really up to AWS; it depends on the DNS for the domains you plan to use. All the application running at the AWS IP address has to do is not reject the domain names handed to the web server stack in its configuration.
You can point as many domain names as you like at a single IP address using Amazon Route 53 hosted zones, either by creating multiple A records or by using alias records.
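For example, two customer domains can be pointed at the same IP from their respective hosted zones (zone IDs, domain names, and the IP below are placeholders):

```shell
# Point two customer domains at the same IP address
aws route53 change-resource-record-sets --hosted-zone-id Z0CLIENT1ZONE \
    --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
      "Name": "app.client-one.com", "Type": "A", "TTL": 300,
      "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'

aws route53 change-resource-record-sets --hosted-zone-id Z0CLIENT2ZONE \
    --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
      "Name": "app.client-two.com", "Type": "A", "TTL": 300,
      "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'
```

The web server at that IP then just needs to accept both hostnames (e.g., via its virtual host or server_name configuration).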
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints. Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally through a variety of routing types, including Latency Based Routing, Geo DNS, Geoproximity, and Weighted Round Robin – all of which can be combined with DNS Failover in order to enable a variety of low-latency, fault-tolerant architectures. Using Amazon Route 53 Traffic Flow's simple visual editor, you can easily manage how your end-users are routed to your application's endpoints – whether in a single AWS region or distributed around the globe. Amazon Route 53 also offers Domain Name Registration – you can purchase and manage domain names such as example.com and Amazon Route 53 will automatically configure DNS settings for your domains.
Trying to host multiple applications on AWS Lightsail with HTTPS allowed on all of them, but running into a problem. It appears that Lightsail load balancers only allow a single certificate to be active at one time. These sites are low-traffic, so I would like to have just a single load balancer or EC2 instance serving multiple domains, with HTTPS supported on all of them. Does AWS provide a way to do this that integrates with Lightsail, or what is the recommended approach?
You are correct that Lightsail load balancers only support one certificate, but that single certificate can cover up to 10 domain names.
One of the domains is the "main" one and the other (up to) 9 are "alternate" domains and subdomains, but operationally it doesn't make any difference which one is the "main" one and which ones are alternates.
https://lightsail.aws.amazon.com/ls/docs/en/articles/add-alternate-domain-names-to-tls-ssl-certificate-https
Certificates are not editable, so if you already created one, you'll need to create a new one with all the domains, and attach it to the balancer.
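Creating and attaching such a multi-domain certificate might look like this with the AWS CLI (load balancer, certificate, and domain names below are placeholders; the first domain is the "main" one):

```shell
# Create a certificate on the Lightsail load balancer covering several domains
aws lightsail create-load-balancer-tls-certificate \
    --load-balancer-name my-lb \
    --certificate-name multi-domain-cert \
    --certificate-domain-name site-one.com \
    --certificate-alternative-names www.site-one.com site-two.com www.site-two.com

# Attach it once it has been validated and issued
aws lightsail attach-load-balancer-tls-certificate \
    --load-balancer-name my-lb \
    --certificate-name multi-domain-cert
```

Validation requires adding the DNS records Lightsail asks for in each domain's zone before the certificate is issued.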
I have two instances under one Application Loadbalancer. Both instances are under the same target group with default routing.
Can I control traffic routing to the instances at the application level?
I'd like to deploy a new version of the code to one instance and allow only a small amount of traffic to that instance for testing.
Route 53 can achieve this kind of A/B testing via weighted routing.
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
RE: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted
The architecture would comprise two routes with different weights (see https://aws.amazon.com/blogs/devops/introducing-application-load-balancer-unlocking-and-optimizing-architectures/).
The other optimization recommended in that article is to use an Application Load Balancer to rewrite the URLs instead of relying on DNS.
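If you'd rather keep everything behind the single ALB, newer ALB listeners also support weighted forwarding across multiple target groups, which handles this canary pattern at the load balancer rather than in DNS. A sketch (ARNs are placeholders; the canary instance sits in its own target group):

```shell
# Forward a small share of traffic to the canary target group
aws elbv2 modify-listener \
    --listener-arn arn:aws:elasticloadbalancing:...:listener/... \
    --default-actions '[{
      "Type": "forward",
      "ForwardConfig": {
        "TargetGroups": [
          {"TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/stable/...", "Weight": 95},
          {"TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/canary/...", "Weight": 5}
        ]
      }
    }]'
```

Unlike DNS weighting, this takes effect immediately and is not subject to client-side DNS caching.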
We want to give more server resources to some enterprise customer. How can we configure our load balancer so that users from certain IP addresses will route to our more high-end servers?
This isn't possible with the Classic Elastic Load Balancer (ELB). Classic ELB is designed to distribute all traffic approximately equally across the instances behind it; it has no selective routing capability or custom "weighting" of back-ends.
Given the relatively low cost of an additional balancer, one option is to set up a second one with a different hostname, in front of this preferred class of instances, and provide that alternate hostname to your priority clients.
Otherwise you'll need to use a third-party balancer, either behind, or instead of, ELB, which will allow you to perform more advanced routing of requests based on the client IP, the URI path, or other variables.
A balancer operating behind the ELB seems redundant at first glance, but it really isn't, since the second balancer can provide more features, while ELB conveniently provides the front-end entry point resiliency into a cluster of load balancers spanning availability zones, without you having to manage that aspect.
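Note that the newer Application Load Balancer (unlike the Classic ELB) can match on the client's source IP in listener rules, which may cover this case without a second balancer. A sketch, with placeholder ARNs and an example CIDR for the enterprise customer:

```shell
# Send requests from the customer's CIDR block to the high-end target group
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:...:listener/... \
    --priority 5 \
    --conditions '[{"Field": "source-ip", "SourceIpConfig": {"Values": ["198.51.100.0/24"]}}]' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/high-end/...
```

Requests that don't match the rule fall through to the listener's default action and the standard fleet.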