Can we control traffic routing to instances on AWS ELB with code?

I have two instances behind one Application Load Balancer. Both instances are in the same target group with default routing.
Can I control traffic routing to the instances at the application level?
I'd like to deploy a new version of the code to one instance and allow only a small amount of traffic to that instance for testing.

Route 53 can achieve this kind of A/B testing via weighted routing.
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
Reference: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted
The architecture would comprise two routes with different weights, as described in https://aws.amazon.com/blogs/devops/introducing-application-load-balancer-unlocking-and-optimizing-architectures/.
The other optimization recommended in the article above is to use an Application Load Balancer to rewrite the URLs instead of using DNS.
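If you take the weighted-routing approach, the records can be managed entirely in code. A minimal boto3 sketch follows; the hosted zone ID, record name, and addresses are placeholders, not values from your setup:

```python
import boto3

route53 = boto3.client("route53")

# Two weighted records for the same name: ~90% of DNS lookups resolve to
# the stable instance, ~10% to the instance running the new version.
# The zone ID, record name, and IPs below are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "stable",
                    "Weight": 90,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "canary",
                    "Weight": 10,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.11"}],
                },
            },
        ]
    },
)
```

Weights are relative, so 90/10 sends roughly ten percent of lookups to the canary record; a low TTL makes weight changes take effect quickly.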

Related

Routing traffic from an external load balancer to an internal load balancer on GCP

Let's assume we are laboring under the following requirements:
We would like to run our code on GCP, and wish to do so on Kubernetes as opposed to any of the managed solutions (App Engine, Cloud Run...).
Our services need to be exposed to the internet.
We would like to preserve client IPs (the service should be able to read them).
We would also like to deploy different versions of our services and split traffic between them based on weights.
All of the above, except for traffic splitting, can be achieved by a traditional ingress resource, which on GCP is implemented by a Global external HTTP(S) load balancer (classic). For this reason I was overjoyed when I noticed that GCP is implementing the new Kubernetes gateway and route resources. As explained here, they are in combination able to do everything that the ingress resource did and more; specifically, weighted traffic distribution (see the sketch below for what I am after).
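For concreteness, the weighted split I am after looks roughly like the following sketch using the Kubernetes Python client; the gateway name, namespace, service names, and weights are all placeholders:

```python
from kubernetes import client, config

config.load_kube_config()

# A Gateway API HTTPRoute splitting traffic 90/10 between two versions of
# a service. The gateway, namespace, service names, and weights below are
# placeholders.
http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "HTTPRoute",
    "metadata": {"name": "my-route", "namespace": "default"},
    "spec": {
        "parentRefs": [{"name": "my-gateway"}],
        "rules": [
            {
                "backendRefs": [
                    {"name": "my-service-v1", "port": 8080, "weight": 90},
                    {"name": "my-service-v2", "port": 8080, "weight": 10},
                ]
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="gateway.networking.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="httproutes",
    body=http_route,
)
```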
Unfortunately, however, once I began implementing this on a test cluster, I discovered that the gateway resource is implemented by four different classes, all of which are backed by the same two load balancers already backing the existing ingress resource, and that only the two internal gateway classes, backed by GCP's internal HTTP(S) load balancer, support traffic splitting.
So, if we want to expose our services to the internet, we cannot traffic split, and if we wish to traffic split, we may only do so internally. I wasn't dismayed by this; it actually kind of makes sense. Ideally, the gke-l7-gxlb class (backed by the global external HTTP(S) load balancer (classic)) would support traffic splitting, but I've encountered this sort of architecture in orgs before: an external load balancer does SSL termination and then sends traffic to internal load balancers, which split traffic based on various rules. There are even diagrams of such setups, using multiple load balancers, on GCP's various tutorial pages. However, to finally return to the title, I cannot seem to convince GCP's external load balancer to route traffic to the internal one. It seems to be very restricted in its backend definitions, and simply choosing an IP address (provided to us upon the creation of the gateway resource, i.e. the internal load balancer) does not appear to be an option.
Is this possible? Am I completely off base here? Should I be solving this in a completely different way? Is there an easier way to satisfy the four requirements above? I feel like an external load balancer sending traffic to an IP in my VPC network should be one of its most basic functions, but maybe there is a reason it's not allowing me to do this?

Load Balancing of 2 instances in AWS

I have two VMs (in the AWS cloud) connected to a single DB. Each VM runs the same application. I want to load balance those two VMs and route based on the traffic (e.g., if traffic is high on one VM instance, it should switch to the other VM).
Currently I am accessing the two instances via two different IP addresses over HTTP. Now I want to access those two VMs over HTTPS under the same DNS name, e.g. https://dns-name/service1/ and https://dns-name/service2/.
How can I do load balancing using an NGINX ingress?
I am new to the AWS cloud. Can someone help me, guide me, or suggest some appropriate references for getting to a solution?
AWS offers an Elastic Load Balancing service.
From What is Elastic Load Balancing? - Elastic Load Balancing:
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to the vast majority of workloads.
You can use this ELB service instead of running another Amazon EC2 instance with nginx. (Charges apply.)
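As a rough boto3 sketch of that setup (the subnet, security group, VPC, instance, and ACM certificate identifiers below are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing Application Load Balancer.
# All identifiers below are placeholders.
alb = elbv2.create_load_balancer(
    Name="my-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# One target group containing both instances; the ALB health-checks them
# and only routes traffic to healthy targets.
tg = elbv2.create_target_group(
    Name="my-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-aaaa1111"}, {"Id": "i-bbbb2222"}],
)

# HTTPS listener terminating TLS with an ACM certificate, so both VMs are
# reachable over HTTPS behind the ALB's single DNS name.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```

With this in place, the ALB's DNS name becomes the single HTTPS entry point for both VMs.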
Alternatively, you could configure your domain name on Amazon Route 53 to use Weighted routing:
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
This would distribute the traffic when resolving the DNS name rather than using a load balancer. It's not quite the same, because DNS information is cached, so the same client would continue to be directed to the same server until its cached record expires. However, it is practically free to use.

Is it possible to map different domain names to one IP address in AWS?

I'm not sure if this is the right place to ask this. If it's not, kindly refer me to the most appropriate place.
I need to have customized domain names for my clients but only one instance of the web app. Is this possible? How do I go about this?
The answer may simply be yes, but it's not up to AWS so much as to the DNS for the domains you plan to use. All the application running at the AWS IP address has to do is not reject the domain names given to the web server stack in its configuration.
You can point as many domain names as you like at a single IP address using Amazon Route 53 hosted zones. You can create multiple A records in Route 53, or you can use alias records; a short code sketch follows the quoted description below.
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the health of your application and its endpoints. Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally through a variety of routing types, including Latency Based Routing, Geo DNS, Geoproximity, and Weighted Round Robin—all of which can be combined with DNS Failover in order to enable a variety of low-latency, fault-tolerant architectures. Using Amazon Route 53 Traffic Flow’s simple visual editor, you can easily manage how your end-users are routed to your application’s endpoints—whether in a single AWS region or distributed around the globe. Amazon Route 53 also offers Domain Name Registration – you can purchase and manage domain names such as example.com and Amazon Route 53 will automatically configure DNS settings for your domains.
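A rough boto3 sketch of the multiple-A-record approach described above (the hosted zone IDs, domain names, and IP address are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# Point several client-facing domain names at the same IP address.
# Zone IDs, names, and the IP are placeholders; each name lives in its
# own hosted zone here, but the pattern is identical within one zone.
records = [
    ("Z111111111111EXAMPLE", "client-one.example.com"),
    ("Z222222222222EXAMPLE", "client-two.example.net"),
]

for zone_id, name in records:
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": name,
                        "Type": "A",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                    },
                }
            ]
        },
    )
```

The web server stack then only needs to accept each of these Host names, as noted above.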

Subdomain DNS for different endpoints

I have a subdomain, api.example.com, that has 2 REST endpoints, endpoint1 and endpoint2, which ideally would be hosted on different servers (think EC2 instances, for example). Is there a way to configure the DNS record (I am using Amazon Route 53) such that api.example.com/endpoint1 and api.example.com/endpoint2 can each point to their own server? I don't think that is possible, but I just wanted to double check. If it is indeed not possible, is there another way to point the 2 endpoints to the different servers (ideally using AWS)?
You can't do this with DNS, but you can accomplish it with the Application Load Balancer.
Create an ALB, and point DNS at it.
Next, create two target groups, one for each endpoint, and deploy your instances (or Auto Scaling groups) to the appropriate target group.
The ALB will take care of the routing for you, and you can size and scale each endpoint fleet as needed.
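If you script the setup, the two path rules might look roughly like this boto3 sketch (the listener and target group ARNs are placeholders, and the ALB and target groups are assumed to already exist):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs for an existing HTTPS listener and two target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/example"

# Route each path prefix to its own target group; lower Priority wins.
for priority, (pattern, tg_arn) in enumerate(
    [
        ("/endpoint1/*", "arn:aws:elasticloadbalancing:...:targetgroup/endpoint1"),
        ("/endpoint2/*", "arn:aws:elasticloadbalancing:...:targetgroup/endpoint2"),
    ],
    start=1,
):
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [pattern]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```

Requests matching neither pattern fall through to the listener's default action.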

AWS load balancer route the traffic to different instances based on IP addresses

We want to give more server resources to some enterprise customers. How can we configure our load balancer so that users from certain IP addresses are routed to our more high-end servers?
This isn't possible with the Classic Elastic Load Balancer (ELB). Classic ELB is designed to distribute all traffic approximately equally across the instances behind it; it has no selective routing capability or custom weighting of back-ends.
Given the relatively low cost of an additional balancer, one option is to set up a second one with a different hostname, in front of this preferred class of instances, and provide that alternate hostname to your priority clients.
Otherwise you'll need to use a third-party balancer, either behind, or instead of, ELB, which will allow you to perform more advanced routing of requests based on the client IP, the URI path, or other variables.
A balancer operating behind the ELB seems redundant at first glance, but it really isn't, since the second balancer can provide more features, while ELB conveniently provides the front-end entry point resiliency into a cluster of load balancers spanning availability zones, without you having to manage that aspect.