Forward clients to a specific ENI through ALB - amazon-web-services

I don't know if it's even possible, but I will still ask. Using ECS, I want to have one service A whose tasks do some work with the clients (create a TCP connection, then form a group from multiple players and tell each player that they have been placed in this group). Then I want these clients to make a request to one specific task (some ENI with a private IP, since I use awsvpc networking) from another service B behind an ALB (and then that task sends a response to those clients and starts working with them).
So my question is: how can I forward multiple clients to the same specific ENI if that ENI is behind an ALB? Maybe in service A's tasks I should use the AWS SDK to figure out the IPs of service B's tasks? But I still don't know how to reach that task by private IP. Is it even possible to "tell" the ALB that I want to connect to some specific ENI?

Yes, you can configure the ALB to route to a specific IP: register that IP in a target group of target type "ip", and use the listener's routing rules to send matching requests to that target group. Rules can be based on the host name and path of the incoming HTTP request.
Here is a detailed Tutorial on how to do that.
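For illustration, here is a rough boto3 sketch of one way to do this: look up the private IP of one specific task in service B, register it in an "ip"-type target group, and add a listener rule so that a particular path goes only to that task. The cluster, service, listener ARN, VPC ID, port, and the /group-42/* path are all made-up placeholders:

```python
import boto3

ecs = boto3.client("ecs")
elbv2 = boto3.client("elbv2")

# Look up the private IP of one specific task in service B (awsvpc networking).
task_arns = ecs.list_tasks(cluster="my-cluster", serviceName="service-B")["taskArns"]
task = ecs.describe_tasks(cluster="my-cluster", tasks=[task_arns[0]])["tasks"][0]
eni = next(a for a in task["attachments"] if a["type"] == "ElasticNetworkInterface")
private_ip = next(d["value"] for d in eni["details"] if d["name"] == "privateIPv4Address")

# Create a target group of type "ip" and register that single task IP in it.
tg = elbv2.create_target_group(
    Name="group-42-tg",
    Protocol="HTTP",
    Port=4000,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)["TargetGroups"][0]
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": private_ip, "Port": 4000}],
)

# Add a listener rule so that requests for this group's path go only to that task.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/group-42/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```

Service A would have to keep such target groups and rules up to date itself (and tell the clients which path to use), since the ALB will not otherwise pin a group of clients to one particular task.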

Related

AWS call particular instance behind ASG and ALB

We have a use case where we want to make an HTTP call to a particular EC2 instance behind an ALB. I tried to find a solution for this in the AWS docs but was unable to. Can anyone suggest how I can achieve this?
Why do I want to do this?
Basically, we have a service which is scaled out using an ASG, with a load balancer to balance the load of the various clients that connect to it. These are persistent, long-running connections.
Now, we also have another service which wants to send a request to the instance where a particular client is connected.
Any help is much appreciated. Thanks!
Is your question how to ensure that a client is always connecting to the same backend instance? If yes, then Sticky Sessions are your answer.
If you want to be able to address a particular instance in the backend, that's not possible. A big part of the reason to use a load balancer is to make the number of backend instances an implementation detail that's not exposed to the client.
In principle you could create a DNS record for each EC2 instance, point it at the ALB, and then use host-based routing to map each of them to a particular instance (a sketch follows below), but in that case you might as well expose the instances directly (unless you need encryption and the like).
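To illustrate the host-based-routing idea, here is a minimal boto3 sketch; the hostnames, target group ARNs, and listener ARN are invented for the example, and each target group is assumed to contain exactly one instance:

```python
import boto3

elbv2 = boto3.client("elbv2")

# One made-up hostname per instance, each mapped to a target group
# that contains only that instance.
host_to_target_group = {
    "node-1.example.com": "arn:aws:elasticloadbalancing:...:targetgroup/node-1/...",
    "node-2.example.com": "arn:aws:elasticloadbalancing:...:targetgroup/node-2/...",
}

for priority, (host, tg_arn) in enumerate(host_to_target_group.items(), start=1):
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
        Priority=priority,
        Conditions=[{"Field": "host-header", "Values": [host]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```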

How do I tell which ELB sent me traffic?

I have 2 clustered EC2 instances behind 2 Classic ELBs: one for the DNS name xxx.com and one for xxx.net. Both point to the same cluster of EC2 instances. My application needs to know whether the incoming request came from the .com or the .net URL.
The problem is that technically the ELB forwards the request, so I lose that in the header. I can get the IP address of the ELB, but Amazon will occasionally drop the ELB and give us a new one with a different IP, so it works for a while and then breaks out of nowhere.
Does Amazon offer something like a "static" ELB? I can't find anything so I assume not.
Is there any other way around this?
My application needs to know whether the incoming request came from the .com or the .net URL.
Read this from the incoming HTTP Host header. If it doesn't contain one of the two expected values, throw an error (503, 421, whatever makes the most sense) and don't render anything.
The problem is that technically the ELB forwards the request, so I lose that in the header.
I don't know what this statement is intended to convey. The Host header is set by the user agent and is not modified by the ELB, so your application can read it from the incoming request.
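For example, a minimal sketch of that check in a Python (Flask) handler, assuming the two expected hostnames are example.com and example.net:

```python
from flask import Flask, abort, request

app = Flask(__name__)
EXPECTED_HOSTS = {"example.com", "example.net"}

@app.route("/")
def index():
    # The Host header passes through the ELB unchanged; strip an optional port.
    host = request.host.split(":")[0]
    if host not in EXPECTED_HOSTS:
        abort(421)  # or 503 -- whichever makes more sense for your application
    if host.endswith(".com"):
        return "served for the .com site"
    return "served for the .net site"
```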
You should not be looking at the internal IP address of the ELB for anything except to log it for correlation/troubleshooting if needed.
Of course, you also don't actually need 2 Classic ELBs for this application. You can use a single classic balancer with a single certificate containing both domains, or a single Application Load Balancer with either a combo cert or two separate certs.

How 2 services can talk to each other on AWS Fargate?

I set up a Fargate cluster on AWS. My cluster has the following services:
server-A (port 3000)
server-B (port 4000)
Each service is in the same VPC and has the same security group (any port, any source, any destination). The VPC is isolated from the internet.
Now, I want server-A to send an HTTP query to server-B. I would assume that, as in Docker Swarm, there is a private DNS that maps the service name to its private IP, and that it would be as simple as sending the query to http://server-B:4000. However, server-A gets a timeout, which means it can't reach server-B.
I've read in the documentation that I can put the 2 containers in the same service, each container listening on a different port, so that, thanks to the loopback interface, from server-A I could query http://127.0.0.1:4000 and server-B would respond, and vice versa.
However, I want to be able to scale server-A and server-B independently, so I think it makes sense to keep each server independent of the other by having 2 services.
I've read that, for 2 tasks to talk to each other, I need to set up a load balancer. Coming from the world of Docker Swarm, it was so easy to query services by their service name, and behind the scenes the request was forwarded to one of the containers of that service. But it doesn't seem to work like that on AWS Fargate.
Questions:
how can server-A talk to server-B?
As services sometimes redeploy, their private IPs change, so it makes no sense to query by IP; querying by hostname seems the most natural way.
Do I need to setup any kind of internal DNS?
Thanks for your help, I am really lost on doing this simple setup.
After searching, I found out it was because I had not enabled "Service Discovery" during service creation, so no private DNS was created. Here is some additional documentation which explains the exact steps:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html
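For reference, a rough boto3 sketch of the same steps; the namespace name "local", the VPC/subnet/security group IDs, and the cluster and service names are placeholders:

```python
import boto3

sd = boto3.client("servicediscovery")
ecs = boto3.client("ecs")

# 1. Private DNS namespace for the VPC (asynchronous; poll the operation in real code).
sd.create_private_dns_namespace(Name="local", Vpc="vpc-0123456789abcdef0")

# 2. Cloud Map service -> maintains "server-B.local" A records for each task.
registry = sd.create_service(
    Name="server-B",
    NamespaceId="ns-abcdef1234567890",  # available once the namespace operation finishes
    DnsConfig={"DnsRecords": [{"Type": "A", "TTL": 10}]},
)["Service"]

# 3. ECS service registers each task's ENI IP in that DNS record.
ecs.create_service(
    cluster="my-cluster",
    serviceName="server-B",
    taskDefinition="server-B:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
        }
    },
    serviceRegistries=[{"registryArn": registry["Arn"]}],
)
```

Once the tasks are registered, server-A can simply call http://server-B.local:4000.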

Using Redis behind an AWS load balancer

We're using Redis to collect events from our web application (pub/sub based) behind an AWS ELB.
We're looking for a solution that will allow us to scale up and get high availability across the different servers. We do not wish to put these two servers in a Redis cluster; our plan is to monitor them using CloudWatch and switch between them if necessary.
We tried a simple test of placing two Redis servers behind the ELB, telnetting to the ELB DNS name and watching with 'redis-cli monitor', but we don't see anything. (When trying the same without the ELB it seems fine.)
Any suggestions?
Thanks!
I came across this while looking for a similar question, but disagree with the accepted answer. Even though this is pretty old, hopefully it will help someone in the future.
For your situation it's more appropriate to use DNS failover together with a Redis replication auto-failover configuration. DNS failover provides groups of availability (if you need that level of scale) and the replication group provides cache uptime.
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html
Active-passive failover should give you the high availability you're looking for:
Active-passive failover: Use this failover configuration when you want a primary group of resources to be available the majority of the time and you want a secondary group of resources to be on standby in case all of the primary resources become unavailable. When responding to queries, Amazon Route 53 includes only the healthy primary resources. If all of the primary resources are unhealthy, Amazon Route 53 begins to include only the healthy secondary resources in response to DNS queries.
After you set up the DNS, you would point it at the ElastiCache Redis failover group's URL and add multiple groups for higher availability during a failover operation.
However, you might need to set up your application to write and read from different endpoints to maximize the architecture's scalability.
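As a small sketch of what that read/write split might look like with redis-py; the endpoint hostnames below are placeholders for your replication group's primary and reader endpoints:

```python
import redis

# Placeholder hostnames -- substitute your ElastiCache replication group endpoints.
writer = redis.Redis(host="my-group.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)
reader = redis.Redis(host="my-group-ro.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)

writer.set("latest_event", "player-joined")   # all writes go to the primary
print(reader.get("latest_event"))             # reads can be spread over the replicas
```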
Sources:
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Replication.html
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoFailover.html
Placing a pair of independent Redis nodes behind a LB will likely not be what you want. What will happen is that the ELB will try to balance connections across the two instances, sending half to one and half to the other. This means that commands issued over one connection may not be seen by another, and no data is shared. So client A could publish a message, and client B, being subscribed to the other server, won't see the message.
For pub/sub behind an ELB you have a secondary problem: the ELB will close an idle connection. So if you subscribe to a channel that isn't busy, the ELB will close your connection. As I recall the maximum you can set this to is 60s, meaning that if you don't publish a message every single minute your clients will be disconnected.
How much of a problem that is depends on your client library, and frankly, in my experience, most don't handle it well: they are unaware of the need to re-subscribe upon re-establishing the connection, meaning you would have to code that yourself.
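A rough sketch of what that manual re-subscribe logic could look like with redis-py; the host and channel name are made up for the example:

```python
import time
import redis

def listen_forever(host="redis.internal", channel="events"):
    while True:
        try:
            r = redis.Redis(host=host, port=6379)
            pubsub = r.pubsub()
            pubsub.subscribe(channel)  # must be repeated after every reconnect
            for message in pubsub.listen():
                if message["type"] == "message":
                    print("got:", message["data"])
        except redis.ConnectionError:
            # The LB (or an idle timeout) dropped us; wait briefly and re-subscribe.
            time.sleep(1)
```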
That said, a Sentinel + Redis solution would be quite ideal if your client has proper Sentinel support. In this scenario your client asks the Sentinels for the master to talk to, and on a connection failure it repeats this process. This would handle the setup you describe without the problems of being behind an ELB.
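For a client with Sentinel support this is usually only a few lines; a sketch with redis-py, where the Sentinel addresses and the master name "mymaster" are placeholders:

```python
from redis.sentinel import Sentinel

# Placeholder Sentinel addresses and master name.
sentinel = Sentinel([("sentinel-1.internal", 26379), ("sentinel-2.internal", 26379)],
                    socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)    # for writes / publish
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)    # for reads / subscribe

master.publish("events", "hello")
```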
Assuming you are running in a VPC:
did you register the EC2 instances with the ELB?
did you add the correct security group settings to the ELB (allowing inbound traffic on the Redis port, 6379 by default)?
did you add an ELB listener that maps the Redis port on the ELB to the Redis port on the instances?
did you set sensible ELB health checks (e.g. TCP on the Redis port) so that the ELB thinks the EC2 instances are healthy?
If the ELB thinks the servers behind it are not healthy, it will not send them any traffic.
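A hedged boto3 sketch of three of those checks for a Classic ELB; the load balancer name and instance IDs are placeholders, 6379 is assumed to be the Redis port, and the security group rules themselves are managed through the EC2 API:

```python
import boto3

elb = boto3.client("elb")  # Classic ELB API

# Listener: ELB port 6379 -> instance port 6379.
elb.create_load_balancer_listeners(
    LoadBalancerName="redis-elb",
    Listeners=[{"Protocol": "TCP", "LoadBalancerPort": 6379,
                "InstanceProtocol": "TCP", "InstancePort": 6379}],
)

# Health check: plain TCP on the Redis port.
elb.configure_health_check(
    LoadBalancerName="redis-elb",
    HealthCheck={"Target": "TCP:6379", "Interval": 30, "Timeout": 5,
                 "UnhealthyThreshold": 2, "HealthyThreshold": 2},
)

# Register the Redis instances.
elb.register_instances_with_load_balancer(
    LoadBalancerName="redis-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"},
               {"InstanceId": "i-0fedcba9876543210"}],
)
```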

Setting up a loadbalancer behind a proxy server on Google Cloud Compute engine

I am looking to build a scalable REST webservice on the Google Cloud Compute Engine but have a couple of requirements that I am not sure how best to implement.
Structure so far:
2 Instances running a REST webservice connected to a MySQL Cloud database.
(number of instances to scale up in the future)
Load balancer to split requests between the two or more instances.
This part is fine.
What I need next is for the traffic (POST requests from the instances to an external webservice) to come from a single IP address. I assume these requests cannot be routed back out through the public IP of the load balancer?
I get the impression the solution to this is to route all requests from the instances through a 3rd instance running Squid. Is this the best way to do it? (side question)
Now to my main question:
I have been reading about ApiAxle which sounds like a nice proxy for Web Services, giving some good access control, throttling and reporting capabilities.
Can I have an instance running ApiAxle in front of a Google Cloud load balancer that shares requests from the proxy among the backend instances that do the leg work and feed the response back through the ApiAxle proxy, so that everything goes through a single IP visible to clients using the API (letting me add new instances to the pool to add capacity)?
And would the proxy be much of a bottleneck?
Thanks in advance.
/Dave
(New to this, so sorry if it's a stupid question; I can't find anything like this on the web.)
Sounds like you need to NAT your outbound traffic so that it appears to come from one IP address. You need to do that via a third instance, since the Google LB stack doesn't provide this; GCLB works only with inbound connections to the load-balanced IP.
You can set up source NAT using advanced routing, or you can use a proxy as you suggested.