AWS: rotate IP address on each request (API Gateway + Lambda)

I am creating a scraper that calls a public API. I want to have a pool of proxy IPs that I can send each request from. My request interval should be around 10 s, and each IP is allowed to send a request every minute, so a pool of six IPs would be sufficient.
My setup is an EC2 server that sends a request to API Gateway, which calls the Lambda function.
Is there a way for me to set this up so that each Lambda request is sent from a different IP?
So let's assume I start polling at 08:00:00:
08:00:00 - 0.0.0.1
08:00:10 - 0.0.0.2
08:00:20 - 0.0.0.3
08:00:30 - 0.0.0.4
08:00:40 - 0.0.0.5
08:00:50 - 0.0.0.6
08:01:00 - 0.0.0.1
etc.
Or am I looking in the wrong direction completely, and is there a different way for me to solve this problem?

The outbound requests from Lambda will come from different IP addresses as long as they run on different Lambda instances. You can have Lambda keep a minimum number of instances in a ready-to-run state, for additional cost. But even then you don't have a mechanism to route a specific request to a specific Lambda instance.
Since you're already running an EC2 instance, you could instead bind multiple public IP addresses to it and route each outbound request through the next one in turn.
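A minimal sketch of that rotation, assuming the instance already has six secondary private IPs on its ENI, each mapped to its own Elastic IP (the addresses and the target endpoint below are placeholders):

import http.client
import itertools
import time

# Hypothetical secondary private IPs configured on this instance's ENI,
# each associated with a distinct Elastic IP.
SOURCE_IPS = itertools.cycle([
    "10.0.0.11", "10.0.0.12", "10.0.0.13",
    "10.0.0.14", "10.0.0.15", "10.0.0.16",
])

def fetch(host, path):
    # source_address pins the local IP used for the outbound TCP connection,
    # so the request leaves via the matching Elastic IP.
    conn = http.client.HTTPSConnection(host, source_address=(next(SOURCE_IPS), 0))
    try:
        conn.request("GET", path)
        return conn.getresponse().read()
    finally:
        conn.close()

while True:
    fetch("api.example.com", "/data")  # placeholder endpoint
    time.sleep(10)                     # ~10 s interval; 6 IPs => 1 request/minute/IP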

Related

How to buffer/delay incoming HTTP requests until a down backend wakes up?

https://companyA.acme.org/custom/api/endpoint1 is hosted on a dedicated EC2 instance
https://companyB.acme.org/another/custom/apiendpoint is hosted on a dedicated EC2 instance
(both EC2 instances use the same core app; each customer can customize the catalog of API endpoints)
Most of the time those EC2 instances are idle, so behind the scenes we want to stop them, but we don't want the customer to care about whether the instance is up or not.
We can accept a 2-second delay in response time when an instance needs to be woken up before answering the customer's API call.
My idea is to intercept all incoming HTTP requests and buffer them before routing and forwarding.
I need a delay so I can check whether a backend matching the subdomain is up, and wake it up if it is down.
Does anyone know an existing proxy / load-balancing solution able to buffer/queue HTTP requests, then do some custom magic (in order to launch the right EC2 instance), then forward the request based on Origin/Referer?
(The answer to the last part is probably: every existing proxy.)
I was thinking about the following:
NGINX in front of everything (point all Route 53 subdomains to this NGINX)
Catch an AWS event when someone calls https://companyA.acme.com/custom/api/endpoint2
Trigger an AWS Lambda that starts the corresponding EC2 host (a sketch follows below)
But I am not sure how NGINX will handle the request buffering/forwarding while I start the EC2 host.
Bonus question: how do I avoid wasting any time forwarding the request when the backend is already up?
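For the Lambda step, here is a rough sketch; it assumes each backend instance carries a hypothetical Subdomain tag, and the event shape carrying the subdomain is made up for illustration:

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    subdomain = event["subdomain"]  # e.g. "companyA" (assumed event shape)
    # Find the backend instance tagged for this subdomain (tag name assumed).
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Subdomain", "Values": [subdomain]}]
    )
    for reservation in resp["Reservations"]:
        for instance in reservation["Instances"]:
            if instance["State"]["Name"] == "stopped":
                ec2.start_instances(InstanceIds=[instance["InstanceId"]])
    return {"status": "waking"}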

Notify all EC2 instances running in ASG

I have a microservice application with multiple instances running in an ASG. All these instances maintain some internal state, and the application exposes Actuator endpoints to refresh that state. I also have some applications running on-prem. The scenario is: on some event, I want to call the Actuator endpoints of the applications running in AWS so they refresh their state. The problem is that if I call the load-balanced URL, the call goes to only one instance. So I'm thinking of the solutions below.
Use SQS and let the on-prem app publish and the AWS app consume the message. But here, too, only one instance will receive the message.
Use SNS, but the listeners are HTTP/S based, so the URL would remain the same, and I think only one instance would receive the message. (AFAIK)
Any other solution? Please suggest.
Thanks
Use SNS but listeners are http/s based so URL would remain same so I think only one instance would receive the message. (AFAIK)
When using SNS, each server would subscribe to the SNS topic, and when each server subscribes it would give SNS its own direct HTTP(S) URL (not the load balancer URL). When SNS receives a message, it sends it to every server that is currently subscribed. I'm not sure SNS will submit the request to the Actuator endpoint in the format your application needs, though.
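As a sketch of that subscription step, assuming each instance's URL is directly reachable by SNS and the application exposes a hypothetical /sns-refresh handler (the topic ARN, port, and path below are placeholders):

import boto3
import urllib.request

sns = boto3.client("sns")

# This machine's own private IP from instance metadata (IMDSv1 for brevity;
# use IMDSv2 tokens if your instances require them).
private_ip = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/local-ipv4", timeout=2
).read().decode()

# Subscribe this instance's direct URL (not the load balancer URL).
# SNS must be able to reach this endpoint for delivery to work.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:refresh-topic",  # placeholder
    Protocol="http",
    Endpoint=f"http://{private_ip}:8080/sns-refresh",  # hypothetical handler
)

Each instance would run this at startup; SNS then POSTs a confirmation message to the endpoint, which must be confirmed before deliveries begin.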
There are likely several solutions you could consider, including ones that won't require a code change, such as establishing a VPN connection between your on-premises applications and the VPC that contains your ASGs, which would allow you to invoke each machine's refresh endpoint by its unique private IP address.
However, more simply, if you're using an AWS Classic ELB or an ALB, then repeated calls to the load balancer URL should hit each machine running your application, provided enough calls to the refresh endpoint are made.
This may not meet your use case, though, for example if you must strictly limit refresh calls to one per endpoint; you'd have to experiment with your software and the load balancer's round-robin behavior.
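A minimal sketch of the private-IP approach, assuming the VPN (or similar) reachability described above, with the ASG name and Actuator path as placeholder assumptions:

import boto3
import urllib.request

asg = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Find the instance IDs currently in the Auto Scaling group.
group = asg.describe_auto_scaling_groups(
    AutoScalingGroupNames=["my-app-asg"]  # placeholder ASG name
)["AutoScalingGroups"][0]
instance_ids = [i["InstanceId"] for i in group["Instances"]]

# Resolve each instance's private IP and POST to its refresh endpoint.
if instance_ids:
    for reservation in ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]:
        for inst in reservation["Instances"]:
            url = f"http://{inst['PrivateIpAddress']}:8080/actuator/refresh"  # assumed path/port
            urllib.request.urlopen(urllib.request.Request(url, method="POST"))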

AWS - Abnormal Data Transfer OUT

I'm consistently being charged for a surprisingly high amount of data transfer out (from Amazon to the Internet).
I looked into the Usage Reports for the past few months and found that the data transfer out was coming from an Application Load Balancer (ALB) sitting between the Internet and multiple nodes of my application (internal IPs).
I also noticed that DataTransfer-Out-Bytes is very close to DataTransfer-In-Bytes on the same load balancer, which is weird (coincidence?). I was expecting the response to each request to be far smaller than the request itself.
So, I enabled flow logs on the ALB for a few minutes and found the following:
Requests coming from the Internet (public IPs) into the ALB = ~0.47 GB;
Requests going from the ALB to application servers in the same availability zone = ~0.47 GB – the ALB simply passing requests through to the application servers, as expected, so about the same amount of traffic;
Responses from the application servers back into the same ALB = ~0.04 GB – as expected, responses generate far less traffic back into the ALB; usually a 1 KB request gets a simple "HTTP 200 OK" response;
Responses from the ALB back to the external IP addresses = ~0.43 GB – this was mind-blowing. I was expecting ~0.04 GB, the same amount received from the application servers.
Unfortunately, the ALB does not let me use packet sniffers (e.g. tcpdump) to see what is actually coming in and out. Is there anything I'm missing? Any help will be much appreciated. Thanks in advance!
Ricardo.
I believe the next step in your investigation would be to enable ALB access logs and see whether you can correlate the "sent_bytes" field in the ALB access log with either your flow log or your bill.
For information on ALB access logs see: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
There is more than one way to analyze the ALB access logs, but I've always been happy using Athena; please see: https://aws.amazon.com/premiumsupport/knowledge-center/athena-analyze-access-logs/
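If you would rather not set Athena up, a quick local pass over downloaded log files gives the same totals; here is a sketch in Python, assuming the logs have been synced to a local directory, with the field positions taken from the documented ALB access log format (verify them against the docs for your log version):

import glob
import gzip
import shlex

total_in = total_out = 0
for path in glob.glob("alb-logs/*.log.gz"):   # assumed local download directory
    with gzip.open(path, "rt") as f:
        for line in f:
            fields = shlex.split(line)        # handles the quoted fields
            total_in += int(fields[10])       # received_bytes
            total_out += int(fields[11])      # sent_bytes

print(f"received: {total_in/1e9:.2f} GB, sent: {total_out/1e9:.2f} GB")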

Forward clients to a specific ENI through ALB

I don't know if it's even possible, but I will still ask. The thing is that I want (using ECS) one service A with tasks that do some work with the clients (create a TCP connection, then form a group from multiple players and tell each player that they have been placed in this group). Then I want these clients to make requests to one specific task (some ENI with a private IP, because I use awsvpc) from another service B behind an ALB (and that task then sends a response to those clients and starts working with them).
So my question is: how can I forward multiple clients to the same specific ENI if that ENI is behind an ALB? Maybe in service A's tasks I should use the AWS SDK to figure out the IPs of service B's tasks? But I still don't know how to reach that task by private IP. Is it even possible to "tell" the ALB that I want to connect to some specific ENI?
Yes, you can configure the ALB to route to a specific IP. The listener on your ALB has routing rules that you can edit. Rules can be based on the domain name and path to which the HTTP request was sent.
Here is a detailed Tutorial on how to do that.
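In boto3 terms, a minimal sketch of that setup might look as follows; it uses an ip-type target group so a task's private IP can be registered directly, and the VPC ID, listener ARN, IP, and path pattern are all placeholders:

import boto3

elbv2 = boto3.client("elbv2")

# Target group whose targets are private IPs rather than instances.
tg = elbv2.create_target_group(
    Name="task-b-direct",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC
    TargetType="ip",
)["TargetGroups"][0]

# Register the specific task's private IP as the sole target.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "10.0.1.25"}],   # placeholder task ENI IP
)

# Route a distinct path on the ALB listener to that target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/group-42/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

Service A could hand each group of clients a distinct path (or hostname), so the ALB rule, not the client, decides which task's IP receives them.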

Limit number of connections to instances with AWS ELB

We are using an AWS Classic ELB for our service, and our service can only serve x requests at a time. If the number of requests is greater than x, we do not want to route the excess requests to the instances, but neither do we want to lose those requests. We would like to limit the number of connections to the instances registered with the ELB. Is there an ELB setting to configure the maximum number of connections per instance?
Another solution I could find was ELB connection draining, but based on the ELB doc [1], using connection draining will mark the instance as OutOfService after serving in-flight requests. Does that mean the instance will be terminated and deregistered from the ELB after the in-flight requests are served? We do not want to terminate or deregister the instances; we just want to limit the number of connections to them. Any solutions?
[1] http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-conn-drain.html
An ELB is meant to spread traffic evenly across the instances registered with it. If you have more traffic, you spin up more instances to deal with it. This is generally why a load balancer is matched with an Auto Scaling group: the Auto Scaling group looks at the constraints you set and, based on those, either spins up more instances or pulls them down (i.e. when your traffic starts to slow down).
Connection draining is meant for pulling traffic away from bad instances so it doesn't get lost. Bad instances are ones that aren't passing health checks because something on the instance is broken. The ELB by itself doesn't terminate instances; that's another part of what the Auto Scaling group is meant to do (basically terminate the bad instance and spin up a new instance to replace it). All the ELB does is stop sending traffic to it.
It appears your situation is:
Users are sending API requests to your Load Balancer
You have several instances associated with your Load Balancer to process those requests
You do not appear to be using Auto Scaling
You do not always have sufficient capacity to respond to incoming requests, but you do not want to lose any of the requests
In situations where requests come at a higher rate than you can process them, you basically have three choices:
You could put the messages into a queue and consume them when capacity is available. You could either put everything in a queue (simple), or only use a queue when things are too busy (more complex).
You could scale to handle the load, either by using Auto Scaling to add additional Amazon EC2 instances or by using AWS Lambda to process the requests (Lambda automatically scales).
You could drop the requests that you are unable to process. Unless you have implemented a queue, this will happen at some point if requests rise above your capacity to process them.
The best solution is to use AWS Lambda functions rather than Amazon EC2 instances. Lambda ties directly into Amazon API Gateway, which can front the API requests and provide security, throttling and caching.
The simplest method is to use Auto Scaling to increase the number of instances to try to handle the volume of requests you have arriving. This is best when there are predictable usage patterns, such as high loads during the day and less load at night. It is less useful when spikes occur in short, unpredictable periods.
To fully guarantee no loss of requests, you would need to use a queue. Rather than requests going directly to your application, an initial layer would receive each request and push it into a queue. A backend process would then process the message and return a result that is somehow passed back as a response. (Providing responses to messages passed via a queue is more difficult because there is a disconnect between the request and the response.)
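A minimal sketch of that queue layer, assuming Amazon SQS, with the queue URL and the processing logic as placeholders:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/requests"  # placeholder

def enqueue(request_body: str):
    # The thin front end: accept the request and queue it immediately.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=request_body)

def process(body: str):
    # Placeholder for your capacity-limited application work.
    pass

def worker():
    # The backend: drain the queue only as fast as capacity allows.
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            process(msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])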
AWS ELB has practically no limit on incoming requests. If your application can handle only N connections, go with multiple servers behind the ELB and set the ELB health check URL to your application's URL. Once your application is unable to respond to requests, the ELB automatically forwards requests to another server behind it, so you won't miss any requests.