RESTful API maximum limit: API queue - amazon-web-services

I am developing a REST service using Spring Boot. The service takes an input file, performs some operations on it, and returns the processed file.
I know that in Spring Boot we have the configuration property "server.tomcat.max-threads", which I have capped at 400.
My REST application will be deployed on a cluster.
I want to understand how I should handle the case where more than 400 requests arrive and my cluster has only one node.
Basically, I want to understand the standard way of serving more requests than "max-threads-per-node × N nodes" in a cloud solution.

Welcome to AWS and cloud computing in general. What you have described is system elasticity, which is made very easy and accessible in this ecosystem.
Have a look at AWS Auto Scaling. It is a service that will monitor your application and automatically scale out to meet increasing demand and scale in to save costs when demand is low.
You can set triggers for this. For example, if you know that your application load is a function of memory usage, you can add nodes to the cluster whenever memory usage hits 80%. Read more about the various scaling policies here.
One such scaling metric is ALBRequestCountPerTarget. It will scale the number of nodes in the cluster to maintain the average request count per node (target) in the cluster. With some buffer, you can set this to 300 and achieve what you are looking for. Read more about this in the docs.
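If you want to wire this up programmatically, here is a minimal sketch using boto3; the Auto Scaling group name and the ALB/target-group resource label are placeholders you would replace with your own:

```python
# A minimal sketch: attach a target-tracking scaling policy to an existing
# Auto Scaling group so it keeps ~300 requests per instance.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-rest-service-asg",          # placeholder ASG name
    PolicyName="keep-300-requests-per-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # Format: app/<lb-name>/<lb-id>/targetgroup/<tg-name>/<tg-id> (placeholder below)
            "ResourceLabel": "app/my-alb/1234567890abcdef/targetgroup/my-tg/0123456789abcdef",
        },
        # Leave headroom below the ~400 worker threads per node.
        "TargetValue": 300.0,
    },
)
```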

Related

Best AWS RDS instance for my requirements

I have a 3 GB database in an AWS RDS t2.micro instance. My CPU credit balance is zero most of the time, and my API calls are taking a long time. I update data daily, so I interact with RDS frequently. What type of instance should I choose to make my API calls faster?
Thank you.
Enable X-Ray tracing so you can see how long each request takes.
https://aws.amazon.com/xray/
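As a rough illustration, here is what instrumenting a Python/Flask service with the X-Ray SDK could look like (the framework and service name are assumptions, since you haven't mentioned your stack):

```python
# A minimal sketch: trace incoming requests and downstream calls with X-Ray.
from aws_xray_sdk.core import xray_recorder, patch_all
from aws_xray_sdk.ext.flask.middleware import XRayMiddleware
from flask import Flask

app = Flask(__name__)
xray_recorder.configure(service="my-api")   # hypothetical service name
XRayMiddleware(app, xray_recorder)          # traces every incoming request
patch_all()                                 # instruments boto3, requests, etc. for downstream calls

@app.route("/items")
def items():
    # The database/API work you want to time would go here.
    return {"ok": True}
```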
A slow API call can have a lot of causes:
your AWS region is far away, or the internet connection is just slow
cold start of a Lambda https://lumigo.io/blog/this-is-all-you-need-to-know-about-lambda-cold-starts/
processing time of the Lambda
database throttling
using a REST API in API Gateway instead of an HTTP API https://aws.amazon.com/blogs/compute/building-better-apis-http-apis-now-generally-available/
Analyze your application and find out where the bottleneck is.
Most of the time it's not your database.
I can help you further if you:
provide an architectural diagram
take a screenshot of the Monitoring tab of the RDS instance
show me your response times and X-Ray traces.
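For the monitoring part, you don't have to rely on screenshots alone; a small sketch like the following (using boto3, with a placeholder DB instance identifier) pulls the same CPUCreditBalance numbers you see in the RDS Monitoring tab:

```python
# Fetch the CPU credit balance of a burstable RDS instance over the last 6 hours.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],  # placeholder
    StartTime=end - timedelta(hours=6),
    EndTime=end,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```

If the credit balance sits at zero, the t2 instance is being throttled to its baseline CPU, which by itself can explain slow calls.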

Performance testing for serverless applications in AWS

In traditional performance automation testing:
There is an application server where all requests are received. So in this case we have the server configuration (CPU, RAM, etc.) available to perform load testing (of, let's say, 5K concurrent users) using JMeter or any other load-test tool and check server performance.
In the case of AWS serverless, there is no server, so to speak: all servers are managed by AWS. The code only resides in Lambdas, and AWS decides at runtime how to balance the load when volumes are high.
So now we have a web app hosted on AWS using the Serverless Framework, and we want to measure its performance for 5K concurrent users. With no backend server information, the only option here is to rely on frontend or browser-based response times. Should this suffice?
Is there a better way to check the performance of serverless applications?
I haven't worked with AWS, but in my opinion performance testing of serverless applications should be done in much the same way as for traditional applications running on your own physical servers.
Despite the name "serverless", physical servers are still used (though they are managed by AWS).
So I would approach this task with the following steps:
send backend metrics (response time, request count and so on) to some metrics system (Graphite, Prometheus, etc.)
build a dashboard in this metrics system (ideally you should see request count and response time per instance, as well as the number of instances)
take a load-testing tool (JMeter, Gatling or whatever) and run your load-test scenario
During and after the test you will see how many requests your app processes, its response times, and how the instance count changes depending on the number of concurrent requests.
This way you stay agnostic of the AWS management tools (though AWS probably has some management dashboard, and it would be good to compare its results afterwards).
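As a concrete example of the load-test step, here is a minimal scenario using Locust (a Python-based alternative to JMeter/Gatling); the host and endpoint are placeholders:

```python
# locustfile.py - a minimal load-test scenario hitting one endpoint.
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user waits 1-3 s between requests

    @task
    def call_endpoint(self):
        # Placeholder endpoint; replace with your real API path.
        self.client.get("/api/items")

# Run with, for example:
#   locust -f locustfile.py --host https://your-api.example.com -u 5000 -r 100
# (-u = number of concurrent users, -r = spawn rate per second)
```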
"Loadtesting" a serverless application is not the same as that of a traditional application. The reason for this is that when you write code that will run on a machine with a fixed amount CPU and RAM, many HTTP requests will be processed on that same machine at the same time. This means you can suffer from the noisy-neighbour effect where one request is consuming so much CPU and RAM that it is negatively affecting other requests. This could be for many reasons including sub-optimal code that is consuming a lot of resources. An attempted solution to this issue is to enable auto-scaling (automatically spin up additional servers if the load on the current ones reaches some threshold) and load balancing to spread requests across multiple servers.
This is why you need to load test a traditional application; you need to ensure that the code you wrote is performant enough to handle the influx of X number of visitors and that the underlying scaling systems can absorb the load as needed. It's also why, when you are expecting a sudden burst of traffic, you will pre-emptively spin up additional servers to help manage all that load ahead of time. The problem is you cannot always predict that; a famous person mentions your service on Facebook and suddenly your systems need to respond in seconds and usually can't.
In serverless applications, a lot of the issues around noisy neighbours in compute are removed for a number of reasons:
A lot of what you usually did in code is now done in a managed service; most web frameworks route HTTP requests in code, whereas API Gateway in AWS takes that over.
Lambda functions are isolated and each instance of a Lambda function has a certain quantity of memory and CPU allocated to it. It has little to no effect on other instances of Lambda functions executing at the same time (this also means if a developer makes a mistake and writes sub-optimal code, it won't bring down a server; serverless compute is far more forgiving to mistakes).
All of this is not to say you shouldn't do your homework to make sure your serverless application can handle the load; you just do it differently. Instead of trying to push fake users at your application to see if it can handle them, consult the documentation for the various services you use. AWS, for example, publishes the limits of these services and guarantees those numbers as part of the service. For example, API Gateway has a limit of 10,000 requests per second. Do you expect traffic greater than 10,000 requests per second? If not, you're good! If you do, contact AWS and they may be able to increase that limit for you. Similar limits apply to AWS Lambda, DynamoDB, S3 and all other services.
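If you want to check those published limits programmatically rather than digging through the docs, a sketch like this with boto3 and the Service Quotas API could work (the service code "apigateway" and the name matching are assumptions; verify them for your account and region):

```python
# A sketch: list API Gateway quotas and print the request-rate related ones.
import boto3

quotas = boto3.client("service-quotas")
resp = quotas.list_service_quotas(ServiceCode="apigateway")  # service code assumed
for quota in resp["Quotas"]:
    if "per second" in quota["QuotaName"].lower():
        print(quota["QuotaName"], quota.get("Value"))
```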
As you mentioned, since a serverless architecture (FaaS) has no physical or virtual server that we control, we cannot monitor the traditional metrics. Instead we can capture the following:
Auto scalability:
Since the main advantage of this platform is scalability, we need to check auto scalability by increasing the load.
More requests, lower response time:
When hit with a huge number of requests, a traditional server's response time increases, whereas this approach should keep it lower. We need to monitor the response time.
Lambda Insights in CloudWatch:
There is an option to monitor the performance of multiple Lambda functions: throttles, invocations and errors, memory usage, CPU usage and network usage. We can configure the Lambdas we need and monitor them in the 'Performance monitoring' view.
Container CPU and memory usage:
In CloudWatch, we can create a dashboard with widgets to capture the CPU and memory usage of the containers, task counts and load-balancer response time (if any).
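If you prefer pulling those numbers programmatically in addition to dashboards, a sketch like this with boto3 fetches the standard Lambda metrics for one function (the function name is a placeholder):

```python
# A sketch: fetch invocations, errors, throttles and average duration
# for one Lambda function over the last 30 minutes.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

def lambda_stat(metric_name, stat):
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName=metric_name,
        Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # placeholder
        StartTime=end - timedelta(minutes=30),
        EndTime=end,
        Period=60,
        Statistics=[stat],
    )
    return sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])

for metric, stat in [("Invocations", "Sum"), ("Errors", "Sum"),
                     ("Throttles", "Sum"), ("Duration", "Average")]:
    print(metric, [round(p[stat], 2) for p in lambda_stat(metric, stat)])
```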

AWS load balancing for a complex scenario

We are trying to implement an elastically scaling application on AWS, but currently, due to the complexity of the application's processing, I have an issue with the routing algorithm.
When we send a request (a request for a complex calculation), the application immediately returns a token to the user and starts calculating, so the user can come back with the token any day and access the calculated results. When there are more calculation requests, they are put in a queue and executed two at a time, as one calculation takes a considerable amount of CPU. As you can see, in this specific scenario:
the application's active connection count is very low, as we respond to the user with the token as soon as we get the request
CPU usage looks normal, as we do calculations two at a time
Considering these facts, with load-balancer-based routing we are facing the problem that elastic instances terminate before the queue is fully processed, and the queue grows really long because the load balancer has no idea about the queued requests.
To solve this, either we need to do the routing manually, or we need to find a way to let the load balancer know the queued request count (maybe with an API call). If you have an idea of how to do this, please help me. (I'm new to AWS.)
Any idea is welcome.
Based on the comments.
An issue observed with the original approach was premature termination of instances, since their scale-in/scale-out was based on CPU utilization only.
A proposed solution to rectify the issue bases the scaling activities on the length of the job queue. An example of such a solution is shown in the following AWS link:
Using Target Tracking with the Right Metric
In the example, the scaling is based on the following metric:
The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
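As a rough sketch of what publishing such a metric could look like, assuming the pending calculations sit in an SQS queue and the workers run in an Auto Scaling group (the queue URL, group name and namespace are placeholders):

```python
# Publish a "backlog per instance" custom metric that a target-tracking
# scaling policy can follow.
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/calc-jobs"  # placeholder
ASG_NAME = "calc-workers-asg"                                             # placeholder

# How many calculation requests are still waiting?
attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# How many worker instances are currently in the group?
group = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
instances = max(len(group["AutoScalingGroups"][0]["Instances"]), 1)

# Publish the metric the scaling policy tracks.
cloudwatch.put_metric_data(
    Namespace="CalcService",  # placeholder namespace
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Value": backlog / instances,
        "Unit": "Count",
    }],
)
```

You would still want to protect instances that are mid-calculation from being terminated on scale-in, for example with instance scale-in protection.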

Scaling Google Cloud Pull subscribers

I'm considering Google Cloud Pub/Sub and trying to determine whether to go with the pull or the push subscriber model.
Notably, the pull model is able to handle larger throughput:
Large volume of messages (many more than 1/second).
Efficiency and throughput of message processing is critical.
However, the push model can sit behind an HTTP load balancer, and is therefore able to auto-scale subscriber nodes during times when the number of queued messages exceeds the capacity of a single subscriber node.
The pull model is also more secure, because it does not require exposing sensitive operations in an HTTP endpoint.
The issue is: how can we scale subscriber nodes in the pull model? Is there something in GCP for this kind of situation?
There are several options for auto scaling with a pull subscriber:
Use GKE and set up autoscaling Deployments with Cloud Monitoring metrics. This allows you to scale the number of instances based on the num_undelivered_messages metric.
Use a GCE managed instance group and scale based on the num_undelivered_messages metric.
Use Dataflow to process messages from Pub/Sub and set up autoscaling.
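All three options key on the same num_undelivered_messages metric; here is a small sketch of reading it from Cloud Monitoring in Python (project and subscription IDs are placeholders):

```python
# Read the current backlog (num_undelivered_messages) for one subscription.
import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"            # placeholder
SUBSCRIPTION_ID = "my-subscription"  # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 300}}
)

series = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": (
            'metric.type = "pubsub.googleapis.com/subscription/num_undelivered_messages" '
            f'AND resource.labels.subscription_id = "{SUBSCRIPTION_ID}"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    # Points are returned most recent first.
    print("undelivered messages:", ts.points[0].value.int64_value)
```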
When you use a pull subscription, you need to be connected to the subscription; for the lowest latency, being connected full time is required. You can do this with Compute Engine.
Your compute instances pull from the subscription and consume the messages. When there is a huge amount of messages, the Compute Engine resource usage (CPU and memory) will increase. You can put these instances into a managed instance group (MIG) and set a scaling threshold, such as the amount of CPU used.
Of course, pulling messages is more efficient in terms of network bandwidth and protocol handshakes. However, it requires compute that is up full time, and the scaling velocity is slower.
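For reference, the worker process running on those Compute Engine instances could be as simple as this streaming-pull sketch (project and subscription IDs are placeholders):

```python
# A minimal streaming-pull worker: pull messages, process them, ack them.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")  # placeholders

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    print(f"Processing {message.message_id}")
    # ... do the actual work here ...
    message.ack()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result()  # block and process messages until interrupted
    except KeyboardInterrupt:
        streaming_pull_future.cancel()
        streaming_pull_future.result()
```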
If you consider a push subscription, the HTTPS protocol of course consumes more bandwidth and is not as efficient, but you can push the messages to Cloud Run or Cloud Functions. The scaling is very elastic and based on the traffic (number of messages pushed) rather than on CPU usage.
In addition, you can push Pub/Sub messages securely to Cloud Functions and Cloud Run by using the correct identity (service account) in your Pub/Sub subscription.
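On the push side, the receiving service just needs an HTTP endpoint that unwraps the Pub/Sub push envelope; a minimal sketch (using Flask for illustration, deployable to Cloud Run) could look like this:

```python
# A minimal push endpoint: decode the Pub/Sub envelope and ack with a 2xx.
import base64
from flask import Flask, request

app = Flask(__name__)

@app.route("/pubsub/push", methods=["POST"])
def pubsub_push():
    envelope = request.get_json()
    message = envelope.get("message", {})
    data = base64.b64decode(message.get("data", "")).decode("utf-8")
    print(f"Received message {message.get('messageId')}: {data}")
    # ... process the message ...
    # Note: verifying the OIDC token from the push subscription's service
    # account is omitted in this sketch.
    return ("", 204)  # any 2xx acks the message; other codes trigger a retry
```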

Tracking Usage per API key in a multi region application

I have an app deployed in 5 regions.
The latency between the regions varies from 150 ms to 300 ms.
Currently, we use the method outlined in this article (usage tracking part):
http://highscalability.com/blog/2018/4/2/how-ipdata-serves-25m-api-calls-from-10-infinitely-scalable.html
However, we export logs from Stackdriver to Cloud Pub/Sub. We then use Cloud Dataflow to count the number of requests consumed per API key and update it in a MongoDB Atlas database which is geo-replicated across the 5 regions.
In our app, we only read usage info from the nearest Mongo replica for low latency. The app never updates any usage data directly in Mongo, as that might incur a latency cost since the data has to be written to the primary, which may be in another region.
Updating the API key usage counter directly from the app in Mongo doesn't seem feasible because we have traffic coming in at 10,000 RPS, and due to the latency between regions I think it will run into other issues. This is just a hunch; so far I've not tested it. I came to this conclusion based on my reading of https://www.mongodb.com/blog/post/active-active-application-architectures-with-mongodb
One problem is that we end up paying for Cloud Pub/Sub and Dataflow. Are there strategies to avoid this?
I researched on Google but didn't find how other multi-region apps keep track of usage per API key in real time. I am not surprised; from my understanding, most apps operate in a single region for simplicity, and until now it was not feasible to deploy an app in multiple regions without significant overhead.
If you want real-time data, then the best option is to go with Dataflow. You could change the way the data arrives at Dataflow, for example using Stackdriver → Cloud Storage → Dataflow: instead of going through Pub/Sub you would go through Cloud Storage, so it's more a choice of convenience and of comparing the cost of each product for your use case. Here's an example of how it could be done with Cloud Storage.
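Whichever way the logs arrive (Pub/Sub or Cloud Storage), the counting step itself stays small; here is a rough Apache Beam sketch (the GCS paths and the "api_key" field name are assumptions about your log format, and writing the counts back to MongoDB is omitted):

```python
# Count requests per API key from exported JSON log lines.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def extract_api_key(line: str) -> str:
    # "api_key" is an assumed field name in the exported log entries.
    return json.loads(line).get("api_key", "unknown")

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "ReadLogs" >> beam.io.ReadFromText("gs://my-bucket/exported-logs/*.json")  # placeholder path
        | "ExtractKey" >> beam.Map(extract_api_key)
        | "CountPerKey" >> beam.combiners.Count.PerElement()
        | "Format" >> beam.MapTuple(lambda key, count: f"{key},{count}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/usage/per_api_key")          # placeholder path
    )
```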