Throttling of Flow not working when created from Route - akka

Consider routes containing all the HTTP services:
val routes: Route = ...
I wish to throttle the number of requests, so I used Route.handleFlow(routes) to create a flow and called the throttle method with a finite duration.
Finally, I created the HTTP binding using
Http().bindAndHandle(flowObjectAfterThrottling, hostname, port)
When HTTP requests are fired from a loop, the throttling is not obeyed by Akka.

One possibility is that the HTTP requests being "fired from a loop" may be using separate connections. Each incoming connection is throttled at the appropriate rate, but the aggregate throughput is higher than expected.
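For illustration, here is a minimal sketch of the setup described in the question, assuming pre-10.2 Akka HTTP with bindAndHandle (the placeholder routes value, the host/port, and the 10 req/s rate are illustrative assumptions). The key point is that bindAndHandle materializes the flow once per accepted connection, so the throttle budget applies per connection rather than across the whole server:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.{ActorMaterializer, ThrottleMode}
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("throttle-demo")
implicit val materializer: ActorMaterializer = ActorMaterializer()

val routes: Route = complete("ok")   // placeholder standing in for the routes value from the question

// The throttle is part of the per-connection blueprint: every accepted
// connection is materialized separately and gets its own 10 req/s budget,
// so N parallel connections can reach roughly N * 10 req/s in aggregate.
val flowObjectAfterThrottling =
  Route.handleFlow(routes).throttle(10, 1.second, 10, ThrottleMode.Shaping)

Http().bindAndHandle(flowObjectAfterThrottling, "0.0.0.0", 8080)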
Use Configurations Instead
You don't need to write code to set limiting rates for your Route.
If you are only concerned with the consumption of a resource, such as disk or RAM, then you can remove the rate logic and use Akka configuration settings instead:
akka.http.server {
  # The maximum number of concurrently accepted connections when using the
  # `Http().bindAndHandle` methods.
  max-connections = 1024

  # The maximum number of requests that are accepted (and dispatched to
  # the application) on one single connection before the first request
  # has to be completed.
  pipelining-limit = 16
}
This doesn't provide the ability to set a maximum frequency, but it does at least allow you to specify a maximum concurrent usage, which is usually sufficient for resource protection.

Related

High data usage with AWS IOT Core

I have developed an application that does simple publish-subscribe messaging with the AWS IoT Core service.
As required by the AWS IoT SDK, I need to call aws_iot_mqtt_yield() frequently.
The following is the description of the function aws_iot_mqtt_yield:
Called to yield the current thread to the underlying MQTT client. This
time is used by the MQTT client to manage PING requests to monitor the
health of the TCP connection as well as periodically check the socket
receive buffer for subscribe messages. Yield() must be called at a
rate faster than the keepalive interval. It must also be called at a
rate faster than the incoming message rate as this is the only way the
client receives processing time to manage incoming messages. This is
the outer function which does the validations and calls the internal
yield above to perform the actual operation. It is also responsible
for client state changes
I am calling this function at a period of 1 second.
Since it sends PING packets over the TCP connection, it creates too much data usage in the long run, because the system is idle most of the time.
My system also runs over LTE, and paying more for idle time is not acceptable to us.
I tried extending the period from 1 second to 30 seconds to limit our data usage, but that adds up to 30 seconds of latency to messages received from the cloud.
My requirement is fast connectivity with low additional data usage for maintaining the connection with AWS.

How to implement resiliency (retry) in a nested service call chain

We have a webpage that queries an item from an API gateway which in turn calls a service that calls another service and so on.
Webpage --> API Gateway --> service#1 --> service#2 --> data store (RDMS, S3, Azure blob)
We want to make the operation resilient so we added a retry mechanism at every layer.
Webpage --retry--> API Gateway --retry--> service#1 --retry--> service#2 --retry--> data store.
This, however, could cause a cascading failure, because if the data store doesn't respond in time, every layer will time out and retry. In other words, if each layer has the same connection timeout and is configured to retry 3 times, there will be a total of 81 retries to the data store (which is called a retry storm).
One way to fix this is to increase the timeout at each layer in order to give the layer below time to retry.
Webpage --5m timeout--> API Gateway --2m timeout--> service#1
This however is unacceptable because the timeout at the webpage will be too long.
How should I address this problem?
Should there only be one layer that retries? Which layer? And how can the layer know if the error is transient?
A couple of possible solutions (and you can/should use both) would be to retry only on specific conditions and to implement rate limiters/circuit breakers.
Retry On is a technique where you don't retry on every condition, but only on specific conditions. This could be a specific error code or a specific header value. E.g. in your current situation, DO NOT retry on timeouts; only retry on server failures. In addition, you could have each layer retry on different conditions.
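As a rough illustration, here is a minimal sketch of "retry on" in Scala (Reply, callDownstream and the retry budget are illustrative placeholders, not part of your setup): only 5xx replies are retried, while a timeout surfaces immediately as a failed future instead of amplifying load.

import scala.concurrent.{ExecutionContext, Future}

final case class Reply(status: Int, body: String)

def callDownstream(): Future[Reply] = ??? // placeholder for the real call to the next layer

// Retry only on server failures (status >= 500). A failed future, e.g. a
// timeout, is deliberately NOT retried here and propagates to the caller.
def retryOnServerError(retriesLeft: Int)(implicit ec: ExecutionContext): Future[Reply] =
  callDownstream().flatMap {
    case r if r.status >= 500 && retriesLeft > 0 => retryOnServerError(retriesLeft - 1)
    case r                                       => Future.successful(r)
  }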
Rate limiting would be to stick either a local or a global rate limiter service inline with the connections. This just helps to short-circuit the thundering herd in case it starts up. E.g. rate limit the data layer to X req/s (insert real values here) and the gateway to Y req/s, and then even if a service attempts lots of retries, they won't pass too far down the chain. Similar to this is circuit breaking, where each layer only permits X active connections to any downstream; it's just another way to slow those retry storms.
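And a minimal sketch of the circuit-breaker part using akka.pattern.CircuitBreaker (the thresholds and callDataStore are illustrative assumptions): once a few consecutive calls fail or time out, the breaker opens and further calls fail fast instead of piling more retries onto the data store.

import akka.actor.ActorSystem
import akka.pattern.CircuitBreaker
import scala.concurrent.Future
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("breaker-demo")
import system.dispatcher

val breaker = new CircuitBreaker(
  system.scheduler,
  maxFailures  = 5,          // open after 5 consecutive failures (illustrative value)
  callTimeout  = 2.seconds,  // a slow downstream call counts as a failure
  resetTimeout = 30.seconds  // half-open probe after 30 seconds
)

def callDataStore(): Future[String] = ??? // placeholder for the real downstream call

// While the breaker is open this fails immediately with a
// CircuitBreakerOpenException, shielding the data store from retry storms.
def protectedCall(): Future[String] = breaker.withCircuitBreaker(callDataStore())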

HttpClient Akka timeout settings

I am trying to implement an HTTP client in my Akka app in order to consume a 3rd party API.
What I am trying to configure are the timeout and the number of retries in case of failure.
Is the below code the right approach to do it?
val timeoutSettings =
  ConnectionPoolSettings(config)
    .withIdleTimeout(10.minutes)
    .withMaxConnections(3)

val responseFuture: Future[HttpResponse] =
  Http().singleRequest(
    HttpRequest(uri = "https://api.com"),
    settings = timeoutSettings
  )
That is not the right approach. (Below I refer to the settings via the .conf file rather than the programmatic approach, but the two correspond easily.)
idle-timeout corresponds to the
time after which an idle connection pool (without pending requests)
will automatically terminate itself
on the pool level, and on akka.http.client level to
The time after which an idle connection will be automatically closed.
So you'd rather want the connection-timeout setting.
And for the retries it's the max-retries setting.
The max-connections setting is only:
The maximum number of parallel connections that a connection pool to a
single host endpoint is allowed to establish
See the official documentation
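For reference, here is a minimal programmatic sketch along those lines, assuming Akka HTTP 10.1.x; the mapping of "connection-timeout" to the client connecting-timeout setting and the concrete values are my assumptions, not something from the question.

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.http.scaladsl.settings.{ClientConnectionSettings, ConnectionPoolSettings}
import akka.stream.ActorMaterializer
import scala.concurrent.Future
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("client-demo")
implicit val materializer: ActorMaterializer = ActorMaterializer()

val poolSettings =
  ConnectionPoolSettings(system)
    .withMaxRetries(3)                        // akka.http.host-connection-pool.max-retries
    .withConnectionSettings(
      ClientConnectionSettings(system)
        .withConnectingTimeout(5.seconds))    // akka.http.client.connecting-timeout (assumed mapping)

val responseFuture: Future[HttpResponse] =
  Http().singleRequest(HttpRequest(uri = "https://api.com"), settings = poolSettings)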

AWS API Gateway Cache - Multiple service hits with burst of calls

I am working on a mobile app that will broadcast a push message to hundreds of thousands of devices at a time. When each user opens their app from the push message, the app will hit our API for data. The API resource will be identical for each user of this push.
Now let's assume that all 500,000 users open their app at the same time. API Gateway will get 500,000 identical calls.
Because all 500,000 nearly concurrent requests are asking for the same data, I want to cache it. But keep in mind that it takes about 2 seconds to compute the requested value.
What I want to happen
I want API Gateway to see that the data is not in the cache, let the first call through to my backend service while the other requests are held in a queue, populate the cache from that first call, and then respond to the other 499,999 requests using the cached data.
What is (seems to be) happening
API Gateway, seeing that there is no cached value, is sending every one of the 500,000 requests to the backend service! So I will be recomputing the value with some complex db query way more times than resources will allow. This happens because the last call comes into API Gateway before the first call has populated the cache.
Is there any way I can get this behavior?
I know that, based on my example, I could perhaps prime the cache by invoking the API call myself just before broadcasting the bulk push job, but the actual use case is slightly more complicated than my simplified example. Rest assured, though, solving this simplified use case will solve what I am trying to do.
If you anticipate that kind of burst concurrency, priming the cache yourself is certainly the best option. Have you also considered adding throttling to the stage/method to protect your backend from a large surge in traffic? Clients could be instructed to retry on throttles and they would eventually get a response.
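For what it's worth, a minimal sketch of the priming idea under stated assumptions (the endpoint URL and the sendBulkPush call are hypothetical placeholders): hit the endpoint once, let API Gateway cache the response, and only then trigger the broadcast.

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.ActorMaterializer
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("cache-priming")
implicit val materializer: ActorMaterializer = ActorMaterializer()
import system.dispatcher

def sendBulkPush(): Future[Unit] = ??? // hypothetical: triggers the bulk push broadcast

val primedThenPushed: Future[Unit] =
  for {
    response <- Http().singleRequest(HttpRequest(uri = "https://api.example.com/item")) // warms the cache (placeholder URL)
    _         = response.discardEntityBytes()   // we only care that the cache is now populated
    _        <- sendBulkPush()                  // broadcast only after the cached value exists
  } yield ()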
I'll bring your feedback and proposed solution to the team and put it on our backlog.

When is it necessary to queue the requests from the client

I have heard that a server has a limit on the number of requests it can process.
So if the requests from clients exceed that number, people queue the requests.
So I have a few questions:
1 When
How do I decide whether it is necessary to queue the requests? How do I measure that maximum number?
2 How
If the queue is unavoidable, where should the queueing be done?
For a J2EE application using Spring Web MVC as the framework, should the queue be put in the Controller, the Model, or the DAO?
3 Is there an approach that avoids the queue while still providing the service?
First you have to establish what your limit at the server actually is. It's likely a limit on the frequency of requests, i.e. maybe you are limited to sending 10 requests per second. If that's the case, you would need to keep a count of how many requests you have sent in the current second; before sending a request, check whether you would breach the limit, and if so make the thread wait until the second is up, otherwise you are free to send the request. This thread would be reading from a queue of outbound requests.
If the server limit is determined in another way, i.e. dynamically based on its current load, which sounds like it might be your case, there must be a continuous feed of request limits which you must process to determine the current limit. Once you have this limit, you can process the requests in the same way as described in the first paragraph.
As for where to put the queue and the associated logic, I'd put it in the controller.
I don't think there is a way to avoid the queue: you are forced to throttle your requests, and therefore you must queue your outbound requests internally so that they are not lost and will be processed at some point in the future.
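To make the first paragraph concrete, here is a minimal sketch of such an outbound queue using Akka Streams (the 10 req/s limit, the buffer size, and sendToServer are illustrative assumptions; the same idea can be expressed in any framework): the controller offers requests to the queue, and a single consumer releases them at the allowed rate.

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, OverflowStrategy, ThrottleMode}
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("request-queue")
implicit val materializer: ActorMaterializer = ActorMaterializer()
import system.dispatcher

final case class OutboundRequest(payload: String)

def sendToServer(request: OutboundRequest): Future[Unit] = ??? // placeholder for the real remote call

// Buffer up to 1000 pending requests and release at most 10 per second.
val requestQueue =
  Source.queue[OutboundRequest](bufferSize = 1000, OverflowStrategy.backpressure)
    .throttle(10, 1.second, 10, ThrottleMode.Shaping)
    .mapAsync(parallelism = 1)(sendToServer)
    .to(Sink.ignore)
    .run()

// The controller offers work to the queue instead of calling the server directly.
requestQueue.offer(OutboundRequest("example payload"))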