Spring concurrency: how Spring handles multiple requests at the same time

Suppose 1000 requests are sent to the server at the same time by 1000 different users. How does Spring serve each request? Since the default scope of a Spring bean is singleton, how does this work?

It's a good question. You can use scope="prototype" for each of your controller beans, which will make your bean essentially non-singleton... but do keep in mind NOT to define mutable instance variables. I have faced a situation in the past where, even though I set scope="prototype", my instance variables were still shared between user sessions.
Stick to immutability.
Note that the default singleton scope is usually fine: the servlet container serves each request on its own thread, and a shared singleton bean is thread-safe as long as it holds no mutable state, because all per-request data then lives in method-local variables.
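To make the threading model concrete, here is a minimal sketch of the safe, stateless style, assuming a standard Spring MVC setup; GreetingService and all other names are illustrative:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical collaborator; any stateless service works the same way.
interface GreetingService {
    String greet(String name);
}

// A singleton controller that is safe under concurrent requests:
// it holds no mutable instance state, only an injected collaborator
// that never changes after startup.
@RestController
public class GreetingController {

    private final GreetingService greetingService;

    public GreetingController(GreetingService greetingService) {
        this.greetingService = greetingService;
    }

    @GetMapping("/greet")
    public String greet(@RequestParam String name) {
        // Per-request state lives in method-local variables, which are
        // confined to the servlet-container thread serving this request.
        return greetingService.greet(name);
    }

    // Unsafe by contrast: a mutable field such as
    //   private String lastRequestedName;
    // would be shared by every request thread and race.
}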

Related

Is there a limit for concurrent requests for one axios instance?

So I am creating an axios instance that connects to some API like this:
const instance = axios.create(...)
I want to know whether there is a limit to how many concurrent/parallel requests axios can make with that single instance. I ask because I have a back-end app that receives hundreds of requests a minute, and that number will only keep going up, so I want to understand how the axios instance behaves under the hood and whether I need to do anything to avoid some sort of overload and requests getting dropped, delayed, or unfulfilled.
After some research and reading of the axios documentation, it seems the answer does not depend on axios itself but rather on built-in Node features like the http module and its Agent. If you've created an axios instance and are using an Agent with keepAlive: true, then the Agent will throttle and decide how requests are sent out. The answer to this Stack Overflow question goes more in depth on this:
Concurrent Requests In Node
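To illustrate, here is a sketch of putting an explicit cap on concurrency through Node's Agent; the URL and the maxSockets value are assumptions, not axios defaults (out of the box, maxSockets is Infinity):

import axios from "axios";
import http from "http";
import https from "https";

// With keepAlive enabled, Node's Agent pools and reuses sockets.
// maxSockets caps concurrent connections per origin; requests beyond
// the cap queue inside the Agent rather than being dropped.
const instance = axios.create({
  baseURL: "https://api.example.com", // illustrative URL
  httpAgent: new http.Agent({ keepAlive: true, maxSockets: 50 }),
  httpsAgent: new https.Agent({ keepAlive: true, maxSockets: 50 }),
  timeout: 10_000, // fail fast instead of waiting in the queue forever
});

export default instance;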

Communicate internally between Google Cloud Functions?

We've created a Google Cloud Function that is essentially an internal API. Is there any way for other internal Google Cloud Functions to talk to the API function without exposing an HTTP endpoint for that function?
We've looked at Pub/Sub, but as far as we can see, you can send a request (so to speak) but you can't receive a response.
Ideally, we don't want to expose an HTTP endpoint because of the extra security ramifications, and we are trying to follow a microservice approach where every function is its own entity.
I sympathize with your microservices approach and your wish to keep your services independent. You can accomplish this without opening all your functions to HTTP. Chris Richardson describes a similar case on his excellent website microservices.io:
You have applied the Database per Service pattern. Each service has its own database. Some business transactions, however, span multiple services, so you need a mechanism to ensure data consistency across services. For example, let's imagine that you are building an e-commerce store where customers have a credit limit. The application must ensure that a new order will not exceed the customer's credit limit. Since Orders and Customers are in different databases, the application cannot simply use a local ACID transaction.
He then goes on:
An e-commerce application that uses this approach would create an order using a choreography-based saga that consists of the following steps:
1. The Order Service creates an Order in a pending state and publishes an OrderCreated event.
2. The Customer Service receives the event and attempts to reserve credit for that Order. It publishes either a Credit Reserved event or a CreditLimitExceeded event.
3. The Order Service receives the event and changes the state of the order to either approved or cancelled.
Basically, instead of a direct function call that returns a value synchronously, the first microservice sends an asynchronous "request event" to the second microservice, which issues a "response event" that the first service picks up. You would use Cloud Pub/Sub to send and receive the messages.
You can read more about this under the Saga pattern on his website.
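As a sketch of what this could look like with the @google-cloud/pubsub Node client: the topic names, payloads, and trigger wiring below are assumptions for illustration, not a prescribed setup.

import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();

// "Requesting" side: instead of calling the other function over HTTP,
// publish a request event. Topic names are illustrative.
export async function createOrder(orderId: string): Promise<void> {
  await pubsub
    .topic("order-created")
    .publishMessage({ json: { orderId, state: "PENDING" } });
}

// "Responding" side: a function deployed with a Pub/Sub trigger on the
// order-created topic; it publishes its own response event, which the
// order service subscribes to in turn.
export async function reserveCredit(message: { data: string }): Promise<void> {
  const order = JSON.parse(Buffer.from(message.data, "base64").toString());
  const creditOk = true; // the actual credit check is elided here
  await pubsub
    .topic(creditOk ? "credit-reserved" : "credit-limit-exceeded")
    .publishMessage({ json: { orderId: order.orderId } });
}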
The most straightforward thing to do is wrap your API up into a regular function or object, and deploy that extra code along with each function that needs to use it. You may even wish to fully modularize the code, as you would expect from an npm module.
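For instance, under the assumption of a Node runtime, the shared code could look like this (all names are illustrative):

// shared/api.ts: the internal API packaged as a plain module that each
// function bundles at deploy time.
export interface Customer {
  id: string;
  creditLimit: number;
}

export async function lookupCustomer(id: string): Promise<Customer> {
  // In a real deployment this would query Firestore or another datastore.
  return { id, creditLimit: 1000 };
}

// Any function that needs the API imports the module directly instead
// of making a network hop:
//   import { lookupCustomer } from "./shared/api";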

Django expire cache every Nth HTTP request

I have a Django view that can be cached; however, the cache needs to be recycled on every 100th HTTP request to the view.
I cannot use interval-based caching here, since the rate of requests will keep changing with traffic.
How would I implement this? Are there other nice methods besides maintaining a counter (in the db)?
Here are some ideas / feedback:
You're going to have to centralize something if you need the count to be exact. The Redis idea in the linked solution below looks OK if you can't put the counter in the main DB (see the sketch after this list); if Redis is already in your stack, I'd use that. If the 100 requests can be per user and you're using sessions, you could attach a counter to the session.
implementing a counter that counts requests with django
Not centralizing the counter outside of the web server would mean your app needs to be, and stay, a single process in order to keep counts in memory, and the count would reset whenever the server restarts. Not a great idea IMO...
If you really can't make it work with anything else, you could hack something like a request counter on your load balancer (...if the load balancer is a single machine you control, and you're comfortable doing that) and pass it as a header for Django to read.
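If Redis is available, a minimal sketch of the counter idea could look like this; build_payload and the key names are hypothetical:

import redis
from django.core.cache import cache

r = redis.Redis()  # assumes a reachable Redis instance

CACHE_KEY = "expensive_view_payload"  # illustrative key names
COUNTER_KEY = "expensive_view_hits"
N = 100

def get_payload():
    # INCR is atomic, so concurrent requests each see a distinct count
    # even across multiple web server processes.
    hits = r.incr(COUNTER_KEY)
    if hits % N == 0:
        cache.delete(CACHE_KEY)  # recycle on every Nth request
    payload = cache.get(CACHE_KEY)
    if payload is None:
        payload = build_payload()  # hypothetical expensive computation
        cache.set(CACHE_KEY, payload, timeout=None)
    return payload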

Configuring spray-servlet to avoid request bottleneck

I have an application which uses spray-servlet to bootstrap my custom Spray routing Actor via spray.servlet.Initializer. The requests are then handed off to my Actor via spray.servlet.Servlet30ConnectorServlet.
From what I can gather, the Servlet30ConnectorServlet simply retrieves my Actor out of the ServletContext that the Initializer had set when the application started, and hands the HttpServletRequest to my Actor's receive method. This leads me to believe that only one instance of my Actor will have to handle all requests. If my Actor blocks in its receive method, then subsequent requests will queue waiting for it to complete.
Now I realize that I can code my routing Actor to use detach() or a complete that returns a Future, but most of the documentation never alludes to having to do this.
If my above assumption is true (single Actor instance handling all requests), is there a way to configure the Servlet30ConnectorServlet to perhaps load balance the incoming requests amongst multiple instances of my routing Actor instead of just the one? Or is this something I'll have to roll myself by subclassing Servlet30ConnectorServlet?
I did some research and now better understand how spray-servlet works. It's not spray-servlet that dictates the strategy for how many Request Handler Actors are created, but rather the plumbing code provided with the example I based my application on.
My assumption all along was that spray-servlet would essentially work like a traditional Java EE application dispatcher in a handler-per-request type of fashion (or some reasonable variant of that notion). That is not the case because it is routing the request to an Actor with a mailbox, not some singleton HttpServlet.
I am now delegating the requests to a pool of actors in order to reduce our potential for bottleneck when our system is under load.
// Round-robin pool of request handler actors instead of a single instance.
val serviceActor = system.actorOf(RoundRobinPool(config.SomeReasonableSize).props(Props[BootServiceActor]), "my-route-actors")
I am still a bit baffled that the examples and documentation assume everyone will write non-blocking Request Handler Actors under spray. All of their documentation essentially demonstrates completing routes with plain (non-Future) values, yet there is no mention in their literature that maybe, just maybe, you might want to create a reasonably sized pool of Request Handler Actors to prevent a slew of requests from bottlenecking a single overworked Actor. Or it's possible I've overlooked it.
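For reference, the non-blocking style the documentation assumes looks roughly like this; a sketch only, with doExpensiveWork standing in for the real blocking call:

import scala.concurrent.Future
import spray.routing.HttpService

// Complete with a Future so the single routing actor never blocks in
// its receive method. doExpensiveWork is a hypothetical blocking call.
trait NonBlockingRoute extends HttpService {
  implicit def ec = actorRefFactory.dispatcher

  def doExpensiveWork(): String = "result"

  val route =
    path("work") {
      get {
        // The work runs on the dispatcher; the routing actor is free
        // to pick up the next request immediately.
        complete(Future(doExpensiveWork()))
      }
    }
}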

Multiple dispatchers for spray

I am wondering how to handle this specific case.
I have two ClientServices that I want to provide to a web application. By ClientService, I mean a client API that calls some external REST service; so we are in spray-client territory here.
The thing is, one of the two services can be quite intensive and time-consuming, but it is called less frequently than the other one, which is quicker but called very frequently.
I was thinking of having two dispatchers for the two ClientServices. Let's say we have the query API (ClientService1) and the classification API (ClientService2).
Both services shall indeed be based on the same actor system. In other words, I would like to have two dispatchers in my actor system and then pass them to spray via the client-level API, for instance pipeline.
Is it feasible, scalable and appropriate? Or would you recommend instead using one dispatcher with a bigger thread pool?
Also:
- How can I obtain a dispatcher? Should I create a thread pool executor myself and get a dispatcher out of it?
- How do I get an actor system to load/create multiple dispatchers, and how do I retrieve them so I can pass them to the pipeline method?
I know how to create an actor with a specific dispatcher, there are examples for that, but that is a different scenario. I would not like to go lower than the client-level API, by the way.
Edit
I have found that the system.dispatchers.lookup method can create one. So that should do.
However, one thing that is still not clear to me relates to akka-io/spray-io.
The IO(HTTP) manager: it is not clear to me which dispatcher it runs on, or whether that can be configured.
Moreover, let's say I pass a different execution context to the pipeline method. What happens then? Will IO(HTTP) still run on the default dispatcher, or on its own (I don't know how it is done internally)? And what exactly will run on the execution context that I pass, in other words, which actors?
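For what it's worth, here is a sketch of the lookup approach under the assumption of spray-client's sendReceive; the dispatcher names and pool sizes are illustrative, and passing an execution context here affects the pipeline's future transformations, not (as far as I can tell) the IO(HTTP) manager itself.

// In application.conf, two illustrative dispatchers might be defined:
//   query-dispatcher {
//     type = Dispatcher
//     executor = "fork-join-executor"
//     fork-join-executor { parallelism-max = 8 }
//   }
//   classification-dispatcher {
//     type = Dispatcher
//     executor = "thread-pool-executor"
//     thread-pool-executor { fixed-pool-size = 2 }
//   }

import akka.actor.ActorSystem
import scala.concurrent.Future
import spray.client.pipelining._
import spray.http.{HttpRequest, HttpResponse}

object Pipelines {
  val system = ActorSystem()

  // Each lookup returns a MessageDispatcher, which is an ExecutionContext.
  val queryEc = system.dispatchers.lookup("query-dispatcher")
  val classificationEc = system.dispatchers.lookup("classification-dispatcher")

  // sendReceive accepts an ActorRefFactory and an ExecutionContext, so
  // each pipeline runs its response handling on its own dispatcher.
  val queryPipeline: HttpRequest => Future[HttpResponse] =
    sendReceive(system, queryEc)
  val classificationPipeline: HttpRequest => Future[HttpResponse] =
    sendReceive(system, classificationEc)
}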