I often hear that websites like Facebook handle over 100,000 concurrent requests.
I googled a lot and found someone explaining that it means 100,000 requests sent to the server simultaneously. But what does 'simultaneously' mean here? Does it require that the requests be sent to the server within one second?
Thanks
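For intuition (my own rough reading, not an official definition): "concurrent" usually means "in flight at the same moment", which is not the same as "arrived within one second". A back-of-the-envelope sketch in Python, with assumed numbers:

# Rough illustration with made-up numbers, not Facebook's real figures.
# "Concurrent" = requests open at the same instant. By Little's law,
# concurrency ~= arrival rate x average time each request stays open.
requests_per_second = 50_000    # assumed arrival rate
avg_seconds_per_request = 2.0   # assumed time each request stays open
concurrent_in_flight = requests_per_second * avg_seconds_per_request
print(concurrent_in_flight)     # 100000.0 requests in flight at once

So a site can have 100,000 requests open concurrently even though "only" 50,000 new requests arrive per second, because each request stays open for a while.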
I have two Windows computers, both using the Microsoft Edge browser. When I'm typing on the Overleaf website, the connection gets lost every 2-5 minutes. What's worse, some unsynced sentences are gone when the connection resumes. I'm not sure whether this is a network problem, since all other websites work fine, including Twitter and Gmail. I wonder whether it has to do with the framework the Overleaf cloud service uses. Could anyone give some tips about this issue?
Is it possible to roughly estimate how many concurrent requests an API might receive?
Let's say it's a super simple API that just returns "hello" to a GET request, deployed on a 16 GB machine. In general, how many concurrent requests could it support before it starts to melt or say nah?
If it failed because of too many concurrent requests, what would happen?
Requests over the threshold would time out
Machine would crash
As PiRocks suggested, I ran an experiment
Deployed a simple node.js API app to Heroku (machine specs TBD; still checking whether they even list them)
Signed up for a free account on loader.io
Unfortunately, the maximum for free is 10k requests over 15s, aka 666 QPS. That resulted in a 2ms average response time, no timeouts, and no errors. Might upgrade to see what it looks like from there.
Update: seems like 2K QPS is where I started to see errors. More details here
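For reference, the post doesn't include the app's source. A "hello" endpoint of the kind described, sketched here in Python with Flask rather than the node.js actually used in the experiment, might look roughly like this:

# Minimal "hello" API, a rough Python stand-in for the node.js app above.
# Sketch only: Flask's built-in development server is not meant for load
# testing; in practice you would run it behind gunicorn or similar.
from flask import Flask

app = Flask(__name__)

@app.route("/", methods=["GET"])
def hello():
    return "hello"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)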
I'm currently working on a YouTube video downloader using youtube-dl and Django, and hosting my project on PythonAnywhere. I have almost completed the project, but when I send more than 15-20 requests, YouTube blocks my PythonAnywhere server IP. So I want to ask how I can solve this blocking problem. Free proxies take too much time to respond. Please give me a solution. Thanks in advance.
I suspect that most YouTube downloaders do one of three things:
Execute client-side code to do the actual download: instead of downloading on its own server, the server/extension just goes through the page's code to find the file being served.
Pay for professional proxy servers sufficient to handle the number of downloads one seeks to make without running into rate limits. Proxies are not expensive.
Limit the rate at which downloads are conducted (a rough sketch of this approach follows below).
Those are the only ways I can see around the blocking problem. YouTube really doesn't want you to do what you are trying to do and has put a lot of thought into stopping it.
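If you go the rate-limiting route, a minimal sketch in Python with youtube-dl might look like the following; the delay value is an assumption you would have to tune against whatever threshold actually triggers the block, and the URL is a placeholder:

# Sketch: download videos one at a time with a pause between requests so the
# server IP is less likely to trip YouTube's rate limiting.
# DELAY_SECONDS is a guess; tune it against when blocking actually starts.
import time
import youtube_dl  # pip install youtube_dl

DELAY_SECONDS = 30  # assumed safe spacing between downloads

def download_slowly(urls):
    with youtube_dl.YoutubeDL({"quiet": True}) as ydl:
        for url in urls:
            ydl.download([url])        # download one video
            time.sleep(DELAY_SECONDS)  # wait before the next request

if __name__ == "__main__":
    download_slowly(["https://www.youtube.com/watch?v=PLACEHOLDER"])  # placeholder URL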
I'm new to server technology and don't really understand how servers work (I hope you can shed some light on this as well).
Basically, my problem is that I have a Firebase database which I need to update every 20 seconds, all day. This is how I think I should solve it: I need to send an HTTP POST request to the Firebase database every 20 seconds, which means I need a server running a piece of code that sends the HTTP request every 20 seconds. I'm not sure whether this is the right way to do it, and even if it is, how to implement it.
Some questions I have are:
I definitely need to create a server for this, right? And if so, what platform is recommended for writing my server code? (Preferably free platforms.)
I have tried reading up on the available platforms, such as AWS and Google Cloud, but I don't really get the terminology used. Are there any tutorials available for this?
I am really lost and have been stuck on this for some time; any help is deeply appreciated.
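To make the loop idea concrete, here is a minimal sketch of "a piece of code sending the HTTP request every 20 seconds" in Python, using the Firebase Realtime Database REST API; the database URL and payload are placeholders, and authentication is omitted:

# Sketch of the plain-loop approach described above: write to a Firebase
# Realtime Database every 20 seconds through its REST API.
import time
import requests  # pip install requests

FIREBASE_URL = "https://your-project.firebaseio.com/readings.json"  # placeholder

def push_reading():
    payload = {"timestamp": time.time(), "value": 42}  # example payload
    response = requests.post(FIREBASE_URL, json=payload)
    response.raise_for_status()  # fail loudly if the write was rejected

if __name__ == "__main__":
    while True:
        push_reading()
        time.sleep(20)  # wait 20 seconds between updates

Note that this loop still has to run on something that is always on, which is exactly what the answers below address.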
This is achievable by leveraging CloudWatch Events, specifically using a rate expression that invokes an SNS topic, which can then hit your HTTP endpoint.
Hope that helps!
I would suggest that you try to keep everything within Firebase: create a Firebase Cloud Function that sends the HTTP request for the update, and use functions-cron, a cron-like scheduler for Firebase, to schedule it.
I have an ASMX Web Service, which I am serving over HTTPS. After some testing, I arrived at the conclusion that the Web Service would be intolerably slow in a real-world scenario.
I understand that the overhead of using HTTPS is unavoidable, but I would like to know how I could optimize this Web Service. The first thing I have noticed is that, most of the time, my Web Service returns lists of things, for example (not taken from the actual Web Service):
<Cars count="2">
<Car brand="Mercedes" registrationplate="612M0D0"/>
<Car brand="BMW" registrationplate="4RS-73CHN1C4"/>
</Cars>
(Usual real-life values of count are around 40-50.)
Thus, both the element type's name (in this example, Car) and its attribute names (in this example, brand and registrationplate) are repeated many times. All of this suggests that compressing the SOAP response before sending it would be a good idea, but I don't know how to do it. Does anybody know?
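I can't speak to the ASMX-specific configuration, but as a quick illustration of why compression should help with this kind of payload, gzipping a response full of repeated element and attribute names shrinks it dramatically. A Python sketch with generated sample data (not the real service response):

# Illustration only: repeated tag and attribute names compress very well.
import gzip

rows = "".join(
    f'<Car brand="Brand{i}" registrationplate="PLATE{i:04d}"/>' for i in range(50)
)
xml = f'<Cars count="50">{rows}</Cars>'.encode("utf-8")
compressed = gzip.compress(xml)
print(len(xml), "bytes raw ->", len(compressed), "bytes gzipped")

In practice you would enable HTTP compression on the server (IIS supports this) rather than compressing by hand, but the numbers above show why it is worth measuring.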
Have you determined what is slow?
The volume of data
The number of requests
Time spent accessing a data store
Time between request and response
etc.
The first step in optimisation is to get metrics, and once you have these, attack the ones that matter. For example, a single call to a function may take 1 ms, but if you call it 2,000 times the total delay may be 2 s. In that case, attacking the number of calls may be in order.
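As a rough, tool-agnostic illustration of that arithmetic (a profiler does this with far less effort), timing a cheap call in a loop shows how per-call cost multiplies:

# Rough illustration: a call that is cheap on its own still adds up when made
# thousands of times. time.sleep stands in for a ~1 ms operation.
import time

def cheap_call():
    time.sleep(0.001)

start = time.perf_counter()
for _ in range(2000):
    cheap_call()
elapsed = time.perf_counter() - start
print(f"2000 calls took {elapsed:.2f} s")  # roughly 2 s in total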
I suggest using a tool like DotTrace to give you indicators.
Edit
See this SO question: HTTP vs HTTPS performance