IIS App Pool Monitoring Infinite Loops (or inappropriate load) - web-services

I'm just wondering if there is any way I can handle the case where our web service gets stuck in an infinite loop. I know the first answer is not to have an infinite loop, and we have tested the system so no loops should occur. But just as a fallback, is there a way of putting something on the IIS app pool to say: if the CPU has been running at, say, 99% for more than 1 minute, then recycle the app pool?
Thanks in advance

There is no built-in IIS way of doing something like that (the recycle options allow you to recycle at a set time each day, after a set number of minutes, on hitting virtual or private memory limits, or on reaching a particular number of requests - nothing CPU-based).
You could build your own monitor that watches for certain events (like the CPU staying above 99% for a minute) and causes a recycle to happen (there are various programmatic ways to do this).

In IIS 7.0+ this can be done very easily (although instead of recycling the Application Pool, it will terminate the process and then restart it when resetInterval has been reached). See:
http://www.iis.net/configreference/system.applicationhost/applicationpools/add/cpu
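For example, the relevant applicationHost.config fragment might look like this (a sketch; the pool name is illustrative, and note that limit is expressed in 1/1000ths of a percent, so 99% = 99000):

<add name="MyAppPool">
  <!-- terminate w3wp.exe if the pool exceeds 99% CPU over a 1-minute interval -->
  <cpu limit="99000" action="KillW3wp" resetInterval="00:01:00" />
</add>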

Related

How to do a very large number of HTTP requests in shortest time

We have a huge database with around 300,000 URLs that have to be pinged to fetch data from them (the URLs are radio stations playing songs; the data is metadata about the current song).
Some of them are sometimes inactive and sometimes active.
At any given time, around 80,000 are active. Some respond slowly, some respond quickly. I have a server and I am thinking of doing this in C++.
My goal is to ping and parse (or crawl) them within 1 minute and keep repeating the process, because the information (the song playing on them) can change over time, mostly every 2-7 minutes. But I am not sure if it is possible.
What should my approach be?
I have thought of creating two programs: one, run twice a day, to test whether a URL is active or not, and to record how long it generally takes to respond and whether it is currently responding slower than usual.
The other would do the actual crawling, with the fastest URLs crawled first and some dedicated threads for URLs which respond faster.
I would love better ideas or solutions. Can anyone tell me how to do the maths to work out the number of dedicated threads I should allot so as to get the results in the least amount of time?
You don't need CPU performance (the CPU is not your bottleneck here), but you do need to avoid network-layer stalls: if the request timeout is 60 seconds and you have 16 threads that hit 16 very slow servers (which will eventually time out), you are stalled for 60 seconds while processing nothing else.
So I would start with, let's say, 500 threads (and a 15-30 s timeout, if you know the very slow radios can fit even that), keep some statistics about their turnaround, and keep adding more worker threads dynamically for every original one that didn't get a response within 2-3 seconds. 80,000/500 = 160, so each "normally quick" worker thread then has to ping around 160 URLs; if each takes 2 seconds, that's still 320 seconds ≈ 5.3 minutes! So 500 sounds like a minimum.
That said, having 500+ threads will put some burden on CPU and memory (I'm not sure how much; with a decent thread/memory model implementation, 500 doesn't sound like much for a modern x86 CPU with gigabytes of RAM, and even 5,000 still sounds reasonable), but I would worry a lot more about the network layer and about the firewalls along the way; you need server-grade networking for that volume of requests (if I tried something like that from home, my own router would filter me out with its default settings, detecting it as some kind of DoS attack).
So get some statistics on how long a request takes on average, take your target round time (2-7 min), and work out how many requests one thread can complete in that window: with an average ping of 5 s and a round time of 3 min, one thread completes 3×60/5 = 36 requests, so you need at least 300,000/36 ≈ 8,333 threads. Then you will have to profile your app to verify that with 8,000+ threads it will not choke on something else, but will really handle the task as expected.
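In general form (a back-of-the-envelope model that ignores retries, timeouts and the variance in response times):

threads needed ≈ (number of URLs × average response time) / target round time
              = (300,000 × 5 s) / 180 s ≈ 8,333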
(The other option is to fire asynchronous HTTP requests from a single thread, but that sort of approach creates its own threads for each task anyway, so I would rather manage the threads myself and use synchronous HTTP calls.)
And on the dynamic-grow mechanics: you can keep counters of how many new requests were added in the last second and how many finished (either responded or failed); after a few seconds of running, these start to form a "throughput" statistic, and if throughput is under the desired threshold, you can add more threads.
About active/inactive: keep the response time, last-seen and last-checked values together with the URL, and add some further logic to check a URL only when it makes sense (e.g. not within the next 60 s if it just responded, or rechecking an inactive one only 6 h after the last test). You also need to avoid checking the same URL in two different threads at the same time, so some central manager code should feed the threads with targets (maybe a thread-safe FIFO queue... you can actually use its size to estimate how well the worker threads are keeping up, and add more threads when the queue is not emptying fast enough; that avoids adding the statistics code to the threads themselves).
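Here is a minimal sketch of that central-manager/worker-queue idea, assuming C++11 or later. ping_url() is a stand-in for a real synchronous HTTP call (e.g. via libcurl), and all names and numbers are illustrative, not a drop-in implementation:

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

class UrlQueue {                        // thread-safe FIFO feeding the workers
public:
    void push(std::string url) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(url));
        }
        cv_.notify_one();
    }
    bool pop(std::string& url) {        // blocks; returns false once drained after shutdown
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty() || done_; });
        if (q_.empty()) return false;
        url = std::move(q_.front());
        q_.pop();
        return true;
    }
    std::size_t size() {                // the manager watches this to decide on growth
        std::lock_guard<std::mutex> lock(m_);
        return q_.size();
    }
    void shutdown() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
    }
private:
    std::queue<std::string> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

// Placeholder: a real implementation would do a synchronous HTTP GET with a
// short timeout and parse the station metadata from the response.
bool ping_url(const std::string& url) {
    (void)url;                          // unused in this stub
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    return true;
}

int main() {
    UrlQueue queue;
    std::vector<std::thread> workers;
    const int initial_threads = 500;    // the starting point estimated above

    for (int i = 0; i < initial_threads; ++i) {
        workers.emplace_back([&queue] {
            std::string url;
            while (queue.pop(url))      // each worker pulls the next target
                ping_url(url);
        });
    }

    for (int i = 0; i < 80000; ++i)     // the manager feeds the queue
        queue.push("http://station" + std::to_string(i) + ".example/status");

    // The manager can poll queue.size() here: if the queue is not emptying
    // fast enough, spawn more workers (the dynamic-grow idea above).

    queue.shutdown();                   // let workers drain the queue and exit
    for (auto& w : workers) w.join();
}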

Client-server what is the limitation of server's processes run time?

I have a certain program that receives input and returns an output, with a run time of about 2 seconds.
Now I want to run this program online on a server that can handle multiple connections (let's say up to 100k).
On each client-server session the program will launch; the client will hand the server the program's input and will wait for the program to end to receive the server's response (the program's output).
Let's say the server's host is a very powerful machine, e.g. 16 cores.
Can this work, or is it too much runtime per client?
What is the maximum runtime this kind of program can have?
I'm posting this as an answer because it's too large to place as a comment.
Can this work? It depends, because there are a lot of variables in this problem. Let's look at some of them:
you say it takes 2 seconds to compute a result. Where and how are those 2 seconds spent? Is this pure computation, or are you accessing a database or the file system? Is the work CPU bound or I/O bound? If you run computations for the full 2 seconds then you are holding a core the whole time, which means you can simultaneously serve only 16 clients, one per core (see the arithmetic after this list). Are you hitting a database? Is it on the powerful server or on some other machine? If the database is the bottleneck, move it to the powerful server and put SSD drives in it.
can you improve processing for one client? What's more efficient: doing the processing on one core, or spreading it across all the cores? If you can parallelize, can you limit thread contention?
is CPU all you need? How about memory? Any backend services you access? Are those over the network? Do you have enough bandwidth?
related to memory: what language/platform are you using? Does it have a garbage collector? Do you generate a lot of objects to compute a result? Does the GC kick in and pause your application while it cleans up and compacts the memory? Do you allocate enough memory for the application to run?
can you cache responses and serve them to other clients, or is each response custom to its client? Can you precompute the results and then just serve them, or are the inputs unpredictable?
have you tried running some performance tests and profiling the application to see where hotspots show up? Can you do something about them?
do you have any imposed performance criteria? How many clients do you want to support simultaneously? Is 2 seconds too much? Can clients live with more? How much more? At what point does the response time become unacceptable?
do you need one big server to run this setup, or do smaller ones work better (i.e. scale horizontally instead of vertically)?
etc
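To make the CPU-bound point concrete, a rough back-of-the-envelope calculation (assuming the 2 seconds are pure computation on one core):

16 cores / 2 s per request = 8 completed requests per second
100,000 clients / 8 requests per second = 12,500 s ≈ 3.5 hours to serve every client once

So if anything close to 100k clients arrives in a short window, the queue grows far faster than it drains; either the per-request CPU time has to come down, results have to be cached or precomputed, or you need many more cores.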
Nobody can answer this for you. You have to do an analysis of your application, run some tests, profile it, optimize it, then repeat until you are satisfied with the results.

C++: How to set a timeout (not reading input, not threaded)?

I've got a large C++ function on Linux that calls a whole lot of other functions, making up an algorithm. At various points, given certain bad inputs, the algorithm can get "stuck" and go on forever. Adding a timeout seems appropriate, since all the potential "stuck" points cannot be predicted. But despite scouring the Internet for timeout examples, I've only found how to apply timeouts when either the thing you're timing runs in a separate thread or it's reading input. My code is single-threaded and does not modify file descriptors, so I'm not having any luck. Do I basically have no choice but to thread it?
I am not sure about your exact situation; server applications and embedded applications often run for years in the background without stopping. One option is to let your program run in the background and log to a file (or the screen) periodically, and, if you really want to stop the program after a certain time, use the timeout command or a script to kill your program after that time, say, timeout 15s your-prog.
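For example (GNU coreutils timeout; the program name and durations are placeholders):

timeout 15s ./your-prog
# send SIGKILL instead if the program ignores the default SIGTERM:
timeout -s KILL 15s ./your-prog
# timeout exits with status 124 when the time limit was hit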

What is optimal value for Phusion passenger PassengerMaxRequestQueueSize

I know this depends on the box hardware, but, for example, if 100 processes are set, the default queue is also 100. Does it make sense to increase PassengerMaxRequestQueueSize to 200 or 300? Probably this depends on free memory. Thoughts?
The best answer would explain the setting and give one or two examples, assuming the server processes a request in 2-3 seconds.
Thanks in advance!
Why you should limit queuing
Any requests that aren't immediately handled by an application process are queued. Queuing is usually bad: it often means that your server cannot handle requests quickly enough.
A larger queue means that requests are less likely to be dropped. But this comes with a drawback: during busy times, the larger the queue, the longer your visitors have to wait before they see a response. This causes them to click reload, making the queue even longer (their previous request will stay in the queue; the OS does not know that they've disconnected until it tries to send data back to the visitor), or causes them to leave in frustration.
So having a limit on the queue is a good thing. It limits the impact of the above situation.
You should ensure that requests are queued as little as possible. That could mean:
Making your app faster (if your workload is CPU bound).
Upgrading to faster hardware (if your workload is CPU bound).
Increasing your app's concurrency settings (if your workload is I/O bound), e.g. by increasing the number of processes or threads.
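For example, with the Apache integration, process concurrency is controlled with directives like these (the numbers are illustrative, and the thread-based settings are only available in Passenger Enterprise):

# maximum number of application processes across all apps
PassengerMaxPoolSize 16
# keep at least this many processes warm for the app
PassengerMinInstances 4
# thread-based concurrency (Passenger Enterprise only):
# PassengerConcurrencyModel thread
# PassengerThreadCount 8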
If you cannot prevent requests from being queued, then the next best thing to do is to keep the queue short, and to display a friendly error message upon reaching the queue limit. Something like, "We're sorry, a lot of people are visiting us right now. Please try again later." The documentation for PassengerMaxRequestQueueSize tells you how to do that.
Optimal value for the queue size
It's hard to say what the optimal queue size should be. A good rule of thumb is: set the request queue size to the maximum number of requests you can handle in one second. Depending on your situation you may have to tweak things a little bit.
This rule of thumb comes from the notion of expected burst traffic. How many simultaneous requests do you expect on your server?
Suppose that your queue size is 100 and that, for whatever reason, you receive 150 requests at the same time. Suppose your server is fast enough to handle 150 requests in half a second, so you know it's not a performance problem. But with a request queue size of 100, 50 of those requests will be dropped with a "Request queue full" error.
In such a situation, you should set the queue size to the maximum number of concurrent requests that you think you can safely handle without performance issues.
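A rough worked example with the asker's numbers (illustrative only): with 100 application processes and ~2.5 seconds per request, the server completes about 100 / 2.5 = 40 requests per second, so the rule of thumb above suggests a queue size of around 40. If you instead expect bursts of up to 300 simultaneous requests, the 200 that don't get a process immediately would wait roughly 200 / 40 = 5 seconds; if that wait is acceptable, raising PassengerMaxRequestQueueSize toward 200-300 is reasonable, and if it isn't, keep the queue shorter and let the overflow fail fast to the error page.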
This SO question and the Passenger docs here talk more about working with this. If you want more information about why this is happening on your server you can try running passenger-status (usually you need to run this as root).
If you would like to set a custom error page when visitors see this issue you can use the following (in Apache) to set a custom error page:
PassengerErrorOverride on
ErrorDocument 503 /error503.html
As mentioned by Hongli, you can also raise PassengerMaxRequestQueueSize to queue more requests. You can also set it to 0 to disable the limit entirely (for most situations this is not an optimal solution, however).
For reference, the default error message a visitor to your site will see when bumping against this limit is:
This website is under heavy load
We're sorry, too many people are accessing this website at the same time. We're working on this problem. Please try again later.

Web application background processes, newbie design question

I'm building my first web application after many years of desktop application development (I'm using Django/Python but maybe this is a completely generic question, I'm not sure). So please beware - this may be an ultra-newbie question...
One of my user processes involves heavy processing on the server (i.e. the user inputs something, and the server needs ~10 minutes to process it). In a desktop application, what I would do is throw the user input into a queue protected by a mutex and have a dedicated background thread running at low priority, blocking on the queue via that mutex.
However, in a web application everything seems to be oriented around synchronization with HTTP requests.
Assuming I will use the database as my queue, what is best practice architecture for running a background process?
There are two schools of thought on this (at least).
Throw the work on a queue and have something else outside your web-stack handle it.
Throw the work on a queue and have something else in your web-stack handle it.
In either case, you create work units in a queue somewhere (e.g. a database table) and let some process take care of them.
I typically go with option 1, where I have a dedicated Windows service that takes care of these things. You could also do this with SQL jobs or something similar.
The advantage of option 2 is that you can more easily keep all your code in one place: in the web tier. You'd still need something that triggers the execution (e.g. loading a web page that processes work units, with a sufficiently high timeout), but that can be accomplished through various mechanisms.
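Whichever school you pick, a minimal sketch of such a work-unit table might look like this (plain SQL; the table and column names are illustrative, not a prescribed schema):

CREATE TABLE work_units (
    id          INTEGER PRIMARY KEY,
    payload     TEXT NOT NULL,                      -- the user input to process
    status      TEXT NOT NULL DEFAULT 'pending',    -- pending/running/done/failed
    attempts    INTEGER NOT NULL DEFAULT 0,
    created_at  TIMESTAMP NOT NULL,
    started_at  TIMESTAMP,
    finished_at TIMESTAMP
);

A worker (a service, cron job, or management command) repeatedly claims the oldest 'pending' row, marks it 'running', does the work, and marks it 'done' or 'failed'; the timestamps let you detect workers that died mid-job.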
Since:
1) this is a common problem, and
2) you're new to your platform,
I suggest that you look in the contributed libraries for your platform for a solution that handles this task. In addition to queuing and processing the jobs, you'll also want to consider:
1) Status communication between the worker and the web stack. This will enable web pages that show a percentage-complete figure for the job, assure the human that the job is progressing, and so on.
2) How to ensure that the worker process does not die.
3) If a job has an error, will the worker process automatically retry it periodically?
Will you or an operations person be notified if a job fails?
4) As the number of jobs increases, can additional workers be added to gain parallelism?
Or, even better, can workers be added on other servers?
If you can't find a good solution in Django/Python, you can also consider porting one from another platform. I use delayed_job for Ruby on Rails; the worker process is managed by runit.
Regards,
Larry
Speaking generally, I'd look at running background processes on a different server, especially if your web server is under any kind of load.
Running long processes in Django: http://iraniweb.com/blog/?p=56