ColdFusion Threads Remain in Thread Queue NOT_STARTED

I am using CFTHREAD on ColdFusion 8.
Occasionally I find that all the threads stop executing and remain with STATUS=NOT_STARTED.
The server monitor tells me that there are no running requests, no running threads and an increasing number of queued threads.
The only way to recover is to restart the ColdFusion instance.
I only use threads in a handful of places. Some of the calls to CFTHREAD are joined; in those cases I terminate any threads that have not completed within the timeout. The other calls to CFTHREAD are fire-and-forget.
Does anyone know why this might be happening?
Thanks,
William Bibby

In one of my applications I have already faced this thread-hanging issue. In my case the thread was making an HTTP call or downloading a huge file, and it was running into connection timeout problems.
Because of the hung thread our server also became very busy, since the resources acquired by a running thread could not be released.
My solution: check how long the thread has been running. If it is longer than a specific interval, kill the thread from code.
You can use the ColdFusion Admin API to kill a thread. If you want to know how to kill a thread using the Admin API, see here.

Related

Handling multiple clients simultaneously in C++ UDP server

I have developed a C++ UDP-based server application and I am in the process of implementing code to handle multiple clients simultaneously.
I have the following understanding of how to handle multiple clients and want to fill in the knowledge gaps.
My step-wise understanding is as follows:
1. The UDP server listens on a specific port (say xxxx).
2. The server has a message queue. It can be an array, a linked list, a queue, or anything for that matter.
3. As soon as a request arrives at port xxxx, it is placed in the message queue.
4. After putting it in the message queue, a new thread (let us call it the worker thread) is spawned; it picks up the queued message, which is then removed from the queue.
5. The worker thread learns the client's IP:port from the message header.
6. The worker thread processes the request and sends the response to the client's IP:port.
7. The client gets the response and the worker thread terminates.
Steps 3 to 7 take care of multiple clients being handled simultaneously.
Is my understanding sufficient? Where do I need improvement?
Thanks in advance
The client gets the response and the worker thread terminates.
The worker thread should terminate when it completes processing. There is no practical way for it to wait for an acknowledgement from the client.
The worker thread processes the request and sends the response to the client's IP:port.
I think it will be better to place the response on a queue. The main server thread can check the queue and send any responses found there. This prevents race conditions when two worker threads overlap in their attempts to send responses.
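One way to realize this, as a minimal sketch assuming a POSIX UDP socket (the Response struct and function names are illustrative, not from any library):

    // Workers never touch the socket; they hand finished responses to the
    // main thread through a mutex-protected queue.
    #include <mutex>
    #include <queue>
    #include <string>
    #include <netinet/in.h>
    #include <sys/socket.h>

    struct Response {
        sockaddr_in client;   // destination taken from the request header
        std::string payload;
    };

    std::queue<Response> g_responses;
    std::mutex g_responses_mtx;

    // called by worker threads instead of calling sendto() themselves
    void post_response(Response r) {
        std::lock_guard<std::mutex> lock(g_responses_mtx);
        g_responses.push(std::move(r));
    }

    // called by the main server thread between recvfrom() calls,
    // so only one thread ever writes to the socket
    void flush_responses(int sock) {
        std::lock_guard<std::mutex> lock(g_responses_mtx);
        while (!g_responses.empty()) {
            Response& r = g_responses.front();
            sendto(sock, r.payload.data(), r.payload.size(), 0,
                   reinterpret_cast<const sockaddr*>(&r.client), sizeof(r.client));
            g_responses.pop();
        }
    }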
The server has a message queue .It can be array or linked list or Queue or anything for that matter
It pretty much has to be a queue. The interesting question is the queueing discipline. Initially FIFO will do. If your server becomes overloaded, then you need to consider alternatives: perhaps estimate the processing time required and do the fast ones first (a sketch follows), or give different clients different priorities.
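For illustration, shortest-job-first could be expressed with std::priority_queue; the estimated_cost field is hypothetical and would be filled in when the request arrives:

    #include <queue>
    #include <string>
    #include <vector>

    struct Request {
        std::string payload;
        int estimated_cost;  // e.g. predicted processing time in ms
    };

    // order so that the cheapest request is served first
    struct CostGreater {
        bool operator()(const Request& a, const Request& b) const {
            return a.estimated_cost > b.estimated_cost;
        }
    };

    std::priority_queue<Request, std::vector<Request>, CostGreater> g_queue;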
After putting it in the message queue, a new thread (let us call it the worker thread) is spawned.
This is fine initially. However, you will want to do some time profiling and determine if a thread pool would be advantageous.
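A rough sketch of such a pool, under the assumption that jobs are queued as callables; the pool size and names are placeholders, and a real pool would also need a shutdown mechanism:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    using Job = std::function<void()>;

    std::queue<Job> g_jobs;
    std::mutex g_jobs_mtx;
    std::condition_variable g_jobs_cv;

    // each long-lived worker pulls jobs from the shared queue,
    // avoiding the cost of spawning a thread per message
    void worker_loop() {
        for (;;) {
            Job job;
            {
                std::unique_lock<std::mutex> lock(g_jobs_mtx);
                g_jobs_cv.wait(lock, [] { return !g_jobs.empty(); });
                job = std::move(g_jobs.front());
                g_jobs.pop();
            }
            job();  // process one request, then go back for the next
        }
    }

    int main() {
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)   // pool size is a tuning knob
            pool.emplace_back(worker_loop);
        // ... the recvfrom() loop pushes jobs and calls g_jobs_cv.notify_one() ...
        for (auto& t : pool) t.join();
    }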
Deeper discussion of threading issues
The job processing must be done in a separate worker thread, so that a long job will not block the server from accepting connections from other clients. However, you should consider carefully whether or not you want to use multiple worker threads. Since you are placing the job requests on a queue, a single worker thread can be used to process them one by one.
PRO single thread
Simpler, more reliable code. The processing code must be thread safe for context switches back to the main thread. However, there will not be any context switches between job processing code. This makes it easier to design and debug the processing code. For example, if the jobs are updating a database, then you do not require any extra code to ensure the database is always consistent - just that consistency is guaranteed at the end of each job process.
Faster response for short jobs. If many short jobs are submitted at the same time and each gets its own thread, the CPU can spend more cycles switching between jobs than actually doing useful processing.
CON single thread
A big job will block other jobs until it completes.
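Note that in the thread-pool sketch above, the single-thread design is simply a pool of size 1, so much of this trade-off reduces to choosing the pool size.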

Django request threads and persistent database connections

I was reading about CONN_MAX_AGE settings and the documentation says:
Since each thread maintains its own connection, your database must support at least as many simultaneous connections as you have worker threads.
So I wonder: on uWSGI, how does a Django process maintain its own threads? Does it spawn a new thread for each request and kill it at the end of the request?
If so, how does a dead thread maintain the connection?
Django is not in control of any threads (well... maybe in the development server, but that's pretty simple); uWSGI is. uWSGI will spawn some threads, depending on its configuration, and in each thread it will run Django's request handling.
Thread spawning can be static or dynamic: for example, strictly 4 threads, or anywhere from 2 to 12 depending on load.
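As a concrete illustration, a uwsgi.ini along these lines covers both cases; the option names are real uWSGI settings, but the module path and numbers are only examples (note that uWSGI's dynamic "cheaper" scaling applies to worker processes):

    [uwsgi]
    module = mysite.wsgi:application
    # request-handling threads per worker process
    threads = 2
    # static alternative: exactly 4 worker processes
    # workers = 4
    # dynamic: scale worker processes between 2 (cheaper) and 12 (workers)
    workers = 12
    cheaper = 2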
And no, there is no new thread for each request, because that would let someone kill your server just by opening many concurrent connections: it would spawn so many threads that no server could take it.
Requests are handled one by one on each thread; the main uWSGI process round-robins requests between threads. If there are more requests than threads, some of them will wait until others have finished.
uWSGI also has workers: independent processes that can spawn their own threads, so the load can be spread better.
You can also run multiple uWSGI servers and tell your HTTP server (Apache, or a proxy) to spread requests between them. That way you can even serve your uWSGI instances from different machines, and from the outside it will all look like one big server.

Which mechanism keeps an Oracle session alive on the server?

I have a C++ application that connects to an Oracle database via the Qt QSqlDatabase interface. The main application establishes and uses the connection to the database, and also starts a child process for other, unrelated purposes. To make this clear: the child process does not use anything database-related.
The problem is: if the main process gets terminated in an unusual way (it crashes, or it gets killed by the user via the Task Manager), I can see that the database session on the Oracle server is kept alive and never times out. Absolutely reproducibly, however, the session gets cancelled immediately after I kill the child process manually.
As those dangling, orphaned sessions lead to problems (the simplest being that the maximum session count on the server gets reached), I would really like all sessions to be closed as soon as possible.
My question is: what mechanism keeps a session alive on the server just because an irrelevant child process is still alive? How can I control this behavior, i.e. tell the Oracle client to disconnect any sessions if the main application process dies?
Thanks in advance!
UPDATE
https://bugreports.qt.io/browse/QTBUG-9350
and
https://bugreports.qt.io/browse/QTBUG-4465
On Windows, the child process inherits sockets and file descriptors even if inheritFileDescriptors is set to false.
It seems that the bug was fixed in Qt 5.
A discussion about the issue on an Oracle thread:
https://community.oracle.com/thread/1048626
TL;DR: The Oracle server does not "know" that the client has disappeared.
Some solutions:
1. There is a terminated connection detection feature (see the sqlnet.ora sketch after this list):
http://docs.oracle.com/cd/B19306_01/network.102/b14213/sqlnet.htm#sthref474
2. My advice is to implement a connection pool if you use the QOCI driver. Or you can use ODBC, which has built-in support for connection pooling.
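For option 1, the relevant sqlnet.ora parameter on the server is, if I recall the linked documentation correctly, SQLNET.EXPIRE_TIME; the value below (in minutes) is only an example:

    # sqlnet.ora on the database server: probe clients every 10 minutes
    # and drop sessions whose client no longer answers
    SQLNET.EXPIRE_TIME = 10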
It looks like the main process did not terminate cleanly: somewhere in its finalization code it waits for the child process to terminate before closing the database connection.
Conversely, the exceptional situation raised by the abnormal termination of the child process is successfully propagated to the parent process, which then runs its finalization and closes the connection to Oracle.
So the first suggestion is to check whether the child process properly reacts to kill() and terminate() calls, and to have the parent process try to terminate the child even in the case of abnormal termination.

Django + mod_wsgi + apache2 - child process XXX still did not exit, sending a SIGTERM

I am getting intermittent errors:
child process XXX still did not exit, sending a SIGTERM... and then a SIGKILL. It occurs intermittently and the web page hangs.
I was not using a daemon process, but now I am, and the problem still exists.
I also see some "Error opening file for reading: Permission Denied" errors.
Please can someone help?
I am new to this forum, so sorry if that has been answered before.
If you were not using daemon mode of mod_wsgi, that would imply that Apache must have been restarted at the time that initial message was displayed.
What is occurring is that in trying to do a restart, Apache sends a SIGTERM to its child processes. If they do not exit of their own accord, it will send SIGTERM again at 1-second intervals and finally send a SIGKILL after 3 seconds. The message is warning you of the latter, namely that it force-killed the process.
The issue now is what is causing the process to not shutdown promptly. There could be various reasons for this.
Using an extension module for Python which doesn't work properly in sub-interpreters and is deadlocking and hanging the process, preventing it from shutting down. See: http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Python_Simplified_GIL_State_API
Use of background threads in the Python web application which have not been properly set as daemon threads, with the result that they block process shutdown.
Your web application is simply trying to do too much on process shutdown somehow and not completing within the time limit.
Even if using daemon mode you will likely see this behaviour, as it implements a similar shutdown timeout, albeit one that is configurable in daemon mode.
Anyway, force use of the main Python interpreter as explained in the documentation link above.
As to the permissions issue, read:
http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Access_Rights_Of_Apache_User
http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Application_Working_Directory
In short, ensure access permissions are correct of files/directories you need to access and ensure you are always using absolute path names when accessing the file system.

Connecting to remote services from multiple threaded requests

I have a boost asio application with many threads, similar to a web server, handling hundreds of concurrent requests. Every request will need to make calls to both memcached and redis (via libmemcached and redispp respectively). Is the best practice in this situation to make a separate connection to both redis and memcached from each thread (effectively tripling the open sockets on the server, three per request)? Or is there a way for me to build a static object, with a single memcached/redis connection, and allow all threads to share that single connection? I'm a bit confused when it comes to the thread safety of something like this, and everything needs to be asynchronous between the threads, but blocking for each thread's individual request (so each thread has a linear progression, but many threads can be in different places in their own progression at any given time). Does that make sense?
Thanks so much!
Since memcached has a synchronous protocol, you should not write the next request before you have received the answer to the previous one, so no other thread can chat on the same memcached connection. I'd prefer a thread-local connection if you work with it in "blocking" mode.
Or you can make it work in an "async" manner: make a pool of connections and pick one from the pool (locking it while in use). After the request is done, return it to the pool.
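A rough sketch of that pooling idea, assuming libmemcached's memcached() constructor from its 1.0 API; the class, pool size, and config string are illustrative, and cleanup in a destructor (memcached_free on each connection) is omitted for brevity:

    #include <condition_variable>
    #include <cstring>
    #include <mutex>
    #include <stack>
    #include <libmemcached/memcached.h>

    class MemcachedPool {
    public:
        MemcachedPool(size_t n, const char* config) {
            for (size_t i = 0; i < n; ++i)
                free_.push(memcached(config, std::strlen(config)));
        }
        // blocks until a connection is free, then hands it to the caller
        memcached_st* acquire() {
            std::unique_lock<std::mutex> lock(mtx_);
            cv_.wait(lock, [this] { return !free_.empty(); });
            memcached_st* c = free_.top();
            free_.pop();
            return c;
        }
        // returns the connection so another thread can use it
        void release(memcached_st* c) {
            { std::lock_guard<std::mutex> lock(mtx_); free_.push(c); }
            cv_.notify_one();
        }
    private:
        std::stack<memcached_st*> free_;
        std::mutex mtx_;
        std::condition_variable cv_;
    };

If I remember correctly, libmemcached also ships its own pooling helpers (the memcached_pool family), which may be preferable to rolling your own.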
Alternatively, you can make a request queue and process it in one or more dedicated threads (using multi-gets and callbacks).