How to delete dynamic cfm file only AFTER the code runs? - coldfusion

I'm using the in-memory file system to execute dynamic CFM files. How can I delete the temporary file only AFTER it has finished running? If I delete it right after the cfinclude, it won't get deleted when the dynamic code contains an abort or a location tag, etc.
Can I create a thread that sleeps until the main page thread completes and then deletes the file?

Related

Multithreaded app has problems re-opening files - Windows taking too long to close?

I have a multithreaded app that opens a few files (read-only) and does a bunch of calculations based on data in those files. Each thread then generates some output files.
The code runs fine so long as I create the threads, delete them, and then exit the app. If, however, I put the thread creation/deletion into a subroutine and call it several times, the threads have problems when they try to re-open the input files. I have an if (inFile == NULL) check within each thread; sometimes that gets triggered, and sometimes the app just crashes. Regardless, each thread has an fclose() for each file and the threads are properly terminated, so the files should always be closed before the threads are recreated.
I can create multiple threads that can open the same input files and that works fine. But if I close those threads and re-create new ones (e.g. by repeatedly calling a subroutine to create the threads) then I get errors when the threads try to re-open the input files.
The crashes are not predictable. Sometimes I can loop through the thread creation/deletion process several times, sometimes it crashes on the second time, sometimes the fourth, etc.
The only thing I can think of is that the OS (Windows 7) sometimes takes too long to close the file, so the next thread is spawned before the file is closed, and the open then fails because the OS is still busy closing it. It seems to me that could trigger the if (inFile == NULL) condition.
But sometimes, even when the if (inFile == NULL) condition is not triggered, I still read gibberish from the input file. So the thread thinks it has a good file pointer, but it clearly does not.
I realize this is probably a tough question to answer but I'm stumped. So maybe someone has an idea.
Thanks in advance,
rgames

Releasing file handlers when terminating Qt threads

I am developing an application with back-end code written in C, using Qt for the GUI. For that purpose, I have generated a DLL, written in C, that contains the back-end functions, such as encrypting a file. I call the file-encryption function in a Qt thread and update the current progress in a progress-bar dialog.
When the cancel button of the progress dialog is clicked, I emit a Qt signal to tell the thread to terminate. The problem starts here. When I terminate the thread that runs the encryption function in the DLL, the thread terminates and the encryption stops, but the process is not able to delete the incomplete temporary file that the encryption produced.
My understanding is that when the encryption function is cancelled, the file handles my DLL opened have not yet been closed, so when I try to remove the temporary file externally, the system does not let me delete it because my application is still holding on to it.
My question is: what approach should I take so that my DLL releases all file handles when the Qt thread running it is terminated? Should I send signals like SIGABRT to the process itself?
Any help would be appreciated.

fopen/fwrite and multi-threading?

fopen/fwrite and multi-threading?
Some multi-threaded programs open the same file, each thread creating its own file pointer to it.
One thread, created by a particular program, updates the file at some random time, while other threads, created by a different program, simply read its contents.
I guess this creates a race/data-inconsistency problem if the writing thread changes the contents while the other threads are trying to read them.
The problem is that the thread that updates the file is compiled into a different executable than the program that creates the reading threads, so thread control within a single program is impossible.
My solution is to create a very small "flag" file on the hard disk to indicate three states of the file:
1) the writing thread is updating the contents of the file;
2) reading threads are reading the contents of the file;
3) neither 1) nor 2).
This flag file is used to block threads whenever necessary.
Are there some more-compact/neat solution to this problem?
It might be easier to use a process-global "named" semaphore that all the processes know about. You could then use blocking semaphore mechanisms instead of spin-looping on file open/close and file contents...

Django FileWrapper object: How to hook in clean-up action

Suppose the django.core.servers.basehttp.FileWrapper class is used to serve back content from a temporary file.
When the client completes the file download, the temporary file needs to be deleted.
How can one hook into the FileWrapper object, to perform such a clean-up action?
If you run on a Unix system, unlink the temp file right after opening it. The disk space will be freed when FileWrapper closes the file handle at the end of the download.

Rotating logs without restart, multiple process problem

Here is the deal:
I have a multiple-process system (pre-fork model, similar to Apache). All processes write to the same log file (in fact a binary log file recording requests and responses, but no matter).
I protect against concurrent access to the log via a shared-memory lock, and when the file reaches a certain size, the process that notices it first rolls the logs by:
closing the file.
renaming log.bin -> log.bin.1, log.bin.1 -> log.bin.2 and so on.
deleting logs that are beyond the max allowed number of logs. (say, log.bin.10)
opening a new log.bin file
The problem is that the other processes are unaware of this, and in fact continue writing to the old log file (which was renamed to log.bin.1).
I can think of several solutions:
some sort of RPC to notify the other processes to reopen the log (maybe even a signal). I don't particularly like it.
have processes check the file length via the opened file stream, somehow detect that the file was renamed under them, and reopen the log.bin file.
None of those is very elegant in my opinion.
thoughts? recommendations?
Your solution seems fine, but you should store the inode number of the current log file in shared memory (see stat(2) and its stat.st_ino member).
That way, each process keeps a local variable with the inode of the file it has open.
The shared variable is updated during rotation by only one process; all the others detect the rotation by noticing a difference between their local inode and the shared one, which should trigger a reopen.
What about opening the file by name each time before writing a log entry?
get shared memory lock
open file by name
write log entry
close file
release lock
Or you could create a logging process, which receives log messages from the other processes and handles all the rotating transparently from them.
You don't say what language you're using but your processes should all log to a log process and the log process abstracts the file writing.
Logging client1 -> |
Logging client2 -> |
Logging client3 -> | Logging queue (with process lock) -> logging writer -> file roller
Logging client4 -> |
You could copy log.bin to log.bin.1 and then truncate the log.bin file.
That way the processes can still write through their old file pointers; the file is simply empty now.
See also man logrotate:
copytruncate
    Truncate the original log file to zero size in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
Since you're using shared memory and you know how many processes are using the log file, you can create an array of flags in shared memory telling each process that the file has been rotated. Each process then resets its own flag so that it doesn't re-open the file continuously.