When Log4cxx decides to write the logs it has cached to the file (as configured previously), is the decision buffer-based or timer-based?
Also, can I configure Log4cxx to write the logs when I send them to it, not when it decides to?
When you set your file in the RollingFileAppender with setFile() you can tell whether you want buffered IO or not. This option automatically configures setImmediateFlush() accordingly.
The code for the buffered writer shows that the flushing decision is based exclusively on size: a flush happens when the buffer plus the new output would exceed the buffer size.
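A minimal sketch of that configuration (the exact setFile() overload and the smart-pointer typedefs vary between log4cxx versions, so treat the names here as assumptions to check against your headers):

#include <log4cxx/logger.h>
#include <log4cxx/patternlayout.h>
#include <log4cxx/rollingfileappender.h>
#include <log4cxx/helpers/pool.h>

int main() {
    log4cxx::helpers::Pool pool;
    log4cxx::PatternLayoutPtr layout(
        new log4cxx::PatternLayout(LOG4CXX_STR("%d|%p|%m%n")));
    log4cxx::RollingFileAppenderPtr appender(
        new log4cxx::RollingFileAppender());
    appender->setLayout(layout);

    // bufferedIO = false: setFile() then turns on immediateFlush,
    // so each log statement goes straight to the file.
    appender->setFile(LOG4CXX_STR("app.log"), /*append=*/true,
                      /*bufferedIO=*/false, /*bufferSize=*/8 * 1024, pool);
    appender->activateOptions(pool);

    log4cxx::Logger::getRootLogger()->addAppender(appender);
    LOG4CXX_INFO(log4cxx::Logger::getRootLogger(), "written immediately");
    return 0;
}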
I have a log4cpp implementation in a multiple-process environment. The logger is configured once during initialization and then shared among forked processes that serve HTTP requests.
During the first minute or so, the log rolls perfectly fine at the query-per-second load (say it runs at 100 qps).
After that, logging slows down dramatically. So I logged the pid as well and noticed that only one process gets to write to the log for a stretch (around 10-15 seconds), then another process starts writing, and so on and so forth. The processes don't die; they just don't get a chance to write.
This is different from what happens when the server starts: at that time, every other log line is written by a different process. (Also, I write one log line per process at the end of serving each request.)
At this point, I can't think of what could be going wrong.
This is how my log4cpp conf file looks:
log4cpp.rootCategory=DEBUG,rootAppender
log4cpp.appender.rootAppender=org.apache.log4cpp.RollingFileAppender
log4cpp.appender.rootAppender.fileName=/tmp/mylogfile.log
log4cpp.appender.rootAppender.layout=org.apache.log4cpp.PatternLayout
log4cpp.appender.rootAppender.layout.ConversionPattern=%d|%p|%m%n
log4cpp.category.http.server.main=INFO,MAIN
log4cpp.additivity.http.server.main=false
log4cpp.appender.MAIN=org.apache.log4cpp.RollingFileAppender
log4cpp.appender.MAIN.maxBackupIndex=10
log4cpp.appender.MAIN.maxFileAge=1
log4cpp.appender.MAIN.append=true
log4cpp.appender.MAIN.fileName=/tmp/mylogfile.log
log4cpp.appender.MAIN.layout=org.apache.log4cpp.PatternLayout
log4cpp.appender.MAIN.layout.ConversionPattern=%d|%p|%m%n
Edit: more updates. Thanks @Botje for your time.
I see that whenever a new child process is created, only that process gets to write to the log. That tells me that all the references the other processes were holding have become invalid.
I also tried setting the additivity property to true. With that, the server starts out properly writing to /tmp/myfile.log, then switches to writing to /tmp/myfile.log.1 within a minute, and then stops writing after another minute.
At that point the logs get directed to stderr, which is redirected to another log file.
Also, I did notice that the log4cpp FileAppender uses seek to determine the file size before writing log entries. If the file handle is shared between processes, that will cause writes to end up at the start of the file instead of the end. Even if you fix that, you still have multiple processes that each think they are in charge of log file rotation.
I suggest you have all processes write to a common UDP/TCP/Unix socket and designate one process that collects all log entries and actually writes them to a file. You don't have to reinvent the wheel: you can use the syslog protocol and either the system syslog or a copy running in userspace.
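A minimal sketch of the syslog route (syslog(3) is standard POSIX; the ident string and facility here are arbitrary choices):

#include <syslog.h>

int main() {
    // Each worker opens its own connection to the syslog daemon; the
    // daemon serializes the writes, so the workers need no shared lock
    // and no rotation logic of their own.
    openlog("myserver", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "request served in %d ms", 42);
    closelog();
    return 0;
}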
I have been searching this site and the Boost.Log doc for a way to do this but have come up empty so far.
The doc (https://www.boost.org/doc/libs/1_74_0/libs/log/doc/html/log/detailed/sink_backends.html) mentions the ability to make a text_stream_backend flush after each written log record by calling auto_flush(true).
While this works well for debugging, I was wondering whether it is possible to configure a custom number of log records to be received by the core (or sink?) before a flush() occurs. My goal is to strike a balance between useful live logging (seeing log records frequently enough with a tail -f) and performance.
Alternatively, would it be possible to configure the size of the buffer containing log records so that once it fills up, it gets flushed?
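As far as I know Boost.Log does not expose a record-count flush setting on text_stream_backend, but a custom formatted sink backend can implement one. A hedged sketch (the class name and the flush threshold are mine, not part of the library):

#include <fstream>
#include <boost/log/core.hpp>
#include <boost/log/core/record_view.hpp>
#include <boost/log/sinks/basic_sink_backend.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/make_shared.hpp>

namespace sinks = boost::log::sinks;

// Hypothetical backend: buffers formatted records in the ofstream and
// flushes the stream only every flush_every records.
class counting_file_backend
    : public sinks::basic_formatted_sink_backend<char, sinks::synchronized_feeding> {
public:
    counting_file_backend(const char* path, unsigned flush_every)
        : file_(path, std::ios::app), flush_every_(flush_every) {}

    void consume(boost::log::record_view const&, string_type const& formatted) {
        file_ << formatted << '\n';
        if (++count_ % flush_every_ == 0)
            file_.flush();
    }

private:
    std::ofstream file_;
    unsigned flush_every_;
    unsigned count_ = 0;
};

int main() {
    using sink_t = sinks::synchronous_sink<counting_file_backend>;
    auto sink = boost::make_shared<sink_t>(
        boost::make_shared<counting_file_backend>("app.log", 50));
    boost::log::core::get()->add_sink(sink);
}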
Multiple processes are writing to the same file simultaneously. When the file size exceeds the limit (for example, 10 MB), the file is renamed (sample.txt to sample1.txt, rolling-appender style) and a new file is created under the same name.
My issue is that with multiple processes writing at the same time, when the size limit is hit and the file is closed, one of the processes may still be writing to the old file, and the file rolling doesn't happen. Can anyone help?
One strategy that I've used also works on a distributed computing system across multiple machines.
If you create a library which packages log messages and then sends them via TCP to a destination, then you can have as many processes as you like writing to the same logger. You'd need a server at that destination to receive the log messages and write them to one file.
Generally, inter-process communication occurs via either shared memory or networking. Using networking we can go not only inter-process but also inter-machine. If we just use the destination localhost or 127.0.0.1, the packet never actually reaches the network card; most drivers are smart enough to just pass the packet to any listening processes, which leads to good performance too.
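A minimal sketch of the client side (the port number and the function names are arbitrary choices; a single collector process on the other end owns the file and does any rolling):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Connect to a local collector process; 5140 is an arbitrary port choice.
int log_connect() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5140);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}

// One line per log message; the collector serializes them into the file.
void log_line(int fd, const std::string& msg) {
    std::string line = msg + "\n";
    send(fd, line.data(), line.size(), 0);  // error handling elided
}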
Is there any way to check if a file is in use in C/C++? Or do I always have to implement a lock/semaphore to prevent simultaneous access to a file by multiple threads/processes?
Consider Linux and the following scenario: I want to transfer, in chunks, the contents of a file stored on device A to another device B over RS-232, using a pre-defined communication framework. When the request for this transfer comes in, I want to verify that the file is NOT being used by any process on device A before sending a "Ready to Transfer : OK" response, after which I will start reading and transmitting the data in chunks.
Is there a way to check whether the file is already in use without doing fopen/fclose on it?
Actually, fopen() is the best way to find this out.
Do fopen() on the receiving end; if it is successful, send the "OK to receive" message.
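A sketch of that check (the function name and the response handling around it are mine):

#include <cstdio>

// Try to open the file before replying; if fopen() succeeds, the caller
// can send the "Ready to Transfer : OK" response and start reading.
bool can_open(const char* path) {
    FILE* f = std::fopen(path, "rb");
    if (!f)
        return false;   // not openable: report not ready
    std::fclose(f);
    return true;
}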
Here is the deal:
I have a multiple-process system (pre-fork model, similar to Apache). All processes write to the same log file (in fact a binary log file recording requests and responses, but no matter).
I protect against concurrent access to the log via a shared-memory lock, and when the file reaches a certain size, the process that notices it first rolls the logs by:
closing the file.
renaming log.bin -> log.bin.1, log.bin.1 -> log.bin.2, and so on.
deleting logs beyond the maximum allowed number (say, log.bin.10).
opening a new log.bin file.
The problem is that the other processes are unaware and in fact continue to write to the old log file (which was renamed to log.bin.1).
I can think of several solutions:
some sort of RPC to notify the other processes to reopen the log (maybe even a signal). I don't particularly like it.
have processes check the file length via the opened file stream and somehow detect that the file was renamed under them, then reopen the log.bin file.
Neither of those is very elegant, in my opinion.
Thoughts? Recommendations?
Your solution seems fine, but you should store the inode number of the current log file in shared memory (see stat(2) and the st_ino member).
That way, each process keeps a local variable with the inode of the file it has open.
The shared variable must be updated, during rotation, by only one process; all the other processes detect the rotation by noticing a difference between their local inode and the shared one, which should trigger a reopen.
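A sketch of the check, assuming shared_ino lives in the existing shared-memory segment and is updated under the existing lock by whichever process rotates:

#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

// Compare the inode of the descriptor we hold against the shared one;
// a mismatch means the file was renamed under us, so reopen by name.
int reopen_if_rotated(int log_fd, const char* path, const ino_t* shared_ino) {
    struct stat st;
    if (fstat(log_fd, &st) == 0 && st.st_ino != *shared_ino) {
        close(log_fd);
        log_fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
    }
    return log_fd;
}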
What about opening the file by name each time before writing a log entry? (A sketch follows the steps below.)
get shared memory lock
open file by name
write log entry
close file
release lock
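Here is that sketch, with acquire_lock()/release_lock() standing in for the shared-memory lock you already have (hypothetical helpers):

#include <cstdio>

void acquire_lock();   // hypothetical: your existing shared-memory lock
void release_lock();

void write_log_entry(const char* path, const char* entry) {
    acquire_lock();
    FILE* f = std::fopen(path, "a");  // open by name each time, so a
    if (f) {                          // rename by another process is harmless
        std::fputs(entry, f);
        std::fclose(f);
    }
    release_lock();
}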
Or you could create a logging process which receives log messages from the other processes and handles all the rotation transparently to them.
You don't say what language you're using, but your processes should all log to a logging process, and that process abstracts the file writing.
Logging client1 -> |
Logging client2 -> |
Logging client3 -> | Logging queue (with process lock) -> logging writer -> file roller
Logging client4 -> |
You could copy log.bin to log.bin.1 and then truncate the log.bin file.
That way the other processes can keep writing through their old file descriptors, which now refer to the truncated (empty) file.
See also man logrotate:
copytruncate
    Truncate the original log file to zero size in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
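A copytruncate-style roll in code might look like this sketch (names are mine; note the same data-loss window the man page warns about):

#include <fstream>
#include <unistd.h>  // truncate()

bool roll_copytruncate(const char* live, const char* backup) {
    {
        std::ifstream src(live, std::ios::binary);
        std::ofstream dst(backup, std::ios::binary | std::ios::trunc);
        if (!src || !dst)
            return false;
        dst << src.rdbuf();          // copy log.bin -> log.bin.1
    }                                // records written between the copy and
    return truncate(live, 0) == 0;   // the truncate below are lost
}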
Since you're using shared memory, and if you know how many processes are using the log file, you can create an array of flags in shared memory telling each process that the file has been rotated. Each process then clears its own flag so that it doesn't re-open the file continuously.
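A sketch of that flag array, assuming a fixed worker count and that the struct lives in a segment mapped by every process (std::atomic<bool> is safe across processes only if it is lock-free, which it is on mainstream platforms):

#include <atomic>

constexpr int kNumWorkers = 8;                // assumption: known up front

struct SharedLogState {
    std::atomic<bool> rotated[kNumWorkers];   // one flag per process
};

// Rotating process: after renaming the files, raise every flag.
void signal_rotation(SharedLogState* s) {
    for (auto& f : s->rotated)
        f.store(true, std::memory_order_release);
}

// Each worker, before writing: if my flag is set, clear it and reopen.
bool should_reopen(SharedLogState* s, int my_index) {
    return s->rotated[my_index].exchange(false, std::memory_order_acq_rel);
}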