log4cplus - flush file before writing in it - c++

My need is quite simple:
With log4cplus, I'd like to be able to write a log to a log file and to flush the log file every time before I write to it. This way, while I run my application, I will only ever have one single line in my log file.
I've tried the append=False property, but it only flushes the log file at startup.
I could do it by hand in C++, but I don't want to write C++ code as the product is already in production.
Any idea?
Thanks,

Use the ImmediateFlush property to force the file appender to flush after each event.
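For example, in a properties configuration (the appender name MyApp and the file name are illustrative):

```properties
# Illustrative appender and file names; ImmediateFlush is the
# log4cplus FileAppender property that flushes after every log event.
log4cplus.appender.MyApp=log4cplus::FileAppender
log4cplus.appender.MyApp.File=myapp.log
log4cplus.appender.MyApp.Append=false
log4cplus.appender.MyApp.ImmediateFlush=true
```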

Related

How to run a batch file in Siebel eScript, execute a PL/SQL package through the batch file by passing a variable, and get the output

My requirement is to execute a PL/SQL package from Siebel eScript. For that, I am planning to write a batch file which can be invoked from the eScript.
In the batch file, I want to execute the package, but I am stuck at passing the input to the package and getting the output back from it. Please help me with the code.
Thanks.
The quickest answer might be using the Clib Send Command method. This can be used to run commands on the Siebel server, on any OS, e.g.:
Clib.system("dir /p C:\Backup");
So you could try invoking your .bat file:
Clib.system("C:\custom.bat arg1 arg2");
You will have to handle the arguments in the .bat (or .sh) file and invoke your PL/SQL from there.
The flip side is that there is no direct way of getting output from the command line back to Siebel.
https://docs.oracle.com/cd/E14004_01/books/eScript/C_Language_Reference101.html#wp1008859
You can get the output back into Siebel indirectly by having the command pipe it to a text file and having Siebel process that file.
The only way to do this is to call the batch file with Clib.system and have it save the output into a file. You then need a Business Service/Workflow to read the file and delete it.
It will work reliably if you are careful with the file naming to avoid concurrency issues.
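A hypothetical sketch of that indirection in Siebel eScript (the temp-file path is an assumption, and error handling is omitted): redirect the batch output to a text file, then read it back with the Clib file functions:

```
// Hypothetical paths; the shell redirection captures the batch output
Clib.system("C:\\custom.bat arg1 arg2 > C:\\temp\\custom_out.txt");

// Read the captured output back into Siebel
var fp = Clib.fopen("C:\\temp\\custom_out.txt", "r");
if (fp != null) {
    var line;
    while ((line = Clib.fgets(fp)) != null) {
        // process each line of the batch output here
    }
    Clib.fclose(fp);
    Clib.remove("C:\\temp\\custom_out.txt");  // clean up the temp file
}
```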

Difficulty in writing data to file in Ocean Script

The code below creates the file, but is not writing data to it.
p=outfile("outfile.txt" "w")
fprintf(p "write to out file")
Cadence uses buffered I/O for files, so to see your output you will need to explicitly flush the port, like so:
drain(p)
The port will also get automatically flushed when you close it, so this is only necessary if you want to see intermediate output.
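Putting the snippets together, a minimal SKILL/Ocean sketch:

```
p = outfile("outfile.txt" "w")
fprintf(p "write to out file\n")
drain(p)  ; flush the buffered output so it is visible immediately
close(p)  ; closing the port flushes it as well
```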

No need to finalize log4cplus? And a new file with every run of the application?

I have read some examples of log4cplus, but I've never found an example of finalizing it. So is there no need to finalize log4cplus myself? Does log4cplus finalize itself after the application finishes?
http://sourceforge.net/p/log4cplus/wiki/CodeExamples/
And one more question:
Is there any way to create a new log file with every run of the application? I want to use a timestamp for the log file name.
log4cplus.appender.MyApp.FilenamePattern=D{%H:%M:%S}_MyApp.log

Create a file which can be renamed while in use

I have an application which has two sub-modules, and a custom Log class written to log module activities. The requirement I am working on is that each module should create a log file with the same name and write its logs there. To explain it better: consider an initial run in which module1 is writing logs to app.log. When another session of the application starts with module2, it should also create app.log and start writing, but before that the old app.log should get renamed to something like app.log.1.
The issue I am facing is that while the log file is open in one module, the rename fails. I am working in C++ on Windows 7. To create the file I am using:
std::ofstream s_ofs;
s_ofs.open("app.log", std::ios::out | std::ios::app);
Windows does not allow this. When you open a file, for writing or for reading, it's locked and you can't do operations such as rename or delete while the file is open.
You might want to reconsider your design: either give each submodule its own uniquely named log file, or use a logging module that can receive logging input from multiple sources and multiplex it into a single file.
You can achieve this by synchronizing access to the Log class object. An approach could be as follows:
- The application creates the Log class object on startup.
- Create a synchronization object (say, a mutex) which protects access to the logging.
- Have the Log method accept a flag which differentiates between accesses from the two modules.
- Module1 gains access and starts logging.
- When module2 wants to write, the Logger detects that it has a log request from another module, closes the file, renames it, and creates another log file with the same name.

How to check if a file is still being written?

How can I check if a file is still being written? I need to wait for a file to be created, written and closed again by another process, so I can go on and open it again in my process.
In general, this is a difficult problem to solve. You can ask whether a file is open, under certain circumstances; however, if the other process is a script, it might well open and close the file multiple times. I would strongly recommend you use an advisory lock, or some other explicit method for the other process to communicate when it's done with the file.
That said, if that's not an option, there is another way. If you look in the /proc/<pid>/fd directories, where <pid> is the numeric process ID of some running process, you'll see a bunch of symlinks to the files that process has open. The permissions on the symlink reflect the mode the file was opened for - write permission means it was opened for write mode.
So, if you want to know if a file is open, just scan over every process's /proc entry, and every file descriptor in it, looking for a writable symlink to your file. If you know the PID of the other process, you can directly look at its proc entry, as well.
This has some major downsides, of course. First, you can only see open files for your own processes, unless you're root. It's also relatively slow, and only works on Linux. And again, if the other process opens and closes the file several times, you're stuck - you might end up seeing it during the closed period, and there's no easy way of knowing if it'll open it again.
You could let the writing process write a sentinel file (say "sentinel.ok") after it is finished writing the data file your reading process is interested in. In the reading process you can check for the existence of the sentinel before reading the data file, to ensure that the data file is completely written.
#blu3bird's idea of using a sentinel file isn't bad, but it requires modifying the program that's writing the file.
Here's another possibility that also requires modifying the writer, but it may be more robust:
Write to a temporary file, say "foo.dat.part". When writing is complete, rename "foo.dat.part" to "foo.dat". That way a reader either won't see "foo.dat" at all, or will see a complete version of it.
You can try using inotify
http://en.wikipedia.org/wiki/Inotify
If you know that the file will be opened once, written and then closed, it would be possible for your app to wait for the IN_CLOSE_WRITE event.
However, if the behaviour of the other application writing the file is more like open, write, close, open, write, close, ... then you'll need some other mechanism to determine when the other app has truly finished with the file.