Saving a log text file regardless of crashing - C++

My program saves all kinds of log information into a text file. But if the program crashes due to a problem such as a memory access violation, the log text file ends up empty.
I believe this is because the program failed to close the open log file.
Currently I am using a FILE* for the log file. I could open and close the file for every log entry, but I think that is too much overhead.
Is there any other way to keep the log regardless of a crash or unexpected termination of the program?
I want to see the log entries written right up to the point of the crash.
I am using C++/CLI for my program. Thank you very much in advance.
FILE* logfile;
errno_t err;
bool bLogfile = false;
SYSTEMTIME st;
char logBuf[512] = {0,};
char LogFileNameBuf[512] = {0,};

sprintf_s(LogFileNameBuf, "LogFile.txt");
err = fopen_s(&logfile, LogFileNameBuf, "wt");
if (logfile != NULL)
{
    bLogfile = true;
    GetLocalTime(&st);
    sprintf_s(logBuf, "[%04d-%02d-%02d][%02d:%02d:%02d] SUCCESS:: Log Started\n",
              st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);
    fputs(logBuf, logfile);
}

// close log file
if (bLogfile == true)
{
    GetLocalTime(&st);
    sprintf_s(logBuf, "[%04d-%02d-%02d][%02d:%02d:%02d] SUCCESS:: Log File Closed\n",
              st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);
    fputs(logBuf, logfile);
    fclose(logfile);
}

You can try forcing the I/O to disk with fflush(). Call it each time you write to the log, so that as much data as possible is actually written before any crash.
Also, since you are using C++, I would suggest using fstream instead of FILE*.

Related

C++: Problems writing a file with the FILE library when the Filebeat process is running

I have code that writes a file using the FILE library. It usually works, but I found a case where it doesn't: when the code runs concurrently with the Filebeat process.
I don't know the cause of the problem because my C++ project does not support debugging.
I am participating in an open source project developed by someone else, and I am not familiar with this project yet.
This is my C++ code:
FILE *fptr;
fptr = fopen(log_path.c_str(), "w");
if (fptr == NULL)
{
    printf("Error!");
    exit(1);
}
fprintf(fptr, "%s", log.c_str());
fclose(fptr);
Is there any other good way to save log files?
Please give me some advice.
Your code can hit a broken-pipe exception when it tries to write the file.
This exception occurs when the C++ code tries to write the log file while the Filebeat software is reading it.
So I recommend using this C++ code:
#include <fstream>
#include <string>

std::string logFilePath = "this is path string of log file";
std::string log = "this is log string";

std::ofstream output(logFilePath, std::ios::app);
output << log << std::endl;
output.close();
This code, which uses the ofstream class, resolves the broken-pipe exception.
With this C++ code you can pass a string directly when writing the file, so there is no need to convert via c_str().
I checked that this code works with Filebeat 7.10.0.
Thank you.

C++ ofstream not creating and writing to file on subsequent runs of the program

#include <fstream>
#include <iostream>

#define LOGFATAL(msg) log(0, msg)

std::ofstream *logst = NULL;

void log(int sev, const char *msg)
{
    if (logst == NULL) {
        logst = new std::ofstream();
        logst->open("filea.txt", std::ios::out | std::ios::app);
        *logst << "Logger started." << std::endl;
    }
    std::ofstream &_log = *logst;
    _log << msg << std::endl;
    _log.flush();
}

int main()
{
    LOGFATAL("Log msg1.");
    LOGFATAL("Log msg2.");
    LOGFATAL("Log msg3.");
    logst->close();
    delete logst;
}
I open the file for logging the very first time I log, and keep it open until the end of the program.
Since I call flush() after every log invocation, I expect to see my messages appear almost immediately. But this does not happen. Why?
Currently I kill my program with Ctrl+C before it finishes (don't ask me why). On subsequent runs of the program I don't even see the log file being created, and even if it already exists I don't see any new entries appended. Since I never let close() execute, does the file descriptor get leaked and cause future open() calls from new runs of the program to fail?
I am running on RHEL 7.2, and I assume most modern OSs clean up open descriptors at process exit even if close() isn't called. Given that Ctrl+C is currently the only way to stop my program, what can I do to make it log correctly every time it is started?
Is there a way, from the system shell, to check whether there are any leaked file descriptors for my log file?
Inserting std::endl will itself flush the data; it works for me even with _log.flush(); commented out.
Regarding "I expect to see my messages printed close to immediately": if you are viewing the file in vim, you need to close and reopen it. Use tail -F filea.txt instead to see output as soon as data gets written to the file.
Take the following steps to validate some assumptions:
1.) Open your file as a std::fstream instead. After writing the first log message, reset the file pointer to the beginning and read the contents of the file. You should get your log message back from the open file. If not, you probably have memory corruption.
2.) After writing the first log message, open the file with a std::ifstream and read its contents. You should get your log message back from the std::ifstream. If not, another process is probably interfering with your file.
Note: The OS takes some time to make file changes visible to other processes. Under high load I have observed delays of several 100 ms, e.g. until the message shows up in tail -f.

Refactoring fopen_s

I am attempting to refactor a very old piece of code that generates a log file:
FILE *File = NULL;
errno_t err = fopen_s(&File, m_pApp->LogFilename(), "a+"); // Open log file to append to
if (err == 0)
{
    ::fprintf(File, "Date,Time,Serial Number,ASIC Voltage,Ink Temp,Heater Temp, Heater Set Point, PSOC Version,");
    if (m_ExtraLog)
        ::fprintf(File, "T1 Temperature,ASIC Temperature,Proc Temperature,Voltage mA");
    ::fprintf(File, "\n");
    fclose(File);
}
The reason for refactoring is that some users report that it is not possible to copy the file that is being produced (they want to copy it so that it can be analysed by a labview program). I read the documentation regarding fopen_s and saw that "Files that are opened by fopen_s and _wfopen_s are not sharable" - is this the cause of my problem? I am unsure because actually, I do not see the copying problem and seem to be able to copy and paste the file without issue. In any case I have replaced it with the recommended _fsopen function like so:
FILE *File = NULL;
if ((File = _fsopen(m_pApp->LogFilename(), "a+", _SH_DENYNO)) != NULL)
{
    ::fprintf(File, "Date,Time,Serial Number,ASIC Voltage,Ink Temp,Heater Temp, Heater Set Point, PSOC Version,");
    if (m_ExtraLog)
    {
        ::fprintf(File, "T1 Temperature,ASIC Temperature,Proc Temperature,Voltage mA");
    }
    ::fprintf(File, "\n");
    fclose(File);
}
I've given the refactored code to the user, but they still report being unable to copy or access the file from LabVIEW. I have very limited knowledge of C++, so I am wondering: is there any other explanation as to why the generated file cannot be copied by another process?
Let's have a look into the doc:
Open a file. These are versions of fopen, _wfopen with security enhancements as described in Security Enhancements in the CRT.
Following that link, we can read:
Filesystem security. Secure file I/O APIs support secure file access in the default case.
So to fix this, you must change the file's security settings so that all users have read access.

Detail File IO error reporting in C++

Is there any open source file I/O library, or an easy way in C++, that reports detailed and exact errors for file I/O? For example: if the user doesn't have read or write permission, or if the disk is full, etc.
C-style file operations do this by default; you just need to include <cerrno> and <cstring> and use strerror after an unsuccessful file operation call:
FILE *hFile = fopen(fname, "r+b");
/*-- attempt to create the file if we can't open it for reading --*/
if (!hFile) {
    /*-- print out relevant error information --*/
    printf("Open File %s Failed, %s\n", fname, std::strerror(errno));
    return 1;
}
return 0;
That is, of course, if you use C-style file operations. I think ifstream also supports this on most compilers.
A note: this functionality is not thread safe on some implementations. On Linux there is strerror_r, which is thread safe.

Why ofstream would fail to open the file in C++? Reasons?

I am trying to open an output file which I am sure has a unique name, but it fails once in a while. I could not find any information on the reasons why the ofstream constructor would fail.
EDIT:
It starts failing at some point in time, and after that it continuously fails until I stop the running program which writes this file.
EDIT:
once in a while = 22-24 hours
code snippet (I don't think this would help, but someone asked for it):
ofstream theFile( sLocalFile.c_str(), ios::binary | ios::out );
if ( theFile.fail() )
{
    std::string sErr = " failed to open ";
    sErr += sLocalFile;
    log_message( sErr );
    return FILE_OPEN_FAILED;
}
Too many file handles open? Out of space? Access denied? Intermittent network drive problem? File already exists? File locked? It's awfully hard to say without more details. Edit: Based on the extra details you gave, it sounds like you might be leaking file handles (opening files and failing to close them and so running out of a per-process file handle limit).
I assume that you're familiar with using the exceptions method to control whether iostream failures are communicated as exceptions or as status flags.
In my experience, the iostream classes give very little details on what went wrong when they fail during an I/O operation. However, because they're generally implemented using lower-level Standard C and OS API functions, you can often get at the underlying C or OS error code for more details. I've had good luck using the following function to do this.
std::string DescribeIosFailure(const std::ios& stream)
{
    std::string result;
    if (stream.eof()) {
        result = "Unexpected end of file.";
    }
#ifdef WIN32
    // GetLastError() gives more details than errno.
    else if (GetLastError() != 0) {
        result = FormatSystemMessage(GetLastError());
    }
#endif
    else if (errno) {
#if defined(__unix__)
        // We use strerror_r because it's threadsafe.
        // GNU's strerror_r returns a string and may ignore buffer completely.
        char buffer[255];
        result = std::string(strerror_r(errno, buffer, sizeof(buffer)));
#else
        result = std::string(strerror(errno));
#endif
    }
    else {
        result = "Unknown file error.";
    }
    boost::trim_right(result); // from Boost String Algorithms library
    return result;
}
You could be out of space, or there could be a permission issue. The OS may have locked the file as well. Try a different name/path for kicks and see if it works then.
One possibility is that you have another instance of the same program running.
Another is that perhaps you run two instances (for debugging purposes?) right after each other, and the OS hasn't finished closing the file and resetting the locks before your next instance of the program comes along and asks for it.