Writing (logging) into the same file from different threads and different functions? - C++

In C++, is there any way to make writing to a file thread-safe in the following scenario?
void foo_one(){
lock(mutex1);
//open file abc.txt
//write into file
//close file
unlock(mutex1);
}
void foo_two(){
lock(mutex2);
//open file abc.txt
//write into file
//close file
unlock(mutex2);
}
In my application (multi-threaded), it is likely that foo_one() and foo_two() will be executed by two different threads at the same time.
Is there any way to make the above thread-safe?
I have considered using a file lock (fcntl and/or lockf), but I am not sure how to use them, because fopen() is used in the application (for performance reasons), and it was stated somewhere that those file locks should not be used with fopen() (because it is buffered).
PS: The functions foo_one() and foo_two() are in two different classes, and there is no way to share data between them :( and, sadly, the design is such that one function cannot call the other.

Add a function for logging.
Both functions call the logging function (which does the appropriate locking).
std::mutex logMutex;
void log(std::string const& msg)
{
std::lock_guard<std::mutex> lock(logMutex); // RAII: released on scope exit
// open("abc.txt");
// write msg
// close
}

If you really need a logger, do not try to build one simply by writing into files; use a dedicated logger instead, separating that concern from the code you're writing. There are a number of thread-safe loggers: the first that comes to mind is g2log; googling further you'll find log4cplus, a discussion here, and even a minimalist one.

If the essence of foo_one() and foo_two() is only to open the file, write something to it, and close it, then use the same mutex in both so they cannot interfere with each other:
void foo_one(){
lock(foo_mutex);
//open file abc.txt
//write into file
//close file
unlock(foo_mutex);
}
void foo_two(){
lock(foo_mutex);
//open file abc.txt
//write into file
//close file
unlock(foo_mutex);
}
Of course, this assumes these are the only writers. If other threads or processes write to the file, a lock file might be a good idea.
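Since the question's PS says the two functions live in different classes with no way to share data, one option is a free function returning a function-local static mutex; both classes can call it without sharing any member. This is a sketch only, and the names fileMutex and append_line are illustrative, not from the original code:

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>
#include <mutex>
#include <string>

// One mutex for the whole program, reachable from any translation unit
// without adding a shared data member to either class.
std::mutex& fileMutex() {
    static std::mutex m;
    return m;
}

void append_line(const std::string& msg) {
    std::lock_guard<std::mutex> guard(fileMutex());
    std::ofstream file("abc.txt", std::ios::app);  // open file abc.txt
    file << msg << '\n';                           // write into file
}                                                  // close file + unlock here
```

Both foo_one() and foo_two() could call append_line() (or take the mutex via fileMutex() themselves) and would then serialise on the same lock.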

You could have a struct with a mutex and an ofstream:
struct parser {
std::ofstream myfile;
std::mutex lock;
};
Then you can pass this struct (a) to foo_one and foo_two as a void*:
parser * a = new parser();
Construct the struct once, then pass it to both functions.
void foo_one(void * a){
parser * b = reinterpret_cast<parser *>(a);
lock(b->lock);
b->myfile.open("abc.txt");
//write into file
b->myfile.close();
unlock(b->lock);
}
You can do the same for foo_two. Because both functions lock the same mutex, this provides a thread-safe way to write to the same file.
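With C++11 facilities the same idea can be written with a real lock type; a minimal sketch (write_line is an illustrative name, not from the original answer):

```cpp
#include <cassert>
#include <fstream>
#include <mutex>
#include <string>
#include <thread>

struct parser {
    std::ofstream myfile;
    std::mutex    lock;
};

// Both threads receive a pointer to the SAME parser instance,
// so they serialise on the same mutex.
void write_line(void* a, const std::string& msg) {
    parser* b = reinterpret_cast<parser*>(a);
    std::lock_guard<std::mutex> guard(b->lock);  // held for the whole write
    b->myfile << msg << '\n';
}
```

Note that std::lock_guard releases the mutex even if the write throws, which a manual lock/unlock pair does not.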

Try this code. I've done this as an MFC console application:
#include "stdafx.h"
#include <mutex>
CWinApp theApp;
using namespace std;
const int size_ = 100; //thread array size
std::mutex mymutex;
void printRailLock(int id) {
printf("#ID :%d\n", id);
lock_guard<std::mutex> lk(mymutex); // <- this is the lock
CStdioFile lastLog;
CString logfiledb{ "_FILE_2.txt" };
CString str;
str.Format(L"%d\n", id);
bool opened = lastLog.Open(logfiledb, CFile::modeCreate | CFile::modeReadWrite | CFile::modeNoTruncate);
if (opened) {
lastLog.SeekToEnd();
lastLog.WriteString(str);
lastLog.Flush();
lastLog.Close();
}
}
int main()
{
int nRetCode = 0;
HMODULE hModule = ::GetModuleHandle(nullptr);
if (hModule != nullptr)
{
if (!AfxWinInit(hModule, nullptr, ::GetCommandLine(), 0))
{
wprintf(L"Fatal Error: MFC initialization failed\n");
nRetCode = 1;
}
else
{
std::thread threads[size_];
for (int i = 0; i < size_; ++i) {
threads[i] = std::thread(printRailLock, i + 1);
Sleep(1000);
}
for (auto& th : threads) { th.join(); }
}
}
else
{
wprintf(L"Fatal Error: GetModuleHandle failed\n");
nRetCode = 1;
}
return nRetCode;
}
Reference:
http://www.cplusplus.com/reference/mutex/lock_guard/
http://www.cplusplus.com/reference/mutex/mutex/
http://devoptions.blogspot.com/2016/07/multi-threaded-file-writer-in-c_14.html

Related

Program crashes trying to create ofstream or fopen

I don't have enough reputation points to comment and ask whether they solved the problem originally stated here. I have the same problem: the program crashes in the construction of an ofstream.
It is not multi-threaded access, but the function can be called in quick succession; I believe it crashes the 2nd time. I use scope to ensure the stream object is destroyed.
Why would this happen?
I tried std::fopen too. It also results in a crash.
Here is the code using the ofstream:
static bool writeConfigFile (const char * filename, const Config & cfg)
{
logsPrintLine(_SLOG_SETCODE(_SLOGC_CONFIG, 0), _SLOG_INFO, "Write config file (%s stream)", filename);
ofstream os(filename); // FIXME: Crashes here creating ofstream 2nd time
if (os.good())
{
// Uses stream insertion operator to output attributes to stream
// in human readable form (about 2kb)
outputConfig(cfg, os);
if (!os.good())
{
logsPrintLine(_SLOG_SETCODE(_SLOGC_CONFIG, 0), _SLOG_NOTICE, "Failed to write configuration file (%s)", filename);
return false;
}
logsPrintLine(_SLOG_SETCODE(_SLOGC_CONFIG, 0), _SLOG_INFO, "Configuration written to file (%s)", filename);
return true;
}
logsPrintLine(_SLOG_SETCODE(_SLOGC_CONFIG, 0), _SLOG_NOTICE, "Cannot write configuration file (%s)", filename);
return false;
}
/**
* Called when configuration settings have been read/received and validated
* @return true if successfully set, and written to file
*/
bool Config::set (SysConfigSource source, const struct SCADA_dsconfig * p)
{
Lock lock(mtxSet); // This is locking a mutex on construction of the lock. Release it on destruction.
// Setup the non-current one to switch to
Config * pCfg = pConfig.other();
unsigned i, f, n = 0;
// set attributes in pCfg based on the config received
// and some constants ...
pCfg->setWritten(writeConfigFile("test.conf", *pCfg));
if (!pCfg->isWritten())
{
// Don't set system config status here. Existing one still in use.
logsPrintLine(_SLOG_SETCODE(_SLOGC_CONFIG, 0), _SLOG_NOTICE, "Config file not written. Retain prior config.");
return false;
}
pConfig.swap(); // switch-in the new config
setSystemConfigSource(source);
toSyslog(pCfg);
notifyConfigChange();
return true;
}
Maybe post a segment of your source code so we can get an idea of where it went wrong.
Here is a very basic segment of code showing how I would use fstream; I hope you will find it helpful.
#include <iostream>
#include <fstream>
#include <string>
int main() {
while (1) {
std::string testString;
std::ofstream outFile;
outFile.open("Test", std::ios_base::app); // Appends to file, does not delete existing contents
std::cout << "Enter a string: ";
std::cin >> testString;
outFile << testString << std::endl;
outFile.close();
}
}
It turned out to be a device driver bus master issue. Add "ahci nobmstr" when launching devb-ahci.
Derived via http://www.qnx.com/developers/docs/qnxcar2/index.jsp?topic=%2Fcom.qnx.doc.neutrino.user_guide%2Ftopic%2Fhardware_Troubleshooting_devb-eide.html

How to get rid of if-elif-else clause

For example, I have an array of filenames:
const char *arr[] = { "FILENAME0", "FILENAME1", "FILENAME2", "FILENAME3", "FILENAME4", ... };
How can I write a function that, depending on N, will fopen and fclose N files?
switch-case and if-elif-else are straightforward, but they require a lot of conditions, and N would already have to be known (N will be passed at runtime from stdin).
A plain for-loop is not suitable here because it would open and close the files one by one. I want the function to fopen N files at the beginning, keep all N file pointers available in memory, and only then close all N files.
I expect that if N == 1 function will behave like:
int func ()
{
FILE *fp = fopen(arr[0], "r");
fclose(fp);
return 0;
}
or if N == 3:
int func ()
{
FILE *fp = fopen(arr[0], "r");
FILE *fp1 = fopen(arr[1], "r");
FILE *fp2 = fopen(arr[2], "r");
fclose(fp);
fclose(fp1);
fclose(fp2);
return 0;
}
Just store your FILE*s in a std::vector and close them with a second loop:
void func(const std::vector<std::string>& filenames) {
std::vector<FILE*> fds;
for (const std::string& filename : filenames) {
fds.push_back(std::fopen(filename.c_str(), "w"));
}
// Work with the file descriptors however you want
for (FILE* fd : fds) {
std::fclose(fd);
}
}
If you do anything that could throw an exception between when you open the files and when you close them then you may want to use an exception-safe wrapper, rather than closing the FILE*s manually:
void func(const std::vector<std::string>& filenames) {
std::vector<std::unique_ptr<FILE, int(*)(FILE*)>> fds;
for (const std::string& filename : filenames) {
fds.emplace_back(std::fopen(filename.c_str(), "w"), std::fclose);
}
// Work with the file descriptors however you want. To get
// the raw FILE* use fds[i].get()
// std::unique_ptr will call its deleter (std::fclose in this case)
// on its managed pointer in its destructor, so there's no need to
// manually close them
}
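For completeness, the unique_ptr version might be packaged like the sketch below; the open_all name, the "w" mode, and the unique_file alias are assumptions for illustration, not part of the answer above:

```cpp
#include <cassert>
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

using unique_file = std::unique_ptr<FILE, int(*)(FILE*)>;

// Open every file up front; all handles stay alive together, and each
// FILE* is fclose()d automatically when the vector goes out of scope.
std::vector<unique_file> open_all(const std::vector<std::string>& names) {
    std::vector<unique_file> files;
    for (const std::string& name : names)
        files.emplace_back(std::fopen(name.c_str(), "w"), &std::fclose);
    return files;
}
```

This matches the question's requirement: N is only known at runtime, all N files are open simultaneously, and they are all closed together afterwards.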
AFAIK, there is no way in C++ to shorten FILE *fp = fopen(arr[0], "r"); and so on. You are stuck using the if-elif-else clause for both opening and closing the files.

C++ - Duplicating stdout/stderr to file while keeping console outputs

A very similar question was already asked here:
Writing to both terminal and file c++
But without a good answer. All the answers suggest using a custom stream or duplicating std::cout. However, I need the behavior for stdout/stderr.
Wanted behavior: every write to stdout/stderr should appear on the console and also be redirected to a file.
I was thinking about redirecting stdout to a pipe and, from there, writing to both the file and the console - expanding on this answer https://stackoverflow.com/a/956269/2308106
Is there a better approach?
EDIT1: Why stdout/stderr and not custom streams?
I'm calling (3rd-party) code that I cannot modify and that is hosted within my process, so I cannot use custom streams (the called code already writes to stderr/stdout).
EDIT2:
Based on the suggestion from JvO, I tried my implementation (Windows):
HANDLE hPipe = ::CreateNamedPipe(TEXT("\\\\.\\pipe\\StdErrPipe"),
PIPE_ACCESS_OUTBOUND | FILE_FLAG_FIRST_PIPE_INSTANCE,
PIPE_TYPE_BYTE,
//single instance
1,
//default buffer sizes
0,
0,
0,
NULL);
if (hPipe == INVALID_HANDLE_VALUE)
{
//Error out
}
bool fConnected = ConnectNamedPipe(hPipe, NULL) ?
TRUE : (GetLastError() == ERROR_PIPE_CONNECTED);
if (!fConnected)
{
//Error out
}
int fd = _open_osfhandle(reinterpret_cast<intptr_t>(hPipe), _O_APPEND | /*_O_WTEXT*/_O_TEXT);
if (dup2(fd, 2) == -1)
{
//Error out
}
There is still some issue though: from the other end of the pipe I receive only garbage. (I first tried sending something directly - that works great; but once stderr is redirected and I write to it, the pipe receives the same nonprintable character over and over.)
You can 'hijack' stdout and stderr by replacing the pointers; stdout and stderr are nothing more than FILE *. I suggest you open a pipe pair first, then use fdopen() to create a new FILE * associated with the sending end of the pipe, and then point stdout to your new FILE. Use the receiving end of the pipe to extract what was written to the 'old' stdout.
Pseudo code:
int fd[2];
FILE *old_stdout, *new_stdout;
pipe(fd);
new_stdout = fdopen(fd[1], "w");
old_stdout = stdout;
stdout = new_stdout;
Now, everything you read from fd[0] can be written to a file, old_stdout, etc.
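On POSIX systems, that forwarding step might look like the sketch below (tee_loop is a hypothetical name; error handling is omitted, and this will not compile on plain Windows since it uses unistd.h):

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <unistd.h>  // POSIX read()/pipe(); not available on plain Windows

// Forward everything arriving on the pipe's read end to both the
// original console stream and a log file, until the write end closes.
void tee_loop(int read_fd, FILE* console, FILE* logfile) {
    char buf[4096];
    ssize_t n;
    while ((n = read(read_fd, buf, sizeof buf)) > 0) {
        fwrite(buf, 1, static_cast<size_t>(n), console);  // keep console output
        fwrite(buf, 1, static_cast<size_t>(n), logfile);  // duplicate to file
    }
    fflush(console);
    fflush(logfile);
}
```

In a real program this loop would run on its own thread (or be driven by select/poll), with console pointing at the saved old_stdout.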
You can redirect cout. One (incomplete) example might look like this:
#include <fstream>
#include <iostream>
template<class CharT, class Traits = std::char_traits<CharT> >
struct teestream : std::basic_streambuf<CharT, Traits> {
private:
std::basic_streambuf<CharT, Traits>* m_rdbuf1;
std::basic_streambuf<CharT, Traits>* m_rdbuf2;
public:
teestream(std::basic_streambuf<CharT, Traits>* rdbuf1, std::basic_streambuf<CharT, Traits>* rdbuf2)
:m_rdbuf1(rdbuf1)
,m_rdbuf2(rdbuf2)
{}
~teestream() {
m_rdbuf1->pubsync();
m_rdbuf2->pubsync();
}
protected:
int_type overflow(int_type ch = Traits::eof()) override
{
if (Traits::eq_int_type(ch, Traits::eof()))
return Traits::not_eof(ch); // a flush request, not a real character
int_type result = m_rdbuf1->sputc(Traits::to_char_type(ch));
if (result != Traits::eof())
{
result = m_rdbuf2->sputc(Traits::to_char_type(ch));
}
return result;
}
virtual int sync() override
{
int result = m_rdbuf1->pubsync();
if (result == 0)
{
result = m_rdbuf2->pubsync();
}
return result;
}
};
typedef teestream<char, std::char_traits<char>> basic_teestream;
int main()
{
std::ofstream fout("out.txt");
std::streambuf *foutbuf = fout.rdbuf(); //Get streambuf for output stream
std::streambuf *coutbuf = std::cout.rdbuf(); //Get streambuf for cout
std::streambuf *teebuf = new basic_teestream(coutbuf, foutbuf); //create new teebuf
std::cout.rdbuf(teebuf);//Redirect cout
std::cout << "hello" << std::endl;
std::cout.rdbuf(coutbuf); //Restore cout
delete teebuf; //Destroy teebuf
}
As you can see here the streambuf used by cout is replaced by one that controls the streambuf itself as well as the streambuf of a ofstream.
The code has most likely many flaws and is incomplete but you should get the idea.
Sources:
https://stackoverflow.com/a/10151286/4181011
How can I compose output streams, so output goes multiple places at once?
http://en.cppreference.com/w/cpp/io/basic_streambuf/pubsync
Implementing std::basic_streambuf subclass for manipulating input

Inconsistent output from c++ multithreaded program

I have the following program in C++:
// multithreading01.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <string>
#include <iostream>
#include <process.h>
using namespace std;
bool threadFinished = false;
struct params {
string aFile;
bool tf;
};
void WriteToFile(void *p)
{
params* a = (params*)p;
cout<<a->aFile<<endl;
a->tf = true;
_endthread();
}
int main(int argc, char* argv[])
{
params *param01 = new params;
params *param02 = new params;
param01->aFile = "hello from p1";
param01->tf = false;
param02->aFile = "hello from p2";
param02->tf = false;
_beginthread(WriteToFile,0,(void *) param01);
_beginthread(WriteToFile,0,(void *) param02);
while(!param01->tf || !param02->tf)
{
}
cout << "Main ends" << endl;
system("pause");
return 0;
}
However, I am getting inconsistent outputs such as
output 1:
hello from p1
hello from p2
output 2:
hello from p1hello from p2
output 3:
hhello from p2ello from p1
How can I get a consistent output from this code? I am using Visual C++ 6.0 Standard Edition.
Read this small writeup.
As everyone mentioned in the comments, when you create threads the idea, generally speaking, is to separate tasks and thus increase performance on modern multicore CPUs, which can run one thread per core.
If you want to access the same resource (the console, in your case) from two different threads, you need to make sure that simultaneous access from the two threads does not happen; otherwise you will see exactly the problem you are seeing.
You provide safe simultaneous access by protecting the shared resource with a lock (e.g. POSIX locks, or your platform-specific lock implementation).
A common beginner mistake is to lock the "code", not the "resource".
Don't do this:
void WriteToFile(void *p)
{
pthread_mutex_lock(var); //for example only: locks the whole function
params* a = (params*)p;
cout<<a->aFile<<endl;
a->tf = true;
pthread_mutex_unlock(var); //for example only
_endthread();
}
You should instead put the lock in your resource:
struct params {
lock_t lock; //for example only not actual code
string aFile;
bool tf;
};
void WriteToFile(void *p)
{
params* a = (params*)p;
pthread_mutex_lock(a->lock); //Locking params here not the whole code.
cout<<a->aFile<<endl;
a->tf = true;
pthread_mutex_unlock(a->lock); //Unlocking params
_endthread();
}
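Applied to the program in the question, the shared resource is std::cout itself, so both threads must take the same mutex. A minimal C++11 sketch (using std::thread instead of _beginthread; WriteLine is an illustrative name):

```cpp
#include <cassert>
#include <iostream>
#include <mutex>
#include <sstream>
#include <string>
#include <thread>

std::mutex coutLock;  // one lock for the one shared resource: std::cout

void WriteLine(const std::string& msg) {
    std::lock_guard<std::mutex> guard(coutLock);  // whole line prints atomically
    std::cout << msg << '\n';
}
```

Joining the two std::thread objects also replaces the busy-wait on the non-atomic tf flags, which is itself a data race in the original program.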

Creating text file into C++ addon of node.js

I want to know how i can create file and append data inside it in c++ addon (.cc) file of node.js ??
I have used below code to do same, but not able to find file "data.txt" in my ubuntu machine(reason behind it may be below code is not correct way to create file, but strange i haven't received any error/warning at compile time).
FILE * pFileTXT;
pFileTXT = fopen ("data.txt","a+");
const char * c = localReq->strResponse.c_str();
fprintf(pFileTXT, "%s", c); // never pass external data as the format string
fclose (pFileTXT);
Node.js relies on libuv, a C library to handle the I/O (asynchronous or not). This allows you to use the event loop.
You'd be interested in this free online book/introduction to libuv: http://nikhilm.github.com/uvbook/index.html
Specifically, there is a chapter dedicated to reading/writing files.
int main(int argc, char **argv) {
// Open the file in write-only and execute the "on_open" callback when it's ready
uv_fs_open(uv_default_loop(), &open_req, argv[1], O_WRONLY, 0, on_open);
// Run the event loop.
uv_run(uv_default_loop());
return 0;
}
// on_open callback called when the file is opened
void on_open(uv_fs_t *req) {
if (req->result != -1) {
// Specify the on_write callback "on_write" as last argument
uv_fs_write(uv_default_loop(), &write_req, 1, buffer, req->result, -1, on_write);
}
else {
fprintf(stderr, "error opening file: %d\n", req->errorno);
}
// Don't forget to cleanup
uv_fs_req_cleanup(req);
}
void on_write(uv_fs_t *req) {
uv_fs_req_cleanup(req);
if (req->result < 0) {
fprintf(stderr, "Write error: %s\n", uv_strerror(uv_last_error(uv_default_loop())));
}
else {
// Close the handle once you're done with it
uv_fs_close(uv_default_loop(), &close_req, open_req.result, NULL);
}
}
Spend some time reading the book if you want to write C++ for node.js. It's worth it.