Multithreaded logger - C++

I am trying to create a logger for multithreaded c++ code using boost. Here's my code:
class logger
{
private:
    boost::mutex logMtx;
public:
    logger()
    {
    }
    ~logger()
    {
    }
    void logString(string z)
    {
        boost::mutex::scoped_lock lock(logMtx);
        std::cout << z << std::endl;
        std::cout.flush();
    }
};
Then I share an instance of this class with multiple threads (the instance is created in the main thread before the other threads are started) and call the logString function for logging. It does not seem to work: some lines come out truncated, i.e. the whole string is not printed; if I pass "abcd" it sometimes prints "bcd".
Is there something wrong with this approach?
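For reference, a minimal sketch of how such a logger might be shared across threads, assuming boost::thread workers (the worker function and thread count are my own, since the question does not show the thread creation code):

#include <boost/thread.hpp>
#include <boost/ref.hpp>
#include <sstream>

// assumes the logger class from the question above
void worker(logger& log, int id)
{
    for (int i = 0; i < 10; ++i)
    {
        std::ostringstream oss;
        oss << "thread " << id << " message " << i;
        log.logString(oss.str()); // every thread serializes on the same mutex
    }
}

int main()
{
    logger log; // created in the main thread, as described
    boost::thread t1(&worker, boost::ref(log), 1);
    boost::thread t2(&worker, boost::ref(log), 2);
    t1.join();
    t2.join();
}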

Related

C++: Writing to a file in a multithreaded program

So I have multiple threads writing to the same file by calling Log::write method.
class Log
{
private:
    ofstream log;
    string file_path;
public:
    Log(string);
    void write(string);
};

Log::Log(string _file_path)
{
    file_path = _file_path;
}

void Log::write(string str)
{
    EnterCriticalSection(&CriticalSection);
    log.open(file_path.c_str(), std::ofstream::app);
    log << str + '\n';
    log.close();
    LeaveCriticalSection(&CriticalSection);
}
Is it safe if threads will call Log::write method of the same object at the same time?
Your code is wasteful and does not follow C++ idioms.
Starting from the end: yes, write is thread safe, because the Win32 CRITICAL_SECTION protects it from concurrent modifications.
Although:
Why open and close the stream each time? This is a very wasteful thing to do. Open the stream in the constructor and leave it open; the destructor will deal with closing the stream.
If you want to use a Win32 critical section, at least make it RAII safe: write a class which wraps a reference to the critical section, locking it in its constructor and unlocking it in its destructor. That way, even if an exception is thrown, you are guaranteed that the lock will be released (see the sketch after these points).
Where is the declaration of CriticalSection anyway? It should be a member of Log.
Are you aware of std::mutex?
Why are you passing strings by value? It is very inefficient. Pass them by const reference.
You use snake_case for some of the variables (file_path) but upper camel case for others (CriticalSection). Use the same convention.
str is never a good name for a string variable, and the file stream is not a log; it is the thing that does the actual logging, so logger would be a better name. In my correction I just named it m_file_stream.
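As an illustration of the RAII point above, here is a minimal sketch of a scoped wrapper around a Win32 CRITICAL_SECTION; the class names are mine and not part of the question or the corrected code below:

#include <windows.h>

class critical_section
{
public:
    critical_section()  { InitializeCriticalSection(&cs_); }
    ~critical_section() { DeleteCriticalSection(&cs_); }
    void lock()   { EnterCriticalSection(&cs_); }
    void unlock() { LeaveCriticalSection(&cs_); }
private:
    CRITICAL_SECTION cs_;
    critical_section(const critical_section&);            // non-copyable
    critical_section& operator=(const critical_section&);
};

class scoped_lock
{
public:
    explicit scoped_lock(critical_section& cs) : cs_(cs) { cs_.lock(); }
    ~scoped_lock() { cs_.unlock(); } // unlocks even if an exception propagates
private:
    critical_section& cs_;
    scoped_lock(const scoped_lock&);
    scoped_lock& operator=(const scoped_lock&);
};

The corrected code below sidesteps this entirely by using std::mutex with std::lock_guard, which gives the same RAII guarantee.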
Corrected code:
class Log
{
private:
    std::mutex m_lock;
    std::ofstream m_file_stream;
    std::string m_file_path;
public:
    Log(const std::string& file_path);
    void write(const std::string& log);
};

Log::Log(const std::string& file_path) :
    m_file_path(file_path)
{
    m_file_stream.open(m_file_path.c_str());
    if (!m_file_stream.is_open() || !m_file_stream.good())
    {
        // throw relevant exception.
    }
}

void Log::write(const std::string& log)
{
    std::lock_guard<std::mutex> lock(m_lock);
    m_file_stream << log << '\n';
}
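For completeness, a short usage sketch of the corrected class, assuming C++11 threads are available; the thread count, file name, and message format are arbitrary:

#include <thread>
#include <vector>
#include <string>

// uses the corrected Log class above
int main()
{
    Log log("app.log");
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
    {
        workers.emplace_back([&log, i]
        {
            for (int j = 0; j < 100; ++j)
                log.write("thread " + std::to_string(i) + " line " + std::to_string(j));
        });
    }
    for (auto& t : workers)
        t.join();
}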

How can I detect that a std::thread running detached (async) has terminated?

I'm a Java engineer but I need to migrate my Java code to C++.
In C++:
void foo(){
    thread t(&loading_function);
    t.detach();
}

void loading_function(){
    while(true){
        //..some loading operations
    }
}

//I want to call after loading_function NOT in main thread like t.join();
void loading_after(){
    //..loading after handling
}
I want to run some handling after thread t has finished its work.
Like this Java code:
public class Test implements Runnable{
    public void foo(){
        Thread loading_thread = new Thread(this);
        loading_thread.start();
    }

    public void run(){
        while(true){
            //..some loading operations
        }
        loading_after();
    }

    public void loading_after(){
        //..loading after handling
    }
}
How can I do that?
Based on the description of how std::thread::detach() works, I don't think you can detect when it ends unless you make your loading_function() signal to the outside world that it has ended. There seems to be no built-in mechanism for a detached std::thread to signal that it has finished. I might be wrong; I have little experience with std::thread.
An alternative would be to make a function that calls both loading_function() and loading_after() and pass that function to the std::thread object.
void loading_function()
{
    while(true)
    {
        //..some loading operations
    }
}

//I want to call after loading_function NOT in main thread like t.join();
void loading_after()
{
    //..loading after handling
}

void load()
{
    loading_function();
    loading_after();
}

void foo()
{
    thread t(&load);
    t.detach();
}
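If the caller really does need to know when the detached thread has finished, one option (my own sketch, not from the answer above) is to have the combined function set a shared flag as its last step:

#include <thread>
#include <atomic>
#include <memory>

void loading_function(); // defined as in the question
void loading_after();    // defined as in the question

// returns a shared flag; the detached thread sets it when it is done
std::shared_ptr<std::atomic<bool>> start_loading()
{
    auto finished = std::make_shared<std::atomic<bool>>(false);
    std::thread t([finished]
    {
        loading_function();
        loading_after();
        finished->store(true); // signal completion to the outside world
    });
    t.detach();
    return finished; // callers can poll *finished
}

Anything that needs to run strictly after the thread ends can check or poll that flag instead of joining.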

Two pcap_compile() calls on one device at the same time?

I have two threads and each one captures packets from the same device at the same time, but the program crashes when the second thread reaches the pcap_compile() function. Each thread has its own variables and does not use globals. It seems that they get the same handle for the device, and therefore the program crashes. Why do I need two threads? Because I want to separate packets into sent and received using a specified pcap filter. So how do I solve this? Or is it better to use one thread and sort the sent and received packets manually by using the addresses from the TCP header?
pcap_compile is not thread safe. You must surround all calls to it that may be made from separate threads with a critical section/mutex to prevent errors caused by the non-thread-safe state inside the parser that compiles the expression (for the gory details: it uses YACC to generate the code that parses the expression, and that generated code is eminently not thread safe).
You also need to explicitly open the device once per thread that you plan to use for capture; if you reuse the same device handle across multiple threads it will simply not do what you're asking for. Open the pcap handle within the thread that is going to use it, so each thread that does capture performs its own pcap_open.
To guard the call to pcap_compile with a critical section, you could create a simple wrapper (a C++ wrapper of the Windows critical section):
class lock_interface {
public:
    virtual void lock() = 0;
    virtual void unlock() = 0;
};

class cs : public lock_interface {
    CRITICAL_SECTION crit;
public:
    cs() { InitializeCriticalSection(&crit); }
    ~cs() { DeleteCriticalSection(&crit); }
    virtual void lock() {
        EnterCriticalSection(&crit);
    }
    virtual void unlock() {
        LeaveCriticalSection(&crit);
    }
private:
    cs(const cs &);
    cs &operator=(const cs &);
};

class locker {
    lock_interface &m_ref;
public:
    locker(lock_interface &ref) : m_ref(ref) { m_ref.lock(); }
    ~locker() { m_ref.unlock(); }
private:
    locker(const locker &);
    locker &operator=(const locker &);
};

static cs section;

int
wrapped_pcap_compile(pcap_t *p, struct bpf_program *fp, const char *str, int optimize, bpf_u_int32 netmask)
{
    locker locked(section);
    return pcap_compile(p, fp, str, optimize, netmask);
}
If you are using C++11, you can have something like:
int thread_safe_pcap_compile_nopcap(int snap_len, int link_type,
                                    struct bpf_program *fp, char const *str,
                                    int optimize, bpf_u_int32 netmask) {
    static std::mutex mtx;
    std::lock_guard<std::mutex> lock(mtx);
    return pcap_compile_nopcap(snap_len, link_type, fp, str, optimize, netmask);
}
It is similar for the pcap_compile function.
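To tie both points together, here is a hedged sketch of what each capture thread might do, assuming standard libpcap calls (pcap_open_live, pcap_setfilter, pcap_loop) and guarding only the pcap_compile call; the snap length, timeout, and filter strings are placeholders:

#include <pcap.h>
#include <mutex>

// guards pcap_compile, which is not thread safe (same idea as the wrappers above)
static std::mutex compile_mtx;

void capture_thread(const char* device, const char* filter_expr)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    // each thread opens its own handle to the device
    pcap_t* handle = pcap_open_live(device, 65535, 1, 1000, errbuf);
    if (!handle)
        return; // real code should report errbuf

    bpf_program prog;
    {
        std::lock_guard<std::mutex> lock(compile_mtx);
        if (pcap_compile(handle, &prog, filter_expr, 1, PCAP_NETMASK_UNKNOWN) == -1)
            return;
    }
    pcap_setfilter(handle, &prog);
    pcap_freecode(&prog);

    // ... pcap_loop / pcap_next_ex on this thread's own handle ...

    pcap_close(handle);
}

Each thread gets its own handle from its own pcap_open_live call, so the two threads share no pcap state except the guarded compile step.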

How do I make a non-concurrent print-to-file system in C++?

I am programming in C++ with the intention to provide some client/server communication between Unreal Engine 4 and my server.
I am in need of a logging system but the current ones are flooded by system messages.
So I made a Logger class with an ofstream object, to which I write with file << "Write message." << endl.
The problem is that each object makes another instance of the ofstream, and several longer writes to the file get cut off by newer writes.
I am looking for a way to queue writes to a file, with the system/function/stream being easy to include and call.
Bonus points: the ofstream seems to complain whenever I try to write std::string and FString :|
Log asynchronously using e.g. g2log, or use a non-blocking socket wrapper such as zeromq.
ofstream can't be used across multiple threads; it needs to be synchronized using a mutex or similar object. Check the thread below for details: ofstream shared by multiple threads - crashes after awhile
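For reference, a minimal sketch of that synchronization, assuming a single shared stream guarded by a std::mutex (the class and member names are illustrative):

#include <fstream>
#include <mutex>
#include <string>

class SyncFileLog
{
public:
    explicit SyncFileLog(const std::string& path) : out_(path.c_str(), std::ios::app) {}

    void write(const std::string& line)
    {
        std::lock_guard<std::mutex> lock(mtx_); // one writer at a time
        out_ << line << '\n';
    }

private:
    std::mutex mtx_;
    std::ofstream out_;
};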
I wrote a quick example of how you can implement something like that. Please keep in mind that this may not be a final solution and still requires additional error checking and so on ...
#include <concurrent_queue.h>
#include <string>
#include <thread>
#include <fstream>
#include <future>
#include <cassert>

class Message
{
public:
    Message() : text_(), sender_(), quit_(true)
    {}
    Message(std::string text, std::thread::id sender)
        : text_(std::move(text)), sender_(sender), quit_(false)
    {}

    bool isQuit() const { return quit_; }
    std::string getText() const { return text_; }
    std::thread::id getSender() const { return sender_; }

private:
    bool quit_;
    std::string text_;
    std::thread::id sender_;
};

class Log
{
public:
    Log(const std::string& fileName)
        : workerThread_(&Log::threadFn, this, fileName)
    {}
    ~Log()
    {
        queue_.push(Message()); // push quit message
        workerThread_.join();
    }

    void write(std::string text)
    {
        queue_.push(Message(std::move(text), std::this_thread::get_id()));
    }

private:
    static void threadFn(Log* log, std::string fileName)
    {
        std::ofstream out;
        out.open(fileName, std::ios::out);
        assert(out.is_open());
        // Todo: ... some error checking here
        Message msg;
        while(true)
        {
            if(log->queue_.try_pop(msg))
            {
                if(msg.isQuit())
                    break;
                out << msg.getText() << std::endl;
            }
            else
            {
                std::this_thread::yield();
            }
        }
    }

    concurrency::concurrent_queue<Message> queue_;
    std::thread workerThread_;
};

int main(int argc, char* argv[])
{
    Log log("test.txt");
    Log* pLog = &log;

    auto fun = [pLog]()
    {
        for(int i = 0; i < 100; ++i)
            pLog->write(std::to_string(i));
    };

    // start some test threads
    auto f0 = std::async(fun);
    auto f1 = std::async(fun);
    auto f2 = std::async(fun);
    auto f3 = std::async(fun);

    // wait for all
    f0.get();
    f1.get();
    f2.get();
    f3.get();

    return 0;
}
The main idea is to have one Log class with a thread-safe write() method that may be called from multiple threads simultaneously. The Log class uses a worker thread to move all file access onto another thread. It uses a thread-safe (possibly lock-free) data structure to transfer all messages from the sending threads to the worker thread (I used concurrent_queue here, but there are others as well). Using a small Message wrapper, it is very simple to tell the worker thread to shut down; afterwards, join it and everything is fine.
You have to make sure that the Log is not destroyed as long as any thread that may possibly write to it is still running.
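One hedged way to enforce that lifetime requirement is to hold the Log through a std::shared_ptr and capture it by value in every worker, so the last writer keeps it alive (my own sketch, not part of the original answer):

#include <memory>
#include <future>
#include <string>

// assumes the Log class from the example above
void start_workers()
{
    auto log = std::make_shared<Log>("test.txt");

    auto fun = [log]() // each worker shares ownership of the Log
    {
        for (int i = 0; i < 100; ++i)
            log->write(std::to_string(i));
    };

    auto f0 = std::async(std::launch::async, fun);
    auto f1 = std::async(std::launch::async, fun);
    f0.get();
    f1.get();
    // the Log is destroyed only after the last shared_ptr goes away
}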

Boost::asio, Shared Memory and Interprocess Communication

I have an application that is written to use boost::asio exclusively as its source of input data as most of our objects are network communication based. Due to some specific requirements, we now require the ability to use shared memory as an input method as well. I've already written the shared memory component and it is working relatively well.
The problem is how to handle notifications from the shared memory process to the consuming application that data is available to be read -- we need to handle the data in the existing input thread (using boost::asio), and we also need to not block that input thread waiting for data.
I've implemented this by introducing an intermediate thread that waits on events to be signaled from the shared memory provider process then posts a completion handler to the input thread to handle reading in the data.
This works now as well, but the introduction of the intermediate thread means that in a significant number of cases we pay for an extra context switch before we can read the data, which has a negative impact on latency, and the overhead of the additional thread is also relatively expensive.
Here's a simplistic example of what the application is doing:
#include <iostream>
using namespace std;

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>
#include <boost/bind.hpp>

class simple_thread
{
public:
    simple_thread(const std::string& name)
        : name_(name)
    {}

    void start()
    {
        thread_.reset(new boost::thread(
            boost::bind(&simple_thread::run, this)));
    }

private:
    virtual void do_run() = 0;

    void run()
    {
        cout << "Started " << name_ << " thread as: " << thread_->get_id() << "\n";
        do_run();
    }

protected:
    boost::scoped_ptr<boost::thread> thread_;
    std::string name_;
};

class input_thread
    : public simple_thread
{
public:
    input_thread() : simple_thread("Input")
    {}

    boost::asio::io_service& svc()
    {
        return svc_;
    }

    void do_run()
    {
        boost::system::error_code e;
        boost::asio::io_service::work w(svc_);
        svc_.run(e);
    }

private:
    boost::asio::io_service svc_;
};

struct dot
{
    void operator()()
    {
        cout << '.';
    }
};

class interrupt_thread
    : public simple_thread
{
public:
    interrupt_thread(input_thread& input)
        : simple_thread("Interrupt")
        , input_(input)
    {}

    void do_run()
    {
        do
        {
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
            input_.svc().post(dot());
        }
        while(true);
    }

private:
    input_thread& input_;
};

int main()
{
    input_thread inp;
    interrupt_thread intr(inp);
    inp.start();
    intr.start();
    while(true)
    {
        // portable equivalent of the original Win32 Sleep(1000)
        boost::this_thread::sleep(boost::posix_time::seconds(1));
    }
}
Is there any way to get the data handled in the input_thread directly (without having to post it in via the interrupt_thread)? The assumption is that the interrupt thread is driven entirely by timings from an external application (notification that data is available via a semaphore). Also, assume that we have total control of both the consuming and providing applications, and that we have additional objects that need to be handled by the input_thread object (so we cannot simply block and wait on the semaphore objects there). The goal is to reduce the overhead, CPU utilization and latency of the data coming in from the shared memory providing application.
I guess you have found your answer since you posted this question; this is for the benefit of others...
Try checking out Boost.Asio strands.
A strand gives you the ability to choose which thread you want some work to be done on.
The work will automatically get queued on the specific strand; that's something you won't have to think about.
It even gives you a completion handler if you need to know when the work is done.
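A rough illustration of the strand idea (my own sketch; the handler here just stands in for reading the shared memory, and the io_service corresponds to the one owned by input_thread in the question):

#include <boost/asio.hpp>
#include <iostream>

void handle_shared_memory_data()
{
    std::cout << "data available\n"; // would read from shared memory here
}

int main()
{
    boost::asio::io_service io;
    boost::asio::io_service::strand strand(io);

    // handlers posted through the same strand are serialized,
    // and they execute on whatever thread calls io.run() -
    // in the question that would be the input thread
    strand.post(&handle_shared_memory_data);
    strand.post(&handle_shared_memory_data);

    io.run(); // runs the queued handlers, then returns
}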