Is db_query thread safe? - c++

I am working with MySQL in C++. I had an issue before with mysql_query() not being thread safe (http://dev.mysql.com/doc/refman/5.1/en/c-api-threaded-clients.html). Is db_query() any different, or do the rules in the first bullet point of the doc I linked apply to db_query() too? I assume they behave the same, but I want to make sure it isn't subtly different so that I don't end up unlocking my mutex too early or leaving it locked longer than necessary. Sorry I couldn't find any documentation specifically on this issue for db_query().
Thanks.

Sorry everyone, I didn't realize that this db_query() was imported from another file. I'm still learning the code base :/ No wonder I couldn't find any documentation on it! Sorry to anyone whose time I wasted.
...And it isn't thread safe. It's just a wrapper around mysql_query(), doh XD
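For anyone who lands here later, a minimal sketch of serializing the calls with a mutex, assuming db_query() just forwards to mysql_query() on one shared connection as described above (the names and signature below are made up, not the actual code base):

#include <mysql/mysql.h>
#include <mutex>

std::mutex db_mutex;   // guards the shared MYSQL* connection
MYSQL* conn;           // assumed to be set up elsewhere via mysql_real_connect()

// Issue the query and consume its result while holding the lock, since the
// MySQL docs require that no other thread touch the connection between
// mysql_query() and mysql_store_result().
int locked_db_query(const char* sql)
{
    std::lock_guard<std::mutex> lock(db_mutex);
    if (mysql_query(conn, sql) != 0)
        return -1;
    if (MYSQL_RES* res = mysql_store_result(conn))
        mysql_free_result(res);   // real code would read the rows before freeing
    return 0;
}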

Related

Is it safe to change the reactor's state using the async API without manual synchronization?

Hey
I'm using gRPC with the async API. That requires constructing reactors based on classes like ClientBidiReactor or ServerBidiReactor.
If I understand correctly, gRPC works like this: it takes threads from some thread pool and uses these threads to execute the methods of the reactors that are in use.
The problem
Now, the problem is when the reactors become stateful. I know that the methods of a single reactor will most probably be executed sequentially, but they may be run from different threads. Is this correct? If so, is it possible that we run into the problem described, for instance, here?
Long story short, if we have unsynchronized state in such circumstances, is it possible that one thread updates the state, then the next method of the reactor is executed from a different thread and sees a stale value, because the state's new value has not been flushed to main memory yet?
Honestly, I'm a little confused about this. In the gRPC examples here and here this doesn't seem to be addressed (the mutex there is for a different purpose and the values are not atomic).
I used/linked examples for the bidi reactors, but the question applies to all types of reactors.
Conclusion / questions
There are basically a couple of questions from me at this point:
Are the concerns valid here? Do I understand everything properly, or did I miss something? Does the problem exist?
Do we need to manually synchronize the reactors' state, or is it handled by the library somehow (I mean, is flushing to main memory handled)?
Are the library authors aware of this? Did they keep it in mind while coding the examples I linked?
Thank you in advance for any help, all the best!
You're right that the examples don't showcase this very well; there's some room for improvement. The operation-completion reaction methods (OnReadInitialMetadataDone, OnReadDone, OnWriteDone, ...) can be called concurrently from different threads owned by the gRPC library, so if your code accesses any shared state, you'll want to coordinate that yourself (via synchronization, lock-free types, etc.). In practice, I'm not sure how often it happens, or which callbacks are more likely to overlap.
The original callback API spec says a bit more about this, under a "Thread safety" clause: L67: C++ callback-based asynchronous API. The same is reiterated in a few places in the callback implementation code itself, client_callback.h#L234-236 for example.
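To make that concrete, here is a minimal sketch of a client reactor that guards its own state with a mutex. EchoRequest/EchoResponse stand in for generated proto messages, and the reactor is assumed to be hooked up to a stub's callback method elsewhere; the hooks themselves (StartCall, StartRead, OnReadDone, OnDone) are the real callback API.

#include <grpcpp/grpcpp.h>
#include <mutex>

class EchoReactor : public grpc::ClientBidiReactor<EchoRequest, EchoResponse> {
 public:
  void Start() {
    StartRead(&response_);
    StartCall();
  }

  void OnReadDone(bool ok) override {
    if (!ok) return;
    {
      // Callbacks may arrive on different library-owned threads, so shared
      // state is protected by this mutex, which also provides the
      // happens-before ordering that rules out stale reads.
      std::lock_guard<std::mutex> lock(mu_);
      ++messages_received_;
    }
    StartRead(&response_);
  }

  void OnDone(const grpc::Status& status) override {
    std::lock_guard<std::mutex> lock(mu_);
    done_ = true;
  }

 private:
  std::mutex mu_;
  int messages_received_ = 0;
  bool done_ = false;
  EchoResponse response_;
};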

ofstream write from multiple boost threads - g++ and vs2008

Using the boost thread library, I pass the open ofstream to each thread by reference. After about half the threads have written, there is some kind of crash and the program terminates. My assumption is that the function reaches its end and closes the file while the remaining threads are still trying to write to the now-closed file. As a test I added a join for the last thread, and more of the threads managed to write to the file. My multithreading experience is two days (yesterday I got the boost library built), and I have not much more experience in C++ or any other language.
I read through the "Questions that may already have your answer", but none answer this question. There are so many posts addressing versions of this problem, and just as many approaches to the solution, that it seems like overthinking; there should be a clean way to ensure that all the threads have finished before the file closes, and that the writes from each thread are queued to prevent write clashes.
Some possible solutions:
Rather than pass an open file, pass the file reference and let each thread open the file, append, and close it with ofstream myfile("database", ios::out | ios::app); - from this posted solution "How do I add..."; this did not work.
Reading through the boost thread documentation, there is a join_all() function, but compiled in VS2008 against boost 1.53.0 it gives "error C2039: 'join_all' : is not a member of 'boost::thread'".
Use boost mutex or locks - this explanation seems like it is the answer, but I'd first like to know whether the crash is due to conflicting writes or to the file closing before the threads have finished writing.
This C++ ostream::write page references multithreading, but just states that there are no guarantees.
This one states something about boost::thread and boost::function working like no other, but reviewing the boost::function literature did not help explain what the comment means.
Back to whether this is a problem of waiting for all threads to complete rather than a write clash: this discussion provides a solution that requires storing all the threads and calling join() on each.
This discusses WaitForMultipleObjects, but that is Windows-specific - the last solution in this post sounds like the answer, but it has no votes and I cannot tell whether it is Windows-specific or not.
Buffer everything to memory and write it out in a separate function - this solution is in C#, but the approach seems plausible. The specifics of their discussion don't make sense to me.
Create threads in a loop - seems to have the clearest review; solved using the boost::thread_group method given in this thread.
There are more forum discussions, but they sound like more versions of the previous examples.
I'd like a solution that works on both Windows and Linux; my intuition is to pass the file reference and let each thread append to the file.
What results do you expect to see after uncoordinated writes from several threads to the same stream? The output will be garbage even if the stream survives this torture... You will need to implement some kind of coordination between the writes.
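A minimal sketch of that coordination, assuming Boost.Thread: every thread appends under a mutex, and join_all() (note it is a member of boost::thread_group, not of boost::thread, hence the C2039 error mentioned above) keeps the file open until all writers have finished.

#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <boost/ref.hpp>
#include <fstream>

boost::mutex file_mutex;   // serializes access to the shared stream

void writer(std::ofstream& file, int id)
{
    for (int i = 0; i < 100; ++i) {
        boost::lock_guard<boost::mutex> guard(file_mutex);  // one writer at a time
        file << "thread " << id << " line " << i << '\n';
    }
}

int main()
{
    std::ofstream file("database", std::ios::out | std::ios::app);

    boost::thread_group threads;
    for (int i = 0; i < 4; ++i)
        threads.create_thread(boost::bind(&writer, boost::ref(file), i));

    threads.join_all();   // wait for every writer before main returns and the file closes
    return 0;
}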

QReadWriteLock recursion

I'm using QReadWriteLock in recursive mode.
This code doesn't make sense by itself, but the issue I have arises from it:
lock->lockForWrite();
lock->lockForRead();
lockForRead is blocked. Note that this is in recursive mode.
The way I see it, a write lock is a "superior" lock: it allows me to read and write the protected data, whereas a read lock only allows reading.
Also, I think that a write lock should not be blocked if the only reader is the same thread asking for the write lock.
I can see from the qreadwritelock.cpp source code that there is no attempt to make it work the way I would like. So it's not a bug, but a feature I find missing.
My question is this: should this kind of recursion be allowed? Are there any problems that arise from this kind of implementation, and what would they be?
From QReadWriteLock docs:
Note that the lock type cannot be changed when trying to lock recursively, i.e. it is not possible to lock for reading in a thread that already has locked for writing (and vice versa).
So, like you say, it's just the way it works. I personally can't see how allowing reads on the same thread that already holds the write lock would cause problems, but perhaps it would require an inefficient lock implementation?
You could try asking on the Qt forums, but I doubt you'll get a definitive answer.
Why don't you take the Qt source as a starting point and have a go at implementing it yourself if it's something you need? Writing synchronisation objects can be tricky, but it's a good learning exercise.
I found this question while searching for the same functionality myself.
While thinking about implementing this on my own, I realized that a problem definitely arises in doing so:
So you want to upgrade your lock from shared (read) to exclusive (write). Doing
lock->unlock();
lock->lockForWrite();
is not what you want, since you want no other thread to gain the write lock right after the current thread releases the read lock. But if there was a
lock->changeModus(WRITE);
or something like that, you would create a deadlock: to gain a write lock, the lock blocks until all current read locks are released, so here multiple threads would block waiting for each other.
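A small illustration of that deadlock (intentionally broken code, using std::thread only to keep the sketch short): both threads take a read lock and then ask for the write lock on the same QReadWriteLock. Since a write lock is granted only once every read lock is released, neither call ever returns.

#include <QReadWriteLock>
#include <thread>

QReadWriteLock rwLock(QReadWriteLock::Recursive);

void readThenUpgrade()
{
    rwLock.lockForRead();    // both threads hold a read lock here
    rwLock.lockForWrite();   // blocks while any read lock exists, so it never returns
    rwLock.unlock();
    rwLock.unlock();
}

int main()
{
    std::thread t1(readThenUpgrade);
    std::thread t2(readThenUpgrade);
    t1.join();               // the program hangs here
    t2.join();
    return 0;
}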

How do I use v8 in a thread?

I'm trying to use v8 from C++ inside a thread that isn't the main thread. There's no multi-threading as far as v8 is concerned: all v8 objects are created and destroyed within that thread, nothing runs in parallel, and nothing is shared. When I run my program from the main thread, everything works fine. When I have the v8 stuff in another thread, I get a segmentation fault when I create a v8::HandleScope.
I can't find any useful documentation on how threading is actually addressed with v8. The instruction "use isolates and lockers" pops up often when searching, but I can't find any examples of how this is done. There's this API doc on v8::Isolate, but nothing on that page tells me if I need them in my specific case (I'm not sharing memory or executing in parallel). The docs on v8::Locker don't even have information about what the class is for. The samples included in the project don't deal with any of this either.
So my questions are...
Do I need to use isolates and/or lockers here?
Could I get a minimal example of how to use them? Even pseudo-code or something would be really useful
You do need v8::Locker in the methods that will be working with the context when creating a HandleScope. https://github.com/jasondelponte/go-v8/blob/master/src/v8context.cc#L41 is an example of how I've used the locker with v8. In this example it is used with multiple threads, but I believe the rule applies to single threads as well.
Isolates are only needed when you want multiple instances of v8 running in parallel.
https://groups.google.com/forum/?fromgroups=#!topic/v8-users/FXpeTYuAqKI is an old thread I found a while ago that helped me solve my problem with the library crashing as soon as the HandleScope local variable was created.
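A minimal sketch of that, written against the v8 3.x-era API that this question and the linked example appear to use (newer v8 releases pass an explicit v8::Isolate* to Locker and HandleScope, so the exact signatures differ there):

#include <v8.h>

// Runs a tiny script from a worker thread: the Locker serializes access to v8
// for this thread, and only then is it safe to create the HandleScope that was
// crashing before.
void runScriptInWorkerThread()
{
    v8::Locker locker;                        // acquire v8 for the current thread
    v8::HandleScope handle_scope;             // safe now that the lock is held
    v8::Persistent<v8::Context> context = v8::Context::New();
    v8::Context::Scope context_scope(context);

    v8::Handle<v8::Script> script =
        v8::Script::Compile(v8::String::New("6 * 7"));
    v8::Handle<v8::Value> result = script->Run();
    (void)result;                             // real code would inspect the result

    context.Dispose();
}                                             // the Locker is released on scope exit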

Odd issue with std::map and thread safety

This isn't so much of a problem now, as I've implemented my own collection, but I'm still a little curious about this one.
I've got a singleton which provides access to various common components. It holds instances of these components keyed by thread ID, so each thread should (and does, I checked) have its own instance of a component such as an Oracle database access library.
When running the system (a C++ library being called by a C# application) with multiple incoming requests, everything seems to run fine for a while, but then it crashes out with an AccessViolation exception. Stepping through the debugger, the problem appears to be that when one thread finishes and clears out its session information (held in a std::map object), the session information held in a separate collection instance for the other thread also appears to be cleared out.
Is this something anyone else has encountered or knows about? I've tried having a look around but can't find anything about this kind of problem.
Cheers
Standard C++ containers do not concern themselves with thread safety much. Your code sounds like it is modifying the map instance from two different threads, or modifying the map in one thread and reading from it in another. That is obviously wrong. Use some locking primitives to synchronize access between the threads.
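For the locking approach, a minimal sketch (the map layout below is only a guess at what the singleton might hold):

#include <map>
#include <mutex>
#include <string>
#include <thread>

std::mutex sessions_mutex;                          // guards every access to the map
std::map<std::thread::id, std::string> sessions;    // per-thread session info

void clearCurrentSession()
{
    std::lock_guard<std::mutex> lock(sessions_mutex);
    sessions.erase(std::this_thread::get_id());     // only remove this thread's entry
}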
If all you want is a separate object for each thread, you might want to take a look at boost::thread_specific_ptr.
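A minimal sketch of that suggestion, with OracleSession as a hypothetical stand-in for the database access component:

#include <boost/thread/tss.hpp>

struct OracleSession {
    // connection handle, session state, ...
};

// Each thread gets its own OracleSession; no shared map and no locking needed.
boost::thread_specific_ptr<OracleSession> tls_session;

OracleSession& currentSession()
{
    if (tls_session.get() == 0)
        tls_session.reset(new OracleSession);   // created lazily, once per thread
    return *tls_session;                        // destroyed automatically when its thread exits
}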
How do you manage giving each thread its own session information? Somewhere under there you have classes managing the lifetimes of these objects, and this is where it appears to be going wrong.