C++ Multithreading objects from library with static variables

I created several "manager" objects from a library, each with different parameters. Every cycle, each manager is fed a data set, runs its calculations, and writes its result into a data structure. I have to run all managers on the same data set as fast as possible, so I created a thread pool to distribute the data to all managers so that they can run concurrently. Each manager has access to its own result data structure, so I thought this would be thread safe.
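For concreteness, here is a minimal sketch of that setup; Manager, DataSet, and Result are hypothetical stand-ins for the real types:

    #include <future>
    #include <vector>

    struct DataSet {};
    struct Result {};

    struct Manager {                        // stands in for the library object
        Result run(const DataSet& d) { (void)d; return Result{}; }
    };

    // One task per manager, all fed the same data set; each task writes only
    // to its own result slot, so the result structures themselves are race-free.
    void runCycle(std::vector<Manager>& managers,
                  const DataSet& data,
                  std::vector<Result>& results) {
        std::vector<std::future<void>> tasks;
        for (std::size_t i = 0; i < managers.size(); ++i)
            tasks.push_back(std::async(std::launch::async, [&managers, &data, &results, i] {
                results[i] = managers[i].run(data);   // unsafe if the library
            }));                                      // shares static state!
        for (auto& t : tasks) t.get();
    }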
However, I later found out that several classes in this library, which are used by the managers, have static member variables, which I believe cause the segmentation faults: the faults originate from the library, not from my code (I checked).
My question is: is it possible to work around this? It will probably sound stupid, but is it possible to force each manager to use its own copy of the library, circumventing the static issue? I am processing ~20-50k data sets per second, so I cannot afford much overhead. Using forks would be very painful and, in my case, could create unwanted overhead.
Thanks for any advice!

Related

What is the right way to use QuantLib from multiple threads?

I haven't been able to find any documentation explicitly describing QuantLib's thread-safety properties (or the absence of them!). The QuantLib configuration documentation lists a number of compile-time options related to thread safety, from which I infer that, by default, QuantLib is not entirely thread-safe.
In particular, there are:
QL_ENABLE_SESSIONS - "If defined, singletons will return different instances for different sessions. You will have to provide and link with the library a sessionId() function in namespace QuantLib, returning a different session id for each session. Undefined by default."
QL_ENABLE_THREAD_SAFE_OBSERVER_PATTERN - "If defined, a thread-safe (but less performant) version of the observer pattern will be used. You should define it if you want to use QuantLib via the SWIG layer within the JVM or .NET eco system or any environment with an async garbage collector. Undefined by default."
QL_ENABLE_SINGLETON_THREAD_SAFE_INIT - "Define this to make Singleton initialization thread-safe. Undefined by default. Not compatible with multiple sessions."
Which options should I use, and what other steps should I take, if I want to use QuantLib:
From multiple threads, but never at the same time (e.g., only while holding a global lock)?
From multiple threads at the same time, but not sharing any objects between them?
From multiple threads at the same time, sharing objects between them?
The natural structure for my application is a directed acyclic graph, with a constant stream of market data entering at one end, being used to compute and update various objects, and producing a stream of estimated prices leaving at the other end. I would very much like to be able to have multiple cores working in parallel, as some calculations take a long time.
The application will mostly be written in Java, with minimal parts in C++ to interface with QuantLib. I am not planning to use the SWIG wrapper. I am happy to do memory management of QuantLib objects without help from Java's garbage collector.
EDIT! If you decide to set any of these options, then on Unix, do it with the corresponding flag to ./configure:
--enable-sessions
--enable-thread-safe-observer-pattern
--enable-thread-safe-singleton-init
The answer from SmallChess is not far from the truth. There are almost no locks or safety nets in QuantLib, so most people use multiprocessing if they need to distribute calculations over processors, and with good reason.
For those who want a bit more insight, and not as an endorsement of using multi-threading in QuantLib:
whatever else you do, if possible, enable the configuration switches that give you some safety, such as the one for thread-safe initialization of singletons (with a caveat, see below);
you might have multiple threads running at once if they don't share any objects, and if they don't try to modify globals such as the evaluation date (look for classes inheriting from Singleton for the list of globals).
if you need different evaluation dates for different threads, you can use another compilation switch to build QuantLib so that the singletons are not actually singletons, but there's an instance per thread. Caveat: this switch is not compatible with thread-safe initialization of singletons. You still shouldn't share objects between threads.
if you want to share objects, you might be in for more trouble than it's worth. The problems are: (1) any change to the underlying data of, say, a curve will trigger a recalculation; and (2) the recalculations (such as the bootstrap of a curve) are not executed right away, but only when needed, i.e., when some curve method is called. This means that you must keep the various steps separate: first, set the values of any quotes and make sure that there aren't any further changes; then, go around the curves and trigger recalculation, for instance by asking a discount factor at some date; finally, pass the curves to the instruments and price them. Changing a value during the calculations will result in a bootstrap being done in the middle of them; and not triggering full construction before calculations might lead to two instruments triggering two simultaneous bootstraps, which wouldn't end well for any concerned parties.
As I said, it's probably more trouble than it's worth. Ideally, don't share objects between threads and don't touch the globals. Otherwise, prefer multiprocessing.
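To illustrate the "no shared objects, don't touch the globals" advice, here is a rough, non-authoritative sketch; it assumes a QuantLib build with thread-safe singleton initialization, and the rates and the number of threads are made up:

    #include <ql/quantlib.hpp>
    #include <thread>
    #include <vector>

    using namespace QuantLib;

    // Each thread builds its own curve and shares nothing with the others.
    // Settings::instance() is still process-wide here; don't write to it
    // from several threads unless sessions are enabled.
    double discountAtOneYear(Rate r) {
        Date today = Settings::instance().evaluationDate();  // read-only use
        boost::shared_ptr<YieldTermStructure> curve(
            new FlatForward(today, r, Actual365Fixed()));
        return curve->discount(today + Period(1, Years));
    }

    int main() {
        std::vector<std::thread> workers;
        std::vector<double> results(4);
        for (int i = 0; i < 4; ++i)
            workers.emplace_back([i, &results] {
                results[i] = discountAtOneYear(0.01 * (i + 1));
            });
        for (auto& w : workers) w.join();
    }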
Unfortunately, QuantLib is not thread safe. None of the options you have will help you. QuantLib is a free project; its focus is on the actual mathematical modelling, not on computational optimisations such as thread safety.
You should definitely wrap QuantLib in a process. Multithreading is not encouraged for QuantLib unless you absolutely know what you're doing and have checked the relevant source code.

C++ Boost Object Serialization - Periodic Saving to Protect Data

I have a program that uses boost serialization that loads on program start up and saves on shutdown.
Every once in a while, the program will crash due to this or that, which I expect to be fairly normal. The problem is that when the program crashes, the objects are often not saved at all. Other times, some will be missing or the data will be corrupted. This could be disastrous if a user loses months and months of data. In a perfect world, everyone would back up their data and could just roll back the data file.
My first solution is to periodically save the objects to a temporary data file during run time. That way, if the program crashes, the user can revert to the temporary data file with minimal data loss. My concern is the effect on performance. As far as I understand (correct me if I am wrong), once you save an object, it can't be used anymore? If that is the case, the periodic save routine would involve saving and deleting my pointers, then loading them up again.
My second solution is to simply make a copy of the data file during program start up. The user's loss of data would be limited to that session. However, this may not be sufficient as some users may run the program for days and days.
Any input would be appreciated.
Thanks in advance.
If you save an object graph with boost serialization, that object graph is still available and can be saved again without necessarily reading anything from disk.
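For example, a periodic checkpoint can serialize the live graph to a temporary file and then swap it into place, so that a crash in mid-save never corrupts the last good file. A minimal sketch, assuming a hypothetical Boost-serializable root type World:

    #include <boost/archive/text_oarchive.hpp>
    #include <cstdio>
    #include <fstream>
    #include <string>

    template <class World>
    void checkpoint(const World& root, const std::string& path) {
        const std::string tmp = path + ".tmp";
        {
            std::ofstream ofs(tmp.c_str(), std::ios::binary);
            boost::archive::text_oarchive oa(ofs);
            oa << root;            // the graph stays fully usable afterwards
        }                          // archive flushed and file closed here
        std::rename(tmp.c_str(), path.c_str());  // atomic replace on POSIX
    }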
If you want to go high-tech and introduce a lot more complexity, you can use the Boost Interprocess library with a managed_shared_memory segment (or a managed_mapped_file, if the backing store should be a regular disk file). This enables you to transparently work directly on a disk file (actually, on memory pages backed by file blocks). It introduces another issue, though: how to prevent changes from hitting the disk too frequently.
Gratuitous advice:
I think the best of all worlds would be if your object graph is (e.g.) a Composite pattern in which all nodes are shared immutables. Serialization is then "free" (with Boost), and you can easily keep multiple versions of the program state (often a "document" or "database", logically) and efficiently save/load them with Boost Serialization. This pattern also facilitates undo/redo, concurrent operations, transactional commit¹, etc., as the sketch below suggests.
¹ (! not without extra work, but in principle)
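A rough sketch of that pattern, with made-up names; each "edit" produces a new root that shares all unchanged subtrees with previous versions:

    #include <memory>
    #include <string>
    #include <vector>

    struct Node;
    using NodePtr = std::shared_ptr<const Node>;  // shared immutable nodes

    struct Node {
        std::string value;
        std::vector<NodePtr> children;
    };

    // "Editing" never mutates: build a new node that reuses the old children,
    // so earlier versions of the document stay valid for undo/redo.
    NodePtr withValue(const NodePtr& n, std::string v) {
        return std::make_shared<const Node>(Node{std::move(v), n->children});
    }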

How to let MATLAB keep a MEX session alive

My question is about how to make MATLAB and my C++ code interact. To be more specific, I have a C++ program that processes data, creates an object, derives statistics of that object, and writes them to a MAT-file. I then load the file in MATLAB to do further analysis and visualization.
However, the time it takes to process the data and create the object is enormous, while the time to derive a statistic is negligible. On the other hand, there are many statistics and different combinations of them, and it is difficult to anticipate which combinations we are going to use. So I hope I can run the "statistics" part interactively many times without repeating the work of processing the data.
My question is: can I ask MATLAB to
1. call the C++ code;
2. after processing the data and creating the object, keep that object "alive" in memory;
3. call the C++ code again to ask for a statistic to be loaded into my workspace;
4. repeat step 3 with different statistics?
Thanks
A further option may be to create a C++ class instance in your MEX function and return a pointer to it to MATLAB, passing the pointer back in any subsequent calls. If you use this approach you should also create a MATLAB handle-class wrapper for it, so that memory can be cleaned up properly in its destructor. Here is a post where the poster was advised to do just that, and this is an example of the method on the MathWorks File Exchange.
The applicability of this method depends on the complexity of the problem. I would personally only go down this route if the problem is intractable with other approaches: for example, if you need to use a C++ class from some library whose instance must stay alive between calls, or if global variables won't do the trick because you need to keep track of many instances, which is naturally best represented by an array of C++ classes where you can properly separate your concerns.
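A hedged sketch of that handle pattern; Processor stands in for the expensive object, the command strings are made up, and error checking is omitted:

    #include "mex.h"
    #include <cstdint>
    #include <cstring>

    class Processor {                  // stands in for the costly object
    public:
        double statistic(int which) const { return which * 1.0; }
    };

    void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[]) {
        char cmd[16];
        mxGetString(prhs[0], cmd, sizeof(cmd));
        if (std::strcmp(cmd, "new") == 0) {
            mexLock();                 // keep the MEX file (and object) loaded
            plhs[0] = mxCreateNumericMatrix(1, 1, mxUINT64_CLASS, mxREAL);
            *static_cast<uint64_t*>(mxGetData(plhs[0])) =
                reinterpret_cast<uint64_t>(new Processor());
        } else if (std::strcmp(cmd, "stat") == 0) {
            const Processor* p = reinterpret_cast<const Processor*>(
                *static_cast<uint64_t*>(mxGetData(prhs[1])));
            plhs[0] = mxCreateDoubleScalar(p->statistic((int)mxGetScalar(prhs[2])));
        } else if (std::strcmp(cmd, "delete") == 0) {
            delete reinterpret_cast<Processor*>(
                *static_cast<uint64_t*>(mxGetData(prhs[1])));
            mexUnlock();               // allow MATLAB to unload the MEX file
        }
    }

From MATLAB this would be driven as, e.g., h = mymex('new'); s = mymex('stat', h, 3); mymex('delete', h); with the handle-class wrapper calling the 'delete' branch in its destructor.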
One way to accomplish this is to declare the variables that you want to access again as globals in your C++ MEX code. These variables will stay in memory, and you can access them again on subsequent calls to your MEX function, until you clear that MEX function or close the MATLAB session. I used global variables for this purpose and it worked just fine for me.
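A minimal sketch of the idea (the variable name is made up); the file-scope object stays alive between calls until the MEX function is cleared:

    #include "mex.h"
    #include <vector>

    static std::vector<double> g_cache;  // persists across mexFunction calls

    void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[]) {
        if (g_cache.empty()) {
            // ... expensive one-time processing would fill g_cache here ...
            g_cache.assign(1000, 0.0);
        }
        plhs[0] = mxCreateDoubleScalar(g_cache[0]);  // cheap on later calls
    }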
Another option is to use persistent variables. From the documentation
Persistent variables are similar to global variables because the MATLAB® software creates permanent storage for both. They differ from global variables in that persistent variables are known only to the function in which they are declared. This prevents persistent variables from being changed by other functions or from the MATLAB command line.

How can I write to a vector with one process and read it with another without performance loss

I have a vector that is constantly being updated (about 5-20 times a second) with new information by a process that I wrote to gather that information. I want to write a loop that can detect a change in the vector, then read the new information and run the appropriate analysis for that event. However, I know that many issues arise from multithreading like this, so I'm curious about the best ways to do it.
The vector is stored in a public class and is being updated by a feed we get from a financial company (the information being updated is futures indices).
I don't see a way to do this in C++. If you used shared memory you could store the vector in it, but its components would still point at local, non-shared memory chunks, preventing it from being shared (you might be able to work around this with allocator trickery, but I'm not sure it would work).
Boost has some shared-memory capabilities, but as I recall from a question I can't find right now, it has some downsides. You'll most likely have to design your own shared-memory data structure, and provide a mechanism to store a condition variable there so that clients can wait on it to find out when new data is available.
EDIT: I found the question asking about boost::interprocess:
Is boost::interprocess ready for prime time?
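For what it's worth, a rough sketch of the writer side with Boost.Interprocess; the segment and object names are made up, and synchronization (the condition variable mentioned above, via interprocess_mutex and interprocess_condition stored in the same segment) is left out:

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/containers/vector.hpp>
    #include <boost/interprocess/allocators/allocator.hpp>

    namespace bip = boost::interprocess;

    // A vector whose elements live inside the shared segment, thanks to the
    // interprocess allocator (the "allocator trickery" mentioned above).
    using ShmAlloc  = bip::allocator<double,
                                     bip::managed_shared_memory::segment_manager>;
    using ShmVector = bip::vector<double, ShmAlloc>;

    int main() {
        bip::managed_shared_memory segment(bip::open_or_create,
                                           "FeedSegment", 1 << 20);
        ShmVector* prices = segment.find_or_construct<ShmVector>("futures")(
            ShmAlloc(segment.get_segment_manager()));
        prices->push_back(101.25);  // the reader process opens the same
    }                               // names and reads from the vector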
Sorry, but in such a case, I would suggest changing the design: make the vector private, add a setter function (wrapping push_back() or similar) that will be used by your data feed, and you're done.
Or did I miss something?
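If the feed and the analysis actually live in one process (threads rather than processes), that design might look like the following rough sketch, with a condition variable so the analysis loop wakes only when new data arrives; all names are hypothetical:

    #include <condition_variable>
    #include <mutex>
    #include <vector>

    class Feed {
    public:
        void push(double tick) {               // called by the data feed
            { std::lock_guard<std::mutex> lk(m_); data_.push_back(tick); }
            cv_.notify_one();
        }
        // Blocks until the vector has grown past lastSeen, then copies it out.
        std::vector<double> waitForNewData(std::size_t lastSeen) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return data_.size() > lastSeen; });
            return data_;                      // copy taken under the lock
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::vector<double> data_;             // the now-private vector
    };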

MPI how to send and receive unknown datatypes

We have developed an algorithm library in C++ which allows the user to implement his own datatypes for sharing data between individual algorithms (also implemented by the user).
This works fine, but we want to provide parallelization at library level. The individual algorithms should be executed in parallel on different nodes of distributed memory machines.
We decided to use MPI for parallelization, as it can be used for distributed and shared memory machines without code changes.
Unfortunately, we are now struggling with the problem of how to distribute the user-implemented datatypes between the nodes. We have the following problems:
We do not know how big the data might be, it might even change from run to run.
We do not know what data is inside the data structure.
The amount of data can be very big, up to 1 GB (this should be no problem for MPI).
The user should not see any difference in implementing the datatypes or algorithms for parallel execution (for the algorithm there is actually no problem)
Is there a way to use MPI to share these data between the nodes, or are there other approaches available that might be better suited to this kind of problem?
We would like a solution that works at least on shared memory machines; however, we would love a solution that works without code changes on both shared and distributed memory machines.
Yes, you can do this with MPI, but no, MPI can't do it for you by itself.
Whether you're sending this data to another node or writing it to disk, at some point you need to expressly describe the data structure's layout in memory so that it can be serialized. If you pass MPI (or any other communications library) a pointer, it doesn't know what lies on the other side of that pointer, and so it has no way of traversing the data structure to copy its contents.
You can marshal the arguments into plain old data (manually, or with things like MPI_Pack), or you can create an MPI datatype which describes the layout of the data in memory for that particular instance, and that will copy the data over. In addition, you'll need to redirect any pointers within the data structure. Boost serialization may be able to help you with all of this.
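As a non-authoritative sketch of that last suggestion: serialize the user's type with Boost.Serialization into a byte buffer, send the length, then the bytes; the receiver reverses the process. This assumes the user's type already provides a serialize() function:

    #include <boost/archive/binary_iarchive.hpp>
    #include <boost/archive/binary_oarchive.hpp>
    #include <mpi.h>
    #include <sstream>
    #include <string>

    template <class T>
    void sendSerialized(const T& obj, int dest, int tag, MPI_Comm comm) {
        std::ostringstream os;
        boost::archive::binary_oarchive oa(os);
        oa << obj;                             // follows pointers for us
        const std::string buf = os.str();
        unsigned long len = buf.size();
        MPI_Send(&len, 1, MPI_UNSIGNED_LONG, dest, tag, comm);
        MPI_Send(const_cast<char*>(buf.data()), (int)len, MPI_BYTE,
                 dest, tag, comm);
    }

    template <class T>
    void recvSerialized(T& obj, int src, int tag, MPI_Comm comm) {
        unsigned long len = 0;
        MPI_Recv(&len, 1, MPI_UNSIGNED_LONG, src, tag, comm, MPI_STATUS_IGNORE);
        std::string buf(len, '\0');
        MPI_Recv(&buf[0], (int)len, MPI_BYTE, src, tag, comm, MPI_STATUS_IGNORE);
        std::istringstream is(buf);
        boost::archive::binary_iarchive ia(is);
        ia >> obj;                             // rebuilds the pointed-to graph
    }

Boost.MPI wraps essentially this pattern behind communicator send/receive calls, which may save you the manual length handshake if adding the dependency is acceptable.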