Thread-safe shared library called from Progress 9.1d - concurrency

I'm working on a project that involves legacy systems written in Progress 9.1d. These systems need to use a shared library I wrote in C.
I was told by the Progress folks that the application works via something called an "appserver". This appserver has what they call "agents": when a user executes the Progress application, the appserver instantiates (I suppose that's the term) an agent to handle the request. There is a limited number of agents, and when the limit is exceeded, requests are queued.
So each of these agents executes the Progress code that uses my shared library. My fear is that there could be data collisions between them. The shared library has no global or static variables; all the data its functions use is created inside them, and all variables are local.
The shared library and the Progress appserver are on the same UNIX server (HP-UX 11.1).
I'm guessing that each new agent has its own copy of the Progress application's data, but if it does, I don't know whether the same happens with the shared library's data...
If anyone has experience using shared libraries with Progress: are there any measures to take for concurrency?
So far our tests have been without problems.
Any comment would be appreciated, thanks.

Each appserver agent is an individual UNIX process, so your worries about shared data shouldn't come up.
Shared libraries can work and can be called by Progress, even on a release as ancient and obsolete as 9.1D -- but Progress is aggressively single-threaded, so if your shared lib uses threads in any way it may fail.
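As an illustration, a reentrant library function of the kind the question describes might look like this (a minimal sketch; the function name and signature are invented):

    /* Hypothetical shared-library function: no globals, no statics,
       no threads. Every variable lives on the caller's stack, so each
       agent process (and each call) works on its own private copy. */
    extern "C" int sum_values(const int *values, int count)
    {
        int total = 0;               /* local: fresh for every call */
        for (int i = 0; i < count; ++i)
            total += values[i];
        return total;
    }

As long as everything the function touches is local like this (or passed in by the caller), concurrent agents cannot collide; since each agent is a separate process, each one has its own writable data segment and stack anyway.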
Who is responsible for calling the shared library from the 4GL code? You? Or the Progress developers? In either case this might be helpful:
http://dbappraise.com/ppt/shlib.pptx

Related

Problems with statically linking Intel TBB

I recently read the question How to statically link to TBB? and I still don't really understand the problems with using TBB as a statically linked library (which is possible with their makefile if you run make extra_inc=big_iron.inc tbb).
The answer seems to say that the problem is that there can be multiple singletons in a single program, and all (most?) implementations of a singleton don't let that happen. I don't understand the reasoning behind this.
Is the problem that when you fork() another process the singleton becomes two separate singletons in two separate processes? Is that what they mean by "program"? Also, if that's the case, why can't they mmap() shared memory and use that as the communication medium?
Also, doesn't dynamic linking only mean that the library itself, i.e. the code segment, is shared in memory?
Thanks!
No, the singleton explanation refers to a single process, not the multiple-process case (though that case has some of the same issues with oversubscription and load balancing).
The dynamic linker makes sure that only one global data section exists for the library and calls global constructors exactly once; that is what implements the singleton.
With a statically linked TBB library, one can end up with multiple instances of the TBB thread pool working in the same process simultaneously, coming from different components of an application. This causes oversubscription, or worse if memory or some object allocated and registered in one instance of the scheduler gets used in another instance. This is especially easy to trigger because the TBB scheduler makes heavy use of thread-local storage: each instance of the scheduler uses separate TLS, breaking the rules of nested parallelism (up to deadlock) and enabling memory leaks and segfaults, because tasks allocated by one scheduler might end up being returned to another. The situation may not even be obvious to developers who never intended to pass objects across module boundaries.
Sometimes such a situation happens even with dynamic linkage, e.g. when the TBB shared library is renamed for one of the application's components. The TBB team is working to solve this issue.
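To make the singleton point concrete, here is a sketch of a library-level singleton and how it behaves under the two linkage models (the names are invented; this is not TBB's actual code):

    // pool.h -- hypothetical stand-in for a TBB-style scheduler singleton
    struct ThreadPool { /* worker threads, per-thread state, ... */ };
    ThreadPool& global_pool();

    // pool.cpp -- compiled into the TBB-like library
    ThreadPool& global_pool()
    {
        static ThreadPool pool;   // constructed once per copy of this code
        return pool;
    }

    // Dynamic linkage: every component that calls global_pool() resolves
    // to the single copy inside the shared library, so there is exactly
    // one pool in the process.
    //
    // Static linkage: if two plugins (say a.so and b.so) each link the
    // library statically and keep its symbols internal, each plugin
    // carries its own copy of pool.cpp -- so one process ends up with
    // two pools, two sets of worker threads, and two sets of TLS.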

Shared memory stuff in a C++ DLL

The software is MetaTrader 5, a trading terminal. Its "indicator" windows are little C++-like programs. They can load native C++ DLLs. Every indicator runs in a separate thread.
I need to create a shared-memory facility that is accessible from every indicator. For the shared memory to be loadable by every indicator, it must live in a particular DLL.
I found information about Boost.Interprocess.
I am a newbie with Boost and multithreading.
So I wonder: am I on the right track?
Create a DLL with the shared-memory functionality and an interface to access it from an indicator.
Load the DLL in several indicators.
Access it from several indicators in real time?
Could you also advise other approaches?
Global variables in shared libraries are not shared across the processes that use the library. That data segment is created for every process which loads the library; only the read-only code segment is actually shared.
If you need inter-process communication, use a library for shared memory, such as Boost.Interprocess's shared_memory_object, POSIX shared memory, or Qt's QSharedMemory.
For multiple threads accessing shared memory within the same process, there is nothing special you need to do, aside from using a mutex to prevent data races.
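For the cross-process case, a minimal Boost.Interprocess sketch might look like the following (the segment name, object name, and payload are invented for illustration):

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/sync/interprocess_mutex.hpp>
    #include <boost/interprocess/sync/scoped_lock.hpp>

    namespace bip = boost::interprocess;

    // The shared state: plain data plus a process-shared mutex guarding it.
    struct SharedState {
        bip::interprocess_mutex mutex;
        double last_price = 0.0;          // hypothetical payload
    };

    int main()
    {
        // Every process (or every indicator's copy of the DLL) opens
        // the same named segment.
        bip::managed_shared_memory segment(
            bip::open_or_create, "IndicatorSharedMemory", 65536);

        // find_or_construct returns the one shared instance,
        // creating it on first use.
        SharedState *state =
            segment.find_or_construct<SharedState>("State")();

        {   // Lock across processes before touching the shared data.
            bip::scoped_lock<bip::interprocess_mutex> lock(state->mutex);
            state->last_price = 42.0;
        }
        return 0;
    }

Note that if all indicators really run as threads inside one terminal process, as the question suggests, ordinary process-level globals in the DLL guarded by a plain mutex would already be enough; Boost.Interprocess only becomes necessary once separate processes are involved.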

Shared library loading and performance

I am writing a server-side application in C/C++ which consists of one main daemon and several child processes.
I want the child processes to be extremely lightweight so that they can be spawned/killed without much overhead (over and above that imposed by the OS).
I am building the main daemon and the child apps to make extensive use of shared libraries. In fact, the main daemon loads all the shared libraries required by the child applications and sets up the required (shared) memory structures, etc.
My underlying assumption is that since the shared libraries (some of which are huge) are already loaded by the main daemon, the child applications will launch quickly and simply attach to the already-loaded libraries, without having to load the shared libs themselves, resulting in a slightly faster spawn time. Is this assumption correct?
Edit:
I am working on Ubuntu 10.04 LTS.
The code segment of your shared libraries will be shared by all processes, with no particular restriction as to who loaded or spawned them. However, load time can vary depending on how many symbols the process uses, since those are resolved at load time.
But if you are forking, there isn't much left to do, so it will be fast compared to launching a new binary.
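A sketch of the pattern the question describes, assuming the daemon preloads a library with dlopen and then forks (the library name is just an example):

    #include <dlfcn.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        // Parent loads the (possibly huge) library once, resolving all
        // symbols up front with RTLD_NOW.
        void *handle = dlopen("libm.so.6", RTLD_NOW | RTLD_GLOBAL);
        if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        pid_t pid = fork();
        if (pid == 0) {
            // Child: inherits the parent's memory map, so the library's
            // code pages are already mapped and its symbols are already
            // resolved. Writable pages are shared copy-on-write.
            _exit(0);
        }
        return 0;
    }

If the children exec a new binary instead of just forking, the dynamic loader runs again and symbol resolution is repeated, but the library's code pages are still shared via the page cache.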

C++ calls to Fortran and back

In my C++ code (my_app) I need to launch an external app (app_ext) that dynamically loads my library (.dll/.so) written in Fortran (lib_fort). From this library (lib_fort) I need to call back into some method of my_app, synchronously.
So it looks like this:
(my_app) --launches--> (app_ext) --loads--> (lib_fort) --"calls"--> (my_app)
app_ext is not developed by me.
Do you have any suggestions on how to do it and, most importantly, how to do it efficiently?
Edit:
Clarification: launching the external app (app_ext) and loading my library from it (lib_fort) happens only once per program execution, so that part doesn't need to be ultra-efficient. Communication between lib_fort and my_app is performance critical: lib_fort needs to "call" my_app millions of times.
The whole point is efficient inter-process communication.
My_app's role after launching app_ext is to wait for and serve "calls" from lib_fort. The tricky part is that the solution needs to work both in distributed and shared-memory environments, i.e. both with my_app and app_ext+lib_fort on a single host (1) and with my_app and app_ext+lib_fort on different machines (2).
In scenario (1) I was thinking about MPI, but I'm not sure whether it is possible to communicate with MPI between two different applications (as opposed to a single multi-process MPI application).
In scenario (2), probably some kind of inter-process communication using shared memory? (Or maybe also MPI?)
OK, the real issue is how to communicate between processes. (Forget MPI; that's for a different kind of problem.) You may be talking about COM (Component Object Model), RPC (Remote Procedure Call), or pipes, but underneath it's going to use sockets. IME the simplest and most efficient thing is to open the socket connections yourself and converse over those. That will be the rate limiter, and there really isn't anything faster.
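A minimal sketch of the roll-your-own socket approach on the my_app side (POSIX sockets, blocking request/reply; the port and message format are arbitrary):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // my_app: accept one connection from lib_fort, then serve small
    // fixed-size request/reply messages over it, synchronously.
    int main()
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5555);              // arbitrary port
        bind(srv, (sockaddr *)&addr, sizeof addr);
        listen(srv, 1);
        int conn = accept(srv, nullptr, nullptr);

        // Millions of synchronous calls reuse this one connection, so
        // each call costs one round trip, not a new connection.
        double request, reply;
        while (recv(conn, &request, sizeof request, MSG_WAITALL)
                   == (ssize_t)sizeof request) {
            reply = request * 2.0;        // stand-in for the real callback
            send(conn, &reply, sizeof reply, 0);
        }
        close(conn);
        close(srv);
        return 0;
    }

On a single host (scenario 1), switching AF_INET to a Unix-domain socket (AF_UNIX) keeps the same code shape and lowers the per-call latency; for scenario 2 the same code works across machines.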

C++ master/worker

I am looking for a cross-platform C++ master/worker library or work-queue library. The general idea is that my application would create some sort of Task or Work objects and pass them to the work master or work queue, which would in turn execute the work in separate threads or processes. To provide a bit of context, the application is a CD ripper, and the tasks that I want to parallelize are things like "rip track", "encode WAV to MP3", etc.
My basic requirements are:
Must support a configurable number of concurrent tasks.
Must support dependencies between tasks, such that tasks are not executed until all tasks that they depend on have completed.
Must allow for cancellation of tasks (or at least not prevent me from coding cancellation into my own tasks).
Must allow for reporting of status and progress information back to the main application thread.
Must work on Windows, Mac OS X, and Linux.
Must be open source.
It would be especially nice if this library also:
Integrated with Qt's signal/slot mechanism.
Supported the use of threads or processes for executing tasks.
By way of analogy, I'm looking for something similar to Java's ExecutorService or some other thread-pooling library, but in cross-platform C++. Does anyone know of such a beast?
Thanks!
I haven't used it in long enough that I'm not positive whether it exactly meets your needs, but check out the Adaptive Communication Environment (ACE). This library lets you construct "active objects" which have work queues and execute their main body in their own threads, as well as thread pools that can be shared among objects. You can then queue work objects for active objects to process, and objects can be chained in various ways. The library is fairly heavy and there is a lot to learn, but a couple of books have been written about it and there's a fair amount of tutorial information available online as well. It should be able to do everything you want and more; my only concern is whether it has the interfaces you are looking for out of the box, or whether you'd need to build on top of it to get exactly what you want.
I think this calls for Intel's Threading Building Blocks, which pretty much does what you want.
Check out Intel's Threading Building Blocks library.
Sounds like you require some kind of "time-sharing system". There are some good open-source ones out there, but I don't know if they have built-in Qt slot support.
This is probably huge overkill for what you need, but still worth mentioning: BOINC is a distributed framework for such tasks. There's a main server that hands out tasks to perform and a cloud of workers that do its bidding. It is the framework behind projects like SETI@home and many others.
See this post for creating threads using the Boost library in C++:
Simple example of threading in C++
(it is a C++ thread even though the title says C)
Basically, create your own "master" object that takes a "runnable" object and starts it running in a new thread.
Then you can create new classes that implement "runnable" and hand them to your master runner any time you want.
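A minimal sketch of that master/runnable shape, using std::thread instead of Boost (the class and task names are invented):

    #include <cstdio>
    #include <thread>
    #include <vector>

    // The "runnable" interface: anything the master can execute.
    struct Runnable {
        virtual void run() = 0;
        virtual ~Runnable() = default;
    };

    // The "master": starts each submitted runnable in a new thread and
    // joins them all on destruction. A real version would add a bounded
    // pool, a task queue, dependencies, and cancellation.
    class Master {
        std::vector<std::thread> threads_;
    public:
        void submit(Runnable &task) {
            threads_.emplace_back([&task] { task.run(); });
        }
        ~Master() {
            for (auto &t : threads_) t.join();
        }
    };

    struct RipTrack : Runnable {          // hypothetical CD-ripper task
        void run() override { std::printf("ripping track...\n"); }
    };

    int main()
    {
        RipTrack rip;        // declared first so it outlives the threads
        Master master;
        master.submit(rip);
    }                        // master's destructor joins the worker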