Inter-process communication between independent DLLs - C++

I'm developing and maintaining a set of DLLs that are used as plugins for a host application. The host application has a plugin API which my plugins implement.
The host application is developed by another company and I have no control over how the plugins are used: the host application might load/unload any of the plugins at any time and in any order. A plugin can run in any thread and might also be called from different threads.
I need a way for these plugins to share a common resource. This resource should be initialized by the first plugin that is loaded and uninitialized by the last plugin that is unloaded. First and last might be different plugins. Thread safety is an important issue.
You can think of this as a singleton that is shared between all the currently loaded plugins.
A possible solution could be for all my plugins to share a common DLL that initializes the singleton when it is loaded and destroys it when it is unloaded.
However, I would like to have my plugins self-contained if at all possible, to ease deployment on users' machines.
Because the host application is cross-platform, the solution should be cross-platform and work the same way on Windows, Mac OS and Linux (if at all possible). To that effect I looked at Boost, but was overwhelmed by the number of classes and options in the Boost inter-process code.
I do not ask for a complete coded solution, but rather an advice about the best way to approach this issue.
More information and answers to questions:
The issue here is that I cannot expect any help from the host application, so it does not really matter what it is. There are actually a few applications that use the plugins and so I cannot rely on any specific features of any single application.
I can say that the host application is a normal desktop application, e.g. a plain old .exe on Windows, a .app on Mac OS. No iOS or Android apps.
The plugin interface is a set of functions the host can call. The API is one-way: the host can call the plugin but the plugin cannot call the host. Each plugin has an initialization function that the host must call once upon loading, and an uninitialization function the host must call once before unloading the DLL.
Plugins are implemented in C++, but not C++11. Compilers are Visual Studio 2005 on Windows and Xcode 3.2 with gcc 4.2.1 on Mac.
That said, I would like to again emphasize that I'm looking for a general design for approaching the issue not for specific code.
Thanks for any help!

Remember that every program that uses your DLL has its own address space, so plugins loaded into different host processes cannot interact through normal memory (as opposed to special OS-supplied shared memory). One way to bridge the different processes is for your DLL to launch a separate process that contains the shared resource. You will then need to implement some sort of (local) socket API that allows the data to be shared.
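A minimal sketch of such a local channel, using `socketpair()` as a stand-in for the named local socket a real resource server would listen on (the function names here are illustrative, not part of any existing API):

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Send one request string over a connected local socket.
// A real protocol would add framing (length prefix) and error recovery.
bool send_request(int fd, const std::string& msg) {
    return write(fd, msg.c_str(), msg.size()) == (ssize_t)msg.size();
}

// Receive one request; returns an empty string on error/EOF.
std::string recv_request(int fd) {
    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf));
    return n > 0 ? std::string(buf, n) : std::string();
}
```

In a real deployment the resource server would own one end and each plugin's process the other, connected via a Unix-domain socket path or a loopback TCP port agreed on in advance.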

You could use Qt (actually QtCore), GTK (actually its GLib, perhaps through Gtkmm, which is a C++ glue above them), Poco, or perhaps the Apache Portable Runtime.
All of them are free software, cross-platform frameworks with powerful IPC and multi-threading (and plugin) abilities.
We cannot help more unless you tell much more about your (third party) host application, its plugin interface, and your own plugins. Perhaps the host application does already provide some portable ways to do inter-process communication, or thread-safe singletons... (this is why you should tell us more about that host application; it probably uses, or at least provides, some cross-platform library or API like the ones I listed).
Perhaps using C++11 might help. I guess you want some singleton pattern.
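The init/uninit discipline the question describes amounts to reference counting, which can be sketched as below. Note the caveat in the comments: as written, these statics live in whichever module compiles them, so for fully self-contained plugins the counter and pointer would have to live in OS-named shared memory (CreateFileMapping on Windows, shm_open on POSIX) keyed by the process ID. Function and struct names are illustrative:

```cpp
#include <pthread.h>
#include <cstddef>

// Placeholder for the real shared resource.
struct SharedResource {
    int value;
    SharedResource() : value(42) {}
};

// Each plugin calls acquire from its init entry point and release from
// its uninit entry point, so the first plugin loaded constructs the
// resource and the last one unloaded destroys it, in any order.
// CAUTION: statics are per-module; self-contained DLLs would need to
// keep the counter and pointer in process-wide named shared memory.
static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static SharedResource* g_resource = NULL;
static int g_refcount = 0;

SharedResource* acquire_shared_resource() {
    pthread_mutex_lock(&g_lock);
    if (g_refcount++ == 0)
        g_resource = new SharedResource();   // first plugin in
    SharedResource* r = g_resource;
    pthread_mutex_unlock(&g_lock);
    return r;
}

void release_shared_resource() {
    pthread_mutex_lock(&g_lock);
    if (--g_refcount == 0) {                 // last plugin out
        delete g_resource;
        g_resource = NULL;
    }
    pthread_mutex_unlock(&g_lock);
}
```

pthreads is used here because the question rules out C++11; on Windows the same shape works with a CRITICAL_SECTION or a named mutex.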

Related

Need to sandbox application that compiles C++ modules from untrusted sources online

I’m developing a C++ application where I want to compile C++ modules from potentially untrusted sources online, and have them operate on a specific bank of data within a single process. I’d like to sandbox these somehow. This is obviously a complex issue, but hoping to discover if there’s any potential approach or tool/library I haven’t yet thought of. The app will run on Windows & OSX at minimum, and (hopefully) Linux, iOS, Android.
My app would locally compile the C++ modules it downloads, and dynamically link the object code to a process in the app (not necessarily the “main” app process). The C++ modules would only have access to my API via the headers I provide; however, the API (and any dependent libraries) need to be linked into the same process. The API’s dependent libraries are compute-based only, such as native SIMD-based math and possibly memory allocation. I don’t expect they will need to call any network, disk, or other OS functionality, for that matter – except for needing to communicate their input data and computed results to the main process (maybe over shared memory?)
I don’t care if the sandboxed process’ memory is corrupted or hollowed, as long as it’s contained in that process. I also want to avoid having any system API call addresses linked into the process memory space, to prevent compromised code from finding them.
I’ve done a review of the basic security issues (stack crashes, return oriented programming hacks, etc.). Also looked at some related projects:
I see Google has a sandbox subproject within the Chromium repo which might be useful, but I’m unsure of its utility in my case.
Windows Sandbox is a Microsoft tool for Windows only, and isn’t available on some versions anyway. Moreover, there are big performance issues with using it. The app runs in real time, with frame-rate requirements similar to a video game.
I considered compiling to WebAssembly, but at the moment it seems too immature (no SIMD, hard to debug, and potentially vulnerable to hacks in the wrapping host or browser).
I thought there might be some kind of wrapper library already out there to intercept all OS calls and allow custom configuration of what calls get passed through (in my case, anything except what’s needed for the inter-process communication would be denied)
Any other ideas, architectural suggestions, or promising open source projects on the horizon for this ?
Thanks,
C
Compiling untrusted source code and linking it into your app sounds really unsafe. If I understand your problem correctly, you need to "provide a safe runtime environment for single-threaded user code with access only to your API"; in that case, in my opinion it's better to use a runtime interpreter instead. It will give you more control and sandboxing capabilities, safe API calls, and handling of exceptions in user code.
If you have doubts about interpreter performance, it's a good trade-off for safety, flexibility and control. Most interpreters compile source code to bytecode and run really fast. You can also reach better performance by providing a fast API to the script.
In my Java enterprise projects I use the built-in Rhino JavaScript interpreter to run user scripts and provide an API, to get flexibility, the required performance and control. These scripts can call nothing but my API. It's safe, flexible and absolutely controllable.
I found these C/C++ (C-like syntax) interpreter libraries:
JavaScript (ECMA)
https://v8.dev/
Lua
http://acamara.es/blog/2012/08/running-a-lua-5-2-script-from-c/
C++ interpreter
https://github.com/root-project/cling
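Whichever interpreter is chosen, the safety comes from the same registration discipline: user code can only reach functions the host explicitly exposes. A minimal stand-alone sketch of that whitelist idea (class and function names are illustrative, not any real interpreter's API):

```cpp
#include <map>
#include <string>
#include <stdexcept>

// Signature of a host-exposed API function.
typedef double (*ApiFn)(double);

// The "sandbox": user code never sees raw function addresses, only
// names, and names resolve strictly through the host's whitelist.
class SafeRuntime {
    std::map<std::string, ApiFn> api_;  // the whitelist
public:
    void expose(const std::string& name, ApiFn fn) { api_[name] = fn; }

    double call(const std::string& name, double arg) const {
        std::map<std::string, ApiFn>::const_iterator it = api_.find(name);
        if (it == api_.end())
            throw std::runtime_error("call to unregistered function: " + name);
        return it->second(arg);  // only whitelisted code runs
    }
};

// Example host API function the sandboxed code is allowed to call.
static double doubled(double x) { return x * 2.0; }
```

Real interpreters (Lua's `lua_register`, V8's templates) follow the same pattern; the interpreter just adds parsing and execution of the untrusted source on top.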

How to Implement a C++ Plugin System for a Modular Application?

I am trying to design a modular application: one where developers create plugins as DLLs that get linked to the main application using LoadLibrary() or dlopen(). My current thoughts on this are:
1: Both the application and the plugin module include a core header with a pure virtual class IPlugin that has a method run(). The plugin module then implements the class, defining run().
2: The plugin module defines a function IPlugin* GetPlugin() using extern "C" to ensure ABI compatibility.
3: The application loads the plugin module using LoadLibrary(), then retrieves the IPlugin* from GetPlugin() using GetProcAddress().
4: The application calls the method run() to run the plugin.
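The four steps above can be sketched as follows. The plugin half is shown in the same translation unit for brevity; in a real build it would live in its own DLL/.so, and the host would locate GetPlugin() with GetProcAddress()/dlsym() (HelloPlugin and its `runs` counter are illustrative):

```cpp
// --- core header, included by both host and plugins (step 1) ---
class IPlugin {
public:
    virtual ~IPlugin() {}
    virtual void run() = 0;
};

// --- plugin side, normally compiled into its own DLL/.so ---
class HelloPlugin : public IPlugin {
public:
    int runs;                       // illustrative state
    HelloPlugin() : runs(0) {}
    virtual void run() { ++runs; }
};

// extern "C" keeps the exported symbol unmangled so the host can find
// it by name with GetProcAddress()/dlsym() (step 2).
extern "C" IPlugin* GetPlugin() {
    static HelloPlugin instance;
    return &instance;
}
```

Steps 3 and 4 on the host side then amount to loading the library, resolving the `"GetPlugin"` symbol, casting it to `IPlugin* (*)()`, and calling `run()` through the returned pointer.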
This all works, but how do I create a plugin interface that has access to my full application code? If I have a class that creates a window or initializes a button, I want to access those through the plugin interface and be able to manipulate them and receive events, while still having the memory be managed by the main application side.
1. How would I go about doing this?
2. Would the best solution be ABI compatible?
An API either uses C++ or C, there’s no middle ground. If you want to force plugin developers to use an ABI compatible compiler then you can have an interface that deals with classes and virtual methods and there’s no need for any “extern C” brouhaha.
Otherwise, you’re providing a C API and a C API only. Of course you can implement a virtual function table using a struct of function pointers, and internally you can wrap it in a C++ class and whatnot. But that’s your implementation detail and doesn’t figure in the design of the plugin interface itself.
So that’s that, pretty much. There’s no such thing as C++ API compatibility for free on Windows. People use at least 5 incompatible compilers - MSVC 2017+, 2015, 2012, mingw g++ and clang. Some particularly backwater corporate establishments will insist on using even older MSVC sometimes. So a C++ interface is mostly a lost cause unless you provide shims for all those users. It’s not unthinkable in these days of continuous integration (CI) - you can easily build wrappers that consume your C API and expose it to the users via a C++ interface compatible with their preferred development system. But that doesn’t mean that you get to mess with their C++ objects directly from your C++ code. There’s still a C intermediary, and you still need to use the helpers in your adapter code. E.g. you cannot delete the user provided object directly - you can only do it by calling a helper in the adapter DLL - a helper that uses the same runtime and is ABI compatible with user’s C++ code.
And don’t forget that there are binary incompatible runtime libraries for any given compiler version - e.g. debug vs release and single vs multithreaded. So, for any MSVC version you have to provide four adapter DLLs that the plugin developers would link with. And then your code links to that adapter as well, to access user’s plugin. So you would be first parsing the binary headers in the plugin to determine what adapter DLL and runtime it’s using, and issue an error message if they don’t match (plugin devs are very likely to mess up). Then if there’s a match you load the plugin DLL. The dynamic linker will bring in the requisite adapter DLL. Then you’re ready to use the adapter DLL to access the plugin.
Having done this before, my advice is to have each adapter DLL provide different C symbols to your host program, since invariably there will be multiple plugins each using a different adapter, and shared symbol names only complicate matters. Instead, you need to link to all the adapters via demand loading on Windows, and only access a particular adapter when you have parsed the plugin DLL to know what it’s using. Then when you call the adapter, the dynamic linker will resolve the demand-load stubs for the real adapter functions. A similar approach can be used on non-Windows platforms, but requires writing helper code to provide the demand-link feature, so it may be simplest to use dlopen explicitly on Unix. You’ll still need to parse the ELF headers of the plugin to figure out the C++ runtime it uses and the adapter library it expects, validate the combination, and only then load it. And then you’d dlopen the adapter to talk to the plugin. In no case would you be directly calling any functions on the plugin itself; only the adapter can do that safely when you need to cross C++ runtime boundaries. There may be easier ways to do all that, but my experience is that they only work 99% of the way, so in the end there’s no cheap solution here, not unless someone wrote an open source project (or a product) to do all this. You will be expected to understand the dirty implementation details and deal with C++ runtime bugs anyway if you dabble in that. So if you’d rather avoid all that: don’t mess with C++ user-visible APIs that require loadable libraries.
You can certainly do a header-only C-to-C++ bridge for the user, but surprisingly enough some users don’t like header-only solutions. Sigh. How do I know? I had to get rid of a header-only interface due to customer insistence… I wept and laughed like a maniac all at once that day.
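A minimal sketch of the struct-of-function-pointers C interface this answer describes (the versioned struct and all names are illustrative):

```cpp
// --- the C contract shared by host and plugins ---
// Only plain C types cross the boundary; no C++ ABI details leak out.
extern "C" {
typedef struct PluginV1 {
    void* self;                    // opaque plugin state
    int  (*run)(void* self);       // "virtual" methods as pointers
    void (*destroy)(void* self);   // the plugin frees its own memory
} PluginV1;
}

// --- plugin side: internally C++, externally plain C ---
struct Impl {
    int calls;
    Impl() : calls(0) {}
};

static int  impl_run(void* self)     { return ++static_cast<Impl*>(self)->calls; }
static void impl_destroy(void* self) { delete static_cast<Impl*>(self); }

extern "C" PluginV1 create_plugin_v1() {
    PluginV1 p;
    Impl* impl = new Impl();       // allocated and freed on the plugin's side,
    p.self    = impl;              // so allocator/runtime mismatches cannot occur
    p.run     = impl_run;
    p.destroy = impl_destroy;
    return p;
}
```

Versioning the struct name (`PluginV1`) is one way to evolve the interface later without breaking already-shipped plugins.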

Can I run a QT or wxWidgets GUI from a STA dll?

I am currently evaluating options for porting an existing unmanaged C++ codebase to use a new GUI toolkit. QT and wxWidgets both seem like a good fit so far as they have a strong object model. The application is only targeted at Windows machines, but a platform independent solution would be good to have.
In the future this code may need to be converted into a DLL restricted to a Single Threaded Apartment (STA). Is this a un-avoidable problem for either of these toolkits? Are there any other toolkits that I should be considering?
I know the DLL would be loaded by an application acting as a Multi Threaded Apartment (MTA). Unfortunately, this application may also be loading other DLLs that may have their own GUIs, which could be using other or similar toolkits not under my control. Are either of these toolkits more suited to these restrictions? I understand from other posts that starting a QT GUI from a DLL is possible, but not very flexible. However, I don't know if the same is true for wxWidgets, or if the STA restriction has any impact for either toolkit.
wxWidgets doesn't care about the apartment it's loaded into. The only COM interfaces it uses are the shell ones (e.g. IFileDialog) which should work in any apartment. So I just can't imagine having any new problems due to this. But maybe I just don't have a good enough imagination, of course...

Communicate with CoDeSys program on a Linux-based WAGO PFC200 PLC

I'm currently getting familiar with PLCs, the WAGO 750-8206 PLC in particular. It offers a Linux OS and can run CoDeSys programs. There are some I/O modules attached to the controller: 750-530, 750-430 and 750-600. What I would like to know is this:
Is it possible to write a C++ linux application that runs on the PLC and gets/sets the digital inputs and outputs?
Even better: can I write a CoDeSys program that "talks to the I/Os" and handles all the logic, and at the same time can be accessed by a C++ Linux program? The idea is this: I would like the CoDeSys program to check, let's say, two digital inputs. If both are high, a variable should be set to a defined value. The Linux application should be able to read that variable and conduct further processing (such as sending JSON data to a server or similar).
Also, I would need to be able to send commands from the linux application to the CoDeSys program in order to switch digital outputs (or set values on analog outputs etc) when the linux application receives a message that triggers the command.
Any thoughts and comments on this topic are greatly appreciated as I am completely new to this topic. Thanks in advance!
The answer you might want
The actual situation has changed into the opposite of the previous answer.
WAGO's recent Board Support Packages and Documentation actively support you in making changes and extensions to the PLC200 line, specifically the WAGO 750-8206 and 17 other PLCs (as of March 2016):
wago.us -> Products -> Components for Automation -> Modular WAGO-I/O-SYSTEM, IP 20 (750/753 Series)
What you have to do is get in touch with them and ask for their latest Board Support Package (BSP) for the PLC200 line.
I quote from the previous answer and mark the changes, my additions are in bold.
Synopsis
Could you hack a PFC200 and get custom binaries executed? Probably Absolutely yes. As long as the program is content to run on the Linux-3.6.11 kernel and glibc-2.16 and is compiled for the "armhf" ABI, any existing ARM application, provided you also copy the libraries it uses, will just run without even being compiled specifically for the PFC200.
Would it be easy or quick? No. Yes, if you have no fear of the Linux command line. It is as easy as using the cross compiler provided by the Board Support Package (BSP) with the provided C libraries, and then running this to transfer your program to the PFC's flash and run it:
scp your-program root@PFC200:/usr/bin
ssh root@PFC200 /usr/bin/your-program
Of course, you can use Eclipse CDT with the Cross Toolchain for the PFC200 and configure Eclipse to do remote run and debug.
Will this change in the future? Maybe. Remember that PFC200 is fairly new in North America. It has; the PFC200 appeared in September 2014.
The public HOWTO Building FORTE for Wago describes how to use the initial BSP to run FORTE, which is the IEC 61499 run-time environment of 4DIAC (link: sf.net/projects/fordiac ), an open source PLC environment allowing to implement industrial control solutions in a vendor neutral way. 4DIAC implements IEC 61499 extending IEC 61131-3 with better support for controller to controller communication and dynamic reconfiguration.
In case you want to access the KBUS (which talks to the I/Os) directly, you have to know that currently only one application can be in charge of KBUS.
So either CODESYS, or FORTE, or your own KBUS application can be in charge of the KBUS.
The BSP from 2015 has many examples and demos of how to use all the I/O of the PLC200 (KBUS, CAN, MODBUS, PROFIBUS, as well as the switches and LEDs on the PFC200 directly). Sources for the kernel with all kernel drivers and the other open-source components are provided and compiled in the Board Support Package (BSP).
But the sources for the libraries and tools developed from scratch by WAGO, which are not based on GPL/open-source code, are not provided. These include the Application Device Interface (ADI)/Device Abstraction Layer (DAL) libraries which do CANopen, PROFIBUS-Slave and KBus (which is used by all PLC I/O modules connected to the main PLC unit).
While CANopen uses the standard Linux SocketCAN API to talk to the kernel, and you could just write a normal SocketCAN program using the provided libsocketcan, the KBus API is a WAGO-specific invention and there you'd have to do some reverse engineering if you don't want to use WAGO's DAL for accessing all the electrical I/O of the PLC; but the DAL is documented, and examples of how to use it are provided in the BSP.
If you use CODESYS, however, there is a "codesys_lib_demo-0.1" example library which shows how to provide a library for CODESYS to use.
Outdated Answer
This answer was very specific to circumstances in 2014 and 2015. As of 2016, it contains incorrect information. Still going to leave as-is for now to provide background.
The quick answer you probably don't want
You could very reasonably write code using Codesys that put together a JSON packet and sent it off to a server elsewhere. JSON is just text, and Codesys can manipulate text in a fashion very similar to C. And there are many ethernet protocols available from within Codesys using addon libraries provided by Wago.
Now the long Answers
First some background
Since you seem to be new to Wago and the philosophy of Codesys in general... a short history.
Codesys is used to build and deploy Hard Realtime execution environments, and it is important to understand that utilizing libraries without fully understanding the consequences can destabilize performance of the entire system (bringing Codesys to its knees and throwing watchdog errors in the program). Remember, many PLC's are controlling equipment that could kill someone if it ever crashed.
Wago is fond of using Linux to provide the preemptive RT kernel for the low level task scheduling and then configuring Codesys to utilize much of the standard C-libraries that often accompany linux. Wago has been doing this for quite some time, but they would never allow you to peel back the covers without going through Codesys (which means using IEC 61131 languages, of which C++ is not included), and this was for your own safety (and their product image). If you wanted the power of linux on a Wago, you had to get a special PLC with a completely naked OS, practically no manual or support, and forfeit the entire Codesys runtime.
The new PFC200's have much more RAM and memory available than recent models, allowing for more of the standard linux userland stack (ssh, ftp, http,...) to be included without compromising the Codesys runtime, and they advertise this. BUT... they are still keeping a lid on compilation tools and required header files needed to compile and link to Codesys libraries or access specialized hardware (the Wago KBUS, which interfaces your I/O modules).
The Synopsis
Could you hack a PFC200 and get custom binaries executed? Probably yes.
Would it be easy or quick? No.
Will this change in the future? Maybe. Remember that PFC200 is fairly new in North America.
Things you may not know
Codesys does not necessarily know or care about Wago. You can get target platforms for Codesys that do target Intel processors running a Linux OS. Codesys DOES SUPPORT accessing external libraries (communication in the reverse direction is dangerous), but they often expect a C-style interface, and you can only access those libraries by defining C headers that Codesys will analyze, so you may need to do some magic to get C++ working seamlessly. What you can do is create a segment of shared memory that both C++ and Codesys access, and that is how they pass information (synchronization is another problem).
You can get an Open Wago PLC, running Codesys on Linux. Wago's IPC are made specifically for this purpose. They have more power, memory, and communication capabilities in general; but they do cost more than double your typical Wago PLC.
If you feel like toying with the idea of hacking a Wago, you will need to tear apart the manuals for Codesys (it has its own), the manuals for the Wago IPC's, and already be familiar with linux style inter-process-communication and/or dynamic libraries.
Also, there is an older Wago PLC that had the naked Linux on it 750-8??. It also has a very good manual on how to access the Wago hardware using supplied headers.
You must first understand how Codesys expects to talk to its target operating system. Then you work backwards to make it talk to Wago specific libraries living on that operating system. You must be careful not to hijack Codesys.
Your extra C++ libs should assist Codesys, not take it over. For instance, host a sqlite database on the same device, and use C++ to manage the database and provide a very simple interface that Codesys can utilize. All Codesys would do is call a function and pass some values, but your C++ would actually build an SQL query and issue it to the database (Codesys doesn't need to know why or how this is happening).
I hope at least one paragraph is helpful in some way.

Best practices for creating an application which will be upgraded frequently - C++

I am developing a portable C++ application and looking for some best practices in doing that. This application will have frequent updates and I need to build it in such a way that parts of program can be updated easily.
For a frequently updated program, is splitting the program parts into libraries the best practice? If program parts are in separate libraries, users can just replace a library when something changes.
If the answer to point 1 is "yes", what type of library do I have to use? On Linux, I know I can create a "shared library", but I am not sure how portable that is to Windows. What type of library should I use? I am aware of the DLL-hell issues on Windows as well.
Any help would be great!
Yes, using libraries is good, but the idea of "simply" replacing a library with a new one may be unrealistic, as library APIs tend to change and apps often need to be updated to take advantage of, or even be compatible with, different versions of a library. With a good amount of integration testing though, you'll be able to 'support' a range of different versions of the library. Or, if you control the library code yourself, you can make sure that changes to the library code never breaks the application.
In Windows DLLs are the direct equivalent to shared libraries (so) in Linux, and if you compile both in a common environment (either cross-compiling or using MingW in Windows) then the linker will just do it the same way. Presuming, of course, that all the rest of your code is cross-platform and configures itself correctly for the target platform.
IMO, DLL hell was really more of a problem in the old days when applications all installed their DLLs into a common directory like C:\WINDOWS\SYSTEM, which people don't really do anymore simply because it creates DLL hell. You can place your shared libraries in a more appropriate place where it won't interfere with other non-aware apps, or - the simplest possible - just have them in the same directory as the executable that needs them.
I'm not entirely convinced that separating out the executable portions of your program in any way simplifies upgrades. It might, maybe, in some rare cases, make the update installer smaller, but the effort will be substantial, and certainly not worth it the one time you get it wrong. Replace all executable code as one in most cases.
On the other hand, you want to be very careful about messing with anything your users might have changed. Draw a bright line between the part of the application that is just code and the part that is user data. Handle the user data with care.
If it is an application, my first choice would be to ship a statically-linked single executable. I had the opportunity to work on a product that was shipped on 5 platforms (Win2K, WinXP, Linux, Solaris, Tru64 Unix), and believe me, maintaining shared libraries or DLLs with a large codebase is a hell of a task.
Suppose this is a non-trivial application which involves the use of a 3rd-party GUI, threads, etc. Using C++, there is no real single way of doing it on all platforms. This means you will have to maintain different codebases for different platforms anyway. Then there are some weird behaviours (bugs) of 3rd-party libraries on different platforms. All this creates a burden if the application is shipped using different library versions, i.e. different versions are attached to different platforms. I have seen people ship libraries to all platforms when the fix is only for a particular platform, just to avoid the versioning confusion. But it is not that simple: the customer often has a different angle on how they want to upgrade/patch, which also has to be considered.
Of course, if the binary you are building is huge, then one can consider DLLs/shared libraries. Even if that is the case, what I would suggest is to build your application in the form of layers, like:
Application-->GUI-->Platform-->Base-->Fundamental
So here some libraries can have common code for all platforms. Only specific libraries like 'Platform' need to be updated for specific behaviours. This will make your life a lot easier.
IMHO a DLL/shared-library option is viable when you are building a product that acts as a complete solution rather than just an application. In such a case different subsystems use common logic simultaneously within your product framework whose logic can then be shared in memory using DLLs/shared-libraries.
HTH,
As soon as you're trying to deal with both Windows and a UNIX system like Linux, life gets more complicated.
What are the service requirements you have to satisfy? Can you control when client systems get upgraded? How many systems will you need to support? How much of a backward-compatibility requirement do you have.
To answer your question with a question, why are you making the application native if being portable is one of the key goals?
You could consider moving to a virtual platform like Java or .Net/Mono. You can still write C++ libraries (shared libraries on Linux, DLLs on Windows) for anything that would be better as native code, but the bulk of your application will be genuinely portable.