I have a program that receives commands over the network and dispatches each one to a specific function. Now I want to implement a plugin feature: I can drop a .dll file into a folder, and the program invokes the methods in that DLL based on the received command.
I have two ideas for solving this, but I don't know which of them is better/more performant:
1) Initialize all methods + commands from the DLL with reflection and store them in a std::map<std::string, void(*)(Args...)>. When the program receives a command, it looks up the associated function in the map and invokes it.
2) Load the DLL at runtime and create an interface that hands the std::string with the arguments over to all DLLs that implement it. The method in each DLL uses if statements to check whether the command can be processed there. (Observer pattern)
If there are better options that I have not mentioned, let me know.
Although you are using incorrect terminology to describe what you want (C++ has no built-in reflection), I would say option 2 is "cleaner".
Forcing a singular interface that is implemented by plug-in DLLs is the best-practice way of establishing the sort of dependency injection you're seeking, in my opinion.
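A minimal sketch of what that single interface might look like (ICommandHandler, HandleCommand, CreateHandler, and the helper functions are placeholder names, not an existing API):

    // plugin_api.h -- shared by the host and every plugin DLL (all names are placeholders)
    #include <string>

    class ICommandHandler {
    public:
        virtual ~ICommandHandler() {}
        // Return true if this plugin recognized and processed the command.
        virtual bool HandleCommand(const std::string& command,
                                   const std::string& args) = 0;
    };

    // Host side: load each DLL from the plugin folder, collect the handlers,
    // and offer every incoming command to all of them.
    #include <vector>
    #include <windows.h>

    typedef ICommandHandler* (*CreateHandlerFn)();
    std::vector<ICommandHandler*> handlers;

    void LoadPlugin(const char* path) {
        HMODULE dll = LoadLibraryA(path);
        if (!dll) return;
        // Each plugin is expected to export: extern "C" ICommandHandler* CreateHandler();
        CreateHandlerFn create = (CreateHandlerFn)GetProcAddress(dll, "CreateHandler");
        if (create) handlers.push_back(create());
    }

    void Dispatch(const std::string& command, const std::string& args) {
        for (size_t i = 0; i < handlers.size(); ++i)
            if (handlers[i]->HandleCommand(command, args))
                break;   // stop at the first plugin that accepts it, or keep going if several may react
    }

One caveat: handing C++ interfaces (and std::string) across a DLL boundary assumes all modules are built with compatible compilers and runtime settings; otherwise keep plain C types at the boundary.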
Related
I need to provide my users with the ability to write mathematical computations in the program. I plan to have a simple text interface with a few buttons, including ones to validate the script grammar, save, etc.
Here's where it gets interesting. The functions the user writes need to execute at multi-megabyte line speeds in a communications application, so I need the speed of a compiled language but the workflow of a script. A fully interpreted language just won't cut it.
My idea is to precompile the saved user modules into objects at initialization of the C++ application. I could then use these objects to execute the code when called upon. Here are the workflows I have in mind:
1) Testing (initial writing) of a script: write code in the editor, save, compile into an object (checking the grammar), run with test I/O, edit the code.
2) Use of the code (normal operation of the application): load the script from file, compile the script into an object, run the object code, run the object code, run the object code, etc.
I've looked into several off-the-shelf interpreters, but can't find what I'm looking for. I considered Java, as it is pretty fast, but I would need to load the Java virtual machine, which means passing objects between C and the virtual machine... the interface is the bottleneck there. I really need to create a native C++ object running C++ code if possible. I also need to be able to run the code effectively on multiple processors in a controlled manner.
I'm not looking for the whole explanation on how to pull this off, as I can do my own research. I've been stalled for a couple days here now, however, and I really need a place to start looking.
As a last resort, I will create my own scripting language to fulfill the need, but that seems a waste with all the great interpreters out there. I've also considered taking an existing open-source compiler and slicing it up for the functionality I need... just not saving the compiled results to disk... I don't know. I would prefer to use a mainline language if possible... but that's not required.
Any help would be appreciated. I know this is not your run-of-the-mill idea, but someone has to have done it before.
Thanks!
P.S.
One thought that just occurred to me while writing this: what about using a true C compiler to create object code, save it to disk as a DLL, then reload and run it inside "my" code? Can you do that with MS Visual Studio? I need to look at the licensing of the compiler... and at how to reload the library dynamically while the main application continues to run... hmmmm. I could then just group the "functions" created by the user into library groups. Ok, that's enough of this particular brain dump...
A possible solution could be to use gcc (MinGW, since you are on Windows) and build a DLL out of your user-defined code. The DLL should export just one function. You can use the Win32 API to handle the DLL (LoadLibrary/GetProcAddress, etc.). At the end of this you have a C-style function pointer. The remaining problem is the arguments: if your computation has just one parameter you can cast the pointer to double (*)(double), but if you have many parameters you need to match them all.
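In outline, assuming the generated DLL exports a single function named user_func (a placeholder) taking and returning a double:

    #include <windows.h>
    #include <cstdio>

    typedef double (*user_fn)(double);

    int main() {
        // user_module.dll was produced earlier by running gcc/MinGW on the user's code.
        HMODULE dll = LoadLibraryA("user_module.dll");
        if (!dll) { std::printf("failed to load DLL\n"); return 1; }

        // "user_func" is whatever exported name the DLL was built with.
        user_fn f = (user_fn)GetProcAddress(dll, "user_func");
        if (f) std::printf("result: %f\n", f(42.0));

        FreeLibrary(dll);
        return 0;
    }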
I think I've found a way to do this using standard C.
1) Standard C needs to be used because, when it is compiled into a DLL, the resulting interface is cross-compatible with multiple compilers. I plan to do my primary development in MS Visual Studio and compile objects in my application using gcc (the Windows version).
2) I will expose certain variables to the user (inputs and outputs) and standardize them across units. This allows multiple units to be developed with the same interface.
3) The user will only create the inside of the function, using standard C syntax and grammar. I will then wrap that function with text to fully define the function and its environment (remember those variables I intend to expose?). I can also group multiple functions under a single executable unit (DLL) using name parameters.
4) When the user wishes to test their function, I dump the DLL from memory, compile their code with my wrappers in gcc, and then reload the DLL into memory and run it. I would let them define inputs and outputs for testing.
5) Once the test/create step is complete, I have a compiled library that can be loaded at run time and handled via pointers. The inputs and outputs are standardized, so I always know what my I/O is.
6) The only problem with standardized I/O is that some of the inputs and outputs are likely to go unused. I need to see if I can put default values in or something.
So, to sum up:
Think of an app with a text box and a few buttons. You are told that your inputs are named A, B, and C, and that your outputs are X, Y, and Z, of specified types. You then write a function using standard C code, plus functions from the specified libraries (I'm thinking math, etc.).
So now you're done... you see a few boxes below to define your input. You fill them in and hit the TEST button. This would wrap your code in a function context, dump the existing DLL from memory (if it exists), and compile your code along with any other functions in the same group (another parameter you could define, basically just a name to the user). It then runs the function through a function pointer, using the inputs defined in the UI. The outputs are sent to the user so they can determine whether their function works. Any compilation errors would also be shown to the user.
Now it's time to run for real. Of course I kept track of which functions are where, so I dynamically open the DLL and load all the functions into memory via function pointers. I start shoving data into one side and the functions give me the answers I need. There would be some overhead to track I/O and to make sure the functions are called in the right order, but the execution would be at compiled machine-code speed... which is my primary requirement.
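Roughly, the compile-and-reload step would look something like this (just a sketch; the wrapper text, file names, and the run_unit export name are placeholders):

    #include <cstdio>
    #include <cstdlib>
    #include <string>
    #include <windows.h>

    typedef void (*unit_fn)(const double* inputs, double* outputs);
    static HMODULE g_unit = NULL;

    // Wrap the user's function body in a full C function with the standardized I/O,
    // write it out, compile it into a DLL with gcc, and reload the DLL.
    bool BuildUnit(const std::string& userBody) {
        std::string src =
            "__declspec(dllexport) void run_unit(const double* in, double* out)\n"
            "{\n"
            "    double A = in[0], B = in[1], C = in[2];\n"
            "    double X = 0, Y = 0, Z = 0;\n"
            + userBody + "\n"
            "    out[0] = X; out[1] = Y; out[2] = Z;\n"
            "}\n";

        FILE* f = std::fopen("unit.c", "w");
        if (!f) return false;
        std::fputs(src.c_str(), f);
        std::fclose(f);

        if (g_unit) { FreeLibrary(g_unit); g_unit = NULL; }   // dump the old DLL first
        return std::system("gcc -O2 -shared -o unit.dll unit.c") == 0
            && (g_unit = LoadLibraryA("unit.dll")) != NULL;
    }

    // After BuildUnit succeeds:
    //   unit_fn run = (unit_fn)GetProcAddress(g_unit, "run_unit");
    //   run(inputs, outputs);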
Now I have explained what I think will work, in two different ways. Can you think of anything that would keep this from working, or perhaps any advice/gotchas/lessons learned that would help me out? Anything from the type of interface, to tips on dynamically loading DLLs in this manner, to using the gcc compiler this way, would be most helpful.
Thanks!
For some test needs I want a third-party DLL to be replaced with my own stubbed version: the stubbed methods should return some hard-coded data with specific delays, and the effect of those delays on the system that uses the original DLL is what is actually under test.
So, for this purpose, I need some way to create a *.cpp file with the same structs and signatures as the DLL. What is the best approach to do this programmatically?
1) Decompilers? Though I don't actually need the code itself, just the class definitions and signatures.
2) Do it programmatically with C++ code.
3) Do it programmatically with Java code, through JNI for example.
Any ideas and directions are welcome. Unfortunately, googling doesn't give a straightforward answer, at least for C++ newbies like me.
If this DLL is loaded from Java and called via JNI, then you can run javah to produce the relevant headers; use the -stubs option to prepare C files with dummy implementations of all the native methods.
Update: -stubs does not work correctly, see Unable to generate JNI .c file using javah -stubs
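For the header part, which does still work, the invocation would be along these lines (the class name and paths are hypothetical):

    javah -classpath build/classes -d generated com.example.NativeLib

The generated com_example_NativeLib.h then lists every native method's JNI signature, which you can copy into your own stub .cpp and fill with the hard-coded data and delays.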
I've been developing for some time, and these beasts (DECLARE_DYNAMIC_CLASS / IMPLEMENT_DYNAMIC_CLASS) appear from time to time in MFC and wxWidgets code, yet I can't find any info on what exactly they do.
As I understand it, they appeared before dynamic_cast was integrated into core C++, and their purpose is to allow object creation on the fly and runtime dynamic casts.
But that is where all the information I have found ends.
I've run into some sample code that uses DECLARE_DYNAMIC_CLASS and IMPLEMENT_DYNAMIC_CLASS within a DLL, applied to exported classes, and this structure confuses me.
Why is it done this way? Is it a plugin-based approach, where you call LoadLibrary and then call the CreateDynamicClass to get a pointer that can be cast to the needed type?
Does DECLARE/IMPLEMENT_DYNAMIC work across DLL boundaries? Even exporting a class with dllexport is not entirely safe, and here we have a custom RTTI table on top of the existing problems.
Is it possible to derive my class from a dynamic class in another DLL, and how would that work?
Can anyone please explain to me what these things are for, or point me to where I can find more than two sentences on the topic?
These macros append additional type information to your class, which enables RTTI in a runtime-independent manner, makes it possible to have factories that create your classes by name, and many other things. You can find a similar approach in COM, QMetaObject, etc.
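In wxWidgets, for example, typical usage looks roughly like this (a sketch; MyPlugin is a made-up class):

    #include <wx/object.h>

    // Header: the macro adds a static wxClassInfo describing MyPlugin and its base.
    class MyPlugin : public wxObject {
        DECLARE_DYNAMIC_CLASS(MyPlugin)
    public:
        MyPlugin() {}
    };

    // Implementation file: registers MyPlugin in the global class table at startup.
    IMPLEMENT_DYNAMIC_CLASS(MyPlugin, wxObject)

    void Example() {
        // An instance can be created from the class name alone -- e.g. after
        // LoadLibrary on a plugin DLL whose IMPLEMENT_ macro registered the class.
        wxObject* obj = wxCreateDynamicObject(wxT("MyPlugin"));
        MyPlugin* plugin = wxDynamicCast(obj, MyPlugin);
        delete plugin;
    }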
Have you looked at the definitions of DECLARE/IMPLEMENT_DYNAMIC?
In the MS world, all-uppercase usually denotes a macro, so you can just look up the definition and try to work out what it's doing from there. If you're in Visual Studio, there's a key you can hit to jump to the definition (F12, Go To Definition); see what it says, then look that up and keep working from there.
I have a C++ DLL which was written in 1998, and I want to view its members (properties, fields, methods, constructors, etc.). My understanding is that the company who wrote the DLL is no longer around, but the DLL is still in use.
If I have only the DLL, is this possible? Or do you have to just know what is inside the DLL to work with it. If it is possible how would I go about this?
I am looking to work with the DLL from .Net via P/Invoke.
Get Dependency Walker (http://www.dependencywalker.com/), use depends.exe to open the DLL, then activate "Undecorate C++ Functions" in the "View" menu. I mainly use it to find dependencies, but it also exposes the entry points of DLLs.
This is not foolproof, because a DLL that exposes a class doesn't have to export its methods. For example, pure virtual method layout is sufficiently uniform that you can expose your instances as interface pointers, with maybe a factory function. But it might solve your problem.
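That pattern looks roughly like this (a sketch with made-up names):

    // Published in a header: the only thing the client knows about.
    class ICalculator {
    public:
        virtual int Add(int a, int b) = 0;
        virtual void Release() = 0;   // let the DLL free what the DLL allocated
    };

    // Inside the DLL: the concrete class is never exported at all.
    class CalculatorImpl : public ICalculator {
    public:
        virtual int Add(int a, int b) { return a + b; }
        virtual void Release() { delete this; }
    };

    // The factory is the only symbol that appears in the export table.
    extern "C" __declspec(dllexport) ICalculator* CreateCalculator() {
        return new CalculatorImpl();
    }

In that case dumpbin or Dependency Walker would show nothing but CreateCalculator, which is exactly why inspecting the export table alone is not foolproof.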
And regardless, you need a copy of Dependency Walker. :)
You can use a tool like IDA to disassemble the binary and try to make out function names, but this can be hard. Why do you only have the DLL? It would be much easier to get the info from the export table of the associated .lib or, even better, from the header files.
You can use the Visual Studio command prompt; a simple command is:
    dumpbin /exports <nameofdll>
NirSoft offers 'DLL Export Viewer'
https://www.nirsoft.net/utils/dllexp.zip
A convenient UI application that displays a DLL's exported functions.
I was wondering if it is possible to change the logic of an application at runtime? Maybe we could replace the implementation of an abstract class with another implementation? Or maybe we could replace a shared library at runtime...
Update: Suppose I've got two implementations of a function foo(x, y) and can use either of them based on the strategy pattern. Now I want to know whether it's possible to add a third implementation of foo(x, y) without restarting the application.
You can use a plugin (a library that you load at runtime) that exposes a new foo function.
I remember we implemented something similar at school, a calculator in which we could add new operations at runtime, without having to restart the program. See dlsym and dlopen.
Addendum
Be very careful, when dlclose-ing a plugin, that it is not still in use in some active call stack frame. On Linux you can call dlopen many thousands of times, so you could accept never dlclose-ing plugins, at the cost of some address-space leakage.
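A sketch of that on POSIX (foo3.so is a hypothetical plugin whose foo is declared extern "C" so dlsym can find it unmangled):

    #include <dlfcn.h>
    #include <cstdio>

    typedef double (*foo_fn)(double, double);

    int main() {
        // Load a plugin that was built separately, e.g. g++ -shared -fPIC -o foo3.so foo3.cpp
        void* handle = dlopen("./foo3.so", RTLD_NOW);
        if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        // Look up the third implementation of foo by name.
        foo_fn foo3 = (foo_fn)dlsym(handle, "foo");
        if (foo3) std::printf("foo3(2, 3) = %f\n", foo3(2.0, 3.0));

        // Do not dlclose while any call through foo3 might still be on a stack frame.
        dlclose(handle);
        return 0;
    }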
Exactly as you said, you can "replace the implementation of an abstract class with another implementation", if by that you mean using runtime polymorphism to swap instances of one set of concrete classes for instances of another.
More specifically, there is a well-known pattern, the Strategy pattern, for exactly this purpose. Have a look at the wiki page; it explains this very nicely, with a code example and a diagram.
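A minimal version of that, using the foo(x, y) from the question (class names are illustrative):

    // Strategy interface: any implementation of foo can be slotted in at run time.
    class FooStrategy {
    public:
        virtual ~FooStrategy() {}
        virtual double foo(double x, double y) = 0;
    };

    class FooSum : public FooStrategy {
    public:
        virtual double foo(double x, double y) { return x + y; }
    };

    class FooProduct : public FooStrategy {
    public:
        virtual double foo(double x, double y) { return x * y; }
    };

    class Context {
        FooStrategy* strategy_;   // not owned here, to keep the sketch short
    public:
        Context() : strategy_(0) {}
        void SetStrategy(FooStrategy* s) { strategy_ = s; }   // swap behaviour at run time
        double foo(double x, double y) { return strategy_->foo(x, y); }
    };

A genuinely new third implementation still has to come from somewhere, though; combining this with the dlopen/LoadLibrary plugin approach from the other answer is what lets you add one without restarting.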
The C++ virtual-function mechanism by itself does not allow you to change an implementation at run time.
However, you can achieve whatever implementation change you need at runtime with function pointers.
Here is an article on self-modifying code that I read recently: http://mainisusuallyafunction.blogspot.com/2011/11/self-modifying-code-for-debug-tracing.html