Every time I look at a plugin tutorial, it looks incredibly complex for the (conceptually) simple thing I'd like to do.
Let's say we are on Windows and I want to create a program with an interface, which I'd like to use to implement plugins as external dynamic libraries (DLLs).
So I have a header like this:
Interface.h:
class Interface
{
public:
    // virtual destructor so a plugin can be deleted through the interface pointer
    virtual ~Interface() = default;
    virtual void overrideMe() = 0;
};
I implement it in other code (the plugin), which I build into a DLL:
MyPlugin.cpp
#include "Interface.h"
class MyPlugin: public Interface
{
public:
void overrideMe()
{
std::cout << "Hey, I am a specific MyPlugin DLL!" << std::endl;
}
};
// DLL creation code blabla
So imagine I build this code into a library named MyPlugin.dll.
Now I'd like to use the Interface in a generic program, not by including headers and using traditional polymorphism, but by dynamically loading it from my DLL:
MyProgram.cpp:
#include "Interface.h"
int main()
{
    // Interface* i = new MyPlugin; // nope!
    Interface* i = chargeMe("MyPlugin.dll"); // How to do this?
    i->overrideMe(); // displays: "Hey, I am a specific MyPlugin DLL!"
}
Now the questions are:
What is the simplest way to do it using C++?
Is there any function like chargeMe("MyPlugin.dll") in the real world?
Do I necessarily need an external framework (like Qt) to do so, or is the standard library enough?
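For reference, a common way to implement something like chargeMe on Windows is to have the plugin export a plain factory function and to resolve it with LoadLibrary/GetProcAddress. A minimal sketch, assuming the plugin exports a function named createPlugin (that name is an assumption, not an established API):

#include <windows.h>
#include "Interface.h"

// In MyPlugin.cpp the plugin would export the factory with an unmangled name:
// extern "C" __declspec(dllexport) Interface* createPlugin() { return new MyPlugin; }

Interface* chargeMe(const char* dllPath)
{
    HMODULE lib = LoadLibraryA(dllPath);            // load the plugin DLL
    if (!lib) return nullptr;

    using Factory = Interface* (*)();
    auto create = reinterpret_cast<Factory>(
        GetProcAddress(lib, "createPlugin"));       // resolve the exported factory
    if (!create) return nullptr;

    return create();                                // the caller owns the new instance
}

LoadLibrary and GetProcAddress come from the Win32 API (dlopen/dlsym on POSIX); the C++ standard library itself has no equivalent, but no external framework such as Qt is strictly required either.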
I am trying to write test cases for my production code, but the coverage is drastically low because the code uses an external C library that cannot be executed without the target hardware, so I have no choice but to stub it. Now the problem is: how do I stub a C function?
My production code: prod_code.cpp
int TargetTestClass::targetFunc()
{
    if (externalCFunc() == true)
    {
        statement1; statement2; statement3; // and so on
    }
}
My testcode.cpp generally contains tests like this
//Fixture for Target Test class
class TargetTestClassFixture : public testing::Test {
protected:
    TargetTestClass* targetTestClassPtr;

    void SetUp() override {
        targetTestClassPtr = new TargetTestClass();
    }
    void TearDown() override {
        delete targetTestClassPtr;
    }
};
TEST_F (TargetTestClassFixture, test_001)
{
targetTestClassPtr->targetFunc(); //Need to do something here so that externalCFunc() returns true on call
}
What you can do is create a source file like my_c_stubs.c with a rudimentary implementation of your C function. For example, the implementation can just return true. Then don't link the original source file containing the external C function; link your stub file instead. You should still use the original C header. This way you won't be able to stub inline functions, though; if that is required, a more sophisticated approach is needed.
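A minimal sketch of such a stub file, assuming the library's header is named external_lib.h and that externalCFunc takes no arguments and returns an int (both assumptions):

/* my_c_stubs.c -- linked into the test build instead of the real library */
#include "external_lib.h"   /* assumed name of the original C header */

int externalCFunc(void)
{
    return 1;   /* always report success so targetFunc() takes the true branch */
}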
I found two solutions to my problem, so I am going to answer it here myself.
Solution 1: This involved changing the target source code. Basically you need to write a wrapper that calls the external C function, as below.
class TargetTestClass {
public:
    int targetFunc();
protected:
    virtual int externalCFuncWrapper(); // wrapper around the external C function
};

// Call the external C function from the wrapper
int TargetTestClass::externalCFuncWrapper() {
    return externalCFunc();
}

// Definition of targetFunc as in the original question, except that it
// calls externalCFuncWrapper() instead of externalCFunc() directly

// Now write a mock class for TargetTestClass as usual and mock the wrapper function to return what you want
class MockTargetTestClass : public TargetTestClass {
public:
    MOCK_METHOD0(externalCFuncWrapper, int());
};
//Now use the Mock class as needed
TEST (TargetUnitTest, TestingExternalCFunctionCall)
{
    MockTargetTestClass mockTargetTestClassObj;
    using ::testing::Return;
    using ::testing::_;

    EXPECT_CALL(mockTargetTestClassObj, externalCFuncWrapper())
        .WillOnce(Return(1));

    ASSERT_EQ(mockTargetTestClassObj.targetFunc(), 1);
}
Solution 2: Thanks to @kreynolds, I have looked into the Fake Function Framework and implemented it as follows:
class TargetTestClass {
public:
    int targetFunc();
    // No code change is needed in the target source code
};
//In testcode.cpp
#include <gmock-global/gmock-global.h>
MOCK_GLOBAL_FUNC0(externalCFunc, int());
TEST (UnitTest, test002) {
    using ::testing::Return;
    using ::testing::_;
    EXPECT_GLOBAL_CALL(externalCFunc, externalCFunc()).WillOnce(Return(1));
    TargetTestClass targetTestClassObj; // This class does not contain any wrapper
    EXPECT_EQ(targetTestClassObj.targetFunc(), 1);
}
I am using the second solution, as it does not require any change to my source code and is easier to use.
Once again thank you everyone for giving your time.
I've been trying to come up with a means of generating a C interface for a C++17 project of mine. The project produces an executable that loads plugins on the fly. I played with clang for a while before discovering SWIG, and I'm wondering if SWIG is up to the task, or if there's a trivial amount of work that I can do to make it suitable for this scenario.
Here's my vision of the plugin interface. Suppose the source code of my program looks like this:
header.h
namespace Test {
struct TestStruct {
int Data;
};
class TestClass {
public:
virtual ~TestClass() = default;
void TestMethod(TestStruct&) const;
virtual void TestVirtual(int);
};
}
then the following code should be generated:
api.h
// opaque handle types, only ever used through pointers
typedef struct Test_TestStruct Test_TestStruct;
typedef struct Test_TestClass Test_TestClass;
typedef struct {
void (*Test_TestClass_destructor)(Test_TestClass*);
void (*Test_TestClass_TestVirtual)(Test_TestClass*, int);
} Test_TestClass_vtable;
typedef struct {
Test_TestStruct *(*Test_TestStruct_construct)();
void (*Test_TestStruct_dispose)(Test_TestStruct*);
int *(*Test_TestStruct_get_Data)(Test_TestStruct*);
int *(*Test_TestStruct_set_Data)(Test_TestStruct*, int);
Test_TestClass *(*Test_TestClass_construct)();
Test_TestClass *(*Test_TestClass_construct_derived)(const Test_TestClass_vtable*);
void (*Test_TestClass_dispose)(Test_TestClass*);
void (*Test_TestClass_TestMethod)(const Test_TestClass*, Test_TestStruct*);
void (*Test_TestClass_TestVirtual)(Test_TestClass*, int);
} api_interface;
api_host.h
#include "api.h"
void init_api_interface(api_interface&);
api_host.cpp
#include "header.h"
#include "api.h"
// wrapper class
class _derived_TestClass : public Test::TestClass {
public:
_derived_TestClass(const Test_TestClass_vtable &vtable) : _vtable(vtable) {
}
~_derived_TestClass() {
if (_vtable.Test_TestClass_destructor) {
_vtable.Test_TestClass_destructor(reinterpret_cast<Test_TestClass*>(this));
}
}
void TestVirtual(int v) override {
if (_vtable.Test_TestClass_TestVirtual) {
_vtable.Test_TestClass_TestVirtual(reinterpret_cast<Test_TestClass*>(this), v);
} else {
TestClass::TestVirtual(v);
}
}
private:
const Test_TestClass_vtable &_vtable;
};
// wrapper functions
Test_TestStruct *_api_Test_TestStruct_construct() {
    return reinterpret_cast<Test_TestStruct*>(new Test::TestStruct());
}
void _api_Test_TestStruct_dispose(Test_TestStruct *p) {
    auto *phost = reinterpret_cast<Test::TestStruct*>(p);
    delete phost;
}
int *_api_Test_TestStruct_get_Data(Test_TestStruct *p) {
    return &reinterpret_cast<Test::TestStruct*>(p)->Data;
}
...
...
// sets the values of all function pointers
void init_api_interface(api_interface &iface) {
iface.Test_TestStruct_construct = _api_Test_TestStruct_construct;
iface.Test_TestStruct_dispose = _api_Test_TestStruct_dispose;
iface.Test_TestStruct_get_Data = _api_Test_TestStruct_get_Data;
...
...
}
When I compile the host program, I compile all these files into an executable, and call init_api_interface() to initialize the function pointers. When other people compile plugins, they only include api.h, and compile the files into a dynamic library with a certain exposed function, say init_plugin(const api_interface*). When the user loads a plugin, the host program only needs to pass a pointer to the struct to init_plugin in the dynamic library, and the plugin can set off to use all these functions.
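To make the flow concrete, a plugin built against nothing but api.h might look roughly like the sketch below; the __declspec export, the global pointer, and the calls chosen are illustrative assumptions:

// plugin.cpp -- compiled by a plugin author against api.h only
#include "api.h"

static const api_interface *g_api = nullptr; // function table handed over by the host

extern "C" __declspec(dllexport) void init_plugin(const api_interface *api)
{
    g_api = api;

    // Use the host's routines purely through the function pointers.
    Test_TestStruct *s = g_api->Test_TestStruct_construct();
    g_api->Test_TestStruct_set_Data(s, 42);
    g_api->Test_TestStruct_dispose(s);
}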
The benefits of using such a scheme are that:
Plugins compiled using different toolchains than the host program should work fine.
The list of API functions can be extended without breaking existing plugins, as long as new function pointers are added after existing ones.
This approach allows full access to routines in the host program, while it's also easy to hide certain aspects.
It allows plugins to inherit from classes in the host program, which is kinda important for my case.
Plugin developers don't need the source of the host program.
It's convenient since the API interface doesn't need to be manually maintained.
Of course, this is just a gist of the approach and many more details need to be considered in practice.
So my questions are:
Is this kind of plugin interface good practice? Are there existing examples of this approach? Are there better solutions to this problem? Are there any critical drawbacks of this approach that I don't see?
Can SWIG accomplish this task? If not, can SWIG be modified to do so?
If SWIG must be modified, which is easier, modifying SWIG or starting from scratch using clang?
I want to use some code that executes an HTTP POST. Because I'm not too familiar with C++ and the libraries you can use, and I am probably too dumb to get libcurl and curlpp to work, I found a link explaining how to use the .NET version.
Alright, so I created a class. Header file:
public ref class Element
{
public:
Element();
virtual ~Element();
void ExecuteCommand();
};
Class file:
#include "Element.h"
Element::Element()
{
}
Element::~Element()
{
Console::WriteLine("deletion");
}
void Element::ExecuteCommand(){
HttpWebRequest^ request = dynamic_cast<HttpWebRequest^>(WebRequest::Create("http://www.google.com"));
request->MaximumAutomaticRedirections = 4;
request->MaximumResponseHeadersLength = 4;
request->Credentials = gcnew NetworkCredential("username", "password", "domain");
HttpWebResponse^ response = dynamic_cast<HttpWebResponse^>(request->GetResponse());
Console::WriteLine("Content length is {0}", response->ContentLength);
Console::WriteLine("Content type is {0}", response->ContentType);
// Get the stream associated with the response.
Stream^ receiveStream = response->GetResponseStream();
// Pipes the stream to a higher level stream reader with the required encoding format.
StreamReader^ readStream = gcnew StreamReader(receiveStream, Encoding::UTF8);
Console::WriteLine("Response stream received.");
Console::WriteLine(readStream->ReadToEnd());
response->Close();
readStream->Close();
}
If I set the configuration type of this project to Application (exe) and create a new .cpp file where I create an instance of this Element, it works fine.
But my question is: is it possible to create a .dll/.lib library from this project and use it in a C++ project without CLI? (I don't want to use ^ for pointers :( )
Even if it's not possible, I have another problem.
When I link the library in a C++/CLI project, I get
unresolved token (06000001) Element::.ctor
unresolved token (06000002) Element::~Element
unresolved token (06000003) Element::ExecuteCommand
3 unresolved externals
the code for main.cpp in the second project is just the following:
#include <Element.h>
int main(){
return 0;
}
Thank you
As Hans Passant already stated: you must compile your C++/CLI code as a dynamic library in order to be able to consume it from an unmanaged application. CLI/managed code cannot reside in static libraries.
If you change the C++/CLI library target from Static Library to Dynamic Library, you'll be able to compile your unmanaged C++ application successfully.
One thought from my side:
I think you'll be better off if you use a mixed-mode C++/CLI DLL to consume the managed functionality; that way you can free your consumer application completely from referencing the CLR.
The header of such a mixed-mode wrapper for your Element class would look like this:
#pragma once
#pragma unmanaged
#if defined(LIB_EXPORT)
#define DECLSPEC_CLASS __declspec(dllexport)
#else
#define DECLSPEC_CLASS __declspec(dllimport)
#endif
class ElementWrapperPrivate;
class DECLSPEC_CLASS ElementWrapper
{
private:
ElementWrapperPrivate* helper;
public:
ElementWrapper();
~ElementWrapper();
public:
void ExecuteCommand();
};
And the implementation would look like this:
#include "ElementWrapper.h"
#pragma managed
#include "Element.h"
#include <msclr\auto_gcroot.h>
using namespace System::Runtime::InteropServices;
class ElementWrapperPrivate
{
public:
msclr::auto_gcroot<Element^> elementInst; // For Managed-to-Unmanaged marshalling
};
ElementWrapper::ElementWrapper()
{
helper = new ElementWrapperPrivate();
helper->elementInst = gcnew Element();
}
ElementWrapper::~ElementWrapper()
{
delete helper;
}
void ElementWrapper::ExecuteCommand()
{
helper->elementInst->ExecuteCommand();
}
Then just compile your Element.cpp + ElementWrapper.cpp to a DLL and use the ElementWrapper.h in your unmanaged applications.
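For completeness, the unmanaged consumer can then use the wrapper like any ordinary C++ class; a minimal sketch (linking against the DLL's import library is assumed and not shown):

// main.cpp in a plain, unmanaged C++ project
#include "ElementWrapper.h"

int main()
{
    ElementWrapper element;    // constructs the managed Element behind the scenes
    element.ExecuteCommand();  // runs the HttpWebRequest code inside the mixed-mode DLL
    return 0;
}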
I have the following interface that must be implemented by a set of DLLs I want to dynamically import later:
#include <string>

class ToolboxInterface {
public:
    struct ToolboxInfo {
        std::string name;
    };

    virtual ~ToolboxInterface() = default; // needed to delete implementations through the interface
    virtual void process() = 0;
    virtual void clear() = 0;
};
I want to dynamically load the DLLs as in here, and for that reason I have to force this interface on all my DLLs, as a way of being sure I can use GetProcAddress on all of them.
What is the best way of forcing this interface into a DLL project? Should I not use a class and use some other strategy instead? Or how can I use the class interface?
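For what it's worth, the usual pattern is to keep the interface in a shared header and have every plugin DLL export an extern "C" factory function with an agreed-upon name, which is the symbol you later look up with GetProcAddress. A sketch, where the header name ToolboxInterface.h and the factory name CreateToolbox are assumptions:

// In each plugin DLL project
#include "ToolboxInterface.h" // the shared interface header shown above

class MyToolbox : public ToolboxInterface {
public:
    void process() override { /* plugin-specific work */ }
    void clear() override { /* plugin-specific cleanup */ }
};

// Unmangled, uniformly named entry point that every plugin must provide;
// the host does GetProcAddress(hModule, "CreateToolbox") and calls it.
extern "C" __declspec(dllexport) ToolboxInterface* CreateToolbox()
{
    return new MyToolbox();
}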
Given two IDL definitions: (I'm only implementing a client, the server side is fixed.)
// Version 1.2
module Server {
interface IObject {
void Foo1();
void Foo2() raises(EFail);
string Foo3();
// ...
};
};
// Version 2.3
module Server {
interface IObject {
// no longer available: void Foo1();
void Foo2(string x) raises(ENotFound, EFail); // incompatible change
wstring Foo3();
// ...
};
};
(Edit Note: added Foo3 method that cannot be overloaded because the return type changed.)
Is it somehow possible to compile both stub code files in the same C++ CORBA Client App?
Using the defaults of an IDL compiler, the above two IDL definitions will result in stub code that cannot be compiled into the same C++ module, as you'd get multiple definition errors from the linker. The client however needs to be able to talk to both server versions.
What are possible solutions?
(Note: We're using omniORB)
(Adding answer from one Stefan Gustafsson, posted in comp.object.corba 2011-03-08)
If you look at it as a C++ problem instead of a CORBA problem, the
solution is C++ namespaces.
You could try to wrap the different implementations in different C++
namespaces.
Like:
namespace v1 {
#include "v1/foo.h" // From foo.idl version 1
}
namespace v2 {
#include "v2/foo.h" // from foo.idl version 2
}
And to be able to compile the C++ proxy/stub code you need to create C++
main files like:
// foo.cpp
namespace v1 {
#include "v1/foo_proxy.cpp" // filename depend on IDL compiler
}
namespace v2 {
#include "v2/foo_proxy.cpp"
}
This will prevent the C++ linker from complaining, since the names will be different. Of course, you could run into problems with C++ compilers that don't support nested namespaces.
A second solution is to implement the invocation using DII; you could write a C++ class like:
class ServerCall {
public:
    void foo2_v1() {
        // create request
        // invoke
    }
    void foo2_v2(const std::string &x) {
        // create_list
        // add_value("x", value, ARG_IN)
        // create_request
        // invoke
    }
};
By using DII you can create any invocation you like, and can keep full
control of your client code.
I think this is a good idea, but I haven't been able to try it out yet, so there may lurk some unexpected surprises with respect to things no longer being in the global namespace.
What comes to my mind is splitting the client code into separate libraries, one for each version.
Then you can select the correct client depending on the version to be used.
In a recent project we handled this by introducing a service layer with no dependency to the CORBA IDL.
For example:
class ObjectService
{
public:
    virtual ~ObjectService() = default; // virtual destructor, since clients delete through this interface
    virtual void Foo1() = 0;
    virtual void Foo2() = 0;
    virtual void Foo2(const std::string &x) = 0;
};
For each version, create a class derived from ObjectService and implement the operations by calling the remote CORBA object. Each derived class must be in a separate library; a sketch of one such class follows.
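A rough sketch of the version 1.2 implementation, assuming the 1.2 stubs have been wrapped in namespace v1 as in the earlier answer and that the service layer lives in a header called ObjectService.h (both assumptions):

// service_v12.cpp -- built only against the version 1.2 stubs
#include <stdexcept>
#include <string>
#include "ObjectService.h" // the CORBA-free service layer shown above

class ServiceObjectImpl12 : public ObjectService
{
public:
    explicit ServiceObjectImpl12(CORBA::Object_ptr obj)
        : remote_(v1::Server::IObject::_narrow(obj)) {}

    void Foo1() override { remote_->Foo1(); }
    void Foo2() override { remote_->Foo2(); }
    void Foo2(const std::string &) override {
        // Foo2(string) does not exist in version 1.2; how to handle this is a project decision.
        throw std::logic_error("Foo2(string) is not supported by a 1.2 server");
    }

private:
    v1::Server::IObject_var remote_;
};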
In the client implementation, you only operate on instances of ObjectService.
CORBA::Object_var remoteObject=... // How to get the remote object depends on your project
ObjectService *serviceObject = 0;

// Create a service object matching the remote object version.
// Again, this is project specific.
switch (getRemoteObjectVersion(remoteObject))
{
case VERSION_1_2:
    serviceObject = new ServiceObjectImpl12(remoteObject);
    break;
case VERSION_2_3:
    serviceObject = new ServiceObjectImpl23(remoteObject);
    break;
default:
    // No matching version found, throw exception?
    break;
}

// Access the remote object through the service object
serviceObject->Foo2("42");