I have a Visual C++ solution which consists of 3 projects.
One of these projects, project "A", is used by both other projects and has some global data which should always be the same.
However, when I link project A into both other projects, it seems that two instances of project A are working on different data.
Can this be the case, and how can I set up the linking process to prevent this from happening?
--- Update to make things clearer
- Project 1 -
int main() {
    init();
    test();
    return 0;
}
- Project 2 -
void test() {
    std::cout << get_data();
}
- Project A -
int data;

void init() {
    data = 123;
}

int get_data() {
    return data;
}
As you can see in this example, I am initializing the data of project A in the first project and accessing it from the second project. My observation is that the data is not initialized when the access from the second project takes place.
Both projects A and 2 are linked statically into project 1 so the output is a single executable.
A global resides in a single place in a process's memory space. If you have two processes that share a module, they'll each have a separate copy of the variable, yes.
You'll need to use IPC to share data between processes.
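For completeness, here is a minimal sketch of one IPC option, a named file mapping shared between the two processes (the mapping name and the use of a single int are arbitrary choices for illustration):
// Sketch only: both processes call this and get a view of the same named mapping.
#include <windows.h>

int* open_shared_int()
{
    HANDLE mapping = ::CreateFileMappingW(
        INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
        0, sizeof(int), L"Local\\ProjectA_SharedData");   // arbitrary name
    if (!mapping)
        return NULL;
    return static_cast<int*>(
        ::MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(int)));
}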
The symbols from project A in the static library are linked into both project 1 and project 2, separately. Getting them merged involves compiler-specific mechanisms.
Basically, you must make project 2 re-export project A's symbols, and have project 1 import those instead of importing project A directly.
If you can't do that (e.g. because you don't have control over either project 1 or 2), you must write workarounds inside project A. One option (usually the easiest) is to convert project A to a dynamic library. Then both project 1 and 2 load the same instance of project A and the data is shared.
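To illustrate that option with the example from the question, here is a minimal sketch of project A as a DLL (the PROJECTA_EXPORTS macro and file names are placeholders):
// projectA.h - shared header; PROJECTA_EXPORTS is defined only when building projectA.dll
#ifdef PROJECTA_EXPORTS
#define PROJECTA_API __declspec(dllexport)
#else
#define PROJECTA_API __declspec(dllimport)
#endif

PROJECTA_API void init();
PROJECTA_API int get_data();

// projectA.cpp - compiled into projectA.dll, so 'data' exists exactly once per process
#include "projectA.h"

static int data;
void init() { data = 123; }
int get_data() { return data; }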
Another option is to change project A so that it doesn't have a global variable, but instead registers a process-global data item that contains the data you want; for example, you could abuse the local atom table[1] to store a pointer to dynamic memory.
[1] http://msdn.microsoft.com/en-us/library/windows/desktop/ms649053%28v=vs.85%29.aspx#_win32_Integer_Atoms
I am building a dynamic multi agent simulation in OMNeT and for this I have to create new modules at runtime. The module creation is working, however, the modules created at runtime are not appearing in the 3D visualization.
module "node" is created sucessfully
Does anyone know how to make the module appear in the visualization? Do I have to update the visualization module?
omnet.ini:
[General]
network = AgentNetwork
*.visualizer.osgVisualizer.typename = "IntegratedOsgVisualizer"
*.visualizer.*.mobilityVisualizer.animationSpeed = 1
*.visualizer.osgVisualizer.sceneVisualizer.typename = "SceneOsgEarthVisualizer"
*.visualizer.osgVisualizer.sceneVisualizer.mapFile = "hamburg.earth"
AgentSpawner:
void AgentSpawner::initialize()
{
    cMessage *timer = new cMessage("timer");
    scheduleAt(1.0, timer);
}
void AgentSpawner::handleMessage(cMessage *msg)
{
    delete msg;  // the self-message has done its job

    cModuleType *moduleType = cModuleType::get("simulations.Agent");
    cModule *module = moduleType->create("node", getParentModule());

    // set up parameters and gate sizes before we set up its submodules
    module->par("osgModel") = "3d/glider.osgb.(20).scale.0,0,180.rot";
    module->getDisplayString().parse("p=200,100;i=misc/aircraft");
    module->finalizeParameters();

    // create internals, and schedule it
    module->buildInside();
    module->callInitialize();
    module->scheduleStart(simTime()+5.0);
}
The OSG visualization info is maintained completely separately from the actual simulation model module object (that's because visualization must always be optional in the simulation, so make sure your simulation builds fine with OSG turned off entirely). This means that an entirely separate data structure is built at initialization time from the existing network nodes. As this is done only once, during initialization, dynamically created modules will not have their visualization counterpart data structure.
The code which creates the corresponding objects is here.
The solution is to look up the NetworkNodeOsgVisualizer module in your AgentSpawner code, then create and add the corresponding data structures (NetworkNodeOsgVisualization objects). The needed methods (create and add) are there, but sadly they are protected, so you may need to modify the INET code and make them public in order to call them.
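For illustration, a rough sketch of how that could look inside AgentSpawner::handleMessage(), right after the module has been built and initialized. The module path and the exact create/add method names are assumptions based on the description above; check your INET version for the real signatures and include the corresponding INET visualizer header:
// Sketch only: in stock INET the create/add methods are protected, so this compiles
// only after making them accessible (e.g. public).
auto *nodeVisualizer = check_and_cast<inet::visualizer::NetworkNodeOsgVisualizer *>(
        getModuleByPath("visualizer.osgVisualizer.networkNodeVisualizer"));  // assumed path
auto *nodeVis = nodeVisualizer->createNetworkNodeVisualization(module);
nodeVisualizer->addNetworkNodeVisualization(nodeVis);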
I have a C++ CLR/CLI project and I wonder how to embed a localized satellite dll into my exe application. I found similar solutions, but they are for C# projects, which have a project structure quite different from mine.
Is it possible to embed it directly into the binary?
By the way, I'm getting issues with namespaces; it seems my custom namespace is not linked to my localized resource file.
I've been searching for hours for a solution for a C++ CLR/CLI project, which is quite different from a C# project: options like Build Action and Custom Tool Namespace that C# projects offer do not exist in a CLR/CLI project. This matters especially if you have changed namespaces, because then you have to use the Resource Logical Name instead. Here's my answer on how to solve the namespace issues; this also works for localized resource files linked to satellite dlls.
After your localized satellite dll is generated, include it in your project as a Compiled Managed Resource; you can set that by opening the file's properties and setting its Item Type. In projects such as C# you won't find that option, only something similar like "Embedded Resource"; anyway, this is intended for C++ CLR/CLI projects only. If you have changed namespaces, don't forget to set the Resource Logical Name of the respective resource file.
The next step is to write some code to embed that dll into our exe application; here's a good way to do that.
Since C++ CLR/CLI doesn't support lambda expressions, we have to do it this way:
private: System::Reflection::Assembly^ currentDomainAssemblyResolve(System::Object^ sender, System::ResolveEventArgs^ args) {
    System::Reflection::AssemblyName^ assemblyName = gcnew System::Reflection::AssemblyName(args->Name);
    System::String^ resourceName = assemblyName->Name + ".dll";
    System::IO::Stream^ stream = System::Reflection::Assembly::GetExecutingAssembly()->GetManifestResourceStream(resourceName);
    if (stream == nullptr)
        return nullptr; // not one of our embedded dlls; let the default resolution continue
    array<System::Byte>^ assemblyData = gcnew array<System::Byte>((int) stream->Length);
    try {
        stream->Read(assemblyData, 0, assemblyData->Length);
    } finally {
        delete stream; // disposes the stream
    }
    return System::Reflection::Assembly::Load(assemblyData);
}
Usage:
//Put it in your constructor before InitializeComponent()
MyClass(void) {
    AppDomain::CurrentDomain->AssemblyResolve += gcnew System::ResolveEventHandler(this, &MyNameSpace::MyClass::currentDomainAssemblyResolve);
    InitializeComponent();
}
So now separate satellite dll files are no longer necessary to load your localized resources.
Use a free application packer to bundle files into a single exe.
https://enigmaprotector.com/en/aboutvb.html
This one is free; I use it and it works very well for me.
I am trying to build a NuGet package via the CoApp tool for C++.
The package needs to provide 3 include folders to any cpp file that is compiled using it.
So, I want an internal include structure as following :
/build/native/include/lib1,
/build/native/include/lib2,
/build/native/include/lib3
My question: how do I add several include folders under /build/native/include/?
I tried:
Multiple blocks of (varying lib1, lib2, lib3):
nestedInclude +=
{
    #destination = ${d_include}lib1;
    ".\lib1\**\*.hpp", ".\lib1\**\*.h"
};
Multiple blocks of (varying lib1, lib2, lib3):
nestedInclude
{
    #destination = ${d_include}lib1;
    ".\lib1\**\*.hpp", ".\lib1\**\*.h"
};
but it seems CoApp accumulates the .h/.hpp files across the blocks (depending on whether the += operator is used or not) and, at the end, adds all of them to the last #destination tag value. So I get a single entry: /build/native/include/lib3
The destination is overwritten in your example, and therefore you get everything flat in the last given address. To handle this you can instead create multiple nested include sections:
nested1Include: {
    #destination = ${d_include}lib1;
    ".\lib1\**\*.hpp", ".\lib1\**\*.h"
}
nested2Include: {
    #destination = ${d_include}lib2;
    ".\lib2\**\*.hpp", ".\lib2\**\*.h"
}
I've just hit the same issue, and Gorgar's answer set me on the right track, thank you. But I do have one additional piece of information. I only had one underlying directory, and in that case CoApp still flattened everything. The trick is to make it think it has two, even if it doesn't, like this:
include1: {
    #destination = ${d_include}NativeLogger;
    "include\NativeLogger\*.h"
};

// The use of a second include spec here which doesn't actually address any files
// is to force CoApp to create the substructure of the first include. There is some
// discussion on the net about bugginess related to include structures, but this
// seems to fix it.
include2: { include\* };
I hope someone could help me address this fundamental problem that I have been trying to tackle for the last two weeks.
I have a solution that contains 4 projects plus some libraries that the projects depend on. In each of these projects, a copy of the logic.cpp file has been included; it contains a long list of logic which in pseudocode looks like this:
BOOL myLogic() {
    if (...)
    {
        switch (...)
        {
        case 1:
            doA();
            break;
        case 2:
            doB();
            break;
        // ...
        case 20:
            doSomething();
            break;
        }
    }
    return TRUE;
}
Project #1 generates the exe of the tool, while project #2 generates the dll version of the tool I'm building; the other 2 projects act as utility files for my tool. As you can see, there are about 20 cases that the logic can run into, and it is pretty massive.
So my problem now is that all of this source code is compiled into my single exe or dll, even when some of these cases may never be reached in certain deployment scenarios. What I want to achieve is to break up this switch-case and compile 20 different sets of exe and dll, so that:
1) The application has a smaller footprint.
2) The sources are protected to a certain extent against reverse engineering.
Hence, I would like to seek advice from the community on how to go about solving this problem while still using Visual Studio's built-in compilation (I could build the 20 sets of exe and dll with "Build Solution").
Thank you and I appreciate any advice. Feel free to clarify if I have not been clear enough in my question.
Create a new project that compiles into a static library. In that project, create separate source (.cpp) files for all of the 20 functionalities. (Splitting into more source files is just for the sake of maintainability.) Split logic.cpp into the 20 separate files. If there are common code parts, you can create more source files to contain those parts.
Now create 2x20 new projects: 20 exe projects and 20 dll projects. Each of these projects depends on the static library project created in step 1, and each is nothing but a simple stub that calls exactly one of the functionalities from the common library.
When you build the solution, you will have 20 differently named executables and 20 differently named dlls, for each functionality. If dead code elimination is turned on in the linker, then none of the exes/dlls will contain code that is not required for the specific function.
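A minimal sketch of what one such stub pair could look like; the header name, doA() and the exported function name are placeholders for whatever the 20 functionalities are called in your code:
// FeatureA_exe project, main.cpp - links against the common static library
#include "logic_features.h"   // hypothetical header declaring doA(), doB(), ...

int main()
{
    doA();   // only doA() and whatever it pulls in ends up in this exe
    return 0;
}

// FeatureA_dll project, dllmain.cpp - exports just this one functionality
#include "logic_features.h"

extern "C" __declspec(dllexport) void runFeatureA()
{
    doA();
}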
What about some handwork?
Introduce some defines for your scenarios, or use standard ones like "_ISDLL",
and encase the cases :-) which you know cannot be reached in #ifdefs:
#ifdef _ISDLL
case x:
break;
#endif
I have 6 projects defined in my IDE.
EventHelper
ConfigParser
OfficeEventHandler
Messaging
LoggingAndPersistence
ScreenCamera
EventHelper has the entry point. The rest of the projects are DLLs which get absorbed by EventHelper.
Messaging and ConfigParser are used in every other DLL as well, so the code for loading those DLLs and accessing them is duplicated in all the modules (code redundancy).
// load the ConfigParser dll
dllHandle_parser = ::LoadLibrary(TEXT("ConfigParser.dll"));
if (!dllHandle_parser) {
    return;
}

// look up the exported factory function and create the parser
configParserClient_fctry = reinterpret_cast<configParser>(::GetProcAddress(dllHandle_parser, "getParserInstance"));
if (!configParserClient_fctry) {
    ::FreeLibrary(dllHandle_parser);
    return;
}
parser = configParserClient_fctry();
And similar code exists for Messaging.
My question is:
Is there a way I can have one DLL called ObjectFactory, to which I can give the name of the class to be created (at runtime, as a string)? Something like
ObjectFactory.getInstance("ConfigParser/Messaging") (Java-like Class.forName("className")).
Or, if that is not possible, what would be the suggested architecture?
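To make the idea concrete, here is a rough sketch of the kind of interface I have in mind, built on top of the same LoadLibrary/GetProcAddress calls as above (all class names, dll file names and exported factory names below are made up):
#include <windows.h>
#include <map>
#include <string>

// Hypothetical convention: every DLL exports a factory function with this signature.
typedef void* (*FactoryFn)();

class ObjectFactory {
public:
    // Maps a class name to a dll file and exported factory symbol, loads the dll
    // and calls the factory. Returns nullptr on any failure.
    static void* getInstance(const std::string& className)
    {
        static std::map<std::string, std::pair<std::string, std::string>> registry = {
            { "ConfigParser", { "ConfigParser.dll", "getParserInstance" } },
            { "Messaging",    { "Messaging.dll",    "getMessagingInstance" } },
        };

        auto it = registry.find(className);
        if (it == registry.end())
            return nullptr;

        HMODULE dll = ::LoadLibraryA(it->second.first.c_str());
        if (!dll)
            return nullptr;

        FactoryFn factory = reinterpret_cast<FactoryFn>(
            ::GetProcAddress(dll, it->second.second.c_str()));
        if (!factory) {
            ::FreeLibrary(dll);
            return nullptr;
        }

        return factory();   // caller casts the void* to the concrete interface it expects
    }
};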