I am trying to share some data across DLLs in a project which has an extremely complicated dependency structure (numerous DLLs).
I want to be able to associate a key with some data in one part of the application, and then extract that data by supplying the appropriate key in some other part of the app. In a way, one could say that I am looking for something similar to Java's System.setProperty()/getProperty().
I was sure that the Process APIs would give me some access to a process-wide buffer, but I had no luck. Any ideas?
(I know that the clean solution is to introduce a new DLL and to link it properly to the existing DLLs. Unfortunately, this type of solution is beyond the mandate of my team).
You don't need fancy APIs for that. Windows has a much older API precisely for this kind of thing. These things are known as "atoms". You'd use functions such as AddAtom and FindAtom. By default, atoms are process-wide.
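For illustration, a minimal sketch of the local (process-wide) atom table functions named above; the string used here is made up. Note the limitation: an atom only interns a string, so both sides either have to agree on the exact string or pass the 16-bit ATOM value around some other way.

```cpp
#include <windows.h>

// In one DLL: register a marker string in the process-local atom table.
ATOM RegisterMarker()
{
    return AddAtomW(L"MyApp.ConfigLoaded");   // returns 0 on failure
}

// In another DLL: check whether that string has been registered.
bool MarkerExists()
{
    return FindAtomW(L"MyApp.ConfigLoaded") != 0;
}

// Turn an ATOM back into its string.
void DumpAtom(ATOM a)
{
    wchar_t name[256] = {};
    if (GetAtomNameW(a, name, 256) > 0)
        OutputDebugStringW(name);
}
```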
To be clear: there is one EXE with multiple DLLs, so only one process but multiple modules. You aren't looking for inter-process communication.
In answer, I see two strategies:
use Windows API atoms, which are somewhat limited (basically string data only) but can work within or between processes.
write a DLL which contains the SetProperty/GetProperty functionality you describe. You don't have to recompile ALL the other DLLs (which is presumably what is beyond your team's mandate); you only need to recompile those DLLs which actually use the new features (which is presumably within your team's power). So this seems a direct and powerful solution; a sketch follows below.
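A sketch of what that one small DLL might export. The names follow the question's setProperty/getProperty analogy; the implementation details (a mutex-guarded map of wide strings) are my own assumptions.

```cpp
// PropStore.dll - hypothetical process-wide key/value store.
// Define PROPSTORE_EXPORTS when building the DLL itself.
#include <map>
#include <mutex>
#include <string>
#include <stdlib.h>
#include <wchar.h>

#ifdef PROPSTORE_EXPORTS
#  define PROPSTORE_API __declspec(dllexport)
#else
#  define PROPSTORE_API __declspec(dllimport)
#endif

namespace {
    std::mutex g_lock;
    std::map<std::wstring, std::wstring> g_props;   // one copy per process
}

extern "C" PROPSTORE_API void SetProperty(const wchar_t* key, const wchar_t* value)
{
    std::lock_guard<std::mutex> guard(g_lock);
    g_props[key] = value;
}

extern "C" PROPSTORE_API bool GetProperty(const wchar_t* key,
                                          wchar_t* buffer, size_t bufferLen)
{
    std::lock_guard<std::mutex> guard(g_lock);
    std::map<std::wstring, std::wstring>::const_iterator it = g_props.find(key);
    if (it == g_props.end())
        return false;
    wcsncpy_s(buffer, bufferLen, it->second.c_str(), _TRUNCATE);
    return true;
}
```

Only the modules that actually call SetProperty/GetProperty need to link against this DLL; everything else is left untouched.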
I use C++ to address the following task:
I'd like to get the list of all API functions used by a particular process. It can be any Windows 7 process, 32-bit or 64-bit, including system processes.
So far, the only solution I see is to create a kernel driver that intercepts all possible APIs, listen for some time, and check whether the particular process called them. That won't guarantee a full list of the APIs used by that process, but at least it will give me some of them.
This method looks dangerous and ineffective.
Is there any simpler way to deal with this task? Is there a way to get the full list of APIs used by the process, not just the ones called during some period of time?
Thank you.
No, it's not possible, at least in any meaningful or general sense.
I can write a program that (for example) takes interactive input from the user in the form of a string, then uses GetProcAddress to find the address of a function by that name, and invokes that function.
Note that although using interactive input to read function names is fairly unusual, reading them from some external file is quite a bit more common.
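As a small illustration of that point (this example is mine, not from the question or answer): the function called below is not knowable by static analysis, because its name only exists at run time.

```cpp
#include <windows.h>
#include <iostream>
#include <string>

int main()
{
    std::string name;
    std::cout << "Name a kernel32 export that takes no arguments (e.g. GetTickCount): ";
    std::cin >> name;

    // Look the function up by the name the user just typed and call it.
    typedef DWORD (WINAPI *NoArgFn)();
    NoArgFn fn = reinterpret_cast<NoArgFn>(
        GetProcAddress(GetModuleHandleW(L"kernel32.dll"), name.c_str()));

    if (fn)
        std::cout << name << "() returned " << fn() << "\n";
    else
        std::cout << "No such export.\n";
    return 0;
}
```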
Also note that a kernel driver isn't really the correct place to look either. If you want to do this, you want to intercept at the level of the DLLs used by the program.
One possibility is to create a "shadow" DLL for every DLL to which the program links statically. Then if it calls LoadLibrary/GetProcAddress, you can dynamically intercept those calls to determine what functions it's calling in them, and so on.
This still won't get an absolute result, since it could (as outlined above) get data at runtime to find functions in one execution that it doesn't use in another.
If you want an existing tool to do (approximately) that, consider depends.exe. It's been around for quite a while, and works quite well.
I am halfway through the development of an MFC Form application and I know I will have to publish it in multiple languages. I plan on using satellite DLLs in order to achieve this goal. I am using Visual Studio 2012, by the way.
I have done some reading but I'm still quite a neophyte on the subject. In order to create a satellite DLL containing a Form in another language, I have to copy the Form into the resource file of a new DLL project, give a specific name to the DLL, add the /NOENTRY option to the linker, and then translate the Form.
The thing is, the Form may be subject to change in the near future (move/delete/add controls). If I create the satellite DLLs right now, I fear I will have to make the same modifications in every single DLL whenever I alter the structure of a Form.
My question is: should I wait until I have completed my application and then create the satellite DLLs, or is there a mechanism in VS or elsewhere that will allow me to make the modifications to my DLLs in a single place?
You didn't mention whether you are using .NET: is your C++ application managed? I would suggest writing your application first, but designing it such that resource-only/satellite DLLs can easily be plugged in later.
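As a rough sketch of what "plugged in later" can look like in MFC, assuming the satellite DLL is a resource-only DLL built with /NOENTRY (the DLL name here is a placeholder; a real app would pick it based on the user's UI language):

```cpp
#include <afxwin.h>

// Call this early in InitInstance, before any dialogs/forms are created.
void LoadLanguageResources()
{
    // Try the satellite DLL for the chosen language; if it is missing,
    // MFC keeps using the resources built into the EXE.
    HINSTANCE hRes = ::LoadLibraryW(L"MyAppFRA.dll");
    if (hRes != NULL)
        AfxSetResourceHandle(hRes);   // MFC now loads dialogs/strings from the DLL
}
```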
I think I'd do a complete application in two languages, then translate into others.
As for why to do two first rather than just one: because this way, you're pretty much forced to keep the structure amenable to being translated into more. If you only do one the first time, no matter how careful you think you're being, it seems to be almost inevitable that you miss at least a few things, so you end up going back and rewriting to accommodate more languages. You'll probably do a little of that anyway, but doing two the first time reduces it quite a bit.
I like to inject DLLs into processes, because it lets me change certain values of a program.
The only downside of DLLs is that they are not very portable, and making them portable requires a lot of code.
I just wanted to know: is it possible to inject an application (into a process) that is stored in a resource, and then execute it later?
If so, what parts of the code would differ from injecting a DLL?
Your question is not tagged "Windows" but from the wording I'll still assume you refer to Windows.
Given the necessary access rights, it is possible to inject an executable into another process; the fork implementation in Cygwin is a proof of concept. Windows does not support anything like fork, at least not in the public API. Cygwin implements it by creating a new process and injecting its own image into that process (including all data). Reading the image from a resource instead and injecting that is essentially the same thing.
One of the differences (and difficulties) may be the image base, which for an executable is (normally) always the same under Win32. For a DLL it is usual to be rebased; for an executable it is not. On the other hand, if you want to inject code alongside an already existing process's code, the addresses you want may not be free.
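To make that last point concrete, here is a small sketch (my own illustration) of checking whether an executable's preferred image base is still unoccupied in a target process; the process handle needs PROCESS_QUERY_INFORMATION access:

```cpp
#include <windows.h>

bool IsRangeFree(HANDLE hProcess, void* preferredBase, SIZE_T imageSize)
{
    char* addr = static_cast<char*>(preferredBase);
    char* end  = addr + imageSize;
    MEMORY_BASIC_INFORMATION mbi;

    while (addr < end) {
        if (VirtualQueryEx(hProcess, addr, &mbi, sizeof(mbi)) != sizeof(mbi))
            return false;                      // query failed
        if (mbi.State != MEM_FREE)
            return false;                      // something already occupies this range
        addr = static_cast<char*>(mbi.BaseAddress) + mbi.RegionSize;
    }
    return true;
}
```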
I've been using dynamic libraries and GetProcAddress stuff for quite some time, but it always seems a tedious, IntelliSense-hostile, and ugly way to do things.
Does anyone know a clean way to import new features while staying compatible with older OSes?
Say I want to use an XML library which is a part of Vista. I call LoadLibraryW and then I can use the functions if the returned handle is non-null.
But I really don't want to go the typedef void* (*PFNFOOOBAR)(int, int, int); and PFNFOOOBAR foo = reinterpret_cast<PFNFOOOBAR>(GetProcAddress(GetModuleHandle(), "somecoolfunction")); route, all that times 50.
Is there a non-hackish solution with which I could avoid this mess?
I was thinking of adding coolxml.lib in the project settings, then including coolxml.dll in the delay-load DLL list, and maybe copying the few function signatures I will use into the needed file. Then I'd check whether the LoadLibraryW return value is non-null, and if it is, branch to the Vista code path as part of regular program flow.
But I'm not sure whether LoadLibrary and delay-loading can work together, and whether some branch prediction won't mess things up in some cases.
Also, I'm not sure whether this approach will work, and whether it won't cause problems after upgrading to the next SDK.
IMO, LoadLibrary and GetProcAddress are the best way to do it.
(Make some wrapper objects which take care of that for you, so you don't pollute your main code with that logic and ugliness.)
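For example, a wrapper along these lines keeps the typedef/GetProcAddress noise in one place. This is only a sketch; the DLL and function names are the placeholders from the question.

```cpp
#include <windows.h>

// Tiny RAII wrapper around LoadLibrary/GetProcAddress.
class DllModule {
public:
    explicit DllModule(const wchar_t* name) : h_(::LoadLibraryW(name)) {}
    ~DllModule() { if (h_) ::FreeLibrary(h_); }

    bool loaded() const { return h_ != NULL; }

    template <typename Fn>
    Fn get(const char* name) const {
        if (!loaded()) return NULL;
        return reinterpret_cast<Fn>(::GetProcAddress(h_, name));
    }

private:
    HMODULE h_;
    DllModule(const DllModule&);              // non-copyable
    DllModule& operator=(const DllModule&);
};

// Usage with the placeholder names from the question:
typedef HRESULT (WINAPI *PFNFOOOBAR)(int, int, int);

void Example()
{
    DllModule coolxml(L"coolxml.dll");
    PFNFOOOBAR foo = coolxml.get<PFNFOOOBAR>("somecoolfunction");
    if (foo)
        foo(1, 2, 3);       // Vista-or-later code path
    // else: fall back to whatever the pre-Vista behaviour is
}
```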
DelayLoad brings security problems with it (see this OldNewThing post) (edit: though not if you ensure you never call those APIs on older versions of Windows).
DelayLoad also makes it too easy to accidentally depend on an API which won't be available on all targets. Yes, you can use tools to check which APIs you call at runtime but it's better to deal with these things at compile time, IMO, and those tools can only check the code you actually exercise when running under them.
Also, avoid compiling some parts of your code with different Windows header versions, unless you are very careful to segregate code and the objects that are passed to/from it.
It's not absolutely wrong (and it's completely normal with things like plug-in DLLs, where two entirely different teams probably worked on the two modules without knowing which SDK version the other targeted), but it can lead to difficult problems if you aren't careful, so it's best avoided in general.
If you mix header versions you can get very strange errors. For example, we had a static object which contained an OS structure which changed size in Vista. Most of our project was compiled for XP, but we added a new .cpp file whose name happened to start with "A" and which was set to use the Vista headers. That new file then (arbitrarily) became the one which triggered the static object to be allocated, using the Vista structure sizes, but the actual code for that object was built using the XP structures. The constructor thought the object's members were in different places to the code which allocated the object. Strange things resulted!
Once we got to the bottom of that, we banned the practice entirely; everything in our project uses the XP headers, and if we need anything from the newer headers we copy it out manually, renaming the structures if needed.
It is very tedious to write all the typedef and GetProcAddress stuff, and to copy structures and defines out of headers (which seems wrong, but they're a binary interface so they're not going to change; don't forget to check for #pragma pack stuff, too :(), but IMO that is the best way if you want the best compile-time notification of issues.
I'm sure others will disagree!
PS: Somewhere I've got a little template I made to make the GetProcAddress stuff slightly less tedious... Trying to find it; will update this when/if I do. Found it, but it wasn't actually that useful. In fact, none of my code even used it. :)
Yes, use delay loading. That leaves the ugliness to the compiler. Of course you'll still have to ensure that you're not calling a Vista function on XP.
Delay loading is the best way to avoid using LoadLibrary() and GetProcAddress() directly. Regarding the security issues mentioned, about the only thing you can do about that is use the delay load hooks to make sure (and optionally force) the desired DLL is being loaded during the dliNotePreLoadLibrary notification using the correct system path, and not relative to your app folder. Using the callbacks will also allow you to substitute your own fallback implementations in the dliFailLoadLib/dliFailGetProc notifications when the desired API function(s) are not available. That way, the rest of your code does not have to worry about platform differences (or very little).
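To sketch what that looks like (the DLL name is the placeholder from the question; whether the hook pointers must be defined const depends on your CRT version, so treat this as an outline rather than copy-paste code):

```cpp
// Build with /DELAYLOAD:coolxml.dll and link against delayimp.lib.
#include <windows.h>
#include <delayimp.h>
#include <string.h>
#include <wchar.h>

static FARPROC WINAPI MyDelayHook(unsigned dliNotify, PDelayLoadInfo pdli)
{
    switch (dliNotify) {
    case dliNotePreLoadLibrary:
        // Load the DLL from the system directory, never relative to the app folder.
        if (_stricmp(pdli->szDll, "coolxml.dll") == 0) {
            wchar_t path[MAX_PATH];
            if (GetSystemDirectoryW(path, MAX_PATH)) {
                wcscat_s(path, L"\\coolxml.dll");
                return reinterpret_cast<FARPROC>(LoadLibraryW(path));
            }
        }
        break;

    case dliFailGetProc:
        // Could return a pointer to our own fallback implementation here
        // when the API is missing on this version of Windows.
        break;
    }
    return NULL;   // fall through to the default delay-load behaviour
}

// Recent CRTs declare these hook pointers const, hence the definitions below;
// with older CRTs you assign the (non-const) pointers at run time instead.
extern "C" const PfnDliHook __pfnDliNotifyHook2  = MyDelayHook;
extern "C" const PfnDliHook __pfnDliFailureHook2 = MyDelayHook;
```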
How can I include my program's dependency DLLs inside the EXE file (so I only have to distribute that one file)? I am using C++, so I can't use ILMerge like I usually do for C#, but is there an easy way to automate this in Visual Studio?
I know this is possible (that's why installers work); I just need some help being pointed to the best way to do this.
Thank you for your time.
There are many problems with this approach. For one example, see this post from REAL Software. Their “REALbasic” product used to do this and had problems including:
When writing the DLLs out at run-time, it would trigger anti-virus warnings.
Problems with machines where the user doesn’t have write permissions or is low on disk space.
Their attempt to fix the problem caused more problems, including crashes. Eventually they relented and now distribute DLLs side-by-side with apps.
If you really need a single-EXE deployment, and can’t use an installer for some reason, the reliable way is to static-link all dependencies. This assumes that you have the correct .libs (and not just .libs that link in the DLL).
There exist two options, both of which are far from ideal:
write the DLL to a temporary file somewhere and load it from there
load the DLL into memory "by hand", i.e. allocate a memory block, copy the DLL image into it, then process relocations and resolve external references.
The downside of the first approach is described above by Nate. The second approach is possible, but complicated (it requires deep knowledge of certain low-level things) and doesn't allow the DLL code to access DLL resources (this is obvious: there's no file image of the DLL, so the OS doesn't know where to find the resources).
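For completeness, a minimal sketch of the first option; the resource ID is made up, and most error handling is omitted:

```cpp
#include <windows.h>

#define IDR_EMBEDDED_DLL 101   // hypothetical ID of the embedded DLL (RCDATA resource)

// Extract the embedded DLL to a temporary file and load it from there.
HMODULE LoadEmbeddedDll(HINSTANCE hExe)
{
    HRSRC res = FindResource(hExe, MAKEINTRESOURCE(IDR_EMBEDDED_DLL), RT_RCDATA);
    if (!res) return NULL;

    HGLOBAL block     = LoadResource(hExe, res);
    DWORD size        = SizeofResource(hExe, res);
    const void* bytes = LockResource(block);
    if (!bytes || size == 0) return NULL;

    wchar_t dir[MAX_PATH], path[MAX_PATH];
    if (!GetTempPathW(MAX_PATH, dir)) return NULL;
    if (!GetTempFileNameW(dir, L"dll", 0, path)) return NULL;   // creates an empty file

    HANDLE file = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return NULL;

    DWORD written = 0;
    WriteFile(file, bytes, size, &written, NULL);
    CloseHandle(file);

    return LoadLibraryW(path);   // remember to delete the file on shutdown
}
```

Note that this is exactly the pattern that triggers the anti-virus and write-permission problems Nate describes above.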
One more option usable in some scenarios: create a virtual disk whose contents are stored in your EXE file resources, and load the DLL from there. This is possible using our SolFS product (OS edition), but creation of the virtual disk itself requires use of kernel-mode drivers which must be written to disk before use.
Most installers use a zip file (or something similar) to hold whatever files are needed. When you run the installer, it decompresses the data and puts the individual files where needed (and typically adds registry entries, registers any COM controls it installed, etc.)