How to use a DLL? - C++

I have a program/project in VS2008 that uses a third-party static lib. Now, it has come to my attention that I need to offer some APIs via a DLL. Apparently a third-party program will use my DLL via the APIs I provide.
Can anyone give me some direction as to what I need to do? Would I need to just create a DLL in VS2008 and copy and paste my method logic into the APIs that I provide?
Are there any potential issues I need to worry about?
Thank you

I suggest you check out this MSDN tutorial on creating & using DLLs.
There are unfortunately many potential issues to think about. A very brief and by no means complete list of the ones that pop into my head:
You need to be aware of potential errors passing CRT objects across DLL boundaries
If you allocate objects in one module and deallocate them in the other, your DLL and the client code must link to the same CRT
It is very important that you keep the interface separate from the implementation in your DLL's header files. This means you often can't do things like use std::string as function parameters, or even as private member variables.
If I think of more or find more links, I'll add them.
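As a concrete starting point, here is a minimal sketch of what such an exported API header could look like. The MYAPI_EXPORTS macro and the function names are hypothetical; the point is that keeping the boundary to a plain C-style interface sidesteps most of the CRT and std::string issues listed above.

    // myapi.h -- hypothetical public header shipped alongside the DLL
    #pragma once

    #ifdef MYAPI_EXPORTS               // define this only in the DLL project itself
    #define MYAPI __declspec(dllexport)
    #else
    #define MYAPI __declspec(dllimport)
    #endif

    extern "C" {                       // C linkage: no C++ name mangling at the boundary

    // Only plain types cross the boundary; no std::string, no CRT objects.
    MYAPI int MyApiComputeThing(int input);

    // The caller supplies the buffer, so allocation and deallocation stay in one module.
    MYAPI int MyApiGetDescription(char* buffer, int bufferSize);

    }  // extern "C"

In the DLL project you would define MYAPI_EXPORTS and implement these functions by forwarding to your existing logic, rather than copy-pasting it.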

Related

Should I use a Static Library or DLL if I'm creating an API?

I'm creating an API to make it easier to find specifications about your system (CPU, GPU, etc.) using C++. This API will be distributed using GitHub.
In this scenario, which type of library should I use: Static or DLL (Dynamic)? Also, what are the pros and cons of each?
It depends on a few things, but like in the previous comments, you can leave the choice to the user.
A DLL is easier to integrate since the user just has to do a LoadLibrary() to start using it.
The problem with a DLL is that if you compiled it without statically linking the default libraries, and you don't provide the redistributables, the user will get frustrated by SxS (side-by-side) problems. It may happen that there are no SxS issues at all, but you never know.
If you give out a static .lib file, it is possible that the user's compilation options will conflict with yours, unless you only use vanilla options.
All in all, at the end of the day, both options are viable and similar. Just depends on how the user wants to use your API.
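If you do leave the choice to the user, a common pattern is a single header whose export macro adapts to how the library is built and consumed. A sketch, with hypothetical macro and function names:

    // sysinfo_api.h -- hypothetical header serving both the static and the DLL build
    #pragma once

    #if defined(SYSINFO_STATIC)
    #define SYSINFO_API                       // static lib: no decoration needed
    #elif defined(SYSINFO_BUILD_DLL)
    #define SYSINFO_API __declspec(dllexport) // building the DLL itself
    #else
    #define SYSINFO_API __declspec(dllimport) // consuming the DLL
    #endif

    SYSINFO_API int GetCpuCoreCount();
    SYSINFO_API int GetTotalMemoryMb();

The same sources then compile into either flavour; the user picks by defining SYSINFO_STATIC or by linking against the import library.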

Do I need to distribute a header file and a lib file with a DLL?

I'm updating a DLL for a customer and, due to corporate politics - among other things - my company has decided to no longer share source code with the customer.
Previously, I assume they had all the source and imported it as a VC++6 project. Now they will have to link to the pre-compiled DLL. I would imagine, at minimum, that I'll need to distribute the *.lib file with the DLL so that DLL entry-points can be defined. However, do I also need to distribute the header file?
If I can get away with not distributing it, how would the customer go about importing the DLL into their code?
Yes, you will need to distribute the header along with your .lib and .dll.
Why?
At least two reasons:
because C++ needs to know the return type and arguments of the functions in the library (roughly speaking, most compilers use name mangling to map the C++ function signature to the library entry point).
because if your library uses classes, the C++ compiler needs to know their layout to generate code in the library client (e.g. how many bytes to put on the stack for parameter passing).
Additional note: If you're asking this question because you want to hide implementation details from the headers, you could consider the pimpl idiom. But this would require some refactoring of your code and could also have some consequences in terms of performance, so consider it carefully.
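For illustration, a minimal sketch of the pimpl idiom mentioned above, with hypothetical class names. The public header exposes only a forward-declared implementation pointer, so the layout of the real data members never leaks to the client:

    // widget.h -- public header shipped to the customer
    #pragma once

    #ifdef WIDGET_EXPORTS
    #define WIDGET_API __declspec(dllexport)
    #else
    #define WIDGET_API __declspec(dllimport)
    #endif

    class WIDGET_API Widget {
    public:
        Widget();
        ~Widget();
        int Frobnicate(int value);
    private:
        class Impl;      // defined only inside the DLL
        Impl* m_impl;    // clients never see the real members
    };

    // widget.cpp -- compiled into the DLL only
    #include "widget.h"
    #include <string>

    class Widget::Impl {
    public:
        Impl() : counter(0) {}
        std::string name;    // free to use CRT/STL types here, hidden from clients
        int counter;
    };

    Widget::Widget() : m_impl(new Impl) {}
    Widget::~Widget() { delete m_impl; }   // new and delete both stay inside the DLL
    int Widget::Frobnicate(int value) { return m_impl->counter += value; }

Because allocation and deallocation both happen inside the DLL, this also avoids cross-module heap problems.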
However, do I also need to distribute the header file?
Yes. Otherwise, your customers will have to manually declare the functions themselves before they can use it. As you can imagine, that will be very error prone and a debugging nightmare.
In addition to what others explained about the header/LIB file, here is a different perspective.
The customer will anyway be able to reverse-engineer the DLL with basic tools such as Dependency Walker, to find out what system DLLs your DLL is using and what functions are being used by your DLL (for example, some function from AdvApi32.dll).
If you want your DLL to be obscured, your DLL must:
Load all custom DLLs dynamically (and if not possible, do the next thing anyhow)
Call GetProcAddress on all functions you want to call (OpenProcessToken from ADVAPI32.DLL, for example)
This way, at least Dependency Walker (without tracing) won't be able to find what functions (or DLLs) are being used. You can load the functions of a system DLL by ordinal rather than by name, so it becomes more difficult to reverse-engineer by text search in the DLL.
Debuggers will still be able to debug your DLL (among other tools) and reverse engineer it. You need to find techniques to prevent debugging the DLL. For example, the very basic API is IsDebuggerPresent. Other advanced approaches are available.
Why did I say all this? Well, if you intend not to deliver the header/LIB, the customer will still be able to find the exported functions and use them. You, as a DLL provider, must also provide the programming elements with it. If you must hide, then hide it totally.
One alternative you could use is to ship only the DLL and have the customer load it dynamically using LoadLibrary() + GetProcAddress(), although you still need to let your customer know the signatures of the functions in the DLL.
More detailed sample here:
Dynamically load a function from a DLL
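In the meantime, a minimal sketch of that dynamic-loading pattern from the customer's side; the DLL name and the exported function ComputeThing are hypothetical, and the vendor still has to document the signature:

    #include <windows.h>
    #include <iostream>

    // Signature of the hypothetical exported function, as documented by the vendor.
    typedef int (__cdecl *ComputeThingFn)(int);

    int main()
    {
        HMODULE dll = LoadLibraryW(L"vendor.dll");
        if (!dll) { std::cerr << "could not load vendor.dll\n"; return 1; }

        ComputeThingFn computeThing = reinterpret_cast<ComputeThingFn>(
            GetProcAddress(dll, "ComputeThing"));
        if (computeThing)
            std::cout << computeThing(42) << '\n';

        FreeLibrary(dll);
        return 0;
    }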

Can changes to a DLL be made while keeping compatibility with pre-compiled executables?

We have a lot of executables that reference one of our DLLs. We found a bug in one of our DLLs and don't want to have to re-compile and redistribute all of our executables to fix it.
My understanding is that dlls will keep their compatibility with their executables so long as you don't change anything in the header file. So no new class members, no new functions, etc... but a change to the logic within a function should be fine. Is this correct? If it is compiler specific, please let me know, as this may be an issue.
Your understanding is correct. So long as you change the logic but not the interface then you will not run into compatibility issues.
Where you have to be careful is if the interface to the DLL is more than just the function signatures. For example if the original DLL accepted an int parameter but the new DLL enforced a constraint that the value of this parameter must be positive, say, then you would break old programs.
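To make that concrete, a small hedged sketch with hypothetical names: the exported declaration stays identical, only the body changes, so executables built against the old import library keep working.

    // Declaration in the shipped header -- must NOT change between versions.
    extern "C" __declspec(dllexport) double AverageOf(const double* values, int count);

    // Old, buggy body inside the DLL (for comparison):
    //   for (int i = 0; i <= count; ++i)   // off-by-one read past the array
    //       sum += values[i];

    // Fixed body -- same signature, same export, new logic.
    extern "C" double AverageOf(const double* values, int count)
    {
        double sum = 0.0;
        for (int i = 0; i < count; ++i)      // fixed: was i <= count
            sum += values[i];
        return sum / count;
    }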
This will work. As long as the interface to the DLL remains the same, older executables can load it and use it just fine. That being said, you're starting down a very dangerous road. As time goes by and you patch more and more DLLs, you may start to see strange behaviour on customer installations that is virtually impossible to diagnose. This arises from unexpected interactions between different versions of your various components. Historically, this problem was known as DLL hell.
In my opinion, it is a much better idea to rebuild, retest, and redistribute the entire application. I would even go further and suggest that you use application manifests to ensure that your executables can only work with specific versions of your DLLs. It may seem like a lot of work now, but it can really save you a lot of headaches in the future.
It depends
In theory, yes: if you load the DLL with LoadLibrary and haven't changed the interface, you should be fine.
If, on the other hand, you link with the .dll file using a .lib import stub, there is no guarantee it will work.
That is one of the reasons why COM was invented.

Using new Windows features with fallback

I've been using dynamic libraries and GetProcAddress stuff for quite some time, but it always seems a tedious, IntelliSense-hostile, and ugly way to do things.
Does anyone know a clean way to import new features while staying compatible with older OSes?
Say I want to use an XML library which is a part of Vista. I call LoadLibraryW and then I can use the functions if the returned handle is non-null.
But I really don't want to go the typedef void (*PFNFOOOBAR)(int, int, int); and PFNFOOOBAR foo = reinterpret_cast<PFNFOOOBAR>(GetProcAddress(GetModuleHandle(), "somecoolfunction")); route, all that times 50.
Is there a non-hackish solution with which I could avoid this mess?
I was thinking of adding coolxml.lib in the project settings, then including coolxml.dll in the delay-load DLL list, and maybe copying the few function signatures I will use into the needed file. Then checking the LoadLibraryW return for non-null, and if it's non-null, branching to the Vista code path like in regular program flow.
But I'm not sure if LoadLibrary and delay-load can work together and if some branch prediction will not mess things up in some cases.
Also, I'm not sure if this approach will work, and if it won't cause problems after upgrading to the next SDK.
IMO, LoadLibrary and GetProcAddress are the best way to do it.
(Make some wrapper objects which take care of that for you, so you don't pollute your main code with that logic and ugliness.)
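For illustration, one possible shape for such a wrapper; this is only a sketch, and the DLL/function names in the usage comment are hypothetical:

    #include <windows.h>

    // Resolves an optional API once and caches the pointer.
    // The module handle is intentionally kept loaded for the life of the process.
    template <typename FnPtr>
    class OptionalApi
    {
    public:
        OptionalApi(const wchar_t* dllName, const char* procName) : m_fn(0)
        {
            if (HMODULE mod = LoadLibraryW(dllName))
                m_fn = reinterpret_cast<FnPtr>(GetProcAddress(mod, procName));
        }
        bool available() const { return m_fn != 0; }
        FnPtr get() const      { return m_fn; }
    private:
        FnPtr m_fn;
    };

    // Usage, assuming a hypothetical Vista-only export SomeCoolFunction:
    //   typedef void (WINAPI *PFNSOMECOOL)(int, int, int);
    //   OptionalApi<PFNSOMECOOL> someCool(L"coolxml.dll", "SomeCoolFunction");
    //   if (someCool.available())
    //       someCool.get()(1, 2, 3);   // new-OS path
    //   else
    //       ;                          // fallback for older Windows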
DelayLoad brings with it security problems (see this OldNewThing post) (edit: though not if you ensure you never call those APIs on older versions of windows).
DelayLoad also makes it too easy to accidentally depend on an API which won't be available on all targets. Yes, you can use tools to check which APIs you call at runtime but it's better to deal with these things at compile time, IMO, and those tools can only check the code you actually exercise when running under them.
Also, avoid compiling some parts of your code with different Windows header versions, unless you are very careful to segregate code and the objects that are passed to/from it.
It's not absolutely wrong -- and it's completely normal with things like plug-in DLLs where two entirely different teams probably worked on the two modules without knowing what SDK version each other targeted -- but it can lead to difficult problems if you aren't careful, so it's best avoided in general.
If you mix header versions you can get very strange errors. For example, we had a static object which contained an OS structure which changed size in Vista. Most of our project was compiled for XP, but we added a new .cpp file whose name happened to start with A and which was set to use the Vista headers. That new file then (arbitrarily) became the one which triggered the static object to be allocated, using the Vista structure sizes, but the actual code for that object was built using the XP structures. The constructor thought the object's members were in different places to the code which allocated the object. Strange things resulted!
Once we got to the bottom of that we banned the practice entirely; everything in our project uses the XP headers and if we need anything from the newer headers we manually copy it out, renaming the structures if needed.
It is very tedious to write all the typedef and GetProcAddress stuff, and to copy structures and defines out of headers (which seems wrong, but they're a binary interface so not going to change) (don't forget to check for #pragma pack stuff, too :(), but IMO that is the best way if you want the best compile-time notification of issues.
I'm sure others will disagree!
PS: Somewhere I've got a little template I made to make the GetProcAddress stuff slightly less tedious... Trying to find it; will update this when/if I do. Found it, but it wasn't actually that useful. In fact, none of my code even used it. :)
Yes, use delay loading. That leaves the ugliness to the compiler. Of course you'll still have to ensure that you're not calling a Vista function on XP.
Delay loading is the best way to avoid using LoadLibrary() and GetProcAddress() directly. Regarding the security issues mentioned, about the only thing you can do about that is use the delay load hooks to make sure (and optionally force) the desired DLL is being loaded during the dliNotePreLoadLibrary notification using the correct system path, and not relative to your app folder. Using the callbacks will also allow you to substitute your own fallback implementations in the dliFailLoadLib/dliFailGetProc notifications when the desired API function(s) are not available. That way, the rest of your code does not have to worry about platform differences (or very little).
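A hedged sketch of such a hook; the DLL and function names are hypothetical, and note that newer toolsets declare the hook pointers const while older ones use plain globals, so adjust the last two lines to your delayimp.h:

    #include <windows.h>
    #include <delayimp.h>
    #include <string.h>

    // Fallback used when the delay-loaded export is missing on this OS version.
    static int WINAPI FallbackSomeCoolFunction(int, int, int) { return 0; }

    static FARPROC WINAPI MyDelayHook(unsigned dliNotify, PDelayLoadInfo pdli)
    {
        if (dliNotify == dliNotePreLoadLibrary &&
            _stricmp(pdli->szDll, "coolxml.dll") == 0)
        {
            // Load from the system directory, not relative to the app folder.
            wchar_t path[MAX_PATH];
            if (GetSystemDirectoryW(path, MAX_PATH))
            {
                wcscat_s(path, L"\\coolxml.dll");
                return reinterpret_cast<FARPROC>(LoadLibraryW(path));
            }
        }
        else if (dliNotify == dliFailGetProc &&
                 pdli->dlp.fImportByName &&
                 strcmp(pdli->dlp.szProcName, "SomeCoolFunction") == 0)
        {
            // Substitute our own implementation when the API is unavailable.
            return reinterpret_cast<FARPROC>(FallbackSomeCoolFunction);
        }
        return 0;
    }

    // Registration globals recognised by the delay-load helper.
    const PfnDliHook __pfnDliNotifyHook2  = MyDelayHook;
    const PfnDliHook __pfnDliFailureHook2 = MyDelayHook;

You still need to link delayimp.lib and list coolxml.dll under the linker's delay-loaded DLLs for any of this to kick in.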

C++ How to communicate between DLLs of an application?

I have an application and two DLLs. Both libraries are loaded by the application. I don't have the source of the application, but I do have the source of the libs. I want to instantiate a class in lib A and use this instance in lib B as well.
How do I do this? I'm not sure, but I think that both libs are used in different threads of the application.
I have no idea where I have to search for a solution.
No. Think of a DLL as just a normal library. Both can be used from a single thread.
If you want to use a class A in library X, you must pass a pointer/reference to it. The same applies to library Y. This way both libraries can work with the same class/data.
Two dll's loaded into the same process is a fairly simple setup.
You just need to be careful with module scope, which will be the same as DLL scope.
E.g. each DLL will have its own instances of any static objects.
Next you need to understand how to reference functions/classes across the boundary and what sort of types are safe to use as arguments.
Have a look on any documentation for dllexport and dllimport - there are several useful questions on this site if you search with those terms.
You have to realize that even though your DLLs are used by the host application, nothing prevents your DLLs from using other DLLs themselves. So in your DLL A you could load and use your DLL B and call its functions. When DLL A is unloaded, free DLL B as well. DLLs are reference counted, so your DLL A will have a reference count of 1 (the host application) and your DLL B a count of 2 (the host application and DLL A). You will not have two instances of DLL B loaded in the same process.
Named pipes might be the solution to your problems.
If you're targeting Windows, you can check this reference.
[EDIT] Didn't see you wanted to work on the same instance. In that case you need shared memory; however, you really have to know what you're doing, as it's quite dangerous :)
A better solution could be to apply OO networking principles to your libs and communicate with, say, CORBA, using interprocess middleware or the 127.0.0.1 loopback interface (firewalls will need to allow this).
Seems the simple solution would be to include in some initialization function of library A (or in DllMain if needed) a simple call to a function in library B passing a pointer to the common object. The only caveat is that you must ensure the object is deleted by the same DLL that newed it to avoid problems with the heap manager.
If these DLLs are in fact used in different threads, you may have to protect access to said data structure using some kind of mutex.
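A minimal sketch of that initialization-function approach, with hypothetical names; the host application does not need to change, because DLL A hands the pointer to DLL B itself:

    // shared.h -- interface compiled into both DLLs; virtuals only, no data members
    struct SharedState
    {
        virtual void Increment() = 0;
        virtual int  Value() const = 0;
    protected:
        ~SharedState() {}            // deletion is DLL A's job, never DLL B's
    };

    // --- inside DLL B ------------------------------------------------------
    static SharedState* g_state = 0;

    extern "C" __declspec(dllexport) void LibB_SetSharedState(SharedState* state)
    {
        g_state = state;             // DLL B only borrows the pointer
    }

    // --- inside DLL A ------------------------------------------------------
    #include <windows.h>

    class CounterState : public SharedState
    {
    public:
        CounterState() : m_value(0) {}
        virtual void Increment()   { ++m_value; }
        virtual int  Value() const { return m_value; }
    private:
        int m_value;
    };

    void InitLibA()                  // called from an exported init function (or DllMain)
    {
        static CounterState instance;                        // lives in, and is freed by, DLL A
        if (HMODULE libB = GetModuleHandleW(L"libB.dll"))     // already loaded by the host app
        {
            typedef void (*SetStateFn)(SharedState*);
            SetStateFn setState = reinterpret_cast<SetStateFn>(
                GetProcAddress(libB, "LibB_SetSharedState"));
            if (setState)
                setState(&instance);                          // both DLLs now share one object
        }
    }

If the DLLs really do run on different threads, wrap Increment/Value with a critical section or mutex as noted above.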