What's in SDL.dll?

I am new to SDL, and I am just curious why SDL uses both static and dynamic libraries. I mean, what functions are in SDL.dll, and why is it linked dynamically instead of statically? Thanks.

SDL.dll contains implementations of all of the functions that you use from SDL, such as SDL_Init() and SDL_SetVideoMode(). It's linked dynamically to allow drop-in replacement of the library with a newer version, without breaking compatibility with existing applications—the SDL interface does not change nearly as often as the implementation. Dynamic libraries provide decoupling between interface and implementation, at the cost of whatever it takes to load them dynamically: searching for a file, loading it, complaining if it's not available, and so on.
An application is more modular using dynamic libraries, and the executable will tend to remain small. It is possible to link statically against SDL, under which circumstances the size of your executable will include the (modest) size of the SDL library, and upgrading to a newer version of SDL will require recompiling your application.
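For illustration, here is a minimal sketch using the classic SDL 1.2 API the answer mentions; the source code is identical whether you link SDL statically or dynamically, only the build settings differ:

```cpp
#include <SDL.h>  // declarations only; the implementations live in SDL.dll
                  // (or in the static library, if you link that instead)

int main(int argc, char* argv[]) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        return 1;  // e.g. SDL could not initialize the video subsystem
    }
    // SDL 1.2: open a 640x480 window backed by a 32-bit software surface
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
    if (screen == NULL) {
        SDL_Quit();
        return 1;
    }
    SDL_Delay(2000);  // keep the window visible briefly
    SDL_Quit();
    return 0;
}
```

With dynamic linking, a missing or incompatible SDL.dll is reported when the executable is loaded; with static linking the same code simply becomes part of your .exe.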

Related

How can a C++ dynamic library with a C++ interface not break ABI between different compiler versions?

Recently I have been using the Assimp library for my hobby 3D graphics engine project in order to load 3D models.
The library comes in the form of a DLL accompanied by an import library (LIB) and an EXP file (the latter only when building from the latest sources; I understand it holds information about the exported functions, similar to the LIB).
Now, the surprise part, which made me ask this question, is the following:
I was able to use the C++ interface of two different versions of the library correctly, without any build/link or run-time errors:
The pre-built library, release 3.1.1, which was built and is dependent on an older runtime (MSVCP110.DLL, MSVCR110.DLL) (which I later decided to change to my own build since there was a linking error in that binary build)
My own source build made with the VS 2013 compiler, depending on MSVCP120.DLL and VCRUNTIME120.DLL
I was able to use both of these library binaries with my executable binary, when built with either:
VS 2013 compiler
VS 2015 compiler
My surprise that using the library as described above worked without any errors comes from:
reading all over the internet about how C++ libraries and interfaces are not binary (ABI) compatible between different compiler versions, and thus cannot be used together with binaries built by other compilers. Exceptions: usage through C interfaces, which maintain ABI compatibility, and COM libraries (COM interfaces were designed to overcome the same issue).
Using binaries built with different compilers can raise problems because of function name mangling (maybe solved by the import library?!) and because dynamic memory allocation and freeing are done by different versions of the allocators/deallocators.
Can anyone provide some insight on this use case, on why I successfully managed to use a VS 2013 built dynamic library together with a VS 2015 toolset application without any of the above described problems? Also, the same applies when using the pre-built binary both with a VS 2013 build and a VS 2015 build of my 3D engine application.
Is it an isolated case or does the Assimp library itself take care of the problems and is implemented in such a way to ensure compatibility?
The actual usage of the library was pretty straightforward: I used a local stack variable of an Importer object (class) and from there on manipulated and read information from a bunch of structs; no other classes (I think)
As long as all the memory used in the DLL is allocated, freed, and maintained inside the DLL, and you don't pass standard library container objects across the interface boundary, you are completely safe.
Even std::string isn't allowed across DLL boundaries if the modules were built with different compilers, or even just link against different CRTs.
In fact, using plain interfaces (pure virtual classes) is safe across all VS C++ compilers. Using extern "C" is also safe.
So if the structures you exchange are just PODs, or contain only simple plain data members, and as long as allocation and destruction are done entirely inside the DLL, you are free to mix and use DLLs built with different compilers.
The best examples are the DLLs of the Windows OS itself. They use a clearly defined interface of simple data structures. Memory management and allocation (of windows, menus, GDI objects) is done via transparent handles. The DLLs are just black boxes for you.
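As a minimal sketch of the two safe patterns mentioned above (a pure virtual interface handed out through an extern "C" factory), with made-up names for illustration:

```cpp
// plugin_api.h -- the only header the application ever sees
struct IImporter {                      // pure virtual: no data members, no
    virtual const char* Name() = 0;     // inline code, a fixed vtable layout
    virtual void Release() = 0;         // destruction happens inside the DLL
protected:
    ~IImporter() {}                     // forbid `delete` from the application side
};

// extern "C" disables C++ name mangling, so the symbol can be found
// regardless of which compiler built the DLL. (A real DLL would also
// mark this __declspec(dllexport).)
extern "C" IImporter* CreateImporter();

// importer.cpp -- compiled into the DLL, possibly by a different compiler
class Importer : public IImporter {
    const char* Name() override { return "importer"; }
    void Release() override { delete this; }  // same CRT allocates and frees
};

extern "C" IImporter* CreateImporter() { return new Importer; }
```

Only the vtable layout and one C symbol cross the boundary; no std::string, no exceptions, and no memory ownership changes sides.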
I hope that covers all the points.

How do you properly use dynamic linking with Boost on Windows?

If I dynamically link against Boost libraries, I still have to copy the respective Boost DLLs into the folder of the executable for the program to work.
I installed Boost into the recommended path C:\local\boost_1_59_0
Also, taking redistribution into consideration, probably very few people will have Boost installed, and there isn't really a user-friendly redistributable package, like with the Visual C++ libraries.
So does it make more sense to just statically link Boost in order to save some time? I don't really see the benefit of dynamic linking for Boost (on Windows that is!).
Thank you for your tips.
You are correct that you'll have to distribute the dynamic libraries too, since not many people will have them. But dynamic libraries are not made just for that purpose.
They can be useful in case multiple applications have to use the library simultaneously.
For example, if your client has multiple applications using the Boost dynamic libraries, it makes sense to ship the libraries once, install them into a commonly accessible location, and let all those applications use them. This way, the individual sizes of those applications remain small.
Another use case could be your client simultaneously running multiple instances of a single executable which requires the library.
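For what it's worth, on MSVC the switch between the two is a single define, because Boost auto-links. A sketch, assuming one of the separately compiled Boost libraries (Boost.Filesystem here):

```cpp
// Without this define, MSVC auto-links against the static Boost libraries
// (libboost_*.lib). With it, the import libraries (boost_*.lib) are used
// instead, and the matching boost_*.dll files must sit next to the .exe
// (or on PATH) at run time.
#define BOOST_ALL_DYN_LINK
#include <boost/filesystem.hpp>

int main() {
    return boost::filesystem::exists("C:\\local\\boost_1_59_0") ? 0 : 1;
}
```

Note that BOOST_ALL_DYN_LINK is normally combined with the dynamic CRT (/MD), so the Boost DLLs and your executable share one runtime.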

Linux: C/C++ standard library static vs dynamic linking [duplicate]

Probably on any OS it is possible to link the C/C++ standard library statically or dynamically. On Windows I always prefer static builds, because they help avoid the "DLL hell" problem of different library versions being installed (or not installed) on a specific Windows version, edition, service pack, etc. Static linking makes software more portable and less dependent on what the end user has done to his operating system. (I have even seen end users SHIFT+DEL some DLLs in system32 without being able to explain why, and users claim my app contained a virus because it tried to download dynamically linked prerequisites from the official Microsoft website...) So, in my experience, static linking is usually better than dynamic linking on Windows.
However, I am new to Linux, so can anyone share their experience? My question is: what kind of linking (dynamic or static) is preferred on Linux, if we ignore the fact that dynamic linking saves memory and disk space, and if we plan to distribute the software with an automated installer? (Disk space and memory are cheap enough now that there is no reason to sacrifice the hours of work required to create a really good, portable installer just to win some megabytes of RAM or disk space.) Are there any Linux-specific issues with dynamic/static linking?
On Linux you normally have a package manager that ensures you only have one version of libraries installed. So there normally is no dll hell and no problem with linking dynamically. Linking dynamically is the standard way on Linux.
I'd say the answer depends on how you distribute the software.
If you package the software for a specific Linux distribution and version dynamic linking is usually preferred. You know which libraries to find on the system and you can specify dependencies.
However, if you want to distribute the software as a Linux binary that runs on "any" system (such as various games, or software like Matlab, for example), you will end up with the same DLL (or .so) hell problem as on Windows. You don't know which versions of which libraries are on the system. Thus, you will have to provide your own .so files or link statically.
The whole point of using dynamic linking is to reduce the size of executables and memory usage. If you ignore that, there is little left to discuss.
On the other hand, you mentioned saving memory and disk space. Saving disk space matters because when you want to distribute your app/program, you can't put a 2 GB app on the internet for download (for example, the OpenCV library is about 2.1 GB). The solution is to link dynamically and load only those modules which you actually need. This also enables efficient multitasking (only one copy of the module is created, and every program uses that same copy).
In particular:
For example, a media player application might originally be shipped with a codec that supports the mp3 file format. If the media player were statically linked it would not be possible to dynamically update it to support a different file format without replacing the entire application. Dynamic linking means that a new version of the shared library containing a more up-to-date codec, which includes some enhancements and bug fixes, could be dynamically loaded by a dynamic linker into memory at run-time to replace the original shared library. A shared library can also be shared by more than one application. For example, two different media players could both use the same shared library containing the same codec. This potentially means that the device running the applications requires less physical memory, depending on the size of the dynamic linker.
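The codec scenario in that quote is exactly what dlopen/dlsym provide on Linux. A minimal sketch, with libm and cos standing in for a real codec module:

```cpp
#include <dlfcn.h>   // dlopen/dlsym/dlclose; link with -ldl
#include <cstdio>

int main() {
    // Load the shared library at run time instead of at link time.
    void* module = dlopen("libm.so.6", RTLD_LAZY);
    if (!module) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Resolve a symbol by name and cast it to the expected signature.
    using cosine_fn = double (*)(double);
    cosine_fn cosine = reinterpret_cast<cosine_fn>(dlsym(module, "cos"));
    if (cosine)
        std::printf("cos(0) = %f\n", cosine(0.0));

    dlclose(module);  // a newer module could now be dlopen'ed in its place
    return 0;
}
```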
Third, on Linux almost everything is dynamically linked, except for /bin/ash.static, which also has a dynamic version, /bin/ash; but this shouldn't stop you from linking statically on Linux.
When using gcc, linking is dynamic by default. I guess you should use the "-static" flag to link the libraries statically.
@Vitaliy, good that you brought this up. The important thing to note here is that smart linking and the creation of shared (or dynamic) libraries are mutually exclusive; that is, if you turn on smart linking, the creation of shared libraries is turned off.
Smart linking breaks the code into small code blocks, and their dependencies are loaded.
So if you call a dependency multiple times, it gets loaded multiple times.
This gives very good execution time but very high compilation time, especially for large units. So there is a certain trade-off.

Potential memory risks from linking a statically built library to a shared library using Visual Studio

I'm working on a cross-platform FireBreath plugin which is crashing on Windows. I use a static library containing classes which reference boost.asio. When I link this library against the plugin DLL, I observe crashes when interacting with the io_service subsystem (i.e., during socket construction). When I link the static library against an ordinary executable, the problem does not occur. When I compile the contents of the static library directly into the plugin DLL project, the crash does not occur. I have gone to great lengths to ensure that all aspects of my build environment on Windows are consistent (build modes, version of Visual Studio, etc.). In addition, I've firewalled the boost.asio headers, so that the plugin DLL code has no visibility of the boost.asio subsystem (to no effect, unfortunately; VS2008 and VS2010). As far as I can tell, I've done everything possible to ensure that the build environment is well behaved, but the problem persists.
Can the community offer any advice on potential risks or approaches which could expose or solve the problem?
Two things that are dramatically different between linking a static library vs loading a DLL:
global initialization: In a DLL, all of the global initializers run. In a static library, the linker only brings in an object file if it satisfies some unresolved external, so systems that rely on components registering themselves in the constructor or initialization expression of a global object fail.
shared CRT: In a static library, all calls to the Standard library are resolved at link time, and the main application and the library functions all call the same copy of the Standard library. With a DLL, you risk having two copies of the Standard library, which can be OK if you're careful never to share any Standard library objects (like FILE*, std::string, or even malloc/free pairs) between the library and the application.
The second thing is most likely what's biting you. There's a lazy way to fix it: use the Standard library DLL. And there's a better way to fix it: figure out memory and object lifetimes, and don't try to allocate on one side and free on the other, or share C++ class layout across the boundary. Virtual functions work fine across the boundary.
The advantage of the "better" way is that plugins can be built in any version of the compiler, which becomes a big deal for maintenance later on in the development cycle.
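As a sketch of what "figure out memory and object lifetime" can look like in practice (the names here are hypothetical):

```cpp
// plugin.h -- shared header; the application never sees the object layout
extern "C" {
    typedef struct Session Session;          // opaque handle

    Session* plugin_open(const char* host);  // allocates inside the DLL
    int      plugin_poll(Session* s);        // all real work happens in the DLL
    void     plugin_close(Session* s);       // frees on the side that allocated
}
```

Because every allocation is released by the same module (and therefore the same copy of the CRT) that created it, it no longer matters which runtime the plugin and the host were each built against.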
In the FireBreath plugin prep script, try turning on the WITH_DYNAMIC_MSVC_RUNTIME flag.

Where should I be using a static library in C++

What are the use cases for static libraries in C++? I have seen that some people create DLLs instead, while others use static libraries only. What's your recommendation?
I'm a big fan of static libraries pretty much everywhere. The one big thing that DLLs get you that static libs cannot do is the ability to dynamically load and unload library functionality. So if your application is going to support some sort of hot swapping plugins, you need to use dynamic libs. Otherwise you can probably use static libs.
Static libs open the door to a lot of optimizations that you can't do with dynamic libs, because they are performed at link time. In the Microsoft world, Link Time Code Generation (LTCG) gives you the ability to do whole-program optimization and dead-code stripping through not only your application, but also your libraries (in GCC this is called Link Time Optimization, LTO).
Additionally static libs tend to make your program easier to distribute because you aren't forced to pass around a lot of library files, and you can completely avoid DLL-hell if you ever were to version your library.
You should use shared libraries (DLLs) if you have significant functionality that needs to be shared between applications, AND that functionality may be improved independently of the applications, with updates shipped separately.
The 'AND' part is the hardest to fulfill: usually you ship your application with any new functionality added, and you never update the library without updating the application at the same time (I am not saying that never happens), but usually the two ship in lockstep.
Otherwise it is easier to just build normal libs and ship the application.
An example of a good one (I use the term loosely, for illustration) is DirectX. When a new version of DirectX ships (and the interface has not changed), you just need to update the DLL, and all applications that use DirectX get the benefit of the new version of the library. In reality it is not quite that simple, but you get the idea.
In general, although there are always exceptions to the rule, I would say:
Advantages of DLLs
Less physical memory usage when running multiple instances of an application. (Copy on write optimisation of memory usage.)
Faster link times.
Smaller executables.
Better modularity.
Advantages of static libraries
Less virtual memory usage (and probably less physical memory usage) when running a single instance of an application.
Performance. Approximately 10% (more or less) improvement over DLLs, depending on your application.
Reliability. You tested your application against a specific version (or specific versions) of a library. An upgrade to a DLL could potentially break your application.
There is the advantage of not having to recompile your entire program if you make a change to a dynamically linked library. @Chris makes a good point about DLL hell, but if it's a minor bug fix that doesn't affect the API, this can save you the recompilation.
There is an SO post that talks about Windows not being able to apply updates to your program if you statically link its libraries (link to come). Although I think you are talking more about statically linking your own modules.
Use static version of your libraries where you can. Use dynamic libraries where you need to (license, availability or plugin system).
I use static libraries to implement UML's "package" concept. All modules belonging to a package get put into their own subdirectory, and I create an IDE subproject or makefile for that directory which builds a static library *.a file. Modern IDEs make it possible to work with your top-level package along with sub-packages within the same "workspace".
If a package (or a group of packages) can be deployed separately from the main executable, then I compile it into a shared library (*.so or *.dll) instead and consider it a "component" in UML jargon.
Well, a static library would be for holding huge libraries, and also for "multi-OS code", as I like to call it, so that it can be built to run on Linux, Windows, and so on.