DLL management for C++ with MinGW?

I recently decided to take a look at 2D graphics with C++, using MinGW on Windows 7.
Since I was only going to need 2D graphics any library would be viable more or less (OpenGL, SDL, etc..). I decided to take a quick look at a few and check how easy they'd be to get working on windows with MinGW.
I soon noticed that every library I tested (Cairo, SDL and GTK+) required tons of DLL files in order to work. After compiling even a simple program from a tutorial, I would get five or six different DLL errors, forcing me to copy all of those DLLs into my program's working directory just to get it to run.
Of course my program worked, but it's very cumbersome to have this many DLLs for a simple program. Making the program run on someone else's computer would require shipping all those DLLs along with it as separate files, plus other DLLs that I have installed globally but others don't.
It just seems so weird that something as popular as C++ would be this annoying to use because of all the DLLs required... Am I doing anything wrong? Could there be some magical solution to this problem? Some tool to minimize or even completely eliminate these complications? It'd be nice to use fewer DLLs for my application. Of course I won't be able to omit DLLs completely, but at least reducing the count to one per library (one library = one DLL), or being able to organize them in a subfolder of their own, would be awesome.

Of course my program worked, but it's very cumbersome to have this many DLLs for a simple program. Making the program run on someone else's computer would require shipping all those DLLs along with it as separate files, plus other DLLs that I have installed globally but others don't.
If you're making an installer for your program, the installer should take care of installing the DLLs right alongside your program. It's pretty common practice, and won't be inconvenient for your user at all. If you're just distributing a zip file with your app in it, just keep the DLLs in the folder you're zipping up (also a pretty common practice).
It's also worth noting that you'll already have to ship DLLs with your app. MinGW/GCC apps typically need libgcc_s_sjlj-1.dll (or a similar libgcc DLL, depending on the exception model) and libstdc++-6.dll, unless you statically link the runtime with -static-libgcc and -static-libstdc++. MSVC apps, if I recall correctly, rely on the Visual C++ runtime library. A few more graphics libraries aren't likely to bother the user at all.
Am I doing anything wrong?
Nope, this seems like business as usual to me. I recommend you continue your project without worrying too much about the DLLs.
Could there be some magical solution to this problem? Some tool to minimize or even completely eliminate these complications?
You could look into a DLL/EXE packer. I strongly recommend against it, though, because antivirus software tends not to appreciate the modified EXEs. (A lot of malware uses packing techniques, so antiviruses are often suspicious of packed apps by default.)
at least reducing the amount to a single one (one library = one DLL)
Technically you might be able to, but you'd probably need to rewrite the graphics library in question since right now it's set up for multiple DLLs. I really don't recommend this either; the DLLs are separate because they're meant to be. They offer different functionality (for example, a quick glance at GTK+'s changelog showed mentions of libraries for SVG, JPG, and other file formats, as well as a lot of backend stuff to interface with the host OS, printers, etc.) so they're encapsulated into different libraries. In some cases (libjpeg) these "sub-libraries" are even written by a different group, and the graphics library you're using just calls certain functions from the "sub-library".
If you're really that insistent on having just one DLL, I think you're better off looking for some little, minimal-functionality library. Since you just want 2D graphics you might be able to get away with that.
or having the possibility to organize them in a subfolder of their own would be awesome
Unfortunately you can't really do this: for DLLs that are linked at load time, Windows only looks in a fixed set of locations (the executable's directory, the system directories, and PATH), so a private subfolder won't be searched without extra machinery.

Related

Creating/Organising a portable C++ library

I'm not sure the way I'm organising my library is the most elegant one. I'm mainly concerned about making all the code that I write compile and run on all the systems that I'm targeting (keeping it portable), and also keeping it up to date for every system.
For example:
I'm not sure whether __declspec(dllexport/dllimport) is used on Mac or Linux. I assume it's not, but I don't know what the equivalent for Mac/Linux is. Another example might be calling specific operating-system functions, which I try to avoid. However, things such as measuring how long something takes to happen*, in a precise manner, do require me to call OS-specific functions.
*as in getting the user's time precisely (down to micro/milli-seconds).
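For what it's worth, both of those specific examples can be handled with a small amount of portable glue. Below is a minimal sketch; the MYLIB_API / MYLIB_BUILD_DLL macro names are placeholders I made up, not anything standard:

    // mylib_export.h - hypothetical export macro header for a cross-platform library
    #if defined(_WIN32)
      #if defined(MYLIB_BUILD_DLL)      // defined only while building the DLL itself
        #define MYLIB_API __declspec(dllexport)
      #else                             // consumers of the DLL import instead
        #define MYLIB_API __declspec(dllimport)
      #endif
    #else
      // GCC/Clang on Linux and macOS: symbols are visible by default, but with
      // -fvisibility=hidden you export selectively like this.
      #define MYLIB_API __attribute__((visibility("default")))
    #endif

    // Precise timing is portable with <chrono>; no OS-specific calls are needed.
    #include <chrono>

    MYLIB_API double elapsed_ms_example()
    {
        auto start = std::chrono::steady_clock::now();
        // ... work you want to measure ...
        auto end = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(end - start).count();
    }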
The systems that I am currently aiming for (perhaps more in the future) are Mac, Windows and Linux. But testing that the code compiles (and runs correctly) on every system just seems like a waste of time, because the way I'm currently proposing to do it requires me to make a separate project for every system, i.e. a Visual Studio project for Windows, an Xcode project for Mac, and the command line (or some other IDE) for Linux.
Okay, so my main problems with the way I organise things are:
1. Time consumption, as in creating the projects for all systems and keeping each individual project, for each system, up to date.
2. Knowledge of all the IDEs/compilers that I need to use.
Please note, I've never really used* Linux before; I'm thinking about switching. The only things I'm concerned about are finding the right tools to use with it and finding the right distro that suits what I need. I would really appreciate it if someone experienced with Linux could guide me, or give me some advice on whether to switch or not.
I really like Visual Studio, and it's been my main IDE for quite a while now. I'm just not sure whether I want to ditch Visual Studio for Linux, as I don't know whether the tools available for Linux can do what Visual Studio can - what I mean is, whether they are as user-friendly as Visual Studio is. I'm a bit afraid of learning Linux's tools, and I'm just not sure it's worth doing so. Time isn't a major factor for this library; I have plenty of time, I'm fairly young and determined to make programming my career.
*I have used it before, but I've never used it as a replacement for Windows.
I am currently hosting all the source code for my project on BitBucket, but at the moment I only have the raw code in my repo. There are no project files or any other way to compile it, just the code and a readme file. I was thinking of using Makefiles, since they seem popular, but I've never written a Makefile before. Don't get me wrong, I am willing to learn; I'm just not really sure where to start. I've heard that people use CMake to create portable libraries, such as SFML and Ogre3D. I've built a couple of libraries with CMake, but I have no clue how to use it for my own library to generate my project/make files. Should I learn and incorporate CMake into my library, or is there a better option available?
EDIT:
I'm not aiming to write a library for actual software that uses a GUI. I'm mainly aiming to write games.
1 - Boost. Boost will help your portability more than you can imagine. Its only real sticking point is, believe it or not, OS X.
2 - Use CMake. It integrates with Visual Studio by generating project files for it to build with, and you can put most of your different-platform compilation voodoo in there.
3 - If you're seriously writing a portable library, consider writing it in C, writing a C wrapper, making it header-only, or providing the source code. Making it a shared or statically linkable library does not mean that it will play nice. Name mangling leads to inconsistencies that will blow your mind.
4 - Always be explicit about the number of bits in each variable (see the sketch just after this list).
5 - Use git. It'll let you set up a quick-and-dirty local server for a repository very easily and get very fast transfers of the kind of huge changes MSVC annoyingly makes.
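To illustrate point 4, the portable way to pin down variable sizes is <cstdint>; a small sketch (the struct is made up purely for illustration):

    #include <cstdint>

    // Fixed-width types keep binary file formats and network messages identical
    // across compilers and platforms; 'long' may be 32 or 64 bits, int32_t never is.
    struct PacketHeader {
        uint32_t magic;        // exactly 32 bits everywhere
        uint16_t version;
        uint16_t payload_len;
        int64_t  timestamp;    // exactly 64 bits everywhere
    };

    // Sanity check: catches surprise padding on the platforms you build for.
    static_assert(sizeof(PacketHeader) == 16, "unexpected padding in PacketHeader");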
There are a lot more best practices that can be discussed about cross-platform development. Not all of that advice is applicable in every case; I have a very code-heavy Linux/Windows library that I code almost exclusively in MSVC 2010 and mostly build/test on Linux, and it is nowhere close to header-only.
EDIT in response to comments:
git was suggested because I find it very easy to use and manage locally. I've used svn before and liked it; I won't really endorse any others, but there are probably plenty of fine ones.
To expound on point 3,
A C wrapper would make it so that anyone anywhere could use your library - FORTRAN developers, Ruby, even Java.
Otherwise you generally need similar compiler versions to link properly, it will only link against other C++ code (outside the case of DLLs), and there are still versioning issues. It's one of the stupider problems left over in C++; look up "name mangling" on Wikipedia. There is a reason widely used libraries such as libz and openssl are written in C or have C wrappers.
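To make that concrete, here is a rough sketch of what a C wrapper can look like. The Renderer class and the renderer_* function names are invented for illustration, not taken from any real library:

    // renderer_c_api.h - a hypothetical C facade over a C++ class
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct renderer_t renderer_t;   // opaque handle: no C++ types leak out

    renderer_t* renderer_create(int width, int height);
    void        renderer_draw_rect(renderer_t* r, int x, int y, int w, int h);
    void        renderer_destroy(renderer_t* r);

    #ifdef __cplusplus
    }
    #endif

    // renderer_c_api.cpp - implemented in C++, but exports plain, unmangled C symbols
    #include "renderer_c_api.h"
    #include "renderer.hpp"                  // the real C++ Renderer class lives here

    extern "C" renderer_t* renderer_create(int width, int height) {
        return reinterpret_cast<renderer_t*>(new Renderer(width, height));
    }

    extern "C" void renderer_draw_rect(renderer_t* r, int x, int y, int w, int h) {
        reinterpret_cast<Renderer*>(r)->drawRect(x, y, w, h);
    }

    extern "C" void renderer_destroy(renderer_t* r) {
        delete reinterpret_cast<Renderer*>(r);
    }

Because only C types and unmangled names cross the boundary, the compiled library can be consumed from a different C++ compiler, from C, or through most languages' FFI.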
There are other advantages to a C interface. Exception propagation across dynamic libraries is non-existent; with static libraries it can be inconsistent or non-existent.
You'll find that the most widely used C++ libraries are mostly header-only, like Boost. A header-only library solves many problems by putting all the code directly into a project in a relatively intuitive way, and modern compilers can still mitigate much (but certainly not all) of the extra compile time associated with it.
With all this said, it is certainly possible to do without a C wrapper or header-only, it is just annoying and very troublesome. DLL hell and its Linux equivalents still exist.
You also asked about Boost. That depends. If you're distributing the source code then you certainly must distribute Boost with your code, or have people install it. Having people install libraries in order to compile other libraries or use programs is common practice; think of how specific versions of DirectX ship with games, for example.
However, if you are distributing binary versions of your library, statically linking against Boost will eliminate any need to include it, as long as you are careful to keep Boost headers out of the outward-facing parts of your library. This is where you start seeing things like void * pointers inside C++ headers - an unfortunate side effect of some of the shortcomings of C++ compilation and library distribution.
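A common way to keep a dependency like Boost out of the public headers is the pimpl idiom; here is a minimal sketch (the Widget class and its use of boost::circular_buffer are made up purely to illustrate the pattern):

    // widget.h - public header: no Boost includes, so consumers never see Boost
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                      // defined in the .cpp, where Impl is complete
        void update();
    private:
        struct Impl;                    // forward-declared only
        std::unique_ptr<Impl> impl_;    // all Boost types hide behind this pointer
    };

    // widget.cpp - private implementation: Boost is free to appear here
    #include "widget.h"
    #include <boost/circular_buffer.hpp>

    struct Widget::Impl {
        boost::circular_buffer<int> history;
        Impl() : history(16) {}         // keep the last 16 samples, say
    };

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() = default;
    void Widget::update() { impl_->history.push_back(0); }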
CMake is a good choice. You can learn to use it. Read a tutorial:
http://www.cmake.org/cmake/help/cmake_tutorial.html
But if your targets are only Linux and Windows, it is probably OK in your case (a small-to-average first multi-platform project) to maintain two separate build systems.
On Linux, use Make. It is standard and has a very good reference manual:
http://www.gnu.org/software/make/manual/make.html
On Windows, use your IDE's project file, be it Visual Studio, Dev-C++ or another. That is the simplest way to go.
Most important, make it easy to test your library/software on different platforms. Install a virtual machine on your desktop, or at least compile your library under Cygwin.
Once you get there, come back to Stack Overflow and we will help!
Personally I'd leverage a framework like Qt, because it is quite portable, it does abstract a lot of OS functionality (files, timing, threads, networking), and you get a decent, free IDE (Qt Creator) that is also portable and runs on Windows, OS X and any Unix flavor that runs Qt. It'd give you the lowest barrier to entry. Qt Creator can leverage the Visual Studio compiler and the CDB debugger if they are available.
You do not need to use OpenGL to use Qt, in fact you're not bound to any particular graphics subsystem. Qt only "uses" OpenGL in Qt 5 for the Qt Quick 2 graphics backend. It's not needed for Qt 4, nor for Qt Quick 1 (even in Qt 5!).
You can use any 2D or 3D framework you fancy to push images and other content to the screen. What Qt is good at is creating the kind of 2D imagery that is often needed in games - menus, configuration screens, HUDs, etc. There are a lot of controls and drawing primitives that Qt makes easy to leverage for your purposes.
Qt also gives you reasonably powerful model-view and networking frameworks, so you'd be able to fairly easily generate server or client lists that update in real time.
There'd need to be a small amount of shim code between Qt and DirectX, of course. On the output side, you typically end up with a QImage in Qt, and then use DirectX, SDL, OpenGL, etc. to push it to the screen. On the input side, you need to call qApp->processEvents() within your main game loop, and you will need to post user input events from DirectX etc. to Qt's event queue using qApp->postEvent(...). This would only be needed if, say, the DirectX main loop consumes all Windows messages and won't let standard winapi/win32 code (Qt's Windows event dispatcher) see them. I haven't dealt with DirectX much, so others should feel free to chime in with details, of course.
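The input-side shim could look roughly like the sketch below. This is only an outline under the assumptions above (you own the render loop yourself instead of calling app.exec(), and you forward input you read from the rendering framework); the hud widget and the "read input" step are placeholders:

    #include <QApplication>
    #include <QKeyEvent>
    #include <QWidget>

    int main(int argc, char** argv)
    {
        QApplication app(argc, argv);
        QWidget hud;                 // whatever Qt surface receives the game's UI events
        hud.show();

        bool running = true;
        while (running) {            // hand-rolled game loop instead of app.exec()
            // Let Qt dispatch its queued events (timers, posted events, repaints, ...).
            app.processEvents();

            // If the rendering framework swallowed a key press, hand it to Qt.
            // Qt takes ownership of events passed to postEvent().
            // QCoreApplication::postEvent(&hud,
            //     new QKeyEvent(QEvent::KeyPress, Qt::Key_Escape, Qt::NoModifier));

            // ... update game state, render a frame with DirectX/SDL/OpenGL,
            //     and set running = false when the player quits ...
        }
        return 0;
    }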

What libraries can I use to make tiny Windows programs?

Perhaps some of you people have heard of http://suckless.org/ and their set of Unix tools. Basically, they're a set of programs that each aim to do one thing but do it well, while still being as simple and resource-light as possible.
I've been trying to find a way to reproduce this style of programming on Windows with C++, but all the libraries I know of would produce binaries that are huge relative to their function. Even the simplest Qt program, for example, is generally several megabytes in size. I'm not against packaging dependencies along with distributables, but I wouldn't want to do it to that level.
Binary size is not one of my main goals, but simplicity is, and big libraries like these are, by construction, not simple. If binary size were the primary concern, you could use runtime compression just like kkrieger or malware does.
A possibility would be to go commando and use only ISO Standard C++ libraries but rebuilding a sockets or networking system for a small single-purpose application is not really something anyone would want to be troubled with.
For some reason I thought there was some general-purpose library that Windows developers could count on everyone and their grandma having readily accessible, but now I don't know if anything like that exists. What can you use to write code that adheres to the Unix philosophy but targets Windows?
You should target the Win32 API directly. You can't get much lower level than that. In the Windows world, everything directly or indirectly wraps the SDK functions, including the so-called "standard C++ libraries".
Alternatively, you could use something like MFC or WTL, which are relatively thin C++ wrappers over the Win32 API. Because of the overhead of the class libraries, such programs will be slightly larger than those created using only the SDK, but nowadays the actual overhead is completely insignificant.
The desires expressed in your question are precisely why I learned, and still use, the Win32 API today, so that's definitely what I would go with. Plus, your programs will look and feel native, which is way better than the output of most "cross-platform GUI toolkits". The advantages of this can't be overstated.
But if you just open up Visual Studio and compile a simple little SDK "Hello World" app, it'll still be awfully large. Kilobytes, to be sure, but that still seems like a lot for about the simplest app imaginable. If you really need to cut things down further, you can try telling Visual Studio not to link against the C runtime libraries and define your own entry point. This does mean you'll have to implement all of your own startup initialization code, but it can reduce the size of a trivial app substantially.
Matt Pietrek had this same idea some years ago, although you'll probably want to take time to "modernize" his original code significantly if you decide to go this route.
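To give a flavor of that approach, here is a rough sketch of a CRT-free program; the entry-point name and the exact linker switches are illustrative and vary by toolchain and architecture:

    // tiny.cpp - Win32 "Hello World" without the C runtime.
    // Build sketch (MSVC): cl /O1 tiny.cpp /link /NODEFAULTLIB /ENTRY:entry
    //                      /SUBSYSTEM:WINDOWS user32.lib kernel32.lib
    #include <windows.h>

    // With a custom entry point there is no CRT startup: no global constructors,
    // no argc/argv, no standard library - only raw Win32 calls are available.
    extern "C" void WINAPI entry()
    {
        MessageBoxA(nullptr, "Hello, tiny world!", "tiny", MB_OK);
        ExitProcess(0);   // must exit explicitly; there is no CRT to return to
    }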
FLTK is a popular cross-platform minimal GUI toolkit.
Or, a popular alternative if you don't need much detailed interaction is just to fire up a minimal embedded web server and do all the "GUI" in HTML in a browser.

Reorganize Classes into Static Libraries

I am about to attempt reorganizing the way my group builds a set of large applications that share about 90% of their source files. Right now, these applications are built without any libraries whatsoever, except for externally linked ones that are not under our control. The applications use the same common source files (we are not maintaining 5 versions of the same .h/.cpp files), but these are not built into any common library. So, at the moment, we are paying the price of building the same code over and over for each application, each time we release a version. To me, this sounds like a prime candidate for using libraries to capture the shared code and reduce build times. I do not have the option of using DLLs, so the approach is to use static libraries.
I would like to know what tips you have for approaching this task. I have limited experience with creating/organizing static libraries, so even basic suggestions about organization/gotchas are welcome. Maybe even a good book recommendation?
I have done a brief exercise by finding the entire subset of files that each application share in common. As a proof of concept, I took these files and placed them in a single "Common Monster" static library. Building the full application using this single static library certainly improves the build time for all of the applications, but should I leave it at this? The purpose of the library in this form is not very focused and seems like a lazy attempt at modularity. There is ongoing development with these applications, and I'm afraid this setup will cause problems further down the line.
It's very hard to give general guidelines in this area - how you structure libraries depends very much on how you use them. Perhaps if I describe my own code libraries this may help:
One general-purpose library containing code that I expect all applications will have at least a 50/50 chance of needing. This includes string utilities, regexes, expression evaluation, XML parsing and ODBC support. Conceivably this should be split up a bit, but keeping it monolithic makes distributing my code in FOSS projects easier.
A library supporting multi-threading, providing wrappers around threads, mutexes, semaphores etc.
One supporting SQLite via its native interface, rather than via ODBC.
A C++ web server wrapper round the Mongoose C web server.
The general purpose library is used in all the stuff I write, the others in more specialised circumstances. Headers for each library are held in separate directories, as are the library binaries themselves (though they should probably be in a single lib directory).
Make sure that the dependencies between your libraries form a directed acyclic graph (ideally a tree). While this is not necessarily a problem for static libs (I'm not sure, in fact), it will be a problem if you ever decide to switch to DLLs. Depending on your situation, this may require some redesign of interfaces.
Another thing I noticed (on MSVC, at least), which you may consider if build speed is an important concern: DLLs link much faster than static libraries. I assume this is because they don't have to be copied into the new executable and there's no need to search for and eliminate unused code. Even if it's not an option for production, you can use this trick while developing.
I also have the habit of creating my solution files with CMake, because it makes it easier to get an overview of the entire build process than clicking through an endless list of options in a GUI. It's up to you to decide whether you want to walk that path.

Best practices for creating an application which will be upgraded frequently

I am developing a portable C++ application and looking for some best practices in doing that. This application will have frequent updates and I need to build it in such a way that parts of program can be updated easily.
For a frequently updated program, is splitting the program into libraries the best practice? If the parts are in separate libraries, users can just replace a library when something changes.
If the answer to point 1 is "yes", what type of library do I have to use? On Linux, I know I can create a "shared library", but I am not sure how portable that is to Windows. I am aware of the DLL hell issues on Windows as well.
Any help would be great!
Yes, using libraries is good, but the idea of "simply" replacing a library with a new one may be unrealistic, as library APIs tend to change and apps often need to be updated to take advantage of, or even be compatible with, different versions of a library. With a good amount of integration testing, though, you'll be able to "support" a range of different versions of the library. Or, if you control the library code yourself, you can make sure that changes to the library code never break the application.
On Windows, DLLs are the direct equivalent of shared libraries (.so) on Linux, and if you compile both in a common environment (either cross-compiling or using MinGW on Windows) then the linker will handle them the same way - presuming, of course, that the rest of your code is cross-platform and configures itself correctly for the target platform.
IMO, DLL hell was really more of a problem in the old days when applications all installed their DLLs into a common directory like C:\WINDOWS\SYSTEM, which people don't really do anymore precisely because it creates DLL hell. You can place your shared libraries in a more appropriate place where they won't interfere with other, non-aware apps, or - simplest of all - just keep them in the same directory as the executable that needs them.
I'm not entirely convinced that separating out the executable portions of your program in any way simplifies upgrades. It might, maybe, in some rare cases, make the update installer smaller, but the effort will be substantial, and certainly not worth it the one time you get it wrong. Replace all executable code as one in most cases.
On the other hand, you want to be very careful about messing with anything your users might have changed. Draw a bright line between the part of the application that is just code and the part that is user data. Handle the user data with care.
If it is an application, my first choice would be to ship a single statically linked executable. I had the opportunity to work on a product that was shipped on 5 platforms (Win2K, WinXP, Linux, Solaris, Tru64 Unix), and believe me, maintaining shared libraries or DLLs with a large codebase is a hell of a task.
Suppose this is a non-trivial application which involves the use of third-party GUI libraries, threads, etc. Using C++, there is no single way of doing it on all platforms. This means you will have to maintain different codebases for different platforms anyway. Then there are some weird behaviours (bugs) of third-party libraries on different platforms. All this creates a burden if the application is shipped using different library versions, i.e. different versions attached to different platforms. I have seen people ship libraries to all platforms when the fix is only for a particular platform, just to avoid the versioning confusion. But it is not that simple; the customer often has a different angle on how he/she wants to upgrade/patch, which also has to be considered.
Of course, if the binary you are building is huge, then one can consider DLLs/shared libraries. Even if that is the case, what I would suggest is to build your application in the form of layers like:
Application-->GUI-->Platform-->Base-->Fundamental
So here some libraries can contain common code for all platforms, and only specific libraries like "Platform" need updating for platform-specific behaviours. This will make your life a lot easier.
IMHO a DLL/shared-library option is viable when you are building a product that acts as a complete solution rather than just an application. In such a case different subsystems use common logic simultaneously within your product framework whose logic can then be shared in memory using DLLs/shared-libraries.
HTH,
As soon as you're trying to deal with both Windows and a UNIX system like Linux, life gets more complicated.
What are the service requirements you have to satisfy? Can you control when client systems get upgraded? How many systems will you need to support? How much of a backward-compatibility requirement do you have?
To answer your question with a question, why are you making the application native if being portable is one of the key goals?
You could consider moving to a virtual platform like Java or .NET/Mono. You can still write C++ libraries (shared libraries on Linux, DLLs on Windows) for anything that would be better as native code, but the bulk of your application will be genuinely portable.

Splitting up a utility DLL into smaller components in C++

We have a core library in the form of a DLL that is used by more than one client application. It has gotten somewhat bloated and some applications need only a tiny fraction of the functionality that this DLL provides. So now we're looking at dividing this behemoth into smaller components.
My question is this: can anyone recommend a path to take to divide this bloated DLL into a set of modules that have some interdependencies but do not necessarily require all other modules?
Here are the options as I see them but I'm hoping someone can offer other possibilities:
Create a "core" dll and several "satellite" dlls which use the core and possibly other satellite DLLs.
Subdivide the contents of the bloated DLL into static libraries that the main DLL uses (to maintain the same functionality) but apps that don't want to use the bloated version can assemble the static libraries they need into their own dll or into the app itself.
I was hesitant to mention this but I think it may be important to note that the app uses MFC.
Thanks for your thoughts.
Somewhat related to your question is this question, about splitting up a very large C module into smaller ones.
How do you introduce unit testing into a large, legacy (C/C++) codebase?
It seems your question has to do with the larger question of breaking some large blob of code into a more modular system. The link above is definitely recommended reading.
Without having all the details it is a little hard to help, but here is what I would do in your situation:
Provide both static and DLL versions of whatever you release - for multithreaded and single-threaded runtimes.
Try to glean from the disparate clients which items should be grouped together to provide reasonable segmentation, without creating layers of dependencies.
Having a "core" module sounds like a good idea - and make sure you don't have too many levels of dependencies; you might want to keep it simple.
You may find after the exercise that one big dll is actually reasonable.
Another consideration is that maintaining multiple DLLs and both static libs and DLLs will hugely increase the complexity of maintenance.
Are you going to be releasing them all at once every time, or are they going to be mix and match? Be careful here, and know that you could create testing issues.
If no one is complaining about the size of the DLL then you might want to consider leaving it as is.