Creating/Organising a portable C++ library

I'm not sure the way I'm organising my library is the most elegant way to do it. My main concerns are making everything I write compile and run on every system I'm targeting (keeping it portable), and keeping it up to date for every system.
For example:
I'm not sure whether __declspec(dllexport/dllimport) is used on Mac or Linux. I assume it isn't, but I don't know what the equivalent for Mac/Linux is. Another example might be calling specific operating-system functions, which I try to avoid. However, things such as measuring how long something takes to happen*, in a precise manner, do require me to call OS-specific functions.
*as in getting the time precisely (down to micro/milliseconds).
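Neither concern has to leak into most of the code. Below is a minimal sketch of how both are commonly handled; the names MYLIB_API, MYLIB_BUILDING and Stopwatch are invented, and the timing part assumes a C++11 compiler so that std::chrono can stand in for OS-specific timers:

    // Symbol export: __declspec on Windows, GCC/Clang visibility attributes elsewhere.
    #if defined(_WIN32)
      #ifdef MYLIB_BUILDING                 // defined by the build only while compiling the DLL itself
        #define MYLIB_API __declspec(dllexport)
      #else
        #define MYLIB_API __declspec(dllimport)
      #endif
    #else
      #define MYLIB_API __attribute__((visibility("default")))
    #endif

    // Precise timing without calling the OS directly (C++11 <chrono>).
    #include <chrono>
    #include <ratio>

    class MYLIB_API Stopwatch {
        using clock = std::chrono::steady_clock;
    public:
        Stopwatch() : start_(clock::now()) {}
        double elapsedMicroseconds() const {
            return std::chrono::duration<double, std::micro>(clock::now() - start_).count();
        }
    private:
        clock::time_point start_;
    };

On GCC/Clang you would typically also compile with -fvisibility=hidden so that only the symbols you mark this way are exported, mirroring the Windows default.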
The systems I am currently aiming for (perhaps more in the future) are Mac, Windows and Linux. But testing that the code compiles (and runs correctly) on every system just seems like a waste of time, because the way I'm currently proposing to do it requires me to make a separate project for every system: a Visual Studio project for Windows, an Xcode project for Mac, and the command line (or some other IDE) for Linux.
Okay, so my main problems with the way I organise things are:
1. Time consumption: creating the projects for all systems and keeping each individual project, for each system, up to date.
2. Having to know every IDE/compiler that I need to use.
Please note, I've never really used* Linux before; I'm thinking about switching. My only concerns are finding the right tools to use with it and finding the right distro for what I need. I would really appreciate it if someone experienced with Linux could guide me, or give me some advice on whether to switch or not.
I really like Visual Studio, and it's been my main IDE for quite a while now. I'm just not sure whether I want to ditch Visual Studio for Linux, as I don't know if the tools available on Linux can do what Visual Studio can -- by which I mean being as user-friendly as Visual Studio is. I'm wary of learning Linux's tools because I'm not sure it's worth doing so. Time isn't a major factor on this library; I have plenty of time, I'm fairly young, and I'm determined to make programming my career.
*I have used it before, but I've never replaced it for Windows.
I am currently hosting all the source code for my project on Bitbucket, but at the moment I only have the raw code in my repo. There are no project files or any other means to compile it, just the code and a readme file. I was thinking of using Makefiles, since they seem popular, but I've never written a Makefile before. Don't get me wrong, I am willing to learn; I'm just not really sure where to start. I've heard that people use CMake to create portable libraries, such as SFML and Ogre3D. I've built a couple of libraries with CMake, but I have no clue how to actually set up my own library with it to generate my project/make files. Should I learn and incorporate CMake with my library, or is there a better option available?
EDIT:
I'm not aiming to write a library for software that uses a GUI. I mainly aim to write games.

1 - Boost. Boost will help your portability more than you can imagine. Its only real sticking point is, believe it or not, OS X.
2 - Use CMake. It can generate Visual Studio project files, so you can keep using VS as the build tool, and you can put most of your different-platform compilation voodoo in there.
3 - If you're seriously writing a portable library, consider writing it in C, giving it a C wrapper, making it header-only, or providing the source code. Making it a shared or statically linkable library does not mean that it will play nice. Name mangling leads to inconsistencies that will blow your mind.
4 - Always be explicit about the number of bits in each variable (see the sketch after this list).
5 - Use git. It makes it very easy to set up a quick-and-dirty local server for a repository and gives you very fast transfers of the kind of huge changes MSVC will annoyingly make.
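A minimal illustration of point 4, assuming a C++11 compiler (the struct and field names are invented):

    #include <cstdint>

    struct PacketHeader {
        std::uint32_t length;   // exactly 32 bits on every platform and compiler
        std::uint16_t type;     // plain 'unsigned short' is only guaranteed to be *at least* 16 bits
        std::uint8_t  flags;
    };

    static_assert(sizeof(std::uint32_t) == 4, "fixed-width types guarantee their size");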
There are a lot more best practices that can be discussed about cross-platform development, and not all of this advice is applicable in every case; I have a very code-heavy Linux/Windows library that I code almost exclusively in MSVC2k10 and mostly build/test on Linux, and it is nowhere close to header-only.
EDIT in response to comments:
git was suggested because I find it very easy to use and manage locally. I've used svn before and liked it; I won't really endorse any others, but there are probably plenty of fine ones.
To expound on point 3,
A C wrapper would make it so that anyone anywhere could use your library - FORTRAN developers, Ruby, even Java.
Otherwise you generally have to have similar compiler versions to link properly, and it will only link to other C++ code (outside the case of DLLs), and there are still versioning issues. It's one of the stupidest problems left over in C++; look up "name mangling" on Wikipedia. There is a reason widely used libraries are written in C or have C wrappers, such as libz, openssl, etc.
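A minimal sketch of the kind of C wrapper meant here, wrapping a hypothetical C++ class named Engine (every name below is invented):

    // Engine.hpp -- the C++ implementation, never shown to C callers
    class Engine {
    public:
        Engine() : elapsed_(0.0) {}
        void update(double dt) { elapsed_ += dt; }
    private:
        double elapsed_;
    };

    // engine_c_api.h -- the only header users of the binary ever include
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct EngineHandle EngineHandle;     // opaque handle, never defined for callers

    EngineHandle* engine_create(void);
    void          engine_update(EngineHandle* e, double dt);
    void          engine_destroy(EngineHandle* e);

    #ifdef __cplusplus
    }
    #endif

    // engine_c_api.cpp -- implemented in C++, but exports only plain, unmangled C symbols
    extern "C" EngineHandle* engine_create(void)              { return reinterpret_cast<EngineHandle*>(new Engine()); }
    extern "C" void engine_update(EngineHandle* e, double dt) { reinterpret_cast<Engine*>(e)->update(dt); }
    extern "C" void engine_destroy(EngineHandle* e)           { delete reinterpret_cast<Engine*>(e); }

Because every exported symbol has plain C linkage, the library can be built with one compiler and consumed from another toolchain or language, which is exactly what name mangling otherwise prevents.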
There are other advantages to it. Exception propagation across dynamic libraries is non-existent; with static libraries it can be inconsistent or non-existent.
You'll find that the most widely-used C++ libraries are mostly header-only, like Boost. A header-only library solves many problems by putting all the code directly into a project in a relatively intuitive way, and modern compilers can still optimize away much (but certainly not all) of the extra compile time associated with it.
With all this said, it is certainly possible to do without a C wrapper or a header-only design; it is just annoying and very troublesome. DLL hell and its Linux equivalents still exist.
You also asked about Boost. That depends. If you're distributing the source code then you certainly must distribute Boost with your code, or have people install it themselves. Having people install libraries in order to compile other libraries or use programs is common practice; think of how specific versions of DirectX come with games, for example.
However, if you are distributing binary versions of your library, statically linking against Boost will eliminate any need to include it, as long as you are careful to keep Boost headers out of the outward-facing parts of your library. This is where you start seeing things like void * pointers inside C++ headers -- an unfortunate side effect of some of the shortcomings of C++ compilation and library distribution.
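As a sketch of what keeping Boost out of the outward-facing headers can look like (class names invented, C++11 assumed for std::unique_ptr), the public header exposes only an opaque implementation pointer and the Boost headers appear only in the .cpp file:

    // Downloader.hpp -- public header shipped to users; note: no Boost includes here
    #include <memory>
    #include <string>

    class Downloader {
    public:
        Downloader();
        ~Downloader();                      // defined in the .cpp, where Impl is a complete type
        std::string fetch(const std::string& url);
    private:
        struct Impl;                        // defined in Downloader.cpp, free to hold Boost types
        std::unique_ptr<Impl> impl_;
    };

Downloader.cpp then defines Downloader::Impl and is the only file that includes Boost, so users of the prebuilt binary never need Boost headers at all. The void * trick mentioned above is the same idea, just without the smart pointer.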

CMake is a good choice. You can learn to use it. Read a tutorial:
http://www.cmake.org/cmake/help/cmake_tutorial.html
But if your targets are Linux and Windows only, it is probably OK in your case (a small/average first multi-platform project) to maintain two separate build systems.
On Linux, use Make. It is standard and has a very good reference manual:
http://www.gnu.org/software/make/manual/make.html
On Windows, use your IDE's project files, be it Visual Studio, Dev-C++ or another. That is the simplest way to go.
Most important, make it easy to test your library/software on different platforms. Install a virtual machine on your desktop, or at least compile your library under Cygwin.
Once you are there, come back to Stack Overflow and we will help!

Personally I'd leverage a framework like Qt, because it is quite portable, it does abstract a lot of OS functionality (files, timing, threads, networking), and you get a decent, free IDE (Qt Creator) that is also portable and runs on Windows, OS X and any Unix flavor that runs Qt. It'd give you the lowest barrier to entry. Qt Creator can leverage the Visual Studio compiler and the CDB debugger if they are available.
You do not need to use OpenGL to use Qt; in fact, you're not bound to any particular graphics subsystem. Qt only "uses" OpenGL in Qt 5 for the Qt Quick 2 graphics backend. It's not needed for Qt 4, nor for Qt Quick 1 (even in Qt 5!).
You can use any 2D or 3D framework you fancy to push images and other content to the screen. What Qt is good at is creating the kind of 2D imagery that is often needed in games - menus, configuration screens, HUDs, etc. There's a lot of controls and drawing primitives that Qt makes easy to leverage for your purposes.
Qt also lets you use a reasonably powerful model-view and networking frameworks, thus you'd be able to reasonably easily generate server or client lists that update in real time.
There'd need to be a small amount of shim code between Qt and DirectX, of course. On the output side, you typically end up with a QImage in Qt, and then use DirectX, SDL, OpenGL, etc. to push it to the screen. On the input side, you need to call qApp->processEvents() within your main game loop, and you will need to post user-input events from DirectX etc. to Qt's event queue using qApp->postEvent(...). This would only be needed if, say, the DirectX main loop consumes all Windows messages and won't let standard winapi/win32 code (Qt's Windows event dispatcher) see them. I haven't dealt with DirectX much, so others should feel free to chime in with details, of course.
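A rough sketch of that integration, assuming Qt's widgets module and a hand-rolled render loop (updateGame and renderFrame are placeholders for whatever your engine actually does):

    #include <QApplication>

    int main(int argc, char** argv)
    {
        QApplication app(argc, argv);

        bool running = true;           // cleared by your own quit handling
        while (running) {
            app.processEvents();       // let Qt dispatch any queued window/input events
            // updateGame();           // hypothetical: advance the simulation
            // renderFrame();          // hypothetical: push a frame via DirectX/OpenGL/SDL
        }
        return 0;
    }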

Related

Can MinGW replicate most unix system calls with no effort?

Background: I'm working on porting a large project that was developed in C++ exclusively for Unix systems so that it is compatible with Windows, to make way for an eventual Windows distribution. I don't have very much experience in Windows development, but I'd like to get whatever I can do done right before senior developers move in and take over.
Question: For a while now I've been looking up Windows equivalents for all the Unix/POSIX calls used in the software, most coming from dirent.h, unistd.h, and some headers under sys, like sys/stat.h or sys/types.h. And although it'll take a lot of work to modify the programs to adopt the new Win32 API calling conventions and return types (and sometimes entirely new functions), it'll probably work, eventually.
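For illustration, the kind of per-call shim this amounts to might look like the sketch below (isDirectory is an invented helper, not part of either API):

    #include <string>

    #ifdef _WIN32
      #include <windows.h>
    #else
      #include <sys/stat.h>
    #endif

    // Returns true if 'path' names an existing directory.
    bool isDirectory(const std::string& path)
    {
    #ifdef _WIN32
        DWORD attrs = GetFileAttributesA(path.c_str());
        return attrs != INVALID_FILE_ATTRIBUTES && (attrs & FILE_ATTRIBUTE_DIRECTORY) != 0;
    #else
        struct stat sb;
        return ::stat(path.c_str(), &sb) == 0 && S_ISDIR(sb.st_mode);
    #endif
    }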
But I keep seeing mentions of the fact that MinGW includes many native Unix calls and facilities as part of its GCC environment, and can translate them into Windows-compatible calls so you can compile on Windows and run the compiled program on Windows easily. In fact, one of the similar questions I read in the sidebar talks about just that. What I have trouble understanding is exactly the extent of this built-in translation layer, and where I can find a list of the system facilities that work with it.
Sorry this post is a little unstructured and that I'm so green with this, but I only have 2 weeks to try to accomplish something with this before I get swapped with a senior developer.
No. MinGW does not attempt to implement UNIX functions on Windows. It cannot replicate most system calls with no effort.
However, Cygwin does do that.

DLL management for C++ with MinGW?

I recently decided to take a look at 2D graphics with C++, using MinGW on Windows 7.
Since I was only going to need 2D graphics any library would be viable more or less (OpenGL, SDL, etc..). I decided to take a quick look at a few and check how easy they'd be to get working on windows with MinGW.
I soon noticed every library I tested (which were Cairo, SDL and GTK+) required tons of dll files in order to work. After compiling even a simple program from something like a tutorial it would give me like 5 or 6 different dll errors, forcing me to copy all of them into my program's working directory for it to even run.
Of course my program worked, but it's very cumbersome to have this many DLLs just for a simple program. Making the program run on someone else's computer would require shipping all those DLLs along with it as separate files, plus other DLLs that I have installed globally but others don't.
It just seems so weird that something as popular as C++ would be so annoying to use because of all the DLLs required... Am I doing anything wrong? Could there be some magical solution to this problem? Some tool to minimize or even completely eliminate these complications? It'd be cool to use fewer DLLs for my application. Of course I won't be able to omit DLLs completely, but at least reducing the amount to one per library (one library = one DLL), or having the possibility to organize them in a subfolder of their own, would be awesome.
Of course my program worked, but it's very cumbersome to have this many DLLs just for a simple program. Making the program run on someone else's computer would require shipping all those DLLs along with it as separate files, plus other DLLs that I have installed globally but others don't.
If you're making an installer for your program, the installer should take care of installing the DLLs right alongside your program. It's pretty common practice, and won't be inconvenient for your user at all. If you're just distributing a zip file with your app in it, just keep the DLLs in the folder you're zipping up (also a pretty common practice).
It's also worth noting that you'll already have to send DLLs with your app. GCC apps need libgcc_s_sjlj-1.dll and libstdc++-6.dll. MSVC apps, if I recall correctly, rely on the Visual C++ runtime library. A few more graphics libraries aren't likely to bother the user at all.
Am I doing anything wrong?
Nope, this seems like business as usual to me. I recommend you continue your project without worrying too much about the DLLs.
Could there be some magical solution to this problem? Some tool to minimize or even completely eliminate these complications?
You could look into a DLL/EXE packer. It's highly unrecommended, though, because antiviruses tend to not appreciate the modified EXEs. (A lot of malware uses packing techniques, so antiviruses are often suspicious of such packed apps by default.)
at least reducing the amount to a single one (one library = one DLL)
Technically you might be able to, but you'd probably need to rewrite the graphics library in question since right now it's set up for multiple DLLs. I really don't recommend this either; the DLLs are separate because they're meant to be. They offer different functionality (for example, a quick glance at GTK+'s changelog showed mentions of libraries for SVG, JPG, and other file formats, as well as a lot of backend stuff to interface with the host OS, printers, etc.) so they're encapsulated into different libraries. In some cases (libjpeg) these "sub-libraries" are even written by a different group, and the graphics library you're using just calls certain functions from the "sub-library".
If you're really that insistent on having just one DLL, I think you're better off looking for some little, minimal-functionality library. Since you just want 2D graphics you might be able to get away with that.
or having the possibility to organize them in a subfolder of their own would be awesome
Unfortunately you can't do this.

What libraries can I use to make tiny Windows programs?

Perhaps some of you people have heard of http://suckless.org/ and their set of Unix tools. Basically, they're a set of programs that each aim to do one thing but do it well, while still being as simple and resource-light as possible.
I've been trying to find a way to reproduce this style of programming on Windows with C++, but all the libraries I know of would produce binaries that are huge with respect to their function. Even the simplest Qt program, for example, is generally several megabytes large. I'm not against packaging dependencies along with distributables, but I wouldn't want to do it to that level.
Binary size is not one of my main goals but simplicity is, and big libraries like these are, by construction, not simple. If binary size were your primary concern you could use runtime compression, just like kkrieger or malware.
A possibility would be to go commando and use only the ISO standard C++ libraries, but rebuilding a sockets or networking layer for a small, single-purpose application is not really something anyone would want to be troubled with.
For some reason I thought there was some general-purpose library that Windows developers could count on everyone and their grandma having readily available, but now I don't know if anything like that exists. What can you use to write code that adheres to the Unix philosophy but targets Windows?
You should target the Win32 API directly. You can't get much lower level than that. In the Windows world, everything directly or indirectly wraps the SDK functions, including the so-called "standard C++ libraries".
Alternatively, you could use something like MFC or WTL, which are relatively thin C++ wrappers over the Win32 API. Because of the overhead of the class libraries, such programs will be slightly larger than those created using only the SDK, but nowadays the actual overhead is completely insignificant.
The desires expressed in your question are precisely why I learned, and still use, the Win32 API today, so that's definitely what I would go with. Plus, your programs will look and feel native, which is way better than the output of most "cross-platform GUI toolkits". The advantages of this can't be overstated.
But if you just open up Visual Studio and compile a simple little SDK "Hello World" app, it'll still be awfully large -- kilobytes, to be sure, but that still seems like a lot for about the simplest app imaginable. If you really need to cut things down further, you can try telling Visual Studio not to link the C runtime library and define your own entry point. This does mean that you'll have to implement all of your own startup initialization code, but it can reduce the size of a trivial app substantially.
Matt Pietrek had this same idea some years ago, although you'll probably want to take time to "modernize" his original code significantly if you decide to go this route.
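A rough sketch of that CRT-free approach, assuming an x64 build; the entry-point name and the exact compiler/linker switches in the comment are assumptions that vary by toolchain and Visual Studio version (on x86 the entry point must additionally be __stdcall):

    // Build, roughly: cl /O1 tiny.cpp /link /NODEFAULTLIB /ENTRY:rawMain /SUBSYSTEM:WINDOWS kernel32.lib user32.lib
    #include <windows.h>

    // With the CRT gone there is no main()/WinMain() startup code; the loader jumps straight here.
    extern "C" int rawMain()
    {
        MessageBoxW(NULL, L"Hello from a CRT-free program", L"tiny", MB_OK);
        ExitProcess(0);     // do not rely on returning: there is no CRT left to clean up and exit
    }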
FLTK is a popular cross-platform minimal GUI toolkit.
Or, a popular alternative if you don't need too much detailed interaction is just to fire up a minimal embedded web server and do all the 'GUI' in HTML in a browser.

Working on a cross platform library

What are the best practices on writing a cross platform library in C++?
My development environment is Eclipse CDT on Linux, but my library should also be able to compile natively on Windows (with Visual C++, for example).
Thanks.
To some extent, this is going to depend on exactly what your library is meant to accomplish.
If you were developing a GUI application, for instance, you would want to focus on using a well-tested cross-platform framework such as wxWidgets.
If your library depends primarily on File IO, you would want to make sure you use an existing well-tested cross-platform filesystem abstraction library such as Boost Filesystem.
If your library is none of the above (i.e. there are no existing well-tested cross-platform frameworks for you to use), your best bet is to make sure you adhere to standard C++ as much as possible (this means don't #include <linux.h> or <windows.h>, for instance). When that isn't possible (e.g. your library reads raw sound data from a microphone), you'll want to make sure the implementation details for a given platform are sufficiently abstracted away so that you minimize the work involved in porting your library to another platform.
To my knowledge, there are a few things you can do:
You can divide the platform-specific code into different namespaces.
You can use the PIMPL idiom to hide platform-specific code.
You can use macros to decide which code gets compiled (in this case the code will be platform specific); see the sketch after this list.
Test your library in multiple environments.
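A small sketch of the namespace and macro approaches together (PIMPL would push the split one level deeper, behind an opaque pointer). The names mylib and cpuCount are invented:

    // cpu.hpp -- the public, platform-neutral declaration
    namespace mylib {
        unsigned cpuCount();            // how many logical CPUs the machine has
    }

    // cpu.cpp -- platform-specific definitions, selected at compile time
    // (equivalently, put each branch in its own .cpp and let the build system pick one)
    #ifdef _WIN32
      #include <windows.h>
      unsigned mylib::cpuCount() {
          SYSTEM_INFO si;
          GetSystemInfo(&si);
          return si.dwNumberOfProcessors;
      }
    #else
      #include <unistd.h>
      unsigned mylib::cpuCount() {
          long n = sysconf(_SC_NPROCESSORS_ONLN);
          return n > 0 ? static_cast<unsigned>(n) : 1u;
      }
    #endif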
Depending on what you are doing, it might be good to use libraries such as Boost because they are not specific to a platform. The downside (or possibly the good side) is that you will force the use of the libraries you included.
Couple of suggestions from my practical experience:
1) Make sure your sources compile regularly on all of your targeted platforms. Don't wait until the end; this helps catch errors early. Use a continuous build system -- it makes life a lot easier.
2) Never use platform-specific headers. Not even for writing native code -- for all you know, some stuff in a Windows header might expect some string which was ABC in XP but got changed to ABC.12 in Win7.
3) Use ideas from the STL and Boost and then build on top of them. Never consider them a panacea for your problems, though -- the STL is easy to ship with your code, but Boost is not.
4) Do not use compiler-specific constructs like __STDCALL. This is asking for hell.
5) The same code compiled with similar compiler options in g++ and cl might result in different behavior. Keep a copy of your compiler's manual very handy.
Anytime I work on something like this I try and build it in the different environments that I want to be supported. Similarly if you were making a web page and you wanted to make sure it worked in IE, Firefox, and Chrome you'd test it in all three of those browsers. Test it in the different environments you want to support, and you'll know what systems you can safely say it works for.
The question as stated is a bit abstract, but you can give Qt a consideration.
It's really just as simple as "don't use anything platform specific". The wealth of freely available tools these days makes writing cross-platform code in C++ a snap. For those rare but occasional cases where you really do need to use platform-specific APIs, just be sure to separate them out via #defines or, better in my opinion, distinct .cpp files for each platform.
There are many alternatives for cross platform libraries but my personal preferences are:
GUI: Qt
OS abstraction (though Qt does a great job of this all by itself): Boost
Cross-platform Makefiles: CMake
The last one, CMake, has been a tremendous help for me over the last few years for keeping my build environment sane while doing dual-development on Windows & Linux. It has a rather steep learning curve but once it's up and running, it works exceptionally well.
You mean besides continuous integration and testing on target platforms? Or besides using design to abstract away the implementation details?
No, can't think of anything.

C++ Programming on the Linux Platform

I am a software engineer and I work with VC++ and C++ on the Windows OS.
Are there any major differences when it comes to coding C++ in a Linux environment?
Or are there just some adjustments that we have to make when we need to code C++ on Linux?
It would depend on the types of projects you've worked on and which native Windows APIs you made use of. For example, if you used the native Windows API for everything, you're going to have a pretty big task ahead of you; it'd be worth making your project(s) work nicely with Wine instead.
In the Linux environment you have the man pages -- quite detailed documentation of almost everything :). As mentioned above, look at POSIX, and while I recommend Qt, it provides a LOT of abstractions for things you might want to learn to do the Linux way (e.g. sockets, the filesystem...).
Use the POSIX API instead of the Win32 API.
Use gtkmm, Qt, or wxWidgets instead of MFC.
The Linux programming world is very different from the one you are familiar with in the Windows world. You have to understand it and get used to it. Once you understand it, you will not want to come back.
You have many small, good tools that work with each other, rather than an all-in-one MSVC solution. For example:
On Linux you have the compiler as a stand-alone tool (the GNU Compiler Collection), the build system as a stand-alone tool (autotools, CMake), the GNU Debugger as a stand-alone tool, and very good editors as stand-alone tools (like the hard-core vim/emacs).
There are integrated development environments like Eclipse, NetBeans, KDevelop and Anjuta, but you still have to understand how things work underneath.
You should understand that each separate tool is very powerful and integrates with others.
OS-level API - it is designed for simplicity. You'll rarely find calls like CreateProcess with a bazillion parameters; instead you have a simple fork()+exec(). man is your real friend for everything connected to the system API and the standard C library.
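A small sketch of that fork()+exec() pattern, spawning /bin/ls purely as an example:

    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        pid_t pid = fork();                     // duplicate the current process
        if (pid == 0) {
            // child: replace our image with /bin/ls
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            _exit(127);                         // only reached if execl() failed
        }
        int status = 0;
        waitpid(pid, &status, 0);               // parent: wait for the child to finish
        std::printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }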
GUI - you have two big GUI libraries, Qt and GTK. Qt is a great C++ library that makes GUI development enjoyable (unlike MFC). GTK has both C and C++ APIs -- GTK and gtkmm (no experience with them).
i18n/l10n/Unicode - this is where Linux programming makes life easier. Almost everything is UTF-8: no wide-character API crap, no issues opening Chinese file names with a simple fopen or ifstream, no third-party library that can't open a file with a Unicode name. Great built-in tools like gettext are available, and good translation toolkits like KBabel.
Libraries - this is where Linux programming makes you hate Windows. Almost every single free library is already installed or available with a simple apt-get or yum install. No debug/release incompatibility crap, no DLL_EXPORT-ing; it's simple and robust, and making shared objects is as easy as working with static libraries (and most projects do not use static libraries at all).
My $0.02
(I'm a Linux programmer who has dealt a lot with Windows development.)
It depends on how many windows-specific things you've been using. The standard part of C++ is the same, but using that will not get you much further than command-line applications.
There's also the whole makefile-instead-of-letting-VS-build-for-you thing. Depending on what tool (or IDE) you decide to use in Linux, that could be a big difference.
I have worked quite a bit on both platforms and like them both, but in general I found most developers to like one and hate the other.
I would describe the *nix environment as "geek friendly": many excellent and very flexible tools at your disposal. Some of them have a steep learning curve, and some are simply broken but still popular for some reason (make), but if you are willing to invest some time in properly learning them, the reward is high. In fact, I use many *nix tools even when working on Windows: vim, grep, perl, etc...
On the other hand, the Windows platform offers the Win32 API, which has far more functionality than POSIX, is very well documented and is supported by very good tools. Debuggers on Windows (especially windbg) are generally more powerful than any *nix debugger I have tried, although gdb is generally good enough for most tasks. Deployment of executables is also easier than in the Linux world -- in fact, the only truly reliable way to deploy software on Linux is to ship the source code and build it on clients' machines via configure/make.
I would suggest using a build system like SCons, which works very well on both Linux and Win32.
Take a look at the source to some open-source project that runs on both Linux and Windows. Typically, over 80% of the code is identical, and the bigger the project the less the system-specific part tends to be. Unfortunately, there can be hard parts (threading, non-blocking network IO, GUI details) in the system-specific code.
There are some major differences that I can think of:
Tools. Good and bad points. If you are used to Visual Studio, there is nothing quite like that available. Each Linux IDE has some issues. On the other hand, the debugging tools especially are very good. But all in all, you are expected to create your own working environment from what's available.
APIs. Documentation varies wildly. Some components are well documented, but often you end up reading the source code to figure out how something is supposed to work. On the other hand, you have the source code, so eventually you have every tool you could possibly need to figure out why something doesn't work.
The Linux programming community is usually very good as long as you remember to behave and you find the right places. SO isn't half bad in some issues, but sometimes you need to find other places.
Things are not quite as automatic as you might be used to in the Windows world. Yes, some tools allow you to create projects without Makefile knowledge, but really, you should learn how to use them. In the Windows world it is much more common that you never edit the project files (e.g. Makefiles) by hand.
If you want to work in kernel space (drivers, etc.), C is a better bet than C++, since the kernel is written in it.
And I agree with people suggesting Qt. Very nice widget set. Beats at least Swing (yes, I know, it's Java) hands down. And Qt Creator isn't half bad.
Don't underestimate the power of shell scripting! Something very few Windows programmers have figured out, but you can do a hell of a lot with them to help your work.
A typical windows programmer who is used to Visual C++ might find the following aspects of Linux C++ programming novel, or difficult:
Linux programming isn't linux programming, it's Unix programming. Unix programming's roots go back a lot further than the MS-DOS roots of Windows, and it shows in a lot of places.
Windows programmers tend to think about the environment first -- the IDE tools (your GUI editor, compiler, debugger). Unix programmers tend to be arranged in various tribes; many core Unix/Linux C++ programmers are very comfortable working from the command line without an IDE, and some, I'm sure, use Visual Studio-style IDEs on Linux, of which there are many.
I personally found I had to learn how to read (and maybe write) a makefile, and build a bunch of standard Linux/Unix applications from source (understanding how to type my way through steps like 'autoconfiguration' and the various "--command-line-options" one might select there), before I got the feel and the flavour of the environment.
Until you are a seasoned Linux system administrator you might want to stick with the newbie-friendly Linux distributions (like Ubuntu).