VC++ replaces defines across different objects, GCC does not - C++

I have a big app using many static libs, which is platform independent and deployed under Windows and Linux.
All static libs and the main() itself are compiled with two defines:
-DVERSION=1.0.0 -DBUILD_DATE=00.00.0000
These defines are used by macros inside each static lib and inside the main to store the current lib version inside a registry-like class.
Under GCC / Linux this works very well - you can list all linked modules and display their real version and build date, e.g.:
ImageReader 0.5.4 (12.01.2010)
Compress 1.0.1 (03.01.2010)
SQLReader 0.3.3 (22.12.2009)
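For reference, here is a minimal sketch of the kind of registration macro described above; all names are hypothetical, not the asker's actual code.

// version_registry.h (hypothetical): each static lib registers itself once,
// using the VERSION and BUILD_DATE values passed on its own compile line.
#include <string>
#include <vector>

struct ModuleInfo
{
    std::string name, version, date;
    ModuleInfo(const char* n, const char* v, const char* d) : name(n), version(v), date(d) {}
};

inline std::vector<ModuleInfo>& moduleRegistry()
{
    static std::vector<ModuleInfo> registry;    // the "registry-like class"
    return registry;
}

#define MODULE_STR2(x) #x
#define MODULE_STR(x)  MODULE_STR2(x)

// Placed in one .cpp of each lib, compiled with -DVERSION=... -DBUILD_DATE=...
#define REGISTER_MODULE(name) \
    static const bool name##_registered = (moduleRegistry().push_back( \
        ModuleInfo(#name, MODULE_STR(VERSION), MODULE_STR(BUILD_DATE))), true);

Because REGISTER_MODULE is expanded separately in every library, each expansion is supposed to bake in whatever VERSION and BUILD_DATE were on that library's own compile line.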
But: When I link the exactly same code with VisualStudio 2005 SP1 I get only the version and build date of the last compiled module:
ImageReader 0.5.4 (12.01.2010)
Compress 0.5.4 (12.01.2010)
SQLReader 0.5.4 (12.01.2010)
Does anybody have an idea? Is this an "optimization" issue of the VC++ linker?

Well, Visual Studio supports solutions with multiple projects. And its dependency engine is capable of detecting that a changed macro value requires a project to be recompiled. Occam's razor says that the libs simply got rebuilt and acquired the new VERSION macro value.

Preprocessor defines are resolved by the preprocessor stage of the compiler, not the linker.
There could be an issue with precompiled headers though in VC++.
Otherwise, to really tell I'd like to see the source code doing the actual printing of the version (date).

This doesn't have anything to do with the Visual Studio linker; it's just a matter of preprocessor macros, so the problem is already at the very beginning, before the compiler even gets to work.
What does the compile line look like in your Visual Studio build? My first idea is that for some reason, the defines (-D arguments) are all added to a single command line, and the last one always wins.

I'm assuming you have an app which then links to these libraries, and it's in this app that you're seeing the identical version numbers.
Make sure that the app doesn't have these -D switches as well. If not, then my guess is that the VC compiler is being clever and triggering a build of the dependent projects with the same -D switches, rather than triggering the build via the project file.
Also, the best way to version these binaries is by employing macros in headers/source directly and giving them all unique names for each library. That way they can't interfere with each other (unless you clone one of the headers into an app, duplicating the macro defs), and you're no longer dependent on the compiler to do it properly.
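As an illustration of the unique-name approach (hypothetical headers, values taken from the output above):

// imagereader/version.h (hypothetical)
#define IMAGEREADER_VERSION    "0.5.4"
#define IMAGEREADER_BUILD_DATE "12.01.2010"

// compress/version.h (hypothetical)
#define COMPRESS_VERSION    "1.0.1"
#define COMPRESS_BUILD_DATE "03.01.2010"

Since every library reports its version through its own uniquely named macros, nothing can collide when the libraries are linked together, no matter which compiler is used.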

This can be an issue if you are using precompiled headers. Try building the application with the precompiled headers option disabled.

"These defines are used by macros inside each static lib and inside the main to store the current lib version inside a registry-like class."
You're not violating the One Definition Rule by any chance? If you have one class, it should have one definition across all libraries. It sounds like the class definition depends on a version macro, that macro is defined differently in different parts of your program, and thus you violate the ODR. The penalty for that is Undefined Behavior.
It seems that the MS linker takes advantage of the ODR by ignoring everything but the first definition. After all, if all definitions of X are the same, then you can ignore all but the first.
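To make the ODR point concrete, here is a hypothetical reduction of the pattern (not the asker's code):

// version.h (hypothetical): included by every static lib, each of which is
// compiled with a different -DVERSION=... value.
#define VSTR2(x) #x
#define VSTR(x)  VSTR2(x)

inline const char* moduleVersion() { return VSTR(VERSION); }

// ImageReader, Compress and SQLReader each now contain a definition of
// moduleVersion() with the same mangled name but a different string literal.
// The ODR requires all those definitions to be identical; since they are not,
// the behavior is undefined, and a linker is free to keep just one of them,
// which would make every module report the same version and build date.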

Related

C++ Linker issues, is there a generalized way to troubleshoot these?

I know next to nothing about the linking process, and it almost always gets in the way when I am trying to start a new project or add a new library. Whenever I search for fixes to these types of errors, I find people with a similar problem but rarely any sort of fix.
Is there any generalized way of going about finding what the problem is, and fixing it?
I'm using Visual Studio 2010, and am statically linking my libraries into my program. My problems always seem to stem from conflicts with LIBCMT(D).lib, MSVCRT(D).lib, and a few other libraries doubly defining certain functions. If it matters at all, my intent is to avoid using "managed" C++.
If your error is related to LIBCMT(D).lib and the like, it is usually because you are linking against a library that uses a different CRT version than yours. The only real fix is either to use the library compiled for the same version of the CRT you use (often there are "debug" and "release" versions for exactly this reason), or (if you are desperate) to change the CRT version you use to match the library's.
What is happening behind the scenes is that both your program and your library need the CRT functions to work correctly, and each one already links against it. If they are linking against the same version of it nothing bad happens (the linker sees that it's the same and doesn't complain), otherwise there are multiple conflicting implementations of the same functions, so the linker doesn't know which are right for which object modules (and also, since they are probably not binary compatible, internal data structures of the two CRTs will be incompatible).
The specific link errors you mentioned (with LIBCMT(D).lib, MSVCRT(D).lib libraries) are related to conflicts in code generation options between modules/libraries in your program.
When you compile a module, the compiler automatically inserts in the resulting .obj some references to the runtime libraries (LIBCMT&MSVCRT). Now, there is one version of these libraries for each code generation mode (I'm referring to the option at Configuration properties -> C/C++ -> Code Generation -> Runtime Library). So if you have two modules compiled with a different mode, each of them will reference a different version of the library, the linker will try to include both, and of course there'll be duplicated symbols, since essentially all the symbols are the same in these libraries, only their implementations differ.
The solution comes in three parts. First, make sure all the modules in a project use the same mode. Second, if you have dependencies between projects, all of them have to use the same mode. Third, if you use third-party libraries, you have to either know which mode they use (and adopt it) or be able to recompile them with the desired mode.
The last one is the most difficult. Sometimes, libraries come pre-compiled, and not always the provider gives information about the mode used. Worse, if you're using more than one third-party library, they may have conflicting modes. In those cases, you have no better option than trial-and-error.
Also notice that each Visual Studio version has its own set of runtime libraries, so when using third-party libraries you have to use those compiled with the same version of Visual Studio you're using. If the provider doesn't offer it, your only choice is to recompile yourself.
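If you need to see which of those modes a given module is actually being built with, one rough diagnostic is to rely on MSVC's documented predefined macros (_DLL is set by the DLL runtimes, _DEBUG by the debug runtimes); here is a sketch, assuming one of the four standard /M switches is in use:

// Drop this into any source file; the compiler prints which CRT variant
// that translation unit is being compiled against.
#if defined(_DLL) && defined(_DEBUG)
#pragma message("Runtime library: /MDd (MSVCRTD)")
#elif defined(_DLL)
#pragma message("Runtime library: /MD (MSVCRT)")
#elif defined(_DEBUG)
#pragma message("Runtime library: /MTd (LIBCMTD)")
#else
#pragma message("Runtime library: /MT (LIBCMT)")
#endif

Seeing the flavor of each translation unit during the build helps when hunting for the odd project or module that is configured differently from the rest.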

LNK2038, iterator mismatch error, need to ignore

I'm getting the linker error LNK2038 when trying to convert a VS2008 project to VS2010. This error occurs when two projects are compiled such that one uses the _DEBUG preprocessor macro and the other does not. Basically, I have a 3rd party library that only has release .libs, so when I try to use that library when building my project in debug mode I get this mismatch.
I understand why Microsoft is giving this error (STL iterator safety), however our project does not use Microsoft's STL, we use STLPort, so this error means nothing to our project. I just need a way to prevent it from doing this check.
Inside of the STL includes there is a file called yvals.h, which includes the #pragma detect_mismatch definition for the various _ITERATOR_DEBUG_LEVEL settings. That set of definitions is wrapped in an #ifndef _ALLOW_ITERATOR_DEBUG_LEVEL_MISMATCH, #endif. However, even if I define _ALLOW_ITERATOR_DEBUG_LEVEL_MISMATCH as a preprocessor macro for my entire project I'm still getting the same linker error. I can even alter yvals.h to define that macro and it does nothing (I'm assuming because the STL itself would need to be recompiled).
So my question is basically: what steps can I take to make _ALLOW_ITERATOR_DEBUG_LEVEL_MISMATCH actually work as intended, so that my project doesn't do this check anywhere when compiling in VS2010?
EDIT: I know this is a late response but I just found this post and realized I didn't post the solution. As others mentioned there was a mismatch in the libraries. As it turns out VS2010 changes the default directories for certain projects (I found a thread on MSDN at one point full of complaints about it), and that directory change had caused VS2010 to look in the wrong directory for the debug library, and it was finding the release library instead.
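For background, LNK2038 comes from MSVC's #pragma detect_mismatch mechanism, which VS2010's yvals.h uses to record _ITERATOR_DEBUG_LEVEL. A minimal, hypothetical illustration of the mechanism (the key and values here are made up):

// a.cpp, compiled into one library
#pragma detect_mismatch("my_iterator_mode", "debug")

// b.cpp, compiled into another library
#pragma detect_mismatch("my_iterator_mode", "release")

The linker refuses with LNK2038 to combine objects that carry the same key with different values, which is exactly what yvals.h arranges for _ITERATOR_DEBUG_LEVEL unless _ALLOW_ITERATOR_DEBUG_LEVEL_MISMATCH is in effect when the objects containing those records are compiled.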
You must use the same version of the standard library, compiled with the same options, if you expect to successfully link. If you use STLPort, then you can only link with libraries which use the STLPort, not with libraries which use the VC++ standard implementation. If you mix, either you will fail to link, or you will get strange runtime errors. The problem is that things like std::vector<>::iterator may be defined completely differently; depending on where and how they are used, you will find yourself using an instance constructed in a different library, with a different layout.

MS vs Non-MS C++ compiler compatibility

Thinking of using MinGW as an alternative to VC++ on Windows, but am worried about compatibility issues. I am thinking in terms of behaviour and performance on Windows (any chance a MinGW-compiled EXE might act up), and also in terms of calling the Windows API, third-party DLLs, generating and using compatible static libraries, and other issues encountered when mixing parts of the same application between the two compilers.
First, MinGW is not a compiler but an environment; it is bundled with gcc.
If you think of using gcc to compile code and have it call the Windows API, it's okay as it's C; but for C++ DLLs generated by MSVC, you might have a harsh wake-up call.
The main issue is that in C++, each compiler has its own name mangling (or more generally ABI) and its own Standard library. You cannot mix two different ABI or two different Standard Libraries. End of the story.
Clang has a specific MSVC compatibility mode, allowing it to accept code that MSVC accepts and to emit code that is binary compatible with code compiled with MSVC. Indeed, it is even officially supported in Visual Studio.
Obviously, you could also simply do the cross-DLL communication in C to circumvent most issues.
EDIT: Kerrek's clarification.
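As a sketch of that last point, keeping the DLL boundary in plain C might look like this (all names are illustrative, not from any real library):

// widget_api.h - shared between an MSVC-built DLL and a MinGW-built client;
// only C types and an opaque handle cross the boundary.
#ifdef BUILDING_WIDGET_DLL
#define WIDGET_API __declspec(dllexport)
#else
#define WIDGET_API __declspec(dllimport)
#endif

#ifdef __cplusplus
extern "C" {
#endif

typedef struct Widget Widget;             /* opaque: no C++ types leak out */

WIDGET_API Widget* widget_create(void);
WIDGET_API int     widget_process(Widget* w, const char* input);
WIDGET_API void    widget_destroy(Widget* w);

#ifdef __cplusplus
}
#endif

Because no classes, exceptions or standard-library objects appear in the interface, the incompatible name mangling and Standard libraries of the two compilers never have to meet; each side can wrap the handle in whatever C++ it likes internally.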
It is possible to compile a large amount of C++ code developed for VC++ with the MinGW toolchain; however, the ease with which you complete this task depends significantly on how C++-standards-compliant the code is.
If the C++ code utilizes VC++ extensions, such as __uuidof, then you will need to rewrite these portions.
You won't be able to compile ATL & MFC code with MinGW because the ATL & MFC headers utilize a number of VC++ extensions and depend on VC++-specific behaviors:
try-except Statements
__uuidof
throw(...)
Calling a function without forward-declaring it.
__declspec(nothrow)
...
You won't be able to use VC++-generated LIB files, so you can't use MinGW's linker, ld, to link against those static libraries without recompiling the library code as a MinGW .a archive.
You can link with closed-source DLLs; however, you will need to export the symbols of the DLL as a DEF file and use dlltool to make the corresponding .a archive (similar to the VC++ LIB file for each DLL).
MinGW's inclusion of the w32api project basically means that code using the Windows C API will compile just fine, although some of the newer functions may not be immediately available. For example, a few months ago I was having trouble compiling code that used some of the "secure" functions (the ones with the _s suffix), but I got around this problem by exporting the symbols of the DLL as a DEF, preparing an up-to-date .a archive, and writing forward declarations.
In some cases, you will need to adjust the arguments to the MinGW preprocessor, cpp, to make sure that all header files are properly included and that certain macros are predefined correctly.
What I recommend is just trying it. You will definitely encounter problems, but you can usually find a solution to each by searching on the Internet or asking someone. If for no other reason, you should try it to learn more about C++, differences between compilers, and what standards-compliant code is.

Building C++ source code as a library - where to start?

Over the months I've written some nice, generic-enough functionality that I want to build as a library and link against dynamically, rather than importing 50-odd header/source files.
The project is maintained in Xcode and Dev-C++ (I do understand that I might have to go command line to do what I want) and has to link against OpenGL and SDL (dynamically in SDL's case). Target platforms are Windows and OS X.
What am I looking at at all?
What will be the entry point of my library if it needs one?
What do I have to change in my code? (calling conventions?)
How do I release it? My understanding is that headers and the compiled library (.dll, .dylib(, .framework), whatever it'll be) need to be available for the project - especially as template functionality cannot be included in the library by nature.
What else do I need to be aware of?
I'd recommend building as a static library rather than a DLL. A lot of the issues of exporting C++ functions and classes go away if you do this, provided you only intend to link with code produced by the same compiler you built the library with.
Building a static library is very easy as it is just a collection of .o/.obj files - a bit like a ZIP file but without compression. There is no need to export anything - just include the library in the list of files that your application links with. To access specific functions or classes, just include the relevant header file. Note you can't get rid of header files - the C++ compilation model, particularly for templates, depends on them.
It can be problematic to export a C++ class library from a dynamic library, but it is possible.
You need to mark each function to be exported from the DLL (syntax depends on the compiler). I'm poking around to see if I can find how to do this from Xcode. In VC it's __declspec(dllexport), and in CodeWarrior it's #pragma export on/#pragma export off.
This is perfectly reasonable if you are only using your binary in-house. However, one issue is that C++ methods are named differently by different compilers. This means that nobody who uses a different compiler will be able to use your DLL, unless you are only exporting C functions.
Also, you need to make sure the calling conventions match in the DLL and the DLL's client. This means you should either have the same default calling convention flag passed to the compiler for both the DLL and the client, or better, explicitly set the calling convention on each exported function in the DLL, so that it won't matter what the default is for the client.
This article explains the naming issue:
http://en.wikipedia.org/wiki/Name_decoration
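As a small, hedged example of that advice (the function name is invented for illustration), pinning the convention on the export itself:

// Explicit __cdecl (or __stdcall) on each exported function means the
// client's default calling-convention switch (/Gd, /Gz, /Gr) cannot
// introduce a mismatch; extern "C" also sidesteps C++ name mangling.
extern "C" __declspec(dllexport) double __cdecl compute_area(double w, double h)
{
    return w * h;
}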
The C++ standard doesn't define a standard ABI, and that's bad news for people trying to build C++ libraries. This means that you get different behavior from your compiled code depending on which flags were used to compile it, and that can lead to mysterious bugs in code that compiles and links just fine.
This extends beyond just different calling conventions - C++ code can be compiled to support or not support RTTI and exception handling, and with various optimizations that can affect the memory layout of class instances, which C++ code relies on.
So, what can you do? I would build C++ libraries inside my source tree, and make sure that they're built as part of my project's build, and that all the libraries and the code that links to them use the same compiler flags.
Note that name mangling, which was supposed to at least prevent you from linking object files that were compiled with different compilers or compiler flags, only mostly works, and there are certain things you can do, especially with GCC, that will result in code that links just fine and fails at runtime.
You have to be extra careful with vendor-supplied dynamic C++ libraries (Qt on most Linux distributions, for example). I've seen instances of vendor-supplied libraries that were compiled in ways that prevented certain things from working properly. For example, some Red Hat Linux releases (maybe all of them) disabled exceptions in Qt, which made it impossible to catch exceptions in main() if the exceptions were thrown in a Qt callback. Fun.

Adding Boost makes Debug build depend on "non-D" MSVC runtime DLLs

I have an annoying problem which I might be able to somehow circumvent, but on the other hand would much rather be on top of it and understand what exactly is going on, since it looks like this stuff is really here to stay.
Here's the story: I have a simple OpenGL app which works fine: never a major problem in compiling, linking, or running it. Now I decided to try to move some of the more intensive calculations into a worker thread, in order to possibly make the GUI even more responsive — using Boost.Thread, of course.
In short, if I add the following fragment in the beginning of my .cpp file:
#include <boost/thread/thread.hpp>
void dummyThreadFun() { while (1); }
boost::thread p(dummyThreadFun);
, then I start getting "This application has failed to start because MSVCP90.dll was not found" when trying to launch the Debug build. (Release mode works ok.)
Now, looking at the executable using Dependency Walker, which also does not find this DLL (as expected, I guess), I could see that we are looking for it in order to be able to call the following functions:
?max@?$numeric_limits@K@std@@SAKXZ
?max@?$numeric_limits@_J@std@@SA_JXZ
?min@?$numeric_limits@K@std@@SAKXZ
?min@?$numeric_limits@_J@std@@SA_JXZ
Next, I tried to convert every instance of min and max to use macros instead, but probably couldn't find all references to them, as this did not help. (I'm using some external libraries for which I don't have the source code available. But even if I could do this — I don't think it's the right way really.)
So, my questions — I guess — are:
Why do we look for a non-debug DLL even though we are working with the debug build?
What is the correct way to fix the problem? Or even a quick-and-dirty one?
I had this first in a pretty much vanilla installation of Visual Studio 2008. Then tried installing the Feature Pack and SP1, but they didn't help either. Of course also tried to Rebuild several times.
I am using prebuilt binaries for Boost (v1.36.0). This is not the first time I use Boost in this project, but it may be the first time that I use a part that is based on a separate source.
Disabling incremental linking doesn't help. The fact that the program is OpenGL doesn't seem to be relevant either — I got a similar issue when adding the same three lines of code into a simple console program (but there it was complaining about MSVCR90.dll and _mkdir, and when I replaced the latter with boost::create_directory, the problem went away!!). And it's really just removing or adding those three lines that makes the program run ok, or not run at all, respectively.
I can't say I understand Side-by-Side (don't even know if this is related but that's what I assume for now), and to be honest, I am not super-interested either — as long as I can just build, debug and deploy my app...
Edit 1: While trying to build a stripped-down example that anyway reproduces the problem, I have discovered that the issue has to do with the Spread Toolkit, the use of which is a factor common to all my programs having this problem. (However, I never had this before starting to link in the Boost stuff.)
I have now come up with a minimal program that lets me reproduce the issue. It consists of two compilation units, A.cpp and B.cpp.
A.cpp:
#include "sp.h"
int main(int argc, char* argv[])
{
mailbox mbox = -1;
SP_join(mbox, "foo");
return 0;
}
B.cpp:
#include <boost/filesystem.hpp>
Some observations:
If I comment out the line SP_join of A.cpp, the problem goes away.
If I comment out the single line of B.cpp, the problem goes away.
If I move or copy B.cpp's single line to the beginning or end of A.cpp, the problem goes away.
(In scenarios 2 and 3, the program crashes when calling SP_join, but that's just because the mailbox is not valid... this has nothing to do with the issue at hand.)
In addition, Spread's core library is linked in, and that's surely part of the answer to my question #1, since there's no debug build of that lib in my system.
Currently, I'm trying to come up with something that'd make it possible to reproduce the issue in another environment. (Even though I will be quite surprised if it actually can be repeated outside my premises...)
Edit 2: Ok, so here we now have a package using which I was able to reproduce the issue on an almost vanilla installation of WinXP32 + VS2008 + Boost 1.36.0 (still pre-built binaries from BoostPro Computing).
The culprit is surely the Spread lib, my build of which somehow requires a rather archaic version of STLPort for MSVC 6! Nevertheless, I still find the symptoms relatively amusing. Also, it would be nice to hear if you can actually reproduce the issue — including scenarios 1-3 above. The package is quite small, and it should contain all the necessary pieces.
As it turns out, the issue did not really have anything to do with Boost.Thread specifically, as this example now uses the Boost Filesystem library. Additionally, it now complains about MSVCR90.dll, not P as previously.
Boost.Thread has quite a few possible build combinations in order to cater for all the linking scenarios possible with MSVC. Firstly, you can either link statically to Boost.Thread or link to Boost.Thread in a separate DLL. You can then link to the DLL version of the MSVC runtime, or the static library runtime. Finally, you can link to the debug runtime or the release runtime.
The Boost.Thread headers try to auto-detect the build scenario using the predefined macros that the compiler generates. In order to link against the version that uses the debug runtime you need to have _DEBUG defined. This is automatically defined by the /MDd and /MTd compiler switches (but not by /MD), so it should be OK, but your problem description suggests otherwise.
Where did you get the pre-built binaries from? Are you explicitly selecting a library in your project settings, or are you letting the auto-link mechanism select the appropriate .lib file?
I believe I have had this same problem with Boost in the past. From my understanding it happens because the Boost headers use a preprocessor directive to link against the proper lib. If your debug and release libraries are in the same folder and have different names, the "auto-link" feature will not work properly.
What I have done is define BOOST_ALL_NO_LIB for my project (which prevents the headers from "auto-linking") and then use the VC project settings to link against the correct libraries.
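A rough sketch of that setup, assuming prebuilt binaries for VC9; the library file name below is only illustrative, as the exact name depends on your Boost build:

// In practice BOOST_ALL_NO_LIB would go into the project's Preprocessor
// Definitions so it precedes every Boost include; it is shown inline here.
#define BOOST_ALL_NO_LIB                  // suppress Boost's auto-link pragmas
#include <boost/thread/thread.hpp>

void worker() {}

int main()
{
    boost::thread t(worker);              // resolves against whichever .lib you
    t.join();                             // listed explicitly, e.g.
    return 0;                             // libboost_thread-vc90-mt-gd-1_36.lib
}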
Looks like other people have answered the Boost side of the issue. Here's a bit of background info on the MSVC side of things, that may save further headache.
There are 4 versions of the C (and C++) runtimes possible:
/MT: libcmt.lib (C), libcpmt.lib (C++)
/MTd: libcmtd.lib, libcpmtd.lib
/MD: msvcrt.lib, msvcprt.lib
/MDd: msvcrtd.lib, msvcprtd.lib
The DLL versions still require linking to that static lib (which somehow does all of the setup to link to the DLL at runtime - I don't know the details). Notice that in all cases the debug version has the d suffix. The C runtime uses the c infix, and the C++ runtime uses the cp infix. See the pattern? In any application, you should only ever link to the libraries in one of those rows.
Sometimes (as in your case), you find yourself linking to someone else's static library that is configured to use the wrong version of the C or C++ runtimes (via the awfully annoying #pragma comment(lib)). You can detect this by turning your linker verbosity way up, but it's a real PITA to hunt for. The "kill a rodent with a bazooka" solution is to use the /nodefaultlib:... linker setting to rule out the 6 C and C++ libraries that you know you don't need. I've used this in the past without problem, but I'm not positive it'll always work... maybe someone will come out of the woodwork telling me how this "solution" may cause your program to eat babies on Tuesday afternoons.
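For context, the embedded directive the linker is reacting to is equivalent to the following pragma (the /MT-built library is hypothetical; libcmt.lib and msvcrtd.lib are the real CRT names from the table above):

// Objects compiled with /MT carry a default-library directive equivalent to:
#pragma comment(lib, "libcmt")
// Link them into a /MDd program and the linker is asked for both libcmt.lib
// and msvcrtd.lib, producing the duplicate-symbol errors; a targeted
// /NODEFAULTLIB:libcmt.lib tells it to drop the one you don't want.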
This is a classic link error. It looks like you're linking to a Boost DLL that itself links to the wrong C++ runtime (there's also this page, do a text search for "threads"). It also looks like the boost::posix::time library links to the correct DLL.
Unfortunately, I'm not finding the page that discusses how to pick the correctly-built Boost DLL (although I did find a three-year-old email that seems to point to BOOST_THREAD_USE_DLL and BOOST_THREAD_USE_LIB).
Looking at your answer again, it appears you're using pre-built binaries. The DLL you're not able to link to is part of the TR1 feature pack (second question on that page). That feature pack is available on Microsoft's website. Or you'll need a different binary to link against. Apparently the boost::posix::time library links against the unpatched C++ runtime.
Since you've already applied the feature pack, I think the next step I would take would be to build Boost by hand. That's the path I've always taken, and it's very simple: download the BJam binary, and run the Boost Build script in the library source. That's it.
Now this got even a bit more interesting... If I just add this somewhere in the source:
boost::posix_time::ptime pt = boost::posix_time::microsec_clock::universal_time();
(together with the corresponding #include stuff), then it again works ok. So this is one quick and not even too dirty solution, but hey — what's going on here, really?
From memory, various parts of the Boost libraries need you to define some preprocessor flags in order to compile correctly - stuff like BOOST_THREAD_USE_DLL and so on.
The BOOST_THREAD_USE_DLL flag won't be what's causing this particular error, but it may be expecting you to define _DEBUG or something like that. I remember a few years ago in our Boost C++ projects we had quite a few extra BOOST_XYZ preprocessor definitions declared in the Visual Studio compiler options (or makefile).
Check the config.hpp file in the boost thread directory. When you pull in the ptime stuff it's possibly including a different config.hpp file, which may then define those preprocessor things differently.