I'm currently working on a project that has a lot of dependencies. It is written in Objective-C with C++ libraries, and uses cross-platform code with conditional compilation for the different platforms in the libraries.
After the latest update I have encountered a peculiar issue with preprocessor macros in the following code:
#if defined(DEBUG)
#warning WARNING_1
#elif defined(DEBUG_gibberish)
#warning WARNING_2
#elif defined(SOMETHING)
#warning WARNING_3
#else
#warning WARNING_DEFAULT
#endif
In this case the DEBUG macro is defined in the target's Apple LLVM 6.0 Preprocessing section of the Xcode build settings.
So basically the problem is that #if defined() doesn't work as expected in this particular project; if I copy the same code into a clean test project, it works as expected.
There is another interesting effect: if I define DEBUG_gibberish, that branch is evaluated instead of the else case. After performing a couple of experiments, it seems the preprocessor always evaluates the first true condition and then the second true condition, or #else if there was no second true condition.
I have already tried cleaning the project, deleting derived data, restarting Xcode, rebooting my Mac, and even voodoo dolls.
I would appreciate any thoughts on why this happens and how to fix it.
EDIT 1: I have a hierarchy of Xcode projects in my main project; the problematic library is a subproject of the main project. If I build the library separately, it works fine. If I compile it as a dependency of the main project, I encounter this issue.
Ok, I have found the problem.
I use some of the headers from the third-party libraries in my own project, to make my own subclasses of the libraries' classes.
I believe Xcode gives a composite representation of the warnings: it shows WARNING_1 from preprocessing the header inside the library, where the DEBUG macro is defined, and then WARNING_DEFAULT from preprocessing the same header inside my project, where DEBUG is not defined. Both warnings are shown as if they occurred in the same file.
That led me to the conclusion that something was wrong with the preprocessor or with my code, so I didn't think of the simple fact that my project and the library are built separately, and my project didn't define the needed macros.
In the end the solution was simple: I had to define the needed macros in my project as well, and then everything compiled fine.
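To illustrate (with hypothetical file and target names): the same header is preprocessed once per target, against that target's macro set, so it can legitimately emit both warnings in a single build.

// shared_header.h -- included by both the library target and the app target
#if defined(DEBUG)
#warning WARNING_1        // emitted when compiled by the library, which defines DEBUG
#else
#warning WARNING_DEFAULT  // emitted when compiled by the app, which does not
#endif

// Hypothetical compile invocations for the two targets:
//   clang++ -DDEBUG=1 -c library_subclass.cpp   -> WARNING_1
//   clang++           -c app_code.cpp           -> WARNING_DEFAULT
// Xcode attributes both warnings to the same header, which is what made it
// look like the preprocessor was taking two branches at once.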
Related
This is for Visual Studio 2015, C++.
I have one project that is compiled as a library and contains some #if/#else directives:
#ifdef DXTK
//...
#elif defined DXUT
//...
#else
//...
#endif
I have two different solutions (with a separate executable project each) that both include this library as a project.
I need to #define DXUT in one executable project and #define DXTK in the other.
The problem is that the preprocessor definitions I set in the executable projects (not the library) do not affect the library project's #if/#else directives.
I know one recommendation is to create different configurations for the library project and use one in one solution, and another in the other.
But is there a way to pass the preprocessor definition to the entire solution?
I tried adding /DDXUT to C/C++ -> Command Line for the executable projects, but it did not work.
How can I do this without creating a new configuration for each project?
Possible duplicate of this: Are Preprocessor Definitions compiled into a library?
Short answer: no, you cannot do what you want without different configurations, because the code compiled into that library was already preprocessed, with the branch chosen, before you ever added the library to a different project.
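A minimal sketch (hypothetical file name) of why: the #if branch is selected when the library's own translation units are compiled, and the chosen branch is frozen into the object code inside the .lib.

// renderer.cpp -- compiled when the static library itself is built
#ifdef DXTK
const char* backend = "DXTK";
#elif defined DXUT
const char* backend = "DXUT";
#else
const char* backend = "none";   // neither macro defined while building the lib
#endif

// The .lib now literally contains backend = "none". Defining DXUT in an
// executable project that merely links the .lib cannot bring the other
// branches back; the preprocessor discarded them long ago.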
I am developing a piece of software in C++ using Visual Studio on Windows. From the beginning, I would like to have it run on both Windows and Linux. Obviously, I won't compile the Linux binary on a Windows machine, but I still want to use Visual Studio to write the code.
When it comes to headers, I select which file to use based on preprocessor definitions.
A very rudimentary and simple example:
#pragma once
#ifndef PLATFORM_TIMER_H
#define PLATFORM_TIMER_H

#ifdef _WIN32
#include "win32/win32_timer.h"
#elif defined(__linux__)
#include "linux/linux_timer.h"
#endif

#endif // PLATFORM_TIMER_H
For the header it works just fine. But the .cpp file for the Linux implementation breaks the build on Windows. That's because the Linux .cpp file will get compiled no matter what, even on Windows, and since the Windows machine is missing the Linux headers, the functions it uses will be undeclared.
Question 1: What is the "industry standard" to deal with this?
Question 2: Is it reasonable to wrap both the .h and .cpp-files in "#ifdef PLATFORM" so that the code will only be enabled on the correct OS?
But the .cpp file for the Linux implementation breaks the build on Windows. That's because the Linux .cpp file will get compiled no matter what, even on Windows.
Why are you compiling the Linux-specific file for a Windows build?
Question 1: What is the "industry standard" to deal with this?
If you're going to create separate source files for Windows-specific and Linux-specific code, then the whole point would be that you use only the appropriate one for the current platform when you build.
An alternative approach would be to have both implementations in the same source file, using conditional compilation to choose which parts to use. That's pretty conventional, too, especially where the parts that vary are smaller than whole functions.
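A sketch of that single-file approach, assuming a simple monotonic timer (hypothetical function name):

// timer.cpp -- one translation unit; the platform branch is chosen at compile time
#ifdef _WIN32
#include <windows.h>

double now_seconds() {
    LARGE_INTEGER freq, count;
    QueryPerformanceFrequency(&freq);   // ticks per second
    QueryPerformanceCounter(&count);    // ticks since boot
    return static_cast<double>(count.QuadPart) / freq.QuadPart;
}
#else
#include <time.h>

double now_seconds() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}
#endif

Callers include a single header declaring double now_seconds(); and never see the platform split.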
Question 2: Is it reasonable to wrap both the .h and .cpp-files in "#ifdef PLATFORM" so that the code will only be enabled on the correct OS?
It would be strange to go to the trouble of creating separate, platform-specific source files and then use conditional compilation to include them all in every build. It could work, but it would not fall within my personal definition of "reasonable".
Any code specific to one operating system needs to have the proper #ifdef's set up, whether in header files or source files.
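For completeness, the Question 2 approach of guarding a whole .cpp file would look like this (a sketch reusing the question's file names); the file is fed to the compiler on every platform but contributes nothing on the wrong one:

// linux_timer.cpp -- compiles to an empty translation unit on Windows
#ifdef __linux__

#include "linux/linux_timer.h"

// ... Linux-only implementation, free to use Linux headers ...

#endif // __linux__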
Is it a bad practice to use a release version of 3rd party library in debug binary?
I am using a 3rd-party library that I compiled as a release .lib. My exe is a debug build under development. Then I got:
error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in test1.obj
After some googling I found that this is because I am trying to mix release with debug, and that I should probably compile the library in debug mode or otherwise fiddle with the _ITERATOR_DEBUG_LEVEL macro. But I am curious whether that is the recommended way, and why. It just seems cumbersome that I need to compile and keep both release and debug binaries for every 3rd-party library I intend to use, which will be many very soon, while having no intention of debugging into that code.
Mixing debug and release code is bad practice. The problem is that the two versions can depend on different fundamental parts of the C++ runtime library: how memory is allocated, the layout of structures such as iterators, and extra code generated for certain operations (e.g. checked iterators).
It's the same as mixing library files built with any other different settings. Imagine a case where a header file contains a structure that is used by both application and library. The library is built with structure packing and alignment set to one value and the application built with another. There are no guarantees that passing the structure from the application into the library will work since they could vary in size and member positions.
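As an illustration (not from the original question), a struct shared through a common header changes size and member offsets with the packing setting alone:

#include <cstdio>

#pragma pack(push, 1)
struct PackedMessage { char tag; int value; };   // sizeof == 5, no padding
#pragma pack(pop)

struct DefaultMessage { char tag; int value; };  // typically sizeof == 8

int main() {
    // If the library saw the packed layout and the application the default
    // one, they would disagree about where value lives and how big the
    // struct is -- exactly the mismatch described above.
    std::printf("%zu vs %zu\n", sizeof(PackedMessage), sizeof(DefaultMessage));
    return 0;
}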
Is it possible to build your 3rd-party libraries as DLLs? Assuming the interface to the functions is clean and does not try to pass any STL objects, you will be able to mix a debug application with release DLLs without problems.
Mixing debug and release libraries/binaries is good and very useful practice.
Debugging a large solution (100+ projects, for example) is typically not fast, or may not even be possible at all (for example, when not all projects can be built in debug). Previous commentators wrote that debug/release binaries could have different alignment or other differences; that is not true. All linking parameters are the same in debug and release binaries because they depend on the same architecture.
You have to remove all optimizations (/Od) from the selected project, then assign the release C++ runtime to it.
The issue arises because _DEBUG is defined in the project. Remove the macro from the definitions (Project -> Properties -> Preprocessor -> Preprocessor Definitions).
If the macro isn't in the Preprocessor Definitions, then you have to add it to "UndefinePreprocessorDefinitions".
The fact that it doesn't even link should be sufficient to prove it's bad practice.
Regarding maintaining separate builds - you don't need to do that. Here's a workaround that previously worked for me:
#ifdef _DEBUG
#define DEBUG_WAS_DEFINED   // remember that _DEBUG was set
#undef _DEBUG               // hide it while the offending header is processed
#endif

#include <culprit>

#ifdef DEBUG_WAS_DEFINED
#define _DEBUG              // restore it for the rest of the translation unit
#endif
Let me know if this works for you.
I have a C++/CLI project created with Visual Studio 2010 that targets .NET Framework 3.5 and PlatformToolset v90. Initially it requests the VC CRT of version 9.0.21022.8, but if I include the atlbase.h header it requests the VC CRT of version 9.0.30729.6161.
Why does this happen? And how can I make it target 9.0.30729.6161 without including atlbase.h?
I tried defining the macros _BIND_TO_CURRENT_CRT_VERSION=1 and _BIND_TO_CURRENT_VCLIBS_VERSION=1, but this didn't help.
The version is set by vc/include/crtassem.h; near the bottom you can see:
#ifndef _CRT_ASSEMBLY_VERSION
#if _BIND_TO_CURRENT_CRT_VERSION
#define _CRT_ASSEMBLY_VERSION "9.0.30729.6161"
#else
#define _CRT_ASSEMBLY_VERSION "9.0.21022.8"
#endif
#endif
So the rule is that you can explicitly override the version by #defining _CRT_ASSEMBLY_VERSION. Don't do that. As you noted in your question, #defining _BIND_TO_CURRENT_CRT_VERSION to 1 gets you the version string you want.
Running into this in a C++/CLI project is quite possible: you can compile C++/CLI code without ever #including any of the CRT header files, so you end up with a default version which, ironically, the linker defaults to its own version of the CRT. A workaround is to explicitly put #include <crtassem.h> in one of your source files. #including atlbase.h would do that too, since it pulls in CRT headers, but that is of course the big-hammer approach.
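Following that workaround, one source file could look like this (a sketch; the define is usually set project-wide rather than in a source file):

// Any one .cpp in the project: pull in the CRT assembly version header so
// the build binds to the current CRT instead of the RTM 9.0.21022.8 one.
#define _BIND_TO_CURRENT_CRT_VERSION 1   // assumption: not already set project-wide
#include <crtassem.h>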
Additional troubleshooting is available from Project + Properties, C/C++, Advanced, Show Includes = Yes. You'll see a trace of all the #include files getting included in the Output window.
Beware that you'll now have the additional burden to ensure that the up-to-date version of msvcr90.dll gets deployed on the user's machine. Your program will fail to start if it is missing or old.
I have a big app using many static libs; it is platform-independent and deployed under Windows and Linux.
All static libs and the main() itself are compiled with two defines:
-DVERSION=1.0.0 -DBUILD_DATE=00.00.0000
These defines are used by macros inside each static lib and inside the main to store the current lib version inside a registry-like class.
Under GCC/Linux this works very well: you can list all linked modules and display their real version and build date, e.g.:
ImageReader 0.5.4 (12.01.2010)
Compress 1.0.1 (03.01.2010)
SQLReader 0.3.3 (22.12.2009)
But when I link exactly the same code with Visual Studio 2005 SP1, I get only the version and build date of the last compiled module:
ImageReader 0.5.4 (12.01.2010)
Compress 0.5.4 (12.01.2010)
SQLReader 0.5.4 (12.01.2010)
Does anybody have an idea? Is this an "optimization" issue of the VC++ linker?
Well, Visual Studio supports solutions with multiple projects. And its dependency engine is capable of detecting that a changed macro value requires a project to be recompiled. Occam's razor says that the libs simply got rebuilt and acquired the new VERSION macro value.
Preprocessor defines are resolved by the preprocessor stage of the compiler, not the linker.
There could be an issue with precompiled headers in VC++, though.
Otherwise, to really tell I'd like to see the source code doing the actual printing of the version (date).
This doesn't have anything to do with the Visual Studio linker; it's just a matter of preprocessor macros, so the problem is already at the very beginning, before the compiler even gets to work.
What does the compile line look like in your Visual Studio build? My first idea is that for some reason, the defines (-D arguments) are all added to a single command line, and the last one always wins.
I'm assuming you have an app which then links to these libraries, and it's in this app that you're seeing the identical version numbers.
Make sure that the app doesn't have these -D switches as well. If it doesn't, then my guess is that the VC compiler is being clever and triggering a build of the dependent projects with the same -D switches, rather than triggering the build via the project file.
Also, the best way to version these binaries is to employ macros in the headers/sources directly and give them unique names for each library. That way they can't interfere with each other (unless you clone one of the headers into the app, duplicating the macro definitions), and you're no longer dependent on the compiler to do it properly.
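A sketch of that naming scheme (hypothetical header and macro names):

// imagereader_version.h
#define IMAGEREADER_VERSION    "0.5.4"
#define IMAGEREADER_BUILD_DATE "12.01.2010"

// compress_version.h
#define COMPRESS_VERSION    "1.0.1"
#define COMPRESS_BUILD_DATE "03.01.2010"

// Each library registers its own uniquely named values, so no shared
// -DVERSION define can make one library report another's version.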
This can be an issue if you are using precompiled headers. Try building the application with the precompiled headers option disabled.
"These defines are used by macros inside each static lib and inside the main to store the current lib version inside a registry-like class."
You're not violating the One Definition Rule by any chance? If you have one class, it should have one definition across all libraries. It sounds like the class definition depends on a version macro, that macro is defined differently in different parts of your program, and thus you violate the ODR. The penalty for that is undefined behavior.
It seems that the MS linker takes advantage of the ODR by ignoring everything but the first definition. After all, if all definitions of X are the same, then you can ignore all but the first.
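A sketch of how that can play out (hypothetical names): an inline member function whose body expands a per-library macro gives the same symbol different definitions in different translation units.

// version_registry.h -- included by every static lib, each built with its
// own -DVERSION=... on the command line
#define STR2(x) #x
#define STR(x)  STR2(x)

struct LibVersion {
    // The same inline symbol everywhere, but a different body per library:
    // an ODR violation. The linker may keep just one definition, so every
    // library then reports the same version string.
    static const char* get() { return STR(VERSION); }
};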