Eclipse CDT: Managing conditional compile (#ifdef) in one codebase

I am working in a very large code base that has conditional compile flags to build code for several different embedded hardware platforms. There is a large part of the code that is common, and there is a hardware adaptation layer that is supposed to be h/w independent but also has a lot of common code, with function calls to specific hardware functions wrapped in #ifdef/#else for conditional compilation. This is unfortunately the paradigm imposed on us for how we work across several teams, so I need to work with it, i.e. there is no option to move to truly hardware-independent files. I develop and debug for all 3 (so far) of these platforms and keep having to add/delete the compiler flags from my Symbols and rebuild my CDT index each time I need to context-switch from developing/debugging an issue on one platform to another. Rebuilding the index can take a long time (up to an hour), even with aggressive resource filtering.
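For illustration, the wrapped calls look roughly like this (the platform macros and function names here are invented, not our real ones):

/* hal_uart.c: common HAL code with per-platform calls (names hypothetical) */
void hal_uart_init(void)
{
#if defined(PLATFORM_A)
    platform_a_uart_init();   /* only compiled when -DPLATFORM_A is defined */
#elif defined(PLATFORM_B)
    platform_b_uart_init();
#else
#error "No platform selected"
#endif
}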
We work with Perforce as our VCS, and I want to work within a single Perforce workspace so I don't get out of sync with which files are checked out. I tried to create separate Eclipse projects for each of these platforms, but I get an error message that the resource (the Perforce workspace code) is already in use by another project.
Does anyone have any suggestions?
I am using Eclipse Luna with CDT.
Thanks

For the part where you mentioned the need to delete and add Symbols and change build options in the Project Properties, this is what Configurations are for. Assuming the settings are pretty static for a given configuration (specific hardware platform), define a list of configurations, one per platform, and set the options according to the platform in question. This way, just changing configs will change the set of build options.
This is also true for file-specific settings, like "exclude from build". You can have a varying set of files to build for each platform.
I don't know if Eclipse will re-index every time you switch configurations.

Related

Prevent adding new csproj from adding AnyCPU back to solution file

We have a solution in which we only want to have the x86 platform, but every time we add a new project to the solution it adds AnyCPU back for every single project in the solution. It is tedious to remove all the AnyCPU lines from the solution file because we have 70+ projects in the solution. Is there any way to configure Visual Studio to prevent this from being added?
Not sure if this is relevant but we are on the legacy project system and only use csproj in our solution.
EDIT 1:
The reason I would like to keep AnyCPU from being added back to the solution is warnings and build issues with certain NuGet packages.
Some of our third-party dependencies are built against x86, and they produce warnings with no warning codes when we reference them, so I am unable to suppress them.
The NuGet package I am specifically aware causes issues is CefSharp. It will fail to build our desktop application that references it if the developer selects AnyCPU. It uses the platform to determine whether it should copy its unmanaged x86 or x64 DLL.
EDIT 2:
Here is the section of the solution file that causes issues when we go to build. From what I have read, Visual Studio looks through this list alphabetically for a platform if one is not provided. This example is from an unrelated solution.
GlobalSection(SolutionConfigurationPlatforms) = preSolution
    Debug|Any CPU = Debug|Any CPU
    QA|Any CPU = QA|Any CPU
    Release|Any CPU = Release|Any CPU
EndGlobalSection
EDIT 3:
As far as I can tell, Hans' answer is the correct way to handle this. I have looked for other ways to handle it, but after looking on UserVoice I was able to find where this was suggested in 2011.
This is a very common mistake, and VS2010 is in large part responsible for it: its project templates chose x86 instead of AnyCPU. This was fixed again in VS2012, but without otherwise repairing any damage done by solutions that were once exposed to VS2010, or helping programmers to get it right.
The platform selection is meaningless for C# projects. You use the exact same build tools for any platform, and the generated code is truly compatible with "any CPU". It is the just-in-time compiler that locks in the target processor, and it does so at runtime. The only setting that matters at all in affecting what the jitter does is in the Project > Properties > Build tab. Only the settings for the EXE project matter; libraries have no choice but to be compatible with the bitness of the process.
It does matter for C++ projects, a lot: they use a completely different compiler and linker for each platform. Necessarily so, since C++ projects generate machine code up front and that code must be compatible with the user's machine. This is also the reason it got fumbled in VS2010: that's when the C++ build system moved to MSBuild.
The typical reason AnyCPU pops back into the solution is adding a new project. Since new projects default to AnyCPU again, it needs to be added back to the solution platforms.
By far the best solution is to stop fighting the machine: AnyCPU should be your preference. Use Build > Configuration Manager > Active solution platform combo box > Edit, and remove x86 so only AnyCPU remains. And do focus on what you want to accomplish; it is the EXE project's settings that matter. Beware of yet another trap: even though the default platform is AnyCPU, a project template turns on the "Prefer 32-bit" checkbox by default. Not AnyCPU anymore. It is high time Microsoft fixed this, by the way; the 64-bit debugger and jitter have been stable and capable long enough to no longer need it.

Capture all compiler invocations and command line parameters during build

I want to run static code analysis tools for C/C++ (and possibly Python, Java, etc.) on a large software project built with the help of make. As is known, make (or any other build tool) invokes the compiler and similar tools for the specified source code files. It is also possible to control compilation by defining environment variables that are later passed to the compiler via its arguments.
The key to accurate static analysis is to provide the defines and include paths exactly as they were passed to the compiler (basically, all of its -D and -I arguments). This way, the tool will be able to follow the same code paths the compiler followed.
The problem is that the high complexity of the project means there is no way to statically determine such an environment, as different files are built with different sets of defines/include paths and other compilation flags.
The idea is that it should somehow be possible to capture the individual invocations of the compiler, with all arguments passed to it, for each input file. Having such information, and after straightforward filtering of it (e.g. there is no need to know -O optimization levels or -W warning settings), it should be possible to invoke the static analyzer for each input file with the identical set of defines/includes used for just that input file.
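As a rough sketch of what I have in mind for the capture step, a tiny wrapper pointed to by CC/CXX could log its arguments and then exec the real compiler; the compiler path and log location below are placeholders:

/* ccwrap.c: log the full compiler command line, then run the real compiler.
   Build it, then run e.g.: make CC=/path/to/ccwrap
   Note: a parallel build may interleave log lines; a real tool should lock the file. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    FILE *log = fopen("/tmp/compile-invocations.log", "a");
    if (log) {
        for (int i = 0; i < argc; ++i)
            fprintf(log, "%s%c", argv[i], i + 1 < argc ? ' ' : '\n');
        fclose(log);
    }
    execv("/usr/bin/cc", argv);  /* replaces this process on success */
    perror("execv");
    return 127;
}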
The question is: are there existing tools/workflows that implement the idea I've described? I am mostly interested in a solution for POSIX systems, but ideas for Windows are also welcome.
A few ideas I've come up with on my own:
The most trivial solution would be to collect the make output and process it afterwards. However, certain projects have makefile rules that give very concise output instead of a verbose one, so this might require some tinkering with Makefiles, which is not always desirable. Parallel builds may also have their console output mixed up and impossible to parse. Adaptation to other build systems (CMake) will not be trivial either, so it is far from being the most convenient way.
Running make under ptrace and recording all invocations of the exec* system calls that correspond to starting new applications, including compiler invocations. One would then need to parse ptrace's output. This approach is build-system and language agnostic (it will catch all invocations of any compiler for any language) and should work for parallel builds. However, it seems more technically complex, and the performance degradation to the build process from ptrace sitting on make's back is unclear. It will also be harder to port to Windows, as the program-tracing API is somewhat different there.
The proprietary static analyzer PVS-Studio for C++ on Windows (and recently Linux, AFAIK) seems to implement the second approach; however, details on how they do it are welcome. If there are other IDEs/tools that already have something similar to what I need, please share information on them.
There are the following ways to gather information about compilation parameters on Linux:
Override the CC/CXX environment variables. This is used by the scan-build utility from Clang Analyzer. The method works reliably only with simple Make projects.
procfs: all the information on processes is stored in /proc/PID/... . Reading from disk is slow, so you might not manage to capture information about all the processes of a build.
The strace utility (ptrace library). The output of this utility contains a lot of useful information, but it requires complicated parsing, because the information is written in arbitrary order. If you do not use many threads to build the project, it is a fairly reliable way to gather information about the processes. It's used in PVS-Studio.
JSON Compilation Database in CMake. You can get all the compilation parameters by adding the definition -DCMAKE_EXPORT_COMPILE_COMMANDS=On (an example entry is shown after this list). It is a reliable method if the project does not depend on non-standard environment variables. A CMake project can also be written with errors and emit an incorrect JSON database, even though this doesn't affect the project build. It's supported in PVS-Studio.
The Bear utility (function substitution using LD_PRELOAD). You can get a JSON Compilation Database for any project. But without the environment variables it'll be impossible to run the analyzer for some projects. Also, you cannot use it with projects that already use LD_PRELOAD for the build. It's supported in PVS-Studio.
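For reference, each entry in the compile_commands.json produced this way is a small JSON object; the paths and flags below are purely illustrative:

[
  {
    "directory": "/home/user/project/build",
    "command": "/usr/bin/c++ -DWITH_FOO -I/home/user/project/include -O2 -c /home/user/project/src/main.cpp -o src/main.cpp.o",
    "file": "/home/user/project/src/main.cpp"
  }
]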
Ways of collecting information about compilation on Windows for PVS-Studio:
Visual Studio API to get the compilation parameters of standard projects;
MSBuild API to get the compilation parameters of standard projects;
Win API to get the information on any compilation processes as, for example, Windows Task Manager does it.
Passing VERBOSE=1 (or VERBOSE=true) on the make command line displays all commands with all their parameters in Makefiles that support it; Makefiles generated by CMake, for instance, honor this variable.
You might want to look at Coverity. They attach their tool to the compiler to get everything the compiler receives. You could also override the CC or CXX environment variables to first collect everything and then call the compiler as usual.

What is a good way to set Preprocessor values for an imported library

I apologize if this is covered elsewhere, but I was unable to find the information readily. I am working with an existing library for my company that uses preprocessor directives to add and remove specialized capabilities. For example, we might have an IMPORT_OPENBLAS and an IMPORT_SPEEX to indicate that the build needs to support use of the OpenBLAS and Speex libraries. We also have unit tests based on the Google Test framework, which statically link in our library; some of them need said preprocessor directives enabled to run. The two places where we typically run the unit tests are through Visual Studio (2008, if that makes a difference) and through Ant, which invokes vsbuild.exe to do the build.
So, long story short, I have been tasked with adding additional capabilities such as the above libraries. We have other projects that use our library and specifically don't want those capabilities turned on, partly due to issues with dependencies and partly because they don't want the additional complexity. My first impulse was to put the preprocessor directives into the unit test project, since it builds our library as a dependency anyhow, but that doesn't seem to work. Is there any way to flag that a given preprocessor definition needs to be turned on when compiling the dependent project?
Another alternative is to create new build targets for the unit tests which specifically set the right preprocessor flags, but I want to avoid that if possible because we already have 10 different build targets encompassing different linking methods, processor sizes, and debug versus release modes. One of my earlier tasks involved getting them all to work again, since no one had compiled some of them for months, as our primary release is based on just two of those targets.
Thank you for any help you can provide.
You could simply have a header file that contains those defines and include it in all the files in the project through the project properties. See Project Properties -> Configuration Properties -> C/C++ -> Advanced -> Force Includes.
In other words, this file would be included in all the projects.
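A minimal sketch of such a header, reusing the macro names from the question (the file name and the choice to define both macros are just examples):

/* test_features.h: force-included into every unit-test translation unit
   via Configuration Properties -> C/C++ -> Advanced -> Force Includes (/FI). */
#ifndef TEST_FEATURES_H
#define TEST_FEATURES_H

#define IMPORT_OPENBLAS 1
#define IMPORT_SPEEX    1

#endif /* TEST_FEATURES_H */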

How to profile building?

I am working on a large (~1 MLOC) C++ application which takes too long to build from source (on Windows using Visual Studio, on the Mac using a Makefile or Xcode). I would like to know where to start optimizing (e.g. precompiled headers, forward declarations, ...).
As with performance of the application itself, I would like to profile the build process before I start optimizing.
What tools are available to support this?
Firstly, please state exactly which version of Visual Studio you're using. If possible, upgrade to VS2010, as it has much better support for parallel building. Here are several things to consider:
Put the source tree on a different disk from the system disk. If you can stretch to 2 SSDs (one for the system, one for the source) then this makes a huge difference
Enable parallel builds. In VS2010 this halved our build time for a project about the same size as yours. Enable the 'Multiprocessor compilation' switch (/MP). You may find that one or two of your projects need it turned off if they have strange dependencies, but as long as it's on for most projects you'll get a massive boost
VS2010 has verbose build-timing logging options which can help you isolate the time spent in different projects. VS2005/2008 have a build timing option as well
If you have VS2005 or VS2008, try out the MPCL plugin (it's not free, but very cheap), which will do better parallel building than VS itself. If you have the budget, there are tools like IncrediBuild
If you're using Makefiles, use the -j flag to parallelise. If you're using Xcode, you can use distributed builds if you have other Macs available (I've never had any luck with this myself, though)
You could look into using ccache with gcc
Enable precompiled headers for all or most projects (see the sketch after this list). It may take a bit of experimenting to work out how much benefit you get; you hit diminishing returns quite quickly the more you put in them (and the more you have in them, the more rebuilds you'll need to do)
Read John Lakos's book Large-Scale C++ Software Design, which is a fantastic source of advice on how to split up large projects to isolate dependencies
Consider a two-stage build process. If you have lots of third-party libraries that need to be built, or other libraries that don't change all that often, then set up a separate project for them. Try building that in parallel with your main project, or save the binaries. Consider checking the binaries into your source control system (yes, I know checking binaries into SCM is generally considered evil, but I believe you have to be pragmatic)
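On the precompiled-header point above, the usual starting shape is one header that pulls in the heavy, rarely-changing includes; the contents below are only an example:

// pch.h: candidate precompiled header (compiled once, reused by every .cpp)
// Keep only headers that rarely change in here; anything volatile will force
// frequent full rebuilds of the PCH itself.
#include <algorithm>
#include <map>
#include <memory>
#include <string>
#include <vector>
// Heavy third-party headers that seldom change are good candidates too.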
There are many ways of improving build times. One of them is of course more hardware, i.e. faster disks and more RAM. Another is compiler features like precompiled headers. There are also external tools that can help, like distcc or ccache. For GNU make, there is also the -j option to run several make processes in parallel.

Avoiding too many configurations for a Visual Studio project

I'm currently porting a large Linux project to Visual Studio. The project depends on a number of third-party libraries (Python, MPI, etc.) as well as a couple of in-house ones. But it can also be built without these libraries, or with only a few of them. So I don't want to create a different configuration for each possible combination, e.g. "Parallel with Python", "Parallel without Python", etc. There are just too many combinations. Is this a situation where I could use MSBuild?
Edit: One possibility I considered is to create a bunch of .vsprops files, but this is essentially the same as creating a bunch of different configurations.
Edit: Maybe CMake is more what I'm looking for? I'd love to hear from any CMake users out there...
One approach could be to conditionally reference your libraries using the Condition attribute of each assembly's Reference element (Python, MPI, etc.).
This would separate your libraries from the configuration and platform properties, and allow you to build with them by default, or conditionally, using MSBuild properties.
So in your csproj:
<Reference Include="YourPythonLibrary"
Condition="$(BuildType) == '' Or $(BuildType) == 'TypeA'" />
<Reference Include="YourMpiLibrary"
Condition="$(BuildType) == 'TypeA' Or $(BuildType) == 'TypeB'" />
That includes Python by default and MPI only if the correct build type is set. It wouldn't matter what the configuration or platform is set to, and you could adjust the boolean logic to suit each library for each of your build types.
MSBuild /p:BuildType=TypeA
MSBuild /p:BuildType=TypeB
It would be nice to use some form of bitwise operation in the condition, but I'm not sure that is possible in MSBuild (a per-library variant is sketched below).
Note: it doesn't have to be a Reference element; if the item is just included as Content, this approach will still work.
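If you would rather have per-library toggles than an enumerated build type, a variant of the same idea is one boolean property per library (the property names here are made up):

<Reference Include="YourPythonLibrary"
    Condition="'$(WithPython)' != 'false'" />
<Reference Include="YourMpiLibrary"
    Condition="'$(WithMpi)' == 'true'" />

MSBuild /p:WithMpi=true
MSBuild /p:WithPython=false /p:WithMpi=true

This keeps Python on by default, makes MPI opt-in, and sidesteps the combinatorial explosion of configurations.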
There's no good solution to this that I'm aware of. The IDE seems to require a configuration for each set of command-line arguments to the tools. So if N different sets of arguments are required -- as it sounds like the case here -- N different configurations will be required. That's just how the IDE works, it appears.
Unfortunate, but one rarely wins in a fight against Visual Studio, so I personally have always given in and created as many configurations as needed. It's a pain, and it's fiddly, and yes the IDE should ideally provide some better mechanism for managing the combinations -- but it's doable, just about, and it doesn't actually take as long to set up as it feels like at the time.
(As I understand them, .vsprops files can take some of the pain away by allowing easy sharing of settings between configurations, so those minuscule text boxes in VS are only used to set up the settings that differ between configurations. This may make them still worth investigating. This isn't something I've used myself yet, though; I only discovered it recently.)
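For what it's worth, a VS2008-era .vsprops sheet that adds a single preprocessor define could look roughly like this (the sheet name and macro are invented):

<?xml version="1.0" encoding="Windows-1252"?>
<VisualStudioPropertySheet
    ProjectType="Visual C++"
    Version="8.00"
    Name="WithPythonSupport">
    <Tool
        Name="VCCLCompilerTool"
        PreprocessorDefinitions="WITH_PYTHON"
    />
</VisualStudioPropertySheet>

Projects pick such sheets up via View > Property Manager, and several sheets can be stacked per configuration.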
If you right-click the solution in Visual Studio and select Configuration Manager you can create build targets for each configuration.
You can select between those targets with a combo box in the toolbar if you have the default settings.
Those targets can also be selected when using MSBuild, just as you can choose between Release and Debug.