How to see macro expansions step-by-step? - c++

It seems Eclipse allows the user to "see the expansion step-by-step" by pressing F2.
I like this feature, but can I do the same thing with just gcc or clang (or any other tool)?
The -E option expands all macros completely, so I haven't found a way to expand macros step by step.
Eclipse is big; I'd rather not have to install it everywhere and keep it running all the time.

It's a feature built into Eclipse; if such a tool were provided as part of the GCC or Clang toolchain, Eclipse would have no need to implement it itself. Such a feature could be implemented as an extension to GCC using MELT. LLVM (of which Clang is a part) is designed to make building this kind of tool on top of the Clang libraries comparatively easy.
One thing you have to keep in mind is that macro expansion is a tricky business: at any given point in the code, a macro definition may have changed or may not exist at all. Theoretically you could use gdb (the GNU debugger) to step through your program and see macro expansions at different points in the program, provided it was built with macro debug information. If you want, you could try writing a gdb plugin in Python.
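For reference, a rough sketch of that gdb route (the file and macros below are made up for illustration; gdb's macro commands need the program to be built with -g3 so GCC emits macro debug information, and they expand a macro in one go rather than literally step by step, but they do show which definition is in effect at a given point):

// macros.cpp -- toy example for poking at macro expansions
#include <iostream>
#define SQUARE(x) ((x) * (x))
#define TWICE(x)  (SQUARE(x) + SQUARE(x))
int main() { std::cout << TWICE(3) << '\n'; }

// Fully expanded in one shot:
//   g++ -E macros.cpp
// Definition-aware expansion inside gdb:
//   g++ -g3 macros.cpp -o macros
//   gdb ./macros
//   (gdb) break main
//   (gdb) run
//   (gdb) info macro TWICE        # show the definition in effect at this point
//   (gdb) macro expand TWICE(3)   # expand it, including the nested SQUARE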

Related

How to compare disassembly of zero cost exceptions with the old approach?

I want to understand how "zero-cost exceptions" differ from the previous approach used to compile exceptions, so I want to look at the assembly code of some program compiled using both, to compare them. How can I do that?
Is there a GCC option I can use to switch between them? Or is there an old version of GCC that uses the old approach (ideally one that's available on Godbolt's Compiler Explorer)? Or something else?
I'm interested in x64 on Linux.
According to this question, GCC on Linux uses zero-cost exceptions by default but can be configured to use the old approach (SJLJ). It seems you will need to build GCC yourself and configure it with --enable-sjlj-exceptions. A small test case for comparing the generated assembly is sketched below.
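Something along these lines should do for the comparison (the paths and file names are placeholders):

// eh.cpp -- minimal function whose exception-handling code you can compare
#include <stdexcept>

int may_throw(int x) {
    if (x < 0)
        throw std::runtime_error("negative");
    return x * 2;
}

// Assembly from a default g++ (zero-cost, table-based EH):
//   g++ -O2 -S eh.cpp -o eh_zerocost.s
// Assembly from a g++ built with --enable-sjlj-exceptions:
//   /path/to/sjlj-gcc/bin/g++ -O2 -S eh.cpp -o eh_sjlj.s
// Then diff the two files, or paste the source into Compiler Explorer.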

Is there anything like a forwarding C++ preprocessor, that could be used by GCC?

I've been searching around for different custom preprocessor extensions and replacements, but all of them seem to come with one of two caveats:
Either 1) you generate the code as a separate build step, then manually put the output into your real (CMake) build system, or 2) you end up losing GCC's built-in preprocessor.
Is there really no tool that can, say, run each file it gets against some configured script, then through cpp, then pass the result to gcc?
I'd love to use something like Cog by just setting an environment variable for gcc, indicating a tool that runs Cog first and then the standard preprocessor.
Alternatively, is there a straightforward way to accomplish that in CMake itself? I don't want to have to write a custom script for each file, especially if I then have to hard-code the compiler/preprocessor flags in each target.
Edit: For clarity, I am aware of several partial or partially applicable solutions, for example how to tell GCC to use a different preprocessor (or really, to look in a different place for its own preprocessor, cc1; see: Custom gcc preprocessor). However, that leaves a lot of work to do: modifying the files and then correctly invoking the real cc1 with the correct original arguments.
Since that is effectively a constant, generic problem, I'm just surprised there is no drop-in program for it.
Edit 2: After looking over several proposed solutions, I am not convinced there is an answer to this question. For example, if files are going to be generated by CMake, then they can't be included and browsed by the IDE, because they don't exist yet.
As ridiculous as it sounds, I don't think there is any way to extend the preprocessor short of forking GCC. Everything recommended so far constitutes an incomplete hack.
GCC's C++ compiler is made for compiling C++ programs. Since the C++ preprocessor is specified by the C++ standard, there is usually no need for anything like a "plugin" or "extension" mechanism there.
Don't listen to the comments that suggest using some exotic CMake extension or changing GCC's source code. Running source files through a different program (Cog in your case) before compiling is a well-known task, and all major build systems support it out of the box.
In CMake you can use the add_custom_command function; if you need this for more than one file, you can use a CMake loop, e.g. as suggested in this answer. A minimal single-file sketch follows.
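Roughly like this (the file names and the exact Cog invocation are assumptions; adjust them to your generator):

set(GEN_DIR ${CMAKE_CURRENT_BINARY_DIR}/generated)
file(MAKE_DIRECTORY ${GEN_DIR})

# Run Cog over the template whenever it changes; GCC's own preprocessor still
# runs on the generated file during the normal compile step.
add_custom_command(
  OUTPUT  ${GEN_DIR}/main.cpp
  COMMAND cog -o ${GEN_DIR}/main.cpp ${CMAKE_CURRENT_SOURCE_DIR}/main.cpp.in
  DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/main.cpp.in
  COMMENT "Running Cog on main.cpp.in"
  VERBATIM
)

add_executable(myapp ${GEN_DIR}/main.cpp)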

Visual Studio Code intellisense, use g++ predefined macros

I'm using Visual Studio Code for C++ with MinGW and g++. My code uses a couple of g++ predefined macros. These work in the compiler without any problem, but IntelliSense doesn't know about the macros, which causes some misdiagnosed errors (and other IntelliSense problems) in the editor.
I can get all the predefined macros for my particular configuration of g++ with this command:
g++ -dM -E foo.cpp > defines.h
That lets me insert them into the "defines" property of the c_cpp_properties.json file. This solution is a hack, though, and if I'm not careful it could completely undermine the purpose of these macros. I want the compiler and IntelliSense to cooperate across multiple development platforms, and that's looking like a pretty complicated setup.
Is there a better way to let Intellisense know about the g++ predefined macros?
From what I can tell, IntelliSense's ability to handle preprocessor macros properly is still limited. If you are using CMake to drive your builds, it appears that you can tell IntelliSense which macros you have, as described here: auto-defined macros. I have not tried this personally, but it would be worth investigating.
If you can't get that to work, there is a setting that simply turns off the dimming of inactive regions: set "C_Cpp.dimInactiveRegions" to false. This isn't ideal, because it stops all graying out, including debug blocks like #if 0 ... #endif, but it can serve as a temporary fix you might find less irritating.
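In a user or workspace settings.json, that is:

{
    "C_Cpp.dimInactiveRegions": false
}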
Apart from that, keep an eye out for new features in future updates; I'll try to update this post if I discover anything new on the matter.
The compilerPath property in c_cpp_properties.json lets IntelliSense query the compiler itself for its default include paths and defines. You may also need to set intelliSenseMode to gcc-x64 for this to work, since that is not the default on Windows. (I currently don't have a Windows setup, so I can't test this for the time being.)
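A trimmed c_cpp_properties.json illustrating the idea; the compiler path is only a placeholder for wherever your MinGW g++ actually lives:

{
    "configurations": [
        {
            "name": "Win32",
            "compilerPath": "C:/MinGW/bin/g++.exe",
            "intelliSenseMode": "gcc-x64",
            "cppStandard": "c++17"
        }
    ],
    "version": 4
}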

Disable std::map::at()

I'm compiling some code with gcc 4.7 that was written for C++11, but I'd like it to be compatible with gcc 4.4. The weird thing is that code using std::map::at() (which is only supposed to be defined in C++11) doesn't seem to give me compile errors, even after I remove the -std=c++11 flag. I'd like to get compiler errors, since this code has to be shared with colleagues who may not be using gcc 4.7. Is this normal? Is there some way to restrict the behavior of std::map?
Apparently it is not possible to achieve this with a new gcc and new libraries, at least without compiling them yourself.
As a practical solution, assuming you have a relatively modern PC (6+ GB of memory; perhaps 4 GB will do), I suggest the following:
Install an older Linux distro in a virtual machine, one that ships both the desired old gcc and the matching old standard library. This is far less hassle than trying to set up an alternative compiler and library environment in your main development OS.
Keep your sources in version control, if you don't already.
Either set up a script in the old VM to check out and build the software manually, or go the extra mile and set up Jenkins on the VM, with a job that polls your version control repo and does a test build automatically whenever you commit from your main development environment.
The good thing about this is that you can easily set up as many different environments and OSes as you want to keep compatibility with, while still keeping your main development OS up to date with the latest versions.
Original answer for the ideal world where things work right:
To get strict C++03, use these flags:
-std=c++03 -pedantic
Also, if you only need to support gcc, you may want the -std=gnu++03 "standard" instead, but unless there is some specific GNU extension you really want (say, C99-style VLAs), I'd recommend against that: you never know what compiler you or someone else may want to use in the future.
As a side note, -Wall -Wextra are also recommended (at least if you want to fix the warnings too).
The sad reality is that selecting the C++ standard does not actually solve the problem. As far as I can tell, this is not really a problem in the gcc compiler itself; it is a problem in the GNU C++ standard library, which evidently does not check the selected C++ standard version (with #ifdefs in its header files). If it bothers you, you might consider filing a bug report (if there isn't one already, though I did not find one with a quick search).
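For reference, a minimal reproducer of what the question describes; per the question, this builds on gcc 4.7 even with the flags above, because libstdc++'s headers don't guard at() behind a C++11 check:

// map_at.cpp
#include <map>

int main() {
    std::map<int, int> m;
    m[1] = 42;
    return m.at(1);   // at() is a C++11 addition, yet this compiles in C++03 mode too
}

// g++ -std=c++03 -pedantic -Wall -Wextra map_at.cpp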

Detect compiler with #ifdef

I'm trying to build a small codebase that works across multiple platforms and compilers. I use assertions, most of which can be turned off, but when compiling with PGI's pgicpp using -mp for OpenMP support, the compiler automatically enables the --no_exceptions option: every "throw" statement in my code then triggers a fatal compiler error ("support for exception handling is disabled").
Is there a defined macro I can test to hide the throw statements on PGI? I usually work with gcc, which has GCC_VERSION and the like. I can't find any documentation describing these macros in PGI.
Take a look at the Pre-defined C/C++ Compiler Macros project on Sourceforge.
PGI's compiler has a __PGI macro.
Also, take a look at libnuwen's compiler.hh header for a decent way to 'normalize' compiler versioning macros.
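For instance, using the __PGI macro mentioned above, one (hypothetical) way to keep the code building under pgicpp -mp while still throwing everywhere else:

#include <cstdlib>
#include <stdexcept>

inline void fail(const char* msg) {
#if defined(__PGI)
    // Exceptions are disabled here (--no_exceptions), so bail out instead of throwing.
    (void)msg;
    std::abort();
#else
    throw std::runtime_error(msg);
#endif
}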
You could try this to see what macros are predefined by the compiler:
pgcc -dM
Maybe that will reveal a suitable macro you can use.
Have you looked at the Boost headers? Assuming they support PGI, they will have found a way to detect it, and you could reuse that. I would start searching somewhere in boost/config.