GCC #error that won't break further compilation - c++

I have several simple macro blocks spread across files in a large project, each of which includes an #error. They all follow this structure, more or less:
#ifdef COMPFAIL
#pragma message "Compilation Has Failed"
#error
#endif
I want to set the project up so that, if COMPFAIL is defined, the #pragma in each file fires before compilation ultimately fails. I now understand that when #error is hit, it halts compilation on the spot, without attempting to compile any of the other files listed in my Makefile, which prevents the remaining #pragma messages from appearing. Is there a way to force the compiler to finish preprocessing all files before failing?

A very simple test:
#error foo
#error bar
Compiling this file with gcc produces the following results:
t.c:2:2: error: #error foo
#error foo
t.c:3:2: error: #error bar
#error bar
It's obvious that #error does not completely stop the compilation of the file. If it did, only the first error would have been reported and compilation would have stopped. Instead, after the #error, the compiler keeps going and continues to preprocess, and compile, the rest of the code. However, that is largely pointless: the compiler will not produce an object file once an error has occurred, so it's not clear what benefit you expect to gain from continuing to preprocess this file.
Now, as for the other files that get compiled via your makefile: that is completely unrelated to what any pragma or directive does within one translation unit. Once a command executed by make terminates with a non-zero exit code, make stops executing any more commands. To change that, use the -k option, as already mentioned.
Note that the -k option has no direct bearing on whether #error does or does not abort the compile at hand. Either way, that compile exits with a non-zero status, and that exit status is the driving factor here.

Related

Compiler says it cannot find file and yet reports errors in "unfound" file?

I have two short files located in the same directory. The contents of each are shown below.
File test.cpp contains:
int main()
{
#include <test.h>
}
File test.h contains:
syntax_error
Upon compiling test.cpp with either g++ or clang++, I get an error which is expected.
test.cpp:3:11: error: 'test.h' file not found with <angled> include; use
"quotes" instead
#include <test.h>
^~~~~~~~
"test.h"
However, I also get a second error which seems to contradict the first error.
In file included from test.cpp:3:
./test.h:1:1: error: use of undeclared identifier 'syntax_error'
syntax_error
^
Essentially, the first error reports that the compiler cannot find the file test.h, and the second reports a syntax error in the file that the compiler reported it could not find.
These are the only two errors generated.
I understand why the compiler reports the first error and that I should use quotes with #include in this case. Why, though, does the compiler say it cannot find the file when it clearly has found it? And, why would it continue to report errors in the "unfound" file?
This is a feature, not a bug.
The idea is that if the error is trivial (like a missing semicolon), then the compiler will try to continue compiling as if you had already fixed the error. This enables you to fix multiple errors in one go. This is especially useful when compiling your code takes a long time.
Imagine fixing a missing semicolon, recompiling for five hours, just so that the compiler finds another missing semicolon. And then you have to recompile again. That would be very frustrating, no?
Basically, the compiler will try to recover from any error as far as it is able, so that it can report as many errors as possible. Most compilers have a flag to control this behavior.
Why, though, does the compiler say it cannot find the file when it clearly has found it?
The compiler did find the file, yes; that's why it could give you the hint to use "" instead of <>. If it hadn't found it, it couldn't have offered that hint. Still, the compiler is not allowed to accept your code as written, because your code is ill-formed.
As an analogy, just because the compiler found a missing semicolon, that doesn't mean that it can just compile the code with that missing character (if it tries to be Standards compliant). It will however recover and try to find other errors, if any.

C++ command line debug argument

How can I change the value of a boolean macro when I run my program through the command line? For instance, suppose I have the following macro in my cpp file, call it MyCpp.cpp
#define DEBUG 1
How can I change this when I run my program? through the command line:
g++ -Wall -Wextra -o MyCpp MyCpp.cpp
I am pretty sure you specify some kind of command line option, does this ring any bells?
Also, I do NOT want to use argv[]
First, change your source code:
#ifndef DEBUG
# define DEBUG 1
#endif
Now you can say on the command line:
g++ -Wall -Wextra -o MyCpp MyCpp.cpp -DDEBUG=5
# ^^^^^^^^^
The command line argument -DFOO=bar has the same effect as putting #define FOO bar in your source code; you need the #ifndef guard to avoid an illegal redefinition of the macro.
Sometimes people use an auxiliary macro to prevent the definition of another macro:
#ifndef SUPPRESS_FOO
# define FOO
#endif
// ... later
#ifdef FOO
// ...
#endif
Now say -DSUPPRESS_FOO to not define FOO in the code...
How can I change the value of a boolean macro when I run my program through the command line?
As it stands, you can't. You are using a preprocessor symbol so the decision as to whether debug information should be printed is a compile time decision. You are going to have to change that compile-time DEBUG symbol to a run-time variable that you set by parsing the command line, via some configuration file read in at run time, or both.
Parsing the command line isn't that hard. There are plenty of low-level C-style tools to help you do that. Boost has a much more powerful C++ based scheme. The trick then is to change those compile-time debug decisions to run-time decisions. At the simplest, it's not that hard: Just replace that DEBUG preprocessor symbol with a global variable. You can get quite a bit more sophisticated than this of course. Eventually you'll have a configurable logging system. Boost has that, too.
Please note the following. If you have in your c/cpp file or one of your included header files:
#define DEBUG 1
then you cannot override this definition from the compiler's command line (e.g. from a makefile). There is simply no chance: the unconditional #define in the file will overwrite the command-line setting.

PCC-F-02102, Fatal error while doing C preprocessing AIX 5.3

Oracle version - 10.2.0.1.0
Pro*C/C++: Release 10.2.0.1.0
AIX version - 5.3
I cannot compile with the following errors.
Syntax error at line 135, column 2, file /usr/include/standards.h:
Error at line 135, column 2 in file /usr/include/standards.h
#warning The -qdfp option is required to process DFP code in headers.
.1
PCC-S-02014, Encountered the symbol "warning" when expecting one of the following:
a numeric constant, newline, define, elif, else, endif,
error, if, ifdef, ifndef, include, line, pragma, undef,
an immediate preprocessor command, a C token,
The symbol "newline," was substituted for "warning" to continue.
Syntax error at line 382, column 3, file mydb.h:
Error at line 382, column 3 in file mydb.h
time_t timestamp;
..1
PCC-S-02201, Encountered the symbol "time_t" when expecting one of the following
:
} char, const, double, enum, float, int, long, ulong_varchar,
OCIBFileLocator OCIBlobLocator, OCIClobLocator, OCIDateTime,
OCIExtProcContext, OCIInterval, OCIRowid, OCIDate, OCINumber,
OCIRaw, OCIString, short, signed, sql_context, sql_cursor,
struct, union, unsigned, utext, uvarchar, varchar, void,
volatile, a typedef name,
The symbol "enum," was substituted for "time_t" to continue.
Error at line 0, column 0 in file my_db.pc
PCC-F-02102, Fatal error while doing C preprocessing
make: *** [libdb.a] Error 1
Any solution?
pcscfg.cfg
sys_include=(/usr/include)
CODE=ANSI_C
parse=partial
sqlcheck=full
sys_include=/usr/include
sys_include=/usr/include/sys
sys_include=/usr/include/linux
include=$(ORACLE_HOME)/precomp/public
include=$(ORACLE_HOME)/precomp/include
include=$(ORACLE_HOME)/oracore/include
include=$(ORACLE_HOME)/oracore/public
include=$(ORACLE_HOME)/rdbms/include
include=$(ORACLE_HOME)/rdbms/public
include=$(ORACLE_HOME)/rdbms/demo
ltype=short
define=__64BIT__
define=_IBM_C
define=_LONG_LONG
The exact same code compiles fine on AIX 5.2. The problem occurs on AIX 5.3.
The first error reported, PCC-S-02014, is actually the important one. The Pro*C precompiler ignores some C preprocessor directives, but not #warning - it doesn't understand it, and doesn't think warning is a valid thing to have after a #.
You can use the ORA_PROC macro to avoid problematic header files being included at this stage. Assuming the location given in a previous answer is right, you can 'hide' the #include from the preprocessor like this:
#ifndef ORA_PROC
#include <standards.h>
#endif
Of course you may not be including that file directly, so you might have to work out the hierarchy to see which file you really need to exclude in your source file. In your case it looks like you could maybe hide mydb.h within your my_db.pc file, but that seems excessive; it might be better to hide standards.h within your mydb.h file, basically excluding the minimum amount of code you can. I'm speculating from the error messages though; you may have more layers.
This is covered in the advanced topics section of the Pro*C/C++ documentation.
This is easier than copying and editing the system header file, and much safer than editing the original. It also allows you to add comments explaining what's happening, of course.
This problem usually occurs on AIX 5.3 and above. The /usr/include/standards.h there differs from older versions, and I think Pro*C is somehow not able to handle it.
To fix the issue, change the following in standards.h.
FROM
---
#if defined(__IBM_PP_WARNING)
#warning The -qdfp option is required to process DFP code in headers.
#else
#error The -qdfp option is required to process DFP code in headers.
TO
--
//#if defined(__IBM_PP_WARNING)
//#warning The -qdfp option is required to process DFP code in headers.
//#else
#if !defined(__IBM_PP_WARNING)
#error The -qdfp option is required to process DFP code in headers.
I suggest not changing the system include file itself. Instead, copy standards.h into your project directory, fix the copy, and use that.

g++ compilation of a separately preprocessed file gives error depending on the architecture

I am using g++ version 4.1.2 on an x86_64 GNU/Linux architecture. The code base is huge and I don't have a sufficient understanding of the makefiles used in the project. The code compiles fine as it is.
For some debugging purposes, I need to preprocess (g++ -E) a few source files individually and then re-compile them. I am giving the required include paths using -I. Ideally the compilation should go fine.
But I am getting a few discrepancies in the standard headers, such as:

1. typedef unsigned long size_t; causes errors with the operator new() declaration generated by the compiler (if I change it to unsigned int manually, this error disappears).
2. In library functions like unsigned long numeric_limits<>::max(), the compiler complains about big constants such as 922...807L, reporting that the integer constant is too large for long type.
3. A mismatched declaration of __errno_location() gives a compiler error.
I am having a hard time finding what is going wrong. Why does compilation go fine when I do make on the unchanged file, and why do the standard headers start cribbing when I use the g++ -I <> -E option on an individual file?
(Note that there is no problem with the code we have written; it's just the standard library side. I tried locating the stddef.h which has the unsigned int typedef, but that just fixes the 1st problem.)
Any idea how to fix these errors would be highly appreciated.
Don't preprocess and compile separately, or if you must then use consistent compiler options and a consistent environment.
It sounds as though you're running the preprocessor on a 32-bit machine (or using the -m32 option) and then compiling on a 64-bit machine.
When compiling the output of the preprocessor, make sure that you use the -fpreprocessed compiler option so that the preprocessor will not run again.
If you don't pass in that option certain constructs that produced identifiers that look like macros may get expanded again into something they shouldn't get expanded to. It's hard for me to come up with a case that shows a difference (I'm sure I can, but it would take a bit of puzzling out and would be pretty contrived). However, the implementation headers may well use some arcane macro techniques that might be sensitive to this option.

What is the best way to eliminate MS Visual C++ Linker warning : "warning LNK4221"?

I have a CPP source file that uses #if / #endif to compile out completely in certain builds. However, this generates the following warning.
warning LNK4221: no public symbols found; archive member will be inaccessible
I was thinking about creating a macro to generate a dummy variable or function that wouldn't actually be used so this error would go away but I want to make sure that it doesn't cause problems such as using the macro in multiple files causing the linker to bomb on multiply defined symbols.
What is the best way to get rid of this warning (without simply suppressing the warning on the linker command line) ?
FWIW, I would be interested in knowing how to do it by suppressing the warning on the linker command line as well but all my attempts there appear to be simply ignored by the linker and still generate the error.
One other requirement: The fix must be able to stand up to individual file builds or unity build (combine CPP file builds) since one of our build configurations is a bulk build (like a unity build but groups of bulk files rather than a single master unity file).
Use an anonymous namespace:
namespace { char dummy; };
Symbols within such a namespace have external linkage, so there will be something in the symbol table. On the other hand, the namespace name itself will be distinct (you can think of it as "randomly generated") for every translation unit, so no clashes. (In C++11 and later these symbols formally have internal linkage, but they are still emitted into the object file.)
OK, the fix I am going to use is Pavel's suggestion with a minor tweak. The reason I’m using this fix is it’s an easy macro to drop in and it will work in bulk-builds / unity-builds as well as normal builds:
Shared Header:
// The following macro "NoEmptyFile()" can be put into a file
// in order to suppress the MS Visual C++ linker warning 4221
//
// warning LNK4221: no public symbols found; archive member will be inaccessible
//
// This warning occurs on PC and XBOX when a file compiles out completely
// and has no externally visible symbols, which may depend on configuration
// #defines and options.
#define NoEmptyFile() namespace { char NoEmptyFileDummy##__LINE__; }
File that may compile out completely:
NoEmptyFile()
#if DEBUG_OPTION
// code
#endif // DEBUG_OPTION
(Though the discussion is already old and I cannot comment directly on @Adisak's answer:) I guess some additional macro-expansion magic is needed for this to work, since ## suppresses the expansion of __LINE__:
#define TOKENPASTE(x, y) x ## y
#define TOKENPASTE2(x, y) TOKENPASTE(x, y)
#define NONEMPTY_TRANSLATION_UNIT char TOKENPASTE2(NoEmptyFileDummy, __LINE__);