I wrote a program for an assignment in which I allocated memory in this way:
EdgeBucket* edgeTable[ n_scanlines ];. I understand that this is normally illegal in C, but I was not aware that it also cannot be done in C++. However, when I compile it with g++, it gives no compile errors. But my grader is using Visual Studio, and when he attempted to build my program, it gave errors stating that the length of the array must be constant. I normally compile my programs with the -ansi and -Wall options to ensure cross-compiler integrity, but even that didn't detect this. I am concerned about my grade being compromised by this, so does anyone know why -ansi didn't catch this, and what can be done to prevent further cross-compiler discrepancies?
Use the -pedantic-errors flag. -ansi only selects the base standard; by itself it does not reject GCC's language extensions.
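For example (the file name is hypothetical, and the exact diagnostic wording may vary by g++ version):

g++ -ansi -pedantic-errors program.cpp

error: ISO C++ forbids variable length array 'edgeTable' [-Wvla]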
They are known as VLAs (variable length arrays). They have been legal in C since C99 (and optional since C11), but they have never been legal in C++.
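A portable C++98 alternative is to size the table at run time with dynamic allocation, for example std::vector. A minimal sketch, reusing the names from the question (EdgeBucket is assumed to be defined elsewhere; the helper function is just for illustration):

#include <vector>

struct EdgeBucket;  // declared as in the question; definition lives elsewhere

void buildEdgeTable(int n_scanlines)
{
    // replaces: EdgeBucket* edgeTable[n_scanlines];
    std::vector<EdgeBucket*> edgeTable(n_scanlines);  // entries value-initialized to null
    // use edgeTable[i] exactly as before
}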
I'm using Clang v3.7.0 with Mingw-w64 5.1.0 and GCC 5.1.0, all 64-bit, on Windows 10. My goal is to use a set of Clang and GCC options that will give me the best chance of detecting potential C89 and C++98 language standards portability issues across many different compilers. For example, for C I have been using the following GCC command line with pretty good success:
gcc -c -x c -std=c89 -pedantic-errors -Wall -Wextra -Wno-comment -Wno-parentheses -Wno-format-zero-length test.c
However, I recently tried it with Clang and got a different result. Here is my sample test code:
int main(void)
{
    int length = (int)strlen("Hello");
    return 0;
}
With Clang I get the following error, whereas with GCC I get the same basic thing, but it flags it as a warning instead:
test.c:3:22: error: implicitly declaring library function 'strlen'
with type 'unsigned long long (const char *)'
int length = (int)strlen("Hello");
If I remove the -pedantic-errors option, or just change it to -pedantic, Clang flags it only as a warning, which is what I actually want. However, according to the GCC documentation the -pedantic-errors option turns warnings that concern language extensions into errors, and calling a function without a prototype is not an extension in C89. So, I have three basic questions:
Has Clang changed the meaning of -pedantic-errors from the meaning used by GCC, or am I misinterpreting something?
What is the best set of options that will enforce adherence to the selected standard and will issue errors for all non-conforming code?
If I continue to use -pedantic-errors with Clang is there a way to get it to issue a warning instead of an error in specific cases? In another posting on this site an answer was given that said to use the following, where foo is the error:
-Wno-error=foo
If that is a correct approach, what do I actually use in place of foo for an error like I'm getting since there is no actual error number indicated? I can't believe it actually wants all of the following:
-Wno-error=implicitly declaring library function 'strlen'
with type 'unsigned long long (const char *)'
Your code is invalid and the behavior is undefined, so the compiler may do anything, including at compile time. The implicitly declared int strlen(char*) is not compatible with size_t strlen(const char *).
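For reference, the conforming fix is simply to supply the prototype by including the header; a sketch of the corrected test file:

#include <string.h>  /* declares size_t strlen(const char *) */

int main(void)
{
    int length = (int)strlen("Hello");
    (void)length;  /* silence the unused-variable warning under -Wall */
    return 0;
}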
Has Clang changed the meaning of -pedantic-errors from the meaning used by GCC, or am I misinterpreting something?
As I read it, yes. From the Clang documentation:
-pedantic-errors
Error on language extensions.
In GCC:
-pedantic
Issue all the warnings demanded by strict ISO C and ISO C++ [...]
-pedantic-errors
Give an error whenever the base standard (see -Wpedantic) requires a diagnostic, in some cases where there is undefined behavior at compile-time and in some other cases that do not prevent compilation of programs that are valid according to the standard.
Clang errors on extensions.
GCC errors when the standard explicitly requires a diagnostic, and in some other cases besides.
These are different criteria, and they yield different sets of errors: the standard may not require a diagnostic for a construct that is nevertheless an extension, in which case GCC stays silent while Clang errors out.
What is the best set of options that will enforce adherence to the selected standard and will issue errors for all non-conforming code?
The first answer that comes to mind is: "none". Compilers inherently rely on implementation-defined behavior and extensions, because they are meant to compile code in the first place, not to reject non-conforming code. There are even cases where the code is conforming and the behavior still differs between compilers.
Anyway, keep using -pedantic-errors, as it evidently works for detecting non-conforming code: your code is invalid and its behavior undefined, so it is non-conforming, and Clang properly rejects it. Also use linters and sanitizers to detect other cases of undefined behavior.
If I continue to use -pedantic-errors with Clang is there a way to get it to issue a warning instead of an error in specific cases?
Use -fno-builtin. Clang then stops treating strlen as one of its known library builtins, so the call is handled as an ordinary C89 implicit declaration instead of a mismatch against the builtin's type.
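For example, added to a command line mirroring the GCC one from the question:

clang -c -x c -std=c89 -pedantic-errors -fno-builtin -Wall -Wextra -Wno-comment -Wno-parentheses -Wno-format-zero-length test.c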
For example, gcc 4.7 has a new feature, -Wnarrowing. In configure.ac, how can I test whether a feature is supported by the current gcc?
There's a file in gnulibc, but it doesn't make much sense to me.
Both gcc and clang support -W[no-]narrowing and -W[no-]error=narrowing options.
With -std=c++11, gcc emits a warning by default, and clang emits an error by default. Even though you only mention gcc, I think you could extend the functionality check to compilers like clang that attempt to provide the same options and extensions. That might include Intel's icc too.
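For reference, the construct exercised by the check below is a narrowing conversion inside brace initialization, which C++11 made ill-formed; a minimal file to reproduce:

int main()
{
    int i {1.0};   // double -> int narrowing in list-initialization
    (void) i;      // keep the program otherwise warning-free
    return 0;
}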
Let's assume you've selected the C++ compiler with AC_PROG_CXX, and have ensured that it's using the C++11 standard.
ac_save_CXXFLAGS="$CXXFLAGS"
CXXFLAGS="$CXXFLAGS -Werror -Wno-error=narrowing"
AC_LANG_PUSH([C++])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],
[[int i {1.0}; (void) i;]])],
[ac_cxx_warn_narrowing=1], [ac_cxx_warn_narrowing=0])
AS_IF([test $ac_cxx_warn_narrowing -ne 0],
[AC_MSG_RESULT(['$CXX' supports -Wnarrowing])])
AC_LANG_POP([C++])
CXXFLAGS="$ac_save_CXXFLAGS"
Compilation will only succeed if: 1) the compiler supports the -Wnarrowing-related options, which implies it supports -Werror, and 2) it recognizes C++11 initialization syntax.
Normally, configure.ac scripts and flags passed to configure should avoid -Werror, as it breaks too many internal tests. In this context, we ensure there are no other warnings besides the narrowing, which is why (void) i; is needed to prevent a warning about unused variables.
The logic behind this should probably be:
1) Create a correct file that should get a warning with -Wnarrowing, and verify that it compiles cleanly without the flag. This is a sanity check.
2) Compile that same file with -Wnarrowing, and verify that it still compiles. This detects compilers that don't support -Wnarrowing as an option, so you don't attempt to pass bogus options to them.
3) Compile that same file with -Werror=narrowing, and verify that it now fails to compile. If it fails here, you can be fairly certain that the compiler does indeed support -Wnarrowing. This last check detects compilers that accept -Wnarrowing/-Werror=narrowing but emit a warning "ignoring unknown option -Wnarrowing"; in that case, you shouldn't pass -Wnarrowing. (A manual illustration of these steps follows the list.)
4) Optionally, you may also want to compile a file that shouldn't get a warning with -Wnarrowing, using -Werror=narrowing, in case you find a compiler where -Wnarrowing is useless but -Werror=narrowing is a hard error. I cannot think of a compiler where this would be required, though.
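As a rough manual illustration of steps 1)-3) (the file name test.cc is hypothetical; it would contain a narrowing initialization such as int i {1.0};):

g++ -std=c++11 -c test.cc                     # 1) sanity check: must compile
g++ -std=c++11 -c -Wnarrowing test.cc         # 2) must still compile (warning at most)
g++ -std=c++11 -c -Werror=narrowing test.cc   # 3) must now fail if the flag is really supported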
Translating this to a configure check should be trivial.
See http://code.google.com/p/opendoom/source/browse/trunk/VisualC8/autotools/ac_c_compile_flags.m4 for an example test of this sort - this tries to compile a trivial program with the given compiler flag, and adds it to CFLAGS if it works.
Surprisingly, I found that gcc can catch this error when compiling C. I simplified the code down to something that still triggers the warning, and I'm posting the question to understand the details of the techniques gcc uses. Below is my code, in the file a.c:
int main(){
    int a[1]={0};
    return(a[1]);
}
My gcc version is gcc (Ubuntu/Linaro 4.7.3-1ubuntu1) 4.7.3. When using gcc a.c -Wall, there is no warning; when using gcc -O1 a.c -Wall, there is a warning:
warning: ‘a[1]’ is used uninitialized in this function [-Wuninitialized]
and when using gcc -O2 a.c -Wall (or -O3), there is another warning:
warning: array subscript is above array bounds [-Warray-bounds]
The most surprising thing is that when I give a[1] a value, none of the above compilation options gives any warning. There is no warning even when I change the index to a huge number (of course, the compiled program then offends the operating system and gets killed):
int main(){
    int a[1]={0};
    a[2147483648]=0;
    return(a[2147483648]);
}
I think the behavior above is more of a feature than a bug. I hope someone can help me figure out what happens, and/or why the compiler is designed this way. Many thanks!
Accessing memory past the end of the array results in undefined behaviour.
gcc is nice enough to go out of its way to detect, and warn you about, some of these errors. However, it is under no obligation to do so, and certainly cannot be expected to catch all such errors.
The compiler is not required to provide a diagnostic for this kind of error, but gcc is often able to help. Notice that these warnings arise in part as a byproduct of the static analysis passes done for optimization purposes, which means that, as you noticed, such warnings often depend on the specified optimization level.
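If you want such out-of-bounds accesses caught reliably rather than as an optimization byproduct, a runtime checker is the better tool. For example, with a gcc recent enough to ship AddressSanitizer (4.8 or later), something like:

gcc -g -fsanitize=address a.c && ./a.out

should report a stack-buffer-overflow for the read of a[1] at run time, independent of the optimization level.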
Is it possible to 'tell' the compiler that if the total number of warnings (while compiling a C++ program) exceeds, say, 10, then it should stop compiling further and emit an error?
Or is it possible to hack a compiler like clang to provide this functionality?
GCC has two options that together would achieve this; from the GNU online docs:
-Werror
Make all warnings into errors.
-fmax-errors=n
Limits the maximum number of error messages to n, at which point GCC bails out rather than attempting to continue processing the source code.
This would make a build with any warnings at all fail, though; the options just define when to stop parsing.
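An illustrative invocation (the file name is hypothetical):

g++ -Werror -fmax-errors=10 -c file.cpp

This stops the build after ten warnings-turned-errors. Clang's closest analogue to -fmax-errors=n is -ferror-limit=n.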
I haven't seen this kind of feature in gcc or clang. You can certainly try to patch it into either of them, both are open source. There is also -Werror (accepted by both compilers) which simply treats warnings as errors.
How about using -Werror to turn warnings into errors and -fmax-errors=n to set the limit?
(Also, perhaps making your code completely warning free would be a good thing).
I am using g++ version 4.1.2 on an x86_64 GNU/Linux system. The code base is very large, and I don't have a sufficient understanding of the makefiles used in the project. The code compiles fine as it is.
For debugging purposes, I need to preprocess (g++ -E) a few source files individually and then re-compile them. I am supplying the required include paths with -I. Ideally the compilation should go fine.
But I am getting a few discrepancies in the standard headers, like:
1) typedef unsigned long size_t; causes errors with the operator new() declaration generated by the compiler (if I change it to unsigned int manually, this error disappears)
2) In library functions like unsigned long numeric_limits<>::max(), the compiler complains about big constants such as 922...807L; it generates the compiler error: integer constant is too large for long type
3) A mismatched declaration of __errno_location() gives a compiler error
I am having a hard time finding what is going wrong. Why does compilation go fine when I run make on the unchanged file, but the standard headers start complaining when I run g++ -E with -I on an individual file?
(Note that there is no problem with the code we have written; the errors come from the standard library side. I tried locating the stddef.h that has the unsigned int typedef, but that only fixes the first problem.)
Any ideas on how to fix these errors would be highly appreciated.
Don't preprocess and compile separately, or if you must then use consistent compiler options and a consistent environment.
It sounds as though you're running the preprocessor on a 32-bit machine (or with the -m32 option) and then compiling on a 64-bit machine.
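One quick way to confirm is to compare the predefined macros of the two environments; a diagnostic sketch (typical output shown as comments):

g++ -dM -E - < /dev/null | grep __SIZE_TYPE__
# 64-bit target: #define __SIZE_TYPE__ long unsigned int
# 32-bit target: #define __SIZE_TYPE__ unsigned int

If the size_t definitions differ between where you preprocess and where you compile, that explains the operator new() mismatch.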
When compiling the output of the preprocessor, make sure that you use the -fpreprocessed compiler option so that the preprocessor will not run again.
If you don't pass that option, certain constructs that produced identifiers looking like macros may get expanded again into something they shouldn't expand to. It's hard for me to come up with a case that shows a difference (I'm sure I can, but it would take a bit of puzzling out and would be pretty contrived). However, the implementation headers may well use some arcane macro techniques that might be sensitive to this option.
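Under those constraints, a minimal sketch of the two-step flow ($PROJECT_FLAGS is a placeholder for whatever the makefile actually passes, including any -m32/-m64 and -I options):

g++ $PROJECT_FLAGS -E file.cpp -o file.ii                # preprocess once, same flags as the build
g++ $PROJECT_FLAGS -fpreprocessed -c file.ii -o file.o   # compile without re-running the preprocessor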