How to prevent gcc from allowing stdbool to be included in ANSI C code? - header-files

I am programming in ANSI C and I want my compiler (gcc) to complain when I try to include a header file that is not part of this standard (e.g., stdbool).
The command line options -ansi -pedantic -Wall -Wextra aren't enough to do so.
Could you help me?
Example of code that should not compile, but does:
#include <stdlib.h>
#include <stdbool.h>

int main(void) {
    return EXIT_SUCCESS;
}

Related

Why does the filename extension make a difference to compiling?

I compile this code under CentOs8 with the GNU compiler:
#include <stdlib.h>

int main() {
    int *a = malloc(3 * sizeof(int));
    return 0;
}
When I name it a.cpp, both of these compile commands fail:
g++ -o a a.cpp
gcc -o a a.cpp
But after I rename it to a.c, this compile command succeeds:
gcc -o a a.c
This is C code, NOT C++ code. I expected the choice of gcc versus g++ to make the difference, but it seems the compiler only considers the filename extension.
Could you please provide some details on this?
C++ errors on the implicit conversion from the void* returned by malloc() to int*, whereas C allows implicit conversions from void* to other object pointer types.
Most compilers default to using the file extension to determine which language to compile. A look at man gcc shows that all .c files are compiled as C by default, whereas .cc, .cp, .cxx, .cpp, .CPP, .c++, and .C (capital C) files are compiled as C++. You can override this behavior and force the language with the -x option to gcc/g++.
Example:
gcc -x c++ -c foo.c    # compiles foo.c as C++ instead of C
gcc and g++ are front ends to the same compiler driver and accept largely the same options. Beyond the default source language, the other important difference is at link time: g++ automatically links the C++ standard library, while gcc does not (even with -x c++, you would have to add -lstdc++ yourself).

Why does gcc produce a different result when building from source compared to linking a static library?

I have a single C++14 file, my.cpp, and from within it I'm trying to use a C99 library called open62541. For the latter, both full source open62541.c/.h and a library libopen62541.a exist. In my.cpp, where I include the open62541.h, I'm using C++ specific code (e.g. iostream), so technically I'm mixing C and C++.
I can get my.cpp to compile successfully by referencing the libopen62541.a:
gcc -x c++ -std=c++14 -Wall my.cpp -l:libopen62541.a -lstdc++ -o out
This outputs no warnings, and creates an executable out.
However, if I try to compile using source code only:
gcc -x c++ -std=c++14 -Wall my.cpp open62541.c -lstdc++ -o out
I get a lot of ISO C++ warnings (e.g. "ISO C++ forbids converting a string constant to ‘char*’") and some "jump to label" errors originating from within open62541.c, resulting in compilation failure.
I can get compilation to succeed by using the -fpermissive switch:
gcc -x c++ -std=c++14 -Wall my.cpp open62541.c -lstdc++ -fpermissive -o out
which still outputs a lot of warnings, but creates the executable successfully. However, I'm unsure if doing this is a good idea.
Perhaps worth mentioning is that open62541.h does account for C++ at the top:
#ifdef __cplusplus
extern "C" {
#endif
Given that the .a library, which comes bundled with the open62541 code, is supposedly built from the same source, why are the two approaches not consistent in the warnings and errors they generate? Why does one work while the other doesn't?
Should one method (linking the .a vs. compiling the .c) be preferred over the other? I was under the impression that they would be equivalent, but apparently they aren't.
Is using -fpermissive in this case more of a hack that could mask potential problems, and should thus be avoided?
The errors (and warnings) you see are what a C++ compiler outputs when compiling C code.
For instance, in C "literal" has type char[] while in C++ it has type const char[].
Had you let a C++ compiler build libopen62541.a from open62541.c, you would have seen the same errors (and warnings). A C compiler, however, might be fine with it (depending on the state of that C source file).
On the other hand, when you compile my.cpp and link it against libopen62541.a, the compiler never sees the offending C code, so there are no errors (or warnings).
From here, you basically have two options:
Use the precompiled library, if it suits you as-is:
g++ -std=c++14 -Wall -Wextra -Werror my.cpp -l:libopen62541.a -o out
Compile the library's code yourself as a first step if you need to modify it:
gcc -Wall -Wextra -Werror -c open62541.c
g++ -std=c++14 -Wall -Wextra -Werror -c my.cpp
g++ open62541.o my.o -o out
gcc -x c++ -std=c++14 -Wall my.cpp open62541.c -lstdc++ -o out
This command forces the C code in open62541.c to be compiled as C++. That file apparently contains constructs that are valid in C but not C++.
What you should be doing is compiling each file as its own language and then linking them together:
gcc -std=gnu11 -Wall -c open62541.c
g++ -std=gnu++14 -Wall -c my.cpp
g++ -o out my.o open62541.o
Wrapping up those commands in an easily repeatable package is what Makefiles are for.
If you're wondering why I changed from the strict -std=c++14 to the loose -std=gnu++14 mode, it's because the strict mode is so strict that it may break the system headers! You don't need to deal with that on top of everything else. If you want a more practical additional amount of strictness, try adding -Wextra and -Wpedantic instead ... but be prepared for that to throw lots of warnings that don't actually indicate bugs, on the third-party code.

Compiling on Linux with C++ standard libraries

Hi, I have the following example code:
func.h - header file for functions
#include <vector>
#include <tuple>
using std::vector;
using std::tuple;
tuple <double,double> A(vector<int>& n);
func.cpp - function cpp file
#include <iostream>
#include <vector>
#include <tuple>
using namespace std;
tuple <double,double> A(vector<int>& n)
{
    double a1 = n.size();
    double a2 = a1 + 0.5;
    return make_tuple(a1, a2);
}
main.cpp - main cpp file
#include <iostream>
#include <vector>
#include <tuple>
#include "func.h"
using namespace std;
int main()
{
    double a1, a2;
    vector<int> n;
    n.push_back(1);
    n.push_back(2);
    tie(a1, a2) = A(n);
    return 0;
}
This compiles fine in Visual Studio.
I have a problem compiling it on Linux (gcc version 4.4.7 20120313 Red Hat 4.4.7-11) with:
g++ -o -std=c++0x b main.cpp func.cpp -03 -lm
It does not compile, I get the following errors:
1. In file included from /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/array:35, from main.cpp:5: /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/c++0x_warning.h:31:2: error: #error This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options.
2. ‘std::tuple’ has not been declared
3. expected constructor, destructor, or type conversion before ‘<’ token
Any guidance on how to deal with this will be helpful!
Surprisingly, the error tells you that -std=c++0x is not set.
Double-check your compilation command. It should be
g++ -std=c++0x -o b main.cpp func.cpp -O3 -lm
and not
g++ -o -std=c++0x b main.cpp func.cpp -03 -lm
as in the original question.
You are telling GCC to output to the file named "-std=c++0x", and thus not setting that option at all, leading to this error. What it does with "b" afterwards, I have no idea. But you should always do "-o outputfilename" and not put other options between the "-o" option and its argument.
I cut and pasted your three files (func.h, func.cpp and main.cpp) and I can assure you that on my Linux box (CentOS 7.2) with g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4) everything works fine (your original command had some errors):
g++ -o myProg -O3 -std=c++0x main.cpp func.cpp -lm
Update your GCC (even from sources if you have several hours ;) ) .
Since you want to run an executable (compiled from recent C++11 or C++14 source code) on a server with an old version of Linux and of GCC (you have GCC 4.4, which does not support recent C++ standards: it was released in 2009, before C++11 was published in 2011), you could try the following:
install a recent Linux distribution on your own laptop (or computer) and check that its GCC compiler is at least GCC 5 (preferably GCC 6) by running g++ --version (you might need to use g++-5 instead of g++, etc.)
compile and link your program statically on that laptop using g++ -static -std=c++14 -Wall func.cpp main.cpp -lm -o mybinprog (perhaps also with -O3 if you want to optimize, and/or -g for debugging; better to do the debugging locally)
copy the executable to the remote server (e.g. using scp mybinprog remotehost:) and run it there
It is very probable (but not certain) that a statically linked executable built on a newer Linux (laptop) would run on some older Linux server.
BTW, to compile a multi-source file program, better learn how to use GNU make
Notice that the order of program arguments to g++ matters a great deal, so read the documentation about Invoking GCC.
PS. Technically you might even try to link dynamically the C library and statically the C++ standard library.

__STDC_VERSION__ not defined in C++11?

I tried to get __STDC_VERSION__ with gcc 4.8 and clang, but it is just not defined.
Compiler flags:
g++ -std=c++11 -O0 -Wall -Wextra -pedantic -pthread main.cpp && ./a.out
http://coliru.stacked-crooked.com/a/b650c0f2cb87f26d
#include <iostream>
#include <string>
int main()
{
    std::cout << __STDC_VERSION__ << std::endl;
}
As result:
main.cpp:6:18: error: '__STDC_VERSION__' was not declared in this scope
Do I have to include some header, or add a compiler flag?
The official documentation states:
__STDC_VERSION__
...
This macro is not defined if the -traditional-cpp option is used, nor when compiling C++ or Objective-C.
Also, the C++ standard(s) leave it up to the implementation to define this macro or not, and g++ opted for the latter.
Depending on what you're trying to do the __cplusplus macro might be an alternative (it is not just defined, it has a value, too ;)
For those who come across the following warning:
warning: "__STDC_VERSION__" is not defined
This is due to the -Wundef flag being enabled:
-Wundef
Warn if an undefined identifier is evaluated in an #if directive. Such identifiers are replaced with zero.
(from official GCC documentation)
So you can just define __STDC_VERSION__ to zero (-D__STDC_VERSION__=0) to suppress these warnings.

Why does my C++0x code fail to compile if I include the "-ansi" compiler option?

I've come across a really weird error that only pops up if I use the ansi flag.
#include <memory>
class Test
{
public:
    explicit Test(std::shared_ptr<double> ptr) {}
};
Here's the compilation, tested with gcc 4.5.2 and 4.6.0 (20101127):
g++ -std=c++0x -Wall -pedantic -ansi test.cpp
test.cpp:6:34: error: expected ')' before '<' token
But compiling without -ansi works. Why?
For the GNU C++ compiler, -ansi is another name for -std=c++98, which overrides the -std=c++0x you had earlier on the command line. You probably want just
$ g++ -std=c++0x -Wall minimal.cpp
(You can keep -pedantic if you want the diagnostics demanded by strict ISO C++; for pickier warnings, also try adding -Wextra.)
std::shared_ptr doesn't exist in c++98. Try these changes:
#include <tr1/memory>
...
explicit Test(std::tr1::shared_ptr<double> ptr) {}
Um, because there is not yet an ANSI standard for C++0x? The ANSI flag checks for conformance with existing standards, not future ones.