I want my program to be compiled with the C++11 standard and O3 optimization.
Normally I would just use the compilation flags -std=c++11 and -O3, but I have to send the source file to a remote server where it is compiled automatically. So basically the only thing that comes to mind is to use some neat macro. I am pretty sure it is possible to request optimization that way, because I saw it somewhere, but I cannot remember what it looked like.
What you're asking for is outside the C++ spec.
So if it were supported at all, anywhere, it would be supported via pragmas. I don't know of any pragma that says "compile as C++11", so I'd call that a lost cause. There do seem to be some optimization pragmas out there:
http://gcc.gnu.org/onlinedocs/gcc/Function-Specific-Option-Pragmas.html
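For example, with reasonably recent GCC something like this should work (a sketch: the pragma is GCC-specific and the function is made up):

// GCC-specific: apply -O3 to the functions that follow in this file.
#pragma GCC optimize ("O3")

int hot_loop(int n)        /* hypothetical function */
{
    int sum = 0;
    for (int i = 0; i < n; ++i)
        sum += i * i;
    return sum;
}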
But bear in mind that using a #pragma is entirely outside of the spec. If the compiler hits one it doesn't recognize, it can do anything it likes and still be standard-conforming. That includes launching video games (and this has been done).
So long story short: if this is a requirement for you, then you need to tell whoever is providing your compilation service that your program needs certain compiler settings.
Related
GCC seems to allow "and" / "or" to be used instead of "&&" / "||" in C++ code; however, as I expected, many compilers (notably MSVC 7) do not support this. The fact that GCC allows it has caused some annoyance for us: we have different developers working on the same code base on multiple platforms, and occasionally these "errors" slip in as people switch back and forth between Python and C++ development.
Ideally, we would all remember to use the appropriate syntax, but for those situations where we occasionally mess up, it would be really nice if GCC didn't let it slide. Anybody have any ideas on approaches to this?
If "and" and "or" are simply #defines then I could #undef when using GCC but I worry that it is more likely built into the compiler at more fundamental level.
Thanks.
They are part of the C++ standard, see for instance this StackOverflow answer (which quotes the relevant parts of the standard).
Another answer in the same question mentions how to do the opposite: make them work in MSVC.
To disable them in GCC, use -fno-operator-names. Note that, by doing so, you are in fact switching to a non-standard dialect of C++, and there is a risk that you end up writing code which might not compile correctly on standard-compliant compilers (for instance, if you declare a variable with a name that would normally be reserved).
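A small sketch of both directions (the function is made up; the flag behavior is as documented):

// Standard C++: "and" / "or" are alternative tokens for "&&" / "||".
bool in_range(int x)
{
    return x >= 0 and x < 10;   // same as: x >= 0 && x < 10
}
// g++ file.cpp                     -> compiles (standard behavior)
// g++ -fno-operator-names file.cpp -> error: the keywords are disabled,
//                                     which is what the question asks for
// Older MSVC: #include <ciso646> to get the keywords as macros.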
The words are standard in C++ without the inclusion of any header.
The words are standard in C if you include the header <iso646.h>.
MSVC is doing you no service by not supporting the standards.
You could, however, use tools to enforce non-use of the keywords. It can also be a coding guideline, and you can quickly train your team not to make silly portability mistakes. It isn't that hard to avoid the problem.
Have you considered any code analysis tools? Something similar to FxCop? With FxCop you can write your own rules (check for &&) and you can set it to run during the pre-compile stage.
-pedantic-errors may help catch this, among other GNU-isms.
I have a very simple question but I haven't been able to find the answer yet so here I go:
I am using a shared library and I'd like to know whether it was compiled with an optimization flag such as -O3 or not.
Is there a way to find that information?
Thank you very much
If you are using gcc 4.3 or later, check out -frecord-gcc-switches. After you build the binary file use readelf -n to read the notes section.
More info can be found here: Detect GCC compile-time flags of a binary
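A minimal sketch of the workflow (file names invented; exactly where the flags end up varies by GCC version, hence the two readelf invocations):

$ gcc -frecord-gcc-switches -O3 -fPIC -shared mylib.c -o libmylib.so
$ readelf -n libmylib.so                      # flags in the notes section, or:
$ readelf -p .GNU.command.line libmylib.so    # flags in a dedicated section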
Unless whoever compiled the library in the first place used a compiler that saves these flags to the binary somehow (I think only recent GCC allows that, and probably clang), there's inherently no way to know exactly which flags were used. You can, of course, if you have a lot of experience reading assembly, deduce a lot (for example, "this looks like an automatically unrolled loop", "this looks like someone optimized for a processor where A xor A is faster than A := 0x0", etc.).
Generally, different source code can end up as the same compiled code, so in many cases there's no way to tell whether what was compiled had been optimized "by hand" in the first place or by the compiler.
Also, there are a lot of C++ compilers out there, a lot of versions of these and even more flags...
Now, your question comes out of somewhere; I'm guessing you're asking this because either
1. you want to know if there are debugging symbols in there, or
2. you want to make sure something isn't crashing because of incorrect optimization, or
3. you want to know whether there's potential for optimization.
Now, 1. is really rather independent of the optimization level; of course, the more you optimize, the less the generated code corresponds to "lines of source code", but you can still have debugging symbols.
The second point: I've learned the hard way that unless I've successfully excluded every other alternative, I'm the one to blame for bugs (and not my compiler).
The third point: There's always room for optimization, but that won't help you unless you're in a position to recompile the library yourself. If you recompile, you'll set the flags yourself, so there's no need to find out whether they were set in the first place. If you're not able to recompile, knowing there is room won't help you. And if you're just getting your library out of a complex build process, most build systems leave you with a log that includes things like compiler flags.
I have C++ software that gets compiled on different OSes, platforms and compilers. Now, compilers sometimes have bugs; for instance this one implies that gcc versions before 4.6.4 and before 4.7.3 are a no-go. I could include a unit test that showcases the bug (and perhaps this question will reveal that that's indeed what I should be doing), but it is a tedious task: compiler bugs are sometimes hard to reproduce, and turning one into a unit test might not be easy either... and that's when you have the platform and compiler at hand.
What I'm looking for is a repository that tells me which versions of g++, clang++ and msvc++ suffer from fatal bugs in their C++11 support (I'm not talking about missing features; when features are missing I work around them). I would then make the build system fail fatally when building with one of those. A nice side effect is that I'm not even forced to hit a bug before banning a compiler (so I'm saving myself future trouble).
Does such a list exist?
This is probably not the answer you are looking for, but I believe the correct way to deal with this is to have a white-list rather than a black-list. In other words, have a list of compilers that you know work, and if the customer tries to build using a version other than the ones you have tested with, you issue a warning message as part of the build script, saying something like this:
This compiler is not supported; please see
http://www.example.com/list_of_supported_compilers.html for a list of
compilers we support. If you choose to continue using this compiler,
feel free to do so, but don't expect full support from our
tech support if you find a problem.
The reason I say this is that:
You will not be able to prove that EVERY version other than those on your blacklist works correctly. You can, however, for whatever test cases you have, prove that compiler X version a.b.c-d works [this doesn't mean that the compiler is bug-free, just that you haven't hit any of its bugs in your testing!]
Even if the compiler is "known good" (by whatever standard that is defined), your particular code may still trigger compiler bugs.
Any sufficiently large software (or hardware) product will have bugs. You can only show that your software works by testing it. Relying on external "there is a known bug in version such-and-such of compiler X" lists will not help you avoid bugs affecting your code. Having said that, most compilers are fairly well tested, so you (usually) need to do some fairly unusual/complicated things to make the compiler fail.
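A minimal compile-time sketch of that whitelist idea, using the compilers' predefined version macros (the cut-off versions here are illustrative, not recommendations):

#if defined(__clang__)
  #if __clang_major__ < 3 || (__clang_major__ == 3 && __clang_minor__ < 3)
    #error "Untested clang version; see our list of supported compilers"
  #endif
#elif defined(__GNUC__)
  #if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 8)
    #error "Untested GCC version; see our list of supported compilers"
  #endif
#elif defined(_MSC_VER)
  #if _MSC_VER < 1800   /* older than Visual Studio 2013 */
    #error "Untested MSVC version; see our list of supported compilers"
  #endif
#endif

(On GCC/clang you can use #warning instead of #error if you only want the friendly nudge described above rather than a hard stop.)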
Investigate Boost.Config, in particular the header <boost/config.hpp>.
This includes a large group of macros for a wide variety of compilers (and different versions thereof) which indicate which C++ features are enabled, broken, etc. It also includes a comprehensive test suite which can be used to test any new compiler for missing features.
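For example (a sketch; BOOST_NO_CXX11_VARIADIC_TEMPLATES is one of the real Boost.Config feature macros):

#include <boost/config.hpp>

#ifdef BOOST_NO_CXX11_VARIADIC_TEMPLATES
// Fall back to a fixed-arity overload on compilers where variadic
// templates are missing or broken.
template <typename T> void log(const T& value);
#else
template <typename... Args> void log(const Args&... args);
#endif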
According to this post the Google C++ Testing Framework considers "make install" a bad practice.
http://groups.google.com/group/googletestframework/browse_thread/thread/668eff1cebf5309d
The reason given is that installing the library this way risks violating the "One Definition Rule".
http://en.wikipedia.org/wiki/One_Definition_Rule
Somewhere further down the thread it says: "If you pass different -DGTEST_HAS_FOO=1 flags to different translation units, you'll violate the ODR. Or sometimes people use -D to select which malloc library to use (debug vs release), and you have to use the same malloc library across the board."
My questions:
What exactly is this project doing wrong?
What can we learn from this? How can we write code more defensively to avoid violating the ODR?
The straight answer to the question would be: do not write code that depends on compiler parameters. In this case, the whole discussion stems from the fact that the code differs based on compiler flags (most probably by means of #ifdef or other preprocessor directives). That in turn means that while the codebase is the same, changing compiler flags changes the preprocessed code, and a binary compiled with one set of flags will not be compatible with a binary compiled with a different set.
Depending on the actual project, it might be impossible to decouple the code from compiler flags, and you will have to live with it, but I would recommend avoiding, as much as possible, code that can be configured from the compiler command line, in the same way that you should avoid DEBUG-only code with side effects. And when you cannot, document the effect that different compiler flags have.
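To make the mechanism concrete, a hedged illustration (the flag name is invented) of how a -D flag can silently change a type's definition:

// foo.h -- layout depends on a compile-time flag
struct Foo {
#ifdef WITH_DEBUG_INFO      // hypothetical flag
    int debug_counter;      // present only in some translation units
#endif
    int value;
};
// A translation unit built with -DWITH_DEBUG_INFO and one built without
// it now hold two different definitions of Foo under the same name: an
// ODR violation, and undefined behavior once Foo objects cross between
// them.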
All applicants to our company must pass a simple quiz using C as part of early screening process.
It consists of a C source file that must be modified to provide the desired functionality. We clearly state that we will attempt to compile the file as-is, with no changes.
Almost all applicants use "strlen", but half of them do not include "string.h", so the file does not compile until I add the include.
Are they just lazy or are there compilers that do not require you to include standard library files, such as "string.h"?
GCC will happily compile the following code as is:
main()                                       /* implicit int return (pre-C99) */
{
    printf("%u\n", strlen("Hello world"));   /* both implicitly declared */
}
It will complain about an "incompatible implicit declaration" of the built-in functions printf and strlen, but it will still produce an executable.
If you compile with -Werror, it won't compile.
I'm pretty sure it's non-conformant for a compiler to include headers that weren't asked for. The reason is that the C standard says various names are reserved if the relevant header is included, and I think this implies they aren't reserved if it isn't, since compilers aren't allowed to reserve names the standard doesn't say are reserved (unless, of course, you include a non-standard header which happens to be provided by the compiler and is documented elsewhere to reserve extra names, which is what happens when you use POSIX).
This doesn't fully answer your question - there do exist non-conformant compilers. As for your applicants, maybe they're just used to including "windows.h", and so have never thought before about what header strlen might be defined in by the C standard. I assume without testing that MSVC does in principle require you to include "string.h". But since "windows.h" does that for you, for the vast majority of practical Windows programs you don't need to know that you have to include "string.h".
They might be lazy, but you can't tell. Standard library implementations (as opposed to compilers, though of course each compiler usually has "its own" stdlib impl.) are allowed to include other headers. For example, #include <stdlib.h> could include every other library described in the standard. (I'm talking in the context of "C/C++", not strictly C.)
As a result, programmers get accustomed to such things, even if not strictly guaranteed, and it's easy to forget whether some function comes from a general catch-all like stdlib.h or something else—many people forget that memcpy is from string.h too.
If they do not include any headers, I would count them as wrong. If you don't allow them to test it with a particular implementation, however, it's hard to say they're wrong. And if you don't provide them with man pages (which represent the resources they'll need to know how to use on the job), then you're wrong.
At that point, you can certainly say they don't follow the exact letter of the standard; but do you want coders who get things done and know how to fix problems when they see them, or coders who worry about minutiae that won't matter?
If you provide a C file to start working with, make it have all the headers that could be needed from the beginning and ask the applicants to remove the unused ones.
The most common engineering experience is to add (or delete) a few lines of code to/from an application with thousands of lines already working correctly. It would be extremely rare in such a case to need another header file when adding a call to printf() or strlen().
It would be interesting to look over the shoulder of experienced engineers—not just graduated from school, but with extensive experience in the trenches—to see if they simply add strlen() and try compiling, or if they check to see if stdlib.h or string.h is already included before compiling. I bet the overwhelming majority do the former.
C implementations usually still allow implicit function declarations.
Anyway, I wouldn't consider all the boilerplate a required part of an interview, unless you specifically ask for it (e.g. "please don't omit anything you'd normally have in a source file").
(And with Visual Assist's "add ... include" I know less and less where they come from ;))
Most compilers provide some kind of option to force header inclusion.
E.g., the GCC compiler has the -include option, which is the equivalent of the #include preprocessor directive.
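For example (file names invented), this compiles the quiz file as if it began with #include <string.h>:

gcc -include string.h quiz.c -o quiz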
TCC will also happily compile a file such as the accepted answer's example:
int main()
{
    printf("%u\n", strlen("hello world"));
}
without any warnings (unless you pass -Wall); as an added bonus, you can just call tcc -run file.c to execute file.c without compiling to an output file first.
In C89 (the most commonly applied standard), a function that is not declared is assumed to return an int and take unknown arguments, and the code will compile (probably with warnings). If it does not, the compiler is not compliant. If, on the other hand, you compile the code as C++, it will fail, and it must do so if the C++ compiler is compliant.
Why not simply specify in the question that all necessary headers must be included (and perhaps irrelevant ones omitted)?
Alternatively, sit the candidates at a machine with a compiler and have them check their own code, and state that the code must compile at the maximum warning level, without warnings.
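With GCC, the usual incantation for "maximum warnings, treated as errors" is something along these lines (the exact flag set is a matter of taste):

gcc -Wall -Wextra -Werror quiz.c -o quiz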
I've been doing C/C++ for 20 years (I've even taught classes) and I guess there's a 50% probability that I'd forget the include too (99% of the time when I code, the include is already there somewhere). I know this isn't exactly answering your question, but if someone knows strlen(), they will know within seconds what to do with that specific compiler error, so from a job-qualification standpoint the slip is practically irrelevant.
Rather than putting emphasis on stuff like that, checking for the subtleties that require real understanding of the language would be far more relevant: buffer overruns, strncpy not appending a final \0 when it hits the limit, asking someone to make a safe (truncating) copy into a buffer of limited length, etc. Especially in C/C++ programming, the errors that do not generate a compiler error are the ones that will cause you or your company real trouble.
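For instance, the "safe truncating copy" exercise mentioned above might look like this (a sketch of the kind of answer you would hope to see):

#include <string.h>

/* Copy src into dst (capacity dst_size), always NUL-terminating.
   strncpy alone does NOT guarantee the terminator when src is too long. */
void safe_copy(char *dst, size_t dst_size, const char *src)
{
    if (dst_size == 0)
        return;
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';   /* the step candidates often forget */
}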