Why does this not compile on VC 2005?
bool isTrue(bool, bool) { return true; }
void foo();
#define DO_IF(condition, ...) if (condition) foo(__VA_ARGS__);
void run()
{
DO_IF(isTrue(true, true)); // error C2143: syntax error : missing ')' before 'constant'
}
Running this through the preprocessor alone outputs:
bool isTrue(bool, bool) { return true; }
void foo();
void run()
{
if (isTrue(true true)) foo();;
}
Notice the missing comma in the penultimate line.
Last Edit:
LOL!
bool isTrue(bool, bool) { return true; }
void foo();
#define DO_IF(condition, ...) if (condition) { foo(__VA_ARGS__); }
void run()
{
DO_IF(isTrue(true ,, true)); // ROTFL - This Compiles :)
}
Macros with an indefinite number of arguments don't exist in the 1990 C standard or the current C++ standard. They were introduced in the 1999 C standard (C99), and implementations were rather slow to adopt the changes from that standard. They will be in the forthcoming C++ standard (which I think is likely to come out next year).
I haven't bothered to track C99 compliance in Visual Studio, mostly because the only things I use C for anymore require extreme portability, and I can't get that with C99 yet. However, it's quite likely that VS 2005 lacked parts of C99 that VS2008 had.
Alternatively, it could be that you were compiling the program as C++. Check your compiler properties.
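If you have to stay on a compiler whose variadic-macro support is unreliable, one workaround is to drop __VA_ARGS__ and use a fixed-arity macro instead. A minimal sketch based on the question's names, redefining DO_IF without the variadic part (this assumes, as the answers above suggest, that only the variadic handling is at fault):
bool isTrue(bool, bool);
void foo();
#define DO_IF(condition) if (condition) foo();
void run()
{
DO_IF(isTrue(true, true)); // the comma sits inside balanced parentheses of a single macro argument
}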
Run your code through CPP (C preprocessor) to see what substitutions CPP does for your macro.
You can do this either by invoking cpp directly or by passing the -E flag to the compiler (if you use gcc, of course).
Various preprocessor implementations parse the commas greedily, treating them as separators for macro arguments. Thus, CPP thinks that you're asking "DO_IF" to do a substitution with two parameters, "isTrue(true" and "true)".
Your code compiles just fine in VS2008 if I change DO_IF to RETURN_IF; that rename shouldn't affect anything relevant to your error, though.
Edit: Still compiles without errors, even after your changes.
I think that should work, except shouldn't it be...
RETURN_IF(isTrue(b, !b));
and
RETURN_IF(isTrue(b, b));
I'm trying to investigate the difference in generated code when I switch Visual Studio 2019 from /EHsc (C++ exceptions only, assuming extern "C" functions never throw) to /EHs (also do not assume that extern "C" functions won't throw – ref), but I can't seem to coax VS into providing a useful testcase when I've minimised it.
I'm surprised the following doesn't do it (the assembly in both cases is identical), since the contents of the function referred to by fptr (a possible definition being included in comments for exposition) are unknown to the optimiser.
void foo();
/*void foo()
{
throw 0;
}*/
extern "C"
void bar(void (*fptr)())
{
fptr();
}
int main()
{
try
{
bar(&foo);
}
catch (...) {}
}
Granted, it knows that any hypothetical exception will be immediately caught, and since with /EHsc the result of this exception propagation is "undefined" per Microsoft, its "appearing to work" is of course a valid outcome. But that's not much help to me here!
So how can I perform this experiment, without introducing different translation units? Ideally I want to be able to come up with a Compiler Explorer snippet for this.
My goal is to prove that permitting extern "C" to throw (or, rather, propagate) C++ exceptions in a well-defined manner does not have a higher runtime cost than I'm willing to accept in trade.
Yes, I am aware of general advice not to let exceptions cross module boundaries or flow through third-party C code. Yes, I am doing that anyway. Yes, that's fine in our project! 😊
The following MCVE may give you some ideas for a test framework. You could easily add more 'thorough' test code, but this will compile and run. Note that, with the /EHsc option set, this warning is generated:
warning C5039: 'FuncTst': pointer or reference to potentially throwing
function passed to extern C function under -EHc. Undefined behavior
may occur if this function throws an exception.
However, when using /EHs the warning goes away - which suggests at least the possibility of different code generation.
Here's the suggested code:
#pragma warning(disable:4514) // These two lines de-fluff the hundreds of other
#pragma warning(disable:4710) // warnings generated when /Wall is used.
#include <iostream>
using pFunc = int(__stdcall*)(int);
extern int __stdcall FuncOne(int p) {
int answer = 0;
for (int i = 0; i < p; ++i) answer += i / p; // Possible divide-by-zero
return answer;
}
extern "C" int __stdcall FuncTst(int i, pFunc fnc) noexcept // Expects noexcept but "FuncOne" can throw!
{
return fnc(i);
}
int main()
{
int q;
std::cout << "Enter test number: ";
std::cin >> q;
int z = FuncTst(q, FuncOne);
std::cout << "Result = " << z << std::endl;
return 0;
}
Hope it helps! Feel free to offer critique and/or ask for any 'improvements' or explanation.
Note: Although you can't specifically enable the C5039 warning when you compile with (say) /W4, you can force it to flag as an error (though this may be a bit harsh), with:
#pragma warning(error:5039)
This seems to be something of a bug in Compiler Explorer, which forces /EHc even when /EHs is explicitly set.
Try to compile this code on some MSVC version with /EHs:
extern "C" void foo() {
throw 1;
}
static_assert(noexcept(foo()), "");
Surprisingly, the function foo is assumed noexcept, and the compiler output states the reason:
example.cpp(2): warning C4297: 'foo': function assumed not to throw an exception but does
example.cpp(2): note: The function is extern "C" and /EHc was specified
Here you can find this test. You may also see this by adding something like throw 1; in your example.
Nevertheless, you can fix it by compiling with /EHc- /EHs (also replace /O3 with /O2, as /O3 is not supported by MSVC), but the output is still the same.
You can find your example, fixed, here. In the /EHc- /EHs case, the output states
cl : Command line warning D9025 : overriding '/EHc' with '/EHc-'
and a static assertion will state that bar is actually not assumed noexcept.
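A minimal sketch of that check, reusing the question's foo and bar (the assertion message is mine; compile it as a single translation unit with /EHs /EHc-):
void foo();
extern "C" void bar(void (*fptr)())
{
fptr();
}
// Under /EHc (which /EHsc includes), MSVC assumes extern "C" functions don't throw,
// so noexcept(bar(&foo)) is true and this assertion fails to compile.
// With /EHs /EHc- the assumption is gone and the assertion passes.
static_assert(!noexcept(bar(&foo)), "bar is still assumed noexcept - /EHc is in effect");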
I'm attempting to build the source files of an open source C++ library written by someone else. This is being done on Windows with Cygwin's mingw-w64 compiler. The only compiler option I'm passing is -std=gnu++11, since the library depends on some C++11 features.
Here are some examples of code in their library that appears to be causing issues:
CPScalar & Abs()
{
m_dValue = std::abs(m_dValue);
return *this;
}
//...
template<typename Unit>
bool SEScalarQuantity<Unit>::Set(const SEScalarQuantity<Unit>& s)
{
if (m_readOnly)
throw CommonDataModelException("Scalar is marked read-only");
if (!s.IsValid())
return false;
m_value = s.m_value;
m_isnan = (std::isnan(m_value)) ? true : false;
m_isinf = (std::isinf(m_value)) ? true : false;
m_unit = s.m_unit;
return true;
}
I get compiler errors on the std:: qualified functions above. The compiler error on the m_dValue = std::abs(m_dValue); line is
error: call of overloaded 'abs(double&)' is ambiguous
This made me think it could be related to the question of whether std::abs(0u) is ill-formed, as well as to this answer to a similar SO question.
The line m_isnan = (std::isnan(m_value)) ? true : false; and the one after it give me
error: expected unqualified-id before '(' token
There are countless other uses of std:: that the compiler doesn't complain about. If I remove all of the std:: qualifiers in the statements that are giving me errors, the code compiles beautifully.
Thing is, this open source project is (presumably) being built by others without modification, so what am I missing here?
Add #include <cmath> to the file being compiled. The problem is that there are a couple of overloads of std::abs for integer types that are declared in the header <cstdlib> and the compiler is complaining that it doesn't know which of those to use. What's needed, though, is std::abs(double), and that's declared in <cmath>.
The reason that this code works with some compilers and not others is probably that there is a declaration of std::abs(double) coming in from some header other than <cmath>. That's allowed, but not required.
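A minimal sketch of the fix, reduced to just the member function from the question (the surrounding class is simplified here):
#include <cmath>   // declares std::abs(double), plus std::isnan and std::isinf
class CPScalar
{
public:
CPScalar & Abs()
{
m_dValue = std::abs(m_dValue);   // now resolves unambiguously to std::abs(double)
return *this;
}
private:
double m_dValue = 0.0;   // default member initializer is fine under -std=gnu++11
};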
struct T{ double x};
In C, it creates no problem.
But in C++, it gives the following compilation error:
expected ';' at end of member declaration.
From C11, "Structure and union specifiers, syntax" (6.7.2.1/1):
struct-declaration:
specifier-qualifier-list struct-declarator-list(opt) ;
Each element of a struct ends in a semicolon. Your claim that there is "no problem" is not based on what the C specification says. If your compiler accepts such code, it is not a conforming C compiler, or you are not using it correctly. (Some compilers have a configurable level of standards conformance.)
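For reference, the definition that both a conforming C compiler and a C++ compiler accept simply adds the missing semicolon:
struct T { double x; };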
The GCC parser for C grammar is implemented as follows:
/* If no semicolon follows, either we have a parse error or
are at the end of the struct or union and should
pedwarn. */
if (c_parser_next_token_is (parser, CPP_SEMICOLON))
c_parser_consume_token (parser);
else
{
if (c_parser_next_token_is (parser, CPP_CLOSE_BRACE))
pedwarn (c_parser_peek_token (parser)->location, 0,
"no semicolon at end of struct or union");
else if (parser->error
|| !c_parser_next_token_starts_declspecs (parser))
{
c_parser_error (parser, "expected %<;%>");
c_parser_skip_until_found (parser, CPP_CLOSE_BRACE, NULL);
break;
}
/* If we come here, we have already emitted an error
for an expected `;', identifier or `(', and we also
recovered already. Go on with the next field. */
}
It calls the function pedwarn on a missing semicolon.
The definition of pedwarn can be found here. It reads:
pedwarn is for code that is accepted by GCC but it should be rejected or diagnosed according to the current standard, or it conflicts with the standard (either the default or the one selected by -std=). It can also diagnose compile-time undefined behavior (but not runtime UB). pedwarns become errors with -pedantic-errors.
Why is the output of struct T{ double x}; different in C and C++?
The example struct definition is ill-formed in both C and C++.
C and C++ are different languages, they use different parsers (or whatever component of the compiler detects this error). The output is different because different decisions were made by people when they implemented the parser of the C compiler, than were made when the C++ parser was implemented.
The latter decided to issue an error; the former issues merely a warning and successfully compiles despite the bug. Another C compiler could refuse to compile it as well, and a C++ compiler could accept the program (as long as it produces a diagnostic).
Generally, C likes semicolons much more than Pascal (Delphi) does. In your case, the C compiler accepts struct T{ double x};, but C++ requires struct T{ double x;};
Let's look at such piece of code:
#include <iostream>
int foo(int i) {return i; }
int foobar(int z) {return foo(z);}
int main() {
std::cout << foobar(3) << std::endl;
}
It compiles fine with g++ -std=c++11 ... and gives output 3. But the same output is given by:
#include <iostream>
int foo(int i) {return i; }
int foobar(int z) { foo(z);}
int main() {
std::cout << foobar(3) << std::endl;
}
It compiles without problems, but the return keyword is clearly missing in foobar. Is it a bug in gcc 4.8.3, or am I not aware of some C++11 principle? (Run on Fedora 20)
The C++ standard doesn't mandate that compilers insist on a return statement in functions returning non-void. Instead, flowing off the end of such a function without a return statement is undefined behavior. The relevant statement in the standard is in 6.6.3 [stmt.return] paragraph 2, last sentence (and 3.6.1 [basic.start.main] paragraph 5 is the statement making it OK for main() to flow off the end):
Flowing off the end of a function is equivalent to a return with no value; this results in undefined behavior in a value-returning function.
The primary reason for this approach is that it may be non-trivial or even impossible to determine whether the function ever actually returns. Consider this function declaration and function definition:
extern void will_always_throw();
int does_not_return_anything() {
will_always_throw();
}
Assuming will_always_throw() indeed does as the name suggests, there is nothing wrong. In fact, if the compiler gets smarter and manages to verify that will_always_throw() does, indeed, always throw (or a "noreturn" attribute is attached to will_always_throw()), it may warn about the last statement in this definition never being reached:
int does_return_something_just_in_case() {
will_always_throw();
return 17;
}
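For completeness, a short sketch of the "noreturn" attribute variant mentioned above, in its C++11 spelling (function names taken from this answer); with the attribute, compilers typically don't warn about the missing return either:
[[noreturn]] void will_always_throw();   // promises control never returns normally
int does_not_return_anything()
{
will_always_throw();   // OK: control cannot flow off the end, so no return is needed
}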
The general approach to dealing with these situations is for compilers to support suitable options enabling or disabling warnings as necessary. For example, on your code all compilers I have access to (gcc, clang, and icc) produce a warning, assuming warnings are enabled (using -Wall for the first two and -w2 for Intel's compiler).
The code compiles fine because it is well-formed, and so you can run it. But since this is undefined behavior, you cannot rely on any particular behavior of the program; anything is legal. To prevent accidents like this, enable compiler warnings. If you compile your code with -Wall, you will see
main.cpp:10:28: warning: no return statement in function returning non-void [-Wreturn-type]
int foobar(int z) { foo(z);}
Here you can get more information about those warnings. Use them and make sure your code compiles warning-free. They can catch a lot of errors in your code at compile time.
I'm trying to compile a source with Visual Studio 2008 Express, but I'm getting this error:
Error C2065: 'nullptr' undeclared identifier.
My code:
if (Data == nullptr) {
show("Data is null");
return 0;
}
I read on Google that I should upgrade to Visual Studio 2010, but I don't want to do this because of the IntelliSense in Visual Studio 2008. Can this be repaired or replaced?
The error you are getting is because the compiler doesn't recognize the nullptr keyword: nullptr was introduced in a later version of Visual Studio than the one you are using.
There are two ways you might go about getting this to work in an older version. One idea comes from Scott Meyers' C++ book, where he suggests creating a header with a class that emulates nullptr, like this:
const // It is a const object...
class nullptr_t
{
public:
template<class T>
inline operator T*() const // convertible to any type of null non-member pointer...
{ return 0; }
template<class C, class T>
inline operator T C::*() const // or any type of null member pointer...
{ return 0; }
private:
void operator&() const; // Can't take address of nullptr
} nullptr = {};
This way you just need to conditionally include the file based on the version of MSVC:
#if _MSC_VER < 1600 // before Visual Studio 2010 (VC++ 10.0), which introduced nullptr
#include "nullptr_emulation.h"
#endif
This has the advantage of using the same keyword and makes upgrading to a new compiler a fair bit easier (and please do upgrade if you can). If you later compile with a newer compiler, your custom code isn't used at all and you are relying only on the C++ language itself, which I feel is important going forward.
If you don't want to take that approach, you could go with something that emulates the old C-style approach (#define NULL ((void *)0)), where you make a macro for NULL like this:
#define NULL 0
if(data == NULL){
}
Note that this isn't quite the same as NULL as found in C, for more discussion on that see this question: Why are NULL pointers defined differently in C and C++?
The downside to this is that you have to change the source code, and it is not typesafe like nullptr. So use it with caution; it can introduce some subtle bugs if you aren't careful, and it was these subtle bugs that motivated the development of nullptr in the first place.
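A short illustration of the kind of subtle bug meant here (the take overloads are hypothetical, purely for demonstration; nullptr below can be either C++11's keyword or the emulation class above):
#define NULL 0   // the C-style emulation from above
void take(int) {}
void take(char *) {}
int main()
{
take(NULL);      // calls take(int): NULL here is just the integer literal 0
take(nullptr);   // calls take(char *), the pointer overload that was probably intended
}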
nullptr is part of C++11; in C++03 you simply use 0, or test the pointer directly:
if (!Data)
When editing the sources is not an option, you can just give your C++ compiler an appropriate define. For example, in CMakeLists.txt I added the line:
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Dnullptr=0")
and everything worked well.