I have to use the IAR compiler in an embedded application (it does not have namespaces, exceptions, or multiple/virtual inheritance, templates are a bit limited, and only C++03 is supported).
I can't use parameter packs, so I tried to create a member function with a variadic parameter.
I know variadic parameters are generally unsafe. But is it safe to use the this pointer in the va_start macro?
If I used an ordinary variadic function, it would need a dummy parameter before the ... to be able to access the remaining parameters. I know a variadic macro would not need a parameter before the ..., but I would prefer not to use one.
If I use a member function, it has the hidden this parameter before the ..., so I tried it:
#include <cstdarg>
#include <cstdio>

struct VariadicTestBase{
    virtual void DO(...)=0;
};

struct VariadicTest: public VariadicTestBase{
    virtual void DO(...){
        va_list args;
        va_start(args, this);
        vprintf("%d%d%d\n", args);
        va_end(args);
    }
};
//Now I can do
VariadicTestBase *tst = new VariadicTest;
tst->DO(1,2,3);
tst->DO(1,2,3); prints 123 as expected. But I am not sure whether it is just some random/undefined behavior. I know tst->DO(1,2); would crash, just like normal printf would. I do not mind that.
Nothing specifies that behaviour in the standard, so this construct simply invokes formal Undefined Behaviour. That means it can work fine in your implementation and cause a compilation error or unexpected results in a different implementation.
The fact that non-static methods have to pass the hidden this pointer cannot guarantee that va_start can use it. It probably works that way because in the early days, C++ compilers were just pre-processors that converted C++ source to C source, and the hidden this parameter was added by the pre-processor so it would be available to the C compiler. It has probably been kept that way for compatibility reasons. But I would try hard to avoid this in mission-critical code...
Seems to be undefined behavior. If you look at what va_start(ap, pN) does in many implementations (check your header file), it takes the address of pN, increments the pointer by the size of pN and stores the result in ap. Can we legally look at &this?
I found a nice reference here: https://stackoverflow.com/a/9115110/10316011
Quoting the 2003 C++ standard:
5.1 [expr.prim] The keyword this names a pointer to the object for which a nonstatic member function (9.3.2) is invoked. ... The type of the expression is a pointer to the function’s class (9.3.2), ... The expression is an rvalue.
5.3.1 [expr.unary.op] The result of the unary & operator is a pointer to its operand. The operand shall be an lvalue or a qualified-id.
So even if this works for you, it is not guaranteed to and you should not rely on it.
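To see the point of those quotes in code, here is a minimal sketch (my own, not from the question): this is an rvalue, so the unary & that an address-taking va_start implementation would need to apply to its "last parameter" cannot legally be applied to it.

struct S {
    void f() {
        // S **p = &this;      // ill-formed: unary & needs an lvalue, and "this" is an rvalue
        S *copy = this;        // fine: the value of "this" can be copied
        (void)copy;
    }
};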
I think it should be OK, though I doubt you will find a specific citation from the C++ standard which says so.
The rationale is this: va_start() must be passed the last named parameter of the function. A member function taking no explicit parameters has only a single parameter (this), which therefore must be its last parameter.
It will be easy to add a unit test to alert you if you ever compile on a platform where this doesn't work (which seems unlikely, but then again you are already compiling on a somewhat atypical platform).
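For example, a minimal self-check along those lines (my own sketch; it assumes the VariadicTestBase/VariadicTest classes from the question and that vsnprintf is available on the toolchain) could format into a buffer and compare:

#include <cstdarg>
#include <cstdio>
#include <cstring>
#include <cassert>

struct VariadicCheck : public VariadicTestBase {
    char buf[32];
    virtual void DO(...) {
        va_list args;
        va_start(args, this);                        // the construct under test
        vsnprintf(buf, sizeof(buf), "%d%d%d", args);
        va_end(args);
    }
};

void run_self_test() {
    VariadicCheck t;
    VariadicTestBase *p = &t;
    p->DO(1, 2, 3);
    assert(std::strcmp(t.buf, "123") == 0);          // fails loudly if the trick breaks on this platform
}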
This is undefined behavior. Since the language does not require this to be passed as a parameter, it might not be passed at all.
For example, if a compiler can figure out that an object is a singleton, it may avoid passing this as a parameter and use a global symbol when the address of this is explicitly required (as in the case of va_start). In theory, the compiler might generate code to compensate for that in va_start (after all, the compiler knows the object is a singleton), but it is not required to do so by the standard.
Think of something like:
#include <cstdarg>
#include <cstdio>

class single {
public:
    single(const single& ) = delete;
    single &operator=(const single& ) = delete;

    static single & get() {
        // This is the only place that can construct the object.
        // Its address is known no later than load time:
        static single x;
        return x;
    }

    void print(...) {
        va_list args;
        va_start (args, this);
        vprintf ("%d\n", args);
        va_end (args);
    }

private:
    single() = default;
};
Some compilers, like clang 8.0.0, emit a warning for the above code:
prog.cc:15:23: warning: second argument to 'va_start' is not the last named parameter [-Wvarargs]
        va_start (args, this);
Despite the warning, it runs OK. In general, this proves nothing, but it is a bad idea to have a warning like this in your code.
Note: I don't know of any compiler that detects singletons and treats them specially, but the language does not forbid this kind of optimization. If it is not done today by your compiler, it might be done tomorrow by another compiler.
Note 2: despite all that, it might work in practice to pass this to va_start. Even if it works, it is not a good idea to do something that isn't guaranteed by the standard.
Note 3: The same singleton optimization can't be applied to parameters such as:
void foo(singleton * x, ...)
It can't be optimized away, since the parameter may have one of two values: it may point to the singleton or be nullptr. This means that this optimization concern does not apply here.
Related
In C++, it is a compiler error to call a function before it is declared. But in C, it may compile.
#include <stdio.h>

int main()
{
    foo(); // foo() is called before its declaration/definition
}

int foo()
{
    printf("Hello");
    return 0;
}
I have tried it and know that it works, but I can't see the reason behind it. Can anyone please explain how the compilation process actually takes place and how it differs between the two languages?
The fact that the code "compiles" as a C program doesn't mean you can do it. The compiler should warn about the implicit declaration of the function foo().
In this particular case the implicit declaration happens to declare a foo() identical to the real one, so nothing bad happens.
But suppose the following, say this is
main.c
/* Don't include any header; why would you include it if you don't
   need prototypes? */
int main(void)
{
    printf("%d\n", foo()); // Use "%d" because the compiler will
                           // implicitly declare `foo()` as
                           //
                           //     int foo()
                           //
                           // Using the "correct" specifier would
                           // invoke undefined behavior "too".
    return 0;
}
Now suppose foo() was defined in a different compilation unit¹, foo.c, as
foo.c
double foo()
{
    return 3.5;
}
does it work as expected?
You can imagine what would happen if you used malloc() without including stdlib.h, which is pretty much the same situation I am trying to explain above.
So doing this will invoke undefined behavior², thus the term "works" is not really applicable in the usual sense in this situation.
The reason this can compile is that in the very old days it was allowed by the C standard, namely the C89 standard.
The C++ standard has never allowed this, so you can't compile a C++ program if you call a function that has no prototype ("declaration") in the code before it's called.
Modern C compilers warn about this because of the potential for undefined behavior that can easily occur, and since it's not that hard to forget to add a prototype or to include the appropriate header, it's better for the programmer if the compiler warns about this instead of suddenly being faced with a very inexplicable bug.
¹ It can't be defined in the same file, because there it would be defined with a return type different from the one it was already implicitly declared with.
² Starting with the fact that double and int are different types, there will be undefined behavior because of this.
When C was developed, the function name was all you needed to be able to call it. Matching arguments to function parameters was strictly the business of the programmer; the compiler didn't care if you passed three floats to something that needed just an integer.
However, that turned out to be rather error prone, so later iterations of the C language added function prototypes as a (still optional) additional restriction. In C++ these restrictions have been tightened further: now a function prototype is always mandatory.
We can speculate on why, but in part this is because in C++ it is no longer enough to simply know the function name. There can be multiple functions with the same name but with different parameters, and the compiler must figure out which one to call. It also has to figure out how to call it (directly or virtually?), and it may even have to generate code in the case of a template function.
In light of all that I think it makes sense to have the language demand that the function prototype be known at the point where the function is called.
Originally, C had no function prototypes, and C++ did not exist.
If you said
extern double atof();
this said that atof was a function returning double. (Nothing was said about its arguments.)
If you then said
double d = atof("1.3");
it would work. If you said
double d2 = atof(); /* whoops, forgot the argument to atof() */
the compiler would not complain, but something weird would happen if you tried to run it.
In those days, if you wanted to catch errors related to calling functions with the wrong number or type of arguments, that was the job of a separate program, lint, not the C compiler.
Also in those days, if you just called a function out of the blue that the compiler had never heard of before, like this:
int i = atoi("42");
the compiler basically pretended that earlier you had said
extern int atoi();
This was what was called an implicit function declaration. Whenever the compiler saw a call to a function whose name it didn't know, the compiler assumed it was a function returning int.
Fast forward a few years. C++ invents the function prototypes that we know today. Among other things, they let you declare the number and types of the arguments expected by a function, not just its return type.
Fast forward a few more years, C adopts function prototypes, optionally. You can use them if you want, but if you don't, the compiler will still do an implicit declaration on any unknown function call it sees.
Fast forward a few more years, to C99. Now implicit int and implicit function declarations are finally gone: a compiler is required to complain if you call a function without declaring it first.
But even today, you may be using a pre-C99 compiler that's still happy with implicit declarations. And a C99-compliant compiler may choose to issue a mere warning (not a compilation-killing error) if you forget to declare a function before calling it. And a compliant compiler may offer an option to turn off those warnings and quietly accept implicit ints. (For example, when I use the very modern clang, I arrange to invoke it with -Wno-implicit-int, meaning that I don't want warnings about implicit int, because I've still got lots of old, working code that I don't feel like rewriting.)
Why can I call a function in C without declaring it?
Because in C (prior to C99), but not in C++, a function called without a visible declaration is assumed to return an int. This is an implicit declaration of that function. If that assumption turns out to be true (the function is declared later on with a return type of int), then the program compiles just fine.
If that assumption turns out to be false (it was assumed to return an int, but actually turns out to return a double, for example), then you get a compiler error, because the two declarations of the function conflict (e.g. int foo() and double foo() cannot both exist in the same C program).
Note that all of this is C only. In C++, implicit declarations are not allowed. Even if they were, the error message would be different, because C++ has function overloading: the error would say that overloads of a function cannot differ only by their return type. (Overloading happens on the parameter list, not on the return type.)
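A tiny sketch (not part of the original answer) of that C++ rule:

// Two declarations that differ only in the return type are not valid overloads;
// a C++ compiler rejects the second declaration.
int foo(int);
// double foo(int);   // error: functions that differ only in their return type cannot be overloaded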
I'm reading C++ code and I have found this definition:
#define USE_VAL(X) if (&X-1) {}
Does anybody have an idea what it means?
Based on the name, it looks like a way of getting rid of an "unused variable" warning. The intended use is probably something like this:
int function(int i)
{
    USE_VAL(i)
    return 42;
}
Without this, you could get a compiler warning that the parameter i is unused inside the function.
However, it's a rather dangerous way of going about this, because it introduces Undefined Behaviour into the code (pointer arithmetic beyond the bounds of an actual array is undefined by the standard). It is possible to add 1 to the address of an object, but not to subtract 1 from it. Of course, with + 1 instead of - 1, the compiler could then warn that the condition is always true. It's possible that the optimiser will remove the entire if and the code will remain valid, but optimisers are getting better and better at exploiting "undefined behaviour cannot happen", which could actually mess up the code quite unexpectedly.
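A minimal sketch (my own, not from the macro's author) of the pointer-arithmetic rule being relied on here:

void pointer_rule_sketch()
{
    int x = 0;
    int *one_past = &x + 1;    // OK: a single object behaves like a one-element array,
                               // so a "one past the end" pointer may be formed (but not dereferenced)
    // int *before = &x - 1;   // undefined behaviour: points before the start of the object
    (void)one_past;
}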
Not to mention the fact that operator& could be overloaded for the type involved, potentially leading to undesired side effects.
There are better ways of implementing such functionality, such as casting to void:
#define USE_VAL(X) static_cast<void>(X)
However, my personal preference is to comment out the name of the parameter in the function definition, like this:
int function(int /*i*/)
{
    return 42;
}
The advantage of this is that it actually prevents you from accidentally using the parameter after passing it to the macro.
Typically it's there to avoid an "unused return value" warning. Even though the usual "cast to void" idiom normally works for unused function parameters, gcc with -pedantic is particularly strict about ignored return values of functions such as fread (in general, functions marked with __attribute__((warn_unused_result))), so a "fake if" is often used to trick the compiler into thinking you are doing something with the return value.
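A sketch of the pattern being described (the function name and buffer are just placeholders; whether fread is actually marked warn_unused_result depends on the platform and compile flags, e.g. glibc with _FORTIFY_SOURCE):

#include <cstdio>

#define USE_VAL(X) if (&X-1) {}

int read_some(std::FILE *fp)
{
    char buf[16];
    // Casting the call itself to (void) may still leave a -Wunused-result
    // warning for warn_unused_result functions, so the result is stored
    // in a variable and then "used" through the macro instead.
    std::size_t n = std::fread(buf, 1, sizeof buf, fp);
    USE_VAL(n)
    return 0;
}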
A macro is a pre-processor directive: wherever it's used, it is replaced by the corresponding piece of code.
Here, everything that follows USE_VAL(X) and the space is what USE_VAL(X) expands to: it takes the address of X and subtracts 1 from it, and the empty block then does nothing with the result.
So wherever USE_VAL(X) is used, it is replaced by if (&X-1) {}.
I am reading some code of a library I am using, and I found that in a function this was used:
void someFunction(Foo& a, int index, int partId)
{
    (void) partId;
    (void) index;
    // more code
}
Does anyone know why? Thanks.
To avoid a compiler warning/error indicating that the variable was unused in the function body. It's a style choice; the other way to achieve the same effect would be to leave the parameter unnamed:
void someFunction(Foo& a, int /*index*/, int /*partId*/)
This is usually done when the parameters aren't being used in the function and the compiler emits a warning about unused parameters. By adding the cast, the compiler will deem that they have been used and will not issue the warning.
You can accomplish the same thing by just removing the name of the parameter from the function:
void someFunction(Foo& a, int, int)
{
}
index and partId are not used inside the function.
A C/C++ compiler will usually throw a warning about unused parameters.
The (void) parameter; statement will not generate any code, but it lets the compiler know that you are "using" the parameter, in order to avoid the said warning.
It is also a polite way to let another programmer know easily that the parameters are unused for some reason
(typically, complying with a more generic interface or supporting obsolete parameters from a previous version of the same interface).
Last but not least, as Jerry Coffin pointed out, this works both in C and C++, while the alternative solution of using unnamed parameters only works in C++.
What it does is to use partId and index, without actually using them. In other words, it fools the compiler into thinking that the function arguments are actually used, while in reality the code doesn't use them.
Why would one do that? Because they have set a flag in the compiler to generate a warning when an argument of a function is not used in its body.
Note that in C++, one can simply remove the argument name from the function. If you see this in C, try disabling the warning about unused function arguments, since in C you can't omit argument names.
I've been playing with the return type deduction supported in g++ with -std=c++1y.
If you prototype a function with an explicit return type, and then later try to define the function with return type deduction, the compiler complains of an ambiguous old declaration:
std::string some_function();
...
auto some_function(){ return std::string{"FOO"}; } //fails to compile
Is there a good reason why this doesn't work?
My rationale for using return type deduction in the definition is to keep the code clean, but I want an explicit type in the prototype for self-documenting reasons. Recommendations on best practices for when and when not to use return type deduction would be appreciated :)
To be more clear, I would like answers to:
1. Is this an implementation mistake in the compiler? (I am fairly sure it is not)
2. Could this type of deduction be done, but isn't allowed by the proposal to the standard? If so, why not?
3. If this is really ambiguous, what are some examples where deducing the type and trying to match it with an explicit forward declaration would get you into trouble?
4. Are there deeper implementation specific issues behind this?
5. Is it simply an oversight?
It's because of how the function tables and overloaded functions work in C++.
When the compiler reads your std::string some_function(); it creates a spot for it to reference in the binary and says: if this function is ever called, jump to this spot.
So we have a vtable that looks like this...
(Address offset) (Symbol)
0x???????? std::string somefunction();
Now it gets to your auto some_function() {...}. Normally it would first look in the function table to see if auto some_function(); or some variation thereof already exists in the table, but the compiler notices that this is an implementation and that it has the auto keyword, so to reduce the entropy it writes *blank* some_function(); to the function table and tries to resolve the return type.
Now the Function table looks like this...
(Address offset) (Symbol)
0x???????? std::string somefunction();
0x???????? ????? somefunction();
So it chugs along compiling code into binary until it finds out the return type, which in this case is std::string. The compiler now knows what the return type is, so it goes to the function table and changes auto some_function(); to std::string some_function();.
Now the Function table looks like this...
(Address offset) (Symbol)
0x???????? std::string somefunction();
0x???????? std::string somefunction();
Now the compiler goes back and continues to compile the function. Once it's done, it goes back and finishes up the vtable, only to find the same symbol in there twice. It's now ambiguous which symbol we are referring to.
So what is the reason for this?
I'm not 100% sure, but vtables are made long before your code is worked down enough for the return type deduction to be done. So the only option the compiler has at that stage is to assume it's a new symbol. It's just something I've noticed looking at symbol tables all day and writing my own C++11 compiler.
I however can in no way speak for other compilers where lots of optimizations and other steps are introduced, I just work with the bare bones, however that is my understanding of the standard.
One last thing: the explicitly declared type is completely arbitrary. It doesn't even matter whether the two types match at the time of deduction; it never gets that far. The problem is that there are two of the same symbol in the table, and you can't overload on return types.
You need to do it as:
auto some_function() -> std::string { return std::string{"FOO"}; }
For more info, see the "Alternative function syntax" section of http://en.wikipedia.org/wiki/C%2B%2B11
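For reference, a small sketch (my own addition, not from the answers) of which combinations C++14 return type deduction accepts, based on the rule that a function declared with a deduced return type must use auto in every declaration:

#include <string>

std::string f();                                  // explicit return type
std::string f() { return std::string{"FOO"}; }    // OK: the definition matches the declaration

auto g();                                         // declared with a placeholder return type
auto g() { return std::string{"FOO"}; }           // OK: redeclared with auto; the type is deduced here

// std::string h();
// auto h() { return std::string{"FOO"}; }        // ill-formed: mixes the two styles, as in the question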
main()
{
    f();
}

int f( int i, float fl)
{
    printf("function");
}
Why does the above code run successfully in C and print "function", when it should report an error, since f() is being called before it is declared?
And if it runs successfully in C, then why not in C++? When compiled as C++ it shows: error: 'f' was not declared in this scope
If it is because the compiler assumes an undeclared function to return an int and accept an unspecified number of arguments, then why does it also run successfully for the function below (i.e. when changing the return type to void instead of int)?
void f ( int i, float fl)
{
    printf("function");
}
Old versions of the C programming language permitted function references without earlier declarations. As a legacy, many current compilers still support the old language or aspects of it. This is why some compilers accept the source code you have shown. Your compiler likely has switches that tell it to use a more recent version of the C programming language or to be more strict about adherence to the standard.
C++ was developed more recently and does not have the legacy of functions without declarations.
The different return types work because, at the assembly level, the two cases happen to be implemented the same way. For a function returning void, the called routine simply performs its operations and returns. For a function returning int, the called routine performs its operations, puts its final result in a specific processor register, and returns. In the calling routine, when the return value of a function returning int is not used, the calling routine simply ignores what is in the processor register. Because the register is ignored, there is no difference, to the calling routine, between a function returning void and a function returning int. This will not be the case on all target platforms; there can be differences between functions with different return types, especially when the return types are more complicated objects (such as structs). And, if the calling function did use the return value, the return type would make a difference: the function returning void would leave some uncontrolled value in the processor register where a return value is supposed to be, and the calling function would use that and get unexpected results.
As should be apparent, none of this is behavior you should rely on. It is good practice to use the compiler switches that specify you would like stricter adherence to the standard and would like more warnings. (I would prefer these be the default for compilers.) And it is good practice to write code that conforms to the standard.
Because C allows the implicit declaration of functions. Or at least it did; C90 may require a declaration, I'm not sure. But since not declaring functions was common practice in C for such a long time, I would expect most compilers to continue to allow it, even after it is banned.
Because C and C++ are different languages. C++ has never allowed implicitly declaring functions.
Because historically, C didn't have a void type; functions with no return value were declared int, even if they didn't return anything, and there's no problem as long as you don't attempt to use the (non-existent) return value.
The error does not show up in C because you're not using the proper flags when invoking your compiler.
What is your compiler?
If it's gcc, try gcc -std=c99 -pedantic -Werror ...