What do the "..." mean in virtual void foo(...) = 0;? - c++

Pretty simple question, I think, but I'm having trouble finding any discussion of it anywhere on the web. I've seen the triple dots as function parameters multiple times over the years, and I've always just assumed they meant "and whatever else you would stick here." Until last night, when I decided to try to compile a function with them. To my surprise, it compiled without warnings or errors on MSVC 2010. Or at least, it appeared to. I'm not really sure, so I figured I'd ask here.

They indicate a variadic function, i.e. one that takes a variable number of arguments (retrieved with the va_arg family of macros). See for example The C Book.

The triple dots mean the function is variadic (i.e. it accepts a variable number of arguments). However, to access those arguments portably with va_start there must be at least one named parameter before the "...", so a declaration containing only "..." isn't a usable portable declaration.
Sometimes variadic function declarations are used in C++ template trickery purely because of the precedence rules of overload resolution (i.e. such functions are declared just to make a certain template instantiation fail or succeed; the variadic functions themselves are never defined or called). This technique is named "Substitution Failure Is Not An Error" (SFINAE).
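As a hedged illustration of that technique, here is a minimal sketch of the classic detection idiom (the names has_foo, test, With and Without are invented for this example); the "..." overload exists only to lose overload resolution against the more specific one and is never defined or called:
template <typename T>
class has_foo {
    template <typename U>
    static char test(decltype(&U::foo));   // preferred when U::foo exists
    template <typename U>
    static long test(...);                 // fallback: matches anything
public:
    static constexpr bool value = sizeof(test<T>(nullptr)) == sizeof(char);
};
struct With    { void foo(); };
struct Without {};
static_assert(has_foo<With>::value, "With has foo");
static_assert(!has_foo<Without>::value, "Without has no foo");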

It's called an ellipsis. It basically says that the function accepts any number of additional arguments; note that passing class types with non-trivial copy constructors or destructors through it is conditionally supported at best.

It means that the types of the arguments, and how many there are, are unspecified. A concrete example with which you are probably familiar would be something like printf(const char *, ...).
If you use printf, you can put whatever you like after the format string, and it is not enforced by the compiler.
e.g. printf("%s:%s", 8) gets through the compiler just the same as a call that provides the "expected" arguments, printf("%s:%s", "stringA", "stringB").
Unless really necessary, it should be avoided, as it creates the potential for a run-time error that might otherwise have been caught at compile time. If there is a finite, enumerable variation in the arguments your function can accept, it is better to enumerate them by overloading.
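To make the mechanics concrete, here is a hedged sketch of how such a function is actually written (sum_ints is an invented name); it also shows why the call site cannot be checked by the compiler:
#include <cstdarg>
#include <cstdio>
// At least one named parameter (here `count`) is required so that va_start
// has something to anchor to; nothing verifies that callers really pass
// `count` ints, so a wrong type or count is undefined behavior at run time.
int sum_ints(int count, ...) {
    va_list args;
    va_start(args, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(args, int);
    va_end(args);
    return total;
}
int main() {
    std::printf("%d\n", sum_ints(3, 1, 2, 3));  // prints 6
}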

Related

Is compiler allowed to call an immediate (consteval) function during runtime?

This might be a stupid question, but I am confused. I had a feeling that an immediate (consteval) function has to be executed during compile time and we simply cannot see its body in the binary.
This article clearly supports my feeling:
This has the implication that the [immediate] function is only seen at compile time. Symbols are not emitted for the function, you cannot take the address of such a function, and tools such as debuggers will not be able to show them. In this matter, immediate functions are similar to macros.
A similarly strong claim can be found in Herb Sutter's publication:
Note that draft C++20 already contains part of the first round of reflection-related work to land in the standard: consteval functions that are guaranteed to run at compile time, which came from the reflection work and are designed specifically to be used to manipulate reflection information.
However, there are a number of pieces of evidence that are not so clear on this point.
From cppreference:
consteval - specifies that a function is an immediate function, that is, every call to the function must produce a compile-time constant.
It does not mean it has to be called during compile time only.
From the P1073R3 proposal:
There is now general agreement that future language support for reflection should use constexpr functions, but since "reflection functions" typically have to be evaluated at compile time, they will in fact likely be immediate functions.
Seems like this means what I think, but still it is not clearly said. From the same proposal:
Sometimes, however, we want to express that a function should always produce a constant when called (directly or indirectly), and a non-constant result should produce an error.
Again, this does not mean the function has to be evaluated during compile time only.
From this answer:
your code must produce a compile time constant expression. But a compile time constant expression is not an observable property in the context where you used it, and there are no side effects to doing it at link or even run time! And under as-if there is nothing preventing that
Finally, there is a live demo, where a consteval function is clearly called at runtime. However, I hope this is due to the fact that consteval is not yet properly supported in Clang and the behavior is actually incorrect, just like in Why does a consteval function allow undefined behavior?
To be more precise, I'd like to hear which of the following statements of the cited article are correct:
An immediate function is only seen at compile time (and cannot be evaluated at run time)
Symbols are not emitted for an immediate function
Tools such as debuggers will not be able to show an immediate function
Almost none of these are questions the C++ standard can answer. The standard doesn't define "symbols" or what tools can show. Almost all of this is dealer's choice as far as the standard is concerned.
Indeed, even the question of "compile time" vs. "run time" is something the standard doesn't deal with. The only question that concerns the standard is whether something is a constant expression. Invoking a constexpr function may produce a constant expression, depending on its parameters. Invoking a consteval function in a way which does not produce a constant expression is ill-formed.
The one thing the standard does define is what gets "seen". Though it's not really about "compile time". There are a number of statements in C++20 that forbid most functions from dealing in pointers/references to immediate functions. For example, C++20 states in [expr.prim.id]/3:
An id-expression that denotes an immediate function shall appear only
as a subexpression of an immediate invocation, or
in an immediate function context.
So outside of an immediate function context, the only thing you can do with the name of an immediate function is use it as part of an immediate invocation (for example, passing a pointer/reference to it into another immediate function being called). You cannot otherwise name an immediate function, and you can't get a pointer/reference to a function without naming it.
This and other statements in the spec (like pointers to immediate functions not being valid results of constant expressions) essentially make it impossible for a pointer/reference to an immediate function to leak outside of constant expressions.
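A minimal sketch of what those restrictions look like in practice (square and runtime_use are invented names for illustration):
consteval int square(int n) { return n * n; }
constexpr int a = square(5);        // OK: an immediate invocation that is a constant expression
int runtime_use(int x) {
    return square(2) + x;           // OK: square(2) is still a constant expression
    // auto p = &square;            // error: can't take the address of an immediate function here
    // return square(x);            // error: x is not a constant expression
}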
So statements about the visibility of immediate functions are correct, to some degree. Symbols can be emitted for immediate functions, but you cannot use immediate functions in a way that would prevent an implementation from discarding said symbols.
And that's basically the thing with consteval. It doesn't use standard language to enforce what must happen. It uses standard language to make it impossible to use the function in a way that will prevent these things from happening. So it's more reasonable to say:
You cannot use an immediate function in a way that would prevent the compiler from executing it at compile time.
You cannot use an immediate function in a way that would prevent the compiler from discarding symbols for it.
You cannot use an immediate function in a way that would force a debugger to be able to see it.
Quality of implementation is expected to take things from there.
It should also be noted that debugging builds are for... debugging. It would be entirely reasonable for advanced compiler tools to be able to debug code that generates constant expressions. So a debugger which could see immediate functions execute is an entirely desirable technology. This becomes more so as compile-time code grows more complex.
The proposal mentions:
One consequence of this specification is that an immediate function never needs to be seen by a back end.
So it is definitely the intention of the proposal that calls are replaced by the constant. In other words, that the constant expression is evaluated during translation.
However, it does not say it is required that it is not seen by the backend. In fact, in another sentence of the proposal, it just says it is unlikely:
It also means that, unlike plain constexpr functions, consteval functions are unlikely to show up in symbolic debuggers.
More generally, we can re-state the question as:
Are compilers forced to evaluate constant expressions (everywhere; not just when they definitely need it)?
For instance, a compiler needs to evaluate a constant expression if it is the number of elements of an array, because it needs to statically determine the total size of the array.
However, a compiler may not need to evaluate other uses, and while any decent optimizing compiler will try to do so anyway, it does not mean it needs to.
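A small hedged illustration of that difference (f and g are invented names):
constexpr int f(int n) { return n * 2; }
int arr[f(4)];          // the array bound must be known statically, so f(4) is
                        // necessarily evaluated during translation
int g(int x) {
    return f(3) + f(x); // f(3) could be folded to 6, but nothing forces that;
                        // f(x) has to be evaluated at run time in any case
}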
Another interesting case to think about is an interpreter: while an interpreter still needs to evaluate some constant expressions, it may just do it lazily all the time, without performing any constant folding.
So, as far as I know, they aren't required, but I don't know the exact quotes we need from the standard to prove it (or otherwise). Perhaps it is a good follow-up question on its own, which would answer this one too.
For instance, in [expr.const]p1 there is a note that says they can be, not that they must be:
[Note: Constant expressions can be evaluated during translation. — end note]

C++ macro token-pasting with a function argument

I was searching for a while on the net and unfortunately I didn't find an answer or a solution for my problem. Let's say I have two functions named like this:
1) function1a(some_args)
2) function2a(some_args)
What I want to do is write a macro that can recognize those functions when fed the correct parameter. The catch is that this parameter should also be a parameter of a C/C++ function. Here is what I did so far.
#define FUNCTION_RECOGNIZER(TOKEN) function##TOKEN()
void function1a()
{
}
void function2a()
{
}
void anotherParentFunction(const char* type)
{
FUNCTION_RECOGNIZER(type);
}
Clearly, the macro is producing "functiontype" and ignoring the argument of anotherParentFunction. I'm asking whether there exists a trick or anything else to perform this kind of pasting.
Thank you in advance :)
If you insist on using a macro: skip the anotherParentFunction() function and use the macro directly instead. When called with a literal token (not a quoted string), i.e.
FUNCTION_RECOGNIZER(1a);
it expands to function1a() and should work.
A more C++-like solution would be to use e.g. an enum, then implement anotherParentFunction() with the enum as its parameter and a switch that calls the corresponding function (see the sketch below). Of course you then need to extend the enum and the switch statement every time you add a new function, but you would be more flexible in choosing the names of the functions.
There are many more ways to achieve something similar; the question really is: What is your use case? What do you want to achieve?
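For reference, a hedged sketch of that enum-plus-switch idea (the enum name Target and its enumerators are invented for illustration):
enum class Target { Function1A, Function2A };
void function1a() {}
void function2a() {}
void anotherParentFunction(Target type)
{
    switch (type) {
        case Target::Function1A: function1a(); break;
        case Target::Function2A: function2a(); break;
    }
}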
In 16.1.5 the standard says:
The implementation can process and skip sections of source files conditionally, include other source files, and replace macros. These capabilities are called preprocessing, because conceptually they occur before translation of the resulting translation unit.
[emphasis mine]
Originally, pre-processing was done by a separate program; it is essentially an independent language.
Today, the pre-processor is usually part of the compiler, but, for example, you can't see macros in the Clang AST.
The significance of this is that the pre-processor knows nothing about types or functions or arguments.
Your function definition
void anotherParentFunction(const char* type)
means nothing to the pre-processor and is completely ignored by it.
FUNCTION_RECOGNIZER(type);
this is recognized as a defined macro, but type is not a recognized pre-processor symbol, so it is treated as a literal token; the pre-processor does not consult the C++ parser or interact with its AST.
It consults the macro definition:
#define FUNCTION_RECOGNIZER(TOKEN) function##TOKEN()
The argument, the literal token type, is bound to TOKEN. The word function is taken as a literal and copied to the result string, and the ## tells the pre-processor to paste the value of TOKEN onto it literally, producing functiontype in the result string. Because type isn't recognized as a macro, nothing further is substituted, and the () is appended as a literal to the result string.
Thus, the pre-processor substitutes
FUNCTION_RECOGNIZER(type);
with
functiontype();
So the bad news is: no, there is no way to do what you were trying to do. But this may be an XY problem, and perhaps there's a solution to what you were actually trying to achieve.
For instance, it is possible to overload functions based on argument type, or to specialize template functions based on parameters, or you can create a lookup table based on parameter values.
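As a hedged sketch of the last suggestion, here is a lookup table keyed by the run-time string (the keys "1a"/"2a" and the printed messages are invented for illustration):
#include <functional>
#include <iostream>
#include <map>
#include <string>
void function1a() { std::cout << "function1a\n"; }
void function2a() { std::cout << "function2a\n"; }
void anotherParentFunction(const std::string& type)
{
    static const std::map<std::string, std::function<void()>> table = {
        {"1a", function1a},
        {"2a", function2a},
    };
    auto it = table.find(type);
    if (it != table.end())
        it->second();   // dispatch to the matching function, if any
}
int main() { anotherParentFunction("2a"); }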

What is the purpose of commenting out function argument names? [duplicate]

This question already has answers here:
Why comment parameter names rather than leave it as it is
(6 answers)
Closed 8 years ago.
I'm working with an SDK that generates a skeleton plugin project, as in, all the functions required by the host application are there, just not filled in.
Initially, all the function definitions are sorta like this:
void Mod1::ModifyObject(TimeValue /*t*/, ModContext& /*mc*/, ObjectState* /*os*/, INode* /*node*/) {}
With the argument names commented out, why is that? As far as I can tell, if I'm not using the arguments, it makes no difference whether the names are there or not.
I guess there are two reasons for this:
It's deliberate, to implement only what's strictly necessary: the function names and their signatures. The parameter names aren't defined, so it's up to you to either take the suggested names, pick your own, or leave them out entirely.
It's to avoid pedantic compilers complaining about unused variables that are defined as parameters. If you don't need a parameter, the cleanest option is to simply drop its name (unless you need it in some other implementation, but the compiler won't necessarily know about that). Then again, a compiler could also complain about parameters that are present but not named, although that is usually considered an intentional omission.
Some compilers will issue a warning about unused named parameters, but not about unused unnamed parameters. GCC is one such compiler, if the -Wunused-parameter option is used, enabled by -Wextra.
The theory behind this is that an unused named parameter is more likely to be a mistake than an unused unnamed parameter. Of course, that theory doesn't apply to all code.
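A small illustration (assuming GCC or Clang, compiled with -Wextra; the function names are invented):
void handler_named(int code) {}        // warning: unused parameter 'code'
void handler_unnamed(int /*code*/) {}  // no warning: the parameter is unnamed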
When you turn on treat warnings as errors, and you don't use a parameter, you'll need to comment out its name or delete it entirely.
There are also macros such as Q_UNUSED in Qt, or you can simply reference the parameter in code without doing anything with it, to make the compiler shut up:
void foo(int unused) {
(void) unused; // So the compiler doesn't emit a warning.
}

Why is constexpr not automatic? [duplicate]

This question already has answers here:
Why do we need to mark functions as constexpr?
(4 answers)
Closed 2 years ago.
As far as I understand it, constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible.
I know that it also imposes some restriction on the function or initialization declared as constexpr but the final goal is compile-time evaluation, isn't it?
So my question is, why can't we leave that to the compiler? It is obviously capable of checking the pre-conditions, so why doesn't it do so for each expression and evaluate at compile time where possible?
I have two ideas on why this might be the case but I am not yet convinced that they hit the point:
a) It might take too long during compile-time.
b) Since my code can use constexpr functions in locations where normal functions would not be allowed, the specifier is also kind of part of the declaration. If the compiler did everything by itself, one could use a function in a C-array definition with one version of the function, but with the next version there might be a compiler error because the pre-conditions for compile-time evaluation are no longer satisfied.
constexpr is not a "hint" to the compiler about anything; constexpr is a requirement. It doesn't require that an expression actually be executed at compile time; it requires that it could.
What constexpr does (for functions) is restrict what you're allowed to put into the function definition, so that the compiler can easily execute that code at compile time where possible. It's a contract between you, the programmer, and the compiler. If your function violates the contract, the compiler will error immediately.
Once the contract is established, you are now able to use these constexpr functions in places where the language requires a compile time constant expression. The compiler can then check the elements of a constant expression to see that all function calls in the expression call constexpr functions; if they don't, again a compiler error results.
Your attempt to make this implicit would result in two problems. First, without an explicit contract as defined by the language, how would I know what I can and cannot do in a constexpr function? How do I know what will make a function not constexpr?
And second, without the contract being in the compiler, via a declaration of my intent to make the function constexpr, how would the compiler be able to verify that my function conforms to that contract? It couldn't; it would have to wait until I use it in a constant expression before I find that it isn't actually a proper constexpr function.
Contracts are best stated explicitly and up-front.
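A small sketch of that contract in action (square and square_rt are invented names):
constexpr int square(int n) { return n * n; }
int table[square(4)];                   // OK: usable where a constant expression is required
static_assert(square(3) == 9, "");      // OK: checked at compile time
int square_rt(int n) { return n * n; }  // same body, but not declared constexpr
// int bad[square_rt(4)];               // error: not usable in a constant expression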
constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible
No, see below
the final goal is compile-time evaluation
No, see below.
so why doesn't it do so for each expression and evaluate at compile time where possible?
Optimizers do things like that, as allowed under the as-if rule.
constexpr is not there to make things faster; it is there to allow the result to be used in contexts where a runtime-variable expression is illegal.
This is only my evaluation, but I believe your (b) reason is correct (that it forms part of the interface that the compiler can enforce). The interface requirement serves both for the writer of the code and the client of the code.
The writer may intend something to be usable in a compile-time context but not actually use it that way themselves. If the writer violates the rules for constexpr, they might not find out until after publication, when clients who try to use it in a constant expression get errors. Or, more realistically, the library might use the code in a constexpr sense in version 1, refactor this usage out in version 2, and break constexpr compatibility in version 3 without realizing it. By checking constexpr compliance, the breakage in version 3 will be caught before deployment.
The interface benefit for the client is more obvious: an inline function won't silently become part of the constexpr interface just because it happened to work and someone used it that way.
I don't believe your (a) reason (that it could take too long for the compiler) is applicable because (1) the compiler has to check much of the constexpr constraints anyway when the code is marked, (2) without the annotation, the compiler would only have to do the checking when used in a constexpr way (so most functions wouldn't have to be checked), and (3) IIUC the D programming language actually does allow functions to be compile-time evaluated if they meet requirements without any declaration assistance, so apparently it can be done.
I think I remember watching an early talk by Bjarne Stroustrup where he mentioned that programmers wanted fine grained control on this "dangerous" feature, from which I understand that they don't want things "accidentally" executed at compile time without them knowing. (Even if that sound like a good thing.)
There can be many reasons for that, but the only valid one is ultimately compilation speed, I think ((a) in your list).
It would be too much burden on the compiler to determine for every function if it could be computed at compile time.
This argument is weaker as compilation times in general go down.
Like with many other features of C++, what ends up happening is that we end up with the "wrong defaults".
So you have to say when you want constexpr instead of when you don't want it (a hypothetical "runtimeexpr"); you have to say when you want const instead of when you want mutable, etc.
Admittedly, you can imagine functions that take an absurd amount of time to run at compile time and that cannot be amortized (with other kinds of machine resources) at runtime.
(I am not aware that "time-out" can be a criterion in a compiler for constexpr, but it could be so.)
Or it could be that one is compiling in a system that is always expected to finish compilation in a finite time but an unbounded runtime is admissible (or debuggable).
I know that this question is old, but time has illuminated that it actually makes sense to have constexpr as default:
In C++17, for example, you can declare a lambda constexpr but more importantly they are constexpr by default if they can be so.
https://learn.microsoft.com/en-us/cpp/cpp/lambda-expressions-constexpr
Note that lambdas have all the "right" (opposite) defaults: captures are const by default, arguments are templated by default via auto, and now their call operators are constexpr by default.
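A tiny C++17 sketch of that default (the names are invented): the lambda body below is never marked constexpr, yet it is usable in constant expressions because it satisfies the requirements.
constexpr auto square = [](int n) { return n * n; };
static_assert(square(4) == 16, "evaluated at compile time");
int buffer[square(3)];   // usable as an array bound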

How many parameters does ios::setstate actually take?

Every definition I've seen of function ios::setstate( iostate state ) shows that the function takes ONE and ONLY ONE parameter yet when I compile a program with the following function call, everything compiles and runs just fine:
mystream.setstate( std::ios_base::badbit, true );
What exactly is the second parameter and why is there no documentation about it?
EDIT: I'm using the command line compiler of the latest version of Microsoft Visual Studio 2010.
It's required to accept a single argument, as you've noted, but implementations are allowed to extend member functions via parameters with default values (§17.6.5.5). In other words, as long as this works:
mystream.setstate( std::ios_base::badbit );
your implementation is conforming. Nothing says that your two-argument call can't work as well, though.
(Your library implementation has decided that a boolean parameter would be useful to have. You never notice it because it has a default value, but you can still get into implementation-specific territory and provide the argument yourself. Whether or not this is a good idea is obviously another question, but probably not.)
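A hedged illustration of the portable versus implementation-specific forms (the stream setup is invented for the example):
#include <iostream>
#include <sstream>
int main()
{
    std::istringstream mystream("42");
    // Portable: the single-argument form is all the standard guarantees.
    mystream.setstate(std::ios_base::badbit);
    std::cout << std::boolalpha << mystream.bad() << '\n';   // true
    // mystream.setstate(std::ios_base::badbit, true);       // compiles only on library
    //                                                       // implementations that add the
    //                                                       // extra defaulted parameter
}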