Excluding macros, what can I use like an inline function in C++?

I don't want to edit the common part of the source code repeatedly, so I separated the differing parts into different functions, as below.
/* Origin */
void MyClass::threadFunc_A()
{
    // many variables in this function
    ...
    // do something A
    ...
}

void MyClass::threadFunc_B()
{
    // many variables in this function
    ...
    // do something B
    ...
}

/* I wish */
void MyClass::threadFunc(type)
{
    // many variables
    int a, b;
    char c, d, e;
    ...
    string x, y, z;
    ...
    // case
    if (type == A) do_something_A();
    if (type == B) do_something_B();
    ...
    if (type == Z) do_something_Z();
}

void do_something_A()
{
    // using "many variables (a ~ z)" here
    a = 10;
    b = 20;
    ...
}
With function-like macros, I know the code is pasted in at compile time, so variables in the enclosing scope can still be used. However, once do_something() grows long, writing it as a macro becomes impractical.
Is there a way to write something like the 'I wish' version as an inline function in C++17 or later?
(Excluding making the variables members of the class, or wrapping them in a struct and passing that.)

No, there is no way to do that. C++ has lexical scoping. What you want would be (at least partially) more like dynamic scoping.
The point of a function is that it separates some part of the logic into a self-contained block of code that can be reused. If you make the name resolution in the function dependent on the declarations at the call site, it becomes impossible to reason about the behavior of the function without also specifying the call site.
Macros effectively do behave that way, but that is not a good thing. It is one of the reasons to avoid them.
C++ does have templates, which allows making similar logic independent of concrete types in a function, but that still doesn't allow making name resolution dependent on the call site.
Write your functions so that they represent a part of the program logic that makes sense in itself. The function should take all variables to which it needs access as arguments, possibly with templated types, and if it needs to work on an unspecified number of arguments, possibly of different types, it can be a variadic function template. If there are many variables with similar meaning, consider putting them in an array or container or class combining them into one unit that makes sense in the program logic.
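For illustration, here is a minimal sketch of that advice under assumed types (the helper names follow the question's pseudocode; the enum and the concrete variables are made up): each helper takes exactly the variables it needs, by reference if it must modify them.
#include <string>

enum class Type { A, B };

// Each helper receives only the variables it actually uses.
static void do_something_A(int& a, int& b)
{
    a = 10;
    b = 20;
}

static void do_something_B(std::string& x, char c)
{
    x += c;
}

void threadFunc(Type type)
{
    // the "many variables" live here, in one place
    int a = 0, b = 0;
    char c = 'x';
    std::string x;

    switch (type) {
    case Type::A: do_something_A(a, b); break;
    case Type::B: do_something_B(x, c); break;
    }
}

int main()
{
    threadFunc(Type::A);
    threadFunc(Type::B);
}
The helpers stay self-contained: what they read and write is visible in their signatures, which is exactly what the macro approach loses.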

I am not aware of a way to literally use a typename as a function parameter, except for special constructs such as sizeof (which is an operator, not a function).
Some more conventional ways to approach it would be:
Use an enum representing types, to pass to threadFunc().
Make an overload in the class for each type, and pass it a value of the type in question.
Make a function parameter a std::variant, and pass it a value of the type in question. That way, you can inspect which of its supported types it currently holds using its index() member function (see the sketch below); or, if you're using the Qt framework, a QVariant can report its contained type with QVariant::type(), or as a string with QVariant::typeName().
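As a minimal sketch of the std::variant suggestion (the describe function here is just an illustration, not from the question):
#include <iostream>
#include <string>
#include <variant>

// index() reports which alternative the variant currently holds:
// 0 for int, 1 for std::string in this declaration order.
void describe(const std::variant<int, std::string>& v)
{
    if (v.index() == 0)
        std::cout << "int: " << std::get<int>(v) << '\n';
    else
        std::cout << "string: " << std::get<std::string>(v) << '\n';
}

int main()
{
    describe(42);                    // holds int
    describe(std::string("hello"));  // holds std::string
}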
This SO question purports to show how to use a template to allow a typename as a function parameter: How to take a typename as a parameter in a function? (C++); although the answers only get it to work for multiple return types.
One downside of putting large blocks of code in C++ macros is that they can be hard to debug, as they probably won't get syntax highlighting or in-editor syntax checking. Also, the debugger may treat the whole block as a single line.

C/C++ compiler optimisations: should I prefer creating new variables, re-using existing ones, or avoiding variables altogether?

This is something I've always wondered: is it easier for the compiler to optimise functions where existing variables are re-used, where new (ideally const) intermediate variables are created, or where creating variables is avoided in favour of directly using expressions?
For example, consider the functions below:
// 1. Use expression as and when needed, no new variables
void MyFunction1(int a, int b)
{
    SubFunction1(a + b);
    SubFunction2(a + b);
    SubFunction3(a + b);
}

// 2. Re-use existing function parameter variable to compute
// result once, and use result multiple times.
// (I've seen this approach most in old-school C code)
void MyFunction2(int a, int b)
{
    a += b;
    SubFunction1(a);
    SubFunction2(a);
    SubFunction3(a);
}

// 3. Use a new variable to compute result once,
// and use result multiple times.
void MyFunction3(int a, int b)
{
    int sum = a + b;
    SubFunction1(sum);
    SubFunction2(sum);
    SubFunction3(sum);
}

// 4. Use a new const variable to compute result once,
// and use result multiple times.
void MyFunction4(int a, int b)
{
    const int sum = a + b;
    SubFunction1(sum);
    SubFunction2(sum);
    SubFunction3(sum);
}
My intuition is that:
In this particular situation, function 4 is easiest to optimise because it explicitly states the intention for the use of the data. It is telling the compiler: "We are summing the two input arguments, the result of which will not be modified, and we are passing on the result in an identical way to each subsequent function call." I expect that the value of the sum variable will just be put into a register, and no actual underlying memory access will occur.
Function 1 is the next easiest to optimise, though it requires more inference on the part of the compiler. The compiler must spot that a + b is used in an identical way for each function call, and it must know that the result of a + b is identical each time that expression is used. I would still expect the result of a + b to be put into a register rather than committed to memory. However, if the input arguments were more complicated than plain ints, I can see this being more difficult to optimise (rules on temporaries would apply for C++).
Function 3 is the next easiest after that: the result is not put into a const variable, but the compiler can see that sum is not modified anywhere in the function (assuming that the subsequent functions do not take a mutable reference to it), so it can just store the value in a register similarly to before. This is less likely than in function 4's case, though.
Function 2 gives the least assistance for optimisations, since it directly modifies an incoming function argument. I'm not 100% sure what the compiler would do here: I don't think it's unreasonable to expect it to be intelligent enough to spot that a is not used anywhere else in the function (similarly to sum in function 3), but I wouldn't guarantee it. This could require modifying stack memory, depending on how the function arguments are passed in (I'm not too familiar with the ins and outs of how function calls work at that level of detail).
Are my assumptions here correct? Are there more factors to take into account?
EDIT: A couple of clarifications in response to comments:
If C and C++ compilers would approach the above examples in different ways, I'd be interested to know why. I can understand that C++ would optimise things differently depending on what constraints there are on whichever objects might be inputs to these functions, but for primitive types like int I would expect them to use identical heuristics.
Yes, I could compile with optimisations and look at the assembly output, but I don't know assembly, hence I'm asking here instead.
Good modern compilers generally do not “care” about the names you use to store values. They perform lifetime analyses of the values and generate code based on that. For example, given:
int x = complicated expression 0;
... code using x
x = complicated expression 1;
... code using x
the compiler will see that complicated expression 0 is used in the first section of code and complicated expression 1 is used in the second section of code, and the name x is irrelevant. The result will be the same as if the code used different names:
int x0 = complicated expression 0;
... code using x0
int x1 = complicated expression 1;
... code using x1
So there is no point in reusing a variable for a different purpose; it will not help the compiler save memory or otherwise optimize.
Even if the code were in a loop, such as:
int x;
while (some condition)
{
x = complicated expression;
... code using x
}
the compiler will see that complicated expression is born at the beginning of the loop body and ends by the end of the loop body.
What this means is you do not have to worry about what the compiler will do with the code. Instead, your decisions should be guided mostly by what is clearer to write and more likely to avoid bugs:
Avoid reusing a variable for more than one purpose. For example, if somebody is later updating your function to add a new feature, they might miss the fact you have changed the function parameter with a += b; and use a later in the code as if it still contained the original parameter.
Do freely create new variables to hold repeated expressions. int sum = a + b; is fine; it expresses the intent and makes it clearer to readers when the same expression is used in multiple places.
Limit the scope of variables (and identifiers generally). Declare them only in the innermost scope where they are needed, such as inside a loop rather than outside, as sketched below. This avoids a variable being used accidentally where it is no longer appropriate.
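As a small illustration of the last point (hypothetical function, not from the question): the variable only exists inside the loop body that needs it.
#include <cstddef>
#include <iostream>
#include <vector>

void printSums(const std::vector<int>& a, const std::vector<int>& b)
{
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i)
    {
        const int sum = a[i] + b[i];  // declared in the innermost scope that needs it
        std::cout << sum << '\n';
    }
    // sum is not visible here, so it cannot be misused after the loop
}

int main()
{
    printSums({1, 2, 3}, {4, 5, 6});
}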

Why is a member function parameter const mismatch allowed? [duplicate]

The C++ Primer, 5th Edition, says:
int f(int){ /* can write to parameter */}
int f(const int){ /* cannot write to parameter */}
The two functions are indistinguishable. But as you know, the two functions really differ in how they can update their parameters.
Can someone explain this to me?
EDIT
I think I didn't phrase my question well. What I really care about is why C++ doesn't allow these two functions to exist simultaneously as different functions, since they really do differ in whether the parameter can be written to or not. Intuitively, it should!
EDIT
The nature of pass by value is actually copying the argument values into the parameters, even for references and pointers, where the copied values are addresses. From the caller's viewpoint, whether a const or non-const object is passed to the function does not influence the values (and of course the types) copied into the parameters.
The distinction between top-level const and low-level const matters when copying objects. More specifically, top-level const (unlike low-level const) is ignored when copying objects, since copying won't affect the copied-from object. It is immaterial whether the object copied to or copied from is const or not.
So for the caller, differentiating them is not necessary. Likewise, from the function's viewpoint, a top-level const parameter doesn't influence the interface and/or the functionality of the function. The two functions actually accomplish the same thing. Why bother implementing two copies?
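To illustrate the point about top-level const being ignored when copying (a trivial sketch, not from the original question):
int main()
{
    const int ci = 42;   // top-level const on ci
    int i = ci;          // fine: copying ignores the source's top-level const
    i = 0;
    const int ci2 = i;   // fine the other way around as well
    return ci2;          // 0
}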
allow these two functions to exist simultaneously as different functions, since they really do differ in whether the parameter can be written to or not. Intuitively, it should!
Overloading of functions is based on the parameters the caller provides. Here, it's true that the caller may provide a const or non-const value but logically it should make no difference to the functionality that the called function provides. Consider:
f(3);
int x = 1 + 2;
f(x);
If f() did something different in each of these situations, it would be very confusing! The programmer calling f() can reasonably expect identical behaviour, and should be free to add or remove the variables passing the parameters without invalidating the program. This safe, sane behaviour is the point of departure that you'd want to justify exceptions to, and indeed there is one - behaviour can be varied when the function is overloaded on references, à la:
void f(const int&) { ... }
void f(int&) { ... }
So, I guess this is what you find non-intuitive: that C++ provides more "safety" (enforced consistent behaviour through supporting only a single implementation) for non-references than references.
The reasons I can think of are:
So when a programmer knows a non-const& parameter will have a longer lifetime, they can select an optimal implementation. For example, in the code below it may be faster to return a reference to a T member within F, but if F is a temporary (which it might be if the compiler matches const F&) then a by-value return is needed. This is still pretty dangerous as the caller has to be aware that the returned reference is only valid as long as the parameter's around.
T f(const F&);
T& f(F&); // return type could be by const& if more appropriate
propagation of qualifiers like const-ness through function calls as in:
const T& f(const F&);
T& f(F&);
Here, some (presumably F member-) variable of type T is being exposed as const or non-const based on the const-ness of the parameter when f() is called. This type of interface might be chosen when wishing to extend a class with non-member functions (to keep the class minimalist, or when writing templates/algos usable on many classes), but the idea is similar to const member functions like vector::operator[](), where you want v[0] = 3 allowed on a non-const vector but not a const one.
When values are accepted by value they go out of scope as the function returns, so there's no valid scenario involving returning a reference to part of the parameter and wanting to propagate its qualifiers.
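A hedged sketch of that qualifier-propagation pattern (Widget and value_of are made-up names; the idea mirrors the const/non-const operator[] example above):
struct Widget { int value = 0; };

int&       value_of(Widget& w)       { return w.value; }  // non-const in, non-const out
const int& value_of(const Widget& w) { return w.value; }  // const in, const out

int main()
{
    Widget w;
    value_of(w) = 3;       // allowed: non-const overload selected

    const Widget cw{};
    int v = value_of(cw);  // const overload selected: read-only access
    return v;
}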
Hacking the behaviour you want
Given the rules for references, you can use them to get the kind of behaviour you want - you just need to be careful not to modify the by-non-const-reference parameter accidentally, so might want to adopt a practice like the following for the non-const parameters:
T f(F& x_ref)
{
    F x = x_ref; // or const F if you won't modify it
    ...use x for safety...
}
Recompilation implications
Quite apart from the question of why the language forbids overloading based on the const-ness of a by-value parameter, there's the question of why it doesn't insist on consistency of const-ness in the declaration and definition.
For f(const int) / f(int): if you are declaring a function in a header file, then it's best NOT to include the const qualifier, even if the later definition in an implementation file will have it. This is because during maintenance a programmer may wish to remove the qualifier; removing it from the header may trigger a pointless recompilation of client code, so it's better not to insist that the two be kept in sync - and indeed that's why the compiler doesn't produce an error if they differ. If you only ever add or remove const in the function definition, the qualifier stays close to the implementation, which is where a reader cares about const-ness when analysing the function's behaviour. If, instead, you have const in both header and implementation file, and a programmer then makes the definition non-const but forgets (or decides not) to update the header in order to avoid client recompilation, that's more dangerous than the other way around: someone may have the const version from the header in mind when analysing the current implementation, leading to wrong reasoning about its behaviour. This is all a very subtle maintenance issue - only really relevant to commercial programming - but that's the basis of the guideline not to use const in the interface. Further, it's more concise to omit it from the interface, which is nicer for client programmers reading over your API.
Since there is no difference to the caller, and no clear way to distinguish between a call to a function with a top level const parameter and one without, the language rules ignore top level consts. This means that these two
void foo(const int);
void foo(int);
are treated as the same declaration. If you were to provide two implementations, you would get a multiple definition error.
There is a difference in a function definition with top level const. In one, you can modify your copy of the parameter. In the other, you can't. You can see it as an implementation detail. To the caller, there is no difference.
// declarations
void foo(int);
void bar(int);
// definitions
void foo(int n)
{
    n++;
    std::cout << n << std::endl;
}

void bar(const int n)
{
    n++; // ERROR!
    std::cout << n << std::endl;
}
This is analogous to the following:
void foo()
{
    int n = 42;
    n++;
    std::cout << n << std::endl;
}

void bar()
{
    const int n = 42;
    n++; // ERROR!
    std::cout << n << std::endl;
}
In "The C++ Programming Language", fourth edition, Bjarne Stroustrup writes (§12.1.3):
Unfortunately, to preserve C compatibility, a const is ignored at the highest level of an argument type. For example, this is two declarations of the same function:
void f(int);
void f(const int);
So it seems that, contrary to some of the other answers, this rule of C++ was not chosen because of the indistinguishability of the two functions, or other similar rationales, but rather as a less-than-optimal solution, for the sake of compatibility.
Indeed, in the D programming language, it is possible to have those two overloads. Yet, contrary to what other answers to this question might suggest, the non-const overload is preferred if the function is called with a literal:
void f(int);
void f(const int);
f(42); // calls void f(int);
Of course, you should provide equivalent semantics for your overloads, but that is not specific to this overloading scenario with nearly indistinguishable overloaded functions.
As the comments say, inside the first function the parameter could be changed, if it had been named. It is a copy of the caller's int. Inside the second function, any changes to the parameter, which is still a copy of the caller's int, will result in a compile error. const is a promise that you won't change the variable.
A function is useful only from the caller's perspective.
Since there is no difference to the caller, there is no difference, for these two functions.
I think "indistinguishable" is meant in terms of overloading and the compiler, not in terms of whether the caller can distinguish them.
The compiler does not distinguish between those two functions; their names are mangled in the same way. That leads to a situation where the compiler treats the two declarations as a redefinition.
Answering this part of your question:
What I really care about is why C++ doesn't allow these two functions to exist simultaneously as different functions, since they really do differ in whether the parameter can be written to or not. Intuitively, it should!
If you think about it a little more, it isn't at all intuitive - in fact, it doesn't make much sense. As everybody else has said, a caller is in no way influenced when a function takes its parameter by value, and it doesn't care, either.
Now, let's suppose for a moment that overload resolution worked on top level const, too. Two declarations like this
int foo(const int);
int foo(int);
would declare two different functions. One of the problems would be deciding which function this expression calls: foo(42). The language rules could say that literals are const and that the const "overload" would be called in this case. But that's the least of the problems.
A programmer feeling sufficiently evil could write this:
int foo(const int i) { return i*i; }
int foo(int i) { return i*2; }
Now you'd have two overloads that appear semantically equivalent to the caller but do completely different things. Now that would be bad. We'd be able to write interfaces that limit the user by the way they do things, not by what they offer.

c++ return type of function to be a function of its input

I am learning C++ and I am wondering if there is a way to define a template where the return type would actually be a function of the input of the function.
For example:
calling fun(1) would return me an int
calling fun(2) would return me a float
I guess this could be done using some kind of map?
1 <> int
2 <> float
The problem I was trying to solve is, for example, if I have an object called room I wanted to have a function called get_contents on which I would pass an enum to define the return type. For example:
std::vector<Table> tables = room.get_contents(Room::TABLE);
std::vector<Chair> chairs = room.get_contents(Room::CHAIR);
The first question probably isn't the best solution to this problem; nevertheless, I wanted to know if it is possible. Also, what is the best pattern to achieve what I want?
This looks to me like room.get just needs to be a template, and forget the enum.
std::vector<Table> tables = room.get<Table>();
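A hedged sketch of what that could look like (Room, Table, Chair and the member names are assumptions based on the question; the specializations return const references here):
#include <vector>

struct Table {};
struct Chair {};

class Room {
public:
    // The caller picks the type at compile time; no enum needed.
    template <typename T>
    const std::vector<T>& get() const;

private:
    std::vector<Table> tables_;
    std::vector<Chair> chairs_;
};

// Each supported type gets an explicit specialization returning the matching member.
template <>
const std::vector<Table>& Room::get<Table>() const { return tables_; }

template <>
const std::vector<Chair>& Room::get<Chair>() const { return chairs_; }

int main()
{
    Room room;
    const std::vector<Table>& tables = room.get<Table>();
    const std::vector<Chair>& chairs = room.get<Chair>();
    (void)tables;
    (void)chairs;
}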
The language doesn't allow you to do that.
Think about it: the input to your function could be a variable whose value is not known at compile time. The compiler won't be able to deduce the return type.
What you can do:
Use two different functions and call the appropriate one.
Have a common return type, e.g. a base class pointer or reference.
You can use a template for the function, but you will have to explicitly pass the type as the template parameter; it always has to be known at compile time and can't be inferred from the argument's value automatically:
template <typename T>
T product(T x, T y) { return x * y; }
and call it as:
product<int>(3,2);
(abstract example, may be less useful in what you are doing)

If void() does not return a value, why do we use it?

void f() means that f returns nothing. If void returns nothing, then why do we use it? What is the main purpose of void?
When C was invented the convention was that, if you didn't specify the return type, the compiler automatically inferred that you wanted to return an int (and the same holds for parameters).
But often you write functions that do stuff and don't need to return anything (think e.g. about a function that just prints something on the screen); for this reason, it was decided that, to specify that you don't want to return anything at all, you have to use the void keyword as "return type".
Keep in mind that void serves also other purposes; in particular:
if you specify it as the parameter list of a function, it means that the function takes no parameters; this was needed in C, because a function declaration without parameters meant to the compiler that the parameter list was simply left unspecified. In C++ this is no longer needed, since an empty parameter list means that no parameter is allowed for the function;
void also has an important role in pointers; void * (and its variations) means "pointer to something left unspecified". This is useful if you have to write functions that must store/pass pointers around without actually using them (only at the end, to actually use the pointer, a cast to the appropriate type is needed).
also, a cast to (void) is often used to mark a value as deliberately unused, suppressing compiler warnings.
int somefunction(int a, int b, int c)
{
    (void)c; // c is reserved for future usage, kill the "unused parameter" warning
    return a + b;
}
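To illustrate the void* point from the list above with a standard function (a minimal sketch): memcpy takes void* / const void* precisely because it copies raw bytes without caring what they represent.
#include <cstring>

int main()
{
    int src[3] = {1, 2, 3};
    int dst[3];
    std::memcpy(dst, src, sizeof src);  // void* parameters: the pointed-to type is left unspecified
    return dst[0] == 1 ? 0 : 1;
}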
This question has to do with the history of the language: C++ borrowed from C, and C used to implicitly type everything untyped as int (as it turned out, it was a horrible idea). This included functions that were intended as procedures (recall that the difference between functions and procedures is that function invocations are expressions, while procedure invocations are statements). If I recall it correctly from reading the early C books, programmers used to patch this shortcoming with a #define:
#define void int
This convention has later been adopted in the C standard, and the void keyword has been introduced to denote functions that are intended as procedures. This was very helpful, because the compiler could now check if your code is using a return value from a function that wasn't intended to return anything, and to warn you about functions that should return but let the control run off the end instead.
In imperative programming languages such as C, C++, Java, etc., functions and methods of type void are used for their side effects. They do not produce a meaningful value to return, but they influence the program state in one of many possible ways. E.g., the exit function in C returns no value, but it has the side effect of aborting the application. Another example, a C++ class may have a void method that changes the value of its instance variables.
void() means return nothing.
void doesn't mean nothing. void is a type to represent nothing. That is a subtle difference: the representation is still required, even though it represents nothing.
This type is used as function's return type which returns nothing. This is also used to represent generic data, when it is used as void*. So it sounds amusing that while void represents nothing, void* represents everything!
Because sometimes you don't need a return value. That's why we use it.
If you didn't have void, how would you tell the compiler that a function doesn't return a value?
Consider situations where you may have to do some calculation on global variables and put the results in a global variable, or where you want to print something depending on the arguments, etc. In these situations you can use a method which doesn't return a value, i.e. void.
Here's an example function:
struct SVeryBigStruct
{
    // a lot of data here
};

SVeryBigStruct foo()
{
    SVeryBigStruct bar;
    // calculate something here
    return bar;
}
And now here's another function:
void foo2(SVeryBigStruct& bar) // or SVeryBigStruct* pBar
{
    bar.member1 = ...
    bar.member2 = ...
}
The second function is faster: it doesn't have to copy the whole struct.
Probably to tell the compiler: "you don't need to push and pop all CPU registers!"
Sometimes it can be used to print something, rather than to return it. See http://en.wikipedia.org/wiki/Mutator_method#C_example for examples
Functions are not required to return a value. To tell the compiler that a function does not return a value, a return type of void is used.

Initializing variable in C++ function header

I've come across some C++ code that looks like this (simplified for this post):
(Here's the function prototype located in someCode.hpp)
void someFunction(const double & a, double & b, const double c = 0, const double * d = 0);
(Here's the first line of the function body located in someCode.cpp that #include's someCode.hpp)
void someFunction(const double & a, double & b, const double c, const double * d);
Can I legally call someFunction using:
someFunction(*ptr1, *ptr2);
and/or
someFunction(*ptr1, *ptr2, val1, &val2);
where the variables ptr1, ptr2, val1, and val2 have been defined appropriately and val1 and val2 do not equal zero? Why or why not?
And if it is legal, is this syntax preferred vs overloading a function to account for the optional parameters?
Yes, this is legal; these are called default arguments. I would say it's preferred to overloading due to involving less code, yes.
Regarding your comment about const, that doesn't apply to the default value itself, it applies to the argument. If you have an argument of type const char* fruit = "apple", that doesn't mean it has to be called with a character pointer whose value is the same as the address of the "apple" string literal (which is good, since that would be hard to guarantee). It just means that it has to be called with a pointer to constant characters, and tells you that the function being called doesn't need to write to that memory, it is only read from.
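A small sketch of that (greet and the default string are made-up, not from the question): the const promises the function only reads through the pointer, whatever value is actually passed.
#include <iostream>

void greet(const char* fruit = "apple")
{
    std::cout << "I like " << fruit << '\n';  // fruit is only read, never written through
}

int main()
{
    greet();          // uses the default, "apple"
    greet("banana");  // any pointer to const char is accepted
}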
Yes, the parameters are optional and when you don't pass them, the given default values will be used.
Using default parameter values instead of overloading has some advantages and disadvantages. The advantage is less typing in both the interface and the implementation. But the disadvantage is that the default value is a part of the interface, with all its consequences. For example, when you change the default value, you may need to recompile a lot of code, instead of a single file as with overloading.
I personally prefer default parameters.
I'd like to expand a bit on whether Default Parameters are preferred over overloading.
Usually they are for all the reasons given in the other answers, most notably less boilerplate code.
There are also valid reasons that make overloading a better alternative in some situations:
Default values are part of the interface, changes might break clients (as #Juraj already noted)
Additionally, overloads make it easier to add additional (combinations of) parameters without breaking the (binary) interface.
Overloads are resolved at compile time, which can give the compiler better optimization (especially inlining) possibilities, e.g. if you have something like this:
void foo(Something* param = 0) {
    if (param == 0) {
        simpleAlgorithm();
    } else {
        complexAlgorithm(param);
    }
}
It might be better to use overloads.
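For comparison, a hedged sketch of the overload alternative to the snippet above (the algorithm bodies here are empty placeholders): the choice between the two paths is made at each call site at compile time, with no runtime check inside foo.
struct Something {};

void simpleAlgorithm() { /* fast path */ }
void complexAlgorithm(Something*) { /* general path */ }

void foo() { simpleAlgorithm(); }                        // no-argument case
void foo(Something* param) { complexAlgorithm(param); }  // explicit-argument case

int main()
{
    Something s;
    foo();    // resolves to foo()
    foo(&s);  // resolves to foo(Something*)
}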
Can I legally call someFunction using:
someFunction(*ptr1, *ptr2);
Absolutely! Yes, the other two parameters that the function accepts would take the default values you have set in the header file, which is zero for both arguments.
But if you do supply the 3rd and 4th arguments to the function, then those values are used instead of the defaults.