With an inline function, the programmer suggests that the compiler copy the function's body into the caller at compile time.
#include <iostream>
inline void swap(int *a, int *b){
    *a = *a ^ *b;
    *b = *a ^ *b;
    *a = *a ^ *b;
}
int main(){
    int a = 2, b = 3;
    swap(a, b); // fails to compile: swap expects int*
    std::cout << a << b;
    return 0;
}
Here, the swap function is called by value.
So, suppose that the above code is inlined by the compiler: will the actual variables a and b be passed so that it works perfectly, or does the part that contains the inline function have some other scope?
Because if it is inlined, the code is copied directly into main, and since no parameter passing takes place, the code has access to a and b.
PS: The question has been edited because I was not able to put my doubt into words precisely, which is why it has so many downvotes.
Why is it that the first code fails to work until a pointer to a and b are used?
Because swap takes pointers. You can't call it without pointers, even if it would be inlined. The program still needs to be syntactically correct even if the semantics change due to optimizations.
Also note that inline is just a suggestion to the compiler to inline the function. Its actual purpose is to prevent multiple-definition errors if you define the function in multiple translation units.
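A minimal sketch of that second purpose (file names are hypothetical): the same definition may appear in every translation unit that includes the header, and inline keeps the linker from rejecting the duplicates.
// swap_utils.h
#ifndef SWAP_UTILS_H
#define SWAP_UTILS_H
inline void swap_ints(int *a, int *b) {
    int temp = *a; // ordinary temp-variable swap
    *a = *b;
    *b = temp;
}
#endif
// a.cpp and b.cpp can both #include "swap_utils.h";
// without `inline`, linking them together would fail
// with a multiple-definition error.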
With inline functions, the piece of code is copied into the caller program at compile time.
That's completely incorrect. Making a function inline has no effect on what your program does. If your function does not work without inline then it's not going to start working when you make it inline, and vice versa.
So the premise of the question is wrong.
Your inline function and usage are wrong. You are passing variables to a function that requires pointers.
In main, your function call should be:
swap(&a, &b);
If you don't want to mess with pointers, I recommend a version using references:
#include <iostream>
using std::cout;
inline void my_swap(int& a, int& b)
{
    int temp = a;
    a = b;
    b = temp;
}
int main()
{
    int p = 42, q = 1;
    cout << "p: " << p << ", q: " << q << "\n";
    my_swap(p, q);
    cout << "p: " << p << ", q: " << q << "\n";
    return 0;
}
The parameters of my_swap are passed by reference, which means that the function modifies the variables passed to it.
Take the above main function and print out the assembly language. There should be no branch or call instructions for the swap function. Increase the optimization level as necessary (some compilers won't inline the code at the lowest optimization levels, to make debugging easier). A very intelligent compiler may replace the entire function with an assembly-language instruction for swapping.
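With GCC, for instance, you can inspect this yourself (file name assumed):
g++ -O2 -S main.cpp   # writes main.s; look for the absence of a call to my_swap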
This question already has answers here:
Sell me const-correctness
(16 answers)
Closed 2 years ago.
What is the point of using the keyword const? For example, when making a game, one of the first things to do is to set its width and height, and most of the time you'll write something like:
const int width
and
const int height
Now I know that you should do it like that because the width and height of the screen will not change throughout the game, but what is the point of doing so? You can do the same thing without using const and it will work just fine.
That was just an example. So what I'm confused about right now is:
What is the point of using the const keyword anywhere if you won't change the variable anyway?
Non-exhaustive list of reasons:
Software engineering (SWE). SWE is not just programming, but programming with other people and over time.
const lets you explicitly express an invariant, which allows you and others to reason about the code. As the program becomes bigger, these invariants cannot simply be memorized. That's why encoding them in the programming language helps.
Optimization opportunities.
With the knowledge that certain values will not change, the compiler can make optimizations that would not be possible otherwise. To take this to the max, constexpr means that a value will be known at compile time, not just at run time. This becomes even more important in potentially multi-threaded contexts.
Example:
What kind of optimization does const offer in C/C++?
I leave out whole-program analysis, which would require a much longer answer and almost certainly does not apply to generic C++ programs. But whole-program analysis allows the analyzer or compiler to reason about the constness of variables as they are passed between functions, translation units, and libraries.
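A small sketch of the compile-time point above: constexpr guarantees the value is known at compile time, so dependent computations can be folded away entirely.
constexpr int width = 640;
constexpr int height = 480;
constexpr int area = width * height; // computed by the compiler, not at run time
static_assert(area == 307200, "area is a compile-time constant");
int main() { return 0; }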
Without const, you have to remember to not change the variable. The larger your program becomes, the harder it gets.
It also has some other useful effects:
const int a = 10;
int b[a]; // Doesn't work if `a` is not `const`.
// ...
void foo(const int &a) {}
void bar()
{
    foo(42); // Doesn't work if the parameter is a non-const reference.
}
Having something declared const, compared to a value set with #define for instance, lets you declare something that the compiler will never let you alter, but that still keeps all of the other properties of a regular variable.
In particular, it still occupies a place in memory, and a pointer to it can be obtained with &, so all read-only routines that work on regular objects remain compatible with it.
It's especially useful when your constant object is not a simple native-type variable, but rather a complicated object spawned from a class, which still needs to be initialized through a constructor.
Also remember that const is a type qualifier that can apply not only to variable declarations, but also to arguments in a function prototype. In this particular case, it enables your function to accept both constant and non-constant arguments.
Such a constant argument could be, for example, a double-quoted "string", which is const char *-typed, because the string is directly part of the code itself and defined at compile time. Without the const qualifier, nothing would prevent your function from trying to write to it, nor warn the programmer that this is forbidden.
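A minimal sketch of that last point (the function name is made up): the const parameter lets one function accept literals and mutable buffers alike, while blocking accidental writes.
#include <iostream>
void print_label(const char *label)
{
    std::cout << label << "\n"; // reading is fine
    // label[0] = 'X';         // error: assignment of read-only location
}
int main()
{
    char name[] = "player one";
    print_label(name);    // mutable buffer: OK
    print_label("score"); // string literal (const char array): OK
}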
To stay with your example, suppose I write a game library that has a
struct game {
    int width;
    int height;
    int area;
    game(int w, int h) : width(w), height(h), area(w*h) {}
};
Now you use my library and because I did not write any documentation (evil me) you just start writing code and try what you can do with that class. You write code
#include <iostream>
int main() {
    game g{3,5};
    g.width = 12;
    std::cout << g.width << " * " << g.height << " == " << g.area;
}
and get output:
12 * 5 == 15
You will complain that the code I wrote is broken because you get nonsense results when you use it. If, however, I had used const for the things you are not supposed to modify:
struct game {
    const int width;
    const int height;
    const int area;
    game(int w, int h) : width(w), height(h), area(w*h) {}
};
Then you would get a nice error message that tells you that you tried to modify something that you are not supposed to modify:
prog.cc: In function 'int main()':
prog.cc:11:15: error: assignment of read-only member 'game::width'
g.width = 12;
Once you fix your code to
#include <iostream>
int main() {
    game g{3,5};
    std::cout << g.width << " * " << g.height << " == " << g.area;
}
all the consts could be removed and the output would not change. However, this is not always the case. For example, member functions can have const and non-const overloads that do different things depending on whether the method is called on a const or on a non-const object:
#include <iostream>
struct foo {
    void sayHello() const {
        std::cout << "I am a const object\n";
    }
    void sayHello() {
        std::cout << "I am a non-const object\n";
    }
};
int main() {
    const foo f;
    f.sayHello();
    foo g;
    g.sayHello();
}
output:
I am a const object
I am a non-const object
Conclusion:
const is mainly there to ensure correctness and to avoid mistakes. const can also be used to make const objects behave differently from non-const objects. There is more to it, and you can read up on the details, e.g., here.
const is for a constant variable, meaning nobody should change it; or for passing a non-trivial type as a const T & parameter; or for making the pointer itself constant (T *const p); or for making the value pointed to by the pointer constant (const T *p).
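A quick sketch of those pointer cases:
int main() {
    int x = 1, y = 2;
    const int *p = &x;       // pointer to const: *p cannot be written through p
    p = &y;                  // ...but p itself can be repointed
    int *const q = &x;       // const pointer: q cannot be repointed
    *q = 5;                  // ...but the pointed-to value can be modified
    const int *const r = &x; // neither repointable nor writable through r
    return *p + *q + *r;
}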
Say I have the following code:
#include <iostream>
using namespace std;
int defaultvalue[] = {1,2};
int fun(int * arg = defaultvalue)
{
    arg[0] += 1;
    return arg[0];
}
int main()
{
    cout << fun() << endl;
    cout << fun() << endl;
    return 0;
}
and the result is:
2
3
which makes sense because the pointer arg manipulated the array defaultvalue. However, if I change the code to:
#include <iostream>
using namespace std;
int defaultvalue[] = {1,2};
int fun(int arg[] = defaultvalue)
{
    arg[0] += 1;
    return arg[0];
}
int main()
{
    cout << fun() << endl;
    cout << fun() << endl;
    return 0;
}
but the result is still:
2
3
Moreover, when I print out defaultvalue:
cout << defaultvalue[0] << endl;
it turns out to be 3.
My question is, in the second example, should the function parameter be passed by value, so that change of arg will have no effect on defaultvalue?
My question is, in the second example, should the function parameter be passed by value, so that change of arg will have no effect on defaultvalue?
No.
It is impossible to pass an array by value (thanks a lot, C!) so, as a "compromise" (read: design failure), int[] in a function parameter list actually means int*. So your two programs are identical. Even writing int[5] or int[24] or int[999] would actually mean int*. Ridiculous, isn't it?!
In C++ we prefer to use std::array for arrays: it's an array wrapper class, which has proper object semantics, including being copyable. You can pass those into a function by value just fine.
Indeed, std::array was primarily introduced for the very purpose of making these silly and surprising native array semantics obsolete.
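A small sketch of passing a std::array by value: the callee receives a genuine copy, so the caller's array is untouched.
#include <array>
#include <iostream>
void bump(std::array<int, 2> a) // by value: a real copy of the whole array
{
    a[0] += 1; // modifies only the local copy
}
int main()
{
    std::array<int, 2> values{1, 2};
    bump(values);
    std::cout << values[0] << "\n"; // still 1
}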
When we declare a function like this
int func(int* arg);
or this
int func(int arg[]);
they're technically the same. It's a matter of expressiveness. In the first case, the API author suggests that the function should receive a pointer to a single value; in the second case, it suggests that the function wants an array (of some unspecified length, possibly terminated by a sentinel value, for instance).
You could've also written
int func(int arg[3]);
which would again be technically identical, only it would hint to the API user that they're supposed to pass in an int array of at least 3 elements. The compiler doesn't enforce any of these added modifiers in these cases.
If you wanted to copy the array into the function (in a non-hacked way), you would first create a copy of it in the calling code, and then pass that one onwards. Or, as a better alternative, use std::array (as suggested by #LightnessRacesinOrbit).
As others have explained, when you put
int arg[] as a function parameter, whatever is inside those brackets doesn't really matter (you could even write int arg[5234234] and it would still work), since it won't change the fact that it's still just a plain int * pointer.
If you really want to make sure a function takes an array, it's best to pass it like this:
#include <cstring>
template<std::size_t size>
void func (const int (&in_arr)[size])
{
    int modifyme_arr[size];                           // local array of the same length
    std::memcpy(modifyme_arr, in_arr, sizeof in_arr); // sizeof gives the byte count
    // now you can work on your local copied array
}
int arr[100];
func(arr);
or if you want 100 elements exactly
void func (const int (&arr)[100])
{
}
func(arr);
These are the proper ways to pass a plain array, because they guarantee that what you are getting really is an array, and not just some random int * pointer whose size the function doesn't know. Of course you can pass a "count" value, but what if you make a mistake and it's not the right one? Then you get a buffer overflow.
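A short sketch of that guarantee, reusing the templated func from above: a bare pointer no longer matches, so the mistake is caught at compile time.
int main()
{
    int arr[100] = {};
    func(arr);  // OK: the template deduces size == 100
    int *p = arr;
    // func(p); // error: no matching function - a bare pointer carries no size
    return 0;
}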
Consider the trivial test of this swap function in C++ which uses pass by pointer.
#include <iostream>
using std::cout;
using std::endl;
void swap_ints(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
    return;
}
int main(void)
{
    int a = 1;
    int b = 0;
    cout << "a = " << a << "\t" << "b = " << b << "\n\n";
    swap_ints(&a, &b);
    cout << "a = " << a << "\t" << "b = " << b << endl;
    return 0;
}
Does this program use more memory than if I had passed by reference, as in this function declaration?
void swap_ints(int &a, int &b)
{
    int temp = a;
    a = b;
    b = temp;
    return;
}
Does this pass-by-reference version of the C++ function use less memory, by not needing to create the pointer variables?
And does C not have this "pass-by-reference" ability the same way C++ does? If so, why not, since it would mean more memory-efficient code, right? If not, what is the pitfall that keeps C from adopting this ability? I suppose what I am not considering is that C++ probably creates pointers behind the scenes to achieve this functionality. Is that what the compiler actually does, so that C++ really has no true advantage besides neater code?
The only way to be sure would be to examine the code the compiler generated for each and compare the two to see what you get.
That said, I'd be a bit surprised to see a real difference (at least when optimization was enabled), at least for a reasonably mainstream compiler. You might see a difference for a compiler on some really tiny embedded system that hasn't been updated in the last decade or so, but even there it's honestly pretty unlikely.
I should also add that in most cases I'd expect the code for such a trivial function to be generated inline, so there would be no function call or parameter passing involved at all. In a typical case, it's likely to come down to nothing more than a couple of loads and stores.
Don't confuse counting variables in your code with counting memory used by the processor. C++ has many abstractions that hide the inner workings of the compiler in order to make things simpler and easier for a human to follow.
By design, C does not have quite as many levels of abstractions as C++.
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Pointer vs. Reference
Hi All,
I was experimenting and encountered a question about the reference operator. Consider the normal call-by-reference swap code below, which works as desired:
#include <iostream>
using namespace std;
void test(int *a, int *b){
    int temp;
    temp = *a;
    *a = *b;
    *b = temp;
    cout << "\n Func a=" << *a << " b=" << *b;
}
int main()
{
    int a = 5, b = 3;
    cout << "\n Main a=" << a << " b=" << b;
    test(&a, &b);
    cout << "\n Main again a=" << a << " b=" << b;
    return 0;
}
On the other hand, the code below also does the same kind of swapping and yields exactly the same result.
#include <iostream>
using namespace std;
void test(int &a, int &b){
    int temp;
    temp = a;
    a = b;
    b = temp;
    cout << "\n Func a=" << a << " b=" << b;
}
int main()
{
    int a = 5, b = 3;
    cout << "\n Main a=" << a << " b=" << b;
    test(a, b);
    cout << "\n Main again a=" << a << " b=" << b;
    return 0;
}
Can someone explain how the function call differs in the second example? (I am comfortable with the first part, in which the address is passed; but what happens in the second case?)
Along the same lines, I hope the same happens in an assignment statement as well, i.e.
int a=5;
int &b=a;
Thanks in advance.
EDIT:
Thanks for the replies. But my doubt is what exactly happens in memory.
int *pointer = &x;
stores the address, but what happens when we do
int &point = x;
Both versions perform an identical job and quite probably the compiler will emit identical object code.
The version using reference parameters is much easier to read.
You can pass a NULL pointer to the version that uses pointers, which leads to a memory violation when it is dereferenced. The same mistake cannot be made with reference parameters.
& means that you're passing your parameter by reference: the variable you've passed is exactly the same variable you're operating on in your function. Actually there is no significant difference between using a pointer and a reference, because when you pass a pointer and then dereference it, you again get exactly the same variable. To sum up: in both cases it's possible to modify the passed variable. The opposite is when you pass the variable's value.
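A minimal illustration of that aliasing:
#include <iostream>
int main() {
    int a = 5;
    int &b = a;                      // b is just another name for a
    b = 7;                           // writing through the reference modifies a itself
    std::cout << a << "\n";          // prints 7
    std::cout << (&a == &b) << "\n"; // prints 1: same address
}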
In both cases you are passing the variables by reference. In the first function you can conceptually think of the address that is being passed. In the second example though, I conceptually think of the variable itself being passed, but passed by reference instead of by value.
I am not 100% sure, but I suspect on most compilers they would compile to the same object code.
I'm a TA for an intro C++ class. The following question was asked on a test last week:
What is the output from the following program:
int myFunc(int &x) {
    int temp = x * x * x;
    x += 1;
    return temp;
}
int main() {
    int x = 2;
    cout << myFunc(x) << endl << myFunc(x) << endl << myFunc(x) << endl;
}
The answer, to me and all my colleagues, is obviously:
8
27
64
But now several students have pointed out that when they run this in certain environments they actually get the opposite:
64
27
8
When I run it in my Linux environment using GCC, I get what I would expect. Using MinGW on my Windows machine, I get what they're talking about.
It seems to be evaluating the last call to myFunc first, then the second call and then the first, then once it has all the results it outputs them in the normal order, starting with the first. But because the calls were made out of order the numbers are opposite.
It seems to me to be a compiler optimization, choosing to evaluate the function calls in the opposite order, but I don't really know why. My question is: are my assumptions correct? Is that what's going on in the background? Or is there something totally different? Also, I don't really understand why there would be a benefit to evaluating the functions backwards and then evaluating output forward. Output would have to be forward because of the way ostream works, but it seems like evaluation of the functions should be forward as well.
Thanks for your help!
The C++ standard does not define what order the subexpressions of a full expression are evaluated, except for certain operators which introduce an order (the comma operator, ternary operator, short-circuiting logical operators), and the fact that the expressions which make up the arguments/operands of a function/operator are all evaluated before the function/operator itself.
GCC is not obliged to explain to you (or me) why it wants to order them as it does. It might be a performance optimisation, it might be because the compiler code came out a few lines shorter and simpler that way, it might be because one of the mingw coders personally hates you, and wants to ensure that if you make assumptions that aren't guaranteed by the standard, your code goes wrong. Welcome to the world of open standards :-)
Edit to add: litb makes a point below about (un)defined behavior. The standard says that if you modify a variable multiple times in an expression, and if there exists a valid order of evaluation for that expression, such that the variable is modified multiple times without a sequence point in between, then the expression has undefined behavior. That doesn't apply here, because the variable is modified in the call to the function, and there's a sequence point at the start of any function call (even if the compiler inlines it). However, if you'd manually inlined the code:
std::cout << pow(x++,3) << endl << pow(x++,3) << endl << pow(x++,3) << endl;
Then that would be undefined behavior. In this code, it is valid for the compiler to evaluate all three "x++" subexpressions, then the three calls to pow, then start on the various calls to operator<<. Because this order is valid and has no sequence points separating the modification of x, the results are completely undefined. In your code snippet, only the order of execution is unspecified.
Exactly why does this have unspecified behaviour?
When I first looked at this example, I felt that the behaviour was well defined because this expression is actually shorthand for a set of function calls.
Consider this more basic example:
cout << f1() << f2();
This is expanded into a sequence of function calls, where the kind of call depends on whether the operators are members or non-members:
// Option 1: Both are members
cout.operator<<(f1()).operator<<(f2());
// Option 2: Both are non-members
operator<<(operator<<(cout, f1()), f2());
// Option 3: First is a member, second a non-member
operator<<(cout.operator<<(f1()), f2());
// Option 4: First is a non-member, second a member
operator<<(cout, f1()).operator<<(f2());
At the lowest level these will generate almost identical code, so I will refer only to the first option from now on.
There is a guarantee in the standard that the compiler must evaluate the arguments to each function call before the body of the function is entered. In this case, cout.operator<<(f1()) must be evaluated before the outer .operator<<(f2()) call, since the result of cout.operator<<(f1()) is required in order to call the other operator.
The unspecified behaviour kicks in because although the calls to the operators must be ordered there is no such requirement on their arguments. Therefore, the resulting order can be one of:
f2()
f1()
cout.operator<<(f1())
cout.operator<<(f1()).operator<<(f2());
Or:
f1()
f2()
cout.operator<<(f1())
cout.operator<<(f1()).operator<<(f2());
Or finally:
f1()
cout.operator<<(f1())
f2()
cout.operator<<(f1()).operator<<(f2());
The order in which function call parameters is evaluated is unspecified. In short, you shouldn't use arguments that have side-effects that affect the meaning and result of the statement.
Yeah, the order of evaluation of function arguments is "unspecified" according to the Standard.
Hence the outputs differ on different platforms.
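A minimal demonstration (function names made up). Under pre-C++17 rules, whether "f1 evaluated" or "f2 evaluated" prints first may differ between compilers; since C++17, chained << evaluates left to right.
#include <iostream>
int f1() { std::cout << "f1 evaluated\n"; return 1; }
int f2() { std::cout << "f2 evaluated\n"; return 2; }
int main() {
    // Both arguments are evaluated before the full expression completes,
    // but (pre-C++17) their relative order is unspecified.
    std::cout << f1() << " " << f2() << "\n";
}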
As has already been stated, you've wandered into the haunted forest of unspecified behavior. To get the expected result every time, you can either remove the side effects:
int myFunc(int x) {
    int temp = x * x * x;
    return temp;
}
int main() {
    int x = 2;
    cout << myFunc(x) << endl << myFunc(x+1) << endl << myFunc(x+2) << endl;
    // Note that you can't use the increment operator (++) here. It has
    // side-effects, so it would cause the same problem.
}
or break the function calls up into separate statements:
int myFunc(int &x) {
    int temp = x * x * x;
    x += 1;
    return temp;
}
int main() {
    int x = 2;
    cout << myFunc(x) << endl;
    cout << myFunc(x) << endl;
    cout << myFunc(x) << endl;
}
The second version is probably better for a test, since it forces them to consider the side effects.
And this is why, every time you write a function with a side-effect, God kills a kitten!