Declaring constants vs. using numbers directly in the code - C++

So, I have this C++ test, and the teacher is really strict about declaring constants instead of using numbers directly in the code. In the example below I have even declared ZERO as a constant.
Is this unnecessary, or is it a good thing to do? Does it take up more memory or make the code "slower"?
#include <iostream>
using namespace std;

int main() {
    int kmStart, kmEnd;
    const int ZERO = 0;
    cout << "Starting Kms? ";
    cin >> kmStart;
    cout << "Ending Kms? ";
    cin >> kmEnd;
    while (kmStart < ZERO || kmStart > kmEnd) {
        cout << "Invalid Input!" << endl << endl;
        cout << "Starting Kms? ";
        cin >> kmStart;
        cout << "Ending Kms? ";
        cin >> kmEnd;
    }
}

constexpr int ZERO = 0; would almost certainly be compiled out entirely.
Note the keyword constexpr, new from C++11 onwards.
For your current code, ZERO may well be compiled out, but even if it isn't, any degradation in performance will be negligible compared to the input/output functions.
I wonder why your teacher regards ZERO as clearer than 0. Everyone knows what they are dealing with when they see a 0. ZERO, on the other hand, could feasibly mean '0', or even "0", which are entirely different beasts: you'd always have to be checking back through the code when debugging.
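For illustration, a minimal sketch of the same validation using a constexpr constant instead (the name kmMin is my own illustrative choice, not from the original code):

#include <iostream>
using namespace std;

int main() {
    // constexpr guarantees a compile-time constant; the compiler can
    // substitute the value directly, so no runtime storage is needed.
    constexpr int kmMin = 0;
    int kmStart, kmEnd;
    cout << "Starting Kms? ";
    cin >> kmStart;
    cout << "Ending Kms? ";
    cin >> kmEnd;
    while (kmStart < kmMin || kmStart > kmEnd) {
        cout << "Invalid Input!" << endl;
        cout << "Starting Kms? ";
        cin >> kmStart;
        cout << "Ending Kms? ";
        cin >> kmEnd;
    }
}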

Use of properly named constants is essential for the development of long-lived applications. Managing plain numeric literals quickly gets out of hand. Consider the following example:
foo(42);
bar(42);
It has several problems:
it is not possible to tell where the value 42 came from
it is not possible to tell whether the 42 in both function calls is a coincidence or whether we are intentionally passing the same value
as a consequence of the previous point, changing program behavior may be challenging, because we need to manually identify all the places where the particular value we want to adjust is used
If your application consists of hundreds of files, it will be a literal nightmare.
So if constants are used instead the example code piece may become
constexpr int fast_foobing_rate{42};
constexpr int slow_barring_coeff{42};
foo(fast_foobing_rate);
bar(slow_barring_coeff);
or
constexpr int days_in_week_count{7};
constexpr int frobbing_weeks_count{6};
inline constexpr int get_frob_repetitions_count() noexcept
{
    return days_in_week_count * frobbing_weeks_count;
}
foo(get_frob_repetitions_count());
bar(get_frob_repetitions_count());
So now:
we are able to track the origins of values
we can locate the places where these persistent values are used
we can easily adjust these values by modifying their definitions, and our changes will automatically be applied across the entire codebase
And with all these benefits, we won't suffer any performance penalty. Depending on the constant's type, there may even be some performance benefits.

In general, using constants instead of numbers directly in the code can make your code more readable and easier to maintain.
Consider the following example: you have some kind of simulation with a timestep of 0.1 seconds, and you need this timestep value in different locations in your source code. Then it is easier to use
const double timestep = 0.1;
instead of writing 0.1 everywhere.
The advantages are:
only one line must be changed if you want to change the value of timestep
the code becomes more readable, if you know the constant's meaning
But in your case, I think it's more readable to use 0 instead of ZERO, or to rename it to something more expressive such as minimum_start or the like...
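A minimal sketch of how such a shared constant might be used across a simulation (the function names here are illustrative, not from the original post):

#include <iostream>

const double timestep = 0.1; // seconds; change it once, applied everywhere

double advance_position(double position, double velocity) {
    // uses the shared timestep rather than a repeated literal 0.1
    return position + velocity * timestep;
}

int main() {
    double position = 0.0;
    const double velocity = 2.0; // m/s
    for (int step = 0; step < 10; ++step)
        position = advance_position(position, velocity);
    std::cout << "position after 1 second: " << position << '\n'; // 2.0
}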

Personally, for integer constants I use enumerations (source: Scott Meyers, Effective C++):
#include <cstddef>
#include <iostream>

int main(int argc, char* argv[]) {
    enum Constant {
        NTRY = 32,
        NEQ = 8,
        SMAX = 200000000,
        ALERT = 65536
    };
    size_t ntry(Constant::NTRY);
    std::cout << "ntry == " << ntry << std::endl;
    return 0;
}
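As a side note, since C++11 one might also reach for a scoped enumeration; a minimal sketch of that variant (my adaptation, not part of the quoted source):

#include <iostream>

int main() {
    // enum class values don't implicitly convert to int, so the cast is explicit
    enum class Constant { NTRY = 32, NEQ = 8 };
    std::cout << "ntry == " << static_cast<int>(Constant::NTRY) << std::endl;
    return 0;
}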

Related

Is there a reason to use zero-initialization instead of simply not defining a variable when it is going to be updated before the value is used anyway?

I came across a code example on learncpp.com where they zero-initialized a variable, then defined it with std::cin:
#include <iostream> // for std::cout and std::cin

int main()
{
    std::cout << "Enter a number: "; // ask user for a number
    int x{ }; // define variable x to hold user input (and zero-initialize it)
    std::cin >> x; // get number from keyboard and store it in variable x
    std::cout << "You entered " << x << '\n';
    return 0;
}
Is there any reason that on line 7 you wouldn't just not initialize x? It seems zero-initializing the variable is a waste of time because it's assigned a new value on the next line.
Is there any reason that on line 7 you wouldn't just not initialize x?
In general, it is advised that local/block-scope built-in types should always be initialized. This is to prevent potential use of uninitialized built-in types, which have indeterminate values and lead to undefined behavior.
In your particular example, though, since the immediately following line has std::cin >> x;, it is safe to omit the zero-initialization on the previous line.
Note also that in your example, if reading input fails for some reason, then prior to C++11 x still has an indeterminate value, but from C++11 onwards x will no longer have an indeterminate value, according to the statement quoted below.
From basic_istream's documentation:
If extraction fails (e.g. if a letter was entered where a digit is expected), zero is written to value and failbit is set. For signed integers, if extraction results in the value too large or too small to fit in value, std::numeric_limits<T>::max() or std::numeric_limits<T>::min() (respectively) is written and failbit flag is set. For unsigned integers, if extraction results in the value too large or too small to fit in value, std::numeric_limits<T>::max() is written and failbit flag is set.
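A minimal sketch of that quoted behavior (C++11 and later; the input here is deliberately non-numeric):

#include <iostream>
#include <sstream>

int main() {
    std::istringstream input("abc"); // a letter where a digit is expected
    int x;                           // deliberately uninitialized
    input >> x;                      // extraction fails...
    std::cout << std::boolalpha
              << "failed: " << input.fail()  // true: failbit is set
              << ", x = " << x << '\n';      // 0 is written to x (since C++11)
    return 0;
}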
It seems zero-initializing the variable is a waste of time because it's redefined on the next line.
As I said, in your example, and assuming you're using C++11 (or higher), you can safely leave off the zero-initialization.
then defined it with std::cin
No, the use of std::cin does not "define" it (x). The definition happened only once, when you wrote:
int x{ }; // this is a definition
Even if you leave off the zero-initialization, it is still a definition:
int x; // this is also a definition, but x is uninitialized here
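Relatedly, since extraction can fail, a common pattern is to test the stream state directly; a minimal sketch (my own illustration, not from the answer above):

#include <iostream>

int main() {
    int x; // definition; deliberately left uninitialized
    std::cout << "Enter a number: ";
    if (std::cin >> x) {                          // operator>> returns the stream,
        std::cout << "You entered " << x << '\n'; // which tests true on success
    } else {
        std::cout << "That was not a number.\n";  // x is 0 here in C++11 and later
    }
    return 0;
}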
The problem you're encountering is that in order to use the >> operator of std::cin, you have to have your variable declared beforehand, which is, well, undesirable. It also leads to your question of whether or not to initialize the variable.
Instead, you could do something like this:
#include <iostream>
#include <iterator>

int main()
{
    std::cout << "Enter a number: ";
    auto const i = *std::istream_iterator<int>(std::cin);
    std::cout << "You entered " << i << '\n';
}
... although that's also problematic: if the stream is at end-of-file, the behavior is undefined.
Now, you might be asking "why is this guy talking to me about alternatives to the common way of doing things, then telling me that the alternative isn't good enough either?"
The point of my doing this is to introduce you to an uncomfortable fact of "C++ life": we are often stuck with design decisions in the standard library which are not optimal. Sometimes there's really no better alternative; and sometimes there is, but it wasn't adopted - and since C++ does not easily change the design of standard library classes, we may be stuck with some "warts" in the language. See also the discussion in the related question I've opened:
Why do C++ istreams only allow formatted-reading into an existing variable?

Why can a local variable within an inline function expanded multiple times in "discrete statements" have a single set of variables?

I'm reading Inside the C++ Object Model and am confused about inline function expansion.
In general, each local variable within the inline function must be introduced into the enclosing block of the call
as a uniquely named variable. If the inline function is expanded multiple times within one expression, each
expansion is likely to require its own set of the local variables. If the inline function is expanded multiple
times in discrete statements, however, a single set of the local variables can probably be reused across the
multiple expansions.
Here, what does it mean to expand an inline function multiple times in discrete statements, and how could that happen? Can anyone provide a concrete example?
I had some trouble with the term discrete statement (especially because it is emphasized multiple times). I tried to find something like a clear definition (via Google) but couldn't. Thus, I decided to read it literally as one statement (with discrete in the sense of separate).
Declaring a function inline is just a hint to the compiler that the programmer would like the function body inserted directly at every call site (instead of an actual function call). Ultimately, the compiler decides whether the function is really inlined. (It might even be inlined at one call site but remain a function call at another.) If a macro were used instead of the inline function, the inlining would be guaranteed (as macro expansion is nothing more than text replacement). Of course, macros have a lot of limitations that inline functions do not. One of them is that inline functions may have local variables.
I made a synthetic example. It's not code "ready for production" but it hopefully helps to illustrate the topic:
#include <iostream>

using namespace std;

inline int absValue(int a)
{
    int mB = -a;
    return a < 0 ? mB : a;
}

int main()
{
    int value;
    // use input to prevent compile-time computation
    cout << "input: " << flush;
    cin >> value;
    // multiple usages of absValue()
    cout << "value: " << value << endl
         << "absValue(value): "
         << absValue(value)
         << endl
         << "absValue(-value): "
         << absValue(-value)
         << endl;
    // done
    return 0;
}
The second output statement calls the function absValue() multiple times, where the calls should be inlined. I imagine it like this:
// multiple usages of absValue()
cout << "value: " << value << endl
     << "absValue(value): "
     << {
            int mB = -(value);
            return (value) < 0 ? mB : (value);
        }
     << endl
     << "absValue(-value): "
     << {
            int mB = -(-value);
            return (-value) < 0 ? mB : (-value);
        }
     << endl;
There are two occurrences of mB in this statement. On the one hand, these are two separate local variables. On the other hand, they may share the same storage on the stack, as they are used consecutively. (They might not share the same storage if compiler optimization introduces some kind of code re-ordering that results in interleaving of the first and second expansions of absValue().)
This whole explanation is rather theoretical. In practice, the compiler will hopefully put mB into a register, or even optimize most of the code away.
I fiddled a little with godbolt to illustrate this further. Finally, I must admit that it essentially proves my last paragraph above.

How many levels of indirection can I have in C++? [duplicate]

How many pointers (*) are allowed in a single variable?
Let's consider the following example.
int a = 10;
int *p = &a;
Similarly we can have
int **q = &p;
int ***r = &q;
and so on.
For example,
int ****************zz;
The C standard specifies the lower limit:
5.2.4.1 Translation limits
The implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the following limits: [...]
— 12 pointer, array, and function declarators (in any combinations) modifying an arithmetic, structure, union, or void type in a declaration
The upper limit is implementation specific.
Actually, C programs commonly make use of infinite pointer indirection. One or two static levels are common. Triple indirection is rare. But infinite is very common.
Infinite pointer indirection is achieved with the help of a struct, of course, not with a direct declarator, which would be impossible. And a struct is needed so that you can include other data in this structure at the different levels where this can terminate.
struct list { struct list *next; ... };
now you can have list->next->next->next->...->next. This is really just multiple pointer indirections: *(*(..(*(*(*list).next).next).next...).next).next. And the .next is basically a noop when it's the first member of the structure, so we can imagine this as ***..***ptr.
There is really no limit on this because the links can be traversed with a loop rather than a giant expression like this, and moreover, the structure can easily be made circular.
Thus, in other words, linked lists may be the ultimate example of adding another level of indirection to solve a problem, since you're doing it dynamically with every push operation. :)
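A minimal sketch of that unbounded runtime indirection, traversing a chain with a loop (using the struct above, filled out with an int payload for illustration):

#include <iostream>

struct list {
    list* next;
    int value;
};

int main() {
    // three nodes chained together: a -> b -> c
    list c{nullptr, 3}, b{&c, 2}, a{&b, 1};
    // each loop iteration follows one more level of indirection;
    // the depth is limited only by the length of the chain
    for (list* p = &a; p != nullptr; p = p->next)
        std::cout << p->value << ' ';
    std::cout << '\n'; // prints: 1 2 3
    return 0;
}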
Theoretically:
You can have as many levels of indirection as you want.
Practically:
Of course, nothing that consumes memory can be unbounded; there will be limitations due to the resources available in the host environment. So practically there is a maximum limit to what an implementation can support, and the implementation shall document it appropriately. In all such artifacts, the standard does not specify the maximum limit, but it does specify the lower limits.
Here's the reference:
C99 Standard 5.2.4.1 Translation limits:
— 12 pointer, array, and function declarators (in any combinations) modifying an
arithmetic, structure, union, or void type in a declaration.
This specifies the lower limit that every implementation must support. Note that in a footnote the standard further says:
18) Implementations should avoid imposing fixed translation limits whenever possible.
As people have said, there is no limit "in theory". However, out of interest I ran this with g++ 4.1.2, and it worked with size up to 20,000. Compilation was pretty slow, though, so I didn't try higher. So I'd guess g++ doesn't impose a limit either. (Try setting size = 10 and looking at ptr.cpp if it's not immediately obvious.)
g++ create.cpp -o create ; ./create > ptr.cpp ; g++ ptr.cpp -o ptr ; ./ptr
create.cpp
#include <iostream>

int main()
{
    const int size = 200;
    std::cout << "#include <iostream>\n\n";
    std::cout << "int main()\n{\n";
    std::cout << "    int i0 = " << size << ";\n";
    for (int i = 1; i < size; ++i)
    {
        std::cout << "    int ";
        for (int j = 0; j < i; ++j) std::cout << "*";
        std::cout << " i" << i << " = &i" << i-1 << ";\n";
    }
    std::cout << "    std::cout << ";
    for (int i = 1; i < size; ++i) std::cout << "*";
    std::cout << "i" << size-1 << " << \"\\n\";\n";
    std::cout << "    return 0;\n}\n";
    return 0;
}
Sounds fun to check.
With Visual Studio 2010 (on Windows 7), you can have 1011 levels before getting this error:
fatal error C1026: parser stack overflow, program too complex
With gcc (Ubuntu), 100k+ *s without a crash! I guess the hardware is the limit here.
(Tested with just a variable declaration.)
There is no limit; check the example at Pointers :: C Interview Questions and Answers.
The answer depends on what you mean by "levels of pointers." If you mean "How many levels of indirection can you have in a single declaration?" the answer is "At least 12."
int i = 0;
int *ip01 = & i;
int **ip02 = & ip01;
int ***ip03 = & ip02;
int ****ip04 = & ip03;
int *****ip05 = & ip04;
int ******ip06 = & ip05;
int *******ip07 = & ip06;
int ********ip08 = & ip07;
int *********ip09 = & ip08;
int **********ip10 = & ip09;
int ***********ip11 = & ip10;
int ************ip12 = & ip11;
************ip12 = 1; /* i = 1 */
If you mean "How many levels of pointer can you use before the program gets hard to read," that's a matter of taste, but there is a limit. Having two levels of indirection (a pointer to a pointer to something) is common. Any more than that gets a bit harder to think about easily; don't do it unless the alternative would be worse.
If you mean "How many levels of pointer indirection can you have at runtime," there's no limit. This point is particularly important for circular lists, in which each node points to the next. Your program can follow the pointers forever.
It's actually even funnier with pointers to functions.
#include <cstdio>

typedef void (*FuncType)();

static void Print() { std::printf("%s", "Hello, World!\n"); }

int main() {
    FuncType const ft = &Print;
    ft();
    (*ft)();
    (**ft)();
    /* ... */
}
As illustrated here this gives:
Hello, World!
Hello, World!
Hello, World!
And it does not involve any runtime overhead, so you can probably stack them as much as you want... until your compiler chokes on the file.
There is no limit. A pointer is a chunk of memory whose contents are an address.
As you said
int a = 10;
int *p = &a;
A pointer to a pointer is also a variable which contains an address of another pointer.
int **q = &p;
Here q is a pointer to a pointer, holding the address of p, which in turn holds the address of a.
There is nothing particularly special about a pointer to a pointer, so there is no limit on a chain of pointers each holding the address of another pointer.
That is,
int **************************************************************************z;
is allowed.
Every C++ developer should have heard of the (in)famous Three star programmer.
And there really seems to be some magic "pointer barrier" that has to be camouflaged.
Quote from C2:
Three Star Programmer
A rating system for C-programmers. The more indirect your pointers are (i.e. the more "*" before your variables), the higher your reputation will be. No-star C-programmers are virtually non-existent, as virtually all non-trivial programs require use of pointers. Most are one-star programmers. In the old times (well, I'm young, so these look like old times to me at least), one would occasionally find a piece of code done by a three-star programmer and shiver with awe.
Some people even claimed they'd seen three-star code with function pointers involved, on more than one level of indirection. Sounded as real as UFOs to me.
Note that there are two possible questions here: how many levels of pointer indirection we can achieve in a C type, and how many levels of pointer indirection we can stuff into a single declarator.
The C standard allows a maximum to be imposed on the former (and gives a minimum value for that). But that can be circumvented via multiple typedef declarations:
typedef int *type0;
typedef type0 *type1;
typedef type1 *type2; /* etc */
So ultimately, this is an implementation issue connected to the idea of how big/complex can a C program be made before it is rejected, which is very compiler specific.
I'd like to point out that producing a type with an arbitrary number of *s is something that can happen with template metaprogramming. I forget what I was doing exactly, but it was suggested that I could produce new distinct types with some kind of meta-maneuvering between them by using recursive T* types.
Template metaprogramming is a slow descent into madness, so there is no need to make excuses when generating a type with several thousand levels of indirection. It's just a handy way to map Peano integers, for example, onto template expansion as a functional language.
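A minimal sketch of that idea: a recursive template that adds N levels of pointer to a type (my own illustration of the technique described, not the original poster's code):

#include <type_traits>

// AddPointers<T, N>::type is T with N additional levels of '*'
template <typename T, unsigned N>
struct AddPointers {
    using type = typename AddPointers<T*, N - 1>::type;
};

template <typename T>
struct AddPointers<T, 0> {
    using type = T;
};

// the recursion unrolls at compile time: int -> int* -> int** -> int***
static_assert(std::is_same<AddPointers<int, 3>::type, int***>::value,
              "three levels of indirection");

int main() { return 0; }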
Rule 17.5 of the 2004 MISRA C standard prohibits more than 2 levels of pointer indirection.
There is no such thing as a real limit, but limits do exist. All pointers are variables, usually stored on the stack, not the heap. The stack is usually small (its size can be changed at link time). Let's say you have a 4 MB stack, which is quite a normal size, and a pointer of 4 bytes in size (pointer sizes differ depending on architecture, target, and compiler settings).
In that case 4 MB / 4 B = 1,048,576, so the possible maximum would be about 1,048,576, though we shouldn't ignore the fact that other things are on the stack as well.
However, some compilers may have a maximum pointer-chain length, but otherwise the limit is the stack size. So if you had an unlimited stack, a machine with unlimited memory, and an OS that could handle that memory, you would have an unlimited pointer chain.
If you use int *ptr = new int; and put your pointers on the heap, which is the less usual way, the limit would be the heap size, not the stack.
EDIT: Just realize that infinity / 2 = infinity. If a machine has more memory, the pointer size increases. So if memory is infinite and the size of a pointer is infinite, it's bad news... :)
It depends on where you store the pointers. If they are on the stack, you have quite a low limit. If you store them on the heap, the limit is much, much higher.
Look at this program:
#include <iostream>

const int CBlockSize = 1048576;

int main()
{
    int number = 0;
    int** ptr = new int*[CBlockSize];
    ptr[0] = &number;
    for (int i = 1; i < CBlockSize; ++i)
        ptr[i] = reinterpret_cast<int*>(&ptr[i - 1]);
    for (int i = CBlockSize - 1; i >= 0; --i)
        std::cout << i << " " << static_cast<void*>(ptr[i]) << "->" << *ptr[i] << std::endl;
    return 0;
}
It creates 1M pointers and at the end shows what points to what; it is easy to see that the chain leads back to the first variable, number.
BTW, it uses 92K of RAM, so just imagine how deep you can go.

cout or printf: which of the two has a faster execution speed in C++?

I have been coding in C++ for a long time, and I have always wondered which has faster execution speed, printf or cout?
Situation: I am designing an application in C++, and I have certain constraints, such as a time limit for execution. My application has loads of printing commands writing to the console. So which one would be preferable, printf or cout?
Each has its own overheads. Depending on what you print, either may be faster.
Here are two points that come to mind:
printf() has to parse the "format" string and act upon it, which adds a cost.
cout has a more complex inheritance hierarchy and passes around objects.
In practice, the difference shouldn't matter for all but the weirdest cases. If you think it really matters: measure!
EDIT: Oh, heck, I don't believe I'm doing this, but for the record, on my very specific test case, with my very specific machine and its very specific load, compiling in Release using MSVC:
Printing 150,000 "Hello, World!"s (without using endl) takes about 90 ms for printf() and 79 ms for cout.
Printing 150,000 random doubles takes about 3450 ms for printf() and 3420 ms for cout.
(Averaged over 10 runs.)
The differences are so slim this probably means nothing...
Do you really need to care which has a faster execution speed? They are both used simply for printing text to the console/stdout, which typically isn't a task that demands ultra-high efficiency. For that matter, I wouldn't expect a large difference in speed anyway (though one might expect printf to be marginally quicker because it lacks the minor complications of object-orientation). Given that we're dealing with I/O operations here, even a minor difference would probably be swamped by the I/O overhead. Certainly, if you compared the equivalent methods for writing to files, that would be the case.
printf is simply the standard way to output text to stdout in C.
'cout' piping is simply the standard way to output text to stdout in C++.
Having said all this, there is a thread on the comp.lang.c++ group discussing the same issue. The consensus, however, seems to be that you should choose one over the other for reasons other than performance.
The reason C++'s cout is slow is the default synchronization with stdio.
Try executing the following to deactivate it:
std::ios_base::sync_with_stdio(false);
http://www.cplusplus.com/reference/iostream/ios_base/sync_with_stdio/
http://msdn.microsoft.com/es-es/library/7yxhba01.aspx
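A minimal sketch of where that call goes (it must come before any I/O; mixing printf and cout on the same stream is no longer safe afterwards):

#include <iostream>

int main() {
    // detach C++ streams from C stdio before any input/output takes place;
    // after this, don't mix C and C++ I/O on the same stream
    std::ios_base::sync_with_stdio(false);
    for (int i = 0; i < 100000; ++i)
        std::cout << "Hello, World!\n";
    return 0;
}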
On Windows at least, writing to the console is a huge bottleneck, so a "noisy" console mode program will be far slower than a silent one. So on that platform, slight differences in the library functions used to address the console will probably make no significant difference in practice.
On other platforms it may be different. Also it depends just how much console output you are doing, relative to other useful work.
Finally, it depends on your platform's implementation of the C and C++ I/O libraries.
So there is no general answer to this question.
Performance is a non-issue for this comparison; I can't think of a case where it actually matters (for a console program). However, there are a few points you should take into account:
Iostreams use operator chaining instead of varargs. This means your program can't crash because you passed the wrong number of arguments. This can happen with printf.
Iostreams use operator overloading instead of varargs -- this means your program can't crash because you passed an int when it was expecting a string. This can happen with printf.
Iostreams don't have native support for format strings (which is the major root cause of #1 and #2). This is generally a good thing, but sometimes format strings are useful. The Boost Format library brings this functionality to iostreams for those who need it, with defined behavior (it throws an exception) rather than undefined behavior (as is the case with printf). This currently falls outside the standard.
Iostreams, unlike their printf equivalents, can handle variable-length buffers directly themselves, instead of forcing you to deal with hardcoded cruft.
Go for cout.
I was recently working on a C++ console application on Windows that copied files using CopyFileEx, echoed the 'to' and 'from' paths to the console for each copy, and then displayed the average throughput at the end of the operation.
When I ran the console application using printf to echo the strings, I was getting 4 MB/sec; when I replaced printf with std::cout, the throughput dropped to 800 KB/sec.
I wondered why the std::cout call was so much more expensive, and even went so far as to echo the same string on each copy to get a better comparison of the calls. I did multiple runs to even out the comparison, but the 4x difference persisted.
Then I found this answer on Stack Overflow...
Switching on buffering for stdout did the trick; now my throughput numbers for printf and std::cout are pretty much the same.
I have not dug any deeper into how printf and cout differ in console output buffering, but setting the output buffer before I began writing to the console solved my problem.
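A minimal sketch of enabling a full buffer on stdout with setvbuf (the buffer size here is an arbitrary choice of mine; the linked answer's exact parameters are not shown in this post):

#include <cstdio>

int main() {
    // give stdout a 64 KB full buffer instead of flushing on every console write
    static char buffer[65536];
    std::setvbuf(stdout, buffer, _IOFBF, sizeof(buffer));
    for (int i = 0; i < 100000; ++i)
        std::printf("copying file %d\n", i);
    return 0;
}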
Another Stack Overflow question addressed the relative speed of C-style formatted I/O vs. C++ iostreams:
Why is snprintf faster than ostringstream or is it?
http://www.fastformat.org/performance.html
Note, however, that the benchmarks discussed were for formatting to memory buffers. I'd guess that if you're actually performing the I/O to a console or file that the relative speed differences would be much smaller due to the I/O taking more of the overall time.
If you're using C++, you should use cout instead, as printf belongs to the C family of functions. There are many improvements made for cout that you may benefit from. As for speed, it isn't an issue, since console I/O is going to be slow anyway.
In practical terms I have always found printf to be faster than cout, but then again, cout does a lot more for you in terms of type safety. Also remember printf is a simple function, whereas cout is an object based on a complex streams hierarchy, so it's not really fair to compare execution times.
To settle this:
#include <iostream>
#include <cstdio>
#include <ctime>
using namespace std;

int main( int argc, char * argv[] ) {
    const char * const s1 = "some text";
    const char * const s2 = "some more text";
    int x = 1, y = 2, z = 3;
    const int BIG = 2000;
    time_t now = time(0);
    for ( int i = 0; i < BIG; i++ ) {
        if ( argc == 1 ) {
            cout << i << s1 << s2 << x << y << z << "\n";
        }
        else {
            printf( "%d%s%s%d%d%d\n", i, s1, s2, x, y, z );
        }
    }
    cout << (argc == 1 ? "cout " : "printf ") << time(0) - now << endl;
}
produces identical timings for cout and printf.
Why don't you run an experiment? On average for me, printing the string helloperson;\n using printf takes 2 clock ticks, while cout using endl takes a huge amount of time - 1248996720685 clock ticks. Using cout with "\n" as the newline takes only 41981 clock ticks. The short URL for my code is below:
cpp.sh/94qoj
(The link may have expired.)
To answer your question: printf is faster.
#include <iostream>
#include <string>
#include <ctime>
#include <stdio.h>
using namespace std;

int main()
{
    clock_t one;
    clock_t two;
    clock_t averagePrintf = 0;     // accumulators must start at zero
    clock_t averageCout = 0;
    clock_t averagedumbHybrid = 0;
    for (int j = 0; j < 100; j++) {
        one = clock();
        for (int d = 0; d < 20; d++) {
            printf("helloperson;");
            printf("\n");
        }
        two = clock();
        averagePrintf += two - one;

        one = clock();
        for (int d = 0; d < 20; d++) {
            cout << "helloperson;";
            cout << endl;
        }
        two = clock();
        averageCout += two - one;

        one = clock();
        for (int d = 0; d < 20; d++) {
            cout << "helloperson;";
            cout << "\n";
        }
        two = clock();
        averagedumbHybrid += two - one;
    }
    averagePrintf /= 100;
    averageCout /= 100;
    averagedumbHybrid /= 100;
    cout << "printf took " << averagePrintf << endl;
    cout << "cout took " << averageCout << endl;
    cout << "hybrid took " << averagedumbHybrid << endl;
}
Yes, I did use the word dumb. I first made it for myself, thinking that the results were crazy, so I searched it up, which ended up with me posting my code.
Hope it helps,
Ndrewffght
If you ever need to find out for performance reasons, something else is fundamentally wrong with your application - consider using some other logging facility or UI ;)
Under the hood, they will both use the same code, so speed differences will not matter.
If you are running on Windows only, the non-standard cprintf() might be faster as it bypasses a lot of the streams stuff.
However, it is an odd requirement. Nobody can read that fast. Why not write the output to a file? Then the user can browse it at their leisure.
Anecdotal evidence:
I once designed a logging class to use ostream operators - the implementation was insanely slow (for huge amounts of data).
I didn't analyze it too much, so it might as well have been caused by not using ostreams correctly, or simply by the amount of data logged to disk. (The class was scrapped because of the performance problems, and in practice the printf / fmtmsg style was preferred.)
I agree with the other replies that in most cases it doesn't matter. If output really is a problem, you should consider ways to avoid or delay it, as the actual display updates typically cost more than a correctly implemented string build. Thousands of lines scrolling by within milliseconds aren't very informative anyway.
You should never need to ask this question, as the user will only be able to read slower than both of them.
If you need fast execution, don't use either.
As others have mentioned, use some kind of logging if you need a record of the operations.