Why the value of the variable is output 0? [duplicate] - c++

This question already has answers here:
Possible Duplicate: Using printf function
Closed 10 years ago.
#include <iostream>
#include <cstdio>  // printf lives here
using namespace std;

int main()
{
    long long a = 20;
    long long b = 21;
    printf("%d %d", a, b);
}
Output: 20 0
Can anyone please explain this behavior?
[EDIT]
I know %d is not the right way to print a long long, but my main objective in posting this question is to understand the behaviour: why does it print 0 for b but the correct value for a?

Pedantically speaking, your code invokes undefined behavior because you've provided an incorrect format specifier: you should use %lld instead of %d.
Once behavior is undefined, you cannot really reason about why it behaves the way it does. You may find an explanation for this input, but it may fail for another set of inputs, precisely because it is undefined. Your implementation's documentation might say something about it, but it is not required to explain why it prints 0.

It looks like long long is a 64-bit type on your machine, while int is a 32-bit type. You must also be on a little-endian machine.
Because printf is a variadic function, the only way it can know what types you passed is from how you label the arguments in the format string. You are passing two 64-bit arguments, but your format string claims two 32-bit ones. That means the first %d prints the lower 32-bit "bottom half" of your 64-bit 20, and the second %d prints the 32-bit "top half" (which is, of course, 0). The 21 you passed is never read at all.

Related

Mangling of output on initialising an int with a larger-than-the-maximum value on an MSDOS compiler

In a book on C++ Programming, a question was:
"An int variable x is initialised with the value 92,126. What will be the result when this is run on an MS-DOS compiler? "
The answer said since x=92126 is larger than the maximum value an int variable can store (namely, 32,767) on an MS-DOS compiler, so x will be mangled and the output would be 26,590.
I don't understand what "mangling" is. I couldn't find anything about it on the net. So, I don't know why the result is 26,590. I think, if anything, since the maximum value possible is 32,767, that should be the result. But I am not sure. I need some help on this.
Here's the link for the chapter of the book containing the problem (question 4.1) and its solution: https://www.safaribooksonline.com/library/view/practical-c-programming/0596004192/ch04.html
Ditch the book.
There's no good answer. From a C++ perspective, "purple" is a valid outcome. The relevant term is not "mangled", it's Undefined Behavior. Literally anything can happen. There's certainly no restriction that x holds a numerical value, or even that x exists afterwards.
The reason this is Undefined Behavior is that it's signed integer overflow. unsigned int x = 92126UL would be a different matter: that's unambiguously 26590 when the target system has 16-bit ints (as MS-DOS compilers did).

pre increment and post increment result discrepancy in MSdos and DevC++ compiler [duplicate]

This question already has answers here:
Is the output of printf ("%d %d", c++, c); also undefined?
(6 answers)
Closed 6 years ago.
I am unable to understand the below issues while pre-incrementing and post-incrementing a variable inside printf:-
code used in turbocpp compiler:-
#include <stdio.h>
#include <conio.h>  /* clrscr() and getch() are Turbo C extensions */

int main()
{
    int i = 0;
    clrscr();
    printf("%d %d %d", i, i++, ++i);
    getch();
    return 0;
}
The output with the MS-DOS compiler is: 2 1 1
but for the same program in Dev-C++ 5.11 the output is: 2 1 2
1) My understanding is that printf takes the leftmost argument first and then moves right (I have verified this using 3 different variables). According to that, shouldn't the output be 0 0 2?
2) I tried Dev-C++ to check the output of the same program, but it gave a different result. Now I am really confused about what the output should be.
3) Also, if I change it to printf("%d %d %d", i, ++i, i++); the output is 2 2 0.
I don't understand what is going on here. Can somebody please help me understand?
Having two unsequenced side effects on the same variable gives you undefined behaviour, and each compiler is additionally free to choose the order in which it evaluates the arguments.
1.9/15: If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a
value computation using the value of the same scalar object, the
behavior is undefined.
So it could for example be:
0,0,1 if evaluated left to right
2,1,1 if evaluated right to left
2,1,2 if the pre-increment is done on i and stored back, then i is loaded as the second argument and post-incremented, then i is taken as the third argument (the compiler assuming the pre-increment was already done), and then i is taken as the first argument.
But other combinations are also plausible. And undefined behaviour really means undefined, so one day this could even crash (if, say, a compiler generated parallel code and two cores accessed the same variable at the same time).
C++ doesn't standardize the order in which function arguments are evaluated, which is why results differ from compiler to compiler. See the C++ Standard, section 5.2.2/8:
The order of evaluation of arguments is unspecified.

assign `this` to `MyClass *instance`, `instance->member` is not referencing same memory as `this->member` [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 8 years ago.
In my cpp implementation I have:
static MyClass *instance;
floating freely outside all containing scopes (curly {}). I've tried initializing it to nullptr too.
void MyClass::myMethod() {
    instance = this;
    LOG("Hello, %d, %d", wList, instance->wList);
}
The log displays an arbitrary-looking location for the member pointer wList, but instance should be pointing at the same object as this, and hence at the same wList; yet instance->wList is still 0. What's happening here?
As Mark Ransom noted in the comments, you can't use the %d format specifier to print a pointer, you must use %p in order for your code to be portable. It's technically Undefined Behavior to use anything besides %p for pointers, but in all likelihood you'll see that it will seem to work ok on 32-bit systems.
However, on 64-bit systems, which I'd wager you're using, it blows up. Pointers are 8 bytes, but printf only tries to read 4 bytes off of the stack when it sees %d. For the second %d specifier, it reads the next 4 bytes, which are the other 4 bytes of the pointer—if you're seeing 0 for that, it probably means that your pointer was allocated within the first 4 GB of memory on a little-endian system (i.e. its value was something like 0x00000000'xxxxxxxx). The 8 bytes on the stack from the second pointer passed to printf never get read.
The %p specifier portably prints a pointer on all platforms, regardless of whether they're 32 bits, 64 bits, or some other size. One unfortunate consequence of this is that the exact output format is implementation-defined, meaning it can and does change between systems. Some systems might use a leading 0x or 0X, while others might have no prefix at all and just print the raw hex value. Some might pad to the left with 0's, and others might not.
If you want to control the exact output, then you can use a format of your choice (e.g. %08x or %016llx) as long as you cast the pointer appropriately for that specifier. For example, here's how you would do that on 32- and 64-bit systems:
printf("0x%08x\n", (unsigned int)(uintptr_t)myPointer); // 32-bit
printf("0x%016llx\n", (unsigned long long)(uintptr_t)myPointer); // 64-bit
The reason for casting twice is to avoid spurious compiler warnings, since some compilers will complain if you cast from a pointer type to an integer type which isn't guaranteed to be large enough to hold a pointer.

Difference between atoi and atol in windows

One of the clients to whom we provided source code said that after changing int to long and atoi to atol, they get a different result from our program. But as far as I understand, int and long on Windows have the same 4-byte size and the same min/max values. By the same analogy, I expected atoi and atol to produce the same output, and in our testing they do.
Is there any difference between those commands I didn't know?
In non-error cases, both functions are defined as equivalent to
strtol(nptr, (char **)NULL, 10)
the only difference is that atoi casts the return value to int.
There could be different behavior in error cases (when the string represents a value that is out of range of the type), since behavior is undefined for both. But I'd be surprised. Even if atoi and atol aren't implemented by calling strtol, they're probably implemented by the same code or very similar.
Personally I'd ask that the client show me the exact code. Maybe they didn't just replace int -> long and atoi -> atol as they claim. If that is all they changed (but they did so slightly differently from how you assumed when you did your tests), probably they've found the symptom of a bug in your code.

How does casting the return value of main() work?

I am using Visual studio 2008.
For below code
double main()
{
}
I get error:
error C3874: return type of 'main'
should be 'int' instead of 'double'
But if I use the code below:
char main()
{
}
No errors.
After running and exiting the output window displays
The program '[5856] test2.exe: Native'
has exited with code -858993664
(0xcccccc00).
Question: Is the compiler doing an implicit cast from the default return value of zero (an int) to char?
How was the code 0xcccccc00 generated? The last byte of that code seems to be the actual returned value, but where does the 0xcccccc come from?
The correct way to do it, per the C++ standard is:
int main()
{
...
}
Don't change the return type to anything else, or your code will not be C++ and you're just playing with compiler-specific functionality. Those cc values in your example are just uninitialized bytes (which MSVC's debug runtime fills with 0xCC) being returned.
The value returned from the main function becomes the exit status of the process, though the C standard ascribes specific meaning to only two values: EXIT_SUCCESS (traditionally zero) and EXIT_FAILURE. The meaning of other return values is implementation-defined.
You may refer to a useful post:
What should main() return in C and C++?
Yet another MSVC extension/bug!
The answer to your first question is sort of yes. A char is essentially a very small integral type, so the compiler is being (extremely) lenient. Double isn't acceptable because it's not an integral type. The 0xCCCCCC is memory that never got initialized (except for the purposes of debugging). Since ASCII characters can only have two hex digits, the conversion failed to set the first 24 bits at all (and just set the last 8 bits to 0). What an odd and undesirable compiler trick.
About the main function, $3.6.1/2: "It shall have a return type of type int, but otherwise its type is implementation-defined."
As I understand it, anything the standard says "shall" about, and which the code does not adhere to, is a violation that the compiler is required to diagnose, unless the Standard specifically says that no diagnostic is required.
So I guess VS has a bug if it allows such a code.
The main function is supposed to return an int; not doing that puts you in undefined territory. Don't forget to wave at the standard on your way past. Your char return probably works because a char converts easily to an int. A double certainly does not: not only is it wider (twice the size), it's floating point, so its bit pattern looks nothing like an integer's.
Short answer: don't do that.
It is probably because char will implicitly convert to an int, whereas double won't, as there would be data loss.
(See here: http://msdn.microsoft.com/en-us/library/y5b434w4%28v=VS.71%29.aspx for more info)
However, you don't see the conversion problem because the compiler catches the worse sin (as stated in the other answers) of using a non-standard return type.