Example of p/x:
(gdb) p/x (long long)-2147483647 # still works
$1 = 0xffffffff80000001
(gdb) p/x (long long)-2147483648 # truncated once the input exceeds INT_MAX
$2 = 0x80000000
(gdb) p/x (long long)-2147483649
$3 = 0x7fffffff
(gdb) whatis $1
type = long long
(gdb) whatis $2
type = long long
(gdb) whatis $3
type = long long
And p/u:
(gdb) p/u (long long)-2147483647
$1 = 18446744071562067969
(gdb) p/u (long long)-2147483648
$2 = 2147483648
(gdb) whatis $1
type = long long
(gdb) whatis $2
type = long long
I had always concluded that this was a GDB bug, but now I think I may be misunderstanding something, so I decided to post this question here.
Output (as requested in a comment):
(gdb) whatis -2147483647
type = int
(gdb) whatis -2147483648
type = unsigned int
(gdb) whatis -2147483648LL
type = long long
(gdb) p (long long)-2147483648LL
$1 = -2147483648
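For comparison, a C compiler parses the literal the same way GDB does: -2147483648 is unary minus applied to the literal 2147483648, and the type of that literal depends on what it fits in. A minimal sketch, assuming a 32-bit int (the exact type of the unsuffixed literal depends on the language standard and the width of long):

#include <stdio.h>

int main(void) {
    /* 2147483647 fits in int; 2147483648 does not, so it gets a wider
       (or, under C90 rules, possibly unsigned) type before the minus
       is applied. */
    printf("%zu\n", sizeof(2147483647));    /* typically 4 */
    printf("%zu\n", sizeof(2147483648));    /* typically 8 */
    printf("%zu\n", sizeof(-2147483648LL)); /* 8: long long */
    return 0;
}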
Related
So I am new to using GDB and I am using the following program as an exercise
#include <stdio.h>
#include <stdlib.h>
int main() {
    unsigned int value1, abs_value1, abs_value2, abs_value3;
    int value2, value3;
    char myarray[10];

    printf("Enter 10 characters:");
    fgets(myarray, 11, stdin);

    printf("Enter an integer between 0 and 10,000:");
    scanf("%d", &value1);
    abs_value1 = abs(value1);

    printf("Enter an integer between -10,000 and 0:");
    scanf("%d", &value2);
    abs_value2 = abs(value2);

    printf("Enter an integer between -10,000 and 10,000:");
    scanf("%d", &value3);
    abs_value3 = abs(value3);

    // set breakpoint here
    return 0;
}
The values I've entered are as follows...
myarray[10] = "characters"
value1 = 578
value2 = -1123
value3 = 999
After running some commands I got the following output...
x/1x myarray     : 0x63
x/1c myarray     : 99 'c'
x/1s myarray     : "characters"
x/1d &abs_value1 : 578
x/1x &value1     : 0x00000242
x/1d &abs_value2 : 1123
x/1x &value2     : 0xfffffb9d
x/1d &abs_value3 : 999
x/1x &value3     : 0x000003e7
x/1c &value1     : 66 'B'
x/1c &value2     : -99 '\235'
So my question is: without looking at the code and using only the previous commands, can we tell whether value1, value2, and value3 are signed or unsigned?
To my knowledge there is not enough information to tell. My first instinct was to look for negative values, but since we take the absolute value of each variable, there is no way to be certain whether a value began as a negative number based only on the commands above.
Are there some different ways we might be able to deduce whether the variables are signed or unsigned?
With debug info this is quite easy: you can use the ptype command, but you must build the binary with the -g option for it to work.
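For example, assuming the source file is named program.c:

gcc -g program.c -o program
gdb ./program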
(gdb) ptype value1
type = unsigned int
(gdb) ptype abs_value1
type = unsigned int
(gdb) ptype abs_value2
type = unsigned int
(gdb) ptype abs_value3
type = unsigned int
(gdb) ptype value2
type = int
(gdb) ptype value3
type = int
(gdb) ptype myarray
type = char [10]
(gdb)
I have two unsigned char arrays of the same size and an if statement that checks to see if they're equal:
#define BUFFER_SIZE 10000

unsigned char origChar[BUFFER_SIZE];
unsigned char otherChar[BUFFER_SIZE];

// Yes, I know this is unnecessary
memset(origChar, '\0', BUFFER_SIZE);
memset(otherChar, '\0', BUFFER_SIZE);

. . .

if (memcmp(origChar, otherChar, offset))
{
    . . .
}
When I examine the two arrays in gdb, I get the following:
(gdb) p origChar
$1 = '\000' <repeats 9999 times>
(gdb) p otherChar
$2 = '\000' <repeats 9999 times>...
(gdb) p memcmp(otherChar,origChar,offset)
$3 = 1
However, if I decrement offset by 1, I get the following:
(gdb) p memcmp(otherChar,origChar,offset-1)
$4 = 0
(gdb) p offset
$5 = 10000
It doesn't really make any sense to me. GDB basically says they're completely equal, so why would decrementing offset by one change things?
Well... reading your dump: both arrays begin with 9,999 zero bytes, but with offset you are comparing the first 10,000 bytes. Note the trailing ... that GDB prints for otherChar: it means there is more data after the repeated zeros, so there is most likely a difference in the 10,000th byte.
Using offset-1, you compare only the first 9,999 bytes, hence the equality.
The "bug" thus comes from something you do in your first ". . ." that modifies the 10,000th byte.
I have:
(gdb) display/t raw_data[4]<<8
24: /t raw_data[4]<<8 = 1111100000000
(gdb) display/t raw_data[5]
25: /t raw_data[5] = 11100111
(gdb) display/t (raw_data[4]<<8)|raw_data[5]
26: /t (raw_data[4]<<8)|raw_data[5] = 11111111111111111111111111100111
Why is the result on line 26 not 0001111111100111? Thanks.
edit: More specifically:
(gdb) display/t raw_data[5]
27: /t raw_data[5] = 11100111
(gdb) display/t 0|raw_data[5]
28: /t 0|raw_data[5] = 11111111111111111111111111100111
Why is the result on line 28 not 11100111?
Your data type is char, which on your platform appears to be signed. The entry raw_data[5] holds the negative number -25.
The print format t prints the data as an unsigned integer in binary. When you print raw_data[5], it is converted to the unsigned char 231, which has only 8 bits. When you do integer arithmetic on the data, the chars are promoted to 32-bit ints.
Promoting the negative char value -25 to a signed int will, of course, yield -25, but its representation as an unsigned int is now 2^32 + x, whereas as an unsigned char it was 2^8 + x. That's where all the ones at the beginning of the 32-bit binary number come from.
It may be better to declare the raw data as unsigned in the first place.
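A small sketch of these promotions (the variable here is assumed, standing in for the question's array element):

#include <stdio.h>

int main(void) {
    signed char raw = -25;                               /* bit pattern 11100111 */
    printf("%u\n", (unsigned)(unsigned char)raw);        /* 231: 8-bit value, no sign extension */
    printf("%x\n", (unsigned)(0 | raw));                 /* ffffffe7: promoted to int, sign-extended */
    printf("%x\n", (unsigned)(0 | (unsigned char)raw));  /* e7: cast first, then promote */
    return 0;
}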
Let's just ignore the first block, since the second block is a minimal reproduction.
Also note that 0 | x preserves the value of x, but causes the usual integral promotions.
Then the second block is not so unexpected.
(gdb) display/t raw_data[5]
27: /t raw_data[5] = 11100111
Ok, raw_data[5] is int8_t(-25)
(gdb) display/t 0|raw_data[5]
28: /t 0|raw_data[5] = 11111111111111111111111111100111
and 0|raw_data[5] is int(-25). Indeed, the value was preserved.
Note also that in raw_data[4]<<8, the shifted operand raw_data[4] is itself promoted to signed int before the shift, so sign extension can show up there as well (it doesn't here only because raw_data[4] happens to be positive). The fix is to make each byte unsigned before combining, e.g. (raw_data[4] << 8) | (unsigned char)raw_data[5], or to declare raw_data as an array of unsigned char.
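A sketch of that fix, using hypothetical byte values consistent with the question's output:

#include <stdio.h>

int main(void) {
    signed char raw_data[] = {0, 0, 0, 0, 0x1F, -25};  /* hypothetical contents */
    unsigned combined = ((unsigned char)raw_data[4] << 8)
                      | (unsigned char)raw_data[5];
    printf("0x%04X\n", combined);  /* 0x1FE7, i.e. 0001111111100111 */
    return 0;
}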
As the question title says, assigning 2^31 to signed and unsigned long long variables gives an unexpected result.
Here is the short program (in C++) I wrote to see what's going on:
#include <cstdio>
using namespace std;

int main()
{
    unsigned long long n = 1<<31;
    long long n2 = 1<<31;  // this works as expected
    printf("%llu\n", n);
    printf("%lld\n", n2);
    printf("size of ULL: %zu, size of LL: %zu\n",
           sizeof(unsigned long long), sizeof(long long));
    return 0;
}
Here's the output:
MyPC / # c++ test.cpp -o test
MyPC / # ./test
18446744071562067968 <- Should be 2^31 right?
-2147483648 <- This is correct ( -2^31 because of the sign bit)
size of ULL: 8, size of LL: 8
I then added another function p() to it:

void p()
{
    unsigned long long n = 1<<32; // since n is 8 bytes, this should be legal for any shift from 32 to 63
    printf("%llu\n", n);
}
On compiling and running, this is what confused me even more:
MyPC / # c++ test.cpp -o test
test.cpp: In function ‘void p()’:
test.cpp:6:28: warning: left shift count >= width of type [enabled by default]
MyPC / # ./test
0
MyPC /
Why should the compiler complain about left shift count being too large? sizeof(unsigned long long) returns 8, so doesn't that mean 2^63-1 is the max value for that data type?
It struck me that maybe n*2 and n<<1 don't always behave in the same manner, so I tried this:

void s()
{
    unsigned long long n = 1;
    for (int a = 0; a < 63; a++) n = n*2;
    printf("%llu\n", n);
}
This gives the correct value of 2^63 as the output, which is 9223372036854775808 (I verified it using Python). But what is wrong with doing a left shift?
A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow)
-- Wikipedia
The value is not overflowing, only a minus sign will appear since the value is 2^63 (all bits are set).
I'm still unable to figure out what's going on with left shift, can anyone please explain this?
PS: This program was run on a 32-bit system running Linux Mint (if that helps).
On this line:
unsigned long long n = 1<<32;
The problem is that the literal 1 is of type int, which is probably only 32 bits. Shifting it by 32 (the full width of the type) is therefore undefined, which is exactly what the compiler is warning about.
Just because you're storing into a larger datatype doesn't mean that everything in the expression is done at that larger size.
So to correct it, you need to either cast it up or make it an unsigned long long literal:
unsigned long long n = (unsigned long long)1 << 32;
unsigned long long n = 1ULL << 32;
The reason 1 << 32 fails is that 1 doesn't have the right type (it is int). The compiler doesn't do any converting magic before the assignment itself actually happens, so 1 << 32 gets evaluated using int arithmetic, triggering the warning about the shift count.
Try using 1LL or 1ULL instead, which have type long long and unsigned long long respectively.
The line
unsigned long long n = 1<<32;
results in an overflow, because the literal 1 is of type int, so 1 << 32 is also an int, which is 32 bits in most cases.
The line
unsigned long long n = 1<<31;
also overflows, for the same reason. Note that 1 is of type signed int, so it really has only 31 value bits and 1 sign bit. So when you shift 1 << 31, it overflows into the sign bit, resulting in -2147483648, which is then converted to an unsigned long long, giving 18446744071562067968 (the conversion adds 2^64: 18446744073709551616 - 2147483648 = 18446744071562067968). You can verify this in the debugger if you inspect the variables and convert them.
So use
unsigned long long n = 1ULL << 31;
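Putting it together, a minimal sketch of the corrected shifts:

#include <stdio.h>

int main(void)
{
    unsigned long long a = 1ULL << 31;  /* 2147483648: the shift is done in 64-bit arithmetic */
    unsigned long long b = 1ULL << 32;  /* 4294967296: legal, unsigned long long is at least 64 bits */
    unsigned long long c = 1ULL << 63;  /* 9223372036854775808: the top bit */
    printf("%llu %llu %llu\n", a, b, c);
    return 0;
}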
Given *pSelectData = 0x4A and *(pSelectData+1) = 0x54:

unsigned int value = *((unsigned short *)pSelectData);

Output: 21578 (0x544A in hex).
Can someone explain to me how this happens (how the values get combined)?
Thanks in advance
What is the problem, more specifically?
Depending on endianness you get either 0x4A54 or 0x544A. That's exactly the representation of your value as it lies in memory.
This is your memory, where p = pSelectData, ps = p cast to unsigned short*, and pint = p cast to unsigned int* (little-endian architecture assumed):

[  ][4A][54][00][00][  ]
     ^   ^   ^   ^   ^
     p  p+1 p+2 p+3 p+4
     ps      ps+1     ps+2
     pint             pint+1
You probably wanted to do this:

((unsigned short *)pSelectData)[0] = 0x4A;
((unsigned short *)pSelectData)[1] = 0x54;

which would give you

[  ][4A][00][54][00][  ]
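A self-contained sketch of what the original read does (the buffer stands in for whatever pSelectData points at; the printed result assumes a little-endian machine):

#include <stdio.h>

int main(void) {
    unsigned char buf[] = {0x4A, 0x54};
    unsigned char *pSelectData = buf;

    /* Reinterpret two adjacent bytes as one 16-bit value. On a
       little-endian machine the first byte is the low byte, so the
       result is (0x54 << 8) | 0x4A = 0x544A = 21578. */
    unsigned int value = *(unsigned short *)pSelectData;
    printf("%u (0x%04X)\n", value, value);
    return 0;
}

(Strictly speaking, this kind of pointer cast has alignment and aliasing caveats; memcpy into an unsigned short is the portable way to do the same reinterpretation.)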