GDB arithmetic - gdb

When I use the commands:
print/x &_start -> I get: 0x08049054
print/x &_key -> I get: 0x0804916d
it is quite easy to figure out that the difference is 0x119.
But if I use the command:
print/x &_key - &_start -> I get: 0x46 (!!)
Why? Can anyone confirm this by debugging a program of their own?

What you see is pointer arithmetic.
See also: SO:Pointer Arithmetic

This is because _start and _key are pointers to unsigned int (or some other type that is four bytes wide). You will notice that you get the same result with pointer arithmetic in C/C++.
Write this into foo.cpp:
#include <cstdio>
int main(int argc, char** argv)
{
    unsigned int* _start = (unsigned int*)0x08049054;
    unsigned int* _key = (unsigned int*)0x0804916d;
    // %td is the portable conversion for ptrdiff_t, the type of a pointer difference
    printf("start(%p), key(%p) -> [key - start](%td)\n", (void*)_start, (void*)_key, _key - _start);
}
Now the make file (GNUmakefile):
CXXFLAGS=-ggdb -g3 -O0
foo: foo.cpp
Build it by invoking make (GNU make, to be precise).
The output will be:
start(0x8049054), key(0x804916d) -> [key - start](70)
... and 70 == 0x46.

Related

Automatic conversion of int to unsigned int

#include <iostream>
using namespace std;
int main() {
    unsigned int u = 5;
    int x = -1;
    if (x > u) {
        cout << "Should not happen" << endl;
    } else {
        cout << "Ok" << endl;
    }
}
This code outputs Should not happen. I came across this when comparing the size of a string (size_t is an unsigned integer type, typically unsigned int or unsigned long long) to an int.
It seems that C++ implicitly converts the int to unsigned int, but in practice this invites bugs. Honestly, I would have preferred a compile-time error, given how incompatible int is with unsigned int. Why is the convention like this?
You can enable -Wsign-compare -Werror in Clang: Try it online!
It'll produce a compile-time error (because of -Werror that treats warnings as errors):
.code.tio.cpp:7:9: error: comparison of integers of different signs: 'int' and 'unsigned int' [-Werror,-Wsign-compare]
if(x>u) {
~^~
1 error generated.
For some reason, -Wall -Werror in Clang (but not in GCC) doesn't produce any errors. But -Wall -Wextra -Werror does include -Wsign-compare, so you can use that.

cast a pointer to a type in lldb

I want to debug a third-party C++ lib. Is it possible to cast a pointer to a printable type?
I have tried
(lldb) expr static_cast<AGInfo*>(0x0000002fcdccc060)
but it shows an error of
error: cannot cast from type 'long' to pointer type 'mxnet::Imperative::AGInfo *'
Is there any way of doing that?
Thanks
lldb uses clang for its expression parser, so it adheres pretty strictly to C++ with only a few modifications. clang won't allow you to do what you were trying in source code:
> cat foo.cpp
struct Something
{
    int first;
    int second;
};

int
main()
{
    Something mySomething = {10, 30};
    long ptr_val = (long)&mySomething;
    Something *some_ptr = static_cast<Something *>(ptr_val);
    return some_ptr->first;
}
> clang++ -g -O0 -o foo foo.cpp
foo.cpp:12:25: error: cannot cast from type 'long' to pointer type 'Something *'
Something *some_ptr = static_cast<Something *>(ptr_val);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
So it won't work in lldb either.
Fortunately, C++ is less strict about C-style casts, so the same code with:
Something *some_ptr = (Something *) ptr_val;
compiles in actual source, and will work in the lldb expression parser.

Why am I able to assign a function reference to an anonymous function pointer variable?

The following code compiles just fine and I'm not sure why. Can someone please explain to me why this is legal?
I am using g++ (Debian 6.1.1-10) 6.1.1 20160724 to compile.
#include <iostream>
int sum(int x, int y) { return x + y; }
int main(int argc, char *argv[])
{
    using std::cout;
    int (*) (int, int) = &sum;
    cout << "what" << '\n';
}
Addendum
The following program compiles fine using g++ version 5.4.0 but fails to compile in clang.
int main()
{
    int (*) = 20;
}
It's very likely to be related to this bug reported by Zack Weinberg:
Bug 68265 - Arbitrary syntactic nonsense silently accepted after 'int (*){}' until the next close brace
(From Why does this invalid-looking code compile successfully on g++ 6.0? :)
The C++ compiler fails to diagnose ill-formed constructs such as
int main()
{
int (*) {}
any amount of syntactic nonsense
on multiple lines, with *punctuation* and ++operators++ even...
will be silently discarded
until the next close brace
}
With -pedantic -std=c++98 you do get "warning: extended initializer
lists only available with -std=c++11 or -std=gnu++11", but with
-std=c++11, not a peep.
If any one (or more) of the tokens 'int ( * ) { }' are removed, you do
get an error. Also, the C compiler does not have the same bug.
Of course, if you try int (*) (int, int) {} or other variants, it erroneously compiles. The interesting thing is that the difference between this and the previous duplicate/bug reports is that int (*) (int, int) = asdf requires asdf to be a name in scope. But I highly doubt that the bugs are different in nature, since the core issue is that GCC is allowing you to omit a declarator-id.
[n4567 §7/8]: "Each init-declarator in the init-declarator-list
contains exactly one declarator-id, which is the name declared by
that init-declarator and hence one of the names declared by the
declaration."
Here's an oddity:
int (*) (int, int) = main;
In this specific scenario, GCC doesn't complain about taking the address of main (like arrays, &main is equivalent to main).

What are the strict aliasing rules when casting *from* a char array?

I'm confused by the strict aliasing rules when it comes to casting a char array to other types. I know that it is permitted to cast any object to a char array, but I'm not sure what happens the other way around.
Take a look at this:
#include <type_traits>
using namespace std;

struct {
    alignas(int) char buf[sizeof(int)]; // correct?
} buf1;

alignas(int) char buf2[sizeof(int)]; // incorrect?

struct {
    float f; // obviously incorrect
} buf3;

typename std::aligned_storage<sizeof(int), alignof(int)>::type buf4; // obviously correct

int main()
{
    reinterpret_cast<int&>(buf1) = 1;
    *reinterpret_cast<int*>(buf2) = 1;
    reinterpret_cast<int&>(buf3) = 1;
    reinterpret_cast<int&>(buf4) = 1;
}
Compiling using g++-5.3.0 results in warnings only on the second and third line of main:
$ g++ -fsyntax-only -O3 -std=c++14 -Wall main.cpp
main.cpp: In function ‘int main()’:
main.cpp:25:30: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*reinterpret_cast<int*>(buf2) = 1;
^
main.cpp:26:29: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
reinterpret_cast<int&>(buf3) = 1;
^
Is gcc correct in that lines 1 and 4 are correct, while lines 2 and 3 are not? I'm fairly sure line 4 is correct (that's what aligned_storage is for), but what are the rules at play here?
First of all, the absence of a warning is not a guarantee of correctness! gcc is getting better and better at spotting problematic code, but it is still not a static analysis tool (and those are not perfect either!).
Second of all, no: you are not allowed to access a char array through a pointer to another type.

Why don't g++ and clang warn when truncating a non-const variable by assigning it to a variable of a smaller type?

Both clang 2.9 and g++ 4.1.2 generate a warning when the variable x is declared const in the code snippet below. However, when const is removed, as it has been in the snippet, neither compiler generates a warning, even with the strictest flags I know: -Wall -Wextra -pedantic -ansi.
Why won't the compilers deduce and report the same warning, since x isn't volatile and cannot possibly be modified before the type conversion?
#include <iostream>
int main(int argc, char **argv)
{
    unsigned int x = 1000;
    const unsigned char c = x;
    const unsigned int x_ = c;
    std::cout << "x=" << x << " x_=" << x_ << std::endl;
    return 0;
}
With const unsigned int x = 1000; g++ provides the message "warning: large integer implicitly truncated to unsigned type" and clang "warning: implicit conversion from 'const unsigned int' to 'const unsigned char' changes value from 1000 to 232 [-Wconstant-conversion]".
Is there any way to automatically detect this case without manually inspecting the code or relying on correctly designed unit tests?
For GCC, add the flag -Wconversion and you will get the desired warning. It's not a part of -Wall since so much code just ignores these types of things. I always have it turned on since it finds otherwise hard to debug defects.
If it is const, the compiler can see its value and warn about the truncation. If it is not const, it cannot, despite the initialisation. This:
const unsigned int x = 1000;
const unsigned char c = x;
is equivalent to:
const unsigned char c = 1000;
I've run gcc with -O3 -fdump-tree-vrp, and what I see in the dump is:
std::__ostream_insert<char, std::char_traits<char> > (&cout, &"x="[0], 2);
D.20752_20 = std::basic_ostream<char>::_M_insert<long unsigned int> (&cout, 1000);
std::__ostream_insert<char, std::char_traits<char> > (D.20752_20, &" x_="[0], 4);
D.20715_22 = std::basic_ostream<char>::_M_insert<long unsigned int> (D.20752_20, 232);
i.e. it just inlines the constants 1000 and 232 in the cout statement!
If I run it with -O0, it doesn't dump anything, despite -ftree-vrp and -ftree-ccp switches.
Seems like gcc inlines the constants before it can emit the warnings...