GDB: examine as instruction with opcodes

Is it possible to examine memory as instructions (x/i) in a way that shows both the asm and the raw instruction bytes in hex (like disassemble /r does)?
Sometimes I want to disassemble some part of memory which GDB refuses to disassemble, saying: "No function contains specified address".
The only option then is x/i, but I would like to see exactly which hex values translate to which instructions.

I want to disassemble some part of memory which GDB refuses to disassemble saying: "No function contains specified address".
disas /r with an explicit start and end address (e.g. disas /r 0x1234,0x1235) will work even when GDB cannot determine function boundaries. Example:
(gdb) disas/r 0x0000000000400803
No function contains specified address.
(gdb) disas/r 0x0000000000400803,0x000000000040080f
Dump of assembler code from 0x400803 to 0x40080f:
0x0000000000400803: e8 b8 fd ff ff callq 0x4005c0 <system@plt>
0x0000000000400808: 48 81 45 f0 00 10 00 00 addq $0x1000,-0x10(%rbp)
End of assembler dump.
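If you'd rather give a length than an end address, GDB also accepts a start,+length form with the same /r modifier (a small usage note; this prints the same dump as above):
(gdb) disas /r 0x0000000000400803,+12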

How to link malloc when compiling with GCC? [duplicate]

I have a function foo written in assembly, assembled with yasm and linked with GCC on 64-bit Linux (Ubuntu). It simply prints a message to stdout using puts(); here is how it looks:
bits 64
extern puts
global foo
section .data
message:
db 'foo() called', 0
section .text
foo:
push rbp
mov rbp, rsp
lea rdi, [rel message]
call puts
pop rbp
ret
It is called by a C program compiled with GCC:
extern void foo();
int main() {
foo();
return 0;
}
Build commands:
yasm -f elf64 foo_64_unix.asm
gcc -c foo_main.c -o foo_main.o
gcc foo_64_unix.o foo_main.o -o foo
./foo
Here is the problem:
When running the program it prints an error message and immediately segfaults during the call to puts:
./foo: Symbol `puts' causes overflow in R_X86_64_PC32 relocation
Segmentation fault
After disassembling with objdump I see that the call is made with the wrong address:
0000000000000660 <foo>:
660: 90 nop
661: 55 push %rbp
662: 48 89 e5 mov %rsp,%rbp
665: 48 8d 3d a4 09 20 00 lea 0x2009a4(%rip),%rdi
66c: e8 00 00 00 00 callq 671 <foo+0x11> <-- here
671: 5d pop %rbp
672: c3 retq
(671 is the address of the next instruction, not address of puts)
However, if I rewrite the same code in C the call is done differently:
645: e8 c6 fe ff ff callq 510 <puts@plt>
i.e. it references puts from the PLT.
Is it possible to tell yasm to generate similar code?
TL:DR: 3 options:
Build a non-PIE executable (gcc -no-pie -fno-pie call-lib.c libcall.o) so the linker will generate a PLT entry for you transparently when you write call puts.
call puts wrt ..plt like gcc -fPIE would do.
call [rel puts wrt ..got] like gcc -fno-plt would do.
The latter two will work in PIE executables or shared libraries. The 3rd way, wrt ..got, is slightly more efficient.
Your gcc is building PIE executables by default (32-bit absolute addresses no longer allowed in x86-64 Linux?).
I'm not sure why, but when doing so the linker doesn't automatically resolve call puts to call puts@plt. There is still a puts PLT entry generated, but the call doesn't go there.
At runtime, the dynamic linker tries to resolve puts directly to the libc symbol of that name and fix up the call rel32. But the symbol is more than ±2^31 away, so we get a warning about overflow of the R_X86_64_PC32 relocation. The low 32 bits of the target address are correct, but the upper bits aren't. (Thus your call jumps to a bad address.)
Your code works for me if I build with gcc -no-pie -fno-pie call-lib.c libcall.o. The -no-pie is the critical part: it's the linker option. Your YASM command doesn't have to change.
When making a traditional position-dependent executable, the linker turns the puts symbol for the call target into puts@plt for you, because we're linking a dynamic executable (instead of statically linking libc with gcc -static -fno-pie, in which case the call could go directly to the libc function).
Anyway, this is why gcc emits call puts@plt (GAS syntax) when compiling with -fpie (the default on your desktop, but not the default on https://godbolt.org/), but just call puts when compiling with -fno-pie.
See What does @plt mean here? for more about the PLT, and also Sorry state of dynamic libraries on Linux from a few years ago. (The modern gcc -fno-plt is like one of the ideas in that blog post.)
BTW, a more accurate/specific prototype would let gcc avoid zeroing EAX before calling foo:
extern void foo(); in C means extern void foo(...);
You could declare it as extern void foo(void);, which is what () means in C++. C++ doesn't allow function declarations that leave the args unspecified.
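In other words, spelled out as two C declarations (a small illustration of the point above):
extern void foo();      /* parameter list left unspecified; gcc has to be conservative about the call */
extern void foo(void);  /* explicitly takes no arguments; the meaning () has in C++ */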
asm improvements
You can also put message in section .rodata (read-only data, linked as part of the text segment).
You don't need a stack frame, just something to align the stack by 16 before a call. A dummy push rax will do it.
Or we can tail-call puts by jumping to it instead of calling it, with the same stack position as on entry to this function. This works with or without PIE. Just replace call with jmp, as long as RSP is pointing at your own return address.
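A minimal sketch of that tail-call version (NASM syntax; assumes message is defined in .rodata as in the full example below):
extern puts
section .text
global foo
foo:                          ; no push/sub needed: RSP still points at our own return address
    lea  rdi, [rel message]
    jmp  puts wrt ..plt       ; tail-call: puts returns directly to foo's caller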
If you want to make PIE executables (or shared libraries), you have two options
call puts wrt ..plt - explicitly call through the PLT.
call [rel puts wrt ..got] - explicitly do an indirect call through the GOT entry, like gcc's -fno-plt style of code-gen. (Using a RIP-relative addressing mode to reach the GOT, hence the rel keyword).
WRT = With Respect To. The NASM manual documents wrt ..plt, and see also section 7.9.3: special symbols and WRT.
Normally you would use default rel at the top of your file so you can actually use call [puts wrt ..got] and still get a RIP-relative addressing mode. You can't use a 32-bit absolute addressing mode in PIE or PIC code.
call [puts wrt ..got] assembles to a memory-indirect call using the function pointer that dynamic linking stored in the GOT. (Early-binding, not lazy dynamic linking.)
NASM documents ..got for getting the address of variables in section 9.2.3. Functions in (other) libraries are identical: you get a pointer from the GOT instead of calling directly, because the offset isn't a link-time constant and might not fit in 32-bits.
YASM also accepts call [puts wrt ..GOTPCREL], like AT&T syntax call *puts@GOTPCREL(%rip), but NASM does not.
; don't use BITS 64. You *want* an error if you try to assemble this into a 32-bit .o
default rel ; RIP-relative addressing instead of 32-bit absolute by default; makes the [rel ...] optional
section .rodata ; .rodata is best for constants, not .data
message:
db 'foo() called', 0
section .text
global foo
foo:
sub rsp, 8 ; align the stack by 16
; PIE with PLT
lea rdi, [rel message] ; needed for PIE
call puts WRT ..plt ; call through the PLT
;or
; PIE with -fno-plt style code, skips the PLT indirection
lea rdi, [rel message]
call [rel puts wrt ..got]
;or
; non-PIE
mov edi, message ; more efficient, but only works in non-PIE / non-PIC
call puts ; linker will rewrite it into call puts@plt
add rsp,8 ; restore the stack (undo the sub)
ret
In a position-dependent Linux executable, you can use mov edi, message instead of a RIP-relative LEA. It's smaller code-size and can run on more execution ports on most CPUs. (Fun fact: MacOS always puts the "image base" outside the low 4GiB so this optimization isn't possible there.)
In a non-PIE executable, you also might as well use call puts or jmp puts and let the linker sort it out, unless you want more efficient no-plt style dynamic linking. But if you do choose to statically link libc, I think this is the only way you'll get a direct jmp to the libc function.
(I think the possibility of static linking for non-PIE is why ld is willing to generate PLT stubs automatically for non-PIE, but not for PIE or shared libraries. It requires you to say what you mean when linking ELF shared objects.)
If you did use call puts in a PIE (call rel32), it could only work if you statically linked a position-independent implementation of puts into your PIE, so the entire thing was one executable that would get loaded at a random address at runtime (by the usual dynamic-linker mechanism), but simply didn't have a dependency on libc.so.6
Linker "relaxing" calls when the target is present at static-link time
GAS call *bar@GOTPCREL(%rip) uses R_X86_64_GOTPCRELX (relaxable)
NASM call [rel bar wrt ..got] uses R_X86_64_GOTPCREL (not relaxable)
This is less of a problem with hand-written asm; you can just use call bar when you know the symbol will be present in another .o (rather than .so) that you're going to link. But C compilers don't know the difference between library functions and other user functions you declare with prototypes (unless you use stuff like gcc -fvisibility=hidden https://gcc.gnu.org/wiki/Visibility or attributes / pragmas).
Still, you might want to write asm source that the linker can optimize if you statically link a library, but AFAIK you can't do that with NASM. You can export a symbol as hidden (visible at static-link time, but not for dynamic linking in the final .so) with global bar:function hidden, but that's in the source file defining the function, not files accessing it.
; first file: defines bar
global bar
bar:
mov eax,231
syscall

; second file (assembled separately): calls bar both ways
call bar wrt ..plt
call [rel bar wrt ..got]
extern bar
The 2nd file, after assembling with nasm -felf64 and disassembling with objdump -drwc -Mintel to see the relocations:
0000000000000000 <.text>:
0: e8 00 00 00 00 call 0x5 1: R_X86_64_PLT32 bar-0x4
5: ff 15 00 00 00 00 call QWORD PTR [rip+0x0] # 0xb 7: R_X86_64_GOTPCREL bar-0x4
After linking with ld (GNU Binutils) 2.35.1 - ld bar.o bar2.o -o bar
0000000000401000 <_start>:
401000: e8 0b 00 00 00 call 401010 <bar>
401005: ff 15 ed 1f 00 00 call QWORD PTR [rip+0x1fed] # 402ff8 <.got>
40100b: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
0000000000401010 <bar>:
401010: b8 e7 00 00 00 mov eax,0xe7
401015: 0f 05 syscall
Note that the PLT form got relaxed to just a direct call bar, with the PLT eliminated. But the ff 15 call [rel mem] was not relaxed to an e8 rel32 direct call.
With GAS:
_start:
call bar@plt
call *bar@GOTPCREL(%rip)
gcc -c foo.s && disas foo.o
0000000000000000 <_start>:
0: e8 00 00 00 00 call 5 <_start+0x5> 1: R_X86_64_PLT32 bar-0x4
5: ff 15 00 00 00 00 call QWORD PTR [rip+0x0] # b <_start+0xb> 7: R_X86_64_GOTPCRELX bar-0x4
Note the X at the end of R_X86_64_GOTPCRELX.
ld bar2.o foo.o -o bar && disas bar:
0000000000401000 <bar>:
401000: b8 e7 00 00 00 mov eax,0xe7
401005: 0f 05 syscall
0000000000401007 <_start>:
401007: e8 f4 ff ff ff call 401000 <bar>
40100c: 67 e8 ee ff ff ff addr32 call 401000 <bar>
Both calls got relaxed to a direct e8 call rel32 straight to the target address. The extra byte in indirect call is filled with a 67 address-size prefix (which has no effect on call rel32), padding the instruction to the same length. (Because it's too late to re-assemble and re-compute all relative branches within functions, and alignment and so on.)
That would happen for call *puts@GOTPCREL(%rip) if you statically linked libc, with gcc -static.
The 0xe8 opcode is followed by a signed offset to be applied to the PC (which has advanced to the next instruction by that time) to compute the branch target. Hence objdump is interpreting the branch target as 0x671.
YASM is rendering zeros because it has likely put a relocation on that offset, which is how it asks the loader to populate the correct offset for puts during loading. The loader is encountering an overflow when computing the relocation, which may indicate that puts is at a further offset from your call than can be represented in a 32-bit signed offset. Hence the loader fails to fix this instruction, and you get a crash.
66c: e8 00 00 00 00 shows the unpopulated address. If you look in your relocation table, you should see a relocation on 0x66d. It is not uncommon for the assembler to populate addresses/offsets with relocations as all zeros.
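To see that relocation yourself, the usual binutils tools work (a usage sketch; the exact offsets and relocation types depend on your toolchain and on whether you look at the object file or the linked executable):
readelf -r foo            # relocations still left for the dynamic linker in the executable
objdump -dr foo_64_unix.o # relocations shown inline with the object file's disassembly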
This page suggests that YASM has a WRT directive that can control use of .got, .plt, etc.
Per section 9.2.5 of the NASM documentation, it looks like you can use CALL puts WRT ..plt (presuming YASM has the same syntax).

Is it possible to write asm in C++ with opcode instead of shellcode

I'm curious whether there's a way to use __asm in C++ and then write that into memory, instead of doing something like:
BYTE shell_code[] = { 0x48, 0x03 ,0x1c ,0x25, 0x0A, 0x00, 0x00, 0x00 };
write_to_memory(function, &shell_code, sizeof(shell_code));
So I would like to do:
asm_code = __asm("add rbx, &variable\n\t""jmp rbx") ;
write_to_memory(function, &asm_code , sizeof(asm_code ));
Worst case I can use GCC and objdump externally or something, but I'm hoping there's an internal way.
You can put an asm(""); statement at global scope, with start/end labels inside it, and declare those labels as extern char start_code[], end_code[0]; so you can access them from C. C char arrays work most like asm labels, in terms of being able to use the C name and have it work as an address.
// compile with gcc -masm=intel
// AFAIK, no way to do that with clang
asm(
".pushsection .rodata \n" // we don't want to run this from here, it's just data
"start_code: \n"
" add rax, OFFSET variable \n" // *absolute* address as 32-bit sign-extended immediate
"end_code: \n"
".popsection"
);
__attribute__((used)) static int variable = 1;
extern char start_code[], end_code[0]; // C declarations for those asm labels
#include <string.h>
void copy_code(void *dst)
{
memcpy(dst, start_code, end_code - start_code);
}
It would be fine to have the payload code in the default .text section, but we can put it in .rodata since we don't want to run it.
Is that the kind of thing you're looking for? asm output on Godbolt (without assembling + disassembling):
start_code:
add rax, OFFSET variable
end_code:
copy_code(void*):
mov edx, OFFSET FLAT:end_code
mov esi, OFFSET FLAT:start_code
sub rdx, OFFSET FLAT:start_code
jmp [QWORD PTR memcpy@GOTPCREL[rip]]
To see if it actually assembles to what we want, I compiled with
gcc -O2 -fno-plt -masm=intel -fno-pie -no-pie -c foo.c to get a .o. objdump -drwC -Mintel shows:
0000000000000000 <copy_code>:
0: ba 00 00 00 00 mov edx,0x0 1: R_X86_64_32 .rodata+0x6
5: be 00 00 00 00 mov esi,0x0 6: R_X86_64_32 .rodata
a: 48 81 ea 00 00 00 00 sub rdx,0x0 d: R_X86_64_32S .rodata
11: ff 25 00 00 00 00 jmp QWORD PTR [rip+0x0] # 17 <end_code+0x11> 13: R_X86_64_GOTPCRELX memcpy-0x4
And with -D to see all sections, the actual payload is there in .rodata, still not linked yet:
Disassembly of section .rodata:
0000000000000000 <start_code>:
0: 48 05 00 00 00 00 add rax,0x0 2: R_X86_64_32S .data
-fno-pie -no-pie is only necessary for the 32-bit absolute address of variable to work. (Without it, we get two RIP-relative LEAs and a sub rdx, rsi. Unfortunately neither way of compiling gets GCC to subtract the symbols at build time with mov edx, OFFSET end_code - start_code, but that's just in the code doing the memcpy, not in the machine code being copied.)
In a linked executable
We can see how the linker filled in those relocations.
(I tested by using -nostartfiles instead of -c; I didn't want to run it, just look at the disassembly, so there was no point in actually writing a main.)
$ gcc -O2 -fno-plt -masm=intel -fno-pie -no-pie -nostartfiles foo.c
/usr/bin/ld: warning: cannot find entry symbol _start; defaulting to 0000000000401000
$ objdump -D -rwC -Mintel a.out
(manually edited to remove uninteresting sections)
Disassembly of section .text:
0000000000401000 <copy_code>:
401000: ba 06 20 40 00 mov edx,0x402006
401005: be 00 20 40 00 mov esi,0x402000
40100a: 48 81 ea 00 20 40 00 sub rdx,0x402000
401011: ff 25 e1 2f 00 00 jmp QWORD PTR [rip+0x2fe1] # 403ff8 <memcpy@GLIBC_2.14>
The linked payload:
0000000000402000 <start_code>:
402000: 48 05 18 40 40 00 add rax,0x404018 # from add rax, OFFSET variable
0000000000402006 <end_code>:
402006: 48 c7 c2 06 00 00 00 mov rdx,0x6
# this was from mov rdx, OFFSET end_code - start_code to see if that would assemble + link
Our non-zero-init dword variable that we're taking the address of:
Disassembly of section .data:
0000000000404018 <variable>:
404018: 01 00 add DWORD PTR [rax],eax
...
Your specific asm instruction is weird
&variable isn't valid asm syntax, but I'm guessing you wanted to add the address?
Since you're going to be copying the machine code somewhere, you must avoid RIP-relative addressing modes and any other relative references to things outside the block you're copying. Only mov can use 64-bit absolute addresses, like movabs rdi, OFFSET variable instead of the usual lea rdi, [rip + variable]. Also, you can even load / store into/from RAX/EAX/AX/AL with 64-bit absolute addresses movabs eax, [variable]. (mov-immediate can use any register, load/store are only the accumulator. https://www.felixcloutier.com/x86/mov)
(movabs is an AT&T mnemonic, but GAS allows it in .intel_syntax noprefix to force using 64-bit immediates, instead of the default 32-bit-sign-extended.)
This is kind of the opposite of normal position-independent code, which works when the whole image is loaded at an arbitrary base. This makes code that works when the image is loaded at a fixed base (or even a variable one, since runtime fixups should still work for symbolic references) and is then copied around relative to the rest of your code. So all your memory references have to be absolute, except for references within the copied block itself.
So we couldn't have made PIE-compatible machine code by using lea rdx, [RIP+variable] / add rax, rdx - that would only get the right address for variable when run from the linked location in .rodata, not from any copy. (Unless you manually fixup the code when copying it, but it's still only a rel32 displacement.)
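A sketch of what a copy-safe payload could look like, using the movabs form mentioned above (my own illustration: the start_code2/end_code2 labels are hypothetical, it assumes the same variable and gcc -masm=intel as the earlier example, and it still needs a non-PIE link so variable has a link-time-constant address):
asm(
    ".pushsection .rodata            \n"
    "start_code2:                    \n"
    "   movabs rdi, OFFSET variable  \n"   // mov r64, imm64: full 64-bit absolute address, valid from any copy location
    "end_code2:                      \n"
    ".popsection"
);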
Terminology:
An opcode is part of a machine instruction, e.g. add ecx, 123 assembles to 3 bytes: 83 c1 7b. Those are the opcode, modrm, and imm8 respectively. (https://www.felixcloutier.com/x86/add).
"opcode" also gets misused (especially in shellcode usage) to describe the whole instruction represented as bytes.
Text names for instructions like add are mnemonics.
This is just a guess; I don't know if it will work, and I apologize in advance for an ugly answer since I don't have much time due to work.
I think you can enclose your asm code inside labels.
Get the address of the label and the size, treat the range as a blob of data, and you can write it anywhere.
// sketch only: needs <stdlib.h> and <string.h>, and someBuffer must also be executable (see note below)
typedef void func(void);   // type used for the function-pointer cast at the end

void funcA(){
    //some code here.
labelStart:
    __asm__(
        ";asm code here."
    );
labelEnd:
    //some code here.
    //---treat the code between the labels as movable data
    //   (&&label is the GCC "address of label" extension)
    char* pDynamicProgram = (char*)&&labelStart;
    size_t sizeDP = (char*)&&labelEnd - (char*)&&labelStart;
    //---writing to some memory.
    char* someBuffer = malloc(sizeDP);
    memcpy(someBuffer, pDynamicProgram, sizeDP);
    //---execute: cast as a function pointer, then call.
    ((func*)someBuffer)(/* parameters if any */);
}
The sample code above is of course only a sketch, but the logic is kind of like that. I've seen viruses do it that way, though I haven't seen the actual C++ source; we only saw it through disassemblers. For the "return" logic after the call there are many ad-hoc ways to handle it; just be creative.
Also, I think you first have to change some settings (memory page protections) so your program is allowed to write to otherwise forbidden memory, in case you want to overwrite an existing function.
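A minimal sketch of that last part on Linux (my own illustration, not from the answer: using mmap to get a writable + executable buffer instead of plain malloc, so the copied bytes can legally run):
#include <string.h>
#include <sys/mman.h>

typedef void func(void);

// copy `size` bytes of machine code into an executable mapping and run them
static void run_copied(const void *code, size_t size)
{
    void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return;
    memcpy(buf, code, size);
    ((func *)buf)();          // cast to a function pointer and call the copy
    munmap(buf, size);
}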

Relative vs Physical addresses in C++

I recently started learning about memory management and I read about relative addresses and physical addresses, and a question appeared in my mind:
When I print a variable's address, is it showing the relative (virtual) address or the physical address where the variable is located in memory?
And another question regarding memory management:
Why does this code produce the same stack pointer value for each run (from Shellcoder's Handbook, page 28)?
Does any program that I run produce this address?
// find_start.c
unsigned long find_start(void)
{
__asm__("movl %esp, %eax");
}
int main()
{
printf("0x%x\n",find_start());
}
If we compile this and run it a few times, we get:
shellcoders@debian:~/chapter_2$ ./find_start
0xbffffad8
shellcoders@debian:~/chapter_2$ ./find_start
0xbffffad8
shellcoders@debian:~/chapter_2$ ./find_start
0xbffffad8
shellcoders@debian:~/chapter_2$ ./find_start
0xbffffad8
I would appreciate if someone could clarify this topic to me.
When I print a variable's address, is it showing the relative (virtual) address or the physical address where the variable is located in memory?
The counterpart to a relative address is an absolute address. That has nothing to do with the distinction between virtual and physical addresses.
On most common modern operating systems, such as Windows, Linux and MacOS, unless you are writing a driver, you will never encounter physical addresses. These are handled internally by the operating system. You will only be working with virtual addresses.
Why does this code produce the same stack pointer value for each run (from Shellcoder's Handbook, page 28)?
On most modern operating systems, every process has its own virtual memory address space. The executable is loaded to its preferred base address in that virtual address space, if possible, otherwise it is loaded at another address (relocated). The preferred base address of an executable file is normally stored in its header. Depending on the operating system and CPU, the heap is probably created at a higher address, since the heap normally grows upward (towards higher addresses). Because the stack normally grows downward (towards lower addresses), it will likely be created below the load address of the executable and grow towards the address 0.
Since the preferred load address is the same every time you run the executable, it is likely that the virtual memory addresses are the same. However, this may change if address space layout randomization (ASLR) is used. Also, just because the virtual memory addresses are the same does not mean that the physical memory addresses are the same, too.
Does any program that I run produce this address?
Depending on your operating system, you can set the preferred base address at which your program is loaded into virtual memory in the linker settings. Many programs may still have the same base address as your program, probably because both programs were built using the same linker with default settings.
The virtual addresses are only per program? Let's say I have 2 programs: program1 and program2. Can program2 access program1's memory?
It is not possible for program2 to access program1's memory directly, because they have separate virtual memory address spaces. However, it is possible for one program to ask the operating system for permission to access another process's address space. The operating system will normally grant this permission, provided that the program has sufficient privileges. On Windows, this can be accomplished, for example, with the function WriteProcessMemory. Linux offers similar functionality by using ptrace and writing to /proc/[pid]/mem. See this link for further information.
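A rough sketch of the Linux route (my own illustration, not from the answer: attach with ptrace, then pread from /proc/<pid>/mem; error handling omitted, and it assumes you are allowed to trace the target, e.g. same user plus a permissive ptrace_scope or CAP_SYS_PTRACE):
#include <fcntl.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// read `len` bytes at virtual address `addr` inside process `pid`
ssize_t read_other_process(pid_t pid, unsigned long addr, void *out, size_t len)
{
    char path[64];
    snprintf(path, sizeof path, "/proc/%d/mem", (int)pid);

    ptrace(PTRACE_ATTACH, pid, NULL, NULL);        // stop the target so its memory is stable
    waitpid(pid, NULL, 0);

    int fd = open(path, O_RDONLY);
    ssize_t n = pread(fd, out, len, (off_t)addr);  // addr is the target's virtual address
    close(fd);

    ptrace(PTRACE_DETACH, pid, NULL, NULL);
    return n;
}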
You get virtual addresses. Your program never gets to see physical addresses. Ever.
Can program2 access program1's memory?
No, because you don't have any addresses that point into program1's memory. If you have virtual address 0xabcd1234 in the program1 process and you try to read it from the program2 process, you get program2's 0xabcd1234 (or a crash if there is no such address in program2). It's not a permission check: it's not like the CPU goes to the memory and sees "oh, this is program1's memory, I shouldn't access it". It is simply program2's own memory space.
But yes, if you use "shared memory" to ask the OS to put the same physical memory in both processes.
And yes, if you use ptrace or /proc/<pid>/mem to ask the OS nicely to read from the other process's memory, and you have permission to do that, then it will do that.
Why does this code produce the same stack pointer value for each run (from Shellcoder's Handbook, page 28)? Does any program that I run produce this address?
Apparently, that program always has that stack pointer value. Different programs might have different stack pointers. And if you put more local variables in main, or call find_start from a different function, you will get a different stack pointer value because there will be more data pushed on the stack.
Note: even if you run the program twice at the same time, the address will be the same, because they are virtual addresses, and every process has its own virtual address space. They will be different physical addresses but you don't see the physical addresses.
In the stack overflow example in the book I mentioned, they overwrite the return address on the stack with the address of an exploit stored in the environment variables. How does that work?
It all works within one process.
Only focusing on a small part of your question.
#include <stdio.h>
// find_start.c
unsigned long find_start(void)
{
__asm__("movl %esp, %eax");
}
unsigned long nest ( void )
{
return(find_start());
}
int main()
{
printf("0x%lx\n",find_start());
printf("0x%lx\n",nest());
}
gcc so.c -o so
./so
0x50e381a0
0x50e38190
There is no magic here. Virtual addressing allows programs to be built the same way: I don't need to know where my program is going to live, each program can be compiled for the same address space, and when loaded and run they all see the same virtual address space because they are mapped to separate physical address spaces.
readelf -a so
(that works, but I prefer objdump)
objdump -D so
Disassembly of section .text:
0000000000000540 <_start>:
540: 31 ed xor %ebp,%ebp
542: 49 89 d1 mov %rdx,%r9
545: 5e pop %rsi
....
000000000000064a <find_start>:
64a: 55 push %rbp
64b: 48 89 e5 mov %rsp,%rbp
64e: 89 e0 mov %esp,%eax
650: 90 nop
651: 5d pop %rbp
652: c3 retq
0000000000000653 <nest>:
653: 55 push %rbp
654: 48 89 e5 mov %rsp,%rbp
657: e8 ee ff ff ff callq 64a <find_start>
65c: 5d pop %rbp
65d: c3 retq
000000000000065e <main>:
65e: 55 push %rbp
65f: 48 89 e5 mov %rsp,%rbp
662: e8 e3 ff ff ff callq 64a <find_start>
667: 48 89 c6 mov %rax,%rsi
66a: 48 8d 3d b3 00 00 00 lea 0xb3(%rip),%rdi # 724 <_IO_stdin_used+0x4>
671: b8 00 00 00 00 mov $0x0,%eax
676: e8 a5 fe ff ff callq 520 <printf@plt>
67b: e8 d3 ff ff ff callq 653 <nest>
So, two things (or maybe more than two). Our entry point _start is in RAM at a low address, a low virtual address. On this system, with this compiler, I would expect all or most programs to start in a similar (or the same) place; in some cases it may depend on what is in the program, but it should be somewhere low.
The stack pointer though, if you check above and now as I type stuff:
0x355d38d0
0x355d38c0
it has changed.
0x4ebf1760
0x4ebf1750
0x31423240
0x31423230
0xa63188d0
0xa63188c0
a few times within a few seconds. The stack is a relative thing, not an absolute one, so there is no real need for it to start at a fixed address that is the same every time. It needs to be in a space that is tied to this user/thread, and virtual, since it goes through the MMU for protection reasons. (There is no inherent reason a virtual address can't equal the physical address.) The kernel code/driver that manages the MMU for a platform is programmed to lay this out a certain way. You can have the address space for code start at 0x0000 for every program, and you might wish the address space for data to be the same, zero-based, but for the stack it doesn't matter. And on my machine, my OS, this particular version, this particular day, it isn't consistent.
I originally thought your question was about something different, which depends on factors specific to your build and settings. For a specific build, a single call to find_start will show a fixed relative stack pointer value: each function that uses the stack puts it back the way it was found, and assuming you can't change the compilation of the program while it is running, the stack consumption of each function along the way is the same, so for a single instance of the call the nesting will be the same.
I added another layer, and looking at the disassembly, main, nest and find_start all mess with the stack pointer (unoptimized), which is why for these runs the two values are 0x10 apart. If I added or removed code to change the stack usage in one or more of the functions, that delta could change.
But
gcc -O2 so.c -o so
objdump -D so > so.txt
./so
0x0
0x0
Disassembly of section .text:
0000000000000560 <main>:
560: 48 83 ec 08 sub $0x8,%rsp
564: 89 e0 mov %esp,%eax
566: 48 8d 35 e7 01 00 00 lea 0x1e7(%rip),%rsi # 754 <_IO_stdin_used+0x4>
56d: 31 d2 xor %edx,%edx
56f: bf 01 00 00 00 mov $0x1,%edi
574: 31 c0 xor %eax,%eax
576: e8 c5 ff ff ff callq 540 <__printf_chk@plt>
57b: 89 e0 mov %esp,%eax
57d: 48 8d 35 d0 01 00 00 lea 0x1d0(%rip),%rsi # 754 <_IO_stdin_used+0x4>
584: 31 d2 xor %edx,%edx
586: bf 01 00 00 00 mov $0x1,%edi
58b: 31 c0 xor %eax,%eax
58d: e8 ae ff ff ff callq 540 <__printf_chk@plt>
592: 31 c0 xor %eax,%eax
594: 48 83 c4 08 add $0x8,%rsp
598: c3 retq
The optimizer didn't recognize the return value for some reason.
unsigned long fun ( void )
{
return(0x12345678);
}
00000000000006b0 <fun>:
6b0: b8 78 56 34 12 mov $0x12345678,%eax
6b5: c3 retq
calling convention looks fine.
Put find_start in a separate file so the optimizer can't remove it
gcc -O2 so.c sp.c -o so
./so
0xb1192fc8
0xb1192fc8
./so
0x7aa979d8
0x7aa979d8
./so
0x485134c8
0x485134c8
./so
0xa8317c98
0xa8317c98
./so
0x2ba70b8
0x2ba70b8
Disassembly of section .text:
0000000000000560 <main>:
560: 48 83 ec 08 sub $0x8,%rsp
564: e8 67 01 00 00 callq 6d0 <find_start>
569: 48 8d 35 f4 01 00 00 lea 0x1f4(%rip),%rsi # 764 <_IO_stdin_used+0x4>
570: 48 89 c2 mov %rax,%rdx
573: bf 01 00 00 00 mov $0x1,%edi
578: 31 c0 xor %eax,%eax
57a: e8 c1 ff ff ff callq 540 <__printf_chk@plt>
57f: e8 4c 01 00 00 callq 6d0 <find_start>
584: 48 8d 35 d9 01 00 00 lea 0x1d9(%rip),%rsi # 764 <_IO_stdin_used+0x4>
58b: 48 89 c2 mov %rax,%rdx
58e: bf 01 00 00 00 mov $0x1,%edi
593: 31 c0 xor %eax,%eax
595: e8 a6 ff ff ff callq 540 <__printf_chk@plt>
I didn't let it inline find_start, but it can still see nest, so it inlined that, removing the stack change that came with it. So now the value, nested or not, is the same.
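As an aside (my own sketch, not part of the answer): on x86-64 you can make find_start survive optimization without a separate file by using an extended-asm output operand, so the compiler actually knows where the value is:
unsigned long find_start(void)
{
    unsigned long sp;
    __asm__("mov %%rsp, %0" : "=r"(sp));   // the "=r" output tells the compiler sp now holds the result
    return sp;
}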

Detect specific function calls in the source code with objdump

I need a way to detect places, where a specific function is called.
E.g.:
myPrint():
main.c, line 28
utils.c, line 89
The point is, I need only function calls, not definitions or declarations. And I need it not only for "simple" C-like identifiers, but even for class methods, functions defined in namespaces etc., because I'll use this program mainly for C++.
My attempts
GTags - Detects identifiers pretty well, but there's no way to differentiate function calls from definitions etc.
CScope - good for C, but fails sometimes with C++
objdump + addr2line - described more in detail below
I'm now trying to use objdump with addr2line this way:
objdump --disassemble-all binary | grep 'nameOfFunction'
Output looks like this:
5834c3: e8 e8 79 00 00 callq 58aeb0 <_ZN7espreso14IterSolverBase24Solve_RegCG_singular_domERNS_10ClusterCPUERSt6vectorIS3_IdSaIdEESaIS5_EE>
58bef4: e8 57 45 00 00 callq 590450 <_ZZN7espreso14IterSolverBase24Solve_RegCG_singular_domERNS_10ClusterCPUERSt6vectorIS3_IdSaIdEESaIS5_EEENKUliE_clEi>
58bf43: e8 08 45 00 00 callq 590450 <_ZZN7espreso14IterSolverBase24Solve_RegCG_singular_domERNS_10ClusterCPUERSt6vectorIS3_IdSaIdEESaIS5_EEENKUliE_clEi>
58bf7c: e8 cf 44 00 00 callq 590450 <_ZZN7espreso14IterSolverBase24Solve_RegCG_singular_domERNS_10ClusterCPUERSt6vectorIS3_IdSaIdEESaIS5_EEENKUliE_clEi>
And then I take the number from the first column and try to get the location in the source code like this:
addr2line -e binary 5834c3
Sometimes it finds the function call correctly, sometimes it finds something completely different (I guess some kind of reference etc.).
And so, my question is: is it possible to detect which findings are function calls? I see that there is callq in the output, but I'm not completely sure that is the only way a function call can appear in the code.
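One way to wire the two steps together is to keep only the callq lines and feed the addresses to addr2line (a sketch, assuming GNU binutils and a binary built with -g so addr2line can map addresses to file:line; -C demangles the C++ names and -f also prints the enclosing function):
objdump -d binary | awk '/callq.*nameOfFunction/ { sub(":", "", $1); print $1 }' | addr2line -e binary -f -C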

How does C++ linking work in practice? [duplicate]

This question already has answers here:
What do linkers do?
(5 answers)
Closed 7 years ago.
How does C++ linking work in practice? What I am looking for is a detailed explanation about how the linking happens, and not what commands do the linking.
There's already a similar question about compilation which doesn't go into too much detail: How does the compilation/linking process work?
EDIT: I have moved this answer to the duplicate: https://stackoverflow.com/a/33690144/895245
This answer focuses on address relocation, which is one of the crucial functions of linking.
A minimal example will be used to clarify the concept.
0) Introduction
Summary: relocation edits the .text section of object files to translate object-file addresses into the final addresses in the executable.
This must be done by the linker because the compiler only sees one input file at a time, but we must know about all object files at once to decide how to:
resolve undefined symbols, like declared but undefined functions
avoid clashes between the multiple .text and .data sections of the different object files
Prerequisites: minimal understanding of:
x86-64 or IA-32 assembly
global structure of an ELF file. I have made a tutorial for that
Linking has nothing to do with C or C++ specifically: compilers just generate the object files. The linker then takes them as input without ever knowing what language compiled them. It might as well be Fortran.
So to reduce the crust, let's study a NASM x86-64 ELF Linux hello world:
section .data
hello_world db "Hello world!", 10
section .text
global _start
_start:
; sys_write
mov rax, 1
mov rdi, 1
mov rsi, hello_world
mov rdx, 13
syscall
; sys_exit
mov rax, 60
mov rdi, 0
syscall
compiled and assembled with:
nasm -felf64 hello_world.asm # creates hello_world.o
ld -o hello_world.out hello_world.o # static ELF executable with no libraries
with NASM 2.10.09.
1) .text of .o
First we disassemble the .text section of the object file:
objdump -d hello_world.o
which gives:
0000000000000000 <_start>:
0: b8 01 00 00 00 mov $0x1,%eax
5: bf 01 00 00 00 mov $0x1,%edi
a: 48 be 00 00 00 00 00 movabs $0x0,%rsi
11: 00 00 00
14: ba 0d 00 00 00 mov $0xd,%edx
19: 0f 05 syscall
1b: b8 3c 00 00 00 mov $0x3c,%eax
20: bf 00 00 00 00 mov $0x0,%edi
25: 0f 05 syscall
the crucial lines are:
a: 48 be 00 00 00 00 00 movabs $0x0,%rsi
11: 00 00 00
which should move the address of the hello world string into the rsi register, which is passed to the write system call.
But wait! How can the compiler possibly know where "Hello world!" will end up in memory when the program is loaded?
Well, it can't, especially after we link a bunch of .o files together, each with its own .data section.
Only the linker can do that, since only it has all of those object files at once.
So the compiler just:
puts a placeholder value 0x0 in the compiled output
gives the linker some extra information describing how to patch the compiled code with the correct address
This "extra information" is contained in the .rela.text section of the object file.
2) .rela.text
.rela.text stands for "relocation of the .text section".
The word relocation is used because the linker will have to relocate the address from the object into the executable.
We can inspect the .rela.text section with:
readelf -r hello_world.o
which contains:
Relocation section '.rela.text' at offset 0x340 contains 1 entries:
Offset Info Type Sym. Value Sym. Name + Addend
00000000000c 000200000001 R_X86_64_64 0000000000000000 .data + 0
The format of this section is fixed documented at: http://www.sco.com/developers/gabi/2003-12-17/ch4.reloc.html
Each entry tells the linker about one address which needs to be relocated, here we have only one for the string.
Simplifying a bit, for this particular line we have the following information:
Offset = C: what is the first byte of the .text that this entry changes.
If we look back at the decompiled text, it is exactly inside the critical movabs $0x0,%rsi, and those that know x86-64 instruction encoding will notice that this encodes the 64-bit address part of the instruction.
Name = .data: the address points to the .data section
Type = R_X86_64_64, which specifies exactly what calculation has to be done to compute the relocated value.
This field is actually processor dependent, and thus documented on the AMD64 System V ABI extension section 4.4 "Relocation".
That document says that R_X86_64_64 does:
Field = word64: 8 bytes, so the 00 00 00 00 00 00 00 00 at offset 0xC get overwritten
Calculation = S + A
S is the value of the symbol the entry refers to; here that is the section symbol .data, which resolves to the address where the .data section ends up in the executable
A is the addend, which is 0 here (a field of the relocation entry)
So S + A is the address of the very first byte of the .data section, and that is what the linker writes into those 8 bytes.
3) .text of .out
Now let's look at the .text section of the executable ld generated for us:
objdump -d hello_world.out
gives:
00000000004000b0 <_start>:
4000b0: b8 01 00 00 00 mov $0x1,%eax
4000b5: bf 01 00 00 00 mov $0x1,%edi
4000ba: 48 be d8 00 60 00 00 movabs $0x6000d8,%rsi
4000c1: 00 00 00
4000c4: ba 0d 00 00 00 mov $0xd,%edx
4000c9: 0f 05 syscall
4000cb: b8 3c 00 00 00 mov $0x3c,%eax
4000d0: bf 00 00 00 00 mov $0x0,%edi
4000d5: 0f 05 syscall
So the only thing that changed from the object file are the critical lines:
4000ba: 48 be d8 00 60 00 00 movabs $0x6000d8,%rsi
4000c1: 00 00 00
which now point to the address 0x6000d8 (d8 00 60 00 00 00 00 00 in little-endian) instead of 0x0.
Is this the right location for the hello_world string?
To decide we have to check the program headers, which tell Linux where to load each section.
We display them with:
readelf -l hello_world.out
which gives:
Program Headers:
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
LOAD 0x0000000000000000 0x0000000000400000 0x0000000000400000
0x00000000000000d7 0x00000000000000d7 R E 200000
LOAD 0x00000000000000d8 0x00000000006000d8 0x00000000006000d8
0x000000000000000d 0x000000000000000d RW 200000
Section to Segment mapping:
Segment Sections...
00 .text
01 .data
This tells us that the .data section, which is the second one, starts at VirtAddr = 0x6000d8.
And the only thing on the data section is our hello world string.
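As a quick sanity check (same binary as above), readelf can hex-dump that section; the bytes of "Hello world!" should show up starting at 0x6000d8:
readelf -x .data hello_world.out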
Actually, one could say linking is relatively simple.
In the simplest sense, it's just about bundling together object files [1], as those already contain the emitted assembly for each of the functions/globals/data... from their respective sources. The linker can be extremely dumb here and just treat everything as a symbol (name) and its definition (or content).
Obviously, the linker needs to produce a file that respects a certain format (generally the ELF format on Unix) and will separate the various categories of code/data into different sections of the file, but that is just dispatching.
The two complications I know of are:
the need to de-duplicate symbols: some symbols are present in several object files and only one should make it into the resulting library/executable being created; it is the linker's job to include only one of the definitions
link-time optimization: in this case the object files contain not the emitted assembly but an intermediate representation; the linker merges all the object files together, applies optimization passes (inlining, for example), compiles the result down to assembly and finally emits its output (see the sketch after the footnote below)
[1] the result of the compilation of the different translation units (roughly, preprocessed source files)
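For reference, a minimal way to see the link-time-optimization case with GCC (a sketch; -flto makes the object files carry GCC's intermediate representation, and the final optimization and code generation happen at link time):
gcc -O2 -flto -c a.c b.c      # a.o and b.o now contain intermediate representation
gcc -O2 -flto a.o b.o -o app  # the "link" step optimizes across both and emits machine code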
Besides the already mentioned "Linkers and Loaders", if you wanted to know how a real and modern linker works, you could start here.