When debugging in Visual Studio, if symbols for a call stack are missing, for example:
00 > HelloWorld.exe!my_function(int y=42) Line 291
01 dynlib2.dll!10011435()
[Frames below may be incorrect and/or missing, no symbols loaded for dynlib2.dll]
02 dynlib2.dll!10011497()
03 HelloWorld.exe!wmain(int __formal=1, int __formal=1) Line 297 + 0xd bytes
04 HelloWorld.exe!__tmainCRTStartup() Line 594 + 0x19 bytes
05 HelloWorld.exe!wmainCRTStartup() Line 414
06 kernel32.dll!_BaseProcessStart#4() + 0x23 bytes
the debugger will display the warning "Frames below may be incorrect and/or missing".
(Note that only frames 01 and 02 have no symbols. Frame 00, where I set a breakpoint, and all the other frames do have symbols loaded.)
Now, I know how to fix the warning (get the PDB file); what I do not quite get is why it is displayed at all. The stack I pasted above is perfectly fine, it's just that I do not have a PDB file for the dynlib2.dll module.
Why does the debugger need a symbols file to make sure the stack is correct?
I think this is because not all functions follow the "standard" stack frame layout. Usually every function starts with:
push ebp        ; save the caller's frame pointer
mov ebp,esp     ; make EBP point to this function's new frame
and ends with
pop ebp         ; restore the caller's frame pointer
ret             ; pop the return address and jump to it
This way every function creates its so-called stack frame. EBP always points to the beginning of the topmost stack frame, and the first two values in every frame are the pointer to the previous stack frame and the function's return address.
Using this information one can easily reconstruct the stack. However:
This stack information won't include function names or parameter info.
Not all functions obey this stack-frame layout. If certain optimizations are enabled (/Oy, omit frame pointers, for instance), the stack layout is different and a naive walk of the EBP chain (see the sketch below) goes wrong.
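To make that concrete, here is a minimal sketch of such a naive EBP-chain walk (illustrative code, not the debugger's actual algorithm; it assumes 32-bit x86 built without /Oy, and Frame/DumpNaiveStack are made-up names):

#include <cstdint>
#include <cstdio>

// Layout of one x86 stack frame under the "standard" prologue/epilogue:
// [EBP+0] holds the caller's saved EBP, [EBP+4] holds the return address.
struct Frame {
    Frame*    previous;        // saved EBP of the caller ("push ebp")
    uintptr_t returnAddress;   // pushed by the "call" instruction
};

void DumpNaiveStack()
{
#if defined(_MSC_VER) && defined(_M_IX86)
    Frame* frame;
    // Start from the current frame pointer (MSVC x86 inline assembly).
    __asm mov frame, ebp
#else
    Frame* frame = static_cast<Frame*>(__builtin_frame_address(0));
#endif
    // Follow the saved-EBP chain. Without a PDB this yields only raw return
    // addresses (no names, no parameters), and it goes wrong as soon as a
    // frame was compiled with /Oy; a real walker also validates each pointer.
    while (frame != nullptr) {
        printf("return address: %p\n",
               reinterpret_cast<void*>(frame->returnAddress));
        frame = frame->previous;
    }
}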
I tried to understand this myself a while ago.
As of 2013, FPO is not used within Microsoft and is generally frowned upon. I did come across a different MS binary technology, used internally, that probably hampers naive EBP-chain traversal: Basic Block Tools.
As noted in the post, PDBs do include 'StackFrameTypeEnum', and elsewhere it's hinted that they include the 'unwind program' for a stack frame. So, all in all, they are still needed, and the gory details of exactly why are not documented.
Symbols are decoupled from the associated binary code to reduce the size of shipping binaries. Check how big your PDB files are - huge, especially compared to the matching binary file (EXE/DLL). You would not want that overhead every time the binary is shipped, installed and used. This is especially important at load time. The symbol information is only for debugging after all, not required for correct operation of your code. Provided you keep symbols that match your shipped binaries, you can still debug problems post mortem with all symbols loaded.
Related
The line profiling output of google-pprof claims that most of the running time of my numerical C++ program is being spent in a function called __nss_database_lookup (see below). Apparently that function is for handling things like the passwd file on UNIX systems. My C++ program should only be doing numerical calculations, allocating memory, and passing a few custom C++ data types around.
What's going on? Is the appearance of that function a mirage, a mere artefact of how google-pprof works? Or is it actually being called and wasting two thirds of my program's running time? If it is being called, what could be calling it? Has something mistakenly called it in one of my C++ classes? How would I track that down?
I'm using Ubuntu 20.04, g++-7 and g++-9.
Total: 1046 samples
665 63.6% 63.6% 665 63.6% __nss_database_lookup ??:0
107 10.2% 73.8% 193 18.5% <function1> file.h:1035
92 8.8% 82.6% 92 8.8% <function2> file.h:...
87 8.3% 90.9% 87 8.3% <function3> file.h:995
17 1.6% 92.5% 734 70.2% <function4> file.h:1128
...
(Function and file names obscured for confidentiality reasons)
A friend of mine ran into a similar issue today. It has been a while since you asked the question, but I would still like to answer it so that anyone else who ends up here can get some hints.
This is because some local symbols (which correspond to static functions in C/C++) are being called; these symbols have no entries in the dynamic symbol table, and their text (code) happens to be placed after __nss_database_lookup. So your profiling tool attributes them to __nss_database_lookup.
For example, your program may call memcpy, and memcpy calls __memmove_avx_unaligned_erms, which is a local symbol in glibc and isn't exported in the dynamic symbol table; its code happens to be placed after __nss_database_lookup, together with other local symbols. Your profiling tool can find nothing about __memmove_avx_unaligned_erms, so it simply reports the time as __nss_database_lookup.
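To illustrate the mechanism with a contrived example (hypothetical code, not glibc itself): internal_helper below has internal linkage, so it gets no entry in the shared library's dynamic symbol table, and a sampling profiler without debug info will charge its samples to whichever exported symbol happens to precede it in .text:

// Hypothetical libwork.so, e.g. built with: g++ -O2 -fPIC -shared -o libwork.so work.cpp
// 'static' gives internal_helper internal linkage: it is a local symbol,
// so "nm -D libwork.so" will not list it.
static void internal_helper()
{
    for (volatile int i = 0; i < 1000000; ++i) {}   // burn CPU time here
}

// exported_entry is a global symbol and does appear in the dynamic symbol table.
extern "C" void exported_entry()
{
    // Samples taken inside internal_helper cannot be resolved from the dynamic
    // symbol table alone; the profiler falls back to the nearest preceding
    // exported symbol, which may be something entirely unrelated.
    internal_helper();
}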
A potential solution is to install the libc-dbg package (the package name may vary across distros), and if your profiling tool is smart enough to load the debug info automatically, it may annotate the symbols correctly. (My friend confirmed that this had some effect with perf.)
This question already has answers here:
X86 Assembly - How to calculate instruction opcodes length in bytes
How to tell length of an x86-64 instruction opcode using CPU itself?
For example, at the address 0x762C51, there is the instruction call sub_E91E50.
In bytes, this is E8 FA F1 72 00.
Next, at the address 0x762C56, there is the instruction push 0. In bytes, this is 6A 00.
Now, when a C++ program reads a function like this, it only has the raw bytes: E8 FA F1 72 00 6A 00
How can I determine where the first instruction ends and the next begins?
For variable-length instruction sets you can't really do this reliably. Many tools will try, but it is often trivial to confuse them if you set out to; compiled code works better.
The best way, which won't necessarily result in a complete disassembly, is to go in execution order (see the sketch below): you need a known correct entry point, then you follow the execution paths from there, look for collisions, and set those aside for a human to figure out.
Simulation is even better, and it might give you better coverage in some areas, but it will also leave gaps where an execution-order disassembler wouldn't.
Even the GNU assembler messes up with variable-length instruction sets (next time please specify the target/ISA, as requested in the assembly tag). So whatever a "full disassembler" is, it can't possibly do this perfectly either, in general.
If someone has given you the entry point (say it's a class assignment), or you compile a function to an object file and, based on the label in the disassembly, you feel confident that it is a valid entry point, then you can start disassembling there in execution order. Understand that in an object file there will be incomplete instructions, to be filled in later at link time; in a linked binary, if you assume that a function label's address is an entry point, you can likewise disassemble in execution order.
C++ really has nothing to do with this: if you have a sequence of bytes and you are sure you know the entry point, it doesn't much matter how the code was created. Unless it intentionally contains anti-disassembly tricks (hand-written or created by a compiler/tool), this is generally not a problem; technically a tool could insert such tricks, and it is trivial for a human to do so by hand.
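As a rough sketch of that execution-order approach (not a working disassembler; the decode callback stands in for a real x86 instruction-length decoder, which is the genuinely hard part):

#include <cstddef>
#include <functional>
#include <queue>
#include <set>
#include <vector>

struct Decoded {
    std::size_t length;                   // bytes consumed by this instruction
    std::vector<std::size_t> successors;  // fall-through and branch-target addresses
};

// Walk the code in execution order from a known entry point, queueing every
// reachable successor address. A fuller version would also record each
// instruction's extent so that overlapping decodes (collisions) can be set
// aside for a human to resolve.
void DisassembleFrom(std::size_t entry,
                     const std::function<Decoded(std::size_t)>& decode)
{
    std::set<std::size_t> visited;
    std::queue<std::size_t> work;
    work.push(entry);
    while (!work.empty()) {
        std::size_t addr = work.front();
        work.pop();
        if (!visited.insert(addr).second)
            continue;                      // already decoded at this address
        Decoded insn = decode(addr);
        for (std::size_t next : insn.successors)
            work.push(next);               // follow the execution paths
    }
}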
I'm trying to better understand the structure of Windows PE executable files, and specifically how the C++ linker handles static variable references. (In my case, Visual Studio C++.)
Say I have the following example:
::PathFindExtension(L"file.exe");
which may translate into assembly like the push instruction shown below (a Debug build in this case).
I understand that the pointer to the PathFindExtension API used by the call instruction is calculated and inserted into the PE image at .exe start-up, when the corresponding DLL is loaded, but I'm not clear on how pointers to static strings are handled.
As you can see, the push instruction:
00568AC6 68 70 48 60 00 push offset string L"file.exe" (604870h)
is encoded as 68 id, i.e. PUSH imm32 [source], so the imm32 is just an absolute reference to memory, 604870h in this case.
But my question is: since that value is hardcoded at compile time, how was that offset (i.e. 604870h) calculated?
And the second question: how does ASLR handle this situation, i.e. when the executable could be loaded at some arbitrary address in memory? (If I understand the ASLR concept correctly.)
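As a side note, a tiny sketch confirming that the absolute address really is embedded in those instruction bytes (the imm32 follows the 68 opcode byte in little-endian order):

#include <cstdint>
#include <cstdio>

int main()
{
    // push offset L"file.exe": opcode 68 followed by a little-endian imm32.
    const uint8_t bytes[] = { 0x68, 0x70, 0x48, 0x60, 0x00 };

    uint32_t imm32 = 0;
    for (int i = 0; i < 4; ++i)
        imm32 |= static_cast<uint32_t>(bytes[1 + i]) << (8 * i);  // skip the opcode byte

    printf("imm32 = 0x%08X\n", imm32);  // prints 0x00604870
    return 0;
}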
While looking through the assembly for a console "hello world" program (compiled with the Visual C++ compiler), I came across this:
pre_c_init proc near
.text:00401AFE mov eax, 5A4Dh
.text:00401B03 cmp ds:400000h, ax
The code above seems to be accessing memory that isn't filled with anything in particular: All segments start at 0x401000 or even further down in the file. (The image base is at 0x400000, but the first segment is at 0x401000).
I used OllyDbg to see what the actual value at 0x400000 is, and every single time it's the same as in the code (0x5A4D). What's going on here?
5A4D is "MZ" in little-endian ASCII, and MZ is the signature of MS-DOS and, more recently, PE executables.
The comparison checks whether the executable has been mapped at the default base address, 0x400000. This, I believe, is used to determine whether it is necessary to perform relocation.
This is discussed further in the following thread: Why does PE need a relocation table?
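A minimal C++ sketch of the spirit of that check (Windows-specific; it assumes the preferred image base is 0x400000, as in the question):

#include <windows.h>
#include <cstdint>
#include <cstdio>

int main()
{
    // GetModuleHandle(nullptr) returns the address at which the EXE is actually mapped.
    auto base = reinterpret_cast<const IMAGE_DOS_HEADER*>(GetModuleHandle(nullptr));

    // IMAGE_DOS_SIGNATURE is 0x5A4D, i.e. the bytes 'M','Z' read as a little-endian WORD.
    printf("e_magic at the actual base: 0x%04X\n", base->e_magic);

    // The CRT check boils down to: was the image loaded at its preferred base,
    // or has it been relocated (e.g. by ASLR)?
    bool atPreferredBase = reinterpret_cast<uintptr_t>(base) == 0x400000;
    printf("mapped at preferred base 0x400000: %s\n",
           atPreferredBase ? "yes" : "no (relocated)");
    return 0;
}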
VCamD.ax!CFactoryTemplate::CreateInstance() + 0x3f bytes
> VCamD.ax!CClassFactory::CreateInstance() + 0x7f bytes
What are 0x7f and 0x3f?
Those values are the offset of the instruction pointer from the start of the listed function.
It's a way of expressing which assembly instruction is currently being executed in the function, similar to having the current source code line highlighted in the editor, except that the offset is at the assembly level, not the source-code level.
You got .pdb files that were stripped: their source file and line-number info was removed. That is typical, for example, of the .pdb files you can get from the Microsoft symbol server.
Which is okay; you can't get the source code anyway. Without the line-number info, the debugger falls back to using the 'closest' symbol whose address it does have available, typically the function entry point, and adds the offset of the call instruction.
Sometimes you get bogus information if debug info is missing for long stretches of code; that 'closest' symbol is then in no way related to the original code. Any time the offset gets to be larger than, oh, 0x200, you ought to downplay the relevance of the displayed symbol name. There are a couple of places in the Windows code where you actually see the name of a string variable. The stack trace you posted has high confidence; those offsets are small.
If it's at the top of the stack, then it's the offset of the instruction pointer relative to the given symbol. So if the top stack frame is VCamD.ax!CClassFactory::CreateInstance() + 0x7f bytes, and VCamD.ax!CClassFactory::CreateInstance() is at location 0x3000 in memory (fictional obviously), then EIP is currently 0x307f. This shows you how far into the function you are.
If it's further up the stack, then it's the return offset from the start of the given symbol. So if VCamD.ax!CFactoryTemplate::CreateInstance() called VCamD.ax!CClassFactory::CreateInstance(), and VCamD.ax!CFactoryTemplate::CreateInstance() is at location 0x4000, then when VCamD.ax!CClassFactory::CreateInstance() returns, EIP will be at 0x403f.
One important thing to notice, though, is that if you see something like somedll!SomeFunc() + 0x5f33 bytes, you can be pretty certain that this is NOT the correct symbol. Since functions are rarely 0x5f33 bytes long, you can tell that EIP is simply in a place the debugger doesn't have symbols for.
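For what it's worth, this is roughly how such strings can be produced programmatically: dbghelp's SymFromAddr resolves an address to the nearest symbol it knows about and reports the remaining distance as a displacement. A minimal sketch (error handling omitted):

#include <windows.h>
#include <dbghelp.h>
#include <cstdio>
#pragma comment(lib, "dbghelp.lib")

void PrintSymbolPlusOffset(DWORD64 address)
{
    HANDLE process = GetCurrentProcess();
    SymInitialize(process, nullptr, TRUE);      // load symbols for the loaded modules

    char buffer[sizeof(SYMBOL_INFO) + MAX_SYM_NAME] = {};
    auto symbol = reinterpret_cast<PSYMBOL_INFO>(buffer);
    symbol->SizeOfStruct = sizeof(SYMBOL_INFO);
    symbol->MaxNameLen = MAX_SYM_NAME;

    DWORD64 displacement = 0;                   // bytes past the start of the symbol
    if (SymFromAddr(process, address, &displacement, symbol))
        printf("%s + 0x%llx bytes\n", symbol->Name, displacement);

    SymCleanup(process);
}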