Consider the following function, which won't get inlined, and assume x86 as the platform:
void doSomething(int & in){
//do something
}
First, I'm not sure such a scenario would ever happen, but since I think it is possible, I'll ask. Suppose that whenever this function is called, in every caller, the argument happens to lie exactly at the top of the caller's stack frame, so that the callee could reach it through the EBP register (after the callee has moved the contents of ESP into EBP) in assembly language. In this exceptional case, would you suggest not declaring a parameter for the function at all and using assembly to access the argument, or leaving the function definition as it is and letting the compiler do what it does? I haven't read anywhere that a compiler would consider such an exceptional case as a factor when choosing a calling convention, and I think it will simply generate code that passes a pointer to the argument on the callee's stack frame or in a register.
First of all, it's SO easy for this to break - for example, you get a different version of the compiler that generates code differently. Or you change optimisation settings. Never mind the situation where you suddenly need to use doSomething in a different place and then it won't work, because the variable is no longer on the top of the stack.
Second, assuming that the code inside the function is short enough, it's highly likely that the compiler will inline the function, so you don't "lose" anything at all.
Third, with modern compilers a single argument is typically passed in a register anyway, so there is no benefit to this when optimisation is enabled.
If you really think there is a worthwhile benefit in this, and the compiler won't inline or otherwise optimise the code [have you looked at the generated code?], then try using __forceinline (MSVC) or the always_inline attribute (GCC/Clang) or whatever it is called in your compiler (most compilers have such an option). If that doesn't work, use a macro to inline it by hand. Or simply move the code to where it is called by "copy-n-paste".
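For instance, a minimal sketch (the exact spelling is compiler-specific: __forceinline is the MSVC keyword, always_inline is the GCC/Clang attribute):

#if defined(_MSC_VER)
// MSVC spelling: ask for inlining even at low optimisation levels
__forceinline void doSomething(int &in) { in += 1; /* stand-in for "do something" */ }
#else
// GCC/Clang spelling
__attribute__((always_inline)) inline void doSomething(int &in) { in += 1; }
#endif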
Your note "the argument to be supplied lies exactly at the top of the caller stack frame so that in the called function access to that through ebp register" contains a factual misunderstanding.
That's because of the following things:
you're assuming a stack-based calling convention, i.e. function arguments being pushed onto the stack by the caller before calling the function. That's not generally the case; even on 32-bit x86, there are non-stack-based calling conventions (for example, Windows fastcall or the GNU GCC ones used in the 32-bit Linux kernel). If one of those is used, the argument wouldn't be found on top of the stack, but rather in ... whatever register is used to hold the first argument.
But even if you have stack-based parameter passing ... still:
you've missed that on x86, at the very least, the call instruction pushes a return address onto the top of the stack, so that when the first instruction of a function reached that way is executing, ESP will not point to the first arg of that function, but to the return address.
you've missed that EBP is a callee-saved register (preserved over function calls) and is not initialized on your behalf by the architecture - the generated code has to set it up explicitly. A function that wants to use it (even if only as a frame pointer) is therefore obliged to save it somewhere before using it. That's why the normal prologue is push EBP; mov EBP, ESP (you cannot do only mov EBP, ESP, because that would overwrite the caller's EBP, which you may not do). Therefore, if you want to refer to the first argument of the function, you need [ EBP + 8 ], not [ EBP ].
If you're not using framepointers, then the first argument (due to the call which was used to reach the function having pushed a return address) is at [ ESP + 4 ] not [ ESP ].
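To make the offsets concrete, here is a minimal sketch of reading the first argument by hand (MSVC-style inline asm, 32-bit x86, default cdecl, frame pointer kept, no inlining assumed); it is exactly the kind of fragile code the rest of this answer warns against:

void doSomething(int &in)
{
    int copy;
    __asm {
        mov eax, [ebp + 8]   ; [ebp] = saved EBP, [ebp+4] = return address,
                             ; [ebp+8] = first argument (the pointer behind 'in')
        mov eax, [eax]       ; dereference that pointer
        mov copy, eax        ; 'copy' now holds the value of the referenced int
    }
    (void)copy;
}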
I hope this clarifies a little.
I agree with the other posters that clarifying the question would help, what exactly you want to achieve and why you think assembly language might be useful here.
No, I would not. Calling conventions vary (between x86 and x86_64, for example); parameters may be pushed onto the stack or passed in registers, and you can't know for sure where they'll be.
Writing this in assembly, unless you really know what you're doing, is likely to lead to code with undefined behavior.
How do I replace all the function calls in an arm64 binary with a call to a specific function? The intent is to 'insert' an indirection so that I can log all function calls.
Example:
mov x29, sp
mov w0, #10
bl bar(int)
...
# Replace "bl bar" with a call to my_func. my_func will now take all the parameters and forward them to bar.
mov x29, sp
mov w0, #10
bl my_func(...)
The replacement function prints the pointer to the function, and then invokes the callee with the provided arguments. I'm not sure this forwarding will work in all cases, but the intent is to have something like this:
#include <cstdio>
#include <functional>
#include <utility>

template<class F, class... Args>
void my_func(F&& f, Args&&... args) {
    printf("calling: %p", (void*)f);                              // log the callee's address
    std::invoke(std::forward<F>(f), std::forward<Args>(args)...); // note the pack expansion
}
TL:DR: write asm wrapper functions that call a C++ void logger(void *fptr) which returns. Don't try to tailcall from C++ because that's not possible in the general case.
An alternate approach might be to "hook" every callee, instead of redirecting at the call site. But then you'd miss calls to functions in libraries you weren't instrumenting.
I don't think C++ lets you forward any/all args without knowing what they are. That's easy to do in asm for a specific calling convention, since the final invocation of the real function can be a tailcall jump, with the return address, all the arg-passing registers, and the stack pointer set up how they were. But only if you're not trying to remove an arg.
So instead of having C++ do the tailcall to the real function, have asm wrappers just call a logging function. Either printf directly, or a function like extern "C" void log_call(void *fptr); which returns. It's compiled normally so it'll follow the ABI, so the hand-written asm trampoline / wrapper function knows what it needs to restore before jumping.
Capturing the target address
bl my_func won't put the address of bar anywhere.
For direct calls you could use the return address (in lr) to look up the target, e.g. in a hash table. Otherwise you'd need a separate trampoline for every function you're hooking. (Modifying the code to hook at the target function instead of the call sites wouldn't have this problem, but you'd have to replace the first instruction with a jump somewhere which logs and then returns. And which does whatever that replaced first instruction did. Or replace the first couple instructions with one that saves the return address and then calls.)
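A minimal sketch of that lookup, assuming you recorded the original target of every patched call site while modifying the binary (the map and the log_call name are made up for illustration; the asm wrapper passes the return address in):

#include <cstdio>
#include <unordered_map>

// Filled in while patching the binary: return address of each rewritten
// call site -> address of the function it originally called.
static std::unordered_map<void*, void*> call_site_targets;

extern "C" void log_call(void *return_address)
{
    auto it = call_site_targets.find(return_address);
    void *target = (it != call_site_targets.end()) ? it->second : nullptr;
    std::printf("call from %p to %p\n", return_address, target);
    // Returns normally; the asm wrapper restores the arg registers and
    // jumps to the real target.
}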
But any indirect calls like blr x8 will need a special stub.
Probably one trampoline stub for each different possible register that holds a function address.
Those stubs will need to be written in asm.
If you were trying to call a wrapper in C++ the way you imagined, it would be tricky because the real args might be using all the register-arg slots. And changing the stack pointer to add a stack arg would make it a new 9th arg or something weird. So it works much better just to call a C++ function to do the logging, then restore all the arg-passing registers which you saved on the stack (16 bytes at a time with stp).
That also avoids the problem of trying to make a fully transparent wrapper function in C++.
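A rough sketch of such a wrapper, written as a file-scope asm block in a C++ translation unit (GNU toolchain, AArch64; hook_stub, real_target and log_call are hypothetical names, and a real tool would generate one stub per hooked call site):

extern "C" void log_call(void *return_address);   // ordinary C++ function, returns normally

asm(R"(
    .text
    .globl  hook_stub
hook_stub:
    stp x29, x30, [sp, #-16]!   // save frame pointer and link register (return address)
    mov x29, sp
    stp x0, x1, [sp, #-16]!     // save integer arg registers, 16 bytes at a time
    stp x2, x3, [sp, #-16]!
    stp x4, x5, [sp, #-16]!
    stp x6, x7, [sp, #-16]!
    stp x8, x9, [sp, #-16]!     // x8 = indirect-result register, worth preserving too
    stp q0, q1, [sp, #-32]!     // FP/SIMD arg registers
    stp q2, q3, [sp, #-32]!
    stp q4, q5, [sp, #-32]!
    stp q6, q7, [sp, #-32]!
    mov x0, x30                 // pass the original return address to the logger
    bl  log_call
    ldp q6, q7, [sp], #32       // restore everything in reverse order
    ldp q4, q5, [sp], #32
    ldp q2, q3, [sp], #32
    ldp q0, q1, [sp], #32
    ldp x8, x9, [sp], #16
    ldp x6, x7, [sp], #16
    ldp x4, x5, [sp], #16
    ldp x2, x3, [sp], #16
    ldp x0, x1, [sp], #16
    ldp x29, x30, [sp], #16
    b   real_target             // tail-jump: args, lr and sp now look exactly as they did
)");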
Removing one arg and forwarding the rest
Your design requires my_func to remove one arg and then forward an unknown number of other args of unknown type to another function. That's not even possible in ARM64 asm in general, so it's not surprising that C++ doesn't have syntax that would require the compiler to do it.
If the arg was actually a void* or function pointer, it would take one register, so removing it would move the next seven regs down (x1 to x0, etc.), and the first stack arg would then go in x7. But the stack has to stay 16-byte aligned, so you can't load just it and leave the later stack args in the right place.
A workaround for that in some cases would be to make that f arg 16 bytes, so it takes two registers. Then you can mov x2,x3 down to x0,x1 (and so on for the higher registers), and ldp 16 bytes of stack args into x6,x7. Except what if that arg was one that always gets passed in memory, not registers, e.g. part of an even larger object, or non-POD, or whatever criterion the C++ ABI uses to decide it must always have an address.
So maybe f could be 32 bytes so it goes on the stack, and can be removed without touching arg-passing registers or needing to pull any stack args back into registers.
Of course, in the real case you didn't have a C++ function that could add a new first arg and then pass on all the rest, either. That's something you could again only do in special cases, like passing on an f.
It's something you could do in asm on 32-bit x86 with a pure stack-args calling convention and no stack-alignment requirement; you can move the return address up one slot and jump, so you eventually return to the original call-site with the stack pointer restored to how it was before calling the trampoline that added a new first arg and copied the return address lower.
But C++ won't have any constructs that impose requirements on ABIs beyond what C does.
Scanning a binary for bl instructions
That will miss any tailcalls that use b instead of bl. That might be OK, but if not, I don't see a way to fix it: unconditional b will be all over the place inside functions. (With some heuristics for identifying functions, a b that jumps outside the current function can be assumed to be a tailcall while other b instructions aren't, since compilers usually make all the code for a single function contiguous, except when some blocks go into a .text.cold section because the compiler identifies them as unlikely.)
AArch64 has fixed-width instructions that require alignment, so consistent disassembly of the compiler-generated instructions is easy, unlike x86. So you can identify all the bl instructions.
But if AArch64 compilers mix in any constant data between functions, like 32-bit ARM compilers do (literal pools for PC-relative loads), false positives are possible even if you limit it to looking at parts of the binary that are in executable ELF sections. (Or program segments if section headers have been stripped.)
I don't think bl gets used for anything other than function calls in compiler-generated code. (e.g. not to private helper functions the compiler invented.)
You might want a library to help parse ELF headers and find the right binary offsets. Looking for bl instructions might be something you do by scanning the machine code, not disassembly.
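As a starting point, here is a minimal sketch of that scan over raw machine code (assumes a little-endian AArch64 binary; reading the ELF sections and choosing section_vaddr is left to the caller, and the function name is made up):

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Scan 'size' bytes of code for BL instructions (opcode bits [31:26] = 100101).
void scan_for_bl(const uint8_t *code, size_t size, uint64_t section_vaddr)
{
    for (size_t off = 0; off + 4 <= size; off += 4) {
        uint32_t insn;
        std::memcpy(&insn, code + off, 4);
        if ((insn & 0xFC000000u) == 0x94000000u) {               // BL imm26
            int64_t imm26 = (int64_t)(int32_t)(insn << 6) >> 6;  // sign-extend the 26-bit offset
            uint64_t target = section_vaddr + off + (uint64_t)(imm26 * 4);
            std::printf("bl at 0x%llx -> 0x%llx\n",
                        (unsigned long long)(section_vaddr + off),
                        (unsigned long long)target);
        }
    }
}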
If you're modifying compiler asm output before even assembling, that would make things easier; you could add instructions at call sites. But for existing binaries you can't recompile from source.
I am trying to compile the below code with ICC 2018:
__asm {
    mov ebx, xx    ; xx address to registers
}
where xx is of type int16. This is the first instruction inside my function.
I get the below warning with the above assembly code:
warning #13212: Reference to ebx in function requiring stack alignment
Surprisingly, when I replaced ebx with eax or esi, I saw the warning go away. I am not able to understand why I am seeing the issue only with ebx; as far as I know, both ebx and eax have the same architecture (32-bit registers).
Also, I didn't see the warning when I compiled the same code with ICC 2013.
Can anyone help me resolve this warning?
Thanks!
The compiler on the platform of choice (ICC as it mimics MSVC's behavior) uses EBX to save the original stack pointer value if additional alignment is required. Therefore you cannot overwrite it safely.
The program's behavior would become undefined. The compiler warning just tells you about that.
To help with the save/restore of all registers affected by assembly blocks, an extended syntax with so-called clobber lists is recommended. Your example uses MSVC-style __asm{...} syntax. In MSVC-style syntax, the compiler detects what registers you touch and saves/restores them for you.
ICC also supports GCC-like notation for extended asm with clobber lists: asm("...":::). It also supports simpler GCC asm("...") without the clobber list part. See this question for more details (thanks Peter Cordes for the link and explanation).
Documentation that I found useful when I was learning to use clobber lists (I actually use it all the time because it is impossible to remember its rather human-unfriendly syntax):
https://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html#s5
https://software.intel.com/en-us/node/694235
The simple inline assembly blocks without clobber lists can be safely used only in the following situations:
Instructions in the block do not modify registers defined in the ABI. Thus GPRs, the stack pointer, and flags should be untouched; if there are floating-point calculations in the function, FPU/vector registers are off limits as well. Even memory writes can lead to bugs, because the compiler relies on known values residing in memory. In contrast, one can issue INT3, HLT, WRMSR, etc., which either touch no registers or affect only system registers that the compiler does not use. However, the majority of such instructions are privileged and cannot be used in user applications. One can also read all available registers, provided such reads have no side effects.
The assembler block is the only statement in a function's body. In this case, it has to abide by the calling conventions of the chosen platform: how the function's arguments are passed, where its return value should be placed, etc. The block will also need to cope with compiler-generated prologue and epilogue code, which has its own assumptions about registers. That code is not strictly stable, nor portable, nor guaranteed to be the same at different optimization levels. With GCC on x86, I was unable to disable prologue/epilogue generation, so there is still some risk of violating compiler assumptions.
You save all clobbered registers yourself and restore them afterwards. This is relatively easy because you can see your own assembler code and can tell whether a register gets modified by it or not. However, make a mistake and the compiler will not be there to point it out. It is very nice of ICC 2018 to actually give a warning, even though it could have just treated the asm block as a black box.
You "stole" a register from compiler. GCC allows doing that with register asm statement (do not remember if the same trick works with other compilers). You can thus declare that a variable is bound to a certain register. Be aware that such technique reduces number of registers available to compiler for its register allocation phase, and that will degrade quality of code it generates. Ask for too many registers, and the compiler will be helpless and refuse to work. Similarly, one cannot ask for registers with a dedicated role to be taken away from a compiler, such as stack pointer or program counter.
That said, the extended asm syntax with clobber lists provides a nice alternative. It turns an asm section from a black box to something of an inline internal "function" that declares its own inputs, outputs and resources it overwrites which are shared with the outer function.
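For the original example, a small sketch of that style (GCC/ICC extended asm; the function names are made up): declare inputs and outputs through constraints so the compiler picks the registers, and name any fixed register you really must touch in the clobber list.

// Let the compiler choose the destination register instead of hard-coding EBX:
static inline int load_int16(short xx)
{
    int out;
    asm("movswl %1, %0"        // sign-extend the 16-bit value into a 32-bit register
        : "=r"(out)            // output: any general-purpose register
        : "m"(xx));            // input: the variable in memory
    return out;
}

// If an instruction really does use a fixed register, declare it as clobbered,
// so the compiler saves it or avoids it (here RDTSC writes EDX:EAX):
static inline unsigned read_tsc_low(void)
{
    unsigned lo;
    asm volatile("rdtsc" : "=a"(lo) : : "edx");
    return lo;
}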
From http://en.wikipedia.org/wiki/Stack_pointer#Structure
I am wondering why the return address for a function is placed above the parameters for that function?
It would make more sense to me to have the Return Address pushed onto the stack before the Parameters for DrawLine, because the parameters are not required any more when the Return Address is popped to return to the calling function.
What are the reasons for preferring the implementation shown in the diagram above?
The return address is usually pushed by the call machine instruction (part of the native instruction set), while the parameters and local variables are pushed with several machine instructions that the compiler generates.
Thus, the return address is the last thing pushed by the caller, and it is pushed before anything (local variables) pushed by the callee.
The parameters are all pushed before the return address, because the jump to the actual function and the pushing of the return address onto the stack are done by the same machine instruction.
Also, another reason: the caller is the one allocating space on the stack for the parameters, so it (the caller) should also be the one that cleans it up.
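A minimal MSVC-style sketch of that sequence (32-bit x86, cdecl; drawLine is a hypothetical function taking four ints), including the caller cleanup just mentioned:

extern "C" void __cdecl drawLine(int x0, int y0, int x1, int y1);

void caller()
{
    __asm {
        push 40         ; y1 - cdecl pushes the arguments right to left
        push 30         ; x1
        push 20         ; y0
        push 10         ; x0
        call drawLine   ; call pushes the return address last, on top of the parameters
        add  esp, 16    ; cdecl: the caller removes the parameter space afterwards
    }
}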
The reason is simple: The function arguments are pushed onto the stack by the calling function (which is the only one which can do it because only it has the necessary information; after all the whole point of doing so is to pass that information to the called function). The return address is pushed to the stack by the function call mechanism. The function is called after the calling function has set up the parameters, because after the call it's the called function which is executed, not the calling one.
OK, so now you could argue that the calling function could put the parameters beyond the currently used stack, and the called function could then just adjust the stack pointer accordingly. But that would not work out well because at any time there could be an interrupt or a signal, which would push the current state onto the stack in order to restore later (I wouldn't be surprised if a task switch did so, too). But if you set up the parameters beyond the current stack, those asynchronous events would overwrite it, and since you cannot predict when they will happen, you cannot avoid that (beyond disabling, which may have other drawbacks or even be impossible, in the case of task switch). Basically, everything beyond the current stack has to be considered volatile.
Also note that this is independent of the question of who cleans up the parameters. In principle, the called function could call the destructors of the arguments even if physically they lie in the caller's stack frame. Also, many processors (including the x86) have instructions which automatically pop extra space above the return address on return (for example, Pascal compilers usually did that, because in Pascal you don't have any cleanup beyond returning memory, and at least for the processors of the time it was more efficient to clean up with that processor instruction; I have no idea if that is still true for modern processors). However, C didn't use that mechanism because of variable-length argument lists: for those, the mechanism isn't applicable, because you'd need to know at compile time how much extra space to release, and K&R C did not require variadic functions to be forward-declared (C89 does, but few if any compilers take advantage of that, due to compatibility with old code), so there was no way for the calling function to know whether it needed to clean up the arguments, unless it simply did so always.
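A small illustration of the two cleanup styles as they show up in MSVC declarations (hypothetical functions):

// Callee cleanup: the function returns with "ret 8" and pops its own parameters.
void __stdcall draw_stdcall(int x, int y);

// Caller cleanup: the function returns with a plain "ret" and each caller does
// "add esp, 8" afterwards; this is what C's variadic functions effectively require.
void __cdecl draw_cdecl(int x, int y);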
Original question:
Why is the this pointer 0 in a VS c++ release build?
When breaking in a Visual Studio 2008 SP1 release build with the /Zi (Compiler: Debug Information Format - Program Database) and /DEBUG (Linker: Generate Debug Info, yes) options, why are 'this'-pointers always 0x00000000?
EDIT: Rephrased question:
My original question was quite unclear, sorry for that. When using the Visual Studio 2008 debugger to step through a program, I can see all variables except the local object's member variables. This is probably because the debugger derives these from the this pointer, but VS always says it's 0x00000000, so it cannot derive the current object's member variables (it does not know the memory location of the object).
When loading a megadump (like a Windows minidump, but containing the entire memory space of the process), I can look at all my local variables (defined in the function) and at entire tree structures on the heap that I have pointers to.
For example: when breaking in A::foo() in Release mode
'this' will have value 0x00000000
'f_' will show garbage
Somehow this information needs to be available to the process. Is this a missing feature in VS2008? Any other debugger that does handle this properly?
class A
{
    void foo() { /*break here*/ }
    int f_;
};
As some others have mentioned, compiling in Release mode makes certain optimizations (especially eliminating the use of ebp/rbp as a frame pointer) that break assumptions on which the debugger relies for figuring out your local variables. However, knowing why it happens isn't very helpful for debugging your program!
Here's a way you can work around it: at the very beginning of a method call (breaking on the first line of the function, not the opening brace), the this pointer will always be found in a specific register (ecx on 32-bit systems or rcx on 64-bit systems). The debugger knows that and so you should be able to see the value of this right at the start of your method call. You can then copy the address from the Value column and watch that specifically (as (MyObject *)0x003f00f0 or whatever), which will allow you to see into this later in the method.
If that's not good enough (for example, because you only want to stop when a bug manifests itself, which is a very small percentage of the time the given method is called), you can try this slightly more advanced (and less reliable) trick. Usually, the this pointer is taken out of ecx/rcx very early in a function call, because that is a "caller-saves" register, meaning that its value may be clobbered and not restored by function calls your method makes (it's also needed for some instructions that can only use that register for their operand, like REP* and some of the shift instructions). However, if your method uses the this pointer a lot (including the implicit use of referring to member variables or calling virtual member functions), the compiler will probably have saved this in another register, a "callee-saves" register (meaning that any function that clobbers it must restore it before returning).
The practical upshot of this is that, in your watch window, you can try looking at (MyObject *) ebp, (MyObject *) esi, and so on with other registers, until you find that you're looking at a pointer that is probably the correct one (because the member variables line up with your expectation of the contents of this at the time of your breakpoint). On x86, the callee-saved registers are ebp, esi, edi, and ebx. On x86-64, they are rbp, rsi, rdi, rbx, r12, r13, r14, and r15. If you don't want to search all those, you could always try looking at the disassembly of your function prologue to see what ecx (or rcx) is being copied into.
Local variables (including this) when viewed in the Locals window cannot be relied upon in the Release build in the way that they can in Debug builds. Whether the variable value shown is correct at any given instruction depends on how the underlying register is being used at that point. If the code runs OK in Debug it's most unlikely that the value is actually 0.
Optimization in Release builds makes values in the Locals window a crap shoot, to the naked eye. Without concurrent display and correlation of the Disassembly window, you cannot be sure that the Locals window is telling you the actual value of the variable. If you step through the code (maybe in Disassembly not Source) to a line that actually uses this, it's more likely that you will see a valid value there.
Because you wrote a bugged program and called a member function on a NULL pointer.
Edit: Reread your question. Most likely, it's because the optimizer did a number on your code and the debugger can't read it anymore. If you have a problem specific to the Release build, then it's a hint that your code has a dodgy #ifdef in it, or that you invoked UB that just happens to work in Debug mode. Otherwise, debug with the Debug build. However, that's not terribly helpful if you actually have a problem in Release mode that you can't find.
Your function foo is inline (it's declared in the class definition, so is implicitly inline), and doesn't access any members. Therefore the optimizer will likely not actually pass the this pointer at all when it compiles the code, so it is not available to the debugger.
In release builds, the optimizer will rearrange code quite substantially in order to improve performance, particularly with inline functions (though it does optimize other functions too, especially if whole program optimization is enabled). Rather than passing this, it may instead pass a pointer to a used member directly, or even just pass the member's value in a register that it loaded for a previous function call.
Sometimes the debug info is enough that the debugger can actually piece together a this pointer, and the values of local variables. Often, it is not, and the this pointer shown in the watch window (and consequently the member variables) are nonsense.
Because it is a release build. The entire point in optimizations is to change the implementation details of the program, while preserving the overall functionality.
Does the program still work? Then it doesn't matter that the this pointer is seemingly null.
In general, when you're working with a release build, you should expect that the debugger is going to get confused. Code is going to be reordered, variables removed entirely, or containing weird unexpected values.
When optimizations are enabled, no guarantees are given about any of these things. But the compiler won't break your program. If it worked without optimizations, it'll still work with optimizations. If it suddenly doesn't work, it's because you have a bug that was only exposed because the compiler optimized and modified the code.
Are they "const" functions?
A const function is one declared with the keyword const, which indicates that it will not change any of the members, only read them (like accessor functions).
An optimising compiler may not bother passing the 'this' pointer to some const functions if the function doesn't even read non-static member variables.
An optimising compiler may also search for functions which could be const, make them const, and then not pass a this pointer into them, causing the debugger to be unable to find the hook.
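A hypothetical illustration of that point:

class Widget {
public:
    Widget() : size_(0) {}
    int version() const { return 3; }  // touches no member data; an optimizer may compile
                                       // it like a free function, so no usable 'this'
                                       // ever needs to exist
    int size() const { return size_; } // reads a member, so it still needs 'this'
private:
    int size_;
};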
It isn't the this pointer that is NULL, but rather the pointer you are using to call a member function:
class A
{
public:
    void f() {}
};

int main()
{
    A* a = NULL;
    a->f(); // D'OH! NULL pointer access ...

    // FIX
    a = new A;  // note: assign to the existing 'a', don't redeclare it
    a->f(); // Aha!
    delete a;
}
As others have already said, you should make sure that the compiler does not do anything that can confuse the debugger, which optimizations are likely to do.
The fact that you have a NULL this pointer can happen if you call the function through a null object pointer, like:
A* b=NULL;
b->foo();
The function is not static here, but it is being called as if it were.
The best spot to find the real this pointer is to take a look at the stack. For non-static class functions, the this pointer MUST be the first (hidden) argument of your function.
class A
{
    void foo() { } // this is really "void foo(A *this)"
    int f_;
};
If your this pointer is null here, then you have a problem before calling the function. If the pointer is correct here, then your debugger is kind of messed up.
I've been using Code::Blocks with MinGW for years now, with the built-in debugger (gdb).
I only have problems with the this pointer when I have optimizations turned on; otherwise the debugger always knows the this pointer and can dereference it at any time.
I would like to be able to determine at runtime whether a pointer is on the stack, for a number of reasons: for example, if I pass it into a function call, I can determine whether I need to clone it or not, or whether I need to delete it.
In Microsoft C (VC 6, 7, 8), is there a way to bounds-check a pointer to see if it is on the stack or not? I am only concerned with determining this on the thread that owns the stack the object was placed on.
something like
static const int __stack_size
and __stack_top
????
Thanks!
Knowing whether an object is on the stack or heap isn't going to tell you whether it should be cloned or deleted by the called function. After all, you can clone either kind, and while you shouldn't try to delete a stack-allocated object, you shouldn't try to delete all heap pointers either.
Having a function that will make some arcane check to see whether it should delete a passed pointer or not is going to cause confusion down the line. You don't want a situation where you may or may not be able to refer to fields in an object you passed, depending on context. Nor do you want to risk a mistake that will result in trying to free a stack object.
There isn't any standard way to tell what a pointer points to, and any nonstandard way is likely to break. You can't count on stack contiguity, particularly in multithreaded applications (and somebody could easily add a thread to an application without realizing the consequences).
The only safe ways are to have a calling convention that the called function will or will not delete a passed object, or to pass some sort of smart pointer. Anything else is asking for trouble.
Interesting question.
Here's an idea on how to determine it, but not a function call.
Create a dummy variable at the very start of your application on the stack.
Create a variable on the stack in a function isOnStack( void *ptr )
Check to see that the 'ptr' is between the dummy variable and the local variable.
Remember that the stack is contiguous for a given thread. I'm not sure what would happen when you started checking from one thread to another for this information.
If it's not in the stack, then it must be on the heap.
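A rough sketch of that idea (single-threaded only; it assumes a contiguous, downward-growing stack, which is exactly the assumption the other answers warn about; all names are made up):

static const char *g_stack_base = 0;   // captured near the top of main()

void record_stack_base(const void *p) { g_stack_base = static_cast<const char*>(p); }

bool isOnStack(const void *ptr)
{
    char local;                                  // roughly the current top of the stack
    const char *top = &local;
    const char *p   = static_cast<const char*>(ptr);
    return p >= top && p <= g_stack_base;        // stack grows downward: top < base
}

int main()
{
    char base_marker;                  // lives in (roughly) the bottom-most frame
    record_stack_base(&base_marker);

    int onStack = 0;
    int *onHeap = new int(0);
    bool a = isOnStack(&onStack);      // expected: true
    bool b = isOnStack(onHeap);        // expected: false
    delete onHeap;
    return (a && !b) ? 0 : 1;
}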
I do not know of any method to determine where an object was allocated.
I think this kind of behaviour should be avoided. Such things should, IMHO, be solved by a contract between the user and the library developer. State these things in the documentation! If unsure, copy the object (which requires a copy constructor, and saves you from trying to copy uncopyable objects).
You can also use smart pointers from Boost. If you are unsure when an object is no longer needed, pass it as a shared pointer.
Doing this depends on the calling convention of the function. Some calling conventions place arguments in registers, others place them in memory on the stack. Each one is a different agreement between the caller and callee. So at any function boundary in the stack, a different convention could have been used. This forces you to track the calling convention used at every level.
For example, in fastcall, one or more arguments can be passed via registers.
See MSDN for more. This would mess up any scheme to figure out if an address exists within a certain range. In MS's thiscall, the this pointer is passed via a register, so it would not resolve to somewhere within the range of values between the beginning and end of the stack.
Bottom line: research calling conventions; they specify how stack memory will be laid out. Here is a good tutorial.
Note this is very platform specific!
This is very platform specific, and IMO suitable only for debug build diagnostics. What you'd need to do (on WIntel) is this:
When a thread is created, create a stack variable, and store its address in a global (threadid, stack base address) map.
IsOnStack needs to create its own local variable, and check if the pointer passed is between the stack base and the address in the current stack frame.
This will not tell you anything about variables within other threads. Stack addresses decrease as the stack grows, so the base address is higher than the current address.
As a portable solution, I'd pass a boost::shared_ptr, which can be associated with a deleter. (In boost, this is not a template parameter, so it doesn't "infect" the function consuming the pointer).
you can create an "unmanaged" pointer like this:
#include <boost/shared_ptr.hpp>

inline void boost_null_deleter(void *) {}

template <typename T> inline
boost::shared_ptr<T> unmanaged_ptr(T * x)
{
    return boost::shared_ptr<T>(x, ::boost_null_deleter);
}
and call your function like this
Foo local = { ... };
FooPtr heapy(new Foo);
FunnyFunc(unmanaged_ptr(&local));
FunnyFunc(heapy);
I've wanted such a feature in C++ for a while now, but nothing good really exists. The best you can hope for is to document that you expect to be passed an object that lives on the heap, and then to establish an idiom in the code so that everyone working on the code base will know to pass heap allocated objects to your code. Using something like auto_ptr or boost::shared_ptr is a good idiom for this kind of requirement.
Well, I agree there is probably a better way of doing what you're trying to do. But it's an interesting question anyway. So for discussion's sake...
First, there is no way of doing this in portable C or C++. You have to drop to assembly, using at least an asm{ } block.
Secondly, I wouldn't use this in production code. But for VC++/x86 you can find out if a variable is on your stack by checking that its address is between the values of the ESP and EBP registers.
Your ESP ( Extended Stack Pointer, the low value ) holds the top of your stack and the EBP ( Extended Base Pointer ) usually the bottom. Here's the Structure of the Call Stack on x86.
Calling convention will affect function parameters mainly, and how the return address is handled, etc. So it doesn't relate to your stack much. Not for your case anyway.
What throws things off are compiler optimizations. Your compiler may leave out the frame pointer (EBP). This is the /Oy flag in VC++. So instead of using EBP as the base pointer, you can use the address of a function parameter, if you have any, since those are a bit higher up on the stack.
But what if that variable you're testing is on your caller's stack? Or a caller's stack several generations above you? Well you can walk the entire call stack, but you can see how this can get very ugly ( as if it isn't already :-) )
Since you're living dangerously, another compiler flag that may interest you is the /Gh flag. With that flag and a suitable _penter hook function, you can set up these calculations for particular functions, files, or modules easily. But please don't do this unless you'd just like to see how things work.
Figuring out what's on the heap is even worse....
On some platforms, the stack can be split by the run-time system. That is, instead of getting a (no pun intended) stack overflow, the system automatically grabs some more stack space. Of course, the new stack space is usually not contiguous with the old stack space.
It's therefore really not safe to depend on whether something is on the stack.
The use of auto_ptr generally eliminates the need for this kind of thing, and is way cooler besides.
The MSVC Windows compiler-specific answer. This is of course specific to the thread the object is in. It's a pretty bad idea to pass any automatic (stack) item into any thread other than the one whose stack it is on, so I'm not worried about that :)
bool __isOnStack(const void *ptr)
{
    // FS:[0x04]  4 bytes  Win9x and NT: top of stack
    // FS:[0x08]  4 bytes  Win9x and NT: current bottom of stack
    const char *sTop;
    const char *sBot;
    __asm {
        mov EAX, FS:[04h]
        mov [sTop], EAX
        mov EAX, FS:[08h]
        mov [sBot], EAX
    }
    return (sTop > (const char *)ptr) && ((const char *)ptr > sBot);
}