Cannot Set 4 Byte Hardware Breakpoint Windbg - c++

I cannot set a 4-byte read/write access hardware breakpoint using WinDbg.
0:000> dd 02e80dcf
02e80dcf 13121110 17161514 1a191800 1e1d1c1b
02e80ddf 011c171f c7be7df1 00000066 4e454900
I need to find out when the value 0x13121110 (at address 0x02e80dcf) is changed/overwritten by the program.
So when I try to set a 4-byte write access hardware breakpoint at 0x02e80dcf, I get a "Data breakpoint must be aligned" error.
0:000> ba w 4 02e80dcf
Data breakpoint must be aligned
^ Syntax error in 'ba w 4 02e80dcf'
0:000> ba r 4 02e80dcf
Data breakpoint must be aligned
^ Syntax error in 'ba r 4 02e80dcf'
0:000> ba w 1 02e80dcf
breakpoint 0 redefined
I am able to set a 1-byte write access breakpoint at the address, but it does not trigger when the value at address 0x02e80dcf is overwritten.
If anyone could suggest any other way to detect when this address is overwritten, that would be really helpful too.
Note: I am facing this problem only for one particular program; I am able to set 4-byte hardware breakpoints elsewhere in the same debugging environment.

As a side note, this particular behavior is from the CPU architecture itself (not from the system or the debugger).
x86 and x86-64 (IA-32 and IA-32e in Intel lingo) architectures use the DRx debug registers to handle hardware breakpoints.
The LENn fields in DR7 set the length of each breakpoint, and DR0 to DR3 hold the breakpoint addresses.
From the Intel Manual 3B, Chapter 18.2.5, "Breakpoint Field Recognition":
The LENn fields permit specification of a 1-, 2-, 4-, or 8-byte range,
beginning at the linear address specified in the corresponding debug
register (DRn).
In the same chapter it is explicitly stated:
Two-byte ranges must be aligned on word boundaries; 4-byte ranges must
be aligned on doubleword boundaries.
If you cover the desired address with a data breakpoint of a big enough length, it will trap (the breakpoint will be hit):
A data breakpoint for reading or writing data is triggered if any of
the bytes participating in an access is within the range defined by a
breakpoint address register and its LENn field.
The manual then goes on to give a tip for trapping on an unaligned address:
A data breakpoint for an unaligned operand can be constructed using
two breakpoints, where each breakpoint is byte-aligned and the two
breakpoints together cover the operand.
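Applied to the address in the question, that tip means the unaligned dword at 0x02e80dcf (bytes 0x02e80dcf through 0x02e80dd2) can be covered with two aligned 4-byte breakpoints. A sketch, not verified against this particular target:
0:000> ba w 4 02e80dcc
0:000> ba w 4 02e80dd0
The first breakpoint covers 0x02e80dcc-0x02e80dcf and the second covers 0x02e80dd0-0x02e80dd3, so any write that touches the four bytes of interest will trap; the trade-off is that writes to the extra surrounding bytes will also break in.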

4-byte data breakpoints must be set at addresses aligned on a 4-byte boundary (and 8-byte breakpoints, available on 64-bit systems, at 8-byte boundaries).
Any hex address ending in the digit f is not aligned to a 4-byte boundary.
The alignment restriction WinDbg reports comes from the hardware data breakpoints themselves, not from WinDbg. You may need to use a conditional breakpoint so that the debugger only stays stopped when the bytes you care about have actually changed; see the sketch below.
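As a sketch (using WinDbg's MASM expression syntax with the dwo() operator and the gc command; untested against this particular target), a breakpoint command can resume automatically while the dword at the unaligned address still holds its original value:
0:000> ba w 4 02e80dcc ".if (dwo(0x02e80dcf) == 0x13121110) {gc}"
Attached to both of the aligned covering breakpoints shown earlier, this only leaves the debugger stopped when the value at 0x02e80dcf has actually changed.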

Related

Get the address of an intrinsic function-generated instruction

I have a function that uses the compiler intrinsic __movsq to copy some data from a global buffer into another global buffer upon every call of the function. I'm trying to nop out those instructions once a flag has been set globally and the same function is called again. Example code:
// compiler: MSVC++ VS 2022 in C++ mode; x64
void DispatchOptimizationLoop()
{
    __movsq(g_table, g_localimport, 23);
    // hopefully create a nop after movsq?
    static unsigned char* ptr = (unsigned char*)(&__nop);
    if (!InterlockedExchange8(g_Reduce, 1))
    {
        // point to movsq in memory
        ptr -= 3;
        // nop it out
        ...
    }
    // rest of function here
    ...
}
Basically the function places a nop after the movsq, then tries to get the address of the placed nop and backtrack by the size of the movsq so that a pointer points to the start of the movsq, so that I can simply cover it with three 0x90s. I am aware that the line (unsigned char*)(&__nop) is not actually creating a nop because I'm not calling the intrinsic; I'm just trying to show what I want to do.
Is this possible, or is there a better way to store the address of the instructions that need to be nop'ed out in the future?
It's not useful to have the address of a 0x90 NOP somewhere else, all you need is the address of machine code inside your function. Nothing you've written comes remotely close to helping you find that. As you say, &__nop doesn't lead to there being a NOP in your function's machine code which you could offset relative to.
If you want to hard-code offsets that could break with different optimization settings, you could take the address of the start of the function and offset it.
Or you could write the whole function in asm so you can put a label on the address you want to modify. That would actually let you do this safely.
You might get something that happens to work with GNU C labels as values, where you can take the address of C goto labels like &&label. For example, put a mylabel: before the intrinsic, and maybe a mylabel_end: after it for good measure so you can check that the difference is the expected 3 bytes. If you're lucky, the compiler won't put any other instructions between your labels.
So you can memset((void*)&&mylabel, 0x90, 3) (after an assert on &&mylabel_end - &&mylabel == 3). But I don't think MSVC supports that GNU extension or anything equivalent.
But you can't actually use memset if another thread could be running this at the same time.
And for efficiency, you want a single 3-byte NOP anyway.
And of course you'd have to VirtualProtect the page of machine code containing that instruction to make it writeable. (Assuming the function is 16-byte aligned, it's hopefully impossible for that one instruction near the start to be split across two pages.)
And if other threads could be running this function at the same time, you'd better use an atomic RMW (on the containing dword or qword) to replace the 3-byte instruction with a single 3-byte NOP, otherwise you could have another thread fetch and decode the first NOP, but then fetch a byte of the movsq machine code that hasn't been replaced yet.
Actually a plain mov store would be atomic if it's 4 bytes not crossing an 8-byte boundary. Since there are no other writers of different data, it's fine to load / AND/OR / store to later store the same surrounding bytes you loaded earlier. Normally a non-atomic load+store is not thread-safe, but no other threads could have written a different value in the meantime.
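A minimal sketch of that load / AND/OR / store idea, assuming a hypothetical helper that is handed the address of the 3-byte movsq and that those three bytes do not cross a 4-byte boundary (this is not the questioner's code, just an illustration):

#include <windows.h>
#include <cstdint>
#include <cassert>

// Hypothetical helper: splice a single 3-byte NOP (0F 1F 00) over the 3-byte
// instruction at `target` with one aligned 4-byte store, which is atomic on
// x86/x64 when it doesn't cross an 8-byte boundary.
void PatchWithThreeByteNop(unsigned char* target)
{
    uintptr_t addr = reinterpret_cast<uintptr_t>(target);
    assert((addr & 3) <= 1 && "instruction must not cross a 4-byte boundary");

    DWORD oldProtect;
    VirtualProtect(target, 3, PAGE_EXECUTE_READWRITE, &oldProtect);

    // Load the containing aligned dword, replace the 3 target bytes with
    // 0F 1F 00, and write the whole dword back in one plain mov store.
    volatile uint32_t* slot =
        reinterpret_cast<volatile uint32_t*>(addr & ~uintptr_t(3));
    unsigned shift = unsigned(addr & 3) * 8;
    uint32_t mask  = uint32_t(0x00FFFFFF) << shift;
    uint32_t nop   = uint32_t(0x00001F0F) << shift;   // bytes 0F 1F 00, little-endian
    *slot = (*slot & ~mask) | nop;

    VirtualProtect(target, 3, oldProtect, &oldProtect);
    FlushInstructionCache(GetCurrentProcess(), target, 3);
}

The non-atomic load followed by a plain store is only safe here because, as noted above, no other thread writes different data to those bytes in the meantime.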
Atomicity of cross-modifying code
I think cross-modifying code has atomicity rules similar to data. But if the instruction spans a 16-byte boundary, code-fetch in another core might have pulled in the first 1 or 2 bytes of it before you atomically replace all 3. So the 2nd and 3rd byte get treated as either the start of an instruction, or the 2nd + 3rd bytes of a long-NOP. Since long-NOPs generally start with the 0F escape byte (0F 1F ...), if that's not how __movsq starts then it could desync.
So if cross-modifying code doesn't trigger a pipeline nuke on the other core, it's not safe to do it while another thread might be running the code. Code fetch is usually done in 16-byte chunks but that's not guaranteed. And it's not guaranteed that they're aligned 16-byte chunks.
So you should probably make sure no other threads are running this function while you change the machine code. Unless you're very sure of the safety of what you're doing and check each build to make sure the instruction starts at a safe offset, where safe is defined according to any possibility or anything that could go wrong.

Data Alignment: Reason for restriction on memory address being multiple of data type size

I understand the general concept of data alignment, but what I do not understand is the restriction that the memory address must be a multiple of the size of the underlying data type.
This answer explains the data alignment well.
Quote:
Let's look at a memory map:
+----+
|0000|
|0001|
+----+
|0002|
|0003|
+----+
|0004|
|0005|
+----+
| .. |
At each address there is a byte which can be accessed individually. But words can only be fetched at even addresses. So if we read a word at 0000, we read the bytes at 0000 and 0001. But if we want to read the word at position 0001, we need two read accesses. First 0000,0001 and then 0002,0003 and we only keep 0001,0002.
Question:
Assuming it's true, why would "But words can only be fetched at even addresses." be true? Can't the memory/stack pointer point to 0001 in the example and then read a word of information starting there?
We know the machine can read memory in blocks of 2 bytes with one read action (in the example, [0000, 0001] or [0002, 0003]). So if my address register is pointing to 0001 (an odd address instead of an even one), then I can read 2 bytes from there (i.e. 0001 and 0002) directly in one read action, right?
The assumption about that statement is not necessarily true. I don't want to re-iterate the answer you linked to describing the reasons for using and highly preferring aligned access, but there are architectures that do support unaligned memory access -- ARM for example (check out this SO answer).
But your question, I think, really comes down to hardware architecture, specifically the data bus design, and the accompanying instructions set that engineers at various silicon manufacturers have designed.
Some Cortex-M cores explicitly allow you to configure the CPU to trigger an exception on unaligned access via a Usage Fault register, which means that you can "utilize" unaligned memory access in rare use cases.
Usually a processor's internal addresses point to a whole word. This is because you don't want your (simple) processor to be able to address a word at a random byte (or, even worse, bit), because:
You waste addressable memory: assuming the biggest address your processor can handle is the maximum value of its word size, you can multiply that by the word size to calculate how much storage you can address (each unique address points to a full word). The "address" I'm talking about here does not necessarily look like the address stored in a pointer of a higher-level programming language; that pointer addresses each byte, and the compiler or interpreter turns it into the corresponding assembly instructions (discarding unwanted bytes from the loaded word).
A word loaded from memory could be anything: a value or the next instruction of the program you are running on your processor. The previous word loaded into the processor will often indicate what the following word is used for: another instruction (e.g. an arithmetic operation, a load, or a store), which might be followed by operands (values or addresses). Put simply, being able to address unaligned words would complicate a processor a lot.
Assuming it's true, why would "But words can only be fetched at even addresses." be true?
The memory actually stores words. The processor actually addresses the memory in words, and fetches a word at a time.
When you fetch a byte, it actually fetches a word, then ignores either the first half or the second half.
On 32-bit processors, it fetches a 32-bit word, then ignores three quarters; fetching a 16-bit word on a 32-bit processor ignores half the word.
If the 16-bit word you want to fetch (on a 16-bit processor) isn't aligned, then the processor has to fetch two words, take half of each word and then re-combine them. So even on processor designs where it works, it's often slower.
A lot of processor designs don't bother - either they just won't allow it, or they force the operating system to handle it (which is very slow).
(Not all types of processors work this way - e.g. 8-bit processors usually fetch a byte at a time)
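As a rough illustration of that two-fetch-and-recombine process, here is a hypothetical C++ sketch of what the hardware conceptually has to do on a little-endian 16-bit machine that can only fetch aligned words (not how you would actually read memory in software):

#include <cstdint>

// Hypothetical helper: read a 16-bit value at `addr` when only aligned
// 2-byte fetches are possible. Each "fetch" below stands for one word read.
uint16_t ReadWordAt(const uint8_t* mem, uint32_t addr)
{
    if ((addr & 1) == 0)                                                // aligned: one fetch
        return uint16_t(mem[addr] | (mem[addr + 1] << 8));

    uint32_t base   = addr & ~1u;                                       // aligned word containing the first byte
    uint16_t first  = uint16_t(mem[base]     | (mem[base + 1] << 8));   // fetch 0000,0001
    uint16_t second = uint16_t(mem[base + 2] | (mem[base + 3] << 8));   // fetch 0002,0003
    // keep the upper byte of the first word and the lower byte of the second
    return uint16_t((first >> 8) | (second << 8));
}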
Can't the memory/stack pointer point to 0001 in the example and then read a word of information starting there?
If the processor supports it, yes.

Do memory addresses contain implicit hex digits?

What is the value of a memory address that is less than 12 hex digits on a 64-bit computer?
For instance, when I run gdb on a simple assembly program and run (gdb) info frame I get:
Stack level 0, frame at 0x7fffffffd970:
rip = 0x40052f in main (file.s:11); saved rip = 0x7ffff7a2d830
source language asm.
Arglist at 0x7fffffffd960, args:
Locals at 0x7fffffffd960, Previous frame's sp is 0x7fffffffd970
Saved registers:
rbp at 0x7fffffffd960, rip at 0x7fffffffd968
The first part of the second line rip = 0x40052f in main (file.s:11) I believe states the value of the instruction pointer when I called info frame. But why is the memory address it holds not 12 hex digits?
Also, if I type (gdb) x 0x7fffffffd968 (which I expect to be 0x7ffff7a2d830) I get:
0x7fffffffd968: 0xf7a2d830
Does this mean that any memory address with less than 12 hex digits contains an implicit 7ff...?
No. On x86 or x86_64, a memory address is simply a number, but is commonly displayed using hexadecimal. And like most number notation systems, a shorter number just means a much smaller value, or if you like, there are implicit zeros before it.
So just like the decimal string "12" is much smaller than "12654321", the address 0x40052f is much smaller than the address 0x7ffff7a2d830. The two addresses are almost certainly in different virtual memory maps. (On Linux, you can view virtual memory maps by cat /proc/{pid}/maps.)
When you used the gdb x command, you didn't see the value you expected because gdb took a guess at what kind of data your address points at. The first time you use x in a gdb session, it defaults to showing 4 bytes (32 bits) per element, as though the address points at an array of uint32_t. Since addresses on x86_64 are 8 bytes (64 bits), you need x/g to tell gdb the element size is 8 bytes.
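For example, something like this (illustrative; based on the saved rip shown in the question, the full quadword would simply display with its leading zeros):
(gdb) x/gx 0x7fffffffd968
0x7fffffffd968: 0x00007ffff7a2d830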

what is "stack alignment"?

What is stack alignment?
Why is it used?
Can it be controlled by compiler settings?
The details of this question are taken from a problem faced when trying to use ffmpeg libraries with msvc; however, what I'm really interested in is an explanation of what "stack alignment" is.
The Details:
When running my msvc compiled program which links to avcodec I get the
following error: "Compiler did not align stack variables. Libavcodec has
been miscompiled", followed by a crash in avcodec.dll.
avcodec.dll was not compiled with msvc, so I'm unable to see what is going on inside.
When running ffmpeg.exe and using the same avcodec.dll everything works well.
ffmpeg.exe was not compiled with msvc, it was compiled with gcc / mingw (same as avcodec.dll)
Thanks,
Dan
Alignment of variables in memory (a short history).
In the past, computers had an 8-bit data bus. This meant that each clock cycle 8 bits of information could be processed, which was fine then.
Then came 16-bit computers. For backward compatibility and other reasons, the 8-bit byte was kept and the 16-bit word was introduced. Each word was 2 bytes, and each clock cycle 16 bits of information could be processed. But this posed a small problem.
Let's look at a memory map:
+----+
|0000|
|0001|
+----+
|0002|
|0003|
+----+
|0004|
|0005|
+----+
| .. |
At each address there is a byte which can be accessed individually.
But words can only be fetched at even addresses. So if we read a word at 0000, we read the bytes at 0000 and 0001. But if we want to read the word at position 0001, we need two read accesses. First 0000,0001 and then 0002,0003 and we only keep 0001,0002.
Of course this took some extra time and that was not appreciated. So that's why they invented alignment. So we store word variables at word boundaries and byte variables at byte boundaries.
For example, if we have a structure with a byte field (B) and a word field (W) (and a very naive compiler), we get the following:
+----+
|0000| B
|0001| W
+----+
|0002| W
|0003|
+----+
Which is not fun. But when using word alignment we find:
+----+
|0000| B
|0001| -
+----+
|0002| W
|0003| W
+----+
Here memory is sacrificed for access speed.
You can imagine that when using double word (4 bytes) or quad word (8 bytes) this is even more important. That's why with most modern compilers you can choose which alignment you are using while compiling the program.
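As a concrete illustration of the byte-plus-word structure above, here is a small C++ sketch (the exact padding is compiler- and settings-dependent, so the values in the comments are only the typical result):

#include <cstdio>
#include <cstddef>

struct ByteThenWord
{
    unsigned char  b;   // 1 byte
    unsigned short w;   // 2 bytes; the compiler usually wants this word-aligned
};

int main()
{
    // With word alignment most compilers insert one padding byte after 'b'
    // so that 'w' starts on a 2-byte boundary.
    std::printf("sizeof(ByteThenWord) = %zu\n", sizeof(ByteThenWord));      // typically 4, not 3
    std::printf("offsetof(w)          = %zu\n", offsetof(ByteThenWord, w)); // typically 2, not 1
    return 0;
}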
Some CPU architectures require specific alignment of various datatypes, and will throw exceptions if you don't honor this rule. In standard mode, x86 doesn't require this for the basic data types, but can suffer performance penalties (check www.agner.org for low-level optimization tips).
However, the SSE instruction set (often used for high-performance audio/video processing) has strict alignment requirements, and will throw exceptions if you attempt to use it on unaligned data (unless you use the unaligned versions, which are much slower on some processors).
Your issue is probably that one compiler expects the caller to keep the stack aligned, while the other expects the callee to align the stack when necessary.
EDIT: as for why the exception happens, a routine in the DLL probably wants to use SSE instructions on some temporary stack data, and fails because the two different compilers don't agree on calling conventions.
IIRC, stack alignment is when variables are placed on the stack "aligned" to a particular number of bytes. So if you are using a 16 bit stack alignment, each variable on the stack is going to start from a byte that is a multiple of 2 bytes from the current stack pointer within a function.
This means that if you use a variable that is < 2 bytes, such as a char (1 byte), there will be 8 bits of unused "padding" between it and the next variable. This allows certain optimisations with assumptions based on variable locations.
When calling functions, one method of passing arguments to the next function is to place them on the stack (as opposed to placing them directly into registers). Whether or not alignment is being used here is important, as the calling function places the variables on the stack, to be read off by the called function using offsets. If the calling function aligns the variables and the called function expects them to be non-aligned, then the called function won't be able to find them.
It seems that the msvc compiled code is disagreeing about variable alignment. Try compiling with all optimisations turned off.
As far as I know, compilers don't typically align variables that are on the stack. The library may be depending on some set of compiler options that isn't supported on your compiler. The normal fix is to declare the variables that need to be aligned as static, but if you go about doing this in other people's code, you'll want to be sure that the variables in question are initialized later on in the function rather than in the declaration.
// Some compilers won't align this as it's on the stack...
int __declspec(align(32)) needsToBe32Aligned = 0;
// Change to
static int __declspec(align(32)) needsToBe32Aligned;
needsToBe32Aligned = 0;
Alternately, find a compiler switch that aligns the variables on the stack. Obviously the "__declspec" align syntax I've used here may not be what your compiler uses.