Can a handled exception on Windows call terminate()?

I have a C++ application deployed on a Windows server VM in Azure.
It's been running fine for a few days, and then WER took a crash dump. The faulting thread had this stack trace:
00 0000004a`7fb7ac00 00007ffa`9505e15f ucrtbase!abort+0x4e
01 0000004a`7fb7ac30 00007ffa`911239ba ucrtbase!terminate+0x1f
02 0000004a`7fb7ac60 00007ffa`9112c710 VCRUNTIME140!__std_terminate+0xa
03 0000004a`7fb7ac90 00007ffa`9112282a VCRUNTIME140!_CallSettingFrame+0x20
04 0000004a`7fb7acc0 00007ffa`91121b82 VCRUNTIME140!__FrameUnwindToState+0x136
05 (Inline Function) --------`-------- VCRUNTIME140!__FrameUnwindToEmptyState+0x7e
06 0000004a`7fb7ad30 00007ffa`9112be80 VCRUNTIME140!__InternalCxxFrameHandler+0x19a
07 0000004a`7fb7ad90 00007ffa`9897a5cd VCRUNTIME140!__CxxFrameHandler+0x90
08 0000004a`7fb7ade0 00007ffa`9891068a ntdll!RtlpExecuteHandlerForUnwind+0xd
09 0000004a`7fb7ae10 00007ffa`9112c249 ntdll!RtlUnwindEx+0x38a
0a 0000004a`7fb7b4f0 00007ffa`91122979 VCRUNTIME140!_UnwindNestedFrames+0x109
0b 0000004a`7fb7bab0 00007ffa`91122090 VCRUNTIME140!CatchIt+0xb5
0c 0000004a`7fb7bb30 00007ffa`91121c7e VCRUNTIME140!FindHandler+0x3f0
0d 0000004a`7fb7bc00 00007ffa`9112be80 VCRUNTIME140!__InternalCxxFrameHandler+0x296
0e 0000004a`7fb7bc60 00007ffa`9897a54d VCRUNTIME140!__CxxFrameHandler+0x90
0f 0000004a`7fb7bcb0 00007ffa`9890fcf3 ntdll!RtlpExecuteHandlerForException+0xd
10 0000004a`7fb7bce0 00007ffa`98911a09 ntdll!RtlDispatchException+0x373
11 0000004a`7fb7c3e0 00007ffa`95c53c58 ntdll!RtlRaiseException+0x2d9
12 0000004a`7fb7cbc0 00007ffa`91124572 KERNELBASE!RaiseException+0x68
13 0000004a`7fb7cca0 00007ffa`784bd3ce VCRUNTIME140!_CxxThrowException+0xc2
14 0000004a`7fb7cd20 00007ffa`78350000 MyModule!operator new[]+0x2e
15 0000004a`7fb7cd28 0000004a`7fb7cf88 MyModule!__imp_?id#?$codecvt#DDU_Mbstatet###std##2V0locale#2#A
16 0000004a`7fb7cd30 00000000`00000000 0x0000004a`7fb7cf88
(I replaced the real module name with MyModule!)
Questions:
I'm guessing (from the function names in the stack trace, e.g. RtlpExecuteHandlerForException) that this flow is for handled exceptions. Is that correct? If yes, why is the application crashing nevertheless?
The __imp_?id#?$codecvt#DDU_Mbstatet###std##2V0locale#2#A at the bottom of the stack looks mangled. I tried to demangle it online and got __imp_public: static class std::locale::id std::codecvt<char,char,struct _Mbstatet>::id. Does that make sense? It looks like a reference to a field, not a function.
I've tried running !analyze -v and it gives:
EXCEPTION_CODE: (NTSTATUS) 0xc0000409 - The system detected an overrun
of a stack-based buffer in this application. This overrun could
potentially allow a malicious user to gain control of this
application.
What can cause such an exception? Can you please provide a short snippet that triggers such an exception on Windows?
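For reference, here is a minimal, hypothetical sketch (not taken from the crashing code) of one standard way std::terminate() ends up being called during unwinding: an exception thrown from a destructor while another exception is already propagating. As far as I can tell, the UCRT's abort() reports to WER via fast-fail, which is recorded with the same 0xC0000409 code, so that code does not necessarily indicate a real stack-buffer overrun.
// Hypothetical repro sketch - not the actual crashing code.
#include <stdexcept>

struct Guard {
    ~Guard() noexcept(false) {
        throw std::runtime_error("cleanup failed");   // second exception during unwinding
    }
};

void doWork() {
    Guard g;
    throw std::runtime_error("original error");       // starts unwinding; ~Guard() then throws again
}

int main() {
    try {
        doWork();      // the catch below never runs: terminate() -> abort()
    } catch (const std::exception&) {
    }
    return 0;
}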

Related

How to retrieve underlying block device IO error

Consider a device in the system, something under /dev/hdd[sg][nvme]xx
Open the device, get the file descriptor, and start working with it (read(v)/write(v)/lseek, etc.). At some point you may get EIO. How do you retrieve the underlying error reported by the device driver?
EDIT001: in case it is impossible using the unistd functions, maybe there are other ways to work with block devices that can provide more low-level information, like sg_scsi_sense_hdr?
You can't get any more error detail out of the POSIX functions. You're on the right track with the SCSI-generic stuff, though. But, boy, it's loaded with hair. Check out the example in sg3_utils of how to do a SCSI READ(16). This will let you look at the sense data when it comes back:
https://github.com/hreinecke/sg3_utils/blob/master/examples/sg_simple16.c
Of course, this technique doesn't work with NVMe drives. (At least, not to my knowledge).
One concept I've played with in the past is to use the normal POSIX/libc block I/O functions like pread and pwrite until I get an EIO. At that point, you can bring in the SCSI-generic versions to try to figure out what happened. In the ideal case, a pread or lseek/read fails with EIO; you then turn around and re-issue it as an SG READ(10) or READ(16). If it's not just a transient failure, this may return sense data that your application can use.
Here's an example, using the command-line sg_read program. I have an iSCSI attached disk that I'm reading and writing. On the target, I remove its LUN mapping. dd reports EIO:
# dd if=/dev/sdb of=/tmp/output bs=512 count=1 iflag=direct
dd: error reading ‘/dev/sdb’: Input/output error
but sg_read reports some more useful information:
[root@localhost src]# sg_read blk_sgio=1 bs=512 cdbsz=10 count=512 if=/dev/sdb odir=1 verbose=10
Opened /dev/sdb for SG_IO with flags=0x4002
read cdb: 28 00 00 00 00 00 00 00 80 00
duration=9 ms
reading: SCSI status: Check Condition
Fixed format, current; Sense key: Illegal Request
Additional sense: Logical unit not supported
Raw sense data (in hex):
70 00 05 00 00 00 00 0a 00 00 00 00 25 00 00 00
00 00
sg_read: SCSI READ failed
Some error occurred, remaining block count=512
0+0 records in
You can see the Logical unit not supported additional sense code in the above output, indicating that there's no such LU at the target.
Possible? Yes. But as you can see from the code in sg_simple16.c, it's not easy!
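For a rough idea of what the SG_IO path in sg_simple16.c boils down to, here is a hedged sketch, assuming Linux's <scsi/sg.h> interface; the READ(10) CDB, LBA 0, device path, and 5-second timeout are placeholders:
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>
#include <cstdio>

// Re-issue a read of LBA 0 through the SCSI-generic layer so the sense data is visible.
int readBlock0WithSense(const char* dev) {
    int fd = open(dev, O_RDWR);
    if (fd < 0) { perror("open"); return -1; }

    unsigned char cdb[10]   = { 0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0 };  // READ(10), LBA 0, 1 block
    unsigned char data[512];
    unsigned char sense[32] = { 0 };

    sg_io_hdr_t io = {};
    io.interface_id    = 'S';
    io.cmdp            = cdb;    io.cmd_len   = sizeof(cdb);
    io.dxferp          = data;   io.dxfer_len = sizeof(data);
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.sbp             = sense;  io.mx_sb_len = sizeof(sense);
    io.timeout         = 5000;   // milliseconds

    if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); close(fd); return -1; }

    if (io.sb_len_wr > 2)        // sense data returned: low nibble of byte 2 is the sense key
        fprintf(stderr, "sense key: 0x%x\n", sense[2] & 0x0f);

    close(fd);
    return io.status == 0 ? 0 : -1;
}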

Memory leak when using anything involving wxFileName

I'm making use of wxWidgets in my program for directory management and compressing/decompressing collections of files. As I've been building my file system, I've noticed that I get memory leaks every run. After a lot of testing, I realized that any time I use any function related to wxFileName, I get a memory leak. I'm using wxWidgets 3.0.1, and my standalone example is as follows.
#include <wx/filename.h>

int main()
{
    wxFileName::Mkdir("Test");
    return 0;
}
The result is the same if I make an instance of the wxFileName class.
How do I make wxWidgets not create a memory leak? I want to be able to package large collections of files into one file and read the data from them with various other libraries (by extracting the zip to a temporary folder and reading the data from there). I haven't been able to get any other library to zip/unzip entire folders, so I really need to be able to use wxWidgets without a memory leak.
I read in another thread that the Visual Studio debugger falsely identifies these memory leaks, but I ran it through AQtime and it confirmed that there was indeed a memory leak.
The exact debug output involving the memory leak is as follows:
Detected memory leaks!
Dumping objects ->
{1087} normal block at 0x009B4BC0, 64 bytes long.
Data: <\+= d+= l+= t+= > 5C 2B 3D 00 64 2B 3D 00 6C 2B 3D 00 74 2B 3D 00
{1086} normal block at 0x009B4880, 772 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{1085} normal block at 0x009B4680, 28 bytes long.
Data: < H > 80 48 9B 00 C1 00 00 00 00 00 00 00 CD CD CD CD
Object dump complete.
After a bit of digging (it WOULD be the digging I did AFTER posting the question) I found that when you're using wxWidgets without creating a wxWidgets app object, you need to use the following two functions:
wxInitialize()
and
wxUninitialize()
So the fixed version of my code is as follows:
#include <wx/app.h>
#include <wx/filename.h>

int main()
{
    wxInitialize();
    wxFileName::Mkdir("Waka Waka");
    wxUninitialize();
    return 0;
}
If anyone is using wxWidgets purely for file management, I suggest either calling these functions in the constructor and destructor of whatever class handles your files, or calling them at the beginning and end of your program's main loop.
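If memory serves, wxWidgets also provides an RAII helper, wxInitializer (in <wx/init.h>), that wraps exactly this pair of calls so wxUninitialize() cannot be missed on an early return - a small sketch along those lines:
#include <wx/init.h>
#include <wx/filename.h>

int main()
{
    wxInitializer init;      // calls wxInitialize() now, wxUninitialize() in its destructor
    if (!init.IsOk())        // bail out if the library failed to start
        return 1;

    wxFileName::Mkdir("Test");
    return 0;
}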

InterThread communication

I am working on a client-server application where I need to send commands read from a script file.
The format of the script file is as follows.
CONNECT <www.abcd.com,80>
SEND : AB 40 01 FF 00 00 00 09 01 01 07 00 00 C0 A8 01 87 AE
MATCH<s,10>: AB 40 01 FF 00 00 00 09 01 01 07 00 00 C0 A8 01 87 AE
SEND : AB 34 01 FF00 00 00 0C 01 01 07 00 01 01 07 00 FF FF FF FF AE
DISCONNECT
Note: s in MATCH is the wait time in seconds, and the second byte is the Msg ID.
When the program encounters a MATCH command, it should wait that many seconds for a match and then proceed to the next command.
I have two threads running in the application:
Listener thread - receives data from the server (select() is used here). It is launched when the program encounters the CONNECT command and goes away when DISCONNECT is encountered in the script.
Main thread - reads commands from the script file and executes them.
Now, when a MATCH is encountered, the main thread should send the match string to the listener thread and wait for a signal from it. The listener thread will compare the string with the data received from the server; if it matches, it will signal the event (SetEvent() on Windows) to the main thread, which will then log "Match found". Otherwise, if the time has elapsed, it will log "Match not found".
I thought of having a global variable char* g_MatchString. The main thread would update this variable whenever there is a MATCH command and then wait for the (Windows) event to be signalled, with the wait time equal to the match time.
I need input from you guys on whether my approach is correct or not.
Don't use a global. That just creates the potential for race conditions when someone adds complexity in the future. The match string should be passed to the thread as an input argument. You don't say how you're launching the thread, but if you're using _beginthread() you simply allocate a buffer and pass it to _beginthread() in the arglist parameter. When the listener thread terminates, the main thread can safely free the match string buffer. This keeps it nicely self-contained and free of potential race conditions if additional threads are ever added. If you're launching the thread with CreateThread(), you would pass it via the lpParameter parameter.
Other than the global, I think your approach is sound.
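To make that concrete, here is a minimal, hypothetical sketch assuming CreateThread()/SetEvent()/WaitForSingleObject(); the struct, the pattern string, and the 10-second timeout are placeholders, not from the original post:
#include <windows.h>
#include <cstdio>

struct MatchRequest {
    const char* matchString;    // pattern the listener should look for
    HANDLE      matchedEvent;   // signalled by the listener on a match
};

DWORD WINAPI ListenerThread(LPVOID param) {
    MatchRequest* req = static_cast<MatchRequest*>(param);
    // ... select()/recv() loop; compare received bytes against req->matchString ...
    SetEvent(req->matchedEvent);                    // tell the main thread a match arrived
    return 0;
}

int main() {
    MatchRequest req;
    req.matchString  = "AB 40 01 FF ...";           // placeholder match pattern
    req.matchedEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);  // auto-reset event

    HANDLE thread = CreateThread(nullptr, 0, ListenerThread, &req, 0, nullptr);

    // Wait up to the MATCH timeout (10 seconds here) for the listener to signal.
    DWORD rc = WaitForSingleObject(req.matchedEvent, 10000);
    puts(rc == WAIT_OBJECT_0 ? "Match found" : "Match not found");

    WaitForSingleObject(thread, INFINITE);          // let the listener exit before cleanup
    CloseHandle(thread);
    CloseHandle(req.matchedEvent);
    return 0;
}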
Since the main thread is waiting on the listener thread, and the listener thread's sole purpose is to read inbound data, I would suggest just getting rid of the listener thread altogether and let the main thread do the reading directly.

Tracking memory leak

The situation:
I see roughly a 120 kB increase each time I close the class, so when I close the class a few times the memory keeps increasing - and I need to find out what is causing this.
I'm just looking for any good tip or trick for finding out what is not freed/released, using Visual Studio 2010 - any ideas?
Here's more clearly what I do (very simplified):
#include <windows.h>   // Sleep

class cSomeClass
{
public:
    cSomeClass();
    ~cSomeClass();
    int Initialize();
    void Deinitialize();
};

cSomeClass cCamera;

int main()
{
    Sleep(10000);
    // Do Init / Deinit to find out if we are freeing the memory
    while(1)
    {
        // Init camera
        if(cCamera.Initialize()==0)
        {
            // Rest for a while
            Sleep(1500);
            cCamera.Deinitialize();
            // Rest for a while
            Sleep(1500);
        }
    }
}
I just made a small application to init/deinit the class object and watch in Task Manager whether the application's memory returns to its starting value - but it does not; it keeps incrementing every time I initialize the cSomeClass, so I believe something is initialized but not freed in Deinitialize.
Update:
I don't think it is simple memory growth. When the application is launched it stabilizes after 10 seconds at, let's say, 1 MB of RAM; then, once the while(1) kicks in, every Initialize call adds about 120 kB to the application's overall memory (checked in Task Manager).
Update:
Thanks to Chad - got it sniffed with
_CrtDumpMemoryLeaks
Detected memory leaks!
Dumping objects ->
{76} normal block at 0x003F4BC8, 32 bytes long.
Data: <Logitech QuickCa> 4C 6F 67 69 74 65 63 68 20 51 75 69 63 6B 43 61
{75} normal block at 0x003F4B80, 8 bytes long.
Data: < K? > 20 4B 3F 00 00 00 00 00
{74} normal block at 0x003F4B20, 32 bytes long.
Data: < K? K? > 80 4B 3F 00 C8 4B 3F 00 CD CD CD CD CD CD CD CD
{70} normal block at 0x003F4A30, 8 bytes long.
Data: < )i > 0C 29 69 00 00 00 00 00
Object dump complete.
The most straightforward method is to use the CRT debug-heap functions for memory tracking, like _CrtDumpMemoryLeaks.
Using this in conjunction with _CrtMemCheckpoint can prove to be vital when tracking down stubborn leaks.
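For instance, a hedged sketch of combining the two around one Initialize/Deinitialize cycle of the cSomeClass from the question (debug builds only; the _Crt* calls are no-ops in release):
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

void CheckOneCycle(cSomeClass& camera)
{
    _CrtMemState before, after, diff;

    _CrtMemCheckpoint(&before);         // snapshot the debug heap before the cycle

    camera.Initialize();
    camera.Deinitialize();

    _CrtMemCheckpoint(&after);          // snapshot again after the cycle
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);   // report what is still allocated

    _CrtDumpMemoryLeaks();              // full dump of outstanding allocations
}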
If you are using MFC, you can optionally define DEBUG_NEW, which adds additional tracking to the global new/delete operators, giving you a file and line number for each allocation that leaks. This can be extremely helpful as well, but it doesn't work with some forms of new (std::nothrow, for instance).
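The usual boilerplate for that (normally generated by the MFC wizards near the top of each .cpp file) looks roughly like this:
#ifdef _DEBUG
#define new DEBUG_NEW       // route operator new through the file/line-tracking overload
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
#endif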
I'm not sure exactly what you mean by "closing the class", but if you're able to run your code on Linux, valgrind is always a great option for tracking memory leaks. Purify also works well on Windows, but it's $$.
Another approach is to try to stop the problem up front: Use smart pointers instead of raw pointers.
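A tiny illustration of that idea, with made-up members: once the buffer lives in a std::unique_ptr, Deinitialize() (and the destructor) can no longer forget the matching delete[]:
#include <memory>

struct Camera {
    std::unique_ptr<char[]> name;                        // owns the buffer

    void Initialize()   { name.reset(new char[32]); }    // replaces a raw new[] that had to be paired manually
    void Deinitialize() { name.reset(); }                // delete[] happens here, and again automatically in ~Camera()
};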
Finally make sure that you're actually seeing a leak and not just a memory growth up to a certain plateau.

Dump only a portion of memory in VS 2005

Does any one know if there is a way to dump only a chunk of memory to disk using VS? Basically, I want to give it an address and a length, and have it write the memory to disk. That way I can do a binary diff.
Thanks.
I'm kind of surprised VS won't let you do that from the Memory dump window...
You might be able to get what you want (or close to it) with the VS command window:
>Tools.LogCommandWindowOutput c:\temp\testdump.log /overwrite
>Debug.ListMemory /Count:16 0x00444B20
0x00444B20 00 00 00 00 00 00 00 00 13 00 12 00 86 07 19 00 ................
>Tools.LogCommandWindowOutput /off
If you're willing to use WinDbg (or ntsd/cdb), you can use the .writemem debugger command to do exactly what you want.
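For example, something along these lines (the file path, address, and L100 byte count are placeholders):
.writemem c:\temp\chunk.bin 0x00444B20 L100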
I believe you can only save a complete binary minidump. However, you can use the Debug Memory window and copy/paste to a text file to do memory diffs.
OK, I have tried this in VS 2008, but I believe VS 2005 should allow the same:
If the memory is a string (i.e. it doesn't contain zero bytes), you can put the following into a watch window: (unsigned char*)(ptr),1024 to see 1 kB in the text visualizer. However, this stops at zero bytes, so if you have binary data, this won't work.