When is a handle with the name \GLOBAL?? created? - c++

I have a dump of a process whose handle count reached 16 million handles (the maximum allowed handles per process), so the process hung.
From the dump (this is a second dump, where the handle count is high but not yet at the limit) I get the following data:
53778 Handles
Type Count
None 2
Event 238
Section 3
File 84
Port 16
**Directory 53120**
Mutant 35
WindowStation 2
Semaphore 151
Key 42
Token 4
Process 1
0:000> !handle 9735 f
Handle 00009735
Type Directory
Attributes 0x10
GrantedAccess 0x1:
HandleCount 53575
PointerCount 53788
Name \GLOBAL??
No object specific information available
There are many such open handles with the name \GLOBAL?? and type Directory. In what scenarios is this particular handle created?
Is there any way to determine, from the full dump, where in the code the leak is occurring?

I believe you are using WinDbg.
If I am not wrong, "\GLOBAL??" indicates that your symbolic link relates to all sessions; on Windows 2000 it was "\??". Symbolic links and handles can be local to a session. For example, I can create a mutex handle and make it local to each terminal-services session by explicitly prefixing the mutex name with "Global\" or "Local\" to create the object in the global or local session namespace. See the small example after the link below.
http://msdn.microsoft.com/en-us/library/ms682411(VS.85).aspx
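For illustration, a tiny sketch of the two prefixes mentioned above (the mutex name is just a placeholder):
#include <windows.h>

int main() {
    // Visible to every terminal-services session (global namespace).
    HANDLE hGlobal = CreateMutexW(nullptr, FALSE, L"Global\\MyAppMutex");
    // Visible only within the caller's session (local namespace).
    HANDLE hLocal = CreateMutexW(nullptr, FALSE, L"Local\\MyAppMutex");

    if (hGlobal) CloseHandle(hGlobal);
    if (hLocal) CloseHandle(hLocal);
    return 0;
}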

Is this reproducible? If so, try the !htrace extension: enable handle tracing with !htrace -enable, reproduce the leak, and !htrace will then show you the stacks that opened the leaked handles.

In Windows NT, the old DOS device namespace (drive letters such as H:\) is essentially a set of shortcuts. This is necessary because it's a multi-user system: your H:\ drive might differ from someone else's H:\ drive, so both are implemented as symbolic links.
SysInternals Process Monitor has a handle view and, IIRC, can capture a stack trace for each file operation. That of course adds up quickly; you'll need to learn its filters.


Vista and FindFirstFileEx(..., FIND_FIRST_EX_LARGE_FETCH)

I have an application that queries a specific folder for its contents at a short interval. Until now I was using FindFirstFile, but even with a search pattern applied I expect performance problems in the future, since the folder can get pretty big; in fact it's not in my hands to restrict it at all.
So I decided to give FindFirstFileEx a chance, in combination with some tips from this question.
My exact call is the following:
const char* search_path = "somepath/*.*";
WIN32_FIND_DATA fd;
HANDLE hFind = ::FindFirstFileEx(search_path, FindExInfoBasic, &fd, FindExSearchNameMatch, NULL, FIND_FIRST_EX_LARGE_FETCH);
Now I get pretty good performance, but what about compatibility? My application requires Windows Vista+, but the documentation says the following about FIND_FIRST_EX_LARGE_FETCH:
This value is not supported until Windows Server 2008 R2 and Windows 7.
I can compile it fine on my Windows 7 machine, but what happens if someone runs this on a Vista machine? Does the function fall back to 0
(the default) in that case? Is it safe not to test against the operating system version?
UPDATE:
I said above that I feel the performance is not good. In fact my numbers are the following on a fixed set of files (about 100 of them):
FindFirstFile -> 22 ms
FindFirstFile -> 4 ms (using a specific pattern; however, all files may be wanted)
FindFirstFileEx -> 1 ms (regardless of pattern or full list)
What worries me is what happens if the folder grows to, say, 50k files. That's about 500x bigger and still not an upper bound. It would mean about 11 seconds for an application querying at 25 fps (it's graphical).
Just tested under Windows XP (compiled under Windows 7): you get error 0x57 (The parameter is incorrect) when ::FindFirstFileEx() is called with FIND_FIRST_EX_LARGE_FETCH. You should check the Windows version and choose the value of the additional parameter dynamically (a hedged sketch follows below).
FindExInfoBasic is also not supported before Windows Server 2008 R2 and Windows 7; it causes the same run-time 0x57 error. It must be changed to an alternative if the binary is run on an older Windows version.
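A hedged sketch of that fallback (FindFirstCompat is just an illustrative wrapper name, not an existing API):
#include <windows.h>

HANDLE FindFirstCompat(const wchar_t* pattern, WIN32_FIND_DATAW* fd) {
    // IsWindows7OrGreater() from <versionhelpers.h> would also work; a manual
    // check keeps this buildable with older SDKs.
    OSVERSIONINFOW vi = { sizeof(vi) };
    GetVersionExW(&vi);
    bool win7OrLater = (vi.dwMajorVersion > 6) ||
                       (vi.dwMajorVersion == 6 && vi.dwMinorVersion >= 1);

    // Fall back to the Vista-safe values when the newer ones are unsupported.
    FINDEX_INFO_LEVELS level = win7OrLater ? FindExInfoBasic : FindExInfoStandard;
    DWORD flags = win7OrLater ? FIND_FIRST_EX_LARGE_FETCH : 0;

    return FindFirstFileExW(pattern, level, fd, FindExSearchNameMatch, nullptr, flags);
}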
First of all, periodically querying a specific folder for its contents at a short interval is not the best solution, I think.
You should call ReadDirectoryChangesW instead: you no longer need periodic queries, you get notified when files in the directory change. The best way is to bind the directory handle with BindIoCompletionCallback or CreateThreadpoolIo and then make the first ReadDirectoryChangesW call directly. When something changes, your callback is invoked automatically; after you process the data, call ReadDirectoryChangesW again from the callback, until you get STATUS_NOTIFY_CLEANUP (with BindIoCompletionCallback) or ERROR_NOTIFY_CLEANUP (with CreateThreadpoolIo) in the callback (which means the directory handle was closed to stop notifications) or some other error.
After the first call to ReadDirectoryChangesW you still need one FindFirstFileEx/FindNextFile loop, but only once, and you handle all returned files as if they were FILE_ACTION_ADDED notifications. A simplified synchronous sketch follows.
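A minimal synchronous sketch of the ReadDirectoryChangesW loop (the watched path is a placeholder; a production version would use the asynchronous form with BindIoCompletionCallback or CreateThreadpoolIo as described above):
#include <windows.h>
#include <cstdio>

int main() {
    // FILE_FLAG_BACKUP_SEMANTICS is required to open a directory handle.
    HANDLE hDir = CreateFileW(L"C:\\watched", FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, nullptr,
        OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (hDir == INVALID_HANDLE_VALUE) return 1;

    alignas(DWORD) BYTE buffer[64 * 1024];
    DWORD bytes = 0;
    while (ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), FALSE,
               FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
               &bytes, nullptr, nullptr)) {
        // Walk the packed FILE_NOTIFY_INFORMATION records in the buffer.
        for (BYTE* p = buffer;;) {
            auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(p);
            wprintf(L"action %lu: %.*s\n", info->Action,
                    (int)(info->FileNameLength / sizeof(WCHAR)), info->FileName);
            if (info->NextEntryOffset == 0) break;
            p += info->NextEntryOffset;
        }
    }
    CloseHandle(hDir);
    return 0;
}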
Now about performance and compatibility.
All of this is for information only; I am not recommending that you use it or avoid it.
If you need performance, look at ZwQueryDirectoryFile; it gives you a very big performance win. You only need to open the file handle once, not every time as with FindFirstFileEx.
The main thing, though, is the ReturnSingleEntry parameter; this is the key point. You need to set it to FALSE and pass a large enough buffer in FileInformation. If ReturnSingleEntry is TRUE, the function returns only one file per call, so if the folder contains N files you will need to call ZwQueryDirectoryFile N times. With ReturnSingleEntry == FALSE you can get all the files in a single call, if the buffer is large enough. In any case you seriously reduce the number of round trips to the kernel, which is a very costly operation: one query returning N files is much faster than N queries. FIND_FIRST_EX_LARGE_FETCH does the same thing (it sets ReturnSingleEntry to FALSE), but in the current implementation (I checked this on the latest Windows 10) the system does this only in the FindNextFile calls; the first call made by FindFirstFileEx (for some unknown reason) still uses ReturnSingleEntry == TRUE, so there are at least two calls to ZwQueryDirectoryFile where a single call would be possible (if the buffer is large enough, of course). If you use ZwQueryDirectoryFile directly, you control the buffer size: you can allocate, say, 1 MB once and then reuse it for the periodic queries, without reallocation. You cannot control how large a buffer FindFirstFileEx uses with FIND_FIRST_EX_LARGE_FETCH (in the current implementation it is 64 KB, quite a reasonable value).
You also have a much richer choice of FileInformationClass: a less informative info class means less data and a faster call.
About compatibility: this exists and works with full functionality from at least Windows 2000 up to the latest Windows 10. (The documentation says "Available starting with Windows XP", but in ntifs.h it is declared under #if (NTDDI_VERSION >= NTDDI_WIN2K), and it really was already present in Windows 2000; no matter, XP support is more than enough now.)
But isn't this undocumented, unsupported, kernel-mode only, with no .lib file?
It is documented, and as you can see it is available to both user and kernel mode. How do you think FindFirstFile[Ex] / FindNextFile work? They call ZwQueryDirectoryFile; there is no other way, since all calls into the kernel go through ntdll.dll; this is fundamental. (Yes, it is still possible that ntdll.dll will one day stop exporting by name and export by ordinal only, to show what really is unsupported.) A .lib file exists, in fact two, ntdll.lib and ntdllp.lib (the latter covers more APIs than the former), in any WDK. Where is it declared? #include <ntifs.h>, but that conflicts with #include <windows.h>; the conflicts can be avoided with some tricks (including ntifs.h inside a namespace), or you can simply resolve the function from ntdll.dll at run time, as in the sketch below.
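A hedged sketch of calling NtQueryDirectoryFile (the Zw name is the kernel-mode alias) with one large reusable buffer and ReturnSingleEntry == FALSE, resolving the function from ntdll.dll at run time to avoid the ntifs.h/windows.h conflict; the directory path is a placeholder and the FILE_DIRECTORY_INFORMATION layout follows ntifs.h:
#include <windows.h>
#include <winternl.h>   // UNICODE_STRING, IO_STATUS_BLOCK, FileDirectoryInformation
#include <cstdio>
#include <vector>

#ifndef NT_SUCCESS
#define NT_SUCCESS(status) (((NTSTATUS)(status)) >= 0)
#endif

// Layout as in ntifs.h, defined locally so we do not have to include ntifs.h.
typedef struct _FILE_DIRECTORY_INFORMATION {
    ULONG NextEntryOffset;
    ULONG FileIndex;
    LARGE_INTEGER CreationTime;
    LARGE_INTEGER LastAccessTime;
    LARGE_INTEGER LastWriteTime;
    LARGE_INTEGER ChangeTime;
    LARGE_INTEGER EndOfFile;
    LARGE_INTEGER AllocationSize;
    ULONG FileAttributes;
    ULONG FileNameLength;
    WCHAR FileName[1];
} FILE_DIRECTORY_INFORMATION;

typedef NTSTATUS (NTAPI *NtQueryDirectoryFile_t)(
    HANDLE FileHandle, HANDLE Event, PVOID ApcRoutine, PVOID ApcContext,
    PIO_STATUS_BLOCK IoStatusBlock, PVOID FileInformation, ULONG Length,
    FILE_INFORMATION_CLASS FileInformationClass, BOOLEAN ReturnSingleEntry,
    PUNICODE_STRING FileName, BOOLEAN RestartScan);

int main() {
    auto pNtQueryDirectoryFile = reinterpret_cast<NtQueryDirectoryFile_t>(
        GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtQueryDirectoryFile"));
    if (!pNtQueryDirectoryFile) return 1;

    // Open the directory once and reuse the handle for every periodic query.
    HANDLE hDir = CreateFileW(L"C:\\some\\folder", FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, nullptr,
        OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (hDir == INVALID_HANDLE_VALUE) return 1;

    std::vector<BYTE> buffer(1024 * 1024);   // one large buffer, allocated once
    IO_STATUS_BLOCK iosb = {};

    // With ReturnSingleEntry == FALSE, each successful call returns as many
    // entries as fit in the buffer; STATUS_NO_MORE_FILES ends the loop.
    while (NT_SUCCESS(pNtQueryDirectoryFile(hDir, nullptr, nullptr, nullptr, &iosb,
               buffer.data(), (ULONG)buffer.size(), FileDirectoryInformation,
               FALSE, nullptr, FALSE))) {
        for (BYTE* p = buffer.data();;) {
            auto* e = reinterpret_cast<FILE_DIRECTORY_INFORMATION*>(p);
            wprintf(L"%.*s\n", (int)(e->FileNameLength / sizeof(WCHAR)), e->FileName);
            if (e->NextEntryOffset == 0) break;
            p += e->NextEntryOffset;
        }
    }
    CloseHandle(hDir);
    return 0;
}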

Communicate with EPP using WOSA XFS

I'm creating a program to communicate with an EPP (encrypting PIN pad): if I press any key, the application should recognise it and print it back, for now just as an illustration of how to create an application that communicates with the EPP.
Thank you.
I've found what I will call a function in CWA 14050-6:2005, on page 42, WFS_CMD_PIN_GET_DATA, whose description states that "This function is used to return keystrokes entered by the user." My problem is writing or calling this function. On page 43 the output parameter is "LPWFSPINDATA lpPinData;", and its typedef contains a member, lpPinKeys, which points to an array of pointers to WFSPINKEY structures containing the keys entered by the user. After going through the document I saw that in my case this is what I need, because when I key in, say, 1234 and press Enter, the application should be able to assign 1234 to a variable so that it can be verified; Enter should make the app verify the keys against the database/server, and if they match the app calls the relevant service, otherwise an error is displayed.
My main problem is calling this within my app, even if for now the app can only display back or assign the keys from the PIN pad (EPP) to a variable. For now I don't use any encryption; I just want to get and verify the plain keys.
If you require EPP development for NCR machines, you have to use the Key Library API. This is not part of the CEN XFS standard, but it is explained in the NCR XFS documentation.
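For what it's worth, here is a heavily hedged sketch of the standard CEN XFS call described in the question. WFSExecute, WFSFreeResult, WFS_CMD_PIN_GET_DATA and WFS_INDEFINITE_WAIT are standard XFS Manager APIs, but the exact WFSPINGETDATA/WFSPINDATA/WFSPINKEY field names are taken from CWA 14050-6 and should be checked against your SDK's xfspin.h; hService is assumed to be a PIN service that has already been opened with WFSStartUp/WFSOpen (not shown):
#include <windows.h>
#include <xfsapi.h>   // CEN XFS manager API
#include <xfspin.h>   // PIN service class definitions
#include <cstdio>

void ReadPlainKeys(HSERVICE hService) {
    // Ask the service to collect keystrokes; fill the remaining WFSPINGETDATA
    // fields (active/terminate keys, FDKs, ...) as required by your device.
    WFSPINGETDATA getData = {0};
    getData.usMaxLen = 4;          // e.g. a 4-digit entry
    getData.bAutoEnd = TRUE;       // stop automatically once usMaxLen keys arrive

    LPWFSRESULT lpResult = nullptr;
    HRESULT hr = WFSExecute(hService, WFS_CMD_PIN_GET_DATA,
                            &getData, WFS_INDEFINITE_WAIT, &lpResult);
    if (hr != WFS_SUCCESS || lpResult == nullptr)
        return;

    // lpBuffer points to a WFSPINDATA whose lpPinKeys member is an array of
    // pointers to WFSPINKEY, one entry per key the user pressed.
    LPWFSPINDATA pinData = (LPWFSPINDATA)lpResult->lpBuffer;
    for (USHORT i = 0; i < pinData->usKeys; ++i) {
        LPWFSPINKEY key = pinData->lpPinKeys[i];
        printf("key %u: digit 0x%lx, completion %u\n",
               (unsigned)i, key->ulDigit, (unsigned)key->wCompletion);
    }

    WFSFreeResult(lpResult);       // always hand the result back to the XFS manager
}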

How to JUDGE another program's result via C++?

I've got a series of C++ source files and I want to write another program to judge whether they run correctly (feed them input and compare their output with the expected standard output). So how do I:
call/spawn another program, and give it a file as its standard input
limit the time and memory of the child process (maybe the setrlimit thing? are there any examples?)
prevent the process from reading/writing any files
use a file as its standard output
compare the output with the expected output
I think the 2nd and 3rd points are the core of this problem. Is there any way to do this?
PS: the system is Linux.
To do this right, you probably want to spawn the child program with fork, not system.
This allows you to do a few things. First of all, you can set up some pipes to the parent process so the parent can supply the input to the child and capture the output from the child to compare with the expected result.
Second, it lets you call seteuid (or one of its close relatives like setreuid) to run the child process under a (very) limited user account, to prevent it from writing to files. Between fork and exec, the child should also call setrlimit on itself to limit its CPU usage.
Just to be clear: rather than directing the child's output to a file and then comparing that to the expected output, I'd capture the child's output directly via a pipe to the parent. From there the parent can write the data to a file if desired, but it can also compare the output directly to what's expected, without going through a file. A minimal sketch follows.
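A minimal sketch of that flow under Linux (the program path, input, UID and limits are placeholders):
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <string>

int main() {
    int inPipe[2], outPipe[2];                 // parent -> child, child -> parent
    if (pipe(inPipe) < 0 || pipe(outPipe) < 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                            // child
        dup2(inPipe[0], STDIN_FILENO);         // read test input from the parent
        dup2(outPipe[1], STDOUT_FILENO);       // send output back to the parent
        close(inPipe[0]); close(inPipe[1]);
        close(outPipe[0]); close(outPipe[1]);

        rlimit cpu{2, 2};                      // 2 seconds of CPU time
        rlimit mem{64 << 20, 64 << 20};        // 64 MB of address space
        setrlimit(RLIMIT_CPU, &cpu);
        setrlimit(RLIMIT_AS, &mem);
        setuid(65534);                         // e.g. "nobody"; requires running as root

        execl("./program_under_test", "program_under_test", (char*)nullptr);
        _exit(127);                            // exec failed
    }

    // parent: feed the input, collect the output, wait for the child
    close(inPipe[0]); close(outPipe[1]);
    const char* input = "1 2 3\n";
    write(inPipe[1], input, strlen(input));
    close(inPipe[1]);                          // EOF for the child's stdin

    std::string output;
    char buf[4096];
    ssize_t n;
    while ((n = read(outPipe[0], buf, sizeof(buf))) > 0)
        output.append(buf, n);
    close(outPipe[0]);

    int status = 0;
    waitpid(pid, &status, 0);
    printf("child exited %d, output:\n%s", WEXITSTATUS(status), output.c_str());
    // ...compare `output` with the expected output here...
    return 0;
}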
std::string command = "/bin/local/app < my_input.txt > my_output_file.txt 2> my_error_file.txt";
int rv = std::system( command.c_str() );
1) The std::system function (from <cstdlib>) allows you to execute a program, basically as if invoked from a shell. Note that this approach is inherently insecure, so only use it in a trusted environment.
2) You will need to use threads to achieve this. There are a number of thread libraries available for C++, but I cannot give you a recommendation.
[After the edit in the OP's post]
3) This one is harder. You either have to write a wrapper that monitors read/write access to files, or do some Linux/Unix privilege magic to prevent the program from accessing files.
4) You can redirect a program's standard output by adding > outFile.txt after the way you would normally invoke the program (see 1)), e.g. otherapp > out.txt.
5) You could run diff on the saved file (from 4)) against the "golden standard"/expected output captured in another file, or use some other method that better fits your needs (for example, if you don't care about certain formatting as long as the "content" is there). This part really depends on your needs; diff does a basic comparison job well.

virtual filesystem design

I'm starting a protector/packer/binder-like project.
The goal is that when you have a full app directory with
/images/
/music/
base *.ini files
dlls
exes
you just run packer.exe on it and all these files are packed, encrypted, and stored in the resulting exe.
The resulting exe then creates a transparent virtual filesystem that falls back to the "real" one if a file is not found.
I can already handle (not very accurately) loading DLLs from memory and so on, but I have a problem with the, hmm, hooks.
For now, as a proof of concept, I'm attaching a debugger (written in C++) to a target.exe.
Its output looks somewhat like this:
======= Started [target.exe] =======
> Placing breakpoint on EP : 0x401130
Process started
Loaded module : [target.exe]
Loaded module : [ntdll.dll]
Loaded module : [kernel32.dll]
[...]
Break point at [0x401130]
> Restored EP byte.
Loaded module : [bass.dll]
Break point at [0x760fcc4e]
Found set bp : kernel32!CreateFileW
[!] CreateFileW Callback Function :
FileName : C:\Users\user\Desktop\cppve\loader\bin\Debug\target.exe
Access : 0x80000000
Return Addr: 0x741b91e6
> Re-setting bp at [0x760fcc4e]
Break point at [0x760fcc4e]
Found set bp : kernel32!CreateFileW
[!] CreateFileW Callback Function :
FileName : .\beyond_v.mod
Access : 0x80000000
Return Addr: 0x760fcfa0
I am handling breakpoints in the debugger for things like CreateFileW, ReadFile, etc.
I'm having problems supplying the target with usable data.
Should I create a fake handle and then catch and process it? Or are there too many things that can go very wrong with that approach?
Here is a sample callback function for CreateFileW:
void callback_createfilew(CONTEXT* ct) {
    // stub
    cout << "[!] CreateFileW Callback Function :" << endl;
    // Read the return address and the CreateFileW arguments off the target's stack.
    void* returnaddr = MemReadDwordPtr(hProcess, (void*)ct->Esp);
    string fn = MemReadCString(hProcess, MemReadDwordPtr(hProcess, (void*)(ct->Esp + 4)), true);
    void* access = MemReadDwordPtr(hProcess, (void*)(ct->Esp + 8));
    void* sharemode = MemReadDwordPtr(hProcess, (void*)(ct->Esp + 12));
    void* dwCreationDisposition = MemReadDwordPtr(hProcess, (void*)(ct->Esp + 20));
    void* dwFlagsAndAttributes = MemReadDwordPtr(hProcess, (void*)(ct->Esp + 24));
    cout << " FileName   : " << fn << endl;
    cout << " Access     : " << access << endl;
    cout << " Return Addr: " << returnaddr << endl;
    if (fn.compare(".\\beyond_v.mod") == 0) {
        // This is wrong: the file needs to be opened from within the target process,
        // otherwise the returned handle is not valid there.
        HANDLE ret = CreateFileA(".\\_beyond_v.mod", (DWORD)access, (DWORD)sharemode, NULL,
                                 (DWORD)dwCreationDisposition, (DWORD)dwFlagsAndAttributes, NULL);
        // Simulate the return: pop the arguments, set EAX and resume at the caller.
        ct->Esp += 0x20;
        ct->Eax = (DWORD)ret;
        ct->Eip = (DWORD)returnaddr;
    }
}
Should I make a code cave in the process and push shellcode there [Edit: sorry, I use many of these words to describe different things, but I think you'll catch what I meant :)] to execute my faking code?
Or maybe inject a DLL that handles the int3s, and pass control to it via exception handlers set up by the loader? However, that could prove tricky: that DLL would have to be in the virtual filesystem, so I would have to hand-load it before any other initialisation takes place.
In the final version I would like to drop the debugger completely; it will only cause problems and seriously compromise the protector part of the project.
If you want your "packer" to operate transparently on precompiled binaries and want everything to be inside the resultant single binary, the packer needs to add the hooking code to the binary, perhaps making it execute as the very first thing and only then pass control to the original entry point of the binary. This isn't very trivial, although is certainly doable.
But you have another problem here. This hooking code will contain the decryption code and probably the key as well and this all thing is breakable by a good programmer with a debugger and some other tools.
As for fake handles, I'd see if it's possible to open a file multiple times and get different handles. If it is, just open any existing file for reading in a shared mode, remember the handle and use it for an in-memory file. Need another handle? Open the file again to get one. That will guarantee no collision with other real handles.
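A small sketch of the idea, assuming the hooking code runs inside the target process (for example via the injected DLL mentioned in the question); the file path and names are placeholders:
#include <windows.h>
#include <map>
#include <vector>

std::map<HANDLE, std::vector<BYTE>> g_virtualFiles;   // fake handle -> packed file contents

HANDLE MakeVirtualHandle(std::vector<BYTE> contents) {
    // Any file that is guaranteed to exist and allow shared reads will do;
    // the path here is just a placeholder.
    HANDLE h = CreateFileW(L"C:\\Windows\\System32\\ntdll.dll", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           nullptr, OPEN_EXISTING, 0, nullptr);
    if (h != INVALID_HANDLE_VALUE)
        g_virtualFiles[h] = std::move(contents);       // this handle now means "virtual file"
    return h;
}

// The hooked ReadFile would first check g_virtualFiles for the handle and,
// if found, copy bytes from memory; otherwise it falls through to the real
// ReadFile for ordinary handles.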

What is a good way to create a string for crash reporting Win32 C++ that reflects the cause of the crash?

We're using FogBugz for tracking issues and I am in the middle of writing a C++ wrapper around the XML API for FogBugz.
The best practice seems to be to use the "scout" field so that similar/identical crashes are only counted, not reported again. To do that we need a unique string for a particular cause of a crash.
On Win32, after getting a .dmp file or other crash-handler output, what is a good way to make a unique string for a crash? (We're going to create a .dmp file and send it to the FogBugz server.)
In previous postings/articles/etc. Joel has made various suggestions, but many of those rely on a language like C# that uses reflection and has a lot of information that is either harder or impossible to get from C++.
Have any other people used things like stack traces or other data to make scout entries in FogBugz?
EDIT
To clarify: we don't want a unique ID for every incident; there are likely to be crashes that share the same code path, and we want to capture that. I was thinking we would take the last few stack calls that are in our code (not the ones from Win32 DLLs), but I'm not sure how to go about doing this.
Reporting every crash as unique is not right. Reporting all crashes under the same case is not right either. Different users repeating a scenario that causes a crash should map to the same incident.
EDIT
What I think we want is a general "signature" of a crash, based on what is on the stack. Similar stacks should have the same signature. For example, take the top 5 methods that are in our app and then the first call (if any) we make into an MS DLL. This would probably be sufficient for a signature and would likely correlate the crashes that are "the same".
So how does one get the list of methods on the stack? And how can you tell whether they are from your own app or from another DLL?
EDIT - NOTE
We want to create a "bucket ID"/signature while in the exception handler, so that we can create the minidump and send it to FogBugz with the signature as the scout description. Alternatively we can load the dump on the next start of the app and send it then, with a signature we generate.
In my project I use the memory address of the crash as a "unique" ID.
IMO the best thing you can use is the bucket ID from dump analysis. With properly configured Debugging Tools for Windows (WinDbg), you can run !analyze -v and classify your dumps into different buckets based on the bucket ID. The bucket ID guarantees that if two dumps are the same, their bucket IDs will be the same. That solves part of the puzzle.
Many times, two dumps rooted in the same problem will produce different bucket IDs (maybe due to a version difference; say your 1.0 and 1.1 both crash at the same point). You can use the faulting module and the stack signature to correlate bugs from the same point of failure.
Certain things cause very random dumps (e.g. heap corruption, where the faulting module is typically the victim), so dump analysis should be considered best effort. When you can't, you can't.
I used something like this to generate exceptions in my last app (MSVC), so every error would get logged with the source file and line it occurred on:
#include <string>

class Error {
    // ...
public:
    Error(std::string file, int line, std::string error);
};

// __LINE__ expands to an integer, so the line parameter is an int
#define ERROR(err) Error(__FILE__, __LINE__, (err))
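Usage is then simply (illustrative only):
// __FILE__ and __LINE__ are substituted at the call site.
throw ERROR("failed to open configuration file");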
It's probably a little bit late, but I will add my solution here too, in case it helps other people.
You can do this using the tools from "Debugging Tools for Windows", for example windbg.exe or, better, kd.exe.
Running the command kd.exe -z "path_to_dump.dmp" -c "kb;q" >> dumpstack.txt, you might get the following result:
Microsoft (R) Windows Debugger Version 10.0.15063.400 X86
Copyright (c) Microsoft Corporation. All rights reserved.
Loading Dump File [d:\work\bugs\14122\myexe.exe.2624.dmp]
User Mini Dump File with Full Memory: Only application data is available
************* Symbol Path validation summary **************
Response Time (ms) Location
Deferred srv*C:\Symbols*http://msdl.microsoft.com/download/symbols
Symbol search path is: srv*C:\Symbols*http://msdl.microsoft.com/download/symbols
Executable search path is:
Windows 10 Version 15063 MP (4 procs) Free x86 compatible
Product: WinNt, suite: SingleUserTS
15063.0.x86fre.rs2_release.170317-1834
Machine Name:
Debug session time: Fri Oct 13 00:09:01.000 2017 (UTC + 1:00)
System Uptime: 0 days 0:18:33.797
Process Uptime: 0 days 0:03:40.000
................................................................
.....................................................
Loading unloaded module list
..............................
This dump file has an exception of interest stored in it.
The stored exception information can be accessed via .ecxr.
(a40.2580): Security check failure or stack buffer overrun - code c0000409 (first/second chance not available)
eax=00000001 ebx=00000000 ecx=00000007 edx=77cc4350 esi=00000000 edi=00000000
eip=62ae7666 esp=0b75e17c ebp=0b75e1a8 iopl=0 nv up ei pl nz na po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000202
msvcr120!abort+0x28:
62ae7666 cd29 int 29h
0:068> kd: Reading initial command 'kb;q'
ChildEBP RetAddr Args to Child
0b75e178 62addc5f 935dda1f 00000000 00000000 msvcr120!abort+0x28
0b75e1a8 0b75e7d4 62a9b436 0b75e1dc 62a52aa5 msvcr120!terminate+0x33
WARNING: Frame IP not in any known module. Following frames may be wrong.
0b75e1ac 62a9b436 0b75e1dc 62a52aa5 00000000 0xb75e7d4
0b75e1b4 62a52aa5 00000000 62a59740 0b75e7d4 msvcr120!__FrameUnwindToState+0x89
0b75e1c8 62a52b33 00000000 00000000 00000000 msvcr120!_EH4_CallFilterFunc+0x12
0b75e1f4 62a5a0f3 62b1f7b8 62a4f7c6 0b75e324 msvcr120!_except_handler4_common+0x8e
0b75e214 77cd6152 0b75e324 0b75e7c4 0b75e344 msvcr120!_except_handler4+0x1e
0b75e238 77cd6124 0b75e324 0b75e7c4 0b75e344 ntdll!ExecuteHandler2+0x26
0b75e30c 77cc4266 0b75e324 0b75e344 0b75e324 ntdll!ExecuteHandler+0x24
0b75e30c 74cf28f2 0b75e324 0b75e344 0b75e324 ntdll!KiUserExceptionDispatcher+0x26
0b75e684 62a59339 e06d7363 00000001 00000003 KERNELBASE!RaiseException+0x62
0b75e6c4 6001821c 0b75e6e4 6004e1bc 946a8f2a msvcr120!_CxxThrowException+0x5b
0b75e6f8 60018042 0b75e720 946a8efa ffffffff mymodule!FunctionC+0x7c
0b75e730 60016544 946a8ece ffffffff 092889d8 mymodule!FunctionB+0x32
0b75e754 600166b8 00842338 6000588d 00000001 myothermodule!FunctionB+0x44
From this stack you can create a unique bucket if you take, for example, only your own methods from the stack and concatenate them into a string: "mymodule!FunctionC+0x7c;mymodule!FunctionB+0x32;myothermodule!FunctionB+0x44". For this to work you need access to your personal symbol server, either via the _NT_SYMBOL_PATH environment variable or with the -y command-line switch.
You can alternatively create a string from the return addresses only (second column): "62addc5f,0b75e7d4,62a9b436,62a52aa5,62a52b33,62a5a0f3,77cd6152,77cd6124,77cc4266,74cf28f2,62a59339,6001821c,60018042,60016544,600166b8"
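If you prefer to build the signature in-process, inside the exception handler as the question's note asks, here is a hedged sketch using CaptureStackBackTrace and the DbgHelp symbol APIs. It keeps only frames that resolve inside your own executable; function names come out only if your PDBs are available, and the usual caveat applies that calling DbgHelp inside a crashing process is best effort:
#include <windows.h>
#include <dbghelp.h>
#include <string>
#include <cstdio>
#pragma comment(lib, "dbghelp.lib")

std::string BuildCrashSignature() {
    HANDLE process = GetCurrentProcess();
    SymInitialize(process, nullptr, TRUE);           // load symbols for all loaded modules

    void* frames[32] = {};
    USHORT count = CaptureStackBackTrace(0, 32, frames, nullptr);

    HMODULE ownModule = GetModuleHandleW(nullptr);   // the EXE itself
    std::string signature;

    for (USHORT i = 0; i < count && signature.size() < 256; ++i) {
        // Which module does this return address belong to?
        HMODULE mod = nullptr;
        GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS |
                           GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
                           (LPCWSTR)frames[i], &mod);
        if (mod != ownModule)
            continue;                                // skip frames from system DLLs

        // Resolve the address to "function+offset" if symbols are available.
        char buf[sizeof(SYMBOL_INFO) + 256] = {};
        SYMBOL_INFO* sym = (SYMBOL_INFO*)buf;
        sym->SizeOfStruct = sizeof(SYMBOL_INFO);
        sym->MaxNameLen = 255;
        DWORD64 displacement = 0;
        if (SymFromAddr(process, (DWORD64)(ULONG_PTR)frames[i], &displacement, sym)) {
            char frame[320];
            sprintf_s(frame, "%s+0x%llx;", sym->Name, displacement);
            signature += frame;
        }
    }
    return signature;     // e.g. "FunctionC+0x7c;FunctionB+0x32;" -> the scout description
}

// Install with SetUnhandledExceptionFilter(CrashFilter) early in startup.
LONG WINAPI CrashFilter(EXCEPTION_POINTERS* /*info*/) {
    std::string sig = BuildCrashSignature();
    printf("crash signature: %s\n", sig.c_str());    // or write the minidump and report it
    return EXCEPTION_EXECUTE_HANDLER;
}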
Just use an MD5 hash generated from the dump file and you will likely get a unique string for every crash.
I would start by collecting data on how often every function in your code has shown up in a crash-report stack trace. Every report would have to be added to some kind of database, and every function would have to be indexed so that you could later query which functions seem to crash more often than others. (And of course, functions like main() will be in every report, but that's understandable.)
Or, if you think that only your own functions are interesting, you could simply remove all the external entries from the crash stack traces and then hash the rest (your functions). That way you could see whether any particular call chain of your own functions causes a crash repeatedly, no matter what external functions have been called in between.
Then of course, some of the more complicated problems will not be captured this way anyway, as the stack trace will be completely different. To help with that, you could record other data from your application along with the stack trace in every report, such as buffer sizes, counters, the state of different parts of the application and so on, and then do some statistics on that.