ReadDebuggeeMemoryEx failed to read debuggee - c++

I have written a debugger extension for Visual Studio 2010 to display my class type, basing my code on the EEAddin sample provided by Microsoft. But the call to ReadDebuggeeMemoryEx fails.
I can't find any reason for this failure. GetLastError() returns 0.
ObjectId objid;
DWORD nGot;
int state = E_FAIL;
if (pHelper->ReadDebuggeeMemoryEx(pHelper, pHelper->GetRealAddress(pHelper), sizeof(ObjectId), &objid, &nGot))
{
}
else
{
    log("Fail ReadDebuggeeMemoryEx %d\n", GetLastError());
}

The function ReadDebuggeeMemoryEx(...) returns an HRESULT, not a BOOL.
Try something like:
if (pHelper->ReadDebuggeeMemoryEx(...) == S_OK) {
    // good
} else {
    // bad
}
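Also note that GetLastError() is generally not meaningful after a COM-style call; the HRESULT itself carries the error. A fuller sketch, assuming the DEBUGHELPER-style pHelper and the ObjectId type from the question, would check SUCCEEDED() and verify the number of bytes actually read:

ObjectId objid;
DWORD nGot = 0;
HRESULT hr = pHelper->ReadDebuggeeMemoryEx(pHelper, pHelper->GetRealAddress(pHelper),
                                           sizeof(ObjectId), &objid, &nGot);
if (SUCCEEDED(hr) && nGot == sizeof(ObjectId))
{
    // objid is fully populated
}
else
{
    // report the HRESULT, not GetLastError()
    log("Fail ReadDebuggeeMemoryEx hr=0x%08X, read %u of %u bytes\n",
        (unsigned)hr, nGot, (DWORD)sizeof(ObjectId));
}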

Determining source code filename and line for function using Visual Studio PDB

I'm trying to programmatically determine the source filename and line for a function definition using its address and the PDB file generated by Visual Studio for the module the function is defined in.
For instance, I have a function Lua::asset::get_supported_export_file_extensions defined in the module shared.dll, and I want to determine the source code location for it.
To get the relative virtual address (RVA) of the function, I subtract the module base address from the absolute virtual address of the function, like so:
static MODULEENTRY32 GetModuleInfo(std::uint32_t ProcessID, const char* ModuleName)
{
    HANDLE hSnap = nullptr;
    MODULEENTRY32 Mod32 = {0};
    if ((hSnap = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE, ProcessID)) == INVALID_HANDLE_VALUE)
        return Mod32;
    Mod32.dwSize = sizeof(MODULEENTRY32);
    while (Module32Next(hSnap, &Mod32))
    {
        if (!strcmp(ModuleName, Mod32.szModule))
        {
            CloseHandle(hSnap);
            return Mod32;
        }
    }
    CloseHandle(hSnap);
    return {0};
}
int main(int argc, char *argv[])
{
    auto dllInfo = GetModuleInfo(GetCurrentProcessId(), "shared.dll");
    auto rva = (DWORD)((uint64_t)&Lua::asset::get_supported_export_file_extensions - (uint64_t)dllInfo.modBaseAddr);
    [...]
}
This gives me the RVA 0x44508 for the function.
To confirm the address, I've tried using the dbh debugging tool from the Windows SDK to look up the function's address (as well as the base address of the module):
shared [1000000]: enum shared!*get_supported_export_file_extensions*
index address name
1 1e376c0 : `Lua::asset::get_supported_export_file_extensions'::`1'::dtor$0
3 18409c0 : Lua::asset::get_supported_export_file_extensions
shared [1000000]: scope 18409c0
name : path\to\object_files\lasset.obj
addr : 0
size : 0
flags : 0
type : 0
modbase : 1000000
value : 0
reg : 0
scope : SymTagNull (0)
tag : SymTagCompiland (2)
index : 6
I would expect it to give me the same RVA if I subtract the base address from the function address, but instead it gives an RVA of 0x18409c0 - 0x1000000 = 0x8409c0.
For convenience I will be referring to the addresses as:
0x44508 = calculated address
0x8409c0 = dbh address
I then used the Debug Interface Access (DIA) SDK to look up both addresses in the PDB to determine why I'm getting differing results:
static BOOL find_function_in_pdb(DWORD rva, enum SymTagEnum tag)
{
    std::string pdbFilePath = "path/to/shared.pdb";
    CComPtr<IDiaDataSource> pSource;
    if (FAILED(CoInitializeEx(NULL, COINIT_MULTITHREADED)))
        return FALSE;
    auto hr = CoCreateInstance(
        CLSID_DiaSource,
        NULL,
        CLSCTX_INPROC_SERVER,
        __uuidof(IDiaDataSource),
        (void **)&pSource
    );
    if (FAILED(hr))
        return FALSE;
    wchar_t wszFilename[_MAX_PATH];
    mbstowcs(wszFilename, pdbFilePath.data(), sizeof(wszFilename) / sizeof(wszFilename[0]));
    if (FAILED(pSource->loadDataFromPdb(wszFilename)))
        return FALSE;
    IDiaSession *session;
    IDiaSymbol *globalSymbol = nullptr;
    IDiaEnumTables *enumTables = nullptr;
    IDiaEnumSymbolsByAddr *enumSymbolsByAddr = nullptr;
    if (FAILED(pSource->openSession(&session)))
        return FALSE;
    if (FAILED(session->get_globalScope(&globalSymbol)))
        return FALSE;
    if (FAILED(session->getEnumTables(&enumTables)))
        return FALSE;
    if (FAILED(session->getSymbolsByAddr(&enumSymbolsByAddr)))
        return FALSE;
    IDiaSymbol *symbol;
    if (session->findSymbolByRVA(rva, tag, &symbol) == S_OK)
    {
        BSTR name;
        symbol->get_name(&name);
        std::cout << "Name: " << ConvertBSTRToMBS(name) << std::endl;
        ULONGLONG length = 0;
        if (symbol->get_length(&length) == S_OK)
        {
            IDiaEnumLineNumbers *lineNums;
            // findLinesByRVA takes a DWORD length
            if (session->findLinesByRVA(rva, (DWORD)length, &lineNums) == S_OK)
            {
                IDiaLineNumber *lineNum;
                ULONG fetched = 0;
                // fetch the line records one at a time
                for (uint8_t i = 0; i < 5; ++i)
                {
                    if (lineNums->Next(1, &lineNum, &fetched) == S_OK && fetched == 1)
                    {
                        DWORD lineNumber;
                        IDiaSourceFile *srcFile;
                        if (lineNum->get_sourceFile(&srcFile) == S_OK)
                        {
                            BSTR fileName;
                            srcFile->get_fileName(&fileName);
                            std::cout << "File: " << ConvertBSTRToMBS(fileName) << std::endl;
                        }
                        if (lineNum->get_lineNumber(&lineNumber) == S_OK)
                            std::cout << "Line: " << lineNumber << std::endl;
                    }
                }
            }
        }
    }
    return TRUE;
}
int main(int argc, char *argv[])
{
    find_function_in_pdb(0x44508 /* calculated address */, SymTagEnum::SymTagPublicSymbol);
    find_function_in_pdb(0x8409c0 /* dbh address */, SymTagEnum::SymTagFunction);
    [...]
}
It does actually find both addresses, and both point to a symbol with a name matching my function; however, the symbol at the calculated address is a SymTagPublicSymbol, while the symbol at the dbh address is a SymTagFunction.
I'm guessing that means the calculated address belongs to the public symbol and the dbh address to the private symbol? (https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/public-and-private-symbols)
So far so good. The problem is that the public symbol does not have any source code information associated with it, but the private symbol does. Assuming I'm correct so far (which I'm not quite sure about), my question boils down to:
How can I get the private symbol/address from the public symbol/address? I need a solution that I can implement programmatically.
After some more experimentation I found the solution:
IDiaSymbol *publicSymbol;
DWORD publicRva = 0x44508;
if (session->findSymbolByRVA(publicRva, SymTagEnum::SymTagPublicSymbol, &publicSymbol) == S_OK)
{
    DWORD privateRva;
    IDiaSymbol *privateSymbol;
    if (
        publicSymbol->get_targetRelativeVirtualAddress(&privateRva) == S_OK &&
        session->findSymbolByRVA(privateRva, SymTagEnum::SymTagFunction, &privateSymbol) == S_OK
    )
    {
        // Do stuff with the private symbol
    }
}
get_targetRelativeVirtualAddress gives me 0x8409c0, the address of the private symbol, which contains the source code information.
As for why this works, I have no idea. According to the documentation, get_targetRelativeVirtualAddress is only supposed to be valid for SymTagThunk symbols and returns the RVA of a "thunk target". I don't think the public symbol is a thunk target, but it works without errors and gives me exactly what I need.
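As an aside, if using DbgHelp instead of DIA is an option, SymGetLineFromAddr64 resolves file and line directly from an absolute address (module base + RVA), and it should handle the public/private distinction internally. A minimal sketch, assuming the PDB can be found via the symbol search path, with error handling trimmed:

#include <windows.h>
#include <dbghelp.h>
#include <iostream>
#pragma comment(lib, "dbghelp.lib")

bool print_line_for_address(DWORD64 address)
{
    HANDLE hProcess = GetCurrentProcess();
    // TRUE: load symbols for all modules already loaded into this process
    if (!SymInitialize(hProcess, NULL, TRUE))
        return false;
    IMAGEHLP_LINE64 line = { sizeof(IMAGEHLP_LINE64) };
    DWORD displacement = 0;
    bool ok = SymGetLineFromAddr64(hProcess, address, &displacement, &line) != FALSE;
    if (ok)
        std::cout << "File: " << line.FileName << " Line: " << line.LineNumber << std::endl;
    SymCleanup(hProcess);
    return ok;
}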

per thread c++ guard to prevent re-entrant function calls

I've got a function that queries the registry; it can fail and print the failure reason.
This function can also be called, directly or indirectly, from the context of a dedicated built-in printing function, and I wish to avoid printing the reason in this case to avoid endless recursion.
I could use thread_local to define a per-thread flag that avoids calling the print function from this function, but I guess it's a rather widespread problem, so I'm looking for an std implementation of this guard, or any other well-debugged code.
Here's an example, made up just to express the problem.
Each print function comes with a log level, which is compared against the current log-level threshold that resides in the registry. If it is lower than the threshold, the function returns without printing. However, getting the threshold can itself trigger additional prints, so I want a guard that prevents the print from getPrintLevelFromRegistry if it's called from print.
int getPrintLevelFromRegistry() {
    int value = 0;
    // simplified call; the real RegGetValueW takes more parameters
    DWORD res = RegGetValueW(L"Software\\...\\logLevel", &value);
    if (res != ERROR_SUCCESS) {
        print("couldn't find registry key", 0); // recursion hazard!
        return 0;
    }
    return value;
}
void print(const char *text, int printLogLevel) {
    if (printLogLevel < getPrintLevelFromRegistry()) {
        return;
    }
    // do the print itself
    ...
}
Thanks!
The root of the problem is that you are attempting to have your logging code log itself. Rather than some complicated guard, consider that you really don't need to log a registry read: just have it return a default value and write the error to the debug output instead.
int getPrintLevelFromRegistry() {
    int value = 0;
    DWORD res = RegGetValueW(L"Software\\...\\logLevel", &value);
    if (res != ERROR_SUCCESS) {
        OutputDebugStringA("getPrintLevelFromRegistry: Can't read from registry\r\n");
    }
    return value;
}
Further, while reading the registry on each log statement works, it's redundant and unnecessary.
Better:
int getPrintLevelFromRegistry() {
    // -1 means "not read yet"; the registry is queried until one read succeeds
    static std::atomic<int> cachedValue(-1);
    int value = cachedValue;
    if (value == -1) {
        DWORD res = RegGetValueW(L"Software\\...\\logLevel", &value);
        if (res == ERROR_SUCCESS) {
            cachedValue = value;
        }
    }
    return value;
}
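That said, if a re-entrancy guard is still wanted: there is no ready-made guard type for this in the standard library, and the usual approach is exactly the thread_local flag from the question, wrapped in a small RAII type so it is exception-safe. A minimal sketch (the names are made up for illustration):

class ReentrancyGuard {
public:
    explicit ReentrancyGuard(bool& flag) : m_flag(flag), m_wasSet(flag) { m_flag = true; }
    ~ReentrancyGuard() { if (!m_wasSet) m_flag = false; }
    bool alreadyEntered() const { return m_wasSet; }
private:
    bool& m_flag;
    bool m_wasSet;
};

void print(const char* text, int printLogLevel) {
    thread_local bool inPrint = false;
    ReentrancyGuard guard(inPrint);
    if (guard.alreadyEntered())
        return; // re-entered via getPrintLevelFromRegistry; skip to break the recursion
    if (printLogLevel < getPrintLevelFromRegistry())
        return;
    // do the print itself
}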

How to convert OID of a code-signing algorithm from CRYPT_ALGORITHM_IDENTIFIER to a human readable string?

When I retrieve a code signing signature from an executable file on Windows, the CERT_CONTEXT of the certificate points to the CERT_INFO, whose CRYPT_ALGORITHM_IDENTIFIER SignatureAlgorithm member contains the algorithm used for signing.
How do I convert that to a human readable form?
For instance, SignatureAlgorithm.pszObjId may be set to the string "1.2.840.113549.1.1.11", which is szOID_RSA_SHA256RSA according to this long list. I guess I could write a very long switch statement mapping it to "sha256", but I'd rather avoid that since I don't know what most of those values are. Is there an API that can do all that for me?
Use CryptFindOIDInfo to get information about an OID, including the display name and the CNG algorithm identifier string:
void PrintSigAlgoName(CRYPT_ALGORITHM_IDENTIFIER* pSigAlgo)
{
    if (pSigAlgo && pSigAlgo->pszObjId)
    {
        PCCRYPT_OID_INFO pCOI = CryptFindOIDInfo(CRYPT_OID_INFO_OID_KEY, pSigAlgo->pszObjId, 0);
        if (pCOI && pCOI->pwszName)
        {
            _tprintf(_T("%ls"), pCOI->pwszName);
        }
        else
        {
            _tprintf(_T("%hs"), pSigAlgo->pszObjId);
        }
    }
}
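Hypothetical usage, assuming pCertContext is a valid PCCERT_CONTEXT obtained from the signature as described in the question:

// SignatureAlgorithm is the CERT_INFO member the question refers to
PrintSigAlgoName(&pCertContext->pCertInfo->SignatureAlgorithm);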
Expanding on Anders' answer: you can also get this information from the result of a call to WinVerifyTrust(). It is deeply nested inside CRYPT_PROVIDER_DATA:
GUID policyGUID = WINTRUST_ACTION_GENERIC_VERIFY_V2;
WINTRUST_DATA trustData;
// omitted: prepare trustData
DWORD lStatus = ::WinVerifyTrust(NULL, &policyGUID, &trustData);
if (lStatus == ERROR_SUCCESS)
{
    CRYPT_PROVIDER_DATA* pData = ::WTHelperProvDataFromStateData(trustData.hWVTStateData);
    if (pData && pData->pPDSip && pData->pPDSip->psIndirectData &&
        pData->pPDSip->psIndirectData->DigestAlgorithm.pszObjId)
    {
        CRYPT_ALGORITHM_IDENTIFIER const& sigAlgo = pData->pPDSip->psIndirectData->DigestAlgorithm;
        PCCRYPT_OID_INFO pCOI = ::CryptFindOIDInfo(CRYPT_OID_INFO_OID_KEY, sigAlgo.pszObjId, 0);
        if (pCOI && pCOI->pwszName)
        {
            _tprintf(_T("%ls"), pCOI->pwszName);
        }
        else
        {
            _tprintf(_T("%hs"), sigAlgo.pszObjId);
        }
    }
}
Note: detailed error checking omitted for brevity!
Note 2: From Windows 8 onwards (and patched Windows 7), WinVerifyTrust can be used to verify and get information about multiple signatures of a file; more info in this Q&A.

don't know how to use IShellWindows::Item correctly

I'm using VC6 on an XP system.
The following is my code. It runs perfectly on my computer, but on other computers it seems that pisw->Item(v, &pidisp) does not return S_OK. Now I'm trying to figure out what's wrong here.
IShellWindows *pisw;
if (SUCCEEDED(CoCreateInstance(CLSID_ShellWindows, NULL, CLSCTX_ALL,
                               IID_IShellWindows, (void**)&pisw))) {
    VARIANT v;
    V_VT(&v) = VT_I4;
    IDispatch *pidisp;
    found = FALSE;
    for (V_I4(&v) = 0; !found && pisw->Item(v, &pidisp) == S_OK; V_I4(&v)++) {
        IWebBrowserApp *piwba;
        if (SUCCEEDED(pidisp->QueryInterface(IID_IWebBrowserApp, (void**)&piwba))) {
            // blablabla....do something..
        }
    }
}
So I changed the code to:
...
IDispatch *pidisp;
hr = pisw->Item(v, &pidisp);
if (SUCCEEDED(hr))
{
    for (V_I4(&v) = 0; !found; V_I4(&v)++) {
        IWebBrowserApp *piwba;
        if (SUCCEEDED(pidisp->QueryInterface(IID_IWebBrowserApp, (void**)&piwba))) {
            // blablabla....do something..
        }
    }
}
Then the return value of hr becomes 1, and I get an access violation when execution reaches the pidisp-> call. Can anyone help me?
The original code incorrectly tests the result of pisw->Item(v, &pidisp) against S_OK. Weird, because it does use the correct check later on.
The problem is that there are many success return values besides S_OK. Your fix is correct in that you should use SUCCEEDED(hr), but you incorrectly moved the loop INSIDE the SUCCEEDED(hr) test. You should check SUCCEEDED(hr) for every value of V_I4(&v).
Your S_FALSE result comes about because you now call hr = pisw->Item(v, &pidisp); before the loop, which means v is uninitialized (garbage). Assume for a moment that its garbage value is 728365. S_FALSE means: the call succeeded, but there are fewer than 728365 windows.
From the MSDN documentation for IShellWindows::Item:
Return value (type: HRESULT): S_FALSE (1) means the specified window was not found.
The item you are looking for was not found, so you obviously don't get a valid pidisp. Trying to use it then, expectedly, results in an access violation.
You need to handle the "item not found" case properly, and check your v argument as well.
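A minimal sketch of the corrected loop, reusing pisw and found from the question's code:

VARIANT v;
V_VT(&v) = VT_I4;
for (V_I4(&v) = 0; !found; V_I4(&v)++) {
    IDispatch *pidisp = NULL;
    HRESULT hr = pisw->Item(v, &pidisp);
    if (hr != S_OK)         // S_FALSE: no window at this index, stop enumerating
        break;
    IWebBrowserApp *piwba;
    if (SUCCEEDED(pidisp->QueryInterface(IID_IWebBrowserApp, (void**)&piwba))) {
        // ... do something with piwba ...
        piwba->Release();
    }
    pidisp->Release();
}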

Why the CString(LPCTSTR lpsz) constructor check the high two bytes of lpsz?

I am reading the source code of CString in MFC, and I am curious about the implementation of the constructor CString::CString(LPCTSTR lpsz).
In my understanding, before copying the string pointed to by lpsz, it only needs to check whether lpsz is NULL; there is no need to also check whether HIWORD(lpsz) is NULL.
Can anyone familiar with MFC explain?
CString::CString(LPCTSTR lpsz)
{
    Init();
    if (lpsz != NULL && HIWORD(lpsz) == NULL)
    {
        UINT nID = LOWORD((DWORD)lpsz);
        if (!LoadString(nID))
            TRACE1("Warning: implicit LoadString(%u) failed\n", nID);
    }
    else
    {
        int nLen = SafeStrlen(lpsz);
        if (nLen != 0)
        {
            AllocBuffer(nLen);
            memcpy(m_pchData, lpsz, nLen*sizeof(TCHAR));
        }
    }
}
It checks whether it is passed an actual pointer or an integer resource identifier from MAKEINTRESOURCE. In the latter case it loads the string from the resources.
That is for loading a string resource. See the LoadString() call.
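For example (IDS_MY_STRING is a hypothetical string-resource ID): MAKEINTRESOURCE produces a fake pointer whose high word is zero, which is exactly what the constructor detects:

// HIWORD of the pointer is 0, so the constructor calls LoadString(IDS_MY_STRING)
// instead of treating it as a C string to copy.
CString str(MAKEINTRESOURCE(IDS_MY_STRING));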