GetLogicalDrives() for loop - C++

I am new to the Win32 API and need help understanding how the GetLogicalDrives() function works. I am trying to populate a CBS_DROPDOWNLIST combo box with all the drive letters that are not in use. Here is what I have so far. I would appreciate any help.
void FillListBox(HWND hWndDropMenu)
{
    DWORD drives = GetLogicalDrives();
    for (int i = 0; i < 26; i++)
    {
        SendMessage(hWndDropMenu, CB_ADDSTRING, 0, (LPARAM)drives);
    }
}

The function GetLogicalDrives returns a bitmask of the logical drives available. Here is how you would do it:
DWORD drives = GetLogicalDrives();
for (int i = 0; i < 26; i++)
{
    if (!(drives & (1 << i)))
    {
        TCHAR driveName[] = { TEXT('A') + i, TEXT(':'), TEXT('\\'), TEXT('\0') };
        SendMessage(hWndDropMenu, CB_ADDSTRING, 0, (LPARAM)driveName);
    }
}
The code checks whether the i-th bit in the bitmask is clear, i.e. whether the corresponding drive letter is not in use.

GetLogicalDrives returns a bitmask, and to inspect it you need bitwise operators. To see if drive A is in use:
(GetLogicalDrives() & 1) != 0
Note the parentheses: == and != bind more tightly than &, so the unparenthesized GetLogicalDrives() & 1 == 1 does not test what it appears to. If drive A is unavailable, GetLogicalDrives() & 1 yields 0 and the condition fails.
To check the next drive you'll need the next power of two: GetLogicalDrives() & 2, then GetLogicalDrives() & 4, and so on.
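For example, each power of two can be produced with a shift; a minimal sketch:
DWORD drives = GetLogicalDrives();
for (int i = 0; i < 26; i++)
{
    // Bit i set means drive letter 'A' + i is in use.
    bool bInUse = (drives & (1u << i)) != 0;
}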
You could use GetLogicalDriveStrings, but that returns the inverse of what you want: all the logical drives that are in use.
I would build a table instead, and index into that:
const char *drive_names[] =
{
    "A:",
    "B:",
    ...
    "Z:"
};
Then your loop could be:
DWORD drives_bitmask = GetLogicalDrives();
// Only 26 drive letters exist, matching the 26 entries in drive_names.
for (DWORD i = 0; i < 26; i++)
{
    // Shift 1 to the i-th power of two: 1 << 0 = 1 (0000 0001), 1 << 1 = 2, etc.
    DWORD mask_index = 1 << i;
    if ((drives_bitmask & mask_index) == 0)
    {
        // Drive unavailable, add it to the list.
        const char *name = drive_names[i];
        // ... do GUI work.
    }
}
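To tie this back to the original FillListBox, the GUI work would be one CB_ADDSTRING per free letter; a sketch, assuming an ANSI call since the table holds char strings:
SendMessageA(hWndDropMenu, CB_ADDSTRING, 0, (LPARAM)name);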


Pass Byte Array as std::vector<char> from Node.js to C++ Addon

I have some constraints where the addon is built with nan.h and v8 (not the new node-addon-api).
The end function is part of a library. It accepts a std::vector<char> that represents the bytes of an image.
I tried creating an image buffer from Node.js:
const img = fs.readFileSync('./myImage.png');
myAddonFunction(Buffer.from(img));
I am not really sure how to continue from here. I tried creating a new vector with a buffer, like so:
std::vector<char> buffer(data);
But it seems like I need to give it a size, which I am unsure how to get. Regardless, even when I use the initial buffer size (from Node.js), the image fails to go through.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
[1] 16021 abort (core dumped)
However, when I read the image directly from C++, it all works fine:
std::ifstream ifs ("./myImage.png", std::ios::binary|std::ios::ate);
std::ifstream::pos_type pos = ifs.tellg();
std::vector<char> buffer(pos);
ifs.seekg(0, std::ios::beg);
ifs.read(&buffer[0], pos);
// further below, I pass "buffer" to the function and it works just fine.
But of course, I need the image to come from Node.js. Maybe Buffer is not what I am looking for?
Here is an example based on N-API; I would also encourage you to take a look at a similar implementation based on node-addon-api (an easy-to-use C++ wrapper on top of N-API):
https://github.com/nodejs/node-addon-examples/tree/master/array_buffer_to_native/node-addon-api
#include <assert.h>
#include <node_api.h>
#include <stdio.h>

napi_value CArrayBuffSum(napi_env env, napi_callback_info info)
{
    napi_status status;
    const size_t MaxArgExpected = 1;
    napi_value args[MaxArgExpected];
    size_t argc = sizeof(args) / sizeof(napi_value);

    status = napi_get_cb_info(env, info, &argc, args, nullptr, nullptr);
    assert(status == napi_ok);
    if (argc < 1)
    {
        napi_throw_error(env, "EINVAL", "Too few arguments");
        return nullptr;
    }

    napi_value buff = args[0];
    napi_valuetype valuetype;
    status = napi_typeof(env, buff, &valuetype);
    assert(status == napi_ok);
    if (valuetype == napi_object)
    {
        bool isArrayBuff = false;
        status = napi_is_arraybuffer(env, buff, &isArrayBuff);
        assert(status == napi_ok);
        if (!isArrayBuff)
        {
            napi_throw_error(env, "EINVAL", "Expected an ArrayBuffer");
            return nullptr;
        }
    }

    int32_t *buff_data = NULL;
    size_t byte_length = 0;
    int32_t sum = 0;
    status = napi_get_arraybuffer_info(env, buff, (void **)&buff_data, &byte_length);
    assert(status == napi_ok);

    printf("\nC: Int32Array size = %d, (ie: bytes=%d)",
           (int)(byte_length / sizeof(int32_t)), (int)byte_length);
    for (size_t i = 0; i < byte_length / sizeof(int32_t); ++i)
    {
        sum += buff_data[i];
        printf("\nC: Int32ArrayBuff[%d] = %d", (int)i, buff_data[i]);
    }

    napi_value rcValue;
    napi_create_int32(env, sum, &rcValue);
    return rcValue;
}
The JavaScript code to call the addon
'use strict'
const myaddon = require('bindings')('mync1');

function test1() {
    const array = new Int32Array(10);
    for (let i = 0; i < 10; ++i)
        array[i] = i * 5;
    const sum = myaddon.ArrayBuffSum(array.buffer);
    console.log();
    console.log(`js: Sum of the array = ${sum}`);
}
test1();
The output of the code execution:
C: Int32Array size = 10, (ie: bytes=40)
C: Int32ArrayBuff[0] = 0
C: Int32ArrayBuff[1] = 5
C: Int32ArrayBuff[2] = 10
C: Int32ArrayBuff[3] = 15
C: Int32ArrayBuff[4] = 20
C: Int32ArrayBuff[5] = 25
C: Int32ArrayBuff[6] = 30
C: Int32ArrayBuff[7] = 35
C: Int32ArrayBuff[8] = 40
C: Int32ArrayBuff[9] = 45
js: Sum of the array = 225
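Since the question is constrained to nan.h and v8 rather than N-API: a Node.js Buffer exposes its bytes and length through node::Buffer::Data and node::Buffer::Length, and std::vector's range constructor answers the size question. A minimal NAN sketch (the method name is hypothetical):
#include <nan.h>
#include <vector>

NAN_METHOD(MyAddonFunction)
{
    // info[0] is the Buffer passed in from JavaScript.
    char *data = node::Buffer::Data(info[0]);
    size_t length = node::Buffer::Length(info[0]);

    // Copy the bytes; the range constructor sizes the vector for us.
    std::vector<char> buffer(data, data + length);

    // ... pass "buffer" to the library function here.
}
Calling it from JavaScript stays as in the question; fs.readFileSync already returns a Buffer, so the extra Buffer.from is not needed.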

ToUnicodeEx not printing "Greater Than"

I wrote a function which processes the user keyboard in order to write text in an app. To do that I use the ToUnicodeEx function, which takes an array of key states.
The function is working perfectly fine for every possible input except one: I cannot display the ">" sign, which is supposed to be the combination SHIFT + "<". It displays the "<" sign instead, as if the SHIFT key were not pressed, even though the code knows it is pressed.
Has anybody experienced the same thing and knows what the problem is?
You will find my function code below:
void MyFunction(bool bCapsLockDown)
{
    IOClass io = GetMyIOInstance();
    HKL layout = GetKeyboardLayout( 0 );
    uchar uKeyboardState[256];
    WCHAR oBuffer[5] = {};

    // Initialization of the keyboard state
    for (uint i = 0; i < 256; ++i)
    {
        uKeyboardState[i] = 0;
    }

    // Use of my ConsultKeyState to get the status of pressed keys
    if ( ConsultKeyState( VK_SHIFT ) || bCapsLockDown )
    {
        uKeyboardState[VK_CAPITAL] = 0xff;
    }
    if ( ConsultKeyState( VK_CONTROL ) )
    {
        uKeyboardState[VK_CONTROL] = 0xff;
    }
    if ( ConsultKeyState( VK_MENU ) )
    {
        uKeyboardState[VK_MENU] = 0xff;
    }
    if ( ConsultKeyState( VK_RMENU ) )
    {
        uKeyboardState[VK_MENU] = 0xff;
        uKeyboardState[VK_CONTROL] = 0xff;
    }

    for ( uint iVK = 0; iVK < 256; ++iVK )
    {
        bool bKeyDown = ConsultKeyState( iVK ) != 0;
        uint iSC = MapVirtualKeyEx( iVK, MAPVK_VK_TO_VSC, layout );
        bool bKeyAlreadyDown = io.KeysDown[iVK];
        io.KeysDown[iVK] = bKeyDown;
        if ( io.KeysDown[iVK] && bKeyAlreadyDown == false )
        {
            int iRet = ToUnicodeEx( iVK, iSC, uKeyboardState, (LPWSTR)oBuffer, 4, 0, layout );
            if ( iRet > 0 && (iswgraph( (unsigned short) oBuffer[0] ) || oBuffer[0] == ' ') )
                io.AddInputCharacter( (unsigned short) oBuffer[0] );
        }
    }
}
Edit:
To summarize, my question is:
What would be the right combination of virtual key + keyboard state to get a ">" displayed?
The problem is that shift and caps lock do not have the same effect on a keyboard. For example (on a UK keyboard), pressing Shift+1 gives !, but pressing 1 with Caps Lock on still gives 1.
Your code currently treats them the same, though, and it is using VK_CAPITAL in both cases. This will give you the same effect as if caps lock were on, which is not what you want in this case.
The solution is therefore to break out your logic and use VK_SHIFT when you really want shift to be pressed and VK_CAPITAL when you want caps lock to be active.
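Concretely, for ToUnicodeEx the high-order bit of a key-state byte means the key is held down, while for VK_CAPITAL the low-order bit means the toggle is on. A sketch of the separated logic (ConsultKeyState is the poster's own helper):
if ( ConsultKeyState( VK_SHIFT ) )
{
    uKeyboardState[VK_SHIFT] = 0x80;   // Shift is physically held down
}
if ( bCapsLockDown )
{
    uKeyboardState[VK_CAPITAL] = 0x01; // Caps Lock is toggled on
}
With that split, SHIFT + "<" produces ">", while Caps Lock alone leaves punctuation keys untouched.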

Setting a hardware breakpoint in a multithreaded application doesn't fire

I wrote a little debugger for analysing and logging certain problems. I implemented a hardware breakpoint to detect when a memory address gets overwritten. When I run my debugger with a test process, everything works fine: when I access the address, the breakpoint fires and the call stack is logged. The problem is when I run the same against an application running multiple threads. I replicate the breakpoint into every thread that gets created, as well as the main thread. None of the functions report an error and everything looks fine, but when the address is accessed, the breakpoint never fires.
So I wonder if there is documentation describing this, or if there are additional things I have to do in the case of a multithreaded application.
The function to set the breakpoint is this:
#ifndef _HARDWARE_BREAKPOINT_H
#define _HARDWARE_BREAKPOINT_H

#include "breakpoint.h"

#define MAX_HARDWARE_BREAKPOINT 4

#define REG_DR0_BIT 1
#define REG_DR1_BIT 4
#define REG_DR2_BIT 16
#define REG_DR3_BIT 64

class HardwareBreakpoint : public Breakpoint
{
public:
    typedef enum
    {
        REG_INVALID = -1,
        REG_DR0 = 0,
        REG_DR1 = 1,
        REG_DR2 = 2,
        REG_DR3 = 3
    } Register;

    typedef enum
    {
        CODE,
        READWRITE,
        WRITE,
    } Type;

    typedef enum
    {
        SIZE_1,
        SIZE_2,
        SIZE_4,
        SIZE_8,
    } Size;

    typedef struct
    {
        void *pAddress;
        bool bBusy;
        Type nType;
        Size nSize;
        Register nRegister;
    } Info;

public:
    HardwareBreakpoint(HANDLE hThread);
    virtual ~HardwareBreakpoint(void);

    /**
     * Sets a hardware breakpoint. If no register is free or an error occurred,
     * REG_INVALID is returned, otherwise the hardware register for the given breakpoint.
     */
    HardwareBreakpoint::Register set(void *pAddress, Type nType, Size nSize);
    void remove(void *pAddress);
    void remove(Register nRegister);

    inline Info const *getInfo(Register nRegister) const { return &mBreakpoint[nRegister]; }

private:
    typedef Breakpoint super;

private:
    Info mBreakpoint[MAX_HARDWARE_BREAKPOINT];
    size_t mRegBit[MAX_HARDWARE_BREAKPOINT];
    size_t mRegOffset[MAX_HARDWARE_BREAKPOINT];
};
#endif // _HARDWARE_BREAKPOINT_H
void SetBits(DWORD_PTR &dw, size_t lowBit, size_t bits, size_t newValue)
{
    DWORD_PTR mask = (1 << bits) - 1;
    dw = (dw & ~(mask << lowBit)) | (newValue << lowBit);
}

HardwareBreakpoint::HardwareBreakpoint(HANDLE hThread)
    : super(hThread)
{
    mRegBit[REG_DR0] = REG_DR0_BIT;
    mRegBit[REG_DR1] = REG_DR1_BIT;
    mRegBit[REG_DR2] = REG_DR2_BIT;
    mRegBit[REG_DR3] = REG_DR3_BIT;

    CONTEXT ct;
    mRegOffset[REG_DR0] = reinterpret_cast<size_t>(&ct.Dr0) - reinterpret_cast<size_t>(&ct);
    mRegOffset[REG_DR1] = reinterpret_cast<size_t>(&ct.Dr1) - reinterpret_cast<size_t>(&ct);
    mRegOffset[REG_DR2] = reinterpret_cast<size_t>(&ct.Dr2) - reinterpret_cast<size_t>(&ct);
    mRegOffset[REG_DR3] = reinterpret_cast<size_t>(&ct.Dr3) - reinterpret_cast<size_t>(&ct);

    memset(&mBreakpoint[0], 0, sizeof(mBreakpoint));
    for (int i = 0; i < MAX_HARDWARE_BREAKPOINT; i++)
        mBreakpoint[i].nRegister = (Register)i;
}
HardwareBreakpoint::Register HardwareBreakpoint::set(void *pAddress, Type nType, Size nSize)
{
    CONTEXT ct = {0};

    super::setAddress(pAddress);

    ct.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    if (!GetThreadContext(getThread(), &ct))
        return HardwareBreakpoint::REG_INVALID;

    size_t iReg = 0;
    for (int i = 0; i < MAX_HARDWARE_BREAKPOINT; i++)
    {
        if (ct.Dr7 & mRegBit[i])
            mBreakpoint[i].bBusy = true;
        else
            mBreakpoint[i].bBusy = false;
    }

    Info *reg = NULL;

    // Address already used?
    for (int i = 0; i < MAX_HARDWARE_BREAKPOINT; i++)
    {
        if (mBreakpoint[i].pAddress == pAddress)
        {
            iReg = i;
            reg = &mBreakpoint[i];
            break;
        }
    }
    if (reg == NULL)
    {
        for (int i = 0; i < MAX_HARDWARE_BREAKPOINT; i++)
        {
            if (!mBreakpoint[i].bBusy)
            {
                iReg = i;
                reg = &mBreakpoint[i];
                break;
            }
        }
    }

    // No free register available
    if (!reg)
        return HardwareBreakpoint::REG_INVALID;

    *(void **)(((char *)&ct) + mRegOffset[iReg]) = pAddress;
    reg->bBusy = true;

    ct.Dr6 = 0;

    int st = 0;
    if (nType == CODE)
        st = 0;
    if (nType == READWRITE)
        st = 3;
    if (nType == WRITE)
        st = 1;

    int le = 0;
    if (nSize == SIZE_1)
        le = 0;
    else if (nSize == SIZE_2)
        le = 1;
    else if (nSize == SIZE_4)
        le = 3;
    else if (nSize == SIZE_8)
        le = 2;

    SetBits(ct.Dr7, 16 + iReg * 4, 2, st);
    SetBits(ct.Dr7, 18 + iReg * 4, 2, le);
    SetBits(ct.Dr7, iReg * 2, 1, 1);

    ct.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    if (!SetThreadContext(getThread(), &ct))
        return REG_INVALID;

    return reg->nRegister;
}
I'm setting the breakpoint in the main debugger loop whenever a new thread is created (CREATE_THREAD_DEBUG_EVENT), but looking at the source code of GDB it seems not to be done there, so maybe that is too early?
So I finally found the answer to this problem.
In the debug event loop, I'm monitoring the events that Windows sends me. One of those events is CREATE_THREAD_DEBUG_EVENT, which I used to set the hardware breakpoint whenever a new thread was created.
The problem is that the notification of this event arrives before the thread has actually started. Windows sets the thread's context for the first time AFTER this event is sent, which of course overwrites any context data I had set before.
The solution I implemented now: when a CREATE_THREAD_DEBUG_EVENT arrives, I put a software breakpoint at the start address of the thread, so that the first instruction is my breakpoint. When I receive the breakpoint event, I restore the original code and install the hardware breakpoint, which now fires fine.
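A minimal sketch of that workaround inside the debug event loop (error handling omitted; pendingBreakpoints is a hypothetical map from address to saved byte):
case CREATE_THREAD_DEBUG_EVENT:
{
    // Plant an INT3 at the thread's entry point; the hardware breakpoint
    // is installed later, once this software breakpoint is actually hit.
    LPVOID start = (LPVOID)debugEvent.u.CreateThread.lpStartAddress;
    BYTE original = 0;
    const BYTE int3 = 0xCC;
    SIZE_T count = 0;
    ReadProcessMemory(hProcess, start, &original, 1, &count);
    WriteProcessMemory(hProcess, start, &int3, 1, &count);
    FlushInstructionCache(hProcess, start, 1);
    pendingBreakpoints[start] = original; // byte to restore on first hit
    break;
}
In the EXCEPTION_BREAKPOINT handler you then restore the saved byte, rewind the instruction pointer by one, and only at that point program the debug registers as shown above.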
If there is a better solution, I'm all ears. :)

Get file offset from a loaded DLL's function

I'd like to ask how I could locate a specific (exported) function inside a DLL. For example, I'd like to locate ReadProcessMemory inside kernel32. I wouldn't like to rely on the import table; I'd like to locate different APIs based on addresses I get with a custom function.
I tried to do some research on VAs, RVAs & file offsets, but I didn't succeed. Here's an example I tried, but it isn't working (returns 0 in all cases):
DWORD Rva2Offset(DWORD dwRva, UINT_PTR uiBaseAddress)
{
    WORD wIndex = 0;
    PIMAGE_SECTION_HEADER pSectionHeader = NULL;
    PIMAGE_NT_HEADERS pNtHeaders = NULL;

    pNtHeaders = (PIMAGE_NT_HEADERS)(uiBaseAddress + ((PIMAGE_DOS_HEADER)uiBaseAddress)->e_lfanew);
    pSectionHeader = (PIMAGE_SECTION_HEADER)((UINT_PTR)(&pNtHeaders->OptionalHeader) + pNtHeaders->FileHeader.SizeOfOptionalHeader);

    if (dwRva < pSectionHeader[0].PointerToRawData)
        return dwRva;

    for (wIndex = 0; wIndex < pNtHeaders->FileHeader.NumberOfSections; wIndex++)
    {
        if (dwRva >= pSectionHeader[wIndex].VirtualAddress && dwRva < (pSectionHeader[wIndex].VirtualAddress + pSectionHeader[wIndex].SizeOfRawData))
            return (dwRva - pSectionHeader[wIndex].VirtualAddress + pSectionHeader[wIndex].PointerToRawData);
    }
    return 0;
}
Could you help me how could I accomplish this simple task?
Thank you.
P.S.: I'm not wedded to the function above; either pointing out the problem or providing a better source would be awesome.
This gives you the relative virtual address
uintptr_t baseAddr = (uintptr_t)GetModuleHandleA("nameOfExe.exe");
uintptr_t relativeAddr = functionAddress - baseAddr;
This converts a relative virtual address to a file offset:
DWORD RVAToFileOffset(IMAGE_NT_HEADERS32* pNtHdr, DWORD dwRVA)
{
    int i;
    WORD wSections;
    PIMAGE_SECTION_HEADER pSectionHdr;

    pSectionHdr = IMAGE_FIRST_SECTION(pNtHdr);
    wSections = pNtHdr->FileHeader.NumberOfSections;

    for (i = 0; i < wSections; i++)
    {
        if (pSectionHdr->VirtualAddress <= dwRVA &&
            dwRVA < pSectionHdr->VirtualAddress + pSectionHdr->Misc.VirtualSize)
        {
            dwRVA -= pSectionHdr->VirtualAddress;
            dwRVA += pSectionHdr->PointerToRawData;
            return dwRVA;
        }
        pSectionHdr++;
    }
    return (DWORD)-1;
}
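Putting the pieces together; a usage sketch, assuming the DLL is loaded in the current 32-bit process so the headers can be read in memory (on x64 you would use IMAGE_NT_HEADERS64 instead):
HMODULE hMod = GetModuleHandleA("kernel32.dll");
FARPROC pFunc = GetProcAddress(hMod, "ReadProcessMemory");

// RVA = the function's virtual address minus the module base.
DWORD dwRVA = (DWORD)((uintptr_t)pFunc - (uintptr_t)hMod);

// The NT headers live at module base + e_lfanew.
IMAGE_DOS_HEADER *pDos = (IMAGE_DOS_HEADER *)hMod;
IMAGE_NT_HEADERS32 *pNtHdr = (IMAGE_NT_HEADERS32 *)((BYTE *)hMod + pDos->e_lfanew);

DWORD dwFileOffset = RVAToFileOffset(pNtHdr, dwRVA);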

Scanning process memory causes crash

I have injected my DLL into a process and I try to scan memory for addresses with the same value as mine, but it results in a crash after I get the 1st address; it should find 10 addresses.
for (DWORD i = MEM_START; i < MEM_END; i++)
{
    VirtualQuery((void*)i, pMemInfo, sizeof(MEMORY_BASIC_INFORMATION));
    if (pMemInfo->AllocationProtect == PAGE_READONLY || PAGE_EXECUTE_WRITECOPY || PAGE_READWRITE || PAGE_WRITECOMBINE)
    {
        if (*(DWORD*)i == 1337)
        {
            addresses.push_back(i);
        }
    }
}
I believe my protection check is wrong, but I'm not quite sure.
A virtual memory scanner:
MEMORY_BASIC_INFORMATION mbi = {0};
unsigned char *pAddress = NULL,
              *pEndRegion = NULL;
DWORD dwFindData = 0xBAADF00D,
      dwProtectionMask = PAGE_READONLY | PAGE_EXECUTE_WRITECOPY
                       | PAGE_READWRITE | PAGE_WRITECOMBINE;

while (sizeof(mbi) == VirtualQuery(pEndRegion, &mbi, sizeof(mbi)))
{
    pAddress = pEndRegion;
    pEndRegion += mbi.RegionSize;

    if ((mbi.AllocationProtect & dwProtectionMask) && (mbi.State & MEM_COMMIT))
    {
        // Compare a full DWORD at each position; stop sizeof(DWORD) - 1
        // bytes before the region end so the read stays inside the region.
        for (; pAddress + sizeof(DWORD) <= pEndRegion; pAddress++)
        {
            if (*(DWORD*)pAddress == dwFindData)
            {
                // do stuff
            }
        }
    }
}
Yes, several mistakes. You'll need to build a mask with the | operator and test it with &, instead of chaining ||. The value of i is not meaningful on its own; you must use MEMORY_BASIC_INFORMATION.BaseAddress to find where a region begins, and .RegionSize to know how big it is. The next value you pass to VirtualQuery should be .BaseAddress + .RegionSize, to move to the next region.
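A sketch of that region walk, testing the current protection (Protect) with a mask; MEM_START and MEM_END are the poster's own bounds:
MEMORY_BASIC_INFORMATION mbi;
const DWORD dwMask = PAGE_READONLY | PAGE_EXECUTE_WRITECOPY
                   | PAGE_READWRITE | PAGE_WRITECOMBINE;

BYTE *p = (BYTE *)MEM_START;
while (p < (BYTE *)MEM_END && VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi))
{
    // Modifier flags such as PAGE_GUARD would make an exact == test fail,
    // which is one more reason to test with a mask.
    if (mbi.State == MEM_COMMIT && (mbi.Protect & dwMask))
    {
        // ... scan [mbi.BaseAddress, mbi.BaseAddress + mbi.RegionSize)
    }
    p = (BYTE *)mbi.BaseAddress + mbi.RegionSize; // jump to the next region
}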
That's not how the || operator works. You may find it more readable to use a switch statement instead.
for (DWORD i = MEM_START; i < MEM_END; i++)
{
    VirtualQuery((void*)i, pMemInfo, sizeof(MEMORY_BASIC_INFORMATION));
    switch (pMemInfo->AllocationProtect)
    {
    case PAGE_READONLY:
    case PAGE_EXECUTE_WRITECOPY:
    case PAGE_READWRITE:
    case PAGE_WRITECOMBINE:
        if (*(DWORD*)i == 1337)
        {
            addresses.push_back(i);
        }
    }
}