I am using a getJobs function I found to get the current print jobs on my printer (not the print device). So far I can tell how many print jobs are in the queue of my virtual printer, and I have the information from the JOB_INFO_ structs to work with, but I am trying to use SetJob() to delete a job from the print queue (after storing the information I want). With this I get an error:
0xC0000005: Access violation reading location 0x00002012.
My question is, what exactly am I doing wrong? I've tried putting 0 as the level and NULL for pJob; then I do not get an error, but the print job is still in the queue. I can't seem to find anyone else who has examples with explanations.
BOOL getJobs(LPTSTR printerName) {
HANDLE hPrinter; //Printer handle variable
DWORD dwNeeded, dwReturned, i; //Mem needed, jobs found, variable for loop
JOB_INFO_1 *pJobInfo; // pointer to structure
//Find printer handle
if (!OpenPrinter(printerName, &hPrinter, NULL)) {
return FALSE;
}
//Get amount of memory needed
if (!EnumJobs(hPrinter, 0, 0xFFFFFFFF, 1, NULL, 0, &dwNeeded, &dwReturned)) {
if (GetLastError() != ERROR_INSUFFICIENT_BUFFER) {
ClosePrinter(hPrinter);
return FALSE;
}
}
//Allocate the memory; if we can't, end the function
if ((pJobInfo = (JOB_INFO_1 *)malloc(dwNeeded)) == NULL) {
ClosePrinter(hPrinter);
return FALSE;
}
//Get the job info structs
if (!EnumJobs(hPrinter, 0, 0xFFFFFFFF, 1, (LPBYTE)pJobInfo, dwNeeded, &dwNeeded, &dwReturned)) {
ClosePrinter(hPrinter);
free(pJobInfo);
return FALSE;
}
//If there are print jobs, get the document name and data type, put them into the docinfo1 struct, and return true
if (dwReturned > 0){
docinfo1.pDocName = pJobInfo[1].pDocument;
docinfo1.pDatatype = pJobInfo[1].pDatatype;
SetJob(hPrinter, pJobInfo[1].JobId, 2, (LPBYTE)pJobInfo, JOB_CONTROL_DELETE);
ClosePrinter(hPrinter);
free(pJobInfo);
return TRUE;
}
//No print jobs, Free memory and finish up :>
ClosePrinter(hPrinter);
free(pJobInfo);
return FALSE;
}
Help is much appreciated.
EDIT: The issue ended up being a simple mistake: I told SetJob the wrong struct type.
Aside from specifying a JOB_INFO_2 struct when you're actually passing a JOB_INFO_1 struct (as was pointed out in comments), you're also trying to use the second element of pJobInfo[], which may not even exist:
SetJob(hPrinter, pJobInfo[1].JobId, 2, (LPBYTE)pJobInfo, JOB_CONTROL_DELETE);
Change it to:
SetJob(hPrinter, pJobInfo[0].JobId, 1, (LPBYTE)pJobInfo, JOB_CONTROL_DELETE);
Or better yet, do this because all you need to delete a print job is the job ID:
SetJob(hPrinter, pJobInfo[0].JobId, 0, NULL, JOB_CONTROL_DELETE);
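Putting it together, a minimal sketch of how the end of getJobs could look with that fix, looping over every job EnumJobs actually returned rather than indexing a fixed element (the rest of the function is assumed unchanged):
//If there are print jobs, save what we need, then delete each one by job ID
if (dwReturned > 0) {
    docinfo1.pDocName  = pJobInfo[0].pDocument;
    docinfo1.pDatatype = pJobInfo[0].pDatatype;
    for (DWORD j = 0; j < dwReturned; j++) {
        // Level 0 / NULL pJob: only the job ID is needed for a delete command
        SetJob(hPrinter, pJobInfo[j].JobId, 0, NULL, JOB_CONTROL_DELETE);
    }
    ClosePrinter(hPrinter);
    free(pJobInfo);
    return TRUE;
}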
I'm continuing my quest to find the Holy Grail - or, really, just to write some C++ code that does some very basic ACL modifications on files. And, I'm continuing to bang my head against this wall with various challenges. The most recent one is that the "Trustee" returned by the GetAce() function doesn't seem to be a recognizable Trustee, even though I know exactly what it should be.
At a high level, what I'm currently trying to do is remove an Explicit ACE on a specific file that is DENYing access to the well-known Everyone group. Interestingly, if, instead of GetAce(), I use GetExplicitEntriesFromAcl(), I am able to get a recognizable SID and it compares correctly to the Everyone SID using EqualSid(); however, I don't know how to take the information from GetExplicitEntriesFromAcl() and then use that to remove the ACE from the ACL completely, because I don't get the absolute index of the ACE at that point, and there doesn't seem to be a RemoveExplicitEntriesFromAcl()-type function.
Current code, using GetAce() is below, but I'm open to other suggestions on how to reliably locate and remove a DENY ACE for the Everyone group.
BOOL removeAce(LPTSTR filePath) {
PACL existingAcl;
PSECURITY_DESCRIPTOR securityDescriptor;
PSID everyoneSid;
// Get PSID for well-known "Everyone" group
SID_IDENTIFIER_AUTHORITY sidAuthority = SECURITY_WORLD_SID_AUTHORITY;
if (!AllocateAndInitializeSid(&sidAuthority, 1, SECURITY_WORLD_RID, 0, 0, 0, 0, 0, 0, 0, &everyoneSid))
return false;
// Make sure we really have a file path
if (filePath == NULL || wcscmp(filePath, L"") == 0)
return false;
// Retrieve the ACL for the file, or bail out.
if (GetNamedSecurityInfo(filePath, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION, NULL, NULL, &existingAcl, NULL, &securityDescriptor) != ERROR_SUCCESS)
return false;
EXPLICIT_ACCESS* aclEntries;
ULONG numEntries = existingAcl->AceCount;
// Loop through all ACEs
for (DWORD i = 0; i < numEntries; i++) {
// Going to cast as a ACCESS_DENIED_ACE entry.
ACCESS_DENIED_ACE* entry;
// Try to get the ACE at current index, or continue to next one.
if (!GetAce(existingAcl, i, (LPVOID*) &entry))
continue;
// We're only interested in DENY entries.
if (entry->Header.AceType != ACCESS_DENIED_ACE_TYPE)
continue;
// Cast a couple of things to make it easier below
PSID thisSid = (PSID)entry->SidStart;
EXPLICIT_ACCESS* eaEntry = (EXPLICIT_ACCESS*)entry;
/* This is where I start to have problems - I put this code
* in strictly for debug purposes, and it always hits the
* default case, indicating that the TrusteeForm is either
* some unknown type, or I'm doing something insanely
* wrong and pointing to the wrong block of memory
* somewhere.
*/
switch (eaEntry->Trustee.TrusteeForm) {
case TRUSTEE_IS_SID:
OutputDebugString(L"We have a SID.\n");
break;
case TRUSTEE_IS_NAME:
OutputDebugString(L"We have a name.\n");
break;
case TRUSTEE_IS_OBJECTS_AND_SID:
OutputDebugString(L"We have objects and a SID.\n");
break;
case TRUSTEE_IS_OBJECTS_AND_NAME:
OutputDebugString(L"We have objects and a name.\n");
break;
default:
OutputDebugString(L"We have an alien Trustee type.\n");
}
/* This is the ultimate goal - being able to match the Trustee
* of the current ACE being evaluated with the Everyone SID, but
* this if statement never evaluates to True, and, so far, it
* seems to be because the TrusteeForm isn't recognized as a SID
* - or anything else for that matter.
*/
if (eaEntry->Trustee.TrusteeForm == TRUSTEE_IS_SID
&& EqualSid((PSID)entry->SidStart, everyoneSid))
OutputDebugString(L"This is our EVERYONE entry.\n");
}
return true;
}
SidStart is not a pointer to a SID; it is the first 4 bytes of a SID. You need to access it like so:
PSID thisSid = (PSID)&(entry->SidStart);
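With that fixed, one way to finish what the question asks about (locating and removing the Everyone DENY ACE) is sketched below. This assumes the surrounding removeAce code stays as posted, and uses DeleteAce plus SetNamedSecurityInfo to write the modified DACL back:
// Inside the loop, once the current ACE is confirmed to be the Everyone DENY entry:
PSID thisSid = (PSID)&(entry->SidStart);
if (entry->Header.AceType == ACCESS_DENIED_ACE_TYPE && EqualSid(thisSid, everyoneSid)) {
    // Remove the ACE from our copy of the DACL by index...
    if (DeleteAce(existingAcl, i) &&
        // ...then push the modified DACL back onto the file.
        SetNamedSecurityInfo(filePath, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                             NULL, NULL, existingAcl, NULL) == ERROR_SUCCESS) {
        OutputDebugString(L"Everyone DENY ACE removed.\n");
    }
    break; // ACE indices shift after DeleteAce, so stop (or restart) the scan here
}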
I am following the example at https://learn.microsoft.com/en-gb/windows/win32/api/iphlpapi/nf-iphlpapi-getpertcp6connectionestats?redirectedfrom=MSDN to get the TCP statistics. Although I got it working and can get the statistics, I still want to record them at a regular time interval (which I haven't managed to do), and I have the following questions.
SetPerTcpConnectionEStats() fails with a status != NO_ERROR, equal to 5. Although it fails, I can still get the statistics. Why?
I want to get the statistics every, let's say, 1 second. I have tried two different ways: a) use a while loop with std::this_thread::sleep_for(1s), where I could get the statistics every ~1 sec, but the whole app was stalling (is it because of this? I suppose I am blocking the operation of main), and b) (since a) failed) call TcpStatistics() from another function (in a different class) that is triggered every 1 sec (I store clientConnectRow in a global variable). However, in case (b), GetPerTcpConnectionEStats() fails with winStatus = 1214 (ERROR_INVALID_NETNAME) and of course TcpStatistics() cannot get any of the statistics.
a)
ClassB::ClassB()
{
UINT winStatus = GetTcpRow(localPort, hostPort, MIB_TCP_STATE_ESTAB, (PMIB_TCPROW)clientConnectRow);
ToggleAllEstats(clientConnectRow, TRUE);
thread t1(&ClassB::TcpStatistics, this, clientConnectRow);
t1.join();
}
ClassB::TcpStatistics()
{
while (true)
{
GetAndOutputEstats(row, TcpConnectionEstatsBandwidth)
// some more code here
this_thread::sleep_for(milliseconds(1000));
}
}
b)
ClassB::ClassB()
{
MIB_TCPROW client4ConnectRow;
void* clientConnectRow = NULL;
clientConnectRow = &client4ConnectRow;
UINT winStatus = GetTcpRow(localPort, hostPort, MIB_TCP_STATE_ESTAB, (PMIB_TCPROW)clientConnectRow);
m_clientConnectRow = clientConnectRow;
TcpStatistics();
}
ClassB::TcpStatistics()
{
ToggleAllEstats(m_clientConnectRow , TRUE);
void* row = m_clientConnectRow;
GetAndOutputEstats(row, TcpConnectionEstatsBandwidth)
// some more code here
}
ClassB::GetAndOutputEstats(void* row, TCP_ESTATS_TYPE type)
{
//...
winStatus = GetPerTcpConnectionEStats((PMIB_TCPROW)row, type, NULL, 0, 0, ros, 0, rosSize, rod, 0, rodSize);
if (winStatus != NO_ERROR) {wprintf(L"\nGetPerTcpConnectionEStats %s failed. status = %d", estatsTypeNames[type], winStatus); //
}
else { ...}
}
ClassA::FunA()
{
classB_ptr->TcpStatistics();
}
I found a workaround for the second part of my question. I am posting it here in case someone else finds it useful. There might be other, more advanced solutions, but this is how I did it. We first have to obtain the MIB_TCPROW corresponding to the TCP connection and then enable EStats collection before dumping the current stats. So what I did was put all of this in one function and call it every time I want to get the stats.
void
ClassB::FunSetTcpStats()
{
MIB_TCPROW client4ConnectRow;
void* clientConnectRow = NULL;
clientConnectRow = &client4ConnectRow;
//this is for the statistics
UINT winStatus = GetTcpRow(lPort, hPort, MIB_TCP_STATE_ESTAB, (PMIB_TCPROW)clientConnectRow); //lPort & hPort in htons!
if (winStatus != ERROR_SUCCESS) {
wprintf(L"\nGetTcpRow failed on the client established connection with %d", winStatus);
return;
}
//
// Enable Estats collection and dump current stats.
//
ToggleAllEstats(clientConnectRow, TRUE);
TcpStatistics(clientConnectRow); // same as GetAllEstats() in msdn
}
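For the first part (the app stalling), the constructor in version a) blocks because t1.join() waits for a loop that never returns. A minimal sketch of an alternative, assuming ClassB gains an std::thread member m_worker and an std::atomic<bool> member m_running (both hypothetical names), is to let the worker run in the background and only join it at destruction:
#include <atomic>
#include <chrono>
#include <thread>

ClassB::ClassB()
{
    m_running = true;
    // Launch the polling loop on a worker thread; do NOT join here,
    // otherwise the constructor blocks just like before.
    m_worker = std::thread([this] {
        while (m_running) {
            FunSetTcpStats();   // re-acquire the row and dump the stats, as above
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    });
}

ClassB::~ClassB()
{
    m_running = false;
    if (m_worker.joinable())
        m_worker.join();        // join at shutdown instead of in the constructor
}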
I'm trying to share an array of structs through named shared memory using the WINAPI. I'm able to create and manage the shared memory, but when trying to share an array of structs the size of the array is always 0 upon reading.
Below is test code I have written which should write/read an array of 10 entries, but even this is failing. My goal, however, is to write/read a dynamic array of structs, each containing 2 dynamic arrays on top of the info they already hold.
I'm aware I shouldn't share pointers between processes, as they could point to a random value. Therefore I'm allocating memory for the arrays using new.
This is what I have so far:
Shared in both processes:
#define MEMSIZE 90024
typedef struct {
int id;
int type;
int count;
} Entry;
Process 1:
extern HANDLE hMapObject;
extern void* vMapData;
std::vector<Entry> entries;//collection of entries
BOOL DumpEntries(TCHAR* memName) {//Returns true, writing 10 entries
int size = min(10, entries.size());
Entry* eArray = new Entry[size];
for (int i = 0; i < size; i++) {
eArray[i] = entries.at(i);
}
::hMapObject = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, MEMSIZE, memName);
if (::hMapObject == NULL) {
return FALSE;
}
::vMapData = MapViewOfFile(::hMapObject, FILE_MAP_ALL_ACCESS, 0, 0, MEMSIZE);
if (::vMapData == NULL) {
CloseHandle(::hMapObject);
return FALSE;
}
CopyMemory(::vMapData, eArray, (size * sizeof(Entry)));
UnmapViewOfFile(::vMapData);
//delete[] eArray;
return TRUE;
}
Process 2:
BOOL ReadEntries(TCHAR* memName, Entry* entries) {//Returns true reading 0 entries
HANDLE hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, memName);
if (hMapFile == NULL) {
return FALSE;
}
Entry* tmpEntries = (Entry*)(MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, 10 * sizeof(Entry)));
if (tmpEntries == NULL) {
CloseHandle(hMapFile);
return FALSE;
}
entries = new Entry[10];
for (int i = 0; i < 10; i++) {
entries[i] = tmpEntries[i];
}
UnmapViewOfFile(tmpEntries);
CloseHandle(hMapFile);
return TRUE;
}
Writing the 10 entries seems to be working but when trying to read the memory it returns successfully and the size
of the array is 0, like so:
Entry* entries = NULL;
if (ReadEntries(TEXT("Global\Entries"), entries)) {
int size = _ARRAYSIZE(entries);
out = "Succesfully read: " + to_string(size);// Is always 0
}
So my question is, what am I doing wrong? I'm sharing the same struct between 2 processes, I'm allocating new memory for the entries to be written to, and I'm copying the memory with a size of 10 * sizeof(Entry). When trying to read, I also try to read 10 * sizeof(Entry) bytes and cast the data to an Entry*. Is there something I'm missing? All help is welcome.
Based on cursory examination, this code appears to attempt to map structures containing std::strings into shared memory, to be used by another process.
Unfortunately, this adventure is doomed, before it even gets started. Even if you get the array length to pass along correctly, I expect the other process to crash immediately, as soon as it even smells the std::string that the other process attempted to map into shared memory segments.
std::strings are non-trivial classes. A std::string maintains internal pointers to a buffer where the actual string data is kept; with the buffer getting allocated on the heap.
You do understand that sizeof(std::string) doesn't change, whether the string contains five characters, or the entire contents of "War And Peace", right? Stop and think for a moment, how that's possible, in just a few bytes that it takes to store a std::string?
Once you think about it for a moment, it should become crystal clear why mapping one process's std::strings into a shared memory segment, and then attempting to grab them by another process, is not going to work.
The only thing that can be practically mapped to/from shared memory is plain old data; although you could get away with aggregates, in some cases, too.
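For illustration, a record that is safe to copy into shared memory keeps any variable-length pieces inline (fixed-size arrays) rather than behind pointers or std::string members, e.g.:
#include <type_traits>

// No pointers, no heap-owning members: safe to CopyMemory into a mapped view.
struct SharedEntry {
    int  id;
    int  type;
    int  count;
    char name[64];   // fixed inline buffer instead of std::string
};
static_assert(std::is_trivially_copyable<SharedEntry>::value,
              "only trivially copyable data belongs in shared memory");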
I'm afraid the problem simply lies in the _ARRAYSIZE macro. I could not really find it on MSDN, but I found references to _countof and ARRAYSIZE on other pages. All are defined as sizeof(array)/sizeof(array[0]). The problem is that this only makes sense for a true array defined as Entry entries[10], not for a pointer to such an array. Technically, when you declare:
Entry* entries;
sizeof(entries) is sizeof(Entry*), that is, the size of a pointer. It is smaller than the size of the struct, so the result of the integer division is... 0!
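Concretely, assuming a typical 64-bit build where a pointer is 8 bytes and Entry (three ints) is 12 bytes:
Entry* entries;
size_t n = sizeof(entries) / sizeof(entries[0]);   // 8 / 12 == 0 with integer division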
Anyway, there are other problems in current code. The correct way to exchange a variable size array through shared memory is to use an ancillary structure containing a size and the array itself declared as incomplete:
struct EntryArray {
size_t size;
Entry entries[];
};
You could dump it that way:
BOOL DumpEntries(TCHAR* memName) {//Returns true, writing 10 entries
int size = min(10, entries.size());
EntryArray* eArray = (EntryArray *) malloc(sizeof(EntryArray) + size * sizeof(Entry));
for (int i = 0; i < size; i++) {
eArray->entries[i] = entries.at(i);
}
eArray->size = size;
::hMapObject = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, MEMSIZE, memName);
if (::hMapObject == NULL) {
free(eArray);
return FALSE;
}
::vMapData = MapViewOfFile(::hMapObject, FILE_MAP_ALL_ACCESS, 0, 0, MEMSIZE);
if (::vMapData == NULL) {
CloseHandle(::hMapObject);
free(eArray);
return FALSE;
}
CopyMemory(::vMapData, eArray, (sizeof(EntryArray) + size * sizeof(Entry)));
UnmapViewOfFile(::vMapData);
free(eArray);
return TRUE;
}
You can note that as the last member of the struct is an incomplete array, it is allocated 0 size, so you must allocate the size of the struct + the size of the array.
You can then read it from memory that way:
size_t ReadEntries(TCHAR* memName, Entry*& entries) {//Returns the number of entries or -1 if error
HANDLE hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, memName);
if (hMapFile == NULL) {
return -1;
}
EntryArray* eArray = (EntryArray*)(MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, 10 * sizeof(Entry)));
if (eArray == NULL) {
CloseHandle(hMapFile);
return -1;
}
size_t size = eArray->size; // save the count before the view is unmapped
entries = new Entry[size];
for (size_t i = 0; i < size; i++) {
entries[i] = eArray->entries[i];
}
UnmapViewOfFile(eArray);
CloseHandle(hMapFile);
return size;
}
But here again you should note some differences. As the number of entries is lost once the view of eArray is unmapped, it is saved and passed back as the return value of the function. And as you want to modify the pointer passed as the second parameter, you must pass it by reference (if you pass it by value, you will only change a local copy and the original variable will still contain NULL after the function returns).
There are still some possible improvements to your code: the vector entries is global when it could be passed as a parameter to DumpEntries, and hMapObject is also global when it could be returned by the function. And in DumpEntries you could avoid a copy by building the EntryArray directly in shared memory:
HANDLE DumpEntries(TCHAR* memName, const std::vector<Entry>& entries) {
//Returns HANDLE to mapped file (or NULL), writing 10 entries
int size = min(10, entries.size());
HANDLE hMapObject = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, MEMSIZE, memName);
if (hMapObject == NULL) {
return NULL;
}
void * vMapData = MapViewOfFile(hMapObject, FILE_MAP_ALL_ACCESS, 0, 0, MEMSIZE);
if (vMapData == NULL) {
CloseHandle(hMapObject);
return NULL;
}
EntryArray* eArray = (EntryArray*) vMapData;
for (int i = 0; i < size; i++) {
eArray->entries[i] = entries.at(i);
}
eArray->size = size;
UnmapViewOfFile(vMapData);
return hMapObject;
}
And last but not least, the backslash \ is an escape character in a string literal, so it must be escaped itself. You should therefore write TEXT("Global\\Entries").
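A possible call site for that last, HANDLE-returning variant (a sketch only, error handling omitted): the mapping object lives only as long as at least one handle to it is open, so the writer should keep the returned handle until the reader has fetched the data.
// Process 1 (sketch):
HANDLE hMap = DumpEntries(TEXT("Global\\Entries"), entries);
if (hMap != NULL) {
    // ... signal process 2 and wait until it has read the entries ...
    CloseHandle(hMap);
}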
I made some changes to your code:
PROCESS 1:
BOOL DumpEntries(TCHAR* memName)
{
int size = entries.size() * sizeof(Entry) + sizeof(DWORD);
::hMapObject = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, size, memName);
if (::hMapObject == NULL) {
return FALSE;
}
::vMapData = MapViewOfFile(::hMapObject, FILE_MAP_ALL_ACCESS, 0, 0, size);
if (::vMapData == NULL) {
CloseHandle(::hMapObject);
return FALSE;
}
(*(DWORD*)::vMapData) = entries.size();
Entry* eArray = (Entry*)(((DWORD*)::vMapData) + 1);
for(int i = entries.size() - 1; i >= 0; i--) eArray[i] = entries.at(i);
UnmapViewOfFile(::vMapData);
return TRUE;
}
PROCESS 2:
BOOL ReadEntries(TCHAR* memName, Entry** entries, DWORD &number_of_entries) {
HANDLE hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, memName);
if (hMapFile == NULL) {
return FALSE;
}
DWORD *num_entries = (DWORD*)MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, 0);
if (num_entries == NULL) {
CloseHandle(hMapFile);
return FALSE;
}
number_of_entries = *num_entries;
if(number_of_entries == 0)
{
// special case: no entries were found in the buffer
*entries = NULL;
UnmapViewOfFile(num_entries);
CloseHandle(hMapFile);
return TRUE;
}
Entry* tmpEntries = (Entry*)(num_entries + 1);
*entries = new Entry[*num_entries];
for (UINT i = 0; i < *num_entries; i++) {
(*entries)[i] = tmpEntries[i];
}
UnmapViewOfFile(num_entries);
CloseHandle(hMapFile);
return TRUE;
}
PROCESS 2 (usage example):
void main()
{
Entry* entries;
DWORD number_of_entries;
if(ReadEntries(TEXT("Global\\Entries"), &entries, number_of_entries) && number_of_entries > 0)
{
// do something
}
delete[] entries;
}
CHANGES:
I am not using a static size (MEMSIZE) when I map the memory; I am calculating exactly the memory required
I put a "header" in the mapped memory: a DWORD that tells process 2 the number of entries in the buffer
your ReadEntries definition was wrong; I fixed it by changing Entry* to Entry**
NOTES:
you need to keep the ::hMapObject handle open in process 1 until process 2 has called ReadEntries; once every handle to the mapping is closed, the mapping and its contents are destroyed
you need to delete[] the entries memory returned by ReadEntries in process 2 once you are done using it
this code works only under the same Windows user; if you want a service to communicate with a user process (for example), you need to fill in the SECURITY_ATTRIBUTES parameter of the CreateFileMapping call
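On that last note, one common (if very permissive) way to let, for example, a service and a user-session process open the same mapping is to attach a security descriptor with a NULL DACL; a minimal sketch, assuming size and memName as in the code above:
// NULL DACL = everyone gets full access to the mapping object.
// Fine for experiments; use a real DACL in production code.
SECURITY_DESCRIPTOR sd;
InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
SetSecurityDescriptorDacl(&sd, TRUE, NULL, FALSE);

SECURITY_ATTRIBUTES sa;
sa.nLength = sizeof(sa);
sa.lpSecurityDescriptor = &sd;
sa.bInheritHandle = FALSE;

::hMapObject = CreateFileMapping(INVALID_HANDLE_VALUE, &sa, PAGE_READWRITE, 0, size, memName);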
I need the following for the exception handler in C++ code. Say, I have the following code block:
void myFunction(LPCTSTR pStr, int ncbNumCharsInStr)
{
__try
{
//Do work with 'pStr'
}
__except(1)
{
//Catch all
//But here I need to log `pStr` into event log
//For that I don't want to raise another exception
//if memory block of size `ncbNumCharsInStr` * sizeof(TCHAR)
//pointed by 'pStr' is unreadable.
if(memory_readable(pStr, ncbNumCharsInStr * sizeof(TCHAR)))
{
Log(L"Failed processing: %s", pStr);
}
else
{
Log(L"String at 0x%X, %d chars long is unreadable!", pStr, ncbNumCharsInStr);
}
}
}
Is there any way to implement memory_readable?
The VirtualQuery function might be able to help. The following is a quick stab at how you could implement memory_readable using it.
bool memory_readable(void *ptr, size_t byteCount)
{
MEMORY_BASIC_INFORMATION mbi;
if (VirtualQuery(ptr, &mbi, sizeof(MEMORY_BASIC_INFORMATION)) == 0)
return false;
if (mbi.State != MEM_COMMIT)
return false;
if (mbi.Protect == PAGE_NOACCESS || mbi.Protect == PAGE_EXECUTE)
return false;
// This checks that the start of memory block is in the same "region" as the
// end. If it isn't you "simplify" the problem into checking that the rest of
// the memory is readable.
size_t blockOffset = (size_t)((char *)ptr - (char *)mbi.BaseAddress); // offset within the region VirtualQuery described
size_t blockBytesPostPtr = mbi.RegionSize - blockOffset;
if (blockBytesPostPtr < byteCount)
return memory_readable((char *)ptr + blockBytesPostPtr,
byteCount - blockBytesPostPtr);
return true;
}
NOTE: My background is C, so while I suspect that there are better options than casting to a char * in C++ I'm not sure what they are.
You can use the ReadProcessMemory function. If the function returns 0, the range is not readable; otherwise it is readable.
Return Value
If the function fails, the return value is 0 (zero). To get extended
error information, call GetLastError.
The function fails if the requested read operation crosses into an
area of the process that is inaccessible.
If the function succeeds, the return value is nonzero.
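A minimal sketch of memory_readable built on that, assuming the probe targets the current process and that a partial read should count as unreadable (a fixed stack buffer is used so the check itself allocates nothing):
#include <windows.h>

bool memory_readable(const void* ptr, size_t byteCount)
{
    char scratch[256];
    const char* p = static_cast<const char*>(ptr);
    while (byteCount > 0) {
        SIZE_T chunk = byteCount < sizeof(scratch) ? byteCount : sizeof(scratch);
        SIZE_T bytesRead = 0;
        // ReadProcessMemory fails if any part of the requested range is inaccessible.
        if (!ReadProcessMemory(GetCurrentProcess(), p, scratch, chunk, &bytesRead) ||
            bytesRead != chunk)
            return false;
        p += chunk;
        byteCount -= chunk;
    }
    return true;
}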
I am trying to use the Microsoft 'Crypt...' functions to generate an MD5 hash key from the data that is added to the hash object. I am also trying to use the 'CryptSetHashParam' to set the hash object to a particular hash value before adding data to it.
According to the Microsoft documentation (if I am interpreting it correctly), you should be able to do this by creating a duplicate hash of the original object, use the 'CryptGetHashParam' function to retrieve the hash size then use 'CryptSetHashParam' on the original object to set the hash value accordingly. I am aware that after using 'CryptGetHashParam' you are unable to add additional data to a hash object (which is why I thought you needed to create a duplicate), but I can't add data to either the original hash object or the duplicate hash object after using either 'CryptGetHashParam' (as expected), or 'CryptSetHashParam' (which I didn't expect).
Below are code extracts of the class I am writing and an example of how I am using the class functions:
The result I get after running the code is:
"AddDataToHash function failed - Errorcode: 2148073484.", which translates to: "Hash not valid for use in specified state.".
I've tried many different ways to try and get this working as intended, but the result is always the same. I accept that I am doing something wrong, but I can't see what it is I'm doing wrong. Any ideas please?
CLASS CONSTRUCTOR INITIALISATION.
CAuthentication::CAuthentication()
{
m_dwLastError = ERROR_SUCCESS;
m_hCryptProv = NULL;
m_hHash = NULL;
m_hDuplicateHash = NULL;
if(!CryptAcquireContext(&m_hCryptProv, NULL, NULL, PROV_RSA_FULL, CRYPT_MACHINE_KEYSET))
{
m_dwLastError = GetLastError();
if (m_dwLastError == 0x80090016 )
{
if(!CryptAcquireContext(&m_hCryptProv, NULL, NULL, PROV_RSA_FULL, CRYPT_NEWKEYSET | CRYPT_MACHINE_KEYSET))
{
m_dwLastError = GetLastError();
m_hCryptProv = NULL;
}
}
}
if(!CryptCreateHash(m_hCryptProv, CALG_MD5, 0, 0, &m_hHash))
{
m_dwLastError = GetLastError();
m_hHash = NULL;
}
}
FUNCTION USED TO SET THE HASH VALUE OF THE HASH OBJECT.
bool CAuthentication::SetHashKeyString(char* pszKeyBuffer)
{
bool bHashStringSet = false;
DWORD dwHashSize = 0;
DWORD dwHashLen = sizeof(DWORD);
BYTE byHash[DIGITAL_SIGNATURE_LENGTH / 2]={0};
if(pszKeyBuffer != NULL && strlen(pszKeyBuffer) == DIGITAL_SIGNATURE_LENGTH)
{
if(CryptDuplicateHash(m_hHash, NULL, 0, &m_hDuplicateHash))
{
if(CryptGetHashParam(m_hDuplicateHash, HP_HASHSIZE, reinterpret_cast<BYTE*>(&dwHashSize), &dwHashLen, 0))
{
if (dwHashSize == DIGITAL_SIGNATURE_LENGTH / 2)
{
char*pPtr = pszKeyBuffer;
ULONG ulTempVal = 0;
for(ULONG ulIdx = 0; ulIdx < dwHashSize; ulIdx++)
{
sscanf(pPtr, "%02X", &ulTempVal);
byHash[ulIdx] = static_cast<BYTE>(ulTempVal);
pPtr+= 2;
}
if(CryptSetHashParam(m_hHash, HP_HASHVAL, &byHash[0], 0))
{
bHashStringSet = true;
}
else
{
pszKeyBuffer = "";
m_dwLastError = GetLastError();
}
}
}
else
{
m_dwLastError = GetLastError();
}
}
else
{
m_dwLastError = GetLastError();
}
}
if(m_hDuplicateHash != NULL)
{
CryptDestroyHash(m_hDuplicateHash);
}
return bHashStringSet;
}
FUNCTION USED TO ADD DATA FOR HASHING.
bool CAuthentication::AddDataToHash(BYTE* pbyHashBuffer, ULONG ulLength)
{
bool bHashDataAdded = false;
if(CryptHashData(m_hHash, pbyHashBuffer, ulLength, 0))
{
bHashDataAdded = true;
}
else
{
m_dwLastError = GetLastError();
}
return bHashDataAdded;
}
MAIN FUNCTION CLASS USAGE:
CAuthentication auth;
.....
auth.SetHashKeyString("0DD72A4F2B5FD48EF70B775BEDBCA14C");
.....
if(!auth.AddDataToHash(pbyHashBuffer, ulDataLen))
{
TRACE("CryptHashData function failed - Errorcode: %lu.\n", auth.GetAuthError());
}
You can't do it because it doesn't make any sense. CryptGetHashParam with the HP_HASHVAL option finalizes the hash, so there is no way to add data to it. If you want to "fork" the hash so that you can finalize it at some point as well as add data to it, you must duplicate the hash object prior to finalizing. Then you add the data to one of the hash objects and finalize the other. For example, you might do this if you wanted to record a cumulative hash after every 1024 bytes of a data stream. You should not call CryptSetHashParam on the hash object that you are continuing to add data to.
CryptSetHashParam with the HP_HASHVAL option is a brutal hack to overcome a limitation in the CryptoAPI. The CryptoAPI will only sign a hash object, so if you want to sign some data that might have been hashed or generated outside of CAPI, you have to "jam" it into a hash object.
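A minimal sketch of the "fork" pattern described above, using the class members from the question (pChunk/ulChunkLen and pNextChunk/ulNextChunkLen are hypothetical placeholders, and error handling is omitted):
BYTE  digest[16];                 // an MD5 digest is 16 bytes
DWORD digestLen = sizeof(digest);
HCRYPTHASH hSnapshot = NULL;

CryptHashData(m_hHash, pChunk, ulChunkLen, 0);                    // keep feeding the original

CryptDuplicateHash(m_hHash, NULL, 0, &hSnapshot);                 // fork the current state
CryptGetHashParam(hSnapshot, HP_HASHVAL, digest, &digestLen, 0);  // finalizes the copy only
CryptDestroyHash(hSnapshot);

CryptHashData(m_hHash, pNextChunk, ulNextChunkLen, 0);            // original still accepts data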
EDIT:
Based on your comment, I think you are looking for a way to serialize the hash object. I cannot find any evidence that CryptoAPI supports this. There are alternatives, however, that are basically variants of my "1024 bytes" example above. If you are hashing a sequence of files, you could simply compute and save the hash of each file. If you really need to boil it down to one value, then you can compute a modified hash where the first piece of data you hash for file i is the finalized hash for files 0, 1, 2, ..., i-1. So:
H_{-1} = empty
H_i = MD5(H_{i-1} || file_i)
As you go along, you can save the last successfully computed H_i value. In case of interruption, you can restart at file i+1. Note that, like any message digest, the above is completely sensitive to both order and content. This is something to consider on a dynamically changing file system. If files can be added or changed during the hashing operation, the meaning of the hash value will be affected; it might be rendered meaningless. You might want to be certain that both the sequence and the content of the files you are hashing are frozen for the entire duration of the hash.
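A rough sketch of that chaining scheme with the CryptoAPI, assuming a files collection of paths and a hypothetical HashFileContents helper that simply calls CryptHashData over a file's bytes:
// H_i = MD5(H_{i-1} || file_i): seed each file's hash with the previous digest.
BYTE  prevDigest[16] = {0};       // H_{-1} is "empty": nothing is mixed in for the first file
DWORD prevLen = 0;

for (size_t i = 0; i < files.size(); i++) {
    HCRYPTHASH hHash = NULL;
    if (!CryptCreateHash(m_hCryptProv, CALG_MD5, 0, 0, &hHash))
        break;
    if (prevLen > 0)
        CryptHashData(hHash, prevDigest, prevLen, 0);              // mix in H_{i-1}
    HashFileContents(hHash, files[i]);                             // hypothetical helper
    DWORD len = sizeof(prevDigest);
    CryptGetHashParam(hHash, HP_HASHVAL, prevDigest, &len, 0);     // H_i; persist this somewhere durable
    prevLen = len;
    CryptDestroyHash(hHash);
}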