WinHttpCrackUrl failed with error 87 - C++

URL_COMPONENTS urlComp;
LPCWSTR pwszUrl1 =
L"http://search.msn.com/results.asp?RS=CHECKED&FORM=MSNH&v=1&q=wininet";
DWORD dwUrlLen = 0;
// Initialize the URL_COMPONENTS structure.
ZeroMemory(&urlComp, sizeof(urlComp));
urlComp.dwStructSize = sizeof(urlComp);
// Set required component lengths to non-zero
// so that they are cracked.
urlComp.dwSchemeLength = (DWORD)-1;
urlComp.dwHostNameLength = (DWORD)-1;
urlComp.dwUrlPathLength = (DWORD)-1;
urlComp.dwExtraInfoLength = (DWORD)-1;
// Crack the URL.
if (!WinHttpCrackUrl(pwszUrl1, (DWORD)wcslen(pwszUrl1), 0, &urlComp))
{
    printf("Error %u in WinHttpCrackUrl.\n", GetLastError());
}
This WinHttpCrackUrl API fails with error 87 (ERROR_INVALID_PARAMETER) on Windows 7. Can anyone suggest a solution, or an easy way to decode my URL on the server side? I also want to know how I can distinguish an encoded %20 in the URL from actual data present in the URL. Example: localhost:8080\Server\search?value="value%20"

Set each required component length to the expected value, or leave it at zero if you don't need that component cracked:
urlComp.dwSchemeLength = (DWORD)0;
urlComp.dwHostNameLength = (DWORD)0;
urlComp.dwUrlPathLength = (DWORD)0;
urlComp.dwExtraInfoLength = (DWORD)0;
Or supply your own buffer for a component:
std::wstring urlHost;
urlHost.resize(url.size());
urlComp.dwHostNameLength = (DWORD)urlHost.size();
urlComp.lpszHostName = &urlHost[0];
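
For reference, here is a minimal, self-contained sketch (buffer sizes are arbitrary assumptions) that cracks the same URL by handing WinHttpCrackUrl caller-owned buffers. Passing ICU_DECODE makes it convert escape sequences such as %20 while copying each component, which covers the decoding part of the question; ICU_DECODE requires buffers, since the decoded text cannot point back into the original string.

#include <windows.h>
#include <winhttp.h>
#include <cstdio>
#include <cwchar>
#pragma comment(lib, "winhttp.lib")

int wmain()
{
    LPCWSTR pwszUrl =
        L"http://search.msn.com/results.asp?RS=CHECKED&FORM=MSNH&v=1&q=wininet";

    // Caller-owned buffers; sizes are arbitrary for this sketch.
    wchar_t scheme[16] = {}, host[256] = {}, path[1024] = {}, extra[1024] = {};

    URL_COMPONENTS urlComp = {};
    urlComp.dwStructSize = sizeof(urlComp);
    urlComp.lpszScheme    = scheme;  urlComp.dwSchemeLength    = ARRAYSIZE(scheme);
    urlComp.lpszHostName  = host;    urlComp.dwHostNameLength  = ARRAYSIZE(host);
    urlComp.lpszUrlPath   = path;    urlComp.dwUrlPathLength   = ARRAYSIZE(path);
    urlComp.lpszExtraInfo = extra;   urlComp.dwExtraInfoLength = ARRAYSIZE(extra);

    // ICU_DECODE converts escape sequences (e.g. %20) in the copied components.
    if (!WinHttpCrackUrl(pwszUrl, (DWORD)wcslen(pwszUrl), ICU_DECODE, &urlComp))
    {
        wprintf(L"Error %lu in WinHttpCrackUrl.\n", GetLastError());
        return 1;
    }

    wprintf(L"scheme=%s host=%s port=%u\npath=%s\nquery=%s\n",
            scheme, host, urlComp.nPort, path, extra);
    return 0;
}

Note that once %20 has been decoded to a space it is indistinguishable from a literal space, so if that distinction matters the raw query string has to be inspected before decoding.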


WinVerifyTrust only works when using a file, not a memory blob

I am trying to use the WinVerifyTrust function to verify the signature of an MSI.
While this works for files on the file system, I cannot get it to work with a memory blob.
The basis of my example demonstrating the problem is the sample program from Microsoft:
BOOL VerifyEmbeddedSignature(LPCWSTR pwszSourceFile)
{
    std::basic_ifstream<BYTE> file(pwszSourceFile, std::ios::binary);
    std::vector<BYTE> data((std::istreambuf_iterator<BYTE>(file)), std::istreambuf_iterator<BYTE>());
    LONG lStatus;

    /*
    // this works
    WINTRUST_FILE_INFO FileData;
    memset(&FileData, 0, sizeof(FileData));
    FileData.cbStruct = sizeof(WINTRUST_FILE_INFO);
    FileData.pcwszFilePath = pwszSourceFile;
    FileData.hFile = NULL;
    FileData.pgKnownSubject = NULL;
    */

    WINTRUST_BLOB_INFO FileData{};
    memset(&FileData, 0, sizeof(FileData));
    FileData.cbStruct = sizeof(WINTRUST_BLOB_INFO);
    FileData.gSubject = WIN_TRUST_SUBJTYPE_RAW_FILE;
    FileData.cbMemObject = static_cast<DWORD>(data.size());
    FileData.pbMemObject = data.data();

    GUID WVTPolicyGUID = WINTRUST_ACTION_GENERIC_VERIFY_V2;

    WINTRUST_DATA WinTrustData;
    memset(&WinTrustData, 0, sizeof(WinTrustData));
    WinTrustData.cbStruct = sizeof(WinTrustData);
    WinTrustData.pPolicyCallbackData = NULL;
    WinTrustData.pSIPClientData = NULL;
    WinTrustData.dwUIChoice = WTD_UI_NONE;
    WinTrustData.fdwRevocationChecks = WTD_REVOKE_NONE;
    //WinTrustData.dwUnionChoice = WTD_CHOICE_FILE; -- works
    WinTrustData.dwUnionChoice = WTD_CHOICE_BLOB;
    WinTrustData.dwStateAction = WTD_STATEACTION_VERIFY;
    WinTrustData.hWVTStateData = NULL;
    WinTrustData.pwszURLReference = NULL;
    WinTrustData.dwUIContext = 0;
    WinTrustData.pBlob = &FileData;
    //WinTrustData.pFile = &FileData; -- works

    lStatus = WinVerifyTrust(
        NULL,
        &WVTPolicyGUID,
        &WinTrustData);

    WinTrustData.dwStateAction = WTD_STATEACTION_CLOSE;
    WinVerifyTrust(
        NULL,
        &WVTPolicyGUID,
        &WinTrustData);

    return lStatus == ERROR_SUCCESS;
}
In the above example, using WINTRUST_FILE_INFO works but WINTRUST_BLOB_INFO does not.
The error that I get with a memory blob is always TRUST_E_PROVIDER_UNKNOWN.
I assume that the problem might be the WIN_TRUST_SUBJTYPE_RAW_FILE subject type, but I don't know which one I should use for an MSI file.
I am wondering whether the signature check is possible at all with memory blobs of MSI files.
WIN_TRUST_SUBJTYPE_RAW_FILE is not the suitable subject GUID. You could use the function CryptSIPRetrieveSubjectGuid, which retrieves a GUID based on the header information in a specified file.
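A minimal sketch of that lookup (assuming the MSI is still reachable by path or by an open handle; the helper name is just for illustration), whose result would then go into WINTRUST_BLOB_INFO.gSubject instead of WIN_TRUST_SUBJTYPE_RAW_FILE:

#include <windows.h>
#include <mssip.h>
#include <cstdio>
#pragma comment(lib, "crypt32.lib")

// Ask the SIP layer which subject GUID matches the file instead of
// hard-coding one; hypothetical helper for illustration.
bool GetSubjectGuidForFile(LPCWSTR pwszSourceFile, GUID* pgSubject)
{
    if (!CryptSIPRetrieveSubjectGuid(pwszSourceFile, NULL, pgSubject))
    {
        printf("CryptSIPRetrieveSubjectGuid failed, error %lu\n", GetLastError());
        return false;
    }
    return true;
}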

OLE DB Bulk Copy Operation Always Loads True into Bit Columns

I'm using the OLE DB bulk copy operation against a SQL Server database, but I'm having trouble loading data into bit columns - they are always populated with true!
I created a simple reproduction program from Microsoft's sample code with the snippet that I adjusted below. My program includes a little SQL script to create the destination table. I had to download and install the x64 version of the SQL Server OLE DB driver to build this.
// Set up custom bindings.
oneBinding.dwPart = DBPART_VALUE | DBPART_LENGTH | DBPART_STATUS;
oneBinding.iOrdinal = 1;
oneBinding.pTypeInfo = NULL;
oneBinding.obValue = ulOffset + offsetof(COLUMNDATA, bData);
oneBinding.obLength = ulOffset + offsetof(COLUMNDATA, dwLength);
oneBinding.obStatus = ulOffset + offsetof(COLUMNDATA, dwStatus);
oneBinding.cbMaxLen = 1; // Size of varchar column.
oneBinding.pTypeInfo = NULL;
oneBinding.pObject = NULL;
oneBinding.pBindExt = NULL;
oneBinding.dwFlags = 0;
oneBinding.eParamIO = DBPARAMIO_NOTPARAM;
oneBinding.dwMemOwner = DBMEMOWNER_CLIENTOWNED;
oneBinding.bPrecision = 0;
oneBinding.bScale = 0;
oneBinding.wType = DBTYPE_BOOL;
ulOffset = oneBinding.cbMaxLen + offsetof(COLUMNDATA, bData);
ulOffset = ROUND_UP(ulOffset, COLUMN_ALIGNVAL);
if (FAILED(hr = pIFastLoad->QueryInterface(IID_IAccessor, (void**)&pIAccessor)))
    return hr;

if (FAILED(hr = pIAccessor->CreateAccessor(DBACCESSOR_ROWDATA,
                                           1,
                                           &oneBinding,
                                           ulOffset,
                                           &hAccessor,
                                           &oneStatus)))
    return hr;

// Set up memory buffer.
pData = new BYTE[40];
if (!(pData /* = new BYTE[40]*/)) {
    hr = E_FAIL;
    goto cleanup;
}

pcolData = (COLUMNDATA*)pData;
pcolData->dwLength = 1;
pcolData->dwStatus = 0;

for (i = 0; i < 10; i++)
{
    if (i % 2 == 0)
    {
        pcolData->bData[0] = 0x00;
    }
    else
    {
        pcolData->bData[0] = 0xFF;
    }

    if (FAILED(hr = pIFastLoad->InsertRow(hAccessor, pData)))
        goto cleanup;
}
It's entirely likely that I'm putting the wrong value into the buffer, or have some other constant value incorrect. I did find an article describing the safety of various data type conversions and it looks like byte to bool is safe... but how would the buffer know what kind of data I'm putting in there if it's just a byte array?
Figured this out: I had not correctly switched the demo over from loading strings to fixed-width values. For strings, the data blob holds a pointer to the value, whereas fixed-width values hold the actual data inline.
So my COLUMNDATA struct now looks like this:
// How to lay out each column in memory.
struct COLUMNDATA {
DBLENGTH dwLength; // Length of data (not space allocated).
DWORD dwStatus; // Status of column.
VARIANT_BOOL bData; // Value, or if a string, a pointer to the value.
};
With the relevant length fix here:
pcolData = (COLUMNDATA*)pData;
pcolData->dwLength = sizeof(VARIANT_BOOL); // using a short.. make it two
pcolData->dwStatus = DBSTATUS_S_OK; // Indicates that the data value is to be used, not null
And the little value-setting for loop looks like this:
for (i = 0; i < 10; i++)
{
    if (i % 2 == 0)
    {
        pcolData->bData = VARIANT_TRUE;
    }
    else
    {
        pcolData->bData = VARIANT_FALSE;
    }

    if (FAILED(hr = pIFastLoad->InsertRow(hAccessor, pData)))
        goto cleanup;
}
I've updated the repository with the working code. I was inspired to make this change after reading the documentation for the obValue property.
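
For completeness, a sketch (my own assumption based on the fix above, not the exact repository code) of how the binding fields would line up with that COLUMNDATA struct once the column is bound as a fixed-width DBTYPE_BOOL:

// Single binding for a SQL Server bit column stored inline as VARIANT_BOOL.
DBBINDING oneBinding = {};
oneBinding.iOrdinal   = 1;
oneBinding.dwPart     = DBPART_VALUE | DBPART_LENGTH | DBPART_STATUS;
oneBinding.obLength   = offsetof(COLUMNDATA, dwLength);
oneBinding.obStatus   = offsetof(COLUMNDATA, dwStatus);
oneBinding.obValue    = offsetof(COLUMNDATA, bData);  // value lives in the row buffer
oneBinding.cbMaxLen   = sizeof(VARIANT_BOOL);         // 2 bytes, not 1
oneBinding.wType      = DBTYPE_BOOL;
oneBinding.eParamIO   = DBPARAMIO_NOTPARAM;
oneBinding.dwMemOwner = DBMEMOWNER_CLIENTOWNED;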

GSSAPI: Error 0x80090311 returned by InitializeSecurityContext

I need to set up an Active Directory authentication system using Kerberos. My AcquireCredentialsHandleA call looks as follows. https://learn.microsoft.com/en-us/windows/win32/secauthn/acquirecredentialshandle--kerberos
SEC_WINNT_AUTH_IDENTITY AuthData, *pAuthData = NULL;
#ifdef UNICODE
AuthData.Flags = SEC_WINNT_AUTH_IDENTITY_UNICODE;
#else
AuthData.Flags = SEC_WINNT_AUTH_IDENTITY_ANSI;
#endif
unsigned char username[200] = "user";
unsigned char domain[200] = "domain.com";
unsigned char password[100] = "secret";
AuthData.User = &username[0]; //username
AuthData.Domain = &domain[0]; //domain
AuthData.Password = &password[0]; //password
AuthData.UserLength = AuthData.User ? sizeof(AuthData.User) : 0;
AuthData.DomainLength = AuthData.Domain ? sizeof(AuthData.Domain) : 0;
AuthData.PasswordLength = AuthData.Password ? sizeof(AuthData.Password) : 0;
Status = g_pSSPI->AcquireCredentialsHandleA(
    NULL,                   // Name of principal
    ppPackageInfo[2].Name,  // Name of package ("Kerberos")
    SECPKG_CRED_OUTBOUND,   // Flags indicating use
    NULL,                   // Pointer to logon ID
    pAuthData,              // Package-specific data (still NULL here)
    NULL,                   // Pointer to GetKey() func
    NULL,                   // Value to pass to GetKey()
    phCreds,                // (out) Cred handle
    &tsExpiry);
This returns success. However, when I call my InitializeSecurityContextA function it gives me error 0x80090311, which means SEC_E_NO_AUTHENTICATING_AUTHORITY. I have tried all sorts of possible domain names, etc. When I run ksetup in PowerShell it can generate a ticket with the same credentials, but the code always fails. Can anyone spot a problem here?
dwSSPIFlags = ISC_REQ_ALLOCATE_MEMORY | ISC_REQ_USE_SESSION_KEY | ISC_REQ_CONNECTION; // Not sure at all about them flags
OutBuffers[0].pvBuffer = NULL;
OutBuffers[0].BufferType = SECBUFFER_TOKEN; //2
//OutBuffers[0].cbBuffer = 0;
OutBuffers[0].cbBuffer = 7084;
OutBuffer.cBuffers = 1;
OutBuffer.pBuffers = OutBuffers;
OutBuffer.ulVersion = SECBUFFER_VERSION; //0
Status = g_pSSPI->InitializeSecurityContextA(
    phCreds,
    fHaveCtxtHandle ? phContext : NULL,  // context handle; NULL for the first call
    server,
    dwSSPIFlags,
    0,
    SECURITY_NETWORK_DREP,               // was SECURITY_NATIVE_DREP
    fHaveCtxtHandle ? &InBuffer : NULL,
    0,
    fHaveCtxtHandle ? NULL : phContext,
    &OutBuffer,
    &dwSSPIOutFlags,
    &tsExpiry);
Can anyone help me with this? Thanks a lot. :)
--------------------------------------------------------------- edit ---------------------------------------------------------------
std::string fqdn = "HTTP/staging.company.com";
char * server = new char[fqdn.size() + 1];
std::copy(fqdn.begin(), fqdn.end(), server);
server[fqdn.size()] = '\0';
So this is how I set up the service name. I have this service name registered in my Active Directory as well, though the address does not offer any particular service yet. Do you think that could be the problem?
Another detail: the Active Directory server is not in the same office; we tunnel to it via a router.
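
Not an answer to the SEC_E_NO_AUTHENTICATING_AUTHORITY error itself, but one thing worth double-checking in the code above: the length members of SEC_WINNT_AUTH_IDENTITY are documented as character counts of the strings, not sizeof() of the array, and pAuthData has to actually point at the structure for it to be used. A minimal sketch with placeholder credentials:

SEC_WINNT_AUTH_IDENTITY_A AuthData = {};
AuthData.Flags = SEC_WINNT_AUTH_IDENTITY_ANSI;

// Placeholder credentials; lengths are character counts, excluding the null.
AuthData.User           = (unsigned char*)"user";
AuthData.UserLength     = (unsigned long)strlen("user");
AuthData.Domain         = (unsigned char*)"domain.com";
AuthData.DomainLength   = (unsigned long)strlen("domain.com");
AuthData.Password       = (unsigned char*)"secret";
AuthData.PasswordLength = (unsigned long)strlen("secret");

SEC_WINNT_AUTH_IDENTITY_A* pAuthData = &AuthData;  // pass this, not NULL,
                                                   // to AcquireCredentialsHandleA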

Invalid parameter error using GetDefaultCommConfig

I came up with the same issue, in which I get an LPTSTR port name parameter as input to a function. I have to convert this into a wstring so that I can fetch the port parameters.
Below is the code snippet in which I am trying to copy the LPTSTR into a wstring.
void C_PORT_MONITOR::SetPrinterComPortParam(LPTSTR PortName)
{
#ifdef _UNICODE
    std::wstring l_ComPortName;
#else
    std::string l_ComPortName;
#endif
    DWORD dwSize, le = 0;
    dwSize = sizeof(COMMCONFIG);
    LPCOMMCONFIG lpCC = (LPCOMMCONFIG) new BYTE[dwSize];

    l_ComPortName = PortName; //mPortName;
    if (l_ComPortName.length() <= 0)
        return;

    bool SetFlag = false;

    // Get COMM port params; called to get size of config. block
    int length = l_ComPortName.length();
    int iPos = l_ComPortName.find_first_of(':');
    int iChc = length - iPos;                                  // remove the characters after :
    l_ComPortName = l_ComPortName.substr(0, (length - iChc));  // remove the characters from the colon // COM1

    // Get COMM port params with defined size
    BOOL ret = GetDefaultCommConfig(l_ComPortName.c_str(), lpCC, &dwSize);
    _RPT1(_CRT_WARN, "C_PORT_MONITOR::SetPrinterComPortParam length=%x,iPos=%x,iChc=%x,l_ComPortName=%s", length, iPos, iChc, l_ComPortName);
    if (!ret)
    {
        le = GetLastError();
        _RPT1(_CRT_WARN, "C_PORT_MONITOR::SetPrinterComPortParam LastError=%x", le);
    }
I need to assign this port name to l_ComPortName, create a substring from it such as COM1, and use that substring in GetDefaultCommConfig().
Your error is in the second parameter, not the first. Your debugging statement is also bugged because it doesn't account for wide strings: %s is for narrow strings only; you should use %S for a wide string.
Here's the real error
dwSize = sizeof(COMMCONFIG);
LPCOMMCONFIG lpCC = (LPCOMMCONFIG) new BYTE[dwSize];
lpCC->dwSize = sizeof(COMMCONFIG); // this line is needed
You might need this as well (the documentation isn't very clear)
lpCC->wVersion = 1;
It's very common in Windows programming that you have to initialize a struct with the size of the struct.
Ref: https://technet.microsoft.com/en-us/aa363188(v=vs.90)
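
Putting both fixes together, a minimal sketch (assuming a port name such as L"COM1:" coming in; the helper name is made up) that strips the colon, initializes dwSize inside the structure, and then calls GetDefaultCommConfigW:

#include <windows.h>
#include <string>
#include <vector>

// Hypothetical helper: query the default configuration of a COM port.
bool QueryDefaultPortConfig(const std::wstring& portNameWithColon)
{
    // "COM1:" -> "COM1"; if there is no colon, the name is used unchanged.
    std::wstring port = portNameWithColon.substr(0, portNameWithColon.find(L':'));

    DWORD dwSize = sizeof(COMMCONFIG);
    std::vector<BYTE> buffer(dwSize);
    LPCOMMCONFIG lpCC = reinterpret_cast<LPCOMMCONFIG>(buffer.data());
    lpCC->dwSize = sizeof(COMMCONFIG);  // the structure must carry its own size

    if (!GetDefaultCommConfigW(port.c_str(), lpCC, &dwSize))
    {
        // On failure dwSize may hold the required size; a production version
        // would reallocate the buffer and retry.
        return false;
    }
    return true;
}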

NTE_BAD_DATA in CryptSetKeyParam while setting KP_P in wincrypt

I have the code below. I am setting a prime for the Diffie-Hellman algorithm using a char *.
I am getting NTE_BAD_DATA after I set the prime. Where am I going wrong?
I followed the same example in this link:
https://msdn.microsoft.com/en-us/library/aa381969(VS.85).aspx#exchanging_diffie-hellman_keys
What is the correct way to set the prime in Diffie-Hellman using WinCrypt?
#define DHKEYSIZE 1024
int fld_sz = 256;
BYTE* g_rgbPrime = new BYTE[DHKEYSIZE/8];
char * prime = "A1BD60EBD2D43C53FA78D938C1EF8C9AD231F9862FC402739302DEF1B6BEB01E5BE59848A04C48B0069A8FB56143688678F7CC1097B921EA3E13E1EF9B9EB5381BEFDE7BBF614C13827493A1CA31DA76B4083B62C5073451D6B1F06A2F1049C291464AC68CBB2F69474470BBAD374073392696B6447C82BF55F20B2D015EB97B";
string s_prime(prime, fld_sz);
vector<std::string> res;
// Split the string into two-character chunks for conversion to hex bytes.
for (size_t i = 0; i < fld_sz; i += 2)
    res.push_back(s_prime.substr(i, 2));

for (int i = 0; i < res.size(); i++) {
    BYTE b = static_cast<BYTE>(std::stoi(res[i], 0, 16));
    g_rgbPrime[i] = b;
}
BYTE g_rgbGenerator[128] =
{
0x02
};
BOOL fReturn;
HCRYPTPROV hProvParty1 = NULL;
HCRYPTPROV hProvParty2 = NULL;
CRYPT_DATA_BLOB P;
CRYPT_DATA_BLOB G;
HCRYPTKEY hPrivateKey1 = NULL;
HCRYPTKEY hPrivateKey2 = NULL;
PBYTE pbKeyBlob1 = NULL;
PBYTE pbKeyBlob2 = NULL;
HCRYPTKEY hSessionKey1 = NULL;
HCRYPTKEY hSessionKey2 = NULL;
PBYTE pbData = NULL;
/************************
Construct data BLOBs for the prime and generator. The P and G
values, represented by the g_rgbPrime and g_rgbGenerator arrays
respectively, are shared values that have been agreed to by both
parties.
************************/
P.cbData = DHKEYSIZE / 8;
P.pbData = (BYTE*)(g_rgbPrime);
G.cbData = DHKEYSIZE / 8;
G.pbData = (BYTE*)(g_rgbGenerator);
/************************
Create the private Diffie-Hellman key for party 1.
************************/
// Acquire a provider handle for party 1.
fReturn = CryptAcquireContext(
    &hProvParty1,
    NULL,
    MS_ENH_DSS_DH_PROV,
    PROV_DSS_DH,
    CRYPT_VERIFYCONTEXT);
if (!fReturn)
{
    goto ErrorExit;
}

// Create an ephemeral private key for party 1.
fReturn = CryptGenKey(
    hProvParty1,
    CALG_DH_EPHEM,
    DHKEYSIZE << 16 | CRYPT_EXPORTABLE | CRYPT_PREGEN,
    &hPrivateKey1);
if (!fReturn)
{
    goto ErrorExit;
}

// Set the prime for party 1's private key.
fReturn = CryptSetKeyParam(
    hPrivateKey1,
    KP_P,
    (PBYTE)&P,
    0);
if (!fReturn)
{
    std::cout << GetLastError() << endl;
    goto ErrorExit;
}

// Set the generator for party 1's private key.
fReturn = CryptSetKeyParam(
    hPrivateKey1,
    KP_G,
    (PBYTE)&G,
    0);
if (!fReturn)
{
    std::cout << GetLastError() << endl;
    goto ErrorExit;
}
Thanks in advance.
Update 1:
Thanks to @RbMm I was able to set the prime. The problem was with DHKEYSIZE. However, I am now getting an error while setting KP_X. I have updated the code above to reflect the new code.
Here I converted the string into a hex byte array.
The size of the prime KP_P (and the generator KP_G) and the DH key size are hard-connected: cbKey == 8 * cbP must hold. Look, for example, at "Diffie-Hellman Client Code for Creating the Master Key":
there cbP * 8 is used as the key size, where cbP is the size of the prime P. In your link, too, P.cbData = DHKEYSIZE/8;.
Also, instead of hard-coding the size of P (and G) in the code, you can get it at run time:
ULONG dwDataLen;
CryptGetKeyParam(hPrivateKey1, KP_P, 0, &(dwDataLen = 0), 0);
CryptGetKeyParam(hPrivateKey1, KP_G, 0, &(dwDataLen = 0), 0);
and you can confirm that dwDataLen == DHKEYSIZE / 8, where DHKEYSIZE is the key size.
Because you used 512 as the key size, the length of the data for P and G must be 512/8 = 64 bytes, but you passed 256 (for P) and 1 (for G); as a result you get the error.
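
To make the size relationship concrete, here is a minimal sketch (assuming hPrivateKey1 was created with CRYPT_PREGEN and DHKEYSIZE bits as in the question) that queries the CSP for the expected lengths instead of hard-coding them; the prime buffer must hold exactly that many bytes, and the generator blob must be padded to the same length:

// Query the expected parameter sizes for the pregenerated DH key.
DWORD cbP = 0, cbG = 0;
CryptGetKeyParam(hPrivateKey1, KP_P, NULL, &cbP, 0);  // expected prime length
CryptGetKeyParam(hPrivateKey1, KP_G, NULL, &cbG, 0);  // expected generator length

// Both should equal DHKEYSIZE / 8; the blobs passed to CryptSetKeyParam
// must use exactly these byte counts.
CRYPT_DATA_BLOB P = { cbP, g_rgbPrime };      // g_rgbPrime holds cbP bytes
CRYPT_DATA_BLOB G = { cbG, g_rgbGenerator };  // g_rgbGenerator padded to cbG bytes

if (!CryptSetKeyParam(hPrivateKey1, KP_P, (PBYTE)&P, 0) ||
    !CryptSetKeyParam(hPrivateKey1, KP_G, (PBYTE)&G, 0))
{
    std::cout << "CryptSetKeyParam failed: " << GetLastError() << std::endl;
}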