I'm getting a memory leak when calling a .NET DLL from C++ in the function below. I suspect the SafeArray handling, but I don't know how else to release the memory besides SafeArrayDestroyDescriptor. What am I missing?
VARIANT_BOOL SendPack(IDotNetDll* wb, WW_POLL pl)
{
    HRESULT hr = 0;
    VARIANT_BOOL bretval;
    BYTE destination = pl.uDest;
    BYTE raw[2];
    raw[0] = pl.uAPI;
    raw[1] = pl.uOpcode;
    SAFEARRAY* bytes = NULL;
    hr = SafeArrayAllocDescriptor(1, &bytes);
    bytes->cbElements = sizeof(raw[0]);
    bytes->rgsabound[0].cElements = sizeof(raw);
    bytes->rgsabound[0].lLbound = 0;
    bytes->pvData = raw;
    bytes->fFeatures = FADF_AUTO | FADF_FIXEDSIZE;
    wb->SendMessage(destination, (BYTE)4, bytes, VARIANT_FALSE, 200.0, &bretval);
    SafeArrayDestroyDescriptor(bytes);
    return bretval;
}
Edit:
I also tried this method using a variant
VARIANT_BOOL SendPacket(IDotNetDll* wb, WW_POLL pl)
{
    HRESULT hr = 0;
    VARIANT_BOOL bretval;
    _variant_t var;
    void * pArrayData = NULL;
    var.vt = VT_ARRAY | VT_UI1;
    SAFEARRAYBOUND rgsabound1[1];
    rgsabound1[0].cElements = 5;
    rgsabound1[0].lLbound = 0;
    var.parray = SafeArrayCreate(VT_UI1, 1, rgsabound1);
    BYTE destination = pl.uDest;
    SafeArrayAccessData(var.parray, &pArrayData);
    BYTE raw[5];
    raw[0] = pl.uAPI;
    raw[1] = pl.uOpcode;
    raw[2] = pl.uPayLoad[0];
    raw[3] = pl.uPayLoad[1];
    raw[4] = pl.uPayLoad[2];
    memcpy(pArrayData, raw, 5);
    SafeArrayUnaccessData(var.parray);
    //Send packet
    wb->SendMessage(destination,
                    (BYTE)4,
                    var.parray,
                    VARIANT_FALSE,
                    200.0, &bretval);
    var.Clear();
    return bretval;
}
Here is the memory usage from the VS 2015 profiler (screenshot omitted).
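For reference, the usual way to avoid managing a bare descriptor is to let SafeArrayCreateVector allocate the descriptor and the data together, and to release both with SafeArrayDestroy. A minimal sketch, assuming SendMessage only reads the array and does not take ownership of it:

SAFEARRAY* bytes = SafeArrayCreateVector(VT_UI1, 0, sizeof(raw));
if (bytes)
{
    void* pData = NULL;
    if (SUCCEEDED(SafeArrayAccessData(bytes, &pData)))
    {
        memcpy(pData, raw, sizeof(raw));   // copy instead of aliasing the stack buffer
        SafeArrayUnaccessData(bytes);
        wb->SendMessage(destination, (BYTE)4, bytes, VARIANT_FALSE, 200.0, &bretval);
    }
    SafeArrayDestroy(bytes);               // frees descriptor and data together
}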
I don't know why I can't unpack the authentication buffer returned by CredUIPromptForWindowsCredentials with CredUnPackAuthenticationBufferW; I always get an ERROR_INSUFFICIENT_BUFFER error.
I would appreciate your help.
std::wstring caption = L"Caption";
std::wstring msg= L"Msg";
CREDUI_INFOW credui = {};
credui.cbSize = sizeof(credui);
credui.hwndParent = nullptr;
credui.pszMessageText = msg.c_str();
credui.pszCaptionText = caption.c_str();
credui.hbmBanner = nullptr;
ULONG authPackage = 0;
LPVOID outCredBuffer = nullptr;
ULONG outCredSize = 0;
BOOL save = false;
LPWSTR pszUserName = nullptr;
DWORD pcchlMaxUserName = 0;
LPWSTR pszDomainName = nullptr;
DWORD pcchMaxDomainName = 0;
LPWSTR pszPassword = nullptr;
DWORD pcchMaxPassword = 0;
DWORD result = CredUIPromptForWindowsCredentialsW(&credui,
                                                  0,
                                                  &authPackage,
                                                  nullptr,
                                                  0,
                                                  &outCredBuffer,
                                                  &outCredSize,
                                                  &save,
                                                  CREDUIWIN_ENUMERATE_ADMINS);
std::cout << CredUnPackAuthenticationBufferW(CRED_PACK_PROTECTED_CREDENTIALS,
                                             outCredBuffer,
                                             outCredSize,
                                             pszUserName,
                                             &pcchlMaxUserName,
                                             pszDomainName,
                                             &pcchMaxDomainName,
                                             pszPassword,
                                             &pcchMaxPassword) << std::endl;
std::cout << GetLastError() << std::endl; // output: 122 == ERROR_INSUFFICIENT_BUFFER
This is a typical WinAPI pattern: the API must return some information in a memory buffer, but instead of allocating the buffer itself, it obliges the caller to allocate it.
So the caller must allocate the buffer itself and pass its pointer and size to the API.
The API checks the buffer size: if it is large enough, it fills the buffer with the information; otherwise it returns ERROR_INSUFFICIENT_BUFFER (assuming no other errors) or sometimes ERROR_MORE_DATA. Which of the two errors a given call returns is usually documented directly for that API. The difference between them: ERROR_INSUFFICIENT_BUFFER means nothing at all was written to the buffer, while ERROR_MORE_DATA means some data was returned, but it is incomplete.
In this case the API returns the required buffer size to the caller via some out parameter. Frequently this is done through the same in/out parameter, a pointer to a DWORD: on input it specifies the size of the caller-allocated buffer, and on output the required buffer size or the size of the returned data.
Frequently the required buffer size is unknown at the start, so we either call the API with zero-size buffer(s) first, or allocate some supposedly sufficient buffer. If the buffer turns out to be insufficient, we reallocate or extend it and call the API again. For some APIs (like CredUnPackAuthenticationBufferW) the required output buffer size does not change over time (if the input parameters do not change), but in general it may change between calls; even a second call with the buffer size returned by the first call can fail with a buffer-size error (because the returned data may grow between calls). In that case the API must be called in a do/while (error == ERROR_INSUFFICIENT_BUFFER or ERROR_MORE_DATA) loop. But even when the required size does not change over time, it is better to do this in a loop with a single API call inside rather than with two separate calls.
For this concrete case the code can look like this:
ULONG cred()
{
    CREDUI_INFO ci = { sizeof(ci) };
    BOOL bSave = FALSE;
    PVOID pvOutAuthBuffer;
    ULONG ulOutAuthBufferSize;
    ULONG ulAuthPackage = 0;
    ULONG dwError = CredUIPromptForWindowsCredentials(
        &ci, NOERROR, &ulAuthPackage, 0, 0,
        &pvOutAuthBuffer, &ulOutAuthBufferSize,
        &bSave, CREDUIWIN_ENUMERATE_ADMINS);
    if (dwError == NOERROR)
    {
        ULONG cchUserName = 0;
        ULONG cchPassword = 0;
        ULONG cchDomain = 0;
        static volatile UCHAR guz = 0;
        PWSTR stack = (PWSTR)alloca(guz);
        PWSTR szUserName = 0, szPassword = 0, szDomainName = 0;
        ULONG cchNeed, cchAllocated = 0;
        do
        {
            // grow the stack allocation only when the required total
            // exceeds what has been allocated so far
            if (cchAllocated < (cchNeed = cchUserName + cchPassword + cchDomain))
            {
                szUserName = (PWSTR)alloca((cchNeed - cchAllocated) * sizeof(WCHAR));
                cchAllocated = (ULONG)(stack - szUserName);
                szPassword = szUserName + cchUserName;
                szDomainName = szPassword + cchPassword;
            }
            dwError = CredUnPackAuthenticationBuffer(
                CRED_PACK_PROTECTED_CREDENTIALS,
                pvOutAuthBuffer, ulOutAuthBufferSize,
                szUserName, &cchUserName,
                szDomainName, &cchDomain,
                szPassword, &cchPassword)
                ? NOERROR : GetLastError();
            if (dwError == NOERROR)
            {
                DbgPrint("%S#%S %S\n", szDomainName, szUserName, szPassword);
                break;
            }
        } while (dwError == ERROR_INSUFFICIENT_BUFFER);
        CoTaskMemFree(pvOutAuthBuffer);
    }
    return dwError;
}
@RbMm - you're right! I tested it with LogonUser, and it works perfectly. Thanks.
And here is the ready solution I ended up with:
bool Authenticate_ADMIN_User(std::wstring caption, std::wstring msg, int maxReAsks = 0)
{
    CREDUI_INFOW credui = {};
    credui.cbSize = sizeof(credui);
    credui.hwndParent = nullptr;
    credui.pszMessageText = msg.c_str();
    credui.pszCaptionText = caption.c_str();
    credui.hbmBanner = nullptr;
    ULONG authPackage = 0,
          outCredSize = 0;
    LPVOID outCredBuffer = nullptr;
    BOOL save = false;
    DWORD err = 0;
    int tries = 0;
    bool reAsk = false;
    do
    {
        tries++;
        if (CredUIPromptForWindowsCredentialsW(&credui,
                                               err,
                                               &authPackage,
                                               nullptr,
                                               0,
                                               &outCredBuffer,
                                               &outCredSize,
                                               &save,
                                               CREDUIWIN_ENUMERATE_ADMINS)
            != ERROR_SUCCESS)
            return false;
        ULONG cchUserName = 0;
        ULONG cchPassword = 0;
        ULONG cchDomain = 0;
        ULONG cchNeed, cchAllocated = 0;
        static volatile UCHAR guz = 0;
        PWSTR stack = (PWSTR)alloca(guz);
        PWSTR szUserName = nullptr, szPassword = nullptr, szDomainName = nullptr;
        BOOL ret;
        do {
            if (cchAllocated < (cchNeed = cchUserName + cchPassword + cchDomain))
            {
                szUserName = (PWSTR)alloca((cchNeed - cchAllocated) * sizeof(WCHAR));
                cchAllocated = (ULONG)(stack - szUserName);
                szPassword = szUserName + cchUserName;
                szDomainName = szPassword + cchPassword;
            }
            ret = CredUnPackAuthenticationBuffer(
                CRED_PACK_PROTECTED_CREDENTIALS, outCredBuffer, outCredSize,
                szUserName, &cchUserName,
                szDomainName, &cchDomain,
                szPassword, &cchPassword);
        } while (!ret && GetLastError() == ERROR_INSUFFICIENT_BUFFER);
        SecureZeroMemory(outCredBuffer, outCredSize);
        CoTaskMemFree(outCredBuffer);
        HANDLE handle = nullptr;
        if (LogonUser(szUserName,
                      szDomainName,
                      szPassword,
                      LOGON32_LOGON_INTERACTIVE,
                      LOGON32_PROVIDER_DEFAULT,
                      &handle))
        {
            CloseHandle(handle);
            return true;
        }
        else
        {
            err = ERROR_LOGON_FAILURE;
            reAsk = true;
        }
    } while (reAsk && tries < maxReAsks);
    return false;
}
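Usage is then straightforward, for example (caption and message strings are illustrative):

if (Authenticate_ADMIN_User(L"My App", L"Administrator credentials are required", 3))
{
    // the entered credentials were accepted by LogonUser
}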
I have the byte array of a .NET application inside C++ code.
I want to execute this .NET application without writing those bytes to disk. ICLRRuntimeHost::ExecuteInDefaultAppDomain expects a path to the assembly, so it's out of the equation here. I'm looking for a possible way (or hack) to pass the binary directly to the CLR.
So what can I do? The code below loads the runtime through the hosting API, copies the image into a SAFEARRAY of bytes, and hands it to _AppDomain::Load_3:
//todo error checks/cleanup
HRESULT hr;
ICLRMetaHost *pMetaHost = NULL;
ICLRRuntimeInfo *pRuntimeInfo = NULL;
ICorRuntimeHost *pCorRuntimeHost = NULL;
IUnknownPtr spAppDomainThunk = NULL;
_AppDomainPtr spDefaultAppDomain = NULL;
bstr_t bstrAssemblyName(L"");
_AssemblyPtr spAssembly = NULL;
bstr_t bstrClassName(L"");
_TypePtr spType = NULL;
variant_t vtEmpty;
bstr_t bstrStaticMethodName(L"Main");
variant_t vtLengthRet;
hr = CLRCreateInstance(CLSID_CLRMetaHost, IID_PPV_ARGS(&pMetaHost));
const wchar_t* pszVersion = L"v2.0.50727";
hr = pMetaHost->GetRuntime(pszVersion, IID_PPV_ARGS(&pRuntimeInfo));
BOOL fLoadable;
hr = pRuntimeInfo->IsLoadable(&fLoadable);
if (!fLoadable) { wprintf(L".NET runtime %s cannot be loaded\n", pszVersion); return; }
hr = pRuntimeInfo->GetInterface(CLSID_CorRuntimeHost, IID_PPV_ARGS(&pCorRuntimeHost));
hr = pCorRuntimeHost->Start();
hr = pCorRuntimeHost->GetDefaultDomain(&spAppDomainThunk);
hr = spAppDomainThunk->QueryInterface(IID_PPV_ARGS(&spDefaultAppDomain));
// bytearray/array_len: the caller's in-memory assembly image
SAFEARRAYBOUND bounds[1];
bounds[0].cElements = array_len;
bounds[0].lLbound = 0;
SAFEARRAY* arr = SafeArrayCreate(VT_UI1, 1, bounds);
SafeArrayLock(arr);
memcpy(arr->pvData, bytearray, array_len);
SafeArrayUnlock(arr);
// Load_3 takes the raw image as a byte array - no file on disk needed
hr = spDefaultAppDomain->Load_3(arr, &spAssembly);
hr = spAssembly->GetType_2(bstrClassName, &spType);
hr = spType->InvokeMember_3(bstrStaticMethodName, static_cast<BindingFlags>(BindingFlags_InvokeMethod | BindingFlags_Static | BindingFlags_Public), NULL, vtEmpty, nullptr, &vtLengthRet);
SafeArrayDestroy(arr);
Copying an ID3D11Texture2D to a byte array is causing an out-of-memory error (C++)
The copy happens every frame and runs for around 5 seconds until I get the memory error, which occurs either on
HRESULT hr = device->CreateTexture2D(&description, NULL, &texTemp);
resulting in GetImageData returning NULL, or when a new byte array is created at
BYTE* dest = new BYTE[(*nWidth)*(*nHeight) * 4];
giving:
Unhandled exception at 0x76414DBD in NDIlib_Send_Video.x86.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0107F5D8
I'm not sure which memory I should be releasing - probably the byte arrays. I've tried delete[] aByteArray, but this causes a crash.
Here's my function for doing the copy,
BYTE* GetImageData(ID3D11Device* device, ID3D11DeviceContext* context, ID3D11Texture2D* texture, /*OUT*/ int* nWidth, /*OUT*/ int* nHeight)
{
    if (texture) {
        D3D11_TEXTURE2D_DESC description;
        texture->GetDesc(&description);
        description.BindFlags = 0;
        description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
        description.Usage = D3D11_USAGE_STAGING;
        ID3D11Texture2D* texTemp = NULL;
        HRESULT hr = device->CreateTexture2D(&description, NULL, &texTemp);
        if (FAILED(hr))
        {
            if (hr == E_OUTOFMEMORY) {
                printf("GetImageData - CreateTexture2D - OUT OF MEMORY \n");
            }
            if (texTemp)
            {
                texTemp->Release();
                texTemp = NULL;
            }
            return NULL;
        }
        context->CopyResource(texTemp, texture);
        D3D11_MAPPED_SUBRESOURCE mapped;
        unsigned int subresource = 0;
        hr = context->Map(texTemp, 0, D3D11_MAP_READ, 0, &mapped);
        if (FAILED(hr))
        {
            printf("GetImageData - Map - FAILED \n");
            texTemp->Release();
            texTemp = NULL;
            return NULL;
        }
        *nWidth = description.Width;
        *nHeight = description.Height;
        const int pitch = mapped.RowPitch;
        BYTE* source = (BYTE*)(mapped.pData);
        BYTE* dest = new BYTE[(*nWidth)*(*nHeight) * 4];
        BYTE* destTemp = dest;
        for (int i = 0; i < *nHeight; ++i)
        {
            memcpy(destTemp, source, *nWidth * 4);
            source += pitch;
            destTemp += *nWidth * 4;
        }
        context->Unmap(texTemp, 0);
        return dest;
    }
    else {
        printf("GetImageData - texture null - FAILED \n");
        return NULL;
    }
}
And here's how I'm using GetImageData from my main function
BYTE* p_frame = (BYTE*)GetImageData(g_d3dDevice, g_d3dDeviceContext, g_pDeskTexture, &w, &h);
if (p_frame == NULL) printf("GetImageData - Returned NULL \n");
else {
printf("GetImageData - OK \n");
}
Where do you delete dest (or, after it is returned, p_frame)? You are allocating data with new and returning the pointer; if you never delete it you will eventually run out of memory, especially when calling this every frame.
You should also consider some form of memory management and define ownership of what's being allocated; passing back raw pointers is a prime suspect for memory leaks.
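One way to make that ownership explicit is to wrap the returned buffer in a smart pointer at the call site; a sketch, assuming GetImageData keeps returning a buffer allocated with new BYTE[]:

#include <memory>

std::unique_ptr<BYTE[]> frame(GetImageData(g_d3dDevice, g_d3dDeviceContext,
                                           g_pDeskTexture, &w, &h));
if (!frame)
    printf("GetImageData - Returned NULL \n");
// ... use frame.get() ...
// delete[] happens automatically when 'frame' goes out of scope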
We have an old C++ COM .dll that a customer calls from their C++ client.
We are trying to replace our old .dll with a new COM registered one written in .NET.
The C++ client can call our new .dll but crashes after a particular method call to us.
It seems we are returning something wrong in our variable "outNoOfChildren" or in the outChildList array below.
(The error message in the client reads: "Unknown exception caught. Number of childs Measured: -2147467259 Limits: = 2"; it expects the 2 we are trying to return.)
The strange thing is that the method call in our new .NET .dll works from another test client we have (VB6).
This is the function in the C++ .dll that we are trying to replace:
STDMETHODIMP marcomBase::XgetChildren(VARIANT inUser,
                                      VARIANT inId,
                                      VARIANT *outNoOfChildren,
                                      VARIANT *outChildList,
                                      VARIANT *outErrMsg,
                                      VARIANT *status)
{
    long stat = 1;
    char user[255+1];
    char id[255+1];
    long noOfChildren = 0;
    char errMsg[255+1] = "";
    _bstr_t temp_bstr;
    long inparamType;
    long dumInt;
    SAFEARRAY *pArray;
    long ix[2];
    VARIANT var;
    SAFEARRAYBOUND rgsabound[2];
    ChildT childList;
    ChildT *childListTemp = NULL;
    ChildT *childListTempNext = NULL;
    childList.nextChild = NULL;
    getInParam(inUser, &inparamType, user, &dumInt);
    if (inparamType != VT_BSTR)
    {
        strcpy(errMsg, "Parameter 1 incorrect, type must be VT_BSTR");
        stat = 0;
    }
    if (stat == 1)
    {
        getInParam(inId, &inparamType, id, &dumInt);
        if (inparamType != VT_BSTR)
        {
            strcpy(errMsg, "Parameter 2 incorrect, type must be VT_BSTR");
            stat = 0;
        }
    }
    if (stat == 1)
    {
        stat = barApiObj.getChildren(user, id, &noOfChildren, childList, errMsg);
    }
    outNoOfChildren->vt = VT_I4;
    outNoOfChildren->lVal = noOfChildren;
    // 1-based, noOfChildren x 3 two-dimensional array of VARIANT BSTRs
    rgsabound[0].lLbound = 1;
    rgsabound[0].cElements = noOfChildren;
    rgsabound[1].lLbound = 1;
    rgsabound[1].cElements = 3;
    pArray = ::SafeArrayCreate(VT_VARIANT, 2, rgsabound);
    outChildList->vt = VT_ARRAY|VT_VARIANT;
    outChildList->parray = pArray;
    ix[0] = 1;
    childListTemp = childList.nextChild;
    while (childListTemp != NULL)
    {
        temp_bstr = childListTemp->child;
        var.vt = VT_BSTR;
        var.bstrVal = temp_bstr.copy();
        ix[1] = 1;
        ::SafeArrayPutElement(pArray, ix, &var);
        temp_bstr = childListTemp->prodNo;
        var.vt = VT_BSTR;
        var.bstrVal = temp_bstr.copy();
        ix[1] = 2;
        ::SafeArrayPutElement(pArray, ix, &var);
        temp_bstr = childListTemp->rev;
        var.vt = VT_BSTR;
        var.bstrVal = temp_bstr.copy();
        ix[1] = 3;
        ::SafeArrayPutElement(pArray, ix, &var);
        ix[0] = ix[0] + 1;
        childListTempNext = childListTemp->nextChild;
        delete childListTemp;
        childListTemp = childListTempNext;
    }
    temp_bstr = errMsg;
    outErrMsg->vt = VT_BSTR;
    outErrMsg->bstrVal = temp_bstr.copy();
    status->vt = VT_I4;
    status->lVal = stat;
    return S_OK;
}
This is the .NET .dll that we have written:
.NET Interface:
...
[DispId(12)]
[Description("method XgetChildren")]
object XgetChildren(object inUser, object inId, out object outNoOfChildren, out object outChildList, out object outErrMsg);
...
.NET Class:
public object XgetChildren(object inUser, object inId, out object outNoOfChildren, out object outChildList, out object outErrMsg)
{
    outNoOfChildren = 0;
    outChildList = null;
    outErrMsg = "";
    List<IndividualInfo> children = null;
    try
    {
        PrevasMesExternalServiceClient client = getClient();
        children = client.GetChildren(new SerialNo() { Number = inId.ToString() });
    }
    catch (Exception ex)
    {
        return Constants.STATUS_FAIL;
    }
    int[] myLengthsArray = { children.Count, 3 }; // length of each dimension
    int[] myBoundsArray = { 1, 1 };               // start index of each dimension
    Array myArray = Array.CreateInstance(typeof(string), myLengthsArray, myBoundsArray); // 1-based 2D array
    for (int i = 0; i < children.Count; i++)
    {
        myArray.SetValue(Utils.getStrValue(children[i].SerialNumber, Constants.pmesIDNO_LENGTH), i+1, 1);
        myArray.SetValue(Utils.getStrValue(children[i].ProductNumber, Constants.pmesPRODUCTNO_LENGTH), i+1, 2);
        myArray.SetValue(Utils.getStrValue(children[i].Revision, Constants.pmesRSTATE_LENGTH), i+1, 3);
    }
    outChildList = myArray;
    outNoOfChildren = children.Count;
    return Constants.STATUS_OK;
}
Update:
The original C++ .dll looks like this in OLE/COM Object Viewer:
[id(0x0000000c), helpstring("method XgetChildren")]
HRESULT XgetChildren(
[in] VARIANT inUser,
[in] VARIANT inId,
[out] VARIANT* outNoOfChildren,
[out] VARIANT* outChildList,
[out] VARIANT* outErrMsg,
[out, retval] VARIANT* status);
After registering our new .NET .dll it looks like this in OLE/COM Object Viewer:
[id(0x0000000c), helpstring("method ")]
HRESULT XgetChildren(
[in] VARIANT inUser,
[in] VARIANT inId,
[out] VARIANT* outNoOfChildren,
[out] VARIANT* outChildList,
[out] VARIANT* outErrMsg,
[out, retval] VARIANT* pRetVal);
Update 2: I'm beginning to suspect the error is not in "outNoOfChildren" but in the array "outChildList". I have updated the question and modified the code samples accordingly.
Any ideas much appreciated!
Measured: -2147467259 Limits: = 2"
That's a magic number. Convert it to hex and you get 0x80004005. That's an error code, the infamous E_FAIL or "Unspecified error". So somewhere in the dots that we can't see, you are interpreting an error return value as a number - possibly a VARIANT of type VT_ERROR whose variant type you are ignoring. That's all that can be concluded from the question; review the error handling in your code.
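A quick sanity check on the receiving side, sketched here with a hypothetical varNoOfChildren naming the VARIANT the client gets back for outNoOfChildren:

// If the callee returned an error, the VARIANT's type is VT_ERROR and the
// HRESULT lives in scode - reading lVal as a count yields -2147467259.
if (varNoOfChildren.vt == VT_ERROR)
{
    HRESULT hr = varNoOfChildren.scode;       // 0x80004005 == E_FAIL
    // handle the failure instead of using it as a child count
}
else if (varNoOfChildren.vt == VT_I4)
{
    long noOfChildren = varNoOfChildren.lVal; // the real count
}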
I have a piece of HTML source like this:
<FONT color=#5a6571>Beverly Mitchell</FONT> <FONT color=#5a6571>Shawnee Smith</FONT> <FONT color=#5a6571>Glenn Plummer</FONT> <NOBR>more >></NOBR>
I tried to retrieve the "color" value, like this:
MSHTML::IHTMLDocument2Ptr htmDoc1 = NULL;
SAFEARRAY *psaStrings1 = SafeArrayCreateVector(VT_VARIANT, 0, 1);
CoCreateInstance(CLSID_HTMLDocument, NULL, CLSCTX_INPROC_SERVER, IID_IHTMLDocument2, (void**) &htmDoc1);
VARIANT *param1 = NULL;
HRESULT hr = SafeArrayAccessData(psaStrings1, (LPVOID*)&param1);
param1->vt = VT_BSTR;
param1->bstrVal = SysAllocString(varSrc1.bstrVal);
hr = SafeArrayUnaccessData(psaStrings1);
hr = htmDoc1->write(psaStrings1);
MSHTML::IHTMLElementPtr pElemBody1 = NULL;
MSHTML::IHTMLDOMNodePtr pHTMLBodyDOMNode1 = NULL;
hr = htmDoc1->get_body(&pElemBody1);
if (SUCCEEDED(hr))
{
    hr = pElemBody1->QueryInterface(IID_IHTMLDOMNode, (void**)&pHTMLBodyDOMNode1);
    if (SUCCEEDED(hr))
    {
        ProcessDomNodeSmartWrapper(pHTMLBodyDOMNode1, ProcTgtTagStrVec);
    }
}
long lLength = 0;
MSHTML::IHTMLElementCollectionPtr pElemColl1 = NULL;
MSHTML::IHTMLElementPtr pChElem1 = NULL;
MSHTML::IHTMLStylePtr pStyle1 = NULL;
IDispatchPtr ppvdisp1 = NULL;
hr = htmDoc1->get_all(&pElemColl1);
hr = pElemColl1->get_length(&lLength);
for (long i = 0; i < lLength; i++)
{
    _variant_t name(i);
    _variant_t index(i);
    ppvdisp1 = pElemColl1->item(name, index);
    if (ppvdisp1 && SUCCEEDED(hr))
    {
        hr = ppvdisp1->QueryInterface(IID_IHTMLElement, (void **)&pChElem1);
        if (pChElem1 && SUCCEEDED(hr))
        {
            BSTR bstrTagName = NULL;
            pChElem1->get_tagName(&bstrTagName);
            hr = pChElem1->get_style(&pStyle1);
            if (pStyle1 && SUCCEEDED(hr))
            {
                _variant_t varFtCol;
                hr = pStyle1->get_color(&varFtCol);
                if (hr = S_OK && varFtCol)
                {
                    hmStyles1[wstring(varFtCol.bstrVal)] = L"FontColor";
                }
            }
            if (bstrTagName)
                SysFreeString(bstrTagName);
        } // if pChElem1 && SUCCEEDED(hr)
    } // if ppvdisp1 && SUCCEEDED(hr)
} // for
But I can never get the "color" value - varFtCol.bstrVal is a bad pointer when I debug the program. This is what varFtCol shows in the debugger:
- varFtCol {???} _variant_t
- tagVARIANT BSTR = 0x00000000 tagVARIANT
vt 8 unsigned short
- BSTR 0x00000000 wchar_t *
CXX0030: Error: expression cannot be evaluated
#5a6571 is a hex color representing the RGB value (90, 101, 113).
How can I get this color info?
You shouldn't be getting the style from pChElem1, because in your case the color is not part of the style; it is an attribute of the FONT element.
Instead you must call pChElem1->getAttribute("color", ...), which will return #5a6571.
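A sketch of that call, assuming the #import-generated wrapper (which returns a _variant_t; the raw interface takes a VARIANT* out-parameter instead):

_variant_t varColor = pChElem1->getAttribute(_bstr_t(L"color"), 0);
if (varColor.vt == VT_BSTR && varColor.bstrVal)
{
    // for the FONT elements above this yields L"#5a6571"
    hmStyles1[std::wstring(varColor.bstrVal)] = L"FontColor";
}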
The following code is MFC, but you can easily convert it to regular Win32 if you are not using MFC.
COLORREF GetColorFromHexString(CString szColor)
{
    TCHAR *szScan;
    CString strTemp;
    CString strColor = szColor;
    long lRR = 0, lGG = 0, lBB = 0;
    // first remove the # characters that come from the document
    strColor.TrimLeft(_T('#'));
    strColor.TrimRight(_T('#'));
    // it should be of the form RRGGBB
    if (strColor.GetLength() == 6) {
        // get the red component from the hexadecimal string
        strTemp = strColor.Left(2);
        lRR = _tcstol(LPCTSTR(strTemp), &szScan, 16);
        // get the green component
        strTemp = strColor.Mid(2, 2);
        lGG = _tcstol(LPCTSTR(strTemp), &szScan, 16);
        // get the blue component
        strTemp = strColor.Right(2);
        lBB = _tcstol(LPCTSTR(strTemp), &szScan, 16);
    }
    return RGB(lRR, lGG, lBB);
}
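For example:

COLORREF cr = GetColorFromHexString(_T("#5a6571"));
// cr == RGB(90, 101, 113)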
According to the MSDN documentation, IHTMLStyle::get_color may return either a BSTR or an integer value in the variant. Have you tried assigning varFtCol to an integer value and examining that result?
const int colorValue = static_cast<int>(varFtCol);
As a recommendation, when working with _variant_t it is usually best to use the built-in casting operators rather than accessing the members of the union directly.
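A sketch of that, testing the variant's type first and letting the cast operators do the conversion:

if (varFtCol.vt == VT_BSTR)
{
    const _bstr_t color(varFtCol);                     // e.g. "#5a6571"
}
else if (varFtCol.vt == VT_I4)
{
    const int colorValue = static_cast<int>(varFtCol); // numeric color value
}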