I know this topic is quite popular; however, I'd like to solve a simple problem. My goal is to create a DLL in C++ that can import and export data matrices from VB (Excel or other programs). I have read in some blogs that a common way to do this is to use SAFEARRAYs.
So, I created the following C++ code:
VARIANT _stdcall ReadArrayVBA_v1(VARIANT *Input0) {
    double dm = 0, rm = 0;
    //OUTPUT DATA DEFINITION
    VARIANT v;
    SAFEARRAYBOUND Dim[1];
    Dim[0].lLbound = 0; Dim[0].cElements = 4;
    v.vt = VT_R8;
    v.parray = SafeArrayCreate(VT_R8, 1, Dim);
    //==============================================
    SAFEARRAY* pSafeArrayInput0 = NULL; //Define a pointer of SAFEARRAY type
    pSafeArrayInput0 = *V_ARRAYREF(Input0);
    long lLBound = -1, lUBound = 1; //Preset dimensions
    SafeArrayGetLBound(pSafeArrayInput0, 1, &lLBound);
    SafeArrayGetUBound(pSafeArrayInput0, 1, &lUBound);
    long CntElements = lUBound - lLBound + 1;
    long Index[1];
    for (int i = 0; i < CntElements; i++) {
        Index[0] = i;
        SafeArrayGetElement(pSafeArrayInput0, Index, &dm);
        rm = dm + 1;
        SafeArrayPutElement(v.parray, Index, &rm);
    }
    return v;
}
It compiles, and I call it from Excel VBA in the following manner:
Private Declare Function ReadArrayVBA_v1 Lib "[PATH]\VARIANTtest.dll" (ByRef Input0 As Variant) As Variant
Sub VBACall()
    Dim InputR0(1 To 4) As Variant
    Dim vResult1 As Variant
    InputR0(1) = 2
    InputR0(2) = 20
    InputR0(3) = 200
    InputR0(4) = 240
    vResult1 = ReadArrayVBA_v1(InputR0)
End Sub
The function gives me back values such as 1.2E-305 and similar. Why?
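For reference, here is a minimal sketch of the output half of that technique (my own illustration, not taken from the blogs mentioned above; the export name MakeDoubleArray is made up). The key detail is that a VARIANT holding a SAFEARRAY of doubles is tagged VT_ARRAY | VT_R8:

#include <windows.h>

// Hypothetical export: build a 1-D SAFEARRAY of 4 doubles and return it in a VARIANT.
extern "C" VARIANT __stdcall MakeDoubleArray()
{
    VARIANT v;
    VariantInit(&v);

    SAFEARRAYBOUND bound = { 4, 0 };                 // cElements = 4, lLbound = 0
    SAFEARRAY* psa = SafeArrayCreate(VT_R8, 1, &bound);

    for (LONG i = 0; i < 4; ++i) {
        double value = (i + 1) * 10.0;               // sample data
        SafeArrayPutElement(psa, &i, &value);
    }

    v.vt = VT_ARRAY | VT_R8;                         // array of doubles, not plain VT_R8
    v.parray = psa;
    return v;
}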
I'm using the OLE DB bulk copy operation against a SQL Server database, but I'm having trouble loading data into bit columns: they are always populated with true!
I created a simple reproduction program from Microsoft's sample code with the snippet that I adjusted below. My program includes a little SQL script to create the destination table. I had to download and install the x64 version of the SQL Server OLE DB driver to build this.
// Set up custom bindings.
oneBinding.dwPart = DBPART_VALUE | DBPART_LENGTH | DBPART_STATUS;
oneBinding.iOrdinal = 1;
oneBinding.pTypeInfo = NULL;
oneBinding.obValue = ulOffset + offsetof(COLUMNDATA, bData);
oneBinding.obLength = ulOffset + offsetof(COLUMNDATA, dwLength);
oneBinding.obStatus = ulOffset + offsetof(COLUMNDATA, dwStatus);
oneBinding.cbMaxLen = 1; // Size of varchar column.
oneBinding.pTypeInfo = NULL;
oneBinding.pObject = NULL;
oneBinding.pBindExt = NULL;
oneBinding.dwFlags = 0;
oneBinding.eParamIO = DBPARAMIO_NOTPARAM;
oneBinding.dwMemOwner = DBMEMOWNER_CLIENTOWNED;
oneBinding.bPrecision = 0;
oneBinding.bScale = 0;
oneBinding.wType = DBTYPE_BOOL;
ulOffset = oneBinding.cbMaxLen + offsetof(COLUMNDATA, bData);
ulOffset = ROUND_UP(ulOffset, COLUMN_ALIGNVAL);
if (FAILED(hr = pIFastLoad->QueryInterface(IID_IAccessor, (void**)&pIAccessor)))
    return hr;

if (FAILED(hr = pIAccessor->CreateAccessor(DBACCESSOR_ROWDATA,
                                           1,
                                           &oneBinding,
                                           ulOffset,
                                           &hAccessor,
                                           &oneStatus)))
    return hr;

// Set up memory buffer.
pData = new BYTE[40];
if (!pData) {
    hr = E_FAIL;
    goto cleanup;
}
pcolData = (COLUMNDATA*)pData;
pcolData->dwLength = 1;
pcolData->dwStatus = 0;
for (i = 0; i < 10; i++)
{
    if (i % 2 == 0)
    {
        pcolData->bData[0] = 0x00;
    }
    else
    {
        pcolData->bData[0] = 0xFF;
    }
    if (FAILED(hr = pIFastLoad->InsertRow(hAccessor, pData)))
        goto cleanup;
}
It's entirely likely that I'm putting the wrong value into the buffer, or have some other constant value incorrect. I did find an article describing the safety of various data type conversions and it looks like byte to bool is safe... but how would the buffer know what kind of data I'm putting in there if it's just a byte array?
Figured this out: I had not correctly switched the demo over from loading strings to loading fixed-width values. For strings, the data blob gets a pointer to the value, whereas fixed-width values get the actual data inline.
So my COLUMNDATA struct now looks like this:
// How to lay out each column in memory.
struct COLUMNDATA {
    DBLENGTH dwLength;   // Length of data (not space allocated).
    DWORD dwStatus;      // Status of column.
    VARIANT_BOOL bData;  // Value, or if a string, a pointer to the value.
};
With the relevant length fix here:
pcolData = (COLUMNDATA*)pData;
pcolData->dwLength = sizeof(VARIANT_BOOL); // using a short.. make it two
pcolData->dwStatus = DBSTATUS_S_OK; // Indicates that the data value is to be used, not null
And the little value-setting for loop looks like this:
for (i = 0; i < 10; i++)
{
    if (i % 2 == 0)
    {
        pcolData->bData = VARIANT_TRUE;
    }
    else
    {
        pcolData->bData = VARIANT_FALSE;
    }
    if (FAILED(hr = pIFastLoad->InsertRow(hAccessor, pData)))
        goto cleanup;
}
I've updated the repository with the working code. I was inspired to make this change after reading the documentation for the obValue property.
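For completeness, here is how the binding offsets set up at the top of the original snippet line up with this struct (a sketch; setting cbMaxLen to sizeof(VARIANT_BOOL) is my assumption for the fixed-width binding, not something taken from the repository):

// Offsets into COLUMNDATA for the single-column accessor (ulOffset is 0 here).
oneBinding.obLength = ulOffset + offsetof(COLUMNDATA, dwLength);
oneBinding.obStatus = ulOffset + offsetof(COLUMNDATA, dwStatus);
oneBinding.obValue  = ulOffset + offsetof(COLUMNDATA, bData);
oneBinding.wType    = DBTYPE_BOOL;              // tells the provider how to interpret bData
oneBinding.cbMaxLen = sizeof(VARIANT_BOOL);     // 2 bytes now, not the old varchar length of 1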
I'm using the OpenSSL library to implement an ETSI standard and, more specifically, to communicate with a PKI.
To do that I must use the ECIES encryption scheme, but it isn't implemented in OpenSSL.
I have found this piece of code here, in the Crypto++ Google group:
int curve_id = EC_GROUP_get_curve_name(EC_KEY_get0_group((EC_KEY*)m_pPrivKey));
EC_KEY* temp_key = EC_KEY_new_by_curve_name(curve_id);
size_t uPubLen = i2o_ECPublicKey((EC_KEY*)m_pPrivKey, NULL);
o2i_ECPublicKey(&temp_key, (const byte**)&pCiphertext, uPubLen); // warning: this moves the pCiphertext pointer
uCiphertextSize -= uPubLen;
size_t SecLen = (EC_GROUP_get_degree(EC_KEY_get0_group((EC_KEY*)m_pPrivKey)) + 7) / 8;
byte* pSec = new byte[SecLen];
int ret = ECDH_compute_key(pSec, SecLen, EC_KEY_get0_public_key(temp_key), (EC_KEY*)m_pPrivKey, NULL);
ASSERT(ret == SecLen);
EC_KEY_free(temp_key);
CHashFunction GenFx(CHashFunction::eSHA1); // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
uPlaintextSize = (uCiphertextSize > GenFx.GetSize()) ? (uCiphertextSize - GenFx.GetSize()) : 0;
int mac_key_len = 16;
int GenLen = uPlaintextSize + mac_key_len;
uint32 counter = 1;
CBuffer GenHash;
while(GenHash.GetSize() < GenLen)
{
    GenFx.Add(pSec, SecLen);
    CBuffer Buff;
    Buff.WriteValue<uint32>(counter++, true);
    GenFx.Add(&Buff);
    GenFx.Finish();
    GenHash.AppendData(GenFx.GetKey(), GenFx.GetSize());
    GenFx.Reset();
}
GenHash.SetSize(GenLen); // truncate
delete[] pSec;
byte* key = GenHash.GetBuffer();
byte* macKey = key + uPlaintextSize;
unsigned char* result;
size_t mac_len = uCiphertextSize - uPlaintextSize;
ASSERT(mac_len == 20);
byte* mac_result = new byte[mac_len];
HMAC_CTX ctx;
HMAC_CTX_init(&ctx);
HMAC_Init_ex(&ctx, macKey, mac_key_len, EVP_sha1(), NULL);
HMAC_Update(&ctx, pCiphertext, uPlaintextSize);
HMAC_Final(&ctx, mac_result, &mac_len);
HMAC_CTX_cleanup(&ctx);
Ret = memcmp(pCiphertext + uPlaintextSize, mac_result, mac_len) == 0 ? 1 : 0;
delete[] mac_result;
ASSERT(pPlaintext == NULL);
pPlaintext = new byte[uPlaintextSize];
for(int i = 0; i < uPlaintextSize; i++)
    pPlaintext[i] = pCiphertext[i] ^ key[i];
But I am not sure that it works correctly, and I don't know how to use this piece of code.
Has anyone already implemented this scheme?
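One build detail worth noting if you try this snippet: the stack-allocated HMAC_CTX with HMAC_CTX_init / HMAC_CTX_cleanup only exists in OpenSSL 1.0.x and earlier; from OpenSSL 1.1.0 the context is opaque and must be heap-allocated. A roughly equivalent MAC check for newer OpenSSL (a sketch reusing the names from the snippet) would be:

// OpenSSL 1.1.0+: HMAC_CTX is opaque; allocate and free it via the API.
unsigned int out_len = 0;
HMAC_CTX* ctx = HMAC_CTX_new();
HMAC_Init_ex(ctx, macKey, mac_key_len, EVP_sha1(), NULL);
HMAC_Update(ctx, pCiphertext, uPlaintextSize);
HMAC_Final(ctx, mac_result, &out_len);
HMAC_CTX_free(ctx);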
I managed to create a VHD and attach it. Afterwards, I created a disk (IOCTL_DISK_CREATE_DISK) and set its layout using IOCTL_DISK_SET_DRIVE_LAYOUT_EX. Now, when I examine the disk through Disk Management, I have a 14 MB disk with a 7 MB partition, as expected.
int sign = 80001;
CREATE_DISK disk;
disk.Mbr.Signature = sign;
disk.PartitionStyle = PARTITION_STYLE_MBR;
auto res = DeviceIoControl(device_handle, IOCTL_DISK_CREATE_DISK, &disk, sizeof(disk), NULL, 0, NULL, NULL);
res = DeviceIoControl(device_handle, IOCTL_DISK_UPDATE_PROPERTIES, 0, 0, NULL, 0, NULL, NULL);
LARGE_INTEGER partition_size;
partition_size.QuadPart = 0xF00;
DWORD driver_layout_ex_len = sizeof(DRIVE_LAYOUT_INFORMATION_EX);
DRIVE_LAYOUT_INFORMATION_EX driver_layout_info;
memset(&driver_layout_info, 0, sizeof(DRIVE_LAYOUT_INFORMATION_EX));
driver_layout_info.Mbr.Signature = sign;
driver_layout_info.PartitionCount = 1;
driver_layout_info.PartitionStyle = PARTITION_STYLE_MBR;
PARTITION_INFORMATION_EX part_info;
PARTITION_INFORMATION_MBR mbr_info;
part_info.StartingOffset.QuadPart = 32256;
part_info.RewritePartition = TRUE;
part_info.PartitionLength.QuadPart = partition_size.QuadPart/2 * 4096;
part_info.PartitionNumber = 1;
part_info.PartitionStyle = PARTITION_STYLE_MBR;
mbr_info.BootIndicator = TRUE;
mbr_info.HiddenSectors = 32256 / 512;
mbr_info.PartitionType = PARTITION_FAT32;
mbr_info.RecognizedPartition = 1;
part_info.Mbr = mbr_info;
driver_layout_info.PartitionEntry[0] = part_info;
auto res_layout = DeviceIoControl(device_handle, IOCTL_DISK_SET_DRIVE_LAYOUT_EX, &driver_layout_info, sizeof(driver_layout_info), NULL, 0, NULL, NULL);
Now, how do I partition this disk into two partitions? I want to create another partition out of the unpartitioned part of the disk (the other half, basically). The documentation says that PartitionEntry is an array of variable size (no, it is not; it is declared as an array of size 1). Do I call the set-layout IOCTL for every partition I want to create? If so, how do you go about that? Is multi-partitioning possible through the WinAPI interface?
P.S: I am aware that people usually invoke diskpart for this line of work.
Edit:
Adding a second partition to the layout was messing up my stack, so I took another route (the heap).
DWORD driver_layout_ex_len = sizeof(DRIVE_LAYOUT_INFORMATION_EX) + sizeof(PARTITION_INFORMATION_EX); // one layout+partition + partition
PDRIVE_LAYOUT_INFORMATION_EX driver_layout_info = (PDRIVE_LAYOUT_INFORMATION_EX) std::calloc(1, driver_layout_ex_len);
driver_layout_info->Mbr.Signature = sign;
driver_layout_info->PartitionCount = 2;
driver_layout_info->PartitionStyle = PARTITION_STYLE_MBR;
// omitted here..
PARTITION_INFORMATION_EX part_info2;
part_info2.StartingOffset.QuadPart = 32256 + part_info.PartitionLength.QuadPart;
part_info2.RewritePartition = TRUE;
part_info2.PartitionLength.QuadPart = partition_size.QuadPart / 2 * 4096;
part_info2.PartitionNumber = 2;
part_info2.PartitionStyle = PARTITION_STYLE_MBR;
part_info2.Mbr = mbr_info;
driver_layout_info->PartitionEntry[0] = part_info;
driver_layout_info->PartitionEntry[1] = part_info2;
auto res_layout = DeviceIoControl(device_handle, IOCTL_DISK_SET_DRIVE_LAYOUT_EX, driver_layout_info, driver_layout_ex_len, NULL, 0, NULL, NULL);
auto res_err = GetLastError();
Since the stack corruption was overwriting my device_handle, I could not issue any IOCTLs at all; this change eliminated that. Do not forget to pass driver_layout_info instead of &driver_layout_info after this change.
The documentation says that PartitionEntry is an array of variable size (no, it is not; it is declared as an array of size 1).
"Some Windows structures are variable-sized,
beginning with a fixed header, followed by
a variable-sized array. When these structures
are declared,
they often declare an array of size 1 where the
variable-sized array should be." Refer to #Raymond blog.
Here the DRIVE_LAYOUT_INFORMATION_EX structure is an example:
typedef struct _DRIVE_LAYOUT_INFORMATION_EX {
    DWORD PartitionStyle;
    DWORD PartitionCount;
    union {
        DRIVE_LAYOUT_INFORMATION_MBR Mbr;
        DRIVE_LAYOUT_INFORMATION_GPT Gpt;
    } DUMMYUNIONNAME;
    PARTITION_INFORMATION_EX PartitionEntry[1];
} DRIVE_LAYOUT_INFORMATION_EX, *PDRIVE_LAYOUT_INFORMATION_EX;
With this declaration, you would allocate memory for one such
variable-sized DRIVE_LAYOUT_INFORMATION_EX structure like this:
PDRIVE_LAYOUT_INFORMATION_EX driver_layout_info = (PDRIVE_LAYOUT_INFORMATION_EX)malloc(FIELD_OFFSET(DRIVE_LAYOUT_INFORMATION_EX, PartitionEntry[NumberOfPartitions]));
You would initialize the structure like this (using 2 partitions as an example):
DWORD NumberOfPartitions = 2;
LARGE_INTEGER partition_size;
partition_size.QuadPart = 0xF00;
PARTITION_INFORMATION_MBR mbr_info;
mbr_info.BootIndicator = TRUE;
mbr_info.HiddenSectors = 32256 / 512;
mbr_info.PartitionType = PARTITION_FAT32;
mbr_info.RecognizedPartition = TRUE;
PDRIVE_LAYOUT_INFORMATION_EX driver_layout_info = (PDRIVE_LAYOUT_INFORMATION_EX)malloc(FIELD_OFFSET(DRIVE_LAYOUT_INFORMATION_EX, PartitionEntry[NumberOfPartitions]));
for (DWORD Index = 0; Index < NumberOfPartitions; Index++) {
    driver_layout_info->PartitionEntry[Index].PartitionStyle = PARTITION_STYLE_MBR;
    driver_layout_info->PartitionEntry[Index].PartitionNumber = Index + 1;
    driver_layout_info->PartitionEntry[Index].RewritePartition = TRUE;
    driver_layout_info->PartitionEntry[Index].PartitionLength.QuadPart = partition_size.QuadPart / 2 * 4096;
    driver_layout_info->PartitionEntry[Index].Mbr = mbr_info;
}
driver_layout_info->Mbr.Signature = sign;
driver_layout_info->PartitionCount = NumberOfPartitions;
driver_layout_info->PartitionStyle = PARTITION_STYLE_MBR;
driver_layout_info->PartitionEntry[0].StartingOffset.QuadPart = 32256;
driver_layout_info->PartitionEntry[1].StartingOffset.QuadPart = 32256 + driver_layout_info->PartitionEntry[0].PartitionLength.QuadPart;
DWORD driver_layout_ex_len = FIELD_OFFSET(DRIVE_LAYOUT_INFORMATION_EX, PartitionEntry[NumberOfPartitions]);
Call free(driver_layout_info); after you are completely done using it.
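For completeness, a sketch of submitting this layout and then refreshing the cached partition information (same device_handle and IOCTLs as in the question; IOCTL_DISK_UPDATE_PROPERTIES forces the partition table to be re-read):

BOOL ok = DeviceIoControl(device_handle, IOCTL_DISK_SET_DRIVE_LAYOUT_EX,
                          driver_layout_info, driver_layout_ex_len,
                          NULL, 0, NULL, NULL);
if (!ok) {
    printf("IOCTL_DISK_SET_DRIVE_LAYOUT_EX failed: %lu\n", GetLastError());
} else {
    // Ask the disk driver to re-read the new partition table.
    DeviceIoControl(device_handle, IOCTL_DISK_UPDATE_PROPERTIES,
                    NULL, 0, NULL, 0, NULL, NULL);
}
free(driver_layout_info);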
I am new to writing XLLs. Does anyone know how to return a 6x1 array to Excel?
The following is my function (some of the code came from another post):
__declspec(dllexport) LPXLOPER12 WINAPI GetArr(char* arg1, char* arg2)
{
    vector<double> arr = functionReturnVector;
    XLOPER12 list;
    list.xltype = xltypeMulti | xlbitDLLFree;
    list.val.array.lparray = new XLOPER12[6];
    list.val.array.rows = 6;
    list.val.array.columns = 1;
    for(int i = 0; i < 6; ++i) {
        list.val.array.lparray[i] = arr[i]; // error: IntelliSense: no operator "=" matches these operands
    }
    return &list;
}
[2013-02-23]
I have since read the code from XLL RETURN ARRAY and reviewed my own; it compiles, but it returns 0...
__declspec(dllexport) LPXLOPER12 WINAPI GetArr(void)
{
    XLOPER xlArray, xlValues[2];
    xlValues[0].xltype = xltypeNum;
    xlValues[1].xltype = xltypeNum;
    xlValues[0].val.num = 11;
    xlValues[1].val.num = 17;
    xlArray.xltype = xltypeMulti|xlbitDLLFree;
    xlArray.val.array.rows = 1;
    xlArray.val.array.columns = 2;
    xlArray.val.array.lparray = &xlValues;
    return &xlArray;
}
I can print a 1x2 array in an Excel worksheet now; the following is my code.
** Use a function signature of type U when registering.
1. Select one row and two columns.
2. Type =GetArray() in the first cell.
3. Press Ctrl + Shift + Enter.
__declspec(dllexport) LPXLOPER12 WINAPI GetArray(void)
{
    // Both the XLOPER12 and the element array must outlive this call,
    // because Excel reads them after the function returns.
    static XLOPER12 xlArray;
    static XLOPER12 xlValues[2];
    xlValues[0].xltype = xltypeNum;
    xlValues[1].xltype = xltypeNum;
    xlValues[0].val.num = 123;
    xlValues[1].val.num = 456;
    xlArray.xltype = xltypeMulti|xlbitDLLFree;
    xlArray.val.array.rows = 1;
    xlArray.val.array.columns = 2;
    xlArray.val.array.lparray = xlValues;
    return (LPXLOPER12)&xlArray;
}
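For the original 6x1 goal, here is a minimal heap-based sketch (my own illustration, not from the linked post; the name GetSixByOne is made up, and it assumes the function is registered to return an XLOPER12 and that the add-in also exports xlAutoFree12, which Excel calls for results tagged with xlbitDLLFree):

__declspec(dllexport) LPXLOPER12 WINAPI GetSixByOne(void)
{
    // Allocate both the result and its elements on the heap so they
    // survive after this function returns.
    LPXLOPER12 pxResult = new XLOPER12;
    LPXLOPER12 pxArray  = new XLOPER12[6];

    for (int i = 0; i < 6; ++i) {
        pxArray[i].xltype  = xltypeNum;
        pxArray[i].val.num = (double)(i + 1);   // sample values
    }

    pxResult->xltype            = xltypeMulti | xlbitDLLFree;  // Excel will call xlAutoFree12
    pxResult->val.array.rows    = 6;
    pxResult->val.array.columns = 1;
    pxResult->val.array.lparray = pxArray;
    return pxResult;
}

// Called by Excel to release any XLOPER12 returned with xlbitDLLFree set.
extern "C" __declspec(dllexport) void WINAPI xlAutoFree12(LPXLOPER12 pxFree)
{
    if (pxFree->xltype & xltypeMulti)
        delete[] pxFree->val.array.lparray;
    delete pxFree;
}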
Maybe http://xll.codeplex.com can help you solve your problem, but like the other people who are trying to help you, I think your code is still pretty far from shore.
One does not discover new lands without consenting to lose sight of the shore, at first and for a long time.
Just kidding. Maybe that will work for you.
Here's what I'm doing, in a nutshell.
In my class's cpp file I have:
std::vector<std::vector<GLdouble>> ThreadPts[4];
The thread proc looks like this:
unsigned __stdcall BezierThreadProc(void *arg)
{
    SHAPETHREADDATA *data = (SHAPETHREADDATA *) arg;
    OGLSHAPE *obj = reinterpret_cast<OGLSHAPE*>(data->objectptr);
    for(unsigned int i = data->start; i < data->end - 1; ++i)
    {
        obj->SetCubicBezier(
            obj->Contour[data->contournum].UserPoints[i],
            obj->Contour[data->contournum].UserPoints[i + 1],
            data->whichVector);
    }
    _endthreadex( 0 );
    return 0;
}
SetCubicBezier looks like this:
void OGLSHAPE::SetCubicBezier(USERFPOINT &a, USERFPOINT &b, int &currentvector)
{
    std::vector<GLdouble> temp;
    if(a.RightHandle.x == a.UserPoint.x && a.RightHandle.y == a.UserPoint.y
        && b.LeftHandle.x == b.UserPoint.x && b.LeftHandle.y == b.UserPoint.y )
    {
        temp.clear();
        temp.push_back((GLdouble)a.UserPoint.x);
        temp.push_back((GLdouble)a.UserPoint.y);
        ThreadPts[currentvector].push_back(temp);
        temp.clear();
        temp.push_back((GLdouble)b.UserPoint.x);
        temp.push_back((GLdouble)b.UserPoint.y);
        ThreadPts[currentvector].push_back(temp);
    }
}
The code that calls the threads looks like this:
for(int i = 0; i < Contour.size(); ++i)
{
    Contour[i].DrawingPoints.clear();
    if(Contour[i].UserPoints.size() < 2)
    {
        break;
    }
    HANDLE hThread[4];
    SHAPETHREADDATA dat;
    dat.objectptr = (void*)this;
    dat.start = 0;
    dat.end = floor((Contour[i].UserPoints.size() - 1) * 0.25);
    dat.whichVector = 0;
    dat.contournum = i;
    hThread[0] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);

    dat.start = dat.end;
    dat.end = floor((Contour[i].UserPoints.size() - 1) * 0.5);
    dat.whichVector = 1;
    hThread[1] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);

    dat.start = dat.end;
    dat.end = floor((Contour[i].UserPoints.size() - 1) * 0.75);
    dat.whichVector = 2;
    hThread[2] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);

    dat.start = dat.end;
    dat.end = Contour[i].UserPoints.size();
    dat.whichVector = 3;
    hThread[3] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);

    WaitForMultipleObjects(4,hThread,true,INFINITE);
}
Is there something wrong with this?
I'd expect it to fill ThreadPts[4]... There should never be any conflicts the way I have it set up. I usually get an "error writing at ..." access violation on the last thread, where whichVector = 3. If I remove:
dat.start = dat.end;
dat.end = Contour[i].UserPoints.size();
dat.whichVector = 3;
hThread[3] = (HANDLE)_beginthreadex(NULL,0,&BezierThreadProc,&dat,0,0);
Then it does not seem to crash. What could be wrong?
Thanks
The problem is that you're passing the same dat structure to each thread as the argument to the threadproc.
For example, when you start thread 1, there's no guarantee that it will have read the information in the dat structure before your main thread starts loading that same dat structure with the information for thread 2 (and so on). In fact, you use that dat structure directly throughout the thread's loop, so a thread isn't finished with the structure passed to it until it is basically done with all its work.
Also note that currentvector in SetCubicBezier() is a reference to data->whichVector, which refers to the exact same location in all the threads. So SetCubicBezier() will be performing push_back() calls on the same object in separate threads because of this.
There's a very simple fix: you should use four separate SHAPETHREADDATA instances - one to initialize each thread.
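A minimal sketch of that fix, reusing the names from the question inside the existing loop over Contour (an array of four SHAPETHREADDATA, one per thread, so the main thread never rewrites a structure a worker is still reading):

SHAPETHREADDATA dat[4];
HANDLE hThread[4];
const size_t points = Contour[i].UserPoints.size();

for (int t = 0; t < 4; ++t)
{
    dat[t].objectptr   = (void*)this;
    dat[t].contournum  = i;
    dat[t].whichVector = t;
    // Each thread gets its own range; the last one runs to the end.
    dat[t].start = (t == 0) ? 0 : dat[t - 1].end;
    dat[t].end   = (t == 3) ? points
                            : (unsigned int)floor((points - 1) * 0.25 * (t + 1));
    hThread[t] = (HANDLE)_beginthreadex(NULL, 0, &BezierThreadProc, &dat[t], 0, 0);
}
WaitForMultipleObjects(4, hThread, TRUE, INFINITE);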