This is an odd behaviour which I encountered. I have a Visual Basic interface for the user:
Declare PtrSafe Sub getPEC _
    Lib "C:\Users\...somePath...\OPunit0011PUMP.dll" _
    (ByVal typeOfPump As Integer, _
     ByVal outflow As Double, _
     ByRef pec As Double, _
     ByRef RV As Integer)
The user specifies a pump by the integer typeOfPump. I pass this parameter ByVal into my C++ DLL. There, depending on which pump it is, some predeclared parameters a, ..., g are initialized in a switch statement:
extern "C"
{
double PEC_backUp;
void __stdcall getPEC(int typeOfPump, double outflow, double &PEC, int &RV)
{
//polynomal trendline for different pumps
//y = a x^6 + b x^5 + c x^4 + d x^3 + e x^2 + f x^1 + g x^0
if (typeOfPump < 1)
{
RV = -1;
return;
}
double a, b, c, d, e, f, g;
#pragma region switch case
switch (typeOfPump)
{
//150 bar
case 1:
a = 1.1186E-08;
b = -1.49172E-05;
//...
break;
case 2:
//...
}
My problem is that the switch statement does NOT work. My default value is set to nine, but every other case fails too; the switch code is simply ignored.
Note also: the same odd behaviour can be seen in an if condition:
if (typeOfPump > 1)
{
    RV = -1;
    return;
}
Although typeOfPump is assigned NINE, which is obviously bigger than one, my function getPEC does not return at this point. On the other hand, if I write
if (typeOfPump < 1)
{
    RV = -1;
    return;
}
my function does return here. I then assigned the value of typeOfPump to RV to monitor it in VBA, and RV was set to nine.
Moreover, to make things even stranger, the value of pec automatically changes to 7.00000000005821 (seen with VBA's watch function) when the function returns with RV = -1.
I guess my parameters are somehow not compatible with operations in my DLL. Has anyone seen this before, and how can I fix it?
Thank you in advance!
EDIT: I can do operations like
RV = typeOfPump * (int)outflow;
and obtain correct values. However, pec still shows some change in its value.
2nd EDIT: My OS is 64-bit, Excel is 32-bit, and I'm compiling for x86. I wrote a similar program on another computer (64-bit OS, 64-bit Excel, compiling for x64), and there it worked!
3rd EDIT: An integer of value 9 in VBA shows up as -65526 in the C++ environment, given that the size of int in my C++ environment is 4 bytes. Assuming the range of a 16-bit integer, −32,768 to 32,767: doubling 32,767 and subtracting 9 gives 65,525.
In general, if you're not sure why your code isn't working, print out the values that you are sure of and see whether they are actually what you think they are, especially when mixing 32-bit and 64-bit code. There are all kinds of compiler options that affect the data format in use.
Write your code as a standalone exe and make sure it works; write the DLL with the handshaking as simply as possible, make sure that works, and then turn your exe into a DLL. You're trying to do too much in one step.
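Given the 3rd EDIT, the likely root cause is that a VBA Integer is only 16 bits wide, while int in a Windows C++ build is 32 bits, so the upper half of the received parameter is garbage. As a minimal sketch of one possible fix (my illustration, not code from the thread), the C++ side can be made to match VBA's 16-bit Integer:

// Sketch: use 16-bit 'short' so the parameter sizes match VBA's Integer.
// Alternatively, keep 'int' on the C++ side and declare the VBA parameters
// "As Long" instead, since VBA's Long is 32-bit like int on Windows.
extern "C"
{
    void __stdcall getPEC(short typeOfPump, double outflow, double &PEC, short &RV)
    {
        if (typeOfPump < 1)
        {
            RV = -1;
            return;
        }
        // ... switch on typeOfPump as before ...
    }
}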
I've always been using an XOR encryption class for my 32-bit applications, but recently I started working on a 64-bit one and encountered the following crash: https://i.stack.imgur.com/jCBlJ.png
Here's the xor class I'm using:
// xor.h
#pragma once

template <int XORSTART, int BUFLEN, int XREFKILLER>
class XorStr
{
private:
    XorStr();
public:
    char s[BUFLEN];
    XorStr(const char* xs);
    ~XorStr()
    {
        for (int i = 0; i < BUFLEN; i++) s[i] = 0;
    }
};

template <int XORSTART, int BUFLEN, int XREFKILLER>
XorStr<XORSTART, BUFLEN, XREFKILLER>::XorStr(const char* xs)
{
    int xvalue = XORSTART;
    int i = 0;
    for (; i < (BUFLEN - 1); i++)
    {
        s[i] = xs[i - XREFKILLER] ^ xvalue;
        xvalue += 1;
        xvalue %= 256;
    }
    s[BUFLEN - 1] = (2 * 2 - 3) - 1;
}
The crash occurs when I try to use the obfuscated string, but it doesn't necessarily happen 100% of the time (it never happens on 32-bit, however). Here's a small example of a 64-bit app that will crash on the second obfuscated string:
#include <iostream>
#include "xor.h"

int main()
{
    // no crash
    printf(/*123456789*/XorStr<0xDE, 10, 0x017A5298>("\xEF\xED\xD3\xD5\xD7\xD5\xD3\xDD\xDF" + 0x017A5298).s);
    // crash
    printf(/*123456*/XorStr<0xE3, 7, 0x87E64A05>("\xD2\xD6\xD6\xD2\xD2\xDE" + 0x87E64A05).s);
    return 0;
}
The same app runs perfectly fine if built as 32-bit.
Here's the HTML script to generate the obfuscated strings: https://pastebin.com/QsZxRYSH
I need to tweak this class to work on 64-bit, because I have a lot of already-encrypted strings that I need to import from a 32-bit project into the 64-bit one I'm working on at the moment. Any help is appreciated!
The access violation occurs because 0x87E64A05 is larger than the largest value a signed 32-bit integer can hold (which is 0x7FFFFFFF).
Because int is likely 32-bit, XREFKILLER cannot hold 0x87E64A05, so its value is implementation-defined.
That implementation-defined value is later used to subtract from xs again, after the passed pointer was artificially advanced by the literal 0x87E64A05. In the addition, the literal is interpreted as long or long long (whichever fits, depending on whether long is 32-bit or larger), so there it is not narrowed to the implementation-defined value.
You are therefore effectively left with some random pointer in xs[i - XREFKILLER], and this is likely to give undefined behavior, e.g. an access violation.
If compiled for 32-bit x86, it probably so happens that int and pointers have the same bit-size, and that the implementation-defined over-/underflow and narrowing behaviors are such that the addition and subtraction cancel as expected. If the pointer type is larger than 32 bits, however, this cannot work.
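To see the size problem concretely, a one-line check (my own addition, not part of the answer) makes the compiler confirm that the literal exceeds the int range:

#include <climits>

// 0x87E64A05 exceeds INT_MAX (0x7FFFFFFF), so the literal cannot be
// represented as a signed 32-bit int, and narrowing it in a template
// argument is ill-formed:
static_assert(0x87E64A05ULL > INT_MAX, "literal does not fit in a signed 32-bit int");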
There is no point to XREFKILLER at all. It just does one calculation that is immediately reverted (if there is no over-/underflow).
Note that the compiler accepting the narrowing in the template argument at all is a bug: your program is ill-formed, and the compiler should give you an error message.
In GCC, for example, this bug persists up to version 8.2 but has been fixed on current trunk (i.e. version 9).
You will have similar problems with XORSTART if char happens to be signed on your platform, because then the values you provide won't fit into it. In that case you will have to enable warnings, though, because that is not a conversion that makes the program ill-formed. Also, the behavior of ^ may not be what you expect if char is signed on your system.
It is not clear what the point of
s[BUFLEN - 1] = (2 * 2 - 3) - 1;
is. It should be:
s[BUFLEN - 1] = '\0';
Passing the resulting string to printf as the first argument will lead to spurious undefined behavior if the result string happens to contain a %, which would be interpreted as the start of a format specifier. Use std::cout.
If you do want to use printf, you need to write std::printf and #include <cstdio> to guarantee that it is available. However, since this is C++, you should be using std::cout anyway.
More fundamentally, after your transformation the output string may happen to contain a 0 other than the terminating one, which would be interpreted as the end of the C-style string. This is a major design flaw, and you probably want to use std::string instead for that reason (and because it is better style).
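Pulling these points together, here is a minimal sketch of a 64-bit-safe rework (my own illustration, not the answerer's code): XREFKILLER and its pointer arithmetic are dropped entirely, the terminator is written as '\0', and unsigned char is used so the XOR and the counter wrap-around are well-defined even where plain char is signed:

#include <cstdio>

template <unsigned char XORSTART, int BUFLEN>
class XorStr
{
public:
    char s[BUFLEN];

    explicit XorStr(const char* xs)
    {
        unsigned char xvalue = XORSTART;
        for (int i = 0; i < BUFLEN - 1; i++)
        {
            // XOR with a counter that starts at XORSTART and wraps at 256
            s[i] = static_cast<char>(static_cast<unsigned char>(xs[i]) ^ xvalue);
            ++xvalue; // unsigned char wraps around automatically
        }
        s[BUFLEN - 1] = '\0';
    }

    ~XorStr()
    {
        for (int i = 0; i < BUFLEN; i++) s[i] = 0; // wipe the plaintext
    }
};

Used, for example, as std::printf("%s\n", XorStr<0xDE, 10>("\xEF\xED\xD3\xD5\xD7\xD5\xD3\xDD\xDF").s); which decodes the first string from the question to "123456789". Printing through "%s" also sidesteps the format-specifier problem, though the embedded-zero caveat above still applies.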
A little background: I was working on some data conversion from C to C# using a C++/CLI mid-layer, and I noticed a peculiarity in the way the debugger shows floats and doubles depending on which DLL the code is executing in (see code and images below). At first I thought it had something to do with managed/unmanaged differences, but then I realized that the same behaviour is exhibited even if I leave the C# layer out entirely and use only unmanaged data types.
Test case: To explore the issue further, I created an isolated test case that clearly shows the strange behaviour. I am assuming that anyone testing this code already has a working solution and dllimport/dllexport macros set up; mine is called DLL_EXPORT. If you need a minimal working header file, let me know. Here the main application is in C, calling a function from a C++/CLI DLL. I am using Visual Studio 2015 and both assemblies are 32-bit.
I am a bit concerned, as I am not sure whether this is something I need to worry about or just something the debugger is doing (I am leaning towards the latter). And to be quite honest, I am outright curious about what's happening here.
Question: Can anyone explain the observed behaviour or at least point me in the right direction?
C - Calling Function
void floatTest()
{
    float floatValC = 42.42f;
    double doubleValC = 42.42;

    // even if passing the address, behaviour is the same as all others
    float retFloat = 42.42f;
    double retDouble = 42.42;

    int sizeOfFloatC = sizeof(float);
    int sizeOfDoubleC = sizeof(double);

    floatTestCPP(floatValC, doubleValC, &retFloat, &retDouble);

    // do some dummy math to keep the compiler happy (i.e. no unused variable warnings)
    sizeOfFloatC = sizeOfFloatC + sizeOfDoubleC; // break point here
}
C++/CLI Header
DLL_EXPORT void floatTestCPP(float floatVal, double doubleVal,
float *floatRet, double *doubleRet);
C++/CLI Source
// as you can see, there are no managed types in this function
void floatTestCPP(float floatVal, double doubleVal, float *floatRet, double *doubleRet)
{
    float floatLocal = floatVal;
    double doubleLocal = doubleVal;

    int sizeOfFloatCPP = sizeof(float);
    int sizeOfDoubleCPP = sizeof(double);

    *floatRet = 42.42f;
    *doubleRet = 42.42;

    // do some dummy math to keep the compiler happy (no warnings)
    floatLocal = (float)doubleLocal; // break point here
    sizeOfDoubleCPP = sizeOfFloatCPP;
}
Debugger in C - break point on last line of floatTest()
Debugger in C++/CLI - break point on the second to last line of floatTestCPP()
Consider that the debugger itself is not necessarily coded in C, C#, or C++.
MS libraries support the "R" format: a string that can round-trip to an identical number. I suspect this, or a %g-style format, was used.
Without the MS source code, the following is only a good supposition:
The debug output is enough to distinguish the double from other nearby doubles. So the code need not print "42.420000000000002"; "42.42" is sufficient, whatever format is used.
42.42 as an IEEE double is about 42.4200000000000017053025658242404460906982..., and the debugger certainly need not print the exact value.
Potentially similar C code:
#include <math.h>
#include <stdio.h>

int main(void) {
    puts("12.34567890123456");
    double d = 42.42;
    printf("%.16g\n", nextafter(d, 0));
    printf("%.16g\n", d);
    printf("%.17g\n", d);
    printf("%.16g\n", nextafter(d, 2 * d));

    d = 1 / 3.0f;
    printf("%.9g\n", nextafterf(d, 0));
    printf("%.9g\n", d);
    printf("%.9g\n", nextafterf(d, 2 * d));

    d = 1 / 3.0f;
    printf("%.16g\n", nextafter(d, 0));
    printf("%.16g\n", d);
    printf("%.16g\n", nextafter(d, 2 * d));
}
output
12.34567890123456
42.41999999999999
42.42
42.420000000000002 // this level of precision not needed.
42.42000000000001
0.333333313
0.333333343
0.333333373
0.3333333432674407
0.3333333432674408
0.3333333432674409
For your code to convert a double to text with sufficient textual precision and back to double to "round-trip" the number, see Printf width specifier to maintain precision of floating-point value.
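As a small concrete sketch of that (my addition, not from the answer): DBL_DECIMAL_DIG from <cfloat> (17 for IEEE-754 double) is always enough significant digits for the text to convert back to the identical double:

#include <cfloat>
#include <cstdio>

int main()
{
    double d = 42.42;
    // 17 significant digits round-trip any IEEE-754 double exactly;
    // this prints 42.420000000000002, matching the %.17g line above.
    std::printf("%.*g\n", DBL_DECIMAL_DIG, d);
    return 0;
}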
I have the following C++ code:
#include <OAIdl.h> // for VARIANT, BSTR etc.

__declspec(dllexport) void __stdcall updatevar(VARIANT *x)
{
    double nb = 3.0;
    ++nb;
}
with a function doing (almost) nothing (to avoid warnings), and a .def file:
LIBRARY tmp0
EXPORTS
    updatevar @1
I compile this with Visual Studio 2013 into a DLL that I reference as follows in Excel 2013's VBA:
Declare Sub updatevar Lib "C:\path\to\thedll.dll" (ByRef x As Variant)
and that I use like this :
Public Sub DoIt()
    Dim r As Range
    Set r = ActiveWorkbook.Sheets("Sheet1").Range("B2:C4")
    Dim var As Variant
    var = r.Value
    Call updatevar(var)
End Sub
where the 6-cell Excel range B2:C4 contains strings, dates, doubles, and ints/longs.
I put a breakpoint in the C++ code to inspect the VARIANT I receive, and noticed that the type of its 6 elements is almost always resolved correctly: dates get a vt of 7 (VT_DATE), doubles a vt of 5 (VT_R8), and strings a vt of 8 (VT_BSTR); the exception is ints/longs, which are mapped to VT_R8 and treated as doubles.
At first I thought it was a C++ problem, but inspecting the range r and its Value2 field already in the VBA code showed me that all ints/longs are treated in VBA as Variant/Double, not Variant/Long, and I have no idea why this is happening.
Note: I added the c++ and dll tags since people interested in those tags may also be able to help, given that the context involves exchanging VARIANTs between C++ and VBA.
Remark. "Downcasting" from double to int is not an option, especially as VARIANT is supposed to know about ints/longs (VT_I4 = 2 and VT_I4 = 3 do exist.)
I'm creating a .dll library in C/C++ for VBA. It will contain functions for communication via the RS232 serial port, and the data will be processed in Excel. Everything works fine, but I'm confused by some strange behaviour of VBA under Excel. I have two functions, one for writing to the port and one for reading. When I send a port number, e.g. 3, from VBA to one of them (it doesn't matter which) and print it right after it is received by the function, it shows the decimal value 3, which is correct. But when I then send exactly the same variable containing the number 3 to the second one, that function receives 51, which is the decimal value of the char '3'. So at first VBA sends the integer, then it somehow changes and sends the decimal value of the char '3'. There is no code before the printing of the received variable in my functions that could change the value.
Here is simplified code of my functions, just to illustrate:
int __stdcall PortRead(short int &Port)
{
    printf("%d %c\n", Port, Port);
    return 0;
}

int __stdcall PortWrite(short int &Port, BSTR &Message)
{
    printf("%d %c\n", Port, Port);
    return 0;
}
Here is the VBA code:
Declare Function PortRead Lib "rs232_r.dll" (ByRef x As Integer) As Integer
Declare Function PortWrite Lib "rs232_w.dll" (ByRef x As Integer, ByRef y As String) As Integer
Dim Message As String
Dim PortNumber As Integer
Sub Example()
    PortNumber = 3
    Message = ":trac:data?"
    aa = PortWrite(PortNumber, Message)
    Debug.Print aa
    xx = PortRead(PortNumber)
    Debug.Print xx
End Sub
As I said, the received values differ when I send one variable to the two functions, but when I change the code as in the next example, both functions receive the same, correct value.
Declare Function PortRead Lib "rs232_r.dll" (ByRef x As Integer) As Integer
Declare Function PortWrite Lib "rs232_w.dll" (ByRef x As Integer, ByRef y As String) As Integer
Dim Message As String
Dim PortNumber1 As Integer
Dim PortNumber2 As Integer
Dim Number As Integer
Sub Example()
    Number = 3
    PortNumber1 = Number
    PortNumber2 = Number
    Message = ":trac:data?"
    aa = PortWrite(PortNumber1, Message)
    Debug.Print aa
    xx = PortRead(PortNumber2)
    Debug.Print xx
End Sub
I apologize if this question has already been asked, or if it is a question for programmers from kindergarten, but I am very curious. Thanks.
In VBA, try changing ByRef to ByVal.
The code posted maps "Integer = int" in one place and "Integer = short int" in another. Which is it? (Are you compiling 32-bit or 64-bit C++?)
Excel will repair the stack if your declarations are invalid, and Excel will not crash, so you cannot rely on Excel crashing to find invalid declarations.
When you call a C++ function from VBA ByRef, you might expect to receive a reference in the C++ function, like
short int &Port
but ByRef in VBA really means "by pointer" in C++.
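As a sketch of that equivalence (my illustration, assuming a 16-bit VBA Integer), the declaration Declare Function PortRead Lib "rs232_r.dll" (ByRef x As Integer) As Integer corresponds to a C++ signature like:

#include <cstdio>

// ByRef x As Integer arrives as a pointer to a 16-bit value. Writing
// through the pointer changes the caller's VBA variable, which is exactly
// how a value can appear "changed" between two calls.
extern "C" int __stdcall PortRead(short int *Port)
{
    std::printf("%d\n", *Port);
    return 0;
}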
So I found the issue. As you suggested, I took a closer look at the parameters sent to the functions ByRef. After a few experiments I discovered my mistake. I sent a number to a function by reference, which means I sent only the memory address, not the value. My first function then did a few calculations (because of the conversion to char and building a string) and saved the changed value back to that memory address. So when I sent the variable to the second function, also by reference, the value at that address had already been changed by the first function. I was overwriting the value of the variable myself. I should apologize to Microsoft for the rude words I said about its products while solving this problem. :) Thanks, mates.
I am writing a 64-bit DLL in C which is then used in 64-bit Excel, following a sample project from https://sites.google.com/site/jrlhost/links/excelcdll.
The example is simple: we write a function in C that returns the square of its input. The function is exported from the DLL and then used in 64-bit Excel.
I am facing the exact same problem as in the example:
"However, when you use squareForEXL as a worksheet function, it results in errors. On my desktop, it returns the correct result (e.g., "= squareForEXL(10)" yields 100) but then gives an "Out of Stack Space" error, either at some point when calling the function or when Excel is closed. On my laptop, it returns an incorrect result (e.g., "= squareForEXL(10)" yields 0). On both, Excel sometimes crashes."
The C function (squareForEXL) works fine when called from VBA, but it does not work as a worksheet function. One workaround is proposed in the article, but I would still like to see whether there is any way to resolve the issue directly.
Below is the C and VBA code:
double _stdcall squareForEXL(double *x)
{
    return *x * *x;
}
Declare PtrSafe Function squareForEXL Lib "C:\Working\XLSquare\x64\Debug\XLSquare.dll" (ByRef x As Double) As Double
You need to pass by reference to squareForEXL as given in the example (note that references are a C++ feature, so this requires compiling the file as C++). That is, you need:
double _stdcall squareForEXL(double &x)
{
    return x * x;
}