I recently inherited a program that mixes C++ and C++/CLI (.NET). It interfaces to other components over the network, and to a Driver DLL for some special hardware. However, I am trying to figure out the best way to send the data over the network, as what is used seems non-optimal.
The data is stored in a C++-defined structure, something like:
struct myCppStructure {
    unsigned int field1;
    unsigned int field2;
    unsigned int dataArray[512];
};
The program works fine when accessing the structure itself from C++/CLI. The problem is that to send it over the network the current code does something like the following:
struct myCppStructure* data; // assume this points to a valid record
IntPtr dataPtr(data);
// myNetworkSocket is a NetworkStream cast as a System::IO::Stream^
System::IO::BinaryWriter^ myBinWriter = gcnew BinaryWriter(myNetworkSocket);
unsigned __int64 length = sizeof(struct myCppStructure) / sizeof(unsigned __int64);
unsigned __int64* ptr = static_cast<unsigned __int64*>(dataPtr.ToPointer());
for (unsigned int i = 0; i < length; i++)
    myBinWriter->Write(*ptr++);
In normal C++ it'd usually be a call like:
myBinWriter->Write(ptr,length);
But I can't find anything equivalent in C++/CLI. System::IO::BinaryWriter only has overloads for basic types and array<>^ versions of a few of them. Is there nothing more efficient?
P.S. These records are generated many times a second, so doing additional copying (e.g. marshaling) is out of the question.
Note: The original question asked about C#. I failed to realize that what I was thinking of as C# was really "Managed C++" (aka C++/CLI) under .NET. The above has been edited to replace 'C#' references with 'C++/CLI' references - a term I am using for any version of Managed C++; notably, I am using .NET 3.5.
Your structure consists of "basic types" and "arrays of them". Why can't you just write them sequentially using BinaryWriter? Something like
binWriter.Write(data.field1);
binWriter.Write(data.field2);
for (var i = 0; i < 512; i++)
    binWriter.Write(data.dataArray[i]);
What you want to do is to find out how the C++ struct is packed, and define a struct with the correct StructLayout attribute.
To define the fixed-length int[], you can define a fixed-size buffer inside it. Note that to use this you will have to compile your project with /unsafe.
Now you're ready to convert that struct to a byte[] in two steps:
1. Pin the array of structs in memory using GCHandle.Alloc - this is fast and shouldn't be a performance bottleneck.
2. Use Marshal.Copy (don't worry, this is as fast as a memcpy) with the source IntPtr = handle.AddrOfPinnedObject.
Then dispose the GCHandle and you're ready to write the bytes using the "Write" overload mentioned by Serg Rogovtsev.
Hope this helps!
In C# you could do the following. Start by defining the structure.
[StructLayout ( LayoutKind.Sequential )]
internal unsafe struct hisCppStruct
{
public uint field1;
public uint field2;
public fixed uint dataArray [ 512 ];
}
And then write it using the binary writer as follows.
hisCppStruct @struct = new hisCppStruct ();
@struct.field1 = 1;
@struct.field2 = 2;
@struct.dataArray [ 0 ] = 3;
@struct.dataArray [ 511 ] = 4;
using ( BinaryWriter bw = new BinaryWriter ( File.OpenWrite ( @"C:\temp\test.bin" ) ) )
{
    int structSize = Marshal.SizeOf ( @struct );
    int limit = structSize / sizeof ( uint );
    uint* uintPtr = (uint*) &@struct;
    for ( int i = 0 ; i < limit ; i++ )
        bw.Write ( uintPtr [ i ] );
}
I'm pretty sure you can do exactly the same in managed C++.
typedef unsigned char Byte;
...
void ReverseBytes( void *start, int size )
{
Byte *buffer = (Byte *)(start);
for( int i = 0; i < size / 2; i++ ) {
std::swap( buffer[i], buffer[size - i - 1] );
}
}
What this method does right now is it reverses bytes in memory. What I would like to know is, is there a better way to get the same effect? The whole "size / 2" part seems like a bad thing, but I'm not sure.
EDIT: I just realized how bad the title I put for this question was, so I [hopefully] fixed it.
The standard library has a std::reverse function:
#include <algorithm>
void ReverseBytes( void *start, int size )
{
    char *istart = static_cast<char*>(start), *iend = istart + size;
    std::reverse(istart, iend);
}
A performant solution without using the STL:
void reverseBytes(void *start, int size) {
    unsigned char *lo = static_cast<unsigned char*>(start);
    unsigned char *hi = lo + size - 1;
unsigned char swap;
while (lo < hi) {
swap = *lo;
*lo++ = *hi;
*hi-- = swap;
}
}
Though the question is 3 ½ years old, chances are that someone else will be searching for the same thing. That's why I still post this.
If you need to reverse there is a chance that you can improve your algorithms and just use reverse iterators.
If you're reversing binary data from a file with different endianness, you should probably use the ntoh* and hton* functions, which convert data of the specified sizes between network and host order. ntohl, for instance, converts a 32-bit unsigned long from big endian (network order) to host order (little endian on x86 machines).
I would review std::swap and make sure it's optimized; after that I'd say you're pretty optimal for space. I'm reasonably sure that's time-optimal as well.
I am working with the SystemC TLM library. I would like to send a payload with two integers to a module that will perform an operation on those two integers. My question is simply how to setup and decode the payload.
Doulos provided documentation on both setting up and decoding here https://www.doulos.com/knowhow/systemc/tlm2/tutorial__1/
Setup
tlm::tlm_command cmd = static_cast<tlm::tlm_command>(rand() % 2);
if (cmd == tlm::TLM_WRITE_COMMAND) data = 0xFF000000 | i;
trans->set_command( cmd );
trans->set_address( i );
trans->set_data_ptr( reinterpret_cast<unsigned char*>(&data) );
trans->set_data_length( 4 );
trans->set_streaming_width( 4 );
trans->set_byte_enable_ptr( 0 );
trans->set_dmi_allowed( false );
trans->set_response_status( tlm::TLM_INCOMPLETE_RESPONSE );
socket->b_transport( *trans, delay );
Decode
virtual void b_transport( tlm::tlm_generic_payload& trans, sc_time& delay )
{
tlm::tlm_command cmd = trans.get_command();
sc_dt::uint64 adr = trans.get_address() / 4;
unsigned char* ptr = trans.get_data_ptr();
unsigned int len = trans.get_data_length();
unsigned char* byt = trans.get_byte_enable_ptr();
unsigned int wid = trans.get_streaming_width();
So it looks to me like you would send a pointer to a memory location where there are two integers written.
|----------------------- int1 -----------------------|----------------------- int2 -----------------------|
|ptr+0x0|ptr+0x(wid)|ptr+0x(2*wid)|ptr+0x(3*wid)|ptr+0x(4*wid)|ptr+0x(5*wid)|ptr+0x(6*wid)|ptr+0x(7*wid)|
Is my interpretation of this documentation correct?
How could you get those first 4 memory locations [3:0] and combine them into an int32 and how could you get the second 4 [7:4] and turn them into the second integer?
So it looks to me like you would send a pointer to a memory location
where there are two integers written.
Is my interpretation of this documentation correct?
Yes
To get them back you just need to copy them:
int32_t val0, val1;
memcpy(&val0, ptr, sizeof(int32_t));
memcpy(&val1, ptr + sizeof(int32_t), sizeof(int32_t));
or something like
int32_t val[2];
memcpy(val, ptr, sizeof val);
But make sure the initiator keeps the memory behind the pointer valid long enough; e.g., it may be better to avoid keeping the data on the stack. And don't forget to check that the payload's data length attribute has a valid value - you want to detect such issues as soon as possible.
Here is my code for a MetaTraderWrapper.dll:
#define MT4_EXPFUNC __declspec(dllexport)
MT4_EXPFUNC void __stdcall PopMessageString(wchar_t *message)
{
auto result = L"Hello world !";
int n = wcslen( result );
wcscpy_s( message, n + 1, result );
}
On the MQL4-Caller side this Script is used:
#property strict
#import "MetaTraderWrapper.dll"
int PopMessageString( string & );
#import
//
void OnStart(){
string message;
if ( StringInit( message, 64 ) ){
PopMessageString( message );
int n = StringLen( message );
MessageBox( message );
}
}
It works this way when the message has been properly initialized with the StringInit() function and enough memory was allocated.
What I need to do is allocate the message variable not in the MQL4 script, but within the DLL.
In the C++ function, it should be something like this:
MT4_EXPFUNC void __stdcall PopMessageString(wchar_t *message)
{
auto result = L"Hello world !";
int n = wcslen( result );
// allocate here, but does not work
message = new wchar_t[n + 1]; // <--------- DOES NOT WORK
//
wcscpy_s( message, n + 1, result );
}
What can I do ?
Get acquainted with the wild worlds of MQL4. Step 1: forget that a string is a string (it is a struct ... since 2014).
The internal representation of the string type is a structure 12 bytes long:
#pragma pack(push,1)
struct MqlString
{
int size; // 32-bit integer, contains size of the buffer, allocated for the string.
LPWSTR buffer; // 32-bit address of the buffer, containing the string.
int reserved; // 32-bit integer, reserved.
};
#pragma pack(pop,1)
So,
having headbanged into this one sunny Sunday afternoon, as the platform underwent a LiveUpdate and suddenly all DLL-call interfaces using a string stopped working, it took a long time to absorb the costs of such a "swift" engineering surprise.
You can re-use the found solution:
use an array of bytes - uchar[] - and convert the bytes of the returned content appropriately on the MQL4 side into a string with the service functions StringToCharArray() resp. CharArrayToString()
The DLL-.mqh-header file may also add these tricks and make these conversions "hidden" from MQL4-code:
#import <aDLL-file> // "RAW"-DLL-call-interfaces
...
// Messages:
int DLL_message_init( int &msg[] );
int DLL_message_init_size ( int &msg[], int size );
int DLL_message_init_data ( int &msg[], uchar &data[], int size );
...
#import
// ------------------------------------------------------ // "SOFT"-wrappers
...
int MQL4_message_init_data( int &msg[], string data, int size ) {
     uchar dataChar[];
     StringToCharArray( data, dataChar );
     return( DLL_message_init_data( msg, dataChar, size ) );
}
Always be pretty careful with appropriate deallocations, so as not to cause memory leaks.
Always be pretty cautious when a new LiveUpdate changes the code-base and introduces a new compiler + new documentation. Re-read the whole documentation, as many life-saving details make it into the help-file only after some later update, and many details are hidden or reflected only indirectly in chapters that do not promise such information at first look -- so become as ready as D'Artagnan or a red-scarfed pioneer -- you never know where the next hit comes from :)
Today I ran into problems with serialization in MQL4.
I have a method, which I imported from a DLL:
In MQL4:
void insertQuery( int id,
string tableName,
double &values[4],
long &times[3],
int volume
);
In DLL:
__declspec(dllexport) void __stdcall insertQuery( int id,
wchar_t *tableName,
double *values,
long *times,
int volume
);
I tested it with these function calls in MQL4:
string a = "bla";
double arr[4] = { 1.1, 1.3, 0.2, 0.9 };
long A[3] = { 19991208, 19991308, 19992208 };
int volume = 1;
insertQuery( idDB, a, arr, A, volume );
Inside this method I collect these values into a file.
C++ :
stringstream stream;
stream << " '";
for (int i = 0; i < 2; ++i) {
stream << times[i] << "' , '";
}
stream << times[2] << ", ";
for (int i = 0; i < 4; ++i) {
stream << values[i] << ", ";
}
stream << volume;
wstring table(tableName);
query.append("INSERT INTO ");
query.append(table.begin(), table.end());
query.append(" VALUES (");
query.append(stream.str());
query.append(" )");
std::ofstream out("C:\\Users\\alex\\Desktop\\text.txt");
out << query;
out.close();
But in output file I receive this record:
INSERT INTO bla VALUES ( '19991208' , '0' , '19991308, 1.1, 1.3, 0.2, 0.9, 1 )
So my question is: why do I lose one long value from the array when I receive my record in the DLL?
I tried a lot of ways to solve this problem (I transferred two and three long values, etc.) and I always get a result where I lose the second long value during serialization. Why?
The problem is caused by the fact that in MQL4 a long is 8 bytes, while a long in C++ (on Windows) is 4 bytes.
What you want is a long long in your C++ declaration.
Or you could also pass them as strings, then convert them into the appropriate type within your C++ code.
Well, be careful: New-MQL4.56789 is not a C-compatible language.
The first thing to test is to avoid passing an MQL4 string into a DLL calling interface where a real C-language string is expected.
Since old-MQL4 has been silently re-defined into the still-WIP-creeping syntax of New-MQL4, the MQL4 string is not a string, but a struct.
Root-cause [ isolation ]:
Having swallowed the shock about the string/struct trouble, if you can, first try to test the MQL4/DLL interactions without passing any string, to prove that all other parameters, passed by value and addressed by-ref, do get their way into the hands of the DLL-function as you expect.
If this works as you wish, proceed to the next step:
How to pass the actual data in the expected string representation, then?
Let me share a dirty hack I used for passing data where the DLL expects strings:
#import "mql4TOOL.dll"
...
int mql4TOOL_msg_init_data ( int &msg[],
uchar &data[],
int size
);
...
#import
...
int tool_msg_init_data( int &msg[], string data, int size ) {
     uchar dataChar[];
     StringToCharArray( data, dataChar );
     return( mql4TOOL_msg_init_data( msg, dataChar, size ) );
}
Yes, dirty, but works for years and saved us many tens-of-man*years of re-engineering upon a maintained code-base with heavy dependence on the MQL4/DLL interfacing in massively distributed heterogeneous computing systems.
The last resort:
If all efforts went in vain, go low level, passing a uchar[] as needed, where you assemble some serialised representation in MQL4 and parse that on the opposite end, before processing the intended functionality.
Ugly?
Yes, it might look like that, but it keeps you focused on the core functionality and isolates you from any next shift of paradigm, if not only strings cease to be strings et al.
I have this C++ code that shows how to extend a piece of software by compiling it to a DLL and putting it in the application folder:
#include <windows.h>
#include <DemoPlugin.h>
/** A helper function to convert a char array into a
LPBYTE array. */
LPBYTE message(const char* message, long* pLen)
{
size_t length = strlen(message);
LPBYTE mem = (LPBYTE) GlobalAlloc(GPTR, length + 1);
for (unsigned int i = 0; i < length; i++)
{
mem[i] = message[i];
}
*pLen = length + 1;
return mem;
}
long __stdcall Execute(char* pMethodName, char* pParams,
char** ppBuffer, long* pBuffSize, long* pBuffType)
{
*pBuffType = 1;
if (strcmp(pMethodName, "") == 0)
{
*ppBuffer = (char*) message("Hello, World!",
pBuffSize);
}
else if (strcmp(pMethodName, "Count") == 0)
{
char buffer[1024];
int length = strlen(pParams);
*ppBuffer = (char*) message(itoa(length, buffer, 10),
pBuffSize);
}
else
{
*ppBuffer = (char*) message("Incorrect usage.",
pBuffSize);
}
return 0;
}
Is it possible to make a plugin this way using Cython? Or even py2exe? The DLL just has to have an entry point, right?
Or should I just compile it natively and embed Python using elmer?
I think the solution is to use both. Let me explain.
Cython makes it convenient to make a fast plugin using Python, but inconvenient (if at all possible) to make the right "kind" of DLL. You would probably have to use the standalone mode so that the necessary Python runtime is included, and then mess with the generated C code so that an appropriate DLL gets compiled.
Conversely, elmer makes it convenient to make the DLL, but runs "pure" Python code, which might not be fast enough. I assume speed is an issue because you are considering Cython as opposed to simple embedding.
My suggestion is this: the pure Python code that elmer executes should import a standard Cython Python extension and execute code from it. This way you don't have to hack anything ugly, and you have the best of both worlds.
One more solution to consider is using shedskin, because that way you can get C++ code from your Python code that is independent of the Python runtime.