MQL4 C++ DLL: change string argument in function call

Here is my code for a MetaTraderWrapper.dll:
#define MT4_EXPFUNC __declspec(dllexport)

MT4_EXPFUNC void __stdcall PopMessageString( wchar_t *message )
{
    auto result = L"Hello world !";
    int  n      = wcslen( result );
    wcscpy_s( message, n + 1, result );
}
On the MQL4-Caller side this Script is used:
#property strict

#import "MetaTraderWrapper.dll"
   void PopMessageString( string &message );   // DLL export returns void, not int
#import
//
void OnStart()
{
   string message;
   if ( StringInit( message, 64 ) )
   {
      PopMessageString( message );
      int n = StringLen( message );
      MessageBox( message );
   }
}
This works when the message has been properly initialized with the StringInit() function and enough memory has been allocated.
What I need is to allocate the message variable not in the MQL4 script, but inside the DLL.
In the C++ function, it should be something like this:
MT4_EXPFUNC void __stdcall PopMessageString( wchar_t *message )
{
    auto result = L"Hello world !";
    int  n      = wcslen( result );
    // allocate here, but it does not work
    message = new wchar_t[n + 1];      // <--------- DOES NOT WORK
    wcscpy_s( message, n + 1, result );
}
What can I do?

Get acquainted with the Wild Worlds of MQL4. Step 1: forget that a string is a string (it has been a struct since 2014).
The internal representation of the string type is a structure 12 bytes long:
#pragma pack(push, 1)
struct MqlString
{
    int    size;       // 32-bit integer, contains the size of the buffer allocated for the string
    LPWSTR buffer;     // 32-bit address of the buffer containing the string
    int    reserved;   // 32-bit integer, reserved
};
#pragma pack(pop, 1)
So,
having headbanged into this one sunny Sunday afternoon, when the platform underwent a LiveUpdate and suddenly all DLL-call interfaces using a string stopped working, it took a long while to absorb the costs of such a "swift" engineering surprise.
You can re-use the solution I found:
use an array of bytes - uchar[] - and convert the bytes of the returned content on the MQL4 side into a string with the service functions StringToCharArray() resp. CharArrayToString().
The DLL's .mqh header file may also add these tricks and keep the conversions "hidden" from the MQL4 code:
#import <aDLL-file>                                    // "RAW" DLL-call interfaces
   ...
   // Messages:
   int DLL_message_init      ( int &msg[] );
   int DLL_message_init_size ( int &msg[], int size );
   int DLL_message_init_data ( int &msg[], uchar &data[], int size );
   ...
#import
// ------------------------------------------------------------- // "SOFT" wrappers
...
int MQL4_message_init_data( int &msg[], string data, int size )
{
   uchar dataChar[];
   StringToCharArray( data, dataChar );
   return( DLL_message_init_data( msg, dataChar, size ) );
}
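To answer the original question directly: rather than new-ing a wchar_t buffer inside the DLL, let the MQL4 side own a uchar[] buffer and have the DLL fill it. A minimal DLL-side sketch follows; the name PopMessageBytes and its return-bytes-written convention are illustrative assumptions, not an existing interface:

#include <cstring>

// Hypothetical sketch: the DLL never allocates memory for MQL4; it only
// fills a caller-provided byte buffer and reports how many bytes it wrote.
extern "C" __declspec(dllexport) int __stdcall PopMessageBytes(
                                  unsigned char *buffer, int bufferSize )
{
    const char result[] = "Hello world !";
    const int  n        = (int) sizeof( result );   // includes the terminating NUL
    if ( buffer == NULL || bufferSize < n )
        return -1;                                  // caller's buffer is too small
    std::memcpy( buffer, result, (size_t) n );
    return n;                                       // bytes written, incl. the NUL
}

On the MQL4 side, the buffer is declared as uchar msg[]; after the call, CharArrayToString( msg ) turns the returned bytes back into a string.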
Always be careful with the appropriate deallocations, so as not to cause memory leaks.
Always be cautious when a new LiveUpdate changes the code-base and introduces a new compiler plus new documentation. Re-read the whole documentation, as many life-saving details arrive in the help file only after some later update, and many details are hidden or reflected only indirectly in chapters that do not promise such information at first look -- so become as ready as D'Artagnan or a red-scarfed pioneer -- you never know where the next hit comes from :)


Using std::span with buffers - C++

I am exploring a possibly safer and more convenient way to handle buffers, either with
fixed size known at compile time
size known at runtime
What is the advice on using static extent vs. dynamic extent? The answer may seem obvious, but I got confused when testing with the examples below. It looks like I can choose the extent by how I initialize the span. Any thoughts about the code examples? What are the correct ways to initialize the span in the examples?
Note that even though the examples use strings, normal use would be uint8_t or std::byte. The Windows and MS compiler specifics are not crucial.
edit:
Updated code
#define _CRTDBG_MAP_ALLOC
#define _CRTDBG_MAP_ALLOC_NEW
// ... includes
#include <cstdlib>
#include <crtdbg.h>
enum appconsts : uint8_t { buffersize = 0xFF };
HRESULT __stdcall FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/> buffer) noexcept
{
_RPTFN(_CRT_WARN, "FormatBuffer with size of buffer=%d, threadid=0x%8.8X\n", buffer.size(), ::GetCurrentThreadId());
errno_t er = swprintf_s(buffer.data(), buffer.size(), L"test of buffers with std::span, buffer size: %zu", buffer.size());
return er > 0 ? S_OK : E_INVALIDARG;
}
extern "C" int32_t __cdecl
wmain([[maybe_unused]] _In_ int32_t argc,[[maybe_unused]] _In_reads_(argc) _Pre_z_ wchar_t* argv[])
{
_RPTFN(_CRT_WARN, "Executing main thread, argc=%d, threadid=0x%8.8X\n", argc, ::GetCurrentThreadId());
int32_t ir = 0;
wchar_t buffer1[appconsts::buffersize];
ir = FormatBuffer(buffer1);
wchar_t* buffer2 = new wchar_t[appconsts::buffersize];
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer2, appconsts::buffersize));
delete[] buffer2;
std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(appconsts::buffersize);
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer3.get(), appconsts::buffersize));
std::vector<wchar_t> buffer4(appconsts::buffersize);
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer4/*, appconsts::buffersize*/));
_CrtDumpMemoryLeaks();
return ir;
}
New code
Things get a bit clearer with a new version of the example. To get static extent, the size needs to be set in FormatBuffer, and only the fixed-size buffer1 will fit. The cppreference text is a bit confusing. The following gives dynamic extent.
enum appconsts : uint8_t { buffersize = 0xFF };
HRESULT __stdcall FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/> buffer) noexcept
{
_RPTFN(_CRT_WARN, "FormatBuffer with size of buffer=%d, span extent=%d, threadid=0x%8.8X\n",
buffer.size(), buffer.extent == std::dynamic_extent ? -1 : buffer.extent, ::GetCurrentThreadId());
errno_t er = swprintf_s(buffer.data(), buffer.size(), L"test of buffers with std::span, buffer size: %zu", buffer.size());
return er > 0 ? S_OK : E_INVALIDARG;
}
HRESULT __stdcall CreateBuffer(size_t runtimesize) noexcept
{
_RPTFN(_CRT_WARN, "CreateBuffer with runtime size of buffer=%d, threadid=0x%8.8X\n", runtimesize, ::GetCurrentThreadId());
HRESULT hr = S_OK;
wchar_t buffer1[appconsts::buffersize]{};
hr = FormatBuffer(buffer1);
std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(runtimesize /*appconsts::buffersize*/);
hr = FormatBuffer(std::span<wchar_t/*, runtimesize*/>(buffer3.get(), runtimesize));
std::vector<wchar_t> buffer4(appconsts::buffersize);
hr = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer4/*, appconsts::buffersize*/));
return hr;
}
extern "C" int32_t __cdecl
wmain([[maybe_unused]] _In_ int32_t argc,[[maybe_unused]] _In_reads_(argc) _Pre_z_ wchar_t* argv[])
{
_RPTFN(_CRT_WARN, "Executing main thread, argc=%d, threadid=0x%8.8X\n", argc, ::GetCurrentThreadId());
//(void)argc;(void)argv;
int32_t ir = 0;
ir = CreateBuffer(static_cast<size_t>(argc) * appconsts::buffersize);
return ir;
}
(the part on runtime error was removed from the question)
In a nutshell: use dynamic extent, and initialize in the simplest way, like:
wchar_t buffer1[appconsts::buffersize];
ir = FormatBuffer(buffer1);
wchar_t* buffer2 = new wchar_t[appconsts::buffersize];
ir = FormatBuffer({buffer2, appconsts::buffersize}); // won’t compile without the size
delete[] buffer2;
std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(appconsts::buffersize);
ir = FormatBuffer({buffer3.get(), appconsts::buffersize}); // won’t compile without the size
std::vector<wchar_t> buffer4(appconsts::buffersize);
ir = FormatBuffer(buffer4);
Out of these examples, only the first will work if the function expects a fixed-extent span (and only if that extent is exactly the same as the array length). Which is good, as according to the documentation, constructing a fixed-extent std::span with the wrong size is outright UB. Ouch.
A fixed-extent span is only useful if its size is part of the API contract. Like, if your function needs 42 coefficients no matter what, std::span<double, 42> is the right way. But if your function may sensibly work with any buffer size, there is no point in hard-coding the particular size used today.
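For illustration, a minimal sketch of such a contract (the function name and the use of plain summation are made up here):

#include <array>
#include <span>

// The API requires exactly 42 coefficients, so the extent encodes the contract.
double sum_coefficients( std::span<const double, 42> coeffs )
{
    double sum = 0.0;
    for ( double c : coeffs )
        sum += c;
    return sum;
}

int main()
{
    std::array<double, 42> coeffs {};         // size checked at compile time
    double s = sum_coefficients( coeffs );    // a 41-element array would not compile
    return (int) s;
}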
The problem is here:
wchar_t* buffer2 = new wchar_t(appconsts::buffersize);
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer2, appconsts::buffersize));
delete buffer2; // Critical error detected c0000374
new wchar_t(appconsts::buffersize) doesn’t create a buffer of that size. It allocates a single wchar_t and initializes it with appconsts::buffersize as its value. To allocate an array, use new wchar_t[appconsts::buffersize]. And to free it, use delete[] buffer2.
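Side by side, assuming the same constant, the two forms do very different things:

wchar_t* one  = new wchar_t( appconsts::buffersize );   // ONE wchar_t, with value 0xFF
wchar_t* many = new wchar_t[ appconsts::buffersize ];   // 0xFF uninitialized wchar_t's
delete   one;                                           // scalar delete matches scalar new
delete[] many;                                          // array delete matches array new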

SystemC Transaction-Level Modeling: Extract Two Integers from tlm_generic_payload

I am working with the SystemC TLM library. I would like to send a payload with two integers to a module that will perform an operation on those two integers. My question is simply how to set up and decode the payload.
Doulos provides documentation on both setting up and decoding here: https://www.doulos.com/knowhow/systemc/tlm2/tutorial__1/
Setup
tlm::tlm_command cmd = static_cast<tlm::tlm_command>(rand() % 2);
if (cmd == tlm::TLM_WRITE_COMMAND) data = 0xFF000000 | i;
trans->set_command( cmd );
trans->set_address( i );
trans->set_data_ptr( reinterpret_cast<unsigned char*>(&data) );
trans->set_data_length( 4 );
trans->set_streaming_width( 4 );
trans->set_byte_enable_ptr( 0 );
trans->set_dmi_allowed( false );
trans->set_response_status( tlm::TLM_INCOMPLETE_RESPONSE );
socket->b_transport( *trans, delay );
Decode
virtual void b_transport( tlm::tlm_generic_payload& trans, sc_time& delay )
{
tlm::tlm_command cmd = trans.get_command();
sc_dt::uint64 adr = trans.get_address() / 4;
unsigned char* ptr = trans.get_data_ptr();
unsigned int len = trans.get_data_length();
unsigned char* byt = trans.get_byte_enable_ptr();
unsigned int wid = trans.get_streaming_width();
So it looks to me like you would send a pointer to a memory location where there are two integers written.
|-------------------------- int1 ---------------------------|-------------------------- int2 ---------------------------|
| ptr+0x0 | ptr+0x(wid) | ptr+0x(2*wid) | ptr+0x(3*wid) | ptr+0x(4*wid) | ptr+0x(5*wid) | ptr+0x(6*wid) | ptr+0x(7*wid) |
Is my interpretation of this documentation correct?
How could you get those first 4 memory locations [3:0] and combine them into an int32 and how could you get the second 4 [7:4] and turn them into the second integer?
So it looks to me like you would send a pointer to a memory location
where there are two integers written.
Is my interpretation of this documentation correct?
Yes
To get them back you just need to copy them:
int32_t val0, val1;
memcpy(&val0, ptr, sizeof(int32_t));
memcpy(&val1, ptr + sizeof(int32_t), sizeof(int32_t));
or something like
int32_t val[2];
memcpy(val, ptr, sizeof val);
But make sure the initiator keeps the memory behind the pointer valid long enough, e.g. it might be better to avoid keeping the data on the stack. And don't forget to check that the payload's data length attribute has a valid value - you want to detect such issues as soon as possible.
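For completeness, the matching initiator-side setup could look like the sketch below; the buffer lifetime handling and the WRITE command are assumptions for illustration, the rest follows the Doulos example above:

// Sketch: pack two 32-bit integers into a single generic payload.
int32_t payload_data[2] = { 42, 1337 };   // must stay valid across the b_transport call

trans->set_command( tlm::TLM_WRITE_COMMAND );
trans->set_address( 0 );
trans->set_data_ptr( reinterpret_cast<unsigned char*>( payload_data ) );
trans->set_data_length( sizeof payload_data );      // 8 bytes = two int32s
trans->set_streaming_width( sizeof payload_data );  // no streaming
trans->set_byte_enable_ptr( 0 );
trans->set_dmi_allowed( false );
trans->set_response_status( tlm::TLM_INCOMPLETE_RESPONSE );
socket->b_transport( *trans, delay );

On the target side, a defensive check such as trans.get_data_length() == 2 * sizeof(int32_t) before the memcpy catches mismatched payloads early.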

How to fix serialization problems in MQL4?

Today I ran into problems with serialization in MQL4.
I have a method which I imported from a DLL:
In MQL4:
void insertQuery( int id,
string tableName,
double &values[4],
long &times[3],
int volume
);
In DLL:
__declspec(dllexport) void __stdcall insertQuery( int id,
wchar_t *tableName,
double *values,
long *times,
int volume
);
I tested it with these function calls in MQL4:
string a = "bla";
double arr[4] = { 1.1, 1.3, 0.2, 0.9 };
long A[3] = { 19991208, 19991308, 19992208 };
int volume = 1;
insertQuery( idDB, a, arr, A, volume );
Inside this method I collect these values into a file.
C++ :
stringstream stream;
stream << " '";
for (int i = 0; i < 2; ++i) {
    stream << times[i] << "' , '";
}
stream << times[2] << ", ";
for (int i = 0; i < 4; ++i) {
    stream << values[i] << ", ";
}
stream << volume;

wstring table(tableName);
query.append("INSERT INTO ");
query.append(table.begin(), table.end());
query.append(" VALUES (");
query.append(stream.str());
query.append(" )");

std::ofstream out("C:\\Users\\alex\\Desktop\\text.txt");
out << query;
out.close();
But in output file I receive this record:
INSERT INTO bla VALUES ( '19991208' , '0' , '19991308, 1.1, 1.3, 0.2, 0.9, 1 )
So my question is: why do I lose one long value from the array when I receive my record in the DLL?
I tried a lot of ways to solve this problem (I transferred two and three long values, etc.) and I always got the same result: the second long value is lost in serialization. Why?
The problem is caused by the fact that in MQL4 a long is 8 bytes, while a long in (Windows) C++ is 4 bytes.
What you want is a long long in your C++ declaration.
Or you could also pass them as strings, then convert them into the appropriate type within your C++ code.
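A corrected declaration might look like this sketch, using a fixed-width type to make the 8-byte contract explicit:

#include <cstdint>

// MQL4's long is always 8 bytes, so the DLL must take a 64-bit element type;
// with a 4-byte C++ long, every second value lands in the gap between elements.
__declspec(dllexport) void __stdcall insertQuery( int      id,
                                                  wchar_t *tableName,
                                                  double  *values,
                                                  int64_t *times,   // was: long*
                                                  int      volume );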
Well, be careful: New-MQL4.56789 is not a C-compatible language.
The first thing to test is to avoid passing an MQL4 string into a DLL calling interface where a C-language string is really expected.
Since old-MQL4 was silently re-defined into the still-WIP-creeping syntax of New-MQL4, the MQL4 string is not a string, but a struct.
Root-cause [ isolation ]:
Having swallowed the shock of the string/struct trouble, first try, if you can, to test the MQL4/DLL interactions without passing any string, to prove that all the other parameters, passed by value or addressed by reference, find their way into the hands of the DLL function as you expect.
If this works as you wish, proceed to the next step:
How, then, to pass the very data to the expected string representation?
Let me share a dirty hack I used for passing data where the DLL expects strings:
#import "mql4TOOL.dll"
...
int mql4TOOL_msg_init_data ( int &msg[],
uchar &data[],
int size
);
...
#import
...
int tool_msg_init_data ( int &msg[], string data, int size ) { uchar dataChar[]; StringToCharArray( data, dataChar );
return ( mql4TOOL_msg_init_data ( msg, dataChar, size ) );
}
Yes, dirty, but it has worked for years and saved us many tens of man-years of re-engineering on a maintained code-base with heavy dependence on the MQL4/DLL interfacing in massively distributed heterogeneous computing systems.
The last resort:
If all efforts go in vain, go low level: pass a uchar[] as needed, assemble some serialised representation in MQL4, and parse it on the opposite end before processing the intended functionality.
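A sketch of that last-resort shape on the DLL side; the length-prefix framing here is an assumption for illustration, not a prescribed format:

#include <cstdint>
#include <cstring>

// Hypothetical parser for a tiny length-prefixed serialisation sent as uchar[].
extern "C" __declspec(dllexport) int __stdcall ParseBlob(
                               const unsigned char *blob, int blobSize )
{
    if ( blob == NULL || blobSize < 4 )
        return -1;                                    // no room for the length prefix
    int32_t payloadLen = 0;
    std::memcpy( &payloadLen, blob, 4 );              // 4-byte length prefix
    if ( payloadLen < 0 || payloadLen > blobSize - 4 )
        return -2;                                    // malformed frame
    // ... process the bytes blob + 4 .. blob + 4 + payloadLen here ...
    return payloadLen;
}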
Ugly?
Yes, it might look like that, but it keeps you focused on the core functionality and isolates you from any next shift of paradigm, if not only strings cease to be strings et al.

gnuradio source only outputting zeros

I made a custom source block that reads switch values on a ZedBoard. It accesses them via a proc driver that I wrote. /var/log/kern.log reports proper output. The debug printf in the source block reports proper output.
However, pushing the data to a file sink as well as a GUI number sink only reads zeros. Did I not set up the block properly?
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#include <gnuradio/io_signature.h>
#include "switches_impl.h"
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
namespace gr {
namespace zedboard {

switches::sptr
switches::make()
{
    return gnuradio::get_initial_sptr(new switches_impl());
}

/*
 * The private constructor
 */
switches_impl::switches_impl()
    : gr::block("switches",
                gr::io_signature::make(0, 0, 0),
                gr::io_signature::make(1, 1, sizeof(unsigned int *)))
{}

/*
 * Our virtual destructor.
 */
switches_impl::~switches_impl()
{
}

void
switches_impl::forecast(int noutput_items, gr_vector_int &ninput_items_required)
{
    /* <+forecast+> e.g. ninput_items_required[0] = noutput_items */
}

int
switches_impl::general_work(int noutput_items,
                            gr_vector_int &ninput_items,
                            gr_vector_const_void_star &input_items,
                            gr_vector_void_star &output_items)
{
    //const <+ITYPE+> *in = (const <+ITYPE+> *) input_items[0];
    unsigned int *out = (unsigned int *) output_items[0];

    // Do <+signal processing+>
    // Tell runtime system how many input items we consumed on
    // each input stream.
    char buffer[5];
    size_t size = 1;
    size_t nitems = 5;
    FILE* fp;
    fp = fopen("/proc/zedSwitches", "r");
    if (fp == NULL)
    {
        printf("Cannot open for read\n");
        return -1;
    }
    /*
       Expect return format:
       0x00
    */
    fread(buffer, size, nitems, fp);
    fclose(fp);
    out = (unsigned int *) strtoul(buffer, NULL, 0);
    printf("read: 0x%02x", out);
    consume_each(noutput_items);

    // Tell runtime system how many output items we produced.
    return noutput_items;
}

} /* namespace zedboard */
} /* namespace gr */
A pointer is a pointer to data, usually:
unsigned int *out = (unsigned int *) output_items[0];
out refers to the buffer for your output.
But you overwrite that pointer with another pointer:
out=(unsigned int *)strtoul(buffer,NULL,0);
which just bends around your copy of that pointer, and doesn't affect the content of that buffer at all. Basic C!
You probably meant to say something like:
out[0]= strtoul(buffer,NULL,0);
That will put your value into the first element of the buffer.
However, you tell GNU Radio that you produced not just the single item from the line above, but noutput_items of them:
return noutput_items;
That must read
return 1;
when you're only producing a single item, or you must actually produce as many items as you return.
Your consume_each call is nonsensical – GNU Radio Sources are typically instances of gr::sync_block, which means that you'd write a work() instead of a general_work() method as you did.
From the fact alone that this is a general_work and not a work, I'd say you haven't used gr_modtool (with the block type set to source!) to generate the stub for this block – you really should. Again, I'd like to point you to the Guided Tutorials, which really quickly explain the usage of gr_modtool as well as the underlying GNU Radio concepts.
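Putting those fixes together, a corrected block might look roughly like the sketch below, written as a sync_block work() as suggested; the proc-file format is taken from the question:

// Sketch of the fixed source block: produce one item per call, and store the
// parsed value into the output buffer instead of overwriting the pointer.
int
switches_impl::work( int noutput_items,
                     gr_vector_const_void_star &input_items,
                     gr_vector_void_star &output_items )
{
    unsigned int *out = (unsigned int *) output_items[0];

    char buffer[5] = { 0 };
    FILE *fp = fopen( "/proc/zedSwitches", "r" );
    if ( fp == NULL )
        return -1;                                   // abort the flowgraph on error
    fread( buffer, 1, sizeof( buffer ) - 1, fp );    // e.g. "0x00"
    fclose( fp );

    out[0] = (unsigned int) strtoul( buffer, NULL, 0 );  // store the value, not a pointer
    return 1;                                            // exactly one item produced
}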

C++/CLI (.NET) equivalent of Native C++ writing structure to network

I recently inherited a program that mixes C++ and C++/CLI (.NET). It interfaces to other components over the network, and to a driver DLL for some special hardware. However, I am trying to figure out the best way to send the data over the network, as what is currently used seems non-optimal.
The data is stored in a C++ Defined Structure, something like:
struct myCppStructure {
unsigned int field1;
unsigned int field2;
unsigned int dataArray[512];
};
The program works fine when accessing the structure itself from C++/CLI. The problem is that to send it over the network the current code does something like the following:
struct myCppStructure* data;
IntPtr dataPtr(data);
// myNetworkSocket is a NetworkStream cast as a System::IO::Stream^
System::IO::BinaryWriter^ myBinWriter = gcnew BinaryWriter(myNetworkSocket);
__int64 length = sizeof(struct myCppStructure) / sizeof(unsigned __int64);
unsigned __int64* ptr = static_cast<unsigned __int64*>(dataPtr.ToPointer());
for (unsigned int i = 0; i < length; i++)
    myBinWriter->Write(*ptr++);
In normal C++ it'd usually be a call like:
myBinWriter->Write(ptr,length);
But I can't find anything equivalent in C++/CLI. System::IO::BinaryWriter only has basic types and some array<>^ versions of a few of them. Is there nothing more efficient?
P.S. These records are generated many times a second, so doing additional copying (e.g. marshaling) is out of the question.
Note: The original question asked about C#. I failed to realize that what I was thinking of as C# was really "Managed C++" (aka C++/CLI) under .NET. The above has been edited to replace 'C#' references with 'C++/CLI' references - which I am using for any version of Managed C++, though I am notably using .NET 3.5.
Your structure consists of "basic types" and arrays of them. Why can't you just write them sequentially using BinaryWriter? Something like
binWriter.Write(data.field1);
binWriter.Write(data.field2);
for(var i = 0; i < 512; i++)
binWriter.Write(data.dataArray[i]);
What you want to do is find out how the C++ struct is packed, and define a struct with the correct StructLayout attribute.
To define the fixed-length int[], you can define a fixed-size buffer inside it. Note that to use this you will have to mark your project /unsafe.
Now you're ready to convert that struct to a byte[] in two steps:
Pin the array of structs in memory using GCHandle.Alloc - this is fast and shouldn't be a performance bottleneck.
Now use Marshal.Copy (don't worry, this is as fast as a memcpy) with the source IntPtr = handle.AddrOfPinnedObject.
Now dispose the GCHandle and you're ready to write the bytes using the "Write" overload mentioned by Serg Rogovtsev.
Hope this helps!
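As a sketch of that idea in C++/CLI (names are illustrative; a native struct is not on the managed heap, so no GCHandle pinning is needed for it, and only the managed byte array takes part in the copy):

// Sketch: bulk-copy the native struct's bytes into a managed array,
// then hand them to BinaryWriter in a single call.
myCppStructure data = {};                 // the native record to send
int size = sizeof( myCppStructure );
array<System::Byte>^ bytes = gcnew array<System::Byte>( size );

// Marshal::Copy performs a memcpy-like bulk copy from native to managed memory.
System::Runtime::InteropServices::Marshal::Copy(
    System::IntPtr( &data ), bytes, 0, size );

myBinWriter->Write( bytes );              // one Write(array<Byte>^) call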
In C# you could do the following. Start by defining the structure.
[StructLayout ( LayoutKind.Sequential )]
internal unsafe struct hisCppStruct
{
public uint field1;
public uint field2;
public fixed uint dataArray [ 512 ];
}
And then write it using the binary writer as follows.
hisCppStruct @struct = new hisCppStruct ();
@struct.field1 = 1;
@struct.field2 = 2;
@struct.dataArray [ 0 ] = 3;
@struct.dataArray [ 511 ] = 4;
using ( BinaryWriter bw = new BinaryWriter ( File.OpenWrite ( @"C:\temp\test.bin" ) ) )
{
    int structSize = Marshal.SizeOf ( @struct );
    int limit = structSize / sizeof ( uint );
    uint* uintPtr = (uint*) &@struct;
    for ( int i = 0 ; i < limit ; i++ )
        bw.Write ( uintPtr [ i ] );
}
I'm pretty sure you can do exactly the same in managed C++.