Using std::span with buffers - C++

I am exploring a possibly safer and more convenient way to handle buffers with std::span, either with a
fixed size known at compile time, or a
size known only at runtime.
What is the advice on using static extent vs. dynamic extent? The answer may seem obvious, but I got confused when testing with the examples below. It looks like I can choose the extent myself simply by choosing how I initialize the span. Any thoughts on the code examples? What are the correct ways to initialize the span in the examples?
Note that even though the examples use strings, the normal use would be uint8_t or std::byte. The Windows and MS compiler specifics are not crucial.
edit:
Updated code
#define _CRTDBG_MAP_ALLOC
#define _CRTDBG_MAP_ALLOC_NEW
// ... includes
#include <cstdlib>
#include <crtdbg.h>
enum appconsts : uint8_t { buffersize = 0xFF };
HRESULT __stdcall FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/> buffer) noexcept
{
    _RPTFN(_CRT_WARN, "FormatBuffer with size of buffer=%zu, threadid=0x%8.8X\n", buffer.size(), ::GetCurrentThreadId());
    errno_t er = swprintf_s(buffer.data(), buffer.size(), L"test of buffers with std::span, buffer size: %zu", buffer.size());
    return er > 0 ? S_OK : E_INVALIDARG;
}

extern "C" int32_t __cdecl
wmain([[maybe_unused]] _In_ int32_t argc, [[maybe_unused]] _In_reads_(argc) _Pre_z_ wchar_t* argv[])
{
    _RPTFN(_CRT_WARN, "Executing main thread, argc=%d, threadid=0x%8.8X\n", argc, ::GetCurrentThreadId());
    int32_t ir = 0;

    wchar_t buffer1[appconsts::buffersize];
    ir = FormatBuffer(buffer1);

    wchar_t* buffer2 = new wchar_t[appconsts::buffersize];
    ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer2, appconsts::buffersize));
    delete[] buffer2;

    std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(appconsts::buffersize);
    ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer3.get(), appconsts::buffersize));

    std::vector<wchar_t> buffer4(appconsts::buffersize);
    ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer4/*, appconsts::buffersize*/));

    _CrtDumpMemoryLeaks();
    return ir;
}
New code
Things get a bit clearer with a new version of the example. To get a static extent, the size needs to be fixed in FormatBuffer's parameter type, and then only the fixed-size buffer1 will fit. The cppreference text is a bit confusing. The following gives dynamic extent.
enum appconsts : uint8_t { buffersize = 0xFF };

HRESULT __stdcall FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/> buffer) noexcept
{
    _RPTFN(_CRT_WARN, "FormatBuffer with size of buffer=%zu, span extent=%d, threadid=0x%8.8X\n",
           buffer.size(), buffer.extent == std::dynamic_extent ? -1 : static_cast<int>(buffer.extent), ::GetCurrentThreadId());
    errno_t er = swprintf_s(buffer.data(), buffer.size(), L"test of buffers with std::span, buffer size: %zu", buffer.size());
    return er > 0 ? S_OK : E_INVALIDARG;
}
HRESULT __stdcall CreateBuffer(size_t runtimesize) noexcept
{
    _RPTFN(_CRT_WARN, "CreateBuffer with runtime size of buffer=%zu, threadid=0x%8.8X\n", runtimesize, ::GetCurrentThreadId());
    HRESULT hr = S_OK;

    wchar_t buffer1[appconsts::buffersize]{};
    hr = FormatBuffer(buffer1);

    std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(runtimesize /*appconsts::buffersize*/);
    hr = FormatBuffer(std::span<wchar_t/*, runtimesize*/>(buffer3.get(), runtimesize));

    std::vector<wchar_t> buffer4(appconsts::buffersize);
    hr = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer4/*, appconsts::buffersize*/));

    return hr;
}
extern "C" int32_t __cdecl
wmain([[maybe_unused]] _In_ int32_t argc,[[maybe_unused]] _In_reads_(argc) _Pre_z_ wchar_t* argv[])
{
_RPTFN(_CRT_WARN, "Executing main thread, argc=%d, threadid=0x%8.8X\n", argc, ::GetCurrentThreadId());
//(void)argc;(void)argv;
int32_t ir = 0;
ir = CreateBuffer(static_cast<size_t>(argc) * appconsts::buffersize);
return ir;
}
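For comparison, a static-extent variant of FormatBuffer would spell the size out in the parameter type itself; then only the fixed-size buffer1 converts implicitly, and constructing the span from a pointer with the wrong length would be undefined behaviour. A minimal sketch (FormatBufferFixed is a hypothetical name, not part of the original code):
HRESULT __stdcall FormatBufferFixed(std::span<wchar_t, appconsts::buffersize> buffer) noexcept
{
    // The extent is now part of the type, so buffer.size() is a compile-time constant.
    errno_t er = swprintf_s(buffer.data(), buffer.size(),
                            L"fixed-extent span, size: %zu", buffer.size());
    return er > 0 ? S_OK : E_INVALIDARG;
}
// wchar_t buffer1[appconsts::buffersize] converts implicitly: FormatBufferFixed(buffer1);
// A heap pointer needs an explicit fixed-extent span, and lying about the length is UB:
// FormatBufferFixed(std::span<wchar_t, appconsts::buffersize>(buffer3.get(), appconsts::buffersize));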

(the part on runtime error was removed from the question)
In a nutshell: use dynamic extent, and initialize in the simplest way, like:
wchar_t buffer1[appconsts::buffersize];
ir = FormatBuffer(buffer1);
wchar_t* buffer2 = new wchar_t[appconsts::buffersize];
ir = FormatBuffer({buffer2, appconsts::buffersize}); // won’t compile without the size
delete[] buffer2;
std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(appconsts::buffersize);
ir = FormatBuffer({buffer3.get(), appconsts::buffersize}); // won’t compile without the size
std::vector<wchar_t> buffer4(appconsts::buffersize);
ir = FormatBuffer(buffer4);
Out of these examples, only the first will work if the function expects a fixed-extent span (and only if that extent is exactly the same as the array length). Which is good, because according to the documentation, constructing a fixed-extent std::span with the wrong size is outright UB. Ouch.
A fixed-extent span is only useful if its size is part of the API contract. Like, if your function needs 42 coefficients no matter what, std::span<double, 42> is the right way. But if your function may sensibly work with any buffer size, there is no point in hard-coding the particular size used today.
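For instance (a sketch with hypothetical names, illustrating the contract rather than any particular library):
// Part of the API contract: exactly 42 coefficients, checked at compile time.
double Evaluate(std::span<const double, 42> coefficients)
{
    double sum = 0.0;
    for (double c : coefficients) sum += c;
    return sum;
}

// Works with whatever size the caller happens to have at runtime.
void Fill(std::span<double> buffer)
{
    for (double& v : buffer) v = 0.0;
}

// std::array<double, 42> coeffs{};  Evaluate(coeffs);   // OK: extent matches the array length
// std::vector<double> data(100);    Fill(data);         // OK: dynamic extent adapts at runtime
// Evaluate(data);                   // does not compile: size not known at compile time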

The problem is here:
wchar_t* buffer2 = new wchar_t(appconsts::buffersize);
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer2, appconsts::buffersize));
delete buffer2; // Critical error detected c0000374
new wchar_t(appconsts::buffersize) doesn’t create a buffer of that size. It allocates a single wchar_t and initializes it with appconsts::buffersize as its value. To allocate an array, use new wchar_t[appconsts::buffersize]. And to free it, use delete[] buffer2.
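For completeness, the corrected allocation, or a container that avoids manual delete altogether (both forms already appear in the question's own code):
wchar_t* buffer2 = new wchar_t[appconsts::buffersize];   // array new ...
ir = FormatBuffer({buffer2, appconsts::buffersize});
delete[] buffer2;                                         // ... paired with array delete

std::vector<wchar_t> buffer(appconsts::buffersize);      // or let the container manage the memory
ir = FormatBuffer(buffer);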

Related

MapViewOfFile read from a given position

I need to read a large file in parts into a limited buffer. My code works, but it always reads from the beginning. I think I need to use dwFileOffsetHigh and dwFileOffsetLow somehow, but I can't figure out how. Mapper_Winapi_Uptr is a unique_ptr with a custom deleter; if necessary I can post its code.
System: 64bit Win10.
const std::vector<BYTE>& ReadFile(size_t pos) {
    memory = Mapper_Winapi_Uptr{ static_cast<BYTE*>(MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, bufferSize)) };
    std::memcpy(&data[0], memory.get(), bufferSize);
    return data;
}
You are mapping the view at file offset 0, ignoring your pos parameter which is presumably the desired file offset.
MapViewOfFile() takes a 64bit offset as input, split into 32bit low and high values. A size_t may be a 32bit or 64bit type, depending on compiler and platform. You can put your desired offset into a ULARGE_INTEGER first, that will give you the low and high values you can then give to MapViewOfFile().
Note that the file offset you give to MapViewOfFile() must be a multiple of the system allocation granularity. See Creating a View Within a File on MSDN for details on how to handle that.
Try something like this:
SYSTEM_INFO SysInfo;
GetSystemInfo(&SysInfo);
DWORD SysGran = SysInfo.dwAllocationGranularity;

...

const std::vector<BYTE>& ReadFile(size_t pos)
{
    size_t MapViewStart = (pos / SysGran) * SysGran;
    DWORD MapViewSize = (pos % SysGran) + bufferSize;
    DWORD ViewDelta = pos - MapViewStart;

    ULARGE_INTEGER ulOffset;
    ulOffset.QuadPart = MapViewStart;

    // Map a view starting at the granularity-aligned offset, large enough to cover [pos, pos + bufferSize).
    memory = Mapper_Winapi_Uptr{ static_cast<BYTE*>(MapViewOfFile(mapping, FILE_MAP_READ, ulOffset.HighPart, ulOffset.LowPart, MapViewSize)) };
    if (!memory.get()) {
        // error handling ...
    }

    std::memcpy(&data[0], &(memory.get())[ViewDelta], bufferSize);
    return data;
}

MQL4 C++ Dll change string argument in function call

Here is my code for a MetaTraderWrapper.dll:
#define MT4_EXPFUNC __declspec(dllexport)

MT4_EXPFUNC void __stdcall PopMessageString(wchar_t *message)
{
    auto result = L"Hello world !";
    int n = wcslen( result );
    wcscpy_s( message, n + 1, result );
}
On the MQL4-Caller side this Script is used:
#property strict
#import "MetaTraderWrapper.dll"
    int PopMessageString( string & );
#import
//
void OnStart(){
    string message;
    if ( StringInit( message, 64 ) ){
        PopMessageString( message );
        int n = StringLen( message );
        MessageBox( message );
    }
}
It works this way when the message has been properly initialized with the StringInit() function and enough memory has been allocated.
What I need to do is allocate the message variable not in the MQL4 script, but within the DLL.
In the C++ function, it should be something like this:
MT4_EXPFUNC void __stdcall PopMessageString(wchar_t *message)
{
    auto result = L"Hello world !";
    int n = wcslen( result );
    // allocate here, but does not work
    message = new wchar_t[n + 1]; // <--------- DOES NOT WORK
    //
    wcscpy_s( message, n + 1, result );
}
What can I do?
Get acquainted with the Wild Worlds of MQL4. Step 1: forget that a string is a string (it has been a struct ... since 2014).
The internal representation of the string type is a structure 12 bytes long:
#pragma pack(push,1)
struct MqlString
{
    int    size;     // 32-bit integer, contains size of the buffer allocated for the string
    LPWSTR buffer;   // 32-bit address of the buffer containing the string
    int    reserved; // 32-bit integer, reserved
};
#pragma pack(pop,1)
So,
having headbanged into this one sunny Sunday afternoon, when the platform underwent a LiveUpdate and suddenly all DLL-call interfaces using a string stopped working, it took a long time to absorb the costs of such a "swift" engineering surprise.
You can re-use the solution found there:
use an array of bytes, uchar[], and convert the returned content appropriately on the MQL4 side into a string with the service functions StringToCharArray() resp. CharArrayToString().
The DLL .mqh header file may also add these tricks and make the conversions "hidden" from the MQL4 code:
#import <aDLL-file>                                       // "RAW"-DLL-call-interfaces
...
// Messages:
int DLL_message_init      ( int &msg[] );
int DLL_message_init_size ( int &msg[], int size );
int DLL_message_init_data ( int &msg[], uchar &data[], int size );
...
#import
// ------------------------------------------------------ // "SOFT"-wrappers
...
int MQL4_message_init_data ( int &msg[], string data, int size ) {
    uchar dataChar[];
    StringToCharArray( data, dataChar );
    return ( DLL_message_init_data ( msg, dataChar, size ) );
}
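Applied to the original PopMessageString case, the DLL side then takes a caller-allocated uchar buffer plus its size, fills it, and the MQL4 side turns the bytes back into a string with CharArrayToString(). A minimal sketch (PopMessageChars is a hypothetical name, not an existing export):
#include <cstring>

#define MT4_EXPFUNC __declspec(dllexport)

// The MQL4 caller owns the uchar[] buffer; the DLL only fills it,
// so no allocation crosses the DLL boundary.
MT4_EXPFUNC int __stdcall PopMessageChars(unsigned char* buffer, int bufferSize)
{
    const char result[] = "Hello world !";
    const int n = static_cast<int>(sizeof(result));   // length including the terminating NUL
    if (buffer == nullptr || bufferSize < n)
        return 0;                                     // buffer too small, nothing copied
    std::memcpy(buffer, result, n);
    return n - 1;                                     // characters written, excluding the NUL
}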
Always be careful with appropriate deallocations, so as not to cause memory leaks.
Always be cautious when a new LiveUpdate changes the code base and introduces a new compiler + new documentation. Re-read the whole documentation, as many life-saving details make it into the help file only after some later update, and many details are hidden or reflected indirectly in chapters that do not promise such information at first glance -- so become as ready as D'Artagnan or a red-scarfed pioneer -- you never know where the next hit comes from :)

GetAdaptersInfo crashing

I'm currently trying to do some hardware generation for a friend of mine and I noticed that GetAdaptersInfo behaves rather weirdly. According to MSDN, pOutBufLen should point to a variable holding the value of sizeof(IP_ADAPTER_INFO) (640). But when I use that value, it returns 111 (ERROR_BUFFER_OVERFLOW) and sets outBufLen to 2560. When I then call the function with outBufLen set to 2560, it just crashes.
Minimal reproduction code:
#include <windows.h>
#include <Iphlpapi.h>

int main()
{
    IP_ADAPTER_INFO adapter_inf;
    unsigned long int outBufLen = sizeof(IP_ADAPTER_INFO);

    GetAdaptersInfo(nullptr, &outBufLen);      // returns 111 (ERROR_BUFFER_OVERFLOW) and sets outBufLen to 2560
    GetAdaptersInfo(&adapter_inf, &outBufLen); // crash during this call

    return 0;
}
Don't know if it matters but 64-bit Windows 8 here.
GetAdaptersInfo(nullptr, &outBufLen);
After this returns a value in outBufLen you are expected to pass a buffer of that length in the subsequent call. You do not do that, hence the runtime error.
You need to allocate the pAdapterInfo dynamically using the length returned in outBufLen.
ULONG outBufLen = 0;
if (GetAdaptersInfo(nullptr, &outBufLen) != ERROR_BUFFER_OVERFLOW) {
    // handle error
}

PIP_ADAPTER_INFO pAdapterInfo = (PIP_ADAPTER_INFO) malloc(outBufLen);
if (GetAdaptersInfo(pAdapterInfo, &outBufLen) != ERROR_SUCCESS) {
    // handle error
}
I've used malloc here, and a C style cast, but you might prefer to use new and a C++ style cast. I didn't do that through my own lack of familiarity.
Obviously you need to free the memory when you are finished with it.
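A C++-flavoured variant (not from the original answer) could use a std::vector<BYTE> as the backing storage, so the memory is released automatically:
#include <windows.h>
#include <iphlpapi.h>   // link with Iphlpapi.lib
#include <vector>

int main()
{
    ULONG outBufLen = 0;
    // First call only reports the required buffer size.
    if (GetAdaptersInfo(nullptr, &outBufLen) != ERROR_BUFFER_OVERFLOW)
        return 1;

    std::vector<BYTE> storage(outBufLen);
    PIP_ADAPTER_INFO pAdapterInfo = reinterpret_cast<PIP_ADAPTER_INFO>(storage.data());

    // Second call fills the buffer; adapters form a linked list via the Next member.
    if (GetAdaptersInfo(pAdapterInfo, &outBufLen) != ERROR_SUCCESS)
        return 1;

    return 0;   // storage frees itself when it goes out of scope
}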

Why is my memcpy from a vector of bytes into aligned memory not actually copying anything?

__m128i* pData = reinterpret_cast<__m128i*>(
    _aligned_malloc(
        128,
        16));

std::vector<byte> sectorBytes = dataSet.GetSectorBytes(index);
index &= 0xfff;
memcpy(pData, &sectorBytes[index], 128);
As you can see, I'm trying to copy 128 bytes from sectorBytes (which has no guarantee of 16-byte alignment) into pData, which does. Unfortunately, after setting a breakpoint to check, I've found that while sectorBytes has exactly what I'd anticipated it to have, pData[0] through pData[7] contain only zeroes... so nothing is actually getting copied.
I don't understand why nothing's being copied... why is this happening?
Of course, the larger task I'm trying to accomplish is taking 128 bytes from a file at a specified offset, and then performing _mm_xor_si128() on them without an access violation popping up from trying to do simd operations on data that isn't 16-byte-aligned. In the interest of facilitating that discussion, here's GetSectorBytes():
std::vector<byte> GetSectorBytes(
    _In_ const uint32_t index) const throw()
{
    std::vector<byte> buffer(0x1000);
    int indexOuter = index & 0xfffff000;

    _OVERLAPPED positionalData;
    positionalData.Offset = indexOuter;
    positionalData.OffsetHigh = 0;
    positionalData.hEvent = 0;

    ReadFile(
        hMappedFile,
        &buffer[0],
        0x1000,
        NULL,
        &positionalData);

    return buffer;
}
hMappedFile was created with CreateFile2 with the FILE_FLAG_NO_BUFFERING flag set.

CreateFile2, WriteFile, and ReadFile: how can I enforce 16 byte alignment?

I'm creating and writing a file with CreateFile2 and WriteFile, then later using ReadFile to read 16 bytes at a time into an __m128i and performing SIMD operations on it. It works fine in debug mode, but throws an access violation (0xc0000005) in release mode. In my experience, that happens when I'm trying to shove non-16-byte-aligned stuff into 16-byte-aligned stuff. However, I'm unsure where the lack of 16-byte alignment is first rearing its ugly head.
#define simd __m128i
Is it in the CreateFile2() call?
_CREATEFILE2_EXTENDED_PARAMETERS extend = { 0 };
extend.dwSize = sizeof(CREATEFILE2_EXTENDED_PARAMETERS);
extend.dwFileAttributes = FILE_ATTRIBUTE_NORMAL;
extend.dwFileFlags = /*FILE_FLAG_NO_BUFFERING |*/ FILE_FLAG_OVERLAPPED;
extend.dwSecurityQosFlags = SECURITY_ANONYMOUS;
extend.lpSecurityAttributes = nullptr;
extend.hTemplateFile = nullptr;

hMappedFile = CreateFile2(
    testFileName.c_str(),
    GENERIC_READ | GENERIC_WRITE,
    0,
    OPEN_ALWAYS,
    &extend);
...in the WriteFile() call?
_OVERLAPPED positionalData;
positionalData.Offset = 0;
positionalData.OffsetHigh = 0;
positionalData.hEvent = 0;

bool writeCheck = WriteFile(
    hMappedFile,
    &buffer[0],
    vSize,
    NULL,
    &positionalData);
...in the later ReadFile() call?
const simd* FileNodePointer(
    _In_ const uint32_t index) const throw()
{
    std::vector<simd> Node(8);

    _OVERLAPPED positionalData;
    positionalData.Offset = index;
    positionalData.OffsetHigh = 0;
    positionalData.hEvent = 0;

    ReadFile(
        hMappedFile,
        (LPVOID)&Node[0],
        128,
        NULL,
        &positionalData);

    return reinterpret_cast<const simd*>(&Node[0]);
}
How can I enforce 16-byte-alignment here?
Thanks!
TL;DR You have a classic "use after free" error.
None of these functions require 16 byte alignment. If buffering is enabled, they don't care about alignment at all, and if direct I/O is enabled, they require page alignment which is much more restrictive than 16 bytes.
If your data buffer is unaligned, it's because you created it that way. The file I/O is not moving your buffer in memory.
But your access violation is not caused by alignment problems at all, it is the dangling pointer you return from FileNodePointer:
return reinterpret_cast<const simd*>(&Node[0]);
That's a pointer into the contents of a vector with automatic lifetime; the vector's destructor runs during the function return process and frees the memory containing the data you just read from the file.
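One way to fix it (a sketch, not from the original answer) is to return the data by value so the storage outlives the call; assuming C++17, the default allocator honours alignof(__m128i) == 16, so the elements stay suitably aligned for _mm_xor_si128(). ReadFileNode is a hypothetical free function standing in for the member:
#include <windows.h>
#include <emmintrin.h>
#include <cstdint>
#include <vector>

// Return the vector by value instead of a pointer into it: the caller owns
// the storage, so nothing dangles when the function returns.
std::vector<__m128i> ReadFileNode(HANDLE hMappedFile, uint32_t index)
{
    std::vector<__m128i> node(8);     // 8 * 16 = 128 bytes, 16-byte-aligned elements

    OVERLAPPED positionalData{};
    positionalData.Offset = index;

    ReadFile(hMappedFile, node.data(), 128, NULL, &positionalData);
    return node;                       // moved or elided; the data stays valid in the caller
}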