GetAdaptersInfo crashing - c++

I'm currently trying to do some hardware generation for a friend of mine, and I noticed that GetAdaptersInfo behaves strangely. According to MSDN, pOutBufLen should point to a variable holding sizeof(IP_ADAPTER_INFO) (640). But when I use that value, the call returns 111 (ERROR_BUFFER_OVERFLOW) and sets outBufLen to 2560. When I then call the function with outBufLen set to 2560, it just crashes.
Minimal reproduction code:
#include <windows.h>
#include <Iphlpapi.h>
int main()
{
IP_ADAPTER_INFO adapter_inf;
unsigned long int outBufLen = sizeof(IP_ADAPTER_INFO);
GetAdaptersInfo(nullptr, &outBufLen); // returning 111 (ERROR_BUFFER_OVERFLOW) and setting outBufLen to 2560
GetAdaptersInfo(&adapter_inf, &outBufLen); // crash during this call
return 0;
}
Don't know if it matters but 64-bit Windows 8 here.

GetAdaptersInfo(nullptr, &outBufLen);
After this returns a value in outBufLen you are expected to pass a buffer of that length in the subsequent call. You do not do that, hence the runtime error.
You need to allocate the pAdapterInfo dynamically using the length returned in outBufLen.
ULONG outBufLen = 0;
if (GetAdaptersInfo(nullptr, &outBufLen) != ERROR_BUFFER_OVERFLOW)
{
    // handle error
}
PIP_ADAPTER_INFO pAdapterInfo = (PIP_ADAPTER_INFO) malloc(outBufLen);
if (GetAdaptersInfo(pAdapterInfo, &outBufLen) != ERROR_SUCCESS)
{
    // handle error
}
I've used malloc here, and a C-style cast, but you might prefer to use new and a C++-style cast; I avoided that only through my own lack of familiarity.
Obviously you need to free the memory when you are finished with it.
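If the manual free bookkeeping is a concern, the malloc'd buffer can be handed to a std::unique_ptr with a custom deleter so it is released automatically, even on an early return. A minimal sketch (malloc_unique and FreeDeleter are made-up helper names, not part of any API):

```cpp
#include <cstddef>
#include <cstdlib>
#include <memory>

// Deleter that calls free(), matching a malloc() allocation.
struct FreeDeleter {
    void operator()(void* p) const { std::free(p); }
};

// Allocate `bytes` bytes with malloc and return them owned by a unique_ptr,
// so the buffer is freed automatically when it goes out of scope.
template <typename T>
std::unique_ptr<T, FreeDeleter> malloc_unique(std::size_t bytes) {
    return std::unique_ptr<T, FreeDeleter>(static_cast<T*>(std::malloc(bytes)));
}
```

The buffer returned this way can be passed to GetAdaptersInfo via .get() just like the raw pointer above.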

Using std::span with buffers - C++

I am exploring a possibly safer and more convenient way to handle buffers, either with
fixed size known at compile time
size known at runtime
What is the advice on using static extent vs. dynamic extent? The answer may seem obvious, but I got confused when testing the examples below. It looks like I can choose the extent myself through how I initialize the span. Any thoughts on the code examples? What are the correct ways to initialize the span in each of them?
Note that even though the examples use strings, normal use would be uint8_t or std::byte. The Windows and MS compiler specifics are not crucial.
edit:
Updated code
#define _CRTDBG_MAP_ALLOC
#define _CRTDBG_MAP_ALLOC_NEW
// ... includes
#include <cstdlib>
#include <crtdbg.h>
enum appconsts : uint8_t { buffersize = 0xFF };
HRESULT __stdcall FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/> buffer) noexcept
{
_RPTFN(_CRT_WARN, "FormatBuffer with size of buffer=%zu, threadid=0x%8.8X\n", buffer.size(), ::GetCurrentThreadId());
errno_t er = swprintf_s(buffer.data(), buffer.size(), L"test of buffers with std::span, buffer size: %zu", buffer.size());
return er > 0 ? S_OK : E_INVALIDARG;
}
extern "C" int32_t __cdecl
wmain([[maybe_unused]] _In_ int32_t argc,[[maybe_unused]] _In_reads_(argc) _Pre_z_ wchar_t* argv[])
{
_RPTFN(_CRT_WARN, "Executing main thread, argc=%d, threadid=0x%8.8X\n", argc, ::GetCurrentThreadId());
int32_t ir = 0;
wchar_t buffer1[appconsts::buffersize];
ir = FormatBuffer(buffer1);
wchar_t* buffer2 = new wchar_t[appconsts::buffersize];
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer2, appconsts::buffersize));
delete[] buffer2;
std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(appconsts::buffersize);
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer3.get(), appconsts::buffersize));
std::vector<wchar_t> buffer4(appconsts::buffersize);
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer4/*, appconsts::buffersize*/));
_CrtDumpMemoryLeaks();
return ir;
}
New code
Things get a bit clearer with a new version of the example. To get static extent, the size needs to be set in FormatBuffer, and only the fixed-size buffer1 will fit. The cppreference text is a bit confusing. The following gives dynamic extent.
enum appconsts : uint8_t { buffersize = 0xFF };
HRESULT __stdcall FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/> buffer) noexcept
{
_RPTFN(_CRT_WARN, "FormatBuffer with size of buffer=%zu, span extent=%d, threadid=0x%8.8X\n",
buffer.size(), buffer.extent == std::dynamic_extent ? -1 : static_cast<int>(buffer.extent), ::GetCurrentThreadId());
errno_t er = swprintf_s(buffer.data(), buffer.size(), L"test of buffers with std::span, buffer size: %zu", buffer.size());
return er > 0 ? S_OK : E_INVALIDARG;
}
HRESULT __stdcall CreateBuffer(size_t runtimesize) noexcept
{
_RPTFN(_CRT_WARN, "CreateBuffer with runtime size of buffer=%d, threadid=0x%8.8X\n", runtimesize, ::GetCurrentThreadId());
HRESULT hr = S_OK;
wchar_t buffer1[appconsts::buffersize]{};
hr = FormatBuffer(buffer1);
std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(runtimesize /*appconsts::buffersize*/);
hr = FormatBuffer(std::span<wchar_t/*, runtimesize*/>(buffer3.get(), runtimesize));
std::vector<wchar_t> buffer4(appconsts::buffersize);
hr = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer4/*, appconsts::buffersize*/));
return hr;
}
extern "C" int32_t __cdecl
wmain([[maybe_unused]] _In_ int32_t argc,[[maybe_unused]] _In_reads_(argc) _Pre_z_ wchar_t* argv[])
{
_RPTFN(_CRT_WARN, "Executing main thread, argc=%d, threadid=0x%8.8X\n", argc, ::GetCurrentThreadId());
//(void)argc;(void)argv;
int32_t ir = 0;
ir = CreateBuffer(static_cast<size_t>(argc) * appconsts::buffersize);
return ir;
}
(the part on runtime error was removed from the question)
In a nutshell: use dynamic extent, and initialize in the simplest way, like:
wchar_t buffer1[appconsts::buffersize];
ir = FormatBuffer(buffer1);
wchar_t* buffer2 = new wchar_t[appconsts::buffersize];
ir = FormatBuffer({buffer2, appconsts::buffersize}); // won’t compile without the size
delete[] buffer2;
std::unique_ptr<wchar_t[]> buffer3 = std::make_unique<wchar_t[]>(appconsts::buffersize);
ir = FormatBuffer({buffer3.get(), appconsts::buffersize}); // won’t compile without the size
std::vector<wchar_t> buffer4(appconsts::buffersize);
ir = FormatBuffer(buffer4);
Out of these examples, only the first will work if the function expects a fixed-extent span (and only if that extent is exactly the array length). Which is good, because according to the documentation, constructing a fixed-extent std::span with the wrong size is outright UB. Ouch.
Fixed-extent span is only useful if its size is a part of the API contract. Like, if your function needs 42 coefficients no matter what, std::span<double, 42> is the right way. But if your function may sensibly work with any buffer size, there is no point in hard-coding the particular size used today.
The problem is here:
wchar_t* buffer2 = new wchar_t(appconsts::buffersize);
ir = FormatBuffer(std::span<wchar_t/*, appconsts::buffersize*/>(buffer2, appconsts::buffersize));
delete buffer2; // Critical error detected c0000374
new wchar_t(appconsts::buffersize) doesn’t create a buffer of that size. It allocates a single wchar_t and initializes it with appconsts::buffersize as a value. To allocate an array, use new wchar_t[appconsts::buffersize]. And to free it, use delete[] buffer2.
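A small self-contained sketch of the difference (the function name is illustrative):

```cpp
// new T(n) vs. new T[n]: the first allocates ONE object initialized to the
// value n; the second allocates an array of n objects.
bool demo_new_forms() {
    wchar_t* one = new wchar_t(42);     // a single wchar_t holding the value 42
    bool ok = (*one == 42);
    delete one;                         // scalar delete matches scalar new

    wchar_t* many = new wchar_t[42]{};  // 42 value-initialized (zeroed) wchar_t
    ok = ok && (many[41] == 0);
    delete[] many;                      // array delete matches array new
    return ok;
}
```

Mixing the forms (deleting an array allocation with scalar delete, as in the snippet above) corrupts the heap, which is exactly what the c0000374 critical error reports.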

RtlCaptureStackBackTrace is capturing no frame

I am using RtlCaptureStackBackTrace in my kernel-mode driver, trying to get the call trace, but it is capturing zero frames. The code is:
PVOID *stackTrace = NULL;
PULONG traceHash = NULL;
USHORT capturedFrames = 0;
capturedFrames = RtlCaptureStackBackTrace(0, 10, stackTrace, traceHash);
When I check stackTrace, it is NULL.
You provide stackTrace as NULL, but according to the documentation:
Caller-allocated array in which pointers to the return addresses
captured from the current stack trace are returned.
This means that you need to allocate an array for the BackTrace parameter.
For Example:
#define ARRAY_SIZE 10
PVOID stackTrace[ARRAY_SIZE] = {0};
USHORT capturedFrames = 0;
capturedFrames = RtlCaptureStackBackTrace(0, ARRAY_SIZE, stackTrace, NULL);
Note: don't use a fixed-size array like this in production code; it is better to use dynamic allocation via ExAllocatePoolWithTag().
Also look at this Microsoft Sample

CreateFile2, WriteFile, and ReadFile: how can I enforce 16 byte alignment?

I'm creating and writing a file with CreateFile2 and WriteFile, then later using ReadFile to read 16 bytes at a time into an __m128i and performing SIMD operations on it. It works fine in debug mode, but throws an access violation (0xc0000005) in release mode. In my experience, that happens when I'm trying to shove non-16-byte-aligned data into something that requires 16-byte alignment. However, I'm unsure where the lack of 16-byte alignment is first rearing its ugly head.
#define simd __m128i
Is it in the CreateFile2() call?
_CREATEFILE2_EXTENDED_PARAMETERS extend = { 0 };
extend.dwSize = sizeof(CREATEFILE2_EXTENDED_PARAMETERS);
extend.dwFileAttributes = FILE_ATTRIBUTE_NORMAL;
extend.dwFileFlags = /*FILE_FLAG_NO_BUFFERING |*/ FILE_FLAG_OVERLAPPED;
extend.dwSecurityQosFlags = SECURITY_ANONYMOUS;
extend.lpSecurityAttributes = nullptr;
extend.hTemplateFile = nullptr;
hMappedFile = CreateFile2(
testFileName.c_str(),
GENERIC_READ | GENERIC_WRITE,
0,
OPEN_ALWAYS,
&extend);
...in the WriteFile() call?
_OVERLAPPED positionalData;
positionalData.Offset = 0;
positionalData.OffsetHigh = 0;
positionalData.hEvent = 0;
bool writeCheck = WriteFile(
hMappedFile,
&buffer[0],
vSize,
NULL,
&positionalData);
...in the later ReadFile() call?
const simd* FileNodePointer(
_In_ const uint32_t index) const throw()
{
std::vector<simd> Node(8);
_OVERLAPPED positionalData;
positionalData.Offset = index;
positionalData.OffsetHigh = 0;
positionalData.hEvent = 0;
ReadFile(
hMappedFile,
(LPVOID)&Node[0],
128,
NULL,
&positionalData);
return reinterpret_cast<const simd*>(&Node[0]);
}
How can I enforce 16-byte-alignment here?
Thanks!
TL;DR You have a classic "use after free" error.
None of these functions require 16 byte alignment. If buffering is enabled, they don't care about alignment at all, and if direct I/O is enabled, they require page alignment which is much more restrictive than 16 bytes.
If your data buffer is unaligned, it's because you created it that way. The file I/O is not moving your buffer in memory.
But your access violation is not caused by alignment problems at all, it is the dangling pointer you return from FileNodePointer:
return reinterpret_cast<const simd*>(&Node[0]);
That's a pointer into the contents of a vector with automatic lifetime; the vector's destructor runs as the function returns and frees the memory containing the data you just read from the file.
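A portable sketch of the fix: return the vector (and therefore its storage) by value, so it outlives the call. ReadNode and file are illustrative stand-ins for the ReadFile plumbing, not real API names:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Copy one 16-byte node out of `file` (standing in for the data ReadFile
// would deliver) and return it by value. The vector is moved out on
// return, so the caller never holds a pointer into freed memory.
std::vector<std::uint8_t> ReadNode(const std::vector<std::uint8_t>& file,
                                   std::size_t offset) {
    std::vector<std::uint8_t> node(16);
    for (std::size_t i = 0; i < node.size() && offset + i < file.size(); ++i)
        node[i] = file[offset + i];
    return node;  // safe: ownership transfers to the caller
}
```

The caller can still reinterpret_cast node.data() to const __m128i* while the vector is alive; the key point is that something with a longer lifetime owns the bytes.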

C++ memory allocation for windows

So I'm reading Windows via C/C++, fifth edition, which was released before C++11 and so lacks some of the newer data types and methods, but is touted as a great book on Windows.
I am just learning Windows development and C++, and when I posted questions about file operations with code samples from the book, I got feedback that allocating buffers with malloc is no longer good practice, since it requires freeing the memory manually; I should use vectors or strings instead.
That is ok. But what is the case with Windows's own data types? Here is a code sample from the book:
//initialization omitted
BOOL bResult = GetLogicalProcessorInformation(pBuffer, &dwSize);
if (GetLastError() != ERROR_INSUFFICIENT_BUFFER) {
_tprintf(TEXT("Impossible to get processor information\n"));
return;
}
pBuffer = (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION)malloc(dwSize);
bResult = GetLogicalProcessorInformation(pBuffer, &dwSize);
Is there a better solution for this type of query than using malloc to allocate the proper amount of memory?
Or is declaring a vector of SYSTEM_LOGICAL_PROCESSOR_INFORMATION structures the way to go?
The Win32 API is sometimes a pain to use, but you could always use the raw bytes in a std::vector<char> as a SYSTEM_LOGICAL_PROCESSOR_INFORMATION:
std::vector<char> buffer(sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
DWORD buffersize = (DWORD)buffer.size(); // the API takes a PDWORD, not a size_t*
SYSTEM_LOGICAL_PROCESSOR_INFORMATION *ptr
= (SYSTEM_LOGICAL_PROCESSOR_INFORMATION *)&(buffer[0]);
BOOL bResult = GetLogicalProcessorInformation(ptr, &buffersize);
if (GetLastError() == ERROR_INSUFFICIENT_BUFFER)
{
buffer.resize(buffersize);
ptr = (SYSTEM_LOGICAL_PROCESSOR_INFORMATION *)&(buffer[0]);
bResult = GetLogicalProcessorInformation(ptr, &buffersize);
}
Just be aware that the value of &(buffer[0]) may change after buffer.resize(...).
Other than that, I normally don't use the Win32 API, so any bugs concerning how to call it you will have to fix yourself.
Take a look at the MSDN documentation and you will see that buffer should be "A pointer to a buffer that receives an array of SYSTEM_LOGICAL_PROCESSOR_INFORMATION structures. If the function fails, the contents of this buffer are undefined." So Zdeslav Vojkovic's answer will not work here (as Raymond Chen has pointed out). You could use std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> in this case and then just call 'resize' with dwSize / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION) as the argument. This would look something like:
using SLPI = SYSTEM_LOGICAL_PROCESSOR_INFORMATION;
std::vector<SLPI> slpi;
DWORD dwSize = 0;
if (!GetLogicalProcessorInformation(slpi.data(), &dwSize))
{
if (GetLastError() != ERROR_INSUFFICIENT_BUFFER) { /* error handling */ }
// Not really necessary, but good to make sure
assert(dwSize % sizeof(SLPI) == 0);
slpi.resize(dwSize / sizeof(SLPI));
if (!GetLogicalProcessorInformation(slpi.data(), &dwSize)) { /* error handling */ }
}
Personally, I'd prefer to wrap the above into a function and just return slpi so you don't need to go through this entire shenanigans every time you wish to make a call to GetLogicalProcessorInformation.
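The wrap-it-in-a-function idea can be sketched generically. query_to_vector below is a hypothetical helper (not a Win32 API); its query callable, shaped like bool(T*, std::size_t*), stands in for calls such as GetLogicalProcessorInformation that fail and report the required byte count when the buffer is too small:

```cpp
#include <cstddef>
#include <vector>

// Run the two-call pattern: probe once with an empty buffer, resize to the
// reported byte count (rounded up to whole elements), then query again.
template <typename T, typename Query>
std::vector<T> query_to_vector(Query query) {
    std::size_t bytes = 0;
    std::vector<T> out;
    if (query(out.data(), &bytes))                    // succeeded with no buffer
        return out;
    out.resize((bytes + sizeof(T) - 1) / sizeof(T));  // round up to whole T
    if (!query(out.data(), &bytes))
        out.clear();                                  // still failed: return empty
    return out;
}
```

A real wrapper around GetLogicalProcessorInformation would additionally check that the failure really was ERROR_INSUFFICIENT_BUFFER before resizing.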

Basics of GetTokenInformation

I've been trying to get this call to cooperate, but to no success.
I'm trying to get the SID value for the current user in order to get the user's account rights (using LsaEnumerateAccountRights), but I'm lost as to why my call to GetTokenInformation is returning FALSE. There was no error in retrieving the process token.
Here is my work so far on the subject:
HANDLE h_Process;
HANDLE h_Token;
HANDLE h_retToken;
TOKEN_USER tp;
DWORD cb = sizeof(TOKEN_USER);
PDWORD ret;
DWORD dw_TokenLength;
h_Process = GetCurrentProcess();
if (OpenProcessToken(h_Process, TOKEN_READ, &h_Token) == FALSE)
{
printf("Error: Couldn't open the process token\n");
return -1;
}
if (GetTokenInformation(h_Token, TokenUser, &tp, cb, &dw_TokenLength) == FALSE)
{
printf("Error: Could not retrieve Token User information");
return -1;
}
And along with it, I might as well ask a follow up question that I have not yet encountered, how to retrieve the SID from the formed TOKEN_USER structure?
I apologize ahead of time for such a simple question, I'm just stumped and would like some help to continue. All the questions related to this one are far more complicated and give little insight to my current problem.
Thanks in advance,
Jon
According to the documentation For GetTokenInformation, if the function fails you can retrieve more information via a call to GetLastError.
Return Value
If the function succeeds, the return value is nonzero.
If the function fails, the return value is zero. To get extended error information, call GetLastError.
So you need to implement some checking for the extended error:
if (!GetTokenInformation(h_Token, TokenUser, &tp, cb, &dw_TokenLength))
{
DWORD lastError = GetLastError();
// Should be a switch, of course. Omitted for brevity
if (lastError == ERROR_INSUFFICIENT_BUFFER)
{
//
}
}
As a general rule of thumb when using WinAPI functions that have varying buffer requirements, you typically
Call the function with a NULL buffer to determine the buffer size needed (in this case, returned in the ReturnLength parameter)
Allocate a buffer of the indicated size
Call the function again, passing the allocated buffer, to obtain the information
The first thing to understand is that Win32 UM (user mode) APIs that result into system calls generally require that you provide the buffer up front. This has to do with the fact that the kernel can access UM heap allocations, and UM cannot access KM allocations.
These calls typically follow a convention where you call once to get the required buffer size, and then call again with an allocated buffer that is large enough. It is even better though if you can create a reasonably sized buffer upfront. System calls can be expensive because of the context switching that it causes, so going from 2 calls to 1 can be a big performance improvement if it is a hot path.
Here is a sample of what you need. It has a loop that will retry until it succeeds, but it is also common to just try twice. If the needed buffer is <= 128 bytes, it will only make one call.
DWORD bytesReturned = 128;
LPVOID tokenUser = nullptr;
auto cleanup = ScopeExit([&]()
{
LocalFree(tokenUser);
});
for (;;) {
tokenUser = LocalAlloc(LMEM_FIXED, bytesReturned);
THROW_HR_IF_NULL(E_OUTOFMEMORY, tokenUser);
if (!GetTokenInformation(token.get(), TokenUser, tokenUser, bytesReturned, &bytesReturned))
{
if (ERROR_INSUFFICIENT_BUFFER == GetLastError())
{
LocalFree(tokenUser);
tokenUser = nullptr;
continue;
}
THROW_HR(HRESULT_FROM_WIN32(GetLastError()));
}
break;
}
The other big problem with your code is that you are passing in a pointer to a fixed-size TOKEN_USER (tp). The API actually just takes a PVOID, and the SID's data lives inside the returned buffer itself, beyond sizeof(TOKEN_USER). You will need to cast tokenUser to a TOKEN_USER* to correctly access the memory.