So, we've got a program from our professor that supposedly enumerates the devices connected to a USB hub. I won't get into the specifics, because that's not what my question is about.
This is one of the functions, and on the indicated line I get error C2233, which states: "'identifier' : arrays of objects containing zero-size arrays are illegal. Each object in an array must contain at least one element."
void getDSPortData(HANDLE devHandle, UCHAR connectionIndex)
{
    ULONG bytesReturned;
    bool success;
    PUSB_NODE_CONNECTION_INFORMATION pConnectionInformation = NULL;
    ULONG nBytes = sizeof(USB_NODE_CONNECTION_INFORMATION) + sizeof(USB_PIPE_INFO) * 30; // 15 EP IN + 15 EP OUT
    pConnectionInformation = new USB_NODE_CONNECTION_INFORMATION[nBytes]; // error C2233
    pConnectionInformation->ConnectionIndex = connectionIndex;
    success = DeviceIoControl(devHandle, IOCTL_USB_GET_NODE_CONNECTION_INFORMATION,
                              pConnectionInformation, nBytes,
                              pConnectionInformation, nBytes, &bytesReturned, NULL);
    if (!success) {
        delete[] pConnectionInformation;
        return;
    }
    (...)
    delete[] pConnectionInformation;
}
So I checked the usbioctl.h header file:
typedef struct _USB_NODE_CONNECTION_INFORMATION {
    ULONG ConnectionIndex;                  /* INPUT */
    /* usb device descriptor returned by this device during enumeration */
    USB_DEVICE_DESCRIPTOR DeviceDescriptor; /* OUTPUT */
    UCHAR CurrentConfigurationValue;        /* OUTPUT */
    BOOLEAN LowSpeed;                       /* OUTPUT */
    BOOLEAN DeviceIsHub;                    /* OUTPUT */
    USHORT DeviceAddress;                   /* OUTPUT */
    ULONG NumberOfOpenPipes;                /* OUTPUT */
    USB_CONNECTION_STATUS ConnectionStatus; /* OUTPUT */
    USB_PIPE_INFO PipeList[0];              /* OUTPUT */
} USB_NODE_CONNECTION_INFORMATION, *PUSB_NODE_CONNECTION_INFORMATION;
The problem seems to stem from the PipeList[0] member. Is it even legal to write something like this? As far as I know, that's a declaration of a zero-element array. And is there any way to work around it so the code compiles?
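For what it's worth, the usual workaround for such Win32 variable-length structs (a sketch of the common pattern, not something from the original assignment) is to allocate one raw byte buffer of the total size and cast it, instead of using new[] on the struct type:
// Sketch: allocate the variable-length struct as a single raw byte block.
// PipeList[0] is a variable-length trailing array, so the struct cannot be
// allocated as an array; nBytes already accounts for the 30 trailing
// USB_PIPE_INFO entries.
PUSB_NODE_CONNECTION_INFORMATION pConnectionInformation =
    reinterpret_cast<PUSB_NODE_CONNECTION_INFORMATION>(new BYTE[nBytes]);
pConnectionInformation->ConnectionIndex = connectionIndex;
// ... DeviceIoControl as before ...
delete[] reinterpret_cast<BYTE*>(pConnectionInformation);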
When using the function getsockopt(...) with the level SOL_SOCKET and option SO_BSP_STATE, I receive the WSA error code WSAEFAULT, which states the following:
"One of the optval or the optlen parameters is not a valid part of the user address space, or the optlen parameter is too small."
However, I was passing in a correctly sized user-mode buffer:
/* ... */
HRESULT Result = E_UNEXPECTED;
CSADDR_INFO Info = { 0 }; // Placed on the stack.
int InfoSize = sizeof (CSADDR_INFO); // The size of the input buffer to `getsockopt()`.
// Get the local address information from the raw `SOCKET`.
if (getsockopt (this->WsaSocket,
SOL_SOCKET,
SO_BSP_STATE,
reinterpret_cast <char *> (&Info),
&InfoSize) == SOCKET_ERROR)
{
Result = HRESULT_FROM_WIN32 (WSAGetLastError ());
}
else
{
Result = S_OK;
}
/* ... */
According to the socket option SO_BSP_STATE documentation under the remarks section of the getsockopt(...) function, the return value is of type CSADDR_INFO. Furthermore, the Microsoft documentation page for the SO_BSP_STATE socket option, states the following requirements:
optval:
"[...] This parameter should point to buffer equal to or larger than the size of a CSADDR_INFO structure."
optlen:
"[...] This size must be equal to or larger than the size of a CSADDR_INFO structure."
After doing some research, I stumbled upon some test code from WineHQ that was passing in more memory than sizeof(CSADDR_INFO) when calling getsockopt(...) (see lines 1305 and 1641):
union _csspace
{
    CSADDR_INFO cs;
    char space[128];
} csinfoA, csinfoB;
It also looks like the ReactOS project references this exact same code (see reference). Even though this is a union, because sizeof(CSADDR_INFO) is always less than 128, csinfoA will always be 128 bytes in size.
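That assumption can be checked at compile time (a trivial sketch, not from the Wine source):
// Both assertions hold on x86 and x64: CSADDR_INFO is well under 128 bytes,
// so the char[128] member dictates the union's size.
static_assert(sizeof(CSADDR_INFO) < 128, "the 128-byte space member dominates");
static_assert(sizeof(union _csspace) == 128, "csinfoA is always 128 bytes");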
Therefore, this got me wondering how many bytes the socket option SO_BSP_STATE actually requires when calling getsockopt(...). I created the following complete example (via Visual Studio 2019 / C++17) that illustrates that SO_BSP_STATE in fact requires a buffer larger than sizeof(CSADDR_INFO), in direct contrast to the published Microsoft documentation:
/**
* @note This example was created and compiled in Visual Studio 2019.
*/
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <iostream>
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")
/**
* @brief The number of bytes to increase the @ref CSADDR_INFO_PLUS_EXTRA_SPACE structure by.
* @note Alignment and pointer size change when compiling for Intel x86 versus Intel x64.
* The extra bytes required therefore vary.
*/
#if defined(_M_X64) || defined(__amd64__)
#define EXTRA_SPACE (25u) // Required extra space when compiling for X64
#else
#define EXTRA_SPACE (29u) // Required extra space when compiling for X86
#endif
/**
* @brief A structure to add extra space past the `CSADDR_INFO` structure.
*/
typedef struct _CSADDR_INFO_PLUS_EXTRA_SPACE
{
/**
* @brief The standard structure to store Windows Sockets address information.
*/
CSADDR_INFO Info;
/**
* @brief A blob of extra space.
*/
char Extra [EXTRA_SPACE];
} CSADDR_INFO_PLUS_EXTRA_SPACE;
/**
* @brief The main entry function for this console application for demonstrating an issue with `SO_BSP_STATE`.
*/
int main (void)
{
HRESULT Result = S_OK; // The execution result of this console application.
SOCKET RawSocket = { 0 }; // The raw WSA socket index variable that references the socket's memory.
WSADATA WindowsSocketsApiDetails = { 0 }; // The WSA implementation details about the current WSA DLL.
CSADDR_INFO_PLUS_EXTRA_SPACE Info = { 0 }; // The structure `CSADDR_INFO` plus an extra blob of memory.
int InfoSize = sizeof (CSADDR_INFO_PLUS_EXTRA_SPACE);
std::cout << "Main Entry!" << std::endl;
// Request for the latest Windows Sockets API (WSA) (a.k.a. Winsock) DLL available on this system.
if (WSAStartup (MAKEWORD(2,2),
&WindowsSocketsApiDetails) != 0)
{
Result = HRESULT_FROM_WIN32 (WSAGetLastError ());
}
// Create a blank TCP socket using IPv4.
if ((RawSocket = WSASocketW (AF_INET,
SOCK_STREAM,
IPPROTO_TCP,
nullptr,
0,
0)) == INVALID_SOCKET)
{
Result = HRESULT_FROM_WIN32 (WSAGetLastError ());
}
else
{
// Get the local address information from the raw `SOCKET`.
if (getsockopt (RawSocket,
SOL_SOCKET,
SO_BSP_STATE,
reinterpret_cast <char *> (&Info),
&InfoSize) == SOCKET_ERROR)
{
std::cout << "Failed obtained the socket's state information!" << std::endl;
Result = HRESULT_FROM_WIN32 (WSAGetLastError ());
}
else
{
std::cout << "Successfully obtained the socket's state information!" << std::endl;
Result = S_OK;
}
}
// Clean up the entire Windows Sockets API (WSA) environment and release the DLL resource.
if (WSACleanup () != 0)
{
Result = HRESULT_FROM_WIN32 (WSAGetLastError ());
}
std::cout << "Exit Code: 0x" << std::hex << Result << std::endl;
return Result;
}
(If you change the EXTRA_SPACE define to equal 0 or 1, then you will see the issue I am outlining.)
Due to the default structure alignment and pointer size change when compiling for either X86 or X64 in Visual Studio 2019, the required extra space beyond the CSADDR_INFO structure can vary:
Space required for X86: sizeof(CSADDR_INFO) + 29
Space required for X64: sizeof(CSADDR_INFO) + 25
As shown, this padding is completely arbitrary, and if you don't add it, getsockopt(...) fails. This makes me question whether the data I am getting back is even correct. It looks like there might be a missing footnote in the published documentation; however, I could very well be misunderstanding something (very likely).
My Question(s):
What determines the buffer size that SO_BSP_STATE actually requires (i.e., a structure, etc.)? It is clearly not sizeof(CSADDR_INFO) as documented.
Is the Microsoft documentation incorrect (reference)? If not, what is wrong with my code example above, with EXTRA_SPACE set to 0, that prevents getsockopt(...) from succeeding?
I think what's happening here is as follows:
CSADDR_INFO is defined like so:
typedef struct _CSADDR_INFO {
    SOCKET_ADDRESS LocalAddr;
    SOCKET_ADDRESS RemoteAddr;
    INT iSocketType;
    INT iProtocol;
} CSADDR_INFO;
Specifically, it contains two SOCKET_ADDRESS structures.
SOCKET_ADDRESS is defined like so:
typedef struct _SOCKET_ADDRESS {
    LPSOCKADDR lpSockaddr;
    INT iSockaddrLength;
} SOCKET_ADDRESS;
The lpSockaddr member of the SOCKET_ADDRESS structure is a pointer to a SOCKADDR structure, and the length of that varies by address family (IPv4 vs IPv6, for example).
It follows that getsockopt needs somewhere to store those SOCKADDR structures, and that's where your 'blob' of extra data comes in: they're in there, pointed to by the two SOCKET_ADDRESS structures. It further follows that the worst-case size of this extra data may be more than you are allowing, since IPv6 addresses are longer than IPv4 addresses.
Of course, the documentation should spell all of this out, but, as is sometimes the case, the authors probably didn't understand how things work. You might like to raise a bug report.
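If you want to size the buffer robustly instead of guessing at padding constants, one option (my own sketch, not anything the documentation promises) is to reserve room for two worst-case socket addresses behind the CSADDR_INFO header, which is where the lpSockaddr pointers appear to end up pointing:
// Sketch (assumed layout, not a documented contract): CSADDR_INFO header plus
// backing storage large enough for two worst-case SOCKADDRs. SOCKADDR_STORAGE
// is defined to be big enough for any address family, including IPv6.
typedef struct _CSADDR_INFO_BUFFER
{
    CSADDR_INFO Info;              // what getsockopt() documents as the output
    SOCKADDR_STORAGE Addresses[2]; // room for the LocalAddr and RemoteAddr targets
} CSADDR_INFO_BUFFER;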
I'd like to use the function QueryWorkingSet available in PSAPI, but I'm having trouble actually determining the size of the buffer pv. Here is the code:
#include <Windows.h>
#include <Psapi.h>
#include <iostream>
void testQueryWorkingSet()
{
    unsigned int counter;
    HANDLE thisProcess = GetCurrentProcess();
    SYSTEM_INFO si;
    PSAPI_WORKING_SET_INFORMATION wsi, wsi2;
    GetSystemInfo(&si);
    QueryWorkingSet(thisProcess, &wsi, sizeof(wsi));
    DWORD wsi2_buffer_size = (wsi.NumberOfEntries) * sizeof(PSAPI_WORKING_SET_BLOCK);
    if (!QueryWorkingSet(thisProcess, &wsi2, wsi2_buffer_size))
    {
        std::cout << "ERROR CODE : " << GetLastError() << std::endl;
        abort();
    }
}

int main(int argc, char * argv[])
{
    testQueryWorkingSet();
    int* test = new int[1000000];
    testQueryWorkingSet();
}
I keep ending up with abort() being called, with either error code 24 or 998 during the first call to testQueryWorkingSet(), which I interpret respectively as: wsi2_buffer_size is too small, and wsi2_buffer_size is too big.
Now I have no idea what value this variable should take. I tried:
counting everything including the NumberOfEntries field, that is DWORD wsi2_buffer_size = sizeof(wsi.NumberOfEntries) + wsi.NumberOfEntries * sizeof(PSAPI_WORKING_SET_BLOCK); => error 998;
counting only the number of entries, that is the code given above => error 998;
the size of the variable wsi2, that is DWORD wsi2_buffer_size = sizeof(wsi2); => error 24;
There has to be something I don't understand about the way we're supposed to use this function, but I can't find what. I tried to adapt the code given there, that is:
#include <Windows.h>
#include <Psapi.h>
#include <iostream>
void testQueryWorkingSet()
{
    unsigned int counter;
    HANDLE thisProcess = GetCurrentProcess();
    SYSTEM_INFO si;
    PSAPI_WORKING_SET_INFORMATION wsi_1, * wsi;
    DWORD wsi_size;
    GetSystemInfo(&si);
    wsi_1.NumberOfEntries = 0;
    QueryWorkingSet(thisProcess, (LPVOID)&wsi_1, sizeof(wsi));
#if !defined(_WIN64)
    wsi_1.NumberOfEntries--;
#endif
    wsi_size = sizeof(PSAPI_WORKING_SET_INFORMATION)
             + sizeof(PSAPI_WORKING_SET_BLOCK) * wsi_1.NumberOfEntries;
    wsi = (PSAPI_WORKING_SET_INFORMATION*)HeapAlloc(GetProcessHeap(),
                                                    HEAP_ZERO_MEMORY, wsi_size);
    if (!QueryWorkingSet(thisProcess, (LPVOID)wsi, wsi_size)) {
        printf("# Second QueryWorkingSet failed: %lu\n", GetLastError());
        abort();
    }
}

int main(int argc, char * argv[])
{
    testQueryWorkingSet();
    int* test = new int[1000000];
    testQueryWorkingSet();
}
This code works for only one call to testQueryWorkingSet(); the second call aborts with error code 24. Here are the questions in brief:
How would you use QueryWorkingSet in a function that you could call multiple times successively?
What does the cb parameter described in the documentation represent, given a PSAPI_WORKING_SET_INFORMATION?
Both examples are completely ignoring the return value and error code of the 1st call of QueryWorkingSet(). You are doing error handling only on the 2nd call.
Your 1st example fails because you are not taking into account the entire size of the PSAPI_WORKING_SET_INFORMATION when calculating wsi2_buffer_size for the 2nd call of QueryWorkingSet(). Even if the 1st call were successful, you are not allocating any additional memory for the 2nd call to fill in, if the NumberOfEntries returned is > 1.
Your 2nd example is passing in the wrong buffer size value to the cb parameter of the 1st call of QueryWorkingSet(). You are passing in just the size of a single pointer, not the size of the entire PSAPI_WORKING_SET_INFORMATION. Error 24 is ERROR_BAD_LENGTH. You need to use sizeof(wsi_1) instead of sizeof(wsi).
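That is (a one-line sketch of the corrected first call):
// Pass the size of the structure itself, not the size of a pointer to it.
QueryWorkingSet(thisProcess, (LPVOID)&wsi_1, sizeof(wsi_1));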
I would suggest calling QueryWorkingSet() in a loop, in case the working set actually changes in between the call to query its size and the call to get its data.
Also, be sure you free the memory you allocate when you are done using it.
With that said, try something more like this:
void testQueryWorkingSet()
{
    HANDLE thisProcess = GetCurrentProcess();
    PSAPI_WORKING_SET_INFORMATION *wsi;
    DWORD wsi_size;
    ULONG_PTR count = 1; // or whatever initial size you want...

    do
    {
        wsi_size = offsetof(PSAPI_WORKING_SET_INFORMATION, WorkingSetInfo[count]);
        wsi = (PSAPI_WORKING_SET_INFORMATION*) HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, wsi_size);
        if (!wsi)
        {
            printf("HeapAlloc failed: %lu\n", GetLastError());
            abort();
        }
        if (QueryWorkingSet(thisProcess, wsi, wsi_size))
            break;
        if (GetLastError() != ERROR_BAD_LENGTH)
        {
            printf("QueryWorkingSet failed: %lu\n", GetLastError());
            HeapFree(GetProcessHeap(), 0, wsi);
            abort();
        }
        count = wsi->NumberOfEntries;
        HeapFree(GetProcessHeap(), 0, wsi);
    }
    while (true);

    // use wsi as needed...

    HeapFree(GetProcessHeap(), 0, wsi);
}
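At the "use wsi as needed" point, iterating the returned entries might look like this (a sketch; it assumes the usual PSAPI_WORKING_SET_BLOCK bitfield names from <Psapi.h>):
// Sketch: walk the returned working-set blocks.
for (ULONG_PTR i = 0; i < wsi->NumberOfEntries; ++i)
{
    PSAPI_WORKING_SET_BLOCK block = wsi->WorkingSetInfo[i];
    printf("entry %llu: protection=%u shared=%u\n",
           (unsigned long long)i, (unsigned)block.Protection, (unsigned)block.Shared);
}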
So I've been reading the libjpeg documentation and it is extremely lackluster.
I have been trying to figure out how to read from a custom memory buffer rather than a file and am not sure how to even test if my solution is working correctly.
At the moment my function for loading a jpeg from memory is like so:
struct error_mgr {
    jpeg_error_mgr pub;
    std::jmp_buf buf;
};

bool load_jpeg(void *mem, size_t size, output_struct &output){
    jpeg_source_mgr src;
    src.next_input_byte = static_cast<const JOCTET*>(mem);
    src.bytes_in_buffer = size;
    src.init_source = [](j_decompress_ptr){};
    src.fill_input_buffer = [](j_decompress_ptr cinfo) -> boolean{
        // should never reach end of buffer
        throw "libjpeg tried to read past end of file";
        return true;
    };
    src.skip_input_data = [](j_decompress_ptr cinfo, long num_bytes){
        if(num_bytes < 1) return; // negative or 0 is a no-op
        cinfo->src->next_input_byte += num_bytes;
        cinfo->src->bytes_in_buffer -= num_bytes;
    };
    src.resync_to_restart = jpeg_resync_to_restart;
    src.term_source = [](j_decompress_ptr){};

    struct jpeg_decompress_struct cinfo;
    error_mgr err;
    cinfo.err = jpeg_std_error(&err.pub);
    err.pub.error_exit = [](j_common_ptr cinfo){
        error_mgr *ptr = reinterpret_cast<error_mgr*>(cinfo->err);
        std::longjmp(ptr->buf, 1);
    };
    if(std::setjmp(err.buf)){
        jpeg_destroy_decompress(&cinfo);
        return false;
    }
    cinfo.src = &src;
    jpeg_create_decompress(&cinfo);
    (void) jpeg_read_header(&cinfo, TRUE);
    // do the actual reading of the image
    return true;
}
But it never makes it past jpeg_read_header.
I know that this is a JPEG file, and I know that my memory is being passed correctly, because I have libpng loading images fine with the same signature and calling code, so I'm sure the issue is how I am setting the source manager in cinfo.
Anybody with more experience in libjpeg know how to do this?
In my code I was setting cinfo.src before calling jpeg_create_decompress; setting it afterwards fixed the issue :)
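In other words, a minimal sketch of the corrected ordering (same src and err setup as in the question):
struct jpeg_decompress_struct cinfo;
cinfo.err = jpeg_std_error(&err.pub);
jpeg_create_decompress(&cinfo); // initialize the decompress struct first...
cinfo.src = &src;               // ...and only then install the custom source manager
jpeg_read_header(&cinfo, TRUE);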
Using jpeg_mem_src() works for me, with libjpeg-turbo:
struct jpeg_decompress_struct cinfo;
/* ... */
char *ptr;
size_t buffer;
/* Initialize ptr, buffer, then: */
jpeg_mem_src(&cinfo, ptr, buffer);
/* Now, it's time for jpeg_read_header(), etc... */
I want to print the buffer data all at once, avoiding all the other wprintf calls, but I am unable to convert the data into a type compatible with the buffer.
Have a look at the code, and kindly tell me how to get through it:
DWORD PrintEvent(EVT_HANDLE hEvent)
{
DWORD status = ERROR_SUCCESS;
PEVT_VARIANT pRenderedValues = NULL;
WCHAR wsGuid[50];
LPWSTR pwsSid = NULL;
//
// Beginning of functional Logic
//
for (;;)
{
if (!EvtRender(hContext, hEvent, EvtRenderEventValues, dwBufferSize, pRenderedValues, &dwBufferUsed, &dwPropertyCount))
{
if (ERROR_INSUFFICIENT_BUFFER == (status = GetLastError()))
{
dwBufferSize = dwBufferUsed;
dwBytesToWrite = dwBufferSize;
pRenderedValues = (PEVT_VARIANT)malloc(dwBufferSize);
if (pRenderedValues)
{
EvtRender(hContext, hEvent, EvtRenderEventValues, dwBufferSize, pRenderedValues, &dwBufferUsed, &dwPropertyCount);
}
else
{
printf("malloc failed\n");
status = ERROR_OUTOFMEMORY;
break;
}
}
}
Buffer = (wchar_t*) malloc (1*wcslen(pRenderedValues[EvtSystemProviderName].StringVal));
//
// Print the values from the System section of the element.
wcscpy(Buffer,pRenderedValues[EvtSystemProviderName].StringVal);
int i = wcslen(Buffer);
if (NULL != pRenderedValues[EvtSystemProviderGuid].GuidVal)
{
StringFromGUID2(*(pRenderedValues[EvtSystemProviderGuid].GuidVal), wsGuid, sizeof(wsGuid)/sizeof(WCHAR));
wcscpy(Buffer+i,(wchar_t*)pRenderedValues[EvtSystemProviderGuid].GuidVal);
wprintf(L"Provider Guid: %s\n", wsGuid);
}
//Getting "??????" on screen after inclusion of guidval tell me the correct way to copy it??
wprintf(L"Buffer = %ls",Buffer);
//Also tell the way to copy unsigned values into buffer
wprintf(L"EventID: %lu\n", EventID);
wprintf(L"Version: %u\n", pRenderedValues[EvtSystemVersion].ByteVal);
wprintf(L"Level: %u\n", pRenderedValues[EvtSystemLevel].ByteVal);
wprintf(L"EventRecordID: %I64u\n", pRenderedValues[EvtSystemEventRecordId].UInt64Val);
if (EvtVarTypeNull != pRenderedValues[EvtSystemActivityID].Type)
{
StringFromGUID2(*(pRenderedValues[EvtSystemActivityID].GuidVal), wsGuid, sizeof(wsGuid)/sizeof(WCHAR));
wprintf(L"Correlation ActivityID: %s\n", wsGuid);
}
if (EvtVarTypeNull != pRenderedValues[EvtSystemRelatedActivityID].Type)
{
StringFromGUID2(*(pRenderedValues[EvtSystemRelatedActivityID].GuidVal), wsGuid, sizeof(wsGuid)/sizeof(WCHAR));
wprintf(L"Correlation RelatedActivityID: %s\n", wsGuid);
}
wprintf(L"Execution ProcessID: %lu\n", pRenderedValues[EvtSystemProcessID].UInt32Val);
wprintf(L"Execution ThreadID: %lu\n", pRenderedValues[EvtSystemThreadID].UInt32Val);
wprintf(L"Channel: %s\n",pRenderedValues[EvtSystemChannel].StringVal);
wprintf(L"Computer: %s\n", pRenderedValues[EvtSystemComputer].StringVal);
//
// Final Break Point
//
break;
}
}
The first error is when starting to write to the buffer:
Buffer = (wchar_t*) malloc (1*wcslen(pRenderedValues[EvtSystemProviderName].StringVal));
wcscpy(Buffer,pRenderedValues[EvtSystemProviderName].StringVal);
StringVal points to a wide-character string with a terminating null character, so you should write
Buffer = (wchar_t*) malloc (sizeof(wchar_t) * (wcslen(pRenderedValues[EvtSystemProviderName].StringVal) + 1));
or even better
Buffer = _wcsdup(pRenderedValues[EvtSystemProviderName].StringVal);
Second error is when appending the GUID.
You are not allocating enough memory, you are just appending to the already full Buffer. And you are appending the raw GUID, not the GUID string. You should replace
int i = wcslen(Buffer);
wcscpy(Buffer+i,(wchar_t*)pRenderedValues[EvtSystemProviderGuid].GuidVal);
with something like
// Attention: memory leak if realloc returns NULL! So better use a second
// variable for the return value and check that before assigning to Buffer.
Buffer = (wchar_t*) realloc (Buffer, sizeof(wchar_t) * (wcslen(Buffer) + wcslen(wsGuid) + 1));
wcscat(Buffer, wsGuid);
Besides, you should do better error checking for EvtRender, and you should check dwPropertyCount before accessing pRenderedValues[i].
BTW, wprintf(L"Buffer = %s",Buffer); (with %s instead of %ls) is sufficient with wprintf.
And to your last question: if you want to append unsigned values to a buffer, you can use wsprintf to write them to a string. If you can make it C++-only, you should consider using std::wstring instead; it makes allocating buffers of the right size much easier.
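For instance, a C++-only sketch of the same buffer building with std::wstring (reusing the variables from the snippets above):
#include <string>
// std::wstring grows as needed, so no manual malloc/realloc sizing.
std::wstring buffer = pRenderedValues[EvtSystemProviderName].StringVal;
buffer += wsGuid;                                                         // append the GUID string
buffer += std::to_wstring(pRenderedValues[EvtSystemProcessID].UInt32Val); // append an unsigned value
wprintf(L"Buffer = %s\n", buffer.c_str());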
I am working on a task to encrypt large files with AES CCM mode (256-bit key length). Other parameters for encryption are:
tag size: 8 bytes
iv size: 12 bytes
Since we already use OpenSSL 1.0.1c I wanted to use it for this task as well.
The size of the files is not known in advance and they can be very large. That's why I wanted to read them in blocks and encrypt each block individually with EVP_EncryptUpdate, up to the file size.
Unfortunately the encryption works for me only if the whole file is encrypted at once. I get errors from EVP_EncryptUpdate or strange crashes if I attempt to call it multiple times. I tested the encryption on Windows 7 and Ubuntu Linux with gcc 4.7.2.
I was not able to find any information on the OpenSSL site about whether encrypting the data block by block is possible.
Additional references:
http://www.fredriks.se/?p=23
http://incog-izick.blogspot.in/2011/08/using-openssl-aes-gcm.html
Please see the code below that demonstrates what I attempted to achieve. Unfortunately it is failing where indicated in the for loop.
#include <QByteArray>
#include <openssl/evp.h>
// Key in HEX representation
static const char keyHex[] = "d896d105b05aaec8305d5442166d5232e672f8d5c6dfef6f5bf67f056c4cf420";
static const char ivHex[] = "71d90ebb12037f90062d4fdb";
// Test patterns
static const char orig1[] = "Very secret message.";
const int c_tagBytes = 8;
const int c_keyBytes = 256 / 8;
const int c_ivBytes = 12;
bool Encrypt()
{
EVP_CIPHER_CTX *ctx;
ctx = EVP_CIPHER_CTX_new();
EVP_CIPHER_CTX_init(ctx);
QByteArray keyArr = QByteArray::fromHex(keyHex);
QByteArray ivArr = QByteArray::fromHex(ivHex);
auto key = reinterpret_cast<const unsigned char*>(keyArr.constData());
auto iv = reinterpret_cast<const unsigned char*>(ivArr.constData());
// Initialize the context with the alg only
bool success = EVP_EncryptInit(ctx, EVP_aes_256_ccm(), nullptr, nullptr);
if (!success) {
printf("EVP_EncryptInit failed.\n");
return success;
}
success = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_IVLEN, c_ivBytes, nullptr);
if (!success) {
printf("EVP_CIPHER_CTX_ctrl(EVP_CTRL_CCM_SET_IVLEN) failed.\n");
return success;
}
success = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_TAG, c_tagBytes, nullptr);
if (!success) {
printf("EVP_CIPHER_CTX_ctrl(EVP_CTRL_CCM_SET_TAG) failed.\n");
return success;
}
success = EVP_EncryptInit(ctx, nullptr, key, iv);
if (!success) {
printf("EVP_EncryptInit failed.\n");
return success;
}
const int bsize = 16;
const int loops = 5;
const int finsize = sizeof(orig1)-1; // Don't encrypt '\0'
// Tell the alg we will encrypt size bytes
// http://www.fredriks.se/?p=23
int outl = 0;
success = EVP_EncryptUpdate(ctx, nullptr, &outl, nullptr, loops*bsize + finsize);
if (!success) {
printf("EVP_EncryptUpdate for size failed.\n");
return success;
}
printf("Set input size. outl: %d\n", outl);
// Additional authentication data (AAD) is not used, but 0 must still be
// passed to the function call:
// http://incog-izick.blogspot.in/2011/08/using-openssl-aes-gcm.html
static const unsigned char aadDummy[] = "dummyaad";
success = EVP_EncryptUpdate(ctx, nullptr, &outl, aadDummy, 0);
if (!success) {
printf("EVP_EncryptUpdate for AAD failed.\n");
return success;
}
printf("Set dummy AAD. outl: %d\n", outl);
const unsigned char *in = reinterpret_cast<const unsigned char*>(orig1);
unsigned char out[1000];
int len;
// Simulate multiple input data blocks (for example reading from file)
for (int i = 0; i < loops; ++i) {
// ** This function fails ***
if (!EVP_EncryptUpdate(ctx, out+outl, &len, in, bsize)) {
printf("DHAesDevice: EVP_EncryptUpdate failed.\n");
return false;
}
outl += len;
}
if (!EVP_EncryptUpdate(ctx, out+outl, &len, in, finsize)) {
printf("DHAesDevice: EVP_EncryptUpdate failed.\n");
return false;
}
outl += len;
int finlen;
// Finish with encryption
if (!EVP_EncryptFinal(ctx, out + outl, &finlen)) {
printf("DHAesDevice: EVP_EncryptFinal failed.\n");
return false;
}
outl += finlen;
// Append the tag to the end of the encrypted output
if (!EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_GET_TAG, c_tagBytes, out + outl)) {
printf("DHAesDevice: EVP_CIPHER_CTX_ctrl failed.\n");
return false;
};
outl += c_tagBytes;
out[outl] = '\0';
EVP_CIPHER_CTX_cleanup(ctx);
EVP_CIPHER_CTX_free(ctx);
QByteArray enc(reinterpret_cast<const char*>(out));
printf("Plain text size: %d\n", loops*bsize + finsize);
printf("Encrypted data size: %d\n", outl);
printf("Encrypted data: %s\n", enc.toBase64().data());
return true;
}
EDIT (Wrong Solution)
The feedback that I received made me think in a different direction, and I discovered that EVP_EncryptUpdate for size must be called for each block that is being encrypted, not for the total size of the file. I moved it to just before each block is encrypted, like this:
for (int i = 0; i < loops; ++i) {
int buflen;
(void)EVP_EncryptUpdate(m_ctx, nullptr, &buflen, nullptr, bsize);
// Resize the output buffer to buflen here
// ...
// Encrypt into target buffer
(void)EVP_EncryptUpdate(m_ctx, out, &len, in, buflen);
outl += len;
}
AES CCM encryption block by block works this way, but not correctly, because each block is treated as an independent message.
EDIT 2
OpenSSL's implementation works properly only if the complete message is encrypted at once.
http://marc.info/?t=136256200100001&r=1&w=1
I decided to use Crypto++ instead.
For AEAD CCM mode you cannot encrypt data after the associated data has been fed to the context. Encrypt all the data, and only after that pass the associated data.
I found some misconceptions here. First of all, calling
EVP_EncryptUpdate(ctx, nullptr, &outl, ...)
this way is how you find out how much output buffer is needed, so that you can allocate a buffer and, on the second call, pass a valid buffer big enough to hold the data as the second argument.
You are also passing wrong values (overwritten by the previous call) when you actually add the encrypted output.
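For reference, the call sequence that the OpenSSL documentation itself describes for CCM (a sketch only; msg, msgLen, aad, aadLen and out are placeholders, and it assumes the whole message is available at once, consistent with the conclusion above that OpenSSL's CCM cannot be fed block by block):
// Sketch of the documented EVP CCM sequence: announce the total plaintext
// length first, then pass any AAD, then encrypt the entire message in ONE call.
int outl = 0, len = 0;
EVP_EncryptUpdate(ctx, nullptr, &len, nullptr, msgLen); // total length, required for CCM
EVP_EncryptUpdate(ctx, nullptr, &len, aad, aadLen);     // associated data (optional)
EVP_EncryptUpdate(ctx, out, &outl, msg, msgLen);        // the whole message at once
EVP_EncryptFinal(ctx, out + outl, &len);
EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_GET_TAG, c_tagBytes, out + outl);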