ICU: ucnv_convertEx – detect encoding errors on the fly (C++)

Is it possible to detect encoding errors with ICU at conversion time, or is it necessary to pre- or post-check the conversion?
Given the initialization where a conversion from UTF-8 to UTF-32 is set up:
#include <stdio.h>
#include "unicode/ucnv.h" /* C Converter API */

static void eval(UConverter* from, UConverter* to);

int main(int argc, char** argv)
{
    UConverter* from;
    UConverter* to;
    UErrorCode status;

    /* Initialize converter from UTF8 to Unicode ___________________________*/
    status = U_ZERO_ERROR;
    from = ucnv_open("UTF-8", &status);
    if( ! from || ! U_SUCCESS(status) ) return 1;

    status = U_ZERO_ERROR;
    to = ucnv_open("UTF32", &status);
    if( ! to || ! U_SUCCESS(status) ) return 1;
    /*______________________________________________________________________*/

    eval(from, to);
    return 0;
}
Then, applying the conversion using ucnv_convertEx via
static void eval(UConverter* from, UConverter* to)
{
    UErrorCode status = U_ZERO_ERROR;
    uint32_t  drain[1024];
    uint32_t* drain_p = &drain[0];
    uint32_t* p = &drain[0];

    /* UTF8 sequence with error in third byte ______________________________*/
    const char  source[] = { "\xED\x8A\x0A\x0A" }; /* 5 bytes incl. '\0' */
    const char* source_p = &source[0];

    ucnv_convertEx(to, from, (char**)&drain_p, (char*)&drain[1024],
                   &source_p, &source[5],
                   NULL, NULL, NULL, NULL, /* reset = */TRUE, /* flush = */TRUE,
                   &status);

    /* Print conversion result _____________________________________________*/
    printf("source_p: source + %i;\n", (int)(source_p - &source[0]));
    printf("status: %s;\n", u_errorName(status));
    printf("drain: (n=%i)[", (int)(drain_p - &drain[0]));
    for(p = &drain[0]; p != drain_p; ++p) { printf("%06X ", (int)*p); }
    printf("]\n");
}
where source contains an inadmissible UTF-8 code unit sequence, the function should somehow report an error. Storing the above fragments in "test.c" and compiling with
$ gcc test.c $(icu-config --ldflags) -o test
the output of ./test is (surprisingly):
source_p: source + 5;
status: U_ZERO_ERROR;
drain: (n=5)[00FEFF 00FFFD 00000A 00000A 000000 ]
So, no obvious sign of a detected error. Can error detection be done more elegantly than by manually checking the content?

As @Eljay suggests in the comments, you can use an error callback. You don't even need to write your own, since the built-in UCNV_TO_U_CALLBACK_STOP will do what you want (i.e., return a failure for any bad characters).
int TestIt()
{
    UConverter* utf8conv{};
    UConverter* utf32conv{};
    UErrorCode status{ U_ZERO_ERROR };

    utf8conv = ucnv_open("UTF8", &status);
    if (!U_SUCCESS(status))
    {
        return 1;
    }

    utf32conv = ucnv_open("UTF32", &status);
    if (!U_SUCCESS(status))
    {
        return 2;
    }

    const char source[] = { "\xED\x8A\x0A\x0A" };
    uint32_t target[10]{ 0 };

    ucnv_setToUCallBack(utf8conv, UCNV_TO_U_CALLBACK_STOP, nullptr,
        nullptr, nullptr, &status);
    if (!U_SUCCESS(status))
    {
        return 3;
    }

    auto sourcePtr = source;
    auto sourceEnd = source + ARRAYSIZE(source);
    auto targetPtr = target;
    auto targetEnd = reinterpret_cast<const char*>(target + ARRAYSIZE(target));

    ucnv_convertEx(utf32conv, utf8conv, reinterpret_cast<char**>(&targetPtr),
        targetEnd, &sourcePtr, sourceEnd, nullptr, nullptr, nullptr, nullptr,
        TRUE, TRUE, &status);
    if (!U_SUCCESS(status))
    {
        return 4;
    }

    printf("Converted '%s' to '", source);
    for (auto start = target; start != targetPtr; start++)
    {
        printf("\\x%x", *start);
    }
    printf("'\r\n");
    return 0;
}
This should return 4 for invalid Unicode code points (with the STOP callback installed, the conversion fails with an error status, typically U_ILLEGAL_CHAR_FOUND), and print out the UTF-32 values if it was successful. It's unlikely we'd get an error from ucnv_setToUCallBack, but we check just in case. In the example above we pass nullptr for the previous action since we don't care what it was and don't need to reset it.
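For a quick validity check without setting up converters at all, u_strFromUTF8 from unicode/ustring.h reports bad input directly through its UErrorCode. This is a minimal sketch (not part of the original answer), assuming UTF-8 is the only source encoding and a fixed scratch buffer is acceptable:
#include "unicode/ustring.h" /* u_strFromUTF8 */

// Returns true when the byte range is well-formed UTF-8.
bool isValidUtf8(const char* s, int32_t n)
{
    UErrorCode status = U_ZERO_ERROR;
    int32_t outLen = 0;
    UChar buf[256]; /* hypothetical fixed capacity for the sketch */
    u_strFromUTF8(buf, 256, &outLen, s, n, &status);
    /* Invalid sequences set U_INVALID_CHAR_FOUND; a too-small buffer
       sets U_BUFFER_OVERFLOW_ERROR instead. */
    return U_SUCCESS(status);
}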

Sending an I2C command from a C++ application

I want to send a signal to an I2C device from a C++ application. I have tried the system() function, but it takes about 7–10 ms to return.
I then found this library, but it doesn't allow me to specify the port number.
This is the command that I want to send:
i2cset -f -y 0 0x74 2 0x00
where 2 is the port number and 0x00 is the command that I need to set in the destination device.
So my question is: is there a direct way to communicate with the I2C device, the same way the i2cset application does?
Yes, there is a way. You can read some documentation here: https://www.kernel.org/doc/Documentation/i2c/dev-interface
Basically, you first have to open the I2C device for reading and writing; on a Raspberry Pi (which is where I have used this) it is:
int m_busFD = open("/dev/i2c-0", O_RDWR);
Then there are two ways:
Either use ioctl() to set the address and then read() or write() to read from or write to the line. This can look like so:
bool receiveBytes(const int addr, uint8_t *buf, const int len)
{
    if (ioctl(m_busFD, I2C_SLAVE, addr) < 0)
        return false;
    int res = read(m_busFD, buf, len);
    return res == len;
}
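The write direction is symmetric. Here is a sketch of the counterpart (sendBytes is not part of the original answer); note that if a kernel driver already claims the address, I2C_SLAVE_FORCE is the ioctl counterpart of i2cset's -f flag:
bool sendBytes(const int addr, const uint8_t *buf, const int len)
{
    if (ioctl(m_busFD, I2C_SLAVE, addr) < 0)
        return false;
    return write(m_busFD, buf, len) == len;
}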
Or use the i2c_msg / i2c_rdwr_ioctl_data struct interface with ioctl(). This looks more complicated, but it allows you to do more complex operations such as a write-restart-read. Here is the same read as before, but using this interface:
bool receiveBytes(const int addr, uint8_t *buf, const int len)
{
    i2c_msg msgs[1] = {
        {.addr = static_cast<uint16_t>(addr),
         .flags = I2C_M_RD,
         .len = static_cast<uint16_t>(len),
         .buf = buf}};
    i2c_rdwr_ioctl_data wrapper = {
        .msgs = msgs,
        .nmsgs = 1};
    if (ioctl(m_busFD, I2C_RDWR, &wrapper) < 0)
        return false;
    return (msgs[0].len == len);
}
And here is an example of a write-restart-read:
bool sendRecBytes(
    const int addr,
    uint8_t *sbuf, const int slen,
    uint8_t *rbuf, const int rlen)
{
    i2c_msg msgs[2] = {
        {.addr = static_cast<uint16_t>(addr),
         .flags = {},
         .len = static_cast<uint16_t>(slen),
         .buf = sbuf},
        {.addr = static_cast<uint16_t>(addr),
         .flags = I2C_M_RD,
         .len = static_cast<uint16_t>(rlen),
         .buf = rbuf}};
    i2c_rdwr_ioctl_data wrapper = {
        .msgs = msgs,
        .nmsgs = 2};
    if (ioctl(m_busFD, I2C_RDWR, &wrapper) < 0)
        return false;
    return (msgs[0].len == slen) && (msgs[1].len == rlen);
}
Edit: I forgot to mention that all of this requires:
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <linux/i2c.h>
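Putting the pieces together, the exact equivalent of the i2cset command from the question would look roughly like this (a sketch under the assumption that bus 0 and the addresses from the question are correct):
#include <cstdint>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main()
{
    // i2cset -f -y 0 0x74 2 0x00  ->  write {reg, value} to device 0x74 on bus 0
    int fd = open("/dev/i2c-0", O_RDWR);
    if (fd < 0)
        return 1;
    // I2C_SLAVE_FORCE mirrors i2cset's -f flag (claims the address even if
    // a kernel driver holds it); plain I2C_SLAVE is the safer default.
    if (ioctl(fd, I2C_SLAVE_FORCE, 0x74) < 0)
        return 1;
    const uint8_t cmd[2] = {2, 0x00}; // port/register 2, command 0x00
    bool ok = write(fd, cmd, 2) == 2;
    close(fd);
    return ok ? 0 : 1;
}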

Xcode app for macOS: this is how I set up to get audio from a USB mic input. It worked a year ago, but now it doesn't. Why?

Here is my audio init code. My app responds when queue buffers are ready, but all the data in the buffer is zero. Checking Sound in System Preferences shows that USB Audio CODEC is active in the sound input dialog. audioInit() is called right after the app launches.
#pragma mark user data struct
typedef struct MyRecorder
{
    AudioFileID recordFile;
    SInt64 recordPacket;
    Float32 *pSampledData;
    MorseDecode *pMorseDecoder;
} MyRecorder;

#pragma mark utility functions
void CheckError(OSStatus error, const char *operation)
{
    if(error == noErr) return;
    char errorString[20];
    // see if it appears to be a 4-char code
    *(UInt32*)(errorString + 1) = CFSwapInt32HostToBig(error);
    if (isprint(errorString[1]) && isprint(errorString[2]) &&
        isprint(errorString[3]) && isprint(errorString[4]))
    {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    }
    else
    {
        sprintf(errorString, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
}
OSStatus MyGetDefaultInputDeviceSampleRate(Float64 *outSampleRate)
{
    OSStatus error;
    AudioDeviceID deviceID = 0;
    AudioObjectPropertyAddress propertyAddress;
    UInt32 propertySize;

    propertyAddress.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement = 0;
    propertySize = sizeof(AudioDeviceID);
    error = AudioObjectGetPropertyData(kAudioObjectSystemObject,
                                       &propertyAddress,
                                       0,
                                       NULL,
                                       &propertySize,
                                       &deviceID);
    if(error)
        return error;

    propertyAddress.mSelector = kAudioDevicePropertyNominalSampleRate;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement = 0;
    propertySize = sizeof(Float64);
    error = AudioObjectGetPropertyData(deviceID,
                                       &propertyAddress,
                                       0,
                                       NULL,
                                       &propertySize,
                                       outSampleRate);
    return error;
}
static int MyComputeRecordBufferSize(const AudioStreamBasicDescription *format,
                                     AudioQueueRef queue,
                                     float seconds)
{
    int packets, frames, bytes;
    frames = (int)ceil(seconds * format->mSampleRate);
    if(format->mBytesPerFrame > 0)
    {
        bytes = frames * format->mBytesPerFrame;
    }
    else
    {
        UInt32 maxPacketSize;
        if(format->mBytesPerPacket > 0)
        {
            // constant packet size
            maxPacketSize = format->mBytesPerPacket;
        }
        else
        {
            // get the largest single packet size possible
            UInt32 propertySize = sizeof(maxPacketSize);
            CheckError(AudioQueueGetProperty(queue,
                                             kAudioConverterPropertyMaximumOutputPacketSize,
                                             &maxPacketSize,
                                             &propertySize),
                       "Couldn't get queue's max output packet size");
        }
        if(format->mFramesPerPacket > 0)
            packets = frames / format->mFramesPerPacket;
        else
            // worst-case scenario: 1 frame in a packet
            packets = frames;
        // sanity check
        if(packets == 0)
            packets = 1;
        bytes = packets * maxPacketSize;
    }
    return bytes;
}
extern void bridgeToMainThread(MorseDecode *pDecode);
static int callBacks = 0;

// ---------------------------------------------
static void MyAQInputCallback(void *inUserData,
                              AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    MyRecorder *recorder = (MyRecorder*)inUserData;
    Float32 *pAudioData = (Float32*)(inBuffer->mAudioData);
    recorder->pMorseDecoder->pBuffer = pAudioData;
    recorder->pMorseDecoder->bufferSize = inNumPackets;
    bridgeToMainThread(recorder->pMorseDecoder);
    CheckError(AudioQueueEnqueueBuffer(inQueue,
                                       inBuffer,
                                       0,
                                       NULL),
               "AudioQueueEnqueueBuffer failed");
    printf("packets = %ld, bytes = %ld\n", (long)inNumPackets, (long)inBuffer->mAudioDataByteSize);
    callBacks++;
    //printf("\ncallBacks = %d\n", callBacks);
    //if(callBacks == 0)
    //    audioStop();
}
static AudioQueueRef queue = {0};
static MyRecorder recorder = {0};
static AudioStreamBasicDescription recordFormat;

void audioInit()
{
    // set up format
    memset(&recordFormat, 0, sizeof(recordFormat));
    recordFormat.mFormatID = kAudioFormatLinearPCM;
    recordFormat.mChannelsPerFrame = 2;
    recordFormat.mBitsPerChannel = 32;
    recordFormat.mBytesPerPacket = recordFormat.mBytesPerFrame = recordFormat.mChannelsPerFrame * sizeof(Float32);
    recordFormat.mFramesPerPacket = 1;
    //recordFormat.mFormatFlags = kAudioFormatFlagsCanonical;
    recordFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked;
    MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);

    UInt32 propSize = sizeof(recordFormat);
    CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                      0,
                                      NULL,
                                      &propSize,
                                      &recordFormat),
               "AudioFormatProperty failed");

    recorder.pMorseDecoder = MorseDecode::pInstance();
    recorder.pMorseDecoder->m_sampleRate = recordFormat.mSampleRate;
    // recorder.pMorseDecoder->setCircularBuffer();

    // set up queue
    CheckError(AudioQueueNewInput(&recordFormat,
                                  MyAQInputCallback,
                                  &recorder,
                                  NULL,
                                  kCFRunLoopCommonModes,
                                  0,
                                  &queue),
               "AudioQueueNewInput failed");

    UInt32 size = sizeof(recordFormat);
    CheckError(AudioQueueGetProperty(queue,
                                     kAudioConverterCurrentOutputStreamDescription,
                                     &recordFormat,
                                     &size), "Couldn't get queue's format");

    // set up buffers and enqueue
    const int kNumberRecordBuffers = 3;
    int bufferByteSize = MyComputeRecordBufferSize(&recordFormat, queue, AUDIO_BUFFER_DURATION);
    for(int bufferIndex = 0; bufferIndex < kNumberRecordBuffers; bufferIndex++)
    {
        AudioQueueBufferRef buffer;
        CheckError(AudioQueueAllocateBuffer(queue,
                                            bufferByteSize,
                                            &buffer),
                   "AudioQueueAllocateBuffer failed");
        CheckError(AudioQueueEnqueueBuffer(queue,
                                           buffer,
                                           0,
                                           NULL),
                   "AudioQueueEnqueueBuffer failed");
    }
}
void audioRun()
{
    CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");
}

void audioStop()
{
    CheckError(AudioQueuePause(queue), "AudioQueuePause failed");
}
This sounds like the new macOS 'microphone privacy' setting, which, if set to 'no access' for your app, will cause precisely this behaviour. So:
Open the System Preferences pane.
Click on 'Security and Privacy'.
Select the Privacy tab.
Click on 'Microphone' in the left-hand pane.
Locate your app in the right-hand pane and tick the checkbox next to it.
Then restart your app and test it.
Tedious, no?
Edit: As stated in the comments, you can't directly request microphone access, but you can detect whether it has been granted to your app by calling [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio].

convert emoji string to icu::UnicodeString

I have a method that reads a JSON file and returns a const char* that can be any text, including emojis. I don't have access to the source of this method.
For example, I created a json file with the england flag, 🏴󠁧󠁢󠁥󠁮󠁧󠁿 ({message: "\uD83C\uDFF4\uDB40\uDC67\uDB40\uDC62\uDB40\uDC65\uDB40\uDC6E\uDB40\uDC67\uDB40\uDC7F"}).
When I call that method, it returns something like 🏴󠁧󠁢󠁥󠁮󠁧󠁿, but in order to use it properly, I need to convert it to icu::UnicodeString because I use another method (closed source again) that expects it.
The only way I found to make it work was something like:
icu::UnicodeString unicode;
unicode.setTo((UChar*)convertMessage().data());
std::string messageAsString;
unicode.toUTF8String(messageAsString);
after doing that, messageAsString is usable and everything works.
convertMessage() is a method that uses std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t>::from_bytes(str).
My question is, is there a way to create a icu::UnicodeString without using that extra convertMessage() call?
This is a sample usage of the ucnv_toUChars function. I took these functions from the PostgreSQL source code and used them in my project.
#include <assert.h>
#include <stdlib.h>
#include "unicode/ucnv.h"

UConverter *icu_converter;

static int32_t icu_to_uchar(UChar **buff_uchar, const char *buff, int32_t nbytes)
{
    UErrorCode status;
    int32_t len_uchar;

    // First pass: query the required length (expects U_BUFFER_OVERFLOW_ERROR).
    status = U_ZERO_ERROR;
    len_uchar = ucnv_toUChars(icu_converter, NULL, 0, buff, nbytes, &status);
    if (U_FAILURE(status) && status != U_BUFFER_OVERFLOW_ERROR)
        return -1;

    // Second pass: convert into the allocated buffer.
    *buff_uchar = (UChar *) malloc((len_uchar + 1) * sizeof(**buff_uchar));
    status = U_ZERO_ERROR;
    len_uchar = ucnv_toUChars(icu_converter, *buff_uchar, len_uchar + 1, buff, nbytes, &status);
    if (U_FAILURE(status))
        assert(0); //(errmsg("ucnv_toUChars failed: %s", u_errorName(status))));
    return len_uchar;
}

static int32_t icu_from_uchar(char **result, const UChar *buff_uchar, int32_t len_uchar)
{
    UErrorCode status;
    int32_t len_result;

    status = U_ZERO_ERROR;
    len_result = ucnv_fromUChars(icu_converter, NULL, 0,
                                 buff_uchar, len_uchar, &status);
    if (U_FAILURE(status) && status != U_BUFFER_OVERFLOW_ERROR)
        assert(0); // (errmsg("ucnv_fromUChars failed: %s", u_errorName(status))));

    *result = (char *) malloc(len_result + 1);
    status = U_ZERO_ERROR;
    len_result = ucnv_fromUChars(icu_converter, *result, len_result + 1,
                                 buff_uchar, len_uchar, &status);
    if (U_FAILURE(status))
        assert(0); // (errmsg("ucnv_fromUChars failed: %s", u_errorName(status))));
    return len_result;
}

int main() {
    const char *utf8String = "Hello";
    int len = 5;
    UErrorCode status = U_ZERO_ERROR;
    icu_converter = ucnv_open("utf8", &status);
    assert(status <= U_ZERO_ERROR);

    UChar *buff_uchar;
    int32_t len_uchar = icu_to_uchar(&buff_uchar, utf8String, len);
    // use buff_uchar
    free(buff_uchar);
    return 0;
}
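For completeness: if the input is known to be UTF-8 and the ICU C++ API is available, the conversion can be done without a converter object at all via icu::UnicodeString::fromUTF8 (a sketch, not part of the original answer; invalid sequences are replaced with U+FFFD):
#include <unicode/unistr.h>

// Build a UnicodeString straight from the UTF-8 bytes the JSON reader returns.
// readJsonMessage() stands in for the hypothetical closed-source getter.
icu::UnicodeString msg = icu::UnicodeString::fromUTF8(readJsonMessage());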

Substitute Console window by Windows Forms

I have created a Win32 Console Application project in Visual Studio 2012 C++.
How can I replace the console window with a more appealing GUI like Windows Forms?
int32_t main(int32_t argc, char* argv[])
{
    const char *date = "20150428_1\\";
    int mode = 0;
    _CallServerPtr pCallServer;
    uint32_t start_address_comp = 0;
    uint32_t start_address_module = 0;
    const char* xmlFile_tx_dbb = "tx_dbb.xml";
    char str[100] = "\0";
    char localeStr[64];
    memset(localeStr, 0, sizeof localeStr);
    const char *l_path = "..\\XERCES\\Configs\\";
    std::string buf = "";
    double Fsym_Hz = (1/1.15)*1e9;
    int selection = 0;
    int user_selection = 0;

    try
    {
        if (strlen(localeStr))
        {
            XMLPlatformUtils::Initialize(localeStr);
        }
        else
        {
            XMLPlatformUtils::Initialize();
        }
    }
    catch (const XMLException& toCatch)
    {
        XERCES_STD_QUALIFIER cerr << "Error during initialization! :\n"
            << StrX(toCatch.getMessage()) << XERCES_STD_QUALIFIER endl;
    }

    static const XMLCh gLS[] = { chLatin_L, chLatin_S, chNull };
    DOMImplementation *impl = DOMImplementationRegistry::getDOMImplementation(gLS);
    DOMLSParser *parser = ((DOMImplementationLS*)impl)->createLSParser(DOMImplementationLS::MODE_SYNCHRONOUS, 0);
    DOMConfiguration *config = parser->getDomConfig();
    DOMLSSerializer *theSerializer = ((DOMImplementationLS*)impl)->createLSSerializer();
    DOMLSOutput *theOutputDesc = ((DOMImplementationLS*)impl)->createLSOutput();
    config->setParameter(XMLUni::fgDOMDatatypeNormalization, true);
    DOMCountErrorHandler errorHandler;
    config->setParameter(XMLUni::fgDOMErrorHandler, &errorHandler);
    XERCES_STD_QUALIFIER ifstream fin;

    // reset error count first
    errorHandler.resetErrors();
    // reset document pool
    parser->resetDocumentPool();

    char* pszHostname = NULL;
    pSaIn = 0;
    pSaOut = 0;

    // Initialize the COM Library
    CoInitialize(NULL);
    if (!pszHostname)
    {
        // Create the CallServer server object on the local computer
        pCallServer.CreateInstance(CLSID_CallServer);
    }
    if (pCallServer == NULL)
        throw "Failed to create the CallableVEE CallServer object";

    // Load the VEE User Function library
    char strpath[256];
    strcpy(strpath, reposity_path);
    strcat(strpath, l_path_vee);
    _bstr_t bstrLibPath(strpath);
    LibraryPtr pLib = pCallServer->GetLibraries()->Load(bstrLibPath);

    // Print out the names of the UserFunctions in this library.
    UserFunctionsPtr pUserFuncColl = pLib->GetUserFunctions();
    VARIANT_BOOL bDebug = VARIANT_FALSE;
    pCallServer->PutDebug(bDebug);

    // Variables added by ivi
    float *freq = (float *)_aligned_malloc(6, 16); // Read frequency vector

    // Previous variables
    int32_t devIdx;
    int32_t modeClock;
    int32_t ifType;
    const char *devType;
    char fpga_device_type[32];
    int32_t rc;
    int32_t ref_clk = 0;
    uint32_t carrier = 0;
    uint32_t odelay_dac0 = 0;
    uint32_t odelay_dac1 = 0;

    // Parse the application arguments
    if(argc != 5) {
        printf("Usage: FMCxxxApp.exe {interface type} {device type} {device index} {clock mode} \n\n");
        printf(" {interface type} can be either 0 (PCI) or 1 (Ethernet). At CEIT, we use 1 (Ethernet).\n");
        printf(" {device type} is a string defining the target hardware (VP680, ML605, ...). At CEIT, we use VC707.\n");
        printf(" {device index} is a PCI index or an Ethernet interface index. This value depends on the PC.\n");
        printf(" {clock mode} can be either 0 (Int. Clock) or 1 (Ext. Clock)\n");
        printf("\n");
        printf("\n");
        printf("Example: Fmc230APP.exe 1 VC707 0 0\n");
        printf("\n");
        printf("\n");
        printf(" List of NDIS interfaces found in the system {device index}:\n");
        printf(" -----------------------------------------------------------\n");
        if(sipif_getdeviceenumeration(API_ENUM_DISPLAY) != SIPIF_ERR_OK) {
            printf("Could not obtain NDIS(Ethernet) device enumeration...\n Check if the 4dspnet driver installed or if the service started?\n");
            printf("You can discard this error if you do not have any Ethernet based product in use.");
        }
        if( EXIT_IF_ERRORS)
        {
            sipif_free();
            system("pause");
            return -1;
        }
        ...
    }
You mean to have the same code in Windows Forms? That won't work. printf and the other console commands work only in a console application. You have to create a Windows Forms application and rewrite the code for it, rewriting all the commands that don't work in a GUI application. A conversion tool probably exists, but for this amount of code I think it's better to rewrite it.
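To give a flavour of the kind of rewrite involved, here is a minimal sketch (assuming a plain Win32 GUI target; Windows Forms proper would mean a C++/CLI project) of how console output such as the usage text might be surfaced in a GUI application:
#include <windows.h>

// A GUI app reports text through UI elements rather than printf;
// the simplest stand-in is a message box.
void showUsage()
{
    MessageBoxA(nullptr,
                "Usage: FMCxxxApp.exe {interface type} {device type} "
                "{device index} {clock mode}",
                "FMCxxxApp", MB_OK | MB_ICONINFORMATION);
}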

OpenSSL: AES CCM 256 bit encryption of large file by blocks: is it possible?

I am working on a task to encrypt large files with AES CCM mode (256-bit key length). Other parameters for encryption are:
tag size: 8 bytes
iv size: 12 bytes
Since we already use OpenSSL 1.0.1c I wanted to use it for this task as well.
The size of the files is not known in advance and they can be very large. That's why I wanted to read them in blocks and encrypt each block individually with EVP_EncryptUpdate, up to the file size.
Unfortunately, the encryption works for me only if the whole file is encrypted at once. I get errors from EVP_EncryptUpdate, or strange crashes, if I attempt to call it multiple times. I tested the encryption on Windows 7 and Ubuntu Linux with gcc 4.7.2.
I was not able to find any information on the OpenSSL site about whether encrypting the data block by block is possible.
Additional references:
http://www.fredriks.se/?p=23
http://incog-izick.blogspot.in/2011/08/using-openssl-aes-gcm.html
Please see the code below that demonstrates what I attempted to achieve. Unfortunately it is failing where indicated in the for loop.
#include <QByteArray>
#include <openssl/evp.h>

// Key in HEX representation
static const char keyHex[] = "d896d105b05aaec8305d5442166d5232e672f8d5c6dfef6f5bf67f056c4cf420";
static const char ivHex[] = "71d90ebb12037f90062d4fdb";

// Test patterns
static const char orig1[] = "Very secret message.";

const int c_tagBytes = 8;
const int c_keyBytes = 256 / 8;
const int c_ivBytes = 12;

bool Encrypt()
{
    EVP_CIPHER_CTX *ctx;
    ctx = EVP_CIPHER_CTX_new();
    EVP_CIPHER_CTX_init(ctx);

    QByteArray keyArr = QByteArray::fromHex(keyHex);
    QByteArray ivArr = QByteArray::fromHex(ivHex);
    auto key = reinterpret_cast<const unsigned char*>(keyArr.constData());
    auto iv = reinterpret_cast<const unsigned char*>(ivArr.constData());

    // Initialize the context with the alg only
    bool success = EVP_EncryptInit(ctx, EVP_aes_256_ccm(), nullptr, nullptr);
    if (!success) {
        printf("EVP_EncryptInit failed.\n");
        return success;
    }

    success = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_IVLEN, c_ivBytes, nullptr);
    if (!success) {
        printf("EVP_CIPHER_CTX_ctrl(EVP_CTRL_CCM_SET_IVLEN) failed.\n");
        return success;
    }

    success = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_TAG, c_tagBytes, nullptr);
    if (!success) {
        printf("EVP_CIPHER_CTX_ctrl(EVP_CTRL_CCM_SET_TAG) failed.\n");
        return success;
    }

    success = EVP_EncryptInit(ctx, nullptr, key, iv);
    if (!success) {
        printf("EVP_EncryptInit failed.\n");
        return success;
    }

    const int bsize = 16;
    const int loops = 5;
    const int finsize = sizeof(orig1) - 1; // Don't encrypt '\0'

    // Tell the alg we will encrypt size bytes
    // http://www.fredriks.se/?p=23
    int outl = 0;
    success = EVP_EncryptUpdate(ctx, nullptr, &outl, nullptr, loops*bsize + finsize);
    if (!success) {
        printf("EVP_EncryptUpdate for size failed.\n");
        return success;
    }
    printf("Set input size. outl: %d\n", outl);

    // Additional authentication data (AAD) is not used, but 0 must still be
    // passed to the function call:
    // http://incog-izick.blogspot.in/2011/08/using-openssl-aes-gcm.html
    static const unsigned char aadDummy[] = "dummyaad";
    success = EVP_EncryptUpdate(ctx, nullptr, &outl, aadDummy, 0);
    if (!success) {
        printf("EVP_EncryptUpdate for AAD failed.\n");
        return success;
    }
    printf("Set dummy AAD. outl: %d\n", outl);

    const unsigned char *in = reinterpret_cast<const unsigned char*>(orig1);
    unsigned char out[1000];
    int len;

    // Simulate multiple input data blocks (for example reading from file)
    for (int i = 0; i < loops; ++i) {
        // ** This function fails ***
        if (!EVP_EncryptUpdate(ctx, out + outl, &len, in, bsize)) {
            printf("DHAesDevice: EVP_EncryptUpdate failed.\n");
            return false;
        }
        outl += len;
    }
    if (!EVP_EncryptUpdate(ctx, out + outl, &len, in, finsize)) {
        printf("DHAesDevice: EVP_EncryptUpdate failed.\n");
        return false;
    }
    outl += len;

    int finlen;
    // Finish with encryption
    if (!EVP_EncryptFinal(ctx, out + outl, &finlen)) {
        printf("DHAesDevice: EVP_EncryptFinal failed.\n");
        return false;
    }
    outl += finlen;

    // Append the tag to the end of the encrypted output
    if (!EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_GET_TAG, c_tagBytes, out + outl)) {
        printf("DHAesDevice: EVP_CIPHER_CTX_ctrl failed.\n");
        return false;
    }
    outl += c_tagBytes;
    out[outl] = '\0';

    EVP_CIPHER_CTX_cleanup(ctx);
    EVP_CIPHER_CTX_free(ctx);

    QByteArray enc(reinterpret_cast<const char*>(out));
    printf("Plain text size: %d\n", loops*bsize + finsize);
    printf("Encrypted data size: %d\n", outl);
    printf("Encrypted data: %s\n", enc.toBase64().data());
    return true;
}
EDIT (Wrong Solution)
The feedback that I received made me think in a different direction, and I discovered that the EVP_EncryptUpdate call for the size must be made for each block being encrypted, not once for the total size of the file. I moved it to just before the block is encrypted, like this:
for (int i = 0; i < loops; ++i) {
    int buflen;
    (void)EVP_EncryptUpdate(m_ctx, nullptr, &buflen, nullptr, bsize);
    // Resize the output buffer to buflen here
    // ...
    // Encrypt into target buffer
    (void)EVP_EncryptUpdate(m_ctx, out, &len, in, buflen);
    outl += len;
}
AES-CCM encryption block by block works this way, but not correctly, because each block is treated as an independent message.
EDIT 2
OpenSSL's implementation works properly only if the complete message is encrypted at once.
http://marc.info/?t=136256200100001&r=1&w=1
I decided to use Crypto++ instead.
For AEAD CCM mode, you cannot encrypt data after the associated data has been fed to the context.
Encrypt all the data, and only after that pass the associated data.
I found some misconceptions here. First of all, calling
EVP_EncryptUpdate(ctx, nullptr, &outl, ...)
this way is how you find out how much output buffer is needed, so that you can allocate a buffer and, the second time, pass a valid buffer big enough to hold the data as the second argument.
You are also passing wrong values (overwritten by the previous call) when you actually add the encrypted output.
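To tie the thread together, here is a condensed sketch of the one-shot pattern that works with OpenSSL's CCM implementation (following the EVP documentation: total plaintext length first, then any AAD, then the whole plaintext in a single update; the 8-byte tag and 12-byte IV sizes are taken from the question):
#include <openssl/evp.h>

bool encryptCcmOneShot(const unsigned char* key, const unsigned char* iv,
                       const unsigned char* msg, int msgLen,
                       unsigned char* out, int& outLen)
{
    EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
    if (!ctx) return false;
    int len = 0;
    bool ok = EVP_EncryptInit(ctx, EVP_aes_256_ccm(), nullptr, nullptr)
           && EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_IVLEN, 12, nullptr)
           && EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_TAG, 8, nullptr)
           && EVP_EncryptInit(ctx, nullptr, key, iv)
           // CCM: announce the total plaintext length first ...
           && EVP_EncryptUpdate(ctx, nullptr, &len, nullptr, msgLen)
           // ... then the whole plaintext in ONE update call ...
           && EVP_EncryptUpdate(ctx, out, &len, msg, msgLen);
    outLen = len;
    // ... then finalize and fetch the tag.
    ok = ok && EVP_EncryptFinal(ctx, out + outLen, &len)
            && EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_GET_TAG, 8, out + outLen);
    if (ok) outLen += 8;
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}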