Serialization of a float vector in a struct - C++

I have the following code:
struct MsgDetectedTarget
{
int target_id;
float bbox[4]; // need to change
};
and the following serialization function:
void SerializeToArray(std::vector<char>& buffer, int& dst_len, void* pMsg, int len){
buffer.resize(HEADER_LENGTH + len);
// encode message header
char header[HEADER_LENGTH + 1] = "";
std::sprintf(header, "%8d", len);
std::memcpy(&buffer[0], header, HEADER_LENGTH);
// encode message body
std::memcpy(&buffer[0]+HEADER_LENGTH, reinterpret_cast<char*>(pMsg), len);
dst_len = HEADER_LENGTH + len;
}
If bbox in MsgDetectedTarget is a fixed-size array, the serialization is easy:
MsgDetectedTarget msg;
msg.target_id = 1;
msg.bbox[0] = 0;
msg.bbox[1] = 0;
msg.bbox[2] = 500;
msg.bbox[3] = 500;
std::vector<char> msgdata;
int destlen;
SerializeToArray(msgdata, destlen, &msg, sizeof(msg));
Problem:
I want to change bbox in MsgDetectedTarget to a std::vector<float>. How can I perform the corresponding serialization and deserialization?
Thanks very much.

If you cannot use Boost.Serialization, you can do what @Jean-FrancoisFabre suggested. Something like:
void SerializeToArray(std::vector<char>& buffer, int& dst_len, MsgDetectedTarget* msg) {
size_t numFloats = msg->bbox.size();
size_t len = sizeof(int) + sizeof(size_t) + sizeof(float)*numFloats;
buffer.resize(HEADER_LENGTH + len);
// encode message header
char header[HEADER_LENGTH + 1] = "";
std::sprintf(header, "%8d", static_cast<int>(len));
std::memcpy(&buffer[0], header, HEADER_LENGTH);
// encode target_id (reinterpret_cast is required here; static_cast cannot
// convert between unrelated pointer types)
std::memcpy(&buffer[0] + HEADER_LENGTH, reinterpret_cast<char*>(&msg->target_id), sizeof(int));
// encode numFloats
std::memcpy(&buffer[0] + HEADER_LENGTH + sizeof(int), reinterpret_cast<char*>(&numFloats), sizeof(size_t));
// encode the vector of floats (assumes bbox is non-empty)
std::memcpy(&buffer[0] + HEADER_LENGTH + sizeof(int) + sizeof(size_t),
reinterpret_cast<char*>(&msg->bbox[0]), sizeof(float)*numFloats);
dst_len = static_cast<int>(HEADER_LENGTH + len);
}
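For completeness, a matching deserializer might look like this. It is a sketch under the same layout assumptions (8-character ASCII length header, then target_id, then the element count, then the floats) and assumes sender and receiver agree on endianness and sizeof(size_t):
void DeserializeFromArray(const std::vector<char>& buffer, MsgDetectedTarget* msg) {
const char* p = &buffer[0] + HEADER_LENGTH; // skip the length header
// decode target_id
std::memcpy(&msg->target_id, p, sizeof(int));
p += sizeof(int);
// decode the element count
size_t numFloats = 0;
std::memcpy(&numFloats, p, sizeof(size_t));
p += sizeof(size_t);
// decode the floats
msg->bbox.resize(numFloats);
if (numFloats)
std::memcpy(&msg->bbox[0], p, sizeof(float)*numFloats);
}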

Related

Flip and crop a bitmap

I'm getting a bytearray (32 bit or 16 bit) from a source.
If the size width is odd, the last pixel in each row needs to be dropped.
If the height is odd, the last row needs to be dropped.
If the height is negative the bitmap needs to be flipped vertically.
Here is my code so far:
m_pbmiLast = new BITMAPINFO(*m_pbmi);
m_pbmiLast->bmiHeader.biWidth = abs(m_pbmiLast->bmiHeader.biWidth) - (abs(m_pbmiLast->bmiHeader.biWidth) % 2);
m_pbmiLast->bmiHeader.biHeight = abs(m_pbmiLast->bmiHeader.biHeight) - (abs(m_pbmiLast->bmiHeader.biHeight) % 2);
int biWidth = m_pbmiLast->bmiHeader.biWidth;
int biHeight = m_pbmiLast->bmiHeader.biHeight;
int iAdjustedStride = ((((biWidth * m_pbmiLast->bmiHeader.biBitCount) + 31) & ~31) >> 3);
int iRealStride = ((((m_pbmi->bmiHeader.biWidth * m_pbmi->bmiHeader.biBitCount) + 31) & ~31) >> 3);
if (m_pbmi->bmiHeader.biHeight < 0) {
/* Copy the actual data */
int iLineOffsetSource = 0;
int iLineOffsetDest = (biHeight - 1) * iRealStride;
for (int i = 0; i < biHeight; ++i) {
memcpy(&pData[iLineOffsetDest], &m_inputBuffer[iLineOffsetSource], iAdjustedStride);
iLineOffsetSource += iRealStride;
iLineOffsetDest -= iRealStride;
}
} else {
int iLineOffset = 0;
for (int i = 0; i < biHeight; ++i) {
memcpy(&pData[iLineOffset], &m_inputBuffer[iLineOffset], iAdjustedStride);
iLineOffset += iRealStride;
}
}
It doesn't flip the bitmap, and when the bitmap has an odd width, it slants the bitmap.
It can be done like so. I include the reading and writing just to make it an SSCCE. It has little to no error checking.
As for my comment about new BITMAPINFO: I was saying that you don't have to allocate such a small structure on the heap. Ditch the new part. The only allocation you need for a bitmap is the pixels; the header and other info don't need an allocation at all.
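For instance, instead of m_pbmiLast = new BITMAPINFO(*m_pbmi); you could write something like this (a sketch against the question's variables):
BITMAPINFO bmiLast = *m_pbmi; // automatic storage, copied by value; no new/delete needed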
See the Flip function below.
#include <iostream>
#include <fstream>
#include <cstring>
#include <cstdlib> // std::abs
#include <windows.h>
typedef struct
{
BITMAPFILEHEADER Header;
BITMAPINFO Info;
unsigned char* Pixels;
} BITMAPDATA;
void LoadBmp(const char* path, BITMAPDATA* Data)
{
std::ifstream hFile(path, std::ios::in | std::ios::binary);
if(hFile.is_open())
{
hFile.read((char*)&Data->Header, sizeof(Data->Header));
hFile.read((char*)&Data->Info, sizeof(Data->Info));
hFile.seekg(Data->Header.bfOffBits, std::ios::beg);
Data->Pixels = new unsigned char[Data->Info.bmiHeader.biSizeImage];
hFile.read((char*)Data->Pixels, Data->Info.bmiHeader.biSizeImage);
hFile.close();
}
}
void SaveBmp(const char* path, BITMAPDATA* Data)
{
std::ofstream hFile(path, std::ios::out | std::ios::binary);
if (hFile.is_open())
{
hFile.write((char*)&Data->Header, sizeof(Data->Header));
hFile.write((char*)&Data->Info, sizeof(Data->Info));
hFile.seekp(Data->Header.bfOffBits, std::ios::beg);
hFile.write((char*)Data->Pixels, Data->Info.bmiHeader.biSizeImage);
hFile.close();
}
}
void Flip(BITMAPDATA* Data)
{
unsigned short bpp = Data->Info.bmiHeader.biBitCount;
unsigned int width = std::abs(Data->Info.bmiHeader.biWidth);
unsigned int height = std::abs(Data->Info.bmiHeader.biHeight);
unsigned char* out = new unsigned char[Data->Info.bmiHeader.biSizeImage];
// bytes per row, including the padding that rounds each row up to a 4-byte boundary
unsigned long chunk = (bpp > 24 ? width * 4 : width * 3 + width % 4);
unsigned char* dst = out;
unsigned char* src = Data->Pixels + chunk * (height - 1);
// copy the rows in reverse order, last row first
while(src != Data->Pixels)
{
std::memcpy(dst, src, chunk);
dst += chunk;
src -= chunk;
}
std::memcpy(dst, src, chunk); // copy the remaining first row
std::swap(Data->Pixels, out);
delete[] out;
}
int main()
{
BITMAPDATA Data;
LoadBmp("C:/Users/Brandon/Desktop/Bar.bmp", &Data);
Flip(&Data);
SaveBmp("C:/Users/Brandon/Desktop/Foo.bmp", &Data);
delete[] Data.Pixels;
return 0;
}
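A note on the row size: the chunk computation above is specific to 24- and 32-bit bitmaps. The general BMP rule is that each scanline is padded up to a multiple of 4 bytes, which can be computed for any bit depth; a small sketch:
// General BMP stride: each scanline is rounded up to a DWORD (4-byte) boundary.
inline unsigned long BmpStride(unsigned int width, unsigned short bpp)
{
return ((width * bpp + 31) / 32) * 4;
}
The slant in the question's code most likely comes from advancing the destination offset by iRealStride when the cropped output's rows are iAdjustedStride bytes apart.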

wcscpy_s not affecting wchar_t*

I'm trying to load some strings from a database into a struct, but I keep running into an odd issue. Using my struct datum,
struct datum {
wchar_t* name;
wchar_t* lore;
};
I tried the following code snippet
datum thisDatum;
size_t len = 0;
wchar_t wBuffer[2048];
mbstowcs_s(&len, wBuffer, (const char*)sqlite3_column_text(pStmt, 1), 2048);
if (len) {
thisDatum.name = new wchar_t[len + 1];
wcscpy_s(thisDatum.name, len + 1, wBuffer);
} else thisDatum.name = 0;
mbstowcs_s(&len, wBuffer, (const char*)sqlite3_column_text(pStmt, 2), 2048);
if (len) {
thisDatum.lore = new wchar_t[len + 1];
wcscpy_s(thisDatum.lore, len + 1, wBuffer);
} else thisDatum.name = 0; // note: this second block probably meant thisDatum.lore = 0;
However, while thisDatum.name copies correctly, thisDatum.lore is always garbage, except in two cases. If the project is built as Debug, everything is fine, but that just isn't an option. I also discovered that rewriting the struct datum as
struct datum {
wchar_t* lore;
wchar_t* name;
};
completely fixes the issue for thisDatum.lore, but gives me garbage for thisDatum.name.
The symptoms (garbage that moves when you reorder the struct members, and that disappears in a Debug build) point to memory corruption rather than to wcscpy_s itself. Try something more like this:
struct datum {
wchar_t* name;
wchar_t* lore;
};
wchar_t* widen(const char *str)
{
wchar_t *wBuffer = NULL;
size_t len = strlen(str) + 1;
size_t wlen = 0;
mbstowcs_s(&wlen, NULL, 0, str, len);
if (wlen)
{
wBuffer = new wchar_t[wlen];
mbstowcs_s(NULL, wBuffer, wlen, str, len);
}
return wBuffer;
}
datum thisDatum;
thisDatum.name = widen((const char*)sqlite3_column_text(pStmt, 1));
thisDatum.lore = widen((const char*)sqlite3_column_text(pStmt, 2));
...
delete[] thisDatum.name;
delete[] thisDatum.lore;
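One caveat (my addition, not part of the original snippet): sqlite3_column_text() returns NULL for a NULL column, and widen() as written would crash in strlen(). A guard such as if (!str) return NULL; at the top of widen() avoids that.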
That being said, I would use std::wstring instead:
struct datum {
std::wstring name;
std::wstring lore;
};
#include <locale>
#include <codecvt>
std::wstring widen(const char *str)
{
// note: std::codecvt itself has a protected destructor, so it can't be
// used directly with wstring_convert; the codecvt_utf8_utf16 facet can
std::wstring_convert< std::codecvt_utf8_utf16<wchar_t> > conv;
return conv.from_bytes(str);
}
datum thisDatum;
thisDatum.name = widen((const char*)sqlite3_column_text(pStmt, 1));
thisDatum.lore = widen((const char*)sqlite3_column_text(pStmt, 2));

C++ LZMA SDK: Uncompress function for LZMA2 compressed file

I am trying to create a function that uncompresses LZMA2-compressed data. I took inspiration from this tutorial, which works great for LZMA, and tried to adapt it for LZMA2. I successfully created the compression function for LZMA2, but I have had no success with the uncompression one.
Here is the compression function:
static void Compress2Inc(std::vector<unsigned char> &outBuf,
const std::vector<unsigned char> &inBuf)
{
CLzma2EncHandle enc = Lzma2Enc_Create(&SzAllocForLzma, &SzAllocForLzma2);
assert(enc);
CLzma2EncProps props;
Lzma2EncProps_Init(&props);
props.lzmaProps.writeEndMark = 1; // 0 or 1
SRes res = Lzma2Enc_SetProps(enc, &props);
assert(res == SZ_OK);
unsigned propsSize = LZMA_PROPS_SIZE;
outBuf.resize(propsSize);
res = Lzma2Enc_WriteProperties(enc);
//cout << res;
//assert(res == SZ_OK && propsSize == LZMA_PROPS_SIZE);
VectorInStream inStream = { &VectorInStream_Read, &inBuf, 0 };
VectorOutStream outStream = { &VectorOutStream_Write, &outBuf };
res = Lzma2Enc_Encode(enc,
(ISeqOutStream*)&outStream, (ISeqInStream*)&inStream,
0);
assert(res == SZ_OK);
Lzma2Enc_Destroy(enc);
}
Where:
static void *AllocForLzma2(void *, size_t size) { return BigAlloc(size); }
static void FreeForLzma2(void *, void *address) { BigFree(address); }
static ISzAlloc SzAllocForLzma2 = { AllocForLzma2, FreeForLzma2 };
static void *AllocForLzma(void *, size_t size) { return MyAlloc(size); }
static void FreeForLzma(void *, void *address) { MyFree(address); }
static ISzAlloc SzAllocForLzma = { AllocForLzma, FreeForLzma };
typedef struct
{
ISeqInStream SeqInStream;
const std::vector<unsigned char> *Buf;
unsigned BufPos;
} VectorInStream;
SRes VectorInStream_Read(void *p, void *buf, size_t *size)
{
VectorInStream *ctx = (VectorInStream*)p;
*size = min(*size, ctx->Buf->size() - ctx->BufPos);
if (*size)
memcpy(buf, &(*ctx->Buf)[ctx->BufPos], *size);
ctx->BufPos += *size;
return SZ_OK;
}
typedef struct
{
ISeqOutStream SeqOutStream;
std::vector<unsigned char> *Buf;
} VectorOutStream;
size_t VectorOutStream_Write(void *p, const void *buf, size_t size)
{
VectorOutStream *ctx = (VectorOutStream*)p;
if (size)
{
unsigned oldSize = ctx->Buf->size();
ctx->Buf->resize(oldSize + size);
memcpy(&(*ctx->Buf)[oldSize], buf, size);
}
return size;
}
Here is what I have so far for the uncompression function, but Lzma2Dec_DecodeToBuf returns error code 1 (SZ_ERROR_DATA) and I just couldn't find anything regarding this on the web.
static void Uncompress2Inc(std::vector<unsigned char> &outBuf,
const std::vector<unsigned char> &inBuf)
{
CLzma2Dec dec;
Lzma2Dec_Construct(&dec);
SRes res = Lzma2Dec_Allocate(&dec, outBuf.size(), &SzAllocForLzma);
assert(res == SZ_OK);
Lzma2Dec_Init(&dec);
outBuf.resize(UNCOMPRESSED_SIZE);
unsigned outPos = 0, inPos = LZMA_PROPS_SIZE;
ELzmaStatus status;
const unsigned BUF_SIZE = 10240;
while (outPos < outBuf.size())
{
unsigned destLen = min(BUF_SIZE, outBuf.size() - outPos);
unsigned srcLen = min(BUF_SIZE, inBuf.size() - inPos);
unsigned srcLenOld = srcLen, destLenOld = destLen;
res = Lzma2Dec_DecodeToBuf(&dec,
&outBuf[outPos], &destLen,
&inBuf[inPos], &srcLen,
(outPos + destLen == outBuf.size()) ? LZMA_FINISH_END : LZMA_FINISH_ANY,
&status);
assert(res == SZ_OK);
inPos += srcLen;
outPos += destLen;
if (status == LZMA_STATUS_FINISHED_WITH_MARK)
break;
}
Lzma2Dec_Free(&dec, &SzAllocForLzma);
outBuf.resize(outPos);
}
I am using Visual Studio 2008 and the LZMA SDK downloaded from here. Someone here had the exact same problem, but I couldn't make use of his code...
Did anyone ever successfully uncompress LZMA2-compressed files using the LZMA SDK?
Please help!
A temporary workaround would be to replace SRes res = Lzma2Dec_Allocate(&dec, outBuf.size(), &SzAllocForLzma); with SRes res = Lzma2Dec_Allocate(&dec, 8, &SzAllocForLzma); in the Uncompress2Inc function, where 8 is a magic number...
However, this is not the right way to solve the problem.
The first mistake is that Lzma2Enc_WriteProperties doesn't return a result code but a property byte, which has to be passed as the second parameter of the Lzma2Dec_Allocate call in the Uncompress2Inc function. So we replace the magic number 8 with the property byte and everything works as expected.
To achieve this, a 5-byte header must be added to the encoded data and extracted again in the decoding function. Here is an example that works in VS2008 (not the most perfect code, but it works... I will come back later, when I have time, with a better example):
void Lzma2Benchmark::compressChunk(std::vector<unsigned char> &outBuf, const std::vector<unsigned char> &inBuf)
{
//! \todo This is a temporary workaround; the size needs to be added to the compressed buffer's header.
m_uncompressedSize = inBuf.size();
std::cout << "Uncompressed size is: " << inBuf.size() << std::endl;
DWORD tickCountBeforeCompression = GetTickCount();
CLzma2EncHandle enc = Lzma2Enc_Create(&m_szAllocForLzma, &m_szAllocForLzma2);
assert(enc);
CLzma2EncProps props;
Lzma2EncProps_Init(&props);
props.lzmaProps.writeEndMark = 1; // 0 or 1
props.lzmaProps.level = 9;
props.lzmaProps.numThreads = 3;
//props.numTotalThreads = 2;
SRes res = Lzma2Enc_SetProps(enc, &props);
assert(res == SZ_OK);
// LZMA_PROPS_SIZE == 5 bytes
unsigned propsSize = LZMA_PROPS_SIZE;
outBuf.resize(propsSize);
// I think Lzma2Enc_WriteProperties returns the encoding properties in 1 Byte
Byte properties = Lzma2Enc_WriteProperties(enc);
//! \todo This is a temporary workaround
m_propByte = properties;
//! \todo Here m_propByte and m_uncompressedSize need to be added to outBuf's 5 byte header so simply add those 2 values to outBuf and start the encoding from there.
BenchmarkUtils::VectorInStream inStream = { &BenchmarkUtils::VectorInStream_Read, &inBuf, 0 };
BenchmarkUtils::VectorOutStream outStream = { &BenchmarkUtils::VectorOutStream_Write, &outBuf };
res = Lzma2Enc_Encode(enc,
(ISeqOutStream*)&outStream,
(ISeqInStream*)&inStream,
0);
std::cout << "Compress time is: " << GetTickCount() - tickCountBeforeCompression << " milliseconds.\n";
assert(res == SZ_OK);
Lzma2Enc_Destroy(enc);
std::cout << "Compressed size is: " << outBuf.size() << std::endl;
}
void Lzma2Benchmark::unCompressChunk(std::vector<unsigned char> &outBuf, const std::vector<unsigned char> &inBuf)
{
DWORD tickCountBeforeUncompression = GetTickCount();
CLzma2Dec dec;
Lzma2Dec_Construct(&dec);
//! \todo Here the property byte and the uncompressed size need to be extracted from inBuf, which is the compressed data.
// The second parameter is a temporary workaround.
SRes res = Lzma2Dec_Allocate(&dec, m_propByte/*8*/, &m_szAllocForLzma);
assert(res == SZ_OK);
Lzma2Dec_Init(&dec);
outBuf.resize(m_uncompressedSize);
unsigned outPos = 0, inPos = LZMA_PROPS_SIZE;
ELzmaStatus status;
const unsigned BUF_SIZE = 10240;
while(outPos < outBuf.size())
{
SizeT destLen = std::min(BUF_SIZE, outBuf.size() - outPos);
SizeT srcLen = std::min(BUF_SIZE, inBuf.size() - inPos);
SizeT srcLenOld = srcLen, destLenOld = destLen;
res = Lzma2Dec_DecodeToBuf(&dec,
&outBuf[outPos],
&destLen,
&inBuf[inPos],
&srcLen,
(outPos + destLen == outBuf.size()) ? LZMA_FINISH_END : LZMA_FINISH_ANY,
&status);
assert(res == SZ_OK);
inPos += srcLen;
outPos += destLen;
if(status == LZMA_STATUS_FINISHED_WITH_MARK)
{
break;
}
}
Lzma2Dec_Free(&dec, &m_szAllocForLzma);
outBuf.resize(outPos);
std::cout << "Uncompress time is: " << GetTickCount() - tickCountBeforeUncompression << " milliseconds.\n";
}
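To make the example fully self-contained, the property byte and the uncompressed size can travel in a small header at the front of the compressed buffer instead of in member variables. A sketch of that idea (the 9-byte layout here, one property byte plus an 8-byte little-endian size, is my own choice, not something the SDK prescribes):
// Prepend a tiny header: [1 byte: LZMA2 prop byte][8 bytes: uncompressed size, little-endian].
static void WriteHeader(std::vector<unsigned char> &buf, unsigned char propByte,
unsigned long long uncompressedSize)
{
buf.insert(buf.begin(), 9, 0);
buf[0] = propByte;
for (int i = 0; i < 8; ++i)
buf[1 + i] = (unsigned char)(uncompressedSize >> (8 * i));
}
// Read the header back; returns the offset where the LZMA2 stream starts.
static size_t ReadHeader(const std::vector<unsigned char> &buf, unsigned char &propByte,
unsigned long long &uncompressedSize)
{
propByte = buf[0];
uncompressedSize = 0;
for (int i = 0; i < 8; ++i)
uncompressedSize |= (unsigned long long)buf[1 + i] << (8 * i);
return 9;
}
The decoder would then pass propByte to Lzma2Dec_Allocate, resize outBuf to uncompressedSize, and start inPos at the offset returned by ReadHeader instead of at LZMA_PROPS_SIZE.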

Simulate ulltoa() with a radix/base of 36

I need to convert an unsigned 64-bit integer into a string, in base 36 (characters 0-Z). ulltoa does not exist in the Linux man pages, but sprintf does. How do I use sprintf to achieve the desired result, i.e. what format specifier do I need?
Or, if snprintf cannot do it, how else can I do this?
You can always just write your own conversion function. The following idea is heavily inspired by this fine answer:
char * int2base36(unsigned long long n, char * buf, size_t buflen)
{
static const char digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
if (buflen < 1) return NULL; // buffer too small!
char * b = buf + buflen;
*--b = 0;
do {
if (b == buf) return NULL; // buffer too small!
*--b = digits[n % 36];
n /= 36;
} while(n);
return b;
}
This will return a pointer to a null-terminated string containing the base-36 representation of n, placed in a buffer that you provide. Usage:
char buf[100];
std::cout << int2base36(37, buf, 100);
If you want, and you're single-threaded, you can also make the char buffer static; I guess you can figure out a suitable maximal length:
char * int2base36_not_threadsafe(unsigned long long n)
{
static char buf[128];
static const size_t buflen = 128;
// rest as above
}
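For a quick check with a genuinely 64-bit value (assuming the unsigned long long variant above):
char buf[32]; // 13 base-36 digits are enough for 2^64 - 1, plus the terminator
std::cout << int2base36(18446744073709551615ULL, buf, sizeof buf) << std::endl;
// prints 3W5E11264SGSF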

Why is UChar* not working with this ICU conversion?

When converting from UTF-8 to ISO-8859-6, this code didn't work:
UnicodeString ustr = UnicodeString::fromUTF8(StringPiece(input));
const UChar* source = ustr.getBuffer();
char target[1000];
UErrorCode status = U_ZERO_ERROR;
UConverter *conv;
int32_t len;
// set up the converter
conv = ucnv_open("iso-8859-6", &status);
assert(U_SUCCESS(status));
// convert
len = ucnv_fromUChars(conv, target, 100, source, -1, &status);
assert(U_SUCCESS(status));
// close the converter
ucnv_close(conv);
string s(target);
return s;
However, when replacing UChar* with a hard-coded UChar[], it works well!
It looks like you're taking the difficult approach. How about this:
static char const* const cp = "iso-8859-6";
UnicodeString ustr = UnicodeString::fromUTF8(StringPiece(input));
std::vector<char> buf(ustr.length() + 1);
std::vector<char>::size_type len = ustr.extract(0, ustr.length(), &buf[0], buf.size(), cp);
if (len >= buf.size())
{
buf.resize(len + 1);
len = ustr.extract(0, ustr.length(), &buf[0], buf.size(), cp);
}
std::string ret;
if (len)
ret.assign(buf.begin(), buf.begin() + len);
return ret;
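As for why the original snippet misbehaves: UnicodeString::getBuffer() returns a buffer that is not guaranteed to be NUL-terminated, so passing -1 as the source length to ucnv_fromUChars can read past the end of the string (a hard-coded UChar[] literal is presumably written with its own terminator, which is why that version works). If you keep the converter-based approach, pass the explicit length or use getTerminatedBuffer(); a minimal sketch:
// Sketch: the converter approach with an explicit source length instead of -1.
UErrorCode status = U_ZERO_ERROR;
UConverter* conv = ucnv_open("iso-8859-6", &status);
assert(U_SUCCESS(status));
char target[1000];
int32_t len = ucnv_fromUChars(conv, target, sizeof(target),
ustr.getBuffer(), ustr.length(), &status);
assert(U_SUCCESS(status));
ucnv_close(conv);
std::string s(target, len);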