Taking an OpenCV Mat<double> and converting to an array of 12-bit values - C++

I have a cv::Mat image of doubles that I've clamped between 0.0 and 4095.0. I want to be able to convert this matrix, or create a new matrix based on this one, that holds 12-bit values (the smallest integer size needed to hold the 0 to 4095 range). I can just get the raw buffer out, however I'm not sure of the format of the data inside the matrix.
Manually I could do the following:
cv::Mat new_matrix(/*type CV_8UC3, size (matrix.rows, matrix.cols/2)*/);
for (int i = 0; i < matrix.rows; ++i) {
    for (int j = 0; j < matrix.cols / 2; ++j) {
        std::uint16_t upper_half = static_cast<std::uint16_t>(matrix.at<double>(i, j * 2));
        std::uint16_t lower_half = static_cast<std::uint16_t>(matrix.at<double>(i, j * 2 + 1));
        std::uint8_t first_byte  = static_cast<std::uint8_t>(upper_half >> 4);
        std::uint8_t second_byte = static_cast<std::uint8_t>(upper_half << 4) | static_cast<std::uint8_t>(lower_half & 0x0F);
        std::uint8_t third_byte  = static_cast<std::uint8_t>(lower_half >> 4);
        new_matrix.at<cv::Vec3b>(i, j) = cv::Vec3b(first_byte, second_byte, third_byte);
    }
}
which essentially compresses two double values, one into the upper 12 bits and one into the lower 12 bits, extracting three bytes out of them (12 + 12 = 24, 24/8 = 3) into a 3-byte-per-element matrix. However, I'm unsure whether the memory layout will match that of packed 12-bit values (I do have an even number of cols, so dividing cols/2 isn't a problem), and I'm not sure how to make sure this obeys endianness.
I might even be able to use a custom data type, but I would need to make sure the elements are not padded if, say, I made a union/struct 12-bit type or something.
Note after the conversion, I'm not intending to use the 12bit values in OpenCV anymore, I then need to extract the raw values and they get sent to another separate process.

cv::Mat stores data in units of 8 bits, minimum. This means your 12-bit values would be padded anyway inside the matrix, as evidenced by the return value of Mat::elemSize1(), which is in number of bytes. For what you need to do, the best bet seems to be a custom struct holding two values (struct sizes are byte-padded as well), then packing everything into a std::vector<>. You will then waste at worst 12 bits of padding in the streamed data, when you have an odd number of samples.
A note about packing: if you use something like the following, you need to reverse the order of the bit-sized elements, depending on the machine, if you transfer the bytes from one architecture to another.
#pragma pack(push, 1)
struct PackedSamples {
    char lowA;
    char highA : 4; // NOTE: the declaration order of bit-sized fields is reversed when
    char lowB  : 4; // going from BIG_ENDIAN to LITTLE_ENDIAN and vice versa
    char highB;
};
#pragma pack(pop)
Here are the macros I use for testing endianness; I assume Windows implies running on x86/x64, and therefore little-endian.
#ifdef WIN32
# ifndef __BYTE_ORDER
# define __LITTLE_ENDIAN 1234
# define __BIG_ENDIAN 4321
# define __BYTE_ORDER __LITTLE_ENDIAN
# endif
#else
# include <endian.h>
#endif
So the declaration above would become:
#pragma pack(push, 1)
struct PackedSamples {
    char lowA;
#if __BYTE_ORDER == __LITTLE_ENDIAN
    char highA : 4;
    char lowB  : 4;
#else
    char lowB  : 4;
    char highA : 4;
#endif
    char highB;
};
#pragma pack(pop)
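For illustration, here is a sketch (not part of the original answer) of how two clamped samples might be packed into the PackedSamples struct above and collected into a std::vector<> for streaming. The interpretation of the field names (lowA = low byte of sample A, highA = its top 4 bits, and so on) and the helper name packRow are assumptions:

#include <cstdint>
#include <vector>
#include <opencv2/core.hpp>

// Sketch only: pack pairs of 12-bit samples (already clamped to 0..4095)
// from one matrix row into PackedSamples for streaming.
std::vector<PackedSamples> packRow(const cv::Mat& matrix, int row)
{
    std::vector<PackedSamples> out;
    out.reserve(matrix.cols / 2);
    for (int j = 0; j + 1 < matrix.cols; j += 2) {
        const std::uint16_t a = static_cast<std::uint16_t>(matrix.at<double>(row, j));
        const std::uint16_t b = static_cast<std::uint16_t>(matrix.at<double>(row, j + 1));
        PackedSamples p;
        p.lowA  = static_cast<char>(a & 0xFF);         // low 8 bits of sample A
        p.highA = static_cast<char>((a >> 8) & 0x0F);  // top 4 bits of sample A
        p.lowB  = static_cast<char>(b & 0x0F);         // low 4 bits of sample B
        p.highB = static_cast<char>((b >> 4) & 0xFF);  // top 8 bits of sample B
        out.push_back(p);
    }
    return out;
}

The raw bytes can then be handed to the other process via out.data() and out.size() * sizeof(PackedSamples).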

Related

unsigned int tot_len:16; returns different result to uint16_t

As far as I can see, an unsigned int with a specified bit length should be equivalent to uintxx_t if the sizes are equivalent. However, when I point a struct with these members at an area of memory to observe the fields, the values are different. Specifically, I'm examining IP header fields. Using unsigned int tot_len:16; returns the correct result, whereas declaring the field as uint16_t returns an incorrect value. gcc/g++ 4.5.2 is being used on a Windows platform. Could anyone explain what is happening?
The faulty struct definition:
typedef struct ip4 {
#if __BYTE_ORDER == __LITTLE_ENDIAN
    unsigned int ihl:4;
    unsigned int version:4;
#elif __BYTE_ORDER == __BIG_ENDIAN
    unsigned int version:4;
    unsigned int ihl:4;
#else
# error "Please fix <bits/endian.h>"
#endif
    uint8_t tos;
    uint16_t tot_len;
    uint16_t id;
    uint16_t frag_off; // flags = 3 bits, offset = 13 bits
    uint8_t ttl;
    uint8_t protocol;
    uint16_t check;
    uint32_t saddr;
    uint32_t daddr;
    /* The options start here. */
} ip4_t;
The struct works when all the uintxx_t members are switched to the unsigned int xxx:xx equivalents. In other words, all values read through uintxx_t members are incorrect; when switched to unsigned int xxx:xx, the values are correct. (I am trying to fix some issues with a third-party library that my work is using, so I can't provide an entirely reproducible example.) The calling method:
void scan_ip4(register scan_t *scan) {

    header_t *eth;

    if ((scan->buf_len - scan->offset) < sizeof(ip4_t)) {
        return;
    }

    register ip4_t *ip4 = (ip4_t *) (scan->buf + scan->offset);
    uint16_t tot_len = BIG_ENDIAN16(ip4->tot_len);
    scan->length = ip4->ihl * 4;
    scan->hdr_payload = tot_len - scan->length;

    if (is_accessible(scan, 8) == FALSE) {
        return;
    }
    ....
}
typedef struct ip4 {
#if __BYTE_ORDER == __LITTLE_ENDIAN
    unsigned int ihl:4;
    unsigned int version:4;
#elif __BYTE_ORDER == __BIG_ENDIAN
    unsigned int version:4;
    unsigned int ihl:4;
#else
# error "Please fix <bits/endian.h>"
#endif
    uint8_t tos;
    uint16_t tot_len;
    ...
Here you define a bit-field using unsigned int as its storage unit and place two 4-bit values in it. The remaining 24 bits are padding. tos is then at offset 4, followed by another padding byte, and tot_len is at offset 6.
typedef struct ip4 {
#if __BYTE_ORDER == __LITTLE_ENDIAN
    unsigned int ihl:4;
    unsigned int version:4;
#elif __BYTE_ORDER == __BIG_ENDIAN
    unsigned int version:4;
    unsigned int ihl:4;
#else
# error "Please fix <bits/endian.h>"
#endif
    unsigned int tos:8;
    unsigned int tot_len:16;
    ...
Here you place all four fields into the bit-field using a single 32-bit unsigned int. So tos is at offset 1 and tot_len is at offset 2.
Or the bit-field is laid out the other way around, with tot_len at offset 0, tos at offset 2 and the rest at offset 3. Bit-field layout is implementation-defined.
If you want only ihl and version in the bit-field, then you have to declare them as uint8_t ihl : 4; uint8_t version : 4; so that only a single byte is used as storage for the bit-field.
You should static_assert(sizeof(ip4) == <expected size>);
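For illustration, a minimal sketch of that suggestion. The 20-byte figure assumes a plain IPv4 header with no options; uint8_t as a bit-field type is implementation-specific but accepted by GCC, and static_assert assumes C++11 (or C11's _Static_assert):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef struct ip4 {
#if __BYTE_ORDER == __LITTLE_ENDIAN
    uint8_t ihl:4;
    uint8_t version:4;
#else
    uint8_t version:4;
    uint8_t ihl:4;
#endif
    uint8_t tos;
    uint16_t tot_len;
    uint16_t id;
    uint16_t frag_off;
    uint8_t ttl;
    uint8_t protocol;
    uint16_t check;
    uint32_t saddr;
    uint32_t daddr;
} ip4_t;

static_assert(sizeof(ip4_t) == 20, "ip4_t must match the 20-byte IPv4 header");

/* Optional: print the offsets the compiler actually assigned. */
void dump_ip4_layout(void) {
    printf("tos at %zu, tot_len at %zu, sizeof %zu\n",
           offsetof(ip4_t, tos), offsetof(ip4_t, tot_len), sizeof(ip4_t));
}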

Arduino - how to feed a struct from a serial.read()?

I am a beginner and I am trying to fill a struct with 4 bit-field members via a pointer, then send it out to another device on Serial2. I am failing to do so.
I receive 4 chars from serial1.read(), for example 'A' '10' '5' '3'.
To decrease the size of the data, I want to use a struct:
struct structTable {
    unsigned int page:1; // (0,1)
    unsigned int cric:4; // 10 choices (4 bits)
    unsigned int crac:3; // 5 choices (3 bits)
    unsigned int croc:2; // 3 choices (2 bits)
};
I declare an instance and a pointer, and set the pointer:
struct structTable structTable;
struct structTable *PtrstructTable;
PtrstructTable = &structTable;
Then I try to feed it like this:
for (int i = 0; i <= 4; i++) {
    if (i == 1) {
        (*PtrProgs).page = Serial.read();
    }
    if (i == 2) {
        (*PtrProgs).cric = Serial.read();
    }
And so on. But it's not working...
I also tried to feed a first char array and cast the result:
(*PtrProgs).page = PtrT[1], BIN;
And now I realize I cannot feed 3 bits at a time! doh! All this seems very clumsy, and certainly too long a process for just 4 values. (I wanted to keep this kind of struct for more instances.)
Please, could you help me to find a simpler way to feed my table?
You can only send full bytes over the serial port. But you can also send raw data directly.
void send(const structTable* table)
{
    Serial.write((const char*)table, sizeof(structTable)); // 2 bytes.
}

bool receive(structTable* table)
{
    return (Serial.readBytes((char*)table, sizeof(structTable)) == sizeof(structTable));
}
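For illustration, a hedged sketch of how these helpers might be called from the sketch's loop (the field value and the flow are made up):

void loop()
{
    structTable table;
    if (receive(&table)) {   // read the 2 packed bytes from the serial port
        table.cric = 7;      // bit-fields can be read and written like small ints
        send(&table);        // forward the packed bytes onward
    }
}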
You also have to be aware that sizeof(int) is not the same on all CPUs.
A word about endianness: if the program at the other end of the serial link runs on a CPU with a different endianness, the definition of your struct there would become:
struct structTable {
    unsigned short int croc:2; // 3 choices (2 bits)
    unsigned short int crac:3; // 5 choices (3 bits)
    unsigned short int cric:4; // 10 choices (4 bits)
    unsigned short int page:1; // (0,1)
};
Note the use of short int, which you can also use in the Arduino code to be more precise. The reason is that short int is 16 bits on most CPUs, while int may be 16, 32, or even 64 bits.
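If you want to pin the width down explicitly, a compile-time check like the sketch below can catch mismatches early. Using uint16_t as the bit-field type is an assumption your compiler has to support, and static_assert needs C++11, which current Arduino cores provide; the struct is repeated here standalone only for illustration:

#include <stdint.h>

struct structTable {
    uint16_t page:1;
    uint16_t cric:4;
    uint16_t crac:3;
    uint16_t croc:2;
};

static_assert(sizeof(structTable) == 2, "structTable must pack into exactly 2 bytes");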
According to the Arduino reference for Serial.read(), the data comes back one byte (eight bits) at a time. So you should probably just read the data byte by byte and do your unpacking after the fact.
In fact you might want to use a union (see e.g. this other stackoverflow post on how to use a union) so that you can get the best of both worlds. Specifically, if you define a union of your definition with the bits broken out and a second part of the union as one or two bytes, you can send the data as bytes and then decode it in the bits you are interested in.
UPDATE
Here is an attempt at some more details. There are a lot of caveats about unions - they aren't portable, they are compiler dependent, etc. But this might be worth trying.
typedef struct {
    unsigned int page:1; // (0,1)
    unsigned int cric:4; // 10 choices (4 bits)
    unsigned int crac:3; // 5 choices (3 bits)
    unsigned int croc:2; // 3 choices (2 bits)
} structTable;

typedef union {
    structTable a;
    uint16_t b;
} u_structTable;

int val1 = Serial.read();
int val2 = Serial.read();

u_structTable x;
x.b = val1 | (val2 << 8);
printf("page is %d\n", x.a.page);

AES-NI 256-Bit block encryption

I am attempting to use the code shown below, which is taken from the Intel whitepaper.
My aim is to perform 256-bit block encryption using AES-NI.
I have successfully derived the key schedule using iEncExpandKey256(key, expandedKey), the key-expansion method provided in the Intel AES-NI library, and the expandedKey works fine in my non-AES-NI implementation of AES.
However, when I pass the values into Rijndael256_encrypt(testVector, testResult, expandedKey, 32, 1);
I get an "Attempting to access protected memory, and this usually indicates that the memory is corrupt" error, and the line of code causing it is data1 = _mm_xor_si128(data1, KS[0]); /* round 0 (initial xor) */ as shown below.
So my question is: what could be the possible causes of such an error? My current hypothesis is that data1 and KS[0] could be of different sizes, and I am still verifying that. Other than that, I'm not really sure where else to look. I would greatly appreciate it if someone could point me in the right direction to troubleshoot this error.
#include <wmmintrin.h>
#include <emmintrin.h>
#include <smmintrin.h>

void Rijndael256_encrypt(unsigned char *in,
                         unsigned char *out,
                         unsigned char *Key_Schedule,
                         unsigned long long length,
                         int number_of_rounds)
{
    __m128i tmp1, tmp2, data1, data2;
    __m128i RIJNDAEL256_MASK =
        _mm_set_epi32(0x03020d0c, 0x0f0e0908, 0x0b0a0504, 0x07060100);
    __m128i BLEND_MASK =
        _mm_set_epi32(0x80000000, 0x80800000, 0x80800000, 0x80808000);
    __m128i *KS = (__m128i*)Key_Schedule;
    int i, j;

    for (i = 0; i < length/32; i++) { /* loop over the data blocks */
        data1 = _mm_loadu_si128(&((__m128i*)in)[i*2+0]); /* load data block */
        data2 = _mm_loadu_si128(&((__m128i*)in)[i*2+1]);
        data1 = _mm_xor_si128(data1, KS[0]); /* round 0 (initial xor) */
        data2 = _mm_xor_si128(data2, KS[1]);
        /* Do number_of_rounds-1 AES rounds */
        for (j = 1; j < number_of_rounds; j++) {
            /* Blend to compensate for the shift-rows shifting bytes between
               two 128-bit blocks */
            tmp1 = _mm_blendv_epi8(data1, data2, BLEND_MASK);
            tmp2 = _mm_blendv_epi8(data2, data1, BLEND_MASK);
            /* Shuffle that compensates for the additional shift in rows 3 and 4
               as opposed to rijndael128 (AES) */
            tmp1 = _mm_shuffle_epi8(tmp1, RIJNDAEL256_MASK);
            tmp2 = _mm_shuffle_epi8(tmp2, RIJNDAEL256_MASK);
            /* This is the encryption step that includes sub bytes, shift rows,
               mix columns, xor with round key */
            data1 = _mm_aesenc_si128(tmp1, KS[j*2]);
            data2 = _mm_aesenc_si128(tmp2, KS[j*2+1]);
        }
        tmp1 = _mm_blendv_epi8(data1, data2, BLEND_MASK);
        tmp2 = _mm_blendv_epi8(data2, data1, BLEND_MASK);
        tmp1 = _mm_shuffle_epi8(tmp1, RIJNDAEL256_MASK);
        tmp2 = _mm_shuffle_epi8(tmp2, RIJNDAEL256_MASK);
        tmp1 = _mm_aesenclast_si128(tmp1, KS[j*2+0]); /* last AES round */
        tmp2 = _mm_aesenclast_si128(tmp2, KS[j*2+1]);
        _mm_storeu_si128(&((__m128i*)out)[i*2+0], tmp1);
        _mm_storeu_si128(&((__m128i*)out)[i*2+1], tmp2);
    }
}
You have:
UCHAR* Key_Schedule=Key_schedule+4;
This unaligns Key_Schedule, since Key_schedule is (I hope!) aligned and you've added 32 bits (4 bytes) to it.
You're asking the CPU to do something the hardware is not capable of doing, because of the way the data lines are wired. This is a gross oversimplification, but you can think of the CPU as having sixteen 8-bit slots that it has to read from. To read data, it sends out an address, which is the byte address divided by 16, and then decides which slots to read from. If the byte addresses of all 16 bytes that compose the 128-bit value don't give the same result when divided by 16, then it's not possible to read the 16 bytes into the 16 slots.
If you don't want to impose alignment requirements on all the parameters to the function, then you'll need to have the function itself copy them into aligned buffers.
"SSE operations need to be aligned to 16 for loading and storing." -- AES Intrinsics
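One way around the requirement, sketched below rather than taken from the whitepaper, is to read the round keys with unaligned loads instead of dereferencing KS[] directly (the helper name load_round_key is made up). The alternative is to guarantee 16-byte alignment of the key schedule, e.g. with alignas(16) or _mm_malloc.

#include <wmmintrin.h>
#include <emmintrin.h>

/* Sketch: fetch a round key with an unaligned load so the key-schedule
   pointer does not have to be 16-byte aligned. */
static inline __m128i load_round_key(const unsigned char *key_schedule, int index)
{
    return _mm_loadu_si128((const __m128i*)key_schedule + index);
}

/* Inside the loop, instead of:  data1 = _mm_xor_si128(data1, KS[0]);
   use:                          data1 = _mm_xor_si128(data1, load_round_key(Key_Schedule, 0)); */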

Parsing a binary message in C++. Any lib with examples?

I am looking for any library or example for parsing a binary message in C++. Most people ask about reading a binary file, or data received on a socket, but I just have a set of binary messages I need to decode. Somebody mentioned boost::spirit, but I haven't been able to find a suitable example for my needs.
As an example:
9A690C12E077033811FFDFFEF07F042C1CE0B704381E00B1FEFFF78004A92440
where the first 8 bits are a preamble, the next 6 bits the msg ID (an integer from 0 to 63), the next 212 bits are data, and the final 24 bits are a CRC24.
So in this case, msg 26, I have to get this data from the 212 data bits:
4 bits integer value
4 bits integer value
A 9-bit float value from 0 to 63.875, where the LSB is 0.125
4 bits integer value
And so on.
EDIT: I need to operate at bit level, so a memcpy is not a good solution, since it copies whole bytes. To get the first 4-bit integer value I would have to take 2 bits from one byte and another 2 bits from the next byte, then shift and combine each pair. What I am asking for is a more elegant way of extracting the values, because I have about 20 different messages and want a common solution to parse them all at bit level.
Do you know of any library which can easily achieve this?
I also found other Q/As where static_cast is being used. I googled it, and for each person recommending this approach there is another one warning about endianness. Since I already have my message, I don't know whether such a warning applies to me, or whether it is just for socket communications.
EDIT: boost::dynamic_bitset looks promising. Any help using it?
If you can't find a generic library to parse your data, use bit-fields to get at the data and memcpy() the raw message into a variable of the struct type. See the link Bitfields. This will be more streamlined towards your application.
Don't forget to pack the structure.
Example:
#include "order32.h"

#pragma pack(push, 1)
struct yourfields {
#if O32_HOST_ORDER == O32_BIG_ENDIAN
    unsigned int preamble:8;
    unsigned int msgid:6;
    unsigned data:212;   /* NOTE: a bit-field cannot be wider than its underlying type,
                            so in practice this 212-bit field has to be split up */
    unsigned crc:24;
#else
    unsigned crc:24;
    unsigned data:212;
    unsigned int msgid:6;
    unsigned int preamble:8;
#endif
} /*__attribute__((packed)) for gcc*/;
#pragma pack(pop)
You can do a little compile-time check to determine whether your machine uses LITTLE ENDIAN or BIG ENDIAN format, and define it as a preprocessor symbol:
// order32.h
#ifndef ORDER32_H
#define ORDER32_H

#include <limits.h>
#include <stdint.h>

#if CHAR_BIT != 8
#error "unsupported char size"
#endif

enum
{
    O32_LITTLE_ENDIAN = 0x03020100ul,
    O32_BIG_ENDIAN = 0x00010203ul,
    O32_PDP_ENDIAN = 0x01000302ul
};

static const union { unsigned char bytes[4]; uint32_t value; } o32_host_order =
    { { 0, 1, 2, 3 } };

#define O32_HOST_ORDER (o32_host_order.value)

#endif
Thanks to code by Christoph here.
Example program for using bitfields and their outputs:
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <memory.h>

using namespace std;

struct bitfields {
    unsigned opcode:5;
    unsigned info:3;
} __attribute__((packed));

struct bitfields opcodes;

/* info: 3 bits; opcode: 5 bits */
/* 001 10001 => 0x31 */
/* 010 10010 => 0x52 */

void set_data(unsigned char data)
{
    memcpy(&opcodes, &data, sizeof(data));
}

void print_data()
{
    cout << opcodes.opcode << ' ' << opcodes.info << endl;
}

int main(int argc, char *argv[])
{
    set_data(0x31);
    print_data();            // must print 17 1 on my little-endian machine
    set_data(0x52);
    print_data();            // must print 18 2
    cout << sizeof(opcodes); // must print 1
    return 0;
}
You can also manipulate the bits on your own; for example, to parse a 4-bit integer value:
char byte_data[64];
size_t readPos = 3; // any byte
int value = 0;
int bits_to_read = 4;
for (int i = 0; i < bits_to_read; ++i) {
    value |= static_cast<unsigned char>(byte_data[readPos]) & (255 >> (7 - i));
}
// after the loop, value holds the lowest 4 bits of byte_data[readPos]
Floats are usually sent as string data:
std::string temp;
temp.assign(byte_data + readPos, 9);
float value = std::stof(temp);
If your data contains a custom float format then just extract the bits and do your math:
char byte_data[64];
size_t readPos = 3; // any byte
float value = 0;
int i = 0;
int bits_to_read = 9;
while (bits_to_read) {
    if (i > 7) {        // move on to the next byte after 8 bits
        ++readPos;
        i = 0;
    }
    const int bit = (static_cast<unsigned char>(byte_data[readPos]) >> i) & 1;
    // here your code
    ++i;
    --bits_to_read;
}
Here is a good article that describes several solutions to the problem.
It even contains the reference to the ibstream class that the author created specifically for this purpose (the link seems dead, though). The only other mention of this class I could find is in the bit C++ library here - it might be what you need, though it's not popular and it's under GPL.
Anyway, the boost::dynamic_bitset might be the best choice as it's time-tested and community-proven. But I have no personal experience with it.
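If no library fits, a small hand-rolled bit reader is often enough. Below is a sketch (the class name, the MSB-first bit order and the field decoding are assumptions; adjust the bit order to match the actual message spec) showing how the fields of the example message could be extracted:

#include <cstdint>
#include <cstddef>
#include <vector>

// Sketch: MSB-first bit reader over a byte buffer.
class BitReader {
public:
    explicit BitReader(const std::vector<std::uint8_t>& data) : data_(data) {}

    // Read the next `count` bits (count <= 32) as an unsigned integer.
    std::uint32_t read(unsigned count) {
        std::uint32_t value = 0;
        for (unsigned k = 0; k < count; ++k) {
            const std::uint8_t byte = data_[pos_ / 8];
            const unsigned bit = (byte >> (7 - pos_ % 8)) & 1u;
            value = (value << 1) | bit;
            ++pos_;
        }
        return value;
    }

private:
    const std::vector<std::uint8_t>& data_;
    std::size_t pos_ = 0;
};

// Usage for the example layout:
// BitReader r(messageBytes);
// auto preamble = r.read(8);
// auto msgId    = r.read(6);           // 0..63
// auto a        = r.read(4);           // 4-bit integer
// auto b        = r.read(4);           // 4-bit integer
// float f       = r.read(9) * 0.125f;  // 9-bit value, LSB = 0.125 -> 0..63.875
// auto c        = r.read(4);           // 4-bit integer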

Getting the size of an individual field from a C++ struct

The short version is: how do I learn the size (in bits) of an individual bit-field member of a C++ struct?
To clarify, an example of the field I am talking about:
struct Test {
    unsigned field1 : 4;  // takes up 4 bits
    unsigned field2 : 8;  // 8 bits
    unsigned field3 : 1;  // 1 bit
    unsigned field4 : 3;  // 3 bits
    unsigned field5 : 16; // 16 more to make it a 32 bit struct
    int normal_member;    // normal struct variable member, 4 bytes on my system
};
Test t;
t.field1 = 1;
t.field2 = 5;
// etc.
To get the size of the entire Test object is easy, we just say
sizeof(Test); // returns 8, for 8 bytes total size
We can get a normal struct member through
sizeof(((Test*)0)->normal_member); // returns 4 (on my system)
I would like to know how to get the size of an individual field, say Test::field4. The above example for a normal struct member does not work. Any ideas? Or does someone know a reason why it cannot work? I am fairly convinced that sizeof will not be of help since it only returns size in bytes, but if anyone knows otherwise I'm all ears.
Thanks!
You can calculate the size at run time, fwiw, e.g.:
// instantiate
Test t;
// fill all bits in the field
t.field1 = ~0;
// extract to unsigned integer
unsigned int i = t.field1;
// ... TODO use contents of i to calculate the bit-width of the field ...
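One way to finish that TODO, sketched as a continuation of the snippet above (it simply counts the one-bits left behind by ~0):

// i now holds the field filled with ones, e.g. 0xF for a 4-bit field.
unsigned int width = 0;
while (i) {
    width += i & 1u;
    i >>= 1;
}
// width == 4 for Test::field1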
You cannot take the sizeof a bitfield and get the number of bits.
Your best bet would be to use #defines or enums:
struct Test {
    enum Sizes {
        sizeof_field1 = 4,
        sizeof_field2 = 8,
        sizeof_field3 = 1,
        sizeof_field4 = 3,
        sizeof_field5 = 16,
    };

    unsigned field1 : sizeof_field1;  // takes up 4 bits
    unsigned field2 : sizeof_field2;  // 8 bits
    unsigned field3 : sizeof_field3;  // 1 bit
    unsigned field4 : sizeof_field4;  // 3 bits
    unsigned field5 : sizeof_field5;  // 16 more to make it a 32 bit struct
    int normal_member;                // normal struct variable member, 4 bytes on my system
};
printf("%d\n", Test::sizeof_field1); // prints 4
For the sake of consistency, I believe you can move normal_member up to the top and add an entry in Sizes using sizeof(normal_member). This messes with the order of your data, though.
Seems unlikely, since sizeof() is in bytes, and you want bits.
http://en.wikipedia.org/wiki/Sizeof
Building on the bit-counting answer, you can use:
http://www-graphics.stanford.edu/~seander/bithacks.html
Using ChrisW's idea (nice, by the way), you can create a helper macro:
#include <climits>   // for CHAR_BIT

#define SIZEOF_BITFIELD(class, member, out) { \
    class tmp_;                               \
    tmp_.member = ~0;                         \
    unsigned int tmp2_ = tmp_.member;         \
    ++tmp2_;                                  \
    out = log2(tmp2_);                        \
}

unsigned int log2(unsigned int x) {
    // Overflow occurred.
    if (!x) {
        return sizeof(unsigned int) * CHAR_BIT;
    }
    // Some bit twiddling... Exploiting the fact that floats use base 2 and store
    // the exponent. Assumes 32-bit IEEE.
    float f = (float)x;
    return (*(unsigned int *)&f >> 23) - 0x7f;
}
Usage:
size_t size;
SIZEOF_BITFIELD(Test, field1, size); // Class of the field, field itself, output variable.
printf("%zu\n", size);               // Prints 4.
My attempts to use templated functions have failed. I'm not an expert on templates, however, so it may still be possible to have a clean method (e.g. sizeof_bitfield(Test::field1)).
I don't think you can do it. If you really need the size, I suggest you use a #define (or, better yet, if possible, a const variable -- I'm not sure if that's legal) like so:
#define TEST_FIELD1_SIZE 4

struct Test {
    unsigned field1 : TEST_FIELD1_SIZE;
    ...
};
This is not possible.
Answer to comment:
Because the type is just an int, there is no 'bit' type. The bit-field syntax is just shorthand for the bitwise shift-and-mask code the compiler emits for reads and writes.
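To illustrate that last point, here is a rough sketch (not actual compiler output) of what an assignment to a 3-bit field boils down to:

struct S { unsigned value : 3; unsigned rest : 29; };  // the field being emulated below

// s.value = v is roughly equivalent to the manual masking on the storage unit,
// assuming the field occupies the lowest 3 bits of that unit.
unsigned storage = 0;                      // the underlying unsigned int
unsigned v = 5;
storage = (storage & ~0x7u) | (v & 0x7u);  // write the 3-bit field
unsigned readBack = storage & 0x7u;        // read the 3-bit field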