I am using a binary semaphore in a struct like so:
    #include <cstdint>
    #include <iostream>
    #include <semaphore>

    struct Header {
        uint64_t otherfields;          // ...
        std::binary_semaphore sem{0};
        // [hidden padding of 4 bytes if sem is 4 bytes]
        uint64_t post_otherfields;     // ...
    };

    int main() {
        std::cout << sizeof(Header) << std::endl;
        return 0;
    }
I tried it on godbolt https://godbolt.org/z/8rPs56Y7x to see whether different x86_64 compilers all give a size of 4 for the semaphore.
I placed the uint64_t values around it in the struct on purpose, to absorb any alignment hiccup.
Since this struct is in an mmap region, I would like a guarantee that the size won't suddenly change in the future, using Linux GCC on the x86_64 arch.
(More than 8 bytes would be problematic: taking into account the alignment occurring between sem and post_otherfields, that would mess up the layout.)
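In the meantime, I can at least turn a future size change into a compile error with a static_assert. A minimal sketch (assuming C++20's <semaphore>; the expected Header size of 24 is hypothetical, derived from the layout above):

    #include <semaphore>

    // Fail the build if the semaphore or the overall layout ever changes size.
    static_assert(sizeof(std::binary_semaphore) == 4,
                  "binary_semaphore size changed; the mmap'ed Header layout would break");
    static_assert(sizeof(Header) == 24, "unexpected Header size"); // hypothetical value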
Thanks
My very simple code is shown below:
    #include <iostream>
    #include <stdalign.h>

    int main() {
        char array_char[2] = {'a', 'b'};
        float array_float[2] = {1, 2};
        std::cout << "alignof(array_char): " << alignof(array_char) << std::endl;
        std::cout << "alignof(array_float): " << alignof(array_float) << std::endl;
        std::cout << "address of array_char: " << (void *) array_char << std::endl;
        std::cout << "address of array_float: " << array_float << std::endl;
    }
The output of this code is
    alignof(array_char): 1
    alignof(array_float): 4
    address of array_char: 0x7fff5e8ec580
    address of array_float: 0x7fff5e8ec570
The results of the alignof operator are as expected, but the actual addresses of the two arrays are not consistent with them. No matter how many times I try, the addresses are always 16-byte aligned.
I'm using gcc 5.4.0 on Ubuntu 16.04 with an Intel Core i5 7th-gen CPU.
I have found this patch.
This seems to have been a bug on x86_64 that was fixed in GCC 6.4.
The System V x86-64 ABI requires aggregate types (such as arrays and structs) to be aligned to at least 16 bytes if they are at least 16 bytes large. According to a comment in the ABI specification this is meant to facilitate use of SSE instructions.
GCC seems to have mistakenly applied that rule to aggregates of size 16 bits (instead of bytes) and larger.
I suggest you upgrade your compiler to a more recent GCC version.
This is, however, only an optimization issue, not a correctness one. There is nothing wrong with stricter alignment for the variables, and (as with the mentioned SSE) over-alignment may have performance benefits in some situations that outweigh the cost of the wasted stack memory.
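For illustration, a small probe (just a sketch; the exact addresses will vary from run to run) showing the rule on a fixed GCC, plus alignas for requesting over-alignment explicitly:

    #include <iostream>

    int main() {
        char small[8];            // under 16 bytes: only natural (1-byte) alignment is required
        char big[16];             // 16 bytes or more: the ABI mandates 16-byte alignment
        alignas(16) float vec[2]; // explicit over-alignment, e.g. for SSE loads
        std::cout << static_cast<void*>(small) << '\n'
                  << static_cast<void*>(big) << '\n'
                  << static_cast<void*>(vec) << '\n';
    }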
I've had to deal with bit-fields in structs recently, and came across a behaviour I can't explain.
The following struct should be 9 bytes according to the individual sizeofs, but a sizeof of the whole struct yields 10 bytes.
The following program prints "10; 1 1 2 1 2 1 1  = 9":
    #include <cstdint>
    #include <iostream>

    int main() {
        struct {
            uint8_t doubleoscillator;
            struct {
                char monophonic : 1;
                char hold : 1;
                char padding : 6;
            } test;
            int16_t osc1_multisound; //int
            int8_t osc1_octave;      // -2..1
            int16_t osc2_multisound; //int
            int8_t osc2_octave;      // -2..1
            int8_t intervall;
        } osc;

        std::cout << sizeof(osc) << "; ";

        int a[7];
        a[0] = sizeof(osc.doubleoscillator);
        a[1] = sizeof(osc.test);
        a[2] = sizeof(osc.osc1_multisound);
        a[3] = sizeof(osc.osc1_octave);
        a[4] = sizeof(osc.osc2_multisound);
        a[5] = sizeof(osc.osc2_octave);
        a[6] = sizeof(osc.intervall);

        int total = 0;
        for (int i = 0; i < 7; i++) {
            std::cout << a[i] << " ";
            total += a[i];
        }
        std::cout << " = " << total << std::endl;
        return 0;
    }
Why does the sum of the individual sizeof() values of the struct's members yield a different result from a sizeof() of the osc struct itself?
Primarily for performance reasons, padding is added before a member of a struct when needed to align that member within the structure's memory layout. Thus osc2_multisound likely has a padding byte before it, to ensure it starts at an offset into the struct that is a multiple of 2 (because int16_t has an alignment of 2).
Additionally, after all that is done, the structure's total size is padded to a multiple of its strictest alignment requirement (i.e. the highest alignment of any held field). This is so that e.g. elements of an array of said type will all be properly aligned.
The alignment of a type can be checked at compile-time via alignof(T) where T is the type.
The increased size is unavoidable in this case, but the common advice for cutting down on padding bytes is to order struct members by descending alignment. Each member is then guaranteed to be properly aligned without padding, since the field before it had the same or stricter alignment; any padding that remains only pads the total size of the structure, rather than being wasted between fields.
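A minimal sketch of that advice (the member names are made up; the sizes assume a typical ABI where uint32_t has 4-byte alignment):

    #include <cstdint>
    #include <iostream>

    struct Padded {    // 1 + [3 pad] + 4 + 1 + [3 tail pad] = 12 on typical ABIs
        uint8_t  a;
        uint32_t b;
        uint8_t  c;
    };

    struct Reordered { // 4 + 1 + 1 + [2 tail pad] = 8
        uint32_t b;
        uint8_t  a;
        uint8_t  c;
    };

    int main() {
        std::cout << sizeof(Padded) << ' ' << sizeof(Reordered) << '\n'; // typically "12 8"
    }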
The reason for alignment is primarily efficiency nowadays. Reading an unaligned block of memory on hardware that supports it is typically about twice as slow, because the hardware actually reads the two aligned blocks around it and extracts what it needs. However, there is also hardware that simply will not work if you try to read or write unaligned memory; such hardware typically triggers a hardware exception in that event.
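If you do need to read data from an arbitrary byte offset, the portable idiom is to memcpy it out rather than cast the pointer; a sketch:

    #include <cstdint>
    #include <cstring>

    // Reads an int16_t at an arbitrary byte offset without relying on the
    // hardware supporting unaligned loads.
    int16_t read_unaligned16(const unsigned char* p) {
        int16_t v;
        std::memcpy(&v, p, sizeof v);
        return v;
    }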
I am a beginner and I am trying to fill a struct with four bit-field members through a pointer, then send it out on another serial port, Serial2. I am failing to do so.
I receive 4 chars from Serial1.read(), for example 'A' '10' '5' '3'.
To decrease the size of the data, I want to use a struct:
    struct structTable {
        unsigned int page:1; // (0,1)
        unsigned int cric:4; // 10 choices (4 bits)
        unsigned int crac:3; // 5 choices (3 bits)
        unsigned int croc:2; // 3 choices (2 bits)
    };
I declare and set an instance and a pointer:

    struct structTable structTable;
    struct structTable *PtrstructTable;
    PtrstructTable = &structTable;
Then I try to feed it like this:

    for (int i = 0; i <= 4; i++) {
        if (i == 1) {
            (*PtrProgs).page = Serial.read();
        }
        if (i == 2) {
            (*PtrProgs).cric = Serial.read();
        }
        // ...
    }
And so on. But it's not working...
I tried to feed a char table first and to cast the result:

    (*PtrProgs).page = PtrT[1], BIN;

And now I realize I cannot feed 3 bits at a time! Doh! All this seems very clumsy, and certainly too long a process for just 4 values. (I wanted to keep this kind of struct for more instances.)
Please, could you help me to find a simpler way to feed my table?
You can only send full bytes over the serial port. But you can also send raw data directly.
    void send(const structTable* table)
    {
        Serial.write((const char*)table, sizeof(structTable)); // 2 bytes.
    }

    bool receive(structTable* table)
    {
        return (Serial.readBytes((char*)table, sizeof(structTable)) == sizeof(structTable));
    }
You also have to be aware that sizeof(int) is not the same on all CPUs.
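A small compile-time guard can catch a size mismatch early (a sketch, assuming the toolchain supports at least C++11 static_assert, as recent Arduino toolchains do):

    // Fails the build if the bit-fields ever stop packing into 2 bytes.
    static_assert(sizeof(structTable) == 2, "structTable must pack into 2 bytes");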
A word about endianness: for the program at the other end of the serial link, if it runs on a CPU with the opposite endianness, the definition of your struct would become:
    struct structTable {
        unsigned short int croc:2; // 3 choices (2 bits)
        unsigned short int crac:3; // 5 choices (3 bits)
        unsigned short int cric:4; // 10 choices (4 bits)
        unsigned short int page:1; // (0,1)
    };
Note the use of short int, which you can also use in the Arduino code to be more precise. The reason is that short int is 16 bits on most CPUs, while int may be 16, 32 or even 64 bits.
According to the Arduino reference for Serial.read(), data is returned byte-by-byte (eight bits at a time). So you should probably just read the data one byte at a time and do your unpacking after the fact.
In fact you might want to use a union (see e.g. this other Stack Overflow post on how to use a union) so that you can get the best of both worlds. Specifically, if you define a union of your definition with the bits broken out and a second part of the union as one or two raw bytes, you can receive the data as bytes and then decode it through the bits you are interested in.
UPDATE
Here is an attempt at some more details. There are a lot of caveats about unions - they aren't portable, they are compiler dependent, etc. But this might be worth trying.
    typedef struct {
        unsigned int page:1; // (0,1)
        unsigned int cric:4; // 10 choices (4 bits)
        unsigned int crac:3; // 5 choices (3 bits)
        unsigned int croc:2; // 3 choices (2 bits)
    } structTable;

    typedef union {
        structTable a;
        uint16_t b;
    } u_structTable;
    // Read the two raw bytes, then assemble and decode them.
    int val1 = Serial.read(); // low byte
    int val2 = Serial.read(); // high byte

    u_structTable x;
    x.b = (uint16_t)(val1 | (val2 << 8));
    printf("page is %d\n", x.a.page);
As far as I can tell, calling malloc() basically means the program is asking the OS for a chunk of memory. I'm writing a program to interface with a camera, in which I need to allocate chunks of memory large enough to store hundreds of images at a time (it's a fast camera).
When I allocate space for about 1.9 GB worth of images, everything works just fine. The allocation calculation is pretty simple:
    int allocateBurst(int numImages)
    {
        int streamSize = ZIMAGESIZE * numImages;
        data.images = new unsigned short [streamSize];
        return 0;
    }
But as soon as I go over the 2 GB limit, I get runtime errors like this:

    terminate called after throwing an instance of 'std::bad_alloc'
      what(): std::bad_alloc

It seems like 2 GB might be the maximum size that I can allocate at once. I have 32 GB of RAM, and would like to simply be able to allocate larger pieces of memory in one allocation. Is this possible?
I'm running Ubuntu 12.10.
There may be an underlying issue that the OS can't grant your large memory allocation because it is using memory for other applications. Check with your OS to see what the limits are.
Also know that some OSes will "page" memory out to the hard disk; when your program touches memory that is paged out, the OS swaps those pages back in from the disk. Knowing this, I recommend the classic technique of "Double Buffering" or "Multiple Buffering".
You will need at least two threads: a reader and a writer. One thread is responsible for reading data from the camera and placing it into a buffer; when it fills up a buffer, it starts on another one. Meanwhile the writing thread starts at the first buffer and writes it to disk (block file writes); when it finishes a buffer, it starts on the next one. The buffers should form a circular sequence so they are reused.
The magic is to have enough buffers so that the reader never catches up to the writer.
Since you are using a couple of small buffers, you should not get any errors from the OS.
There are methods to optimize this, such as obtaining static buffers from the OS.
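A minimal sketch of the multiple-buffering scheme (C++11 threads; fill_from_camera() and write_to_disk() are hypothetical stand-ins for the real camera and file I/O, and the buffer count and size are placeholders to tune):

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Hypothetical stand-ins for the real camera read and block file write.
    void fill_from_camera(std::vector<unsigned short>&) { /* camera read */ }
    void write_to_disk(const std::vector<unsigned short>&) { /* file write */ }

    int main() {
        const int kBuffers = 4;                          // enough that the reader never laps the writer
        const std::size_t kBufWords = 32u * 1024 * 1024; // placeholder buffer size (64 MB each)

        std::vector<std::vector<unsigned short>> ring(
            kBuffers, std::vector<unsigned short>(kBufWords));
        std::queue<int> full; // indices of buffers ready for the writer
        std::mutex m;
        std::condition_variable cv;
        bool done = false;

        std::thread writer([&] {
            for (;;) {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return !full.empty() || done; });
                if (full.empty()) return; // reader finished and queue drained
                int i = full.front();
                full.pop();
                lk.unlock();
                write_to_disk(ring[i]);   // slow disk I/O happens outside the lock
            }
        });

        // Reader: fill buffers in a circular sequence and hand them to the writer.
        for (int burst = 0; burst < 100; ++burst) {
            int i = burst % kBuffers;
            fill_from_camera(ring[i]);
            {
                std::lock_guard<std::mutex> lk(m);
                full.push(i);
            }
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lk(m);
            done = true;
        }
        cv.notify_one();
        writer.join();
    }

A real version would also track free buffers, so the reader blocks rather than reuse a buffer the writer has not flushed yet.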
The problem is you're using a signed 32-bit variable to describe an unsigned 64-bit number.
Use "size_t" instead of "int" for holding the storage count. This has nothing to do with what you intend to store, just how large a count of them you need.
    #include <iostream>
    #include <new>

    int main(int /*argc*/, const char** /*argv*/)
    {
        int units = 2;

        // 32-bit signed, i.e. 31-bit numbers: this multiplication overflows
        // (here wrapping to a negative value).
        int intSize = units * 1024 * 1024 * 1024;

        // 64-bit values (ULL suffix)
        size_t sizetSize = units * 1024ULL * 1024ULL * 1024ULL;

        std::cout << "intSize = " << intSize << ", sizetSize = " << sizetSize << std::endl;

        try {
            unsigned short* intAlloc = new unsigned short[intSize];
            std::cout << "intAlloc = " << intAlloc << std::endl;
            delete [] intAlloc;
        } catch (const std::bad_alloc&) {
            std::cout << "intAlloc failed (std::bad_alloc)" << std::endl;
        }

        try {
            unsigned short* sizetAlloc = new unsigned short[sizetSize];
            std::cout << "sizetAlloc = " << sizetAlloc << std::endl;
            delete [] sizetAlloc;
        } catch (const std::bad_alloc&) {
            std::cout << "sizetAlloc failed (std::bad_alloc)" << std::endl;
        }

        return 0;
    }
Output (g++ -m64 -o test test.cpp under Mint 15 64-bit with g++ 4.7.3, on a virtual machine with 4 GB of memory):

    intSize = -2147483648, sizetSize = 2147483648
    intAlloc failed (std::bad_alloc)
    sizetAlloc = 0x7f55affff010
    int allocateBurst(int numImages)
    {
        // Changed from int to long. The multiplication must also be done in
        // long, otherwise ZIMAGESIZE * numImages still overflows as an int.
        long streamSize = static_cast<long>(ZIMAGESIZE) * numImages;
        data.images = new unsigned short [streamSize];
        return 0;
    }
Try using long, or make the size computation and the variable holding it uint64_t. With int you get a 32-bit value, while long (on 64-bit Linux) or uint64_t gives you a 64-bit value, which can describe a much larger allocation.
Hope that helps
I am looking for a library or example for parsing a binary message in C++. Most people ask about reading a binary file, or data received over a socket, but I just have a set of binary messages I need to decode. Somebody mentioned boost::spirit, but I haven't been able to find a suitable example for my needs.
As an example:
    9A690C12E077033811FFDFFEF07F042C1CE0B704381E00B1FEFFF78004A92440
where the first 8 bits are a preamble, the next 6 bits the msg ID (an integer from 0 to 63), the next 212 bits are data, and the final 24 bits are a CRC24.
So in this case, msg 26, I have to get this data from the 212 data bits:
a 4-bit integer value
a 4-bit integer value
a 9-bit float value from 0 to 63.875, where the LSB is 0.125
a 4-bit integer value
And so on.
EDIT: I need to operate at the bit level, so a memcpy is not a good solution, since it copies whole bytes. To get the first 4-bit integer value I would have to take 2 bits from one byte and another 2 bits from the next byte, then shift each pair and compose them. What I am asking for is a more elegant way of extracting the values, because I have about 20 different messages and want a common solution that parses all of them at the bit level.
Do you know of any library which can easily achieve this?
I also found other Q&As where static_cast is being used. I googled it, and for each person recommending this approach there is another one warning about endianness. Since I already have my message, I don't know whether such a warning applies to me, or only to socket communications.
EDIT: boost::dynamic_bitset looks promising. Any help using it?
If you can't find a generic library to parse your data, use bit-fields to get at the data, and memcpy() the raw bytes into a variable of the struct type. See the link Bitfields. This will be more streamlined towards your application.
Don't forget to pack the structure.
Example:
    #include "order32.h"

    #pragma pack(1)
    struct yourfields {
    #if O32_HOST_ORDER == O32_BIG_ENDIAN
        unsigned int preamble:8;
        unsigned int msgid:6;
        unsigned data:212; // NB: wider than any integer type; split into several fields in real code
        unsigned crc:24;
    #else
        unsigned crc:24;
        unsigned data:212;
        unsigned int msgid:6;
        unsigned int preamble:8;
    #endif
    } /*__attribute__((packed)) for gcc*/;
You can do a little check to determine whether your machine uses LITTLE ENDIAN or BIG ENDIAN format, and capture that in a symbol. (Note that O32_HOST_ORDER below expands to a runtime value, not a true preprocessor constant, so for the #if above you would in practice use a compiler-provided macro such as GCC's __BYTE_ORDER__.)
    // order32.h
    #ifndef ORDER32_H
    #define ORDER32_H

    #include <limits.h>
    #include <stdint.h>

    #if CHAR_BIT != 8
    #error "unsupported char size"
    #endif

    enum
    {
        O32_LITTLE_ENDIAN = 0x03020100ul,
        O32_BIG_ENDIAN = 0x00010203ul,
        O32_PDP_ENDIAN = 0x01000302ul
    };

    static const union { unsigned char bytes[4]; uint32_t value; } o32_host_order =
        { { 0, 1, 2, 3 } };

    #define O32_HOST_ORDER (o32_host_order.value)

    #endif
Thanks to code by Christoph here.
Example program for using bit-fields; the expected output is in the comments:
    #include <iostream>
    #include <cstring>
    using namespace std;

    struct bitfields {
        unsigned opcode:5;
        unsigned info:3;
    } __attribute__((packed));

    struct bitfields opcodes;

    /* info: 3 bits; opcode: 5 bits */
    /* 001 10001 => 0x31 */
    /* 010 10010 => 0x52 */

    void set_data(unsigned char data)
    {
        memcpy(&opcodes, &data, sizeof(data));
    }

    void print_data()
    {
        cout << opcodes.opcode << ' ' << opcodes.info << endl;
    }

    int main(int argc, char *argv[])
    {
        set_data(0x31);
        print_data();            // must print 17 1 on my little-endian machine
        set_data(0x52);
        print_data();            // must print 18 2
        cout << sizeof(opcodes); // must print 1
        return 0;
    }
You can manipulate the bits on your own; for example, to parse a 4-bit integer value:

    char byte_data[64];
    size_t readPos = 3; // any byte
    int bits_to_read = 4;

    // Mask off the low bits_to_read bits of the byte.
    int value = static_cast<unsigned char>(byte_data[readPos]) & ((1u << bits_to_read) - 1);
Floats are usually sent as string data:

    std::string temp;
    temp.assign(byte_data + readPos, 9);
    float value = std::stof(temp);
If your data contains a custom float format, then just extract the bits and do your own math:

    char byte_data[64];
    size_t readPos = 3; // any byte
    float value = 0;
    int i = 0;
    int bits_to_read = 9;
    while (bits_to_read) {
        if (i == 8) { // moved past the end of the current byte
            ++readPos;
            i = 0;
        }
        const int bit = (static_cast<unsigned char>(byte_data[readPos]) >> i) & 1;
        // here your code combines the bits into value
        ++i;
        --bits_to_read;
    }
Here is a good article that describes several solutions to the problem.
It even contains a reference to the ibstream class that the author created specifically for this purpose (the link seems dead, though). The only other mention of this class I could find is in the bit C++ library here - it might be what you need, though it's not popular and it's under GPL.
Anyway, boost::dynamic_bitset might be the best choice, as it's time-tested and community-proven. But I have no personal experience with it.
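For what it's worth, a rough sketch of how it could be applied to your message (the MSB-first bit order and the extract() helper are my own assumptions, not something the library provides):

    #include <boost/dynamic_bitset.hpp>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Pull `width` bits starting at bit `pos`, counting MSB-first from the
    // start of the message.
    uint32_t extract(const boost::dynamic_bitset<>& bits, size_t pos, size_t width) {
        uint32_t v = 0;
        for (size_t i = 0; i < width; ++i)
            v = (v << 1) | bits[bits.size() - 1 - (pos + i)];
        return v;
    }

    int main() {
        // First bytes of the example message 9A690C12...
        std::vector<unsigned char> msg = {0x9A, 0x69, 0x0C, 0x12};
        boost::dynamic_bitset<> bits(msg.size() * 8);
        for (size_t byte = 0; byte < msg.size(); ++byte)
            for (int b = 0; b < 8; ++b)
                bits[bits.size() - 1 - (byte * 8 + b)] = (msg[byte] >> (7 - b)) & 1;

        std::cout << extract(bits, 0, 8) << '\n'; // preamble: 0x9A = 154
        std::cout << extract(bits, 8, 6) << '\n'; // msg ID: 26, matching the example
    }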