I am reading simple binary data (no pointers) into C++ classes without padding, using the following code:
#include <fstream>
#include <iostream>
using namespace std;
class Data {
public:
    int a;
    int b;
    short int c;
    double d;
} __attribute__((packed));

int main() {
    Data myData;
    ifstream ifs("test.bin", ios::binary);
    ifs.read((char *)&myData, sizeof(myData));
    ifs.close();
}
I am using this method because the data might come in 20+ different formats, and I want to write 20+ different classes to cover all the formats that might show up. I have also read that other options include bit-fields, pragma directives, and even the Boost serialization routines (which I can't use, because I have to stick to the standard library). My question is: is this the best way to read simple binary data using classes without padding? Do you suggest any alternative? I would like to learn what the safest and most widely used method out there is.
Typically, one would use a struct instead of a class, but yes, the same concept applies to both.
I've used these macros to allow packed structs to compile on both gcc and VC:
#ifdef _MSC_VER
#define BEGIN_PACK __pragma( pack(push, 1) )
#define END_PACK __pragma( pack(pop) )
#else
#define BEGIN_PACK
#define END_PACK __attribute__((packed))
#endif
So then you'd use them like this:
BEGIN_PACK
struct Data {
    int a;
    int b;
    short int c;
    double d;
} END_PACK;
But yes, that's usually how it's done. Note that these are non-standard extensions.
C++11 added alignas, but it can only raise a type's alignment, not lower it, so there is still no standard way to request packing.
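A portable alternative that avoids compiler-specific packing altogether is to read each record into a byte buffer and memcpy the fields out at fixed offsets. A minimal sketch, assuming the on-disk layout is 4 + 4 + 2 + 8 bytes in native byte order and that int, short and double have those sizes on the target platform (readData is just an illustrative helper name):

#include <cstring>
#include <fstream>

struct Data {
    int a;
    int b;
    short c;
    double d;
};

// Assumed record layout: a at offset 0, b at 4, c at 8, d at 10; 18 bytes total.
bool readData(std::ifstream& ifs, Data& out) {
    unsigned char buf[18];
    if (!ifs.read(reinterpret_cast<char*>(buf), sizeof buf))
        return false;
    std::memcpy(&out.a, buf + 0,  sizeof out.a);
    std::memcpy(&out.b, buf + 4,  sizeof out.b);
    std::memcpy(&out.c, buf + 8,  sizeof out.c);
    std::memcpy(&out.d, buf + 10, sizeof out.d);
    return true;
}

The struct itself no longer needs to be packed, because the file layout is encoded in the offsets rather than in the in-memory layout of Data.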
Related
I want to use #define and using ... = ...; as much as I can, to replace practically everything in C++ with emojis 😝😜😫😻😋.
Is it possible to #define preprocessor directives, like #define 😎 #define, or at least #define 😖 if, #define 🍉 ==, etc.? Maybe with 'using'?
I'd like to replace operators, core language instructions... Is it possible at all?
I know the above doesn't work, but maybe there is a way?... Please help me make something funny! :D
Yes, you can. You may need to think about the syntax; the easiest approach would be to use one emoji per keyword. However, you may still need to write function and variable names in clear text.
As per Romen's comment, I tried it, and you can also replace method names with emojis.
Just as a proof of concept, the following code compiles in Visual Studio 2019 with platform toolset v142.
#include <iostream>

#define 😎 int

😎 🍉() {
    std::cout << "I'm 🍉!";
    return 1;
}

😎 main() {
    🍉();
}
Or, to take it a step further and include some of the comments:
#include <iostream>

#define 🙈 using
#define 🤷🏻 cout
#define 😎 int

namespace 🍏 = std;
🙈 🍏::🤷🏻;

😎 🍉() {
    🤷🏻 << "I'm";
    🍏::cout << "🍉!";
    return 1;
}

😎 main() {
    🍉();
}
Also, using is something different from #define; you will only need the latter here.
Is it possible to #define preprocessors like #define 😎 #define
No, it is not possible to define macros that replace preprocessor directives (and macros cannot expand into directives either).
or at least #define 😖 if
This is potentially possible. It depends on the compiler what input character encoding it supports. Emojis are not listed in the basic source character set specified by the language standard, so they might not exist in the character encoding used by the compiler.
Maybe with 'using'?
Emojis are allowed in using declarations and aliases to exactly the same extent as they are in macros.
Note that any identifier could be an emoji (assuming they are supported in the first place), including functions, types, and variables. Example:
struct 🍏🍏 {};
struct 🍊🍊 {};

// an operator so that the comparison below is well-formed
bool operator==(🍏🍏, 🍊🍊) { return false; }

int main() {
    🍏🍏{} == 🍊🍊{};
}
Is there a way to write a compile-time assertion that checks if some type has any padding in it?
For example:
struct This_Should_Succeed
{
    int a;
    int b;
    int c;
};

struct This_Should_Fail
{
    int a;
    char b;
    // because there are 3 bytes of padding here
    int c;
};
Since C++17 you might be able to use std::has_unique_object_representations.
#include <type_traits>
static_assert(std::has_unique_object_representations_v<This_Should_Succeed>); // succeeds
static_assert(std::has_unique_object_representations_v<This_Should_Fail>); // fails
Although, this might not do exactly what you want it to do. Check the linked cppreference page for details.
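One concrete caveat, stated here as an assumption about typical implementations rather than a guarantee: a struct whose only member is a floating-point type usually fails the trait even though it contains no padding, because values such as NaN can have more than one object representation.

#include <type_traits>

struct No_Padding_But_Double {
    double d;   // tightly packed, yet a double value's representation is not unique
};

// On common implementations this assertion fails despite the absence of padding,
// so the trait answers a stricter question than "does this type contain padding?":
// static_assert(std::has_unique_object_representations_v<No_Padding_But_Double>);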
Edit: Check Indiana's answer.
Is there a way to write a compile-time assertion that checks if some type has any padding in it?
Yes.
You can sum the sizeof of all members and compare it to the size of the class itself:
static_assert(sizeof(This_Should_Succeed) == sizeof(This_Should_Succeed::a)
+ sizeof(This_Should_Succeed::b)
+ sizeof(This_Should_Succeed::c));
static_assert(sizeof(This_Should_Fail) != sizeof(This_Should_Fail::a)
+ sizeof(This_Should_Fail::b)
+ sizeof(This_Should_Fail::c));
This unfortunately requires explicitly naming the members for the sum. An automatic solution would require (compile-time) reflection, which the C++ language does not have yet - maybe in C++23 if we are lucky. For now, there are solutions based on wrapping the class definition in a macro (see the X-macro answer below).
A non-portable solution might be to use the -Wpadded option provided by GCC, which promises to warn if a structure contains any padding. This can be combined with #pragma GCC diagnostic push to only do it for chosen structures.
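A sketch of how that could look (GCC/Clang specific; whether this particular warning honours the diagnostic pragmas can depend on the compiler version, so treat it as an assumption to verify):

#pragma GCC diagnostic push
#pragma GCC diagnostic error "-Wpadded"   // promote the warning to a hard error for this region
struct Checked {
    int  a;
    char b;   // the compiler reports the 3 padding bytes inserted after 'b'
    int  c;
};
#pragma GCC diagnostic pop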
The type I'm checking is a template input.
A portable, but not fully satisfactory, approach might be to use a custom trait that the user of the template can specialize to voluntarily promise that the type does not contain padding, allowing you to take advantage of that knowledge.
The user would have to rely on an explicit or preprocessor-based assertion that their promise holds true.
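A minimal sketch of such an opt-in trait (the names is_tightly_packed, Payload and read_raw are made up for illustration):

#include <cstdint>
#include <type_traits>

// The user of the template specializes this to promise "no padding";
// the library cannot verify the promise, it can only rely on it.
template <class T>
struct is_tightly_packed : std::false_type {};

struct Payload { std::uint32_t a; std::uint32_t b; };

// The user makes the promise and backs it up with their own assertion.
template <> struct is_tightly_packed<Payload> : std::true_type {};
static_assert(sizeof(Payload) == 2 * sizeof(std::uint32_t), "Payload must not contain padding");

template <class T>
void read_raw(const T&) {
    static_assert(is_tightly_packed<T>::value, "T must be promised to be padding-free");
    // ... e.g. memcpy the object representation from a byte buffer
}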
To get the total field size without retyping each struct member, you can use an X macro.
First, define all the fields:
#define LIST_OF_FIELDS_OF_This_Should_Fail \
X(int, a) \
X(char, b) \
X(int, c)
#define LIST_OF_FIELDS_OF_This_Should_Succeed \
X(long long, a) \
X(long long, b) \
X(int, c) \
X(int, d) \
X(int, e) \
X(int, f)
then declare the structs
struct This_Should_Fail {
#define X(type, name) type name;
LIST_OF_FIELDS_OF_This_Should_Fail
#undef X
};
struct This_Should_Succeed {
#define X(type, name) type name;
LIST_OF_FIELDS_OF_This_Should_Succeed
#undef X
};
and check
#define X(type, name) sizeof(This_Should_Fail::name) +
static_assert(sizeof(This_Should_Fail) == LIST_OF_FIELDS_OF_This_Should_Fail 0); // fails: the struct contains padding
#undef X

#define X(type, name) sizeof(This_Should_Succeed::name) +
static_assert(sizeof(This_Should_Succeed) == LIST_OF_FIELDS_OF_This_Should_Succeed 0); // passes: no padding
#undef X
Or you can just reuse the same X macro to do the check inside a function:
#define X(type, name) sizeof(a.name) +
void check_sizes()
{
    {
        This_Should_Fail a;
        static_assert(sizeof(This_Should_Fail) == LIST_OF_FIELDS_OF_This_Should_Fail 0); // fails: padding detected
    }
    {
        This_Should_Succeed a;
        static_assert(sizeof(This_Should_Succeed) == LIST_OF_FIELDS_OF_This_Should_Succeed 0); // passes: no padding
    }
}
#undef X
See demo on compiler explorer
For more information about this you can read Real-world use of X-Macros
An alternative non-portable solution is to compare the size of the struct with a packed version created with #pragma pack or __attribute__((packed)). #pragma pack is also supported by many other compilers, such as GCC and IBM XL.
#ifdef _MSC_VER
#define PACKED_STRUCT(declaration) __pragma(pack(push, 1)) declaration __pragma(pack(pop))
#else
#define PACKED_STRUCT(declaration) declaration __attribute__((packed))
#endif
#define THIS_SHOULD_FAIL(name) struct name \
{ \
int a; \
char b; \
int c; \
}
PACKED_STRUCT(THIS_SHOULD_FAIL(This_Should_Fail_Packed));
THIS_SHOULD_FAIL(This_Should_Fail);
static_assert(sizeof(This_Should_Fail_Packed) == sizeof(This_Should_Fail)); // fails: the packed version is smaller, so padding was detected
Demo on Compiler Explorer
See Force C++ structure to pack tightly. If you want to have an even more portable pack macro then try this
Related:
How to check the size of struct w/o padding?
Detect if struct has padding
In GCC and Clang there's a -Wpadded option for this purpose
-Wpadded
Warn if padding is included in a structure, either to align an element of the structure or to align the whole structure. Sometimes when this happens it is possible to rearrange the fields of the structure to reduce the padding and so make the structure smaller.
In case the struct is in a header that you can't modify, then in some cases it can be worked around like this to get a packed copy of the struct:
#include "header.h"
// remove include guard to include the header again
#undef HEADER_H
// Get the packed versions
#define This_Should_Fail This_Should_Fail_Packed
#define This_Should_Succeed This_Should_Succeed_Packed
// We're including the header again, so it's quite dangerous and
// we need to do everything to prevent duplicated identifiers:
// rename them, or define some macros to remove possible parts
#define someFunc someFunc_deleted
// many parts are wrapped in SOME_CONDITION so this way
// we're preventing them from being redeclared
#define SOME_CONDITION 0
#pragma pack(push, 1)
#include "header.h"
#pragma pack(pop)
#undef This_Should_Fail
#undef This_Should_Succeed
static_assert(sizeof(This_Should_Fail_Packed) == sizeof(This_Should_Fail));       // fails: padding detected
static_assert(sizeof(This_Should_Succeed_Packed) == sizeof(This_Should_Succeed)); // passes: no padding
This won't work for headers that use #pragma once, or for structs that contain structs from other headers, though.
I have the problem that a struct must be checked - at compile time - for whether it is tightly packed or whether it contains gaps.
The checking may be done in additional test code, but I don't want "packed" data in the real implementation code.
This is an example header file (MyData.h) with the typical include guards:
#ifndef MYDATA_H_
#define MYDATA_H_
struct uneven
{
    int bla_u32;
    short bla_u16;
    char bla_u8;
    /* <-- this gap will be filled in the unpacked version */
};

#endif // MYDATA_H_
I found one possible solution - see below.
Questions:
Is there an elegant way to check if the struct uneven contains a different number of bytes compared to its unpacked counterpart at compile time?
Is there maybe even a solution that will work in C (without using a namespace)?
A compiler-specific solution that works for both C and C++: GCC has a warning option, -Wpadded, which produces a warning for every definition whose size changes due to alignment.
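A brief sketch of what that looks like for the struct from the question (assuming GCC or Clang; the exact diagnostic text differs between compilers):

/* Compile with -Wpadded, e.g.  gcc -Wpadded -c MyData.c  or  g++ -Wpadded -c MyData.cpp */
struct uneven
{
    int bla_u32;
    short bla_u16;
    char bla_u8;   /* -Wpadded reports the 1 byte of tail padding added to reach 8 bytes */
};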
You could use a function instead of a namespace (on ideone):
This solution also works in C
Header File:
typedef struct
{
    int bla_u32;
    short bla_u16;
    char bla_u8;
    /* <-- this gap will be filled in the unpacked version */
} uneven;
Source File:
#include "MyData.h"
/* the duplicate 'case 0' labels collide when cond is false, turning a failed
   check into a compile-time error */
#define StaticAssert(cond, msg) switch(0){case 0:case cond:;}

void checkSizes()
{
    uneven unpacked_uneven;

#pragma pack(push, 1)
#undef MYDATA_H_ // force re-including "MyData.h"
#include "MyData.h"
#pragma pack(pop)
    uneven packed_uneven;

    StaticAssert(sizeof(unpacked_uneven) == sizeof(packed_uneven), "uneven contains gaps");
}
You can place your StaticAssert into the function to get a compile-time error.
I found one (somewhat nasty and very tricky) solution for the problem, but it only works with C++, not C.
#define StaticAssert(cond, msg) switch(0){case 0:case cond:;}

#pragma pack(push, 1)
namespace packed
{
    #include "MyData.h"
}
#pragma pack(pop)

#undef MYDATA_H_ // force re-including "MyData.h"
#include "MyData.h"

void checkSizes()
{
    StaticAssert(sizeof(packed::uneven) == sizeof(uneven), "uneven contains gaps");
}
This StaticAssert macro fails for the given uneven struct, as the packed version's size is 7 bytes and the unpacked (normal) version's is 8 bytes. If an additional char is added at the end of the struct, the test succeeds - both versions are then 8 bytes.
I am working on a huge project which has one file A.h whose code has a line
typedef unsigned __int16 Elf64_Half;
Also, since I am building on Linux and using the dlinfo function, I have to include the link.h file in my project. And this is where the conflict arises, because I have two typedefs with the same name, Elf64_Half. (Linux's link.h includes elftypes.h, and it too has: typedef unsigned short Elf64_Half;)
What do I do in such a case? Is my only option to change the typedef in A.h? Remember, that is not easy, because the project is huge and I would have to make the change in several places.
Is there a way to undef a typedef or something?
For clarification, Rahul Manne gave an easy solution. Do
#define Elf64_Half The_Elf64_Half_I_dont_care
#include<link.h>
#undef Elf64_Half
#include<A.h>
/*
* Code here and use Elf64_Half from A.h as you like.
* However, you can't use Elf64_Half from link.h here
* or you have to call it The_Elf64_Half_I_dont_care.
*
*/
This will substitute each Elf64_Half in link.h with The_Elf64_Half_I_dont_care and thus you get no conflicts with A.h. As long as you don't want to use Elf64_Half of link.h explicitly that will work with no problems. You just have to remember that Elf64_Half from link.h is now called The_Elf64_Half_I_dont_care in case you ever have to use it explicitly in this file.
What do I do in such a case?
A common remedy is to put the one which needs the least visibility behind a "compilation firewall". That is, create your own abstraction/interface which provides the functionality you need, and then limit the visibility of the conflicting header by including it only in that one *.cpp. Of course, that *.cpp file would then not be permitted to include the header which has the other definition of the typedef.
Then the declarations won't cause conflict because they will never be visible to the same translation unit.
In your example, you'd likely create a wrapper over the dlinfo() functionalities you need. To illustrate:
DLInfo.hpp
namespace MON {

class DLInfo {
public:
    /* ...declare the necessary public/client functionality here... */
    int foo();
    ...
};

}
DLInfo.cpp
#include "DLInfo.hpp"
// include the headers for dlinfo() here.
// these includes should not be in your project's headers
#include <link.h>
#include <dlfcn.h>
// and define your MON::DLInfo implementation here, with full
// ability to use dlinfo():
int MON::DLInfo::foo() {
...
}
...
Here's a little workaround I figured out: if you define it as something else first, then you can typedef it later. See the example here (I'm on OS X using g++):
#include <iostream>
using namespace std;

typedef unsigned int uint32;

int main() {
    cout << sizeof(uint32) << endl;
}
The output of this program is
4
Now consider this modified program:
#include <iostream>
using namespace std;

typedef unsigned int uint32;
#define uint32 blah
typedef unsigned long uint32;

int main() {
    cout << sizeof(uint32) << endl;
}
The output of this program is
8
So to answer your question, you would add a line to your code:
#define Elf64_Half Elf64_Half_custom
For clarity, this works because you are basically renaming them all, but doing so with a single command instead of having to change all of your own names.
The code below runs fine in C.
But in C++ (std-C++00), the compilation fails.
#include <complex.h>

int main()
{
    float complex a = 0.0;
    return 0;
}
Here are the errors I am facing:
Error: complex is not part of 'std'
Error: expected ';' before 'a'
I have read the solution to the problem I am facing here, and I am also aware of std::complex.
But my problem is that I must port an enormous amount of C code to C++,
where complex numbers are declared and used as shown above.
So, is there any way to do it?
What other options do I have?
#include <complex>

int main()
{
    // Both a and b will be initialized to 0 + 0i.
    std::complex<float> a;
    std::complex<float> b = 0.0f;
    return 0;
}
The C99 complex number syntax is not supported in C++, which instead includes the type std::complex in the C++ standard library. You don't say what your motivation for porting the code is but there may be a couple different things you can do.
std::complex is specified in C++11 to be layout compatible with C99's complex. Implementations of earlier versions of C++ generally happen to provide std::complex layout compatibility as well. You may be able to use this to avoid having to port most of the code. Simply declare the C code's interface in C++ such that C++ code can be linked to the C implementation.
#include <complex.h>
#ifdef __cplusplus
extern "C" void foo(std::complex<float>);
#else
void foo(complex float);
#endif
You may need to modify the above for your implementation. You have to ensure that both name mangling and calling conventions are consistent between the two declarations.
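As a concrete illustration of what the layout guarantee permits (a sketch, not something the code above requires): since C++11, a std::complex<float> may be viewed as an array of two floats, which matches how C99 implementations commonly lay out float complex, so buffers can usually be shared across the language boundary without copying.

#include <cassert>
#include <complex>

int main() {
    std::complex<float> z(1.0f, 2.0f);

    // Guaranteed since C++11: element 0 is the real part, element 1 the imaginary part.
    float (&parts)[2] = reinterpret_cast<float (&)[2]>(z);
    assert(parts[0] == 1.0f && parts[1] == 2.0f);

    // That this float[2] view matches C99's 'float complex' layout is an assumption about
    // the C implementation (true on common ABIs), not something the C++ standard spells out.
}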
Alternatively, there are compilers that support the C complex syntax in C++ as an extension. Of course, since the C complex macro would interfere with std::complex, that macro does not get defined in C++, and you instead have to use the raw C keyword _Complex. Or, if you don't use std::complex anywhere and you're never going to, then you could just define the macro yourself: #define complex _Complex.
#ifdef __cplusplus
#define complex _Complex
#else
#include <complex.h>
#endif

int main()
{
    float complex a = 0.0;
    return 0;
}
This #define prohibits any use of the C++ complex headers. (And technically it causes undefined behavior if you make any use at all of the standard library, but as long as you avoid the headers <complex>, <ccomplex>, and <complex.h>, it will probably be okay in practice.)
If you really do need to port the entire codebase to standard C++, you might consider porting it not directly to idiomatic C++, but instead to 'portable' C. I.e. code that compiles as both C and as C++. This way you can continue building as C while you incrementally port it and verify that it continues to work correctly throughout the porting process.
To do this you'll do things like replace float complex with a typedef and then define the typedef appropriately in each language, use the C header <tgmath.h>, define other macros in C to correspond with the C++ complex interface, etc.
#include <stdio.h>   /* for printf */
#include <complex.h>
#include <math.h>

#ifdef __cplusplus
using my_complex_type = std::complex<float>;
#else
#include <tgmath.h>
typedef float complex my_complex_type;
#define real creal
#define imag cimag
#endif

#define PRI_complex_elem "f"

void print_sine(my_complex_type x) {
    x = sin(x);
    printf("%" PRI_complex_elem "+%" PRI_complex_elem "i\n", real(x), imag(x));
}
The answer from bames53 has been accepted and is very helpful. I am not able to add a comment to it, but I would suggest that perhaps
void print_sine(my_complex_type x) {
x = sin(x);
printf("%" PRI_complex_elem "+%" PRI_complex_elem "i\n", real(x), imag(x));
}
should use csin(x) in lieu of sin(x).