This is purely a theoretical problem, not something I have actually run into, but it has piqued my curiosity and I wanted to see if anyone has a better solution for it:
How do you portably guarantee that a specific file format / network
protocol or whatever conforms to a specific bit pattern?
Say we have a file format that uses a 64 bit header struct immediately followed by a variable length array of 32 bit structures:
Header: magic : 32 bit
        count : 32 bit
Field:  id    : 16 bit
        data  : 16 bit
My first instinct would be to write something like:
struct Field
{
    uint16_t id;
    uint16_t data;
};
Except that our compiler may decide that padding is advisable and we end up with a 64 bit structure. So our next bet is:
using Field = uint16_t[2];
and work on that.
That is, unless someone has carefully read the standard and noticed that uint16_t is optional. At this point our next best friend is uint_least16_t, which is guaranteed to be at least 16 bits long, but for all we know could be 20 bits long on a processor with 10-bit chars.
At this point, the only real solution I can come up with is some sort of bit stream, capable of reading and writing specific numbers of bits, and parameterized via std::numeric_limits.
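Something along these lines is what I have in mind; this is only a rough sketch (the BitWriter name and its MSB-first packing into unsigned char are arbitrary choices for illustration, not anything blessed by the standard):

#include <climits>   // CHAR_BIT
#include <cstdint>
#include <vector>

class BitWriter
{
public:
    // Append the low `bits` bits of `value`, most significant bit first.
    void write(std::uint_least32_t value, int bits)
    {
        for (int i = bits - 1; i >= 0; --i)
            push_bit((value >> i) & 1u);
    }

    const std::vector<unsigned char>& bytes() const { return buffer_; }

private:
    void push_bit(unsigned bit)
    {
        if (used_ == 0)
            buffer_.push_back(0);
        buffer_.back() = static_cast<unsigned char>(
            buffer_.back() | (bit << (CHAR_BIT - 1 - used_)));
        used_ = (used_ + 1) % CHAR_BIT;
    }

    std::vector<unsigned char> buffer_;
    int used_ = 0;   // bits already consumed in the last byte
};

// e.g. w.write(field_id, 16); w.write(field_data, 16); for the Field above.

On a platform where CHAR_BIT isn't 8 you would still have to decide how the packed bits map onto whatever the file system or network actually transmits.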
So, is there someone out there who has very carefully read the standard and found the point I'm missing? Or is this the only real way of getting a portable guarantee?
Notes:
- I've just realized that endianness would probably add another layer of complexity.
- I'm using the current working draft of the ISO standard (N3797).
How do you portably guarantee that a specific file format / network
protocol or whatever conforms to a specific bit pattern?
You can't. Not in C++, which was standardized against an abstract platform where little more than the existence of a "byte" made up of bits can be assumed. We can't even say for certain, looking only at the Standard, how many bits are in a char. You can use bitfields for everything, as bits are indivisible, but then you'll have padding to contend with at the least.
Sometimes it is best to give up on the idea of absolute Standards conformance for the sake of conformance, and look to other means to get the job done efficiently and effectively. In this case, platform specifics in combination with almost absolute Standards conformance (aka, good programming practices) will set you free.
Every platform I work on regularly (Linux & Windows) provides a means to regulate the padding the compiler will actually apply. For network communications, under Linux & Windows I use:
#pragma pack (push, 1)
as a preface to all the data structures I'm going to send over the wire. Endianness is indeed another challenge, but one more or less easily dealt with using other resources provided by every platform: ntohl and the like.
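For illustration, here is a sketch of the kind of wire struct I mean. The names are made up for the example, and it assumes a platform where the exact-width types exist and the compiler honours #pragma pack (the Linux header is shown; on Windows you would include <winsock2.h> instead):

#include <cstdint>
#include <arpa/inet.h>   // htons/htonl on Linux

#pragma pack (push, 1)
struct WireHeader
{
    std::uint32_t magic;   // held in network byte order on the wire
    std::uint32_t count;
};
struct WireField
{
    std::uint16_t id;
    std::uint16_t data;
};
#pragma pack (pop)

static_assert(sizeof(WireHeader) == 8, "padding crept into WireHeader");
static_assert(sizeof(WireField)  == 4, "padding crept into WireField");

// Fill in host values, then convert just before sending:
inline WireField make_wire_field(std::uint16_t id, std::uint16_t data)
{
    WireField f;
    f.id   = htons(id);
    f.data = htons(data);
    return f;
}

The static_asserts are cheap insurance: if a compiler ignores the pragma, the build fails instead of silently producing a different wire layout.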
Standards conformance is a laudable goal, and indeed in a code review I would reject most code that is non-conformant. The lack of conformance is really just shorthand for the rejection, however, not the reason itself. The actual reason for the rejection is in large part the difficulty of maintaining and porting non-conformant code when moving to another platform, or indeed even just upgrading the compiler on the same platform. Non-conformant code might compile and even appear to work, but it will very often fail in subtle and miserable ways when you least expect it, even after thorough testing.
The moral of the story is:
You should always write Standards-conformant code, except when you
shouldn't.
This really is just a re-imagining of Einstein's articulation of Occam's Razor:
Make everything as simple as possible, but no simpler.
If you want to ensure portability to everything standard-conforming, including platforms for which CHAR_BIT isn't 8, well, you've got your work cut out for you.
If you are comfortable limiting yourself to 98% of the computers you'll ever program, I recommend writing explicit serialization for anything that has to adhere to a particular wire-format. That includes breaking integers into bytes, etc.
Write appropriate abstractions around things and the code won't be too bad. Don't put shifts and masks everywhere. Encapsulate it.
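For example, a minimal sketch of what I mean by encapsulated, explicit serialization; the helper names are made up, and the wire order here is little-endian purely by way of example:

#include <cstdint>
#include <vector>

// The shifts and masks below depend only on the chosen wire order,
// never on the host's own byte order.
inline void put_u16(std::vector<unsigned char>& out, std::uint16_t v)
{
    out.push_back(static_cast<unsigned char>(v & 0xFFu));
    out.push_back(static_cast<unsigned char>((v >> 8) & 0xFFu));
}

inline std::uint16_t get_u16(const unsigned char* p)
{
    return static_cast<std::uint16_t>(p[0] | (p[1] << 8));
}

All the callers see is put_u16/get_u16; the bit twiddling stays in one place.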
I would use network types and network byte order. See this link: http://www.beej.us/guide/bgnet/output/html/multipage/htonsman.html. The example uses uint16_t. You can write the values a field at a time to avoid padding.
Or, if you want to read and write the entire structure at once, see this link: C++ struct alignment question
Make the structure easy for the program to use.
Provide input methods that extract data from the input and write to the data members. This removes the issue of padding, alignment boundaries and endianness. Similarly with output.
For example, if your input data is 16 bits wide but your platform is 32 bits wide, declare the structure using 32-bit fields. Copy the 16 bits from the input into the 32-bit fields.
Most programs read into a structure fewer times than they access the data members. Your program is not reading the input 100% of the time.
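A rough sketch of such an input method, assuming (purely for the example) that the two 16-bit input fields arrive little-endian:

#include <cstdint>
#include <istream>

struct Field
{
    std::uint32_t id;     // 32-bit members even though the input fields are 16 bits
    std::uint32_t data;

    // Read one record and widen it into the in-memory members. Padding,
    // alignment and endianness concerns all stay inside this method.
    bool read(std::istream& in)
    {
        unsigned char raw[4];
        if (!in.read(reinterpret_cast<char*>(raw), sizeof raw))
            return false;
        id   = static_cast<std::uint32_t>(raw[0] | (raw[1] << 8));
        data = static_cast<std::uint32_t>(raw[2] | (raw[3] << 8));
        return true;
    }
};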
Related
I started out just reading/writing 8-bit integers to files using chars. It was not very long before I realized that I needed to be able to work with more than just 256 possible values. I did some research on how to read/write 16-bit integers to files and became aware of the concept of big and little endian. I did even more research and found a few different ways to deal with endianness, and I also learned some ways to write endianness-independent code. My overall conclusion was that I have to first check whether the system I am using is big or little endian, convert the endianness accordingly, and then work with the values.
The one thing I have not been able to find is the best/most common way to deal with endianness when reading/writing to files in C++ (no networking). So how should I go about doing this? To help clarify, I am asking for the best way to read/write 16/32-bit integers to files between big and little endian systems. Because I am concerned about the endianness between different systems, I would also like a cross-platform solution.
The most common way is simply to pass your in-memory values through htons() or htonl() before writing them to the file, and also pass the read data through ntohs() or ntohl() after reading it back from the file. (htons()/ntohs() handle 16-bit values, htonl()/ntohl() handle 32-bit values)
When compiled for a big-endian CPU, these functions are no-ops (they just return the value you passed in to them verbatim), so the values will get written to the file in big-endian format. When compiled for a little-endian CPU, these functions endian-swap the passed-in value and return the swapped version, so again the values will get written to the file in big-endian format.
That way the values in the file are always stored in big-endian format, and they always get converted to/from the appropriate (CPU-native) format when being transferred to/from memory. This is the simplest way to do it (since you don't have to write or debug any conditional logic), and the most common (these functions are implemented and available on just about all platforms).
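A minimal sketch of that pattern for 32-bit values; the helper names are made up and error handling is kept to a minimum:

#include <arpa/inet.h>   // htonl/ntohl; on Windows include <winsock2.h> instead
#include <cstdint>
#include <fstream>

// Store a 32-bit value in the file in big-endian ("network") order.
void write_u32(std::ofstream& out, std::uint32_t value)
{
    std::uint32_t be = htonl(value);   // no-op on a big-endian CPU
    out.write(reinterpret_cast<const char*>(&be), sizeof be);
}

// Read it back and convert to the host's native order.
bool read_u32(std::ifstream& in, std::uint32_t& value)
{
    std::uint32_t be = 0;
    if (!in.read(reinterpret_cast<char*>(&be), sizeof be))
        return false;
    value = ntohl(be);
    return true;
}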
In practice, a good habit is to avoid binary data (to exchange data between computers) and prefer text files and textual protocols to exchange data. You could use textual formats like JSON, YAML, XML, .... (or sometimes invent your own). There are many C++ libraries related to them, e.g. jsoncpp.
Textual data is indeed more verbose (takes more disk space) and slightly slower to parse (but the disk I/O is often the bottleneck, not the CPU time "wasted" in parsing or encoding formats like JSON), but it is much easier to work with.
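As an illustration of how little machinery text output needs, here is a tiny hand-rolled sketch (the field names are invented; a real program would more likely use a library such as jsoncpp):

#include <iostream>
#include <sstream>
#include <string>

std::string to_json(unsigned id, unsigned value)
{
    std::ostringstream os;
    os << "{ \"id\": " << id << ", \"value\": " << value << " }";
    return os.str();
}

int main()
{
    std::cout << to_json(7, 42) << '\n';   // prints: { "id": 7, "value": 42 }
}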
Read also about serialization. You'll find lots of libraries doing that (using some "common" well-defined data format such as XDR or ASN.1). Many file formats contain some header describing the concrete encoding. The elf(5) format is a good example of that.
Be aware that most of the time the data is more valuable (economically) than the software working on it. So it is very important to document very well how your data is organized in files.
Consider also using databases. Sometimes simply using sqlite with tables containing JSON is very effective.
PS. Without an actual real world case, your question is too broad, and has no meaningful universal answer. There is no single best way!
Basile, I agree that there is no universal answer.
In my world, embedded real-time systems, using a text representation is blasphemy. Textual representations such as JSON are at least two orders of magnitude slower than binary representations. That may be fine for the web, but it makes a difference when you have to process several kilobytes of data per second (to handle voice, for instance) across DSPs and GPPs.
For a more in-depth discussion on this topic, check out chapter 7 of the ZeroMQ book.
I've come across many comments on various questions regarding bitfields asserting that bitfields are non-portable, but I've never been able to find a source explaining precisely why.
At face value, I would have presumed all bitfields merely compile to variations of the same bit-shifting code, but evidently there must be more to it than that, or there would not be such vehement dislike for them.
So my question is what is it that makes bitfields non-portable?
Bit fields are non-portable in the same sense as integers are non-portable. You can use integers to write a portable program, but you cannot expect to send a binary representation of int as is to a remote machine and expect it to interpret the data correctly.
This is because (1) processor word lengths differ, and with them the sizes of the integer types (byte length can differ too, but that is rare these days outside embedded systems), and (2) byte endianness differs across processors.
These problems are easy to overcome. Native endianness can easily be converted to an agreed-upon endianness (big endian is the de facto standard for network communication), the size can be inspected at compile time, and fixed-width integer types are available these days. Therefore integers can be used to communicate across a network, as long as these details are taken care of.
Bit fields build upon regular integer types, so they have the same problems with endianness and integer sizes. But they have even more implementation specified behaviour.
- Everything about the actual allocation details of bit fields within the class object. For example, on some platforms bit fields don't straddle bytes, on others they do; and on some platforms bit fields are packed left-to-right, on others right-to-left.
- Whether char, short, int, long, and long long bit fields are signed or unsigned (when not declared so explicitly).
Unlike endianness, it is not trivial to convert "everything about the actual allocation details" to a canonical form.
Also, while endianness is cpu architecture specific, the bit field details are specific to the compiler implementer. So, bit fields are not portable for communication even between separate processes within the same computer, unless we can guarantee that they were compiled using the same (or binary compatible) compiler.
TL;DR bit fields are not a portable way to communicate between computers. Integers aren't either, but their non-portability is easy to work around.
Bit fields are non-portable in the sense that the ordering of the bits is unspecified. So the bit at index 0 with one compiler could very well be the last bit with another compiler.
This prevents the use of bit fields in applications like toggling bits in memory-mapped hardware registers.
But you will see hardware vendors use bitfields in the code they release (Microchip, for instance). Usually that's because they also release the compiler with it or target a single compiler. In Microchip's case, for instance, the licence for their source code mandates that you use their own compiler (for the 8-bit low-end devices).
The link pointed to by @Pharap contains an extract of the (C++14) standard related to this unspecified ordering: is-there-a-portable-alternative-to-c-bitfields
What is faster: (Performance)
__int64 x,y;
x=y;
or
int x,y,a,b;
x=a;
y=b;
?
Or are they equal?
__int64 is a non-standard compiler extension, so whilst it may or may not be faster, you don't want to use it if you want cross-platform code. Instead, you should consider using #include <cstdint> and using uint64_t etc. These derive from the C99 standard, which provides stdint.h and inttypes.h for fixed-width integer arithmetic.
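For instance, a few of the types <cstdint> gives you:

#include <cstdint>

std::uint64_t a = 0;        // exactly 64 bits (the exact-width types are optional in the standard)
std::uint_least64_t b = 0;  // always available: the smallest type with at least 64 bits
std::uint_fast32_t c = 0;   // the "fastest" unsigned type with at least 32 bits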
In terms of performance, it depends on the system you are on. On x86_64, for example, you should not see any performance difference in adding 32- and 64-bit integers, since the add instruction can handle 32- or 64-bit registers.
However, if you're running code on a 32-bit platform or compiling for a 32-bit architecture, adding 64-bit integers will actually require two 32-bit register additions, as opposed to one. So if you don't need the extra space, it would be wasteful to allocate it.
I have no idea if compilers can or do optimise the types down to a smaller size if necessary. I expect not, but I'm no compiler engineer.
I hate this sort of question.
1) If you don't know how to measure for yourself then you almost certainly don't need to know the answer.
2) On modern processors it's very hard to predict how fast something will be based on single instructions, it's far more important to understand your program's cache usage and overall performance than to worry about optimising silly little snippets of code that use an assignment. Let the compiler worry about that and spend your time improving the algorithm used or other things that will have a much bigger impact.
So in short, you probably can't tell and it probably doesn't matter. It's a silly question.
The compiler's optimizer will remove all the code in your example, so in that way there is no difference. I think you want to know whether it is faster to move data 32 bits at a time or 64 bits at a time. If your data is aligned to 8 bytes and you are on a 64-bit machine, then it should be faster to move data 8 bytes at a time. There are several caveats to that, however. You may find that your compiler is already doing this optimization for you (you would have to look at the emitted assembly code to be sure), in which case you would see no difference. Also, consider using memcpy instead of rolling your own if you are moving a lot of data. If you are considering casting an array of 32-bit ints to 64-bit in order to copy faster or do some other operation faster (i.e. in half the number of instructions), be sure to Google for the strict aliasing rule.
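For example, a bulk copy can simply lean on memcpy and let the compiler and library pick the widest moves the target supports, instead of casting to a wider pointer type and running into strict aliasing (the helper name is made up):

#include <cstdint>
#include <cstring>

// Copies `count` 32-bit values; the regions must not overlap.
void copy_block(std::uint32_t* dst, const std::uint32_t* src, std::size_t count)
{
    std::memcpy(dst, src, count * sizeof *src);
}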
__int64 should be faster on most platforms, but be careful: some architectures require 8-byte alignment for this to take effect, and some would even crash your app if the data is not aligned.
Possible Duplicate:
When is it worthwhile to use bit fields?
I was looking up bitwise operators recently and stumbled upon the concept of the bitfield. It seems interesting and is a very cool concept, but when and/or why would a person use this in their code?
I know it's used quite a bit in embedded systems programming, but why (I can't seem to find anything about why it's useful)? Are there any advantages to it? And where are some other places bitfields are useful?
In general, use bitfields when you don't care about speed and you don't care about memory layout. If you care about those things, then don't use bitfields.
If you have a set of boolean flags, you can pack them using bitfields (reducing the size needed to store them). However, only access the flags through the bitfield.
It is the classic size vs. speed problem.
An additional caveat is that if you have a set of bitfields that are smaller than the native word, then your compiler will probably try to pad and align the bitfield struct. So you end up #pragma pack'ing the struct or using at least a native word. So if you are on a 32-bit machine and you happen to have 32 boolean flags that are only used internally, then this would be a good use of bitfields.
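For instance, something along these lines, kept strictly internal to the program (the flag names are invented):

#include <cstdint>

// 32 one-bit flags packed into (typically) a single 32-bit word. Only this
// program ever reads or writes them, so whatever layout the compiler picks
// is irrelevant.
struct Flags
{
    std::uint32_t loaded   : 1;
    std::uint32_t dirty    : 1;
    std::uint32_t visible  : 1;
    std::uint32_t selected : 1;
    // ... up to 32 such one-bit fields
};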
Some uses that immediately come to mind are:
- implementing communications protocols;
- storing user data in objects where you have limited space;
- extending data structures in existing protocols (similar to the above);
- performing multiple tests in a single operation.
I have used bitfields as parts of unions to represent registers in embedded systems, i.e. the control registers of microcontrollers and codecs. They are very useful for depicting the physical layout of registers as software constructs, thereby improving readability, and they were commonly used in device driver implementations. A few years back, 8-bit micros with very little flash and RAM were common, and therefore bitfields were common too. These days, 32-bit micros with lots of RAM/flash mean that bitfields are less necessary.
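A sketch of that register-overlay idiom; the field names and widths are invented, and the actual bit placement is whatever this particular compiler chooses, which is exactly why such code stays tied to the vendor's toolchain (strictly speaking, reading the union member that was not last written is undefined behaviour in standard C++, another reason it lives in vendor-specific driver code):

#include <cstdint>

union ControlReg
{
    std::uint8_t raw;              // whole register, for bulk reads/writes
    struct
    {
        std::uint8_t enable   : 1;
        std::uint8_t mode     : 2;
        std::uint8_t irq_en   : 1;
        std::uint8_t reserved : 4;
    } bits;
};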
I need to pack some bits in a byte in this fashion:
struct
{
    unsigned char bit0 : 1;   /* unsigned, so each one-bit field holds 0 or 1 */
    unsigned char bit1 : 1;
} a;

if( a.bit1 ) /* etc */

or, with a plain unsigned char a instead:

if( a & 0x2 ) /* etc */
From a source-code-clarity point of view, it's pretty obvious to me that bitfields are neater. But which option is faster? I know the speed difference won't be much, if any, but as I can use either of them, if one's faster, better.
On the other hand, I've read that bitfields are not guaranteed to arrange bits in the same order across platforms, and I want my code to be portable.
Notes: If you plan to answer 'Profile' ok, I will, but as I'm lazy, if someone already has the answer, much better.
The code may be wrong, you can correct me if you want, but remember what the point to this question is and please try and answer it too.
Bitfields make the code much clearer if they are used appropriately. I would use bitfields as a space saving device only. One common place I've seen them used is in compilers: Often type or symbol information consists of a bunch of true/false flags. Bitfields are ideal here since a typical program will have many thousands of these nodes created when it is compiled.
I wouldn't use bitfields to do a common embedded programming job: reading and writing device registers. I prefer using shifts and masks here because you get exactly the bits the documentation tells you you need and you don't have to worry about differences in various compilers implementation of bitfields.
As for speed, a good compiler will generate the same code for bitfields as it does for masking.
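As a sketch of the shifts-and-masks style for a (hypothetical) 3-bit STATUS field occupying bits 4..6 of a register, with the positions taken straight from a datasheet rather than from any compiler's bit-field layout:

#include <cstdint>

constexpr std::uint32_t STATUS_SHIFT = 4;
constexpr std::uint32_t STATUS_MASK  = 0x7u << STATUS_SHIFT;

inline std::uint32_t get_status(std::uint32_t reg)
{
    return (reg & STATUS_MASK) >> STATUS_SHIFT;
}

inline std::uint32_t set_status(std::uint32_t reg, std::uint32_t value)
{
    return (reg & ~STATUS_MASK) | ((value << STATUS_SHIFT) & STATUS_MASK);
}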
I would prefer the second example, for maximum portability. As Neil Butterworth pointed out, using bitfields ties you to the native processor. Think about it: what happens if Intel's x86 went out of business tomorrow? The code would be stuck, which means re-implementing the bitfields for another processor, say a RISC chip.
You have to look at the bigger picture and ask how OpenBSD managed to port their BSD systems to a lot of platforms using one codebase. OK, I'll admit that is a bit over the top, and debatable and subjective, but realistically speaking, if you want to port the code to another platform, the way to do it is the second example you used in your question.
Not only that, compilers for different platforms have their own ways of padding and aligning bitfields for the processor the compiler targets. And furthermore, what about the endianness of the processor?
Never rely on bitfields as a magic bullet. If you want speed on a specific processor and will stay fixed on it, i.e. with no intention of porting, then feel free to use bitfields. You cannot have both!
C bitfields were stillborn from the moment they were invented, for unknown reasons. People didn't like them and used bitwise operators instead. You have to expect other developers not to understand C bitfield code.
With regard to which is faster: Irrelevant. Any optimizing compiler (which means practically all) will make the code do the same thing in whatever notation. It's a common misconception of C programmers that the compiler will just search-and-replace keywords into assembly. Modern compilers use the source code as a blueprint to what shall be achieved and then emit code that often looks very different but achieves the intended result.
The first one is explicit, and whatever the speed, the second expression is error-prone because any change to your struct might make it wrong.
So use the first.
If you want portability, avoid bitfields. And if you are interested in performance of specific code, there is no alternative to writing your own tests. Remember, bitfields will be using the processor's bitwise instructions under the hood.
I think a C programmer would tend toward the second option of using bit masks and logic operations to deduce the value of each bit. Rather than having the code littered with hex values, enumerations would be set up, or, usually when more complex operations are involved, macros to get/set specific bits. I've heard on the grapevine that struct-implemented bitfields are slower.
Don't read too much into the "non-portable bit-fields" claim. There are two aspects of bit fields which are implementation-defined, the signedness and the layout, and one unspecified one, the alignment of the allocation unit in which they are packed. If you don't need anything other than the packing effect, using them is as portable (provided you explicitly specify the signed keyword where needed) as function calls, which also have unspecified properties.
Concerning performance, profiling is the best answer you can get. In a perfect world, there would be no difference between the two ways of writing it. In practice there can be some, but I can think of as many reasons in one direction as in the other. And it can be very sensitive to context (a logically meaningless difference between unsigned and signed, for instance), so measure in context...
To sum up, the difference is mostly a style difference in cases where you truly have the choice (i.e. not if the precise layout is important). In that case, it is an optimization (in size, not in speed), so I'd tend to write the code without it first and add it later if needed. Viewed that way, bit fields are the obvious choice: the modifications needed to achieve the result are the smallest and are contained in the single place of definition instead of being spread over all the places of use.