Is the following the best way to pack a float's bits into a uint32? This might be a fast and easy yes, but I want to make sure there's no better way, or that exchanging the value between processes doesn't introduce a weird wrinkle.
"Best" in my case, is that it won't ever break on a compliant C++ compiler (given the static assert), can be packed and unpacked between two processes on the same computer, and is as fast as copying a uint32 into another uint32.
Process A:
static_assert(sizeof(float) == sizeof(uint32) && alignof(float) == alignof(uint32), "no");
...
float f = 0.5f;
uint32 buffer[128];
memcpy(buffer + 41, &f, sizeof(uint32)); // packing
Process B:
uint32 * buffer = thisUint32Is_ReadFromProcessA(); // reads "buffer" from process A
...
memcpy(&f, buffer + 41, sizeof(uint32)); // unpacking
assert(f == 0.5f);
Yes, this is the standard way to do type punning. Cppreference's page on memcpy even includes an example showing how you can use it to reinterpret a double as an int64_t:
#include <iostream>
#include <cstdint>
#include <cstring>
int main()
{
// simple usage
char source[] = "once upon a midnight dreary...", dest[4];
std::memcpy(dest, source, sizeof dest);
for (char c : dest)
std::cout << c << '\n';
// reinterpreting
double d = 0.1;
// std::int64_t n = *reinterpret_cast<std::int64_t*>(&d); // aliasing violation
std::int64_t n;
std::memcpy(&n, &d, sizeof d); // OK
std::cout << std::hexfloat << d << " is " << std::hex << n
<< " as an std::int64_t\n";
}
Output:
o
n
c
e
0x1.999999999999ap-4 is 3fb999999999999a as an std::int64_t
As long as the asserts pass (you are writing and reading the correct number of bytes), the operation is safe. You can't pack a 64-bit object into a 32-bit object, but you can pack one 32-bit object into another 32-bit object, as long as they are trivially copyable.
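If it helps, the same idea can be wrapped in a pair of helpers. This is only a minimal sketch, using std::uint32_t in place of your uint32 typedef; on C++20 you could also write std::bit_cast<std::uint32_t>(f) (from <bit>), which expresses the same bit-exact copy.
#include <cstdint>
#include <cstring>
// Hypothetical helpers built on the same memcpy idiom.
inline std::uint32_t pack_float(float f)
{
    static_assert(sizeof(float) == sizeof(std::uint32_t), "size mismatch");
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u); // bit-exact copy, no aliasing violation
    return u;
}
inline float unpack_float(std::uint32_t u)
{
    float f;
    std::memcpy(&f, &u, sizeof f);
    return f;
}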
Or this:
union TheUnion {
uint32 theInt;
float theFloat;
};
TheUnion converter;
converter.theFloat = myFloatValue;
uint32 myIntRep = converter.theInt;
I don't know if this is better, but it's a different way to look at it. (Note that reading a union member other than the one most recently written is well defined in C but formally undefined behaviour in C++, even though most compilers allow it.)
I have a long pointer value that points to a 20-byte header structure followed by a larger array. Dec(57987104) = Hex(0374D020). All the values are stored little-endian: 1400 when byte-swapped is 0014, which in decimal is 20.
The question here is how do I get the first value which is a 2 byte unsigned short. I have a C++ dll to convert this for me. I'm running Windows 10.
GetCellData_API unsigned short __stdcall getUnsignedShort(unsigned long ptr)
{
unsigned long *p = &ptr;
unsigned short ret = *p;
return ret;
}
But when I call this from VBA using Debug.Print getUnsignedShort(57987104) I get 30008 when it should be 20.
I might need to do an endian swap but I'm not sure how to incorporate this from CodeGuru: How do I convert between big-endian and little-endian values?
inline void endian_swap(unsigned short& x)
{
x = (x >> 8) |
(x << 8);
}
How do I extract little endian unsigned short from long pointer?
I think I'd be inclined to write your interface function in terms of a general template function that describes the operation:
#include <utility>
#include <cstdint>
// Code for the general case
// you'll be amazed at the compiler's optimiser
template<class Integral>
auto extract_be(const std::uint8_t* buffer)
{
using accumulator_type = std::make_unsigned_t<Integral>;
auto acc = accumulator_type(0);
auto count = sizeof(Integral);
while(count--)
{
acc |= accumulator_type(*buffer++) << (8 * count);
}
return Integral(acc);
}
GetCellData_API unsigned short __stdcall getUnsignedShort(std::uintptr_t ptr)
{
return extract_be<std::uint16_t>(reinterpret_cast<const std::uint8_t*>(ptr));
}
As you can see from the demo on godbolt, the compiler does all the hard work for you.
Note that since we know the size of the data, I have used the sized integer types exported from <cstdint> in case this code needs to be ported to another platform.
EDIT:
Just realised that your data is actually LITTLE ENDIAN :)
template<class Integral>
auto extract_le(const std::uint8_t* buffer)
{
using accumulator_type = std::make_unsigned_t<Integral>;
auto acc = accumulator_type(0);
constexpr auto size = sizeof(Integral);
for(std::size_t count = 0 ; count < size ; ++count)
{
acc |= accumulator_type(*buffer++) << (8 * count);
}
return Integral(acc);
}
GetCellData_API unsigned short __stdcall getUnsignedShort(std::uintptr_t ptr)
{
return extract_le<std::uint16_t>(reinterpret_cast<const std::uint8_t*>(ptr));
}
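For illustration, a hypothetical call site could look like this (the buffer contents here are made up to match the 20-byte header described in the question):
// The first two bytes are 0x14 0x00, i.e. the value 20 stored little-endian.
const std::uint8_t header[] = { 0x14, 0x00 /* ... remaining header bytes ... */ };
std::uint16_t first = extract_le<std::uint16_t>(header); // first == 20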
Let's say you are pointing at the sixth element of the table with pulong[6]:
unsigned short *psh;
unsigned char puchar[4];
ZeroMemory(puchar, 4);
puchar[3] = ((char *)( &pulong[6]))[0];
puchar[2] = ((char *)( &pulong[6]))[1];
puchar[1] = ((char *)( &pulong[6]))[2];
puchar[0] = ((char *)( &pulong[6]))[3];
psh = (unsigned short *) puchar;
//first one
psh[0];
//second one
psh[1];
This is what I had in mind, though I got the byte order mixed up at first.
I have some bitmasks that look like this:
namespace bits {
const unsigned bit_one = 1u << 0;
const unsigned bit_two = 1u << 1;
const unsigned bit_three = 1u << 2;
......
const unsigned bit_ten = 1u << 10;
}
except that there are more bits and the names are actually meaningful flags for my program. But sometimes I remove bits, add bits, regroup similar bits, etc. Ideally I could do something like this:
namespace bits {
const unsigned bit_one = 1u << COUNTER;
const unsigned bit_two = 1u << COUNTER;
const unsigned bit_three = 1u << COUNTER;
......
const unsigned bit_ten = 1u << COUNTER;
}
Is there some template / macro to automate this process? I know about __COUNTER__, but this is a header, so if it gets included in some other source that uses __COUNTER__ too it may break. I'm working in a framework which is pre-C++11, so while upgrading my compiler will happen eventually, a solution that doesn't use C++11 would be ideal.
Why not use a macro with an argument?
#define BIT(n) (1 << (n))
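Applied to the names from the question it might look like this (a sketch; the indices are still written out, but adding or removing a flag only means renumbering in one place):
namespace bits {
    const unsigned bit_one   = BIT(0);
    const unsigned bit_two   = BIT(1);
    const unsigned bit_three = BIT(2);
}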
You can use the __LINE__ macro, which is part of standard C and C++. Use with caution and document your intent so that somebody else reading the code will understand.
#include <iostream>
namespace Bits
{
const unsigned Base = __LINE__ + 1;
const unsigned BitOne = 1u << __LINE__-Base;
const unsigned BitTwo = 1u << __LINE__-Base;
const unsigned BitThree = 1u << __LINE__-Base;
}
int main(void)
{
std::cout << Bits::BitOne << '\n';
std::cout << Bits::BitTwo << '\n';
std::cout << Bits::BitThree << '\n';
return 0;
}
The following will do the trick:
#define NEXT_MASK(x) \
DUMMY1_##x, \
x = (1U << DUMMY1_##x), \
DUMMY2_##x = DUMMY1_##x
enum {
NEXT_MASK(one),
NEXT_MASK(two),
NEXT_MASK(three),
NEXT_MASK(four)
};
#include <stdio.h>
int main()
{
printf("%x\n", one);
printf("%x\n", two);
printf("%x\n", three);
printf("%x\n", four);
return 0;
}
The program will emit:
1
2
4
8
The idea is that the first dummy enumerator steps up one from the one before. x is the mask, and the second dummy restores the counter value, so that the next macro invocation has a good starting point.
The classic solution would be an enumeration of the fields:
enum foo_flags {
alpha,
beta,
gamma,
count
};
and then using either std::bitset<count> or the BIT macro as H2CO3 suggested:
BIT(alpha)
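A short sketch of how the enumeration pairs with std::bitset (flag names taken from the example above):
#include <bitset>
enum foo_flags { alpha, beta, gamma, count };
int main()
{
    std::bitset<count> flags;          // one bit per enumerator before 'count'
    flags.set(alpha);                  // turn a flag on
    return flags.test(beta) ? 1 : 0;   // check a flag; beta is not set here
}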
Microsoft C++ has the
__COUNTER__
predefined macro, so you could...
#define NEXTBIT (1u << __COUNTER__)
namespace bits {
const unsigned bit_one = NEXTBIT;
const unsigned bit_two = NEXTBIT;
const unsigned bit_three = NEXTBIT;
}
Is it possible to convert floats from big to little endian? I have a big-endian value from a PowerPC platform that I am sending via TCP to a Windows process (little endian). This value is a float, but when I memcpy the value into a Win32 float type and then call _byteswap_ulong on that value, I always get 0.0000?
What am I doing wrong?
Simply reversing the four bytes works:
float ReverseFloat( const float inFloat )
{
float retVal;
char *floatToConvert = ( char* ) & inFloat;
char *returnFloat = ( char* ) & retVal;
// swap the bytes into a temporary buffer
returnFloat[0] = floatToConvert[3];
returnFloat[1] = floatToConvert[2];
returnFloat[2] = floatToConvert[1];
returnFloat[3] = floatToConvert[0];
return retVal;
}
Here is a function that can reverse the byte order of any type:
template <typename T>
T bswap(T val) {
T retVal;
char *pVal = (char*) &val;
char *pRetVal = (char*)&retVal;
int size = sizeof(T);
for(int i=0; i<size; i++) {
pRetVal[size-1-i] = pVal[i];
}
return retVal;
}
I found something roughly like this a long time ago. It was good for a laugh, but ingest at your own peril. I've not even compiled it:
void * endian_swap(void * arg)
{
    unsigned int n = *((unsigned int *)arg);
    n = ((n >> 8) & 0x00ff00ff) | ((n << 8) & 0xff00ff00);
    n = ((n >> 16) & 0x0000ffff) | ((n << 16) & 0xffff0000);
    *((unsigned int *)arg) = n;
    return arg;
}
An elegant way to do the byte exchange is to use a union:
float big2little (float f)
{
union
{
float f;
char b[4];
} src, dst;
src.f = f;
dst.b[3] = src.b[0];
dst.b[2] = src.b[1];
dst.b[1] = src.b[2];
dst.b[0] = src.b[3];
return dst.f;
}
Following jjmerelo's recommendation to write a loop, a more generic solution could be:
typedef float number_t;
#define NUMBER_SIZE sizeof(number_t)
number_t big2little (number_t n)
{
union
{
number_t n;
char b[NUMBER_SIZE];
} src, dst;
src.n = n;
for (size_t i=0; i<NUMBER_SIZE; i++)
dst.b[i] = src.b[NUMBER_SIZE-1 - i];
return dst.n;
}
Don't memcpy the data directly into a float type. Keep it as char data, swap the bytes and then treat it as a float.
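A minimal sketch of that approach, assuming the four bytes arrived big-endian in a char buffer:
#include <cstring>
float from_big_endian(const char bytes[4])
{
    // swap while the data is still raw bytes, then copy into a float
    char swapped[4] = { bytes[3], bytes[2], bytes[1], bytes[0] };
    float f;
    std::memcpy(&f, swapped, sizeof f);
    return f;
}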
It might be easier to use ntohl/ntohs and the related functions to convert from network byte order to host order and back; the advantage is that it would be portable. Here is a link to an article that explains how to do this.
From SDL_endian.h with slight changes:
std::uint32_t Swap32(std::uint32_t x)
{
return static_cast<std::uint32_t>((x << 24) | ((x << 8) & 0x00FF0000) |
((x >> 8) & 0x0000FF00) | (x >> 24));
}
float SwapFloat(float x)
{
union
{
float f;
std::uint32_t ui32;
} swapper;
swapper.f = x;
swapper.ui32 = Swap32(swapper.ui32);
return swapper.f;
}
This value is a float, but when I "memcpy" the value into a win32 float type and then call _byteswap_ulong on that value, I always get 0.0000?
This should work. Can you post the code you have?
However, if you care for performance (perhaps you do not, in that case you can ignore the rest), it should be possible to avoid memcpy, either by directly loading it into the target location and swapping the bytes there, or using a swap which does the swapping while copying.
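A sketch of the swap-while-copying idea (a hypothetical helper; it assumes a 4-byte float on both ends):
void copy_swapped_float(float* dst, const unsigned char* src)
{
    unsigned char* d = reinterpret_cast<unsigned char*>(dst);
    d[0] = src[3];  // reverse the byte order while copying,
    d[1] = src[2];  // so no separate memcpy/swap pass is needed
    d[2] = src[1];
    d[3] = src[0];
}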
In some cases, especially with Modbus, the network byte order for a float is:
nfloat[0] = float[1]
nfloat[1] = float[0]
nfloat[2] = float[3]
nfloat[3] = float[2]
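A sketch of that reordering as code (a hypothetical helper; the indices follow the mapping above):
void modbus_bytes_to_float(float* dst, const unsigned char* nfloat)
{
    unsigned char* f = reinterpret_cast<unsigned char*>(dst);
    f[1] = nfloat[0];  // nfloat[0] = float[1]
    f[0] = nfloat[1];  // nfloat[1] = float[0]
    f[3] = nfloat[2];  // nfloat[2] = float[3]
    f[2] = nfloat[3];  // nfloat[3] = float[2]
}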
Boost libraries have already been mentioned by #morteza and #AnotherParker, stating that the support for float was removed. However, it was added back in a subset of the library since they wrote their comments.
Using Boost.Endian conversion functions, version 1.77.0 as I wrote this answer, you can do the following:
float input = /* some value */;
float reversed = input;
boost::endian::endian_reverse_inplace(reversed);
Check the FAQ to learn why the support was removed then partially added back (mainly, because a reversed float may not be valid anymore) and here for the support history.
int temp = 0x5E; // in binary 0b1011110.
Is there a way to check if bit 3 in temp is 1 or 0 without bit shifting and masking?
Just want to know if there is some built in function for this, or am I forced to write one myself.
In C, if you want to hide bit manipulation, you can write a macro:
#define CHECK_BIT(var,pos) ((var) & (1<<(pos)))
and use it this way to check the nth bit from the right end:
CHECK_BIT(temp, n - 1)
In C++, you can use std::bitset.
Check if bit N (starting from 0) is set:
temp & (1 << N)
There is no builtin function for this.
I would just use a std::bitset if it's C++. Simple. Straight-forward. No chance for stupid errors.
typedef std::bitset<sizeof(int) * CHAR_BIT> IntBits; // CHAR_BIT is defined in <climits>
bool is_set = IntBits(value).test(position);
or how about this silliness
template<unsigned int Exp>
struct pow_2 {
static const unsigned int value = 2 * pow_2<Exp-1>::value;
};
template<>
struct pow_2<0> {
static const unsigned int value = 1;
};
template<unsigned int Pos>
bool is_bit_set(unsigned int value)
{
return (value & pow_2<Pos>::value) != 0;
}
bool result = is_bit_set<2>(value);
What the selected answer is doing is actually wrong. The macro below returns the bit's mask value (1 << pos) or 0, depending on whether the bit is actually set. This is not what the poster was asking for.
#define CHECK_BIT(var,pos) ((var) & (1<<(pos)))
Here is what the poster was originally looking for. The macro below returns 1 or 0 depending on whether the bit is set, rather than the mask value.
#define CHECK_BIT(var,pos) (((var)>>(pos)) & 1)
Yeah, I know I don't "have" to do it this way. But I usually write:
/* Return type (8/16/32/64 int size) is specified by argument size. */
template<class TYPE> inline TYPE BIT(const TYPE & x)
{ return TYPE(1) << x; }
template<class TYPE> inline bool IsBitSet(const TYPE & x, const TYPE & y)
{ return 0 != (x & y); }
E.g.:
IsBitSet( foo, BIT(3) | BIT(6) ); // Checks if Bit 3 OR 6 is set.
Amongst other things, this approach:
Accommodates 8/16/32/64 bit integers.
Detects IsBitSet(int32, int64) mismatches that would otherwise compile without my knowledge & consent.
Inlined template, so no function-calling overhead.
const & references, so nothing needs to be duplicated/copied. And we are guaranteed that the compiler will pick up any typos that attempt to change the arguments.
0 != makes the code more clear & obvious. The primary point of writing code is always to communicate clearly and efficiently with other programmers, including those of lesser skill.
While not applicable to this particular case... In general, templated functions avoid the issue of evaluating arguments multiple times, a known problem with some #define macros. E.g. with #define ABS(X) (((X)<0) ? -(X) : (X)), the call ABS(i++) increments i twice.
According to this description of bit-fields, there is a method for defining and accessing fields directly. The example in this entry goes:
struct preferences {
unsigned int likes_ice_cream : 1;
unsigned int plays_golf : 1;
unsigned int watches_tv : 1;
unsigned int reads_books : 1;
};
struct preferences fred;
fred.likes_ice_cream = 1;
fred.plays_golf = 1;
fred.watches_tv = 1;
fred.reads_books = 0;
if (fred.likes_ice_cream == 1)
/* ... */
Also, there is a warning there:
However, bit members in structs have practical drawbacks. First, the ordering of bits in memory is architecture dependent and memory padding rules vary from compiler to compiler. In addition, many popular compilers generate inefficient code for reading and writing bit members, and there are potentially severe thread safety issues relating to bit fields (especially on multiprocessor systems) due to the fact that most machines cannot manipulate arbitrary sets of bits in memory, but must instead load and store whole words.
You can use a Bitset - http://www.cppreference.com/wiki/stl/bitset/start.
Use std::bitset
#include <bitset>
#include <climits>
#include <iostream>
int main()
{
    int temp = 0x5E;
    std::bitset<sizeof(int) * CHAR_BIT> bits(temp); // CHAR_BIT (bits per byte) comes from <climits>
    // index 0 -> bit 1
    // index 2 -> bit 3
    std::cout << bits[2] << std::endl;
}
I was trying to read a 32-bit integer that defines the flags of an object in PDFs, and this wasn't working for me.
What fixed it was changing the define:
#define CHECK_BIT(var,pos) ((var & (1 << pos)) == (1 << pos))
The & operator returns an integer with the flags that both operands have set to 1, and it wasn't being converted properly into a boolean; this did the trick.
I use this:
#define CHECK_BIT(var,pos) ( (((var) & (pos)) > 0 ) ? (1) : (0) )
where "pos" is defined as 2^n (i.g. 1,2,4,8,16,32 ...)
Returns:
1 if true
0 if false
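For example, with the value from the question (0x5E is 0b1011110):
CHECK_BIT(0x5E, 0x04)   // yields 1: bit 2 of 0b1011110 is set
CHECK_BIT(0x5E, 0x01)   // yields 0: bit 0 is not set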
There is, namely the _bittest compiler intrinsic.
#define CHECK_BIT(var,pos) ((var>>pos) & 1)
pos - Bit position starting from 0.
returns 0 or 1.
For the low-level x86 specific solution use the x86 TEST opcode.
Your compiler should turn _bittest into this though...
The preceding answers show you how to handle bit checks, but more often than not, it is all about flags encoded in an integer, which is not well defined in any of the preceding cases.
In a typical scenario, flags are defined as integers themselves, each with a single bit set to 1 for the specific flag it represents. In the example hereafter, you can check if the integer has ANY flag from a list of flags (multiple error flags concatenated) or if EVERY flag is in the integer (multiple success flags concatenated).
Here is an example of how to handle flags in an integer.
Live example available here:
https://rextester.com/XIKE82408
//g++ 7.4.0
#include <iostream>
#include <stdint.h>
inline bool any_flag_present(unsigned int value, unsigned int flags) {
return bool(value & flags);
}
inline bool all_flags_present(unsigned int value, unsigned int flags) {
return (value & flags) == flags;
}
enum: unsigned int {
ERROR_1 = 1U,
ERROR_2 = 2U, // or 0b10
ERROR_3 = 4U, // or 0b100
SUCCESS_1 = 8U,
SUCCESS_2 = 16U,
OTHER_FLAG = 32U,
};
int main(void)
{
unsigned int value = 0b101011; // ERROR_1, ERROR_2, SUCCESS_1, OTHER_FLAG
unsigned int all_error_flags = ERROR_1 | ERROR_2 | ERROR_3;
unsigned int all_success_flags = SUCCESS_1 | SUCCESS_2;
std::cout << "Was there at least one error: " << any_flag_present(value, all_error_flags) << std::endl;
std::cout << "Are all success flags enabled: " << all_flags_present(value, all_success_flags) << std::endl;
std::cout << "Is the other flag enabled with eror 1: " << all_flags_present(value, ERROR_1 | OTHER_FLAG) << std::endl;
return 0;
}
Why all these bit-shifting operations and library functions? If you have the value the OP posted, 0b1011110, and you want to know if the bit in the 3rd position from the right is set, just do:
int temp = 0b1011110;
if( temp & 4 ) /* or (temp & 0b0100) if that's how you roll */
DoSomething();
Or something a bit prettier that may be more easily interpreted by future readers of the code:
#include <stdbool.h>
int temp = 0b1011110;
bool bThirdBitIsSet = (temp & 4) ? true : false;
if( bThirdBitIsSet )
DoSomething();
Or, with no #include needed:
int temp = 0b1011110;
_Bool bThirdBitIsSet = (temp & 4) ? 1 : 0;
if( bThirdBitIsSet )
DoSomething();
You could "simulate" shifting and masking: if((0x5e/(2*2*2))%2) ...
One approach is to check with the following condition:
if ( (mask >> bit ) & 1)
A demonstration program:
#include <stdio.h>
unsigned int bitCheck(unsigned int mask, int pin);
int main(void){
unsigned int mask = 6; // 6 = 0110
int pin0 = 0;
int pin1 = 1;
int pin2 = 2;
int pin3 = 3;
unsigned int bit0= bitCheck( mask, pin0);
unsigned int bit1= bitCheck( mask, pin1);
unsigned int bit2= bitCheck( mask, pin2);
unsigned int bit3= bitCheck( mask, pin3);
printf("Mask = %d ==>> 0110\n", mask);
if ( bit0 == 1 ){
printf("Pin %d is Set\n", pin0);
}else{
printf("Pin %d is not Set\n", pin0);
}
if ( bit1 == 1 ){
printf("Pin %d is Set\n", pin1);
}else{
printf("Pin %d is not Set\n", pin1);
}
if ( bit2 == 1 ){
printf("Pin %d is Set\n", pin2);
}else{
printf("Pin %d is not Set\n", pin2);
}
if ( bit3 == 1 ){
printf("Pin %d is Set\n", pin3);
}else{
printf("Pin %d is not Set\n", pin3);
}
}
unsigned int bitCheck(unsigned int mask, int bit){
if ( (mask >> bit ) & 1){
return 1;
}else{
return 0;
}
}
Output:
Mask = 6 ==>> 0110
Pin 0 is not Set
Pin 1 is Set
Pin 2 is Set
Pin 3 is not Set
If you just want a really hard-coded way:
#define IS_BIT3_SET(var) ( ((var) & 0x04) == 0x04 )
Note this is hardware dependent and assumes the bit order 7654 3210 and that var is 8 bits.
#include "stdafx.h"
#define IS_BIT3_SET(var) ( ((var) & 0x04) == 0x04 )
int _tmain(int argc, _TCHAR* argv[])
{
int temp =0x5E;
printf(" %d \n", IS_BIT3_SET(temp));
temp = 0x00;
printf(" %d \n", IS_BIT3_SET(temp));
temp = 0x04;
printf(" %d \n", IS_BIT3_SET(temp));
temp = 0xfb;
printf(" %d \n", IS_BIT3_SET(temp));
scanf("waitng %d",&temp);
return 0;
}
Results in:
1
0
1
0
While it is quite late to answer now, there is a simple way one could find whether the Nth bit is set or not, simply using the POWER and MODULUS mathematical operators.
Let us say we want to know if 'temp' has the Nth bit set or not. The following boolean expression will be true if the bit is set, and false otherwise:
( temp MODULUS 2^(N+1) >= 2^N )
Consider the following example:
int temp = 0x5E; // in binary 0b1011110 // BIT 0 is LSB
If I want to know if the 3rd bit is set or not, I get
(94 MODULUS 2^4) = 14, and 14 >= 2^3 (= 8)
So the expression is true, indicating the 3rd bit is set.
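A small sketch of that check in code (the helper name is made up), computing the powers of two by repeated multiplication so that no shifts or masks appear:
int is_bit_set_no_shift(unsigned int value, unsigned int n)
{
    unsigned int pow_n = 1;                 /* 2^N */
    unsigned int i;
    for (i = 0; i < n; ++i)
        pow_n *= 2;
    return (value % (pow_n * 2)) >= pow_n;  /* value mod 2^(N+1) >= 2^N */
}
With the example above, is_bit_set_no_shift(0x5E, 3) yields 1.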
Why not use something as simple as this?
uint8_t status = 255;
cout << "binary: ";
for (int i=((sizeof(status)*8)-1); i>-1; i--)
{
if ((status & (1 << i)))
{
cout << "1";
}
else
{
cout << "0";
}
}
OUTPUT: binary: 11111111
I do this:
LATGbits.LATG0 = ((m & 0x8) > 0); // to check if bit 3 (mask 0x8) of m is 1
The fastest way seems to be a lookup table for masks.
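For completeness, a minimal sketch of the lookup-table idea (a precomputed table of masks indexed by bit position; whether it actually beats a shift on a modern compiler is debatable):
static const unsigned int bit_mask[32] = {
    0x1u,        0x2u,        0x4u,        0x8u,
    0x10u,       0x20u,       0x40u,       0x80u,
    0x100u,      0x200u,      0x400u,      0x800u,
    0x1000u,     0x2000u,     0x4000u,     0x8000u,
    0x10000u,    0x20000u,    0x40000u,    0x80000u,
    0x100000u,   0x200000u,   0x400000u,   0x800000u,
    0x1000000u,  0x2000000u,  0x4000000u,  0x8000000u,
    0x10000000u, 0x20000000u, 0x40000000u, 0x80000000u
};
bool is_bit_set(unsigned int value, unsigned int pos)
{
    return (value & bit_mask[pos]) != 0;   // the mask comes from the table, not from a shift
}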