I am trying to write C++ code that converts the assembly constant dq 3FA999999999999Ah into a C++ double. What should I put inside the asm block? I don't know how to get the value out.
int main()
{
    double x;
    asm
    {
        dq 3FA999999999999Ah
        mov x,?????
    }
    std::cout << x << std::endl;
    return 0;
}
From the comments it sounds a lot like you want a reinterpret_cast here. Essentially, this tells the compiler to treat the sequence of bits as if it were of the type it was cast to; it makes no attempt to convert the value.
uint64_t raw = 0x3FA999999999999A;
double x = reinterpret_cast<double&>(raw);
See this in action here: http://coliru.stacked-crooked.com/a/37aec366eabf1da7
Note that I've used the specific 64-bit integer type here to make sure the bit representation matches that of the 64-bit double. Also, the cast has to be to double& because the C++ rules forbid a plain cast to double: reinterpret_cast deals with memory, not type conversions; for more details see this question: Why doesn't this reinterpret_cast compile?. Additionally, you need to be sure that the bit representation of the 64-bit unsigned integer matches up with the bit reinterpretation of the double for this to work properly.
EDIT: Something worth noting is that the compiler warns that this breaks the strict aliasing rules. The quick summary is that more than one name now refers to the same place in memory, and the compiler might not be able to tell which variables change when the change happens through the other name. In general you don't want to ignore this; I'd highly recommend reading the following article on strict aliasing to understand why it's an issue. So while the intent of the code might be a little less clear, you may find the better solution is to use memcpy to avoid the aliasing problem:
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    double x;
    const std::uint64_t raw = 0x3FA999999999999A;
    std::memcpy(&x, &raw, sizeof raw);
    std::cout << x << std::endl;
    return 0;
}
See this in action here: http://coliru.stacked-crooked.com/a/5b738874e83e896a
This avoids the aliasing issue because x is now a double with the correct constituent bits, but thanks to the memcpy it is not at the same memory location as the 64-bit integer that was used to hold the required bit pattern. Because memcpy treats the variables as if they were arrays of char, you still need to make sure you get any endianness considerations right.
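If your compiler supports C++20, std::bit_cast expresses the same bit-for-bit conversion even more directly (shown here only as a sketch of that alternative; it needs <bit> and a C++20 compiler):

#include <bit>
#include <cstdint>
#include <iostream>

int main()
{
    // std::bit_cast copies the object representation, so there is no aliasing problem
    constexpr double x = std::bit_cast<double>(std::uint64_t{0x3FA999999999999A});
    std::cout << x << std::endl; // 0x3FA999999999999A is the bit pattern of 0.05
    return 0;
}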
Related
Here's a little puzzle I couldn't find a good answer for:
Given a struct with bitfields, such as
struct A {
    unsigned foo:13;
    unsigned bar:19;
};
Is there a (portable) way in C++ to get the correct mask for one of the bitfields, preferably as a compile-time constant function or template?
Something like this:
constinit unsigned mask = getmask<A::bar>(); // mask should be 0xFFFFE000
In theory, at runtime, I could crudely do:
unsigned getmask_bar() {
    union AA {
        unsigned mask;
        A fields;
    } aa{};
    aa.fields.bar -= 1;
    return aa.mask;
}
That could even be wrapped in a macro (yuck!) to make it "generic".
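For illustration, here is roughly what such a macro could look like (it inherits every problem of the union trick above, including the formally undefined read of the inactive member):

// Illustrative only: a generic-looking wrapper around the union trick above.
// Reading aa.mask after writing aa.fields.FIELD is still formally undefined.
#define GETMASK(TYPE, FIELD)                                \
    ([] {                                                   \
        union { unsigned mask; TYPE fields; } aa{};         \
        aa.fields.FIELD -= 1; /* 0 - 1 wraps to all ones */ \
        return aa.mask;                                     \
    }())

// usage: unsigned mask = GETMASK(A, bar); // 0xFFFFE000 on the usual layout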
But I guess you can readily see the various deficiencies of this method.
Is there a nicer, generic C++ way of doing it? Or even a not-so-nice way? Is there something useful coming up for the next C++ standard(s)? Reflection?
Edit: Let me add that I am trying to find a way of making bitfield manipulation more flexible, so that it is up to the programmer to modify multiple fields at the same time using masking. I am after terse notation, so that things can be expressed concisely without lots of boilerplate. Think working with hardware registers in I/O drivers as a use case.
Unfortunately, there is no better way - in fact, there is no way to extract individual adjacent bit fields from a struct by inspecting its memory directly in C++.
From Cppreference:
The following properties of bit-fields are implementation-defined:
- The value that results from assigning or initializing a signed bit-field with a value out of range, or from incrementing a signed bit-field past its range.
- Everything about the actual allocation details of bit-fields within the class object. For example, on some platforms bit-fields don't straddle bytes, on others they do. Also, on some platforms bit-fields are packed left-to-right, on others right-to-left.
Your compiler might give you stronger guarantees; however, if you do rely on the behavior of a specific compiler, you can't expect your code to work with a different compiler/architecture pair. GCC doesn't even document their bit field packing, as far as I can tell, and it differs from one architecture to the next. So your code might work on a specific version of GCC on x86-64 but break on literally everything else, including other versions of the same compiler.
If you really want to be able to extract bitfields from a random structure in a generic way, your best bet is to pass a function pointer around (instead of a mask); that way, the function can access the field in a safe way and return the value to its caller (or set a value instead).
Something like this:
template<typename T>
auto extractThatBitField(const void *ptr) {
    return static_cast<const T *>(ptr)->m_thatBitField;
}

auto *extractor1 = &extractThatBitField<Type1>;
auto *extractor2 = &extractThatBitField<Type2>;
/* ... */
Now, if you have a pair of {pointer, extractor}, you can get the value of the bitfield safely. (Of course, the extractor function has to match the type of the object behind that pointer.) It's not much overhead compared to having a {pointer, mask} pair instead; the function pointer is maybe 4 bytes larger than the mask on a 64-bit machine (if at all). The extractor function itself will just be a memory load, some bit twiddling, and a return instruction. It'll still be super fast.
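A quick usage sketch (hypothetical; Type1 and m_thatBitField stand in for whatever your real types define):

Type1 obj{};
const void *ptr = &obj;
auto *extract = &extractThatBitField<Type1>;
auto value = extract(ptr); // safe: the extractor statically knows the real type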
This is portable and supported by the C++ standard, unlike inspecting the bits of a bitfield directly.
Alternatively, C++ allows casting between standard-layout structs that have common initial members. (Though keep in mind that this falls apart as soon as inheritance or private/protected members get involved! The first solution, above, works for all those cases as well.)
#include <cstring>

struct Common {
    int m_a : 13;
    int m_b : 19;
    int : 0; // Needed to ensure the bit fields end on a byte boundary
};

struct Type1 {
    int m_a : 13;
    int m_b : 19;
    int : 0;
    Whatever m_whatever;
};

struct Type2 {
    int m_a : 13;
    int m_b : 19;
    int : 0;
    Something m_something;
};

int getFieldA(const void *ptr) {
    // We still can't do type punning directly due
    // to weirdness in various compilers' aliasing resolution.
    // std::memcpy is the official way to do type punning.
    // This won't compile to an actual memcpy call.
    Common tmp;
    std::memcpy(&tmp, ptr, sizeof(Common));
    return tmp.m_a;
}
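Hypothetical usage (Whatever and Something stand in for whatever real members your types have):

Type1 t1{};
Type2 t2{};
int a1 = getFieldA(&t1); // reads t1.m_a through the Common layout
int a2 = getFieldA(&t2); // reads t2.m_a the same way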
See also: Can memcpy be used for type punning?
Is it possible to portably hash a pointer in C++03, which does not have std::hash defined?
It seems really weird for hashables containing pointers to be impossible in C++, but I can't think of any way of making them.
The closest way I can think of is doing reinterpret_cast<uintptr_t>(ptr), but uintptr_t is not required to be defined in C++03, and I'm not sure if the value could be legally manipulated even if it was defined... is this even possible?
No, in general. In fact it's not even possible in general in C++11 without std::hash.
The reason why lies in the difference between values and value representations.
You may recall the very common example used to demonstrate the difference between a value and its representation: the null pointer value. Many people mistakenly assume that the representation for this value is all bits zero. This is not guaranteed in any fashion; you are guaranteed behaviour by its value only.
For another example, consider:
int i;
int* x = &i;
int* y = &i;
x == y; // this is true; the two pointer values are equal
Underneath that, though, the value representation for x and y could be different!
Let's play compiler. We'll implement the value representation for pointers. Let's say we need (for hypothetical architecture reasons) the pointers to be at least two bytes, but only one is used for the value.
I'll just jump ahead and say it could be something like this:
struct __pointer_impl
{
    std::uint8_t byte1; // contains the address we're holding
    std::uint8_t byte2; // needed for architecture reasons, unused
                        // (assume no padding; we are the compiler, after all)
};
Okay, this is our value representation; now let's implement the value semantics. First, equality:
bool operator==(const __pointer_impl& first, const __pointer_impl& second)
{
    return first.byte1 == second.byte1;
}
Because the pointer's value is really only contained in the first byte (even though its representation has two bytes), that's all we have to compare. The second byte is irrelevant, even if they differ.
We need the address-of operator implementation, of course:
__pointer_impl address_of(int& i)
{
    __pointer_impl result;
    result.byte1 = /* hypothetical architecture magic */;
    return result;
}
This particular implementation overload gets us a pointer value representation for a given int. Note that the second byte is left uninitialized! That's okay: it's not important for the value.
This is really all we need to drive the point home. Pretend the rest of the implementation is done. :)
So now consider our first example again, "compiler-ized":
int i;
/* int* x = &i; */
__pointer_impl x = __address_of(i);
/* int* y = &i; */
__pointer_impl y = __address_of(i);
x == y; // this is true; the two pointer values are equal
For our tiny example on the hypothetical architecture, this sufficiently provides the guarantees required by the standard for pointer values. But note you are never guaranteed that x == y implies memcmp(&x, &y, sizeof(__pointer_impl)) == 0. There simply aren't requirements on the value representation to do so.
Now consider your question: how do we hash pointers? That is, we want to implement:
template <typename T>
struct myhash;

template <typename T>
struct myhash<T*> :
    std::unary_function<T*, std::size_t>
{
    std::size_t operator()(T* const ptr) const
    {
        return /* ??? */;
    }
};
The most important requirement is that if x == y, then myhash()(x) == myhash()(y). We also already know how to hash integers. What can we do?
The only thing we can do is try to somehow convert the pointer to an integer. Well, C++11 gives us std::uintptr_t, so we can do this, right?
return myhash<std::uintptr_t>()(reinterpret_cast<std::uintptr_t>(ptr));
Perhaps surprisingly, this is not correct. To understand why, imagine again we're implementing it:
// okay because we assumed no padding:
typedef std::uint16_t __uintptr_t; // will be used for the std::uintptr_t implementation

__uintptr_t __to_integer(const __pointer_impl& ptr)
{
    __uintptr_t result;
    std::memcpy(&result, &ptr, sizeof(__uintptr_t));
    return result;
}

__pointer_impl __from_integer(const __uintptr_t& ptrint)
{
    __pointer_impl result;
    std::memcpy(&result, &ptrint, sizeof(__pointer_impl));
    return result;
}
So when we reinterpret_cast a pointer to integer, we'll use __to_integer, and going back we'll use __from_integer. Note that the resulting integer will have a value depending upon the bits in the value representation of pointers. That is, two equal pointer values could end up with different integer representations...and this is allowed!
This is allowed because the result of reinterpret_cast is totally implementation-defined; you're only guaranteed that the opposite reinterpret_cast gives you back the same result.
So there's the first issue: on this implementation, our hash could end up different for equal pointer values.
This idea is out. Maybe we can reach into the representation itself and hash the bytes together. But this obviously ends up with the same issue, which is what the comments on your question are alluding to. Those pesky unused representation bits are always in the way, and there's no way to figure out where they are so we can ignore them.
We're stuck! It's just not possible. In general.
Remember, in practice we compile for certain implementations, and because the results of these operations are implementation-defined they are reliable if you take care to only use them properly. This is what Mats Petersson is saying: find out the guarantees of the implementation and you'll be fine.
In fact, most consumer platforms you use will handle the std::uintptr_t attempt just fine. If it's not available on your system, or if you want an alternative approach, just combine the hashes of the individual bytes in the pointer. All this requires to work is that the unused representation bits always take on the same value. In fact, this is the approach MSVC2012 uses!
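For illustration, a rough sketch of that byte-combining fallback - it is only correct if equal pointer values always have identical representations:

#include <cstddef>

// Hash the bytes of the pointer's representation (an FNV-1a-style combine).
std::size_t hash_pointer_bytes(const void *p)
{
    const unsigned char *bytes = reinterpret_cast<const unsigned char *>(&p);
    std::size_t h = 2166136261u;
    for (std::size_t i = 0; i < sizeof p; ++i)
    {
        h ^= bytes[i];
        h *= 16777619u;
    }
    return h;
}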
Had our hypothetical pointer implementation simply always initialized byte2 to a constant, it would work there as well. But there just isn't any requirement for implementations to do so.
Hope this clarifies a few things.
The answer to your question really depends on "HOW portable" do you want it. Many architectures will have a uintptr_t, but if you want something that can compile on DSP's, Linux, Windows, AIX, old Cray machines, IBM 390 series machines, etc, etc, then you may have to have a config option where you define your own "uintptr_t" if it doesn't exist in that architecture.
Casting a pointer to an integer type should be fine. If you were to cast it back, you may be in trouble. Of course, if you have MANY pointers and you allocate fairly large sections of memory on a 64-bit machine, then using a 32-bit integer gives you a good chance of lots of collisions. Note that 64-bit Windows still has a 32-bit long.
I know that in the code below, this_is_Illegal is undefined behaviour (even though some compilers allow it), because union member "a" is active and we then read from union member "b".
The question is, does the code in but_is_this_Legal fix it, or am I doing something scary and even more obscure? Can I use memcpy to achieve the same effect, or is there another undefined behaviour I am invoking there?
EDIT: Maybe the example is not clear enough. All I want to do is activate the other member.
So I am changing float to int. Although it seems dumb, it is closer to the real case. Read BELOW the code.
(Is it for some reason disallowed to copy one union member into another?)
struct Foo
{
    union Bar
    {
        int a[4];
        int b[4];
    };

    void this_is_Illegal()
    {
        a[0]=1;
        a[1]=2;
        a[2]=3;
        a[3]=4;
        std::cout<<b[0]<<b[1]<<b[2]<<b[3];
    }

    void but_is_this_Legal?()
    {
        a[0]=1;
        a[1]=2;
        a[2]=3;
        a[3]=4;
        b[0]=a[0];
        b[1]=a[1];
        b[2]=a[2];
        b[3]=a[3];
        std::cout<<b[0]<<b[1]<<b[2]<<b[3];
    }

    void this_looks_scary_but_is_it?()
    {
        a[0]=1;
        a[1]=2;
        a[2]=3;
        a[3]=4;
        // forget portability for this question, assume sizeof(int)==sizeof(float)
        // maybe memmove works here as well?
        memcpy(b, a, sizeof(int)*4);
        std::cout<<b[0]<<b[1]<<b[2]<<b[3];
    }
};
If all of the above does not sound very useful, consider that a is in truth an __m128 unioned with a float[4]. The bit representation is exact and correct, always.
At one point in time, you WILL need to actually use it, and you NEED to have it in main memory as an array of floats.
The "copy instruction" is in truth an _mm_store_ps from the __m128 union member to the float[4] member. Hence the question about the memcpy - maybe it is the more exact example of what I need...
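For concreteness, a rough sketch of that real case (names are made up; assumes the SSE intrinsics header):

#include <xmmintrin.h>

union Vec4
{
    __m128 simd; // SIMD register view
    float  f[4]; // plain float view, usable as an ordinary array
};

void example()
{
    Vec4 v;
    v.simd = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    // The "copy instruction" from the question: store the register
    // into the float[4] member of the same union.
    _mm_store_ps(v.f, v.simd);
    // v.f[0]..v.f[3] now hold 1, 2, 3, 4
}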
The second function is perfectly legal - but (in the original int/float version of your example) it doesn't do the same thing, since it performs an int-to-float value conversion rather than leaving the bits unchanged.
To be honest I would just stick with the first one - the behaviour is technically undefined, but I suspect it just does the right thing for you.
The third one switches one form of undefined behaviour for another (once you've written arbitrary bytes into a float, anything could happen). But if you know the bytes really represent a valid floating point value, it's fine.
the this_is_Illegal / but_is_this_Legal pair is pretty much the standard way to use unions ;)
but the memcpy will not work, because &a and &b are at the same address (thanks to the union), so the memcpy will effectively do nothing
because &a and &b are at the same address you can do some interesting things with the union - in your case, interpreting a float as an integer is a built-in feature of your union, but automatic conversion can't be triggered, because they are at the same address
you might want to look at __attribute__((packed)) because it helps when declaring protocol structs/unions
I've been getting warnings from Lint (740 at http://www.gimpel.com/html/pub/msg.txt) to the effect that it warns me not to cast a pointer to a union to a pointer to an unsigned long. I knew I was casting incompatible types so I was using a reinterpret_cast and still I got the warning which surprised me.
Example:
// bar.h
void writeDWordsToHwRegister(unsigned long* ptr, unsigned long size)
{
    // write double word by double word to HW registers
    ...
}
// foo.cpp
#include "bar.h"
struct fooB
{
    ...
};

union A
{
    unsigned long dword1;
    fooB fb; // Each translation unit has unique content in the union
    ...
};

void foo()
{
    A a;
    a = ...; // Set value of a

    // Lint warning
    writeDWordsToHwRegister(reinterpret_cast<unsigned long*>(&a), sizeof(A));

    // My current triage, but a bad one since someone, like me, in a future refactoring
    // might redefine union A to include a dword0 variable in the beginning and forget
    // to change the statement below.
    writeDWordsToHwRegister(reinterpret_cast<unsigned long*>(&(a.dword1)), sizeof(A));
}
Leaving aside exactly why I was doing it and how best to solve it (void* in the interface and a cast to unsigned long* inside writeDWordsToHwRegister?), reading the Lint warning's explanation I learned that on some machines there is a difference between a pointer to char and a pointer to word. Could someone explain how that difference could manifest itself, and maybe give examples of processors that show these differences? Are we talking alignment issues?
Since its an embedded system we do use exotic and in house cores so if bad things can happen, they probably will.
Generally, differences between pointer types come down to the fact that different types have different sizes: if you do p += 1, you get a different result depending on whether p is a pointer to char or a pointer to word.
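A tiny illustration of that point:

unsigned long words[2] = { 0, 0 };
char          *pc = reinterpret_cast<char *>(words);
unsigned long *pw = words;
pc += 1; // advances by 1 byte
pw += 1; // advances by sizeof(unsigned long) bytes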
The compiler assumes that pointers to A and pointers to long (which are usually dwords, but might just be words in your case) do not point to the same area of memory. This makes a number of optimizations legal: for example, when writing through an A*, prior loads through a long* do not need to be redone. This is called aliasing - or in this case, the lack thereof. But in your case, it has the effect that the code produced might actually not work as expected.
To make this portable, you first have to copy your data through a char buffer, because char has an exception to the aliasing rule: char aliases with everything, so when the compiler sees a char pointer it has to assume it can point to anything. For example, you could do this:
char buffer[sizeof(A)];
// char aliases with A
memcpy(buffer, reinterpret_cast<char*>(&a), sizeof(A));
// char also aliases with unsigned long
writeDWordsToHwRegister(reinterpret_cast<unsigned long*>(buffer), sizeof(A));
If you have any more questions, look up "strict aliasing" rules. It is actually a pretty well known issue by now.
I know that on some machines, pointers to char and pointers to word are actually different, as pointer to char needs extra bits due to the way memory is addressed.
There are some machines (mainly DSPs, but I think old DEC machines did this too) where this is the case.
This means that if you reinterpret_cast between pointer types on one of these machines, the resulting bit pattern is not necessarily a valid pointer of the destination type.
As a pointer to a union can in theory point to any of its members, a union pointer has to contain something that allows you to successfully use it to point to a char or to a word. Which in turn means that reinterpret_casting it leaves bits that mean something special to the compiler being treated as if they were part of a valid address.
For instance, suppose a pointer is 0xfffa, where the 'a' is some magic the compiler uses to work out what to do when you write unionptr->charmember (perhaps nothing) and something different when you write unionptr->wordmember (perhaps convert it to 0x3ff before using it). When you reinterpret_cast it to long*, you still have 0xfffa, because reinterpret_cast does nothing to the bit pattern.
Now you have something the compiler thinks is a pointer to long, containing 0xfffa, whereas it should be (say) 0x3ff.
Which is likely to result in a nasty crash.
A char* can be byte-aligned (anything!), whereas a long* generally needs to be aligned to a 4-byte boundary on any modern processor.
On bigger iron, you'll get some crash when you try accessing a long on a mis-aligned boundary (say SIGBUS on *nix). However, on some embedded systems you can just quietly get some odd results which makes detection difficult.
I've seen this happen on ARM7, and yes, it was hard to see what was going on.
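A minimal sketch of the kind of access that triggers it:

// Deliberately misaligned access: on strict-alignment CPUs this can fault
// (e.g. SIGBUS), on lenient ones it may silently produce odd results.
char buf[sizeof(long) * 2];
long *p = reinterpret_cast<long *>(buf + 1);
long v = *p; // undefined behaviour: misaligned (and aliasing-violating) load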
I'm not sure why you think a pointer to char is involved - you're casting a pointer to union A to a pointer to long. The best fix would probably be to change:
void writeDWordsToHwRegister(unsigned long* ptr, unsigned long size)
to:
void writeDWordsToHwRegister(const void *ptr, unsigned long size)
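One possible shape of that interface (just a sketch of the suggestion, with the hardware access left abstract):

// The cast to the width the hardware expects now happens in one place.
void writeDWordsToHwRegister(const void *ptr, unsigned long size)
{
    const unsigned long *words = static_cast<const unsigned long *>(ptr);
    for (unsigned long i = 0; i < size / sizeof(unsigned long); ++i)
    {
        // write words[i] to the hardware register
    }
}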
I use a code where I cast an enum* to int*. Something like this:
enum foo { ... };
...
foo foobar;
int *pi = reinterpret_cast<int*>(&foobar);
When compiling the code (g++ 4.1.2), I get the following warning message:
dereferencing type-punned pointer will break strict-aliasing rules
I googled this message, and found that it happens only when strict aliasing optimization is on. I have the following questions:
If I leave the code with this warning, will it generate potentially wrong code?
Is there any way to work around this problem?
If there isn't, is it possible to turn off strict aliasing from inside the source file (because I don't want to turn it off for all source files and I don't want to make a separate Makefile rule for this source file)?
And yes, I actually need this kind of aliasing.
In order:
Yes. GCC will assume that the pointers cannot alias. For instance, if you assign through one then read from the other, GCC may, as an optimisation, reorder the read and write - I have seen this happen in production code, and it is not pleasant to debug.
Several. You could use a union to represent the memory you need to reinterpret. You could use a reinterpret_cast. You could cast via char * at the point where you reinterpret the memory - char * are defined as being able to alias anything. You could use a type which has __attribute__((__may_alias__)). You could turn off the aliasing assumptions globally using -fno-strict-aliasing.
__attribute__((__may_alias__)) on the types used is probably the closest you can get to disabling the assumption for a particular section of code.
For your particular example, note that the size of an enum is implementation-defined; GCC generally uses the smallest integer size that can represent it, so reinterpreting a pointer to an enum as a pointer to int could leave you with uninitialized data bytes in the resulting integer. Don't do that. Why not just cast to a suitably large integer type?
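For example (with made-up enumerators), if all you need is the numeric value of the enum, cast the value instead of the pointer:

enum foo { FOO_A = 1, FOO_B = 42 };

foo foobar = FOO_B;
long value = static_cast<long>(foobar); // no pointers, no aliasing problem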
You could use the following code to cast your data:
template<typename T, typename F>
struct alias_cast_t
{
    union
    {
        F raw;
        T data;
    };
};

template<typename T, typename F>
T alias_cast(F raw_data)
{
    alias_cast_t<T, F> ac;
    ac.raw = raw_data;
    return ac.data;
}
Example usage:
unsigned int data = alias_cast<unsigned int>(raw_ptr);
But why are you doing this? It will break if sizeof(foo) != sizeof(int). Just because an enum is like an integer does not mean it is stored as one.
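If you do keep something like alias_cast, you could add a compile-time size check at the top of it to catch that mismatch (C++11 static_assert):

// Hypothetical guard to add at the top of alias_cast:
static_assert(sizeof(T) == sizeof(F),
              "alias_cast requires source and destination types of equal size");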
So yes, it could generate "potentially" wrong code.
Have you looked into this answer ?
The strict aliasing rule makes this setup illegal: two unrelated types can't point to the same memory. Only char* has this privilege. Unfortunately you can still code this way, maybe get some warnings, but have it compile fine.
Strict aliasing is controlled by a compiler option (-fno-strict-aliasing turns the assumption off), so you need to turn it off from the makefile.
And yes, it can generate incorrect code. The compiler will effectively assume that foobar and pi aren't bound together, and will assume that *pi won't change if foobar changed.
As already mentioned, use static_cast instead (and no pointers).