I need to pack this pointer (which is 64 bits wide) into four WORDs, and then later, in some other part of the code, reassemble those words back into the pointer.
The code looks like this:
std::vector<WORD> vec;
vec.push_back( this ); // how?
Later in the code:
pThis = vec.at(0); // how?
I did take a look at the LOWORD/HIWORD and LOBYTE/HIBYTE macros, but I still have no idea how I would go about this.
If you ask why on earth would anyone need this, here is why:
I need to fill in the creation data of the DLGITEMTEMPLATEEX structure. Its last member is a WORD giving the size of the data that follows, and that following data is where my pointer goes. Since I'm building the structure out of words (std::vector<WORD>), the final data is 4 WORDs (64 bits) representing the pointer.
Any suggestion or sample code is welcome.
Well, the best way would be to define a struct that derives from DLGITEMTEMPLATEEX. That way, you can avoid doing the conversion manually. While it might not be defined by the standard, it will work on Windows. That kind of code is platform-specific anyway!
struct MyTemplate : DLGITEMTEMPLATEEX
{
    MyTemplate(MyPointerType *myVariable)
        : myVariable(myVariable)
    {
        // extraCount is a member of the base struct, so it must be assigned
        // here rather than in the mem-initializer list.
        extraCount = sizeof *this - sizeof(DLGITEMTEMPLATEEX);
    }
    MyPointerType *myVariable; // 64 bits if compiled for a 64-bit target.
};
And when using the data, you do a static_cast to convert back to that structure.
You could add a few static_asserts and assertions to validate that it works as intended.
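For instance, a sketch of the recovery side under the same assumptions (the recover function, and how the DLGITEMTEMPLATEEX* reaches you, are illustrative):
#include <type_traits>

static_assert(std::is_standard_layout<MyTemplate>::value,
              "the static_cast below relies on a predictable layout");

// Convert the raw template data back; valid because MyTemplate
// really is the type of the buffer we built.
MyPointerType *recover(DLGITEMTEMPLATEEX *raw)
{
    MyTemplate *tpl = static_cast<MyTemplate*>(raw);
    return tpl->myVariable;
}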
You could use simple bit shifting to split the 64-bit value into 16-bit words.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void){
    uint64_t bigword = 0xDEADBEEFADEADBED;
    uint16_t fourth = (uint16_t)bigword;         // bits 0-15
    uint16_t third  = (uint16_t)(bigword >> 16); // bits 16-31
    uint16_t second = (uint16_t)(bigword >> 32); // bits 32-47
    uint16_t first  = (uint16_t)(bigword >> 48); // bits 48-63
    printf("%" PRIx64 " %x %x %x %x\n", bigword, fourth, third, second, first);
    return 0;
}
Then reverse the process to shift the words back into the 64-bit value.
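A sketch of that reverse step, using the same variable names as above:
uint64_t rebuilt = ((uint64_t)first  << 48)
                 | ((uint64_t)second << 32)
                 | ((uint64_t)third  << 16)
                 |  (uint64_t)fourth;
// rebuilt == 0xDEADBEEFADEADBED again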
I'm working on a school assignment. I am writing a program that uses unions to convert a given IP address in "192.168.1.10" format into its 32-bit single value, two 16-bit values, and four 8-bit values.
I'm having trouble implementing my structs and unions appropriately, and am looking for insight on the subject. To my understanding, a union occupies the same location as the referenced struct, but lets you look at specified pieces of it.
Any examples showing how a struct with four 8-bit values and a union can be used together would help. Also, any articles or books that might help me would be appreciated.
Below is the assignment outline:
Create a program that manages an IP address. Allow the user to enter the IP address as four 8-bit unsigned integer values (just use 4 sequential CIN statements). The program should output the IP address upon the user's request as any of the following: as a single 32-bit unsigned integer value; as four 8-bit unsigned integer values; as 32 individual bit values, which can be requested as a single bit by the user (by entering an integer 0 to 31); or as all 32 bits assigned into 2 variable-sized groups (host group and network group) and output as 2 unsigned integer values of 1 to 31 bits each.
I was going to cin into int pt1, pt2, pt3, pt4 and assign them to IP_Adress.pt1, etc.
struct IP_Adress {
unsigned int pt1 : 8;
unsigned int pt2 : 8;
unsigned int pt3 : 8;
unsigned int pt4 : 8;
};
I have not gotten anything to work appropriately yet. I think I am lacking a true understanding of the implementation of unions.
A union is not a good fit for this assignment. In fact, nothing in the text you quoted even says to use a union at all. And, a union will not help you with the parts of the assignment that deal with "32 individual bit values" or with "32 bits assigned into 2 variable sized groups". Those parts of the assignment will require bit shifting instead. Bit shifting is the better way to solve the other parts of the assignment, as well.
That being said, if you absolutely must use a union, you are probably looking for something more like this instead:
union IP_Adress {
uint8_t u8[4]; // four 8-bit values
uint16_t u16[2]; // two 16-bit values
uint32_t u32; // one 32-bit value
};
Except that C++ does not allow you to write into one union field and then read from another. C allows that kind of type punning, but it is undefined behavior in C++.
Why is type punning considered UB?
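For reference, a minimal sketch of the bit-shifting approach the assignment actually calls for (function names are illustrative):
#include <cstdint>

// Pack four octets into one 32-bit value, most significant octet first.
uint32_t pack(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    return (uint32_t(a) << 24) | (uint32_t(b) << 16)
         | (uint32_t(c) << 8)  |  uint32_t(d);
}

// Extract bit n (0 = least significant) from the packed address.
bool bit(uint32_t ip, int n) { return (ip >> n) & 1u; }

// Split into network/host groups; hostBits may be 1..31.
void split(uint32_t ip, int hostBits, uint32_t &net, uint32_t &host)
{
    host = ip & ((1u << hostBits) - 1u);
    net  = ip >> hostBits;
}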
The asker already knows that doing this can blow up in their face a number of different ways, but here's a simple example for 4 bytes, 4x1 byte, and 32x1 bit.
#include <cstdint>
#include <stdexcept>

union bad_idea
{
    uint32_t ints;                    // one 32-bit unsigned integer
    uint8_t  bytes[sizeof(uint32_t)]; // four 8-bit unsigned integers
};
and then
uint32_t get_int(const bad_idea & in)
{
return in.ints;
}
uint8_t get_byte(const bad_idea & in,
size_t offset)
{
if (offset >= sizeof(uint32_t)) // trap typos and idiots
{
throw std::runtime_error("invalid offset");
}
return in.bytes[offset];
}
bool get_bit(const bad_idea & in,
size_t offset)
{
if (offset >= sizeof(uint32_t)*8)
{
throw std::runtime_error("invalid offset");
}
return (in.ints >> offset) & 1; // shift the required bit to the end (in.ints >> offset)
// then mask off all of the other bits (& 1)
}
Things get a bit ugly when reading input, because you can't simply write
std::cin >> bad.bytes[0];
as that reads a single character. Type in 127 for the first octet and you'll wind up filling bad.bytes[0] through bad.bytes[2] with '1', '2', and '7'.
You need to involve a temporary variable:
int temp;
std::cin >> temp;
// test for a valid range in temp
bad.bytes[0] = temp;
or risk some explosive batsmurf like
std::cin >> *(int*)&bad.bytes[0];
// tests for valid value in bad.bytes[0] impossible because aliasing has been broken
// and everything is already <expletive deleted>ed
pardon my C. The more respectable
std::cin >> *reinterpret_cast<int*>(&bad.bytes[0]);
isn't any better. As ugly as it is, use the temporary variable and bundle it up in a function to eliminate the duplication. Frankly, this is a time when I'd probably fall back into C and pull out good ol' scanf.
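A sketch of that bundling (read_octet is an illustrative name):
#include <cstdint>
#include <iostream>
#include <stdexcept>

uint8_t read_octet(std::istream &in)
{
    int temp;
    if (!(in >> temp) || temp < 0 || temp > 255)
        throw std::runtime_error("invalid octet");
    return static_cast<uint8_t>(temp);
}

// usage: bad.bytes[0] = read_octet(std::cin);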
The assignment doesn't say C++, so you can just use typecasting instead of a union. I like to print the 32-bit address out in hex as well, since it's easier to check that you have the right 32-bit value:
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

#define word8  uint8_t
#define word16 uint16_t
#define word32 uint32_t

const char *sIP = "192.168.0.11";

int main(){
    word32 ip, *pIP;
    pIP = &ip;
    inet_pton(AF_INET, sIP, pIP);
    printf("32bit:%u %x\n", *pIP, *pIP);
    printf("16bit:%u %u\n", *(word16*)pIP, *(((word16*)pIP)+1));
    printf("8bit:%u %u %u %u\n", *(word8*)pIP, *(((word8*)pIP)+1), *(((word8*)pIP)+2), *(((word8*)pIP)+3));
    return 0;
}
Output:
32bit:184592576 b00a8c0
16bit:43200 2816
8bit:192 168 0 11
You could also store the IP as a 4-byte string and do math to get the 16-bit and 32-bit answers. It's a pretty dumb assignment IMO; I would never use a union to do it.
There is a structure in a program on Linux (64-bit OS), shown in the code below, and I did the following to output that structure as hex code. After the code below runs, strBuff is written to a file, printf-style.
That file then needs to be read on Windows and stored in the same example structure.
However, there is a problem here.
On my Windows system, unsigned long is 4 bytes.
On my Linux system, unsigned long is 8 bytes.
So there are too many zero hex digits in the output text.
This also seems to be related to padding: I expected only 2 bytes of padding, but padding is done in 4-byte units.
It is not possible to change the structure example, because the output code on Linux was written assuming 4 bytes and is already at the completion stage.
I have two things to ask.
First, how do I get rid of the unnecessary zero hex digits in the output? Currently we hard-code the dump to skip parts of every unsigned long and signed long variable.
Second, how can compatibility between Windows and Linux be solved? The code can be changed both on the reading side and on the output side. Is there a library related to this padding and compatibility problem?
struct example
{
    unsigned long Ul;
    int a;
    signed long Sl;
};

struct example eg;
// data input at eg
char *tempDataPtr = (char*)(&eg);
for(int i = 0; i < sizeof(eg); i++)
{
    sprintf(&strBuff[i*3], "%02X ", (unsigned char)tempDataPtr[i]);
}
Use types that have an explicit size, and order them from largest to smallest for good measure, to protect against padding discrepancies between fields:
struct example
{
    uint32_t Ul;
    int32_t  Sl;
    int16_t  a;
};
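Even then, the struct can carry trailing padding, so one way to keep compiler padding out of the file entirely is to dump field by field. A sketch under that assumption (byte order still differs between platforms unless you also fix endianness):
#include <cstdint>
#include <cstdio>
#include <cstddef>

struct example {
    uint32_t Ul;
    int32_t  Sl;
    int16_t  a;
};

// Dump one field's bytes as hex; returns the number of chars written.
static int dumpField(char *out, const void *field, std::size_t len)
{
    const unsigned char *p = static_cast<const unsigned char*>(field);
    int n = 0;
    for (std::size_t i = 0; i < len; i++)
        n += std::sprintf(out + n, "%02X ", p[i]);
    return n;
}

// strBuff must hold at least 3 chars per byte, plus a terminator.
int dump(const example &eg, char *strBuff)
{
    int n = 0;
    n += dumpField(strBuff + n, &eg.Ul, sizeof eg.Ul);
    n += dumpField(strBuff + n, &eg.Sl, sizeof eg.Sl);
    n += dumpField(strBuff + n, &eg.a,  sizeof eg.a);
    return n;
}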
I've got an array of bytes, declared like so:
typedef unsigned char byte;
vector<byte> myBytes = {255, 0, 76 ...}; // individual bytes, no value larger than 255
The problem I have is that I need to access the raw data of the vector (without any copying, of course), but I need to assign an arbitrary number of bits at the position of any given element.
In other words, I need to assign, say an unsigned int to a certain position in the vector.
So given the example above, I am looking to do something like below:
myBytes[0] = static_cast<unsigned int>(76535); //assign n-bit (here 32-bit) value to any index in the vector
So that the vector data would now look like:
{2, 247, 42, 1} //raw representation of a 32-bit int (76535)
Is this possible? I kind of need to use a vector and am just wondering whether the raw data can be accessed in this way, or whether how the vector stores its raw data makes this impossible or, worse, unsafe?
Thanks in advance!
EDIT
I didn't want to add complication, but I'm constructing variously sized integers as follows:
//**N_TYPES
u16& VMTypes::u8sto16(u8& first, u8& last) {
return *new u16((first << 8) | last & 0xffff);
}
u8* VMTypes::u16to8s(u16& orig) {
u8 first = (u8)orig;
u8 last = (u8)(orig >> 8);
return new u8[2]{ first, last };
}
What's terrible about this is that I'm not sure of the endianness of the numbers generated. But I know that I construct and destruct them the same way everywhere (I'm writing a stack machine), so if I'm not mistaken, endianness is not affected by what I'm trying to do.
EDIT 2
I am constructing ints in the following horrible way:
u32 a = 76535;
u16* b = VMTypes::u32to16s(a);
u8 aa[4] = { VMTypes::u16to8s(b[0])[0], VMTypes::u16to8s(b[0])[1], VMTypes::u16to8s(b[1])[0], VMTypes::u16to8s(b[1])[1] };
Could this then work?
memcpy(&_stack[0], aa, sizeof(u32));
Yes, it is possible. Take the starting address with &myVector[n] and memcpy your int to that location. Make sure that you stay within the bounds of your vector.
The other way around works too: take the location and memcpy out of it into your int.
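A sketch of both directions (sizes and values are illustrative):
#include <cstring>
#include <vector>

int main()
{
    std::vector<unsigned char> myBytes(16);

    unsigned int value = 76535;
    std::memcpy(&myBytes[0], &value, sizeof value);       // copy the int's bytes in

    unsigned int readBack = 0;
    std::memcpy(&readBack, &myBytes[0], sizeof readBack); // and copy them back out
    return 0;
}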
As suggested: by using memcpy you copy the byte representation of your integer into the vector. That byte representation, or byte order, may differ from your expectation. The keywords are big and little endian.
As knivil says, memcpy will work if you know the endianness of your system. However, if you want to be safe, you can do this with bitwise arithmetic:
unsigned int myInt = 76535;
const int ratio = sizeof(int) / sizeof(byte);
for(int b = 0; b < ratio; b++)
{
    // store the most significant byte first (shifts by 24, 16, 8, then 0)
    myBytes[b] = byte(myInt >> (8*sizeof(byte)*(ratio - 1 - b)));
}
The int can be read out of the vector using a similar pattern; if you want me to show you how, let me know.
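For completeness, a sketch of that reverse pattern (same byte order and names as the loop above):
unsigned int readBack = 0;
for(int b = 0; b < ratio; b++)
{
    readBack = (readBack << 8) | myBytes[b]; // most significant byte first
}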
I'm trying to interface with Ada code using C++, so I'm defining a struct with bit fields such that all the data is in the same place in both languages. The following is not precisely what I'm doing, but it outlines the problem. It's also a console application in VS2008, but that's not super relevant.
using namespace System;
int main() {
int array1[2] = {0, 0};
int *array2 = new int[2]();
array2[0] = 0;
array2[1] = 0;
#pragma pack(1)
struct testStruct {
// Word 0 (desired)
unsigned a : 8;
unsigned b : 1;
bool c : 1;
unsigned d : 21;
bool e : 1;
// Word 1 (desired)
int f : 32;
// Words 2-3 (desired)
int g[2]; //Cannot assign bit field but takes 64 bits in my compiler
};
testStruct test;
Console::WriteLine("size of char: {0:D}", sizeof(char) * 8);
Console::WriteLine("size of short: {0:D}", sizeof(short) * 8);
Console::WriteLine("size of int: {0:D}", sizeof(int) * 8);
Console::WriteLine("size of unsigned: {0:D}", sizeof(unsigned) * 8);
Console::WriteLine("size of long: {0:D}", sizeof(long) * 8);
Console::WriteLine("size of long long: {0:D}", sizeof(long long) * 8);
Console::WriteLine("size of bool: {0:D}", sizeof(bool) * 8);
Console::WriteLine("size of int[2]: {0:D}", sizeof(array1) * 8);
Console::WriteLine("size of int*: {0:D}", sizeof(array2) * 8);
Console::WriteLine("size of testStruct: {0:D}", sizeof(testStruct) * 8);
Console::WriteLine("size of test: {0:D}", sizeof(test) * 8);
Console::ReadKey(true);
delete[] array2;
return 0;
}
(If it wasn't clear, in the real program, the basic idea is that the program gets a void* from something communicating with the Ada code and casts it to a testStruct* to access the data.)
With #pragma pack(1) commented out, the output is:
size of char: 8
size of short: 16
size of int: 32
size of unsigned: 32
size of long: 32
size of long long: 64
size of bool: 8
size of int[2]: 64
size of int*: 32
size of testStruct: 224
size of test: 224
Obviously 4 words (indexed 0-3) should be 32*4 = 128 bits, not 224. The other output lines were there to help confirm the sizes of the types under the VS2008 compiler.
With #pragma pack(1) uncommented, that number (on the last two lines of output) is reduced to 176, which is still greater than 128. It seems that the bools aren't being packed together with the unsigned ints in "Word 0".
Note: a and b (together), c, d, e, and f packaged in different words would be 5 words, +2 for the array = 7 words, times 32 bits = 224, the number we get with #pragma pack(1) commented out. If c and e (the bools) instead take up 8 bits each, as opposed to 32, we get 176, which is the number we get with #pragma pack(1) uncommented. It seems #pragma pack(1) only allows the bools to be packed into single bytes by themselves, instead of words, but never the bools together with the unsigned ints.
So my question, in one sentence: is there a way to force the compiler to pack a through e into one word? Related is this question: C++ bitfield packing with bools, but it doesn't answer my question; it only points out the behavior I'm trying to make go away.
If there is literally no way to do this, does anyone have any ideas for workarounds? I'm at a loss, because:
I was asked to avoid changing the struct format that I'm copying (no re-ordering).
I don't want to change the bools to unsigned ints because it may cause problems down the road with constantly having to re-cast it to bool and maybe accidentally using the wrong version of an overloaded function, not to mention making the code more obscure for others who read it later.
I don't want to declare them as private unsigned ints and then write public accessors, because all other members of all other structs in the project are accessed directly without () afterward, so it would seem a bit hacky and obtuse, and one would almost need IntelliSense or trial and error to remember which needs () and which doesn't.
I would like to avoid creating another struct type just for the data conversion (and e.g. make a constructor for testStruct that takes in a single testStructImport-type object) because the actual struct is very long with lots of bit-field-specified variables.
I recommend that you create a "normal" structure without any bit packing. Use default POD types for the members.
Create interface functions for loading the "normal" fields from a buffer (uint8_t), and for storing them back to a buffer.
This will allow you to use the data members in a sane manner in your program. The bit packing and unpacking are handled by the interface functions. The bit twiddling should use bitwise AND and OR operators and not rely on bit field notation in a structure. This lets you adjust the bit twiddling and stay more portable across compilers.
This is how I designed my protocol classes, and I don't have to worry about bit field positioning, endianness, or things of that sort.
Also, I can use block I/O for reading and writing the buffer.
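A minimal sketch of that approach for word 0 of the question's layout (a:8, b:1, c:1, d:21, e:1); the assumption that the wire packs bits LSB-first into little-endian words is mine:
#include <cstdint>

// Plain struct with ordinary members; its layout need not match the wire.
struct TestData {
    uint8_t  a;
    bool     b, c, e;
    uint32_t d; // 21-bit field on the wire
};

// Unpack word 0 of the wire format from a byte buffer.
void loadWord0(const uint8_t *buf, TestData &out)
{
    uint32_t w = uint32_t(buf[0])
               | uint32_t(buf[1]) << 8
               | uint32_t(buf[2]) << 16
               | uint32_t(buf[3]) << 24;
    out.a =  w        & 0xFFu;      // bits 0..7
    out.b = (w >> 8)  & 1u;         // bit 8
    out.c = (w >> 9)  & 1u;         // bit 9
    out.d = (w >> 10) & 0x1FFFFFu;  // bits 10..30
    out.e = (w >> 31) & 1u;         // bit 31
}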
Try packing in this way:
#pragma pack( push, 1 )
struct testStruct {
// Word 0 (desired)
unsigned a : 8;
unsigned b : 1;
unsigned c : 1;
unsigned d : 21;
unsigned e : 1;
// Word 1 (desired)
unsigned f : 32;
// Words 2-3 (desired)
unsigned g[2]; //Cannot assign bit field but takes 64 bits in my compiler
};
#pragma pack(pop)
There is no easy, elegant method without using accessors or an interface layer. Unfortunately, there is nothing like a #pragma to fix this. I ended up just converting the bools to unsigned int and renaming variables, e.g. from f to f_flag or f_bool, to encourage correct usage and make it clear what the variables contain. It's lower-effort than Thomas's solution, but not as robust, obviously, and it still gets around some of the main drawbacks of the easier methods.
Years after I posted this question, user #WaltK added this comment to the linked, related question:
"If you want to have more control over the layout of bit field
structures in memory, consider using this bit field facility,
implemented as a library header file."
I want to modify individual bits of data (e.g. of ints or chars). I want to do this by making a pointer, say ptr, assigning it to some int or char, and then, after incrementing ptr n times, accessing the nth bit of that data.
Something like
// If I want to change all 8 bits in a char variable
char c = 'A';
T *ptr = &c; // T is the data type of pointer I want...
for(int index = 0; index < 8; index++)
{
    *ptr = 1; // something like assigning 1 to the bit pointed to by ptr...
}
There is no such thing as a bit pointer in C++. You need to use two things: a byte pointer and an offset to the bit. That seems to be what you are working towards in your code. Here's how you do the individual bit operations.
// set a bit
*ptr |= 1 << index;
// clear a bit
*ptr &= ~(1 << index);
// test a bit
if (*ptr & (1 << index))
...
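Applied to the question's loop, setting all 8 bits of the char one at a time (equivalent to c = 0xff):
char c = 'A';
char *ptr = &c;
for(int index = 0; index < 8; index++)
{
    *ptr |= 1 << index; // after the loop, every bit of c is set
}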
The smallest addressable memory unit in C and C++ is 1 byte, so you cannot have a pointer to anything smaller than a byte. If you want to perform bitwise operations, C and C++ provide the bitwise operators for them.
It is impossible to take the address of an individual bit, but you can use structures with bit fields, like in this example from Wikipedia:
struct box_props
{
unsigned int opaque : 1;
unsigned int fill_color : 3;
unsigned int : 4; // fill to 8 bits
unsigned int show_border : 1;
unsigned int border_color : 3;
unsigned int border_style : 2;
unsigned int : 2; // fill to 16 bits
};
Then, by manipulating the individual fields, you change sets of bits inside the unsigned int. Technically this is identical to the bitwise operations, but in this case the compiler generates the code (and you have a lower chance of bugs). Be advised that you still have to be cautious when using bit fields.
C and C++ don't have a "bit pointer"; technically speaking, C and C++ as such don't know about "bits". You could build your own type. To do this, you need two things: a pointer to some type (char or int, probably unsigned) and a bit number. You'd then use the pointer and the bit number, along with the bitwise operators, to actually access the values.
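A minimal sketch of such a type (BitPtr is an illustrative name, not a standard facility):
// A "bit pointer": a byte pointer plus a bit index.
struct BitPtr {
    unsigned char *byte;
    unsigned bit; // 0..7

    bool get() const { return (*byte >> bit) & 1u; }
    void set(bool v) {
        if (v) *byte |= (unsigned char)(1u << bit);
        else   *byte &= (unsigned char)~(1u << bit);
    }
    void next() { if (++bit == 8) { bit = 0; ++byte; } } // advance by one bit
};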
There is nothing like a pointer to a bit.
If you want all bits set to 1, then c = 0xff; is what you want. If you want to set a bit only under some condition:
for(int index = 0; index < 8; index++)
{
    if (condition) c |= 1 << index;
}
As you can see, there is no need to use a pointer.
You cannot read a single bit from memory; the CPU always reads a full cache line, whose size can differ between CPUs.
But from the language point of view you can use bit fields:
http://publications.gbdirect.co.uk/c_book/chapter6/bitfields.html
http://en.wikipedia.org/wiki/Bit_field