I am using MS C++, with a struct like this:
struct header {
    unsigned port : 16;
    unsigned destport : 16;
    unsigned not_used : 7;
    unsigned packet_length : 9;
};
struct header HR;
I need to put the value of this header into a separate char array. I did
memcpy(&REQUEST[0], &HR, sizeof(HR));
but the value of packet_length is not appearing properly. For example, if I assign HR.packet_length = 31;
I am getting -128 (at the fifth byte) and 15 (at the sixth byte).
Can you help me with this, or is there a more elegant way to do it?
Thanks.
This sounds like the expected behaviour for your struct, since you defined packet_length to be 9 bits long. The lowest bit of its value therefore already falls within the fifth byte of the memory: hence the -128 you see there (the highest bit set to 1 in a signed char is interpreted as a negative value), and the 15 is what is left in the sixth byte.
The memory bits look like this (in reverse order, i.e. higher to lower bits within each byte):
byte 6: 0 0 0 0 1 1 1 1   <- the high 8 bits of packet_length (= 15)
byte 5: 1 0 0 0 0 0 0 0   <- the lowest bit of packet_length, then the 7 not_used bits (0x80 = -128 as a signed char)
Note also that this approach may not be portable, as the byte order inside multibyte variables is platform dependent (see endianness).
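For instance, a quick way to see the layout on your own machine (a minimal sketch; the exact bytes depend on your compiler's bitfield layout and on endianness):
#include <cstdio>
#include <cstring>

struct header {
    unsigned port : 16;
    unsigned destport : 16;
    unsigned not_used : 7;
    unsigned packet_length : 9;
};

int main() {
    struct header HR = {0};
    HR.packet_length = 31;
    unsigned char bytes[sizeof HR];
    std::memcpy(bytes, &HR, sizeof HR);
    for (unsigned i = 0; i < sizeof HR; ++i)
        std::printf("byte %u: 0x%02x\n", i + 1, bytes[i]);
    // On a typical little-endian layout this prints 0x80 for byte 5
    // (-128 as a signed char) and 0x0f (15) for byte 6.
    return 0;
}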
Update: I am not an expert in cross-platform development, nor did you give many details about the layout of your request, etc. Anyway, in this situation I would set the fields of the request individually instead of memcpying the struct into it. That way I can at least control the exact value and position of each individual field, as sketched below.
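For example, a minimal sketch of such a serializer (the field order and the big-endian wire format here are my assumptions; adjust to your actual protocol):
#include <cstdint>

// Hypothetical wire layout: 2 bytes port, 2 bytes destport, then the
// 9-bit packet_length in the low bits of the last 2 bytes, big-endian.
void fill_request(unsigned char REQUEST[6], uint16_t port,
                  uint16_t destport, uint16_t packet_length) {
    REQUEST[0] = port >> 8;
    REQUEST[1] = port & 0xFF;
    REQUEST[2] = destport >> 8;
    REQUEST[3] = destport & 0xFF;
    REQUEST[4] = (packet_length >> 8) & 0x01; // top bit of the 9-bit length
    REQUEST[5] = packet_length & 0xFF;        // low 8 bits
}
This way the byte positions on the wire are fixed by you rather than by the compiler.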
For what it's worth, the bitfield itself stores the value correctly; reading it back through the struct member prints 31:
#include <stdio.h>

struct header {
    unsigned port : 16;
    unsigned destport : 16;
    unsigned not_used : 7;
    unsigned packet_length : 9;
};

int main() {
    struct header HR = {.packet_length = 31};
    printf("%u\n", HR.packet_length);
    return 0;
}
$ gcc new.c && ./a.out
31
Update:
I know that I can read that value directly via the struct member. But I need to send this struct over the network, and on the receiving side I am using Java.
In that case, use an array of chars (16+16+7+9 = 48 bits, i.e. 6 bytes) and parse it on the other side in Java.
The array will be smaller than the struct (no padding), so more data can be packed into a single MTU.
So, I am trying to convert a uint16_t (a 16-bit int) to a class, to get at the class member variables. But it is not working as expected.
#include <cstdint>
#include <iostream>
using namespace std;

class test {
public:
    uint8_t m_pcp : 3;   // 3 bits wide
    bool m_dei : 1;      // 1 bit wide
    uint16_t m_vid : 12; // 12 bits wide
public:
    test(uint16_t vid, uint8_t pcp = 0, bool dei = 0) {
        m_vid = vid;
        m_pcp = pcp;
        m_dei = dei;
    }
};
int main() {
    uint16_t tci = 65535;
    test t = (test)tci;
    cout << "pcp: " << t.m_pcp << " dei: " << t.m_dei << " vid " << t.m_vid << "\n";
    return 0;
}
Expected output:
pcp:1 dei: 1 vid 4095
The actual output:
pcp: dei: 0 vid 4095
Also,
cout << sizeof(t)
returns 2. Shouldn't it be 4?
Am I doing something wrong?
test t = (test)tci;
This line does not perform the cast you expect (that would be a reinterpret_cast, which would not compile here anyway). It simply calls your constructor, passing tci as vid and using the default values for the other parameters. So m_vid is assigned 65535 truncated to 12 bits, and m_pcp and m_dei are assigned 0. Try removing the constructor and you will see that the line no longer compiles.
The only way I know to do what you want is to write a constructor that unpacks the value, like so:
test(uint16_t i) {
    m_vid = i & 0x0fff; // low 12 bits
    i >>= 12;
    m_dei = i & 0x1;    // next bit
    i >>= 1;
    m_pcp = i & 0x7;    // top 3 bits
}
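A minimal demo of this unpacking constructor (a sketch assuming the same class as above; the uint8_t member is cast to unsigned so that cout prints a number rather than a raw character):
#include <cstdint>
#include <iostream>

class test {
public:
    uint8_t m_pcp : 3;
    bool m_dei : 1;
    uint16_t m_vid : 12;

    test(uint16_t i) {
        m_vid = i & 0x0fff; // low 12 bits
        i >>= 12;
        m_dei = i & 0x1;    // next bit
        i >>= 1;
        m_pcp = i & 0x7;    // top 3 bits
    }
};

int main() {
    test t(65535);
    std::cout << "pcp: " << unsigned(t.m_pcp)
              << " dei: " << t.m_dei
              << " vid " << t.m_vid << "\n"; // pcp: 7 dei: 1 vid 4095
    return 0;
}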
Also I'm not sure why you would expect m_pcp to be 1, since the 3 highest bits of 65535 make 7.
Also, cout<<sizeof(t) returns 2. shouldn't it be 4?
No, 3+1+12=16 bits make 2 bytes.
Your bitfields have 16 bits in total, so 2 bytes is the correct size (the compiler packs adjacent bitfields together, but be wary: the packing may vary across compilers). Given a single uint16_t value, your constructor assigns just the low 12 bits of it to m_vid and 0 to the other members. The low 12 bits of 65535 are 4095, so the output is correct, as you note (and bitfield widths are measured in bits, not bytes, by the way), but your expectation for the other members is off: the constructor explicitly supplies a 0 value for any parameter not specified.
I'm programming with a PLC and I'm reading values out of it.
It gives me the data as unsigned chars. That's fine, but the values in my PLC can be over 255, and since a single unsigned char can't hold a value over 255, I get the wrong information.
The structure I get from the library:
struct PlcVarValue
{
    unsigned long ulTimeStamp ALIGNATTRIB;
    unsigned char bQuality ALIGNATTRIB;
    unsigned char byData[1] ALIGNATTRIB;
};
ulTimeStamp gives the time,
bQuality gives true/false (whether the value could be read or not),
byData[1] holds the data.
Anyway, I'm trying this now (where ppValues is an array of pointers to PlcVarValue):
unsigned char* variableValue = ppValues[0]->byData;
int iVariableValue = *variableValue;
This works fine... until the value in ppValues[0]->byData is > 255.
When the number is, for example, 257 and I try the following:
unsigned char testValue = ppValues[0]->byData[0];
unsigned char testValue2 = ppValues[0]->byData[1];
the output is testValue = 1 and testValue2 = 1;
that doesn't make sense to me.
So my question is: how can I solve this so that it gives me the correct number?
That actually looks like a variable-sized structure, where an array of size 1 at the end is a common way to implement one (the classic "struct hack": the library allocates extra space past the declared end of the struct, and byData extends into it).
In this case, both bytes being 1 is the correct content for the value 257. Think of the two bytes as one 16-bit value and combine the bits: one byte becomes the high byte, where a 1 corresponds to 256; then add the low byte, which is 1, and you have 256 + 1 = 257. Simple binary arithmetic.
Which byte is the high one and which is the low we can't say, but it's easy to check: force a message that contains the value 258 instead; then one byte will still be 1 but the other will be 2.
How to combine the bytes into a single unsigned 16-bit value is also easy if you know the bitwise shift and or operators:
uint8_t high_byte = ...;
uint8_t low_byte = ...;
uint16_t word = (high_byte << 8) | low_byte;
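Applied to the PlcVarValue struct from the question, a sketch might look like this (assuming byData[0] is the low byte; verify with the 258 test above and swap the indices if your PLC sends the high byte first):
#include <cstdint>
#include <iostream>

// Simplified version of the library struct (ALIGNATTRIB omitted).
struct PlcVarValue {
    unsigned long ulTimeStamp;
    unsigned char bQuality;
    unsigned char byData[2]; // really a variable-sized payload
};

// Combine the two payload bytes into one 16-bit value.
uint16_t plcValue(const PlcVarValue& v) {
    return static_cast<uint16_t>(v.byData[1]) << 8 | v.byData[0];
}

int main() {
    PlcVarValue v{0, 1, {1, 1}};      // the payload bytes seen for 257
    std::cout << plcValue(v) << "\n"; // prints 257
    return 0;
}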
I have a question about bit packing in C++.
Let's say we have a struct defined in C++, shown below:
typedef struct
{
    unsigned long byte_half : 4;        //0.5
    unsigned long byte_oneAndHalf : 12; //2
    union
    {
        unsigned long byte_union_one_1 : 8; //3
        unsigned long byte_union_one_2 : 8; //3
        unsigned long byte_union_one_3 : 8; //3
    };
    unsigned long byte_one : 8; //4
} LongStruct;
It is a struct called LongStruct. From the looks of it, it occupies 4 bytes, and fits into a long. Now I execute the following line:
int size = sizeof(LongStruct);
I take a look at size, expecting it to have the value 4. Turns out I get 12 instead. In what way am I incorrectly visualizing my struct?
Thank you in advance for any help you can give me.
The union is expanded to a long, so its size is 4 bytes instead of 1 byte.
As a result, it is aligned to a 4-byte offset from the beginning of the structure.
In addition, the entire structure is expanded to be a multiple of 4 bytes in size.
So the actual structure looks like this:
unsigned long byte_half : 4; // bits 0 - 3
unsigned long byte_oneAndHalf : 12; // bits 4 - 15
unsigned long byte_padding_1 : 16; // bits 16 - 31 // align union
union
{
unsigned long byte_union_one_1 : 8; // bits 32 - 39
unsigned long byte_union_one_2 : 8; // bits 32 - 39
unsigned long byte_union_one_3 : 8; // bits 32 - 39
unsigned long byte_padding_2 : 24; // bits 40 - 63 // expand union
};
unsigned long byte_one : 8; // bits 64 - 71
unsigned long byte_padding_3 : 24; // bits 72 - 95 // expand structure
Hence the total size is 96 bits (12 bytes).
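If a 4-byte struct is what you're after, one workaround (a sketch, not the only option) is to drop the anonymous union and use a single 8-bit field, since all three union members alias the same 8 bits anyway:
#include <iostream>

typedef struct
{
    unsigned long byte_half : 4;
    unsigned long byte_oneAndHalf : 12;
    unsigned long byte_union_one : 8; // stands in for all three aliased union members
    unsigned long byte_one : 8;
} LongStructPacked;

int main()
{
    // Prints 4 where unsigned long is 32 bits (as in the question's
    // environment); on an LP64 platform the allocation unit is 64 bits,
    // so it prints 8 instead.
    std::cout << sizeof(LongStructPacked) << "\n";
    return 0;
}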
The anonymous union does not merge its members into the enclosing bitfield allocation; it takes a four-byte chunk in the middle of your bitfield struct. So your first two members are two bytes plus two of padding, then your union is one byte plus three of padding, then your final member is one byte and three more of padding. The total is the 12 you observe.
I'll try to dig into the standard to see exactly what it says about anonymous union bitfields. Alternatively, if you describe the real problem you're trying to solve, we could look into answering that as well.
As an aside, since you have this tagged C++, strongly prefer struct X {}; over typedef struct {} X;.
In the following code
#include <iostream>
using namespace std;
struct field
{
    unsigned first : 5;
    unsigned second : 9;
};

int main()
{
    union
    {
        field word;
        int i;
    };
    i = 0;
    cout << "First is : " << word.first << " Second is : " << word.second << " I is " << i << "\n";
    word.first = 2;
    cout << "First is : " << word.first << " Second is : " << word.second << " I is " << i << "\n";
    return 0;
}
When I assign word.first = 2, it updates the 5 bits of the word as expected and gives the desired output. It is the output of i that is a bit confusing. With word.first = 2, i outputs 2; but when I do word.second = 2, the output for i is 64. Since they share the same memory block, shouldn't the output for i in the latter case also be 2?
This particular result is platform-specific; you should read up on endianness.
But to answer your question, no, word.first and word.second don't share memory; they occupy separate bits. Evidently, the underlying representation on your platform is thus:
bit  15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
    |     |          second          |    first     |
    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
    |<--------------------- i --------------------->|
So setting word.second = 2 sets bit #6 of i, and 2^6 = 64.
While this depends on both your platform and your specific compiler, this is what happens in your case:
The union overlays both the int and the struct onto the same memory. Let us assume, for now, that your int has a size of 32 bits. Again, this depends on multiple factors. Your memory layout will look something like this:
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
                  SSSSSSSSSFFFFF
Where I stands for the integer, S for the second field and F for the first field of your struct. Note that I have represented the most significant bit on the left.
When you initialize the integer to zero, all the bits are set as zero, so first and second are also zero.
When you set word.first to two, the memory layout becomes:
00000000000000000000000000000010
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
                  SSSSSSSSSFFFFF
Which leads to a value of 2 for the integer. However, by setting the value of word.second to 2, the memory layout becomes:
00000000000000000000000001000000
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
                  SSSSSSSSSFFFFF
Which gives you a value of 64 for the integer.
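A quick check of the layout described above (a sketch; it assumes the little-endian, low-bits-first allocation shown in the diagrams, and like the original code it inspects the union through the inactive member):
#include <iostream>
using namespace std;

struct field
{
    unsigned first : 5;
    unsigned second : 9;
};

int main()
{
    union
    {
        field word;
        int i;
    };
    i = 0;
    word.second = 2;
    // second occupies bits 5..13 here, so i == 2 << 5 == 64.
    cout << i << " == " << (2 << 5) << "\n";
    return 0;
}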
I have a requirement where 3 bytes (24 bits) need to be populated in a binary protocol. The original value is stored in an int (32 bits). One way to achieve this would be as follows:
Technique 1:
long x = 24;
long y = htonl(x);
long z = y >> 8;
memcpy(dest, &z, 3);
Please let me know if the above is the correct way to do it.
The other way, which I don't understand, was implemented as below:
Technique 2:
typedef struct {
    char data1;
    char data2[3];
} some_data;

typedef union {
    long original_data;
    some_data data;
} combined_data;

long x = 24;
combined_data somedata;
somedata.original_data = htonl(x);
memcpy(dest, somedata.data.data2, 3);
What I don't understand is: how did the 3 bytes end up in somedata.data.data2, as opposed to the first byte going into somedata.data.data1 and the next 2 bytes into somedata.data.data2?
This is an x86_64 platform running 2.6.x Linux and gcc.
PARTIALLY SOLVED:
The x86_64 platform is little-endian: the least significant byte of a multi-byte value is stored at the lowest address. So a variable of type long with the value 24 has the following memory representation (lowest address, Byte1, on the right):
|--Byte4--|--Byte3--|--Byte2--|--Byte1--|
     0         0         0       0x18
With htonl() performed on the above long, the memory becomes:
|--Byte4--|--Byte3--|--Byte2--|--Byte1--|
   0x18        0         0        0
In the struct some_data:
data1    = Byte1
data2[0] = Byte2
data2[1] = Byte3
data2[2] = Byte4
But my question still holds: why not simply right-shift by 8, as shown in Technique 1?
A byte takes 8 bits :-)
int x = 24;
int y = x << 8;
Shifting by 0 changes nothing; shifting by 1 multiplies by 2, by 2 multiplies by 4, and by 8 multiplies by 256.
If we were on a big-endian machine, the 4 bytes would be stored in the opposite order (most significant byte first), and such byte manipulation would not carry over unchanged; on a big-endian machine you would also have to define what "putting an integer in 3 bytes" means.
Hmm. I think the second proposed algorithm will be OK, but check the order of the bytes: you need the three bytes in network order, I think. But better check it.
Edit: I checked on Wikipedia; x86 is little-endian, they say, so the algorithms are OK.
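For completeness, a portable alternative to both techniques (my sketch, not from the thread): write the three bytes with explicit shifts, which is independent of host endianness and of sizeof(long):
#include <cstdint>
#include <cstdio>

// Write the low 24 bits of value into dest in network (big-endian) order.
void put24be(unsigned char *dest, uint32_t value) {
    dest[0] = (value >> 16) & 0xFF; // most significant of the 3 bytes
    dest[1] = (value >> 8) & 0xFF;
    dest[2] = value & 0xFF;         // least significant byte
}

int main() {
    unsigned char buf[3];
    put24be(buf, 24);
    std::printf("%02x %02x %02x\n", buf[0], buf[1], buf[2]); // 00 00 18
    return 0;
}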