I am trying to receive some data over the network using UDP and parse it.
Here is the code:
char recvline[1024];
int n=recvfrom(sockfd,recvline,1024,0,NULL,NULL);
for(int i=0;i<n;i++)
cout << hex <<static_cast<short int>(recvline[i])<<" ";
This printed the output:
19 ffb0 0 0 ff88 d 38 19 48 38 0 0 2 1 3 1 ff8f ff82 5 40 20 16 6 6 22 36 6 2c 0 0 0 0 0 0 0 0
But I am expecting output like this:
19 b0 0 0 88 d 38 19 48 38 0 0 2 1 3 1 8f 82 5 40 20 16 6 6 22 36 6 2c 0 0 0 0 0 0 0 0
The ff shouldn't be there in the printed output.
I actually have to parse this data byte by byte, like this:
parseCommand(recvline);
and the parsing code looks like this:
void parseCommand(char *msg) {
    int commId = *(msg + 1);
    switch (commId) {
    case 0xb0: // do some operation
        break;
    case 0x20: // do another operation
        break;
    }
}
And while debugging, I see commId = -80 in the watch window.
Note:
On Linux I get the expected output with this code; note that I have used unsigned char instead of char for the read buffer.
unsigned char recvline[1024];
int n=recvfrom(sockfd,recvline,1024,0,NULL,NULL);
Whereas on Windows, recvfrom() does not allow the second argument to be unsigned char* and gives a build error, so I chose char.
It looks like you are getting the correct values, but your cast to short int during printing sign-extends your char value, causing ff to be propagated into the top byte when the top bit of the char is 1 (i.e. it is negative). You should first cast to an unsigned type and then widen to an int, so you need two casts:
cout << hex << static_cast<short int>(static_cast<uint8_t>(recvline[i]))<<" ";
I have tested this and it behaves as expected.
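For instance, a minimal self-contained check (the byte values here are invented to mimic your data):
#include <cstdint>
#include <iostream>
int main()
{
    // Hypothetical bytes standing in for received data; 0xb0 has its top bit set.
    char recvline[] = { 0x19, static_cast<char>(0xb0), 0x0d };
    for (char c : recvline)
        std::cout << std::hex
                  << static_cast<short int>(static_cast<std::uint8_t>(c)) << " ";
    std::cout << "\n"; // prints: 19 b0 d
}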
In response to your extension: the data you read is fine; it is a matter of how you interpret it. To parse it correctly you should do:
uint8_t commId = static_cast<uint8_t>(*(msg + 1));
switch (commId) {
case 0xb0: // do some operation
    break;
case 0x20: // do another operation
    break;
}
Because you store your data in a signed type, conversions/promotions to bigger types will first sign-extend the value (filling the high-order bits with the value of the MSB), even if the result is then converted to an unsigned type.
One solution is to define recvline as uint8_t[] in the first place and cast it to char* when passing it to the recvfrom function. That way you only have to cast once, and you use the same code in your Windows and Linux versions. Also, uint8_t[] is (at least to me) a clear indication that you are using the array as raw memory rather than as a string of some kind.
Another possibility is to simply perform a bitwise AND: (recvline[i] & 0xff). Thanks to automatic integral promotion, this doesn't even require a cast.
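A small sketch showing all three behaviors side by side (assuming an 8-bit char holding 0xb0, as in your data):
#include <cstdint>
#include <iostream>
int main()
{
    char c = static_cast<char>(0xb0); // top bit set, so negative where char is signed
    int sign_extended = c;                        // promotion fills the high bits: -80
    int via_uint8 = static_cast<std::uint8_t>(c); // cast first, then widen: 176
    int via_mask = c & 0xff;                      // integral promotion, then mask: 176
    std::cout << std::hex << sign_extended << " "
              << via_uint8 << " " << via_mask << "\n"; // ffffffb0 b0 b0
}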
Personal Note:
It is really annoying that the C and C++ standards don't provide a separate type for raw memory (yet), but with any luck we'll get a byte type in a future standard revision.
Related
Many compilers seem to keep only 0 or 1 in bool values, but I'm not sure this will always work:
int a = 2;
bool b = a;
int c = 3 + b; // 4 or 5?
Yes:
In C++ (§4.5/4):
An rvalue of type bool can be converted to an rvalue of type int, with false becoming zero and true becoming one.
In C, when a value is converted to _Bool, it becomes 0 or 1 (§6.3.1.2/1):
When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1.
When converting to int, it's pretty straightforward. int can hold 0 and 1, so there's no change in value (§6.3.1.3).
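A quick self-contained check of the conversion in question:
#include <cassert>
int main()
{
    int a = 2;
    bool b = a;     // any non-zero value converts to true
    int c = 3 + b;  // true converts to exactly 1, so c == 4
    assert(c == 4);
}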
Well, not always...
#include <iostream>
int main()
{
    const int n = 100;
    bool b[n]; // deliberately uninitialized: reading these values is undefined behavior
    for (int i = 0; i < n; ++i)
    {
        int x = b[i];
        if (x & ~1) // print any value that is neither 0 nor 1
        {
            std::cout << x << ' ';
        }
    }
}
Output on my system:
28 255 34 148 92 192 119 46 165 192 119 232 26 195 119 44 255 34 96 157 192 119
8 47 78 192 119 41 78 192 119 8 250 64 2 194 205 146 124 192 73 64 4 255 34 56 2
55 34 224 255 34 148 92 192 119 80 40 190 119 255 255 255 255 41 78 192 119 66 7
8 192 119 192 73 64 240 255 34 25 74 64 192 73 64
The reason for this apparently weird output is laid out in the standard, 3.9.1 §6:
Values of type bool are either true or false. Using a bool value in ways described by this International Standard as "undefined", such as by examining the value of an uninitialized automatic object, might cause it to behave as if it is neither true nor false.
Is C/C++ .......
There's no language named C/C++.
bool type always guaranteed to be 0 or 1 when typecast'ed to int?
In C++, yes, because §4.5/4 says:
An rvalue of type bool can be converted to an rvalue of type int, with false becoming zero and true becoming one.
int c = 3 + b; // 4 or 5?
The value of c will be 4.
One more example of what can happen once you step outside the safe boat:
#include <iostream>
int main() {
    bool b = false;
    *(reinterpret_cast<char*>(&b)) = 0xFF; // overwrite the bool with an invalid bit pattern
    int from_bool = b;
    std::cout << from_bool << " is " << (b ? "true" : "false") << "\n";
}
Output (g++ (GCC) 4.4.7):
255 is true
To add to FredOverflow's example.
There is no bool type in C before C99 (such as C90); however, the bool type in C99 and in C++ is always guaranteed to be 0 or 1.
In C, all boolean operations are guaranteed to return either 0 or 1, whether or not the bool type is defined.
So a && b, !a, and a || b will always return 0 or 1 in C or C++, regardless of the types of a and b.
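For example, this compiles and passes on any conforming implementation:
#include <cassert>
int main()
{
    int a = 5, b = 7;       // arbitrary non-zero operands
    assert((a && b) == 1);  // logical AND yields exactly 1, not 5 or 7
    assert((a || b) == 1);  // logical OR also yields exactly 1
    assert(!a == 0);        // logical NOT yields exactly 0
}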
Types with padding bits may behave strangely if the padding bits don't hold the values expected for the type. Most C89 implementations didn't use padding bits with any of their integer types, but C99 requires that implementations define such a type: _Bool. Reading a _Bool when all of its bits are zero will yield zero. Writing any non-zero value to a _Bool will set its bits to some pattern which will yield 1 when read. Writing zero will set the bits to a pattern (which may or may not be all-bits-zero) which will yield 0 when read.
Unless specified otherwise in an implementation's documentation, any bit pattern other than all-bits-zero which could not have been produced by storing a zero or non-zero value to a _Bool is a trap representation; the Standard says nothing about what will happen if an attempt is made to read such a value. Given, e.g.
union boolChar { _Bool b; unsigned char c; } bc;
storing zero to bc.c and then reading bc.b will yield zero. Storing zero or one to bc.b will set bc.c to a value which, if written back, will cause bc.b to hold zero or one. Storing any other value to bc.c and then reading bc.b yields undefined behavior.
Having recently watched this video about Facebook's implementation of string, I was curious to see the internals of Microsoft's implementation. Unfortunately, the string file (in %VisualStudioDirectory%/VC/include) doesn't seem to contain the actual definition, but rather just conversion functions (e.g. atoi) and some operator overloads.
I decided to do some poking and prodding at it from user-level programs. The first thing I did, of course, was to test sizeof(std::string). To my surprise, std::string takes 40 bytes! (On 64-bit machines, anyway.) The previously mentioned video goes into detail about how Facebook's implementation only requires 24 bytes and gcc's takes 32 bytes, so this was shocking, to say the least.
We can dig a little deeper by writing a simple program that prints off the contents of the data byte-by-byte (including the c_str address), as such:
#include <iostream>
#include <string>
int main()
{
std::string test = "this is a very, very, very long string";
// Print contents of std::string test.
char* data = reinterpret_cast<char*>(&test);
for (size_t wordNum = 0; wordNum < sizeof(std::string); wordNum = wordNum + sizeof(uint64_t))
{
for (size_t i = 0; i < sizeof(uint64_t); i++)
std::cout << (int)(data[wordNum + i]) << " ";
std::cout << std::endl;
}
// Print the value of the address returned by test.c_str().
// (Doing this byte-by-byte to match the above values).
const char* testAddr = test.c_str();
char* dataAddr = reinterpret_cast<char*>(&testAddr);
std::cout << "c_str address: ";
for (size_t i = 0; i < sizeof(const char*); i++)
std::cout << (int)(dataAddr[i]) << " ";
std::cout << std::endl;
}
This prints out:
48 33 -99 -47 -55 1 0 0
16 78 -100 -47 -55 1 0 0
-52 -52 -52 -52 -52 -52 -52 -52
38 0 0 0 0 0 0 0
47 0 0 0 0 0 0 0
c_str address: 16 78 -100 -47 -55 1 0 0
Examining this, we can see that the second word contains the address that points to the allocated data for the string, the third word is garbage (a buffer for Short String Optimization), the fourth word is the size, and the fifth word is the capacity. But what about the first word? It appears to be an address, but what for? Shouldn't everything already be accounted for?
For the sake of completeness, the following output shows SSO (the string is set to "Short String"). Note that the first word still seems to represent a pointer:
0 36 -28 19 123 1 0 0
83 104 111 114 116 32 83 116
114 105 110 103 0 -52 -52 -52
12 0 0 0 0 0 0 0
15 0 0 0 0 0 0 0
c_str address: 112 -9 79 -108 23 0 0 0
EDIT: OK, so having done more testing, it appears that the size of std::string actually decreases to 32 bytes when compiled for release, and the first word is no longer there. But I'm still really interested in knowing why that is the case, and what that extra pointer is used for in debug mode.
Update: As per the tip from the user Yuushi, the extra word appears to be related to Debug Iterator Support. This was verified when I turned off Debug Iterator Support (an example for doing this is shown here) and the size of std::string was reduced to 32 bytes, with the first word now missing.
However, it would still be really interesting to see how Debug Iterator Support uses that additional pointer to check for incorrect iterator use.
Visual Studio 2015 uses xstring instead of string to define std::basic_string.
NOTE: This answer applies to VS2015 only; VS2013 uses a different implementation, although they are more or less the same.
It's implemented as:
template<class _Elem,
class _Traits,
class _Alloc>
class basic_string
: public _String_alloc<_String_base_types<_Elem, _Alloc> >
{
// This class has no member data
}
_String_alloc uses a _Compressed_pair<_Alty, _String_val<_Val_types>> to store its data. In std::string, _Alty is std::allocator<char> and _Val_types is _Simple_types<char>. Because std::is_empty<std::allocator<char>>::value is true, sizeof _Compressed_pair<_Alty, _String_val<_Val_types>> is the same as sizeof _String_val<_Val_types>.
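That works because of the empty base optimization. Here is a rough sketch of the idea; CompressedPairSketch and EmptyAllocator are illustrative names of mine, not the actual MSVC types:
#include <type_traits>
// The first type is empty, so inheriting from it privately lets it occupy
// no storage; only the second member contributes to the overall size.
template <class First, class Second>
struct CompressedPairSketch : private First
{
    Second second;
};
struct EmptyAllocator {}; // stand-in for std::allocator<char>
static_assert(std::is_empty<EmptyAllocator>::value, "allocator is empty");
// Holds on typical implementations thanks to the empty base optimization:
static_assert(sizeof(CompressedPairSketch<EmptyAllocator, long long>) == sizeof(long long),
              "empty base adds no size");
int main() {}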
The class _String_val inherits from _Container_base, which is a typedef of _Container_base0 when _ITERATOR_DEBUG_LEVEL == 0 and of _Container_base12 otherwise. The difference between them is that _Container_base12 contains a pointer to _Container_proxy for debugging purposes. Besides that, _String_val also has these members:
union _Bxty
{ // storage for small buffer or pointer to larger one
_Bxty()
{ // user-provided, for fancy pointers
}
~_Bxty() _NOEXCEPT
{ // user-provided, for fancy pointers
}
value_type _Buf[_BUF_SIZE];
pointer _Ptr;
char _Alias[_BUF_SIZE]; // to permit aliasing
} _Bx;
size_type _Mysize; // current length of string
size_type _Myres; // current storage reserved for string
with _BUF_SIZE being 16.
The pointer and size_type members align well together on this system, so no padding is necessary.
Hence, when _ITERATOR_DEBUG_LEVEL == 0, sizeof std::string is:
_BUF_SIZE + 2 * sizeof(size_type)
otherwise it is:
sizeof(pointer_type) + _BUF_SIZE + 2 * sizeof(size_type)
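A quick way to sanity-check this on your own build (the numbers assume 64-bit MSVC):
#include <iostream>
#include <string>
int main()
{
    // With _ITERATOR_DEBUG_LEVEL == 0: 16 (_BUF_SIZE) + 2 * 8 (two size_types) = 32.
    // With debug iterator support enabled, the extra _Container_proxy pointer makes it 40.
    std::cout << sizeof(std::string) << '\n';
}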
I have two functions that print a 32-bit number in binary.
The first one divides the number into bytes and starts printing from the last byte (from the 25th bit of the whole integer).
The second one is more straightforward and starts from the 1st bit of the number.
It seems to me that these functions should produce different outputs because they process the bits in different orders, yet the outputs are the same. Why?
#include <stdio.h>
void printBits(size_t const size, void const * const ptr)
{
unsigned char *b = (unsigned char*) ptr;
unsigned char byte;
int i, j;
for (i=size-1;i>=0;i--)
{
for (j=7;j>=0;j--)
{
byte = (b[i] >> j) & 1;
printf("%u", byte);
}
}
puts("");
}
void printBits_2( unsigned *A) {
for (int i=31;i>=0;i--)
{
printf("%u", (A[0] >> i ) & 1u );
}
puts("");
}
int main()
{
unsigned a = 1014750;
printBits(sizeof(a), &a); // ->00000000000011110111101111011110
printBits_2(&a); // ->00000000000011110111101111011110
return 0;
}
Both your functions print the binary representation of the number from the most significant bit to the least significant bit. Today's PCs (and the majority of other computer architectures) use the so-called little-endian format, in which multi-byte values are stored with the least significant byte first.
That means the 32-bit value 0x01020304 stored at address 0x1000 will look like this in memory:
+--------++--------+--------+--------+--------+
|Address || 0x1000 | 0x1001 | 0x1002 | 0x1003 |
+--------++--------+--------+--------+--------+
|Data || 0x04 | 0x03 | 0x02 | 0x01 |
+--------++--------+--------+--------+--------+
Therefore, on little-endian architectures, printing a value's bits from MSB to LSB is equivalent to taking its bytes in reverse order and printing each byte's bits from MSB to LSB.
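You can see this directly by dumping the bytes of a value yourself; a minimal sketch:
#include <cstdio>
int main(void)
{
    unsigned x = 0x01020304;
    unsigned char *p = (unsigned char *)&x;
    // On a little-endian machine the lowest address holds the least
    // significant byte, so this prints: 04 03 02 01
    printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
}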
This is the expected result when:
1) You use both functions to print a single integer, in binary.
2) Your C++ implementation is on a little-endian hardware platform.
Change either one of these factors (with printBits_2 appropriately adjusted), and the results will be different.
They don't process the bits in different orders. Here's a visual:
Bytes: 4 3 2 1
Bits: 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1
Bits: 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1
The fact that the output is the same from both of these functions tells you that your platform uses little-endian encoding, which means the most significant byte comes last.
The first two rows of the visual show how the first function walks your value, and the last row shows how the second function does.
However, on platforms that use big-endian encoding, the first function will fail and output the bits in the order shown in the third row here:
Bytes: 4 3 2 1
Bits: 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1 8 7 6 5 4 3 2 1
Bits: 8 7 6 5 4 3 2 1 16 15 14 13 12 11 10 9 24 23 22 21 20 19 18 17 32 31 30 29 28 27 26 25
In the printBits function, the uint32 pointer is assigned to a char pointer:
unsigned char *b = (unsigned char*) ptr;
Now, the outer loop starts at b[size-1], the highest-addressed byte. On a little-endian processor that is the most significant byte of the uint32 value, so the inner loop prints that byte in binary first, then moves on to the next most significant byte at b[size-2], and so on. Therefore this method prints the uint32 value MSB first.
As for printBits_2, you are using
unsigned *A
i.e. a pointer to an unsigned int. Its loop runs from bit 31 down to bit 0 and prints the uint32 value in binary, again MSB first.
I am new to Boost serialization, but this seems very strange to me.
I have a very simple class with two members
int number // always = 123
char buffer[?] // buffer with ? size
So sometimes I set the size to buffer[31] and then serialize the class:
22 serialization::archive 8 0 0 1 1 0 0 0 0 123 0 0 31 0 0 0 65 65
We can see the 123 and the 31, so no issue here; both are in decimal format.
Now I change buffer to buffer[1024], so I expected to see:
22 serialization::archive 8 0 0 1 1 0 0 0 0 123 0 0 1024 0 0 0 65 65 ---
but this is the actual outcome:
22 serialization::archive 8 0 0 1 1 0 0 0 0 123 0 0 0 4 0 0 65 65 65
Has Boost switched to hex for the buffer size only?
Notice the other value is still decimal.
So what happens if I switch number from 123 to 1024?
I would imagine 040?
22 serialization::archive 8 0 0 1 1 0 0 0 0 1024 0 0 0 4 0 0 65 65
If this is by design, why does the 31 not get converted to 1F? It's not consistent.
This causes problems in our split_free load function; we were doing this:
unsigned int size;
ar >> size;
but as you might guess, when this is 040, it truncates to zero :(
What is the recommended solution to this?
I was using Boost 1.45.0, but I tested this on Boost 1.56.0 and it is the same.
EDIT: a sample of the serialization function:
template<class Archive>
void save(Archive& ar, const MYCLASS& buffer, unsigned int /*version*/) {
ar << boost::serialization::make_array(reinterpret_cast<const unsigned char*>(buffer.begin()), buffer.length());
}
MYCLASS is just a wrapper around a char*, with the first element an unsigned int that keeps the length, approximating a UNICODE_STRING:
http://msdn.microsoft.com/en-gb/library/windows/desktop/aa380518(v=vs.85).aspx
The code is the same whether the length is 1024 or 31, so I would not have expected this to be a problem.
I don't think Boost "switched to hex". I honestly don't have much experience with this, but it looks like Boost is serializing the length as an array of bytes, each of which can only hold numbers from 0 through 255. 1024 would then be a byte with the value 4 followed by a byte with the value 0 (4 * 256 + 0 = 1024).
"why does the 31 not get converted to 1F ? its not consistent" - your assumptions are creating false inconsistencies. Stop assuming you can read the serialization archive format when actually you're just guessing.
If you want to know, trace the code. If not, just use the archive format.
If you want "human accessible form", consider the xml_oarchive.
Is there a way to stream a number into an unsigned char?
istringstream bytes( "13 14 15 16 17 18 19 20" );
unsigned char myChars[8];
for( int i = 0; i < 8 && !bytes.eof(); i++ )
{
bytes >> myChars[i];
cout << unsigned( myChars[i] ) << endl;
}
This code currently outputs the ASCII values of the first 8 non-space characters:
49 51 49 52 49 53 49 54
But what I want is the numerical value of each token:
13
14
15
16
17
18
19
20
You are reading one char at a time, which means you get '1', '3', skip the space, '1', '4', skip the space, etc.
To read the values as NUMBERS, you need to use an integer type as a temporary:
unsigned short s;
bytes >> s;
myChars[i] = s;
Now the stream will read an integer value, e.g. 13 or 14, and store it in s. Then you convert it to an unsigned char with myChars[i] = s;.
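Putting it all together, a complete version of your loop might look like this:
#include <iostream>
#include <sstream>
int main()
{
    std::istringstream bytes("13 14 15 16 17 18 19 20");
    unsigned char myChars[8];
    unsigned short s; // temporary so operator>> parses a number, not one character
    for (int i = 0; i < 8 && bytes >> s; i++)
    {
        myChars[i] = static_cast<unsigned char>(s);
        std::cout << unsigned(myChars[i]) << '\n';
    }
}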
So there is a lot of error checking that you'll bypass by doing this without a temporary. For example: does each number being assigned fit in a byte, and are there more numbers in bytes than elements in myChars? But presuming you've already dealt with those, you can just use an istream_iterator<unsigned short>:
copy(istream_iterator<unsigned short>{ bytes }, istream_iterator<unsigned short>{}, begin(myChars));
An additional note here: a char[] typically contains a null-terminated string. Presuming that's not what you intend, it would be gracious of you to indicate to the reader that's not how you're using it. C++11 gives you int8_t/uint8_t to do just that. Using something like uint8_t myChars[8] would make your code more readable.