I ask because I am just curious.
When rx_bytes is of unsigned long long type,
its acceptable range is 0 ~ 18,446,744,073,709,551,615.
What happens if it exceeds that value?
Thank you!
If it exceeds that maximum, the counter wraps around and starts again from zero.
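As a minimal sketch (nothing here is specific to rx_bytes): unsigned overflow is well defined in C and C++, the value is simply reduced modulo 2^64 for a 64-bit type.

#include <iostream>

int main()
{
    unsigned long long counter = 18446744073709551615ULL; // maximum 64-bit value
    std::cout << counter << std::endl;   // 18446744073709551615
    ++counter;                           // well-defined wrap-around, modulo 2^64
    std::cout << counter << std::endl;   // 0
    return 0;
}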
I am compiling several smaller files into one big file.
I am trying to make each small file begin at a certain granularity, in my case 4096 bytes.
Therefore I am filling the gap between the files.
To do that I used:
// Have a look at the current file size
unsigned long iStart = ftell(outfile);
// Calculate how many bytes we have to add to fill the gap to fulfill the granularity
unsigned long iBytesToWrite = iStart % 4096;
// Write some empty bytes to fill the gap
vector<unsigned char> nBytes;
nBytes.resize(iBytesToWrite + 1);
fwrite(&nBytes[0], iBytesToWrite, 1, outfile);
// Now have a look at the file size again
iStart = ftell(outfile);
// And check granularity
unsigned long iCheck = iStart % 4096;
if (iCheck != 0)
{
    DebugBreak();
}
However, iCheck ends up as
iCheck = 3503
I expected it to be 0.
Does anybody see my mistake?
iStart % 4096 is the number of bytes since the previous 4k-boundary. You want the number of bytes until the next 4k-boundary, which is (4096 - iStart % 4096) % 4096.
You could replace the outer modulo operator with an if, since its only purpose is to correct 4096 to 0 and leave all other values untouched. That would be worthwhile if the divisor were, say, a prime. But since it is 4096, a power of 2, the compiler will implement the modulo operation with a bit mask (at least, provided that iStart is unsigned), so the expression above will probably be more efficient.
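A rough sketch of the corrected padding step, reusing the variable names and includes from the question (outfile is assumed to be the already-open output stream):

// Bytes needed to reach the next 4096-byte boundary (0 if already aligned).
unsigned long iStart = ftell(outfile);
unsigned long iBytesToWrite = (4096 - iStart % 4096) % 4096;

if (iBytesToWrite > 0)
{
    std::vector<unsigned char> nBytes(iBytesToWrite, 0); // zero-filled padding
    fwrite(&nBytes[0], 1, iBytesToWrite, outfile);
}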
By the way, you're allowed to fseek a file to a position beyond the end, and the file will be filled with NUL bytes. So you actually don't have to do all that work yourself:
The fseek() function shall allow the file-position indicator to be set beyond the end of existing data in the file. If data is later written at this point, subsequent reads of data in the gap shall return bytes with the value 0 until data is actually written into the gap.
(Posix 2008)
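A hedged sketch of that approach, assuming a POSIX-conforming stdio implementation and the same already-open outfile as above:

// Move the file-position indicator forward to the next 4096-byte boundary.
// The gap reads back as zero bytes once data is later written beyond it.
long iStart = ftell(outfile);
long iPad = (4096 - iStart % 4096) % 4096;
fseek(outfile, iPad, SEEK_CUR);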
I am new to bit-manipulation tricks and I wrote a simple program to see the output of repeated single-bit left shifts on a single number, namely 2.
#include <iostream>

int main(int argc, char *argv[])
{
    int num = 2;
    do
    {
        std::cout << num << std::endl;
        num = num << 1; // Left shift by 1 bit.
    } while (num != 0);
    return 0;
}
The output of this is the following.
2
4
8
16
32
64
128
256
512
1024
2048
4096
8192
16384
32768
65536
131072
262144
524288
1048576
2097152
4194304
8388608
16777216
33554432
67108864
134217728
268435456
536870912
1073741824
-2147483648
Obviously, continuously shifting left by 1 bit will eventually result in zero, as it has done above, but why does the computer output a negative number at the very end before the loop terminates (once num turns zero)?
However, when I replace int num=2 with unsigned int num=2, I get the same output except
that the last number is this time displayed as positive, i.e. 2147483648 instead of -2147483648.
I am using the gcc compiler on Ubuntu Linux.
That's because int is a signed integer. In the two's-complement representation, the sign of the integer is determined by the upper-most bit.
Once you have shifted the 1 into the highest (sign) bit, it flips negative.
When you use unsigned, there's no sign bit.
0x80000000 = -2147483648 for a signed 32-bit integer.
0x80000000 = 2147483648 for an unsigned 32-bit integer.
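A small sketch printing that same bit pattern both ways (the conversion to the signed type is implementation-defined before C++20, but wraps as shown on common compilers such as GCC):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t bits = 0x80000000u;
    std::cout << static_cast<std::int32_t>(bits) << std::endl; // -2147483648
    std::cout << bits << std::endl;                            // 2147483648
    return 0;
}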
EDIT:
Note that strictly speaking, signed integer overflow is undefined behavior in C/C++. The behavior of GCC in this aspect is not completely consistent:
num = num << 1; or num <<= 1; usually behaves as described above.
num += num; or num *= 2; may actually go into an infinite loop on GCC.
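A sketch of the unsigned variant of the question's loop, whose wrap-around is well defined, so it is guaranteed to terminate once the bit is shifted out:

#include <iostream>

int main()
{
    unsigned int num = 2; // unsigned: the shift wraps modulo 2^32, no undefined behavior
    do
    {
        std::cout << num << std::endl;
        num = num << 1; // last printed value is 2147483648, then num becomes 0
    } while (num != 0);
    return 0;
}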
Good question! The answer is rather simple though.
The maximum int value is 2^31 - 1. The 31 (not 32) is there for a reason: the top bit of the integer is used to determine whether the number is positive or negative.
If you keep shifting the bit to the left, you'll eventually hit this bit and it turns negative.
More information about this: http://en.wikipedia.org/wiki/Signed_number_representations
As soon as the bit reaches the sign bit of the signed type (the most significant bit), the value turns negative.
This is kind of a curiosity.
I'm studying C++. I was asked to reproduce an infinite loop, for example one that prints a series of powers:
#include <iostream>

int main()
{
    int powerOfTwo = 1;
    while (true)
    {
        powerOfTwo *= 2;
        std::cout << powerOfTwo << std::endl;
    }
}
The result kind of troubled me. With the Python interpreter, for example, I used to get a genuinely infinite loop that printed a new power of two on each iteration (until the IDE stopped it for exceeding the iteration limit, of course). With this C++ program I instead get a series of 0s. But if I change it to a finite loop, that is, if I only change the condition to:
(powerOfTwo <= 100)
the code works well, printing 2, 4, 8, 16, ..., 128.
So my question is: why does an infinite loop in C++ work this way? Why does it seem not to evaluate the while body at all?
Edit: I'm using Code::Blocks and compiling with g++.
In the infinite loop case you see 0 because the int overflows after 32 iterations to 0 and 0*2 == 0.
Look at the first few lines of output. http://ideone.com/zESrn
2
4
8
16
32
64
128
256
512
1024
2048
4096
8192
16384
32768
65536
131072
262144
524288
1048576
2097152
4194304
8388608
16777216
33554432
67108864
134217728
268435456
536870912
1073741824
-2147483648
0
0
0
In Python, integers can hold an arbitrary number of digits. C++ does not work this way; its integers have only limited precision (normally 32 bits, but this depends on the platform). Multiplication by 2 is implemented by shifting the integer one bit to the left. What is happening is that you initially have only the first bit in the integer set:
powerOfTwo = 1; // 0x00000001 = 0b00000000000000000000000000000001
After your loop iterates 31 times, the bit will have shifted to the very last position in the integer.
powerOfTwo = -2147483648; // 0x80000000 = 0b10000000000000000000000000000000
The next multiplication by two, the bit is shifted all the way out of the integer (since it has limited precision), and you end up with zero.
powerOfTwo = 0; // 0x00000000 = 0b00000000000000000000000000000000
From then on, you are stuck, since 0 * 2 is always 0. If you watch your program in "slow motion", you would see an initial burst of powers of 2, followed by an infinite loop of zeroes.
In Python, on the other hand, your code would work as expected - Python integers can expand to hold any arbitrary number of digits, so your single set bit will never "shift off the end" of the integer. The number will simply keep expanding so that the bit is never lost, and you will never wrap back around and get trapped at zero.
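As a minimal sketch of one way to avoid getting trapped at zero (not the questioner's code): use an unsigned type, whose wrap is well defined, and stop before the multiplication would exceed the representable range.

#include <iostream>
#include <limits>

int main()
{
    unsigned long long powerOfTwo = 1;
    // Stop once doubling again would exceed the maximum representable value.
    while (powerOfTwo <= std::numeric_limits<unsigned long long>::max() / 2)
    {
        powerOfTwo *= 2;
        std::cout << powerOfTwo << std::endl;
    }
    return 0;
}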
Actually it prints powers of two until powerOfTwo overflows and becomes 0. Then 0 * 2 = 0, and so on. http://ideone.com/XUuHS
In C++ the int has a limited size, so the program keeps computing even after the error (the overflow) occurs,
and the while (true) condition keeps the loop running.
In C++ you will cause an overflow pretty soon; your int variable won't be able to handle big numbers.
int: 4 bytes, signed, can handle the range -2,147,483,648 to 2,147,483,647.
So as #freerider said, your compiler is maybe optimizing the code for you.
I guess you know the data-type concepts in C and C++: you are declaring powerOfTwo as an integer,
so the range of int applies accordingly. If you want a continuous loop, you can use char as the data type and, by using data conversion, get an infinite loop for your function.
Carefully examine the output of the program. You don't really get an infinite series of zeroes. You get thirty-one numbers, followed by an infinite series of zeroes.
The thirty-one numbers are the powers of two from 2^1 through 2^31 (the last of which wraps around and is displayed as -2147483648):
2
4
8
...
(2 raised to the 30th)
(2 raised to the 31st, displayed as -2147483648)
0
0
0
The problem is how C represents numbers, as finite quantities. Since your mathematical quantity is no longer representable in the C int, C puts some other number in its place. In particular, it puts the true value modulo 2^32. But 2^32 mod 2^32 is zero, so there you are.
I am trying to represent 32768 using 2 bytes. For the high byte, do I use the same values as for the low byte and they get interpreted differently, or do I put the actual values? So would I put something like
32768 0, or 256 0? Or neither of those? Any help is appreciated.
In hexadecimal, your number is 0x8000, which is a high byte of 0x80 and a low byte of 0x00.
To get the low byte from the input, use low=input & 0xff and to get the high byte, use high=(input>>8) & 0xff.
Get the input back from the low and high byes like so: input=low | (high<<8).
Make sure the integer types you use are big enough to store these numbers. On 16-bit systems, unsigned int/short or signed/unsigned long should be large enough.
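A small sketch of splitting and recombining, assuming an unsigned 16-bit value:

#include <cstdint>
#include <cstdio>

int main()
{
    std::uint16_t input = 32768;              // 0x8000
    std::uint8_t low   = input & 0xFF;        // 0x00 = 0
    std::uint8_t high  = (input >> 8) & 0xFF; // 0x80 = 128
    std::uint16_t back = low | (high << 8);   // reassembled: 32768 again

    std::printf("high=%u low=%u back=%u\n",
                (unsigned)high, (unsigned)low, (unsigned)back);
    return 0;
}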
Bytes can only contain values from 0 to 255, inclusive. 32768 is 0x8000, so the high byte is 128 and the low byte is 0.
Try this function.
Pass your high byte and low byte to it, and it returns the combined value as a WORD.
WORD MAKE_WORD(const BYTE Byte_hi, const BYTE Byte_lo)
{
    return (Byte_hi << 8) | (Byte_lo & 0x00FF);
}
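For example (assuming the Windows WORD and BYTE typedefs, e.g. from <windows.h>):

WORD w = MAKE_WORD(0x80, 0x00); // 32768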
Pointers can do this easily; they are MUCH FASTER than shifts and require no processor math.
Check this answer
BUT:
If I understood your problem, you need to store values up to 32768 in 2 bytes, so you need 2 unsigned ints, or 1 unsigned long.
Just change int to long and char to int, and you're good to go.
32768 is 0x8000, so you would put 0x80 (128) in your high byte and 0 in your low byte.
That's assuming unsigned values, of course. 32768 isn't actually a legal value for a signed 16-bit value.
32768 is 0x8000; stored on a little-endian platform, the bytes appear in memory as 00 80. The "high" (second in memory, in this case) byte contains 128, and the "low" one 0.
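A quick sketch that inspects the in-memory byte order (the output shown assumes a little-endian machine):

#include <cstdint>
#include <cstdio>

int main()
{
    std::uint16_t value = 32768; // 0x8000
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&value);
    // On a little-endian machine this prints "00 80": the low byte comes first.
    std::printf("%02X %02X\n", (unsigned)p[0], (unsigned)p[1]);
    return 0;
}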
Reading from a pipe:
unsigned int sample_in = 0;           // 4 bytes - 32 bits, right?
unsigned int len = sizeof(sample_in); // = 4 in debugger
while (len > 0)
{
    if (0 == ReadFile(hRead,
                      &sample_in,
                      sizeof(sample_in),
                      &bytesRead,
                      0))
    {
        printf("ReadFile failed\n");
    }
    len -= bytesRead; // bytesRead always = 4, so far
}
In the debugger, first iteration through:
sample_in = 536739282 //36 bits?
How is this possible if sample_in is an unsigned int? I think I'm missing something very basic; go easy on me!
Thanks
Judging from your comment that says // 36 bits?, I suspect you're expecting the data to be sent in a BCD-style format, in other words one where each decimal digit takes up four bits, or two digits per byte. That approach would waste space, however: each digit would use four bits, but the values "10" to "15" would never be used.
In fact, integers are represented in binary internally, which allows a 32-bit number to represent 2^32 different values. The maximum comes out to 4,294,967,295 (unsigned), which happens to be rather larger than the number you saw in sample_in.
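As a tiny check, printing the value from the question in hexadecimal shows it uses only 29 of the 32 available bits:

#include <cstdio>

int main()
{
    unsigned int sample_in = 536739282u;
    // Prints "536739282 = 0x1FFDFDD2" - well within the 32-bit range.
    std::printf("%u = 0x%08X\n", sample_in, sample_in);
    return 0;
}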
536739282 is well within the maximum boundary of an unsigned 4 byte integer, which is upwards of 4 billion.
536,739,282 will easily fit in an unsigned int and in 32 bits. The cap on an unsigned int is 4,294,967,295, a bit under 4.3 billion.
unsigned int, your 4 byte unsigned integer, allows for values from 0 to 4,294,967,295. This will easily fit your value of 536,739,282. (This would, in fact, even fit in a standard signed int.)
For details on allowable ranges, see MSDN's Data Type Ranges page for C++.