'>>>' Java to C++ [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
What is >>> operation in C++
I need to convert this tiny part of Java to C++, but I don't know what '>>>' is... I searched but found no references, only material on ordinary shifts.
Does anyone have any ideas?
int x1;
x1 = text1[i1++] & 0xff;
text2[i2++] = (char) (x1 >>> 8);

The unsigned right shift (>>>) doesn't exist in C++, because it's not necessary -- C++ has distinct signed and unsigned integer types. If you want right shifts to be unsigned, make the variable that's being shifted unsigned:
unsigned int x1 = text1[i1++] & 0xff;
text2[i2++] = (char) (x1 >> 8);
That being said, the code you're translating is silly. The result of the second operation will always be zero in Java, so you could just as easily translate it to:
i1++;
text2[i2++] = 0;

The equivalent in C would be:
unsigned int x1;
x1 = text1[i1++] & 0xff;
text2[i2++] = (unsigned char)(x1 >> 8);
In C, right-shifting a negative signed value commonly drags copies of the sign bit in from the left (the exact behavior is implementation-defined); on an unsigned operand it acts like Java's unsigned shift.

The >>> operator in Java is a logical shift equivalent to the >> operator in C++ on unsigned types. It shifts zeros into vacant bit positions on the left. On signed types, it’s implementation-defined whether a right shift is logical or arithmetic, so you need to use an unsigned type from the start (or cast):
unsigned int x1;
x1 = text1[i1++] & 0xff;
text2[i2++] = static_cast<char>(x1 >> 8);
Of course, this code doesn’t seem to make much sense—x1 only has 8 non-zero bits because it was masked with 0xff, so right-shifting it by 8 bits results in zero.

Related

Bitwise or before casting to int32

I was messing about with arrays and noticed this. EG:
int32_t array[1];
int16_t value = -4000;
When I tried to write the value into the top and bottom half of the int32 array value,
array[0] = (value << 16) | value;
the compiler would cast the value into a 32 bit value first before the doing the bit shift and the bitwise OR. Thus, instead of 16 bit -4000 being written in the top and bottom halves, the top value will be -1 and the bottom will be -4000.
Is there a way to OR in the 16 bit value of -4000 so both halves are -4000? It's not really a huge problem. I am just curious to know if it can be done.
Sure thing, just undo the sign-extension:
array[0] = (value << 16) | (value & 0xFFFF);
Don't worry, the compiler will generate reasonable code for this; the mask typically costs nothing.
To avoid shifting a negative number:
array[0] = ((value & 0xFFFF) << 16) | (value & 0xFFFF);
Fortunately that extra & on the left (even more of a no-op than the one on the right) doesn't show up in the generated code.
Left shift on signed types is defined only in some cases. From the standard:
6.5.7/4 [...] If E1 has a signed type and nonnegative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
According to this definition, it seems what you have is undefined behaviour.
Use an unsigned value:
const uint16_t uvalue = value;
array[0] = (uvalue << 16) | uvalue;
Normally when faced with this kind of issue I first set the resultant value to zero, then bitwise assign the values in.
So the code would be:
int32_t array[1];
int16_t value = -4000;
array[0] = 0x0000FFFF;
array[0] &= value;
array[0] |= (value << 16);
Cast the 16 too. Otherwise the int type is contagious.
array[0] = (value << (int16_t)16) | value;
Edit: I don't have a compiler handy to test. Per the comment below, this may not be right, but it will get you in the right direction.

Bits shifted by bit shifting operators (<<, >>) in C, C++

Can we access the bits shifted out by the bit shifting operators (<<, >>) in C or C++?
For example:
23>>1
Can we access the last bit shifted out (1 in this case)?
No, the shift operators only give the value after shifting. You'll need to do other bitwise operations to extract the bits that are shifted out of the value; for example:
unsigned all_lost = value & ((1 << shift)-1); // all bits to be removed by shift
unsigned last_lost = (value >> (shift-1)) & 1; // last bit to be removed by shift
unsigned remaining = value >> shift; // lose those bits
By using 23>>1, the bit 0x01 is purged - you have no way of retrieving it after the bit shift.
That said, nothing's stopping you from checking for the bit before shifting:
int value = 23;
bool bit1 = value & 0x01;
int shifted = value >> 1;
You can access the bits before shifting, e.g.
value = 23; // start with some value
lsbits = value & 1; // extract the LSB
value >>= 1; // shift
It's worth noting that the MSVC compiler provides an intrinsic, _bittest, which can speed this operation up.

Mini questions on 'unsigned char' and 'shift' operation?

The question should be basic, but I'm surprised that I had some trouble with it just now. The first issue came up when I glanced at the 'C++ Primer' book, chapter 5.3 (The Bitwise Operators), where the author uses the code below to explain the shift operations:
unsigned char bits = 1; // '10011011' is the corresponding bit pattern
bits << 1; // left shift
My head spun a little when I looked at this: where is '10011011' coming from? Isn't '1' just '0x01'?
Another question comes from http://c-faq.com/strangeprob/ptralign.html, where the author tries to unpack a structure:
struct mystruct {
char c;
long int i32;
int i16;
} s;
using
unsigned char *p = buf;
s.c = *p++;
s.i32 = (long)*p++ << 24;
s.i32 |= (long)*p++ << 16;
s.i32 |= (unsigned)(*p++ << 8); // this line !
s.i32 |= *p++;
s.i16 = *p++ << 8;
s.i16 |= *p++;
My question is: p is a pointer to unsigned char (which is 8 bits), right? When building the higher bytes of s.i32 (bits 24-31 and 16-23), *p++ is converted to long (32 bits) before the left shift, so the shift does not lose any bits of *p++. But in
s.i32 |= (unsigned)(*p++ << 8);
*p++ is shifted first and then converted to unsigned int. Wouldn't all the bits of *p++ be lost during the shift?
Again, I realize I may be missing some C basics here. I hope someone can give a hand.
Thanks,
To answer your second question, performing any arithmetic operation on a char promotes it to a (possibly unsigned) int (including shifting and other bitwise operations), so the integer-size value is shifted, not the 8-bit char value.
Yes the bit pattern of 1 is 0x01. 10011011 is the bit pattern for decimal 155.
As for the shifts, you're missing something called the "integral promotions", which are performed in arithmetic and binary operations like shifts. In this case, there's an unsigned char operand (*p++) and an int operand (the constant), so the unsigned char operand is converted to int. The cast (whether to long or to unsigned int) is always performed after the promotion and the shift are performed. So no, in none of the shifts are all of the bits lost.

C++ How to combine two signed 8 Bit numbers to a 16 Bit short? Unexplainable results

I need to combine two signed 8-bit (__int8) values into a signed short (16-bit) value. It is important that the sign is not lost.
My code is:
unsigned short lsb = -13;
unsigned short msb = 1;
short combined = (msb << 8 )| lsb;
The result I get is -13. However, I expect it to be 499.
For the following examples, I get the correct results with the same code:
msb = -1; lsb = -6; combined = -6;
msb = 1; lsb = 89; combined = 345;
msb = -1; lsb = 13; combined = -243;
However, msb = 1; lsb = -84; combined = -84; where I would expect 428.
It seems that if the lsb is negative and the msb is positive, something goes wrong!
What is wrong with my code? How does the computer get to these unexpected results (Win7, 64 Bit and VS2008 C++)?
Your lsb in this case contains 0xfff3. When you OR it with 1 << 8 nothing changes because there is already a 1 in that bit position.
Try short combined = (msb << 8 ) | (lsb & 0xff);
Or using a union:
#include <iostream>
union Combine
{
short target;
char dest[ sizeof( short ) ];
};
int main()
{
Combine cc;
cc.dest[0] = -13, cc.dest[1] = 1;
std::cout << cc.target << std::endl;
}
It is possible that lsb is being automatically sign-extended to 16 bits. I notice you only have a problem when it is negative and msb is positive, and that is what you would expect to happen given the way you're using the OR operator. You're clearly doing something strange here, though. What are you actually trying to do?
The Raisonanse C compiler for STM8 (and possibly many other compilers) generates ugly code from classic C when writing 16-bit variables into 8-bit hardware registers.
Note: the STM8 is big-endian; for little-endian CPUs the code must be slightly modified. Read/write byte order is important too.
So, standard C code piece:
unsigned int ch1Sum;
...
TIM5_CCR1H = ch1Sum >> 8;
TIM5_CCR1L = ch1Sum;
Is being compiled to:
;TIM5_CCR1H = ch1Sum >> 8;
LDW X,ch1Sum
CLR A
RRWA X,A
LD A,XL
LD TIM5_CCR1,A
;TIM5_CCR1L = ch1Sum;
MOV TIM5_CCR1+1,ch1Sum+1
Too long, too slow.
My version:
unsigned int ch1Sum;
...
TIM5_CCR1H = ((u8*)&ch1Sum)[0];
TIM5_CCR1L = ch1Sum;
That compiles into an adequate two MOVs:
;TIM5_CCR1H = ((u8*)&ch1Sum)[0];
MOV TIM5_CCR1,ch1Sum
;TIM5_CCR1L = ch1Sum;
MOV TIM5_CCR1+1,ch1Sum+1
Opposite direction:
unsigned int uSonicRange;
...
((unsigned char *)&uSonicRange)[0] = TIM1_CCR2H;
((unsigned char *)&uSonicRange)[1] = TIM1_CCR2L;
instead of
unsigned int uSonicRange;
...
uSonicRange = TIM1_CCR2H << 8;
uSonicRange |= TIM1_CCR2L;
Some things you should know about the data types (un)signed short and char:
char is an 8-bit value, which is what you were looking for for lsb and msb. short is 16 bits in length.
You also should not store signed values in unsigned types unless you know what you are doing.
You can take a look at the two's complement. It describes the representation of negative values (for integers, not for floating-point values) in C/C++ and many other programming languages.
There are multiple versions of making your own two's complement:
int a;
// setting a
a = -a; // Clean version. Easier to understand and read. Use this one.
a = (~a)+1; // The arithmetical version. Does the same, but takes more steps.
// Don't use the last one unless you need it!
// It can be 'optimized away' by the compiler.
stdint.h (with inttypes.h) exists so your variables can have exact lengths. If you really need a variable to have a specific byte length, you should use it (and here you need it).
You should always use the data types that best fit your needs. Your code should therefore look like this:
signed char lsb; // signed 8-bit value
signed char msb; // signed 8-bit value
signed short combined = msb << 8 | (lsb & 0xFF); // signed 16-bit value
or like this:
#include <stdint.h>
int8_t lsb; // signed 8-bit value
int8_t msb; // signed 8-bit value
int16_t combined = msb << 8 | (lsb & 0xFF); // signed 16-bit value
With the last one, the compiler will use signed 8/16-bit values every time, regardless of the length of int on your platform. Wikipedia has a nice explanation of the int8_t and int16_t data types (and all the others).
btw: cppreference.com is useful for looking up the C standard and other things worth knowing about C/C++.
You wrote that you need to combine two 8-bit values, so why are you using unsigned short?
As Dan already said, lsb is automatically sign-extended to 16 bits. Try the following code:
uint8_t lsb = -13;
uint8_t msb = 1;
int16_t combined = (msb << 8) | lsb;
This gives you the expected result: 499.
If this is what you want:
msb: 1, lsb: -13, combined: 499
msb: -6, lsb: -1, combined: -1281
msb: 1, lsb: 89, combined: 345
msb: -1, lsb: 13, combined: -243
msb: 1, lsb: -84, combined: 428
Use this:
short combine(unsigned char msb, unsigned char lsb) {
return (msb<<8u)|lsb;
}
I don't understand why you would want msb -6 and lsb -1 to generate -6 though.

Arithmetic vs logical shift operation in C++

I have some code that stuffs in parameters of various length (u8, u16, u32) into a u64 with the left shift operator.
Then at various places in the code I need to get the original parameters back out of this big combined parameter.
I'm just wondering how, in the code, we should ensure that a logical right shift is used (and not an arithmetic one) when extracting the original parameters.
So the question is: are there any #defines or other ways to ensure and check that the compiler won't screw this up?
Here's the C++ code:
u32 x , y ,z;
u64 uniqID = 0;
uniqID = (s64) x << 54 |
(s64) y << 52 |
(s64) z << 32 |
uniqID; // the original uniqID value.
And later on, while getting the values back:
z= (u32) ((uniqID >> 32 ) & (0x0FFFFF)); //20 bits
y= (u32) ((uniqID >> (52 ) & 0x03)); //2 bits
x= (u32) ((uniqID >> (54) & 0x03F)); //6 bits
The general rule is that a logical shift is suitable for unsigned binary numbers, while an arithmetic shift is suitable for signed two's-complement numbers. It depends on your compiler (gcc etc.), not so much the language, but you can assume that the compiler will use a logical shift for unsigned numbers. So if you have an unsigned type, one would expect a logical shift.
You can always write your own method to check and do the shifting if you need portability between compilers. Or you can use inline asm and avoid any issues (but then you are tied to a platform).
In short, to be 100% correct, check your compiler's documentation.
This looks like C/C++, so just make sure uniqID is an unsigned integer type.
Alternatively, just cast it:
z = (u32) ( ((unsigned long long)uniqID >> (32) & (0x0FFFFF)); //20 bits
y = (u32) ( ((unsigned long long)uniqID >> (52) & 0x03)) ; //2 bits
x = (u32) ( ((unsigned long long)uniqID >> (54) & 0x03F)) ; //6 bits