What does >> mean in C++ code? [closed] - c++

static CBigNum bnProofOfWorkLimit(~uint256(0) >> 32);
That statement is filled with all sorts of magic. What exactly is it doing?

What does >> mean in C++ code?
For integer types, it's the binary right-shift operator: it takes the binary representation of its first operand and moves it to the right by the number of places given by the second operand. a >> b is roughly the same as a / pow(2, b).
That statement is filled with all sorts of magic. What exactly is it doing?
uint256 isn't a standard type or function; I'll assume it's a big-number type with 256 bits, with suitable operator overloads so that it acts like a standard numeric type. So uint256(0) is a 256-bit number with value zero.
~ is the bitwise NOT operator; it clears every set bit and sets every clear bit. So ~uint256(0) will contain 256 bits, all set.
Finally, the shift moves those bits 32 bits to the right. So the top 32 bits will all be zero, and the remaining 224 bits will be set.
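Assuming uint256 behaves like a built-in unsigned type (as this answer does), the same pattern can be shown on a smaller scale with a standard 64-bit integer:
#include <cstdint>
#include <cstdio>

int main() {
    // All 64 bits set, then the top 8 cleared by the right shift.
    std::uint64_t limit = ~std::uint64_t(0) >> 8;
    std::printf("%016llX\n", static_cast<unsigned long long>(limit));
    // prints 00FFFFFFFFFFFFFF
}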

Assuming uint256 is a 256 bit unsigned integer type and the operators are defined as for the built-in types, this will:
initialize a 256 bit unsigned integer with 0
bitwise invert it (operator ~)
right-shift it by 32 bits (operator >>)
See Wikipedia on C / C++ operators

My guess is a shift. It's shifting the bits to the right, possibly by 32 bits. We can't say for sure without seeing the uint256 class, because of C++ operator overloading.
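To illustrate the point about overloading, here is a made-up uint256_like type (the name and layout are invented for this sketch; the real uint256 may be implemented quite differently) whose operator>> shifts bits across its 64-bit words:
#include <cstdint>

struct uint256_like {
    std::uint64_t words[4] = {};   // words[0] is the least significant

    uint256_like operator>>(unsigned shift) const {
        uint256_like r;
        unsigned wordShift = shift / 64;
        unsigned bitShift  = shift % 64;
        for (unsigned i = 0; i + wordShift < 4; ++i) {
            r.words[i] = words[i + wordShift] >> bitShift;
            // Pull in the bits that cross a 64-bit word boundary.
            if (bitShift != 0 && i + wordShift + 1 < 4)
                r.words[i] |= words[i + wordShift + 1] << (64 - bitShift);
        }
        return r;
    }
};
A different class could give >> an entirely unrelated meaning (streams use it for input, for example), which is why the class definition matters.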

Related

What is masking? (Bitwise operators) [duplicate]

I ran across an online explanation of what masking is with respect to bitwise operators, but was unable to fully understand it because of symbols I'm not familiar with. The demonstration is here:
https://msdn.microsoft.com/en-us/library/8xftzc7e(v=vs.100).aspx
The specific line that confuses me is "// Actual shift is 10 & (8-1) = 2". I get that 10 is the number of bits that we want to shift left by, but what does "&" mean in this context, and why do we compute 8 - 1?
Thanks in advance.
The answer is in the second paragraph of the Remarks section of your own link:
The << operator masks expression2 to avoid shifting expression1 by too much. Otherwise, if the shift amount exceeded the number of bits in the data type of expression1, all the original bits would be shifted away to give a trivial result. To ensure that each shift leaves at least one of the original bits, the shift operators use the following formula to calculate the actual shift amount: mask expression2 (using the bitwise AND operator) with one less than the number of bits in expression1.
The number of bits in a byte is 8, so the mask is 8 - 1 = 7, or 0b111. Apply this to 10 (0b1010) and you arrive at 0b010, i.e. 2. That's your shift amount.
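As a small illustration (the variable names are invented for this sketch), the masking rule from the quote can be written out directly:
#include <cstdio>

int main() {
    unsigned count = 10;                    // requested shift amount
    unsigned bits  = 8;                     // width of the type being shifted
    unsigned actual = count & (bits - 1);   // 10 & 7 == 0b1010 & 0b0111 == 2

    unsigned char value = 0x01;
    std::printf("actual shift = %u, result = 0x%02X\n",
                actual, (unsigned)(value << actual) & 0xFF);
    // prints: actual shift = 2, result = 0x04
}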

Increment pointer bit by bit [closed]

I know that pointers increment by the number of bytes that sizeof(type_i_am_using) returns. However, is there any way to make a pointer increment bit by bit?
However, is there any way to make a pointer increment bit by bit?
If you meant "byte by byte": No, because there is something called alignment. Addresses that are not aligned cannot be addresses of valid objects, hence a pointer containing an unaligned address is invalid. Most operations with invalid pointers invoke undefined behavior. If you want to e.g. access array subobjects of a standard-layout class where that array is the first member, cast the pointer to the element type of the array and work from there. There is no direct point in what you describe.
If you meant literally "bit by bit": There are well-known methods of iterating through all bits in an object representation using a simple for loop.
Most computer architectures don't let you address individual bits. If you need to, say, iterate through a sequence of bits, you need to iterate over the bytes instead (using a char *, or a pointer to a larger, unsigned integral type) and extract bits through bit shifting and bit mask operations. (value >> x) & 1 will extract the bit at index x from the right; value |= 1 << x will set it to 1, and value &= ~(1 << x) will set it to 0.
Note that vector<bool> is specialized to pack its values into individual bits.
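A small sketch of those shift-and-mask idioms from the answer above (assuming CHAR_BIT is 8 for the loop bound):
#include <cstdio>

int main() {
    unsigned char value = 0xB2;            // 0b10110010

    // Walk the bits of one byte from least to most significant.
    for (unsigned x = 0; x < 8; ++x) {
        unsigned bit = (value >> x) & 1;   // extract the bit at index x
        std::printf("bit %u = %u\n", x, bit);
    }

    value |= 1u << 0;                      // set bit 0
    value &= ~(1u << 7);                   // clear bit 7
    std::printf("after set/clear: 0x%02X\n", (unsigned)value);   // prints 0x33
}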
A pointer can't be incremented bit by bit because a char is the smallest amount of addressable memory. This is mandated in the language specification. If you need to start looking inside individual bits then you most likely will want to use the bit shifting/masking operations.
For example, to look inside a character you might do something like this:
bool get_bit_n(unsigned int n, char x) {
    return ((1 << n) & x) != 0;   // true if bit n of x is set
}
Also you might want to look into std::bitset.
No. A char is the smallest thing the machine can address. You can, however, traverse all the bits of an object using bit operations. In C++ you can also use std::bitset, the std::vector<bool> specialization, or bit fields.

How to perform sum between double with bitwise operations [closed]

I'd like to know how floating-point addition works.
How can I sum two double (or float) numbers using bitwise operations?
Short answer: if you need to ask, you are not going to implement floating-point addition from bitwise operators. It is completely possible, but there are a number of subtle points that you would need to have asked about first. You could start by implementing a double → float conversion function; it is simpler but would introduce you to many of the concepts. You could also do double → nearest integer as an exercise.
Nevertheless, here is the naive version of addition:
Use large arrays of bits for each of the two operands (254 + 23 for float, 2046 + 52 for double). Place the significand at the right place in the array according to the exponent. Assuming the arguments are both normalized, do not forget to place the implicit leading 1. Add the two arrays of bits with the usual rules of binary addition. Then convert the resulting array to floating-point format: first look for the leftmost 1; the position of this leftmost 1 determines the exponent. The significand of the result starts right after this leading 1 and is respectively 23- or 52-bit wide. The bits after that determine whether the value should be rounded up or down.
Although this is the naive version, it is already quite complicated.
The non-naive version does not use 2100-bit wide arrays, but takes advantage of a couple of “guard bits” instead (see section “on rounding” in this document).
The additional subtleties include:
The sign bits of the arguments can mean that the magnitudes should be subtracted for an addition, or added for a subtraction.
One of the arguments can be NaN. Then the result is NaN.
One of the arguments can be an infinity. If the other argument is finite or the same infinity, the result is the same infinity. Otherwise, the result is NaN.
One of the arguments can be a denormalized number. In this case there is no leading 1 when transferring the number to the array of bits for addition.
The result of the addition can be an infinity: depending on the details of the implementation, this would be recognized as an exponent too large to fit the format, or an overflow during the addition of the binary arrays (the overflow can also occur during the rounding step).
The result of the addition can be a denormalized number. This is recognized as the absence of a leading 1 in the first 2046 bits of the array of bits. In this case the last 52 bits of the array should be transferred to the significand of the result, and the exponent should be set to zero, to indicate a denormalized result.
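As a concrete starting point for the decomposition step described above, here is a minimal sketch that pulls a float apart into sign, exponent, and significand, restoring the implicit leading 1 for normalized values (the function name is made up for this example):
#include <cstdint>
#include <cstdio>
#include <cstring>

// Decompose an IEEE-754 single-precision float into its bit fields.
// This is only the first step of the naive addition scheme above;
// it does not perform the addition itself.
void decompose(float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);          // reinterpret the bytes

    unsigned sign     = bits >> 31;               // 1 bit
    unsigned exponent = (bits >> 23) & 0xFF;      // 8 bits, biased by 127
    unsigned fraction = bits & 0x7FFFFF;          // 23 bits

    // Normalized numbers have an implicit leading 1; denormals do not.
    unsigned significand = (exponent != 0) ? (fraction | 0x800000) : fraction;

    std::printf("sign=%u exponent=%u significand=0x%06X\n",
                sign, exponent, significand);
}

int main() {
    decompose(1.5f);     // sign=0 exponent=127 significand=0xC00000
    decompose(-0.75f);   // sign=1 exponent=126 significand=0xC00000
}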

How to determine the number of bits in int [closed]

This is what I tried:
int i = -1, size = 1;
while (i >>= 1)
    size++;
printf("%d", size);
The goal is to determine the size of int without using the sizeof operator.
The above loop turns out to be infinite. Is there a way to fix it so it does what it is intended to do?
Just use unsigned for i, rather than int. The two are guaranteed to have the same size, and right shift of a signed integer is implementation-defined (it will usually shift in the sign bit). And don't forget to divide the result by CHAR_BIT (which is not guaranteed to be 8) if you want the size in bytes.
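A sketch of that fix, using an unsigned counter and CHAR_BIT:
#include <climits>
#include <cstdio>

int main() {
    unsigned int i = -1;   // converts to UINT_MAX: all bits set
    int bits = 1;

    while (i >>= 1)        // logical shift: zeros come in from the left
        bits++;

    std::printf("%d bits, %d bytes\n", bits, bits / CHAR_BIT);
}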
You have chosen a negative number for right-shifting.
When a negative number is right-shifted, the vacated positions are filled with the sign bit, 1 (or not, depending on the implementation), so your value can never become 0 (false). That is precisely the infinite loop you are complaining about.
Your loop is indeed infinite.
Start from i = 1 and shift it left until you reach i == 0, counting the shifts as you go; the count is the number of bits.
Edit: this will work for signed as well as unsigned integers alike.
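The left-shift approach from this answer, sketched with an unsigned counter (shifting a 1 into the sign bit of a signed int is technically undefined, so unsigned is the safer choice):
#include <cstdio>

int main() {
    unsigned int i = 1;
    int bits = 0;

    while (i != 0) {   // the single set bit eventually falls off the top
        i <<= 1;
        ++bits;
    }

    std::printf("unsigned int has %d bits\n", bits);
}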

Determining signed overflow (x86 Overflow / Auxiliary Flags)

First of all: I really tried to find a matching answer for this, but I just wasn't successful.
I am currently working on a little 8086 emulator. What I still haven't figured out is how the Overflow and Auxiliary flags are best calculated for addition and subtraction.
As far as I know, the Auxiliary Flag works like the Overflow Flag but only considers 4 bits, while the Overflow Flag considers the whole operand size. So if I am adding two signed 1-byte integers, OF would check for 1-byte signed overflow while the Auxiliary Flag would only look at the lower 4 bits of the two operands.
Are there any generic algorithms or "magic bitwise operations" for calculating the signed overflow for 4-, 8- and 16-bit addition and subtraction? (I don't mind what language they are written in.)
Remark: I need to store the values in unsigned variables internally, so I can only work with unsigned values and bitwise calculations.
Might one solution that works for both addition and subtraction be to check whether the Sign Flag (or bit 4, for the Auxiliary Flag) has changed after the calculation is done?
Thanks in advance!
The Overflow Flag indicates whether the result is too large or too small to fit in the destination operand, whatever size that operand has.
The Auxiliary Flag indicates whether the result is too large or too small to fit in four bits, i.e. whether there was a carry or borrow out of the low nibble.
Edit: for how to determine AF, see: Explain how the AF flag works in an x86 instructions?
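For illustration, here is one way OF and AF can be derived for 8-bit addition and subtraction using only unsigned values and bitwise operations, along the lines asked for above (the Flags struct and the function names are invented for this sketch):
#include <cstdint>
#include <cstdio>

struct Flags { bool of, af; };

Flags flags_after_add8(std::uint8_t a, std::uint8_t b) {
    std::uint8_t r = static_cast<std::uint8_t>(a + b);

    // Signed overflow: both operands have the same sign and the
    // result's sign differs from it.
    bool of = ((a ^ r) & (b ^ r) & 0x80) != 0;

    // Auxiliary carry: carry out of bit 3. Since r = a ^ b ^ carries,
    // the carry vector is a ^ b ^ r, and AF is its bit 4.
    bool af = ((a ^ b ^ r) & 0x10) != 0;

    return {of, af};
}

Flags flags_after_sub8(std::uint8_t a, std::uint8_t b) {
    std::uint8_t r = static_cast<std::uint8_t>(a - b);

    // Signed overflow for subtraction: operands have different signs and
    // the result's sign differs from the minuend's.
    bool of = ((a ^ b) & (a ^ r) & 0x80) != 0;

    // Borrow out of bit 3, same XOR trick as for addition.
    bool af = ((a ^ b ^ r) & 0x10) != 0;

    return {of, af};
}

int main() {
    Flags f = flags_after_add8(0x7F, 0x01);   // 127 + 1 overflows signed 8-bit
    std::printf("add: OF=%d AF=%d\n", f.of, f.af);   // OF=1 AF=1

    f = flags_after_sub8(0x80, 0x01);         // -128 - 1 overflows signed 8-bit
    std::printf("sub: OF=%d AF=%d\n", f.of, f.af);   // OF=1 AF=1
}
The same XOR patterns work for 16-bit operands by changing the sign-bit mask to 0x8000; the AF mask stays 0x10 because AF is always the carry out of bit 3.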