I want to construct the unsigned integer (4 bytes) represented by the binary string 10101010101010101010101010101010 (that's 16 1s and 16 0s).
Is there an efficient way to construct this value using bit manipulation? I could do it in a for loop, but I feel that's inefficient.
Any language works for me; I personally know C and C++.
Just read bits 4 by 4 and treat them as hex:
your number is just 0xAAAAAAAA.
With a fixed number of bits, this is a bit obvious...
But for a variable number of bits, such a pattern can be obtained by integer division, as demonstrated here: http://smallissimo.blogspot.fr/2011/07/revisiting-sieve-of-erathostenes.html
EDIT: More detailed explanation below.
The idea is that :
2r01 * 2r11 -> 2r11
2r0101 * 2r11 -> 2r1111
2r010101 * 2r11 -> 2r111111
So inversely, we can apply an exact division:
2r111111 / 2r11 -> 2r010101
If we want 2r101010 rather than 2r010101, just add 1 more bit (the division is then inexact; I write // for the truncating quotient, as in Smalltalk):
2r1111111 // 2r11 -> 2r101010
2r1111111 can be constructed easily: it is a power of 2 minus 1, 2^7-1, which can also be obtained by a bit shift, (1<<7)-1.
In your case, your constant is ((1<<33)-1)//3, or if you write it in C/C++, ((1ULL<<33)-1)/3. (In Smalltalk we don't care about integer length, since integers have arbitrary precision, but in most languages we must make sure the operands fit in a native integer type.)
Note that the division also works for bit patterns with a longer repeating block, like 2r100100100: divide by 2r111. In general, for a repeating block of length p, divide by 2^p-1, that is, (1<<p)-1.
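The trick above can be sketched in C++ like this (the helper name is my own, not from the original post):

```cpp
#include <cstdint>

// Sketch of the division trick: (width + 1) ones, divided by binary 11,
// yields the alternating pattern 1010...10 of `width` bits.
// Requires width <= 62 so the shift stays within 64 bits.
uint64_t alternating_bits(unsigned width) {
    return ((1ULL << (width + 1)) - 1) / 3;  // truncating division
}
```

For instance, `alternating_bits(32)` produces the questioner's constant 0xAAAAAAAA, and `alternating_bits(6)` produces 2r101010 (decimal 42).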
I found this logic in some code and don't understand the reason behind it; why use it instead of assigning a normal int?
(It's a character controller in a 3D environment.)
// assumes we're not blocked
int blocked = 0x00;
if ( its_a_floor )
blocked |= 0x01;
if ( its_a_wall )
blocked |= 0x02;
0x00 is a "normal int". We are used to base-10 representations, but other than our having 10 fingers in total, base 10 is not special. When you write an integer literal in code you can choose between decimal, octal, hexadecimal and binary representation (see here). Don't confuse a value with its representation: 0b01 is the same integer as 1. There is literally no difference in the value.
As a fun fact and to illustrate the above, consider that 0 is actually not a decimal literal. It is an octal literal. It doesn't really matter, because 0 has the same representation in any base.
As the code is using bitwise operators, it would be most convenient to use binary literals. For example, you can see easily that 0b0101 | 0b1010 equals 0b1111 and that 0b0101 & 0b1010 equals 0b0000. This isn't nearly as obvious in base-10 representation.
However, the code you posted does not use binary literals, but rather hexadecimal literals. This might be because binary literals have only been standard C++ since C++14, or because programmers who work with bitwise operators are so used to hexadecimal that they prefer it over binary.
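For illustration, here is the same flag logic rewritten with C++14 binary literals; `compute_blocked` is a made-up helper name, and the values are identical to the hexadecimal version:

```cpp
// Sketch using C++14 binary literals; compute_blocked is hypothetical.
int compute_blocked(bool its_a_floor, bool its_a_wall) {
    int blocked = 0b00;       // identical value to 0x00 and plain 0
    if (its_a_floor)
        blocked |= 0b01;      // identical value to 0x01
    if (its_a_wall)
        blocked |= 0b10;      // identical value to 0x02
    return blocked;
}
```

With binary literals it is immediately visible which bit each flag occupies.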
Can anyone explain in layman's terms how it's possible for a float to hold a number as large as 3.4E38 when its size is only 4 bytes?
Since an int also has 32 bits total, its largest value would be 2^30 + 2^29 + ... + 2^0, which equals 2147483647. So why can a float hold such a large number when an int can hold only 2147483647, when both are practically the same size of 4 bytes?
I really appreciate your help in advance.
Your question hints at the answer:
How is it possible for float to hold a number as large as 3.4E38
How can "3.4E38" use so few characters to refer to such a big value? Well, it keeps a value in a fixed range (1 <= 3.4 < 10) on the left of the E, with some smallish number of significant digits, and keeps an exponent, "38", on the right, encoding a multiplication by 10^38; that too is easy enough to store.
float values do the same thing, albeit in binary: 23 bits to store a mantissa (that's the "left" value) and 8 bits to store an exponent; the other bit is for positive/negative sign.
Fuller details at wikipedia.
I also heartily recommend this online "calculator" which lets you type in a value like 3.4E38, see the binary representation (01111111011111111100100110011110), and a more accurate approximation of the value stored; you can even toggle the bits and see how they affect the value stored.
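A small sketch of pulling those 1 + 8 + 23 bit fields apart yourself (the helper name is mine, not from the answer):

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical helper: split a float into its sign, biased exponent,
// and stored mantissa fields (the leading 1 is implicit, not stored).
void float_fields(float f, unsigned &sign, unsigned &exponent, unsigned &mantissa) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // well-defined way to view the bits
    sign     = bits >> 31;                // 1 bit
    exponent = (bits >> 23) & 0xFF;       // 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF;           // 23 bits
}
```

For 1.0f this yields sign 0, exponent 127 (i.e. 2^0) and mantissa 0; for 3.4E38f the exponent field is 254, encoding a multiplication by 2^127.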
That's because float sacrifices precision to store large numbers. See this answer on Gamedev StackExchange site for a brief explanation.
I'd like to know how floating-point addition works.
How can I sum two double (or float) numbers using bitwise operations?
Short answer: if you need to ask, you are not going to implement floating-point addition from bitwise operators. It is completely possible but there are a number of subtle points that you would need to have asked about before. You could start by implementing a double → float conversion function, it is simpler but would introduce you to many of the concepts. You could also do double → nearest integer as an exercise.
Nevertheless, here is the naive version of addition:
Use large arrays of bits for each of the two operands (254 + 23 bits for float, 2046 + 52 for double):
1. Place each significand at the right position in its array according to its exponent. Assuming both arguments are normalized, do not forget to place the implicit leading 1.
2. Add the two arrays of bits with the usual rules of binary addition.
3. Convert the resulting array back to floating-point format: first look for the leftmost 1; the position of this leftmost 1 determines the exponent. The significand of the result starts right after this leading 1 and is respectively 23 or 52 bits wide. The bits after that determine whether the value should be rounded up or down.
Although this is the naive version, it is already quite complicated.
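A compressed sketch of the naive version in C++, using a 64-bit accumulator instead of the big bit arrays. It handles only positive, normalized inputs with nearby exponents (no signs, NaNs, infinities, or denormals, and the alignment shift simply truncates instead of rounding):

```cpp
#include <cstdint>
#include <cstring>

// Naive addition of two positive, normalized floats; a sketch only.
float naive_fadd(float a, float b) {
    uint32_t ua, ub;
    std::memcpy(&ua, &a, 4);
    std::memcpy(&ub, &b, 4);
    int ea = (ua >> 23) & 0xFF, eb = (ub >> 23) & 0xFF;
    // restore the implicit leading 1 of each significand
    uint64_t ma = (ua & 0x7FFFFF) | 0x800000;
    uint64_t mb = (ub & 0x7FFFFF) | 0x800000;
    // align the operand with the smaller exponent (truncating; shifts
    // of 64 or more are not handled in this sketch)
    if (ea < eb) { ma >>= (eb - ea); ea = eb; }
    else         { mb >>= (ea - eb); }
    uint64_t m = ma + mb;
    // renormalize: the position of the leftmost 1 determines the exponent
    while (m >= (1ULL << 24)) { m >>= 1; ++ea; }
    uint32_t ur = ((uint32_t)ea << 23) | ((uint32_t)m & 0x7FFFFF);
    float r;
    std::memcpy(&r, &ur, 4);
    return r;
}
```

Even with all those cases stripped away, the align/add/renormalize shape of the algorithm is visible.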
The non-naive version does not use 2100-bit wide arrays, but takes advantage of a couple of “guard bits” instead (see section “on rounding” in this document).
The additional subtleties include:
The sign bits of the arguments can mean that the magnitudes should be subtracted for an addition, or added for a subtraction.
One of the arguments can be NaN. Then the result is NaN.
One of the arguments can be an infinity. If the other argument is finite or the same infinity, the result is the same infinity. Otherwise, the result is NaN.
One of the arguments can be a denormalized number. In this case there is no leading 1 when transferring the number to the array of bits for addition.
The result of the addition can be an infinity: depending on the details of the implementation, this would be recognized as an exponent too large to fit the format, or an overflow during the addition of the binary arrays (the overflow can also occur during the rounding step).
The result of the addition can be a denormalized number. This is recognized as the absence of a leading 1 in the first 2046 bits of the array of bits. In this case the last 52 bits of the array should be transferred to the significand of the result, and the exponent should be set to zero, to indicate a denormalized result.
This is the code:
for(char c=0;c<256;c++)
printf("hello");
Why does this go into an infinite loop?
This is because char is an 8-bit integer in your case.
So char only has the values -128 to 127, which is always less than 256.
It's likely that your char data type is limited to eight bits. If the default char is signed, eight-bit, and your machine uses two's complement, it will always be in the range -128..127, so it will always be less than 256.
If it's unsigned eight-bit (where the encoding doesn't matter, since encoding only affects signed numbers), it will always be in the 0..255 range, so it will also always be under 256.
Note that C doesn't mandate two's complement, so the range may vary slightly, but with every possible encoding and only eight bits, the highest value a char can have is 255.
Only if you have more than eight bits available will a char ever be able to reach 256. This is possible because C mandates only a minimum number of bits. You can check what your implementation provides by looking at CHAR_BIT in <limits.h>, but in the vast majority of cases this will be 8.
For completeness, the eight-bit ranges for the three possible C encodings are:

                     signed      unsigned
                    =========    ========
one's complement    -127..127     0..255
sign/magnitude      -127..127     0..255
two's complement    -128..127     0..255
char can only store the values 0 to 255 if it is unsigned, or -128 to 127 if it is signed, which depends on your build settings. In either case, it will always be less than 256: adding one to the highest value makes it wrap around to the lowest value.
You are wrapping the byte around. Basically, when you go past the maximum value that a variable type can store, it wraps around and begins again from the start.
256 isn't the maximum number a char can hold; it is the number of distinct values a char can represent. Those are two very different things.
A char is 8 bits (on just about all normal implementations)
An 8-bit type can hold 256 possible distinct values, but for a normal signed char those values are in the range -128 to 127.
An unsigned char is still 8 bits and therefore also has 256 possible values, but those values are in the range 0 to 255. The maximum is 255, not 256, because 0 is one of the possible values.
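The wrap-around is easy to demonstrate with unsigned char, where it is well defined; the helper names below are mine:

```cpp
// Incrementing an unsigned char past 255 wraps modulo 256,
// which is why `c < 256` can never become false for an 8-bit char.
unsigned char wrap_increment(unsigned char c) {
    return (unsigned char)(c + 1);
}

// The usual fix for the loop in the question: use a wider counter.
int count_to_256_with_int(void) {
    int count = 0;
    for (int i = 0; i < 256; ++i)  // an int easily holds 256, so this terminates
        ++count;
    return count;
}
```

`wrap_increment(255)` gives 0, exactly the wrap-around that restarts the original loop.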
φ(n) = (p-1)(q-1)
p and q are two big numbers
find e such that gcd(e,φ(n)) = 1
Consider p and q to be very large prime numbers (Bigint). I want to find an efficient solution for this.
[Edit] I can solve this using a brute-force method, but as the numbers are too big, I need a more efficient solution.
also 1< e < (p-1)(q-1)
Usually you choose e to be a prime number. A common choice is 65537. You then select p and q so that gcd(p-1, e) = 1 and gcd(q-1, e) = 1, which just requires you to check that p-1 and q-1 are not multiples of e (when you (rarely) find that one of them is, you generate a new prime instead).
65537 has the advantage of allowing you to optimize the public-key operation, by observing that x^65537 = x^(2^16 + 1) = x^(2^16) * x (mod m), so you need just 16 modular squarings and one modular multiplication.
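The squaring chain can be sketched like this for word-sized numbers (real RSA moduli need a bignum library; the function name is mine):

```cpp
#include <cstdint>

// x^65537 mod m via 16 squarings and one multiplication.
// Only valid while m < 2^32, so intermediate products fit in 64 bits.
uint64_t pow_65537(uint64_t x, uint64_t m) {
    uint64_t r = x % m;
    for (int i = 0; i < 16; ++i)       // after the loop, r = x^(2^16) mod m
        r = (r * r) % m;
    return (r * (x % m)) % m;          // one more multiply: exponent 2^16 + 1
}
```

As a sanity check, Fermat's little theorem gives x^p = x (mod p) for prime p, so pow_65537(x, 65537) should return x itself.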
You have to decide how big you want e to be. This is a system decision. Commonly, e used to be fixed at 3; more usual nowadays is e=65537. In these cases, e is prime, so (as others have already pointed out) you just have to check that (p-1)(q-1) is not a multiple of e.
But some system requirements specify a 32-bit random e. This is because some cryptographers feel that flaws are more likely to be discovered in fixed-exponent RSA systems than in random-exponent systems. (As far as I know, no concrete exploitation has been discovered for fixed-exponent systems; but cryptographers are paid to be over-cautious.)
So let's say you're stuck with having to generate a random 32-bit e that is co-prime to (p-1)(q-1). The simplest solution is this: Generate a random, odd 32-bit number e. Then calculate its inverse mod (p-1)(q-1). If this inverse calculation fails, because e is not co-prime to (p-1)(q-1), then try again.
This is a reasonable, practical solution. You will need to calculate the inverse anyway, and computing an inverse doesn't take much longer than computing a gcd.
If you really need to make it as fast as you can, you can look for small prime factors of (p-1)(q-1) and trial-divide e by these factors: if you find small prime factors, then you can speed up your search for e; if you don't, then the search will probably terminate quickly.
Another reasonable solution is to generate a random 32-bit prime e, and check (p-1)(q-1) for divisibility by e. Whether this is allowed depends on your system requirements. Are you setting these requirements yourself?
Pick the first prime number >= 3 that satisfies this.
If you are looking for speed, you might use a small exponent.
There are two problems with small exponents:
You should not use a small exponent to encrypt the same message under multiple keys. (For instance, if there are three private/public pairs with e = 3, an attacker can use Gauss's algorithm to recover the plaintext.)
You should not send short messages, because an attacker might recover them by simply taking the cube root.
Considering these weaknesses, you might still use this scheme; as far as I know, 3 is a common choice for e.
By the way, brute-forcing a few numbers is negligible compared to checking for primality.
I think you may have misstated the problem; e=1 works nicely for the one you've written.
What you need to do then is compute d such that de ≡ 1 (mod φ(n)). This is actually very quick: simply use the extended Euclidean algorithm on e and φ(n). Doing so lets you compute de + kφ(n) = 1, which is to say you have computed the inverse of e modulo φ(n).
Edit: Rasmus Faber is correct, you do need to verify that gcd(e, φ(n)) = 1. The extended Euclidean algorithm will still do this for you: it computes both the gcd and the Bézout coefficients of e and φ(n). This tells you what d is, namely the inverse of e modulo φ(n), which gives you t^(ed) = t (mod n).
As for doing this in practice, I strongly suggest using a bignum library; rolling your own arbitrary-precision extended Euclidean algorithm isn't easy. Here is one such function that will do this efficiently for arbitrary-precision arithmetic.
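For word-sized numbers, the extended Euclidean algorithm and the resulting modular inverse can be sketched as follows (again, real RSA sizes need a bignum library; the names are my own):

```cpp
#include <cstdint>

// Extended Euclid: returns gcd(a, b) and fills x, y with a*x + b*y = gcd.
int64_t ext_gcd(int64_t a, int64_t b, int64_t &x, int64_t &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    int64_t x1, y1;
    int64_t g = ext_gcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// d = e^-1 mod phi, or -1 when gcd(e, phi) != 1 (no inverse exists).
int64_t mod_inverse(int64_t e, int64_t phi) {
    int64_t x, y;
    if (ext_gcd(e, phi, x, y) != 1)
        return -1;                    // e is not co-prime to phi
    return ((x % phi) + phi) % phi;   // normalize into [0, phi)
}
```

Returning -1 on failure mirrors the "try again with a new e" approach described earlier: the same call both verifies the gcd condition and produces d when it holds.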