How can I force an XOR operation to stay within the visible ASCII range? - bit-manipulation

Say you have a value k in [32,126] and another value p in [32,126]. If you compute p xor k you might get low values such as 0 that are not in [32,126]. One trick to stay within [32,127] is to compute p xor (k and 15): the and(k, 15) clears all but the 4 least significant bits of k, so the xor can only change the low 4 bits of p, and since p's untouched high bits already put it at 32 or above, the result stays in [32,127].
However, 127 is a control character in the ASCII table. Can you do something elegant that makes this range go from 32 to 126 and not 32 to 127?
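For reference, a minimal C sketch of the trick as described (C is just for illustration); it exhaustively checks that p xor (k and 15) stays in [32,127], which is exactly the range the question wants to shrink to [32,126]:

#include <stdio.h>

int main(void)
{
    for (int p = 32; p <= 126; p++)
        for (int k = 32; k <= 126; k++) {
            int c = p ^ (k & 15);           /* only bits 0-3 of p can change */
            if (c < 32 || c > 127)
                printf("out of range: p=%d k=%d c=%d\n", p, k, c);
        }
    /* nothing is printed above; the worst case, e.g. p=112 with k's low
       nibble 1111, lands exactly on 127 */
    printf("every result stayed in [32,127]\n");
    return 0;
}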

Related

How to choose the correct left shift in bit wise operations?

I am learning bare metal programming in c++ and it often involves setting a portion of a 32 bit hardware register address to some combination.
For example for an IO pin, I can set the 15th to 17th bit in a 32 bit address to 001 to mark the pin as an output pin.
I have seen code that does this and I half understand it based on an explanation of another SO question.
// here ra is a physical address;
// the 15th to 17th bits are being
// cleared by AND-ing it with a value that is one everywhere
// except in the 15th to 17th bits
ra&=~(7<<12);
Another example is:
// this clears the 21st to 23rd bits of another address
ra&=~(7<<21);
How do I choose the 7 and how do I choose the number of bits to shift left?
I tried this out in python to see if I can figure it out
bin((7<<21)).lstrip('-0b').zfill(32)
'00000000111000000000000000000000'
# this has 8, 9 and 10 as the bits which is wrong
The 7 (base 10) is chosen as its binary representation is 111 (7 in base 2).
As for why it looks like bits 8, 9 and 10 are set: you're reading from the wrong direction. Binary, just like normal base 10, is written with its least significant digit on the right, so bit 0 is the rightmost bit.
(I'd left this as a comment but reputation isn't high enough.)
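(Not part of the original answer: a quick C check of which bit positions 7<<21 actually sets, counting bit 0 as the least significant, rightmost bit.)

#include <stdio.h>

int main(void)
{
    unsigned int v = 7u << 21;
    for (int i = 31; i >= 0; i--)
        if (v & (1u << i))
            printf("bit %d is set\n", i);   /* prints 23, 22, 21 */
    return 0;
}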
If you want to isolate and change some bits in a register, but not all of them, you need to understand that the bitwise operations (and, or, xor, not) operate on a single bit column: bit 3 of each operand is used to determine bit 3 of the result, no other bits are involved. So here are some bits in binary, represented by letters since each can be either a 1 or a 0:
jklmnopq
The and operation truth table you can look up: anything anded with zero is zero, anything anded with one is itself.
jklmnopq
& 01110001
============
0klm000q
Anything orred with one is a one; anything orred with zero is itself.
jklmnopq
| 01110001
============
j111nop1
So if you want to isolate and change two bits in this variable/register, say bits 5 and 6, and change them to 0b10 (a 2 in decimal), the common method is to and them with zero and then or them with the desired value:
76543210
jklmnopq
& 10011111
============
j00mnopq
jklmnopq
| 01000000
============
j10mnopq
You could have orred bit 6 with a 1 and anded bit 5 with a zero, but that is specific to the value you wanted to change them to. Generically we think "I want to change those bits to a 2", so to use that value 2 you zero the bits and then force the 2 onto them: and them to make them zero, then or the 2 onto the bits. Generic.
In C:
x = read_register(blah);
x = (x&(~(3<<5)))|(2<<5);
write_register(blah,x);
Let's dig into this (3 << 5):
00000011
00000110 1
00001100 2
00011000 3
00110000 4
01100000 5
76543210
That puts two ones on top of the bits we are interested in, but anding with that value would isolate those bits and mess up the others, so to zero those bits and not mess with the other bits in the register we need to invert the mask:
Using x = ~x inverts those bits, a bitwise not operation:
01100000
10011111
Now we have the mask we want to and with our register, as shown way above, zeroing the bits in question while leaving the others alone: j00mnopq
Now we need to prep the bits to or in, (2<<5):
00000010
00000100 1
00001000 2
00010000 3
00100000 4
01000000 5
That gives the bit pattern we want to or in, giving j10mnopq, which we write back to the register. Again, the j, m, n, ... bits are bits: they are either a one or a zero and we don't want to change them, so we do this extra masking and shifting work. You may/will sometimes see examples that simply do write_register(blah,2<<5); either because they know the state of the other bits, know they are not using those other bits and zero is okay/desired, or don't know what they are doing.
x = read_register(blah); //bits are jklmnopq
x = (x&(~(3<<5)))|(2<<5);
z = 3
z = z << 5
z = ~z
x = x & z
z = 2
z = z << 5
x = x | z
z = 3
z = 00000011
z = z << 5
z = 01100000
z = ~z
z = 10011111
x = x & z
x = j00mnopq
z = 2
z = 00000010
z = z << 5
z = 01000000
x = x | z
x = j10mnopq
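A runnable check of the walkthrough above, assuming an arbitrary 8-bit starting value for the j..q bits (any value works):

#include <stdio.h>

int main(void)
{
    unsigned int x = 0xAA;                  /* jklmnopq = 10101010 */
    x = (x & (~(3u << 5))) | (2u << 5);     /* force bits 6:5 to 0b10 */
    printf("%02X\n", x);                    /* prints CA, i.e. 11001010 = j10mnopq */
    return 0;
}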
If you have a 3 bit field then the binary is 0b111, which in decimal is the number 7, or hex 0x7. A 4 bit field is 0b1111, which is decimal 15 or hex 0xF; as you get past 7 it is easier to use hex IMO. A 6 bit field is 0x3F, a 7 bit field 0x7F, and so on.
You can take this further to try to be more generic. Say there is a register that controls some function for gpio pins 0 through, say, 15, starting with bit 0, with two bits per pin. If you wanted to change the properties for gpio pin 5, that would be bits 10 and 11: 5*2 = 10, and since there are two bits per pin you get bit 10 and the next one, 11. But generically you could:
x = (x&(~(0x3<<(pin*2)))) | (value<<(pin*2));
since 2 is a power of 2
x = (x&(~(0x3<<(pin<<1)))) | (value<<(pin<<1));
an optimization the compiler might make if pin cannot be reduced to a constant at compile time.
but if it were 3 bits per field and the fields start aligned with bit zero
x = (x&(~(0x7<<(pin*3)))) | (value<<(pin*3));
which the compiler might implement as an actual multiply by 3, or maybe instead as a shift and add, something like
pinshift = (pin<<1) + pin;
to get the multiply by three; depends on the compiler and instruction set.
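As a sketch only, with the hardware register modelled as a plain variable (on real hardware it would be a volatile access at the peripheral address), the generic two-bits-per-pin update might look like:

#include <stdio.h>

static unsigned int set_pin_field(unsigned int x, unsigned int pin, unsigned int value)
{
    x &= ~(0x3u << (pin * 2));              /* zero this pin's two bits */
    x |=  (value & 0x3u) << (pin * 2);      /* or in the new 2-bit value */
    return x;
}

int main(void)
{
    unsigned int reg = 0xFFFFFFFF;
    reg = set_pin_field(reg, 5, 0x2);       /* pin 5 lives in bits 10-11 */
    printf("%08X\n", reg);                  /* prints FFFFFBFF */
    return 0;
}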
Overall this is called a read-modify-write, as you read something, modify some of it, then write it back (if you were modifying all of it you wouldn't need to bother with the read and the modify, you would just write the whole new value). And folks will say "masking and shifting" to generically cover isolating bits in a variable, either for modification purposes or for reading; if you wanted to read/see what those two bits above were, you would
x = read_register(blah);
x = x >> 5;
x = x & 0x3;
or mask first then shift
x = x & (0x3<<5);
x = x >> 5;
Six of one, half a dozen of the other: both are equivalent in general, though on some instruction sets one might be more efficient than the other (and then shift, versus shift then and). One might make more sense visually to some folks than the other.
Technically this is a bit-ordering (endianness-like) thing, as in some processors' documentation bit 0 is the most significant bit; in C, AFAIK, bit 0 is the least significant bit. If/when a manual shows the bits laid out left to right you want your right and left shifts to match that, so as above I showed 76543210 to indicate the documented bits and associated that with jklmnopq; that was the left-to-right information that mattered to continue the conversation about modifying bits 5 and 6. Some documents will use verilog or vhdl style notation, 6:5 (meaning bits 6 to 5 inclusive; it makes more sense with, say, 4:2 meaning bits 4,3,2) or [6 downto 5]; more likely you'll just see a visual picture with boxes or lines to show you which bits are which field.
How do I choose the 7
You want to clear three adjacent bits. Three adjacent bits at the bottom of a word is 1+2+4=7.
and how do I choose the number of bits to shift left
You want to clear bits 21-23, not bits 1-3, so you shift left another 20.
Both your examples are wrong. To clear 15-17 you need to shift left 14, and to clear 21-23 you need to shift left 20.
this has 8, 9 and 10 ...
No it doesn't. You're counting from the wrong end.
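To make the corrected shift amounts concrete, a small sketch, assuming the question's 1-indexed bit numbering (the "15th bit" having place value 2^14):

#include <stdio.h>

int main(void)
{
    unsigned int field3 = (1u << 3) - 1;    /* three adjacent bits: 1+2+4 = 7 */
    printf("%08X\n", field3 << 14);         /* 0001C000: covers the 15th-17th bits */
    printf("%08X\n", field3 << 20);         /* 00700000: covers the 21st-23rd bits */
    return 0;
}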

Keep every n-th bit and collapse them into the least significant bits

I have a 32-bit integer that I treat as a bitfield. I'm interested in the value of the bits with an index of the form 3n, where n ranges from 0 to 6 (every third bit between 0 and 18). I'm not interested in the bits with an index of the form 3n+1 or 3n+2.
I can easily use the bitwise AND operator to keep the bits I'm interested in and set all the other bits to zero.
I would also need to "pack" the bits I'm interested in into the 7 least significant bit positions. So the bit at position 0 stays at position 0, but the bit at position 3 moves to position 1, the bit at position 6 moves to position 2, and so on.
I would like to do this in an efficient way, ideally without using a loop. Is there a combination of operations I could apply to an integer to achieve this?
Since we're only talking about integer arithmetic here, I don't think the programming language I plan to use matters. But if you need to know:
I'm going to use JavaScript.
If the order of the bits is not important, they can be packed into bits 0-6 like this:
function packbits(a)
{
    // mask out the bits we're not interested in:
    var b = a & 299593; // 1001001001001001001 in binary
    // pack into the lower 7 bits:
    return (b | (b >> 8) | (b >> 13)) & 127;
}
If the initial bit ordering is like this:
bit 31 bit 0
xxxxxxxxxxxxxGxxFxxExxDxxCxxBxxA
Then the packed ordering is like this:
bit 7 bit 0
0CGEBFDA

How do I find a collision for a simple hash algorithm

I have the following hash algorithm:
unsigned long specialNum=0x4E67C6A7;
unsigned int ch;
char inputVal[]=" AAPB2GXG";
for(int i=0;i<strlen(inputVal);i++)
{
    ch=inputVal[i];
    ch=ch+(specialNum*32);
    ch=ch+(specialNum/4);
    specialNum=bitXor(specialNum,ch);
}
unsigned int outputVal=specialNum;
The bitXor function simply does the XOR operation:
int bitXor(int a,int b)
{
    return (a & ~b) | (~a & b);
}
Now I want to find an algorithm that can generate an "inputVal" when the outputVal is given. (The generated inputVal does not necessarily have to be the same as the original inputVal; that's why I want to find a collision.)
This means I need to find an algorithm that generates a solution that, when fed into the above algorithm, produces the same result as the specified "outputVal".
The length of the solution to be generated should be less than or equal to 32.
Method 1: Brute force. Not a big deal, because your "specialNum" is always in the range of an int, so after trying on average a few billion input values, you find the right one. Should be done in a few seconds.
Method 2: Brute force, but clever.
Consider the specialNum value before the last ch is processed. You first calculate (specialNum * 32) + (specialNum / 4) + ch. Since -128 <= ch < 128 or 0 <= ch < 256 depending on the signedness of char, you know the highest 23 bits of the result, independent of ch. After xor'ing ch with specialNum, you also know the highest 23 bits (if ch is signed, there are two possible values for the highest 23 bits). You check whether those 23 bits match the desired output, and if they don't, you have excluded all 256 values of ch in one go. So the brute force method will end on average after 16 million steps.
Now consider the specialNum value before the last two ch are processed. Again, you can determine the highest possible 14 bits of the result (if ch is signed with four alternatives) without examining the last two characters at all. If the highest 14 bits don't match, you are done.
Method 3: This is how you do it. Consider in turn all strings s of length 0, 1, 2, etc. (however, your algorithm will most likely find a solution much quicker). Calculate specialNum after processing the string s. Following your algorithm, and allowing for char to be signed, find the up to 4 different values that the highest 14 bits of specialNum might have after processing two further characters. If any of those matches the desired output, then examine the value of specialNum after processing each of the 256 possible values of the next character, and find the up to 2 different values that the highest 23 bits of specialNum might have after examining another char. If one of those matches the highest 23 bits of the desired output then examine what specialNum would be after processing each of the 256 possible next characters and look for a match.
This should run in under a millisecond. If char is unsigned, it is faster.
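For completeness, a rough sketch of Method 1 only (the plain brute force), assuming the exact hash loop from the question; the candidate strings here are generated over 'A'..'Z', an arbitrary choice, and as the answer notes you should expect on the order of billions of trials:

#include <stdio.h>
#include <string.h>

static unsigned int hashVal(const char *s)
{
    /* note: unsigned long may be 64-bit; the search stays self-consistent
       because the same hashVal computes both the target and the candidates */
    unsigned long specialNum = 0x4E67C6A7;
    unsigned int ch;
    for (size_t i = 0; i < strlen(s); i++) {
        ch = s[i];
        ch = ch + (specialNum * 32);
        ch = ch + (specialNum / 4);
        specialNum = specialNum ^ ch;       /* same as bitXor() above */
    }
    return (unsigned int)specialNum;
}

int main(void)
{
    unsigned int target = hashVal(" AAPB2GXG");     /* the desired outputVal */
    char cand[16];

    for (unsigned long long n = 0; ; n++) {
        /* encode the counter as a candidate string over 'A'..'Z' */
        unsigned long long v = n;
        int len = 0;
        do {
            cand[len++] = (char)('A' + (int)(v % 26));
            v /= 26;
        } while (v != 0);
        cand[len] = '\0';

        if (hashVal(cand) == target) {
            printf("collision: \"%s\" -> 0x%08X\n", cand, target);
            return 0;
        }
    }
}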

Shifting by k, for large values of k (CSAPP)

I am reading about shifting by k, for large values of k, in the book CSAPP. It discusses what the effect would be of shifting a data type consisting of w bits by some value k >= w. It states the following:
"On many machines, the shift instructions consider only the lower log_2 w bits of the shift amount when shifting a w-bit value, and so the shift amount is effectively computed as k mod w."
While I do understand the k mod w part, I do not understand what CSAPP means by the lower log_2 w bits of the shift amount. I was thinking that if we have an integer on a 32-bit machine that we want to shift 36 units to the left, we would shift it 36 mod 32 = 4 bits to the left. I wasn't sure how that would be equivalent to taking the lower log_2 32 = 5 bits of the shift amount.
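A small C sketch, assuming w = 32 (so log_2 w = 5), showing that keeping only the lower log_2 w bits of the shift amount is the same as reducing it mod w whenever w is a power of two:

#include <stdio.h>

int main(void)
{
    unsigned int w = 32, k = 36;

    unsigned int low_bits = k & (w - 1);    /* keep the lower 5 bits: 36 & 0x1F */
    unsigned int mod      = k % w;          /* 36 mod 32 */

    /* 36 is 100100 in binary; its lower 5 bits are 00100 = 4 = 36 mod 32 */
    printf("k & (w-1) = %u, k %% w = %u\n", low_bits, mod);

    /* note: writing x << 36 directly is undefined behaviour in C, which is
       why the reduction is shown on the shift amount itself */
    printf("1 << 4 = %u\n", 1u << (k & (w - 1)));
    return 0;
}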

Analysis of the usage of prime numbers in hash functions

I was studying hash-based sort and I found that using prime numbers in a hash function is considered a good idea, because multiplying each character of the key by a prime number and adding the results up would produce a unique value (because primes are unique) and a prime number like 31 would produce better distribution of keys.
key(s) = s[0]*31^(len-1) + s[1]*31^(len-2) + ... + s[len-1]
Sample code:
public int hashCode()
{
    int h = hash;                       // cached hash value (a field), 0 until computed
    if (h == 0)
    {
        for (int i = 0; i < chars.length; i++)
        {
            h = MULT * h + chars[i];    // MULT is the multiplier under discussion (e.g. 31)
        }
        hash = h;
    }
    return h;
}
I would like to understand why the use of even numbers for multiplying each character is a bad idea in the context of this explanation below (found on another forum; it sounds like a good explanation, but I'm failing to grasp it). If the reasoning below is not valid, I would appreciate a simpler explanation.
Suppose MULT were 26, and consider hashing a hundred-character string. How much influence does the string's first character have on the final value of 'h'? The first character's value will have been multiplied by MULT 99 times, so if the arithmetic were done in infinite precision the value would consist of some jumble of bits followed by 99 low-order zero bits -- each time you multiply by MULT you introduce another low-order zero, right? The computer's finite arithmetic just chops away all the excess high-order bits, so the first character's actual contribution to 'h' is ... precisely zero! The 'h' value depends only on the rightmost 32 string characters (assuming a 32-bit int), and even then things are not wonderful: the first of those final 32 bytes influences only the leftmost bit of 'h' and has no effect on the remaining 31. Clearly, an even-valued MULT is a poor idea.
I think it's easier to see if you use 2 instead of 26; they both have the same effect on the lowest-order bit of h. Consider a 33-character string of some character c followed by 32 zero bytes (for illustrative purposes). Since the string isn't wholly null, you'd hope the hash would be nonzero.
For the first character, your computed hash h is equal to c[0]. For the second character, you take h * 2 + c[1], so now h is 2*c[0]. For the third character, h is h*2 + c[2], which works out to 4*c[0]. Repeat this 30 more times, and the multiplier has pushed c[0]'s contribution past the bits available in your destination, meaning c[0] effectively had no impact on the final hash at all.
The end math works out exactly the same with a different multiplier like 26, except that the intermediate hashes wrap modulo 2^32 every so often during the process. Since 26 is even, it still adds one zero bit to the low end of c[0]'s contribution on each iteration.
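A quick sketch of this point (not from the original answer), assuming 32-bit unsigned arithmetic and a helper name, hash_mult, invented for the sketch: two 33-character strings that differ only in their first character collide with the even multiplier 26, but not with 31.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static uint32_t hash_mult(const char *s, uint32_t mult)
{
    uint32_t h = 0;
    for (size_t i = 0; i < strlen(s); i++)
        h = mult * h + (uint32_t)(unsigned char)s[i];
    return h;
}

int main(void)
{
    char a[34], b[34];
    memset(a, 'x', 33); a[0] = 'A'; a[33] = '\0';
    memcpy(b, a, 34);   b[0] = 'Z';

    /* with MULT = 26, c[0]*26^32 is a multiple of 2^32 and vanishes */
    printf("MULT=26: %08X vs %08X\n", (unsigned)hash_mult(a, 26), (unsigned)hash_mult(b, 26));
    printf("MULT=31: %08X vs %08X\n", (unsigned)hash_mult(a, 31), (unsigned)hash_mult(b, 31));
    return 0;
}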
This hash can be described like this (here ^ is exponentiation, not xor).
hash(string) = sum_over_i(s[i] * MULT^(strlen(s) - i - 1)) % (2^32).
Look at the contribution of the first character. It's
(s[0] * MULT^(strlen(s) - 1)) % (2^32).
If MULT is even and the string is long enough (strlen(s) > 32), then this term is zero, because MULT^(strlen(s) - 1) then contains at least 32 factors of two.
Other people have posted the answer -- if you use an even multiplier, then only the last characters in the string matter for computing the hash, as the early characters' influence will have been shifted out of the register.
Now let's consider what happens when you use a multiplier like 31. Well, 31 is 32-1, or 2^5 - 1. So when you use that, your final hash value will be:
\sum_i c_i 2^{5(len-i-1)} - \sum_i c_i
Unfortunately Stack Overflow doesn't understand TeX math notation, so the above is hard to read, but it's two summations over the characters in the string, where the first one shifts each character left by 5 more bits for each subsequent character in the string. So on a 32-bit machine, that will shift off the top for all except the last seven characters of the string.
The upshot of this is that using a multiplier of 31 means that while characters other than the last seven have an effect on the hash, it's completely independent of their order. If you take two strings that have the same last 7 characters, and whose other characters are also the same but in a different order, you'll get the same hash for both. You'll also get the same hash for things like "az" and "by" appearing anywhere other than in the last 7 chars.
So using a prime multiplier, while much better than an even multiplier, is still not very good. Better is to use a rotate instruction, which shifts the bits back into the bottom when they shift out the top. Something like:
public unsigned hashCode(string chars)
{
    unsigned h = 0;
    for (int i = 0; i < chars.length; i++) {
        h = (h<<5) + (h>>27); // ROL by 5, assuming 32 bits here
        h += chars[i];
    }
    return h;
}
Of course, this depends on your compiler being smart enough to recognize the idiom for a rotate instruction and turn it into a single instruction for maximum efficiency.
This also still has the problem that swapping 32-character blocks in the string will give the same hash value, so it's far from strong, but it's probably adequate for most non-cryptographic purposes.
would produce a unique value
Stop right there. Hashes are not unique. A good hash algorithm will minimize collisions, but the pigeonhole principle assures us that perfectly avoiding collisions is not possible (for any datatype with non-trivial information content).