Bitwise Comparison to establish Bit Shift

We are dealing with a high-throughput SerDes system. To ensure word alignment, I need to estimate the number of bits we need to bitslip.
For example, if the input is 14'b11111110000000 but the decoded output is 14'b01111111000000, then clearly we need to bitslip by 1 bit.
Due to the timing constraints on this, we cannot afford the time to iteratively shift and compare.
Is there any simple (single clock cycle) bitwise comparison that can estimate the number of bits we need to slip, perhaps XOR etc.?
Thanks in advance.
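As an illustrative sketch of the shift-and-compare idea in software terms (hedged: the 14-bit width comes from the example above, and the rotation direction is an assumption about the link's bit order), note that each of the comparisons below is an independent XOR-and-compare, so a hardware version can evaluate all 14 in parallel within one clock cycle rather than iteratively:

    #include <stdint.h>

    /* XOR the received word against every rotation of the expected pattern
       and report the rotation with zero mismatches. */
    #define WORD_BITS 14u
    #define WORD_MASK ((1u << WORD_BITS) - 1u)

    static uint32_t rotr14(uint32_t w, unsigned n)
    {
        return ((w >> n) | (w << (WORD_BITS - n))) & WORD_MASK;
    }

    /* Returns the estimated bitslip (0..13), or -1 if nothing matches. */
    int estimate_bitslip(uint32_t received, uint32_t expected)
    {
        for (unsigned slip = 0; slip < WORD_BITS; slip++)
            if ((received ^ rotr14(expected, slip)) == 0)
                return (int)slip;
        return -1;
    }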

Related

Convert each bit in byte to first bit of each nibble in 32 bit int

I have a byte b. I am looking for the most efficient bit manipulation to
convert each bit in b to the first bit of each nibble in a 32 bit int x.
For example, if b = 01010111, then x = 0x10101111
I know I can do a brute force approach:
x = (b&1) | (((b>>1)&1)<<4) | ......
Edit: this is for an OpenCL kernel for a GPU.
PDEP
As user harold mentioned in the comments, PDEP is the instruction that just does exactly what you want - but it's only available on x86 (as far as I know), and it has terrible[1] performance on the newest AMD chips.
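For reference, a minimal sketch using the BMI2 intrinsic (this assumes an x86 CPU with BMI2 and the low-bit-of-each-nibble placement implied by the question's brute-force expression):

    #include <stdint.h>
    #include <immintrin.h>

    /* Deposit the 8 bits of b into the low bit of each nibble of a 32-bit word.
       Requires BMI2 (e.g. compile with -mbmi2 on gcc/clang). */
    uint32_t spread_byte_pdep(uint8_t b)
    {
        return _pdep_u32(b, 0x11111111u);
    }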
LUT
Barring that, a lookup table of 256 x 4-byte entries seems reasonable - at the cost of 1K of pressure on your cache subsystem. You'll find a lot of smart people advocate against LUTs due to the hidden cost of cache misses - but if this particular operation is in fact "hot" then it may turn out to be the fastest even when factoring in any additional misses.
As with any LUT solution, you should be especially careful to benchmark it not only with micro-benchmarks, but in the full application to evaluate the effect of memory pressure.
You could also consider a compromise split-LUT solution that uses one or two 16-entry LUTs for each nibble of the byte, where the result is calculated something like:
uint32_t x = high_lut[(b & 0xF0) >> 4] | low_lut[b & 0xF];
This cuts the size of the LUTs down by a factor of between ~11 and 32[2], since we have far fewer entries and some entries can be 2 bytes rather than 4 bytes.
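A sketch of the split-LUT variant (the table names follow the expression above; the low bit of each nibble is assumed as the target, matching the question's brute force):

    #include <stdint.h>

    static uint32_t high_lut[16]; /* 4-byte entries for bits 4..7 */
    static uint16_t low_lut[16];  /* 2-byte entries for bits 0..3 */

    static void init_luts(void)
    {
        for (unsigned n = 0; n < 16; n++) {
            uint32_t hi = 0, lo = 0;
            for (unsigned i = 0; i < 4; i++) {
                hi |= (uint32_t)((n >> i) & 1) << (4 * (i + 4)); /* nibble i+4, low bit */
                lo |= (uint32_t)((n >> i) & 1) << (4 * i);       /* nibble i,   low bit */
            }
            high_lut[n] = hi;
            low_lut[n]  = (uint16_t)lo;
        }
    }

    static uint32_t spread_byte_lut(uint8_t b)
    {
        return high_lut[(b & 0xF0) >> 4] | low_lut[b & 0xF];
    }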
Bit Manipulation
If you really want a bit manipulation solution, to impress your in-laws or something, you can try something like the following:
Split the byte into nibbles and use multiplication by 0x00001111 (low nibble) and 0x01111000 (high nibble) to splat the low (resp. high) nibble into the low (resp. high) half of the 4-byte word, and combine the results with an or (or an add). So if your byte had bits abcd efgh you'll have a word like abcd abcd abcd abcd efgh efgh efgh efgh.
AND this result with a mask that picks out the bit that belongs in each nibble (although it usually won't be in the right place). The mask is something like 0x84218421 and the result (in binary) will be something like a000 0b00 00c0 000d e000 0f00 00g0 000h.
Now move the 6 out of 8 bits that aren't in the high bit to the right position using the carry behavior of subtraction, something like: ((x | 0x08880888) - 0x01110111) ^ 0x08880888.
The basic idea in the last step is that you set the high bit of each nibble, and subtract 1 from the nibble. So for example, you have the 0b00 nibble, which becomes 1b00 - 1 - the subtraction carries through all the zeros, and stops at the first one, which is either the high bit (b is zero) or b if it is one. So you effectively set the high bit based on the value of the selected bit. Note that you don't need to do this for a or e since they are already in the right place.
The final xor is needed because the above actually sets the high bit to the opposite value as the selected bit, so we need to flip it.
I didn't try it out, so there are no doubt bugs, but the basic idea should be sound. There are probably various ways to optimize it further, but it's not too bad as is: a couple of multiplications and perhaps a half-dozen bit-operations. On platforms with slow multiplications you can probably find another approach for the first step that uses only 1 multiplication combined with a few more primitive operations, or zero at the cost of several more operations.
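To make the three steps concrete, here is one worked-through rendering of the idea (a sketch, not the author's tested code; it places each source bit in the high bit of its nibble as described, and it finishes with an AND against 0x88888888 to discard the leftover low bits rather than the XOR mentioned above):

    #include <stdint.h>

    /* Bit i of b (a = bit 7 ... h = bit 0) ends up in the HIGH bit of nibble i. */
    uint32_t spread_byte_mul(uint8_t b)
    {
        /* Step 1: splat each nibble across its half of the word:
           abcd abcd abcd abcd efgh efgh efgh efgh */
        uint32_t x = (uint32_t)(b & 0x0F) * 0x00001111u
                   | (uint32_t)(b & 0xF0) * 0x01111000u;

        /* Step 2: keep one copy of each bit:
           a000 0b00 00c0 000d e000 0f00 00g0 000h */
        x &= 0x84218421u;

        /* Step 3: set the high bit of the six "misplaced" nibbles, subtract 1 from
           each so the borrow clears that high bit exactly when the kept bit is 0,
           then mask so only the high bit of every nibble survives. */
        return ((x | 0x08880888u) - 0x01110111u) & 0x88888888u;
    }

For example, b = 0x57 (binary 01010111) yields 0x08080888: the high bit of each nibble mirrors the corresponding bit of the byte.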
[1] Fully 18x worse throughput than Intel - evidently AMD opted not to implement the circuit to do PDEP in hardware and instead implement it via a series of more elementary operations.
[2] The largest reduction is if you share a single 16-entry LUT for both the high and low nibble, although this requires an additional shift for the result of the high nibble lookup. The smaller reduction, shown in the example, uses two 16-entry LUTs: one 4-byte one for the high nibble, and a 2-byte one for the low nibble, and avoids the shift.

Why are CRC Polynomials given as Normal, Reversed, etc.?

I'm learning about CRCs, and search engines and SO turn up nothing on this....
Why do we have "Normal" and "Reversed" and "Reciprocal" Polynomials? Does one favor Big Endian, Little Endian, or something else?
The classic definition of a CRC would use a non-reflected polynomial, which shifts the CRC left. If the word size being used for the calculation is larger than the CRC, then you would need an operation at the end to clear the high bits that the CRC was shifted into (e.g. & 0xffff for a 16-bit CRC).
You can flip the whole thing, use a reflected polynomial, and shift right instead of left. That gives the same CRC properties, but the bits from the message are effectively operated on from least to most significant bit, instead of most to least significant bit. Since you are shifting right, the extraneous bits get dropped off the bottom into oblivion, and there is no need for the additional operation. This may have been one of the early motivations for reflected CRCs: a very slightly faster and more compact implementation.
Sometimes the specification from the original hardware is that the bits are processed from least to most significant, so then you have to use the reflected version.
No, none of this favors little or big endian. Either kind of CRC can be computed just as easily in little-endian or big-endian architectures.
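For illustration, a minimal sketch of both conventions, using the 16-bit polynomial 0x1021 (bit-reversed form 0x8408) with a zero initial value and no final XOR for simplicity:

    #include <stdint.h>
    #include <stddef.h>

    /* Non-reflected (MSB-first) CRC-16, polynomial 0x1021, shifting left.
       The register is deliberately wider than the CRC, so the bits shifted
       above bit 15 must be cleared at the end, as described above. */
    uint16_t crc16_msb_first(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint32_t)data[i] << 8;
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : (crc << 1);
        }
        return (uint16_t)(crc & 0xFFFF); /* drop the bits shifted past the CRC width */
    }

    /* Reflected (LSB-first) variant: same polynomial bit-reversed to 0x8408,
       shifting right, so the extraneous bits simply fall off the bottom. */
    uint16_t crc16_lsb_first(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x8408 : (crc >> 1);
        }
        return crc;
    }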

32 bit CRC with some inputs set to zero. Is this less accurate than dummy data?

Sorry if I should be able to answer this simple question myself!
I am working on an embedded system with a 32-bit CRC done in hardware for speed. A utility exists that I cannot modify that initially takes 3 inputs (words) and returns a CRC.
If a standard 32-bit CRC was implemented, would generating a CRC from one 32-bit word of actual data and two 32-bit words consisting only of zeros produce a less reliable CRC than if I just made up/set some random values for the last two 32-bit words?
Depending on the CRC/polynomial, my limited understanding of CRC would say the more data you put in, the less accurate it is. But doesn't zeroed data reduce accuracy when performing the shifts?
Using zeros will be no different than some other value you might pick. The input word will be just as well spread among the CRC bits either way.
I agree with Mark Adler that zeros are mathematically no worse than other numbers. However, if the utility you can't change does something bad like set the initial CRC to zero, then choose non-zero pad words. An initial CRC=0 + Data=0 + Pads=0 produces a final CRC=0. This is technically valid, but routinely getting CRC=0 is undesirable for data integrity checking. You could compensate for a problem like this with non-zero pad characters, e.g. pad = -1.
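A small demonstration of that degenerate case, using a plain bitwise CRC-32 (reflected polynomial 0xEDB88320) that takes the initial value as a parameter; real CRC-32 implementations usually start from 0xFFFFFFFF and apply a final XOR, which avoids it:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC-32 with a caller-supplied initial value and no final XOR. */
    uint32_t crc32_raw(uint32_t crc, const uint8_t *data, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
        }
        return crc;
    }

    int main(void)
    {
        uint8_t zeros[12] = {0}; /* one data word plus two pad words, all zero */
        /* Prints 00000000: with a zero initial value, all-zero input always yields CRC 0. */
        printf("init=0, all-zero input -> CRC = %08x\n",
               (unsigned)crc32_raw(0, zeros, sizeof zeros));
        return 0;
    }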

Are there any good reasons to use bit shifting except for quick math?

I understand bitwise operations and how they might be useful for different purposes, e.g. permissions. However, I don't seem to understand what use the bit shift operators are. I understand how they work, but I can't think of any scenarios where I might want to use them unless I want to do some really quick multiplication or division. Are there any other reasons to use bit-shifting?
There are many reasons, here are some:
Let's say you represent a black and white image as a sequence of bits and you want to set a single pixel in this image generically. For example, your byte offset may be x>>3 and your bit offset may be x & 0x7, and you can set that bit by: byte = byte | (1 << (x & 0x7)); (see the sketch after this answer).
Implementing data compression algorithms where you deal with variable-length bit sequences, e.g. Huffman coding.
You're interacting with some hardware, e.g. a serial communication device, and you need to read or set some control bits.
For those and other reasons most processors have bit shift and/or rotation instructions as well as other logic instructions (and/or/xor/not).
Historically multiplication and division were significantly slower as they are more complex operations and some CPUs didn't have those at all.
Also see here:
Have you ever had to use bit shifting in real projects?
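To make the black-and-white image example above concrete, a small sketch (the function names are illustrative):

    #include <stdint.h>

    /* 1 bit per pixel: byte offset x >> 3, bit offset x & 0x7, as above. */
    static inline void set_pixel(uint8_t *image, unsigned x)
    {
        image[x >> 3] |= (uint8_t)(1u << (x & 0x7));
    }

    static inline int get_pixel(const uint8_t *image, unsigned x)
    {
        return (image[x >> 3] >> (x & 0x7)) & 1;
    }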
As you indicate, a left shift is the same thing as a multiplication by two. At least it is when we're talking about unsigned quantities. The meaning of a "left shift" of a signed quantity is ... language dependent.
With modern compilers, there's really no difference between writing "i = x*2;" and "i = x << 1;". The compiler will generate the most efficient code. So in that sense there's no reason to prefer shift over multiply.
Some algorithms work by shifting a quantity left by one bit and then setting the low bit to either 0 or 1. Some simple compression algorithms work this way. For example, if your accumulated value is in the variable x, and the current value (0 or 1) is in y, then it makes more sense to write "x = (x << 1) | y", rather than "x = (x * 2) + y". Both do the same thing, but the first is more notationally correct. You don't have to think, "oh, right, multiply by two is the same as a left shift."
Also, when you're talking about algorithms that shift bits, it's more convenient to shift left or right by a particular number of bits than to figure out what multiple of 2 you want to multiply or divide by.
So, whereas there's typically no performance benefit to shifting rather than multiplying--at least not when working with high level languages--there are times when having the ability to shift makes what you're doing more easily understood.
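As a concrete (hypothetical) instance of the shift-left-and-OR accumulation pattern mentioned above:

    #include <stdint.h>

    /* Minimal bit writer: shift the accumulator left by one and OR in the next
       bit (0 or 1); flush whenever a whole byte has accumulated. */
    typedef struct {
        uint8_t *out;   /* where flushed bytes go   */
        unsigned acc;   /* bits accumulated so far  */
        int      nbits; /* how many bits are in acc */
    } bitwriter;

    static void put_bit(bitwriter *w, unsigned y)
    {
        w->acc = (w->acc << 1) | (y & 1); /* the x = (x << 1) | y step from the text */
        if (++w->nbits == 8) {
            *w->out++ = (uint8_t)w->acc;
            w->acc = 0;
            w->nbits = 0;
        }
    }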
There are a lot of places where bit shift operations are regularly used outside of their usage in numerical computations. For example, a Bitboard is a data structure that is commonly used in board games for board representation. Some of the strongest chess engines use this data structure mainly for speed and ease of move generation and evaluation. These programs use bit operations heavily and bit-shift operations specifically are used in a lot of contexts - such as finding bit masks, generating new moves on the board, computing logarithms very quickly, etc. There are even very advanced numerical computations that can be done elegantly by clever use of bit operations. Check out this site for bit twiddling hacks - a lot of those algorithms use shift operators. Bit shift operations are regularly used in device driver programming, codec development, embedded systems programming and so on.
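A toy bitboard illustration of the move-generation point (the a1 = bit 0 ... h8 = bit 63 square mapping is an assumption here):

    #include <stdint.h>

    /* One 64-bit word holds one bit per square. Shifting the whole board by 8
       moves every white pawn one rank forward in a single operation. */
    uint64_t white_pawn_single_pushes(uint64_t white_pawns, uint64_t empty_squares)
    {
        return (white_pawns << 8) & empty_squares; /* only pushes onto empty squares */
    }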
Shifting allows accessing specific bits within a variable. The expression (n >> p) & ((1 << m) - 1) retrieves an m-bit portion of the variable n with an offset of p bits from the right.
This allows your program to use integers that aren't multiples of 8 bits, which is useful for data compression.
For example, I used it in my Netflix Prize programs to pack records (22-bit user ID + 15-bit movie ID + 12-bit date + 3-bit rating) into a uint64_t (with 12 bits to spare).
A very common special case is to pack 8 bool variables into each byte. (Unix file permissions, black-and-white bitmaps, CPU flags registers, etc.)
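A hypothetical sketch of that kind of packing, using the field widths from the Netflix example and the (n >> p) & ((1 << m) - 1) extraction idiom:

    #include <stdint.h>

    /* Layout (low to high): 3-bit rating | 12-bit date | 15-bit movie | 22-bit user. */
    static uint64_t pack_record(uint32_t user, uint32_t movie, uint32_t date, uint32_t rating)
    {
        return ((uint64_t)(user   & ((1u << 22) - 1)) << 30)
             | ((uint64_t)(movie  & ((1u << 15) - 1)) << 15)
             | ((uint64_t)(date   & ((1u << 12) - 1)) <<  3)
             |  (uint64_t)(rating & 0x7);
    }

    /* (n >> p) & ((1 << m) - 1): extract the m = 15 bit movie field at offset p = 15. */
    static uint32_t unpack_movie(uint64_t record)
    {
        return (uint32_t)((record >> 15) & ((1u << 15) - 1));
    }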
Also, bit manipulation is used in UTF-8, which is a very popular character encoding. Unicode characters are represented by distributing their bits across 1, 2, 3, or 4 bytes.

how are bitshift, bitrotate implemented in circuit?

Can you implement bit shift using only logic operations: and, or, not, xor? Can you use bitshift in a bitblt?
To implement bit shifts/rotates in circuits, you can build registers from an array of flip-flops, which in turn you can build e.g. from NAND gates.
In order to implement bit-shifts/rotates you would wire two such registers (or feed back to the same register) by wiring the output of bit 0 to the input of bit 1 etc.
The contents are then transferred on e.g. the next clock rising edge from one array of flip-flops to the other.
You can emulate a left shift with the addition a + a. The results of and/or/not/xor do not depend on adjacent bits, so you can't use them for bit shifts. In a circuit, I'd expect they are hard-wired... You can use bit-shifting for fast hardware multiplication anyway.
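A sketch of the addition point (a + a is a one-bit left shift, so repeating it shifts by any count):

    #include <stdint.h>

    /* Emulate x << k using only addition: each x + x doubles, i.e. shifts left by one bit. */
    uint32_t shl_by_add(uint32_t x, unsigned k)
    {
        while (k--)
            x = x + x;
        return x;
    }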