How to calculate CRC-32 for a 24-bit hex value (for example 0xAAAAAA00)? - crc

Generally, CRC-32 is calculated over 32 bits or multiples of 32 bits. I want to calculate CRC-32 for a 24-bit number. How do I perform such an operation? I'm not from a computer science background, so I don't have a thorough understanding of CRC-32; kindly help.

The actual math in effect appends 32 zero bits to the 24-bit number when calculating a CRC. A software implementation emulates this by cycling the CRC as needed.
To simplify things, assume the number is stored in big-endian format. Then the 24-bit value could be placed into a 32-bit register, and the 32-bit register cycled 32 times (emulating appending 32 zero bits) to produce a CRC. Since putting a 24-bit number into a 32-bit register leaves 8 leading zero bits, the first step could just shift the 24-bit number left 8 bits, then cycle the CRC 24 times.
If processing a byte at a time using a table lookup, with 3 bytes of data holding the 24-bit number, the process XORs the next byte into the upper 8 bits of the 32-bit CRC register, then uses the table to emulate cycling the 32-bit CRC 8 times.
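For illustration, here is a bit-at-a-time sketch of the process described above, assuming an MSB-first (non-reflected) CRC with polynomial 0x04C11DB7, a zero initial value and no final XOR; the common "CRC-32" used by zip/PNG additionally reflects the bits and uses 0xFFFFFFFF as the initial value and final XOR, so adjust to match whichever variant you actually need.

#include <cstdint>
#include <cstdio>

// Sketch: MSB-first CRC-32 of a 24-bit value, one bit at a time.
static uint32_t crc32_of_24bits(uint32_t value24)
{
    uint32_t crc = 0;
    // Process the 3 bytes of the 24-bit value, most significant byte first.
    for (int byte = 2; byte >= 0; --byte) {
        // XOR the next data byte into the upper 8 bits of the CRC register.
        crc ^= ((value24 >> (8 * byte)) & 0xFFu) << 24;
        // Cycle the register 8 times (one dividend bit per iteration).
        for (int bit = 0; bit < 8; ++bit) {
            if (crc & 0x80000000u)
                crc = (crc << 1) ^ 0x04C11DB7u;
            else
                crc <<= 1;
        }
    }
    return crc;
}

int main()
{
    std::printf("CRC-32 of 0xAAAAAA = 0x%08X\n", crc32_of_24bits(0xAAAAAAu));
}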

Related

How to find the decimal value of a 128-bit number represented as two 64-bit numbers?

I want to find the decimal value of a 128-bit number. The 128-bit number has a high part and a low part, each represented as a UINT64. I want to know which methods I can use to see the number in the console; I am using C++, and printf or cout are both OK.
The only method that comes to my mind is iterating over the 128 bits, calculating each bit's decimal value, and accumulating the result.
I tried to search for different methods but could not find any.
Edit: portability is not a concern.
UINT64 a = 0x0000000000000001;
UINT64 b = 0x1000000000000001;
// some printing
// result is 18446744073709551618.
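For what it's worth, here is a sketch of the iterate-over-the-bits idea mentioned above: long division by 10, one bit at a time, collecting one decimal digit per pass. It assumes the value is unsigned and that the two halves are named high and low (names chosen here for illustration).

#include <cstdint>
#include <string>
#include <iostream>

// Sketch: convert a 128-bit unsigned value, given as two 64-bit halves,
// to its decimal string by repeated division by 10.
static std::string u128_to_decimal(uint64_t high, uint64_t low)
{
    if (high == 0 && low == 0)
        return "0";
    std::string digits;
    while (high != 0 || low != 0) {
        uint64_t remainder = 0, newHigh = 0, newLow = 0;
        // Long division by 10, most significant bit first.
        for (int i = 127; i >= 0; --i) {
            uint64_t bit = (i >= 64) ? ((high >> (i - 64)) & 1u) : ((low >> i) & 1u);
            remainder = (remainder << 1) | bit;
            uint64_t q = (remainder >= 10) ? 1u : 0u;
            if (q)
                remainder -= 10;
            if (i >= 64)
                newHigh |= q << (i - 64);
            else
                newLow |= q << i;
        }
        digits.push_back(char('0' + remainder));  // one decimal digit per pass
        high = newHigh;
        low = newLow;
    }
    return std::string(digits.rbegin(), digits.rend());
}

int main()
{
    // high = 1, low = 2 gives 2^64 + 2 = 18446744073709551618
    std::cout << u128_to_decimal(1, 2) << "\n";
}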

Java convert decimal to 12 bits with the bitwise operator &

I'm using Java for this.
I have the code 97, which represents the 'a' character in ASCII. Converting 97 to binary gives me 1100001 (7 bits). I want to convert this to 12 bits. I could add leading 0s to the existing 7 bits until it reaches 12 bits, but this seems inefficient. I've been thinking of using the & bitwise operator to zero all but the lowest bits of 97 to reach 12 bits. Is this possible, and how can I do it?
byte buffer = (byte) (code & 0xff);
The above line of code will give me 01100001, no?
which gives me 1100001 (7 bits)
Your value buffer is 8 bits. Because that's what a byte is: 8 bits.
If code has type int (detail added in a comment below), it is already a 32-bit number with, in this case, 25 leading zero bits. You need do nothing with it; it already has all the bits you're asking for.
There is no Java integral type with 12 bits, nor is one directly achievable, since 12 is not a multiple of the byte size. It's unclear why you want exactly 12 bits. What harm do you think an extra 20 zero bits will do?
The important fact is that in Java, integral types (char, byte, int, etc.) have a fixed number of bits, defined by the language specification.
With reference to your original code & 0xff: code has 32 bits. In general these bits could have any value.
In your particular case, you told us that code was 97, and therefore we know the top 25 bits of code were zero; this follows from the binary representation of 97.
Again in general, & 0xff would set all but the low 8 bits to zero. In your case, that had no actual effect because they were already zero. No bits are "added" - they are always there.
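A small sketch of the masking being discussed (written here in C++ for consistency with the rest of this page; Java's int is also 32 bits wide, so the same expressions behave identically there):

#include <cstdio>

int main()
{
    int code = 97;            // 'a' in ASCII: 1100001 in binary, stored in a 32-bit int
    int low8 = code & 0xFF;   // keeps the low 8 bits (a no-op here: the upper bits are already 0)
    int low12 = code & 0xFFF; // keeps the low 12 bits, if a 12-bit field is really wanted
    std::printf("%d %d %d\n", code, low8, low12);  // all three print 97
}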

Difference between byte flip and byte swap

I am trying to understand the difference because of the byte flip functionality I see in the Calculator on Mac in Programmer's view.
So I wrote a program to byte-swap a value, which is what we do to go from little to big endian or the other way round, and I call that a byte swap. But when I see byte flip, I do not understand what exactly it is and how it is different from byte swap. I did confirm that the results are different.
For example, for an int with value 12976128
Byte Flip gives me 198;
Byte swap gives me 50688.
I want to implement an algorithm for byte flip, since 198 is the value I want to get when reading something. Everything I find on Google says byte flip is the same as byte swap, which isn't the case for me.
Byte flip and byte swap are synonyms.
The results you see are just two different ways of swapping the bytes, depending on whether you look at the number as a 32-bit number (consisting of 4 bytes), or as the smallest size of number that can hold 12976128, which is 24 bits or 3 bytes.
The 4-byte swap is more usual in computing, because 32-bit processors are currently predominant (even 64-bit architectures still do much of their arithmetic on 32-bit numbers, partly because of backward-compatible software infrastructure, partly because it is enough for many practical purposes). But the Mac Calculator seems to use the minimum-width swap, in this case a 3-byte swap.
12976128, when converted to hexadecimal, gives you 0xC60000. That's 3 bytes total; each hexadecimal digit is 4 bits, or half a byte, wide. The bytes to be swapped are 0xC6, zero, and another zero.
After the 3-byte swap: 0x0000C6 = 198
After the 4-byte swap: 0x0000C600 = 50688
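For example, a minimal sketch contrasting the two interpretations (the function names are illustrative):

#include <cstdint>
#include <cstdio>

// Full 4-byte swap of a 32-bit value.
static uint32_t swap4(uint32_t v)
{
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) << 8)  |
           ((v & 0x00FF0000u) >> 8)  |
           ((v & 0xFF000000u) >> 24);
}

// Minimum-width (3-byte) swap: treat v as a 24-bit quantity and reverse its 3 bytes.
static uint32_t swap3(uint32_t v)
{
    return ((v & 0x0000FFu) << 16) |
            (v & 0x00FF00u)        |
           ((v & 0xFF0000u) >> 16);
}

int main()
{
    uint32_t value = 12976128;                        // 0x00C60000
    std::printf("4-byte swap: %u\n", swap4(value));   // 50688 (0x0000C600)
    std::printf("3-byte swap: %u\n", swap3(value));   // 198   (0x0000C6)
}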

Compiler and endianness

I have an ISA which is "kind of" little endian.
The basic memory unit is an integer, not a byte. For example
00000000: BEFC03FF 00008000
represents that the "low" integer is BEFC03FF and the "high" integer is 00008000.
I need to read the value represented by some bits, for example bits 31 through 47.
What I am doing in VS10 (C++) is generating uint64_t var = 0x00008000BEFC03FF
and then using the relevant mask and checking the value of var & mask.
Is it legal to do it that way? I am making some assumptions about the bit arrangement of uint64_t - is that legal?
Can I assume that for every compiler and every OS (independent of the hardware) the arrangement of bits in the uint64_t will be this way?
You are right to be concerned; it does matter.
However, in this particular case, since the ISA is little endian, i.e. if it has AD[31:0], the least significant bit of an integer is packed into bit 0, and assuming your processor is also little endian, there is nothing to worry about. When the data is written to memory, it should have the right byte order:
0000 FF
0001 03
0002 ..
Suppose your external bus protocol is big endian and your processor is little endian. Then a 16-bit integer in your processor, say 0x1234, would be 0001_0010_0011_0100 in native format, but 0010_1100_0100_1000 on the bus (assuming the bus is 16 bits wide).
In this case, when multi-byte data crosses the endianness boundary, the hardware will only swap bits inside a byte, because it must preserve memory contiguity between bytes. After the hardware swap, it becomes:
0000 0001_0010
0001 0011_0100
Then it is up to the software to swap the byte order.
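As a concrete illustration of the shift-and-mask approach from the question, here is a sketch that assembles the two 32-bit words into a uint64_t and extracts bits 31 through 47; it assumes a little-endian host, as discussed above.

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t low = 0xBEFC03FFu;   // "low" integer, the first word in memory
    uint32_t high = 0x00008000u;  // "high" integer, the second word in memory

    // Combine the two words explicitly, independent of how they sit in memory.
    uint64_t var = (static_cast<uint64_t>(high) << 32) | low;  // 0x00008000BEFC03FF

    // Extract bits 31..47 (17 bits): shift them down, then mask.
    uint64_t field = (var >> 31) & ((1ull << 17) - 1);
    std::printf("bits 31..47 = 0x%llX\n", (unsigned long long)field);  // 0x10001
}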

Retain data from 27-bit to 19-bit conversion

I have an access control solution where the 27-bit format is 13 bits for the facility code and 14 bits for the badge ID. I need to convert it into 8 bits for the facility code and 16 bits for the badge ID.
What is the largest number I can convert from on the 27-bit side and still get the same result with the 8-bit facility code size? Meaning, if I have 13 bits for the facility code, how many bits can I chop off and still get the same result in an 8-bit field?
If the facility code is never greater than 255, you can chop off the 5 most significant bits (i.e. keep the 8 least significant ones), without losing information.
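For illustration, here is a sketch of the repacking under the assumption that the 27-bit word holds the 13-bit facility code in its upper bits and the 14-bit badge ID in its lower bits (the actual card format may order the fields differently):

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t raw27 = (123u << 14) | 4567u;        // example: facility code 123, badge ID 4567

    uint32_t facility = (raw27 >> 14) & 0x1FFFu;  // 13-bit facility code
    uint32_t badge = raw27 & 0x3FFFu;             // 14-bit badge ID

    if (facility > 255u) {
        std::printf("facility code %u does not fit in 8 bits\n", facility);
        return 1;
    }

    // Repack as an 8-bit facility code followed by a 16-bit badge ID (24 bits total).
    uint32_t packed = (facility << 16) | badge;
    std::printf("packed = 0x%06X\n", packed);
    return 0;
}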