I have an access control solution where the 27-bit format is 13 bits for the facility code and 14 bits for the badge ID. However, I need to convert it to 8 bits for the facility code and 16 bits for the badge ID.
What is the largest facility code I can convert from the 27-bit side and still get the same result with an 8-bit facility code? In other words, if I have 13 bits for the facility code, how many bits can I chop off and still get the same result in an 8-bit field?
If the facility code is never greater than 255, you can chop off the 5 most significant bits (i.e. keep the 8 least significant ones), without losing information.
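As a sketch, assuming the facility code occupies the 13 high bits of the 27-bit word (your format may order the fields differently, and `repack` is just an illustrative name):

```cpp
#include <cstdint>

// Split a 27-bit credential (13-bit facility code, 14-bit badge ID),
// then repack as 8-bit facility code + 16-bit badge ID.
// Assumes the facility code sits in the top 13 bits of the 27-bit word.
uint32_t repack(uint32_t raw27) {
    uint32_t facility = (raw27 >> 14) & 0x1FFF;  // top 13 bits
    uint32_t badge    = raw27 & 0x3FFF;          // low 14 bits
    uint32_t fac8     = facility & 0xFF;         // keep the 8 least significant bits
    return (fac8 << 16) | badge;                 // 8-bit + 16-bit layout
}
```

As long as the original facility code is 255 or less, the `& 0xFF` discards only zero bits and the result round-trips exactly.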
Generally, CRC-32 is calculated over 32 bits or multiples of 32 bits. I want to calculate a CRC-32 for a 24-bit number. How can I do that? I'm not from a computer science background, so I don't have a thorough understanding of CRC-32; kindly help.
The actual math in effect appends 32 zero bits to the 24-bit number when calculating a CRC. A software implementation emulates this by cycling the CRC register as needed.
To simplify things, assume the number is stored in big-endian format. The 24-bit value can be placed into a 32-bit register and the register cycled 32 times (emulating appending 32 zero bits) to produce a CRC. Since placing a 24-bit number into a 32-bit register leaves 8 leading zero bits, the first step can instead shift the 24-bit number left 8 bits and then cycle the CRC only 24 times.
If processing a byte at a time using a table lookup, with the 24-bit number held in 3 bytes of data, the process XORs the next byte into the upper 8 bits of the 32-bit CRC register, then uses the table to emulate cycling the 32-bit CRC 8 times.
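A sketch of that byte-at-a-time process, assuming the non-reflected CRC-32 polynomial 0x04C11DB7 with a zero initial value and no final XOR (real-world CRC-32 variants differ on these parameters, so adjust to match yours):

```cpp
#include <cstdint>

// Byte-at-a-time CRC-32 (MSB-first, polynomial 0x04C11DB7, zero init,
// no reflection or final XOR) of a 24-bit value: XOR each data byte
// into the top 8 bits of the CRC register, then use a precomputed
// table to emulate cycling the register 8 times.
static uint32_t crc_table[256];

void init_crc_table() {
    for (uint32_t i = 0; i < 256; ++i) {
        uint32_t c = i << 24;                 // byte placed in the top 8 bits
        for (int k = 0; k < 8; ++k)           // cycle the register 8 times
            c = (c & 0x80000000u) ? (c << 1) ^ 0x04C11DB7u : (c << 1);
        crc_table[i] = c;
    }
}

uint32_t crc32_of_24bit(uint32_t value24) {
    uint32_t crc = 0;
    // Process the three bytes big-endian, most significant byte first.
    for (int shift = 16; shift >= 0; shift -= 8) {
        uint8_t byte = (value24 >> shift) & 0xFF;
        crc = (crc << 8) ^ crc_table[((crc >> 24) ^ byte) & 0xFF];
    }
    return crc;
}
```

Each table entry is exactly the result of cycling one byte through the shift register 8 times, which is why the loop body can consume a whole byte per step.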
I'm using Java for this.
I have the code 97, which represents the 'a' character in ASCII. I convert 97 to binary, which gives me 1100001 (7 bits). I want to convert this to 12 bits. I could add leading 0s to the existing 7 bits until it reaches 12 bits, but this seems inefficient. I've been thinking of using the bitwise & operator to zero all but the lowest bits of 97 to reach 12 bits. Is this possible, and how can I do it?
byte buffer = (byte) (code & 0xff);
The above line of code will give me 01100001, no?
which gives me 1100001 (7 bits)
Your value buffer is 8 bits. Because that's what a byte is: 8 bits.
If code has type int (as you clarified in a comment), it is already a 32-bit number with, in this case, 25 leading zero bits. You need do nothing with it. It's got all the bits you're asking for.
There is no Java integral type with 12 bits, nor is one directly achievable, since 12 is not a multiple of the byte size. It's unclear why you want exactly 12 bits. What harm do you think an extra 20 zero bits will do?
The important fact is that in Java, integral types (char, byte, int, etc.) have a fixed number of bits, defined by the language specification.
With reference to your original code & 0xff: code has 32 bits. In general, these bits could have any value.
In your particular case, you told us that code was 97, and therefore we know the top 25 bits of code were zero; this follows from the binary representation of 97.
Again in general, & 0xff would set all but the low 8 bits to zero. In your case, that had no actual effect because they were already zero. No bits are "added" - they are always there.
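The same arithmetic can be demonstrated outside Java (sketched here in C++, whose unsigned masking matches Java's behavior for non-negative int values; `mask_low8` is just an illustrative name):

```cpp
#include <cstdint>

// Java's (code & 0xff) on a non-negative int behaves like this unsigned
// C++ masking: bits 8..31 are forced to zero, bits 0..7 are unchanged.
// When the high bits are already zero, the mask has no effect at all.
uint32_t mask_low8(uint32_t code) {
    return code & 0xFF;
}
```

For code = 97 the result is still 97: the top 25 bits were already zero, so nothing is "added" or removed.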
I found a wonderful project called python-bitstring, and I believe a C++ port could be very helpful in quite a few situations (certainly in some projects of mine).
While porting the read/write/patch bytes methods, I didn't hit any problems at all; it was as easy as translating Python to C++.
Anyway, now I'm getting to the bits methods and I'm not really sure how to express that functionality.
For example, let's say I want to create a method like:
readBits(uint64_t n_bits_to_read, uint64_t n_bits_to_skip = 0) {...}
Let's suppose, for the sake of this example, that this->data is a chunk of memory (void *) holding the entire data from which I'm reading.
So, the method will receive a number of bits to read and an optional number of bits to skip.
this->readBits(5, 2);
That way I'll be reading bits from position 2 to position 6 inclusive (forget little/big endian for the sake of this example).
0 1 1 0 1 0 1 1
‾ ‾ ‾ ‾ ‾
I can't return anything smaller than a byte (or can I?), so even if I actually read 5 bits, I'll still be returning 8. But what if I read 14 bits and skip 1? Is there any other way I could return only those bits in some more useful way?
I'm thinking about a few common situations, for example:
Do the first 14 bits match "010101....."
Do the next 13 bits after skipping 2 match "00011010....."
Read the first 5 bits and convert them to an int/float
Read 7 bits after skipping 5 and convert them to an int/float
My question is: what type of data/structure/methods should I return/expose in order to make working with bits easier (or at least easier for the previously described situations).
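One possible shape for the return value, sketched under the assumption of MSB-first bit order within each byte (the names `BitSlice` and `readBits` are illustrative, not taken from python-bitstring): return the extracted bits right-aligned in a uint64_t, together with a count of how many are valid. That covers both the "compare against a pattern" and "convert to an int" cases for reads up to 64 bits.

```cpp
#include <cstdint>

// The extracted bits, right-aligned: the last bit read is bit 0.
struct BitSlice {
    uint64_t value;  // bits packed into the low end
    uint64_t count;  // number of valid bits (<= 64)
};

// Reads n_bits_to_read bits after skipping n_bits_to_skip, MSB-first
// within each byte, from a raw buffer.
BitSlice readBits(const uint8_t* data,
                  uint64_t n_bits_to_read,
                  uint64_t n_bits_to_skip = 0) {
    BitSlice out{0, n_bits_to_read};
    for (uint64_t i = 0; i < n_bits_to_read; ++i) {
        uint64_t pos = n_bits_to_skip + i;
        uint8_t  bit = (data[pos / 8] >> (7 - pos % 8)) & 1;
        out.value = (out.value << 1) | bit;  // shift in MSB-first
    }
    return out;
}
```

With the example byte above (01101011), readBits(data, 5, 2) returns value 0b10101 (21) with count 5; a pattern test is then a comparison against an integer literal, and "convert to an int" is just reading .value.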
I have 13 numbers drawn from a set with 13 types of data; each type has 4 items, so 52 items in total. We can number the items 1 through 13, so there are four "1"s, four "2"s, ..., four "13"s in the set. The 13 numbers drawn from the set are random. The whole process is repeated millions of times or more, so I need an efficient way to store the 13 numbers. I was thinking of using some coding method to compress the 13 integers into bits. For example, I first count how many "1"s, "2"s, ... were drawn, encode the count for each item with 2 bits, and use 1 more bit to denote whether the item was drawn at all. So each item needs 3 bits, and 13 items cost 39 bits in total. Stored in a 64-bit integer, that still takes 8 bytes. But that is still too much, since I am talking about millions or billions of draws, and each set has to be written to a file later. At 8 bytes per set that comes to about 80 GB of data; if I can cut that in half, I save 40 GB. Any idea how to compress this structure more efficiently? I also thought of using 5 bytes instead, but then I need to handle two different types (one int plus one char). Is there any library in C++ that can easily do the coding/compression for me?
Thanks.
Google's Protocol Buffers can store integers with fewer bits, depending on their value. It might reduce your storage significantly. See http://code.google.com/p/protobuf/
The actual protocol is described here: https://developers.google.com/protocol-buffers/docs/encoding
As for compression, have you looked at how zlib handles your data?
With your scheme, every 39-bit hand stored in an 8-byte (64-bit) slot wastes 25 bits, about 40%.
If you batch hands together, you can represent them without wasting those bits.
39 and 64 have no common factors, so the lowest common multiple is just the product 39 * 64 = 2496 bits, or 312 bytes. That block holds 64 hands and is about 60% of the size of your current scheme.
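A sketch of that batching, using a hypothetical BitPacker that appends each 39-bit hand back-to-back with no per-hand padding (MSB-first within each byte; the name is illustrative):

```cpp
#include <cstdint>
#include <vector>

// Appends the low n_bits of each value to a growing byte buffer,
// packing hands back-to-back with no per-hand padding.
struct BitPacker {
    std::vector<uint8_t> bytes;
    uint32_t bit_pos = 0;  // bits already used in the last byte (0..7)

    void append(uint64_t value, uint32_t n_bits) {
        for (int i = static_cast<int>(n_bits) - 1; i >= 0; --i) {  // MSB first
            if (bit_pos == 0) bytes.push_back(0);                  // start a new byte
            uint8_t bit = (value >> i) & 1;
            bytes.back() |= bit << (7 - bit_pos);
            bit_pos = (bit_pos + 1) % 8;
        }
    }
};
```

Appending 64 hands of 39 bits each fills exactly 2496 bits, i.e. 312 bytes, with no wasted space; any multiple of 64 hands stays byte- and word-aligned.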
Try googling LZ77 and LZW compression.
Maybe a bit more sophisticated than you're looking for, but check out HDF5.
In order to reduce the data size sent over the network, I would like to write only as many bits to the stream as are needed to hold the value. For example, if 40 bits can hold the value, I want to write 40 bits to the stream, not 64. Or, if the value can be stored in 3 bits, I would like to write just 3 bits to the binary stream, not 8 bits with 5 of them set to 0.
My question is: how do I write non-byte-aligned data to a binary stream in C++?
The stream works with bytes, not bits, so you'll have to work with multiples of 8 bits. You can write 40 bits to the stream because that's exactly 5 bytes.
You are inventing your own compression scheme and will almost certainly do worse than the experts have done.
Your network stack may also already be doing compression, so you might be duplicating work that is already being done.
Your question is sorely lacking in detail, which makes a better answer impossible.