I have an array of 256 unsigned integers called frequencies[256] (one integer for each ASCII value). My goal is to read through an input and, for each character, increment the integer in the array that corresponds to it (for example, the character 'A' will cause the frequencies[65] integer to increase by one). When the input is over, I must output each integer as 4 characters in little-endian form.
So far I have made a loop that goes through the input and increments each corresponding integer in the array. But I am very confused about how to output each integer in little-endian form. I understand that each of the four bytes of each integer should be output as a character (for instance, the unsigned integer 1 in little endian is "00000001 00000000 00000000 00000000", which I would want to output as the 4 ASCII characters that correspond to those bytes).
But how do I get at the binary representation of an unsigned integer in my code, and how would I go about chopping it up and rearranging it?
Thanks for the help.
A hardware-portable solution is to extract each byte with shifts and masks:
int freqs[256];
for (int i = 0; i < 256; ++i)
    printf("%02x %02x %02x %02x\n", (freqs[i] >>  0) & 0xFF,
                                    (freqs[i] >>  8) & 0xFF,
                                    (freqs[i] >> 16) & 0xFF,
                                    (freqs[i] >> 24) & 0xFF);
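If you want the four bytes emitted as raw characters (as the question asks) rather than as hex text, the same shift-and-mask idea works with putchar. A minimal sketch, assuming the counts live in your frequencies array and the output goes to stdout:
#include <stdio.h>

unsigned int frequencies[256];   /* assumed to be filled by your counting loop */

void write_counts_little_endian(void)
{
    for (int i = 0; i < 256; ++i) {
        /* lowest-order byte first, regardless of the machine's own endianness */
        putchar( frequencies[i]        & 0xFF);
        putchar((frequencies[i] >>  8) & 0xFF);
        putchar((frequencies[i] >> 16) & 0xFF);
        putchar((frequencies[i] >> 24) & 0xFF);
    }
}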
You can use memcpy, which copies a block of memory:
char tab[4];
memcpy(tab, frequencies + i, sizeof(int));
Now tab[0], tab[1], etc. will be your characters.
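To actually emit those bytes you can then write the small buffer out, e.g. with fwrite. A minimal sketch, assuming sizeof(int) is 4 and output goes to stdout; note that memcpy reproduces the host's byte order, so this yields little-endian output only on a little-endian machine:
#include <stdio.h>
#include <string.h>

unsigned int frequencies[256];   /* assumed to be filled by the counting loop */

void dump_counts(void)
{
    for (int i = 0; i < 256; ++i) {
        char tab[4];
        memcpy(tab, frequencies + i, sizeof(int));  /* copies in the host's byte order */
        fwrite(tab, 1, sizeof tab, stdout);
    }
}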
A program to swap from big to little endian: Little Endian - Big Endian Problem.
To understand if your system is little or big endian: https://stackoverflow.com/a/1024954/2436175.
Transform your chars/integers into a set of printable bits: https://stackoverflow.com/a/7349767/2436175
It's not really clear what you mean by "little endian" here. Integers don't have endianness per se; endianness only comes into play when you cut them up into smaller pieces. So which smaller pieces do you mean: bytes or characters? If characters, just convert in the normal way and reverse the generated string. If bytes (or any other smaller piece), each individual byte can be represented as a function of the int: i & 0xFF calculates the low-order byte, (i >> 8) & 0xFF the next lowest, and so forth. (If the bytes aren't 8 bits, then change the shift value and the mask correspondingly.)
With regard to your second paragraph: a single byte of an int doesn't necessarily correspond to a character, regardless of the encoding. For the four bytes you show, for example, none of them corresponds to a character in any of the usual encodings.
With regard to the last paragraph: to get the binary representation of an unsigned integer, use the same algorithm that you would use for any representation:
#include <algorithm>
#include <cassert>
#include <string>

std::string
asText( unsigned int value, int base, int minDigits = 1 )
{
    static std::string digits( "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ" );
    assert( base >= 2 && base <= static_cast<int>( digits.size() ) );
    std::string results;
    while ( value != 0 || minDigits > 0 ) {
        results += digits[ value % base ];
        value /= base;
        -- minDigits;
    }
    // results is now little endian; reverse it for the normal big-endian text.
    std::reverse( results.begin(), results.end() );
    return results;
}
Called with base equal to 2, this will give you your binary
representation.
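For instance (my own illustration of the call, assuming asText from above is in scope):
#include <iostream>

// asText( value, base, minDigits ) as defined above

int main()
{
    std::cout << asText( 1, 2, 32 ) << '\n';   // 00000000000000000000000000000001
    std::cout << asText( 0xCAFE, 16 ) << '\n'; // CAFE
}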
I came across the following code to convert 16-bit numbers to 10-bit numbers and store them inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if (xAccl > 511) {
    xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & applies a mask: in this case it zeroes out the 6 highest bits of the byte, keeping only the low 2 bits.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11, i.e. keeps only 2 bits; * 256 is the same as << 8, i.e. it pushes those 2 bits into the 9th and 10th positions; adding data[0] combines the two bytes (personally I'd have used |, not +).
So xAccl now holds the first 10 bits, using big-endian ordering.
The > 511 seems to be a sign check; essentially, it is saying "if the 10th bit is set, treat the entire thing as a negative integer as though we'd used 10-bit two's complement rules".
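Putting that interpretation into a stand-alone helper may make it clearer; the function name to_signed_10bit below is mine, purely for illustration:
#include <stdio.h>

/* Combine the low 2 bits of hi with the 8 bits of lo into a 10-bit value,
   then interpret it as 10-bit two's complement (range -512..511). */
int to_signed_10bit(unsigned char hi, unsigned char lo)
{
    int raw = ((hi & 0x03) << 8) | lo;    /* 0..1023, i.e. the value modulo 1024 */
    return raw > 511 ? raw - 1024 : raw;  /* the sign check from the question */
}

int main(void)
{
    printf("%d\n", to_signed_10bit(0x03, 0xFF)); /* prints -1  */
    printf("%d\n", to_signed_10bit(0x01, 0x00)); /* prints 256 */
    return 0;
}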
I need a function to read n bits starting from bit x (bit indexing starts at zero), and if the result is not byte-aligned, pad it with zeros. The function will receive a uint8_t array as input, and should return a uint8_t array as well. For example, I have a file with the following contents:
1011 0011 0110 0000
Read three bits starting from the third bit (x=2, n=3); result:
1100 0000
There's no (theoretical) limit on input and bit-pattern lengths.
Implementing such a bitfield extraction efficiently, beyond the direct bit-serial algorithm, isn't precisely hard but is a tad cumbersome.
Effectively it boils down to an inner loop reading a pair of bytes from the input for each output byte, shifting the resulting word into place based on the source bit offset, and writing back the upper or lower byte. In addition, the final output byte is masked based on the length.
Below is my (poorly-tested) attempt at an implementation:
#include <limits.h>
#include <stddef.h>

void extract_bitfield(unsigned char *dstptr, const unsigned char *srcptr, size_t bitpos, size_t bitlen) {
    // Skip to the source byte covering the first bit of the range
    srcptr += bitpos / CHAR_BIT;
    // Similarly work out the expected, inclusive, final output byte
    unsigned char *endptr = &dstptr[bitlen / CHAR_BIT];
    // Truncate the bit-positions to offsets within a byte
    bitpos %= CHAR_BIT;
    bitlen %= CHAR_BIT;
    // Scan through and write out a correctly shifted version of every destination byte
    // via an intermediate shifter register
    unsigned long accum = *srcptr++;
    while(dstptr <= endptr) {
        accum = accum << CHAR_BIT | *srcptr++;
        *dstptr++ = accum << bitpos >> CHAR_BIT;
    }
    // Mask out the unwanted LSB bits not covered by the length
    *endptr &= ~(UCHAR_MAX >> bitlen);
}
Beware that the code above may read past the end of the source buffer, and somewhat messy special handling is required if you can't arrange the buffers to allow this. It also assumes sizeof(long) != 1.
Of course, to get efficiency out of this you will want to use as wide a native word as possible. However, if the target buffer isn't necessarily word-aligned then things get even messier. Furthermore, little-endian systems will need byte-swizzling fix-ups.
Another subtlety to take heed of is the potential inability to shift a whole word at once; that is, shift counts are frequently interpreted modulo the word length.
Anyway, happy bit-hacking!
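For what it's worth, a quick usage sketch against the example from the question (test values are mine), under the caveats above:
#include <stdio.h>

/* extract_bitfield as defined above */

int main(void)
{
    const unsigned char src[2] = { 0xB3, 0x60 };  /* 1011 0011 0110 0000 */
    unsigned char dst[1] = { 0 };
    extract_bitfield(dst, src, 2, 3);             /* x = 2, n = 3 */
    printf("%02X\n", dst[0]);                     /* prints C0, i.e. 1100 0000 */
    return 0;
}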
Basically it's still a bunch of shift and addition operations.
I'll use a slightly larger example to demonstrate this.
Suppose we are given an input of 4 characters, with x = 10 and n = 18.
00101011 10001001 10101110 01011100
First we need to locate the character that contains our first bit, by x / 8, which gives us 1 (the second character) in this case. We also need the offset within that character, x % 8, which equals 2.
Now we can get our first character of the solution in three operations.
Left shift the second character 10001001 by 2 bits, which gives us 00100100.
Right shift the third character 10101110 by 6 bits (comes from 8 - 2), which gives us 00000010.
Adding these two characters gives us the first character of the result: 00100110.
Loop this routine for n / 8 rounds. And if n % 8 is not 0, extract that many bits from the next character; you can do this in many ways.
So in this example, our second round will give us 10111001, and in the last step we get 01, then pad the remaining bits with 0s.
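A rough sketch of that routine in code (my own illustration, assuming 8-bit chars and that the caller has sized dst to hold the padded result):
#include <stddef.h>

/* Copy n bits starting at bit x from src into dst, left-aligned and zero-padded. */
void read_bits(unsigned char *dst, const unsigned char *src, size_t x, size_t n)
{
    size_t   byte = x / 8;              /* byte containing the first bit      */
    unsigned off  = (unsigned)(x % 8);  /* offset of that bit within the byte */
    size_t   outbytes = (n + 7) / 8;

    for (size_t i = 0; i < outbytes; ++i) {
        size_t want = (n - 8 * i < 8) ? n - 8 * i : 8;   /* bits needed for this output byte */
        unsigned char out = (unsigned char)(src[byte + i] << off);
        if (want > 8 - off)                 /* bits spill into the next source byte */
            out |= src[byte + i + 1] >> (8 - off);
        dst[i] = out;
    }
    if (n % 8)                              /* zero the padding bits in the last byte */
        dst[outbytes - 1] &= (unsigned char)(0xFF << (8 - n % 8));
}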
The code that I'm using for reading .wav file data into a 2D array:
int signal_frame_width = wavHeader.SamplesPerSec / 100; //10ms frame
int total_number_of_frames = numSamples / signal_frame_width;
double** loadedSignal = new double *[total_number_of_frames]; //array that contains the whole signal
int iteration = 0;
int16_t* buffer = new int16_t[signal_frame_width];
while ((bytesRead = fread(buffer, sizeof(buffer[0]), signal_frame_width, wavFile)) > 0)
{
    loadedSignal[iteration] = new double[signal_frame_width];
    for (int i = 0; i < signal_frame_width; i++) {
        // value normalisation:
        int16_t c = (buffer[i + 1] << 8) | buffer[i];
        double normalisedValue = c / 32768.0;
        loadedSignal[iteration][i] = normalisedValue;
    }
    iteration++;
}
The problem is in this part; I don't exactly understand how it works:
int16_t c = (buffer[i + 1] << 8) | buffer[i];
It's an example taken from here.
I'm working with 16-bit .wav files only. As you can see, my buffer is loading (e.g. for sampling freq. = 44.1 kHz) 441 elements (each a 2-byte signed sample). How should I change the above code?
The original example, from which you constructed your code, used an array where each individual element represented a byte. It therefore needs to combine two consecutive bytes into a 16-bit value, which is what this line does:
int16_t c = (buffer[i + 1] << 8) | buffer[i];
It shifts the byte at index i+1 (here assumed to be the most significant byte) left by 8 positions, and then ORs the byte at index i onto that. For example, if buffer[i+1]==0x12 and buffer[i]==0x34, then you get
buffer[i+1] << 8 == 0x12 << 8 == 0x1200
0x1200 | buffer[i] == 0x1200 | 0x34 == 0x1234
(The | operator is a bitwise OR.)
Note that you need to be careful whether your WAV file is little-endian or big-endian (but the original post explains that quite well).
Now, if you store the resulting value in a signed 16-bit integer, you get a value between −32768 and +32767. The point of the actual normalization step (dividing by 32768) is just to bring the value range down to [−1.0, 1.0).
In your case above, you appear to already be reading into a buffer of 16-bit values. Note that your code will therefore only work if the endianness of your platform matches that of the WAV file you are working with. But if this assumption is correct, then you don't need the code line which you do not understand. You can just convert every array element into a double directly:
double normalisedValue = buffer[i]/32768.0;
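So the inner part of your reading loop would boil down to something like this (a sketch that keeps your own variable names and assumes the platform's endianness matches the file, as discussed above):
loadedSignal[iteration] = new double[signal_frame_width];
for (int i = 0; i < signal_frame_width; i++) {
    // buffer[i] is already a signed 16-bit sample; just scale it into [-1.0, 1.0)
    loadedSignal[iteration][i] = buffer[i] / 32768.0;
}
iteration++;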
If buffer were an array of bytes, then that piece of code would interpret two consecutive bytes as a single 16-bit integer (assuming little-endian encoding). The | operator performs a bitwise OR on the bits of the two bytes. Since we wish to interpret the two bytes as a single 2-byte integer, we must shift the bits of one of them 8 bits (1 byte) to the left. Which one depends on whether they are ordered in little-endian or big-endian order. Little-endian means that the least significant byte comes first, so we shift the second byte 8 bits to the left.
Example:
First byte: 0101 1100
Second byte: 1111 0100
Now shift second byte:
Second "byte": 1111 0100 0000 0000
First "byte": 0000 0000 0101 1100
Bitwise OR-operation (if either is 1, then 1. If both are 0, then 0):
16-bit integer: 1111 0100 0101 1100
In your case, however, the bytes in your file have already been interpreted as 16-bit ints using whatever endianness the platform has, so you do not need this step. However, in order to correctly interpret the bytes in the file, one must assume the same byte order they were written in. Therefore, one usually adds this step to ensure that the code works independently of the endianness of the platform, relying instead on the expected byte order of the files (as most file formats will specify what the byte order should be).
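If you ever do read raw bytes and want that combination step spelled out, a minimal helper might look like this (my own sketch, assuming the file stores samples least-significant byte first):
#include <cstdint>

// Combine two bytes from a little-endian file into a signed 16-bit sample,
// independent of the platform's own endianness.
int16_t combine_le(uint8_t low, uint8_t high)
{
    return static_cast<int16_t>((static_cast<uint16_t>(high) << 8) | low);
}

// Example from above: combine_le(0x5C, 0xF4) yields the bit pattern 1111 0100 0101 1100.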
I've got to program a function that receives
a binary number like 10001, and
a decimal number that indicates how many shifts I should perform.
The problem is that if I use the C++ operator <<, the zeroes are pushed in from behind but the leading digits aren't dropped... For example
shifLeftAddingZeroes(10001,1)
returns 100010 instead of 00010, which is what I want.
I hope I've made myself clear =P
I assume you are storing that information in an int. Take into consideration that this number actually has more leading zeroes than what you see; stored in 16 bits, your number is 00000000 00010001. Maybe try AND-ing it with a mask that has as many 1 bits as you want to keep after shifting? (Assuming you want to stick to bitwise operations.)
What you want is to bit-shift and then limit the number of output bits which can be active (hold a value of 1). One way to do this is to create a mask for the number of bits you want, then AND the bit-shifted value with that mask. Below is a code sample for doing that; just replace int_type with the type of value you're using, or make it a template type.
int_type shiftLeftLimitingBitSize(int_type value, int numshift, int_type numbits = some_default) {
    int_type mask = 0;
    for (unsigned int bit = 0; bit < numbits; bit++) {
        mask += 1 << bit;
    }
    return (value << numshift) & mask;
}
Your output for 10001,1 would now be shiftLeftLimitingBitSize(0b10001, 1, 5) == 0b00010.
Realize that unless your numbits is exactly the length of your integer type, you will always have excess 0 bits on the 'front' of your number.
Given the fact that I generate a string containing "0" and "1" of a random length, how can I write the data to a file as bits instead of ASCII text?
Given that my random string has 12 bits, I know that I should write 2 bytes (or add 4 more 0 bits to make 16 bits) in order to write the 1st byte and the 2nd byte.
Regardless of the size, given I have an array of char[8] or int[8] or a string, how can I write each individual group of bits as one byte in the output file?
I've googled a lot everywhere (it's my 3rd day looking for an answer) and didn't understand how to do it.
Thank you.
You don't do I/O with an array of bits.
Instead, you do two separate steps. First, convert your array of bits to a number. Then, do binary file I/O using that number.
For the first step, the types uint8_t and uint16_t found in <stdint.h> and the bit manipulation operators << (shift left) and | (or) will be useful.
You haven't said what API you're using, so I'm going to assume you're using I/O streams. To write data to the stream just do this:
f.write(buf, len);
You can't write single bits, the best granularity you are going to get is bytes. If you want bits you will have to do some bitwise work to your byte buffer before you write it.
If you want to pack your 8 element array of chars into one byte you can do something like this:
char data[8] = ...;
char byte = 0;
for (unsigned i = 0; i != 8; ++i)
{
    byte |= (data[i] & 1) << i;
}
f.put(byte);
If data contains ASCII '0' or '1' characters rather than actual 0 or 1 bits replace the |= line with this:
byte |= (data[i] == '1') << i;
Make an unsigned char out of the bits in an array:
unsigned char make_byte(char input[8]) {
    unsigned char result = 0;
    for (int i = 0; i < 8; i++)
        if (input[i] != '0')
            result |= (1 << i);
    return result;
}
This assumes input[0] should become the least significant bit in the byte, and input[7] the most significant.
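For completeness, a small usage sketch (my own, assuming you want to pack a string of '0'/'1' characters with make_byte from above and write the resulting bytes to a file with an ofstream, zero-padding the last group):
#include <cstddef>
#include <fstream>
#include <string>

// make_byte as defined above

int main()
{
    std::string bits = "101100111011";           // 12 bits -> 2 bytes after padding
    std::ofstream out("bits.bin", std::ios::binary);

    for (std::size_t i = 0; i < bits.size(); i += 8) {
        char group[8] = { '0', '0', '0', '0', '0', '0', '0', '0' };  // zero padding
        for (std::size_t j = 0; j < 8 && i + j < bits.size(); ++j)
            group[j] = bits[i + j];
        out.put(static_cast<char>(make_byte(group)));
    }
}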