Strange noise after reducing volume on a PCM - C++

Does anyone know why, after I apply this algorithm in C++ to reduce the volume of a PCM stream, white noise appears in the background?
for (int i = 0; i < pcm.length(); i += 2) {
    quint16 byte0 = pcm[i];
    quint16 byte1 = pcm[i+1];
    //merge byte0 and byte1
    qint16 n = (byte1 << 8) + byte0;
    n *= volume; // multiplier
    //split n into byte0 and byte1
    byte1 = (n >> 8) & 255;
    byte0 = n & 255;
    //save the new values
    pcm[i] = byte0;
    pcm[i+1] = byte1;
}

After a long time, I came up with the solution. The problem was the way I was merging the two bytes.
for (int i = 0; i < pcm.length(); i += 2) {
    quint16 byte0 = pcm[i];
    quint16 byte1 = pcm[i+1];
    //merge byte0 and byte1, masking each byte to avoid sign extension
    qint16 n = 0;
    n |= pcm[i+1] & 0xFF;
    n <<= 8;
    n |= pcm[i] & 0xFF;
    n *= volume; // multiplier
    //split n into byte0 and byte1
    byte1 = (n >> 8) & 255;
    byte0 = n & 255;
    //save the new values
    pcm[i] = byte0;
    pcm[i+1] = byte1;
}

Your n *= 0.5 is effectively doing the same as n >>= 1. You're shifting the least-significant bit from byte1 into the most significant bit of byte0, which is likely the source of your noise.
Why are you combining the two values into one integer rather than doing each one separately?

Perhaps you're packing and unpacking your bytes in the wrong order?
qint16 n = (byte0 << 8) + byte1;
byte0 = (n >> 8) & 255;
byte1 = n & 255;

The byte order mentioned by Mark Ransom is an obvious possible problem. You should check that.
The other possible problem is sign extension.
If you have signed samples and you are manipulating them in an unsigned type, you will lose the sign bit on all the negative samples.
If your byte type is signed then you will get sign extension into the high byte when you load byte0 and byte1, again not what you want.
Does the quint16 type match the actual type of the samples? If not, you should use the matching type, and you should make sure you use unsigned char as your byte type.
Update from info in comments:
To test the sign extension theory, change the line
n *= 0.5;
to
n = ((short) n) * 0.5;
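
Putting the answers together, here is a minimal sketch of a scaler that avoids both pitfalls (byte order and sign extension). It assumes little-endian signed 16-bit samples in a raw byte buffer; the function and parameter names are made up:

#include <cstddef>
#include <cstdint>

// Minimal sketch: scale little-endian signed 16-bit PCM in place.
// Assumes `size` is even and 0.0 <= volume <= 1.0.
void scaleVolume(unsigned char* data, std::size_t size, float volume) {
    for (std::size_t i = 0; i + 1 < size; i += 2) {
        // Widen through an unsigned byte type so no sign extension
        // can leak into the high byte, then reinterpret as signed.
        std::uint16_t u = static_cast<std::uint16_t>(data[i] | (data[i + 1] << 8));
        std::int16_t sample = static_cast<std::int16_t>(u);
        sample = static_cast<std::int16_t>(sample * volume);
        // Split back into bytes, least significant byte first.
        data[i]     = static_cast<unsigned char>(sample & 0xFF);
        data[i + 1] = static_cast<unsigned char>((sample >> 8) & 0xFF);
    }
}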

Related

Difference between bitshifting mask vs unsigned int

For a project, I had to extract the individual 8-bit bytes of an unsigned int. I first tried bit-shifting the mask to find them, but that didn't work, so I tried bit-shifting the value instead, and that worked.
What's the difference between these two? Why didn't the first one work?
void ExampleFunk(unsigned int value) {
    for (int i = 0; i < 4; i++) {
        ExampleSubFunk(value & (0x00FF << (i * 8)));
    }
}

void ExampleFunk(unsigned int value) {
    for (int i = 0; i < 4; i++) {
        ExampleSubFunk((value >> (i * 8)) & 0x00FF);
    }
}
Take the value 0xAABBCCDD as an example.
The expression value & (0xFF << (i * 8)) assumes the values:
0xAABBCCDD & 0x000000FF = 0x000000DD
0xAABBCCDD & 0x0000FF00 = 0x0000CC00
0xAABBCCDD & 0x00FF0000 = 0x00BB0000
0xAABBCCDD & 0xFF000000 = 0xAA000000
While the expression (value >> (i * 8)) & 0xFF assumes the values:
0xAABBCCDD & 0x000000FF = 0x000000DD
0x00AABBCC & 0x000000FF = 0x000000CC
0x0000AABB & 0x000000FF = 0x000000BB
0x000000AA & 0x000000FF = 0x000000AA
As you can see, the results are quite different after i = 0, because the first expression is only "selecting" 8 bits from value, while the second expression is shifting them down to the least significant byte first.
Note that in the first case, the expression (0xFF << (i * 8)) is shifting an int literal (0xFF) left. You should cast the literal to unsigned int to avoid signed integer overflow, which is undefined behavior:
value & ((unsigned int)0xFF << (i * 8))
In this code:
void ExampleFunk(unsigned int value) {
    for (int i = 0; i < 4; i++) {
        ExampleSubFunk(value & (0x00FF << (i * 8)));
    }
}
You are shifting the bits of 0x00FF itself, producing new masks of 0x00FF, 0xFF00, 0xFF0000, and 0xFF000000, and then you are masking value with each of those masks. The result contains only the 8 bits of value that you are interested in, but those 8 bits are not moving position at all.
In this code:
void ExampleFunk(unsigned int value) {
    for (int i = 0; i < 4; i++) {
        ExampleSubFunk((value >> (i * 8)) & 0x00FF);
    }
}
You are shifting the bits of value, thus moving those 8 bits that you want, and then you are masking the result with 0x00FF to extract those 8 bits.
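To make the difference concrete, here is a small test harness; ExampleSubFunk is hypothetical here and just prints its argument. Note the u suffix on the mask, per the overflow warning above:

#include <cstdio>

void ExampleSubFunk(unsigned int x) { std::printf("0x%08X\n", x); }

int main() {
    unsigned int value = 0xAABBCCDD;
    for (int i = 0; i < 4; i++)
        ExampleSubFunk(value & (0xFFu << (i * 8)));  // 0x000000DD, 0x0000CC00, 0x00BB0000, 0xAA000000
    for (int i = 0; i < 4; i++)
        ExampleSubFunk((value >> (i * 8)) & 0xFFu);  // 0x000000DD, 0x000000CC, 0x000000BB, 0x000000AA
    return 0;
}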

CRC8 on uneven number of bits - initial 0xff

On the nRF24, the 1-byte CRC uses the polynomial x^8 + x^2 + x^1 + 1 with an initial value of 0xFF.
This has to be computed over an uneven number of bits. How is that calculated? I cannot get the same result. For instance, in binary,
on: 000000000000000100010000000000000100000100000100000111111
the nRF24 gives a CRC8 of: 01110110 (0x76)
Any idea how it is calculated?
This produces 0x76 from the data:
#include <stdio.h>

/* Process one bit of the message through the CRC-8 (polynomial 0x07). */
unsigned crc8bit(unsigned crc, unsigned bit)
{
    crc ^= bit << 7;
    return (crc & 0x80 ? (crc << 1) ^ 7 : crc << 1) & 0xff;
}

int main(void)
{
    unsigned n, crc;
    unsigned char data[] = "000000000000000100010000000000000100000100000100000111111";
    crc = 0xff;                          /* nRF24 initial value */
    for (n = 0; n < sizeof(data) - 1; n++)
        crc = crc8bit(crc, data[n] & 1); /* ASCII '0'/'1' -> bit 0/1 */
    printf("crc = %02x\n", crc);
    return 0;
}
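Compiled and run, this prints crc = 76, matching the 01110110 the radio reports. Because the message is not a whole number of bytes, the CRC is clocked one bit at a time here, which sidesteps any byte-alignment issues.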

Extract and combine bits from different bytes - C/C++

I have declared an array of bytes:
uint8_t memory[123];
which I have filled with:
memory[0]=0xFF;
memory[1]=0x00;
memory[2]=0xFF;
memory[3]=0x00;
memory[4]=0xFF;
Now I get requests from the user for specific bits. For example, I receive a request to send the bits in positions 10:35, and I must return those bits combined into bytes. In that case I would need 4 bytes, which contain:
response[0]=0b11000000;
response[1]=0b00111111;
response[2]=0b11000000;
response[3]=0b00000011; //padded with zeros for excess bits
This will be used for Modbus which is a big-endian protocol. I have come up with the following code:
for (int j = findByteINIT; j < findByteFINAL; j++) {
    aux[0] = (unsigned char) (memory[j] >> (startingbit - (8 * findByteINIT)));
    aux[1] = (unsigned char) (memory[j+1] << (startingbit - (8 * findByteINIT)));
    response[h] = (unsigned char) (aux[0] | aux[1]);
    h++;
    aux[0] = 0x00; // clean aux
    aux[1] = 0x00;
}
which does not work but should be close to the ideal solution. Any suggestions?
I think this should do it.
int start_bit = 10, end_bit = 35; // input
int start_byte = start_bit / CHAR_BIT;
int shift = start_bit % CHAR_BIT;
int response_size = (end_bit - start_bit + (CHAR_BIT - 1)) / CHAR_BIT;
int zero_padding = response_size * CHAR_BIT - (end_bit - start_bit + 1);
for (int i = 0; i < response_size; ++i) {
    response[i] =
        static_cast<uint8_t>((memory[start_byte + i] >> shift) |
                             (memory[start_byte + i + 1] << (CHAR_BIT - shift)));
}
response[response_size - 1] &= static_cast<uint8_t>(~0) >> zero_padding;
If the input is a starting bit and a number of bits instead of a starting bit and an (inclusive) end bit, then you can use exactly the same code, but compute the above end_bit using:
int start_bit = 10, count = 9; // input
int end_bit = start_bit + count - 1;
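
For reference, a self-contained version of this answer filled with the question's data (the main and the print loop are just scaffolding; CHAR_BIT comes from <climits>), which produces the expected response bytes:

#include <climits>
#include <cstdint>
#include <cstdio>

int main() {
    uint8_t memory[123] = {0xFF, 0x00, 0xFF, 0x00, 0xFF};
    uint8_t response[32] = {0};

    int start_bit = 10, end_bit = 35;
    int start_byte = start_bit / CHAR_BIT;
    int shift = start_bit % CHAR_BIT;
    int response_size = (end_bit - start_bit + (CHAR_BIT - 1)) / CHAR_BIT;
    int zero_padding = response_size * CHAR_BIT - (end_bit - start_bit + 1);

    for (int i = 0; i < response_size; ++i) {
        response[i] =
            static_cast<uint8_t>((memory[start_byte + i] >> shift) |
                                 (memory[start_byte + i + 1] << (CHAR_BIT - shift)));
    }
    response[response_size - 1] &= static_cast<uint8_t>(~0) >> zero_padding;

    // Prints 0xC0, 0x3F, 0xC0, 0x03, matching the question's expected output.
    for (int i = 0; i < response_size; ++i)
        std::printf("response[%d] = 0x%02X\n", i, response[i]);
    return 0;
}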

Extract n most significant non-zero bits from int in C++ without loops

I want to extract the n most significant bits from an integer in C++ and convert those n bits to an integer.
For example
int a=1200;
// its binary representation within 32 bit word-size is
// 00000000000000000000010010110000
Now I want to extract the 4 most significant non-zero bits from that representation, i.e. 1001:
00000000000000000000010010110000
                     ^^^^
and convert them again to an integer (binary 1001 = 9 in decimal).
How is this possible with a simple C++ function, without loops?
Some processors have an instruction to count the leading binary zeros of an integer, and some compilers have intrinsics that let you use that instruction. For example, using GCC:
uint32_t significant_bits(uint32_t value, unsigned bits) {
    unsigned leading_zeros = __builtin_clz(value);
    unsigned highest_bit = 32 - leading_zeros;
    unsigned lowest_bit = highest_bit - bits;
    return value >> lowest_bit;
}
For simplicity, I left out checks that the requested number of bits are available. For Microsoft's compiler, the intrinsic is called __lzcnt.
If your compiler doesn't provide that intrinsic, and your processor doesn't have a suitable instruction, then one way to count the zeros quickly is with a binary search (note the unsigned parameter: left-shifting a signed value into the sign bit is undefined behavior):
unsigned leading_zeros(uint32_t value) {
    unsigned count = 0;
    if ((value & 0xffff0000u) == 0) {
        count += 16;
        value <<= 16;
    }
    if ((value & 0xff000000u) == 0) {
        count += 8;
        value <<= 8;
    }
    if ((value & 0xf0000000u) == 0) {
        count += 4;
        value <<= 4;
    }
    if ((value & 0xc0000000u) == 0) {
        count += 2;
        value <<= 2;
    }
    if ((value & 0x80000000u) == 0) {
        count += 1;
    }
    return count;
}
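
If C++20 is available, std::countl_zero from the <bit> header gives a portable leading-zero count; a minimal sketch of the same function built on it (with the same caveats about value == 0 and bits exceeding what's available):

#include <bit>
#include <cstdint>

uint32_t significant_bits(uint32_t value, unsigned bits) {
    // countl_zero maps to the hardware instruction where one exists.
    unsigned highest_bit = 32 - std::countl_zero(value);
    return value >> (highest_bit - bits);
}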
It's not fast, but (int)(log(x)/log(2)) + 1 will tell you the position of the most significant non-zero bit (watch out for floating-point rounding when x is near a power of two). Finishing the algorithm from there is fairly straightforward.
This seems to work (done in C# with UInt32 then ported so apologies to Bjarne):
unsigned int input = 1200;
unsigned int most_significant_bits_to_get = 4;
// shift + or the msb over all the lower bits
unsigned int m1 = input | input >> 8 | input >> 16 | input >> 24;
unsigned int m2 = m1 | m1 >> 2 | m1 >> 4 | m1 >> 6;
unsigned int m3 = m2 | m2 >> 1;
// keep only the top most_significant_bits_to_get bits of the filled mask
unsigned int nbitsmask = m3 ^ m3 >> most_significant_bits_to_get;
unsigned int v = nbitsmask;
unsigned int c = 32; // c will be the number of zero bits on the right
v &= -((int)v);      // isolate the lowest set bit of the mask
if (v > 0) c--;
if ((v & 0x0000FFFF) > 0) c -= 16;
if ((v & 0x00FF00FF) > 0) c -= 8;
if ((v & 0x0F0F0F0F) > 0) c -= 4;
if ((v & 0x33333333) > 0) c -= 2;
if ((v & 0x55555555) > 0) c -= 1;
unsigned int result = (input & nbitsmask) >> c; // 1200 -> 9
I assumed you meant using only integer math.
I used some code from @OliCharlesworth's link; you could also remove the conditionals by using the lookup-table code for counting trailing zeroes there.

Big Endian and Little Endian for Files in C++

I am trying to write some processor-independent code to write files in big-endian order. I have a code sample below and I can't understand why it doesn't work. All it is supposed to do is store each byte of data in byte, one byte at a time, in big-endian order. In my actual program I would then write each byte out to a file, so I get the same byte order in the file regardless of processor architecture.
#include <iostream>
int main (int argc, char * const argv[]) {
    long data = 0x12345678;
    long bitmask = (0xFF << (sizeof(long) - 1) * 8);
    char byte = 0;
    for (long i = 0; i < sizeof(long); i++) {
        byte = data & bitmask;
        data <<= 8;
    }
    return 0;
}
For some reason byte always has the value 0. This confuses me; I am looking at the debugger and I see this:
data = 00010010001101000101011001111000
bitmask = 11111111000000000000000000000000
I would think that data & bitmask would give 00010010, but it just makes byte 00000000 every time! How can this be? I have written some code for little-endian order, and it works great; see below:
#include <iostream>
int main (int argc, char * const argv[]) {
    long data = 0x12345678;
    long bitmask = 0xFF;
    char byte = 0;
    for (long i = 0; i < sizeof(long); i++) {
        byte = data & bitmask;
        data >>= 8;
    }
    return 0;
}
Why does the little-endian one work and the big-endian one not? Thanks for any help :-)
You should use the standard functions ntohl() and kin for this. They operate on explicitly sized types (i.e. uint16_t and uint32_t) rather than the compiler-specific long, which is necessary for portability.
Some platforms provide 64-bit versions in <endian.h>
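For example, a minimal sketch (assuming a POSIX system; on Windows the declaration lives in <winsock2.h>) that writes a 32-bit value to a file in big-endian order, with a made-up file name:

#include <arpa/inet.h>  // htonl()
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t data = 0x12345678;
    uint32_t be = htonl(data);              // big-endian regardless of host order
    std::FILE* f = std::fopen("out.bin", "wb");
    if (f) {
        std::fwrite(&be, sizeof be, 1, f);  // file bytes: 12 34 56 78
        std::fclose(f);
    }
    return 0;
}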
In your example, data is 0x12345678.
Your first assignment to byte is therefore:
byte = 0x12000000;
which won't fit in a byte, so it gets truncated to zero.
Try:
byte = (data & bitmask) >> ((sizeof(long) - 1) * 8);
You're getting the shifting all wrong.
#include <iostream>
int main (int argc, char * const argv[]) {
    long data = 0x12345678;
    int shift = (sizeof(long) - 1) * 8;
    const unsigned long mask = 0xff;
    char byte = 0;
    for (long i = 0; i < sizeof(long); i++, shift -= 8) {
        byte = (data & (mask << shift)) >> shift;
    }
    return 0;
}
Now, I wouldn't recommend you do things this way. I would recommend instead writing some nice conversion functions. Many compilers have these as builtins. So you can write your functions to do it the hard way, then switch them to just forward to the compiler builtin when you figure out what it is.
#include <cstdint> // To get uint16_t, uint32_t and so on.
inline void to_bigendian(uint16_t val, char bytes[2])
{
    bytes[0] = (val >> 8) & 0xffu;
    bytes[1] = val & 0xffu;
}
inline void to_bigendian(uint32_t val, char bytes[4])
{
    bytes[0] = (val >> 24) & 0xffu;
    bytes[1] = (val >> 16) & 0xffu;
    bytes[2] = (val >> 8) & 0xffu;
    bytes[3] = val & 0xffu;
}
This code is simpler and easier to understand than your loop. It's also faster. And lastly, it is recognized by some compilers and automatically turned into the single byte swap operation that would be required on most CPUs.
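Usage might look like this (a hypothetical sketch, assuming the to_bigendian overloads above are in scope; the output file name is made up):

#include <cstdint>
#include <fstream>

int main() {
    char bytes[4];
    to_bigendian(uint32_t{0x12345678}, bytes); // defined above
    std::ofstream out("out.bin", std::ios::binary);
    out.write(bytes, sizeof bytes);            // file bytes: 12 34 56 78
    return 0;
}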
Because you are masking off the top byte of the integer and then not shifting it back down 24 bits...
Change your loop to:
for (long i = 0; i < sizeof(long); i++) {
    byte = (data & bitmask) >> 24;
    data <<= 8;
}