Unexpected output when reading .wav data in C++

I'm making a very simple method but experiencing some trouble.
I want to read the data of a .wav file. I am just interested in the first 80 samples, so this is what I do.
I use fseek to go to byte 44 of the file (where the data starts) according to
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
Then I read in blocks of 24 bits (I'm absolutely sure the .wav file has 24-bit samples, since I created it and checked it).
Here is my code:
void WavData::leerTodo(char *fname){
    double array[1000];
    FILE* fp = fopen(fname, "rb");
    if (fp) {
        fseek(fp, 44, SEEK_SET);
        fread(array, sizeof(double), 100, fp);
        for (int i = 0; i < 100; i++) {
            cout << "the .wav data is " << array[i] << "\n";
        }
    }
}
When I compare the results with the data I get from MATLAB, they are completely different, off by a factor of 10 to the power of 300 or so. I'm using Xcode.

A 24-bit .wav file has integer samples, not floating-point (float/double) ones; IEEE floating-point samples in WAV files usually have 32 or 64 bits. Read the data into a 32-bit int instead; note that the alignment, signedness and endianness have to match, too (WAV data is usually signed, in two's complement format). Alternatively, as pointed out in the comment, export the file to 16-bit to make packing and alignment matching easier.
For example, Sound Foundry 6.0 allows 8, 16, 24 and 32 bit integer samples and 32 & 64 bit floating IEEE samples; most professional audio applications go along the same lines here, mostly because the hardware supports only those bit resolutions.
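For illustration, here is a minimal sketch of reading 24-bit signed little-endian PCM samples into 32-bit ints (this assumes the canonical 44-byte header has already been skipped with fseek and that the caller deals with the channel layout; error handling omitted):

#include <cstdint>
#include <cstdio>

// Read up to `count` 24-bit signed little-endian PCM samples,
// sign-extending each one into an int32_t.
static size_t read24BitSamples(FILE* fp, int32_t* out, size_t count) {
    unsigned char raw[3];
    size_t n = 0;
    while (n < count && fread(raw, 1, 3, fp) == 3) {
        int32_t s = raw[0] | (raw[1] << 8) | (raw[2] << 16);
        if (s & 0x800000)        // sign bit of the 24-bit value is set
            s -= 0x1000000;      // sign-extend to a negative int32_t
        out[n++] = s;
    }
    return n;
}

With something like this, the original code could seek to byte 44 and read the first 80 samples into an int32_t array instead of fread-ing doubles.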
further reading:
http://en.wikipedia.org/wiki/Audio_bit_depth

Related

How to convert a WAV file to RAW Audio in C++?

I have searched for an answer to this question for several hours. I have already removed the 44-byte header and transferred the data using an ofstream. The input stereo WAV file is 16-bit PCM at a 44.1 kHz sample rate.
ifstream ssn(f_infile, ios::binary);
ssn.seekg(0, ssn.end);
int szm = ssn.tellg();
ssn.seekg(0, ssn.beg);
char* buff = new char[szm];
ssn.read(buff, szm);
ssn.close();

ofstream sso(f_outfile, ios::binary);
for(int i = 0; i < szm; i++)
{
    if(i > 44)
    {
        word_w(file, buff[i], 1);
        word_w(file, 0 - (buff[i]), 1);
    }
}
sso.close();
file.close();
I got the size of the file and read the data into a buffer. I know a RAW data file is just binary data with no header, and I thought this simple technique would work. However, I got mixed results.
This first one worked like a charm. It was the original sample I wanted to convert. It is a side-by-side comparison of the original WAV file [top] and the raw data [bottom] imported into Audacity at 44.1 kHz.
This next one distorted the right channel for some reason, and doubled the length of the file. It is also a stereo WAV file, 16-bit PCM, 44.1 kHz sample rate.
This third one is completely distorted, and the length has increased even more than the previous one.
Why did it work on the first file, but not on the other ones, when they are all in the exact same file format (16-bit, 44.1 kHz sample rate, 2 channels)?
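(For comparison, a straight header-skip copy with no per-byte processing would be just the sketch below; f_infile and f_outfile as above.)

#include <fstream>
#include <iterator>
#include <vector>
using namespace std;

// Minimal sketch: read the whole PCM WAV file and copy everything
// after the 44-byte canonical header straight into a raw output file.
void wavToRaw(const char* f_infile, const char* f_outfile) {
    ifstream ssn(f_infile, ios::binary);
    vector<char> buff((istreambuf_iterator<char>(ssn)),
                      istreambuf_iterator<char>());
    ofstream sso(f_outfile, ios::binary);
    if (buff.size() > 44)
        sso.write(buff.data() + 44, buff.size() - 44);
}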

Noise after changing volume in QAudioOutput

I am trying to play sound using QAudioOutput and a wav in "raw format". After the timer's timeout (every 50 ms) I do the following:
QByteArray TempSBuffer;
short int *hi;
// Check if the wav has reached its end and reset its position to the beginning if so
if((m_timerStepNum+1)*m_audioOutput->periodSize() >= m_soundBuffer.size()) {
    m_timerStepNum = 0;
}
// 2. Write the buffer data for the next timecycle into a temporary QByteArray TempSBuffer
TempSBuffer = m_soundBuffer.mid(m_timerStepNum*m_audioOutput->periodSize(), m_audioOutput->periodSize());
// Scale each 16-bit sample by the current volume
hi = (short int *)TempSBuffer.data();
for(int i = 0; i < m_audioOutput->periodSize() / 2; i++) { hi[i] *= m_audioOutput->volume(); }
// 4. Play the resulting buffer
m_ioDevice->write(TempSBuffer, m_audioOutput->periodSize());
m_timerStepNum++;
Everything plays OK, but when I try to set the volume in QAudioOutput to, say, 0.2 (with my master volume at 100%) I get horrible noise. I should add that this happens only for one of my wav files, which has this format:
bitsPerSample: 8
channels: 1
frequency: 16000
Other files play OK, as I said. Format examples of wavs that play fine:
bitsPerSample: 16
channels: 1
frequency: 22050
bitsPerSample: 16
channels: 2
frequency: 22050
bitsPerSample: 16
channels: 2
frequency: 22050
Well, according to The ABCs of PCM (Uncompressed) digital audio, under Final Notes:
For some reason, WAV files don't support signed 8-bit format, so when reading and writing WAV files, be aware that 8-bits means unsigned, but in virtually all other cases it's safe to assume integers are signed.
For the time being I solved my problem by converting my raw wav to 16-bit format.
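Alternatively, a minimal sketch of what the scaling loop could look like for unsigned 8-bit data (8-bit WAV samples are centred on 128, per the note above; the helper name is just for illustration):

#include <QByteArray>

// Scale unsigned 8-bit PCM in place: shift each sample to be signed
// around zero, apply the volume factor, then shift back and clamp.
static void scaleU8(QByteArray &buf, qreal volume) {
    unsigned char *p = reinterpret_cast<unsigned char *>(buf.data());
    for (int i = 0; i < buf.size(); ++i) {
        int s = int(p[i]) - 128;                       // unsigned -> signed
        s = int(s * volume);                           // apply volume
        if (s < -128) s = -128;                        // clamp to 8-bit range
        if (s > 127) s = 127;
        p[i] = static_cast<unsigned char>(s + 128);    // back to unsigned
    }
}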

Building a fast PNG encoder issues

I am trying to build a fast 8-bit greyscale PNG encoder. Unfortunately I must be misunderstanding part of the spec. Smaller image sizes seem to work, but the larger ones will only open in some image viewers. This image (with multiple DEFLATE Blocks) gives a
"Decompression error in IDAT" error in my image viewer but opens fine in my browser:
This image has just one DEFLATE block but also gives an error:
Below I will outline what I put in my IDAT chunk in case you can easily spot any mistakes (note: images and steps have been modified based on the answers, but there is still a problem):
1. IDAT length
2. "IDAT" in ASCII (literally the bytes 0x49 0x44 0x41 0x54)
3. Zlib header 0x78 0x01
Steps 4-7 are repeated for every deflate block, as the data may need to be broken up:
4. The byte 0x00 or 0x01, depending on whether it is a middle or the last block.
5. Number of bytes in the block (up to 2^16-1), stored as a little-endian 16-bit integer
6. The one's complement of this integer representation
7. Image data (each scan-line starts with a zero byte for the "no filter" option in PNG, followed by width bytes of greyscale pixel data)
8. An Adler-32 checksum of all the image data
9. A CRC of all the IDAT data
I've tried pngcheck on Linux, and it does not spot any errors. If nobody can see what is wrong, can you point me in the right direction for a debugging tool?
My last resort is to use the libpng library to make my own decoder, and debug from there.
Some people have suggested it may be my Adler-32 calculation:
static uint32_t adler32(uint32_t height, uint32_t width, char** pixel_array)
{
    uint32_t a = 1, b = 0, w, h;
    for (h = 0; h < height; h++)
    {
        b += a;
        for (w = 0; w < width; w++)
        {
            a += pixel_array[h][w];
            b += a;
        }
    }
    return (uint32_t)(((b % 65521) * 65536) | (a % 65521));
}
Note that because the pixel_array passed to the function does not contain the zero-byte at the beginning of each scanline (needed for PNG) there is an extra b+=a (and implicit a+=0) at the beginning of each iteration of the outer loop.
I do get an error with pngcheck: "zlib: inflate error = -3 (data error)". As your PNG scaffolding structure looks okay, it's time to take a low-level look into the IDAT block with a hex viewer. (I'm going to type this up while working through it.)
The header looks alright; the IDAT length is okay. Your zlib flags are 78 01 ("No/low compression", see also What does a zlib header look like?), where one of my own tools uses 78 9C ("Default compression"), but then again, these flags are only informative.
Next: zlib's internal blocks (per RFC1950).
Directly after the compression flags (CMF in RFC1950) it expects FLATE compressed data, which is the only compression scheme zlib supports. And that is in another castle RFC: RFC1951.
Each separate compressed block starts with a short header:
3.2.3. Details of block format
Each block of compressed data begins with 3 header bits
containing the following data:
first bit BFINAL
next 2 bits BTYPE
...
BFINAL is set if and only if this is the last block of the data
set.
BTYPE specifies how the data are compressed, as follows:
00 - no compression
01 - compressed with fixed Huffman codes
10 - compressed with dynamic Huffman codes
11 - reserved (error)
So this value can be set to 00 for 'not last block, uncompressed' and to 01 for 'last block, uncompressed', immediately followed by the length (2 bytes) and its bitwise inverse, per 3.2.4. Non-compressed blocks (BTYPE=00):
3.2.4. Non-compressed blocks (BTYPE=00)
Any bits of input up to the next byte boundary are ignored.
The rest of the block consists of the following information:
0 1 2 3 4...
+---+---+---+---+================================+
| LEN | NLEN |... LEN bytes of literal data...|
+---+---+---+---+================================+
LEN is the number of data bytes in the block. NLEN is the
one's complement of LEN.
They are the final 4 bytes in your IDAT segment. Why do small images work, and larger ones not? Because you only have 2 bytes for the length.1 You need to break up your image into blocks no larger than 65,535 bytes (in my own PNG creator I seem to have used 32,768, probably "for safety"). If it is the last block, write out 01, else 00. Then add the two LEN bytes and the two NLEN bytes, properly encoded, followed by exactly LEN data bytes. Repeat until done.
The Adler-32 checksum is not part of this Flate-compressed data, and should not be counted in the blocks of LEN data. (It is still part of the IDAT block, though.)
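To make the block layout concrete, here is a minimal sketch of emitting the stored (uncompressed) deflate blocks just described (a hypothetical helper; the caller still has to wrap this with the zlib header, the Adler-32 and the IDAT CRC):

#include <cstdint>
#include <vector>

// Split `data` into stored deflate blocks of at most 65535 bytes each and
// append them to `out`: one header byte (BFINAL in bit 0, BTYPE=00), then
// LEN and NLEN as little-endian 16-bit values, then LEN literal bytes.
static void appendStoredBlocks(std::vector<uint8_t> &out,
                               const uint8_t *data, size_t size) {
    size_t pos = 0;
    do {
        uint16_t len = (uint16_t)(size - pos > 65535 ? 65535 : size - pos);
        bool last = (pos + len == size);
        out.push_back(last ? 0x01 : 0x00);       // BFINAL + BTYPE=00
        out.push_back(len & 0xFF);               // LEN, low byte first
        out.push_back((len >> 8) & 0xFF);
        out.push_back(~len & 0xFF);              // NLEN = one's complement of LEN
        out.push_back((~len >> 8) & 0xFF);
        out.insert(out.end(), data + pos, data + pos + len);
        pos += len;
    } while (pos < size);
}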
After re-reading your question to verify I addressed all of your issues (and confirming I spelled "Adler-32" correctly), I realized you describe all of the steps right -- except that the 'last block' indicator is 01, not 80 (later edit: uh, perhaps you are right about that!) -- but that this does not show in this sample PNG. See if you can get it to work by following all of the steps to the letter.
Kudos for doing this 'by hand'. It's a nice exercise in 'following the specs', and if you get this to work, it may be worthwhile to try and add proper compression. I shun pre-made libraries as much as possible; the only allowance I made for my own PNG encoder/decoder was to use Rich Geldreich's miniz.c, because implementing proper Flate encoding/decoding is beyond my ken.
1 That's not the whole story. Browsers are particularly forgiving of HTML errors; it seems they are just as forgiving of PNG errors. Safari displays your image just fine, and so does Preview. But they may all be sharing OS X's PNG decoder, because Photoshop rejects the file.
"The byte 0x00 or 0x80, depending on if it is a middle or the last block."
Change the 0x80 to 0x01 and all will be well.
The 0x80 appears as the header of a stored block that is not marked as the last block. All that's being looked at is the low bit, which is zero, indicating a middle block. All of the data is in that "middle" block, so a decoder will recover the full image. Some liberal PNG decoders may then ignore the errors they get when they try to decode the next block, which isn't there, and then ignore the missing check values (Adler-32 and CRC-32), etc. That's why it shows up OK in browsers, even though it is an invalid PNG file.
There are two things wrong with your Adler-32 code. First, you are accessing the data from a char array. char is signed, so your 0xff bytes are being added not as 255, but rather as -1. You need to make the array unsigned char, or cast it to that before extracting byte values from it.
Second, you are doing the modulo operation too late. You must do the % 65521 before the uint32_t overflows; otherwise you don't get the modulo of the sum as required by the algorithm. A simple fix would be to apply % 65521 to a and b right after the width loop, inside the height loop. This will work so long as you can guarantee that the width will be less than 5551 bytes. (Why 5551 works is left as an exercise for the reader.) If you cannot guarantee that, then you will need to embed another loop that consumes bytes from the line until you get to 5551 of them, does the modulo, and then continues with the line. Or, a smidge slower, just run a counter and do the modulo when it gets to the limit.
Here is an example of a version that works for any width:
static uint32_t adler32(uint32_t height, uint32_t width, unsigned char **pixel_array)
{
    uint32_t a = 1, b = 0, w, h, k;
    for (h = 0; h < height; h++)
    {
        b += a;                 // accounts for the filter-type zero byte of each scanline
        w = k = 0;
        while (k < width) {
            k += 5551;          // process the row in chunks of at most 5551 bytes
            if (k > width)
                k = width;
            while (w < k) {
                a += pixel_array[h][w++];
                b += a;
            }
            a %= 65521;         // reduce before the sums can overflow uint32_t
            b %= 65521;
        }
    }
    return (b << 16) | a;
}

correct coding of ID3 v2.3 frame size field for GEOB tag

I have some confusion regarding how the frame size bytes should be coded/decoded for ID3 v2.3.0. According to the (informal) ID3 v2.3.0 specification, the size of each frame should be coded into 4 bytes, where the most significant bit of each byte is unused. Calculating the size would then take the formula below:
byte MASK = (byte)0x7F;
int size = 0;
for (int i = 0; i < 4; i++) {
    size = size * 128 + (b[i] & MASK);
}
But when I used my parser to parse some MP3 files, quite a few files had GEOB (general encapsulated object) frames whose size bytes were coded as if they were a big-endian 32-bit integer.
After I "fixed" these bytes by re-coding them with the algorithm above, commercial software such as Windows 7 and Winamp was no longer able to properly display the subsequent frames (in several instances TIT2 came right after GEOB, so the song's title was not displayed although it was in the file).
I also found similar problems for MCDI (music CD identifier) and TALB ('Album/Movie/Show title') frames.
I read through the v2.3 spec, and also Googled, but wasn't able to find any information regarding the use of a 32-bit integer as the size field for these frames. Yet the common behavior of different commercial software seems to suggest that for such frames a plain 32-bit integer is used as the size, instead of 4 bytes masked with 0x7F.
So I am just wondering if anyone here has worked on ID3 v2.3 and could clarify this for me.
Yes. However, I consider the docs to be explicit enough, given the conventions of % (binary) and $ (hexadecimal) which are explained right away:
Header size:
4 * %0xxxxxxx as per v2.2.0 (§3.1.) header
4 * %0xxxxxxx as per v2.3.0 header
4 * %0xxxxxxx as per v2.4.0 (§3.1.) header
Frame size:
$xx xx xx as per v2.2.0 (i.e. §4.1.) frame
$xx xx xx xx as per v2.3.0 frame
4 * %0xxxxxxx as per v2.4.0 (§4.) frame
Summary:
For all 3 versions in ID3v2 the header size is stored in the same way: using 4 bytes, but for each only 7 bits are valid.
Only for ID3v2.2 frames the size consists of 3 (full) bytes.
Only for ID3v2.3 frames the size consists of 4 (full) bytes.
Only for ID3v2.4 frames the size finally is stored just like the header's size: 4 bytes, but only 28 bits are valid.
ID3v2.4.0 changes §3 also outlines the frame size change from v2.3.0. The whole issue comes from the MPEG Audio (and AAC) stream format, which synchronizes on 9 (or 12) set bits - a decoder might otherwise misinterpret the ID3 metadata as audio data.
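To illustrate the difference, a minimal sketch of the two ways of decoding a 4-byte frame-size field (b points at the four size bytes):

#include <cstdint>

// ID3v2.3 frame size: a plain big-endian 32-bit integer, all 8 bits of each byte used.
static uint32_t frameSizeV23(const unsigned char *b) {
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

// ID3v2.4 frame size (and the tag header size in all versions): a "syncsafe"
// integer where only the low 7 bits of each byte are used.
static uint32_t frameSizeV24(const unsigned char *b) {
    return ((uint32_t)(b[0] & 0x7F) << 21) | ((uint32_t)(b[1] & 0x7F) << 14) |
           ((uint32_t)(b[2] & 0x7F) << 7)  |  (uint32_t)(b[3] & 0x7F);
}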
I believe I have found the answer. ID3 v2.3, despite being the more commonly supported version (as opposed to v2.4), has a not-too-well-written (and informal) spec. Its header size uses the four 0x7F-masked bytes, but the frame sizes are in fact plain 32-bit integers; this just never seems to be clearly spelled out.
The reason I usually encountered the problem with GEOB is that it doesn't crop up until the frame size is larger than 0x7F, and GEOB frames usually are.

Arduino Ethernet Byte size problem

I'm using an Arduino (Duemilanove) with the official Ethernet shield to send data to the controller for controlling an LED matrix. I am trying to send some raw 32-bit unsigned int values (Unix timestamps) to the controller by taking the 4 bytes of the 32-bit value on the desktop and sending them to the Arduino as 4 consecutive bytes. However, whenever a byte value is larger than 127, the value returned by the Ethernet client library is 63.
The following is a basic example of what I'm doing on the Arduino side of things. Some things have been removed for neatness.
byte buffer[32];
memset(buffer, 0, 32);

int data;
int i = 0;
data = client.read();
while(data != -1 && i < 32)
{
    buffer[i++] = (byte)data;
    data = client.read();
}
So, whenever the input byte is bigger than 127 the variable "data" will end up getting set to 63! At first I thought the problem was further down the line (buffer used to be char instead of byte) but when I print out "data" right after the read, it's still 63.
Any ideas what could be causing this? I know client.read() is supposed to return an int and internally reads data from the socket as uint8_t, which is a full unsigned byte, so I should be able to go up to at least 255...
EDIT: Right you are, Hans. Didn't realize that Encoding.ASCII.GetBytes only supported the first 7 bits and not all 8.
I'm more inclined to suspect the transmit side. Are you positive the transmit side is working correctly? Have you verified with a wireshark capture or some such?
63 is the ASCII code for '?'. There's some relevance to the values: ASCII doesn't have character codes for values over 127. An ASCII encoder commonly replaces such invalid codes with a question mark; that's the default behavior of the .NET Encoding.ASCII encoder, for example.
It isn't exactly clear where that might happen. Definitely not in your snippet. Probably on the other end of the wire. Write bytes, not characters.
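For example, a minimal sketch of packing the timestamp into 4 raw bytes on the sending side and reassembling it on the Arduino (the byte order here is an assumption; both ends just have to agree on one):

#include <stdint.h>

// Sender side: split a 32-bit timestamp into 4 raw bytes, least significant first.
void packTimestamp(uint32_t t, uint8_t out[4]) {
    out[0] = t & 0xFF;
    out[1] = (t >> 8) & 0xFF;
    out[2] = (t >> 16) & 0xFF;
    out[3] = (t >> 24) & 0xFF;
}

// Receiver side: reassemble the 4 bytes read from the connection into the original value.
uint32_t unpackTimestamp(const uint8_t in[4]) {
    return (uint32_t)in[0] | ((uint32_t)in[1] << 8) |
           ((uint32_t)in[2] << 16) | ((uint32_t)in[3] << 24);
}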
+1 for Hans Passant and Karl Bielefeldt.
Can you just send the data without encoding? How is the data being sent? TCP/UDP/IP/Ethernet definitely support sending binary data without restriction. If this isn't possible, perhaps converting the data to hex will solve the problem. Base64 will also work (better) but is considerably more work. For small amounts of data, hex is probably the easiest and fastest solution.
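As a rough sketch, the hex option could be as simple as encoding each byte into two printable characters before sending (and decoding symmetrically on the other end):

#include <stdint.h>
#include <stdio.h>

// Hex-encode a byte buffer into a printable, 7-bit-safe string:
// two hex characters per byte, NUL-terminated in `out`.
void toHex(const uint8_t *in, size_t len, char *out) {
    for (size_t i = 0; i < len; i++)
        sprintf(out + 2 * i, "%02X", in[i]);
    out[2 * len] = '\0';
}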
+1 again to Karl and Ben for mentioning wireshark. Invaluable for debugging network problems like this.