I am working on a project that reads data (24 bits per sample) from binary files, decompresses it to 32 bits, and converts it to a signed int.
I have this function for the decompress-and-convert step, and it wastes a lot of time in the project for huge data.
signed int convertedSample(QByteArray samp){
QString sample = QString(samp.toHex() + "00");
bool ok;
signed int decimalSample = sample.toInt(&ok, 16);
return decimalSample;
}
To read data from files and store it, I do:
while(!file.atEnd()){
QByteArray chunkSamples = file.read(chunkSize); // chunkSize here is the size of 1 million samples
for (int i = 0; i * 3 < chunkSamples.size(); i++){
QByteArray samp = chunkSamples.mid(i * 3, 3); // 24 bits
buffer.append(convertedSample(samp)); // buffer is a QVector of int
}
}
Any help to make it faster?
Thanks to the guys in the comments, I've made some improvements over time. I avoided using strings for the conversion, since using strings seems to always be slow, and replaced it with bitwise shifts and OR to combine the individual bytes into an int.
I replaced this code:
QString sample = QString(samp.toHex() + "00");
bool ok;
int decimalSample = sample.toInt(&ok, 16);
with:
int decimalSample = (static_cast<uchar>(samp[0]) << 24) | (static_cast<uchar>(samp[1]) << 16) | (static_cast<uchar>(samp[2]) << 8); // cast to uchar so negative chars don't sign-extend into the other bytes
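Putting it together, here is a rough sketch of my current chunk loop (assuming the samples are big-endian and that keeping the value in the top 3 bytes of the int is the scaling I want), which also avoids creating a QByteArray per sample:
while (!file.atEnd()) {
    const QByteArray chunk = file.read(chunkSize);
    const uchar *p = reinterpret_cast<const uchar *>(chunk.constData());
    const int numSamples = chunk.size() / 3;
    buffer.reserve(buffer.size() + numSamples);
    for (int i = 0; i < numSamples; ++i, p += 3) {
        // combine the 3 bytes; the value lands in the top 24 bits, so the sign
        // of the original 24-bit sample is preserved
        buffer.append(static_cast<qint32>((quint32(p[0]) << 24) | (quint32(p[1]) << 16) | (quint32(p[2]) << 8)));
    }
}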
Related
I have audio data and I am not sure what the best way is to store it as a matrix.
I have 4 large files of recordings from acoustic sensors; each file has 4 channels of data, interleaved.
I am using Qt C++ to do some processing of these data. I have already made this approach using a QVector of QVectors to store the data:
QVector<QVector<int>> buffer(16); // 4 * 4 : numberOfChannels * numberOfFiles
for(int i = 0 ; i < 4 ; i++){
QFile file(fileList[i]); // fileList is QList of QStrings contains 4 files path
if(file.open(QIODevice::ReadOnly)){
int k = 0;
while(!file.atEnd()){
QByteArray sample = file.read(depth/8); // depth here is 24
int integerSample = convertByteArrayToIntFunction(sample);
buffer[4 * i + (k%4)].append(integerSample);
k++;
}
}
}
To have at the end this matrix of 16 columns like below (f: file, c: channel):
f1c0 | f1c1 | f1c2 | f1c3 | f2c0 | f2c1 | ... | f4c2 | f4c3
But this approach takes ages for large files of a few gigabytes. I am wondering if there is a more efficient way to fulfill this task and save a lot of time. From what I found, I can divide reading from the files into chunks, but it is still not clear to me.
Thanks in advance.
There are two obvious antipatterns in your code.
The first one is not pre-sizing your QVectors. This means that every so often a call to append will notice that the vector's storage is full, which triggers an allocation of memory for a larger vector and then copying the contents of the vector before the append can complete. You know in advance how many samples are in each file, so you can use QVector::reserve to allocate the right amount in advance and inhibit this behavior:
const int bps = depth / 8;
QFile file (fileList[i]);
auto numSamples = file.size() / bps / 4; // "depth" bits per sample and 4 channels
for (int j = 0; j < 4; j++) {
buffer[4 * i + j].reserve(numSamples);
}
Secondly, you are calling file.read() for every sample. This means you are repeatedly paying the cost of retrieving data (although buffering will alleviate this a bit) and that of allocating a QByteArray.
Instead, read a huge chunk of the file at once and then loop over that:
while (!file.atEnd()) {
QByteArray samples = file.read(1'000'000 * 4 * bps); // read up to a million samples at once
for (int k = 0; k * bps < samples.size(); k++) {
QByteArray sample = samples.mid(k * bps, bps);
buffer[4 * i + (k % 4)].append(convertByteArrayToIntFunction(sample));
}
}
You can play around with the 1'000'000 number to see if there is a more optimal number, and you can probably gain a few percent more performance by passing convertByteArrayToIntFunction a const char *, but more readable is probably better.
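For example, a sketch of what that could look like (the actual body of convertByteArrayToIntFunction is your code, so I'm assuming big-endian 24-bit samples placed in the top 3 bytes of the int; adjust to whatever yours does):
inline int convertSample(const char *p) // p points at one 3-byte (24-bit) sample
{
    const quint32 b0 = static_cast<uchar>(p[0]);
    const quint32 b1 = static_cast<uchar>(p[1]);
    const quint32 b2 = static_cast<uchar>(p[2]);
    // big-endian sample placed in the top 3 bytes so the sign carries over
    return static_cast<qint32>((b0 << 24) | (b1 << 16) | (b2 << 8));
}

// inside the chunk loop, instead of samples.mid(k * bps, bps):
// const char *raw = samples.constData();
// buffer[4 * i + (k % 4)].append(convertSample(raw + k * bps));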
Suppose I have
unsigned char * buffer; // buffer length is 10000
I want to convert buffer+50 to buffer+54 to int. The following code works
int c = *((int *) (buffer + 50));
But is there any better way to do this, and how many instructions should it take?
Thanks a lot.
Something like this would work:
#include <cstdint>

std::uint32_t convert_to_int32(std::uint8_t* buffer) // assume size 4
{
std::uint32_t result = (static_cast<std::uint32_t>(buffer[0]) << 24) |
(static_cast<std::uint32_t>(buffer[1]) << 16) |
(static_cast<std::uint32_t>(buffer[2]) << 8) |
(static_cast<std::uint32_t>(buffer[3]));
return result;
}
The main problem you will have with your current method is if you run into alignment issues (e.g. you cast to an integer pointer at a position in the buffer that is not on an integer alignment boundary). The shifting method gets around that.
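Another option (a sketch; note it reads the bytes in the machine's native byte order rather than the explicit big-endian order of the shifts above) is to memcpy into the integer, which also sidesteps alignment and aliasing problems and usually compiles down to a single load:
#include <cstdint>
#include <cstring>

std::uint32_t convert_to_int32_native(const std::uint8_t* buffer) // assume size 4
{
    std::uint32_t result;
    std::memcpy(&result, buffer, sizeof(result)); // no alignment requirement on buffer
    return result;
}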
I'm having trouble reading in a 16-bit .wav file. I have read in the header information; however, the conversion does not seem to work.
For example, in Matlab if I read in wave file I get the following type of data:
-0.0064, -0.0047, -0.0051, -0.0036, -0.0046, -0.0059, -0.0051
However, in my C++ program the following is returned:
0.960938, -0.00390625, -0.949219, -0.00390625, -0.996094, -0.00390625
I need the data to be represented the same way. Now, for 8 bit .wav files I did the following:
uint8_t c;
for(unsigned i=0; (i < size); i++)
{
c = (unsigned)(unsigned char)(data[i]);
double t = (c-128)/128.0;
rawSignal.push_back(t);
}
This worked, however, when I did this for 16bit:
uint16_t c;
for(unsigned i=0; (i < size); i++)
{
c = (signed)(signed char)(data[i]);
double t = (c-256)/256.0;
rawSignal.push_back(t);
}
This does not work and shows the output (above).
I'm following the standards found Here
In the above, data is a char array and rawSignal is a std::vector<double>. I'm probably just handling the conversion wrong but cannot seem to find out where. Anyone have any suggestions?
Thanks
EDIT:
This is what is now displaying (In a graph):
This is what it should be displaying:
There are a few problems here:
8-bit wavs are unsigned, but 16-bit wavs are signed. Therefore, the subtraction step given in the answers by Carl and Jay is unnecessary. I presume they just copied it from your code, but it is wrong.
16-bit wavs have a range from -32,768 to 32,767, not from -256 to 255, making the multiplication you are using incorrect anyway.
16-bit wavs are 2 bytes, thus you must read two bytes to make one sample, not one. You appear to be reading one character at a time. When you read the bytes, you may have to swap them if your native endianness is not little-endian.
Assuming a little-endian architecture, your code would look more like this (very close to Carl's answer):
for (int i = 0; i < size; i += 2)
{
int16_t c = (int16_t)(((unsigned char)data[i + 1] << 8) | (unsigned char)data[i]);
double t = c/32768.0;
rawSignal.push_back(t);
}
for a big-endian architecture:
for (int i = 0; i < size; i += 2)
{
int16_t c = (int16_t)(((unsigned char)data[i] << 8) | (unsigned char)data[i + 1]);
double t = c/32768.0;
rawSignal.push_back(t);
}
That code is untested, so please LMK if it doesn't work.
(First of all, about little-endian/big-endian-ness: WAV is just a container format, and the data encoded in it can be in countless formats. Many of the codecs are lossy (MPEG Layer-3 aka MP3, yes, the stream can be "packaged" into a WAV, various CCITT and other codecs). You assume that you are dealing with some kind of PCM format, where you see the actual wave in RAW form and no lossy transformation was done on it. The endianness depends on the codec which produced the stream.
Is the endianness of format params guaranteed in RIFF WAV files?)
It is also a question whether one PCM sample is a linear-scale sampled integer, or whether there is some scaling, log scale, or other transformation behind it. The regular PCM wav files I have encountered were simple linear-scale samples, but I am not working in the audio recording or producing industry.
So a path to your solution:
Make sure that you are dealing with regular 16 bit PCM encoded RIFF WAV file.
While reading the stream, always read two bytes (char) at a time and convert the two chars into a 16 bit short. People showed this before me.
The waveform you show clearly suggests that you either did not estimate the frequency well (or you just have one mono channel instead of stereo), because the sampling rate (44.1 kHz, 22 kHz, 11 kHz, 8 kHz, etc.) is just as important as the resolution (8 bit, 16 bit, 24 bit, etc.). Maybe in the first case you had stereo data; you can read it in as mono and may not notice it. In the second case, if you have mono data, you'll run out of samples halfway into reading the data. That is what seems to happen according to your graphs (see the sketch after this list). As for the other cause: lower sampling resolutions (and 16 bit is also on the lower side) are often paired with lower sampling rates. So if your input length is based on the recording time and you think you have 22 kHz data but it is actually just 11 kHz, then again you'll run out halfway through the actual samples and read in memory garbage. So it is one of these.
Make sure that you interpret and treat your loop iterator variable and the size correctly. It seems that size tells how many bytes you have, so you'll have exactly half as many 16-bit samples. Notice that Bjorn's solution correctly increments i by 2 because of that.
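To illustrate the stereo point, here is a minimal sketch (my assumption: interleaved stereo, 16-bit little-endian PCM, with the function and vector names made up for the example) that consumes one 4-byte frame at a time and splits it into two channels:
#include <cstddef>
#include <cstdint>
#include <vector>

// data: raw sample bytes after the header; size: number of bytes
void deinterleaveStereo16(const unsigned char* data, std::size_t size,
                          std::vector<double>& left, std::vector<double>& right)
{
    // 4 bytes per frame: left sample (2 bytes, little-endian), then right sample
    for (std::size_t i = 0; i + 3 < size; i += 4) {
        std::int16_t l = static_cast<std::int16_t>(data[i]     | (data[i + 1] << 8));
        std::int16_t r = static_cast<std::int16_t>(data[i + 2] | (data[i + 3] << 8));
        left.push_back(l / 32768.0);
        right.push_back(r / 32768.0);
    }
}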
My working code is
int8_t* buffer = new int8_t[size];
/*
HERE buffer IS FILLED
*/
for (int i = 0; i < size; i += 2)
{
int16_t c = ((unsigned char)buffer[i + 1] << 8) | (unsigned char)buffer[i];
double t = c/32768.0;
rawSignal.push_back(t);
}
A 16-bit quantity gives you a range from -32,768 to 32,767, not from -256 to 255 (that's just 9 bits). Use:
for (int i = 0; i < size; i += 2)
{
c = (data[i + 1] << 8) + data[i]; // WAV files are little-endian
double t = (c - 32768)/32768.0;
rawSignal.push_back(t);
}
You might want something more like this:
uint16_t c;
for(unsigned i=0; (i < size); i++)
{
// get a 16 bit pointer to the array
uint16_t* p = (uint16_t*)data;
// get the i-th element
c = *( p + i );
// convert to signed? I'm guessing this is what you want
int16_t cs = (int16_t)c;
double t = (cs-256)/256.0;
rawSignal.push_back(t);
}
Your code converts the 8 bit value to a signed value then writes it into an unsigned variable. You should look at that and see if it's what you want.
I have come across a very tricky problem with bit manipulation.
As far as I know, the smallest variable size that can hold a value is one byte (8 bits), and the bit operations available in C/C++ apply to whole bytes at a time.
Imagine that I have a mapping that replaces the binary pattern 100100 (6 bits) with the signal 10000 (5 bits). If the first byte of input data from a file is 10010001 (8 bits), stored in a char variable, part of it matches the 6-bit pattern and is therefore replaced by the 5-bit signal, giving a result of 1000001 (7 bits).
I can use a mask to manipulate the bits within a byte and get the leftmost bits down to 10000 (5 bits), but the rightmost 3 bits become very tricky to manipulate. I cannot just shift the rightmost 3 bits of the original data to get the correct result 1000001 (7 bits), followed by 1 padding bit in that char variable that should be filled by the first bit of the next input byte.
I wonder if C/C++ can actually do this sort of replacement of bit patterns whose lengths do not fit into a char (1 byte) variable or even an int (4 bytes). Can C/C++ do the trick, or do we have to go to an assembly language that deals with single-bit manipulation?
I heard that Power Basic may be able to do the bit-by-bit manipulation better than C/C++.
If time and space are not important then you can convert the bits to a string representation and perform replaces on the string, then convert back when needed. Not an elegant solution but one that works.
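A rough sketch of that idea (illustrative only; the helper names are mine, and the pattern/signal strings come from the question):
#include <bitset>
#include <string>

// expand each byte into its 8-bit text form, MSB first
std::string bytesToBits(const std::string& bytes)
{
    std::string bits;
    for (unsigned char c : bytes)
        bits += std::bitset<8>(c).to_string();
    return bits;
}

// replace every occurrence of 'from' with 'to' in the bit string
std::string replaceAll(std::string bits, const std::string& from, const std::string& to)
{
    for (std::string::size_type pos = 0;
         (pos = bits.find(from, pos)) != std::string::npos;
         pos += to.size())
        bits.replace(pos, from.size(), to);
    return bits;
}

// usage: replaceAll(bytesToBits(input), "100100", "10000"), then pad the result to a
// multiple of 8 bits and pack it back into bytes when needed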
<< shift left
^ XOR
>> shift right
~ one's complement
Using these operations, you could easily isolate the pieces that you are interested in and compare them as integers.
Say you have the byte 01000100 (68) and you want to check whether it contains 1000:
char k = (char)68;
char c = (char)8;
int i = 0;
while(i<5){
if((k<<i)>>(8-3-i) == c){
//do stuff
break;
}
i++;
}
This is very sketchy code, just meant to be a demonstration.
I wonder if C/C++ can actually do this sort of replacement of bit patterns of length that do not fit into a Char (1 byte) variable or even Int (4 bytes).
What about std::bitset?
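For example, just a sketch to show the idea on the byte from the question:
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<8> byte(0x91);    // 10010001, the example byte from the question
    std::bitset<8> pattern(0x24); // 100100, right-aligned

    // compare the top 6 bits of 'byte' against the 6-bit pattern
    bool match = ((byte >> 2) & std::bitset<8>(0x3F)) == pattern;
    std::cout << std::boolalpha << match << '\n'; // prints true
    return 0;
}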
Here's a small bit reader class which may suit your needs. Of course, you may want to create a bit writer for your use case.
#include <iostream>
#include <sstream>
#include <cassert>
class BitReader {
public:
typedef unsigned char BitBuffer;
BitReader(std::istream &input) :
input(input), bufferedBits(8) {
}
BitBuffer peekBits(int numBits) {
assert(numBits <= 8);
assert(numBits > 0);
skipBits(0); // Make sure we have a non-empty buffer
return (((input.peek() << 8) | buffer) >> bufferedBits) & ((1 << numBits) - 1);
}
void skipBits(int numBits) {
assert(numBits >= 0);
numBits += bufferedBits;
while (numBits > 8) {
buffer = input.get();
numBits -= 8;
}
bufferedBits = numBits;
}
BitBuffer readBits(int numBits) {
assert(numBits <= 8);
assert(numBits > 0);
BitBuffer ret = peekBits(numBits);
skipBits(numBits);
return ret;
}
bool eof() const {
return input.eof();
}
private:
std::istream &input;
BitBuffer buffer;
int bufferedBits; // How many bits are buffered into 'buffer' (0 = empty)
};
Use a vector<bool> if you can read your data into the vector mostly at once. It may be more difficult to find-and-replace sequences of bits, though.
If I understood your question correctly, you have an input stream and an output stream, and you want to replace 6-bit patterns in the input with 5-bit ones in the output, and your output should still be a bit stream?
So, the most important programmer's rule can be applied: Divide et impera!
You should split your component in three parts:
Input Stream converter: Convert every pattern in the input stream to a char array (ring) buffer. If I understood you correctly your input "commands" are 8bit long, so there is nothing special about this.
Do the replacement on the ring buffer in a way that you replace every matching 6-bit pattern with the 5bit one, but "pad" the 5 bit with a leading zero, so the total length is still 8bit.
Write an output handler that reads from the ring buffer and writes only the 7 LSBs of each input byte to the output stream. Of course some bit manipulation is necessary again for this (a sketch of this step follows below).
If your ring buffer size can be divided by 8 and 7 (= is a multiple of 56) you will have a clean buffer at the end and can start again with 1.
The simplest way to implement this is to iterate over these 3 steps as long as input data is available.
If performance really matters and you are running on a multi-core CPU, you could even split the steps across 3 threads, but then you must carefully synchronize access to the ring buffer.
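A minimal sketch of step 3 only (the vector types and names here are my stand-ins for the ring buffer, not a full implementation):
#include <cstdint>
#include <vector>

// emit the 7 low bits of each buffered byte into a packed output byte stream
void write7LsbPacked(const std::vector<std::uint8_t>& ring, std::vector<std::uint8_t>& out)
{
    std::uint16_t acc = 0; // bit accumulator
    int nbits = 0;         // number of valid bits currently in acc
    for (std::uint8_t b : ring) {
        acc = static_cast<std::uint16_t>((acc << 7) | (b & 0x7F)); // keep only the 7 LSBs
        nbits += 7;
        while (nbits >= 8) { // flush full bytes, MSB first
            nbits -= 8;
            out.push_back(static_cast<std::uint8_t>((acc >> nbits) & 0xFF));
        }
    }
    if (nbits > 0) // pad the final partial byte with trailing zeros
        out.push_back(static_cast<std::uint8_t>((acc << (8 - nbits)) & 0xFF));
}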
I think the following does what you want.
#include <stdint.h>

typedef uint8_t uint8;
typedef uint16_t uint16;

#define PATTERN_LEN 6
#define PATTERNMASK 0x3F //6 bits
#define PATTERN 0x24 //b100100
#define REPLACE_LEN 5
#define REPLACEMENT 0x10 //b10000
void compress(uint8* inbits, uint8* outbits, int len)
{
uint16 accumulator=0;
int nbits=0;
uint8 candidate;
while (len--) //for all input bytes
{
//for each bit (msb first)
for (int i=7;i>=0;i--)
{
//add 1 bit to accumulator
accumulator<<=1;
accumulator|=((*inbits>>i)&1);
nbits++;
//check for pattern
candidate = accumulator&PATTERNMASK;
if (candidate==PATTERN)
{
//remove pattern
accumulator>>=PATTERN_LEN;
//add replacement
accumulator<<=REPLACE_LEN;
accumulator|=REPLACEMENT;
nbits+= (REPLACE_LEN - PATTERN_LEN);
}
}
inbits++;
//move accumulator to output to prevent overflow
while (nbits>8)
{
//copy the highest 8 bits
nbits-=8;
*outbits++ = (accumulator>>nbits)&0xFF;
//clear them from accumulator
accumulator&= ~(0xFF<<nbits);
}
}
//copy remainder of accumulator to output, padding the final byte with zeros
if (nbits>0)
{
*outbits++ = (accumulator<<(8-nbits))&0xFF;
}
}
You could use a switch or a loop in the middle to check the candidate against multiple patterns. There might have to be some special handling after doing a replacement to ensure the replacement pattern is not re-checked for matches.
#include <iostream>
#include <cstring>
size_t matchCount(const char* str, size_t size, char pat, size_t bsize) noexcept
{
if (bsize > 8) {
return 0;
}
size_t bcount = 0; // curr bit number
size_t pcount = 0; // curr bit in pattern char
size_t totalm = 0; // total number of patterns matched
const size_t limit = size*8;
while (bcount < limit)
{
auto offset = bcount%8;
char c = str[bcount/8];
c >>= offset;
char tpat = pat >> pcount;
if ((c & 1) == (tpat & 1))
{
++pcount;
if (pcount == bsize)
{
++totalm;
pcount = 0;
}
}
else // mismatch
{
bcount -= pcount; // backtrack
//reset
pcount = 0;
}
++bcount;
}
return totalm;
}
int main(int argc, char** argv)
{
const char* str = "abcdefghiibcdiixyz";
char pat = 'i';
std::cout << "Num matches = " << matchCount(str, 18, pat, 7) << std::endl;
return 0;
}
I want to read sizeof(int) bytes from a char* array.
a) In what scenarios do we need to worry about endianness?
b) How would you read the first 4 bytes, either taking endianness into consideration or not?
EDIT: The sizeof(int) bytes that I have read need to be compared with an integer value.
What is the best approach to go about this problem?
Do you mean something like this?
char* a; // must point to at least sizeof(int) readable bytes
int i;
memcpy(&i, a, sizeof(i));
You only have to worry about endianness if the source of the data is from a different platform, like a device.
a) You only need to worry about "endianness" (i.e., byte-swapping) if the data was created on a big-endian machine and is being processed on a little-endian machine, or vice versa. There are many ways this can occur, but here are a couple of examples.
You receive data on a Windows machine via a socket. Windows employs a little-endian architecture while network data is "supposed" to be in big-endian format.
You process a data file that was created on a system with a different "endianness."
In either of these cases, you'll need to byte-swap all numbers that are bigger than 1 byte, e.g., shorts, ints, longs, doubles, etc. However, if you are always dealing with data from the same platform, endian issues are of no concern.
b) Based on your question, it sounds like you have a char pointer and want to extract the first 4 bytes as an int and then deal with any endian issues. To do the extraction, use this:
int n = *(reinterpret_cast<int *>(myArray)); // where myArray is your data
Obviously, this assumes myArray is not a null pointer; otherwise, this will crash since it dereferences the pointer, so employ a good defensive programming scheme.
To swap the bytes on Windows, you can use the ntohs()/ntohl() and/or htons()/htonl() functions defined in winsock2.h. Or you can write some simple routines to do this in C++, for example:
inline unsigned short swap_16bit(unsigned short us)
{
return (unsigned short)(((us & 0xFF00) >> 8) |
((us & 0x00FF) << 8));
}
inline unsigned long swap_32bit(unsigned long ul)
{
return (unsigned long)(((ul & 0xFF000000) >> 24) |
((ul & 0x00FF0000) >> 8) |
((ul & 0x0000FF00) << 8) |
((ul & 0x000000FF) << 24));
}
It depends on how you want to read them. I get the feeling you want to cast 4 bytes into an integer; doing so over network-streamed data will usually end up in something like this:
int foo = *(int*)(stream+offset_in_stream);
The easy way to solve this is to make sure whatever generates the bytes does so in a consistent endianness. Typically the "network byte order" used by various TCP/IP stuff is best: the library routines htonl and ntohl work very well with this, and they are usually fairly well optimized.
However, if network byte order is not being used, you may need to do things in other ways. You need to know two things: the size of an integer, and the byte order. Once you know that, you know how many bytes to extract and in which order to put them together into an int.
Some example code that assumes sizeof(int) is the right number of bytes:
#include <limits.h>
int bytes_to_int_big_endian(const char *bytes)
{
int i;
int result;
result = 0;
for (i = 0; i < sizeof(int); ++i)
result = (result << CHAR_BIT) + (unsigned char)bytes[i];
return result;
}
int bytes_to_int_little_endian(const char *bytes)
{
int i;
int result;
result = 0;
for (i = 0; i < sizeof(int); ++i)
result += (unsigned char)bytes[i] << (i * CHAR_BIT);
return result;
}
#ifdef TEST
#include <stdio.h>
int main(void)
{
const int correct = 0x01020304;
const char little[] = "\x04\x03\x02\x01";
const char big[] = "\x01\x02\x03\x04";
printf("correct: %0x\n", correct);
printf("from big-endian: %0x\n", bytes_to_int_big_endian(big));
printf("from-little-endian: %0x\n", bytes_to_int_little_endian(little));
return 0;
}
#endif
How about
int int_from_bytes(const char * bytes, _Bool reverse)
{
if(!reverse)
return *(int *)(void *)bytes;
char tmp[sizeof(int)];
for(size_t i = sizeof(tmp); i--; ++bytes)
tmp[i] = *bytes;
return *(int *)(void *)tmp;
}
You'd use it like this:
int i = int_from_bytes(bytes, SYSTEM_ENDIANNESS != ARRAY_ENDIANNESS);
If you're on a system where casting void * to int * may result in alignment conflicts, you can use
int int_from_bytes(const char * bytes, _Bool reverse)
{
int tmp;
if(reverse)
{
for(size_t i = sizeof(tmp); i--; ++bytes)
((char *)&tmp)[i] = *bytes;
}
else memcpy(&tmp, bytes, sizeof(tmp));
return tmp;
}
You shouldn't need to worry about endianness unless you are reading the bytes from a source created on a different machine, e.g. a network stream.
Given that, can't you just use a for loop?
void ReadBytes(char * stream) {
for (int i = 0; i < sizeof(int); i++) {
char foo = stream[i];
}
}
Are you asking for something more complicated than that?
You need to worry about endianness only if the data you're reading is composed of numbers which are larger than one byte.
If you're reading sizeof(int) bytes and expect to interpret them as an int, then endianness makes a difference. Essentially, endianness is the way in which a machine interprets a series of more than one byte as a numerical value.
Just use a for loop that moves over the array in sizeof(int) chunks.
Use the function ntohl (found in the header <arpa/inet.h>, at least on Linux) to convert from bytes in the network order (network order is defined as big-endian) to local byte-order. That library function is implemented to perform the correct network-to-host conversion for whatever processor you're running on.
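For example, a small sketch (POSIX headers assumed):
#include <arpa/inet.h> // ntohl
#include <cstdint>
#include <cstring>

std::uint32_t read_network_int(const char* bytes)
{
    std::uint32_t n;
    std::memcpy(&n, bytes, sizeof n); // memcpy avoids alignment problems
    return ntohl(n);                  // network (big-endian) -> host byte order
}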
Why read when you can just compare?
bool AreEqual(int i, char *data)
{
return memcmp(&i, data, sizeof(int)) == 0;
}
If you are worried about endianness, you need to convert all of the integers to some invariant form. htonl and ntohl are good examples.