SDL_mixer function Mix_LoadMUS_RW causes Access Violation - c++

I have a problem in loading music from memory with SDL_mixer.
The following "minimal" example, including a bit of error checking, will always crash with an access violation in Music::play.
#include <SDL\SDL_mixer.h>
#include <SDL\SDL.h>
#include <vector>
#include <iostream>
#include <string>
#include <fstream>
class Music {
public:
    void play(int loops = 1);

    SDL_RWops* m_rw;
    std::vector<unsigned char> m_file;
    Mix_Music* m_music = nullptr;
};

void Music::play(int loops) {
    if (Mix_PlayMusic(m_music, loops) == -1)
        std::cout << "Error playing music " + std::string(Mix_GetError()) + " ...\n";
}
void readFileToBuffer(std::vector<unsigned char>& buffer, std::string filePath) {
    std::ifstream file(filePath, std::ios::binary);
    file.seekg(0, std::ios::end);
    int fileSize = file.tellg();
    file.seekg(0, std::ios::beg);
    fileSize -= file.tellg();
    buffer.resize(fileSize);
    file.read((char *)&(buffer[0]), fileSize);
    file.close();
}
void writeFileToBuffer(std::vector<unsigned char>& buffer, std::string filePath) {
    std::ofstream file(filePath, std::ios::out | std::ios::binary);
    for (size_t i = 0; i < buffer.size(); i++)
        file << buffer[i];
    file.close();
}
Music loadMusic(std::string filePath) {
    Music music;
    readFileToBuffer(music.m_file, filePath);
    music.m_rw = SDL_RWFromMem(&music.m_file[0], music.m_file.size());
    // Uncommenting the next block runs without problems
    /*
    writeFileToBuffer(music.m_file, filePath);
    music.m_rw = SDL_RWFromFile(filePath.c_str(), "r");
    */
    if (music.m_rw == nullptr)
        std::cout << "Error creating RW " + std::string(Mix_GetError()) + " ...\n";
    music.m_music = Mix_LoadMUSType_RW(music.m_rw, Mix_MusicType::MUS_OGG, SDL_FALSE);
    if (music.m_music == nullptr)
        std::cout << "Error creating music " + std::string(Mix_GetError()) + " ...\n";
    return music;
}
int main(int argc, char** argv) {
    SDL_Init(SDL_INIT_AUDIO);
    Mix_Init(MIX_INIT_MP3 | MIX_INIT_OGG);
    Mix_OpenAudio(MIX_DEFAULT_FREQUENCY, MIX_DEFAULT_FORMAT, MIX_DEFAULT_CHANNELS, 1024);
    Music music = loadMusic("Sound/music/XYZ.ogg");
    music.play();
    std::cin.ignore();
    return 0;
}
My ArchiveManager definitely works, which can also be seen by uncommenting the block that writes the buffer back to a file and creating an SDL_RW from that file: then everything runs just fine.
The music file is simply assumed to be an ogg file, which it is in this case, so creating an SDL_RW from the file works fine. Meaning nothing crashes and the music plays properly from start to end.
The Music class is, to my understanding, much too big. I am keeping the buffer m_file around, as well as the SDL_RW, to make sure that the problem does not come from that data being freed. Calling Mix_LoadMUS_RW with SDL_FALSE should likewise ensure the RW is not freed.
Notably, a similar example that loads a wav file from the same archive using Mix_LoadWAV_RW works just fine:
Mix_Chunk * chunk;
std::vector<unsigned char> fileBuf = ArchiveManager::loadFileFromArchive(filePath);
chunk = Mix_LoadWAV_RW(SDL_RWFromConstMem(&fileBuf[0], fileBuf.size()), SDL_TRUE);
And here I am not even keeping the buffer around until the call to Mix_PlayChannel. I am also calling the load function with SDL_TRUE because I am not creating an explicit SDL_RW. Doing the same when loading the music makes no difference.
I studied the SDL_mixer source code, but it didn't help me. Maybe my knowledge is not sufficient or maybe I missed something crucial.
To get to the point: Where does that access violation come from and how can I prevent it?
EDIT: Changed the example code so it is straightforward for anyone to reproduce. No ArchiveManager or anything like that, just reading an ogg directly into memory. The crucial parts are the few lines in loadMusic.

Music music = loadMusic("Sound/music/XYZ.ogg");
music.play();
The first line copies the Music object on the right into the new one called music. This copies the vector m_file, including its data, so the new object's vector stores its data at a different memory location than the vector of the object returned by loadMusic. The object returned by loadMusic is then destroyed, the memory of its vector is freed, and the previously created Mix_Music object, which still refers to the old buffer, is invalidated, causing the access violation on the second line.
This can be remedied by only ever creating one Music object, for example by creating it via new on the heap and having loadMusic return a pointer to that object.
Music* music = loadMusic("Sound/music/XYZ.ogg");
music->play();
It is probably the better choice anyway to allocate the memory for a whole file on the heap instead of on the stack, although a vector already stores its elements on the heap internally.
So, short version: it was (what I consider) a rookie mistake, and I was too fixated on blaming SDL_mixer. Bad idea.
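The dangling pointer can be reproduced without SDL at all. Below is a minimal sketch, with invented names, of a struct that, like Music, holds a buffer plus a raw pointer into that buffer; the compiler-generated copy deep-copies the vector but leaves the pointer aimed at the source object's storage:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for the Music class above: a buffer plus a raw
// pointer into that buffer, playing the roles of m_file and the
// SDL_RWops created from it.
struct Holder {
    std::vector<unsigned char> buf;
    unsigned char* view = nullptr; // points into buf, like m_rw
};

// After a default (member-wise) copy, 'view' in the copy still points
// at the ORIGINAL object's buffer; once the original is destroyed the
// copy is left holding a dangling pointer - the crash described above.
bool copy_keeps_stale_view(const Holder& original, const Holder& copy) {
    return copy.view == original.buf.data() && copy.view != copy.buf.data();
}
```

Deleting the copy constructor (or handing the object around only through a pointer, as in the heap-based fix above) turns this silent bug into a compile-time error.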

Related

Qt: How can I copy a big data using QT?

I want to read a big file and then write it to a new file using Qt.
I have tried reading a big file that contains only one line, testing both readAll() and readLine().
If the data file is about 600MB, my code runs, although it is slow.
If the data file is about 6GB, my code fails.
Can you give me some suggestions?
Update
My test code is as following:
#include <QApplication>
#include <QFile>
#include <QTextStream>
#include <QTime>
#include <QDebug>
#define qcout qDebug()
void testFile07()
{
    QFile inFile("../03_testFile/file/bigdata03.txt");
    if (!inFile.open(QIODevice::ReadOnly | QIODevice::Text))
    {
        qcout << inFile.errorString();
        return;
    }
    QFile outFile("../bigdata-read-02.txt");
    if (!outFile.open(QIODevice::WriteOnly | QIODevice::Truncate))
        return;
    QTime time1, time2;
    time1 = QTime::currentTime();
    while (!inFile.atEnd())
    {
        QByteArray arr = inFile.read(3 * 1024);
        outFile.write(arr);
    }
    time2 = QTime::currentTime();
    qcout << time1.msecsTo(time2);
}
void testFile08()
{
    QFile inFile("../03_testFile/file/bigdata03.txt");
    if (!inFile.open(QIODevice::ReadOnly | QIODevice::Text))
        return;
    QFile outFile("../bigdata-readall-02.txt");
    if (!outFile.open(QIODevice::WriteOnly | QIODevice::Truncate))
        return;
    QTime time1, time2, time3;
    time1 = QTime::currentTime();
    QByteArray arr = inFile.readAll();
    qcout << arr.size();
    time3 = QTime::currentTime();
    outFile.write(arr); // write the data already read; a second readAll() would return nothing
    time2 = QTime::currentTime();
    qcout << time1.msecsTo(time2);
}
int main(int argc, char *argv[])
{
testFile07();
testFile08();
return 0;
}
After my tests, I can share my experience.
read() and readAll() are about equally fast for reading; if anything, read() is slightly faster.
The real difference is in writing.
With a 600MB file:
Using read(), reading and writing the file costs about 2.1s, with 875ms for reading.
Using readAll(), reading and writing the file costs about 10s, with 907ms for reading.
With a 6GB file:
Using read(), reading and writing the file costs about 162s, with 58s for reading.
Using readAll(), the size comes back as 0 and the copy fails.
Open both files as QFiles. In a loop, read a fixed number of bytes, say 4K, into an array from the input file, then write that array into the output file. Continue until you run out of bytes.
However, if you just want to copy a file verbatim, you can use QFile::copy
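The loop described above can be sketched with plain iostreams so it stays self-contained (the Qt version with QFile::read/QFile::write is analogous; the file names and chunk size below are just examples):

```cpp
#include <cassert>
#include <fstream>
#include <string>
#include <vector>

// Copy src to dst through a fixed-size buffer. Memory use stays
// constant no matter how large the input file is.
bool copy_in_chunks(const char* src, const char* dst,
                    std::size_t chunk = 4096) {
    std::ifstream in(src, std::ios::binary);
    std::ofstream out(dst, std::ios::binary);
    if (!in || !out) return false;

    std::vector<char> buf(chunk);
    while (in) {
        in.read(buf.data(), buf.size());
        std::streamsize got = in.gcount(); // bytes actually read
        if (got > 0) out.write(buf.data(), got);
    }
    return static_cast<bool>(out);
}
```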
You can use QFile::map and use the pointer to the mapped memory to write in a single shot to the target file:
void copymappedfile(QString in_filename, QString out_filename)
{
    QFile in_file(in_filename);
    if (in_file.open(QFile::ReadOnly))
    {
        QFile out_file(out_filename);
        if (out_file.open(QFile::WriteOnly))
        {
            const qint64 filesize = in_file.size();
            uchar * mem = in_file.map(0, filesize, QFileDevice::MapPrivateOption);
            out_file.write(reinterpret_cast<const char *>(mem), filesize);
            in_file.unmap(mem);
            out_file.close();
        }
        in_file.close();
    }
}
One thing to keep in mind:
With read() you specify a maximum size for each chunk you read (in your example 3*1024 bytes); with readAll() you tell the program to read the entire file in one go.
In the first case you repeatedly fill a small 3072-byte buffer, write it out, and release it at the end of each loop iteration. In the second case the entire file must be held in one contiguous allocation. Allocating and filling 600MB at once is a likely reason for your performance gap, and a 6GB file exceeds what a single QByteArray can hold at all on most builds (before Qt 6 its size was a plain int), so readAll() fails and returns an empty array - which matches the 0 you observed.

C++ get the size of a file while it's being written to

I have a recording application that is reading data from a network stream and writing it to file. It works very well, but I would like to display the file size as the data is being written. Every second the gui thread updates the status bar to update the displayed time of recording. At this point I would also like to display the current file size.
I originally consulted this question and have tried both the stat method:
struct stat stat_buf;
int rc = stat(recFilename.c_str(), &stat_buf);
std::cout << recFilename << " " << stat_buf.st_size << "\n";
(no error checking for simplicity) and the fseek method:
FILE *p_file = NULL;
p_file = fopen(recFilename.c_str(),"rb");
fseek(p_file,0,SEEK_END);
int size = ftell(p_file);
fclose(p_file);
but either way, I get 0 for the file size. When I go back and look at the file I write to, the data is there and the size is correct. The recording is happening on a separate thread.
I know that bytes are being written because I can print the size of the data as it is written in conjunction with the output of the methods shown above.
The filename plus the 0 is what I print out from the GUI thread. 'Bytes written x' is out of the recording thread.
You can read all about C++ file manipulations here http://www.cplusplus.com/doc/tutorial/files/
This is an example of how I would do it.
#include <fstream>
std::ifstream::pos_type filesize(const char* file)
{
std::ifstream in(file, std::ifstream::ate | std::ifstream::binary);
return in.tellg();
}
Hope this helps.
As a desperate alternative, you can use ftell in the write-data thread, or a variable that tracks the amount of data written. But getting to the real problem: you must be making a mistake somewhere - maybe fopen never opens the file, or something like that.
I'll copy a test code to show that this works at least in a singlethread app
int _tmain(int argc, _TCHAR* argv[])
{
    FILE * mFile;
    FILE * mFile2;
    mFile = fopen("hi.txt", "a+");
    // fseek(mFile, 0, SEEK_END);
    // ## this is to make sure that fputs and fwrite work equally
    // fputs("fopen example", mFile);
    fwrite("fopen ex", 1, 9, mFile);
    fseek(mFile, 0, SEEK_END);
    std::cout << ftell(mFile) << ":";
    mFile2 = fopen("hi.txt", "rb");
    fseek(mFile2, 0, SEEK_END);
    std::cout << ftell(mFile2) << std::endl;
    fclose(mFile2);
    fclose(mFile);
    getchar();
    return 0;
}
Just use freopen function before calling stat. It seems freopen refreshes the file length.
I realize this post is rather old at this point, but in response to @TylerD007: while that works, it is incredibly expensive if all you're trying to do is get the number of bytes written.
In C++17 and later, you can simply use the <filesystem> header and call
auto fileSize {std::filesystem::file_size(filePath)}; and now variable fileSize holds the actual size of the file.
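A self-contained sketch of that C++17 approach (the helper name and path are just examples):

```cpp
#include <cassert>
#include <cstdint>
#include <filesystem>
#include <fstream>

// std::filesystem::file_size asks the OS for the current on-disk size,
// so the file does not need to be opened just to measure it.
std::uintmax_t size_on_disk(const std::filesystem::path& path) {
    return std::filesystem::file_size(path);
}
```

The same caveat as with stat() applies: a writer on another thread must actually flush its buffers before the size visible to the OS grows.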

C++ reading large files part by part

I've been having a problem that I have not been able to solve yet. The problem is related to reading files; I've looked at threads, even on this website, and they do not seem to solve it. The problem is reading files that are larger than a computer's system memory. When I asked about this a while ago, I was referred to the following code.
string data("");
getline(cin, data);
std::ifstream is(data);//, std::ifstream::binary);
if (is)
{
    // get length of file:
    is.seekg(0, is.end);
    int length = is.tellg();
    is.seekg(0, is.beg);
    // allocate memory:
    char * buffer = new char[length];
    // read data as a block:
    is.read(buffer, length);
    is.close();
    // print content:
    std::cout.write(buffer, length);
    delete[] buffer;
}
system("pause");
This code works well, apart from the fact that it eats memory like a fat kid in a candy store.
So after a lot of ghetto and unrefined programming, I was able to figure out a way to sort of fix the problem. However, I more or less traded one problem for another in my quest.
#include <iostream>
#include <vector>
#include <string>
#include <fstream>
#include <stdio.h>
#include <stdlib.h>
#include <iomanip>
#include <windows.h>
#include <cstdlib>
#include <thread>
using namespace std;
/*======================================================*/
string *fileName = new string("tldr");
char data[36];
int filePos(0); // The pos of the file
int tmSize(0);  // The total size of the file
int split(32);
char buff;
int DNum(0);
/*======================================================*/
int getFileSize(std::string filename) // path to file
{
    FILE *p_file = NULL;
    p_file = fopen(filename.c_str(), "rb");
    fseek(p_file, 0, SEEK_END);
    int size = ftell(p_file);
    fclose(p_file);
    return size;
}
void fs()
{
    tmSize = getFileSize(*fileName);
    int AX(0);
    ifstream fileIn;
    fileIn.open(*fileName, ios::in | ios::binary);
    int n1, n2, n3;
    n1 = tmSize / 32;
    // Does the processing
    while (filePos != tmSize)
    {
        fileIn.seekg(filePos, ios_base::beg);
        buff = fileIn.get();
        // To take into account small files
        if (tmSize < 32)
        {
            int Count(0);
            char MT[40];
            if (Count != tmSize)
            {
                MT[Count] = buff;
                cout << MT[Count];// << endl;
                Count++;
            }
        }
        // Anything larger than 32
        else
        {
            if (AX != split)
            {
                data[AX] = buff;
                AX++;
                if (AX == split)
                {
                    AX = 0;
                }
            }
        }
        filePos++;
    }
    int tz(0);
    filePos = filePos - 12;
    while (tz != 2)
    {
        fileIn.seekg(filePos, ios_base::beg);
        buff = fileIn.get();
        data[tz] = buff;
        tz++;
        filePos++;
    }
    fileIn.close();
}
void main()
{
    fs();
    cout << tmSize << endl;
    system("pause");
}
What I tried to do with this code is work around the memory issue. Rather than allocating memory for a large file that simply does not fit on my system, I tried to use the memory I have, which is about 8GB, while only using maybe a few kilobytes of it if at all possible.
To give you a layout of what I am talking about, I am going to write a line of text.
"Hello my name is cake please give me cake"
Basically, I read said piece of text letter by letter, put those letters into a box that can store 32 of them, and from there I could apply something like XOR and write them out to another file.
The idea sort of works, but it is horribly slow and leaves off parts of files.
So basically: how can I make something like this work without being slow or cutting off files? I would love to see how XOR works with very large files.
If anyone has a better idea than what I have, I would be very grateful for the help.
To read and process the file piece-by-piece, you can use the following snippet:
// Buffer size 1 Megabyte (or any number you like)
size_t buffer_size = 1 << 20;
char *buffer = new char[buffer_size];
std::ifstream fin("input.dat", std::ifstream::binary);
while (fin)
{
    // Try to read the next chunk of data
    fin.read(buffer, buffer_size);
    // Get the number of bytes actually read
    size_t count = fin.gcount();
    // If nothing has been read, break
    if (!count)
        break;
    // Do whatever you need with the first count bytes in the buffer
    // ...
}
delete[] buffer;
The buffer size of 32 bytes you are using is definitely too small. You make too many calls to library functions (and the library, in turn, makes calls - although probably not every time - to the OS, which are typically slow since they cause context switching). There is also no need for tell/seek.
If you don't need all of the file content simultaneously, reduce the working set first - like a set of about 32 words - but since XOR can be applied sequentially, you may simplify further to a working set of constant size, like 4 kilobytes.
Now you have the option of calling the file reader is.read() in a loop and processing a small set of data each iteration, or using mmap() to map the file content as a memory pointer on which you can perform both read and write operations.
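Since the stated goal was XOR-ing very large files, here is a sketch of that processing applied chunk by chunk. The file names and the single-byte key are purely illustrative (a real cipher would use something stronger than one-byte XOR):

```cpp
#include <cassert>
#include <fstream>
#include <string>
#include <vector>

// XOR every byte of in_path with `key` and write the result to
// out_path, using a fixed 64 KiB working set so memory use stays
// constant regardless of file size.
bool xor_file(const char* in_path, const char* out_path, unsigned char key) {
    std::ifstream in(in_path, std::ios::binary);
    std::ofstream out(out_path, std::ios::binary);
    if (!in || !out) return false;

    std::vector<char> buf(1 << 16);
    while (in) {
        in.read(buf.data(), buf.size());
        std::streamsize got = in.gcount(); // bytes actually read
        if (got == 0) break;
        for (std::streamsize i = 0; i < got; ++i)
            buf[i] = static_cast<char>(buf[i] ^ key);
        out.write(buf.data(), got);
    }
    return static_cast<bool>(out);
}
```

Applying the same key twice restores the original bytes, which makes a convenient round-trip check.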

Binary Files in C++, changing the content of raw data on an audio file

I have never worked with binary files before. I opened an .mp3 file in ios::binary mode, read data from it, assigned 0 to each byte read, and then wrote the bytes to another file, also opened in ios::binary mode. When I open the output file in a media player, it sounds corrupted, but I can still hear the song. I want to know what happened, physically.
How can I access/modify the raw data ( bytes ) of an audio ( video, images, ... ) using C++ ( to practice file encryption/decryption later )?
Here is my code:
#include <iostream>
#include <fstream>
#include <cstring>
using namespace std;
int main(){
    char buffer[256];
    ifstream inFile;
    inFile.open("Backstreet Boys - Incomplete.mp3", ios::binary);
    ofstream outFile;
    outFile.open("Output.mp3", ios::binary);
    while(!inFile.eof()){
        inFile.read(buffer, 256);
        for(int i = 0; i < strlen(buffer); i++){
            buffer[i] = 0;
        }
        outFile.write(buffer, 256);
    }
    inFile.close();
    outFile.close();
}
What you did has nothing to do with binary files or audio. You simply copied the file while zeroing some of the bytes. (The reason you didn't zero all of the bytes is that you use i < strlen(buffer), which counts up to the first zero byte rather than reporting the size of the buffer. Also, you modify the buffer, which means strlen(buffer) reports a length of zero as soon as you zero the first byte.)
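The strlen pitfall is easy to demonstrate in isolation:

```cpp
#include <cassert>
#include <cstring>

// strlen counts bytes up to the first '\0', not the capacity of the
// buffer, so it is the wrong bound for binary data such as an mp3.
std::size_t visible_length(const char* buffer) {
    return std::strlen(buffer);
}
```

In the question's loop, only the first byte of each chunk actually gets zeroed: after buffer[0] = 0, the condition i < strlen(buffer) is immediately false, so the rest of the chunk is written through unmodified.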
So the exact change in audio you get is entirely dependent on the mp3 file format and the audio compression it uses. MP3 is not an audio format that can be directly manipulated in useful ways.
If you want to manipulate digital audio, you need to learn about how raw audio is represented by computers.
It's actually not too difficult. For example, here's a program that writes out a raw audio file containing just a 400Hz tone.
#include <cmath>
#include <fstream>
#include <limits>

int main() {
    const double pi = 3.1415926535;
    double tone_frequency = 400.0;
    int samples_per_second = 44100;
    double output_duration_seconds = 5.0;
    int output_sample_count =
        static_cast<int>(output_duration_seconds * samples_per_second);
    std::ofstream out("signed-16-bit_mono-channel_44.1kHz-sample-rate.raw",
                      std::ios::binary);
    for (int sample_i = 0; sample_i < output_sample_count; ++sample_i) {
        double t = sample_i / static_cast<double>(samples_per_second);
        double sound_amplitude = std::sin(t * 2 * pi * tone_frequency);
        // encode amplitude as a 16-bit, signed integral value
        short sample_value =
            static_cast<short>(sound_amplitude * std::numeric_limits<short>::max());
        out.write(reinterpret_cast<char const *>(&sample_value),
                  sizeof sample_value);
    }
}
To play the sound you need a program that can handle raw audio, such as Audacity. After running the program to generate the audio file, you can File > Import > Raw data..., to import the data for playing.
How can I access/modify the raw data ( bytes ) of an audio ( video, images, ... ) using C++ ( to practice file encryption/decryption later )?
As pointed out earlier, the reason your existing code is not completely zeroing out the data is because you are using an incorrect buffer size: strlen(buffer). The correct size is the number of bytes read() put into the buffer, which you can get with the function gcount():
inFile.read(buffer, 256);
int buffer_size = inFile.gcount();
for (int i = 0; i < buffer_size; i++) {
    buffer[i] = 0;
}
outFile.write(buffer, buffer_size);
Note: if you were to step through your program using a debugger you probably would have pretty quickly seen the problem yourself when you noticed the inner loop executing less than you expected. Debuggers are a really handy tool to learn how to use.
I notice you're using open() and close() methods here. This is sort of pointless in this program. Just open the file in the constructor, and allow the file to be automatically closed when inFile and outFile go out of scope:
{
    ifstream inFile("Backstreet Boys - Incomplete.mp3", ios::binary);
    ofstream outFile("Output.mp3", ios::binary);
    // don't bother calling .close(), it happens automatically.
}

Copying QFile contents to another QFile, what's the optimal way?

I need to copy a QFile to another QFile in chunks, so I can't use QFile::copy. Here's the most primitive implementation:
bool CFile::copyChunk(int64_t chunkSize, const QString &destFolder)
{
    if (!_thisFile.isOpen())
    {
        // Initializing - opening files
        _thisFile.setFileName(_absoluteFilePath);
        if (!_thisFile.open(QFile::ReadOnly))
            return false;
        _destFile.setFileName(destFolder + _thisFileName);
        if (!_destFile.open(QFile::WriteOnly))
            return false;
    }
    if (chunkSize < (_thisFile.size() - _thisFile.pos()))
    {
        QByteArray data(chunkSize, 0);
        _thisFile.read(data.data(), chunkSize);
        return _destFile.write(data) == chunkSize;
    }
    // Last (possibly partial) chunk
    const QByteArray data = _thisFile.read(chunkSize);
    return _destFile.write(data) == data.size();
}
It's not clear from this fragment, but I only intend to copy a binary file as a whole into another location, just in chunks so I can provide progress callbacks and cancellation facility for large files.
Another idea is to use memory mapping. Should I? If so, then should I only map source file and still use _destFile.write, or should I map both and use memcpy?
I guess this question isn't really tied to Qt, I think the answer should be general to any file I/O API that supports memory mapping.
Ok, ok, if it must be a memory mapping solution, here is one:
QFile source("/tmp/bla1.bin");
source.open(QIODevice::ReadOnly);
QFile destination("/tmp/bla2.bin");
destination.open(QIODevice::ReadWrite);
destination.resize(source.size());
uchar *data = destination.map(0, destination.size());
if (!data) {
    qDebug() << "Cannot map";
    exit(-1);
}
int chunksize = 200;
uchar *p = data;
qint64 var = 0;
do {
    var = source.read(reinterpret_cast<char *>(p), chunksize);
    p += var;
} while (var > 0);
destination.unmap(data); // unmap needs the pointer map() returned, not an advanced copy
destination.close();
This maps only the destination file into memory; I doubt mapping the source file as well would make much of a difference. But that is something for concrete measurements, not assumptions.
Another question is whether you can map your whole file into memory at once. Constantly unmapping and remapping will certainly cost performance, even with Qt. Functions like memory mapping tend to behave disturbingly differently on different platforms, e.g. in the maximum file size you can map into memory.
What the optimal method is lies, as always, a bit in the eye of the beholder. Here is at least one working, shorter method:
QFile source("/tmp/bla1.bin");
source.open(QIODevice::ReadOnly);
QFile destination("/tmp/bla2.bin");
destination.open(QIODevice::WriteOnly);
QByteArray buffer;
int chunksize = 200; // Whatever chunk size you like
while (!(buffer = source.read(chunksize)).isEmpty()) {
    destination.write(buffer);
}
destination.close();
source.close();
And memory mapping... I try to stay away from things like that. I am never too sure how platform independent they are.
Use this QFile::map() method:
QFile fs("Sourcefile.bin");
fs.open(QFile::ReadOnly);
QFile fd("Destinationfile.bin");
fd.open(QFile::WriteOnly);
fd.write((char*) fs.map(0, fs.size()), fs.size()); //Copies all data
fd.close();
fs.close();