For a few days now I have been trying to send JPEG data via TCP to a Qt interface. All images are generated by another process, which performs GPU-based image processing. The network topology is shown below:
EDIT
Transmitter-side
The data are first processed by an algorithm into BMP24 images. The BMP images are then compressed/encoded to JPEGs in order to reduce network traffic. Encoding is done with this library, which is used in the following function.
void CudaBase::encodeBmpToJpeg(unsigned char* idata, uint8_t* odata, int* p_jpeg_size, int width, int height)
{
... // init stuff
gpujpeg_encoder_input_set_image(&encoder_input, idata); // pass idata which contains interleaved BMP24 data
gpujpeg_encoder_encode(encoder, &param, &param_image, &encoder_input, &image, p_jpeg_size); // encode BMP24 to JPG, variable image holds compressed JPG data
gpujpeg_image_save_to_file(image_dir, image, *p_jpeg_size); // save JPG on harddrive
CUDA_CHECK(cudaStreamSynchronize(stream));
// Copy JPG from image to odata
odata = (uint8_t*) malloc(*p_jpeg_size);
if(odata != nullptr)
memcpy((void*)odata, (void*)image, *p_jpeg_size);
else
printf("Could not allocated memory for jpeg image\n");
CUDA_CHECK(cudaStreamDestroy(stream));
gpujpeg_image_destroy(image);
gpujpeg_encoder_destroy(encoder);
}
Now odata contains JPG data and is ready to be sent via TCP, which is done by the following:
socket.make_tcp_header(&header, nof_recieved_records, nof_records, ch);
socket.send(&header, sizeof(header));
socket.waitForACK();
socket.send(images[ch], algthm.getImageSize()); // images[ch] points to odata
socket.waitForACK();
This is how the transmitter sends:
template <typename T>
void Socket::send(T* ptr, int count)
{
printf("Writing to server... \n");
int result = 0;
int _size = count;
do
{
result = write(sockfd, (const void*)buf, _size);
if(result == -1)
{
printf("Error sending to server: %d\n",errno);
_size = 0;
}
else
{
buf += result; // advance past the bytes already written
_size -= result;
}
}
while(_size > 0);
}
Receiver-side
On the Qt client side, the JPG data are written to a QByteArray and then displayed in a QLabel using the QPixmap::loadFromData() function, but loadFromData() always returns false. So it seems that the content of the QByteArray is not valid JPEG data.
When data are received by the client, Qt emits a signal and the following readyRead() function is called:
void ThreadSocket::readyRead()
{
static qint64 sum = 0;
while(socket->bytesAvailable())
{
sum += socket->bytesAvailable();
data->append(socket->readAll());
}
// Receive header from client
if(sum == header_size)
{
sum = 0;
setHeader(stream);
send("OK");
emit received_data(META);
}
// receive image data
else if(sum == header.img_size)
{
sum = 0;
send("OK");
emit received_data(RAW);
}
else
{
return;
}
clearBuffer();
}
EDIT ends here
The following function is a slot which is invoked when new (raw) data are received by the client thread.
void Controller::refresh_image()
{
tcp_header *header = socket->getHeader(); // Get header information from the threaded socket
QPixmap pix;
switch(header->format)
{
case BMP24:
{
QImage img(socket->getData(),
header->img_width,
header->img_height,
QImage::Format_RGB888);
pix = QPixmap::fromImage(img);
break;
}
case JPEG:
{
if(!(pix.loadFromData(socket->getData(), "JPG"))) // Always returns false
qDebug() << "Not able to load pixmap from data";
break;
}
}
// Reload images in GUI
view->label_list()->at(header->current_channel)->setPixmap(pix);
view->update();
}
Note that tcp_header is a structure I defined myself; it is part of my own TCP protocol. Depending on the header, either BMP or JPG images are displayed. Displaying BMP images works without any problems.
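For context, a plausible shape of this header, reconstructed from the fields the snippets access (the actual definition is not shown, so the field types and order here are assumptions):

struct tcp_header
{
    int format;               // BMP24 or JPEG; selects the decode path in refresh_image()
    int img_width;            // image width in pixels
    int img_height;           // image height in pixels
    int img_size;             // payload size in bytes, checked in readyRead()
    int current_channel;      // which QLabel in the GUI receives the image
    int nof_received_records; // record counters passed to make_tcp_header()
    int nof_records;
};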
After debugging I noticed the following:
The JPG data sent by the transmitter are exactly the same as those received. In addition, metadata and BMPs are also received correctly.
When the receiver loads JPG images from the hard disk with QPixmap::load(path_to.jpg), the images are correctly displayed in the GUI. (Note: the JPGs were generated by the transmitter a few milliseconds before loading.)
The received data are completely different from the data loaded from the hard disk, even though both have the same source.
The size of the transferred bytes is exactly the same as on the hard disk. The data are of type uint8_t at both the receiver and the transmitter.
What could be the reason that the transferred data is different from the data loaded from the hard disk?
Does Qt possibly perform a conversion internally?
Related
I am trying to make a video player desktop application in C++ using primarily FFmpeg and Qt6. As of now, I can decode and play video frames correctly at the right speed; that is not a problem. I am now trying to get audio playback working, which is much harder than I expected. I am using libsoundio as my audio library, but the documentation is really poor and there are not many examples/tutorials on it. I am also a beginner when it comes to audio programming, although I understand the basics.

First off, if anyone can recommend an audio library for this type of job, let me know, but I would like to use open-source libraries. Anyway, here is how I decode my audio data with FFmpeg. I'm not sure if I am doing it correctly, as I could barely find documentation on that as well...
I have a struct that contains all the information which is initiated through a function:
struct VideoReader
{
bool valid;
int width, height;
int video_stream_index;
int audio_stream_index;
AVRational time_base;
AVFormatContext* av_format_ctx;
AVCodecContext* av_vi_codec_ctx;
AVCodecContext* av_au_codec_ctx;
AVPacket* packet;
AVFrame* frame;
SwsContext* sws_ctx;
SwrContext* swr_ctx;
};
The function that initiates it is quite long and is not necessary to share but it populates all those values except for the sws_ctx and the swr_ctx.
Here is how I decode packets. This function is simplified; I left the video decoding out of it, and I'll take care of syncing once I can properly play back audio:
bool video_reader_read_au_frame(VideoReader *video_reader, unsigned char **frame_buffer)
{
// Unpack video_reader
auto& av_format_ctx = video_reader->av_format_ctx;
auto& av_codec_ctx = video_reader->av_au_codec_ctx;
auto& av_packet = video_reader->packet;
auto& av_frame = video_reader->frame;
auto& swr_ctx = video_reader->swr_ctx;
int& audio_stream_index = video_reader->audio_stream_index;
// Decode the audio frame data
int response;
while (av_read_frame(av_format_ctx, av_packet) >= 0)
{
last_frame = false;
if (av_packet->stream_index != audio_stream_index)
{
av_packet_unref(av_packet);
continue;
}
response = avcodec_send_packet(av_codec_ctx, av_packet);
if (response < 0)
{
Logger::error("Could not decode packet.");
return false;
}
response = avcodec_receive_frame(av_codec_ctx, av_frame);
if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
{
av_packet_unref(av_packet);
continue;
}
else if (response < 0)
{
Logger::error("Could not decode packet.");
return false;
}
av_packet_unref(av_packet);
break;
}
// Initialize SwrContext
if (!swr_ctx) {
swr_ctx = swr_alloc_set_opts(nullptr,
av_codec_ctx->channel_layout, AV_SAMPLE_FMT_FLT,
av_codec_ctx->sample_rate, av_codec_ctx->channel_layout,
av_codec_ctx->sample_fmt, av_codec_ctx->sample_rate,
0, nullptr);
if (!swr_ctx)
{
Logger::error("Could not create SwrContext.");
return false;
}
if (swr_init(swr_ctx) < 0)
{
Logger::error("Could not initialize SwrContext.");
return false;
}
}
const int MAX_BUFFER_SIZE = av_samples_get_buffer_size(nullptr, av_frame->channels, av_frame->nb_samples, AV_SAMPLE_FMT_FLT, 1);
*frame_buffer = (unsigned char*)av_malloc(MAX_BUFFER_SIZE);
swr_convert(swr_ctx, frame_buffer, av_frame->nb_samples,
(const unsigned char**)av_frame->data, av_frame->nb_samples);
av_frame_unref(av_frame);
return true;
}
Here is how I would normally call this function:
VideoReader vr{};
if(!video_reader_open(&vr, "C:/Path/to/file.mp4"))
{
Logger::error("Could not initialize VideoReader.");
return 1;
}
unsigned char* buffer;
if(!video_reader_read_au_frame(&vr, &buffer))
{
Logger::error("Could not read audio data.");
return 1;
}
play_audio(&buffer); // <-- Find a way to play audio once buffer has data in it
video_reader_close(&vr);
return 0;
Obviously I will loop over video_reader_read_au_frame(&vr, &buffer) to playback the whole video.
I believe my code puts the samples from the decoded frame into buffer, but I am really not sure. I am also unsure whether I need to convert to the AV_SAMPLE_FMT_FLT audio format, to something else, or just leave it as it is. For libsoundio, I kind of understand this example: http://libsound.io/ but I'm not sure I fully understand how the library works, especially the callback function. I know I have to pass buffer in outstream->userdata as a void pointer, but I don't know how to use it in the callback function. Any help or guidance would be greatly appreciated. Note that later on in this project I might want to send this data over a network to play the video on another computer in sync.
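Regarding the callback: a minimal sketch of how outstream->userdata is usually recovered and consumed inside a libsoundio write callback is shown below. It assumes the stream format was set to SoundIoFormatFloat32NE (matching AV_SAMPLE_FMT_FLT); PlaybackState is a hypothetical struct wrapping the decoded buffer, so this illustrates the pattern rather than the poster's exact code:

#include <soundio/soundio.h>

// Hypothetical state handed to the stream via outstream->userdata.
struct PlaybackState
{
    const float* samples;  // interleaved floats produced by swr_convert
    int totalFrames;       // number of frames in samples
    int framesPlayed;      // current playback position
};

static void write_callback(SoundIoOutStream* outstream, int frame_count_min, int frame_count_max)
{
    (void)frame_count_min; // we always try to fill frame_count_max
    auto* state = static_cast<PlaybackState*>(outstream->userdata);
    const int channels = outstream->layout.channel_count;
    int framesLeft = frame_count_max;

    while (framesLeft > 0)
    {
        SoundIoChannelArea* areas;
        int frameCount = framesLeft;
        if (soundio_outstream_begin_write(outstream, &areas, &frameCount) != 0 || frameCount == 0)
            return;

        for (int frame = 0; frame < frameCount; ++frame)
        {
            const bool haveData = state->framesPlayed < state->totalFrames;
            for (int ch = 0; ch < channels; ++ch)
            {
                // Write one sample per channel; emit silence once the buffer runs dry.
                float sample = haveData
                    ? state->samples[state->framesPlayed * channels + ch]
                    : 0.0f;
                *reinterpret_cast<float*>(areas[ch].ptr) = sample;
                areas[ch].ptr += areas[ch].step;
            }
            if (haveData)
                ++state->framesPlayed;
        }

        if (soundio_outstream_end_write(outstream) != 0)
            return;
        framesLeft -= frameCount;
    }
}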
I am trying to build an NDK-based C++ low-latency audio player which will handle three operations for multiple audio sources.
Play from assets.
Stream from an online source.
Play from local device storage.
From one of the Oboe samples provided by Google, I added another function to the class NDKExtractor.cpp to extract a URL-based audio stream and render it to the audio device while reading from the source at the same time.
int32_t NDKExtractor::decode(char *file, uint8_t *targetData, AudioProperties targetProperties) {
LOGD("Using NDK decoder: %s",file);
// Extract the audio frames
AMediaExtractor *extractor = AMediaExtractor_new();
// using this method instead of AMediaExtractor_setDataSourceFd() as used for asset files in the rhythm game example
media_status_t amresult = AMediaExtractor_setDataSource(extractor, file);
if (amresult != AMEDIA_OK) {
LOGE("Error setting extractor data source, err %d", amresult);
return 0;
}
// Specify our desired output format by creating it from our source
AMediaFormat *format = AMediaExtractor_getTrackFormat(extractor, 0);
int32_t sampleRate;
if (AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_SAMPLE_RATE, &sampleRate)) {
LOGD("Source sample rate %d", sampleRate);
if (sampleRate != targetProperties.sampleRate) {
LOGE("Input (%d) and output (%d) sample rates do not match. "
"NDK decoder does not support resampling.",
sampleRate,
targetProperties.sampleRate);
return 0;
}
} else {
LOGE("Failed to get sample rate");
return 0;
};
int32_t channelCount;
if (AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_CHANNEL_COUNT, &channelCount)) {
LOGD("Got channel count %d", channelCount);
if (channelCount != targetProperties.channelCount) {
LOGE("NDK decoder does not support different "
"input (%d) and output (%d) channel counts",
channelCount,
targetProperties.channelCount);
}
} else {
LOGE("Failed to get channel count");
return 0;
}
const char *formatStr = AMediaFormat_toString(format);
LOGD("Output format %s", formatStr);
const char *mimeType;
if (AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mimeType)) {
LOGD("Got mime type %s", mimeType);
} else {
LOGE("Failed to get mime type");
return 0;
}
// Obtain the correct decoder
AMediaCodec *codec = nullptr;
AMediaExtractor_selectTrack(extractor, 0);
codec = AMediaCodec_createDecoderByType(mimeType);
AMediaCodec_configure(codec, format, nullptr, nullptr, 0);
AMediaCodec_start(codec);
// DECODE
bool isExtracting = true;
bool isDecoding = true;
int32_t bytesWritten = 0;
while (isExtracting || isDecoding) {
if (isExtracting) {
// Obtain the index of the next available input buffer
ssize_t inputIndex = AMediaCodec_dequeueInputBuffer(codec, 2000);
//LOGV("Got input buffer %d", inputIndex);
// The input index acts as a status if it's negative
if (inputIndex < 0) {
if (inputIndex == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
// LOGV("Codec.dequeueInputBuffer try again later");
} else {
LOGE("Codec.dequeueInputBuffer unknown error status");
}
} else {
// Obtain the actual buffer and read the encoded data into it
size_t inputSize;
uint8_t *inputBuffer = AMediaCodec_getInputBuffer(codec, inputIndex,
&inputSize);
//LOGV("Sample size is: %d", inputSize);
ssize_t sampleSize = AMediaExtractor_readSampleData(extractor, inputBuffer,
inputSize);
auto presentationTimeUs = AMediaExtractor_getSampleTime(extractor);
if (sampleSize > 0) {
// Enqueue the encoded data
AMediaCodec_queueInputBuffer(codec, inputIndex, 0, sampleSize,
presentationTimeUs,
0);
AMediaExtractor_advance(extractor);
} else {
LOGD("End of extractor data stream");
isExtracting = false;
// We need to tell the codec that we've reached the end of the stream
AMediaCodec_queueInputBuffer(codec, inputIndex, 0, 0,
presentationTimeUs,
AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
}
}
}
if (isDecoding) {
// Dequeue the decoded data
AMediaCodecBufferInfo info;
ssize_t outputIndex = AMediaCodec_dequeueOutputBuffer(codec, &info, 0);
if (outputIndex >= 0) {
// Check whether this is set earlier
if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
LOGD("Reached end of decoding stream");
isDecoding = false;
} else {
// Valid index, acquire buffer
size_t outputSize;
uint8_t *outputBuffer = AMediaCodec_getOutputBuffer(codec, outputIndex,
&outputSize);
/*LOGV("Got output buffer index %d, buffer size: %d, info size: %d writing to pcm index %d",
outputIndex,
outputSize,
info.size,
m_writeIndex);*/
// copy the data out of the buffer
memcpy(targetData + bytesWritten, outputBuffer, info.size);
bytesWritten += info.size;
AMediaCodec_releaseOutputBuffer(codec, outputIndex, false);
}
} else {
// The outputIndex doubles as a status return if its value is < 0
switch (outputIndex) {
case AMEDIACODEC_INFO_TRY_AGAIN_LATER:
LOGD("dequeueOutputBuffer: try again later");
break;
case AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED:
LOGD("dequeueOutputBuffer: output buffers changed");
break;
case AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED:
LOGD("dequeueOutputBuffer: output outputFormat changed");
format = AMediaCodec_getOutputFormat(codec);
LOGD("outputFormat changed to: %s", AMediaFormat_toString(format));
break;
}
}
}
}
// Clean up
AMediaFormat_delete(format);
AMediaCodec_delete(codec);
AMediaExtractor_delete(extractor);
return bytesWritten;
}
Now the problem I am facing is that this code first extracts all the audio data and saves it into a buffer, which then becomes part of AFileDataSource, a class I derived from the DataSource class in the same sample.
Only after it has finished extracting the whole file does it play, by calling onAudioReady() for the Oboe AudioStreamBuilder.
What I need is to play the audio as chunks of the buffer are still streaming in.
Optional query: Aside from the main question, this code blocks the UI even though I created a foreground service to communicate with the NDK functions that execute it. Any thoughts on this?
You probably solved this already, but for future readers...
You need a FIFO buffer to store the decoded audio. You can use Oboe's FIFO buffer, e.g. oboe::FifoBuffer.
You can have a low/high watermark for the buffer and a state machine, so you start decoding when the buffer is almost empty and stop decoding when it is full (you'll figure out the other states you need), as sketched below.
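A minimal sketch of that watermark logic. Fifo, framesAvailable() and decodeNextChunkInto() are illustrative names rather than Oboe APIs, and the thresholds are arbitrary; the point is only the state transitions:

#include <chrono>
#include <cstddef>
#include <thread>

enum class DecodeState { Decoding, Waiting, EndOfStream };

// Decoder-thread loop: refill the FIFO when it drops below the low
// watermark, pause decoding when it rises above the high watermark.
template <typename Fifo, typename Decoder>
void feedLoop(Fifo& fifo, Decoder& decoder)
{
    const size_t kLowWatermark  = 4096;   // frames: resume decoding below this
    const size_t kHighWatermark = 32768;  // frames: pause decoding above this
    DecodeState state = DecodeState::Decoding;

    while (state != DecodeState::EndOfStream)
    {
        const size_t buffered = fifo.framesAvailable();
        if (state == DecodeState::Decoding && buffered >= kHighWatermark)
            state = DecodeState::Waiting;           // buffer full: stop decoding
        else if (state == DecodeState::Waiting && buffered <= kLowWatermark)
            state = DecodeState::Decoding;          // nearly empty: refill

        if (state == DecodeState::Decoding)
        {
            if (!decoder.decodeNextChunkInto(fifo)) // false once the source is exhausted
                state = DecodeState::EndOfStream;
        }
        else
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
        }
    }
}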
As a side note: I implemented such a player only to find out later that the AAC codec is broken on some devices (Xiaomi and Amazon come to mind), so I had to throw away the AMediaCodec/AMediaExtractor parts and use an AAC library instead.
You have to implement a ring buffer (or use the one implemented in the Oboe example, LockFreeQueue.h) and, from the extracting thread, copy the data into buffers that you push onto the ring buffer. On the other end of the ring buffer, the audio thread will get that data from the queue and copy it into the audio buffer. This happens in the onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) callback that you have to implement in your class (see the Oboe docs). Be sure to follow all the good practices on the audio thread (don't allocate/deallocate memory there, no mutexes, no file I/O, etc.). A sketch of this producer/consumer arrangement follows.
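A sketch of that arrangement, assuming float PCM and a callback-based Oboe stream. SampleQueue here is a hand-rolled stand-in for LockFreeQueue.h or oboe::FifoBuffer, and StreamingPlayer is an illustrative name:

#include <oboe/Oboe.h>
#include <algorithm>
#include <atomic>
#include <vector>

// Minimal single-producer/single-consumer ring buffer. The decoding thread
// calls push(); only the audio thread calls pop(), and pop() never blocks
// or allocates, which keeps it safe for onAudioReady().
class SampleQueue
{
public:
    explicit SampleQueue(size_t capacity) : buf_(capacity) {}

    size_t push(const float* src, size_t n)
    {
        const size_t h = head_.load(std::memory_order_acquire);
        const size_t t = tail_.load(std::memory_order_relaxed);
        n = std::min(n, buf_.size() - (t - h));   // clamp to free space
        for (size_t i = 0; i < n; ++i)
            buf_[(t + i) % buf_.size()] = src[i];
        tail_.store(t + n, std::memory_order_release);
        return n;
    }

    size_t pop(float* dst, size_t n)
    {
        const size_t t = tail_.load(std::memory_order_acquire);
        const size_t h = head_.load(std::memory_order_relaxed);
        n = std::min(n, t - h);                   // clamp to available data
        for (size_t i = 0; i < n; ++i)
            dst[i] = buf_[(h + i) % buf_.size()];
        head_.store(h + n, std::memory_order_release);
        return n;
    }

private:
    std::vector<float> buf_;
    std::atomic<size_t> head_{0};  // total samples consumed
    std::atomic<size_t> tail_{0};  // total samples produced
};

class StreamingPlayer : public oboe::AudioStreamCallback
{
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream* stream,
                                          void* audioData, int32_t numFrames) override
    {
        auto* out = static_cast<float*>(audioData);
        const size_t wanted = static_cast<size_t>(numFrames) * stream->getChannelCount();
        const size_t got = queue.pop(out, wanted);
        std::fill(out + got, out + wanted, 0.0f); // underrun: pad with silence
        return oboe::DataCallbackResult::Continue;
    }

    SampleQueue queue{1 << 16};  // the decoding thread pushes into this
};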
Optional query: a service does not run in a separate thread, so obviously if you call it from the UI thread it blocks the UI. Look at the other types of services: you can use an IntentService, or a service with a Messenger that launches a separate thread on the Java side, or you can create threads on the C++ side using std::thread.
I am working on a project which requires me to record audio data as .wav files (of 1 second each) from a MIDI synth plugin loaded in the JUCE demo AudioPluginHost. Basically, I need to automatically create a dataset (corresponding to different parameter configurations) from the MIDI synth.
Will I have to send MIDI Note On/Off messages to generate audio data? Or is there a better way of getting audio data?
AudioBuffer<FloatType> getBusBuffer (AudioBuffer<FloatType>& processBlockBuffer) const
Is this the function which will solve my needs? If yes, how would I store the data? If not, could someone please guide me to the right function/solution.
Thank you.
I'm not exactly sure what you're asking, so I'm going to guess:
You need to programmatically trigger some MIDI notes in your synth, then write all the audio to a .wav file, right?
Assuming you already know JUCE, it would be fairly trivial to make an app that opens your plugin, sends MIDI, and records audio, but it's probably just easier to tweak the AudioPluginHost project.
Let's break it into a few simple steps (first, open the AudioPluginHost project):
Programmatically send MIDI
Look at GraphEditorPanel.h, specifically the class GraphDocumentComponent. It has a private member variable: MidiKeyboardState keyState;. This collects incoming MIDI Messages and then inserts them into the incoming Audio & MIDI buffer that is sent to the plugin.
You can simply call keyState.noteOn (midiChannel, midiNoteNumber, velocity) and keyState.noteOff (midiChannel, midiNoteNumber, velocity) to trigger a note on and off.
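For example, something along these lines inside GraphDocumentComponent would trigger a one-second note (channel, note number, velocity and delay are arbitrary values for illustration):

// Trigger middle C at 80% velocity on MIDI channel 1...
keyState.noteOn (1, 60, 0.8f);

// ...and release it one second later. Timer::callAfterDelay runs the lambda
// on the message thread after the given number of milliseconds.
Timer::callAfterDelay (1000, [this] { keyState.noteOff (1, 60, 0.0f); });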
Record Audio Output
This is a fairly straightforward thing to do in JUCE — you should start by looking at the JUCE Demos. The following example records output audio in the background, but there are plenty of other ways to do it:
class AudioRecorder : public AudioIODeviceCallback
{
public:
AudioRecorder (AudioThumbnail& thumbnailToUpdate)
: thumbnail (thumbnailToUpdate)
{
backgroundThread.startThread();
}
~AudioRecorder()
{
stop();
}
//==============================================================================
void startRecording (const File& file)
{
stop();
if (sampleRate > 0)
{
// Create an OutputStream to write to our destination file...
file.deleteFile();
ScopedPointer<FileOutputStream> fileStream (file.createOutputStream());
if (fileStream.get() != nullptr)
{
// Now create a WAV writer object that writes to our output stream...
WavAudioFormat wavFormat;
auto* writer = wavFormat.createWriterFor (fileStream.get(), sampleRate, 1, 16, {}, 0);
if (writer != nullptr)
{
fileStream.release(); // (passes responsibility for deleting the stream to the writer object that is now using it)
// Now we'll create one of these helper objects which will act as a FIFO buffer, and will
// write the data to disk on our background thread.
threadedWriter.reset (new AudioFormatWriter::ThreadedWriter (writer, backgroundThread, 32768));
// Reset our recording thumbnail
thumbnail.reset (writer->getNumChannels(), writer->getSampleRate());
nextSampleNum = 0;
// And now, swap over our active writer pointer so that the audio callback will start using it..
const ScopedLock sl (writerLock);
activeWriter = threadedWriter.get();
}
}
}
}
void stop()
{
// First, clear this pointer to stop the audio callback from using our writer object..
{
const ScopedLock sl (writerLock);
activeWriter = nullptr;
}
// Now we can delete the writer object. It's done in this order because the deletion could
// take a little time while remaining data gets flushed to disk, so it's best to avoid blocking
// the audio callback while this happens.
threadedWriter.reset();
}
bool isRecording() const
{
return activeWriter != nullptr;
}
//==============================================================================
void audioDeviceAboutToStart (AudioIODevice* device) override
{
sampleRate = device->getCurrentSampleRate();
}
void audioDeviceStopped() override
{
sampleRate = 0;
}
void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
float** outputChannelData, int numOutputChannels,
int numSamples) override
{
const ScopedLock sl (writerLock);
if (activeWriter != nullptr && numInputChannels >= thumbnail.getNumChannels())
{
activeWriter->write (inputChannelData, numSamples);
// Create an AudioBuffer to wrap our incoming data, note that this does no allocations or copies, it simply references our input data
AudioBuffer<float> buffer (const_cast<float**> (inputChannelData), thumbnail.getNumChannels(), numSamples);
thumbnail.addBlock (nextSampleNum, buffer, 0, numSamples);
nextSampleNum += numSamples;
}
// We need to clear the output buffers, in case they're full of junk..
for (int i = 0; i < numOutputChannels; ++i)
if (outputChannelData[i] != nullptr)
FloatVectorOperations::clear (outputChannelData[i], numSamples);
}
private:
AudioThumbnail& thumbnail;
TimeSliceThread backgroundThread { "Audio Recorder Thread" }; // the thread that will write our audio data to disk
ScopedPointer<AudioFormatWriter::ThreadedWriter> threadedWriter; // the FIFO used to buffer the incoming data
double sampleRate = 0.0;
int64 nextSampleNum = 0;
CriticalSection writerLock;
AudioFormatWriter::ThreadedWriter* volatile activeWriter = nullptr;
};
Note that the actual audio callbacks that contain the audio data from your plugin are happening inside the AudioProcessorGraph inside FilterGraph. There is an audio callback happening many times a second where the raw audio data is passed in. It would probably be very messy to change that inside AudioPluginHost unless you know what you are doing — it would probably be simpler to use something like the above example or create your own app that has its own audio flow.
The function you asked about:
AudioBuffer<FloatType> getBusBuffer (AudioBuffer<FloatType>& processBlockBuffer) const
is irrelevant. Once you're already in the audio callback, this would give you the audio being sent to a bus of your plugin (e.g. if your synth had a side chain). What you want to do instead is take the audio coming out of the callback and pass it to an AudioFormatWriter, or preferably an AudioFormatWriter::ThreadedWriter, so that the actual writing happens on a different thread.
If you're not at all familiar with C++ or JUCE, Max/MSP or Pure Data might be easier for you to quickly whip something up.
I am trying to capture images taken from a camera connected to a myRIO and send them over a TCP/IP connection from LabVIEW to a Qt GUI application.
My problem is that Qt keeps throwing a heap pointer exception and crashing when I read the data:
Expression: is_block_type_valid(header->_block_use)
I believe this could be because the data being sent is over 35k bytes, so I tried to read the data in separate chunks, but alas I am still getting the error.
Below is my function that gets called on readyRead() being emitted:
void TCPHandler::onRead() {
QByteArray byteArray;
QByteArray buffer;
QByteArray dataSize = mainSocket->read(5); // read the expected number of incoming bytes (about 35000)
while (buffer.size() < dataSize.toInt()) {
int bytesLeft = dataSize.toInt() - buffer.size();
if (bytesLeft < 1024) {
byteArray = mainSocket->read(bytesLeft);
}
else {
byteArray = mainSocket->read(1024);
}
buffer.append(byteArray);
}
QBuffer imageBuffer(&buffer);
imageBuffer.open(QIODevice::ReadOnly);
QImageReader reader(&imageBuffer, "JPEG");
QImage image;
if(reader.canRead())
image = reader.read();
else {
emit read("Cannot read image data");
}
if (!image.isNull())
{
image.save("C:/temp");
}
else
{
emit read(reader.errorString());
}
}
In the LabVIEW code I send the size of the data first, then the raw image data.
EDIT: Here is the connect for the slot. I also should have mentioned that this is running in a separate thread from the main GUI.
TCPHandler::TCPHandler(QObject *parent)
: QObject(parent),
bytesExpected(0)
{
mainSocket = new QTcpSocket(this);
connect(mainSocket, SIGNAL(readyRead()), this, SLOT(onRead()));
connect(mainSocket, QOverload<QAbstractSocket::SocketError>::of(&QAbstractSocket::error), this, &TCPHandler::displayError);
}
You are sending your length as a decimal string, followed by the data itself. The length should be a binary value instead: rather than an 'I32 to String' function, use a Type Cast with a string as the type.
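On the Qt side you can then structure the read around that binary length prefix. A sketch, assuming the I32 arrives big-endian (LabVIEW's native byte order) and reusing the bytesExpected member from the constructor above; pending would be a QByteArray member, and the save path is just an example:

void TCPHandler::onRead()
{
    pending.append(mainSocket->readAll());    // pending is a QByteArray member

    while (true) {
        if (bytesExpected == 0) {
            if (pending.size() < 4)
                return;                       // wait for the full 4-byte length prefix
            QDataStream ds(pending.left(4));
            ds.setByteOrder(QDataStream::BigEndian);
            quint32 len = 0;
            ds >> len;
            bytesExpected = static_cast<int>(len);
            pending.remove(0, 4);
        }
        if (pending.size() < bytesExpected)
            return;                           // image incomplete; readyRead() will fire again

        QByteArray imageData = pending.left(bytesExpected);
        pending.remove(0, bytesExpected);
        bytesExpected = 0;

        QImage image = QImage::fromData(imageData, "JPEG");
        if (!image.isNull())
            image.save("C:/temp/frame.jpg");  // illustrative path with a file name
        else
            emit read("Cannot read image data");
    }
}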
I'm trying to load an MP3 into a buffer using the SMPEG2 library, which comes with SDL2. Every SMPEG function call returns without error, but when I'm done, the sound buffer is full of zeros.
Here's the code:
bool LoadMP3(char* filename)
{
bool success = false;
const Uint32 Mp3ChunkLen = 4096;
SMPEG* mp3;
SMPEG_Info infoMP3;
Uint8 * ChunkBuffer;
Uint32 MP3Length = 0;
// Allocate a chunk buffer
ChunkBuffer = (Uint8*)malloc(Mp3ChunkLen);
SDL_RWops *mp3File = SDL_RWFromFile(filename, "rb");
if (mp3File != NULL)
{
mp3 = SMPEG_new_rwops(mp3File, &infoMP3, 1, 0);
if(mp3 != NULL)
{
if(infoMP3.has_audio)
{
Uint32 readLen;
// Inform the MP3 of the output audio specifications
SMPEG_actualSpec(mp3, &asDeviceSpecs); // static SDL_AudioSpec asDeviceSpecs; containing valid values after a call to SDL_OpenAudioDevice
// Enable the audio and disable the video.
SMPEG_enableaudio(mp3, 1);
SMPEG_enablevideo(mp3, 0);
// Play the MP3 once to get the size of the needed finale buffer
SMPEG_play(mp3);
while ((readLen = SMPEG_playAudio(mp3, ChunkBuffer, Mp3ChunkLen)) > 0)
{
MP3Length += readLen;
}
SMPEG_stop(mp3);
if(MP3Length > 0)
{
// Reallocate the buffer with the new length (if needed)
if (MP3Length != Mp3ChunkLen)
{
ChunkBuffer = (Uint8*)realloc(ChunkBuffer, MP3Length);
}
// Replay the entire MP3 into the new ChunkBuffer.
SMPEG_rewind(mp3);
SMPEG_play(mp3);
bool readBackSuccess = (MP3Length == SMPEG_playAudio(mp3, ChunkBuffer, MP3Length));
SMPEG_stop(mp3);
if(readBackSuccess)
{
// !!! Here, ChunkBuffer contains only zeros !!!
success = true;
}
}
}
SMPEG_delete(mp3);
mp3 = NULL;
}
SDL_RWclose(mp3File);
mp3File = NULL;
}
free(ChunkBuffer);
return success;
}
The code is largely based on SDL_Mixer, which I cannot use for my project because of its limitations.
I know Ogg Vorbis would be a better choice of file format, but I'm porting a very old project, and it worked entirely with MP3s.
I'm sure the sound system is initialized correctly because I can play WAV files just fine. It is initialized with a frequency of 44100 Hz, 2 channels, 1024 samples, and the AUDIO_S16SYS format (the latter of which is, as I understand from the SMPEG source, mandatory).
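For reference, such an initialization looks roughly like the sketch below. asDeviceSpecs is the static SDL_AudioSpec mentioned in the code comments; MyMixerCallback is a placeholder, since the real mixer is not shown:

#include <SDL.h>

static SDL_AudioSpec asDeviceSpecs;   // filled with the specs actually obtained

// Placeholder for the project's mixer callback.
static void MyMixerCallback(void* userdata, Uint8* stream, int len)
{
    SDL_memset(stream, 0, len);       // real mixing goes here
}

bool OpenSoundDevice()
{
    SDL_AudioSpec want;
    SDL_zero(want);
    want.freq     = 44100;
    want.format   = AUDIO_S16SYS;     // mandatory for SMPEG
    want.channels = 2;
    want.samples  = 1024;
    want.callback = MyMixerCallback;
    SDL_AudioDeviceID device = SDL_OpenAudioDevice(nullptr, 0, &want, &asDeviceSpecs, 0);
    if (device == 0)
        return false;
    SDL_PauseAudioDevice(device, 0);  // unpause so the callback starts running
    return true;
}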
I've calculated the anticipated buffer size, based on the bitrate, the amount of data in the MP3 and the OpenAudioDevice audio specs, and everything is consistent.
I cannot figure why everything but the buffer data seems to be working.
UPDATE #1
Still trying to figure out what was wrong, I thought the MP3 support itself might not be working, so I created the following test:
SMPEG *mpeg;
SMPEG_Info info;
mpeg = SMPEG_new(filename,&info, 1);
SMPEG_play(mpeg);
do { SDL_Delay(50); } while(SMPEG_status(mpeg) == SMPEG_PLAYING);
SMPEG_delete(mpeg);
The MP3 played. So the decoding is actually working. But that's not what I need; I really need the sound buffer data so I can send it to my mixer.
After much tinkering, research and digging through the SMPEG source code, I realized that I had to pass 1 as the sdl_audio parameter to the SMPEG_new_rwops function.
The comment found in smpeg.h is misleading:
The sdl_audio parameter indicates if SMPEG should initialize the SDL audio subsystem. If not, you will have to use the SMPEG_playaudio() function below to extract the decoded data.
Since the audio subsystem was already initialized and I was using the SMPEG_playAudio() function, I had no reason to think this parameter needed to be non-zero. In SMPEG, this parameter triggers the audio decompression at opening time; even though I later called SMPEG_enableaudio(mp3, 1), the data was never reparsed. This might be a bug, or a shady feature.
I had another problem with the freesrc parameter, which needed to be 0, since I free the SDL_RWops object myself.
For future reference, once ChunkBuffer has the MP3 data, it needs to pass through SDL_BuildAudioCVT/SDL_ConvertAudio if it's to be played through an already opened audio device.
The final working code is:
// bool ReadMP3ToBuffer(char* filename)
bool success = false;
const Uint32 Mp3ChunkLen = 4096;
SDL_AudioSpec mp3Specs;
SMPEG* mp3;
SMPEG_Info infoMP3;
Uint8 * ChunkBuffer;
Uint32 MP3Length = 0;
// Allocate a chunk buffer
ChunkBuffer = (Uint8*)malloc(Mp3ChunkLen);
memset(ChunkBuffer, 0, Mp3ChunkLen);
SDL_RWops *mp3File = SDL_RWFromFile(filename, "rb"); // filename is a char* passed to the function.
if (mp3File != NULL)
{
mp3 = SMPEG_new_rwops(mp3File, &infoMP3, 0, 1);
if(mp3 != NULL)
{
if(infoMP3.has_audio)
{
Uint32 readLen;
// Get the MP3 audio specs for later conversion
SMPEG_wantedSpec(mp3, &mp3Specs);
SMPEG_enablevideo(mp3, 0);
// Play the MP3 once to get the size of the needed buffer in relation with the audio specs
SMPEG_play(mp3);
while ((readLen = SMPEG_playAudio(mp3, ChunkBuffer, Mp3ChunkLen)) > 0)
{
MP3Length += readLen;
}
SMPEG_stop(mp3);
if(MP3Length > 0)
{
// Reallocate the buffer with the new length (if needed)
if (MP3Length != Mp3ChunkLen)
{
ChunkBuffer = (Uint8*)realloc(ChunkBuffer, MP3Length);
memset(ChunkBuffer, 0, MP3Length);
}
// Replay the entire MP3 into the new ChunkBuffer.
SMPEG_rewind(mp3);
SMPEG_play(mp3);
bool readBackSuccess = (MP3Length == SMPEG_playAudio(mp3, ChunkBuffer, MP3Length));
SMPEG_stop(mp3);
if(readBackSuccess)
{
SDL_AudioCVT convertedSound;
// NOTE : static SDL_AudioSpec asDeviceSpecs; containing valid values after a call to SDL_OpenAudioDevice
if(SDL_BuildAudioCVT(&convertedSound, mp3Specs.format, mp3Specs.channels, mp3Specs.freq, asDeviceSpecs.format, asDeviceSpecs.channels, asDeviceSpecs.freq) >= 0)
{
Uint32 newBufferLen = MP3Length*convertedSound.len_mult;
// Make sure the audio length is a multiple of a sample size to avoid sound clicking
int sampleSize = ((asDeviceSpecs.format & 0xFF)/8)*asDeviceSpecs.channels;
newBufferLen &= ~(sampleSize-1);
// Allocate the new buffer and proceed with the actual conversion.
convertedSound.buf = (Uint8*)malloc(newBufferLen);
memcpy(convertedSound.buf, ChunkBuffer, MP3Length);
convertedSound.len = MP3Length;
if(SDL_ConvertAudio(&convertedSound) == 0)
{
// Save convertedSound.buf and convertedSound.len_cvt for future use in your mixer code.
// Dont forget to free convertedSound.buf once it's not used anymore.
success = true;
}
}
}
}
}
SMPEG_delete(mp3);
mp3 = NULL;
}
SDL_RWclose(mp3File);
mp3File = NULL;
}
free(ChunkBuffer);
return success;
NOTE: Some MP3 files I tried lost a few milliseconds and cut off too early during playback when I resampled them with this code; some others didn't. I could reproduce the same behaviour in Audacity, so I'm not sure what's going on. There may still be a bug in my code, a bug in SMPEG, or it may be a known issue with the MP3 format itself. If someone can provide an explanation in the comments, that would be great!