QAudioDecoder "GStreamer; Unable to start decoding process" - c++

I get the following error in the console when I try to start the QAudioDecoder:
GStreamer; Unable to start decoding process
Here is the code:
void Media::decode(Memo* memo){
    decoder = new QAudioDecoder();
    format.setSampleRate(48000);
    format.setChannelCount(1);
    format.setSampleSize(8);
    format.setCodec("audio/pcm");
    format.setSampleType(QAudioFormat::UnSignedInt);
    format.setByteOrder(QAudioFormat::LittleEndian);
    decoder->setAudioFormat(format);
    decoder->setSourceFilename(memo->getPathMedia());
    connect(decoder, SIGNAL(bufferReady()), this, SLOT(readBuffer()));
    decoder->start();
}
void Media::readBuffer(){
    buffer = decoder->read();
}
I hope you can help me.

As @nayana suggested, I enabled the GStreamer debug log with GST_DEBUG=3, and it showed that the source file name was incorrectly set:
filesrc gstfilesrc.c:632:gst_file_src_uri_set_uri:<source> Invalid URI 'file:file:///home/rom1/Music/track01.mp3'
Just remove the file: prefix and it works.
// In my source file ...
QString source = qvariant_cast<QString>(audioPlayer->property("source"));
source.remove(0, 7);
qDebug() << "Loading media" << source;
decoder.setSourceFilename(source);
decoder.start();
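A side note on the snippet above: stripping a fixed number of characters is fragile. A minimal sketch of the same fix using QUrl, assuming the source property really holds a file URL (as the error message suggests):
QString source = qvariant_cast<QString>(audioPlayer->property("source"));
QUrl sourceUrl(source);
// toLocalFile() strips the file:// scheme and returns a plain path the decoder accepts.
QString localPath = sourceUrl.isLocalFile() ? sourceUrl.toLocalFile() : source;
qDebug() << "Loading media" << localPath;
decoder.setSourceFilename(localPath);
decoder.start();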

Related

QAudioOutput doesn't work correctly. I hear only noise

My program captures raw data from the microphone in Qt.
QAudioRecorder* recorder = new QAudioRecorder();
QAudioProbe* probe = new QAudioProbe;
connect(probe, SIGNAL(audioBufferProbed(QAudioBuffer)), this, SLOT(processBuffer(QAudioBuffer)));
QAudioEncoderSettings audioSettings;
audioSettings.setCodec("audio/mpeg");
audioSettings.setQuality(QMultimedia::HighQuality);
recorder->setEncodingSettings(audioSettings);
qDebug() << "probe ritorna " << probe->setSource(recorder); // Returns true, hopefully.
//qDebug() << "" << recorder->setOutputLocation(QUrl::fromLocalFile("test"));
recorder->record(); // Now we can do things like calculating levels or performing an FFT
myAudioServer = new MyAudioServer();
myAudioServer->startServer();
In the previous code I record audio and start a QThread to send the audio via QTcpSocket.
void QtVideoWidgetsIssueTrack::processBuffer(const QAudioBuffer& buffer){
    QByteArray byteArr;
    byteArr.append(buffer.constData<char>(), buffer.byteCount());
    QByteArray Data = byteArr;
    qDebug() << myAudioServer->isListening();
    QTcpSocket* myAudioClient = myAudioServer->getSocket();
    qDebug() << myAudioClient;
    qDebug() << "in processBuffer";
    if (myAudioClient != nullptr) {
        myAudioClient->write(Data, Data.count());
        myAudioClient->waitForBytesWritten();
    }
}
The processBuffer method takes the data from the microphone and sends it from the server to the client.
void MyThreadAudioTcpSocket::readyRead()
{
    while (socket->bytesAvailable() > 0) {
        // play back from the QByteArray
        // get default output device
        QByteArray* yourSoundData = new QByteArray(socket->readAll());
        QBuffer* buffer = new QBuffer;
        buffer->setData(yourSoundData->data(), yourSoundData->size());
        buffer->open(QBuffer::ReadOnly);
        QAudioFormat format;
        format.setSampleSize(16);
        format.setSampleRate(22050);
        format.setChannelCount(1);
        format.setCodec("audio/mpeg");
        format.setByteOrder(QAudioFormat::LittleEndian);
        format.setSampleType(QAudioFormat::UnSignedInt);
        QAudioDeviceInfo info(QAudioDeviceInfo::defaultOutputDevice());
        if (!info.isFormatSupported(format)) {
            format = info.nearestFormat(format);
            qDebug() << "format not supported";
        }
        QAudioOutput *output = new QAudioOutput(format);
        output->moveToThread(this);
        output->start(buffer);
    }
}
readyRead is where the data arrives from the socket server. I read all the data from the socket, put it in a QBuffer, set a QAudioFormat, create a QAudioOutput linked to the buffer and start it.
As you can hear in the linked WAV file, QAudioOutput produces only noise. Why?

Capture a frame (image) from a video playing in a QT GUI

I have written a simple video player GUI in Qt. The GUI allows the user to browse the local files and select a video to play in the GUI. The GUI also has 'play', 'pause' and 'stop' actions that apply to the selected video.
I want to add another button, 'Capture', that captures the current frame of the video being played and displays this captured image next to the video (the video should get paused at this point).
I looked into the Qt documentation, specifically this and this, but I am still not able to understand how to implement this in my case.
Kindly guide me.
My code so far is as follows:
#include "qtwidgetsapplication4.h"
#include <iostream>
QtWidgetsApplication4::QtWidgetsApplication4(QWidget *parent)
: QMainWindow(parent)
{
ui.setupUi(this);
player = new QMediaPlayer(this);
vw = new QVideoWidget(this);
player->setVideoOutput(vw);
this->setCentralWidget(vw);
}
void QtWidgetsApplication4::on_actionOpen_triggered() {
QString filename = QFileDialog::getOpenFileName(this, "Open a File", "", "Video File (*.*)");
on_actionStop_triggered();
player->setSource(QUrl::fromLocalFile(filename));
on_actionPlay_triggered();
qDebug("Error Message in actionOpen");
qDebug()<<player->mediaStatus();
}
void QtWidgetsApplication4::on_actionPlay_triggered() {
player->play();
ui.statusBar->showMessage("Playing");
qDebug("Error Message in actionPlay");
qDebug() << player->mediaStatus();
}
void QtWidgetsApplication4::on_actionPause_triggered() {
player->pause();
ui.statusBar->showMessage("Paused...");
qDebug("Error Message in actionPause");
qDebug() << player->mediaStatus();
}
void QtWidgetsApplication4::on_actionStop_triggered() {
player->stop();
ui.statusBar->showMessage("Stopped");
qDebug("Error Message in actionStop");
qDebug() << player->mediaStatus();
}
You can use QVideoProbe to capture a QVideoFrame. Instantiate it and connect its videoFrameProbed signal to your slot before pausing the video. In that slot, convert the QVideoFrame to a QImage using the frame data:
QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(frame.pixelFormat());
QImage image(frame.bits(), frame.width(), frame.height(), imageFormat);
Take a look at the player example for reference. It uses QVideoProbe to calculate the histogram of the current frame.
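To make this concrete, here is a minimal sketch of how the probe could be wired up. Note that QVideoProbe is a Qt 5 API (it is not available in Qt 6), frames usually have to be mapped before their bits are accessible, and probe and captureLabel are hypothetical members used only for illustration:
probe = new QVideoProbe(this);
if (probe->setSource(player)) // returns false if the backend doesn't support probing
    connect(probe, &QVideoProbe::videoFrameProbed,
            this, &QtWidgetsApplication4::on_frameProbed);

void QtWidgetsApplication4::on_frameProbed(const QVideoFrame &frame) {
    QVideoFrame copy(frame);
    if (!copy.map(QAbstractVideoBuffer::ReadOnly)) // bits() is only valid while mapped
        return;
    QImage::Format imageFormat = QVideoFrame::imageFormatFromPixelFormat(copy.pixelFormat());
    QImage image(copy.bits(), copy.width(), copy.height(), copy.bytesPerLine(), imageFormat);
    captureLabel->setPixmap(QPixmap::fromImage(image.copy())); // deep copy before unmapping
    copy.unmap();
    player->pause();
}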

warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:578)

I cannot access the IP camera in OpenCV. I'm using the ipcctrl app to view the camera preview and it works fine, but when I paste the URL into my code it displays warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:578). What's the problem here? Here is the proof that it works fine in ipcctrl:
cv::Mat imgFrame1;
cv::Mat imgFrame2;
cv::VideoCapture capVideo;
const std::string videoStreamAddress = "http://admin:admin#192.168.8.50:8088/mjpeg.cgi?user=USERNAME&password=PWD&channel=0&.mjpg";
std::vector<Blob> blobs;
cv::Point crossingLine[2];
int carCount = 0;
std::ofstream writer;
writer.open("cars.txt");
writer.close();
if (!capVideo.open(videoStreamAddress)) { // if unable to open the video stream
    std::cout << "error reading video file" << std::endl << std::endl; // show error message
    _getch(); // it may be necessary to change or remove this line if not using Windows
    return(0); // and exit program
}
I already solved this problem; it turns out I had an incorrect URL for the video stream address. The hard part is that my camera is not well known and has little documentation about how to configure it. I used the iSpy app to generate a proper URL for my Kedacom camera, tested it in VLC and in the app, and voilà, it worked.

QT failed to load Image from Buffer

My work environment: Qt 5.8, MSVC2015 64-bit, Qt GraphicsView, Windows 7 64-bit.
I am loading an image from a buffer (a daemon process is going to send an image buffer), but it fails to create the image from the buffer.
QFile file("D:\\2.png");
if (!file.open(QFile::ReadOnly))
qDebug() << "Error failed to Open file";
QByteArray array = file.readAll();
array = array.toBase64();
QImage tempimage((uchar *)array.data(), 250, 250, QImage::Format_RGBX8888);
if (!tempimage.isNull()) {
///I always get this error
qDebug() << "Error!!! failed to create a image!";
}
Any idea what I am missing here?
Why are you converting to base64?
Wait, where are you converting from PNG to an image plane?
Try bool QImage::loadFromData(const QByteArray &data, const char *format = Q_NULLPTR) to load the PNG instead of the CTor with the raw data.
If your wire format isn't PNG (and is in fact base64 encoded raw pixel data) then you want to convert FROM base64.
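For that second case, a minimal sketch of decoding from base64 rather than encoding to it. This assumes the wire format really is base64-encoded 250x250 RGBX8888 pixel data, which is only a guess from the question:
// Hypothetical raw-pixel path: decode base64 first, then wrap the pixels.
QByteArray raw = QByteArray::fromBase64(array);
// Note: this QImage does not copy the data, so 'raw' must outlive it (or call .copy()).
QImage tempimage(reinterpret_cast<const uchar *>(raw.constData()),
                 250, 250, QImage::Format_RGBX8888);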
Thanks for all the suggestions and help.
I fixed my mistakes: I removed the base64 conversion and loaded the buffer using loadFromData with a reinterpret_cast of the QByteArray data.
Here is the final solution:
QFile file("D:\\2.png");
if (!file.open(QFile::ReadOnly))
qDebug() << "Error failed to Open file";
QByteArray array = file.readAll();
QImage tempimage;
//// This very important to cast in below format, QByteArray don't work as arguments.
tempimage.loadFromData(reinterpret_cast<const uchar *>(array.data()),array.size());
if (tempimage.isNull()) {
qDebug() << "Error!!! failed to create a image!";
}
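If the eventual goal is to show the result in the Qt GraphicsView mentioned in the work environment, a minimal usage sketch (the graphicsView widget and its ownership are assumptions, not part of the original answer):
// Hypothetical display path: put the decoded image into a scene shown by a QGraphicsView.
QGraphicsScene *scene = new QGraphicsScene(graphicsView);
scene->addPixmap(QPixmap::fromImage(tempimage));
graphicsView->setScene(scene);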

FFMPEG RTSP stream to MPEG4/H264 file using libx264

Heyo folks,
I'm attempting to transcode/remux an RTSP stream in H264 format into an MPEG-4 container, containing just the H264 video stream. Basically, webcam output into an MP4 container.
I can get a poorly coded MP4 produced, using this code:
// Variables here for demo
AVOutputFormat * video_file_output_format = nullptr;
AVFormatContext * video_file_format_context = nullptr;
AVFormatContext * rtsp_format_context = nullptr;
AVCodecContext * video_file_codec_context = nullptr;
AVCodecContext * rtsp_vidstream_codec_context = nullptr;
AVCodec * video_file_encoder_codec = nullptr;
AVPacket packet = {0};
AVStream * video_file_stream = nullptr;
AVCodec * rtsp_decoder_codec = nullptr;
int errorNum = 0, video_stream_index = 0;
std::string outputMP4file = "D:\\somemp4file.mp4";
// begin
AVDictionary * opts = nullptr;
av_dict_set(&opts, "rtsp_transport", "tcp", 0);
if ((errorNum = avformat_open_input(&rtsp_format_context, uriANSI.c_str(), NULL, &opts)) < 0) {
errOut << "Connection failed: avformat_open_input failed with error " << errorNum << ":\r\n" << ErrorRead(errorNum);
TacticalAbort();
return;
}
rtsp_format_context->max_analyze_duration = 50000;
if ((errorNum = avformat_find_stream_info(rtsp_format_context, NULL)) < 0) {
errOut << "Connection failed: avformat_find_stream_info failed with error " << errorNum << ":\r\n" << ErrorRead(errorNum);
TacticalAbort();
return;
}
video_stream_index = errorNum = av_find_best_stream(rtsp_format_context, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
if (video_stream_index < 0) {
errOut << "Connection in unexpected state; made a connection, but there was no video stream.\r\n"
"Attempts to find a video stream resulted in error " << errorNum << ": " << ErrorRead(errorNum);
TacticalAbort();
return;
}
rtsp_vidstream_codec_context = rtsp_format_context->streams[video_stream_index]->codec;
av_init_packet(&packet);
if (!(video_file_output_format = av_guess_format(NULL, outputMP4file.c_str(), NULL))) {
TacticalAbort();
throw std::exception("av_guess_format");
}
if (!(rtsp_decoder_codec = avcodec_find_decoder(rtsp_vidstream_codec_context->codec_id))) {
errOut << "Connection failed: connected, but avcodec_find_decoder returned null.\r\n"
"Couldn't find codec with an AV_CODEC_ID value of " << rtsp_vidstream_codec_context->codec_id << ".";
TacticalAbort();
return;
}
video_file_format_context = avformat_alloc_context();
video_file_format_context->oformat = video_file_output_format;
if (strcpy_s(video_file_format_context->filename, sizeof(video_file_format_context->filename), outputMP4file.c_str())) {
errOut << "Couldn't open video file: strcpy_s failed with error " << errno << ".";
std::string log = errOut.str();
TacticalAbort();
throw std::exception("strcpy_s");
}
if (!(video_file_encoder_codec = avcodec_find_encoder(video_file_output_format->video_codec))) {
TacticalAbort();
throw std::exception("avcodec_find_encoder");
}
// MARKER ONE
if (!outputMP4file.empty() &&
!(video_file_output_format->flags & AVFMT_NOFILE) &&
(errorNum = avio_open2(&video_file_format_context->pb, outputMP4file.c_str(), AVIO_FLAG_WRITE, nullptr, &opts)) < 0) {
errOut << "Couldn't open video file \"" << outputMP4file << "\" for writing : avio_open2 failed with error " << errorNum << ": " << ErrorRead(errorNum);
TacticalAbort();
return;
}
// Create stream in MP4 file
if (!(video_file_stream = avformat_new_stream(video_file_format_context, video_file_encoder_codec))) {
TacticalAbort();
return;
}
video_file_codec_context = video_file_stream->codec;
// MARKER TWO
// error -22/-21 in avio_open2 if this is skipped
if ((errorNum = avcodec_copy_context(video_file_codec_context, rtsp_vidstream_codec_context)) != 0) {
TacticalAbort();
throw std::exception("avcodec_copy_context");
}
//video_file_codec_context->codec_tag = 0;
/*
// MARKER 3 - is this not needed? Examples suggest not.
if ((errorNum = avcodec_open2(video_file_codec_context, video_file_encoder_codec, &opts)) < 0)
{
errOut << "Couldn't open video file codec context: avcodec_open2 failed with error " << errorNum << ": " << ErrorRead(errorNum);
std::string log = errOut.str();
TacticalAbort();
throw std::exception("avcodec_open2, video file");
}*/
//video_file_format_context->flags |= AVFMT_FLAG_GENPTS;
if (video_file_format_context->oformat->flags & AVFMT_GLOBALHEADER)
{
video_file_codec_context->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
if ((errorNum = avformat_write_header(video_file_format_context, &opts)) < 0) {
errOut << "Couldn't open video file: avformat_write_header failed with error " << errorNum << ":\r\n" << ErrorRead(errorNum);
std::string log = errOut.str();
TacticalAbort();
return;
}
However, there are several issues:
I can't pass any x264 options to the output file. The output H264 matches the input H264's profile/level - switching cameras to a different model switches H264 level.
The timing of the output file is off, noticeably.
The duration of the output file is off, massively. A few seconds of footage becomes hours, although playtime doesn't match. (FWIW, I'm using VLC to play them.)
Passing x264 options
If I manually increment PTS per packet, and set DTS equal to PTS, it plays too fast, ~2-3 seconds' worth of footage in one second playtime, and duration is hours long. The footage also blurs past several seconds, about 10 seconds' footage in a second.
If I let FFMPEG decide (with or without GENPTS flag), the file has a variable frame rate (probably as expected), but it plays the whole file in an instant and has a long duration too (over forty hours for a few seconds). The duration isn't "real", as the file plays in an instant.
At Marker One, I try to set the profile by passing options to avio_open2. The options are simply ignored by libx264. I've tried:
av_dict_set(&opts, "vprofile", "main", 0);
av_dict_set(&opts, "profile", "main", 0); // error, missing '('
// FF_PROFILE_H264_MAIN equals 77, so I also tried
av_dict_set(&opts, "vprofile", "77", 0);
av_dict_set(&opts, "profile", "77", 0);
It does seem to read the profile setting, but it doesn't use it. At Marker Two, I tried to set it after the avio_open2, before avformat_write_header.
// I tried all 4 av_dict_set from earlier, passing it to avformat_write_header.
// None had any effect, they weren't consumed.
av_opt_set(video_file_codec_context, "profile", "77", 0);
av_opt_set(video_file_codec_context, "profile", "main", 0);
video_file_codec_context->profile = FF_PROFILE_H264_MAIN;
av_opt_set(video_file_codec_context->priv_data, "profile", "77", 0);
av_opt_set(video_file_codec_context->priv_data, "profile", "main", 0);
Messing with priv_data made the program unstable, but I was trying anything at that point.
I'd like to solve issue 1 (passing settings) first, since I imagine it would bottleneck any attempt to solve issues 2 or 3.
I've been fiddling with this for the better part of a month now. I've been through dozens of documentation pages, Q&As and examples. It doesn't help that quite a few are outdated.
Any help would be appreciated.
Cheers
Okay, firstly: I wasn't actually using FFmpeg, but a fork of FFmpeg called Libav. Don't confuse the two; FFmpeg is the more recent, actively developed project, and Libav was shipped in some Linux distributions.
Compiling for Visual Studio
Once I upgraded to mainline FFmpeg, I had to compile it manually again, since I was using it from Visual Studio and the available static libraries are built with G++, so linking doesn't work nicely.
The official guide is https://trac.ffmpeg.org/wiki/CompilationGuide/MSVC.
First, compiling as follows works fine:
Ensure VS is in PATH. Your PATH should read in this order:
C:\Program Files (x86)\Microsoft Visual Studio XX.0\VC\bin
D:\MinGW\msys64\mingw32\bin
D:\MinGW\msys64\usr\bin
D:\MinGW\bin
Then run the Visual Studio x86 Native Tools command prompt. It should be in your Start Menu.
In the CMD, run
(your path to MinGW)\msys64\msys2_shell.cmd -full-path
In the created MinGW window, run:
$ cd /your dev path/
$ git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
After about five minutes you'll have the FFMPEG source in the subfolder ffmpeg.
Access the source via:
$ cd ffmpeg
Then run:
$ which link
If it doesn't report the Visual Studio linker from your PATH, but /usr/link or /usr/bin/link instead, rename that binary out of the way:
$ mv /usr/bin/link.exe /usr/bin/msys-link.exe
If it does, skip the $ mv step.
Finally, run this command:
$ ./configure --toolchain=msvc plus whatever other options you want
(you can list the available options via ./configure --help)
It may appear inactive for a long time. Once it's done you'll get a couple of pages of output.
Then run:
$ make
$ make install
Note for static builds (configure with --enable-static): although you get Windows static library files, they are produced with the *.a extension. Just rename them to .lib.
(you can use cmd: ren *.a *.lib)
Using FFMPEG
To just copy from the FFMPEG RTSP input to a file, keeping the source profile, level etc., all you need is:
read a network frame: av_read_frame(rtsp_format_context)
pass it to the MP4: av_write_frame(video_file_format_context)
You don't need to open an AVCodecContext, a decoder or an encoder; just avformat_open_input, and the video file's AVFormatContext and AVIOContext.
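A minimal sketch of that stream-copy loop, assuming rtsp_format_context, video_file_format_context, video_file_stream and video_stream_index are set up as in the question and avformat_write_header has already been called; the timestamp rescale and trailer write are my additions:
AVPacket pkt = {0};
av_init_packet(&pkt);
while (av_read_frame(rtsp_format_context, &pkt) >= 0) {
    if (pkt.stream_index == video_stream_index) {
        // Convert timestamps from the RTSP stream's time base to the output stream's.
        av_packet_rescale_ts(&pkt,
            rtsp_format_context->streams[video_stream_index]->time_base,
            video_file_stream->time_base);
        pkt.stream_index = video_file_stream->index;
        av_write_frame(video_file_format_context, &pkt);
    }
    av_packet_unref(&pkt);
}
av_write_trailer(video_file_format_context); // finalizes the MP4 index; without it the file may not play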
If you want to re-encode, you have to:
read a network frame: av_read_frame(rtsp_format_context)
pass the packet to the decoder: avcodec_send_packet(rtsp_decoder_context)
read frames from the decoder (in a loop): avcodec_receive_frame(rtsp_decoder_context)
send each decoded frame to the encoder: avcodec_send_frame(video_file_encoder_context)
read packets from the encoder (in a loop): avcodec_receive_packet(video_file_encoder_context)
send each encoded packet to the output video: av_write_frame(video_file_format_context)
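A sketch of that decode/re-encode loop, under the assumption that rtsp_decoder_context and video_file_encoder_context have already been opened with avcodec_open2 (see the gotchas below); it uses the interleaved write and timestamp rescale that the gotchas also cover, and error handling is stripped to the bare minimum:
AVPacket in_pkt = {0}, out_pkt = {0};
AVFrame * frame = av_frame_alloc();
av_init_packet(&in_pkt);
av_init_packet(&out_pkt);
while (av_read_frame(rtsp_format_context, &in_pkt) >= 0) {
    if (in_pkt.stream_index == video_stream_index &&
        avcodec_send_packet(rtsp_decoder_context, &in_pkt) == 0) {
        // Drain every decoded frame, re-encode it, and drain every encoded packet.
        while (avcodec_receive_frame(rtsp_decoder_context, frame) == 0) {
            if (avcodec_send_frame(video_file_encoder_context, frame) != 0)
                continue;
            while (avcodec_receive_packet(video_file_encoder_context, &out_pkt) == 0) {
                out_pkt.stream_index = video_file_stream->index;
                av_packet_rescale_ts(&out_pkt,
                    video_file_encoder_context->time_base,
                    video_file_stream->time_base);
                av_interleaved_write_frame(video_file_format_context, &out_pkt);
                av_packet_unref(&out_pkt);
            }
        }
    }
    av_packet_unref(&in_pkt);
}
av_frame_free(&frame);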
Some gotchas
Copy out the width, height and pixel format manually. For H264 the pixel format is typically YUV420P.
As an example, for level 3.1 and the High profile:
AVCodecParameters * video_file_codec_params = video_file_stream->codecpar;
video_file_codec_params->profile = FF_PROFILE_H264_HIGH;
video_file_codec_params->format = AV_PIX_FMT_YUV420P;
video_file_codec_params->level = 31;
video_file_codec_params->width = rtsp_vidstream->codecpar->width;
video_file_codec_params->height = rtsp_vidstream->codecpar->height;
libx264 accepts an H264 preset via the opts parameter of avcodec_open2. Example for the "veryfast" preset:
AVDictionary * mydict = nullptr;
av_dict_set(&mydict, "preset", "veryfast", 0);
avcodec_open2(video_file_encoder_context, video_file_encoder_codec, &mydict)
// avcodec_open2 returns < 0 for errors.
// Recognised options will be removed from the mydict variable.
// If all are recognised, mydict will be NULL.
Output timing is a volatile thing. Use this before avcodec_open2:
video_file_stream->avg_frame_rate = rtsp_vidstream->avg_frame_rate;
video_file_stream->r_frame_rate = rtsp_vidstream->r_frame_rate;
video_file_stream->time_base = rtsp_vidstream->time_base;
video_file_encoder_context->time_base = rtsp_vidstream_codec_context->time_base;
// Decreasing GOP size for more seek positions doesn't end well.
// libx264 forces the new GOP size.
video_file_encoder_context->gop_size = rtsp_vidstream_codec_context->gop_size;
if ((errorNum = avcodec_open2(video_file_encoder_context,...)) < 0) {
// an error...
}
H264 may write at double-speed to the file, so playback is doubly fast. To change this, go manual with encoded packets' timing:
packet->pts = packet->dts = frameNum++;
av_packet_rescale_ts(packet, video_file_encoder_context->time_base, video_file_stream->time_base);
packet->pts *= 2;
packet->dts *= 2;
av_interleaved_write_frame(video_file_format_context, packet)
// av_interleaved_write_frame returns < 0 for errors.
Note we switch from av_write_frame to av_interleaved_write_frame, and set both PTS and DTS. frameNum should be an int64_t, and should start from 0 (although that's not required).
Also note that the av_packet_rescale_ts call's parameters are the video file encoder context's time base and the video file stream's time base; RTSP isn't involved.
VLC media player won't play H.264 streams encoded with an FPS of 4 or lower.
So if your RTSP streams are showing the first decoded frame and never progressing, or showing pure green until the video ends, make sure your FPS is high enough. (that's VLC v2.2.4)