How to get a frame from video using vlc-qt - C++

I'm using vlc-qt to decode an H.264 video stream, but I need every frame from the video (stream) for further processing. I found this link that describes a solution:
https://discuss.tano.si/t/how-to-get-frame-from-video/253
I made a class that inherits from VlcVideoStream and re-implemented the frameUpdated() function as below:
void MyVideoStream::frameUpdated() {
    qDebug() << "frame";
    int rows, cols, matType;
    // convert to a shared pointer to const frame to avoid a crash
    std::shared_ptr<const VlcYUVVideoFrame> frame = std::dynamic_pointer_cast<const VlcYUVVideoFrame>(renderFrame());
    if (!frame) {
        return; // LCOV_EXCL_LINE
    }
    rows = frame->height + frame->height/2;
    cols = frame->width;
}
and declared my class as:
MyVideoStream *_stream;
_stream = new MyVideoStream(Vlc::YUVFormat, ui->video);
_stream->init(_player);
where _player is a VlcMediaPlayer object reference. But when I ran the program, nothing happened; I don't know what the problem is.

When you subclass VlcVideoStream and re-implement frameUpdated(), you have access to the YUV frame every time it is updated.
If you are familiar with OpenCV, just add this code to the frameUpdated() function; you can then see the gray image (the luma plane displayed as grayscale):
void MyVideoStream::frameUpdated()
{
    std::shared_ptr<const VlcYUVVideoFrame> frame = std::dynamic_pointer_cast<const VlcYUVVideoFrame>(renderFrame());
    if (!frame) {
        return; // LCOV_EXCL_LINE
    }
    int width = frame->width;
    int height = frame->height;
    // the first `height` rows of the buffer hold the Y (luma) plane,
    // so this Mat shows the frame as a grayscale image
    cv::Mat result = cv::Mat(height, width, CV_8UC1, (void*)frame->frameBuffer.data());
    cv::imshow("result", result);
    cv::waitKey(2);
}

Related

Access violation in Mat queue

I'm writing producer/consumer code that receives frame data from an external library.
Each frame is read in a callback function, which the external library runs in a parallel thread, and is pushed into a queue of Mat. I created another function, running in a different thread, that reads and pops each frame.
The problem is that I get an "Access violation reading location" error when I try to read the frame data from the queue.
I'm declaring these variables globally:
queue<Mat> matQ;
OnFrameDataReceivedCB videoCB;
OnDeviceConnectStatusCB connectCB;
guide_usb_video_mode_e videoMode;
int width = 640;
int height = 512;
std::mutex mu;
Here goes the code for the callback function that pushes each frame data:
void OnVideoCallBack(const guide_usb_frame_data_t data) // callback function
{
    if (data.frame_rgb_data_length > 0)
    {
        // Send the displayed data directly
        unsigned char* rgbData;
        Mat frame;
        Size size = Size(width, height);
        rgbData = data.frame_rgb_data;
        frame = Mat(size, CV_8UC3, rgbData, Mat::AUTO_STEP);
        if (mu.try_lock())
        {
            printf("producing...\n");
            matQ.push(frame);
            printf("free producing\n");
            mu.unlock();
        }
    }
}
Here is the function that reads from the queue:
void OnHandleVideoData()
{
    while (true)
    {
        try
        {
            if (matQ.size() <= 0)
            {
                chrono::milliseconds duration(200);
                this_thread::sleep_for(duration);
                continue;
            }
            if (mu.try_lock())
            {
                if (matQ.size() > 0)
                {
                    printf("consuming...\n");
                    Size size = Size(width, height);
                    Mat frame = Mat(size, CV_8UC3);
                    frame = matQ.front().clone();
                    matQ.pop();
                    imwrite("frame.jpg", frame); // the access violation exception is thrown on this line
                    printf("free consuming\n");
                    mu.unlock();
                }
            }
        }
        catch (...)
        {
        }
    }
}
I also tried to put the unsigned char* rgbData array into a queue instead of the Mat, but I got the same error.
What am I missing?
You should clone the frame immediately when receiving it, because the Mat constructed in the callback only wraps the library's rgbData buffer, which the library may reuse or free after the callback returns:
frame = Mat(size, CV_8UC3, rgbData, Mat::AUTO_STEP).clone();
instead of cloning later, in the consumer:
frame = matQ.front().clone();

Force Qt Camera video format

I'm trying to use Qt Camera from QML.
I'm developing a custom VideoFilter:
QVideoFrame MyFilterRunnable::run(QVideoFrame* input, const QVideoSurfaceFormat&, RunFlags)
I started deploying the application on Windows, where:
the frame is mappable in QAbstractVideoBuffer::ReadWrite
the frame pixel format is PixelFormat::Format_BGR32
When I moved to Linux, unfortunately, everything changed without changing the camera:
the frame is only QAbstractVideoBuffer::ReadOnly
the frame pixel format is PixelFormat::Format_YUYV
And now I really don't know how to convert this frame to an OpenCV Mat.
Is there any way to choose the pixel format of the Camera?
Right on Steven,
if (input->pixelFormat() == QVideoFrame::Format_YUYV)
{
    auto input_w  = input->width();
    auto input_h  = input->height();
    auto cam_data = input->bits();
    cv::Mat yuyv(input_h, input_w, CV_8UC2, cam_data);
    cv::Mat rgb (input_h, input_w, CV_8UC3);
    cv::cvtColor(yuyv, rgb, CV_YUV2BGR_YUYV);
    m_mat = rgb;
}
else if (input->pixelFormat() == QVideoFrame::Format_YUV420P || input->pixelFormat() == QVideoFrame::Format_NV12)
{
    m_yuv = true;
    m_mat = yuvFrameToMat8(*input);
}
else
{
    m_yuv = false;
    QImage wrapper = imageWrapper(*input);
    if (wrapper.isNull()) {
        if (input->handleType() == QAbstractVideoBuffer::NoHandle)
            input->unmap();
        return *input;
    }
    m_mat = imageToMat8(wrapper);
}
ensureC3(&m_mat);
I am facing the same problem between my Linux machine and a Raspberry Pi: I am using the same camera, yet the pixel formats given by QVideoFrame are different. It probably has something to do with v4l2.
As for the conversion from YUYV to an OpenCV Mat, this code worked for me:
QVideoFrame *input;
...
auto input_w  = input->width();
auto input_h  = input->height();
auto cam_data = input->bits();
cv::Mat yuyv(input_h, input_w, CV_8UC2, cam_data);
cv::Mat rgb (input_h, input_w, CV_8UC3);
cv::cvtColor(yuyv, rgb, CV_YUV2BGR_YUYV);

Saving images and then display them in a QLabel

I'm currently building a Qt application that uses a camera.
In this application the user captures images, which are then automatically saved in a specific folder. Everything works great.
Now, when the "library" button is clicked, I want to read all the images (JPEG files) and display them one by one in a QLabel.
I couldn't find any tutorials for this; I only found ones that use the argv argument, which is no good for me because in my application the user may capture images and then display them in the same run.
How can I read the file list and display it?
Thank you very much :)
If you have a single QLabel then you have to join the images together into a single one. I find it easier to display a list of QLabels:
auto layout = new QVBoxLayout();
Q_FOREACH (auto imageName, listOfImages) {
    QPixmap pixmap(dirPath + "/" + imageName);
    if (!pixmap.isNull()) {
        auto label = new QLabel();
        label->setPixmap(pixmap);
        layout->addWidget(label);
    }
}
a_wdiget_where_to_show_images->setLayout(layout);
The last line will depend on where you want to place the labels. I suggest some widget with a scroll bar.
Now, you want to read all the images from a directory (the listOfImages variable above). If you don't have it:
const auto listOfImages = QDir(dirPath).entryList(QStringList("*.jpg"), QDir::Files);
You may have layout problems if your images are too big. In that case you should scale them if they are bigger than a given size. Take a look at QPixmap::scaled or QPixmap::scaledToWidth. Also, if image quality is important, specify Qt::SmoothTransformation as the transformation mode.
You can use the OpenCV library to read all the images in a directory:
vector<String> filenames; // note that we are using OpenCV's own String class
String folder = "Deri-45x45/";
glob(folder, filenames); // collects every file name in the folder
for (size_t i = 0; i < filenames.size(); ++i)
{
    Mat img = imread(filenames[i], 0); // 0 = read as grayscale
    // Display img in a QLabel
    QImage imgIn = putImage(img);
    imgIn = imgIn.scaled(ui->label_15->width(), ui->label_15->height(), Qt::IgnoreAspectRatio, Qt::SmoothTransformation);
    ui->label_15->setPixmap(QPixmap::fromImage(imgIn));
}
In order to convert the Mat type to QImage, we use the putImage function:
QImage putImage(const Mat& mat)
{
    // 8-bit unsigned, 1 channel
    if (mat.type() == CV_8UC1)
    {
        // Set the color table (used to translate colour indexes to qRgb values)
        QVector<QRgb> colorTable;
        for (int i = 0; i < 256; i++)
            colorTable.push_back(qRgb(i, i, i));
        // Wrap the Mat data (no copy): the Mat must outlive the QImage,
        // or call img.copy() before the Mat is destroyed
        const uchar *qImageBuffer = (const uchar*)mat.data;
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
        img.setColorTable(colorTable);
        return img;
    }
    // 8-bit unsigned, 3 channels
    if (mat.type() == CV_8UC3)
    {
        // Wrap the Mat data; rgbSwapped() returns a deep copy with R and B exchanged
        const uchar *qImageBuffer = (const uchar*)mat.data;
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        return img.rgbSwapped();
    }
    else
    {
        qDebug() << "ERROR: Mat could not be converted to QImage.";
        return QImage();
    }
}

Efficient way to read and process video files with Qt and OpenCV

I have a few vision algorithms which perform well enough on live camera streams; however, they are far from good when I run them on video files. The stream slows down way too much, although it is fine when no vision algorithm is running. The algorithms are executed by calling VideoAlgoVision->execute( grabbedFrame, &CurrentROI );
Here is how I read video so far:
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Play()
{
    if(IsVideoPause)
    {
        ui->pausePushButton->setEnabled(true);
        ui->playPushButton->setEnabled(false);
        IsVideoPause = false;
        TimerOpen->start( (int) (1000/FPS_open) );
        return;
    }
    else
    {
        ui->pausePushButton->setEnabled(true);
        ui->stopPushButton ->setEnabled(true);
        ui->rewPushButton  ->setEnabled(true);
        ui->ffdPushButton  ->setEnabled(true);
        ui->videoSlider    ->setEnabled(true);
        ui->playPushButton ->setEnabled(false);
        if(!VideoCap)
            VideoCap = new VideoCapture( FileName.toStdString() );
        else
            VideoCap->open( FileName.toStdString() );
        if( VideoCap->isOpened() )
        {
            FrameH    = (int) VideoCap->get(CV_CAP_PROP_FRAME_HEIGHT);
            FrameW    = (int) VideoCap->get(CV_CAP_PROP_FRAME_WIDTH);
            FPS_open  = (int) VideoCap->get(CV_CAP_PROP_FPS);
            NumFrames = (int) VideoCap->get(CV_CAP_PROP_FRAME_COUNT);
            ui->videoSlider->setMaximum( (int)NumFrames );
            ui->videoSlider->setEnabled(true);
            READCOUNT = 0;
            TimerOpen->start( (int) (1000/FPS_open) );
        }
    }
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Pause()
{
    ui->playPushButton->setEnabled(true);
    ui->pausePushButton->setEnabled(false);
    TimerOpen->stop();
    IsVideoPause = true;
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Stop()
{
    ui->stopPushButton->setEnabled(false);
    ui->playPushButton->setEnabled(false);
    ui->pausePushButton->setEnabled(false);
    ui->rewPushButton->setEnabled(false);
    ui->ffdPushButton->setEnabled(false);
    FileName = "";
    TimerOpen->stop();
    READCOUNT = 0;
    ui->videoSlider->setSliderPosition(0);
    ui->videoSlider->setEnabled(false);
    ui->frameLabel->setText( "No camera connected" );
    delete TimerOpen;
    TimerOpen = 0;
    if(VideoCap)
    {
        delete VideoCap;
        VideoCap = 0;
    }
    if(VideoAlgoVision)
    {
        delete VideoAlgoVision;
        VideoAlgoVision = 0;
    }
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Read()
{
    READCOUNT++;
    // Update the video player's slider
    ui->videoSlider->setValue(READCOUNT);
    if(READCOUNT >= NumFrames) // if the video ends
    {
        Pause();
        return;
    }
    Mat grabbedFrame;
    // Get the next frame
    if(VideoCap->isOpened() && VideoCap->read(grabbedFrame))
    {
        // Execute the vision filter
        if(VideoAlgoVision)
            VideoAlgoVision->execute( grabbedFrame, &CurrentROI );
        // Convert Mat to QImage
        QImage frame = MatToQImage( grabbedFrame );
        // Update the display
        UpdateFrame( frame );
    }
}
//////////////////////////////////////////////////////////////////
QImage VisionUnit_widget::MatToQImage(const Mat& mat)
{
    // 8-bit unsigned, 1 channel
    if(mat.type() == CV_8UC1)
    {
        // Set the color table (used to translate colour indexes to qRgb values)
        QVector<QRgb> colorTable;
        for (int i = 0; i < 256; i++)
        {
            colorTable.push_back(qRgb(i, i, i));
        }
        // Wrap the Mat data (no copy): the Mat must outlive the QImage
        const uchar *qImageBuffer = (const uchar*)mat.data;
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
        img.setColorTable(colorTable);
        return img;
    }
    // 8-bit unsigned, 3 channels
    else if(mat.type() == CV_8UC3)
    {
        // Wrap the Mat data; rgbSwapped() returns a deep copy with R and B exchanged
        const uchar *qImageBuffer = (const uchar*)mat.data;
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        return img.rgbSwapped();
    }
    else
    {
        return QImage();
    }
}
So, my question is: is there a better way to read video files with Qt and OpenCV, including video processing? Should I adapt the QTimer interval at run time? I start it with TimerOpen->start( (int) (1000/FPS_open) ); but obviously the vision algorithm slows the whole thing down. Any thoughts?
There may be some optimization to make in the vision algorithms; however, my point here is that they do well on my webcam and IP cameras, which makes me think something is wrong with the way I read/use video files.
thx
You did not provide the whole code, but I guess you have connected TimerOpen's timeout() signal to the VisionUnit_widget::Read() slot. If that is the case, you are accumulating the 1000/FPS_open interval and the processing time on every frame.
Changing your design to something like the following will solve it:
int fSpace = 1000 / FPS;
QTime timer;
timer.start();
forever {
    while(timer.elapsed() < fSpace)
        msleep(1);
    timer.restart();
    ..Process the frame..
}
And it is probably better to move this out of the main thread, into a worker thread.

Array to OpenCV matrix

I have an array double dc[][] and want to convert it to an IplImage* image, and further to a video frame.
What I had to do: I was given a video, I extracted some features from it, and now I have to make a new video of the extracted features.
My approach was to divide the video into frames and extract the features from each frame; then I do the update like this, getting a new dc on each frame iteration:
double dc[48][44];
for(int i = 0; i < 48; i++)
{
    for(int j = 0; j < 44; j++)
    {
        dc[i][j] = max1[i][j] / (1 + max2[i][j]);
    }
}
Now I need to save this dc in such a way that I can reconstruct the video. Can anybody help me with this?
Thanks in advance
If you're okay with using Mat, then you can make a Mat for existing user-allocated memory. One of the Mat constructors has the signature:
Mat::Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP)
where the parameters are:
rows: the memory height,
cols: the width,
type: one of the OpenCV data types (e.g. CV_8UC3),
data: pointer to your data,
step: (optional) stride of your data
I'd encourage you to take a look at the documentation for Mat here
EDIT: Just to make things more concrete, here's an example of making a Mat from some user-allocated data
int main()
{
    // allocate and initialize your user-allocated memory
    const int nrows = 10;
    const int ncols = 10;
    double data[nrows][ncols];
    int vals = 0;
    for (int i = 0; i < nrows; i++)
    {
        for (int j = 0; j < ncols; j++)
        {
            data[i][j] = vals++;
        }
    }
    // make the Mat from the data (with default stride)
    cv::Mat cv_data(nrows, ncols, CV_64FC1, data);
    // print the Mat to see for yourself
    std::cout << cv_data << std::endl;
}
You can save a Mat to a video file via the OpenCV VideoWriter class. You just need to create a VideoWriter, open a video file, and write your frames (as Mat). You can see an example of using VideoWriter here
Here's a short example of using the VideoWriter class:
// fill in a name for your video
const std::string filename = "...";
const double FPS = 30;
VideoWriter outputVideo;
// opens the output video file using an MPEG-1 codec, 30 frames per second,
// with color frames of the given width x height (note: Size takes width first)
outputVideo.open(filename, CV_FOURCC('P','I','M','1'), FPS, Size(width, height));
Mat frame;
// do things with the frame
// ...
// writes the frame out to the video file
outputVideo.write(frame);
The tricky part of the VideoWriter is the opening of the file, as you have a lot of options. You can see the names for different codecs here