I have a few vision algorithms that perform well enough on live camera streams; however, when I run them on video files the stream slows down far too much. Playback is fine when no vision algorithm is running. The algorithms are executed by calling VideoAlgoVision->execute( grabbedFrame, &CurrentROI );
Here is how I read video so far:
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Play()
{
    if(IsVideoPause)
    {
        ui->pausePushButton->setEnabled(true);
        ui->playPushButton->setEnabled(false);
        IsVideoPause = false;
        TimerOpen->start( (int) (1000/FPS_open) );
        return;
    }
    else
    {
        ui->pausePushButton->setEnabled(true);
        ui->stopPushButton ->setEnabled(true);
        ui->rewPushButton  ->setEnabled(true);
        ui->ffdPushButton  ->setEnabled(true);
        ui->videoSlider    ->setEnabled(true);
        ui->playPushButton ->setEnabled(false);
        if(!VideoCap)
            VideoCap = new VideoCapture( FileName.toStdString() );
        else
            VideoCap->open( FileName.toStdString() );
        if( VideoCap->isOpened() )
        {
            FrameH    = (int) VideoCap->get(CV_CAP_PROP_FRAME_HEIGHT);
            FrameW    = (int) VideoCap->get(CV_CAP_PROP_FRAME_WIDTH);
            FPS_open  = (int) VideoCap->get(CV_CAP_PROP_FPS);
            NumFrames = (int) VideoCap->get(CV_CAP_PROP_FRAME_COUNT);
            ui->videoSlider->setMaximum( (int)NumFrames );
            ui->videoSlider->setEnabled(true);
            READCOUNT = 0;
            TimerOpen->start( (int) (1000/FPS_open) );
        }
    }
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Pause()
{
    ui->playPushButton->setEnabled(true);
    ui->pausePushButton->setEnabled(false);
    TimerOpen->stop();
    IsVideoPause = true;
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Stop()
{
    ui->stopPushButton->setEnabled(false);
    ui->playPushButton->setEnabled(false);
    ui->pausePushButton->setEnabled(false);
    ui->rewPushButton->setEnabled(false);
    ui->ffdPushButton->setEnabled(false);
    FileName = "";
    TimerOpen->stop();
    READCOUNT = 0;
    ui->videoSlider->setSliderPosition(0);
    ui->videoSlider->setEnabled(false);
    ui->frameLabel->setText( "No camera connected" );
    delete TimerOpen;
    TimerOpen = 0;
    if(VideoCap)
    {
        delete VideoCap;
        VideoCap = 0;
    }
    if(VideoAlgoVision)
    {
        delete VideoAlgoVision;
        VideoAlgoVision = 0;
    }
}
//////////////////////////////////////////////////////////////////
void VisionUnit_widget::Read()
{
    READCOUNT++;
    // Update Video Player's slider
    ui->videoSlider->setValue(READCOUNT);
    if(READCOUNT >= NumFrames) // if avi ends
    {
        Pause();
        return;
    }
    Mat grabbedFrame;
    // Get next frame
    if(VideoCap->isOpened() && VideoCap->read(grabbedFrame))
    {
        // Execute the vision filter
        if(VideoAlgoVision)
            VideoAlgoVision->execute( grabbedFrame, &CurrentROI );
        // Convert Mat to QImage
        QImage frame = MatToQImage( grabbedFrame );
        // Update the display
        UpdateFrame( frame );
    }
}
//////////////////////////////////////////////////////////////////
QImage VisionUnit_widget::MatToQImage(const Mat& mat)
{
    // 8-bit unsigned, 1 channel
    if(mat.type()==CV_8UC1)
    {
        // Set the color table (used to translate colour indexes to qRgb values)
        QVector<QRgb> colorTable;
        for (int i=0; i<256; i++)
        {
            colorTable.push_back(qRgb(i,i,i));
        }
        // Wrap the input Mat's buffer (no copy is made)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        // Create QImage with same dimensions as input Mat
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
        img.setColorTable(colorTable);
        return img;
    }
    // 8-bit unsigned, 3 channels
    else if(mat.type()==CV_8UC3)
    {
        // Wrap the input Mat's buffer (no copy is made)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        // Create QImage with same dimensions as input Mat
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        return img.rgbSwapped();
    }
    else
    {
        return QImage();
    }
}
So, my question is: is there a better way than this to read video files with Qt and OpenCV, including video processing? Should I adapt the QTimer interval at run time? I start it with TimerOpen->start( (int) (1000/FPS_open) );, but obviously the vision algorithm slows the whole thing down. Any thoughts?
There may be some optimization to make in the vision algorithms; however, my point here is that they do well on my webcam and IP cameras, which makes me think there is something wrong with the way I read/use video files.
thx
You did not provide the whole code, but I guess that you have connected TimerOpen's timeout() signal to the VisionUnit_widget::Read() slot. If this is the case, you are accumulating 1000/FPS_open and the processing time.
Changing your design to something like the following will solve it:
int fSpace = 1000 / FPS;
QTime timer;
timer.start();
forever {
    // wait out whatever remains of the frame interval, then process
    while(timer.elapsed() < fSpace)
        QThread::msleep(1);
    timer.restart();
    ..Process the frame..
}
And it is probably better to move this processing from the main thread to another thread.
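For completeness, here is a minimal sketch of that design in a worker thread; the names (FrameWorker, frameReady) are illustrative assumptions, not from the original code, and QElapsedTimer stands in for QTime:

#include <QThread>
#include <QElapsedTimer>
#include <QImage>
#include <opencv2/opencv.hpp>

// Hypothetical worker thread: grabs and processes frames off the GUI
// thread and sleeps only for whatever is left of the frame interval,
// so processing time is no longer added on top of 1000/FPS.
class FrameWorker : public QThread
{
    Q_OBJECT
public:
    FrameWorker(const std::string &file, int fps)
        : m_file(file), m_fps(fps) {}

signals:
    void frameReady(const QImage &img); // connect this to the GUI update slot

protected:
    void run() override
    {
        cv::VideoCapture cap(m_file);
        const int fSpace = 1000 / m_fps;
        QElapsedTimer timer;
        timer.start();
        cv::Mat frame;
        while (cap.read(frame)) {
            // ...run the vision algorithm on frame here...
            // emit frameReady(...); // convert with a deep-copying Mat-to-QImage helper
            while (timer.elapsed() < fSpace)
                msleep(1);
            timer.restart();
        }
    }

private:
    std::string m_file;
    int m_fps;
};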
Related
I have a camera application that will display the cameras live stream:
void MainWindow::on_start()
{
    if(video.isOpened())
    {
        video.release();
        return;
    }
    if(!isCameraFound || !video.open(pipeline.trimmed().toStdString(), cv::CAP_GSTREAMER))
    {
        QMessageBox::critical(this,
                              "Video Error",
                              "Must be the correct USB Camera connected!");
        return;
    }
    Mat frame;
    while(video.isOpened())
    {
        video >> frame;
        if(!frame.empty())
        {
            QImage qimg(frame.data,
                        frame.cols,
                        frame.rows,
                        frame.step,
                        QImage::Format_RGB888);
            pixmap.setPixmap( QPixmap::fromImage(qimg.rgbSwapped()) );
            ui->graphicsView->fitInView(&pixmap, Qt::KeepAspectRatio);
        }
        ...
And I have a function that does processing, with a loop that inspects the first 250 frames, because the camera gives off a specific frame when something is found. But while this loop runs, the video stream lags so much that it basically freezes until the loop is done. How can I remove the lag caused by the frame-grabbing loop, so that the loop can run through while the video stream keeps playing?
void MainWindow::on_push()
{
    Mat test;
    for(int i = 0; i < 250; i++){
        video >> test;
        ...
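Not from the original thread, but one way to avoid the freeze is to not run a second grabbing loop at all: since the display loop already pulls every frame, the button can simply arm a counter and the inspection runs on the frames the display loop grabs. A minimal sketch; inspectRemaining and inspectFrame are hypothetical names:

// Hypothetical member, for illustration: int inspectRemaining = 0;
void MainWindow::on_push()
{
    inspectRemaining = 250; // arm: inspect the next 250 frames
}

// ...inside the existing while(video.isOpened()) display loop, after video >> frame:
if (inspectRemaining > 0 && !frame.empty())
{
    inspectFrame(frame);   // per-frame check; keep it cheap or offload it
    --inspectRemaining;
}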
I'm using vlc-qt to decode an h264 video stream, but I need every frame from the video (stream) for further processing. I found this link that describes the solution:
https://discuss.tano.si/t/how-to-get-frame-from-video/253
I made a class that inherits from the VlcVideoStream class and re-implements the frameUpdated() function as below:
void MyVideoStream::frameUpdated() {
    qDebug() << "frame" ;
    int rows, cols, matType;
    // convert to shared pointer to const frame to avoid crash
    std::shared_ptr<const VlcYUVVideoFrame> frame = std::dynamic_pointer_cast<const VlcYUVVideoFrame>(renderFrame());
    if (!frame) {
        return; // LCOV_EXCL_LINE
    }
    rows = frame->height + frame->height/2;
    cols = frame->width;
}
and declared my class as:
MyVideoStream *_stream;
_stream = new MyVideoStream(Vlc::YUVFormat, ui->video);
_stream->init(_player);
where _player is a VlcMediaPlayer object reference. But when I ran the program, nothing happened. I don't know what the problem is.
When you subclass VlcVideoStream and re-implement frameUpdated(), you have access to the YUV frame every time it is updated.
If you are familiar with OpenCV, just add this code to the frameUpdated() function; then you can see the gray image:
void MyVideoStream::frameUpdated()
{
    std::shared_ptr<const VlcYUVVideoFrame> frame = std::dynamic_pointer_cast<const VlcYUVVideoFrame>(renderFrame());
    if (!frame) {
        return; // LCOV_EXCL_LINE
    }
    int width = frame->width;
    int height = frame->height;
    // wrap the Y plane of the YUV buffer as a single-channel image
    cv::Mat result = cv::Mat(height, width, CV_8UC1, (void*)frame->frameBuffer.data());
    cv::imshow("result", result);
    cv::waitKey(2);
}
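If you need color rather than just the Y plane, one sketch (my assumption, not part of the original answer) is to wrap the whole planar buffer, whose height is height + height/2 as computed in the question, and let OpenCV convert it:

// Wrap the full planar YUV buffer (Y plane plus the two half-size chroma planes)
cv::Mat yuv(height + height/2, width, CV_8UC1, (void*)frame->frameBuffer.data());
cv::Mat bgr;
cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420); // try COLOR_YUV2BGR_YV12 if the hues look wrong
cv::imshow("color", bgr);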
I'm currently building a Qt application that uses a camera.
In this application the user captures images, which are then automatically saved in a specific folder. Everything works great.
Now, when the "library" button is clicked, I want to read all the images (JPEG files) and display them one by one in a QLabel.
I couldn't find any tutorials for this; I only found tutorials that use the argv argument, which is no good for me, because in my application the user may capture images and then display them in the same run.
How can I read the file list and display the images?
Thank you very much :)
If you have a single QLabel then you have to join the images together into a single one. I find it easier to display a list of QLabels:
auto layout = new QVBoxLayout();
Q_FOREACH (auto imageName, listOfImages) {
    QPixmap pixmap(dirPath + "/" + imageName);
    if (!pixmap.isNull()) {
        auto label = new QLabel();
        label->setPixmap(pixmap);
        layout->addWidget(label);
    }
}
a_widget_where_to_show_images->setLayout(layout);
The last line will depend on where you want to place the labels. I suggest some widget with a scroll bar.
Now, you want to read all the images from a directory (the listOfImages variable above). If you don't have it:
const auto listOfImages = QDir(dirPath).entryList(QStringList("*.jpg"), QDir::Files);
You may have layout problems if your images are too big. In that case you should scale them if they are bigger than a given size. Take a look at QPixmap::scaled or QPixmap::scaledToWidth. Also, if image quality is important, specify Qt::SmoothTransformation as the transformation mode.
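For example, a minimal guard before label->setPixmap(pixmap); the 640 px cap is an arbitrary choice of mine:

if (pixmap.width() > 640)
    pixmap = pixmap.scaledToWidth(640, Qt::SmoothTransformation);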
You can use the OpenCV library to read all the images in a directory.
vector<String> filenames; // notice here that we are using OpenCV's embedded "String" class
String folder = "Deri-45x45/"; // again OpenCV's embedded "String" class
glob(folder, filenames); // new function that does the job ;-)
for (size_t i = 0; i < filenames.size(); ++i)
{
    Mat img = imread(filenames[i], 0);
    // Display img in QLabel
    QImage imgIn = putImage(img);
    imgIn = imgIn.scaled(ui->label_15->width(), ui->label_15->height(), Qt::IgnoreAspectRatio, Qt::SmoothTransformation);
    // note: this sets the same label on each iteration, so only the last image stays visible
    ui->label_15->setPixmap(QPixmap::fromImage(imgIn));
}
In order to convert the Mat type to QImage, we use the putImage function:
QImage putImage(const Mat& mat)
{
    // 8-bit unsigned, 1 channel
    if (mat.type() == CV_8UC1)
    {
        // Set the color table (used to translate colour indexes to qRgb values)
        QVector<QRgb> colorTable;
        for (int i = 0; i < 256; i++)
            colorTable.push_back(qRgb(i, i, i));
        // Wrap the input Mat's buffer (no copy is made)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        // Create QImage with same dimensions as input Mat
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
        img.setColorTable(colorTable);
        return img;
    }
    // 8-bit unsigned, 3 channels
    if (mat.type() == CV_8UC3)
    {
        // Wrap the input Mat's buffer (no copy is made)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        // Create QImage with same dimensions as input Mat
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        return img.rgbSwapped();
    }
    else
    {
        qDebug() << "ERROR: Mat could not be converted to QImage.";
        return QImage();
    }
}
I am trying to detect colored balls, like PS3 Move controller balls, from a 2 m distance. I have 10 cameras in the same room hanging from the ceiling. The room is dark and the balls have LEDs inside. I have 4-5 balls (red, blue, green, yellow, pink). I want to track their positions with OpenCV. What's the right method for doing this in OpenCV? Can you give a link or example for this?
I use this code but I have a delay problem. When I comment out my trackFilteredObject line there is no lag, but when using this code I get a lot of latency. I can't understand why this happens: my normal CPU usage is ~15% and RAM usage 6.3 GB/15 GB (40%); when I run this code, CPU usage is ~20-23% and RAM usage 6.4 GB. I think it's not about CPU/RAM performance. What am I doing wrong?
Video: https://www.youtube.com/watch?v=_BKtJpPrkO4 (You can see the lag in the first 10 seconds. After 10 seconds I comment out the tracking code.)
Note: kameraSayisi means camera count. My track function:
void trackFilteredObject(Object theObject, Mat threshold, Mat HSV, Mat &cameraFeed){
    //max number of objects to be detected in frame
    const int FRAME_WIDTH = 5120;
    const int FRAME_HEIGHT = 480;
    const int MAX_NUM_OBJECTS = 50;
    //minimum and maximum object area
    const int MIN_OBJECT_AREA = 10*10;
    const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;
    vector<Object> objects;
    Mat temp;
    threshold.copyTo(temp);
    //these two vectors needed for output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    //find contours of filtered image using openCV findContours function
    findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    //use moments method to find our filtered object
    double refArea = 0;
    bool objectFound = false;
    if (hierarchy.size() > 0) {
        int numObjects = hierarchy.size();
        //if number of objects greater than MAX_NUM_OBJECTS we have a noisy filter
        if(numObjects < MAX_NUM_OBJECTS){
            for (int index = 0; index >= 0; index = hierarchy[index][0]) {
                Moments moment = moments((cv::Mat)contours[index]);
                double area = moment.m00;
                //if the area is less than 20 px by 20 px then it is probably just noise
                //if the area is the same as 3/2 of the image size, probably just a bad filter
                //we only want the object with the largest area, so we save a reference area each
                //iteration and compare it to the area in the next iteration.
                if(area > MIN_OBJECT_AREA){
                    Object object;
                    object.setXPos(moment.m10/area);
                    object.setYPos(moment.m01/area);
                    object.setType(theObject.getType());
                    object.setColor(theObject.getColor());
                    objects.push_back(object);
                    objectFound = true;
                }else objectFound = false;
            }
            //let user know you found an object
            if(objectFound == true){
                //draw object location on screen
                drawObject(objects, cameraFeed, temp, contours, hierarchy);
            }
        }else putText(cameraFeed, "TOO MUCH NOISE! ADJUST FILTER", Point(0,50), 1, 2, Scalar(0,0,255), 2);
    }
}
Main Code:
void Run()
{
    int w, h;
    _fps = 30;
    IplImage *pCapImage[kameraSayisi];
    IplImage *pDisplayImage;
    PBYTE pCapBuffer = NULL;
    // Create camera instances
    for(int i = 0; i < kameraSayisi; i++)
    {
        _cam[i] = CLEyeCreateCamera(_cameraGUID[i], _mode, _resolution, _fps);
        if(_cam[i] == NULL) return;
        // Get camera frame dimensions
        CLEyeCameraGetFrameDimensions(_cam[i], w, h);
        // Create the OpenCV images
        pCapImage[i] = cvCreateImage(cvSize(w, h), IPL_DEPTH_8U, 1);
        // Set some camera parameters
        CLEyeSetCameraParameter(_cam[i], CLEYE_GAIN, 0);
        CLEyeSetCameraParameter(_cam[i], CLEYE_EXPOSURE, 511);
        // Start capturing
        CLEyeCameraStart(_cam[i]);
    }
    pDisplayImage = cvCreateImage(cvSize(w*kameraSayisi / 2, h * kameraSayisi/4), IPL_DEPTH_8U, 1);
    if(_cam == NULL) return;
    int iLastX = -1;
    int iLastY = -1;
    //Capture a temporary image from the camera
    //program
    bool trackObjects = true;
    bool useMorphOps = true;
    Mat HSV;
    //Create a black image with the size as the camera output
    Mat imgLines;
    // imgLines = Mat::zeros( cvarrToMat(image).size(), CV_8UC3 );
    Mat threshold;
    //x and y values for the location of the object
    int x = 0, y = 0;
    bool calibrationMode = false;
    if(calibrationMode){
        //create slider bars for HSV filtering
        createTrackbars();
    }
    // image capturing loop
    while(_running)
    {
        PBYTE pCapBuffer;
        // Capture camera images
        for(int i = 0; i < kameraSayisi; i++)
        {
            cvGetImageRawData(pCapImage[i], &pCapBuffer);
            CLEyeCameraGetFrame(_cam[i], pCapBuffer, (i==0) ? 2000 : 0);
        }
        // Display stereo image
        for(int i = 0; i < kameraSayisi; i++)
        {
            cvSetImageROI(pDisplayImage, cvRect(w * (i%4), i/4 * h, w, h));
            cvCopy(pCapImage[i], pDisplayImage);
        }
        cvResetImageROI(pDisplayImage);
        Mat imgOriginal;
        Mat imgConverted = cvarrToMat(pDisplayImage);
        if(calibrationMode == true)
        {
            //need to find the appropriate color range values
            // calibrationMode must be false
            //if in calibration mode, we track objects based on the HSV slider values.
            //cvtColor(imgOriginal,imgOriginal,CV_BayerRG2RGB);
            cvtColor(imgConverted, imgOriginal, CV_BayerGB2BGR);
            cvtColor(imgOriginal, HSV, CV_BGR2HSV);
            inRange(HSV, Scalar(H_MIN,S_MIN,V_MIN), Scalar(H_MAX,S_MAX,V_MAX), threshold);
            morphOps(threshold);
            imshow(_windowName + 'T', threshold);
            //the following is for Canny edge detection
            /// Create a matrix of the same type and size as src (for dst)
            dst.create(imgOriginal.size(), src.type());
            /// Convert the image to grayscale
            cvtColor(imgOriginal, src_gray, CV_BGR2GRAY);
            /// Create a window
            namedWindow(window_name, CV_WINDOW_AUTOSIZE);
            /// Create a Trackbar for user to enter threshold
            // createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
            /// Show the image
            Object a = Object(H_MIN,S_MIN,V_MIN,H_MAX,S_MAX,V_MAX);
            trackFilteredObject(a, threshold, HSV, imgOriginal);
        }
        else{
            //we can use their member functions/information
            Object blue("blue"), yellow("yellow"), red("red"), orange("orange"), white("white");
            cvtColor(imgConverted, imgOriginal, CV_BayerGB2BGR);
            //first find blue objects
            cvtColor(imgOriginal, HSV, CV_RGB2HSV);
            inRange(HSV, blue.getHSVmin(), blue.getHSVmax(), threshold);
            morphOps(threshold);
            //then yellows
            inRange(HSV, yellow.getHSVmin(), yellow.getHSVmax(), threshold);
            //then reds
            inRange(HSV, red.getHSVmin(), red.getHSVmax(), threshold);
            //then white
            inRange(HSV, white.getHSVmin(), white.getHSVmax(), threshold);
            //then orange
            inRange(HSV, orange.getHSVmin(), orange.getHSVmax(), threshold);
            trackFilteredObject(yellow, threshold, HSV, imgOriginal);
            trackFilteredObject(white, threshold, HSV, imgOriginal);
            trackFilteredObject(red, threshold, HSV, imgOriginal);
            trackFilteredObject(blue, threshold, HSV, imgOriginal);
            trackFilteredObject(orange, threshold, HSV, imgOriginal);
        }
        //delay 10ms so that screen can refresh.
        //image will not appear without this waitKey() command
        if (cvWaitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
        {
            cout << "esc key is pressed by user" << endl;
            break;
        }
        // cvShowImage(_windowName, image);
        imshow(_windowName, imgOriginal);
    }
    for(int i = 0; i < kameraSayisi; i++)
    {
        // Stop camera capture
        CLEyeCameraStop(_cam[i]);
        // Destroy camera object
        CLEyeDestroyCamera(_cam[i]);
        // Destroy the allocated OpenCV image
        cvReleaseImage(&pCapImage[i]);
        _cam[i] = NULL;
    }
}
I have an app that has to pull frames from video, transform one a little, transform one a lot, and simultaneously display them in the GUI. In a worker thread, there's an OpenCV loop:
while(1) {
    cv::VideoCapture kalibrowanyPlik;
    kalibrowanyPlik.open(kalibracja.toStdString()); //open file from url
    int maxFrames = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_COUNT);
    for(int i = 0; i < maxFrames; i++) //I thought it crashed when finished reading the first time around
    {
        cv::Mat frame;
        cv::Mat gray;
        cv::Mat color;
        kalibrowanyPlik.read(frame);
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::cvtColor(frame, color, CV_BGR2RGB);
        QImage image((uchar*)color.data, color.cols, color.rows, QImage::Format_RGB888);
        QImage processedImage((uchar*)gray.data, gray.cols, gray.rows, QImage::Format_Indexed8);
        emit progressChanged(image, processedImage);
        QThread::msleep(50);
    }
}
And this is how the frames are placed in the GUI:
void MainWindow::onProgressChagned(QImage image, QImage processedImage) {
    QPixmap processed = QPixmap::fromImage(processedImage);
    processed = processed.scaledToHeight(379);
    ui->labelHsv->clear();
    ui->labelHsv->setPixmap(processed);
    QPixmap original = QPixmap::fromImage(image); //debug points SIGSEGV here
    original = original.scaledToHeight(379);
    ui->labelKalibracja->clear();
    ui->labelKalibracja->setPixmap(original);
}
The RGB image always crashes; the grayscale image never crashes (tested). Why is the RGB image crashing?
edit: I've just discovered that if I change msleep(50) to msleep(100) it executes perfectly. But I don't want that: I need at least 25 frames per second, 10 is not acceptable. Why would that cause a SIGSEGV?
Standard issue. The problem is memory management!
See my other answer. In the comments there is a good link.
So, in your code, QImage doesn't copy and doesn't take ownership of the matrix's memory. Later on, when the matrix is destroyed and QImage tries to access this memory (QImage is copied by creating a shallow copy), you get a segfault.
Here is the code from this link (I've tweaked it a bit); for some reason that site has some administration issues (some quota exceeded), which is why I'm pasting it here.
inline QImage cvMatToQImage( const cv::Mat &inMat )
{
    switch ( inMat.type() )
    {
        // 8-bit, 4 channel
        case CV_8UC4:
        {
            QImage image( inMat.data, inMat.cols, inMat.rows, inMat.step, QImage::Format_RGB32 );
            QImage copy(image);
            copy.bits(); //enforce deep copy
            return copy;
        }
        // 8-bit, 3 channel
        case CV_8UC3:
        {
            QImage image( inMat.data, inMat.cols, inMat.rows, inMat.step, QImage::Format_RGB888 );
            return image.rgbSwapped();
        }
        // 8-bit, 1 channel
        case CV_8UC1:
        {
            static QVector<QRgb> sColorTable;
            // only create our color table once
            if ( sColorTable.isEmpty() )
            {
                for ( int i = 0; i < 256; ++i )
                    sColorTable.push_back( qRgb( i, i, i ) );
            }
            QImage image( inMat.data, inMat.cols, inMat.rows, inMat.step, QImage::Format_Indexed8 );
            image.setColorTable( sColorTable );
            QImage copy(image);
            copy.bits(); //enforce deep copy
            return copy;
        }
        default:
            qWarning() << "ASM::cvMatToQImage() - cv::Mat image type not handled in switch:" << inMat.type();
            break;
    }
    return QImage();
}
Your code should utilize this function like this:
while(1) {
    cv::VideoCapture kalibrowanyPlik;
    kalibrowanyPlik.open(kalibracja.toStdString()); //open file from url
    int maxFrames = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_COUNT);
    for(int i = 0; i < maxFrames; i++)
    {
        cv::Mat frame;
        cv::Mat gray;
        kalibrowanyPlik.read(frame);
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        QImage image(cvMatToQImage(frame));
        QImage processedImage(cvMatToQImage(gray));
        emit progressChanged(image, processedImage);
        QThread::msleep(10); // this is bad, see comments below
    }
}
Using msleep here is bad in 95% of cases! Remove this loop and create a slot that will be invoked by a QTimer's timeout() signal.
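A minimal sketch of that wiring, as I would assume it to look (grabFrame is a hypothetical slot that reads one frame, converts it, and emits progressChanged per tick):

// In the worker's constructor or init code:
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(grabFrame()));
timer->start(1000 / 25); // ~25 fps; grabFrame() does one read/convert/emit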
Another solution would be to use a timer:
void ??::timerEvent(QTimerEvent*){
    if(kalibrowanyPlik.isOpened())
    {
        cv::Mat frame;
        cv::Mat gray;
        cv::Mat color;
        kalibrowanyPlik.read(frame);
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::cvtColor(frame, color, CV_BGR2RGB);
        ui->labelHsv->setPixmap(QPixmap::fromImage(Mat2QImage(color)));
        ui->labelKalibracja->setPixmap(QPixmap::fromImage(Mat2QImage(gray)));
    }
}
In your main:
cv::VideoCapture kalibrowanyPlik;
startTimer(1000/25); // 25 frames per second
And the Mat2QImage function (I found it here: how to convert an opencv cv::Mat to qimage):
QImage ??::Mat2QImage(cv::Mat const& src) {
    cv::Mat temp;
    cvtColor(src, temp, CV_BGR2RGB);
    QImage dest((const uchar *) temp.data, temp.cols, temp.rows, temp.step, QImage::Format_RGB888);
    dest.bits(); //enforce deep copy, since temp goes out of scope
    return dest;
}
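A note on the design choice (my reading of the Qt documentation, not from the original answer): the non-const QImage::bits() call detaches the image and performs a deep copy of the shared pixel data, so after it returns, dest owns its pixels and stays valid once the local temp Mat is destroyed.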