I have created a program that reads images in a sequence. Here is my code:
#include <opencv2/opencv.hpp>
#include <string>

int main(int argc, char** argv) {
    cv::namedWindow("Example3", cv::WINDOW_AUTOSIZE);
    cv::VideoCapture cap;
    cap.open(std::string("F:/8TH SEMESTER/trial/%05d.ppm"));
    //"F:/8TH SEMESTER/Traffic Sign Dectection/GTSDB/FullIJCNN2013/FullIJCNN2013/01/%05d.ppm"
    cv::Mat frame;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break; // Ran out of film
        cv::imshow("Example3", frame);
        if (cv::waitKey(2000) >= 0) break;
    }
    return 0;
}
It works perfectly with images that contain traffic signs, cars, etc. But when I run this code on sample data, for example only a 30 km/h speed sign, it runs once and then stops. Why does this happen?
With this image, it works perfectly:
When I run it using the following image, it gives an error.
I'm trying to adapt code from this page: https://www.pyimagesearch.com/2018/07/16/opencv-saliency-detection/; however, that is written in Python and I'm trying to do it in C++. My code compiles, but all I see is a white screen and not any kind of saliency detection going on. What's wrong?
cap.open(pathToVideo);
int frame_width = cap.get(CAP_PROP_FRAME_WIDTH);
int frame_height = cap.get(CAP_PROP_FRAME_HEIGHT);
while (true) {
    Mat frame;
    Mat salientFrame;
    cap >> frame;
    if (frame.empty()) {
        break;
    }
    Ptr<MotionSaliencyBinWangApr2014> MS = MotionSaliencyBinWangApr2014::create();
    cvtColor(frame, frame, COLOR_BGR2GRAY);
    MS->setImagesize(frame.cols, frame.rows);
    MS->init();
    MS->computeSaliency(frame, salientFrame);
    salientFrame.convertTo(salientFrame, CV_8U, 255);
    imshow("Motion Saliency", salientFrame);
    char c = (char)waitKey(25);
    if (c == 27)
        break;
}
cap.release();
The line
Ptr<MotionSaliencyBinWangApr2014> MS = MotionSaliencyBinWangApr2014::create();
should be called once, before the loop.
The reason is that this algorithm builds its model across consecutive frames of a video, not from a single image. Re-creating and re-initializing the object on every iteration discards everything it has accumulated, so it never gets past its blank initial state.
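A minimal sketch of the corrected loop, assuming the opencv_contrib saliency module is available ("video.avi" is a placeholder path):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/saliency.hpp>

using namespace cv;
using namespace cv::saliency;

int main() {
    VideoCapture cap("video.avi"); // placeholder path
    if (!cap.isOpened()) return -1;

    // Create and initialize the saliency model ONCE, before the loop,
    // so it can accumulate its motion model across frames.
    Ptr<MotionSaliencyBinWangApr2014> MS = MotionSaliencyBinWangApr2014::create();

    // The image size must be set before init(); take it from the first frame.
    Mat frame, gray, salientFrame;
    cap >> frame;
    if (frame.empty()) return -1;
    MS->setImagesize(frame.cols, frame.rows);
    MS->init();

    for (;;) {
        cvtColor(frame, gray, COLOR_BGR2GRAY);
        MS->computeSaliency(gray, salientFrame);
        salientFrame.convertTo(salientFrame, CV_8U, 255);
        imshow("Motion Saliency", salientFrame);
        if ((char)waitKey(25) == 27) break; // Esc quits
        cap >> frame;
        if (frame.empty()) break;
    }
    return 0;
}
```

Note that the first handful of frames will still look mostly blank while the model warms up; that is expected behavior for this algorithm, not a bug.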
I'm using OpenCV 3.1 and I'm trying to run a simple piece of code like the following (main function):
cv::VideoCapture cam;
cv::Mat matTestingNumbers;
cam.open(0);
if (!cam.isOpened()) { printf("--(!)Error opening video capture\n"); return -1; }
while (cam.read(matTestingNumbers))
{
    cv::imshow("matTestingNumbers", matTestingNumbers);
    cv::waitKey(5000);
}
When I move the camera, the code does not seem to capture and show the current frame; instead it shows all the frames captured at the previous position, and only then the ones from the new position.
So when I point it at the wall, it shows the correct frames (the wall itself) with the correct delay; but when I turn the camera toward my computer, I first see about 3 frames of the wall and only then the computer. The frames seem to be stuck.
I've tried using the videoCapture.set() functions to set the FPS to 1, and I tried switching the capture method to cam >> matTestingNumbers (adjusting the rest of the main function accordingly), but nothing helped; I still got "stuck" frames.
BTW, these are the solutions I found on the web.
What can I do to fix this problem?
Thank you, Dan.
EDIT:
I tried to retrieve frames as follows:
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened()) // check if we succeeded
        return -1;
    Mat frame;
    namedWindow("edges", 1);
    for(;;)
    {
        cap.grab();
        if (waitKey(11) >= 0)
        {
            cap.retrieve(frame);
            imshow("edges", frame);
        }
    }
    return 0;
}
But it gave the same result: when I pointed the camera at one spot and pressed a key, it showed one more of the frames previously captured at the other spot.
It's as if you photograph one person, then another, but when you take the second picture you get the photo of the first person, which makes no sense.
Then, I tried the following:
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened()) // check if we succeeded
        return -1;
    Mat frame;
    namedWindow("edges", 1);
    for(;;)
    {
        cap >> frame;
        if (waitKey(33) >= 0)
            imshow("edges", frame);
    }
    return 0;
}
And it worked as expected.
One of the problems is that you are not calling cv::waitKey(X) to give the window X milliseconds to process its events and redraw. Get rid of usleep()!
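The stale frames are typically the driver's internal capture buffer at work: the camera keeps filling a small FIFO while your loop sleeps, so cap >> frame returns old frames. A common workaround is to discard the queued frames with grab() before retrieving the one you display; this is a sketch under the assumption that your driver buffers only a few frames (the count 4 is a guess you would tune):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cam(0);
    if (!cam.isOpened()) return -1;

    cv::Mat frame;
    for (;;) {
        // Drain the driver's buffer: grab() pulls a frame off the queue
        // without decoding it, so a few quick calls are cheap, and the
        // frame we finally retrieve() is the most recent one.
        for (int i = 0; i < 4; ++i) cam.grab();
        if (!cam.retrieve(frame) || frame.empty()) break;

        cv::imshow("latest frame", frame);
        if (cv::waitKey(5000) >= 0) break; // long delay, as in the question
    }
    return 0;
}
```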
Currently, I am using OpenCV to record a live stream from my webcam. I now want to display that stream in a browser. I was thinking of using VideoWriter to write the video to a file, then somehow access that file from HTML5. Is this possible? Any other suggestions?
The following is the code I have.
int main(int argc, const char * argv[]) {
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened()) { // check if we succeeded
        std::cout << "No camera found!\n";
    }
    namedWindow("Window", 1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        imshow("Window", frame);
        char keypress;
        keypress = waitKey(30);
        if(keypress == 27) break;
    }
    return 0;
}
As can be seen, I am displaying the live stream in a window, but, as said, I want to stream it to a browser. Thanks.
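For the VideoWriter half of that plan, a minimal sketch (assuming OpenCV 3+; the 30 fps figure and "out.avi" name are arbitrary choices). Note this is only the recording side: a plain .avi generally cannot be played by a browser while it is still being written, so true live viewing usually means serving frames as MJPEG over HTTP or going through a streaming server instead:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // Match the writer to the camera's frame size; 30 fps is an assumption.
    cv::Size size((int)cap.get(cv::CAP_PROP_FRAME_WIDTH),
                  (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT));
    cv::VideoWriter writer("out.avi",
                           cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                           30.0, size);
    if (!writer.isOpened()) return -1;

    cv::Mat frame;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;
        writer.write(frame);                    // append frame to out.avi
        cv::imshow("Window", frame);
        if ((char)cv::waitKey(30) == 27) break; // Esc stops recording
    }
    return 0; // writer flushes and closes on destruction
}
```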
How do I implement background subtraction (with the background model obtained by averaging the first, say, 50 frames) in OpenCV?
I tried looking for some code but found it was in Python; I'm working in C++ (Visual Studio 2013).
A small working code snippet would help. Thanks!
OpenCV provides background subtraction capabilities. See BackgroundSubtractorMOG2, which models the background with a Mixture of Gaussians and is therefore quite robust to background changes.
The parameter history is the number of frames used to build the background model.
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char *argv[])
{
    int history = 50;
    float varThreshold = 16.f; // default value
    BackgroundSubtractorMOG2 bg(history, varThreshold);
    VideoCapture cap(0);
    Mat3b frame;
    Mat1b fmask;
    for (;;)
    {
        cap >> frame;
        bg(frame, fmask, -1);
        imshow("frame", frame);
        imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
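If you want the literal averaging approach from the question rather than MOG2, here is a sketch using cv::accumulate: average the first 50 frames into a float accumulator, then flag pixels that differ enough from that mean. The threshold value 30 is an arbitrary choice you would tune for your scene:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // Average the first 50 frames into a float accumulator.
    cv::Mat frame, acc;
    const int N = 50;
    for (int i = 0; i < N; ++i) {
        cap >> frame;
        if (frame.empty()) return -1;
        if (acc.empty()) acc = cv::Mat::zeros(frame.size(), CV_32FC3);
        cv::accumulate(frame, acc);
    }
    cv::Mat background;
    acc.convertTo(background, CV_8UC3, 1.0 / N); // mean of the N frames

    // Foreground = pixels that differ enough from the averaged background.
    cv::Mat diff, gray, fgmask;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;
        cv::absdiff(frame, background, diff);
        cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, fgmask, 30, 255, cv::THRESH_BINARY);
        cv::imshow("mask", fgmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
```

The static averaged model never updates, so lighting changes after the first 50 frames will show up as foreground; that is the main reason the adaptive MOG2 approach above is usually preferred.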
I want to retrieve the background image from BackgroundSubtractorMOG with the function getBackgroundImage().
Unfortunately, I always get an empty matrix.
Is this behaviour a bug in OpenCV 2.4.8, or do I maybe have to do some kind of additional initialization? (If you switch from MOG to MOG2, it works fine.)
Current initialization inspired by this question.
Sample code:
int main(int argc, char *argv[]){
    BackgroundSubtractorMOG mog(3, 4, 0.8);
    cv::VideoCapture cap(0);
    cv::Mat frame, backMOG, foreMOG;
    for(;;){
        cap >> frame;
        mog(frame, foreMOG, -1);
        cv::imshow("foreMOG", foreMOG);
        mog.getBackgroundImage(backMOG);
        if(!backMOG.empty()){
            cv::imshow("backMOG", backMOG); // never reached
        }
        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}
As @Micka correctly pointed out in the comments: it's just not implemented!
A call to mog.getBackgroundImage(backMOG); jumps to
void BackgroundSubtractor::getBackgroundImage(OutputArray) const {}
in the file bgfg_gaussmix2.cpp.
Long story short: BackgroundSubtractorGMG and BackgroundSubtractorMOG do not currently implement getBackgroundImage in OpenCV 2.4.8; only BackgroundSubtractorMOG2 supports it.