OpenCV C++ camera image not being saved to matrix

I am using OpenCV 2.4.10 with Visual Studio 2012 Express for Desktop on Windows 7, a 32-bit operating system.
I created a function that initializes a webcam, takes an image and stores it in a matrix, and then returns the image matrix.
Mat frameCapture()
{
    Mat srcCap;
    // initialize the capture device
    VideoCapture cap(0);
    // check for camera
    if (!cap.isOpened())
    {
        cout << "No camera detected" << endl;
        waitKey(10);
    }
    // store the next frame in the matrix
    cap >> srcCap;
    // check that the camera took a picture
    if (srcCap.empty())
    {
        cout << "no data in image\n";
    }
    // release the camera and return the image matrix
    cap.release();
    return srcCap;
}
int main()
{
    Mat src;
    src = frameCapture();
    imshow("window1", src);
    waitKey(0);
}
So when running the program, it will say "no data in image", meaning that srcCap.empty() returned true, and then it throws an assertion error in the imshow function. However, the program will sometimes run and return an image successfully. Furthermore, when I incorporate the function in a loop for image processing, it will sometimes take a few pictures and then randomly print "no data in image" and throw the same assertion error, or it won't take the first picture at all and prints "no data in image", throwing the same assertion error. The camera is detected every time and cap is opened; the code never says "No camera detected".
My question is: what is causing cap >> srcCap to fail? Is it a hardware issue? The camera I'm using is a USB 2.0 plugable microscope.

I think your current program reads only the first frame, and when reading from a camera the first frame often contains no data.
I would suggest using a loop in main() and reading later frames, as in the sketch below.
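A minimal sketch of that suggestion (the warm-up count of 5 frames is my own guess, not from the original post): open the camera once, discard a few warm-up frames so auto-exposure can settle, then read frames in a loop and skip any that come back empty.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened())
    {
        cout << "No camera detected" << endl;
        return -1;
    }
    Mat src;
    // discard a few warm-up frames; early frames from USB cameras are often empty
    for (int i = 0; i < 5; ++i)
        cap >> src;
    while (true)
    {
        cap >> src;
        if (src.empty())
        {
            cout << "no data in image\n";
            continue;   // skip the occasional empty frame instead of crashing imshow
        }
        imshow("window1", src);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}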

Related

How to hold window open during OpenCV processing?

I'm running into an odd problem with OpenCV on Linux, Ubuntu 16.04 specifically. If I use the usual code to show a webcam stream like this, it works fine:
// WebcamTest.cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // declare a VideoCapture object and associate to webcam,
    // 1 => use 2nd webcam, the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);
    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }

    cv::Mat imgOriginal;            // input image
    cv::Mat imgGrayscale;           // grayscale of input image
    cv::Mat imgBlurred;             // intermediate blurred image
    cv::Mat imgCanny;               // Canny edge image
    char charCheckForEscKey = 0;

    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal);    // get next frame
        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }
        cv::cvtColor(imgOriginal, imgGrayscale, CV_BGR2GRAY);           // convert to grayscale
        cv::GaussianBlur(imgGrayscale, imgBlurred, cv::Size(5, 5), 0);  // blur image
        cv::Canny(imgBlurred, imgCanny, 75, 150);                       // get Canny edges
        cv::imshow("imgOriginal", imgOriginal);
        cv::imshow("imgCanny", imgCanny);
        charCheckForEscKey = cv::waitKey(1);    // delay (in ms) and get key press, if any
    }   // end while

    return (0);
}
This example shows the webcam stream in one imshow window and a Canny edges image in a second window. Both windows update and show the images as expected with very little if any perceptible flicker.
If you're wondering why I'm using the 1st camera instead of the usual 0th camera: I'm running this on a Jetson TX2, the 0th camera is the one integral to the development board, and I'd prefer to use an additional external webcam. For this same reason I have to use Ubuntu 16.04, but I suspect the result would be the same with Ubuntu 18.04 (I have not tested this, however).
If instead I call a function that takes significant processing time rather than simple Canny edges, i.e.:
int main(void)
{
    .
    .
    .
    // declare a VideoCapture object and associate to webcam,
    // 1 => use 2nd webcam, the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);
    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }

    cv::namedWindow("imgOriginal");
    cv::Mat imgOriginal;
    char charCheckForEscKey = 0;

    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal);    // get next frame
        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }
        detectLicensePlate(imgOriginal);
        cv::imshow("imgOriginal", imgOriginal);
        charCheckForEscKey = cv::waitKey(1);    // delay (in ms) and get key press, if any
    }   // end while
    .
    .
    .
    return (0);
}
The detectLicensePlate() function takes about a second to run.
The problem I'm having is, when running this program, the window only appears for the slightest amount of time, usually not long enough to even be perceptible, and never long enough to actually see the result.
The strange thing is, the window disappears, then the second or so of delay occurs while detectLicensePlate() does its thing, then the window appears again for a very short time, then disappears again, and so on. It's almost as though just after cv::imshow("imgOriginal", imgOriginal);, cv::destroyAllWindows(); is implicitly being called.
The behavior I'm attempting to achieve is for the window to stay open and continue to show the previous result while processing the next. From what I recall this was the default behavior on Windows.
I should mention that I'm explicitly declaring the windows with cv::namedWindow("imgOriginal"); before the while loop in an attempt to not let it go out of scope but this does not seem to help.
Of course I can make the delay longer, i.e.
charCheckForEscKey = cv::waitKey(1500);
to wait for 1.5 seconds, but then the application gets very unresponsive.
Based on this post c++ opencv image not display inside the boost thread I tried declaring the window outside the while loop and putting detectLicensePlate() and cv::imshow() on a separate thread, as follows:
.
.
.
cv::namedWindow("imgOriginal");
boost::thread myThread;

// while the Esc key has not been pressed and the webcam connection is not lost . . .
while (charCheckForEscKey != 27 && capWebcam.isOpened())
{
    bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal);    // get next frame
    // if frame was not read successfully, print error message and jump out of while loop
    if (!blnFrameReadSuccessfully || imgOriginal.empty())
    {
        std::cout << "error: frame not read from webcam\n";
        break;
    }
    myThread = boost::thread(&preDetectLicensePlate, imgOriginal);
    myThread.join();
    .
    .
    .
}   // end while

// separate function
void preDetectLicensePlate(cv::Mat &imgOriginal)
{
    detectLicensePlate(imgOriginal);
    cv::imshow("imgOriginal", imgOriginal);
}
I even tried putting detectLicensePlate() on a separate thread but not cv::imshow(), and the other way around, but I still get the same result. No matter how I change the order or use threading, I can't get the window to stay open while the next round of processing is going.
I realize I could use an entirely different windowing environment, such as Qt or something else, and that may or may not solve the problem, but I'd really prefer to avoid that for various reasons.
Does anybody have any other suggestions to get an OpenCV imshow window to stay open until the window is next updated or cv::destroyAllWindows() is called explicitly?
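One approach worth trying (a minimal sketch, not a tested TX2 solution; detectLicensePlate() is assumed to be the long-running function from the question): run the processing asynchronously with std::async and keep the main thread in a short cv::waitKey() loop, so the HighGUI event loop keeps being serviced and the window keeps repainting while a frame is processed. cv::imshow() is still only ever called from the main thread.
#include <opencv2/opencv.hpp>
#include <chrono>
#include <future>

void detectLicensePlate(cv::Mat &img);   // assumed to exist elsewhere

int main()
{
    cv::VideoCapture capWebcam(1);
    if (!capWebcam.isOpened()) return 0;

    cv::namedWindow("imgOriginal");
    cv::Mat imgOriginal;
    std::future<void> worker;            // holds the in-flight processing job
    char charCheckForEscKey = 0;

    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        // start a new job only when no job is running or the last one has finished
        bool workerIdle = !worker.valid() ||
            worker.wait_for(std::chrono::milliseconds(0)) == std::future_status::ready;
        if (workerIdle)
        {
            if (worker.valid())
            {
                worker.get();                               // collect the finished job
                cv::imshow("imgOriginal", imgOriginal);     // show newest result, main thread only
            }
            if (!capWebcam.read(imgOriginal) || imgOriginal.empty()) break;
            worker = std::async(std::launch::async, detectLicensePlate,
                                std::ref(imgOriginal));
        }
        // the window stays responsive because waitKey() runs roughly every 30 ms
        charCheckForEscKey = (char)cv::waitKey(30);
    }
    return 0;
}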

I can't open 2 GigE (Basler) cameras at the same time with OpenCV

I'm using two GigE cameras (Basler acA2500-14gm) on Windows 10 with OpenCV 3.4. I connect the wires of the 2 cameras to a switch and then connect it to my computer, but I can't open both cameras and get frames at the same time.
My code:
int main()
{
    PylonInitialize();
    VideoCapture cap(0);
    VideoCapture cap1(2);
    if (!cap.isOpened())
    {
        cout << "Camera 1 unsuccessfully opened" << endl;
    }
    if (!cap1.isOpened())
    {
        cout << "Camera 2 unsuccessfully opened" << endl;
    }
    bool stop = false;
    while (!stop)
    {
        Mat frame;
        Mat frame1;
        cap >> frame;
        cap1 >> frame1;
        if (frame.empty() || frame1.empty())
        {
            break;
        }
        imshow("Open the camera 1", frame);
        imshow("Open the camera 2", frame1);
        if (waitKey(100) >= 0)
        {
            PylonTerminate();
            stop = true;
        }
    }
}
By the way, when I tried to run the Basler SDK sample Grab_MultipleCameras.cpp,
I could open the cameras, but the image in the window was grey.
Can anyone help me solve this problem?
Thanks in advance.
When you run this Basler SDK sample, it might happen that the second camera still could not be opened, but the sample just shows you a window with the default color (grey).
Another likely cause is that you are passing the wrong device ID to VideoCapture; see the
OpenCV VideoCapture documentation. Also, from what I know, if you are using GigE cameras it is better to pass the IP address of each camera to VideoCapture.
So I would suggest changing just one thing in your code:
From
VideoCapture cap(0);
VideoCapture cap1(2);
To:
VideoCapture cap(/*camera Ip Address*/); //or try with different IDs
VideoCapture cap1(/*camera Ip Address*/);
Also take a look at this answer: VideoCapture and GigE camera. It states that when there is more than one camera, you are better off passing the IP address.
Another thing to check is whether you can see both cameras in your device manager.
EDIT:
I found nice documentation about working with the Pylon SDK (from the camera vendor) and OpenCV (it is probably for an older version of OpenCV, but it can still be useful). A rough sketch along those lines follows.
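For reference, here is a rough, untested sketch in the spirit of Basler's Grab_MultipleCameras sample: grab from both cameras through the Pylon SDK directly and wrap each buffer in a cv::Mat. The acA2500-14gm is a mono camera, so Mono8 output is assumed; check the API names against your Pylon version.
#include <pylon/PylonIncludes.h>
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    Pylon::PylonAutoInitTerm autoInitTerm;   // calls PylonInitialize / PylonTerminate
    Pylon::CTlFactory& factory = Pylon::CTlFactory::GetInstance();
    Pylon::DeviceInfoList_t devices;
    if (factory.EnumerateDevices(devices) < 2)
    {
        std::cout << "fewer than 2 cameras found" << std::endl;
        return -1;
    }
    Pylon::CInstantCameraArray cameras(2);
    for (size_t i = 0; i < cameras.GetSize(); ++i)
        cameras[i].Attach(factory.CreateDevice(devices[i]));
    cameras.StartGrabbing();

    Pylon::CGrabResultPtr result;
    while (cameras.IsGrabbing())
    {
        // RetrieveResult waits for the next finished grab from either camera
        cameras.RetrieveResult(5000, result, Pylon::TimeoutHandling_ThrowException);
        if (result->GrabSucceeded())
        {
            // wrap the raw buffer; Mono8 assumed, use CImageFormatConverter otherwise
            cv::Mat frame((int)result->GetHeight(), (int)result->GetWidth(),
                          CV_8UC1, (uint8_t*)result->GetBuffer());
            int camIdx = (int)result->GetCameraContext();
            cv::imshow(camIdx == 0 ? "Open the camera 1" : "Open the camera 2", frame);
        }
        if (cv::waitKey(1) >= 0) break;
    }
    return 0;
}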

OpenCV to dlib - Mat to array2d

I'm successfully opening and displaying an .avi video using OpenCV, and I need this to go through OpenCV because I want to learn how to make OpenCV and dlib communicate.
From my understanding, a Mat has to be converted into an array2d in order to be processed by dlib, so here's my first attempt:
cv::VideoCapture cap("/home/francesco/Downloads/05-1.avi");
cv::namedWindow("UNLTD", CV_WINDOW_AUTOSIZE);

while (1)
{
    cv::Mat temp;
    cv_image<bgr_pixel> cimg(temp);
    std::vector<rectangle> faces = detector(cimg);
    cout << faces.size() << endl;
    cv::imshow("UNLTD", temp);
}
This returns the error
Error detected in file /usr/local/include/dlib/opencv/cv_image.h.
Error detected in function dlib::cv_image<pixel_type>::cv_image(cv::Mat) [with pixel_type = dlib::bgr_pixel].
Failing expression was img.depth() == cv::DataType<typename pixel_traits<pixel_type>::basic_pixel_type>::depth && img.channels() == pixel_traits<pixel_type>::num.
The pixel type you gave doesn't match pixel used by the open cv Mat object.
img.depth(): 0
img.cv::DataType<typename pixel_traits<pixel_type>::basic_pixel_type>::depth: 0
img.channels(): 1
img.pixel_traits<pixel_type>::num: 3
I tried swapping bgr_pixel for rgb_pixel, but without any luck.
Looking around the internet, somebody mentioned that img.depth() is zero, and that therefore I should use unsigned char instead of rgb_pixel.
First thing: my video plays in color, so it does have 3 channels; I don't understand why it should be interpreted as a 1-channel image.
The strange thing is that making the change from rgb_pixel to unsigned char makes the software run, but ZERO faces are detected on that video stream (the video shows a guy talking, and the face in the same video is detected with no problems by dlib in Python).
I don't understand what I'm doing wrong.
In your code, temp is empty because you have not fed any frame from the video capture into it. The conversion of cv::Mat to dlib::array2d is also not correct. Please see this post for more information.
You may try:
cv::VideoCapture cap("/home/francesco/Downloads/05-1.avi");
cv::namedWindow("UNLTD", CV_WINDOW_AUTOSIZE);
dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();

while (1)
{
    cv::Mat temp;
    cap >> temp;                 // feed a frame from the capture into temp
    if (temp.empty())            // stop at the end of the video
        break;
    dlib::array2d<dlib::bgr_pixel> dlibFrame;
    dlib::assign_image(dlibFrame, dlib::cv_image<dlib::bgr_pixel>(temp));
    std::vector<dlib::rectangle> faces = detector(dlibFrame);
    cout << faces.size() << endl;
    cv::imshow("UNLTD", temp);
    cv::waitKey(1);              // needed so the window actually repaints
}

OpenCV VideoCapture Grab and Retrieve

I've had this issue for a long time and I'm not sure what's going on.
I have a loop from which nextFrame is called; the issue lies with what imshow actually shows.
I specifically want one image every time I call cap.grab() and cap.retrieve(), but the "cap" object seems to have an internal buffer, so instead of getting individual instantaneous images, I get a sequence of images when I click through them, and then after 3-4 frames a new sequence.
How do I get single frames?
cap is a VideoCapture object; maxCounter is the size of the vector.
void CamLoop::nextFrame() {
    .
    .
    .
    // if first loop, fill a vector<Mat> with Mats from the camera
    if (firstLoop) {
        Mat buff;
        cap >> buff;
        for (int i = 0; i < maxCounter; i++) {
            buffer.push_back(buff);
        }
    }

    projector.nextCode();

    if (!customImages) {
        cap.grab();
        Mat buff;
        cap.retrieve(buff);
        // tried this way too
        // cap >> buff;
        buffer[counter] = buff;
        setMouseCallback("Camera", mouseFunc, this);
        imshow("Camera", buffer[counter]);
        waitKey(1);
    }
    .
    .
    .
    counter++;
}
I am using Linux Mint Rosa with OpenCV 3.1.0 on Eclipse Mars
EDIT
The problem is that VideoCapture has a buffer. Try this on your own computer in debug mode: the frames aren't live. How would I overcome this issue?
I tried using
cap.set(CV_CAP_PROP_BUFFERSIZE,1);
but it gives me this error.
VIDEOIO ERROR: V4L2: setting property #38 is not supported
also tried
cap.set(CV_CAP_PROP_MODE,1);
but it gives me this error.
VIDEOIO ERROR: V4L2: setting property #9 is not supported
EDIT
It may be the camera that has the buffer, not the VideoCapture object itself.
A slow, cheat fix may be to call
cap.open( *CAMERA_NUM* );
in the loop; this is slow, but it achieves still images without the buffer. A lighter workaround is sketched below.
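A lighter workaround (a sketch of the usual trick, not an official API fix; the flush count of 5 is a guess you will need to tune to your driver's queue depth) is to grab and discard a few frames before the one you keep, so the stale frames in the driver's queue are consumed:
cv::Mat grabFreshFrame(cv::VideoCapture &cap, int flushCount = 5)
{
    cv::Mat frame;
    // grab() dequeues a frame without decoding it, so flushing is cheap
    for (int i = 0; i < flushCount; ++i)
        cap.grab();
    cap.retrieve(frame);   // decode only the most recent grab
    return frame;
}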

How to show different frames per second of video in two windows in OpenCV

I am using OpenCV to show frames from a camera, and I want to show them in two separate windows. The first window should show the live frames from the camera (a new frame every 30 milliseconds), and the second window should show the frames with some delay (a new frame every 1 second). Is it possible to do that? I tried to do it with my code, but it does not work well. Please give me a solution to do this with OpenCV and Visual Studio 2012. Thanks in advance.
This is my code
VideoCapture cap(0);
if (!cap.isOpened())
{
    cout << "exit" << endl;
    return -1;
}
namedWindow("Window 1", 1);
namedWindow("Window 2", 2);
long count = 0;
Mat face_algin;
while (true)
{
    Mat frame;
    Mat original;
    cap >> frame;
    if (!frame.empty()) {
        original = frame.clone();
        cv::imshow("Window 1", original);
    }
    if (waitKey(30) >= 0) break;    // delay 30 ms for the first window
}
You could write the frame-display loop as a single function that takes the frame rate as an argument and run it twice simultaneously via multi-threading.
The pseudo code would look like:
void* play_video(void* frame_rate)
{
    // play at specified frame rate
}

main()
{
    create_thread(thread1, play_video, normal_frame_rate);
    create_thread(thread2, play_video, delayed_frame_rate);
    join_thread(thread1);
    join_thread(thread2);
}
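Since both windows show the same camera, a single capture loop is arguably simpler than two threads sharing one VideoCapture (which is not safe in OpenCV anyway). A minimal sketch (the 33-frame divisor, approximating one second at 30 ms per frame, is my own choice):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    namedWindow("Window 1");
    namedWindow("Window 2");
    Mat frame;
    long count = 0;

    while (true)
    {
        cap >> frame;
        if (frame.empty()) break;
        imshow("Window 1", frame);          // updated every ~30 ms
        if (count % 33 == 0)
            imshow("Window 2", frame);      // updated roughly once per second
        ++count;
        if (waitKey(30) >= 0) break;        // ~30 ms per loop iteration
    }
    return 0;
}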