OpenCV displays the image triplicated in a single frame - c++

I am trying to build an application to simply grab, save and show some frames from my camera, a DMK 41BU02 (you can consult the specifications of the device in the following link: datasheet).
My code is as simple as this:
#include "opencv2/opencv.hpp"
using namespace std;
using namespace cv;
int main(int, char**)
{
    String path = "~/proof.jpg"; // note: imwrite does not expand "~", so an absolute path is safer
    VideoCapture cap(1); // /dev/video0 is the integrated webcam of my laptop, while /dev/video1 is the DMK41BU02 camera
    namedWindow("Video", WINDOW_AUTOSIZE);

    if (!cap.isOpened()) // check if we succeeded
        return -1;

    Mat frame;
    cap >> frame;
    imwrite(path, frame);
    imshow("Video", frame);
    waitKey(0);
    return 0;
}
The code compiles and executes without any problem, but the trouble appears when the image is shown in the window or saved to the jpg file, because I get something like the following jpg, where the image is triplicated in the frame:
Resulting image of the code shown above
Some aspects to note:
The code executes normally and returns normal images when working with the integrated webcam of my laptop.
The DMK41BU02 camera works normally and returns normal images when working with another application, such as fswebcam or VLC.
The camera datasheet says it is compatible with OpenCV.
I have also tried the code with an infinite loop, as I know the first frame grabbed can be blank or contain some kind of error, but the problem is still there.
I have had some issues installing the camera drivers, but I think they're all resolved.
The laptop is a 32-bit machine with Ubuntu installed on it. Here you can see the output of uname -a: Linux AsusPC 3.11.0-18-generic #32~precise1-Ubuntu SMP Thu Feb 20 17:54:21 UTC 2014 i686 i686 i386 GNU/Linux
I have no idea how to debug this problem and, of course, I don't know where the error could be. Could you give me any hint, please?
Thank you very much.
UPDATE: I forgot to post the weird output that the application writes to the terminal at the very beginning of the program:
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
libv4l2: error set_fmt gave us a different result then try_fmt!
HIGHGUI ERROR: libv4l unable convert to requested pixfmt
libv4l2: error set_fmt gave us a different result then try_fmt!
init done
opengl support available

I've had the exact same problem. The issue is within OpenCV itself, or more precisely in how cap_v4l.cpp and cap_libv4l.cpp (in the highgui module) are implemented.
The issue here is that OpenCV apparently uses a wrong video format or channel type to read the data. Try playing around with the different formats (YUYV variants, etc.) inside the OpenCV source.
For some magical reason cap_v4l.cpp is the code that's actually used by OpenCV, while the code in cap_libv4l.cpp is not, even though the latter seems to support more video formats (it could be the other way around, I'm not sure about that).
Swapping these files and recompiling OpenCV improved things for me.

Since after the call to cap >> frame you have three channels (type == 16, i.e. CV_8UC3), your capture is unaware that your camera is monochrome. Use grab/retrieve pairs instead, since retrieve lets you specify the channel.
bool VideoCapture::grab()
bool VideoCapture::retrieve(Mat& image, int channel=0)
Here is example code that also shows how to set camera parameters. You can also try to set some camera parameters that explicitly declare monochrome mode. If everything else fails you can always cut one image out of your triple with
Rect rect(0, 0, frame.cols/3, frame.rows);
Mat true_img = frame(rect).clone();
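A minimal sketch of the grab/retrieve approach combined with that crop fallback (assuming the OpenCV 2.4 C++ API and that /dev/video1 is the DMK camera; the 1280x960 size is an assumption, and whether the driver honours any property you set depends on the backend):
#include "opencv2/opencv.hpp"
using namespace cv;
int main()
{
    VideoCapture cap(1);                       // assumption: /dev/video1 is the DMK 41BU02
    if (!cap.isOpened())
        return -1;
    // Ask the driver for an explicit size; property support varies per backend.
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 960);
    Mat frame;
    if (cap.grab() && cap.retrieve(frame, 0))  // channel 0; other values select other streams if available
    {
        // Fallback: if the frame still holds three copies side by side, keep only the first third.
        Rect rect(0, 0, frame.cols / 3, frame.rows);
        Mat true_img = frame(rect).clone();
        imshow("Video", true_img);
        waitKey(0);
    }
    return 0;
}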
However, I kind of like what happens in your case. You have a natural frame queue and can analyze motion and possibly structure by looking at what happens in three consecutive frames.

Related

Can't change OpenCV video capture resolution

The problem I am having is that I am unable to change the resolution of an OpenCV video capture. The resolution is always 640x480, no matter what. The code I'm using is written in C++ and I'm using OpenCV 3.4.8. I've created a super simple program with which to do this, and it just doesn't seem to work no matter what I try.
Here is the code in its entirety:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
    VideoCapture cap(0);
    cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
    cap.set(CAP_PROP_FRAME_WIDTH, 1920);

    // open the default camera, use something different from 0 otherwise;
    // Check VideoCapture documentation.
    if (!cap.open(0))
        return 0;

    for (;;)
    {
        Mat frame;
        cap.read(frame);
        if (frame.empty()) break; // end of video stream
        imshow("this is you, smile! :)", frame);
        if (waitKey(10) == 27) break; // stop capturing by pressing ESC
    }
    // the camera will be closed automatically upon exit
    // cap.close();
    return 0;
}
When I run the above code frame is always 640x480.
I've tried changing the resolution with cap.set() to smaller and higher resolutions. I am using an ImageSource camera and I know that the resolutions I am attempting to use are supported by the camera and I can view video at those resolutions in another program.
I've tried using different cameras/webcams.
I've tried explicitly changing the backend API when I create the VideoCapture object - i.e. VideoCapture cap(0, CAP_DSHOW). I tried DSHOW, FFMPEG, IMAGES, etc.
I've tried running the same program on different computers.
The result is always the same 640x480 resolution.
Is there something simple I am missing? Every other post I can find on SO just points toward using cap.set() to change the width and height.
It depends on what your camera backend is. As the documentation says:
Each backend supports devices properties (cv::VideoCaptureProperties)
in a different way or might not support any property at all.
Also mentioned in this documentation:
Reading / writing properties involves many layers. Some unexpected
result might happens along this chain. Effective behaviour depends
from device hardware, driver and API Backend.
It seems your camera backend is not supported by the OpenCV Video I/O module.
Note: I have also come across such cameras; with some of them, different resolutions only work with different device indices. For example, you may get the desired resolution by trying VideoCapture(-1), VideoCapture(1), VideoCapture(2), ...
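A small sketch of that note, scanning a handful of device indices and reading the properties back to see whether the requested resolution actually sticks (the index range and the 1920x1080 target are assumptions):
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
int main()
{
    for (int idx = -1; idx <= 3; ++idx)
    {
        VideoCapture cap(idx);
        if (!cap.isOpened())
            continue;
        cap.set(CAP_PROP_FRAME_WIDTH, 1920);
        cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
        // The driver reports what it actually selected.
        std::cout << "index " << idx << ": "
                  << cap.get(CAP_PROP_FRAME_WIDTH) << "x"
                  << cap.get(CAP_PROP_FRAME_HEIGHT) << std::endl;
    }
    return 0;
}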
Turns out the error was in the "if (!cap.open(0))" line that I was using to check whether cap had initialized successfully.
I was under the impression that open() simply returned true if the video capture object was open and false otherwise. But it actually releases the video capture object if it is already open and then re-opens it.
Long story short, that means the cap.set() calls I was using to change the resolution were wiped out when the object was re-opened with cap.open(0), at which point the resolution was set back to the default of 640x480.
The method I was looking for is cap.isOpened(), which simply returns true or false depending on whether the object is open. A simple, silly mistake.
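A minimal sketch of the corrected ordering described above (the 1920x1080 values are the ones from the question; whether the driver honours them still depends on the backend):
#include "opencv2/opencv.hpp"
using namespace cv;
int main()
{
    VideoCapture cap(0);               // the constructor already opens the device
    if (!cap.isOpened())               // check the state without releasing/re-opening it
        return 0;
    // These calls are no longer wiped out by a later open()
    cap.set(CAP_PROP_FRAME_WIDTH, 1920);
    cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
    for (;;)
    {
        Mat frame;
        cap.read(frame);
        if (frame.empty()) break;
        imshow("this is you, smile! :)", frame);
        if (waitKey(10) == 27) break;  // stop capturing by pressing ESC
    }
    return 0;
}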

using OpenCV to capture images, not video

I'm using OpenCV 4 to read from a camera (similar to a webcam). It works great; the code is roughly like this:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
while (true)
{
    cv::Mat mat;
    // wait for some external event here so I know it is time to take a picture...
    cap >> mat;
    process_image(mat);
}
Problem is, this gives many video frames, not a single image. This matters because in my case I don't want or need to process 30 FPS. I actually have specific physical events that trigger reading the image from the camera at certain times. Because OpenCV expects the caller to want video -- not surprising considering the class is called cv::VideoCapture -- it has buffered many seconds of frames.
What I see in the image is always from several seconds ago.
So my questions:
Is there a way to flush the OpenCV buffer?
Or to tell OpenCV to discard the input until I tell it to take another image?
Or to get the most recent image instead of the oldest one?
The other option I'm thinking of investigating is using V4L2 directly instead of OpenCV. Will that let me take individual pictures or only stream video like OpenCV?
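For what it's worth, a commonly used workaround for this buffering issue is to shrink the internal buffer where the backend supports it and/or drain it with a few grab() calls right before the frame you actually want; a rough sketch (the drain count of 5 is an arbitrary assumption, and CAP_PROP_BUFFERSIZE is only honoured by some backends):
#include "opencv2/opencv.hpp"
using namespace cv;
int main()
{
    VideoCapture cap(0);
    cap.set(CAP_PROP_FRAME_WIDTH, 1600);
    cap.set(CAP_PROP_FRAME_HEIGHT, 1200);
    cap.set(CAP_PROP_BUFFERSIZE, 1);   // ask for the smallest internal buffer (not all backends honour this)
    while (true)
    {
        // ... wait for the external trigger here ...
        // Drain whatever the driver buffered while we were waiting.
        for (int i = 0; i < 5; ++i)
            cap.grab();
        Mat mat;
        cap.retrieve(mat);             // decode only the most recently grabbed frame
        // process_image(mat);
    }
    return 0;
}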

OpenCV VideoCapture Partial Frame Corruption

I recently started using OpenCV for a project involving reading videos. I followed online tutorials for reading video, and the video seems to be read with no problems. However, when I display any frame from the video, the far-right column appears to be corrupted. Here is the code I used for reading and displaying the first frame.
VideoCapture cap("6.avi");
Mat frame;
cap>>frame;
imshow("test",frame);
waitKey(0);
This resulted in a frame that looks good for the most part, except for the far-right column. See here.
I am making no modifications to the video or frames before displaying it. Can anyone help figure out why this is happening?
Note: I'm running Ubuntu 14.04, OpenCV version 2.4.8
Full video can be found here.
Your code looks fine to me. Are you certain the frame is corrupted? Resize, maximize, minimize the "test" GUI window to see if the right edge is still corrupted. Sometimes while displaying really small images, I've seen the right edge of the GUI window display incorrectly even though the frame is correct. You could also try imwrite("test.png",frame) to see if the saved image is still corrupted.
If this doesn't help, it would seem like a codec problem. Ensure you have the latest versions of OpenCV and FFmpeg.
If this still doesn't help, the video itself may be corrupted. You could try converting it into another format using FFmpeg.
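A quick way to separate a display problem from a decode problem, along the lines of the first suggestion above (the file names are just placeholders):
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
int main()
{
    VideoCapture cap("6.avi");
    Mat frame;
    cap >> frame;
    std::cout << "decoded frame: " << frame.cols << "x" << frame.rows << std::endl;
    // If test.png shows the same artefact in an external viewer,
    // the problem is in decoding (codec), not in the highgui window.
    imwrite("test.png", frame);
    return 0;
}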

Opencv doesn't detect firewire webcam on linux

I have connected a cam through FireWire and tried to access it using OpenCV. The camera is detected in Coriander and I am able to get a video stream. Below is the code I used:
#include "/home/iiith/opencv-2.4.9/include/opencv/cv.h"
#include "/home/iiith/opencv-2.4.9/include/opencv/highgui.h"
#include "cxcore.h"
#include <iostream>
using namespace cv;
using namespace std;
int main(int, char**)
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        cout << "Camera not detected" << endl;
    while (1)
    {
        Mat frame;
        namedWindow("display", 1);
        cap >> frame;
        imshow("display", frame);
        waitKey(0);
    }
}
When I run this code, the video is streamed from the webcam instead of my FireWire cam. I tried the same code on my friend's system and there the FireWire cam was detected. I checked the settings using different commands such as testlibraw and lsmod, and they are all the same. Even the OpenCV version (2.4.9) and Ubuntu 12.04 are the same. This is really bizarre and I have been at this for 2 days. Can anyone please tell me what the difference could be? How can I get the external cam detected in OpenCV? Thanks in advance.
Note: Does this have something to do with setting the default cam? Thanks.
Update 1: VideoCapture cap(1) gives the following error:
HIGHGUI ERROR: V4L: index 1 is not correct!
Does this mean the camera is not recognized?
First, you should make sure the camera is recognized by your OS:
unplug the camera and wait a few seconds;
open a terminal and type:
watch dmesg
lspci | grep -E -i "(1394|firewire)" # this could give you something
plug your device back in and watch for new entries in the terminal.
If your device is recognized, you can launch a command like this:
mplayer tv:// -tv driver=v4l2:width=352:height=288
A possible problem could be that the camera connected through FireWire is not recognized by the system.
First, try to view the camera output using AMCap or some other webcam software and check whether you are able to see anything.
If you are not able to see the video in AMCap, it means the drivers for that particular camera are missing.

OpenCV - Webcam does not work

I'm learning OpenCV because I want to build and program a 3D Scanner over the summer.
I bought three webcams for this purpose (two for the actual stereo images and one for texture [or as a backup]).
I tried to get a webcam's video with OpenCV. However, this did not work, as I ended up with a black screen instead of video.
I then tried the same code with my grandmother's webcam. It worked fine.
However, I already bought 3 webcams of the type that I was planning on using to build my scanner: http://www.amazon.com/Webcam-Camera-Vision-Meeting-compatible/dp/B0015TJNEY/ref=pd_bxgy_e_img_b
I don't want to buy any new webcams.
Does anybody have any idea why my webcams don't work with OpenCV (they work with other programs)?
How can I get OpenCV to accept my webcams?
Any suggestions would be appreciated!
Thanks
If your program passes the check below, you should try a different number for cvCaptureFromCAM(0); 0 is the first webcam, but maybe yours is set as 1, 2 or 3. You can also try -1 and see what happens.
CvCapture *capture;
capture = cvCaptureFromCAM(0);
if (!capture)
{
    printf("Error at capture");
    return 1;
}
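A small sketch of that suggestion, trying a few indices until one opens (the 0..3 plus -1 range is just the set of values mentioned above):
#include "opencv/highgui.h"
#include <stdio.h>
int main()
{
    int indices[] = { 0, 1, 2, 3, -1 };
    CvCapture *capture = NULL;
    for (int i = 0; i < 5; i++)
    {
        capture = cvCaptureFromCAM(indices[i]);
        if (capture)
        {
            printf("Camera opened with index %d\n", indices[i]);
            break;
        }
    }
    if (!capture)
    {
        printf("Error at capture");
        return 1;
    }
    cvReleaseCapture(&capture);
    return 0;
}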