Logitech QuickCam Pro 9000 Bayer capture with OpenCV - C++

I'm trying to capture the raw data of the Logitech Pro 9000 (i.e. the so-called Bayer pattern). This can be achieved with the Bayer application that can be found floating around the internet. It should return an 8-bit Bayer pattern, but the results are quite obviously not such a pattern.
However, the image that is being streamed seems to be quite off. As can be seen in the image below, I get two images of the scene in a 3-channel image (meaning 6 channels in total). Each image is a quarter of the total capture area, so it would seem that some kind of YUV data is being streamed.
I was unable to convert this data into anything meaningful using the conversions provided by OpenCV. Any ideas what kind of data is being sent and (more importantly) how to convert it into RGB?
EDIT
As requested, the code snippet that is used to generate the image:
system("Bayer.exe 1 8"); //Sets the camera to raw mode
// set up camera
VideoCapture capture(0);
if(!capture.isOpened()){
waitKey();
exit(0);
}
Mat capturedFrame;
while(true){
capture>>capturedFrame;
imshow("Raw",capturedFrame);
waitKey(25);
}

How are you getting frames from the stream using OpenCV? Can you share some code snippets? OpenCV handles many video formats, so getting the correct color channels and dealing with compressed data depends on how the capture is set up.
I think you should be able to obtain correct image frames as mentioned here:
http://forum.openrobotino.org/archive/index.php/t-295.html?s=c33acb1fb91f5916080f8dfd687598ec
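If the camera really does deliver an 8-bit Bayer mosaic in a single-channel Mat, a demosaic with cv::cvtColor is usually enough. A minimal sketch (the exact Bayer code depends on the sensor layout, so COLOR_BayerGB2BGR here is only an assumption; try the other Bayer variants if the colors look wrong):

#include <opencv2/opencv.hpp>

// raw: single-channel 8-bit frame containing the Bayer mosaic
cv::Mat demosaic(const cv::Mat& raw)
{
    cv::Mat bgr;
    // The Bayer layout (BG/GB/RG/GR) is camera-specific; GB is only a guess here.
    cv::cvtColor(raw, bgr, cv::COLOR_BayerGB2BGR);
    return bgr;
}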

This is most likely to happen if the output data format (width, height, bit depth, number of channels, ...) of the camera differs from the data format your program expects.
However, I could capture from a Logitech Pro cam simply by using:
Mat img;
VideoCapture cap(0);
cap >> img;
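To check for such a mismatch, it can help to print what the backend claims to deliver against what actually arrives in the frame. A small sketch (the property names are standard OpenCV ones; the values you get back depend on the backend and driver):

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat img;
    cap >> img;

    // What the backend claims to deliver
    std::cout << "capture: " << cap.get(cv::CAP_PROP_FRAME_WIDTH)
              << " x "       << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << "\n";

    // What actually arrived
    std::cout << "frame: " << img.cols << " x " << img.rows
              << ", channels = " << img.channels()
              << ", depth = " << img.depth() << "\n";
    return 0;
}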

Related

Using OpenCV to capture images, not video

I'm using OpenCV 4 to read from a camera, similar to a webcam. It works great; the code is somewhat like this:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);

while (true)
{
    cv::Mat mat;
    // wait for some external event here so I know it is time to take a picture...
    cap >> mat;
    process_image(mat);
}
The problem is, this gives me many video frames, not a single image. That matters because in my case I don't want or need to be processing 30 FPS. I actually have specific physical events that trigger reading an image from the camera at certain times. Because OpenCV expects the caller to want video (not surprising, considering the class is called cv::VideoCapture), it has buffered many seconds of frames.
What I see in the image is always from several seconds ago.
So my questions:
Is there a way to flush the OpenCV buffer?
Or to tell OpenCV to discard the input until I tell it to take another image?
Or to get the most recent image instead of the oldest one?
The other option I'm thinking of investigating is using V4L2 directly instead of OpenCV. Will that let me take individual pictures or only stream video like OpenCV?
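Two workarounds are commonly suggested; neither is from this thread, so treat the following as a sketch. You can ask the backend to keep only one buffered frame via CAP_PROP_BUFFERSIZE (not every backend honors it), or simply grab and discard frames while waiting so the frame you finally decode is the freshest one. external_event_fired() is a hypothetical stand-in for the trigger check.

#include <opencv2/opencv.hpp>

bool external_event_fired();        // hypothetical trigger check (not in the question)
void process_image(const cv::Mat&); // the question's processing step

void capture_on_demand(cv::VideoCapture& cap)
{
    // Option 1: request a 1-frame buffer (backend-dependent; may be ignored).
    cap.set(cv::CAP_PROP_BUFFERSIZE, 1);

    cv::Mat mat;
    while (true)
    {
        // Option 2: keep draining the buffer while waiting for the trigger,
        // so the frame we finally decode is the most recent one.
        cap.grab();
        if (!external_event_fired())
            continue;

        cap.retrieve(mat);
        process_image(mat);
    }
}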

C++ Program with GUI to capture using remotely placed camera and read uncompressed TIFF Image

I want to do these operations on an image using C++:
Image Frame 1 is used for displaying the original image, and Image Frame 2 is used for displaying the result of the performed action.
• Sharpen the image
• Convolution (overloading: FFT, other)
• Blur the image (programmable rectangular seed)
• Histogram
• Programmable image contrast and brightness
• Mean and standard deviation of the image
• Rotate the image by a programmable angle
• PDF of a signal acquired through an ADC
You can do it with a Qt and OpenCV combination; I have used Qt + OpenCV on still images. You can easily create GUIs with Qt, and with OpenCV you can do the video capture and the other image processing steps on each frame.
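A minimal sketch of the hand-off between the two libraries (assuming the frame is a standard 8-bit BGR cv::Mat and that you display it in a QLabel; the widget name frameLabel is a placeholder):

#include <opencv2/opencv.hpp>
#include <QImage>
#include <QLabel>
#include <QPixmap>

// Convert an 8-bit BGR cv::Mat into a QImage that Qt widgets can display.
QImage matToQImage(const cv::Mat& bgr)
{
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);            // Qt expects RGB order
    return QImage(rgb.data, rgb.cols, rgb.rows,
                  static_cast<int>(rgb.step),
                  QImage::Format_RGB888).copy();           // copy() detaches from rgb's buffer
}

// Usage inside a Qt widget/slot, e.g. after grabbing or processing a frame:
//   frameLabel->setPixmap(QPixmap::fromImage(matToQImage(frame)));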

Unable to perform imwrite an image that I can see with imshow

I'm using OpenCV, and I have a frame that I can see using imshow(), but when I use imwrite() to save it to disk, I get a black image.
......
// frame *= 1/255; even converting the color before writing it didn't help
cv::sqrt(frame, frame);
cv::imwrite("name.tif", frame);
frame *= 1/15.96;
imshow("frame", frame); // it works fine
................
Any idea why it isn't working? Thanks in advance.
What you've done here is that you've performed imwrite(), then performed a mathematical operation on frame, and then performed imshow().
Ergo, you're writing and viewing two different versions of the Mat frame.
If you've verified that the .tiff extension works, then it's possible that sqrt() is resulting in a black frame. Try:
cv::sqrt(frame, frame);
frame *= 1/15.96;
cv::imwrite("name.tif", frame);
imshow("frame", frame);
Now you can be certain that you're writing what you're seeing. If there is still a difference between the two, then try doing the same with other image formats.
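Another thing worth ruling out (not mentioned in the thread, so only a guess): if frame is a floating-point Mat with values roughly in [0, 1], saving it to an 8-bit format via imwrite() will clip almost everything to black. Converting to CV_8U with a scale factor first avoids that; a sketch assuming values in [0, 1], dropped in place of the imwrite() line above:

cv::Mat out8u;
frame.convertTo(out8u, CV_8U, 255.0);   // scale [0,1] floats to [0,255] bytes
cv::imwrite("name.tif", out8u);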

Converting a UYVY video with FFmpeg

I want to read and show a video using OpenCV. I recorded it with DirectShow, and the video uses the UYVY (4:2:2) pixel format. Since OpenCV can't read that format, I want to convert it to an RGB color model. I have read about FFmpeg and want to know whether it's possible to do this with it. If not, I'd be thankful for any suggestion.
As I explained to you before, OpenCV can read some formats of YUV, including UYVY (thanks to FFmpeg/GStreamer). So I believe the cv::Mat you get from the camera is already converted to the BGR color space, which is what OpenCV uses by default.
I modified my previous program to store the first frame of the video as a PNG:
cv::Mat frame;
if (!cap.read(frame))
{
    return -1;
}
cv::imwrite("mat.png", frame);
for(;;)
{
// ...
And the image is perfect. Running the file command on mat.png reveals:
mat.png: PNG image data, 1920 x 1080, 8-bit/color RGB, non-interlaced
A more accurate test would be to dump the raw contents of frame.data to disk and open it with an image editor. If you do that, keep in mind that the R and B channels will be swapped.
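A small sketch of that dump (assuming a contiguous 8-bit BGR frame; the resulting file can be opened in an editor that accepts raw RGB data, with R and B swapped as noted above):

#include <fstream>
#include <opencv2/opencv.hpp>

// Write the raw pixel bytes of a (contiguous) Mat to disk.
void dumpRaw(const cv::Mat& frame, const std::string& path)
{
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(frame.data),
              frame.total() * frame.elemSize());
}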

OpenCV - VideoWriter produces a video with a "repeated" image

I'm trying to process each frame in a pair of video files in OpenCV and then write the resulting frames to an output AVI file. Everything works, except that the output video file looks strange: instead of one solid image, the image is repeated three times and horizontally compressed so all three copies fit into the window. I suspect something is going wrong with the number of channels the writer is expecting, but I'm giving it 8-bit single-channel images to write. Below are the settings with which I'm initializing my video writer:
//Initialize the video writer
CvVideoWriter *writer = cvCreateVideoWriter("out.avi",CV_FOURCC('D','I','V','X'), 30, frame_sizeL, 0);
Has anyone encountered this strange output from the OpenCV video writer before? I've been checking the resulting frames with cvSaveImage just to see if somehow my processing step is creating the "tripled" image, but it's not. It's only when I write to the output AVI with cvWriteFrame that the image gets "tripled" and compressed.
Edit: So I've discovered that this only happens when I attempt to write single-channel images using cvWriteFrame. If I write 3-channel 8-bit RGB images, the output video turns out fine. Why is it doing this? I am correctly passing 0 for the color argument when initializing the CvVideoWriter, so it should be expecting single-channel images.
In the C++ version you have to tell cv::VideoWriter that you are sending a single-channel image by setting the last parameter to false. Are you sure you are doing this?
Edit: alternatively, you can convert a greyscale image to color using cvtColor() and CV_GRAY2RGB.
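Both options, as a sketch with the C++ API (frame_size and gray_frame stand in for the question's frame size and 8-bit single-channel frames; isColor=false support depends on the codec, so the cvtColor route is often the safer one):

#include <opencv2/opencv.hpp>

void write_gray_video(const cv::Size& frame_size, const cv::Mat& gray_frame)
{
    // Option 1: declare the stream as single-channel up front.
    cv::VideoWriter grayWriter("out_gray.avi",
                               cv::VideoWriter::fourcc('D','I','V','X'),
                               30, frame_size, /*isColor=*/false);
    grayWriter << gray_frame;

    // Option 2: keep a color writer and expand each grey frame to 3 channels.
    cv::VideoWriter colorWriter("out_color.avi",
                                cv::VideoWriter::fourcc('D','I','V','X'),
                                30, frame_size, /*isColor=*/true);
    cv::Mat bgr;
    cv::cvtColor(gray_frame, bgr, cv::COLOR_GRAY2BGR);
    colorWriter << bgr;
}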