OpenCV : How to display webcam capture in windows form application? - c++

Generally we display webcam or video frames in an OpenCV window with:
CvCapture* capture = cvCreateCameraCapture(0);
cvNamedWindow( "title", CV_WINDOW_AUTOSIZE );
cvMoveWindow( "title", x, y );
IplImage* frame;
while(1)
{
    frame = cvQueryFrame( capture );
    if( !frame )
    {
        break;
    }
    cvShowImage( "title", frame );
    char c = cvWaitKey(33);
    if( c == 27 )
    {
        break;
    }
}
I tried using a pictureBox, which successfully displays an image in a Windows Form with this:
pictureBox1->Image = gcnew System::Drawing::Bitmap( image->width, image->height, image->widthStep, System::Drawing::Imaging::PixelFormat::Undefined, (System::IntPtr) image->imageData );
But when I try to display a captured frame from the video, it doesn't work. Here is the source:
CvCapture* capture = cvCreateCameraCapture(0);
IplImage* frame;
while(1)
{
    frame = cvQueryFrame( capture );
    if( !frame )
    {
        break;
    }
    pictureBox1->Image = gcnew System::Drawing::Bitmap( frame->width, frame->height, frame->widthStep, System::Drawing::Imaging::PixelFormat::Undefined, (System::IntPtr) frame->imageData );
    char c = cvWaitKey(33);
    if( c == 27 )
    {
        break;
    }
}
Is there any way to use a Windows Form instead of an OpenCV window to show video or the webcam?
Or is there something wrong with my code?
Thanks for your help! :)

A piece of advice: use videoInput instead of CvCapture. CvCapture is part of highgui, a library that is not intended for production use, just for quick testing. Yes, the videoInput homepage looks strange, but the library is quite worthwhile.
Here is a quick sample of videoInput usage (extracted from the videoInput.h file):
// create a videoInput object
videoInput VI;

// prints out a list of available devices and returns the number of devices found
int numDevices = VI.listDevices();

int device1 = 0; // this could be any deviceID that shows up in listDevices
int device2 = 1; // this could be any deviceID that shows up in listDevices

// if you want to capture at a different frame rate (default is 30),
// specify it here; you are not guaranteed to get this fps though
//VI.setIdealFramerate(device1, 60);

// set up the first device - there are a number of options:
VI.setupDevice(device1);                           // set up the first device with the default settings
//VI.setupDevice(device1, VI_COMPOSITE);           // or set up device with a specific connection type
//VI.setupDevice(device1, 320, 240);               // or set up device with a specified video size
//VI.setupDevice(device1, 320, 240, VI_COMPOSITE); // or set up device with video size and connection type

//VI.setFormat(device1, VI_NTSC_M); // if your card doesn't remember what format it should be,
                                    // call this with the appropriate format listed above
                                    // NOTE: must be called after setupDevice!

// optionally set up a second (or third, fourth, ...) device - same options as above
VI.setupDevice(device2);

// as the requested width and height can not always be accommodated,
// make sure to check the size once the device is set up
int width  = VI.getWidth(device1);
int height = VI.getHeight(device1);
int size   = VI.getSize(device1);

unsigned char * yourBuffer1 = new unsigned char[size];
unsigned char * yourBuffer2 = new unsigned char[size];

// to get the data from the device, first check whether the data is new
if(VI.isFrameNew(device1)){
    VI.getPixels(device1, yourBuffer1, false, false); // fills pixels as a BGR (for OpenCV) unsigned char array - no flipping
    VI.getPixels(device1, yourBuffer2, true, true);   // fills pixels as an RGB (for OpenGL) unsigned char array - flipping!
}
// the same applies to device2 etc.

// to get a settings dialog for the device
VI.showSettingsWindow(device1);

// shut down devices properly
VI.stopDevice(device1);
VI.stopDevice(device2);
delete[] yourBuffer1; // free the frame buffers when done
delete[] yourBuffer2;

The pixel format should be known when you capture images from a camera; it is very likely 24-bit BGR. System::Drawing::Imaging::PixelFormat::Format24bppRgb will be the closest format, but you might get a weird color display. Re-arranging the color components will solve this problem.
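To make the re-arranging concrete, here is a minimal sketch in plain C++ (no OpenCV or .NET dependencies; the 24-bit interleaved layout and the stride parameter are assumptions about your capture format) that swaps the blue and red bytes of each pixel in place:

```cpp
#include <algorithm>
#include <cstddef>

// Swap the B and R components of an interleaved 24-bit BGR buffer in place,
// turning it into RGB (or vice versa). `stride` is the number of bytes per
// row, which may be larger than width * 3 due to alignment padding.
void swapBlueRed(unsigned char* data, int width, int height, int stride)
{
    for (int y = 0; y < height; ++y)
    {
        unsigned char* row = data + static_cast<std::size_t>(y) * stride;
        for (int x = 0; x < width; ++x)
            std::swap(row[x * 3], row[x * 3 + 2]); // B <-> R
    }
}
```

Calling something like this on frame->imageData before constructing the Bitmap (with PixelFormat::Format24bppRgb instead of Undefined) should correct the channel order, assuming the capture really delivers 24-bit BGR.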
Actually, there are .NET versions of the OpenCV library available here:
http://code.google.com/p/opencvdotnet/
and here:
http://www.emgu.com/wiki/index.php/Main_Page
Hope it helps!

I don't know if you will like this, but you could use OpenGL to show the video stream in windows other than the ones provided by OpenCV. (Capture the frame and display it on a rectangle, or something like that.)

Another option you might want to consider is using Emgu. This is a .NET wrapper for OpenCV with WinForms controls.

Related

Recording desktop to file, using openCV

I'm attempting to use OpenCV with C++ on my Windows 10 system to record the screen as part of a larger program I am writing. I need the ability to record the display and save the recording for later review.
I was able to find this link on Stack Overflow:
How to capture the desktop in OpenCV (ie. turn a bitmap into a Mat)?
User john ktejik created a function that, in essence, accomplishes exactly what I'm looking for, short of saving the stream to a file.
What I have always done in the past is: once I've opened a connection to my webcam or a video file, I simply create a VideoWriter object and write the individual frames to file. I have attempted to do just that, using John's function as the video source.
int main (int argc, char **argv)
{
    HWND hwndDesktop = GetDesktopWindow ();
    int key = 0;
    int frame_width = 1920;
    int frame_height = 1080;
    VideoWriter video ("screenCap.avi", CV_FOURCC ('M', 'J', 'P', 'G'), 15, Size (frame_width, frame_height));
    while (key != 27)
    {
        Mat src = hwnd2mat (hwndDesktop);
        video.write (src);
        imshow ("Screen Capture", src);
        key = waitKey (27);
    }
    video.release ();
    destroyAllWindows ();
    return 0;
}
What I'm seeing as the output is the file labeled "screenCap.avi"; however, the file contains no video. It saves as only 16 KB.
John's function is on point, as it displays the frames just fine via imshow(), but it doesn't seem to let me save them.
So over the weekend I played with the software some more. As I really don't have a firm grasp on it, I figured there had to be a mismatch between the screen-capture settings and the file writer.
So I started looking at each line of John's function and came across:
src.create(height, width, CV_8UC4);
It seems that the Mat object is being created with 4 color channels. A bit more digging turned up a couple of references indicating that VideoWriter expects 3 channels.
So the simple fix was to convert the output of John's function from 4 channels to 3. This fixed the problem, and I can now write the frames to file.
int main (int argc, char **argv)
{
    HWND hwndDesktop = GetDesktopWindow ();
    int key = 0;
    int frame_width = 1920;
    int frame_height = 1080;
    VideoWriter video ("screenCap.avi", CV_FOURCC ('M', 'J', 'P', 'G'), 15, Size (frame_width, frame_height));
    while (key != 27)
    {
        Mat src = hwnd2mat (hwndDesktop);
        Mat dst;
        cvtColor (src, dst, COLOR_BGRA2RGB);
        video.write (dst);
        imshow ("Screen Capture", dst);
        key = waitKey (27);
    }
    video.release ();
    destroyAllWindows ();
    return 0;
}
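As a side note, the conversion that fixed it can be sketched by hand. This minimal plain-C++ illustration (not the OpenCV implementation) simply drops every fourth byte; note that COLOR_BGRA2BGR would keep the channel order VideoWriter expects, whereas COLOR_BGRA2RGB also swaps the red and blue channels:

```cpp
#include <cstddef>
#include <vector>

// Drop the alpha byte from an interleaved BGRA buffer, producing a BGR
// buffer with the channel order preserved (what COLOR_BGRA2BGR does).
std::vector<unsigned char> bgraToBgr(const unsigned char* src,
                                     std::size_t pixelCount)
{
    std::vector<unsigned char> dst;
    dst.reserve(pixelCount * 3);
    for (std::size_t i = 0; i < pixelCount; ++i)
    {
        dst.push_back(src[i * 4]);     // B
        dst.push_back(src[i * 4 + 1]); // G
        dst.push_back(src[i * 4 + 2]); // R
        // src[i * 4 + 3] is the alpha byte, discarded
    }
    return dst;
}
```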

Bad quality when rendering images from camera in Qt4

My code:
camera = new RaspiCam_Cv(); // Raspberry Pi camera library
camera->set(CV_CAP_PROP_FORMAT, CV_8UC1); // this is the monochrome 8-bit format
camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
camera->set(CV_CAP_PROP_FRAME_HEIGHT, 720);
while (1){
    camera->grab(); // for Linux
    unsigned char* buff = camera->getImageBufferData();
    QPixmap pic = QPixmap::fromImage(QImage( buff, camWidth_, camHeight_, camWidth_ * 1, QImage::Format_Indexed8 ));
    label->setPixmap(pic);
}
The problem is bad quality! I found out that the problem happens when using QImage; when using an OpenCV Mat, everything is good!
The same thing happens in other Qt-based programs, like this one (same bad quality): https://code.google.com/p/qt-opencv-multithreaded/
Here is a picture where the problem is shown. There is a white page in front of the camera, so if all went as it should, you would see a clean gray image.
You are resizing the image using pixmap and label transformations, which are worse than those of QImage. This is due to the pixmap being optimized for display and not for anything else. The pixmap size should be the same as the label's to avoid any further resizing.
QImage img = QImage(
    buff,
    camWidth_,
    camHeight_,
    camWidth_ * 1,
    QImage::Format_Indexed8 ).scaled(label->size());
label->setPixmap(QPixmap::fromImage(img));
This is not an answer, but it's too hard to share code in the comments.
Can you please test this code and tell me whether the result is good or bad?
int main(int argc, char** argv)
{
    RaspiCam_Cv *camera = new RaspiCam_Cv();
    camera->set(CV_CAP_PROP_FORMAT, CV_8UC1);
    camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
    camera->set(CV_CAP_PROP_FRAME_HEIGHT, 720);
    namedWindow("Output", CV_WINDOW_AUTOSIZE);
    while (1)
    {
        Mat frame;
        camera->grab();
        //camera->retrieve( frame );
        unsigned char* buff = camera->getImageBufferData();
        frame = cv::Mat(720, 960, CV_8UC1, buff);
        imshow("Output", frame);
        if (waitKey(30) == 27)
        { cout << "Exit" << endl; break; }
    }
    delete camera; // destroys and frees the camera object
    return 0;
}
Your provided images look like the color depth is only 16 bits.
For comparison, here's the provided captured image:
and here's the same image, transformed to a 16-bit color space in IrfanView (without Floyd-Steinberg dithering).
In the comments we found out that the Raspberry Pi output buffer was set to 16 bits, and setting it to 24 bits helped.
But I can't explain why rendering the image on the Pi with OpenCV's cv::imshow produced good-looking images on the monitor/TV...
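The banding from a 16-bit buffer can be reproduced in a few lines of plain C++ (a sketch assuming the common RGB565 layout; the Pi's exact 16-bit format is not confirmed here): pushing an 8-bit channel value through a 5- or 6-bit channel collapses nearby values into one.

```cpp
#include <cstdint>

// Quantize an 8-bit channel value through an n-bit channel (5 or 6 bits for
// RGB565) and expand it back to 8 bits by bit replication. Runs of nearby
// input values collapse to the same output, which appears as visible banding
// in smooth gradients.
std::uint8_t throughNBits(std::uint8_t v, int bits)
{
    std::uint8_t q = v >> (8 - bits); // truncate to n bits
    return static_cast<std::uint8_t>(
        (q << (8 - bits)) | (q >> (2 * bits - 8))); // expand back to 8 bits
}
```

For the 5-bit red/blue channels, the eight gray values 120 through 127 all come back as the same value, which is exactly the stepped look in the captured image.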

ISampleGrabber::BufferCB to IplImage; display in OpenCV shows garbled image - C++

I'm using DirectShow to access a video stream, and then using the SampleGrabber filter and interface to get samples from each frame for further image processing. I'm using a callback, so it gets called after each new frame. I've basically just worked from the PlayCap sample application and added a Sample Grabber filter to the graph.
The problem I'm having is that I'm trying to display the grabbed samples in a separate OpenCV window. However, when I try to cast the information in the buffer to an IplImage, I get a garbled mess of pixels. The code for the BufferCB callback is below, sans any proper error handling:
STDMETHODIMP BufferCB(double Time, BYTE *pBuffer, long BufferLen)
{
    AM_MEDIA_TYPE type;
    g_pGrabber->GetConnectedMediaType(&type);
    VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER *)type.pbFormat;
    BITMAPINFO* bmi = (BITMAPINFO *)&pVih->bmiHeader;
    BITMAPINFOHEADER* bmih = &(bmi->bmiHeader);
    int channels = bmih->biBitCount / 8;
    bmih->biPlanes = 1;
    bmih->biBitCount = 24;
    bmih->biCompression = BI_RGB;

    IplImage *Image = cvCreateImage(cvSize(bmih->biWidth, bmih->biHeight), IPL_DEPTH_8U, channels);
    Image->imageSize = BufferLen;
    CopyMemory(Image->imageData, pBuffer, BufferLen);
    cvFlip(Image);

    // OpenCV Mat creation (the second argument copies the data)
    Mat cvMat = Mat(Image, true);
    imshow("Display window", cvMat); // show our image inside it
    waitKey(2);

    cvReleaseImage(&Image); // the Mat holds its own copy, so release the IplImage
    return S_OK;
}
My question is: am I doing something wrong here that makes the displayed image look like this?
Am I missing header information or something?
The quoted code is only part of the solution. You create here an image object of a certain width/height, with 8-bit pixel data and an unknown channel/component count. Then you copy data from another buffer of unknown format.
The only chance for this to work well is if all the unknowns amazingly match without your effort. So you basically need to start by checking what media type exactly is on the Sample Grabber's input pin. Then, if it is not what you wanted, you have to update your code accordingly. It might also be important what the downstream connection of the SG is, and whether it is connected to a video renderer in particular.
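One cheap check along these lines (a plain-C++ sketch; the width/height/bit-count values would come from the BITMAPINFOHEADER you read off the pin) is to verify that BufferLen matches the frame size implied by the format, including the DWORD alignment of DIB rows:

```cpp
#include <cstddef>

// Expected byte size of a DIB-style frame: each row is padded up to a
// multiple of 4 bytes (DWORD-aligned), as in BITMAPINFOHEADER-described media.
std::size_t expectedFrameSize(int width, int height, int bitsPerPixel)
{
    std::size_t rowBytes =
        (static_cast<std::size_t>(width) * bitsPerPixel + 31) / 32 * 4;
    return rowBytes * height;
}
```

If BufferLen disagrees with expectedFrameSize(bmih->biWidth, abs(bmih->biHeight), bmih->biBitCount), the media type you are reading is not the one actually flowing on the pin, and a blind cast to IplImage will produce exactly this kind of garbling.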

Displaying IplImages in Qt labels

I ran into a routine that converts from IplImage to QImage in Qt. I tried it and it works perfectly. After that I tried displaying a video in a label, also using IplImage frames, and it worked too. But now I'm trying to display live video from my webcam and I'm running into some kind of trouble, because it doesn't display anything. OpenCV 2.3, Ubuntu Linux, C++.
CvCapture* capture = cvCreateFileCapture( argv[1] );
//CvCapture* capture = cvCaptureFromCAM( 0 );
while(1) {
    frame = cvQueryFrame( capture );
    cvWaitKey(33);
    if( !frame ) break;
    cvCvtColor(frame, frame, CV_BGR2RGB);
    myImage = QImage((unsigned char *)frame->imageDataOrigin, frame->width, frame->height, QImage::Format_RGB888);
    myLabel.setPixmap(QPixmap::fromImage(myImage));
    myLabel.show();
    //sleep(1);
    Sleeper::msleep(33);
}
There I have the two options: capture from cam or capture from AVI. From an AVI video it converts and displays the frames perfectly, but when I try the same thing with my webcam's captured frames it doesn't display anything. I also don't get any error or anything like that. Any idea?
From the looks of it, cvCaptureFromCAM() failed to find a device at index 0. But you don't know this because you are not coding defensively: cvCaptureFromCAM() returns NULL when it fails to access a device:
CvCapture* capture = cvCaptureFromCAM( 0 );
if (!capture)
{
    // print error
    // quit application
}
Try passing CV_CAP_ANY or experiment with other indexes (1, 2, 3), and if none of them work I suggest you check the compatibility list and verify that your camera is supported by OpenCV.
The same attention should be paid to cvQueryFrame():
frame = cvQueryFrame( capture );
if (!frame)
{
    // print error
    // quit application
}

OpenCV : convert the pointer in memory to image

I have a grabber which can get the images and show them on the screen with the following code:
while((lastPicNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr+1, 0, 10, _memoryAllc)) < 200) {
    iPtr = (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc);
    ::DrawBuffer(nId, iPtr, lastPicNr, "testing");
}
But I want to take the pointer to the image data and display the images with OpenCV, because I need to process the pixels. My camera is a CCD mono camera and the pixel depth is 8 bits. I am new to OpenCV; is there any option in OpenCV that can take the return value of (unsigned char*)Fg_getImagePtrEx(fg,lastPicNr,0,_memoryAllc); and display it on the screen? Or get the data from the iPtr pointer and allow me to use the image data?
Creating an IplImage from unsigned char* raw_data takes two important calls: cvCreateImageHeader() and cvSetData():
// 1 channel for a mono camera; for RGB it would be 3
int channels = 1;
IplImage* cv_image = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
if (!cv_image)
{
    // print error, failed to allocate image!
}
cvSetData(cv_image, raw_data, cv_image->widthStep);

cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
cvShowImage("win1", cv_image);
cvWaitKey(10);

// release resources
cvReleaseImageHeader(&cv_image);
cvDestroyWindow("win1");
I haven't tested the code, but the roadmap for the code you are looking for is there.
If you are using C++, I don't understand why you are not doing it the simple way. If your camera is supported, I would do it like this:
cv::VideoCapture capture(0);
if (!capture.isOpened()) {
    // print error
    return -1;
}

cv::namedWindow("viewer");
cv::Mat frame;
while( true )
{
    capture >> frame;
    // ... processing here
    cv::imshow("viewer", frame);
    int c = cv::waitKey(10);
    if( (char)c == 'c' ) { break; } // press c to quit
}
I would recommend starting to read the docs and tutorials, which you can find here.