I am planning on creating a web application that uses OpenCV for image processing. The image processing would compare a user-uploaded picture with pictures I would have in a database. I would rather have the user upload an image (or use a similar upload method) than capture one via a webcam.
I would like to know how I can do the processing using the OpenCV library, and what I would need to connect my OpenCV processing to my web application.
If you want to load an image from the web, you can use the VideoCapture class and open the image the same way you would open a video file. See the example below.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap;
    if (!cap.open("http://docs.opencv.org/trunk/_downloads/opencv-logo.png")) {
        cout << "Cannot open image" << endl;
        return -1;
    }
    Mat src;
    cap >> src;
    imshow("src", src);
    imwrite("image.jpg", src);
    waitKey();
    return 0;
}
Or use Portable Native Client, as OpenCV has now been ported to Google Chrome NaCl and PNaCl. See the OpenCV news for more details.
You could take the webcam shots and upload them, and then compare them like traditional image files (from the filesystem and/or database).
A modern way of taking the shots would be to use WebRTC:
http://css.dzone.com/articles/face-detection-using-html5
Other alternatives will probably involve Macromedia Flash or similar stuff.
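For the comparison step itself, here is a minimal OpenCV sketch (a simple hue/saturation histogram correlation; the file names are placeholders and your real matching logic could just as well be feature-based instead):

#include <opencv2/opencv.hpp>
#include <iostream>

// Compare two images by their hue/saturation histograms.
// Returns a correlation score; closer to 1 means more similar.
double compareImages(const cv::Mat& a, const cv::Mat& b)
{
    cv::Mat hsvA, hsvB;
    cv::cvtColor(a, hsvA, cv::COLOR_BGR2HSV);
    cv::cvtColor(b, hsvB, cv::COLOR_BGR2HSV);

    int channels[] = {0, 1};
    int histSize[] = {50, 60};
    float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float* ranges[] = {hRange, sRange};

    cv::Mat histA, histB;
    cv::calcHist(&hsvA, 1, channels, cv::Mat(), histA, 2, histSize, ranges);
    cv::calcHist(&hsvB, 1, channels, cv::Mat(), histB, 2, histSize, ranges);
    cv::normalize(histA, histA, 0, 1, cv::NORM_MINMAX);
    cv::normalize(histB, histB, 0, 1, cv::NORM_MINMAX);

    return cv::compareHist(histA, histB, cv::HISTCMP_CORREL);
}

int main()
{
    cv::Mat uploaded  = cv::imread("uploaded.jpg");   // the user's upload
    cv::Mat reference = cv::imread("reference.jpg");  // an image from your database
    if (uploaded.empty() || reference.empty()) return -1;
    std::cout << "Similarity: " << compareImages(uploaded, reference) << std::endl;
    return 0;
}

Histogram comparison is cheap but coarse; if you need to recognise the same picture under cropping or lighting changes, feature matching (e.g. ORB descriptors with a matcher) is usually more robust.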
How to display an OpenCV Mat image in a UWP app with C++?
I want to make an image processing app with UWP, C++ and OpenCV.
For example: read 2 images, overlay them, and display the result image in the UWP app window.
I am handling the images with the cv::Mat structure, and I assume that in XAML I would use a tag to display the result image.
I tried searching with some keywords (e.g. uwp, c++, openCV, Mat, BitmapImage, etc.) but I could not find an appropriate solution.
Is there any solution, or another proper approach?
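For the OpenCV half of this (reading 2 images and overlaying them), a minimal sketch using cv::addWeighted could look like the following; the file names and blend weights are placeholders, and showing the result in XAML still requires converting the cv::Mat to something like a WriteableBitmap or SoftwareBitmap:

#include <opencv2/opencv.hpp>

int main()
{
    // Load the two input images (placeholder paths).
    cv::Mat a = cv::imread("first.png");
    cv::Mat b = cv::imread("second.png");
    if (a.empty() || b.empty()) return -1;

    // Make the second image the same size so they can be blended per pixel.
    cv::resize(b, b, a.size());

    // Overlay: 70% of the first image plus 30% of the second.
    cv::Mat blended;
    cv::addWeighted(a, 0.7, b, 0.3, 0.0, blended);

    cv::imwrite("blended.png", blended);
    return 0;
}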
I'd like to make a detailed video list in my Qt application using vlc-qt. Other playback engines such as QtAV or QtMultimedia are not an option; it should be vlc-qt (libvlc). That's why I need to get a small preview picture of each video, but I can't find anything suitable for this task except libvlc_video_take_snapshot. That method saves a picture locally, and I guess it needs a real render window to exist. That's not a good option for me; maybe there's a better solution?
Is there any way to create a video slideshow from images using FFmpeg without using the exe? That is, I have to do this with the DLL and lib they provide.
Please give me suggestions.
I'm working on a project that requires me to record the webcam, the microphone, and the screen. I have webcam recording working, audio is a work in progress, and I stumbled across the CMonitor wrapper (to which I made some minor modifications) to grab RGB images of the desktop on a specified monitor (if there are multiple monitors).
How do I go about pushing my raw RGB frames into Windows Media Foundation to encode them into a video file? My current video encoding uses a slightly modified version of this MSDN sample, if that's easier to modify than writing a new class handler.
Or perhaps there is some Media Foundation route to recording the screen that I don't know of (which is possible; I'm not that great of a Win32 programmer)?
Found PushSource in the Windows SDK samples, which does this.
Check out the Desktop Duplication API for capturing the desktop. Media Foundation provides two solutions for encoding: the MF Sink Writer for simple encoding, and the Media Session for more flexible control of the media pipeline. Read this overview page first.
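As a rough sketch of the Sink Writer route (modelled on the MSDN tutorial pattern; the frame size, frame rate and the WriteFrame helper name are my own assumptions, and the one-time setup -- MFStartup, MFCreateSinkWriterFromURL, AddStream, SetInputMediaType, BeginWriting -- is omitted here), pushing one raw RGB32 frame could look like this:

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

const UINT32 WIDTH = 1920, HEIGHT = 1080, FPS = 30;     // assumed capture format
const LONGLONG FRAME_DURATION = 10 * 1000 * 1000 / FPS; // sample duration in 100-ns units

// Wrap one top-down RGB32 frame in an IMFSample and hand it to the sink writer.
HRESULT WriteFrame(IMFSinkWriter* writer, DWORD stream, const BYTE* rgb, LONGLONG timestamp)
{
    const LONG  stride   = 4 * WIDTH;
    const DWORD cbBuffer = stride * HEIGHT;

    IMFMediaBuffer* buffer = nullptr;
    IMFSample*      sample = nullptr;
    BYTE*           data   = nullptr;

    HRESULT hr = MFCreateMemoryBuffer(cbBuffer, &buffer);
    if (SUCCEEDED(hr)) hr = buffer->Lock(&data, nullptr, nullptr);
    // Copy the rows into the media buffer (flip first if your source is a bottom-up DIB).
    if (SUCCEEDED(hr)) hr = MFCopyImage(data, stride, rgb, stride, stride, HEIGHT);
    if (data)          buffer->Unlock();
    if (SUCCEEDED(hr)) hr = buffer->SetCurrentLength(cbBuffer);

    if (SUCCEEDED(hr)) hr = MFCreateSample(&sample);
    if (SUCCEEDED(hr)) hr = sample->AddBuffer(buffer);
    if (SUCCEEDED(hr)) hr = sample->SetSampleTime(timestamp);
    if (SUCCEEDED(hr)) hr = sample->SetSampleDuration(FRAME_DURATION);
    if (SUCCEEDED(hr)) hr = writer->WriteSample(stream, sample);

    if (sample) sample->Release();
    if (buffer) buffer->Release();
    return hr;
}

Call WriteFrame once per captured desktop frame with an increasing timestamp, then call IMFSinkWriter::Finalize when you are done.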
I'm working on software whose current version includes a custom-made device driver for a webcam, and we use this driver with our software, which changes the captured image before displaying it, very similar to YouCam.
Basically, when any application that uses the webcam starts, our driver runs some processing on each frame before showing it.
The problem is that there are always "2" webcams installed: the real one and our custom driver.
I noticed that YouCam does what we need, which is to hook into any installed webcam and process each frame before it is shown.
Does anyone know how to do this?
We use VC++.
Thanks
As bkritzer said, OpenCV easily does what you want.
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <cassert>

IplImage *image = 0;     // OpenCV image type
CvCapture *capture = 0;  // OpenCV capture type

// Create capture from the first webcam
capture = cvCaptureFromCAM(0);
assert(capture && "Can't connect webcam");

// Capture images
while (stillCapturing)
{
    // Grab image
    cvGrabFrame(capture);
    // Retrieve image
    image = cvRetrieveFrame(capture);
    // You can configure the refresh time
    if (image) cvWaitKey(refreshTime);
    // Process your image here
    // ...
}
You can encapsulate these OpenCV calls into a C++ class and dedicate a specific thread to it -- this will be your "driver".
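As a rough sketch of that encapsulation (using the current C++ OpenCV API plus std::thread; the class and callback names are purely illustrative):

#include <opencv2/opencv.hpp>
#include <atomic>
#include <functional>
#include <thread>

// Illustrative wrapper: grabs frames on its own thread and hands each one
// to a caller-supplied processing callback.
class CameraWorker
{
public:
    explicit CameraWorker(int device = 0) : cap_(device) {}

    void start(std::function<void(const cv::Mat&)> onFrame)
    {
        running_ = true;
        worker_ = std::thread([this, onFrame]() {
            cv::Mat frame;
            while (running_ && cap_.read(frame))
                onFrame(frame);   // process / forward the frame here
        });
    }

    void stop()
    {
        running_ = false;
        if (worker_.joinable()) worker_.join();
    }

private:
    cv::VideoCapture cap_;
    std::atomic<bool> running_{false};
    std::thread worker_;
};

Note that this still opens the camera itself, so it only covers the "process frames in your own application" case, not hooking another application's capture.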
I think that YouCam uses a DirectShow transform filter. Is that what you need?
Check out the OpenCV library. It has a bunch of tutorial examples and does exactly what you're asking for. It's a bit tough to install, but I've gotten it to work before.
Well, I think there are 2 key concepts in this question that have been misunderstood:
1) How to hook webcam capture
2) ...any application that uses the webcam...
If I understood correctly, OpenCV is useful for writing your own complete application, "complete" meaning that it opens the camera and processes the images itself. So it wouldn't satisfy point 2), which I understand as referring to some other application (not yours!) opening the camera while your application processes the images.
Point 1) seems to confirm this, because "hook" usually means intercepting some other process that is not part of your own application.
So I doubt whether this question has really been answered. I am also interested in it.