Basically, I need to capture video from a video camera, do some processing on the frames, and for each frame show a detection rectangle.
Example: http://www.youtube.com/watch?v=aYd2kAN0Y20
How would you superimpose this rectangle on the output of a video camera (USB), in C++?
I would use OpenCV, an open-source computer vision library, to get input from a webcam or a video file.
Here is a tutorial on how to install it:
http://opensourcecollection.blogspot.com.es/2011/04/how-to-setup-opencv-22-in-codeblocks.html
Then I would use this code:
CvCapture *capture = cvCreateCameraCapture(-1);
IplImage* frame = cvQueryFrame(capture);
This gets the image, frame, from the capture device, capture.
In this case, capture is taken directly from a video camera, but you can also create it from a video file with:
CvCapture *capture = cvCreateFileCapture("filename.avi");
Then, I would draw on the image with functions defined here: http://opencv.willowgarage.com/documentation/drawing_functions.html
By the way, the shape in the Youtube video is not a rectangle. It's a parallelogram.
If you want to do it live, you can basically put this in a loop: get a frame, process it, draw on it, and then output the image.
You would include this before your loop:
cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
And then, in your loop, you would say this:
cvShowImage("Capture", frame);
after your processing step.
EDIT: To do this in C++, open your webcam like this:
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
To initialize it from a file, instead of putting in the camera index, put the file path.
Get a frame from the camera like this:
Mat frame;
cap >> frame; // get a new frame from camera
Then you can find drawing functions here:
http://opencv.willowgarage.com/documentation/cpp/core_drawing_functions.html
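Putting the pieces together, a minimal C++ sketch might look like this. The rectangle coordinates are placeholders for whatever your detector reports, and the window name and key handling are just illustrations:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);             // open the default camera
    if (!cap.isOpened()) return -1;

    cv::namedWindow("Capture", cv::WINDOW_AUTOSIZE);
    cv::Mat frame;
    for (;;) {
        cap >> frame;                    // get a new frame from the camera
        if (frame.empty()) break;

        // ... your per-frame processing/detection goes here ...

        // Placeholder rectangle; replace with your detector's coordinates.
        cv::rectangle(frame, cv::Point(100, 100), cv::Point(300, 250),
                      cv::Scalar(0, 255, 0), 2); // green, 2 px thick
        cv::imshow("Capture", frame);
        if (cv::waitKey(30) >= 0) break; // press any key to quit
    }
    return 0;
}
```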
Cheers!
I'm trying to access multiple USB cameras in OpenCV on Mac OS X 10.11.
My goal is to connect up to 20 cameras to the PC via USB quad-channel extensions and take single images. I do not need live streaming.
I tried the following code, and I can take a single image from all cameras (currently only 3, via one USB controller).
The question is: does OpenCV stream live video from the USB cameras all the time, or does grab() store an image on the camera that can then be retrieved with retrieve()?
I couldn't find information on whether OpenCV applies the grab() command to its internal video buffer or to the camera itself.
int main(int argument_number, char* argument[])
{
    std::vector<int> cameraIDs{0, 1, 2};
    std::vector<cv::VideoCapture> cameraCaptures;
    std::vector<std::string> nameCaptures{"a", "b", "c"};

    // Load all cameras
    for (int i = 0; i < cameraIDs.size(); i++)
    {
        cv::VideoCapture camera(cameraIDs[i]);
        if (!camera.isOpened()) return 1;
        camera.set(CV_CAP_PROP_FRAME_WIDTH, 640);
        camera.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
        cameraCaptures.push_back(camera);
    }

    cv::namedWindow("a");

    while (true)
    {
        int c = cvWaitKey(2);
        if (27 == char(c)) // if Esc is pressed, grab new images and display them
        {
            for (std::vector<cv::VideoCapture>::iterator it = cameraCaptures.begin(); it != cameraCaptures.end(); it++)
            {
                (*it).grab();
            }
            int i = 0;
            for (std::vector<cv::VideoCapture>::iterator it = cameraCaptures.begin(); it != cameraCaptures.end(); it++)
            {
                cv::Mat3b frame;
                (*it).retrieve(frame);
                cv::imshow(nameCaptures[i++], frame);
            }
        }
    }
    return 0;
}
Could you please clarify: do you want only a single frame from each feed, or do you want the streams connected all the time?
An OpenCV camera capture is always running until you release the capture device. So if you only want one frame from a device, it is better to release the device once you have retrieved the frame.
Another point: instead of using grab() and retrieve() in a multi-camera environment, it is better to use read(), which combines both methods and reduces the overhead of decoding the streams. If you want, say, the frame at the 2 s position from each camera, the captures end up very close together in the time domain: the frame from cam1 at 2 s, from cam2 at 2.00001 s, and from cam3 at 2.00015 s (time multiplexing via multithreading, internal to OpenCV).
I hope the explanation is clear.
Thanks
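The read() suggestion above can be sketched like this. This is a minimal single-shot outline; the camera indices and output filenames are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
    std::vector<int> ids{0, 1, 2};       // camera indices (placeholders)
    std::vector<cv::VideoCapture> caps;
    for (int id : ids) {
        caps.push_back(cv::VideoCapture(id));
        if (!caps.back().isOpened()) return 1;
    }

    cv::Mat frame;
    for (size_t i = 0; i < caps.size(); ++i) {
        if (caps[i].read(frame))         // grab() + retrieve() in one call
            cv::imwrite("cam" + std::to_string(i) + ".png", frame);
        caps[i].release();               // single-shot use: free the device now
    }
    return 0;
}
```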
I am trying to obtain video from an Axis 6034E IP camera using OpenCV in C++.
I can easily read the stream using the following simple code:
VideoCapture vid;
vid.open("http://user:password@ipaddress/mjpg/video.mjpg");
Mat frame;
while(true){
vid.read(frame);
imshow("frame", frame);
waitKey(10);
}
But my problem is that the password contains # and, unfortunately, it is the last character of the password. Any ideas would be appreciated.
I tried \# and some other encodings, but it didn't help.
I want to detect motion in an already existing video; the video is stored in the WebM format. I have seen some OpenCV demos, but those samples capture motion from a live webcam stream.
Is there any library or API that captures motion from a WebM video file in C++?
Please help me.
If you have code that runs with webcam input, you only have to change the input type to accept the video file as input.
Basically, you can accomplish it using the VideoCapture object.
cv::VideoCapture cap("path/for/file.fileextension");
and then put each frame into a Mat, one frame at a time:
Mat frame;
cap >> frame;
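For the motion detection itself, one simple option is frame differencing on that file input. This is only a sketch: the file path is a placeholder, WebM decoding depends on your OpenCV/FFmpeg build, and the threshold values are arbitrary illustrations:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Open the existing file instead of a camera (path is a placeholder).
    cv::VideoCapture cap("path/for/file.webm");
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, prev, diff;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;                 // end of file
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (!prev.empty()) {
            cv::absdiff(gray, prev, diff);        // per-pixel difference
            cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);
            // Call it "motion" if more than 1% of the pixels changed.
            if (cv::countNonZero(diff) > 0.01 * diff.total())
                std::cout << "motion detected\n";
        }
        prev = gray.clone();
    }
    return 0;
}
```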
I'm using OpenCV to get some video frames. This is how the camera capture is initialised:
VideoCapture capture;
capture.open(0); //Read from camera #0
If I wanted to switch to different camera, I'd do this:
capture.release(); //Release the stream
capture.open(1); //Open different stream
Imagine you had a few cameras connected to your computer and you wanted to loop through them using two buttons Previous camera and Next camera. Without saving the current camera ID to a variable, I need to get the actual value from the VideoCapture object.
So is there a way to find out the ID of the currently used device?
Pseudocode:
int current = capture.deviceId;
capture.release();
capture.open(current++);
So is there a way to find out the ID of the currently used device?
There's no way to do this, because the VideoCapture class doesn't contain such a variable or method. It does contain a protected pointer to CvCapture (take a look at highgui.h), so you could try to play with that, but you don't have access to this field.
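A common workaround, sketched below, is to keep the index next to the capture yourself; this is my own suggestion, not an OpenCV feature. The capture type is a template parameter here only so the bookkeeping is visible on its own; in practice CaptureT would be cv::VideoCapture:

```cpp
// Minimal wrapper: VideoCapture does not expose the device index,
// so remember the index you opened it with.
// CaptureT stands in for cv::VideoCapture (it only needs open() and release()).
template <class CaptureT>
struct IndexedCapture {
    CaptureT cap;     // the underlying capture object
    int index = -1;   // -1 means "no device open"

    bool openDevice(int id) {
        cap.release();                  // release any current stream
        index = cap.open(id) ? id : -1; // remember the id only on success
        return index != -1;
    }

    int deviceId() const { return index; }
};
```

With this, the "next camera" button becomes `c.openDevice(c.deviceId() + 1)`.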
I want to read and show a video using OpenCV. I recorded it with DirectShow; the video has the UYVY (4:2:2) codec. Since OpenCV can't read that format, I want to convert it to an RGB color model. I read about FFmpeg, and I want to know whether it is possible to get this done with it. If not, I'd be thankful for any suggestion.
As I explained to you before, OpenCV can read some YUV formats, including UYVY (thanks to FFmpeg/GStreamer). So I believe the cv::Mat you get from the camera is already converted to the BGR color space, which is what OpenCV uses by default.
I modified my previous program to store the first frame of the video as PNG:
cv::Mat frame;
if (!cap.read(frame))
{
    return -1;
}
cv::imwrite("mat.png", frame);

for (;;)
{
    // ...
And the image is perfect. Running the file command on mat.png reveals:
mat.png: PNG image data, 1920 x 1080, 8-bit/color RGB, non-interlaced
A more accurate test would be to dump the entire frame.data buffer to disk and open it with an image editor. If you do that, keep in mind that the R and B channels will be switched.