I want to detect motion in an already existing video. The video is stored in the WebM format. I have seen some OpenCV demos, but those samples capture motion from a live webcam stream.
Is there any library or API that can detect motion in a WebM video file in C++?
Please help me.
If you have code that runs with webcam input, you only have to change the input so that it accepts the video file instead.
Basically, you can accomplish this using the VideoCapture object:
cv::VideoCapture cap("path/for/file.fileextension");
and then read this input frame by frame into a Mat:
Mat frame;
cap >> frame;
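To give a rough idea of the motion-detection part itself, here is a minimal sketch based on simple frame differencing (absdiff plus a threshold). It assumes your OpenCV build has a backend that can decode WebM (e.g. FFmpeg); "input.webm" and the two threshold values are placeholders you will want to tune:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("input.webm");   // placeholder path
    if (!cap.isOpened())
        return -1;

    cv::Mat frame, gray, prevGray, diff, mask;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (!prevGray.empty())
        {
            // Pixels that changed between consecutive frames indicate motion.
            cv::absdiff(prevGray, gray, diff);
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
            if (cv::countNonZero(mask) > 500)   // arbitrary sensitivity
                std::cout << "motion detected" << std::endl;
        }
        prevGray = gray.clone();
    }
    return 0;
}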
I'm using OpenCV 4 to read from a camera (similar to a webcam). It works great; the code is somewhat like this:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
while (true)
{
    cv::Mat mat;
    // wait for some external event here so I know it is time to take a picture...
    cap >> mat;
    process_image(mat);
}
Problem is, this gives many video frames, not a single image. This is important because in my case I don't want nor need to be processing 30 FPS. I actually have specific physical events that trigger reading the image from the camera at certain times. Because OpenCV is expecting the caller to want video -- not surprising considering the class is called cv::VideoCapture -- it has buffered many seconds of frames.
What I see in the image is always from several seconds ago.
So my questions:
Is there a way to flush the OpenCV buffer?
Or to tell OpenCV to discard the input until I tell it to take another image?
Or to get the most recent image instead of the oldest one?
The other option I'm thinking of investigating is using V4L2 directly instead of OpenCV. Will that let me take individual pictures or only stream video like OpenCV?
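On the buffer question: one workaround worth sketching (an assumption, since behavior is backend-dependent) is to shrink the capture buffer where cv::CAP_PROP_BUFFERSIZE is honored, and to drain any stale frames with grab() just before the frame you actually want. The buffer size of 1 and the discard count of 5 below are guesses to tune:

cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH , 1600);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1200);
// Honored only by some backends (e.g. V4L2); ignored elsewhere.
cap.set(cv::CAP_PROP_BUFFERSIZE, 1);

cv::Mat mat;
// ...external trigger fires here...
for (int i = 0; i < 5; ++i)   // discard frames buffered since the last read
    cap.grab();
cap.retrieve(mat);            // decode only the most recently grabbed frame
process_image(mat);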
Currently, I am working on a streaming project. I need to grab frames from a USB camera and send them over TCP.
To open the USB camera's video stream I'm using cv::VideoCapture, which gives me already-decoded frames. According to this question, I understood that there is no way to get encoded frame data from cv::VideoCapture, so I need to encode each frame again with cv::imencode before sending it. The problem is that I can only encode frames to the specific formats listed here, and whether I use .jpg or .png the file size is still quite big, so the frame rate on the receiving side is very poor.
My question is: is there any way to get MJPEG or H.264 encoded data directly, or can you suggest a better way to encode the frames?
OpenCV 3.4.3, camera RICOH THETA V, language C++.
My code:
void Streamer::start()
{
    cv::Mat img;
    cv::VideoCapture cap(0);
    // Ask the camera for H.264 (only honored by some backends/cameras).
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('H', '2', '6', '4'));
    if (!cap.isOpened())
        throw std::invalid_argument("No device found.");

    // imencode expects (parameter id, value) pairs.
    std::vector<int> format_params;
    format_params.push_back(cv::IMWRITE_PNG_STRATEGY);
    format_params.push_back(cv::IMWRITE_PNG_STRATEGY_DEFAULT);

    for (;;)
    {
        cap.read(img);
        cv::imencode(".png", img, buffer_, format_params);
        std::string strbuf(buffer_.begin(), buffer_.end());
        server_->sendString(socket, strbuf);
    }
    cap.release();
}
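If re-encoding every frame is acceptable, one way to shrink the payload (a sketch only; it will not match hardware MJPEG/H.264) is to send JPEG with an explicit quality setting instead of PNG. The quality value 80 is just a starting assumption:

// JPEG trades size for fidelity; 80 is an arbitrary starting quality.
std::vector<int> jpeg_params;
jpeg_params.push_back(cv::IMWRITE_JPEG_QUALITY);
jpeg_params.push_back(80);

std::vector<uchar> buf;
cv::imencode(".jpg", img, buf, jpeg_params);
// buf now holds the compressed bytes, typically far smaller than a PNG of the same frame.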
I am trying to obtain video from an Axis 6034E IP camera using OpenCV in C++.
I can easily read the stream using the following simple code:
VideoCapture vid;
vid.open("http://user:password#ipaddres/mjpg/video.mjpg");
Mat frame;
while (true) {
    vid.read(frame);
    imshow("frame", frame);
    waitKey(10);
}
But my problem is that the password contains # and, unfortunately, it is the last character of the password. I will appreciate any ideas.
I tried \# and some other encoding methods and it didn't help.
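One thing worth trying (only an assumption, since it depends on the backend OpenCV uses to open the URL) is to percent-encode the # as %23, the standard URL escape for that character:

// '#' percent-encoded as "%23"; "user", "secret" and "ipaddress" are placeholders.
VideoCapture vid;
vid.open("http://user:secret%23@ipaddress/mjpg/video.mjpg");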
I'm trying to capture the raw data of the Logitech Pro 9000 (e.g. the so-called Bayer pattern). This can be achieved with the so-called Bayer application that can be found floating around the internet. It should return an 8-bit Bayer pattern, but the results are quite obviously not such a pattern.
However, the image that is being streamed seems to be quite off. As can be seen in the image below, I get 2 images of the scene in a 3-channel image (meaning 6 channels in total). Each image is 1/4th of the total capture area, so it would seem that some kind of YUV data is being streamed.
I was unable to convert this data into anything meaningful using the conversions provided by OpenCV. Any ideas what kind of data is being sent and (more importantly) how to convert it to RGB?
EDIT
As requested, the code snippet used to generate the image:
system("Bayer.exe 1 8"); //Sets the camera to raw mode
// set up camera
VideoCapture capture(0);
if (!capture.isOpened()) {
    waitKey();
    exit(0);
}
Mat capturedFrame;
while (true) {
    capture >> capturedFrame;
    imshow("Raw", capturedFrame);
    waitKey(25);
}
How did you get frames from the stream using OpenCV? Can you share some code snippets? There are too many video formats in OpenCV to guess the correct color channels and compression.
I think you should be able to obtain correct image frames as mentioned here:
http://forum.openrobotino.org/archive/index.php/t-295.html?s=c33acb1fb91f5916080f8dfd687598ec
This is most likely to happen if the output data format (width, height, bit depth, number of channels...) of the camera and the data format your program expects are different.
However, I could capture from a Logitech Pro cam simply by using
Mat img;
VideoCapture cap(0);
cap >> img;
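If the frames really are single-channel 8-bit Bayer data, the usual way to get BGR out of them is a demosaic with cvtColor. This is only a sketch: which Bayer code (BG/GB/RG/GR) applies depends on the sensor layout, so the constant below is an assumption:

Mat raw, bgr;
VideoCapture cap(0);
cap >> raw;
// raw is assumed to be CV_8UC1 Bayer data; the exact pattern code may need changing.
cvtColor(raw, bgr, COLOR_BayerBG2BGR);
imshow("Demosaiced", bgr);
waitKey();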
I want to read and show a video using OpenCV. I recorded it with DirectShow, and the video uses the UYVY (4:2:2) format. Since OpenCV can't read that format, I want to convert it to an RGB color model. I read about FFmpeg and I want to know if it's possible to get this done with it; if not, I'd be thankful for any suggestion.
As I explained to you before, OpenCV can read some YUV formats, including UYVY (thanks to FFmpeg/GStreamer). So I believe the cv::Mat you get from the camera is already converted to the BGR color space, which is what OpenCV uses by default.
I modified my previous program to store the first frame of the video as PNG:
cv::Mat frame;
if (!cap.read(frame))
{
    return -1;
}
cv::imwrite("mat.png", frame);
for (;;)
{
    // ...
And the image is perfect. Running the file command on mat.png reveals:
mat.png: PNG image data, 1920 x 1080, 8-bit/color RGB, non-interlaced
A more accurate test would be to dump the entire frame.data to disk and open it with an image editor that understands raw data. If you do that, keep in mind that the R and B channels will be swapped.
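For reference, a minimal sketch of that raw dump (assuming frame is a continuous 8-bit BGR Mat; the output file name is just a placeholder):

#include <fstream>

// Writes the raw pixel bytes (BGR order, no header) to "frame.raw".
std::ofstream out("frame.raw", std::ios::binary);
out.write(reinterpret_cast<const char*>(frame.data),
          frame.total() * frame.elemSize());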