I use OpenCV to read several videos, but the warning messages bother me.
My program just reads frames from a video and calculates the MD5 of every frame.
string VIDEO::getEndingHash() {
    int idx = 0;
    cv::Mat frame;
    // Scan backwards from the end until a readable frame is found
    while (1) {
        Cap_.set(CV_CAP_PROP_POS_FRAMES, _FrameCount - idx);
        Cap_ >> frame;
        if (frame.empty())
            idx++;
        else
            break;
    }
    Cap_.set(CV_CAP_PROP_POS_FRAMES, 0);
    return MD5::MatMD5(frame);
}
How to hide/disable ffmpeg errors when using OpenCV (C++)?
I'm trying to adapt the code from this page: https://www.pyimagesearch.com/2018/07/16/opencv-saliency-detection/, but that code is written in Python and I'm trying to do it in C++. My code compiles, but when I run it all I see is a white screen, with no saliency detection happening. What's wrong?
cap.open(pathToVideo);
int frame_width = cap.get(CAP_PROP_FRAME_WIDTH);
int frame_height = cap.get(CAP_PROP_FRAME_HEIGHT);
while (true) {
    Mat frame;
    Mat salientFrame;
    cap >> frame;
    if (frame.empty()) {
        break;
    }
    Ptr<MotionSaliencyBinWangApr2014> MS = MotionSaliencyBinWangApr2014::create();
    cvtColor(frame, frame, COLOR_BGR2GRAY);
    MS->setImagesize(frame.cols, frame.rows);
    MS->init();
    MS->computeSaliency(frame, salientFrame);
    salientFrame.convertTo(salientFrame, CV_8U, 255);
    imshow("Motion Saliency", salientFrame);
    char c = (char)waitKey(25);
    if (c == 27)
        break;
}
cap.release();
The line

    Ptr<MotionSaliencyBinWangApr2014> MS = MotionSaliencyBinWangApr2014::create();

should be called before the loop, together with setImagesize() and init().
The reason is that the method processes a video, not a single image: the detector accumulates a model across frames, and re-creating and re-initializing it on every iteration throws that state away, which is why you only ever see the white initial output.
I would like to overlay an image on a video, and I'm wondering if it's possible in OpenCV without multithreading.
I want to avoid threads because my project runs on an RPI 0W (that's why I don't want multithreading).
I can't find anything about it on the internet. I have some basic C++ code. I'm new to OpenCV.
int main() {
    VideoCapture cap(0);
    if (!cap.isOpened())
    {
        cout << "error" << endl;
        return -1;
    }
    Mat edges;
    namedWindow("edges", 1);
    Mat img = imread("logo.png");
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        imshow("edges", WINDOW_AUTOSIZE);
        imshow("edges", img);
        imshow("edges", frame);
        if (waitKey(30) >= 0) break;
    }
}
In OpenCV, showing two things in the same window overwrites the previous one, which I think is what's happening in your case.
You can use OpenCV's addWeighted() function or bitwise operations to combine the logo and the frame into a single image before displaying it.
OpenCV has good documentation on this. You can find it here
I want to capture a video stream from an IP camera in OpenCV, but OpenCV can't create a VideoCapture from the URL. I also have an EmguCV project where I can capture video using this same URL. Code:
const std::string url = "rtsp://admin:12345#192.168.6.206:554/RVi/1/1";
VideoCapture cap(url);
if (!cap.isOpened())
    return -1;
namedWindow("frame", 1);
while (true)
{
    Mat frame;
    cap >> frame;
    imshow("frame", frame);
    if (waitKey(30) >= 0) break;
}
return 0;
Just for a test I installed OpenCV 2.9.11, and even there everything works.
What am I doing wrong?
What you need is to provide a file extension instead of a bare URL. You can append it to your URL like this:

    std::string url = "rtsp://admin:12345#192.168.6.206:554/RVi/1/1/x.mjpeg";
I have been trying to get OpenCV to read an image from my computer's webcam. The code below successfully opens the webcam (the green light turns on), but attempts to grab, and hence read, a frame fail. I am at a loss here. Can anyone help?
Many Thanks,
Hillary
P.S. I am running Mac OS X 10.9 on a MacBook Pro. And my opencv version is 2.4.6.1
And here is the code:
#include "opencv.hpp"

using namespace cv;

int main(int, char**) {
    VideoCapture cap = VideoCapture(0);
    if (!cap.isOpened()) {
        printf("failed to open camera\n");
        return -1;
    }
    namedWindow("edges", 1);
    for (;;) {
        if (waitKey(50) >= 0) break;
        if (!cap.grab()) {
            printf("failed to grab from camera\n");
        }
    }
    return 0;
}
You forgot to read the grabbed frames in your loop and show them. Note that grab() only advances to the next frame; retrieve() decodes it (operator>> does both at once, so don't mix it with grab(), or you will skip every other frame). There:

    for (;;) {
        if (waitKey(50) >= 0) break;
        Mat frame;
        if (!cap.grab()) {
            printf("failed to grab from camera\n");
            break;
        }
        if (!cap.retrieve(frame) || frame.empty()) {
            printf("failed to retrieve frame\n");
            break;
        }
        imshow("edges", frame);
    }
I have a grabber which can get the images and show them on the screen with the following code
while ((lastPicNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr + 1, 0, 10, _memoryAllc)) < 200) {
    iPtr = (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc);
    ::DrawBuffer(nId, iPtr, lastPicNr, "testing");
}
but I want to use the pointer to the image data and display the images with OpenCV, because I need to do processing on the pixels. My camera is a CCD mono camera and the pixel depth is 8 bits. I am new to OpenCV. Is there any option in OpenCV that can take the return value of (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc) and display it on the screen, or get the data from the iPtr pointer and let me use the image data?
Creating an IplImage from unsigned char* raw_data takes two important calls: cvCreateImageHeader() and cvSetData():
    // 1 channel for mono camera, and for RGB it would be 3
    int channels = 1;
    IplImage* cv_image = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
    if (!cv_image)
    {
        // print error, failed to allocate image!
    }
    cvSetData(cv_image, raw_data, cv_image->widthStep);

    cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
    cvShowImage("win1", cv_image);
    cvWaitKey(10);

    // release resources
    cvReleaseImageHeader(&cv_image);
    cvDestroyWindow("win1");
I haven't tested the code, but it gives you the roadmap for what you are looking for.
If you are using C++ and your camera is supported, I don't understand why you are not doing it the simple way, like this:
    cv::VideoCapture capture(0);
    if (!capture.isOpened()) {
        // print error
        return -1;
    }
    cv::namedWindow("viewer");
    cv::Mat frame;
    while (true)
    {
        capture >> frame;
        // ... processing here
        cv::imshow("viewer", frame);
        int c = cv::waitKey(10);
        if ((char)c == 'c') { break; } // press c to quit
    }
I would recommend starting to read the docs and tutorials which you can find here.