System information
OpenCV => 3.3.0
Operating System / Platform => Ubuntu 16.04, x86_64
Compiler => gcc version 5.4.1 20160904
Cuda => 8.0
Nvidia card => GTX 1080 Ti
ffmpeg details
libavutil 55. 74.100 / 55. 74.100
libavcodec 57.103.100 / 57.103.100
libavformat 57. 77.100 / 57. 77.100
libavdevice 57. 7.101 / 57. 7.101
libavfilter 6.100.100 / 6.100.100
libswscale 4. 7.103 / 4. 7.103
libswresample 2. 8.100 / 2. 8.100
Detailed description
I am trying to play an RTSP stream using cudacodec::VideoReader.
RTSP stream details (from VLC)
The stream plays fine in VLC and in cv::VideoCapture, but when I try to play it with cudacodec::VideoReader I get an error saying:
OpenCV Error: Gpu API call (CUDA_ERROR_FILE_NOT_FOUND [Code = 301]) in CuvidVideoSource, file /home/deep/Development/libraries/opencv/opencv/modules/cudacodec/src/cuvid_video_source.cpp, line 66
OpenCV Error: Assertion failed (init_MediaStream_FFMPEG()) in FFmpegVideoSource, file /home/deep/Development/libraries/opencv/opencv/modules/cudacodec/src/ffmpeg_video_source.cpp, line 101
Steps to reproduce
#include <iostream>

#include "opencv2/opencv_modules.hpp"

#if defined(HAVE_OPENCV_CUDACODEC)

#include <opencv2/core.hpp>
#include <opencv2/cudacodec.hpp>
#include <opencv2/highgui.hpp>

int main(int argc, const char* argv[])
{
    const std::string fname = "rtsp://admin:admin#192.168.1.13/media/video2";

    cv::namedWindow("GPU", cv::WINDOW_NORMAL);
    cv::cuda::GpuMat d_frame;
    cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);

    for (;;)
    {
        if (!d_reader->nextFrame(d_frame))
            break;

        cv::Mat frame;
        d_frame.download(frame);
        cv::imshow("GPU", frame);

        if (cv::waitKey(3) > 0)
            break;
    }

    return 0;
}
#else
int main()
{
    std::cout << "OpenCV was built without CUDA Video decoding support\n" << std::endl;
    return 0;
}
#endif
I tried debugging it with GDB and saw that in ffmpeg_video_source.cpp, bool init_MediaStream_FFMPEG() returns directly without ever checking the if condition, so initialized is still false.
GDB output
cv::cudacodec::detail::FFmpegVideoSource::FFmpegVideoSource
(this=0x402a20 <_start>, fname=...) at /home/deep/Development/libraries/opencv/opencv/modules/cudacodec/src/ffmpeg_video_source.cpp:98
98 cv::cudacodec::detail::FFmpegVideoSource::FFmpegVideoSource(const String& fname) :
(gdb) n
99 stream_(0)
(gdb) n
101 CV_Assert( init_MediaStream_FFMPEG() );
(gdb) s
(anonymous namespace)::init_MediaStream_FFMPEG () at /home/deep/Development/libraries/opencv/opencv/modules/cudacodec/src/ffmpeg_video_source.cpp:94
94 return initialized;
(gdb) display initialized
4: initialized = false
(gdb) s
95 }
UPDATE:
I have solved the problem. solution link
In the solution provided here, the problem was related to the pixel format detected by ffmpeg.
To check your RTSP stream's pixel format you can use ffprobe (for example, ffprobe -show_streams <your-rtsp-url> and look at the pix_fmt field).
Then, inside cap_ffmpeg_impl.hpp, add the case for your pixel format, like:
case AV_PIX_FMT_YUV420P:
case AV_PIX_FMT_YUVJ420P:
    *chroma_format = ::VideoChromaFormat_YUV420;
    break;
And then rebuild OpenCV.
Related
I am working on a task in which I have to access the live stream of an IP camera (Edimax IC-3110P) using OpenCV 3. My host system is Windows 10, and I use VirtualBox to run Ubuntu 16.04 (Xenial) 64-bit. I am using C++ and Code::Blocks (IDE).
Finally I was able to access the livestream through Microsoft Visual Studio (in Windows 10) with the following program.
#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <iostream>

int main(int, char**) {
    cv::VideoCapture vcap;
    cv::Mat image;

    // This works on a D-Link CDS-932L
    const std::string videoStreamAddress =
        "http://admin:1234#192.168.2.3/mjpg/video.mjpg";

    // open the video stream and make sure it's opened
    if (!vcap.open(videoStreamAddress)) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }

    for (;;) {
        if (!vcap.read(image)) {
            std::cout << "No frame" << std::endl;
            cv::waitKey();
        }
        cv::imshow("Output Window", image);
        if (cv::waitKey(1) >= 0) break;
    }
}
However, on Ubuntu the same program in Code::Blocks shows "Error opening video stream or file."
The camera officially doesn't support Linux, but I can access the livestream through a browser's address bar in Ubuntu, just not through my program.
Does anyone have any idea how to solve this?
Thank you.
I am compiling on Ubuntu 14.04 using OpenCV 3.1. When trying to open a video file it gives this error:
"Cannot open the video file"
I installed everything I could (ffmpeg, etc.) and haven't found a solution in similar questions on Stack Overflow.
What should I do?
cv::VideoCapture cap(argv[1]);
Where argv[1] is the file name in the same directory as the executable.
If your constructor is failing, you may want to use the .open() method instead. So, if you want to open a file called "myVideo.mp4" that lives in your project folder, you would do the following:
cv::VideoCapture cap;
cap.open("myVideo.mp4");
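You can also verify that open() actually succeeded before reading frames; a minimal check (my addition, not part of the original answer) would be:
// right after cap.open("myVideo.mp4"):
if (!cap.isOpened()) {
    // the file was not found, or no backend/codec in your OpenCV build could decode it
    std::cout << "Failed to open myVideo.mp4" << std::endl;
    return -1;
}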
For more detailed information about this method, check the documentation link.
Also, the book Learning OpenCV 3, from O'Reilly Media, gives a good example on page 26. Here is a Gist that I made as an example:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap;
    cap.open("myVideo.mp4");

    cv::namedWindow("myVideo", cv::WINDOW_AUTOSIZE);
    cv::Mat frame;

    while (true) {
        cap >> frame;
        if (frame.empty()) {
            std::cout << "Could not load the video frames. \n";
            break;
        }

        cv::imshow("myVideo", frame);
        if (cv::waitKey(27) >= 0) {
            std::cout << "Escape pressed \n";
            break;
        }
    }
    return 0;
}
I'm on Ubuntu 14.04 and I'm trying to write a program that will stream my desktop, using the answer to this question (libvlc stream part of screen) as an example. However, I don't have another computer readily available to check that the stream is working, so how can I view that stream on my own computer?
libvlc_vlm_add_broadcast(inst, "mybroad", "screen://",
"#transcode{vcodec=h264,vb=800,scale=1,acodec=mpga,ab=128,channels=2,samplerate=44100}:http{mux=ts,dst=:8080/stream}",
5, params, 1, 0)
My program throws no errors, and writes this
[0x7f0118000e18] x264 encoder: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[0x7f0118000e18] x264 encoder: profile High, level 3.0
[0x7f0118000e18] x264 encoder: final ratefactor: 25.54
[0x7f0118000e18] x264 encoder: using SAR=1/1
[0x7f0118000e18] x264 encoder: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[0x7f0118000e18] x264 encoder: profile High, level 2.2
So to me everything seems OK. However, I don't know how to view that stream from my computer: if I open VLC and try to open a network stream using http://#:7777, I get the "invalid host" error in its log. This is probably a silly mistake or error on my part, but any help would be greatly appreciated!
If anyone needs it, this is my entire code (I'm using Qt 4.8.6):
#include <QCoreApplication>
#include <iostream>
#include <vlc/vlc.h>
#include <X11/Xlib.h>
// #include <QDebug>

using namespace std;

bool ended;
void playerEnded(const libvlc_event_t* event, void *ptr);

libvlc_media_list_t * subitems;
libvlc_instance_t * inst;
libvlc_media_player_t *mp;
libvlc_media_t *media;
libvlc_media_t * stream;

int main(int argc, char *argv[])
{
    XInitThreads();
    QCoreApplication::setAttribute(Qt::AA_X11InitThreads);
    ended = false;
    QCoreApplication a(argc, argv);

    // the array with parameters
    const char* params[] = {"screen-top=0",
                            "screen-left=0",
                            "screen-width=640",
                            "screen-height=480",
                            "screen-fps=10"};

    /* Load the VLC engine */
    inst = libvlc_new(0, NULL);
    if (!inst)
        std::cout << "Can't load video player plugins" << std::endl;

    cout << "add broadcast: "
         << libvlc_vlm_add_broadcast(inst, "mybroad",
                                     "screen://",
                                     "#transcode{vcodec=h264,vb=800,scale=1,acodec=mpga,ab=128,channels=2,samplerate=44100}:http{mux=ts,dst=:8080/stream}",
                                     5, params, // <= 5 == sizeof(params) == count of parameters
                                     1, 0) << '\n';
    cout << "broadcast start: " << libvlc_vlm_play_media(inst, "mybroad") << '\n';

    media = libvlc_media_new_location(inst, "http://#:8080/stream");

    // Create a media player playing environment
    mp = libvlc_media_player_new(inst);
    libvlc_media_player_play(mp);

    cout << "szatan!!!" << endl;
    int e;
    cin >> e;

    /* Stop playing */
    libvlc_media_player_stop(mp);

    /* Free the media_player */
    libvlc_media_player_release(mp);
    libvlc_release(inst);

    return a.exec();
}
So, I have found the answer. Stack Overflow won't let me post an answer because I'm new here, so it's in the comments! I should have used my own IP address when creating the media: media = libvlc_media_new_location(inst, "http://192.168.1.56:8080"); works great!
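Folding that comment back into the program above, the relevant lines would look roughly like this (my sketch, not the asker's exact code; 192.168.1.56 is just the asker's LAN address, and the broadcast above publishes its output at :8080/stream):
// sketch of the fix from the comment (addresses/paths taken from the question and comment)
media = libvlc_media_new_location(inst, "http://192.168.1.56:8080/stream");
libvlc_media_player_set_media(mp, media);  // make sure the player actually plays this media
libvlc_media_player_play(mp);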
A bit of context: this program was originally built to work with USB cameras, but because of the setup between where the cameras need to be and where the computer is, it makes more sense to switch to cameras run over a network. Now I'm trying to convert the program to accomplish this, but my efforts thus far have met with poor results. I've also asked this same question over on the OpenCV forums. Help me spy on my neighbors! (This is with their permission, of course!) :D
I'm using:
OpenCV v2.4.6.0
C++
D-Link Cloud Camera 7100 (Installer is DCS-7010L, according to the instructions.)
I am trying to access the DLink camera's video feed through OpenCV.
I can access the camera through its IP address with a browser without any issues. Unfortunately, my program is less cooperative. When attempting to access the camera, the program gives the OpenCV-generated error:
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:529)
This error occurs with just about everything I try that doesn't somehow generate more problems.
For reference - the code in OpenCV's cap_ffmpeg_impl.hpp around line 529 is as follows:
522 bool CvCapture_FFMPEG::open( const char* _filename )
523 {
524 unsigned i;
525 bool valid = false;
526
527 close();
528
529 #if LIBAVFORMAT_BUILD >= CALC_FFMPEG_VERSION(52, 111, 0)
530 int err = avformat_open_input(&ic, _filename, NULL, NULL);
531 #else
532 int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
533 #endif
...
616 }
...which I have no idea how to interpret. It seems to be checking the ffmpeg version, but I've already installed the latest ffmpeg on that computer, so that shouldn't be the issue.
This is the edited down version I tried to use as per Sebastian Schmitz's recommendation:
1 #include <fstream> // File input/output
2 #include <iostream> // cout / cin / etc
3 #include <windows.h> // Windows API stuff
4 #include <stdio.h> // More input/output stuff
5 #include <string> // "Strings" of characters strung together to form words and stuff
6 #include <cstring> // "Strings" of characters strung together to form words and stuff
7 #include <streambuf> // For buffering load files
8 #include <array> // Functions for working with arrays
9 #include <opencv2/imgproc/imgproc.hpp> // Image Processor
10 #include <opencv2/core/core.hpp> // Basic OpenCV structures (cv::Mat, Scalar)
11 #include <opencv2/highgui/highgui.hpp> // OpenCV window I/O
12 #include "opencv2/calib3d/calib3d.hpp"
13 #include "opencv2/features2d/features2d.hpp"
14 #include "opencv2/opencv.hpp"
15 #include "resource.h" // Included for linking the .rc file
16 #include <conio.h> // For sleep()
17 #include <chrono> // To get start-time of program.
18 #include <algorithm> // For looking at whole sets.
19
20 #ifdef __BORLANDC__
21 #pragma argsused
22 #endif
23
24 using namespace std; // Standard operations. Needed for most basic functions.
25 using namespace std::chrono; // Chrono operations. Needed getting starting time of program.
26 using namespace cv; // OpenCV operations. Needed for most OpenCV functions.
27
28 string videoFeedAddress = "";
29 VideoCapture videoFeedIP = NULL;
30 Mat clickPointStorage; //Artifact from original program.
31
32 void displayCameraViewTest()
33 {
34 VideoCapture cv_cap_IP;
35 Mat color_img_IP;
36 int capture;
37 IplImage* color_img;
38 cv_cap_IP.open(videoFeedAddress);
39 Sleep(100);
40 if(!cv_cap_IP.isOpened())
41 {
42 cout << "Video Error: Video input will not work.\n";
43 cvDestroyWindow("Camera View");
44 return;
45 }
46 clickPointStorage.create(color_img_IP.rows, color_img_IP.cols, CV_8UC3);
47 clickPointStorage.setTo(Scalar(0, 0, 0));
48 cvNamedWindow("Camera View", 0); // create window
49 IplImage* IplClickPointStorage = new IplImage(clickPointStorage);
50 IplImage* Ipl_IP_Img;
51
52 for(;;)
53 {
54 cv_cap_IP.read(color_img_IP);
55 IplClickPointStorage = new IplImage(clickPointStorage);
56 Ipl_IP_Img = new IplImage(color_img_IP);
57 cvAdd(Ipl_IP_Img, IplClickPointStorage, color_img);
58 cvShowImage("Camera View", color_img); // show frame
59 capture = cvWaitKey(10); // wait 10 ms or for key stroke
60 if(capture == 27 || capture == 13 || capture == 32){break;} // if ESC, Return, or space; close window.
61 }
62 cv_cap_IP.release();
63 delete Ipl_IP_Img;
64 delete IplClickPointStorage;
65 cvDestroyWindow("Camera View");
66 return;
67 }
68
69 int main()
70 {
71 while(1)
72 {
73 cout << "Please Enter Video-Feed Address: ";
74 cin >> videoFeedAddress;
75 if(videoFeedAddress == "exit"){return 0;}
76 cout << "\nvideoFeedAddress: " << videoFeedAddress << endl;
77 displayCameraViewTest();
78 if(cvWaitKey(10) == 27){return 0;}
79 }
80 return 0;
81 }
Using added cout statements I was able to narrow it down to line 38: cv_cap_IP.open(videoFeedAddress);
No value I enter for the videoFeedAddress variable seems to change the result. I found THIS site that lists a number of possible addresses to connect to it. Since there is no 7100 anywhere in the list, and considering that the installer is labeled "DCS-7010L", I used the addresses listed next to the DCS-7010L entries. When trying to access the camera, most of them can be reached through the browser, confirming that they reach the camera, but they don't seem to affect the outcome when I use them in the videoFeedAddress variable.
I've tried many of them both with and without username:password, the port number (554), and variations of ?.mjpg (the format) at the end.
I searched around and came across a number of different "possible" answers, but none of them seem to work for me. Some of them did give me the idea of including the username:password etc. above, but it doesn't seem to be making a difference. Of course, the number of possible combinations is rather large, so I certainly have not tried all of them (more direction here would be appreciated). Here are some of the links I found:
This is one of the first configurations my code was in. No dice.
This one is talking about files - not cameras. It also mentions codecs - but I wouldn't be able to watch it in a web browser if that were the problem, right? (Correct me if I'm wrong here...)
This one has the wrong error code/points to the wrong line of code!
This one mentions compiling OpenCV with ffmpeg support - but I believe 2.4.6.0 already comes with that all set and ready! Otherwise it's not that different from what I've already tried.
Now THIS one appears to be very similar to what I have, but the only proposed solution doesn't really help as I had already located a list of connections. I do not believe this is a duplicate, because as per THIS meta discussion I had a lot more information and so didn't feel comfortable taking over someone else's question - especially if I end up needing to add even more information later.
Thank you for reading this far. I realize that I am asking a somewhat specific question - although I would appreciate any advice you can think of regarding OpenCV & network cameras or even related topics.
TLDR: Network Camera and OpenCV are not cooperating. I'm unsure if it's the address I'm using to direct the program to the camera or the command I'm using, but no adjustment I make seems to improve the result beyond what I've already done! Now my neighbors will go unwatched!
There are a number of ways to get the video; ffmpeg is not the only one, although it is the most convenient. To diagnose whether ffmpeg is capable of reading the stream, use the standalone ffmpeg/ffplay to try to open the URL. If it can open the stream directly, the problem may be something minor like URL formatting, e.g. double slashes (rtsp://IPADDRESS:554/live1.sdp instead of rtsp://IPADDRESS:554//live1.sdp). If it cannot open it directly, it may need some extra command-line switches to make it work, and then you would need to modify OpenCV's ffmpeg implementation at line 529 to pass those options to avformat_open_input. This may require quite a bit of tweaking before you get a working program.
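For illustration, the tweak described above could look roughly like this around line 529 of cap_ffmpeg_impl.hpp (a sketch of the idea, not the stock OpenCV code; "rtsp_transport=tcp" is only an example of the kind of switch ffplay might have needed):
// pass an options dictionary to ffmpeg instead of NULL
AVDictionary* opts = NULL;
av_dict_set(&opts, "rtsp_transport", "tcp", 0);
int err = avformat_open_input(&ic, _filename, NULL, &opts);
av_dict_free(&opts);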
You can also check whether the camera provides an HTTP MJPEG stream by consulting its manual. I do not have the camera you are using, so I cannot be of much help on this.
Alternatively, I have two suggestions below which might get you up and running relatively quickly, since you mentioned that VLC is working.
method 1
I assume that you can at least open an MJPEG URL with your existing OpenCV/ffmpeg combination. Since VLC is working, just use VLC to transcode the video into MJPEG, like:
vlc.exe --ignore-config -I dummy rtsp://admin:admin#10.10.204.111 --sout=#transcode"{vcodec=MJPG,vb=5000,scale=1,acodec=none}:std{access=http,mux=raw,dst=127.0.0.1:9080/frame.mjpg}"
After that, use http://127.0.0.1:9080/frame.mjpg to grab the frames with OpenCV's VideoCapture, as sketched below. This just requires a transcoder program that can convert the incoming stream into MJPEG.
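A minimal grab loop against that local MJPEG endpoint could look like this (my sketch, assuming the vlc command above is already running):
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // URL produced by the vlc transcode command above
    cv::VideoCapture cap("http://127.0.0.1:9080/frame.mjpg");
    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::imshow("frame", frame);
        if (cv::waitKey(1) == 27) break;  // ESC to quit
    }
    return 0;
}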
method 2
You can also use the VLC API directly from your program. The following piece of code uses VLC to grab the frames. Relevant info for compilation:
C:\Program Files (x86)\VideoLAN\VLC\sdk\include
C:\Program Files (x86)\VideoLAN\VLC\sdk\lib
libvlc.lib,libvlccore.lib
code
#include "opencv2/highgui/highgui.hpp"
#include <windows.h>
#include <vlc/vlc.h>
using namespace cv;
struct ctx
{
Mat* image;
HANDLE mutex;
uchar* pixels;
};
bool isRunning=true;
Size getsize(const char* path)
{
libvlc_instance_t *vlcInstance;
libvlc_media_player_t *mp;
libvlc_media_t *media;
const char * const vlc_args[] = {
"-R",
"-I", "dummy",
"--ignore-config",
"--quiet",
};
vlcInstance = libvlc_new(sizeof(vlc_args) / sizeof(vlc_args[0]), vlc_args);
media = libvlc_media_new_location(vlcInstance, path);
mp = libvlc_media_player_new_from_media(media);
libvlc_media_release(media);
libvlc_video_set_callbacks(mp, NULL, NULL, NULL, NULL);
libvlc_video_set_format(mp, "RV24",100,100, 100 * 24 / 8); // pitch = width * BitsPerPixel / 8
libvlc_media_player_play(mp);
Sleep(2000);//wait a while so that something get rendered so that size info is available
unsigned int width=640,height=480;
libvlc_video_get_size(mp,0,&width,&height);
if(width==0 || height ==0)
{
width=640;
height=480;
}
libvlc_media_player_stop(mp);
libvlc_release(vlcInstance);
libvlc_media_player_release(mp);
return Size(width,height);
}
void *lock(void *data, void**p_pixels)
{
struct ctx *ctx = (struct ctx*)data;
WaitForSingleObject(ctx->mutex, INFINITE);
*p_pixels = ctx->pixels;
return NULL;
}
void display(void *data, void *id){
(void) data;
assert(id == NULL);
}
void unlock(void *data, void *id, void *const *p_pixels)
{
struct ctx *ctx = (struct ctx*)data;
Mat frame = *ctx->image;
if(frame.data)
{
imshow("frame",frame);
if(waitKey(1)==27)
{
isRunning=false;
//exit(0);
}
}
ReleaseMutex(ctx->mutex);
}
int main( )
{
string url="rtsp://admin:admin#10.10.204.111";
//vlc sdk does not know the video size until it is rendered, so need to play it a bit so that size is known
Size sz = getsize(url.c_str());
// VLC pointers
libvlc_instance_t *vlcInstance;
libvlc_media_player_t *mp;
libvlc_media_t *media;
const char * const vlc_args[] = {
"-R",
"-I", "dummy",
"--ignore-config",
"--quiet",
};
vlcInstance = libvlc_new(sizeof(vlc_args) / sizeof(vlc_args[0]), vlc_args);
media = libvlc_media_new_location(vlcInstance, url.c_str());
mp = libvlc_media_player_new_from_media(media);
libvlc_media_release(media);
struct ctx* context = ( struct ctx* )malloc( sizeof( *context ) );
context->mutex = CreateMutex(NULL, FALSE, NULL);
context->image = new Mat(sz.height, sz.width, CV_8UC3);
context->pixels = (unsigned char *)context->image->data;
libvlc_video_set_callbacks(mp, lock, unlock, display, context);
libvlc_video_set_format(mp, "RV24", sz.width, sz.height, sz.width * 24 / 8); // pitch = width * BitsPerPixel / 8
libvlc_media_player_play(mp);
while(isRunning)
{
Sleep(1);
}
libvlc_media_player_stop(mp);
libvlc_release(vlcInstance);
libvlc_media_player_release(mp);
free(context);
return 0;
}
I have a 33 second video that I'm trying to process with OpenCV. My goal is to determine the instant in time (relative to the start of the video) that each frame corresponds to. I'm doing this in order to be able to compare frames from videos of the same scene that have been recorded at different frame rates.
What's working:
The FPS is correctly reported as 59.75. This is consistent with what ffprobe reports, so I'm happy to believe that's correct.
The problems I'm having are:
CAP_PROP_POS_MSEC returns incorrect values. By the end of the video, it's up to 557924 ms (over 9 min). For a 33 s video, this can't be right.
CAP_PROP_FRAME_COUNT is also incorrect. It's reported as 33371, which at 59.75 fps would give over 9 minutes' worth of footage (33371 / 59.75 ≈ 558 s ≈ 9.3 min, matching the 557924 ms above). Consistent with the above error, but still incorrect.
CAP_PROP_POS_FRAMES is similarly incorrect.
The video can be found here (approx. 10MB).
Any ideas on what could be wrong?
ffprobe output:
FFprobe version SVN-r20090707, Copyright (c) 2007-2009 Stefano Sabatini
libavutil 49.15. 0 / 49.15. 0
libavcodec 52.20. 0 / 52.20. 1
libavformat 52.31. 0 / 52.31. 0
built on Jan 20 2010 00:13:01, gcc: 4.4.3 20100116 (prerelease)
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/misha/Dropbox/Public/sequence.mp4':
Duration: 00:00:33.37, start: 0.000000, bitrate: 2760 kb/s
Stream #0.0(und): Video: h264, yuv420p, 1920x1080, 59.75 tbr, 1k tbn, 2k tbc
Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16
Full code:
#include <iostream>
#include <assert.h>
#include <cv.h>
#include <highgui.h>
#include <cmath>
#include <iostream>
#include <string.h>
#include <stdio.h>

extern "C"
{
#include "options.h"
}

using namespace std;

#define DEBUG 0

static void
print_usage(char *argv0)
{
    cerr << "usage: " << argv0 << " video.avi [options]" << endl;
}

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        print_usage(argv[0]);
        return -1;
    }

    int step = 30;
    struct Option options[] =
    {
        { "step", 1, &step },
        { NULL, 0, NULL }
    };

    int ret = parse_options(2, argc, argv, options);
    if (ret == 0)
    {
        print_usage(argv[0]);
        return -1;
    }

    CvCapture *capture = cvCaptureFromFile(argv[1]);
    int counter = 0;
    while (cvGrabFrame(capture))
    {
        ++counter;
        IplImage *frame = cvRetrieveFrame(capture);

        double millis  = cvGetCaptureProperty(capture, CV_CAP_PROP_POS_MSEC);
        double current = cvGetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES);
        double total   = cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_COUNT);
        printf("%d %d/%d %f\n", counter, (int)current, (int)total, millis);

        int min = (int)(millis/1000/60);
        millis -= min*60000;
        int sec = (int)(millis/1000);
        millis -= sec*1000;
        printf("%d %02d:%02d:%f\n", counter, min, sec, millis);
    }
    cvReleaseCapture(&capture);
    return 0;
}
Always mention the software version you are using: which OpenCV version are you on? It might be an old one, so update to the most recent if possible.
If your video file is part of some other larger video, this information might actually be correct, since:
CV_CAP_PROP_POS_MSEC - film current position in milliseconds or video capture timestamp
OpenCV might simply be reading all this stuff from the file header, which in that case would obviously be wrong. This could happen if someone used an ax (or other medieval tool) to strip this piece out of the original video.
You should try with videos that you made and you know they haven't been tampered with.
Worst case scenario, you will have to implement these features yourself. No biggie.
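For the timestamp in particular, a do-it-yourself version can be as simple as this (my sketch, assuming a constant frame rate and reusing the counter variable from your loop):
// compute the position yourself from the grab counter and the reported FPS
double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
double millis = counter * 1000.0 / fps;  // position of the current frame in milliseconds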
EDIT:
@misha I didn't notice at first that you are using:
CvCapture *capture = cvCaptureFromFile(argv[1]);
Replace it with cvCaptureFromAVI() if you can, and ALWAYS check the return value of OpenCV calls:
CvCapture *capture = cvCaptureFromAVI(argv[1]);
if (!capture)
{
    printf("!!! cvCaptureFromAVI failed (file not found?)\n");
    return -1;
}
A few days ago I shared some code that uses OpenCV to read a video file and save the frames as JPG images on disk. It also reports the current FPS using the traditional cvGetCaptureProperty(capture, CV_CAP_PROP_FPS), so it could be interesting for you to take a look at it. Go check it out.
EDIT:
You should also check this thread about frame counting with OpenCV.
By the way, I just updated some libraries on my Ubuntu system and recompiled OpenCV-2.1.0.tar.bz2 (using cmake). I changed my source code (which uses cvCaptureFromAVI()) to print the same debug output as yours for each frame. It seems to work:
* Filename: sequence.mp4
* FPS: 59
...
...
...
17 00:00:567.000000
18 601/33371 601.000000
18 00:00:601.000000
19 634/33371 634.000000
19 00:00:634.000000
20 668/33371 668.000000
20 00:00:668.000000
21 701/33371 701.000000
21 00:00:701.000000
22 734/33371 734.000000
22 00:00:734.000000
23 768/33371 768.000000
23 00:00:768.000000
24 801/33371 801.000000
24 00:00:801.000000
25 835/33371 835.000000
25 00:00:835.000000
26 868/33371 868.000000
26 00:00:868.000000
27 901/33371 901.000000
27 00:00:901.000000
28 935/33371 935.000000
...