OpenCV: change resolution of VideoCapture while it is capturing - C++

I am using OpenCV 3.1.0 on Windows 10 64-bit. I would like to be able to change the resolution of the webcam while it is capturing. Setting the resolution right after the camera is opened is easy, but I can't set it once the webcam is capturing.
Here is my code:
cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_FRAME_WIDTH, 0x7FFFFFFF);  // working
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 0x7FFFFFFF); // working
while (true) {
    cv::Mat frame;
    cap >> frame;
    if (!frame.data) continue;
    cv::imshow("test", frame);
    if (cv::waitKey(1) >= 0) break;
    int newHeight = 500 + rand() % 4 * 100;
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, newHeight); // not working
}

My best guess is that the values for CAP_PROP_FRAME_HEIGHT you are attempting are not supported by the webcam. If you hook your camera up to a Linux box, you can use v4l2-ctl -d 0 --list-formats-ext to list the supported video formats. Here is an excerpt of the output for a Microsoft LifeCam Cinema:
Index       : 1
Type        : Video Capture
Pixel Format: 'MJPG' (compressed)
Name        : Motion-JPEG
    Size: Discrete 640x480
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.050s (20.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.133s (7.500 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.050s (20.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.133s (7.500 fps)
    ...
I have not checked recently whether there is something similar to v4l2-ctl on Windows. v4l2-ctl uses UVC to query the information from the camera, and UVC is typically supported by recent webcams.
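On Windows, a rough way to probe from OpenCV itself is to request a size and read back what the driver actually applied. A minimal sketch (the candidate list below is my assumption, not an exhaustive set):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // Hypothetical candidate sizes; the driver snaps each request to the
    // nearest mode the camera actually supports.
    const cv::Size candidates[] = { cv::Size(640, 480), cv::Size(1280, 720), cv::Size(1920, 1080) };
    for (const cv::Size& s : candidates)
    {
        cap.set(cv::CAP_PROP_FRAME_WIDTH, s.width);
        cap.set(cv::CAP_PROP_FRAME_HEIGHT, s.height);
        // Read back the values the driver accepted.
        std::cout << s.width << "x" << s.height << " -> "
                  << cap.get(cv::CAP_PROP_FRAME_WIDTH) << "x"
                  << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << std::endl;
    }
    return 0;
}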

The problem is that I was setting a random height, and the webcam only supports its preset resolutions. So it selects the best-matching preset resolution and displays that.

Related

Accessing Raspberry Pi camera using C++

I am trying to run OpenCV in C++ and capture the camera input.
The program looks like this:
#include <iostream>
#include <sstream>
#include <new>
#include <string>
#include <opencv2/opencv.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>

#define INPUT_WIDTH 3264
#define INPUT_HEIGHT 2464
#define DISPLAY_WIDTH 640
#define DISPLAY_HEIGHT 480
#define CAMERA_FRAMERATE 21/1
#define FLIP 2

void DisplayVersion()
{
    std::cout << "OpenCV version: " << cv::getVersionMajor() << "." << cv::getVersionMinor() << "." << cv::getVersionRevision() << std::endl;
}

int main(int argc, const char** argv)
{
    DisplayVersion();
    std::stringstream ss;
    ss << "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method=2 ! video/x-raw, width=480, height=680, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink";
    //ss << "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=" << INPUT_WIDTH <<
    //    ", height=" << INPUT_HEIGHT <<
    //    ", format=NV12, framerate=" << CAMERA_FRAMERATE <<
    //    " ! nvvidconv flip-method=" << FLIP <<
    //    " ! video/x-raw, width=" << DISPLAY_WIDTH <<
    //    ", height=" << DISPLAY_HEIGHT <<
    //    ", format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink";

    cv::VideoCapture video;
    video.open(ss.str());
    if (!video.isOpened())
    {
        std::cout << "Unable to get video from the camera!" << std::endl;
        return -1;
    }
    std::cout << "Got here!" << std::endl;

    cv::Mat frame;
    while (video.read(frame))
    {
        cv::imshow("Video feed", frame);
        if (cv::waitKey(25) >= 0)
        {
            break;
        }
    }
    std::cout << "Finished!" << std::endl;
    return 0;
}
When running this code I get the following output:
OpenCV version: 4.6.0
nvbuf_utils: Could not get EGL display connection
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:751 Failed to create CaptureSession
[ WARN:0#0.269] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1405) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Got here!
Finished!
If I instead pass the other, commented-out pipeline string to video.open(), I get this output:
OpenCV version: 4.6.0
nvbuf_utils: Could not get EGL display connection
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:751 Failed to create CaptureSession
I'm currently running this in headless mode on a Jetson Nano.
I also know that OpenCV and XLaunch work, because I can use MJPEG Streamer to stream my laptop's camera output to the Jetson Nano with video.open(http://laptop-ip:laptop-port/), and that works correctly (OpenCV is able to display a live video feed through XLaunch just fine).
I think this command is telling me my camera is successfully installed:
$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'RG10'
Name        : 10-bit Bayer RGRG/GBGB
    Size: Discrete 3264x2464
        Interval: Discrete 0.048s (21.000 fps)
    Size: Discrete 3264x1848
        Interval: Discrete 0.036s (28.000 fps)
    Size: Discrete 1920x1080
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 1640x1232
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.017s (60.000 fps)
Any help would be much appreciated.
The warning seems to be saying that you cannot use EGL, i.e. OpenGL, in headless mode (because there is no screen).
If you run in headless mode, would it not make more sense not to try to open a window to display the video?
cv::imshow("Video feed", frame);
if (cv::waitKey(25) >= 0)
{
    break;
}
Remove this code and instead use cv::imwrite to write frames to a file, or whatever else you want to do with the data (see the sketch below).
Or, if you use ssh, run ssh with the -X option to show the windows on your client computer instead. It could be slow, but if you really want to use cv::imshow it is an option.
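As a sketch of the first suggestion (the filename scheme is just an illustration; video is the VideoCapture from the question's code):

// Headless capture loop: write frames to disk instead of displaying them.
cv::Mat frame;
int i = 0;
while (video.read(frame) && i < 100) // stop after 100 frames in this sketch
{
    std::ostringstream name;
    name << "frame_" << i++ << ".png";
    cv::imwrite(name.str(), frame); // no window, so no EGL/OpenGL is needed
}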
Well, I fixed it by rebooting. I had already done a reboot earlier, and I now get some errors whenever I run the program (I did recompile the dlib library), but I do think that when you update the GStreamer library you need to reboot your machine to use it successfully.

Why does the “nvv4l2camerasrc” output green screen?

I use the code below, and I get a green screen.
When I run the command v4l2-ctl -d /dev/video0 --list-formats-ext, I get the following information:
ioctl: VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'UYVY'
Name        : UYVY 4:2:2
    Size: Discrete 1920x1080
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 3840x2160
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 640x514
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 2880x1860
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
Index       : 1
Type        : Video Capture
Pixel Format: 'NV16'
Name        : Y/CbCr 4:2:2
    Size: Discrete 1920x1080
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 3840x2160
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 640x514
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 2880x1860
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
Index       : 2
Type        : Video Capture
Pixel Format: 'UYVY'
Name        : UYVY 4:2:2
    Size: Discrete 1920x1080
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 3840x2160
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 640x514
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
    Size: Discrete 2880x1860
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.017s (60.000 fps)
#include <iostream>
#include <gst/gst.h>

using namespace std;

int main(int argc, char *argv[])
{
    GstElement *pipeline, *source, *converter, *sink;
    GstBus *bus;
    GstMessage *msg;
    GMainLoop *loop;
    GstStateChangeReturn ret;

    // initialize all elements
    gst_init(&argc, &argv);
    pipeline = gst_pipeline_new("pipeline");
    source = gst_element_factory_make("nvv4l2camerasrc", "source");
    converter = gst_element_factory_make("nvvidconv", "converter");
    sink = gst_element_factory_make("nv3dsink", "sink");

    // check for null objects
    if (!pipeline || !source || !converter || !sink)
    {
        cout << "not all elements created: pipeline[" << !pipeline << "]"
             << "source[" << !source << "]"
             << "converter[" << !converter << "]"
             << "sink[" << !sink << "]" << endl;
        return -1;
    }

    // set video source
    g_object_set(G_OBJECT(source), "device", argv[1], NULL);
    cout << "==>Set video source." << endl;
    g_object_set(G_OBJECT(sink), "sync", FALSE, NULL);
    cout << "==>Set video sink." << endl;

    gst_bin_add_many(GST_BIN(pipeline), source, converter, sink, NULL);

    std::cout << "**************" << std::endl;
    GstCaps *cap_source;
    cap_source = gst_caps_from_string("video/x-raw(memory:NVMM),format=UYVY,width=1920,height=1080,interlace-mode=progressive,framerate=30/1");
    std::cout << "------------" << std::endl;
    // cap_source = gst_caps_new_simple("video/x-raw", "format", G_TYPE_STRING, "(string)UYVY",
    //     "width", G_TYPE_INT, "(int)1920", "height", G_TYPE_INT, "(int)1080", "framerate",
    //     GST_TYPE_FRACTION, "(fraction)30/1", 1, NULL);
    // GstCapsFeatures *feature = gst_caps_features_new("memory:NVMM", NULL);
    // gst_caps_set_features(cap_source, 0, feature);
    if (!gst_element_link_filtered(source, converter, cap_source))
    {
        g_printerr("Fail to gst_element_link_filtered source -- converter\n");
        return -1;
    }
    gst_caps_unref(cap_source);

    std::cout << "**************" << std::endl;
    GstCaps *cap_sink;
    cap_sink = gst_caps_from_string("video/x-raw(memory:NVMM),format=NV12");
    std::cout << "------------" << std::endl;
    if (!gst_element_link_filtered(converter, sink, cap_sink))
    {
        g_printerr("Fail to gst_element_link_filtered converter -- sink\n");
        return -1;
    }
    gst_caps_unref(cap_sink);
    cout << "==>Link elements." << endl;

    // set the pipeline state to playing
    ret = gst_element_set_state(pipeline, GST_STATE_PLAYING);
    if (ret == GST_STATE_CHANGE_FAILURE)
    {
        cout << "Unable to set the pipeline to the playing state." << endl;
        gst_object_unref(pipeline);
        return -1;
    }
    cout << "==>Set video to play." << endl;

    // get pipeline's bus
    bus = gst_element_get_bus(pipeline);
    cout << "==>Setup bus." << endl;

    loop = g_main_loop_new(NULL, FALSE);
    cout << "==>Begin stream." << endl;
    g_main_loop_run(loop);
    g_main_loop_unref(loop);

    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
}
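For reference, the pipeline this program builds is roughly equivalent to the following gst-launch-1.0 command, which can help check whether the green screen comes from the pipeline itself rather than the C code (the device path is an assumption; the program takes it from argv[1]):

gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! \
    'video/x-raw(memory:NVMM), format=UYVY, width=1920, height=1080, interlace-mode=progressive, framerate=30/1' ! \
    nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nv3dsink sync=false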

Stereo camera OpenCV: read depth image correctly, 2 pixels encoded in 1

I have a stereo camera that gives an image in YUYV format with a resolution of 320 x 480, where each 16-bit pixel encodes two 8-bit pixels. I'm using OpenCV to get the image, but when I try to recover the real resolution I don't get good results. I guess I'm missing how to properly split the 16 bits in two.
Using this and this I'm able to reconstruct an image, but it is still not the real one.
Mat frame;
unsigned int width  = cap.get(CV_CAP_PROP_FRAME_WIDTH);
unsigned int height = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
m_pDepthImgBuf = (unsigned char*)calloc(width*height*2, sizeof(unsigned char));
...
cap >> frame; // get a new frame from camera
imshow("YUYV 320x480", frame);
memcpy((void*)m_pDepthImgBuf, (void*)frame.data, width*height*2 * sizeof(unsigned char));
cv::Mat depth(height, width*2, CV_8UC1, (void*)m_pDepthImgBuf);
camera properties:
SDL information:
    Video driver: x11
Device information:
    Device path: /dev/video2
Stream settings:
    Frame format: YUYV (MJPG is not supported by device)
    Frame size: 320x480
    Frame rate: 30 fps
v4l2-ctl -d /dev/video1 --list-formats
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUV 4:2:2 (YUYV)
In the accompanying images (not reproduced here) you can see, in green, the initial 320x480 image and, in grayscale, the depth that I'm trying to extract, together with the expected result.
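One way to approach the split (a sketch, not a confirmed fix; it assumes the capture backend honors CAP_PROP_CONVERT_RGB) is to ask OpenCV for the raw YUYV bytes and reinterpret them directly, avoiding the intermediate memcpy:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(2); // /dev/video2, as in the question
    // Ask the backend for the raw bytes instead of a BGR-converted image.
    // Whether this is honored depends on the backend and OpenCV version,
    // so check the returned frame's size and type before relying on it.
    cap.set(CV_CAP_PROP_CONVERT_RGB, false);

    cv::Mat frame;
    cap >> frame;
    if (frame.empty()) return -1;

    // Reinterpret the 16-bit-per-pixel data as an 8-bit image twice as
    // wide, mirroring the Mat construction in the question.
    cv::Mat depth(frame.rows, frame.cols * 2, CV_8UC1, frame.data);
    cv::imshow("depth", depth);
    cv::waitKey(0);
    return 0;
}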

Get all supported FPS values of a camera in Microsoft Media Foundation

I want to get a list of all FPS values that my webcam supports.
The MSDN article How to Set the Video Capture Frame Rate says that I can query the system for the maximum and minimum supported FPS of a particular camera.
It also says:
The device might support other frame rates within this range.
And in MF_MT_FRAME_RATE_RANGE_MIN it says:
The device is not guaranteed to support every increment within this range.
So it sounds like there is no way to get all of the supported FPS values of a camera in Media Foundation, just the max and min.
I know that on Linux the v4l2-ctl --list-formats-ext command prints many more of the supported FPS values than just the min and max.
Here are just a few examples from Linux using different cameras:
$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUV 4:2:2 (YUYV)
    Size: Discrete 160x120
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.036s (27.500 fps)
        Interval: Discrete 0.040s (25.000 fps)
        Interval: Discrete 0.044s (22.500 fps)
        Interval: Discrete 0.050s (20.000 fps)
        Interval: Discrete 0.057s (17.500 fps)
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.080s (12.500 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.133s (7.500 fps)
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 176x144
        Interval: Discrete 0.033s (30.000 fps)
    ...
and
$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUV 4:2:2 (YUYV)
    Size: Discrete 640x360
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.040s (25.000 fps)
        Interval: Discrete 0.050s (20.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 640x480
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.040s (25.000 fps)
        Interval: Discrete 0.050s (20.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 320x240
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.040s (25.000 fps)
        Interval: Discrete 0.050s (20.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 160x120
        Interval: Discrete 0.033s (30.000 fps)
        Interval: Discrete 0.040s (25.000 fps)
        Interval: Discrete 0.050s (20.000 fps)
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 960x544
        Interval: Discrete 0.067s (15.000 fps)
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.200s (5.000 fps)
    Size: Discrete 1280x720
        Interval: Discrete 0.100s (10.000 fps)
        Interval: Discrete 0.200s (5.000 fps)
and
$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUV 4:2:2 (YUYV)
    Size: Discrete 1280x720
        Interval: Discrete 0.111s (9.000 fps)
    Size: Discrete 160x120
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 320x240
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 1280x800
        Interval: Discrete 0.111s (9.000 fps)
    Size: Discrete 640x480
        Interval: Discrete 0.033s (30.000 fps)
Index       : 1
Type        : Video Capture
Pixel Format: 'MJPG' (compressed)
Name        : MJPEG
    Size: Discrete 1280x720
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 160x120
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 320x240
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 1280x800
        Interval: Discrete 0.033s (30.000 fps)
    Size: Discrete 640x480
        Interval: Discrete 0.033s (30.000 fps)
So, is there a way to get all of the supported FPS values of a camera in Microsoft Media Foundation, or is it really limited in this respect?
The frame rates and other attributes can be retrieved with code similar to the following (error checking omitted for brevity):
Microsoft::WRL::ComPtr<IMFSourceReader> reader = nullptr;
/* reader code omitted */

IMFMediaType* mediaType = nullptr;
GUID subtype{ 0 };
UINT32 frameRate = 0u;
UINT32 frameRateMin = 0u;
UINT32 frameRateMax = 0u;
UINT32 denominator = 0u;
DWORD index = 0u;
DWORD width = 0u;
DWORD height = 0u;
HRESULT hr = S_OK;

while (hr == S_OK)
{
    hr = reader->GetNativeMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, index, &mediaType);
    if (hr == MF_E_NO_MORE_TYPES)
        break;

    // Error checking omitted for brevity
    hr = mediaType->GetGUID(MF_MT_SUBTYPE, &subtype);
    hr = MFGetAttributeSize(mediaType, MF_MT_FRAME_SIZE, &width, &height);
    hr = MFGetAttributeRatio(mediaType, MF_MT_FRAME_RATE, &frameRate, &denominator);
    hr = MFGetAttributeRatio(mediaType, MF_MT_FRAME_RATE_RANGE_MIN, &frameRateMin, &denominator);
    hr = MFGetAttributeRatio(mediaType, MF_MT_FRAME_RATE_RANGE_MAX, &frameRateMax, &denominator);
    ++index;
}
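The listing omits the printf whose output is discussed below; a hypothetical version, dropped into the loop just before ++index, might be:

// Hypothetical per-type logging (translating the subtype GUID to a
// readable name is omitted here).
printf("Type %3lu: %lux%lu @ %u/%u fps (min %u, max %u)\n",
       index, width, height, frameRate, denominator, frameRateMin, frameRateMax);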
The frame rate is expressed as a ratio. The upper 32 bits of the attribute value contain the numerator and the lower 32 bits contain the denominator. For example, if the frame rate is 30 frames per second (fps), the ratio is 30/1. If the frame rate is 29.97 fps, the ratio is 30,000/1001.
Generally, denominator will be 1 (I have not seen it be anything else). And with the various webcams I have tested, frameRate, frameRateMin, and frameRateMax are the same number. The results will look nearly identical to what you listed above.
Edit:
For example, the following would be the console output of the code above (with the printf that was omitted from the listing), showing the native formats supported by a Logitech Webcam Pro 9000 (screenshot not reproduced here). This older webcam has 46 native formats, whereas newer webcams have many more (the C930e has 216; a screenshot of its first 81 native formats is likewise not reproduced).
Sometimes a webcam will report very high numbers, which generally means that frames are not throttled and are delivered as quickly as possible; the actual rate then depends on shutter speed, resolution, etc. (I cap this number at 99 for readability).
I think you are getting hung up on the following quote:
The device might support other frame rates within this range
However, that only applies if the min and max do not equal the frame rate, and I have not seen webcams where these numbers vary. Keep in mind that this API can be used with any capture device. A 4-lane PCIe capture card has the bandwidth to keep up with almost whatever you want, so its driver would be written accordingly (few formats, with a large variance between min and max).

OpenCV network (IP) camera frames per second slow after initial burst

EDIT: Upgrading to OpenCV 2.4.2 and FFMPEG 0.11.1 seems to have solved all the errors and connection problems, but it still hasn't solved the slow-down of the frame rate.
I am using the default OpenCV package in Ubuntu 12.04, which I believe is 2.3.1. I am connecting to a Foscam FI8910W, which streams MJPEG. I have seen people say that the best way is to use opencv+libjpeg+curl, since it is faster than the gstreamer solution. However, I can only occasionally (50% of the time) connect to the camera from OpenCV as built and get a video stream. The stream starts out at around 30 fps for about 1 s but then slows down to 5-10 fps. The project I am working on will require 6 cameras, preferably running at 15-30 fps (faster is better).
Here are my questions:
1. Is this a problem that is fixed in 2.4.2, and should I just upgrade?
2. If not, any ideas why I get a short burst and then it slows down?
3. Is the best solution still to use curl+libjpeg?
4. I see lots of people who say that solutions have been posted, but very few actual links to posts with solutions. Having all the actual solutions (both curl and gstreamer) referenced in one place would be very handy, as per http://opencv-users.1802565.n2.nabble.com/IP-camera-solution-td7345005.html.
Here is my code:
VideoCapture cap;
cap.open("http://10.10.1.10/videostream.asf?user=test&pwd=1234&resolution=32");
Mat frame;
cap >> frame;
wr.open("test.avi", CV_FOURCC('P','I','M','1'), 29.92, frame.size(), true);
if (!wr.isOpened())
{
    cout << "Video writer open failed" << endl;
    return (-1);
}

Mat dst = Mat::zeros(frame.rows + HEADER_HEIGHT, frame.cols, CV_8UC3);
Mat roi(dst, Rect(0, HEADER_HEIGHT-1, frame.cols, frame.rows));
Mat head(dst, Rect(0, 0, frame.cols, HEADER_HEIGHT));
Mat zhead = Mat::zeros(head.rows, head.cols, CV_8UC3);
namedWindow("test", 1);

time_t tnow;
tm *tS;
double t1 = (double)getTickCount();
double t2;
for (int i = 0; i > -1; i++) // infinite loop
{
    cap >> frame;
    if (!frame.data)
        break;
    tnow = time(0);
    tS = localtime(&tnow);
    frame.copyTo(roi);
    std::ostringstream L1, L2;
    L1 << tS->tm_year+1900 << " " << coutPrep << tS->tm_mon+1 << " ";
    L1 << coutPrep << tS->tm_mday << " ";
    L1 << coutPrep << tS->tm_hour;
    L1 << ":" << coutPrep << tS->tm_min << ":" << coutPrep << tS->tm_sec;
    actValueStr = L1.str();
    zhead.copyTo(head);
    putText(dst, actValueStr, Point(0, HEADER_HEIGHT/2), fontFace, fontScale, Scalar(0,255,0), fontThickness, 8);
    L2 << "Frame: " << i;
    t2 = (double)getTickCount();
    L2 << " " << (t2 - t1)/getTickFrequency()*1000. << " ms";
    t1 = (double)getTickCount();
    actValueStr = L2.str();
    putText(dst, actValueStr, Point(0, HEADER_HEIGHT), fontFace, fontScale, Scalar(0,255,0), fontThickness, 8);
    imshow("test", dst);
    wr << dst; // write frame to file
    cout << "Frame: " << i << endl;
    if (waitKey(30) >= 0)
        break;
}
Here are the errors listed when it runs correctly:
Opening 10.10.1.10
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
[asf @ 0x701de0] max_analyze_duration reached
[asf @ 0x701de0] Estimating duration from bitrate, this may be inaccurate
[asf @ 0x701de0] ignoring invalid packet_obj_size (21084 656 21720 21740)
[asf @ 0x701de0] freeing incomplete packet size 21720, new 21696
[asf @ 0x701de0] ff asf bad header 0 at:1029744
[asf @ 0x701de0] ff asf skip 678 (unknown stream)
[asf @ 0x701de0] ff asf bad header 45 at:1030589
[asf @ 0x701de0] packet_obj_size invalid
[asf @ 0x701de0] ff asf bad header 29 at:1049378
[asf @ 0x701de0] packet_obj_size invalid
[asf @ 0x701de0] freeing incomplete packet size 21820, new 21684
[asf @ 0x701de0] freeing incomplete packet size 21684, new 21836
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
[asf @ 0x701de0] Estimating duration from bitrate, this may be inaccurate
Successfully opened network camera
[swscaler @ 0x8cf400] No accelerated colorspace conversion found from yuv422p to bgr24.
Output #0, avi, to 'test.avi':
    Stream #0.0: Video: mpeg1video (hq), yuv420p, 640x480, q=2-31, 19660 kb/s, 90k tbn, 29.97 tbc
[swscaler @ 0x9d25c0] No accelerated colorspace conversion found from yuv422p to bgr24.
Frame: 0
[swscaler @ 0xa89f20] No accelerated colorspace conversion found from yuv422p to bgr24.
Frame: 1
[swscaler @ 0x7f7840] No accelerated colorspace conversion found from yuv422p to bgr24.
Frame: 2
[swscaler @ 0xb9e6c0] No accelerated colorspace conversion found from yuv422p to bgr24.
Frame: 3
Sometimes it hangs after the first "Estimating duration from bitrate" statement.
Have you tried removing the code that writes to disk? I've seen very similar performance issues with USB cameras when a disk buffer fills up. Great framerate at first, and then it drops dramatically.
If that is the issue, another option is to change your compression codec to something that compresses more significantly.
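For example, a sketch of that change against the question's wr.open call (whether a given fourcc is available depends on the local FFMPEG build):

// MPEG-4 (Xvid) generally compresses much harder than MPEG-1 ('PIM1'),
// easing the pressure on the disk buffer; 'XVID' availability depends
// on how OpenCV/FFMPEG was built.
wr.open("test.avi", CV_FOURCC('X','V','I','D'), 29.92, frame.size(), true);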
A fast initial FPS that changes to a slow FPS would suggest that the camera is increasing exposure time to compensate for a poorly lit subject. The camera is analyzing the first few frames and then adjusting the exposure time accordingly.
It seems that actual FPS is a combination of two things:
Hardware Limitations (defines the max FPS)
Required Exposure Time (defines the min FPS)
The hardware may have the bandwidth required to transfer X FPS, but poorly lit conditions may require an exposure time that slows down the actual FPS. For example, if each frame needs to be exposed for 0.1 seconds, the fastest possible FPS will be 10.
To test for this, measure FPS with the camera pointed at poorly lit subject and compare that to the FPS with the camera pointed at a well lit subject. Be sure to exaggerate the lighting conditions and give the camera a few seconds to detect the required exposure.
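A minimal sketch for making that comparison, assuming an already-opened VideoCapture named cap as in the question's code:

// Measure the average FPS over a fixed number of frames under the
// current lighting conditions.
const int N = 100;
Mat frame;
double t0 = (double)getTickCount();
int n = 0;
for (; n < N; n++)
{
    cap >> frame;
    if (!frame.data)
        break;
}
double elapsed = ((double)getTickCount() - t0) / getTickFrequency();
cout << "Average FPS: " << n / elapsed << endl;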