Getting and Setting Camera Settings - c++

I have been searching around and can't find an example of how to get and set a camera's capture settings, for example the capture resolution, fps, color balance, etc. I have only seen examples of how to change the settings when saving the captured video, but I want to be able to find all of the camera's capture modes and choose the one I want. For example, I am using the PS3 Eye webcam, and its test program lets you change the settings (320x240 at 15, 30, 60, 120 fps; 640x480 at 15, 30, 60, 75 fps). So is there a function in OpenCV for getting all of a camera's capture modes and choosing one? I remember OpenFrameworks had a function to change these settings, but I would like to know how to do it in OpenCV.
Here is the code for OpenFrameworks with OpenCV that does sort of what I want:
vidGrabber.setDeviceID( 4 );
vidGrabber.setDesiredFrameRate( 30 ); //I want this
vidGrabber.videoSettings();
vidGrabber.setVerbose(true);
vidGrabber.initGrabber(320,240); //And this

You can use cvSetCaptureProperty() with these flags:
CV_CAP_PROP_FRAME_WIDTH - width of frames in the video stream (only for cameras)
CV_CAP_PROP_FRAME_HEIGHT - height of frames in the video stream (only for cameras)
CV_CAP_PROP_FPS - frame rate (only for cameras)
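As a rough sketch of how this looks with the legacy C API named above (note that OpenCV does not enumerate the supported modes for you; you can only request values and read back what the backend actually applied):

#include <opencv/highgui.h>
#include <cstdio>

int main()
{
    // Open the first camera (index 0); the PS3 Eye may appear under a different index.
    CvCapture* capture = cvCaptureFromCAM(0);
    if (!capture) return -1;

    // Request one of the camera's advertised modes, e.g. 320x240 at 60 fps.
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH,  320);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 240);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FPS,          60);

    // Read the values back; the driver may silently keep its defaults.
    printf("width=%f height=%f fps=%f\n",
           cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH),
           cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT),
           cvGetCaptureProperty(capture, CV_CAP_PROP_FPS));

    IplImage* frame = cvQueryFrame(capture);  // grab one frame to confirm the mode works
    cvReleaseCapture(&capture);
    return frame ? 0 : -1;
}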

I built a DirectShow camera library which is able to query the camera's resolutions and properties.
https://github.com/kcwongjoe/directshow_camera
The achievable fps usually depends on your machine, the resolution, and the exposure time. To change the fps at a fixed resolution, you can modify the exposure time.
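For reference, a minimal sketch of the same idea through OpenCV's DirectShow backend, assuming a recent OpenCV build with CAP_DSHOW available; the exposure value and its units are driver-specific, so the number below is only illustrative, and you may first need to disable auto-exposure in the driver:

#include <opencv2/opencv.hpp>

int main()
{
    // Open the camera through the DirectShow backend on Windows.
    cv::VideoCapture cap(0, cv::CAP_DSHOW);
    if (!cap.isOpened()) return -1;

    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

    // A shorter exposure generally allows a higher frame rate at the same resolution.
    // The scale of this value is driver-dependent; -6 here is purely illustrative.
    cap.set(cv::CAP_PROP_EXPOSURE, -6);

    cv::Mat frame;
    cap >> frame;
    return frame.empty() ? -1 : 0;
}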

Related

OpenCV: Difference between cap.set(CAP_PROP_FRAME_WIDTH or CAP_PROP_FRAME_HEIGHT) and resize()

I am working with OpenCV (4.3.0) and trying to understand how I can change the resolution of the image.
To my understanding there are two ways I can do this:
Using the "cap.set()" function:
cv::VideoCapture cap(0);
cap.set(CAP_PROP_FRAME_WIDTH, 320);//Setting the width of the video
cap.set(CAP_PROP_FRAME_HEIGHT, 240);//Setting the height of the video
Using "resize()" function,
int up_width = 600;
int up_height = 400;
Mat resized_up;
resize(image, resized_up, Size(up_width, up_height), 0, 0, INTER_LINEAR);
I want to understand whether these two are the same or different. What are the exact differences between them?
cap here is the VideoCapture object, i.e. the camera capture itself. You set the required resolution before capturing from the camera, and only if the camera supports that resolution do you get valid frames at it. For example, a webcam might support a fixed set of resolutions such as 1280x720 and 640x480, and you can only set the camera to one of those.
resize() is an interpolation function (bilinear, bicubic, or another) which resizes (upscales or downscales) a frame to any desired size.
Here is the answer to one part of your question. CAP_PROP_FRAME_WIDTH and CAP_PROP_FRAME_HEIGHT are two of the capture properties. The documentation says:
Reading / writing properties involves many layers. Some unexpected result might happens along this chain. Effective behaviour depends from device hardware, driver and API Backend.
So if your camera's backend matches one of the backends OpenCV supports, you will be able to change the resolution of your camera (provided your camera supports more than one resolution). For example, a camera may support both 640x480 and 1920x1080. If the OpenCV backend supports this camera, you can switch between the resolution configurations in code:
cap.set(CAP_PROP_FRAME_WIDTH, 640);
cap.set(CAP_PROP_FRAME_HEIGHT, 480);
or
cap.set(CAP_PROP_FRAME_WIDTH, 1920);
cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
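Since set() can fail silently when the requested mode is unsupported, a reasonable sketch is to read the properties back (and check an actual frame) after setting them:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);

    // The backend reports the mode it actually applied, which may differ from the request.
    std::cout << "reported: "
              << cap.get(cv::CAP_PROP_FRAME_WIDTH) << "x"
              << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << std::endl;

    cv::Mat frame;
    cap >> frame;
    if (!frame.empty())
        std::cout << "delivered: " << frame.cols << "x" << frame.rows << std::endl;
    return 0;
}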
What about resize?
resize() is completely different from the concept described above. Video capture properties are tied to the hardware: if you use a camera at 640x480, the sensor delivers a frame of exactly that many pixels. resize(), on the other hand, works on the resulting image purely in software: it interpolates (resamples) the image to a new width and height. In other words, it is like looking at the same scene from very close (zoom in) or very far away (zoom out).
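To make the distinction concrete, here is a small sketch (the 600x400 target size is just an example): the capture properties ask the hardware for a mode, while resize() reshapes whatever frame the camera actually delivered.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // Hardware side: request a capture mode the camera supports.
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

    cv::Mat frame, resized;
    cap >> frame;
    if (frame.empty()) return -1;

    // Software side: interpolate the delivered frame to an arbitrary size,
    // e.g. 600x400, which the camera itself may not offer.
    cv::resize(frame, resized, cv::Size(600, 400), 0, 0, cv::INTER_LINEAR);
    return 0;
}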

Feed GStreamer sink into OpenPose

I have a custom USB camera with a custom driver on a custom Nvidia Jetson TX2 board that is not detected by the OpenPose examples. I access the data using a custom GStreamer source. I currently pull frames into a cv::Mat, color-convert them, and feed them into OpenPose on a per-picture basis; it works fine but is 30-40% slower than a comparable video stream from a plug-and-play camera. I would like to explore things like tracking, which is available for streams, since I'm trying to maximize the fps. I believe a stream feed is superior due to better (continuous) use of the GPU.
In particular, the speedup would come at the expense of confidence and would be addressed later: 1 frame goes through pose estimation and 3-4 subsequent frames just track the objects with decreasing confidence levels. I tried that with a plug-and-play camera and the OpenPose example, and the results were somewhat satisfactory.
The point where I stumbled is that I can put the video stream into a cv::VideoCapture, but I do not know how to hand that capture over to OpenPose for processing (a sketch of the setup is below).
If there is a better way to do it, I am happy to try different things but the bottom line is that the custom camera stays (I know ;/). Solutions to the issue described or different ideas are welcome.
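For reference, a minimal sketch of the capture side, assuming OpenCV was built with GStreamer support; the pipeline string is a placeholder for the custom source, and the hand-off to OpenPose is only indicated, since that is exactly the missing piece:

#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    // Placeholder pipeline: replace "mycustomsrc" with the actual custom GStreamer source.
    std::string pipeline =
        "mycustomsrc ! videoconvert ! video/x-raw,format=BGR ! appsink drop=true max-buffers=1";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) return -1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        // Open question: pass `frame` (or the whole capture) to OpenPose as a
        // continuous stream rather than one picture at a time.
    }
    return 0;
}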
Things I already tried:
Lower the resolution of the camera (the camera crops below a certain resolution instead of binning, so I can't really go below 1920x1080; it's a 40+ megapixel video camera, by the way)
Use CUDA to shrink the image before feeding it to OpenPose (the shrink + pose estimation time was virtually equivalent to pose estimation on the original image)
Since the camera view is static, check for changes between frames, crop the image down to the area that changed, and run pose estimation on that section (10% speedup, high risk of missing something)

Set Kinect v2 frame rates (rgb, depth, skeleton) to 25 fps

I'm working on a project which requires all three streams from the Kinect v2 sensor (RGB, Depth, and Skeleton) to be captured, processed and streamed at 25 fps.
My program works with the default settings and all three streams seem to operate at 30 fps. Is there a method to reduce this to 25 fps?
I'm working on a C++ environment.
The Kinect SDK offers no way to set the frame rate. Moreover, the RGB frame rate is not fixed and may drop to 15 fps (from 30 fps) in low-light conditions. Your approach of adding a delay does not change the native frame rate of the device; all you are doing is selectively dropping some frames based on timing. If regular spacing between frame capture times is important to your application, you should implement an interpolation method instead.
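To illustrate the frame-dropping idea described above (it is not an SDK setting), here is a rough sketch that keeps roughly 25 of every ~30 incoming frames based on timestamps; grabNextFrame() and processFrame() are hypothetical stand-ins for however your code receives and handles the Kinect streams:

#include <chrono>

bool grabNextFrame();   // hypothetical: blocks until the sensor delivers its next ~30 fps frame
void processFrame();    // hypothetical: handles the RGB + depth + skeleton data of that frame

void captureLoop()
{
    using clock = std::chrono::steady_clock;
    const auto interval = std::chrono::milliseconds(40);  // 1/25 s between kept frames
    auto nextDue = clock::now();

    while (grabNextFrame())            // frames still arrive at the sensor's native rate
    {
        if (clock::now() < nextDue)
            continue;                  // drop this frame: it arrived before the next 25 fps slot
        nextDue += interval;
        processFrame();                // keep roughly 25 frames per second
    }
}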

Why does a full-screen window in OpenCV (Banana Pi, Raspbian) slow down the camera footage and make it lag?

Currently I’m working on a project to mirror a camera for a blind spot.
The camera outputs a 640 x 480 NTSC signal.
The output screen is 854 x 480 NTSC.
I grab the camera with an EasyCAP video grabber.
On the Banana Pi I installed OpenCV 2.4.9.
The critical point of this project is that the video on the display needs to be real time.
Whenever I comment out the line that puts the window into fullscreen, a small window pops up and the footage runs without delay or lag.
But when I set the window to full screen, the footage becomes slow and lags.
Part of the code:
namedWindow("window",0);
setWindowProperty("window",CV_WND_PROP_FULLSCREEN,CV_WINDOW_FULLSCREEN);
while(1){
cap>>image;
flip(image, destination,1);
imshow("window",destination);
waitKey(33); //delay 33 ms
}
How can I fill the screen with the camera footage without losing speed and frames?
Is it possible to output the footage directly to the composite output?
The problem is that the upscaling and drawing are done in software here. The Banana Pi processor is not powerful enough to handle the required throughput at 30 frames per second.
This is an educated guess on my side, as even desktop systems can run into lag problems when processing and simultaneously displaying video.
A common solution in the computer vision community for this problem is to use OpenGL for display. Here, the upscaling and display is offloaded to the graphics processor. You can do the same thing on a Banana Pi.
If you compiled OpenCV with OpenGL support, you can try it like this:
namedWindow("window", WINDOW_OPENGL);
imshow("window", destination);
Note that if you use OpenGL, you can also save the flip operation by using an appropriate modelview matrix. For this, however, you will probably need to write GL code yourself instead of using imshow.
I fixed the whole problem by using:
namedWindow("window",1);
With FLAG 1 stands for WINDOW_AUTOSIZE.
The footage is more real-time now.
I’m using a small monitor, so the window size is nearly the same as the monitor.

How to capture high resolution image on Windows Mobile

I would like to capture a high-resolution image with a Windows Mobile device. I've tried the example from the WM SDK, but it captures just a single frame from the video camera and the resolution is poor.
Does anyone have any experience with image capture on a Pocket PC with C++?
Thanks
You need to change the filter used by the example code to capture a high-resolution image. When you use the viewfinder in a digital camera, the camera "simulates" a video camera look by applying the lowest resolution filter and then rapidly taking and displaying single frames. When you click the button to take a high-res picture, the camera has to swap out the low-res filter for the high-res filter and then take the high-res picture - this is why (cheap) digital cameras always take so long to snap a picture.
I don't know which code example you're working with, but if it's the one I used, it defaults to the lowest-resolution filter. There should be a line in it somewhere that selects the filter; you just need to change the value passed from 0 to (probably) 3 or 4 for the highest resolution.