I am working with a PS-Eye-3 camera, libusb, PSEye driver, OpenCV 3.4.2 and Visual Studio 2015 / C++ on Windows 10.
I can set the exposure of the camera to any value by using this code:
cv::VideoCapture *cap;
...
cap = new cv::VideoCapture(0);
cap->set(CV_CAP_PROP_EXPOSURE, exposure); // exposure = [0, 255]
Now I would like to switch to auto-exposure too. How can I set the camera to auto-exposure mode?
I tried the following:
cap->set(CV_CAP_PROP_EXPOSURE, 0); // not working
cap->set(CV_CAP_PROP_EXPOSURE, -1); // not working
cap->set(CV_CAP_PROP_AUTO_EXPOSURE, 1); // not working, exposure stays fixed
cap->set(CV_CAP_PROP_AUTO_EXPOSURE, 0); // not working, exposure stays fixed
cap->set(CV_CAP_PROP_AUTO_EXPOSURE, -1); // not working, exposure stays fixed
Any ideas?
It depends on the capture API you are using. If you are using CAP_V4L2, then automatic exposure is set to 'on' with the value 3 and 'off' with the value 1.
All settable values can be listed by running v4l2-ctl -l in a terminal.
I think for OpenCV < 4.0 the default API is CAP_GSTREAMER, and automatic exposure is set to 'on' with the value 0.75 and 'off' with the value 0.25.
Try cap->set(CV_CAP_PROP_AUTO_EXPOSURE, X);
where X is a camera-dependent value such as 0.25 or 0.75.
For a similar issue see the discussion:
https://github.com/opencv/opencv/issues/9738
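If you want to handle both conventions, a minimal sketch (the exact values are backend-dependent assumptions, so check what set() returns):

    // Sketch: try the two common CAP_PROP_AUTO_EXPOSURE conventions.
    // Which one applies depends on the capture backend; set() returns
    // false if the property/value is not accepted (and even true does
    // not guarantee the driver honored it).
    bool autoOn = cap->set(CV_CAP_PROP_AUTO_EXPOSURE, 3);   // V4L2-style auto mode
    if (!autoOn)
        autoOn = cap->set(CV_CAP_PROP_AUTO_EXPOSURE, 0.75); // convention used by some backends

    // ...and back to manual mode before setting a fixed exposure:
    bool manual = cap->set(CV_CAP_PROP_AUTO_EXPOSURE, 1);   // V4L2-style manual mode
    if (!manual)
        manual = cap->set(CV_CAP_PROP_AUTO_EXPOSURE, 0.25);
    cap->set(CV_CAP_PROP_EXPOSURE, exposure);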
I think I finally found a solution, at least for my problem:
capture = cv2.VideoCapture(id)
capture.set(cv2.CAP_PROP_AUTO_EXPOSURE, 3) # auto mode
capture.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1) # manual mode
capture.set(cv2.CAP_PROP_EXPOSURE, desired_exposure_value)
I have to first set auto_exposure to 3 (auto mode),
then set it to 1 (manual mode),
and then I can set the manual exposure.
You can also change these settings from the shell.
List the available options:
video_id=1
v4l2-ctl --device /dev/video$video_id -l
Set them from Python:
import os

def set_manual_exposure(dev_video_id, exposure_time):
    commands = [
        "v4l2-ctl --device /dev/video" + str(dev_video_id) + " -c exposure_auto=3",
        "v4l2-ctl --device /dev/video" + str(dev_video_id) + " -c exposure_auto=1",
        "v4l2-ctl --device /dev/video" + str(dev_video_id) + " -c exposure_absolute=" + str(exposure_time),
    ]
    for c in commands:
        os.system(c)
# usage
set_manual_exposure(1, 18)
I've been having a tough time getting my webcam working quickly with OpenCV. Frames take a very long time to read (a recorded average of 124 ms across 500 frames). I've tried on three different computers (running Windows 10) with a Logitech C922 webcam. The most recent machine I tested on has a Ryzen 9 3950X with 32 GB of RAM; no lack of power.
Here is the code:
cv::VideoCapture cap = cv::VideoCapture(m_cameraNum);

// Check if camera opened successfully
if (!cap.isOpened())
{
    m_logger->critical("Error opening video stream or file\n\r");
    return -1;
}

bool result = true;
result &= cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
result &= cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

bool ready = false;
std::vector<std::string> timeLog;
timeLog.reserve(50000);
int i = 0;
while (i < 500)
{
    auto start = std::chrono::system_clock::now();
    cv::Mat img;
    ready = cap.read(img);

    // If the frame is empty, log it and try again
    if (!ready)
    {
        timeLog.push_back("continue");
        continue;
    }
    i++;
    auto end = std::chrono::system_clock::now();
    timeLog.push_back(std::to_string(std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()));
}

for (auto& entry : timeLog)
    m_logger->info(entry);

cap.release();
return 0;
Notice that I write the elapsed times to a log file at the end of execution. The average time is 124 ms for both debug and release builds, with not one instance of "continue" after half a dozen runs.
It doesn't matter if I use USB 2 or USB 3 ports (the camera is USB 2) or whether I run a debug or release build; the log file shows anywhere from 110 ms to 130 ms per frame. The camera works fine in other apps; OBS can get a smooth 1080p@30fps or 720p@60fps.
Stepping through the debugger and doing a lot of Googling, I've learned the following about my system:
The backend chosen by default is DSHOW. GStreamer and FFMPEG are also available.
DSHOW uses FFMPEG somehow (it needs the FFMPEG dll), but I cannot use FFMPEG directly through OpenCV: attempting cv::VideoCapture(m_cameraNum, cv::CAP_FFMPEG) always fails. It seems like OpenCV's interface to FFMPEG is only capable of opening video files.
Microsoft really screwed up camera devices in Windows a few years back; I'm not sure if this is related to my problem.
Here's a short list of the fixes I have tried, most taken from older SO posts:
result &= cap.set(cv::CAP_PROP_FRAME_COUNT, 30); // Returns false, does nothing
result &= cap.set(cv::CAP_PROP_CONVERT_RGB, 0); // Returns true, does nothing
result &= cap.set(cv::CAP_PROP_MODE, cv::VideoWriter::fourcc('M', 'J', 'P', 'G')); // Returns false, does nothing
Set the registry key from http://alax.info/blog/1693 that is supposed to disable the new Windows camera server.
Updated from 4.5.0 to 4.5.2, no change.
Asked device manager to find a newer driver, no newer driver found.
I'm out of ideas. Any help?
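For reference, the one commonly suggested variant not in the list above is forcing the MJPG FOURCC on an explicitly DirectShow-backed capture. The sketch below is untested on my setup, and whether the C922's DirectShow path honors MJPG at 720p60 is an assumption:

    // Sketch: open the camera explicitly through DirectShow and request
    // MJPG + 720p60. Every set() can silently fail, so check the returns.
    cv::VideoCapture cap(m_cameraNum, cv::CAP_DSHOW);
    bool ok = cap.isOpened();
    ok &= cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
    ok &= cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    ok &= cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    ok &= cap.set(cv::CAP_PROP_FPS, 60);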
I have a scientific application which captures a video4Linux video stream. It's crucial that we capture every frame and that none gets lost. Unfortunately, frames go missing here and there and I don't know why.
To detect dropped frames I compare the v4l2_buffer's sequence number with my own counter directly after reading a frame:
void detectDroppedFrame(v4l2_buffer* buffer) {
    _frameCounter++;

    // The driver's sequence counter restarts at 0 (e.g. after a stream
    // restart), so don't treat that case as a dropped frame.
    auto isSequenceRestart = buffer->sequence == 0 && _frameCounter > 1;
    if (!isSequenceRestart && _frameCounter != buffer->sequence + 1)
    {
        std::cout << "\n####### WARNING! Missing frame detected!" << std::endl;
        _frameCounter = buffer->sequence + 1; // re-sync our counter with the correct frame number from the driver
    }
}
My runnable one-file example gist (based on the official V4L2 capture example) can be found on GitHub: https://gist.github.com/SebastianMartens/7d63f8300a0bcf0c7072a674b3ea4817
Tested with a webcam on an Ubuntu 18.04 virtual machine on notebook hardware (uvcvideo driver) as well as with a CSI camera on our embedded hardware running Ubuntu 18.04 natively. Frames are not processed, and buffers seem to be grabbed fast enough (buffer status checked with VIDIOC_QUERYBUF, which shows that all buffers are in the driver's incoming queue and the V4L2_BUF_FLAG_DONE flag is not set). I lose frames with MMAP as well as with the UserPtr method. It also seems to be independent of pixel format, image size and frame rate!
To me it looks as if the camera/V4L2 driver is not able to fill the available buffers fast enough, but increasing the file descriptor priority with the VIDIOC_S_PRIORITY command does not help either (still likely a thread scheduling problem?).
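For reference, the buffer-status check mentioned above looks roughly like this (a sketch; fd and the buffer count come from my capture setup and are assumptions here):

    // Sketch: query each buffer's state; V4L2_BUF_FLAG_DONE set means the
    // buffer is filled and waiting in the driver's outgoing queue.
    #include <linux/videodev2.h>
    #include <sys/ioctl.h>
    #include <cstring>
    #include <cstdio>

    void dumpBufferStates(int fd, unsigned bufferCount) {
        for (unsigned i = 0; i < bufferCount; ++i) {
            v4l2_buffer buf;
            std::memset(&buf, 0, sizeof(buf));
            buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            buf.index  = i;
            if (ioctl(fd, VIDIOC_QUERYBUF, &buf) == 0)
                std::printf("buffer %u: queued=%d done=%d\n", i,
                            (buf.flags & V4L2_BUF_FLAG_QUEUED) != 0,
                            (buf.flags & V4L2_BUF_FLAG_DONE) != 0);
        }
    }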
=> What are possible reasons why V4L2 does not forward frames (does not put them into its outgoing queue)?
=> Is my method to detect lost frames correct? Are there other options or tools for that?
I had a similar problem when using the bttv driver. All attempts to capture at full resolution would result in dropped frames (usually around 10% of frames, often in bursts). Capturing at half resolution worked fine.
The solution that I found, though far from ideal, was to apply a load to the linux scheduler. Here is an example, using the "tvtime" program to do the capture:
#! /bin/bash
# apply a load in the background
while true; do /bin/true; done &> /dev/null &
# do the video capture
/usr/bin/tvtime "$@"
# kill the loop running in the background
pkill -P $$
This creates a loop that repeatedly runs /bin/true in the background. Almost any executable will do (I used /bin/date at first). The loop will create a heavy load, but with a multi-core system there is still plenty of headroom for other tasks.
This obviously isn't an ideal solution, but for what it's worth, it allows me to capture full-resolution video with no dropped frames. I have little desire to poke around in the driver/kernel code to find a better solution.
FYI, here are the details of my system:
OS: Ubuntu 20.04, kernel 5.4.0-42
MB: Gigabyte AB350M-D3H
CPU: AMD Ryzen 5 2400G
GPU: AMD Raven Ridge
Driver name : bttv
Card type : BT878 video (Hauppauge (bt878))
Bus info : PCI:0000:06:00.0
Driver version : 5.4.44
Capabilities : 0x85250015
I am attempting to create a multilayered RNN using LSTMs in TensorFlow. I am using TensorFlow 0.9.0 and Python 2.7 on Ubuntu 14.04.
However, I keep getting the following error:
tensorflow.python.framework.errors.InvalidArgumentError: Expected begin[1] in [0, 2000], but got 4000
when I use
rnn_cell.MultiRNNCell([cell]*num_layers)
if num_layers is greater than 1.
My code (with size = 1000, config.forget_bias = 1 and config.num_layers = 3):
cell = rnn_cell.LSTMCell(size,forget_bias=config.forget_bias)
cell_layers = rnn_cell.MultiRNNCell([cell]*config.num_layers)
I would also like to be able to switch to using GRU cells but this gives me the same error:
Expected begin[1] in [0, 1000], but got 2000
I have tried explicitly setting
num_proj = 1000
which also did not help.
Is this something to do with my use of concatenated states? I have also attempted to set
state_is_tuple=True
which gives:
ValueError: Some cells return tuples of states, but the flag state_is_tuple is not set. State sizes are: [LSTMStateTuple(c=1000, h=1000), LSTMStateTuple(c=1000, h=1000), LSTMStateTuple(c=1000, h=1000)]
Any help would be much appreciated!
I'm not sure why this worked, but I added in a dropout wrapper, i.e.
if Training:
    cell = rnn_cell.DropoutWrapper(cell, output_keep_prob=config.keep_prob)
And now it works.
This works for both LSTM and GRU cells.
This problem occurs because you have increased the number of layers of your GRU cell, but your initial vector is not enlarged to match. If your initial_vector size is [batch_size, 50], then:
initial_vector = tf.concat(1, [initial_vector]*num_layers)
Now pass this to the decoder as the initial vector.
I want to get the number of available cameras.
I tried to count cameras like this:
for (int device = 0; device < 10; device++)
{
    VideoCapture cap(device);
    if (!cap.isOpened())
        return device;
}
If I have one camera connected, it never fails to open, regardless of the device index.
So I tried to preview the different devices, but I always get the image of the same camera.
If I connect a second camera, device 0 is camera 1 and devices 1-10 are camera 2.
I think there is a problem with DirectShow devices.
How to solve this problem? Or is there a function like in OpenCV1 cvcamGetCamerasCount()?
I am using Windows 7 and USB cameras.
OpenCV still has no API to enumerate the cameras or get the number of available devices. See this ticket on the OpenCV bug tracker for details.
The behavior of VideoCapture is undefined for device numbers greater than the number of devices connected, and it depends on the API used to communicate with your camera. See OpenCV 2.3 (C++, QtGui), Problem Initializing some specific USB Devices and Setups for the list of APIs used in OpenCV.
Even if it's an old post, here is a solution for OpenCV 2 / C++:
/**
 * Get the number of cameras available
 */
int countCameras()
{
    int maxTested = 10;
    for (int i = 0; i < maxTested; i++)
    {
        cv::VideoCapture temp_camera(i);
        bool opened = temp_camera.isOpened();
        temp_camera.release();
        if (!opened)
        {
            return i;
        }
    }
    return maxTested;
}
Tested under Windows 7 x64 with :
OpenCV 3 [Custom Build]
OpenCV 2.4.9
OpenCV 2.4.8
With 0 to 3 USB cameras
This is a very old post, but I found that under Python 2.7 on Ubuntu 14.04 and OpenCV 3 none of the solutions here worked for me. Instead I came up with something like this in Python:
import cv2

def clearCapture(capture):
    capture.release()
    cv2.destroyAllWindows()

def countCameras():
    n = 0
    for i in range(10):
        try:
            cap = cv2.VideoCapture(i)
            ret, frame = cap.read()
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            clearCapture(cap)
            n += 1
        except:
            clearCapture(cap)
            break
    return n

print countCameras()
Maybe someone will find this useful.
I do this in Python:
def count_cameras():
    for i in range(10):
        temp_camera = cv.CreateCameraCapture(i-1)
        temp_frame = cv.QueryFrame(temp_camera)
        del(temp_camera)
        if temp_frame == None:
            del(temp_frame)
            return i-1  # MacbookPro counts embedded webcam twice
Sadly, OpenCV opens the camera object anyway, even if there is nothing there; but if you try to extract its content, there will be nothing to grab. You can use that to check your number of cameras. It works on every platform I tested, so it is good.
The reason for returning i-1 is that a MacBook Pro counts its own embedded camera twice.
Python 3.6:
import cv2

# Get the number of cameras available
def count_cameras():
    max_tested = 100
    for i in range(max_tested):
        temp_camera = cv2.VideoCapture(i)
        if temp_camera.isOpened():
            temp_camera.release()
            continue
        return i
    return max_tested  # every tested index opened

print(count_cameras())
I also faced a similar kind of issue. I solved it by using the videoInput.h library instead of OpenCV for enumerating the cameras, and then passed the index to the VideoCapture object.
I am working with a high-resolution camera: 4008x2672. I am writing a simple program which grabs frames from the camera and writes them to an AVI file. For such a high resolution, x264 is the only codec I found that could do the trick (suggestions welcome). I am using OpenCV for most of the image handling. As mentioned in this post http://doom10.org/index.php?topic=1019.0 , I modified the AVCodecContext members as per the ffmpeg presets for libx264 (I had to do this to avoid the broken ffmpeg default settings error). This is the output I get when I run the program:
[libx264 @ 0x992d040] non-strictly-monotonic PTS
1294846981.526675 1 0 //Timestamp camera_no frame_no
1294846981.621101 1 1
1294846981.715521 1 2
1294846981.809939 1 3
1294846981.904360 1 4
1294846981.998782 1 5
1294846982.093203 1 6
Last message repeated 7 times
[avi @ 0x992beb0] st:0 error, non monotone timestamps
-614891469123651720 >= -614891469123651720
OpenCV Error: Unspecified error (Error while writing video frame) in
icv_av_write_frame_FFMPEG, file
/home/ajoshi/ext/OpenCV-2.2.0/modules/highgui/src/cap_ffmpeg.cpp, line 1034
terminate called after throwing an instance of 'cv::Exception'
what(): /home/ajoshi/ext/OpenCV-2.2.0/modules/highgui/src/cap_ffmpeg.cpp:1034:
error: (-2) Error while writing video frame in function icv_av_write_frame_FFMPEG
Aborted
Modifications to the AVCodecContext are:
if (codec_id == CODEC_ID_H264)
{
    // fprintf(stderr, "Trying to parse a preset file for libx264\n");
    // Setting values manually from the medium preset
    c->me_method = 7;
    c->qcompress = 0.6;
    c->qmin = 10;
    c->qmax = 51;
    c->max_qdiff = 4;
    c->i_quant_factor = 0.71;
    c->max_b_frames = 3;
    c->b_frame_strategy = 1;
    c->me_range = 16;
    c->me_subpel_quality = 7;
    c->coder_type = 1;
    c->scenechange_threshold = 40;
    c->partitions = X264_PART_I8X8 | X264_PART_I4X4 | X264_PART_P8X8 | X264_PART_B8X8;
    c->flags = CODEC_FLAG_LOOP_FILTER;
    c->flags2 = CODEC_FLAG2_BPYRAMID | CODEC_FLAG2_MIXED_REFS | CODEC_FLAG2_WPRED | CODEC_FLAG2_8X8DCT | CODEC_FLAG2_FASTPSKIP;
    c->keyint_min = 25;
    c->refs = 3;
    c->trellis = 1;
    c->directpred = 1;
    c->weighted_p_pred = 2;
}
I am probably not setting the dts and pts values, which I believed ffmpeg would set for me.
Any suggestions welcome.
Thanks in advance.
I would probably run the x264 executable in another process and pipe either RGB or YUV pixels to it. Then you can use all the normal x264 (or ffmpeg) flags, and it handles multithreading for you.
And since x264 is GPL licensed it also gives you more freedom on licensing your app.
PS: Here is some sample code using ffmpeg from Qt; you can ignore the Qt-specific bits, but it gives a good starting point for using ffmpeg from a C++ app.
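A minimal sketch of that piping approach, assuming a modern OpenCV for the I420 conversion and the stock x264 CLI (camera index, frame rate, frame count and output name are all placeholders):

    // Sketch: pipe raw I420 frames from OpenCV into the x264 CLI via stdin.
    #include <cstdio>
    #include <opencv2/opencv.hpp>

    int main() {
        const int w = 4008, h = 2672;  // camera resolution
        // x264 reads raw i420 frames from stdin ("-" as input)
        FILE* enc = popen("x264 --demuxer raw --input-csp i420 "
                          "--input-res 4008x2672 --fps 10 -o out.264 -", "w");
        cv::VideoCapture cap(0);       // frame source is a placeholder
        cv::Mat bgr, i420;
        for (int n = 0; n < 100 && cap.read(bgr); ++n) {
            cv::cvtColor(bgr, i420, cv::COLOR_BGR2YUV_I420); // planar 4:2:0
            std::fwrite(i420.data, 1, (size_t)w * h * 3 / 2, enc);
        }
        pclose(enc);
    }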
The actual error is "non monotone timestamps". It seems that you didn't properly initialize the video frame properties. If possible, use libx264 directly; it'll be easier to handle.
PS: You can work around the ffmpeg x264 settings problem by specifying an x264 preset file with the -fvpre option.
The pts value of the AVFrame you send as the last argument to avcodec_encode_video needs to be set by you. Once you set this, the codec context's coded_frame->pts field will have the correct value, which you can pass through av_rescale_q() and set in the AVPacket for your av_interleaved_write_frame().
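A minimal sketch of that pts plumbing, written against the legacy avcodec_encode_video API this OpenCV/ffmpeg vintage uses (c, st, oc, outbuf and frame_index stand in for your codec context, stream, format context, output buffer and a running counter, and are assumptions here):

    // Sketch: set AVFrame.pts yourself, then rescale coded_frame->pts into
    // the stream time base for the muxer.
    extern "C" {
    #include <libavformat/avformat.h>
    }

    static void encodeAndMux(AVCodecContext* c, AVStream* st, AVFormatContext* oc,
                             AVFrame* frame, uint8_t* outbuf, int outbuf_size,
                             int64_t* frame_index)
    {
        frame->pts = (*frame_index)++;  // monotonically increasing, in codec time_base units

        int out_size = avcodec_encode_video(c, outbuf, outbuf_size, frame);
        if (out_size > 0) {
            AVPacket pkt;
            av_init_packet(&pkt);
            if (c->coded_frame->pts != AV_NOPTS_VALUE)
                pkt.pts = av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
            if (c->coded_frame->key_frame)
                pkt.flags |= AV_PKT_FLAG_KEY;  // PKT_FLAG_KEY in the oldest versions
            pkt.stream_index = st->index;
            pkt.data = outbuf;
            pkt.size = out_size;
            av_interleaved_write_frame(oc, &pkt);
        }
    }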