I want to get the number of available cameras.
I tried to count cameras like this:
for (int device = 0; device < 10; device++)
{
    VideoCapture cap(device);
    if (!cap.isOpened())
        return device;
}
If I have a camera connected, it never fails to open.
So I tried to preview the different devices, but I always get the image from the same camera.
If I connect a second camera, device 0 is camera 1 and devices 1-10 are camera 2.
I think there is a problem with DirectShow devices.
How can I solve this problem? Or is there a function like cvcamGetCamerasCount() from OpenCV 1?
I am using Windows 7 and USB cameras.
OpenCV still has no API to enumerate the cameras or get the number of available devices. See this ticket on the OpenCV bug tracker for details.
The behavior of VideoCapture is undefined for device numbers greater than the number of connected devices, and it depends on the API used to communicate with your camera. See OpenCV 2.3 (C++, QtGui), Problem Initializing some specific USB Devices and Setups for the list of APIs used in OpenCV.
Even though this is an old post, here is a solution for OpenCV 2/C++:
/**
 * Get the number of available cameras
 */
int countCameras()
{
    int maxTested = 10;
    for (int i = 0; i < maxTested; i++)
    {
        cv::VideoCapture temp_camera(i);
        bool opened = temp_camera.isOpened();
        temp_camera.release();
        if (!opened)
        {
            return i;
        }
    }
    return maxTested;
}
Tested under Windows 7 x64 with :
OpenCV 3 [Custom Build]
OpenCV 2.4.9
OpenCV 2.4.8
With 0 to 3 USB cameras
This is a very old post, but I found that under Python 2.7 on Ubuntu 14.04 and OpenCV 3, none of the solutions here worked for me. Instead, I came up with something like this in Python:
import cv2

def clearCapture(capture):
    capture.release()
    cv2.destroyAllWindows()

def countCameras():
    n = 0
    for i in range(10):
        try:
            cap = cv2.VideoCapture(i)
            ret, frame = cap.read()
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # raises if frame is None
            clearCapture(cap)
            n += 1
        except:
            clearCapture(cap)
            break
    return n

print countCameras()
Maybe someone will find this useful.
I do this in Python:
import cv  # legacy OpenCV 1.x-style Python binding

def count_cameras():
    for i in range(10):
        temp_camera = cv.CreateCameraCapture(i - 1)
        temp_frame = cv.QueryFrame(temp_camera)
        del temp_camera
        if temp_frame is None:
            del temp_frame
            return i - 1  # MacBook Pro counts its embedded webcam twice
Sadly, OpenCV opens the camera object anyway, even if there is nothing there; but if you try to extract its content, there is nothing to read. You can use that to count your cameras. It works on every platform I tested, so it is good.
The reason for returning i-1 is that the MacBook Pro counts its own embedded camera twice.
Python 3.6:
import cv2

# Get the number of cameras available
def count_cameras():
    max_tested = 100
    for i in range(max_tested):
        temp_camera = cv2.VideoCapture(i)
        if temp_camera.isOpened():
            temp_camera.release()
            continue
        return i
    return max_tested  # every tested index opened

print(count_cameras())
I also faced a similar issue. I solved it by using the videoInput.h library instead of OpenCV to enumerate the cameras, and then passed the resulting index to the VideoCapture object. That solved my problem.
Related
I've been having a tough time getting my webcam working quickly with OpenCV. Frames take a very long time to read (a recorded average of 124 ms across 500 frames). I've tried on three different computers (running Windows 10) with a Logitech C922 webcam. The most recent machine I tested on has a Ryzen 9 3950X with 32 GB of RAM; no lack of power.
Here is the code:
cv::VideoCapture cap = cv::VideoCapture(m_cameraNum);

// Check if camera opened successfully
if (!cap.isOpened())
{
    m_logger->critical("Error opening video stream or file\n\r");
    return -1;
}

bool result = true;
result &= cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
result &= cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

bool ready = false;
std::vector<std::string> timeLog;
timeLog.reserve(50000);
int i = 0;
while (i < 500)
{
    auto start = std::chrono::system_clock::now();
    cv::Mat img;
    ready = cap.read(img);

    // If the frame is empty, log it and retry
    if (!ready)
    {
        timeLog.push_back("continue");
        continue;
    }
    i++;
    auto end = std::chrono::system_clock::now();
    timeLog.push_back(std::to_string(std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()));
}

for (auto& entry : timeLog)
    m_logger->info(entry);

cap.release();
return 0;
Notice that I write the elapsed times to a log file at the end of execution. The average time is 124 ms in both debug and release, with not one instance of "continue" after half a dozen runs.
It doesn't matter whether I use USB 2 or USB 3 ports (the camera is USB 2) or whether I run a debug or release build; the log file shows anywhere from 110 ms to 130 ms per frame. The camera works fine in other apps; OBS gets a smooth 1080p at 30 fps or 720p at 60 fps.
Stepping through the debugger and doing a lot of Googling, I've learned the following about my system:
The backend chosen by default is DSHOW. GStreamer and FFMPEG are also available.
DSHOW uses FFMPEG somehow (it needs the FFMPEG DLL), but I cannot use FFMPEG directly through OpenCV. Attempting cv::VideoCapture(m_cameraNum, cv::CAP_FFMPEG) always fails. It seems OpenCV's interface to FFMPEG is only capable of opening video files.
Microsoft really screwed up camera devices in Windows a few years back, not sure if this is related to my problem.
Here's a short list of the fixes I have tried, most taken from older SO posts:
result &= cap.set(cv::CAP_PROP_FRAME_COUNT, 30); // Returns false, does nothing
result &= cap.set(cv::CAP_PROP_CONVERT_RGB, 0); // Returns true, does nothing
result &= cap.set(cv::CAP_PROP_MODE, cv::VideoWriter::fourcc('M', 'J', 'P', 'G')); // Returns false, does nothing
Set registry key from http://alax.info/blog/1693 that should disable the new Windows camera server.
Updated from 4.5.0 to 4.5.2, no change.
Asked device manager to find a newer driver, no newer driver found.
I'm out of ideas. Any help?
My program uses OpenCV library to capture a single image from the webcam and then save it onto the local file system.
Currently it works fine on my desktop, where the program was developed. But when I test it on my laptop (a different device), it is unable to detect the webcam.
I went ahead to test it on another laptop, but still the same issue occurs.
Neither laptop's webcam was in use when I executed my program, and both laptops have only one integrated webcam.
Here are the specifications of the devices and the CMake options for OpenCV.
Desktop
Windows 10 64-bit
External Webcam connected via USB
Webcam exist in Device Manager
Webcam exist in Devices and Printers
Laptop 1
Windows 10 64-bit
On-board Webcam (Integrated)
Webcam exist in Device Manager
Webcam exist in Devices and Printers
Able to access webcam on laptop's "Camera" app
Laptop 2
Windows 10 64-bit
On-board Webcam (Integrated)
Webcam exist in Device Manager
Webcam does not exist in Devices and Printers
Able to access webcam on laptop's "Camera" app
OpenCV CMake
v3.30
BUILD_SHARED_LIBS OFF (Static Libs)
WITH_MSMF ON (Media Foundation Support)
WITH_DSHOW OFF (Tried ON, didn't work, read that it was obsolete)
BUILD_opencv_world ON
Shown below are my codes:
Program Headers
#include <opencv2/opencv.hpp>
void GetDevID();
void OpenDevice(int);
cv::Mat frame;
main
int main()
{
    // Initially I tried cv::VideoCapture cap.open(0) for the default webcam.
    // Unfortunately it only works on my desktop, so I tried opening
    // IDs from -1 to 254, hence GetDevID(). Still does not work.
    GetDevID();
}
GetDevID - Gets the first device that is opened
void GetDevID()
{
    int MaxTested = 254;
    int DevID = -2;
    for (int i = -1; i < MaxTested; i++)
    {
        cv::VideoCapture TestDev(i);
        bool IsDevOpen = TestDev.isOpened();
        TestDev.release();
        if (IsDevOpen)
        {
            DevID = i;
            break;
        }
    }
    OpenDevice(DevID);
}
OpenDevice - Opens tested device
void OpenDevice(int DevID)
{
    cv::VideoCapture cap;
    cap.open(DevID);
    if (!cap.isOpened())
    {
        // This is where it fails when the program is executed on the laptop,
        // which means no ID from -1 to 254 is valid
    }
    else
    {
        // Capture the 20th frame, for best clarity
        for (int i = 0; i < 20; i++)
        {
            cap.read(frame);
        }
        cap.release();
        if (frame.empty())
        {
            // Error handling occurs here
        }
        else
        {
            // Continue to save the file
        }
    }
}
I am all out of luck on Google, and it seems OpenCV-related issues are quite unique.
So if anyone is able to advise me, it would be very much appreciated.
Do let me know if I have missed anything in the question as well.
Edit
It seems that I might have missed out on a few things, as mentioned by DaveS.
The program ran fine on the desktop and the laptops; it's just that while running on the laptops, it didn't detect any webcam devices, nor did it throw any error codes.
To the program, it is as if no webcam devices are connected, even though physically there are.
I have also tested connecting the external webcam to the laptop, and the program is still unable to find any webcam devices.
I use the application IPCamera on my Android mobile phone to output (share) the video image from its camera to the LAN. I can access it in a PC browser; that works fine.
However, I want OpenCV to capture this video stream from the IP address:
VideoCapture cap("http://admin:admin@192.168.0.11:8081/?action=stream?dummy=param.mjpg");

while (cap.isOpened())
{
    Mat frame;
    if (!cap.read(frame))
        break;
    cout << "Connected!!";
    imshow("lalala", frame);
    int k = waitKey(10);
    if (k == 27)
        break;
}
and I got an error.
The actual codec used by the phone is MJPEG (I read this from the application on my phone). I don't know whether OpenCV supports it. Is the issue that the mobile application uses some unique codec, that my PC lacks it, or that my C++/OpenCV code is wrong?
On a PC, OpenCV can capture the video stream from your mobile phone.
You just need the right connection string, like this one for an RTSP stream in my case:
VideoCapture capture("rtsp://USER:PASS@xxx.xxx.xxx.xxx/axis-media/media.amp?camera=2");
Probably you don't have FFMPEG installed correctly. You need to reinstall OpenCV: install FFMPEG first, and OpenCV after that.
In OpenCV 3.0.0 and 3.1, try adding:
#include <opencv2/videoio.hpp>
#include <opencv2/imgcodecs.hpp>
Some tips on how to install FFMPEG, and a C++ sample on Debian Linux, are linked here (code, tips and tricks).
I am using a MacBook and have a program written in C++. The program extracts successive frames from the webcam; the extracted frames are then grayscaled and smoothed using OpenCV functions. After that I use cvNorm to find the relative difference between two frames. I am using the VideoCapture class.
I found that the frame rate is 30 fps, and using cvNorm the relative difference between successive frames is less than 200 most of the time.
I am trying to do the same thing in Xcode so as to implement the program on an iPad. This time I am using AVCaptureSession; the same steps are performed, but I noticed that the relative difference between two frames is much higher (>600).
Thus I would like to know the default settings of the VideoCapture class. I know I can edit the settings using cvSetCaptureProperty, but I cannot find its defaults. I would then compare them with the settings of the AVCaptureSession, hoping to find out why there is such a huge difference in cvNorm between these two approaches.
Thanks in advance.
OpenCV's VideoCapture class is just a simple wrapper for capturing video from cameras or reading video files. It is built upon several multimedia frameworks (avfoundation, dshow, ffmpeg, v4l, gstreamer, etc.) and hides them completely from the outside. This is where the problem comes from: it is really hard to achieve identical capture behaviour across different platforms and multimedia frameworks. The common set of capture properties is small, and setting their values is more of a suggestion than a requirement.
In summary, the default properties can differ under different platforms, but in case of AV Foundation framework:
The cvCreateCameraCapture_AVFoundation(int index) function will create a CvCapture instance on iOS, which is defined in cap_qtkit.mm. It seems you are not able to change the sampling rate: only CV_CAP_PROP_FRAME_WIDTH, CV_CAP_PROP_FRAME_HEIGHT and DISABLE_AUTO_RESTART are supported.
The grabFrame() implementation is below. I'm absolutely not an Objective-C expert, but it seems to wait until the capture updates the image or a timeout occurs.
bool CvCaptureCAM::grabFrame() {
    return grabFrame(5);
}

bool CvCaptureCAM::grabFrame(double timeOut) {
    NSAutoreleasePool* localpool = [[NSAutoreleasePool alloc] init];
    double sleepTime = 0.005;
    double total = 0;

    [NSTimer scheduledTimerWithTimeInterval:100 target:nil selector:@selector(doFireTimer:) userInfo:nil repeats:YES];
    while (![capture updateImage] && (total += sleepTime) <= timeOut) {
        [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:sleepTime]];
    }

    [localpool drain];
    return total <= timeOut;
}
Here's the deal: I'm trying to use my S3 as a webcam, using the IP Webcam app for Android, then creating an IP webcam within the software. Usually the address is http://192.168.1.XX:8080/greet.html (maybe the last two digits change). The web page gives me options and info like this:
"Here is the list of IP Webcam service URLs:
http://192.168.1.XX:8080/video is the MJPEG URL."
The code I'm using is simply like this:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

int main()
{
    // connect to an IP cam (might need an additional dummy param like '?type=mjpeg' at the end)
    VideoCapture cap("http://192.168.1.XX:8080/video.mjpg");

    while (cap.isOpened())
    {
        Mat frame;
        if (!cap.read(frame))
            break;
        imshow("lalala", frame);
        int k = waitKey(10);
        if (k == 27)
            break;
    }
    return 0;
}
So the IP Webcam app registers a connection, but there's no image whatsoever... and then it says:
warning: Error opening file <../../modules/highgui/src/cap_ffmpeg_imp
Cannot open the web cam
Process returned -1 <0xFFFFFFF> execution time: 37.259 s
Press any key to continue.
I am using:
Windows 7 Professional
Open CV 2.4.4
Codeblocks 13.12
USB 2.0 webcam, 640x480 at 30 fps, 50 Hz, all standard.
Try connecting with another video-streaming Android application.
I use Smart WebCam.
Open it with:
cap.open("http://192.168.1.13:8080/?x.mjpg");