I am trying to calibrate two cameras, but my problem is the auto-focus. I'm using the Logitech C920 webcam. Does anyone know a way to disable the auto-focus feature? I'm using C++ and OpenCV 2.4.9 on OS X.
You can try this.
cap = cv2.VideoCapture(1)  # create the camera object
cap.set(cv2.CAP_PROP_AUTOFOCUS, 0)  # turn the autofocus off
You can find more information on how to set some properties at these links.
The VideoCapture class
http://docs.opencv.org/3.2.0/d8/dfe/classcv_1_1VideoCapture.html
The VideoCapture properties
http://docs.opencv.org/3.2.0/d4/d15/group__videoio__flags__base.html#ga023786be1ee68a9105bf2e48c700294d
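If the named constant is missing from an older cv2 build, the numeric property ID still works. A minimal Python sketch, assuming the videoio IDs 39 (autofocus) and 28 (focus) and a camera at index 0; the import is guarded so the snippet also loads where OpenCV is absent:

```python
try:
    import cv2  # OpenCV Python bindings; optional so the sketch degrades gracefully
except ImportError:
    cv2 = None

# Numeric videoio property IDs, for builds whose bindings lack the named constants
CAP_PROP_AUTOFOCUS = getattr(cv2, "CAP_PROP_AUTOFOCUS", 39)
CAP_PROP_FOCUS = getattr(cv2, "CAP_PROP_FOCUS", 28)

def disable_autofocus(index=0):
    """Open camera `index`, turn autofocus off, and pin the focus value."""
    cap = cv2.VideoCapture(index)
    if not cap.isOpened():
        return False
    ok = cap.set(CAP_PROP_AUTOFOCUS, 0)  # set() returns False if the backend rejects it
    cap.set(CAP_PROP_FOCUS, 0)           # 0 = farthest focus on many UVC cams
    cap.release()
    return ok
```

Checking the return value of set() tells you whether the backend actually accepted the property; not every capture backend supports it.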
Try v4l-utils:
Install: sudo apt-get install v4l-utils
Find your device: v4l2-ctl --list-devices
Replace video0 with the device reported by the previous command and disable autofocus with:
v4l2-ctl -d /dev/video0 --set-ctrl=focus_auto=0
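The same controls can be scripted; a Python sketch wrapping the v4l2-ctl calls above (focus_absolute is how the C920's UVC driver usually exposes manual focus, but that's an assumption to verify with v4l2-ctl --list-ctrls):

```python
import subprocess

def focus_cmds(device="/dev/video0"):
    """v4l2-ctl invocations that disable autofocus, then pin manual focus."""
    return [
        ["v4l2-ctl", "-d", device, "--set-ctrl=focus_auto=0"],
        ["v4l2-ctl", "-d", device, "--set-ctrl=focus_absolute=0"],
    ]

def apply_focus(device="/dev/video0"):
    for cmd in focus_cmds(device):
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
```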
Related
I'm trying to get my PiCamera Module v2.1 running on my Raspberry Pi 4. Unfortunately I have to use the Ubuntu 19.10 64-bit distribution. So far so good.
I've installed OpenCV 4. There was some big trouble because it seems that Ubuntu does not come with VideoCore, raspi-config, etc. So I downloaded and updated my firmware with sudo rpi-update and installed userland.
First I tried to open the camera with VideoCapture cap(0), but this throws a bunch of errors (see here: Ubuntu 19.10: Enabling and using Raspberry Pi Camera Module v2.1), and I've read that this works only for USB cameras (I can hardly believe this, because under Raspbian I can use the module like this).
So I googled and found this repo: https://github.com/cedricve/raspicam. I've installed it, but even with this I cannot get it running.
Again, here is what I've done:
install opencv4
update firmware
install userland
writing start_x=1 and gpu_mem=128 to /boot/firmware/config.txt
running modprobe bcm2835-v4l2
sudo vcgencmd get_camera results in supported and detected = 1
When I use sudo raspistill -o test.jpg a window opens and the image is saved. But there are some errors:
mmal: mmal_vc_shm_init: could not initialize vc shared memory service
mmal: mmal_vc_component_create: failed to initialise shm for 'vc.camera_info' (7:EIO)
mmal: mmal_component_create_core: could not create component 'vc.camera_info' (7)
mmal: Failed to create camera_info component
Also, I need to start it with sudo, although I've run sudo usermod -a -G video ubuntu several times (and rebooted). Strange, isn't it?
My example script for accessing the camera is:
#include <ctime>
#include <iostream>
#include <raspicam/raspicam_cv.h>
using namespace std;
int main ( int argc, char **argv ) {
    time_t timer_begin, timer_end;
    raspicam::RaspiCam_Cv Camera;
    cv::Mat image;
    int nCount = 100;
    // set camera params
    Camera.set( cv::CAP_PROP_FORMAT, CV_8UC1 );
    // open camera
    cout << "Opening Camera..." << endl;
    if ( !Camera.open() ) { cerr << "Error opening the camera" << endl; return -1; }
    // start capture
    cout << "Capturing " << nCount << " frames ...." << endl;
    time( &timer_begin );
    for ( int i = 0; i < nCount; i++ ) {
        Camera.grab();
        Camera.retrieve( image );
        if ( i % 5 == 0 ) cout << "\r captured " << i << " images" << std::flush;
    }
    cout << "Stop camera..." << endl;
    Camera.release();
    return 0;
}
Compilation is successful:
sudo g++ stream.cpp -I/usr/local/include/opencv4 -I/usr/local/include -L/usr/local/lib -L/opt/vc/lib -lraspicam_cv -lopencv_core -lraspicam -lmmal -lmmal_core -lmmal_util -lopencv_highgui -lmmal_vc_client -lvcos -lbcm_host -o stream
Executing stream (even with sudo) results in:
Opening Camera...
mmal: mmal_component_create_core: could not find component 'vc.ril.camera'
Failed to create camera componentopen Failed to create camera component/home/raspicam/src/private/private_impl.cpp 103
Error opening the camera
Does anyone have an idea what I can try?
Thanks !
I had this error while compiling a ROS node for the raspicam.
I fixed it by adding the following to my CMakeLists.txt:
set (CMAKE_SHARED_LINKER_FLAGS "-Wl,--no-as-needed")
The issue was that the reference to the library containing the 'vc.ril.camera' component was optimized out by the linker and could not be found at run time.
Hopefully it will work for you.
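For context, a minimal CMakeLists.txt sketch showing where the flag sits (project and target names here are hypothetical; for an executable target the EXE variant of the variable is the one that matters):

```cmake
cmake_minimum_required(VERSION 2.8)
project(raspicam_node)

# Keep the linker from dropping libraries that register MMAL components at load time
set(CMAKE_SHARED_LINKER_FLAGS "-Wl,--no-as-needed")
set(CMAKE_EXE_LINKER_FLAGS "-Wl,--no-as-needed")

add_executable(stream stream.cpp)
target_link_libraries(stream raspicam_cv mmal mmal_core mmal_util)
```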
I am able to stream and receive a webcam feed in two terminals via UDP.
command for streaming:
ffmpeg -i /dev/video0 -b 50k -r 20 -s 858x500 -f mpegts udp://127.0.0.1:2000
command for receiving:
ffplay udp://127.0.0.1:2000
Now I have to use this received video stream as input in Python/OpenCV. How can I do that?
I will be doing this using RTP and RTSP as well.
But in the case of RTSP it is essential to start the receiving terminal first; if I do that, the port becomes busy and my program can't take the feed. How can this be resolved?
I am currently using OpenCV 2.4.13 and Python 2.7 on Ubuntu 14.04.
Check this tutorial, and use cv2.VideoCapture("udp://127.0.0.1:2000"). You will need to build OpenCV with FFmpeg support for this to work.
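A minimal Python sketch of that approach (the overrun_nonfatal/fifo_size query options are FFmpeg UDP protocol options that keep a slow reader from aborting the stream; treat them as optional tuning):

```python
try:
    import cv2  # needs an OpenCV build with FFmpeg support
except ImportError:
    cv2 = None

def udp_url(host="127.0.0.1", port=2000, fifo_size=50000):
    """FFmpeg-style UDP source URL with buffering options."""
    return "udp://%s:%d?overrun_nonfatal=1&fifo_size=%d" % (host, port, fifo_size)

def show_stream():
    cap = cv2.VideoCapture(udp_url())
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break  # stream ended or a packet was lost mid-frame
        cv2.imshow("stream", frame)
        if (cv2.waitKey(1) & 0xFF) == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```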
I'm trying to receive and display a live UDP MJPEG network video stream from a network cam.
I can play the video stream by starting VLC with the argument --demux=mjpeg and then typing udp://@:1234 in the network stream field, or with GStreamer via the console line gst-launch -v udpsrc port=1234 ! jpegdec ! autovideosink. My cam has the IP address 192.168.1.2 and it sends the stream to the address 192.168.1.1:1234.
I've tried to capture the stream with OpenCV with:
cv::VideoCapture cap;
cap.open("udp://#192.168.1.1:1234");
I tried also:
cap.open("udp://#:1234")
cap.open("udp://#localhost:1234")
cap.open("udp://192.168.1.1:1234")
cap.open("udp://192.168.1.1:1234/")
But the function hangs until I press ctrl+C. I have the same problem when I use ffmpeg with: ffmpeg -i udp://#192.168.1.1:1234 -vcodec mjpeg
What did I do wrong? When I installed FFmpeg I couldn't install the dependency libsdl1.2-dev. Is that the problem?
If so, there is any way to read the udp-frames from the socket and then decode the JPEG pictures and display it with OpenCV?
I have Ubuntu Linaro Oneiric 11.10 with kernel 3.0.35 from Freescale.
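On the fallback question: reading the datagrams yourself and decoding the JPEGs is workable, since MJPEG over UDP is just raw JPEG bytes, and every image is delimited by the SOI/EOI markers 0xFFD8/0xFFD9. A Python sketch (port and buffer sizes assumed; the marker splitting is naive and ignores embedded thumbnails):

```python
import socket

try:
    import cv2
    import numpy as np
except ImportError:
    cv2 = None

SOI, EOI = b"\xff\xd8", b"\xff\xd9"  # JPEG start/end-of-image markers

def extract_jpegs(buf):
    """Split complete JPEG images out of a byte buffer; returns (frames, leftover)."""
    frames = []
    while True:
        start = buf.find(SOI)
        if start < 0:
            return frames, b""          # no frame started yet; drop the junk
        end = buf.find(EOI, start + 2)
        if end < 0:
            return frames, buf[start:]  # partial frame; keep it for the next recv
        frames.append(buf[start:end + 2])
        buf = buf[end + 2:]

def receive_loop(port=1234):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    buf = b""
    while True:
        buf += sock.recv(65536)
        frames, buf = extract_jpegs(buf)
        for jpg in frames:
            img = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            if img is not None:
                cv2.imshow("cam", img)
                cv2.waitKey(1)
```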
Thanks anyway. I have fixed this problem by installing a newer version of FFmpeg and using the C API of FFmpeg.
I ran into an issue getting the standard OpenCV face detection sample (facedetect) working. The webcam light comes on but nothing happens; the program launches with a tiny window like this:
I am working from an excellent blog post and sample code. Here is what I have done:
Install OpenCV & get OpenCV source
brew tap homebrew/science
brew install --with-tbb opencv
wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.6/opencv-2.4.6.tar.gz
tar xvzf opencv-2.4.6.tar.gz
Run the facedetect sample with the standard classifier.
cd ~/opencv-2.4.6/samples/c
chmod +x build_all.sh
./build_all.sh
./facedetect --cascade="../../data/haarcascades/haarcascade_frontalface_alt.xml"
I can modify the C++ sample code and recompile and run, but I have no idea what the issue is.
Does anyone have a suggestion?
Update: The issue is that the image from cvQueryFrame is empty:
IplImage* iplImg = cvQueryFrame( capture );
frame = iplImg;
if( frame.empty() )
{
cout << "FRAME EMPTY\n"; // This is getting logged
break;
}
Update: It works OK when the source is a static image, so the issue is something related to the webcam source.
You can try to localise the problem: did you try to capture an image from the web cam and show it, without running any other operation?
It seems there is a problem capturing an image from the web cam via OpenCV; this kind of problem can happen due to hardware. For instance, on my friend's MacBook Pro the captured image was 320x240, and on mine it was 640x480. My friend just changed a simple setting in the camera's configuration and his problem was solved. Your problem might be something like this.
Or you can try to run the face detector just with some images: change the code so that it loads an image from your disk and tries to detect faces in it. If it doesn't work that way either, we can say the problem is not the camera and there is a bigger issue; if it works, we can be fairly sure the problem is the web cam.
EDIT
If you are using the IplImage type, be sure to grab a couple more images from the camera; sometimes the first image is empty.
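That warm-up advice can be sketched like so in Python (the frame count of 5 is an arbitrary but typical choice; the same idea applies to the C++ cvQueryFrame loop in the question):

```python
try:
    import cv2
except ImportError:
    cv2 = None

WARMUP_FRAMES = 5  # throwaway grabs; some drivers return empty frames at start-up

def first_good_frame(index=0, warmup=WARMUP_FRAMES):
    """Read and discard the first few frames, then return the next non-empty one."""
    cap = cv2.VideoCapture(index)
    for _ in range(warmup):
        cap.read()  # discard while the camera settles
    ok, frame = cap.read()
    cap.release()
    return frame if ok and frame is not None and frame.size else None
```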
This was due to a bug in OpenCV - it's been fixed (bug report here: http://code.opencv.org/issues/3156), but the version in homebrew/science is from before the fix.
You can install a newer version by editing the brew formula for OpenCV (based on this pull request: https://github.com/Homebrew/homebrew-science/pull/540).
Edit /usr/local/Library/Formula/opencv.rb and replace these lines:
url 'http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.6.1/opencv-2.4.6.1.tar.gz'
sha1 'e015bd67218844b38daf3cea8aab505b592a66c0'
with these ones
url 'https://github.com/Itseez/opencv/archive/2.4.7.1.tar.gz'
sha1 'b6b0dd72356822a482ca3a27a7a88145aca6f34c'
Then do
brew remove opencv
brew install opencv
It works on Mavericks (for me at least), and should work on Mountain Lion too.
UPDATE: the version of OpenCV in homebrew/science has now been updated, so this answer is now out of date!
brew upgrade opencv
will make homebrew get the latest version, with fixed webcam capture.
I installed the OpenCV library on my Ubuntu PC and wrote a program that takes video from a webcam, and it works.
Yesterday I installed the video-capture driver "media_build" to take video from a video grabber, but the same program doesn't work, while if I open the device via "Video for Linux 2" in VLC it works.
This is the error:
libv4l2: error set_fmt gave us a different result then try_fmt!
HIGHGUI ERROR: libv4l unable convert to requested pixfmt
HIGHGUI ERROR: V4L: device /dev/video0: Unable to query number of channels
ERROR: capture is NULL
The instruction is:
CvCapture* capture = cvCaptureFromCAM( CV_CAP_ANY );
Any suggestions?
Thanks
I think your program is crashing. If so, you can add a NULL check:
if ( capture != NULL ) {
    // your normal code here
} else {
    // display some error message
}
In this case the program won't crash. Probably your video driver isn't providing an interface known to OpenCV.
Maybe you can use this command:
sudo chmod 666 /dev/video0