Take high-resolution image from IP camera using OpenCV - C++

I have an IP camera. It supports two encoding types, H.264 and MJPEG, and its maximum resolution is 1920x1080.
I used the iSpy software to find the URL of my camera. It works and takes a photo, but the resolution is 640x360.
Here is my code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/opencv.hpp"
int main()
{
cv::VideoCapture vcap;
const std::string videoStreamAddress = "rtsp://admin:admin#192.168.0.120/snl/live/1/2/stream1.cgi";
if (!vcap.open(videoStreamAddress))
{
printf("camera is null\n");
return -1;
}
else
{
vcap.set(CV_CAP_PROP_FRAME_WIDTH, 1920);
vcap.set(CV_CAP_PROP_FRAME_HEIGHT, 1080);
cv::Mat image;
vcap.read(image);
cv::imshow("image",image);
cv::imwrite("image.jpg", image);
}
cv::waitKey(1000);
return 0;
}
How can I capture an image at higher resolution? I don't know whether the problem is with my camera, my URL, or my code.
I am working with OpenCV 2.4 on Windows 7.
Any help would be appreciated.

I'm answering my own question.
At first I thought the problem was rooted in OpenCV, since I had found many threads about failures to set camera parameters. It seems that vcap.set(CV_CAP_PROP_FRAME_WIDTH, 1920); vcap.set(CV_CAP_PROP_FRAME_HEIGHT, 1080); does not work well here.
Anyway, I checked my camera's development documentation on the manufacturer's website and found a different URL for the H.264 video stream. After changing the URL it works and captures 1920x1080 images.
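For reference, a rough sketch of the working approach. The stream path below is only a placeholder, since the actual H.264 URL has to be looked up in the camera's own documentation; printing the frame size confirms whether the full-resolution stream was picked up.

#include "opencv2/opencv.hpp"
#include <cstdio>
#include <string>

int main()
{
    // Placeholder: the real H.264 stream path differs per vendor and model.
    const std::string url = "rtsp://admin:admin@192.168.0.120/<h264-stream-path>";

    cv::VideoCapture vcap;
    if (!vcap.open(url))
    {
        std::printf("could not open stream\n");
        return -1;
    }

    cv::Mat image;
    if (vcap.read(image))
    {
        // The frame size is dictated by the stream itself, not by set().
        std::printf("frame size: %dx%d\n", image.cols, image.rows);
        cv::imwrite("image.jpg", image);
    }
    return 0;
}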

Related

Can't change OpenCV video capture resolution

The problem I am having is that I am unable to change the resolution of an OpenCV video capture. The resolution is always 640x480, no matter what. The code is written in C++ and I am using OpenCV 3.4.8. I've created a super simple program to test this, and it just doesn't seem to work no matter what I try.
Here is the code in its entirety:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
VideoCapture cap(0);
cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
cap.set(CAP_PROP_FRAME_WIDTH, 1920);
// open the default camera, use something different from 0 otherwise;
// Check VideoCapture documentation.
if (!cap.open(0))
return 0;
for (;;)
{
Mat frame;
cap.read(frame);
if (frame.empty()) break; // end of video stream
imshow("this is you, smile! :)", frame);
if (waitKey(10) == 27) break; // stop capturing by pressing ESC
}
// the camera will be closed automatically upon exit
// cap.close();
return 0;
}
When I run the above code, the frame is always 640x480.
I've tried changing the resolution with cap.set() to smaller and higher resolutions. I am using an ImageSource camera and I know that the resolutions I am attempting to use are supported by the camera and I can view video at those resolutions in another program.
I've tried using different cameras/webcams.
I've tried explicitly changing the backend API when I create the VideoCapture object - i.e. VideoCapture cap(0, CAP_DSHOW). I tried DSHOW, FFMPEG, IMAGES, etc.
I've tried running the same program on different computers.
The result is always the same 640x480 resolution.
Is there something simple I am missing? Every other post I can find on SO just points toward using cap.set() to change the width and height.
It depends on what your camera backend is. As the documentation says:
Each backend supports devices properties (cv::VideoCaptureProperties) in a different way or might not support any property at all.
Also mentioned in this documentation:
Reading / writing properties involves many layers. Some unexpected result might happens along this chain. Effective behaviour depends from device hardware, driver and API Backend.
It seems your camera backend is not supported by the OpenCV Video I/O module.
Note: I have also come across such cameras, where different resolutions only work with different device indices. For example, you may catch the desired resolution by trying VideoCapture(-1), VideoCapture(1), VideoCapture(2)...
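A minimal sketch of that index-probing idea (the index range and the 1920x1080 request here are arbitrary choices for illustration):

#include "opencv2/opencv.hpp"
#include <iostream>

int main()
{
    // Probe a handful of device indices and report the resolution each one
    // actually delivers after requesting 1920x1080.
    for (int idx = -1; idx <= 3; ++idx)
    {
        cv::VideoCapture cap(idx);
        if (!cap.isOpened())
            continue;

        cap.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
        cap.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);

        cv::Mat frame;
        if (cap.read(frame))
            std::cout << "index " << idx << ": "
                      << frame.cols << "x" << frame.rows << std::endl;
    }
    return 0;
}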
It turns out the error was in the if (!cap.open(0)) line that I was using to check whether cap had successfully initialized.
I was under the impression that open() just returned true if the video capture object was open and false otherwise. But it actually releases the video capture object if it is already open and then re-opens it.
Long story short, that means the cap.set() calls I was using to change the resolution were wiped out when the object was re-opened with cap.open(0), at which point the resolution was reset to the default of 640x480.
The method I was looking for is cap.isOpened(), which simply returns true or false depending on whether the object is open. A simple, silly mistake.
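For clarity, a minimal corrected sketch of the program above, with the redundant cap.open(0) replaced by cap.isOpened() so the set() calls are not discarded (whether the requested resolution is honoured still depends on the backend, as noted in the other answer):

#include "opencv2/opencv.hpp"

using namespace cv;

int main()
{
    VideoCapture cap(0);     // open the default camera once
    if (!cap.isOpened())     // only query the state, don't re-open
        return 0;

    // These now take effect, since nothing re-opens (and thus resets) cap.
    cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
    cap.set(CAP_PROP_FRAME_WIDTH, 1920);

    for (;;)
    {
        Mat frame;
        cap.read(frame);
        if (frame.empty()) break;
        imshow("this is you, smile! :)", frame);
        if (waitKey(10) == 27) break;  // ESC to quit
    }
    return 0;
}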

Streaming an IP camera in OpenCV

I am trying to obtain video from an Axis 6034E IP camera using OpenCV in C++.
I can easily read the stream using the following simple code:
VideoCapture vid;
vid.open("http://user:password@ipaddress/mjpg/video.mjpg");

Mat frame;
while (true) {
    vid.read(frame);
    imshow("frame", frame);
    waitKey(10);
}
But my problem is that the password contains # and, unfortunately, it is the last character of the password. Any ideas would be appreciated.
I tried \# and some other encoding methods, and it didn't help.

C++ OpenCV: get encoded webcam stream

I am currently working on a project that captures video from a webcam and sends the encoded stream via UDP for real-time streaming.
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
VideoCapture cap(0); // open the video camera no. 0
double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH); //get the width of frames of the video
double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); //get the height of frames of the video
while (1)
{
Mat frame;
bool bSuccess = cap.read(frame); // read a new frame from video
if (!bSuccess) //if not success, break loop
{
cout << "Cannot read a frame from video stream" << endl;
break;
}
return 0;
}
Some people say that the frame obtained from cap.read(frame) is already the decoded frame; I have no idea how and when that happens. What I want is the encoded frame or stream. What should I do to get it? Should I encode it back again?
According to the docs, calling VideoCapture::read() is equivalent to calling VideoCapture::grab() then VideoCapture::retrieve().
The docs for the Retrieve function say it does indeed decode the frame.
Why not just use the decoded frame; presumably you'd be decoding it at the far end in any case?
The OpenCV API does not give access to the encoded frames.
You will have to use a lower-level library, probably device- and platform-dependent. If your OS is Linux, Video4Linux2 may be an option; there must be equivalent libraries for Windows/macOS. You may also have a look at mjpg-streamer, which does something very similar to what you want to achieve (on Linux only).
Note that the exact encoding of the image will depend on your webcam: some USB webcams support MJPEG compression (or even H.264), but others are only able to send raw data (usually in a YUV colorspace).
Another option is to grab the decoded image with OpenCV and reencode it, for example with imencode. This has the advantages of simplicity and portability, but reencoding the image will use more resources.
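A minimal sketch of that last option, reencoding each captured frame to JPEG in memory with imencode; the quality value is arbitrary, and sending the resulting buffer over UDP (chunking, sequencing, and so on) is left to your own networking code:

#include "opencv2/opencv.hpp"
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    vector<int> params;
    params.push_back(CV_IMWRITE_JPEG_QUALITY); // JPEG quality setting
    params.push_back(80);

    for (int i = 0; i < 100; ++i) // grab a fixed number of frames for the sketch
    {
        Mat frame;
        if (!cap.read(frame))
            break;

        // Reencode the decoded frame to a JPEG byte buffer in memory.
        vector<uchar> buf;
        if (!imencode(".jpg", frame, buf, params))
            break;

        // buf now holds the compressed frame, ready to be handed to UDP code.
        cout << "encoded frame " << i << ": " << buf.size() << " bytes" << endl;
    }
    return 0;
}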

OpenCV doesn't detect FireWire webcam on Linux

I have connected a camera through FireWire and tried to access it using OpenCV. The camera is detected in Coriander and I am able to get a video stream there. Below is the code I used:
#include "/home/iiith/opencv-2.4.9/include/opencv/cv.h"
#include "/home/iiith/opencv-2.4.9/include/opencv/highgui.h"
#include "cxcore.h"
#include <iostream>
using namespace cv;
using namespace std;
int main(int,char**)
{
VideoCapture cap(0);
if(!cap.isOpened())
cout<<"Camera not detected"<<endl;
while(1)
{
Mat frame;
namedWindow("display",1);
cap >> frame;
imshow("display",frame);
waitKey(0);
}
}
When I run this code, the video is streamed from the webcam instead of my FireWire camera. I tried the same code on my friend's system and there the FireWire camera was detected. I checked the settings using different commands such as testlibraw and lsmod, and they are all the same. Even the OpenCV version (2.4.9) and Ubuntu 12.04 are the same. This is really bizarre and I have been at this for 2 days. Can anyone please tell me what the difference could be? How can I get the external camera detected in OpenCV? Thanks in advance.
Note: Does this have something to do with setting the default camera? Thanks.
Update 1: VideoCapture cap(1) gives the following error:
HIGHGUI ERROR: V4L: index 1 is not correct!
Does this mean the camera is not recognized?
First, you should make sure that the camera is recognized by your OS:
unplug the camera and wait a few seconds;
open a terminal and type:
watch dmesg
lspci | grep -E -i "(1394|firewire)" # this could give you something
plug your device back in and watch for new entries in the terminal.
If your device is recognized, you can launch a command like this:
mplayer tv:// -tv driver=v4l2:width=352:height=288
A possible problem could be that the camera connected through FireWire is not recognized by the system.
First, try to see the camera output using AMCap or some other webcam software and check whether you are able to see it.
If you are not able to see the video in AMCap, then it means the drivers for that particular camera are missing.

Video from 2 cameras (for Stereo Vision) using OpenCV, but one of them is lagging

I'm trying to create stereo vision using 2 Logitech C310 webcams.
But the result is not good enough: one of the videos lags compared to the other.
Here is my OpenCV program using VC++ 2010:
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    try
    {
        VideoCapture cap1;
        VideoCapture cap2;

        cap1.open(0);
        cap1.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        cap2.open(1);
        cap2.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        Mat frame, frame1;
        for (;;)
        {
            cap1 >> frame;
            cap2 >> frame1;

            transpose(frame, frame);
            flip(frame, frame, 1);
            transpose(frame1, frame1);
            flip(frame1, frame1, 1);

            imshow("Img1", frame);
            imshow("Img2", frame1);

            if (waitKey(1) == 'q')
                break;
        }
        cap1.release();
        return 0;
    }
    catch (cv::Exception& e)
    {
        cout << e.what() << endl;
    }
}
How can I avoid the lagging?
You're probably saturating the USB bus.
Try to plug one camera into a port at the front and the other at the back (in the hope of landing on different buses), or reduce the frame size / FPS to generate less traffic.
I'm afraid you can't do it like this. The OpenCV VideoCapture is really only meant for testing: it uses the simplest underlying operating system features and doesn't really try to do anything clever.
In addition, simple webcams aren't very controllable or sync-able, even if you can find a lower-level API to talk to them.
If you need to use simple USB webcams for a project, the easiest way is to have an external timed LED flashing at a few hertz, detect the light in each camera, and use that to sync the frames.
I know this post is getting quite old, but I had to deal with the same problem recently, so...
I don't think you were saturating the USB bus. If you were, you should have seen an explicit message in the terminal. Actually, the creation of a VideoCapture object is quite slow, and I'm fairly sure that's the reason for your lag: you initialize your first VideoCapture object cap1, cap1 starts grabbing frames, you initialize your second VideoCapture cap2, cap2 starts grabbing frames, and only then do you start getting frames from cap1 and cap2. But the first frame stored by cap1 is older than the one stored by cap2, so you've got a lag.
What you should do, if you really want to use OpenCV for this, is to add some threads: one dealing with the left frames and the other with the right frames, both doing nothing but saving the last frame received (so you will always deal with only the newest frames). If you want frames, you just take the latest one from these threads; a sketch follows below.
I've done a little something, if you need it, here.
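A minimal sketch of that threaded approach, assuming a C++11 toolchain (std::thread, std::mutex) rather than the VC++ 2010 used in the question: one grabber thread per camera keeps only its newest frame, and the main loop just copies out the latest pair.

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

// Continuously grabs frames from one camera and keeps only the newest one.
class FrameGrabber
{
public:
    explicit FrameGrabber(int index) : cap_(index), running_(true)
    {
        worker_ = std::thread(&FrameGrabber::loop, this);
    }
    ~FrameGrabber()
    {
        running_ = false;
        if (worker_.joinable()) worker_.join();
    }
    // Copies the most recent frame into out; returns false if none yet.
    bool latest(cv::Mat& out)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (frame_.empty()) return false;
        frame_.copyTo(out);
        return true;
    }

private:
    void loop()
    {
        cv::Mat grabbed;
        while (running_ && cap_.isOpened())
        {
            if (!cap_.read(grabbed)) continue; // keep trying on a missed frame
            std::lock_guard<std::mutex> lock(mutex_);
            grabbed.copyTo(frame_);            // overwrite with the newest frame
        }
    }

    cv::VideoCapture cap_;
    cv::Mat frame_;
    std::mutex mutex_;
    std::atomic<bool> running_;
    std::thread worker_;
};

int main()
{
    FrameGrabber left(0), right(1);   // device indices 0 and 1 are assumptions
    for (;;)
    {
        cv::Mat l, r;
        if (left.latest(l))  cv::imshow("Img1", l);
        if (right.latest(r)) cv::imshow("Img2", r);
        if (cv::waitKey(1) == 'q') break;
    }
    return 0;
}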