Streaming an IP camera in OpenCV - c++

I am trying to obtain video from an IP camera (an Axis 6034E) using OpenCV in C++.
I can easily read the stream using the following simple code:
VideoCapture vid;
vid.open("http://user:password@ipaddress/mjpg/video.mjpg");
Mat frame;
while (true) {
    vid.read(frame);
    imshow("frame", frame);
    waitKey(10);
}
But my problem is that the password contains # and, unfortunately, it is the last character of the password. I would appreciate any ideas.
I tried \# and some other encoding methods, and it didn't help.
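For reference, the usual way to put a reserved character such as # into a URL is to percent-encode it (# becomes %23) inside the userinfo part. A minimal sketch, assuming the FFmpeg backend decodes the credentials (hypothetical user and password, not verified against this particular camera):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Hypothetical credentials: user "user", password ending in '#', encoded as %23
    VideoCapture vid("http://user:secret%23@ipaddress/mjpg/video.mjpg");
    if (!vid.isOpened())
        return -1;
    Mat frame;
    while (vid.read(frame))
    {
        imshow("frame", frame);
        if (waitKey(10) == 27) // Esc to quit
            break;
    }
    return 0;
}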

Related

USB camera encoded data stream

Currently, I am working on a streaming project. I need to grab frames from a USB camera and send them over TCP.
To open the USB camera video stream I'm using cv::VideoCapture, which gives me already decoded frames. According to this question, I understood that there is no way to get encoded frame data out of cv::VideoCapture, so I need to encode each frame again with cv::imencode and then send it however I like. The problem is that I can only encode frames to the specific formats listed here, and if I use either .jpg or .png the files are still quite big, so the frame rate on the receiving side is very poor.
My question is: is there any way to get MJPEG or H.264 encoded data directly,
or can you suggest a better way to encode the frames?
OpenCV 3.4.3, camera RICOH THETA V, language C++.
My code:
void Streamer::start()
{
    cv::Mat img;
    cv::VideoCapture cap(0);
    // Ask the camera for an H.264 stream (VideoCapture still hands back decoded frames)
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('H', '2', '6', '4'));
    if (!cap.isOpened())
        throw std::invalid_argument("No device found.");

    // imencode expects (flag, value) pairs
    std::vector<int> format_params;
    format_params.push_back(cv::IMWRITE_PNG_COMPRESSION);
    format_params.push_back(3);

    for (;;)
    {
        cap.read(img);
        // Re-encode the decoded frame and send the byte buffer over the socket
        cv::imencode(".png", img, buffer_, format_params);
        std::string strbuf(buffer_.begin(), buffer_.end());
        server_->sendString(socket, strbuf);
    }
    cap.release();
}
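If re-encoding on the CPU turns out to be unavoidable, JPEG with an explicit quality setting usually produces far smaller buffers than PNG for camera frames. A minimal sketch of that variant, reusing the buffer_ and server_ members from the code above (the quality value is just an example):

// Trade quality for size: 0-100, lower means smaller buffers
std::vector<int> jpeg_params = { cv::IMWRITE_JPEG_QUALITY, 60 };
cv::imencode(".jpg", img, buffer_, jpeg_params);
server_->sendString(socket, std::string(buffer_.begin(), buffer_.end()));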

How to detect motion from already stored webm video in c++

I want to detect motion in an already existing video. The video is stored in the WebM format. I have seen some OpenCV demos, but those samples capture motion from a live webcam stream.
Is there any library or API that can detect motion in a WebM video file in C++?
Please help me.
If you have code that runs with webcam input, you only have to change the input type to accept the video file as input.
Basically, you can accomplish it using the VideoCapture object.
cv::VideoCapture cap("path/for/file.fileextension");
and then put each frame of this input into a Mat:
Mat frame;
cap >> frame;
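A minimal frame-differencing sketch for a file input (whether .webm opens depends on the FFmpeg build behind your OpenCV; the path and threshold values are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("path/for/file.webm");
    if (!cap.isOpened())
        return -1;

    cv::Mat frame, gray, prevGray, diff;
    cap >> frame;
    if (frame.empty())
        return -1;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::absdiff(gray, prevGray, diff);                      // pixel-wise change
        cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);  // keep strong changes
        if (cv::countNonZero(diff) > 500)                       // crude "motion" test
            std::cout << "motion detected" << std::endl;
        gray.copyTo(prevGray);
    }
    return 0;
}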

Video from 2 cameras (for Stereo Vision) using OpenCV, but one of them is lagging

I'm trying to create Stereo Vision using 2 logitech C310 webcams.
But the result is not good enough: one of the videos lags behind the other.
Here is my OpenCV program, using VC++ 2010:
#include <opencv\cv.h>
#include <opencv\highgui.h>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    try
    {
        VideoCapture cap1;
        VideoCapture cap2;

        cap1.open(0);
        cap1.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        cap2.open(1);
        cap2.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        Mat frame, frame1;
        for (;;)
        {
            cap1 >> frame;
            cap2 >> frame1;

            transpose(frame, frame);
            flip(frame, frame, 1);
            transpose(frame1, frame1);
            flip(frame1, frame1, 1);

            imshow("Img1", frame);
            imshow("Img2", frame1);

            if (waitKey(1) == 'q')
                break;
        }

        cap1.release();
        cap2.release();
        return 0;
    }
    catch (cv::Exception &e)
    {
        cout << e.what() << endl;
    }
    return 0;
}
How can I avoid the lagging?
You're probably saturating the USB bus.
Try to plug one in at the front and the other at the back (in the hope of landing on different buses),
or reduce the frame size / FPS to generate less traffic.
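For instance, something along these lines (the values are just an illustration):

cap1.set(CV_CAP_PROP_FRAME_WIDTH, 640.0);   // smaller frames...
cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 480.0);
cap1.set(CV_CAP_PROP_FPS, 15.0);            // ...and fewer of them per second
// and the same for cap2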
I'm afraid you can't do it like this. The OpenCV VideoCapture is really only meant for testing; it uses the simplest underlying operating system features and doesn't really try to do anything clever.
In addition, simple webcams aren't very controllable or sync-able, even if you can find a lower-level API to talk to them.
If you need to use simple USB webcams for a project, the easiest way is to have an external timed LED flashing at a few hertz, detect the light in each camera, and use that to sync the frames.
I know this post is getting quite old, but I had to deal with the same problem recently, so...
I don't think you were saturating the USB bus. If you were, you would have seen an explicit message in the terminal. Actually, creating a VideoCapture object is quite slow, and I'm fairly sure that's the reason for your lag: you initialize your first VideoCapture object cap1, cap1 starts grabbing frames, you initialize your second VideoCapture cap2, cap2 starts grabbing frames, and only then do you start getting frames from cap1 and cap2. But the first frame stored by cap1 is older than the one stored by cap2, so... you've got a lag.
What you should do, if you really want to use OpenCV for this, is add some threads: one dealing with left frames and the other with right frames, both doing nothing but saving the last frame received (so you always deal with the newest frames only). When you want your frames, you just get them from these threads.
I've put together a little something here if you need it.
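A minimal sketch of that threaded-grabber idea, assuming two cameras at indices 0 and 1 and leaving out most shutdown and error handling:

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

// One grabber thread per camera; it only keeps the newest frame it has seen.
struct Grabber
{
    cv::VideoCapture cap;
    cv::Mat latest;
    std::mutex mtx;
    std::atomic<bool> running{true};
    std::thread worker;

    explicit Grabber(int index) : cap(index)
    {
        worker = std::thread([this] {
            cv::Mat frame;
            while (running && cap.read(frame))
            {
                std::lock_guard<std::mutex> lock(mtx);
                frame.copyTo(latest);   // overwrite: only the newest frame is kept
            }
        });
    }

    cv::Mat read()
    {
        std::lock_guard<std::mutex> lock(mtx);
        return latest.clone();
    }

    ~Grabber()
    {
        running = false;
        worker.join();
    }
};

int main()
{
    Grabber left(0), right(1);
    for (;;)
    {
        cv::Mat l = left.read(), r = right.read();
        if (!l.empty()) cv::imshow("Img1", l);
        if (!r.empty()) cv::imshow("Img2", r);
        if (cv::waitKey(1) == 'q')
            break;
    }
    return 0;
}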

Loading an Axis camera in Qt with OpenCV

I have been trying for some time to load an image from an Axis 205 network camera into my Qt program using OpenCV, running on a Windows laptop. According to the camera's configuration page,
The Motion JPEG image stream is fetched from the file:
http://192.168.0.90/axis-cgi/mjpg/video.cgi?resolution=640x480
The login for the camera is root for the username and pass for the password.
I have tried several variations in the code, but I cannot get the program to display the image:
VideoCapture * cap = new VideoCapture("http://192.168.0.90/axis-cgi/mjpg/video.cgi?resolution=640x480");
Mat frame;
cap->read(frame);
Everything I try results in an empty frame. Thanks for any help.
~Gibby
After a bit more messing around, I discovered that the correct code is
VideoCapture * cap = new VideoCapture("http://root:pass@192.168.0.90/axis-cgi/mjpg/video.cgi?resolution=640x480.mjpg");
For whatever reason, the URL must end in mjpg.

Showing a rectangle over a video from camera

Basically, I need to capture video from a video camera, do some processing on the frames, and for each frame show a detection rectangle.
Example: http://www.youtube.com/watch?v=aYd2kAN0Y20
How would you superimpose this rectangle on the output of a video camera (USB)? (C++)
I would use OpenCV, an open source imaging library to get input from a webcam/video file.
Here is a tutorial on how to install it:
http://opensourcecollection.blogspot.com.es/2011/04/how-to-setup-opencv-22-in-codeblocks.html
Then I would use this code:
CvCapture *capture = cvCreateCameraCapture(-1);
IplImage* frame = cvQueryFrame(capture);
This gets the image frame from the CvCapture capture.
In this case, capture is taken directly from a video camera, but you can also create it from a video file with:
CvCapture *capture = cvCreateFileCapture("filename.avi");
Then, I would draw on the image with functions defined here: http://opencv.willowgarage.com/documentation/drawing_functions.html
By the way, the shape in the YouTube video is not a rectangle. It's a parallelogram.
If you want to do it live, then you can basically put this in a loop, getting a frame, processing it, drawing on it, and then outputting the image, like this:
You would include this before your loop:
cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
And then, in your loop, you would say this:
cvShowImage("Capture", frame);
After the processing.
EDIT To do this in C++, open your webcam like this:
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
To initialize it from a file, instead of putting in the camera index, put the file path.
Get a frame from the camera like this:
Mat frame;
cap >> frame; // get a new frame from camera
Then you can find drawing functions here:
http://opencv.willowgarage.com/documentation/cpp/core_drawing_functions.html
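For instance, a minimal sketch that draws a placeholder box on every frame (in practice the coordinates would come from your detection step):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0);               // default USB camera
    if (!cap.isOpened())
        return -1;
    Mat frame;
    for (;;)
    {
        cap >> frame;
        if (frame.empty())
            break;
        // Placeholder coordinates; replace with the box from your detector
        rectangle(frame, Point(100, 100), Point(300, 250), Scalar(0, 255, 0), 2);
        imshow("Capture", frame);
        if (waitKey(30) == 27)         // Esc to quit
            break;
    }
    return 0;
}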
Cheers!