I have a Mobotix camera. It is an IP camera. In the API they offer the possibility to get the feed via
http://[user]:[password]@[ip_address]:[port]/cgi-bin/faststream.jpg?[options]
What I've tried is to open it like a normal webcam feed:

cv::VideoCapture capture("http://...");
cv::Mat frame;

if (capture.isOpened()) // always false anyway
{
    while (true)
    {
        capture.read(frame);
        cv::imshow("Hi there", frame);
        cv::waitKey(10);
    }
}
FYI : Developer Mobotix API Docs
EDIT: Thanks to berak, I just had to add &data=v.mjpg to the options:
?stream=full&fps=5.0&noaudio&data=v.mjpg
Note that in v.mjpg, only the .mjpg extension is important; you could just as well put myfile.mjpg.
Now the problem is the speed at which the feed updates. I get a 2-second delay, and the feed is very, very slow.
And when I change the stream option to MxPEG or mxg I get a corrupted image where the bytes aren't ordered properly.
EDIT: I tried to change the camera parameters directly with the Mobotix control center, but only the resolution affected my OpenCV program; it didn't actually change the speed at which I access the images.
For max speed use fps=0. It's in the API docs. Something like:
http://cameraip/cgi-bin/faststream.jpg?stream=full&fps=0
see http://developer.mobotix.com/paks/help_cgi-image.html
faststream is the MJPEG stream (for image capture). Make sure MxPEG is turned off and pick the smallest image that gives you enough resolution, i.e. get it working at 640 by 480 (set it in the camera's web GUI), then increase the image size.
Note this is for image capture, not video, and you need to detect the beginning and end of each JPEG, then copy it from the receive buffer into memory.
VLC can handle MxPEG, but you need to either start it from the command line with vlc --ffmpeg-format=mxg or set the option ffmpeg-format=mxg in the GUI.
see https://wiki.videolan.org/MxPEG
I know this post is quite old but I thought to answer for anyone else who comes across this issue. To get a stream without frame rate limitations you need to use a different CGI command:
http://<camera_IP>/control/faststream.jpg?stream=full&fps=0
As per the camera's online help:
http://<camera_IP>/cgi-bin/faststream.jpg (guest access)
http://<camera_IP>/control/faststream.jpg (user access)
The "guest" access is indeed limited to 2 fps by default, but this can be changed from the page Admin Menu > Language and Start Page.
A detailed description of how to retrieve a live stream from a MOBOTIX camera is available at the following link: https://community.mobotix.com/t/how-to-access-a-live-stream-with-a-video-client-e-g-vlc/202
Related
I am having a hard time figuring out a seemingly simple problem: my aim is to send a video stream to a server, process it using OpenCV, then send back the processed feed to be displayed.
I am thinking of using Kafka to send and receive the feed since I already have some experience with it. However, this raises a problem: OpenCV processes video streams using the VideoCapture class, which is different from just reading a single image with the read method.
If I stream my video feed frame by frame, will I be able to process my feed on the server as a video rather than a single image at a time? And when I get back the processed frames, can I display them again as a video?
I am sure I misunderstood some concepts so please let me know if you need further explanations.
Apologies for the late response. I have built a live-streaming project with basic analytics (face detection) using Kafka and OpenCV.
The publisher application uses OpenCV to access the live video from a webcam / IP camera / USB camera. As you mentioned, VideoCapture.read(frame) fetches a continuous stream of frames/images of the video as a Mat. Each Mat is then converted into a string (JSON) and published to Kafka.
Consumers can then transform these objects as per their requirements (into a BufferedImage for a live-streaming application) or work with the raw form (for a face-detection application). This is the desired solution as it exhibits reusability by allowing one publisher application to produce data for multiple consumers.
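The "Mat converted into a string (JSON)" step could be sketched as follows. The envelope format, field names and helper functions here are illustrative assumptions (a common approach is to JPEG-compress the frame, base64-encode the bytes, and embed them in a JSON message); the actual Kafka produce call is omitted:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Base64-encode raw bytes (e.g. a JPEG-compressed frame) so the result
// can be embedded in a JSON message for a Kafka topic.
std::string base64Encode(const std::vector<uint8_t>& data)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t i = 0;
    while (i + 2 < data.size()) {            // full 3-byte groups
        uint32_t n = (data[i] << 16) | (data[i + 1] << 8) | data[i + 2];
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += tbl[(n >> 6) & 63];  out += tbl[n & 63];
        i += 3;
    }
    if (i + 1 == data.size()) {              // one byte left: pad with ==
        uint32_t n = data[i] << 16;
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += "==";
    } else if (i + 2 == data.size()) {       // two bytes left: pad with =
        uint32_t n = (data[i] << 16) | (data[i + 1] << 8);
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += tbl[(n >> 6) & 63];  out += '=';
    }
    return out;
}

// Wrap one frame in a minimal JSON envelope, ready to publish as the
// Kafka message value. The field names are illustrative.
std::string frameToJson(int cameraId, long timestampMs,
                        const std::vector<uint8_t>& jpegBytes)
{
    return "{\"camera\":" + std::to_string(cameraId) +
           ",\"ts\":" + std::to_string(timestampMs) +
           ",\"jpeg\":\"" + base64Encode(jpegBytes) + "\"}";
}
```

The consumer reverses the process: parse the JSON, base64-decode the jpeg field, and decode the bytes back into an image.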
I'm trying to use DirectShow to capture video from a webcam. I assume I should use the SampleGrabber class. So far I see that DirectShow can only read frames continuously at some desired fps. Can DirectShow read frames on request?
A DirectShow pipeline sets up streaming video. Frames will continuously stream through the Sample Grabber and its callback, if you set one up. The callback itself adds minimal processing overhead provided you don't force a format change (in particular, forcing the video to be RGB). It is up to you whether to process or skip a given frame there.
On-request grabbing means taking either the last known video frame streamed, or the next one to go through the Sample Grabber. This is the typical mode of operation.
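That "last known frame" pattern can be sketched as a small thread-safe holder. This is an illustrative, platform-neutral skeleton (the class name is mine); in a real application the DirectShow Sample Grabber callback would call store() on every sample, and the application would call latest() on request:

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Keeps the most recent frame delivered by a streaming callback so a
// consumer can grab "the last known frame" on request.
class LatestFrameHolder
{
public:
    // Called from the streaming callback with each new frame's bytes.
    void store(const uint8_t* data, size_t len)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        frame_.assign(data, data + len);
        hasFrame_ = true;
    }

    // Called on request; returns false until at least one frame arrived.
    bool latest(std::vector<uint8_t>& out) const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!hasFrame_) return false;
        out = frame_;
        return true;
    }

private:
    mutable std::mutex mutex_;
    std::vector<uint8_t> frame_;
    bool hasFrame_ = false;
};
```

The mutex matters because the Sample Grabber callback runs on the streaming thread, not the application thread that issues the request.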
Some devices offer additional feature of taking a still on request. This is a rarer case and it's described on MSDN here: Capturing an Image From a Still Image Pin:
Some cameras can produce a still image separate from the capture
stream, and often the still image is of higher quality than the images
produced by the capture stream. The camera may have a button that acts
as a hardware trigger, or it may support software triggering. A camera
that supports still images will expose a still image pin, which is pin
category PIN_CATEGORY_STILL.
The recommended way to get still images from the device is to use the
Windows Image Acquisition (WIA) APIs. [...]
To trigger the still pin, use [...]
I'm capturing images from a cam using the OpenCV C API and sending them over TCP sockets.
The server is running C++ (Qt) and receives the frames.
The process is working fine and I can see the images on the server.
The weird problem is that when I close both programs and rerun the client and the server, I first see the frame I already saw in the previous test.
If I close both programs again and rerun them, I see a new frame, not the second one, and the process continues.
To make it more clear:
capture 1, close, capture 1, close, capture 3, close, capture 3, close, capture 5, ... etc.
I've never seen anything like this before!
I had the same problem before.
The frames are pretty large and you read from the buffer in a random way (just guessing); you have to add a timer or an acknowledgement between the camera and OpenCV.
Just try to control the way the camera captures frames.
I don't know much about TCP/IP or client/server programming, but all I can suggest is to initialize the images, generally in the constructors of the camera/client/server class:

Mat frame = Mat::zeros(rows, cols, CV_8UC3);

so that every time the client/server is initialized, or before you are ready to exchange images, the start-up image is a blank image.
If you are initializing with cvCreateImage(), you can do the following:

IplImage *m = cvCreateImage(cvSize(200, 200), 8, 3); // say it's 200 x 200
cvZero(m);
cvShowImage("BLANK", m);
cvWaitKey();

This shows a black image with every pixel set to zero.
Of course this issue comes from the camera. It seems the camera has to receive an acknowledgment once a frame is grabbed. One thing you can try: go to the line of code that sends the image and save the image to disk, in order to check whether capture 1 is really sent twice.
I am working on a gateway simulator where the simulator will stream images/video to a data center.
I have JPEG files covering 30 min (lots of individual JPEG images).
The data center can request video/images with varying values of these parameters.
Image options
1. Mirror effect (None, Column, Row, Row/Column)
2. Brightness (Normal, Intense Light, Low Light, MAX)
3. Zoom level (1X, 2X, 4X, 8X)
Capture modes
Single Snapshot - requests one image from the camera
Burst Number - gathers N (1-65535) images from the camera
Burst Second - produces a stream of images and goes until a CancelImageRequest command is sent
Continuous - produces a stream of images and goes until a CancelImageRequest command is sent
Round-Robin - a mode that allows the user to get a single snapshot from each active and selected sensor
Schedule Continuous - similar to Continuous except for the timing.
Now I need to read the JPEG files based on the above-mentioned options and send them to the data center.
I want to know how I can enforce these image options while reading the data.
Is there any API which allows reading JPEG images with these image options?
If you have any suggestions, please go ahead.
GDI+ has an Image class that can load JPEGs and manipulate them:
http://msdn.microsoft.com/en-us/library/ms534462%28VS.85%29.aspx
If you don't find the manipulation you're looking for, you can use the Bitmap class that inherits from Image, and the BitmapData class that allows you direct access to pixels:
http://msdn.microsoft.com/en-us/library/ms534420%28VS.85%29.aspx
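As an illustration of how one of the requested options could be enforced in plain code, here is a sketch of the mirror effect on a raw grayscale buffer. The enum and function names are mine, and the exact meaning of the Row/Column mirror modes is an assumption (Row taken as a vertical flip, Column as a horizontal flip):

```cpp
#include <cstdint>
#include <vector>

// Mirror modes matching the simulator's "Mirror Effect" option.
enum class Mirror { None, Column, Row, RowColumn };

// Apply the mirror effect to a width*height grayscale buffer (row-major,
// one byte per pixel). "Column" flips horizontally, "Row" flips
// vertically, "RowColumn" does both (a 180-degree rotation).
std::vector<uint8_t> applyMirror(const std::vector<uint8_t>& src,
                                 int width, int height, Mirror mode)
{
    std::vector<uint8_t> dst(src.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int sx = (mode == Mirror::Column || mode == Mirror::RowColumn)
                         ? width - 1 - x : x;
            int sy = (mode == Mirror::Row || mode == Mirror::RowColumn)
                         ? height - 1 - y : y;
            dst[y * width + x] = src[sy * width + sx];
        }
    }
    return dst;
}
```

With GDI+ the same effect is available directly on a loaded JPEG via Image::RotateFlip (e.g. RotateNoneFlipX for a horizontal mirror), without touching pixels yourself.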
I'm building a webcam application as my C++ project in college. I am integrating Qt (for the GUI) and OpenCV (for image processing). My application will be a simple webcam app that will access the webcam, show/record videos, capture images and other things.
Well, I also want to add a feature to put cliparts onto captured images or the streaming video. During my research, I found out that there is no way to overlay two images using OpenCV. The best alternative I could find was to rewrite the whole image so the clipart is baked into the original, making it a single image. That's not going to work for me, as I have to be able to move, resize or rotate the clipart on my canvas.
So, I was wondering if anybody could tell me how to achieve the effect I want most efficiently.
I would really appreciate your help. The deadline for the project submission is closing in and this is a huge bump on the road to completion. PLEEEASE... HELP!!
If you just want to stick a logo onto the OpenCV image, then you simply define a region of interest (ROI) on the destination video image and copy the source image to it (the details vary with each version of OpenCV).
If you want the logo to be semi-transparent, like a TV channel ID, then you can copy the image but loop over the pixels, writing a destination of source_pixel/2 + dest_pixel/2.
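The source_pixel/2 + dest_pixel/2 idea can be sketched on raw grayscale buffers like this (a minimal illustration; the function name is mine, and with cv::Mat you would take a ROI and use cv::addWeighted with 0.5/0.5 weights instead):

```cpp
#include <cstdint>
#include <vector>

// Blend a small logo into a grayscale frame at 50% opacity: each output
// pixel inside the logo region becomes logo_pixel/2 + frame_pixel/2.
// Both buffers are row-major, one byte per pixel; (posX, posY) is the
// top-left corner of the logo region inside the frame.
void blendLogo(std::vector<uint8_t>& frame, int frameW,
               const std::vector<uint8_t>& logo, int logoW, int logoH,
               int posX, int posY)
{
    for (int y = 0; y < logoH; ++y) {
        for (int x = 0; x < logoW; ++x) {
            uint8_t& d = frame[(posY + y) * frameW + (posX + x)];
            d = static_cast<uint8_t>(logo[y * logoW + x] / 2 + d / 2);
        }
    }
}
```

Because only the destination pixels change, the logo can be "moved" by re-copying the original frame and blending at a new (posX, posY) each frame, which is what makes this approach work for live video.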