Getting a snapshot from an rtsp video stream from an IP camera - c++

Normally, I can get a still snapshot from an IP camera with a vendor provided url. However, the jpegs served this way are not of good enough quality and the vendor says there is no facility provided for serving snapshots in other image formats or smaller/lossless compression.
I noticed when I open an rtsp h264 stream from the camera with VLC then manually take a screenshot, the resulting image has none of the jpeg artifacts observed previously.
The question is, how would I obtain these superior snapshots from an h264 stream with a c++ program? I need to perform multiple operations on the image (annotations, cropping, face recognition), but those have to start from the highest-quality initial image I can get.
(note that this is related to my previous question. I obtained jpeg images with CURL but would now like to replace the snapshot getter with this new one if possible. I am again running on linux, Fedora 11)

You need an RTSP client implementation to connect to the camera, start receiving the video feed, and depacketize/reassemble the video frames; you can then save, process, or present them as needed.
You might want to look at the live555 library, a well-known RTSP library/implementation.

Related

Live streaming and processing with opencv

I am having a hard time figuring out a seemingly simple problem: my aim is to send a video stream to a server, process it using OpenCV, then send the processed feed back to be displayed.
I am thinking of using Kafka to send and receive the feed since I already have some experience with it. However, this raises a problem: OpenCV processes video streams using the VideoCapture class, which is different from just reading a single image using the read method.
If I stream my video feed frame by frame, will I be able to process the feed on the server as a video rather than a single image at a time? And when I get the processed frames back, can I display them again as a video?
I am sure I misunderstood some concepts so please let me know if you need further explanations.
Apologies for the late response. I have built a Live-streaming project with a basic Analytics (Face Detection) using Kafka and OpenCV.
The publisher application uses OpenCV to access the live video from a webcam / IP camera / USB camera. As you mentioned, VideoCapture.read(frame) fetches the video as a continuous stream of frames, each as a Mat. Each Mat is then converted into a string (JSON) and published to Kafka.
Consumers can then transform these messages as required (into a BufferedImage for the live-streaming application) or work with the raw form (for the face-detection application). This is the desirable design, as it exhibits reusability: a single publisher application produces data for multiple consumers.

Problems on OpenCV with IP Camera

I have recently been working with OpenCV to complete a design. I have an IP camera, and just by typing the camera's IP address and port in my browser, like 192.168.1.1:8080, I can watch the video.
I have installed VS2010 and completed the setup correctly. I can now process pictures and capture video on my computer. But when I tried to capture video from the IP camera with
VideoCapture cap;
cap.open("http://192.168.137.235:8082/index.html");
there is an error:
Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:545)
How can I solve this problem?
I've tried to capture video directly from my IP camera into an OpenCV-based application via RTSP, as was advised previously. It works, but the FFmpeg decoder is very unstable with the RTSP streams from some cameras.
I found the following workaround.
Some people like to live-stream their PC screen to YouTube. A standard tool for this is XSplit Broadcaster.
This tool has a useful side effect: it can expose an RTSP-compatible IP camera as a virtual USB webcam.
OpenCV captures video from USB webcams perfectly.
And the basic license of XSplit Broadcaster is absolutely free.
Unfortunately, this solution has a limitation: regardless of your IP camera's real resolution, the virtual webcam's resolution will be 640×480.
The page "index.html" is probably just your camera's main web page, the one a human uses to configure the camera and watch the live view.
IP cameras differ greatly from one another. If your camera supports ONVIF, it should expose an RTSP endpoint. For example, I can watch my IP camera using this path:
rtsp://address:554/onvif1
If your camera serves an MJPEG stream, you should use the corresponding path instead, for example
http://192.168.137.235:8082/live.html
To know the right way to connect, you need to know which camera you have.

how to get raw mjpg stream from webcam

I have a Logitech webcam, which streams 1080p@30fps using MJPG compression via USB 2.0. I need to write this raw stream to the hard drive or send it over the network. I do NOT need to decompress it. OpenCV gives me decompressed frames, so I would need to compress them back, which wastes a lot of CPU. How do I get the raw MJPEG stream as it comes from the camera instead? (Windows 7, Visual Studio, C++)
Windows' native video capture APIs, DirectShow and Media Foundation, let you capture video from a webcam in its original format. This is a natural task for these APIs and is done in a straightforward way (specifically, if a webcam produces a hardware-compressed M-JPEG feed, you can receive that feed programmatically).
About Video Capture in DirectShow
Audio/Video Capture in Media Foundation
You are free to do whatever you want with the data afterwards: decompress, send over network, compose a Motion JPEG over HTTP response feed etc.
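To make the Media Foundation route concrete, here is a condensed sketch (error handling omitted; every call returns an HRESULT that real code must check) that selects a native MJPG media type on the first capture device so ReadSample delivers the camera's compressed frames untouched:

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstdio>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mf.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

int main() {
    MFStartup(MF_VERSION);

    // Enumerate video capture devices and activate the first one.
    IMFAttributes* attr = nullptr;
    MFCreateAttributes(&attr, 1);
    attr->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                  MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);
    IMFActivate** devices = nullptr;
    UINT32 count = 0;
    MFEnumDeviceSources(attr, &devices, &count);
    if (count == 0) return 1;
    IMFMediaSource* source = nullptr;
    devices[0]->ActivateObject(IID_PPV_ARGS(&source));

    IMFSourceReader* reader = nullptr;
    MFCreateSourceReaderFromMediaSource(source, nullptr, &reader);

    // Find a native media type whose subtype is MJPG and select it, so
    // the reader hands us the camera's compressed frames as-is.
    for (DWORD i = 0; ; ++i) {
        IMFMediaType* type = nullptr;
        if (FAILED(reader->GetNativeMediaType(
                MF_SOURCE_READER_FIRST_VIDEO_STREAM, i, &type)))
            break;
        GUID subtype{};
        type->GetGUID(MF_MT_SUBTYPE, &subtype);
        bool isMjpg = (subtype == MFVideoFormat_MJPG);
        if (isMjpg)
            reader->SetCurrentMediaType(
                MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, type);
        type->Release();
        if (isMjpg) break;
    }

    // Pull one compressed sample and dump the raw JPEG bytes to disk.
    IMFSample* sample = nullptr;
    DWORD streamIdx = 0, flags = 0;
    LONGLONG ts = 0;
    while (sample == nullptr) {   // a read may return only a stream tick
        reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                           0, &streamIdx, &flags, &ts, &sample);
        if (flags & MF_SOURCE_READERF_ENDOFSTREAM) break;
    }
    if (sample) {
        IMFMediaBuffer* buf = nullptr;
        sample->ConvertToContiguousBuffer(&buf);
        BYTE* data = nullptr;
        DWORD len = 0;
        buf->Lock(&data, nullptr, &len);
        FILE* f = std::fopen("frame.jpg", "wb");
        std::fwrite(data, 1, len, f);
        std::fclose(f);
        buf->Unlock();
        buf->Release();
        sample->Release();
    }
    reader->Release();
    source->Release();
    MFShutdown();
    return 0;
}
```

Looping on ReadSample and appending each sample's bytes gives you the raw M-JPEG stream for disk or network, with no decode/re-encode cycle.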

How to use live555 streaming media forwarding

I use a Live555 H.264 stream client to pull frame packets from an IP camera, use ffmpeg to decode the buffers, and analyze the frames with OpenCV. (This pipeline is based on the testRTSPClient sample; I decode the H.264 frame buffer in DummySink::afterGettingFrame() with ffmpeg.)
Now I want to stream the frames to another (remote) client in on-demand mode in real time; the frames may have analysis results added (bounding boxes, text, etc.). How can I use Live555 to achieve this?
Well, your best bet is to re-encode the resultant frame (with bounding boxes etc), and pass this to an RTSPServer process which will allow you to connect to it using an rtsp url, and stream the encoded data to any compatible rtsp client. There is a good reference on the FAQ for how to do this http://www.live555.com/liveMedia/faq.html#liveInput which walks you through the steps taken, and provides example source code which you can modify for your needs.
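Following the FAQ's DeviceSource approach, the core piece is a FramedSource subclass that live555 pulls encoded frames from. A bare skeleton is sketched below; the encoded-frame queue and the encoder feeding it are placeholders you must supply, and on the server side you would wrap this source in an OnDemandServerMediaSubsession whose createNewStreamSource() feeds it through an H264VideoStreamDiscreteFramer into an H264VideoRTPSink:

```cpp
#include <FramedSource.hh>
#include <GroupsockHelper.hh> // for gettimeofday on all platforms
#include <cstring>

// Skeleton modeled on live555's DeviceSource template. The queue of
// re-encoded H.264 frames (with the OpenCV annotations baked in) is a
// placeholder you must provide.
class AnnotatedH264Source : public FramedSource {
public:
    static AnnotatedH264Source* createNew(UsageEnvironment& env) {
        return new AnnotatedH264Source(env);
    }
protected:
    explicit AnnotatedH264Source(UsageEnvironment& env)
        : FramedSource(env) {}

    void doGetNextFrame() override {
        // 1. Obtain the next encoded frame from your encoder's output
        //    queue (placeholder: an empty frame here).
        const unsigned char* data = nullptr;
        unsigned size = 0;
        // 2. Copy at most fMaxSize bytes into fTo; report truncation.
        if (size > fMaxSize) {
            fNumTruncatedBytes = size - fMaxSize;
            size = fMaxSize;
        }
        if (data != nullptr) std::memcpy(fTo, data, size);
        fFrameSize = size;
        gettimeofday(&fPresentationTime, nullptr);
        // 3. Tell live555 that delivery of this frame is complete.
        FramedSource::afterGetting(this);
    }
};
```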

Best way to load in a video and to grab images using c++

I am looking for a fast way to load a video file and create images from it at certain intervals (every second, every minute, every hour, etc.).
I tried DirectShow, but it was just too slow to open the video file, seek to a given position, grab the data, and save it out as an image, even with the reference clock disabled. I tried OpenCV, but it has trouble opening the AVI file unless I know the exact codec information; if there is a way to get the codec information out of OpenCV, I may give it another shot. I tried FFMPEG, but it doesn't give me as much control as I would like.
Any advice would be greatly appreciated. This is being developed on a Windows box since it has to be hosted on a Windows box.
MPEG-4 is not an intra-coded format, so you can't just jump to a random frame and decode it on its own, as most frames only encode the differences from one or more other frames. I suspect your decoding is slow because when you land on such a frame, several other frames it depends on must be decoded first.
One way to improve performance would be to determine which frames are keyframes (or sometimes also called 'sync' points) and limit your decoding to those frames, since these can be decoded on their own.
I'm not very familiar with DirectShow capabilities, but I would expect it has some API to expose sync points.
Also, I should mention that the QuickTime SDK on Windows is possibly another good option that you have for decoding frames from movies. You should first test that your AVI movies are played correctly in the QuickTime Player. And the QT SDK does expose sync points, see the section Finding Interesting Times in the QT SDK documentation.
ffmpeg's libavformat might work for ya...
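To make the keyframe idea concrete with libavformat/libavcodec: the sketch below walks a file and decodes only packets flagged AV_PKT_FLAG_KEY, which are the sync points that decode on their own (the file path comes from the command line; most error handling is omitted for brevity):

```cpp
// Compile with: g++ grab.cpp -lavformat -lavcodec -lavutil
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main(int argc, char** argv) {
    if (argc < 2) return 1;

    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
    avformat_find_stream_info(fmt, nullptr);

    // Locate the video stream and set up its decoder.
    int vs = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    if (vs < 0) return 1;
    const AVCodec* codec =
        avcodec_find_decoder(fmt->streams[vs]->codecpar->codec_id);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt->streams[vs]->codecpar);
    avcodec_open2(ctx, codec, nullptr);

    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        // AV_PKT_FLAG_KEY marks packets that decode without references.
        if (pkt->stream_index == vs && (pkt->flags & AV_PKT_FLAG_KEY)) {
            if (avcodec_send_packet(ctx, pkt) == 0 &&
                avcodec_receive_frame(ctx, frame) == 0) {
                std::printf("keyframe pts=%lld %dx%d\n",
                            (long long)frame->pts,
                            frame->width, frame->height);
                // Convert/save the frame here (e.g. via libswscale).
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}
```

For interval sampling, av_seek_frame() with AVSEEK_FLAG_BACKWARD jumps to the keyframe at or before a target timestamp, which maps directly onto the "every second / minute / hour" use case above.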