Using FFmpeg with DirectShow - C++

I have a problem using DirectShow for HD video streams from IP cameras: DirectShow does not seem to support HD video. I was thinking of using FFmpeg to grab the RTSP stream from the camera and pipe it to DirectShow. I'm wondering if this will produce HD video? If not, do you have any suggestions?
Thanks in Advance

Short answer: yes.
A longer answer would be that HD streams are no different from SD streams; they just contain much more data and require more bandwidth. In your case you would need to know what type of encoding is being used by the IP camera. In most cases it will be H.264.
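To put "much more data" into numbers, here is a back-of-the-envelope sketch (illustrative arithmetic only; these are uncompressed rates, while a real IP camera delivers H.264-compressed data at a small fraction of this):

```cpp
// Raw (uncompressed) data rate in Mbit/s for a given video mode.
// Illustrative only: real IP-camera streams are H.264-compressed,
// so the absolute numbers are far lower, but the SD-vs-HD ratio holds.
constexpr double raw_mbps(int width, int height, int bits_per_pixel, int fps) {
    return static_cast<double>(width) * height * bits_per_pixel * fps / 1e6;
}

// SD  720x576   @ 25 fps, 16 bpp (YUY2) -> ~166 Mbit/s uncompressed
// HD  1920x1080 @ 25 fps, 16 bpp        -> ~829 Mbit/s uncompressed, ~5x SD
```

So the filter graph itself does not care whether the stream is SD or HD; the decoder and the network path simply have to keep up with roughly five times the data.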
For playing back HD streams using directshow, you would require two filters:
A network receiver filter to receive data off a socket
A codec filter to decode the stream
If you are using an IP cam like an Axis, it will be using the H.264 codec and the stream will be carried over RTP.
You can take a look at the MainConcept SDK for a demo version of filters that support HD over RTP/H.264.
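Since the stream arrives over RTP, the network receiver filter's first job is to parse the RTP packet headers (RFC 3550) before handing the H.264 payload to the decoder. A minimal sketch of the fixed-header parse (the struct and function names are mine, not from any SDK):

```cpp
#include <cstddef>
#include <cstdint>

// Minimal RTP fixed-header fields (RFC 3550). A DirectShow network receiver
// filter would parse these off the socket before passing payload downstream.
struct RtpHeader {
    uint8_t  version;
    bool     marker;
    uint8_t  payload_type;
    uint16_t sequence;
    uint32_t timestamp;
};

// Parse the 12-byte fixed header; returns false if the buffer is too short
// or the version is not 2. CSRC list and header extensions are omitted.
bool parse_rtp_header(const uint8_t* p, size_t len, RtpHeader& h) {
    if (len < 12) return false;
    h.version = p[0] >> 6;
    if (h.version != 2) return false;
    h.marker       = (p[1] & 0x80) != 0;
    h.payload_type = p[1] & 0x7F;                       // 96+ = dynamic (H.264)
    h.sequence     = static_cast<uint16_t>((p[2] << 8) | p[3]);
    h.timestamp    = (static_cast<uint32_t>(p[4]) << 24) |
                     (static_cast<uint32_t>(p[5]) << 16) |
                     (static_cast<uint32_t>(p[6]) << 8)  |
                      static_cast<uint32_t>(p[7]);
    return true;
}
```

H.264 payloads additionally use the RFC 6184 packetization (FU-A fragmentation and so on), which the receiver filter has to reassemble before the decoder sees complete NAL units.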

Related

Frame loss above FullHD resolution. Is an AVI Decompressor transform filter available in Media Foundation?

I'm developing a multimedia streaming application for desktop using the Media Foundation SourceReader technique.
I'm using a USB camera device to show the stream. The camera supports two video formats: YUY2 and MJPG.
For 1920x1080p YUY2 video, I'm receiving only 48 fps instead of 60 fps. I took the YUY2-to-RGB32 conversion from the MSDN page and use it in my application (note: I didn't use any transform filter for colour conversion).
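For reference, the integer approximation on that MSDN page (BT.601 studio swing) looks roughly like this; the function names here are mine, and each YUY2 macropixel (Y0 U Y1 V) yields two RGB pixels sharing one U/V pair:

```cpp
#include <algorithm>
#include <cstdint>

// Clamp an intermediate value into the 0..255 byte range.
static uint8_t clip(int v) {
    return static_cast<uint8_t>(std::min(std::max(v, 0), 255));
}

// Convert one Y'UV sample to RGB using the integer approximation from the
// MSDN "YUV to RGB conversion" notes (BT.601). In YUY2, two horizontally
// adjacent pixels (Y0 and Y1) share the same U/V chroma pair.
void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                uint8_t& r, uint8_t& g, uint8_t& b) {
    int c = y - 16, d = u - 128, e = v - 128;
    r = clip((298 * c + 409 * e + 128) >> 8);
    g = clip((298 * c - 100 * d - 208 * e + 128) >> 8);
    b = clip((298 * c + 516 * d + 128) >> 8);
}
```

Done per pixel in a scalar loop this is exactly the kind of conversion that can eat your frame budget at 1920x1080x60; an SSE version, or letting the GPU do the conversion in the Direct3D9 present path, is usually much cheaper.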
For the MJPG video format, I used the MJPEG Decoder MFT to convert MJPG to YUY2 to RGB32 and then display on the window using Direct3D9. For certain resolutions I see the frame rate drop from 60 fps to 30 fps (e.g. 1920x1080 is reported as 60 fps but draws only 30-33 fps).
I verified the filters two ways in GraphEdit:
Added the MJPEG Decompressor filter and built the graph for the MJPG format to check the fps at FullHD resolution: it shows 28 fps instead of 60 fps.
Added the AVI Decompressor filter and built the graph for the MJPG format to check the fps at FullHD resolution: it shows 60 fps.
I have searched many sites for an AVI Decompressor equivalent for Media Foundation, but no luck.
Can anyone confirm whether such a filter is available as an MFT?
Microsoft ships [recent versions of] Windows with stock Motion JPEG decoders:
MJPEG Decompressor Filter for DirectShow
MJPEG Decoder MFT for Media Foundation
To the best of my knowledge those do not share a codebase; however, neither is meant to be a performance-efficient decoder.
Your use of GraphEdit means you are testing the DirectShow decoders, and AVI Decompressor is presumably using some other (Video for Windows) codec which you did not identify.
For Media Foundation, you might be able to use the Intel Hardware M-JPEG Decoder MFT or the NVIDIA MJPEG Video Decoder MFT if you have the respective hardware and drivers. Presumably, vendor-specific decoders deliver better performance, and they also have higher priority than the generic software peers. Other than that, for an MFT form factor you might need to look at commercial or custom-developed decoders, as the API itself is not popular enough to offer a wide range of options.

How to get a raw MJPG stream from a webcam

I have a Logitech webcam which streams 1080p@30fps using MJPG compression via USB 2.0. I need to write this raw stream to the hard drive or send it over the network. I do NOT need to decompress it. OpenCV gives me decompressed frames, so I would need to compress them back, which wastes a lot of CPU. How can I get the raw MJPEG stream as it comes from the camera? (Windows 7, Visual Studio, C++)
The Windows native video capture APIs, DirectShow and Media Foundation, let you capture video from a webcam in its original format. It is a natural task for these APIs and is done in a straightforward way (specifically, if a web camera delivers a hardware-compressed M-JPEG feed, you can get exactly that programmatically).
About Video Capture in DirectShow
Audio/Video Capture in Media Foundation
You are free to do whatever you want with the data afterwards: decompress it, send it over the network, compose a Motion-JPEG-over-HTTP response feed, etc.
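For the "write to hard drive" part, once you receive the raw M-JPEG buffers you mostly need to find frame boundaries. A sketch of a naive splitter (the function name and approach are mine; a robust parser should walk the JPEG marker segments, since FF D8 can also appear inside entropy-coded data):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: split a raw byte buffer from the camera into complete
// JPEG frames by scanning for SOI (FF D8) and EOI (FF D9) markers. Each
// returned element is one self-contained JPEG image you can write to disk.
std::vector<std::vector<uint8_t>> split_mjpeg(const std::vector<uint8_t>& buf) {
    std::vector<std::vector<uint8_t>> frames;
    size_t start = 0;
    bool in_frame = false;
    for (size_t i = 0; i + 1 < buf.size(); ++i) {
        if (buf[i] == 0xFF && buf[i + 1] == 0xD8 && !in_frame) {
            start = i;                // start of image
            in_frame = true;
        } else if (buf[i] == 0xFF && buf[i + 1] == 0xD9 && in_frame) {
            // end of image: copy [start, i+2) as one frame
            frames.emplace_back(buf.begin() + start, buf.begin() + i + 2);
            in_frame = false;
        }
    }
    return frames;
}
```

In practice, DirectShow and Media Foundation hand you one complete compressed frame per sample anyway, so this kind of scan is mainly useful if you concatenate frames into a single file and later need to split them again.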

Capturing H264 with logitech C920 to OpenCV

I’ve been trying to capture an H264 stream from my two Logitech C920 cameras with OpenCV (on a Raspberry Pi 2). I have come to the conclusion that this is not possible because it is not yet implemented. I looked a little into OpenCV/modules/highgui/cap_libv4l.cpp and found that the VideoCapture function always converts the pixel format to BGR24. I tried to change this to H264, but only got a black screen. I guess this is because the stream is not being decoded the right way.
So I made a workaround using:
V4l2loopback
h264_v4l2_rtspserver
Gstreamer-0.10
(You can find the loopback and rtspserver on github)
First I set up a virtual device using v4l2loopback. Then the rtspserver captures in H264 and streams RTSP to my localhost (127.0.0.1). Then I catch it again with GStreamer and pipe it to my virtual v4l2 video device created by the loopback, using the v4l2sink element in gst-launch-0.10.
This solution works and I can actually connect to the virtual device with the OpenCV VideoCapture and get a full HD picture without overloading the CPU, but it is nowhere near a good enough solution: I get roughly 3 seconds of delay, which is too high for my stereo-vision application, and it uses a ton of bandwidth.
So I was wondering if anybody knew a way that I could use the v4l2 capture program from Derek Molloy's boneCV/capture program (which I know works) to capture in H264, then maybe pipe it to gst-launch-0.10 and from there to the v4l2sink for my virtual device?
(You can find the capture program here: https://github.com/derekmolloy/boneCV)
The gstreamer command I use is:
"gst-launch-0.10 rtspsrc location=rtsp://admin:pi@127.0.0.1:8554/unicast ! decodebin ! v4l2sink device=/dev/video4"
OR maybe in fact you know what I would need to change in the OpenCV highgui code to be able to capture H264 directly from my device without having to use the virtual device? That would be amazingly awesome!
Here are the links to the loopback and the rtspserver that I use:
github.com/mpromonet/h264_v4l2_rtspserver
github.com/umlaeute/v4l2loopback
Sorry about the weird links, I don't have enough reputation yet to post more links.
I don't know exactly what you would need to change in OpenCV, but I very recently started coding with video on the Raspberry Pi.
I'll share my findings with you.
I got this so far:
I can read the C920 H264 stream directly from the camera using the V4L2 API at 30 fps (if you try to read YUYV buffers instead, the driver limits you to 10 fps, 5 fps or 2 fps over USB...)
I can decode the stream to YUV 4:2:0 buffers using the Broadcom chip on the Raspberry Pi via the OpenMAX IL API
My Work In Progress code is at: GitHub.
Sorry about the code organization, but I think the abstraction I made is more readable than plain V4L2 or OpenMAX code.
Some code examples:
Reading camera h264 using V4L2 Wrapper:
device.streamON();
v4l2_buffer bufferQueue;
while (!exit_requested) {
    // capture code
    device.dequeueBuffer(&bufferQueue);
    // use the h264 buffer inside bufferPtr[bufferQueue.index]
    ...
    device.queueBuffer(bufferQueue.index, &bufferQueue);
}
device.streamOFF();
Decoding h264 using OpenMax IL:
BroadcomVideoDecode decoder;
while (!exit_requested) {
    // capture code start
    ...
    // decoding code
    decoder.writeH264Buffer(bufferPtr[bufferQueue.index], bufferQueue.bytesused);
    // capture code end
    ...
}
Check out Derek Molloy on YouTube. He's using a BeagleBone, but this presumably ticks the box:
https://www.youtube.com/watch?v=8QouvYMfmQo

How to use live555 streaming media forwarding

I use the Live555 H.264 stream client to query frame packets from an IP camera, use FFmpeg to decode the buffers, and analyse the frames with OpenCV. (The pipeline is based on the testRTSPClient sample; I decode the H.264 frame buffer in DummySink::afterGettingFrame() with FFmpeg.)
Now I want to stream the frames to another (remote) client in on-demand mode in real time; the frames may have analysis results added (bounding boxes, text, etc.). How can I use Live555 to achieve this?
Well, your best bet is to re-encode the resulting frames (with bounding boxes etc.) and pass them to an RTSPServer process, which will allow you to connect to it using an RTSP URL and stream the encoded data to any compatible RTSP client. There is a good entry in the FAQ on how to do this, http://www.live555.com/liveMedia/faq.html#liveInput, which walks you through the steps and provides example source code that you can modify for your needs.

FFMpeg encoding RGB images to H264

I'm developing a DirectShow filter which has 2 input pins (1 for audio, 1 for video). I'm using libavcodec/libavformat/libavutil from FFmpeg to encode the video to H264, encode the audio to AAC, and mux/stream them using RTP. So far I have been able to encode video and audio correctly using libavcodec, but now I see that FFmpeg seems to support RTP muxing too. Unfortunately, I can't find any example code that shows how to perform H264 encoding and RTP muxing. Does anybody know of good samples?
Try checking out the code in HandBrake, specifically the file muxmp4.c, which was a gem I found when working with FFmpeg/RTP. Be sure to use av_interleaved_write_frame() and the extradata fields correctly. Those were some key differences I remember for RTP.
Still, I had some stability issues with RTP/RTSP in FFmpeg (I'm sure it's getting better). I had much better luck with live555; you can look at the code in VLC and MPlayer for good examples of how to use it.