How to use live555 streaming media forwarding - c++

I use a Live555 H.264 stream client to pull frame packets from an IP camera, decode the buffers with FFmpeg, and analyze the frames with OpenCV. (This pipeline is based on the testRTSPClient sample; I decode the H.264 frame buffer with FFmpeg in DummySink::afterGettingFrame().)
Now I want to stream the frames to another (remote) client in OnDemand mode, in real time. The frames may have analysis results drawn on them (bounding boxes, text, etc.). How can I use Live555 to achieve this?

Well, your best bet is to re-encode the resulting frames (with bounding boxes etc.) and pass them to an RTSPServer process, which will let any compatible RTSP client connect via an rtsp:// URL and receive the encoded data. There is a good reference in the FAQ for how to do this, http://www.live555.com/liveMedia/faq.html#liveInput, which walks you through the steps and provides example source code that you can modify for your needs.
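As a rough sketch of that FAQ approach (all class names, the stream name, and the port below are made up for illustration, and the actual delivery of your re-encoded NAL units inside doGetNextFrame() is left as a placeholder), an on-demand live555 server could look roughly like this:

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

// Hypothetical FramedSource that hands live555 your re-encoded H.264 NAL units,
// modelled on live555's DeviceSource.cpp as the FAQ suggests.
class AnalyzedFrameSource : public FramedSource {
public:
    static AnalyzedFrameSource* createNew(UsageEnvironment& env) {
        return new AnalyzedFrameSource(env);
    }
protected:
    AnalyzedFrameSource(UsageEnvironment& env) : FramedSource(env) {}
    void doGetNextFrame() override {
        // Placeholder: copy the next encoded NAL unit from your encoder's queue
        // into fTo (at most fMaxSize bytes), set fFrameSize (and fNumTruncatedBytes
        // if it did not fit), set fPresentationTime, then signal completion:
        FramedSource::afterGetting(this);
    }
};

// On-demand subsession that wraps the source in a discrete H.264 framer.
class AnalyzedFrameSubsession : public OnDemandServerMediaSubsession {
public:
    AnalyzedFrameSubsession(UsageEnvironment& env)
        : OnDemandServerMediaSubsession(env, True /*reuse first source*/) {}
protected:
    FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                        unsigned& estBitrate) override {
        estBitrate = 2000; // kbps, a guess
        return H264VideoStreamDiscreteFramer::createNew(
            envir(), AnalyzedFrameSource::createNew(envir()));
    }
    RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                              unsigned char rtpPayloadTypeIfDynamic,
                              FramedSource* /*inputSource*/) override {
        return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                           rtpPayloadTypeIfDynamic);
    }
};

int main() {
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    RTSPServer* server = RTSPServer::createNew(*env, 8554);
    ServerMediaSession* sms = ServerMediaSession::createNew(
        *env, "analyzed", "analyzed", "re-encoded frames with analysis overlay");
    sms->addSubsession(new AnalyzedFrameSubsession(*env));
    server->addServerMediaSession(sms);

    // Clients can now connect to rtsp://<host>:8554/analyzed
    env->taskScheduler().doEventLoop();
    return 0;
}

Keep in mind that, per the FAQ, delivery must be asynchronous: doGetNextFrame() should return without blocking and only call afterGetting() once a frame is actually available (typically signalled via the scheduler's event-trigger mechanism).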

Related

Live streaming and processing with opencv

I am having a hard time figuring out a seemingly simple problem: my aim is to send a video stream to a server, process it using OpenCV, then send the processed feed back to be displayed.
I am thinking of using Kafka to send and receive the feed, since I already have some experience with it. However, this raises a problem: OpenCV processes video streams using VideoCapture, which is different from just reading a single image with the read method.
If I stream my video feed frame by frame, will I be able to process it on the server as a video rather than one image at a time? And when I get the processed frames back, can I display them again as a video?
I am sure I have misunderstood some concepts, so please let me know if you need further explanation.
Apologies for the late response. I have built a live-streaming project with basic analytics (face detection) using Kafka and OpenCV.
The publisher application uses OpenCV to access live video from a webcam / IP camera / USB camera. As you mentioned, VideoCapture.read(frame) fetches a continuous stream of frames from the video as Mat objects. Each Mat is then serialized (e.g. to a JSON string) and published to Kafka.
Consumers can then transform these messages as their use case requires (into a BufferedImage for a live-streaming application) or work with the raw form (for a face-detection application). This is the desired design because it exhibits reusability: one publisher application produces data for multiple consumers.
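For illustration only, here is a minimal C++ sketch of such a publisher (the original project may well be in another language; publish_to_kafka below is a hypothetical stand-in for whatever Kafka producer client you use, e.g. librdkafka, and the topic name and camera index are made up):

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

// Hypothetical stub: replace the body with a real Kafka produce() call
// (e.g. via librdkafka), sending one message per encoded frame.
void publish_to_kafka(const std::string& topic, const std::vector<uchar>& payload) {
    (void)topic; (void)payload; // no-op placeholder
}

int main() {
    cv::VideoCapture cap(0);                 // webcam; an IP-camera URL also works here
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    std::vector<uchar> jpeg;
    while (cap.read(frame)) {                // continuous stream of Mat frames
        // Compress each frame so the Kafka message stays small; consumers
        // decode it back to an image object with cv::imdecode (or the
        // equivalent in whatever language they are written in).
        cv::imencode(".jpg", frame, jpeg);
        publish_to_kafka("video-frames", jpeg);
    }
    return 0;
}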

how to get raw mjpg stream from webcam

I have a Logitech webcam that streams 1080p@30fps using MJPG compression over USB 2.0. I need to write this raw stream to the hard drive or send it over the network. I do NOT need to decompress it. OpenCV gives me decompressed frames, so I would have to compress them again, which wastes a lot of CPU. How can I get the raw MJPEG stream as it comes from the camera instead? (Windows 7, Visual Studio, C++)
The Windows native video capture APIs, DirectShow and Media Foundation, let you capture video from a webcam in its original format. This is a natural task for these APIs and is done in a straightforward way (specifically, if the camera produces a hardware-compressed M-JPEG feed, you can obtain it programmatically).
About Video Capture in DirectShow
Audio/Video Capture in Media Foundation
You are free to do whatever you want with the data afterwards: decompress it, send it over the network, compose a Motion JPEG over HTTP response feed, etc.
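As an illustration (a sketch only: it assumes the first enumerated capture device, omits all error handling and COM/linker boilerplate, and needs mfplat/mf/mfreadwrite/mfuuid), Media Foundation's source reader can be asked for the camera's native M-JPEG type, after which every sample already contains the compressed JPEG payload:

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Open the first video capture device and select its native MJPG media type,
// so ReadSample() delivers the camera's own compressed frames.
IMFSourceReader* OpenMjpgCamera()
{
    MFStartup(MF_VERSION);

    IMFAttributes* attr = nullptr;
    MFCreateAttributes(&attr, 1);
    attr->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                  MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

    IMFActivate** devices = nullptr;
    UINT32 count = 0;
    MFEnumDeviceSources(attr, &devices, &count);        // assumes count > 0

    IMFMediaSource* source = nullptr;
    devices[0]->ActivateObject(IID_PPV_ARGS(&source));

    IMFSourceReader* reader = nullptr;
    MFCreateSourceReaderFromMediaSource(source, nullptr, &reader);

    // Walk the native types and pick the MJPG one (e.g. 1920x1080 @ 30 fps).
    for (DWORD i = 0; ; ++i) {
        IMFMediaType* type = nullptr;
        if (FAILED(reader->GetNativeMediaType(
                MF_SOURCE_READER_FIRST_VIDEO_STREAM, i, &type))) break;
        GUID subtype;
        type->GetGUID(MF_MT_SUBTYPE, &subtype);
        if (subtype == MFVideoFormat_MJPG) {
            reader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                        nullptr, type);
            type->Release();
            break;
        }
        type->Release();
    }
    return reader;   // each ReadSample() now yields one compressed JPEG frame
}

Each frame can then be pulled with IMFSourceReader::ReadSample, its buffer obtained via IMFSample::ConvertToContiguousBuffer and IMFMediaBuffer::Lock, and the JPEG bytes written to disk or a socket without any re-compression.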

Stream OpenGL framebuffer over HTTP (via FFmpeg)

I have an OpenGL application whose rendered images need to be streamed over the internet to mobile clients. Previously, it sufficed to simply record the rendering into a video file, which is already working; now this should be extended to streaming.
What is working right now:
Render a scene to an OpenGL framebuffer object
Capture the FBO content using NvIFR
Encode it to H.264 using NvENC (no CPU round trip required)
Download the encoded frame to host memory as a byte array
Append this frame to a video file
None of these steps involves FFmpeg or any other library so far. I now want to replace the last step with "stream the current frame's byte array over the internet", and I assume that using FFmpeg and FFserver would be a reasonable choice for this. Am I correct? If not, what would be the proper way?
If so, how do I approach this in my C++ code? As pointed out, the frame is already encoded. Also, there is no sound or other data, just an H.264-encoded frame as a byte array that is updated irregularly and should be converted into a steady video stream. I assume that this would be FFmpeg's job and that the subsequent streaming via FFserver would be simple from there. What I don't know is how to feed my data to FFmpeg in the first place, as all the FFmpeg tutorials I found (in a non-exhaustive search) work on a file or a webcam/capture device as the data source, not volatile data in main memory.
The file mentioned above, which I am already able to create, is a C++ file stream to which I append each single frame, meaning that the different frame rates of the video and the rendering are not handled correctly. This also needs to be taken care of at some point.
Can somebody point me in the right direction? Can I forward data from my application to FFmpeg to build a proper video feed without writing to the hard disk? Tutorials are greatly appreciated. By the way, FFmpeg/FFserver is not mandatory; if you have a better idea for streaming OpenGL framebuffer contents, I'm eager to hear it.
You can feed the ffmpeg process readily encoded H.264 data (-f h264) and tell it to simply copy the stream into the output multiplexer (-c:v copy). To actually get the data into ffmpeg, just launch it as a child process with a pipe connected to its stdin and specify stdin as the reading source:
FILE *ffmpeg_in = popen("ffmpeg -f h264 -i /dev/stdin -c:v copy ...", "w");
You can then write your encoded H.264 stream to ffmpeg_in.
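For example, a minimal sketch of the write loop (frame_data and frame_size are hypothetical names for one encoded H.264 access unit coming out of NvENC):

// Write each encoded frame into the pipe as soon as it is available.
fwrite(frame_data, 1, frame_size, ffmpeg_in);
fflush(ffmpeg_in);   // keep latency low; ffmpeg consumes the data as it arrives

// When you are done streaming, close the pipe so ffmpeg finalizes its output.
pclose(ffmpeg_in);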

DXGI Desktop Duplication: encoding frames to send them over the network

I'm trying to write an app which will capture a video stream of the screen and send it to a remote client. I've found that the best way to capture the screen on Windows is to use the DXGI Desktop Duplication API (available since Windows 8). Microsoft provides a neat sample which streams the duplicated frames to the screen. Now, I've been wondering what the easiest, but still relatively fast, way is to encode those frames and send them over the network.
The frames come from AcquireNextFrame with a surface that contains the desktop bitmap and metadata which contains dirty and move regions that were updated. From here, I have a couple of options:
Extract a bitmap from the DirectX surface and then use an external library like ffmpeg to encode the series of bitmaps to H.264 and send it over RTSP. While straightforward, I fear that this method will be too slow, since it doesn't take advantage of any native Windows facilities. Converting a D3D texture to an ffmpeg-compatible bitmap seems like unnecessary work.
From this answer: convert the D3D texture to an IMFSample and use Media Foundation's SinkWriter to encode the frame. I found this tutorial on video encoding, but I haven't yet found a way to get each encoded frame immediately and send it, instead of dumping them all to a video file.
Since I haven't done anything like this before, I'm asking if I'm moving in the right direction. In the end, I want to have a simple, preferably low latency desktop capture video stream, which I can view from a remote device.
Also, I'm wondering if I can make use of the dirty and move regions provided by Desktop Duplication. Instead of encoding the full frame, I could send them over the network and do the processing on the client side, but this means that my client has to have DirectX 11.1 or higher available, which is impossible if I want to stream to a mobile platform.
You can use the IMFTransform interface for H.264 encoding. Once you have an IMFSample for your ID3D11Texture2D, just pass it to IMFTransform::ProcessInput and get the encoded IMFSample back from IMFTransform::ProcessOutput.
Refer to this example for encoding details.
Once you have the encoded IMFSamples, you can send them one by one over the network.
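As a rough sketch of that per-frame loop (assumptions: the H.264 encoder MFT has already been created and its input/output media types negotiated, error handling is omitted, and a hardware MFT may additionally need an IMFDXGIDeviceManager and may provide its own output samples):

#include <mfapi.h>
#include <mftransform.h>
#include <mferror.h>
#include <d3d11.h>

// Wrap the captured desktop texture in an IMFSample and push it through the
// already-configured H.264 encoder MFT; the encoded sample (if any) is returned.
HRESULT EncodeDesktopFrame(IMFTransform* encoder, ID3D11Texture2D* frame,
                           LONGLONG time100ns, LONGLONG duration100ns,
                           IMFSample** encodedOut)
{
    // 1. Wrap the GPU texture in a media buffer / sample.
    IMFMediaBuffer* inBuffer = nullptr;
    MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), frame, 0, FALSE, &inBuffer);
    IMFSample* inSample = nullptr;
    MFCreateSample(&inSample);
    inSample->AddBuffer(inBuffer);
    inSample->SetSampleTime(time100ns);
    inSample->SetSampleDuration(duration100ns);

    // 2. Feed it to the encoder.
    encoder->ProcessInput(0, inSample, 0);

    // 3. Drain the encoded output. Here we allocate our own output sample; if the
    //    MFT reports MFT_OUTPUT_STREAM_PROVIDES_SAMPLES, leave pSample null instead.
    MFT_OUTPUT_STREAM_INFO info = {};
    encoder->GetOutputStreamInfo(0, &info);
    IMFSample* outSample = nullptr;
    MFCreateSample(&outSample);
    IMFMediaBuffer* outBuffer = nullptr;
    MFCreateMemoryBuffer(info.cbSize, &outBuffer);
    outSample->AddBuffer(outBuffer);

    MFT_OUTPUT_DATA_BUFFER output = {};
    output.pSample = outSample;
    DWORD status = 0;
    HRESULT hr = encoder->ProcessOutput(0, 1, &output, &status);
    if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT) {
        outSample->Release();            // encoder is still buffering this frame
        outSample = nullptr;
    }
    *encodedOut = outSample;             // caller extracts the bytes and sends them

    inBuffer->Release();
    inSample->Release();
    outBuffer->Release();
    return hr;
}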

Getting a snapshot from an rtsp video stream from an IP camera

Normally, I can get a still snapshot from an IP camera with a vendor-provided URL. However, the JPEGs served this way are not of good enough quality, and the vendor says there is no facility for serving snapshots in other image formats or with smaller/lossless compression.
I noticed that when I open an RTSP H.264 stream from the camera with VLC and then manually take a screenshot, the resulting image has none of the JPEG artifacts observed previously.
The question is: how would I obtain these superior snapshots from an H.264 stream in a C++ program? I need to perform multiple operations on the image (annotations, cropping, face recognition), but those have to come after getting the highest-quality initial image possible.
(Note that this is related to my previous question. I obtained JPEG images with cURL but would now like to replace the snapshot getter with this new approach if possible. I am again running on Linux, Fedora 11.)
You need an RTSP client implementation to connect to the camera, start receiving the video feed, and defragment/depacketize the video frames; then you have the frames and can save/process/present them as needed.
You might want to look at the live555 library as a well-known RTSP library/implementation.
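The live555 route would follow the structure of live555's testRTSPClient example. As a more compact illustration of the same idea (connect to the rtsp:// URL, receive and depacketize the stream, decode until the first full picture), here is a hedged sketch using FFmpeg's libavformat/libavcodec instead (modern FFmpeg 4.x+ API; the URL is a placeholder and most error/cleanup handling is omitted):

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

// Open the camera's RTSP stream and return the first decoded picture,
// which is free of the JPEG artifacts of the vendor's snapshot URL.
AVFrame* grab_snapshot(const char* url /* e.g. "rtsp://camera/stream" */)
{
    avformat_network_init();

    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, url, nullptr, nullptr) < 0) return nullptr;
    avformat_find_stream_info(fmt, nullptr);

    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    const AVCodec* codec =
        avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
    AVCodecContext* dec = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(dec, fmt->streams[vstream]->codecpar);
    avcodec_open2(dec, codec, nullptr);

    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream) {
            avcodec_send_packet(dec, pkt);
            if (avcodec_receive_frame(dec, frame) == 0) {
                // Got the first full picture: hand it back for annotation,
                // cropping, face recognition, or saving (e.g. as PNG via swscale).
                av_packet_unref(pkt);
                avformat_close_input(&fmt);
                return frame;
            }
        }
        av_packet_unref(pkt);
    }
    return nullptr;
}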