How to parse an MJPEG HTTP stream in C++?

I need to access and read an HTTP stream which is sending live MJPEG footage from a network camera, in order to do some OpenCV image processing on each frame.
I can access the camera's footage through VLC, or simply by going to the URL in Chrome or Firefox. But how can I programmatically access the HTTP server and separate each frame, when the server is just sending a continuous feed?
The data seems to be simply formatted, alternating between an HTTP header and JPEG data. The only approach I can think of is to send a request to the server, parse the data as it comes in, separate the header from the actual JPEG data, and, in turn, pass that to OpenCV.
However, that sounds awfully convoluted and I'm not quite sure where I'd start. Do you guys know of any libraries out there, or a simpler approach I'm overlooking, that could make all this easier?
Thanks a lot.

For HTTP download, you can use the libcurl library.
AFAIK MJPEG is not a standardized format; the actual byte layout varies by implementation, but it is basically just a concatenation of JPEG files with delimiters. If you look at the bytes with a hex editor you can easily distinguish each JPEG file.
For example, ffmpeg's MJPEG output is structured like this:
0xff 0xd8 // start of jpeg
{ ... } // jpeg body
0xff 0xd9 // end of jpeg
...
0xff 0xd8 // start of jpeg
{ ... } // jpeg body
0xff 0xd9 // end of jpeg
...
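
To make that concrete, here is a rough sketch (the URL is a placeholder, error handling is omitted) that feeds libcurl's write callback into a marker scan and hands each complete JPEG to OpenCV:

#include <curl/curl.h>
#include <opencv2/opencv.hpp>
#include <vector>

static std::vector<unsigned char> g_buf;  // bytes received so far

static size_t onData(char* ptr, size_t size, size_t nmemb, void*)
{
    g_buf.insert(g_buf.end(), ptr, ptr + size * nmemb);

    // Look for one complete JPEG (SOI ... EOI) in the accumulated bytes.
    for (size_t i = 0; i + 1 < g_buf.size(); ++i) {
        if (g_buf[i] != 0xFF || g_buf[i + 1] != 0xD8) continue;  // SOI
        for (size_t j = i + 2; j + 1 < g_buf.size(); ++j) {
            if (g_buf[j] == 0xFF && g_buf[j + 1] == 0xD9) {      // EOI
                std::vector<unsigned char> jpg(g_buf.begin() + i,
                                               g_buf.begin() + j + 2);
                g_buf.erase(g_buf.begin(), g_buf.begin() + j + 2);
                cv::Mat frame = cv::imdecode(jpg, cv::IMREAD_COLOR);
                if (!frame.empty()) {
                    cv::imshow("mjpeg", frame);  // or run your processing here
                    cv::waitKey(1);
                }
                return size * nmemb;
            }
        }
        break;  // SOI seen but EOI not yet received; wait for more data
    }
    return size * nmemb;
}

int main()
{
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://camera.local/video.mjpg");  // placeholder
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, onData);
    curl_easy_perform(curl);  // blocks, invoking onData as bytes arrive
    curl_easy_cleanup(curl);
}

A more robust parser would honor the multipart boundary and Content-Length headers each frame is wrapped in, rather than trusting the marker scan alone.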

I implemented this in Java, with flawless results, following this page:
http://thistleshrub.net/www/index.php?controller=posts&action=show&id=2012-05-13DisplayingStreamedMJPEGinJava.txt
(Parsing an MJPEG Stream with Java).
If you try to do the same in C++ you'll find a few pieces missing: the socket connection and a render canvas. libcurl seems to be a good option for the HTTP request, but that still leaves the canvas; you could use something like GLUT or Qt for that.
I've read in some forums that OpenCV can read an input stream of the MJPEG Streamer type, but it seems you need a recent version of OpenCV (and compiling OpenCV from scratch is hard).
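If your build does support it, reading the stream can be as short as this sketch (the URL is a placeholder for your camera's endpoint):

// Requires an OpenCV build with FFmpeg or GStreamer support.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("http://camera.local/video.mjpg");
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {            // blocks until the next frame arrives
        cv::imshow("stream", frame);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
}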
I hope this helps.

Related

MediaFoundation - How to encode audio to MP3 straight from capture device into another bytestream?

I can record audio using MediaFoundation, but it only gives me a PCM Wave buffer. I want to grab the full buffer, encode it to MP3, and then use the new buffer for networking stuff.
What is the right way to encode the audio after receiving the samples? I have gotten lost reading through MediaSession, MediaSinks, SinkWriter, Transcode API, Transform API, Source Resolver, etc.
I see there is an MP3 encoder object, but I can't find documentation on how to use it. I also found an MP3 media sink, but I'm not sure how the media sink fits in with the SourceReader/SinkWriter schema, or how to create and use the IMFByteStream it requires.
Is Media Foundation the right Windows API for this task?
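For what it's worth, the SinkWriter route can work: it instantiates the MP3 encoder MFT automatically when you give it an MP3 output type and a PCM input type. A hedged sketch (the attribute values are illustrative assumptions, error handling is omitted, and it writes to a file; pass an IMFByteStream instead of a URL if you need the bytes in memory):

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

// Call MFStartup(MF_VERSION) once before using this.
void WritePcmAsMp3(IMFSample* pcmSample)  // pcmSample comes from your capture loop
{
    IMFSinkWriter* writer = nullptr;
    DWORD stream = 0;
    MFCreateSinkWriterFromURL(L"capture.mp3", nullptr, nullptr, &writer);

    // Output type: MP3, 44.1 kHz stereo at 128 kbps (16000 bytes/sec).
    IMFMediaType* outType = nullptr;
    MFCreateMediaType(&outType);
    outType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    outType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_MP3);
    outType->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
    outType->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    outType->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, 16000);
    writer->AddStream(outType, &stream);

    // Input type: the PCM format your capture device delivers.
    IMFMediaType* inType = nullptr;
    MFCreateMediaType(&inType);
    inType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    inType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM);
    inType->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
    inType->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    inType->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16);
    inType->SetUINT32(MF_MT_AUDIO_BLOCK_ALIGNMENT, 4);
    inType->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, 44100 * 4);
    writer->SetInputMediaType(stream, inType, nullptr);

    writer->BeginWriting();
    writer->WriteSample(stream, pcmSample);  // repeat per captured buffer
    writer->Finalize();

    inType->Release(); outType->Release(); writer->Release();
}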

How to create an AVPacket manually with encoded AAC data to send over MPEG-TS?

I'm developing an app which sends an MPEG-TS stream using the FFmpeg API (avio_open, avformat_new_stream, etc.).
The problem is that the app already has AAC-LC audio, so the audio frames do not need to be encoded; my app just passes through the data received from a socket buffer.
To open and send MPEG-TS using FFmpeg, I must have an AVFormatContext, which as far as I know is created by the FFmpeg API for an encoder.
Can I create an AVFormatContext manually with already-encoded AAC-LC data, or should I decode and re-encode the data? The information I have is the sample rate, codec, and bitrate.
Any help will be greatly appreciated. Thanks in advance.
Yes, you can use the encoded data as-is if your container supports it. There are two steps involved here: encoding and muxing. Encoding compresses the data; muxing mixes it into the output file so the packets are properly interleaved. The muxing example in the FFmpeg distribution helped me with this.
You might also take a look at the following class: https://sourceforge.net/p/karlyriceditor/code/HEAD/tree/src/ffmpegvideoencoder.cpp - this file is from one of my projects and contains a video encoder. Starting from line 402 you'll see the setup for non-converted audio; it's kind of a hackish way, but it worked. Unfortunately, I still ended up re-encoding the audio, because for my formats it was not possible to achieve the frame-perfect synchronization I needed.
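A hedged sketch of that mux-without-re-encoding setup (sample rate, channel count, and output name are assumptions; it also assumes the socket frames carry ADTS headers, as MPEG-TS expects, and FFmpeg 5.1+ for the channel-layout API):

extern "C" {
#include <libavformat/avformat.h>
}

static AVFormatContext* fmt = nullptr;
static AVStream* st = nullptr;

void openMuxer()
{
    avformat_alloc_output_context2(&fmt, nullptr, "mpegts", "out.ts");
    st = avformat_new_stream(fmt, nullptr);  // no encoder needed
    st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
    st->codecpar->codec_id    = AV_CODEC_ID_AAC;
    st->codecpar->sample_rate = 48000;                       // from your source
    av_channel_layout_default(&st->codecpar->ch_layout, 2);  // stereo
    avio_open(&fmt->pb, "out.ts", AVIO_FLAG_WRITE);
    avformat_write_header(fmt, nullptr);
}

// Call once per AAC frame pulled off the socket (AAC-LC: 1024 samples/frame).
void writeFrame(uint8_t* data, int size, int64_t frameIndex)
{
    AVPacket* pkt = av_packet_alloc();
    pkt->data = data;                          // already-encoded bytes, untouched
    pkt->size = size;
    pkt->pts = pkt->dts = frameIndex * 1024;   // in 1/sample_rate units
    av_packet_rescale_ts(pkt, AVRational{1, 48000}, st->time_base);
    pkt->stream_index = st->index;
    av_interleaved_write_frame(fmt, pkt);
    av_packet_free(&pkt);
}

void closeMuxer()
{
    av_write_trailer(fmt);
    avio_closep(&fmt->pb);
    avformat_free_context(fmt);
}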

FFmpeg C++, H.264 parser build

I'm currently working on a project that simulates webcam video transmission in C++. At the sender side, I capture raw webcam video with v4l2, encode it with FFmpeg, put the video data into an array, and transmit it. At the decoder side, the video data is received into an array, decoded, and played. The program works fine with codec_id AV_CODEC_ID_MPEG1VIDEO, but when I replace it with AV_CODEC_ID_H264, problems happen in decoding (please refer to FFmpeg c++ H264 decoding error). Some people suggested I use a parser, but I have no idea what a parser in FFmpeg looks like. Is there any simple example of how to build a parser for H.264 in FFmpeg? I cannot find such a tutorial on Google.
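For reference, the parser people usually suggest is av_parser_parse2(): it splits a raw byte buffer into complete H.264 packets the decoder can accept one at a time. A rough sketch (decoder setup and frame handling are assumed to exist elsewhere; in real code the parser context should persist across received buffers):

extern "C" {
#include <libavcodec/avcodec.h>
}

void decodeBuffer(AVCodecContext* dec, const uint8_t* data, int size)
{
    AVCodecParserContext* parser = av_parser_init(AV_CODEC_ID_H264);
    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();

    while (size > 0) {
        // Consumes some input and, once a full packet has been assembled,
        // points pkt->data / pkt->size at it.
        int used = av_parser_parse2(parser, dec,
                                    &pkt->data, &pkt->size,
                                    data, size,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        data += used;
        size -= used;

        if (pkt->size > 0) {
            avcodec_send_packet(dec, pkt);
            while (avcodec_receive_frame(dec, frame) == 0) {
                // process or display the decoded frame here
            }
        }
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
    av_parser_close(parser);
}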

Panasonic Camera Streaming MJPEG Video with G.726 Audio

I'm working with a Panasonic camera. I want to get audio data from it and try to play it. I sent a command to the camera and received stream data like the following:
HTTP/1.0 200 OK
Content-type: multipart/x-mixed-replace;boundary=--myboundary
--myboundary
Content-type: audio/g726
...binary data...
I have searched some open source code on the internet to decode the stream data, but I don't actually know how this camera encodes the data: u-law/A-law/linear, 2/3/4 bits per sample, right- or left-packed. Those parameters are necessary to decode it exactly. Does anyone know? Is there a standard format for G.726?
Assuming I decode it successfully, could you give me a piece of C/C++ code showing how to play the data after decoding? I mean that I want to hear, on my PC, the audio data coming from the camera.
ITU makes source code for the G.726 codec available here, though giving you code to do playback seems a bit out of scope for an answer, and we'd have to at least know your operating system.
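That said, one portable way to hear decoded PCM without committing to an OS audio API is to wrap it in a WAV header and open the file in any media player. A minimal sketch, assuming 16-bit signed mono samples at 8 kHz (typical for decoded G.726) on a little-endian machine:

#include <cstdint>
#include <fstream>
#include <vector>

void writeWav(const char* path, const std::vector<int16_t>& pcm)
{
    const uint32_t sampleRate = 8000;
    const uint16_t channels = 1, bits = 16, pcmTag = 1;  // 1 = uncompressed PCM
    const uint16_t blockAlign = channels * bits / 8;
    const uint32_t byteRate = sampleRate * blockAlign;
    const uint32_t dataSize = (uint32_t)pcm.size() * 2;
    const uint32_t riffSize = 36 + dataSize, fmtSize = 16;

    std::ofstream f(path, std::ios::binary);
    f.write("RIFF", 4); f.write((char*)&riffSize, 4); f.write("WAVE", 4);
    f.write("fmt ", 4); f.write((char*)&fmtSize, 4);
    f.write((char*)&pcmTag, 2);     f.write((char*)&channels, 2);
    f.write((char*)&sampleRate, 4); f.write((char*)&byteRate, 4);
    f.write((char*)&blockAlign, 2); f.write((char*)&bits, 2);
    f.write("data", 4); f.write((char*)&dataSize, 4);
    f.write((const char*)pcm.data(), dataSize);
}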

How to create a video streaming HTTP server?

I'm using C++ and the POCO libraries. I'm trying to implement a video streaming HTTP server.
Initially I used Poco::StreamCopier.
But the client failed to stream; instead, it downloads the video.
How can I make the server send a stream response so that the client can play the video in the browser instead of downloading it?
While not within POCO, you could use ffmpeg. It has streaming servers for a number of video protocols and is written in C (which you could write POCO-like adapters for).
http://ffmpeg.org/ffmpeg.html#rtp
http://ffmpeg.org/ffmpeg.html#toc-Protocols
http://git.videolan.org/?p=ffmpeg.git;a=tree
And it has a pretty liberal license:
http://ffmpeg.org/legal.html
You need to research which video encoding and container are right for streaming; not all video files can be streamed.
Without using something to decode the video at the other end, you can stream over plain HTTP using the MIME type "Content-Type: multipart/x-mixed-replace; boundary=..." and sending a series of JPEG images.
This is actually called M-JPEG over HTTP. See: http://en.wikipedia.org/wiki/Motion_JPEG
The browser will replace each image as it receives it, which makes it look like video. It's probably the easiest way to stream video to a browser, and many IP webcams support this natively.
However, it's not bandwidth-friendly by any means, since it has to send a whole JPEG file for each frame. So if you're going to use this over the internet it will work, but it will use more bandwidth than other methods.
That said, it is natively supported in most browsers now, and it sounds like that's what you're after.
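In POCO terms, that could look roughly like the handler below (a sketch: nextJpegFrame() is a hypothetical stand-in for your capture/encode source, and error handling is omitted):

#include <Poco/Net/HTTPRequestHandler.h>
#include <Poco/Net/HTTPServerRequest.h>
#include <Poco/Net/HTTPServerResponse.h>
#include <vector>

std::vector<char> nextJpegFrame();  // hypothetical: blocks until a JPEG is ready

class MjpegHandler : public Poco::Net::HTTPRequestHandler
{
public:
    void handleRequest(Poco::Net::HTTPServerRequest&,
                       Poco::Net::HTTPServerResponse& resp) override
    {
        resp.setContentType("multipart/x-mixed-replace; boundary=frame");
        std::ostream& out = resp.send();  // headers go out; the stream stays open

        while (out.good()) {              // ends when the client disconnects
            std::vector<char> jpg = nextJpegFrame();
            out << "--frame\r\n"
                << "Content-Type: image/jpeg\r\n"
                << "Content-Length: " << jpg.size() << "\r\n\r\n";
            out.write(jpg.data(), jpg.size());
            out << "\r\n" << std::flush;
        }
    }
};

Registered through the usual HTTPRequestHandlerFactory/HTTPServer plumbing, this implements the multipart/x-mixed-replace scheme described above.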