Transport stream processing in OpenMAX AL

I am trying to understand the flow of audio/video in OpenMAX AL. I have the following questions.
According to my understanding, the audio/video flow goes like this:
From the Android application, the MPEG-2 TS stream goes to OpenMAX AL, which passes it to the SoC, where the stream is given to the demuxer. The video stream then goes to decoding, then preprocessing, and is finally rendered to a Surface/SurfaceTexture for display. The audio stream, after demuxing and decoding, goes to the audio output device.
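For context on the demux step described above: an MPEG-2 transport stream is a sequence of fixed 188-byte packets, each starting with the sync byte 0x47 and carrying a 13-bit PID that the demuxer uses to route audio and video payloads. This is a minimal, self-contained illustration of that routing logic, not OpenMAX AL API code (which hides all of this inside its media player object):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

constexpr size_t kTsPacketSize = 188;
constexpr uint8_t kTsSyncByte = 0x47;

// Extract the 13-bit PID from a 188-byte TS packet.
// The PID lives in the low 5 bits of byte 1 and all of byte 2.
// Returns -1 if the packet does not start with the sync byte.
int ts_packet_pid(const uint8_t* pkt) {
    if (pkt[0] != kTsSyncByte) return -1;
    return ((pkt[1] & 0x1F) << 8) | pkt[2];
}
```

A demuxer loops over the stream in 188-byte steps, reads each packet's PID, and hands the payload to the audio or video decoder associated with that PID.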
What kind of effects/processing does OpenMAX AL apply to the MPEG-2 transport stream?
How is OpenSL ES different from the audio part of OpenMAX AL?
Does OpenMAX AL pass the stream to the demuxer through the AudioSink/VideoSink (I am a little confused here)? If it gives the stream directly to the AudioSink/VideoSink, where is it demuxed and decoded?
What is the input module in OpenMAX AL, i.e. where does it receive the MPEG-2 TS and hand it off for processing?
Regards
Mayank

Related

GStreamer send 16 raw video over rtp

I have a 16-bit greyscale video stream from an LWIR (thermal) camera and I want to forward the stream over RTP without any compression.
The GStreamer caps are: video/x-raw,format=GRAY16_LE,width=640,height=520,framerate=9/1
But I can't find any plugin to transmit the data over RTP.
https://gstreamer.freedesktop.org/documentation/rtp/index.html?gi-language=c
Do you have an idea?
Thanks, Martin
Check the specification for uncompressed video data over RTP:
https://www.rfc-editor.org/rfc/rfc4175
As you will notice, your specific format is not covered by the specification.
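If both ends of the link run GStreamer, one possible workaround is the GStreamer-specific rtpgstpay/rtpgstdepay pair, which can wrap arbitrary caps (including GRAY16_LE) in RTP, at the cost of interoperability with non-GStreamer receivers. This is a hedged sketch, not a tested pipeline; the device path and host are placeholders, and the exact caps on the receiver may need adjusting for your GStreamer version:

```shell
# Sender: wrap the raw GRAY16_LE frames in GStreamer-specific RTP packets.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,format=GRAY16_LE,width=640,height=520,framerate=9/1 ! \
  rtpgstpay ! udpsink host=192.168.0.10 port=5000

# Receiver: unwrap and display. rtpgstpay transmits the stream caps
# in-band, so rtpgstdepay can recover the original format.
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=(string)application,clock-rate=(int)90000,encoding-name=(string)X-GST" ! \
  rtpgstdepay ! videoconvert ! autovideosink
```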

Why my App cannot decode the RTSP stream?

I use live555 to receive RTP video frames (encoded in H264). I open my local .sdp file with live555 to receive the frame data. I can see DummySink::afterGettingFrame being called continuously. If fReceiveBuffer in DummySink is correct, why can't FFmpeg decode the frame? Is my code wrong?
Here is my Code Snippet:
http://paste.ubuntu.com/12529740/
The function avcodec_decode_video2 always fails; its return value is less than zero.
Does fReceiveBuffer hold exactly one video frame?
Here is my FFmpeg init code that opens the video decoder:
http://paste.ubuntu.com/12529760/
I read the H264 documentation again and found that an I-frame (IDR) needs the SPS/PPS, each preceded by the 0x00000001 start code, inserted before it so that the decoder can decode the frame correctly. Here are related solutions:
FFmpeg can't decode H264 stream/frame data
Decoding h264 frames from RTP stream
Now my app works fine; it can decode the frames and convert them to an OSD image for display on screen.
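The fix described above (each of SPS, PPS and the IDR slice prefixed with the 0x00000001 start code before being fed to the decoder) can be sketched as a small helper. The names here are illustrative, not from the original code:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Concatenate SPS, PPS and an IDR slice into one Annex-B buffer,
// prefixing each NAL unit with the 4-byte start code 0x00000001.
std::vector<uint8_t> build_annexb_idr(const std::vector<uint8_t>& sps,
                                      const std::vector<uint8_t>& pps,
                                      const std::vector<uint8_t>& idr) {
    static const uint8_t start_code[4] = {0x00, 0x00, 0x00, 0x01};
    std::vector<uint8_t> out;
    for (const auto* nal : {&sps, &pps, &idr}) {
        out.insert(out.end(), start_code, start_code + 4);
        out.insert(out.end(), nal->begin(), nal->end());
    }
    return out;
}
```

Feeding a buffer built this way to avcodec_decode_video2 gives the decoder everything it needs to initialise from the first IDR.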

Capturing H264 with logitech C920 to OpenCV

I've been trying to capture an H264 stream from my two Logitech C920 cameras with OpenCV (on a Raspberry Pi 2). I have come to the conclusion that this is not possible because it is not yet implemented. I've looked a little at OpenCV/modules/highgui/cap_libv4l.cpp and found that the VideoCapture function always converts the pixel format to BGR24. I tried to change this to H264, but only got a black screen. I guess this is because it is not being decoded the right way.
So I made a workaround using:
V4l2loopback
h264_v4l2_rtspserver
Gstreamer-0.10
(You can find the loopback and rtspserver on github)
First I setup a virtual device using v4l2loopback. Then the rtspserver captures in h264 then streams rtsp to my localhost(127.0.0.1). Then I catch it again with gstreamer and pipe it to my virtual v4l2 video device made by loopback using the “v4l2sink” option in gst-launch-0.10.
This solution works and I can actually connect to the virtual device with the OpenCV VideoCapture and get a full HD picture without overloading the CPU, but this is nowhere near a good enough solution. I get roughly a 3-second delay, which is too high for my stereo vision application, and it uses a ton of bandwidth.
So I was wondering if anybody knew a way that I could use the v4l2 capture program from Derek Molloy's boneCV/capture program (which I know works) to capture in H264, then maybe pipe it to gst-launch-0.10 and then again pipe it to the v4l2sink for my virtual device?
(You can find the capture program here: https://github.com/derekmolloy/boneCV)
The gstreamer command I use is:
"gst-launch-0.10 rtspsrc location=rtsp://admin:pi#127.0.0.1:8554/unicast ! decodebin ! v4l2sink device=/dev/video4"
OR maybe in fact you know what I would change in the opencv highgui code to be able to capture h264 directly from my device without having to use the virtual device? That would be amazingly awesome!
Here is the links to loopback and the rtspserver that I use:
github.com/mpromonet/h264_v4l2_rtspserver
github.com/umlaeute/v4l2loopback
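The three-step chain described above (loopback device, RTSP server, GStreamer bridge) can be sketched roughly as follows. The device numbers are placeholders and the rtspserver invocation is an assumption based on the repo name; check its README for the exact options:

```shell
# 1. Create the virtual video device (here /dev/video4).
sudo modprobe v4l2loopback video_nr=4

# 2. Serve the camera's H264 stream over RTSP on localhost.
./h264_v4l2_rtspserver /dev/video0

# 3. Pull the RTSP stream, decode it, and feed the virtual device,
#    as in the question's own gst-launch command.
gst-launch-0.10 rtspsrc location=rtsp://127.0.0.1:8554/unicast ! \
  decodebin ! v4l2sink device=/dev/video4
```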
Sorry about the weird links; I don't have enough reputation yet to post more links.
I don't know exactly what you would need to change in OpenCV, but I recently started coding video on the Raspberry Pi.
I'll share my findings with you.
I got this so far:
I can read the C920's H264 stream directly from the camera using the V4L2 API at 30 FPS (if you try to read YUYV buffers, the driver limits you to 10, 5 or 2 FPS over USB).
I can decode the stream to YUV 4:2:0 buffers using the Broadcom chip on the Raspberry Pi via the OpenMAX IL API.
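The first point, reading H264 straight from the camera, comes down to requesting the compressed pixel format with VIDIOC_S_FMT. A minimal sketch, with an illustrative helper name of my own (the real call would then pass the struct to ioctl on the open device):

```cpp
#include <cassert>
#include <cstring>
#include <linux/videodev2.h>

// Fill a v4l2_format request asking the driver for the camera's
// compressed H264 stream instead of raw YUYV frames; on the C920 this
// is what allows 30 FPS over USB.
v4l2_format make_h264_format(unsigned width, unsigned height) {
    v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_H264;
    fmt.fmt.pix.field = V4L2_FIELD_ANY;
    return fmt;
    // Caller would then do: ioctl(fd, VIDIOC_S_FMT, &fmt);
}
```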
My Work In Progress code is at: GitHub.
Sorry about the code organization. But I think the abstraction I made is more readable than the plain V4L2 or OpenMAX code.
Some code examples:
Reading camera h264 using V4L2 Wrapper:
device.streamON();
v4l2_buffer bufferQueue;
while (!exit_requested) {
    // capture code
    device.dequeueBuffer(&bufferQueue);
    // use the h264 buffer inside bufferPtr[bufferQueue.index]
    ...
    device.queueBuffer(bufferQueue.index, &bufferQueue);
}
device.streamOFF();
Decoding h264 using OpenMax IL:
BroadcomVideoDecode decoder;
while (!exit_requested) {
    // capture code start
    ...
    // decoding code
    decoder.writeH264Buffer(bufferPtr[bufferQueue.index], bufferQueue.bytesused);
    // capture code end
    ...
}
Check out Derek Molloy on YouTube. He's using a BeagleBone, but it presumably ticks this box:
https://www.youtube.com/watch?v=8QouvYMfmQo

mp4 video created using direct show filter is not playing

Using DirectShow filters, I have created an MP4 file writer with two input pins (one for audio and one for video). I write audio samples received through one pin into one track and video samples received from the other pin into another track. But my video is not playing. If I connect only one pin, either audio or video, I can play the output file; that is, if there is only one track.
I am using h264 encoder for video and mpeg4 encoder for audio. The encoders are working fine as I am able to play the audio and video separately.
I am setting the track count as 2. Is there any information to be provided in the moov box to make the video play? Or should we tell the decoder which track is audio and which track is video? Since we are setting those fields in the track information, I don't think that is the problem, but why is my video not playing?
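On the "telling the player which track is which" point: in an MP4 file, each trak carries an hdlr box inside its mdia box, and the handler_type there ('vide' or 'soun') is what identifies the track type; a missing or duplicated handler_type is a common reason one track plays and the other doesn't. A minimal sketch of serialising an hdlr box following the ISO/IEC 14496-12 layout (this is not the asker's filter code):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Serialise an ISO BMFF 'hdlr' box. handler_type is "vide" for the
// video track and "soun" for the audio track.
std::vector<uint8_t> make_hdlr_box(const std::string& handler_type,
                                   const std::string& name) {
    std::vector<uint8_t> box;
    const uint32_t size =
        4 + 4 + 4 + 4 + 4 + 12 + static_cast<uint32_t>(name.size()) + 1;
    auto put_u32 = [&box](uint32_t v) {
        box.push_back(v >> 24); box.push_back(v >> 16);
        box.push_back(v >> 8);  box.push_back(v);
    };
    put_u32(size);                                   // box size
    box.insert(box.end(), {'h', 'd', 'l', 'r'});     // box type
    put_u32(0);                                      // version + flags
    put_u32(0);                                      // pre_defined
    box.insert(box.end(), handler_type.begin(), handler_type.end());
    for (int i = 0; i < 12; ++i) box.push_back(0);   // reserved[3]
    box.insert(box.end(), name.begin(), name.end()); // name
    box.push_back(0);                                // NUL terminator
    return box;
}
```

It is worth opening the output file in a box inspector and checking that the video trak's hdlr says 'vide' and the audio trak's says 'soun'.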

How to use live555 streaming media forwarding

I use a live555 H.264 stream client to pull frame packets from an IP camera, FFmpeg to decode the buffers, and OpenCV to analyse the frames. (The pipeline is based on the testRTSPClient sample; I decode the H.264 frame buffer in DummySink::afterGettingFrame() with FFmpeg.)
Now I want to stream the frames to another (remote) client in on-demand mode in real time. The frames may have analysis results added (bounding boxes, text, etc.). How can I use live555 to achieve this?
Well, your best bet is to re-encode the resulting frames (with bounding boxes etc.) and pass them to an RTSPServer process, which will allow you to connect to it using an rtsp URL and stream the encoded data to any compatible RTSP client. There is a good reference in the FAQ on how to do this, http://www.live555.com/liveMedia/faq.html#liveInput, which walks you through the steps and provides example source code that you can modify for your needs.