How to further investigate linking problems in gstreamer?

First of all, you should know this question is titled that way because that's where I ended up stuck after narrowing down my problem for quite a while. Since there are probably better approaches, I'm also explaining the problem below and what I've been doing to try to solve it. Suggestions on other approaches would be very welcome.
The problem
I'm using a gstreamer port to Android to render videos from remote cameras through the RTSP protocol (UDP is the transport method).
Using playbin, things were working quite well until they stopped working for a subset of these cameras.
Unfortunately I don't have access to the cameras themselves since they belong to our company's client, but the first thing that sprang to my mind was that it had to be a problem with the cameras.
However, another Android app that we use as a reference is still able to play video from these cameras normally, so I'm now trying my best to investigate the issue further on my end (our Android app).
The problem has been quite deterministic: some cameras always fail, others always work. When they fail, the reported reason is sometimes not-linked.
I managed to dump the pipeline graph for each of these cameras at the moment the application tries to play video from them. I noticed that for each failing camera the associated pipeline is always missing something: some are missing just the sink element, others are missing both the source and the sink:
Dump of pipeline with source only:
Dump of pipeline without a source or a sink:
Dump of pipeline with both (these are the cases where we can indeed play):
These are dumps of pipelines built by the playbin.
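(For reference, dumps like these are produced roughly as follows: set GST_DEBUG_DUMP_DOT_DIR to a writable directory and call the dot-dump macro at an interesting moment. The bin handle and file name below are just placeholders, not the exact names from my code.)
/* Sketch: write a .dot snapshot of the running pipeline/playbin.
   GST_DEBUG_DUMP_DOT_DIR must point to an existing directory. */
GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline),
                          GST_DEBUG_GRAPH_SHOW_ALL,
                          "camera-pipeline");
/* The resulting file can then be rendered with graphviz, e.g. dot -Tpng */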
Attempted solution
I've been testing what happens if I build the pipeline manually from scratch (the same one playbin builds in the third image above) and force video from all cameras to go through it. Since all cameras used to work, my guess is that negotiation is now failing for some cameras, so playbin is not building the pipeline properly for them; if I assemble it myself, it should all work as expected (I'm assuming that rtspsrc combined with glimagesink was also the pipeline chosen by playbin for playing video from these cameras).
This is how I'm trying to build this pipeline myself:
priv->pipeline = gst_pipeline_new("rtspstreamer");

source = gst_element_factory_make("rtspsrc", NULL);
if (!source) {
    GST_DEBUG("Source could not be created");
}

sink = gst_element_factory_make("glimagesink", NULL);
if (!sink) {
    GST_DEBUG("Sink could not be created");
}

if (!gst_bin_add(GST_BIN(priv->pipeline), source)) {
    GST_DEBUG("Could not add source to pipeline");
}
if (!gst_bin_add(GST_BIN(priv->pipeline), sink)) {
    GST_DEBUG("Could not add sink to pipeline");
}

if (!gst_element_link(source, sink)) {
    GST_DEBUG("Source and sink could not be linked");
}

g_object_set(source, "location", uri, NULL);
So, running the code above, I get the following error:
Source and sink could not be linked
This is where I'm stuck. How can I investigate further why these components fail to link to each other? Maybe there should be some other component between them in the pipeline, but judging from the dump of the successful pipeline (third image) above, that doesn't seem to be the case.
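One thing I'm considering trying next, in case it helps the investigation: since rtspsrc creates its source pads dynamically, I could connect to its "pad-added" signal just to log which pads and caps actually appear before attempting any linking. A rough sketch (the handler name and the decision to only log are just placeholders, not code I've run yet):
/* Rough sketch: log whatever pads rtspsrc actually exposes at runtime. */
static void on_pad_added(GstElement *src, GstPad *new_pad, gpointer user_data)
{
    GstCaps *caps = gst_pad_get_current_caps(new_pad);
    gchar *caps_str = caps ? gst_caps_to_string(caps) : g_strdup("(no caps yet)");

    GST_DEBUG("rtspsrc exposed pad %s with caps %s", GST_PAD_NAME(new_pad), caps_str);

    g_free(caps_str);
    if (caps)
        gst_caps_unref(caps);
    /* Linking to the sink would then happen here, checking the
       GstPadLinkReturn from gst_pad_link() for the exact failure reason. */
}

/* ...and, after creating the source: */
g_signal_connect(source, "pad-added", G_CALLBACK(on_pad_added), NULL);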
Thanks in advance for any help.

Related

Gstreamer webrtc pipeline problem for open source camera

Hello everyone,
I am trying to implement low-latency video streaming using WebRTC. I write my code in C++ (websocket etc.) and only use the WebRTC signalling server, which is written in Python (ref1).
When I use a webcam, I do not have any problem streaming video to the client; however, when I try to use the FLIR camera, I run into a lot of problems during the implementation.
There are a few questions in my mind I'd like to clear up. I hope you can give me some recommendations.
Is there any specific data type that I should pipe into webrtc as a source? I would just like to know what kind of data I should send to webrtc as a source.
When I try to send an image to check whether my WebRTC implementation works properly (instead of the webcam), it gives me the error "Pipeline is empty". What can cause this problem? This is actually the main reason I would like to know the data type etc., so I can understand what exactly I should pipe into webrtc.
ref1: https://github.com/centricular/gstwebrtc-demos/tree/master/signalling
P.S.:
Client and Jetson Nano are on the same network
Server for signals is running on Jetson Nano
By running gst-inspect-1.0 webrtcbin you will find that both the source and sink capabilities of this element are just application/x-rtp.
Therefore, if you want to consume webrtcbin's source pads, you will need to pipe them into an RTP depayloader such as rtph264depay for video or rtpopusdepay for audio.
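For an incoming stream, that wiring is usually done from webrtcbin's "pad-added" signal, since the application/x-rtp pads only appear once the remote media arrives. A rough sketch, assuming an H.264 video stream (the depayloader/decoder/sink choices below are just one plausible combination, not taken from your setup):
static void on_incoming_stream(GstElement *webrtc, GstPad *pad, GstElement *pipeline)
{
    GstElement *depay, *parse, *decode, *sink;
    GstPad *sinkpad;

    if (GST_PAD_DIRECTION(pad) != GST_PAD_SRC)
        return;

    /* application/x-rtp -> H.264 elementary stream -> raw video -> display */
    depay  = gst_element_factory_make("rtph264depay", NULL);
    parse  = gst_element_factory_make("h264parse", NULL);
    decode = gst_element_factory_make("avdec_h264", NULL);
    sink   = gst_element_factory_make("autovideosink", NULL);

    gst_bin_add_many(GST_BIN(pipeline), depay, parse, decode, sink, NULL);
    gst_element_link_many(depay, parse, decode, sink, NULL);
    gst_element_sync_state_with_parent(depay);
    gst_element_sync_state_with_parent(parse);
    gst_element_sync_state_with_parent(decode);
    gst_element_sync_state_with_parent(sink);

    sinkpad = gst_element_get_static_pad(depay, "sink");
    gst_pad_link(pad, sinkpad);
    gst_object_unref(sinkpad);
}

/* g_signal_connect(webrtcbin, "pad-added", G_CALLBACK(on_incoming_stream), pipeline); */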

running gstreamer app without v4l2 driver

I would like to implement a GStreamer pipeline for video streaming without using a v4l2 driver in Linux. The thing is that I already have the video frames in RAM (the VDMA core, which is configured by a different OS on a different core, takes care of that). I also had difficulties debugging some DMA slave errors which always appeared after a DMA completion callback.
Therefore I would be happy if I did not have to use a v4l2 driver in order to have GStreamer on top.
I have found this plugin from Bosch that fits my case:
https://github.com/igel-oss/v4l-gst
My question is whether somebody has experience with this approach and whether it is a feasible one.
Another question is how to configure the source in the GStreamer pipeline, since it is not a /dev/videoxxx device but rather a memory location or even a bmp file.
Thanks, Mihaita
You could use appsrc and repeatedly call gst_app_src_push_buffer(). Your application will have full freedom to read the video data from anywhere it likes - memory, files etc. See also the relevant section of the GStreamer Application Development Manual.
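A minimal sketch of that idea, assuming raw frames already sitting in memory (the function name and the way the data arrives are illustrative, not taken from your setup):
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Push one frame that already lives in RAM into an appsrc element. */
static void push_frame(GstElement *appsrc, const guint8 *data, gsize size)
{
    GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
    gst_buffer_fill(buf, 0, data, size);

    /* gst_app_src_push_buffer() takes ownership of the buffer. */
    GstFlowReturn ret = gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf);
    if (ret != GST_FLOW_OK)
        GST_WARNING("push_buffer failed: %d", ret);
}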
If you want more flexibility, like using the video source in several applications, you should consider implementing your own custom GStreamer element.

DirectShow video stream ends immediately (m_pMediaSample is NULL)

I have a DirectShow video renderer derived from CBaseVideoRenderer. The renderer is used in a graph that receives data from a live source (BDA). It looks like the connections are established properly, but the video rendering ends immediately because there is no sample. However, audio rendering works, i.e. I can hear the sound while DoRenderSample of my renderer is never called.
Stepping through the code in the debugger, I found out that in CBaseRenderer::StartStreaming, the stream ends immediately because the member m_pMediaSample is NULL. If I replace my renderer with the EVR renderer, it shows frames, i.e. the stream does not end before the first frame for the EVR renderer, but only for my renderer.
Why is that and how can I fix it? I implemented (following the sample from http://www.codeproject.com/Articles/152317/DirectShow-Filters-Development-Part-Video-Render) what I understand as the basic interface (CheckMediaType, SetMediaType and DoRenderSample), so I do not see any way to influence what is happening here...
Edit: This is the graph as seen from the ROT:
What I basically try to do is capturing a DVB stream that uses VIDEOINFOHEADER2, which is not supported by the standard sample grabber. Although the channel is a public German TV channel without encryption, could it be that this is a DRM issue?
Edit 2: I have attached my renderer to another source (a Blackmagic Intensity Shuttle). It seems that the source causes the issue, because I get samples in the other graph.
Edit 3: Following Roman's suggestion, I have created a transform filter. The graph looks like
and unfortunately has the same problem, i.e. I do not get any sample (Transform is not called).
You have presumably chosen the wrong path for fetching video frames out of the media pipeline. So you are implementing a "network renderer", something that terminates the pipeline in order to send the data on to the network.
A renderer which accepts the feed sounds appropriate. Implementing a custom renderer, however, is an untypical task, and there is not much information around on it. Additionally, a fully featured renderer typically comes with a sample scheduling part and end-of-stream delivery - things that are relatively easy to break when you customize it by inheriting from the base classes. That is, while the approach sounds good, you might want to compare it to another option you have, which is...
A combination of Sample Grabber + Null Renderer, two standard filters to which you can attach your callback and get frames while having the pipeline properly terminated. The problem here is that the standard Sample Grabber does not support VIDEOINFOHEADER2. With another video decoder you could possibly have the feed decoded into VIDEOINFOHEADER, which is one option. Improving the Sample Grabber itself is another solution: DirectX SDK Extras February 2005 (dxsdk_feb2005_extras.exe) was the SDK which included a filter similar to the standard Sample Grabber, called Grabber (\DirectShow\Samples\C++\DirectShow\Filters\Grabber). It is/was available in source code and came with a good description text file. It is relatively easy to extend to accept VIDEOINFOHEADER2 and make the payload data available to your application this way.
The easiest way to get data out of a DirectShow graph, if you're not going to use MultiMedia Streaming, is probably to write your own TransInPlace filter, a sub-variety of a Transform filter. Then connect this filter to the desired stream of data you wish to monitor, and then run, pause, seek, or otherwise control the graph. The data, as it passes through the transform filter, can be manipulated however you want. We call this kind of filter a "sample grabber". Microsoft released a limited-functionality sample grabber with DX8.0. This filter is limited because it doesn't deal with DV Data or mediatypes with a format of VideoInfo2. It doesn't allow the user to receive prerolled samples. (What's a preroll sample? See the DX8.1 docs.) Its "OneShot" mode also has some problems.
To add to this, the Grabber sample is pretty simple itself - perhaps 1000 lines of code all together, including comments.
Looks like your decoder or splitter isn't demuxing the video frames. Look further up the chain to see what filters are supplying your renderer pin with data; chances are it's only recognising audio.
Try dropping the file into Graphedit (there's a better one on the web BTW) and see what filters it creates.
Then look at the samples in the DirectShow SDK.

Multiple input MFT in Microsoft Media Foundation

I'm struggling with mixing two audio streams into a single output stream. MFNode has an AudioMixerMFT, but TopoEdit crashes when I try to build a topology like this and execute it:
Note: I tried the TopoEdit that comes with Windows SDK 7.1 and also the one with a few fixes by the author of "Developing Microsoft® Media Foundation Applications".
I thought it could be some issue with TopoEdit, so I built the topology in code (by modifying the code from Ch#9 of "Developing Microsoft® Media Foundation Applications"), but it still failed with 'E_UNEXPECTED Catastrophic failure' on mediaEvent->GetStatus(&hrStatus) inside HRESULT CPlayer::ProcessEvent(CComPtr<IMFMediaEvent>& mediaEvent) on the Session Start event.
At this point I thought it could be some issue with AudioMixerMFT, so I wrote a custom MFT with 2 inputs that acts like a simple pass-through (it only forwards the 1st input and ignores the 2nd one). I built a topology in TopoEdit like the following and it worked:
But when I connected 'Audio 2.wav' to the MFT, it crashed. I then tried to use this custom MFT in my own code; it worked again with a single input but failed with 'E_UNEXPECTED Catastrophic failure' when given two inputs.
Not sure what the problem could be, I started to doubt whether multiple-input MFTs are supported at all, and I came across a post http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/21596e11-c4e2-480a-b28f-9e2f5fa8820d/mutlinput-and-multioutput (yes, it is quite old) that says they are not supported.
Is there anyone out there who was able to run the AudioMixerMFT from MFNode successfully? Any alternatives to Microsoft Media Foundation? Any hint would be appreciated. Thanks.
MFNode is my open source project.
If you read MFNode's documentation, you will see that TopoEdit does not handle more than one input stream in an MFT. And yes, TopoEdit crashes. You can fix the bug in the TopoEdit source code; it is just a null pointer that TopoEdit does not check. But unfortunately, that does not solve the problem: TopoEdit is not able to call ProcessInput twice, once on each of the two input streams, before calling ProcessOutput.
You have to provide a custom media session to make it work (implement IMFMediaSession).
In a future update of the MFNode project, I will provide a player that exercises all of the MFNode components, and especially the MFNode Audio Mixer.
EDIT: in tededit.cpp, TopoEdit crashes in CTedEditorVisualObjectEventHandler::NotifyObjectDeleted:
...
CTedTopologyNode* pNode = m_pEditor->FindNode(pConn->GetOutputNodeID());
...
pNode can be a null pointer and TopoEdit does not check for it.
EDIT
I've updated my project. Check MFNodePlayer. I use a custom MediaSession to handle the wave mixer topology.
It works well but it is not perfect, for two reasons. First, if you stop the topology and then replay it, it fails (because I must stop all sources, and perhaps reset the time clock and bytestream). Second, there is a function which handles IMFTransform in a recursive way, and it is hard to debug.
I will fix this later.
PS: Special thanks to the "Developing Microsoft Media Foundation Applications" book. It helped me a lot in creating a custom MediaSession.

streaming video to and from multiple sources

I wanted to get some ideas on how some of you would approach this problem.
I've got a robot that is running Linux and uses a webcam (with a v4l2 driver) as one of its sensors. I've written a control panel with gtkmm. Both the server and the client are written in C++. The server is the robot, the client is the "control panel". The image analysis happens on the robot, and I'd like to stream the video from the camera back to the control panel for two reasons:
A) for fun
B) to overlay image analysis results
So my question is: what are some good ways to stream video from the webcam to the control panel while still giving priority to the robot code that processes it? I'm not interested in writing my own video compression scheme and pushing it through the existing networking port; a new network connection (dedicated to video data) would be best, I think. The second part of the problem is how to display video in gtkmm. The video data arrives asynchronously and I don't have control over main() in gtkmm, so I think that would be tricky.
I'm open to using things like vlc, gstreamer or any other general compression libraries I don't know about.
thanks!
EDIT:
The robot has a 1GHz processor, running a desktop-like version of Linux, but no X11.
GStreamer solves nearly all of this for you, with very little effort, and also integrates nicely with the GLib event system. GStreamer includes V4L source plugins, GTK+ output widgets, various filters to resize/encode/decode the video, and best of all, network sinks and sources to move the data between machines.
For prototyping, you can use the 'gst-launch' tool to assemble video pipelines and test them; then it's fairly simple to create the pipelines programmatically in your code, as sketched below. Search for 'GStreamer network streaming' to see examples of people doing this with webcams and the like.
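To make the second step concrete, here is a minimal sketch of turning such a gst-launch pipeline into code with gst_parse_launch; the element choices (v4l2src, x264enc, udpsink) and the host/port are only placeholders for whatever your setup needs:
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    GError *error = NULL;
    GstElement *pipeline;

    gst_init(&argc, &argv);

    /* Capture from the webcam, encode, and send RTP/H.264 over UDP. */
    pipeline = gst_parse_launch(
        "v4l2src ! videoconvert ! x264enc tune=zerolatency "
        "! rtph264pay ! udpsink host=192.168.0.10 port=5000", &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_clear_error(&error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}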
I'm not sure about the actual technologies used, but this can end up being a huge synchronization ***** if you want to avoid dropped frames. I was streaming a video to a file and the network at the same time. What I eventually ended up doing was using a big circular buffer with three pointers: one write and two read. There were three control threads (and some additional encoding threads): one writing to the buffer, which would pause if it reached a point in the buffer not yet read by both of the others, and two reader threads that would read from the buffer and write to the file/network (and pause if they caught up with the producer). Since everything was written and read as frames, sync overhead could be kept to a minimum.
My producer was a transcoder (from another file source), but in your case, you may want the camera to produce whole frames in whatever format it normally does and only do the transcoding (with something like ffmpeg) for the server, while the robot processes the image.
Your problem is a bit more complex, though, since the robot needs real-time feedback and so can't pause and wait for the streaming server to catch up. So you might want to get frames to the control system as fast as possible and buffer some of them up in a separate circular buffer for streaming to the "control panel". Certain codecs handle dropped frames better than others, so if the network falls behind you can start overwriting frames at the end of the buffer (taking care they're not being read).
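To make the buffer layout described above a little more concrete, here is a rough sketch of the bookkeeping: one producer index and two independent consumer indices over a fixed array of frames. The frame size, the names, and the absence of any locking or condition variables are all simplifications, not part of the original setup.
#define RING_FRAMES 64
#define FRAME_BYTES (640 * 480 * 3)   /* illustrative frame size */

typedef struct {
    unsigned char data[FRAME_BYTES];
} Frame;

typedef struct {
    Frame  frames[RING_FRAMES];
    size_t write_idx;       /* advanced by the producer (camera/transcoder) */
    size_t read_idx_file;   /* advanced by the file-writer thread */
    size_t read_idx_net;    /* advanced by the network-sender thread */
} FrameRing;

/* The producer may only advance while it stays behind BOTH readers;
   otherwise it pauses (or, in the robot case, overwrites old frames). */
static int ring_can_write(const FrameRing *r)
{
    size_t next = (r->write_idx + 1) % RING_FRAMES;
    return next != r->read_idx_file && next != r->read_idx_net;
}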
When you say 'a new video port' and then start talking about vlc/gstreamer, I'm finding it hard to work out what you want. Obviously these software packages will assist in streaming and compressing via a number of protocols, but clearly you'll need a 'network port', not a 'video port', to send the stream.
If what you really mean is sending display output via wireless video/tv feed that's another matter, however you'll need advice from hardware experts rather than software experts on that.
Moving on. I've done plenty of streaming over MMS/UDP protocols and vlc handles it very well (as server and client). However it's designed for desktops and may not be as lightweight as you want. Something like gstreamer, mencoder or ffmpeg, on the other hand, is going to be better I think. What kind of CPU does the robot have? You'll need a bit of grunt if you're planning real-time compression.
On the client side I think you'll find a number of widgets to handle video in GTK. I would look into that before worrying about interface details.