In the application I'm working on, which uses GStreamer 0.10, we receive streaming audio data over a TCP socket (from another process running locally).
We can issue a "seek" command to that process, and it works: we start receiving data corresponding to the new position we specify.
So far so good.
However, there is a delay between the time we issue the seek and the time we start playing the data at the correct position.
I'm pretty sure this is because we buffer data.
So I would like to flush the data buffered in our pipeline when we issue the seek command.
However, I haven't managed to do this: I pushed a gst_event_new_flush_start() event on the pad with gst_pad_push_event(), then a gst_event_new_flush_stop() shortly after; both calls return TRUE.
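Roughly what that looks like in code (GStreamer 0.10; "pad" here is just a placeholder for the pad we push the events on):

    /* Sketch of the flush attempt described above (GStreamer 0.10).
     * "pad" stands for whichever pad we push the events on. */
    gst_pad_push_event (pad, gst_event_new_flush_start ());
    /* ... the seek command is sent to the other process here ... */
    gst_pad_push_event (pad, gst_event_new_flush_stop ());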
However, the music stops and never starts again.
Using export GST_DEBUG=2 I can see the following warning:
gdpdepay gstgdpdepay.c:429:gst_gdp_depay_chain:<gdpdepay-1> pushing depayloaded buffer returned -2
Since the other process continues to push data while the flush may be "on" for a short amount of time, that might explain this warning. But I would expect the other process to be able to keep pushing data, and our pipeline to be able to keep reading from this socket and processing the data, once the flush_stop event has been sent.
Googling this issue, I found some suggestions like changing the pipeline state, but that didn't help either.
Any help very welcome!
Related
My pipeline splits in the middle so that the data can be sent over an unreliable connection. This results in some buffers having bit errors that break the pipeline if I do not account for them. To solve this, I have an appsink that parses buffers for their critical information (timestamps, duration, data, and data size), serializes them, and then sends that over the unreliable channel with a CRC. If the receiving pipeline reads a buffer from the unreliable channel and detects a bit error with the CRC, the buffer is dropped. Most decoders are able to recover fine from a dropped buffer, aside from some temporary visual artifacts.
Is there a GStreamer plugin that does this automatically? I looked into the GDPPay and GDPDepay plugins, which appeared to meet my needs due to their serialization of buffers and inclusion of CRCs for the header and payload; however, the plugin assumes that the data is being sent over a reliable channel (why this assumption alongside the inclusion of CRCs, I do not know).
I am tempted to take the time to write a plugin, or make a pull request against the GDP plugins, that simply drops bad buffers instead of halting the pipeline with a GST_FLOW_ERROR.
Any suggestions would be greatly appreciated. Ideally it would also be tolerant of either pipeline crashing or restarting. (The plugin also expects the caps information to be the first buffer sent, which in my case I do not need to send, as I have a fixed purpose and can hard-code both ends to know what to expect. This is only a problem if the receiver restarts while the sender is already sending data: the receiver will not get the data because it is waiting for the caps data that the sender already sent.)
When faced with a similar issue (but for GstEvents), I used a GstPad probe. You'll probably need to install it with GST_PAD_PROBE_TYPE_BUFFER and return GST_PAD_PROBE_DROP for the buffers that don't satisfy your conditions. It is easier than writing a plugin, and it is definitely easier to modify (the probe is created and handled from your own code, so changing the dropping logic is simple). Caveat: I haven't done it for buffers, but it should be doable.
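A rough sketch of what such a probe could look like (GStreamer 1.x; buffer_passes_crc() is a placeholder for whatever validity check you implement, not an existing function):

    static GstPadProbeReturn
    drop_bad_buffers_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

      /* buffer_passes_crc() stands in for your own CRC/validity check. */
      if (!buffer_passes_crc (buf))
        return GST_PAD_PROBE_DROP;   /* silently drop the corrupted buffer */

      return GST_PAD_PROBE_OK;       /* let good buffers through */
    }

    /* During pipeline setup, on the pad you want to filter: */
    gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
                       drop_bad_buffers_cb, NULL, NULL);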
Let me know if it worked!
I have my own MediaSink in Windows Media Foundation with one stream. In the OnClockStart method, I instruct the stream to queue (i) MEStreamStarted and (ii) MEStreamSinkRequestSample on itself. For implementing the queue, I use the IMFMediaEventQueue, and using the mtrace tool, I can also see that someone dequeues the event.
The problem is that ProcessSample of my stream is actually never called. This also has the effect that no further samples are requested, because this is done after processing a sample like in https://github.com/Microsoft/Windows-classic-samples/tree/master/Samples/DX11VideoRenderer.
Is the described approach the right way to implement the sink? If not, what would be the right way? If so, where could I search for the problem?
Some background info: The sink is an RTSP sink based on live555. Since the latter is also sink-driven, I thought it would be a good idea queuing a MEStreamSinkRequestSample whenever live555 requests more data from me. This is working as intended.
However, this solution has the problem that new samples are only requested as long as a client is connected to live555. If I now add a tee before the sink, e.g. to show a local preview, the system gets out of control, because the tee accumulates samples on the output connected to my sink which are never fetched. I then started playing around with discardable samples (cf. https://social.msdn.microsoft.com/Forums/sharepoint/en-US/5065a7cd-3c63-43e8-8f70-be777c89b38e/mixing-rate-sink-and-rateless-sink-on-a-tee-node?forum=mediafoundationdevelopment), but the problem is either that the stream does not start, queues keep growing, or the frame rate of the faster sink is artificially limited, depending on which side is discardable.
Therefore, the next idea was rewriting my sink such that it always requests a new sample when it has processed the current one and puts all samples in a ring buffer for live555 such that whenever clients are connected, they can retrieve their data from there, and otherwise, the samples are just discarded. This does not work at all. Now, my sink does not get anything even without the tee.
The observation is: if I just request a lot of samples (as in the original approach), at some point I get data. However, if I request only one (I also tried moderately larger numbers, up to 5), ProcessSample is just not called, so no subsequent requests can be generated. I send MEStreamStarted once the clock is started or restarted, exactly as described at https://msdn.microsoft.com/en-us/library/windows/desktop/ms701626, and after that I request the first sample. In my understanding, MEStreamSinkRequestSample should not get lost, so I should get something even on a single request. Is that a misunderstanding? Should I keep requesting until I get something?
I've been working for a few days now on a pipeline with the following configuration:
- 2 live input streams (RTMP)
- going into one compositor
- outputting to another RTMP stream
With some converter, queue, etc. in between, it works pretty well.
But my problem is that one of the RTMP inputs may not be available at start time, so the pipeline can't start and crashes with the following errors:
- error: Failed to read any data from stream
- error: Internal data flow error
What would be the proper way to make this work, that is, to start the stream with the first input, even if the second one is not ready yet?
I tried several approaches: dynamically changing the pipeline, playing with pad probes, listening to error messages... but so far I can't make it work.
Thanks,
PL
As you didn't post any code, I guess you are OK with a conceptual answer.
There are a few options on rtspsrc with which you can control when it will fail, regarding a timeout being exceeded or the maximum number of retries being reached. Those are (not sure if this is all of them; a minimal example of setting them follows the list):
retry - this may not be very useful, as it only deals with port allocation
timeout - if you want to keep trying over UDP for a longer time, you can enlarge this one
tcp-timeout - this is important, try to play with it - make it much larger
connection-speed - maybe it will help to make this one smaller
protocols - in my experience, TCP was much better for bad streams
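A minimal sketch of setting some of these (GStreamer 1.x property names and types; "src" is assumed to be an rtspsrc element created elsewhere, and the values are only illustrative):

    /* Illustrative values only; tune them for your streams. */
    g_object_set (src,
        "tcp-timeout", (guint64) (30 * G_USEC_PER_SEC),  /* microseconds */
        "timeout",     (guint64) (10 * G_USEC_PER_SEC),  /* UDP timeout, microseconds */
        "protocols",   4,   /* GST_RTSP_LOWER_TRANS_TCP: force TCP */
        NULL);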
The actual concept (I am not an expert, take it as another view of the problem):
You can create two bins, one for each stream. I would use rtspsrc and decodebin, block the output pads of decodebin until I have all the pads, and then connect them to the compositor.
When you receive any error (it should happen during the phase of waiting for all the pads), you would put the bin to the NULL state (I mean the GStreamer state called NULL) and then back to PLAYING/PAUSED again..
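A hypothetical sketch of the pad-blocking part (GStreamer 1.x; the callback names and the bookkeeping are assumptions, not from the original post):

    /* Keep decodebin's pads blocked until all expected pads have appeared. */
    static GstPadProbeReturn
    block_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      /* Stay blocked; the probe is removed once everything is linked. */
      return GST_PAD_PROBE_OK;
    }

    static void
    pad_added_cb (GstElement *decodebin, GstPad *pad, gpointer user_data)
    {
      gulong id = gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
                                     block_cb, NULL, NULL);
      /* Remember 'pad' and 'id'. Once all expected pads are present, link each
       * pad to a compositor sink pad and call gst_pad_remove_probe (pad, id). */
    }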
Well, you have to use the pad probes properly (no idea what that is :D).. can you post your code regarding this?
Maybe try to discard the error message so that it does not tear down the pipeline..
Also, do you have only video inputs?
I guess not; you can use audiomixer for audio. Also, the compositor has a nice OpenGL version which is much faster, called glvideomixer, but it may introduce other OpenGL-related problems.. if you have Intel GPUs, then you are probably safe.
I am trying to rewind a video file with a rate parameter of "-1".
It rewinds for a short duration and then the playback stops. Finally, the player gets killed.
However, fast forward on the same video file works fine; I tested it at "2x" and "4x" speed. If I just seek backwards by a certain duration (with rate "1.0"), it goes to that timestamp and starts playback as expected.
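For reference, this is roughly how such a reverse seek is usually issued (a sketch; "pipeline" and "current_pos" are placeholders, not taken from the original post):

    /* Reverse playback: negative rate, playing from the current position
     * back towards the start of the file. */
    gst_element_seek (pipeline,
        -1.0,                           /* negative rate => reverse playback */
        GST_FORMAT_TIME,
        GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE,
        GST_SEEK_TYPE_SET, 0,           /* segment start */
        GST_SEEK_TYPE_SET, current_pos  /* segment stop: where we are now */);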
From what I understand, the seek event is handled in the demuxer element of the pipeline, wherein:
It flushes the currently queued stream data
Creates a new-segment with updated values from the seek event.
Once the new segment is ready with the new stream data, playback starts.
From here on, playback proceeds based on the new parameters set in the new segment.
For the reverse playback, I'm not able to figure out where the pipeline is actually getting blocked.
I can see that the demuxer element is fetching the data and pushing it into the new segment.
Can anyone suggest or point where the issue could be?
Reverse playback might not be properly implemented here. Please file a bug, give as much details about the format (e.g. using gst-discoverer) and if possible link to the file.
This issue is about a MIDI application that receives a sudden overflow of the MIDI buffer at application startup.
Does anyone have an idea how to clear any MIDI data queued from MIDI Yoke or LoopBe before the program accepts incoming data?
I'm having a hard time understanding exactly what you are asking, but it sounds like you are wanting to flush an input stream before you start using it. If that is the case, then you can use a simple loop like this early in your program's start-up code (pseudo-code):
while input queue is not empty:
    buffer = read_from_queue()
    // Don't do anything with 'buffer'
loop
Essentially, read a little bit from the input queue and throw it away, then repeat until the queue is empty. I can't give a more detailed description than that without knowing more about your program.
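If the input happens to be read through PortMidi (an assumption; the question does not say which MIDI API is used), the drain loop could look roughly like this:

    #include <portmidi.h>

    /* Hypothetical drain loop; "stream" is an input stream already opened
     * with Pm_OpenInput(). Read and discard whatever is currently queued. */
    static void
    drain_midi_input (PortMidiStream *stream)
    {
      PmEvent events[64];

      while (Pm_Poll (stream) > 0)
        Pm_Read (stream, events, 64);
    }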