I am trying to rewind a video file with a rate parameter of "-1".
It rewinds for a short duration, then playback stops, and finally the player gets killed.
However, fast forward works fine for the same video file; I tested it at "2x" and "4x" speed. And if I just seek backwards by a certain duration (with rate "1.0"), it jumps to that timestamp and starts playback as expected.
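For reference, a reverse seek like this is issued roughly as follows (a minimal sketch; pipeline and cur_pos stand in for my actual pipeline element and the current playback position):

gboolean ok = gst_element_seek (pipeline,
    -1.0,                        /* negative rate = reverse playback */
    GST_FORMAT_TIME,
    GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE,
    GST_SEEK_TYPE_SET, 0,        /* segment start: beginning of stream */
    GST_SEEK_TYPE_SET, cur_pos); /* segment stop: where we rewind from */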
From what I understand, the seek event is handled in the demuxer element of the pipeline, which:
1) flushes the currently queued stream data,
2) creates a new segment with the updated values from the seek event,
3) starts playback once the new segment is ready with the new stream data.
From here on, playback proceeds based on the parameters set in the new segment.
For reverse playback, I'm not able to figure out where the pipeline is actually getting blocked.
I can see that the demuxer element is fetching the data and pushing it on the new segment.
Can anyone suggest or point out where the issue could be?
Reverse playback might not be properly implemented here. Please file a bug, give as many details about the format as possible (e.g. using gst-discoverer), and if possible link to the file.
Related
After pausing an Icecast radio player app built with the just_audio and audio_service packages from Ryan Heise, I wait a little and tap play again. The stream continues to play from the saved position using buffered data, then suddenly (I think when the buffers are empty) jumps to the live position in the stream. Is it possible for me to clear the buffers on pause/stop in my AudioHandler? Or should I use another approach?
I tried player.seek(null), but it needed more work, so I decided not to use it. We haven't encountered this behaviour for a month or so; it works fine now. Thanks!
I write the received packets to binary files. When the recording of the first file is completed, I call flush:
avcodec_send_frame(context, NULL);
This is the signal to end the stream. But when I then send a new frame to the encoder, the function returns AVERROR_EOF (per the docs: the encoder has been flushed, and no new frames can be sent to it). What can I do to make the encoder accept frames after flushing?
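For context, the flush/drain sequence looks roughly like this (a sketch against the FFmpeg 4.x API; the file writing is elided):

avcodec_send_frame(context, NULL);        // enter draining mode
AVPacket *pkt = av_packet_alloc();
while (avcodec_receive_packet(context, pkt) == 0) {
    // ... write the drained packet to the current binary file ...
    av_packet_unref(pkt);
}
av_packet_free(&pkt);                     // encoder now reports AVERROR_EOF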
Example: when decoding, you can call:
avcodec_flush_buffers(context);
This resets the internal state, but it works only for decoding.
Is there an analogous function for encoding?
Ideas:
1) Don't call flush. But the encoder buffers frames internally and only emits some packets after flushing (I'm using H.264 with B-frames), so some packets end up in the next file.
2) Recreate the codec context?
Details: Windows 7, Qt 5.10, FFmpeg 4.0.2.
The correct answer is that you should create a new codec context for each file, or headaches will follow. The small expense of additional headers and keyframes should be negligible unless you are doing something very exotic.
B-frames can refer to both previous and future frames, so how would you even split such a stream across files?
In theory you could force a keyframe and hope for the best, but then there is really no point in not starting a new context, unless the few hundred bytes of H.264 init data are a problem.
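A rough sketch of the per-file switch (ctx and codec are hypothetical names; parameter setup and error handling omitted):

avcodec_send_frame(ctx, NULL);         // flush the old context
// ... drain the remaining packets into the old file ...
avcodec_free_context(&ctx);            // tear down the flushed context

ctx = avcodec_alloc_context3(codec);   // fresh context for the next file
// ... set width/height/time_base/pix_fmt etc. exactly as before ...
avcodec_open2(ctx, codec, NULL);       // ready to accept frames again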
I have my own MediaSink in Windows Media Foundation with one stream. In the OnClockStart method, I instruct the stream to queue (i) MEStreamSinkStarted and (ii) MEStreamSinkRequestSample on itself. I implement the queue with IMFMediaEventQueue, and using the mtrace tool I can see that someone dequeues the events.
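Queueing looks roughly like this (a sketch; m_eventQueue is a placeholder for the stream's IMFMediaEventQueue member, error handling omitted):

// Signal that the stream has started, then ask for the first sample.
HRESULT hr = m_eventQueue->QueueEventParamVar(
    MEStreamSinkStarted, GUID_NULL, S_OK, NULL);
if (SUCCEEDED(hr))
    hr = m_eventQueue->QueueEventParamVar(
        MEStreamSinkRequestSample, GUID_NULL, S_OK, NULL);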
The problem is that ProcessSample on my stream is never actually called. This also means that no further samples are requested, because that normally happens after processing a sample, as in https://github.com/Microsoft/Windows-classic-samples/tree/master/Samples/DX11VideoRenderer.
Is the described approach the right way to implement the sink? If not, what would be the right way? If so, where should I look for the problem?
Some background info: the sink is an RTSP sink based on live555. Since the latter is also sink-driven, I thought it would be a good idea to queue a MEStreamSinkRequestSample whenever live555 requests more data from me. This works as intended.
However, this solution has the problem that new samples are only requested as long as a client is connected to live555. If I now add a tee before the sink, e.g. to show a local preview, the system gets out of control, because the tee accumulates samples on the output of my sink that are never fetched. I then started playing around with discardable samples (cf. https://social.msdn.microsoft.com/Forums/sharepoint/en-US/5065a7cd-3c63-43e8-8f70-be777c89b38e/mixing-rate-sink-and-rateless-sink-on-a-tee-node?forum=mediafoundationdevelopment), but depending on which side is discardable, either the stream does not start, the queues keep growing, or the frame rate of the faster sink is artificially limited.
Therefore, the next idea was to rewrite my sink so that it always requests a new sample once it has processed the current one, and puts all samples into a ring buffer for live555: whenever clients are connected, they can retrieve their data from there; otherwise, the samples are simply discarded. This does not work at all. Now my sink does not get anything, even without the tee.
The observation is: if I just request a lot of samples (as in the original approach), at some point I get data. However, if I request only one (I also tried moderately larger numbers, up to 5), ProcessSample is simply never called, so no subsequent requests can be generated. I send MEStreamSinkStarted once the clock is started or restarted, exactly as described at https://msdn.microsoft.com/en-us/library/windows/desktop/ms701626, and after that I request the first sample. In my understanding, a MEStreamSinkRequestSample should not get lost, so I should get something even on a single request. Is that a misunderstanding? Should I keep requesting until I get something?
I am currently using the libraries from FFmpeg to stream MPEG-2 TS (H.264-encoded) video. The streaming is done via UDP multicast.
The issue I am having boils down to two things. There is a long initial connection time before the video shows (the stream also contains metadata, and that stream is detected by my media tool immediately).
Once the video gets going things are fine, but it is always delayed by that initial connection time.
I am trying to get as near to live streaming as possible.
Currently I am using av_dict_set(&dict, "tune", "zerolatency", 0) and the "profile" -> "baseline" option.
GOP size = 12;
At first I thought it was an I-frame issue, but the initial delay is there whether the GOP size is 12 or the default 250. Sometimes the video connects quickly, but it is immediately dropped, the delay occurs, and then it starts back up and is good from that point on.
According to the documentation, the zerolatency option should be sending many I-frames to limit initial sync delays.
I am starting to think it's a buffering issue: when I close the application and leave the media player up, it fast-forwards through the delay until it hits, basically, the point where the file stops streaming.
So while I don't completely understand what was wrong, I at least fixed the problem I was having.
The issue came from using av_interleaved_write_frame() instead of the regular av_write_frame() (the latter works for live streaming) when writing out the video frames. I'll have to dig into the differences a bit more to fully understand them, but it's funny how you sometimes figure out the problem you're having on a total whim after bashing your face for a few days.
I can get pretty good near-live video streaming with the tune "zerolatency" option set.
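The change, condensed (fmt_ctx and pkt are placeholders for the output format context and the encoded packet):

// Before: av_interleaved_write_frame() buffers packets internally to
// interleave them by dts across streams, which can hold packets back.
// av_interleaved_write_frame(fmt_ctx, pkt);

// After: av_write_frame() hands the packet straight to the muxer,
// which is what made the live stream start promptly for me.
av_write_frame(fmt_ctx, pkt);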
In the application I'm working on, which uses GStreamer 0.10, we receive streaming audio data from a TCP socket (from another process running locally).
We can issue a "seek" command to the process, and it works: we start receiving data corresponding to the new position we specified.
So far so good.
However, there is a delay between the time we issue the seek and the time we start playing the data at the correct position.
I'm pretty sure this is because we buffer data.
So I would like to flush the data buffered in our pipeline when we issue the seek command.
However, I haven't managed to do this: I used gst_pad_push_event(pad, gst_event_new_flush_start()) on the pad, then gst_event_new_flush_stop() shortly after; both return TRUE.
However, the music stops and never starts again.
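Condensed, the sequence looks like this (srcpad is a placeholder for the pad I push the events on; GStreamer 0.10 API):

/* Flush out the buffered data around the seek. */
gst_pad_push_event (srcpad, gst_event_new_flush_start ());  /* returns TRUE */
/* ... the seek command is sent to the other process here ... */
gst_pad_push_event (srcpad, gst_event_new_flush_stop ());   /* returns TRUE */
/* After this, playback is expected to resume, but it never does. */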
Using export GST_DEBUG=2 I can see the following warning:
gdpdepay gstgdpdepay.c:429:gst_gdp_depay_chain:<gdpdepay-1> pushing depayloaded buffer returned -2
Since the other process continues to push data while the flush might be "on" for a short time, that might explain the warning. But I would expect the other process to be able to keep pushing data, and our pipeline to be able to keep reading from the socket and processing it, once the flush_stop event has been sent.
Googling this issue, I found suggestions such as changing the pipeline state, but that didn't help either.
Any help very welcome!