I need to limit the duration of a video file to a given number of seconds with gst-launch-1.0, but I couldn't find any mention of this feature in the GStreamer documentation.
How can I solve this?
So I know that one can adjust the download rate with youtube-dl using the -r / --limit-rate flag; however, as part of simulation testing, I am trying to simulate a user watching a video, so I want to download a video at a rate such that the download takes as long as the video itself would take to watch: a 2-minute video should take 2 minutes to download, and so on and so forth.
I have meticulously reviewed the available options on their GitHub page, but there seems to be no native option to do that. The next best thing I can think of is to get the video duration in seconds (let's call it t) and the video size in bytes (let's call it s) and then use s/t as the value for the --limit-rate flag.
However, the problem now is that there doesn't seem to be any option/flag to get the video file size in bytes!
Is there any way I can accomplish my goal here? I am open to using other tools/programs if this is outside the capabilities of youtube-dl.
To be more specific, I am working in a Linux server environment (no video card, and everything needs to run headlessly), and the videos I'm dealing with are MPEG-DASH videos described by an MPD file, so whatever tool I use needs to be able to parse and work with MPD files.
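For illustration, the throttling step described above would amount to something like this (a rough sketch with made-up numbers; the URL is a placeholder, and obtaining s is exactly the missing piece):
t=120            # example: a 2-minute video, duration in seconds
s=31457280       # example: 30 MiB, size in bytes (the value I don't know how to get)
rate=$((s / t))  # ~262144 bytes per second for this example
youtube-dl --limit-rate "$rate" "https://example.com/manifest.mpd"   # --limit-rate takes bytes per second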
Thank you for your help.
I created an application using libsndfile and WASAPI; it allows playing an audio file slowly and manually, driving the cursor with a QSlider.
I see that libsndfile works in frames. I want to know how to get the duration of the audio file, which I cannot find in the libsndfile documentation, and how to play parts of the audio file using milliseconds.
I managed to make it work with frames, but I want to understand how to do it with milliseconds.
The handle objects libsndfile creates have a frames() method that gives you this information, and samplerate() gives you the sampling rate, so the duration in seconds is:
static_cast<double>(frames())/samplerate()
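Multiply by 1000 for milliseconds, and the same ratio converts milliseconds back into frame positions. A minimal sketch using the C++ SndfileHandle wrapper from sndfile.hh (the file path and millisecond values are placeholders, and the returned buffer would then be handed to your WASAPI render code):
#include <sndfile.hh>
#include <vector>

// Sketch only: read the slice [startMs, startMs + lengthMs) of a file.
std::vector<short> readSliceMs(double startMs, double lengthMs)
{
    SndfileHandle file("example.wav");                      // placeholder path

    double durationMs = 1000.0 * static_cast<double>(file.frames()) / file.samplerate();
    (void)durationMs;                                       // total file length in milliseconds

    // Milliseconds -> frames: frames = ms * samplerate / 1000
    sf_count_t startFrame   = static_cast<sf_count_t>(startMs  * file.samplerate() / 1000.0);
    sf_count_t lengthFrames = static_cast<sf_count_t>(lengthMs * file.samplerate() / 1000.0);

    file.seek(startFrame, SEEK_SET);                        // position the read cursor
    std::vector<short> buffer(lengthFrames * file.channels());
    file.readf(buffer.data(), lengthFrames);                // readf() counts frames, not individual items
    return buffer;                                          // hand this to the WASAPI render path
}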
I'm interested in WebRTC's ability to live-stream MP3 audio peer-to-peer from a user's machine. The only example I found is this: https://webrtc-mp3-stream.herokuapp.com/ from this article http://servicelab.org/2013/07/24/streaming-audio-between-browsers-with-webrtc-and-webaudio/
But, as you can see, the audio quality on the receiving side is pretty poor (45 kb/sec). Is there any way to get full-quality MP3 streaming, plus the ability to manipulate the stream's data (like adjusting frequencies with an equalizer) on each user's side?
If this is impossible through WebRTC, is there any other Flash-plugin or plugin-less option for it?
Edit: I also stumbled upon these 'Shoutcast-like' guys, http://unltd.fm/, who declare that they use WebRTC to deliver top-quality radio broadcasting, including streaming MP3. If they do, then how?
WebRTC supports two audio codecs: OPUS (max bitrate 510 kbit/s) and G.711. Stick with OPUS; it is modern and more promising, introduced in 2012.
The main files in webrtc-mp3-stream are two years out of date (Jul 18, 2013). I couldn't find an OPUS preference in the code, so the demo possibly runs via G.711.
The webrtc-mp3-stream demo does the encoding job (MP3 as a media source), then transmits the data over UDP/TCP via WebRTC. I do not think you need to re-encode it to MP3 on the receiving side; that would be overkill. Just try to enable OPUS to bring the webrtc-mp3-stream code up to date.
Please refer to "Is there a way to choose codecs in WebRTC PeerConnection?" for how to enable OPUS and see the difference.
I'm the founder of unltd.fm.
igorpavlov is right, but I can't comment on answers yet. We also use the OPUS codec (stereo / 48 kHz) over WebRTC.
Decoding the MP3 (or any other audio format) with Web Audio and then encoding it in OPUS is the way to go. You "just" need to force the SDP negotiation to use OPUS.
You should have sent us an email; it would have saved you your 50 points ;)
You can increase the quality of a stream by setting the SDP to stereo and increasing maxaveragebitrate:
let answer = await peer.conn.createAnswer(offerOptions);
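// Munge the generated answer SDP before applying it: request stereo OPUS and a higher average bitrate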
answer.sdp = answer.sdp.replace('useinbandfec=1', 'useinbandfec=1; stereo=1; maxaveragebitrate=510000');
await peer.conn.setLocalDescription(answer);
This should produce an SDP string that looks like this:
a=fmtp:111 minptime=10;useinbandfec=1; stereo=1; maxaveragebitrate=510000
This gives a potential maximum bitrate of 510 kbit/s for the stereo stream, i.e. about 255 kbit/s per channel. The actual bitrate depends on the speed of your network and the strength of your signal.
You can read more about the other available SDP attributes at: https://www.rfc-editor.org/rfc/rfc7587
I'm creating an application for video conferencing using Media Foundation, and I'm having an issue decoding the H.264 video frames I receive over the network.
The Design
Currently my network source queues a token on every sample request, unless there is a stored sample available. If a sample arrives over the network and no token is available, the sample is stored in a linked list; otherwise it is queued with the MEMediaSample event. I also have the decoder set to low latency.
My Issue
When running the topology with my network source, I immediately see the first frame rendered to the screen. I then experience a long pause until the live stream begins to play perfectly. After a few seconds the stream appears to pause, but then you notice that it's just looping through the same frames over and over, adding in a live frame every couple of seconds that disappears immediately before going back to the old loop.
Why is this happening? I'm by no means an expert in H.264, or Media Foundation for that matter, but I've been trying to fix this issue for weeks with no success and have no idea where the problem might be. Please help me!
The time stamp is created by starting at 0 and adding the duration to it for every new sample. The other data is retrieved from an IMFSampleGrabberSinkCallback.
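In other words, the timestamping amounts to something like this sketch (not my actual code; Media Foundation sample times and durations are in 100-nanosecond units, and the duration here is assumed to come from the grabber callback):
#include <windows.h>
#include <mfobjects.h>

LONGLONG nextSampleTime = 0;                           // running presentation time, starts at 0

void StampSample(IMFSample *pSample, LONGLONG sampleDuration /* 100-ns units, from the grabber callback */)
{
    pSample->SetSampleTime(nextSampleTime);            // e.g. 0, 333333, 666666, ... for ~33 ms frames
    pSample->SetSampleDuration(sampleDuration);
    nextSampleTime += sampleDuration;                  // the next sample starts where this one ends
}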
I've also posted some of my MFTrace output on the MSDN Media Foundation forums: Link
I mentioned there that the presentation clock doesn't seem to change in the trace, but I'm unsure whether that's the cause or how to fix it.
EDIT:
Could you share the video and a full mftrace log for this issue? It's not clear to me what really happens: do you see the live video after a while?
The current log does not contain enough information to trace the sample processing. From your description it looks like only keyframes are rendered. Also, the duration of the rendered keyframe is odd:
Sample #00A74970, Time 6733ms, Duration 499ms. <- Duration is not 33ms.
I would like to see what happened to that sample.
In any case, if you are using the standard encoder and decoder, the issue should be in your media source and how it buffers frames. An incorrect circular buffer implementation? You may want to try caching a second or two of samples before starting to give them to the decoder.
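As a rough illustration (a hypothetical helper, not tied to your source code), such pre-roll buffering could hold incoming samples until about a second of media has accumulated and only then start handing them out in arrival order:
#include <deque>
#include <utility>
#include <windows.h>
#include <mfobjects.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical pre-roll buffer: nothing is released until ~1 s of media is queued.
class PrerollBuffer
{
public:
    explicit PrerollBuffer(LONGLONG prerollHns = 10000000) : preroll_(prerollHns) {} // 1 s in 100-ns units

    void Push(ComPtr<IMFSample> sample)
    {
        LONGLONG duration = 0;
        if (SUCCEEDED(sample->GetSampleDuration(&duration)))
            buffered_ += duration;                 // track how much media is being held
        queue_.push_back(std::move(sample));
    }

    ComPtr<IMFSample> Pop()                        // returns nullptr while still pre-rolling
    {
        if (!started_ && buffered_ < preroll_)
            return nullptr;
        started_ = true;
        if (queue_.empty())
            return nullptr;
        ComPtr<IMFSample> front = queue_.front();
        queue_.pop_front();
        return front;
    }

private:
    std::deque<ComPtr<IMFSample>> queue_;
    LONGLONG buffered_ = 0;
    LONGLONG preroll_;
    bool started_ = false;
};
Your network source would then only queue MEMediaSample events for samples that Pop() actually returns.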
I have a Silverlight 5 based application which works both in-browser and in OOB (out-of-browser) mode.
I need to play video files with the mp4 extension and the encoding shown by VLC media player:
Stream 0
Type: Video
Codec: MPEG-4 Video (mp4v)
Resolution: 320*240
Frame Rate: 10
Decoded Format: Planar 4:2:0 YUV.
I have tried using the MediaElement provided by the Silverlight framework and also tried using Azure Media Services to display the video, but did not have any success. It says that the file format is not supported.
I will really appreciate your inputs.
Alpee
Windows does include an MPEG-4 video decoder; see the links below:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff819502(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ff819503(v=vs.85).aspx
You can use MediaInfo, a free tool, to check the codec info for each stream inside a media container.
Without examining the files it is hard to tell what's wrong.