Best video format / codec to optimise 'seeking' with Videogular - ionic2

I am using the Videogular2 library within my Ionic 3 application. A major feature of the application is the ability to seek to different places within a video.
I noticed that some formats have very quick seek response, while others take seconds to get there, even if the video is in the buffer already - I assume this may depend on the decoding process being used.
What would the best compromise be in order to speed up seek time while still keeping the file size reasonably small so that the video can be streamed from a server?
EDIT
Well, it turned out that the seek delays were caused by my video having been recorded in the MOV format. Transcoding it afterwards didn't help, presumably because the lossy encoding damage had already been done. After screen-capturing the video and encoding it as a regular MP4, seeking happens almost instantaneously.

What would the best compromise be in order to speed up seek time while still keeping the file size reasonably small so that the video can be streamed from a server?
Decrease the key-frame distance when encoding the video. This lets the decoder build a full frame more quickly, with less scanning, depending on the codec.
This will increase the file size if you keep the same quality parameters, so the compromise is to reduce quality slightly at the same time.
The actual effect depends on the codec itself, how it builds intermediate frames, and how it is supported/implemented in the browser, together with the general loading/caching strategy (you can control some of the latter via Media Source Extensions).
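If the encoding is done with FFmpeg, the key-frame interval is the -g option on the command line, or the gop_size field when driving libavcodec directly. A minimal sketch of the latter, assuming the libx264 encoder is present in your build (the 30-frame GOP, resolution and frame rate are only illustrative):

// Configure an encoder with a short key-frame interval (sketch, not a full encoder).
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main() {
    const AVCodec *codec = avcodec_find_encoder_by_name("libx264");
    if (!codec) { std::fprintf(stderr, "libx264 not available\n"); return 1; }

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width = 1280;
    ctx->height = 720;
    ctx->time_base = AVRational{1, 30};
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;

    // A key frame every 30 frames (1 s at 30 fps) instead of x264's default of 250.
    // Shorter GOPs seek faster but cost bitrate at the same quality settings.
    ctx->gop_size = 30;
    ctx->max_b_frames = 0;   // no B-frames keeps the decode path around seek points simple

    if (avcodec_open2(ctx, codec, nullptr) < 0) {
        std::fprintf(stderr, "could not open encoder\n");
        return 1;
    }
    std::printf("encoder ready, gop_size=%d\n", ctx->gop_size);
    avcodec_free_context(&ctx);
    return 0;
}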

Related

Streaming File Delta Encoding/Decoding

Here's the problem - I want to generate the delta of a binary file (> 1 MB in size) on a server and send the delta to a memory-constrained (low on RAM and no dynamic memory) embedded device over HTTP. Deltas are preferred (as opposed to sending the full binary file from the server) because of the high cost involved in transmitting data over the wire.
Trouble is, the embedded device cannot decode deltas and create the contents of the new file in memory. I have looked into various binary delta encoding/decoding algorithms like bsdiff, VCDiff etc. but was unable to find libraries that supported streaming.
Perhaps, rather than asking if there are suitable libraries out there, are there alternate approaches I can take that will still solve the original problem (send minimal data over the wire)? Although it would certainly help if there are suitable delta libraries out there that support streaming decode (written in C or C++ without using dynamic memory).
Maintain a copy on the server of the current file as held by the embedded device. When you want to send an update, XOR the new version of the file with the old version and compress the resulting stream with any sensible compressor. (Algorithms that accept high-cost encoding in exchange for low-cost decoding would be particularly helpful here.) Send the compressed stream to the embedded device, which reads the stream, decompresses it on the fly, and XORs the result directly onto (a copy of) the target file.
If your updates are such that the file content changes little over time and retains a fixed structure, the XOR stream will be predominantly zeroes, and will compress extremely well: number of bytes transmitted will be small, effort to decompress will be low, memory requirements on the embedded device will be minimal. The further your model is from these assumptions, the less this approach will gain you.
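Here is a minimal device-side sketch of that idea, assuming zlib is acceptable on the device, that the compressed XOR stream has already been received into patch.z, and that the current image lives in target.bin (the file names and buffer sizes are placeholders). The patch is inflated in fixed-size chunks and XORed onto the file in place:

// Stream-decompress an XOR patch and apply it to the target file in place.
#include <cstdio>
#include <cstring>
#include <zlib.h>

int main() {
    static unsigned char inBuf[1024];    // compressed patch chunk
    static unsigned char outBuf[1024];   // decompressed XOR bytes
    static unsigned char oldBuf[1024];   // matching span of the existing file

    std::FILE *patch  = std::fopen("patch.z", "rb");
    std::FILE *target = std::fopen("target.bin", "r+b");
    if (!patch || !target) return 1;

    z_stream zs;
    std::memset(&zs, 0, sizeof zs);
    if (inflateInit(&zs) != Z_OK) return 1;

    long pos = 0;                        // how far into the target we have patched
    int zret = Z_OK;
    while (zret != Z_STREAM_END) {
        zs.avail_in = (uInt)std::fread(inBuf, 1, sizeof inBuf, patch);
        if (zs.avail_in == 0) break;     // truncated stream
        zs.next_in = inBuf;

        do {
            zs.avail_out = sizeof outBuf;
            zs.next_out  = outBuf;
            zret = inflate(&zs, Z_NO_FLUSH);
            if (zret == Z_NEED_DICT || zret == Z_DATA_ERROR || zret == Z_MEM_ERROR) {
                inflateEnd(&zs);
                return 1;
            }
            size_t produced = sizeof outBuf - zs.avail_out;

            // Read the matching span of the old file, XOR, and write it back in place.
            // Bytes past the old end-of-file are treated as zero, so the file can grow.
            std::memset(oldBuf, 0, produced);
            std::fseek(target, pos, SEEK_SET);
            std::fread(oldBuf, 1, produced, target);
            for (size_t i = 0; i < produced; ++i) oldBuf[i] ^= outBuf[i];
            std::fseek(target, pos, SEEK_SET);
            std::fwrite(oldBuf, 1, produced, target);
            pos += (long)produced;
        } while (zs.avail_out == 0);
    }
    inflateEnd(&zs);
    std::fclose(patch);
    std::fclose(target);
    return 0;
}

The server side is the mirror image: XOR the old and new files chunk by chunk and push the result through the compressor.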
Since you said the delta could be arbitrarily random (from zero delta to a completely different file), compression of the delta may be a lost cause. Lossless compression of random binary data is theoretically impossible. Also, since the embedded device has limited memory anyway, using a sophisticated (and therefore computationally expensive) library for compression/decompression of the occasional "simple" delta will probably be infeasible.
I would recommend simply sending the new file to the device in raw byte format, and overwriting the existing old file.
As Kevin mentioned, compressing random data should not be your goal. A few more comments about the type of data you're working with would be helpful. Context is key in compression.
You used the term "image", which makes this sound like the classic video-codec challenge. If you've ever seen weird video artifacts that affect only the portion of the frame that has changed, and then suddenly everything clears up, you've witnessed a key frame followed by a series of delta frames, where the delta frames were not properly applied.
In this model, the server decides what's cheaper:
complete key frame
delta commands
The delta commands are communicated as a series of write instructions that can overlay the client's existing buffer.
Example Format:
[Address][Length][Repeat][Delta Payload]
[Address][Length][Repeat][Delta Payload]
[Address][Length][Repeat][Delta Payload]
There are likely a variety of methods for computing these delta commands. A brute force method would be:
Perform a Smith-Waterman alignment between the two images.
Compress the resulting transform into delta commands.
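A minimal sketch of the client side, with assumed field widths (the sizes of Address/Length/Repeat and the reading of Repeat as "write the payload that many times back-to-back" are illustrative, not a defined format):

// Apply one [Address][Length][Repeat][Delta Payload] command to the client's buffer.
#include <cstdint>
#include <cstring>
#include <cstdio>

struct DeltaCommand {
    uint32_t address;        // offset into the client's buffer
    uint16_t length;         // payload length in bytes
    uint16_t repeat;         // how many consecutive copies to write
    const uint8_t *payload;  // points into the received stream
};

// Overlay a single command onto the existing buffer; returns false if it would overrun.
bool applyCommand(uint8_t *buffer, size_t bufferSize, const DeltaCommand &cmd) {
    size_t total = (size_t)cmd.length * cmd.repeat;
    if (cmd.address + total > bufferSize) return false;
    for (uint16_t r = 0; r < cmd.repeat; ++r)
        std::memcpy(buffer + cmd.address + (size_t)r * cmd.length, cmd.payload, cmd.length);
    return true;
}

int main() {
    static uint8_t frame[64] = {0};                  // the client's current "image"
    const uint8_t payload[] = {0xAB, 0xCD};
    DeltaCommand cmd{8, sizeof payload, 3, payload}; // write AB CD three times at offset 8
    if (applyCommand(frame, sizeof frame, cmd))
        std::printf("byte 10 is now 0x%02X\n", frame[10]);  // prints 0xAB
    return 0;
}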

FFmpeg seeking in Mpeg4 streams

I am currently attempting to develop a player that can perform accurate seeking based on an mpeg4 elementary video stream. I'm in the planning stage and trying to decide how to go about things and I'd like some advice before I start.
Some things to note are:
I will have complete control over the encoding of the file.
The original content will be I-frame only
FFmpeg is the encoding/decoding library
Audio can be disregarded for now. I will only be dealing with the video stream.
Frame accurate seeking must be implemented
So, when I'm encoding the content, can I query what type of frame (I, P, B) has been encoded, so I can construct an additional index stream for the seeking operation? If not, I can query the GOP after it has been encoded to find the I-frame.
As for playback, the user needs to be able to type in a specific time and go to that frame (the nearest I-frame will be suitable for now). We can assume that the GOP is closed and the length is fairly short (e.g. 15 frames). My thoughts are to query the index stream that I created during encode and determine the relevant distance into the stream for the requested time.
I'm not sure how to seek using the FFmpeg library when playing back files.
Has anyone done anything similar and if so, can you give a brief explanation of how you did it?
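For the seek itself, libavformat's av_seek_frame with AVSEEK_FLAG_BACKWARD lands on the key frame at or before a given timestamp, which matches the "nearest I-frame" requirement; note that a bare elementary stream carries little timing information (one reason the index-stream idea helps), whereas a file in a container can be seeked directly. A minimal sketch, with the file name and the 30-second target as placeholders:

// Open a file, seek to the key frame at or before 30 s, ready to decode forward.
extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdio>

int main() {
    const char *path = "input.mp4";            // placeholder file name
    AVFormatContext *fmt = nullptr;
    if (avformat_open_input(&fmt, path, nullptr, nullptr) < 0) return 1;
    if (avformat_find_stream_info(fmt, nullptr) < 0) return 1;

    int video = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    if (video < 0) return 1;

    // Convert 30 s (in AV_TIME_BASE units) into the stream's own time base,
    // then seek to the key frame at or before that timestamp.
    AVRational microseconds = {1, AV_TIME_BASE};
    int64_t target = av_rescale_q(30LL * AV_TIME_BASE, microseconds,
                                  fmt->streams[video]->time_base);
    if (av_seek_frame(fmt, video, target, AVSEEK_FLAG_BACKWARD) < 0) {
        std::fprintf(stderr, "seek failed\n");
        return 1;
    }

    // From here: flush the decoder (avcodec_flush_buffers) and read/decode packets
    // forward until the exact requested frame is reached.
    avformat_close_input(&fmt);
    return 0;
}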

Extract and analyse sound from mp3 files

I have a set of mp3 files, some of which have extended periods of silence or periodic intervals of silence. How can I programmatically detect this?
I am looking for a library in C++, or preferably C#, that will allow me to examine the sound content of these files for the silences.
EDIT: I should elaborate on what I am trying to achieve. I am capturing streaming sports commentary using VLC and saving it to mp3. When a game is delayed or cancelled, the streaming commentary is replaced by a repetitive message saying commentary is not available. By looking for these periodic silences (or total silence), I can detect that there is no commentary and stop the streaming recording.
For this reason I am reluctant to decompress the mp3, because it would mean my test for these silences would be very slow. Unless I can decode just the last 5 minutes of the file?
Thanks
Andrew
I'm not aware of a library that will detect silence directly in the MP3 encoded data, since it's not a trivial task to detect silence without first decompressing. Luckily, it's easy to find libraries that decode MP3 files and expose them as PCM data, and it's trivial to detect silence in PCM data. Here is one such library for C# I found, but I'm sure there are tons: http://www.robburke.net/mle/mp3sharp/
Once you decode the data, you will have a list of PCM samples. In its most basic form, the algorithm you need to detect silence is simply to analyze small chunks (as little as 0.25 s or as much as several seconds) and check that the absolute value of each sample in the chunk is below a threshold. The threshold determines how 'quiet' the sound has to be to count as silence, and the chunk size determines how long the volume has to stay below that threshold. (If you go with very short chunks, you will get lots of false positives from samples near zero-crossings, but 0.25 s or longer should be fine.) There are improvements to the basic approach, such as hysteresis (basically two thresholds, one for the transition into silence and one for the transition out of it) and filtering.
Unfortunately, I don't know offhand of a C++ or C# library that implements level detection, and nothing immediately turns up on Google, but at least the simple version is pretty easy to code.
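A minimal sketch of that chunk-and-threshold idea, assuming the MP3 has already been decoded to 16-bit mono PCM (the sample rate, threshold and chunk length are placeholders to tune):

// Count how many fixed-length chunks of a PCM buffer fall below a silence threshold.
#include <cstdlib>
#include <iostream>
#include <vector>

// True if every sample in [begin, begin + count) stays below the threshold.
bool chunkIsSilent(const std::vector<short> &pcm, size_t begin, size_t count, short threshold) {
    for (size_t i = begin; i < begin + count && i < pcm.size(); ++i)
        if (std::abs(pcm[i]) >= threshold) return false;
    return true;
}

int main() {
    const int sampleRate = 44100;
    const size_t chunk = sampleRate / 4;   // 0.25 s chunks, as suggested above
    const short threshold = 500;           // roughly 1.5% of full scale; tune to taste

    // Placeholder: fill with decoded samples from your MP3 decoder in a real program.
    std::vector<short> pcm(sampleRate * 10, 0);

    size_t silentChunks = 0, totalChunks = 0;
    for (size_t pos = 0; pos + chunk <= pcm.size(); pos += chunk) {
        ++totalChunks;
        if (chunkIsSilent(pcm, pos, chunk, threshold)) ++silentChunks;
    }
    std::cout << silentChunks << " of " << totalChunks << " chunks are silent\n";
    return 0;
}

A run of consecutive silent chunks longer than, say, 30 seconds would then be your signal to stop the recording.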
Edit: Also, this library seems interesting: http://naudio.codeplex.com/
Also, while not a true duplicate question, the answers here will be useful for you:
Detecting audio silence in WAV files using C#

Can you encode an mp3 file with multiple bitrates?

Is it possible to encode an mp3 file using multiple bit rates?
e.g., 0-2min 64kbps, 2-4min 128kbps, and 4-10min 64kbps (the middle section needs higher sound quality)
Or am I stuck having to encode it all at the highest?
Yes. See the following:
Variable Bitrate (Wikipedia)
You will either need an encoder that supports it, or, if you are emitting frames on your own, you can vary the rate per segment as you wish.
edit:
Also, you may have better luck looking for resources using the VBR (variable-bit-rate) keyword.
edit (caveat):
You should note that there are potentially two different concepts in conflict here, as mentioned by sellibitze.
A higher bitrate allows more audio detail to be stored, but it doesn't do anything for the fidelity of your recording. If your recording was already of low quality, higher bitrates will only help preserve whatever fidelity is already in your audio sample.
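For the "encoder that supports it" route, here is a minimal sketch using libmp3lame's C API, assuming its development headers are installed (the sample rate, channel count and quality value are placeholders); the encoder is simply asked for VBR instead of a fixed rate:

// Configure LAME for variable bitrate; feeding PCM and writing frames is omitted.
#include <lame/lame.h>
#include <cstdio>

int main() {
    lame_global_flags *gfp = lame_init();
    if (!gfp) return 1;

    lame_set_in_samplerate(gfp, 44100);
    lame_set_num_channels(gfp, 2);

    // Request VBR: the encoder spends more bits on busy passages (your middle
    // section) and fewer on quiet ones, without any per-segment bookkeeping.
    lame_set_VBR(gfp, vbr_default);
    lame_set_VBR_q(gfp, 2);              // 0 = best quality/largest, 9 = smallest

    if (lame_init_params(gfp) < 0) {
        std::fprintf(stderr, "lame_init_params failed\n");
        return 1;
    }
    std::printf("encoder configured for VBR\n");
    // ... feed samples with lame_encode_buffer_interleaved(), then lame_encode_flush() ...
    lame_close(gfp);
    return 0;
}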
Does the middle section need to be higher quality, or just a higher bitrate to maintain constant quality? If it's the latter, you get it with a decent encoder in VBR mode (variable bitrate). If you want the quality of the middle section (the "region of interest") to be higher, I don't think it's that easy. In theory you can encode the track twice and mix and match afterwards, but mixing frames is not that easy because of the bit reservoir.
I think you are looking for the term variable bit rate, more info here.

Video and audio file

Duplicate: audio and video file compressor
I would like to compress a WMV file of 2 MB or larger down to a 250 KB 3GP file for mobile devices.
Are there any great compressors for video or audio?
I'm a huge fan of ffmpeg. Find out what codec and resolution your mobile device wants. If you're lucky, H.264 will be supported.
You might have some trouble here. WMV is a container, not a codec, so we can't tell specifically what level of compression we're dealing with and what needs to change where, but it may be difficult to get such a dramatic reduction in file size without huge compromises, like drastically decreasing the resolution of the video. These compromises may be acceptable for mobile viewing, but there's no guarantee you'll be able to get the file size down that far, especially if your file is already encoded with a modern codec like H.264 or VC-1.
My first piece of advice is to try to find a good wizard-like transcoder with a friendly, non-developer interface. Video compression is intense work; the power tools behind it (the same tools these wizard-like applications use to actually do the work) are very complex, are usually command-line only, and take lots of practice and tweaking to get right. If your mobile device's vendor provides such utilities, for instance, you'll be much better off using them.
If you aren't able to locate such a utility, godspeed, and spend lots of time with mencoder's and ffmpeg's man pages and IRC rooms. It's not difficult per se; it just takes a lot of study and reading to get acceptable output, especially when you're going after the reductions you've mentioned. Good luck.