I'm taking a look at some random Icecast playlists (available here: http://dir.xiph.org/index.php) and I'm wondering why many of them seem to list the same mp3 file over and over.
e.g.:
GET http://dir.xiph.org/listen/193289/listen.m3u
http://87.230.101.49:80/top100station.mp3
http://87.230.103.107:80/top100station.mp3
http://87.230.101.16:80/top100station.mp3
http://87.230.101.78:80/top100station.mp3
http://87.230.101.11:80/top100station.mp3
http://87.230.103.85:80/top100station.mp3
http://80.237.158.87:80/top100station.mp3
http://87.230.101.30:80/top100station.mp3
http://80.237.158.88:80/top100station.mp3
http://87.230.103.9:80/top100station.mp3
http://87.230.103.58:80/top100station.mp3
http://87.230.101.12:80/top100station.mp3
http://87.230.101.24:80/top100station.mp3
http://87.230.103.60:80/top100station.mp3
http://87.230.103.8:80/top100station.mp3
http://87.230.101.25:80/top100station.mp3
http://87.230.101.56:80/top100station.mp3
http://87.230.101.50:80/top100station.mp3
For what it's worth, Icecast streams are intended for playing Shoutcast-type live streams (think live radio over the internet), so it makes sense that there wouldn't be a list of different tracks... but what are those repeats? Different bitrates, or just mirrors?
I'm asking all of this because I'm looking to stream one of those mp3s inside my mobile app, so I'm wondering if there is any need to somehow figure out which url to use...
Internet radio streams are typically mirrored across many servers. This balances the bandwidth load, and reduces points of failure.
In addition, it is common for servers to fill up as they get popular. Most players will go to the next track in the playlist when a track fails, so this allows automatic failover when a client cannot connect to a specific server, or if it gets disconnected.
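To make that concrete, here is a minimal sketch (in Python, with the mirror list shortened) of what most players effectively do with such a playlist: walk the entries in order and fall through to the next one when a connection fails. This illustrates the failover behaviour, not any particular player's implementation.

import urllib.request

def open_first_working_stream(mirror_urls, timeout=5):
    # Try each mirror in playlist order; return the first stream that connects.
    for url in mirror_urls:
        try:
            return urllib.request.urlopen(url, timeout=timeout)
        except OSError:
            continue  # server full or unreachable; fall through to the next mirror
    raise RuntimeError("no mirror reachable")

# The entries from the .m3u above are interchangeable mirrors of one stream:
mirrors = [
    "http://87.230.101.49:80/top100station.mp3",
    "http://87.230.103.107:80/top100station.mp3",
]
stream = open_first_working_stream(mirrors)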
I'm currently working on a simple browser-based VOD service, using mostly AWS technologies. HLS will be used as the streaming protocol, which is supported by Elastic Transcoder.
Currently, the source material is 720p (1280x720), and this is also the resolution I want to show to all devices that can handle it. I would like the videos to work on desktops, iPads, and most smartphones. I'm using ViBlast with video.js as the player.
I have the following questions:
The m3u8 playlist allows you to specify multiple streams. Should each resolution get its own playlist (with different source streams at different bitrates), or can I put everything in one playlist (so one playlist can serve different resolutions and bitrates)?
It seems desktops and most recent tablets can display 1280x720, so I assume the same playlist can be used; I just need to specify bitrates. However, what is the best resolution for mobile phones? Every device seems to have different dimensions (looking at Android here).
Which bitrate should I use for each device? I'm doing some research, but it seems every article has a different recommendation for the "best" setting, but they never explain how they got those numbers.
If I use a playlist which contains different sources with different resolutions, does the order in the playlist matter? I've read somewhere that the lowest bitrate should be listed first, but does this also apply to resolutions? Or does the player automatically pick the stream that best matches the screen?
I'm looking for a "good enough" solution that will fit most devices.
The m3u8 playlist allows you to specify multiple streams. Should each
resolution get its own playlist (with different source streams at
different bitrates), or can I put everything in one playlist (so one
playlist can serve different resolutions and bitrates)?
For reference, here is Apple's Technical Note TN2224 on the subject, which is a good guideline for the information below:
https://developer.apple.com/library/content/technotes/tn2224/_index.html
Short answer: Each resolution should have its own variant playlist.
Typically there is one master playlist with references to the variant playlists (aka renditions). The variant playlists are different quality streams of the same video, varying in bitrate and resolution. But each variant only contains one bitrate level. Sample master playlist:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=4648000,RESOLUTION=3840x2160
4648k/stream.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2670000,RESOLUTION=1920x1080
2670k/stream.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1823000,RESOLUTION=1280x720
1823k/stream.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=975000,RESOLUTION=854x480
975k/stream.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=491000,RESOLUTION=640x360
491k/stream.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=186000,RESOLUTION=256x144
186k/stream.m3u8
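Each entry in the master playlist points at a variant (media) playlist, which lists the actual media segments for that one rendition. For illustration, here is a minimal sketch of what 1823k/stream.m3u8 might contain (the segment names and durations are hypothetical):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST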
"The bitrates are specified in the EXT-X-STREAM-INF tag using the BANDWIDTH attribute" (TN2224). And each descending bandwidth (bitrate) level has a corresponding lower resolution because there is less data available and usually expected to be viewed on smaller, mobile screens.
It seems desktops and most recent tablets can display 1280x720, so I
assume the same playlist can be used; I just need to specify bitrates.
However, what is the best resolution for mobile phones? Every device
seems to have different dimensions (looking at Android here).
Resolution and bitrate go together. A stream encoded at a 186K bitrate (very low) does not have enough data to fill a 1280x720 screen. But a mobile device on a cell network might not be able to download a high bitrate. So you need several variant options available, each with an appropriate resolution and bitrate.
Don't focus on a specific device or else you'll never finish. Build a ladder of bitrate/resolution variants using common 16:9 aspect ratios. E.g. 1280x720, 1024x576, 640x360,...
There are several other things to consider, though. You are already considering bitrate and resolution, but are these videos encoded with H.264? If so, you should also consider the profile level. Here is a good article on the topic: http://www.streamingmedia.com/Articles/ReadArticle.aspx?ArticleID=94216&PageNum=1.
Which bitrate should I use for each device? I'm doing some research,
but it seems every article has a different recommendation for the
"best" setting, but they never explain how they got those numbers.
Same answer as for resolution. Don't focus on the actual device. Build a ladder of bitrate/resolution variants that allows the device to select the most appropriate stream based on available bandwidth, battery life, processing power, etc.
If I use a playlist which contains different sources with different
resolutions, does the order in the playlist matter? I've read
somewhere that the lowest bitrate should be listed first, but does
this also apply to resolutions? Or does the player automatically pick
the stream that best matches the screen?
Each publisher or manufacturer might build their player differently, but this is what Apple recommends in TN2224:
"First bit rate should be one that most clients can sustain
The first entry in the master playlist will be played at the initiation of a stream and is used as part of a test to determine which stream is most appropriate. The order of the other streams is irrelevant. Therefore, the first bit rate in the playlist should be the one that most clients can sustain."
Hope that helps.
Ian
I want to use bokeh to display a time series and provide the data with updates via source.stream(someDict). The data is, however, generated by a C++ application (the server) that may run on the same machine or on a machine in the network. I was looking into transmitting the updated data (only the newly added lines of the time series) via ZMQ to the Python program (the client).
Transmitting the message seems easy enough to implement, but the dictionary is column-based. Is it not more efficient to append lines, i.e. one line per point in time, and send those?
If there is no good way to do the first, what kind of object should I send? Do I need to marshal the information, or is it sufficient to build a long string like {col1:[a,b,c,...], col2:[...],...} and send that to the client? I expect to send no more than a few hundred lines, of about 10 floats each, per second.
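For concreteness, here is roughly what I have in mind on the Python side. This is a sketch only: the column names, port, and PUSH/PULL socket pair are placeholders, and JSON stands in for the "long string" option above.

import json
import zmq
from bokeh.models import ColumnDataSource

context = zmq.Context()
socket = context.socket(zmq.PULL)
socket.connect("tcp://localhost:5556")  # assuming the C++ server binds a PUSH socket

source = ColumnDataSource(data=dict(time=[], value=[]))

def poll_once():
    # The server sends e.g. {"time": [t1, t2], "value": [v1, v2]}
    msg = socket.recv()        # blocks until an update arrives
    update = json.loads(msg)   # column-based dict, directly usable by stream()
    # In a real bokeh server app this would be scheduled on the document's
    # event loop rather than called directly.
    source.stream(update)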
Thanks for any helpful answers.
I'm working on an academic project (a search engine). The main functions of this search engine are:
1. crawling
2. storing
3. indexing
4. page ranking
All the sites that my search engine will crawl are available locally, which means it's an intranet search engine.
After storing the files found by the crawler, these files need to be served quickly, for caching purposes.
So I wonder: what is the fastest way to store and retrieve these files?
The first idea that came up was to use FTP or SSH, but these are connection-based protocols; the time to connect, search for the file, and fetch it is lengthy.
I've already read about Google's "Anatomy of a Search Engine" paper; I saw that they use a data repository, and I'd like to do the same, but I don't know how.
NOTES: I'm using Linux/Debian, and the search engine back-end is coded in C/C++. Help!
Storing individual files is quite easy - wget -r http://www.example.com will store a local copy of example.com's entire (crawlable) content.
Of course, beware of generated pages, where the content is different depending on when (or from where) you access the page.
Another thing to consider is that maybe you don't really want to store all the pages yourself, but just forward to the site that actually contains the pages. That way, you only need to store a reference to which page contains which words, not the entire page. Since a lot of pages will have much repeated content, you only really need to store the unique words in your database and a list of pages that contain each word. (If you also filter out words that occur on nearly every page, such as "if", "and", "it", "to", "do", etc., you can reduce the amount of data that you need to store.) Count the occurrences of each word on each page, then compare across pages, to find the words that are meaningless to search for.
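As a sketch of that idea (Python here for brevity, even though your back-end is C/C++; the names and the 90% threshold are illustrative):

from collections import Counter, defaultdict

index = defaultdict(set)   # word -> set of pages containing it
doc_counts = {}            # page -> Counter of word frequencies on that page

def add_page(url, text):
    words = text.lower().split()
    doc_counts[url] = Counter(words)
    for word in set(words):
        index[word].add(url)

def stop_words(min_fraction=0.9):
    # Words that occur on nearly every page are meaningless to search for.
    total = len(doc_counts)
    return {w for w, pages in index.items() if len(pages) >= min_fraction * total}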
Well, if the program is to be constantly running during operation, you could just store the pages in RAM - grab a gigabyte of RAM and you'd be able to store a great many pages. This would be much faster than caching them to the hard disk.
I gather from the question that the user is on a different machine from the search engine, and therefore from the cache. Perhaps I am overlooking something obvious here, but couldn't you just send them the HTML over the connection already established between the user and the search engine? Text is very light data-wise, after all, so it shouldn't be too much of a strain on the connection.
I have created a web application using Django, HTML, and jQuery (and JS).
I need to record audio from a mic and store it as a .wav file. What is the best way to go about doing this? (Better if it's supported on most browsers, like Chrome, Firefox, and Safari.)
I don't mind using a flash plugin if it's easy to understand and use.
Please suggest good ideas and links.
Thanks in advance.
Flash highly compresses the audio data before sending it, if you use the conventional ways of acquiring data from the microphone. That is, if you use NetStream.publish() with a microphone attached to it. I'm actually not sure about the format, but I would imagine that it is something proprietary... it could be MP3. But it could also be Speex... at least I know that Flash supports this format.
Now, the Microphone class is capable of exposing the raw sound data within the application. You need to listen for the sampleData event dispatched from its instance. However, the documentation, for some reason, doesn't cover that... This is a relatively new feature, so perhaps they just forgot to add it to the docs. Here, however, they posted an example of how to do that (scroll to the "Capturing microphone sound data" paragraph). You will need to write the "encoder" for WAV data yourself, but the format in which it outputs the audio is already some sort of PCM, so you will only need to write the proper headers (or so I think).
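For what it's worth, the WAV container is just a 44-byte RIFF header in front of the raw PCM samples, so "writing the proper headers" is the same job in any language. A minimal sketch in Python (the Flash side would build the same bytes in ActionScript; the parameter defaults are assumptions):

import struct

def wav_header(num_samples, sample_rate=44100, channels=1, bits=16):
    # Build the 44-byte RIFF/WAVE header for uncompressed PCM data.
    byte_rate = sample_rate * channels * bits // 8
    block_align = channels * bits // 8
    data_size = num_samples * block_align
    return struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + data_size, b"WAVE",
        b"fmt ", 16, 1, channels,   # 16-byte fmt chunk, audio format 1 = PCM
        sample_rate, byte_rate, block_align, bits,
        b"data", data_size,
    )

Prepend this header to the raw samples and any player should accept the result as a .wav file.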
I need some help with software design. Let's say I have a camera that gets acquisitions, sends them to a filter, and displays the images one at a time.
Now, what I want is to wait for two images, and after that send the two images to the filter and both of them to the screen.
I thought of two options and I'm wondering which one to choose:
In my Acquisitioner (or whatever) class, should I put a queue which waits for two images before sending them to the Filterer class?
Should I put an Accumulator class between the Acquisitioner & Filterer?
Both would work in the end, but which one do you think would be better?
Thanks!
To give a direct answer, I would implement the Accumulator policy in a separate object. Here's the why:
Working on similar designs in the past, I found it very helpful to think of the different 'actors' in this model as sources and sinks. A source object would be capable of producing or outputting an image to the attached sink object. Filters or accumulators in this system would be designed as pipes; in other words, they would implement the interfaces of both a sink and a source. Once you come up with a mechanism for connecting generic sources, pipes, and sinks, it's very easy to implement an accumulation policy as a pipe which, for every nth image received, holds on to it if n is odd and outputs both images if n is even.
Once you have this system, it will be trivial for you to change out sources (image file readers, movie decoders, camera capture interfaces), sinks (image file or movie encoders, display viewers, etc), and pipes (filters, accumulators, encoders, multiplexers) without disrupting the rest of your code.
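A sketch of that pipe idea (Python for brevity; the class and method names are illustrative, not from any particular framework):

class Accumulator:
    # A pipe: a sink for single images, and a source that emits them in pairs.
    def __init__(self, sink):
        self.sink = sink       # downstream sink (a filter, a viewer, ...)
        self.pending = None    # the odd-numbered image we are holding on to

    def receive(self, image):
        if self.pending is None:
            self.pending = image            # odd image: hold it
        else:
            pair = (self.pending, image)    # even image: emit both
            self.pending = None
            self.sink.receive(pair)

The camera (source) just calls receive() on whatever sink it is connected to, so swapping the Accumulator in or out between the Acquisitioner and the Filterer requires no change to either.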
It depends. But if all your queue does is wait for a second image to arrive, I reckon you could simply implement it right in the Acquisitioner.
On the other hand, if you want to incorporate additional functionality there, then the added modularity, and all the benefits that come hand in hand with it, would not hurt one tiny bit.
I don't think it matters all that much in this particular case.