Is there a way to add "Album" to the stream title of an Icecast2 server? - icecast

I currently have a radio player that streams audio from an Icecast server. The player includes the current-song metadata from the Icecast admin, but I also need to include the album in the metadata. Is there any way this is possible using just the Icecast server?

This will depend on the format you are streaming in.
If you are streaming Ogg-encapsulated audio, including Opus, then the full metadata is available to you in the stream. It is entirely up to the individual player software to display it in a sensible way.
In the case of Firefox there is an experimental JavaScript metadata API that provides information about an HTML5 <audio> element.
If you are streaming one of the other formats, like MP3 or AAC, then there is really only one metadata field. You can put anything you want there. Players might interpret it in certain ways, though, like splitting it at a "-" into Artist and Title fields. None of this is well defined, as it originates from the hacks introduced by Shoutcast. Inside Icecast it is handled as a single field.
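For the MP3/AAC case, one common approach is to push "Artist - Title [Album]" into that single field yourself via Icecast's admin metadata endpoint. A minimal sketch follows; the host, port, mount point, and the bracketed-album convention are placeholders/assumptions, not anything Icecast prescribes:

```python
# Sketch: pack artist, title and album into Icecast's single "song" field
# and build the admin URL that updates it. Host, port, mount and the
# "[Album]" convention are assumptions -- adjust for your setup.
from urllib.parse import urlencode

def build_metadata_url(host, port, mount, artist, title, album):
    """Build the /admin/metadata URL that updates the stream's song field."""
    song = f"{artist} - {title} [{album}]"  # everything goes into one field
    query = urlencode({"mount": mount, "mode": "updinfo", "song": song})
    return f"http://{host}:{port}/admin/metadata?{query}"

url = build_metadata_url("localhost", 8000, "/stream",
                         "Miles Davis", "So What", "Kind of Blue")

# The actual request must be authenticated with the admin (or source)
# credentials, e.g.:
#   import urllib.request, base64
#   req = urllib.request.Request(url)
#   req.add_header("Authorization",
#                  "Basic " + base64.b64encode(b"admin:hackme").decode())
#   urllib.request.urlopen(req)
```

Whether the player then shows the album depends entirely on how it chooses to split and render that one string.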

Related

Is there a Digital Signage Program with Cached Credentials for Browser View?

I'm trying to find a digital signage program that can display an ongoing PowerPoint on half the screen, and a live view of Outlook calendars on the other half. We want a certain group of employees to be able to see what they're doing for the day, and for them to be able to see changes happen.
Here's an example of how Outlook Calendar would be displayed
I was looking into PiSignage, as well as Galaxy Signage. However, neither of them seems capable of displaying the calendar properly, or they don't store credentials.
I was looking for something relatively simple to use for the users that will be updating the content of the rotating powerpoint.
Having that live view of Outlook is mainly what is desired though.
There is no "relatively simple" solution, as you need a combination of features and some web-app development.
PowerPoint:
I do not know of any digital signage player that plays PowerPoint files directly. In most cases, you have to convert the PPT to videos or images.
Outlook Calendar with credentials:
This is possible via digital signage widgets.
Widgets are websites/web apps that run locally on the player. This way you can handle credentials and use any web API/service you want via ordinary HTML and JavaScript. In your case it is not complex, but you will need some JS development.
Multiple Zones:
You need software that can display these widgets and websites in multiple zones.
The player hardware from IAdea, which is based on the W3C SMIL language, supports multiple zones and widgets. As an alternative, there is an open-source SMIL player developed by me.
You can use both player solutions with any SMIL-compatible SaaS. IAdea includes Windows software for creating playlists. You can also create SMIL indexes manually, like HTML or XML.
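To illustrate what such a hand-written SMIL index might look like, here is a hypothetical two-zone layout: slides rotating on the left, a widget on the right. Region sizes, file names, and the widget reference are placeholders, and exact attribute support varies between SMIL players:

```xml
<smil>
  <head>
    <layout>
      <root-layout width="1920" height="1080"/>
      <region xml:id="slides"   left="0"   top="0" width="960" height="1080"/>
      <region xml:id="calendar" left="960" top="0" width="960" height="1080"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- left zone: exported PowerPoint slides looping forever -->
      <seq repeatCount="indefinite">
        <img region="slides" src="slide1.jpg" dur="10s"/>
        <img region="slides" src="slide2.jpg" dur="10s"/>
      </seq>
      <!-- right zone: the calendar widget (a local web app) -->
      <ref region="calendar" src="calendar-widget.wgt" dur="indefinite"/>
    </par>
  </body>
</smil>
```

The `<par>` element runs both zones in parallel, which is what gives you the split-screen effect.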

What is the correct way to organize multiple recordings from a single device: AWS Kinesis video streams

What is the correct way to create a searchable archive of videos from a single Raspberry PI type device?
Should I create a single stream per device, then whenever that device begins a broadcast it adds to that stream?
I would then create a client that lists timestamps of those separate recordings on the stream? I have been trying to do this, but I have only gotten as far as ListFragments and GetClip, neither of which seems to do the job. What is the use case for working with fragments? I'd like to get portions of the stream separated by distinct timestamps. As in, if I have a recording from 2pm to 2:10pm, that would be a separate list item from a recording taken between 3pm and 3:10pm.
Or should I do a single stream per broadcast?
I would create a client to list the streams and then allow users to select between streams to view each video. This seems like an inefficient use of the platform: if I have five 10-second recordings made by the same device over a few days, it creates five separate archived streams.
I realize there are implications related to data retention in here, but am also not sure how that would act if part of a stream expires, but another part does not.
I've been digging through the documentation to try to infer what best practices are related to this but haven't found anything directly answering it.
Thanks!
Hard to tell what your scenario really is. Some applications use sparsely populated streams per device and use ListFragments API and other means to understand the sessions within the stream.
This doesn't work well if you have very sparse streams and a large number of devices. In that case, some customers implement a "stream leasing" mechanism, by which their backend service or some centralized entity keeps track of a pool of streams and leases them to the requestor, potentially adding new streams to the pool. The stream lease times are then stored in a database somewhere so the consumer-side application can do its business logic. The producer application can also "embed" certain information within the stream using the FragmentMetadata concept, which effectively outputs MKV tags into the stream.
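The "sessions within the stream" idea from the first approach can be sketched with boto3: list fragments for a time window, then split them into recording sessions wherever there is a gap between producer timestamps. The stream name and the 30-second gap threshold are placeholders; the grouping helper is illustrative, not a KVS API:

```python
# Sketch: group KVS fragment timestamps into distinct recording sessions.
from datetime import datetime, timedelta

def group_sessions(timestamps, max_gap=timedelta(seconds=30)):
    """Group sorted fragment timestamps into (start, end) sessions,
    splitting wherever consecutive fragments are further apart than max_gap."""
    sessions = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][1] <= max_gap:
            sessions[-1][1] = ts          # fragment continues current session
        else:
            sessions.append([ts, ts])     # gap found: start a new session
    return [tuple(s) for s in sessions]

def list_fragment_timestamps(stream_name, start, end):
    """Fetch producer timestamps via the real ListFragments API.
    Needs AWS credentials; boto3 is imported lazily so the rest of the
    sketch runs without it."""
    import boto3
    kv = boto3.client("kinesisvideo")
    endpoint = kv.get_data_endpoint(
        StreamName=stream_name, APIName="LIST_FRAGMENTS")["DataEndpoint"]
    archive = boto3.client("kinesis-video-archived-media",
                           endpoint_url=endpoint)
    resp = archive.list_fragments(
        StreamName=stream_name,
        FragmentSelector={
            "FragmentSelectorType": "PRODUCER_TIMESTAMP",
            "TimestampRange": {"StartTimestamp": start, "EndTimestamp": end},
        },
    )
    return [f["ProducerTimestamp"] for f in resp["Fragments"]]
```

Applied to the question's example, a 2:00–2:10pm recording and a 3:00–3:10pm recording on the same stream would come back as two separate sessions.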
If you have any further scoped-down questions regarding the implementation, etc., don't hesitate to file GitHub issues against the particular KVS assets in question, which is the fastest way to get answers.

Record live streaming video with WebRTC and stream with AWS

I'm trying to develop a website that basically lets a user visit a page, click a button, and use their built-in camera to live-stream video with audio to others who visit another URL.
I need some clarity on what I need to develop and what I can get from a 3rd party to save time. AWS looks to cover all the encoding and delivery (http://aws.amazon.com/cloudfront/streaming/), but I'm confused about the process by which I should record and deliver the content to S3. It's just too much information.
From all my research it looks like I should build a WebRTC solution, which I have done, then transport that data with JavaScript from the client's browser to my server, and from there to AWS. Is this the best approach, or should I be using a 3rd party that's putting more time into that element?
I have seen the Kurento project, as well as this RecordRTC project.
Like I said, I'm finding there is just too much information on the topic.
So what are my options for:
In-browser recording with WebRTC. Anything else I should do, or just force users to upgrade to a supporting browser?
WebRTC means I have to use JavaScript for the delivery; is Node a better option for the server to take delivery of this streaming data?
Anything else I need to know before I pass it off to S3 for delivery through CloudFront?
As you can see, the core of my question is recording the data and transporting it to the web server so I can deliver it for streaming.
I am looking for the same thing.
In 2020, it seems it should be possible with RecordRTC and then uploading blobs / multipart form data directly to S3.

On the fly Stream and transcode video with Django

I have a model that uses "models.FileField()", which I then display back to the user so they may click the link and have a file rendered in their browser. The user can upload various types of files.
Problem is, I'd like to handle large AVIs differently and have the file stream to the user.
The requirement I have is to simply stream/transcode video files from the media_root dir to an end user's browser, preferably in a mac friendly format. It would be for a couple users at most.
I've searched and stumbled upon a few projects:
https://github.com/andrewebdev/django-video
https://github.com/rugginoso/django-transcodeandstream
As I'm relatively new to Django, I'm not sure how to incorporate their code into my project.
Any thoughts, suggestions?
You can check Amazon Elastic Transcoder. It is a media transcoding service in the cloud, designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or "transcode") media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs.
Or you can check WebFaction; they offer video and image processing on their servers, which you can use.
If you use either of those, you can ask them about the installation process and how to integrate it into your project.
And one more thing: if you want to play the video in the browser, you will need a video player like JW Player.
Hope this will help you get started! Best wishes!
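If you do want to serve the files from MEDIA_ROOT yourself, the core of streaming in Django is just a chunked iterator feeding a streaming response, so the whole file never sits in memory. A stdlib-only sketch (the path, chunk size, and content type are placeholders; on-the-fly transcoding would typically mean piping through a tool like ffmpeg instead of opening the file directly):

```python
# Sketch: stream a large file in fixed-size chunks instead of loading
# it into memory at once.
def file_iterator(path, chunk_size=64 * 1024):
    """Yield a file's contents in fixed-size binary chunks."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Inside a Django view this iterator would feed StreamingHttpResponse:
#   from django.http import StreamingHttpResponse
#   return StreamingHttpResponse(file_iterator(full_path),
#                                content_type="video/x-msvideo")
```

For Mac-friendly playback you would still want the source converted to something like H.264/MP4 first, which is where the transcoder services above come in.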

HTML5 and huge local storage of downloaded files

I'm choosing a platform for distributed MP3 player. Its basic functionality will be:
authenticate itself and query a central server (through an HTTP request to an asmx web service)
get the list of approved MP3s as JSON
download those MP3s to the local disc
play the downloaded files (not stream them; the player must also work without a connection to the server)
I'm thinking about an HTML5-based player, but I'm not sure about the new HTML5 File object's capabilities. Is it possible to download and store a huge (GBs) amount of MP3 data on the local disc and access it purely in HTML5/JS?
Did I miss something? Are there other gaps? The customer needs the player to be as multi-platform as possible, so a quick build in WPF/C# is the last choice.
I would look into the HTML5 File API, or a Chrome extension (which allows unlimited storage with Web SQL databases).
The file API spec is here:
http://dev.w3.org/2009/dap/file-system/pub/FileSystem/
Basically, all other storage mechanisms have a hard limit in most browsers, even though Web SQL is spec'd to allow you to specify a maximum DB size in your app.