Two sources, same stream - Icecast2

Is it possible to have two sources on the same mount point?
Example:
Source 1 (from the S1 IP address) sends music to the Icecast2 server.
Source 2 (from the S2 IP address) sends voice to the Icecast2 server.
A listener runs: mplayer ip_icecast2:8000/example.ogg
The listener hears the music and the voice at the same time.

Liquidsoap should be able to handle the mixing and set the proper metadata. The web site is at http://savonet.sourceforge.net/
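A minimal Liquidsoap sketch of that setup (the ports, passwords and mount names are made-up examples; check the operator names against your Liquidsoap version):

# Receive the two sources over harbor inputs (S1 -> /music, S2 -> /voice)
music = input.harbor("music", port=8010, password="hackme")
voice = input.harbor("voice", port=8020, password="hackme")

# Mix them; mksafe() keeps the output alive if a source disconnects
mixed = mksafe(add([music, voice]))

# Send the mix to Icecast as a single mount point
output.icecast(%vorbis, host="localhost", port=8000,
               password="hackme", mount="example.ogg", mixed)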

Yes, but not with Icecast alone.
What you need to do is mix the two streams; Icecast doesn't have any features for doing anything like this. There are many ways to do it. I would probably look at mixing the streams together with FFmpeg, using the amerge or amix filters.
Then you need to get the output of FFmpeg to your Icecast server. With some scripting, you should be able to pipe FFmpeg's STDOUT into a TCP connection to Icecast; prior to sending data, you will need to send the appropriate headers and so on.
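As a command-line sketch (host names, mount and the source password are placeholders; the icecast:// output protocol needs a reasonably recent FFmpeg build, otherwise fall back to the pipe-to-TCP approach above):

ffmpeg -i http://S1_IP:8000/music.ogg -i http://S2_IP:8000/voice.ogg \
       -filter_complex amix=inputs=2 -c:a libvorbis \
       -content_type application/ogg \
       -f ogg icecast://source:PASSWORD@ICECAST_IP:8000/example.ogg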

You can actually do it without anything else; you need to define three mount points:
stream
live
autodj
The trick relies on a setting called fallback, which you configure directly in the Icecast XML file. It works like this: if the live audio is not available, fall back to autodj, and either one will play on the stream, with preference given to live. Note that fallback switches between sources rather than mixing them.
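A sketch of the relevant icecast.xml section (the mount names follow the answer above; everything else is an example):

<mount>
    <mount-name>/live</mount-name>
    <!-- when /live has no source, listeners are moved to /autodj -->
    <fallback-mount>/autodj</fallback-mount>
    <!-- move them back as soon as the live source reconnects -->
    <fallback-override>1</fallback-override>
</mount>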

I'm assuming you mean one source of music and one of speech, from different URLs. If you don't know how to use Liquidsoap, you could grab both streams using a third-party application like SAM Broadcaster.
This will decode the streams and mix them like a conventional audio mixer before re-encoding and sending the result to a single Icecast server as one stream.
Keep in mind that if you are doing voice-overs, there will be latency to deal with: your speech will be heard by the final listener slightly after the part of the audio you were speaking over. How much depends on the buffer lengths involved, because SAM Broadcaster will be 'listening' to the audio at the same point you are (assuming you are speaking along to the source audio stream). On top of that, add the playout buffer SAM needs to process and mix your voice stream before passing it on.

Related

How to write a VLC plugin that can interact with the operating system

I need to find out whether it is possible, and how (I don't care about the language: C/C++, Lua, Python, ...), to make a VLC plugin whose purpose is to be called by the VLC player and perform some action at specific times in the video stream.
The action I need is to open a UDP socket and send some data read from a file that comes along with the currently playing video.
I need to make something like a subtitle reader which, at best, can open a UDP socket and send the data it reads to the server.
I am not sure that creating a UDP socket is possible in Lua; maybe the better option is a binary C/C++ plugin, but I can't find any example.
In general, my requirements are:
Load settings file at VLC launch
Need to be triggered by the player at specific times of the video stream
Get the file name of the source video stream
Open the file (script) with the same name but different extension
Open a UDP socket
Compose the message
Send the message
Continue the loop until the end of the video stream
Any information, example, or link is greatly appreciated.
Looks like you would like to create a control interface module. Those are written in C/C++ within the VLC context and in turn need to be (re-)compiled for each platform you would like to target.
Have a look at the audioscrobbler module to see how to interact with the current input stream and how to retrieve metadata such as file name, etc. Since those modules are in C, opening sockets and transmitting data is not a big deal.
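For the socket part itself, a minimal POSIX sketch (written as C++ for brevity, the C version is near-identical; host, port and payload are whatever your module needs):

#include <arpa/inet.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Send one datagram to host:port; returns true if the whole message left.
bool send_udp(const std::string& host, int port, const std::string& msg) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   // datagram socket, no handshake
    if (fd < 0) return false;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host.c_str(), &addr.sin_addr);
    ssize_t n = sendto(fd, msg.data(), msg.size(), 0,
                       reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(fd);
    return n == static_cast<ssize_t>(msg.size());
}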
The biggest caveat is probably that you need a complex compilation environment if you would like to target the Windows platform. Have a look at the compilation HOWTO's on the wiki http://wiki.videolan.org/Compile_VLC/ since this is probably what you would like to try prior to doing any coding.
Thinking about it, you can probably achieve a similarly featured extension in Lua, which is easier to develop (since you don't need to compile VLC yourself, and it will be cross-platform). Opening UDP sockets might be problematic, though; TCP will just work. This page could be a nice starting point: http://www.coderholic.com/extending-vlc-with-lua/

How to create a video streaming HTTP server?

I'm using C++ and the POCO libraries, and I'm trying to implement a video streaming HTTP server.
Initially I used Poco::StreamCopier, but the client failed to stream; instead, the client downloads the video.
How can I make the server send a streaming response so that the client can play the video in the browser instead of downloading it?
While not within POCO, you could use ffmpeg. It has streaming servers for a number of video protocols and is written in C (which you could write POCO-like adapters for).
http://ffmpeg.org/ffmpeg.html#rtp
http://ffmpeg.org/ffmpeg.html#toc-Protocols
http://git.videolan.org/?p=ffmpeg.git;a=tree
And it has a pretty liberal license:
http://ffmpeg.org/legal.html
You need to research which video encoding and container are right for streaming -- not all video files can be streamed.
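As a quick sketch of what ffmpeg can do out of the box (the file name and address are examples; the RTP muxer carries one stream per output, hence -an to drop audio, and ffmpeg prints an SDP description the client player needs):

ffmpeg -re -i input.mp4 -an -c:v copy -f rtp rtp://234.5.5.5:1234

The -re flag paces the input at its native frame rate, which is what you want for live-like streaming rather than a file transfer.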
Without using something to decode the video at the other end, but simply over HTTP, you can use the MIME type "content-type: multipart/x-mixed-replace; boundary=..." and send a series of JPEG images.
This is actually called M-JPEG over HTTP. See: http://en.wikipedia.org/wiki/Motion_JPEG
The browser will replace each image as it receives it, which makes it look like video. It's probably the easiest way to stream video to a browser, and many IP webcams support it natively.
However, it's not bandwidth-friendly by any means, since it has to send a whole JPEG file for each frame. So it will work over the internet, but it will use more bandwidth than other methods.
That said, it is natively supported in most browsers now, and it sounds like that is what you're after.
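Since you're on POCO, a sketch of such a handler might look like this (nextJpegFrame() is a hypothetical stand-in for whatever produces each encoded frame):

#include <Poco/Net/HTTPRequestHandler.h>
#include <Poco/Net/HTTPServerRequest.h>
#include <Poco/Net/HTTPServerResponse.h>
#include <ostream>
#include <string>

std::string nextJpegFrame();  // hypothetical: returns one encoded JPEG frame

class MjpegHandler : public Poco::Net::HTTPRequestHandler {
public:
    void handleRequest(Poco::Net::HTTPServerRequest&,
                       Poco::Net::HTTPServerResponse& response) override {
        // The browser keeps the connection open and replaces each part
        response.setContentType("multipart/x-mixed-replace; boundary=frame");
        std::ostream& out = response.send();  // sends the HTTP headers
        while (out.good()) {
            std::string jpeg = nextJpegFrame();
            out << "--frame\r\n"
                << "Content-Type: image/jpeg\r\n"
                << "Content-Length: " << jpeg.size() << "\r\n\r\n";
            out.write(jpeg.data(), jpeg.size());
            out << "\r\n";
            out.flush();  // push the frame to the client immediately
        }
    }
};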

Designing live video stream for wxWidgets

In my application we will present the video stream from a traffic camera to a client viewer. (And eventually several client viewers.) The client should have the ability to watch the live video or rewind the video and watch earlier footage including video that occurred prior to connecting with the video stream. We intend to use wxWidgets to view the video and within that we will probably use the wxMediaCtrl.
Now, from the above statements, some of you might be thinking, "Hey, he doesn't know what he's talking about." And you would be correct! I'm new to these concepts and I'm confused by the surplus of information. Are the statements above reasonable? Can anyone recommend a basic server/client architecture for this? We will definitely be using C++ and wxWidgets for the GUI, but perhaps wxMediaCtrl is not what I want... should I be directly using something like the ffmpeg libraries?
Our current method seems less than optimal. The server extracts a bitmap from each video frame and then waits for the single client to send a "next frame" message, at which point the server sends the bitmap. Effectively we've recreated our own awkward, non-standard, inefficient, and low-functionality video streaming protocol and viewer. There has to be something better!
You should check out this C++ RTMP Server: http://www.rtmpd.com/. I quickly downloaded, compiled and successfully tested it without any real problems (on Ubuntu Maverick). The documentation is pretty good if a little all over the place. I suspect that once you have a streaming media server capable of supporting the typical protocols (which rtmpd seems to do), then writing a client should fall into place naturally, especially if you're using wxWidgets as the interface api. Of course, it's easy to write that here, from the comfort of my living room, it'll be a different story when you're knee deep in code :)
You can modify your software so that the server accepts the connection, grabs an image, passes it to ffmpeg to establish a stream, then copies the encoded data from the ffmpeg stream and sends it to the client over the network; if the connection drops, it closes the ffmpeg stream.
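A rough POSIX sketch of that flow (frame size, rate and the client address are examples; grabFrame() is a hypothetical stand-in for your capture code):

#include <cstdio>
#include <vector>

bool grabFrame(unsigned char* rgb);  // hypothetical: fills one RGB24 frame

int main() {
    const int W = 640, H = 480;
    // ffmpeg reads raw frames on stdin, encodes, and streams to the client
    FILE* ff = popen(
        "ffmpeg -f rawvideo -pix_fmt rgb24 -s 640x480 -r 15 -i - "
        "-c:v libx264 -preset ultrafast -tune zerolatency "
        "-f mpegts udp://CLIENT_IP:1234", "w");
    if (!ff) return 1;
    std::vector<unsigned char> frame(W * H * 3);
    while (grabFrame(frame.data()))
        fwrite(frame.data(), 1, frame.size(), ff);
    pclose(ff);  // closing stdin ends the ffmpeg stream
    return 0;
}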
Maybe you can use the following to your own advantage:
http://www.kirsle.net/blog.html?u=kirsle&id=63
There is a player called VLC. It comes with a library (libVLC, a C API that is easy to use from C++) that lets you embed the player in your GUI application. It supports a very wide range of protocols, so you can leave the connecting, retrieving and playing jobs to VLC and take care only of starting and stopping them. That would be easy, and probably a better solution than doing it yourself.
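A minimal libVLC sketch (VLC 3.x C API, callable from C++; the URL is a placeholder, and in a real GUI you would hand the player your widget's native window handle):

#include <vlc/vlc.h>

int main() {
    libvlc_instance_t* vlc = libvlc_new(0, nullptr);
    libvlc_media_t* media =
        libvlc_media_new_location(vlc, "rtsp://camera-host/stream");
    libvlc_media_player_t* player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);
    // Embed in your GUI: libvlc_media_player_set_xwindow() on X11,
    // libvlc_media_player_set_hwnd() on Windows.
    libvlc_media_player_play(player);  // VLC connects, demuxes and decodes
    // ... run the GUI event loop; stop with libvlc_media_player_stop() ...
    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}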
For media playing, both video and audio, you can take a look at GStreamer. As for the server, I think Twisted (a networking library in Python) would be a good option; the famous live video site justin.tv is based on Twisted, and you can read their story online. I also built a group of servers for streaming audio on Twisted; they can serve thousands of listeners online at the same time.

Streaming video to and from multiple sources

I wanted to get some ideas on how some of you would approach this problem.
I've got a robot that runs Linux and uses a webcam (with a v4l2 driver) as one of its sensors. I've written a control panel with gtkmm. Both the server and the client are written in C++; the server is the robot, the client is the "control panel". The image analysis happens on the robot, and I'd like to stream the video from the camera back to the control panel for two reasons:
A) for fun
B) to overlay image analysis results
So my question is: what are some good ways to stream video from the webcam to the control panel while still giving the robot code priority to process it? I'm not interested in writing my own video compression scheme and pushing it through the existing network port; a new network port (dedicated to video data) would be best, I think. The second part of the problem is how to display video in gtkmm. The video data arrives asynchronously, and I don't have control over main() in gtkmm, so I think that would be tricky.
I'm open to using things like VLC, GStreamer, or any other general compression libraries I don't know about.
thanks!
EDIT:
The robot has a 1 GHz processor and runs a desktop-like version of Linux, but no X11.
GStreamer solves nearly all of this for you, with very little effort, and also integrates nicely with the GLib event system. GStreamer includes V4L source plugins, GTK+ output widgets, various filters to resize/encode/decode the video and, best of all, network sinks and sources to move the data between machines.
For prototyping, you can use the 'gst-launch' tool to assemble video pipelines and test them; then it's fairly simple to create the pipelines programmatically in your code. Search for 'GStreamer network streaming' to see examples of people doing this with webcams and the like.
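For example, a pair of pipelines along these lines (GStreamer 1.x element names, with host and port as placeholders; on the older 0.10 series the tool is plain gst-launch and some names differ):

# Robot: webcam -> JPEG -> RTP -> UDP
gst-launch-1.0 v4l2src ! videoconvert ! jpegenc ! rtpjpegpay ! \
    udpsink host=CONTROL_PANEL_IP port=5000

# Control panel: UDP -> RTP depayload -> decode -> window
gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=JPEG,payload=26" ! \
    rtpjpegdepay ! jpegdec ! autovideosink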
I'm not sure about the actual technologies used, but this can end up being a huge synchronization ***** if you want to avoid dropped frames. I was streaming a video to a file and network at the same time. What I eventually ended up doing was using a big circular buffer with three pointers: one write and two read. There were three control threads (and some additional encoding threads): one writing to the buffer which would pause if it reached a point in the buffer not read by both of the others, and two reader threads that would read from the buffer and write to the file/network (and pause if they got ahead of the producer). Since everything was written and read as frames, sync overhead could be kept to a minimum.
My producer was a transcoder (from another file source), but in your case, you may want the camera to produce whole frames in whatever format it normally does and only do the transcoding (with something like ffmpeg) for the server, while the robot processes the image.
Your problem is a bit more complex, though, since the robot needs real-time feedback and so can't pause and wait for the streaming server to catch up. So you might want to get frames to the control system as fast as possible and separately buffer some up in a circular buffer for streaming to the "control panel". Certain codecs handle dropped frames better than others, so if the network gets behind you can start overwriting frames at the end of the buffer (taking care that they're not being read).
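A condensed C++ sketch of the buffer scheme described above (one producer, two consumers; the Frame type and the capacity are placeholders):

#include <algorithm>
#include <array>
#include <condition_variable>
#include <mutex>
#include <vector>

struct Frame { std::vector<unsigned char> data; };

class FrameRing {
public:
    // Producer: blocks while the slowest reader is a full lap behind.
    void push(Frame f) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return write_ - minRead() < N; });
        buf_[write_ % N] = std::move(f);
        ++write_;
        cv_.notify_all();
    }
    // Consumer id (0 = file writer, 1 = network sender): blocks until
    // the producer has written past its read position.
    Frame pop(int id) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return read_[id] < write_; });
        Frame f = buf_[read_[id] % N];
        ++read_[id];
        cv_.notify_all();
        return f;
    }
private:
    static const std::size_t N = 64;  // ring capacity in frames
    std::size_t minRead() const { return std::min(read_[0], read_[1]); }
    std::array<Frame, N> buf_;
    std::size_t write_ = 0;
    std::size_t read_[2] = {0, 0};
    std::mutex m_;
    std::condition_variable cv_;
};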
When you say 'a new video port' and then start talking about VLC/GStreamer, I'm finding it hard to work out what you want. Obviously these software packages will assist in streaming and compressing via a number of protocols, but clearly you'll need a 'network port', not a 'video port', to send the stream.
If what you really mean is sending display output via a wireless video/TV feed, that's another matter; however, you'll need advice from hardware experts rather than software experts on that.
Moving on: I've done plenty of streaming over MMS/UDP protocols, and VLC handles it very well (as server and client). However, it's designed for desktops and may not be as lightweight as you want. Something like GStreamer, MEncoder or FFmpeg, on the other hand, is going to be better, I think. What kind of CPU does the robot have? You'll need a bit of grunt if you're planning real-time compression.
On the client side, I think you'll find a number of widgets to handle video in GTK. I would look into that before worrying about interface details.

How can I stream video from my application to the web?

I have an application that grabs video from multiple webcams, does some image processing, and displays the result on the screen. I'd like to be able to stream the video output on to the web - preferably to some kind of distribution service rather than connecting to clients directly myself.
So my questions are:
Do such streaming distribution services exist? I'm thinking of something like SHOUTcast relays, but for video. I'm aware of ustream.tv, but I think they just take a direct webcam connection rather than allowing you to send an arbitrary stream.
If so, is there a standard protocol for doing this?
If so, is there a free library implementation of this protocol for Win32?
Ideally I'd just like to throw a frame of video in DIB format at a SendToServer(bitmap) function, and have it compress, send, and distribute it for me ;)
Take a look at VideoLAN Client (VLC for short) as a means of streaming video.
As for distribution sites, I don't know how well it works with ustream.tv and similar new services.
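For instance, VLC can transcode an input and serve it over HTTP with a one-liner like this (the input, codecs, mux and port are examples of its --sout chain syntax):

vlc your_input --sout '#transcode{vcodec=h264,acodec=mpga}:std{access=http,mux=ts,dst=:8080/stream}'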
ustream.tv works by using Adobe Flash's support for reading input from a webcam. To fake it out, you need a fake webcam driver. Looking at the ustream.tv site, they point to an application called WebCamMax that allows effects and splicing in video. It works by creating a pseudo-webcam that mixes video from one or more cameras along with other sources. Since that app can do it, your own code could do it too, although you'll probably need to write a Windows driver to get it all working correctly.