How to write a VLC plugin that can interact with the operating system - c++

I need to find out whether it is possible, and how (I do not care about the language: C/C++, Lua, Python, ...), to make a VLC plugin whose purpose is to be called by the VLC player and perform some action at specific times in the video stream.
The action I need to perform is to open a UDP socket and send some data read from a file that comes along with the video currently being played.
I need to make something like a subtitle reader that, at best, can initialize a UDP socket and send the data it reads to a server.
I am not sure that creating a UDP socket is possible in Lua, so perhaps the better option is a binary C/C++ plugin, but I can't find any example.
In general, my requirements are:
Load settings file at VLC launch
Need to be triggered by the player at specific times of the video stream
Get the file name of the source video stream
Open the file (script) with the same name but different extension
Open a UDP socket
Compose the message
Send the message
Continue the loop until the end of the video stream
Any information, example or site, link is greatly appreciated.

Looks like you would like to create a control interface module. Those are written in C/C++ within the VLC context and in turn need to be (re-)compiled for each platform you would like to target.
Have a look at the audioscrobbler module to see how to interact with the current input stream and how to retrieve metadata such as file name, etc. Since those modules are in C, opening sockets and transmitting data is not a big deal.
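To give you an idea of the shape of such a module, here is a bare-bones sketch modeled on the audioscrobbler module's structure (modules/misc/audioscrobbler.c in the VLC tree). The module name, port and message are made-up placeholders, and it is meant to be built inside the VLC tree (MODULE_STRING and friends come from the build system); verify the exact macros against the headers of the VLC version you target.

// Skeleton of a VLC control interface module (VLC 2.x-era API).
// "udpnotify" and the target address below are placeholders.
#include <vlc_common.h>
#include <vlc_plugin.h>
#include <vlc_interface.h>

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

static int  Open (vlc_object_t *);
static void Close(vlc_object_t *);

vlc_module_begin()
    set_shortname("udpnotify")
    set_description("Send UDP messages at specific playback times")
    set_capability("interface", 0)
    set_callbacks(Open, Close)
vlc_module_end()

static int Open(vlc_object_t *obj)
{
    intf_thread_t *intf = (intf_thread_t *)obj;
    (void)intf;

    // Plain BSD UDP socket; sending works as in any ordinary C program.
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return VLC_EGENERIC;

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                     /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr); /* placeholder host */

    const char msg[] = "hello from vlc";
    sendto(fd, msg, sizeof msg - 1, 0, (struct sockaddr *)&dst, sizeof dst);
    close(fd);

    // Next steps (see audioscrobbler.c for the pattern): spawn a thread,
    // watch the current input item and its playback position, read the
    // companion script file, and send datagrams on cue.
    return VLC_SUCCESS;
}

static void Close(vlc_object_t *obj)
{
    (void)obj; /* tear down whatever Open() created */
}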
The biggest caveat is probably that you need a complex compilation environment if you would like to target the Windows platform. Have a look at the compilation HOWTOs on the wiki (http://wiki.videolan.org/Compile_VLC/), since this is probably what you would like to try prior to doing any coding.
Thinking about it, you can probably achieve a similarly featured extension in Lua, which is easier to develop (since you don't need to compile VLC yourself and it will be cross-platform). Opening UDP sockets might be problematic, though; TCP will just work. This page could be a nice starting point: http://www.coderholic.com/extending-vlc-with-lua/

Related

Play AudioStream of WebRTC C++ Application with Audio Device

I wrote two command line applications in C++ which use WebRTC:
Client creates a PeerConnection and opens an AudioStream
Server receives and plays the AudioStream
The basic implementation works: they exchange an SDP offer and answer, find their external IPs using ICE, a PeerConnection and a PeerConnectionFactory with the corresponding constraints are created, and so on. I added a hook on the server side to RtpReceiverImpl::IncomingRtpPacket which writes the received payload to a file. The file contains valid PCM audio, so I assume the client streams data successfully through the network to the server application.
On the server side, my callback PeerConnectionObserver::OnAddStream is called and receives a MediaStreamInterface. Furthermore, I can iterate through my audio devices with DeviceManagerInterface::GetAudioOutputDevices. So basically, everything is fine.
What is missing: I need some kind of glue to tell WebRTC to play my AudioStream on the corresponding device. I have seen that I can get AudioSink, AudioRenderer and AudioTrack objects. Again, unfortunately, I do not see an interface to pass them to the audio device. Can anyone help me with that?
One important note: I want to avoid debugging real hardware, so I added -DWEBRTC_DUMMY_FILE_DEVICES when building my WebRTC modules. It should write audio to an output file, but the file contains only 0x00. The input file is read successfully because, as I mentioned earlier, audio is sent via RTP.
Finally, I found the solution. First, I have to say that my code uses a WebRTC checkout from 2017, so the following things may have changed and/or been fixed already:
After debugging my code and the WebRTC library I saw: when a remote stream is added, playback should start automatically. There is no need on the developer side to call playout() in the VoiceEngine or anything comparable. When the library recognizes a remote audio stream, playback is paused, the new source is added to the mixer, and playback is resumed. The only APIs to control playback are provided by webrtc::MediaStreamInterface, which is passed via PeerConnectionObserver::OnAddStream; examples are SetVolume() or set_enabled().
So, what went wrong in my case? I used the FileAudioDevice class, which should write raw audio data to an output file instead of to the speakers. My implementation contained two bugs:
FileAudioDevice::Playing() returned true in every case. Because of this, the library added the remote stream, wanted to resume playout, called FileAudioDevice::Playing(), which returned true, and aborted because it thought the AudioDevice was already in playout mode.
There seems to be a bug in the FileWrapper class. The final output is written to disk in FileAudioDevice::PlayThreadProcess() via _outputFile.Write(_playoutBuffer, kPlayoutBufferSize). However, this does not work; luckily, a plain C fwrite() as a hacky bugfix does. A standalone sketch of both fixes follows.
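The sketch below illustrates the two fixes in a self-contained form. The names and the buffer size are stand-ins modeled on the 2017 FileAudioDevice code (10 ms of 16-bit stereo at 48 kHz is an assumption), so treat it as an illustration, not a drop-in patch.

#include <cstdio>
#include <cstdint>

// Stand-ins for the 2017 WebRTC names; buffer size is an assumption.
static const size_t kPlayoutBufferSize = 480 * 2 * sizeof(int16_t);
static int8_t _playoutBuffer[kPlayoutBufferSize];
static bool _playing = false;

// Fix 1: Playing() must report the real state instead of returning true
// unconditionally, otherwise the library skips its "resume playout" path.
static bool Playing() { return _playing; }

// Fix 2: write the playout buffer with plain fwrite() instead of
// FileWrapper::Write(), which silently produced no output.
static void WritePlayoutChunk(FILE *out) {
    std::fwrite(_playoutBuffer, 1, kPlayoutBufferSize, out);
}

int main() {
    _playing = true;
    FILE *out = std::fopen("playout.pcm", "wb");
    if (!out)
        return 1;
    if (Playing())
        WritePlayoutChunk(out);
    std::fclose(out);
    return 0;
}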

Create audio buffer from application's audio interface

Using PortAudio, how can I access running applications' audio interfaces so that I can capture the audio they produce in real time? The goal would then be to send this audio as UDP packets to a server.
I've had a look at PortAudio's code samples but can't find anything similar.
Maybe PortAudio is not the right library for me?
I'm working mainly on Mac OS.
Core Audio does not have the sort of functionality you're looking for. Processes are sandboxed/isolated from one another.
You could probably achieve this using library injection, but there are a number of complications. OSX has added System Integrity Protection (SIP), which disables injection. If you're willing to disable SIP (which is dangerous! Proceed at your own risk!), you could try something like mach_inject and intercept the target process's calls to Core Audio. But you'd never be able to ship something like this, since asking users to disable SIP is not reasonable.
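For the adventurous, here is roughly what the interception could look like using dyld interposing rather than mach_inject: build it as a dylib and load it into the target with DYLD_INSERT_LIBRARIES (SIP disabled). Whether AudioUnitRender is the right function to hook for a given application is an assumption; treat this as a sketch.

// Build:  clang++ -dynamiclib hook.cpp -framework AudioToolbox -o hook.dylib
// Inject: DYLD_INSERT_LIBRARIES=./hook.dylib <target app>   (SIP off!)
#include <AudioToolbox/AudioToolbox.h>

static OSStatus my_AudioUnitRender(AudioUnit unit,
                                   AudioUnitRenderActionFlags *flags,
                                   const AudioTimeStamp *ts,
                                   UInt32 bus,
                                   UInt32 frames,
                                   AudioBufferList *data)
{
    // Call the real implementation, then copy the rendered buffers
    // wherever you like (a file, a UDP socket, ...).
    OSStatus st = AudioUnitRender(unit, flags, ts, bus, frames, data);
    // for (UInt32 i = 0; i < data->mNumberBuffers; ++i)
    //     send(sock, data->mBuffers[i].mData,
    //          data->mBuffers[i].mDataByteSize, 0);
    return st;
}

// dyld's __interpose section redirects the target's calls to our hook.
__attribute__((used)) static struct {
    const void *replacement;
    const void *replacee;
} interposers[] __attribute__((section("__DATA,__interpose"))) = {
    { (const void *)my_AudioUnitRender, (const void *)AudioUnitRender },
};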

icecast2. Two sources, same streaming

Is it possible to have 2 sources on the same mount point?
Example:
Source 1 (from the S1 IP address) sends music to the Icecast2 server.
Source 2 (from the S2 IP address) sends voice to the Icecast2 server.
A listener runs: mplayer ip_icecast2:8000/example.ogg
The listener should hear the music and the voice at the same time.
Liquidsoap should be able to handle the mixing and set the proper metadata. The web site is at http://savonet.sourceforge.net/
Yes, but not with Icecast alone.
What you need to do is mix the two streams; Icecast doesn't have any features for doing anything like this. There are many ways to do it. I would probably look at mixing the streams together with FFmpeg, using its amerge or amix filters.
Then you need to get the output of FFmpeg to your Icecast server. With some scripting, you should be able to pipe FFmpeg's STDOUT into a TCP connection to Icecast; prior to sending data, you will need to send the appropriate source headers and so on.
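Something like the following, where the input URLs and codec settings are placeholders, and the Icecast source handshake (request line, authentication and content-type headers) still has to be written to the socket before any audio:

#include <cstdio>

int main() {
    // Mix two inputs with the amix filter and emit Ogg/Vorbis on stdout.
    const char *cmd =
        "ffmpeg -i http://s1.example/music -i http://s2.example/voice "
        "-filter_complex amix=inputs=2 -c:a libvorbis -f ogg -";
    FILE *ff = popen(cmd, "r");
    if (!ff)
        return 1;

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, ff)) > 0) {
        // ... write buf to the TCP socket connected to Icecast here ...
        fwrite(buf, 1, n, stdout);  // stand-in: dump to stdout
    }
    pclose(ff);
    return 0;
}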
You can actually do it without anything else; you need to set up 3 mounts:
stream
live
autodj
The trick relies on a tag called fallback, which you configure directly in the Icecast XML file. It does something like this: if the live audio is not available, fall back to autodj, and either one will play on stream, giving preference to live.
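In icecast.xml that chain might look like the sketch below (mount names per the list above; verify the exact elements against the Icecast documentation):

<!-- Listeners connect to /stream; it falls through to /live when a live
     source is connected, otherwise to /autodj. fallback-override moves
     listeners back up when the preferred source reappears. -->
<mount>
    <mount-name>/stream</mount-name>
    <fallback-mount>/live</fallback-mount>
    <fallback-override>1</fallback-override>
</mount>
<mount>
    <mount-name>/live</mount-name>
    <fallback-mount>/autodj</fallback-mount>
    <fallback-override>1</fallback-override>
</mount>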
I'm assuming you mean one source of audio and one of speaking, from different URLs. If you don't know how to use Liquidsoap, you could grab both the voice and music streams using a 3rd-party application like SAM Broadcaster.
This will decode the streams and mix them like a conventional audio mixer before re-encoding and sending out to a single Icecast server as one stream.
Keep in mind that if you are doing voice-overs, there will be latency to deal with; i.e., your speech will be heard by the final listener slightly after the part of the audio you are speaking over. This depends on the buffer lengths involved, and happens because SAM Broadcaster will be 'listening' to the audio at the same point you are (assuming you are speaking over the source audio stream). On top of that comes the playout buffer SAM needs in order to process and play your voice's stream so it can be mixed and passed on.

C++ stream server for HTML5 Audio

Is it possible to have the src of an HTML5 Audio tag be a C++ program, and for the C++ program to stream audio to the audio element? For example, let's say I have an HTML5 Audio element trying to get audio from a local program like so:
<audio src='file://(path to program)'>
If it is possible, which libraries should I use? I just want to try it locally for now, so file:// is what I want.
EDIT: Setting the source as file:// won't work, so how can I tell it to get audio from the specific C++ program?
I am not sure about the C++ side of the question, but trying to embed a would-be program via file:// will not work, as the browser would simply read the binary file foo.exe instead of executing it and reading its standard output (or whatever).
Instead, for testing purposes, you would probably like to run the server locally on your machine, referring to it via localhost.
Certainly, if your C++ program is stand-alone, you could write or include a mini web server that services the incoming audio requests and then executes whatever C++ code you want to return the data.
Otherwise you could write a C++ plugin/module for an existing web server like IIS or Apache and configure the web server to direct traffic for a specific URL to your C++ functions. This might be a little more complicated, but it is a lot more powerful and lets you focus on your audio code rather than on handling the HTTP protocol and TCP connections.
In either case your C++ code would then be referenced like any web server: <audio src='http://localhost:port/etc'>
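As a proof of concept for the stand-alone route, here is a minimal sketch using POSIX sockets, with no real HTTP parsing; the port and the generated test tone are placeholders. Pointing <audio src='http://localhost:8080/'> at it should play one second of audio (Ctrl-C to stop the server):

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(8080);
    if (bind(srv, (sockaddr *)&addr, sizeof addr) != 0 || listen(srv, 1) != 0)
        return 1;

    for (;;) {
        int cli = accept(srv, nullptr, nullptr);
        char req[1024];
        read(cli, req, sizeof req);  // ignore the request details

        // One second of a 440 Hz tone (16-bit mono, 8 kHz) as stand-in audio.
        const uint32_t rate = 8000;
        std::vector<int16_t> pcm(rate);
        for (uint32_t i = 0; i < rate; ++i)
            pcm[i] = (int16_t)(3000 * std::sin(2.0 * 3.14159265 * 440.0 * i / rate));
        uint32_t dataLen = (uint32_t)(pcm.size() * sizeof(int16_t));

        // Minimal 44-byte WAV header (little-endian fields, as on x86).
        uint8_t hdr[44] = {'R','I','F','F', 0,0,0,0, 'W','A','V','E',
                           'f','m','t',' ', 16,0,0,0, 1,0, 1,0};
        uint32_t riffLen = 36 + dataLen, byteRate = rate * 2;
        uint16_t blockAlign = 2, bits = 16;
        memcpy(hdr + 4,  &riffLen,    4);
        memcpy(hdr + 24, &rate,       4);
        memcpy(hdr + 28, &byteRate,   4);
        memcpy(hdr + 32, &blockAlign, 2);
        memcpy(hdr + 34, &bits,       2);
        memcpy(hdr + 36, "data",      4);
        memcpy(hdr + 40, &dataLen,    4);

        char http[128];
        int n = snprintf(http, sizeof http,
                         "HTTP/1.1 200 OK\r\nContent-Type: audio/wav\r\n"
                         "Content-Length: %u\r\nConnection: close\r\n\r\n",
                         44 + dataLen);
        write(cli, http, n);
        write(cli, hdr, sizeof hdr);
        write(cli, pcm.data(), dataLen);
        close(cli);
    }
}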

Designing live video stream for wxWidgets

In my application we will present the video stream from a traffic camera to a client viewer. (And eventually several client viewers.) The client should have the ability to watch the live video or rewind the video and watch earlier footage including video that occurred prior to connecting with the video stream. We intend to use wxWidgets to view the video and within that we will probably use the wxMediaCtrl.
Now, from the above statements some of you might be thinking, "Hey, he doesn't know what he's talking about." And you would be correct! I'm new to these concepts and I'm confused by the surplus of information. Are the statements above reasonable? Can anyone recommend a basic server/client architecture for this? We will definitely be using C++ and wxWidgets for the GUI, but perhaps wxMediaCtrl is not what I want... should I be using something like the ffmpeg libraries directly?
Our current method seems less than optimal. The server extracts a bitmap from each video frame and then waits for the single client to send a "next frame" message, at which point the server sends the bitmap. Effectively we've recreated our own awkward, non-standard, inefficient, and low-functionality video streaming protocol and viewer. There has to be something better!
You should check out this C++ RTMP server: http://www.rtmpd.com/. I quickly downloaded, compiled and successfully tested it without any real problems (on Ubuntu Maverick). The documentation is pretty good, if a little all over the place. I suspect that once you have a streaming media server capable of supporting the typical protocols (which rtmpd seems to do), writing a client should fall into place naturally, especially if you're using wxWidgets as the interface API. Of course, it's easy to write that here, from the comfort of my living room; it'll be a different story when you're knee-deep in code :)
You can modify your software so that the server accepts a connection, grabs an image, passes it to ffmpeg to establish a stream, then copies the encoded data from the ffmpeg stream and sends it to the client over the network; if the connection drops, it closes the ffmpeg stream.
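A sketch of that pipeline, feeding raw frames to an ffmpeg child process over a pipe; the frame size, frame rate, codec and output URL are all placeholders:

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int w = 640, h = 480, fps = 15;

    // ffmpeg reads raw BGR frames on stdin, encodes them with x264 and
    // pushes an MPEG-TS stream out over UDP.
    const char *cmd =
        "ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x480 -r 15 -i - "
        "-c:v libx264 -preset veryfast -f mpegts udp://127.0.0.1:1234";
    FILE *ff = popen(cmd, "w");
    if (!ff)
        return 1;

    std::vector<uint8_t> frame(w * h * 3, 0);
    for (int i = 0; i < fps * 10; ++i) {   // ~10 seconds of frames
        // ... copy the current camera bitmap into `frame` here ...
        if (fwrite(frame.data(), 1, frame.size(), ff) != frame.size())
            break;                         // encoder/connection went away
    }
    pclose(ff);                            // closing stdin ends the stream
    return 0;
}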
Maybe you can use the following to your own advantage:
http://www.kirsle.net/blog.html?u=kirsle&id=63
There is a player called VLC. It comes with a library (libVLC) usable from C++, and you can use it to embed the player in your GUI application. It supports a very wide range of protocols, so you should leave the connecting, retrieving and playing jobs to VLC and take care only of starting and stopping them. That would be easy, and probably a better solution than doing it yourself.
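A minimal libVLC sketch of that approach (3.x-era API, playing a placeholder URL; embedding the video into a wxWidgets window would additionally use libvlc_media_player_set_hwnd / _set_xwindow / _set_nsobject with the widget's native handle):

#include <vlc/vlc.h>
#include <unistd.h>

int main() {
    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    if (!vlc)
        return 1;

    // Placeholder camera URL; libVLC handles the protocol work.
    libvlc_media_t *media =
        libvlc_media_new_location(vlc, "rtsp://camera.example/stream");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    libvlc_media_player_play(player);
    sleep(30);  // let the stream play for a while
    libvlc_media_player_stop(player);

    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}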
For media playing, both video and audio, you can have a look at GStreamer. As for the server, I think Twisted (a networking library for Python) would be a good option; the famous live video site justin.tv is based on Twisted, and you can read the story here. I also built a group of servers for streaming audio on Twisted; they can serve thousands of listeners online at the same time.