I want to write a Kernel Streaming driver (that operates in user mode). I've been doing some research, and it looks like for USB devices I'll want to use AVStream, and for other devices I'll use Portcls. I can communicate with AVStream (I think) through some of the functionality defined in KSProxy. However, I have not found a way to communicate with Portcls. Can anybody offer some insight?
Also I don't know a whole lot about Kernel Streaming. I just know what I have researched so any advice would be helpful.
The first functionality I want to implement is device management. I want to be able to manage devices (e.g., get device properties and take control of devices). I'm not sure of the best way to do this. Also, I want to be able to choose which device I use for audio streaming.
Also eventually I want to be able to support playback and recording but I'm sure by the time I'm ready to implement that I will know the path to take.
Related
Does anyone know how to programmatically capture the sound that is being played (that is, everything that is coming from the sound card, not the input devices such as a microphone)?
Assuming that you are talking about Windows, there are essentially three ways to do this.
The first is to open the audio device's main output as a recording source. This is only possible when the driver supports it, although most do these days. Common names for the virtual device are "Stereo Mix", "What U Hear", or "Wave Out Mix". You will need to use a suitable API (see the waveIn functions or DirectSound in MSDN) to do the capturing; a minimal waveIn sketch follows.
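A minimal sketch of this approach, assuming an ANSI build linked against winmm.lib; device index 0 is a placeholder, so enumerate first and pick the loopback device by name:

    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // Enumerate capture devices and look for "Stereo Mix" or similar.
        UINT numDevs = waveInGetNumDevs();
        for (UINT i = 0; i < numDevs; ++i) {
            WAVEINCAPSA caps;
            waveInGetDevCapsA(i, &caps, sizeof(caps));
            printf("Device %u: %s\n", i, caps.szPname);
        }

        // 16-bit stereo PCM at 44.1 kHz.
        WAVEFORMATEX fmt = {};
        fmt.wFormatTag = WAVE_FORMAT_PCM;
        fmt.nChannels = 2;
        fmt.nSamplesPerSec = 44100;
        fmt.wBitsPerSample = 16;
        fmt.nBlockAlign = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        HWAVEIN hIn;
        if (waveInOpen(&hIn, 0 /* placeholder: loopback device index */,
                       &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
            return 1;

        std::vector<char> buffer(fmt.nAvgBytesPerSec);  // one second of audio
        WAVEHDR hdr = {};
        hdr.lpData = buffer.data();
        hdr.dwBufferLength = (DWORD)buffer.size();
        waveInPrepareHeader(hIn, &hdr, sizeof(hdr));
        waveInAddBuffer(hIn, &hdr, sizeof(hdr));
        waveInStart(hIn);

        while (!(hdr.dwFlags & WHDR_DONE))   // poll until the buffer is full
            Sleep(10);

        waveInStop(hIn);
        waveInUnprepareHeader(hIn, &hdr, sizeof(hdr));
        waveInClose(hIn);
        return 0;
    }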
The second way is to write a filter driver that can intercept the audio stream before it reaches the physical device. Again, this technique will only work for devices that have a suitable driver topology and it's certainly not for the faint-hearted.
This means that neither of these options will be guaranteed to work on a PC with arbitrary hardware.
The last alternative is to use a virtual audio device, such as Virtual Audio Cable. If this device is set as the default playback device in Windows, then all well-behaved apps will play through it. You can then record from the same device to capture the summed output. As long as you have control over the device that the application you want to record uses, this option will always work.
All of these techniques have their pros and cons - it's up to you to decide which would be the most suitable for your needs.
You can use the Waveform Audio Interface; there is an MSDN article on how to access it via P/Invoke.
In order to capture the sound that is being played, you just need to open the playback device instead of the microphone. Open it for input, of course, not for output ;-)
If you were using OS X, Audio Hijack Pro from Rogue Amoeba would probably be the easiest way to go.
Anyway, why not just loop your audio back into your line-in and record that? This is a very simple solution: just plug a cable into your audio output jack and your line-in jack, and start recording.
You have to enable the "Stereo Mix" device. If you do this, DirectSound will find it.
I want to create a virtual audio mixer that allows me to route the audio signal coming from any input (recording) device to any output (playback) device.
I.e., let's say I have 2 virtual input devices (IN-A and IN-B) and 2 output devices (OUT-C and OUT-D), and I want:
Spotify playing to IN-A -> OUT-C
MIC -> OUT-D
Chrome playing to IN-B -> OUT-C
Also, I want to be able to set a device's volume or gain, mute a device, and monitor its signal or volume level in real time.
Question:
I don't even know where to start. I'm guessing I will have to go with C++, but I don't know if there is an existing library that allows me to do this.
I have been researching and found PortAudio (and others), but before investing more time, I want to know from experts which would be a good starting point to continue my research and POC development.
NOTE: Windows' native mixer (or any OS's) does not cover my needs. I need to achieve this programmatically.
Thanks in advance!
Take a look at this Wikipedia article:
https://en.wikipedia.org/wiki/Windows_legacy_audio_components
There are several ways to achieve this.
I personally have not written an audio mixer tool, but I did do a synthesizer project based on winmm.lib functionality; a minimal sketch of that approach is below.
If you want a low-latency mixer, you have to write a software driver, which I would not recommend doing.
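To give an idea of what the winmm route looks like, here is a minimal sketch (not my original project code): synthesize one second of a 440 Hz tone and play it through the default output. Link against winmm.lib.

    #include <windows.h>
    #include <mmsystem.h>
    #include <cmath>
    #include <vector>

    int main() {
        // 16-bit mono PCM at 44.1 kHz.
        WAVEFORMATEX fmt = {};
        fmt.wFormatTag = WAVE_FORMAT_PCM;
        fmt.nChannels = 1;
        fmt.nSamplesPerSec = 44100;
        fmt.wBitsPerSample = 16;
        fmt.nBlockAlign = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        // One second of a 440 Hz sine wave.
        std::vector<short> samples(fmt.nSamplesPerSec);
        for (size_t i = 0; i < samples.size(); ++i)
            samples[i] = (short)(10000 *
                std::sin(2.0 * 3.141592653589793 * 440.0 * i / fmt.nSamplesPerSec));

        HWAVEOUT hOut;
        if (waveOutOpen(&hOut, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
            return 1;

        WAVEHDR hdr = {};
        hdr.lpData = (LPSTR)samples.data();
        hdr.dwBufferLength = (DWORD)(samples.size() * sizeof(short));
        waveOutPrepareHeader(hOut, &hdr, sizeof(hdr));
        waveOutWrite(hOut, &hdr, sizeof(hdr));

        while (!(hdr.dwFlags & WHDR_DONE))   // wait for playback to finish
            Sleep(10);

        waveOutUnprepareHeader(hOut, &hdr, sizeof(hdr));
        waveOutClose(hOut);
        return 0;
    }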
I am trying to write a pro music/audio processing application, and I would like to be able to interact with the audio inputs/outputs at a very low level - ideally something that allows me to apply effects to the audio inputs and output the result in real time, similar to programs like Logic, Ableton, etc.
I have written a pretty basic program that detects audio endpoint devices and can change their volumes using the MMDevice interface, but this is nowhere near the functionality I would like.
I have learned from the Microsoft docs that the four core-audio APIs are:
MMDevice
WASAPI
DeviceTopology
EndpointVolume
but it doesn't seem like any of these have the capabilities that I need. I'm thinking that I will need to be able to interact with the speakers at the level of setting the position of the membrane at a given time.
Is this even possible? If so, what can I use to do this?
The Windows Audio Session API (WASAPI) is the best bet for this purpose. It allows interaction with audio endpoints and setting up audio streams (streams of data that you can send or receive in real time). A good example is here, and a rough sketch of setting up a render stream follows.
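Here is a rough sketch of opening a WASAPI shared-mode render stream on the default output (error handling omitted for brevity; a real app should check every HRESULT):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        // Find the default playback endpoint.
        IMMDeviceEnumerator* enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
        IMMDevice* device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        // Set up a shared-mode stream with a one-second buffer.
        IAudioClient* client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);
        WAVEFORMATEX* fmt = nullptr;
        client->GetMixFormat(&fmt);
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                           10000000 /* 1 s, in 100 ns units */, 0, fmt, nullptr);

        IAudioRenderClient* render = nullptr;
        client->GetService(__uuidof(IAudioRenderClient), (void**)&render);

        UINT32 frames = 0;
        client->GetBufferSize(&frames);
        BYTE* data = nullptr;
        render->GetBuffer(frames, &data);
        // Your processed samples would go into 'data'; this sketch submits silence.
        render->ReleaseBuffer(frames, AUDCLNT_BUFFERFLAGS_SILENT);

        client->Start();
        Sleep(1000);        // let the buffer drain
        client->Stop();

        CoTaskMemFree(fmt);
        render->Release(); client->Release(); device->Release(); enumerator->Release();
        CoUninitialize();
        return 0;
    }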
I have a feature request on a project I work on: to integrate with a Paylife CC handheld, which connects to the computer over USB. I have the docs and am reading up on it.
When I searched on Google for how to read/write to a USB device on Linux, the answer was to use libusb.
I was wondering, is there another possibility? Can't I just open it like a file and write a stream to it, and read a stream from it?
I don't actually need to do anything fancy. I just need to write a string of control codes to the device, and it would be mildly nice to read back the ACK and Error codes. But since those are already displayed on the device screen, I don't have to do much with it, just deliver the total required for payment.
So my question is, what are my options?
The connected computer is a regular ol' Ubuntu Linux box.
It is definitely possible when the device complies with one of the USB device classes -- drivers for those are generic.
If that's not the case, then you may stick with a manufacturer-provided or third-party driver, given there is one and you possess enough of its documentation.
If that's also not the case, libusb-1.0 is your last resort, unless you want to write a kernel driver yourself :) A sketch of the libusb route is below.
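A minimal libusb-1.0 sketch of that last option: open the device by VID/PID, claim interface 0, and do a bulk write/read. The IDs, endpoint addresses, and command string here are placeholders; take the real values from the Paylife documentation or from lsusb -v.

    #include <libusb-1.0/libusb.h>
    #include <cstdio>

    int main() {
        libusb_context* ctx = nullptr;
        libusb_init(&ctx);

        // Placeholder vendor/product IDs.
        libusb_device_handle* h = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!h) { fprintf(stderr, "device not found\n"); return 1; }

        libusb_detach_kernel_driver(h, 0);   // in case a kernel driver holds it
        libusb_claim_interface(h, 0);

        // Send a placeholder control string, then read back the ACK/error code.
        unsigned char cmd[] = "TOTAL:12.34";
        int transferred = 0;
        libusb_bulk_transfer(h, 0x01 /* OUT endpoint */, cmd, sizeof(cmd) - 1,
                             &transferred, 1000 /* ms timeout */);

        unsigned char ack[64];
        libusb_bulk_transfer(h, 0x81 /* IN endpoint */, ack, sizeof(ack),
                             &transferred, 1000);
        printf("got %d bytes back\n", transferred);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(ctx);
        return 0;
    }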
I wanted to get some ideas one how some of you would approach this problem.
I've got a robot that is running Linux and uses a webcam (with a v4l2 driver) as one of its sensors. I've written a control panel with gtkmm. Both the server and client are written in C++; the server is the robot, the client is the "control panel". The image analysis happens on the robot, and I'd like to stream the video from the camera back to the control panel for two reasons:
A) for fun
B) to overlay image analysis results
So my question is: what are some good ways to stream video from the webcam to the control panel while giving priority to the robot code that processes it? I'm not interested in writing my own video compression scheme and putting it through the existing networking port; a new network port (dedicated to video data) would be best, I think. The second part of the problem is how to display video in gtkmm. The video data arrives asynchronously, and I don't have control over main() in gtkmm, so I think that would be tricky.
I'm open to using things like vlc, gstreamer or any other general compression libraries I don't know about.
thanks!
EDIT:
The robot has a 1GHz processor, running a desktop like version of linux, but no X11.
Gstreamer solves nearly all of this for you, with very little effort, and also integrates nicely with the Glib event system. GStreamer includes V4L source plugins, gtk+ output widgets, various filters to resize / encode / decode the video, and best of all, network sink and sources to move the data between machines.
For prototyping, you can use the 'gst-launch' tool to assemble video pipelines and test them; then it's fairly simple to create the pipelines programmatically in your code (a sketch is below). Search for 'GStreamer network streaming' to see examples of people doing this with webcams and the like.
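For instance, a sender on the robot might look like the sketch below, built with gst_parse_launch. The element chain and the receiver address are assumptions to adapt for your setup; the same pipeline string can be tested with gst-launch first.

    #include <gst/gst.h>

    int main(int argc, char* argv[]) {
        gst_init(&argc, &argv);

        // v4l2 camera -> JPEG -> RTP -> UDP; adjust host/port for your network.
        GError* err = nullptr;
        GstElement* pipeline = gst_parse_launch(
            "v4l2src device=/dev/video0 ! videoconvert ! jpegenc "
            "! rtpjpegpay ! udpsink host=192.168.1.10 port=5000", &err);
        if (!pipeline) {
            g_printerr("parse error: %s\n", err->message);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        // Block until error or end-of-stream; in the real app this would run
        // inside the Glib/gtkmm main loop instead.
        GstBus* bus = gst_element_get_bus(pipeline);
        GstMessage* msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

        if (msg) gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }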
I'm not sure about the actual technologies used, but this can end up being a huge synchronization ***** if you want to avoid dropped frames. I was streaming a video to a file and the network at the same time. What I eventually ended up doing was using a big circular buffer with three pointers: one write and two read. There were three control threads (and some additional encoding threads): one writing to the buffer, which would pause if it reached a point in the buffer not yet read by both of the others, and two reader threads that would read from the buffer and write to the file/network (and pause if they got ahead of the producer). Since everything was written and read as frames, sync overhead could be kept to a minimum. A condensed sketch of this scheme is below.
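A condensed sketch of that buffer (one producer, two consumers; the Frame type and capacity are illustrative, and real code would block on condition variables rather than spin):

    #include <algorithm>
    #include <atomic>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Frame { /* pixel data, timestamp, ... */ };

    constexpr size_t N = 64;            // ring capacity, in frames
    std::vector<Frame> ring(N);
    std::atomic<size_t> written{0};     // total frames produced
    std::atomic<size_t> fileRead{0};    // frames consumed by the file writer
    std::atomic<size_t> netRead{0};     // frames consumed by the network sender

    void producer() {
        for (;;) {
            size_t w = written.load();
            size_t slowest = std::min(fileRead.load(), netRead.load());
            if (w - slowest >= N) {     // would overwrite unread frames: pause
                std::this_thread::yield();
                continue;
            }
            ring[w % N] = Frame{};      // produce/transcode the next frame here
            written.store(w + 1);
        }
    }

    void consumer(std::atomic<size_t>& myPos) {
        for (;;) {
            size_t r = myPos.load();
            if (r >= written.load()) {  // caught up with the producer: pause
                std::this_thread::yield();
                continue;
            }
            Frame f = ring[r % N];      // write f to the file or the socket
            (void)f;
            myPos.store(r + 1);
        }
    }

    int main() {
        std::thread p(producer);
        std::thread c1(consumer, std::ref(fileRead));
        std::thread c2(consumer, std::ref(netRead));
        p.join(); c1.join(); c2.join();
    }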
My producer was a transcoder (from another file source), but in your case, you may want the camera to produce whole frames in whatever format it normally does and only do the transcoding (with something like ffmpeg) for the server, while the robot processes the image.
Your problem is a bit more complex, though, since the robot needs real-time feedback so can't pause and wait for the streaming server to catch up. So you might want to get frames to the control system as fast as possible and buffer some up in a circular buffer separately for streaming to the "control panel". Certain codecs handle dropped frames better than others, so if the network gets behind you can start overwriting frames at the end of the buffer (taking care they're not being read).
When you say 'a new video port' and then start talking about vlc/gstreamer, I'm finding it hard to work out what you want. Obviously these software packages will assist in streaming and compressing via a number of protocols, but clearly you'll need a 'network port', not a 'video port', to send the stream.
If what you really mean is sending display output via wireless video/tv feed that's another matter, however you'll need advice from hardware experts rather than software experts on that.
Moving on: I've done plenty of streaming over MMS/UDP protocols, and vlc handles it very well (as server and client). However, it's designed for desktops and may not be as lightweight as you want. Something like gstreamer, mencoder or ffmpeg, on the other hand, is going to be better, I think. What kind of CPU does the robot have? You'll need a bit of grunt if you're planning real-time compression.
On the client side, I think you'll find a number of widgets to handle video in GTK. I would look into that before worrying about interface details.