Send audio data from user mode to Sysvad (virtual audio driver) using an IOCTL - C++

In my application (user mode), I receive audio data and save it using this function:
VOID CSoundRecDlg::ProcessHeader(WAVEHDR *pHdr)
{
    MMRESULT mRes = 0;
    TRACE("%d", pHdr->dwUser);
    if (WHDR_DONE == (WHDR_DONE & pHdr->dwFlags))
    {
        // Write the recorded bytes to the output file, then hand the
        // header back to the input queue for reuse.
        mmioWrite(m_hOPFile, pHdr->lpData, pHdr->dwBytesRecorded);
        mRes = waveInAddBuffer(m_hWaveIn, pHdr, sizeof(WAVEHDR));
        if (mRes != 0)
            StoreError(mRes, TRUE, "File: %s, Line Number: %d", __FILE__, __LINE__);
    }
}
The pHdr pointer points to the audio data (a byte[11025] buffer).
How can I get this data into Sysvad using an IOCTL? Thanks for the help.

If I understand correctly, you have an audio buffer that you want to send for output in Sysvad. For this scenario you would have to write the buffer in using "WriteBytes".
Please look at this example for more in-depth details:
https://github.com/microsoft/Windows-driver-samples/blob/master/audio/sysvad/EndpointsCommon/minwavertstream.cpp
UPDATE
In answer to your comment:
A circular buffer is not a must; it really depends on the implementation you want. The main point is to get the buffer into memory. Writing it is simply like this:
adapterObject->WriteEtwEvent(eMINIPORT_LAST_BUFFER_RENDERED,
    m_ullLinearPosition + ByteDisplacement, // Current linear buffer position
    m_ulCurrentWritePosition,               // The very last WaveRtBufferWritePosition that the driver received
    0,
    0);
Ideally you would use separation of concerns, keeping the read and write logic independent of each other, with the buffer object simply passed between them.
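For the user-mode side of the question, a minimal sketch of handing a buffer to the driver with DeviceIoControl could look like the following. Both the device path and the control code here are hypothetical: stock Sysvad does not expose such an IOCTL, so you would have to define the control code and implement the matching IRP handler in the driver yourself.
#include <windows.h>
#include <winioctl.h>

// Hypothetical control code - define the same value in the driver and add
// a handler that copies the incoming audio bytes into its render buffer.
#define IOCTL_SYSVAD_WRITE_AUDIO \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_WRITE_ACCESS)

bool SendAudioToDriver(const BYTE *data, DWORD size)
{
    // Hypothetical device interface path - use the one your driver registers.
    HANDLE hDevice = CreateFileW(L"\\\\.\\SysvadAudio", GENERIC_WRITE, 0,
                                 nullptr, OPEN_EXISTING, 0, nullptr);
    if (hDevice == INVALID_HANDLE_VALUE)
        return false;

    DWORD bytesReturned = 0;
    BOOL ok = DeviceIoControl(hDevice, IOCTL_SYSVAD_WRITE_AUDIO,
                              const_cast<BYTE *>(data), size, // input buffer
                              nullptr, 0,                     // no output buffer
                              &bytesReturned, nullptr);
    CloseHandle(hDevice);
    return ok != FALSE;
}
In ProcessHeader above, you could then call SendAudioToDriver(reinterpret_cast<BYTE *>(pHdr->lpData), pHdr->dwBytesRecorded) alongside (or instead of) mmioWrite.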

Related

Write IMU data to csv file using buffer and overflow buffer?

I have been trying to implement a C++ complementary filter for an LSM9DS1 IMU connected via I2C to an mbed board, but timing issues are preventing me from getting the angular rate integration right. This is because in my code I'm assuming that my sample rate is 100 Hz, while this isn't exactly the rate at which data is being sampled, due to the printf() statements I am using to display values in real time. This results in my filter outputting angles that drift/don't go back to the original value when the IMU is put back in its original position.
I've been recommended to follow these steps in order to avoid delays in my code that could disrupt my time-sensitive application:
1. On each iteration of the program, add the raw IMU data to a buffer.
2. When the buffer is nearly full, use an interrupt to write all the data from the buffer to a .csv file.
3. When/if the buffer overflows, add the remaining data to a new "overflow buffer".
4. Empty the first buffer and refill it with the data stored in the overflow buffer, and so on.
5. Handle the filtering calculations separately by manually treating the data from the .csv file once it's all been collected, so as to avoid timing issues, and see if the output is as expected.
The whole back-and-forth between the buffer and the overflow buffer really confuses me. Could someone please help me clarify how to technically achieve the above steps? Thanks in advance!
Edit:
#include "LSM9DS1.h"
#define DT 1/100
void runFilter()
{
// calculate Euler angles from accelerometer and magnetometer (_roll,
// _pitch,_yaw)
calcAttitude(imu.ax, imu.ay, imu.az, -imu.my, -imu.mx, imu.mz);
_gyroAngleX += (_rateX*DT);
_gyroAngleY += (_rateY*DT);
_gyroAngleZ += (_rateZ*DT);
_xfilt = 0.98f*(_gyroAngleX) + 0.02f*_roll;
_yfilt = 0.98f*(_gyroAngleY) + 0.02f*_pitch;
_zfilt = 0.98f*(_gyroAngleZ) + 0.02f*_yaw;
printf("%.2f, %.2f, %.2f \n", _xfilt, _yfilt, _zfilt);
}
in main.cpp:
int main()
{
    init(); // Initialise IMU
    while (1) {
        readValues(); // Read data from the IMUs
        runFilter();
    }
}
As Kentaro also mentioned in the comments, use a separate thread for printf and use the Mbed OS EventQueue to defer printf statements to it.
EventQueue queue;
Thread event_thread(osPriorityLow);

int main() {
    event_thread.start(callback(&queue, &EventQueue::dispatch_forever));
    // ... after sampling, defer the printf to the low-priority event thread:
    queue.call(&printf, "%.2f, %.2f, %.2f \n", _xfilt, _yfilt, _zfilt);
}
However, you might still run into issues with the speed. Some general tips:
Use the highest baud rate that your development board can handle.
Use a RawSerial object over printf (which uses Serial) to avoid claiming a mutex.
Don't write to UART but rather write to a file (e.g. mount a FATFileSystem to an SD card). This will be much faster.
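To make the buffer/overflow-buffer steps from the question concrete, here is a minimal double-buffer ("ping-pong") sketch. It is not Mbed-specific and all names are illustrative: samples accumulate in one buffer while the other is flushed to the .csv file.
#include <cstdio>

const int BUF_SIZE = 256;

struct ImuSample { float ax, ay, az, gx, gy, gz; };

ImuSample bufA[BUF_SIZE], bufB[BUF_SIZE];
ImuSample *active  = bufA; // buffer currently being filled
ImuSample *standby = bufB; // buffer used after the next swap
int count = 0;

void addSample(const ImuSample &s, FILE *csv)
{
    active[count++] = s;
    if (count == BUF_SIZE) {
        // Swap roles: new samples go into the other buffer while this one
        // is written out. In a real system the write would be deferred to a
        // low-priority thread or the EventQueue shown above.
        ImuSample *full = active;
        active  = standby;
        standby = full;
        count   = 0;
        for (int i = 0; i < BUF_SIZE; ++i)
            fprintf(csv, "%f,%f,%f,%f,%f,%f\n",
                    full[i].ax, full[i].ay, full[i].az,
                    full[i].gx, full[i].gy, full[i].gz);
    }
}
Because the sampling path only ever writes into memory, the loop timing stays deterministic; the slow file I/O happens on the other buffer.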

Raw files aren't playing, or are playing incorrectly - Oboe (Android-ndk)

I'm attempting to play a raw (int16 PCM) encoded audio file in my Android application. I've been following and reading through the Oboe documentation/samples to try to get one of my own audio files to play.
The audio file I need to play is roughly 6kb, or 1592 frames (stereo).
Either no sound plays, or sound/jitter plays on startup (with varying output - see below).
Troubleshooting
Update
I have switched to floats for buffer queuing, instead of keeping everything to int16_t (and converting back to int16_t when done), although now I'm back to no sound.
The audio seems to be either not playing, or playing on startup (which is wrong). The sound should play after I press 'start'.
When the app was implemented with int16_t only, the premature sound was related to how big the buffer size was. If the buffer size is smaller than the audio file, the sound is very fast and clipped (more drone-like at lower buffer sizes). If it is bigger than the raw audio size, it seems to play on a loop and gets quieter at higher buffer sizes. The sound would also get "softer" when the start button is pressed. I'm not even entirely sure this means the raw audio was playing; it could just be random nonsense jitter from Android.
When filling the buffers with floats, and converting to int16_t afterwards, no audio is played.
(I have tried running systrace, but I honestly don't know what I'm looking for)
The stream opens fine.
The buffer size fails to be adjusted in createPlaybackStream() (although somehow it still gets set to twice the burst size).
The stream starts fine.
The Raw resources are being loaded fine.
Implementation
What I am currently trying in the builder:
Setting the callback to this, or onAudioReady()
Setting the performance mode to LowLatency
Setting the sharing mode to Exclusive
Setting the buffer capacity to (anything bigger than my audio file frame count)
Setting the burst size (frames per call back) to (anything equal to or lower than the buffer capacity / 2)
I am using the Player class and the AAssetManager class from the Rhythm Game sample here: https://github.com/google/oboe/blob/master/samples/RhythmGame. I am using these classes to load my resources and play the sound. Player.renderAudio writes the audio data to the output buffer.
Here are the relevant methods from my audio engine:
void AudioEngine::createPlaybackStream() {
    // Load the RAW PCM data files into memory
    std::shared_ptr<AAssetDataSource> soundSource(
            AAssetDataSource::newFromAssetManager(assetManager, "sound.raw", ChannelCount::Mono));
    if (soundSource == nullptr) {
        LOGE("Could not load source data for sound");
        return;
    }
    sound = std::make_shared<Player>(soundSource);

    AudioStreamBuilder builder;
    builder.setCallback(this);
    builder.setPerformanceMode(PerformanceMode::LowLatency);
    builder.setSharingMode(SharingMode::Exclusive);
    builder.setChannelCount(mChannelCount);

    Result result = builder.openStream(&stream);
    if (result == Result::OK && stream != nullptr) {
        mSampleRate = stream->getSampleRate();
        mFramesPerBurst = stream->getFramesPerBurst();

        int channelCount = stream->getChannelCount();
        if (channelCount != mChannelCount) {
            LOGW("Requested %d channels but received %d", mChannelCount, channelCount);
        }

        // Set the buffer size to (burst size * 2) - this will give us the minimum
        // possible latency while minimizing underruns
        auto setBufferSizeResult = stream->setBufferSizeInFrames(mFramesPerBurst * 2);
        if (setBufferSizeResult != Result::OK) {
            LOGW("Failed to set buffer size. Error: %s", convertToText(setBufferSizeResult.error()));
        }

        // Start the stream - the dataCallback function will start being called
        result = stream->requestStart();
        if (result != Result::OK) {
            LOGE("Error starting stream. %s", convertToText(result));
        }
    } else {
        LOGE("Failed to create stream. Error: %s", convertToText(result));
    }
}
DataCallbackResult AudioEngine::onAudioReady(AudioStream *audioStream, void *audioData, int32_t numFrames) {
    int16_t *outputBuffer = static_cast<int16_t *>(audioData);
    sound->renderAudio(outputBuffer, numFrames);
    return DataCallbackResult::Continue;
}
// When the 'start' button is pressed, it calls this method with true.
// There should be no sound on app start-up until this button is pressed.
// Sound stops when 'stop' is pressed.
void setPlaying(bool isPlaying) {
    sound->setPlaying(isPlaying);
}
Setting the buffer capacity to (anything bigger than my audio file frame count)
You don't need to set the buffer capacity. This will be set automatically at a reasonable level for you. Typically ~3000 frames. Note that buffer capacity is different from buffer size which defaults to 2*framesPerBurst.
Setting the burst size (frames per call back) to (anything equal to or lower than the buffer capacity / 2)
Again, don't do this. onAudioReady will be called every time the stream requires more audio data, and numFrames indicates how many frames you should supply. If you override this value with a value which isn't an exact multiple of the audio device's native burst size (typical values are 128, 192 and 240 frames depending on the underlying hardware) then you may get audio glitches.
I have switched to floats for buffer queuing
The format which you need to supply data in is determined by the audio stream and it is only known after the stream has been opened. You can get it by calling stream->getFormat().
In the RhythmGame sample (at least the version you're referring to) here's how the formats work:
Source file is converted from 16-bit to float inside AAssetDataSource::newFromAssetManager (floats are the preferred format for any kind of signal processing)
If the stream format is 16-bit then convert it back inside onAudioReady
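As a hedged sketch of that advice (assuming, as in the RhythmGame sample, that Player::renderAudio takes float samples; the scratch vector and its lazy sizing are illustrative only), the callback could branch on the stream format and convert with Oboe's convertFloatToPcm16 helper:
DataCallbackResult AudioEngine::onAudioReady(AudioStream *audioStream, void *audioData, int32_t numFrames) {
    const int32_t channelCount = audioStream->getChannelCount();
    if (audioStream->getFormat() == AudioFormat::I16) {
        // Render into a float scratch buffer, then convert to 16-bit PCM.
        // (Allocating in the audio callback is not real-time safe; pre-size
        // this buffer in createPlaybackStream() in production code.)
        static std::vector<float> scratch; // requires <vector>
        scratch.resize(static_cast<size_t>(numFrames) * channelCount);
        sound->renderAudio(scratch.data(), numFrames);
        oboe::convertFloatToPcm16(scratch.data(),
                                  static_cast<int16_t *>(audioData),
                                  numFrames * channelCount);
    } else {
        sound->renderAudio(static_cast<float *>(audioData), numFrames);
    }
    return DataCallbackResult::Continue;
}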
1592 frames (stereo).
You said that your source was stereo but you're specifying it as mono here:
std::shared_ptr<AAssetDataSource> soundSource(AAssetDataSource::newFromAssetManager(assetManager, "sound.raw", ChannelCount::Mono));
Without doubt that will cause audio problems because the AAssetDataSource will have a value for numFrames which is double the correct value. This will cause audio glitches because half the time you'll be playing random parts of system memory.

Continuous WASAPI Ring-Buffer Sampling

How can I use WASAPI (or something like it) to continuously sample audio into a (thread-safe) ring buffer, so that a consumer thread can read from that buffer at a set interval?
Currently we have a .sample() method that returns a chunk of samples after a set sampling interval, but this has quite a lot of overhead due to memory allocation, etc. Maybe this method could be optimized; I'm pretty sure we're doing it wrong.
std::vector<short> sampler2::sample()
{
    // prepare header
    waveInPrepareHeader(hWaveIn, &WaveInHdr, sizeof(WAVEHDR));
    // insert a wave input buffer
    waveInAddBuffer(hWaveIn, &WaveInHdr, sizeof(WAVEHDR));
    // commence sampling input
    waveInStart(hWaveIn);
    // sleep for the duration of a sample interval
    std::this_thread::sleep_for(milliseconds(SAMPLE_INTERVAL));
    // create and return a vector copied from the recorded buffer
    std::vector<short> samplesChunk(&waveIn[0], &waveIn[0] + NUMPTS);
    return samplesChunk;
}
GitHub Links: sampler2.h & sampler2.cpp
The code is very shitty and we have no clue how to properly use WASAPI. Our goal was to (quickly) create a sampler class that can support a sampling interval of >10 ms.
Your sample uses the legacy waveIn API, not WASAPI. You can check MSDN for the WASAPI reference and usage.
Here is the basic description of WASAPI usage:
The client calls the methods in the IAudioRenderClient interface to write rendering data to an endpoint buffer. To request an endpoint buffer of a particular size, the client calls the IAudioClient::Initialize method. To get the size of the allocated buffer, which might be different from the requested size, the client calls the IAudioClient::GetBufferSize method.
To move a stream of rendering data through the endpoint buffer, the client alternately calls the IAudioRenderClient::GetBuffer method and the IAudioRenderClient::ReleaseBuffer method. The client accesses the data in the endpoint buffer as a series of data packets. The GetBuffer call retrieves the next packet so that the client can fill it with rendering data. After writing the data to the packet, the client calls ReleaseBuffer to add the completed packet to the rendering queue.
There is also this Microsoft C++ WASAPI example.
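Note that the quoted passage describes the render direction; capture is symmetric through IAudioCaptureClient. As a minimal sketch (error handling omitted; RingBuffer and its write() method are hypothetical stand-ins for your thread-safe ring buffer), a capture loop could look like this:
#include <mmdeviceapi.h>
#include <audioclient.h>

void captureLoop(RingBuffer &ring)
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    IMMDeviceEnumerator *enumr = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&enumr);
    IMMDevice *device = nullptr;
    enumr->GetDefaultAudioEndpoint(eCapture, eConsole, &device);
    IAudioClient *client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void **)&client);
    WAVEFORMATEX *fmt = nullptr;
    client->GetMixFormat(&fmt);
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                       10 * 1000 * 1000 /* 1 s in 100-ns units */, 0, fmt, nullptr);
    IAudioCaptureClient *capture = nullptr;
    client->GetService(__uuidof(IAudioCaptureClient), (void **)&capture);
    client->Start();
    for (;;) {
        UINT32 packetFrames = 0;
        capture->GetNextPacketSize(&packetFrames);
        while (packetFrames != 0) {
            BYTE *data; UINT32 frames; DWORD flags;
            capture->GetBuffer(&data, &frames, &flags, nullptr, nullptr);
            ring.write(data, frames * fmt->nBlockAlign); // copy into the ring buffer
            capture->ReleaseBuffer(frames);
            capture->GetNextPacketSize(&packetFrames);
        }
        Sleep(5); // or block on an event; polling kept simple for the sketch
    }
}
The consumer thread then reads fixed-size chunks out of the ring buffer at its own interval, so no per-call allocation happens on the capture path.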

LibsUsbK buffers not being filled when using function UsbK_IsoReadPipe

I'm trying to write some code to read from an Isochronous pipe using LibUsbK in Win32. I have successfully initialised the device into the correct state to send and receive Isochronous data and I can see data being sent over the USB in my hardware USB analyser, but the buffers I am receiving are always unfilled even though the analyser shows that there was data in the packets sent to the PC.
I'm new to LibUsbK and to using isochronous transfers, though I'm not new to USB in general, but I've been really struggling with this.
The code I'm using to read from the device is something like this...
UsbK_SelectInterface(usbHandle, 1, 0);
UsbK_SetAltInterface(usbHandle, 1, 0, 1);
IsoK_Init(&isoCtx, ISO_PACKETS_PER_XFER, 0);
IsoK_SetPackets(isoCtx, ISO_PACKET_SIZE); // Size of each individual packet
OvlK_Init(&ovlPool, usbHandle, 4, 0);
OvlK_ResetPipe(usbHandle, 0x83);
OvlK_Acquire(&ovlkHandle, ovlPool);
UsbK_IsoReadPipe(usbHandle, 0x83, inBuffer, sizeof(inBuffer), ovlkHandle, isoCtx);
while (!finished)
{
    if (OvlK_IsComplete(ovlkHandle))
    {
        fwrite(inBuffer, sizeof(inBuffer), 1, outFile);
        memset(inBuffer, 0xcc, sizeof(inBuffer));
        OvlK_ReUse(ovlkHandle);
        UsbK_IsoReadPipe(usbHandle, 0x83, inBuffer, sizeof(inBuffer), ovlkHandle, isoCtx);
    }
}
If I put a breakpoint at the fwrite line, then inBuffer is always full of 0xCC - i.e., it has not been filled by the iso read.
I've checked all the error return values from the UsbK/OvlK function calls and they are all as they should be. I've checked my buffers are sufficiently big to receive the data.
I use very similar code to write to the ISO out pipe on endpoint 0x02 and that works perfectly, the only difference really between the code above and my write code is that the fwrite/memset commands are replaced with a call to a "fillbuffer" function that populates my outBuffer before calling UsbK_IsoWritePipe function.
I tried looking through any examples I could find in the samples and also online but struggled to understand/get them to work with my particular device.
Any suggestions or help greatly appreciated.
So it appears that the above code did work and I was being misled by the fact that the debugger was interrupting the flow of things - I keep forgetting that trying to debug real-time stuff can introduce its own issues.
The first issue was that stepping through the code in the debugger interfered with the low-level libusbk code capturing the USB packets and filling my buffers correctly - once I let it run at full speed and found other ways to test the buffers, I found there actually was some data in there.
The second problem was that quite often the buffer only started being filled partway through (and not always right from the start), so when I examined the data I was only printing the first part of the buffer to the console; all I saw was 0xCC, and I therefore assumed it hadn't worked.
Once I realised that there was actually some data later in the buffer, I started looking through the buffer in packet-sized chunks: if a packet consisted entirely of 0xCC I would skip it and move on, but if any of it was not 0xCC I would treat it as a valid packet. This worked perfectly and I successfully received all the data. I'm sure there's a more "proper" way of doing this, but it works for me now.
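For reference, the packet-sized scan described above might look something like this (the packet size and names are placeholders):
#include <cstddef>
#include <cstdint>
#include <cstdio>

// A packet that still contains only the 0xCC fill pattern was never written.
bool isUnfilled(const uint8_t *pkt, size_t len)
{
    for (size_t i = 0; i < len; ++i)
        if (pkt[i] != 0xCC) return false;
    return true;
}

void writeValidPackets(const uint8_t *buf, size_t bufLen, size_t pktSize, FILE *out)
{
    for (size_t off = 0; off + pktSize <= bufLen; off += pktSize)
        if (!isUnfilled(buf + off, pktSize))
            fwrite(buf + off, pktSize, 1, out); // keep packets with real data
}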

Restarting Streaming OpenAL Source?

Why does my streaming OpenAL source sometimes go to AL_STOPPED state, forcing me to call alSourcePlay? This usually happens when I do not call "send" fast enough, i.e. in debug mode. Does the OpenAL source automatically stop when it doesn't have enough queued buffers? How do I avoid that?
void send(audio_buffer audio) override
{
    ALenum state;
    alGetSourcei(source_, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING)
        alSourcePlay(source_); // This happens sometimes, usually when "send" is not called fast enough.

    ALuint buffer = 0;
    alSourceUnqueueBuffers(source_, 1, &buffer);
    if (buffer)
    {
        alBufferData(buffer, AL_FORMAT_STEREO16, audio.data(), static_cast<ALsizei>(audio.size() * sizeof(int16_t)), 48000);
        alSourceQueueBuffers(source_, 1, &buffer);
    }
    else
        LOG << "Dropped audio.";
}
It sounds like your basic problem is that your audio stream is starved. There are a few options you can use to mitigate this, but they all have their own side effects:
(1) You can configure it to play from a looping buffer, to which you are supplying the relevant data. The downside to this is that it will audibly repeat itself if you starve the buffer too long, but it will have some better performance characteristics (fragmentation, etc).
(2) You can increase the send buffer size. This will only cover up small problems, and potentially increases the latency in dynamic content.
(3) Finally, you can thread the audio send operation, so that as long as the audio thread isn't starved, it can continue to send data in the background.
The high-production-quality solution probably involves all three of these. Sorry for the lack of OpenAL-specific terminology, but every audio system I've seen has these capabilities.
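On the original question itself: an OpenAL streaming source moves to AL_STOPPED once every queued buffer has been processed (an underrun), and it never resumes on its own, so calling alSourcePlay again is the correct recovery. A minimal sketch of a safer resume, which only restarts once data has actually been queued again, might be:
#include <AL/al.h>
#include <cstdint>

// Requeue data and resume only when the queue is non-empty, so that
// alSourcePlay is never issued against a source with nothing to play.
void queueAndResume(ALuint source, ALuint buffer,
                    const int16_t *data, ALsizei bytes, ALsizei rate)
{
    alBufferData(buffer, AL_FORMAT_STEREO16, data, bytes, rate);
    alSourceQueueBuffers(source, 1, &buffer);

    ALint state = 0, queued = 0;
    alGetSourcei(source, AL_SOURCE_STATE, &state);
    alGetSourcei(source, AL_BUFFERS_QUEUED, &queued);
    if (state != AL_PLAYING && queued > 0)
        alSourcePlay(source); // resume after an underrun
}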