SDL_OpenAudioDevice with multiple devices - c++

I am currently writing an application that will output sound to 3 different audio devices. For that, I have installed 3 USB audio devices in my PC, all of the same brand. I would like to use SDL2 for the output (although I am open to other suggestions). I had a look at the documentation, which states for the first parameter (device):
a UTF-8 string reported by SDL_GetAudioDeviceName(); see Remarks
Now I have written a sample program that just lists the available devices:
#include <SDL2/SDL.h>
#include <iostream>

int main()
{
    SDL_Init( SDL_INIT_EVERYTHING );
    std::cout << SDL_GetError() << std::endl;
    atexit( SDL_Quit );

    int count = SDL_GetNumAudioDevices(0);
    for (int i = 0; i < count; ++i)
    {
        std::cout << "Device " << i << ": " << SDL_GetAudioDeviceName(i, 0) << std::endl;
    }
    return 0;
}
It enumerates all available devices. The problem is that all 3 USB devices report the same name:
Device 0: Internes Audio Digital Stereo (IEC958)
Device 1: C-Media Electronics, Inc. Audio Adapter
Device 2: C-Media Electronics, Inc. Audio Adapter
Device 3: C-Media Electronics, Inc. Audio Adapter
Is there any way to use something else (e.g. the USB Path) as the parameter or to make sure that I can open all 3 devices?
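For reference, here is roughly how I would try to open each enumerated output device by the name SDL reports for it. This is an untested sketch (the format values are just placeholders), and whether several identical names resolve to distinct physical devices is exactly what I am unsure about:

#include <SDL2/SDL.h>
#include <iostream>
#include <vector>

int main()
{
    if (SDL_Init(SDL_INIT_AUDIO) != 0)
    {
        std::cerr << SDL_GetError() << std::endl;
        return 1;
    }
    atexit(SDL_Quit);

    // Desired output format; SDL may adjust it via the "have" spec.
    SDL_AudioSpec want = {};
    want.freq = 44100;
    want.format = AUDIO_S16SYS;
    want.channels = 2;
    want.samples = 4096;

    std::vector<SDL_AudioDeviceID> ids;
    int count = SDL_GetNumAudioDevices(0);
    for (int i = 0; i < count; ++i)
    {
        const char* name = SDL_GetAudioDeviceName(i, 0);
        SDL_AudioSpec have;
        // Open by the reported name; with duplicate names this may or may not
        // pick three different physical devices, depending on the backend.
        SDL_AudioDeviceID id = SDL_OpenAudioDevice(name, 0, &want, &have, 0);
        if (id == 0)
            std::cerr << "Could not open device " << i << ": " << SDL_GetError() << std::endl;
        else
            ids.push_back(id);
    }

    for (SDL_AudioDeviceID id : ids)
        SDL_CloseAudioDevice(id);
    return 0;
}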

Related

IPortableDeviceManager::GetDevices returning 0 Devices

I'm currently writing a simple application to retrieve a list of the PnP devices on my computer. To do this, I'm making use of the Windows PortableDeviceApi library.
So far I have the following code:
#include <iostream>
#include <PortableDeviceApi.h>
#include <wrl.h>

inline void getDeviceHWIDs() {
    // Initialize
    CoInitialize(nullptr);

    // create portable device manager object
    Microsoft::WRL::ComPtr<IPortableDeviceManager> device_manager;
    HRESULT hr = CoCreateInstance(CLSID_PortableDeviceManager,
                                  nullptr,
                                  CLSCTX_INPROC_SERVER,
                                  IID_PPV_ARGS(&device_manager));
    if (FAILED(hr)) {
        std::cout << "! Failed to CoCreateInstance CLSID_PortableDeviceManager, hr = " << std::hex << hr << std::endl;
    }

    // obtain amount of devices
    DWORD pnp_device_id_count = 0;
    if (SUCCEEDED(hr)) {
        hr = device_manager->GetDevices(nullptr, &pnp_device_id_count);
        if (FAILED(hr)) {
            std::cout << "! Failed to get number of devices on the system, hr = " << std::hex << hr << std::endl;
        }
    }
    std::cout << "Devices found: " << pnp_device_id_count << std::endl;

    // Uninitialize
    CoUninitialize();
}
The code compiles and runs successfully, however pnp_device_id_count is returning 0, indicating that there are no PnP devices connected to my computer. This is obviously an incorrect result, since Get-PnpDevice in PowerShell returns a large list of devices.
Any help would be much appreciated, as I'm a bit stumped over this :(
Thank you :)
This is expected: Windows Portable Devices only provides a way to communicate with music players, storage devices, mobile phones, cameras, and many other types of connected devices.
It will not enumerate all devices on your system. Connect an iPhone and you will see pnp_device_id_count become 1.
To enumerate all devices you can use WinRT's Windows.Devices.Enumeration namespace or the ancient Setup API.
Here is a sample for the WinRT approach (C#, but you can do the same with C++): https://github.com/smourier/DeviceExplorer and a C++ sample for using the Setup API: https://www.codeproject.com/Articles/14412/Enumerating-windows-device
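For the Setup API route, a minimal sketch (not taken from the linked samples) that lists every present PnP device by its device instance ID might look like this:

#include <windows.h>
#include <setupapi.h>
#include <iostream>

#pragma comment(lib, "setupapi.lib")

int main()
{
    // All device setup classes, present (connected) devices only.
    HDEVINFO devs = SetupDiGetClassDevs(nullptr, nullptr, nullptr,
                                        DIGCF_ALLCLASSES | DIGCF_PRESENT);
    if (devs == INVALID_HANDLE_VALUE)
    {
        std::cout << "! SetupDiGetClassDevs failed, error = " << GetLastError() << std::endl;
        return 1;
    }

    SP_DEVINFO_DATA info = {};
    info.cbSize = sizeof(info);
    DWORD count = 0;
    for (DWORD i = 0; SetupDiEnumDeviceInfo(devs, i, &info); ++i)
    {
        char instanceId[512] = {};
        if (SetupDiGetDeviceInstanceIdA(devs, &info, instanceId, sizeof(instanceId), nullptr))
        {
            std::cout << instanceId << std::endl;
            ++count;
        }
    }
    std::cout << "Devices found: " << count << std::endl;

    SetupDiDestroyDeviceInfoList(devs);
    return 0;
}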

Disable CPU package idle states in Windows from C++ code

I am successfully disabling CPU core C-states using this code (I’m working on Win10 and use Qt):
#include <QCoreApplication>
#include <QDebug>
#include "Windows.h"
extern "C" {
#include "powrprof.h"
}
#pragma comment(lib, "powrprof.lib")

int main()
{
    const DWORD DISABLED = 1;
    const DWORD ENABLED = 0;
    GUID *scheme;
    int error;

    error = PowerGetActiveScheme(NULL, &scheme);
    qDebug() << "PowerGetActiveScheme error code = " << error;

    error = PowerWriteACValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP, &GUID_PROCESSOR_IDLE_DISABLE, DISABLED);
    qDebug() << "PowerWriteACValueIndex error code = " << error;

    error = PowerWriteDCValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP, &GUID_PROCESSOR_IDLE_DISABLE, DISABLED);
    qDebug() << "PowerWriteDCValueIndex error code = " << error;

    error = PowerSetActiveScheme(NULL, scheme);
    qDebug() << "PowerSetActiveScheme error code = " << error;

    return 0;
}
The reason behind this is that I am running a USB camera and figured out that I'm losing data packets when the processor enters idle modes. The code above works fine and overcomes this issue successfully. But it's actually a bit too much (disabling all C-states appears to be unnecessary). I made some tests with the vendor software of the camera and found out that during acquisition it is not the core C-states that stop, but the package C-states (if it is of any interest, I posted the analysis of this problem in an answer here: https://superuser.com/questions/1648849/monitor-used-usb-bandwidth-in-win10).
So my question is: Can I adapt the above code to only disable package idle states? In case that’s not possible, can I selectively disable core C-states?
Update:
Based on the suggestion of @1201ProgramAlarm, I tried to use SetThreadPriority() as in the minimal example below:
#include <QDebug>
#include <windows.h>
#include <conio.h>
#include "processthreadsapi.h"

int main()
{
    bool ok = false;

    ok = SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    qDebug() << "SetPriorityClass ok = " << ok;

    ok = SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    qDebug() << "SetThreadPriority ok = " << ok;

    for (int i = 1; i < 100; i++) {
        qDebug() << "Here I am in some dummy loop...";
        if (_kbhit()) {
            break;
        }
        Sleep(1000);
    }
    return 0;
}
Unfortunately, this doesn't help; when monitoring the CPU package idle states (using HWiNFO64) I see no effect (the package still goes idle as before).
The system is shutting down the USB device to save power.
This link provides the solution: USB system power issue.
Open the Device manager as administrator.
Find the camera.
On the "Power Management" tab, deselect "Allow the computer to turn off this device to save power".
This can be done programmatically if the device ID is known.
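A related, but system-wide rather than per-device, workaround is to turn off USB selective suspend on the active power plan using the same power APIs as in the question. The two GUIDs below are what I believe to be the USB settings subgroup and the "USB selective suspend setting"; treat them as an assumption and verify against powercfg /query before relying on this sketch:

#include <windows.h>
#include <initguid.h>
extern "C" {
#include <powrprof.h>
}
#include <iostream>

#pragma comment(lib, "powrprof.lib")

// {2A737441-1930-4402-8D77-B2BEBBA308A3} - USB settings subgroup (assumed value)
DEFINE_GUID(GUID_USB_SETTINGS_SUBGROUP,
            0x2A737441, 0x1930, 0x4402, 0x8D, 0x77, 0xB2, 0xBE, 0xBB, 0xA3, 0x08, 0xA3);
// {48E6B7A6-50F5-4782-A5D4-53BB8F07E226} - "USB selective suspend setting" (assumed value)
DEFINE_GUID(GUID_USB_SELECTIVE_SUSPEND_SETTING,
            0x48E6B7A6, 0x50F5, 0x4782, 0xA5, 0xD4, 0x53, 0xBB, 0x8F, 0x07, 0xE2, 0x26);

int main()
{
    const DWORD DISABLED = 0; // index 0 means "Disabled" for this setting
    GUID *scheme = NULL;

    DWORD error = PowerGetActiveScheme(NULL, &scheme);
    std::cout << "PowerGetActiveScheme error code = " << error << std::endl;
    if (error != ERROR_SUCCESS)
        return 1;

    error = PowerWriteACValueIndex(NULL, scheme, &GUID_USB_SETTINGS_SUBGROUP,
                                   &GUID_USB_SELECTIVE_SUSPEND_SETTING, DISABLED);
    std::cout << "PowerWriteACValueIndex error code = " << error << std::endl;

    error = PowerSetActiveScheme(NULL, scheme);
    std::cout << "PowerSetActiveScheme error code = " << error << std::endl;

    LocalFree(scheme);
    return 0;
}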

How to create an OpenGL context on an NodeJS native addon on MacOS?

Follow-up for this question.
I'm trying to create a NodeJS native addon that uses OpenGL.
I'm not able to use OpenGL functions because CGLGetCurrentContext() always returns NULL.
When trying to create a new context to draw into, CGLChoosePixelFormat always returns the error kCGLBadConnection invalid CoreGraphics connection.
What is bugging me is that when I isolate the code that creates the OpenGL context into a standalone C++ project, it works! It only gives an error when I run it inside the NodeJS addon!
I created this NodeJS native addon project to exemplify my error: https://github.com/Psidium/node-opengl-context-error-example
This is the code that works when executed on a standalone project and errors out when running inside NodeJS:
//
//  main.cpp
//  test_cli
//
//  Created by Borges, Gabriel on 4/3/20.
//  Copyright © 2020 Psidium. All rights reserved.
//
#include <iostream>
#include <OpenGL/OpenGL.h>

int main(int argc, const char * argv[]) {
    std::cout << "Context before creating it: " << CGLGetCurrentContext() << "\n";

    CGLContextObj context;
    CGLPixelFormatAttribute attributes[2] = {
        kCGLPFAAccelerated, // no software rendering
        (CGLPixelFormatAttribute) 0
    };
    CGLPixelFormatObj pix;
    CGLError errorCode;
    GLint num; // stores the number of possible pixel formats

    errorCode = CGLChoosePixelFormat( attributes, &pix, &num );
    if (errorCode > 0) {
        std::cout << ": Error returned by choosePixelFormat: " << errorCode << "\n";
        return 10;
    }

    errorCode = CGLCreateContext( pix, NULL, &context );
    if (errorCode > 0) {
        std::cout << ": Error returned by CGLCreateContext: " << errorCode << "\n";
        return 10;
    }
    CGLDestroyPixelFormat( pix );

    errorCode = CGLSetCurrentContext( context );
    if (errorCode > 0) {
        std::cout << "Error returned by CGLSetCurrentContext: " << errorCode << "\n";
        return 10;
    }

    std::cout << "Context after being created is: " << CGLGetCurrentContext() << "\n";
    return 0;
}
I already tried:
Using fork() to create a context in a subprocess (didn't work);
Changing the pixel format attributes to something that will create my context (didn't work);
I have a hunch that it may have something to do with the fact that a Node native addon is a dynamically linked library, or that my OpenGL context creation may not be executing on the main thread (but if that were the case, the fork() would have solved it, right?).
Accessing graphics hardware requires extra permissions: Windows and macOS (I don't know about others) restrict creation of a hardware-accelerated OpenGL context to the interactive user session (I may be wrong with the terms here). From one of the articles on the web:
In case the user is not logged in, the CGLChoosePixelFormat will return kCGLBadConnection
An interactive session is easier to feel than to understand; e.g. when you log in interactively and launch an application, that is an interactive session; when a process is started as a service, it is non-interactive. How this is actually managed by the system requires deeper reading. As far as I know, there is no easy way of "escaping" the non-interactive process flag.
NodeJS can be used as part of a web server, so I would expect that this could be exactly the problem: it is started as a service, by another non-interactive user, or under other special conditions that make it non-interactive. So more details on how you use / start NodeJS itself might clarify why the code doesn't work. I would also expect that using OpenGL on the server side might not be a good idea anyway (if that is the target), although a software OpenGL implementation (without the kCGLPFAAccelerated flag) might work.
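To test the software-rendering idea, one could drop kCGLPFAAccelerated from the attribute list of the original code. A minimal, untested sketch:

#include <OpenGL/OpenGL.h>
#include <iostream>

int main()
{
    // Same pixel format selection as in the question, but without
    // kCGLPFAAccelerated, so CGL is free to pick a software renderer.
    CGLPixelFormatAttribute attributes[1] = {
        (CGLPixelFormatAttribute) 0
    };
    CGLPixelFormatObj pix = NULL;
    GLint num = 0;

    CGLError errorCode = CGLChoosePixelFormat(attributes, &pix, &num);
    if (errorCode != kCGLNoError) {
        std::cout << "Error returned by CGLChoosePixelFormat: " << errorCode << "\n";
        return 10;
    }

    CGLContextObj context = NULL;
    errorCode = CGLCreateContext(pix, NULL, &context);
    CGLDestroyPixelFormat(pix);
    if (errorCode != kCGLNoError) {
        std::cout << "Error returned by CGLCreateContext: " << errorCode << "\n";
        return 10;
    }

    CGLSetCurrentContext(context);
    std::cout << "Context after being created is: " << CGLGetCurrentContext() << "\n";

    CGLSetCurrentContext(NULL);
    CGLDestroyContext(context);
    return 0;
}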
By the way, there are at least two OpenGL / WebGL extensions for NodeJS - have you tried their samples to see if they behave the same or differently in your environment?
https://github.com/google/node-gles
https://github.com/mikeseven/node-webgl

QT USB hardware ID detection Mac OS 10.8.x and Mac Book (Pro)

Background:
We have an application which is written using the Qt framework. One key requirement is that we can correctly detect the hardware serial number of the USB flash drive used (not an external hard drive). This function has been working correctly for 4 years and has worked on Windows XP, Windows Vista, Windows 7, and all Mac versions.
To our surprise, we got clients complaining that the USB hardware ID was not being read and shown completely. After testing extensively, it turns out that ONLY the combination of Mac OS Mountain Lion + MacBook (Pro) does NOT detect the hardware ID.
Mountain Lion on an iMac or a Mac Mini works fine. Leopard, Snow Leopard and Lion work fine on all Macs, including the MacBook Pro.
We have been looking for a fix for nearly one month, but without results. Can anyone provide a small piece of code that works, or give information on what causes this issue (and really only on this combination) and how to fix it?
Note
Several resources can be found on the internet about other USB hardware that has issues with Mountain Lion, but nowhere was an answer given to exactly the problem described above.
Additional info:
We use the following code at the moment, which works correctly on all Macs except those that have a USB 3.0 port.
matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
if (matchingDict == NULL)
{
    return uret; // fail
}

/* Now we have a dictionary, get an iterator. */
kr = IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict, &iter);
if (kr != KERN_SUCCESS)
{
    return uret;
}

/* Iterate: scan all USB devices on the Mac machine. */
while ((device = IOIteratorNext(iter)))
{
    int vendorId;
    int productId;
    getVidAndPid(device, &vendorId, &productId);

    /* Get the USB serial number as a CFTypeRef */
    CFTypeRef serialNoRef = IORegistryEntryCreateCFProperty(device, CFSTR("USB Serial Number"), 0, 0);

    /* Get the USB BSD name as a CFStringRef */
    CFStringRef bsdNameRef = (CFStringRef)IORegistryEntrySearchCFProperty(device, kIOServicePlane, CFSTR(kIOBSDNameKey), kCFAllocatorDefault, kIORegistryIterateRecursively);
    char* bsdName = cfStringRefToCString(bsdNameRef);
    qDebug() << "bsd Name " << bsdName;

    if (bsdName != NULL)
    {
        char* serialNo = cfTypeToCString(serialNoRef);
        qDebug() << "serialNo " << serialNo;

        /* Get the USB manufacturer as a CFTypeRef */
        CFTypeRef manufacturerRef = IORegistryEntrySearchCFProperty(device, kIOServicePlane, CFSTR(kUSBVendorString), kCFAllocatorDefault, 0);
        char* manufacturer = cfTypeToCString(manufacturerRef);
        qDebug() << "manufacturer " << manufacturer;
    }

    IOObjectRelease(device);
}

/* Done, release the iterator */
IOObjectRelease(iter);
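The snippet above relies on helpers that are not shown (getVidAndPid, cfStringRefToCString, cfTypeToCString). For completeness, a hypothetical sketch of the CFStringRef-to-C-string conversion used here could look like this:

#include <CoreFoundation/CoreFoundation.h>
#include <cstdlib>

// Hypothetical helper: convert a CFStringRef to a newly malloc'ed UTF-8
// C string (caller must free); returns NULL on failure.
static char* cfStringRefToCString(CFStringRef str)
{
    if (str == NULL)
        return NULL;

    CFIndex length = CFStringGetLength(str);
    CFIndex maxSize = CFStringGetMaximumSizeForEncoding(length, kCFStringEncodingUTF8) + 1;
    char* buffer = (char*)std::malloc(maxSize);
    if (buffer != NULL && CFStringGetCString(str, buffer, maxSize, kCFStringEncodingUTF8))
        return buffer;

    std::free(buffer);
    return NULL;
}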

Problems capturing audio from the second sound card

I have written a program that captures sound via waveInOpen() in Windows. It works great on the microphone device on the mainboard, but when I try to capture from the second sound card, I get only static noise. Recording with SoundRecorder works great on both cards. Does anyone know if there are any known problems with waveInOpen() and multiple input devices?
The code that opens the input-device looks like this:
void Audio::OpenDevice(const int device,
                       const Audio::SamplingRate samplingRate)
    throw (Exception, std::exception)
{
    switch(samplingRate)
    {
    ...
    case AUDIO_16BIT_44KHZ_STEREO:
        bits_per_sample_ = 16;
        hertz_ = 44100;
        channels_ = 2;
        break;
    ...
    default:
        throw Exception("Audio::OpenDevice(): Invalid enum value");
    }

    // Open the device
    const UINT_PTR dev = (-1 == device) ? (UINT_PTR)WAVE_MAPPER : (UINT_PTR)device;

    WAVEFORMATEX wf = {0};
    wf.wFormatTag = WAVE_FORMAT_PCM;
    wf.nChannels = channels_;
    wf.wBitsPerSample = bits_per_sample_;
    wf.nSamplesPerSec = hertz_;
    wf.nBlockAlign = wf.nChannels * wf.wBitsPerSample / 8;

    const MMRESULT result = waveInOpen(&hwi_, dev, &wf,
                                       (DWORD_PTR)OnWaveEvent, (DWORD_PTR)this, CALLBACK_FUNCTION);
    if (MMSYSERR_NOERROR != result)
        throw Exception("waveInOpen()");

    std::cout << "Audio: Sampling at " << hertz_ << " hertz from "
              << channels_ << " channel(s) with " << bits_per_sample_
              << " bits per sample. "
              << std::endl;
}
Did you check the microphone gain settings, mixer settings, and that the microphone hardware you're using is compatible with the input you have it hooked to, etc.? Hooking most microphones to a line-in connection does not work well. The microphone doesn't have enough output voltage to drive that kind of input.
My guess (purely a guess) is that the bit depth or sample rate is somehow incorrect. If you are using 16/44100, then I would assume it is supported (pretty common). But maybe the sound card is not set for those rates. I have an external Edirol sound card that I have to physically turn on and off when I change bit depth (and adjust a separate switch on it).
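One quick way to sanity-check that guess is to ask each capture device what standard formats its driver claims to support. A small sketch using waveInGetDevCaps(), assuming 16-bit / 44.1 kHz stereo is the format in question:

#include <windows.h>
#include <mmsystem.h>
#include <iostream>

#pragma comment(lib, "winmm.lib")

int main()
{
    const UINT count = waveInGetNumDevs();
    for (UINT i = 0; i < count; ++i)
    {
        WAVEINCAPSW caps = {};
        if (waveInGetDevCapsW(i, &caps, sizeof(caps)) != MMSYSERR_NOERROR)
            continue;

        // WAVE_FORMAT_4S16 = 44.1 kHz, stereo, 16-bit
        const bool supports44k16s = (caps.dwFormats & WAVE_FORMAT_4S16) != 0;
        std::wcout << L"Device " << i << L": " << caps.szPname
                   << (supports44k16s ? L" - 16-bit/44.1kHz stereo reported as supported"
                                      : L" - 16-bit/44.1kHz stereo NOT reported")
                   << std::endl;
    }
    return 0;
}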