Problems capturing audio from the second sound card - c++

I have written a program that captures sound via waveInOpen() in Windows. It works great on the microphone device on the main board, but when I try to capture from the second sound card, I get only static noise. Recording with Sound Recorder works great on both cards. Does anyone know of any known problems with waveInOpen() and multiple input devices?
The code that opens the input-device looks like this:
void Audio::OpenDevice(const int device,
                       const Audio::SamplingRate samplingRate)
    throw (Exception, std::exception)
{
    switch (samplingRate)
    {
    ...
    case AUDIO_16BIT_44KHZ_STEREO:
        bits_per_sample_ = 16;
        hertz_ = 44100;
        channels_ = 2;
        break;
    ...
    default:
        throw Exception("Audio::OpenDevice(): Invalid enum value");
    }

    // Open the device
    const UINT_PTR dev = (-1 == device) ? (UINT_PTR)WAVE_MAPPER : (UINT_PTR)device;

    WAVEFORMATEX wf = {0};
    wf.wFormatTag = WAVE_FORMAT_PCM;
    wf.nChannels = channels_;
    wf.wBitsPerSample = bits_per_sample_;
    wf.nSamplesPerSec = hertz_;
    wf.nBlockAlign = wf.nChannels * wf.wBitsPerSample / 8;

    const MMRESULT result = waveInOpen(&hwi_, dev, &wf,
        (DWORD_PTR)OnWaveEvent, (DWORD_PTR)this, CALLBACK_FUNCTION);
    if (MMSYSERR_NOERROR != result)
        throw Exception("waveInOpen()");

    std::cout << "Audio: Sampling at " << hertz_ << " hertz from "
              << channels_ << " channel(s) with " << bits_per_sample_
              << " bits per sample. "
              << std::endl;
}

Did you check the microphone gain settings, the mixer settings, and that the microphone hardware you're using is compatible with the input you have it hooked to? Hooking most microphones to a line-in connection does not work well; the microphone doesn't have enough output voltage to drive that kind of input.

My guess (purely a guess) is that the bit depth or sample rate is somehow incorrect. If you are using 16-bit/44100 Hz, I would assume it is supported (it's pretty common), but maybe the sound card is not set for those rates. I have an external Edirol sound card that I have to physically turn off and on when I change bit depth (and adjust a separate switch on it).
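One way to rule that out in code is to ask the driver whether it accepts the format at all, without actually opening the device, using the WAVE_FORMAT_QUERY flag. Below is a minimal sketch (the helper name FormatSupported is mine); note that it also fills nAvgBytesPerSec, which the code in the question leaves at zero and which some drivers may reject:
#include <windows.h>
#include <mmsystem.h>
#include <iostream>
// link against winmm.lib

// Returns true if the given input device accepts the requested PCM format.
// deviceId is the same index you would pass to waveInOpen().
bool FormatSupported(UINT deviceId, WORD channels, DWORD samplesPerSec, WORD bitsPerSample)
{
    WAVEFORMATEX wf = {0};
    wf.wFormatTag      = WAVE_FORMAT_PCM;
    wf.nChannels       = channels;
    wf.nSamplesPerSec  = samplesPerSec;
    wf.wBitsPerSample  = bitsPerSample;
    wf.nBlockAlign     = wf.nChannels * wf.wBitsPerSample / 8;
    wf.nAvgBytesPerSec = wf.nSamplesPerSec * wf.nBlockAlign; // required for PCM
    wf.cbSize          = 0;

    // WAVE_FORMAT_QUERY: validate the format only; no device is opened.
    return waveInOpen(NULL, deviceId, &wf, 0, 0, WAVE_FORMAT_QUERY) == MMSYSERR_NOERROR;
}

int main()
{
    const UINT n = waveInGetNumDevs();
    for (UINT i = 0; i < n; ++i)
        std::cout << "Device " << i << " supports 16-bit/44.1kHz/stereo: "
                  << std::boolalpha << FormatSupported(i, 2, 44100, 16) << std::endl;
}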

Related

I'm trying to open a stream in PortAudio

I'm using this API: Pa_OpenStream()
// Open line-in stream
err = Pa_OpenStream(&m_stream,
                    &m_inputParameters,
                    &m_outputParameters,
                    44100,              // sample rate
                    128,                // frames per buffer
                    0,                  // paClipOff
                    OmniLineInCallback,
                    NULL);
and I'm getting err = -9993, i.e. paBadIODeviceCombination.
I configured both an input and an output device, and I want to record from the input and send the audio to the output playback device.
I don't know why I'm getting this error.
Appreciate your help,
Aviel
Make sure you pass correct parameters to the method. To do that, you can do the following.
Initialize PortAudio via Pa_Initialize().
Check which audio devices are actually available to you through PortAudio: use Pa_GetDeviceCount() and then Pa_GetDeviceInfo() for each available device. Look at how many inputs and outputs each device actually offers, and don't ask for more channels than it supports.
Fill the corresponding fields of the PaStreamParameters struct with the correct values.
This is how I open my ASIO/CoreAudio device (I also use the Qt framework, but this does not affect the meaning of the example).
How I initialize the library and find the device I need:
int MyClass::initSoundInterfaces()
{
    int result = -1; // target ASIO/CoreAudio device index
    PaError err = Pa_Initialize();

    const PaDeviceInfo* deviceInfo;
    int numDevices = Pa_GetDeviceCount();
    for (int DevIndex = 0; DevIndex < numDevices; DevIndex++)
    {
        deviceInfo = Pa_GetDeviceInfo(DevIndex);
        QString str = Pa_GetHostApiInfo(deviceInfo->hostApi)->name;
        qDebug() << "DEV: ApiInfo: " << str;
        qDebug() << "defaultSampleRate = " << deviceInfo->defaultSampleRate;
        qDebug() << "maxInputChannels = " << deviceInfo->maxInputChannels;
        qDebug() << "maxOutputChannels = " << deviceInfo->maxOutputChannels;

        QRegExp reg_exp(".*(ASIO|Core.*Audio).*", Qt::CaseInsensitive);
        if (str.contains(reg_exp))
        {
            if (deviceInfo->maxInputChannels > 0
                && deviceInfo->maxOutputChannels > 1)
            {
                result = DevIndex;
                break;
            }
        }
    }
    return result;
}
Then I pass the given device index to the following method to open and start a stream:
bool MyClass::startAudio(int DevIndex)
{
    PaError err = paNoError;

    PaStreamParameters in_param;
    in_param.device = DevIndex;
    g_ChannelCount = min(Pa_GetDeviceInfo(DevIndex)->maxInputChannels,
                         MAX_INPUT_COUNT);
    in_param.channelCount = g_ChannelCount;
    in_param.hostApiSpecificStreamInfo = NULL;
    in_param.sampleFormat = paFloat32 | paNonInterleaved;
    in_param.suggestedLatency = 0;
        // Pa_GetDeviceInfo(DevIndex)->defaultLowInputLatency;

    PaStreamParameters out_param;
    out_param.device = DevIndex;
    out_param.channelCount = 2; // I do not need more than 2 output channels
    out_param.hostApiSpecificStreamInfo = NULL;
    out_param.sampleFormat = paFloat32 | paNonInterleaved; // Not all devices support 32 bits
    out_param.suggestedLatency = 0;
        // Pa_GetDeviceInfo(DevIndex)->defaultLowOutputLatency;

    if (err == paNoError)
    {
        err = Pa_OpenStream(&stream,
                            &in_param,
                            &out_param,
                            nSampleRate,
                            cBufferSize /*paFramesPerBufferUnspecified*/,
                            paNoFlag,
                            process,
                            0);
    }
    err = Pa_StartStream(stream);
    ...
}
OK, when I call Pa_GetDeviceCount() I get many available devices.
Currently I have the onboard sound card and a USB sound card (each of them has input and output devices).
When I configure the input and output of the onboard sound card it works fine, but when I configure the input of the USB card and the output of the onboard card it returns err = paInvalidDevice.
I also saw that each card exposes several devices that differ in hostApi (paInDevelopment=0, paDirectSound=1, paMME=2).
What is the difference between them, and which device should I choose? Is it OK to mix between them, i.e. choose an input device that has paDirectSound and an output that has paInDevelopment?
Another thing I paid attention to is the sample rate and number of channels: is it OK to have an input with a sample rate of 44100 and an output of 48000?
And one last thing: according to what did you configure the variables nSampleRate and cBufferSize?
Thanks,
Aviel
It is because the host API type of the input device and the output device are not the same. For example:
import sounddevice as sd
sd.query_devices()
will get the available devices:
> 1 ADAT (7+8) (RME Fireface UC), MME (2 in, 0 out)
2 SPDIF/ADAT (1+2) (RME Fireface , MME (2 in, 0 out)
3 Analog (1+2) (RME Fireface UC), MME (2 in, 0 out)
...
14 扬声器 (RME Fireface UC), MME (0 in, 8 out)
...
44 ASIO Fireface USB, ASIO (18 in, 18 out)
where I have deleted the irrelevant devices. Here we can see:
device 3 is an input device with host api MME,
device 14 is an output device with host api MME,
device 44 is an io device with host api ASIO.
Now if you want to call the sounddevice.playrec() method, you must make sure the I/O devices you choose use the same kind of host API, for example:
import sounddevice as sd
sd.playrec(data, device=(3, 14)) # OK
sd.playrec(data, device=(44, 44)) # OK
sd.playrec(data, device=(3, 44)) # bad
sd.playrec(data, device=(44, 14)) # bad
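Translating that back to PortAudio/C++ for the original question: pick the input device and the output device from the same host API before calling Pa_OpenStream(). A rough sketch (Pa_Initialize() must already have been called; the helper name pickDevicePair is mine):
#include <portaudio.h>

// Find an input device and an output device that share the same host API.
// Returns true and fills inDev/outDev on success.
bool pickDevicePair(PaDeviceIndex &inDev, PaDeviceIndex &outDev)
{
    const PaDeviceIndex numDevices = Pa_GetDeviceCount();
    for (PaDeviceIndex i = 0; i < numDevices; ++i)
    {
        const PaDeviceInfo *in = Pa_GetDeviceInfo(i);
        if (in->maxInputChannels <= 0)
            continue;                        // not an input device
        for (PaDeviceIndex o = 0; o < numDevices; ++o)
        {
            const PaDeviceInfo *out = Pa_GetDeviceInfo(o);
            if (out->maxOutputChannels <= 0)
                continue;                    // not an output device
            if (in->hostApi == out->hostApi) // same host API -> valid combination
            {
                inDev = i;
                outDev = o;
                return true;
            }
        }
    }
    return false;
}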

MT166-C connection not responding. C++ library

I have an MT166-C dispenser and I am writing C++ code to manage it.
In development I use the SDK (link attached), and I have a problem.
To work with the dispenser, I open the COM port. Code:
int input_port;
string com_str = "\\\\.\\COM";

std::cin >> input_port;
std::cout << "\nInput COM value: " << input_port << std::endl;

com_str = com_str + to_string(input_port);
char* cstr = &com_str[0];
char* port_com = cstr;

HANDLE port = CommOpenWithBaut(port_com, 9600);
if (port == 0)
{
    std::cout << "Cannot open connect!\n\n" << std::endl;
    return -1;
}
After that, I use the HANDLE port to call the SDK methods.
int iRetn = 0;
BYTE byStatus = 0;
string str = "";
iRetn = MT166_GetStatus(port, 0x98, byStatus);
As per the documentation (section 3.1 in MT166-C.docx, linked above):
DLLEXPORT int APIENTRY MT166_GetStatus(HANDLE hComHandle, BYTE CardNum, BYTE &byStatus)
// Parameters:
//   hComHandle: input parameter, serial port handle, obtained by opening the serial port
//   CardNum:    input parameter, card dispenser number; default is 0x98
//   byStatus:   output parameter, card dispenser status word
// Return value:
//   Success: return value is 0
//   Failure: return value is not 0; -1 = no communication
In response, I get the code -1: no communication. For other methods the situation is the same.
I do not understand why there is no answer from the dispenser (no communication). I would be very grateful for any help.
I have tried connecting via an RS-232 cable and via a USB-to-RS-232 adapter, with no change.
Thank you for your time.
First of all, you need to check the physical availability of the external device.
Check the baud rate, data bits, stop bits, flow control parameters, and so on.
Check the OS hardware list to make sure the driver is correct.
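If the vendor's CommOpenWithBaut() leaves any of those at a default the dispenser doesn't like, you can also open the port yourself with the Win32 API and force the settings explicitly. A sketch, assuming the dispenser expects 9600 baud, 8 data bits, no parity, one stop bit and no flow control (check the MT166-C manual for the real values):
#include <windows.h>

// Open e.g. "\\\\.\\COM3" and force 9600 8N1 with no flow control.
HANDLE OpenSerial(const char *portName)
{
    HANDLE h = CreateFileA(portName, GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return INVALID_HANDLE_VALUE;

    DCB dcb = {0};
    dcb.DCBlength = sizeof(dcb);
    if (!GetCommState(h, &dcb)) { CloseHandle(h); return INVALID_HANDLE_VALUE; }

    dcb.BaudRate     = CBR_9600;
    dcb.ByteSize     = 8;
    dcb.Parity       = NOPARITY;
    dcb.StopBits     = ONESTOPBIT;
    dcb.fOutxCtsFlow = FALSE;            // no hardware flow control
    dcb.fRtsControl  = RTS_CONTROL_ENABLE;
    if (!SetCommState(h, &dcb)) { CloseHandle(h); return INVALID_HANDLE_VALUE; }

    COMMTIMEOUTS to = {0};
    to.ReadTotalTimeoutConstant = 1000;  // give the dispenser 1 s to answer
    SetCommTimeouts(h, &to);
    return h;
}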

Experiencing audio dropouts with OS X core audio playback/output

I'm doing playback using Core Audio (OS X 10.11.4 Beta, older Mac mini) with a simple output audio unit configured for input and output (though all of my problems seem to be with output). A streaming audio source from a socket/the internet feeds into a boost lockless queue, which then feeds the output AU. I'm getting audio dropouts that appear to be the result of the AU render callback intermittently not being called by Core Audio.
Here is a graph. There were ~10 seconds of flawless audio before this section.
black: sample audio, simple sine wave
blue: wall clock duration of render callback (OutputProc) in ms, point off the chart above is ~120ms
orange: size of lockless queue (playback_buf) in samples/1000 to fit it in graph nicely
x-axis: time in ms
Everything is logged in OutputProc, so if that isn't called, nothing gets logged, but the graphing tool will connect the dots across those periods. There are always enough samples in the buffer. It seems that from ~22475ms to ~22780ms, OutputProc is called only once, at 22640. It does have a long wall-clock time on that particular instance, but that seems to be due to pre-emption. Later, in the 22800 to 23000 range, there are still dropouts, but OutputProc doesn't last any longer than normal and certainly doesn't overrun the real-time window (~6ms here; the HW sample rate is 96kHz). So I'm thinking some other thread is somehow pre-empting it, though I would expect the Core Audio thread to have very high priority. I do have some boost asio socket input/output going on in parallel (e.g. boost::asio::io_service io_service), but I would expect that to always lose priority to Core Audio. If you have any pointers to the actual problem, that is always welcome, but I can make progress if I can just find out which thread(s) are executing during those times of interest. Is there something in Xcode that shows me a scheduler history or thread history, possibly per CPU core?
The OutputProc if it helps:
OSStatus AudioStream::OutputProc(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *TimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    AudioStream *This = (AudioStream *) inRefCon;
    playback_cb_dur_log.StartTime();

    static bool first_call = true;
    if (first_call)
    {
        std::cout << TIME(timer) << " playback starting\n";
        This->playback_state = PLAYBACK_ACTIVE;
        first_call = false;
    }

    int playback_buf_avail = (int) This->playback_buf.read_available();
    playback_buf_size_log.AddPoint(playback_buf_avail / 1000.);

    if (playback_buf_avail >= This->playback_buf_thresh)
    {
        std::cout << TIME() << " audio, thresh: " << This->playback_buf_thresh
                  << ", buf_size: " << playback_buf_avail << std::endl;
        // new threshold just one frame of data
        This->playback_buf_thresh = This->frames_total;
        for (int i = 0; i < This->num_channels; i++)
        {
            float *temp = (float *) ioData->mBuffers[i].mData;
            This->playback_buf.pop(temp, inNumberFrames);
            playback_sample_log.AddData(ioData->mBuffers[i].mData, inNumberFrames,
                                        This->chan_params.sample_rate);
        }
    }
    else
    {
        std::cout << TIME() << " silence, thresh: " << This->playback_buf_thresh
                  << ", buf_size: " << This->playback_buf.read_available() << std::endl;
        for (int i = 0; i < This->num_channels; i++)
        {
            memset(ioData->mBuffers[i].mData, 0, inNumberFrames * sizeof(Float32));
            playback_sample_log.AddData(ioData->mBuffers[i].mData, inNumberFrames,
                                        This->chan_params.sample_rate);
        }
    }

    playback_cb_dur_log.StopAndCaptureTime();
    return noErr;
}
Your logging mechanism might be interfering with the real-time thread. Anything, any call, that can take a lock or manage memory (such as string creation or stdout file IO) can cause dropouts and other failures in Audio Unit callbacks.
If that's the case, you might try stuffing your timestamps into a lock-free circular logging FIFO and doing any file IO on another thread.
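For what it's worth, here is a minimal sketch of that idea (the type and function names are mine): a single-producer/single-consumer ring of plain structs that the render callback pushes into, with all printing done on an ordinary thread:
#include <array>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct LogEntry { double t_ms; int buf_avail; };

// Single-producer (render callback) / single-consumer (logger thread) ring buffer.
class LogRing {
public:
    bool push(const LogEntry &e) {             // real-time safe: no locks, no allocation
        const size_t w = write_.load(std::memory_order_relaxed);
        const size_t next = (w + 1) % N;
        if (next == read_.load(std::memory_order_acquire))
            return false;                      // full: drop the entry instead of blocking
        slots_[w] = e;
        write_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(LogEntry &e) {
        const size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return false;                      // empty
        e = slots_[r];
        read_.store((r + 1) % N, std::memory_order_release);
        return true;
    }
private:
    static const size_t N = 4096;
    std::array<LogEntry, N> slots_;
    std::atomic<size_t> write_{0}, read_{0};
};

LogRing g_log;

// Runs on a normal thread; all printf/file IO happens here, never in the callback.
void LoggerThread() {
    LogEntry e;
    for (;;) {
        while (g_log.pop(e))
            std::printf("%.3f ms, buf=%d\n", e.t_ms, e.buf_avail);
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main() {
    std::thread logger(LoggerThread);
    g_log.push({1.25, 4800});   // in real code, push from OutputProc instead of using cout
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    logger.detach();            // sketch only: a real program would shut down cleanly
}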

Is it possible to get good FPS using Raspberry Pi camera v4l2 in c++?

I'm trying to stream video on a Raspberry Pi using the official V4L2 driver with the Raspberry Pi camera, from C++ on raspbian (2015-02 release), and I'm having low FPS issues.
Currently I'm just creating a window and copying the buffer to the screen (which takes about 30ms) whereas the select() takes about 140ms (for a total of 5-6 fps). I also tried sleeping for 100ms and it decreases the select() time by a similar amount (resulting in the same fps). CPU load is about 5-15%.
I also tried changing the driver fps from the console (or via system()), but it only works downwards: for example, if I set the driver fps to 1fps I get 1fps, but if I set it to 90fps I still get 5-6fps, even though the driver confirms setting it to 90fps.
Also, when querying FPS modes for the used resolution I get 90fps.
I included the parts of the code related to V4L2 (code omitted between different parts) :
//////////////////
// Open device
//////////////////

mFD = open(mDevName, O_RDWR | O_NONBLOCK, 0);
if (mFD == -1) ErrnoExit("Open device failed");

//////////////////
// Setup format
//////////////////

struct v4l2_format fmt;
memset(&fmt, 0, sizeof(fmt));
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
Xioctl(VIDIOC_G_FMT, &fmt);
mImgWidth = fmt.fmt.pix.width;
mImgHeight = fmt.fmt.pix.height;
cout << "width=" << mImgWidth << " height=" << mImgHeight
     << "\nbytesperline=" << fmt.fmt.pix.bytesperline
     << " sizeimage=" << fmt.fmt.pix.sizeimage << "\n";

// For some reason querying the format always sets pixelformat to JPEG
// no matter the input, so set it back to YUYV
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
if (Xioctl(VIDIOC_S_FMT, &fmt) == -1)
{
    cout << "Set video format failed : " << strerror(errno) << "\n";
}

//////////////////
// Setup streaming
//////////////////

struct v4l2_requestbuffers req;
memset(&req, 0, sizeof(req));
req.count = 20;
req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req.memory = V4L2_MEMORY_MMAP;
if (-1 == Xioctl(VIDIOC_REQBUFS, &req))
{
    ErrnoExit("Reqbufs");
}
if (req.count < 2)
    throw "Not enough buffer memory !";

mNBuffers = req.count;
mBuffers = new CBuffer[mNBuffers];
if (!mBuffers) throw "Out of memory !";

for (unsigned int i = 0; i < mNBuffers; i++)
{
    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = i;
    if (-1 == Xioctl(VIDIOC_QUERYBUF, &buf))
        ErrnoExit("Querybuf");

    mBuffers[i].mLength = buf.length;
    mBuffers[i].pStart = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                              MAP_SHARED, mFD, buf.m.offset);
    if (mBuffers[i].pStart == MAP_FAILED)
        ErrnoExit("mmap");
}

//////////////////
// Start streaming
//////////////////

unsigned int i;
enum v4l2_buf_type type;
struct v4l2_buffer buf;
for (i = 0; i < mNBuffers; i++)
{
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = i;
    if (-1 == Xioctl(VIDIOC_QBUF, &buf))
        ErrnoExit("QBUF");
}
type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (-1 == Xioctl(VIDIOC_STREAMON, &type))
    ErrnoExit("STREAMON");
And the last two parts in the main loop :
//////////////////
// Get frame
//////////////////

FD_ZERO(&fds);
FD_SET(mFD, &fds);
tv.tv_sec = 3;
tv.tv_usec = 0;

struct timespec t0, t1;
clock_gettime(CLOCK_REALTIME, &t0);

// This line takes about 140ms which I don't get
r = select(mFD + 1, &fds, NULL, NULL, &tv);

clock_gettime(CLOCK_REALTIME, &t1);
cout << "select time : "
     << ((float)(t1.tv_sec - t0.tv_sec)) * 1000.0f
        + ((float)(t1.tv_nsec - t0.tv_nsec)) / 1000000.0f << "\n";

if (-1 == r)
{
    if (EINTR == errno)
        continue;
    ErrnoExit("select");
}
if (r == 0)
    throw "Select timeout\n";

// Read the frame
//~ struct v4l2_buffer buf;
memset(&mCurBuf, 0, sizeof(mCurBuf));
mCurBuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
mCurBuf.memory = V4L2_MEMORY_MMAP;

// DQBUF about 2ms
if (-1 == Xioctl(VIDIOC_DQBUF, &mCurBuf))
{
    if (errno == EAGAIN) continue;
    ErrnoExit("DQBUF");
}
clock_gettime(CLOCK_REALTIME, &mCaptureTime);

// Manage frame in mBuffers[buf.index]
mCurBufIndex = mCurBuf.index;
break;
}

//////////////////
// Release frame
//////////////////

if (-1 == Xioctl(VIDIOC_QBUF, &mCurBuf))
    ErrnoExit("VIDIOC_QBUF during mainloop");
I have been looking at the various methods of using the Pi camera and I'm hardly an expert, but it would seem that the default camera settings are what's holding you back. There are many modes and switches. I don't know yet whether they are exposed through ioctls or how; I just started. But I had to use a program called v4l2-ctl to get things ready for the mode I wanted. A deep look at that source and some code lifting should let you achieve greatness. Oh, and I doubt the select() call is an issue; it's simply waiting on the descriptor, which is slow to become readable. Depending on the mode, etc., there can be a mandatory wait for auto-exposure and the like.
Edit: I meant to say "a default setting", as you've changed some. There are also rules not codified in the driver.
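If the frame interval is one of the things v4l2-ctl is changing for you, the equivalent from C++ is VIDIOC_S_PARM. A rough sketch (the helper name RequestFps is mine; whether the driver honours the request depends on the format and mode it has settled on):
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <cstring>
#include <iostream>

// Ask the driver for 'fps' frames per second; it writes back what it actually set.
bool RequestFps(int fd, unsigned fps)
{
    struct v4l2_streamparm parm;
    memset(&parm, 0, sizeof(parm));
    parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    parm.parm.capture.timeperframe.numerator   = 1;
    parm.parm.capture.timeperframe.denominator = fps;
    if (ioctl(fd, VIDIOC_S_PARM, &parm) == -1)
        return false;
    std::cout << "Driver frame interval: "
              << parm.parm.capture.timeperframe.numerator << "/"
              << parm.parm.capture.timeperframe.denominator << "\n";
    return true;
}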
The pixel format matters. I encountered a similar low-fps problem, and I spent some time testing with my programs in Go and C++ using the V4L2 API. What I found is that the RPi Camera Module has good acceleration with the H.264/MJPG pixel formats: I can easily get 60fps at 640*480, the same as with non-compressed formats like YUYV/RGB. However, JPEG runs very slowly; I can only get 4fps, even at 320*240. I also found that the current draw is higher (>700mA) with JPEG, compared to 500mA with H.264/MJPG.
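Based on that, switching the capture format from YUYV to MJPG is a small change to the VIDIOC_S_FMT call shown in the question. A sketch (the helper name SetMjpgFormat is mine; the dequeued buffers are then JPEG-compressed frames that you have to decode yourself):
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <cstring>

// Same format setup as in the question, but ask for motion-JPEG output.
bool SetMjpgFormat(int fd, unsigned width, unsigned height)
{
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = width;
    fmt.fmt.pix.height      = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
    fmt.fmt.pix.field       = V4L2_FIELD_NONE;
    return ioctl(fd, VIDIOC_S_FMT, &fmt) != -1;
}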

How to get a list of video capture device NAMES (web cameras) using Qt (cross-platform)? (C++)

So all I need is simple: a list of the currently available video capture devices (web cameras). I need it in a simple C++ Qt console app. By "list" I mean something like this console output:
1) Asus Web Camera
2) Sony Web Camera
So my question is: how do I print such a list using Qt and C++? (If possible, I'd love to see how to do it in pure Qt, with no extra libs.)
Also from this series:
How to get a list of video capture devices on Linux? - including special details on getting camera NAMES, with correct, tested answers
How to get a list of video capture devices on Mac OS? - with correct answers, not yet tested by me
How to get a list of video capture devices on Windows? - with correct, tested answers
How to get a list of video capture device NAMES using Qt (cross-platform)?
I used this example code to list the cameras and get some info about them.
#include <QtMultimedia/QCameraInfo>

QList<QCameraInfo> cameras = QCameraInfo::availableCameras();
foreach (const QCameraInfo &cameraInfo, cameras) {
    qDebug() << "Name: " << cameraInfo.deviceName();
    qDebug() << "Position: " << cameraInfo.position();
    qDebug() << "Orientation: " << cameraInfo.orientation();
}
Remember to add this to your .pro file:
QT += multimedia
I wrote the following code to list all the USB capture devices. Remember to include webcam.h and libwebcam.h and to link your code to libwebcam using -lwebcam.
bool QextCamera::listAvailableDevices(QStringList *captureDeviceList)
{
    CResult ret;
    CDevice *devices = NULL;
    quint32 req_size = 0;
    quint32 buffer_size = 0;
    quint32 count = 0;
    QStringList availableDevices;

    c_init();

    do {
        if (devices) {
            free(devices);
        }
        if (req_size) {
            devices = (CDevice *)malloc(req_size);
            if (devices == NULL) {
                // LOG ERROR...
                return false;
            }
            buffer_size = req_size;
        }
        // Try to enumerate. If the buffer is not large enough, the required size is returned.
        ret = c_enum_devices(devices, &req_size, &count);
        if (ret != C_SUCCESS && ret != C_BUFFER_TOO_SMALL) {
            // LOG ERROR...
            return false;
        }
    } while (buffer_size < req_size);

    if (count == 0) {
        // LOG ERROR...
        return false;
    }

    for (quint32 i = 0; i < count; i++) {
        CDevice *device = &devices[i];
        availableDevices << QString("%1 : %2 : %3").arg(device->shortName)
                                                   .arg(device->driver)
                                                   .arg(device->location);
    }

    if (devices) {
        free(devices);
    }
    c_cleanup();

    *captureDeviceList = availableDevices;
    return true;
}