iOS - Generate a tone at a specific frequency and volume - AudioUnit

I have to build a hearing-test app, similar to an audiogram. I need to calibrate the headphones at a specific frequency and volume, and generate a beep tone at a specific frequency and volume so that the level coming out of the headphones matches the intended level. Could anyone please help me?
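No answer is posted for this question, but as a starting point, here is a minimal sketch of the signal-generation side in C++ (callable from an Objective-C++ file): filling a buffer with a sine tone at a given frequency and linear amplitude. The function and parameter names are illustrative; feeding the buffer to a RemoteIO render callback, and mapping amplitude to a calibrated hearing level for a specific headphone, are separate steps that depend on your hardware measurements.

#include <cmath>
#include <cstddef>
#include <vector>

// Fill a buffer with a sine tone at the given frequency (Hz) and linear
// amplitude (0.0 - 1.0). The result can be handed to a RemoteIO render
// callback or written out for playback.
std::vector<float> makeTone(double frequency, double amplitude,
                            double sampleRate, double seconds)
{
    const double twoPi = 6.283185307179586;
    const std::size_t frameCount = static_cast<std::size_t>(sampleRate * seconds);
    std::vector<float> samples(frameCount);
    double phase = 0.0;
    const double phaseIncrement = twoPi * frequency / sampleRate;
    for (std::size_t i = 0; i < frameCount; ++i) {
        samples[i] = static_cast<float>(amplitude * std::sin(phase));
        phase += phaseIncrement;
        if (phase >= twoPi) phase -= twoPi;   // keep the phase bounded for long tones
    }
    return samples;
}

// Example: a 1 kHz test tone at -20 dBFS (amplitude = 10^(-20/20) = 0.1), one second long:
// std::vector<float> tone = makeTone(1000.0, 0.1, 44100.0, 1.0);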

Related

kAudioUnitSubType_VoiceProcessingIO causes the volume to step down

I'm using an AudioUnit to play and record. When I use kAudioUnitSubType_VoiceProcessingIO, the sound is lower than with RemoteIO. Why, and how can I fix this?
It's lower because the voice-processing algorithm and audio filters need some dynamic range, or headroom, to turn the volume up and down, or to allow frequency-response peaks above the average level. The processing therefore has to start with the volume lower so there is room to go up.
The way to change it is to not use Audio Unit voice processing.
The way we got around the low volume of VoiceProcessingIO is to use an additional Compressor Audio Unit and control the gain from there. If you do so, don't forget to disable the AGC property of kAudioUnitSubType_VoiceProcessingIO.
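For reference, turning the AGC off is done through an Audio Unit property. Here is a hedged sketch of that call, assuming voiceUnit is an already-instantiated voice-processing I/O unit and that global scope with element 0 fits your graph:

#include <AudioUnit/AudioUnit.h>

// Disable the built-in automatic gain control of a
// kAudioUnitSubType_VoiceProcessingIO unit. 'voiceUnit' is assumed to be
// an already-created voice-processing I/O unit.
OSStatus disableVoiceProcessingAGC(AudioUnit voiceUnit)
{
    UInt32 enableAGC = 0;   // 0 = AGC off
    return AudioUnitSetProperty(voiceUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0,        // element; adjust for your bus layout if needed
                                &enableAGC,
                                sizeof(enableAGC));
}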

How can I get the frequency value at a given time with XAudio2?

I've already loaded the .wav audio into a buffer with XAudio2 (Windows 8.1), and to play it I just have to use:
//start consuming audio in the source voice
/* IXAudio2SourceVoice* */ g_source->Start();
//play the sound
g_source->SubmitSourceBuffer(buffer.xaBuffer());
I wonder, how can I get the frequency value at a given time with XAudio2?
The question does not quite make sense as asked: a .wav file contains a great many frequencies. It is the blend of them, constantly changing, that makes it sound like music to your ears instead of just an artificially generated tone.
A signal processing step is required to convert the samples in the .wav file from the time domain to the frequency domain. Generally known as spectrum analysis, the Fast Fourier Transform (FFT) is the standard technique.
A random Google hit on "xaudio2 fft" produced this code sample. No idea how good it is, but something to play with to get the lay of the land. You'll find more about it in this gamedev question.
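To make the time-to-frequency step concrete, here is a hedged, illustrative sketch: a naive DFT over a short window of mono samples taken around the time of interest, returning the strongest frequency. It is O(N^2) and applies no window function; a real implementation would use an FFT library, and pulling the raw samples out of your XAudio2 buffer is left aside.

#include <cmath>
#include <cstddef>
#include <vector>

// Naive DFT over a window of mono samples. Returns the frequency (Hz) of
// the strongest bin. Illustrative only: O(N^2), no windowing; use an FFT
// library for real work.
double dominantFrequency(const std::vector<float>& window, double sampleRate)
{
    const std::size_t n = window.size();
    const double twoPi = 6.283185307179586;
    double bestMagnitude = 0.0;
    std::size_t bestBin = 0;
    for (std::size_t k = 1; k < n / 2; ++k) {        // skip DC, stop at Nyquist
        double re = 0.0, im = 0.0;
        for (std::size_t t = 0; t < n; ++t) {
            const double angle = twoPi * k * t / n;
            re += window[t] * std::cos(angle);
            im -= window[t] * std::sin(angle);
        }
        const double magnitude = re * re + im * im;
        if (magnitude > bestMagnitude) {
            bestMagnitude = magnitude;
            bestBin = k;
        }
    }
    return bestBin * sampleRate / n;                  // bin index -> Hz
}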

Night Vision Video: Occupancy Detection using OpenCV C++

Using a Raspberry Pi NoIR camera, I am trying to do occupancy detection with OpenCV for a shared office space.
In order to identify vacant spots, I use an image of the empty office space as a background model. Once I have the background model, I subtract the live video frame from it (using cvAbsDiff) and identify the changes.
I am capturing the images with a webcam mounted on the roof. The problem is identifying a person from this angle. When I work with live video, there are subtle variations (changes in chair position, moved objects, or camera variations) that act as noise. I tried BackgroundSubtractorMOG and MOG2, but they can only compute differences from a video stream, not against a single reference image.
Any input on possible directions will be greatly appreciated. I will be happy to share the technique that works once I have a working solution. Thanks!
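No answer is posted here, but as one possible direction, here is a minimal sketch of the frame-differencing step with the modern C++ API (cv::absdiff rather than the legacy cvAbsDiff), assuming grayscale inputs; the blur size, threshold, and minimum area are illustrative placeholders, not tuned values:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Compare a live frame against a fixed background image and return the
// bounding boxes of regions that changed. Threshold and kernel sizes are
// placeholders, not tuned for any particular camera.
std::vector<cv::Rect> detectChanges(const cv::Mat& backgroundGray,
                                    const cv::Mat& frameGray)
{
    cv::Mat diff, mask;
    cv::absdiff(backgroundGray, frameGray, diff);              // |frame - background|
    cv::GaussianBlur(diff, diff, cv::Size(5, 5), 0);           // suppress pixel noise
    cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);     // keep large changes
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> regions;
    for (const auto& c : contours) {
        if (cv::contourArea(c) > 500.0)                        // ignore small blobs (chairs shifting, etc.)
            regions.push_back(cv::boundingRect(c));
    }
    return regions;
}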

Traffic Motion Recognition

I'm trying to build a simple traffic-motion monitor to estimate the average speed of moving vehicles, and I'm looking for guidance on how to do so with an open-source package such as OpenCV, or any others you might recommend for this purpose. Are there any resources that are particularly good for this problem?
The setup I'm hoping for is to install a webcam on a high-rise building next to the road in question and point the camera down onto the moving traffic. The camera would be anywhere between 20 ft and 100 ft above the ground, and the building would be anywhere between 20 ft and 500 ft away from the road.
Thanks for your input!
Generally speaking, you need a way to detect cars so you can get their 2D coordinates in the video frame. You might want to use a tracker to speed up the process and take advantage of the predictable motion of the vehicles. You also need a way to calibrate the camera so you can translate the 2D image coordinates into real-world distances and approximate speed.
So as a first step, look at detectors such as the deformable parts model (DPM) and tracking-by-detection methods. You'll probably need to port some code from Matlab (and if you do, please make it available :-) ). If that's too slow, maybe do some segmentation of foreground blobs and track their colour histograms or HOG descriptors, using a Particle Filter or a Kalman Filter to predict motion.
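To make the prediction part concrete, here is a hedged sketch of a constant-velocity Kalman filter for one tracked vehicle using OpenCV's cv::KalmanFilter; the noise covariances are placeholders, and converting pixel velocity to real speed still requires the camera calibration mentioned above.

#include <opencv2/core.hpp>
#include <opencv2/video/tracking.hpp>

// Constant-velocity Kalman filter for one vehicle: state = [x, y, vx, vy],
// measurement = [x, y] (the detector's centroid in pixels). Noise values
// are placeholders.
cv::KalmanFilter makeVehicleTracker(float x0, float y0, float dt)
{
    cv::KalmanFilter kf(4, 2, 0, CV_32F);
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, dt, 0,
        0, 1, 0,  dt,
        0, 0, 1,  0,
        0, 0, 0,  1);
    cv::setIdentity(kf.measurementMatrix);                        // we observe x and y only
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));
    kf.statePost = (cv::Mat_<float>(4, 1) << x0, y0, 0, 0);       // start at the first detection
    return kf;
}

// Per frame: predict, then correct with the new detection if one is available.
// cv::Mat prediction = kf.predict();
// cv::Mat measurement = (cv::Mat_<float>(2, 1) << detX, detY);
// kf.correct(measurement);
// The velocity estimate (pixels per frame) is in rows 2 and 3 of kf.statePost.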

Getting the real / physical / hardware sampling rate of the sound card

I need to get the real / physical / hardware (I don't know the exact word) sampling rate of my sound card.
I'm generating a high-frequency square wave and playing it with DirectSound.
I need to match the DirectSound sampling rate to the sound card's rate.
I don't want the Windows mixer to resample my sound, because the output wave would be totally unusable.
In short: how do I get the sound card's native sample rate?
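No answer is posted here, but on Vista and later one hedged option is to ask WASAPI for the shared-mode mix format of the default render device; that is the rate the Windows audio engine resamples everything to, and it normally tracks the device's configured rate. A sketch (COM is assumed to be initialized by the caller, error handling kept minimal):

#include <mmdeviceapi.h>
#include <audioclient.h>
#include <combaseapi.h>

// Return the shared-mode mix-format sample rate of the default render
// device via WASAPI, or 0 on failure. CoInitialize must already have been called.
DWORD getDefaultDeviceSampleRate()
{
    IMMDeviceEnumerator* enumerator = nullptr;
    IMMDevice* device = nullptr;
    IAudioClient* client = nullptr;
    WAVEFORMATEX* mixFormat = nullptr;
    DWORD sampleRate = 0;

    if (SUCCEEDED(CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr,
                                   CLSCTX_ALL, __uuidof(IMMDeviceEnumerator),
                                   reinterpret_cast<void**>(&enumerator))) &&
        SUCCEEDED(enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device)) &&
        SUCCEEDED(device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                                   reinterpret_cast<void**>(&client))) &&
        SUCCEEDED(client->GetMixFormat(&mixFormat)))
    {
        sampleRate = mixFormat->nSamplesPerSec;   // the rate the audio engine runs at
    }

    if (mixFormat) CoTaskMemFree(mixFormat);
    if (client) client->Release();
    if (device) device->Release();
    if (enumerator) enumerator->Release();
    return sampleRate;
}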