Real-time pitch detection using FFT [closed] - c++

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
I'm trying to do real-time pitch detection using C++. I'm testing some code from Performous (http://performous.org/), because nothing else has worked for me. I know for sure that this code works, but I just can't get it working for me. I've been trying for a few weeks now, and I haven't been able to get any pitch detection code working.

Instead of using input from the mic, you should create data of a known single frequency, run that through the program, and see whether it gives you the correct result. Then you can add harmonics to it and see if that still works. Real-world data is just too variable for initial testing.
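That advice might look like the following sketch. This is not Performous code; the 44100 Hz sample rate, the 220 Hz test tone, and the naive autocorrelation estimator are all my own choices for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

const double kPi = 3.14159265358979323846;

// Generate a test tone: a fundamental plus one weaker harmonic.
std::vector<double> make_tone(double freq, double rate, std::size_t n) {
    std::vector<double> s(n);
    for (std::size_t i = 0; i < n; ++i) {
        double t = i / rate;
        s[i] = std::sin(2.0 * kPi * freq * t)
             + 0.5 * std::sin(2.0 * kPi * 2.0 * freq * t); // 2nd harmonic
    }
    return s;
}

// Naive autocorrelation pitch estimate: search lags covering roughly the
// 80-600 Hz singing range and pick the lag with the largest correlation.
double estimate_pitch(const std::vector<double>& s, double rate) {
    std::size_t minLag = static_cast<std::size_t>(rate / 600.0);
    std::size_t maxLag = static_cast<std::size_t>(rate / 80.0);
    double bestVal = -1.0;
    std::size_t bestLag = minLag;
    for (std::size_t lag = minLag; lag <= maxLag && lag < s.size(); ++lag) {
        double sum = 0.0;
        for (std::size_t i = 0; i + lag < s.size(); ++i)
            sum += s[i] * s[i + lag];
        if (sum > bestVal) { bestVal = sum; bestLag = lag; }
    }
    return rate / static_cast<double>(bestLag);
}
```

If the estimator returns roughly 220 Hz for this synthetic input but fails on your microphone input, the problem is in your capture path rather than the detection math.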

Performous audio code has some optimizations, frequency limits and heuristics that make it only suitable for singing (and other similar tones). The optimal range is around 80-600 Hz.

C/C++/Obj-C Real-time algorithm to ascertain Note (not Pitch) from Vocal Input
Check the accepted answer on this link.
I have scoured SO for an answer to this problem, and this is the most useful resource I have found.
It appears that Performous uses this algorithm, but it's hard to make that out from the Performous code.
EDIT: I have finally managed to get a working solution. E-mail me if interested: sunfish|gmail|c0m

Related

C++ Microphone input [closed]

Closed 9 years ago.
I know there are tons of questions about this already, and I've done tons of research, so don't bother redirecting me to another question.
I need to capture the microphone stream and get data from it, such as frequency, pitch, all of that good stuff.
I have heard of DirectX Audio and OpenAL, but have not tested them because they do not look entirely like what I need.
I need direct access to the microphone.
I am starting to think I need to write a driver for this.
Please assist me with this.
Direct access to the microphone does not give you "frequency, pitch, all of that good stuff". First, for this purpose frequency and pitch are essentially the same thing. Second, they are found by processing the microphone data, not read directly from the device.
The raw microphone data consists of a sequence of periodically measured voltage samples. A "pure" sound would be a sine function, but of course there is always background noise, plus harmonics and measurement noise.
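As a minimal illustration of working on that sample stream, here is a sketch that estimates the frequency of a clean signal by counting zero crossings. The 8000 Hz rate in the usage below and the zero-crossing approach are my own assumptions; real microphone data would need filtering first:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Estimate the frequency of a clean (noise-free) signal by counting
// positive-going zero crossings and measuring the span they cover.
double zero_crossing_freq(const std::vector<double>& s, double rate) {
    std::size_t crossings = 0;
    std::size_t first = 0, last = 0;
    for (std::size_t i = 1; i < s.size(); ++i) {
        if (s[i - 1] < 0.0 && s[i] >= 0.0) {   // crossing from below
            if (crossings == 0) first = i;
            last = i;
            ++crossings;
        }
    }
    if (crossings < 2) return 0.0;
    // (crossings - 1) full periods lie between the first and last crossing.
    return (crossings - 1) * rate / static_cast<double>(last - first);
}
```

On real data this breaks down immediately (noise adds spurious crossings), which is exactly why processing such as filtering or an FFT is needed.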
The waveInOpen function is where you start if you want low-level access to data from the microphone on Windows.
Google knows about a number of "waveInOpen samples", but here's one that looks like above-average quality:
http://www.techmind.org/wave/

Fast voice recognition for limited number of commands [closed]

Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
Does anyone have experience programming voice recognition in C++ (Windows and/or Mac) for a limited number of commands, aiming for SPEED? Is it realistic at this point to achieve recognition of the command from the first syllable, i.e., the command is recognized by the time the user gets to the second syllable at a reasonably fast speaking tempo? All commands would be programmed to start with a different syllable, if necessary a radically different one (like "oo", "xy", "fay"; only some 30 commands would be required).
Similar questions have been asked, but this is a fast moving field. Would the best idea be to look for open source libraries or to interface with compiled implementations?
I'm working professionally in this field, and I seriously doubt whether it is possible at all. C++ isn't the problem; the question is whether a computer allows it. The error rate on small sound clips is large; it's the Hidden Markov Model that fixes up recognition. But in your case you simply can't feed it enough data.
Not that humans can do it either. Speech processing isn't as instant as your brain makes you believe.
You can do this with CMUSphinx, using the Pocketsphinx decoder.
The partial hypothesis is available during the decoding process, and you can usually get the first syllable as soon as it is uttered. If you give it 0.1 s to stabilize (not visible to the user), you will get accurate results on a command set.
There are even tools built on top of CMUSphinx specifically designed for real-time control, for example in games; you can check InProTK and their demonstrations.

Weka analysis and predictions [closed]

Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I am doing a project that has several inputs and several outputs at the same time.
Is there a way to do a prediction for more than one class at the same time in Weka?
Any help is highly appreciated.
Thank you.
Morning,
Weka only supports a single class attribute. However, and this is untested, I think you may be able to work around this. Try the following; I'm interested in what kind of accuracy you get.
E.g. data:
BloodIron  real
BloodSugar real
BloodColor real
Diabetes   bool (class)
Lifespan   real (class)
Using the data above, train a model to predict only the 'Diabetes' class. Once you have those predictions, enter them into your dataset as a regular attribute and use them to help predict your second class attribute, 'Lifespan'.
To increase accuracy, solve the class variable with the highest entropy first.
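In ARFF terms the dataset for that workaround might look like the fragment below. The relation name and data rows are made up for illustration; ARFF itself does not mark a class attribute, so you would set the class index in Weka when training each model (by default Weka often uses the last attribute):

```
@relation patients        % hypothetical name

@attribute BloodIron  numeric
@attribute BloodSugar numeric
@attribute BloodColor numeric
@attribute Diabetes   {yes,no}   % class for the first model
@attribute Lifespan   numeric    % class for the second model

@data
12.1,5.4,0.8,yes,61
9.7,4.1,0.6,no,78
```

Train the first model with Lifespan removed and Diabetes as the class; then train the second with the (predicted) Diabetes value included as an input and Lifespan as the class.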

How is it possible to create an information fetcher for a game like League of Legends? [closed]

Closed 9 years ago.
Basically what I'm wondering is: how could you get, say, a list of all mobs and champions, their HP, mana, etc. through programming? I know this is possible because it has been done before, but I just can't see how you would be able to do it. Is looking at the assembler code necessary, or can you do it some other way? I'm mostly wondering about the theory behind it. (Using C++, if that helps at all.)
Such things are usually done by crawling (e.g. retrieving the data from the web pages provided by Riot Games, which might be partially outdated) or by reverse engineering the game client's files to get the data (which might not contain everything). Either way, you get datasets which you'll have to read or interpret (look for values, or replicate the way the game client reads and interprets the data).
I'm not sure whether there are any tools or APIs released somewhere; at least I haven't heard of anything officially supported or endorsed. Most of this is essentially in a gray zone, usage-wise.

EEG blink detection algorithm for neurosky mindwave? [closed]

Closed 9 years ago.
This might be a long shot, but does anyone know how I can implement a blink detection algorithm using the EEG raw data? I've tried just detecting a spike in brain activity, but that also gives a false positive whenever the electrode moves on my forehead.
Eye blinks in EEG data aren't actual brain waves; they are artifacts caused by the potential difference between the cornea and the retina. According to Wikipedia, they usually show up in the 4-7 Hz or 8-13 Hz range.
They are detectable at Fp1 and Fp2, the electrode sites closest to the eyes.
Here is a useful paper about removing the artifact in question.
You might also want to look into Independent Component Analysis (ICA) and regression analysis. This is something that's going to take a lot of research on your part.
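As a naive starting point only, a sketch of an amplitude-plus-duration check. The 512 Hz rate in the test, the threshold, and the 50-400 ms blink-duration window are made-up assumptions, and this does not solve the electrode-movement false positives described above; for that, look at ICA or regression-based artifact removal:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Naive blink counter: flag samples whose deviation exceeds a threshold,
// and accept a run of flagged samples only if it lasts a blink-like
// duration (roughly 50-400 ms). Runs that are too short (spike noise)
// or too long (slow drift, e.g. electrode movement) are rejected.
std::size_t count_blinks(const std::vector<double>& eeg, double rate,
                         double threshold) {
    std::size_t blinks = 0, run = 0;
    const std::size_t minRun = static_cast<std::size_t>(0.05 * rate);
    const std::size_t maxRun = static_cast<std::size_t>(0.4 * rate);
    for (double v : eeg) {
        if (std::fabs(v) > threshold) {
            ++run;
        } else {
            if (run >= minRun && run <= maxRun) ++blinks;
            run = 0;
        }
    }
    if (run >= minRun && run <= maxRun) ++blinks;  // run ending at the buffer edge
    return blinks;
}
```

The duration gate rejects single-sample spikes, but slow electrode-movement drift of blink-like duration will still fool it, which is why the artifact-removal literature above is worth the research.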