This might be a long shot, but does anyone know how I can implement a blink detection algorithm using raw EEG data? I've tried just detecting a spike in brain activity, but it also gives false positives whenever the electrode moves on my forehead.
Eye blinks in EEG data aren't actual brain waves; they are artifacts caused by the potential difference between the cornea and the retina. According to Wikipedia, they usually fall in the 4-7 Hz or 8-13 Hz range.
They are most detectable in the Fp1 and Fp2 channels, which are closest to the eyes.
Here is a useful paper about removing the artifact in question
You might also want to look into Independent Component Analysis (ICA) and regression analysis. This is something that's going to take a lot of research on your part.
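If you want a starting point before diving into ICA, a common first pass is to band-pass filter the frontal channel and threshold large, slow deflections; blinks have a fairly stereotyped duration (roughly 200-400 ms), which can help reject the electrode-movement spikes you mention. Here is a minimal sketch in Python/SciPy; the channel array, sampling rate, band edges and threshold are all assumptions you would have to tune for your own headset.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Assumptions: `fp1` is a 1-D NumPy array holding the raw Fp1 channel in
# microvolts, sampled at `fs` Hz. Band edges and threshold are placeholders.
fs = 256.0

def bandpass(signal, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

def detect_blinks(fp1, fs, threshold_uv=75.0, min_gap_s=0.3):
    """Return sample indices where a blink-like deflection starts."""
    # Blink artifacts are large, slow deflections, so keep only the low band.
    filtered = bandpass(fp1, 0.5, 13.0, fs)
    above = np.abs(filtered) > threshold_uv
    # Rising edges of the thresholded signal mark candidate blink onsets.
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    # Merge detections that are closer together than min_gap_s.
    blinks, last = [], -np.inf
    for idx in onsets:
        if idx - last > min_gap_s * fs:
            blinks.append(idx)
            last = idx
    return blinks
```

A cheap extra check against electrode pops is to require that the same deflection appears on both Fp1 and Fp2 at the same time before counting it as a blink, and only then fall back on ICA or regression for cleaner separation.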
I am doing a project that has several inputs and several outputs at the same time.
Is there a way to do a prediction for more than one class at the same time in Weka?
Any help is highly appreciated, thank you.
Morning,
Weka only supports a single class attribute. However (untested), I think you may be able to work around this. Try the following; I'm interested in what kind of accuracy you get.
e.g. data:
BloodIron real
BloodSugar real
BloodColor real
Diabetes bool class
Lifespan real class
Using the data above, train a model to solve only the 'Diabetes' class. Once you have its predictions, add them to your dataset as a new attribute and use them to solve your second 'Lifespan' class attribute.
To increase accuracy, solve the class attribute with the highest entropy first.
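Weka itself is Java-based, but just to make the chaining workaround concrete, here is a small sketch of the same idea in Python with scikit-learn (the data, attribute names and model choices are all made up for illustration): train on the original attributes for the first class, then feed that model's predictions back in as an extra attribute when training for the second class.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Made-up data matching the attributes above: columns are BloodIron,
# BloodSugar, BloodColor; targets are Diabetes (bool) and Lifespan (real).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
diabetes = (X[:, 1] > 0.5).astype(int)                   # first class attribute
lifespan = 80 - 10 * diabetes + rng.normal(size=200)     # second class attribute

# Step 1: train on the original features to predict Diabetes only.
clf = RandomForestClassifier(random_state=0).fit(X, diabetes)
diabetes_pred = clf.predict(X)

# Step 2: append the Diabetes prediction as an extra input attribute,
# then train a second model for Lifespan.
X_augmented = np.column_stack([X, diabetes_pred])
reg = RandomForestRegressor(random_state=0).fit(X_augmented, lifespan)

print(reg.predict(X_augmented[:5]))
```

In practice you would want to use cross-validated predictions for the first class rather than in-sample ones, otherwise the second model sees an over-optimistic version of the Diabetes attribute.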
This may be a repost, but I thought I'd open a topic myself since nothing has really answered my question so far.
Okay, so if I attach a webcam to a robot, is it possible to use the webcam to determine which way the robot is moving (forward, back, turning left, turning right)? I am doing a project that requires me to detect the alignment of the robot down a hallway using a webcam.
Is this possible?
You can do this with optical flow, which is implemented in OpenCV.
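As a rough sketch of what that looks like, the snippet below computes dense Farneback flow between consecutive webcam frames and uses the mean horizontal flow as a turning cue; the threshold and the left/right sign convention are assumptions to verify against your own camera mounting. Forward motion instead shows up as flow spreading outward from the image centre, so you would look at the flow's divergence rather than its mean to tell forward from backward.

```python
import cv2
import numpy as np

TURN_THRESH = 2.0  # pixels per frame; assumed value, tune for your setup

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between the previous and current frame.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_x = float(np.mean(flow[..., 0]))  # average horizontal motion

    if mean_x > TURN_THRESH:
        print("turning left")    # scene appears to shift right as the robot pans left
    elif mean_x < -TURN_THRESH:
        print("turning right")
    else:
        print("roughly straight")

    prev_gray = gray

cap.release()
```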
I am working on a face recognition project for security purposes. The first step is to detect the faces in a still image. I have detected the faces but am not able to save them.
Saving the faces will require you to take the bounding box of each face from the detection output, crop that region out of the image, and write it to a file. Hint: have a look at OpenCV's documentation here and here.
You can detect more features such as eyes, nose, etc. in just the same way as the faces. However, you need a different trained cascade for each feature. OpenCV already provides you with a cascade for eyes (and glasses). For eyebrows, nose, etc. you'll likely need to build your own cascade, for example by following the instructions in this great blog post.
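A minimal sketch of both points, assuming the opencv-python package (which exposes the bundled cascades under cv2.data.haarcascades) and a placeholder input image name:

```python
import cv2

# Load the stock cascades shipped with OpenCV (paths are assumptions).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("people.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for i, (x, y, w, h) in enumerate(faces):
    # Crop the bounding box of each detected face and save it to disk.
    face_roi = img[y:y + h, x:x + w]
    cv2.imwrite(f"face_{i}.png", face_roi)

    # Detect eyes only inside the face region, using the dedicated cascade.
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    print(f"face {i}: {len(eyes)} eye(s) found")
```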
Basically, what I'm wondering is: how could you get a list of all mobs and champions, their HP, mana, etc. programmatically? I know this is possible because it has been done before, but I just can't see how you would be able to do it. Is looking at the assembly code necessary, or can you do it some other way? I'm mostly wondering about the theory behind it. (Using C++, if that helps at all.)
Such things are usually done using crawling (e.g. retrieving the data from the web pages provided by Riot Games; these might be partially outdated) or using reverse engineering to get the data from the game client's files (which might not contain everything). Either way you end up with datasets that you have to read or interpret (look for the values, or replicate the way the game client reads and interprets the data).
I'm not sure whether there are any tools or APIs released somewhere; at least I haven't heard of anything officially supported or endorsed, and most of this is essentially in a gray zone usage-wise.
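Purely to illustrate the crawling route (the question mentions C++, but the idea is the same in any language): fetch a stats page, parse the HTML, and pull the values out of a table. Everything below, including the URL, the CSS selectors and the column layout, is a hypothetical placeholder; you would have to inspect whatever pages you actually crawl, and check their terms of use first.

```python
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/champions"  # placeholder, not a real data source

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

champions = []
for row in soup.select("table.champion-stats tr"):   # hypothetical selector
    cells = [c.get_text(strip=True) for c in row.find_all("td")]
    if len(cells) >= 3:
        name, hp, mana = cells[:3]                    # hypothetical column order
        champions.append({"name": name, "hp": hp, "mana": mana})

print(champions[:5])
```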
I'm trying to do real-time pitch detection using C++. I'm testing some code from Performous (http://performous.org/), because nothing else has worked for me. I know for sure that this code works in Performous, but I just can't get it to work in my own project. I've been trying this for a few weeks now, and I haven't been able to get any pitch detection code working.
Instead of using input from the mic, you should create data of a known single frequency, run that through the program, and see if it gives you the correct result. Then you can add harmonics to it and see if that still works. Real-world data is just too variable for initial testing.
The Performous audio code has some optimizations, frequency limits and heuristics that make it suitable only for singing (and other similar tones). The optimal range is around 80-600 Hz.
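To follow the advice above about testing with a known signal first (the question is about C++, but the idea is easiest to show with a short Python/NumPy sketch): synthesize a tone of known frequency plus a couple of harmonics, then check that a plain autocorrelation estimator recovers it. The sample rate, duration and the 80-600 Hz limits are assumed values.

```python
import numpy as np

fs = 44100        # sample rate, assumed
duration = 0.1    # seconds of test signal
f0 = 440.0        # known test frequency

# Synthetic tone with two harmonics, standing in for real microphone input.
t = np.arange(int(fs * duration)) / fs
signal = (np.sin(2 * np.pi * f0 * t)
          + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
          + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

def estimate_pitch(x, fs, fmin=80.0, fmax=600.0):
    """Very plain autocorrelation pitch estimate, limited to fmin..fmax."""
    x = x - np.mean(x)
    corr = np.correlate(x, x, mode="full")[len(x) - 1:]
    lag_min = int(fs / fmax)
    lag_max = int(fs / fmin)
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return fs / lag

print(f"estimated pitch: {estimate_pitch(signal, fs):.1f} Hz")  # ~440 Hz
```

Once that returns roughly 440 Hz, you can start worrying about real microphone input, noise and windowing.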
C/C++/Obj-C Real-time algorithm to ascertain Note (not Pitch) from Vocal Input
Check the accepted answer on this link.
I have scoured SO for an answer to this problem, and this is the most useful resource I have found.
It appears that Performous uses this algorithm, but it's hard to make out from the Performous code.
EDIT: I have finally managed a working solution. E-mail me if interested: sunfish|gmail|c0m