How to track a moving sphere's path? [closed] - c++

I'm working on a project in C++ and OpenGL in Visual Studio 2010, and I have some spheres moving around the screen. How can I track their movement and draw a line behind them as they move?

Store their positions in a vector over time, and each frame draw the points from that vector with GL_LINE_STRIP.
Check out an OpenGL tutorial if you don't know how to draw the lines themselves.
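A minimal sketch of that idea, assuming a fixed-function, immediate-mode setup to match a Visual Studio 2010 era project (the Point3 type, trail cap, and function names below are made up for illustration):

```cpp
#include <GL/glut.h>
#include <vector>
#include <cstddef>

// Hypothetical types and names: the real project will have its own
// sphere/vector types; these are placeholders for illustration.
struct Point3 { float x, y, z; };

std::vector<Point3> trail;          // positions recorded so far
const std::size_t kMaxTrail = 500;  // assumed cap so the trail doesn't grow forever

// Call once per frame, after the sphere has moved, with its new centre.
void recordPosition(float x, float y, float z)
{
    Point3 p = { x, y, z };
    trail.push_back(p);
    if (trail.size() > kMaxTrail)
        trail.erase(trail.begin()); // drop the oldest point
}

// Call from the display function, alongside drawing the sphere itself.
void drawTrail()
{
    glBegin(GL_LINE_STRIP);
    for (std::size_t i = 0; i < trail.size(); ++i)
        glVertex3f(trail[i].x, trail[i].y, trail[i].z);
    glEnd();
}
```

One trail vector per sphere, filled from wherever you update that sphere's position, gives each sphere its own path behind it.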

Related

OpenGL - Low FPS on simple 3d game [closed]

I am developing a 3D game, but already in one of my first tests, with very few calculations, I get around 3 or 4 FPS.
The following is my entire code: http://pastebin.com/j2DWPS6Z
This is the Terrain.cpp file I used in the main code: http://pastebin.com/d1gnE5KH
Looking at the code I use for drawing, I am only drawing around 400 polygons. As far as I know, that should not drop the frame rate to 3 or 4 FPS.
The computer I am using is an HP EliteBook 8570w with 8 GB of RAM and an Intel Core i7, so the hardware is not the problem.
Does anyone know what I am doing wrong that makes the FPS this low?
I think the problem could be the call to glFlush() in the inner loop of drawTerrain(). You do not usually need to call this function at all, least of all from an inner loop. Try simply removing it.
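For illustration only (the real drawTerrain() is in the pastebin links above and is not reproduced here), the structure to aim for is a draw function with no glFlush() inside its loops and at most one flush, or buffer swap, per frame. All names and sizes below are placeholders:

```cpp
#include <GL/glut.h>

// Placeholder data: a 21x21 grid gives roughly 400 quads, similar to
// the polygon count mentioned in the question.
const int TERRAIN_SIZE = 21;
float height[TERRAIN_SIZE][TERRAIN_SIZE];

void drawTerrain()
{
    for (int z = 0; z < TERRAIN_SIZE - 1; ++z) {
        glBegin(GL_TRIANGLE_STRIP);
        for (int x = 0; x < TERRAIN_SIZE; ++x) {
            glVertex3f((float)x, height[z][x],     (float)z);
            glVertex3f((float)x, height[z + 1][x], (float)(z + 1));
        }
        glEnd();
        // note: no glFlush() anywhere in these loops
    }
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawTerrain();
    glutSwapBuffers(); // implicitly flushes; once per frame is enough
}
```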

OpenCV to detect movement [closed]

This may be a repost, but I thought I would open a topic myself since nothing I've found really answers my question so far.
Okay, so if I attach a webcam to a robot, is it possible to use the webcam to determine which way the robot is moving (forward, backward, turning left, turning right)? I am doing a project that requires me to detect the alignment of the robot as it moves down a hallway, using a webcam.
Is this possible?
You can do this with optical flow. It is implemented in OpenCV.
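As a rough sketch, here is what that can look like with OpenCV's dense Farneback flow (cv::calcOpticalFlowFarneback). The mapping from average horizontal flow to "turning left/right" is a naive assumption you would need to tune and calibrate, not an established method:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);                 // the robot's webcam
    cv::Mat frame, gray, prevGray, flow;

    if (!cap.isOpened() || !cap.read(frame)) return 1;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Dense optical flow between consecutive frames.
        cv::calcOpticalFlowFarneback(prevGray, gray, flow,
                                     0.5, 3, 15, 3, 5, 1.2, 0);

        // Average horizontal flow as a crude cue: when the camera turns,
        // most of the scene appears to shift sideways in the image.
        cv::Scalar meanFlow = cv::mean(flow);
        double dx = meanFlow[0];             // mean x displacement in pixels
        if (dx > 1.0)        std::cout << "scene shifting right\n";
        else if (dx < -1.0)  std::cout << "scene shifting left\n";
        else                 std::cout << "roughly straight\n";

        prevGray = gray.clone();
    }
    return 0;
}
```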

C++ program to extract the eyes, nose features of a detected face using OpenCV [closed]

I am working on a face recognition project for security purposes. The first step is to detect the faces in a still image. I have detected the faces, but I am not able to save them.
Saving the faces will require you to compute the bounding box of each face from the detection output, crop it, and then write it to a file. Hint: have a look at OpenCV's documentation here and here.
You can detect further features such as eyes, nose... in just the same way as the faces. However, you need a different trained cascade for each feature. OpenCV already ships with a cascade for eyes (and glasses). For eyebrows, nose... you'll likely need to build your own cascade, for example by following the instructions in this great blog post.
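A minimal sketch of both steps using OpenCV's C++ cascade API; the cascade file names are the stock ones shipped with OpenCV, while the input image and output file names are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <string>

int main()
{
    cv::Mat img = cv::imread("people.jpg");          // placeholder input image
    if (img.empty()) return 1;

    cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
    cv::CascadeClassifier eyeCascade("haarcascade_eye.xml");
    if (faceCascade.empty() || eyeCascade.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    // 1. Detect faces and save each bounding box to its own file.
    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3);
    for (std::size_t i = 0; i < faces.size(); ++i) {
        cv::Mat faceRoi = img(faces[i]);             // crop via the Rect
        cv::imwrite("face_" + std::to_string(i) + ".png", faceRoi);

        // 2. Look for eyes only inside the face region, with the eye cascade.
        std::vector<cv::Rect> eyes;
        eyeCascade.detectMultiScale(gray(faces[i]), eyes, 1.1, 3);
        for (std::size_t j = 0; j < eyes.size(); ++j) {
            cv::Mat eyeRoi = faceRoi(eyes[j]);       // eye Rects are relative to the face
            cv::imwrite("face_" + std::to_string(i) + "_eye_" +
                        std::to_string(j) + ".png", eyeRoi);
        }
    }
    return 0;
}
```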

How to disable/exclude unneeded operations while using tesseract? [closed]

Suppose I need to recognize a single word, or even a single letter, so I do not need Tesseract to find rows of text, deskew them, split them into words, and so on.
How do I run the recognition process with these steps excluded?
The only option I see is to set a rectangle, but that alone does not guarantee that all of those steps are skipped.
Setting a rectangle and the page segmentation mode should do it.
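Roughly like this with the Tesseract C++ API: PSM_SINGLE_WORD (or PSM_SINGLE_CHAR for a single letter) tells Tesseract to skip page and line segmentation, and SetRectangle limits recognition to the region you care about. The file name and coordinates below are placeholders:

```cpp
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <iostream>

int main()
{
    tesseract::TessBaseAPI api;
    if (api.Init(NULL, "eng")) return 1;   // default language data path

    // Treat the image as a single word, skipping layout analysis.
    api.SetPageSegMode(tesseract::PSM_SINGLE_WORD);

    Pix* image = pixRead("word.png");      // placeholder file name
    if (!image) return 1;
    api.SetImage(image);

    // Optional: restrict recognition to a known region (placeholder coords).
    api.SetRectangle(10, 10, 200, 60);

    char* text = api.GetUTF8Text();
    std::cout << text << std::endl;

    delete[] text;
    api.End();
    pixDestroy(&image);
    return 0;
}
```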

EEG blink detection algorithm for NeuroSky MindWave? [closed]

This might be a long shot, but does anyone know how I can implement a blink detection algorithm using the raw EEG data? I've tried just detecting a spike in activity, but that also gives a false positive whenever the electrode moves on my forehead.
Eye blinks in EEG data aren't actual brain waves; they are artifacts caused by the potential difference between the cornea and the retina. According to Wikipedia they usually fall in the 4-7 Hz or 8-13 Hz range.
They are most detectable at the Fp1 and Fp2 positions, which are closest to the eyes.
Here is a useful paper about removing the artifact in question.
You might also want to look into Independent Component Analysis (ICA) and regression analysis. This is something that's going to take a lot of research on your part.
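As a very rough illustration of the thresholding idea only (the sampling rate, amplitude threshold, and durations below are assumptions you would have to tune for your device, and this is nothing like as robust as ICA or regression over multiple channels):

```cpp
#include <vector>
#include <cstddef>
#include <cmath>

// Hypothetical sketch: count blink-like deflections in one raw EEG channel
// (e.g. a forehead electrode near the Fp positions). All numbers are
// placeholders, not values from the literature.
const double kSampleRateHz   = 512.0;  // assumed raw sampling rate
const double kAmplitudeThr   = 150.0;  // deviation from baseline, raw units
const double kMinDurationSec = 0.1;    // blinks last on the order of 100-400 ms
const double kMaxDurationSec = 0.5;    // much longer deflections look like electrode drift

int countBlinks(const std::vector<double>& samples)
{
    // Crude baseline: mean of the whole buffer.
    double baseline = 0.0;
    for (std::size_t i = 0; i < samples.size(); ++i) baseline += samples[i];
    if (!samples.empty()) baseline /= samples.size();

    int blinks = 0;
    std::size_t runStart = 0;
    bool inRun = false;

    for (std::size_t i = 0; i < samples.size(); ++i) {
        bool above = std::fabs(samples[i] - baseline) > kAmplitudeThr;
        if (above && !inRun) { inRun = true; runStart = i; }
        if (!above && inRun) {
            inRun = false;
            double durSec = (i - runStart) / kSampleRateHz;
            // Only count deflections with a blink-like duration; very long
            // ones are treated as movement or drift, not a blink.
            if (durSec >= kMinDurationSec && durSec <= kMaxDurationSec)
                ++blinks;
        }
    }
    return blinks;
}
```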