frisbee trajectory [closed] - c++

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
This is my first post. I'm the lead programmer on a FIRST robotics team, and this year's competition is about throwing Frisbees. I was wondering if there was some sort of "grand unified equation" for the trajectory that takes into account the air resistance, initial velocity, initial height, initial angle, etc. Basically, I would like to acquire data from an ultrasonic rangefinder, the encoders that determine the speed of our motors, the angle of our launcher, the rotational force (should be pretty constant. We'll determine this on our own) and the gravitational constant, and plug it into an equation in real time as we're lining up shots to verify/guesstimate whether or not we'll be close. If anyone has ever heard of such a thing, or knows where to find it, I would really appreciate it! (FYI, I have already done some research, and all I can find are a bunch of small equations for each aspect, such as rotation and whatnot. It'll ultimately be programmed in C++). Thanks!

I'm a mechanical engineer who writes software for a living. Before moving to tech startups, I worked for Lockheed Martin writing simulation software for rockets. I've got some chops in this area.
My professional instinct is that there is no such thing as a "grand unified equation". In fact, this is a hard enough problem that even correct theoretical models may not serve you well: for instance, one of your equations will have to be the lift generated by the frisbee, which will depend on its cross-section, speed, angle of attack, and assumptions about the properties of the air. Unless you're going to put your frisbee in a wind tunnel, this equation will be, at best, an approximation.
It gets worse in the real world: will you be launching the frisbee where there is wind? Then you can kiss your models goodbye, because as casual frisbee players know, the wind is a huge disturbance. Your models can be fine, but the real world can be brutal to them.
The way this complexity is handled in the real world is that almost all systems have feedback: a pilot can correct for the wind, or a rocket's computer removes disturbances from differences in air density. Unless you put a microcontroller with control surfaces on the frisbee, you're just not going to get far with your open loop prediction - which I'm sure is the trap they're setting for you by making it a frisbee competition.
There is a solid engineering way to approach the problem: give Newton the boot and do the physics yourselves, empirically.
This is the empirical modeling process: launch a frisbee across a matrix of pitch and roll angles, launch speeds, frisbee spin speeds, etc... and backfit a model to your results. This can be as easy as linear interpolation of the results of your table, so that any combination of input variables can generate a prediction.
It's not guess and check, because you populate your tables ahead of time, so you can make some sort of prediction about the results. You will get much better information faster than with the idealized models, though you will have to keep going to fetch your frisbee :)

Related

speed of (vector of pairs) vs (pair of vectors) C++ [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 9 years ago.
Background:
I have finished writing a game of Hex as a homework assignment. I implemented the board using a 2D vector of nodes, and used two vectors to keep track of the x and y coordinates of a node's neighbors. The path-finding algorithm I used to determine a winner was similar to Dijkstra's.
I realize a disadvantage of using two vectors is that they must always be kept in sync, but I am asking about speed. I also realize that a faster way to implement the board is probably a 1D vector (which I realized halfway into finishing the program).
Question: In terms of raw speed, would the path-finding algorithm run faster with two vectors to keep track of (x, y), or would it run faster with a vector of pairs?
Choose whichever is more suitable for your needs.
You should not be concerned about performance at this stage of software design.
What is much more important is to choose the data structure you can best work with.
Once you do that, the performance benefit will probably come along for free.
aoeu has it right: first worry about nice representation.
Second, if you worry about the game being slow, profile. Find problematic areas and worry about those.
That said, a bit about speed:
Memory access is fastest when it is sequential. Jumping around is bad. Accessing values one after the other is good.
The question of whether a vector of pairs (more generally, vector of structs, VoS) or a pair of vectors (struct of vectors, SoV) is faster depends entirely on your access pattern. Do you access the coordinate pairs together, or do you first process all x and then all y values? The answer will most probably show the faster data layout.
That said, "most probably" means squat. Profile, profile, profile.
My intuition tells me that a vector of pairs would be faster, but you probably need to test it. Create a test program that inserts a few million points into both formats and time which is faster. Then time which format is faster to extract stored data.
First, aoeu is on point.
Second, regarding optimization in general:
The first step of optimization is measurement, otherwise you don't have a baseline to compare improvements against.
The next step is to use some sort of profiler to see where your code spends its cycles/memory/etc.; it might not be where you think.
When you've done those two you can begin to work on optimizing the right parts of your code in the right way.
In addition to my comment: if your implementation runs slower as the game progresses (I'm guessing for the AI part of it), it is probably because you are running Dijkstra's after each move. As the game gets larger with more moves on the board, the performance of the AI deteriorates.
A suggestion would be to use a disjoint-set method rather than Dijkstra's to determine a winner. The advantage of a disjoint-set over Dijkstra's is that not only does it use less memory, but it does not become slower as the game progresses, so you are able to run it after each player makes a move. See Disjoint-set (Wikipedia) and Union-find (Princeton).
I realize that changing the implementation of a key part of a project is a daunting task, because it almost requires DOING IT ALL OVER AGAIN, but this is just a suggestion, and if you are worried about the speed of the AI, it is definitely worth looking into. There are also other methods for implementing an AI, such as minimax trees and alpha-beta pruning (an improvement on minimax trees).
Gl.

Computer Vision: Image Comparison and Counting? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I was curious if it would be possible to count the number of things in a picture, let's say the number of ducks, by first taking a sample picture and then seeing where it matched in a separate picture. So to clarify, we would have 2 pictures (one picture with a single duck, and one picture with 4 ducks for the sake of the argument) and the program would see how many matches it could make in the 4 duck picture by overlaying the one duck picture--thereby counting how many ducks there are in the picture. I've been reading up on computer vision a little bit, and I know that opencv's site talked about using a Fourier transform to break an image into its magnitude and phase. I was hoping to possibly take the magnitude of the one duck picture into a matrix and then compare it to a series of matrices from the four duck picture.
I imagine this would be quite difficult, seeing as how I would have to somehow tell the program the shape of the initial duck and then store that duck's broken down image information into a matrix and then compare that to matrices broken down from the other picture. Any ideas/suggestions? I thought this would be a good learning experience, since I'm an electrical engineering student and I learned Fourier Transforms, DFTs, etc. last semester--it'd just be cool to actually apply them to something.
You are talking about object recognition, one of the fundamental problems in computer vision. Your main idea (take a picture of the object, extract some features from it, then find the same set of features in another image) is correct. However, pixel-by-pixel comparison, whether in the time or frequency domain, is very error-prone and normally gives poor results. In most cases higher-level features give much better results.
To get started, take a look at Cascade Classifier in OpenCV which uses Haar-like features (small rectangles with particular gray level). It is most well-known for face detection and recognition, but can also be trained for other objects.
You may also be interested in SURF method, which searches for points with similar characteristics, or even AAMs, which try to model shape and appearance of an object.

Bullet vs Newton Game Dynamics vs ODE physics engines [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I am trying to pick a physics engine for a simple software application. It would be to simulate a rather small number of objects so performance isn't a huge concern. I am mostly concerned with the accuracy of the motion involved. I would also like the engine to be cross-platform between windows/linux/mac and usable with c++ code. I was looking at Bullet, Newton Game Dynamics, and ODE because they are open source. However, if Havok/PhysX are significantly more accurate I would consider those too.
All I seem to find are opinions on the engines; are there any thorough comparisons between the options? Or does anyone have experience trying the various engines out? Since what I'm trying to do is relatively simple, there probably isn't a huge difference between them, but I'd like to hear what people have to say about the options. Thanks!
There is a nice comparison of ODE and Bullet here:
http://blog.wolfire.com/2010/03/Comparing-ODE-and-Bullet
Hope it can be useful in making a choice.
Although it is a bit dated, there is a comprehensive comparison of (in alphabetical order) Bullet, JigLib, Newton, ODE, PhysX, and others available here:
http://www.adrianboeing.com/pal/papers/p281-boeing.pdf
The comparison considers integrators, friction models, constraint solvers, collision detection, stacking, and computational performance.
Sorry, but you will never find a real comparison with respect to accuracy. I have been searching for three months now for my master's thesis and have not found one. So I started to do the comparison on my own, but it's still a long way to go. I'm testing 3D engines and even 2D engines, and for now Chipmunk is the one with the highest accuracy. So if you have no need for 3D, I would recommend it. However, if you have an urgent need for 3D and your problem is as simple as you described (you don't want to expand it in the future?), Bullet and ODE will do. I would prefer Bullet because it is much more up to date and still actively maintained. Lastly, there is Newton, with which I am fighting right now; I can't give you pros and cons yet, except that it is a bit more work to get familiar with because of the (crucially) bad documentation.
Hope that helps. Best regards.
One thing I found really valuable in ODE is the ability to change pretty much every single parameter 'on the fly'. As an example, the engine doesn't seem to complain if you modify the inertia or even the shape of a body. You could replace a sphere with a box and everything would just keep working, or change the size of the sphere.
Other engines are not as flexible usually, because they do a lot of work internally for optimization purposes.
As for accuracy, as far as I know, ODE still supports a very accurate (but slow) solver which is obviously not very popular in the games industry because you can't play around with more than 25-30 objects in real time. Hope this helps.
Check out Simbody, which is used in engineering. It's particularly good for simulating articulated bodies. It has been used for more than 5 years to simulate human musculoskeletal dynamics. It's also one of the physics engines used in Gazebo, a robot simulation environment.
https://github.com/simbody/simbody
http://nmbl.stanford.edu/publications/pdf/Sherm2011.pdf
The Physics Abstraction Layer (PAL) supports a large number of physics engines via a unified API, making it easy to compare engines for your situation.
PAL provides a unique interface for these physics engines:
Box2D (experimental)
Bullet
Dynamechs (deprecated)
Havok (experimental)
IBDS (experimental)
JigLib
Meqon (deprecated)
Newton
ODE
OpenTissue (experimental)
PhysX (a.k.a. Novodex, Ageia PhysX, nVidia PhysX)
Simple Physics Engine (experimental)
Tokamak
TrueAxis
According to the December 2007 paper linked in this answer:
Of the open source engines, the Bullet engine provided the best results overall, outperforming even some of the commercial engines. Tokamak was the most computationally efficient, making it a good choice for game development; however, TrueAxis and Newton performed well at low update rates. For simulation systems, the most important property of the simulation should be determined in order to select the best engine.
Here is a September 2007 demo by the same author:
https://www.youtube.com/watch?v=IhOKGBd-7iw

Tips for an AI for a 2D racing game [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I have a school project to build an AI for a 2D racing game in which it will compete with several other AIs.
We are given a black-and-white bitmap image of the racing track, and we are allowed to choose basic stats for our car (handling, acceleration, max speed and brakes) after we receive the map. The AI connects to the game's server and sends it numbers for the current acceleration and steering several times a second. The language I chose is C++, by the way.
The question is
What is the best strategy or algorithm (since I want to try and win)? I currently have in mind some ideas found on the net and one or two of my own, but before I start to code, I would like to make sure my approach is one of the best.
What good books are there on that matter?
What sites should I refer to?
There's no "right answer" for this problem - it's pretty open-ended and many different options might work out.
You may want to look into reinforcement learning as a way of trying to get the AI to best determine how to control the car once it's picked the different control statistics. Reinforcement learning models can train the computer to try to work toward a good system for making particular maneuvers in terms of the underlying control system.
To determine what controls you'll want to use, you could use some flavor of reinforcement learning, or you may want to investigate supervised learning algorithms that can play around with different combinations of controls and see how good of a "fit" they give for the particular map. For example, you might break the map apart into small blocks, then try seeing what controls do well in the greatest number of blocks.
In terms of plotting out the path you'll want to take, A* is a well-known algorithm for finding shortest paths. In your case, I'm not sure how useful it will be, but it's the textbook informed search algorithm.
For avoiding opponent racers and trying to drive them into trickier situations, you may need to develop some sort of opponent modeling system. Universal portfolios are one way to do this, though I'm not sure how useful they'll be in this instance. One option might be to develop a potential field around the track and opponent cars to help your car try to avoid obstacles; this may actually be a better choice than A* for pathfinding. If you're interested in tactical maneuvers, a straightforward minimax search may be a good way to avoid getting trapped or to find ways to trap opponents.
I am no AI expert, but I think the above links might be a good starting point. Best of luck with the competition!
What good books are there on that matter?
The best book I have read on this subject is "Programming Game AI by Example" by Mat Buckland. It has chapters on both path planning and steering behaviors, and much more (state machines, graph theory, the list goes on).
All the solutions above are good, and people have gone to great lengths to test them out. Look up "Togelius and Lucas" or "Loiacono and Lanzi". They have tried things like neuroevolution, imitation (done via reinforcement learning), force fields, etc. From my point of view the best way to go is the center line: that will take an hour to implement. In contrast, neuroevolution (for example) is neither easy nor fast. I did my dissertation on that, and it can easily take several months full time, even if you have the right hardware.

Which linear programming package should I use for high numbers of constraints and "warm starts" [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
I have a "continuous" linear programming problem that involves maximizing a linear function over a curved convex space. In typical LP problems, the convex space is a polytope, but in this case the convex space is piecewise curved -- that is, it has faces, edges, and vertices, but the edges aren't straight and the faces aren't flat. Instead of being specified by a finite number of linear inequalities, I have a continuously infinite number. I'm currently dealing with this by approximating the surface by a polytope, which means discretizing the continuously infinite constraints into a very large finite number of constraints.
I'm also in the situation where I'd like to know how the answer changes under small perturbations to the underlying problem. Thus, I'd like to be able to supply an initial condition to the solver based on a nearby solution. I believe this capability is called a "warm start."
Can someone help me distinguish between the various LP packages out there? I'm not so concerned with user-friendliness as speed (for large numbers of constraints), high-precision arithmetic, and warm starts.
Thanks!
EDIT: Judging from the conversation with question answerers so far, I should be clearer about the problem I'm trying to solve. A simplified version is the following:
I have N fixed functions f_i(y) of a single real variable y. I want to find x_i (i=1,...,N) that minimize \sum_{i=1}^N x_i f_i(0), subject to the constraints:
\sum_{i=1}^N x_i f_i(1) = 1, and
\sum_{i=1}^N x_i f_i(y) >= 0 for all y>2
More succinctly, if we define the function F(y)=\sum_{i=1}^N x_i f_i(y), then I want to minimize F(0) subject to the condition that F(1)=1, and F(y) is positive on the entire interval [2,infinity). Note that this latter positivity condition is really an infinite number of linear constraints on the x_i's, one for each y. You can think of y as a label -- it is not an optimization variable. A specific y_0 restricts me to the half-space F(y_0) >= 0 in the space of x_i's. As I vary y_0 between 2 and infinity, these half-spaces change continuously, carving out a curved convex shape. The geometry of this shape depends implicitly (and in a complicated way) on the functions f_i.
As for LP solver recommendations, two of the best are Gurobi and CPLEX (google them). They are free for academic users, and are capable of solving large-scale problems. I believe they have all the capabilities that you need. You can get sensitivity information (to a perturbation) from the shadow prices (i.e. the Lagrange multipliers).
But I'm more interested in your original problem. As I understand it, it looks like this:
Let S = {1, 2, ..., N}, where N is the total number of functions, y is a scalar, and f_{i} : R^{1} -> R^{1}.
minimize    sum {i in S} x_{i} * f_{i}(0)    over the x_{i}
subject to
(1) sum {i in S} x_{i} * f_{i}(1) = 1
(2) sum {i in S} x_{i} * f_{i}(y) >= 0 for all y in (2, inf]
It just seems to me that you might want to try solving this problem as a convex NLP rather than an LP. Large-scale interior-point NLP solvers like IPOPT should be able to handle these problems easily. I strongly recommend trying IPOPT: http://www.coin-or.org/Ipopt
From a numerical point of view: for convex problems, warm-starting is not necessary with interior point solvers; and you don't have to worry about the combinatorial cycling of active sets. What you've described as "warm-starting" is actually perturbing the solution -- that's more akin to sensitivity analysis. In optimization parlance, warm-starting usually means supplying a solver with an initial guess -- the solver will take that guess and end up at the same solution, which isn't really what you want. The only exception is if the active set changes with a different initial guess -- but for a convex problem with a unique optimum, this cannot happen.
If you need any more information, I'd be pleased to supply it.
EDIT:
Sorry about the non-standard notation -- I wish I could type in LaTeX like on MathOverflow.net. (Incidentally, you might try posting this there -- I think the mathematicians there would be interested in this problem)
Ah now I see about the "y > 2". It isn't really an optimization constraint so much as an interval defining a space (I've edited my description above). My mistake. I'm wondering if you could somehow transform/project the problem from an infinite to a finite one? I can't think of anything right now, but I'm just wondering if that's possible.
So your approach is to discretize the problem for y in (2,inf]. I'm guessing you're choosing a very big number to represent inf and a fine discretization grid. Oooo tricky. I suppose discretization is probably your best bet. Maybe guys who do real analysis have ideas.
I've seen something similar being done for problems involving Lyapunov functions where it was necessary to enforce a property in every point within a convex hull. But that space was finite.
I encountered a similar problem. I searched the web and found just now that this kind of problem may be classified as "semi-infinite" programming. MATLAB has tools to solve such problems (the function fseminf), but I haven't checked this in detail. Surely other people have encountered this kind of question.
You shouldn't be using an LP solver and doing the discretization yourself. You can do much better by using a decent general convex solver. Check out, for example, cvxopt. This can handle a wide variety of different functions in your constraints, or allow you to write your own. This will be far better than attempting to do the linearization yourself.
As to warm start, it makes more sense for an LP than a general convex program. While warm start could potentially be useful if you hand code the entire algorithm yourself, you typically still need several Newton steps anyway, so the gains aren't that significant. Most of the benefit of warm start comes in things like active set methods, which are mostly only used for LP.