I have to apply a heuristic algorithm to find a minimum or maximum of a function.
I understand what heuristic means, but where can I find an algorithm to apply to, for example, the Rosenbrock function?
(C++, Java, C#, or even pseudocode would be very helpful.)
The simplest, and most obvious, solution would be to use the random walk algorithm. It works by starting at a random point within the search space and then repeatedly visiting a random neighbor.
A similar, but more sensible, algorithm is hill climbing. Again, you start at a random point, but this time you always move to the best neighbor.
Another algorithm that technically counts as a heuristic is random sampling, which just means repeatedly picking a random point from the search space and remembering the best one found so far.
An improvement over random sampling is the simulated annealing algorithm. It is a kind of fusion of random sampling and hill climbing: you start by picking random points in the search space, but as time goes on you tend to stick with the higher-quality ones.
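To give a concrete feel for it, here is a minimal C++ sketch of simulated annealing applied to the 2-D Rosenbrock function. The cooling schedule, step distribution, and starting point are illustrative choices, not tuned values:

// Simulated annealing on the 2-D Rosenbrock function.
#include <cmath>
#include <cstdio>
#include <random>

double rosenbrock(double x, double y) {
    return (1 - x) * (1 - x) + 100 * (y - x * x) * (y - x * x);
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::normal_distribution<double> jump(0.0, 0.1);  // neighbor step

    double x = -1.0, y = 1.0;  // arbitrary starting point
    double fx = rosenbrock(x, y);
    for (double T = 1.0; T > 1e-6; T *= 0.999) {  // geometric cooling
        double nx = x + jump(rng), ny = y + jump(rng);
        double fn = rosenbrock(nx, ny);
        // Always accept improvements; accept worse points with
        // probability exp(-delta / T), which shrinks as T cools.
        if (fn < fx || unit(rng) < std::exp((fx - fn) / T)) {
            x = nx; y = ny; fx = fn;
        }
    }
    std::printf("min ~ f(%f, %f) = %f\n", x, y, fx);
}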
You can find more information and pseudocode samples for all of the above on Wikipedia.
A whole different class of solutions is genetic algorithms. You can start learning about them by reading http://www.obitko.com/tutorials/genetic-algorithms/index.php. Unfortunately, it does not seem to have any code samples.
The Wikipedia article that you reference mentions adaptive coordinate descent, a state-of-the-art evolutionary algorithm, as a technique for minimizing the Rosenbrock function. Googling it turns up several papers with pseudocode and algorithms, including this one. The paper even includes a reference to actual MATLAB code.
You could also use expectation–maximization with random restarts, although that is probably significantly less efficient than adaptive coordinate descent.
You need to use the partial derivatives of the function to find its extrema. You start from a random point on the plane and use these derivatives (computed numerically) to find the direction to move for the next point. Repeat this process until you reach a minimum (confirmed by the second derivatives).
By the way, the function you are talking about is a known difficult case for this kind of iterative search, so you will need to experiment with the step size and the other parameters of your algorithm.
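To make that concrete, here is a minimal C++ sketch of this approach: gradient descent on the 2-D Rosenbrock function with partial derivatives estimated by central differences. The step size and iteration count are illustrative and need the experimentation mentioned above:

// Gradient descent with numerically estimated partial derivatives.
#include <cmath>
#include <cstdio>

double f(double x, double y) {
    return (1 - x) * (1 - x) + 100 * (y - x * x) * (y - x * x);
}

int main() {
    const double h = 1e-6;    // finite-difference width
    const double k = 1e-3;    // step size (needs tuning)
    double x = -1.2, y = 1.0; // classic hard starting point
    for (int i = 0; i < 200000; ++i) {
        double gx = (f(x + h, y) - f(x - h, y)) / (2 * h);  // df/dx
        double gy = (f(x, y + h) - f(x, y - h)) / (2 * h);  // df/dy
        x -= k * gx;  // move against the gradient
        y -= k * gy;
    }
    std::printf("f(%f, %f) = %g\n", x, y, f(x, y));  // should approach (1, 1)
}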
Background:
I have finished writing a game for a homework assignment in which we had to make a game of Hex. I implemented the board using a 2-D vector of nodes, and used two vectors to keep track of the x and y coordinates of a node's neighbors. The pathfinding algorithm I used to determine a winner was similar to Dijkstra's.
I realize the disadvantage of using two vectors is that they must always be kept in sync, but I am asking about speed. I also realize that a faster way to implement the board is probably to use a 1-D vector (which I figured out halfway through finishing the program).
Question: In terms of raw speed, would the pathfinding algorithm run faster with two vectors to keep track of (x, y), or would it run faster implemented with a vector of pairs?
Choose whichever is more suitable for your needs.
You should not be concerned about performance at this stage of software design.
What is much more important is to choose the data structure you can best work with.
Once you have done that, the performance benefit will most likely follow.
aoeu has it right: first worry about a nice representation.
Second, if you worry about the game being slow, profile. Find problematic areas and worry about those.
That said, a bit about speed:
Memory access is fastest when it is sequential. Jumping around is bad. Accessing values one after the other is good.
The question of whether a vector of pairs (more generally, vector of structs, VoS) or a pair of vectors (struct of vectors, SoV) is faster depends entirely on your access pattern. Do you access the coordinate pairs together, or do you first process all x and then all y values? The answer will most probably show the faster data layout.
That said, "most probably" means squat. Profile, profile, profile.
My intuition tells me that a vector of pairs would be faster, but you should test it. Create a test program that inserts a few million points into both formats and time which is faster to fill. Then time which format is faster for reading the stored data back.
First, aoeu is on point.
Second, regarding optimization in general:
The first step of optimization is measurement, otherwise you don't have a baseline to compare improvements against.
The next step is to use a profiler to see where your code spends the cycles/memory/other resources; it might not be where you think.
When you've done those two you can begin to work on optimizing the right parts of your code in the right way.
In addition to my comment: if your implementation runs slower as the game progresses (I'm guessing in the AI part), it is probably because you are running Dijkstra's after each move, and as the game grows with more moves on the board, the performance of the AI deteriorates.
A suggestion would be to use a disjoint-set method rather than Dijkstra's to determine a winner. The advantage of a disjoint-set over Dijkstra's is that not only does it use less memory, but it also does not become slower as the game progresses, so you can run it after each player makes a move; see the sketch below. Disjoint-set - Wikipedia, Union-find - Princeton
I realize that changing the implementation of a key part of a project is a daunting task, because it almost requires doing it all over again, but this is just a suggestion, and if you are worried about the speed of the AI, it is definitely worth looking into. There are also other methods for implementing an AI, such as a minimax tree or alpha-beta pruning (an improvement to the minimax tree).
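Here is a minimal disjoint-set (union-find) sketch; the Hex-specific wiring (unioning each placed stone with its same-colored neighbors and with two virtual edge nodes, then testing find(edge1) == find(edge2) for a win) is described only in the comments:

// Disjoint-set with path compression and union by rank.
#include <numeric>
#include <utility>
#include <vector>

struct DisjointSet {
    std::vector<int> parent, rank_;
    explicit DisjointSet(int n) : parent(n), rank_(n, 0) {
        std::iota(parent.begin(), parent.end(), 0);  // each node is its own set
    }
    int find(int a) {
        while (parent[a] != a) {
            parent[a] = parent[parent[a]];  // path compression
            a = parent[a];
        }
        return a;
    }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;
        if (rank_[a] < rank_[b]) std::swap(a, b);
        parent[b] = a;                       // attach shorter tree under taller
        if (rank_[a] == rank_[b]) ++rank_[a];
    }
};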
Good luck.
I am trying to fit some points to an inverse parabola of the form F(x) = 1/(ax^2 + bx + c).
My objective is to write a function in C++ that takes a set of 10-30 points and fits them to the inverse parabola.
I started by trying to derive an analytical expression using least squares, but I couldn't reach a result. I tried by hand (a little crazy) and then tried to solve analytically for a, b, and c, but MuPAD doesn't give me a result (I am pretty new to MATLAB's MuPAD, so maybe I am not doing it correctly).
I don't know how to approach the problem anymore.
Can I get an analytical expression for this specific problem? I have also seen algorithms for general least-squares fitting, but I don't need such a complicated algorithm; I just need one for this equation.
If not, how would Stack Overflow people approach the problem?
If needed I can post the equations and the small MuPAD code I've tried, but I think that is unnecessary.
EDIT: an example
I'm sorry the image is a little messy, but it shows what I need.
The data is in blue (this data set is particularly noisy). I need to use only the data between the vertical lines (one bunch of data on the left and another on the right).
The result of the fit is the red line.
All of this was done in MATLAB, but I need to do it in C++.
I'll try to post some data...
Edit 2: I actually did the fitting in MATLAB as follows (not the actual code):
create the linear system A x = b, with
A = [x_i^2  x_i  1]   (one row per data point)
x = [a; b; c]
b = 1 ./ y   (element-wise)
since 1/F(x) = ax^2 + bx + c is linear in the coefficients. It should work, shouldn't it? I can then solve it using the Moore-Penrose pseudoinverse computed with the SVD, can't I?
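For illustration, here is a minimal C++ sketch of that linearized fit. I'm assuming Eigen as the linear-algebra library purely as an example; any library with an SVD would do:

// Solve A * [a b c]^T = 1/y in the least-squares sense via SVD.
// Note this minimizes the error in 1/y, not in y, so noisy points
// with small |y| get amplified; a nonlinear refinement can fix that.
#include <Eigen/Dense>
#include <vector>

Eigen::Vector3d fitInverseParabola(const std::vector<double>& xs,
                                   const std::vector<double>& ys) {
    const int n = static_cast<int>(xs.size());
    Eigen::MatrixXd A(n, 3);
    Eigen::VectorXd b(n);
    for (int i = 0; i < n; ++i) {
        A(i, 0) = xs[i] * xs[i]; // column for a
        A(i, 1) = xs[i];         // column for b
        A(i, 2) = 1.0;           // column for c
        b(i) = 1.0 / ys[i];      // linearized target (assumes y != 0)
    }
    // Moore-Penrose pseudoinverse solution via SVD, as suggested above.
    return A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
}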
There is no analytic solution to this least-squares problem; it is a nonlinear minimisation problem and requires iterative methods to solve (nonlinear LS - thanks @insilico).
You could try a Newton-style iterative method (by rearranging your equation), but I don't think it will converge easily for your function, which blows up at two points!
I'd recommend using a library for this, such as a Nelder-Mead search:
http://www.codecogs.com/code/maths/optimization/nelder.php
You simply provide your error function, which is
sum( pow(F(x) - dataY(x), 2) )
and a set of initial values (a stab in the dark at the solution).
I've had good success with Nelder-Mead.
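For example, the error function for this particular model might look like the following in C++; fitError and the parameter layout are just illustrative names. A derivative-free minimizer such as Nelder-Mead only needs this function and an initial guess for (a, b, c):

// Sum of squared residuals for the model F(x) = 1/(a x^2 + b x + c).
#include <cstddef>
#include <vector>

double fitError(const double p[3],
                const std::vector<double>& xs,
                const std::vector<double>& ys) {
    double err = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        // p = {a, b, c}; assumes the denominator stays away from zero.
        double denom = p[0] * xs[i] * xs[i] + p[1] * xs[i] + p[2];
        double r = 1.0 / denom - ys[i];  // residual F(x_i) - y_i
        err += r * r;
    }
    return err;
}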
I don't think you will find a good plain-coded solution.
If I understand you correctly, you just need to know the formula for a fit for a particular data set, right?
If so, then you just need to get a curve-fitting program and fit the curve using your desired method. Then implement the formula produced by the fit.
There are a few curve fit programs out there:
Curve Expert
http://www.curveexpert.net/
Eureqa
http://creativemachines.cornell.edu/eureqa
Additionally, some spreadsheet packages may have the curve fitting facilities you need.
I would be happy to try to do a fit for you if you provide the data. No guarantees on getting the fit you want.
The findHomography() function in OpenCV finds a perspective transformation between two planes. The computed homography matrix is refined further (using inliers only in the case of a robust method) with the Levenberg-Marquardt method to reduce the re-projection error even more. Can anyone provide links to C/C++ code for the Levenberg-Marquardt algorithm that is required to minimize the error function? It would help me understand the mathematics behind the algorithm. (Everywhere on the net, only libraries or application-specific code have been posted, not the detailed code of the algorithm itself.)
I know it's not C/C++, but this file on the MATLAB File Exchange might have source code that could help you:
http://www.mathworks.com/matlabcentral/fileexchange/16063
If you understand C/C++ you will probably have no problem understanding Matlab, especially if you're looking at source code for the purposes of furthering your understanding.
The OpenCV code is open source, so check the modules. The OpenCV cheat sheet includes the Levenberg-Marquardt formulation (see column 3, page 1). You can use grep to search for that formulation or just for the name of the method:
grep -rnw . -e "Levenberg"
For example, you will discover a number of levmar*.c files in the modules/legacy folder. The reason for running LMA after solving a system of linear equations with RANSAC is that the linear equations optimize the wrong metric: an algebraic error in the parameters of the homography. A better metric is the sum of squared residuals in terms of point coordinates. In this metric the solution is non-linear and has to be found iteratively. The linear solution is used to initialize the non-linear algorithm and increase its chances of converging to the global minimum.
Seeing the code, however, is not always the easiest way to understand a complex concept. Here is an 'evolutionary' line-up of optimization algorithms that may hopefully facilitate your understanding:
1. Optimization can always be cast as minimization of a cost or of residuals.
2. If the function is convex (a bowl shape) it has only one global minimum, which can be found by sliding down. Since the gradient points uphill, we take it with a minus sign:
param[t+1] = param[t] - k * cost', where t is the iteration, k is the step size, and ' stands for the gradient (the derivative w.r.t. the parameters).
3. So far so good, but in some cases (imagine a cost function shaped like a snowboarding half-pipe) following the gradient will make you overshoot and zig-zag from one side of the valley to the other while only slowly moving down towards the global minimum. Solution: use a better approximation of the function (a Taylor expansion up to the second derivative) to arrive at
param[t+1] = param[t] - (k / cost'') * cost', where '' is the second derivative.
Intuitively, a rapidly changing function means a large second derivative, so we slow down convergence by reducing the step (dividing it by a large number), and vice versa.
4. For a quadratic error, which often represents a sum of squared residuals or distances, we can approximate the second derivative by a product of first derivatives (called Jacobians, written J, which is generally a matrix): second derivative or Hessian = J^T J, so the optimization looks like
param[t+1] = param[t] - (J^T J)^-1 * cost', which is called Gauss-Newton.
Here we dropped the step k for simplicity. Compare this with the previous formula: we are almost there! Also note that on Wikipedia and other sites you may see the residual y_i - f(x, param) appear in the formula (the gradient of a squared error is J^T times the residual), but you can ignore that for now. Go further only if you are comfortable with this last formulation.
5. Here comes Levenberg, who says: this is great but may be unstable. We assumed too much with those second derivatives, and it would be nice to damp them and make things look more like the first-derivative method from before. To do that, we add a damping factor so that the optimization looks like
param[t+1] = param[t] - (J^T J + lambda * I)^-1 * cost', where I is the identity matrix and lambda is a scalar.
Note that when lambda is large you can discard J^T J and you are left with standard gradient descent. This is done when convergence slows down; otherwise a small lambda is used, which makes the algorithm behave more like Gauss-Newton. Note that J = dz/dparam, cost' = df/dparam, and f = z^T z.
6. Here comes Marquardt, who says: great, but I'd like our gradient descent to go faster. For small diagonal(J^T J) entries I'd like a larger step, hence the final formula, where we divide by the small diagonal elements, giving faster convergence where the gradient is shallow:
param[t+1] = param[t] - (J^T J + lambda * diag(J^T J))^-1 * cost'
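To connect the final formula to code, here is a minimal sketch of a single Levenberg-Marquardt update, using Eigen as an example linear-algebra library (my choice); the residual vector and Jacobian are assumed to be computed by the caller:

// One LM update, step 6 above:
// param[t+1] = param[t] - (J^T J + lambda * diag(J^T J))^-1 * J^T r
#include <Eigen/Dense>

Eigen::VectorXd lmStep(const Eigen::VectorXd& param,
                       const Eigen::VectorXd& r,  // residuals z at param
                       const Eigen::MatrixXd& J,  // Jacobian dz/dparam
                       double lambda) {
    Eigen::MatrixXd JtJ = J.transpose() * J;    // Gauss-Newton Hessian
    Eigen::MatrixXd H = JtJ;
    H.diagonal() += lambda * JtJ.diagonal();    // Marquardt damping
    // The gradient of f = z^T z is proportional to J^T r.
    Eigen::VectorXd delta = H.ldlt().solve(-(J.transpose() * r));
    return param + delta;
}

In a complete implementation, lambda is adapted between iterations: increased when a step fails to reduce the cost (leaning towards gradient descent) and decreased when it succeeds (leaning towards Gauss-Newton).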
Here is a quick summary, using a downhill-skiing analogy, showing the expression for the parameter change param[t+1] - param[t]:
1. The gradient J points uphill, but you want to ski down, so go against the gradient: -k * J;
2. if you zig-zag in a 'snow tube', move in steps inverse to the second derivative (H) and you will hopefully go faster straight down: -H^-1 * J;
3. approximate the second derivative with the product of first derivatives: -(J^T J)^-1 * J;
4. damp it if it is too much, falling back towards the first derivative: -(J^T J + lambda * I)^-1 * J;
5. don't want to get stuck on a flat slope? Scale your step inversely to the diagonal of the approximated Hessian: -(J^T J + lambda * diag(J^T J))^-1 * J.
It is a big mystery why all of this works, since my intuition gives up after a while. Maybe it is a good time to look at it philosophically and apply the principle of Occam's razor: the less we assume, the better off we are, but if we assume too little we may not get very far. All we are trying to do is predict (assume) the global behaviour of the function by looking at it locally (imagine getting married after being together for one week). Using the Hessian is like making more assumptions, in terms of its numerous parameters, as opposed to the more conservative Jacobian, which has fewer parameters (assumptions). Finally, we may want to be flexible about being either conservative or risky, and that is what the Levenberg-Marquardt method tries to accomplish.
There are several open-source implementations of conditional random fields (CRFs) in C++, such as CRF++, FlexCRF, etc. But from the manuals I can only understand how to use them for 1-D problems such as text tagging; it is not clear how to apply them to 2-D vision problems, supposing I have computed the association potentials at each node and the interaction potentials at each edge.
Has anyone used these packages for vision problems, e.g., segmentation? Or can they simply not be used in this way?
All in all, are there any open-source CRF packages for vision problems?
Thanks a lot!
The newest version of dlib has support for learning pairwise Markov random field models over arbitrary graph structures (including 2-D grids). It estimates the parameters in a max-margin sense (i.e., using a structural SVM) rather than in a maximum-likelihood sense (i.e., as a CRF), but if all you want to do is predict a graph labeling, then either method works just as well.
There is an example program that shows how to use this stuff on a simple example graph. The example puts feature vectors at each node and the structured SVM uses them to learn how to correctly label the nodes in the graph. Note that you can change the dimensionality of the feature vectors by modifying the typedefs at the top of the file. Also, if you already have a complete model and just want to find the most probable labeling then you can call the underlying min-cut based inference routine directly.
In general, I would say that the best way to approach these problems is to define the graphical model you want to use and then select a parameter learning method that works with it. So in this case I imagine you are interested in some kind of pairwise Markov random field model. In particular, the kind of model where the most probable assignment can be found with a min-cut/max-flow algorithm. Then in this case, it turns out that a structural SVM is a natural way to find the parameters of the model since a structural SVM only requires the ability to find maximum probability assignments. Finding the parameters via maximum likelihood (i.e. treating this as a CRF) would require you to additionally have some way to compute sums over the graph variables, but this is pretty hard with these kinds of models. For this kind of model, all the CRF methods I know about are approximations, while the SVM method in dlib uses an exact solver. By that I mean, one of the parameters of the algorithm is an epsilon value that says "run until you find the optimal parameters to within epsilon accuracy", and the algorithm can do this efficiently every time.
There was a good tutorial on this topic at this year's computer vision and pattern recognition conference. There is also a good book on Structured Prediction and Learning in Computer Vision written by the presenters.
I have a "continuous" linear programming problem that involves maximizing a linear function over a curved convex space. In typical LP problems, the convex space is a polytope, but in this case the convex space is piecewise curved -- that is, it has faces, edges, and vertices, but the edges aren't straight and the faces aren't flat. Instead of being specified by a finite number of linear inequalities, I have a continuously infinite number. I'm currently dealing with this by approximating the surface by a polytope, which means discretizing the continuously infinite constraints into a very large finite number of constraints.
I'm also in the situation where I'd like to know how the answer changes under small perturbations to the underlying problem. Thus, I'd like to be able to supply an initial condition to the solver based on a nearby solution. I believe this capability is called a "warm start."
Can someone help me distinguish between the various LP packages out there? I'm not so concerned with user-friendliness as with speed (for large numbers of constraints), high-precision arithmetic, and warm starts.
Thanks!
EDIT: Judging from the conversation with question answerers so far, I should be clearer about the problem I'm trying to solve. A simplified version is the following:
I have N fixed functions f_i(y) of a single real variable y. I want to find x_i (i=1,...,N) that minimize \sum_{i=1}^N x_i f_i(0), subject to the constraints:
\sum_{i=1}^N x_i f_i(1) = 1, and
\sum_{i=1}^N x_i f_i(y) >= 0 for all y>2
More succinctly, if we define the function F(y)=\sum_{i=1}^N x_i f_i(y), then I want to minimize F(0) subject to the condition that F(1)=1, and F(y) is positive on the entire interval [2,infinity). Note that this latter positivity condition is really an infinite number of linear constraints on the x_i's, one for each y. You can think of y as a label -- it is not an optimization variable. A specific y_0 restricts me to the half-space F(y_0) >= 0 in the space of x_i's. As I vary y_0 between 2 and infinity, these half-spaces change continuously, carving out a curved convex shape. The geometry of this shape depends implicitly (and in a complicated way) on the functions f_i.
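For concreteness, here is a minimal C++ sketch of the discretization step I described: sampling the constraint F(y) >= 0 on a grid in [2, yMax] and building one linear constraint row per grid point. The rows, together with the equality sum_i x_i f_i(1) = 1 and the objective coefficients f_i(0), would then be handed to whatever LP solver is chosen; f here is a placeholder for evaluating my fixed functions f_i:

// Build rows A[j][i] = f_i(y_j); each row encodes row . x >= 0.
#include <vector>

std::vector<std::vector<double>> buildConstraintRows(
    double (*f)(int i, double y),  // evaluates f_i(y)
    int N, double yMax, int gridPoints) {  // assumes gridPoints >= 2
    std::vector<std::vector<double>> A;
    for (int j = 0; j < gridPoints; ++j) {
        double y = 2.0 + (yMax - 2.0) * j / (gridPoints - 1);
        std::vector<double> row(N);
        for (int i = 0; i < N; ++i) row[i] = f(i, y);  // coefficient of x_i
        A.push_back(row);
    }
    return A;
}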
As for LP solver recommendations, two of the best are Gurobi and CPLEX (google them). They are free for academic users, and are capable of solving large-scale problems. I believe they have all the capabilities that you need. You can get sensitivity information (to a perturbation) from the shadow prices (i.e. the Lagrange multipliers).
But I'm more interested in your original problem. As I understand it, it looks like this:
Let S = {1,2,...,N} where N is the total number of functions. y is a scalar. f_{i}:R^{1} -> R^{1}.
minimize sum{i in S} (x_{i} * f_{i}(0))
x_{i}
s.t.
(1) sum {i in S} x_{i} * f_{i}(1) = 1
(2) sum {i in S} x_{i} * f_{i}(y) >= 0 for all y in (2,inf]
It just seems to me that you might want to try solving this problem as a convex NLP rather than an LP. Large-scale interior-point NLP solvers like IPOPT should be able to handle these problems easily. I strongly recommend trying IPOPT: http://www.coin-or.org/Ipopt
From a numerical point of view: for convex problems, warm-starting is not necessary with interior point solvers; and you don't have to worry about the combinatorial cycling of active sets. What you've described as "warm-starting" is actually perturbing the solution -- that's more akin to sensitivity analysis. In optimization parlance, warm-starting usually means supplying a solver with an initial guess -- the solver will take that guess and end up at the same solution, which isn't really what you want. The only exception is if the active set changes with a different initial guess -- but for a convex problem with a unique optimum, this cannot happen.
If you need any more information, I'd be pleased to supply it.
EDIT:
Sorry about the non-standard notation -- I wish I could type in LaTeX like on MathOverflow.net. (Incidentally, you might try posting this there -- I think the mathematicians there would be interested in this problem)
Ah now I see about the "y > 2". It isn't really an optimization constraint so much as an interval defining a space (I've edited my description above). My mistake. I'm wondering if you could somehow transform/project the problem from an infinite to a finite one? I can't think of anything right now, but I'm just wondering if that's possible.
So your approach is to discretize the problem for y in (2,inf]. I'm guessing you're choosing a very big number to represent inf and a fine discretization grid. Oooo tricky. I suppose discretization is probably your best bet. Maybe guys who do real analysis have ideas.
I've seen something similar being done for problems involving Lyapunov functions where it was necessary to enforce a property in every point within a convex hull. But that space was finite.
I encountered a similar problem. I searched the web and just found that this problem may be classified as a "semi-infinite" programming problem. MATLAB has tools to solve this kind of problem (the function fseminf), but I haven't checked it in detail. Surely others have encountered this kind of question.
You shouldn't be using an LP solver and doing the discretization yourself. You can do much better by using a decent general convex solver. Check out, for example, cvxopt. It can handle a wide variety of functions in your constraints, or allow you to write your own. This will be far better than attempting the linearization yourself.
As for warm starts, they make more sense for an LP than for a general convex program. While a warm start could potentially be useful if you hand-code the entire algorithm yourself, you typically still need several Newton steps anyway, so the gains aren't that significant. Most of the benefit of warm starting comes in things like active-set methods, which are mostly used only for LPs.