I am trying to pick a physics engine for a simple software application. It would simulate a rather small number of objects, so performance isn't a huge concern; I am mostly concerned with the accuracy of the motion involved. I would also like the engine to be cross-platform across Windows/Linux/Mac and usable from C++ code. I was looking at Bullet, Newton Game Dynamics, and ODE because they are open source. However, if Havok/PhysX are significantly more accurate, I would consider those too.
All I seem to find are opinions on the engines. Are there any thorough comparisons between the options, or does anyone have experience trying the various engines out? Since what I'm trying to do is relatively simple, there probably isn't a huge difference between them, but I'd like to hear what people have to say about the options. Thanks!
There is a nice comparison of ODE and Bullet here:
http://blog.wolfire.com/2010/03/Comparing-ODE-and-Bullet
Hope it can be useful in making a choice.
Although it is a bit dated, there is a comprehensive comparison of (in alphabetical order) Bullet, JigLib, Newton, ODE, PhysX, and others available here:
http://www.adrianboeing.com/pal/papers/p281-boeing.pdf
The comparison considers integrators, friction models, constraint solvers, collision detection, stacking, and computational performance.
Sorry, but you will never find a real comparison with respect to accuracy. I have been searching for three months now for my master's thesis and have not found one, so I started doing the comparison on my own, though it's still a long way from finished. I'm testing both 3D and 2D engines, and so far Chipmunk is the one with the highest accuracy, so if you have no need for 3D I would recommend it. If you do need 3D and your problem is as simple as you described (you don't want to expand it in the future?), Bullet and ODE will do. I would prefer Bullet because it is much more up to date and still actively maintained. Lastly there is Newton, which I am fighting with right now, so I can't give you pros and cons except that it takes a bit more work to get familiar with, because the documentation is crucially bad.
Hope that helps. Best regards.
One thing I found really valuable in ODE is the ability to change pretty much every single parameter 'on the fly'. As an example, the engine doesn't seem to complain if you modify the inertia or even the shape of a body. You could replace a sphere with a box and everything would just keep working, or change the size of the sphere.
Other engines are usually not as flexible, because they do a lot of work internally for optimization purposes.
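For illustration, here is a hedged sketch of that kind of on-the-fly swap using ODE's C API (the world/space/body setup is assumed to exist already, and the box size and density are arbitrary):

    #include <ode/ode.h>

    // Assumes world, space and body were created earlier, e.g.:
    //   dWorldID world = dWorldCreate();
    //   dSpaceID space = dHashSpaceCreate(0);
    //   dBodyID  body  = dBodyCreate(world);

    void replaceSphereWithBox(dSpaceID space, dBodyID body, dGeomID sphere)
    {
        // Remove the old collision shape.
        dGeomDestroy(sphere);

        // Attach a 1x1x1 box in its place.
        dGeomID box = dCreateBox(space, 1.0, 1.0, 1.0);
        dGeomSetBody(box, body);

        // Update the body's mass/inertia to match the new shape.
        dMass m;
        dMassSetBox(&m, /*density=*/1.0, 1.0, 1.0, 1.0);
        dBodySetMass(body, &m);
    }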
As for accuracy, as far as I know, ODE still supports a very accurate (but slow) solver, which is obviously not very popular in the games industry because you can't play around with more than 25-30 objects in real time. Hope this helps.
Check out Simbody, which is used in engineering. It's particularly good for simulating articulated bodies. It has been used for more than 5 years to simulate human musculoskeletal dynamics. It's also one of the physics engines used in Gazebo, a robot simulation environment.
https://github.com/simbody/simbody
http://nmbl.stanford.edu/publications/pdf/Sherm2011.pdf
The Physics Abstraction Layer (PAL) supports a large number of physics engines via a unified API, making it easy to compare engines for your situation.
PAL provides a unified interface for these physics engines:
Box2D (experimental)
Bullet
DynaMechs (deprecated)
Havok (experimental)
IBDS (experimental)
JigLib
Meqon (deprecated)
Newton
ODE
OpenTissue (experimental)
PhysX (a.k.a. NovodeX, Ageia PhysX, NVIDIA PhysX)
Simple Physics Engine (experimental)
Tokamak
TrueAxis
According to the December 2007 paper linked in this answer:
Of the open source engines the Bullet engine provided the best results overall, outperforming even some of the commercial engines. Tokamak was the most computationally efficient, making it a good choice for game development, however TrueAxis and Newton performed well at low update rates. For simulation systems the most important property of the simulation should be determined in order to select the best engine.
Here is a September 2007 demo by the same author:
https://www.youtube.com/watch?v=IhOKGBd-7iw
I have got a job scheduling problem. We are given a start time, the time to complete the order, and the deadline. It is given that start time + time to complete <= deadline. I have also been given the loss that will occur if I am not able to complete the job before the deadline. I have to design an algorithm to minimize the loss.
I have tried adapting the standard dynamic programming algorithm for maximizing profit in job scheduling, but without success.
What algorithm can I use to solve the question?
Dynamic programming isn't the right approach for what you're aiming to optimize. You can find an optimized schedule by using a greedy approach.
Here's a thorough guide with sample code in your desired language (C++). The guide assumes each job takes only 1 unit of time, which you can easily modify by using time_to_complete instead. A sketch of the unit-time version follows.
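For reference, here is a minimal sketch of that unit-time greedy (the names are mine, not the guide's): sort jobs by decreasing loss and drop each one into the latest still-free slot before its deadline; any job that finds no slot contributes its loss to the total.

    #include <algorithm>
    #include <vector>

    // Hypothetical job record for the unit-time variant described in the guide.
    struct Job {
        int deadline;  // latest time slot by which the job must finish
        int loss;      // penalty incurred if the job is not scheduled
    };

    // Greedy: consider jobs in order of decreasing loss and place each one
    // in the latest free slot before its deadline. Returns the total loss
    // of the jobs that could not be scheduled in time.
    int minimumLoss(std::vector<Job> jobs)
    {
        std::sort(jobs.begin(), jobs.end(),
                  [](const Job& a, const Job& b) { return a.loss > b.loss; });

        int maxDeadline = 0;
        for (const Job& j : jobs)
            maxDeadline = std::max(maxDeadline, j.deadline);

        std::vector<bool> slotTaken(maxDeadline + 1, false);
        int totalLoss = 0;

        for (const Job& j : jobs) {
            int slot = j.deadline;
            while (slot >= 1 && slotTaken[slot]) --slot;  // latest free slot
            if (slot >= 1) slotTaken[slot] = true;        // scheduled in time
            else totalLoss += j.loss;                     // missed: pay the loss
        }
        return totalLoss;
    }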
Your problem is similar to the knapsack one. Using a greedy approach is convenient if you aren't actually looking for the best possible solution, but just a "good enough" one.
The big pro of the greedy approach is that its cost is much lower than that of more thorough approaches, but if you need the best solution to your problem, I would say that backtracking is the way to go.
Since the deadline can be violated, the problem looks like a total weighted tardiness scheduling problem. There are many flavors of it, but most problems under this umbrella are computationally hard, so dynamic programming (DP) would not be my first choice. In my experience, DP also poses difficulties during modeling and implementation. The same goes for mathematical programming "as-is". Some approaches that can be implemented more quickly are:
constraint programming: a very small learning curve, and there are many libraries out there, including very good open-source ones (most have a C++ API). Bonus: constraint programming can prove optimality.
ad hoc heuristics: (1) start with a constructive algorithm (like the greedy approach suggested by Ling Zhong and Flavio Giobergia), then (2) use some local search approach to improve it, and finally (3) embed the approach into a metaheuristic scheme. This way you can build on top of the previous step and learn a lot about the problem. Note: in general, heuristics cannot prove optimality.
special mention: LocalSolver, a hybrid between the two approaches above: it lets you model the problem using a formalism similar to constraint programming and then solves it using heuristics. It is very easy to learn, usually lets you get started quickly and, in my tests, provides remarkably good results.
I have a school project to build an AI for a 2D racing game in which it will compete with several other AIs.
We are given a black and white bitmap image of the racing track, and we are allowed to choose basic stats for our car (handling, acceleration, max speed and brakes) after we receive the map. The AI connects to the game's server and sends it, several times a second, numbers for the current acceleration and steering. The language I chose is C++, by the way.
The question is
What is the best strategy or algorithm (since I want to try and win)? I currently have in mind some ideas found on the net and one or two of my own, but before I start coding I would like to make sure my approach is a good one.
What good books are there on that matter?
What sites should I refer to?
There's no "right answer" for this problem - it's pretty open-ended and many different options might work out.
You may want to look into reinforcement learning as a way of getting the AI to determine how best to control the car once the control statistics have been picked. Reinforcement learning models can train the computer to work toward a good policy for making particular maneuvers with the underlying control system.
To determine what controls you'll want to use, you could use some flavor of reinforcement learning, or you may want to investigate supervised learning algorithms that can play around with different combinations of controls and see how good of a "fit" they give for the particular map. For example, you might break the map apart into small blocks, then try seeing what controls do well in the greatest number of blocks.
In terms of plotting out the path you'll want to take, A* is a well-known algorithm for finding shortest paths. In your case, I'm not sure how useful it will be, but it's the textbook informed search algorithm.
For avoiding opponent racers and trying to drive them into trickier situations, you may need to develop some sort of opponent modeling system. Universal portfolios are one way to do this, though I'm not sure how useful they'll be in this instance. One option might be to develop a potential field around the track and opponent cars to help your car try to avoid obstacles; this may actually be a better choice than A* for pathfinding. If you're interested in tactical maneuvers, a straightforward minimax search may be a good way to avoid getting trapped or to find ways to trap opponents.
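To make the potential-field idea concrete, here is a rough sketch (the names and constants are invented, and real tuning would be needed): sum an attractive pull toward the next waypoint with an inverse-square push away from each obstacle or opponent, then steer along the resulting force.

    #include <cmath>

    struct Vec2 {
        double x, y;
        Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
        Vec2 operator-(Vec2 o) const { return {x - o.x, y - o.y}; }
        Vec2 operator*(double s) const { return {x * s, y * s}; }
        double length() const { return std::sqrt(x * x + y * y); }
    };

    // Attractive force pulling the car toward the next waypoint, plus
    // inverse-square repulsion from each obstacle/opponent position.
    Vec2 steeringForce(Vec2 car, Vec2 waypoint,
                       const Vec2* obstacles, int nObstacles)
    {
        const double kAttract = 1.0;   // tuning constants (made up)
        const double kRepel   = 50.0;

        Vec2 toGoal = waypoint - car;
        Vec2 force = toGoal * (kAttract / (toGoal.length() + 1e-9));

        for (int i = 0; i < nObstacles; ++i) {
            Vec2 away = car - obstacles[i];
            double d = away.length() + 1e-9;
            force = force + away * (kRepel / (d * d * d));  // ~1/d^2, normalized
        }
        return force;  // steer/accelerate along this direction
    }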
I am no AI expert, but I think the above links might be a good starting point. Best of luck with the competition!
What good books are there on that matter?
The best book I have read on this subject is "Programming Game AI by Example" by Mat Buckland. It has chapters on both path planning and steering behaviors, and much more (state machines, graph theory, the list goes on).
All the solutions above are good, and people have gone to great lengths to test them out. Look up "Togelius and Lucas" or "Loiacono and Lanzi". They have tried things like neuroevolution, imitation (done via reinforcement learning), force fields, etc. From my point of view, the best way to go is the center line: that will take an hour to implement. In contrast, neuroevolution (for example) is neither easy nor fast; I did my dissertation on that, and it can easily take several months full time, if you have the right hardware.
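As a hedged sketch of that center-line approach (the types and the lookahead constant are my own assumptions): extract the track's center line from the bitmap as an ordered loop of points, then steer toward a point a fixed number of samples ahead of the one nearest the car.

    #include <cmath>
    #include <vector>

    struct Point { double x, y; };

    // Given the track center line as an ordered loop of points, aim at a
    // "lookahead" target a few samples past the point nearest the car,
    // and return the signed heading error to feed the steering control.
    double steerTowardCenterLine(const std::vector<Point>& centerLine,
                                 Point car, double carHeading,
                                 std::size_t lookahead = 10)
    {
        // Find the center-line point nearest to the car.
        std::size_t nearest = 0;
        double best = 1e18;
        for (std::size_t i = 0; i < centerLine.size(); ++i) {
            double dx = centerLine[i].x - car.x;
            double dy = centerLine[i].y - car.y;
            double d2 = dx * dx + dy * dy;
            if (d2 < best) { best = d2; nearest = i; }
        }

        // Aim a few samples further along the loop.
        const Point& target =
            centerLine[(nearest + lookahead) % centerLine.size()];
        double desired = std::atan2(target.y - car.y, target.x - car.x);

        // Normalize the heading error into (-pi, pi].
        const double pi = 3.14159265358979323846;
        double err = desired - carHeading;
        while (err > pi)   err -= 2 * pi;
        while (err <= -pi) err += 2 * pi;
        return err;
    }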
I'm looking to add some genetic algorithms to an operations research project I have been involved in. Currently we have a program that aids in optimizing some scheduling, and we want to add some heuristics in the form of genetic algorithms. Are there any good libraries for generic genetic programming/algorithms in C++, or would you recommend I just code my own?
I should add that while I am not new to C++, I am fairly new to doing this sort of mathematical optimization work in it, as the group I worked with previously tended to use a proprietary optimization package.
We have a fitness function that is fairly computationally intensive to evaluate, and we have a cluster to run this on, so parallelized code is highly desirable.
So is C++ a good language for this? If not, please recommend some other ones, as I am willing to learn another language if it makes life easier.
Thanks!
I would recommend rolling your own. 90% of the work in a GP is coding the genotype, how it gets operated on, and the fitness calculation. These are parts that change for every different problem/project. The actual evolutionary algorithm part is usually quite simple.
There are several GP libraries out there (http://en.wikipedia.org/wiki/Symbolic_Regression#Implementations). I would use these as examples and references, though.
C++ is a good choice for GP because GP runs tend to be very computationally intensive. Usually the fitness function is the bottleneck, so it's worthwhile to at least make that part compiled/optimized.
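To illustrate how small the evolutionary loop itself can be, here is a bare-bones (mu + lambda)-style sketch; the problem-specific parts (genotype, mutation, fitness) are left as stubs, and the fitness evaluations are the natural place to parallelize on a cluster (e.g. OpenMP or MPI):

    #include <algorithm>
    #include <random>
    #include <utility>
    #include <vector>

    using Genome = std::vector<double>;          // placeholder genotype

    double fitness(const Genome& g);             // problem-specific, expensive
    Genome mutate(Genome g, std::mt19937& rng);  // problem-specific

    // One (mu + lambda)-style generation: mutate, evaluate, select.
    std::vector<Genome> step(std::vector<Genome> pop, std::mt19937& rng)
    {
        // Produce one mutated child per parent.
        std::vector<Genome> children;
        for (const Genome& p : pop)
            children.push_back(mutate(p, rng));
        pop.insert(pop.end(), children.begin(), children.end());

        // Evaluate each genome exactly once (parallelize this loop),
        // then keep the fittest half.
        std::vector<std::pair<double, Genome>> scored;
        for (Genome& g : pop)
            scored.push_back({fitness(g), std::move(g)});
        std::sort(scored.begin(), scored.end(),
                  [](const std::pair<double, Genome>& a,
                     const std::pair<double, Genome>& b) {
                      return a.first > b.first;
                  });

        std::vector<Genome> next;
        for (std::size_t i = 0; i < scored.size() / 2; ++i)
            next.push_back(std::move(scored[i].second));
        return next;
    }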
I use GAUL. It's a C library with everything you want:
parallelism via pthreads/fork/OpenMP/MPI
various crossover and mutation functions
non-GA optimisation: hill climbing, Nelder-Mead simplex, simulated annealing, tabu search, ...
Why build your own library when such powerful tools already exist?
I haven't used it personally yet, but the Age-Layered Population Structure (ALPS) method has been used to generate human-competitive results and has been shown to outperform several popular methods in finding optimal solutions in rough fitness landscapes. Additionally, the link contains source code in C++, FTW.
I have had similar problems. I used to have a complicated problem, and defining a solution in terms of a fixed-length vector was not desirable; even a variable-length vector did not look attractive. Most libraries focus on cases where the cost function is cheap to calculate, which did not match my problem. Another pitfall is their lack of parallelism. Expecting the user to allocate memory for the library's use adds insult to injury. My cases were even more complicated because most libraries check the nonlinear constraints before evaluation, while I needed to check them during or after the evaluation, depending on the result of the evaluation. It is also undesirable to have to evaluate a solution to calculate its cost and then recalculate the solution to present it; in most cases I had to write the cost function twice, once for the GA and once for presentation.
Having run into all of these problems, I eventually designed my own openGA library, which is now mature.
This library is written in C++ and distributed under the free Mozilla Public License 2.0. That guarantees that using the library does not limit your project: it can be used for commercial or non-commercial purposes for free, without asking for any permission. Not all libraries are transparent in this sense.
It supports three modes: single objective, multiple objective (NSGA-III) and interactive genetic algorithm (IGA).
The solution is not required to be a vector. It can be any structure with any customized design, containing any optional values, with variable length. This feature makes the library suitable for genetic programming (GP) applications.
C++11 is used, and templates allow flexibility in the design of the solution structure.
The standard library is all you need; there are no dependencies beyond it. The entire library is also a single header file, for ease of use.
The library supports parallelism by default unless you turn it off. If you have an N-core CPU, the number of threads is set to N by default. You can change the settings, and you can also choose whether solution evaluations are distributed between threads equally or assigned to whichever thread has finished its job and is currently idle.
Solution evaluation is separated from calculation of the final cost. This means your evaluation function can simulate the system and keep a lot of information; your cost function is called later and reports the cost based on that evaluation. The evaluation results are kept for later use, so you do not need to recalculate them.
You can reject a solution at any time during its evaluation, so no time is wasted; in fact, evaluation and constraint checking are integrated.
The GA assist feature helps you produce the C++ code base from the information you provide.
If these features match what you need, I recommend having a look at the user manual and the examples of openGA.
The number of readers and citations of the related publication, as well as its GitHub stars, keeps increasing, and its usage keeps growing.
I suggest you have a look at the MATLAB optimization toolbox. It comes with GAs out of the box; you only have to code the fitness function (and possibly a function to generate the initial population), and I believe MATLAB has some C++ interoperability, so you could code your functions in C++. I am using it for my experiments, and a very nice feature is that you get all sorts of charts out of the box as well.
That said, if your aim is to learn about genetic algorithms you're better off coding it yourself, but if you just want to run experiments, MATLAB and C++ (or even just MATLAB) is a good option.
Basically, I'm looking for a library or SDK for handling large point clouds coming from LIDAR or scanners, typically running into many millions of points of X, Y, Z, colour. What I'm after is as follows:
Fast display, zooming, panning
Point cloud registration
Fast low level access to the data
Regression of surfaces and solids (not as important as the others)
While I don't mind paying for a reasonable commercial library, I'm not interested in a very expensive one (e.g. in excess of about $5k) or one with a per-user run-time license cost. Open source would also be good. I found a few possibilities via Google, but they all tend to be too expensive for my budget.
Check out the Point Cloud Library (PCL). It is quite a complete toolkit for processing and manipulating point clouds. It also provides tools for point cloud visualisation, e.g. pcl::visualization::CloudViewer, which makes use of the VTK library and wxWidgets.
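For instance, a minimal viewer along those lines takes only a few lines of PCL (the file name here is a placeholder):

    #include <pcl/io/pcd_io.h>
    #include <pcl/point_types.h>
    #include <pcl/visualization/cloud_viewer.h>

    int main()
    {
        // Load a coloured point cloud from disk (placeholder file name).
        pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(
            new pcl::PointCloud<pcl::PointXYZRGB>);
        pcl::io::loadPCDFile("scan.pcd", *cloud);

        // Display it in an interactive window with pan/zoom/rotate.
        pcl::visualization::CloudViewer viewer("Cloud Viewer");
        viewer.showCloud(cloud);
        while (!viewer.wasStopped()) {}  // spin until the window is closed
        return 0;
    }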
Since 2011, a point cloud translation (read/write) and manipulation toolkit has been developed: PDAL, the Point Data Abstraction Library.
I second the call for R, which I interface with C++ all the time (using e.g. the Rcpp and RInside packages).
R prefers all data in memory, so you probably want to go with a 64-bit OS and a decent amount of RAM for lots of data. The Task View on High-Performance Computing with R has some pointers on dealing with large data.
Lastly, for quick visualization, the hexbin package is excellent for visually summarizing large data sets. For the zooming etc. aspect, try the rgl package.
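As a small illustration of that bridge, here is a sketch of an RInside program that pushes C++ data into an embedded R session and evaluates R code on it (assuming R, Rcpp and RInside are installed; the data are dummy values):

    #include <RInside.h>
    #include <vector>

    int main(int argc, char* argv[])
    {
        RInside R(argc, argv);              // embedded R instance

        std::vector<double> z = {1.0, 2.5, 3.7, 4.2};
        R["z"] = z;                         // copy the data into R as 'z'

        R.parseEvalQ("print(summary(z))");  // run arbitrary R code on it
        return 0;
    }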
Why don't you have a look at the R programming language, which can link directly to C code, thereby forming a bridge? R was developed with statistical computing in mind, but it can very easily help not only to handle large datasets but to visualize them as well. Quite a number of atmospheric scientists are using R in their work; I know because I work with them on exactly the stuff you're trying to do. Think of R as a poor man's Matlab or IDL (though it soon won't be).
In the spirit of the R answers, ROOT also provides a good underlying framework for this kind of thing.
Possibly useful features:
A C++ code base, with the CINT C++ interpreter as the working shell; Python bindings
Can display three-dimensional point clouds
A set of geometry classes (though I don't believe they support all the operations you need)
Developed by nuclear and particle physicists instead of by statisticians :p
Vortex by Pointools can go up to much higher numbers of points than the millions that you ask for:
http://www.pointools.com/vortex_intro.php
It can handle files of many gigabytes containing billions of points on modest hardware.
I'm interested in creating a game that uses fractal maps for more realistic geography. However, the only fractal map programs I have found are Windows-only, for example Fractal Mapper. Needless to say, they are also not open-sourced.
Are there any open-source fractal map creators available, preferably in Python or C/C++? Ideally I would like something that can be "plugged into" a program, rather than being standalone.
Fracplanet may be of use.
Basic terrain generation involves creating a height map (an image) and rendering it using the pixel colour as height, so you may find image- or texture-generation code useful. This is a good tutorial.
For the terrain aspect take a look at libnoise.
It's packaged for Debian, and has excellent documentation with a chapter on terrain generation with example C++ code.
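As a rough sketch of what that looks like (the grid size, frequency and octave count are arbitrary choices of mine), you can sample libnoise's Perlin module over a grid to fill a height map:

    #include <noise/noise.h>

    int main()
    {
        noise::module::Perlin perlin;
        perlin.SetOctaveCount(6);   // more octaves -> more fine detail
        perlin.SetFrequency(2.0);

        const int N = 256;
        static double height[N][N];
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x)
                // GetValue returns roughly [-1, 1]; remap to [0, 1]
                // so the array can be written out as a height map.
                height[y][x] = 0.5 * (1.0 + perlin.GetValue(x / double(N),
                                                            y / double(N),
                                                            0.0));
        return 0;
    }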
Of course there's a lot more to "maps" than slapping some colours on a height field (for example Fracplanet adds rivers and lakes). And the sort of terrain you get from these methods isn't actually that realistic; continents don't generally ramp up from the coast into a rocky hinterland, so maybe simulating continental drift and mountain building and erosion processes would help (alternatively, fake it). And then if you want vegetation, or the artefacts of lifeforms (roads and towns, say) to populate your map you might want to look at cellular automata or other "artificial life" tools. Finally, the Virtual Terrain Project is well worth a browse for more links and ideas.
I'd highly recommend purchasing a copy of
Texturing & Modeling: A Procedural Approach
I see it's now in its third edition (I only have the second), but it's packed full of useful articles about procedural texturing, including several chapters on its use in fractal terrains. It starts out with an extensive discussion of noise algorithms too, so you have everything from the basics upward. The authors include Musgrave, Perlin and Worley, so you really can't do better.
If you want truly realistic geography, you could use NASA's SRTM dataset, perhaps combined with OpenStreetMap features. :-)
A very simple implementation would be to use midpoint displacement, or the somewhat more complicated diamond-square algorithm:
http://en.wikipedia.org/wiki/Diamond-square_algorithm
http://www.gameprogrammer.com/fractal.html#diamond
These algorithms are similar to the "Difference Clouds" filter in Photoshop.
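For illustration, here is a minimal 1-D midpoint displacement sketch (the parameters are arbitrary); the diamond-square algorithm generalizes the same subdivide-and-perturb idea to a 2-D grid:

    #include <random>
    #include <vector>

    // Fill h[lo..hi] by recursively displacing midpoints. 'roughness' in
    // (0, 1) controls how quickly the displacement shrinks per level.
    void midpointDisplace(std::vector<double>& h, int lo, int hi,
                          double amplitude, double roughness,
                          std::mt19937& rng)
    {
        int mid = (lo + hi) / 2;
        if (mid == lo) return;  // segment can no longer be subdivided

        std::uniform_real_distribution<double> d(-amplitude, amplitude);
        h[mid] = 0.5 * (h[lo] + h[hi]) + d(rng);  // average ends, then perturb

        midpointDisplace(h, lo, mid, amplitude * roughness, roughness, rng);
        midpointDisplace(h, mid, hi, amplitude * roughness, roughness, rng);
    }

    int main()
    {
        std::mt19937 rng(42);
        std::vector<double> h(257, 0.0);  // 2^8 + 1 samples, endpoints at 0
        midpointDisplace(h, 0, 256, 1.0, 0.5, rng);
        // h now holds a fractal 1-D terrain profile.
        return 0;
    }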