Offline embedded real-time routing - C++

I am currently working on a senior design project for school and have come across a design issue that I do not know how to solve. I need real-time, offline routing for an embedded walking application.
I have not been able to find any libraries that suit my needs. I understand I might have to make either my own vectorized map of my local town or my own routing algorithm. I will not go into much detail about what my project entails, but it does not require a large map; maybe a 5x5 mile grid. The maps can be loaded from an SD card if they need to be changed.
I see there are GpsMid, YOURS, and others, all using OpenStreetMap data.
We will have a TI microcontroller for processing and a GPS card for real-time lat/lon; I just do not know how to take the real-time position and route with it on a static map.
Thanks,
Matt

I'm not well versed in what is typically used for real-time routing with GPS and vectorized maps, but I can recommend some general algorithms that can be used as tools to help you get your project done.
A* search is a pretty typical pathfinding algorithm: http://en.wikipedia.org/wiki/A_star
Depending on how you organize your data, you may also find Dijkstra's algorithm helpful: http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
These algorithms are popular enough that you should be able to find example code in whatever language you want, although I'd be very skeptical of the quality. I'd recommend writing your own, since you are in school, as it'd be beneficial for you to have written and debugged them on your own at least once in your career. When you are done, you'll have a tried and true implementation to call your own.
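If it helps, here is a minimal sketch of what a Dijkstra-style router over an adjacency-list graph could look like. The node IDs, edge costs, and the Graph layout are placeholders; the real graph would be built from whatever map data ends up on the SD card, and you would also keep a predecessor array if you need the actual path rather than just the costs. A* is the same loop with a heuristic (e.g. straight-line distance to the goal) added to the priority key.

    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    // Hypothetical road graph: nodes are intersections, edges carry a walking
    // cost (metres or seconds). IDs and layout are placeholders for the map data.
    struct Edge { int to; float cost; };
    typedef std::vector<std::vector<Edge> > Graph;   // adjacency list indexed by node ID

    // Plain Dijkstra: returns the cheapest cost from `start` to every node.
    std::vector<float> shortestCosts(const Graph& g, int start) {
        const float INF = std::numeric_limits<float>::infinity();
        std::vector<float> dist(g.size(), INF);
        typedef std::pair<float, int> Item;          // (cost so far, node)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item> > pq;
        dist[start] = 0.0f;
        pq.push(Item(0.0f, start));
        while (!pq.empty()) {
            Item top = pq.top(); pq.pop();
            float d = top.first;
            int   u = top.second;
            if (d > dist[u]) continue;               // stale queue entry
            for (size_t i = 0; i < g[u].size(); ++i) {
                const Edge& e = g[u][i];
                if (d + e.cost < dist[e.to]) {
                    dist[e.to] = d + e.cost;
                    pq.push(Item(dist[e.to], e.to));
                }
            }
        }
        return dist;
    }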

Seems to me there are two parts to this:
1 - Identifying map data that tells you what's a road/path (a potential route). I would expect this is already in the data in some way; it could be as simple as which colour any given line is.
2 - Calculating a route over those paths. This is well documented/discussed and there are plenty of algorithms out there for the problem. These days it's hardly worth trying very hard for elegance/efficiency; you can just throw CPU cycles at it until an answer pops out.
Also, should this be tagged [homework]?
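On the question of tying the live GPS fix to the static map: one common first step is to snap the lat/lon reading to the nearest graph node and route from there. A rough brute-force sketch (the Node record is a placeholder; a linear scan is cheap enough for a 5x5 mile map, and a spatial grid or k-d tree could replace it if the node count grows):

    #include <cmath>
    #include <vector>

    struct Node { int id; double lat; double lon; };   // placeholder node record

    // Great-circle (haversine) distance in metres between two lat/lon points in degrees.
    double haversineMetres(double lat1, double lon1, double lat2, double lon2) {
        const double kEarthRadius = 6371000.0;
        const double kDegToRad = 3.14159265358979323846 / 180.0;
        double dLat = (lat2 - lat1) * kDegToRad;
        double dLon = (lon2 - lon1) * kDegToRad;
        double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
                   std::cos(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) *
                   std::sin(dLon / 2) * std::sin(dLon / 2);
        return 2.0 * kEarthRadius * std::asin(std::sqrt(a));
    }

    // Snap a GPS fix to the closest graph node, then route from that node.
    int nearestNode(const std::vector<Node>& nodes, double lat, double lon) {
        int best = -1;
        double bestDist = 1e30;
        for (size_t i = 0; i < nodes.size(); ++i) {
            double d = haversineMetres(lat, lon, nodes[i].lat, nodes[i].lon);
            if (d < bestDist) { bestDist = d; best = nodes[i].id; }
        }
        return best;
    }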

SLAM system that uses deep learned features?

Has anybody tried developing a SLAM system that uses deep learned features instead of the classical AKAZE/ORB/SURF features?
Scanning recent computer vision conferences, there seem to be quite a few reports of successful use of neural nets to extract features and descriptors, and benchmarks indicate that they may be more robust than their classical computer vision equivalents. I suspect that extraction speed is an issue, but assuming one has a decent GPU (e.g. an NVidia 1050), is it even feasible to build a real-time SLAM system running at, say, 30 FPS on 640x480 grayscale images with deep-learned features?
This was a bit too long for a comment, so I'm posting it as an answer.
I think it is feasible, but I don't see how this would be useful. Here is why (please correct me if I'm wrong):
In most SLAM pipelines, precision is more important than long-term robustness. You obviously need your feature detections/matchings to be precise to get reliable triangulation/bundle adjustment (or whatever equivalent scheme you might use). However, the high level of robustness that neural networks provide is only required in systems that do relocalization/loop closure over long time intervals (e.g. needing to relocalize across different seasons). Even in such scenarios, since you already have a GPU, I think it would be better to use a photometric (or even just geometric) model of the scene for localization.
We don't have any reliable noise models for the features detected by neural networks. I know there have been a few interesting works (Gal, Kendall, etc.) on propagating uncertainties in deep networks, but these methods seem a bit immature for deployment in SLAM systems.
Deep learning methods are usually good for initializing a system, and the solution they provide needs to be refined. Their results depend too much on the training dataset and tend to be "hit and miss" in practice. So I think you could trust them for an initial guess, or for some constraints (e.g. in the case of pose estimation: if you have a geometric algorithm that drifts over time, you can use the results of a neural network to constrain it; but the absence of a noise model mentioned previously will make the fusion a bit difficult here).
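For what the simplest version of that fusion could look like, here is a rough sketch that treats the geometric estimate and the network prediction as independent Gaussians with hand-tuned standard deviations. All names and values are made up for illustration, and rotation would need more care (e.g. averaging on SO(3)):

    #include <array>

    // Per-axis fusion of a geometric translation estimate with a network-predicted
    // prior, using inverse-variance weights. sigmaGeo/sigmaNet are hand-tuned
    // standard deviations, since no principled noise model exists for the network.
    std::array<double, 3> fuseTranslation(const std::array<double, 3>& tGeo, double sigmaGeo,
                                          const std::array<double, 3>& tNet, double sigmaNet) {
        const double wGeo = 1.0 / (sigmaGeo * sigmaGeo);
        const double wNet = 1.0 / (sigmaNet * sigmaNet);
        std::array<double, 3> fused;
        for (int i = 0; i < 3; ++i)
            fused[i] = (wGeo * tGeo[i] + wNet * tNet[i]) / (wGeo + wNet);
        return fused;   // rotation fusion is deliberately left out here
    }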
So yes, I think that it is feasible and that you can probably, with careful engineering and tuning, produce a few interesting demos, but I wouldn't trust it in real life.

Best approach to storing scientific data sets on disk C++

I'm currently working on a project that requires working with gigabytes of scientific data sets. The data sets are in the form of very large arrays (30,000 elements) of integers and floating point numbers. The problem here is that they are too large to fit into memory, so I need an on-disk solution for storing and working with them. To make this problem even more fun, I am restricted to using a 32-bit architecture (as this is for work) and I need to try to maximize performance for this solution.
So far, I've worked with HDF5, which worked okay, but I found it a little too complicated to work with. So, I thought the next best thing would be to try a NoSQL database, but I couldn't find a good way to store the arrays in the database short of casting them to character arrays and storing them like that, which caused a lot of bad pointer headaches.
So, I'd like to know what you guys recommend. Maybe you have a less painful way of working with HDF5 while at the same time maximizing performance. Or maybe you know of a NoSQL database that works well for storing this type of data. Or maybe I'm going in the totally wrong direction with this and you'd like to smack some sense into me.
Anyway, I'd appreciate any words of wisdom you guys can offer me :)
Smack some sense into yourself and use a production-grade library such as HDF5. So you found it too complicated, but did you find its high-level APIs?
If you don't like that answer, try one of the emerging array databases such as SciDB, rasdaman or MonetDB. I suspect, though, that if you have baulked at HDF5 you'll baulk at any of these.
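To give a sense of what the high-level route looks like, here is a minimal sketch using the HDF5 "lite" (H5LT) interface; the file and dataset names are placeholders:

    #include <vector>
    #include "hdf5.h"
    #include "hdf5_hl.h"   // the high-level "lite" (H5LT) interface

    int main() {
        std::vector<double> data(30000, 1.5);             // placeholder array
        hsize_t dims[1] = { static_cast<hsize_t>(data.size()) };

        hid_t file = H5Fcreate("arrays.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        H5LTmake_dataset_double(file, "/values", 1, dims, &data[0]);   // one call to write

        std::vector<double> back(data.size());
        H5LTread_dataset_double(file, "/values", &back[0]);            // one call to read

        H5Fclose(file);
        return 0;
    }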
In my view, and experience, it is worth the effort to learn how to properly use a tool such as HDF5 if you are going to be working with large scientific data sets for any length of time. If you pick up a tool such as a NoSQL database, which was not designed for the task at hand, then, while it may initially be easier to use, eventually (before very long would be my guess) it will lack features you need or want and you will find yourself having to program around its deficiencies.
Pick one of the right tools for the job and learn how to use it properly.
Assuming your data sets really are large enough to merit it (e.g., instead of 30,000 elements, a 30,000x30,000 array of doubles), you might want to consider STXXL. It provides interfaces that are intended to imitate (and largely succeed at imitating) those of the collections in the C++ standard library, but are designed to work with data too large to fit in memory.
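As a rough illustration of that interface, a sketch assuming the default VECTOR_GENERATOR parameters are acceptable (check the STXXL documentation for the block-size and caching knobs, and for configuring the scratch disk):

    #include <stxxl/vector>

    int main() {
        // An external-memory vector of doubles; blocks are paged to disk
        // automatically, so the container can grow well past what a 32-bit
        // address space (or RAM) would allow.
        typedef stxxl::VECTOR_GENERATOR<double>::result ext_vector;
        ext_vector v;

        for (unsigned long long i = 0; i < 500000000ULL; ++i)   // roughly 4 GB of doubles
            v.push_back(0.5 * i);

        double sum = 0.0;
        for (ext_vector::const_iterator it = v.begin(); it != v.end(); ++it)
            sum += *it;
        return sum > 0.0 ? 0 : 1;
    }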
I have been working in scientific computing for years, and I think HDF5 or NetCDF is a good data format for you to work with. Both can provide efficient parallel read/write, which is important for dealing with big data.
An alternative solution is to use an array database, like SciDB, MonetDB, or RasDaMan. However, it will be kind of painful if you try to load HDF5 data into an array database. I once tried to load HDF5 data into SciDB, and it required a series of data transformations. You need to know whether you will query the data often or not; if not, the time-consuming loading may not be worth it.
You may be interested in this paper.
It lets you query HDF5 data directly using SQL.

processing an image using CUDA implementation, python (pycuda) or C++?

I am working on a project to process an image using CUDA. The project is simply an addition or subtraction of images.
May I ask for your professional opinion: which is better, and what would be the advantages and disadvantages of those two?
I appreciate everyone's opinions and/or suggestions since this project is very important to me.
General answer: It doesn't matter. Use the language you're more comfortable with.
Keep in mind, however, that pycuda is only a wrapper around the CUDA C interface, so it may not always be up-to-date; it also adds another potential source of bugs, …
Python is great at rapid prototyping, so I'd personally go for Python. You can always switch to C++ later if you need to.
If the rest of your pipeline is in Python, and you're using Numpy already to speed things up, pyCUDA is a good complement to accelerate expensive operations. However, depending on the size of your images and your program flow, you might not get too much of a speedup using pyCUDA. There is latency involved in passing the data back and forth across the PCI bus that is only made up for with large data sizes.
In your case (addition and subtraction), there are built-in operations in pyCUDA that you can use to your advantage. However, in my experience, using pyCUDA for something non-trivial requires knowing a lot about how CUDA works in the first place. For someone starting from no CUDA knowledge, pyCUDA might be a steep learning curve.
Take a look at OpenCV; it contains a lot of image processing functions and all the helpers to load/save/display images and operate cameras.
It also now supports CUDA: some of the image processing functions have been reimplemented in CUDA, and it gives you a good framework to write your own.
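A rough sketch of what that can look like with the CUDA module (the header/namespace layout follows OpenCV 3.x/4.x built with the cudaarithm module; the file names are placeholders):

    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/cudaarithm.hpp>   // cv::cuda::add / cv::cuda::subtract

    int main() {
        cv::Mat a = cv::imread("a.png", cv::IMREAD_GRAYSCALE);   // placeholder inputs
        cv::Mat b = cv::imread("b.png", cv::IMREAD_GRAYSCALE);

        cv::cuda::GpuMat dA, dB, dSum;
        dA.upload(a);                       // host -> device copy
        dB.upload(b);
        cv::cuda::add(dA, dB, dSum);        // element-wise addition on the GPU

        cv::Mat sum;
        dSum.download(sum);                 // device -> host copy
        cv::imwrite("sum.png", sum);
        return 0;
    }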
Alex's answer is right. The amount of time consumed in the wrapper is minimal. Note that PyCUDA has some nice metaprogramming constructs for generating kernels which might be useful.
If all you're doing is adding or subtracting elements of an image, you probably shouldn't use CUDA for this at all. The amount of time it takes to transfer back and forth across the PCI-E bus will dwarf the amount of savings you get from parallelism.
Any time you deal with CUDA, it's useful to think about the CGMA ratio (compute to global memory access ratio). Your addition/subtraction is only 1 floating-point operation against 3 global memory accesses (2 reads and 1 write). That ends up being very lousy from a CUDA perspective.
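To make that concrete, here is roughly what the whole round trip looks like in plain CUDA C++ (sizes and names are illustrative): the single add per thread is dwarfed by the two host-to-device copies, the device-to-host copy, and the three global memory accesses per element.

    #include <cuda_runtime.h>

    // One thread per element: a single add against two global reads and one
    // global write, plus the PCI-E copies in and out, so memory traffic dominates.
    __global__ void addImages(const float* a, const float* b, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a[i] + b[i];
    }

    void addOnGpu(const float* hostA, const float* hostB, float* hostOut, int n) {
        float *dA, *dB, *dOut;
        size_t bytes = static_cast<size_t>(n) * sizeof(float);
        cudaMalloc(&dA, bytes);
        cudaMalloc(&dB, bytes);
        cudaMalloc(&dOut, bytes);
        cudaMemcpy(dA, hostA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hostB, bytes, cudaMemcpyHostToDevice);

        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        addImages<<<blocks, threads>>>(dA, dB, dOut, n);

        cudaMemcpy(hostOut, dOut, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dA); cudaFree(dB); cudaFree(dOut);
    }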

Wanting to write a raytracer, stuck on what algebra library to use (C++)

I've been wanting to write my own multithreaded real-time raytracer in C++, but I don't want to implement all the vector and matrix logic that comes with it. I figured I'd do some research to find a good library for this, but I haven't had much success...
It's important that the implementation is fast, and preferably that it comes with some friendly licensing. I've read that Boost has basic algebra, but I couldn't find anything on how good it is regarding speed.
Beyond that, Google gave me Armadillo, which claims to be very fast and compares itself to certain other libraries that I haven't heard of.
Then I found Seldon, which also claims to be efficient and convenient, although I couldn't find out where exactly it falls on that scale.
Lastly I read about Eigen, which I've also seen mentioned here on StackOverflow while searching here.
In the CG lecture at my university, they use HLSL for the algebra (making the students implement/optimise parts of the raytracer), which got me thinking about whether or not I could use GLSL for this. Again, I have no idea which option is most efficient, or what the general consensus is on algebra libraries. I was hoping SO could help me out here, so I can get started with some real development :)
PS: I tried linking to sites, but I don't have enough rep yet
I'd recommend writing your own routines. When I wrote my raytracer, I found that most of the algebra used the same small collection of methods. Basically all you need is a vector class that supports addition, subtraction, etc., and from there all you really need is Dot and Cross.
And to be honest, using GLSL isn't going to give you much more than that anyway (it only supports dot, cross and simple vector math; everything else must be hand-coded). I'd also recommend prototyping in C++ and then moving to CUDA afterwards. It's rather difficult to debug GPU code, so you can get it working on the CPU and then recode it a bit to work in CUDA.
In reality, raytracers are fairly simple. It's making them fast that is hard. It's the acceleration structures that are going to take most of your time and optimization. At least they did for me.
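As a minimal sketch of the kind of vector class meant here (float precision assumed; only the handful of operations a basic raytracer tends to need):

    #include <cmath>

    struct Vec3 {
        float x, y, z;

        Vec3 operator+(const Vec3& o) const { return Vec3{x + o.x, y + o.y, z + o.z}; }
        Vec3 operator-(const Vec3& o) const { return Vec3{x - o.x, y - o.y, z - o.z}; }
        Vec3 operator*(float s)       const { return Vec3{x * s, y * s, z * s}; }
    };

    inline float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    inline Vec3 cross(const Vec3& a, const Vec3& b) {
        return Vec3{ a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
    }

    inline Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(dot(v, v));
        return v * (1.0f / len);     // caller guarantees len > 0
    }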
You should take a look at http://ompf.org/forum/
This forum deals with real-time raytracing, mostly in C++. It will give you pointers and sample source.
Most of the time, since every cycle counts, people do not rely on external vector math libs: optimizations depend on the compiler you're using, inlining, use of SSE (or the like) or not, etc.
I recommend "IlmBase" that is part of the OpenEXR package. It's well-written C++, developed by ILM, and widely used by people who professionally write and use graphics software.
For my projects I used glm, maybe it would also suit you.
Note that libraries such as boost::ublas or seldon probably won't suit you, because they're BLAS-oriented (and I assume you're looking for a good 3D-driven linear algebra library).
Also, the dxmath DirectX library is pretty good, although sometimes hard to use because of its C-compatible style.
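If you want a feel for glm before committing, here is a tiny sketch (header-only, GLSL-like syntax; the values are arbitrary):

    #include <glm/glm.hpp>   // header-only: vec3, dot, cross, normalize, reflect, ...

    int main() {
        glm::vec3 origin(0.0f, 1.0f, 5.0f);                       // arbitrary ray setup
        glm::vec3 dir = glm::normalize(glm::vec3(0.2f, -0.1f, -1.0f));
        glm::vec3 normal(0.0f, 1.0f, 0.0f);

        float cosTheta = glm::dot(-dir, normal);                  // incidence term
        glm::vec3 reflected = glm::reflect(dir, normal);          // mirror direction
        glm::vec3 tangent = glm::cross(normal, reflected);        // an orthogonal vector

        return (cosTheta > 0.0f && glm::length(tangent) >= 0.0f) ? 0 : 1;
    }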
You might look at the source code for POVRAY

How to choose an integer linear programming solver?

I am a newbie at integer linear programming.
I plan to use an integer linear programming solver to solve my combinatorial optimization problem.
I am more familiar with C++/object-oriented programming in an IDE.
Right now I am using NetBeans with Cygwin to write my applications most of the time.
May I ask if there is an easy-to-use ILP solver for me?
Or does it depend on the problem I want to solve? I am trying to do some resource-mapping optimization. Please let me know if any further information is required.
Thank you very much, Cassie.
If what you want is linear mixed-integer programming, then I would point you to Coin-OR (and specifically to the module CBC). It's free software (as in speech).
You can either use it through a modelling language or use C++.
Use C++ if your data requires lots of preprocessing, or if you want to put your hands into the solver (choosing pivot points, column generation, adding cuts and so on...).
Use the integrated language if you want to use the solver as a black box (you're just interested in the result and the problem is easy or classic enough to be solved without tweaking).
But in the tags you mention genetic algorithms and graph algorithms. Maybe you should start by better defining your problem...
For graphs I like Boost::Graph a lot.
I have used lp_solve ( http://lpsolve.sourceforge.net/5.5/ ) on a couple of occasions with success. It is mature, feature-rich, and extremely well documented, with lots of good advice if your linear programming skills are rusty. Integer linear programming is not just an add-on but is strongly emphasized in this package.
Just noticed that you say you are a 'newbie' at this. Well, then I strongly recommend this package, since the documentation is full of examples and gentle tutorials. Other packages I have tried tend to assume a lot of the user.
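For a flavour of the API, here is a small sketch that maximizes 3x + 2y subject to x + y <= 4 and x + 3y <= 6 with integer x, y (a toy model made up for illustration; note that lp_solve's row arrays are 1-indexed, with element 0 ignored):

    #include <cstdio>
    #include "lp_lib.h"   // lp_solve's C API

    int main() {
        lprec* lp = make_lp(0, 2);            // 0 constraints yet, 2 columns (x, y)
        if (lp == NULL) return 1;
        set_maxim(lp);                         // default is minimization

        REAL obj[3]  = { 0, 3, 2 };            // maximize 3x + 2y
        set_obj_fn(lp, obj);

        REAL row1[3] = { 0, 1, 1 };            // x + y <= 4
        add_constraint(lp, row1, LE, 4);
        REAL row2[3] = { 0, 1, 3 };            // x + 3y <= 6
        add_constraint(lp, row2, LE, 6);

        set_int(lp, 1, TRUE);                  // x integer
        set_int(lp, 2, TRUE);                  // y integer

        if (solve(lp) == OPTIMAL) {
            REAL vars[2];
            get_variables(lp, vars);
            std::printf("x = %f, y = %f, obj = %f\n", vars[0], vars[1], get_objective(lp));
        }
        delete_lp(lp);
        return 0;
    }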
For large problems, you might look at AMPL, which is an optimization interpreter with many backend solvers available. It runs as a separate process; C++ would be used to write out the input data.
Then you could try various state-of-the-art solvers.
Look into GLPK. Comes with a few examples, and works with a subset of AMPL, although IMHO works best when you stick to C/C++ for model setup. Copes with pretty big models too.
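For reference, the same kind of toy problem through GLPK's C API looks roughly like this (again the model is made up: maximize 3x + 2y with x + y <= 4 and integer x, y; GLPK's arrays are also 1-indexed):

    #include <cstdio>
    #include <glpk.h>

    int main() {
        glp_prob* lp = glp_create_prob();
        glp_set_obj_dir(lp, GLP_MAX);

        glp_add_rows(lp, 1);
        glp_set_row_bnds(lp, 1, GLP_UP, 0.0, 4.0);     // x + y <= 4

        glp_add_cols(lp, 2);
        glp_set_col_bnds(lp, 1, GLP_LO, 0.0, 0.0);     // x >= 0
        glp_set_col_kind(lp, 1, GLP_IV);               // x integer
        glp_set_obj_coef(lp, 1, 3.0);
        glp_set_col_bnds(lp, 2, GLP_LO, 0.0, 0.0);     // y >= 0
        glp_set_col_kind(lp, 2, GLP_IV);               // y integer
        glp_set_obj_coef(lp, 2, 2.0);

        // Constraint matrix entries (1-indexed): row 1 has coefficient 1 on x and y.
        int    ia[3] = { 0, 1, 1 };
        int    ja[3] = { 0, 1, 2 };
        double ar[3] = { 0, 1.0, 1.0 };
        glp_load_matrix(lp, 2, ia, ja, ar);

        glp_iocp parm;
        glp_init_iocp(&parm);
        parm.presolve = GLP_ON;                        // solve the LP relaxation internally
        glp_intopt(lp, &parm);

        std::printf("x = %g, y = %g, obj = %g\n",
                    glp_mip_col_val(lp, 1), glp_mip_col_val(lp, 2), glp_mip_obj_val(lp));
        glp_delete_prob(lp);
        return 0;
    }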
The Wikipedia article on linear programming covers a few different algorithms that you could dig into to see which may work best for you. Does that help, or were you wanting something more specific?