CGAL: Hole Filling .exe file is stuck - c++

This is my terminal output after running the .exe file (click the link for the terminal).
It doesn't stop or give an error; it has been stuck like this for hours.
I got my code from here, and this is the data (.off) I used.

You are probably trying to fill a hole that is too large, or one that cannot be filled using the 3D Delaunay triangulation search space (which can also happen if you have pinched holes). In CGAL 5.5 (not yet released but available in master), we added the option do_not_use_cubic_algorithm() (doc here) so that the cubic search space is not used when filling such holes.
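For reference, a minimal sketch of how that named parameter would be used, assuming CGAL 5.5's triangulate_hole() signature (the file names are placeholders):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/triangulate_hole.h>
#include <fstream>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;

int main()
{
    Mesh mesh;
    std::ifstream in("data.off");   // placeholder file name
    in >> mesh;

    // Collect border halfedges first; filling one hole turns the other
    // halfedges of that hole into non-border ones.
    std::vector<Mesh::Halfedge_index> border;
    for (Mesh::Halfedge_index h : mesh.halfedges())
        if (mesh.is_border(h)) border.push_back(h);

    for (Mesh::Halfedge_index h : border) {
        if (!mesh.is_border(h)) continue;   // hole already filled
        std::vector<Mesh::Face_index> patch;
        PMP::triangulate_hole(mesh, h, std::back_inserter(patch),
                              CGAL::parameters::do_not_use_cubic_algorithm(true));
    }

    std::ofstream out("filled.off");
    out << mesh;
}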

Related

Combined two codes in ZED 2 camera

I am trying to combine two examples, Depth Sensing and Advanced Point Cloud Mapping. Link (https://github.com/stereolabs/zed-examples/tree/master/spatial%20mapping/advanced%20point%20cloud%20mapping)
I successfully get output with Depth Sensing, but I am not able to do the point mapping, and I do not get any kind of error. When the program starts I can see a very small dot/pixel which moves when I move the camera, but it does not draw any lines or anything else using OpenGL.
Does anyone know where the problem might be?
Also, when I make the same changes in the point mapping program, I can see its output but not the Depth Sensing output. I did all of this using CMake.

Using OpenGL from ArrayFire

If I have an af::array A already in GPU memory, what is the procedure to pass it through to OpenGL? My intention is to plot it as a line graph using OpenGL, but I'm not sure how to deal with the fact that the ArrayFire backend could be OpenCL, CUDA or even the CPU. How does OpenGL take ownership of the array? I would prefer to avoid copying if possible.
On a separate note, if I use the built-in Forge library to plot graphs in ArrayFire, I find that if I follow the tutorials to plot a graph, then press and hold on the data in the plot and drag it somewhere else, the data plot moves from its original location and is no longer correctly aligned with the axes. Is there a way to correct this?
@HamzaAB
What you are asking about is known as GL-CUDA or GL-OpenCL interoperability, in case you don't already know the term, and it is exactly the area Forge tries to address. You can look at the ComputeCopy.h header inside the Forge repository to understand how to do OpenGL interop.
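To give a feel for what the interop involves on the CUDA backend, here is a rough sketch of the classic GL-CUDA buffer flow. This is my illustration of the general pattern, not ComputeCopy.h itself; it assumes a current GL context and a float array, and omits error checking:

#include <arrayfire.h>
#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Copy a float af::array into an existing GL vertex buffer without
// a host round-trip (device-to-device copy only).
void copyToVbo(const af::array& A, GLuint vbo)
{
    size_t nbytes = A.elements() * sizeof(float);

    // One-time registration of the buffer with CUDA (cache this in real code).
    cudaGraphicsResource* res = nullptr;
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsMapFlagsWriteDiscard);

    cudaGraphicsMapResources(1, &res);
    void*  devPtr = nullptr;
    size_t size   = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &size, res);

    // device<T>() hands us ArrayFire's raw device pointer and locks the array.
    cudaMemcpy(devPtr, A.device<float>(), nbytes, cudaMemcpyDeviceToDevice);
    A.unlock();               // return pointer ownership to ArrayFire

    cudaGraphicsUnmapResources(1, &res);
    cudaGraphicsUnregisterResource(res);
}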
Having said that, if you want to reset the transformation applied to the line plot, there is a way: while holding the left Control key, click the middle mouse button; that resets the pan/zoom done to the line plot. If you are facing some other issue with Forge's line plots, you may raise an issue here and we will try to look into it.
PS. I am one of the core developers of ArrayFire.

Find lines in shape

I have a binary image and I'm looking for a robust way to find the lines in the shape and their topology (how the lines connect).
I have experimented in MATLAB (although what I'm really asking is which methods to use).
I've tried skeletonizing the binary image and then applying a Hough transform; it works sometimes, but it is not a robust solution, and I struggled with boundary disturbance.
Could anyone point me in the direction of which methods to use here (and in what order)?
Binary file for testing
Frankly speaking, I've been monitoring this question for some time in the hope of seeing a useful answer. The task itself does not seem to be really complex (and I'll try to prove that), but an elegant solution is still a bit beyond me.
Some time ago I solved a similar task, and it does look like your initial example is easily traceable with my very basic home-grown solution:
It is just a simple scan (1) and identification of vertical and horizontal lines (2), along with further analysis of the more complex areas (3). Having analysed all areas (3), it is not that hard to find intersection points and optimise them as well (4).
The result is pretty rough, but it confirms the feasibility of this approach.
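To make steps (1)-(2) concrete, here is a minimal sketch of the idea, not my actual code: scan each row of the binary image and keep long runs of foreground pixels as horizontal segments (columns are handled symmetrically):

#include <vector>

struct Segment { int row, x0, x1; };   // a horizontal run of foreground pixels

std::vector<Segment> scanRows(const std::vector<std::vector<int>>& img,
                              int minLen)
{
    std::vector<Segment> segments;
    for (int y = 0; y < (int)img.size(); ++y) {
        int start = -1;
        // One extra iteration (x == width) guarantees the last run is closed.
        for (int x = 0; x <= (int)img[y].size(); ++x) {
            bool on = x < (int)img[y].size() && img[y][x] != 0;
            if (on && start < 0) start = x;          // run begins
            if (!on && start >= 0) {                 // run ends
                if (x - start >= minLen)
                    segments.push_back({y, start, x - 1});
                start = -1;
            }
        }
    }
    return segments;
}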
I do understand this is a bit far from MATLAB, but I just want to highlight several important points:
skeletonization will potentially break the initial geometry
further analysis of the skeleton seems to be a bit tricky and unreliable
with slightly enhanced quality, your images can be traced with a much simpler approach
BTW, in my approach the different operations can be performed in parallel. The scan step is adjustable, and even with a reduced number of scans the result is pretty good:
With more steps, intersection points can be identified more precisely:
I came to the conclusion that it is really important to use all the information provided by the initial image. Any simplification will remove valuable facts and increase the overall complexity of the task.
Update
Would this approach work if the figure did not have a majority of vertical and horizontal lines?
Those steps are pretty independent, so there is no strict requirement to have vertical or horizontal lines. Naturally, identifying intersections becomes a more complex task, and some additional tweaking is needed in order to enhance accuracy.
It is easy to see that the vertical lines at the beginning and at the end of the shape introduce some significant errors. A very straightforward optimisation gives us better results:
This is indeed a tricky problem: going from a binary image to a graph (i.e. topology) fundamentally involves crossing from the discrete world of pixels and 2D image data to an abstract data structure of nodes and connectivity...
But what can provide the "glue" between the two? Quite an open question, I'm afraid, requiring a complex interpretation of visual data.
Fortunately, someone else has shared a good attempt here in python: http://planet.lengrand.fr/?post_id=267
(This obviously assumes a full Python installation with NetworkX and any other dependencies. I did this on a Mac with Homebrew, via a few calls such as: brew install opencv; pip install networkx; brew install graph-tool; brew install graphviz. The IPython notebook below also makes use of http://scikit-image.org and http://mahotas.readthedocs.org/en/latest/ - so quite a heady cocktail of computer vision and image processing code! Finally, you'll of course need IPython installed...)
Here is an example (which first loads in the downloaded notebook from above; everything is run from: ipython --pylab):
%run C8Skeleton_to_graph-01.ipynb        # defines C8_Skeleton_To_Graph_01 (and imports mh/nx)
import scipy.io as sio                   # need this to load MATLAB files...
mat = sio.loadmat('bw.mat')
img = np.zeros((30,70), np.uint8)        # buffered image border
img[5:25,5:65] = mat['BW']               # insert matrix data into the middle
skeleton = mh.thin(img)                  # do skeletonization (mahotas)...
graph = nx.MultiGraph()                  # graph we'll create (networkx)
C8_Skeleton_To_Graph_01(graph, skeleton) # do it!
figure(1)
subplot(211)
plt.imshow(img,plt.get_cmap('gray'), vmin=0, vmax=1, origin='upper');
subplot(212)
plt.imshow(skeleton,plt.get_cmap('gray'), vmin=0, vmax=1, origin='upper');
figure(2)
nx.draw(graph)
This shows the skeletonization of your originally provided MATLAB data (with a buffer around the edges):
Results in the following graph:
Notice that the graph topology and layout correspond to the structure of the thinned image - including the "spurs" created at the ends. This is an ongoing problem/research area with such an approach...
EDIT: it could possibly be solved by removing stray arcs (those leading to leaf nodes with degree == 1) in the graph, e.g.
# In NetworkX 1.x graph.degree() returns a dict; in 2.x it returns a view,
# so wrap it in dict() to keep this working across versions.
remove = [node for node, degree in dict(graph.degree()).items() if degree == 1]
graph.remove_nodes_from(remove)
nx.draw(graph)

An algorithm for inflating/deflating (offsetting, buffering) polylines

Related Question:
An algorithm for inflating/deflating (offsetting, buffering) polygons
The difference is that I'm searching for a way to inflate a given polyline into a polygon:
I've got the following input:
List of 2D Points which form the polyline (bright green in the sketch)
Width of the line
The output should be a polygon which shows how the line looks expanded by the width.
I originally thought I could use Boost::Geometry::buffer for that; unfortunately, it only seems to support boxes for now. A solution using Boost::Geometry or GDAL/OGR would be preferred.
UPDATE:
I chose to use the Clipper library and its OffsetPolyLines function. As soon as Boost.Geometry is released with polyline buffer support I'll switch to Boost (as everything else in my software runs on Boost).
I understand that the OP preferred a solution using Boost::Geometry or GDAL/OGR but, in case others are following this thread, my Clipper library can also do polyline offsetting. (The soon-to-be-released version 6, which is already in the SourceForge repository, simplifies this and also supports open path (polyline) clipping.)
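For illustration, offsetting an open path with the 6.x API looks roughly like this (a sketch with made-up coordinates; Clipper works on integer coordinates, so scale your data accordingly):

#include "clipper.hpp"
using namespace ClipperLib;

int main()
{
    Path line;                               // the input polyline
    line.push_back(IntPoint(0, 0));
    line.push_back(IntPoint(100, 0));
    line.push_back(IntPoint(100, 100));

    ClipperOffset co;
    co.AddPath(line, jtRound, etOpenRound);  // open path, rounded joins and caps

    Paths solution;
    co.Execute(solution, 10.0);              // offset by half the desired width
    // 'solution' now holds the polygon outlining the inflated line
    return 0;
}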
The Boost.Geometry extension (from trunk) can do this. It is not yet released. It can buffer around polygons, polylines, points, and multi-geometries. You can specify sharp corners (miter) or rounded corners. It is not yet perfect, but lines such as your sample above should not give any problems.
The released version (1.54) does not have this yet, and neither will the next one. So for now you have to use the trunk (from SVN).
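For readers coming to this later: the buffer support described above did eventually ship (around Boost 1.56). A minimal sketch against that released API, with made-up coordinates, looks like this:

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/geometries.hpp>
#include <boost/geometry/geometries/point_xy.hpp>

namespace bg = boost::geometry;

int main()
{
    typedef bg::model::d2::point_xy<double> point;
    bg::model::linestring<point> line;
    bg::read_wkt("LINESTRING(0 0,4 5,5 4,10 6)", line);

    const double half_width = 0.5;   // half of the desired line width
    bg::model::multi_polygon<bg::model::polygon<point> > result;

    bg::buffer(line, result,
               bg::strategy::buffer::distance_symmetric<double>(half_width),
               bg::strategy::buffer::side_straight(),
               bg::strategy::buffer::join_round(),
               bg::strategy::buffer::end_round(),
               bg::strategy::buffer::point_circle());
    // 'result' is the polygon describing the inflated line
    return 0;
}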

Debugging of image processing code

What kind of debugging is available for image processing/computer vision/computer graphics applications in C++? What do you use to track errors/partial results of your method?
What I have found so far is just one tool for online and one for offline debugging:
bmd: attaches to a running process and enables you to view a block of memory as an image
imdebug: enables printf-style of debugging
Both are quite outdated and not really what I would expect.
What would seem useful for offline debugging is some kind of image logging: say, a set of commands which enable you to write images together with text (probably in the form of HTML, maybe hierarchical), that is easy to switch off at both compile time and run time, and as unobtrusive as possible.
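To sketch the kind of interface I have in mind (all names here are just invented for illustration):

#include <fstream>
#include <string>

// Append text and image references to one HTML page per run.
class HtmlLog {
    std::ofstream out_;
public:
    explicit HtmlLog(const std::string& path) : out_(path) {
        out_ << "<html><body>\n";
    }
    ~HtmlLog() { out_ << "</body></html>\n"; }
    void text(const std::string& msg)  { out_ << "<p>" << msg << "</p>\n"; }
    void image(const std::string& src) { out_ << "<img src=\"" << src << "\">\n"; }
};

// The compile-time off switch:
#ifdef ENABLE_IMAGE_LOG
  #define LOG_IMG(log, file) (log).image(file)
#else
  #define LOG_IMG(log, file) ((void)0)   // compiled out entirely
#endif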
The output could look like this (output from our simple tool):
http://tsh.plankton.tk/htmldebug/d8egf100-RF-SVM-RBF_AC-LINEAR_DB.html
Are you aware of some code that goes in this direction?
I would be grateful for any hints.
Coming from a ray tracing perspective, maybe some of those visual methods are also useful to you (it is one of my plans to write a short paper about such techniques):
Surface Normal Visualization. Helps to find surface discontinuities. (No image handy; the look is very much reminiscent of normal maps. A concrete version of this and the next mapping is sketched after this list.)
color <- rgb (normal.x+0.5, normal.y+0.5, normal.z+0.5)
Distance Visualization. Helps to find surface discontinuities and errors in finding a nearest point. (image taken from an abandoned ray tracer of mine)
color <- (intersection.z-min)/range, ...
Bounding Volume Traversal Visualization. Helps to visualize a bounding volume hierarchy or other hierarchical structures (e.g. kd-trees), and to see the traversal hotspots, like a code profiler. (tbp of http://ompf.org/forum coined the term Kd-vision.)
color <- number_of_traversal_steps/f
Bounding Box Visualization (image from picogen or so, some years ago). Helps to verify the partitioning.
color <- const
Stereo. Maybe useful in your case for the real stereographic appearance. I must admit I have never used this for debugging, but when I think about it, it could prove really useful when implementing new types of 3D primitives and trees (image from gladius, which was an attempt to unify realtime and non-realtime ray tracing).
You just render two images from slightly shifted positions, focusing on some point.
Hit-or-not visualization. May help to find epsilon errors. (image taken from metatrace)
if (hit) color = const_a;
else color = const_b
Some hybrid of several techniques.
Linear interpolation: lerp(debug_a, debug_b)
Interlacing: if(y%2==0) debug_a else debug_b
Any combination of ideas, for example the color-tone from Bounding Box Visualization, but with actual scene-intersection and lighting applied
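To make the first two mappings concrete, a literal C++ rendering might look like the following. (Note: the pseudocode above uses n + 0.5; I scale by 0.5 first so that the whole [-1,1] range maps into [0,1]. The types are minimal stand-ins.)

struct Color { float r, g, b; };

// Surface normal visualization: map [-1,1] components into [0,1].
Color visualize_normal(float nx, float ny, float nz)
{
    return Color{ nx * 0.5f + 0.5f, ny * 0.5f + 0.5f, nz * 0.5f + 0.5f };
}

// Distance visualization: normalize depth into [0,1] over a known range.
Color visualize_depth(float z, float z_min, float z_range)
{
    float t = (z - z_min) / z_range;   // 0 = near, 1 = far
    return Color{ t, t, t };
}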
You may find some more glitches and debugging imagery on http://phresnel.org , http://phresnel.deviantart.com , http://picogen.deviantart.com , and maybe http://greenhybrid.deviantart.com (an old account).
Generally, I prefer to dump the byte array of the currently processed image as raw data triplets and then run ImageMagick to create a PNG from it with a sequence number, e.g. img01.png. That way I can trace the algorithms very easily. ImageMagick is invoked from the program using a system call. This makes it possible to debug without using any external libraries for image formats.
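A minimal sketch of that trick, assuming an 8-bit RGB buffer and ImageMagick's convert on the PATH:

#include <cstdio>
#include <cstdlib>
#include <vector>

// Dump a raw RGB buffer and let ImageMagick turn it into a numbered PNG.
void dump_debug_image(const std::vector<unsigned char>& rgb,
                      int width, int height, int index)
{
    char raw[64], cmd[160];
    std::snprintf(raw, sizeof(raw), "img%02d.rgb", index);
    FILE* f = std::fopen(raw, "wb");
    std::fwrite(rgb.data(), 1, rgb.size(), f);
    std::fclose(f);
    // -size/-depth tell ImageMagick how to interpret the raw triplets.
    std::snprintf(cmd, sizeof(cmd),
                  "convert -size %dx%d -depth 8 rgb:%s img%02d.png",
                  width, height, raw, index);
    std::system(cmd);
}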
Another option, if you are using Qt, is to work with QImage and call img.save("img01.png") from time to time, the way printf is used for debugging.
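For example, a tiny helper in that spirit (the name and numbering scheme are my own):

#include <QImage>
#include <QString>

// Save a numbered debug frame each time it is called: img00.png, img01.png, ...
void debugSave(const QImage& img)
{
    static int counter = 0;
    img.save(QString("img%1.png").arg(counter++, 2, 10, QChar('0')));
}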
It's a bit primitive compared to what you are looking for, but I have done what you suggested in your OP using standard logging and by writing image files. Typically, the logging and signal export processes and staging live in unit tests.
Signals are given identifiers (often the input filename), which may be augmented (often with the process name or stage).
For the development of processors, it's quite handy.
Adding HTML for the messages would be simple; in that context you could produce viewable HTML output easily - you would not need to generate any HTML yourself, just use HTML template files and insert the messages.
I would just do it myself (as I've done multiple times already, for multiple signal types) if you get no good referrals.
In Qt Creator you can watch image modifications while stepping through the code in the normal C++ debugger; see e.g. http://labs.qt.nokia.com/2010/04/22/peek-and-poke-vol-3/