Normal estimation in C++ with CGAL framework - c++

I have been using CGAL to estimate the normals of a set of points with the CGAL::mst_orient_normals function, but it is too slow. I even tried the example from the documentation, which estimates normals for a point set corresponding to a sphere, and it takes more than an hour. I want to know if this is normal or if I am doing something wrong.

If you are using Visual C++/Visual Studio for compilation, make sure you are running in Release mode. It is a lot faster than Debug mode for template-heavy code built on top of the STL.
Make sure you are using the simplest kernel that will estimate the normals. For example, the Exact_predicates_exact_constructions_kernel will be slower than the Exact_predicates_inexact_constructions_kernel. A minimal setup is sketched below.
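For reference, here is a minimal sketch of the standard estimate-then-orient pipeline with the inexact-constructions kernel, modeled on the normals example in the CGAL point set processing documentation. The exact signatures vary between CGAL releases, and the neighborhood size of 18 is just an example, so treat it as a starting point rather than drop-in code:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/pca_estimate_normals.h>
#include <CGAL/mst_orient_normals.h>
#include <CGAL/property_map.h>
#include <utility>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3  Point;
typedef Kernel::Vector_3 Vector;
typedef std::pair<Point, Vector> PointVectorPair;  // point plus its (output) normal

int main()
{
    std::vector<PointVectorPair> points;  // fill with your input points
    const int nb_neighbors = 18;          // K for the nearest-neighbor queries

    // Estimate unoriented normals by PCA over each point's neighborhood.
    CGAL::pca_estimate_normals<CGAL::Sequential_tag>(
        points, nb_neighbors,
        CGAL::parameters::point_map(CGAL::First_of_pair_property_map<PointVectorPair>())
                         .normal_map(CGAL::Second_of_pair_property_map<PointVectorPair>()));

    // Orient the estimated normals consistently via a minimum spanning tree.
    CGAL::mst_orient_normals(
        points, nb_neighbors,
        CGAL::parameters::point_map(CGAL::First_of_pair_property_map<PointVectorPair>())
                         .normal_map(CGAL::Second_of_pair_property_map<PointVectorPair>()));

    return 0;
}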

Related

How to extract points that belong to a vertical plane from a point cloud using PCL?

I want to write a program that takes point cloud data and extracts the set of points that belong to a vertical plane. The process mustn't take more than 100 milliseconds. What is the best way to do this?
I tried using a RANSAC filter, but it is slow and the result is not good either.
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::SACSegmentation<pcl::PointXYZ> seg;
seg.setOptimizeCoefficients(true);   // refine the model coefficients with the inliers
seg.setModelType(pcl::SACMODEL_PLANE);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setMaxIterations(1000);
seg.setDistanceThreshold(0.01);      // inlier distance threshold, in the cloud's units
First of all, I would recommend using the latest PCL release (or even the master branch), compiled from source. Compared to PCL 1.9.1, it includes several speed improvements in the sample consensus module. Additionally, by compiling from source you make sure that you can use everything your computer is capable of (e.g. SIMD instructions).
With the latest PCL release (or master branch) you can also use the parallel RANSAC implementation by calling seg.setNumberOfThreads(0).
If this is still too slow, you can try downsampling the cloud before passing it to SACSegmentation, e.g. with pcl::RandomSample (https://pointclouds.org/documentation/classpcl_1_1_random_sample.html); a quick sketch follows.
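A rough sketch of that downsampling step (class and method names come from pcl/filters/random_sample.h; `cloud` is your hypothetical input and the target of 5000 points is just an example to tune for your data):
#include <pcl/filters/random_sample.h>

pcl::RandomSample<pcl::PointXYZ> downsample;
downsample.setInputCloud(cloud);   // the full input cloud
downsample.setSample(5000);        // number of points to keep
pcl::PointCloud<pcl::PointXYZ>::Ptr reduced(new pcl::PointCloud<pcl::PointXYZ>);
downsample.filter(*reduced);
seg.setInputCloud(reduced);        // run SACSegmentation on the smaller cloud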
If you only want vertical planes (that is, planes that are parallel to a specified axis), you should use SACMODEL_PARALLEL_PLANE instead of SACMODEL_PLANE and call https://pointclouds.org/documentation/classpcl_1_1_s_a_c_segmentation.html#a23abc3e522ccb2b2846a6c9b0cf7b7d3 and https://pointclouds.org/documentation/classpcl_1_1_s_a_c_segmentation.html#a7a2dc31039a1717f83ca281f6970eb18 to constrain the axis and the allowed angular deviation. A possible configuration is sketched below.
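Building on the SACSegmentation object from the question, that configuration might look like this (assuming z is the vertical axis in your data; the ~10 degree tolerance is an arbitrary example):
// Same SACSegmentation object as above, but constrained to planes parallel to a given axis:
seg.setModelType(pcl::SACMODEL_PARALLEL_PLANE); // instead of SACMODEL_PLANE
seg.setAxis(Eigen::Vector3f(0.0f, 0.0f, 1.0f)); // the vertical axis (adjust to your data's convention)
seg.setEpsAngle(0.17);                          // allowed angular deviation from that axis, in radians (~10 degrees)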

Detect type of scanned document and normalize it to given size

I'm trying to implement a program that will take a scanned (possibly rotated) document, such as an ID card, detect its type based on two or more image templates, and normalize it (de-rotate it and resize it so that it matches the template). Everything will be scanned, so luckily perspective is not a problem.
I have already tried a number of approaches with no success:
I tried using OpenCV's features2d to detect the template and findHomography to normalize it (roughly the pipeline sketched below), but it fails extremely often. If I take a template, change it a little (different data/photo on the ID card) and rotate it by ~40 degrees, it usually fails, no matter what combination of detectors, descriptors and matchers I use.
I also tried unpaper (http://manpages.ubuntu.com/manpages/gutsy/man1/unpaper.1.html), a de-rotation tool, followed by ordinary template matching, but unpaper doesn't cope well with rotation angles greater than 20 degrees.
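For what it's worth, here is a minimal sketch of the features2d + findHomography pipeline described above, assuming OpenCV 3.x or newer; the function and variable names are hypothetical:
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Warp a scanned image into the coordinate frame of a template image.
cv::Mat normalizeToTemplate(const cv::Mat& scan, const cv::Mat& templ)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kScan, kTempl;
    cv::Mat dScan, dTempl;
    orb->detectAndCompute(scan,  cv::noArray(), kScan,  dScan);
    orb->detectAndCompute(templ, cv::noArray(), kTempl, dTempl);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(dScan, dTempl, matches);

    std::vector<cv::Point2f> src, dst;
    for (const cv::DMatch& m : matches) {
        src.push_back(kScan[m.queryIdx].pt);   // keypoint in the scan
        dst.push_back(kTempl[m.trainIdx].pt);  // corresponding keypoint in the template
    }

    // RANSAC discards the outlier matches that brute-force matching lets through.
    cv::Mat H = cv::findHomography(src, dst, cv::RANSAC, 3.0);
    cv::Mat normalized;
    if (!H.empty())
        cv::warpPerspective(scan, normalized, H, templ.size());
    return normalized;
}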
If there's a ready-made solution that would be really great; a commercial library (preferably C/C++ or a command-line tool) is also an option. I hate to admit it, but I fail miserably when I try to understand computer vision papers, so linking to papers unfortunately won't help me.
Thank you very much for help!

How do I write tests for a graphics library?

I'm writing an OpenGL 2D library in Python. Everything is going great, and the codebase is steadily growing.
Now I want to write unit tests so I don't accidentally introduce new bugs while fixing others or adding new features. But I have no idea how tests would work for a graphics library.
Some things I thought of:
make reference screenshots and compare them with autogenerated screenshots in the tests
replace OpenGL calls with logging statements and compare the logs
But both seem like bad ideas. What is the common way to test graphics libraries?
The approach I have used in the past for component-level testing is:
Use a uniformly colored background (repeat the tests with a few different colors).
Use uniformly colored rectangles as the graphical objects in the tests (again with a few different colors).
Place the rectangles in known positions where you can calculate their projected position in the image yourself.
Calculate the expected intensity of each channel of each pixel (background, foreground, or a mixture).
If a test scenario results in non-integer positions, use an approximate comparison (e.g. correlation).
Use those calculations to create the expected result images.
Compare the output images to the expected result images, allowing a small tolerance (see the sketch after this list).
If you have a blur effect, compare sums of intensity instead of individual pixel intensities.
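For the comparison itself, something as simple as a per-channel tolerance check is usually enough; this is an illustrative C++ sketch, but the idea carries over to any language:
#include <cmath>
#include <cstdint>
#include <vector>

// Compare two RGBA8 framebuffers pixel by pixel, allowing a small per-channel
// tolerance so that rounding differences between drivers do not fail the test.
bool imagesMatch(const std::vector<uint8_t>& expected,
                 const std::vector<uint8_t>& actual,
                 int tolerance = 2)
{
    if (expected.size() != actual.size())
        return false;
    for (std::size_t i = 0; i < expected.size(); ++i)
        if (std::abs(int(expected[i]) - int(actual[i])) > tolerance)
            return false;
    return true;
}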
As graham stated, internal units may be unit-testable independently of any graphics calls.
Break it down even further.
The calls that make the graphics will rely on algorithms - test the algorithms.

GLUTesselator for realtime tesselation?

I'm trying to make a vector drawing application using OpenGL which will let the user see the result in real time. The way I have it set up is with an edge flag callback so the GLU tessellator only outputs triangles, which I then pass to a VBO. I've tried to make all my algorithms as fast as possible, and that is not where my issue is. According to a few code profilers, my big slowdown occurs in the call to gluTessEndPolygon(), which is the function that actually tessellates the polygon.
I have found that when the shape exceeds 100 input vertices it gets really slow and basically destroys all the hard work I did to optimize everything else. What can I do? I provide a normal of (0, 0, 1), and I have also tried all the tips from the GL Red Book. Is there a way to make the tessellator tessellate faster but with less precision?
Thanks
You might give poly2tri a try to see if it's any faster.
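In case it helps, a rough sketch of how poly2tri is typically driven; this is from memory, so verify the exact names against the poly2tri headers, and the input container is hypothetical:
#include <poly2tri/poly2tri.h>
#include <vector>

// Build the outer contour. poly2tri does not take ownership of the points,
// so keep them alive while the CDT exists and delete them afterwards.
std::vector<p2t::Point*> polyline;
for (const auto& v : inputVertices)          // hypothetical container of 2D vertices
    polyline.push_back(new p2t::Point(v.x, v.y));

p2t::CDT cdt(polyline);                      // constrained Delaunay triangulation
cdt.Triangulate();

for (p2t::Triangle* tri : cdt.GetTriangles())
    for (int i = 0; i < 3; ++i) {
        p2t::Point* p = tri->GetPoint(i);
        // append p->x, p->y to the vertex data that feeds the VBO
    }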

C++ & DirectX - setting shader

Does someone know a fast way to invoke shader processing via DirectX?
Right now I'm setting shaders using D3DXCreateEffectFromFile calls, which create the shaders at runtime (once per shader) from *.fx files.
The rendering part for every object (every patch in my case - see further down) then looks something like this:
// --------------------
// Preprocessing
UINT numPasses = 0;
effect->Begin(&numPasses, 0);
effect->BeginPass(0);
effect->SetMatrix(...);  // or SetVector(), etc. - the shader's per-object parameters
effect->CommitChanges();
// --------------------
// Geometry rendering
// Pass the geometry to render
// ...
// --------------------
// Postprocessing
// End 'effect' passes
effect->EndPass();
effect->End();
This is okay, but the profiler shows weird things: preprocessing (see the code above) takes about 60% of the time (I'm rendering a terrain of 256 patches, where every patch contains about 10k vertices).
The actual geometry rendering takes ~35% and postprocessing 5% of the total rendering time.
This seems pretty strange to me, and I guess the D3DXEffect interface may not be the best solution for this sort of thing.
I've got 2 questions:
1. Do I need to implement my own shader controller / wrapper (probably low-level), and where should I start?
2. Would compiling shaders help to somehow improve parameter setting performance?
Maybe somebody knows how to solve this kind of problem, has an implemented shader interface, or could give some advice about how this is handled in modern game engines.
Thank you.
The actual geometry rendering takes ~35% and postprocessing 5% of the total rendering time
If you want to profile shader performance you need to use NVPerfHUD or something similar. Using a CPU profiler and measuring ticks is not going to help you - rendering is often asynchronous.
Do I need to implement my own shader controller / wrapper (probably, low-level)
Using your own shader wrapper isn't a bad idea - I never liked ID3DXEffect anyway.
With your own wrapper you'll have a total control of resources and program behavior.
Whether you need it or not is for you to decide. With ID3DXEffect you have no guarantee that the implementation is as fast as it could be - it could be wasting CPU cycles doing things you don't really need. The D3DX library contains a few classes that are useful, but they aren't guaranteed to be efficient (ID3DXEffect, ID3DXMesh, all the animation- and skin-related functions, etc.).
and where should I start?
D3DXAssembleShader, IDirect3DDevice9::CreateVertexShader and IDirect3DDevice9::CreatePixelShader on DirectX 9; D3D10CompileShader on DirectX 10. Also download the DirectX SDK and read the shader documentation/tutorials. A rough outline of the DirectX 9 path is sketched below.
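As a rough outline of that DirectX 9 path, here is a sketch that uses D3DXCompileShader for HLSL source (D3DXAssembleShader is the equivalent for shader assembly); device, hlslSource, worldViewProj and the constant name are placeholders:
#include <d3dx9.h>
#include <cstring>

// Load time: compile the HLSL source and create the shader object once.
LPD3DXBUFFER code = NULL;
LPD3DXBUFFER errors = NULL;
LPD3DXCONSTANTTABLE constants = NULL;
HRESULT hr = D3DXCompileShader(hlslSource, (UINT)strlen(hlslSource),
                               NULL, NULL,        // no macros, no include handler
                               "main", "vs_3_0",  // entry point and target profile
                               0, &code, &errors, &constants);
IDirect3DVertexShader9* vertexShader = NULL;
if (SUCCEEDED(hr))
    hr = device->CreateVertexShader((const DWORD*)code->GetBufferPointer(), &vertexShader);

// Per frame / per patch: bind the shader and set only the constants that changed.
device->SetVertexShader(vertexShader);
D3DXHANDLE hWvp = constants->GetConstantByName(NULL, "g_WorldViewProj");
constants->SetMatrix(device, hWvp, &worldViewProj);
// ... issue the draw calls ...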
Would compiling shaders help to somehow improve parameter setting performance?
Shaders are automatically compiled when you load them. You could try compiling with different optimization settings, but don't expect miracles.
Are you using a DirectX profiler or just timing your client code? Profiling DirectX API calls using timers in the client code is generally not that effective because it's not necessarily synchronously processing your state updates/draw calls as you make them. There's a lot of optimization that goes on behind the scenes. Here is an article about this for DX9 but I'm sure this hasn't changed for later versions:
http://msdn.microsoft.com/en-us/library/bb172234(VS.85).aspx
I've used effects before in DirectX and the system generally works fine. It provides some nice features that might be a pain to implement yourself at a lower level, so I would stick with it for the moment.
As bshields suggested, your timing information might be inaccurate. It sounds likely that the drawing is actually what takes most of the time.
The shader is compiled when it's loaded. Precompiling will save you a half-second of startup time, but so long as the shader doesn't change during runtime, you won't see any actual speed increase. Precompiling is also kind of a pain, if you're still testing a shader. You can do it with the final copy, but unless you have a lot of shaders, you won't get much benefit while loading them.
If you're creating the shaders every frame or every time your geometry is rendered, that's probably the issue. Unless the shader itself (not its parameters) changes every frame, you should create the effect once and reuse it, e.g. as in the sketch below.
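Something along these lines, assuming the D3DXCreateEffectFromFile path from the question (the file name and variables are placeholders):
// At load time, once per .fx file:
LPD3DXEFFECT effect = NULL;
LPD3DXBUFFER compileErrors = NULL;
HRESULT hr = D3DXCreateEffectFromFile(device, TEXT("terrain.fx"),
                                      NULL, NULL,  // no macros, no include handler
                                      0, NULL,     // no flags, no effect pool
                                      &effect, &compileErrors);

// Every frame: reuse the same effect, updating only the parameters that changed,
// then Begin()/BeginPass(), draw, EndPass()/End().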
I don't remember exactly where the parameter-setting calls go, but you may want to check the docs to make sure your SetMatrix is in the right spot. Setting parameters after the pass has started won't help anything, certainly not speed. Make sure that's set up correctly. Also, set parameters as rarely as possible; there is some slight overhead involved. Too many per-frame sets will give you a noticeable slowdown.
All in all, the effects system does work fine in most cases and you shouldn't be seeing what you are. Make sure your profiling is correct, your shader is valid and optimized, and your calls are in the right places.