Is there a good way to mix two known Transforms? I.e.
var transform = Transform.mix(Transform.scale(2,2,2), Transform.rotate(1.4, 1.4,1.4))
and combine them into one transform matrix?
What I was looking for is called Transform.multiply, which can be found on GitHub. It takes two transforms and combines them into one. And yes, order matters in matrix multiplication.
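For illustration, here is a minimal generic sketch in plain C++ (not the JavaScript library from the question; the helper names and row-major layout are my own) showing why the argument order matters when composing transforms by 4x4 multiplication:

#include <array>
#include <cmath>

using Mat4 = std::array<double, 16>;  // row-major 4x4

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int r = 0; r < 4; ++r)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                c[4 * r + col] += a[4 * r + k] * b[4 * k + col];
    return c;
}

Mat4 scaling(double sx, double sy, double sz) {
    return {sx, 0, 0, 0,
            0, sy, 0, 0,
            0, 0, sz, 0,
            0, 0, 0, 1};
}

Mat4 rotationZ(double rad) {
    double c = std::cos(rad), s = std::sin(rad);
    return {c, -s, 0, 0,
            s,  c, 0, 0,
            0,  0, 1, 0,
            0,  0, 0, 1};
}

int main() {
    // multiply(R, S) scales first, then rotates (column-vector convention);
    // with a non-uniform scale the two orders give different matrices,
    // which is why the argument order matters. (A uniform scale such as
    // (2,2,2) is the one case that commutes with rotation.)
    Mat4 a = multiply(rotationZ(1.4), scaling(2, 1, 1));
    Mat4 b = multiply(scaling(2, 1, 1), rotationZ(1.4));
    (void)a; (void)b;
}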
I'm trying to do triangulation for 3D reconstruction and I came across an interesting observation which I cannot justify.
I have 2 sets of images. I know the correspondences, and I'm finding the intrinsic and extrinsic parameters using a direct linear transformation. While I'm able to properly reconstruct the original scene, the intrinsic parameters come out different even though the pictures were taken with the same camera. How is it possible to get different intrinsic parameters if the camera is the same? And if the intrinsic parameters are different, how am I able to reconstruct the scene perfectly?
Thank you
You haven't specified what you mean by "different", so I'm just going to point out two possible sources of difference that come to mind. Let's denote the matrix of intrinsic parameters by K.
The first possible difference could just come from a scaling difference. If the second time you estimate your intrinsics matrix you end up with a matrix
K_2 = lambda*K
then it doesn't make any difference when projecting or reprojecting, since for any 3D point X you'll have
K_2*X = lambda*(K*X) ~ K*X //a homogeneous point and any nonzero scalar multiple of it represent the same projective point
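Spelling this out with the full projection equation (writing the camera as P = K[R|t], which the shorthand above omits), in LaTeX notation:

x_2 = K_2 [R \mid t]\, X = (\lambda K)[R \mid t]\, X = \lambda \bigl( K [R \mid t]\, X \bigr) = \lambda x \sim x

so the projected pixel is unchanged: homogeneous image coordinates are only defined up to a nonzero scale factor.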
The same thing happens when you backproject the point: you just obtain a direction, and then your estimation algorithm (e.g. least squares or a simpler geometric solution) takes care of estimating the depth.
The second reason for the difference you observe could simply be numerical imprecision. Since you haven't given any information about the magnitude of the difference, I'm not sure whether this is relevant to your case.
I need to do some polygon computations on a 2D plane, typically an isInside operation.
I found the boost::Polygon API, but my points are stored inside a single big array.
That's what I call indexed geometry.
See http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/
So my best option seems to be to use boost::Polygon and give it my array plus the indices of the points to use.
The objective is to avoid copying my millions of points (because each is shared by at least two polygons).
I don't know if the API allows it (or if I need to write my own derived class :-( ).
Maybe someone knows another API (inside Boost or elsewhere).
Thanks
Documentation
within demo: https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/within/within_2.html
Boost Geometry allows for adapted user-defined data types.
Specifically, C arrays are adapted here: https://www.boost.org/doc/libs/1_68_0/boost/geometry/geometries/adapted/c_array.hpp
I have another answer up where I show how to use Boost Geometry algorithms on a direct C array of structs (in that case I type-punned using tuple as the point type): How to calculate the convex hull with boost from arrays instead of setting each point separately? (The other answers there show alternatives that may be easier if you can afford to copy some data.)
The relevant algorithms would be:
https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/within.html
https://www.boost.org/doc/libs/1_68_0/libs/geometry/doc/html/geometry/reference/algorithms/disjoint.html
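For the isInside use case, here is a minimal Boost.Geometry sketch (the indexed make_polygon helper is my own invention; note this version copies the indexed points into a ring, whereas a fully zero-copy solution would adapt a custom indexed range through the traits mechanism linked above):

#include <cstddef>
#include <vector>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

namespace bg = boost::geometry;
using Point   = bg::model::d2::point_xy<double>;
using Polygon = bg::model::polygon<Point>;

// Build a polygon from a shared, interleaved x/y array plus this
// polygon's vertex indices (the "indexed geometry" from the question).
Polygon make_polygon(const double* xy, const std::vector<std::size_t>& idx) {
    Polygon poly;
    for (std::size_t i : idx)
        bg::append(poly, Point(xy[2 * i], xy[2 * i + 1]));
    bg::correct(poly);  // closes the ring and fixes orientation
    return poly;
}

int main() {
    const double vertices[] = {0,0, 4,0, 4,4, 0,4};   // big shared array
    std::vector<std::size_t> indices = {0, 1, 2, 3};  // one polygon's indices
    Polygon poly = make_polygon(vertices, indices);
    bool inside = bg::within(Point(1.0, 1.0), poly);  // the isInside test
    (void)inside;
}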
I have a dataset of custom abstract objects and a custom distance function. Are there any good SVM libraries that allow me to train on my custom objects (not 2D points) with my custom distance function?
I searched the answers to this similar Stack Overflow question, but none of them allows me to use custom objects and distance functions.
First things first.
An SVM does not work on distance functions; it only accepts dot products. So your distance function (actually a similarity; usually 1 - distance serves as a similarity) has to:
be symmetric: s(a,b) = s(b,a)
be positive definite: s(a,a) >= 0, and s(a,a) = 0 <=> a = 0
be linear in the first argument: s(ka,b) = k*s(a,b) and s(a+b,c) = s(a,c) + s(b,c)
This can be tricky to check, as you are actually asking: "is there a mapping phi from my objects into some vector space such that s(x,y) = <phi(x), phi(y)> is a dot product?", which leads to the definition of a so-called kernel, K(x,y) = <phi(x), phi(y)>. If your objects are themselves elements of a vector space, then sometimes it is enough to put phi(x) = x, thus K = s, but this is not true in general.
Once you have this kind of similarity, nearly any SVM library (for example libSVM) can work with a precomputed Gram matrix, which is simply defined as
G_ij = K(x_i, x_j)
thus requiring O(N^2) memory and time. Consequently, it does not matter what your objects are, as the SVM only works on pairwise dot products, nothing more.
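As a minimal sketch of that precomputation step (plain C++, independent of any particular SVM library; how to hand the matrix to, e.g., libSVM's precomputed-kernel mode is not shown here):

#include <cstddef>
#include <functional>
#include <vector>

// Build the Gram matrix G_ij = K(x_i, x_j) for an arbitrary object type T.
// The SVM never sees the objects themselves, only this N x N matrix.
template <typename T>
std::vector<std::vector<double>>
gram_matrix(const std::vector<T>& xs,
            const std::function<double(const T&, const T&)>& K) {
    const std::size_t n = xs.size();
    std::vector<std::vector<double>> G(n, std::vector<double>(n));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i; j < n; ++j)      // exploit symmetry of K
            G[i][j] = G[j][i] = K(xs[i], xs[j]);
    return G;
}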
If you lack the appropriate mathematical tools to show this property, what you can do is look into kernel learning from similarity. These methods are able to construct a valid kernel that behaves similarly to your similarity function.
Check out the following:
MLPack: a lightweight library that provides lots of functionality.
DLib: a very popular toolkit that is used both in industry and academia.
Apart from these, you can also use Python packages and call them from C++.
I see some options: you can use a list, a vector, a tuple, a custom datatype with x and y fields, etc. In most languages one way is preferred over the others for specific reasons; for example, in C++ it is much more performant to use a class.
Which is considered the idiomatic way to do it in Haskell?
The V2 type from the linear package (https://hackage.haskell.org/package/linear-1.16.2/docs/Linear-V2.html) is performant and has a whole bunch of standard operations defined around it.
I personally implement N-dimensional vectors with UArray, since such arrays are strict, and it lets me combine arrays of different norms. I even have an extension (actually a class Vector that generalizes Euclidean vectors) that lets me add matrix support.
Theoretically, let us assume we were to hard-code matrix multiplications for each different combination of 3D homogeneous (4x4) transformation matrix types (translation, rotation, scaling), and then also for each possible product of those (translation-rotation, translation-scaling, scaling-rotation)...
Suppose we were to handle matrix multiplication like that: a different function for each combination of matrix types, where each matrix carries an extra type tag, and with the specific function to use being determined at runtime (via a function-pointer array). If we applied this kind of matrix multiplication, could it theoretically be faster than basic, standard 4x4 homogeneous matrix multiplication (which is itself already faster than generic 4x4 matrix multiplication)?
I'm doing this right now; it's kind of hellish to code. I'm going to test it against standard matrix multiplication in the end and compare the results. I just wanted to see what other people think the results might be. Any ideas?
I think a better idea is to store only the position and orientation of an object instead of the whole matrix. You compute the matrix only once, for rendering purposes, after all transformations have been applied. The transformations themselves are done by adding translations (for the position) and multiplying quaternions (for the orientation).
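A minimal sketch of that representation (plain C++; all type and function names here are illustrative, not from any particular engine):

#include <array>

struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; };  // unit quaternion; w is the scalar part

Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Hamilton product: composes two rotations (b is applied first, then a).
Quat mul(Quat a, Quat b) {
    return {a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w};
}

// Rotate a vector by a unit quaternion: v' = v + w*t + u x t, with t = 2(u x v).
Vec3 rotate(Quat q, Vec3 v) {
    Vec3 u{q.x, q.y, q.z};
    Vec3 t = scale(cross(u, v), 2.0);
    return add(v, add(scale(t, q.w), cross(u, t)));
}

struct Transform {
    Vec3 position{0, 0, 0};
    Quat orientation{1, 0, 0, 0};  // identity rotation
};

// Compose: apply child in the local frame of parent (no 4x4 product needed).
Transform compose(const Transform& parent, const Transform& child) {
    return {add(parent.position, rotate(parent.orientation, child.position)),
            mul(parent.orientation, child.orientation)};
}

// Expand to a column-major 4x4 matrix once, only when rendering.
std::array<double, 16> to_matrix(const Transform& t) {
    const Quat& q = t.orientation;
    double xx = q.x * q.x, yy = q.y * q.y, zz = q.z * q.z;
    double xy = q.x * q.y, xz = q.x * q.z, yz = q.y * q.z;
    double wx = q.w * q.x, wy = q.w * q.y, wz = q.w * q.z;
    return {1 - 2 * (yy + zz), 2 * (xy + wz),     2 * (xz - wy),     0,
            2 * (xy - wz),     1 - 2 * (xx + zz), 2 * (yz + wx),     0,
            2 * (xz + wy),     2 * (yz - wx),     1 - 2 * (xx + yy), 0,
            t.position.x,      t.position.y,      t.position.z,      1};
}

Composing two such transforms costs one quaternion product plus one vector rotation instead of a full 4x4 matrix product, and a unit quaternion cannot drift into a shear the way a repeatedly multiplied matrix can (it only needs occasional renormalization). Note this covers rigid transforms only; the scaling case from the question would need an extra per-transform scale factor.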