How to compute right kernel of a matrix with Eigen library? - c++

I've started implementing an algorithm with the Eigen library. I need to calculate the null space (kernel) of a matrix. I have tried it with a cube's matrix:
0, 0, 1,
0, 1, 0,
1, 0, 0,
-1, 0, 0,
0, 0, -1,
0, -1, 0
Then I call the following (source):
A.transposeInPlace();
std::cout << "and after being transposed:\n" << A << std::endl;
FullPivLU<MatrixXf> lu(A);
MatrixXf A_null_space = lu.kernel();
std::cout << "Null space:\n" << A_null_space << std::endl;
A_null_space.transposeInPlace();
std::cout << "Null space Transposed_A:\n" << A_null_space;
I obtain,
0.5 0 -1 1 0 0 0 0 0 0.5
-0.5 0 -0 0 1 0 0 0 0 -0.5
0.5 0 -0 0 0 1 0 0 0 -0.5
0.5 0 -0 0 0 0 1 0 0 0.5
-1 0 1 0 0 0 0 1 0 -1
-0.5 0 1 0 0 0 0 0 1 -0.5
-0.5 1 -0 0 0 0 0 0 0 0.5
But I realized later on that its right kernel and left kernel seem to be the same, and the code snippet seemingly calculates the left kernel. The code gives crazy output on the other test case. So how can the right kernel be calculated? The link is also there to show the difference between right and left kernels with examples. However, if I remove the first line, the output is 0 0 0.
The problematic case is clearly:
MatrixXf A{10, 3};
A <<
1, 0, 1 ,
1, 0, 0 ,
0, 1, 1 ,
0, 1, 0 ,
0, 0, 1 ,
-1, 0, 0 ,
0, 0, -1 ,
0, -1, 1 ,
0, -1, 0 ,
-1, 0, 1;
Its expected output is:
1 0 0 0 0 0 0 -2 2 1
0 1 0 0 0 0 0 -1 1 1
0 0 1 0 0 0 0 -1 2 0
0 0 0 1 0 0 0 0 1 0
0 0 0 0 1 0 0 -1 1 0
0 0 0 0 0 1 0 1 -1 -1
0 0 0 0 0 0 1 1 -1 0
QR factorization,
HouseholderQR<MatrixXf> qr(A);
cout << "\nQR matrix to compare \n" << qr.matrixQR().transpose();
Then I get,
-1.41421 0 0.414214
-0.707107 -0.707107 -1
-0.707107 0.707107 1
0 0 1
-0.707107 0.707107 0
0.707107 0.707107 0
0.707107 -0.707107 0
-0.707107 0.707107 -1
0 0 -1
1.19209e-07 1.41421 5.96046e-08
Edit 2: Does Eigen calculate it wrongly?
Source
Edit 3:
I'm really, really confused, because both matrices seem right! How come?

As you observed, both matrices are valid right kernels. This is because they are two different bases of the same subspace. To check it, you can reduce both matrices to reduced row echelon form (the rref function in MATLAB, or see this online calculator). This transformation is unique and does not change the span defined by the matrix. Your reference kernel basis is already in this form, so all you have to do is reduce the one returned by Eigen and see that it gives you the same matrix as your reference.
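If you prefer to check this directly in Eigen rather than in MATLAB, here is a small sketch (my own addition, using single-precision matrices and a hypothetical helper sameSpan): two bases stored as columns span the same subspace exactly when stacking them side by side does not increase the rank.
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

// Returns true when the columns of B1 and B2 span the same subspace.
bool sameSpan(const MatrixXf& B1, const MatrixXf& B2)
{
    MatrixXf both(B1.rows(), B1.cols() + B2.cols());
    both << B1, B2;
    const int r1 = FullPivLU<MatrixXf>(B1).rank();
    const int r2 = FullPivLU<MatrixXf>(B2).rank();
    const int rb = FullPivLU<MatrixXf>(both).rank();
    return r1 == r2 && r2 == rb;   // stacking added no new directions
}

int main()
{
    // Tiny illustration: two different bases of the same plane in R^3.
    MatrixXf B1(3, 2), B2(3, 2);
    B1 << 1, 0,
          0, 1,
          0, 0;
    B2 << 1,  1,
          1, -1,
          0,  0;
    std::cout << std::boolalpha << sameSpan(B1, B2) << "\n"; // prints: true
}
You would feed it your reference basis (transposed into columns) and the kernel() result as the two arguments.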

Related

OpenCV's connectedComponentsWithStats() returns NaN position for first centroid

I am using the OpenCV C++ API (version 4.7.0). I have an application where I need to determine the centroid of a laser spot in a camera feed, and to do that I am attempting to use the connectedComponentsWithStats() function.
In my current setup, the image data is defined in a text file as a 2D array. This data is for a binary image, which is what connectedComponentsWithStats() expects. The size of the image is 625x889. Here is what the binary image looks like:
This is the result of a binary thresholding operation.
Here is my C++ code for computing the centroids of the connected components:
/* Grouping and centroiding implementation using OpenCV's
* ConnectedComponentsWithStats().
*/
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <iostream>
#include <fstream>
#include <opencv2/opencv.hpp>
using namespace std;
int main()
{
    uint32_t ROWS=625;
    uint32_t COLS=889;
    uint8_t image_array[ROWS][COLS];
    uint8_t *image_data = new uint8_t[ROWS * COLS];
    int i;
    int j;

    //Read data from file and save into image_array[][].
    ifstream fp("./data/binary_image/binary_image1.txt");
    if (! fp)
    {
        cout << "Error opening file" << endl;
        return 1;
    }
    for (i=0; i<ROWS; i++)
    {
        for (j=0; j<COLS; j++)
        {
            fp >> image_array[i][j];
            if (! fp)
            {
                cout << "Error reading file for element " << i << "," << j << endl;
                return 1;
            }
        }
    }

    //Copy data into a 1D array called image_data.
    for (i=0; i<ROWS; i++)
    {
        for (j=0; j<COLS; j++)
        {
            if (image_array[i][j] == 1)
            {
                image_array[i][j] = 255;
            }
            image_data[i * COLS + j] = image_array[i][j];
        }
    }

    //Create image object.
    cv::Mat image(ROWS, COLS, CV_8U, image_data);

    //Create output objects.
    cv::Mat labels, stats, centroids;

    //Pass image to connectedComponentsWithStats() function.
    int num_components = cv::connectedComponentsWithStats(image, labels, stats, centroids);

    //Print out the centroids
    for (i=0; i<num_components; i++)
    {
        cout << "Centroid "<< i << ": (" << centroids.at<double>(i, 0) << ", " << centroids.at<double>(i,1) << ")" << endl;
    }
    return 0;
}
Not that it is important, but this is how I compiled the code:
g++ `pkg-config --cflags opencv4` -c opencv_grouping.cpp
g++ opencv_grouping.o `pkg-config --libs opencv4` -o opencv_test
This is the output after running ./opencv_test:
Centroid 0: (nan, nan)
Centroid 1: (444, 312)
I am not really sure why I am getting the nan result, but I suppose this means that the area of that connected component is 0? As for the (444, 312) result, that just corresponds (roughly) to the center of a 625x889 grid, which is the size of the input image. So it essentially just computed the center of that grid. I tried several other input images of the same size that were generated using different thresholds (so the spot size varied), and it yielded exactly the same result.
Does anyone have any insight/suggestions as to what might be going on here? Does OpenCV provide other functions that can be used to compute the centroid of the spot?
--Update--
I think I found out where the problem is coming from. Something very strange is happening when I read in the data from the text file: it's not reading the data correctly at all. It gets to row 440, and then the rest of the data is either 0 or some garbage value (a really large negative number, for example). I am not sure why this is happening. In addition to declaring image_array as uint8_t image_array[ROWS][COLS], I have also tried allocating memory for it with new:
uint8_t **image_array = new uint8_t*[ROWS];
for (i=0; i<ROWS; i++)
{
image_array[i] = new uint8_t[COLS];
}
Both of these run into the same problem. As a sanity check, I created a smaller (19x20) array to test with. It looks like this in the text file:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 255 255 255 0 0 0 0 0 0 255 255 255 255 255 0 0 0
0 0 0 255 255 255 0 0 0 0 0 0 0 255 255 255 0 0 0 0
0 0 255 255 255 255 0 0 0 0 0 0 0 0 255 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 255 255 0 0 0 0 0 0 0 0 255 0 0 0 0 0
0 0 0 255 255 255 255 255 0 0 0 0 255 255 255 255 0 0 0 0
0 0 0 0 255 255 255 255 255 0 0 255 255 255 0 0 0 0 0 0
0 0 0 0 0 0 255 255 255 255 255 255 255 0 0 0 0 0 0 0
0 0 0 0 0 0 0 255 255 255 255 255 255 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 255 255 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I run this through my code (with ROWS=19 and COLS=20) and it actually gives a reasonable output for the centroids:
Centroid 0: (9.63497, 9.22086)
Centroid 1: (3.8, 3.1)
Centroid 2: (14, 2.55556)
Centroid 3: (8.71429, 10.2857)
So the issue is that the larger array is not getting read from the text file correctly. Any suggestions?
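One thing that may be worth checking (an assumption on my part, not something confirmed above): operator>> into a uint8_t extracts a single character rather than a whole number, because uint8_t is an alias for unsigned char. Reading into a temporary int and then narrowing avoids that; a minimal sketch of just the read loop, reusing the names from the code above:
int value;
for (i = 0; i < ROWS; i++)
{
    for (j = 0; j < COLS; j++)
    {
        if (!(fp >> value))   // parse the token as a number, not a character
        {
            cout << "Error reading file for element " << i << "," << j << endl;
            return 1;
        }
        image_array[i][j] = static_cast<uint8_t>(value);  // then store it as a byte
    }
}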

What is the 5x5 equivalent of the 3x3 emboss kernel?

-2 -1 0
-1 1 1
0 1 2
This is the 3x3 emboss kernel. How should I write it as 5x5?
As I understand it, these filters take directional differences (see the Wikipedia page).
We can decompose your filter into directions:
 0 -1  0     0  0  0    -2  0  0
 0  0  0    -1  0  1     0  0  0
 0  1  0     0  0  0     0  0  2
So I think you can expand it over these 3 directions, giving emphasis:
 0  0 -1  0  0     0  0  0  0  0    -2  0  0  0  0
 0  0 -1  0  0     0  0  0  0  0     0 -2  0  0  0
 0  0  0  0  0    -1 -1  0  1  1     0  0  0  0  0
 0  0  1  0  0     0  0  0  0  0     0  0  0  2  0
 0  0  1  0  0     0  0  0  0  0     0  0  0  0  2
So, the final kernel would be
-2 0 -1 0 0
0 -2 -1 0 0
-1 -1 1 1 1
0 0 1 2 0
0 0 1 0 2
Maybe you can also try interpolating the filter coefficients marked as x:
-2 x -1 0 0
x -2 -1 0 0
-1 -1 1 1 1
0 0 1 2 x
0 0 1 x 2
The simple solution for fitting any lower-dimensional convolution kernel into a higher-dimensional matrix of the same rank is to surround it with zero weights. This is especially true when you're dealing with a concept like embossing, which is arguably more interested in the immediate vector of change than in the rate at which that change is changing. That is, for this embossing matrix,
-2 -1 0
-1 1 1
0 1 2
you could equivalently use this in 5x5:
0 0 0 0 0
0 -2 -1 0 0
0 -1 1 1 0
0 0 1 2 0
0 0 0 0 0
Granted, this will give you a different visual effect than a kernel with the outer ring filled in; but sometimes, especially with edge detection, immediate clarity is more important. We aren't always displaying the result. If this were something like a Gaussian blur kernel, having a greater range could improve the effect, but embossing isn't that different conceptually from Sobel-Feldman, and it may be better to keep it tight.
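Either 5x5 variant is applied the usual way with a 2D convolution. A minimal sketch with OpenCV's filter2D (OpenCV and the file names are my own assumptions; the question doesn't name a library), using the expanded 5x5 kernel from the first answer:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    // Expanded 5x5 emboss kernel proposed above.
    cv::Mat kernel = (cv::Mat_<float>(5, 5) <<
        -2,  0, -1,  0,  0,
         0, -2, -1,  0,  0,
        -1, -1,  1,  1,  1,
         0,  0,  1,  2,  0,
         0,  0,  1,  0,  2);

    cv::Mat dst;
    cv::filter2D(src, dst, -1, kernel);   // -1 keeps the source depth
    cv::imwrite("embossed.png", dst);
}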

Problem with rotating an object to face another one [C++]

I want to rotate an object so that its face side points toward the center of another object, but I have some problems with it: when I try to rotate an object toward another one that lies on the X axis, it works properly [first two screenshots], but when I try to rotate it as on the screenshot, everything breaks down [second two screenshots].
Before1:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
After1:
0 0 -1 0
-0 -1 0 0
1 0 0 0
0 0 0 1
Before2:
0 0 -1 0
-0 -1 0 0
1 0 0 0
0 0 0 1
After2:
0 0 -0.707107 0
0.5 -0.5 0 0
0.707107 -0.707107 0 0
0 0 0 1
Here's my code:
void ConcreteObject::faceObjectTo(ConcreteObject otherObject) {
    Vector<double> temp = {0, 1, 0};
    Vector<double> forward = otherObject.getCenter() - this->getCenter();
    forward.normalize();
    Vector<double> right = temp.cross(forward);
    right.normalize();
    Vector<double> up = forward.cross(right);
    Matrix<double> newMatrix = this->getTransformMatrix().getCurrentState();
    newMatrix(0, 0) = right[0];
    newMatrix(0, 1) = right[1];
    newMatrix(0, 2) = right[2];
    newMatrix(1, 0) = up[0];
    newMatrix(1, 1) = up[1];
    newMatrix(1, 2) = up[2];
    newMatrix(2, 0) = forward[0];
    newMatrix(2, 1) = forward[1];
    newMatrix(2, 2) = forward[2];
    TransformMatrix newObjectMatrix(newMatrix);
    this->setTransformMatrix(newObjectMatrix);
}
You need to normalize right: there's no reason for temp and forward to be orthogonal, hence even if they are unit vectors, their cross product need not be.
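To see why: the length of the cross product of two unit vectors is the sine of the angle between them, so it is only 1 when they are perpendicular. A small sketch with Eigen (Eigen is my assumption; the question uses its own Vector/Matrix types):
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    Vector3d temp(0, 1, 0);                             // the fixed "up-ish" helper
    Vector3d forward = Vector3d(1, 2, 0).normalized();  // unit, but not orthogonal to temp

    Vector3d right = temp.cross(forward);
    std::cout << "|right| before normalize: " << right.norm() << "\n"; // less than 1 here

    right.normalize();
    Vector3d up = forward.cross(right);                 // unit automatically now
    std::cout << "|up|: " << up.norm() << "\n";         // 1
}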

Can't access elements of an eigen linear system solution

I've just started using Eigen, but for some strange reason I'm struggling with something that should be simple. The code below is a simplified version of a similar computation I would like to perform (solve x in Ax = b).
Input:
auto N = 10;
auto A = Matrix<Float, Dynamic, Dynamic>::Identity(N, N);
auto b = Matrix<Float, Dynamic, 1>::Constant(N, 1, 1);
std::cout << "A: " << std::endl
<< A << std::endl
<< "b: " << std::endl
<< b << std::endl;
auto x = A.fullPivLu().solve(b);
std::cout << "x(" << x.rows() << ", " << x.cols()
<< "): " << std::endl << x << std::endl;
Output:
A:
1 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 1
b:
1
1
1
1
1
1
1
1
1
1
x(10, 1):
mouse: /home/jansen/devel/build/external/eigen/include/eigen3/Eigen/src/Core/Block.h:119: Eigen::Block<Eigen::Matrix<double, -1, -1, 0, -1, -1>, 1, -1, false>::Block(XprType &, Index) [XprType = Eigen::Matrix<double, -1, -1, 0, -1, -1>, BlockRows = 1, BlockCols = -1, InnerPanel = false]: Assertion `(i>=0) && ( ((BlockRows==1) && (BlockCols==XprType::ColsAtCompileTime) && i<xpr.rows()) ||((BlockRows==XprType::RowsAtCompileTime) && (BlockCols==1) && i<xpr.cols()))' failed.
[1] 21192 abort (core dumped) ./src/mouse
A and b are well formed and the solution x even has the right dimensions, but whenever I try to access an element of x I get an assertion failure. From the assertion I deduce that some sort of out-of-bounds error happens, but I can't figure out why.
Please don't abuse auto with expression-template libraries; see this page. Typically, in your case, x is not a Matrix<> object but an abstract expression object saying that A\b has to be computed... The solution is thus:
Matrix<Float, Dynamic, 1> x = A.fullPivLu().solve(b);
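For completeness, a minimal sketch of the whole snippet with concrete types throughout (assuming Float is float; note that A and b are also expression objects when declared with auto, since Identity and Constant return expressions):
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    const int N = 10;
    // Concrete types force the expressions to be evaluated into real matrices.
    MatrixXf A = MatrixXf::Identity(N, N);
    VectorXf b = VectorXf::Constant(N, 1.0f);

    VectorXf x = A.fullPivLu().solve(b);      // x is now an actual vector
    std::cout << "x(0) = " << x(0) << "\n";   // element access works
}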

Extract the adjacency matrix from a BGL graph

Using the Boost Graph Library I am looking for a way to extract the adjacency matrix from an underlying graph represented by either boost::adjacency_list or boost::adjacency_matrix. I'd like to use this matrix in conjunction with boost::numeric::ublas to solve a system of simultaneous linear equations.
Here is a minimal example to get you going:
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/adjacency_matrix.hpp>
using namespace boost;
typedef boost::adjacency_list< listS, vecS, directedS > ListGraph;
typedef boost::adjacency_matrix< directedS > MatrixGraph;
int main(){
    ListGraph lg;
    add_edge (0, 1, lg);
    add_edge (0, 3, lg);
    add_edge (1, 2, lg);
    add_edge (2, 3, lg);
    //How do I get the adjacency matrix underlying lg?

    MatrixGraph mg(3);
    add_edge (0, 1, mg);
    add_edge (0, 3, mg);
    add_edge (1, 2, mg);
    add_edge (2, 3, mg);
    //How do I get the adjacency matrix underlying mg?
}
If anyone could come up with an efficient way to obtain the adjacency matrix I would be much obliged. Ideally the solution is compatible with uBLAS. I wonder if there is a way to avoid iteration through the entire graph.
The easiest way to convert an adjacency_list into an adjacency_matrix is to use boost::copy_graph. Your code for MatrixGraph mg should be modified as follows:
#include <boost/graph/copy.hpp>
#include <cassert>
using namespace boost;
typedef boost::adjacency_list< listS, vecS, directedS > ListGraph;
typedef boost::adjacency_matrix< directedS > MatrixGraph;
int main(){
    ListGraph lg;
    add_edge(0, 1, lg);
    add_edge(0, 3, lg);
    add_edge(1, 2, lg);
    add_edge(2, 3, lg);
    //How do I get the adjacency matrix underlying lg?
    //How do I get the adjacency matrix underlying mg?

    MatrixGraph mg( num_vertices(lg));
    boost::copy_graph(lg, mg);
}
Now, to use the adjacency matrix with uBLAS or similar, you can write a simple accessor class to make the syntax more uBLAS-compliant. Continuing the previous snippet, we get:
template <class Graph>
class MatrixAccessor
{
public:
    typedef typename Graph::Matrix Matrix; //actually a vector<
    typedef typename Matrix::const_reference const_reference;

    MatrixAccessor(const Graph* g)
        : m_g(g)
    {
        static_assert(boost::is_same<size_t, typename Graph::vertex_descriptor>::value, "Vertex descriptor should be of integer type");
    }

    const_reference operator()(size_t u, size_t v) const
    {
        return m_g->get_edge(u, v);
    }

    const Graph* m_g;
};

void use_matrix(const MatrixGraph & mg)
{
    MatrixAccessor<MatrixGraph> matr(&mg);
    assert(matr(0, 1) == 1);
    assert(matr(0, 2) == 0);
}
In case your adjacency_matrix has some edge-bundled properties, you might need to modify the operator() in MatrixAccessor.
Depending on how much uBLAS you use, you can refine MatrixAccessor further. For example, the out_edge_iterator for a given vertex of a MatrixGraph is actually an iterator over a matrix column, a vertex_iterator can be treated as an iterator over matrix rows, etc.
Of course, the graph's matrix is immutable and as such should be used with care.
Just as an easy way (I don't know how efficient it is), this is what I came up with:
I used a small-world graph and printed the adjacency matrix.
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/small_world_generator.hpp>
#include <boost/random/linear_congruential.hpp>
using namespace std;
using namespace boost;
typedef adjacency_list<vecS, vecS, undirectedS> Graph;
typedef small_world_iterator<boost::minstd_rand, Graph> SWGen;
int main()
{
    boost::minstd_rand gen;
    int N = 20;
    int degree = 4;
    double rewiring = 0.;
    Graph g(SWGen(gen, N, degree, rewiring), SWGen(), 20);
    cout << num_edges(g) << '\n';

    typedef graph_traits<Graph>::edge_iterator edge_iterator;
    pair<edge_iterator, edge_iterator> ei = edges(g);
    for (edge_iterator edge_iter = ei.first; edge_iter != ei.second; ++edge_iter) {
        cout << "(" << source(*edge_iter, g) << ", " << target(*edge_iter, g) << ")\n";
    }

    vector<vector<int> > mat(N, vector<int>(N));
    for (edge_iterator edge_iter = ei.first; edge_iter != ei.second; ++edge_iter){
        int a = source(*edge_iter, g);
        int b = target(*edge_iter, g);
        mat[a][b] = 1;
        mat[b][a] = 1;
    }

    for (int i=0; i<N; i++){
        for (int j=0; j<N; j++){
            cout << mat[i][j] << " ";
        }
        cout << endl;
    }
    return 0;
}
Output:
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 1
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1
1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0
The current revision of adjacency_matrix has an undocumented public member m_matrix (see line 640). However, it is a flat vector of tuples <bool, bundled_properties> (line 512). Since the underlying storage looks so different from a uBLAS matrix, it is most likely not possible to convert a graph to a matrix other than by iterating over edges.
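If iterating over the edges is acceptable, filling a uBLAS matrix directly is only a few lines. A minimal sketch (my own addition, not part of the answers above, assuming an unweighted directed graph like the one in the question):
#include <boost/graph/adjacency_list.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <iostream>

using namespace boost;
typedef adjacency_list< listS, vecS, directedS > ListGraph;

int main()
{
    ListGraph lg;
    add_edge(0, 1, lg);
    add_edge(0, 3, lg);
    add_edge(1, 2, lg);
    add_edge(2, 3, lg);

    const std::size_t n = num_vertices(lg);
    numeric::ublas::matrix<int> adj(n, n);
    adj.clear();   // zero out every entry first

    // One pass over the edges fills the adjacency matrix.
    typedef graph_traits<ListGraph>::edge_iterator edge_iterator;
    std::pair<edge_iterator, edge_iterator> ep = edges(lg);
    for (edge_iterator e = ep.first; e != ep.second; ++e)
        adj(source(*e, lg), target(*e, lg)) = 1;

    std::cout << adj << std::endl;   // uBLAS' own stream output
}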