Calculate gradient directions - C++

I want to calculate the angles of the gradients from a depth map and group them into a number of directions (8 sectors).
But my function only ever produces the first 3 directions.
cv::Mat calcAngles(cv::Mat dimg) // dimg is the depth map
{
    const int directions_num = 8; // number of directions
    const int degree_grade = 360;
    int range_coeff = 255 / (directions_num + 1); // just for visualization
    cv::Mat x_edge, y_edge, full_edge, angles;
    dimg.copyTo(x_edge);
    dimg.copyTo(y_edge);
    dimg.copyTo(full_edge);
    // compute gradients
    Sobel( dimg, x_edge, CV_8U, 1, 0, 5, 1, 19, 4 );
    Sobel( dimg, y_edge, CV_8U, 0, 1, 5, 1, 19, 4 );
    Sobel( dimg, full_edge, CV_8U, 1, 1, 5, 1, 19, 4 );
    float freq[directions_num + 1]; // to collect each direction's frequency
    memset(freq, 0, sizeof(freq));
    angles = cv::Mat::zeros(dimg.rows, dimg.cols, CV_8U); // store directions here
    for(int i = 0; i < angles.rows; i++)
    {
        for(int j = 0; j < angles.cols; j++)
        {
            // fastAtan2 returns values from 0 to 360, if I'm not mistaken.
            // I want to group the angles into directions_num sectors.
            // I use the first 'direction' (zero value) for zero values from the
            // depth map (a zero value in my depth map means a bad pixel).
            angles.at<uchar>(i, j) = (((int)cv::fastAtan2(y_edge.at<uchar>(i, j), x_edge.at<uchar>(i, j)))
                    / (degree_grade / directions_num) + 1) * (dimg.at<uchar>(i, j) ? 1 : 0);
            freq[angles.at<uchar>(i, j)] += 1;
        }
    }
    for(int i = 0; i < directions_num + 1; i++)
    {
        printf("%2.2f\t", freq[i]);
    }
    printf("\n");
    angles *= range_coeff; // for visualization
    return angles;
}
Output from one of the frames:
47359.00 15018.00 8199.00 6224.00 0.00 0.00 0.00 0.00 0.00
(the first value is the "zero pixel" count; the following values are the number of gradients falling in each sector, but only 3 of them are non-zero)
Visualization
Is there a way to fix this, or is this result to be expected?
PS Sorry for my writing mistakes. English is not my native language.

You used the CV_8U type for the Sobel output. It is an unsigned 8-bit integer, so it can store only non-negative values. That's why fastAtan2 returns values less than or equal to 90. Change the type to CV_16S and use short for accessing the elements:
cv::Sobel(dimg, x_edge, CV_16S, 1, 0, 5, 1, 19, 4);
cv::Sobel(dimg, y_edge, CV_16S, 0, 1, 5, 1, 19, 4);
cv::fastAtan2(y_edge.at<short>(i, j), x_edge.at<short>(i, j))
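Putting it together, a minimal sketch of the corrected loop might look like this (it keeps the question's parameters and variable names; the frequency histogram is omitted for brevity, and the sector index is wrapped with % in case fastAtan2 ever returns exactly 360):
cv::Mat calcAngles(cv::Mat dimg)
{
    const int directions_num = 8;
    const int degree_grade = 360;
    int range_coeff = 255 / (directions_num + 1);
    cv::Mat x_edge, y_edge;
    // Signed 16-bit output keeps the negative gradient values.
    cv::Sobel(dimg, x_edge, CV_16S, 1, 0, 5, 1, 19, 4);
    cv::Sobel(dimg, y_edge, CV_16S, 0, 1, 5, 1, 19, 4);
    cv::Mat angles = cv::Mat::zeros(dimg.rows, dimg.cols, CV_8U);
    for (int i = 0; i < angles.rows; i++)
    {
        for (int j = 0; j < angles.cols; j++)
        {
            float a = cv::fastAtan2(y_edge.at<short>(i, j), x_edge.at<short>(i, j)); // 0..360
            int sector = ((int)a / (degree_grade / directions_num)) % directions_num + 1; // 1..8
            angles.at<uchar>(i, j) = dimg.at<uchar>(i, j) ? sector : 0; // 0 marks bad depth pixels
        }
    }
    angles *= range_coeff; // for visualization
    return angles;
}
With signed gradients the angles cover the whole 0-360 range, so all 8 sectors should receive counts.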

Related

OpenCV C++ - How can I replace the value of each pixel with the average value of the grayscale in a 3x3 neighborhood?

I'm new to OpenCV (in C++) and image processing. Given a grayscale image, I want to replace the value of each pixel with the average grayscale value of its 3x3 neighborhood.
First of all I open the image
Mat img = imread(samples::findFile(argv[1]), IMREAD_GRAYSCALE);
// Example of image
[4 3 9 1,
2 9 8 0,
3 5 2 1,
7 5 8 3]
In order to get the average of the 3x3 neighborhood even at the corner pixels (top left, top right, bottom left and bottom right), I pad the image with a 1-pixel constant border on each side:
Mat imgPadding;
copyMakeBorder(img, imgPadding, 1,1,1,1, BORDER_CONSTANT, Scalar(0));
// Padding example
[0 0 0 0 0 0,
0 4 3 9 1 0,
0 2 9 8 0 0,
0 3 5 2 1 0,
0 7 5 8 3 0,
0 0 0 0 0 0]
Now I'm having trouble with the output image. I have tried several approaches, but none of them works. I tried the following, using the mean() function to get the average grayscale value of each 3x3 window obtained with Rect(). The for loop starts at the first non-padding pixel and ends at the last non-padding pixel.
Mat imgAvg = Mat::zeros(img.rows, img.cols, img.type());
// initialization of the output Mat object with same input size and type
for (int i = 1; i < imgAvg.rows; i++)
    for (int j = 1; j < imgAvg.cols; j++)
        imgAvg.at<Scalar>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)));
but I got this runtime error
main: malloc.c:2379: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.
I also tried arbitrarily reducing the range
for (int i = 1; i < imgAvg.rows - 35; i++)
    for (int j = 1; j < imgAvg.cols - 35; j++)
        imgAvg.at<Scalar>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)));
and I got this weird output: screenshot
Thanks in advance!
EDIT:
Thank you all for the answers, I didn't know yet the blur() function.
In this way I import the image and simply call the blur function
Mat img = imread(samples::findFile(argv[1]), IMREAD_GRAYSCALE);
Mat imgAvg = Mat::zeros(img.rows, img.cols, img.type());
blur(img, imgAvg, Size(3, 3));
But since I'm still a beginner, and I think the purpose of the exercise assigned to me was to write "handmade" code, I also tried this working solution
for (int i = 1; i <= imgAvg.rows; i++)
    for (int j = 1; j <= imgAvg.cols; j++)
        imgAvg.at<uint8_t>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)))[0];
Result of the algorithm (identical for both solutions)
Just apply a smoothing filter to the image - the blur function in the imgproc module should accomplish what you need. A good example is in the documentation: https://docs.opencv.org/3.4/dc/dd3/tutorial_gausian_median_blur_bilateral_filter.html
The arguments you need are the source image (img), a destination image (dst), and the kernel size (ksize), which in this case is 3:
src = ...
Mat dst = Mat::zeros( src.size(), src.type() );
blur( src, dst, Size( 3, 3 ) );
Smoothing manually will not be as performant, and is more prone to error.
Good luck!
What you want to do is called "box filtering" in image processing. In OpenCV you do:
cv::blur(src_img,
         dest_img,         // same shape and type as src, cannot be src
         cv::Size(3, 3));  // use a kernel of size 3x3
The default padding is to reflect the border pixel, which won't skew the image statistics. See the documentation if you prefer a different border mode.
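If you specifically want to reproduce the zero padding from the question, blur() also takes an optional anchor and border type; a sketch under the assumption that BORDER_CONSTANT pads with zeros here (check the border-type table in the docs for your OpenCV version):
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    Mat img = imread(samples::findFile(argv[1]), IMREAD_GRAYSCALE);
    Mat imgAvg;
    // Same 3x3 box filter, but extrapolating the border with a constant
    // value instead of the default reflection.
    blur(img, imgAvg, Size(3, 3), Point(-1, -1), BORDER_CONSTANT);
    return 0;
}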

Evenly distribute values into array

I have a fixed-size boolean array of size 8. The default value of all elements in the array is false. There will be between 1 and 8 truth values to fill in.
I want to distribute the truth values as far away from one another as possible. I also wish to be able to randomize the configuration. In this scenario the array wraps around so position 7 is "next to" position 0 in the array.
Here are some examples for various fill values. I didn't include all the possibilities, but hopefully it gets my point across.
1: [1, 0, 0, 0, 0, 0, 0, 0] or [0, 1, 0, 0, 0, 0, 0, 0]
2: [1, 0, 0, 0, 1, 0, 0, 0] or [0, 1, 0, 0, 0, 1, 0, 0]
3: [1, 0, 0, 1, 0, 0, 1, 0] or [0, 1, 0, 0, 1, 0, 0, 1]
4: [1, 0, 1, 0, 1, 0, 1, 0] or [0, 1, 0, 1, 0, 1, 0, 1]
5: [1, 1, 0, 1, 1, 0, 1, 0]
6: [1, 1, 0, 1, 1, 1, 0, 1]
7: [1, 1, 1, 1, 1, 1, 1, 0]
8: [1, 1, 1, 1, 1, 1, 1, 1]
The closest solution I have come up with so far hasn't quite produced the results I'm looking for.
I intend to write it in C++, but here is a little pseudo-code of my algorithm so far; it is not quite working out how I wanted.
truths = randBetween(1, 8)
values = [0,0,0,0,0,0,0,0]
startPosition = randBetween(0, 7) //starting index
distance = 4
for (i = 0; i < truths; i++) {
    pos = i + startPosition + (i * distance)
    values[pos % 8] = 1
}
This is an example output from my current code; the lines marked with a star are incorrect.
[0, 0, 0, 0, 1, 0, 0, 0]
[0, 1, 0, 0, 1, 0, 0, 0]*
[0, 1, 0, 0, 1, 0, 1, 0]
[0, 1, 0, 1, 1, 0, 1, 0]*
[1, 1, 0, 1, 1, 0, 1, 0]
[1, 1, 0, 1, 1, 1, 1, 0]*
[1, 1, 1, 1, 1, 1, 1, 0]
[1, 1, 1, 1, 1, 1, 1, 1]
I'm looking for a simple way to distribute the truth values evenly throughout the array without having to code for special cases.
Check this out:
#include <cassert>
#include <vector>
#include <iostream>
#include <iomanip>

/**
 * Generate an evenly spaced pattern of ones
 * @param arr      destination vector of ints
 * @param onescnt  the requested number of ones
 */
static inline
void gen(std::vector<int>& arr, size_t onescnt) {
    const size_t len = arr.size();
    const size_t zeroscnt = len - onescnt;
    size_t ones = 1;
    size_t zeros = 1;
    for (size_t i = 0; i < len; ++i) {
        if (ones * zeroscnt < zeros * onescnt) {
            ones++;
            arr[i] = 1;
        } else {
            zeros++;
            arr[i] = 0;
        }
    }
}

static inline
size_t count(const std::vector<int>& arr, int el) {
    size_t cnt = 0;
    for (size_t i = 0; i < arr.size(); ++i) {
        cnt += arr[i] == el;
    }
    return cnt;
}

static inline
void gen_print(size_t len, size_t onescnt) {
    std::vector<int> arr(len);
    gen(arr, onescnt);
    std::cout << "gen_printf(" << std::setw(2) << len << ", " << std::setw(2) << onescnt << ") = {";
    for (size_t i = 0; i < len; ++i) {
        std::cout << arr[i] << ",";
    }
    std::cout << "}\n";
    assert(count(arr, 1) == onescnt);
}

int main() {
    for (int i = 0; i <= 8; ++i) {
        gen_print(8, i);
    }
    for (int i = 0; i <= 30; ++i) {
        gen_print(30, i);
    }
    return 0;
}
Generates:
gen_printf( 8, 0) = {0,0,0,0,0,0,0,0,}
gen_printf( 8, 1) = {0,0,0,0,0,0,0,1,}
gen_printf( 8, 2) = {0,0,0,1,0,0,0,1,}
gen_printf( 8, 3) = {0,1,0,0,1,0,0,1,}
gen_printf( 8, 4) = {0,1,0,1,0,1,0,1,}
gen_printf( 8, 5) = {1,0,1,1,0,1,0,1,}
gen_printf( 8, 6) = {1,1,0,1,1,1,0,1,}
gen_printf( 8, 7) = {1,1,1,1,1,1,0,1,}
gen_printf( 8, 8) = {1,1,1,1,1,1,1,1,}
gen_printf(30, 0) = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,}
gen_printf(30, 1) = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,}
gen_printf(30, 2) = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,}
gen_printf(30, 3) = {0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,}
gen_printf(30, 4) = {0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,}
gen_printf(30, 5) = {0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,}
gen_printf(30, 6) = {0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1,}
gen_printf(30, 7) = {0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,1,0,0,0,0,1,}
gen_printf(30, 8) = {0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,1,0,0,0,1,0,0,0,1,0,0,0,1,}
gen_printf(30, 9) = {0,0,1,0,0,1,0,0,0,1,0,0,1,0,0,1,0,0,0,1,0,0,1,0,0,1,0,0,0,1,}
gen_printf(30, 10) = {0,0,1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,1,}
gen_printf(30, 11) = {0,1,0,0,1,0,0,1,0,1,0,0,1,0,0,1,0,0,1,0,1,0,0,1,0,0,1,0,0,1,}
gen_printf(30, 12) = {0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,}
gen_printf(30, 13) = {0,1,0,1,0,1,0,0,1,0,1,0,1,0,0,1,0,1,0,1,0,0,1,0,1,0,1,0,0,1,}
gen_printf(30, 14) = {0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,}
gen_printf(30, 15) = {0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,}
gen_printf(30, 16) = {1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,}
gen_printf(30, 17) = {1,0,1,0,1,0,1,1,0,1,0,1,0,1,1,0,1,0,1,0,1,1,0,1,0,1,0,1,0,1,}
gen_printf(30, 18) = {1,0,1,0,1,1,0,1,0,1,1,0,1,0,1,1,0,1,0,1,1,0,1,0,1,1,0,1,0,1,}
gen_printf(30, 19) = {1,0,1,1,0,1,1,0,1,0,1,1,0,1,1,0,1,1,0,1,0,1,1,0,1,1,0,1,0,1,}
gen_printf(30, 20) = {1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,}
gen_printf(30, 21) = {1,1,0,1,1,0,1,1,0,1,1,1,0,1,1,0,1,1,0,1,1,1,0,1,1,0,1,1,0,1,}
gen_printf(30, 22) = {1,1,0,1,1,1,0,1,1,1,0,1,1,0,1,1,1,0,1,1,1,0,1,1,1,0,1,1,0,1,}
gen_printf(30, 23) = {1,1,1,0,1,1,1,0,1,1,1,0,1,1,1,1,0,1,1,1,0,1,1,1,0,1,1,1,0,1,}
gen_printf(30, 24) = {1,1,1,0,1,1,1,1,0,1,1,1,1,0,1,1,1,1,0,1,1,1,1,0,1,1,1,1,0,1,}
gen_printf(30, 25) = {1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,0,1,}
gen_printf(30, 26) = {1,1,1,1,1,1,0,1,1,1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,1,1,1,1,0,1,}
gen_printf(30, 27) = {1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,0,1,}
gen_printf(30, 28) = {1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,}
gen_printf(30, 29) = {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,}
gen_printf(30, 30) = {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,}
Edit: better evenly spaced pattern.
Explanation:
So let's take an array of 8 ints and say we want to have 5 ones. The ideal ratio of ones to zeros in a sequence with 8 elements and 5 ones would be 5/3. We will never reach such a ratio exactly, but we can try.
The idea is to loop through the array and remember the number of ones and zeros we have written so far. If the ratio of written ones to written zeros is lower than the target ratio of ones to zeros we want to achieve, we put a one in the sequence; otherwise we put a zero. The ratio changes and we make the decision again next time. The idea is to pursue the ideal ratio of ones per zeros in each prefix of the array.
A simple way to do this would be to round the ideal fractional positions.
truths = randBetween(1, 8)
values = [0,0,0,0,0,0,0,0]
offset = randBetween(0, 8 * truths - 1)
for (i = 0; i < truths; i++) {
    pos = (offset + (i * 8)) / truths
    values[pos % 8] = 1
}
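A small C++ sketch of that pseudo-code, with the hypothetical randBetween replaced by <random> (variable names are mine):
#include <array>
#include <iostream>
#include <random>

int main()
{
    std::mt19937 eng{std::random_device{}()};
    int truths = std::uniform_int_distribution<>(1, 8)(eng);
    int offset = std::uniform_int_distribution<>(0, 8 * truths - 1)(eng);

    std::array<int, 8> values{};
    for (int i = 0; i < truths; i++) {
        int pos = (offset + i * 8) / truths; // floor of the ideal fractional position
        values[pos % 8] = 1;
    }

    for (int v : values) std::cout << v << ' ';
    std::cout << '\n';
}
Because consecutive ideal positions differ by 8/truths, flooring them never maps two ones onto the same slot.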
This is an application of Bresenham's line-drawing algorithm. I use it not because it's fast on old hardware, but because it places the true values exactly.
#include <iostream>
#include <stdexcept>
#include <string>
#include <random>

int main(int argc, char **argv) {
    try {
        // Read the argument.
        if(argc != 2) throw std::invalid_argument("one argument");
        int dy = std::stoi(argv[1]);
        if(dy < 0 || dy > 8) throw std::out_of_range("[0..8]");
        int values[8] = {0};
        // https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm
        int dx = 8;
        int delta = 2 * dy - dx; // Balance the line. Permute it up later.
        for(int x = 0; x < dx; x++) {
            if(delta > 0) {
                values[x] = 1;
                delta -= 2 * dx;
            }
            delta += 2 * dy;
        }
        for(int x = 0; x < dx; x++)
            std::cout << (x ? ", " : "") << values[x];
        std::cout << std::endl;
        // Rotate the number by a random amount.
        // I'm sure there is an easier way to do this.
        // https://stackoverflow.com/questions/7560114/random-number-c-in-some-range
        std::random_device rd;  // obtain a random number from hardware
        std::mt19937 eng(rd()); // seed the generator
        std::uniform_int_distribution<> distr(0, dx - 1);
        int rotate = distr(eng);
        bool first = true;
        int x = rotate;
        do {
            std::cout << (first ? "" : ", ") << values[x];
            first = false;
            x = (x + 1) % dx;
        } while(x != rotate);
        std::cout << std::endl;
    } catch(const std::exception &e) {
        std::cerr << "Something went wrong: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
Once you have an exact solution, rotate it by a random amount.
0, 1, 0, 0, 1, 0, 1, 0
1, 0, 0, 1, 0, 0, 1, 0
You need to calculate the distance dynamically. One element is trivial: it can reside at an arbitrary location.
2 elements are clear, too: the distance needs to be 4.
4 elements need a distance of 2.
8 elements need a distance of 1.
More difficult are counts that don't divide the array length evenly:
3 requires a distance of 2.66.
5 requires a distance of 1.6.
7 requires a distance of about 1.14 (8/7).
Errm... In general, if you have a distance of X.Y, you will have to place some of the elements at distances of X and some at distances of X + 1. X is simple, it will be the result of an integer division: 8 / numberOfElements. The remainder will determine how often you will have to switch to X + 1: 8 % numberOfElements. For 3, this will result in 2, too, so you will have 1x distance of 2 and 2x distance of 3:
[ 1 0 1 0 0 1 0 0 ]
2 3 3 (distance to very first 1)
For 5, you'll get: 8/5 = 1, 8%5 = 3, so: 2x distance of 1, 3x distance of 2
[ 1 1 1 0 1 0 1 0 ]
1 1 2 2 2
For 7 you'll get: 8/7 = 1, 8%7 = 1, so: 6x distance of 1, 1x distance of 2
[ 1 1 1 1 1 1 1 0 ]
1 1 1 1 1 1 2
That will work for an arbitrary array length L and element count n:
L/n = minimum distance
L%n = number of times to apply (minimum distance + 1)
n - L%n = number of times to apply the minimum distance
Mathematical metrics won't reveal any difference between applying all the smaller distances first and then all the larger ones; the human sense of aesthetics, though, might prefer it if you alternate between larger and smaller as often as possible, or if you apply the algorithm recursively (for larger array lengths) to get something like 2x2, 3x3, 2x2, 3x3 instead of 4x2 and 6x3.
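A minimal sketch of that gap-based construction for the 8-slot case (function and variable names are mine; it applies all the larger gaps first, so alternating them as described above would need an extra step):
#include <array>
#include <iostream>

// Place n ones into 8 circular slots using (8 % n) gaps of size 8/n + 1
// and (n - 8 % n) gaps of size 8/n, starting at 'start'.
std::array<int, 8> distribute(int n, int start = 0)
{
    std::array<int, 8> values{};
    if (n <= 0) return values;
    int minDist = 8 / n;
    int bigGaps = 8 % n; // gaps that get the larger distance
    int pos = start;
    for (int i = 0; i < n; i++) {
        values[pos % 8] = 1;
        pos += minDist + (i < bigGaps ? 1 : 0);
    }
    return values;
}

int main()
{
    for (int n = 1; n <= 8; n++) {
        for (int v : distribute(n)) std::cout << v << ' ';
        std::cout << '\n';
    }
}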

OpenCV col-wise standard deviation result vs MATLAB

I've seen linked questions but I can't understand why MATLAB and OpenCV give different results.
MATLAB Code
>> A = [6 4 23 -3; 9 -10 4 11; 2 8 -5 1]
A =
6 4 23 -3
9 -10 4 11
2 8 -5 1
>> Col_step_1 = std(A, 0, 1)
Col_step_1 =
3.5119 9.4516 14.2945 7.2111
>> Col_final = std(Col_step_1)
Col_final =
4.5081
Using OpenCV and this function:
double getColWiseStd(cv::Mat in)
{
    CV_Assert( in.type() == CV_64F );
    cv::Mat meanValue, stdValue, m2, std2;
    cv::Mat colSTD(1, in.cols, CV_64F);
    cv::Mat colMEAN(1, in.cols, CV_64F);
    for (int i = 0; i < in.cols; i++)
    {
        cv::meanStdDev(in.col(i), meanValue, stdValue);
        colSTD.at<double>(i) = stdValue.at<double>(0);
        colMEAN.at<double>(i) = meanValue.at<double>(0);
    }
    std::cout << "\nCOLstd:\n" << colSTD << std::endl;
    cv::meanStdDev(colSTD, m2, std2);
    std::cout << "\nCOLstd_f:\n" << std2 << std::endl;
    return std2.at<double>(0, 0);
}
Applied to the same matrix yields the following:
Matrix:
[6, 4, 23, -3;
9, -10, 4, 11;
2, 8, -5, 1]
COLstd:
[2.867441755680876, 7.71722460186015, 11.67142760000773, 5.887840577551898]
COLstd_f:
[3.187726614989861]
I'm pretty sure that the OpenCV and MATLAB std functions are both correct, so I can't find what I'm doing wrong. Am I missing a type conversion? Something else?
The standard deviation you're calculating in OpenCV is normalised by the number of observations (N), whereas the standard deviation in MATLAB is normalised by N-1 (which is the default normalisation factor in MATLAB and is known as Bessel's correction). Hence the difference.
You can normalise by N in MATLAB by selecting the second input argument as 1:
Col_step_1 = std(A, 1, 1);
Col_final = std(Col_step_1, 1);
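Going the other way, if you want MATLAB's default N-1 behaviour on the OpenCV side, you can rescale the population standard deviation returned by cv::meanStdDev yourself; a sketch of the loop body (N is the number of rows in a column, 3 here; std::sqrt needs <cmath>):
        // meanStdDev divides by N; multiplying by sqrt(N / (N - 1))
        // gives the sample standard deviation that MATLAB uses by default.
        cv::meanStdDev(in.col(i), meanValue, stdValue);
        double n = static_cast<double>(in.rows);
        colSTD.at<double>(i) = stdValue.at<double>(0) * std::sqrt(n / (n - 1.0));
For the data above, 2.8674 * sqrt(3/2) ≈ 3.5119, which matches MATLAB's first column value.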

Improper Translation Matrix from SVD of Essential Matrix for 3D reconstruction using 2 Images

I am trying to build a 3D model from 2 images taken with the same camera, using OpenCV with C++. I followed this method. I am still not able to find the mistake in the R and T computation.
Image 1: With Background Removed for eliminating mismatches
Image 2: Translated only in X direction wrt Image 1 With Background Removed for eliminating mismatches
I found the intrinsic camera matrix (K) using the MATLAB toolbox; it is:
K=
[3058.8 0 -500
0 3057.3 488
0 0 1]
All image matching keypoints (using SIFT and BruteForce Matching, Mismatches Eliminated) were aligned wrt center of image as follows:
obj_points.push_back(Point2f(keypoints1[symMatches[i].queryIdx].pt.x - image1.cols / 2, -1 * (keypoints1[symMatches[i].queryIdx].pt.y - image1.rows / 2)));
scene_points.push_back(Point2f(keypoints2[symMatches[i].trainIdx].pt.x - image1.cols / 2, -1 * (keypoints2[symMatches[i].trainIdx].pt.y - image1.rows / 2)));
From the point correspondences, I found the fundamental matrix using RANSAC in OpenCV.
Fundamental Matrix:
[0 0 -0.0014
0 0 0.0028
0.00149 -0.00572 1 ]
Essential Matrix obtained using:
E = (camera_Intrinsic.t())*f*camera_Intrinsic;
E obtained:
[ 0.0094 36.290 1.507
-37.2245 -0.6073 14.71
-1.3578 -23.545 -0.442]
SVD of E:
E.convertTo(E, CV_32F);
Mat W = (Mat_<float>(3, 3) << 0, -1, 0, 1, 0, 0, 0, 0, 1);
Mat Z = (Mat_<float>(3, 3) << 0, 1, 0, -1, 0, 0, 0, 0, 0);
SVD decomp = SVD(E);
Mat U = decomp.u;
Mat Lambda = decomp.w;
Mat Vt = decomp.vt;
New Essential Matrix for epipolar constraint:
Mat diag = (Mat_<float>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 0);
Mat new_E = U*diag*Vt;
SVD new_decomp = SVD(new_E);
Mat new_U = new_decomp.u;
Mat new_Lambda = new_decomp.w;
Mat new_Vt = new_decomp.vt;
Rotation from SVD:
Mat R1 = new_U*W*new_Vt;
Mat R2 = new_U*W.t()*new_Vt;
Translation from SVD:
Mat T1 = (Mat_<float>(3, 1) << new_U.at<float>(0, 2), new_U.at<float>(1, 2), new_U.at<float>(2, 2));
Mat T2 = -1 * T1;
I was getting the R matrices to be :
R1:
[ -0.58 -0.042 0.813
-0.020 -0.9975 -0.066
0.81 -0.054 0.578]
R2:
[ 0.98 0.0002 0.81
-0.02 -0.99 -0.066
0.81 -0.054 0.57 ]
Translation Matrices:
T1:
[0.543
-0.030
0.838]
T2:
[-0.543
0.03
-0.83]
Please point out wherever there is a mistake.
These 4 sets of the P2 matrix [R|T] with P1 = [I] are giving incorrect triangulated models.
Also, I think the T matrix obtained is incorrect, as there was supposed to be only an x shift and no z shift.
When I tried with image1 = image2, I got T = [0, 0, 1]. What is the meaning of Tz = 1 when there is no z shift, since both images are the same?
And should I be aligning my keypoint coordinates with the image center, or with the principal point obtained from calibration?

Elegant way the find the Vertices of a Cube

Nearly every OpenGL tutorial has you implement drawing a cube, so the vertices of the cube are needed. In the example code I saw a long list defining every vertex, but I would like to compute the vertices of a cube rather than use an overlong list of precomputed coordinates.
A cube is made of eight vertices and twelve triangles. Vertices are defined by x, y, and z. Triangles are defined each by the indexes of three vertices.
Is there an elegant way to compute the vertices and the element indexes of a cube?
When I was "porting" the csg.js project to Java, I found some cute code which generates a cube with a chosen center point and radius. (I know it's JS, but anyway.)
// Construct an axis-aligned solid cuboid. Optional parameters are `center` and
// `radius`, which default to `[0, 0, 0]` and `[1, 1, 1]`. The radius can be
// specified using a single number or a list of three numbers, one for each axis.
//
// Example code:
//
// var cube = CSG.cube({
// center: [0, 0, 0],
// radius: 1
// });
CSG.cube = function(options) {
  options = options || {};
  var c = new CSG.Vector(options.center || [0, 0, 0]);
  var r = !options.radius ? [1, 1, 1] : options.radius.length ?
          options.radius : [options.radius, options.radius, options.radius];
  return CSG.fromPolygons([
    [[0, 4, 6, 2], [-1, 0, 0]],
    [[1, 3, 7, 5], [+1, 0, 0]],
    [[0, 1, 5, 4], [0, -1, 0]],
    [[2, 6, 7, 3], [0, +1, 0]],
    [[0, 2, 3, 1], [0, 0, -1]],
    [[4, 5, 7, 6], [0, 0, +1]]
  ].map(function(info) {
    return new CSG.Polygon(info[0].map(function(i) {
      var pos = new CSG.Vector(
        c.x + r[0] * (2 * !!(i & 1) - 1),
        c.y + r[1] * (2 * !!(i & 2) - 1),
        c.z + r[2] * (2 * !!(i & 4) - 1)
      );
      return new CSG.Vertex(pos, new CSG.Vector(info[1]));
    }));
  }));
};
I solved this problem with this piece of code (C#):
public CubeShape(Coord3 startPos, int size) {
    int l = size / 2;
    verts = new Coord3[8];
    for (int i = 0; i < 8; i++) {
        verts[i] = new Coord3(
            (i & 4) != 0 ? l : -l,
            (i & 2) != 0 ? l : -l,
            (i & 1) != 0 ? l : -l) + startPos;
    }
    tris = new Tris[12];
    int vertCount = 0;
    void AddVert(int one, int two, int three) =>
        tris[vertCount++] = new Tris(verts[one], verts[two], verts[three]);
    for (int i = 0; i < 3; i++) {
        int v1 = 1 << i;
        int v2 = v1 == 4 ? 1 : v1 << 1;
        AddVert(0, v1, v2);
        AddVert(v1 + v2, v2, v1);
        AddVert(7, 7 - v2, 7 - v1);
        AddVert(7 - (v1 + v2), 7 - v1, 7 - v2);
    }
}
If you want to understand more of what is going on, you can check out the github page I wrote that explains it.
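Since the question was asked in an OpenGL/C++ context, here is a minimal C++ sketch of the same bit-pattern idea (the structure names and the quad-to-triangle split are mine; check the winding order against your face-culling settings):
#include <array>
#include <cstdint>
#include <iostream>

struct Vec3 { float x, y, z; };

// 8 corners: bit 0 selects x, bit 1 selects y, bit 2 selects z
// (the same convention as the JS snippet above).
std::array<Vec3, 8> cubeVertices(Vec3 center, float halfSize)
{
    std::array<Vec3, 8> v;
    for (int i = 0; i < 8; ++i) {
        v[i] = { center.x + ((i & 1) ? halfSize : -halfSize),
                 center.y + ((i & 2) ? halfSize : -halfSize),
                 center.z + ((i & 4) ? halfSize : -halfSize) };
    }
    return v;
}

// 6 quads -> 12 triangles; each quad (a, b, c, d) becomes (a, b, c) and (a, c, d).
std::array<std::uint32_t, 36> cubeIndices()
{
    const int quads[6][4] = {
        {0, 4, 6, 2}, {1, 3, 7, 5},  // -x, +x faces
        {0, 1, 5, 4}, {2, 6, 7, 3},  // -y, +y faces
        {0, 2, 3, 1}, {4, 5, 7, 6}   // -z, +z faces
    };
    std::array<std::uint32_t, 36> idx{};
    int k = 0;
    for (const auto& q : quads) {
        idx[k++] = q[0]; idx[k++] = q[1]; idx[k++] = q[2];
        idx[k++] = q[0]; idx[k++] = q[2]; idx[k++] = q[3];
    }
    return idx;
}

int main()
{
    auto verts = cubeVertices({0, 0, 0}, 1.0f);
    auto idx = cubeIndices();
    for (const auto& v : verts)
        std::cout << v.x << ' ' << v.y << ' ' << v.z << '\n';
    std::cout << idx.size() / 3 << " triangles\n";
}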