Wrong eigenvalues with GSL

The eigenvalues I obtain from GSL (GNU Scientific Library) differ from the ones MATLAB gives me. Can someone give me a hint as to what I am doing wrong?
#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>
#include <gsl/gsl_complex_math.h>
#include <gsl/gsl_permutation.h>
#include <gsl/gsl_linalg.h>
#include <gsl/gsl_blas.h>

int main(int argc, char **argv)
{
    double data[] = { -1.0,  1.0, -1.0, 1.0,
                      -8.0,  4.0, -2.0, 1.0,
                      27.0,  9.0,  3.0, 1.0,
                      64.0, 16.0,  4.0, 1.0 };
    gsl_matrix_view m = gsl_matrix_view_array(data, 4, 4);
    gsl_vector_complex *eval = gsl_vector_complex_alloc(4);
    gsl_matrix_complex *evec = gsl_matrix_complex_alloc(4, 4);

    gsl_eigen_nonsymmv_workspace *w = gsl_eigen_nonsymmv_alloc(4);
    gsl_eigen_nonsymmv(&m.matrix, eval, evec, w);
    gsl_eigen_nonsymmv_free(w);

    int i, j;
    for (i = 0; i < 4; i++)
    {
        gsl_complex eval_i = gsl_vector_complex_get(eval, i);
        gsl_vector_complex_view evec_i = gsl_matrix_complex_column(evec, i);
        printf("eigenvalue = %g + %gi\n", GSL_REAL(eval_i), GSL_IMAG(eval_i));
        printf("eigenvector = \n");
        for (j = 0; j < 4; ++j)
        {
            gsl_complex z = gsl_vector_complex_get(&evec_i.vector, j);
            printf("%g + %gi\n", GSL_REAL(z), GSL_IMAG(z));
        }
    }

    gsl_vector_complex_free(eval);
    gsl_matrix_complex_free(evec);
    return 0;
}
The resulting eigenvalues/eigenvectors according to GSL are:
eigenvalue = -6.41391 + 0i
eigenvector =
0.0998822 + 0i
0.111251 + 0i
-0.292501 + 0i
-0.944505 + 0i
eigenvalue = 5.54555 + 3.08545i
eigenvector =
0.0430757 + 0.00968662i
-0.0709124 + 0.138917i
0.516595 + -0.0160059i
0.839574 + 0.0413888i
eigenvalue = 5.54555 + -3.08545i
eigenvector =
0.0430757 + -0.00968662i
-0.0709124 + -0.138917i
0.516595 + 0.0160059i
0.839574 + -0.0413888i
eigenvalue = 2.3228 + 0i
eigenvector =
-0.144933 + 0i
0.356601 + 0i
0.919369 + 0i
0.0811836 + 0i
But MATLAB returns the following eigenvalues:
-5.4485 + 0.0000i
5.5948 + 3.5267i
5.5948 - 3.5267i
1.2588 + 0.0000i
What am I doing wrong? Any hints are appreciated.
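One way to narrow this down, offered as a sketch rather than a definitive diagnosis: the eigenvalues of a matrix must sum to its trace and multiply to its determinant. Both result sets above sum to 7 (the trace of data), but the GSL eigenvalues multiply to roughly -600 while the MATLAB ones multiply to roughly -300, which suggests the two programs were given different matrices. The snippet below checks the trace and determinant of data using GSL's LU routines:

#include <stdio.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_permutation.h>
#include <gsl/gsl_linalg.h>

int main(void)
{
    double data[] = { -1.0,  1.0, -1.0, 1.0,
                      -8.0,  4.0, -2.0, 1.0,
                      27.0,  9.0,  3.0, 1.0,
                      64.0, 16.0,  4.0, 1.0 };
    gsl_matrix_view m = gsl_matrix_view_array(data, 4, 4);

    /* trace = sum of eigenvalues */
    double trace = 0.0;
    for (size_t i = 0; i < 4; i++)
        trace += gsl_matrix_get(&m.matrix, i, i);

    /* determinant = product of eigenvalues; LU decomposition overwrites m */
    gsl_permutation *p = gsl_permutation_alloc(4);
    int signum;
    gsl_linalg_LU_decomp(&m.matrix, p, &signum);
    double det = gsl_linalg_LU_det(&m.matrix, signum);

    printf("trace = %g, det = %g\n", trace, det); /* prints 7 and -600 */
    gsl_permutation_free(p);
    return 0;
}

If det() of the matrix actually typed into MATLAB does not come out to -600 as well, the two inputs differ and the discrepancy is a data-entry problem, not a GSL problem.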

Related

Can this Iron Python 2.7 code be sped up?

I have implemented Gray-Scott reaction-diffusion in IronPython 2, mapping grayscale values between 0 and 1.
But since it only gets interesting after about 6400 steps, it takes a lot of time.
So can someone smarter / more proficient in Python than me help me make this code more efficient?
Here is the code:
import random as rnd

dA = 1
dB = 0.5
feed = 0.055
kill = 0.062
wHalb = int(width / 2)
hHalb = int(height / 2)

# decides if gAA is random or 1
if gAA == 0:
    gAA = rnd.random()

def main():
    # create base grids of chemicals A and B
    gridA = [[gAA for i in range(width)] for i in range(height)]
    gridB = [[0 for i in range(width)] for i in range(height)]
    nextA = [[1 for i in range(width)] for i in range(height)]
    nextB = [[0 for i in range(width)] for i in range(height)]
    color = [[0 for i in range(width)] for i in range(height)]

    # seed a square block of chemical B around the center
    for x in range(wHalb - baseBlock, wHalb + baseBlock):
        for y in range(hHalb - baseBlock, hHalb + baseBlock):
            gridB[x][y] = 1

    x, y, i, j = 0, 0, 0, 0
    for n in range(steps):
        for x in range(width):
            for y in range(height):
                a = gridA[x][y]
                b = gridB[x][y]
                nextA[x][y] = (a + (dA * laplaceA(x, y, gridA))
                               - (a * b * b) + (feed * (1 - a)))
                nextB[x][y] = (b + (dB * laplaceB(x, y, gridB))
                               + (a * b * b) - ((kill + feed) * b))
        # swap the current and next grids
        tempA = gridA
        gridA = nextA
        nextA = tempA
        tempB = gridB
        gridB = nextB
        nextB = tempB

    color = [[(nextA[i][j] - nextB[i][j]) for i in range(width)] for j in range(height)]
    return color

def laplaceA(x, y, gridA):
    # 3x3 Laplacian stencil with wrap-around (toroidal) borders
    sumA = 0
    xS = x - 1
    xE = x + 1
    yS = y - 1
    yE = y + 1
    if x == 0:
        xS = width - 1
    if y == 0:
        yS = height - 1
    if x == width - 1:
        xE = 0
    if y == height - 1:
        yE = 0
    sumA = sumA + gridA[x][y] * -1
    sumA = sumA + gridA[xS][y] * 0.2
    sumA = sumA + gridA[xE][y] * 0.2
    sumA = sumA + gridA[x][yE] * 0.2
    sumA = sumA + gridA[x][yS] * 0.2
    sumA = sumA + gridA[xS][yS] * 0.05
    sumA = sumA + gridA[xE][yS] * 0.05
    sumA = sumA + gridA[xS][yE] * 0.05
    sumA = sumA + gridA[xE][yE] * 0.05
    return sumA

def laplaceB(x, y, gridB):
    # same stencil as laplaceA, applied to grid B
    sumB = 0
    xS = x - 1
    xE = x + 1
    yS = y - 1
    yE = y + 1
    if x == 0:
        xS = width - 1
    if y == 0:
        yS = height - 1
    if x == width - 1:
        xE = 0
    if y == height - 1:
        yE = 0
    sumB = sumB + gridB[x][y] * -1
    sumB = sumB + gridB[xS][y] * 0.2
    sumB = sumB + gridB[xE][y] * 0.2
    sumB = sumB + gridB[x][yE] * 0.2
    sumB = sumB + gridB[x][yS] * 0.2
    sumB = sumB + gridB[xS][yS] * 0.05
    sumB = sumB + gridB[xE][yS] * 0.05
    sumB = sumB + gridB[xS][yE] * 0.05
    sumB = sumB + gridB[xE][yE] * 0.05
    return sumB

aOut = main()

How do I get correct answers using my code with the barycentric formula?

My function getHeightOfTerrain() calls a barycentric-formula function that is not returning the correct height for the one test height set in heightMapFromArray[][].
I've tried watching the OpenGL Java game tutorials 14, 21, and 22 by "ThinMatrix", and I am confused about how to use my array heightMapforBaryCentric in both of the supplied functions, and how to set the arguments passed to the baryCentric() function so that I can solve the problem.
int creaateTerrain(int height, int width)
{
    float holderY[6] = { 0.f, 0.f, 0.f, 0.f, 0.f, 0.f };
    float scaleit = 1.5f;
    float holder[6] = { 0.f, 0.f, 0.f, 0.f, 0.f, 0.f };
    for (int z = 0, z2 = 0; z < iterationofHeightMap; z2++)
    {
        //each loop is two iterations and creates one quad (two triangles)
        //however because each iteration is by two (i.e.: x = x + 2) on bottom
        //the amount of triangles is half the x value
        //
        //number of vertices: 80 x 80 x 6.
        //column
        for (int x = 0, x2 = 0; x < iterationofHeightMap; x2++)
        {
            //relevant - A : first triangle - on left triangle
            //[row][column]
            holder[0] = heightMapFromArray[z][x];
            //holder[0] = (float)imageData[(z / 2 * MAP_Z + (x / 2)) * 3];
            //holder[0] = holder[0] / 255;// *scaleit;
            vertices.push_back(glm::vec3(x, holder[0], z));
            //match height map with online barycentric use
            heightMapforBaryCentric[x2][z2] = holder[0];
            holder[1] = heightMapFromArray[z+2][x];
            //holder[1] = (float)imageData[(((z + 2) / 2 * MAP_Z + ((x) / 2))) * 3];
            //holder[1] = holder[1] / 255;// 6 * scaleit;
            vertices.push_back(glm::vec3(x, holder[1], z + 2));
            //match height map with online barycentric use
            heightMapforBaryCentric[x2][z2+1] = holder[1];
            holder[2] = heightMapFromArray[z+2][x+2];
            //holder[2] = (float)imageData[(((z + 2) / 2 * MAP_Z + ((x + 2) / 2))) * 3];
            //holder[2] = holder[2] / 255;// *scaleit;
            vertices.push_back(glm::vec3(x + 2, holder[2], z + 2));
            //match height map with online barycentric use
            heightMapforBaryCentric[x2+1][z2+1] = holder[2];
            //relevant - B - second triangle (on right side)
            holder[3] = heightMapFromArray[z][x];
            //holder[3] = (float)imageData[((z / 2)*MAP_Z + (x / 2)) * 3];
            //holder[3] = holder[3] / 255;// 256 * scaleit;
            vertices.push_back(glm::vec3(x, holder[3], z));
            holder[4] = heightMapFromArray[x+2][z+2];
            //holder[4] = (float)imageData[(((z + 2) / 2 * MAP_Z + ((x + 2) / 2))) * 3];
            //holder[4] = holder[4] / 255;// *scaleit;
            vertices.push_back(glm::vec3(x + 2, holder[4], z + 2));
            holder[5] = heightMapFromArray[x+2][z];
            //holder[5] = (float)imageData[((z / 2)*MAP_Z + ((x + 2) / 2)) * 3];
            //holder[5] = holder[5] / 255;// *scaleit;
            vertices.push_back(glm::vec3(x + 2, holder[5], z));
            x = x + 2;
        }
        z = z + 2;
    }
    return(1);
}
float getHeightOfTerrain(float worldX, float worldZ) {
    float terrainX = worldX;
    float terrainZ = worldZ;
    int gridSquareSize = 2.0f;
    gridX = (int)floor(terrainX / gridSquareSize);
    gridZ = (int)floor(terrainZ / gridSquareSize);
    xCoord = ((float)(fmod(terrainX, gridSquareSize)) / (float)gridSquareSize);
    zCoord = ((float)(fmod(terrainZ, gridSquareSize)) / (float)gridSquareSize);
    if (xCoord <= (1 - zCoord))
    {
        //left triangle
        answer = baryCentric(
            glm::vec3(0.0f, heightMapforBaryCentric[gridX][gridZ], 0.0f),
            glm::vec3(0.0f, heightMapforBaryCentric[gridX][gridZ+1], 1.0f),
            glm::vec3(1.0f, heightMapforBaryCentric[gridX+1][gridZ+1], 1.0f),
            glm::vec2(xCoord, zCoord));
        // if (answer != 1)
        // {
        //     fprintf(stderr, "Z:gridx: %d gridz: %d answer: %f\n", gridX, gridZ, answer);
        // }
    }
    else
    {
        //right triangle
        answer = baryCentric(
            glm::vec3(0, heightMapforBaryCentric[gridX][gridZ], 0),
            glm::vec3(1, heightMapforBaryCentric[gridX+1][gridZ+1], 1),
            glm::vec3(1, heightMapforBaryCentric[gridX+1][gridZ], 0),
            glm::vec2(xCoord, zCoord));
    }
    if (answer == 1)
    {
        answer = 0;
    }
    //answer = abs(answer - 1);
    return(answer);
}
float baryCentric(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3, glm::vec2 pos) {
    float det = (p2.z - p3.z) * (p1.x - p3.x) + (p3.x - p2.x) * (p1.z - p3.z);
    float l1 = ((p2.z - p3.z) * (pos.x - p3.x) + (p3.x - p2.x) * (pos.y - p3.z)) / det;
    float l2 = ((p3.z - p1.z) * (pos.x - p3.x) + (p1.x - p3.x) * (pos.y - p3.z)) / det;
    float l3 = 1.0f - l1 - l2;
    return (l1 * p1.y + l2 * p2.y + l3 * p3.y);
}
I expected the height at the center of the test grid to be the set value 0.5, falling off gradually as the surrounding heights decline. Instead, the returned heights were all the same, varied randomly, or increased; usually they were below 1.
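One way to isolate the problem, as a minimal sketch assuming the baryCentric() function above is used unchanged: evaluate it at the triangle's corners, where it must return each corner's height exactly. If these checks pass, the interpolation is fine and the bug is in how heightMapforBaryCentric is filled or indexed. The heights h1, h2, h3 below are hypothetical test values:

#include <cstdio>
#include <glm/glm.hpp>

// forward declaration of the function given in the question
float baryCentric(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3, glm::vec2 pos);

int main() {
    const float h1 = 0.5f, h2 = 0.25f, h3 = 0.125f; // hypothetical corner heights
    // same corner layout as the "left triangle" branch above: (x, height, z)
    glm::vec3 p1(0.0f, h1, 0.0f), p2(0.0f, h2, 1.0f), p3(1.0f, h3, 1.0f);
    // at each corner the interpolated height must equal that corner's height
    std::printf("%f (expect %f)\n", baryCentric(p1, p2, p3, glm::vec2(0.0f, 0.0f)), h1);
    std::printf("%f (expect %f)\n", baryCentric(p1, p2, p3, glm::vec2(0.0f, 1.0f)), h2);
    std::printf("%f (expect %f)\n", baryCentric(p1, p2, p3, glm::vec2(1.0f, 1.0f)), h3);
    return 0;
}

Working through the formula by hand confirms it: at pos = (0, 0) the weights come out l1 = 1, l2 = l3 = 0, so the function returns p1.y exactly. Note also that creaateTerrain() writes heightMapFromArray[x+2][z+2] and [x+2][z] for the second triangle where every other read uses [z][x] ordering; that index swap is worth double-checking.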

What am I doing wrong when executing the Sobel filter function in C++?

Here is my Sobel filter function performed on a grayscale image. Apparently I'm not doing my calculations correctly, because I keep getting an all-black image. I have already turned in the project, but it is bothering me that the results aren't right.
int sobelH[3][3] = { -1, 0, 1,
                     -2, 0, 2,
                     -1, 0, 1 },
    sobelV[3][3] = { 1, 2, 1,
                     0, 0, 0,
                     -1, -2, -1 };
//variable declaration
int mag;
int pix_x, pix_y = 0;
int img_x, img_y;
for (img_x = 0; img_x < img->x; img_x++)
{
    for (img_y = 0; img_y < img->y; img_y++)
    {
        pix_x = 0;
        pix_y = 0;
        //calculating the X and Y convolutions
        for (int i = -1; i <= 1; i++)
        {
            for (int j = -1; j <= 1; j++)
            {
                pix_x += (img->data[img_y * img->x + img_x].red + img->data[img_y * img->x + img_x].green + img->data[img_y * img->x + img_x].blue) * sobelH[1 + i][1 + j];
                pix_y += (img->data[img_y * img->x + img_x].red + img->data[img_y * img->x + img_x].green + img->data[img_y * img->x + img_x].blue) * sobelV[1 + i][1 + j];
            }
        }
        //Gradient magnitude
        mag = sqrt((pix_x * pix_x) + (pix_y * pix_y));
        if (mag > RGB_COMPONENT_COLOR)
            mag = 255;
        if (mag < 0)
            mag = 0;
        //Setting the new pixel value
        img->data[img_y * img->x + img_x].red = mag;
        img->data[img_y * img->x + img_x].green = mag;
        img->data[img_y * img->x + img_x].blue = mag;
    }
}
Although your code could use some improvement, the main problem is that you compute the convolution at a constant img_y and img_x: the kernel offsets i and j never reach the pixel index. What you need to do is offset both image coordinates by the kernel indices:
pix_x += (img->data[(img_y + i) * img->x + (img_x + j)].red + img->data[(img_y + i) * img->x + (img_x + j)].green + img->data[(img_y + i) * img->x + (img_x + j)].blue) * sobelH[1 + i][1 + j];
Indeed, each Sobel kernel's coefficients sum to zero, so convolving it with a constant neighborhood (here, the same pixel sampled nine times) always yields zero, which is why the result is all black.
Note that the line above does not take the border of the image into account. You should make sure not to access pixels that are outside your pixel array.
Another mistake is that you're writing into the input image. You write at location (x, y), then compute the filter result for location (x+1, y) using the modified value at (x, y), which is the wrong value to use.
You need to write your result to a new image.
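Putting those fixes together, here is a sketch of the corrected loops. It assumes the same img struct and Sobel kernels as above, plus a hypothetical output image out of identical dimensions; the interior-only bounds (starting at 1, stopping one short of each edge) sidestep the border issue:

// hedged sketch: `out` is a hypothetical copy of `img`, same dimensions
for (int img_y = 1; img_y < img->y - 1; img_y++)
{
    for (int img_x = 1; img_x < img->x - 1; img_x++)
    {
        int pix_x = 0, pix_y = 0;
        for (int i = -1; i <= 1; i++)
        {
            for (int j = -1; j <= 1; j++)
            {
                // offset BOTH image coordinates by the kernel offsets
                int idx = (img_y + i) * img->x + (img_x + j);
                int gray = img->data[idx].red + img->data[idx].green + img->data[idx].blue;
                pix_x += gray * sobelH[1 + i][1 + j];
                pix_y += gray * sobelV[1 + i][1 + j];
            }
        }
        int mag = (int)sqrt((double)(pix_x * pix_x + pix_y * pix_y));
        if (mag > RGB_COMPONENT_COLOR)
            mag = RGB_COMPONENT_COLOR;
        // write to the copy, never to the image still being read
        out->data[img_y * img->x + img_x].red = mag;
        out->data[img_y * img->x + img_x].green = mag;
        out->data[img_y * img->x + img_x].blue = mag;
    }
}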

OpenCV: Custom pixelwise alpha compositing: is this correct?

As far as I know, OpenCV does not offer a pixelwise alpha add(), only addWeighted(), which applies one scalar to all pixels. Even using C-style array access, the fastest means of pixel access, my custom alpha compositing function is still slow as hell: it takes nearly 2 seconds for a 1400x900 image. I don't think building in release mode helps optimization... Is there a way to increase the speed?
I'm writing alphaCompositeLayers(), an alpha compositing function that multiplies each pixel of the background cv::Mat by the alpha value of the corresponding pixel of the foreground cv::Mat. Both cv::Mats are CV_8UC4-based (unsigned char, 4 channels):
// mat1 in foreground, mat0 in background
cv::Mat alphaCompositeLayers(cv::Mat mat0, cv::Mat mat1) {
    cv::Mat res = mat0.clone();
    int nRows = res.rows;
    int nCols = res.cols * res.channels();
    if (res.isContinuous()) {
        nCols *= nRows;
        nRows = 1;
    }
    for (int u = 0; u < nRows; u++) {
        unsigned char *resrgb = res.ptr<unsigned char>(u);
        unsigned char *matrgb = mat1.ptr<unsigned char>(u);
        for (int v = 0; v < nCols; v += 4) {
            unsigned char newalpha = cv::saturate_cast<unsigned char>(resrgb[v + 3] * (255.0f - matrgb[v + 3]) + matrgb[v + 3]);
            resrgb[v] = cv::saturate_cast<unsigned char>((resrgb[v] * resrgb[v + 3] / 255.0f * (255 - matrgb[v + 3]) / 255.0f + matrgb[v] * matrgb[v + 3] / 255.0f)); // / newalpha);
            resrgb[v + 1] = cv::saturate_cast<unsigned char>((resrgb[v + 1] * resrgb[v + 3] / 255.0f * (255 - matrgb[v + 3]) / 255.0f + matrgb[v + 1] * matrgb[v + 3] / 255.0f)); // / newalpha);
            resrgb[v + 2] = cv::saturate_cast<unsigned char>((resrgb[v + 2] * resrgb[v + 3] / 255.0f * (255 - matrgb[v + 3]) / 255.0f + matrgb[v + 2] * matrgb[v + 3] / 255.0f)); // / newalpha);
            resrgb[v + 3] = newalpha;
            resrgb[v + 3] = cv::saturate_cast<unsigned char>(rand() % 256);
        }
    }
    return res;
}
Here's another function multiplyLayerByAlpha() that multiplies each pixel by its alpha value (0% opacity = black, 100% opacity = pixel color):
cv::Mat multiplyLayerByAlpha(cv::Mat mat) {
    cv::Mat res = mat.clone();
    int nRows = res.rows;
    int nCols = res.cols * res.channels();
    if (res.isContinuous()) {
        nCols *= nRows;
        nRows = 1;
    }
    for (int u = 0; u < nRows; u++) {
        unsigned char *resrgb = res.ptr<unsigned char>(u);
        for (int v = 0; v < nCols; v += 4) {
            resrgb[v] = cv::saturate_cast<unsigned char>(resrgb[v] * resrgb[v + 3] / 255.0f);
            resrgb[v + 1] = cv::saturate_cast<unsigned char>(resrgb[v + 1] * resrgb[v + 3] / 255.0f);
            resrgb[v + 2] = cv::saturate_cast<unsigned char>(resrgb[v + 2] * resrgb[v + 3] / 255.0f);
        }
    }
    return res;
}
Given an array of cv::Mats, for example {mat0, mat1, mat2} with mat2 foremost (on top of all 3), I basically run this:
cv::Mat resultingCvMat = multiplyLayerByAlpha(
    alphaCompositeLayers(
        mat0,
        alphaCompositeLayers(mat1, mat2)
    )
);
How can I make the program compute resultingCvMat faster? With C++ techniques like multi-threading (if so, how)? Or with OpenCV's own functions (again, how)?
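One concrete option, sketched under the assumption of OpenCV 3.2 or newer (for the lambda overload of cv::parallel_for_) and CV_8UC4 inputs of equal size: split the row loop across OpenCV's internal thread pool. The arithmetic below is the standard premultiplied "over" blend, close to but not identical to the code above, so treat it as illustrative rather than a drop-in replacement:

#include <opencv2/opencv.hpp>

// hypothetical parallel variant of alphaCompositeLayers()
cv::Mat alphaCompositeLayersParallel(const cv::Mat &mat0, const cv::Mat &mat1) {
    cv::Mat res = mat0.clone();
    const int nCols = res.cols * res.channels();
    cv::parallel_for_(cv::Range(0, res.rows), [&](const cv::Range &range) {
        for (int u = range.start; u < range.end; u++) {
            unsigned char *resrgb = res.ptr<unsigned char>(u);
            const unsigned char *matrgb = mat1.ptr<unsigned char>(u);
            for (int v = 0; v < nCols; v += 4) {
                float fa = matrgb[v + 3] / 255.0f; // foreground alpha
                float ba = resrgb[v + 3] / 255.0f; // background alpha
                for (int c = 0; c < 3; c++)        // premultiplied "over" blend
                    resrgb[v + c] = cv::saturate_cast<unsigned char>(
                        resrgb[v + c] * ba * (1.0f - fa) + matrgb[v + c] * fa);
                resrgb[v + 3] = cv::saturate_cast<unsigned char>(
                    255.0f * (fa + ba * (1.0f - fa)));
            }
        }
    });
    return res;
}

Each thread works on a disjoint range of rows, so no synchronization is needed. Independently of threading, nearly 2 seconds for a 1400x900 image suggests an unoptimized debug build; replacing the per-channel float divides with integer arithmetic and building in release mode typically matters at least as much as the parallelism.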

2D rotation in OpenGL

Here is the code I am using.
#define ANGLETORADIANS 0.017453292519943295769236907684886f // PI / 180
#define RADIANSTOANGLE 57.295779513082320876798154814105f // 180 / PI

rotation = rotation * ANGLETORADIANS;
cosRotation = cos(rotation);
sinRotation = sin(rotation);

for (int i = 0; i < 3; i++)
{
    px[i] = (vec[i].x + centerX) * (cosRotation - (vec[i].y + centerY)) * sinRotation;
    py[i] = (vec[i].x + centerX) * (sinRotation + (vec[i].y + centerY)) * cosRotation;
    printf("num: %i, px: %f, py: %f\n", i, px[i], py[i]);
}
So far it seems my Y value is being flipped: if I enter X = 1 and Y = 1 with a 45-degree rotation, I should see about x = 0 and y = 1.4, but I get x = 0 and y = -1.25.
Also, my 90-degree rotation always returns x = 0 and y = 0.
P.S. I know I'm only centering my values and not putting them back where they came from. Putting them back isn't needed, as all I need to know is the value I'm getting now.
Your bracket placement doesn't look right to me. I would expect:
px[i] = (vec[i].x + centerX) * cosRotation - (vec[i].y + centerY) * sinRotation;
py[i] = (vec[i].x + centerX) * sinRotation + (vec[i].y + centerY) * cosRotation;
Your brackets are wrong. It should be
px[i] = ((vec[i].x + centerX) * cosRotation) - ((vec[i].y + centerY) * sinRotation);
py[i] = ((vec[i].x + centerX) * sinRotation) + ((vec[i].y + centerY) * cosRotation);
instead.
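As a quick standalone check of the corrected formula (not part of the original code): rotating the point (1, 1) by 45 degrees about the origin should land at roughly (0, 1.414), i.e. (0, sqrt(2)):

#include <cstdio>
#include <cmath>

int main() {
    const float ANGLETORADIANS = 0.017453292519943295f; // PI / 180
    float x = 1.0f, y = 1.0f;
    float r = 45.0f * ANGLETORADIANS;
    // standard 2D rotation: each output mixes BOTH inputs
    float px = x * std::cos(r) - y * std::sin(r);
    float py = x * std::sin(r) + y * std::cos(r);
    std::printf("px = %f, py = %f\n", px, py); // px ~ 0.0, py ~ 1.414214
    return 0;
}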