I'm trying to generate fractals using five different transformations that I have implemented from skeleton code: translate, rotate, scale, non-uniform scale, and image. These transformations are all 3x3 matrices, for example:
// Rotation by theta about the point p, as a 3x3 homogeneous matrix
Matrix rotate ( Pt p, float theta )
{
Matrix rvalue;
rvalue.data[0][0] = cos(theta);
rvalue.data[0][1] = -sin(theta);
rvalue.data[0][2] = p.x + p.y*sin(theta) - p.x*cos(theta);
rvalue.data[1][0] = sin(theta);
rvalue.data[1][1] = cos(theta);
rvalue.data[1][2] = p.y - p.y*cos(theta) - p.x*sin(theta);
rvalue.data[2][0] = 0;
rvalue.data[2][1] = 0;
rvalue.data[2][2] = 1;
return rvalue;
}
where Matrix is defined as
class Matrix
{
public:
float data [ 3 ] [ 3 ];
Matrix ( void )
{
int i, j;
for ( i = 0; i < 3; i++ )
{
for ( j = 0; j < 3; j++ )
{
data [ i ] [ j ] = 0;
}
}
}
};
In a test file, there is the following code that is supposed to generate the Sierpinski triangle:
vector<Matrix> iat;
iat.push_back ( scale ( Pt ( -.9, -.9 ), 0.5 ) );
iat.push_back ( scale ( Pt ( .9, -.9 ), 0.5 ) );
iat.push_back ( scale ( Pt ( 0, .56 ), 0.5 ) );
setIATTransformations ( iat );
Where Pt is defined as:
class Pt
{
public:
float x, y;
Pt ( float newX, float newY )
{
x = newX;
y = newY;
}
Pt ( void )
{
x = y = 0;
}
};
How should I implement setIATTransformations? Should I multiply the matrices together until there is one transformation matrix and then apply it in a loop a number of times to generate the fractal?
Do you want a Sierpinski triangle, or a fractal generator driven by an input script?
1. Triangle
This is easy enough: no rotations, translations, or anything of the sort are needed. Just create the points according to the Sierpinski rule (http://en.wikipedia.org/wiki/Sierpinski_triangle). All sides of the triangles are divided in half, so the new points are just the averages of the start and end points of each side; then fill the sub-triangles. If you want just a wire-frame, even the point list is not needed; see the sketch below.
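Here is a minimal recursive sketch of that midpoint-subdivision idea, reusing the Pt class from the question; drawTriangle() is a hypothetical routine standing in for whatever your framework uses to draw:
void drawTriangle ( Pt a, Pt b, Pt c ); // hypothetical drawing routine
void sierpinski ( Pt a, Pt b, Pt c, int depth )
{
    if ( depth == 0 )
    {
        drawTriangle ( a, b, c );
        return;
    }
    // each new vertex is the average of two of the old ones
    Pt ab (( a.x + b.x ) / 2, ( a.y + b.y ) / 2 );
    Pt bc (( b.x + c.x ) / 2, ( b.y + c.y ) / 2 );
    Pt ca (( c.x + a.x ) / 2, ( c.y + a.y ) / 2 );
    sierpinski ( a, ab, ca, depth - 1 );
    sierpinski ( ab, b, bc, depth - 1 );
    sierpinski ( ca, bc, c, depth - 1 );
}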
2. Generator
You did not provide any rules or commands for the control script. The only thing I see in your script is the input of the three vertices of the triangle, and that is all. I do not see any rule for triangle division, which parts are filled or not, or how many recursions are used. The only thing you mentioned was that you use five 3x3 transformation matrices for rotation, scale, and translation, but you did not specify when or why.
You will have to implement the chaos game. That is, you randomly select one of the transformations and apply it to the iteration point. Do this a number of times (30, 50, or 100) without painting the point, and after that mark every point you generate. The resulting point cloud will in time fill the fractal.
Your operation scale( Pt (a,b), s) should realize the operation
(x',y')=s*(x,y)+(1-s)*(a,b), that is, in matrix terms
| x' |   | s 0 (1-s)*a |   | x |
| y' | = | 0 s (1-s)*b | * | y |
| 1  |   | 0 0   1     |   | 1 |
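To connect this back to setIATTransformations: store the matrices, then run the chaos game on them. A minimal sketch, assuming the Matrix and Pt classes from the question and a hypothetical plotPoint() routine for marking a pixel:
#include <cstdlib>
#include <vector>
void plotPoint ( const Pt &p ); // hypothetical: mark a pixel at p
// scale() matching the matrix above: (x',y') = s*(x,y) + (1-s)*(a,b)
Matrix scale ( Pt p, float s )
{
    Matrix m; // zero-initialized by the constructor
    m.data[0][0] = s; m.data[0][2] = ( 1 - s ) * p.x;
    m.data[1][1] = s; m.data[1][2] = ( 1 - s ) * p.y;
    m.data[2][2] = 1;
    return m;
}
std::vector<Matrix> g_transformations;
void setIATTransformations ( const std::vector<Matrix> &iat )
{
    g_transformations = iat;
}
// apply a 3x3 matrix to p treated as the homogeneous column (x, y, 1)
Pt apply ( const Matrix &m, const Pt &p )
{
    return Pt ( m.data[0][0] * p.x + m.data[0][1] * p.y + m.data[0][2],
                m.data[1][0] * p.x + m.data[1][1] * p.y + m.data[1][2] );
}
void chaosGame ( int iterations )
{
    Pt p ( 0, 0 );
    // let the point settle onto the attractor before plotting
    for ( int i = 0; i < 50; i++ )
        p = apply ( g_transformations[ rand() % g_transformations.size() ], p );
    for ( int i = 0; i < iterations; i++ )
    {
        p = apply ( g_transformations[ rand() % g_transformations.size() ], p );
        plotPoint ( p );
    }
}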
In my 3D application I have an Object3D class which constructs its bounding box at creation. This bounding box is a simple cuboid with a min and a max point:
However, when I translate my Object3D, I want to update my bounding box using the Model matrix of my object :
BoundingBox& TransformBy(const glm::mat4& modelMatrice){
//extract position of my model matrix
glm::vec3 pos( modelMatrice[3] );
//extract scale of my model matrix
glm::vec3 scale( glm::length(glm::vec3(modelMatrice[0])), glm::length(glm::vec3(modelMatrice[1])) , glm::length(glm::vec3(modelMatrice[2])));
//Creating vec4 min/max from vec3
glm::vec4 min_(min.x, min.y, min.z, 1.0f);
glm::vec4 max_(max.x, max.y, max.z, 1.0f);
//Inverting the matrice to multiply my vec4 with it in order to rotate my min max
glm::mat4 inverse = glm::inverse(modelMatrice);
min_ = min_ * inverse;
max_ = max_ * inverse;
//Scale my min_ max_
min_ *= glm::vec4(scale,1.0f);
max_ *= glm::vec4(scale,1.0f);
//adding model matrice translation to min_ and max_
min_ += glm::vec4(pos,0.0f);
max_ += glm::vec4(pos,0.0f);
//Redefining max and min
max = glm::vec3( (min_.x > max_.x)? min_.x:max_.x, (min_.y > max_.y)? min_.y: max_.y, (min_.z > max_.z)? min_.z : max_.z);
min = glm::vec3( (min_.x < max_.x)? min_.x:max_.x, (min_.y < max_.y)? min_.y: max_.y, (min_.z < max_.z)? min_.z : max_.z);
return *this;
}
Here is how I call my TransformBy function :
box.TransformBy(transform.GetModelMatrix());
For some reason, this rotates and translates my min and max points correctly, but the scaling is not applied correctly, leaving my bounding box with a bigger scale than my 3D object.
Why is my scaling not working properly?
Is there maybe a less complicated way of doing what I want?
If I understand correctly how the result of GetModelMatrix() is supposed to look, i.e. that it is just a normal 4x4 transformation matrix, then I think I can guess what the problem is. I find it easiest to visualize/explain with an example, so hopefully that's alright.
Suppose your result of GetModelMatrix() is the following, a simple transformation matrix with no rotational component, just a translation by [5 6 7].
| 1 0 0 5 |
| 0 1 0 6 |
| 0 0 1 7 |
| 0 0 0 1 |
The inverse of that, which your code computes and stores in inverse, is just:
| 1 0 0 -5 |
| 0 1 0 -6 |
| 0 0 1 -7 |
| 0 0 0 1 |
After calling glm::vec4 min_(min.x,min.y,min.z,1.0f);, let us represent min_ by the vector [x y z 1]. Then min_ * inverse looks like:
| 1 0 0 -5 |
| 0 1 0 -6 |
[x y z 1] * | 0 0 1 -7 | = [x y z (-5x-6y-7z+1)]
| 0 0 0 1 |
That is, the x, y, and z components are not altered because it is the |0 0 0 1| row that gets applied to them in the multiplication, not the column containing the translation components.
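As a small illustration of the order issue (a sketch, assuming GLM's standard transform header; the values continue the translation-by-[5 6 7] example above):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 6.0f, 7.0f));
glm::vec4 v(1.0f, 2.0f, 3.0f, 1.0f);
glm::vec4 a = T * v; // column-vector convention: (6, 8, 10, 1), translated
glm::vec4 b = v * T; // row-vector convention: (1, 2, 3, 39), x/y/z untouched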
May I ask, do you have a particular reason for (a) inverting GetModelMatrix() and (b) multiplying the matrix by the vector as opposed to vice versa? I'd guess that perhaps your code should instead look as follows:
BoundingBox& TransformBy(const glm::mat4& modelMatrice){
...
min_ = modelMatrice * min_;
max_ = modelMatrice * max_;
...
return *this;
}
Called with box.TransformBy(transform.GetModelMatrix());
Update:
Regarding your new code, I question the need for inverting the transformation matrix for rotation, the need to split the scaling, translation, and rotation into three separate steps, and so on. Unless there is an error somewhere else, the process should be as simple as taking your object's transformation matrix, multiplying min_ and max_ by it, and then doing the test for whether any min/max components swapped that you currently have at the end of the function:
BoundingBox& TransformBy(const glm::mat4& modelMatrice){
min_ = modelMatrice * min_;
max_ = modelMatrice * max_;
max = glm::vec3( (min_.x > max_.x)? min_.x:max_.x, (min_.y > max_.y)? min_.y: max_.y, (min_.z > max_.z)? min_.z : max_.z);
min = glm::vec3( (min_.x < max_.x)? min_.x:max_.x, (min_.y < max_.y)? min_.y: max_.y, (min_.z < max_.z)? min_.z : max_.z);
return *this;
}
If the above five-line function does not work, please let me know how it fails; in that case, I suspect the problem lies elsewhere in your code.
However, to answer your updated question as to why the scaling does not work: when you invert the transformation matrix, you also invert the scaling. Thus, when you multiply by inverse you are not just rotating: you are also applying the inverted scale. While you could fix this by multiplying by the scale again, the far simpler solution seems to be getting the above five-line version working.
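As an aside (my suggestion, not part of the answer above): transforming only the two extreme corners cannot produce a tight axis-aligned box once rotations are involved. A common alternative is to transform all eight corners of the box and take the component-wise min/max:
#include <glm/glm.hpp>
// Sketch: rebuild the axis-aligned box from the eight transformed corners.
// Assumes BoundingBox has glm::vec3 members min and max, as in the question.
BoundingBox& BoundingBox::TransformBy(const glm::mat4& modelMatrice)
{
    glm::vec3 corners[8];
    for (int i = 0; i < 8; ++i)
    {
        // pick min or max per axis from the bit pattern of i
        glm::vec4 corner(i & 1 ? max.x : min.x,
                         i & 2 ? max.y : min.y,
                         i & 4 ? max.z : min.z,
                         1.0f);
        corners[i] = glm::vec3(modelMatrice * corner);
    }
    glm::vec3 newMin = corners[0];
    glm::vec3 newMax = corners[0];
    for (int i = 1; i < 8; ++i)
    {
        newMin = glm::min(newMin, corners[i]);
        newMax = glm::max(newMax, corners[i]);
    }
    min = newMin;
    max = newMax;
    return *this;
}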
I've loaded a texture-mapped OBJ via vtkOBJReader and fed it into a vtkModifiedBSPTree:
auto readerOther(vtkSmartPointer<vtkOBJReader>::New());
auto rawOtherPath(modelPathOther.toLatin1());
readerOther->SetFileName(rawOtherPath.data());
readerOther->Update();
auto meshDataOther(readerOther->GetOutput());
auto bspTreeOther(vtkSmartPointer<vtkModifiedBSPTree>::New());
bspTreeOther->SetDataSet(meshDataOther);
bspTreeOther->BuildLocator();
I then compute my line segment start and end and feed that into
if (bspTreeOther->IntersectWithLine(p1, p2, tolerance, distanceAlongLine, intersectionCoords, pcoords, subId, cellId, cell))
With all the relevant predefined variables of course.
What I need is the texture's UV coordinates at the point of intersection.
I'm so very new to VTK that I've not yet grasped the logic of how it's put together; the abstraction layers are still losing me while I'm digging through the source.
I've hunted for this answer across SO and the VTK users archives, but found only vague hints given by those who understood VTK deeply to those who were nearly there themselves, and thus of little help to me so far.
(Appended 11/9/2018)
To clarify, I'm working with non-degenerate triangulated meshes created by a single 3D scanner shot, so quads and other higher polygons are never going to be seen by my code. A general solution should account for such things, but that can be accomplished by triangulating the mesh first via a good application of handwavium.
Code
Note that if one vertex belongs to several polygons and has different texture coordinates, VTK will create duplicates of the vertex.
I don't use vtkCleanPolyData, because VTK would merge such "duplicates" and we would lose the needed information, as far as I know.
I use vtkCellLocator instead of vtkModifiedBSPTree, because in my case it was faster.
The main file, main.cpp.
You can find the magic numbers in the start and end arrays; these are your p1 and p2. I've set these values just as an example.
#include <vtkSmartPointer.h>
#include <vtkPointData.h>
#include <vtkCellLocator.h>
#include <vtkGenericCell.h>
#include <vtkOBJReader.h>
#include <vtkTriangleFilter.h>
#include <vtkMath.h>
#include <iostream>
int main(int argc, char * argv[])
{
if (argc < 2)
{
std::cerr << "Usage: " << argv[0] << " OBJ_file_name" << std::endl;
return EXIT_FAILURE;
}
auto reader{vtkSmartPointer<vtkOBJReader>::New()};
reader->SetFileName(argv[1]);
reader->Update();
// Triangulate the mesh if needed
auto triangleFilter{vtkSmartPointer<vtkTriangleFilter>::New()};
triangleFilter->SetInputConnection(reader->GetOutputPort());
triangleFilter->Update();
auto mesh{triangleFilter->GetOutput()};
// Use `auto mesh(reader->GetOutput());` instead if no triangulation needed
// Build a locator to find intersections
auto locator{vtkSmartPointer<vtkCellLocator>::New()};
locator->SetDataSet(mesh);
locator->BuildLocator();
// Initialize variables needed for intersection calculation
double start[3]{-1, 0, 0.5};
double end[3]{ 1, 0, 0.5};
double tolerance{1E-6};
double relativeDistanceAlongLine;
double intersectionCoordinates[3];
double parametricCoordinates[3];
int subId;
vtkIdType cellId;
auto cell{vtkSmartPointer<vtkGenericCell>::New()};
// Find intersection
int intersected = locator->IntersectWithLine(
start,
end,
tolerance,
relativeDistanceAlongLine,
intersectionCoordinates,
parametricCoordinates,
subId,
cellId,
cell.Get()
);
// Bail out if the line misses the mesh; cellId would be meaningless below
if (!intersected)
{
std::cerr << "Line does not intersect the mesh" << std::endl;
return EXIT_FAILURE;
}
// Get points of intersection cell
auto pointsIds{vtkSmartPointer<vtkIdList>::New()};
mesh->GetCellPoints(cellId, pointsIds);
// Store coordinates and texture coordinates of vertices of the cell
double meshTrianglePoints[3][3];
double textureTrianglePoints[3][2];
auto textureCoordinates{mesh->GetPointData()->GetTCoords()};
for (unsigned pointNumber = 0; pointNumber < cell->GetNumberOfPoints(); ++pointNumber)
{
mesh->GetPoint(pointsIds->GetId(pointNumber), meshTrianglePoints[pointNumber]);
textureCoordinates->GetTuple(pointsIds->GetId(pointNumber), textureTrianglePoints[pointNumber]);
}
// Normalize the coordinates
double movedMeshTrianglePoints[3][3];
for (unsigned i = 0; i < 3; ++i)
{
movedMeshTrianglePoints[0][i] = 0;
movedMeshTrianglePoints[1][i] =
meshTrianglePoints[1][i] -
meshTrianglePoints[0][i];
movedMeshTrianglePoints[2][i] =
meshTrianglePoints[2][i] -
meshTrianglePoints[0][i];
}
// Normalize the texture coordinates
double movedTextureTrianglePoints[3][2];
for (unsigned i = 0; i < 2; ++i)
{
movedTextureTrianglePoints[0][i] = 0;
movedTextureTrianglePoints[1][i] =
textureTrianglePoints[1][i] -
textureTrianglePoints[0][i];
movedTextureTrianglePoints[2][i] =
textureTrianglePoints[2][i] -
textureTrianglePoints[0][i];
}
// Calculate SVD of a matrix consisting of normalized vertices
double U[3][3];
double w[3];
double VT[3][3];
vtkMath::SingularValueDecomposition3x3(movedMeshTrianglePoints, U, w, VT);
// Calculate pseudo inverse of a matrix consisting of normalized vertices
double pseudoInverse[3][3]{0};
for (unsigned i = 0; i < 3; ++i)
{
for (unsigned j = 0; j < 3; ++j)
{
for (unsigned k = 0; k < 3; ++k)
{
if (w[k] != 0)
{
pseudoInverse[i][j] += VT[k][i] * U[j][k] / w[k];
}
}
}
}
// Calculate interpolation matrix
double interpolationMatrix[3][2]{0};
for (unsigned i = 0; i < 3; ++i)
{
for (unsigned j = 0; j < 2; ++j)
{
for (unsigned k = 0; k < 3; ++k)
{
interpolationMatrix[i][j] += pseudoInverse[i][k] * movedTextureTrianglePoints[k][j];
}
}
}
// Calculate interpolated texture coordinates of the intersection point
double interpolatedTexturePoint[2]{textureTrianglePoints[0][0], textureTrianglePoints[0][1]};
for (unsigned i = 0; i < 2; ++i)
{
for (unsigned j = 0; j < 3; ++j)
{
interpolatedTexturePoint[i] += (intersectionCoordinates[j] - meshTrianglePoints[0][j]) * interpolationMatrix[j][i];
}
}
// Print the result
std::cout << "Interpolated texture coordinates";
for (unsigned i = 0; i < 2; ++i)
{
std::cout << " " << interpolatedTexturePoint[i];
}
std::cout << std::endl;
return EXIT_SUCCESS;
}
CMake project file CMakeLists.txt
cmake_minimum_required(VERSION 3.1)
PROJECT(IntersectInterpolate)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
find_package(VTK REQUIRED)
include(${VTK_USE_FILE})
add_executable(IntersectInterpolate MACOSX_BUNDLE main.cpp)
if(VTK_LIBRARIES)
target_link_libraries(IntersectInterpolate ${VTK_LIBRARIES})
else()
target_link_libraries(IntersectInterpolate vtkHybrid vtkWidgets)
endif()
Math
What we need
Suppose you have a mesh consisting of triangles and your vertices have texture coordinates.
Given the vertices of a triangle A, B, and C, with corresponding texture coordinates A', B', and C', you want to find a mapping (an interpolation) from the other interior and boundary points of the triangle to the texture.
Let's make some rational assumptions:
Points A, B, C should correspond to their texture coordinates A', B', C';
Each point X on a border, say AB, should correspond to a point X' on the line A'B' such that |AX| / |AB| = |A'X'| / |A'B'|; halfway along the original edge should be halfway along the texture edge;
The centroid of the triangle, (A + B + C) / 3, should correspond to the centroid of the texture triangle, (A' + B' + C') / 3.
Equations to solve
It looks like we want an affine mapping: the coordinates of the vertices of the original triangle should be multiplied by some coefficients and added to some constants.
Let's construct the system of equations
Ax * Mxx + Ay * Myx + Az * Mzx + M0x = A'x
Ax * Mxy + Ay * Myy + Az * Mzy + M0y = A'y
Ax * Mxz + Ay * Myz + Az * Mzz + M0z = 0
and the same for B and C.
You can see that we have 9 equations and 12 unknowns.
However, the equations containing Miz (for i in {x, y, z}) have the solution 0 and play no role in the further computations, so we can simply set those unknowns to 0.
Thus, we have a system with 6 equations and 8 unknowns:
Ax * Mxx + Ay * Myx + Az * Mzx + M0x = A'x
Ax * Mxy + Ay * Myy + Az * Mzy + M0y = A'y
Let's write the entire system in matrix form:
| 1 Ax Ay Az |   | M0x M0y |   | A'x A'y |
| 1 Bx By Bz | x | Mxx Mxy | = | B'x B'y |
| 1 Cx Cy Cz |   | Myx Myy |   | C'x C'y |
                 | Mzx Mzy |
I subtract the coordinates of vertex A from B and C, and the texture coordinates A' from B' and C'. Now we have a triangle whose first vertex lies at the origin of the coordinate system, along with the corresponding texture coordinates. This means the triangles are no longer translated (moved) relative to one another, and we don't need the M0 part of the interpolation matrix:
| Bx By Bz |   | Mxx Mxy |   | B'x B'y |
| Cx Cy Cz | x | Myx Myy | = | C'x C'y |
               | Mzx Mzy |
Solution
Let's call the first matrix P, the second M and the last one T
P M = T
The matrix P is not square, and if we add a zero row to make it square, the matrix becomes singular.
So we have to compute its pseudo-inverse in order to solve the equation.
There's no function for calculating the pseudo-inverse of a matrix in VTK.
We turn to the Moore–Penrose inverse article on Wikipedia and see that it can be calculated using the SVD.
The vtkMath::SingularValueDecomposition3x3 function allows us to do that.
The function gives us the U, S, and VT matrices.
I'll write the pseudo-inverse of matrix P as P", the transpose of U as UT, and the transpose of VT as V.
The pseudo-inverse of the diagonal matrix S is the matrix S" with elements 1 / Sii where Sii is nonzero, and 0 where it is zero:
P = U S VT
P" = V S" UT
M = P" T
Usage
To apply the interpolation matrix, we must not forget to translate the input and output vectors, where:
A' is a 2D vector of texture coordinates of the first vertex in the triangle,
A is a 3D vector of coordinates of the vertex,
M is the found interpolation matrix,
p is a 3D intersection point we want to get texture coordinates for,
t' is the resulting 2D vector with interpolated texture coordinates
t' = A' + (p - A) M
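In code form, that last step is just the following (a condensed restatement of the end of main.cpp above; texA, A, p, and M are stand-in names for A', A, the intersection point, and the interpolation matrix):
// t' = A' + (p - A) M, where M is the 3x2 interpolation matrix found above
double t[2] = { texA[0], texA[1] }; // start from A'
for (int i = 0; i < 2; ++i)
    for (int j = 0; j < 3; ++j)
        t[i] += (p[j] - A[j]) * M[j][i]; // add (p - A) M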
[Rewritten 2019/5/7 to reflect an updated understanding.]
After finding out that the parametric coordinates are inputs to a function from which one can get barycentric coordinates in the case of triangular cells, and then learning about what barycentric coordinates are, I was able to work out the following.
const auto readerOther(vtkSmartPointer<vtkOBJReader>::New());
const auto rawOtherPath(modelPathOther.toLatin1());
readerOther->SetFileName(rawOtherPath.data());
readerOther->Update();
const auto meshDataOther(readerOther->GetOutput());
const auto bspTreeOther(vtkSmartPointer<vtkModifiedBSPTree>::New());
bspTreeOther->SetDataSet(meshDataOther);
bspTreeOther->BuildLocator();
double point1[3]{0.0, 0.0, 0.0}; // start of line segment used to intersect the model.
double point2[3]{0.0, 0.0, 10.0}; // end of line segment
double distanceAlongLine;
double intersectionCoords[3]; // The coordinate of the intersection.
double parametricCoords[3]; // Parametric Coordinates of the intersection - see https://lorensen.github.io/VTKExamples/site/VTKBook/08Chapter8/#82-interpolation-functions
int subId; // ?
vtkIdType cellId;
double intersectedTextureCoords[2];
if (bspTreeOther->IntersectWithLine(point1, point2, TOLERANCE, distanceAlongLine, intersectionCoords, parametricCoords, subId, cellId))
{
const auto textureCoordsOther(meshDataOther->GetPointData()->GetTCoords());
const auto pointIds{meshDataOther->GetCell(cellId)->GetPointIds()};
const auto vertexIndex0{pointIds->GetId(0)};
const auto vertexIndex1{pointIds->GetId(1)};
const auto vertexIndex2{pointIds->GetId(2)};
double texCoord0[2];
double texCoord1[2];
double texCoord2[2];
textureCoordsOther->GetTuple(vertexIndex0, texCoord0);
textureCoordsOther->GetTuple(vertexIndex1, texCoord1);
textureCoordsOther->GetTuple(vertexIndex2, texCoord2);
const auto parametricR{parametricCoords[0]};
const auto parametricS{parametricCoords[1]};
const auto barycentricW0{1 - parametricR - parametricS};
const auto barycentricW1{parametricR};
const auto barycentricW2{parametricS};
intersectedTextureCoords[0] =
barycentricW0 * texCoord0[0] +
barycentricW1 * texCoord1[0] +
barycentricW2 * texCoord2[0];
intersectedTextureCoords[1] =
barycentricW0 * texCoord0[1] +
barycentricW1 * texCoord1[1] +
barycentricW2 * texCoord2[1];
}
Please note that this code is an interpretation of the actual code I'm using; I'm using Qt and its QVector2D and QVector3D classes along with some interpreter glue functions to go to and from arrays of doubles.
See https://lorensen.github.io/VTKExamples/site/VTKBook/08Chapter8 for details about the parametric coordinate systems of various cell types.
I have been trying to implement a simple Gaussian blur algorithm for my image-editing program. However, I have been having some trouble making it work, and I think the problem lies in the snippet below:
for( int j = 0; j < pow( kernel_size, 2 ); j++ )
{
int idx = ( i + kx + ( ky * img.width ));
//Try and overload this whenever possible
valueR += ( img.p_pixelArray[ idx ].r * kernel[ j ] );
valueG += ( img.p_pixelArray[ idx ].g * kernel[ j ] );
valueB += ( img.p_pixelArray[ idx ].b * kernel[ j ] );
if( kx == kernel_limit )
{
kx = -kernel_limit;
ky++;
}
else
{
kx++;
}
}
kx = -kernel_limit;
ky = -kernel_limit;
A brief explanation of the code above: kernel_size is the size of the kernel (or matrix) generated by the Gaussian blur formula. kx and ky are variables used to iterate over the kernel. i is the index of the parent loop that nests this one and goes over every pixel in the image. Each value variable simply holds a float R, G, or B value and is used afterwards to obtain the final result. The if-else is used to advance kx and ky. idx is used to find the correct pixel. kernel_limit is a variable set to
( kernel_size - 1 ) / 2
So kx can go from -1 (with a 3x3 kernel) to +1, and the same with ky. I think the problem lies with the line
int idx = ( i + kx + ( ky * img.width ));
But I am not sure. The image I get shows the color blurred in a diagonal direction, looking more like some kind of motion blur than a Gaussian blur. If someone could help out, I would be very grateful.
EDIT:
The way I fill the kernel is as follows:
for( int i = 0; i < pow( kernel_size, 2 ); i++ )
{
// This. Is. Lisp.
kernel[i] = (( 1 / ( 2 * pi * pow( sigma, 2 ))) * pow (e, ( -((( pow( kx, 2 ) + pow( ky, 2 )) / 2 * pow( sigma, 2 ))))));
if(( kx + 1 ) == kernel_size )
{
kx = 0;
ky++;
}
else
{
kx++;
}
}
A few problems:
Your Gaussian is missing brackets (even though you already have plenty) around 2 * pow( sigma, 2 ). As written, you multiply by the variance instead of dividing by it.
But your real problem is that your Gaussian is centered at kx = ky = 0, while you let kx and ky run from 0 to kernel_size instead of from -kernel_limit to kernel_limit. This is what produces the diagonal blurring. Something like the following should work better:
kx = -kernel_limit;
ky = -kernel_limit;
int kernel_size_sq = kernel_size * kernel_size;
for( int i = 0; i < kernel_size_sq; i++ )
{
double sigma_sq = sigma * sigma;
double kx_sq = kx * kx;
double ky_sq = ky * ky;
kernel[i] = 1.0 / ( 2 * pi * sigma_sq) * exp(-(kx_sq + ky_sq) / (2 * sigma_sq));
if(kx == kernel_limit )
{
kx = -kernel_limit;
ky++;
}
else
{
kx++;
}
}
Also note how I got rid of your lisp-ness and made some improvements: use intermediate variables for clarity (the compiler will optimize them away if you ask it to); simple multiplication is faster than pow(x, 2); and pow(e, x) == exp(x).
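One further refinement you may want (my addition, not something the question asked about): normalize the kernel after filling it so the weights sum to 1; otherwise the blur slightly brightens or darkens the image, since the sampled Gaussian does not sum exactly to 1 for small kernels:
// normalize the kernel so the blurred image keeps its overall brightness
double sum = 0;
for( int i = 0; i < kernel_size_sq; i++ )
    sum += kernel[i];
for( int i = 0; i < kernel_size_sq; i++ )
    kernel[i] /= sum;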
I have been trying to use a rotation matrix to rotate an image. Below is the code I've been using. I have been trying for days now, and every time it seems there is something wrong, but I can't see what I am doing wrong. For example, my image gets slanted instead of rotated.
The code below is divided in two parts: the actual rotation, and moving the picture upwards so it appears in the correct spot (all its points need to be above 0 for it to be saved properly). It takes as input an array of pixels (containing position information (x, y) and colour information (r, g, b)), an image (used solely to get its pixel count, i.e. the array size, and its width), and a value in radians for the rotation.
The part responsible for the rotation itself is the one above the line, while the part below the line is responsible for calculating the lowest point in the image and moving all pixels up or to the right so they all fit (I still need to implement a function to change the image size when an image is rotated by 45 degrees or similar).
void Rotate( Pixel *p_pixelsToRotate, prg::Image* img, float rad )
{
int imgLength = img->getPixelCount();
int width = img->getWidth();
int x { 0 }, y { 0 };
for( int i = 0; i < imgLength; i++ )
{
x = p_pixelsToRotate[i].x;
y = p_pixelsToRotate[i].y;
p_pixelsToRotate[i].x = round( cos( rad ) * x - sin( rad ) * y );
p_pixelsToRotate[i].y = round( sin( rad ) * x + sin( rad ) * y );
}
===========================================================================
Pixel* P1 = &p_pixelsToRotate[ width - 1 ]; // Definitions of these are in the supporting docs
Pixel* P3 = &p_pixelsToRotate[ imgLength - 1 ];
int xDiff = 0;
int yDiff = 0;
if( P1->x < 0 || P3->x < 0 )
{
(( P1->x < P3->x )) ? ( xDiff = abs( P1->x )) : ( xDiff = abs( P3->x ));
}
if( P1->y < 0 || P3->y < 0 )
{
(( P1->y < P3->y )) ? ( yDiff = abs( P1->y )) : ( yDiff = abs( P3->y ));
}
for( int i = 0; i < imgLength; i++ )
{
p_pixelsToRotate[i].x += xDiff;
p_pixelsToRotate[i].y += yDiff;
}
}
I would prefer fixing this myself, but have been unable to do so for more than a week now. I don't see why the function is not rotating the position information for the array of input pixel. If someone could have a look, and maybe spot why my logic isn't working, I would be immensely grateful. Thank you.
It seems you just made a mistake in the rotation matrix itself:
p_pixelsToRotate[i].y = round( sin( rad ) * x + sin( rad ) * y );
^^^---------------change to cos
For one thing, this is a mistake:
p_pixelsToRotate[i].x = round( cos( rad ) * x - sin( rad ) * y );
p_pixelsToRotate[i].y = round( sin( rad ) * x + >>>sin<<<( rad ) * y );
The >>>sin<<< should be cos. This would explain getting a shear rather than a rotation.
Other comments: Storing pixel coordinates in the bitmap data is an extremely expensive way to solve the problem of bitmap rotation. The better way is inverse transform sampling. Given a source image X that you wish to rotate with transform R to get Y, you are currently thinking
Y = R X
where X and Y have the pixel coordinates explicitly stored. To use inverse sampling, think instead of the same equation multiplied on both sides by the inverse of R.
R^(-1) Y = X
where the coordinates are implicit. That is, to produce Y[j][i], transform (j,i) with the inverse R^(-1) to get a coordinate (x,y) in the X image. Use this to sample the nearest pixel X[round(x)][round(y)] in X and assign that as Y[j][i].
(Actually, rather than simple rounding, a more sophisticated algorithm will take a weighted average of the X pixels around (x,y) to get a smoother result. How to choose the weights is a big additional topic.)
After you have this working, you can go a step farther. Instead of doing a full matrix-vector multiplication for each pixel, some algebra will show that the previous sampling coordinate can be updated to get an adjacent one (next to the right or left, up or down) with just a couple of additions. This speeds things up considerably.
The inverse of a rotation is trivial to compute! Just negate the rotation angle.
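To make that concrete, here is a minimal nearest-neighbour sketch under the question's setup. It assumes prg::Image also provides getHeight(), and getPixel()/setPixel() are hypothetical accessors standing in for whatever the framework offers:
void RotateInverseSampling( const prg::Image* src, prg::Image* dst, float rad )
{
    // the inverse of a rotation by rad is a rotation by -rad
    float c = cos( -rad );
    float s = sin( -rad );
    for( int j = 0; j < dst->getHeight(); j++ )
    {
        for( int i = 0; i < dst->getWidth(); i++ )
        {
            // map the destination pixel back into the source image
            int x = (int)round( c * i - s * j );
            int y = (int)round( s * i + c * j );
            if( x >= 0 && x < src->getWidth() && y >= 0 && y < src->getHeight())
                dst->setPixel( i, j, src->getPixel( x, y )); // nearest-pixel sample
        }
    }
}
Once this works, the incremental-update optimization mentioned above falls out naturally: stepping one destination pixel to the right just adds (c, s) to (x, y), replacing the per-pixel multiplications with additions.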
A last note is that your use of ternary operators o ? o : o to select assignments is truly terrible style. Instead of this:
(( P1->x < P3->x )) ? ( xDiff = abs( P1->x )) : ( xDiff = abs( P3->x ));
say
xDiff = ( P1->x < P3->x ) ? abs( P1->x ) : abs( P3->x );
I'm trying to implement fBm on a sphere for a planet. To create my sphere, I convert a cube into one.
Unfortunately, the fBm that gets generated appears as mirrored patches. In addition, it only does this on two faces (wrapping the values for the other faces). This leads to a similarly stretched look when rendered as a sphere.
The noise function is the improved noise described by Ken Perlin, which I adapted for HLSL:
float fade(float t) { return t * t * t * (t * (t * 6 - 15) + 10); }
float lerp(float t, float a, float b) { return a + t * (b - a); }
float grad(int hash, float x, float y, float z) {
int h = hash & 15; // CONVERT LO 4 BITS OF HASH CODE
float u = h<8 ? x : y, // INTO 12 GRADIENT DIRECTIONS.
v = h<4 ? y : h==12||h==14 ? x : z;
return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? v : -v);
}
int p[512] = { 151,...180 }; //0-255 twice
float noise(float x, float y, float z) {
int X = (int)floor(x) & 255; // FIND UNIT CUBE THAT
int Y = (int)floor(y) & 255; // CONTAINS POINT.
int Z = (int)floor(z) & 255;
x -= floor(x); // FIND RELATIVE X,Y,Z
y -= floor(y); // OF POINT IN CUBE.
z -= floor(z);
float u = fade(x), // COMPUTE FADE CURVES
v = fade(y), // FOR EACH OF X,Y,Z.
w = fade(z);
int A = p[X ]+Y, AA = p[A]+Z, AB = p[A+1]+Z, // HASH COORDINATES OF
B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z; // THE 8 CUBE CORNERS,
return lerp(w, lerp(v, lerp(u, grad(p[AA ], x , y , z ), // AND ADD
grad(p[BA ], x-1, y , z )), // BLENDED
lerp(u, grad(p[AB ], x , y-1, z ), // RESULTS
grad(p[BB ], x-1, y-1, z ))),// FROM 8
lerp(v, lerp(u, grad(p[AA+1], x , y , z-1 ), // CORNERS
grad(p[BA+1], x-1, y , z-1 )), // OF CUBE
lerp(u, grad(p[AB+1], x , y-1, z-1 ),
grad(p[BB+1], x-1, y-1, z-1 ))));
}
This implementation has worked as expected in a previous project; however, in this project it appears instead to create a smoothed-out grid when I use the vertex position as the input.
It's a unit cube, so the values aren't integers, but I can't figure out why it's not creating the typical Perlin noise texture.
Any help would be greatly appreciated, I'll provide more information if it's needed.
The array of ints p can't be accessed by the function, so I'm assuming the values in it are undefined.
A quick fix is to make the array static, but this is really slow.
So now I need to pass the array in, but I'm having trouble with that.
I use the noise function below in a DX11 planet-rendering project; I've included an fBm function too. I found it (written in GLSL) on ShaderToy, the WebGL shader programming website.
It was written by the godlike inigo quilez, who authored the site.
Give it a try; I hope it's of some help. All credit should go to inigo quilez for his work. Porting it to HLSL is trivial. I've only tested it in shader model 5, but I'm sure it'll work under 4 at least.
// hash based 3d value noise
// function taken from https://www.shadertoy.com/view/XslGRr
// Created by inigo quilez - iq/2013
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
// ported from GLSL to HLSL
cbuffer cbNoiseParameters
{
float _rOctaves;
float _rLacunarity;
float _rFrequency;
float _rAmplitude;
float _rGain;
float _rOffset;
};
float hash( float n )
{
return frac(sin(n)*43758.5453);
}
float noise( float3 x )
{
// The noise function returns a value in the range -1.0f -> 1.0f
float3 p = floor(x);
float3 f = frac(x);
f = f*f*(3.0-2.0*f);
float n = p.x + p.y*57.0 + 113.0*p.z;
return lerp(lerp(lerp( hash(n+0.0), hash(n+1.0),f.x),
lerp( hash(n+57.0), hash(n+58.0),f.x),f.y),
lerp(lerp( hash(n+113.0), hash(n+114.0),f.x),
lerp( hash(n+170.0), hash(n+171.0),f.x),f.y),f.z);
}
float fBm( float3 vPt )
{
float octaves = _rOctaves;
float lacunarity = _rLacunarity;
float frequency = _rFrequency;
float amplitude = _rAmplitude;
float gain = _rGain;
float offset = _rOffset;
float value = 0.f;
for( int i = 0; i < octaves; ++ i )
{
value += noise( vPt * frequency ) * amplitude;
amplitude *= gain;
frequency *= lacunarity;
}
return value;
}