C++ 1D line inverse projection onto 2D matrix

I have a 2D matrix that represents an image. First, I extract a line from this image (at any orientation) and project the pixel values of this line into a 1D vertical array (with the size of the image's height).
This works well; I can perform many operations on this array.
After that, I need to re-insert this vertical array at the same place and orientation as the original line in the 2D matrix.
The problem comes from the inverse projection: I have many holes in my re-integrated line.
Mat DataRaw::InsertLine(Mat image_full, Mat image, Point pointH, Point pointL)
{
    float offset = 0;
    float coef_dir = 0;

    // Equation of the line
    coef_dir = (float)(pointH.y - pointL.y) / (pointH.x - pointL.x);
    offset = pointH.y - (coef_dir * pointH.x);

    float x_cur = 0;
    int x = 0;
    float x_prev = 0;

    for (int y = 0; y < image.rows; y++)
    {
        x_cur = (float)(y - offset) / coef_dir; // current x
        if (y > 0)
            x_prev = (float)((y - 1) - offset) / coef_dir; // x at y-1
        x = (int)x_cur;
        if (x_cur - x_prev > 1)
        {
            if (y >= 1)
                image_full.at<uchar>(y - 1, x) = image.at<uchar>(y, 0);
        }
        image_full.at<uchar>(y, x) = image.at<uchar>(y, 0);
    }
    return image_full;
}
PointL and PointH are two points the line passes through.
I calculate the line equation using these two points.
Above is my function to re-insert my line into the 2D matrix; I try to check the difference at each Y step. But...
Thanks for your help !
/***** EDIT ******/
My problem on the left, what I want on the right:
http://i.stack.imgur.com/bTB0s.png
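Such holes typically appear when |coef_dir| < 1, because several x positions then map to the same row y while the loop writes only one pixel per row. A minimal, hedged sketch that fills the whole horizontal run of the line within each row instead (OpenCV types as in the question; the function name is hypothetical, and degenerate horizontal/vertical lines would still need separate handling, as in the original):

#include <algorithm>
#include <cmath>
#include <opencv2/core.hpp>
using cv::Mat;
using cv::Point;

// Sketch: for each row y, write every x the line crosses between rows y
// and y+1, so shallow lines (|slope| < 1) leave no horizontal gaps.
Mat InsertLineFilled(Mat image_full, Mat image, Point pointH, Point pointL)
{
    float coef_dir = (float)(pointH.y - pointL.y) / (pointH.x - pointL.x);
    float offset = pointH.y - (coef_dir * pointH.x);

    for (int y = 0; y < image.rows; y++)
    {
        float x_enter = (y - offset) / coef_dir;       // x where the line enters row y
        float x_leave = ((y + 1) - offset) / coef_dir; // x where it leaves towards row y+1
        int x0 = (int)std::floor(std::min(x_enter, x_leave));
        int x1 = (int)std::floor(std::max(x_enter, x_leave));
        for (int x = x0; x <= x1; x++)
            if (x >= 0 && x < image_full.cols)
                image_full.at<uchar>(y, x) = image.at<uchar>(y, 0);
    }
    return image_full;
}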

Related

Point cloud conversion to 2D range

I am trying to convert point cloud (x, y, z) data acquired from a Kinect V2 using libfreenect2 into a virtual 2D laser scan (e.g., a horizontal angle/distance vector).
I am currently assigning, per pixel column, the PCL distance value, as shown below:
std::vector<float> scan(512, 0);
for (unsigned int row = 0; row < 424; ++row) {
    for (unsigned int col = 0; col < 512; ++col) {
        float x, y, z;
        registration->getPointXYZ(depth, row, col, x, y, z);
        if (std::isnan(x) || std::isnan(y) || std::isnan(z)) {
            continue;
        }
        Eigen::Vector3f values = rotate_translate((-1 * x), y - 1.186, z);
        if (scan[col] == 0) {
            scan[col] = values[1];
        }
        if (values[1] < scan[col]) {
            scan[col] = values[1];
        }
    }
}
You may ignore the rotate_translate method; it simply changes local to global coordinates using the sensor pose.
The problem is best shown using the pictures below: whereas the LIDAR range sensor produces the expected points map, the Kinect 2D range scan is curved, and of course narrower, since its horizontal FOV is 70.6 degrees compared to the 270 degree range of the LIDAR.
It is this curvature that I am trying to fix; the SLAM/ICP library I'm using is mrpt and the actual data scan is inserted into an mrpt::obs::CObservation2DRangeScan observation:
auto obs = mrpt::obs::CObservation2DRangeScan();
obs.loadFromVectors(scan.size(), scan.data(), (char*)scan.data());
obs.aperture = mrpt::utils::DEG2RAD(70.6f);
obs.maxRange = 6.0;
obs.rightToLeft = true;
obs.timestamp = mrpt::system::now();
obs.setSensorPose(sensor);
I've searched around Google and SO, and the only answers which seem to address this question are this one and that one. So whereas I understand that the curvature is the result of me assigning each pixel column the PCL value, I am uncertain how I can use that to remove the curvature.
Each reply seems to take a different approach, and from what I understand the task is a linear interpolation between the angle-per-pixel ratio and the current pixel coordinates?
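For what it's worth, a common way to remove such curvature is to bin each point by its actual horizontal angle (atan2 of the lateral vs. forward coordinate) and keep the nearest planar range per angular bin, rather than assigning one ray per pixel column. A hedged sketch, under the assumption that each point's [0] component is lateral and [1] is forward after rotate_translate:

#include <array>
#include <cmath>
#include <vector>

// Sketch: build a virtual 2D scan by angular binning.
// The aperture and bin count mirror the question's 70.6 deg / 512 columns.
std::vector<float> makeScan(const std::vector<std::array<float, 3> >& points)
{
    const float aperture = 70.6f * 3.14159265f / 180.0f;
    const int bins = 512;
    std::vector<float> scan(bins, 0.0f);
    for (std::size_t i = 0; i < points.size(); ++i) {
        float lateral = points[i][0];
        float forward = points[i][1];
        float angle = std::atan2(lateral, forward);                     // 0 = straight ahead
        float range = std::sqrt(lateral * lateral + forward * forward); // planar distance
        int bin = (int)((angle + aperture / 2.0f) / aperture * bins);
        if (bin < 0 || bin >= bins)
            continue;                                                   // outside the FOV
        if (scan[bin] == 0.0f || range < scan[bin])
            scan[bin] = range;                                          // keep the nearest return
    }
    return scan;
}

The resulting vector could then be loaded into the CObservation2DRangeScan exactly as in the snippet above.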

C++ How to scale a shape and create an if function to not print if too big after scale?

Given a shape's original centroid + vertices (i.e., if it's a triangle, I know all three vertices' coords), how could I then create a scaling function with a scaling factor as a parameter, as below? However, my current code has an error and the results are huge shapes, much bigger than what I'm scaling by (I only want a scale factor of 2).
void Shape::scale(double factor)
{
    int x, y, xx, xy;
    int disx, disy;
    for (itr = vertices.begin(); itr != vertices.end(); ++itr) {
        //translate obj to origin (0,0)
        x = itr->getX() - centroid.getX();
        y = itr->getY() - centroid.getY();
        //finds distance between centroid and vertex
        disx = x + itr->getX();
        disy = y + itr->getY();
        xx = disx * factor;
        xy = disy * factor;
        //translate obj back
        xx = xx + centroid.getX();
        xy = xy + centroid.getY();
        //set new coord
        itr->setX(xx);
        itr->setY(xy);
    }
}
I know how to iterate over the vertices; my main point of confusion is how to do the maths with the factor to scale my shape's size.
This is how I declare and initialise a vertex:
// could I possibly do (scale*x, scale*y)? or would that be problematic?
vertices.push_back(Vertex(x, y));
Also, the grid is e.g. 100x100. If a scaled shape would be too big to fit into that grid, I want to exit the scale function so that the shape won't be enlarged. How can this be done effectively? So far I have a for loop, but it just loops over the vertices, so it would only stop the ones that fall outside the grid, instead of cancelling the entire shape, which would be ideal.
If my question is too broad, please ask and I shall edit further to standard.
First thing you need to do is find the center of mass of your set of points. That is the arithmetic mean of the coordinates of your points. Then, for each point, calculate the line between the center of mass and that point. Now the only thing left is to put the point on that line, but at factor * current_distance away, where current_distance is the distance from the mass center to the given point before rescaling.
void Shape::scale(double factor)
{
    Vertex mass_center = Vertex(0., 0.);
    for(int i = 0; i < vertices.size(); i++)
    {
        mass_center.x += vertices[i].x;
        mass_center.y += vertices[i].y;
    }
    mass_center.x /= vertices.size();
    mass_center.y /= vertices.size();
    for(int i = 0; i < vertices.size(); i++)
    {
        //this is a vector that leads from mass center to current vertex
        Vertex vec = Vertex(vertices[i].x - mass_center.x, vertices[i].y - mass_center.y);
        vertices[i].x = mass_center.x + factor * vec.x;
        vertices[i].y = mass_center.y + factor * vec.y;
    }
}
If you already know the centroid of a shape and the vertices are offsets from that point, then scaling in rectangular coordinates is just multiplying the x and y components of each vertex by the appropriate scaling factor (with a negative value flipping the shape around the axis).
void Shape::scale(double x_factor, double y_factor){
    for(auto i = 0; i < vertices.size(); ++i){
        vertices[i].x *= x_factor;
        vertices[i].y *= y_factor;
    }
}
You could then just overload this function with one that takes a single parameter and calls this function with the same value for x and y.
void Shape::scale(double factor){
    Shape::scale(factor, factor);
}
If your vertex values are not centered at the origin then you will also have to multiply those values by your scaling factor.
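The question's second part (cancelling the whole scale when the result would not fit the grid) is not covered above. A hedged sketch of one way to do it, assuming the Vertex accessors from the question and a hypothetical 100x100 grid: scale into a temporary list first, and only commit if every scaled vertex fits.

#include <vector>

// Sketch: two-pass scale with a bounds check. GRID_W/GRID_H and the
// function name are assumptions for illustration.
bool Shape::scaleIfFits(double factor)
{
    const int GRID_W = 100, GRID_H = 100;
    std::vector<Vertex> scaled;
    for (std::size_t i = 0; i < vertices.size(); ++i) {
        double x = centroid.getX() + factor * (vertices[i].getX() - centroid.getX());
        double y = centroid.getY() + factor * (vertices[i].getY() - centroid.getY());
        if (x < 0 || x >= GRID_W || y < 0 || y >= GRID_H)
            return false;            // shape would leave the grid: abort unchanged
        scaled.push_back(Vertex(x, y));
    }
    vertices = scaled;               // every vertex fits, so commit
    return true;
}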

Impact of cubic and Catmull-Rom splines on an image

I am trying to implement a function like the one below.
For this I am trying to use cubic interpolation and Catmull-Rom interpolation (checking each separately to compare the results). What I am not understanding is what impact these interpolations have on the image, how we can get the values of the points where we clicked to set the curve, and whether we need to define the function for these black points on the image separately.
I am getting help from these resources
Source 1
Source 2
Approx the same focus
Edit
int main (int argc, const char** argv)
{
    Mat input = imread("E:\\img2.jpg");
    for(int i = 0; i < input.rows; i++)
    {
        for(int p = 0; p < input.cols; p++)
        {
            //for(int t=0; t<input.channels(); t++)
            //{
            input.at<cv::Vec3b>(i,p)[0] = 255*correction(input.at<cv::Vec3b>(i,p)[0]/255.0, ctrl, N); //B
            input.at<cv::Vec3b>(i,p)[1] = 255*correction(input.at<cv::Vec3b>(i,p)[1]/255.0, ctrl, N); //G
            input.at<cv::Vec3b>(i,p)[2] = 255*correction(input.at<cv::Vec3b>(i,p)[2]/255.0, ctrl, N); //R
            //}
        }
    }
    imshow("image", input);
    waitKey();
}
So if your control points are always at the same x coordinates, linearly dispersed along the whole range, then you can do it like this:
//---------------------------------------------------------------------------
const int N = 5;    // number of control points (must be >= 4)
float ctrl[N] =     // control point y values, initiated with the linear function y=x
{                   // x value is index*1.0/(N-1)
    0.00,
    0.25,
    0.50,
    0.75,
    1.00,
};
//---------------------------------------------------------------------------
float correction(float col, float *ctrl, int n)
{
    float di = 1.0 / float(n - 1);
    int i0, i1, i2, i3;
    float t, tt, ttt;
    float a0, a1, a2, a3, d1, d2;
    // find start control point
    col *= float(n - 1);
    i1 = col; col -= i1;
    i0 = i1 - 1; if (i0 <  0) i0 = 0;
    i2 = i1 + 1; if (i2 >= n) i2 = n - 1;
    i3 = i1 + 2; if (i3 >= n) i3 = n - 1;
    // compute interpolation coefficients
    d1 = 0.5 * (ctrl[i2] - ctrl[i0]);
    d2 = 0.5 * (ctrl[i3] - ctrl[i1]);
    a0 = ctrl[i1];
    a1 = d1;
    a2 = (3.0 * (ctrl[i2] - ctrl[i1])) - (2.0 * d1) - d2;
    a3 = d1 + d2 + (2.0 * (-ctrl[i2] + ctrl[i1]));
    // now interpolate the new color intensity
    t = col; tt = t * t; ttt = tt * t;
    t = a0 + (a1 * t) + (a2 * tt) + (a3 * ttt);
    return t;
}
//---------------------------------------------------------------------------
It uses 4-point 1D cubic interpolation (from the link in my comment above). To get the new color just do this:
new_col = correction(old_col, ctrl, N);
This is how it looks:
The green arrows show the derivative error (always only at the start and end points of the whole curve). It can be corrected by adding 2 more control points, one before and one after all the others ...
[Notes]
the color range is <0.0, 1.0>, so if you need a different range then just multiply the result and divide the input ...
[edit1] the start/end derivatives fixed a little
float correction(float col, float *ctrl, int n)
{
    float di = 1.0 / float(n - 1);
    int i0, i1, i2, i3;
    float t, tt, ttt;
    float a0, a1, a2, a3, d1, d2;
    // find start control point
    col *= float(n - 1);
    i1 = col; col -= i1;
    i0 = i1 - 1;
    i2 = i1 + 1; if (i2 >= n) i2 = n - 1;
    i3 = i1 + 2;
    // compute interpolation coefficients
    if (i0 >= 0) d1 = 0.5 * (ctrl[i2] - ctrl[i0]); else d1 = ctrl[i2] - ctrl[i1];
    if (i3 <  n) d2 = 0.5 * (ctrl[i3] - ctrl[i1]); else d2 = ctrl[i2] - ctrl[i1];
    a0 = ctrl[i1];
    a1 = d1;
    a2 = (3.0 * (ctrl[i2] - ctrl[i1])) - (2.0 * d1) - d2;
    a3 = d1 + d2 + (2.0 * (-ctrl[i2] + ctrl[i1]));
    // now interpolate the new color intensity
    t = col; tt = t * t; ttt = tt * t;
    t = a0 + (a1 * t) + (a2 * tt) + (a3 * ttt);
    return t;
}
[edit2] just some clarification on the coefficients
they are all derived from these conditions:
y(t) = a0 + a1*t + a2*t*t + a3*t*t*t // direct value
y'(t) = a1 + 2*a2*t + 3*a3*t*t // first derivation
Now you have points y0, y1, y2, y3, so I chose y(0)=y1 and y(1)=y2, which gives C0 continuity (the value is the same at the joint points between curves).
Now I need C1 continuity, so I add the condition that y'(0) must be the same as y'(1) of the previous curve.
For y'(0) I choose the average direction between points y0, y1, y2.
For y'(1) I choose the average direction between points y1, y2, y3.
These are the same for the next/previous segments, so it is enough. Now put it all together:
y(0)  = y1 = a0 + a1*0 + a2*0*0 + a3*0*0*0
y(1)  = y2 = a0 + a1*1 + a2*1*1 + a3*1*1*1
y'(0) = 0.5*(y2-y0) = a1 + 2*a2*0 + 3*a3*0*0
y'(1) = 0.5*(y3-y1) = a1 + 2*a2*1 + 3*a3*1*1
And solve this system of equations (a0, a1, a2, a3 = ?). You will get what I have in the source code above. If you need different properties of the curve then just set up different equations ...
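Solving that system gives exactly the coefficients used in the source code above (with d1 = 0.5*(y2-y0) and d2 = 0.5*(y3-y1)):

a0 = y1
a1 = d1
a2 = 3*(y2-y1) - 2*d1 - d2
a3 = -2*(y2-y1) + d1 + d2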
[edit3] usage
pic1 = pic0; // copy source image to destination; pic is my image class ...
for (y = 0; y < pic1.ys; y++) // go through all pixels
    for (x = 0; x < pic1.xs; x++)
    {
        float i;
        // read, convert, write pixel
        i = pic1.p[y][x].db[0]; i = 255.0 * correction(i / 255.0, red control points, 5);   pic1.p[y][x].db[0] = i;
        i = pic1.p[y][x].db[1]; i = 255.0 * correction(i / 255.0, green control points, 5); pic1.p[y][x].db[1] = i;
        i = pic1.p[y][x].db[2]; i = 255.0 * correction(i / 255.0, blue control points, 5);  pic1.p[y][x].db[2] = i;
    }
On top there are control points per R,G,B. On bottom left is original image and on bottom right is corrected image.
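For reference, a hedged sketch of the same per-pixel usage written against the question's OpenCV types instead of the answer's own image class (correction, ctrl, and N as defined above; a single shared curve is applied to all three channels here, and passing separate per-channel control point arrays would mirror the answer's usage):

#include <opencv2/core.hpp>

// Sketch: apply correction() to every channel of every pixel of a
// CV_8UC3 image, clamping the result back into the 8-bit range.
void applyCurve(cv::Mat& img, float* ctrl, int n)
{
    for (int y = 0; y < img.rows; y++)
        for (int x = 0; x < img.cols; x++) {
            cv::Vec3b& px = img.at<cv::Vec3b>(y, x);
            for (int c = 0; c < 3; c++)
                px[c] = cv::saturate_cast<uchar>(255.0f * correction(px[c] / 255.0f, ctrl, n));
        }
}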

Optimizing a simple 2D Tile engine (+potential bugfix)

Preface
Yes, there is plenty to cover here... but I'll do my best to keep this as well-organized, informative and straight-to-the-point as I possibly can!
Using the HGE library in C++, I have created a simple tile engine.
And thus far, I have implemented the following designs:
A CTile class, representing a single tile within a CTileLayer, containing row/column information as well as an HGE::hgeQuad (which stores vertex, color and texture information, see here for details).
A CTileLayer class, representing a two-dimensional 'plane' of tiles (which are stored as a one-dimensional array of CTile objects), containing the # of rows/columns, X/Y world-coordinate information, tile pixel width/height information, and the layer's overall width/height in pixels.
A CTileLayer is responsible for rendering any tiles which are either fully or partially visible within the boundaries of a virtual camera 'viewport', and for avoiding doing so for any tiles which are outside of this visible range. Upon creation, it pre-calculates all information to be stored within each CTile object, so the core of the engine has more room to breathe and can focus strictly on the render loop. Of course, it also handles proper deallocation of each contained tile.
Issues
The problem I am now facing essentially boils down to the following architectural/optimization issues:
In my render loop, even though I am not rendering any tiles which are outside of the visible range, I am still looping through all of the tiles, which seems to have a major performance impact for larger tilemaps (i.e., anything above 100x100 rows/columns at 64x64 tile dimensions still drops the framerate by 50% or more).
Eventually, I intend to create a fancy tilemap editor to coincide with this engine.
However, since I am storing all two-dimensional information inside one or more 1D arrays, I have no idea how feasible it would be to implement some sort of rectangular-select & copy/paste feature without a MAJOR performance hit -- involving looping through every tile twice per frame. And yet if I used 2D arrays, there would be a slightly smaller but more universal FPS drop!
Bug
As stated before... In my render code for a CTileLayer object, I have optimized which tiles are to be drawn based upon whether or not they are within viewing range. This works great, and for larger maps I noticed only a 3-8 FPS drop (compared to a 100+ FPS drop without this optimization).
But I think I'm calculating this range incorrectly, because after scrolling halfway through the map you can start to see a gap (on the topmost & leftmost sides) where tiles aren't being rendered, as if the clipping range is increasing faster than the camera can move (even though they both move at the same speed).
This gap gradually increases in size the further along the X & Y axes you go, eventually eating up nearly half of the top & left sides of the screen on a large map.
My render code for this is shown below...
Code
//
// [Allocate]
// For pre-calculating tile information
// - Rows/Columns = Map Dimensions (in tiles)
// - Width/Height = Tile Dimensions (in pixels)
//
void CTileLayer::Allocate(UINT numColumns, UINT numRows, float tileWidth, float tileHeight)
{
    m_nColumns = numColumns;
    m_nRows = numRows;

    float x, y;
    UINT column = 0, row = 0;
    const ULONG nTiles = m_nColumns * m_nRows;
    hgeQuad quad;

    m_tileWidth = tileWidth;
    m_tileHeight = tileHeight;
    m_layerWidth = m_tileWidth * m_nColumns;
    m_layerHeight = m_tileHeight * m_nRows;

    if(m_tiles != NULL) Free();
    m_tiles = new CTile[nTiles];

    for(ULONG l = 0; l < nTiles; l++)
    {
        m_tiles[l] = CTile();
        m_tiles[l].column = column;
        m_tiles[l].row = row;

        x = (float(column) * m_tileWidth) + m_offsetX;
        y = (float(row) * m_tileHeight) + m_offsetY;

        quad.blend = BLEND_ALPHAADD | BLEND_COLORMUL | BLEND_ZWRITE;
        quad.tex = HTEXTURE(nullptr); // Replaced for the sake of brevity (in the engine's code, I used a globally allocated texture array and did some random tile generation here)

        for(UINT i = 0; i < 4; i++)
        {
            quad.v[i].z = 0.5f;
            quad.v[i].col = 0xFF7F7F7F;
        }

        quad.v[0].x = x;
        quad.v[0].y = y;
        quad.v[0].tx = 0;
        quad.v[0].ty = 0;

        quad.v[1].x = x + m_tileWidth;
        quad.v[1].y = y;
        quad.v[1].tx = 1.0;
        quad.v[1].ty = 0;

        quad.v[2].x = x + m_tileWidth;
        quad.v[2].y = y + m_tileHeight;
        quad.v[2].tx = 1.0;
        quad.v[2].ty = 1.0;

        quad.v[3].x = x;
        quad.v[3].y = y + m_tileHeight;
        quad.v[3].tx = 0;
        quad.v[3].ty = 1.0;

        memcpy(&m_tiles[l].quad, &quad, sizeof(hgeQuad));

        if(++column > m_nColumns - 1) {
            column = 0;
            row++;
        }
    }
}
//
// [Render]
// For drawing the entire tile layer
// - X/Y = world position
// - Top/Left = screen 'clipping' position
// - Width/Height = screen 'clipping' dimensions
//
bool CTileLayer::Render(HGE* hge, float cameraX, float cameraY, float cameraTop, float cameraLeft, float cameraWidth, float cameraHeight)
{
    // Calculate the current number of tiles
    const ULONG nTiles = m_nColumns * m_nRows;

    // Calculate min & max X/Y world pixel coordinates
    const float scalarX = cameraX / m_layerWidth;  // This is how far (from 0 to 1, in world coordinates) along the X-axis we are within the layer
    const float scalarY = cameraY / m_layerHeight; // This is how far (from 0 to 1, in world coordinates) along the Y-axis we are within the layer
    const float minX = cameraTop + (scalarX * float(m_nColumns) - m_tileWidth); // Leftmost pixel coordinate within the world
    const float minY = cameraLeft + (scalarY * float(m_nRows) - m_tileHeight);  // Topmost pixel coordinate within the world
    const float maxX = minX + cameraWidth + m_tileWidth;   // Rightmost pixel coordinate within the world
    const float maxY = minY + cameraHeight + m_tileHeight; // Bottommost pixel coordinate within the world

    // Loop through all tiles in the map
    for(ULONG l = 0; l < nTiles; l++)
    {
        CTile tile = m_tiles[l];

        // Calculate this tile's X/Y world pixel coordinates
        float tileX = (float(tile.column) * m_tileWidth) - cameraX;
        float tileY = (float(tile.row) * m_tileHeight) - cameraY;

        // Check if this tile is within the boundaries of the current camera view
        if(tileX > minX && tileY > minY && tileX < maxX && tileY < maxY) {
            // It is, so draw it!
            hge->Gfx_RenderQuad(&tile.quad, -cameraX, -cameraY);
        }
    }
    return false;
}
//
// [Free]
// Gee, I wonder what this does? lol...
//
void CTileLayer::Free()
{
    delete [] m_tiles;
    m_tiles = NULL;
}
Questions
What can be done to fix those architectural/optimization issues, without greatly impacting any other rendering optimizations?
Why is that bug occurring? How can it be fixed?
Thank you for your time!
Optimising the iteration over the map is fairly straightforward.
Given a visible rect in world coordinates (left, top, right, bottom), it's fairly trivial to work out the tile positions simply by dividing by the tile size.
Once you have those tile coordinates (tl, tt, tr, tb) you can very easily calculate the first visible tile in your 1D array. (The way you calculate any tile index from a 2D coordinate is (y*width)+x -- remember to make sure the input coordinate is valid first, though.) You then just have a double for loop to iterate the visible tiles:
int visiblewidth = tr - tl + 1;
int visibleheight = tb - tt + 1;

for( int rowidx = ( tt * layerwidth ) + tl; visibleheight--; rowidx += layerwidth )
{
    for( int tileidx = rowidx, cx = visiblewidth; cx--; tileidx++ )
    {
        // render m_Tiles[ tileidx ]...
    }
}
You can use a similar system for selecting a block of tiles. Just store the selection coordinates and calculate the actual tiles in exactly the same way.
As for your bug, why do you have x, y, left, right, width, height for the camera? Just store camera position (x,y) and calculate the visible rect from the dimensions of your screen/viewport along with any zoom factor you have defined.
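As a concrete illustration of that suggestion, a minimal sketch of computing the visible tile range (tl, tt, tr, tb) from just the camera position and viewport size (the parameter names here are assumptions, not HGE API):

#include <algorithm>

// Sketch: derive the visible tile range by dividing the view rectangle
// by the tile size, clamped to the map bounds. cameraX/cameraY are the
// world pixel coordinates of the view's top-left corner.
void visibleRange(float cameraX, float cameraY,
                  float viewWidth, float viewHeight,
                  float tileW, float tileH, int nCols, int nRows,
                  int &tl, int &tt, int &tr, int &tb)
{
    tl = std::max(0, (int)(cameraX / tileW));
    tt = std::max(0, (int)(cameraY / tileH));
    tr = std::min(nCols - 1, (int)((cameraX + viewWidth) / tileW));
    tb = std::min(nRows - 1, (int)((cameraY + viewHeight) / tileH));
}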
This is a pseudo-code-ish example; geometry variables are 2D vectors. Both the camera object and the tilemap have a center position and an extent (half size). The math is just the same even if you decide to stick with plain numbers. Even if you don't use center coordinates and extents, perhaps you'll get an idea of the math. All of this code is in the render function and is rather simplified. Also, this example assumes you already have a 2D-array-like object that holds the tiles.
So, first a full example, and I'll explain each part further down.
// x and y are counters; sx is a placeholder for x's start value, as x will
// be in the inner loop and needs to be reset each iteration.
// mx and my are the values x and y will count towards.
x  = 0,
y  = 0,
sx = 0,
mx = total_number_of_tiles_on_x_axis,
my = total_number_of_tiles_on_y_axis

// calculate the lowest and highest worldspace values of the cam
min = cam.center - cam.extent
max = cam.center + cam.extent

// subtract the tilemap corners and divide by the tile size to get
// the number of tiles outside of the camera's view
floor = Math.floor((min - (tilemap.center - tilemap.extent)) / tilesize)
ceil  = Math.ceil ((max - (tilemap.center + tilemap.extent)) / tilesize)

if (floor.x > 0)
    sx += floor.x
if (floor.y > 0)
    y += floor.y
if (ceil.x < 0)
    mx += ceil.x
if (ceil.y < 0)
    my += ceil.y

for (; y < my; y++)
    // x needs to be reset each y iteration; the start value is stored in sx
    for (x = sx; x < mx; x++)
        // render tile x in tile layer y
Explained bit by bit. First thing in the render function, we will use a few variables.
// x and y are counters; sx is a placeholder for x's start value, as x will
// be in the inner loop and needs to be reset each iteration.
// mx and my are the values x and y will count towards.
x  = 0,
y  = 0,
sx = 0,
mx = total_number_of_tiles_on_x_axis,
my = total_number_of_tiles_on_y_axis
To prevent rendering all tiles, you need to provide either a camera-like object or information on where the visible area starts and stops (in worldspace, if the scene is movable).
In this example I'm providing a camera object to the render function, which has a center and an extent stored as 2D vectors.
// calculate the lowest and highest worldspace values of the cam
min = cam.center - cam.extent
max = cam.center + cam.extent

// subtract the tilemap corners and divide by the tile size to get
// the number of tiles outside of the camera's view
floor = Math.floor((min - (tilemap.center - tilemap.extent)) / tilesize)
ceil  = Math.ceil ((max - (tilemap.center + tilemap.extent)) / tilesize)
// floor & ceil are 2D vectors
Now, if floor is higher than 0 or ceil is lower than 0 on any axis, it means that that many tiles are outside of the camera's view on that side.
// check if there are any tiles outside to the left of or above the camera
if (floor.x > 0)
    sx += floor.x // set the start value of sx to the number of tiles outside the camera
if (floor.y > 0)
    y += floor.y  // set the start value of y to the number of tiles outside the camera

// test if there are any tiles outside to the right of or below the camera
if (ceil.x < 0)
    mx += ceil.x  // then add the negative value to mx (max x)
if (ceil.y < 0)
    my += ceil.y  // then add the negative value to my (max y)
A normal render of the tilemap would go from 0 to the number of tiles on each axis, using a loop within a loop to account for both axes. But thanks to the above code, x and y will always stay within the space visible inside the camera's borders.
// will loop through only the visible tiles
for (; y < my; y++)
    // x needs to be reset each y iteration; the start value is stored in sx
    for (x = sx; x < mx; x++)
        // render tile x in tile layer y
Hope this helps!
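For readers who prefer C++ over the pseudo code, a hedged transcription of the same clamping math, assuming a trivial 2D vector struct (all names here are illustrative):

#include <cmath>

struct Vec2 { float x, y; };

// Sketch: clamp the tile loop bounds (sx, sy, mx, my) to the camera's
// view, as in the pseudo code above. Camera and map each have a center
// and an extent (half size); tilesize is in world units.
void clampTileLoop(Vec2 camCenter, Vec2 camExtent,
                   Vec2 mapCenter, Vec2 mapExtent,
                   float tilesize, int totalX, int totalY,
                   int &sx, int &sy, int &mx, int &my)
{
    sx = 0; sy = 0; mx = totalX; my = totalY;
    // tiles fully outside on the left/top raise the start indices
    int fx = (int)std::floor((camCenter.x - camExtent.x - (mapCenter.x - mapExtent.x)) / tilesize);
    int fy = (int)std::floor((camCenter.y - camExtent.y - (mapCenter.y - mapExtent.y)) / tilesize);
    if (fx > 0) sx += fx;
    if (fy > 0) sy += fy;
    // tiles fully outside on the right/bottom lower the end indices
    int cx = (int)std::ceil((camCenter.x + camExtent.x - (mapCenter.x + mapExtent.x)) / tilesize);
    int cy = (int)std::ceil((camCenter.y + camExtent.y - (mapCenter.y + mapExtent.y)) / tilesize);
    if (cx < 0) mx += cx;
    if (cy < 0) my += cy;
}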

Rotating coordinates around an axis

I'm representing a shape as a set of coordinates in 3D, and I'm trying to rotate the whole object around an axis (in this case the Z axis, but I'd like to rotate around all three once I get it working).
I've written some code to do this using a rotation matrix:
//Coord is a 3D vector of floats
//pos is a coordinate
//angles is a 3d vector, each component is the angle of rotation around the component axis
//in radians
Coord<float> Polymers::rotateByMatrix(Coord<float> pos, const Coord<float> &angles)
{
    float xrot = angles[0];
    float yrot = angles[1];
    float zrot = angles[2];

    //z axis rotation
    pos[0] = (cosf(zrot) * pos[0]) - (sinf(zrot) * pos[1]);
    pos[1] = (sinf(zrot) * pos[0]) + (cosf(zrot) * pos[1]);
    return pos;
}
The image below shows the object I'm trying to rotate (looking down the Z axis) before the rotation is attempted, each small sphere indicates one of the coordinates I'm trying to rotate
http://www.cs.nott.ac.uk/~jqs/notsquashed.png
The rotation is performed for the object by the following code:
//loop over each coordinate in the object
for (int k = start; k < finish; ++k)
{
    Coord<float> pos = mp[k - start];

    //move object away from origin to test rotation around origin
    pos += Coord<float>(5.0, 5.0, 5.0);
    pos = rotateByMatrix(pos, rots);

    //wrap particle position
    //these bits of code just wrap the coordinates around if they are
    //outside of the volume, and write the results to the positions
    //array, so they shouldn't affect the rotation.
    for (int l = 0; l < 3; ++l)
    {
        //wrap to ensure toroidal space
        if (pos[l] < origin[l]) pos[l] += dims[l];
        if (pos[l] >= (origin[l] + dims[l])) pos[l] -= dims[l];
        parts->m_hPos[k * 4 + l] = pos[l];
    }
}
The problem is that when I perform the rotation in this way, with the angles parameter set to (0.0,0.0,1.0) it works (sort of), but the object gets deformed, like so:
http://www.cs.nott.ac.uk/~jqs/squashed.png
which is not what I want. Can anyone tell me what I'm doing wrong and how I can rotate the entire object around the axis without deforming it?
Thanks
nodlams
Where you do your rotation in rotateByMatrix, you compute the new pos[0], but then feed that into the next line for computing the new pos[1]. So the pos[0] you're using to compute the new pos[1] is not the input, but the output. Store the result in a temp var and return that.
Coord<float> tmp;
tmp[0] = (cosf(zrot) * pos[0]) - (sinf(zrot) * pos[1]);
tmp[1] = (sinf(zrot) * pos[0]) + (cosf(zrot) * pos[1]);
return tmp;
Also, pass the pos into the function as a const reference.
const Coord<float> &pos
Plus you should compute the sin and cos values once, store them in temporaries and reuse them.
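Putting those three suggestions together, a minimal sketch of the corrected function (Coord as in the question; only the Z rotation is filled in, matching the original):

#include <cmath>

Coord<float> Polymers::rotateByMatrix(const Coord<float> &pos,
                                      const Coord<float> &angles)
{
    // compute sin/cos once and reuse them
    const float cz = cosf(angles[2]);
    const float sz = sinf(angles[2]);

    // write into a copy so the new [1] component is computed from the
    // *input* [0], not the freshly rotated value
    Coord<float> out = pos;
    out[0] = (cz * pos[0]) - (sz * pos[1]);
    out[1] = (sz * pos[0]) + (cz * pos[1]);
    return out;
}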