This code draws a sine wave using function(). In the following panning/zooming code, I am trying to understand how fWorldPerScreenWidthPixel is used to draw the line segments.
WorldToScreen(fWorldLeft - fWorldPerScreenWidthPixel, -function((fWorldLeft - fWorldPerScreenWidthPixel) - 5.0f) + 5.0f, opx, opy);
It sets opx and opy, but why is fWorldPerScreenWidthPixel subtracted from fWorldLeft?
It seems strange to want to start to the left of fWorldLeft before the for loop that draws the line. fWorldLeft starts at -25.
I have included the necessary code to explain:
// Draw Chart
float fWorldPerScreenWidthPixel = (fWorldRight - fWorldLeft) / ScreenWidth();
float fWorldPerScreenHeightPixel = (fWorldBottom - fWorldTop) / ScreenHeight();
int px, py, opx = 0, opy = 0;
WorldToScreen(fWorldLeft - fWorldPerScreenWidthPixel, -function((fWorldLeft - fWorldPerScreenWidthPixel) - 5.0f) + 5.0f, opx, opy);
for (float x = fWorldLeft; x < fWorldRight; x += fWorldPerScreenWidthPixel)
{
    float y = -function(x - 5.0f) + 5.0f;
    WorldToScreen(x, y, px, py);
    DrawLine(opx, opy, px, py, PIXEL_SOLID, FG_GREEN);
    opx = px;
    opy = py;
}
Call to set fWorldLeft:
// Clip
float fWorldLeft, fWorldTop, fWorldRight, fWorldBottom;
ScreenToWorld(0, 0, fWorldLeft, fWorldTop);
This sets fWorldLeft:
// Convert coordinates from Screen Space --> World Space
void ScreenToWorld(int nScreenX, int nScreenY, float &fWorldX, float &fWorldY)
{
    fWorldX = ((float)nScreenX / fScaleX) + fOffsetX;
    fWorldY = ((float)nScreenY / fScaleY) + fOffsetY;
}
and while I'm at it, World to Screen:
// Convert coordinates from World Space --> Screen Space
void WorldToScreen(float fWorldX, float fWorldY, int &nScreenX, int &nScreenY)
{
    nScreenX = (int)((fWorldX - fOffsetX) * fScaleX);
    nScreenY = (int)((fWorldY - fOffsetY) * fScaleY);
}
Let's break it down:
WorldToScreen(
    fWorldLeft - fWorldPerScreenWidthPixel,
    -function((fWorldLeft - fWorldPerScreenWidthPixel) - 5.0f) + 5.0f,
    opx, opy);
A clearer way to write that would be:
x = fWorldLeft - fWorldPerScreenWidthPixel;
WorldToScreen(
    x,
    -function(x - 5.0f) + 5.0f,
    opx, opy);
This transforms the position (x, f(x)) from world space to screen space and stores the result in (opx, opy). Let's see how these two variables are used:
for(...)
{
    ...
    DrawLine(opx, opy, px, py, PIXEL_SOLID, FG_GREEN);
    ...
}
This draws a line from (opx, opy) to (px, py), where (px, py) is the current point on the function and (opx, opy) is the old point. That is exactly what the initialization above is for: it sets (opx, opy) to a point one pixel outside the screen, which ensures there are no gaps at the border.
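If it helps to see the pattern in isolation, here is a minimal, compilable sketch of that priming idea. The scale/offset values are made up for illustration, and printf stands in for DrawLine:
#include <cmath>
#include <cstdio>

// Stand-ins for the pan/zoom state in the question (values are arbitrary).
float fScaleX = 10.0f, fScaleY = 10.0f;
float fOffsetX = -25.0f, fOffsetY = -25.0f;

float function(float x) { return sinf(x); }

void WorldToScreen(float wx, float wy, int &sx, int &sy)
{
    sx = (int)((wx - fOffsetX) * fScaleX);
    sy = (int)((wy - fOffsetY) * fScaleY);
}

int main()
{
    const int screenWidth = 256;
    float fWorldLeft = fOffsetX; // ScreenToWorld(0, ...) yields fOffsetX
    float fWorldRight = fWorldLeft + screenWidth / fScaleX;
    float fWorldPerScreenWidthPixel = (fWorldRight - fWorldLeft) / screenWidth;

    int px, py, opx, opy;
    // Prime the "old" point one world-pixel LEFT of the visible region, so
    // the first segment comes in from off-screen and reaches the border.
    float x0 = fWorldLeft - fWorldPerScreenWidthPixel;
    WorldToScreen(x0, -function(x0 - 5.0f) + 5.0f, opx, opy);

    for (float x = fWorldLeft; x < fWorldRight; x += fWorldPerScreenWidthPixel)
    {
        WorldToScreen(x, -function(x - 5.0f) + 5.0f, px, py);
        printf("segment (%d,%d) -> (%d,%d)\n", opx, opy, px, py);
        opx = px;
        opy = py;
    }
    return 0;
}
Running this, the very first segment starts at screen x = -1, i.e. one pixel off the left edge.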
I'm trying to do terrain following, and I get a negative camera position in the xz plane. This gives me an out-of-bounds exception, because the row or the column becomes negative. How would I transform the cell of the grid to the origin correctly, given negative camera coordinates?
Here are the two functions:
int cGrid::getHeightmapEntry(int row, int col)
{
    return m_heightmap[row * 300 + col];
}
float cGrid::getHeight(float x, float z, float _width, float _depth, int _cellSpacing)
{
    // Translate on xz-plane by the transformation that takes
    // the terrain START point to the origin.
    x = ((float)_width / 2.0f) + x;
    z = ((float)_depth / 2.0f) - z;
    // Scale down by the transformation that makes the
    // cellspacing equal to one. This is given by
    // 1 / cellspacing since cellspacing * 1 / cellspacing = 1.
    x /= (float)_cellSpacing;
    z /= (float)_cellSpacing;
    // From now on, we will interpret our positive z-axis as
    // going in the 'down' direction, rather than the 'up' direction.
    // This allows us to extract the row and column simply by 'flooring'
    // x and z:
    float col = ::floorf(x);
    float row = ::floorf(z);
    if (row < 0 || col < 0)
    {
        row = 0;
    }
    // get the heights of the quad we're in:
    //
    //  A   B
    //  *---*
    //  | / |
    //  *---*
    //  C   D
    float A = getHeightmapEntry(row, col);
    float B = getHeightmapEntry(row, col + 1);
    float C = getHeightmapEntry(row + 1, col);
    float D = getHeightmapEntry(row + 1, col + 1);
    //
    // Find the triangle we are in:
    //
    // Translate by the transformation that takes the upper-left
    // corner of the cell we are in to the origin. Recall that our
    // cellspacing was normalized to 1. Thus we have a unit square
    // at the origin of our +x -> 'right' and +z -> 'down' system.
    float dx = x - col;
    float dz = z - row;
    // Note the computations of u and v below are unnecessary; we really
    // only need the height, but we compute the entire vector to emphasize
    // the book's discussion.
    float height = 0.0f;
    if (dz < 1.0f - dx) // upper triangle ABC
    {
        float uy = B - A; // A->B
        float vy = C - A; // A->C
        // Linearly interpolate on each vector. The height is the vertex
        // height the vectors u and v originate from {A}, plus the heights
        // found by interpolating on each vector u and v.
        height = A + Lerp(0.0f, uy, dx) + Lerp(0.0f, vy, dz);
    }
    else // lower triangle DCB
    {
        float uy = C - D; // D->C
        float vy = B - D; // D->B
        // Linearly interpolate on each vector. The height is the vertex
        // height the vectors u and v originate from {D}, plus the heights
        // found by interpolating on each vector u and v.
        height = D + Lerp(0.0f, uy, 1.0f - dx) + Lerp(0.0f, vy, 1.0f - dz);
    }
    return height;
}
float height = m_Grid.getHeight(position.x, position.y, 49 * 300, 49 * 300, 6.1224489795918367f);
if (height != 0)
{
    position.y = height + 10.0f;
}
m_Camera.SetPosition(position.x, position.y, position.z);
bool cGrid::readRawFile(std::string fileName, int m, int n)
{
    // A height for each vertex
    std::vector<BYTE> in(m*n);
    std::ifstream inFile(fileName.c_str(), std::ios_base::binary);
    if (!inFile)
        return false;
    inFile.read(
        (char*)&in[0],  // buffer
        in.size());     // number of bytes to read into buffer
    inFile.close();
    // copy BYTE vector to int vector
    m_heightmap.resize(n*m);
    for (int i = 0; i < in.size(); i++)
        m_heightmap[i] = (float)((in[i])/255)*50.0f;
    return true;
}
m_Grid.readRawFile("castlehm257.raw", 50, 50);
I infer that you’re storing a 50 by 50 matrix inside a 300 by 300 matrix, to represent a grid of 49 by 49 cells. I also infer that m_Grid is an object of type cGrid. Your code appears to contain the following errors:
Argument(2) of call m_Grid.getHeight is not a z value.
Argument(3) of call m_Grid.getHeight is inconsistent with argument(5).
Argument(4) of call m_Grid.getHeight is inconsistent with argument(5).
Implicit cast of literal float to int in argument(5) of call m_Grid.getHeight - the value will be truncated.
Try changing your function call to this:
float height = m_Grid.getHeight(position.x, position.z, 49 * cellspacing, 49 * cellspacing, cellspacing);
-- where cellspacing is as defined in your diagram.
Also, try changing parameter(5) of cGrid::getHeight from int _cellSpacing to float _cellSpacing.
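If it helps, a minimal sketch of what the corrected call could look like (cellspacing here is inferred from the literal 6.1224489795918367f in your original call; substitute your real value):
const float cellspacing = 6.1224489795918367f;   // world units per cell (assumed)
const float terrainSize = 49.0f * cellspacing;   // 49 x 49 cells per side

// pass z (not y), consistent sizes, and a float cell spacing
float height = m_Grid.getHeight(position.x, position.z,
                                terrainSize, terrainSize,
                                cellspacing);
As for the negative-coordinate crash itself: note that your guard only resets row, so a negative col still indexes out of bounds; you would want to clamp both row and col (and keep row + 1 and col + 1 in range) before calling getHeightmapEntry.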
(I have edited this answer a couple of times as my understanding of your code has evolved.)
Alright, so I'm trying to click and drag to rotate around an object using C++ and OpenGL. The way I have it is to use gluLookAt centered at the origin, getting the coordinates for the eye from the parametric equations for a sphere (eyex = 2 * cos(theta) * sin(phi); eyey = 2 * sin(theta) * sin(phi); eyez = 2 * cos(phi);). This mostly works: I can click and rotate horizontally, but when I try to rotate vertically it makes tight circles instead of rotating vertically. To get the up vector, I take the cross product of the camera position and a vector at a 90 degree angle to it along the x-z plane.
The code I have is as follows:
double dotProduct(double v1[], double v2[]) {
    return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2];
}

void mouseDown(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
        xpos = x;
        ypos = y;
    }
}

void mouseMovement(int x, int y) {
    diffx = x - xpos;
    diffy = y - ypos;
    xpos = x;
    ypos = y;
}

void camera(void) {
    theta += 2*PI * (-diffy/glutGet(GLUT_SCREEN_HEIGHT));
    phi += PI * (-diffx/glutGet(GLUT_WINDOW_WIDTH));
    eyex = 2 * cos(theta) * sin(phi);
    eyey = 2 * sin(theta) * sin(phi);
    eyez = 2 * cos(phi);
    double rightv[3], rightt[3], eyes[3];
    rightv[0] = 2 * cos(theta + 2/PI) * sin(phi);
    rightv[1] = 0;
    rightv[2] = 2 * cos(phi);
    rightt[0] = rightv[0];
    rightt[1] = rightv[1];
    rightt[2] = rightv[2];
    rightv[0] = rightv[0] / sqrt(dotProduct(rightt, rightt));
    rightv[1] = rightv[1] / sqrt(dotProduct(rightt, rightt));
    rightv[2] = rightv[2] / sqrt(dotProduct(rightt, rightt));
    eyes[0] = eyex;
    eyes[1] = eyey;
    eyes[2] = eyez;
    upx = (eyey/sqrt(dotProduct(eyes,eyes)))*rightv[2] + (eyez/sqrt(dotProduct(eyes,eyes)))*rightv[1];
    upy = (eyez/sqrt(dotProduct(eyes,eyes)))*rightv[0] + (eyex/sqrt(dotProduct(eyes,eyes)))*rightv[2];
    upz = (eyex/sqrt(dotProduct(eyes,eyes)))*rightv[1] + (eyey/sqrt(dotProduct(eyes,eyes)))*rightv[0];
    diffx = 0;
    diffy = 0;
}
I am somewhat basing things off of this, but it doesn't work, so I tried my own way instead.
This isn't exactly a solution for the way you are doing it, but I did something similar the other day. I did it by using DX's D3DXMatrixRotationAxis and D3DXVec3TransformCoord. The math behind the D3DXMatrixRotationAxis method can be found at the bottom of the following page: D3DXMatrixRotationAxis Math. Use this if you are unable to use DX. It will allow you to rotate around any axis you pass in. In my object code I keep track of a direction and an up vector, and I simply rotate each of these around the axis of movement (in your case, the yaw and pitch).
To implement a fixed-distance camera like this, I would compute the distance between the current camera location and the origin (if this never changes, you can compute it once), move the camera to the origin, rotate it the amount you need, then move it back out by that distance along its new direction and up values.
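If you can't pull in D3DX, here is a minimal sketch of the same axis-angle rotation (Rodrigues' formula, which is the math D3DXMatrixRotationAxis encodes); the Vec3 type and the names are mine:
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3 &a, const Vec3 &b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Rotate v around a unit-length axis k by angle (radians):
// v' = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t))
Vec3 rotateAroundAxis(const Vec3 &v, const Vec3 &k, float angle)
{
    float c = cosf(angle), s = sinf(angle);
    Vec3 kxv = cross(k, v);
    float kdv = dot(k, v);
    return { v.x*c + kxv.x*s + k.x*kdv*(1.0f - c),
             v.y*c + kxv.y*s + k.y*kdv*(1.0f - c),
             v.z*c + kxv.z*s + k.z*kdv*(1.0f - c) };
}
You would keep a direction and an up vector as members and pass each of them through rotateAroundAxis with the yaw and pitch axes every time the mouse moves.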
I'm no mathematician, but I need to draw a filled in circle.
My approach was to use someone else's math to get all the points on the circumference of a circle, and turn them into a triangle fan.
I need the vertices in a vertex array, no immediate mode.
The circle does appear. However, when I try to overlay circles, strange things happen: they appear for only a second and then disappear, and when I move my mouse out of the window a triangle sticks out from nowhere.
Here's the class:
class circle
{
    //every coordinate will have an X and Y
private:
    GLfloat *_vertices;
    static const float DEG2RAD = 3.14159/180;
    GLfloat _scalex, _scaley, _scalez;
    int _cachearraysize;
public:
    circle(float scalex, float scaley, float scalez, float radius, int numdegrees)
    {
        //360 degrees, 2 per coordinate, 2 coordinates for center and end of triangle fan
        _cachearraysize = (numdegrees * 2) + 4;
        _vertices = new GLfloat[_cachearraysize];
        for(int x = 2; x < (_cachearraysize - 2); x = x + 2)
        {
            float degreeinRadians = x*DEG2RAD;
            _vertices[x] = cos(degreeinRadians)*radius;
            _vertices[x + 1] = sin(degreeinRadians)*radius;
        }
        //get the X as X of 0 and X of 180 degrees, subtract to get diameter, divide
        //by 2 for radius and add back to X of 180
        _vertices[0] = ((_vertices[2] - _vertices[362])/2) + _vertices[362];
        //same idea for Y
        _vertices[1] = ((_vertices[183] - _vertices[543])/2) + _vertices[543];
        //close off the triangle fan at the same point as start
        _vertices[_cachearraysize - 1] = _vertices[0];
        _vertices[_cachearraysize] = _vertices[1];
        _scalex = scalex;
        _scaley = scaley;
        _scalez = scalez;
    }
    ~circle()
    {
        delete[] _vertices;
    }
    void draw()
    {
        glScalef(_scalex, _scaley, _scalez);
        glVertexPointer(2, GL_FLOAT, 0, _vertices);
        glDrawArrays(GL_TRIANGLE_FAN, 0, _cachearraysize);
    }
};
That's some ugly code, I'd say - lots of magic numbers et cetera.
Try something like:
struct Point {
    Point(float x, float y) : x(x), y(y) {}
    float x, y;
};

std::vector<Point> points;
const float step = 0.1;
const float radius = 2;

points.push_back(Point(0,0));
// iterate over the angle array
for (float a = 0; a < 2*M_PI; a += step) {
    points.push_back(Point(cos(a)*radius, sin(a)*radius));
}
// duplicate the first vertex after the centre
points.push_back(points.at(1));

// rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, &points[0]);
glDrawArrays(GL_TRIANGLE_FAN, 0, points.size());
It's up to you to rewrite this as a class, as you prefer. The math behind is really simple, don't fear to try and understand it.
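For example, a minimal (untested) class wrapper over the same idea, reusing the Point struct above -- note M_PI may require _USE_MATH_DEFINES on MSVC, and the GL header name varies by platform:
#include <vector>
#include <cmath>
#include <GL/gl.h>

class Circle
{
public:
    Circle(float radius, float step = 0.1f)
    {
        points.push_back(Point(0.0f, 0.0f));            // fan centre
        for (float a = 0; a < 2 * M_PI; a += step)
            points.push_back(Point(cos(a) * radius, sin(a) * radius));
        points.push_back(points.at(1));                 // close the fan
    }
    void draw() const
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, &points[0]);
        glDrawArrays(GL_TRIANGLE_FAN, 0, (GLsizei)points.size());
    }
private:
    std::vector<Point> points;
};
Because the vertex count is simply points.size(), there is no index bookkeeping to get wrong, which is exactly the sort of thing that produces stray triangles.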
I am currently trying to work on getting my virtual trackball to work from any angle. When I am looking at it from the z axis, it seems to work fine. I hold my mouse down, and move the mouse up... the rotation will move accordingly.
Now, if I change my viewing angle / position of my camera and try to move my mouse. The rotation will occur as if I were looking from the z axis. I cannot come up with a good way to get this to work.
Here is the code:
void Renderer::mouseMoveEvent(QMouseEvent *e)
{
    // Get coordinates
    int x = e->x();
    int y = e->y();
    if (isLeftButtonPressed)
    {
        // project current screen coordinates onto hemisphere
        Point sphere = projScreenCoord(x, y);
        // find axis by taking cross product of current and previous hemi points
        axis = Point::cross(previousPoint, sphere);
        // angle can be found from magnitude of cross product
        double length = sqrt( axis.x * axis.x + axis.y * axis.y + axis.z * axis.z );
        // Normalize
        axis = axis / length;
        double lengthPrev = sqrt( previousPoint.x * previousPoint.x + previousPoint.y * previousPoint.y + previousPoint.z * previousPoint.z );
        double lengthCur = sqrt( sphere.x * sphere.x + sphere.y * sphere.y + sphere.z * sphere.z );
        angle = asin(length / (lengthPrev * lengthCur));
        // Convert into Degrees
        angle = angle * 180 / M_PI;
        // 'add' this rotation matrix to our 'total' rotation matrix
        glPushMatrix(); // save the old matrix so we don't mess anything up
        glLoadIdentity();
        glRotatef(angle, axis[0], axis[1], axis[2]); // our newly calculated rotation
        glMultMatrixf(rotmatrix); // our previous rotation matrix
        glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*) rotmatrix); // we've let OpenGL do our matrix mult for us, now get this result & store it
        glPopMatrix(); // return modelview to its old value
    }
}

// Project screen coordinates onto a unit hemisphere
Point Renderer::projScreenCoord(int x, int y)
{
    // find projected x & y coordinates
    double xSphere = ((double)x/width)*2.0 - 1.0;
    double ySphere = (1 - ((double)y/height)) * 2.0 - 1.0;
    double temp = 1.0 - xSphere*xSphere - ySphere*ySphere;
    // Do a check so you don't take the sqrt of a negative number
    double zSphere;
    if (temp < 0) { zSphere = 0.0; }
    else { zSphere = sqrt(temp); }
    Point sphere(xSphere, ySphere, zSphere);
    // return the point on the sphere
    return sphere;
}
I am still fairly new at this. Sorry for the trouble and thanks for all the help =)
The usual way involves quaternions, e.g., as in the sample code originally from SGI.
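For reference, a rough sketch of the quaternion bookkeeping (my own minimal version of the idea in the SGI trackball code, not a drop-in replacement):
#include <cmath>

struct Quat { float w, x, y, z; };

// Hamilton product: the rotation b is applied first, then a.
Quat quatMul(const Quat &a, const Quat &b)
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Build a quaternion from a unit axis and an angle in radians.
Quat quatFromAxisAngle(float ax, float ay, float az, float angle)
{
    float s = sinf(angle * 0.5f);
    return { cosf(angle * 0.5f), ax * s, ay * s, az * s };
}

// Convert to a column-major 4x4 matrix suitable for glMultMatrixf.
void quatToMatrix(const Quat &q, float m[16])
{
    float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
    float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
    float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;
    m[0] = 1 - 2*(yy + zz); m[4] = 2*(xy - wz);     m[8]  = 2*(xz + wy);     m[12] = 0;
    m[1] = 2*(xy + wz);     m[5] = 1 - 2*(xx + zz); m[9]  = 2*(yz - wx);     m[13] = 0;
    m[2] = 2*(xz - wy);     m[6] = 2*(yz + wx);     m[10] = 1 - 2*(xx + yy); m[14] = 0;
    m[3] = 0;               m[7] = 0;               m[11] = 0;               m[15] = 1;
}
Per drag step you build a small quaternion from the axis and angle you already compute, multiply it into an accumulated quaternion, and convert that to a matrix for glMultMatrixf. Renormalizing the quaternion occasionally stops the drift you otherwise accumulate by reading the matrix back with glGetFloatv every frame.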
Here is what I'm trying to do: shoot a bullet out of the center of the screen. I have an x and a y rotation angle. The problem is that the Y movement (which is modified by the rotation on x) is really not working as intended. Here is what I have:
float yrotrad, xrotrad;
yrotrad = (Camera.roty / 180.0f * 3.141592654f);
xrotrad = (Camera.rotx / 180.0f * 3.141592654f);
Vertex3f Pos;
// get camera position
pls.x = Camera.x;
pls.y = Camera.y;
pls.z = Camera.z;
for(float i = 0; i < 60; i++)
{
    //add the rotation vector
    pls.x += float(sin(yrotrad));
    pls.z -= float(cos(yrotrad));
    pls.y += float(sin(twopi - xrotrad));
    //translate camera coords to cube coords
    Pos.x = ceil(pls.x / 3);
    Pos.y = ceil(pls.y / 3);
    Pos.z = ceil(pls.z / 3);
    if(!CubeIsEmpty(Pos.x, Pos.y, Pos.z)) //remove first cube that made contact
    {
        delete GetCube(Pos.x, Pos.y, Pos.z);
        SetCube(0, Pos.x, Pos.y, Pos.z);
        return;
    }
}
This is almost identical to how I move the player: I add the directional vector to the camera, then find which cube the player is on. If I remove the pls.y += float(sin(twopi - xrotrad)); line, I clearly see that on the X and Z everything points as it should. When I add it back, it almost works, but not quite: rendering out spheres along the trajectory shows that the further up or down I look, the more offset it becomes, rather than staying aligned to the camera's center. What am I doing wrong?
What basically happens is difficult to explain: I'd expect the bullet at time 0 to always be at the center of the screen, but it behaves oddly. If I'm looking straight at the horizon, up to about +-20 degrees upward it's fine, but beyond that it stops following.
I set up my matrix like this:
void CCubeGame::SetCameraMatrix()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(Camera.rotx, 1, 0, 0);
    glRotatef(Camera.roty, 0, 1, 0);
    glRotatef(Camera.rotz, 0, 0, 1);
    glTranslatef(-Camera.x, -Camera.y, -Camera.z);
}
and change the angle like this:
void CCubeGame::MouseMove(int x, int y)
{
    if(!isTrapped)
        return;
    int diffx = x - lastMouse.x;
    int diffy = y - lastMouse.y;
    lastMouse.x = x;
    lastMouse.y = y;
    Camera.rotx += (float)diffy * 0.2;
    Camera.roty += (float)diffx * 0.2;
    if(Camera.rotx > 90)
    {
        Camera.rotx = 90;
    }
    if(Camera.rotx < -90)
    {
        Camera.rotx = -90;
    }
    if(isTrapped)
        if (fabs(ScreenDimensions.x/2 - x) > 1 || fabs(ScreenDimensions.y/2 - y) > 1) {
            resetPointer();
        }
}
You need to scale X and Z by cos(xrotrad) (in other words, multiply them by cos(xrotrad)).
Imagine you're facing straight down the Z axis but looking straight up. You don't want the bullet to travel along the Z axis at all; this is why you need to scale it. (It's basically the same thing you're doing between X and Z, but now applied to the XZ vector and Y.)
pls.x += float(sin(yrotrad) * cos(xrotrad));
pls.z -= float(cos(yrotrad) * cos(xrotrad));
pls.y += float(sin(twopi - xrotrad));
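(Sanity check: with those cos(xrotrad) factors the step vector has squared length sin^2(yrotrad)*cos^2(xrotrad) + cos^2(yrotrad)*cos^2(xrotrad) + sin^2(xrotrad) = cos^2(xrotrad) + sin^2(xrotrad) = 1, so the bullet advances one unit per step no matter how far up or down you look.)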