glViewport offset and ortho projection - c++

In all the tutorials I found about building a projection matrix from the viewport size, the bottom-left corner of the viewport was assumed to be (0,0).
Now I want to draw to different parts of the screen, and for that purpose I want to switch viewports accordingly:
glViewport(0, 0, windowWidth/2, windowHeight/2);                            // left bottom
glViewport(0, windowHeight/2, windowWidth/2, windowHeight/2);               // left top
glViewport(windowWidth/2, 0, windowWidth/2, windowHeight/2);                // right bottom
glViewport(windowWidth/2, windowHeight/2, windowWidth/2, windowHeight/2);   // right top
Now I have a problem with defining my projection matrix. Without any (x,y) offset I was using this code to calculate my ortho projection matrix:
if (m_WindowWidth > m_WindowHeight)
{
    auto viewportAspectRatio = (float)m_WindowWidth / (float)m_WindowHeight;
    m_ProjectionMatrix.m_fLeft = (-1.0f) * m_fWindowSize * viewportAspectRatio;
    m_ProjectionMatrix.m_fRight = m_fWindowSize * viewportAspectRatio;
    m_ProjectionMatrix.m_fBottom = (-1.0f) * m_fWindowSize;
    m_ProjectionMatrix.m_fTop = m_fWindowSize;
    m_ProjectionMatrix.m_fNear = -(10.0f) * m_fWindowSize;
    m_ProjectionMatrix.m_fFar = (10.0f) * m_fWindowSize;
    m_fMoveSpeed = static_cast<GLfloat>(m_fWindowSize * 2 / static_cast<float>(m_WindowHeight));
}
else
{
    auto viewportAspectRatio = (float)m_WindowHeight / (float)m_WindowWidth;
    m_ProjectionMatrix.m_fLeft = (-1.0f) * m_fWindowSize;
    m_ProjectionMatrix.m_fRight = m_fWindowSize;
    m_ProjectionMatrix.m_fBottom = (-1.0f) * m_fWindowSize * viewportAspectRatio;
    m_ProjectionMatrix.m_fTop = m_fWindowSize * viewportAspectRatio;
    m_ProjectionMatrix.m_fNear = -(10.0f) * m_fWindowSize;
    m_ProjectionMatrix.m_fFar = (10.0f) * m_fWindowSize;
    m_fMoveSpeed = static_cast<GLfloat>(m_fWindowSize * 2 / static_cast<float>(m_WindowWidth));
}
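For reference, a minimal sketch of how these left/right/bottom/top/near/far bounds would feed an orthographic matrix, assuming GLM is used (glm::ortho here is illustrative, not part of the code above):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the orthographic projection from the bounds computed above.
glm::mat4 proj = glm::ortho(m_ProjectionMatrix.m_fLeft,   m_ProjectionMatrix.m_fRight,
                            m_ProjectionMatrix.m_fBottom, m_ProjectionMatrix.m_fTop,
                            m_ProjectionMatrix.m_fNear,   m_ProjectionMatrix.m_fFar);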
And this works fine UNTIL I add any (x,y) offset to my viewport. The effect is the following when using glViewport(0, m_WindowHeight/2, m_WindowWidth/2, m_WindowHeight/2):
And with glViewport(0, 0, m_WindowWidth/2, m_WindowHeight/2):
How can I make it work?

First, aspect ratio is always width/height.
Then I think what you are looking for is:
m_ProjectionMatrix.m_fLeft = x;
m_ProjectionMatrix.m_fRight = x + m_WindowWidth;
m_ProjectionMatrix.m_fBottom = y;
m_ProjectionMatrix.m_fTop = y + m_WindowHeight;
m_ProjectionMatrix.m_fNear = -(10.0f)*m_fWindowSize;
m_ProjectionMatrix.m_fFar = (10.0f)*m_fWindowSize;
Have a look at this wiki page

I found solution to this problem and posted it on the gamedev forum:
https://gamedev.stackexchange.com/questions/122284/glviewport-offset-and-ortho-projection/122289#122289
In short:
I was drawing my whole scene to a framebuffer and then rendering the generated texture to the screen. This caused unwanted accumulation of glViewport() transformations.
The solution is to set glViewport() to the origin before rendering to the framebuffer, and then set the offset when rendering the texture to the screen.
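A minimal sketch of that two-pass setup, assuming an FBO handle and helper draw calls that are not in the original code (sceneFbo, sceneTexture, drawScene and drawFullscreenTexturedQuad are placeholders):
// Pass 1: render the scene into the framebuffer with the viewport at the origin,
// so no offset gets baked into the framebuffer contents.
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);           // sceneFbo: hypothetical FBO handle
glViewport(0, 0, windowWidth/2, windowHeight/2);       // origin-based viewport
drawScene();                                           // hypothetical scene-drawing call

// Pass 2: render the generated texture to the default framebuffer,
// applying the quadrant offset only here.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, windowHeight/2, windowWidth/2, windowHeight/2);   // left-top quadrant
drawFullscreenTexturedQuad(sceneTexture);              // hypothetical textured-quad draw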

Related

Objects beyond the far clipping plane are rendered in perspective view

I see objects beyond the far clipping plane in perspective projection, and I don't think this is how it's supposed to work, so can someone explain why I see objects beyond the far clipping plane, such as the grid in this example?
The orthographic projection works fine, by the way.
I cleared all the shapes from the demo and added two grids by changing the following code in Frank Luna's Shapes demo:
void ShapesApp::BuildRenderItems()
{
    // First grid: identity world transform
    auto gridRitem = std::make_unique<RenderItem>();
    gridRitem->World = MathHelper::Identity4x4();
    gridRitem->ObjCBIndex = 0;
    gridRitem->Geo = mGeometries["shapeGeo"].get();
    gridRitem->PrimitiveType = D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST;
    gridRitem->IndexCount = gridRitem->Geo->DrawArgs["grid"].IndexCount;
    gridRitem->StartIndexLocation = gridRitem->Geo->DrawArgs["grid"].StartIndexLocation;
    gridRitem->BaseVertexLocation = gridRitem->Geo->DrawArgs["grid"].BaseVertexLocation;
    mAllRitems.push_back(std::move(gridRitem));

    // Second grid: translated along -Y and rotated about X by ~90 degrees (1.5708 rad)
    gridRitem = std::make_unique<RenderItem>();
    XMStoreFloat4x4(&gridRitem->World, XMMatrixTranslation(0, -1002, 0) * XMMatrixRotationRollPitchYaw(1.5708, 0, 0));
    gridRitem->ObjCBIndex = 1;
    gridRitem->Geo = mGeometries["shapeGeo"].get();
    gridRitem->PrimitiveType = D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST;
    gridRitem->IndexCount = gridRitem->Geo->DrawArgs["grid"].IndexCount;
    gridRitem->StartIndexLocation = gridRitem->Geo->DrawArgs["grid"].StartIndexLocation;
    gridRitem->BaseVertexLocation = gridRitem->Geo->DrawArgs["grid"].BaseVertexLocation;
    mAllRitems.push_back(std::move(gridRitem));

    // Third grid: same translation, rotated about both X and Y
    gridRitem = std::make_unique<RenderItem>();
    XMStoreFloat4x4(&gridRitem->World, XMMatrixTranslation(0, -1002, 0) * XMMatrixRotationRollPitchYaw(1.5708, 1.5708, 0));
    gridRitem->ObjCBIndex = 2;
    gridRitem->Geo = mGeometries["shapeGeo"].get();
    gridRitem->PrimitiveType = D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST;
    gridRitem->IndexCount = gridRitem->Geo->DrawArgs["grid"].IndexCount;
    gridRitem->StartIndexLocation = gridRitem->Geo->DrawArgs["grid"].StartIndexLocation;
    gridRitem->BaseVertexLocation = gridRitem->Geo->DrawArgs["grid"].BaseVertexLocation;
    mAllRitems.push_back(std::move(gridRitem));
and changed the grid size:
GeometryGenerator::MeshData grid = geoGen.CreateGrid(200.0f, 200.0f, 60, 40);
and the projection matrix in OnResize():
XMMATRIX P = XMMatrixPerspectiveFovLH(0.25f * MathHelper::Pi, AspectRatio(), .1f, 1000.f);
XMStoreFloat4x4(&mProj, P);
Now I can still see the grid even though it's beyond the far plane, and even if the far plane is 900 the grid still appears at the edge of the screen as I rotate. So I need to reduce the far plane or move the grid further away. As I keep changing the value of the far plane, I can see that a shape acts like a brush: it hides everything beyond it, but as the camera rotates and the grid is no longer behind it, the grid reappears.
Here's what I meant by the brush:
I think you're thinking of the maximum view distance as being a consistent 900 units away from the camera/eye position. If that were the case, it wouldn't be a clipping plane at all; it would be a curve, a sector of a sphere.
In reality the view frustum is a truncated pyramid made up of 6 planes. When the far plane is set to 900, the view distance for the pixel at the centre of the view is 900, but the view distance at the corners is much higher (how much higher depends on the FOV; you could work it out with a bit of trig, as in the sketch below).
So as you turn your camera left and right, an object approximately 900 units away from the camera will come in and out of view as it intersects the far plane.
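A quick back-of-the-envelope check of that trig, assuming the vertical FOV of 0.25*pi from the question's OnResize code and a 16:9 aspect ratio (the aspect ratio is an assumption):
#include <cmath>
#include <cstdio>

int main() {
    const float fovY   = 0.25f * 3.14159265f;  // vertical FOV from the question (45 degrees)
    const float aspect = 16.0f / 9.0f;         // assumed aspect ratio
    const float farZ   = 900.0f;               // far-plane distance along the view axis

    // Half-extents of the far-plane rectangle.
    const float halfH = farZ * std::tan(fovY * 0.5f);
    const float halfW = halfH * aspect;

    // Euclidean distance from the eye to a far-plane corner.
    const float cornerDist = std::sqrt(farZ * farZ + halfW * halfW + halfH * halfH);
    std::printf("centre: %.0f  corner: %.0f\n", farZ, cornerDist);  // roughly 900 vs 1178
    return 0;
}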

Orient object along surface normal

When the user clicks on a surface I would like to place an object at this position and orient it along the surface normal (i.e. perpendicular to the surface).
After the user performs a click, I read the depth of three neighboring pixels from the buffer, unproject the pixels from screen coordinates to object space and then compute the surface normal from these points in object space:
glReadPixels(mouseX, mouseY, ..., &depthCenter);
pointCenter = gluUnProject(mouseX, mouseY, depthCenter, ...);
glReadPixels(mouseX, mouseY - 1, ..., &depthUp);
pointUp = gluUnProject(mouseX, mouseY - 1, depthUp, ...);
glReadPixels(mouseX - 1, mouseY, ..., &depthLeft);
pointLeft = gluUnProject(mouseX - 1, mouseY, depthLeft, ...);
centerUpVec = norm( pointCenter - pointUp );
centerLeftVec = norm( pointCenter - pointLeft );
normalVec = norm( centerUpVec.cross(centerLeftVec) );
I know that computing the normal just from three pixels is problematic (e.g. at edges or if the three points have vastly different depth), but for my initial test on a flat surface this must suffice.
Finally, in order to orient the object along the computed normal vector I create a rotation matrix from the normal and the up vector:
upVec = vec(0.0f, 1.0f, 0.0f);
xAxis = norm( upVec.cross(normalVec) );
yAxis = norm( normalVec.cross(xAxis) );
// set orientation of model matrix
modelMat(0,0) = xAxis(0);
modelMat(1,0) = yAxis(0);
modelMat(2,0) = normalVec(0);
modelMat(0,1) = xAxis(1);
modelMat(1,1) = yAxis(1);
modelMat(2,1) = normalVec(1);
modelMat(0,2) = xAxis(2);
modelMat(1,2) = yAxis(2);
modelMat(2,2) = normalVec(2);
// set position of model matrix by using the previously computed center-point
modelMat(0,3) = pointCenter(0);
modelMat(1,3) = pointCenter(1);
modelMat(2,3) = pointCenter(2);
For testing purposes I'm placing an object on a flat surface after each click. This works well in most cases, when my camera is facing downwards along the up vector.
However, once I rotate my camera the placed objects are oriented arbitrarily and I can't figure out why!
OK, I just found a small, stupid bug in my code that was unrelated to the actual problem. The approach stated in the question above therefore works correctly.
To avoid some pitfalls, one could of course just use a math library such as Eigen to compute the rotation between the up vector and the surface normal:
upVec = Eigen::Vector3f(0.0f, 1.0f, 0.0f);
Eigen::Quaternion<float> rotationQuat;
rotationQuat.setFromTwoVectors(upVec, normalVec);
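A minimal sketch of writing that quaternion into the model matrix with Eigen (modelMat, normalVec, and pointCenter are the names from the question, assumed here to be Eigen types):
#include <Eigen/Geometry>

// Build the orientation from the up vector and the computed surface normal,
// then write it into the upper-left 3x3 block of the column-major model matrix.
Eigen::Vector3f upVec(0.0f, 1.0f, 0.0f);
Eigen::Quaternionf rotationQuat;
rotationQuat.setFromTwoVectors(upVec, normalVec);

Eigen::Matrix4f modelMat = Eigen::Matrix4f::Identity();
modelMat.block<3, 3>(0, 0) = rotationQuat.toRotationMatrix();
modelMat.block<3, 1>(0, 3) = pointCenter;  // translation: the unprojected click point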

Why do I have to divide by Z?

I needed to implement 'choosing an object' in a 3D environment. Instead of going with a robust, accurate approach such as raycasting, I decided to take the easy way out. First, I transform the object's world position into screen coordinates:
glm::mat4 modelView, projection, accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
glm::vec4 transformed = accum * glm::vec4(objectLocation, 1); // mat4 * vec4 yields a vec4
This is followed by some trivial code to transform from the OpenGL coordinate system to normal window coordinates, and a simple distance-from-the-mouse check. BUT that doesn't quite work. In order to translate from world space to screen space, I need one more calculation added to the end of the function shown above:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
I don't understand why I have to do this. I was under the impression that once you multiplied your vertex by the accumulated modelViewProjection matrix, you had your screen coordinates. But I have to divide by Z to get it to work properly. In my OpenGL 3.3 shaders, I never have to divide by Z. Why is this?
EDIT: The code to transform from the OpenGL coordinate system to screen coordinates is this:
int screenX = (int)((trans.x + 1.f)*640.f); //640 = 1280/2
int screenY = (int)((-trans.y + 1.f)*360.f); //360 = 720/2
And then I test if the mouse is near that point by doing:
float length = glm::distance(glm::vec2(screenX, screenY), glm::vec2(mouseX, mouseY));
if(length < 50) {//you can guess the rest
EDIT #2
This method is called upon a mouse click event:
glm::mat4 modelView;
glm::mat4 projection;
glm::mat4 accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;

float nearestDistance = 1000.f;
gameObject* nearest = NULL;
for (uint i = 0; i < objects.size(); i++) {
    gameObject* o = objects[i];
    o->selected = false;
    glm::vec4 trans = accum * glm::vec4(o->location, 1);
    trans.x /= trans.z;
    trans.y /= trans.z;
    int clipX = (int)((trans.x + 1.f) * 640.f);
    int clipY = (int)((-trans.y + 1.f) * 360.f);
    float length = glm::distance(glm::vec2(clipX, clipY), glm::vec2(mouseX, mouseY));
    if (length < 50) {
        nearestDistance = trans.z;
        nearest = o;
    }
}
if (nearest) {
    nearest->selected = true;
}
mouseRightPressed = true;
The code as a whole is incomplete, but the parts relevant to my question works fine. The 'objects' vector contains only one element for my tests, so the loop doesn't get in the way at all.
I've figured it out. As Mr David Lively pointed out,
Typically in this case you'd divide by .w instead of .z to get something useful, though.
My .w values were very close to my .z values, so in my code I changed the statements:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
to:
transformed.x /= transformed.w;
transformed.y /= transformed.w;
And it still worked just as before.
https://stackoverflow.com/a/10354368/2159051 explains that the division by w is normally done later in the pipeline. Obviously, because my code simply multiplies the matrices together, there is no 'later pipeline'. I was just getting lucky, in a sense: because my .z value was so close to my .w value, it gave the illusion that it was working.
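For reference, a minimal sketch of the full world-to-window projection with the divide by w made explicit (the 1280x720 window size is from the question; the function itself is just illustrative):
#include <glm/glm.hpp>

// Projects a world-space point to window (pixel) coordinates,
// performing the perspective divide by w explicitly.
glm::vec2 worldToWindow(const glm::vec3& worldPos,
                        const glm::mat4& modelView,
                        const glm::mat4& projection)
{
    glm::vec4 clip = projection * modelView * glm::vec4(worldPos, 1.0f);
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;          // normalized device coordinates, [-1,1]
    float screenX  = (ndc.x + 1.0f) * 0.5f * 1280.0f;   // window width
    float screenY  = (-ndc.y + 1.0f) * 0.5f * 720.0f;   // window height, y flipped
    return glm::vec2(screenX, screenY);
}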
The divide-by-Z step effectively applies the perspective transformation. Without it, you'd have an iso view. Imagine two view-space vertices: A(-1,0,1) and B(-1,0,100).
Without the divide by Z step, the screen coordinates are equal (-1,0).
With the divide-by-Z, they are different: A(-1,0) and B(-0.01,0). So, things farther away from the view-space origin (camera) are smaller in screen space than things that are closer. IE, perspective.
That said: if your projection matrix (and matrix multiplication code) is correct, this should already be happening, since the projection matrix contains the 1/Z scaling components that do this. So, some questions:
Are you really using the output of a projection transform, or just the view transform?
Are you doing this in a pixel/fragment shader? Screen coordinates there are normalized (-1,-1) to (+1,+1), not pixel coordinates, with the origin at the middle of the viewport. Typically in this case you'd divide by .w instead of .z to get something useful, though.
If you're doing this on the CPU, how are you getting this information back to the host?
I guess it is because you are going from three dimensions to two dimensions, so you are projecting the 3D world onto 2D coordinates.
A point P = (X, Y, Z) in 3D becomes q = (x, y) in 2D, where x = X/Z and y = Y/Z.
So a circle in 3D will not be a circle in 2D.
You can check this video out:
https://www.youtube.com/watch?v=fVJeJMWZcq8
I hope I understand your question correctly.

OpenGL Frustum visibility test with sphere : Far plane not working

I am writing a program to test sphere-frustum intersection and determine the sphere's visibility. I extract the frustum's clipping planes into camera space and check for intersection. It works perfectly for all planes except the far plane, and I cannot figure out why. I keep pulling the camera back, but my program still claims the sphere is visible, despite it having been clipped long ago. If I go far enough it eventually determines that it is not visible, but only some distance after it has exited the frustum.
I am using a unit sphere at the origin for the test. I am using the OpenGL Mathematics (GLM) library for vector and matrix data structures and for its built in math functions. Here is my code for the visibility function:
void visibilityTest(const struct MVP *mvp) {
    static bool visLastTime = true;
    bool visThisTime;
    const glm::vec4 modelCenter_worldSpace = glm::vec4(0, 0, 0, 1); // at origin
    const int negRadius = -1; // unit sphere

    // Get camera-space model center
    glm::vec4 modelCenter_cameraSpace = mvp->view * mvp->model * modelCenter_worldSpace;

    //---------Get Frustum Planes--------
    // Extract projection matrix row vectors.
    // NOTE: since glm stores its matrices in column-major order, we extract columns.
    glm::vec4 rowVec[4];
    for (int i = 0; i < 4; i++) {
        rowVec[i] = glm::vec4(mvp->projection[0][i], mvp->projection[1][i], mvp->projection[2][i], mvp->projection[3][i]);
    }

    // Determine frustum clipping planes (in camera space).
    glm::vec4 plane[6];
    // NOTE: recall that indices start at zero, so M4 + M3 will be rowVec[3] + rowVec[2].
    plane[0] = rowVec[3] + rowVec[2]; // near
    plane[1] = rowVec[3] - rowVec[2]; // far
    plane[2] = rowVec[3] + rowVec[0]; // left
    plane[3] = rowVec[3] - rowVec[0]; // right
    plane[4] = rowVec[3] + rowVec[1]; // bottom
    plane[5] = rowVec[3] - rowVec[1]; // top

    // Extend the view frustum by 1 in all directions: near/far along local z,
    // left/right along local x, bottom/top along local y.
    // -Ax' - By' - Cz' + D = D'
    plane[0][3] -= plane[0][2]; // <x',y',z'> = <0,0,1>
    plane[1][3] += plane[1][2]; // <0,0,-1>
    plane[2][3] += plane[2][0]; // <-1,0,0>
    plane[3][3] -= plane[3][0]; // <1,0,0>
    plane[4][3] += plane[4][1]; // <0,-1,0>
    plane[5][3] -= plane[5][1]; // <0,1,0>

    //----------Determine Frustum-Sphere intersection--------
    // If any of the dot products between the model center and a frustum plane is
    // less than -r, then the object falls outside the view frustum.
    visThisTime = true;
    for (int i = 0; i < 6; i++) {
        if (glm::dot(plane[i], modelCenter_cameraSpace) < static_cast<float>(negRadius)) {
            visThisTime = false;
        }
    }

    if (visThisTime != visLastTime) {
        printf("Sphere is %s visible\n", (visThisTime) ? "" : "NOT ");
        visLastTime = visThisTime;
    }
}
The polygons appear to be clipped by the far plane properly, so it seems that the projection matrix is set up correctly, but the calculations make it seem like the plane is much too far out. Perhaps I am not calculating something correctly, or I have a fundamental misunderstanding of the calculations that are required?
The calculations that deal specifically with the far clipping plane are:
plane[1] = rowVec[3] - rowVec[2]; //far
and
plane[1][3] += plane[1][2]; // <0,0,-1>
I'm setting the plane equal to the 4th row (or in this case column) of the projection matrix minus the 3rd row. Then I'm extending the far plane one unit further out (due to the sphere's radius of one; D' = D - C(-1)).
I've looked over this code many times and I can't see why it shouldn't work. Any help is appreciated.
EDIT:
I can't answer my own question as I don't have the rep, so I will post it here.
The problem was that I wasn't normalizing the plane equations. This didn't seem to make much of a difference for any of the clip planes besides the far one, so I hadn't even considered it (which didn't make it any less wrong). After normalization, everything works properly.
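A minimal sketch of that normalization step, dividing each plane by the length of its normal before the distance test (plane is the array from the code above):
// Normalize each plane so that dot(plane, point) yields a true signed distance.
// The (A, B, C) normal lives in the xyz components; D sits in w.
for (int i = 0; i < 6; i++) {
    float len = glm::length(glm::vec3(plane[i]));
    plane[i] /= len;
}
// Now the test "dot(plane[i], modelCenter_cameraSpace) < -radius" compares like with like.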

CPU Ray Casting

I'm attempting to ray cast an octree on the CPU (I know the GPU is better, but I'm unable to get that working at this time; I believe my octree texture is created incorrectly).
I understand what needs to be done, and so far I cast a ray for each pixel and check if that ray intersects any nodes within the octree. If it does and the node is not a leaf node, I check whether the ray intersects its child nodes. I keep doing this until a leaf node is hit. Once a leaf node is hit, I get the colour for that node.
My question is: what is the best way to draw this to the screen? Currently I'm storing the colours in an array and drawing them with glDrawPixels, but this does not produce correct results; there are gaps in the renderings, and the projection is wrong (I am using glRasterPos3fv).
Edit: Here is some code so far; it needs cleaning up, sorry. I have omitted the octree ray-casting code as I'm not sure it's needed, but I will post it if it would help :)
void Draw(Vector cameraPosition, Vector cameraLookAt)
{
    // Calculate the right vector
    Vector rightVector = Cross(cameraLookAt, Vector(0, 1, 0));

    // Set up the screen plane starting X & Y positions
    float screenPlaneX, screenPlaneY;
    screenPlaneX = cameraPosition.x() - ((WINDOWWIDTH / 2) * rightVector.x());
    screenPlaneY = cameraPosition.y() + ((float)WINDOWHEIGHT / 2);

    float deltaX, deltaY;
    deltaX = 1;
    deltaY = 1;

    int currentX, currentY, index = 0;
    Vector origin, direction;
    origin = cameraPosition;
    vector<Vector4<int>> colours(WINDOWWIDTH * WINDOWHEIGHT);

    currentY = screenPlaneY;
    Vector4<int> colour;

    for (int y = 0; y < WINDOWHEIGHT; y++)
    {
        // Set the current pixel along x to be the left-most pixel on the image plane
        currentX = screenPlaneX;

        for (int x = 0; x < WINDOWWIDTH; x++)
        {
            // Default colour is black
            colour = Vector4<int>(0, 0, 0, 0);

            // Cast the ray into the current pixel. Set the length of the ray to be 200
            direction = Vector(currentX, currentY, cameraPosition.z() + (cameraLookAt.z() * 200)) - origin;
            direction.normalize();

            // Cast the ray against the octree and store the resulting colour in the array
            colours[index] = RayCast(origin, direction, rootNode, colour);

            // Move to the next pixel in the plane
            currentX += deltaX;

            // Increase the colour array index position
            index++;
        }
        // Move to the next row in the image plane
        currentY -= deltaY;
    }

    // Set the colours for the array
    SetFinalImage(colours);

    // Set the raster position to (0, 0, 0) and pass the array of colours to glDrawPixels
    GLfloat *v = new GLfloat[3];
    v[0] = 0.0f;
    v[1] = 0.0f;
    v[2] = 0.0f;
    glRasterPos3fv(v);
    glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);
}
void SetFinalImage(vector<Vector4<int>> colours)
{
    // The array is a 2D array, with the first dimension set to the size of the
    // window (WINDOW_WIDTH * WINDOW_HEIGHT). The second dimension stores the
    // rgba values for each pixel.
    for (int i = 0; i < colours.size(); i++)
    {
        finalImage[i][0] = (float)colours[i].r;
        finalImage[i][1] = (float)colours[i].g;
        finalImage[i][2] = (float)colours[i].b;
        finalImage[i][3] = (float)colours[i].a;
    }
}
Your pixel-drawing code looks okay, but I'm not sure that your ray-casting routines are correct. When I wrote my raytracer, I had a bug that caused horizontal artifacts on the screen, and it was related to rounding errors in the render code.
I would try this: create a result set of vector<Vector4<int>> where the colours are all red, then render that to the screen. If it looks correct, then the OpenGL routines are correct. Divide and conquer is always a good debugging method.
Here's a question, though: why are you using Vector4<int> when you later write the image as GL_FLOAT? I'm not seeing any int-to-float conversion here...
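A minimal sketch of that all-red test, writing floats in the [0, 1] range directly so the int-to-float question doesn't arise (WINDOWWIDTH and WINDOWHEIGHT are the constants from the question; the flat buffer layout is an assumption):
#include <vector>

// Fill a flat RGBA float buffer with opaque red and draw it,
// bypassing the raycaster entirely to isolate the OpenGL side.
std::vector<GLfloat> testImage(WINDOWWIDTH * WINDOWHEIGHT * 4);
for (size_t i = 0; i < testImage.size(); i += 4) {
    testImage[i + 0] = 1.0f;  // R
    testImage[i + 1] = 0.0f;  // G
    testImage[i + 2] = 0.0f;  // B
    testImage[i + 3] = 1.0f;  // A
}
glRasterPos3f(0.0f, 0.0f, 0.0f);
glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, testImage.data());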
Your problem may be in your 3D-DDA (octree raycaster), and specifically with adaptive termination. It results from the quantisation of rays into grid-cell form, which causes certain octree nodes that lie slightly behind foreground nodes (i.e. at a higher z depth), and thus should be partly visible and partly occluded, not to be rendered at all. The smaller your voxels are, the less noticeable this will be.
There is a very easy way to test whether this is the problem: comment out the adaptive termination line(s) in your 3D-DDA and see if you still get the same gap artifacts.