Q3D: Setting aspect ratio has no effect on an image - OpenGL

Again trying to make something NOT out of the box, I'm facing the following issue:
I was trying to change the aspect ratio of a camera like this:
Qt3DRender::QCamera *camera = view.camera();
camera->lens()->setPerspectiveProjection(45.0f, 100.0f/*16.0f/9.0f*/, 0.1f, 1000.0f);
camera->lens()->setProjectionType(Qt3DRender::QCameraLens::ProjectionType::PerspectiveProjection);
camera->setPosition(QVector3D(0, 0, 40.0f));
camera->setViewCenter(QVector3D(0, 0, 0));
But apparently it has no effect whatsoever. It's as if this parameter were totally irrelevant to image formation.
Could someone explain how I can set the aspect ratio, or any other method to set "uniform mat4 mvp;" if I have to use a shader anyway?
Is there any concise reference on how CameraLens passes its values to shaders? The code is VERY big and intricate, and I'd rather not go too deep into it.

After studying the source code of Qt3D I've found that the aspect ratio is dynamically changed by the engine. The only way to stop this is to use the setProjectionMatrix function, which stops the automatic update of the projection matrix and leaves everything for you to handle yourself (including culling calculations and forming a correct projection matrix).
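For illustration, a minimal sketch of that approach, assuming the same view/camera objects as above (the aspect value is a placeholder, not from the original code):
Qt3DRender::QCamera *camera = view.camera();
// Build the projection yourself; once set explicitly, the engine no longer
// recomputes it (e.g. on window resize), so culling is also up to you.
const float aspect = 16.0f / 9.0f; // placeholder value
QMatrix4x4 proj;
proj.perspective(45.0f, aspect, 0.1f, 1000.0f);
camera->lens()->setProjectionMatrix(proj);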

Related

OpenGL Fog does not appear

I wanted to create a coordinate system with some lines in it, and wanted to display one window with depth-fog.
My "fog-code" looks like this:
glEnable(GL_FOG);
float fogColor[4] = {0.8, 0.8, 0.8, 1};
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY,0.8);
glHint(GL_FOG_HINT, GL_NICEST);
glFogf(GL_FOG_START,0.1);
glFogf(GL_FOG_END,200);
It is placed in my main function (I don't know yet if this could cause any problems, but just to be sure), right after the init() call and before my display-function call.
Update:
The problem was actually really simple: I worked solely on the GL_MODELVIEW matrix, thinking there was no real difference from the GL_PROJECTION matrix. According to this article and the post from Reto Koradi, there is a pretty significant difference. I highly recommend reading the full article to better understand the system behind OpenGL (it definitely helped me a lot).
The corrected code (for my init()-call) would then be:
void init2()
{
    glClearColor(1.0, 1.0, 1.0, 0.0);         // set background color to white
    glMatrixMode(GL_PROJECTION);              // switch to projection mode
    glLoadIdentity();                         // initialize the projection matrix
    glOrtho(-300, 300, -300, 300, -800, 800); // map coordinates to the viewport
    gluLookAt(2, 2, 10, 0, 0, -0.5, 0, 1, 0);
    glMatrixMode(GL_MODELVIEW);               // now switch to modelview mode
}
The fog equation is evaluated based on the fog coordinate c (quote from the OpenGL 2.1 spec):
Otherwise, if the fog source is FRAGMENT DEPTH, then c is the eye-coordinate distance from the eye, (0,0,0,1) in eye coordinates, to the fragment center.
FRAGMENT_DEPTH is the default, so this applies in your case. Eye coordinate refers to the coordinates after the model-view transformation has been applied. So it's the distance from the origin after applying the model-view transform. The spec also allows implementations to use the absolute value of the z-coordinate instead of the distance from the origin.
One small observation on your code: GL_FOG_DENSITY does not matter if the mode is GL_LINEAR. It is only used for the exponential modes.
For GL_LINEAR mode, the behavior is pretty much as you would expect. The original fragment color is linearly blended with the fog color within the range GL_FOG_START to GL_FOG_END. So everything smaller than GL_FOG_START has the original fragment color, everything after GL_FOG_END has the fog color, and the values in between are linear interpolations between the two, with gradually more fog color and less original fragment color.
To get good results, you'll have to play with the GL_FOG_START and GL_FOG_END values. If you don't get as much fog as desired, you can start by reducing the value of GL_FOG_END.
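For reference, a small sketch of the linear blend the fixed-function pipeline computes (my own restatement of the spec formula; the names are placeholders):
// c is the eye-space distance of the fragment (see the spec quote above)
float fogFactor(float c, float fogStart, float fogEnd)
{
    float f = (fogEnd - c) / (fogEnd - fogStart);
    return f < 0.0f ? 0.0f : (f > 1.0f ? 1.0f : f);  // clamp to [0, 1]
}
// final color = f * fragmentColor + (1 - f) * fogColor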
I peeked at the linked code, and noticed one problem: You're specifying the projection matrix while you're in GL_MODELVIEW matrix mode. You need to be careful that you specify the matrices in the correct matrix mode, which is GL_PROJECTION for the projection matrix.
Mixing up the matrix modes does not have an adverse effect on the resulting vertex coordinates, since both the model-view and projection matrices are applied to the vertices. So for very simple use, you can sometimes get away with using the wrong mode. But once lighting comes into play, it is critical to use the correct matrix mode, since lighting calculations are done after the model-view transformation has been applied, but before the projection transformation.
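As a minimal sketch of that separation (adapted from the init code above, not verbatim from the answer), keep the projection in GL_PROJECTION and the camera transform in GL_MODELVIEW, since lighting and fog are evaluated in eye space:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-300, 300, -300, 300, -800, 800); // projection only
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(2, 2, 10, 0, 0, -0.5, 0, 1, 0); // camera transform belongs here
// ... model transforms and drawing follow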
And yes, as others already pointed out, a lot of this actually gets simpler if you write your own shaders. The fact that I quoted the OpenGL 2.1 spec is probably a hint that this functionality is old and obsolete.
Like so many things that OpenGL 1.1 did, fog is calculated per vertex. So if you have a long line with only two points, fog is calculated only for the end points and the color is then interpolated linearly in between. Depending on how your line is aligned and which shading mode you use, this may result in no apparent fogging.
Two solutions:
Subdivide the lines into a couple of dozen line segments, so as to sample the fog at more than two points (see the sketch after this list).
or
Use a fragment shader instead and calculate the fog term therein. This is what I suggest doing.
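A minimal sketch of the first option, assuming legacy immediate-mode drawing (the helper function and its parameters are made up for illustration):
// Draw the line from a to b as n short segments so the fixed-function
// fog is sampled at many vertices instead of only the two end points.
void drawSubdividedLine(const float a[3], const float b[3], int n)
{
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= n; ++i) {
        float t = (float)i / (float)n;
        glVertex3f(a[0] + t * (b[0] - a[0]),
                   a[1] + t * (b[1] - a[1]),
                   a[2] + t * (b[2] - a[2]));
    }
    glEnd();
}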

Render a 3D model with the same size regardless of camera position

I've got a particular model that acts as controls in the viewer. The user can click on different parts of it to perform transformations on another model in the viewer (like controls/handles in applications like Unity or Blender).
We'd like the controls to remain the same size regardless of how zoomed in or out the camera is. I've tried scaling their size based on the distance between the object and the camera, but it isn't quite right. Is there a standard way of accomplishing something like this?
The controls are rendered using the fixed pipeline, but we've got other components using the programmable pipeline.
The easy answer is "use the programmable pipeline" because it's not that difficult to write
if (normalObject) {
    gl_Position = projection * view * model * vertex;
} else {
    gl_Position = specialMVPMatrix * vertex;
}
Whereas you'll spend a lot more code trying to get this to work in the fixed-function pipeline, and plenty more CPU cycles rendering it.
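For illustration, one way the specialMVPMatrix uniform could be supplied from the application side, assuming GLM and a screen-space orthographic projection (the uniform and variable names are assumptions, not from the answer):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// A fixed screen-space projection: the controls are specified in pixels,
// so they keep the same size no matter where the camera is.
glm::mat4 specialMVP = glm::ortho(0.0f, float(screenWidth),
                                  float(screenHeight), 0.0f,
                                  -1.0f, 1.0f);
GLint loc = glGetUniformLocation(program, "specialMVPMatrix");
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(specialMVP));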
With the fixed pipeline, the easiest way to do this is to simply not apply any transformations when you render the controls:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// draw controls
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
The glPushMatrix()/glPopMatrix() calls will make sure that the previous matrices are restored at the end of this code fragment.
With no transformation at all, the range of coordinates mapped to the window will be [-1.0 .. 1.0] in both coordinate directions. If you need anything else, you can apply the necessary transformations before you start drawing the controls.
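For example, a sketch of drawing the controls in pixel coordinates instead of the default [-1 .. 1] range (windowWidth/windowHeight are placeholders; this is an assumption about what those extra transformations might look like):
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, windowWidth, windowHeight, 0, -1, 1); // pixel coordinates, y down
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// draw controls at pixel positions
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);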

Adapt existing code for OpenGL stereoscopic rendering?

I'm trying to implement stereoscopic 3D in OpenGL using a side-by-side technique.
I've read this article, which explains in great detail how to set up the camera for left and right views. It uses a camera model and sets up the left and right views using gluLookAt.
However, in my case I want to adapt existing code that already sets up the projection.
See the following example where "existingcode" represents the code that I cannot make changes to.
//Render left view
// setUpCamera set the gl projection and model matrix
existingcode.setUpCamera()
..
here I want to somehow modify the current gl projection matrix for the left view
..
existingcode.renderScene()
//.. then render right view
Can it be done, perhaps by calling glGetMatrix and modify it somehow?
What you have to do is employ some lens shifting.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
stereo_offset = eye * near * parallax_factor / convergence_distance;
glFrustum(stereo_offset + left, stereo_offset + right, bottom, top, near, far);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(eye * parallax_factor * convergence_distance, 0, 0);
/* now use gluLookAt here as if this were a normal 3D rendering */
parallax_factor should be no larger than the ratio half_eye_distance / screen_width, so the larger the screen gets, the smaller parallax_factor becomes. A good value of parallax_factor for a computer display is 0.05; for large screens (think cinema) it's something like 0.01.
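A small sketch of how the snippet above could be driven for both views, assuming a side-by-side layout (windowWidth/windowHeight are placeholders, not from the answer):
// eye = -1 renders the left view, eye = +1 the right view
for (int eye = -1; eye <= 1; eye += 2) {
    // left half of the window for the left eye, right half for the right eye
    glViewport(eye < 0 ? 0 : windowWidth / 2, 0, windowWidth / 2, windowHeight);
    // ... apply the projection/modelview shift from the snippet above
    //     with this value of eye, then render the scene
    existingcode.renderScene();
}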
This projection-shifting technique is exactly what I used for re-rendering Elephants Dream in stereoscopic 3D, although, since Blender's offline renderer doesn't use OpenGL, the code looks a little bit different: http://www.youtube.com/watch?v=L-tmaMR1p3w

Per-model local rotation breaks OpenGL lighting

I'm having trouble with OpenGL lighting. My issue is this: when the object has no rotation, the lighting is fine; otherwise the lighting works, but rotates with the object instead of staying fixed with regard to the scene.
Sounds simple, right? The OpenGL FAQ has some simple advice on this: coordinates passed to glLightfv(GL_LIGHT0, GL_POSITION...) are multiplied by the current MODELVIEW matrix. So I must be calling this at the wrong place... except I'm not. I've copied the MODELVIEW matrix into a variable to debug, and it stays the same regardless of how my object is rotated. So it has to be something else, but I'm at a loss as to what.
I draw the model using glDrawArrays, and position my model within the world using glMultMatrix on a matrix built from a rotation quaternion and a translation. All of this takes place within glPushMatrix/glPopMatrix, so it shouldn't have any side effect on the light.
A cut down version of my rendering process looks like this:
//Setup our camera
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
cameraMatrix = translate(Vector3D(Pos.mX,Pos.mY,Pos.mZ)) * camRot.QuatToMatrix();
glMultMatrixf((GLfloat*)&cameraMatrix);
//Position the light now
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
GLfloat lp[4] = {lightPos.mX, lightPos.mY, lightPos.mZ, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION,(GLfloat*) lp);
//Loop, doing this for each model: (mRot, mPos, and mi are model member variables)
matrix = translate(Vector3D(mPos.mX,mPos.mY,mPos.mZ)) * mRot.QuatToMatrix();
glPushMatrix();
glMultMatrixf((GLfloat*)&matrix);
glBindBuffer(GL_ARRAY_BUFFER, mi->mVertexBufHandle); //Bind the model VBO.
glDrawArrays(GL_TRIANGLES, 0, mi->verts); //Draw the object
glPopMatrix();
I thought the normals might be messed up, but when I render them out they look fine. Is there anything else that might affect OpenGL lighting? The FAQ mentions:
If your light source is part of a light fixture, you also may need to specify a modeling transform, so the light position is in the same location as the surrounding fixture geometry.
I took this to mean that you'd need to translate the light into the scene, kind of a no-brainer... but does it mean something else?
It might be minor, but in this line:
glLightfv(GL_LIGHT0, GL_POSITION,(GLfloat*) &lp);
remove the & (address-of operator). lp will already give you the array address.
This was a while back, but I did eventually figure out the problem. The issue I thought I was having was that the light's position got translated wrong. Picture this: the light was located at 0,0,0, but then I translated and rotated my mesh. If this had been the case, I'd have had to do as suggested in the other answers and make certain I was placing my glLightfv calls in the right place.
The actual problem turned out to be much simpler, yet much more insidious. It turns out I wasn't setting glNormalPointer correctly, so it was being fed garbage data. While debugging, I'd render the normals to check that they were correct, but when doing so I'd manually draw them based on the positions I'd calculated. A recommendation to future debuggers: when drawing your debug normal rays, make sure you feed the debug function /the same data/ as OpenGL gets. In my case, this would mean pointing my normal-ray draw function's glVertexPointer to the same place as the model's glNormalPointer.
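For illustration, a sketch of a correct pointer setup, assuming an interleaved VBO with three position floats followed by three normal floats per vertex (the layout and variable names are assumptions, not the poster's actual data):
glBindBuffer(GL_ARRAY_BUFFER, vertexBufHandle);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
// stride is the full vertex size; the offsets select position vs. normal
glVertexPointer(3, GL_FLOAT, 6 * sizeof(GLfloat), (void*)0);
glNormalPointer(GL_FLOAT, 6 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLES, 0, vertexCount);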
Basically an OpenGL light behaves like a vertex. So in your code it's transformed by cameraMatrix, while your meshes are transformed by cameraMatrix * matrix. Now, it looks like both cameraMatrix and matrix contain mrot.QuatToMatrix(), that is: there is a single rotation matrix there, and the light gets rotated once, while the objects get rotated twice. It doesn't look right to me, unless your actual code is different; the mRot matrix you use for each mesh should be its own, e.g. mRot[meshIndex].

Setting up a camera in OpenGL

I've been working on a game engine for a while. I started out with 2D graphics using just SDL, but I've slowly been moving towards 3D capabilities using OpenGL. Most of the documentation I've seen about "how to get things done" uses GLUT, which I am not using.
The question is: how do I create a "camera" in OpenGL that I can move around a 3D environment and that properly displays 3D models as well as sprites (for example, a sprite that has a fixed position and rotation)? What functions should I be concerned with in order to set up a camera in OpenGL, and in what order should they be called?
Here is some background information leading up to why I want an actual camera.
To draw a simple sprite, I create a GL texture from an SDL surface and draw it onto the screen at the coordinates (SpriteX-CameraX, SpriteY-CameraY). This works fine, but when moving towards actual 3D models it doesn't work quite right. The camera's location is a custom vector class (i.e. not using the standard libraries for it) with X, Y, Z integer components.
I have a 3D cube made up of triangles, and by itself I can draw it and rotate it, and I can actually move the cube around (although in an awkward way) by passing in the camera location when I draw the model and using the components of the location vector to calculate the model's position. Problems become evident with this approach when I go to rotate the model, though. The origin of the model isn't the model itself but seems to be the origin of the screen. Some googling tells me I need to save the location of the model, rotate it about the origin, then restore the model to its original location.
Instead of passing in the location of my camera and calculating where things should be drawn in the viewport by calculating new vertices, I figured I would create an OpenGL "camera" to do this for me, so all I would need to do is pass the coordinates of my Camera object into the OpenGL camera and it would translate the view automatically. This task seems to be extremely easy if you use GLUT, but I'm not sure how to set up a camera using just OpenGL.
EDIT #1 (after some comments):
Following some suggestions, here is the update method that gets called throughout my program. It's been updated to create perspective and view matrices. All drawing happens before this is called, and a similar set of methods is executed when OpenGL executes (minus the buffer swap). The x, y, z coordinates come straight from an instance of Camera and its location vector. If the camera was at (256, 32, 0), then 256, 32, and 0 would be passed into the Update method. Currently, z is set to 0 as there is no way to change that value at the moment. The 3D model being drawn is a set of vertices/triangles plus normals at location X=320, Y=240, Z=-128. When the program is run, this is what is drawn in FILL mode, then in LINE mode, and another one in FILL after movement, when I move the camera a little bit to the right. It looks like my normals may be the cause, but I think it has more to do with me missing something extremely important or not completely understanding what the NEAR and FAR parameters of glFrustum actually do.
Before I implemented these changes, I was using glOrtho and the cube rendered correctly. Now if I switch back to glOrtho, one face renders (Green) and the rotation is quite weird - probably due to the translation. The cube has 6 different colors, one for each side. Red, Blue, Green, Cyan, White and Purple.
int VideoWindow::Update(double x, double y, double z)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(0.0f, GetWidth(), GetHeight(), 0.0f, 32.0f, 192.0f);
    glMatrixMode(GL_MODELVIEW);
    SDL_GL_SwapBuffers();
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glRotatef(0, 1.0f, 0.0f, 0.0f);
    glRotatef(0, 0.0f, 1.0f, 0.0f);
    glRotatef(0, 0.0f, 0.0f, 1.0f);
    glTranslated(-x, -y, 0);
    return 0;
}
EDIT FINAL:
The problem turned out to be an issue with the Near and Far arguments of glFrustum and the Z value of glTranslated. While changing the values has fixed it, I'll probably have to learn more about the relationship between the two functions.
You need a view matrix, and a projection matrix. You can do it one of two ways:
Load the matrix yourself, using glMatrixMode() and glLoadMatrixf(), after you use your own library to calculate the matrices.
Use combinations of glMatrixMode(GL_MODELVIEW) and glTranslate() / glRotate() to create your view matrix, and glMatrixMode(GL_PROJECTION) with glFrustum() to create your projection matrix. Remember: your view matrix is the negative translation of your camera's position (as it's where you should move the world to, relative to the camera origin), together with any rotations applied (pitch/yaw).
Hope this helps, if I had more time I'd write you a proper example!
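A minimal sketch of the second approach, assuming a camera described by a position plus yaw/pitch angles (all names are placeholders, not from the answer):
// Projection matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (double)windowWidth / (double)windowHeight, 0.1, 1000.0);

// View matrix: the inverse of the camera transform, i.e. rotate by the
// negative angles and translate by the negative position
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(-cameraPitch, 1.0f, 0.0f, 0.0f);
glRotatef(-cameraYaw, 0.0f, 1.0f, 0.0f);
glTranslatef(-cameraX, -cameraY, -cameraZ);
// ... per-object model transforms and drawing go here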
You have to do it using the matrix stack, just as for an object hierarchy,
but the camera is inside that hierarchy, so you have to put the inverse of the camera transform on the stack before drawing the objects, since OpenGL only uses the matrix that maps from world space to camera space.
If you have not already, looking at the following project may explain in detail what "tsalter" wrote in his post.
Camera from OGL SDK (CodeColony)
Also look at the Red Book for an explanation of viewing and how the model-view and projection matrices help you create a camera. It starts with a good comparison between an actual camera and the corresponding parts of the OpenGL API. Chapter 3 - Viewing.