Defining and Manipulating a 3D Object with Specific Rotations - opengl

I have rendered a very rough model of a molecule consisting of 7 helices, and would like to ask whether there is any way to allow the helices themselves to tilt (rotate) so as to interact with one another. For clarity, I have inserted an image of my program output (rendered with an orthographic projection, so it appears as the projection of a 3D helix onto a 2D plane).
I have included the code for rendering a single helix (all the others are the same).
Would it be useful to store the geometry of my objects in vertex arrays instead of rendering them each time separately for the 7 different colors? (Each helix consists of 36,000 vertices, and I am concerned that the arrays might get large enough to cause serious performance issues.)
I understand the matrix stack is the data structure for performing multiple consecutive transformations on particular objects, but I'm not sure how exactly to set things up so that an entire helix can tilt. (glRotatef does not actually tilt the helices for some reason; see the note in the code below.)
/* HELIX RENDERING */
glMatrixMode(GL_MODELVIEW);        /* select the matrix before loading it */
glLoadIdentity();
glPushMatrix();
glTranslatef(0.0, 100.0, -5.0);    /* move into position */
glRotatef(90.0, 0.0, 0.0, 0.0);    /* BUG: the axis (0,0,0) is degenerate, so no
                                      rotation is applied; use e.g. (0,0,1) */
glBegin(GL_LINE_STRIP);
glColor3f(1.0, 1.0, 0.0);          /* set the color before the first vertex */
for (theta = 0.0; theta <= 360.0; theta += 0.01) {
    x = r * cosf(theta);           /* note: cosf/sinf take radians */
    y = r * sinf(theta);
    z = c * theta;
    glVertex3f(x, y, z);
}
glEnd();
glPopMatrix();

Drawing geometry from vertex arrays always makes sense. And in your case, the overhead caused by those 36k * (5 floating-point operations + 2 function calls) per helix will seriously affect your performance. Using vertex arrays can easily give you a 100× performance gain, simply because you're not recreating the data on each and every call.
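A minimal sketch of that approach (the helper names buildHelix/drawHelix are made up, and it assumes the same r, c, and 0.01 step as the loop above, plus <math.h> for cosf/sinf): fill a plain vertex array once at startup, then draw it every frame with glDrawArrays.

#define HELIX_VERTS 36000
static GLfloat helix[HELIX_VERTS * 3];   /* x,y,z per vertex, filled once */

void buildHelix(float r, float c) {
    for (int i = 0; i < HELIX_VERTS; ++i) {
        float theta = i * 0.01f;         /* same parameter step as the loop */
        helix[3*i + 0] = r * cosf(theta);
        helix[3*i + 1] = r * sinf(theta);
        helix[3*i + 2] = c * theta;
    }
}

void drawHelix(void) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, helix);
    glColor3f(1.0f, 1.0f, 0.0f);         /* one color for the whole strip */
    glDrawArrays(GL_LINE_STRIP, 0, HELIX_VERTS);
    glDisableClientState(GL_VERTEX_ARRAY);
}

All seven helices can then share the same array and differ only in their glTranslatef/glRotatef and glColor3f state.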
You may also be interested in not using lines, since you can't shade those in any useful way. I'd render the helices from basic building blocks created by extruding ellipses along the helical path: one block for the helix body and two end caps. The chirality is easily changed by mirroring along one axis. With modern OpenGL implementations you can use instancing on the helix-body element to further increase performance.
If you want to flex the helix, I'd do this using skeletal skinning.
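As a very rough illustration of the skinning idea (the two-bone setup and all names here are made up, not from the question): give each vertex a weight along the helix axis and blend an unbent and a bent transform per vertex on the CPU before uploading.

/* Sketch: two 'bones': bone 0 is the identity, bone 1 rotates about the
   x-axis by 'bend' radians. Each vertex blends the two by its weight w. */
void skinVertex(float v[3], float zMin, float zMax, float bend) {
    float w = (v[2] - zMin) / (zMax - zMin);   /* 0 at one end, 1 at the other */
    float y = v[1], z = v[2];
    float ry = y * cosf(bend) - z * sinf(bend);
    float rz = y * sinf(bend) + z * cosf(bend);
    v[1] = (1.0f - w) * y + w * ry;            /* linear blend of bone outputs */
    v[2] = (1.0f - w) * z + w * rz;
}

Applied to every helix vertex each frame, this bends the far end of the helix smoothly while keeping the near end fixed.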

Related

OpenGL Fog does not appear

I wanted to create a coordinate system with some lines in it, and wanted to display one window with depth-fog.
My "fog-code" looks like this:
glEnable(GL_FOG);
float fogColor[4] = {0.8, 0.8, 0.8, 1};
glFogi(GL_FOG_MODE, GL_LINEAR);   // blend linearly between fog start and end
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY, 0.8);      // note: density is ignored in GL_LINEAR mode
glHint(GL_FOG_HINT, GL_NICEST);
glFogf(GL_FOG_START, 0.1);
glFogf(GL_FOG_END, 200);
and is placed in my main function (I don't know yet whether this could cause any problems, but just to be sure), right after the init() call and before my display function call.
Update:
The problem was actually really simple: I had been working solely on the GL_MODELVIEW matrix, thinking there was no real difference from the GL_PROJECTION matrix. According to this article and the post from Reto Koradi, there is a pretty significant difference. I highly recommend reading the full article to better understand the system behind OpenGL (it definitely helped me a lot).
The corrected code (for my init()-call) would then be:
void init2()
{
    glClearColor(1.0, 1.0, 1.0, 0.0);          // set background color to white
    glMatrixMode(GL_PROJECTION);               // switch to projection mode
    glLoadIdentity();                          // initialize the projection matrix
    glOrtho(-300, 300, -300, 300, -800, 800);  // map coordinates to the viewport
    gluLookAt(2,2,10, 0,0,-0.5, 0,1,0);
    glMatrixMode(GL_MODELVIEW);                // now switch to modelview mode
}
The fog equation is evaluated based on the value of c (quoting the OpenGL 2.1 spec):
Otherwise, if the fog source is FRAGMENT DEPTH, then c is the eye-coordinate distance from the eye, (0,0,0,1) in eye coordinates, to the fragment center.
FRAGMENT_DEPTH is the default, so this applies in your case. Eye coordinates are the coordinates after the model-view transformation has been applied, so c is the distance from the origin after applying the model-view transform. The spec also allows implementations to use the absolute value of the z-coordinate instead of the distance from the origin.
One small observation on your code: GL_FOG_DENSITY does not matter if the mode is GL_LINEAR. It is only used for the exponential modes.
For GL_LINEAR mode, the behavior is pretty much as you would expect. The original fragment color is linearly blended with the fog color within the range GL_FOG_START to GL_FOG_END. So everything smaller than GL_FOG_START has the original fragment color, everything after GL_FOG_END has the fog color, and the values in between are linear interpolations between the two, with gradually more fog color and less original fragment color.
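In code form, the GL_LINEAR blend the fixed pipeline performs is roughly this (c is the eye-space distance described above):

/* Sketch of the GL_LINEAR fog equation from the 2.1 spec:
   f = (end - c) / (end - start), clamped to [0, 1]. */
float fogFactor(float c, float start, float end) {
    float f = (end - c) / (end - start);
    return f < 0.0f ? 0.0f : (f > 1.0f ? 1.0f : f);
}
/* final color = f * fragmentColor + (1 - f) * fogColor */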
To get good results, you'll have to play with the GL_FOG_START and GL_FOG_END values. If you don't get as much fog as desired, you can start by reducing the value of GL_FOG_END.
I peeked at the linked code, and noticed one problem: You're specifying the projection matrix while you're in GL_MODELVIEW matrix mode. You need to be careful that you specify the matrices in the correct matrix mode, which is GL_PROJECTION for the projection matrix.
Mixing up the matrix modes does not have an adverse effect on the resulting vertex coordinates, since both the model-view and projection matrices are applied to the vertices. So for very simple use, you can sometimes get away with using the wrong mode. But once lighting comes into play, it is critical to use the correct matrix mode, since lighting calculations are done after the model-view transformation has been applied, but before the projection transformation.
And yes, as others already pointed out, a lot of this actually gets simpler if you write your own shaders. The fact that I quoted the OpenGL 2.1 spec is probably a hint that this functionality is old and obsolete.
Like too many things in OpenGL 1.1, fog is calculated per vertex. So if you have a long line with only two points, fog is evaluated only at the end points and the color is then interpolated linearly in between. Depending on how your line is aligned and which shading mode you use, this may result in no apparent fogging.
Two solutions:
Subdivide the lines into a couple of dozen line segments, so as to sample the fog at more than two points.
or
Use a fragment shader instead and calculate the fog term there. This is what I suggest doing; a sketch follows.
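A minimal sketch of that second option, with the fog factor evaluated per fragment rather than per vertex (GLSL 1.20-style shaders written here as C strings; the uniform names fogColor/fogStart/fogEnd are made up):

const char *fogVertSrc =
    "varying float eyeDist;\n"
    "void main() {\n"
    "    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;\n"
    "    eyeDist = length(eyePos.xyz);   /* distance from the eye */\n"
    "    gl_FrontColor = gl_Color;\n"
    "    gl_Position = gl_ProjectionMatrix * eyePos;\n"
    "}\n";

const char *fogFragSrc =
    "uniform vec3 fogColor;\n"
    "uniform float fogStart, fogEnd;\n"
    "varying float eyeDist;\n"
    "void main() {\n"
    "    float f = clamp((fogEnd - eyeDist) / (fogEnd - fogStart), 0.0, 1.0);\n"
    "    gl_FragColor = vec4(mix(fogColor, gl_Color.rgb, f), gl_Color.a);\n"
    "}\n";

Because f is computed per fragment, a two-point line fogs correctly along its whole length, with no subdivision needed.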

Render a 3D model with the same size regardless of camera position

I've got a particular model that acts as controls in the viewer. The user can click on different parts of it to perform transformations on another model in the viewer (like controls/handles in applications like Unity or Blender).
We'd like the controls to remain the same size regardless how zoomed in/out the camera is. I've tried scaling the size of it based on the distance between the object and the camera but it isn't quite right. Is there a standard way of accomplishing something like this?
The controls are rendered using the fixed pipeline, but we've got other components using the programmable pipeline.
The easy answer is "use the programmable pipeline", because it's not that difficult to write:
if (normalObject) {
    gl_Position = projection * view * model * vertex;
} else {
    gl_Position = specialMVPMatrix * vertex;
}
Whereas in the fixed-function pipeline you'll spend a lot more code trying to get this to work, and plenty more CPU cycles rendering it.
With the fixed pipeline, the easiest way to do this is to simply not apply any transformations when you render the controls:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// draw controls
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
The glPushMatrix()/glPopMatrix() calls will make sure that the previous matrices are restored at the end of this code fragment.
With no transformation at all, the range of coordinates mapped to the window will be [-1.0 .. 1.0] in both coordinate directions. If you need anything else, you can apply the necessary transformations before you start drawing the controls.
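If instead the controls should stay attached to a 3D anchor point but keep a constant on-screen size, another common approach (a sketch with made-up names; assumes a perspective projection and <math.h>) is to scale them by their distance from the camera, so the perspective shrink cancels out:

/* Return a scale factor that makes a unit-sized object at worldPos span
   roughly pixelSize pixels, given the camera position and vertical FOV. */
float constantScreenScale(const float eye[3], const float worldPos[3],
                          float fovyRadians, int viewportHeightPx,
                          float pixelSize) {
    float dx = worldPos[0] - eye[0];
    float dy = worldPos[1] - eye[1];
    float dz = worldPos[2] - eye[2];
    float dist = sqrtf(dx*dx + dy*dy + dz*dz);
    /* world units covered by one pixel at that distance */
    float worldPerPixel = 2.0f * dist * tanf(fovyRadians * 0.5f)
                        / (float)viewportHeightPx;
    return pixelSize * worldPerPixel;   /* feed to glScalef(s, s, s) */
}

The missing tan(fovy/2)/viewportHeight factor, which ties the scale to actual pixels, may be why scaling by raw distance alone "isn't quite right".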

opengl 3.3 z-fighting ortho 2d view

I'm having some issues with z-fighting while drawing simple 2D textured quads using OpenGL. The symptoms are two objects moving at the same speed, one on top of the other, but periodically each shows through the other in turn, a sort of flickering. I assume this is indeed z-fighting.
I have turned off Depth Testing and have the following as well:
gl.Disable(gl.DEPTH_TEST)
gl.DepthFunc(gl.LESS)
gl.Enable(gl.BLEND)
gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
My view and ortho matrices are as follows:
Projection := mathgl.Ortho(0.0, float32(width), float32(height), 0.0, -5.0, 5.0)
View := mathgl.LookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0)
I have tried setting the near and far distances much greater (a range of 50000, say), but it still doesn't help.
The only difference in my OpenGL process is that instead of a DrawElements call for each individual object, I package all vertices, UVs (sprite atlas), translation, rotation, etc. into one big batch sent to the vertex shader.
Does anyone have remedies for 2D z-fighting?
Edit:
I'm adding some pictures to further describe the scenario.
These images were taken a few seconds apart. They simply show textures moving from left to right. As they move, one sprite overlaps the other and vice versa, back and forth, very quickly.
Also note that my images (sprites) are PNGs with a transparent background.
It definitely isn't depth fighting if you have depth testing disabled, as shown in the code snippet.
"I package all vertices, uvs(sprite atlas), translation, rotation, etc in one big package sent to vertex shader." - You need to look into the order in which you add your sprites; perhaps it's inconsistent for some reason. A sketch of one way to enforce a stable order follows.
This could be Z-fighting. The usual causes are:
- fragments are at the same Z coordinate, or closer together than the accuracy of the Z coordinate allows
- fragments are too far from the camera: with a perspective projection, the farther you are from Z-near, the less depth accuracy you have
Some ways to fix this:
- change the size/position of the overlapping surfaces slightly
- use more bits for the Z-buffer (depth)
- use a linear or logarithmic Z-buffer
- increase Z-near or decrease Z-far, or both; for a perspective projection you can combine several frustums to cover a high-precision Z range
- sometimes it helps to use glDepthFunc(GL_LEQUAL) (see the sketch below)
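For a 2D sprite batch like this one, a minimal sketch of combining two of those fixes (a distinct Z per layer plus GL_LEQUAL; the 0.1 spacing is made up, and note this interacts with blending, as discussed next):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
/* when filling the batch, give each sprite its own depth instead of a
   shared one, e.g. z = 0.1f * (float)layer; the glOrtho range of -5..5
   above leaves room for dozens of distinct layers */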
This could also be an issue with blending. When you use blending you need to render a bit differently: to render transparency correctly, you must Z-sort the scene, otherwise artifacts can occur, especially with dense transparent geometry or objects near it (many polygon edges close together). On top of that, Z-fighting creates artifacts that are an order of magnitude worse when blending is on.
Some ways to fix this:
- Z-sorting can be partially done by multi-pass rendering + depth test + switching the front face: first render all solids, then render the Z-sorted transparent objects with the front face set to the side not facing the camera, then render the same objects again with the front face set to the side facing the camera. You need the depth test enabled for this! This way you do not need to sort all polygons of the scene, just the transparent objects. The results are not 100% correct for complex transparent geometries, but they are usually good enough (especially for dynamic scenes). In the example output for this technique (a glass cup), darker pixels mean two layers of glass; that is the chosen blending function, not a bug, which is why the opening looks as if the front/back faces were swapped.
- use less dense geometry for transparent objects
- get rid of the Z-fighting issues

Black Screen Effect - OpenGL

I'm new to OpenGL and I've been experiencing the "Black Screen Effect". I've spent ages trying to work out why I'm not seeing anything, without much success. I'm using LWJGL, and here is the piece of code I'm trying to run:
glViewport(0, 0, DISPLAY_WIDTH, DISPLAY_HEIGHT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(6200000.0f, 6300000.0f, 350000.0f, 380000.0f, -10000000.0f, 100000000.0f);
gluLookAt(368000.0f, 6250000.0f, -10000.0f, 368000.0f, 6250000.0f, 10000.0f, 0.0f, 1.0f, 0.0f);
glPushMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
if (ready)
{
    glColor3f(1.0f, 0.5f, 1.0f);
    glPointSize(100);
    glBegin(GL_POINTS);
    for (int i = 0; i < data.length; i += 100)
    {
        glColor3f(1.0f, 1.0f, 1.0f);
        glVertex3f((float) data[i][1], (float) data[i][2], 0.0f);
        System.out.println((float) data[i][1] + ", " + (float) data[i][2]);
    }
    glEnd();
    System.out.println("\n\nfinished drawing\n\n");
    glFlush();
}
I am drawing in a different colour than the one I used to clear the screen.
My data set is quite large (over 100,000 points), so I tried plotting only every hundredth point, but that isn't working either.
I am also trying to plot points at positions such as (400000, 6800000); would this be causing me problems? I'm pretty sure that 32-bit floating-point numbers should be able to handle these values.
I am pretty certain that a point with size 1 will be plotted as 1 pixel on the screen, regardless of how small it is compared with the bounds of the orthographic projection.
Maybe I'm dealing with the projection matrix incorrectly.
First, as said in my comment, don't use gluLookAt on the projection matrix. It defines the camera (view), and therefore belongs in the model-view matrix. This isn't the cause of your problem, and it should also work this way, but it is conceptually wrong.
Next, if you call this code every frame, you push a new matrix onto the stack every frame without ever calling glPopMatrix. glPushMatrix is there to save the current matrix so you can restore it later with glPopMatrix, because every other command (like glLoadIdentity, but also gluLookAt and glOrtho) modifies the current matrix (the one selected by glMatrixMode).
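Putting both points together, a corrected per-frame setup might look like this (a C-style sketch using the question's own parameters; the LWJGL calls are the same):

/* projection matrix: projection only */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(6200000.0f, 6300000.0f, 350000.0f, 380000.0f, -10000000.0f, 100000000.0f);

/* model-view matrix: the camera (view) transform belongs here */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(368000.0f, 6250000.0f, -10000.0f,
          368000.0f, 6250000.0f,  10000.0f,
          0.0f, 1.0f, 0.0f);

/* draw the points; use glPushMatrix/glPopMatrix only as a balanced pair
   around local, per-object transforms */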
Otherwise, always keep the size of your scene in relation to the viewing volume (the glOrtho parameters, in your case) in mind. At the moment you're looking from point (368000, 6250000, -10000) toward point (368000, 6250000, 10000). Together with the glOrtho parameters, this defines your viewing volume as the [368000-6300000, 368000-6200000] x [6250000+350000, 6250000+380000] x [-10000000-10000, 100000000-10000] box. If you don't transform your points further by any local transformations, their coordinates must lie in these intervals to be visible. Keep an eye on the minus sign in the x-interval: it comes from the fact that you actually rotated the viewing volume 180 degrees around the y-axis, because you defined the view to look from -z toward +z, whereas GL's default eye space has the viewer looking from +z toward -z (usually not much of a problem with an origin-symmetric viewing volume, but yours is highly asymmetric).
Although your numbers are extremely large, they should be handleable by 32-bit floats. But are you really sure you want your points to have a size of 100 pixels (if that is even supported)?
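As a quick sanity check on that: 6,300,000 lies between 2^22 = 4,194,304 and 2^23 = 8,388,608, and a 32-bit float carries a 24-bit significand, so adjacent representable values in that range are 2^(22-23) = 0.5 apart. That is coarse, but plenty for placing points whose coordinates differ by whole units.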
And if you only draw 2D points in an orthographic view, I'm also not sure if you need such a HUGE depth range.

OpenGL: scale then translate? and how?

I've got some 2D geometry. I want to take some bounding rect around my geometry, and then render a smaller version of it somewhere else on the plane. Here's more or less the code I have to do scaling and translation:
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
float translateX = dest.x - source.x;
float translateY = dest.y - source.y;
glScalef(scaleX, scaleY, 0.0); // note: a Z scale of 0.0 makes the matrix singular; 1.0 is safer for 2D
glTranslatef(translateX, translateY, 0.0);
// Draw geometry in question with its normal verts.
This works exactly as expected in a given dimension when the dest origin is 0. But if the origin for, say, x is nonzero, the result is still scaled correctly but appears to be translated to somewhere near zero on that axis anyway; it turns out it's not exactly the same as if dest.x were zero.
Can someone point out something obvious I'm missing?
Thanks!
FINAL UPDATE: Per Bahbar's and Marcus's answers below, I did some more experimentation and solved this. Adam Bowen's comment was the tip-off. I was missing two critical facts:
I needed to be scaling around the center of the geometry I cared about.
I needed to apply the transforms in the opposite order of the intuition (for me).
The first is kind of obvious in retrospect. But for the latter, for other good programmers/bad mathematicians like me: it turns out my intuition was operating in what the Red Book calls a "grand, fixed coordinate system", in which there is an absolute plane and your geometry moves around on that plane using transforms. This is OK, but given the nature of the math behind stacking multiple transforms into one matrix, it's the opposite of how things really work (see the answers below or the Red Book for more). Basically, the transforms are "applied" in the "reverse order" of how they appear in code. Here's the final working solution:
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
Point sourceCenter = centerPointOfRect(source);
Point destCenter = centerPointOfRect(dest);
glTranslatef(destCenter.x, destCenter.y, 0.0);       // 3: move to the dest center
glScalef(scaleX, scaleY, 1.0);                       // 2: scale around the origin
glTranslatef(-sourceCenter.x, -sourceCenter.y, 0.0); // 1: move source center to origin
// Draw geometry in question with its normal verts.
In OpenGL, each matrix you specify is multiplied onto the right of the current matrix, and the vertex sits at the far right of the expression. Thus the last operation you specify in code is the first applied to the geometry, i.e. it acts in the coordinate system of the geometry itself. (The first one is usually the view transform, i.e. the inverse of your camera's to-world transform.)
Bahbar makes a good point that you need to consider the center point for scaling (or the pivot point for rotations). Usually you translate there, rotate/scale, then translate back (or, in general: apply the basis transform, the operation, then the inverse). This is called a change of basis, which you might want to read up on.
Anyway, to get some intuition about how it works, try some simple values (zero, etc.), then alter them slightly (perhaps in an animation) and watch what happens to the output. Then it's much easier to see what your transforms are actually doing to your geometry.
Update
That the order seems "reversed" with respect to intuition is rather common among beginning OpenGL coders; I've tutored a computer graphics course and many students react the same way. It becomes easier to think about how OpenGL does it if you consider using glPushMatrix/glPopMatrix while rendering a tree (scene graph) of transforms and geometries. Then the current order of things becomes rather natural, and the opposite would make it rather difficult to get anything useful done.
Scale, just like Rotate, operates from the origin. So if you scale an object that spans the segment [10:20] (on the X axis, say) by one half, you get [5:10]. The object was therefore scaled, and also moved closer to the origin: exactly what you observed.
This is why you apply Scale first in general (because objects tend to be defined around 0).
So if you want to scale an object around a point Center, you can translate the object from Center to the origin, scale there, and then translate back.
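To make that concrete with the segment from above, scale [10:20] by 0.5 around its center, 15:
translate(-15): [10:20] -> [-5:5]
scale(0.5): [-5:5] -> [-2.5:2.5]
translate(+15): [-2.5:2.5] -> [12.5:17.5]
The segment keeps its center at 15 instead of sliding toward the origin, which is exactly what the question's final solution does with sourceCenter and destCenter.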
Side note: if you translate first and then scale, the scale is applied to the previously translated coordinates, which is probably why you had issues with this method.
I haven't played with OpenGL ES, just a bit with OpenGL.
It sounds like you want to transform from a different position as opposed to the origin. I'm not sure, but can you try doing the transforms and draws within a glPushMatrix()/glPopMatrix() pair?
e.g.
// source and dest are arbitrary rectangles.
float scaleX = dest.width / source.width;
float scaleY = dest.height / source.height;
float translateX = dest.x - source.x;
float translateY = dest.y - source.y;
glPushMatrix();
glScalef(scaleX, scaleY, 1.0); // Z scale of 1.0 (0.0 would collapse the matrix)
glTranslatef(translateX, translateY, 0.0);
// Draw geometry in question with its normal verts,
// as if it were drawn from 0,0.
glPopMatrix();
Here's a simple Processing sketch I wrote to illustrate the point:
import processing.opengl.*;
import javax.media.opengl.*;

void setup() {
  size(500, 400, OPENGL);
}

void draw() {
  background(255);
  PGraphicsOpenGL pgl = (PGraphicsOpenGL) g;
  GL gl = pgl.beginGL();
  gl.glPushMatrix();
  // transform the 'pivot'
  gl.glTranslatef(100, 100, 0);
  gl.glScalef(10, 10, 10);
  // draw something from the 'pivot'
  gl.glColor3f(0, 0.77, 0);
  drawTriangle(gl);
  gl.glPopMatrix();
  // matrix popped, we're back at the origin (0,0,0); continue as normal
  gl.glColor3f(0.77, 0, 0);
  drawTriangle(gl);
  pgl.endGL();
}

void drawTriangle(GL gl) {
  gl.glBegin(GL.GL_TRIANGLES);
  gl.glVertex2i(10, 0);
  gl.glVertex2i(0, 20);
  gl.glVertex2i(20, 20);
  gl.glEnd();
}
Here is an image of the sketch running: the same green triangle is drawn with translation and scale applied, then the red one outside the push/pop 'block', so it is not affected by the transform.
HTH,
George