I have stumbled upon a problem while writing a program in which I am animating shapes using OpenGL.
Currently in the program I am creating some shapes with the following snippet:
for (int i = 50; i <= 150; i = i + 50) {
    for (int j = 50; j <= 750; j = j + 200) {
        // Draw rectangle shape at position (j, i);
        // shape has additional capability for animations
    }
}
which gives me this output:
Now, I have to resize these rectangles and move them all to another position. I have the final target point for the first rectangle (the one at position [0][0]) where it should be moved. However, when I animate the size of these rectangles with something like
rectangle.resize(newWidth, newHeight, animationTime);
the rectangles, for obvious reasons, do not stick together, and I get something like:
I am looking for something like grouping that can bind these shapes together, so that even when different animations like resize (and motion, etc.) are applied, the vertices or boundaries keep touching.
Note that grouping is the main thing here. I might have a requirement in the future in which I would have to group the two rectangles in the last column while independent animations (like rotations) are already happening on them. So I picture this as something like a plane/container holding the two rectangles, where that plane/container itself can be animated for position etc. I am fine with the algorithm/concept; I don't need code.
Instead of animating the geometry on the CPU, animate scale/position matrices on the CPU and leave the transformation of the geometry to the vertex shader via the MVP matrix. Use one and the same scale matrix for all the rectangles (or two matrices, if your scale factor differs in X and Y).
PS. Here's an example:
float sc = 0;

void init()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void on_each_frame()
{
    // do other things

    // draw pulsating rectangles
    sc += 0.02f;
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    float s = sinf(sc) + 1.5f; // glScalef takes one factor per axis
    glScalef(s, s, s);
    // draw rectangles as usual, **without** scaling them
    glPopMatrix();

    // do other things
}
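If you later take the shader route mentioned at the start, the same idea carries over: one shared model matrix for the whole group, uploaded once per frame. A minimal sketch, where prog and modelLoc are placeholder names for your shader program and its model-matrix uniform location:
// One shared scale matrix for the whole group of rectangles.
float s = sinf(sc) + 1.5f;
float model[16] = {      // column-major uniform scale matrix
    s, 0, 0, 0,
    0, s, 0, 0,
    0, 0, s, 0,
    0, 0, 0, 1,
};
glUseProgram(prog);
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, model);
// draw all rectangles of the group with this one matrix bound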
Think about implementing a 'DrawableAnimatableObject' which is a high level 3D object that is able to animate and draw itself, and contains your polygons (multiple rectangles in your case) as internal data. See the following incomplete code to give you an idea:
class DrawableAnimatableObject {
private:
    Mesh *mesh;

    Vector3 position;
    Quaternion orientation;
    Vector3 scale;
    Matrix transform;

public:
    DrawableAnimatableObject();
    ~DrawableAnimatableObject();

    // Update the object properties for the next frame.
    // It updates the scale, position or orientation of your
    // object to suit your animation.
    void update();

    // Draw the object.
    // This function converts scale, orientation and position
    // information into proper OpenGL matrices and passes them
    // to the shaders prior to drawing the polygons,
    // therefore there is no need to resize the polygons individually.
    void draw();

    // Standard set/get.
    void setPosition(Vector3 p);
    Vector3 getPosition();
    void setOrientation(Quaternion q);
    Quaternion getOrientation();
    void setScale(float f);
    Vector3 getScale();
};
In this code, Mesh is a data structure that contains your polygons. Simply put, it can be a vertex-face list, or a more complicated structure such as a half-edge mesh.
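As a rough illustration of the vertex-face-list option (the field names are illustrative, not any specific library's API):

#include <array>
#include <vector>

// A minimal vertex-face-list mesh: shared vertex positions plus faces
// that index into them (quads in this example).
struct Mesh {
    std::vector<Vector3> vertices;
    std::vector<std::array<int, 4>> faces;
};

The DrawableAnimatableObject::draw() function should look something like this: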
void DrawableAnimatableObject::draw() {
    transform = Matrix::CreateTranslation(position) *
                Matrix::CreateFromQuaternion(orientation) *
                Matrix::CreateScale(scale);
    // In modern OpenGL this matrix should be passed to the shaders.
    // In legacy OpenGL you would apply it with:
    glPushMatrix();
    glMultMatrixf(transform); // assumes Matrix converts to const GLfloat*
    glBegin(GL_QUADS);
    //...
    // Draw your rectangles here.
    //...
    glEnd();
    glPopMatrix();
}
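For completeness, a minimal sketch of what update() might look like, assuming a simple linear blend toward target values; startScale, targetScale, startPosition, targetPosition, elapsed, animationTime and frameTime are hypothetical members you would add for the animation bookkeeping:
// Advance the animation one step. Every polygon in 'mesh' is later drawn
// with the single transform built from these values, so the whole group
// stays glued together no matter how it is scaled or moved.
void DrawableAnimatableObject::update() {
    float t = elapsed / animationTime;  // normalized progress in [0,1]
    if (t > 1.0f) t = 1.0f;
    scale    = startScale    + (targetScale    - startScale)    * t;
    position = startPosition + (targetPosition - startPosition) * t;
    elapsed += frameTime;               // supplied by your main loop
}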
Related
I'm just a noob at GLSL and don't know how to do this in GLSL.
What I'm trying to do is set the alpha value to 1 at the center of a sphere and let it drop off gradually toward the outside.
So I made a prototype using the Blender node editor, and that's how I did it.
Now I am trying to do this in GLSL.
Maybe I can use gl_Normal to replace the "Normal" output of Blender's Geometry node.
(Though it was removed after version 140, my final goal is just to make it work, so ignore that.)
There is also a dot function in GLSL to compute the "Dot Product" vector math node.
What I still need are the "View vector of camera data" and the "ColorRamp".
I think the "ColorRamp" can be done with the mix and sin functions,
but I have no idea how to get the "View vector of camera data".
I already read this, and understand what it is, but I don't know how to get it.
So how can I get the "View vector of camera data"?
Well, without depth, the shaders are simple enough:
// Vertex
varying vec2 pos; // fragment position in world space

void main()
{
    pos = gl_Vertex.xy;
    gl_Position = ftransform();
}

// Fragment
varying vec2 pos;
uniform vec4 sphere; // sphere center and radius (x,y,z,r)

void main()
{
    float r;
    r = length(pos - sphere.xy);      // radius = 2D distance to center (ignoring z)
    if (r > sphere.a) discard;        // throw away fragments outside sphere
    r = 0.2 * (1.0 - (r / sphere.a)); // color gradient from 2D radius ...
    gl_FragColor = vec4(r, r, r, 1.0);
}
Yes, you can also use gl_ModelViewProjectionMatrix * gl_Vertex; instead of ftransform(). As you can see, I used world coordinates so I do not need to play with radius scaling... If you also want gl_FragDepth, to make this 3D, then you have to work in screen space, which is much more complicated, and I am too lazy to try it. Anyway, change the gradient color to whatever you like.
The rendering in C++ is done like this:
void gl_draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    GLint id;
    float aspect = float(xs) / float(ys);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0 / aspect, aspect, 0.1, 100.0);
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(15.0, 0.0, 1.0, 0.0);
    glTranslatef(1.0, 1.0, -10.0);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);

    float xyzr[4] = { 0.7, 0.3, -5.0, 1.5 };

    // GL 1.0 circle for debug
    int e; float a, x, y, z;
    glBegin(GL_LINE_STRIP);
    for (a = 0.0, e = 1; e; a += 0.01 * M_PI)
    {
        if (a >= 2.0 * M_PI) { e = 0; a = 2.0 * M_PI; }
        x = xyzr[0] + (xyzr[3] * cos(a));
        y = xyzr[1] + (xyzr[3] * sin(a));
        z = xyzr[2];
        glVertex3f(x, y, z);
    }
    glEnd();

    // GLSL sphere
    glUseProgram(prog_id);
    id = glGetUniformLocation(prog_id, "sphere"); glUniform4fv(id, 1, xyzr);
    glBegin(GL_QUADS);
    glColor3f(1, 1, 1);
    glVertex3f(xyzr[0] - xyzr[3], xyzr[1] - xyzr[3], xyzr[2]);
    glVertex3f(xyzr[0] + xyzr[3], xyzr[1] - xyzr[3], xyzr[2]);
    glVertex3f(xyzr[0] + xyzr[3], xyzr[1] + xyzr[3], xyzr[2]);
    glVertex3f(xyzr[0] - xyzr[3], xyzr[1] + xyzr[3], xyzr[2]);
    glEnd();
    glUseProgram(0);

    glFlush();
    SwapBuffers(hdc);
}
And result:
In white is the debug GL 1.0 circle, there to check that the two are placed in the same place. Change the gradient to match your needs. I did not use transparency, so if you need it, change the alpha component and enable/set blending.
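For reference, the standard blending setup would be something like this (assuming the fragment shader then writes the gradient into the alpha channel):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending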
The xs,ys is the resolution of my GL window, and xyzr is your sphere { x,y,z,r } definition. Hope I did not forget to copy something. This code and answer take advantage of the following (so look there for more info in case I missed something):
GLSL render Disc pattern
complete GL+GLSL+VAO/VBO C++ example
So I've been playing with converting LWJGL 3D object coordinates to 2D screen-space coordinates using GLU.gluProject; however, I'm finding there to be quite a problem when the xyz of the 3D object is behind the camera. The screen-space coordinates seem to appear on screen twice: once for the actual position, which works fine, but again when the object is behind the camera, where the positions are somewhat inverted relative to the object's true position (the camera moves left, and so do the screen coordinates, twice as fast as the camera).
Here's the code I'm using for 3D to 2D:
public static float[] get2DFrom3D(float x, float y, float z) {
    FloatBuffer screen = BufferUtils.createFloatBuffer(3);
    IntBuffer view = BufferUtils.createIntBuffer(16);
    FloatBuffer model = BufferUtils.createFloatBuffer(16);
    FloatBuffer proj = BufferUtils.createFloatBuffer(16);
    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, model);
    GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, proj);
    GL11.glGetInteger(GL11.GL_VIEWPORT, view);
    boolean res = GLU.gluProject(x, y, z, model, proj, view, screen);
    if (res) {
        return new float[] { screen.get(0), Display.getHeight() - screen.get(1), screen.get(2) };
    }
    return null;
}
Another query is what the screen.get(2) value is used for, as it mostly varies between 0.8 and 1.1, but occasionally reaches -18 or 30 when the position is just below the camera and the camera pitch sits just above or below the horizon.
Any help is appreciated.
Points behind the camera (or on the camera plane) can never be correctly projected. This case can only be handled for primitives like lines or triangles. During rendering, the primitives are clipped against the viewing frustum, so that new vertices (and new primitives) can be generated. But this is impossible to do for a single point; you always need lines or polygon edges to calculate any meaningful intersection point.
Individual points, and that is all gluProject handles, can either be inside or outside of the frustum. But gluProject does not care about that; it just applies the transformations, mirroring points behind the camera to in front of the camera. It is the responsibility of the caller to ensure that the points to project are actually inside the viewing frustum.
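One way to implement that check, sketched in C++ (the same arithmetic applies to the LWJGL FloatBuffer): in eye space the camera looks down the negative Z axis, so a point lies in front of the camera only if its eye-space z is negative.
// Returns true if the point is strictly in front of the camera plane.
// 'model' is the column-major 4x4 modelview matrix, as glGetFloatv returns it.
bool isInFrontOfCamera(const float model[16], float x, float y, float z)
{
    // Eye-space z of (x, y, z, 1): third component of model * point.
    float ez = model[2] * x + model[6] * y + model[10] * z + model[14];
    return ez < 0.0f;
}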
I am working on a C++ project written with MFC templates.
Using the OpenGL library, I am drawing spheres at specific coordinates. I move to each coordinate with the glTranslatef function, but when I draw two spheres with the same X coordinate, it looks like they differ in their x.
For example, when I draw two spheres at (x,y,z): (1,1,0) and (x,y,z): (1,2,0), the output is this:
This view is from above:
This is my function for drawing the spheres:
void MYGLView::DrawSphere(double X_position, double Y_Position, double Z_Position,
                          GLdouble radius, int longitudeSubdiv, int latitudeSubdiv,
                          double Red, double Green, double Blue)
{
    gluQuadricDrawStyle(m_quadrObj, GLU_FILL);
    float shininess = 64.0f;
    glPushMatrix();
    glTranslatef(X_position, Y_Position, Z_Position);
    glColor3f(Red, Green, Blue);
    gluSphere(m_quadrObj, radius, longitudeSubdiv, latitudeSubdiv);
    //glTranslatef(-3,0,0);
    glFlush();
    glPopMatrix();
}
Can you tell me where I made the mistake?
Your camera is slightly tilted downwards, therefore you have a vanishing point for all vertical lines. If you want all vertical lines to stay parallel on the screen, your camera is not allowed to tilt downwards. Alternatively, you can use a parallel projection, where all lines that are parallel in the world remain parallel in the image.
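A minimal sketch of switching to a parallel (orthographic) projection in legacy OpenGL; the bounds are placeholders you would pick to frame your scene:
// Parallel projection: lines that are parallel in the world stay parallel
// on screen, regardless of camera tilt.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float aspect = float(width) / float(height); // your window size
glOrtho(-10.0 * aspect, 10.0 * aspect, -10.0, 10.0, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);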
I have a class for projection in OpenGL, which a user can use as follows:
//inside the draw method
customCam1.begin();
//draw various things here
customCam1.end();
The begin and end methods in my class are simple methods right now as follows:
void CustomCam::begin() {
    saveGlobalMatrices();
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-hParam, hParam, -tParam, tParam, near, far); // hParam and tParam are supplied by the user of the class
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void CustomCam::end() {
    loadGlobalMatrices();
}
I want the user to be able to create multiple instances of the above class (supplying different values of hParam and tParam for each), and then draw all three on the screen. In essence, this is like three different cameras for the scene, all of which are to be drawn on the screen. (Consider, for example, top, right, and bottom views drawn with the screen divided into three columns.)
Now since there's only one projection matrix, how do I achieve three different custom cam views at the same time?
You just have to draw the scene three times using a different projection matrix (camera object in your case) each time. And in each of those three passes you set a different viewport for your renderings to appear in different parts of the overall framebuffer:
glViewport(0, 0, width/3, height); //first column
customCam1.begin();
//draw scene
customCam1.end();
glViewport(width/3, 0, width/3, height); //second column
customCam2.begin();
//draw scene
customCam2.end();
glViewport(2*width/3, 0, width/3, height); //third column
customCam3.begin();
//draw scene
customCam3.end();
But you cannot draw the whole scene using three different projection matrices and three different viewports all in one go.
EDIT: For the sake of completeness, you can indeed do this in a single pass, using geometry shaders and the GL_ARB_viewport_array extension (core since 4.1). In this case the vertex shader would just do the modelview transformation, you would have all three projection matrices as uniforms, and the geometry shader would generate three different triangles (projected by the respective matrices) for each input triangle, each with a different gl_ViewportIndex:
layout(triangles) in;
layout(triangle_strip, max_vertices = 9) out;

uniform mat4 projection[3];

void main()
{
    for (int i = 0; i < projection.length(); ++i)
    {
        gl_ViewportIndex = i;
        for (int j = 0; j < gl_in.length(); ++j)
        {
            gl_Position = projection[i] * gl_in[j].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
But given your use of deprecated old functionality, I'd say geometry shaders and OpenGL 4.1 functionality are not yet an option for you (or at least not the first thing to change in your current framework).
I am making a rollercoaster inside a skybox in OpenGL, and without much background on its functions or computer graphics it is proving to be very difficult. I drew a rollercoaster using Catmull-Rom spline interpolation and drew each point with glVertex3f. Now I want to call an update() function every 50 ms to move the camera around the track. gluLookAt() is producing weird results: removing the track from the screen, producing a black screen, etc. I think I need to move some of the matrix functions around, but I am not sure where to put each one. Here is my code so far:
int main(int argc, char** argv)
{
    // ... load track, etc ...

    // Init currpos, nextpos, iter, up
    currpos = Vec3f(0, 0, 0);
    nextpos = currpos;
    iter = 0;
    up = Vec3f(0, 1, 0);

    deque<Vec3f> points;
    Vec3f newpt;

    // Loop through the points and interpolate
    for (pointVectorIter pv = g_Track.points().begin(); pv != g_Track.points().end(); pv++)
    {
        Vec3f curr(*pv);           // Initialize the current point and a new point (to be drawn)
        points.push_back(curr);    // Push the current point onto the stack
        allpoints.push_back(curr); // Add current point to the total stack
        if (points.size() == 4)    // Check if there are 4 points in the stack, if so interpolate
        {
            for (float u = 0.0f; u < 1.0f; u += 0.01f)
            {
                newpt = interpolate(points[0], points[1], points[2], points[3], u);
                glColor3f(1, 1, 1);
                glVertex3f(newpt.x(), newpt.y(), newpt.z());
                allpoints.push_back(newpt);
            }
            points.pop_front();
        }
    }

    // glutInit, InitGL(), etc...
}
void InitGL(GLvoid)
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(100.0, (GLfloat)WINDOW_WIDTH / (GLfloat)WINDOW_HEIGHT, .0001, 999999);
    glMatrixMode(GL_MODELVIEW);
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
}
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(currpos.x(), currpos.y(), currpos.z(),
              nextpos.x(), nextpos.y(), nextpos.z(),
              up.x(), up.y(), up.z());

    glPushMatrix();
    glEnable(GL_TEXTURE_2D); // Enable texturing from now on
    /* draw skybox, this was from previous assignment and renders correctly */
    glPopMatrix();

    // now draw rollercoaster ...
    glPushMatrix();
    glBegin(GL_LINE_STRIP);
    deque<Vec3f> points;
    Vec3f newpt;
    for (Vec3f pt : allpoints)
    {
        glColor3f(1, 1, 1);
        glVertex3f(pt.x(), pt.y(), pt.z());
    }
    glutTimerFunc(50, update, 1);
    glEnd();
    glPopMatrix();

    // Swap buffers, so the one we just drew is displayed
    glutSwapBuffers();
}
void update(int a)
{
    if (iter < allpoints.size())
    {
        currpos = allpoints[iter];
        nextpos = allpoints[iter + 1];
        gaze = nextpos - currpos;
        gaze.Normalize();
        Vec3f::Cross3(binorm, gaze, up);
        binorm.Normalize();
        Vec3f::Cross3(up, binorm, gaze);
        up.Normalize();
        glutPostRedisplay();
    }
    iter++;
}
The idea is that I am keeping a global deque allpoints that includes the control points of the spline and the interpolated points. Once that is complete, I call update() every 50ms, and move the camera along each point in allpoints. In a previous version of the project, I could see that the rollercoaster was being drawn correctly. It is gluLookAt() that doesn't seem to work how I want it to. With the code above, the program starts with the camera looking at one side of the skybox with a part of the rollercoaster, and then when update() is called, the rollercoaster disappears but the camera does not move. I have been messing around with where I am putting the OpenGL matrix functions, and depending on where they are sometimes update() will cause a blank screen as well.
Besides the absence of glPopMatrix (which user971377 already spotted), you call glLoadIdentity in your drawing routine, which of course overwrites any changes you made to the modelview matrix in the update method (using gluLookAt).
Always keep in mind: gluLookAt, glOrtho, gluPerspective, glTranslate, glRotate, and all other matrix and transformation functions always work on the top element (changed by glPush/PopMatrix) of the currently selected matrix stack (changed by glMatrixMode). And they always multiply onto the current matrix, instead of replacing it. So, as with gluPerspective, you should call glLoadIdentity before calling gluLookAt. And the whole camera change should be done in the rendering routine, instead of the update routine.
Instead of doing any GL transformations in update, you should rather change the variables on which the camera depends and set the camera (gluLookAt on the modelview matrix) in the display method. To demonstrate the standard use of these functions, your code should look something like:
void display()
{
    <general state setup (glClear, ...)>

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(camera); // view transformation (camera)

    // object 1
    glPushMatrix();               // save modelview
    glTranslate/glRotate/glScale; // local model transformations
    <draw object 1>
    glPopMatrix();

    ...

    // object n
    glPushMatrix();               // save modelview
    glTranslate/glRotate/glScale; // local model transformations
    <draw object n>
    glPopMatrix();

    glutSwapBuffers();
}

void update()
{
    camera = ...;
}
I noticed that in your code glPushMatrix(); is called with no matching glPopMatrix();.
Just a thought, but this might have something to do with your issue.
gluLookAt always applies its result to the current matrix, which in your case is GL_MODELVIEW. But when you render your roller coaster, you load the identity into that matrix, which erases the value you set using gluLookAt.
In this case, you don't need to touch the modelview again for the camera. In fact, GL_MODELVIEW stands for the model matrix multiplied by the view matrix. You can call glPushMatrix() followed by glMultMatrixf(myModelMatrix), and after rendering, glPopMatrix(). With this, you can keep your view matrix inside GL_MODELVIEW and still use a different model matrix for each object.
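A minimal sketch of that pattern; the matrix and draw-call names are placeholders:
// The view matrix set by gluLookAt stays on the modelview stack; each
// object multiplies its own model matrix on top and then restores the stack.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);

glPushMatrix();
glMultMatrixf(coasterModelMatrix); // hypothetical column-major float[16]
drawRollerCoaster();               // hypothetical draw routine
glPopMatrix();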
I also suggest you change the projection matrix only once, at setup, and not every frame.
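With GLUT, for example, the projection is usually set once in the reshape callback rather than in display(); a sketch with placeholder values:
// Set the projection when the window is (re)sized instead of every frame.
void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)w / (double)h, 0.1, 1000.0);
    glMatrixMode(GL_MODELVIEW); // leave modelview active for display()
}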
It's been a long time since I touched OpenGL, but here are a few things to consider:
With each call to display(), you are drawing the skybox with the current matrix and then loading the identity matrix to draw the roller coaster. Perhaps load the identity within the push/pop so that the skybox is constant but your prevailing transformations are still applied to the roller coaster.
Do you need to call gluPerspective and glMatrixMode with every call to display()?
Repeatedly calculating binorm from up and then up from binorm will probably give you unexpected results in terms of rotation of the camera around the screen's z axis.
The call to gluLookAt appears to have nextpos and currpos reversed, pointing the camera in the opposite direction.
(Opinion only) It may still look weird with a completely stationary skybox. Matching camera rotation (but not translation) when drawing the skybox and roller coaster may look better.