I have a class for projection in OpenGL, which a user can use as follows:
//inside the draw method
customCam1.begin();
//draw various things here
customCam1.end();
The begin and end methods in my class are simple methods right now as follows:
void CustomCam::begin(){
    saveGlobalMatrices();
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-hParam,hParam,-tParam,tParam,near,far); //hParam and tParam are supplied by the user of the class
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void CustomCam::end(){
    loadGlobalMatrices();
}
I want the user to be able to create multiple instances of the above class (supplying different values of hParam and tParam for each instance) and then draw all three on the screen. In essence, this is like three different cameras for the scene, all of which are to be drawn on the screen (consider, for example, top, right, and bottom views drawn on a screen divided into three columns).
Now since there's only one projection matrix, how do I achieve three different custom cam views at the same time?
You just have to draw the scene three times using a different projection matrix (camera object in your case) each time. And in each of those three passes you set a different viewport for your renderings to appear in different parts of the overall framebuffer:
glViewport(0, 0, width/3, height); //first column
customCam1.begin();
//draw scene
customCam1.end();
glViewport(width/3, 0, width/3, height); //second column
customCam2.begin();
//draw scene
customCam2.end();
glViewport(2*width/3, 0, width/3, height); //third column
customCam3.begin();
//draw scene
customCam3.end();
But you cannot draw the whole scene using three different projection matrices and three different viewports all in one go.
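For example, a minimal sketch of that three-pass loop, assuming a drawScene() function and an array of pointers to your camera objects (both names are hypothetical):

// Hypothetical sketch: render the same scene once per camera,
// each pass confined to its own column of the window.
CustomCam* cams[3] = { &customCam1, &customCam2, &customCam3 };
for (int i = 0; i < 3; ++i)
{
    glViewport(i * width / 3, 0, width / 3, height); // i-th column
    cams[i]->begin(); // sets this camera's projection
    drawScene();      // your existing drawing code
    cams[i]->end();   // restores the saved matrices
}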
EDIT: For the sake of completeness, you can indeed do this in a single pass, using geometry shaders and the GL_ARB_viewport_array extension (core since 4.1). In this case the vertex shader would just do the modelview transformation, you would have all three projection matrices as uniforms, and the geometry shader would generate three triangles (each projected by the respective matrix) for every input triangle, each with a different gl_ViewportIndex:
layout(triangles) in;
layout(triangle_strip, max_vertices=9) out;

uniform mat4 projection[3];

void main()
{
    for(int i=0; i<projection.length(); ++i)
    {
        gl_ViewportIndex = i;
        for(int j=0; j<gl_in.length(); ++j)
        {
            gl_Position = projection[i] * gl_in[j].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
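On the host side, the matching setup would be roughly as follows. This is only a sketch: prog, width, height, and the filled-in projection matrices are assumed to exist.

// Hypothetical host-side sketch for the viewport-array path (GL 4.1+):
// one viewport per column, selected via gl_ViewportIndex in the geometry shader.
float projections[3][16]; // three column-major projection matrices, filled elsewhere
glUseProgram(prog);
for (GLuint i = 0; i < 3; ++i)
    glViewportIndexedf(i, i * (width / 3.0f), 0.0f, width / 3.0f, (float)height);
glUniformMatrix4fv(glGetUniformLocation(prog, "projection"), 3, GL_FALSE, &projections[0][0]);
// now draw the scene once; the geometry shader fans it out to all three viewports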
But given your use of deprecated legacy functionality, I'd say geometry shaders and OpenGL 4.1 features are not yet an option for you (or at least not the first thing to change in your current framework).
So I'm trying to render a basic overlay onto my 3D scene. Currently I can have either the 3D scene or the 2D overlay, but I can't work out how to get both at once.
In my main method, where render is called, I moved the specific render functions out to manager classes, so in the main render I call:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_COLOR_MATERIAL);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-aspect, aspect, -1, 1, -10, 10);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
material.setColour(new Vector3f(1,1,1));
sLight.getPointLight().setPosition(camera.getPosition());
sLight.setDirection(camera.getForward());
DayCycle.getInstance().update(Time.getDelta());
shader.updateUniforms(transform.getTransformation(), transform.getProjectedTransformation(), material);
//material is a wrapper class for textures and specular value etc
//transform is a matrix wrapper for getting projected transformations, taking the camera position when its created
WorldManager.renderAll(true); //true denotes yes to wireframe mode
InterfaceManager.renderAll();
glfwSwapBuffers(window);
glfwPollEvents();
If I comment out WorldManager.renderAll(), I get the little 2D square in the right part of the screen; if I don't comment it out, I get the world render but no little square.
WorldManager.renderAll()
public static void renderAll(boolean wireframeMode)
{
    RendererUtils.setWireframeMode(wireframeMode);
    for (String s : chunks.keySet())
    {
        Chunk actingChunk = chunks.get(s);
        Transform transform = new Transform();
        Shader shader = PhongShader.getInstance();

        transform.setTranslation(new Vector3f(actingChunk.getLocation().getX() * (Chunk.ChunkSize), 0.0f, actingChunk.getLocation().getY() * (Chunk.ChunkSize)));
        transform.setScale(1.0f, 50f, 1.0f);

        shader.updateUniforms(transform.getTransformation(), transform.getProjectedTransformation(), actingChunk.getMaterial());
        shader.bind();
        actingChunk.getMesh().draw();
        //transform.setRotation(new Vector3f(0,0,0));
    }
}
InterfaceManager.renderAll()
public static void renderAll()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, -10, 10);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDisable(GL_CULL_FACE);
    glDisable(GL_DEPTH_TEST);
    RendererUtils.setWireframeMode(false);

    for (Interface i : interfaces)
    {
        Transform transform = new Transform();
        transform.setTranslation(new Vector3f(0,0,0));
        InterfaceShader.getInstance().updateUniforms(transform.getProjectedTransformation());
        InterfaceShader.getInstance().bind();
        i.getMesh().draw();
    }

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
}
When I have WorldManager.renderAll() uncommented, I get a nice sea of triangles (as it's meant to look) but no 2D square.
With it commented out, I get a nice little square where it's meant to be, and nothing else.
Shaders are here: https://pastebin.com/xWaWhQHy because I felt this post was getting too long to have them inlined.
What's my problem? I can't figure out where it is.
Edit: If I've missed any pertinent code, tell me and I'll upload it to a pastebin.
Edit 2: Updated my code here to reflect that I'd removed a shader in InterfaceManager to actually get a square to draw at all: https://pastebin.com/pHHDsCvF for the shader code.
Edit 3: I've determined it's something to do with my interface shaders. If I use PhongShader instead of InterfaceShader, then it works exactly how I wanted it to.
I suggest modifying the code this way:
WorldManager.renderAll(true); //true denotes yes to wireframe mode
glClear(GL_DEPTH_BUFFER_BIT);
InterfaceManager.renderAll();
This way you clear the depth buffer before rendering the 2D interface.
The problem was that I was still applying transformations to the vertices after passing them to the shader.
By editing out the transformation (and later scrapping the entire vertex shader) in the InterfaceShader instance, the little squares appeared in the right place.
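For reference, a pass-through vertex shader for a 2D overlay can be as minimal as the following. This is only a sketch of the idea, shown here as a C++-style source string; it is not the actual shader from the pastebin, and the attribute name is hypothetical.

// Hypothetical sketch: the overlay vertices are already in the space set up
// by glOrtho, so the vertex shader forwards them without any extra transform.
const char* interfaceVertexSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main()\n"
    "{\n"
    "    gl_Position = vec4(position, 1.0); // no model/camera transform for the overlay\n"
    "}\n";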
I'm just a noob at GLSL and don't know how to do this.
What I'm trying to do is make the alpha value 1 at the center of the sphere and have it drop off gradually towards the outer edge.
So I made a prototype using Blender's node editor, and that's how I did it there.
Now I am trying to do the same in GLSL.
Maybe I can use gl_Normal to replace "normal on Geometry" from Blender.
(Though it was removed after version 140, my final goal is just to make it work, so ignore that.)
And there is also the dot function in GLSL to calculate the "dot product on vector math".
What I still need are the "View vector of camera data" and "ColorRamp".
I think "ColorRamp" can be done with the mix and sin functions,
but I have no idea how to get the "View vector of camera data".
I already read this, and understand what it is, but I don't know how to get it.
So how can I get the "View vector of camera data"?
Well, without depth the shaders are simple enough:
// Vertex
varying vec2 pos; // fragment position in world space

void main()
{
    pos = gl_Vertex.xy;
    gl_Position = ftransform();
}

// Fragment
varying vec2 pos;
uniform vec4 sphere; // sphere center and radius (x,y,z,r)

void main()
{
    float r;
    r = length(pos - sphere.xy);      // radius = 2D distance to center (ignoring z)
    if (r > sphere.a) discard;        // throw away fragments outside sphere
    r = 0.2 * (1.0 - (r / sphere.a)); // color gradient from 2D radius ...
    gl_FragColor = vec4(r, r, r, 1.0);
}
Yes, you can also use gl_ModelViewProjectionMatrix * gl_Vertex; instead of ftransform(). As you can see, I used world coordinates so I do not need to play with radius scaling... If you also want gl_FragDepth to make this 3D, then you have to work in screen space, which is much more complicated, and I am too lazy to try it. Anyway, change the gradient color to whatever you like.
The rendering in C++ is done like this:
void gl_draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    GLint id;
    float aspect = float(xs) / float(ys);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0/aspect, aspect, 0.1, 100.0);
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(15.0, 0.0, 1.0, 0.0);
    glTranslatef(1.0, 1.0, -10.0);

    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);

    float xyzr[4] = { 0.7, 0.3, -5.0, 1.5 };

    // GL 1.0 circle for debug
    int e; float a, x, y, z;
    glBegin(GL_LINE_STRIP);
    for (a=0.0, e=1; e; a+=0.01*M_PI)
    {
        if (a >= 2.0*M_PI) { e=0; a=2.0*M_PI; }
        x = xyzr[0] + (xyzr[3]*cos(a));
        y = xyzr[1] + (xyzr[3]*sin(a));
        z = xyzr[2];
        glVertex3f(x, y, z);
    }
    glEnd();

    // GLSL sphere
    glUseProgram(prog_id);
    id = glGetUniformLocation(prog_id, "sphere"); glUniform4fv(id, 1, xyzr);
    glBegin(GL_QUADS);
    glColor3f(1, 1, 1);
    glVertex3f(xyzr[0]-xyzr[3], xyzr[1]-xyzr[3], xyzr[2]);
    glVertex3f(xyzr[0]+xyzr[3], xyzr[1]-xyzr[3], xyzr[2]);
    glVertex3f(xyzr[0]+xyzr[3], xyzr[1]+xyzr[3], xyzr[2]);
    glVertex3f(xyzr[0]-xyzr[3], xyzr[1]+xyzr[3], xyzr[2]);
    glEnd();
    glUseProgram(0);

    glFlush();
    SwapBuffers(hdc);
}
And the result:
In white is the debug GL 1.0 circle, to check that the two are placed in the same spot. Change the gradient to match your needs. I did not use transparency, so if you need it, change the alpha component and enable/configure BLENDing.
xs,ys is the resolution of my GL window, and xyzr is your sphere { x,y,z,r } definition. Hope I did not forget to copy something. This code and answer build on the following (so look there for more info in case I missed something):
GLSL render Disc pattern
complete GL+GLSL+VAO/VBO C++ example
I'm making a weather simulation in OpenGL 4.0 and am trying to create the sky by rendering a fullscreen quad in the background. I'm trying to do that by putting four vertices in a buffer and drawing them as a triangle strip. Everything compiles just fine and I can see all the other objects I've made before, but the sky is nowhere to be seen. What am I doing wrong?
main.cpp
GLint stage = glGetUniformLocation(myShader.Program, "stage");
//...
glBindVertexArray(FS); //has four coordinates (-1,-1,1) to (1,1,1) in buffer object
glUniform1i(stage, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
vertex shader
uniform int stage;

void main()
{
    if (stage == 1)
    {
        gl_Position = vec4(position, 1.0f);
    }
    else
    {
        //...
    }
}
fragment shader
uniform int stage;

void main()
{
    if (stage == 1)
    { //placeholder gray colour so I can see the sky
        color = vec4(0.5f, 0.5f, 0.5f, 1.0f);
    }
    else
    {
        //...
    }
}
I should also mention that I'm a beginner in OpenGL and that it really has to be in OpenGL 4.0 or later.
EDIT:
I've figured out where the problem is, but still don't know how to fix it. The square exists, but it only displays if I multiply it with the view and projection matrices (but then it doesn't stay glued to the screen and just rotates along with the rest of the scene, which I do not want). Essentially, I somehow need to switch back to 2D (screen space), draw the square, and switch back to 3D so that all the other objects work fine. How?
The issue was with putting a 1 as the z coord: together with w = 1.0 that lands exactly on the far plane, so the fragment fails the default GL_LESS depth test against the cleared depth value of 1.0. Putting 0.999f instead solved the issue.
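If you want the quad exactly on the far plane, an alternative is to relax the depth test while drawing it. A small sketch, reusing the names from the question:

// Hypothetical alternative: let fragments with depth == 1.0 pass,
// so the fullscreen quad can sit exactly on the far plane.
glDepthFunc(GL_LEQUAL);
glBindVertexArray(FS);
glUniform1i(stage, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
glDepthFunc(GL_LESS); // restore the default for the rest of the scene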
I have stumbled upon a problem while writing a program in which I am animating shapes using OpenGL.
Currently in the program, I am creating some shapes, with the following snippet
for(int i=50; i<=150; i=i+50){
    for(int j=50; j<=750; j=j+200){
        //Draw rectangle shape at position (j,i);
        //shape has additional capability for animations
    }
}
which gives me this output:
Now I have to resize these rectangles and move them all to another position. I have the final target point for the first rectangle, the one at position [0][0], where it should be moved. However, when I animate the size of these rectangles with something like
rectangle.resize(newWidth, newHeight, animationTime);
the rectangles, for obvious reasons, do not stick together, and I get something like:
I am looking for something like grouping, which can bind these shapes together so that even when different animations (resize, motion, etc.) are applied, the vertices or boundaries stay touching.
Note that grouping is the main thing here. I might have a requirement in the future in which I would have to group the two rectangles in the last column while independent animations (like rotations) are already happening on them. So I picture this as something like a plane/container holding those two rectangles, where the plane/container itself can be animated for position etc. I am fine with an algorithm/concept rather than code.
Instead of animating the geometry on the CPU, animate the scale/position matrices on the CPU and leave the transformation of the geometry to the vertex shader via the MVP matrix. Use one and the same scale matrix for all the rectangles (or two matrices, if your scale factor differs in X and Y).
PS. Here's an example:
float sc = 0;

void init()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void on_each_frame()
{
    // do other things

    // draw pulsating rectangles
    sc += 0.02f;
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    float s = (float)sin(sc) + 1.5f; // glScalef takes separate x,y,z factors
    glScalef(s, s, s);

    // draw rectangles as usual, **without** scaling them

    glPopMatrix();

    // do other things
}
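In a shader-based pipeline the same idea carries over: build the shared matrix on the CPU and upload it as a uniform, so the rectangle geometry itself is never touched. A rough sketch, assuming a mat4 model uniform in your vertex shader (prog and the uniform name are hypothetical):

// Hypothetical modern-GL sketch: one shared, animated scale matrix
// for all rectangles, applied in the vertex shader.
float s = (float)sin(sc) + 1.5f;
float model[16] = { // column-major uniform scale matrix
    s, 0, 0, 0,
    0, s, 0, 0,
    0, 0, s, 0,
    0, 0, 0, 1
};
glUseProgram(prog);
glUniformMatrix4fv(glGetUniformLocation(prog, "model"), 1, GL_FALSE, model);
// draw all rectangles with their unscaled geometry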
Think about implementing a 'DrawableAnimatableObject' which is a high level 3D object that is able to animate and draw itself, and contains your polygons (multiple rectangles in your case) as internal data. See the following incomplete code to give you an idea:
class DrawableAnimatableObject {
private:
    Mesh *mesh;

    Vector3 position;
    Quaternion orientation;
    Vector3 scale;

    Matrix transform;

public:
    DrawableAnimatableObject();
    ~DrawableAnimatableObject();

    //update the object properties for the next frame.
    //it updates the scale, position or orientation of your
    //object to suit your animation.
    void update();

    //Draw the object.
    //This function converts scale, orientation and position
    //information into proper OpenGL matrices and passes them
    //to the shaders prior to drawing the polygons,
    //therefore no need to resize the polygons individually.
    void draw();

    //Standard set-get;
    void setPosition(Vector3 p);
    Vector3 getPosition();
    void setOrientation(Quaternion q);
    Quaternion getOrientation();
    void setScale(float f);
    Vector3 getScale();
};
In this code, Mesh is a data structure that contains your polygons. Simply put, it can be a vertex-face list, or a more complicated structure like half-edge. The DrawableAnimatableObject::draw() function should look something like this:
void DrawableAnimatableObject::draw() {
    transform = Matrix::CreateTranslation(position) * Matrix::CreateFromQuaternion(orientation) * Matrix::CreateScale(scale);
    // in modern OpenGL this matrix should be passed to the shaders.
    // in legacy OpenGL you apply it with:
    glPushMatrix();
    glMultMatrixf(transform);
    glBegin(GL_QUADS);
    //...
    // Draw your rectangles here.
    //...
    glEnd();
    glPopMatrix();
}
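Grouping then comes almost for free: a container object can hold several DrawableAnimatableObjects and pre-multiply its own transform before drawing them, so the whole group animates as a unit while each member keeps its independent animation. A sketch along the same lines, legacy-OpenGL variant, using the same hypothetical Matrix type:

#include <vector>

// Hypothetical sketch: a group node that applies one shared transform
// to all of its children, so grouped shapes move and scale as a unit.
class ObjectGroup {
private:
    std::vector<DrawableAnimatableObject*> children;
    Matrix groupTransform; // animated just like a single object's transform
public:
    void add(DrawableAnimatableObject *c) { children.push_back(c); }
    void draw() {
        glPushMatrix();
        glMultMatrixf(groupTransform);  // shared animation for the whole group
        for (DrawableAnimatableObject *c : children)
            c->draw();                  // each child still applies its own transform
        glPopMatrix();
    }
};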
I need to draw a line between two meshes I've created. Each mesh is associated with a different model matrix. I've been thinking about how to do this, and I came up with this:
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(first_object_model_matrix);
glBegin(GL_LINES);
glVertex3f(0, 0, 0); // starting point of the line (first object's origin)
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(second_object_model_matrix);
glVertex3f(0, 0, 0); // ending point of the line (second object's origin)
glEnd();
But the problem is that I can't call glMatrixMode and glLoadMatrixf between glBegin and glEnd. I'm also using shaders and the programmable pipeline, so the idea of dropping back to the fixed pipeline to render my scene isn't appealing.
Can you:
suggest precisely how to draw a line between two meshes (I have their model matrices) with shaders,
or
suggest how to write code similar to the one above to draw a line given the two meshes' model matrices?
Calculate the line's two endpoints by multiplying each one with one of your model matrices. The following is pseudo-code. Since you're using Qt, you could use its built-in maths libraries to accomplish this.
vec3 line_point_1 = (model_matrix_object1 * vec4(0, 0, 0, 1)).xyz;
vec3 line_point_2 = (model_matrix_object2 * vec4(0, 0, 0, 1)).xyz;
// Draw Lines
The position of the second point can simply be taken from the translation part of model_matrix_object2; there is no need to multiply with (0,0,0,1).
This is because a 4x4 matrix in OpenGL is usually an affine transform consisting of a 3x3 rotational part and a translation vector, with the bottom row padded to 0,0,0,1. If you want to know where a 4x4 matrix translates to, simply take the vector in its right-most column.
See Given a 4x4 homogeneous matrix, how can i get 3D world coords? for more info.
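As a concrete illustration: with OpenGL's column-major storage the translation sits in elements 12-14 of the 16-float array, so the two endpoints can be read out directly (a small hypothetical helper):

// Hypothetical sketch: take the line's endpoints straight from the
// right-most column of each column-major OpenGL model matrix.
void line_endpoints(const float m1[16], const float m2[16], float out[6])
{
    out[0] = m1[12]; out[1] = m1[13]; out[2] = m1[14]; // first object's origin
    out[3] = m2[12]; out[4] = m2[13]; out[5] = m2[14]; // second object's origin
}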