I actually have two questions.
I am learning OpenGL, and I have noticed that many samples on the internet pass the view matrix, the projection matrix, the model matrix, or some combination of them to the shader as uniforms. I want to know why. You already have them available as gl_ModelViewMatrix, gl_ModelViewProjectionMatrix, and so on, so what is the point of passing them again as uniforms?
Anyway, I want to build a shadow map, but I don't understand what I should pass to the shader to transform coordinates into shadow-map space. I would prefer to keep using the standard gl_* matrices, since my program is already written around them.
Here is the code I have now:
void FirstPass()
{
    // Render the depth map from the light's point of view.
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, shadow_fbo);
    glViewport(0, 0, shadow_Width, shadow_Height);
    glClear(GL_DEPTH_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // depth only, no color writes
}
void SecondPass()
{
    // Render the scene normally, with the shadow map bound on unit 7.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glActiveTexture(GL_TEXTURE7);
    glBindTexture(GL_TEXTURE_2D, shadow_texmap);
}
void display(void)
{
    glUseProgramObjectARB(0);
    float myarray[16];

    // First pass: render depth from the light's position.
    FirstPass();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(light_positionFix[0], light_positionFix[1], light_positionFix[2], 0, 0, 0, 0, 1, 0);
    DrawObjects();

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Second pass: render the scene from the camera.
    SecondPass();
    if (!LightFollowCamera)
        glLightfv(GL_LIGHT0, GL_POSITION, light_positionFix);
    gluLookAt(eye[0], eye[1], eye[2], lookat[0], lookat[1], lookat[2], 0, 1, 0);
    if (LightFollowCamera)
    {
        light_positionFix[0] = eye[0];
        light_positionFix[1] = eye[1];
        light_positionFix[2] = eye[2];
    }
    DrawObjects();
    glutSwapBuffers();
}
Many of these built-in shader variables still work, but they have been deprecated since OpenGL 3. For an up-to-date list of the built-in variables, take a look at page 7 of this monstrous PDF; the outdated variables aren't even mentioned there anymore. That PDF covers the very latest version of OpenGL, which you shouldn't target as a beginner because you don't need the cutting-edge features. OpenGL 3.2 (core profile) is perfectly fine in terms of compatibility with 4.x and support from graphics vendors, and you'll find all the features you need as a beginner; take a look at the quick reference card. The old built-in variables are still mentioned in 3.2 but are marked as deprecated. The often-used term "modern OpenGL" refers to the OpenGL 3.x core profile or higher.
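To answer the first question more concretely: in a core profile there simply are no built-in matrices any more, so the application computes them itself (commonly with a library such as GLM) and hands them to the shader as uniforms. A minimal sketch, assuming GLEW/GLM and uniform names (u_Model, u_View, u_Projection) that are made up for this example:
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Upload the three matrices as uniforms; this replaces the deprecated
// gl_ModelViewMatrix / gl_ProjectionMatrix built-ins in core profiles.
void uploadMatrices(GLuint program,
                    const glm::mat4& model,
                    const glm::mat4& view,
                    const glm::mat4& projection)
{
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_Model"),      1, GL_FALSE, glm::value_ptr(model));
    glUniformMatrix4fv(glGetUniformLocation(program, "u_View"),       1, GL_FALSE, glm::value_ptr(view));
    glUniformMatrix4fv(glGetUniformLocation(program, "u_Projection"), 1, GL_FALSE, glm::value_ptr(projection));
}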
This might be a more basic OpenGL mistake than the title suggests.
I am doing segmentation using fragment shaders in OpenGL, which requires multiple rendering passes to perform successive operations (e.g. Gaussian blur + edge detection + segmentation).
As far as I understand, there is a common technique called ping-ponging, which takes two framebuffer objects (FBOs) and simply renders to one FBO using the other as input.
The thing is, one pass (shader_0 writing to FBO_1 using FBO_0 as input) works, but when I try to use shader_1 with FBO_0 as input and render into FBO_1, I get a completely transparent image.
I checked both shaders and they do work individually, yet together they produce this transparent output.
Here is the set of calls I do for each pass, with segmentationBuffers containing the two FBOs, respectively used as input and output for this pass:
glBindFramebuffer(
    GL_FRAMEBUFFER,
    segmentationBuffers[lastSegmentationFboRenderedTo]->FramebufferName
);
glViewport(0, 0, windowWidth, windowHeight);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
currentStepShader->UseProgram();
glClearColor(0, 0, 0, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
lastSegmentationFboRenderedTo = (lastSegmentationFboRenderedTo + 1) % 2;
glActiveTexture(GL_TEXTURE0);
glBindTexture(
    GL_TEXTURE_2D,
    segmentationBuffers[lastSegmentationFboRenderedTo]->renderedTexture
);
glUniform1i(glGetUniformLocation(shader->shaderPtr, "inputTexture"), 0);
glUniform2fv(
    glGetUniformLocation(shader->shaderPtr, "texCoordOffsets"),
    25,
    texCoordOffsets
);
quad->Draw(GL_TRIANGLES, shader,
           orthographicProjection,
           glm::mat4(1.0f),
           getOverlayModelMatrix()
);
And as stated above, doing one pass yields correct intermediate results, but doing two in a row gives a transparent frame. I suspect this is a more basic OpenGL mistake than it seems, but any help is appreciated!
I solved the issue by removing the call to glEnable(GL_DEPTH_TEST);.
I suspect that by enabling depth testing, OpenGL was discarding fragments from subsequent computation steps since they had the same depth value.
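In code, the fix amounts to leaving the depth test off for these full-screen passes; a minimal sketch of the changed line from the pass setup above:
// Full-screen image-processing passes don't need depth testing: every pass
// draws the same quad at the same depth, so GL_LESS rejects the later ones.
glDisable(GL_DEPTH_TEST);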
My OpenGL window is drawn like this:
glClearColor(0.3f, 0.4f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
I want to use a texture to fill up the window.
Is there an easier way to do that than creating another VBO and EBO besides the ones I'm already using for my triangles? After all, glClearColor already fills the background with a solid color.
The most direct and generally most efficient way to draw a texture to the window is by using glBlitFramebuffer().
To use this, you need to create an FBO, and attach your texture texId to it:
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, texId, 0);
Note that the code above binds the FBO to GL_READ_FRAMEBUFFER, since we want to use it as the source of the blit.
Then, to copy the content:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // if not already bound
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
This is for the case where texture and window have the same size. Otherwise, you can specify different sizes in the first 8 arguments, and may want to use GL_LINEAR for the last parameter.
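For instance, a sketch of the scaled case (texW/texH and winW/winH are placeholder names for the texture and window sizes):
// Stretch a texW x texH texture over a winW x winH window while blitting;
// GL_LINEAR filters the copy when the sizes differ.
glBlitFramebuffer(0, 0, texW, texH,      // source rectangle (the texture)
                  0, 0, winW, winH,      // destination rectangle (the window)
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);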
Using glBlitFramebuffer() has a few advantages over drawing a window sized textured quad:
It needs fewer API calls.
You don't need to write a shader for the copy operation.
You don't need to bind a different shader program, which can reduce overhead.
The driver may have a more optimized code path for the operation, compared to using an app provided shader and draw call.
Many GPUs have dedicated units for blitting data, which can be more efficient than the programmable shader units. They can also potentially run in parallel to the general purpose programmable part of the GPU, allowing the copy to be executed in parallel with rendering. If that applies, the performance gain can be very substantial.
In one word: No.
Well, in legacy OpenGL there is glDrawPixels, but this function was never well supported and is dead slow on most implementations. You'd better forget I told you about it. It has also been removed from modern OpenGL and never existed in OpenGL ES.
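For completeness only, and with all the caveats above, the legacy path looked roughly like this; pixels is assumed to point at tightly packed RGBA data the size of the window:
// Deprecated, slow path: copy client-memory pixels straight to the framebuffer.
glWindowPos2i(0, 0);   // raster position at the window's lower-left corner (GL 1.4+)
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);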
There are already some answers to this question, but I want to add some more alternatives, for completeness:
1. attributeless rendering
With modern GL, you can render completely without vertex attributes. You can put the four 2D coordinates of the full-screen rectangle directly into the vertex shader as a const array and index it with gl_VertexID:
// VERTEX SHADER
#version 150 core
out vec2 v_tex;
const vec2 pos[4]=vec2[4](vec2(-1.0, 1.0),
vec2(-1.0,-1.0),
vec2( 1.0, 1.0),
vec2( 1.0,-1.0));
void main()
{
v_tex=0.5*pos[gl_VertexID] + vec2(0.5);
gl_Position=vec4(pos[gl_VertexID], 0.0, 1.0);
}
// FRAGMENT SHADER
#version 150 core
in vec2 v_tex;
uniform sampler2D texSampler;
out vec4 color;
void main()
{
color=texture(texSampler, v_tex);
}
If your texture exactly matches the resolution of your viewport (so you are not scaling the texture at all), you can completely remove the v_tex varying and use color=texelFetch(texSampler, ivec2(gl_FragCoord.xy), 0) in the FS, as @datenwolf suggested in his comment.
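A sketch of that fragment-shader variant, assuming the same texSampler uniform and matching texture/viewport sizes:
// FRAGMENT SHADER (texture resolution == viewport resolution)
#version 150 core
uniform sampler2D texSampler;
out vec4 color;
void main()
{
    // Fetch the texel lying directly under this fragment; no varying needed.
    color = texelFetch(texSampler, ivec2(gl_FragCoord.xy), 0);
}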
In any case, you still need some VAO bound, even if no attributes are enabled in it. So this method requires you to do the following once during initialization:
Create and compile the shaders and link them to the program
Create a new VAO name by a glGenVertexArrays() call
And for drawing, you have to:
Bind the texture you want to draw
Use the program
Bind the (still empty) VAO
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
You might also be able to simply re-use the currently bound VAO. As the shader does not access any attributes, it does not matter what data your VBOs provide or which attributes are currently enabled.
This method requires you to switch the shader, which isn't exactly cheap either, so it might be better to just switch the buffer bindings and keep the current shader. But you might need to switch the shader anyway.
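Put together, the host-side code could look roughly like this (a sketch; program and texture are assumed to have been created as described above):
// One-time setup: an empty VAO is enough, no VBO or attribute setup required.
GLuint vao = 0;
glGenVertexArrays(1, &vao);

// Per draw:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);  // the texture to display
glUseProgram(program);                  // the shader pair shown above
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // 4 corners generated from gl_VertexID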
2. NVIDIA-specific extension
NVIDIA provides a specific extension for the task of drawing a texture to the screen: NV_draw_texture. It introduces the glDrawTextureNV() function, which allows drawing a texture without changing any other GL state. Quoting from the overview section of the extension spec:
While this functionality can be obtained in unextended OpenGL by drawing a
rectangle and using a fragment shader to do a texture lookup,
DrawTextureNV() is likely to have better power efficiency on
implementations supporting this extension. Additionally, use of this
extension frees the application developer from having to set up
specialized shaders, transformation matrices, vertex attributes, and
various other state in order to render the rectangle.
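If the extension is present, the call might look roughly like this (a sketch; width and height stand for the window size, and passing 0 as the sampler uses the texture's own sampler state):
// Draw `texture` over the whole window using NV_draw_texture.
// Arguments: texture, sampler, window rect (x0,y0)-(x1,y1), depth z,
// and texture coordinates (s0,t0)-(s1,t1).
if (GLEW_NV_draw_texture) {               // assumes GLEW exposes the extension flag
    glDrawTextureNV(texture, 0,
                    0.0f, 0.0f, (float)width, (float)height,
                    0.0f,
                    0.0f, 0.0f, 1.0f, 1.0f);
}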
The drawback of this method is of course that it is NVIDIA-specific, so it is probably of little practical use in a general GL application.
You can render your texture to a fullscreen quad using an orthographic projection:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glDisable(GL_LIGHTING);
// Set up an orthographic projection
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
// Render a window-sized quad
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(0, 1); glVertex2f(0, height);
glTexCoord2f(1, 1); glVertex2f(width, height);
glTexCoord2f(1, 0); glVertex2f(width, 0);
glEnd();
// Restore the projection matrix and switch back to modelview
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glDisable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
Render this into your framebuffer instead of glClearColor.
Is it possible to use both old and new OpenGL in one program?
Assuming I've understood the difference.
In my program I've used:
WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
WGL_CONTEXT_MINOR_VERSION_ARB, 2,
But can I, for example, use a function that contains this to draw a grid (old style):
glBegin(GL_LINES);
glVertex3f(-50, 0, (GLfloat)x);
glVertex3f( 50, 0, (GLfloat)x);
glVertex3f((GLfloat)x, 0, -50);
glVertex3f((GLfloat)x, 0, 50);
glEnd();
And a function like this to texture and render something (new style):
glUseProgram(myShader->handle());
glBindTexture(GL_TEXTURE_2D, texName);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindVertexArray(m_vaoID[0]); //select first VAO
glDrawArrays(GL_TRIANGLES, 0, 6); //draw two triangles
glDisable(GL_BLEND);
glUseProgram(0);
glBindVertexArray(0);
Or does the use of newer versions, with VAOs/VBOs, make functions that contain glBegin/glEnd obsolete?
I hope that makes sense. Please excuse the naivety.
If it's an OpenGL 3.2 or higher compatibility profile then yes, you can mix immediate mode calls with proper rendering. Whether you should or not is another matter (you probably shouldn't in production code, but it can be useful for debugging). With a core profile, you won't be able to use the deprecated APIs.
Note that prior to 3.2 there was no concept of profiles, so with a 3.0/3.1 context things are more complicated (see the link above), but in practice there isn't much use in targeting 3.0/3.1, since just about any 3.0-capable hardware will be fine with 3.2.
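For reference, requesting a compatibility profile explicitly would look roughly like this (a sketch; it assumes wglCreateContextAttribsARB has already been loaded, hdc is your device context, and the WGL_ARB_create_context_profile extension is available):
// Ask for a 3.2 *compatibility* profile so both glBegin/glEnd and the
// VAO/shader path stay available in the same context.
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0
};
HGLRC context = wglCreateContextAttribsARB(hdc, 0, attribs);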
In my work I overlay part of a captured frame with an image. I open my webcam with OpenCV, transform the captured frame into a texture, and display it in a GLUT window. I also overlay part of this texture with this image:
I do this in real time, and the result is:
As you can see, the edges of the projected image are inaccurate. I think it is an aliasing problem, but I don't know how to do antialiasing with OpenGL. I've searched the web, but I haven't found a good solution to my problem.
In my "calculate" function I transform the Mat image into a texture using the following code:
GLvoid calculate(){
    ...
    ...
    cvtColor(image, image, CV_BGR2RGB);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, textures[1]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    //glTexImage2D(GL_TEXTURE_2D, 0, 4, image.cols, image.rows, 0, GL_RGB, GL_UNSIGNED_BYTE, image.data);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, image.cols, image.rows, GL_RGB, GL_UNSIGNED_BYTE, image.data);
}
and I show the result using this code:
GLvoid Show(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    // Projection matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, WIDTH, HEIGHT, 0);
    // Modelview matrix
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    ...
    ...
    glBindTexture(GL_TEXTURE_2D, textures[1]);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f((GLfloat)(coord[3].x), (GLfloat)(coord[3].y));
    glTexCoord2f(1.0f, 0.0f); glVertex2f((GLfloat)(coord[0].x), (GLfloat)(coord[0].y));
    glTexCoord2f(1.0f, 1.0f); glVertex2f((GLfloat)(coord[1].x), (GLfloat)(coord[1].y));
    glTexCoord2f(0.0f, 1.0f); glVertex2f((GLfloat)(coord[2].x), (GLfloat)(coord[2].y));
    glEnd();
    glFlush();
    glutSwapBuffers();
}
In initialization function I write this:
GLvoid Init() {
    glGenTextures(2, textures);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_POLYGON_SMOOTH);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_DONT_CARE);
    glDisable(GL_DEPTH_TEST);
}
but it doesn't work...
I work on Windows 7 x64 with OpenGL 4.0 and GLUT 3.7. My video card is an NVIDIA GeForce GT 630. I also enabled antialiasing from the NVIDIA Control Panel, but nothing changed.
Does anyone know how to help me?
I solved my problem! I used GLFW instead of GLUT, as @Michael IV suggested!
In order to do antialiasing with GLFW I used this line of code:
glfwOpenWindowHint(GLFW_FSAA_SAMPLES,4);
and the result now is very good, as you can see in the following image.
Thanks for your help!
First, I wonder why you are using OpenGL 4.0 to work with the fixed (deprecated) pipeline...
But let's get to the problem. What you need is MSAA. I am not sure that enabling it via the control panel will always do the trick; usually it is done inside the code.
Unfortunately for you, you selected GLUT, which has no option to set the hardware MSAA level. If you want to be able to do so, switch to GLFW. Another option is to do it manually, but that implies using custom FBOs. In such a scenario you can create an FBO with a multisampled texture attachment, setting the MSAA level for the texture (you can also apply custom multisampling algorithms in the fragment shader if you wish).
Here is a thread on this topic.
GLFW allows you to specify the MSAA level on window setup. See the related API.
MSAA does degrade performance, but how much depends on your hardware and probably on the OpenGL drivers.
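If you do go the manual route mentioned above, a rough sketch of a multisampled FBO that is resolved into the window at the end of the frame could look like this (the sample count of 4 and the width/height names are placeholders):
// Render the scene into a multisampled FBO, then resolve it to the
// default framebuffer with a blit.
GLuint fbo, colorRb, depthRb;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &colorRb);
glGenRenderbuffers(1, &depthRb);

glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

// ... draw the scene into fbo ...

// Resolve the samples into the window framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);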
I'm trying to use GWEN to draw some GUI elements on top of my OpenGL scene. It seems to be set up correctly, but nothing from GWEN is actually being drawn (visibly, at least). I'm using a custom renderer, which is essentially GWEN's stock OpenGL renderer but with a different function for loading textures, and with OpenGL::Begin() and OpenGL::End() replaced with these:
void coRenderer::Begin()
{
    glUseProgram(0);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(0);
    glEnable(GL_BLEND);
    glMatrixMode(GL_PROJECTION);     // Select The Projection Matrix
    glPushMatrix();                  // Store The Projection Matrix
    glLoadIdentity();
    glOrtho(0, screen->w, screen->h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glActiveTexture(GL_TEXTURE0);
}

void coRenderer::End()
{
    Flush();
    glMatrixMode(GL_PROJECTION);     // Select The Projection Matrix
    glPopMatrix();                   // Restore The Old Projection Matrix
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(1);
    glEnable(GL_TEXTURE_2D);
}
The code for GWEN's OpenGL renderer is here:
http://gwen.googlecode.com/svn/trunk/trunk/gwen/Renderers/OpenGL/OpenGL.cpp
By the way, I'm using OpenGL 2.1, not 3.0+.
Ah GWEN. That frustrating GUI library.
When I started using it and integrating it into the engine we wrote at school, I had the same issue as you (I was using the stock OpenGL renderer, however). It turned out the GUI was being positioned wrongly; calling glLoadIdentity() to reset the modelview matrix to identity seemed to resolve it.
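In terms of the Begin() code above, that fix would amount to something like the following sketch (untested against your renderer; if you push the matrix here, pop it again in End()):
// In coRenderer::Begin(), after switching back to the modelview matrix,
// reset it so GWEN's screen-space quads aren't affected by whatever
// transform the 3D scene left behind.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();      // preserve the scene's modelview matrix (pop in End())
glLoadIdentity();
glActiveTexture(GL_TEXTURE0);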
The issue you are having could well turn out to be the same as mine, or there could be a problem with your custom OpenGL renderer. I'm not sure how much you know about GWEN or how it works, but it runs on a single texture that skins the GUI. Are you loading that in? Perhaps your texture loader isn't loading it correctly.
Try using your debugger and stepping through your program. Areas of interest would be where you attempt to load the GUI skin, where you assign the screen space that GWEN can use, and where you actually attempt to render the GUI.