I've been having a problem trying to get OpenGL 3.2 to work and after spending a few hours trying to figure out what was wrong I realized that it does not support glBegin. I use that command probably about 50-100 times in my engine to draw full screen quads and GUI elements. So what is a simple way to just draw a rectangle with OpenGL 3.2? Do I actually have to create a vertex buffer, fragment shader, and vertex shader to do something so simple?!
Do I actually have to create a vertex buffer, fragment shader, and vertex shader to do something so simple?!
Yep, no freebies in Core profile.
Related
Is there any way to draw a texture to the screen without using a shader? Something like the immediate mode from before OpenGL 3.1 (glBegin, glTexCoord, etc.). I know that vertex arrays and index buffers are necessary, but what about the shader? I just need to show a simple texture fullscreen.
So basically no. The proper OpenGL 3 way is to make a shader.
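For a fullscreen textured quad, the shader pair is about as small as shaders get. A minimal sketch for GLSL 1.50 core (attribute names `position`/`texcoord` and the sampler name `tex` are just illustrative choices, not anything mandated by OpenGL):

```glsl
// Vertex shader: quad corners are assumed to already be in
// normalized device coordinates, so no matrix is needed.
#version 150 core
in vec2 position;   // e.g. (-1,-1), (1,-1), (-1,1), (1,1)
in vec2 texcoord;
out vec2 vTexcoord;
void main() {
    vTexcoord = texcoord;
    gl_Position = vec4(position, 0.0, 1.0);
}
```

```glsl
// Fragment shader: just sample the texture.
#version 150 core
uniform sampler2D tex;
in vec2 vTexcoord;
out vec4 fragColor;
void main() {
    fragColor = texture(tex, vTexcoord);
}
```

Put the four corners in a small VBO once at startup and every "draw a rectangle" call becomes a single glDrawArrays(GL_TRIANGLE_STRIP, 0, 4).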
I'm new to OpenGL, and I'm trying to understand vertex and fragment shaders. It seems you can use a vertex shader to make a gradient if you define the color you want each of the vertices to be, but it seems you can also make gradients using a fragment shader if you use the gl_FragCoord variable, for example.
My question is, since you seem to be able to make color gradients using both kinds of shaders, which one is better to use? I'm guessing vertex shaders are faster or something since everyone seems to use them, but I just want to make sure.
... since everyone seems to use them
Using vertex and fragment shaders is mandatory in modern OpenGL for rendering absolutely everything.† So everyone uses both. It's the vertex shader's responsibility to compute the color at the vertices, OpenGL's to interpolate it between them, and the fragment shader's to write the interpolated value to the output color attachment.
† OK, you can also use a compute shader with imageStore, but I'm talking about the rasterization pipeline here.
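To make the two approaches concrete: the per-vertex route just writes the color in the vertex shader and lets interpolation produce the gradient, while the per-fragment route computes it from gl_FragCoord. A sketch of the latter (the `resolution` uniform is an assumption; you would upload your viewport size to it yourself):

```glsl
// Fragment shader: vertical red-to-blue gradient computed per fragment.
#version 150 core
uniform vec2 resolution;   // assumed uniform holding the viewport size
out vec4 fragColor;
void main() {
    float t = gl_FragCoord.y / resolution.y;   // 0 at bottom, 1 at top
    fragColor = mix(vec4(1.0, 0.0, 0.0, 1.0),  // red
                    vec4(0.0, 0.0, 1.0, 1.0),  // blue
                    t);
}
```

For a simple linear gradient the per-vertex version is cheaper (the math runs once per vertex instead of once per fragment), but for anything non-linear across a single triangle you need the fragment-shader version.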
So I have an OpenGL program that draws a group of objects. When I draw these objects I want to use a shader program consisting of a vertex shader, and a vertex shader exclusively. Basically, I am aiming to adjust the height of the model inside the vertex shader depending on a texture calculation. And that is it. Otherwise I want the object to be drawn as if using naked OpenGL (no shaders). I do not want to implement a fragment shader.
However, I haven't been able to find out how to make a shader program with only a vertex shader and nothing else. Setting aside the part about adjusting my model's height, so far I have:
gl_FrontColor = gl_Color;
gl_Position = modelViewProjectionMain * Position;
It transforms the object to the correct position all right, but when I do this I lose the texture coordinates and also the lighting information (normals are lost). What am I missing? How do I write a "do-nothing" vertex shader? That is, a vertex shader you could turn on and off when drawing a textured .obj with normals, and there would be no difference?
You can't write a shader with a partial implementation. Either you do everything in a shader or you rely completely on fixed functionality (deprecated) for a given object.
What you can do is this:
glUseProgram(handle);
// draw objects with the shader
glUseProgram(0);
// draw objects with fixed functionality
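For completeness, the `handle` being toggled above is an ordinary program object. A hedged sketch of how it is typically built (`vertexSrc` and `fragmentSrc` are placeholders for your GLSL source strings; error checking is omitted for brevity):

```c
/* vertexSrc and fragmentSrc are assumed to point at your GLSL sources. */
const GLchar *vertexSrc   = /* ... your vertex shader source ... */;
const GLchar *fragmentSrc = /* ... your fragment shader source ... */;

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertexSrc, NULL);
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragmentSrc, NULL);
glCompileShader(fs);

GLuint handle = glCreateProgram();
glAttachShader(handle, vs);
glAttachShader(handle, fs);
glLinkProgram(handle);
/* check GL_COMPILE_STATUS / GL_LINK_STATUS in real code */
```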
To expand a little on the entirely correct answer by Abhishek Bansal, what you want to do would be nice but is not actually possible. You're going to have to write your own vertex and fragment shaders.
From your post, by "naked OpenGL" you mean the fixed-function pipeline in OpenGL 1 and 2, which included built-in lighting and texturing. Shaders in OpenGL entirely replace the fixed-function pipeline rather than extending it. And in OpenGL 3+ the old functionality has been removed, so now they're compulsory.
The good news is that vertex/fragment shaders that perform the same function as the original OpenGL lighting and texturing are easy to find and easy to modify for your purpose. The OpenGL Shading Language book by Rost, Licea-Kane, et al. has a whole chapter on "Emulating OpenGL Fixed Functionality". Or you could get a copy of the 5th edition of the OpenGL SuperBible book and code (not the 6th edition), which came with a bunch of useful predefined shaders. Or if you prefer online resources to books, there are the NeHe tutorials.
Writing shaders seems a bit daunting at first, but it's easier than you might think, and the extra flexibility is well worth it.
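As a starting point, here is a hedged sketch of a near "do-nothing" pair for the compatibility profile (GLSL 1.20), which keeps texturing and vertex colors working; fixed-function lighting is NOT reproduced here and would have to be reimplemented from the emulation chapter mentioned above (the sampler name `tex0` is an illustrative choice):

```glsl
// Vertex shader: forward color and texture coordinates, transform position
// exactly as the fixed-function pipeline would (ftransform).
#version 120
void main() {
    gl_FrontColor  = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}
```

```glsl
// Fragment shader: modulate the texture by the interpolated vertex color,
// matching fixed-function GL_MODULATE texturing (no lighting).
#version 120
uniform sampler2D tex0;
void main() {
    gl_FragColor = texture2D(tex0, gl_TexCoord[0].st) * gl_Color;
}
```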
Simple task: draw a fullscreen quad with a texture, nothing more, so we can be sure the texture fills the whole screen. (We will do some more shader magic later.)
Drawing a fullscreen quad with a simple fragment shader was easy, but now we have been stuck for a whole day trying to make it textured. We have read plenty of tutorials, but none of them helped us. Those about SDL mainly use OpenGL 1.x, and those about OpenGL 2.0 are not about texturing, or not about SDL. :(
The code is here. Everything is in colorLUT.c, and the fragment shader is in colorLUT.fs. The result is a window of the same size as the image, and if you comment out the last line in the shader, you get a nice red/green gradient, so the shader itself is fine.
Texture initialization hasn't changed compared to OpenGL 1.4. Tutorials will work fine.
If the fragment shader works but you don't see the texture (and get a black screen), texture loading is broken or the texture hasn't been set up correctly. Disable the shader and try displaying a textured polygon with the fixed-function pipeline.
You may want to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before initializing the texture. The default value is 4.
An easier way to align the texture to the screen is to add a vertex shader and pass texture coordinates through, instead of trying to calculate them using gl_FragCoord.
You're passing the surface size into the "resolution" uniform. This is an error; you should be passing the viewport size instead.
You may want to generate mipmaps. Either generate them yourself, or use GL_GENERATE_MIPMAP, which is available in OpenGL 2 (but has been deprecated in later versions).
OpenGL.org has the specifications for OpenGL 2.0 and GLSL 1.10. Download them and use them as a reference when in doubt.
The NVIDIA OpenGL SDK has examples you may want to check; they cover shaders.
And there's the "OpenGL Orange Book" (OpenGL Shading Language), which deals specifically with shaders.
Next time, include the code in the question itself.
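Tying the texture-related points above together, a hedged sketch of the 2D texture setup (names like `program`, `tex`, and the image variables `width`/`height`/`pixels` are placeholders for your own; the sampler uniform is assumed to be called "tex"):

```c
/* Unpack alignment first: the default of 4 breaks RGB images whose
 * row size is not a multiple of 4. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* GL_LINEAR min filter avoids needing mipmaps at all for a
 * screen-sized texture. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Point the shader's sampler at texture unit 0. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(glGetUniformLocation(program, "tex"), 0);
```

A common cause of the "black screen" symptom is forgetting the min filter: with the default GL_NEAREST_MIPMAP_LINEAR and no mipmaps uploaded, the texture is incomplete and samples as black.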
So I want to draw lots of quads (or even cubes), and stumbled across this lovely thing called the geometry shader.
I kinda get how it works now, and I could probably manipulate it into drawing a cube for every vertex in the vertex buffer, but I'm not sure if it's the right way to do it. The geometry shader runs between the vertex shader and the fragment shader, so it works on the vertices after the vertex shader has transformed them. But I need them in world space to do transformations.
So, is it OK to have my vertex shader simply pipe the inputs to the geometry shader, and have the geometry shader multiply by the modelviewproj matrix after creating the primitives? It should be no problem with the unified shader architecture, but I still feel queasy when making the vertex shader redundant.
Are there alternatives? Or is this really the 'right' way to do it?
It is perfectly OK.
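A minimal sketch of that arrangement, assuming a `mvp` matrix uniform and a half-unit quad per input point (all names here are illustrative):

```glsl
// Pass-through vertex shader: forward the untransformed position.
#version 150 core
in vec3 position;
out vec3 vPosition;
void main() {
    vPosition = position;   // no matrix here; the GS transforms later
}
```

```glsl
// Geometry shader: expand each point into a quad, applying the
// modelviewprojection matrix only at emit time.
#version 150 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform mat4 mvp;           // assumed modelviewprojection uniform
in vec3 vPosition[];
void main() {
    vec3 p = vPosition[0];
    gl_Position = mvp * vec4(p + vec3(-0.5, -0.5, 0.0), 1.0); EmitVertex();
    gl_Position = mvp * vec4(p + vec3( 0.5, -0.5, 0.0), 1.0); EmitVertex();
    gl_Position = mvp * vec4(p + vec3(-0.5,  0.5, 0.0), 1.0); EmitVertex();
    gl_Position = mvp * vec4(p + vec3( 0.5,  0.5, 0.0), 1.0); EmitVertex();
    EndPrimitive();
}
```

A vertex shader that only forwards its inputs is completely legal; with a geometry shader in the pipeline it does not even need to write gl_Position.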
Aside from that, consider using instanced rendering (glDrawArraysInstanced, glDrawElementsInstanced) with a vertex attribute divisor (glVertexAttribDivisor). This way you can accomplish the same task without a geometry shader at all.
For example, you can have a regular cube geometry bound. Then you add a special vertex attribute carrying the cube position you want for each instance. You bind it with a divisor of 1, which makes it advance once per instance drawn. Then draw the cube using glDraw*Instanced, specifying the number of instances.
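The setup described above can be sketched as follows (identifiers like `instancePositionsVBO`, `posAttrib`, and `numCubes` are hypothetical; the 36 assumes an indexed cube of 12 triangles):

```c
/* Per-instance cube positions: one vec3 per cube, advanced per instance. */
glBindBuffer(GL_ARRAY_BUFFER, instancePositionsVBO);
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribDivisor(posAttrib, 1);  /* advance once per instance, not per vertex */

/* Cube index buffer is assumed bound to GL_ELEMENT_ARRAY_BUFFER.
 * Draws the 36-index cube numCubes times in one call. */
glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0, numCubes);
```

In the vertex shader the instanced attribute arrives like any other input, so positioning each cube is just adding it to the cube's local vertex position before the matrix multiply.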
You can also sample input data from textures, using gl_VertexID or gl_InstanceID for coordinates.