I've just realized that GL_ALPHA_TEST has been deprecated since OpenGL 3.0, so I can't use it with glEnable(). I quickly tried to google how to replace it (and why it was deprecated), but failed to find an answer to this simple question. I also couldn't find any information about the removal of GL_ALPHA_TEST in the Khronos documentation.
I suspect that the only way now to discard fragments according to their alpha value is the "discard" keyword in fragment shaders. Am I right?
Yes, you are correct. GL_ALPHA_TEST is not in the core profile, so you must use discard in the fragment shader to get the same effect.
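For reference, a minimal sketch of the fragment-shader replacement (the sampler and varying names here are placeholders, not anything mandated by OpenGL):

    #version 150

    uniform sampler2D tex;   // placeholder sampler
    in vec2 texCoord;        // placeholder varying
    out vec4 fragColor;

    void main()
    {
        vec4 color = texture(tex, texCoord);
        // Equivalent of glEnable(GL_ALPHA_TEST) + glAlphaFunc(GL_GREATER, 0.5):
        if (color.a <= 0.5)
            discard;
        fragColor = color;
    }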
The other alternative is to use a compatibility context, but this is not available on all systems.
I have read that the first parameter of glDrawElements is mode:
http://www.opengl.org/sdk/docs/man3/xhtml/glDrawElements.xml
Symbolic constants GL_POINTS, GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, GL_LINE_STRIP_ADJACENCY, GL_LINES_ADJACENCY, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_TRIANGLES, GL_TRIANGLE_STRIP_ADJACENCY and GL_TRIANGLES_ADJACENCY are accepted.
I do not see GL_POLYGON there. Does that mean I cannot use GL_POLYGON? And what if I have 10 indices? Do I need to break them into several polygons of 3 indices each? If so, how do I do that?
The GL3 and GL4 level man pages on www.opengl.org only document the Core Profile of OpenGL. GL_POLYGON is deprecated, and was not part of the Core Profile when the spec was split into Core and Compatibility profiles in OpenGL 3.2.
You can still use GL_POLYGON if you create a context that supports the Compatibility Profile. But if you're just starting out, I would suggest sticking to Core Profile features. If you do need documentation for the deprecated features, you'll have to go back to the GL2 man pages.
To draw a polygon, GL_TRIANGLE_FAN is the easiest replacement. You can use the same set of vertices for a triangle fan as you would use for GL_POLYGON, and it will produce the same result.
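As a rough sketch of that replacement (assuming a VAO with the polygon's 10 vertices, stored in polygon order, is already bound; the index values are just illustrative):

    /* Draw a convex 10-vertex polygon as a triangle fan instead of GL_POLYGON. */
    const GLuint indices[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};

    GLuint ebo;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    /* Same vertex order you would have used with GL_POLYGON: */
    glDrawElements(GL_TRIANGLE_FAN, 10, GL_UNSIGNED_INT, (void *)0);

Note that, like GL_POLYGON itself, this only renders correctly for convex polygons; a concave polygon needs proper triangulation.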
You are linking to the GL3 manual pages, by the way.
Since GL_POLYGON was deprecated in 3.0 and removed in 3.1, you are not going to find it listed there. In fact, you will find some tokens there that are only supported in GL 3.2 (the adjacency primitives, which were introduced alongside geometry shaders); fortunately, that is documented in the manual page itself, unlike the deprecation of GL_POLYGON.
For the Compatibility Profile (which you are using), consult the GL2 manual pages instead.
Is there a way to read a fragment from the framebuffer that is currently being rendered to?
So, I'm looking for a way to read the color information of the fragment at the position that the current fragment will probably overwrite; in other words, the fragment previously rendered at that exact position.
I found that gl_FragData and gl_LastFragData are added to shaders by certain EXT_ extensions; if they are what I need, could somebody explain how to use them?
I am looking either for a OpenGL or OpenGL ES 2.0 solution.
EDIT:
All this time I was searching for a solution that would give me some kind of read-and-write "uniform" accessible from shaders. For anyone out there searching for a similar thing: OpenGL 4.3+ supports image and buffer storage types. They allow both reading and writing simultaneously, and in combination with compute shaders they have proved to be a very powerful tool.
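As a hedged illustration of that (GLSL 4.30; the binding point, image format and the invert operation are all just assumptions for the example):

    #version 430
    layout(local_size_x = 16, local_size_y = 16) in;
    layout(rgba8, binding = 0) uniform image2D img;   // readable and writable

    void main()
    {
        ivec2 p = ivec2(gl_GlobalInvocationID.xy);
        vec4 c = imageLoad(img, p);          // read the current value...
        imageStore(img, p, vec4(1.0) - c);   // ...and write a new one back
    }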
Your question seems rather confused.
Part of your question (the first sentence) asks if you can read from the framebuffer in the fragment shader. The answer is, generally, no. There is an OpenGL ES 2.0 extension that lets you do so, but it's only supported on some hardware. In desktop GL 4.2+, you can use image load/store to get a similar effect. But then you can't render to that image anymore; you have to write your data using image store functions.
gl_LastFragData is pretty simple: it's the color from the sample in the framebuffer that will be overwritten by this fragment shader. You can do with it what you wish, if it is available.
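For example, on ES 2.0 hardware exposing GL_EXT_shader_framebuffer_fetch, a fragment shader can do a manual blend with it; the incoming color below is just a placeholder:

    #extension GL_EXT_shader_framebuffer_fetch : require
    precision mediump float;

    void main()
    {
        vec4 dst = gl_LastFragData[0];        // what's in the framebuffer now
        vec4 src = vec4(1.0, 0.0, 0.0, 0.5);  // hypothetical incoming color
        gl_FragColor = mix(dst, src, src.a);  // manual "over"-style blend
    }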
The second part of your question (the second paragraph) is a completely different question. There, you're asking about fragments that were potentially never written to the framebuffer. You can't read fragments from a shader; you can only read images. And if a fragment fails the depth test, its data was never written to an image, so you can't read it.
With most NVIDIA hardware you can use the GL_NV_texture_barrier extension to read from a texture that's currently bound to a framebuffer. But bear in mind that you won't be able to read data any more recent than what was produced in the previous draw call.
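Roughly like this (a sketch only; drawPassOne/drawPassTwo are hypothetical helpers, and the texture is assumed to be both attached to the bound FBO and bound for sampling):

    drawPassOne();          /* writes into the texture through the framebuffer */
    glTextureBarrierNV();   /* makes those writes visible to texture fetches */
    drawPassTwo();          /* may now sample what pass one wrote */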
My background: I first started experimenting with OpenGL some months ago, for no particular purpose, just fun. I started reading the OpenGL Red Book and got as far as making a planetary system with a lot of different lighting. That lasted for a month, and then my interest in OpenGL went away. It awoke again a week or so ago, and as I gathered from some SO posts, the Red Book is outdated and the OpenGL SuperBible is a better source for learning. So I started reading it. I like the concept of shaders, but there's a real mess going on in my brain because of the transition from my old memories of the fixed pipeline to the new concept of shaders.
Question: I would like to write some statements which I think are true and I am asking OpenGL experts to verify them (i.e. whether I am understanding correctly, not quite correctly or absolutely incorrectly). So...
1) If we don't use any shader program, nothing changes. We have a current color, a current normal, a current transformation matrix, current everything, and as soon as we call glVertex*(...) these current values are taken and the vertex is fed to ... I don't know what. The fact is that it's transformed with the current matrix, the current color and normal are applied to it, etc.
2) As soon as we use a shader program, all of the above stops working. That is, glColor, glRotate etc. make no sense (do they?). I mean, glColor still sets the current color and glRotate still multiplies the current matrix by a rotation matrix, but these aren't used at all. Instead, we feed vertex attributes with glVertexAttrib. Which attribute means what depends entirely on our vertex shader and the in-variable binding. We also find and set the values of the uniforms, then call glVertex, and the shader is executed (I don't know whether immediately or once glEnd() is called). The actual vertex and fragment processing is done entirely manually in the shader program.
3) Shaders don't add anything to depth testing. That is, I don't need to take care of it in a shader. I just call glEnable(GL_DEPTH_TEST). Neither is face culling affected.
4) Alpha blending and antialiasing need not be taken care of in shaders. glEnable calls will suffice.
5) Is it a good idea to use gluPerspective, glRotate, glPushMatrix and the other matrix functions, and then retrieve the current matrix and feed it to a shader as a uniform? That way there would be no need for a third-party matrix library.
It depends on what version of OpenGL you're talking about. Up through OpenGL 3.0, all the fixed functionality is still present, so yes, if you decide to just use fixed functionality, it continues to work like it always did. Starting with 3.0, quite a bit of the fixed pipeline was deprecated, and as of 3.1 it disappears from the core profile entirely. With those versions, you no longer have the option of just using the fixed pipeline.
Again, it depends. For example, up through OpenGL 3.0, glColor is still supported, even when you use a shader. The difference is that instead of automatically being applied to what gets drawn, it's supplied to your shader, which can use it unchanged, modify it as it sees fit, or ignore it completely. Your vertex shader receives the color as the built-in gl_Color attribute and can pass it on through gl_FrontColor and gl_BackColor; the fragment shader then reads the interpolated gl_Color and writes the actual fragment color to gl_FragColor. If you're using OpenGL 3.1 or newer, however, glColor (for example) just no longer exists; a color will be just another value you supply to your shader like you could/would anything else.
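To make that concrete, a minimal legacy-GLSL (version 1.20, compatibility only) pair that just forwards the glColor-supplied color:

    // vertex shader
    #version 120
    void main()
    {
        gl_FrontColor = gl_Color;   // the color set by glColor*
        gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // fragment shader
    #version 120
    void main()
    {
        gl_FragColor = gl_Color;    // the interpolated color
    }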
That's correct, at least up to OpenGL 3.1. As of 4.3, there's a new compute shader stage that (I believe) can get involved in things like depth testing (but I haven't used it, so I'm a bit uncertain about that).
Yes, you can still use the built-in alpha blending. Depending on your hardware, you may also want to consider using the GL_ARB_draw_buffers_blend extension (which is core as of OpenGL 4.0, if I recall correctly).
Yet again, it depends on the version of OpenGL you're talking about. Current core-profile OpenGL completely eliminates all support for matrices, so you have no choice but to use some other matrix library. Older versions supplied things like gl_ModelViewMatrix and gl_NormalMatrix to your shader as built-in uniforms, so you could go that route if you chose.
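For completeness, the legacy route looks roughly like this (compatibility profile only; program and the uniform names are assumptions for the example):

    GLfloat mv[16], proj[16];

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);

    /* Read the fixed-function matrices back... */
    glGetFloatv(GL_MODELVIEW_MATRIX,  mv);
    glGetFloatv(GL_PROJECTION_MATRIX, proj);

    /* ...and hand them to the shader as ordinary uniforms. */
    glUniformMatrix4fv(glGetUniformLocation(program, "modelView"),  1, GL_FALSE, mv);
    glUniformMatrix4fv(glGetUniformLocation(program, "projection"), 1, GL_FALSE, proj);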
2) In modern OpenGL, there is no glColor, glBegin, glVertex, glRotate etc. so they don't make sense.
5) In modern OpenGL there are no built-in matrices, so you have to use a 3rd party library or write your own. So to answer your question, no, it's not a good idea.
glPixelTransfer, glTexEnv and glRasterPos were deprecated in OpenGL 3.1. What are they replaced with? If they were not replaced, how can I get a similar effect? I would like to keep using what these functions provided.
What are they replaced with?
Framebuffer objects and fragment shaders.
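For instance, the effect of glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE) reduces to a single line in a fragment shader (a sketch; the sampler and varying names are placeholders):

    #version 150
    uniform sampler2D tex;
    in vec4 vColor;
    in vec2 vTexCoord;
    out vec4 fragColor;

    void main()
    {
        // GL_MODULATE: texel multiplied by the incoming color
        fragColor = texture(tex, vTexCoord) * vColor;
    }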
P.S. If you don't want to mess with shaders, you can keep using an older OpenGL version, you know.
According to the following wiki page:
OpenGL Wiki Page
It says "One of the requirements is to use shaders.". Is this true? To use GlVertexAttribPointer do I have to use shaders? I'm just starting out in OpenGL and just want to keep things simple for now, without having to introduce shaders at such an early stage of development. I will be using GLSL eventually, but want to get each feature "working" before adding any new features to my code.
Thanks
Yes, it's true: you need shaders to use generic vertex attributes. Otherwise, how would OpenGL know whether attribute 0 is the normal, attribute 1 the position, and attribute 2 the texture coordinates? There is no API for specifying that in the fixed-function pipeline.
It might work, but that's just luck, not defined behaviour.
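A minimal sketch of how the two tie together (program, vbo and the attribute name "position" are assumptions for the example):

    /* The shader is what gives the attribute index its meaning. */
    GLint posLoc = glGetAttribLocation(program, "position");
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glEnableVertexAttribArray(posLoc);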