I made an interesting, if confusing discovery today.
Thus far I've gotten by fine using glDraw*(GL_LINE_STRIP...) (or the related GL_LINE* draw modes). Of course, this always requires some re-ordering of vertices to make the same vertex data work seamlessly between GL_TRIANGLES and GL_LINE_STRIP, but okay, all good there.
Then today I reintroduced some older code of mine and found glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) in it. I looked it up, and people were saying that while GL_BACK and GL_FRONT are deprecated in a 3.3 core context, glPolygonMode itself is not deprecated, though it only accepts GL_FRONT_AND_BACK as its first argument. So I tried it with GL_LINE as the second argument, along with glDraw*(GL_TRIANGLES...), and not only did it work perfectly, it also required none of the explicit re-ordering of vertices needed to suit GL_LINE_STRIP. (I went back to an earlier configuration to test this.)
Questions:
What am I supposed to be using in a 3.3 core context? Is either method OK? The reason I ask is that I'm wondering whether the line glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); is really even giving me a core profile, since it is, after all, only a hint to GLFW.
Which approach is recommended? Are there performance impacts to the latter? It certainly seems a lot more convenient.
Why even have glDraw*(GL_LINE*...) if this can be done via glPolygonMode?
Ad 1: Both methods are perfectly fine in OpenGL Core profile.
Ad 2: I'm not sure about this, but I guess that there will not be a huge performance difference.
Ad 3: These draw modes exist because one might also want to draw line objects that are not composed of triangles. Examples are circles or plain straight lines.
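For reference, a minimal sketch of the glPolygonMode approach (assuming a VAO with triangle indices is already bound; indexCount is a hypothetical variable):

glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);  // draw triangle edges as lines
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);  // restore the default fill mode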
Related
I know gluCylinder is somewhat old (and GLU generally), and GLUT is still around (through FreeGLUT), but I saw those two and am wondering: what's the difference, besides that gluCylinder requires you to define a quadric? And which is faster?
The original GLUT did not have a glutSolidCylinder() function. That appears to be something FreeGLUT added.
gluCylinder
Pros:
Supports texture coordinate generation.
Cons:
GLU is old. I mean, really really old. The spec was last updated in 1998, and I suspect that the available implementations are just as old. This means that it's using immediate mode rendering (glBegin/glEnd) style, which is inefficient, and not available anymore in modern versions of OpenGL.
GLU support is starting to disappear from some platforms.
glutSolidCylinder
Pros:
As long as you're comfortable with using FreeGLUT, it's still supported, with source code available.
The FreeGLUT version seems to be able to use moderately modern rendering methods (VBOs), based on browsing the source code.
Cons:
Does not generate texture coordinates. This was definitely not supported for most solids in GLUT, and as far as I can tell is still not supported for cylinders in FreeGLUT.
self-made
Rendering a cylinder is very easy. Personally, I would just write it myself.
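To illustrate how little code that takes, here is a minimal sketch (untested, hypothetical helper, not part of any library) that builds the side wall of a cylinder as a triangle strip from a stack of two circles:

#include <math.h>

// Fills 'verts' with a GL_TRIANGLE_STRIP for the side of a cylinder.
// 'verts' must hold 2 * (slices + 1) * 3 floats.
void buildCylinderSide(float radius, float height, int slices, float *verts)
{
    for (int i = 0; i <= slices; ++i) {
        float a = 2.0f * (float)M_PI * i / slices;
        float x = radius * cosf(a), z = radius * sinf(a);
        *verts++ = x; *verts++ = 0.0f;   *verts++ = z;  // bottom ring
        *verts++ = x; *verts++ = height; *verts++ = z;  // top ring
    }
}

Caps can be added the same way with two triangle fans, and texture coordinates fall out of the loop variable almost for free.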
I agree with @Reto. I prefer implementing a cylinder myself too, especially because it has a simple parametric form (a stack of circles). Incidentally, I was recently helping somebody else draw cylinders. Maybe you'll find that interesting too:
Make a line thicker in 3D?
I'm trying to replicate the effect of Cathode, but I'm not really aware of any rendering effects in SDL. Does anyone know the technique used in Cathode? Are they using OpenGL and shaders, maybe?
If you are still interested in the subject, I'm working on a similar project.
You can grab the source code here: https://github.com/Swordifish90/cool-old-term/
The shader strings might not be extremely readable due to the extensive use of ternary operators (needed to customize the appearance), but they should give you a really good idea.
If you poke around a bit in the application bundle, you'll find that the only relevant framework is GLKit which, according to Apple, will "reduce the effort required to create new shader-based apps".
There's also a bunch of ".fragdata", ".vertdata", and ".glsldata" files, which are encrypted.
Very unfortunate for you.
So I would say: Yes, it's OpenGL shaders all the way.
Unfortunately, since the shaders are encrypted, you're going to have to locate suitable algorithms elsewhere.
(Perhaps it's possible to use the OpenGL debugging and profiling tools to capture the shader source as it is compiled, but I doubt it.)
You may have realized that Android phones have (had?) such an animation when you put them to sleep. That code is available in a file named ElectronBeam.java.
However, it is Java code and uses GLES 1.0 with GLES 1.1 extensions, but the algorithm for bending the screen should be understandable.
It seems to be based on GLTerminal, which uses OpenGL; it would have to use OpenGL and shaders for speed.
I guess the fastest approximation would be to render the text to buffers within OpenGL and use a deformed 2D grid to create the "rounded corners" radial distortion, as sketched at the end of this answer.
But it would take a lot of work to add all the features that Cathode has, not to mention making them run quickly.
I suspect emulating a CRT perfectly is a bit like emulating an analog synth perfectly - hard to impossible.
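To make the "deformed 2D grid" idea above concrete, here is a minimal sketch (hypothetical helper, simple quadratic barrel model) of the distortion you would apply to each grid vertex before texturing the terminal image onto it:

// Applies a simple barrel distortion to a point in [-1, 1]^2.
// A 'strength' around 0.1 gives a subtle CRT-like bulge.
void barrelDistort(float x, float y, float strength, float *ox, float *oy)
{
    float r2 = x * x + y * y;         // squared distance from the screen center
    float f  = 1.0f + strength * r2;  // push points outward near the edges
    *ox = x * f;
    *oy = y * f;
}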
If you want to work quickly without killing the CPU, the GPU is the best solution, so: pixel shaders. Pixel shaders can do all of these effects. I once made such an application; I wrote it in Silverlight, but that doesn't matter, since I used pixel shaders.
I suggest writing this in Qt4 and adding pixel-shader effects to the QWidget.
I am currently taking a Game Console Programming module at Sunderland University.
What they are teaching in this module is OpenGL and Phyre Engine to develop PS3 games.
The fact that the PS3 SDK is not available for free (it is quite expensive) makes it really difficult for me to find help when a problem arises.
Apparently, the PS3 framework doesn't support most of the GL function calls like glGenLists, glBegin, glEnd and so on.
glBegin(GL_QUADS);
    glTexCoord2f(TEXTURE_SIZE, m_fTextureOffset);
    glVertex3f(-100, 0, -100);
    // some more
glEnd();
I get errors when debugging with PS3 debug mode at glBegin, glEnd and glTexCoord2f.
Is there any way to get around this?
A different way of drawing objects, perhaps?
Most games developed for the PS3 don't use OpenGL at all, but are programmed "on the metal", i.e. they make direct use of the GPU without an intermediate, abstract API. Yes, there is an OpenGL-esque API for the PS3, but it is actually based on OpenGL ES.
In OpenGL ES there is no immediate mode. Immediate mode is the cumbersome method of passing geometry to OpenGL by starting a primitive with glBegin, then chaining calls that set vertex attribute state, submitting each vertex by its position with glVertex, and finishing with glEnd. Nobody wants to use this! Especially not on a system with limited resources.
You have the geometry data in memory available anyway, so why not simply point OpenGL at what's already there? Well, that's exactly what you do: vertex arrays. You give OpenGL pointers to where the data can be found (the generic glVertexAttribPointer in modern OpenGL, or in the old fixed-function pipeline the predefined, fixed attributes glVertexPointer, glTexCoordPointer, glNormalPointer, glColorPointer) and then have it draw a whole bunch of geometry with glDrawElements or glDrawArrays.
In modern OpenGL the drawing process is controlled by user-programmable shaders. In fixed-function OpenGL, all you can do is parameterize an inflationary number of state variables.
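As a rough sketch of the vertex-array path in old fixed-function style (the client-side arrays 'verts' and 'texcoords' are hypothetical; they hold the same data previously fed one call at a time between glBegin and glEnd), the quad from the question becomes:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);        // 4 corners, 3 floats each
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);  // 4 texture coordinates
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);           // a quad as a fan; GL_QUADS does not exist in ES
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);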
The OpenGL used by the PlayStation 3 is a variant of OpenGL ES 1.0 (according to Wikipedia, with some features of ES 2.0).
The specification is at http://www.khronos.org/opengles/1_X. There don't seem to be any glBegin/glEnd functions there. Those fixed-pipeline functions are deprecated (and, as of OpenGL 4.0 and OpenGL ES 2.0, removed) in favor of things like VBOs anyway, so there probably isn't much point in learning how to work with them.
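For comparison, a minimal sketch of the VBO variant (GL 1.5+ entry points; 'verts' and 'vertexCount' are hypothetical):

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
// With a buffer bound, the pointer argument is a byte offset into the VBO.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_VERTEX_ARRAY);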
If you are using PhyreEngine, you should generally avoid calling the graphics API directly, as PhyreEngine sits on top of different APIs on different platforms.
On PC it uses GL (or D3D), but on PS3 it uses a lower-level API. So even if you used GL ES functionality, and even if it compiled, it would likely not function. So it's not surprising that you are seeing errors when building for PS3.
Ideally you should use PhyreEngine's pipeline for drawing, which is platform-agnostic. If you stick to that API, you can in principle compile your code for any supported platform.
There is a limit to how much I can comment on PhyreEngine publicly (sorry), but if you are on a university course, your university should have access to the official support forums where you could get more specific help.
If you really must target the underlying graphics API directly, be aware that you may need to write/modify your code per-platform, and that you will need to 'play nice' with any contextual state that PhyreEngine may rely on.
We've been creating several half-transparent 3D cubes in an OpenGL scene, which displays very well on Windows 7 and Fedora 15 but looks quite awful on the Meego system.
This is what it looks like on my Fedora 15 system:
This is what it looks like on Meego. We changed the line color ourselves; otherwise the cubes you see would look even more pathetic:
The effects are implemented just by using the normal glColor4f function, made transparent simply by setting the alpha value. How could it come out like that?
Both freeglut and openglut have been tried on the Meego system and failed to display any better.
I've even tried to use an engine like irrlicht to implement this instead, but there was nothing but black on the screen when the zBuffer argument of the beginScene method was set to false (and it rendered normally when set to true, but that's not what we want).
This shouldn't be a problem with the graphics card or the driver, because we've seen a 3D game with a transparent ball running on the very same netbook and system.
We've failed to find the reason. Could anyone help explain why this is happening?
It sounds as if you may be relying on default settings (or behavior), which may be different between platforms.
Are you explicitly setting any of OpenGL's blend properties, such as glBlendFunc? If you are, it may help to post the relevant code that does this.
One of the comments mentioned sorting your transparent objects. If you aren't, that's something you might want to consider to achieve more accurate results. In either case, that behavior should be the same from platform to platform so I would have guessed that's not your issue.
Edit:
One other thought. Are you setting glCullFace? It could be that your transparent faces are being culled because of your vertex winding.
Both freeglut and openglut have been tried on the Meego system and failed to display any better.
Those are just simple windowing frameworks and have no effect whatsoever on the OpenGL execution.
Somewhere your blending code is messing up. From the looks of the correct rendering, I'd say your blend function there is glBlendFunc(GL_ONE, GL_ONE), while on Meego it's something like glBlendFunc(GL_SRC_ALPHA, GL_ONE).
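Either way, it is worth setting the whole blend state explicitly instead of relying on platform defaults; a minimal sketch for standard back-to-front alpha blending:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard "over" blending
glDepthMask(GL_FALSE);  // test against depth, but don't write it for transparent geometry
// ... draw the transparent cubes, sorted back to front ...
glDepthMask(GL_TRUE);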
I'm having a problem with mipmapping textures on different hardware. I use the following code:
char *exts = (char *)glGetString(GL_EXTENSIONS);
if (strstr(exts, "SGIS_generate_mipmap") == NULL) {
    // use gluBuild2DMipmaps()
} else {
    // use GL_GENERATE_MIPMAP
}
But some cards report that GL_GENERATE_MIPMAP is supported when it's not, so the graphics card tries to read memory from where the mipmap is supposed to be and ends up rendering other textures into those mip levels.
I tried glGenerateMipmapEXT(GL_TEXTURE_2D), but it makes all my textures white; I enabled GL_TEXTURE_2D before using that function (as instructed).
I could just use gluBuild2DMipmaps() for everyone, since it works, but I don't want to make new cards load 10x slower because there are two users who have really old cards.
So how do I choose the mipmapping method correctly?
glGenerateMipmap is supported at least as of OpenGL 3.3, as core functionality rather than an extension.
You have the following options:
Check the OpenGL version; if it is at least as recent as the first version that supported glGenerateMipmap, use glGenerateMipmap.
(I'd recommend this one) OpenGL 1.4 through 2.1 supports glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE), which will generate mipmaps from the base level (see the sketch below). This probably became deprecated in OpenGL 3, but you should still be able to use it.
Use GLee or GLEW and call glewIsSupported / gleeIsSupported to check for the extension.
Also, I think that instead of relying on extensions it is easier to stick with the OpenGL specification. A lot of hardware supports OpenGL 3, so you should be able to get most of the required functionality (shaders, mipmaps, framebuffer objects, geometry shaders) as part of the OpenGL specification, not as extensions.
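A minimal sketch combining a version check with the GL_GENERATE_MIPMAP fallback (error handling omitted; assumes a texture is bound to GL_TEXTURE_2D, and 'w', 'h', 'pixels' are hypothetical):

const char *ver = (const char *)glGetString(GL_VERSION);
int major = ver[0] - '0';  // crude parse, fine for "1.4" through "4.x"

if (major >= 3) {
    // Modern path: upload the base level, then generate the chain.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);
} else {
    // GL 1.4-2.1 path: ask the driver to generate mipmaps on upload.
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}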
If drivers lie, there's not much you can do about it. Also remember that glGenerateMipmapEXT is part of the GL_EXT_framebuffer_object extension.
What you are doing wrong is checking for the SGIS_generate_mipmap extension while using GL_GENERATE_MIPMAP; that enum belongs to core OpenGL, but that's not really the problem.
The issue you describe sounds like a very nasty OpenGL implementation bug; I would bypass it by using gluBuild2DMipmaps on those cards (keeping a list and checking at startup).
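A sketch of that kind of startup check (the renderer substrings below are made-up placeholders, not a real list of broken cards):

#include <string.h>
#include <GL/gl.h>

// Returns nonzero if the current renderer is on our list of cards whose
// GL_GENERATE_MIPMAP implementation is known to be broken.
int needsGluMipmapFallback(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *broken[] = { "ExampleCard 1000", "OldChip XYZ" };
    for (size_t i = 0; i < sizeof(broken) / sizeof(broken[0]); ++i)
        if (strstr(renderer, broken[i]) != NULL)
            return 1;
    return 0;
}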