OpenGL: How to select correct mipmapping method automatically? - c++

I'm having problems with mipmapping textures on different hardware. I use the following code:
const char *exts = (const char *)glGetString(GL_EXTENSIONS);
if (strstr(exts, "SGIS_generate_mipmap") == NULL) {
    // use gluBuild2DMipmaps()
} else {
    // use GL_GENERATE_MIPMAP
}
But some cards report GL_GENERATE_MIPMAP as supported when it isn't, so the graphics card reads memory from where the mipmap is supposed to be and ends up rendering other textures into those mip levels.
I tried glGenerateMipmapEXT(GL_TEXTURE_2D), but it makes all my textures white; I enabled GL_TEXTURE_2D before calling it (as instructed).
I could just use gluBuild2DMipmaps() for everyone, since it works. But I don't want to make new cards load 10x slower because two users have really old cards.
So how do you choose the mipmapping method correctly?

glGenerateMipmap has been part of core OpenGL since version 3.0, not just an extension.
You have the following options:
Check the OpenGL version; if it is at least as recent as the first version that supports glGenerateMipmap (3.0), use glGenerateMipmap.
(I'd recommend this one) OpenGL 1.4 through 2.1 supports glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE), which generates mipmaps from the base level automatically. This was deprecated in OpenGL 3, but you should still be able to use it.
Use GLee or GLEW and call gleeIsSupported / glewIsSupported to check for the extension.
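A minimal sketch of the version check from the first option (the chooseMipmapMethod name and the enum are mine, and error handling is omitted):

#include <cstdio>
#include <GL/glu.h>

// Sketch: pick a mipmap strategy from the context's version string.
// On Windows, glGenerateMipmap itself still has to be loaded as a
// function pointer (e.g. via GLEW).
enum MipmapMethod { USE_GLU_BUILD, USE_TEXPARAMETER, USE_GENERATEMIPMAP };

MipmapMethod chooseMipmapMethod()
{
    int major = 0, minor = 0;
    const char *version = (const char *)glGetString(GL_VERSION);
    sscanf(version, "%d.%d", &major, &minor);

    if (major >= 3)
        return USE_GENERATEMIPMAP;   // glGenerateMipmap is core here
    if (major == 2 || (major == 1 && minor >= 4))
        return USE_TEXPARAMETER;     // GL_GENERATE_MIPMAP is core since 1.4
    return USE_GLU_BUILD;            // very old context: gluBuild2DMipmaps
}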
Also, instead of relying on extensions, I think it is easier to stick with a core OpenGL version. A lot of hardware supports OpenGL 3, so you should be able to get most of the required functionality (shaders, mipmaps, framebuffer objects, geometry shaders) as part of the core specification rather than as extensions.

If drivers lie, there's not much you can do about it. Also remember that glGenerateMipmapEXT is part of the GL_EXT_framebuffer_object extension.
What you are doing wrong is checking for the SGIS_generate_mipmap extension but then using GL_GENERATE_MIPMAP, an enum that belongs to core OpenGL (since 1.4); that mismatch isn't really the problem here, though.
The issue you describe sounds like a very nasty OpenGL implementation bug; I would bypass it by using gluBuild2DMipmaps on those cards (keep a list of affected cards and check it at startup).
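A sketch of that check; the renderer substrings here are hypothetical placeholders, not real card names:

#include <GL/gl.h>
#include <cstring>

// Sketch: force the gluBuild2DMipmaps fallback on known-broken drivers
// by matching GL_RENDERER against a hand-maintained blacklist.
bool rendererIsBlacklisted()
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *broken[] = { "ExampleCard 1000", "ExampleCard 2000" }; // placeholders
    for (size_t i = 0; i < sizeof(broken) / sizeof(broken[0]); ++i)
        if (strstr(renderer, broken[i]) != NULL)
            return true;
    return false;
}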

Related

What is the best way to copy the lower resolution mips of a texture in OpenGL?

The OpenGL application I maintain has to run on systems with little VRAM.
In some cases, at runtime, I can guarantee that a specific texture won't need high resolution. My textures are in a compressed format.
What I would like to do is to discard the highest resolution mips to save VRAM.
I thought of doing something like this pseudocode:
u32 fullResTex = loadTextureAndUploadToGPU("my_texture.ktx");
generateMipmaps(fullResTex);
u32 lowResTex = createTexture();
copyLowestMips(fullResTex, lowResTex);
destroy(fullResTex);
I'm not sure which function of the OpenGL API can help me with "copyLowestMips".
I have found a few that could potentially serve the purpose: glCopyTexImage2D, glCopyImageSubData.
I have seen that glCopyImageSubData requires OpenGL 4.3 though. I would prefer to avoid using versions later than 3.3 but I might be able to use the function if available through an extension.
What would be the best way to approach this problem?
I ended up using glCopyImageSubData and it seems to work fine. It's the only function that was able to do what I needed. It's a pity that it requires OpenGL 4.3.
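For reference, a sketch of that approach (the helper name copyLowestMips and the dimension bookkeeping are mine; it assumes dst was already allocated at the reduced size, e.g. with glTexStorage2D, in the same format as src):

// Sketch: copy mip levels [skip, levelCount) of src into levels
// [0, levelCount - skip) of dst. Requires GL 4.3 or GL_ARB_copy_image.
void copyLowestMips(GLuint src, GLuint dst, int levelCount, int skip,
                    int baseWidth, int baseHeight)
{
    for (int level = skip; level < levelCount; ++level) {
        int w = baseWidth  >> level; if (w < 1) w = 1;
        int h = baseHeight >> level; if (h < 1) h = 1;
        glCopyImageSubData(src, GL_TEXTURE_2D, level, 0, 0, 0,
                           dst, GL_TEXTURE_2D, level - skip, 0, 0, 0,
                           w, h, 1);
    }
}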

Is it a big deal switching from OpenGL 3.0 to OpenGL ES 2.0?

If I am currently developing a game for windows using SDL and GLEW (for OpenGL 3.0+) and I later want to port my game to Android, will I have to rewrite the majority of my code to convert from OpenGL 3.0 to OpenGL ES 2.0? Are there any programs that do this for me? Is it a big deal switching from OpenGL to OpenGL ES?
Not at all; it is fairly easy to convert.
The main differences are shader variables and constants, plus suffixes like GL_RGBA8 becoming GL_RGBA8_OES. However, there are limits in OpenGL ES. For instance, you can only use GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT as the index data type, not GL_UNSIGNED_INT, which means you cannot address more than 65,536 vertices in one draw call. It is not a big deal, but you should refer to the official OpenGL ES manual: https://www.khronos.org/opengles/sdk/docs/man/
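A tiny sketch of an ES-compatible indexed draw (buffer setup omitted):

// Sketch: 16-bit indices work everywhere in ES 2.0; 32-bit indices
// would need the OES_element_index_uint extension.
GLushort indices[] = { 0, 1, 2, 2, 1, 3 };
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);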
Refer to the link OpenGL ES 2.0 vs OpenGL 3 - Similarities and Differences by coffeeandcode
It really depends on your code.
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is that there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, any of the variables that access fixed-function data, like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, etc.
If you use any of those features, you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders; no "3D" is provided for you. You're required to write all projection, lighting, texture lookups, etc. yourself.
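For instance, a minimal ES 2.0 shader pair declares its own inputs instead of gl_Vertex and gl_ModelViewProjectionMatrix (a sketch; the a_/u_ names are mine):

// Sketch: ES 2.0 replacements for the fixed-function transform and texturing.
const char *vertexSrc =
    "attribute vec4 a_position;            \n"
    "attribute vec2 a_texcoord;            \n"
    "uniform mat4 u_mvp; // you supply this; no gl_ModelViewProjectionMatrix\n"
    "varying vec2 v_texcoord;              \n"
    "void main() {                         \n"
    "    v_texcoord = a_texcoord;          \n"
    "    gl_Position = u_mvp * a_position; \n"
    "}                                     \n";

const char *fragmentSrc =
    "precision mediump float;              \n"
    "uniform sampler2D u_texture;          \n"
    "varying vec2 v_texcoord;              \n"
    "void main() {                         \n"
    "    gl_FragColor = texture2D(u_texture, v_texcoord);\n"
    "}                                     \n";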
If you're already doing that (which most modern games probably are), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very common (most tutorials still use that stuff), then you've got a bit of work cut out for you as you try to reproduce those features on your own.
There is an open source library, Regal, which I think was started by NVIDIA, that is supposed to reproduce that functionality. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.

Replicating Cathode retro terminal effect?

I'm trying to replicate the effect of Cathode, but I'm not really aware of any rendering effects in SDL. Does anyone know the technique used in Cathode? Are they using OpenGL and shaders, maybe?
If you are still interested in the subject, I'm working on a similar project. The effects were obtained by using GLSL shaders.
You can grab the source code here: https://github.com/Swordifish90/cool-old-term/
The shader strings might not be extremely readable due to the extensive use of ternary operators (needed to customize the appearance), but they should give you a really good idea.
If you poke around a bit in the application bundle, you'll find that the only relevant framework is GLKit which, according to Apple, will "reduce the effort required to create new shader-based apps".
There's also a bunch of ".fragdata", ".vertdata", and ".glsldata" files, which are encrypted.
Very unfortunate for you.
So I would say: Yes, it's OpenGL shaders all the way.
Unfortunately, since the shaders are encrypted, you're going to have to locate suitable algorithms elsewhere.
(Perhaps it's possible to use the OpenGL debugging and profiling tools to capture the shader source as it is compiled, but I doubt it.)
You may have noticed that Android phones have (or had?) such an animation when you put them to sleep. That code is available in a file named ElectronBeam.java.
However, it is Java code and uses GLES 1.0 with GLES 1.1 extensions, but the algorithm for bending the screen should be understandable.
Cathode seems to be based on GLTerminal, which uses OpenGL; it would have to use OpenGL and shaders for speed.
I guess the fastest approximation would be to render the text to buffers within OpenGL and use a deformed 2D grid to create the "rounded corners" radial distortion.
But it would take a lot of work to add all the features that Cathode has, not to mention making them run quickly.
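A sketch of that distortion done in a fragment shader rather than a grid mesh (the curvature constant 0.15 is arbitrary, and u_terminal is a hypothetical sampler holding the rendered text):

// Sketch: barrel distortion over a fullscreen quad, GLSL fragment
// shader embedded as a C string.
const char *crtFragmentSrc =
    "precision mediump float;                               \n"
    "uniform sampler2D u_terminal;                          \n"
    "varying vec2 v_texcoord;                               \n"
    "void main() {                                          \n"
    "    vec2 cc = v_texcoord - 0.5;      // center-relative\n"
    "    float r2 = dot(cc, cc);                            \n"
    "    vec2 uv = v_texcoord + cc * r2 * 0.15; // curvature strength\n"
    "    if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0)\n"
    "        gl_FragColor = vec4(0.0);    // outside the curved screen\n"
    "    else                                               \n"
    "        gl_FragColor = texture2D(u_terminal, uv);      \n"
    "}                                                      \n";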
I suspect emulating a CRT perfectly is a bit like emulating an analog synth perfectly - hard to impossible.
If you want it to run quickly without killing the CPU, the GPU is the best solution: pixel shaders. Pixel shaders can do all of these effects. I once made such an application; I wrote it in Silverlight, but that doesn't matter, since I used pixel shaders there too.
Another suggestion is to write this in Qt4 and add pixel shader effects to the QWidget.

some OpenGL function calls is not available in developing PS3 game

I am currently taking a Game Console Programming module at Sunderland University.
What they are teaching in this module is OpenGL and Phyre Engine to develop PS3 game.
The fact that the PS3 SDK is not available for free (it is quite expensive) makes it really difficult for me to get around when a problem arises.
Apparently, the PS3 framework doesn't support most of the GL function calls like glGenLists, glBegin, glEnd and so on.
glBegin(GL_QUADS);
    glTexCoord2f(TEXTURE_SIZE, m_fTextureOffset);
    glVertex3f(-100, 0, -100);
    // some more vertices
glEnd();
I get errors when debugging with PS3 debug mode at glBegin, glEnd and glTexCoord2f.
Is there any way to get around it?
A different way of drawing objects, perhaps?
Most games developed for the PS3 don't use OpenGL at all, but are programmed "on the metal", i.e. they make direct use of the GPU without an intermediate, abstract API. Yes, there is an OpenGL-esque API for the PS3, but it is actually based on OpenGL ES.
In OpenGL ES there is no immediate mode. Immediate mode is the cumbersome method of passing geometry to OpenGL by starting a primitive with glBegin, chaining calls that set vertex attribute state, submitting each vertex by its position with glVertex, and finishing with glEnd. Nobody wants to use this, especially not on a system with limited resources.
You have the geometry data in memory anyway, so why not simply point OpenGL at what's already there? That's exactly what vertex arrays do: you give OpenGL pointers to where the data lives (the generic glVertexAttribPointer in modern OpenGL, or in the old fixed-function pipeline the predefined attributes via glVertexPointer, glTexCoordPointer, glNormalPointer, glColorPointer) and then have it draw a whole batch using glDrawElements or glDrawArrays.
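For example, the quad from the question could be rewritten with client-side vertex arrays, roughly like this (the texture coordinates here are made up, and GL_TRIANGLE_FAN stands in for GL_QUADS, which ES lacks):

// Sketch: the glBegin/glEnd quad as vertex arrays + glDrawArrays.
GLfloat positions[] = { -100, 0, -100,   100, 0, -100,
                         100, 0,  100,  -100, 0,  100 };
GLfloat texcoords[] = { 0, 0,  1, 0,  1, 1,  0, 1 };

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);   // one quad as a fan
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);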
In modern OpenGL the drawing process is controlled by user-programmable shaders. In fixed-function OpenGL all you can do is parameterize an inflationary number of state variables.
The OpenGL used by the PlayStation 3 is a variant of OpenGL ES 1.0 (according to Wikipedia, with some features of ES 2.0).
http://www.khronos.org/opengles/1_X has the specification. There don't seem to be any glBegin/glEnd functions there. Those fixed-pipeline functions are deprecated (and removed in the OpenGL 3.1 core profile and in OpenGL ES 2.0) in favor of things like VBOs anyway, so there probably isn't much point in learning how to work with them.
If you are using PhyreEngine, you should generally avoid calling the graphics API directly, as PhyreEngine sits on top of different APIs on different platforms.
On PC it uses GL (or D3D), but on PS3 it uses a lower-level API. So even if you used GL ES functionality, and even if it compiled, it would likely not work. It's not surprising that you are seeing errors when building for PS3.
Ideally you should use PhyreEngine's pipeline for drawing, which is platform-agnostic. If you stick to that API, you can in principle compile your code for any supported platform.
There is a limit to how much I can comment on PhyreEngine publicly (sorry), but if you are on a university course, your university should have access to the official support forums where you could get more specific help.
If you really must target the underlying graphics API directly, be aware that you may need to write/modify your code per-platform, and that you will need to 'play nice' with any contextual state that PhyreEngine may rely on.

OpenGL Accumulation Buffer in SFML?

How would I enable the accumulation (glAccum) buffer in SFML? It doesn't appear to exist by default (querying GL_ACCUM_BLUE_BITS returns 0). Is this even possible?
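(For reference, the concrete query would be glGetIntegerv; a minimal sketch:)

// Sketch: query the accumulation buffer depth; 0 bits means the
// current context has no accumulation buffer at all.
GLint accumBlueBits = 0;
glGetIntegerv(GL_ACCUM_BLUE_BITS, &accumBlueBits);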
Not with SFML 1.6 or 2.0. And I wouldn't hold my breath on them adding such a feature anytime soon. They were incredibly resistant to the simple suggestion of allowing sRGB framebuffers.
I'd suggest switching to Allegro 5, as they provide accumulation buffer support.