A while ago I converted a C# program of mine to use OpenGL and found it ran perfectly (and faster) on my computer at home. However, I have two issues. Firstly, the code I use to free textures from the graphics card doesn't work; it gives me a memory access violation exception at runtime. Secondly, most of the graphics don't work on any other machine but mine.
By accident, I managed to convert some of the graphics to 8-bit PNGs (all the others are 32-bit) and these work fine on other machines. Recognising this, I attempted to reduce the quality when loading the images. My attempts failed (this was a while ago; I think they largely involved formatting a bitmap and then using GDI to draw the texture onto it, creating a lower-quality version). Is there any way in .NET to take a bitmap and nicely change the quality? The code concerned is below. I recall it is largely based on some I found on Stack Overflow in the past, but which didn't quite suit my needs. 'img' is a .NET Image, and 'd' is an integer dimension, which I use to ensure the images are square.
uint[] output = new uint[1];
Bitmap bMap = new Bitmap(img, new Size(d, d));
System.Drawing.Imaging.BitmapData bMapData;
Rectangle rect = new Rectangle(0, 0, bMap.Width, bMap.Height);
bMapData = bMap.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly, bMap.PixelFormat);
gl.glGenTextures(1, output);
gl.glBindTexture(gl.GL_TEXTURE_2D, output[0]);
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_NEAREST);
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_NEAREST);
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_S, gl.GL_CLAMP);
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_T, gl.GL_CLAMP);
gl.glPixelStorei(gl.GL_UNPACK_ALIGNMENT, 1);
if (use16bitTextureLimit)
gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGBA_FLOAT16_ATI, bMap.Width, bMap.Height, 0, gl.GL_BGRA, gl.GL_UNSIGNED_BYTE, bMapData.Scan0);
else
gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, gl.GL_RGBA, bMap.Width, bMap.Height, 0, gl.GL_BGRA, gl.GL_UNSIGNED_BYTE, bMapData.Scan0);
bMap.UnlockBits(bMapData);
bMap.Dispose();
return output;
'use16bitTextureLimit' is a bool, and I rather hoped the code shown would reduce the quality to 16-bit, but I haven't noticed any difference. It may be that this works and the graphics cards still don't like it. I was unable to find any indication of a way to use 8-bit PNGs.
This is in a function which returns the uint array (as a texture address) for use when rendering. The faulty texture disposal simply involves: gl.glDeleteTextures(1, imgsGL[i]); where imgsGL is an array of uint arrays.
As said, the rendering is fine on some computers, and the texture deletion causes a runtime error on all systems (except my netbook, where I can't create textures at all, though I think that may be linked to the quality issue).
If anyone can provide any info of relevance, that would be great. I've spent many days on the program, and would really like it to be more compatible with less capable graphics cards.
The kind of access violation you encounter usually happens if the call to glTexImage2D causes a buffer overrun. Double-check that all the glPixelStore parameters related to unpacking are properly set, and that the format parameter matches the type and layout of the data you supply. I know this kind of bug very well, and those are the first checks I usually do whenever I encounter it.
For the texture not showing up: did you check that the texture's dimensions are actually powers of two each? In C, using a macro, the test for a power of two can be written like this (it boils down to testing that only one bit of the integer is set):
#define ISPOW2(x) ( x && !( (x) & ((x) - 1) ) )
It is not necessary that a texture image is square, though. That's a common misconception; you really just have to make sure that each dimension is a power of two. A 16×128 image is perfectly fine.
Changing the internal format to GL_RGBA_FLOAT16_ATI will probably even increase quality, but one cannot be sure, as GL_RGBA may be coerced to anything the driver sees fit. Also, this is a vendor-specific format, so I'd discourage its use. There are all kinds of ARB formats, including a half-float one (which is what FLOAT16_ATI is).
I made a program that changes resolution, color depth, and so on, and then renders a simple texture on screen. It all works without any problem until I switch to 8-bit color depth. Then calls to non-existent functions appear (the function pointer is 0x00), such as glCreateShader. This made me wonder, and I got an idea which proved to be correct: the created context has a really low version.
After calling glGetString(GL_VERSION) I found that the context version was 1.1.0. With higher color depths it returns 4.4.
Is there any reason for the decreased version? I looked through Google and some opengl.org pages, but I did not find anything about deprecating 8-bit color depth. Even Windows can switch to this color depth, so there is no reason why OpenGL shouldn't be able to handle it.
Sure, I can emulate it by decreasing the number of colors; memory is not what I am concerned about. I just want to know why this is happening. The program is a prototype for lab experiments, so I need to have as many options as possible, and this is cutting one third of them away.
The last thing I should add is that the program is written in C/C++ with the WinAPI and some WGL functions, but I think that does not matter much.
Your graphics driver is falling back to the software implementation because no hardware accelerated pixel format matching your criteria could be found.
Most drivers will not give you hardware accelerated 8-bit per-pixel formats, especially if you request an RGB[A] (WGL_TYPE_RGBA_ARB) color mode.
Sure, I can emulate it by decreasing the number of colors; memory is not what I am concerned about. I just want to know why this is happening.
To get an 8-bit format, you must use an indexed color mode (WGL_TYPE_COLORINDEX_ARB); paletted rendering. I suspect modern drivers will not even support that sort of thing unless they offer a compatibility profile (which rules out platforms like OS X).
The smallest RGB color depth you should realistically attempt is RGB555 or RGB565. 15/16-bit color is supported on modern hardware. Indexed color modes, on the other hand, are really pushing your luck.
I know that on mobile devices, the largest texture you could render in a single draw differs: sometimes it is a mere 1024x1024 - other times 2048x2048 etc.
What is the case for Desktop games? I am using OpenGL 2.0.
I intend to draw one single background sprite that could be as big as 5000x5000. I am guessing that TexturePacker is not quite useful in this scenario, because I don't really need an atlas since I'm just trying to make a single sprite.
Yes, I just tested for 5000x5000 and it works just fine. Just wondering if there's an actual limit to consider. Maybe it differs from one computer to another?
In addition to what P.T. said, I wanted to supply the code for that (in libGDX).
IntBuffer intBuffer = BufferUtils.newIntBuffer(16);
Gdx.gl20.glGetIntegerv(GL20.GL_MAX_TEXTURE_SIZE, intBuffer);
System.out.println(intBuffer.get());
On my desktop system this results in 4096, meaning that the max size supported is 4096x4096. My system is not that old though. You should probably not assume that 5000x5000 is available on all desktop systems. Usually you don't need textures that big so not all GPUs support that. You can always split it up in several textures and draw it on multiple quads next to each other to work around that problem.
The maximum texture size is a function of OpenGL, which leaves the size to the video card's device driver (within bounds).
You can check at run-time to see what the reported limits are (though see Confusion with GL_MAX_TEXTURE_SIZE for some caveats).
To find out what a variety of hardware reports in practice, there are some sites that collect databases of results from users (mostly concerned with benchmark performance), that often also collect data like max texture size. (E.g., gfxbench.com, or http://opengl.gpuinfo.org/gl_stats_caps_single.php?listreportsbycap=GL_MAX_TEXTURE_SIZE)
I think on a modern desktop GPU 5000x5000 will be well under the supported limit.
I'm working on an OpenGL-powered 2d engine.
I'm using stb_image to load image data so I can create OpenGL textures. I know that the UV origin for OpenGL is bottom-left and I also intend to work in that space for my screen-space 2d vertices i.e. I'm using glm::ortho( 0, width, 0, height, -1, 1 ), not inverting 0 and height.
You probably guessed it, my texturing is vertically flipped but I'm 100% sure that my UV are specified correctly.
So: is this caused by stbi_load's storage of pixel data? I'm currently loading PNG files only so I don't know if it would cause this problem if I was using another file format. Would it? (I can't test right now, I'm not at home).
I really want to keep the screen coords in the "standard" OpenGL space... I know I could just invert the orthogonal projection to fix it but I would really rather not.
I can see two sane options:
1- If this is caused by stbi_load's storage of pixel data, I could invert it at loading time. I'm a little worried about that for performance reasons, and because I'm using texture arrays (via glTexImage3D) for sprite animations, meaning I would need to invert texture tiles individually, which seems painful and not a general solution.
2- I could use a texture coordinate transformation to vertically flip the UVs on the GPU (in my GLSL shaders).
A possible third option would be to use glPixelStorei to describe the input data... but I can't find a way to tell it that the incoming pixels are vertically flipped.
What are your recommendations for handling my problem? I figured I can't be the only one using stbi_load + OpenGL and having that problem.
Finally, my target platforms are PC, Android and iOS :)
EDIT: I answered my own question... see below.
I know this question's pretty old, but it's one of the first results on google when trying to solve this problem, so I thought I'd offer an updated solution.
Sometime after this question was originally asked, stb_image.h added a function called "stbi_set_flip_vertically_on_load"; simply passing true to it will cause images to be output the way OpenGL expects, removing the need for manual flipping or texture-coordinate flipping.
Also, for those who don't know where to get the latest version, for whatever reason, you can find it at github being actively worked on:
https://github.com/nothings/stb
It's also worth noting that in stb_image's current implementation they flip the image pixel by pixel, which isn't exactly performant. This may change at a later date, as they've already flagged it for optimisation. Edit: it appears they've switched to memcpy, which should be a good bit faster.
OK, I will answer my own question... I went through the documentation for both libraries (stb_image and OpenGL).
Here are the appropriate bits with reference:
glTexImage2D says the following about the data pointer parameter: "The first element corresponds to the lower left corner of the texture image. Subsequent elements progress left-to-right through the remaining texels in the lowest row of the texture image, and then in successively higher rows of the texture image. The final element corresponds to the upper right corner of the texture image." From http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
The stb_image lib says this about the loaded image pixel: "The return value from an image loader is an 'unsigned char *' which points to the pixel data. The pixel data consists of *y scanlines of *x pixels, with each pixel consisting of N interleaved 8-bit components; the first pixel pointed to is top-left-most in the image." From http://nothings.org/stb_image.c
So the issue is related to the difference in pixel storage between the image-loading library and OpenGL. It wouldn't matter if I loaded file formats other than PNG, because stb_image returns the same data layout for all formats it loads.
So I decided I'll just swap in place the pixel data returned by stb_image in my OglTextureFactory. This way, I keep my approach platform-independent. If load time becomes an issue down the road, I'll remove the flipping at load time and do something on the GPU instead.
Hope this helps someone else in the future.
Yes, you should. This can be easily accomplished by simply calling this STBI function before loading the image:
stbi_set_flip_vertically_on_load(true);
Since this is a matter of opposite assumptions between image libraries in general and OpenGL, I'd say the best way is to manipulate the vertical UV coordinate. This takes minimal effort and is always relevant when loading images with any image library and passing them to OpenGL.
Either feed tex coords with 1.0f - uv.y when populating vertices, OR reverse it in the shader:
fcol = texture2D( tex, vec2(uv.x,1.-uv.y) );
I'm experiencing a difficult problem on certain ATI cards (Radeon X1650, X1550 + and others).
The message is: "Access violation at address 6959DD46 in module 'atioglxx.dll'. Read of address 00000000"
It happens on this line:
glGetTexImage(GL_TEXTURE_2D,0,GL_RGBA,GL_FLOAT,P);
Note:
Latest graphics drivers are installed.
It works perfectly on other cards.
Here is what I've tried so far (with assertions in the code):
That the pointer P is valid and allocated enough memory to hold the image
Texturing is enabled: glIsEnabled(GL_TEXTURE_2D)
Test that the currently bound texture is the one I expect: glGetIntegerv(GL_TEXTURE_BINDING_2D)
Test that the currently bound texture has the dimensions I expect: glGetTexLevelParameteriv( GL_TEXTURE_WIDTH / HEIGHT )
Test that no errors have been reported: glGetError
It passes all those test and then still fails with the message.
I feel I've tried everything and have no more ideas. I really hope some GL-guru here can help!
EDIT:
After concluded it is probably a driver bug I posted about it here too: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=295137#Post295137
I also tried GL_PACK_ALIGNMENT and it didn't help.
By some more investigation I found that it only happened on textures that I had previously filled with pixels using a call to glCopyTexSubImage2D. So I could produce a workaround by replacing the glCopyTexSubImage2D call with calls to glReadPixels and then glTexImage2D instead.
Here is my updated code:
{
glCopyTexSubImage2D cannot be used here because the combination of calling
glCopyTexSubImage2D and then later glGetTexImage on the same texture causes
a crash in atioglxx.dll on ATI Radeon X1650 and X1550.
Instead we copy to the main memory first and then update.
}
// glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, PixelWidth, PixelHeight); //**
GetMem(P, PixelWidth * PixelHeight * 4);
glReadPixels(0, 0, PixelWidth, PixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, P);
SetMemory(P,GL_RGBA,GL_UNSIGNED_BYTE);
You might take care of GL_PACK_ALIGNMENT. This parameter tells GL to pad each row of the returned image to the given byte boundary. For example, if you have an image 645 pixels wide with 1-byte pixels:
With GL_PACK_ALIGNMENT at 4 (the default value), each row is padded to 648 bytes.
With GL_PACK_ALIGNMENT at 1, each row is exactly 645 bytes.
So ensure that the pack value is OK by doing:
glPixelStorei(GL_PACK_ALIGNMENT, 1)
before your glGetTexImage(), or align your destination buffer's rows to the GL_PACK_ALIGNMENT.
This is most likely a driver bug. Having written 3D APIs myself, it is even easy to see how. You are doing something that is really weird and rare enough not to be covered by tests: converting between 8-bit texel data and floats during the transfer. Nobody is going to optimize that path. The generic CPU conversion function probably kicks in there, and somebody messed up a table that drives the allocation of temporary buffers for it. You should reconsider pairing an external float format with an internal 8-bit format in the first place; conversions like that in the GL API usually point to programming errors. If your data is float and you want to keep it as such, you should use a float texture, not RGBA8. If you want 8 bits, why is your transfer type float?
I ran into an issue while working on some OpenGL code. The thing is that I want to achieve full-scene anti-aliasing and I don't know how. I turned on forced anti-aliasing in the Nvidia control panel, and that was exactly the effect I meant to achieve. Right now I do it with GL_POLYGON_SMOOTH, which is obviously neither efficient nor good-looking. Here are the questions:
1) Should I use multisampling?
2) Where in the pipeline does OpenGL blend the colors for anti-aliasing?
3) What alternatives exist besides GL_*_SMOOTH and multisampling?
GL_POLYGON_SMOOTH is not a method to do Full-screen AA (FSAA).
Not sure what you mean by "not efficient" in this context, but it certainly is not good-looking, because of its tendency to blend in the middle of meshes (at shared triangle edges).
Now, with respect to FSAA and your questions:
Multisampling (aka MSAA) is the standard way today to do FSAA. The usual alternative is super-sampling (SSAA), that consists in rendering at a higher resolution, and downsample at the end. It's much more expensive.
The specification says that, logically, the GL keeps a sample buffer (4x the size of the pixel buffer for 4xMSAA) and a pixel buffer (for a total of 5x the memory), and on each sample write to the sample buffer updates the pixel buffer with the value resolved from the current 4 samples. (It's not called blending, by the way; blending is what happens at the time of the write into the sample buffer, controlled by glBlendFunc et al.)
In practice, this is not what happens in hardware. Typically you write only to the sample buffer (and the hardware usually tries to compress the data), and when the time comes to use it, the GL implementation resolves the full buffer at once, before the use happens. This also helps if you actually use the sample buffer directly (no need to resolve at all, then).
I covered SSAA and its cost above. The latest technique is called morphological anti-aliasing (MLAA), and it is actively being researched. The idea is to do a post-processing pass on the fully rendered image and anti-alias what looks like sharp edges. The bottom line is that it's not implemented by the GL itself; you have to code it as a post-processing pass. I include it for reference, but it can cost quite a lot.
I wrote a post about this here: Getting smooth, big points in OpenGL
You have to specify WGL_SAMPLE_BUFFERS and WGL_SAMPLES (or GLX prefix for XOrg/GLX) before creating your OpenGL context, when selecting a pixel format or visual.
On Windows, make sure that you use wglChoosePixelFormatARB() if you want a pixel format with extended traits, NOT ChoosePixelFormat() from GDI/GDI+. wglChoosePixelFormatARB has to be queried with wglGetProcAddress from the ICD driver, so you need to create a dummy OpenGL context beforehand. WGL function pointers are valid even after the OpenGL context is destroyed.
WGL_SAMPLE_BUFFERS is a boolean (1 or 0) that toggles multisampling. WGL_SAMPLES is the number of samples per pixel you want; typically 2, 4, or 8.