glBufferData fails silently with overly large sizes - c++

I just noticed that glBufferData fails silently when I call it with size 1085859108 and data NULL.
Subsequent calls to glBufferSubData then fail with an OUT_OF_MEMORY error. This is on Windows XP 32-bit, an NVIDIA GeForce 9500 GT (1024 MB), and the 195.62 drivers.
Is there any way to determine whether a buffer was created successfully? (Something like a proxy texture, for example?)
kind regards,
Florian

I doubt that it's really silent. I'd guess that glGetError would return GL_OUT_OF_MEMORY after that attempt.
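A minimal sketch of such a check, assuming the buffer-object entry points (OpenGL 1.5) are already loaded; the helper name is made up for illustration:
bool try_allocate_buffer(GLenum target, GLsizeiptr size)
{
    while (glGetError() != GL_NO_ERROR) {}  // clear any stale error flags first
    glBufferData(target, size, NULL, GL_STATIC_DRAW);
    return glGetError() == GL_NO_ERROR;     // GL_OUT_OF_MEMORY here means the allocation failed
}
Bind the buffer first, then only start issuing glBufferSubData calls if the helper returns true; otherwise fall back to a smaller allocation.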

Related

strange CopyResource result with ATI Radeon card

I get strange results when I try to copy the rendered texture into a background texture for later use with the CopyResource command.
I don't get any DX11 warnings or errors.
This only happens when using an ATI Radeon card.
I also tried five other NVIDIA cards and the output looks fine on those.
I downloaded the newest drivers and also tried older ones, but nothing changed.
I cannot post the code; it is too large anyway. I only want to know whether someone else has seen something like this and, if so, how you solved it.
Is there a better way to copy textures, using another method?
I found out that the problem is easily solvable. After a long debugging session
I saw that the source texture was also still bound as the render output. This gives no warnings or errors and works on NVIDIA cards, but my Radeon card (AMD Radeon R7 M370) does not like it.
So I changed my code to:
OMSetRenderTargets(1, nullptr, nullptr);
CopyResource(...
and the bug was fixed. Maybe this answer helps someone with the same problem.
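A minimal sketch of that workaround in C++, assuming an immediate context and two hypothetical textures; it unbinds with a view count of 0, which is the documented way to clear all render targets:
#include <d3d11.h>

void copy_after_unbind(ID3D11DeviceContext* context,
                       ID3D11Texture2D* backgroundTex,  // destination
                       ID3D11Texture2D* renderedTex)    // source, previously bound as render target
{
    // Unbind all render targets so the copy source is no longer bound as output...
    context->OMSetRenderTargets(0, nullptr, nullptr);
    // ...then the copy behaves the same on NVIDIA and Radeon cards.
    context->CopyResource(backgroundTex, renderedTex);
}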

Getting a pixelformat/context with stencil buffer with Mesa OpenGL

I need to change a very old application to be able to work over Remote Desktop Connection (which only supports a subset of OpenGL 1.1). It only needs various OpenGL 1.x functions, so I'm trying the trick of placing a Mesa opengl32.dll file in the application folder. The application makes only sparse use of OpenGL, so a low-performance software renderer is fine.
Anyway, I obtained a precompiled Mesa opengl32.dll from https://wiki.qt.io/Cross_compiling_Mesa_for_Windows, but I can't get a pixel format/context with a stencil buffer enabled. If I disable stencil buffer use then everything else works, but it would be best if I could figure out how to get a pixel format/context with a stencil buffer enabled.
Here's the pixelformat part of context creation code:
function gl_context_create_init(adevice_context:hdc):int;
var
  pfd,pfd2:tpixelformatdescriptor;
begin
  mem_zero(pfd,sizeof(pfd));
  pfd.nSize:=sizeof(pfd);
  pfd.nVersion:=1;
  pfd.dwFlags:=PFD_DRAW_TO_WINDOW or PFD_SUPPORT_OPENGL or PFD_DOUBLEBUFFER;
  pfd.iPixelType:=PFD_TYPE_RGBA;
  pfd.cColorBits:=32;
  pfd.iLayerType:=PFD_MAIN_PLANE;
  pfd.cStencilBits:=4;
  gl_pixel_format:=choosepixelformat(adevice_context,@pfd);
  if gl_pixel_format=0 then
    gl_error('choosepixelformat');
  if not setpixelformat(adevice_context,gl_pixel_format,@pfd) then
    gl_error('setpixelformat');
  describepixelformat(adevice_context,gl_pixel_format,sizeof(pfd2),pfd2);
  if ((pfd.dwFlags and pfd2.dwFlags)<>pfd.dwFlags) or
     (pfd.iPixelType<>pfd2.iPixelType) or
     (pfd.cColorBits<>pfd2.cColorBits) or
     (pfd.iLayerType<>pfd2.iLayerType) or
     (pfd.cStencilBits>pfd2.cStencilBits) then
    gl_error('describepixelformat');
  ...
end;
The error happens at the line (pfd.cStencilBits>pfd2.cStencilBits): I can't seem to find a pixel format with nonzero cStencilBits through Mesa, so I can't get a context that supports stencil.
Well, it turns out that choosepixelformat cannot choose a pixel format that is only available through the Mesa opengl32.dll; however, wglchoosepixelformat can choose such a pixel format, so my problem is solved. I have now been able to get the stencil buffer to work while using Remote Desktop Connection with this old program.
The thing I don't understand, but don't have time to look into (if you know the answer, please post it in the comments of this answer), is that setpixelformat and describepixelformat both work perfectly fine with pixel formats that are only available through Mesa. I expected choosepixelformat/setpixelformat/describepixelformat to either all work or all not work, but this is how it is.
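For illustration, a minimal C-style sketch of what this answer describes (not the original Pascal code): resolving wglChoosePixelFormat from the opengl32.dll in the application folder instead of calling GDI's ChoosePixelFormat. The helper name is made up, and it assumes the Mesa DLL exports wglChoosePixelFormat with the same signature as ChoosePixelFormat:
#include <windows.h>

typedef int (WINAPI *PFN_wglChoosePixelFormat)(HDC, const PIXELFORMATDESCRIPTOR*);

int choose_mesa_pixel_format(HDC dc, const PIXELFORMATDESCRIPTOR* pfd)
{
    HMODULE gl = LoadLibraryA("opengl32.dll");  // resolves to the Mesa DLL next to the exe
    if (gl == NULL)
        return 0;
    PFN_wglChoosePixelFormat wglChoosePF =
        (PFN_wglChoosePixelFormat)GetProcAddress(gl, "wglChoosePixelFormat");
    if (wglChoosePF == NULL)
        return ChoosePixelFormat(dc, pfd);      // fall back to the GDI version
    return wglChoosePF(dc, pfd);                // may return formats GDI does not report
}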

finding allocated texture memory on intel

As a follow-up to a different question (opengl: adding higher resolution mipmaps to a texture), I'd like to emulate a feature of gDEBugger: finding the total size of the currently allocated textures, which would be used to decide between the different ways of solving that question.
Specifically, I'd like to figure out how gDEBugger fills in the information in "view/viewers/graphic and compute memory analysis viewer", in particular the place in that window where it reports the sum of the sizes of all currently loaded textures.
For NVIDIA cards it appears I can call "glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &mem_available);" just before starting the texture test and just after, take the difference, and get the desired result.
For ATI/AMD it appears I can call "wglGetGPUInfoAMD(0, WGL_GPU_RAM_AMD, GL_UNSIGNED_INT, 4, &mem_available);" before and after the texture test to get the wanted result.
For Intel video cards, however, I am not finding the right keywords to put into search engines to figure this out.
So, can anyone help figure out how to do this with Intel cards, and confirm the method I'll use for the ATI/AMD and NVIDIA cards?
Edit: it appears that for AMD/ATI cards what I wrote earlier may report total memory; for currently available memory I should instead use "glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, &mem_avail);".
Edit 2: for reference, here is what seems to be the most concise and precise source for what I wrote about the ATI/AMD and NVIDIA cards: http://nasutechtips.blogspot.ca/2011/02/how-to-get-gpu-memory-size-and-usage-in.html
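To make the above concrete, here is a minimal sketch of the two vendor-specific queries; the tokens come from GL_NVX_gpu_memory_info and GL_ATI_meminfo (both report kilobytes and should only be used after checking the extension string), and the helper function and its flags are made up for illustration:
#include <GL/gl.h>

#ifndef GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
#endif

// Call once before and once after creating the textures and take the difference.
GLint current_available_vram_kib(bool isNvidia, bool isAmd)
{
    if (isNvidia) {
        GLint kib = -1;
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &kib);
        return kib;
    }
    if (isAmd) {
        GLint info[4] = { -1, -1, -1, -1 };  // ATI_meminfo queries return four values
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, info);
        return info[0];                      // [0] = total free pool memory, in KiB
    }
    return -1;                               // no equivalent query known for Intel here
}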

glDrawArray+VBO increasing memory footprint

I am writing a Windows-based OpenGL viewer application.
I am using the VBO + triangle strip + glDrawArrays method to render my meshes. Everything works perfectly on all machines.
On Windows desktops with NVIDIA Quadro cards, however, the working/peak working memory shoots up when I first call glDrawArrays.
On laptops with NVIDIA mobile graphics cards the working memory or peak working memory does not shoot up. For the last few days I have been checking almost every forum/post/tutorial about VBO memory issues. I have tried all combinations of VBO usage hints (GL_STATIC_DRAW/GL_DYNAMIC_DRAW/GL_STREAM_DRAW) and glMapBuffer/glUnmapBuffer, but nothing stops the memory from shooting up on my desktops.
I suspect that for VBOs with OpenGL 1.5 I am missing some flags.
PS: I have almost 500 to 600 VBOs in my application. I am using an array of structures (i.e. vertex, normal, color, and texture coordinates together in one structure), and I am not aligning my VBOs to 16 KB boundaries.
Can anyone suggest how I should go about solving this issue? Any hints/pointers would be helpful.
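For context, a rough sketch of the kind of setup described above (interleaved position/normal/color/texcoord structure, OpenGL 1.5 era, with the VBO entry points already loaded); the struct and names are illustrative, not the original code:
#include <cstddef>
#include <GL/gl.h>

struct Vertex {                 // "array of structures": v, n, c, t together
    float position[3];
    float normal[3];
    float color[4];
    float texcoord[2];
};

// Assumes the matching client states (GL_VERTEX_ARRAY, GL_NORMAL_ARRAY, ...) are enabled.
void upload_and_draw(const Vertex* vertices, GLsizei count, GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), vertices, GL_STATIC_DRAW);

    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, position));
    glNormalPointer(GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, normal));
    glColorPointer(4, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, color));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, texcoord));

    glDrawArrays(GL_TRIANGLE_STRIP, 0, count);
}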
Do you actually run out of memory, or does your application merely consume more memory than you expected? If you are not actually running out, why bother? If the OpenGL implementation keeps a working copy for itself, then that is probably for a reason. Also, there is little you can do on the OpenGL side to avoid this, since it is entirely up to the driver how it manages its data. I think the best course of action, if you really want to keep the memory footprint low, is to contact NVIDIA so that they can double-check whether this may be a bug in their drivers.

glGetTexImage() doesn't work properly on ATI cards? fails when miplevel > 0

I use that function to read my texture back from the graphics card,
but for some reason it doesn't return anything on some cards if miplevel > 0.
Here is the code I'm using to read the image back:
glGetTexImage(GL_TEXTURE_2D, miplevel, GL_RGB, GL_UNSIGNED_BYTE, data);
Here is the code I use to check which method to use for mipmapping:
ext = (char*)glGetString(GL_EXTENSIONS);
if (strstr(ext, "SGIS_generate_mipmap") == NULL) {
    // use gluBuild2DMipmaps()
} else {
    // use GL_GENERATE_MIPMAP
}
So far this check has worked properly, so it reports that GL_GENERATE_MIPMAP is supported on the ATI cards tested below.
Here are the tested cards:
ATI Radeon 9550 / X1050 Series
ATI Mobility Radeon HD 3470
ATI Radeon X700
ATI Radeon HD 4870
ATI Radeon HD 3450
At the moment I am taking mip level 0 and generating the mipmaps with my own code. Is there a better fix for this?
Also, glGetError() returns 0 on all cards, so no error occurs; it just doesn't work. Probably a driver problem?
I'm still looking for a better fix than resizing the texture myself on the CPU...
Check the error that glGetTexImage is reporting; it will most probably tell you what the problem is.
Edit: Sounds like the joys of using ATI's poorly written OpenGL drivers. Assuming your drivers are up to date, your options are pretty much to use an NVIDIA card, work around it, or accept that it won't work. It might be worth hassling ATI about it, but they will most likely do nothing, alas.
Edit 2: On the problem cards, are you using GL_GENERATE_MIPMAP? It might be that you can't grab the mip levels unless they are explicitly built; i.e. try gluBuild2DMipmaps() for everything.
Edit 3: The thing is, though, that it may be the cause of your problems. It doesn't sound unlikely to me that the ATI driver reads the texture back from a local copy, and if you use automatic mipmap generation the mip levels are generated entirely on the card and never copied back. Explicitly try building the mip maps locally and see whether that fixes your issue. It may not, but you need to try these things or you will never figure out the problem. Alas, trial and error is all that works with problems like this. This is why a fair few games keep large databases of driver name, card name, and driver version to decide whether a given feature will or will not work.
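A minimal sketch of that suggestion: build the mip chain explicitly with gluBuild2DMipmaps instead of GL_GENERATE_MIPMAP, so that every level exists as data the application actually uploaded; the function and variable names are illustrative:
#include <GL/glu.h>

void upload_with_explicit_mipmaps(GLuint tex, int width, int height, const unsigned char* rgb)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    // Builds and uploads every mip level on the CPU side (OpenGL 1.x / GLU path).
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height, GL_RGB, GL_UNSIGNED_BYTE, rgb);
}
After uploading this way, the glGetTexImage call from the question can be retried with miplevel > 0 to see whether the readback now works on the affected cards.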