Intel OpenGL Driver bug?

Whenever I try to render my terrain with point lights, it only works on my Nvidia GPU and driver, not on the Intel integrated GPU and its driver. I believe the problem is in my own code and that the Nvidia driver is simply letting it slide, since I've heard Nvidia's OpenGL implementation is lenient and will let you get away with things you're not supposed to do. Since I get no errors, I need help debugging my shaders.
Link:
http://pastebin.com/sgwBatnw
Note:
I use OpenGL 2 and GLSL Version 120
Edit:
I was able to fix the problem on my own. For anyone with a similar problem: it was not because I used the regular transformation matrix on the normals, since I set the normal's w value to 0.0 when doing that. The real problem was that the Intel integrated graphics apparently has a maximum uniform array length (or maximum total uniform size), and I was going over that limit without the driver reporting it. Another thing wrong with the code was that I was relying on implicit type conversion (dividing vec3s by floats), so I corrected both issues and it started to work. Here's my updated code.
Link: http://pastebin.com/zydK7YMh
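For anyone hitting the same limit: OpenGL 2.0 lets you query the per-stage uniform budget directly, so you can compare it against the size of your uniform arrays before a driver silently misbehaves. A minimal sketch, assuming a current GL context and a loader such as GLEW for the GL 2.0 enums:
#include <GL/glew.h>
#include <cstdio>

void printUniformLimits()
{
    // Number of float components available to uniforms in each stage.
    // Note that on many drivers each element of a vec3 array is padded
    // to a full vec4, so a vec3 array of length N costs about 4 * N.
    GLint maxVertexUniforms = 0, maxFragmentUniforms = 0;
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxVertexUniforms);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &maxFragmentUniforms);
    std::printf("max vertex uniform components:   %d\n", maxVertexUniforms);
    std::printf("max fragment uniform components: %d\n", maxFragmentUniforms);
}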

Related

finding allocated texture memory on intel

As a follow-up to a different question (opengl: adding higher resolution mipmaps to a texture), I'd like to emulate a feature of gDEBugger: finding the total size of the currently allocated textures, which I would use to decide between different ways of solving that question.
Specifically, I'd like to figure out how gDEBugger fills in the information in its "Graphic and Compute Memory Analysis Viewer" (under View/Viewers), in particular where it shows the sum of the sizes of all currently loaded textures.
For Nvidia cards it appears I can call glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &mem_available) just before starting the texture test and just after, take the difference, and get the desired result.
For ATI/AMD it appears I can call wglGetGPUInfoAMD(0, WGL_GPU_RAM_AMD, GL_UNSIGNED_INT, 4, &mem_available) before and after the texture test to get an equivalent number.
For Intel video cards, however, I haven't been able to find the right keywords to put into search engines to figure this out.
So, can anyone help me figure out how to do this on Intel cards, and confirm the methods I plan to use for the ATI/AMD and Nvidia cards?
Edit: it appears that what I wrote earlier for AMD/ATI cards might be the total memory; for the currently available memory I should instead use glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, &mem_avail).
Edit 2: for reference, here's what seems to be the most concise and precise source for what I wrote about the ATI/AMD and Nvidia cards: http://nasutechtips.blogspot.ca/2011/02/how-to-get-gpu-memory-size-and-usage-in.html
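For reference, a minimal sketch of the two vendor-specific queries described above; the enum values come from the NVX_gpu_memory_info and ATI_meminfo extension specs, so check the extension string before calling them (I'm not aware of an equivalent Intel extension):
// Enum values from the NVX_gpu_memory_info and ATI_meminfo extensions.
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#define GL_TEXTURE_FREE_MEMORY_ATI                      0x87FC

// Nvidia: currently available video memory, reported in KB.
GLint nvAvailableKb = 0;
glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &nvAvailableKb);

// ATI/AMD: four values, in KB (total free, largest free block,
// total auxiliary free, largest auxiliary free block).
GLint atiFreeKb[4] = {0, 0, 0, 0};
glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, atiFreeKb);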

Different lighting behavior on different machines

I recently tried to write an .obj mesh loader in C++ with OpenGL, and I am confronted with a strange problem.
I have a std::vector<Vector3f> that represents the coordinates of the vertices of the faces, and another one that represents their normals. My Vector3f contains a std::array<float,3> so I can preserve contiguity between elements.
// Vertex Pointer to triangle array
glVertexPointer(3,GL_FLOAT, sizeof(Vector3f), &(_triangles[0].data[0]));
// Normal pointer to normal array
glNormalPointer(GL_FLOAT,sizeof(Vector3f),&(_normals[0].data[0]));
When I compile the program on my school's computers it gives the right result, but when I compile it on my desktop computer the lighting is strange: it looks as if all the faces reflect the light straight into the camera, so they all appear white.
Do you have any idea what my problem could be?
EDIT :
My computer runs Arch Linux, my window manager is Awesome, and this is what's written on a sticker on my PC:
Intel Core i7-3632QM 2.2GHz with Turbo Boost up to 3.2GHz.
NVIDIA GeForce GT 740M
I don't know much about my school's computers, but I think they run Ubuntu.
I figured it out.
Of course, with so little information, it would have been difficult for anyone else to find the answer.
The code was based on sources given by my school, and at some point the shininess of the mesh was defined this way:
glMaterialf(GL_FRONT, GL_SHININESS, 250);
However, the OpenGL documentation specifies that:
Only values in the range [0, 128] are accepted.
So I guess the two OpenGL implementations reacted differently to this mistake:
my school's implementation probably clamped the shininess value to [0, 128];
my computer's implementation probably saturated the shininess, which is why I got such bright results.
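A defensive fix, assuming you want to stay on the fixed-function material path, is simply to clamp the value before passing it on:
// GL_SHININESS only accepts values in [0, 128]; clamping avoids
// drivers that saturate instead of clamping for you.
float shininess = 250.0f;
if (shininess > 128.0f) shininess = 128.0f;
if (shininess < 0.0f)   shininess = 0.0f;
glMaterialf(GL_FRONT, GL_SHININESS, shininess);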
In any case, thank you very much for your help and for taking the time to read this post.

Broken ANGLE-generated HLSL from webgl shader

I have an issue with a WebGL shader that I've determined is related to ANGLE, because it only occurs on Windows in Firefox or Chrome, and it doesn't happen if I force OpenGL (chrome --use-gl=desktop).
I've created a jsfiddle that shows the ANGLE-generated HLSL for my custom shader. (For the HLSL conversion to work in this jsfiddle, you must run Chrome with --enable-privileged-webgl-extensions, or just see my gist of the output.)
So I have working GLSL, and the generated HLSL compiles but doesn't do the same thing. The symptom is that on Windows the vertices appear at their correct initial locations but do not move, even though I change the uniform jed. I can't find the bug in the generated code.
Any tips for debugging problems like this?
Hard to say based on the information given (the original GLSL isn't included). It's not hard to imagine this being fixed by the July 4 revisions to ANGLE, however. I would say update first.

Which version of GLSL supports Indexing in Fragment Shader?

I have a fragment shader that iterates over some input data, and on older hardware I get:
error C6013: Only arrays of texcoords may be indexed in this profile, and only with a loop index variable
Googling around, I saw a lot of statements like "hardware prior to XX doesn't support indexing in the fragment shader".
I was wondering whether this behavior is standardized across GLSL versions, something like "GLSL versions prior to XX don't support indexing in the fragment shader", and if so, which version starts supporting it.
What is your exact hardware?
Old ATI cards (below the X1600) and their drivers have such issues. Quite possibly, Intel cards that are not the most recent suffer from this as well.
"Do you have any sugestion on how to detect if my hardware is capable of indexing in fragment shader?"
The only reliable yet not-so-beautiful way is to get the Renderer information:
glGetString(GL_RENDERER)
and check whether that renderer appears in a list of unsupported ones.
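A minimal sketch of that check; the blacklist entries are only illustrative (the GeForce 6/7 series mentioned in the next answer) and would need to be filled in from your own test results:
#include <GL/gl.h>
#include <cstring>

bool rendererSupportsFragmentIndexing()
{
    // Requires a current GL context.
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    if (renderer == 0)
        return false;

    // Substrings of renderers assumed to lack dynamic indexing in the
    // fragment stage; extend this list as you find problematic GPUs.
    const char* blacklist[] = { "GeForce 6", "GeForce 7" };
    for (const char* entry : blacklist)
        if (std::strstr(renderer, entry) != 0)
            return false;
    return true;
}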
That particular error comes from the Nvidia compiler for nv4x (GeForce 6/7 cards), and is a limitation of the hardware. Any workaround would require disabling the hardware completely and using pure software rendering.
All versions of GLSL support indexing in the language -- this error falls under the catch-all of exceeding the hardware resource limits.

Using shader for calculations

Is it possible to use a shader to calculate some values and then return them for further use?
For example, I send a mesh down to the GPU along with some parameters describing how it should be modified (changing the positions of vertices), and get the resulting mesh back. That seems rather impossible to me, because I haven't seen any variable for communicating from the shaders back to the CPU. I'm using GLSL, so there are just uniforms, attributes and varyings. Should I use an attribute or a uniform, and would they still be valid after rendering? Can I change the values of those variables and read them back on the CPU? There are methods for mapping data on the GPU, but would those be changed and still valid?
This is how I'm thinking about it, though there could be another way that is unknown to me. I would be glad if someone could explain this to me, as I've just read some books about GLSL and would now like to write more complex shaders, and I don't want to rely on methods that are impossible at the moment.
Thanks
Great question! Welcome to the brave new world of General-Purpose Computing on Graphics Processing Units (GPGPU).
What you want to do is possible with pixel shaders. You load a texture (that is, your data), apply a shader (to do the desired computation), and then use render-to-texture to pass the resulting data from the GPU back to main memory (RAM).
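A minimal sketch of the read-back half of that approach, assuming a framebuffer object with a floating-point color texture (width x height) is already bound and the computation shader has been run over a full-screen quad:
#include <vector>

// One vec4 of computed data per pixel of the render target.
std::vector<float> results(width * height * 4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, results.data());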
There are tools created for exactly this purpose, most notably OpenCL and CUDA. They greatly simplify GPGPU, so that this sort of programming looks almost like CPU programming.
They do not require any 3D graphics experience (although it is still preferable :) ). You don't need to do tricks with textures; you just load arrays into GPU memory. The processing algorithms are written in a slightly modified version of C, and the latest version of CUDA supports C++.
I recommend starting with CUDA, since it is the most mature one: http://www.nvidia.com/object/cuda_home_new.html
This is easily possible on modern graphics cards using OpenCL, Microsoft DirectCompute (part of DirectX 11) or CUDA. The normal shader languages are used (GLSL and HLSL, for example). The first two work on both Nvidia and ATI graphics cards; CUDA is Nvidia-exclusive.
These are special libraries for doing computations on the graphics card. I wouldn't use a normal 3D API for this, although it is possible with some workarounds.
These days you can also use shader storage buffer objects in OpenGL to write values from a shader that can then be read back on the host.
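A minimal sketch of that route (it requires OpenGL 4.3 or newer; the buffer size and binding point here are just placeholders):
#include <vector>

// Create an SSBO the shader can write into; in GLSL it would be declared
// as: layout(std430, binding = 0) buffer Results { float data[]; };
GLuint ssbo = 0;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, 1024 * sizeof(float), nullptr, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

// ... run the draw call or compute dispatch that writes into data[] ...

// Make the shader writes visible, then copy them back to the host.
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
std::vector<float> host(1024);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, 1024 * sizeof(float), host.data());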
My best suggestion would be to point you to BehaveRT, a library created to harness GPUs for behavioral models. I think that if you can formulate your modifications within the library, you could benefit from its abstraction.
As for passing data back and forth between your CPU and GPU, I'll let you browse the documentation; I'm not sure about that part.