GLSL vertex shader compiles on PC and Mac, but not on another PC

So, I have a PC with an NVIDIA GeForce GTX 580, a Mac with some ATI card, and a notebook with a GeForce GTX 680M.
The vertex shader compiles on the 580 and the ATI card, but not on the GTX 680M.
The error is quite interesting (not):
ERROR: 0:18: '': syntax error syntax error
That line of code is: int vIdStep = gl_VertexID % 9;
I have tried deleting all the whitespace, adding extra empty lines, moving the line around... nothing works.
I use gl_VertexID in other shaders that compile without problems. Only this one with the % in it won't compile on the 680M.
What is this?
Has anyone else had this experience?
What can I do about it?
EDIT: By the way: this solves the problem, but it's a terrible solution IMO and I really want a better one:
int vIdStep = int(mod(float(gl_VertexID), 9.0));

Ok, I finally found out what the problem was.
My laptop has a hybrid video adapter: Intel/NVIDIA.
For some reason, even in fullscreen, the driver kept choosing the Intel adapter in my tests.
Therefore... it's a bug in the Intel driver.
You can't use % in GLSL on Intel HD Graphics 4000 at the moment. I will now submit the bug to them.
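On Windows there is also a way to keep the hybrid driver from picking the Intel adapter in the first place: exporting NVIDIA's Optimus hint from the executable. This is a minimal sketch, assuming a Windows build with NVIDIA Optimus drivers; it has no effect on other platforms or drivers:

// Exporting this symbol from the .exe asks the NVIDIA Optimus driver to prefer
// the discrete GPU over the integrated Intel adapter (assumption: Windows + Optimus).
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}

You can then confirm which adapter actually created the context by printing glGetString(GL_RENDERER) at startup.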

Related

Unable to use geometry shader

I recently switched my laptop's OS from Windows 10 to Fedora Linux. After this I tried to load and run my current C++ SFML project. However, when it attempts to load my geometry shader I just get this:
Failed to create a shader: your system doesn't support geometry shaders (you should test Shader::isGeometryAvailable() before trying to use geometry shaders)
I know my system should support geometry shaders, as it worked just fine previously on Windows. My laptop has a quad-core AMD Ryzen 5 2500U with Radeon Vega Mobile graphics, using the amdgpu driver. Is this an issue with drivers? Could my software have caused this issue? (If so, let me know and I will edit this post.)
The result of sf::Shader::isAvailable() is true. The result of sf::Shader::isGeometryAvailable() is false.
If anyone knows how to fix this issue, that would be excellent.
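For reference, the guard SFML asks for looks roughly like this. It is a minimal sketch: the shader file names are placeholders, and the comment about Mesa is an assumption about why the check can fail on otherwise capable hardware:

#include <SFML/Graphics.hpp>
#include <iostream>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Geometry shader test");

    // Report what the created context actually provides.
    const sf::ContextSettings& settings = window.getSettings();
    std::cout << "OpenGL " << settings.majorVersion << "." << settings.minorVersion << "\n";

    if (!sf::Shader::isGeometryAvailable())
    {
        // SFML tests for the old geometry shader extension; Mesa drivers may not
        // advertise it even when the hardware supports geometry shaders.
        std::cout << "Geometry shaders reported as unavailable\n";
        return 1;
    }

    sf::Shader shader;
    shader.loadFromFile("shader.vert", "shader.geom", "shader.frag"); // placeholder paths
    return 0;
}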

strange CopyResource result with ATI Radeon card

I get strange results when I try to copy the rendered texture into a background texture for later use with the CopyResource command. This is what's coming out:
I don't get any DX11 warnings or errors.
This also only happens while using an ATI Radeon card.
I also tried on 5 other NVIDIA cards and the output looks fine.
I downloaded the newest drivers and also tried older ones, but nothing changed.
I can't post the code; in any case it is too huge. I only want to know if someone else has seen something like this, and if so, how you solved it.
Is there a better way to copy textures using another method?
I found out that the problem is easily solvable. After a long debugging session
I saw that the source texture was also bound to the render output. This gives no warnings or errors and works on NVIDIA cards, but my Radeon card (AMD Radeon R7 M370) does not like it.
So I changed my code to:
OMSetRenderTargets(1, nullptr, nullptr);
CopyResource(...
and the bug was fixed. Maybe this answer helps someone solve the same problem.
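Spelled out a little more, the fix amounts to unbinding the render target before the copy. A minimal sketch; the variable names (context, renderedTexture, backgroundTexture, renderTargetView) are placeholders for whatever the application actually uses:

#include <d3d11.h>

void copyRenderedTexture(ID3D11DeviceContext* context,
                         ID3D11Texture2D* renderedTexture,
                         ID3D11Texture2D* backgroundTexture,
                         ID3D11RenderTargetView* renderTargetView)
{
    // Make sure the source texture is no longer bound as a render target;
    // copying from a resource that is still bound for output is what produced
    // the garbage on the Radeon card.
    ID3D11RenderTargetView* nullRTV = nullptr;
    context->OMSetRenderTargets(1, &nullRTV, nullptr);

    // Destination first, source second; both textures must match in size,
    // format and mip count.
    context->CopyResource(backgroundTexture, renderedTexture);

    // Rebind the render target if rendering continues afterwards.
    context->OMSetRenderTargets(1, &renderTargetView, nullptr);
}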

Intel OpenGL Driver bug?

Whenever I try to render my terrain with point lights, it only works on my Nvidia GPU and driver, and not on the Intel integrated GPU and driver. I believe the problem is in my code and that the Nvidia GPU simply lets it slide, since I heard Nvidia's OpenGL implementations are lenient and will let you get away with things you're not supposed to do. And since I get no errors, I need help debugging my shaders.
Link:
http://pastebin.com/sgwBatnw
Note:
I use OpenGL 2 and GLSL Version 120
Edit:
I was able to fix the problem on my own. For anyone with similar problems: it's not because I used the regular transformation matrix, because when I did that I set the normal's w value to 0.0. The problem was that with the Intel integrated graphics there is apparently a maximum number of array elements in a uniform, or a maximum uniform size in general, and I was going over that limit, but the driver was deciding not to report it. Another thing wrong with this code was that I was doing implicit type conversions (dividing vec3s by floats), so I corrected those things and it started to work. Here's my updated code.
Link: http://pastebin.com/zydK7YMh
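If you suspect a uniform size limit, you can query it at runtime instead of waiting for the driver to complain. A minimal sketch, assuming a GLEW-style loader has already been initialized; the enums are the standard OpenGL 2.0 ones:

#include <GL/glew.h>
#include <cstdio>

// Query how many float components the driver allows for vertex and fragment
// shader uniforms. An array of N vec3 lights costs at least 3*N components
// (and may be padded further), so integrated GPUs run out quickly.
void printUniformLimits()
{
    GLint vertexComponents = 0, fragmentComponents = 0;
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &vertexComponents);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &fragmentComponents);
    std::printf("max vertex uniform components:   %d\n", vertexComponents);
    std::printf("max fragment uniform components: %d\n", fragmentComponents);
}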

How do I get OpenGL 3.3?

Ok, so I'm trying to use the tutorials at http://arcsynthesis.org/gltut/, but I keep getting an error message that pops up for about a second saying "Unable to create OpenGL 3.3 context (flags 1, profile 1)"; there are also a bunch of PDB files missing. I did download the newest drivers for both graphics cards on my laptop (that is, both the Intel(R) HD Graphics 3000 and the NVIDIA GeForce GT 540M), and I did launch a tool called "OpenGL Extensions Viewer", which displays that I should be able to run OpenGL version 3.1.
Now, I guess some would say that perhaps my card can't run 3.3, but:
1) My card is said to support 4.0:
http://www.notebookcheck.net/NVIDIA-GeForce-GT-540M.41715.0.html
2) There are people who say that "Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3."
OpenGL 3.+ glsl compatibility mess?
3) And finally... A YEAR AGO, I GOT IT TO RUN! Seriously, I got it to work after 2 months of trying. I'm even using some old project files from that time, and sadly they won't launch anymore because of the same error... I did format my machine since then.
I recall that last time it was a whole series of things that I tried... like disabling one graphics card to be able to update the other, or maybe it was that I used some different diagnostic tool, which someone online recommended, saying that "if that program detects that OpenGL isn't working properly, it'll fix it".
Right now I'm busy with other homework, so if anyone has any suggestions about what this could be, please tell!
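Independent of the tutorial framework, it helps to see which of the two GPUs actually answers the context request and what version it offers. This is a minimal sketch using GLFW, which is an assumption here (the arcsynthesis framework uses its own windowing code):

#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit())
        return 1;

    // Explicitly ask for a 3.3 core profile context.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(640, 480, "GL 3.3 test", nullptr, nullptr);
    if (!window)
    {
        // Usually means the driver that answered cannot do 3.3, e.g. the
        // Intel HD 3000 responding instead of the GT 540M on a hybrid laptop.
        std::printf("Failed to create a 3.3 core context\n");
        glfwTerminate();
        return 1;
    }

    glfwMakeContextCurrent(window);
    // GL_RENDERER tells you which of the two GPUs owns the context.
    std::printf("renderer: %s\nversion:  %s\n",
                (const char*)glGetString(GL_RENDERER),
                (const char*)glGetString(GL_VERSION));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}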

glGetTexImage() doesn't work properly on ATI cards? Fails when miplevel > 0

I use that function to get my texture back from the graphics card,
but for some reason it doesn't return anything on some cards if miplevel > 0.
Here is the code I'm using to get the image:
glGetTexImage(GL_TEXTURE_2D, miplevel, GL_RGB, GL_UNSIGNED_BYTE, data);
Here is the code I use to check which method to use for mipmapping:
ext = (char*)glGetString(GL_EXTENSIONS);
if (strstr(ext, "SGIS_generate_mipmap") == NULL) {
    // use gluBuild2DMipmaps()
} else {
    // use GL_GENERATE_MIPMAP
}
So far it has worked properly, and it says GL_GENERATE_MIPMAP is supported on the ATI cards listed below.
Here are the tested cards:
ATI Radeon 9550 / X1050 Series
ATI Mobility Radeon HD 3470
ATI Radeon X700
ATI Radeon HD 4870
ATI Radeon HD 3450
At the moment I am taking mip level 0 and generating the mipmaps with my own code. Is there a better fix for this?
Also, glGetError() returns 0 for all cards, so no error occurs. It just doesn't work. Probably a driver problem?
I'm still looking for a better fix than resizing it myself on the CPU...
Check the error that glGetTexImage is reporting. It most probably tells you what the error is.
Edit: Sounds like the joys of using ATI's poorly written OpenGL drivers. Assuming your drivers are up to date, your options are pretty much: use an nVidia card, work around it, or accept that it won't work. It might be worth hassling ATI about it, but they will most likely do nothing, alas.
Edit 2: On the problem cards, are you using GL_GENERATE_MIPMAP? It might be that you can't grab the mip levels unless they are explicitly built...? i.e. try gluBuild2DMipmaps() for everything.
Edit 3: The thing is, though, it "may" be the cause of your problems. It doesn't sound unlikely to me that the ATI driver grabs the texture from a local copy; however, if you use the automatic generation of mipmaps, it does it entirely on the card and never copies them back. Explicitly try building the mipmaps locally and see if that fixes your issue. It may not, but you do need to try these things or you will never figure out the problem. Alas, trial and error is all that works with problems like this. This is why a fair few games keep large databases of driver name, card name and driver version to decide whether a feature will or will not work.
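To make the suggestion concrete: build the chain explicitly with GLU and then read a level back, checking the reported level size first. This is a minimal sketch assuming a GL context already exists; width, height and pixels are placeholders for the application's image data:

#include <GL/glu.h>
#include <vector>
#include <cstdio>

// Upload every mip level explicitly instead of relying on GL_GENERATE_MIPMAP,
// then read one level back and report any error.
void buildAndReadBack(int width, int height, const unsigned char* pixels, int miplevel)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);

    // Ask the driver how large the requested level actually is before reading it.
    GLint levelWidth = 0, levelHeight = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, miplevel, GL_TEXTURE_WIDTH, &levelWidth);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, miplevel, GL_TEXTURE_HEIGHT, &levelHeight);

    // Tightly packed rows so the buffer size below is sufficient.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    std::vector<unsigned char> data(levelWidth * levelHeight * 3);
    glGetTexImage(GL_TEXTURE_2D, miplevel, GL_RGB, GL_UNSIGNED_BYTE, data.data());

    std::printf("level %d: %dx%d, glGetError = 0x%x\n",
                miplevel, levelWidth, levelHeight, glGetError());
}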