imageAtomicExchange won't compile - c++

I'm trying to use two OpenGL images: one is sparse, and the other is used as a sort of page table in which I keep track of the pages that are actually committed.
I have a simple little shader, which looks like this (main not included):
#version 450 core
#extension GL_ARB_shader_image_load_store : require
uniform float gridSize;
uniform float pageTableSize;
bool isPageInMemoryOrRequest(in ivec3 pos)
{
    bool returnValue = false;
    if (255u == imageAtomicExchange(pageTable, pos, 128u))
    {
        returnValue = true;
    }
    return returnValue;
}
And my problem is that this won't compile. I keep getting this message:
Error C1115: unable to find compatible overloaded function "imageAtomicExchange(struct uimage3D1x8_bindless, ivec3, uint)"
I'm pretty sure I've never seen that _bindless part anywhere in the specs, and I'm not exactly sure how the compiler figures out that it is a bindless texture at compile time (or maybe they're all bindless in the latest drivers).
I've got a GTX 660 Ti and I'm using the 352.86 drivers.
I'm wondering if anyone has had this sort of issue before and could tell me what the problem might be.
Thanks in advance.

According to the extension specification of ARB_shader_image_load_store (Section 8.X, Image Functions), there is only a very limited number of supported formats for atomic operations:
Atomic memory operations are supported on only a subset of all image variable types; the image must be either:
an image variable with signed integer components (iimage*) and a format qualifier of "r32i", or
an image variable with unsigned integer components (uimage*) and a format qualifier of "r32ui".
I assume from the error message that you have tried to use an r8ui format, which is not supported.
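In practice that means declaring the page table in the shader with a 32-bit unsigned integer format, e.g. layout(binding = 0, r32ui) uniform uimage3D pageTable;, and allocating and binding it with a matching format on the C++ side. A minimal sketch, assuming a GL 4.2+ context (the texture name, image unit and size below are just placeholders):
GLuint pageTableTex = 0;
const GLsizei pageTableSize = 64;   // example size; pick whatever your paging scheme needs

// Allocate the page table with a 32-bit unsigned integer format so that
// imageAtomicExchange is legal on it (r8ui is not in the supported set).
glGenTextures(1, &pageTableTex);
glBindTexture(GL_TEXTURE_3D, pageTableTex);
glTexStorage3D(GL_TEXTURE_3D, 1, GL_R32UI,
               pageTableSize, pageTableSize, pageTableSize);

// Bind it to image unit 0 with GL_R32UI as well, so image accesses in the
// shader see 32-bit unsigned data.
glBindImageTexture(0, pageTableTex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_R32UI);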

Related

How do I solve this loop error in a WebGL shader?

When I pass in a uniform int variable as the bound of a for loop, it reports an error.
When I use a constant instead, it doesn't report an error.
How do I solve this?
// error INVALID_OPERATION
uniform int myLen;
for (int i = 0; i < myLen; i += 1)
// success
const int myNum = 10;
for (int i = 0; i < myNum; i += 1)
I am guessing you are targeting WebGL 1.
If we look at the specification for The OpenGL® ES Shading Language, version 1.00, which is what WebGL uses, and look at the section "Appendix A: Limitations for ES 2.0" (OpenGL ES 2.0 is what WebGL 1 is based on), it says:
In general, control flow is limited to forward branching and to loops where the maximum number of
iterations can easily be determined at compile time.
[…]
for loops are supported but with the following restrictions:
[…]
The for statement has the form:
for ( init-declaration ; condition ; expression ) statement
[…]
condition has the form
loop_index relational_operator constant_expression
where relational_operator is one of: > >= < <= == or !=
Note the "constant_expression". This unfortunately means that you aren't allowed* to use a uniform variable for your loop bound, like you did.
I believe this is different in WebGL 2. You might want to try using that if it's an option.
* The GLSL ES spec does say "Within the GLSL ES specification, implementations are permitted to implement features beyond the minima described in this section, without the use of an extension." However, unfortunately WebGL's specification prohibits this:
A WebGL implementation must only accept shaders which conform to The OpenGL ES Shading Language, Version 1.00 [GLES20GLSL], and which do not exceed the minimum functionality mandated in Sections 4 and 5 of Appendix A
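The usual workaround under these rules is to loop up to a compile-time constant maximum and break out once the real (uniform) count has been reached; only the for-loop header has to use a constant expression. Here is a rough sketch, written as a GLSL ES 1.00 fragment shader embedded in a C++ string (in WebGL you would put the same GLSL in a JavaScript string and pass it to gl.shaderSource); MAX_LEN and myLen are just illustrative names:
// GLSL ES 1.00 source: the loop bound in the for-header is a constant
// expression; the uniform is only used to break out of the loop early.
const char* fragmentSrc = R"(
    precision mediump float;
    #define MAX_LEN 64              // compile-time upper bound

    uniform int myLen;              // real count, assumed <= MAX_LEN

    void main() {
        float sum = 0.0;
        for (int i = 0; i < MAX_LEN; i += 1) {
            if (i >= myLen) break;  // leave the loop at the real bound
            sum += 1.0;
        }
        gl_FragColor = vec4(sum / float(MAX_LEN));
    }
)";
// fragmentSrc would then be handed to glShaderSource / gl.shaderSource as usual.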

What is the limit on work item (shader instance) memory in WebGL?

I declare an array in my WebGL vertex shader:
attribute vec2 position;
void main() {
#define length 1024
float arr[length];
// use arr so that it doesn't get optimized away
This works, but if I increase length to 2048 then gl.drawArrays does nothing. There are no errors: the shaders compile, the program links and it passes gl.validateProgram. I'm guessing that I tried to use too much memory on the stack. Is there a better, programmatic way to discover this limit? Am I doing something else wrong?
There are no errors: the shaders compile, the program links and it passes gl.validateProgram.
As guaranteed by the spec!
Section 2.10: "Vertex Shaders", page 42:
A shader should not fail to compile, and a program object should not fail to
link due to lack of instruction space or lack of temporary variables.
The GLSL spec helpfully notes:
Appendix A, section 3: "Usage of Temporary Variables":
The maximum number of variables is defined by the conformance tests.
You can get your very own copy of the conformance tests for the low, low price of $14,000-$19,000.
However, you can at least detect this situation (Section 2.10, page 41):
It is not always possible to determine at link time if a program object actually will execute. Therefore validation is done when the first rendering command (DrawArrays or DrawElements) is issued, to determine if the currently active program object can be executed. If it cannot be executed then no fragments will be rendered, and the rendering command will generate the error INVALID_OPERATION.
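In other words, you can detect the failure after the fact: issue a draw call and then check the error flag. A rough sketch in a native GL context (in WebGL the equivalent check is gl.getError() === gl.INVALID_OPERATION right after gl.drawArrays):
// Draw, then ask whether executing the current program failed validation.
const GLsizei vertexCount = 3;   // stand-in for however many vertices you draw
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
if (glGetError() == GL_INVALID_OPERATION) {
    // The active program cannot be executed on this hardware, e.g. because
    // it needs more temporaries or instructions than are available.
}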

Parsing GLSL error messages

When I compile a broken GLSL shader then the NVidia driver gives me error messages like this:
0(102) : error C1008: undefined variable "vec"
I know the number inside the brackets is the line number. I wonder what the 0 at the beginning of the error message means. I hoped it would be the index in the sources array which is passed to glShaderSource but that's not the case. It's always 0. Does someone know what this first number means?
And is there some official standard for the error message format so that I can parse the line number from it, or do other OpenGL implementations use other formats? I only have access to NVidia hardware, so I can't check what the error messages look like on AMD or Intel hardware.
It is a source identifier (conceptually a file number), which you can't specify via the GL API, so it is 0.
You can override it with the #line preprocessor directive (#line line source-string-number) right within the shader code. That can be helpful if your shader is assembled from many files with #includes by an external preprocessor (before passing the source to GL).
There is no standard for these messages; every implementation does whatever it wants.
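Building on the #line suggestion above: if you assemble the shader from several strings, you can tag each piece so the leading number tells you which piece an error came from. A rough C++ sketch (whether the driver actually reports that number, or always 0, seems to vary):
// Pass the shader as two strings and tag each with #line; the leading number
// in "N(line) : error ..." should then be the tagged source-string number.
const char* pieces[2] = {
    "#version 450 core\n"
    "#line 1 0\n"                       // subsequent lines: source string 0
    "uniform float gridSize;\n",

    "#line 1 1\n"                       // subsequent lines: source string 1
    "void main() { }\n"
};

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 2, pieces, nullptr); // nullptr: strings are NUL-terminated
glCompileShader(fs);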
glShaderSource accepts an array of source strings. The first number is the index into that array.

incorrect value from glGetProgramiv(program, GL_ACTIVE_UNIFORM_MAX_LENGTH, outParam)

I need the maximum length of a uniform name, i.e. given a program that uses the uniforms uniform test and uniform myuniform, glGetProgramiv(program, GL_ACTIVE_UNIFORM_MAX_LENGTH, outParam) should output 10 (for "myuniform").
I have a very simple test shader set up with 1 defined uniform: uniform float time
glGetProgramiv(program, GL_ACTIVE_UNIFORM_MAX_LENGTH, outParam) returns 5, the length of "time". If I change "time" to something else, it returns the changed length (e.g. changing it to "timer" returns 6).
glGetProgramiv with GL_ACTIVE_UNIFORMS tells me that there are 2 uniforms.
The second uniform that it is reporting is gl_ModelViewProjectionMatrix.
I am fine with it including gl_ModelViewProjectionMatrix in the list - I am using it in the shader - but this brings up a problem when combined with the other return value. Why doesn't glGetProgramiv return the length of "gl_ModelViewProjectionMatrix" if it is including it in the list? I need the full names of attributes and uniforms for my application, but since I am getting a max length of 5, glGetActiveUniform returns a uniform name of "gl_M", which is not acceptable.
Either the max name length should include the MVP matrix, or the list of names should not. It does not make sense to include the name in the list but not in the max name length calculation.
Is this happening only for me? I could not find anything else about it using Google. I could abandon a query for max length and always use very big buffers, but I've seen some very long variable names before, so the buffers would have to be huge to guarantee no errors. That's not a real fix anyway.
This test works correctly for attributes. I use gl_Vertex and have no other attributes, and the system correctly reports 1 active attribute with a length of 10 and a name of gl_Vertex. If I remove my time uniform entirely, leaving the MVP matrix as the only used uniform, the system reports 1 active uniform but a max name length of 0, so fetching its name with the returned max length yields nothing.
For completeness, I include the code below. The code is in Java and uses JOGL to access the OpenGL bindings. To highlight the relevant areas, I have deleted lines not relevant to this issue - mostly the GUI updating, including the GUI part that actually shows the values obtained here. I also deleted the part that gets the attributes, since that works fine, as stated above.
FYI for the C people who are wary of the Java-isms: think of a Buffer (IntBuffer, ByteBuffer, FloatBuffer) like a pointer, buffer.get() like buffer[n++], and buffer = IntBuffer.allocate(n) like a malloc(). I also use OpenGL in C and C++, so I can rewrite this in C if the GLSL gurus here prefer that.
Any suggestions?
// add options to panelShaderParameters
public void updateShaderParameters(GLAutoDrawable surface)
{
    GL2 gl = surface.getGL().getGL2();
    IntBuffer outParam = IntBuffer.allocate(1);
    int numParameters = 0,
        maxNameLength = 0;
    IntBuffer size = null,
              type = null;
    ByteBuffer name = null;

    gl.glGetProgramiv(shader.getName(), GL2.GL_ACTIVE_UNIFORMS, outParam);
    numParameters = outParam.get();

    outParam = IntBuffer.allocate(1);
    gl.glGetProgramiv(shader.getName(), GL2.GL_ACTIVE_UNIFORM_MAX_LENGTH, outParam);
    maxNameLength = outParam.get();

    for (int i = 0; i < numParameters; i += 1)
    {
        size = IntBuffer.allocate(1);
        type = IntBuffer.allocate(1);
        name = ByteBuffer.allocate(maxNameLength);
        gl.glGetActiveUniform(shader.getName(), i, maxNameLength, (IntBuffer)null, size, type, name);

        byte[] nameBuffer = new byte[maxNameLength];
        name.position(0);
        name.get(nameBuffer);
    }
}
You have found a driver bug. You can attempt to report it (this forum is a place where it may be seen), but there's nothing you can do to make it work correctly. So just work around it.
Rather than having a single max size, just ask each uniform in turn what its length is. In C/C++, you do this by passing NULL for the buffer to glGetActiveUniformName:
GLsizei length;
glGetActiveUniformName(program, index, 1000 /*large number*/, &length, NULL);
//Use length to allocate a buffer of the appropriate size.
I don't know how this would work using JOGL. Perhaps you should switch to LWJGL, which has a much more reasonable Java implementation of this function.
Also, stop storing the strings as byte arrays. Convert them into proper Java strings (another reason to use LWJGL).
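If your context exposes OpenGL 3.1 or ARB_uniform_buffer_object, another way around the bad max-length value is to ask for each uniform's exact name length with glGetActiveUniformsiv and GL_UNIFORM_NAME_LENGTH. A rough C++ sketch (the program handle is assumed to be valid; the corresponding JOGL calls would be analogous):
#include <string>
#include <vector>

// Query each active uniform's name length (including the terminating NUL)
// individually, instead of trusting GL_ACTIVE_UNIFORM_MAX_LENGTH.
std::vector<std::string> getUniformNames(GLuint program)
{
    GLint count = 0;
    glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);

    std::vector<std::string> names;
    for (GLuint i = 0; i < static_cast<GLuint>(count); ++i)
    {
        GLint nameLen = 0;
        glGetActiveUniformsiv(program, 1, &i, GL_UNIFORM_NAME_LENGTH, &nameLen);

        std::vector<GLchar> buf(nameLen > 0 ? nameLen : 1, '\0');
        GLint size = 0;
        GLenum type = 0;
        glGetActiveUniform(program, i, static_cast<GLsizei>(buf.size()),
                           nullptr, &size, &type, buf.data());
        names.emplace_back(buf.data());
    }
    return names;
}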
We had exactly this problem, and I have seen it described by others here:
http://www.opengl.org/discussion_boards/showthread.php/179117-Driver-bug-causing-incorrect-glGetProgramiv-output
In my case, the problem arose after upgrading from the nvidia 295.40 driver to 304.64, on various GPU models. In the forum thread above the bug was reported for an Intel driver. Which driver are you using, Ludowijk?
I guess that built-in attributes starting with "gl_" are perhaps not considered in the max name length computation, but even if that is true, this behavior should be considered a bug IMHO.
Thanks Nicol for the idea for the workaround.

How do I set the DPI of a scan using TWAIN in C++

I am using TWAIN in C++ and I am trying to set the DPI manually so that the user is not shown the scan dialog; instead the page just scans with set defaults and is stored for them. I need to set the DPI manually but I cannot seem to get it to work. I have tried setting the capability using ICAP_XRESOLUTION and ICAP_YRESOLUTION, but when I look at the image's info it always shows the same resolution, no matter what I set via the ICAPs. Is there another way to set the resolution of a scanned-in image, or is there an additional step that needs to be done that I cannot find anywhere in the documentation?
Thanks
I use ICAP_XRESOLUTION and ICAP_YRESOLUTION to set the scan resolution, and it works at least for a number of HP scanners.
Code snippet:
// cap (a TW_CAPABILITY), val_p, ret_code and the SetCapability() helper are
// declared elsewhere in the scanning code; FloatToFIX32() is defined below.
float x_res = 1200;

cap.Cap = ICAP_XRESOLUTION;
cap.ConType = TWON_ONEVALUE;
cap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));
if (cap.hContainer)
{
    val_p = (pTW_ONEVALUE)GlobalLock(cap.hContainer);
    val_p->ItemType = TWTY_FIX32;
    TW_FIX32 fix32_val = FloatToFIX32(x_res);
    val_p->Item = *((pTW_INT32)&fix32_val);
    GlobalUnlock(cap.hContainer);
    ret_code = SetCapability(cap);
    GlobalFree(cap.hContainer);
}

TW_FIX32 FloatToFIX32(float i_float)
{
    TW_FIX32 Fix32_value;
    TW_INT32 value = (TW_INT32)(i_float * 65536.0 + 0.5);
    Fix32_value.Whole = LOWORD(value >> 16);
    Fix32_value.Frac  = LOWORD(value & 0x0000ffffL);
    return Fix32_value;
}
The value has to be of type TW_FIX32, which is a fixed-point number format defined by TWAIN (strange but true).
I hope it works for you!
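If you want to verify what the driver actually stored, the reverse conversion is simple. A small helper like this (the name is mine, not from the TWAIN headers) lets you compare a TW_FIX32 you read back through a MSG_GETCURRENT / GetCapability-style wrapper of your own with the value you set:
// Convert a TWAIN fixed-point value back to float (inverse of FloatToFIX32).
float FIX32ToFloat(const TW_FIX32& fix32)
{
    return (float)fix32.Whole + (float)fix32.Frac / 65536.0f;
}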
It should work that way.
But unfortunately we're not living in a perfect world. TWAIN drivers are among the most buggy drivers out there. Controlling the scanning process with TWAIN has always been a big headache because most drivers have never been tested without the scan dialog.
As far as I know there is also no test suite for TWAIN drivers, so each of them will behave slightly differently.
I wrote an OCR application back in the '90s and had to deal with these issues as well. What I ended up with was a list of supported scanners and a scanner module with lots of hacks and workarounds for each different driver.
Take ICAP_XRESOLUTION, for example: the TWAIN documentation says you have to send the resolution as a TW_FIX32 (a 32-bit fixed-point value). Have you tried setting it with a plain integer instead? Or sending it as a TW_FIX32 but putting the bit representation of an integer into it, or vice versa? Any of these could work for the driver you're dealing with. Or none of them could work at all.
I doubt the situation has changed much since then. So good luck getting it working on at least half of the machines that are out there.
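If you want to try that kind of experiment, the sketch below mirrors the snippet in the answer above but stuffs a plain 32-bit integer into the container instead of a TW_FIX32 (cap, ret_code and SetCapability() are the same ones used there); whether any particular driver accepts this is anybody's guess:
// Experiment only: the TWAIN spec wants TW_FIX32 for ICAP_XRESOLUTION,
// but some buggy drivers may react differently to a plain integer.
cap.Cap = ICAP_XRESOLUTION;
cap.ConType = TWON_ONEVALUE;
cap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));
if (cap.hContainer)
{
    pTW_ONEVALUE val_p = (pTW_ONEVALUE)GlobalLock(cap.hContainer);
    val_p->ItemType = TWTY_UINT32;  // instead of TWTY_FIX32
    val_p->Item = 1200;             // resolution sent as a plain integer
    GlobalUnlock(cap.hContainer);
    ret_code = SetCapability(cap);
    GlobalFree(cap.hContainer);
}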