I ported this sample from g-truc to JOGL and it works, everything fine, everything nice.
But now I am trying to understand exactly what the stream of glDrawTransformFeedbackStream refers to.
Basically, a vec4 position input gets transformed into the varyings declared by
String[] strings = {"gl_Position", "Block.color"};
gl4.glTransformFeedbackVaryings(transformProgramName, 2, strings, GL_INTERLEAVED_ATTRIBS);
as follows:
void main()
{
gl_Position = mvp * position;
outBlock.color = vec4(clamp(vec2(position), 0.0, 1.0), 0.0, 1.0);
}
transform-stream.vert, transform-stream.geom
And then I simply render the transformed objects with glDrawTransformFeedbackStream
feedback-stream.vert, feedback-stream.frag
Now, the docs say:
Specifies the index of the transform feedback stream from which to
retrieve a primitive count.
Cool, so if I bind my feedbackArrayBufferName to 0 here
gl4.glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedbackName[0]);
gl4.glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackArrayBufferName[0]);
gl4.glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
I guess the stream should be that index.
Also, the geometry shader explicitly assigns only the color to stream 0. What about the positions? Are they assumed to already be on stream 0? How? From glTransformFeedbackVaryings?
Therefore, I tried to switch all the references to this stream to 1, to check whether they are all consistent and really refer to the same index.
So I modified
gl4.glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, feedbackArrayBufferName[0]);
and
gl4.glDrawTransformFeedbackStream(GL_TRIANGLES, feedbackName[0], 1);
and also inside the geometry shader
out Block
{
layout(stream = 1) vec4 color;
} outBlock;
But when I run it, I get:
Program link failed: 1
Link info
---------
error: Transform feedback can't capture varyings belonging to different vertex streams in a single buffer.
OpenGL Error(GL_INVALID_OPERATION): initProgram
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. <program> object is not successfully linked, or is not a program object.
when 1455183474230
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 361.43 - hash 0x225c78a9]
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. <program> has not been linked, or is not a program object.
when 1455183474232
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 361.43 - hash 0x225c78a9]
Trying to figure out what's going on, I found this:
Output variables in the Geometry Shader can be declared to go to a particular stream. This is controlled via an in-shader specification, but there are certain limitations that affect advanced component interleaving.
No two outputs that go to different streams can be captured by the same buffer. Attempting to do so will result in a linker error. So using multiple streams with interleaved writing requires using advanced interleaving to route attributes to different buffers.
Is that what happens to me: position going to stream 0 and color to stream 1?
I'd simply like to know whether my hypothesis is correct. And if it is, I want to prove it by changing the stream index.
Therefore I'd also like to know how I can put the position on stream 1 together with the color after my changes. Shall I modify the output of the geometry shader like this: layout(triangle_strip, max_vertices = 3, xfb_buffer = 1) out;?
Because it complains
Shader status invalid: 0(11) : error C7548: 'layout(xfb_buffer)' requires "#extension GL_ARB_enhanced_layouts : enable" before use
Then I add it and I get
error: Transform feedback can't capture varyings belonging to different vertex streams in a single buffer.
But now they should both be on stream 1, so what am I missing?
Moreover, what is the definition of a stream?
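For what it's worth, here is the direction the linker message and the quote above point me in: route each stream's varyings to its own buffer instead of interleaving them in one. Below is a minimal, untested sketch in plain C-style GL (rather than JOGL), using the special gl_NextBuffer name from GL 4.0 / ARB_transform_feedback3; the buffer names are hypothetical and error handling is omitted.
#include <GL/glew.h> // or any loader exposing GL 4.x entry points

// Capture stream-0 and stream-1 varyings into two separate buffers.
void setupTwoStreamCapture(GLuint transformProgramName,
                           GLuint feedbackName,
                           GLuint positionBuffer,  // receives gl_Position (stream 0)
                           GLuint colorBuffer)     // receives Block.color  (stream 1)
{
    // gl_NextBuffer switches capture to the next buffer binding point, so the
    // two varyings no longer share a single interleaved buffer.
    const char* varyings[] = { "gl_Position", "gl_NextBuffer", "Block.color" };
    glTransformFeedbackVaryings(transformProgramName, 3, varyings,
                                GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(transformProgramName); // the varyings take effect at link time

    glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedbackName);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, positionBuffer); // binding 0
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, colorBuffer);    // binding 1
    glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
}
As far as I understand it, the stream argument of glDrawTransformFeedbackStream refers to the vertex stream declared with layout(stream = N) in the geometry shader (gl_Position stays on the default stream 0 unless it is redeclared under another stream), which is a separate concept from the GL_TRANSFORM_FEEDBACK_BUFFER binding index used above.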
Related
I have an OpenGL project that has previously used OpenGL 3.0-based methods for drawing arrays and I'm trying to convert it to use newer methods (at least available as of OpenGL 4.3). However, so far I have not been able to make it work.
The piece of code I'll use for explanation creates groupings of points and draws lines between them. It can also fill the resulting polygon (using triangles). I'll give an example using the point-drawing routines. Here's the pseudo-code for how it used to work:
[Original] When points are first generated (happens once):
// ORIGINAL, WORKING CODE RUN ONCE DURING SETUP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId
// Set the vertex buffer as the current buffer and fill it
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
[Original] Within the loop that does the drawing:
// ORIGINAL, WORKING CODE EXECUTED DURING EACH DRAW LOOP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Note: positionAttributeId (below) was derived from the active
// shader program via glGetAttribLocation
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.EnableVertexAttribArray(positionAttributeId);
gl.VertexAttribPointer(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
That code has worked for quite a while now, but I'm trying to move to more modern techniques. Most of my code didn't change at all, and this is the new code that replaced the above:
[After Mods] When points are first generated (happens once):
// MODIFIED CODE RUN ONCE DURING SETUP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId.
// Set the vertex buffer as the current buffer
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
// ^^^ ALL CODE ABOVE THIS LINE IS IDENTICAL TO THE ORIGINAL ^^^
// Generate Vertex Arrays
// METHOD CODE NOT SHOWN, but uses glGenVertexArrays to fill class-member array IDs
GenerateVertexArrays(gl); // Note: we now have a PointsArrayId
// My understanding: I'm telling OpenGL to associate following calls
// with the vertex array generated with the ID PointsArrayId...
gl.BindVertexArray(PointsArrayId);
// Here I associate the positionAttributeId (found from the shader program)
// with the currently bound vertex array (right?)
gl.EnableVertexAttribArray(positionAttributeId);
// Here I tell the bound vertex array about the format of the position
// attribute as it relates to that array -- I think.
gl.VertexAttribFormat(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0);
// As I understand it, I can define my own "local" buffer index
// in the following calls (?). Below I use 0, which I then bind
// to the buffer with id = this.VerticesBufferId (for the purposes
// of the currently bound vertex array)
gl.VertexAttribBinding(positionAttributeId, 0);
gl.BindVertexBuffer(0, this.VerticesBufferId, IntPtr.Zero, 0);
gl.BindVertexArray(0); // we no longer want to be bound to PointsArrayId
[After Mods] Within the loop that does the drawing:
// MODIFIED CODE EXECUTED DURING EACH DRAW LOOP::
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Here I tell OpenGL to bind the VertexArray I established previously
// (which should understand how to fill the position attribute in the
// shader program using the "zeroth" buffer index tied to the
// VerticesBufferId data buffer -- because I went through all the trouble
// of telling it that above, right?)
gl.BindVertexArray(this.PointsArrayId);
// \/ \/ \/ NOTE: NO CODE CHANGES IN THE CODE BELOW THIS LINE \/ \/ \/
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
After the modifications, the routines draw nothing to the screen. There's no exceptions thrown or indications (that I can tell) of a problem executing the commands. It just leaves a blank screen.
General questions:
Does my conversion to the newer vertex array methods look correct? Do you see any errors in the logic?
Am I supposed to make specific glEnable calls for this method to work that weren't needed with the old one?
Can I mix and match between the two methods to fill attributes in the same shader program? (E.g., in addition to the above, I fill out triangle data and use it with the same shader program; if I haven't switched that process to the new method, will that cause a problem?)
If there is anything else I'm missing here, I'd really appreciate it if you'd let me know.
A little more sleuthing and I figured out my error:
When using glVertexAttribPointer, you can set the stride parameter to 0 (zero) and OpenGL will work out the tightly packed stride for you; with the newer separated-format API, however, the stride passed to glBindVertexBuffer is used exactly as given (0 is no longer a "tightly packed" shortcut), so you must set the stride yourself.
Once I manually set the stride value (3 floats per vertex here) instead of leaving it at 0, everything worked as expected.
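For reference, here is a sketch of the corrected setup in plain C-style GL (the original is C#/SharpGL; positionAttributeId and verticesBufferId stand in for the members used above, and a GL 4.3+ context with loaded entry points is assumed):
#include <GL/glew.h> // or any loader exposing GL 4.3 entry points

void setupPointsVao(GLuint positionAttributeId, GLuint verticesBufferId,
                    GLuint* pointsArrayId)
{
    glGenVertexArrays(1, pointsArrayId);
    glBindVertexArray(*pointsArrayId);

    glEnableVertexAttribArray(positionAttributeId);
    // Describe the attribute: 3 floats, not normalized, relative offset 0.
    glVertexAttribFormat(positionAttributeId, 3, GL_FLOAT, GL_FALSE, 0);
    // Route the attribute through binding point 0...
    glVertexAttribBinding(positionAttributeId, 0);
    // ...and attach the buffer to binding point 0. The last argument is the
    // stride; it must be the real vertex size, because 0 is taken literally here.
    glBindVertexBuffer(0, verticesBufferId, 0, 3 * sizeof(float));

    glBindVertexArray(0);
}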
I have a shader program with a for loop in the geometry shader. The program links (and operates) fine when the for loop length is small enough. If I increase the length then I get a link error (with empty log). The shaders compile fine in both cases. Here is the geometry shader code (with everything I thought relevant):
#version 330
layout (points) in;
layout (triangle_strip, max_vertices = 256) out;
...
void main()
{
...
for(int i = 0 ; i < 22 ; ++i) // <-- Works with 22, not with 23.
{
...
EmitVertex();
...
EmitVertex();
...
EmitVertex();
...
EmitVertex();
EndPrimitive();
}
}
The specs state: "non-terminating loops are allowed. The consequences of very long or non-terminating loops are platform dependent." Could this be a platform dependent situation (GeForce GT 640)? As the shader code evolved, the max length of the for loop changed (more code -> smaller max), leading me to suspect it has something to do with loop unrolling. Can anyone give me any more info on this issue? (Let me know if you need more code/description.)
One possible reason for failure to link programs containing geometry shaders is the GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS limit. Section 11.3.4.5 "Geometry Shader Outputs" of the OpenGL 4.5 core profile specification states (my emphasis):
There are two implementation-dependent limits on the value of GEOMETRY_VERTICES_OUT; it may not exceed the value of MAX_GEOMETRY_OUTPUT_VERTICES, and the product of the total number of vertices and the sum of all components of all active output variables may not exceed the value of MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS. LinkProgram will fail if it determines that the total component limit would be violated.
The GL guarantees that this total component limit is at least 1024.
You did not paste the full code of your shaders, so it is unclear how many components per vertex you are using, but this might be the reason for the link failure.
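A quick way to check is to query the limits at run time and compare them with what the shader declares. A small sketch (the 4-components-per-vertex figure in the comment is just an example, not taken from your shader):
#include <GL/glew.h> // or any loader exposing the GL 3.2+ entry points
#include <cstdio>

// With layout(max_vertices = 256) and, say, only gl_Position (4 components)
// per vertex, the product is 256 * 4 = 1024, which already sits exactly at the
// minimum guaranteed GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS; every extra
// output component pushes it over the limit on minimal implementations.
void printGeometryLimits()
{
    GLint maxVertices = 0, maxTotalComponents = 0, maxComponents = 0;
    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES, &maxVertices);
    glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, &maxTotalComponents);
    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_COMPONENTS, &maxComponents);
    std::printf("max vertices %d, max total components %d, "
                "max components per vertex %d\n",
                maxVertices, maxTotalComponents, maxComponents);
}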
If I increase the length then I get a link error (with empty log).
The spec does not require any linker or compiler messages at all. However, Nvidia usually provides quite good log messages. If you can reproduce the "link failure without log message" scenario in the most current driver version, it might be worth filing a bug report.
I'm seeing terrible behavior with one of my textures in OpenGL.
After deleting a texture I create a new one; it gets the same texture name as before, but the texture is incorrect. Also, glGetError always returns 0 on every line! I tried adding glFlush/glFinish after glDeleteTextures but it doesn't change anything! The texture name seems locked somewhere... why?
It's single-threaded; here's the behavior:
//myTexture == 24 is loaded and works correctly
GLboolean bIsTexture = glIsTexture(myTexture); //returns 1 = > ok
glDeleteTextures(1,&myTexture);
bIsTexture = glIsTexture(myTexture); //returns 0 => ok
//Let's create a new texture
glGenTextures(1,&myTexture);//myTexture == 24 (as the glDelete was ok)
glBindTexture(GL_TEXTURE_2D,myTexture);
bIsTexture = glIsTexture(myTexture); //returns 0 => FAILS
Call glBindTexture with 0 before re-creating a texture whose name was previously bound:
//Let's create a new texture
glBindTexture(GL_TEXTURE_2D,0); // unbind the deleted texture that was still bound
glGenTextures(1,&myTexture);//myTexture == 24 (as the glDelete was ok)
glBindTexture(GL_TEXTURE_2D,myTexture);
bIsTexture = glIsTexture(myTexture); //returns 1 => Ok
Well, you're missing one important step in the commonly used sequence of operations for creating a new texture: Actually allocating it.
//Let's create a new texture
glGenTextures(1,&myTexture);//myTexture == 24 (as the glDelete was ok)
glBindTexture(GL_TEXTURE_2D,myTexture);
Here you must call either glTexImage2D or glTexStorage to actually allocate the texture object. Before doing so, there's no texture data associated with the generated texture name. This is important: The value(s) generated by glGenTextures are not textures, but texture names (i.e. handles) and while OpenGL states that this should be a texture object already, a buggy driver may interpret it wrongly.
glTexImage2D(…); // <<<<<
bIsTexture = glIsTexture(myTexture); //returns …
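Fleshed out, the commonly used sequence looks something like this (the size, format and data are placeholders; glTexStorage2D needs GL 4.2 or ARB_texture_storage):
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
// Allocate storage (and optionally upload data); this is the missing step:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels); // pixels may be NULL
// or, as immutable storage:
// glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);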
Update:
As Andon M. Coleman points out (thanks for that), binding a texture name to a texture target creates a texture object (associated with said name), so one should expect glIsTexture to return GL_TRUE in that case. Now here's how reality differs from the ideal world of the specification: an actual (buggy) driver may be wrongly implemented and assume a name is associated with a texture object only if there's actual data storage present, so it might be necessary to do the allocation to see the effect.
In practice you normally do the storage allocation quite soon after the name allocation. I presume the test suite the implementer of your driver used does not check for this corner case. Time for writing a bug report.
The only thing glIsTexture (...) does is let you know whether an OpenGL name (handle) belongs to a texture or not. In fact, OpenGL names are not actually tied to their final purpose until first use. In the case of your texture name, glIsTexture merely checks whether the name is associated with a texture; this association takes place the first time you call glBindTexture (...) using the name. glIsTexture does not tell you whether the name has an associated data store (e.g. whether you called glTexImage2D (...), or in OpenGL 4+ glTexStorage (...)).
Here's an interesting bit of trivia: before OpenGL 3.0, the spec. allowed you to come up with any unused number for an object name and OpenGL would treat it like you had used a glGen___ (...) function to generate the name; this can still be done in compatibility profiles. That is how unimportant the name generation functions were in the grand scheme of things.
The big takeaway here is that names are given their function upon first use. More importantly, glIs___ (...) merely tells you whether a name is associated with a particular kind of OpenGL object (not whether it is a valid/initialized/... object).
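Put concretely, a small sketch of what that means in code (per the spec quoted below; your driver apparently only makes the second check pass once storage has been allocated):
GLuint name = 0;
glGenTextures(1, &name);
// The name is now reserved, but not yet a texture object:
//   glIsTexture(name) == GL_FALSE
glBindTexture(GL_TEXTURE_2D, name);
// The first bind turns the name into a 2D texture object:
//   glIsTexture(name) == GL_TRUE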
The official explanation of what I just mentioned comes from the OpenGL spec, which states:
The command:
void GenTextures( sizei n, uint *textures );
returns n previously unused texture names in textures. These names are marked as used, for the purposes of GenTextures only, but they acquire texture state and a dimensionality only when they are first bound, just as if they were unused.
The binding is effected by calling:
void BindTexture( enum target, uint texture );
with target set to the desired texture target and texture set to the unused name. The resulting texture object is a new state vector, comprising all the state and with the same initial values listed in section 8.21. The new texture object bound to target is, and remains, a texture of the dimensionality and type specified by target until it is deleted.
Since this is all that glIsTexture (...) is supposed to do, I would have to assume that this is a driver bug.
When I run my DirectX 11 project, I get spammed in my output window every time ID3D11DeviceContext::DrawIndexed is called, with this warning:
D3D11: WARNING: ID3D11DeviceContext::DrawIndexed: Input vertex slot 0
has stride 48 which is less than the minimum stride logically expected
from the current Input Layout (56 bytes). This is OK, as hardware is
perfectly capable of reading overlapping data. However the developer
probably did not intend to make use of this behavior. [ EXECUTION
WARNING #355: DEVICE_DRAW_VERTEX_BUFFER_STRIDE_TOO_SMALL ]
This is how I'm currently calling the function
pImmediateContext->DrawIndexed( this->vertexBuffer.indices.size() * 3,
0, 0 );
I'm not sure what I'm doing wrong that is causing this warning. If someone could shed some light on the issue I would appreciate it.
The error is telling you that your input layout has a different total byte size than the stride you have set when setting the vertex buffer.
To fix the problem you need to ensure that the input layout set via IASetInputLayout() expects the same stride as the one you pass when you call IASetVertexBuffers().
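A sketch of what that means in practice: keep one vertex struct that matches the input layout and derive the stride from it. The struct, its members, and the vertexBuffer/inputLayout/indexCount parameters below are hypothetical stand-ins; only pImmediateContext comes from the code above.
#include <d3d11.h>
#include <DirectXMath.h>

// A layout expecting 56 bytes per vertex could look like this:
struct Vertex
{
    DirectX::XMFLOAT3 position; // 12 bytes -> "POSITION"
    DirectX::XMFLOAT3 normal;   // 12 bytes -> "NORMAL"
    DirectX::XMFLOAT4 tangent;  // 16 bytes -> "TANGENT"
    DirectX::XMFLOAT4 color;    // 16 bytes -> "COLOR"   (total: 56 bytes)
};

void drawWithMatchingStride(ID3D11DeviceContext* pImmediateContext,
                            ID3D11Buffer* vertexBuffer,
                            ID3D11InputLayout* inputLayout,
                            UINT indexCount)
{
    UINT stride = sizeof(Vertex); // must equal the layout's total element size
    UINT offset = 0;
    pImmediateContext->IASetInputLayout(inputLayout);
    pImmediateContext->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    pImmediateContext->DrawIndexed(indexCount, 0, 0);
}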
I have completely re-written this first post to better show what the problem is.
I am using ps v1.4 (highest version I have support for) and keep getting an error.
It happens any time I use any type of function such as cos, dot, distance, sqrt, normalize etc. on something that was passed into the pixelshader.
For example, I need to do "normalize(LightPosition - PixelPosition)" to use a point light in my pixelshader, but normalize gives me an error.
Some things to note:
I can use things like pow, abs, and radians with no error.
There is only an error if it is done on something passed from the vertex shader. (For example I could take the sqrt of a local pixelshader variable with no error)
I get the error from doing a function on ANY variable passed in, even tex coords, color, etc.
Inside the vertex shader I can do all of these functions on any variables passed in with no errors, it's only in the pixelshader that I get an error
All the values passing from the vertex to pixel shader are correct, because if I use software processing rather than hardware I get no error and a perfectly lit scene.
Since normalizing the vector is essentially where my error comes from, I tried creating my own normalizing function.
I call Norm(LightPosition - PixelPosition), and "Norm" looks like this:
float3 Norm(float3 v)
{
return v / sqrt(dot(v, v));
}
I still get the error because I guess technically I'm still trying to take a sqrt inside the pixelshader.
The error isn't anything specific, it just says "error in application" on the line where I load my .fx file in C#
I'm thinking it could actually be a compiling error because I have to use such old versions (vs 1.1 and ps 1.4)
When debugged using fxc.exe it tells me "can not map instruction to pixel shader instruction set"
Old GPUs didn't always support every instruction, especially in the pixel shader.
You might get away with a sqrt in the vertex shader, but for such old shader versions (vs 1.1 / ps 1.4!) the pixel shader might be extremely limited.
I.e. this might not be a bug.
The workaround could be to skip HLSL and write your own shader assembly (but you might stumble onto the same problem there) and simulate the sqrt, say with a texture lookup and/or interpolations, if you can have 2 textures in 1.0 :-p
You can of course try to write a sqrt lookup/interpolation in HLSL, but it might be too big too (I don't remember, but IIRC 1.1 doesn't let you write very long shaders).