How do you convert an OpenGL project from older glVertexAttribPointer methods to newer glVertexAttribBinding methods? - opengl

I have an OpenGL project that has previously used OpenGL 3.0-based methods for drawing arrays and I'm trying to convert it to use newer methods (at least available as of OpenGL 4.3). However, so far I have not been able to make it work.
The piece of code I'll use for explanation creates groupings of points and draws lines between them. It can also fill the resulting polygon (using triangles). I'll give an example using the point-drawing routines. Here's the pseudo-code for how it used to work:
[Original] When points are first generated (happens once):
// ORIGINAL, WORKING CODE RUN ONCE DURING SETUP:
// NOTE: This code is in C# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId
// Set the vertex buffer as the current buffer and fill it
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
[Original] Within the loop that does the drawing:
// ORIGINAL, WORKING CODE EXECUTED DURING EACH DRAW LOOP:
// NOTE: This code is in C# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Note: positionAttributeId (below) was derived from the active
// shader program via glGetAttribLocation
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.EnableVertexAttribArray(positionAttributeId);
gl.VertexAttribPointer(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
That code has worked for quite a while now, but I'm trying to move to more modern techniques. Most of my code didn't change at all, and this is the new code that replaced the above:
[After Mods] When points are first generated (happens once):
// MODIFIED CODE RUN ONCE DURING SETUP:
// NOTE: This code is in C# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId.
// Set the vertex buffer as the current buffer
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
// ^^^ ALL CODE ABOVE THIS LINE IS IDENTICAL TO THE ORIGINAL ^^^
// Generate Vertex Arrays
// METHOD CODE NOT SHOWN, but uses glGenVertexArrays to fill class-member array IDs
GenerateVertexArrays(gl); // Note: we now have a PointsArrayId
// My understanding: I'm telling OpenGL to associate the following calls
// with the vertex array generated with the ID PointsArrayId...
gl.BindVertexArray(PointsArrayId);
// Here I associate the positionAttributeId (found from the shader program)
// with the currently bound vertex array (right?)
gl.EnableVertexAttribArray(positionAttributeId);
// Here I tell the bound vertex array about the format of the position
// attribute as it relates to that array -- I think.
gl.VertexAttribFormat(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0);
// As I understand it, I can define my own "local" buffer index
// in the following calls (?). Below I use 0, which I then bind
// to the buffer with id = this.VerticesBufferId (for the purposes
// of the currently bound vertex array)
gl.VertexAttribBinding(positionAttributeId, 0);
gl.BindVertexBuffer(0, this.VerticesBufferId, IntPtr.Zero, 0);
gl.BindVertexArray(0); // we no longer want to be bound to PointsArrayId
[After Mods] Within the loop that does the drawing:
// MODIFIED CODE EXECUTED DURING EACH DRAW LOOP:
// NOTE: This code is in C# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Here I tell OpenGL to bind the VertexArray I established previously
// (which should understand how to fill the position attribute in the
// shader program using the "zeroth" buffer index tied to the
// VerticesBufferId data buffer -- because I went through all the trouble
// of telling it that above, right?)
gl.BindVertexArray(this.PointsArrayId);
// \/ \/ \/ NOTE: NO CODE CHANGES IN THE CODE BELOW THIS LINE \/ \/ \/
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
After the modifications, the routines draw nothing to the screen. There are no exceptions thrown or indications (that I can tell) of a problem executing the commands. It just leaves a blank screen.
General questions:
Does my conversion to the newer vertex array methods look correct? Do you see any errors in the logic?
Am I supposed to make specific glEnable calls for this method to work, as opposed to the old method?
Can I mix and match between the two methods to fill attributes in the same shader program? (E.g., in addition to the above, I fill out triangle data and use it with the same shader program. If I haven't switched that process to the new method, will that cause a problem?)
If there is anything else I'm missing here, I'd really appreciate it if you'd let me know.

A little more sleuthing and I figured out my error:
When using glVertexAttribPointer, you can set the stride parameter to 0 (zero) and OpenGL will work out the stride of tightly packed data for you; with the newer methods, however, the stride you pass to glBindVertexBuffer is taken literally, so 0 means a stride of zero bytes and you must set it yourself. (Note that the last parameter of glVertexAttribFormat is a relative offset within a vertex, not a stride.)
Once I manually set the stride value in the glBindVertexBuffer call (3 * sizeof(float) for my packed xyz points), everything worked as expected.
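For reference, here is a minimal sketch of the corrected setup written as the raw C calls that the SharpGL wrapper forwards to. The names vao, vbo, and positionAttr are placeholders for this example; the key change is the explicit stride in the last argument of glBindVertexBuffer:
glBindVertexArray(vao);
glEnableVertexAttribArray(positionAttr);
// 3 floats per attribute, not normalized, relative offset 0 within a vertex
glVertexAttribFormat(positionAttr, 3, GL_FLOAT, GL_FALSE, 0);
// Route the attribute through buffer binding index 0
glVertexAttribBinding(positionAttr, 0);
// Unlike glVertexAttribPointer, a stride of 0 here means literally zero
// bytes, so tightly packed xyz data needs an explicit 3 * sizeof(float)
glBindVertexBuffer(0, vbo, 0, 3 * sizeof(float));
glBindVertexArray(0);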

Related

Draw multiple meshes to different locations (DirectX 12)

I have a problem with DirectX 12. I have made a small 3D renderer. Models are transformed in the vertex shader with basic world/view/projection matrices that live in a constant buffer.
To change the constant buffer's data I'm currently using memcpy(pMappedConstantBuffer + alignedSize * frame, newConstantBufferData, alignedSize); this call replaces the constant buffer's data immediately.
So here is the problem: drawing is recorded to a command list that will later be sent to the GPU for execution.
Example:
/* Now I want to change the constant buffer so the next draw call's position is (0, 1, 0) */
memcpy(/*Parameters*/);
/* Now I want to record a draw call to the command list */
DrawInstanced(/*Parameters*/);
/* But now I want to draw another mesh at another position, so I have to change the constant buffer. After this memcpy() the draw position will be (0, -1, 0) */
memcpy(/*Parameters*/);
/* Now I want to record a new draw call to the list */
DrawInstanced(/*Parameters*/);
After this I send the command list to the GPU for execution, but guess what: all the meshes end up in the same position, because all the memcpy calls execute before the command list is even sent to the GPU. So the last memcpy basically overwrites the previous ones.
So the question is: how do I draw meshes at different positions, or how do I replace the constant buffer's data from the command list so that the constant buffer changes between draw calls on the GPU?
Thanks
No need for help anymore, I solved it myself: I created a constant buffer for each mesh.
About execution order, you are totally right: your memcpy calls will update the buffers immediately, but the commands will not be processed until you push your command list into the queue (and you will not know exactly when that happens).
In Direct3D11, when you use Map on a buffer, this is handled for you (some space will be allocated to avoid the problem if required).
So in Direct3D12 you have several choices. I'll assume you want to draw N objects and store one matrix per object in your cbuffer.
The first is to create one buffer per object and set its data independently. If you have only a few objects this is easy to maintain (and the extra memory footprint due to resource allocations will be OK).
Another option is to create one large buffer (which can contain N matrices) and create N constant buffer views that point to the memory location of each object. (Please note that you also have to respect 256-byte alignment in that case too; see CreateConstantBufferView.) A sketch of this option is given below.
You can also use a StructuredBuffer and copy all the data into it (in that case you do not need the alignment), and use an index in the vertex shader to look up the correct matrix. (It is possible to set a uint value in your shader and use SetGraphicsRoot32BitConstant to apply it directly.)
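For the second option, here is a rough sketch of creating N aligned constant buffer views into one large buffer. This is my illustration rather than code from the answer; device, uploadBuffer, and cbvHeap are assumed to exist already (a device, one large ID3D12Resource, and a CBV/SRV/UAV descriptor heap):
#include <d3d12.h>
#include <DirectXMath.h>
const UINT objectCount = 16; // N objects
// Each slot must start on a 256-byte boundary
const UINT alignedSize = (sizeof(DirectX::XMFLOAT4X4) + 255) & ~255u;
D3D12_CPU_DESCRIPTOR_HANDLE handle = cbvHeap->GetCPUDescriptorHandleForHeapStart();
const UINT increment = device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
for (UINT i = 0; i < objectCount; ++i)
{
    D3D12_CONSTANT_BUFFER_VIEW_DESC desc = {};
    desc.BufferLocation = uploadBuffer->GetGPUVirtualAddress() + i * alignedSize;
    desc.SizeInBytes = alignedSize; // must also be 256-byte aligned
    device->CreateConstantBufferView(&desc, handle);
    handle.ptr += increment;
}
Each mesh then binds its own view (or a root CBV at its BufferLocation) before its draw call, so writing slot i of the mapped buffer no longer clobbers the other draws.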

Error: class "ofTexture" has no member "getTextureReference"

I'm finishing up my second semester of C++ programming and have wanted to spice up my output. I'm fairly familiar with C++ but very new to oF. I've been following along with the tutorials in the oF book from the site and am on the Shaders chapter working with textures: http://openframeworks.cc/ofBook/chapters/shaders.html#addingtextures
In this section, I'm getting an error (I'm using Visual Studio): class "ofTexture" has no member "getTextureReference".
#include "ofApp.h"
void ofApp::setup() {
// setup
plane.mapTexCoordsFromTexture(img.getTextureReference());
}
void ofApp::draw() {
// bind our texture. in our shader this will now be tex0 by default
// so we can just go ahead and access it there.
img.getTextureReference().bind();
// start our shader, in our OpenGL3 shader this will automagically set
// up a lot of matrices that we want for figuring out the texture matrix
// and the modelView matrix
shader.begin();
// get mouse position relative to center of screen
float mousePosition = ofMap(mouseX, 0, ofGetWidth(), plane.getWidth(), -plane.getWidth(), true);
shader.setUniform1f("mouseX", mousePosition);
ofPushMatrix();
ofTranslate(ofGetWidth()/2, ofGetHeight()/2);
plane.draw();
ofPopMatrix();
shader.end();
img.getTextureReference().unbind();
}
I opened up the ofTexture.h and .cpp files and sure enough there's no member called getTextureReference. I've browsed through the oF site and forum, looked through Stack Exchange, and did a Google search, but I'm not getting a clear picture of what this call is supposed to do, or whether there's a workaround or another function I should be calling.
Has ofTexture::getTextureReference been replaced with something else? Thanks!
If I interpret the openFrameworks source correctly, you can call bind directly on your ofTexture.
If it's an instance of ofVideoGrabber, you need to call getTexture first.
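For what it's worth, in openFrameworks 0.9 and later, getTextureReference() on ofImage and ofVideoGrabber was renamed getTexture(), so the chapter's code can be adapted roughly like this (a sketch under that assumption; img is the ofImage from the tutorial):
void ofApp::setup() {
    plane.mapTexCoordsFromTexture(img.getTexture());
}
void ofApp::draw() {
    img.getTexture().bind(); // or call img.bind() directly, as noted above
    shader.begin();
    // ... same shader/plane drawing code as in the question ...
    shader.end();
    img.getTexture().unbind();
}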

glDrawTransformFeedbackStream, what the stream refers to?

I ported this sample from g-truc to JOGL and it works; everything fine, everything nice.
But now I am trying to understand exactly what the stream of glDrawTransformFeedbackStream refers to.
Basically a vec4 position input gets transformed to
String[] strings = {"gl_Position", "Block.color"};
gl4.glTransformFeedbackVaryings(transformProgramName, 2, strings, GL_INTERLEAVED_ATTRIBS);
as follows:
void main()
{
gl_Position = mvp * position;
outBlock.color = vec4(clamp(vec2(position), 0.0, 1.0), 0.0, 1.0);
}
transform-stream.vert, transform-stream.geom
And then I simply render the transformed objects with glDrawTransformFeedbackStream
feedback-stream.vert, feedback-stream.frag
Now, the docs say:
Specifies the index of the transform feedback stream from which to
retrieve a primitive count.
Cool, so if I bind my feedbackArrayBufferName to 0 here
gl4.glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedbackName[0]);
gl4.glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackArrayBufferName[0]);
gl4.glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
I guess it should be that.
Also the geometry shader outputs (only) the color to index 0. What about the positions? Are they assumed to be already on stream 0? How? From glTransformFeedbackVaryings?
Therefore, I tried to switch all the references to this stream to 1, to check whether they are all consistent and whether they all refer to the same index.
So I modified
gl4.glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, feedbackArrayBufferName[0]);
and
gl4.glDrawTransformFeedbackStream(GL_TRIANGLES, feedbackName[0], 1);
and also inside the geometry shader
out Block
{
layout(stream = 1) vec4 color;
} outBlock;
But if I run, I get:
Program link failed: 1
Link info
---------
error: Transform feedback can't capture varyings belonging to different vertex streams in a single buffer.
OpenGL Error(GL_INVALID_OPERATION): initProgram
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. <program> object is not successfully linked, or is not a program object.
when 1455183474230
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 361.43 - hash 0x225c78a9]
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. <program> has not been linked, or is not a program object.
when 1455183474232
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 361.43 - hash 0x225c78a9]
Trying to figure out what's going on, I found this here:
Output variables in the Geometry Shader can be declared to go to a particular stream. This is controlled via an in-shader specification, but there are certain limitations that affect advanced component interleaving.
No two outputs that go to different streams can be captured by the same buffer. Attempting to do so will result in a linker error. So using multiple streams with interleaved writing requires using advanced interleaving to route attributes to different buffers.
Is it what happens to me? position going to index 0 and color to index 1?
I'd simply like to know if my hypothesis is correct. And if so, I want to prove it by changing the stream index.
Therefore I'd also like to know how I can put the position on stream 1 together with the color after my changes. Shall I modify the output of the geometry shader in this way: layout(triangle_strip, max_vertices = 3, xfb_buffer = 1) out;?
Because it complains
Shader status invalid: 0(11) : error C7548: 'layout(xfb_buffer)' requires "#extension GL_ARB_enhanced_layouts : enable" before use
Then I add it and I get
error: Transform feedback can't capture varyings belonging to different vertex streams in a single buffer.
But now they should both be on stream 1; what am I missing?
Moreover, what is the definition of a stream?
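For context, here is a sketch of the "advanced interleaving" the quoted wiki passage refers to (GL 4.0 / ARB_transform_feedback3): varyings from different streams can be captured in one transform feedback operation only if they are routed to different buffer bindings, which the special gl_NextBuffer name does. The program and buffer names here are placeholders, not from the g-truc sample:
// Varyings listed before gl_NextBuffer go to binding 0, those after to binding 1
const char* varyings[] = { "gl_Position", "gl_NextBuffer", "Block.color" };
glTransformFeedbackVaryings(prog, 3, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);
// One buffer per stream's output
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, positionBufferName);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, colorBufferName);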

Broken texture number OpenGL, after glGenTextures/glBindTexture

I'm seeing terrible behavior with one of my textures in OpenGL.
After deleting a texture, I create a new one; it gets the same texture number as before, but the texture is incorrect. Also, glGetError always returns 0 on every line! I tried adding glFlush/glFinish after glDeleteTextures, but it doesn't change anything! The texture number seems locked somewhere... why?
It's single threaded; here is the behavior:
//myTexture == 24 is loaded and works correctly
GLboolean bIsTexture = glIsTexture(myTexture); //returns 1 = > ok
glDeleteTextures(1,&myTexture);
bIsTexture = glIsTexture(myTexture); //returns 0 => ok
//Let's create a new texture
glGenTextures(1,&myTexture);//myTexture == 24 (as the glDelete was ok)
glBindTexture(GL_TEXTURE_2D,myTexture);
bIsTexture = glIsTexture(myTexture); //returns 0 => FAILS
The workaround is to call BindTexture with 0 before re-creating a texture whose old name is still bound:
//Let's create a new texture
glBindTexture(GL_TEXTURE_2D,0); // free the old bind texture if deleted
glGenTextures(1,&myTexture);//myTexture == 24 (as the glDelete was ok)
glBindTexture(GL_TEXTURE_2D,myTexture);
bIsTexture = glIsTexture(myTexture); //returns 1 => Ok
Well, you're missing one important step in the commonly used sequence of operations for creating a new texture: Actually allocating it.
//Let's create a new texture
glGenTextures(1,&myTexture);//myTexture == 24 (as the glDelete was ok)
glBindTexture(GL_TEXTURE_2D,myTexture);
Here you must call either glTexImage2D or glTexStorage to actually allocate the texture object. Before doing so, there's no texture data associated with the generated texture name. This is important: the value(s) generated by glGenTextures are not textures, but texture names (i.e., handles), and while OpenGL states that this should be a texture object already, a buggy driver may interpret it wrongly.
glTexImage2D(…); // <<<<<
bIsTexture = glIsTexture(myTexture); //returns …
Update:
As Andon M. Coleman points out (thanks for that), binding a texture name to a texture target makes a texture object (associated with said name), so one should expect glIsTexture to return GL_TRUE in that case. Now here's how reality differs from the ideal world of the specification: an actual (buggy) driver may be wrongly implemented and treat a name as associated with a texture object only if actual data storage is present, so it might be necessary to do the allocation to see the effect.
In practice you normally do the storage allocation quite soon after the name allocation. I presume the test suite the implementer of your driver used does not check for this corner case. Time for writing a bug report.
The only thing glIsTexture (...) does is let you know whether an OpenGL name (handle) belongs to a texture or not. In fact, OpenGL names are not actually tied to their final purpose until first use. In the case of your texture name, glIsTexture merely checks to see if the name is associated with a texture; this association takes place the first time you call glBindTexture (...) using the name. glIsTexture does not tell you if the name has an associated data store (e.g. you called glTexImage2D (...) or, in OpenGL 4+, glTexStorage (...)).
Here's an interesting bit of trivia: before OpenGL 3.0, the spec. allowed you to come up with any unused number for an object name and OpenGL would treat it like you had used a glGen___ (...) function to generate the name; this can still be done in compatibility profiles. That is how unimportant the name generation functions were in the grand scheme of things.
The big takeaway here is that names are given their function upon first use. More importantly, glIs___ (...) merely tells you if a name is associated with a particular kind of OpenGL object (not whether it is a valid/initialized/... object).
The official explanation of what I just mentioned comes from the OpenGL spec, which states:
The command:
void GenTextures( sizei n, uint *textures );
returns n previously unused texture names in textures. These names are marked as used, for the purposes of GenTextures only, but they acquire texture state and a dimensionality only when they are first bound, just as if they were unused.
The binding is effected by calling:
void BindTexture( enum target, uint texture );
with target set to the desired texture target and texture set to the unused name. The resulting texture object is a new state vector, comprising all the state and with the same initial values listed in section 8.21. The new texture object bound to target is, and remains, a texture of the dimensionality and type specified by target until it is deleted.
Since this is all that glIsTexture (...) is supposed to do, I would have to assume that this is a driver bug.
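Pulling both answers together, here is a minimal sketch of the conventional creation sequence (plain GL; width, height, and pixels are placeholders), which also sidesteps the driver quirk by allocating storage immediately after the first bind:
GLuint tex;
glGenTextures(1, &tex); // reserves a name; not yet a texture object
glBindTexture(GL_TEXTURE_2D, tex); // first bind associates the name with an object
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Allocate (and upload) the storage; after this even the buggy driver
// described above reports glIsTexture(tex) == GL_TRUE
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);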

Need XTK example on how to use X.shaders

Can someone provide an example of how to use X.shaders in XTK?
I need to use custom shaders to apply texture and color with alpha component for vertices.
After reading the code: for the moment, you cannot. Initially the shader class was not meant to be public, but I submitted an issue to Haehn to export it as public so you can instantiate a shader. In addition, it needs two setters for the fragment and vertex sources, and the check that all attributes/uniforms are used in the shader sources needs to be removed.
Note that with the current code you cannot add parameters to your shaders (by the way, there should be enough for any use; you can see them here in the "attributes" and "uniforms").
To use it, after that, I'd say :
var r = new X.renderer3D(); //create a renderer
r.init(); //initiate it
var sh = new X.shaders(); // create a new pair of shaders
/* here use the future setters to set sources from a string or a file */
r.addShaders(sh); // this sets the shaders for the renderer and tries to compile them
// DO NOT call init anymore or it would erase the current shaders and replace them by default ones
/*
Any code to fill the scene, etc...
*/
r.render();
But this needs to wait for the three changes I mentioned at the beginning of this post. I'm waiting for news from Haehn.
@Ricola3D is right.
The discussion regarding this is here:
https://github.com/xtk/X/issues/69
But if you want an alpha channel for vertices, you can use X.object.opacity.