xtk renderer3D's pick() produces errors in WebGL2-enabled browsers

Has anyone used xtk with WebGL2 to do the pick() call? Specifically renderer3D's. I get these errors:
Error: WebGL: drawArrays: Feedback loop detected...renderer3D.js:1977:7
Error: WebGL: readPixels: Out-of-bounds reads with readPixels are deprecated, and may be slow. renderer3D.js:1445:5

For the first error: feedback loops have always been invalid in WebGL and generate an error. From the WebGL 1 spec, section 6.26:
6.26 Feedback Loops Between Textures and the Framebuffer
In the OpenGL ES 2.0 API, it's possible to make calls that both write to and read from the same texture, creating a feedback loop. It specifies that where these feedback loops exist, undefined behavior results.
In the WebGL API, such operations that would cause such feedback loops (by the definitions in the OpenGL ES 2.0 spec) will instead generate an INVALID_OPERATION error.
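In other words, the first message suggests xtk's picking pass samples a texture that is still attached to the framebuffer it is drawing into. In raw GL terms the offending pattern looks roughly like this sketch (names are hypothetical; desktop GL leaves this undefined, WebGL raises INVALID_OPERATION):

    // Hypothetical setup: colorTex is the FBO's color attachment AND is
    // bound for sampling by the active program -- that's the feedback loop.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    glBindTexture(GL_TEXTURE_2D, colorTex);      // same texture as sampler input
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // INVALID_OPERATION in WebGL

    // Breaking the loop before drawing avoids the error, e.g.:
    glBindTexture(GL_TEXTURE_2D, 0);  // stop sampling it, or render to a different texture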
As for the second error, that's not a valid WebGL error. Which version of which browser is generating it?
Here's the WebGL conformance test that verifies you can read out of bounds:
https://www.khronos.org/registry/webgl/sdk/tests/conformance/reading/read-pixels-test.html?webglVersion=1&quiet=0
And here's a snippet showing reading out of bounds does not generate an error.
['webgl', 'webgl2'].forEach(check);

function check(version) {
  log(`checking ${version}`);
  const gl = document.createElement("canvas").getContext(version);
  if (!gl) {
    log(`${version} not supported`);
    return;
  }
  const pixel = new Uint8Array(4);
  // read off the bottom left
  gl.readPixels(-10, -10, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  // read off the top right
  gl.readPixels(400, 300, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);

  const error = gl.getError();
  log(error ? `error was ${error} reading out of bounds`
            : "there were no errors reading out of bounds");
}

function log(...args) {
  const elem = document.createElement("pre");
  elem.textContent = [...args].join();
  document.body.appendChild(elem);
}
Maybe file bugs with xtk?

Related

How do you convert an OpenGL project from older glVertexAttribPointer methods to newer glVertexAttribBinding methods?

I have an OpenGL project that has previously used OpenGL 3.0-based methods for drawing arrays, and I'm trying to convert it to use newer methods (available as of OpenGL 4.3). However, so far I have not been able to make it work.
The piece of code I'll use for explanation creates groupings of points and draws lines between them. It can also fill the resulting polygon (using triangles). I'll give an example using the point-drawing routines. Here's the pseudo-code for how it used to work:
[Original] When points are first generated (happens once):
// ORIGINAL, WORKING CODE RUN ONCE DURING SETUP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId
// Set the vertex buffer as the current buffer and fill it
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
[Original] Within the loop that does the drawing:
// ORIGINAL, WORKING CODE EXECUTED DURING EACH DRAW LOOP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Note: positionAttributeId (below) was derived from the active
// shader program via glGetAttribLocation
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.EnableVertexAttribArray(positionAttributeId);
gl.VertexAttribPointer(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
That code has worked for quite a while now, but I'm trying to move to more modern techniques. Most of my code didn't change at all, and this is the new code that replaced the above:
[After Mods] When points are first generated (happens once):
// MODIFIED CODE RUN ONCE DURING SETUP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId.
// Set the vertex buffer as the current buffer
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
// ^^^ ALL CODE ABOVE THIS LINE IS IDENTICAL TO THE ORIGINAL ^^^
// Generate Vertex Arrays
// METHOD CODE NOT SHOWN, but uses glGenVertexArrays to fill class-member array IDs
GenerateVertexArrays(gl); // Note: we now have a PointsArrayId
// My understanding: I'm telling OpenGL to associate following calls
// with the vertex array generated with the ID PointsArrayId...
gl.BindVertexArray(PointsArrayId);
// Here I associate the positionAttributeId (found from the shader program)
// with the currently bound vertex array (right?)
gl.EnableVertexAttribArray(positionAttributeId);
// Here I tell the bound vertex array about the format of the position
// attribute as it relates to that array -- I think.
gl.VertexAttribFormat(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0);
// As I understand it, I can define my own "local" buffer index
// in the following calls (?). Below I use 0, which I then bind
// to the buffer with id = this.VerticesBufferId (for the purposes
// of the currently bound vertex array)
gl.VertexAttribBinding(positionAttributeId, 0);
gl.BindVertexBuffer(0, this.VerticesBufferId, IntPtr.Zero, 0);
gl.BindVertexArray(0); // we no longer want to be bound to PointsArrayId
[After Mods] Within the loop that does the drawing:
// MODIFIED CODE EXECUTED DURING EACH DRAW LOOP::
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Here I tell OpenGL to bind the VertexArray I established previously
// (which should understand how to fill the position attribute in the
// shader program using the "zeroth" buffer index tied to the
// VerticesBufferId data buffer -- because I went through all the trouble
// of telling it that above, right?)
gl.BindVertexArray(this.PointsArrayId);
// \/ \/ \/ NOTE: NO CODE CHANGES IN THE CODE BELOW THIS LINE \/ \/ \/
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
After the modifications, the routines draw nothing to the screen. There are no exceptions thrown or indications (that I can tell) of a problem executing the commands. It just leaves a blank screen.
General questions:
Does my conversion to the newer vertex array methods look correct? Do you see any errors in the logic?
Am I supposed to make specific glEnable calls for this method to work, as opposed to the old method?
Can I mix and match between the two methods to fill attributes in the same shader program? (E.g., in addition to the above, I fill out triangle data and use it with the same shader program; if I haven't switched that process to the new method, will that cause a problem?)
If there is anything else I'm missing here, I'd really appreciate it if you'd let me know.
A little more sleuthing and I figured out my error:
When using glVertexAttribPointer, you can set the stride parameter to 0 (zero) and OpenGL computes the stride for you from the size and type arguments (tightly packed data). With the newer API there is no automatic stride: glVertexAttribFormat only takes a relative offset, and the stride passed to glBindVertexBuffer is taken literally, so my stride of 0 meant zero bytes between vertices and every vertex read the same data.
Once I passed the actual vertex stride to glBindVertexBuffer, everything worked as expected.
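For reference, a raw-OpenGL sketch of the corrected setup (vao, vbo, and positionAttrib are placeholder names for the objects created earlier; positions are assumed tightly packed vec3s, as in xyzArray):

    glBindVertexArray(vao);
    glEnableVertexAttribArray(positionAttrib);

    // Describe the attribute: 3 floats, starting 0 bytes into each vertex.
    glVertexAttribFormat(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0);

    // Route the attribute through binding index 0.
    glVertexAttribBinding(positionAttrib, 0);

    // Attach the buffer to binding index 0. Unlike glVertexAttribPointer,
    // a stride of 0 here means literally zero bytes between vertices, so
    // the real vertex size must be passed explicitly.
    glBindVertexBuffer(0, vbo, 0, 3 * sizeof(float));

    glBindVertexArray(0);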

Error: class "ofTexture" has no member "getTextureReference"

I'm finishing up my second semester of C++ programming and have wanted to spice up my output. I'm fairly familiar with C++ but very new to oF. I've been following along with the tutorials in the oF book from the site and am on the Shaders chapter working with textures: http://openframeworks.cc/ofBook/chapters/shaders.html#addingtextures
In this section, I'm getting an error (I'm using Visual Studio): class "ofTexture" has no member "getTextureReference".
#include "ofApp.h"

void ofApp::setup() {
    // setup
    plane.mapTexCoordsFromTexture(img.getTextureReference());
}

void ofApp::draw() {
    // bind our texture. in our shader this will now be tex0 by default
    // so we can just go ahead and access it there.
    img.getTextureReference().bind();

    // start our shader, in our OpenGL3 shader this will automagically set
    // up a lot of matrices that we want for figuring out the texture matrix
    // and the modelView matrix
    shader.begin();

    // get mouse position relative to center of screen
    float mousePosition = ofMap(mouseX, 0, ofGetWidth(), plane.getWidth(), -plane.getWidth(), true);
    shader.setUniform1f("mouseX", mousePosition);

    ofPushMatrix();
    ofTranslate(ofGetWidth()/2, ofGetHeight()/2);
    plane.draw();
    ofPopMatrix();

    shader.end();

    img.getTextureReference().unbind();
}
I opened up the ofTexture.h and .cpp files, and sure enough there's no member called getTextureReference. I've browsed through the oF site and forum, looked through Stack Exchange, and did a Google search, but I'm not getting a clear picture of what this call is supposed to do, to see if there's a workaround or another function I should be calling.
Has ofTexture::getTextureReference been replaced with something else? Thanks!
If I interpret the openFrameworks source correctly, you can call bind() directly on your ofTexture.
If it's an instance of ofVideoGrabber (or ofImage, as in the book's example), you need to call getTexture(); getTextureReference() was renamed to getTexture() in openFrameworks 0.9.
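Assuming img is an ofImage, as in the book's chapter, the updated code might look like this sketch:

    void ofApp::setup() {
        // getTexture() returns the ofTexture that getTextureReference()
        // used to return in pre-0.9 openFrameworks.
        plane.mapTexCoordsFromTexture(img.getTexture());
    }

    void ofApp::draw() {
        img.getTexture().bind();   // or bind() on an ofTexture member directly
        shader.begin();
        // ... uniforms, transforms, and plane.draw() exactly as before ...
        shader.end();
        img.getTexture().unbind();
    }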

glDrawTransformFeedbackStream, what the stream refers to?

I ported this sample from g-truc to JOGL and it works; everything fine, everything nice.
But now I am trying to understand exactly what the stream parameter of glDrawTransformFeedbackStream refers to.
Basically, a vec4 position input gets transformed into the captured varyings
String[] strings = {"gl_Position", "Block.color"};
gl4.glTransformFeedbackVaryings(transformProgramName, 2, strings, GL_INTERLEAVED_ATTRIBS);
as follows:
void main()
{
    gl_Position = mvp * position;
    outBlock.color = vec4(clamp(vec2(position), 0.0, 1.0), 0.0, 1.0);
}
transform-stream.vert, transform-stream.geom
And then I simply render the transformed objects with glDrawTransformFeedbackStream
feedback-stream.vert, feedback-stream.frag
Now, the docs say:
Specifies the index of the transform feedback stream from which to
retrieve a primitive count.
Cool, so if I bind my feedbackArrayBufferName to 0 here
gl4.glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedbackName[0]);
gl4.glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackArrayBufferName[0]);
gl4.glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
I guess it should be that.
Also the geometry shader outputs (only) the color to index 0. What about the positions? Are they assumed to be already on stream 0? How? From glTransformFeedbackVaryings?
Therefore, I tried to switch all the references to this stream to 1, to check whether they are all consistent and really do refer to the same index.
So I modified
gl4.glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, feedbackArrayBufferName[0]);
and
gl4.glDrawTransformFeedbackStream(GL_TRIANGLES, feedbackName[0], 1);
and also inside the geometry shader
out Block
{
    layout(stream = 1) vec4 color;
} outBlock;
But if I run, I get:
Program link failed: 1
Link info
---------
error: Transform feedback can't capture varyings belonging to different vertex streams in a single buffer.
OpenGL Error(GL_INVALID_OPERATION): initProgram
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. <program> object is not successfully linked, or is not a program object.
when 1455183474230
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 361.43 - hash 0x225c78a9]
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. <program> has not been linked, or is not a program object.
when 1455183474232
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 361.43 - hash 0x225c78a9]
Trying to understand what's going on, I found this:
Output variables in the Geometry Shader can be declared to go to a particular stream. This is controlled via an in-shader specification, but there are certain limitations that affect advanced component interleaving.
No two outputs that go to different streams can be captured by the same buffer. Attempting to do so will result in a linker error. So using multiple streams with interleaved writing requires using advanced interleaving to route attributes to different buffers.
Is that what is happening to me? Position going to stream 0 and color to stream 1?
I'd simply like to know if my hypotheses are correct. And if so, I want to prove it by changing the stream index.
Therefore I'd also like to know how I can set the position on stream 1 together with the color after my changes. Shall I modify the output of the geometry shader in this way: layout(triangle_strip, max_vertices = 3, xfb_buffer = 1) out;?
Because it complains
Shader status invalid: 0(11) : error C7548: 'layout(xfb_buffer)' requires "#extension GL_ARB_enhanced_layouts : enable" before use
Then I add it and I get
error: Transform feedback can't capture varyings belonging to different vertex streams in a single buffer.
But now they should both be on stream 1; what am I missing?
Moreover, what is the definition of a stream?
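For what it's worth, my current understanding from the wiki quote is that varyings from different streams can never share one capture buffer, so keeping both would require one buffer binding per stream, using the gl_NextBuffer separator from ARB_transform_feedback3 (GL 4.0). Note also that a geometry shader may only emit to a non-zero stream when its output primitive type is points, so the triangle_strip output above could never target stream 1 anyway. A sketch in raw GL calls (dropping the JOGL wrapper; buffer names are hypothetical):

    // One capture buffer per stream: gl_NextBuffer advances to the next
    // TRANSFORM_FEEDBACK_BUFFER binding point (ARB_transform_feedback3).
    const char* varyings[] = {
        "gl_Position",    // stream 0 -> buffer binding 0
        "gl_NextBuffer",  // pseudo-varying: switch to the next binding
        "Block.color"     // stream 1 -> buffer binding 1
    };
    glTransformFeedbackVaryings(transformProgram, 3, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(transformProgram);

    glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedback);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, positionBuffer);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, colorBuffer);
    glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);

    // Later: draw as many primitives as stream 1 captured (points only).
    glDrawTransformFeedbackStream(GL_POINTS, feedback, 1);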

DirectX11 Execution Warning #355

When I run my DirectX11 project, my output window gets spammed with this warning every time ID3D11DeviceContext::DrawIndexed is called:
D3D11: WARNING: ID3D11DeviceContext::DrawIndexed: Input vertex slot 0
has stride 48 which is less than the minimum stride logically expected
from the current Input Layout (56 bytes). This is OK, as hardware is
perfectly capable of reading overlapping data. However the developer
probably did not intend to make use of this behavior. [ EXECUTION
WARNING #355: DEVICE_DRAW_VERTEX_BUFFER_STRIDE_TOO_SMALL ]
This is how I'm currently calling the function
pImmediateContext->DrawIndexed( this->vertexBuffer.indices.size() * 3, 0, 0 );
I'm not sure what I'm doing wrong that is causing this warning. If someone could shed some light on the issue I would appreciate it.
The warning is telling you that your input layout describes a larger vertex (56 bytes) than the stride you set when binding the vertex buffer (48 bytes).
To fix the problem, you need to ensure that the input layout set via IASetInputLayout() describes the same vertex size as the stride you pass to IASetVertexBuffers().
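For illustration, a hypothetical vertex struct matching a 56-byte input layout; deriving the stride from sizeof() keeps the two in sync:

    // Hypothetical layout totalling 56 bytes per vertex.
    struct Vertex
    {
        DirectX::XMFLOAT3 position;  // 12 bytes
        DirectX::XMFLOAT3 normal;    // 12 bytes
        DirectX::XMFLOAT4 tangent;   // 16 bytes
        DirectX::XMFLOAT2 uv0;       //  8 bytes
        DirectX::XMFLOAT2 uv1;       //  8 bytes -> 56 bytes total
    };

    // sizeof(Vertex) gives 56 here; passing a hand-typed 48 is what
    // triggers warning #355.
    UINT stride = sizeof(Vertex);
    UINT offset = 0;
    pImmediateContext->IASetVertexBuffers(0, 1, &pVertexBuffer, &stride, &offset);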

Get depth buffer from QGLPixelBuffer

I'm using OpenGL in a Qt application. At some point I'm rendering to a QGLPixelBuffer. I need to get the depth buffer of the image, which I'd normally accomplish with glReadPixels(..., GL_DEPTH_COMPONENT, ...); I tried making the QGLPixelBuffer current and then using glReadPixels(), but all I get is a white image.
Here's my code
bufferCanvas->makeCurrent();
[ ...render... ]
QImage snapshot(QSize(_lastWidth, _lastHeight), QImage::Format_Indexed8);
glReadPixels(0, 0, _lastWidth, _lastHeight, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, snapshot.bits());
snapshot.save("depth.bmp");
Anything obviously wrong with it?
Well, there is no guarantee that the underlying pixel data stored in QImage (and obtained via its QImage::bits() function) is compatible with what OpenGL's glReadPixels() function writes.
Since you are using QGLPixelBuffer, what is wrong with QGLPixelBuffer::toImage() ?
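If you do want the raw depth values rather than toImage(), one approach (a sketch; it assumes the pbuffer context is current, and needs <vector> and <cstring>) is to read into a tightly packed buffer first and copy it into the QImage row by row:

    // Force tight packing so rows have no padding on readback.
    std::vector<unsigned char> depth(_lastWidth * _lastHeight);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, _lastWidth, _lastHeight,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, depth.data());

    QImage snapshot(_lastWidth, _lastHeight, QImage::Format_Indexed8);
    snapshot.setColorCount(256);
    for (int i = 0; i < 256; ++i)
        snapshot.setColor(i, qRgb(i, i, i));   // grayscale palette

    // OpenGL rows start at the bottom, QImage rows at the top, so flip.
    for (int y = 0; y < _lastHeight; ++y)
        memcpy(snapshot.scanLine(y),
               &depth[(_lastHeight - 1 - y) * _lastWidth], _lastWidth);

    snapshot.save("depth.bmp");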
I have never used QImage directly, but I would look into the following areas:
Are you calling glClear() with the depth buffer bit before rendering?
Does your QGLFormat have the depth buffer enabled?
Can you dump the glReadPixels() output directly and verify whether it contains correct data?
Does QImage::bits() guarantee sequential memory storage with the required alignment?
Hope this helps.
Wild guess follows.
You're creating an indexed bitmap using QImage; however, you're not assigning a color table. My guess is that the default color table is making your image appear white. Try this before saving your image:
for ( int i = 0 ; i <= 255 ; i++ ) {
    snapshot.setColor( i, qRgb( i, i, i ) );  // grayscale ramp (note: qRgb, not qRGB)
}