OpenGL - Provide a set of values in a 1D texture

I want to provide a set of values in a 1D texture. Please consider the following simple example:
gl.glBindTexture(GL4.GL_TEXTURE_1D, myTextureHandle);
FloatBuffer values = Buffers.newDirectFloatBuffer(N);
for (int x = 0; x < N; ++x)
values.put(x);
values.rewind();
gl.glTexImage1D(GL4.GL_TEXTURE_1D, 0, GL4.GL_R32F, N, 0, GL4.GL_RED, GL4.GL_FLOAT, values);
Here, N is the number of values I want to store in the texture. However, calling textureSize(myTexture, 0) in my fragment shader yields 1 (no matter what I set N to). So, what's going wrong here?
EDIT: The code above is executed at initialization. My rendering loop looks like
gl.glClear(GL4.GL_COLOR_BUFFER_BIT | GL4.GL_DEPTH_BUFFER_BIT);
gl.glUseProgram(myProgram);
gl.glActiveTexture(MY_TEXTURE_INDEX);
gl.glBindTexture(GL4.GL_TEXTURE_1D, myTextureHandle);
gl.glUniform1i(uMyTexture, MY_TEXTURE_INDEX);
gl.glDrawArrays(GL4.GL_POINTS, 0, 1);
My vertex shader consists of a main-function which does nothing. I'm using the geometry shader to create a fullscreen quad. The pixel shader code looks like
uniform sampler1D myTexture;
out vec4 color;
void main()
{
if (textureSize(myTexture, 0) == 1)
{
color = vec4(1, 0, 0, 1);
return;
}
color = vec4(1, 1, 0, 1);
}
The result is a red-colored window.

Make sure your texture is complete. Since GL_TEXTURE_MIN_FILTER defaults to GL_NEAREST_MIPMAP_LINEAR you'll have to supply a full set of mipmaps.
Or set GL_TEXTURE_MIN_FILTER to GL_NEAREST/GL_LINEAR.
You also need to pass GL_TEXTURE0 + MY_TEXTURE_INDEX (instead of only MY_TEXTURE_INDEX) to glActiveTexture():
gl.glActiveTexture( GL_TEXTURE0 + MY_TEXTURE_INDEX );
...
gl.glUniform1i( uMyTexture, MY_TEXTURE_INDEX );
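Putting both fixes together, a minimal JOGL-style sketch (reusing the names from the question and assuming MY_TEXTURE_INDEX is a small unit index such as 0):
// At initialization: make the texture complete by choosing a non-mipmap min filter.
gl.glBindTexture(GL4.GL_TEXTURE_1D, myTextureHandle);
gl.glTexParameteri(GL4.GL_TEXTURE_1D, GL4.GL_TEXTURE_MIN_FILTER, GL4.GL_NEAREST);
gl.glTexParameteri(GL4.GL_TEXTURE_1D, GL4.GL_TEXTURE_MAG_FILTER, GL4.GL_NEAREST);
gl.glTexImage1D(GL4.GL_TEXTURE_1D, 0, GL4.GL_R32F, N, 0, GL4.GL_RED, GL4.GL_FLOAT, values);
// In the render loop: glActiveTexture takes the GL_TEXTURE0 + index enum,
// while the sampler uniform takes the bare index.
gl.glActiveTexture(GL4.GL_TEXTURE0 + MY_TEXTURE_INDEX);
gl.glBindTexture(GL4.GL_TEXTURE_1D, myTextureHandle);
gl.glUniform1i(uMyTexture, MY_TEXTURE_INDEX);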

Related

SharpGL and RenderBuffers

I'm attempting to port a pathtracer to GLSL, and to do this I need to modify a shader sample program to use a texture as the framebuffer instead of the backbuffer.
This is the vertex shader
#version 130
out vec2 texCoord;
// https://rauwendaal.net/2014/06/14/rendering-a-screen-covering-triangle-in-opengl/
void main()
{
float x = -1.0 + float((gl_VertexID & 1) << 2);
float y = -1.0 + float((gl_VertexID & 2) << 1);
texCoord.x = x;
texCoord.y = y;
gl_Position = vec4(x, y, 0, 1);
}
This is the setup code
gl.GenFramebuffersEXT(2, _FrameBuffer);
gl.BindFramebufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, _FrameBuffer[0]);
gl.GenRenderbuffersEXT(2, _RaytracerBuffer);
gl.BindRenderbufferEXT(OpenGL.GL_RENDERBUFFER_EXT, _RaytracerBuffer[0]);
gl.RenderbufferStorageEXT(OpenGL.GL_RENDERBUFFER_EXT, OpenGL.GL_RGBA32F, (int)viewport[2], (int)viewport[3]);
And this is the runtime code
// Get a reference to the raytracer shader.
var shader = shaderRayMarch;
// setup first framebuffer (RGB32F)
gl.BindFramebufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, _FrameBuffer[0]);
gl.Viewport((int)viewport[0], (int)viewport[1], (int)viewport[2], (int)viewport[3]); //0,0,width,height)
gl.FramebufferRenderbufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, OpenGL.GL_COLOR_ATTACHMENT0_EXT, OpenGL.GL_RENDERBUFFER_EXT, _RaytracerBuffer[0]);
gl.FramebufferRenderbufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, OpenGL.GL_DEPTH_ATTACHMENT_EXT, OpenGL.GL_RENDERBUFFER_EXT, 0);
uint [] DrawBuffers = new uint[1];
DrawBuffers[0] = OpenGL.GL_COLOR_ATTACHMENT0_EXT;
gl.DrawBuffers(1, DrawBuffers);
shader.Bind(gl);
shader.SetUniform1(gl, "screenWidth", viewport[2]);
shader.SetUniform1(gl, "screenHeight", viewport[3]);
shader.SetUniform1(gl, "fov", 40.0f);
gl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
shader.Unbind(gl);
int[] pixels = new int[(int)viewport[2]*(int)viewport[3]*4];
gl.GetTexImage(_RaytracerBuffer[0], 0, OpenGL.GL_RGBA32F, OpenGL.GL_INT, pixels);
But when I inspect the pixels coming back from GetTexImage they're black. When I bind this texture in a further transfer shader they remain black. I suspect I'm missing something in the setup code for the renderbuffer and would appreciate any suggestions you have!
Renderbuffers are not textures. So when you do glGetTexImage on your renderbuffer, you probably got an OpenGL error. When you tried to bind it as a texture with glBindTexture, you probably got an OpenGL error.
If you want to render to a texture, you should render to a texture. As in glGenTextures/glTexImage2D/glFramebufferTexture2D.
Also, please stop using EXT_framebuffer_object. You should be using the core FBO feature, which requires no "EXT" suffixes. Not unless you're using a really ancient OpenGL version.
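A minimal sketch of the texture-attachment approach, written as JOGL-style Java to match the other examples on this page (translate the calls to your binding); width and height stand for the viewport size (viewport[2] and viewport[3] in the question):
// Create a texture to render into instead of a renderbuffer.
int[] tex = new int[1];
gl.glGenTextures(1, tex, 0);
gl.glBindTexture(GL4.GL_TEXTURE_2D, tex[0]);
gl.glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MIN_FILTER, GL4.GL_NEAREST);
gl.glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MAG_FILTER, GL4.GL_NEAREST);
gl.glTexImage2D(GL4.GL_TEXTURE_2D, 0, GL4.GL_RGBA32F, width, height, 0, GL4.GL_RGBA, GL4.GL_FLOAT, null);
// Attach it to a core (non-EXT) framebuffer object.
int[] fbo = new int[1];
gl.glGenFramebuffers(1, fbo, 0);
gl.glBindFramebuffer(GL4.GL_FRAMEBUFFER, fbo[0]);
gl.glFramebufferTexture2D(GL4.GL_FRAMEBUFFER, GL4.GL_COLOR_ATTACHMENT0, GL4.GL_TEXTURE_2D, tex[0], 0);
// ... render the fullscreen triangle here ...
// Read the result back from the texture (not from a renderbuffer).
FloatBuffer pixels = Buffers.newDirectFloatBuffer(width * height * 4);
gl.glBindTexture(GL4.GL_TEXTURE_2D, tex[0]);
gl.glGetTexImage(GL4.GL_TEXTURE_2D, 0, GL4.GL_RGBA, GL4.GL_FLOAT, pixels);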

rgba arrays to OpenGL texture

For the gui for my game, I have a custom texture object that stores the rgba data for a texture. Each GUI element registered by my game adds to the final GUI texture, and then that texture is overlayed onto the framebuffer after post-processing.
I'm having trouble converting my Texture object to an openGL texture.
First I create a 1D int array that goes rgbargbargba... etc.
public int[] toIntArray(){
int[] colors = new int[(width*height)*4];
int i = 0;
for(int y = 0; y < height; ++y){
for(int x = 0; x < width; ++x){
colors[i] = r[x][y];
colors[i+1] = g[x][y];
colors[i+2] = b[x][y];
colors[i+3] = a[x][y];
i += 4;
}
}
return colors;
}
Where r, g, b, and a are jagged int arrays from 0 to 255. Next I create the int buffer and the texture.
id = glGenTextures();
glBindTexture(GL_TEXTURE_2D, id);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
IntBuffer iBuffer = BufferUtils.createIntBuffer(((width * height)*4));
int[] data = toIntArray();
iBuffer.put(data);
iBuffer.rewind();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_INT, iBuffer);
glBindTexture(GL_TEXTURE_2D, 0);
After that I add a 50x50 red square into the upper left of the texture, and bind the texture to the framebuffer shader and render the fullscreen rect that displays my framebuffer.
frameBuffer.unbind(window.getWidth(), window.getHeight());
postShaderProgram.bind();
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, guiManager.texture()); // this gets the texture id that was created
postShaderProgram.setUniform("gui_texture", 1);
mesh.render();
postShaderProgram.unbind();
And then in my fragment shader, I try displaying the GUI:
#version 330
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D gui_texture;
void main()
{
outColor = texture(gui_texture, Texcoord);
}
But all it outputs is a black window!
I added a red 50x50 rectangle into the upper left corner and verified that it exists, but for some reason it isn't showing in the final output.
That gives me reason to believe that I'm not converting my texture into an opengl texture with glTexImage2D correctly.
Can you see anything I'm doing wrong?
Update 1:
Here I saw them doing a similar thing using a float array, so I tried converting my 0-255 to a 0-1 float array and passing it as the image data like so:
public float[] toFloatArray(){
float[] colors = new float[(width*height)*4];
int i = 0;
for(int y = 0; y < height; ++y){
for(int x = 0; x < width; ++x){
colors[i] = (( r[x][y] * 1.0f) / 255);
colors[i+1] = (( g[x][y] * 1.0f) / 255);
colors[i+2] = (( b[x][y] * 1.0f) / 255);
colors[i+3] = (( a[x][y] * 1.0f) / 255);
i += 4;
}
}
return colors;
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, toFloatArray());
And it works!
I'm going to leave the question open however as I want to learn why the int buffer wasn't working :)
When you specified GL_UNSIGNED_INT as the type of the "Host" data, OpenGL expected 32 bits for each color component. Since the components are normalized to the range [0.0f, 1.0f], it takes your input color values (in the range [0, 255]) and divides each of them by the maximum value of an unsigned 32-bit integer (about 4.29 billion) to get the final color; 255 / 4294967295 is roughly 0.00000006, which is indistinguishable from black. As an exercise, using your original code, set the "clear" color of the screen to white, and see that a black rectangle is getting drawn on screen.
You have two options. The first is to scale the color values up to the range implied by GL_UNSIGNED_INT, which means multiplying each of them by 2^24 (Math.pow(2, 24)) and trusting that the resulting integer overflow behaves correctly (since Java doesn't have unsigned integer types).
The other, far safer option, is to store each 0-255 value in a byte[] object (do not use char. char is 1 byte in C/C++/OpenGL, but is 2 bytes in Java) and specify the type of the elements as GL_UNSIGNED_BYTE.
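A minimal sketch of that byte[] route, reusing the question's width, height and r/g/b/a arrays (LWJGL-style, like the rest of the code in this question):
// Pack the 0-255 channel values as unsigned bytes, one byte per component.
ByteBuffer buffer = BufferUtils.createByteBuffer(width * height * 4);
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        buffer.put((byte) r[x][y]);
        buffer.put((byte) g[x][y]);
        buffer.put((byte) b[x][y]);
        buffer.put((byte) a[x][y]);
    }
}
buffer.flip();
// 8 bits per channel: the (byte) cast keeps the 0-255 bit pattern intact.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);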

Is there a simple way to get the depth of an object in OpenGL (JOGL)

How can I get the z-coordinate of an object in 3D space when I click on it?
(It's not really an object, more of a graph; I need to know what a user selected.) I use JOGL.
I just finished porting a picking sample from g-truc's ogl-samples.
I will try to give you a quick explanation of the code.
We start by enabling the depth test
private boolean initTest(GL4 gl4) {
gl4.glEnable(GL_DEPTH_TEST);
return true;
}
In the initBuffer we:
generate all the buffers we need with glGenBuffers
bind the element buffer and transfer the content of our indices. Each index refers to the vertex to use. We need to bind it first because glBufferData uses whatever is bound at the target specified by its first argument, GL_ELEMENT_ARRAY_BUFFER in this case
do the same for the vertices themselves
query GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT (a global parameter) to determine the minimum size to allocate for the uniform block that stores our transform variable. This alignment matters when binding with glBindBufferRange, a function we will not use here; the picking buffer is not a uniform buffer, which is why we allocate just the size of a float, Float.BYTES, for it
the last argument of glBufferData is just a hint (it's up to OpenGL and the driver what to do with it); as you can see it is static for the indices and vertices, because we are not going to change them anymore, but dynamic for the uniform buffer, since we will update it every frame
Code:
private boolean initBuffer(GL4 gl4) {
gl4.glGenBuffers(Buffer.MAX.ordinal(), bufferName, 0);
gl4.glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferName[Buffer.ELEMENT.ordinal()]);
ShortBuffer elementBuffer = GLBuffers.newDirectShortBuffer(elementData);
gl4.glBufferData(GL_ELEMENT_ARRAY_BUFFER, elementSize, elementBuffer, GL_STATIC_DRAW);
gl4.glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
gl4.glBindBuffer(GL_ARRAY_BUFFER, bufferName[Buffer.VERTEX.ordinal()]);
FloatBuffer vertexBuffer = GLBuffers.newDirectFloatBuffer(vertexData);
gl4.glBufferData(GL_ARRAY_BUFFER, vertexSize, vertexBuffer, GL_STATIC_DRAW);
gl4.glBindBuffer(GL_ARRAY_BUFFER, 0);
int[] uniformBufferOffset = {0};
gl4.glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, uniformBufferOffset, 0);
int uniformBlockSize = Math.max(projection.length * Float.BYTES, uniformBufferOffset[0]);
gl4.glBindBuffer(GL_UNIFORM_BUFFER, bufferName[Buffer.TRANSFORM.ordinal()]);
gl4.glBufferData(GL_UNIFORM_BUFFER, uniformBlockSize, null, GL_DYNAMIC_DRAW);
gl4.glBindBuffer(GL_UNIFORM_BUFFER, 0);
gl4.glBindBuffer(GL_TEXTURE_BUFFER, bufferName[Buffer.PICKING.ordinal()]);
gl4.glBufferData(GL_TEXTURE_BUFFER, Float.BYTES, null, GL_DYNAMIC_READ);
gl4.glBindBuffer(GL_TEXTURE_BUFFER, 0);
return true;
}
In the initTexture we initialize our textures; we:
generate both textures with glGenTextures
set GL_UNPACK_ALIGNMENT to 1 (the default is 4 bytes) to avoid any row-alignment problems (the byte size of each texture row must be a multiple of this value)
set the active texture to GL_TEXTURE0; there is a fixed number of texture slots and you need to select one before working on any texture
bind the diffuse texture
set the swizzle, that is, what each channel will receive
set the levels (mipmaps), where 0 is the base (original/biggest)
set the filters
allocate the space, levels included, with glTexStorage2D
transfer the corresponding data for each level
reset GL_UNPACK_ALIGNMENT back to its default
bind our other texture, PICKING, to the GL_TEXTURE_BUFFER target
attach the single 32-bit-float PICKING buffer to the PICKING texture with glTexBuffer
Code:
private boolean initTexture(GL4 gl4) {
try {
jgli.Texture2D texture = new Texture2D(jgli.Load.load(TEXTURE_ROOT + "/" + TEXTURE_DIFFUSE));
jgli.Gl.Format format = jgli.Gl.instance.translate(texture.format());
gl4.glGenTextures(Texture.MAX.ordinal(), textureName, 0);
// Diffuse
{
gl4.glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
gl4.glActiveTexture(GL_TEXTURE0);
gl4.glBindTexture(GL_TEXTURE_2D, textureName[Texture.DIFFUSE.ordinal()]);
gl4.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_RED);
gl4.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_GREEN);
gl4.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_BLUE);
gl4.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_ALPHA);
gl4.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
gl4.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, texture.levels() - 1);
gl4.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
gl4.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
gl4.glTexStorage2D(GL_TEXTURE_2D, texture.levels(), format.internal.value,
texture.dimensions(0)[0], texture.dimensions(0)[1]);
for (int level = 0; level < texture.levels(); ++level) {
gl4.glTexSubImage2D(GL_TEXTURE_2D, level,
0, 0,
texture.dimensions(level)[0], texture.dimensions(level)[1],
format.external.value, format.type.value,
texture.data(0, 0, level));
}
gl4.glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
}
// Picking
{
gl4.glBindTexture(GL_TEXTURE_BUFFER, textureName[Texture.PICKING.ordinal()]);
gl4.glTexBuffer(GL_TEXTURE_BUFFER, GL_R32F, bufferName[Buffer.PICKING.ordinal()]);
gl4.glBindTexture(GL_TEXTURE_BUFFER, 0);
}
} catch (IOException ex) {
Logger.getLogger(Gl_420_picking.class.getName()).log(Level.SEVERE, null, ex);
}
return true;
}
In the initProgram we initialize our program by:
generating a pipeline (a composition of different shader stages), glGenProgramPipelines
creating the vertex shader code vertShaderCode, where GL_VERTEX_SHADER is the shader type, SHADERS_ROOT is the place where the shader source is located, SHADERS_SOURCE_UPDATE is the name and "vert" is the extension
initializing it, and doing the same for the fragment shader
grabbing the generated index and saving it in programName
setting the program separable (nothing essential here, just pure sport), glProgramParameteri
adding both shaders to our shaderProgram, then linking and compiling it, link
specifying which program stages our pipelineName has, glUseProgramStages
Code:
private boolean initProgram(GL4 gl4) {
boolean validated = true;
gl4.glGenProgramPipelines(1, pipelineName, 0);
// Create program
if (validated) {
ShaderProgram shaderProgram = new ShaderProgram();
ShaderCode vertShaderCode = ShaderCode.create(gl4, GL_VERTEX_SHADER,
this.getClass(), SHADERS_ROOT, null, SHADERS_SOURCE_UPDATE, "vert", null, true);
ShaderCode fragShaderCode = ShaderCode.create(gl4, GL_FRAGMENT_SHADER,
this.getClass(), SHADERS_ROOT, null, SHADERS_SOURCE_UPDATE, "frag", null, true);
shaderProgram.init(gl4);
programName = shaderProgram.program();
gl4.glProgramParameteri(programName, GL_PROGRAM_SEPARABLE, GL_TRUE);
shaderProgram.add(vertShaderCode);
shaderProgram.add(fragShaderCode);
shaderProgram.link(gl4, System.out);
}
if (validated) {
gl4.glUseProgramStages(pipelineName[0], GL_VERTEX_SHADER_BIT | GL_FRAGMENT_SHADER_BIT, programName);
}
return validated & checkError(gl4, "initProgram");
}
In the initVertexArray we:
generate a single vertex array, glGenVertexArrays, and bind it, glBindVertexArray
bind the vertex buffer and set the attributes for the position and the texture coordinate, here interleaved. The position is identified by the attribute index Semantic.Attr.POSITION (this will match the one in the vertex shader), component size 2, type GL_FLOAT, normalized false, stride (the total size of each vertex) 2 * 2 * Float.BYTES, and offset 0 inside the vertex. Similarly for the texture coordinate
unbind the vertex buffer, since it is not part of the vertex array state. It must be bound only for glVertexAttribPointer, so that OpenGL knows which buffer those parameters refer to
enable the corresponding vertex attribute arrays, glEnableVertexAttribArray
bind the element (indices) buffer, which is part of the vertex array state
Code:
private boolean initVertexArray(GL4 gl4) {
gl4.glGenVertexArrays(1, vertexArrayName, 0);
gl4.glBindVertexArray(vertexArrayName[0]);
{
gl4.glBindBuffer(GL_ARRAY_BUFFER, bufferName[Buffer.VERTEX.ordinal()]);
gl4.glVertexAttribPointer(Semantic.Attr.POSITION, 2, GL_FLOAT, false, 2 * 2 * Float.BYTES, 0);
gl4.glVertexAttribPointer(Semantic.Attr.TEXCOORD, 2, GL_FLOAT, false, 2 * 2 * Float.BYTES, 2 * Float.BYTES);
gl4.glBindBuffer(GL_ARRAY_BUFFER, 0);
gl4.glEnableVertexAttribArray(Semantic.Attr.POSITION);
gl4.glEnableVertexAttribArray(Semantic.Attr.TEXCOORD);
gl4.glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferName[Buffer.ELEMENT.ordinal()]);
}
gl4.glBindVertexArray(0);
return true;
}
In the render we:
bind the TRANSFORM buffer that will contain our transformation matrix
get a ByteBuffer pointer to it via glMapBufferRange
calculate the projection, view and model matrices and multiply them in that order, p * v * m, also called the mvp matrix
save our mvp matrix through our pointer and rewind the buffer (position set to 0 again)
unmap it to make sure it gets uploaded to the GPU
set the viewport to match our window size
set the clear depthValue to 1 (superfluous, since it is the default value), then clear the depth buffer, with the depthValue, and the color buffer, with the color {1.0f, 0.5f, 0.0f, 1.0f}
bind the pipeline
set the active texture to 0
bind the diffuse texture and the picking image texture
bind the vertex array
bind the transform uniform buffer
render; glDrawElementsInstancedBaseVertexBaseInstance is overkill here, but what matters is the primitive type GL_TRIANGLES, the number of indices elementCount and their type GL_UNSIGNED_SHORT
bind the picking buffer and read back its value
Code:
@Override
protected boolean render(GL gl) {
GL4 gl4 = (GL4) gl;
{
gl4.glBindBuffer(GL_UNIFORM_BUFFER, bufferName[Buffer.TRANSFORM.ordinal()]);
ByteBuffer pointer = gl4.glMapBufferRange(
GL_UNIFORM_BUFFER, 0, projection.length * Float.BYTES,
GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
FloatUtil.makePerspective(projection, 0, true, (float) Math.PI * 0.25f,
(float) windowSize.x / windowSize.y, 0.1f, 100.0f);
FloatUtil.makeIdentity(model);
FloatUtil.multMatrix(projection, view());
FloatUtil.multMatrix(projection, model);
for (float f : projection) {
pointer.putFloat(f);
}
pointer.rewind();
// Make sure the uniform buffer is uploaded
gl4.glUnmapBuffer(GL_UNIFORM_BUFFER);
}
gl4.glViewportIndexedf(0, 0, 0, windowSize.x, windowSize.y);
float[] depthValue = {1.0f};
gl4.glClearBufferfv(GL_DEPTH, 0, depthValue, 0);
gl4.glClearBufferfv(GL_COLOR, 0, new float[]{1.0f, 0.5f, 0.0f, 1.0f}, 0);
gl4.glBindProgramPipeline(pipelineName[0]);
gl4.glActiveTexture(GL_TEXTURE0);
gl4.glBindTexture(GL_TEXTURE_2D, textureName[Texture.DIFFUSE.ordinal()]);
gl4.glBindImageTexture(Semantic.Image.PICKING, textureName[Texture.PICKING.ordinal()],
0, false, 0, GL_WRITE_ONLY, GL_R32F);
gl4.glBindVertexArray(vertexArrayName[0]);
gl4.glBindBufferBase(GL_UNIFORM_BUFFER, Semantic.Uniform.TRANSFORM0, bufferName[Buffer.TRANSFORM.ordinal()]);
gl4.glDrawElementsInstancedBaseVertexBaseInstance(GL_TRIANGLES, elementCount, GL_UNSIGNED_SHORT, 0, 5, 0, 0);
gl4.glBindBuffer(GL_ARRAY_BUFFER, bufferName[Buffer.PICKING.ordinal()]);
ByteBuffer pointer = gl4.glMapBufferRange(GL_ARRAY_BUFFER, 0, Float.BYTES, GL_MAP_READ_BIT);
float depth = pointer.getFloat();
gl4.glUnmapBuffer(GL_ARRAY_BUFFER);
System.out.printf("Depth: %2.3f\n", depth);
return true;
}
In our vertex shader, executed for each vertex, we:
define the glsl version and profile
define all the attribute indices, which must coincide with the ones coming from the Semantic we used previously
set some memory layout parameters, such as std140 and column_major (superfluous here, it is the default for matrices)
declare the Transform uniform buffer
declare a vec3 position and a vec2 texCoord input
declare a (built-in, here redeclared and incomplete) gl_PerVertex output
declare a Block block output
save the incoming texCoord inside our block, and the clip-space position of the vertex inside gl_Position. The incoming position is in model space; * model matrix = vertex in world space, * view/camera matrix = vertex in camera/view space, * projection matrix = vertex in clip space.
Code:
#version 420 core
#define POSITION 0
#define COLOR 3
#define TEXCOORD 4
#define TRANSFORM0 1
precision highp float;
precision highp int;
layout(std140, column_major) uniform;
layout(binding = TRANSFORM0) uniform Transform
{
mat4 mvp;
} transform;
layout(location = POSITION) in vec3 position;
layout(location = TEXCOORD) in vec2 texCoord;
out gl_PerVertex
{
vec4 gl_Position;
};
out Block
{
vec2 texCoord;
} outBlock;
void main()
{
outBlock.texCoord = texCoord;
gl_Position = transform.mvp * vec4(position, 1.0);
}
There may be other stages after the vertex shader, such as tessellation control/evaluation and geometry, but they are not mandatory.
The last stage is the fragment shader, executed once per fragment/pixel, which starts similarly; then we:
declare the diffuse texture on binding 0, matching the glActiveTexture(GL_TEXTURE0) inside the render, and the picking imageBuffer where we will save our depth, identified by binding 1, matching Semantic.Image.PICKING in the render's glBindImageTexture
declare the picking coordinates, here hardcoded, but nothing stops you from turning them into a uniform variable and setting it at runtime
declare the incoming Block block holding the texture coordinates
declare the default output color
if the current fragment coordinates gl_FragCoord (a built-in variable) correspond to the picking coordinates pickingCoord, save the current z value gl_FragCoord.z inside the imageBuffer depth and set the output color to vec4(1, 0, 1, 1); otherwise we set it equal to the diffuse texture via texture(diffuse, inBlock.texCoord.st). st is part of the stpq selection, synonymous with xyzw or rgba.
Code:
#version 420 core
#define FRAG_COLOR 0
precision highp float;
precision highp int;
layout(std140, column_major) uniform;
in vec4 gl_FragCoord;
layout(binding = 0) uniform sampler2D diffuse;
layout(binding = 1, r32f) writeonly uniform imageBuffer depth;
uvec2 pickingCoord = uvec2(320, 240);
in Block
{
vec2 texCoord;
} inBlock;
layout(location = FRAG_COLOR, index = 0) out vec4 color;
void main()
{
if(all(equal(pickingCoord, uvec2(gl_FragCoord.xy))))
{
imageStore(depth, 0, vec4(gl_FragCoord.z, 0, 0, 0));
color = vec4(1, 0, 1, 1);
}
else
color = texture(diffuse, inBlock.texCoord.st);
}
Finally, we clean up all our OpenGL resources:
@Override
protected boolean end(GL gl) {
GL4 gl4 = (GL4) gl;
gl4.glDeleteProgramPipelines(1, pipelineName, 0);
gl4.glDeleteProgram(programName);
gl4.glDeleteBuffers(Buffer.MAX.ordinal(), bufferName, 0);
gl4.glDeleteTextures(Texture.MAX.ordinal(), textureName, 0);
gl4.glDeleteVertexArrays(1, vertexArrayName, 0);
return true;
}
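One follow-up, since the original question asked for the z-coordinate of the clicked object: the value read back is a window-space depth in [0, 1], not a camera-space distance. A hedged sketch of converting it, assuming the standard perspective projection set up above (near = 0.1f, far = 100.0f) and the default depth range:
// Convert window-space depth in [0, 1] to a positive eye-space distance.
float near = 0.1f, far = 100.0f;
float ndcZ = depth * 2.0f - 1.0f;                                    // [0,1] -> [-1,1]
float eyeZ = 2.0f * near * far / (far + near - ndcZ * (far - near)); // invert the projection
System.out.printf("Eye-space distance: %2.3f\n", eyeZ);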

Flipping texture when copying to another texture

I need to flip my texture vertically when copying it into another texture. I know about 3 simple ways to do it:
1. Blit from one FBO into another using a fullscreen quad (and flip in the frag shader)
2. Blit using glBlitFramebuffer.
3. Using glCopyImageSubData
I need to perform this copy between 2 textures which aren't attached to any FBO, so I am trying to avoid the first 2 solutions. I am trying the third one.
Doing it like this:
glCopyImageSubData(srcTex ,GL_TEXTURE_2D,0,0,0,0,targetTex,GL_TEXTURE_2D,0,0,width ,0,height,0,1);
It doesn't work. The copy returns garbage. Is this method supposed to be able to flip when reading? Is there an alternative FBO-unrelated method (GPU side only)?
Btw:
glCopyTexSubImage2D(GL_TEXTURE_2D,0,0,0,0,height ,width,0 );
Doesn't work either.
Rendering a textured quad to an FBO, drawing the quad inverted, would work.
Or you could go with a simple fragment shader doing an imageLoad + imageStore, inverting the y coordinate, with 2 bound image textures.
glBindImageTexture(0, copyFrom, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA32UI);
glBindImageTexture(1, copyTo, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32UI);
the shader would look something like:
layout(binding = 0, rgba32ui) uniform uimage2D input_buffer;
layout(binding = 1, rgba32ui) uniform uimage2D output_buffer;
uniform float u_texHeight;
void main(void)
{
uvec4 color = imageLoad( input_buffer, ivec2(gl_FragCoord.xy) );
imageStore( output_buffer, ivec2(gl_FragCoord.x,u_texHeight-gl_FragCoord.y-1), color );
}
You'll have to tweak it a little, but I know it works; I've used it before.
Hope this helps.
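For completeness, the pass also needs something to generate the fragments and a barrier before the destination texture is used again. A rough sketch of that glue, written as JOGL-style Java to match the other examples on this page; copyFlipProgram, copyFrom, copyTo, width and height are placeholder names, and the vertex shader is assumed to be a gl_VertexID fullscreen triangle like the one in the SharpGL question above:
gl.glUseProgram(copyFlipProgram);
gl.glUniform1f(gl.glGetUniformLocation(copyFlipProgram, "u_texHeight"), (float) height);
gl.glBindImageTexture(0, copyFrom, 0, false, 0, GL4.GL_READ_ONLY, GL4.GL_RGBA32UI);
gl.glBindImageTexture(1, copyTo, 0, false, 0, GL4.GL_WRITE_ONLY, GL4.GL_RGBA32UI);
// One fragment per texel; assumes the bound framebuffer is at least width x height.
gl.glViewport(0, 0, width, height);
gl.glDrawArrays(GL4.GL_TRIANGLES, 0, 3);
// Image stores are incoherent: insert a barrier before copyTo is sampled elsewhere.
gl.glMemoryBarrier(GL4.GL_TEXTURE_FETCH_BARRIER_BIT);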

GLSL: sampler3D in vertex shader, texture appears blank

EDIT: Think I've narrowed down the problem. Skip to the running section.
I'm trying to sample a 3d texture in my vertex shader, I'm going to use the texel values as corner value in Marching Cubes. The issue I'm having is that no matter what method I use to sample it, I always get (0,0,0,0). I've tried using texelFetch and texture3D and neither seem to work.
I'm also using transform feedback, but as far as I'm aware that shouldn't cause this issue.
Shader setup:
glEnable(GL_TEXTURE_3D);
Shader vertListTriangles(GL_VERTEX_SHADER_ARB);
vertListTriangles.setSource(lst_tri_vert); //Util to load from file.
vertListTriangles.compile();
vertListTriangles.errorCheck(); //Prints errors to console if they exist - shader compiles fine.
Shader geomListTriangles(GL_GEOMETRY_SHADER_ARB);
geomListTriangles.setSource(lst_tri_geom); //Util to load from file
geomListTriangles.compile();
geomListTriangles.errorCheck(); //Prints errors to console if they exist - shader compiles fine.
program.attach(vertListTriangles);
program.attach(geomListTriangles);
//Setup transform feedback varyings, also works as expected.
const GLchar* varyings1[1];
varyings1[0] = "gTriangle";
glTransformFeedbackVaryings(program.getID(), 1, varyings1, GL_INTERLEAVED_ATTRIBS);
program.link();
program.checkLink(); //Prints link errors to console - program links fine aparently.
Texture setup:
glBindTexture(GL_TEXTURE_3D, textureID);
errorCheck("texture bind"); //<- Detects GL errors, I actually get a GL_INVALID_OPERATION here, not sure if its the cause of the problem though as all subsuquent binds go smoothly.
if(!(glIsTexture(textureID)==GL_TRUE)) consolePrint("Texture Binding Failed."); //Oddly the texture never registers as failed despite the previous error message.
//Generate Texture
GLfloat volumeData[32768*3];
for(int z = 0; z < 32; z++)
{
for(int y = 0; y < 32; y++)
{
for(int x = 0; x < 32; x++)
{
//Set all 1s for testing purposes
volumeData[(x*3)+(y*96)+(z*3072)] = 1.0f;
volumeData[(x*3)+(y*96)+(z*3072)+1] = 1.0f;
volumeData[(x*3)+(y*96)+(z*3072)+2] = 1.0f;
}
}
}
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 0);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_BASE_LEVEL, 0);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8, 32, 32, 32, 0, GL_RGB,
GL_FLOAT, volumeData);
glBindTexture(GL_TEXTURE_3D, 0);
Running Shader:
EDIT: Here it gets interesting. If I specify an incorrect uniform name or comment out the below lines it appears to work.
program.use();
//Disable Rastering
glEnable(GL_RASTERIZER_DISCARD);
//Input buffer: Initial vertices
glBindBuffer(GL_ARRAY_BUFFER, mInitialDataBuffer);
glEnableVertexAttribArray(0);
glVertexAttribIPointer(0, 1, GL_UNSIGNED_INT, 0, 0); //Initial input is array of uints
//Output buffer: Triangles
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, mTriangleBuffer); //Triangle Markers, in the form of uints. NOT actual triangles.
//Texture setup
//If I comment out from here....
GLint sampler = glGetUniformLocation(program.getID(), "densityVol");
glUniform1i(sampler, GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
//To here. It appears to work.
glBindTexture(GL_TEXTURE_3D, textureID);
//Just using this to debug texture.
//test is all 1s, so the texture is uploading correctly.
GLfloat test[32768*3];
memset(test, 0, sizeof(test));
glGetTexImage(GL_TEXTURE_3D, 0, GL_RGB, GL_FLOAT, test);
//Transform Feedback and Draw
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, 29790);
glEndTransformFeedback();
//Re-enable Rastering and cleanup
glDisable(GL_RASTERIZER_DISCARD);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
My code is a little more spread out in reality, but I hope I managed to edit it into something cohesive. Anyway, if I map the output buffer it does indeed output some information; however, it behaves as if all the texture data is 0s. I hacked the shader to just output some test results instead, but I can't find any evidence that the shader is using the texture correctly:
#version 410
#extension GL_EXT_gpu_shader4 : require
layout (location = 0) in int x_y_z;
uniform sampler3D densityVol;
out Voxel
{
/*
Each triangle is the edges it joins. There are 12 edges and so we need 12 bits. 4 For each edge.
There are up to 32 voxels, which means we need 6 bits for each coord, which is 18.
30 bits total.
int format 00xxxxxxyyyyyyzzzzzz111122223333
*/
uint triangles[5];
uint triangleCount;
} vVoxel;
//... Omitted some huge ref tables.
void main()
{
vec4 sample0 = texture3D(densityVol, vec3(0.1,0.1,0.1) );
vec4 sample1 = texture3D(densityVol, vec3(0.9,0.9,0.9) );
vec4 sample2 = texture3D(densityVol, vec3(0.1,0.1,0.9) );
vec4 sample3 = texture3D(densityVol, vec3(0.9,0.9,0.1) );
if(sample0.r > 0.0f)
{
vVoxel.triangles[1] = 1;
}
if(sample1.r > 0.0f)
{
vVoxel.triangles[2] = 2;
}
if(sample2.r > 0.0f)
{
vVoxel.triangles[3] = 3;
}
if(sample3.r > 0.0f)
{
vVoxel.triangles[4] = 4;
}
vVoxel.triangleCount = 5;
}
Not the best designed test, but I didn't want to write something from scratch. If I change the if clauses to if(true) the test outputs correctly. When the shader is compiled as above, the buffer is blank. I'm using a GS for pass through.
Can anyone see an obvious mistake in there? I've been stumped for about 2 hours now and I can't see what I'm doing different from many of the GLSL texturing tutorials.
Okay, figured it out.
glUniform1i(sampler, GL_TEXTURE0);
The GL_TEXTURE0 is incorrect here: the sampler uniform expects the index of the texture unit (0, 1, 2, ...), not the GL_TEXTURE0 enum value.
glUniform1i(sampler, 0);
is how it should be.
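More generally, glActiveTexture takes the GL_TEXTURE0 + i enum while the sampler uniform takes the bare unit index i. A short sketch of the corrected sequence, shown as JOGL-style Java like the other examples on this page (samplerLoc and textureID stand for the question's uniform location and texture handle):
int unit = 0;                                   // texture unit index
gl.glActiveTexture(GL4.GL_TEXTURE0 + unit);     // enum form for glActiveTexture
gl.glBindTexture(GL4.GL_TEXTURE_3D, textureID); // attach the 3D texture to that unit
gl.glUniform1i(samplerLoc, unit);               // the sampler uniform gets the bare index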