OpenGL 4.5 Buffer Texture: extension support

I use OpenGL version 4.5.0, but somehow I cannot make the texture_buffer_object extensions work for me ("GL_EXT_texture_buffer_object" or "GL_ARB_texture_buffer_object"). I am quite new to OpenGL, but if I understand correctly, these extensions are quite old and their functionality has long since been included in core OpenGL...
I looked for the extensions with "OpenGL Extensions Viewer 4.1"; it says they are supported on my computer, and glewGetExtension("GL_EXT_texture_buffer_object") and glewGetExtension("GL_ARB_texture_buffer_object") both return true as well.
But the data from the buffer never shows up when sampling the texture (in the fragment shader the texture contains only zeros).
So I thought maybe the extensions are somehow disabled by default, and I tried enabling them in my fragment shader:
#version 440 core
#extension GL_ARB_texture_buffer_object : enable
#extension GL_EXT_texture_buffer_object : enable
And now I get these warnings at run time:
***GLSL Linker Log:
Fragment info
-------------
0(3) : warning C7508: extension ARB_texture_buffer_object not supported
0(4) : warning C7508: extension EXT_texture_buffer_object not supported
Please see the code example below:
//#define GL_TEXTURE_BIND_TARGET GL_TEXTURE_2D
#define GL_TEXTURE_BIND_TARGET GL_TEXTURE_BUFFER_EXT
.....
glGenTextures(1, &texObject);
glBindTexture(GL_TEXTURE_BIND_TARGET, texObject);
GLuint bufferObject;
glGenBuffers(1, &bufferObject);
// Bind this buffer to the texture buffer binding point (OpenGL is state-based)
glBindBuffer(GL_TEXTURE_BIND_TARGET, bufferObject);
glBufferData(GL_TEXTURE_BIND_TARGET, nWidth*nHeight*4*sizeof(float), NULL, GL_DYNAMIC_DRAW);
float *test = (float *)glMapBuffer(GL_TEXTURE_BIND_TARGET, GL_READ_WRITE);
for(int i = 0; i < nWidth*nHeight*4; i++)
    test[i] = i / (nWidth*nHeight*4.0);
glUnmapBuffer(GL_TEXTURE_BIND_TARGET);
glTexBufferEXT(GL_TEXTURE_BIND_TARGET, GL_RGBA32F_ARB, bufferObject);
//glTexImage2D(GL_TEXTURE_BIND_TARGET, 0, components, nWidth, nHeight,
// 0, format, GL_UNSIGNED_BYTE, data);
............
So if I use the GL_TEXTURE_2D target and load a data array directly into the texture, everything works fine. If I use the GL_TEXTURE_BUFFER_EXT target and try to source the texture from the buffer, I get an empty texture in the shader.
Note: I have to load the texture data from a buffer because in my real project I generate the data on the CUDA side, and the only way (that I know of) to visualize data from CUDA is through such buffer textures.
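For reference, the CUDA side writes into that same buffer object through the CUDA-OpenGL interop API, roughly like the sketch below; fillBuffer, blocks and threads are placeholders for my actual kernel and launch configuration:
#include <cuda_gl_interop.h>
// Register the GL buffer with CUDA once, after glBufferData():
cudaGraphicsResource *cudaRes = NULL;
cudaGraphicsGLRegisterBuffer(&cudaRes, bufferObject, cudaGraphicsRegisterFlagsWriteDiscard);
// Each time new data is generated: map, get a device pointer, run the kernel, unmap.
cudaGraphicsMapResources(1, &cudaRes, 0);
float *devPtr = NULL;
size_t numBytes = 0;
cudaGraphicsResourceGetMappedPointer((void **)&devPtr, &numBytes, cudaRes);
fillBuffer<<<blocks, threads>>>(devPtr, nWidth * nHeight);   // placeholder kernel
cudaGraphicsUnmapResources(1, &cudaRes, 0);
// After unmapping, the buffer texture attached to this buffer sees the new contents.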
So, the questions are:
1) Why do I get no data in the texture, even though the OpenGL version is fine and the Extensions Viewer shows the extensions as supported?
2) Why does trying to enable the extensions in the shader fail?
Edit: I updated the post because I found the reason for the "Invalid Enum" error I mentioned at first; it was caused by a glTexParameteri call, which is not allowed for buffer textures.

I solved this. I was in a hurry and stupidly missed a very important thing on a wiki page:
https://www.opengl.org/wiki/Buffer_Texture
Access in shaders
In GLSL, buffer textures can only be accessed with the texelFetch function. This function takes pixel offsets into the texture rather than normalized texture coordinates. The sampler type for buffer textures is samplerBuffer.
So in GLSL a buffer texture has to be used like this:
uniform samplerBuffer myTexture;
void main (void)
{
    vec4 color = texelFetch(myTexture, index);  // index is an integer texel offset
}
not like a usual texture:
uniform sampler1D myTexture;
void main (void)
{
    vec4 color = texture(myTexture, gl_FragCoord.x);
}
As for the warnings about unsupported extensions: I think I get them because this functionality has been part of core OpenGL since version 3.1, so the extensions no longer need to be enabled explicitly.
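For completeness, since the functionality is core, the unsuffixed entry points can be used directly on a 4.x context. Below is a minimal sketch of the buffer texture setup with core calls only; program is assumed to be the linked shader program, and texObject/bufferObject are the objects from the question:
// Attach the buffer object to a buffer texture (core since OpenGL 3.1):
glGenTextures(1, &texObject);
glBindTexture(GL_TEXTURE_BUFFER, texObject);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, bufferObject);

// Bind it to texture unit 0 and point the samplerBuffer uniform at that unit:
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_BUFFER, texObject);
glUniform1i(glGetUniformLocation(program, "myTexture"), 0);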

OpenGL point sprite uv coordinates not working correctly

So I'm trying to render a grid in OpenGL using point sprites and render a cross texture onto each point. I've read on a few sites that you can access the uv coordinates of a point sprite in the fragment shader with gl_PointCoord, but for some reason it is always 0, unless I capture a frame with renderdoc to take a look at what's going on. I have the same issue on my Windows laptop (NVIDIA GTX 960M) and on my Linux desktop (NVIDIA GTX 1070). So either this is a general issue with the NVIDIA drivers or I'm configuring something wrong.
For debugging purposes I increased the size of some grid points and set the color equal to gl_PointCoord. This is the captured framebuffer content after the grid has been rendered and the original window as comparison:
My rendering setup is pretty complex and scattered around different classes because it is wrapped inside a GUI library, but basically these are the calls that happen when rendering the grid:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, this->m_framebufferId);
this->updateProjection(width, height);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_SCISSOR_TEST);
glEnable(GL_STENCIL_TEST);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glClearColor(backgroundColor.m_red, backgroundColor.m_green, backgroundColor.m_blue, backgroundColor.m_alpha);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
<generate grid data>
glBindBuffer(GL_ARRAY_BUFFER, this->gridVboId);
glBufferSubData(GL_ARRAY_BUFFER, 0, 8 * this->pointCount, this->gridPoints);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, this->gridStyleTexture->getTextureId());
glUseProgram(this->gridShaderProgram->getId());
this->gridShaderProgram->setUniform4x4f("u_projectionMatrix", this->projectionMatrix);
this->gridShaderProgram->setUniform1f("u_depth", this->gridMode == GRID_MODE_BACKGROUND ? -1000.0f : -1.0f);
this->gridShaderProgram->setUniform1f("u_size", 1.0f);
this->gridShaderProgram->setUniform1i("u_texture", 0);
this->gridShaderProgram->setUniform2f("u_icon", 2.0f, 0.5f);
this->gridShaderProgram->setUniform4f("u_color", this->gridColor);
glEnable(GL_PROGRAM_POINT_SIZE);
glBindVertexArray(this->gridVaoId);
glEnableVertexAttribArray(0);
glDrawArrays(GL_POINTS, 0, this->pointCount);
glDisableVertexAttribArray(0);
glBindVertexArray(0);
glDisable(GL_PROGRAM_POINT_SIZE);
this->pointCount = 0;
This is the vertex shader:
#version 330
layout (location = 0) in vec2 v_position;
uniform mat4 u_projectionMatrix;
uniform float u_depth;
uniform float u_size;
void main()
{
gl_Position = u_projectionMatrix * vec4(v_position, u_depth, 1.0);
gl_PointSize = u_size;
}
The Fragment shader:
#version 330
layout (location = 0) out vec4 fragmentColor;
uniform sampler2D u_texture;
uniform vec2 u_icon;
uniform vec4 u_color;
uniform float u_size;
void main()
{
if (u_size > 1.0)
fragmentColor = vec4(gl_PointCoord, 0.0, 1.0);
else
{
vec2 uvCoord = gl_PointCoord / vec2(1.0, u_icon.x) + vec2(0.0, u_icon.y);
fragmentColor = texture(u_texture, uvCoord) * u_color;
}
}
The big squares are rendered with the top branch because u_size is greater than 1, in the captured frame it is 15.0.
Is this a bug, or am I missing some OpenGL calls to make it work correctly?
From the additional comment:
I'm using GLFW to create a context with no specific profile selected.
If you do not explicitly request a core profile, you will get either a legacy context (one predating the invention of profiles in GL) or a compatibility profile. Since support for compatibility profiles is optional, you cannot rely on getting a context supporting GL 3.3 that way.
It creates a 4.6 context on Linux; it should be the same on my Windows laptop.
That's just luck. With the open-source Mesa drivers on Linux, you will only get GL 3.0 that way, and on macOS only 2.1.
I use glad as loader, which I configured to be in core profile.
That doesn't matter. It won't change which version and profile your context supports. It will just limit the loaded functions to the subset GL 3.3 core provides.
However, my main point in asking about the GL profile is that point sprite rendering differs significantly between core and compatibility profiles:
In core profile OpenGL, point rendering is automatically point sprite rendering.
In compatibility profiles, you have to explicitly enable this via glEnable(GL_POINT_SPRITE); otherwise, gl_PointCoord will not be calculated.
I've read on a few sites that you can access the uv coordinates of a point sprite in the fragment shader with gl_PointCoord, but for some reason it is always 0, unless I capture a frame with renderdoc to take a look at what's going on.
That doesn't surprise me then: renderdoc only works with core profile contexts, and most likely tweaks the context creation to a core profile in your case.
Since your code seems to target core profile anyway (and seems to work on that, too, judging by the experience you get with renderdoc), you should explicitly request a core profile. This will have the additional benefit of greatly increasing the number of implementations which can run your code.
The other solution would be to detect whether you're running in a core or compatibility profile and conditionally call glEnable(GL_POINT_SPRITE) (or, the quick and dirty variant: always call it and ignore the GL error this generates on core profile contexts). However, your glad loader's GL header probably will not even contain the #define GL_POINT_SPRITE 0x8861 definition...
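With GLFW, explicitly requesting a core profile could look roughly like this (just a sketch; the hints must be set before glfwCreateWindow, and the forward-compat hint is only strictly required on macOS):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);
GLFWwindow *window = glfwCreateWindow(width, height, "grid", NULL, NULL);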

OpenGL ES 3 (iOS) texturing oddness - want to know why

I have a functioning OpenGL ES 3 program (iOS), but I'm having a difficult time understanding OpenGL textures. I'm trying to render several quads to the screen, all with different textures. The textures are all 256-color images with a separate palette.
This is the C++ code that sends the textures to the shaders:
// THIS CODE WORKS, BUT I'M NOT SURE WHY
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->TextureId);
glUniform1i(_glShaderTexture, 1); // what does the 1 mean here
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->PaletteId);
glUniform1i(_glShaderPalette, 2); // what does the 2 mean here?
glDrawElements(GL_TRIANGLES, sizeof(Indices)/sizeof(Indices[0]), GL_UNSIGNED_BYTE, 0);
This is the fragment shader
uniform sampler2D texture; // New
uniform sampler2D palette; // A palette of 256 colors
varying highp vec2 texCoordOut;
void main()
{
highp vec4 palIndex = texture2D(texture, texCoordOut);
gl_FragColor = texture2D(palette, palIndex.xy);
}
As I said, the code works, but I'm unsure WHY it works. Several seemingly minor changes break it. For example, using GL_TEXTURE0 and GL_TEXTURE1 in the C++ code breaks it. Changing the numbers in glUniform1i to 0 and 1 breaks it. I'm guessing I do not understand something about texturing in OpenGL 3+ (maybe texture units?), but I need some guidance to figure out what.
Since it's often confusing to newer OpenGL programmers, I'll try to explain the concept of texture units on a very basic level. It's not a complex concept once you pick up on the terminology.
The whole thing is motivated by offering the possibility of sampling multiple textures in shaders. Since OpenGL traditionally operates on objects that are bound with glBind*() calls, this means that an option to bind multiple textures is needed. Therefore, the concept of having one bound texture was extended to having a table of bound textures. What OpenGL calls a texture unit is an entry in this table, designated by an index.
If you wanted to describe this state in a C/C++ style notation, you could define the table of bound textures as an array of texture ids, where the size is the maximum number of bound textures supported by the implementation (queried with glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, ...)):
GLuint BoundTextureIds[MAX_TEXTURE_UNITS];
If you bind a texture, it gets bound to the currently active texture unit. This means that the last call to glActiveTexture() determines which entry in the table of bound textures is modified. In a typical call sequence, which binds a texture to texture unit i:
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, texId);
this would correspond to modifying our imaginary data structure by:
BoundTextureIds[i] = texId;
That covers the setup. Now, the shaders can access all the textures in this table. Variables of type sampler2D are used to access textures in the GLSL code. To determine which texture each sampler2D variable accesses, we need to specify which table entry each one uses. This is done by setting the uniform value to the table index:
glUniform1i(samplerLoc, i);
specifies that the sampler uniform at location samplerLoc reads from table entry i, meaning that it samples the texture with id BoundTextureIds[i].
In the specific case of the question, the first texture was bound to texture unit 1 because glActiveTexture(GL_TEXTURE1) was called before glBindTexture(). To access this texture from the shader, the shader uniform needs to be set to 1 as well. Same thing for the second texture, with texture unit 2.
(The description above was slightly simplified because it did not take into account different texture targets. In reality, textures with different targets, e.g. GL_TEXTURE_2D and GL_TEXTURE_3D, can be bound to the same texture unit.)
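Putting this together for the code in the question: the only requirement is that the unit index selected with glActiveTexture() matches the value written into the sampler uniform, with the program bound when glUniform1i is called. A sketch using units 1 and 2, with the uniform locations queried by the names from the question's shader (program stands for the shader program handle, which is not shown in the question):
glUseProgram(program);                                     // uniforms apply to the bound program

glActiveTexture(GL_TEXTURE1);                              // select table entry 1
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->TextureId);
glUniform1i(glGetUniformLocation(program, "texture"), 1);  // sampler "texture" reads entry 1

glActiveTexture(GL_TEXTURE2);                              // select table entry 2
glBindTexture(GL_TEXTURE_2D, _renderQueue[idx]->PaletteId);
glUniform1i(glGetUniformLocation(program, "palette"), 2);  // sampler "palette" reads entry 2
In principle any pair of units works the same way, as long as the glActiveTexture() index and the uniform value agree.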
GL_TEXTURE1 and GL_TEXTURE2 refer to texture units. For sampler uniforms, glUniform1i takes a texture unit index as its second argument. This is why they are 1 and 2.
From the OpenGL website:
The value of a sampler uniform in a program is not a texture object, but a texture image unit index. So you set the texture unit index for each sampler in a program.

Deleting unused textures in opengl

While browsing here I noticed something strange.
After loading a compressed texture and getting a handle for the shader's uniform:
GLuint Texture = loadDDS("uvtemplate.DDS");
// Get a handle for our "myTextureSampler" uniform
GLuint TextureID = glGetUniformLocation(programID, "myTextureSampler");
He deleted the texture using the ID he got from glGetUniformLocation:
glDeleteTextures(1, &TextureID);
Shouldn't he use this instead?
glDeleteTextures(1, &Texture );
Yes, that is correct.
Some OpenGL implementations (like the AMD and NVIDIA drivers) tend to return ascending resource IDs, starting from 1. If this is the first texture the code allocates and the sampler is the first uniform in the shader program, the IDs happen to match and the code accidentally works. However, it will likely break on other platforms (like Intel drivers), or as soon as more resources are used.
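A small sketch to make the distinction explicit (the values in the comments are only examples):
GLuint Texture   = loadDDS("uvtemplate.DDS");                           // texture object name, e.g. 1
GLint  TextureID = glGetUniformLocation(programID, "myTextureSampler"); // uniform location, e.g. 0
// ... render ...
glDeleteTextures(1, &Texture);      // correct: pass the texture object name
// glDeleteTextures(1, &TextureID); // wrong: a uniform location is not a texture name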

Directx 11, send multiple textures to shader

Using this code I can send one texture to the shader:
devcon->PSSetShaderResources(0, 1, &pTexture);
Of course, I created pTexture with D3DX11CreateShaderResourceViewFromFile.
Shader:
Texture2D Texture;
return color * Texture.Sample(ss, texcoord);
I'm currently only sending one texture to the shader, but I would like to send multiple textures; how is this possible?
Thank you.
You can use multiple textures as long as their count does not exceed your shader profile specs. Here is an example:
HLSL Code:
Texture2D diffuseTexture : register(t0);
Texture2D anotherTexture : register(t1);
C++ Code:
devcon->[V|P|D|G|C|H]SSetShaderResources(texture_index, 1, &texture);
So for example for above HLSL code it will be:
devcon->PSSetShaderResources(0, 1, &diffuseTextureSRV);
devcon->PSSetShaderResources(1, 1, &anotherTextureSRV); (SRV stands for Shader Resource View)
OR:
ID3D11ShaderResourceView *textures[] = { diffuseTextureSRV, anotherTextureSRV };
devcon->PSSetShaderResources(0, 2, textures);
HLSL names can be arbitrary and don't have to correspond to any specific name; only the indexes matter. While the "register(tXX)" statements are not required, I'd recommend using them to avoid confusion about which texture corresponds to which slot.
By using Texture Arrays. When you fill out your D3D11_TEXTURE2D_DESC look at the ArraySize member. This desc struct is the one that gets passed to ID3D11Device::CreateTexture2D. Then in your shader you use a 3rd texcoord sampling index which indicates which 2D texture in the array you are referring to.
Update: I just realised you might be talking about doing it over multiple calls (i.e. for different geometry), in which case you update the shader's texture resource view. If you are using the Effects framework you can use ID3DX11EffectShaderResourceVariable::SetResource, or alternatively rebind a new texture using PSSetShaderResources. However, if you are trying to blend between multiple textures, then you should use texture arrays.
You may also want to look into 3D textures, which provide a natural way to interpolate between adjacent textures in the array (whereas 2D arrays are automatically clamped to the nearest integer) via the 3rd element in the texcoord. See the HLSL sample remarks.
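A rough sketch of creating such a texture array follows; the sizes, format and variable names are only illustrative, and initialData would hold one D3D11_SUBRESOURCE_DATA per array slice:
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 256;
desc.Height           = 256;
desc.MipLevels        = 1;
desc.ArraySize        = 4;                          // number of 2D textures in the array
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D *textureArray = NULL;
device->CreateTexture2D(&desc, initialData, &textureArray);

// HLSL side: declare "Texture2DArray tex;" and sample with
// tex.Sample(ss, float3(texcoord, sliceIndex));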

GLSL 4.10 Texture Mapping

I'm trying to figure out how to do texture mapping using GLSL version 4.10. I'm pretty new to GLSL and was happy to get a triangle rendering today with colors fading based on sin(time) using shaders. Now I'm interested in using shaders with a single texture.
A lot of tutorials and even Stack Overflow answers suggest using gl_MultiTexCoord0. However, this has been deprecated since GLSL 1.30 and the latest version is now 4.20. My graphics card doesn't support 4.20 which is why I'm trying to use 4.10.
I know I'm generating and binding my texture appropriately, and I have proper vertex coordinates and texture coordinates because my heightmap rendered perfectly when I was using the fixed-function pipeline, and it renders fine with color rather than the texture.
Here are my GLSL shaders and some of my C++ draw code:
---heightmap.vert (GLSL)---
in vec3 position;
in vec2 texcoord;
out vec2 p_texcoord;
uniform mat4 projection;
uniform mat4 modelview;
void main(void)
{
gl_Position = projection * modelview * vec4(position, 1.0);
p_texcoord = texcoord;
}
---heightmap.frag (GLSL)---
in vec2 p_texcoord;
out vec4 color;
uniform sampler2D texture;
void main(void)
{
color = texture2D(texture, p_texcoord);
}
---Heightmap::Draw() (C++)---
// Bind Shader
// Bind VBO + IBO
// Enable Vertex and Texcoord client state
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
// glVertexPointer(...)
// glTexCoordPointer(...)
glUniform4fv(projLoc, projection);
glUniform4fv(modelviewLoc, modelview);
glUniform1i(textureId, 0);
// glDrawElements(...)
// glDisable/unbind everything
The thing I am also suspicious about is whether I have to pass the texture coordinates to the fragment shader as a varying, since I'm not touching them in the vertex shader. Also, I have no idea how the fragment shader gets the interpolated texcoords from that. It seems like it would just get 0.0 or 1.0, not an interpolated coordinate. I don't know enough about shaders to understand how that works. If somebody could enlighten me I would be thrilled!
Edit 1:
@Bahbar: So sorry, that was a typo. I'm typing code on one machine while reading it off another. Like I said, it all worked with the fixed-function pipeline. Although glEnableClientState and gl[Vertex|TexCoord]Pointer are deprecated, they should still work with shaders, no? glVertexPointer rather than glVertexAttribPointer worked with colors rather than textures. Also, I am using glBindAttribLocation (position to 0 and texcoord to 1).
The reason I am still using glVertexPointer is I am trying to un-deprecate one thing at a time.
glBindTexture takes a texture object as a second parameter.
// Enable Vertex and Texcoord client state
I assume you meant the generic vertex attributes? Where are your position and texcoord attributes set up? To do that, you need calls to glEnableVertexAttribArray and glVertexAttribPointer instead of glEnableClientState and glVertex/TexCoordPointer (all of those are deprecated in the same way that gl_MultiTexCoord is in GLSL).
And of course, to figure out where the attributes are bound, you need to either call glGetAttribLocation to figure out where the GL chose to put the attrib, or define it yourself with glBindAttribLocation (before linking the program).
Edit to add, following your addition:
Well, 0 might end up pulling data from glVertexPointer (for reasons you should not rely on: attrib 0 is special, and most IHVs make it work just like gl_Vertex), but 1 very likely won't be pulling data from glTexCoordPointer.
In theory, there is no overlap between the generic attributes (like your texcoord, which gets its data from glVertexAttribPointer(1, ...), 1 here being your chosen location) and the built-in attributes (like gl_MultiTexCoord0, which gets its data from glTexCoordPointer).
Now, NVIDIA is known not to follow the spec here, and indeed aliases attributes (this comes from the Cg model, as far as I know), and will go so far as to say to use a specific attribute location for glTexCoord (the Cg spec suggests it uses location 8 for TexCoord0, and location 1 is the blendweight attribute; see table 39, p. 242). But really, you should just bite the bullet and switch your TexCoordPointer calls to VertexAttribPointer calls.
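For illustration, the switch could look roughly like the sketch below. It assumes an interleaved layout of three position floats followed by two texcoord floats per vertex, which may not match your actual VBO:
// Before linking the program:
glBindAttribLocation(programId, 0, "position");
glBindAttribLocation(programId, 1, "texcoord");
glLinkProgram(programId);

// When drawing, with the VBO (and IBO) bound:
GLsizei stride = 5 * sizeof(float);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void *)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void *)(3 * sizeof(float)));
// glDrawElements(...) as before, then glDisableVertexAttribArray for both attributes.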