How to load a texture with GL Image? - opengl

I know how to load the texture:
std::unique_ptr<glimg::ImageSet> pImgSet(glimg::loaders::dds::LoadFromFile("test.dds"));
GLuint tex = glimg::CreateTexture(pImgSet.get(), 0);
But how do I get this texture into my shader?
GL Image - Unofficial OpenGL SDK

Bind the texture to a texture unit, e.g. unit 0:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
Add a sampler2D uniform to your shader:
uniform sampler2D myTexture;
Set the uniform to the number of the texture unit, as an integer:
glUseProgram(program);
GLint location = glGetUniformLocation(program, "myTexture");
glUniform1i(location, 0);
In the shader, use texture2D to sample it, e.g.:
gl_FragColor = texture2D(myTexture, texCoords);
The key thing to know is that sampler2D uniforms can be set as integers; setting it to 1 means to use the texture bound to GL_TEXTURE1, and so on. The uniform's value defaults to 0, and the active texture unit defaults to GL_TEXTURE0, so if you use only one texture unit, you don't even need to set the uniform.
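Putting it all together with the GL Image texture from the question (a minimal sketch; program is assumed to be your already-linked shader program, and the shader declares uniform sampler2D myTexture):
// Load the texture with GL Image (as in the question)
std::unique_ptr<glimg::ImageSet> pImgSet(glimg::loaders::dds::LoadFromFile("test.dds"));
GLuint tex = glimg::CreateTexture(pImgSet.get(), 0);
// Bind it to texture unit 0 and point the sampler uniform at that unit
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "myTexture"), 0); // 0 == GL_TEXTURE0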

Related

Apply a texture with OpenGL 3.0 / GLSL 1.3

Currently I create my 3D models using the following code (simplified):
gl3Element->shaderProgram=glCreateProgram();
glAttachShader(gl3Element->shaderProgram,m_gl3VertexShader);
glAttachShader(gl3Element->shaderProgram,m_gl3DynColourFragmentShader);
glLinkProgram(gl3Element->shaderProgram);
glDeleteShader(m_gl3VertexShader);
glDeleteShader(m_gl3DynColourFragmentShader);
glGenVertexArrays(1, &gl3Element->VAO);
glGenBuffers(1, &gl3Element->VBO);
glBindVertexArray(entity->m_gl3Element.VAO);
glBindBuffer(GL_ARRAY_BUFFER,entity->m_gl3Element.VBO);
glBufferData(GL_ARRAY_BUFFER,size,data,GL_STATIC_DRAW);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,3*sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// todo: add texture code here?
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glPolygonMode(GL_FRONT_AND_BACK,GL_FILL);
My current (not working) texture code looks like this:
glGenTextures(1, &imgEntity->m_glTexture);
glBindTexture(GL_TEXTURE_2D, imgEntity->m_glTexture);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,transImage->GetWidth(),transImage->GetHeight(),0,GL_RGB,GL_UNSIGNED_BYTE,transImage->GetData());
glBindTexture(GL_TEXTURE_2D, imgEntity->m_glTexture);
The things that are obviously missing are texture coordinates and the assignment of the texture to the created model. So my questions:
How can I apply some valid texture coordinates to the object using only OpenGL 3.0 and GLSL 1.3?
How do I assign this texture data to the model so that it is drawn on my next call to
glBindVertexArray(element->VAO);
glDrawArrays(element->arrayType,arrayStart,element->arraySize);
for this model?
Thanks!
How can I apply some valid texture coordinates to the object using only OpenGL 3.0 and GLSL 1.3?
Texture coordinates are normally generated by a third-party program such as 3ds Max or Blender. 3D artists use these programs to texture their models, and when the model is exported the texture coordinates are exported with it in the model file. When the model is loaded for rendering, the texture coordinates for each vertex are extracted and passed to the shader via a vertex attribute.
How do I assign these texture data to the model?
Getting textured geometry working in OpenGL can be a bit of a process, so I will break it down into a few steps:
1. Get the model's texture coordinates; these could be generated programmatically or loaded from a model file.
2. Load the texture so that it can be used by OpenGL.
3. Set up the attribute array so that the shader can find the texture coordinates.
4. Modify the vertex shader and fragment shader to support textured geometry.
It looks like you already have a mechanism for step 2 (loading the texture), so you are just missing the last two steps.
To associate texture coordinates with the vertex data, you can specify them as vertex attributes.
As per the OpenGL documentation:
Vertex attributes are used to communicate from "outside" to the vertex shader. Unlike uniform variables, values are provided per vertex (and not globally for all vertices). There are built-in vertex attributes like the normal or the position, or you can specify your own vertex attribute like a tangent or another custom value. Attributes can't be defined in the fragment shader.
Some sample code might look like this:
//An easy way to keep track of which location is assigned to each attribute
enum Attribute_Location
{
AL_Vertices = 0,
AL_DiffuseTexCoords = 1,
AL_AlphaTexCoords = 2,
AL_Normals = 3,
};
GLuint uvBuffer;
glGenBuffers(1, &uvBuffer);
//Bind the buffer
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
//Upload the texture coordinates into the buffer
glBufferData(GL_ARRAY_BUFFER,
             bufferSize,           //size of the data you are uploading
             &diffuseTexCoords[0], //array of texture coords
             GL_STATIC_DRAW);
glEnableVertexAttribArray(AL_DiffuseTexCoords);
//Tells OpenGL how to feed data from the bound buffer to the shader attribute
glVertexAttribPointer(AL_DiffuseTexCoords,
                      2,        //two components (u, v) per texture coordinate
                      GL_FLOAT, //component type
                      GL_FALSE, //not normalized
                      0,        //tightly packed
                      0);       //offset into the buffer
And here is an example of how the vertex shader and fragment shader would look, courtesy of http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/
Textured.vs
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
void main()
{
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
// UV of the vertex. No special space for this one.
UV = vertexUV;
}
Textured.fs
#version 330 core
// Interpolated values from the vertex shaders
in vec2 UV;
// Output data
out vec3 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main()
{
// Output color = color of the texture at the specified UV
color = texture( myTextureSampler, UV ).rgb;
}
Note that the attribute locations of the vertices and texture coordinates specified in the enum Attribute_Location match the layout locations in the vertex shader:
enum Attribute_Location
{
AL_Vertices = 0,
AL_DiffuseTexCoords = 1,
...
}
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
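Since the question targets OpenGL 3.0 / GLSL 1.30, where layout(location = ...) is not available without the ARB_explicit_attrib_location extension, the same pairing can also be made from the C++ side. A minimal sketch, assuming the program handle is called shaderProgram and the shader inputs are named as in the tutorial code above:
// Either fix the attribute indices yourself before linking...
glBindAttribLocation(shaderProgram, AL_Vertices, "vertexPosition_modelspace");
glBindAttribLocation(shaderProgram, AL_DiffuseTexCoords, "vertexUV");
glLinkProgram(shaderProgram);
// ...or query the locations the linker picked and use those in glVertexAttribPointer
GLint posLoc = glGetAttribLocation(shaderProgram, "vertexPosition_modelspace");
GLint uvLoc  = glGetAttribLocation(shaderProgram, "vertexUV");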

Purpose of uniform while using multiple textures

I am trying to understand this code:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture1"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 1);
This is the related shader code:
#version 330 core
...
uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;
void main()
{
color = mix(texture(ourTexture1, TexCoord), texture(ourTexture2, TexCoord), 0.2);
}
So, as far as I understand, after activating GL_TEXTURE0 we bind texture1 to it. My understanding is that this binds texture1 to the first sampler2D. The part I don't understand is: why do we need the glUniform call?
It's an indirection. You choose the texture that is bound to texture unit GL_TEXTURE0, then you tell the sampler uniform in your shader to fetch its texture from that same unit. Picture two rows with lines drawn between them: the first row is the texture unit locations and the second row is the shader uniform (sampler) locations. You may want to bind texture unit 4 to shader sampler 2, for example, as sketched below.
(DatenWolf will be along in a moment to correct me :).
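For example, to pair texture unit 4 with the second sampler instead (a minimal sketch reusing the question's texture2 and ourShader handles):
// Put texture2 into texture unit 4 (GL_TEXTURE0 + 4 == GL_TEXTURE4)
glActiveTexture(GL_TEXTURE0 + 4);
glBindTexture(GL_TEXTURE_2D, texture2);
// With ourShader's program in use, point the sampler uniform at unit 4
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 4);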

Sampling a GL_TEXTURE_3D in the Fragment Shader

I have a GL_TEXTURE_3D of size 16x16x6; it has been populated with floats in a compute shader, and I am trying to sample it in the fragment shader.
To make it available to the fragment shader I have this code just before the draw call:
//Set the active texture and bind it
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, textureID);
//Add the sampler uniform
glUniform1i(glGetUniformLocation(textureID, "TextureSampler"), 0);
Then in the fragment shader itself:
uniform sampler3D TextureSampler;
Then to test that the texture has come through correctly:
vec4 test = texture(TextureSampler, ivec3(0,0,0));
color = vec4(1.0,0.0,0.0,1.0) * test.x; //or test.y, test.z, test.w
Based on this, it looks like every texel's value is (0.0, 0.0, 0.0, 1.0).
I can't work out why this is. I would expect the value at coords (0,0,0) to be either (16.0, 0.0, 0.0, 0.0) or (16.0, 16.0, 16.0, 16.0), based on what I set them to in the compute shader.
P.S. It might be worth noting that the values are being written to the texture correctly; I check in between the compute and fragment shader calls using glGetTexImage().

How can I pass multiple textures to a single shader?

I am using freeglut, GLEW and DevIL to render a textured teapot using a vertex and fragment shader. This is all working fine in OpenGL 2.0 and GLSL 1.2 on Ubuntu 14.04.
Now, I want to apply a bump map to the teapot. My lecturer evidently doesn't brew his own tea, and so doesn't know they're supposed to be smooth. Anyway, I found a nice-looking tutorial on old-school bump mapping that includes a fragment shader that begins:
uniform sampler2D DecalTex; //The texture
uniform sampler2D BumpTex; //The bump-map
What they don't mention is how to pass two textures to the shader in the first place.
Previously I had:
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
//Fragment shader
gl_FragColor = color * texture2D(DecalTex,gl_TexCoord[0].xy);
so now I have:
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
glBindTexture(GL_TEXTURE_2D, bumpHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
gl_TexCoord[1] = gl_TextureMatrix[1] * gl_MultiTexCoord1;
//Fragment shader
gl_FragColor = color * texture2D(BumpTex,gl_TexCoord[0].xy);
//no bump logic yet, just testing I can use texture 1 instead of texture 0
but this doesn't work. The texture disappears completely (effectively the teapot is white). I've tried GL_TEXTURE_2D_ARRAY, glActiveTexture and a few other likely-seeming but fruitless options.
After sifting through the usual mixed bag of references to OpenGL and GLSL new and old, I've come to the conclusion that I probably need glGetUniformLocation. How exactly do I use this in the OpenGL cpp file to pass the already-populated texture handles to the fragment shader?
How to pass an array of textures with different sizes to GLSL?
Passing Multiple Textures from OpenGL to GLSL shader
Multiple textures in GLSL - only one works
(This is homework so please answer with minimal code fragments (if at all). Thanks!)
Failing that, does anyone have a tea cosy mesh?
It is very simple, really. All you need to do is bind each sampler to a texture unit with glUniform1i. So for your code sample, assuming the two uniform samplers:
uniform sampler2D DecalTex; // The texture (we'll bind to texture unit 0)
uniform sampler2D BumpTex; // The bump-map (we'll bind to texture unit 1)
In your initialization code:
// Get the uniform variables location. You've probably already done that before...
decalTexLocation = glGetUniformLocation(shader_program, "DecalTex");
bumpTexLocation = glGetUniformLocation(shader_program, "BumpTex");
// Then bind the uniform samplers to texture units:
glUseProgram(shader_program);
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
OK, shader uniforms set, now we render. To do so, you will need the usual glBindTexture plus glActiveTexture:
glActiveTexture(GL_TEXTURE0 + 0); // Texture unit 0
glBindTexture(GL_TEXTURE_2D, decalTexHandle);
glActiveTexture(GL_TEXTURE0 + 1); // Texture unit 1
glBindTexture(GL_TEXTURE_2D, bumpHandle);
// Done! Now you render normally.
And in the shader, you will use the texture samplers just like you already do:
vec4 a = texture2D(DecalTex, tc);
vec4 b = texture2D(BumpTex, tc);
Note: For techniques like bump-mapping, you only need one set of texture coordinates, since both textures use the same mapping and only contain different data. So you should probably pass the texture coordinates as a single vertex attribute, as sketched below.
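On the C++ side that could look something like this (a minimal sketch under the question's GL 2.0 / GLSL 1.2 setup; texCoords and the attribute name "uv" are hypothetical, shader_program is the handle used above):
// Vertex shader would declare:  attribute vec2 uv;  varying vec2 tc;
// and copy it in main():        tc = uv;
GLint uvLoc = glGetAttribLocation(shader_program, "uv");
if (uvLoc != -1) {
    glEnableVertexAttribArray(uvLoc);
    // texCoords: one (u, v) pair per vertex, client-side array (no VBO bound)
    glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE, 0, texCoords);
}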
Instead of using:
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
in your code,
you can have:
layout(binding=0) uniform sampler2D DecalTex;
// The texture (we'll bind to texture unit 0)
layout(binding=1) uniform sampler2D BumpTex;
// The bump-map (we'll bind to texture unit 1)
in your shader. That also means you don't have to query for the uniform locations. (Note that the binding layout qualifier on sampler uniforms requires GLSL 4.20 or the ARB_shading_language_420pack extension.)
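With the bindings fixed in the shader like this, the C++ side reduces to binding the textures to the matching units (a sketch reusing the handles from the answer above; no glUniform1i calls are needed for the samplers):
glActiveTexture(GL_TEXTURE0); // matches layout(binding=0) DecalTex
glBindTexture(GL_TEXTURE_2D, decalTexHandle);
glActiveTexture(GL_TEXTURE1); // matches layout(binding=1) BumpTex
glBindTexture(GL_TEXTURE_2D, bumpHandle);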

GLSL and FBOs - glActiveTexture doesn't work?

I'm trying to write a simple shader which would add together textures attached to FBOs. There is no problem with the FBO initialization and such (I've tested it). The problem, I believe, is with glActiveTexture(GL_TEXTURE0): it doesn't seem to be doing anything. Here is my fragment shader (the shader itself is definitely being called; I've tested that by setting gl_FragColor = vec4(0,1,0,1);):
uniform sampler2D Texture0;
uniform sampler2D Texture1;
varying vec2 vTexCoord;
void main()
{
vec4 texel0 = texture2D(Texture0, gl_TexCoord[0].st);
vec4 vec = texel0;
gl_FragColor = texel0;
}
And in the C++ code I have:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, iFrameBufferAccumulation);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
// Render something - this works fine; it draws into the iTextureImgAccumulation texture attached to GL_COLOR_ATTACHMENT0_EXT
glClear (GL_COLOR_BUFFER_BIT );
glEnable(GL_TEXTURE_RECTANGLE_NV);
glActiveTexture(GL_TEXTURE0);
glBindTexture( GL_TEXTURE_RECTANGLE_NV, iTextureImgAccumulation ); // Bind our frame buffer texture
xShader.setUniform1i("Texture0", 0);
glLoadIdentity(); // Load the Identity Matrix to reset our drawing locations
glTranslatef(0.0f, 0.0f, -2.0f);
xShader.bind();
glBegin(GL_QUADS);
glTexCoord2f(0,OPT.m_nHeight);
glVertex3f(-1,-1,0);
glTexCoord2f(OPT.m_nWidth,OPT.m_nHeight);
glVertex3f(1,-1,0);
glTexCoord2f(OPT.m_nWidth,0);
glVertex3f(1,1,0);
glTexCoord2f(0,0);
glVertex3f(-1,1,0);
glEnd();
glBindTexture( GL_TEXTURE_RECTANGLE_NV, NULL );
xShader.unbind();
Result: a black screen when displaying the second texture with the shader (without the shader it's fine). I'm aware that this shader shouldn't do much, but it doesn't even display the first texture.
I'm in the middle of testing things, but the idea is that after rendering to the first texture, I would add the first texture to the second one. To do this I imagine that this fragment shader would work:
uniform sampler2D Texture0;
uniform sampler2D Texture1;
varying vec2 vTexCoord;
void main()
{
vec4 texel0 = texture2D(Texture0, gl_TexCoord[0].st);
vec4 texel1 = texture2D(Texture1, gl_TexCoord[0].st);
vec4 vec = texel0 + texel1;
vec.w = 1.0;
gl_FragColor = vec;
}
The whole idea is that in a loop, tex2 = tex2 + tex1 (would it be possible to use tex2 in this shader while rendering to GL_COLOR_ATTACHMENT1_EXT, which is attached to tex2?).
I've tested calling xShader.bind(); both before initializing the uniform variables and after. In both cases: a black screen.
Anyway, for the moment I'm pretty sure there is some problem with the initialization of the texture samplers (maybe because the textures are attached to an FBO)? I've checked the rest and it works fine.
Also another stupid problem: how can I render a texture over the whole screen?
I've tried something like the following, but it doesn't work (I have to translate this quad a bit):
glViewport(0,0 , OPT.m_nWidth, OPT.m_nHeight);
glBindTexture( GL_TEXTURE_RECTANGLE_NV, iTextureImg/*iTextureImgAccumulation*/ ); // Bind our frame buffer texture
glBegin(GL_QUADS);
glTexCoord2f(0,OPT.m_nHeight);
glVertex3f(-1,-1,0);
glTexCoord2f(OPT.m_nWidth,OPT.m_nHeight);
glVertex3f(1,-1,0);
glTexCoord2f(OPT.m_nWidth,0);
glVertex3f(1,1,0);
glTexCoord2f(0,0);
glVertex3f(-1,1,0);
glEnd();
It doesn't work with glVertex2f either.
Edit: I've checked, and I can initialise other uniform variables; only the textures are problematic.
I've changed the order but it still doesn't work. :( By the way, the other uniform values are working fine, and I've displayed the texture I want to pass to the shader too; it looks fine. But for some unknown reason the texture sampler isn't initialized in the fragment shader. Maybe it has something to do with the fact that this texture was created with glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGB16F /*GL_FLOAT_R32_NV*/, OPT.m_nWidth, OPT.m_nHeight, 0, GL_RED, GL_FLOAT, NULL); (it's not GL_TEXTURE_2D)?
It's not clear what your xShader.bind() does; I guess it calls glUseProgram(...). But uniform variables (the sampler index in your case) must be set after glUseProgram(...) is called, in this order:
glUseProgram(your_shaders); //probably your xShader.bind() does it.
GLuint sampler_idx = 0;
GLint location = glGetUniformLocation(your_shaders, "Texture0");
if(location != -1) glUniform1i(location, sampler_idx);
else error("can't get uniform location");
glActiveTexture(GL_TEXTURE0 + sampler_idx);
glBindTexture(GL_TEXTURE_2D, iTextureImg);
And yes, you can render to an FBO texture and then use it in a shader in another pass:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, your_fbo_id);
// render to FBO there
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
then use your FBO texture the same way as you use regular textures.
glActiveTexture(GL_TEXTURE0);
glBindTexture( GL_TEXTURE_RECTANGLE_NV, iTextureImgAccumulation ); // Bind our frame buffer texture
xShader.setUniform1i("Texture0", 0);
This is a rectangle texture.
uniform sampler2D Texture0;
This is a 2D sampler. They are not the same thing: the sampler type must match the texture type. For a rectangle texture you need to use sampler2DRect (sampled with texture2DRect in older GLSL), assuming your version of GLSL supports rectangle samplers.