I've been banging my head against this for hours now; I'm sure it's something simple, but I just can't get a result. I've had to edit this code down a bit because I've built a little library to encapsulate the OpenGL calls, but the following is an accurate description of the state of affairs.
I'm using the following vertex shader:
#version 330
in vec4 position;
in vec2 uv;
out vec2 varying_uv;
void main(void)
{
gl_Position = position;
varying_uv = uv;
}
And the following fragment shader:
#version 330
in vec2 varying_uv;
uniform sampler2D base_texture;
out vec4 fragment_colour;
void main(void)
{
fragment_colour = texture2D(base_texture, varying_uv);
}
Both shaders compile and the program links without issue.
In my init section, I load a single texture like so:
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
// Load an image.
QImage image("G:/test_image.png");
image = image.convertToFormat(QImage::Format_RGB888);
if(!image.isNull())
{
// Load up a single texture.
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
glBindTexture(GL_TEXTURE_2D, 0);
}
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
You'll observe that I'm using Qt to load the texture. The calls to ::throw_on_error() check for errors in OpenGL (by calling Error()), and throw an exception if one occurs. No OpenGL errors occur in this code, and the image loaded using Qt is valid.
Drawing is performed as follows:
// Clear previous.
glClear(GL_COLOR_BUFFER_BIT |
GL_DEPTH_BUFFER_BIT |
GL_STENCIL_BUFFER_BIT);
// Use our program.
glUseProgram(GLProgram);
// Bind the vertex array.
glBindVertexArray(GLVertexArray);
/* ------------------ Setting active texture here ------------------- */
// Tell the shader which textures are which.
kt::kits::open_gl::gl_int tAddr = glGetUniformLocation(GLProgram, "base_texture");
glUniform1i(tAddr, 0);
// Activate the texture Texture(0) as texture 0.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, Texture);
/* ------------------------------------------------------------------ */
// Draw vertex array as triangles.
glDrawArrays(GL_TRIANGLES, 0, 4);
glBindVertexArray(0);
glUseProgram(0);
// Detect errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
Similarly, no OpenGL errors occur, and a triangle is drawn to screen. However, it looks like this:
It occurred to me the problem may be related to my texture coordinates. So, I rendered the following image using s as the 'red' component, and t as the 'green' component:
The texture coordinates appear correct, yet I'm still receiving the black triangle of doom. What am I doing wrong?
I think it could be due to an incomplete initialization of your texture object.
Try setting the texture MIN and MAG filters:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Moreover, I would suggest checking the size of the texture. If it is not a power of 2, then you have to set the wrapping mode to CLAMP_TO_EDGE:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
Black textures are often due to this issue; it's a very common problem.
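For reference, a minimal sketch of how those calls could slot into the question's init code, between glBindTexture and glTexImage2D (the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so without mipmap levels the texture is incomplete and samples as black):
glBindTexture(GL_TEXTURE_2D, Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
glBindTexture(GL_TEXTURE_2D, 0);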
Ciao
In your fragment shader you're writing to a self-defined target:
fragment_colour = texture2D(base_texture, varying_uv);
If that's not meant to be gl_FragColor or gl_FragData[…], did you properly set the designated fragment data location?
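If you do assign it explicitly, a minimal sketch (assuming the question's program handle GLProgram) would bind the output name before linking, or declare the location in the shader:
// Option 1: bind the fragment output name, then (re)link the program.
glBindFragDataLocation(GLProgram, 0, "fragment_colour");
glLinkProgram(GLProgram);
// Option 2: declare an explicit location in the fragment shader instead:
// layout(location = 0) out vec4 fragment_colour;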
OpenGL 3.3: My textures suddenly became black after working for many days
Pretty much all the posts that had a similar issue were about incorrect or absent use of glTexParameteri, or incorrect texture loading, but I seem to be doing everything correctly in that regard: the vector containing the data is 1024 bytes (16 pixels x 16 pixels x 4 bytes), so that's good. After the issue arose I made a test texture just to make sure everything about that was right.
I also saw that many posts' issues were incomplete textures, but here I'm using glTexImage2D and passing the data, so the texture has to be complete. I'm also not creating mipmaps; I disabled them for testing, although they were on and working before this bug.
Also, I'm calling glGetError quite frequently and there are no errors.
Here is the texture creation code:
unsigned int testTexture;
unsigned long w, h;
std::vector<byte> data;
std::vector<byte> img;
loadFile(data, "./assets/textures/blocks/brick.png");
decodePNG(img, w, h, &data[0], data.size());
glGenTextures(1, &testTexture);
glBindTexture(GL_TEXTURE_2D, testTexture);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,w,h,0,GL_RGBA, GL_UNSIGNED_BYTE,&img[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
data.clear();
img.clear();
And here is where I set up my uniforms:
glUseProgram(worldShaderProgram);
glUniform1f(glGetUniformLocation(worldShaderProgram, "time"), gameTime);
glUniformMatrix4fv(glGetUniformLocation(worldShaderProgram, "MVP"), 1, GL_FALSE, &TheMatrix[0][0]);
glUniform1i(glGetUniformLocation(worldShaderProgram, "texAtlas"), testTexture);
glUniform1f(glGetUniformLocation(worldShaderProgram, "texMult"), 16.0f / 256.0f);
glUniform4f(glGetUniformLocation(worldShaderProgram, "fogColor"), fogColor.r, fogColor.g, fogColor.b, fogColor.a);
Also, here is the fragment shader:
#version 330
in vec4 tex_color;
in vec2 tex_coord;
layout(location = 0) out vec4 color;
uniform sampler2D texAtlas;
uniform mat4 MVP;
uniform vec4 fogColor;
const float fogStart = 0.999f;
const float fogEnd = 0.9991f;
const float fogMult = 1.0f / (fogEnd - fogStart);
void main() {
if (gl_FragCoord.z >= fogEnd)
discard;
//color = vec4(tex_coord.x,tex_coord.y,0.0f,1.0f) * tex_color; // This Line Does What Its Supposed To
color = texture(texAtlas,tex_coord) * tex_color; // This One Does Not
if (gl_FragCoord.z >= fogStart)
color = mix(color,fogColor,(gl_FragCoord.z - fogStart) * fogMult);
}
If I use this line: color = vec4(tex_coord.x,tex_coord.y,0.0f,1.0f) * tex_color;
instead of this line: color = texture(texAtlas,tex_coord) * tex_color;
to show the coordinates from which it would be getting its color from the texture, the result is what you would expect (currently only testing it with the top faces):
Image link (I can't embed images, but please click)
That proves that the vertex shader is working correctly.
(The sampler2D is obtained from a uniform in the fragment shader.)
Main loop rendering code:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, textures.textureId);
glUseProgram(worldShaderProgram);
wm.render();
// wm.render() calls lots of meshes to render themselves
// just wanted to point out each one of them has their own
// vertex array buffer, vertex buffer, and index buffer
// to render I bind the vertex array buffer with glBindVertexArray(vertexArrayBuffer);
// then I call glDrawElements();
Also, here is the OpenGL initialization code:
if (!glfwInit()) // Initialize the library
return -1;
window = glfwCreateWindow(wndSize.width, wndSize.height, "Minecraft", NULL, NULL);
if (!window)
{
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window); // Make the window's context current
glfwSetWindowSizeCallback(window,resiseEvent);
glfwSwapInterval(1);
if (glewInit() != GLEW_OK)
return -1;
glClearColor(fogColor.r, fogColor.g, fogColor.b, fogColor.a);
glClearDepth(1.0f);
glEnable(GL_DEPTH_TEST); // Enable depth testing for z-culling
glEnable(GL_CULL_FACE); // Orientation Culling
glDepthFunc(GL_LEQUAL); // Set the type of depth-test (<=)
glShadeModel(GL_SMOOTH); // Enable smooth shading
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Nice perspective corrections
glLineWidth(2.0f);
You wrongly set the texture object (its name) on the texture sampler uniform. This is wrong:
glUniform1i(glGetUniformLocation(worldShaderProgram, "texAtlas"), testTexture);
The binding point between the texture object and the texture sampler uniform is the texture unit. When glBindTexture is invoked, the texture object is bound to the specified target of the current texture unit. The texture unit can be selected with glActiveTexture. The default texture unit is GL_TEXTURE0.
Since your texture is bound to texture unit 0 (GL_TEXTURE0), you have to set the value 0 on the texture sampler uniform:
glUniform1i(glGetUniformLocation(worldShaderProgram, "texAtlas"), 0);
Note that your code worked before by chance. You either had just one texture object, or testTexture happened to be the first texture name created, hence the value of testTexture was 0. Now the value of testTexture is no longer 0, causing your code to fail.
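Put together, a minimal sketch of the corrected setup using the question's names (the program must be in use when glUniform1i is called, as it already is in your uniform-setup code):
glActiveTexture(GL_TEXTURE0);                  // select texture unit 0
glBindTexture(GL_TEXTURE_2D, testTexture);     // bind the texture object to unit 0
glUseProgram(worldShaderProgram);
glUniform1i(glGetUniformLocation(worldShaderProgram, "texAtlas"), 0);  // unit index, not texture name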
I'm trying to load multiple textures in openGL.
To validate this I want to load 2 textures and mix them with the following fragment shader:
#version 330 core
out vec4 color;
in vec2 v_TexCoord;
uniform sampler2D u_Texture0;
uniform sampler2D u_Texture1;
void main()
{
color = mix(texture(u_Texture0, v_TexCoord), texture(u_Texture1, v_TexCoord), 0.5);
}
I've abstracted some of OpenGL's functionality into classes like Shader, Texture, UniformXX, etc.
Here's an attempt to load the 2 textures into the sampler units of the fragment shader:
Shader shader;
shader.Attach(GL_VERTEX_SHADER, "res/shaders/vs1.shader");
shader.Attach(GL_FRAGMENT_SHADER, "res/shaders/fs1.shader");
shader.Link();
shader.Bind();
Texture texture0("res/textures/container.jpg", GL_RGB, GL_RGB);
texture0.Bind(0);
Uniform1i textureUnit0Uniform("u_Texture0");
textureUnit0Uniform.SetValues({ 0 });
shader.SetUniform(textureUnit0Uniform);
Texture texture1("res/textures/awesomeface.png", GL_RGBA, GL_RGBA);
texture1.Bind(1);
Uniform1i textureUnit1Uniform("u_Texture1");
textureUnit1Uniform.SetValues({ 1 });
shader.SetUniform(textureUnit1Uniform);
Here's what the Texture implementation looks like:
#include "Texture.h"
#include "Renderer.h"
#include "stb_image/stb_image.h"
Texture::Texture(const std::string& path, unsigned int destinationFormat, unsigned int sourceFormat)
: m_Path(path)
{
stbi_set_flip_vertically_on_load(1);
m_Buffer = stbi_load(path.c_str(), &m_Width, &m_Height, &m_BPP, 0);
GLCALL(glGenTextures(1, &m_RendererID));
GLCALL(glBindTexture(GL_TEXTURE_2D, m_RendererID));
GLCALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST));
GLCALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT));
GLCALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT));
GLCALL(glTexImage2D(GL_TEXTURE_2D, 0, destinationFormat, m_Width, m_Height, 0, sourceFormat, GL_UNSIGNED_BYTE, m_Buffer));
glGenerateMipmap(GL_TEXTURE_2D);
GLCALL(glBindTexture(GL_TEXTURE_2D, 0));
if (m_Buffer)
stbi_image_free(m_Buffer);
}
Texture::~Texture()
{
GLCALL(glDeleteTextures(1, &m_RendererID));
}
void Texture::Bind(unsigned int unit) const
{
GLCALL(glActiveTexture(GL_TEXTURE0 + unit));
GLCALL(glBindTexture(GL_TEXTURE_2D, m_RendererID));
}
void Texture::Unbind() const
{
GLCALL(glBindTexture(GL_TEXTURE_2D, 0));
}
Now, instead of actually getting an even mix of color from both textures, I only get the second texture appearing and blending with the background:
I've pinpointed the problem to the constructor of the Texture implementation: if I comment out the initialization of the second texture so that its constructor is never called, then I can get the first texture to show up.
Can anyone suggest what I'm doing wrong?
Took me a while to spot, but at the point where you call the constructor of the second texture, your active texture unit is still 0, so the constructor happily repoints your texture unit and you are left with two texture units bound to the same texture.
The solution should be simple enough: do not interleave texture creation and texture unit assignment, by creating the textures first and only then binding them explicitly.
Better yet, look into using direct state access to avoid all this binding.
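As an illustration of the DSA route, here is a hedged sketch (requires OpenGL 4.5 or ARB_direct_state_access; width, height and pixels stand in for the stb_image results and are not names from the question):
// Create and initialize the texture without touching any binding point.
GLuint tex;
glCreateTextures(GL_TEXTURE_2D, 1, &tex);
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTextureParameteri(tex, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTextureStorage2D(tex, 1, GL_RGBA8, width, height);            // immutable storage, 1 mip level
glTextureSubImage2D(tex, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Attach it to a texture unit only when needed; no glActiveTexture involved.
glBindTextureUnit(1, tex);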
To highlight the problem for future viewers of this question, this is the problematic sequence of calls:
// constructor of texture 1
glGenTextures(1, &container)
glBindTexture(GL_TEXTURE_2D, container) // Texture Unit 0 is now bound to container
// explicit texture0.Bind call
glActiveTexture(GL_TEXTURE0) // noop
glBindTexture(GL_TEXTURE_2D, container) // Texture Unit 0 is now bound to container
// constructor of texture 2
glGenTextures(1, &awesomeface)
glBindTexture(GL_TEXTURE_2D, awesomeface) // Texture Unit 0 is now bound to awesomeface instead of container.
// explicit texture1.Bind call
glActiveTexture(GL_TEXTURE1)
glBindTexture(GL_TEXTURE_2D, awesomeface) // Texture Unit 0 and 1 are now bound to awesomeface.
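For contrast, a sketch of the ordering suggested above, using the question's own Texture class: construct both textures first, then do the unit assignments.
// Each constructor binds to whatever unit happens to be active, which no longer
// matters because the explicit unit assignments come afterwards.
Texture texture0("res/textures/container.jpg", GL_RGB, GL_RGB);
Texture texture1("res/textures/awesomeface.png", GL_RGBA, GL_RGBA);
texture0.Bind(0);   // unit 0 -> container
texture1.Bind(1);   // unit 1 -> awesomeface
// then set u_Texture0 = 0 and u_Texture1 = 1 as before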
I'm currently working with a compute shader in OpenGl and my goal is to render from one texture onto another texture with some modifications. However, it does not seem like my compute shader has any effect on the textures at all.
After creating a compute shader I do the following
//Use the compute shader program
(*shaderPtr).useProgram();
//Get the uniform location for a uniform called "sourceTex"
//Then connect it to texture-unit 0
GLuint location = glGetUniformLocation((*shaderPtr).program, "sourceTex");
glUniform1i(location, 0);
//Bind buffers and call compute shader
this->bindAndCompute(bufferA, bufferB);
The bindAndCompute() function looks like this and its purpose is to ready the two buffers to be accessed by the compute shader and then run the compute shader.
bindAndCompute(GLuint sourceBuffer, GLuint targetBuffer){
glBindImageTexture(
0, //Always bind to slot 0
sourceBuffer,
0,
GL_FALSE,
0,
GL_READ_ONLY, //Only read from this texture
GL_RGB16F
);
glBindImageTexture(
1, //Always bind to slot 1
targetBuffer,
0,
GL_FALSE,
0,
GL_WRITE_ONLY, //Only write to this texture
GL_RGB16F
);
//this->height is currently 960
glDispatchCompute(1, this->height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
And finally, here is the compute shader. I currently only try to set it so that it makes the second texture completely white.
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba16, binding=0) uniform image2D sourceTex; //Textures bound to 0 and 1 resp. that are used to
layout (rgba16, binding=1) uniform image2D targetTex; //acquire texture and save changes made to texture
layout (local_size_x=960 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main(){
vec4 result; //Vec4 to store the value to be written
ivec2 pxlPos = ivec2(gl_GlobalInvocationID.xy); //Get pixel position for this invocation
/*
result = imageLoad(sourceTex, pxlPos);
...
*/
imageStore(targetTex, pxlPos, vec4(1.0f)); //Write white to texture
}
Now, when I start, bufferB is empty. When I run this I expect bufferB to become completely white. However, after this code runs, bufferB remains empty. My conclusion is that either:
A: The compute shader does not write to the texture
B: glDispatchCompute() is not run at all
However, I get no errors and the shader does compile as it should. I have checked that I bind the texture correctly when rendering by binding bufferA, whose contents I already know, and then running bindAndCompute(bufferA, bufferA) to turn bufferA white. However, bufferA is unaltered. So I've not been able to figure out why my compute shader has no effect. If anyone has any ideas on what I can try, it would be appreciated.
End note: This has been my first question asked on this site. I've tried to present only relevant information but I still feel like maybe it became too much text anyway. If there is feedback on how to improve the structure of the question that is welcome as well.
---------------------------------------------------------------------
EDIT:
The textures I send in as sourceBuffer and targetBuffer are defined as follows:
glGenTextures(1, *buffer);
glBindTexture(GL_TEXTURE_2D, *buffer);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_RGBA16F, //Internal format
this->width,
this->height,
0,
GL_RGBA, //Format read
GL_FLOAT, //Type of values in read format
NULL //source
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The image format of the images you bind doesn't match the image format in the shader. You bind an RGB16F image (48 bits per texel), but state in the shader that it is of rgba16 format (64 bits per texel).
Formats have to match according to the rules given here. Assuming that you allocated the texture in OpenGL, this means that the total size of each texel has to match. Also note that 3-channel textures are (apart from some rather strange exceptions) not supported by image load/store.
As a side note: the shader will execute and write if the texture format sizes match, but what you write might be garbage, because your textures are in a 16-bit floating-point format (GL_RGBA16F) while you tell the shader that they are in 16-bit unsigned normalized format (rgba16). Although this doesn't directly matter for the compute shader itself, it does matter if you read back the texture, access it through a sampler, or write data > 1.0f or < 0.0f into it. If you want 16-bit floats, use rgba16f in the compute shader.
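In other words, a minimal sketch of a consistent pairing for the GL_RGBA16F textures allocated in the edit above:
glBindImageTexture(0, sourceBuffer, 0, GL_FALSE, 0, GL_READ_ONLY,  GL_RGBA16F);
glBindImageTexture(1, targetBuffer, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);
and in the compute shader:
layout (rgba16f, binding=0) uniform image2D sourceTex;
layout (rgba16f, binding=1) uniform image2D targetTex;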
I've been primarily developing on Linux (Mint) and Windows using SDL2 and OpenGL 3.3, with few issues regarding drawing objects; CPU usage never really spikes past ~40%.
That was, until I tried porting what I had to OSX (Sierra).
Utilizing the exact same shaders and code that run on Linux and Windows just fine spikes the CPU usage on OSX to ~99% consistently.
At first, I thought it was a batching issue, so I batched my draw calls together to minimize the number of calls to glDrawElements, and that didn't work.
Then, I thought it was an issue involving not using attributes in the vertex/fragment shader (like: OpenGL core profile incredible slowdown on OS X)
Also, I maintain the framerate at 60 fps.
After sorting that out, no luck. Tried logging everything I could, nothing from glGetError() nor from shader logs.
So I removed bits and pieces from my vertex/fragment shaders to see what in particular was slowing down my draw calls. I managed to reduce it down to this: any call to the texture() function in either my vertex or fragment shader drives the CPU to high usage.
Texture loading code:
// Texture loading
void PCShaderSurface::AddTexturePairing(HashString const &aName)
{
GLint minFilter = GL_LINEAR;
GLint magFilter = GL_LINEAR;
GLint wrapS = GL_REPEAT;
GLint wrapT = GL_REPEAT;
if(Constants::GetString("OpenGLMinFilter") == "GL_NEAREST")
{
minFilter = GL_NEAREST;
}
if(Constants::GetString("OpenGLMagFilter") == "GL_NEAREST")
{
magFilter = GL_NEAREST;
}
if(Constants::GetString("OpenGLWrapModeS") == "GL_CLAMP_TO_EDGE")
{
wrapS = GL_CLAMP_TO_EDGE;
}
if(Constants::GetString("OpenGLWrapModeT") == "GL_CLAMP_TO_EDGE")
{
wrapT = GL_CLAMP_TO_EDGE;
}
glGenTextures(1, &mTextureID);
glBindTexture(GL_TEXTURE_2D, mTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapS);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mSurface->w, mSurface->h, 0, mTextureFormat, GL_UNSIGNED_BYTE, mSurface->pixels);
GetManager()->AddTexturePairing(aName, TextureData(mTextureID, mSurface->w, mSurface->h));
}
Draw Code:
// I batch objects that use the same program and texture id to draw in the same call.
glUseProgram(program);
int activeTexture = texture % mMaxTextures;
int vertexPosLocation = glGetAttribLocation(program, "vertexPos");
int texCoordPosLocation = glGetAttribLocation(program, "texCoord");
int objectPosLocation = glGetAttribLocation(program, "objectPos");
int colorPosLocation = glGetAttribLocation(program, "primaryColor");
// Calculate matrices and push vertex, color, position, texCoord data
// ...
// Enable textures and set uniforms.
glBindVertexArray(mVertexArrayObjectID);
glActiveTexture(GL_TEXTURE0 + activeTexture);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program, "textureUnit"), activeTexture);
glUniform3f(glGetUniformLocation(program, "cameraDiff"), cameraTranslation.x, cameraTranslation.y, cameraTranslation.z);
glUniform3f(glGetUniformLocation(program, "cameraSize"), cameraSize.x, cameraSize.y, cameraSize.z);
glUniformMatrix3fv(glGetUniformLocation(program, "cameraTransform"), 1, GL_TRUE, cameraMatrix);
// Set shader properties. Due to batching, done on a per surface / shader basis.
// Shader uniforms are reset upon relinking.
SetShaderProperties(surface, true);
// Set VBO and buffer data.
glBindVertexArray(mVertexArrayObjectID);
BindAttributeV3(GL_ARRAY_BUFFER, mVertexBufferID, vertexPosLocation, vertexData);
BindAttributeV3(GL_ARRAY_BUFFER, mTextureBufferID, texCoordPosLocation, textureData);
BindAttributeV3(GL_ARRAY_BUFFER, mPositionBufferID, objectPosLocation, positionData);
BindAttributeV4(GL_ARRAY_BUFFER, mColorBufferID, colorPosLocation, colorData);
// Set index data
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * indices.size(), &indices[0], GL_DYNAMIC_DRAW);
// Draw and disable
glDrawElements(GL_TRIANGLES, static_cast<unsigned>(vertexData.size()), GL_UNSIGNED_INT, 0);
DisableVertexAttribArray(vertexPosLocation);
DisableVertexAttribArray(texCoordPosLocation);
DisableVertexAttribArray(objectPosLocation);
DisableVertexAttribArray(colorPosLocation);
// Reset shader property values.
SetShaderProperties(surface, false);
// Reset to default texture
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glUseProgram(0);
Example binding code:
void PCShaderScreen::BindAttributeV3(GLenum aTarget, int const aBufferID, int const aAttributeLocation, std::vector<Vector3> &aData)
{
if(aAttributeLocation != -1)
{
glEnableVertexAttribArray(aAttributeLocation);
glBindBuffer(aTarget, aBufferID);
glBufferData(aTarget, sizeof(Vector3) * aData.size(), &aData[0], GL_DYNAMIC_DRAW);
glVertexAttribPointer(aAttributeLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Vector3), 0);
glBindBuffer(aTarget, 0);
}
}
VS code:
#version 330
in vec4 vertexPos;
in vec4 texCoord;
in vec4 objectPos;
in vec4 primaryColor;
uniform vec3 cameraDiff;
uniform vec3 cameraSize;
uniform mat3 cameraTransform;
out vec2 texValues;
out vec4 texColor;
void main()
{
texColor = primaryColor;
texValues = texCoord.xy;
vec3 vertex = vertexPos.xyz + objectPos.xyz;
vertex = (cameraTransform * vertex) - cameraDiff;
vertex.x /= cameraSize.x;
vertex.y /= -cameraSize.y;
vertex.y += 1.0;
vertex.x -= 1.0;
gl_Position.xyz = vertex;
gl_Position.w = 1.0;
}
FS code:
#version 330
uniform sampler2D textureUnit;
in vec2 texValues;
in vec4 texColor;
out vec4 fragColor;
void main()
{
// Slow, 99% CPU usage on OSX only
fragColor = texture(textureUnit, texValues) * texColor;
// Fine on everything
fragColor = vec4(1,1,1,1);
}
I'm really out of ideas here; I even followed Apple's best practices (https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_texturedata/opengl_texturedata.html) as best as I could, with no luck.
Are the Windows and Linux drivers I'm using just offering me some form of forgiveness that I'm not aware of? Is the OSX driver really that sensitive? I must be missing something. Any help and insight would be appreciated. Thanks for reading my long-winded speech.
All credit to #keltar for finding this, but my problem was in the glActiveTexture call.
I changed the call from glActiveTexture(GL_TEXTURE0 + activeTexture) to just glActiveTexture(GL_TEXTURE0).
To paraphrase #keltar: "Constantly changing the texture slot number might force driver to recompile shader each time. I don't think it matters which exact value it would be, as long as it doesn't change (and within GL limits). I suppose hardware that you use can't effectively (or at all) sample texture from any slot specified by uniform variable - but GL implies so. On some hardware e.g. fetching vertex attributes is internally part of shader too. When state changes, driver attempts to patch shader, but if change is too big (or driver don't know how to patch) - it falls to recompilation. Sadly OSX graphics drivers aren't known to be good, to my knowledge."
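A minimal sketch of the changed lines from the draw code above (the last line is an aside, not part of the original one-line fix: the textureUnit sampler uniform presumably has to refer to unit 0 as well, since that is where the texture is now bound):
glActiveTexture(GL_TEXTURE0);           // fixed unit instead of GL_TEXTURE0 + activeTexture
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program, "textureUnit"), 0);   // keep the sampler pointing at unit 0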
You do a lot of gl-calls in your draw code: binding buffers, uploading data to buffers, etc. Most of them would be better done when preparing or uploading data.
I prefer to do just the following in the draw code (see the sketch after this list):
glUseProgram(program);
Enable the VAO with glBindVertexArray
Pass uniforms
Activate texture units with glActiveTexture
glDrawXXX commands
glUseProgram(0);
Disable the VAO
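A minimal sketch of that structure, with buffer contents assumed to have been uploaded ahead of time (identifiers such as vao, texture and indexCount are illustrative, mixed with a few names from the question):
glUseProgram(program);
glBindVertexArray(vao);                               // the VAO already records the attribute setup
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program, "textureUnit"), 0);
glUniformMatrix3fv(glGetUniformLocation(program, "cameraTransform"), 1, GL_TRUE, cameraMatrix);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
glUseProgram(0);
One could also cache the uniform locations once after linking rather than querying them on every draw.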
I need to display an indexed graphics file that additionally has a per-pixel alpha channel. Also, I need to make sure that I can change the palette at any time and have the resulting image change accordingly. For this, I first used software pixel precomputation, but that was just too slow for realtime rendering, so I decided to write a shader that handles indexed textures on the GPU side. The problem is that the second texture (rec_colors) doesn't load; at least it seems that way, since every texel read from that sampler appears completely empty.
Data from the zeroth texture reads correctly, resulting in a black image with the right alpha :)
Shader-initializing-related code:
Application::Display->GetRC();
glewInit();
if(!GLEW_VERSION_2_0) return false;
char* code_frag = loadCode("shader.frag");
char* code_verx = loadCode("shader.verx");
aShader_palette = glCreateShader(GL_FRAGMENT_SHADER);
//glShaderSource(aShader_palette, 1, &aShaderProgram_palette, NULL);
glShaderSource(aShader_palette, 1, (const GLchar**)&code_frag, NULL);
glCompileShader(aShader_palette);
GLint compiled = 0;
glGetShaderiv(aShader_palette, GL_COMPILE_STATUS, &compiled);
if(!compiled)
{
/* error-handling */
}
GLuint texloc = glGetUniformLocation(aShader_palette, "rec");
glUniform1i(texloc, 0);
texloc = glGetUniformLocation(aShader_palette, "rec_colors");
glUniform1i(texloc, 1);
glsl_palette_Program = glCreateProgram();
glAttachShader(glsl_palette_Program, aShader_palette);
glLinkProgram(glsl_palette_Program);
And rendering-related:
glPushAttrib(GL_CURRENT_BIT);
glColor4ub(255, 255, 255, t_a); // t_a is overall alpha of sprite displayed
glUseProgram(glsl_palette_Program); // this one is a compiled/linked shader declared above
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, this->m_SpriteData[idx].texture);
glActiveTexture(GL_TEXTURE1); // at this point, it looks like texture unit is actually changed (I checked that via glGetIntegerv)
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, this->m_PaletteTex);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, 256, 1, GL_RGBA, GL_UNSIGNED_BYTE, palette); // update possibly changed palette on each render
glActiveTexture(GL_TEXTURE0);
glBegin(GL_QUADS);
glTexCoord2i(0, 0);
glVertex2i(x, y);
glTexCoord2i(0, this->GetHeight(idx));
glVertex2i(x, y+this->GetHeight(idx));
glTexCoord2i(this->GetWidth(idx), this->GetHeight(idx));
glVertex2i(x+this->GetWidth(idx), y+this->GetHeight(idx));
glTexCoord2i(this->GetWidth(idx), 0);
glVertex2i(x+this->GetWidth(idx), y);
glEnd();
glActiveTexture(GL_TEXTURE1);
glUnbindTexture(GL_TEXTURE_RECTANGLE_ARB); // custom macro
glActiveTexture(GL_TEXTURE0);
glUnbindTexture(GL_TEXTURE_RECTANGLE_ARB);
glUseProgram(0);
glPopAttrib();
Shader code:
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect rec;
uniform sampler2DRect rec_colors;
void main(void)
{
vec4 oldcol = texture2DRect(rec, gl_TexCoord[0].st);
vec4 newcol = texture2DRect(rec_colors, vec2(oldcol.r*255.0, 0.0)); // palette index should be *255 because rectangle coordinates aren't normalized
gl_FragColor.rgb = newcol.rgb;
gl_FragColor.a = oldcol.g; // alpha from green part
}
Googled a lot; any similar posts I found were solved by fixing the texture unit IDs in the glUniform1i call, but for me that looks absolutely normal (at least, TEXTURE0 loads correctly into rec).
Do you check for errors anywhere with glGetError? I believe you're doing something incorrectly. glGetUniformLocation is supposed to be executed against a linked program, not a shader, and you're calling it before your program is even linked.
See relevant text from man page: http://www.opengl.org/wiki/GLAPI/glGetUniformLocation
The actual locations assigned to uniform variables are not known until the program object is linked successfully. After linking has occurred, the command glGetUniformLocation can be used to obtain the location of a uniform variable. Uniform variable locations and values can only be queried after a link if the link was successful.
You should always, at the very least, check for OpenGL errors with glGetError once per frame during development. It will alert you to these problems before you have to go online to ask for help.
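A minimal sketch of the corrected order, reusing the question's names (note also that glUniform* calls affect the program currently bound with glUseProgram, so the program has to be in use when the sampler uniforms are set):
glsl_palette_Program = glCreateProgram();
glAttachShader(glsl_palette_Program, aShader_palette);
glLinkProgram(glsl_palette_Program);
glUseProgram(glsl_palette_Program);
glUniform1i(glGetUniformLocation(glsl_palette_Program, "rec"), 0);
glUniform1i(glGetUniformLocation(glsl_palette_Program, "rec_colors"), 1);
glUseProgram(0);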