I'm learning OpenGL from learnopengl.com and I'm currently on the text rendering chapter. I thought about combining the scalable font textures in my program into one big texture, but for some reason I get an exception from glCopyImageSubData(...).
First I try to measure how big the texture should be, and then I copy the textures I've already created into that one big texture. I've been playing with this function for quite a while now, but I can't find a solution.
Here is the original code, with a separate texture for each glyph.
I tried to create an FBO and attach a texture to it, but after some research I found this function, which seemed much clearer to me, so I decided to use it instead.
So I added xoffset to the Character structure:
struct Character {
    GLuint textureID;
    glm::ivec2 size;
    glm::ivec2 bearing;
    GLuint advance;
    GLuint xoffset;
};
I add this offset to each glyph in the first for loop:
Character character = {
    texture,
    glm::ivec2(face->glyph->bitmap.width, face->glyph->bitmap.rows),
    glm::ivec2(face->glyph->bitmap_left, face->glyph->bitmap_top),
    face->glyph->advance.x,
    font_texture_width
};
font_texture_width += character.size.x;
characters.insert(std::pair<GLchar, Character>(c, character));
Then I try to paste each glyph texture into fontTexture:
glGenTextures(1, &fontTexture);
glBindTexture(GL_TEXTURE_2D, fontTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED,
font_texture_width, 100, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
std::cout << glGetError();
for (GLubyte c = 0; c < 128; ++c)
    glCopyImageSubData(characters[c].textureID, GL_TEXTURE_2D, 0, 0, 0, 0,
                       fontTexture, GL_TEXTURE_2D, 0, characters[c].xoffset, 0, 0,
                       characters[c].size.x, characters[c].size.y, 0);
Here is the modified renderText function:
void renderText(Shader &s, std::string text, GLfloat x, GLfloat y, GLfloat scale, glm::vec3 color)
{
    s.use();
    s.setVec3("textColor", color);
    glActiveTexture(GL_TEXTURE0);
    glBindVertexArray(VAO);
    std::string::const_iterator c;
    glBindTexture(GL_TEXTURE_2D, fontTexture);
    for (c = text.begin(); c != text.end(); ++c)
    {
        Character ch = characters[*c];
        GLfloat xpos = x + ch.bearing.x * scale;
        GLfloat ypos = y - (ch.size.y - ch.bearing.y) * scale;
        GLfloat w = ch.size.x * scale;
        GLfloat h = ch.size.y * scale;
        GLfloat vertices[6][4] = {
            { xpos + characters[*c].xoffset, ypos + h, 0.0, 0.0 },
            { xpos + characters[*c].xoffset, ypos, 0.0, 1.0 },
            { xpos + w + characters[*c].xoffset, ypos, 1.0, 1.0 },
            { xpos + characters[*c].xoffset, ypos + h, 0.0, 0.0 },
            { xpos + w + characters[*c].xoffset, ypos, 1.0, 1.0 },
            { xpos + w + characters[*c].xoffset, ypos + h, 1.0, 0.0 }
        };
        glBindBuffer(GL_ARRAY_BUFFER, VBO);
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        x += (ch.advance >> 6) * scale;
    }
    glBindVertexArray(0);
    glBindTexture(GL_TEXTURE_2D, 0);
}
The exception is thrown where I call glCopyImageSubData.
Here is my whole program: https://pastebin.com/vAeeX3Xh
EDIT:
I've now figured out that it would be better to use glTexSubImage2D instead, so this is how it looks (in place of the block of code with glCopyImageSubData):
glGenTextures(1, &fontTexture);
glBindTexture(GL_TEXTURE_2D, fontTexture);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_RED,
face->glyph->bitmap.width * 250,
face->glyph->bitmap.rows * 250,
0,
GL_RED,
GL_UNSIGNED_BYTE,
NULL
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
for (GLubyte c = 0; c < 128; ++c)
{
    if (FT_Load_Char(face, c, FT_LOAD_RENDER))
    {
        std::cout << "ERROR::FREETYPE: Failed to load Glyph" << std::endl;
        continue;
    }
    glTexSubImage2D(GL_TEXTURE_2D, 0, characters[c].xoffset, 0, characters[c].size.x, characters[c].size.y, GL_RED, GL_UNSIGNED_BYTE, face->glyph->bitmap.buffer);
}
Now when I render this texture it looks like this:
https://imgur.com/a/A0gDy6T
So I've managed to successfully implement batching into my engine, but have come across some weird behavior with the array of samplerCubes in my fragment shader. The batch renderer works fine with 2 texture units - I can successfully draw cubes with the right textures if I only bind 2 textures, but as soon as I add a 3rd to the mTextures vector, and use glUniform1i for that index in the samplerCube array, the wrong texture is displayed, even though the texture id (TexID) is correct (I checked this in the fragment shader).
The issue appears to be with my understanding of OpenGL, as for some reason the texture that should be displayed by cubeTextures[2] (the fragment shader uniform) is displayed by cubeTextures[1], and cubeTextures[2] displays the same texture as cubeTextures[0]. The texture that should be displayed by cubeTextures[1] just doesn't show up at all once the 3rd texture is bound.
Here's my code:
Main.cpp
#include "Window.h"
#include "Block.h"
#include "BatchRenderer.h"
#include "Shader.h"
#include "Camera.h"
void Submit(gfx::BatchRenderer* renderer, float height) //temporary to test batching
{
    for (int i = 0; i < 16; i++)
    {
        for (int j = 0; j < 16; j++)
        {
            gfx::Block* block = new gfx::Block(j % 2 == 0 ? (i % 2 == 0 ? gfx::BlockType::DIRT : gfx::BlockType::GRASS) : gfx::BlockType::COBBLE, math::Vec3f(i * 5.0, height, j * 5.0));
            renderer->Submit(block);
        }
    }
}
int main()
{
gfx::Window* window = new gfx::Window("MineClone", 800, 800);
gfx::Shader* shader = new gfx::Shader();
shader->FromFile(GL_VERTEX_SHADER, "Resources/ModelTest.vert");
shader->FromFile(GL_FRAGMENT_SHADER, "Resources/ModelTest.frag");
shader->RefreshProgram();
math::Mat4f projection = math::Mat4f::Perspective(70.0, window->GetWidth() / window->GetHeight(), 0.1, 1000.0);
gfx::Camera* camera = new gfx::Camera(projection, 0.05, 0.0015);
gfx::BatchRenderer* renderer = new gfx::BatchRenderer();
gfx::TextureCube* grass = new gfx::TextureCube("Resources/top.png", "Resources/dirt.png", "Resources/sides.png");
renderer->Submit(grass);
gfx::TextureCube* dirt = new gfx::TextureCube("Resources/dirt.png");
renderer->Submit(dirt);
gfx::TextureCube* cobble = new gfx::TextureCube("Resources/cobble.png");
renderer->Submit(cobble);
Submit(renderer, 0.0);
Submit(renderer, 5.0);
Submit(renderer, 10.0);
Submit(renderer, 15.0);
Submit(renderer, 20.0);
Submit(renderer, 25.0);
Submit(renderer, 30.0);
Submit(renderer, 35.0);
while (!window->IsClosed())
{
window->Update();
if (window->GetInputHandler()->IsKeyDown(VK_ESCAPE))
window->SwitchMouseState();
if (window->IsSynced())
camera->Update(window->GetInputHandler());
shader->Bind();
math::Mat4f projection = camera->GetProjection();
shader->SetUniform("projection", projection);
math::Mat4f view = camera->GetView();
shader->SetUniform("view", view);
shader->SetUniform("cubeTextures[0]", 0);
shader->SetUniform("cubeTextures[1]", 1);
shader->SetUniform("cubeTextures[2]", 2);
renderer->Render();
shader->Unbind();
}
return 0;
}
BatchRenderer.cpp
#include "BatchRenderer.h"
namespace gfx
{
BatchRenderer::BatchRenderer()
: mMesh(NULL)
{
}
BatchRenderer::~BatchRenderer()
{
mBlocks.clear();
mTextures.clear();
delete mMesh;
}
void BatchRenderer::Submit(Block* block)
{
MeshData data = block->GetMesh();
mMeshData.vertices.insert(mMeshData.vertices.end(),
data.vertices.begin(),
data.vertices.end());
for (int i = 0; i < 36; i++)
{
data.indices[i] += mBlocks.size() * 8;
}
mMeshData.indices.insert(mMeshData.indices.end(),
data.indices.begin(),
data.indices.end());
mMesh = Mesh::Make(mMeshData);
mBlocks.push_back(block);
}
void BatchRenderer::Submit(TextureCube* texture)
{
mTextures.push_back(texture);
}
void BatchRenderer::Render()
{
for (int i = 0; i < mTextures.size(); i++)
{
mTextures[i]->Bind(i);
}
mMesh->Render();
for (int i = 0; i < mTextures.size(); i++)
{
mTextures[i]->Unbind(i);
}
}
}
TextureCube.cpp
#include "TextureCube.h"
namespace gfx
{
TextureCube::TextureCube(std::string paths[6])
: Texture(TextureEnum::TEXTURE_CUBE)
{
glGenTextures(1, &mHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, mHandle);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
for (int i = 0; i < 6; i++)
{
std::vector<byte> pixels;
lodepng::decode(pixels, mWidth, mHeight, paths[i].c_str());
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
}
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
}
TextureCube::TextureCube(std::string path)
: Texture(TextureEnum::TEXTURE_CUBE)
{
glGenTextures(1, &mHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, mHandle);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
std::vector<byte> pixels;
lodepng::decode(pixels, mWidth, mHeight, path.c_str());
for (int i = 0; i < 6; i++)
{
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
}
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
}
TextureCube::TextureCube(std::string top, std::string bottom, std::string sides)
: Texture(TextureEnum::TEXTURE_CUBE)
{
glGenTextures(1, &mHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, mHandle);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
std::vector<byte> topPixels;
lodepng::decode(topPixels, mWidth, mHeight, top.c_str());
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_RGBA8, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, topPixels.data());
std::vector<byte> bottomPixels;
lodepng::decode(bottomPixels, mWidth, mHeight, bottom.c_str());
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_RGBA8, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, bottomPixels.data());
std::vector<byte> sidesPixels;
lodepng::decode(sidesPixels, mWidth, mHeight, sides.c_str());
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA8, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, sidesPixels.data());
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA8, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, sidesPixels.data());
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_RGBA8, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, sidesPixels.data());
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGBA8, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, sidesPixels.data());
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
}
TextureCube::~TextureCube()
{
glDeleteTextures(1, &mHandle);
mHandle = NULL;
}
void TextureCube::Bind(uint32_t slot)
{
glBindTextureUnit(slot, mHandle);
}
void TextureCube::Unbind(uint32_t slot)
{
glBindTextureUnit(slot, 0);
}
}
ModelTest.vert
#version 450 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec3 offset;
layout (location = 3) in float textureID;
uniform mat4 projection;
uniform mat4 view;
out vec3 TexPos;
out flat float TexID;
void main()
{
gl_Position = projection * view * vec4(position, 1.0);
TexPos = position - offset;
TexID = textureID;
}
ModelTest.frag
#version 450 core
out vec4 FragColour;
in vec3 TexPos;
in flat float TexID;
uniform samplerCube cubeTextures[3];
void main()
{
vec3 norm = normalize(TexPos);
int id = int(TexID);
FragColour = texture(cubeTextures[id], norm);
}
For anyone looking for a solution: it took me a while, but I found it (I'm relatively new to OpenGL). The issue was that I was setting the texture sampler slots individually using glUniform1i. To fix this, since cubeTextures is an array of samplerCubes, I created a struct for sampler data containing an int array and a length, and set the texture sampler slots with a single glUniform1iv call.
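A minimal sketch of that fix (programID stands in for whatever the Shader wrapper exposes as the linked program handle; the slot values match the texture units the batch renderer binds to):
GLint samplerSlots[3] = { 0, 1, 2 };                        // texture units the cube maps are bound to
GLint location = glGetUniformLocation(programID, "cubeTextures");
glUseProgram(programID);
glUniform1iv(location, 3, samplerSlots);                    // upload the whole sampler array in one call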
I'm trying to render a video through OpenGL in GTK3 on Linux. These vertex and fragment shaders were used with success on Qt. Actually, all the OpenGL function calls are the same as in working examples on Qt.
Does anyone know why? Is it possible that GLEW coordinates are different from the coordinates in Qt's OpenGL?
The video is in YUV420P format, so that's why the fragment shader does matrix multiplication. Is it possible that my fragment coordinates are wrong?
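(For reference, since GLSL mat3 constructors are column-major, that multiplication works out to R = Y + 1.13983*V, G = Y - 0.39465*U - 0.58060*V, B = Y + 2.03211*U, i.e. the usual YUV-to-RGB conversion.)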
Anyways, here's the video:
My Vertex Shader:
#version 130
attribute vec4 vertexIn;
attribute vec2 textureIn;
varying vec2 textureOut;
void main(void)
{
gl_Position = vertexIn;
textureOut = textureIn;
}
My Fragment Shader:
#version 130
varying vec2 textureOut;
uniform sampler2D tex_y;
uniform sampler2D tex_u;
uniform sampler2D tex_v;
void main(void)
{
vec3 yuv;
vec3 rgb;
yuv.x = texture2D(tex_y, textureOut).r;
yuv.y = texture2D(tex_u, textureOut).r - 0.5;
yuv.z = texture2D(tex_v, textureOut).r - 0.5;
rgb = mat3(1.0, 1.0, 1.0,
0.0, -0.39465, 2.03211,
1.13983, -0.58060, 0.0) * yuv;
gl_FragColor = vec4(rgb, 1.0);
}
My rendering code:
static const GLfloat ver[] = {
-1.0f,-1.0f,
1.0f,-1.0f,
-1.0f, 1.0f,
1.0f, 1.0f
};
static const GLfloat tex[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f
};
void OpenGLArea::init()
{
std::cout << "OpenGLArea init" << std::endl;
set_size_request(640, 360);
Singleton::instance()->getStream("cam1").mediaStream->ffmpegDecoder->setVideoReceiver(this);
}
void OpenGLArea::receiveVideo(unsigned char **videoBuffer, int frameWidth, int frameHeight)
{
this->frameWidth = frameWidth;
this->frameHeight = frameHeight;
//Before first render, datas pointer isn't even created yet
if (!firstFrameReceived)
{
buffer[0] = new unsigned char[frameWidth * frameHeight]; //Y
buffer[1] = new unsigned char[frameWidth * frameHeight / 4]; //U
buffer[2] = new unsigned char[frameWidth * frameHeight / 4]; //V
firstFrameReceived = true;
}
else
{
memcpy(buffer[0], videoBuffer[0], frameWidth * frameHeight);
memcpy(buffer[1], videoBuffer[1], frameWidth * frameHeight / 4);
memcpy(buffer[2], videoBuffer[2], frameWidth * frameHeight / 4);
}
//glDraw();
}
void OpenGLArea::glInit()
{
int frameWidth = 640;
int frameHeight = 360;
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
Shader vertex_shader(ShaderType::Vertex, "vertex.shader");
Shader fragment_shader(ShaderType::Fragment, "fragment.shader");
program = new Program();
program->attach_shader(vertex_shader);
program->attach_shader(fragment_shader);
program->link();
glGenTextures(3, texs);//TODO: delete texture
//Y
glBindTexture(GL_TEXTURE_2D, texs[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, frameWidth, frameHeight, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//U
glBindTexture(GL_TEXTURE_2D, texs[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, frameWidth / 2, frameHeight / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//V
glBindTexture(GL_TEXTURE_2D, texs[2]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, frameWidth / 2, frameHeight / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
void OpenGLArea::glDraw()
{
program->use();
glVertexAttribPointer(A_VER, 2, GL_FLOAT, 0, 0, ver);
glEnableVertexAttribArray(A_VER);
glVertexAttribPointer(T_VER, 2, GL_FLOAT, 0, 0, tex);
glEnableVertexAttribArray(T_VER);
unis[0] = glGetAttribLocation(program->get_id(), "tex_y");
unis[1] = glGetAttribLocation(program->get_id(), "tex_u");
unis[2] = glGetAttribLocation(program->get_id(), "tex_v");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texs[0]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight, GL_RED, GL_UNSIGNED_BYTE, buffer[0]);
glUniform1i(unis[0], 0);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, texs[1]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth / 2, frameHeight / 2, GL_RED, GL_UNSIGNED_BYTE, buffer[1]);
glUniform1i(unis[1], 1);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, texs[2]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth / 2, frameHeight / 2, GL_RED, GL_UNSIGNED_BYTE, buffer[2]);
glUniform1i(unis[2], 2);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
class GLWindow : public Gtk::Window
{
public:
GLWindow()
{
vbox = new Gtk::VBox;
drawing_area = new OpenGLArea();
vbox->pack_start(*drawing_area, true, true);
add(*vbox);
}
private:
Gtk::Button *button;
Gtk::VBox *vbox;
OpenGLArea *drawing_area;
};
Also, I'm only getting image updates when I resize or refocus the window. It's possible that I'm forgetting to call some function when the video is updated. Does anyone know which function that is?
PS: OpenGLArea is a subclass of Gtk::DrawingArea.
UPDATE:
I just noticed that in the lines
unis[0] = glGetAttribLocation(program->get_id(), "tex_y");
unis[1] = glGetAttribLocation(program->get_id(), "tex_u");
unis[2] = glGetAttribLocation(program->get_id(), "tex_v");
unis[i] always has the same value: 4294967295 (i.e. (GLuint)-1), so glGetAttribLocation isn't finding anything.
GLEW is a library used for retrieving the function pointers to OpenGL functions. It's needed for OpenGL versions > 1.1.
So don't think of GLEW as the culprit of your vertex issue. If you need to change the order of the vertices, that is due to winding order.
The vertex order may also be the cause of a texture being drawn upside-down.
glGetAttribLocation is used for attributes. In your VS these are vertexIn and textureIn.
For uniforms (your tex_XXX) you must use glGetUniformLocation.
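For example (program->get_id() is your own wrapper; the rest is standard GL):
unis[0] = glGetUniformLocation(program->get_id(), "tex_y");
unis[1] = glGetUniformLocation(program->get_id(), "tex_u");
unis[2] = glGetUniformLocation(program->get_id(), "tex_v");
// each location should now be >= 0 once the program has linked successfully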
For the resizing issue, this question may be useful.
In short, you must connect a callback for the "configure-event" signal and call glViewport from it.
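A rough sketch of that, assuming OpenGLArea really derives from Gtk::DrawingArea (the exact signal wiring can differ between gtkmm versions):
// resize handler for the drawing area
bool OpenGLArea::on_configure_event(GdkEventConfigure* event)
{
    // make sure the GL context is current here before issuing GL calls
    glViewport(0, 0, event->width, event->height);
    queue_draw(); // ask GTK to redraw at the new size
    return true;
}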
I'm learning OpenGL from the MakingGamesWithBen series and I'm writing a simple asteroid shooter based on his engine. I have created a system that randomly positions the asteroid sprites with random sizes, selects a random texture path from an std::vector, and passes the path to the asteroid constructor. The sprites are drawn, however only the first texture is ever used. I've read that I need to bind those textures and switch to the relevant glActiveTexture; from my code below, how would I go about this?
void MainGame::prepareTextures() {
//compile shaders and get Texlocations
initShaders("Shaders/background.vert", "Shaders/background.frag");
GLint TexLoc = _colorProgram.getUniformLocation("bgTexture");
glActiveTexture(GL_TEXTURE0);
}
m_asteroid[i].draw():
glm::vec4 uv(0.0f, 0.0f, 1.0f, 1.0f);
//convert m_imgNum to string and remove trailing zeros
std::string strImgNum = std::to_string(m_imgNum);
strImgNum.erase(strImgNum.find_last_not_of('0') + 1, std::string::npos);
//construct filePath
std::string filePath = m_dir + strImgNum + ".png";
static Engine::GLTexture texture = Engine::ResourceManager::GetTexture(filePath, 0, 0, 32, 4);
Engine::Color color;
color.r = 255;
color.g = 255;
color.b = 255;
color.a = 255;
glm::vec4 posAndSize = glm::vec4(m_posX, m_posY, m_width, m_height);
spriteBatch.Draw(posAndSize, uv, texture.id, 0.0f, color);
Engine::ResourceManager::GetTexture():
GLTexture texture = {};
unsigned char *imageData = stbi_load(filePath.c_str(), &width, &height, &bitsPerPixel, forceBpp);
if (imageData == NULL) {
const char *loadError = stbi_failure_reason();
stbi_image_free(imageData);
fatalError(loadError);
}
//Create the texture in opengl
glGenTextures(1, &(texture.id));
glBindTexture(GL_TEXTURE_2D, texture.id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
stbi_image_free(imageData);
glBindTexture(GL_TEXTURE_2D, 0);
texture.width = width;
texture.height = height;
return texture;
renderbatch():
void SpriteBatch::renderbatch() {
glBindVertexArray(m_vao);
for (unsigned int i = 0; i < m_renderBatches.size(); i++) {
glBindTexture(GL_TEXTURE_2D, m_renderBatches[i].texture);
glDrawArrays(GL_TRIANGLES, m_renderBatches[i].offset, m_renderBatches[i].numVertices);
}
glBindVertexArray(0);
}
I can provide any other code/clarification that may be needed!
Usually when not all textures are showing up, it probably has something to do with how the textures are uploaded to OpenGL. You could try to debug this by checking whether your textures are all uploaded properly:
std::uint32_t textureId = Engine::ResourceManager::GetTexture(filePath, 0, 0, 32, 4).id;
std::cout << "texture id, should not be 0 :" << textureId << std::endl;
This could happen if you called this function not from a thread with OpenGL context.
EDIT:
Is there any reason to use a static object here? A function-local static is initialized only once, so every asteroid would keep reusing whatever texture the first call loaded.
static Engine::GLTexture texture = Engine::ResourceManager::GetTexture(filePath, 0, 0, 32, 4);
try changing that to just
Engine::GLTexture texture = Engine::ResourceManager::GetTexture(filePath, 0, 0, 32, 4);
UPDATE:
I just replaced your spritebatch.cpp/.h with the ones from Ben's GitHub and put a simple test in your MainGame.cpp:
m_spriteBatch.begin();
asteroids.draw(m_spriteBatch);
ship.draw(m_spriteBatch);
m_spriteBatch.end();
m_spriteBatch.renderBatch();
_colorProgram.unuse();
_window.SwapBuffers();
and I can render the two uploaded textures properly;
HTH.
I'm trying to display some text using OpenGL with the FreeType library. It's working, yet the text doesn't look very smooth. The FreeType documentation says that some antialiasing happens to the glyph bitmap during loading, but it doesn't look that way in my case.
This is what I'm doing:
FT_Init_FreeType(&m_fontLibrary);
FT_New_Face(m_fontLibrary, "src/VezusLight.OTF", 0, &m_BFont);
FT_Set_Pixel_Sizes(m_BFont, 0, 80);
m_glyph = m_BFont->glyph;
GLuint tex;
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glUseProgram(m_textPipeline);
glUniform1i(m_texLocation, 1);
glUseProgram(0);
and then rendering:
glActiveTexture(GL_TEXTURE1);
glEnableVertexAttribArray(m_coordTex);
glBindBuffer(GL_ARRAY_BUFFER, m_VBO);
const char *p;
float x = x_i, y = y_i;
const char* result = text.c_str();
for (p = result; *p; p++)
{
if (FT_Load_Char(m_BFont, *p, FT_LOAD_RENDER))
continue;
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_ALPHA,
m_glyph->bitmap.width,
m_glyph->bitmap.rows,
0,
GL_ALPHA,
GL_UNSIGNED_BYTE,
m_glyph->bitmap.buffer
);
float x2 = x - 1024 + m_glyph->bitmap_left;
float y2 = y - 600 - m_glyph->bitmap_top;
float w = m_glyph->bitmap.width;
float h = m_glyph->bitmap.rows;
GLfloat box[4][4] = {
{ x2, -y2 - h, 0, 1 },
{ x2 + w, -y2 - h, 1, 1 },
{ x2, -y2, 0, 0 },
{ x2 + w, -y2, 1, 0 },
};
glBufferData(GL_ARRAY_BUFFER, 16 * sizeof(GLfloat), box, GL_DYNAMIC_DRAW);
glVertexAttribPointer(m_coordTex, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), NULL);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
x += (m_glyph->advance.x >> 6);
y += (m_glyph->advance.y >> 6);
}
glDisableVertexAttribArray(m_coordTex);
Result looks like this:
Can anyone spot a problem in my code?
Two issues with your code.
The first one is a buffer overflow: the texture coordinates in your box structure are vec2, however you tell glVertexAttribPointer it was a vec4 (the stride of 4*sizeof(float) is what matters, and the mismatched size parameter makes OpenGL read out of bounds, 2 elements past the end of the box array).
That your texture looks pixelated stems from the fact that texture coordinates 0 and 1 do not come to lie on pixel centers, but on the edges of the texture. Either use texelFetch in the fragment shader to address pixels by their pixel coordinate, or remap the texture extents to the range [0…1] properly, as explained in https://stackoverflow.com/a/5879551/524368
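As a rough illustration of that remapping (reusing the w and h from the question's code, i.e. the glyph bitmap size in texels):
float u0 = 0.5f / w, u1 = (w - 0.5f) / w;   // half a texel in from the left/right edges
float v0 = 0.5f / h, v1 = (h - 0.5f) / h;   // half a texel in from the top/bottom edges
// then use (u0, v0)..(u1, v1) instead of 0 and 1 for the texture coordinates in box[]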
I think that for transparent color or smooth, anti-aliased glyphs you must enable blending in OpenGL and possibly also disable depth testing (you can find out why and how by searching the internet).
Something like this:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
//and if it didn't work, then disable depth testing by uncommenting this:
//glDisable(GL_DEPTH_TEST);
hope it helps!
I'm trying to write a small, portable 2D engine for iOS to learn C++ and OpenGL. Right now I'm trying to display a texture that I've loaded in. I was successful displaying a texture when loading it with the CoreGraphics libraries, but now I'm trying to load the file in C++ with fread and libpng, and all I see is a white box.
My texture is 64x64 so it's not a power of 2 problem. Also I have enabled GL_TEXTURE_2D.
The first block of code is used to load the png, convert the png to image data, and load the data into OpenGL.
void AssetManager::loadImage(const std::string &filename)
{
Texture texture(filename);
png_structp png_ptr = NULL;
png_infop info_ptr = NULL;
png_bytep *row_pointers = NULL;
int bitDepth, colourType;
FILE *pngFile = fopen(std::string(Game::environmentData.basePath + "/" + filename).c_str(), "rb");
if(!pngFile)
return;
png_byte sig[8];
fread(&sig, 8, sizeof(png_byte), pngFile);
rewind(pngFile);//so when we init io it won't bitch
if(!png_check_sig(sig, 8))
return;
png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL,NULL,NULL);
if(!png_ptr)
return;
if(setjmp(png_jmpbuf(png_ptr)))
return;
info_ptr = png_create_info_struct(png_ptr);
if(!info_ptr)
return;
png_init_io(png_ptr, pngFile);
png_read_info(png_ptr, info_ptr);
bitDepth = png_get_bit_depth(png_ptr, info_ptr);
colourType = png_get_color_type(png_ptr, info_ptr);
if(colourType == PNG_COLOR_TYPE_PALETTE)
png_set_palette_to_rgb(png_ptr);
/*if(colourType == PNG_COLOR_TYPE_GRAY && bitDepth < 8)
png_set_gray_1_2_4_to_8(png_ptr);*/
if(png_get_valid(png_ptr, info_ptr, PNG_INFO_tRNS))
png_set_tRNS_to_alpha(png_ptr);
if(bitDepth == 16)
png_set_strip_16(png_ptr);
else if(bitDepth < 8)
png_set_packing(png_ptr);
png_read_update_info(png_ptr, info_ptr);
png_get_IHDR(png_ptr, info_ptr, &texture.width, &texture.height,
&bitDepth, &colourType, NULL, NULL, NULL);
int components = GetTextureInfo(colourType);
if(components == -1)
{
if(png_ptr)
png_destroy_read_struct(&png_ptr, &info_ptr, NULL);
return;
}
GLubyte *pixels = (GLubyte *)malloc(sizeof(GLubyte) * (texture.width * texture.height * components));
row_pointers = (png_bytep *)malloc(sizeof(png_bytep) * texture.height);
for(int i = 0; i < texture.height; ++i)
row_pointers[i] = (png_bytep)(pixels + (i * texture.width * components));
png_read_image(png_ptr, row_pointers);
png_read_end(png_ptr, NULL);
// make it
glGenTextures(1, &texture.name);
// bind it
glBindTexture(GL_TEXTURE_2D, texture.name);
GLuint glcolours;
switch (components) {
case 1:
glcolours = GL_LUMINANCE;
break;
case 2:
glcolours = GL_LUMINANCE_ALPHA;
break;
case 3:
glcolours = GL_RGB;
break;
case 4:
glcolours = GL_RGBA;
break;
}
glTexImage2D(GL_TEXTURE_2D, 0, components, texture.width, texture.height, 0, glcolours, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
png_destroy_read_struct(&png_ptr, &info_ptr, NULL);
fclose(pngFile);
free(row_pointers);
free(pixels);
textures.push_back(texture);
}
Here is the code for my Texture class:
class Texture
{
public:
Texture(const std::string &filename) { this->filename = filename; }
unsigned int name;
unsigned int size;
unsigned int width;
unsigned int height;
std::string filename;
};
Here is how I setup my view:
void Renderer::Setup(Rectangle rect, CameraType cameraType)
{
_viewRect = rect;
glViewport(0,0,_viewRect.width,_viewRect.height);
if(cameraType == Renderer::Orthographic)
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(_viewRect.x,_viewRect.width,_viewRect.y,_viewRect.height,kZNear,kZFar);
glMatrixMode(GL_MODELVIEW);
}
else
{
GLfloat size = kZNear * tanf(DegreesToRadians(kFieldOfView) / 2.0);
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-size, size, -size / (_viewRect.width / _viewRect.height), size / (_viewRect.width / _viewRect.height), kZNear, kZFar);
glMatrixMode(GL_MODELVIEW);
}
glEnable(GL_TEXTURE_2D);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
glEnable(GL_BLEND);
}
Now here is where I draw the texture:
void Renderer::DrawTexture(int x, int y, Texture &texture)
{
GLfloat vertices[] = {
x, _viewRect.height - y, 0.0,
x, _viewRect.height - (y + texture.height), 0.0,
x + texture.width, _viewRect.height - y, 0.0,
x + texture.width, _viewRect.height - (y + texture.height), 0.0
};
static const GLfloat normals[] = {
0.0, 0.0, 1.0,
0.0, 0.0, 1.0,
0.0, 0.0, 1.0,
0.0, 0.0, 1.0
};
GLfloat texCoords[] = {
0.0, 1.0,
0.0, 0.0,
1.0, 1.0,
1.0, 0.0
};
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glLoadIdentity();
glTranslatef(0.0, 0.0, -3.0);
glBindTexture(GL_TEXTURE_2D, texture.name);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
UPDATE:
I changed the function I use to generate a texture. It now creates a random test texture. I still see a white texture no matter what. Gonna keep digging into how I'm rendering.
Here's the new function:
void AssetManager::CreateNoisyTexture(const char * key)
{
Texture texture(key, 64, 64);
const unsigned int components = 4;
GLubyte *pixels = (GLubyte *)malloc(sizeof(GLubyte) * (texture.width * texture.height * components));
GLubyte *pitr1 = pixels;
GLubyte *pitr2 = pixels + (texture.width * texture.height * components);
while (pitr1 != pitr2) {
*pitr1 = rand() * 0xFF;
*(pitr1 + 1) = rand() * 0xFF;
*(pitr1 + 2) = rand() * 0xFF;
*(pitr1 + 3) = 0xFF;
pitr1 += 4;
}
glGenTextures(1, &texture.name);
glBindTexture(GL_TEXTURE_2D, texture.name);
glTexImage2D(GL_TEXTURE_2D, 0, components, texture.width, texture.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
free(pixels);
printf("Created texture with key: %s name: %d", texture.key, texture.name);
textures.push_back(texture);
}
Ok I figured it out. The problem was that I was using the wrong internal pixel format.
glTexImage2D(GL_TEXTURE_2D, 0, 4, texture.width, texture.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
Should be:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture.width, texture.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
I needed to pass in GL_RGBA instead of 4. In the docs I read this:
internalFormat - Specifies the number of color components in the texture. Must be 1, 2, 3, or 4, or one of the following symbolic constants:
I figured it didn't matter, but I guess it does. One other thing to note: I tracked this down by using glGetError() to see where the error was occurring. When I called glTexImage2D(), the error was GL_INVALID_OPERATION.
Thanks for your help IronMensan!
UPDATE:
So the reason I couldn't pass 4 is that it isn't allowed: internalFormat and format need to be the same in the OpenGL ES 1.1 spec. You can only pass GL_ALPHA, GL_RGB, GL_RGBA, GL_LUMINANCE, or GL_LUMINANCE_ALPHA. Lesson learned.
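For reference, a minimal sketch of the working upload plus the error check that exposed the problem (pixels, texture.width and texture.height come from the loading code above):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,            // internalFormat must match format on ES 1.1
             texture.width, texture.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("glTexImage2D failed: 0x%x\n", err);     // was GL_INVALID_OPERATION when passing '4'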