I'll preface this by saying I'm rather new to these topics, and have been giving myself a crash course the last few days for a school project. I apologize that this is quite long, but as I'm not sure where the problem is I figured I'd try to be thorough.
I'm attempting to use a framebuffer to draw to some RGBA textures, which are used later for other drawing (I'm using the values in the textures as vectors to draw particles -- textures store values such as their position and velocity).
However, as best I can tell, the textures after I draw them are blank. Sampling values in the texture gives back (0, 0, 0, 1). This also seems to be confirmed by my later drawing procedure, as it appears that all the particles are being drawn overlapped at origin (this observation is based on my blend function and some hard-coded test colour values).
I'm using OpenGL 4 with SDL2 and glew.
I'll attempt to post everything relevant in an orderly fashion:
The framebuffer class creates the framebuffer and attaches (3) textures.
It has:
GLuint buffer_id_, the framebuffer's id, initialized to 0;
unsigned int width, height, the width and height of the textures in pixels;
unsigned int numTextures, how many textures to create for the framebuffer (again, in this case 3);
and std::vector<GLuint> textures_ for all the textures associated with the framebuffer.
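Pieced together, the class interface looks roughly like this (a sketch reconstructed from the description above and the calls used later, so the exact signatures are guesses):
class Framebuffer {
public:
    Framebuffer(unsigned int width, unsigned int height, unsigned int numTextures);
    bool init();
    GLuint getBufferID() const { return buffer_id_; }
    GLuint getTexture(unsigned int i) const { return textures_[i]; }
private:
    GLuint buffer_id_ = 0;
    unsigned int width, height;
    unsigned int numTextures;
    std::vector<GLuint> textures_;
};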
bool Framebuffer::init() {
// GL_MAX_COLOR_ATTACHMENTS is only the enum for the query, not the limit itself, so fetch the actual value.
GLint maxAttachments = 0;
glGetIntegerv( GL_MAX_COLOR_ATTACHMENTS, &maxAttachments );
assert( numTextures <= static_cast<unsigned int>(maxAttachments) );
// Get the buffer id.
glGenFramebuffers( 1, &buffer_id_ );
glBindFramebuffer( GL_FRAMEBUFFER, buffer_id_ );
// Generate the textures.
for (unsigned int i = 0; i < numTextures; ++i) {
GLuint tex;
glGenTextures( 1, &tex );
glBindTexture(GL_TEXTURE_2D, tex);
// Give empty image to OpenGL.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, 0);
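// Note: with GL_RGBA as the internal format, the texture stores normalized 8-bit channels even
// though the data type here is GL_FLOAT; keeping arbitrary float data (positions, velocities)
// would need a float internal format such as GL_RGBA32F.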
// Use GL_NEAREST, since we don't want any kind of averaging across values:
// we just want one pixel to represent a particle's data.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// This probably isn't necessary, but we don't want to have UV coords past the image edges.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
textures_.push_back(tex);
}
glBindTexture(GL_TEXTURE_2D, 0);
// Bind our textures to the framebuffer.
std::vector<GLenum> drawBuffers(numTextures);
for (unsigned int i = 0; i < numTextures; ++i) {
GLenum attach = GL_COLOR_ATTACHMENT0 + i;
glFramebufferTexture(GL_FRAMEBUFFER, attach, textures_[i], 0);
drawBuffers[i] = attach;
}
glDrawBuffers( numTextures, drawBuffers.data() );
if ( glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE ) {
std::cerr << "Error: Failed to create framebuffer." << std::endl;
return false;
}
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
return true;
}
Two matching framebuffers are made by the particle system, so that one can be used as input to the shader programs and the other as output.
bool AmbientParticleSystem::initFramebuffers() {
framebuffers_[0] = new Framebuffer(particle_texture_width_, particle_texture_height_, NUM_TEXTURES_PER_FRAMEBUFFER);
framebuffers_[1] = new Framebuffer(particle_texture_width_, particle_texture_height_, NUM_TEXTURES_PER_FRAMEBUFFER);
return framebuffers_[0]->init() && framebuffers_[1]->init();
}
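The intent is classic ping-ponging. A hypothetical update step (the actual pass isn't shown here, and runUpdatePass is a made-up helper) would read from one framebuffer's textures, render into the other, and then swap the two:
// Hypothetical sketch: sample from framebuffers_[0], render into framebuffers_[1], then swap.
Framebuffer* input = framebuffers_[0];
Framebuffer* output = framebuffers_[1];
runUpdatePass( input, output ); // made-up helper: binds output's FBO, samples input's textures
std::swap( framebuffers_[0], framebuffers_[1] );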
Then I attempt to draw an initial state to the first framebuffer:
void AmbientParticleSystem::initParticleDrawing() {
glBindFramebuffer( GL_FRAMEBUFFER, framebuffers_[0]->getBufferID() );
// Store the previous viewport.
GLint prevViewport[4];
glGetIntegerv( GL_VIEWPORT, prevViewport );
glViewport( 0, 0, particle_texture_width_, particle_texture_height_ );
glClear( GL_COLOR_BUFFER_BIT );
glDisable( GL_DEPTH_TEST );
glBlendFunc( GL_ONE, GL_ZERO);
init_shader_->use(); // This sets glUseProgram to the init shader.
GLint attrib = init_variable_ids_[constants::init::Variables::IN_VERTEX_POS]->id;
glEnableVertexAttribArray( attrib );
glBindBuffer( GL_ARRAY_BUFFER, full_size_quad_->getBufferID() );
glVertexAttribPointer( attrib, full_size_quad_->vertexSize,
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 ); // Array buffer offset
glDrawArrays( GL_TRIANGLES, 0, full_size_quad_->numVertices );
// Here I'm just printing some sample values to confirm it is blank/black.
glBindTexture( GL_TEXTURE_2D, framebuffers_[0]->getTexture(0) );
GLfloat *pixels = new GLfloat[particle_texture_width_ * particle_texture_height_ * 4];
glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels);
std::cout << "some pixel entries:\n";
std::cout << "pixel0: " << pixels[0] << " " << pixels[1] << " " << pixels[2] << " " << pixels[3] << std::endl;
std::cout << "pixel10: " << pixels[40] << " " << pixels[41] << " " << pixels[42] << " " << pixels[43] << std::endl;
std::cout << "pixel100: " << pixels[400] << " " << pixels[401] << " " << pixels[402] << " " << pixels[403] << std::endl;
std::cout << "pixel10000: " << pixels[40000] << " " << pixels[40001] << " " << pixels[40002] << " " << pixels[40003] << std::endl;
// They all print 0, 0, 0, 1
delete[] pixels;
glBindBuffer( GL_ARRAY_BUFFER, 0 );
glDisableVertexAttribArray( attrib );
init_shader_->stopUsing();
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
// Return the viewport to its previous state.
glViewport(prevViewport[0], prevViewport[1], prevViewport[2], prevViewport[3] );
}
You can see this is also where I try reading back some pixel values, which all return (0, 0, 0, 1).
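As a cross-check (not part of the original code), the same values can also be read straight from the framebuffer attachment instead of from the texture object, which rules out texture-binding mistakes; a sketch, assuming the FBO is still bound at the point of the readback:
// Read directly from colour attachment 0 of the currently bound framebuffer.
glReadBuffer( GL_COLOR_ATTACHMENT0 );
glReadPixels( 0, 0, particle_texture_width_, particle_texture_height_, GL_RGBA, GL_FLOAT, pixels );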
The full_size_quad_ used here is defined by:
// Create quad for textures to draw onto.
static const GLfloat quad_array[] = {
-1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
};
std::vector<GLfloat> quad(quad_array, quad_array + 18);
full_size_quad_ = new VertexBuffer(3, 6, quad);
VertexBuffer is my own class, which I don't think I'll need to show here. It just glGens and glBinds the vertex array and buffers.
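For completeness, a minimal sketch of what such a class might do internally (the real implementation isn't shown in the question; vao_ and buffer_id_ are assumed member names):
VertexBuffer::VertexBuffer(int vertexSize, int numVertices, const std::vector<GLfloat>& data)
    : vertexSize(vertexSize), numVertices(numVertices) {
    glGenVertexArrays( 1, &vao_ );
    glBindVertexArray( vao_ );
    glGenBuffers( 1, &buffer_id_ );
    glBindBuffer( GL_ARRAY_BUFFER, buffer_id_ );
    glBufferData( GL_ARRAY_BUFFER, data.size() * sizeof(GLfloat), data.data(), GL_STATIC_DRAW );
}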
The shader used here has this for the vertex shader:
#version 400 core
in vec3 inVertexPos;
void main() {
gl_Position = vec4(inVertexPos, 1.0);
}
and this for the fragment shader:
#version 400 core
layout(location = 0) out vec4 position;
layout(location = 1) out vec4 velocity;
layout(location = 2) out vec4 other;
uniform vec2 uResolution;
// http://stackoverflow.com/questions/4200224/random-noise-functions-for-glsl
float rand(vec2 seed) {
return fract(sin(dot(seed.xy,vec2(12.9898,78.233))) * 43758.5453);
}
void main() {
vec2 uv = gl_FragCoord.xy/uResolution.xy;
vec3 pos = vec3(uv.x, uv.y, rand(uv));
vec3 vel = vec3(-2.0, 0.0, 0.0);
vec4 col = vec4(1.0, 0.3, 0.1, 0.5);
position = vec4(pos, 1.0);
velocity = vec4(vel, 1.0);
other = col;
}
uResolution is the width and height of the textures in pixels, set by:
init_shader_->use();
glUniform2f( init_uniform_ids_[constants::init::Uniforms::U_RESOLUTION]->id, particle_texture_width_, particle_texture_height_ );
Out of curiosity I tried changing position = vec4(pos, 1.0); to different values, but my pixel printing still gave 0 0 0 1.
I've been debugging this for about 20-30 hours now and have looked up a dozen or so different tutorials and other questions here, but for the last several hours I don't feel I've gained any ground.
Is anything here standing out for why the textures appear to be blank/black, or is there anything else that needs addressing? I was using this project to learn about shaders, so I'm quite new to all of this. Any help would be immensely appreciated.
This was resolved with a complete re-write of the code.
I'm still not 100% sure where the problem was, but I think I likely had some mismatched enable/disable calls.
I am using OpenGL version string: 4.6 (Compatibility Profile) Mesa 21.3.5. I load objects from an .obj file with matching textures, 51 textures to be exact. To be able to match the various textures to the right triangles and surfaces, I am adding texture coordinates along with a texture identifier to my vertex array buffer.
So now I use a vertex layout with this stride:
x y z u v r g b id
which breaks down as: position, texture coordinates, colour, texture id.
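Assuming tightly packed floats and attribute locations 0-3 (the question doesn't show the actual calls), the matching attribute setup would look something like this:
// 9 floats per vertex: position (3), texture coords (2), colour (3), texture id (1).
GLsizei stride = 9 * sizeof(float);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);                   // x y z
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float))); // u v
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)(5 * sizeof(float))); // r g b
glVertexAttribPointer(3, 1, GL_FLOAT, GL_FALSE, stride, (void*)(8 * sizeof(float))); // id
for (GLuint i = 0; i < 4; ++i) glEnableVertexAttribArray(i);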
The problem now is that I can generate textures and bind them to their corresponding texture units, but I cannot select each texture in my shader based on the texture id I read from my vertex array buffer. To clarify: I generate and bind textures like so:
void loadTextures(void) {
std::cout << "Size: " << this->materialLib.materials.size() << std::endl;
std::cout << "Loading textures" << std::endl;
for (long unsigned int i = 0; i < this->materialLib.materials.size(); i++) {
Material m = this->materialLib.materials[i];
// init
struct TEXTURE *t = (struct TEXTURE*)malloc(sizeof(struct TEXTURE));
t->data = NULL;
t->path = (char*)malloc(sizeof(char) * (m.ambient_map.length() + 1)); // add one to include \0 character
t->texture_int = 0;
strcpy(t->path, m.ambient_map.c_str());
t->data = stbi_load( t->path, &t->width, &t->height, &t->nrChannels, 0);
textures.push_back(t);
std::cout << '\r';
std::cout << i << "/" << this->materialLib.materials.size();
std::cout.flush(); // see wintermute's comment
}
}
void applyTextures(Shader ourShader) {
for (long unsigned int i = 0; i < textures.size(); i++) {
struct TEXTURE *t = textures[i];
glGenTextures( 1, &t->texture_int );
glBindTexture( GL_TEXTURE_2D, t->texture_int );
// set the texture wrapping parameters
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
// set texture filtering parameters
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
// load image, create texture and generate mipmap
//stbi_set_flip_vertically_on_load(true); // tell stb_image.h to flip loaded texture's on the y-axis.
if (t->data) {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, t->width, t->height, 0, GL_RGB, GL_UNSIGNED_BYTE, t->data);
glGenerateMipmap(GL_TEXTURE_2D);
} else
std::cout << "ERROR::LOAD::TEXTURE " << t->path << " " << i << std::endl;
}
/* This could be optimized by moving it into the loop above, but we'll keep it separate for simplicity. */
GLint gl_textures[MAX_TEXTURES];
ourShader.use();
for (long unsigned int i = 0; i < textures.size() && i < MAX_TEXTURES; i++) {
gl_textures[i] = i;
// set texture unit as a uniform for the fragment shader, could be set by ourShader.setInt()
glUniform1i(glGetUniformLocation(ourShader.ID, ("texture" + std::to_string(1 + i)).c_str()), i); // 'i' is our texture unit; note std::to_string, since "texture" + (1 + i) would be pointer arithmetic, not concatenation
}
// set texture units as a uniform for the fragment shader
glUniform1iv(glGetUniformLocation(ourShader.ID, "textures"), MAX_TEXTURES, gl_textures);
}
void bindTextures(void) {
for (long unsigned int i = 0; i < textures.size(); i++) {
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, textures[i]->texture_int);
}
}
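One caveat worth checking with 51 textures (an aside, not raised in the question itself): implementations only guarantee a limited number of texture units, so binding GL_TEXTURE0 + i for large i may silently fail. The limits can be queried like so:
GLint maxFragUnits = 0, maxCombinedUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxFragUnits);              // fragment-stage limit, commonly 16 or 32
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombinedUnits); // limit across all stages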
This gives me various uniforms in my fragment shader that I can select like so:
uniform sampler2D texture1;
uniform sampler2D texture2;
...
FragColor = texture(texture2, TexCoord);
I want to be able to select a texture like this:
uniform sampler2D textures[MAX_TEXTURES];
...
FragColor = texture(textures[id], TexCoord);
What I have tried is passing the id as the first argument to texture(), but that gave me an error. I also tried casting my id to a sampler2D type, but that didn't work either.
How can I fix this?
Here is my code, modified from the Qt example Examples\Qt-5.14.2\quick\scenegraph\openglunderqml:
void SquircleRenderer::init()
{
unsigned char* data = (unsigned char*)malloc(1200*4);
for(int i=0;i<600;i++)
{
data[i*4] = 0;
data[i*4+1] = 255;
data[i*4+2] = 0;
data[i*4+3] = 255;
}
for(int i=600;i<1200;i++)
{
data[i*4] = 0;
data[i*4+1] = 0;
data[i*4+2] = 255;
data[i*4+3] = 255;
}
if (!m_program) {
QSGRendererInterface *rif = m_window->rendererInterface();
Q_ASSERT(rif->graphicsApi() == QSGRendererInterface::OpenGL || rif->graphicsApi() == QSGRendererInterface::OpenGLRhi);
initializeOpenGLFunctions();
if (texs[0])
{
glDeleteTextures(1, texs);
}
glGenTextures(1, texs);
glBindTexture(GL_TEXTURE_2D, texs[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 30, 40, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
m_program = new QOpenGLShaderProgram();
m_program->addCacheableShaderFromSourceCode(QOpenGLShader::Vertex,
"attribute highp vec4 vertices;"
"varying highp vec2 coords;"
"void main() {"
" gl_Position = vertices;"
" coords = vertices.xy;"
"}");
m_program->addCacheableShaderFromSourceCode(QOpenGLShader::Fragment,
"varying highp vec2 coords;"
"uniform sampler2D inputImageTexture;"
"void main() {"
" gl_FragColor = texture2D(inputImageTexture, coords);"
"}");
m_program->bindAttributeLocation("vertices", 0);
m_program->link();
arrUni[0] = m_program->uniformLocation("inputImageTexture");
}
}
void SquircleRenderer::paint()
{
// Play nice with the RHI. Not strictly needed when the scenegraph uses
// OpenGL directly.
m_window->beginExternalCommands();
m_program->bind();
m_program->enableAttributeArray(0);
float values[] = {
-1, 1,
1, 1,
-1, -1,
1, -1
};
// This example relies on (deprecated) client-side pointers for the vertex
// input. Therefore, we have to make sure no vertex buffer is bound.
glBindBuffer(GL_ARRAY_BUFFER, 0);
m_program->setAttributeArray(0, GL_FLOAT, values, 2);//values
//m_program->setUniformValue("t", (float) m_t);
qDebug()<<m_viewportSize.width()<<m_viewportSize.height()<<"\n";
glViewport(0, 0, m_viewportSize.width(), m_viewportSize.height());
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texs[0]);
glUniform1i(arrUni[0], 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
m_program->disableAttributeArray(0);
m_program->release();
m_window->endExternalCommands();
}
As you can see in the picture, it produces the same picture 4 times. Could you tell me how to produce 1 picture filling the whole window?
I tried many approaches, but nothing worked; I guess the problem is in the values array or the glTexImage2D call.
Textures are mapped across the [0, 1] range, and values outside of that range are wrapped back into it (the default GL_REPEAT mode), which creates a repeating pattern. Interpreting the texture over the [-1, 1] range leads to what you are seeing, since you are mapping exactly twice the UV range on both axes.
There are a few ways to fix this, but my personal preference for a full-framebuffer pass like this is to have the attribute be normalized, and then have it converted to the expected [-1, 1] range for the clip-space coordinate in the vertex shader:
float values[] = {
0.f, 1.f,
1.f, 1.f,
0.f, 0.f,
1.f, 0.f
};
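and then, in the vertex shader: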
gl_Position = vertices * 2.0 - vec4(1.0, 1.0, 0.0, 1.0);
Another common technique is to do away with the attribute buffer altogether, and use gl_VertexID to directly generate both the UVs and coordinates.
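A sketch of that approach (assuming a GL 3.3 / ES 3.0 context, since gl_VertexID is not available in the older shader dialect used above); it draws one full-screen triangle with glDrawArrays(GL_TRIANGLES, 0, 3) and no vertex buffers bound:
#version 330 core
out vec2 coords;
void main() {
    // gl_VertexID 0, 1, 2 yields (0,0), (2,0), (0,2): a single triangle covering the screen.
    coords = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    gl_Position = vec4(coords * 2.0 - 1.0, 0.0, 1.0);
}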
I'm having a problem trying to read an image from a fragment shader. First I write into the image in shader program A (I'm just painting blue onto the image), then I read from it in another shader program B to display it, but the read doesn't return the right colour: I get a black image.
Unexpected result
This is my application code:
void GLAPIENTRY MessageCallback(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam)
{
std::cout << "GL CALLBACK: type = " << std::hex << type << ", severity = " << std::hex << severity << ", message = " << message << "\n"
<< (type == GL_DEBUG_TYPE_ERROR ? "** GL ERROR **" : "") << std::endl;
}
class ImgRW
: public Core
{
public:
ImgRW()
: Core(512, 512, "JFAD")
{}
virtual void Start() override
{
glEnable(GL_DEBUG_OUTPUT);
glDebugMessageCallback(MessageCallback, nullptr);
shader_w = new Shader("w_img.vert", "w_img.frag");
shader_r = new Shader("r_img.vert", "r_img.frag");
glGenTextures(1, &space);
glBindTexture(GL_TEXTURE_2D, space);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, 512, 512);
glBindImageTexture(0, space, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glGenVertexArrays(1, &vertex_array);
glBindVertexArray(vertex_array);
}
virtual void Update() override
{
shader_w->use(); // writing shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
shader_r->use(); // reading shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
virtual void End() override
{
delete shader_w;
delete shader_r;
glDeleteTextures(1, &space);
glDeleteVertexArrays(1, &vertex_array);
}
private:
Shader* shader_w;
Shader* shader_r;
GLuint vertex_array;
GLuint space;
};
#if 1
CORE_MAIN(ImgRW)
#endif
and these are my fragment shaders:
Writing to the image (GLSL):
#version 430 core
layout (binding = 0, rgba32f) uniform image2D img;
out vec4 out_color;
void main()
{
imageStore(img, ivec2(gl_FragCoord.xy), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Reading from the image (GLSL):
#version 430 core
layout (binding = 0, rgba32f) uniform image2D img;
out vec4 out_color;
void main()
{
vec4 color = imageLoad(img, ivec2(gl_FragCoord.xy));
out_color = color;
}
The only way I get the correct result is if I change the order of the drawing commands, in which case I don't even need the memory barriers, like this (in the Update function above):
shader_r->use(); // reading shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
shader_w->use(); // writing shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I don't know if the problem is the graphics card or the drivers, whether I'm missing some flag that enables memory barriers, whether I used the wrong barrier bits, or whether I placed the barriers in the wrong part of the code.
The vertex shader for both shader programs is the following:
#version 430 core
void main()
{
vec2 v[4] = vec2[4]
(
vec2(-1.0, -1.0),
vec2( 1.0, -1.0),
vec2(-1.0, 1.0),
vec2( 1.0, 1.0)
);
vec4 p = vec4(v[gl_VertexID], 0.0, 1.0);
gl_Position = p;
}
and my init function is:
void Window::init()
{
glfwInit();
window = glfwCreateWindow(getWidth(), getHeight(), name, nullptr, nullptr);
glfwMakeContextCurrent(window);
glfwSetFramebufferSizeCallback(window, framebufferSizeCallback);
glfwSetCursorPosCallback(window, cursorPosCallback);
//glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
assert(gladLoadGLLoader((GLADloadproc)glfwGetProcAddress) && "Couldn't initialize OpenGL");
glEnable(GL_DEPTH_TEST);
}
and my Run function calls my Start, Update, and End functions:
void Core::Run()
{
std::cout << glGetString(GL_VERSION) << std::endl;
Start();
float lastFrame{ 0.0f };
while (!window.close())
{
float currentFrame = static_cast<float>(glfwGetTime());
Time::deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
glViewport(0, 0, getWidth(), getHeight());
glClearBufferfv(GL_COLOR, 0, &color[0]);
glClearBufferfi(GL_DEPTH_STENCIL, 0, 1.0f, 0);
Update();
glfwSwapBuffers(window);
glfwPollEvents();
}
End();
}
glEnable(GL_DEPTH_TEST);
As I suspected.
Just because a fragment shader doesn't write a color output doesn't mean that those fragments will not affect the depth buffer. If the fragment passes the depth test and the depth write mask is on (assuming no other state is involved), it will update the depth buffer with the current fragment's depth (and the color buffer with uninitialized values, but that's a different matter).
Since you're drawing the same geometry both times, the second rendering's fragments will get the same depth values as the corresponding fragments from the first rendering. But the default depth function is GL_LESS. Since any value is not less than itself, this means that all fragments from the second rendering fail the depth test.
And therefore, they don't get rendered.
So just turn off the depth test. And while you're at it, turn off color writes for your "writing" rendering pass, since you're not writing to the color buffers.
Now, you do properly need the memory barrier between the two draw calls. But you only need the GL_SHADER_IMAGE_ACCESS_BARRIER_BIT, since that's how you're reading the data (via image load/store, not samplers).
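Applied to the Update function from the question, the suggested fix might look like this (a sketch; only the depth test, the colour mask, and the barrier bit change):
virtual void Update() override
{
    glDisable(GL_DEPTH_TEST); // don't let the first pass write depth that blocks the second
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // the writing pass outputs no colour
    shader_w->use(); // writing shader
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT); // make imageStore results visible to imageLoad
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    shader_r->use(); // reading shader
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}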
I am attempting to submit a small float array as an OpenGL ES 2.0 Texture and read it back in order to understand GPGPU better. I am attempting to do this on the SGX 530 GPU on a TI OMAP3 ARM SoC.
I've been following an online guide on GPGPU with OpenGL ES.
My code currently creates and populates 2 float arrays, and then creates "pass through" shaders like this:
void GLWidget::initializeGL()
{
// Max texture size in each direction
int maxSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE,&maxSize);
texSize = sqrt(maxSize);
texSize = 2; // overridden to a tiny size for testing
qDebug() << "GL_MAX_TEXTURE_SIZE " << maxSize << " SQRT " << texSize;
// Define input and output arrays of RGBA format with each channel being u8
m_Format = 4;
dataX = (quint8*)malloc(m_Format*texSize*texSize*sizeof(quint8));
dataY = (quint8*)malloc(m_Format*texSize*texSize*sizeof(quint8));
// Setup some dummy data
int arraySize = m_Format*texSize*texSize;
qDebug() << "Array Size: " << arraySize;
for (int i = 0; i < arraySize ; i++) {
dataX[i] = i;
}
for (int i = 0; i < arraySize ; i++) {
dataY[i] = 0;
}
QGLShader *vshader = new QGLShader(QGLShader::Vertex);
const char *vsrc =
"attribute highp vec4 vertex;\n"
"attribute highp vec4 texCoord;\n"
"varying vec2 texc;\n"
"void main(void)\n"
"{\n"
" gl_Position = vertex;\n"
" texc = texCoord.xy;\n"
"}\n";
vshader->compileSourceCode(vsrc);
QGLShader *fshader = new QGLShader(QGLShader::Fragment);
const char *fsrc =
"varying highp vec2 texc;\n"
"uniform sampler2D tex;\n"
"void main(void)\n"
"{\n"
" gl_FragColor = texture2D(tex, texc);\n"
"}\n";
fshader->compileSourceCode(fsrc);
program.addShader(vshader);
program.addShader(fshader);
program.link();
vertexAttr = program.attributeLocation("vertex");
texCoordAttr = program.attributeLocation("texCoord");
textureUniform = program.uniformLocation("tex");
}
I then attempt to submit the texture to the GPU, render it to a framebuffer, and read it back like this:
void GLWidget::renderToScene()
{
// Bind and configure a texture
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &m_hTexture);
glBindTexture(GL_TEXTURE_2D, m_hTexture);
glUniform1i(textureUniform, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texSize, texSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, dataX); // Allocate buffer to hold RGBA with 8 bits per channel
// Generate handles for Frame Buffer Object
glGenFramebuffers(1, &m_hFBO);
// Switch the render target to the current FBO to update the texture map
glBindFramebuffer(GL_FRAMEBUFFER, m_hFBO);
qDebug() << "Data before roundtrip:";
int arraySize = m_Format*texSize*texSize;
for (int i = 0 ; i < arraySize ; i++)
qDebug() << dataX[i];
// FBO attachment is complete?
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
{
qDebug() << "Frame buffer is present...";
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_hTexture, 0);
//glTexSubImage2D(GL_TEXTURE_2D,0,0,0,texSize,texSize, GL_RGBA,GL_UNSIGNED_BYTE,dataX); // pixel data is RGBA and each channel u8
static const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat textureVertices[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
// ensure no VBOs or IBOs are bound
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// Set pointers to the arrays
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(ATTRIB_VERTEX);
glDisableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
}
qDebug() << "Zero data:";
for (int i = 0; i < arraySize ; i++)
qDebug() << dataY[i];
// GPGPU Extract
glReadPixels(0, 0, texSize, texSize, GL_RGBA,GL_UNSIGNED_BYTE,dataY);
// print out results
qDebug() << "Data after roundtrip:";
for (int i = 0; i < arraySize ; i++)
qDebug() << dataY[i];
// Unbind the FBO so rendering will return to the main buffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
qDebug() << "Done...";
sleep(60000);
}
The draw call looks like this:
void GLWidget::draw() {
glClearColor(0.1f, 0.1f, 0.2f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glFrontFace(GL_CW);
glCullFace(GL_FRONT);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
// Draw
program.bind();
renderToScene();
program.release();
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
swapBuffers();
}
Everything compiles and runs, but my data output looks like this:
Found SGX/MBX driver, enabling FullClearOnEveryFrame
Found v1.4 driver, enabling brokenTexSubImage
Found non-Nokia v1.4 driver, enabling brokenFBOReadBack
GL_MAX_TEXTURE_SIZE 2048 SQRT 2
Array Size: 16
Data before roundtrip:
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
Zero data:
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
Data after roundtrip:
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
Done...
How can I submit and read back my texture properly? Thanks!
You've got a number of issues with this code, the main one being that ES 2.0 does not support float textures. Specifically:
GL_FLOAT is not valid as the type argument of glTexImage2D() and glTexSubImage2D(). This needs to be GL_UNSIGNED_BYTE, or one of various GL_UNSIGNED_SHORT_* options.
The only format/type combination for glReadPixels() supported across all implementations is GL_RGBA/GL_UNSIGNED_BYTE. There is a second implementation dependent combination you can query.
As others already pointed out, you're attempting to fill a texture with data, and then reading data from the current framebuffer. So even if both operations used valid arguments, you still would not get the data.
I would strongly recommend that you check some documentation before blindly trying to pass arguments that may or may not be valid to API calls. The man pages are a good start. Also, use glGetError(), which will return error codes if you make invalid calls.
Implementations can support extensions that add support for some of the functionality you're looking for. E.g. OES_texture_float adds support for float textures. But you need to check for the presence of these extensions before attempting to use them.
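A sketch of those two checks, using only core ES 2.0 calls (strstr comes from <string.h>):
// Check for float-texture support before using GL_FLOAT uploads.
const char *exts = (const char *)glGetString(GL_EXTENSIONS);
int hasFloatTex = exts && strstr(exts, "OES_texture_float");
// Query the implementation-defined glReadPixels format/type combination.
GLint readFormat = 0, readType = 0;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);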
The code shown does not have a draw call that would use the texture and render to the default color buffer (a call like glDrawElements). glReadPixels reads from the currently bound framebuffer; in OpenGL ES 2.0 the FBO color attachment to read from is GL_COLOR_ATTACHMENT0, not GL_COLOR_ATTACHMENT0_EXT.
For the vast majority of TI devices with framebuffer/nullws support, you can use the profiling code at the link below, which tests some GPGPU scenarios, including edge detection, colour conversion, and using an FBO. Refer to the calls to tests 11 and 17:
https://github.com/prabindh/sgxperf/blob/master/sgxperf_gles20_vg.cpp
BTW, OMAP3 production chipsets have (or had) the SGX530 GPU, not the SGX540; the SGX540 and its variants are used in OMAP4 chipsets. You would be extremely lucky to have a production OMAP3 variant with an SGX540 :) You can confirm this in the boot logs, or with the debug version of the PVR drivers.
Also mention what board/platform is being used.
I then attempt to submit the texture to the GPU and read it back like this
[...]
So it is obvious that I am not "getting" the buffer back from OpenGL. I would expect it to contain the 1.5's, since I am reading the texture back with glReadPixels into my dataY array.
Why would it? It is the back buffer you are reading back: you never render the texture to the color buffer, so reading back from the color buffer will never return the texture data.
The original code uses framebuffer objects to access the texture as a color buffer, which will have a totally different semantic in this scenario (it is still a mostly useless benchmark, though). Note that most real world ES2 devices also support the GL_OES_framebuffer_object extension, so you can conceptually port that.
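Framebuffer objects are in fact core in ES 2.0, so a sketch of that readback path (hypothetical, reusing the question's m_hTexture, texSize, and dataY) could look like:
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_hTexture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    // Reads back the texture's contents through the FBO; no draw call needed.
    glReadPixels(0, 0, texSize, texSize, GL_RGBA, GL_UNSIGNED_BYTE, dataY);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);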
Image with two cubes: the result when using the blit call (test code). Image with one cube: the result when not using the blit call (test code).
Folks, please follow the attached pictures and the code below. I've been struggling with this for the last 2 days.
I'm facing an issue using an FBO in the code below. I can attach and render to the FBO successfully, but somehow I'm not able to use it as a texture (colour and depth buffers) in the same program; my glReadPixels call after rendering to the FBO shows all-zero data.
When the generated FBO (colour, depth) is used in the GLSL shader code called from the draw2() function, it displays a blended white cube without texture.
Please suggest. Below is the complete code. I am creating an FBO and using 2 shaders: one for normal rendering, and the other using the colour and depth from the FBO rendered earlier.
Here is my shader code:
init_gl()
{
// This fragment shader does the normal rendering, used by the program in draw().
static const char *fragment_shader_source =
"precision mediump float; \n"
" \n"
"varying vec4 vVaryingColor; \n"
" \n"
"void main() \n"
"{ \n"
" gl_FragColor = vVaryingColor; \n"
"} \n";
// This fragment shader is used by program2 in draw2().
static const char *fragment_shader_source2 =
"precision mediump float; \n"
" \n"
"varying vec4 vVaryingColor; \n"
"uniform sampler2D color; \n"
"uniform sampler2D depth; \n"
" \n"
"void main() \n"
"{ \n"
"float r = pow(texture2D(color, gl_FragCoord.xy / vec2(256, 256)).a, 128.0); \n"
"float g = texture2D(depth, gl_FragCoord.xy / vec2(256, 256)).a;\n"
"float b = pow(texture2D(depth, gl_FragCoord.xy / vec2(256, 256)).a, 128.0); \n"
"gl_FragColor = vec4(r, g, b, 1.); \n"
"} \n";
My first drawing pass renders into the FBO:
static void draw(uint32_t i)
{
EGL_CHECK(glEnable(GL_DEPTH_TEST));
glEnable(GL_TEXTURE_2D);
EGL_CHECK(glDepthFunc(GL_ALWAYS));
EGL_CHECK(glBindFramebuffer(GL_FRAMEBUFFER, fboInfo.id));
EGL_CHECK(glUseProgram(program));
//CUBE DRAWING
// Doing the pixel read here; it returns all zeros.
GLubyte *pixels = (GLubyte *)malloc(4 * 256 * 256);
EGL_CHECK(glReadBuffer(GL_COLOR_ATTACHMENT0));
EGL_CHECK(glReadPixels(0, 0, 256, 256, GL_RGBA, GL_UNSIGNED_BYTE, pixels));
// To test whether the FBO actually holds any rendered data.
EGL_CHECK(glBindFramebuffer(GL_FRAMEBUFFER, 0));
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboInfo.id);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 256, 256, 0, 0, 256, 256, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//TEST CODE ONLY
glBindTexture(GL_TEXTURE_2D, fboInfo.color); // bind the colour texture, not the framebuffer id
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
draw2(i);
}
The second drawing pass does not render into the FBO, but it uses the fragment shader that can access the FBO data (colour, depth):
static void draw2(uint32_t i)
{
glUseProgram(program2);
//Draw Same Kind of Cube , Which we draw in draw() function
}
These are the functions that create the FBO's colour and depth textures:
GLuint createTexture2D(const int w, const int h, GLint internalFormat, GLenum format, GLenum type)
{
GLuint textureIdX;
EGL_CHECK(glGenTextures(1, &textureIdX));
EGL_CHECK(glBindTexture(GL_TEXTURE_2D, textureIdX));
EGL_CHECK(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST));
EGL_CHECK(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST));
EGL_CHECK(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE));
EGL_CHECK(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE));
if (GL_DEPTH_COMPONENT == format) {
EGL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE));
EGL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY));
}
EGL_CHECK(glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, w, h, 0, format, type, 0));
EGL_CHECK(glBindTexture(GL_TEXTURE_2D, 0));
return textureIdX;
}
int createFBO(void)
{
int result = 0;
unsigned int fb = 0;
fboInfo.color = createTexture2D(WINDOW_WIDTH, WINDOW_HEIGHT, GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE);
fboInfo.depth = createTexture2D(WINDOW_WIDTH, WINDOW_HEIGHT, GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT, GL_FLOAT);
EGL_CHECK(glGenFramebuffers(1, &fboInfo.id));
EGL_CHECK(glBindFramebuffer(GL_FRAMEBUFFER, fboInfo.id));
EGL_CHECK(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboInfo.color, 0));
EGL_CHECK(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, fboInfo.depth, 0));
int data = EGL_CHECK(glCheckFramebufferStatus(GL_FRAMEBUFFER));
if (GL_FRAMEBUFFER_COMPLETE == glCheckFramebufferStatus(GL_FRAMEBUFFER)) {
printf("FBO %d set up successfully\n", fboInfo.id);
result = 1;
}
else {
printf("FBO %d NOT set up properly!\n", fboInfo.id);
}
EGL_CHECK(glBindFramebuffer(GL_FRAMEBUFFER, 0));
return result;
}
This is how my FBO struct looks:
typedef struct {
GLuint id;
GLuint color;
GLuint depth;
} FBOInfo;
My main function looks as below, where I create the FBO, then call my draw function and swap the buffers:
unsigned long i = 0;
int ret = init_gl();
ret = createFBO();
draw(i);
EGL_CHECK(eglSwapBuffers(eglDisplay, eglSurface));
Since this is quite a lot of code, and you suggested that the problem is with the FBO setup, I focused on that section. I did spot a critical problem in that part of the code where you set up the attachments:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, fboInfo.depth, 0);
If you call glGetError() after this, you should see an error returned, because GL_RENDERBUFFER is not a valid value for the 3rd argument. The only valid arguments are GL_TEXTURE_2D and the GL_TEXTURE_CUBE_MAP_* values. You have to use GL_TEXTURE_2D in this case.
It might actually be better to use a renderbuffer for the depth buffer if you never use it as a texture. But to do that, you'll have to use all the corresponding calls (a sketch follows the list):
Create the id with glGenRenderbuffers.
Bind it with glBindRenderbuffer.
Allocate it with glRenderbufferStorage.
Attach it to the FBO with glFramebufferRenderbuffer.
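Put together, a sketch of those four steps (reusing the question's WINDOW_WIDTH, WINDOW_HEIGHT, and fboInfo, with the renderbuffer replacing the depth texture):
GLuint depthRb = 0;
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, WINDOW_WIDTH, WINDOW_HEIGHT);
glBindFramebuffer(GL_FRAMEBUFFER, fboInfo.id);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);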