I have used https://github.com/akrinke/Font-Stash.git for some desktop applications. Now I want to use it on a Raspberry Pi, which uses GLES2. I looked into the code, and the only path that doesn't work on GLES is the flush_draw function:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts);
glTexCoordPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts+2);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
I'm trying to port it to GLES like this:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
GLint position_index = get_attrib(stash->program, "position");
glEnableVertexAttribArray(position_index);
glVertexAttribPointer (position_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts);
GLint texture_coord_index = get_attrib(stash->program, "texCoord");
glEnableVertexAttribArray(texture_coord_index);
glVertexAttribPointer (texture_coord_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts + 2);
GLint texture_index = get_uniform(stash->program, "texture");
glUniform1i(texture_index, 0);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
with this vertex shader:
attribute vec4 position;
attribute vec2 texCoord;
varying vec2 texCoordVar;
void main() {
gl_Position = position;
texCoordVar = texCoord;
}
and this fragment shader:
precision mediump float; // set default precision for floats to medium
uniform sampler2D texture; // shader texture uniform
varying vec2 texCoordVar; // fragment texture coordinate varying
void main() {
// sample the texture at the interpolated texture coordinate
// and write it to gl_FragColor
gl_FragColor = texture2D( texture, texCoordVar);
}
but I can't get anything on screen.
Can anybody show me what's wrong with my code?
You should set up transformations in your vertex shader. The best way to port a fixed-function OpenGL app is to write vertex and fragment shaders that replicate the fixed pipeline, with the transformations set as uniforms that you update every time a transform changes.
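For example, a minimal sketch of the vertex shader above with such a uniform added (the name mvp is an illustrative assumption, not part of Font-Stash):

uniform mat4 mvp; // replicates the fixed-function projection * modelview
attribute vec4 position;
attribute vec2 texCoord;
varying vec2 texCoordVar;
void main() {
    gl_Position = mvp * position; // transform instead of passing through
    texCoordVar = texCoord;
}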
glEnable(GL_TEXTURE_2D) is not valid in GLES2, by the way. Also, you're not doing any manipulation of the position in your vertex shader, so unless the coordinates are guaranteed to sit within the frustum and you're just passing them through to the rasterizer, you are leaving it to luck whether or not they end up inside it. Are you sure you've accounted for everything the fixed-function pipeline used to handle regarding transforms?
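As a rough sketch of the C side, assuming an mvp uniform like the one above and that the vertices are in pixel coordinates (screen_w and screen_h are assumptions): build an orthographic matrix that maps pixels to clip space and upload it whenever the transform changes.

// Column-major 2D orthographic matrix: x in [0, screen_w] maps to [-1, 1],
// y in [0, screen_h] maps to [1, -1] (y flipped, since text coordinates
// usually start at the top-left).
GLfloat ortho[16] = {
    2.0f / screen_w,  0.0f,             0.0f,  0.0f,
    0.0f,            -2.0f / screen_h,  0.0f,  0.0f,
    0.0f,             0.0f,            -1.0f,  0.0f,
   -1.0f,             1.0f,             0.0f,  1.0f,
};
GLint mvp_index = glGetUniformLocation(stash->program, "mvp");
glUniformMatrix4fv(mvp_index, 1, GL_FALSE, ortho);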
Related
I am tearing my hair out at this problem! I have a simple vertex and fragment shader that worked perfectly (and still does) on an old Vaio laptop. It's for a particle system, and uses point sprites and a single texture to render particles.
The problem starts when I run the program on my desktop, with a much newer graphics card (Nvidia GTX 660). I'm pretty sure I've narrowed it down to the fragment shader: if I ignore the texture and simply pass inColor out again, everything works as expected.
When I include the texture in the shader calculations, as you can see below, all points drawn while that shader is in use appear in the center of the screen, regardless of camera position.
You can see a whole mess of particles dead center using the suspect shader, and untextured particles rendering correctly to the right.
Vertex Shader to be safe:
#version 150 core
in vec3 position;
in vec4 color;
out vec4 Color;
uniform mat4 view;
uniform mat4 proj;
uniform float pointSize;
void main() {
Color = color;
gl_Position = proj * view * vec4(position, 1.0);
gl_PointSize = pointSize;
}
And the fragment shader I suspect to be the issue, but really can't see why:
#version 150 core
in vec4 Color;
out vec4 outColor;
uniform sampler2D tex;
void main() {
vec4 t = texture(tex, gl_PointCoord);
outColor = vec4(Color.r * t.r, Color.g * t.g, Color.b * t.b, Color.a * t.a);
}
Untextured particles use the same vertex shader, but the following fragment shader:
#version 150 core
in vec4 Color;
out vec4 outColor;
void main() {
outColor = Color;
}
The main program has a loop that processes SFML window events and calls two functions, draw and update. update doesn't touch GL at any point; draw looks like this:
void draw(sf::Window* window)
{
glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
sf::Texture::bind(&particleTexture);
for (ParticleEmitter* emitter : emitters)
{
emitter->useShader();
camera.applyMatrix(shaderProgram, window);
emitter->draw();
}
}
emitter->useShader() is just a call to glUseProgram() with a GLuint handle to a shader program that is stored in the emitter object on creation.
camera.applyMatrix():
GLuint projUniform = glGetUniformLocation(program, "proj");
glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
...
GLint viewUniform = glGetUniformLocation(program, "view");
glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));
emitter->draw() in its entirety:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Build a new vertex buffer object
int vboSize = particles.size() * vboEntriesPerParticle;
std::vector<float> vertices;
vertices.reserve(vboSize);
for (unsigned int particleIndex = 0; particleIndex < particles.size(); particleIndex++)
{
Particle* particle = particles[particleIndex];
particle->enterVertexInfo(&vertices);
}
// Bind this emitter's Vertex Buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Send vertex data to GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), &vertices[0], GL_STREAM_DRAW);
GLint positionAttribute = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute,
3,
GL_FLOAT,
GL_FALSE,
7 * sizeof(float),
0);
GLint colorAttribute = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(colorAttribute);
glVertexAttribPointer(colorAttribute,
4,
GL_FLOAT,
GL_FALSE,
7 * sizeof(float),
(void*)(3 * sizeof(float)));
GLuint sizePointer = glGetUniformLocation(shaderProgram, "pointSize");
glUniform1fv(sizePointer, 1, &pointSize);
// Draw
glDrawArrays(GL_POINTS, 0, particles.size());
And finally, particle->enterVertexInfo():
vertices->push_back(x);
vertices->push_back(y);
vertices->push_back(z);
vertices->push_back(r);
vertices->push_back(g);
vertices->push_back(b);
vertices->push_back(a);
I'm pretty sure this isn't an efficient way to do all this, but this was a piece of coursework I wrote a semester ago. I'm only revisiting it to record a video of it in action.
All shaders compile and link without error. By playing with the fragment shader, I've confirmed that I can use gl_PointCoord to vary a solid color across particles, so that is working as expected. When particles draw in the center of the screen, the texture is drawn correctly, albeit in the wrong place, so that is loaded and bound correctly as well. I'm by no means a GL expert, so that's about as much debugging as I could think to do myself.
This wouldn't be annoying me so much if it didn't work perfectly on an old laptop!
Edit: Included a ton of code
As it turned out in the comments, the shaderProgram variable used for setting the camera-related uniforms did not depend on the program actually in use. As a result, the uniform locations were queried from a different program when drawing the textured particles.
Uniform location assignment is totally implementation-specific. NVIDIA, for example, tends to assign locations in alphabetical order of the uniform names, so view's location would change depending on whether tex is actually present (and actively used) or not. If another implementation assigns them in the order they appear in the code, or by some other scheme, things might work by accident.
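A minimal sketch of the fix under that diagnosis: query the uniforms from the program that is actually in use for each draw. The applyMatrix body can stay as it is, since it already queries from its program argument; the getProgram() accessor below is hypothetical, not from the original code.

emitter->useShader();
// Pass the emitter's own program instead of the global shaderProgram,
// so glGetUniformLocation is asked about the program that will draw.
camera.applyMatrix(emitter->getProgram(), window); // hypothetical accessor
emitter->draw();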
I am currently trying to render the value of an integer using a bitmap (think scoreboard for Invaders), but I'm having trouble changing texture coordinates while the game is running.
I link the shader and data like so:
GLint texAttrib = glGetAttribLocation(shaderProgram, "texcoord");
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE,
4 * sizeof(float), (void*)(2 * sizeof(float)));
And in my shaders I do the following:
Vertex Shader:
#version 150
uniform mat4 mvp;
in vec2 position;
in vec2 texcoord;
out vec2 Texcoord;
void main() {
Texcoord = texcoord;
gl_Position = mvp * vec4(position, 0.0, 1.0) ;
}
FragmentShader:
#version 150 core
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
outColor = texture2D(tex, Texcoord);
}
How would I change this code/implement a function to be able to change the texcoord variable?
If you need to modify the texture coordinates frequently, but the other vertex attributes remain unchanged, it can be beneficial to keep the texture coordinates in a separate VBO. While it's generally preferable to use interleaved attributes, this is one case where that's not necessarily the most efficient solution.
So you would have two VBOs, one for the positions, and one for the texture coordinates. Your setup code will look something like this:
GLuint vboIds[2];
glGenBuffers(2, vboIds);
// Load positions.
glBindBuffer(GL_ARRAY_BUFFER, vboIds[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
// Load texture coordinates.
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_DYNAMIC_DRAW);
Note the different last argument to glBufferData(), which is a usage hint. GL_STATIC_DRAW suggests to the OpenGL implementation that the data will not be modified on a regular basis, while GL_DYNAMIC_DRAW suggests that it will be modified frequently.
Then, any time your texture coordinate data changes, you can modify it with glBufferSubData():
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(texCoords), texCoords);
Of course if only part of them change, you would only make the call for the part that changes.
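For example, a sketch that updates one quad's worth of texture coordinates, assuming 4 vertices of 2 floats each per quad (quadIndex and newQuadTexCoords are illustrative names):

GLintptr offset = quadIndex * 4 * 2 * sizeof(GLfloat); // byte offset of this quad
GLsizeiptr size = 4 * 2 * sizeof(GLfloat);             // 4 vertices * 2 floats
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferSubData(GL_ARRAY_BUFFER, offset, size, newQuadTexCoords);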
You did not specify how exactly the texture coordinates change. If it's just something like a simple transformation, it would be much more efficient to apply that transformation in the shader code, instead of modifying the original texture coordinates.
For example, say you only wanted to shift the texture coordinates. You could have a uniform variable for the shift in your vertex shader, and then add it to the incoming texture coordinate attribute:
uniform vec2 TexCoordShift;
in vec2 TexCoord;
out vec2 FragTexCoord;
...
FragTexCoord = TexCoord + TexCoordShift;
and then in your C++ code:
// Once during setup, after linking program.
TexCoordShiftLoc = glGetUniformLocation(program, "TexCoordShift");
// To change transformation, after glUseProgram(), before glDraw*().
glUniform2f(TexCoordShiftLoc, xShift, yShift);
So I make no promises on the efficiency of this technique, but it's what I do and I'll be damned if text rendering is what slows down my program.
I have a dedicated class to store a mesh, which consists of a few vectors of data and a few GLuint handles to my uploaded data. I upload data to OpenGL like this:
glBindBuffer(GL_ARRAY_BUFFER, position);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * data.position.size(), &data.position[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, normal);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * data.normal.size(), &data.normal[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, uv);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec2) * data.uv.size(), &data.uv[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * data.index.size(), &data.index[0], GL_DYNAMIC_DRAW);
Then, to draw it I go like this:
glEnableVertexAttribArray(positionBinding);
glBindBuffer(GL_ARRAY_BUFFER, position);
glVertexAttribPointer(positionBinding, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(normalBinding);
glBindBuffer(GL_ARRAY_BUFFER, normal);
glVertexAttribPointer(normalBinding, 3, GL_FLOAT, GL_TRUE, 0, NULL);
glEnableVertexAttribArray(uvBinding);
glBindBuffer(GL_ARRAY_BUFFER, uv);
glVertexAttribPointer(uvBinding, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, NULL);
glDisableVertexAttribArray(positionBinding);
glDisableVertexAttribArray(normalBinding);
glDisableVertexAttribArray(uvBinding);
This setup is designed for a full-fledged 3D engine, so you can definitely tone it down a little. Basically, I have 4 buffers: position, uv, normal, and index. You probably only need the first two, so just ignore the others.
Anyway, each time I want to draw some text, I upload my data using the first code chunk I showed, then draw it using the second chunk. It works pretty well, and it's very elegant. This is my code to draw text using it:
vbo(genTextMesh("some string")).draw(); //vbo is my mesh containing class
I hope this helps, if you have any questions feel free to ask.
I use a uniform vec2 to pass the texture offset into the vertex shader.
I am not sure how efficient that is, but if your texture coordinates have the same shape and are just moved around, then this is an option.
#version 150
uniform mat4 mvp;
uniform vec2 texOffset;
in vec2 position;
in vec2 texcoord;
out vec2 Texcoord;
void main() {
Texcoord = texcoord + texOffset;
gl_Position = mvp * vec4(position, 0.0, 1.0) ;
}
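On the C++ side this is then one uniform update per draw. A sketch for the scoreboard case, assuming the digits sit in a single row of the bitmap and cellWidth is the width of one digit in texture space (both names are assumptions):

GLint texOffsetLoc = glGetUniformLocation(shaderProgram, "texOffset");
// Shift the quad's base texcoords onto the cell for digit d (0-9).
glUniform2f(texOffsetLoc, d * cellWidth, 0.0f);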
I am working on a game, trying to implement the instanced CPU particle system from http://www.opengl-tutorial.org/intermediate-tutorials/billboards-particles/particles-instancing/
I managed to get it working in my code structure, but when I try to draw other objects in the same window, I can't. I have tested it, and it only lets me draw one thing: either the particle system or the other object.
The problem happens specifically in this code:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Use our shader
glUseProgram(particleprogramID->programHandle);
unit2 +=1;
glActiveTexture(GL_TEXTURE0 + unit2);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(TextureID, unit2);
glm::mat4 ViewMatrix = camera->getViewMatrix();
// Same as the billboards tutorial
glUniform3f(CameraRight_worldspace_ID, ViewMatrix[0][0], ViewMatrix[1][0], ViewMatrix[2][0]);
glUniform3f(CameraUp_worldspace_ID , ViewMatrix[0][1], ViewMatrix[1][1], ViewMatrix[2][1]);
glUniformMatrix4fv(ViewProjMatrixID, 1, GL_FALSE, &mvp[0][0]);
//glUniformMatrix4fv(modviewprojID, 1, GL_FALSE, &mvp[0][0]);
// 1st attribute buffer: vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, billboard_vertex_buffer);
glVertexAttribPointer(
0,
3,
GL_FLOAT,
GL_FALSE,
0,
(void*)0
);
// 2nd attribute buffer : positions of particles' centers
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, particles_position_buffer);
glVertexAttribPointer(
1,
4,
GL_FLOAT,
GL_FALSE,
0,
(void*)0
);
// 3rd attribute buffer : particles' colors
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, particles_color_buffer);
glVertexAttribPointer(
2,
4,
GL_UNSIGNED_BYTE,
GL_TRUE,
0,
(void*)0
);
glVertexAttribDivisor(0, 0);
glVertexAttribDivisor(1, 1);
glVertexAttribDivisor(2, 1);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, ParticlesCount);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
Then I try to draw my star:
unit2 += 1;
starTexture->Bind(unit2);
shaderObject ->useShader();
glUniform1i(glGetUniformLocation(shaderObject->programHandle, "colorTexture"), unit2);
glUniformMatrix4fv(glGetUniformLocation(shaderObject->programHandle, "modelMatrix"), 1, GL_FALSE, glm::value_ptr(star1->getModelMatrix()));
glUniformMatrix4fv(glGetUniformLocation(shaderObject->programHandle, "projectionMatrix"), 1, GL_FALSE, glm::value_ptr(projectionViewMatrix));
star1->draw();
The vertex shader for the particle system:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 squareVertices;
layout(location = 1) in vec4 xyzs; // Position of the center of the particule and size of the square
layout(location = 2) in vec4 color; // Position of the center of the particule and size of the square
// Output data ; will be interpolated for each fragment.
out vec2 UV;
out vec4 particlecolor;
// Values that stay constant for the whole mesh.
uniform vec3 CameraRight_worldspace;
uniform vec3 CameraUp_worldspace;
uniform mat4 VP; // Model-View-Projection matrix, but without the Model (the position is in BillboardPos; the orientation depends on the camera)
void main()
{
float particleSize = xyzs.w; // because we encoded it this way.
vec3 particleCenter_wordspace = xyzs.xyz;
vec3 vertexPosition_worldspace =
particleCenter_wordspace
+ CameraRight_worldspace * squareVertices.x * particleSize
+ CameraUp_worldspace * squareVertices.y * particleSize;
// Output position of the vertex
gl_Position = VP * vec4(vertexPosition_worldspace, 1.0f);
// UV of the vertex. No special space for this one.
UV = squareVertices.xy + vec2(0.5, 0.5);
particlecolor = color;
}
and the fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 UV;
in vec4 particlecolor;
// Output data
out vec4 color;
uniform sampler2D myTexture;
void main(){
// Output color = color of the texture at the specified UV
color = texture2D( myTexture, UV ) * particlecolor;
}
And it only displays the particle system.
Worth mentioning:
the object I want to draw is a star modelled in Blender; it is displayed correctly when drawn alone or with objects other than the particle system, and it has its own class with buffers for positions, UVs, indices and normals.
It seems like the star data are being swallowed by the buffer.
I appreciate any help.
I have been successful in rendering primitives with a colour component via the shader, and also translating them. However, upon attempting to load a texture and render it for the primitive via the shader, the primitives glitch; they should be squares:
As you can see, it successfully loads and applies the texture with the colour component to the single primitive in the scene.
If I then remove the colour component, I again have primitives, but oddly, they are scaled when I change the UVs. This should not be the case: only the texture coordinates should change! (Also, their origin is offset.)
My shader init code:
void renderer::initRendererGfx()
{
shader->compilerShaders();
shader->loadAttribute(@"Position");
shader->loadAttribute(@"SourceColor");
shader->loadAttribute(@"TexCoordIn");
}
Here is my object handler rendering function code:
void renderer::drawRender(glm::mat4 &view, glm::mat4 &projection)
{
//Loop through all objects of base type OBJECT
for(int i=0;i<SceneObjects.size();i++){
if(SceneObjects.size()>0){
shader->bind();//Bind the shader for the rendering of this object
SceneObjects[i]->mv = view * SceneObjects[i]->model;
shader->setUniform(@"modelViewMatrix", SceneObjects[i]->mv); // Calculate object modelview
shader->setUniform(@"MVP", projection * SceneObjects[i]->mv); // apply projection transforms to object
glActiveTexture(GL_TEXTURE0); // unnecessary in practice
glBindTexture(GL_TEXTURE_2D, SceneObjects[i]->_texture);
shader->setUniform(@"Texture", 0); // Apply the uniform for this instance
SceneObjects[i]->draw();//Draw this object
shader->unbind();//Release the shader for the next object
}
}
}
Here is my sprite buffer initialisation and draw code:
void spriteObject::draw()
{
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex), NULL);
glVertexAttribPointer((GLuint)1, 4, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex) , (GLvoid*) (sizeof(GL_FLOAT) * 3));
glVertexAttribPointer((GLuint)2, 2, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex) , (GLvoid*)(sizeof(GL_FLOAT) * 7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(SpriteIndices)/sizeof(SpriteIndices[0]), GL_UNSIGNED_BYTE, 0);
}
void spriteObject::initBuffers()
{
glGenBuffers(1, &vertexBufferID);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(SpriteVertices), SpriteVertices, GL_STATIC_DRAW);
glGenBuffers(1, &indexBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(SpriteIndices), SpriteIndices, GL_STATIC_DRAW);
}
Here is the vertex shader:
attribute vec3 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 MVP;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;
void main(void) {
DestinationColor = SourceColor;
gl_Position = MVP * vec4(Position,1.0);
TexCoordOut = TexCoordIn;
}
And finally the fragment shader:
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordOut;
uniform sampler2D Texture;
void main(void) {
gl_FragColor = DestinationColor * texture2D(Texture, TexCoordOut);
}
If you want to see any more specifics of certain elements, just ask.
Many thanks.
Are you sure your triangles have the same winding? The winding is the order in which the triangle's points are listed (either clockwise or counter-clockwise). Face culling uses the winding to determine whether a triangle is front-facing or back-facing.
You can easily check whether your triangles are wrongly wound by disabling face culling:
glDisable( GL_CULL_FACE );
More information here: http://db-in.com/blog/2011/02/all-about-opengl-es-2-x-part-23/#face_culling
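If the winding does turn out to be inconsistent, the proper fix is to make all triangles use the same order and set the culling state explicitly; a minimal sketch (CCW front faces and back-face culling are the GL defaults once culling is enabled):

glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW); // counter-clockwise triangles are front-facing
glCullFace(GL_BACK); // discard back-facing triangles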
I'm trying to output some data from compute shader to a texture, but imageStore() seems to do nothing. Here's the shader:
#version 430
layout(rgba32f) uniform image2D image;
layout (local_size_x = 1, local_size_y = 1) in;
void main() {
imageStore(image, ivec2(gl_GlobalInvocationID.xy), vec4(0.0f, 1.0f, 1.0f, 1.0f));
}
and the application code is here:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glUseProgram(program->GetName());
glUniform1i(program->GetUniformLocation("image"), 0);
glDispatchCompute(WIDTH, HEIGHT, 1);
Then a full-screen quad is rendered with that texture, but currently it only shows some random old data from video memory. Any idea what could be wrong?
EDIT:
This is how I display the texture:
// This comes right after the previous block of code
glUseProgram(drawProgram->GetName());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(drawProgram->GetUniformLocation("sampler"), 0);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glfwSwapBuffers();
and the drawProgram consists of:
#version 430
#extension GL_ARB_explicit_attrib_location : require
layout(location = 0) in vec2 position;
out vec2 uvCoord;
void main() {
gl_Position = vec4(position.x, position.y, 0.0f, 1.0f);
uvCoord = position;
}
and:
#version 430
in vec2 uvCoord;
out vec4 color;
uniform sampler2D sampler;
void main() {
vec2 uv = (uvCoord + vec2(1.0f)) / 2.0f;
uv.y = 1.0f - uv.y;
color = texture(sampler, uv);
//color = vec4(uv.x, uv.y, 0.0f, 1.0f);
}
The commented-out last line in the fragment shader produces the expected UV-gradient output.
The vertex array object (vao) has one buffer with 6 2D vertices:
-1.0, -1.0
1.0, -1.0
1.0, 1.0
1.0, 1.0
-1.0, 1.0
-1.0, -1.0
This is how I display the texture:
That's not good enough. I don't see a call to glMemoryBarrier, so there's no guarantee that your code actually works.
Remember: writes to images via Image Load/Store are not memory coherent. They require explicit user synchronization before they become visible. If you want to use an image you have stored to as a texture later, there must be an explicit glMemoryBarrier call after the rendering command that writes to it, but before the rendering command that samples from it as a texture.
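Applied to the code in the question, the ordering would look roughly like this (a sketch; other uses of the image would need different barrier bits):

glDispatchCompute(WIDTH, HEIGHT, 1);
// Make the image stores visible to later texture fetches,
// before the full-screen quad samples the texture.
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
// ... now bind tex and draw the quad as before.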
Why that is a problem, I don't know
Because desktop OpenGL is not OpenGL ES.
The last three parameters only describe the arrangement of the pixel data you're giving OpenGL. They change nothing about how OpenGL stores the data. In ES, they do, but that's only because ES doesn't do format conversions.
In desktop OpenGL, it is perfectly legal to upload floating-point data to a normalized integer texture; OpenGL is expected to convert the data as best it can. ES doesn't do conversions, so it has to change the internal format (the third parameter) to match the data.
Desktop GL does not. If you want a specific image format, you ask for it. Desktop GL gives you what you ask for, and only what you ask for.
Always use sized internal formats.
GL_RGBA is not a sized internal format, so you cannot know which format you will actually get. Most often, OpenGL turns it into GL_RGBA8.
In your case, the GL_FLOAT parameter only describes the pixel data you upload to the texture.
Read table 2 here to see what you can set as an internal texture format.
Okay I found the solution. The problem lies here:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
This line doesn't specify a sized internal format; GL_RGBA is unsized. When I supplied GL_RGBA32F instead, it started working. Why that is a problem, I don't know (hopefully somebody will be able to explain).
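For reference, the working call asks for the sized float format explicitly:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);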