How do I fix the jaggedness and darkness around the edges of 2D textures in WebGL?

I've tried setting the antialias property on the WebGL context to true, but that didn't fix it.
This is what I'm getting in WebGL:
This is canvas rendering, via drawImage, which is what I'm trying to replicate:
I'm using the default WebGL settings, aside from these three changed flags:
gl.enable(gl.BLEND); // Enable blending
gl.depthFunc(gl.LEQUAL); //near things obscure far things
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
And here's how I load the sprites (with the sprite variable being an Image object)
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, sprite);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

Alright, fixed it. It was happening because my textures used premultiplied alpha values, which messed up the blending.
I fixed it by changing my blendFunc from gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) to gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA)
I also had to tell WebGL to premultiply the alpha of the image data as it is unpacked, by doing
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true)
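Putting the whole fix together, the texture setup and blend state end up looking roughly like this (same sprite and texture variables as above; note the unpack flag must be set before the texImage2D call that uploads the image):
// Premultiply the alpha of the image data during upload
// (must be set before texImage2D for it to affect that upload).
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, sprite);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
// Blend premultiplied colors: the source RGB is already scaled by alpha.
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);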

Related

OpenGL: Draw color with mask on a background image

I need to draw a color with some shape onto an image. My thought was to supply a mask with the given shape (say, hearts), then fill the rectangular area with the color and use the mask to render it over the final image.
(solid color fill) Masked by: (black heart mask) PLUS (tree background) EQUALS: (tree with the colored heart)
The rectangle color is decided at runtime - that's why I don't draw the colored heart on my own.
The black heart image is transparent (alpha is 0) anywhere except for the heart (alpha is 255).
I tried using:
glBlendFunc(GL_DST_ALPHA, GL_ZERO)
where the source is the solid color, and the destination is the alpha channel image.
I used https://www.andersriggelsen.dk/glblendfunc.php for help.
However, the bottom image (the tree) is being used as the DST image...
It seems like I need an intermediate buffer to first render the blue heart, then do a second render onto the tree.
What is the right way to do this?
If the tree is drawn first, it will already be in the destination color buffer and change your final result.
You are right, you need an intermediate buffer to store which parts of the quad should be rendered, in the shape of your heart.
OpenGL provides a perfect tool for this: it's called the stencil buffer.
In your case I would render the scene as usual (the tree).
Then I would enable the stencil test with glEnable(GL_STENCIL_TEST);,
disable writes to the color buffer with glColorMask(false, false, false, false);,
and draw only the heart so that it marks the stencil buffer: glStencilMask(0xFF); together with glStencilFunc(GL_ALWAYS, 1, 0xFF); and glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); so each heart fragment writes a 1.
Then you re-enable color writes and draw your colored quad with the stencil test enabled and glStencilFunc(GL_EQUAL, 1, 0xFF).
Don't forget to clear your stencil buffer each frame: glClear(GL_STENCIL_BUFFER_BIT);
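A minimal sketch of that sequence (drawTree(), drawHeartMask() and drawColoredQuad() stand in for your own drawing code; the heart's fully transparent fragments should be discarded, e.g. with an alpha test or discard in the shader, so they don't mark the stencil):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

drawTree();                                          // 1. the scene, as usual

glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);                   // 2. every heart fragment passes...
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);           //    ...and writes 1 into the stencil buffer
glStencilMask(0xFF);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); //    but nothing goes to the color buffer
drawHeartMask();

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     // 3. color writes back on
glStencilMask(0x00);                                 //    stop writing the stencil buffer
glStencilFunc(GL_EQUAL, 1, 0xFF);                    //    pass only where the heart wrote a 1
drawColoredQuad();                                   //    the runtime-colored quad

glDisable(GL_STENCIL_TEST);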
You can find some good tutorials online: https://learnopengl.com/Advanced-OpenGL/Stencil-testing
Here's a very simple way to do this in legacy OpenGL (which I assume you're using) that does not require a stencil buffer:
public void render() {
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glOrtho(0, 1, 1, 0, 1, -1);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    // Regular blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_ALPHA_TEST);
    // Discard transparent pixels. Not strictly necessary but good for performance in this case.
    glAlphaFunc(GL_GREATER, 0.01f);

    glColor3f(1, 1, 1);
    glBindTexture(GL_TEXTURE_2D, treeTexture);
    drawQuad();

    glColor3f(1, 0, 1); // Your color goes here
    glBindTexture(GL_TEXTURE_2D, maskTexture);
    drawQuad();
}

private void drawQuad() {
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);
    glTexCoord2f(0, 1);
    glVertex2f(0, 1);
    glTexCoord2f(1, 1);
    glVertex2f(1, 1);
    glTexCoord2f(1, 0);
    glVertex2f(1, 0);
    glEnd();
}
Here, treeTexture is the tree texture, and maskTexture is the white-on-transparent heart shape.
Result:
The principle is that in the legacy OpenGL pipeline, you can use glColor* before glVertex* to specify a color that the texture color (in this case white or transparent) is multiplied by component-wise.
Note that with this method you can easily render multiple colored shapes in multiple different colors without needing any (relatively expensive) clears of the stencil buffer. I suggest cropping the mask texture to the boundaries of the actual mask shape, to save the GPU the small effort of discarding all the transparent fragments.
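For example, adding a second shape in a different color is just two more state changes and another call to drawQuad(), assuming a hypothetical starTexture loaded the same way as maskTexture:
glColor3f(0, 1, 0);                        // second color
glBindTexture(GL_TEXTURE_2D, starTexture); // another white-on-transparent mask
drawQuad();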

OpenGL ping pong works with one pass, not with two

This might be a more basic OpenGL mistake than the title suggests.
I am doing segmentation using fragment shaders in OpenGL, which require multiple rendering passes to do successive operations (eg. gaussian blur + edge detection + segmentation).
As far as I understood, there is this common technique called ping pong which takes two frame buffers (FBO) and simply renders to one FBO using the other as input.
The thing is, one pass (shader_0 outputting to FBO_1 using FBO_0 as input) works, but when I try to use shader_1 with FBO_0 as input and render into FBO_1, I get a completely transparent image.
I checked both shaders and they do work individually, yet together they produce this transparent output.
Here is the set of calls I do for each pass, with segmentationBuffers containing the two FBOs, respectively used as input and output for this pass:
glBindFramebuffer(
    GL_FRAMEBUFFER,
    segmentationBuffers[lastSegmentationFboRenderedTo]->FramebufferName
);
glViewport(0, 0, windowWidth, windowHeight);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);

currentStepShader->UseProgram();

glClearColor(0, 0, 0, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

lastSegmentationFboRenderedTo = (lastSegmentationFboRenderedTo + 1) % 2;

glActiveTexture(GL_TEXTURE0);
glBindTexture(
    GL_TEXTURE_2D,
    segmentationBuffers[lastSegmentationFboRenderedTo]->renderedTexture
);

glUniform1i(glGetUniformLocation(shader->shaderPtr, "inputTexture"), 0);
glUniform2fv(
    glGetUniformLocation(shader->shaderPtr, "texCoordOffsets"),
    25,
    texCoordOffsets
);

quad->Draw(GL_TRIANGLES, shader,
    orthographicProjection,
    glm::mat4(1.0f),
    getOverlayModelMatrix()
);
And as stated above, doing one pass yields correct intermediary results, but doing two in a row gives a transparent frame. I suspect this is a more basic OpenGL mistake than it seems, but any help is appreciated!
I solved the issue by removing the call to glEnable(GL_DEPTH_TEST);.
I suspect that by enabling depth testing, OpenGL was discarding fragments from subsequent computation steps since they had the same depth value.
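For reference, a sketch of how one corrected pass could look, reusing the names from the question (inputFbo and outputFbo are hypothetical indices into segmentationBuffers that you swap after each pass, the texCoordOffsets uniform is omitted for brevity, currentStepShader stands for whichever shader object the pass uses, and depth testing / face culling are simply left off, since a full-screen quad has nothing meaningful to depth-test or cull):
glBindFramebuffer(GL_FRAMEBUFFER, segmentationBuffers[outputFbo]->FramebufferName);
glViewport(0, 0, windowWidth, windowHeight);
glDisable(GL_DEPTH_TEST); // the fix: no depth test between full-screen passes
glClearColor(0, 0, 0, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

currentStepShader->UseProgram();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, segmentationBuffers[inputFbo]->renderedTexture);
glUniform1i(glGetUniformLocation(currentStepShader->shaderPtr, "inputTexture"), 0);

quad->Draw(GL_TRIANGLES, currentStepShader, orthographicProjection,
           glm::mat4(1.0f), getOverlayModelMatrix());

// swap roles for the next pass
int tmp = inputFbo; inputFbo = outputFbo; outputFbo = tmp;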

Swapping between different framebuffer

I am attempting to create a scenario where I render two completely different textures and swap between them, to represent different states in a game. Is it possible to render textures to framebuffer A, and then completely different textures to framebuffer B? Is it then possible to switch back and forth between these framebuffers, so that, for instance, while framebuffer A is being rendered onto the screen, the contents of framebuffer B are kept in memory until they are selected?
Although this particular scenario may be handled in various ways depending on exactly what you need to render, swapping between framebuffer textures is not uncommon practice, and it serves well for things like gaussian-blur post-process effects with ping-pong framebuffers, as explained in the following article: https://learnopengl.com/#!Advanced-Lighting/Bloom
One possible solution is to create two offscreen framebuffers whose contents are later displayed on the main framebuffer:
// this renderbuffer provides DEPTH ONLY - change it if you also need a stencil buffer
function createFramebufferA() {
    var FBO_A = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, FBO_A);

    // color attachment: a texture, so we can sample it later
    var FBO_A_texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, FBO_A_texture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, screen.width, screen.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

    // depth attachment: a renderbuffer, since we never sample the depth values
    var renderbuffer = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, renderbuffer);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, screen.width, screen.height);

    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, renderbuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, FBO_A_texture, 0);

    gl.bindTexture(gl.TEXTURE_2D, null);
    gl.bindRenderbuffer(gl.RENDERBUFFER, null);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    // keep FBO_A_texture reachable as well; it is sampled later when compositing
    return FBO_A;
}
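Rendering each game state into its own framebuffer could then look roughly like this (assuming an analogous createFramebufferB(), with drawStateA() and drawStateB() standing in for your own scene-drawing code):
var FBO_A = createFramebufferA();
var FBO_B = createFramebufferB(); // built exactly like createFramebufferA()

gl.bindFramebuffer(gl.FRAMEBUFFER, FBO_A);
gl.viewport(0, 0, screen.width, screen.height);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
drawStateA(); // whatever the first game state renders

gl.bindFramebuffer(gl.FRAMEBUFFER, FBO_B);
gl.viewport(0, 0, screen.width, screen.height);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
drawStateB(); // whatever the second game state renders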
Later on, after you've finished rendering to those framebuffers, you can set up a quick shader program bound to the main framebuffer that displays a quad filling the entire screen and outputs the content of one of the framebuffers' textures:
gl.bindFramebuffer(gl.FRAMEBUFFER, null); // we're now drawing to the main framebuffer
gl.useProgram(PostProcessProgram); // this program displays a full-screen quad and takes a texture as uniform
/* ... after binding the VBO and attribute pointers ... */

gl.activeTexture(gl.TEXTURE0); // make sure texture unit 0 is active before binding

// this condition decides whether we're going to display the content
// of framebuffer A or framebuffer B
if (certain_condition)
    gl.bindTexture(gl.TEXTURE_2D, FBO_A_texture);
else
    gl.bindTexture(gl.TEXTURE_2D, FBO_B_texture);

gl.uniform1i(PostProcessProgram.texture, 0);

// draw one of the two framebuffers' textures
gl.drawArrays(gl.TRIANGLES, 0, 6);
If you're worried about performance, keep in mind that using multiple framebuffers is not uncommon, and at times it is mandatory to achieve certain post-process effects or techniques such as deferred shading.

draw in FrameBuffer but get only black

Windows, using GLEW.
I'm trying to render offscreen and save the image OpenGL rendered to a PNG file.
I followed a highly rated answer on Stack Overflow:
How to render offscreen on OpenGL?
But the PNG file I get is only a black image.
Here's my code relating to it:
glutCreateWindow(argv[0]);
if(GLEW_OK!=glewInit())
{
return -1;
}
initScene();
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER,render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, viewport.w, viewport.h);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glClear(GL_COLOR_BUFFER_BIT); // clear the color buffer
glMatrixMode(GL_MODELVIEW); // indicate we are specifying camera transformations
glLoadIdentity(); // make sure transformation is "zero'd"
//draw...
//glBegin(GL_POINTS) glColor3f, glVertex2f
//glFlush();
glFinish();
/*glutDisplayFunc(myDisplay);
glutPostRedisplay();*/
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
savePNG(outputPNGName,0,0,viewport.w,viewport.h);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
How can I solve this problem?
Thank you
savePNG (related code):
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(x, y, width, height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)image);
There are at least two problems in this code:
GL_RGB8 is not a valid format for a renderbuffer. From the glRenderbufferStorage() man page:
internalformat specifies the internal format to be used for the renderbuffer object's storage and must be a color-renderable, depth-renderable, or stencil-renderable format.
Table 8.13 in the latest spec document (4.5, downloadable from https://www.opengl.org/registry) lists all formats, with a column showing which of them are color-renderable. RGB8 does not have a checkmark in that column. You can use GL_RGBA8 instead, which is color-renderable.
You may also want to check out the glCheckFramebufferStatus() function, which allows you to check if your framebuffer setup is valid.
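For example, a check along these lines right after attaching the renderbuffer tells you whether the driver accepted the setup (assuming <stdio.h> is available for the error print):
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    fprintf(stderr, "framebuffer incomplete: 0x%x\n", status);
}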
savePNG() has no way of knowing that you want to read the pixel data from your FBO: its glReadPixels() call reads from the current read framebuffer, while your rendering code only sets the draw framebuffer. Before calling savePNG(), add this call to make your FBO the read framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
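Putting both fixes together, the relevant lines would look roughly like this (names reused from the question; image is the buffer savePNG() writes into):
// 1. use a color-renderable format for the renderbuffer storage
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, viewport.w, viewport.h);
// ... attach, draw, glFinish() as before ...
// 2. bind the FBO as the READ framebuffer before reading back
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, viewport.w, viewport.h, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)image);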

Draw the contents of the render buffer Object

I don't quite understand how render buffer objects work. For example, if I want to show what is in the render buffer, must I necessarily render to a texture?
GLuint fbo, color_rb, depth_rb;
glGenFramebuffers(1,&fbo);
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glGenRenderbuffersEXT(1, &color_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,GL_RENDERBUFFER_EXT, color_rb);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,GL_RENDERBUFFER_EXT, depth_rb);
if(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)!=GL_FRAMEBUFFER_COMPLETE_EXT)return 1;
glBindFramebuffer(GL_FRAMEBUFFER,0);
//main loop
//This does not work :-(
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawCube();
glBindFramebuffer(GL_FRAMEBUFFER,0);
Any ideas?
You are not going to see anything when you draw to an FBO instead of the default framebuffer; that is part of the point of FBOs.
Your options are:
Blit the renderbuffer into another framebuffer (in this case it would probably be GL_BACK for the default backbuffer)
Draw into a texture attachment and then draw texture-mapped primitives (e.g. triangles / a quad) if you want to see the results (a short sketch of this appears at the end of this answer).
Since 2 is pretty self-explanatory, I will explain option 1 in greater detail:
/* We are going to blit into the window (default framebuffer) */
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer (GL_BACK); /* Use backbuffer as color dst. */
/* Read from your FBO */
glBindFramebuffer (GL_READ_FRAMEBUFFER, fbo);
glReadBuffer (GL_COLOR_ATTACHMENT0); /* Use Color Attachment 0 as color src. */
/* Copy the color and depth buffer from your FBO to the default framebuffer */
glBlitFramebuffer (0,0, width,height,
0,0, width,height,
GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
GL_NEAREST);
There are a couple of things worth mentioning here:
First, blitting from one framebuffer to another is often measurably slower than drawing two textured triangles that fill the entire viewport. Second, you cannot use linear filtering when you blit a depth or stencil image... but you can if you take the texture mapping approach (this only truly matters if the resolution of your source and destination buffers differ when blitting).
Overall, drawing a textured primitive is the more flexible solution. Blitting is most useful if you need to do Multisample Anti-Aliasing, because you would have to implement that in a shader otherwise and multisample texturing was added after Framebuffer Objects; some older hardware/drivers support FBOs but not multisample color (requires DX10 hardware) or depth (requires DX10.1 hardware) textures.
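For completeness, a rough sketch of option 2 for the 256x256 FBO above: attach a texture as the color buffer instead of a renderbuffer, then draw it like any other texture (drawTexturedQuad() stands in for whatever quad-drawing code you already have):
GLuint color_tex;
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindFramebuffer(GL_FRAMEBUFFER, fbo); /* attach while the FBO is bound */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);

/* render into the FBO as before, then sample the result: */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, color_tex);
drawTexturedQuad(); /* any textured quad covering the region where you want the result shown */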