How to mask an OpenGL quad with another quad

I am trying to display a quad, but only where it overlaps another quad whose position I know. I thought about using that quad as a mask, but I am unsure how to do it (I already found this post that talks about masking, but in my case I don't have a mask texture; I only know the X, Y, width and height of the area to mask). The current solution I found uses glBlendFunc, but it only works if I don't render anything behind it, which won't be the case later on.
glBlendFunc(GL_ONE, GL_ZERO);
// draw the background quad, that is acting as the mask...
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_ALPHA, GL_ZERO);
// draw the background quad again, this time it will act as a mask...
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
// draw the quads that will be masked...
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // this is the blend func used for the rest of the rendering
Before drawing each frame, I also have a function that clears the screen:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0,0,0,0);
How can I make this work so that, no matter what is drawn before it, the quad is still clipped to the area of the previous quad?

If you want to restrict the rendering to a rectangular area, then you can use the Scissor Test.
The scissor test has to be enabled (GL_SCISSOR_TEST) and the rectangular area can be set with glScissor. e.g.:
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);
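A minimal sketch of the whole sequence, assuming the masking quad's rectangle is known in window coordinates (maskX, maskY, maskWidth, maskHeight and drawOverlayQuad() are hypothetical placeholders, not names from the question):
glEnable(GL_SCISSOR_TEST);
// glScissor takes window coordinates, with the origin at the
// lower-left corner of the window
glScissor(maskX, maskY, maskWidth, maskHeight);
drawOverlayQuad(); // anything drawn here is clipped to the rectangle
glDisable(GL_SCISSOR_TEST); // leave the rest of the frame unaffected
Note that the scissor test also restricts glClear, so disable it (or reset the rectangle) before clearing the whole screen.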

Related

OpenGL: Draw color with mask on a background image

I need to draw a color with some shape onto an image. My thought was to supply a mask with the given shape (say, hearts), then fill the rectangular area with the color and use the mask to render it over the final image.
[Images: a solid color rectangle, masked by the black heart shape, plus the background (tree) image, equals the tree with the colored heart drawn over it.]
The rectangle color is decided at runtime - that's why I don't draw the colored heart on my own.
The black heart image is transparent (alpha is 0) anywhere except for the heart (alpha is 255).
I tried using:
glBlendFunc(GL_DST_ALPHA, GL_ZERO)
where the source is the solid color, and the destination is the alpha channel image.
I used https://www.andersriggelsen.dk/glblendfunc.php for help.
However the bottom image (tree) is being used as the DST image...
Seems like I need an intermediate buffer to first render the blue heart, then do a second render onto the tree.
What is the way to do it?
If the tree is drawn first, it will appear in the destination color and change your final result.
You are right: you need an intermediate buffer to store which part of the quad should be rendered, in the shape of your heart.
OpenGL provides a perfect tool for this: the stencil buffer.
In your case I would render the scene as usual (the tree).
Then enable the stencil test with glEnable(GL_STENCIL_TEST);,
disable writes to the color buffer with glColorMask(false, false, false, false);,
and draw only the heart with stencil writes enabled: glStencilMask(0xFF);.
Then draw your colored quad with the stencil test enabled, using glStencilFunc(GL_EQUAL, 1, 0xFF).
Don't forget to clear your stencil buffer each frame: glClear(GL_STENCIL_BUFFER_BIT);.
You can find some good tutorials online: https://learnopengl.com/Advanced-OpenGL/Stencil-testing
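A minimal sketch of that sequence in legacy OpenGL (drawTree(), drawHeart() and drawColoredQuad() are hypothetical placeholders for your own drawing code; the context must be created with stencil bitplanes):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// 1. render the scene as usual
drawTree();

// 2. write the heart shape into the stencil buffer only
glEnable(GL_STENCIL_TEST);
glColorMask(false, false, false, false); // no color writes
glStencilFunc(GL_ALWAYS, 1, 0xFF); // always pass...
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // ...and write 1 where fragments land
glStencilMask(0xFF);
glEnable(GL_ALPHA_TEST); // discard the mask's transparent texels
glAlphaFunc(GL_GREATER, 0.5f); // so they don't touch the stencil
drawHeart();
glDisable(GL_ALPHA_TEST);

// 3. draw the colored quad only where the stencil value is 1
glColorMask(true, true, true, true);
glStencilMask(0x00); // stop writing to the stencil buffer
glStencilFunc(GL_EQUAL, 1, 0xFF);
drawColoredQuad();
glDisable(GL_STENCIL_TEST);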
Here's a very simple way to do this in legacy OpenGL (which I assume you're using) that does not require a stencil buffer:
public void render() {
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glOrtho(0, 1, 1, 0, 1, -1);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    // Regular blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_ALPHA_TEST);
    // Discard transparent pixels. Not strictly necessary but good for performance in this case.
    glAlphaFunc(GL_GREATER, 0.01f);
    glColor3f(1, 1, 1);
    glBindTexture(GL_TEXTURE_2D, treeTexture);
    drawQuad();
    glColor3f(1, 0, 1); // Your color goes here
    glBindTexture(GL_TEXTURE_2D, maskTexture);
    drawQuad();
}

private void drawQuad() {
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);
    glTexCoord2f(0, 1);
    glVertex2f(0, 1);
    glTexCoord2f(1, 1);
    glVertex2f(1, 1);
    glTexCoord2f(1, 0);
    glVertex2f(1, 0);
    glEnd();
}
Here, treeTexture is the tree texture, and maskTexture is the white-on-transparent heart shape.
Result: [image of the colored heart shape drawn over the tree]
The principle is that in the legacy OpenGL pipeline, you can use glColor* before glVertex* to specify a color that the texture color (in this case white or transparent) is multiplied by component-wise.
Note that with this method you can easily render multiple colored shapes in multiple different colors without needing any (relatively expensive) clears of the stencil buffer. I suggest cropping the mask texture to the boundaries of the actual mask shape, to save the GPU the small effort of discarding all the transparent fragments.

OpenGL bitmap alpha buffer

I have two RGBA images (simple 2D raster of type GL_UNSIGNED_BYTE, not textures or anything) of the same scene, one sharp, one blurred. With the blurred image displayed, I need to create an effect where the sharp image shows through in one or more (possibly overlapping) circular areas, smoothly blending with the background blurred image around the edges of the circles. I used to do the following.
Call this at the initialization:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
For every circle, I made a copy of the sharp image and assigned new values to its alpha components, starting at 255 in the center of the circle and falling to 0 towards the edges. Then I rendered the copies one by one with glDrawPixels(), starting with the blurred image. It works, but as the number of those circular areas grows, it gets noticeably slow.
I was thinking of using some alpha buffers (I don't know the correct term): I create a small image with a cut-out alpha circle in the middle, render its alpha component at one or several places in the framebuffer, then somehow blend the blurred and sharp images using those pre-rendered alpha values. So I wrote this in my display() function:
//render the alpha-mask first, at one place for now
GLfloat rp[4];
glGetFloatv(GL_CURRENT_RASTER_POSITION, rp);
glRasterPos2f(0.2f, 0.2f); //just some arbitrary coordinates on the screen
glColorMask(false, false, false, true); //think I only need the alpha-channel
glBlendFunc(GL_ONE, GL_ZERO);
glDrawPixels(sharp_mask_width, sharp_mask_height, GL_RGBA, GL_UNSIGNED_BYTE, alpha_mask);
//render the blurred image, blending with the previously rendered alpha
glRasterPos2f(rp[0], rp[1]);
glColorMask(true, true, true, true); //all channels
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA); //make a hole where alpha mask had the maximum alpha
glDrawPixels(g_width, g_height, GL_RGBA, GL_UNSIGNED_BYTE, imageDataBlurred);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
glDrawPixels(g_width, g_height, GL_RGBA, GL_UNSIGNED_BYTE, imageDataSharp);
glFlush();
It seems I don't understand something important about all this blending stuff, because nothing is rendered at all; all I get is an empty screen. The best I have managed, by messing with the glBlendFunc parameters and blending RGBs instead of alphas, is an effect only remotely similar to what I need.
How, if at all, can it be done?
I know I will probably burn in hell for using outdated OpenGL in the year 2015, when programmable shaders rule the world and cure cancer, but if possible I'd much prefer an answer describing how to do it in old style OpenGL, so I don't have to change that crap of a legacy program too much...
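For what it's worth, one way the destination-alpha idea above can be wired up in old-style OpenGL (an untested sketch, not a verified fix: it assumes the framebuffer was created with alpha bitplanes, which must be requested at context creation, and cx/cy are hypothetical stand-ins for each circle's raster position):
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);

//1. write each circle's mask into destination alpha only
glColorMask(false, false, false, true);
glDisable(GL_BLEND); // overwrite; for overlapping circles, accumulate
                     // with glEnable(GL_BLEND) + glBlendFunc(GL_ONE, GL_ONE) instead
glRasterPos2f(cx, cy); // repeat for every circle
glDrawPixels(sharp_mask_width, sharp_mask_height, GL_RGBA, GL_UNSIGNED_BYTE, alpha_mask);

//2. draw the sharp image scaled by destination alpha; because the sharp
//   image's own alpha is 255 everywhere, (GL_DST_ALPHA, GL_ZERO) also
//   leaves destination alpha equal to the mask
glColorMask(true, true, true, true);
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
glRasterPos2f(0.0f, 0.0f); // or wherever the full images are anchored
glDrawPixels(g_width, g_height, GL_RGBA, GL_UNSIGNED_BYTE, imageDataSharp);

//3. add the blurred image where the mask alpha is low
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
glDrawPixels(g_width, g_height, GL_RGBA, GL_UNSIGNED_BYTE, imageDataBlurred);
glFlush();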

OpenGL ES 2.0 : paint in FBO + Texture = gray blending in texture

This is how I render my brush in the fragment shader:
gl_FragColor.rgb = Color.rgb;
gl_FragColor.a = Texture.a * Color.a;
With this blending function on a (0, 0, 0, 0) texture:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This is what I see when I draw my texture ADDED to my white background with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
But this is what I get in my texture:
Can someone help me understand why I get this grayed stroke in my texture? I need to take a screenshot of this texture, and I want the same rendering, just without the white background.
[1st picture] When I draw my "view" I have a white background.
[2nd picture] But I store my stroke in a texture which has a transparent background.
You're doing two blending operations: one in your shader, and one with the glBlendFunc call. When you render directly to the screen, the blend is only applied once. When you render to a texture, however, it is applied once while rendering into the texture and then again when the texture is drawn to the screen.
You have two options: 1) don't do the blending in your shader, or 2) use a different blend function (I find glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); to work well). Option 2 works best for me; handling OpenGL blend functions is notoriously fiddly.
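A sketch of option 2 as I understand it, using the question's shader variables; the idea is to output premultiplied alpha so the same blend function is correct both into the FBO and onto the screen:
// in the fragment shader, premultiply the color by the final alpha:
//     gl_FragColor.a   = Texture.a * Color.a;
//     gl_FragColor.rgb = Color.rgb * gl_FragColor.a;
// then use one blend function for both passes:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// pass 1: paint the stroke into the FBO texture
// pass 2: draw that texture over the white background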

Drawing to different size FBO

I'm having an issue while using an FBO.
My window size is 1200x300.
When I create an FBO that's 1200x300, everything is fine.
However, when I create the FBO at 2400x600 (effectively twice as big on both axes) and try to render the exact same primitives, only one quarter of the FBO's area actually gets used.
FBO same size as window:
FBO twice bigger (triangle clipping can be noticed):
I render these two triangles into the FBO, then render a fullscreen quad with the FBO's texture on it. I clear the FBO with this pine-green color, so I know for sure that all the empty space in the second picture actually comes from the FBO.
// init() of the program
albedo = new RenderTarget(2400, 600, 24 /*depth*/); // in first case, params are 1200, 300, 24
// draw()
RenderTarget::set(albedo); // render to fbo
RenderTarget::clearColor(0.0f, 0.3f, 0.3f, 1.0f);
RenderTarget::clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// render triangles ...
glDrawArrays(GL_TRIANGLES, 0, 6);
// now it's time to render a fullscreen quad
RenderTarget::set(); // render to back-buffer
RenderTarget::clearColor(0.3f, 0.0f, 0.0f, 1.0f);
RenderTarget::clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, albedo->texture());
glUniform1i(albedoUnifLoc, 0);
RenderTarget::drawFSQ(); // draw fullscreen quad
I have no cameras of any kind, I don't use glViewport anywhere, and I always send the coordinates of the primitives in unit-square space (both the x and y coordinates are in the [-1,1] range).
The question is: what am I doing wrong, and how do I fix it?
An aside question: is glViewport related in any way to the currently bound framebuffer? As far as I can tell, that function just sets the rectangular area of the window in which drawing will occur.
Any suggestion would be greatly appreciated. I tried searching for the problem online; the only similar thing was this SO question, but it hasn't helped me.
You need to call glViewport() with the size of your render target. The only time you can get away without calling it is when you render to the window, and the window is never resized. That's because the default viewport matches the initial window size. From the spec:
In the initial state, w and h are set to the width and height, respectively, of the window into which the GL is to do its rendering.
If you want to render to an FBO with a size different from your window, you have to call glViewport() with the size of the FBO. And when you go back to rendering to the window, you need to call glViewport() with the window size again.
The viewport dimensions are not per framebuffer state. I always thought that would have made sense, but it is not defined that way. So whenever you call glViewport(), you are changing global (i.e. per context) state, independent of the currently bound framebuffer.
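A minimal sketch of that pattern with the question's sizes (plain GL calls shown here in place of the asker's RenderTarget wrapper; fbo is assumed to be the framebuffer object's id):
// render into the 2400x600 FBO
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 2400, 600); // match the FBO size
// ... clear and draw the triangles ...

// back to the 1200x300 window
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, 1200, 300); // match the window size
// ... draw the fullscreen quad ...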

OpenGL non-square textures

I'm a little new to OpenGL. I am making a 2D application, and I defined a Quad class which draws a square with a texture on it. It loads these textures from a texture atlas, and it does this correctly. Everything displays correctly when the texture image is square, but not when it isn't.
For example, I want a Quad to have a star texture, with the star showing up and the area around the star image that still lies inside the Quad being transparent. But what ends up happening is that the star shows up fine, and behind it another texture from my texture atlas fills the Quad. I assume the texture behind it is just the last texture I loaded into the system? Either way, I don't want that texture to show up.
Here's what I mean. I want the star but not the cloud-ish texture behind it showing up:
The important part of my render function is:
glDisable(GL_CULL_FACE);
glVertexPointer(vertexStride, GL_FLOAT, 0, vertexes);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(colorStride, GL_FLOAT, 0, colors);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uvCoordinates);
//render
glDrawArrays(renderStyle, 0, vertexCount);
It seems like the obvious choice would be to use an RGBA texture, and make everything but the star transparent by setting the alpha channel to zero for those pixels (and enable alpha blending for the texture unit).
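For example, a minimal sketch of the state to add before drawing the quad (assuming the star texture is loaded as RGBA with zero alpha outside the star):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // transparent texels let the background through
// ... then bind the texture and draw the quad as before ...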
Use an image manipulation program. Photoshop is a great one; GIMP is a free one. You don't really use OpenGL to crop your textures. Rather, your textures need to be prepared beforehand for your program.
There should be some sort of very easy tool to remove everything outside of the star. By remove, I mean make it transparent, which will require an alpha channel. This means you need to make sure that the way you load your textures in your program takes into account 32-bit colors (RGBA - red, green, blue, alpha), not just 24-bit colors (RGB - red, green, blue).
This will make everything behind your star see-through, or transparent.
Also, just an afterthought, it looks like you could be taking a copyrighted image off the internet and using it in your game/program. If you're doing anything commercial, I'd strongly recommend creating your own textures.
You want to make a call to glBindTexture(GL_TEXTURE_2D, 0); after you have mapped your texture.
Here is an example from some code I've written:
// Bind the texture
glBindTexture(GL_TEXTURE_2D, image.getID());
// Draw a QUAD, setting texture coordinates
glBegin(GL_QUADS);
{
    // Top left corner of the texture
    glTexCoord2f(0, 0);
    glVertex2f(x, y);
    // Top right corner of the texture
    glTexCoord2f(image.getRelativeWidth(), 0);
    glVertex2f(x + image.getImageWidth(), y);
    // Bottom right corner of the texture
    glTexCoord2f(image.getRelativeWidth(), image.getRelativeHeight());
    glVertex2f(x + image.getImageWidth() - 20, y + image.getImageHeight());
    // Bottom left corner of the texture
    glTexCoord2f(0, image.getRelativeHeight());
    glVertex2f(x + 20, y + image.getImageHeight());
}
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
I am no expert but this certainly solved what you are experiencing for me.