Rendering a Semi-Transparent Sprite to a Texture - OpenGL

I have a 1024x1024 background texture and am trying to render a 100x100 sprite (also stored in a texture) to the bottom left corner of the background texture.
I want to render the sprite at 50% opacity. This needs to be done on the CPU side, not on the GPU using a shader. Most examples I've found use shaders to achieve this.
What's the best way to do this?

I suppose you mean CPU-side OpenGL commands, i.e. using the fixed-function pipeline; I deduce this from the "no shader" request.
Truly "doing this on the CPU" would mean retrieving/mapping the texture so the CPU can access it, looping over the pixels, and copying the result back to the graphics card with glTexImage (or unmapping the texture afterwards). That approach would be terribly inefficient.
So you just need to activate blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
and render in order: the background first, then a small quad with your 100x100 image on top. The blend will use the alpha channel of your 100x100 image, which you could set to a constant 50% in an image editing tool.
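For illustration, here is a minimal fixed-function sketch of that order of operations. It assumes an orthographic projection in pixel units and placeholder texture IDs (backgroundTex, spriteTex) created elsewhere; instead of baking 50% alpha into the image, it uses the vertex color alpha, which the default GL_MODULATE texture environment multiplies into the sprite's alpha:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1024, 0, 1024, -1, 1);              // pixel-space projection (assumption)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// 1) Background quad, fully opaque.
glBindTexture(GL_TEXTURE_2D, backgroundTex);   // placeholder texture ID
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1024, 0);
glTexCoord2f(1, 1); glVertex2f(1024, 1024);
glTexCoord2f(0, 1); glVertex2f(0, 1024);
glEnd();

// 2) 100x100 sprite in the bottom-left corner at 50% opacity.
glBindTexture(GL_TEXTURE_2D, spriteTex);       // placeholder texture ID
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);             // constant 50% alpha via the vertex color
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(100, 0);
glTexCoord2f(1, 1); glVertex2f(100, 100);
glTexCoord2f(0, 1); glVertex2f(0, 100);
glEnd();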

Related

How do I make my object transparent but still show the texture?

I'm trying to render a model in OpenGL. I'm on Day 4 of C++ and OpenGL (yes, I have learned this quickly) and I'm at a bit of a standstill with textures.
I'm having a bit of trouble making my texture alpha work. In the image, I have a character from Spiral Knights; as you can see on the top of his head, there are those white portions.
I've got Blending enabled and my blend function set to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
What I'm assuming here, and this is why I ask this question, is that the texture transparency is working, but the triangles behind the texture are still showing.
How do I make those triangles invisible but still show my texture?
Thanks.
There are two important things to be done when using blending:
You must sort primitives back to front and render in that order (order-independent transparency in depth-buffer-based renderers is still an ongoing research topic).
When using textures to control the alpha channel, you must either write a shader that passes the texture's alpha values down to the resulting fragment color, or, if you're using the fixed-function pipeline, use the GL_MODULATE texture env mode, GL_DECAL with the primitive color alpha set to 0, or GL_REPLACE.
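As a rough fixed-function sketch of that second point (the texture name is a placeholder; the alpha-test lines at the end are a common extra step, not something the answer above requires):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, characterTex);     // placeholder texture ID

// GL_MODULATE multiplies the texture RGBA with the primitive color, so with a
// white, fully opaque vertex color the fragment alpha is exactly the texture alpha.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Optional: discard nearly transparent fragments so they don't write to the depth buffer.
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.01f);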

mix RGBA pixmap with texture

I have an RGBA pixmap (e.g. an antialiased circular 4x4 dot) that I want to draw over a texture in a way similar to a brush stroke. The obvious solution of using glTexSubImage2D just overwrites a rectangular area with no respect to the alpha value. Is there a better solution than the obvious one of maintaining a mirrored version of the texture in local RAM, doing the blending there and then using glTexSubImage2D to upload it - preferably an OpenGL/GPU-based one? Is an FBO the way to go?
Also, is using an FBO for this efficient both in terms of maintaining 1:1 graphics quality (no artifacts, interpolation etc.) and in terms of speed? With a 4x4 object in RAM, doing the blending on the CPU is basically transforming a 4x4 matrix with basic float arithmetic, totalling 16 simple math iterations and one glTexSubImage2D call... is setting up an FBO, switching rendering contexts and doing the rendering still faster?
Benchmarking data would be much appreciated, as would MCVEs/pseudocode for proposed solutions.
Note: creating separate alpha-blended quads for each stroke is not an option, mainly due to the very high number of strokes used. Go science!
You can render to a texture with a framebuffer object (FBO).
At the start of your program, create an FBO and attach the texture to it. Whenever you need to draw a stroke, bind the FBO and draw the stroke as if you were drawing it to the screen (with triangles). The stroke gets written to the attached texture.
For your main draw loop, unbind the FBO, bind the attached texture, and draw a quad over the entire screen (from -1,-1 to 1,1 without using any matrices).
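A hedged sketch of that setup (it requires GL 3.0 or ARB_framebuffer_object; canvasTex and the canvas size are placeholders, and the stroke drawing itself is elided):
const int width = 1024, height = 1024;          // canvas size, placeholder values
GLuint fbo, canvasTex;

glGenTextures(1, &canvasTex);
glBindTexture(GL_TEXTURE_2D, canvasTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, canvasTex, 0);
// check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE here
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Per stroke: render into the texture instead of the screen.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, width, height);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the 4x4 stroke quad here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Main draw loop: bind canvasTex and draw the full-screen quad as described above.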
Also, is using FBO for this efficient both in terms of maintaining 1:1 graphics quality (no artifacts, interpolation etc) and in terms of speed?
Yes.
If the attached texture is as big as the window, then there are no artifacts.
You only need to switch to the FBO when adding a new stroke, after which you can forget about the stroke since it's already rendered to the texture.
The GPU does all of the sampling, interpolation, blending, etc., and it's much better at it than the CPU (after all, that's what the GPU is designed for).
Switching FBOs isn't that expensive. Modern games can switch FBOs for render-to-texture several times a frame while still pumping out thousands of triangles; one FBO switch per frame isn't going to kill a 2D app, even on a mobile platform.

OpenGL blend two FBOs

In a game I'm writing, I have a level, which is properly rendered to the on-screen render buffer provided to me by the OS. I can also render this to a framebuffer, then render this framebuffer onto the output render buffer.
To add a background, I want to render a different scene, an effect, or whatever to a second framebuffer, then have this "show through" wherever the framebuffer containing the level has no pixel set, i.e. the alpha value is 0. I think this is called alpha blending.
How would I go about doing this with OpenGL? I think glBlendFunc could be used to achieve this, but I am not sure how I can couple this with the framebuffer drawing routines to properly achieve the result I want.
glBlendFunc allows the application to blend (merge) the output of all your current draw operations (say, X) with the current "display" framebuffer (say, Y) that already exists.
i.e., new display output = X (blend) Y
You can control the blend function through GL calls, as the snippet below shows:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Full usage shown here
https://github.com/prabindh/sgxperf/blob/master/sgxperf_test4.cpp
Note that the concepts of "showing through" and "blending" are a little different; you might just want to stick with "per-pixel alpha blending" terminology.
FBOs are just containers, not storage. What you need to do is attach a texture target to each FBO and render your output to that texture. Once you have done this, you can use the output textures on a fullscreen quad and do whatever you want with your blending.
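Putting that together, the compositing pass could look roughly like this (a sketch only; backgroundTex and levelTex are the color textures attached to the two FBOs, and drawFullscreenQuad is a hypothetical helper that draws a textured quad from -1,-1 to 1,1):
// Composite into the on-screen framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);

glDisable(GL_BLEND);
drawFullscreenQuad(backgroundTex);              // background scene/effect, drawn opaque

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(levelTex);                   // background shows through wherever the level's alpha is 0

glDisable(GL_BLEND);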

'Render to Texture' and multipass rendering

I'm implementing an algorithm about pencil rendering. First, I should render the model using Phong shading to determine the intensity. Then I should map the texture to the rendered result.
I'm going to do multipass rendering with OpenGL and Cg shaders. Someone told me that I should try 'render to texture', but I don't know how to use this method to get the effect that I want. In my opinion, we should first use this method to render the mesh, so that we get a 2D texture of the whole scene. Now that we have drawn content to the framebuffer, we should render to the screen next, right? But how do I use the rendered texture and do some post-processing on it? Can anybody show me some code or links about this?
I made this tutorial; it might help you: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
However, using RTT is overkill for what you're trying to do, I think. If you need the fragment's intensity in the texture, well, you already have it in your shader, so there is no need to render it twice...
Maybe this could be useful ? http://www.ozone3d.net/demos_projects/toon-snow.php
Render to a texture with Phong shading.
Draw that texture to the screen again as a full-screen textured quad, applying a shader that does your desired operation.
I'll assume you need clarification on RTT and using it.
Essentially, your screen is a framebuffer (very similar to a texture); it's a 2D image at the end of the day. The idea of RTT is to capture that 2D image. To do this, the best way is to use a framebuffer object (FBO) (Google "framebuffer object" and click on the first link). From here, you have a 2D picture of your scene (you should verify it is what you want by saving it to an image file).
Once you have the image, you'll set up a 2D view and draw that image back onto the screen with an 800x600 quadrilateral or what-have-you. When drawing, you use a fragment program (shader), which transforms the brightness of the image into a greyscale value. You can output this, or you can use it as an offset to another, "pencil" texture.
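As a rough illustration of that second pass, the fragment program could look something like this (GLSL rather than Cg, purely for illustration; the sampler name and luminance weights are my own assumptions, not part of the pencil-rendering algorithm):
// Hedged sketch of the post-processing shader, stored as a C string.
const char* fragSrc =
    "uniform sampler2D sceneTex;                                      \n"
    "void main() {                                                    \n"
    "    vec4 c = texture2D(sceneTex, gl_TexCoord[0].st);             \n"
    "    float intensity = dot(c.rgb, vec3(0.299, 0.587, 0.114));     \n"
    "    // use 'intensity' directly, or as a lookup into a pencil texture\n"
    "    gl_FragColor = vec4(vec3(intensity), 1.0);                   \n"
    "}                                                                \n";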

Is it possible to save the current viewport and then redraw the saved viewport in OpenGL and C++ during the next draw cycle?

I want to know if I can save a bitmap of the current viewport in memory and then on the next draw cycle simply draw that memory to the viewport?
I'm plotting a lot of data points as a 2D scatter plot in a 256x256 area of the screen. In theory I could re-render the entire plot each frame, but in my case that would require storing a lot of data points (50K-100K), most of which would be redundant, as a 256x256 box only has ~65K pixels.
So instead of redrawing and rendering the entire scene at time t I want to take a snapshot of the scene at t-1 and draw that first, then I can draw updates on top of that.
Is this possible? If so how can I do it, I've looked around quite a bit for clues as to how to do this but I haven't been able to find anything that makes sense.
What you can do is render the scene into a texture and then first draw this texture (using a textured full-screen quad) before drawing the additional points. Using FBOs you can directly render into a texture without any data copies. If these are not supported, you can copy the current framebuffer (after drawing, of course) into a texture using glCopyTex(Sub)Image2D.
If you don't clear the framebuffer when rendering into the texture, it already contains the data of the previous frame and you just need to render the additional points. Then all you need to do to display it is drawing the texture. So you would do something like:
1. Render the additional points for time t into the texture (which already contains the data of time t-1) using an FBO.
2. Display the texture by rendering a textured full-screen quad into the display framebuffer.
3. t = t+1 -> go to step 1.
You might even use the framebuffer_blit extension (which is core since OpenGL 3.0, I think) to copy the FBO data onto the screen framebuffer, which might even be faster than drawing the textured quad.
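For reference, the blit path could look roughly like this (a sketch only; fbo, width and height stand for the FBO and texture size from the steps above):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);    // FBO holding the accumulated points
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);      // default (display) framebuffer
glBlitFramebuffer(0, 0, width, height,          // source rectangle
                  0, 0, width, height,          // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);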
Without FBOs it would be something like this (requiring a data copy; see the sketch after these steps):
1. Render the texture containing the data of time t-1 into the display framebuffer.
2. Render the additional points for time t on top of it.
3. Capture the framebuffer into the texture (using glCopyTexSubImage2D) for the next loop.
4. t = t+1 -> go to step 1.
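A rough sketch of that copy-based loop, assuming the plot occupies a 256x256 region whose lower-left corner is at window coordinates plotX/plotY and that accumTex is the snapshot texture (all names are placeholders):
// 1. Draw last frame's snapshot over the plot area.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, accumTex);
// ... draw a 256x256 textured quad covering the plot area ...

// 2. Draw only the new points for time t on top.

// 3. Capture the plot area back into the texture for the next frame.
glBindTexture(GL_TEXTURE_2D, accumTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,                       // offset inside the texture
                    plotX, plotY,               // lower-left corner of the plot in window coordinates
                    256, 256);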
You can render the heavy part to a texture. Then, when rendering the scene, draw that texture first and draw the changing things on top of it.