I'm in a situation where I need to render the contents of a framebuffer object onto the screen. The screen already has some content on it, and I would like to draw the contents of my framebuffer on top of that content.
I'm using Qt5 and QNanoPainter to implement this.
The rendering commands I've implemented essentially take a QOpenGLFramebufferObject, convert it into a QNanoImage, and then call QNanoPainter::drawImage.
This works OK, but when the content of the FBO is rendered onto the screen, the previously existing screen content becomes "pixelated".
So, for example, before I draw the FBO, the screen looks like this:
Then, when I draw the FBO onto the default render target, I get this (red is the content of the FBO):
I assume this has something to do with blending and OpenGL, but I'm not sure how to solve this problem.
This happens when you over-draw a semi-transparent image over itself multiple times. The white pixels become whiter, the blue pixels become bluer, and, consequently, the anti-aliased edge disappears over a couple of iterations.
I therefore deduce that your 'transparent framebuffer' already contains the blue line and the black grid lines. The solution, then, is to clear the 'transparent framebuffer' before you proceed with drawing the red line into it.
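A minimal sketch of that fix, assuming a QOpenGLFramebufferObject member (the name m_fbo is a placeholder for whatever your code uses):

// Clear the offscreen FBO to fully transparent before redrawing its content,
// so stale pixels aren't composited onto the screen a second time.
m_fbo->bind();
glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // transparent black
glClear(GL_COLOR_BUFFER_BIT);
// ... draw the red line into the FBO ...
m_fbo->release();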
If I first draw an image with an alpha channel at Z depth 0.1, and after that I draw a rectangle at Z depth 0.0, the result is the following image, where the transparent part of the image becomes black.
I can correct this by first drawing the rectangle and then drawing the image.
Since the image is in front of the rectangle in Z, is there a way I can first draw the image and then draw the rectangle without the transparent part of the image becoming black?
After discarding the transparent fragments, this is the result:
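For reference, a minimal sketch of that discard approach, assuming a programmable-pipeline setup (GLSL embedded as a C++ raw string; uTexture, vUV, and the 0.1 threshold are illustrative):

const char *fragmentSrc = R"(
#version 330 core
in vec2 vUV;
uniform sampler2D uTexture;
out vec4 fragColor;
void main() {
    vec4 texel = texture(uTexture, vUV);
    // Discarded fragments write neither color nor depth, so the rectangle
    // drawn later is not blocked by the image's transparent region.
    if (texel.a < 0.1)
        discard;
    fragColor = texel;
}
)";

In legacy OpenGL, the fixed-function alpha test (glEnable(GL_ALPHA_TEST) plus glAlphaFunc(GL_GREATER, 0.1f)) achieves the same effect.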
I'm searching for a very fast way to render a tiled map with three layers using SDL2.
I'm using SDL_RenderCopy, but it's very slow...
OK, I've found what I needed, so I'll explain it here.
I actually have four layers, and I used to render them in a simple for loop.
A per-tile for loop isn't a good way to render tiled maps, though.
The best way is to render each layer into one big texture before the main rendering loop, and then render each big texture to the screen. The per-tile for loop takes a lot of time to process; rendering one big texture, by contrast, is very fast.
Take a look at the following code, where "bigTexture" is a layer and "width" and "height" are the size of that layer.
Uint32 pixelFormat;
// Reuse the tileset's pixel format for the layer texture.
SDL_QueryTexture(tileset, &pixelFormat, NULL, NULL, NULL);
// Create a texture that can serve as a render target.
SDL_Texture *bigTexture = SDL_CreateTexture(renderer, pixelFormat, SDL_TEXTUREACCESS_TARGET, width, height);
// From here on, rendering goes into bigTexture instead of the window.
SDL_SetRenderTarget(renderer, bigTexture);
// Put your tile-drawing for loop here, for example:
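// A hypothetical tile loop, for illustration only; tiles, tileCount, and the
// src/dst rect fields are placeholders for however your map data is stored.
for (int i = 0; i < tileCount; ++i) {
    SDL_RenderCopy(renderer, tileset, &tiles[i].src, &tiles[i].dst);
}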
That's it; we've rendered our layer into a big texture. Let's see how to draw it to the screen.
SDL_SetRenderTarget(renderer, NULL); // render to the window again
// srcRect defines the part of the big texture visible in the window;
// camX and camY are hypothetical camera coordinates.
SDL_Rect srcRect = { camX, camY, windowWidth, windowHeight };
SDL_RenderCopy(renderer, bigTexture, &srcRect, NULL);
That's all. You're now able to render tiled maps in a very fast way.
I'm making a 2D game with OpenGL, using textured quads to display 2D sprites. I'd like to create an effect whereby any character sprite that's partially hidden by a terrain sprite will have the occluded region visible as a solid-colored silhouette, as demonstrated by the pastoral scene in this image.
I'm really not sure how to achieve this. I'm guessing the solution will involve some trick with the fragment shader, but as for specifics I'm stumped. Can anyone point me in the right direction?
Here's what I've done in the past:
Draw the world/terrain (everything you want the silhouette to show through)
Disable the depth test
Disable writes to the depth buffer
Draw the sprites in silhouette mode (a different shader or texture)
Enable the depth test
Enable writes to the depth buffer
Draw the sprites in normal mode
Draw anything else that should go on top (like the HUD)
Explanation:
When you draw the sprites the first time (in silhouette mode), they draw over everything but don't affect the depth buffer, so you won't get z-fighting when you draw them the second time. On the second pass, the parts of the sprites that are behind the terrain fail the depth test, but the silhouette has already been drawn there.
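A minimal sketch of those passes in raw OpenGL calls (drawTerrain, drawSprites, and the shader handles are placeholders for your own code):

// Pass 1: terrain writes color and depth normally.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawTerrain();

// Pass 2: silhouettes draw over everything but leave the depth buffer alone.
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
glUseProgram(silhouetteShader); // e.g. outputs a solid color
drawSprites();

// Pass 3: normal sprites, now depth-tested against the terrain.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glUseProgram(spriteShader);
drawSprites();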
You can do things like this using stenciling or depth buffering.
When rendering the wall, make sure it writes a different value to the stencil buffer than the background does. Then render the cow twice: once passing the stencil test where the wall is not present, and once where it is. Use a different shader each time.
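A minimal sketch of that stencil variant (drawWall, drawCow, and the shader handles are again placeholders):

glEnable(GL_STENCIL_TEST);

// Wall: write 1 into the stencil buffer wherever it is drawn.
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawWall();

// From now on, leave the stencil buffer unchanged.
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

// Cow, pass 1: normal shader where the wall is absent (stencil != 1).
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glUseProgram(normalShader);
drawCow();

// Cow, pass 2: silhouette shader where the wall was drawn (stencil == 1).
glStencilFunc(GL_EQUAL, 1, 0xFF);
glUseProgram(silhouetteShader);
drawCow();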
My problem is that I want to take a kind of snapshot of a 3D scene, manipulate that snapshot, and draw it back to another viewport of the scene.
I currently read the image using glReadPixels.
Now I want to draw that image back to a specified viewport, but using modern OpenGL.
I've read about framebuffer objects (FBOs) and pixel buffer objects (PBOs), and about the approach of writing the framebuffer object's contents into a GL_TEXTURE_2D texture and passing that to the fragment shader as a simple texture.
Is this approach correct, or can anyone provide a simple example of how to render the image back to the scene using modern OpenGL rather than the deprecated glDrawPixels?
The overall process will look something like this:
Create an FBO with a color and depth attachment. Bind it.
Render your scene
Copy the contents out of its color attachment to client memory to do the operations you want on it.*
Copy the image back into an OpenGL texture (may as well keep the same one).
Bind the default framebuffer (0)
Render a full screen quad using your image as a texture map. (Possibly using a different shader or switching shader functionality).
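A rough sketch of steps 1 and 5, with the middle steps indicated by comments (the variable names, width, and height are placeholders):

GLuint fbo, colorTex, depthRbo;

// Color attachment: a texture we can later sample from.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Depth attachment: a renderbuffer, since we never sample the depth values.
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

// ... render the scene, read back with glReadPixels, modify the pixels,
//     and re-upload them into colorTex with glTexSubImage2D ...

glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer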
Possible questions you may have:
Do I have to render a full screen quad? Yup. You can't bypass the vertex shader. So somewhere just go make four vertices with texture coordinates in a VBO, yada yada.
My vertex shader deals with projecting things; how do I deal with that quad? You can create a subroutine that toggles how you deal with vertices in your vertex shader. One can be for regular 3D rendering (i.e. transforming from model space into world/view/screen space) and one can just be a pass-through that sends along your vertices unmodified. You'll just want your vertices at the four corners of the square from (-1,-1) to (1,1). Send those along to your fragment shader and it'll do what you want. You can optionally just set all your matrices to identity if you don't feel like using subroutines.
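And a minimal sketch of that pass-through vertex shader (GLSL as a C++ raw string; the attribute locations are illustrative):

const char *passthroughVS = R"(
#version 330 core
layout(location = 0) in vec2 aPos;      // quad corners from (-1,-1) to (1,1)
layout(location = 1) in vec2 aTexCoord;
out vec2 vTexCoord;
void main() {
    vTexCoord = aTexCoord;
    gl_Position = vec4(aPos, 0.0, 1.0); // no projection, passed straight through
}
)";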
*If you can find a way to do your texture operations in a shader, I'd highly recommend it. GPUs are quite literally built for this.
How can I add a glowing effect to a line that I draw? I'm using OpenGL on Linux.
You can implement the radial blur effect described in NeHe Lesson 36. The main idea is to render the drawing to a texture, and to do that N times with a small offset after each render, until the drawing is ready to be copied to the framebuffer.
I've written a small demo that uses Qt and OpenGL. You can see the original drawing (without the blur) below:
The next image shows the drawing with the blur effect turned on:
I know it's not much, but it's a start.
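The core of that kind of blur is a loop of blended, slightly offset copies of the scene texture. A minimal sketch, assuming an illustrative helper drawFullscreenQuad(texture, scale, alpha) that draws the texture over the viewport at the given scale and opacity:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // additive blending, so copies accumulate
float scale = 1.0f;
float alpha = 0.2f;
for (int i = 0; i < 8; ++i) {      // N blended copies
    drawFullscreenQuad(sceneTexture, scale, alpha);
    scale += 0.02f;                // each copy grows slightly...
    alpha *= 0.8f;                 // ...and fades slightly
}
glDisable(GL_BLEND);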
I too once hoped there was a very simple solution to this, but unfortunately it is a little complicated, at least for a beginner.
The way glowing effects are implemented today, regardless of API (D3D, OpenGL), is with pixel/fragment shaders. It usually involves multiple render passes: you render your scene, then render a pass where only the "glowing" objects are visible, then apply a bloom pixel shader and compose the results together.
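As a sketch of just the final compose step, a fragment shader that adds the blurred glow pass on top of the scene (GLSL as a C++ raw string; the sampler names are illustrative):

const char *composeFrag = R"(
#version 330 core
in vec2 vTexCoord;
uniform sampler2D uScene; // the normally rendered scene
uniform sampler2D uBloom; // the blurred glow-objects-only pass
out vec4 fragColor;
void main() {
    // Additive compose: the glow brightens the scene wherever it is present.
    fragColor = texture(uScene, vTexCoord) + texture(uBloom, vTexCoord);
}
)";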
See the link provided by @Valmond for details.
Edit:
It should be added that this can also be achieved with deferred rendering, where normals, positions, and other information like a "glow flag" are rendered to a texture, i.e. stored in different components of the texture. A shader then reads from the textures and does the lighting computations and post-processing effects in a single pass, since all the data it needs is available from that rendered texture.
Check this out: http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch21.html
It explains clearly how to make glow effects.
Without using shaders, you might also try rendering to a texture and doing a radial blur.
As a starting point, check out the NeHe tutorials.