Texture transparency with P3D [closed] - opengl

I’m currently coding a “Doom” clone, using P3D. I’m not using any external library.
I’m having a weird artifact that happens when I’m using a masked texture. I’ve tried several methods to fix it, but to no avail :(. DISABLE_DEPTH_MASK suppresses the artifact, but then my sprites get sorted all wrong.
If anyone can point me in the right direction, I would really appreciate it! I’m so close to having a functional engine!
(disregard the “doom” face and soldier sprites, of course, they are just temporary assets I’m using…)
As you can see, even though the mask is working (see the feet of the soldier), it’s leaving a huge black (background-color) artifact behind.
When walking behind the quad, the artifact isn't as bad, but it is still present.

You have to use/enable Blending.
This technique is used when you want to render a sprite that is not completely opaque, but has partly or completely transparent areas.
The soldier has to be rendered last, after the environment of the scene has been drawn. The depth test has to stay enabled.
Enable blending before you render the soldier sprite (texture) with the alpha channel:
gl.glEnable(GL.GL_BLEND);
gl.glBlendFunc(GL.GL_SRC_ALPHA,GL.GL_ONE_MINUS_SRC_ALPHA);
And disable blending after:
gl.glDisable(GL.GL_BLEND);
Note, the "black" background of the soldier sprite has to have an alpha channel of 0.0 (completely transparent) and the alpha channel of the soldier itself has to be 1.0 (completely opaque). In common PNG images fulfill this.
Explanation:
By the time the soldier is rendered, the background (the environment of the scene) has already been drawn and its color has been stored in the framebuffer.
The blending function is:
dest.rgba = dest.rgba * (1 - src.a) + src.rgba * src.a
where dest.rgba is the color in the framebuffer and src.rgba is the color of the sprite.
If the alpha channel of the sprite is 1.0 (src.a = 1.0; opaque), then
dest.rgba = dest.rgba * (1.0 - 1.0) + src.rgba * 1.0
dest.rgba = src.rgba
If the alpha channel of the sprite is 0.0 (src.a = 0.0; transparent), then
dest.rgba = dest.rgba * (1.0 - 0.0) + src.rgba * 0.0
dest.rgba = dest.rgba
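Putting it together, a minimal sketch of the per-frame order in plain C-style OpenGL calls (the JOGL calls above map to these one-to-one; drawEnvironment and drawSoldierSprite are placeholder helpers, not from the original post):
glEnable(GL_DEPTH_TEST);                              // keep depth testing on for correct occlusion
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawEnvironment();                                    // opaque geometry first; it fills the depth buffer
glEnable(GL_BLEND);                                   // blend the sprite against what is already there
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawSoldierSprite();                                  // texels with alpha 0.0 leave the framebuffer unchanged
glDisable(GL_BLEND);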

Related

Draw a transparent framebuffer onto the default framebuffer [closed]

I'm facing a situation where I need to render the content of a framebuffer object onto the screen. The screen already has some content on it, and I would like to draw the contents of my framebuffer on top of it.
I'm using Qt5 and QNanoPainter to implement this.
The rendering commands I've implemented essentially take a QOpenGLFramebufferObject, convert it into a QNanoImage (using this), and then call QNanoPainter::drawImage.
This works ok but the problem is that when the content of the fbo is rendered onto the screen, the previously existing content of the screen becomes "pixelated".
So for example, before I draw the FBO the screen looks like this
Then when I draw the FBO onto the default render target, I get this (red is the content of FBO)
I assume this has something to do with blending and OpenGL, but I'm not sure how to solve this problem.
This happens when you over-draw a semi-transparent image over itself multiple times. The white pixels become whiter, the blue pixels become bluer, and, consequently, the anti-aliased edge disappears over a couple of iterations.
I therefore deduce that your 'transparent framebuffer' already contains the blue line and the black grid lines. The solution is to clear the 'transparent framebuffer' before you proceed with drawing the red line into it.
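A minimal sketch of that fix, assuming the FBO is a QOpenGLFramebufferObject and the new content is drawn while it is bound (the exact integration with QNanoPainter may differ):
fbo->bind();                                      // render into the offscreen framebuffer
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);             // clear to fully transparent, not to the previous content
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw only the new content (the red line) into the FBO ...
fbo->release();
// then composite the FBO onto the default framebuffer exactly once per frame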

Loading many images into OpenGL and rendering them to the screen [closed]

I have an image database on my computer, and I would like to load each of the images up and render them in 3D space, in OpenGL.
I'm thinking of instantiating a VBO for each image, as well as a VAO for each one of the VBO's.
What would be the most efficient way to do this?
Here's the best way:
Create just one quad out of 4 vertices.
Use a transformation matrix (not 3D transform; just transforming 2D position and size) to move the quad around the screen and resize it if you want.
This way you can use one vertex array (for the quad), one texture-coordinate array, and one VAO, and keep the same vertex bindings for every draw call; the only thing that changes between draw calls is the bound texture.
Note: the texture coordinates will also have to be transformed with the vertices.
I think the conversion between the vertex coordinate system (2D) and the texture coordinate system is texturePos = vPos / 2 + 0.5, and therefore vPos = (texturePos - 0.5) * 2.
OpenGL's textureCoords system goes from 0 - 1 (with the axes starting at the bottom left of the screen):
while the vertex (screen) coordinate system goes from -1 to 1 (with axes starting in the middle of the screen)
This way you can correctly transform textureCoords to your already transformed vertices.
OR
if you do not understand this method, your proposed method is all right, but be careful not to use too many textures, or else you will be creating and binding a lot of VAOs!
This might be hard to understand, so feel free to ask questions below in the comments!
EDIT:
Also, as #Botje's helpful comment below points out, the textureCoords array is not needed. Because the textureCoords are calculated relative to the vertex positions through the method above, the calculation can be done directly in the vertex shader. Make sure to have the vertices transformed first, though.
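A rough sketch of the single-quad approach (the names quadVao, transformLoc, images, and the Image struct are hypothetical placeholders):
// One shared quad with positions in [-1, 1], created once at startup together with its VAO/VBO.
float quad[] = { -1,-1,  1,-1,  1, 1,   -1,-1,  1, 1,  -1, 1 };
// Vertex shader (GLSL), deriving the texture coordinate from the position as described above:
//   gl_Position = transform * vec4(pos, 0.0, 1.0);
//   texCoord    = pos * 0.5 + 0.5;      // maps [-1, 1] to [0, 1]
glBindVertexArray(quadVao);              // one VAO, bound once and reused for every image
for (const Image& img : images) {
    glBindTexture(GL_TEXTURE_2D, img.textureId);                    // a different texture per draw call
    glUniformMatrix4fv(transformLoc, 1, GL_FALSE, img.transform);   // 2D position/size of this quad
    glDrawArrays(GL_TRIANGLES, 0, 6);                                // same six vertices every time
}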

How to draw a image with alpha texture without changing the draw order [closed]

If I first draw an image with an alpha channel at z depth 0.1, and after that I draw a rectangle at z depth 0.0,
the result is the following image, where the transparent part of the image becomes black.
I can correct this by first drawing the rectangle and then drawing the image.
Since the image is in front of the rectangle in z, is there a way I can first draw the image and then draw the rectangle, without the transparent part of the image becoming black?
After discarding the transparent fragments, this is the result.
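For reference, discarding the transparent texels is usually done in the fragment shader; a discarded fragment writes neither color nor depth, so the rectangle drawn afterwards still shows through. A minimal GLSL sketch (the sampler and variable names are placeholders):
#version 330 core
in vec2 texCoord;
out vec4 fragColor;
uniform sampler2D image;
void main() {
    vec4 color = texture(image, texCoord);
    if (color.a < 0.5)     // transparent (or mostly transparent) texel
        discard;           // leaves both the color buffer and the depth buffer untouched
    fragColor = color;
}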

OpenGL depth issue [closed]

I draw a 3D shape in both fill mode and line mode ("wireframe"), and I get intersections between the triangles and the lines.
I enable the depth test before I render: GL.Enable(EnableCap.DepthTest);. This is the code that draws the model:
GL.BindVertexArray(VAO);
Vector4 color;
int colorLoc = GL.GetUniformLocation(programID, "color");
// First pass: draw the border/wireframe lines.
if (BorderThickness > 0)
{
    color = new Vector4((float)BorderColor.R / 255, (float)BorderColor.G / 255, (float)BorderColor.B / 255, (float)BorderColor.A / 255);
    GL.Uniform4(colorLoc, color.X, color.Y, color.Z, color.W);
    GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Line);
    GL.LineWidth(BorderThickness);
    GL.DrawArrays(PrimitiveType.LineStrip, 0, _positions.Count);
}
// Second pass: draw the filled triangles over the same geometry.
color = new Vector4((float)FillColor.R / 255, (float)FillColor.G / 255, (float)FillColor.B / 255, (float)FillColor.A / 255);
GL.Uniform4(colorLoc, color.X, color.Y, color.Z, color.W);
GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Fill);
GL.DrawArrays(PrimitiveType.Triangles, 0, _positions.Count);
GL.BindVertexArray(0);
I got the following result:
The result after adding glPolygonOffset
It fixes the issue, but the lines are not clear.
If the farthest vertex is at a distance of 1100, it is unlikely that you need a near plane at 0.1.
If we look at the model, all the points seem to be on a similar scale. This looks a lot like a Z-fighting issue.
Try
projection = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)width / (float)height, 10f, 2000f)
You will be significantly more precise at the scale you are working at. You'll get clipping if your points are less than 10 units away from the camera, but considering how big the mesh is, that shouldn't be a problem.
Since the Z-buffer's precision is non-linear (most of it is concentrated near the near plane), the closer you are to the far plane, the lower your precision will be and the more Z-fighting you'll get. Considering your points are very close to the far plane, this would explain the issue we are seeing. By bringing the near plane to within an order of magnitude or so of the scale of your vertices, the issue should disappear.
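As a rough back-of-the-envelope illustration (not from the original answer; it assumes a 24-bit depth buffer and the standard perspective depth mapping d = (1/n - 1/z) / (1/n - 1/f)):
With n = 0.1 and f = 2000, two surfaces one unit apart at z = 1100 end up with depth values roughly 8e-8 apart, which is only one or two steps of a 24-bit depth buffer (resolution about 6e-8), so they fight.
With n = 10 and f = 2000, the same two surfaces end up roughly 8e-6 apart, about a hundred times more separation, which gives the rasterizer far more room to keep nearby surfaces apart.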

OpenGL Lighting for uniform illumination [closed]

Have a look at the following 2 images -
The two images are of the same model at different angles. It is made of multiple cylinders stacked on top of each other. As you can see, there is something funny with the lighting. One side of all the cylinders is dark in the first image. When the same model is rotated, the other end of all the cylinders becomes dark. The explanation is pretty clear: the side whose normals are aligned with the light direction gets lit. I want both sides to be equally well lit without compromising the 3D look and feel of the cylinders. How should I set up the lighting?
I am using Smooth Shading.
It's hard to say what the problem is without more information.
What I think is happening is that the cylinders were created with smooth shading normals. This is visually pleasing but it can create problems like this one when the poly count is low.
(source: k-3D.org)
In this image, the first cylinder from the left has flat shading and the middle one has smooth shading. As you can see in this example, the smooth shaded one also has problems with too little light on one side. The reason is that, with smooth shading, the normals on the edge of the cylinder are an average of the normals from the side and the normals from the top, and that can cause lighting problems. See this diagram:
The yellow arrow is the light direction, the red one is the smooth normal, and the green ones are the flat normals. See how the angle between the smooth normal and the light is around 90°, so that vertex gets almost no light.
The solution is to set the normals as smooth, but detach the top and bottom faces from the side. This way, the circular edge won't get smoothed but the side will. The result is the third cylinder on the first image.
If you cannot achieve that with your software, an easy solution is to add a bevel around the edges like this:
The bevel can be as small as you want and it will achieve the effect you want.
Hope it helps.
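If the cylinders are generated in code rather than in a modeling package, the equivalent of detaching the caps is to duplicate the rim vertices: the side copies get radial (smooth) normals and the cap copies get axial (flat) normals, so the rim is never averaged between the two. A minimal sketch, with hypothetical names:
#include <cmath>
#include <vector>

struct Vertex { float px, py, pz, nx, ny, nz; };

// Adds one ring of vertices at height h. Side vertices get radial normals
// (smoothly shared along the side); cap vertices are duplicates of the same
// positions with an axial normal (capNy = +1 for the top cap, -1 for the bottom).
void addRing(std::vector<Vertex>& side, std::vector<Vertex>& cap,
             float r, float h, float capNy, int segments)
{
    for (int i = 0; i < segments; ++i) {
        float a = 2.0f * 3.14159265f * i / segments;
        float x = r * std::cos(a), z = r * std::sin(a);
        side.push_back({ x, h, z, std::cos(a), 0.0f, std::sin(a) }); // smooth side normal
        cap.push_back({ x, h, z, 0.0f, capNy, 0.0f });               // flat cap normal
    }
}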