While implementing billboard objects in my engine I encountered a problem (screenshot below).
As you can see, the billboard object covers everything in the background (the skybox seems to be an exception), and this is not exactly how I would like it to work. I have no idea where the problem is.
My fragment shader is pretty simple:
#version 330
uniform sampler2D tex;
in vec2 TexCoord;
out vec4 FragColor;
void main()
{
FragColor = texture(tex, TexCoord); // texture2D() is not available in core-profile GLSL 330
}
and the billboard is just a triangle strip generated in a geometry shader.
Any ideas would be appreciated.
This is probably a draw-order issue: you need to draw opaque objects first, and then alpha-blended objects back to front. Alternatively, you can enable alpha testing, or discard fragments in your shader if their alpha is below a certain threshold.
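For example, the discard version of your fragment shader could be as small as this (the 0.1 alpha threshold is an arbitrary choice):
#version 330
uniform sampler2D tex;
in vec2 TexCoord;
out vec4 FragColor;
void main()
{
vec4 color = texture(tex, TexCoord);
if (color.a < 0.1) // arbitrary cut-off; tune to your textures
discard;           // transparent fragments no longer write depth
FragColor = color;
}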
I realised that models can be loaded in many different ways: some might have no textures and just use vertex colours, while others have materials with textures. I currently have one shader that draws 3D models with lighting, but how would I go about loading models that need to be rendered differently and still have them rendered correctly in the scene?
One method I thought of is having hard-defined attribute pointers in the vertex shader, so that if a model doesn't require an attribute, it simply doesn't bind it. I would imagine that if an attribute isn't bound but is used in calculations, it won't contribute to or offset any values, especially if you do the calculations separately and sum them at the end (for example, computing the fragment's colour from the vertex colour and from the texture separately, and then summing the two).
You can imagine the vertex shader looking like:
#version 330 core
layout (location = 0) in vec3 aPos; // Vertex positions
layout (location = 1) in vec3 aNormal; // Normals
layout (location = 2) in vec2 aTexCoords; // Optional Texture coordinates
layout (location = 3) in vec3 aColors; // Optional vertex colours
out vec3 FragPos;
out vec3 Normal;
out vec2 TexCoords;
out vec3 Colors;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
FragPos = vec3(model * vec4(aPos, 1.0));
Normal = mat3(transpose(inverse(model))) * aNormal;
TexCoords = aTexCoords;
Colors = aColors;
gl_Position = projection * view * vec4(FragPos, 1.0);
}
An issue I can see with this is that I'm not sure how I could tell whether to use texture coordinates, vertex colours, or both. Perhaps using a uniform as a flag and keeping that data in the model object?
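Something like this fragment-shader sketch is what I have in mind (the uniform names are just placeholders, and I haven't tested it):
#version 330 core
in vec2 TexCoords;
in vec3 Colors;
out vec4 FragColor;
uniform sampler2D diffuseMap;   // placeholder name
uniform bool hasTexture;        // set per model from the C++ side
uniform bool hasVertexColors;   // set per model from the C++ side
void main()
{
vec3 base = vec3(1.0);
if (hasTexture)
base *= texture(diffuseMap, TexCoords).rgb;
if (hasVertexColors)
base *= Colors;
FragColor = vec4(base, 1.0); // lighting omitted for brevity
}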
Another method would be defining shaders for different lighting techniques and then determining which shader is used for which model. This is my least favourite, as it doesn't allow for dynamic combinations (e.g. both vertex colours and textures at the same time).
What would be the approach taken in a professional setting so that any model can be loaded and drawn into a scene with lighting, in the same vein as game engines manage? (I'm using forward rendering right now, so I don't need any deferred-shading concepts; that's above my pay grade at the moment.) I'm not seeking specific solutions, more an expert's view of how this issue is approached.
Also, I'm using OpenGL 3.3
What would be the approach taken in a professional setting so that any model can be loaded and drawn into a scene with lighting, in the same vein as game engines manage?
Either this:
defining shaders for different lighting techniques and then determining which shader is used for which model
or this:
having hard-defined attribute pointers in the vertex shader, so that if a model doesn't require an attribute, it simply doesn't bind it
Pick one. In the second approach, you don't need flags - you can just bind dummy values, like a plain white texture if the model has no texture, and a white vertex colour if the model has no vertex colours.
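A rough sketch of the dummy-value idea on the C++ side (untested; attribute location 3 matches the aColors attribute in the vertex shader quoted above):
// 1x1 white texture, bound whenever a model has no texture of its own
GLuint whiteTex;
glGenTextures(1, &whiteTex);
glBindTexture(GL_TEXTURE_2D, whiteTex);
const unsigned char white[4] = { 255, 255, 255, 255 };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, white);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// White vertex colour: disable the attribute array and supply a constant
// value that every vertex reads instead.
glDisableVertexAttribArray(3);
glVertexAttrib3f(3, 1.0f, 1.0f, 1.0f);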
I am learning OpenGL and I thought I pretty much understood fragment shaders. My intuition is that the fragment shader gets applied once to every pixel, but recently, while working with textures, I became confused about how exactly they work.
First of all, the fragment shader receives texture coordinates, so if I have a quad, the shader is given the texture coordinates of the quad's 4 corners. What I don't understand is the sampling process, i.e. taking a texture coordinate and fetching the appropriate colour value at that coordinate. Specifically, since I only supply 4 texture coordinates, how does OpenGL know to sample the coordinates in between for colour values?
This is made even more confusing by the fact that the vertex shader's output goes straight to the fragment shader, and the vertex shader is applied per vertex. That means the fragment shader only ever knows about the texture coordinate of a single vertex, rather than all 4 coordinates that make up the quad. So how exactly does it sample values that fit the shape on the screen when it only has one texture coordinate available at a time?
All varying variables are interpolated automatically.
Thus if you put texture coordinates for each vertex into a varying, you don't need to do anything special with them after that.
It could be as simple as this:
// Vertex
#version 330 compatibility
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
gl_Position = ftransform(); // transform the vertex as usual
v_texcoord = a_texcoord;    // just pass the per-vertex value through
}
// Fragment
#version 330 compatibility
uniform sampler2D u_texture;
varying vec2 v_texcoord; // arrives already interpolated for the current fragment
void main()
{
gl_FragColor = texture2D(u_texture, v_texcoord);
}
Disclaimer: I used the old GLSL syntax. In newer GLSL versions, attribute would be replaced with in; varying would be replaced with out in the vertex shader and with in in the fragment shader; gl_FragColor would be replaced with a custom out vec4 variable; and texture2D() would be replaced with texture().
Notice how this fragment shader doesn't do any manual interpolation. It receives just a single vec2 v_texcoord, which was interpolated under the hood from the v_texcoord values of the vertices comprising the primitive1 the current fragment belongs to.
1. A primitive means a point, a line, a triangle or a quad.
First: depending on the driver, you may still be able to use gl_FragColor in a core context, but per the spec it is only available in compatibility profiles.
Second: texels, fragments, and actual monitor pixels are three different things.
For example, this line sets the magnification filter, which controls how a colour is chosen when a single texel ends up covering more than one fragment (pixel):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
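For completeness, the minification filter (used in the opposite case, when many texels fall inside one fragment) is set the same way:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);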
I am working with an old game format and am trying to finish up a rendering project. I am working on lighting and am trying to get a multiply blend mode going in my shader.
The lighting is provided as per-vertex values which are then interpolated over the triangle.
I have taken a screenshot of an unlit scene and of just the lightmaps, and combined them in Photoshop with a multiply layer. It gives exactly what I want.
I also want to factor in the ambient light, which acts kind of like an 'Opacity' setting on the Photoshop lightmap layer.
I have tried just multiplying them, which works great, but again, I want to be able to control the amount of the lightmaps. I tried mix(), which blended the lightmaps and the textures, but not as a multiply blend.
Here are the images. The first is the diffuse, the second is the lightmap and the third is them combined at 50% opacity with the lightmaps.
http://imgur.com/Zwg9IZr,6hq0t0p,7hR88I2#0 [1]
So my question is: how do I multiply-blend these, with the ambient light acting as the 'opacity' factor? Again, I have tried a direct mix, but that gives more of an overlay than a multiply blend.
My GLSL fragment source:
#version 120
uniform sampler2D texture;
varying vec2 outtexture;
varying vec4 colorout; // GLSL 120: fragment inputs are declared with 'varying', not 'in'
void main(void)
{
float ambient = 1.0f;
vec4 textureColor = texture2D(texture, outtexture);
vec4 lighting = colorout;
// here is where I want to blend with the ambient light dictating the contribution from each
gl_FragColor = textureColor * lighting;
}
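One idea I'm considering (untested): fade the lightmap toward white by the ambient factor before multiplying, which should mimic lowering the multiply layer's opacity in Photoshop:
float ambient = 0.5; // 1.0 = full lightmap multiply, 0.0 = unlit texture
vec4 textureColor = texture2D(texture, outtexture);
vec4 lighting = colorout;
vec4 fadedLight = mix(vec4(1.0), lighting, ambient); // lerp the lightmap toward white
gl_FragColor = textureColor * fadedLight;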
I want to put a texture on a rectangle which has been transformed by a non-affine transform (more specifically a perspective transform).
I have a very complex implementation based on OpenSceneGraph that loads my own vertex and fragment shaders.
The problem starts with the fact that the shaders were written quite a long time ago and use GLSL 120.
The OpenGL side is written in C++ and in its simplest form, loads a texture and applies it to a quad. Up to recently, everything was working fine because the quad was at most affine-transformed (rotation + translation) so the rendering of the texture on it was correct.
Now however we want to support quads of any shape, including something like this:
http://ibin.co/1dbsGPpzbkOX
As you can see in the picture above, the texture on it is incorrect in the middle (shown by arrows)
After hours of research I found out that this is due to OpenGL splitting quads into triangles and rendering each triangle independently. This is of course incorrect if my quad is as shown, because the 4th point influences the texture stretch.
I then even found that this issue has a name: it's a "perspectively incorrect interpolation of texture coordinates", as explained here:
[1]
Looking for solutions to this, I came across this article which mentions the use of the "smooth" attribute in later GLSL versions: [2]
but this means updating my shaders to a newer version.
An alternative I found was to use glHint, as described here: [3]
but the disadvantage here is that it is only a hint, and there is no way to make sure it is used.
Now that I have shown my research, here is my question:
Updating my (complex) shaders, and all the OpenGL code that goes with them, to the new OpenGL pipeline paradigm would be too time-consuming. So I tried using GLSL "version 330 compatibility", changing the "varying" variables to "smooth out" and "smooth in", and adding the GL_NICEST hint on the C++ side, but these changes did not solve my problem. Is this expected, because the compatibility mode somehow doesn't support perspective-correct interpolation? Or is there something more that I need to do?
Or is there a better way for me to get this functionality without needing to refactor everything?
Here is my vertex shader:
#version 330 compatibility
smooth out vec4 texel;
void main(void) {
gl_Position = ftransform();
texel = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
and the fragment shader is much too complex to post in full, but it starts with:
#version 330 compatibility
smooth in vec4 texel;
Using derhass's hint I solved the problem in a much different way.
It is true that the "smooth" keyword was not the problem; what was missing was projective texture mapping.
To solve it, I passed the perspective transform matrix directly from my C++ code to the fragment shader and calculated the "correct" texture coordinate there myself, instead of relying on GLSL's built-in barycentric interpolation.
To help anyone with the same problem, here is a cut-down version of my shaders:
.vert
#version 330 compatibility
smooth out vec4 inQuadPos; // Used for the frag shader to know where each pixel is to be drawn
void main(void) {
gl_Position = ftransform();
inQuadPos = gl_Vertex;
}
.frag
#version 330 compatibility
uniform mat3 transformMat; // the transformation between texture coordinates and final quad coordinates (passed in from C++)
uniform sampler2DRect source;
smooth in vec4 inQuadPos;
void main(void)
{
// Calculate correct texel coordinate using the transformation matrix
vec3 real_texel = transformMat * vec3(inQuadPos.x/inQuadPos.w, inQuadPos.y/inQuadPos.w, 1);
vec2 tex = vec2(real_texel.x/real_texel.z, real_texel.y/real_texel.z);
gl_FragColor = texture2DRect(source, tex).rgba;
}
Note that the fragment shader code above has not been tested exactly like that so I cannot guarantee it will work out-of-the-box, but it should be mostly there.
I'm just starting to learn graphics using OpenGL and barely grasp the ideas of shaders and so forth. Following a set of tutorials, I've drawn a triangle on screen and assigned a color attribute to each vertex.
Using a vertex shader I forwarded the color values to a fragment shader which then simply assigned the vertex color to the fragment.
Vertex shader:
[.....]
layout(location = 1) in vec3 vertexColor;
out vec3 fragmentColor;
void main(){
[.....]
fragmentColor = vertexColor;
}
Fragment shader:
[.....]
out vec3 color;
in vec3 fragmentColor;
void main()
{
color = fragmentColor;
}
So I assigned a different colour to each vertex of the triangle. The result was a smoothly interpolated coloured triangle.
My question is: since I send a specific colour to the fragment shader, where did the smooth interpolation happen? Is it a state enabled by default in OpenGL? What other values can this state have, and how do I switch among them? I would expect to have total control over the pixel colours using the fragment shader, but there seem to be calculations behind the scenes that alter the result. There are clearly things I don't understand; can anyone help with this?
Within the OpenGL pipeline, between the vertex shading stages (vertex, tessellation, and geometry shading) and fragment shading, is the rasterizer. Its job is to determine which screen locations are covered by a particular piece of geometry (point, line, or triangle). Knowing those locations, along with the input vertex data, the rasterizer linearly interpolates the data values for each varying variable in the fragment shader and sends those values as inputs into your fragment shader. When applied to color values, this is called Gouraud shading.
Source: OpenGL Programming Guide, Eighth Edition.
If you want to see what happens without interpolation, call glShadeModel(GL_FLAT) before you draw. The default value is GL_SMOOTH.
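If glShadeModel isn't available to you (it belongs to the fixed-function/compatibility profile), the modern equivalent is the flat interpolation qualifier on the varying itself, roughly like this:
// Vertex shader
flat out vec3 fragmentColor; // 'flat': no interpolation, the provoking vertex's value is used
// Fragment shader
flat in vec3 fragmentColor;  // the qualifier must match the vertex shader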