Before I implement this, I want to know whether it is possible or not:
I want to use instanced rendering along with multiple color attachments. For each instance of a geometry, I want to write the color data to a different color attachment of the FBO. Based on the value of gl_InstanceID, I want to write to a particular color attachment: the 1st instance should go to the 1st color attachment and the 2nd instance into the 2nd attachment. So my VS would be:
in vec4 position;

uniform mat4 u_mvp0;
uniform mat4 u_mvp1;

flat out int instance_id;

void main() {
    mat4 mvp;
    if (gl_InstanceID == 0)
        mvp = u_mvp0;
    if (gl_InstanceID == 1)
        mvp = u_mvp1;
    gl_Position = mvp * position;
    instance_id = gl_InstanceID;
}
FS:
flat in int instance_id;

layout(location = 0) out vec4 Color0;
layout(location = 1) out vec4 Color1;

void main() {
    if (instance_id == 0)
        Color0 = vec4(1.0, 1.0, 1.0, 1.0);
    if (instance_id == 1)
        Color1 = vec4(1.0, 0.0, 0.0, 1.0);
}
Can we do conditional writes to a color attachment? Can we assign the value of gl_InstanceID to a variable inside the shader? And can we use gl_InstanceID in the fragment shader?
Can we do conditional writes to a color attachment?
Not in the way you mean.
If the glDrawBuffers state says that a fragment output will be routed to a color buffer, then one will always be routed to it. If you don't write to that value, then the value written to the attachment will be undefined.
So while you can change the value you write, you cannot choose to write nothing to an attachment. That choice is made based on the glDrawBuffers state.
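For context, that routing is set up on the C side. A minimal sketch (assuming an FBO with two color attachments already bound):

```c
/* Route fragment output 0 to attachment 0 and output 1 to attachment 1.
   Once this state is set, BOTH outputs receive some value for every
   fragment the shader covers, whether you assigned to them or not. */
GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);
```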
It seems very much like what you want is layered rendering.
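As a rough sketch of that approach: layered rendering requires the FBO attachment to be a single array texture (attached with glFramebufferTexture rather than glFramebufferTexture2D), and a geometry shader then routes each instance's primitives to a layer via gl_Layer. Hedged example, assuming the vertex shader passes the instance id through:

```glsl
#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in int instance_id[]; // written per-vertex by the vertex shader

void main() {
    for (int i = 0; i < 3; ++i) {
        // All vertices of a triangle come from one instance, so [0] suffices;
        // gl_Layer selects which layer of the array texture receives it.
        gl_Layer = instance_id[0];
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

With this, the fragment shader needs only a single output; the layer selection replaces the per-attachment branching entirely.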
Can we assign the value of gl_InstanceID to a variable inside the shader?
... yes. Why would you think otherwise? It's just an input value filled in by the implementation.
Can we use gl_InstanceID in the fragment shader?
No. It only exists within the vertex shader. If you need any other shader stage to access it, then your VS needs to pass the value to it.
debonair, I think you are slightly misunderstanding Nicol Bolas. I'll try to rephrase his answer.
The code in your fragment shader will work. You can decide which color attachment output gets what.
What Nicol is saying is that you can't not generate an output. Your fragment shader will always output Color0 and Color1 (Color2, 3, …). If you don't assign a value then the output will be garbage, but something will be written.
So you should have a default value for when the instance id doesn't match the color attachment and assign that to the non-matching outputs.
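In other words, a sketch of the fragment shader with explicit defaults (clear color chosen arbitrarily here):

```glsl
#version 330 core

flat in int instance_id;

layout(location = 0) out vec4 Color0;
layout(location = 1) out vec4 Color1;

void main() {
    // Give every output a defined value first...
    Color0 = vec4(0.0);
    Color1 = vec4(0.0);
    // ...then overwrite only the one matching this instance.
    if (instance_id == 0)
        Color0 = vec4(1.0, 1.0, 1.0, 1.0);
    if (instance_id == 1)
        Color1 = vec4(1.0, 0.0, 0.0, 1.0);
}
```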
Hope this helps.
Related
I have a mesh with many thousands of vertices. The model represents a complex structure that needs to be visualized both in its entirety but also in part. The user of my application should be able to define a minimum and maximum value for the Z-Axis. Only fragments with a position between these limits should be rendered.
My naive solution would be to write a Fragment shader somewhat like this:
#version 130
#extension GL_EXT_texture_array : enable

uniform sampler2DArray m_ColorMap;
uniform float m_minZ;
uniform float m_maxZ;

in vec4 fragPosition;
in vec3 texCoord;

void main() {
    // Drop fragments outside the user-defined Z range
    if (fragPosition.z < m_minZ || fragPosition.z > m_maxZ) {
        discard;
    }
    gl_FragColor = texture2DArray(m_ColorMap, texCoord);
}
Alternatively I could try to somehow filter out vertices in the vertex shader. Perhaps by setting their position values to (0,0,0,0) if they fall out of range.
I am fairly certain both of these approaches can work. But I would like to know if there is some better way of doing this that I am not aware of. Some kind of standard approach for slicing models along an axis.
Please keep in mind that I do not want to use separate VBO's for each slice since they can be set dynamically by the user.
Thank you very much.
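For what it's worth, the closest thing to a standard approach for slicing along an axis is user clip planes via gl_ClipDistance, which cuts the geometry before rasterization instead of per-fragment. A sketch of the vertex-shader side, assuming the application enables GL_CLIP_DISTANCE0 and GL_CLIP_DISTANCE1 and reuses the m_minZ/m_maxZ uniforms (the m_mvp uniform is hypothetical, standing in for whatever transform the real shader uses):

```glsl
#version 130

uniform mat4 m_mvp;   // hypothetical model-view-projection matrix
uniform float m_minZ;
uniform float m_maxZ;

in vec4 position;

void main() {
    gl_Position = m_mvp * position;
    // Geometry is clipped wherever a distance goes negative, so only
    // the slab between m_minZ and m_maxZ survives to rasterization.
    gl_ClipDistance[0] = position.z - m_minZ;
    gl_ClipDistance[1] = m_maxZ - position.z;
}
```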
I am trying to use mipmapping with Vulkan. I understand that I should use vkCmdBlitImage between mip levels to fill the chain, but before doing that, I just wanted to know how to select the mip level in GLSL.
Here is what I did.
First I load and draw a texture (using mip level 0) and there was no problem: the rendered image is the texture I loaded, so that is good.
Second, I use this shader (wanting to sample the second mip level, number 1), but the rendered image does not change:
#version 450

layout(set = 0, binding = 0) uniform sampler2D tex;

layout(location = 0) in vec2 texCoords;
layout(location = 0) out vec4 outColor;

void main() {
    // Sample explicitly from mip level 1 of the texture
    outColor = textureLod(tex, texCoords, 1.0);
}
As I understand it, the rendered image should change, but it does not at all; it is always the same image, even if I increase the "1" (the mip level).
Third, instead of changing anything in the GLSL code, I changed the mip level in the VkImageSubresourceRange used to create the image view, and the rendered image did change. So that seems normal, and once I use vkCmdBlitImage I should see the original image at a lower resolution.
The real problem is that when I select a mip level in GLSL it does not affect the rendered image at all, while doing it in C++ does (which seems fair).
Here is all of my source code:
https://github.com/qnope/Vulkan-Example/tree/master/Mipmap
Judging by your default sampler creation info (https://github.com/qnope/Vulkan-Example/blob/master/Mipmap/VkTools/System/sampler.cpp#L28), you always set the maxLod member of your samplers to zero, so your LOD is always clamped between 0.0 and 0.0 (minLod/maxLod). This would fit the behaviour you described.
So try setting the maxLod member of your sampler creation info to the actual number of mip levels in your texture, and changing the LOD level in the shader should work fine.
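A sketch of the relevant part of the fix (assuming a variable `mipLevels` holding the image's mip count; the fields are standard VkSamplerCreateInfo members):

```cpp
VkSamplerCreateInfo samplerInfo = {};
samplerInfo.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
samplerInfo.mipmapMode = VK_SAMPLER_MIPMAP_MODE_LINEAR;
samplerInfo.minLod = 0.0f;
// With maxLod = 0.0f the LOD is clamped to level 0, which is why the
// textureLod() argument had no visible effect.
samplerInfo.maxLod = static_cast<float>(mipLevels);
```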
I would like to build a vertex shader with 1 texture map but multiple uv sets.
So far, I stored the differents UV sets in FaceVertexUvs[0] and FaceVertexUvs[1].
However, in the vertex shader, I can access only the first uv set using "vUv".
varying vec2 vUv;
void main() {
(...)
vUv = uv;
(...)
}
It seems like 'uv' is something magical from three.js. Does something like uv2 or uv3 exist? I need something to access the UV mapping in FaceVertexUvs[1].
My goal is to build a house with wall using part of a texture, and windows in another part of the same texture, and blend both.
Is this the correct way to do it? In which part of the three.js source code is that magical 'uv' set?
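For reference, a hedged sketch of the vertex-shader side with a second UV set. This assumes the second set actually reaches the shader as an attribute named `uv2` (three.js creates that attribute for its built-in materials that use aoMap/lightMap; with a custom ShaderMaterial you may have to add the BufferAttribute to the geometry yourself):

```glsl
// 'uv' is injected automatically by three.js; 'uv2' is assumed here
attribute vec2 uv2;

varying vec2 vUv;
varying vec2 vUv2;

void main() {
    vUv = uv;
    vUv2 = uv2;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```

The fragment shader can then sample the same texture twice, once per UV set, and blend the results.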
I have a problem with a fragment shader: I want to get an effect where two different objects are illuminated with different lights. Here is my main code:
glUniform1i(TextureID, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glUniform1i(ShadowMapID, 1);
//Here I draw my first object
//Then I want to change light from my fragment shader to color2.
My fragment shader:
// Output data
layout(location = 0) out vec3 color;
layout(location = 1) out vec3 color2;
void main(){
//Here I calculate my color variables
}
I have no idea how to achieve this effect. Do I have to write a second fragment shader? Is that necessary?
Not quite.
Think about what a fragment shader is: it runs for every fragment your geometry covers on screen. As such, it typically has one color output, denoting the value of that pixel. Multiple fragment shader outputs are used in advanced techniques such as MRT (multiple render targets), to avoid unnecessary geometry computations.
If you want to change the value of the light between the calls, you simply change the shader uniforms and then execute the draw call again. Another, analogous solution is to use a UBO.
Writing a different shader is necessary only if the logic changes fundamentally; otherwise shaders are often generic enough that modifying the data bindings is all you need for things like changing lights. (Changing the number of lights, though, is another story.)
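As a sketch of the uniform-swap approach (the uniform name `u_lightColor` and the draw helpers are illustrative, not from the original code):

```cpp
// Same shader program for both objects; only the uniform changes.
GLint lightColorLoc = glGetUniformLocation(program, "u_lightColor");

glUniform3f(lightColorLoc, 1.0f, 1.0f, 1.0f); // first light colour
drawFirstObject();

glUniform3f(lightColorLoc, 1.0f, 0.0f, 0.0f); // second light colour
drawSecondObject();
```

Each draw call sees the uniform values that were current when it was issued, so no second shader is needed.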
I'm just starting to learn graphics using opengl and barely grasp the ideas of shaders and so forth. Following a set of tutorials, I've drawn a triangle on screen and assigned a color attribute to each vertex.
Using a vertex shader I forwarded the color values to a fragment shader which then simply assigned the vertex color to the fragment.
Vertex shader:
[.....]
layout(location = 1) in vec3 vertexColor;
out vec3 fragmentColor;
void main(){
[.....]
fragmentColor = vertexColor;
}
Fragment shader:
[.....]
out vec3 color;
in vec3 fragmentColor;
void main()
{
color = fragmentColor;
}
So I assigned a different colour to each vertex of the triangle. The result was a smoothly interpolated coloured triangle.
My question is: since I send a specific colour to the fragment shader, where did the smooth interpolation happen? Is it a state enabled by default in opengl? What other values can this state have and how do I switch among them? I would expect to have total control over the pixel colours using a fragment shader, but there seem to be calculations behind the scenes that alter the result. There are clearly things I don't understand, can anyone help on this matter?
Within the OpenGL pipeline, between the vertex shading stages (vertex, tessellation, and geometry shading) and fragment shading, is the rasterizer. Its job is to determine which screen locations are covered by a particular piece of geometry (point, line, or triangle). Knowing those locations, along with the input vertex data, the rasterizer linearly interpolates the data values for each varying variable and sends those values as inputs into your fragment shader. When applied to color values, this is called Gouraud shading.
source : OpenGL Programming Guide, Eighth Edition.
If you want to see what happens without interpolation, declare the varying with the `flat` qualifier in both shaders. (In legacy OpenGL you could instead call glShadeModel(GL_FLAT) before drawing, with GL_SMOOTH as the default, but that only affects the built-in color varyings, not user-defined ones like yours.)
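A sketch of the flat-shaded variant, based on the shaders above; the qualifier must match on both sides of the interface:

```glsl
// Vertex shader: mark the output as non-interpolated
flat out vec3 fragmentColor;

// Fragment shader: the matching input must also be declared flat;
// every fragment then takes the value from the provoking vertex,
// so the triangle is filled with a single solid colour.
flat in vec3 fragmentColor;
```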