I would like to build a vertex shader with one texture map but multiple UV sets.
So far, I have stored the different UV sets in faceVertexUvs[0] and faceVertexUvs[1].
However, in the vertex shader, I can access only the first UV set, using "vUv":
varying vec2 vUv;
void main() {
(...)
vUv = uv;
(...)
}
It seems like 'uv' is something magical from three.js. Does something like uv2 or uv3 exist? I need a way to access the UV mapping in faceVertexUvs[1].
My goal is to build a house with wall using part of a texture, and windows in another part of the same texture, and blend both.
Is this the correct way to do it? And in which part of the three.js source code is that magical 'uv' defined?
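For what it's worth, in many three.js versions the renderer exposes a second UV set to a ShaderMaterial as an attribute named uv2, populated from faceVertexUvs[1] when a material requests it; please verify this against your three.js version. A sketch of the wall/window blend under that assumption (the uniform name `map` and the alpha-based blend are also assumptions):

```glsl
// Vertex shader (three.js ShaderMaterial): `uv`, `position`, and the
// matrices are injected by three.js; `uv2` must be declared explicitly
// and its availability depends on your three.js version.
attribute vec2 uv2;          // second UV set (from faceVertexUvs[1])
varying vec2 vUv;
varying vec2 vUv2;

void main() {
    vUv  = uv;
    vUv2 = uv2;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// Fragment shader: sample the same texture with both UV sets and blend.
uniform sampler2D map;       // the single shared texture (name assumed)
varying vec2 vUv;
varying vec2 vUv2;

void main() {
    vec4 wall    = texture2D(map, vUv);   // wall region of the atlas
    vec4 windows = texture2D(map, vUv2);  // window region of the atlas
    gl_FragColor = mix(wall, windows, windows.a);  // blend by window alpha (an assumption)
}
```

Here the window part of the texture would need an alpha channel so the mix factor knows where the window pixels are; any other blend factor works the same way.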
I have a mesh whose vertex positions are generated dynamically by the vertex shader. I've been using https://www.khronos.org/opengl/wiki/Calculating_a_Surface_Normal to calculate the surface normal for each primitive in the geometry shader, which seems to work fine.
Unfortunately, I'm planning on switching to an environment where using a geometry shader is not possible. I'm looking for alternative ways to calculate surface normals. I've considered:
Using compute shaders in two passes. One to generate the vertex positions, another (using the generated vertex positions) to calculate the surface normals, and then passing that data into the shader pipeline.
Using ARB_shader_image_load_store (or related) to write the vertex positions to a texture (in the vertex shader), which can then be read from the fragment shader. The fragment shader should be able to safely access the vertex positions (since it will only ever access the vertices used to invoke the fragment), and can then calculate the surface normal per fragment.
I believe both of these methods should work, but I'm wondering if there is a less complicated way of doing this, especially considering that this seems like a fairly common task. I'm also wondering if there are any problems with either of the ideas I've proposed, as I've had little experience with both compute shaders and image_load_store.
See Diffuse light with OpenGL GLSL. If you just want the face normals, you can use the partial derivatives dFdx and dFdy. A basic fragment shader that calculates the normal vector (N) in the same space as the position:
in vec3 position;
void main()
{
vec3 dx = dFdx(position);
vec3 dy = dFdy(position);
vec3 N = normalize(cross(dx, dy));
// [...]
}
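For completeness, a matching vertex shader sketch; every name here (aPosition, the uniforms) is an assumption. The only point is that `position` is written per vertex, so the fragment shader above can take its screen-space derivatives with dFdx/dFdy:

```glsl
#version 330
// Feed the interpolated position to the fragment shader; the normal
// computed from dFdx/dFdy comes out in whatever space this is written in
// (view space here).
in vec3 aPosition;
uniform mat4 uModelView;
uniform mat4 uProjection;
out vec3 position;   // consumed as `in vec3 position;` in the fragment shader

void main() {
    vec4 viewPos = uModelView * vec4(aPosition, 1.0);
    position = viewPos.xyz;
    gl_Position = uProjection * viewPos;
}
```

Note that dFdx/dFdy give flat per-triangle normals (the whole primitive shares one derivative pair), which matches what the geometry-shader approach produced.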
I have a mesh with many thousands of vertices. The model represents a complex structure that needs to be visualized both in its entirety but also in part. The user of my application should be able to define a minimum and maximum value for the Z-Axis. Only fragments with a position between these limits should be rendered.
My naive solution would be to write a Fragment shader somewhat like this:
#extension GL_EXT_texture_array : enable
uniform sampler2DArray m_ColorMap;
uniform float m_minZ;
uniform float m_maxZ;
in vec4 fragPosition;
in vec3 texCoord;
void main(){
if (fragPosition.z < m_minZ || fragPosition.z > m_maxZ) {
discard;
}
gl_FragColor = texture2DArray(m_ColorMap, texCoord);
}
Alternatively I could try to somehow filter out vertices in the vertex shader. Perhaps by setting their position values to (0,0,0,0) if they fall out of range.
I am fairly certain both of these approaches can work. But I would like to know if there is some better way of doing this that I am not aware of: some kind of standard approach for slicing models along an axis.
Please keep in mind that I do not want to use separate VBOs for each slice, since the limits can be set dynamically by the user.
Thank you very much.
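One standard technique worth noting for exactly this kind of axis slicing is user-defined clip planes via gl_ClipDistance, written in the vertex shader; the hardware then clips the geometry for you, with no discard in the fragment shader. A sketch, where the attribute and matrix names are assumptions and only m_minZ/m_maxZ come from the question:

```glsl
#version 330
// Vertex shader sketch using user-defined clip planes. Enable
// GL_CLIP_DISTANCE0 and GL_CLIP_DISTANCE1 on the CPU side with glEnable;
// a fragment is clipped whenever any enabled distance is negative.
in vec3 inPosition;
uniform mat4 model;
uniform mat4 viewProjection;
uniform float m_minZ;
uniform float m_maxZ;

void main() {
    vec4 worldPos = model * vec4(inPosition, 1.0);
    gl_ClipDistance[0] = worldPos.z - m_minZ;  // >= 0 keeps z above the lower limit
    gl_ClipDistance[1] = m_maxZ - worldPos.z;  // >= 0 keeps z below the upper limit
    gl_Position = viewProjection * worldPos;
}
```

Because m_minZ and m_maxZ are plain uniforms, the user can move the slice limits every frame without touching any VBO.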
I have always written my shaders in GLSL 3 (with the #version 330 line), but that is starting to get pretty old, so I recently tried to write a shader in GLSL 4 and use it with the SFML library for rendering, instead of pure OpenGL.
For now, my goal is a basic shader for a 2D game, which takes the color of each pixel of a texture and modifies it. I always did that with gl_TexCoord[0].xy, but it seems to be deprecated now, so I searched and read that I must use in and out variables with a vertex shader, so I tried.
Fragment shader
#version 400
in vec2 fragCoord;
out vec4 fragColor;
uniform sampler2D image;
void main(){
// Get the color
vec4 color = texture( image, fragCoord );
/*
* Do things with the color
*/
// Return the color
fragColor = color;
}
Vertex shader
#version 400
in vec3 position;
in vec2 textureCoord;
out vec2 fragCoord;
void main(){
// Set the position of the pixel and vertex (I guess)
fragCoord = textureCoord;
gl_Position = vec4( position, 1.0 );
}
I have also seen that we could add the projection, model, and view matrices, but I don't know how to do that with SFML (I don't even think we can), and I don't want to learn complex OpenGL or SFML internals just to change some colors in a 2D game. So here is my question:
Is there an easy way to just get the coordinates of the pixel we're working on? Maybe get rid of the vertex shader, or use it without matrices?
Unless you really want to learn a lot of nasty OpenGL, writing your own shaders just for textures is a little overkill. SFML can handle textures and shaders for you behind the scenes (here is a good article on how to use them), so you don't need to worry about shaders at all. Also note that you can change the color of SFML sprites (which is, I believe, what you are trying to do) with sprite.setColor(sf::Color(/*whatever*/));. Plus, there's no problem with using version 330; that's what I usually use, albeit with the in and out stuff as well.
If you really want to use your own shaders for fancy effects like pixellation, blurring, etc., I can't help you much, since I've only ever worked with pure OpenGL and don't know how SFML handles the vertex information, but this is some interesting example code you can check out, here is a tutorial, and here is a reference.
To more directly answer your question: gl_FragCoord is a built-in GLSL variable that holds the fragment's position, but you still have to set gl_Position in the vertex shader. You can't get rid of the vertex shader if you are doing anything OpenGL-related. You'd have to do fancy matrix stuff (this is a wonderful library) and probably buffer stuff (like this) to tell GLSL yourself where everything is.
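That said, for a full-screen 2D effect you can often sidestep the texture-coordinate plumbing entirely by deriving coordinates from gl_FragCoord in the fragment shader. A sketch; the `resolution` uniform is something you would set yourself (e.g. with SFML's shader.setUniform) and its name is an assumption:

```glsl
#version 400
// Fragment-only sketch: build normalized coordinates from gl_FragCoord
// instead of a varying, so the default vertex stage can be left alone.
// Works best when the texture covers the whole render target.
uniform sampler2D image;
uniform vec2 resolution;   // render-target size in pixels (assumed name)
out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;  // [0,1] across the target
    uv.y = 1.0 - uv.y;                       // flip if the result is upside down
    fragColor = texture(image, uv);
}
```

Since gl_FragCoord's origin is the bottom-left while images are usually stored top row first, the y flip is often needed; drop it if your output already looks right.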
I've seen shader code using these two, but I don't understand the difference between them, that is, between texture coordinates and fragment coordinates.
As far as I know, fragments are pixels; so what is a texture coordinate?
Some use these code:
vec2 uv = gl_FragCoord.xy / rectSize.xy;
vec4 bkg_color = texture2D(CC_Texture0, uv);
some use:
vec4 bkg_color = texture2D(CC_Texture0, v_texCoord);
with v_texCoord = a_texCoord;
Both work, except that the first way displays an inverted image.
In your second example, 'v_texCoord' looks like a pre-calculated texture coordinate that is passed to the fragment shader as a vertex attribute, versus the 'uv' coordinate calculated within the fragment shader in the first example.
You can base texture coordinates on whatever you like, so long as you give the texture2D sampler normalised coordinates; it's all about your use case and what you want to display from a texture.
Perhaps there is such a use-case difference here, which is why they give different visual outputs.
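On the inverted image in the first variant specifically: the likely cause is a difference in origins. gl_FragCoord has its origin at the bottom-left of the window, while image data is usually uploaded with the top row first, so the two conventions disagree vertically. A sketch of the fix, reusing the question's own names:

```glsl
// Flip the y coordinate so the gl_FragCoord-derived lookup matches the
// orientation of the precomputed v_texCoord path.
vec2 uv = gl_FragCoord.xy / rectSize.xy;
uv.y = 1.0 - uv.y;  // gl_FragCoord origin is bottom-left; image rows start at the top
vec4 bkg_color = texture2D(CC_Texture0, uv);
```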
For more information about how texture coordinates work I recommend this question's answer: How do opengl texture coordinates work?
What I'm trying to accomplish: Drawing the depth map of my scene on top of my scene (so that objects closer are darker, and further away are lighter)
Problem: I don't seem to understand how to pass the right texture coordinates from my vertex shader to my fragment shader.
So I created my FBO, and the texture that the depth map gets drawn to... not that I'm entirely sure what I was doing, but whatever, it works. I tested drawing the texture using the fixed functionality pipeline, and it looks just like it's supposed to (the depth map that is).
But trying to use it in my shaders just isn't working...
Here's the part from my render method that binds the texture:
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(depthMapUniform, 7);
glUseProgram(shaderProgram);
look(); //updates my viewing matrix
box.render(); //renders box VBO
So... I think that's sort of right? Maybe? No clue why texture 7, that was just something that was in a tutorial I was checking...
And here's the important stuff from my vertex shader:
out vec4 ShadowCoord;
void main() {
gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; //projection, view and model matrices
ShadowCoord = gl_MultiTexCoord0; //something I kept seeing in examples, was hoping it would work.
}
Aaand, fragment shader:
in vec4 ShadowCoord;
in vec3 Color; //passed from vertex shader, didn't include the code for it though. Just the vertex color.
out vec4 FragColor;
uniform sampler2D ShadowMap;
void main() {
    FragColor = vec4(texture2D(ShadowMap, ShadowCoord.st).x * Color, 1.0);
}
Now the problem is that the coordinate that the fragment shader receives for the texture is always (0,0), or the bottom-left corner. I tried changing it to ShadowCoord = gl_MultiTexCoord7, because I figured maybe it had something to do with me putting the texture in slot number 7... but alas, the problem persisted. When the color of (0, 0) changes, so does the color of the entire scene, rather than being a change in color for only the appropriate pixel/fragment.
And that's what I'm hoping to get some insight on... how to pass the correct coordinates (I'd like for the corners of the texture to be the same coordinates as the corners of my screen). And yes, this is a beginners question... but I have been looking in the Orange Book, and the problem with it is that it's great on the GLSL side of things, but the OpenGL side of things is severely lacking in the examples that I could really use...
The input variable gl_MultiTexCoord0 (or 7) is the built-in per-vertex texture coordinate for the 0th (or 7th) texture unit, set by gl(Multi)TexCoord (when using immediate mode) or by glTexCoordPointer (when using arrays/VBOs).
But as your depth buffer is already in screen space, what you want is not a usual texture laid onto the object, but just the value in the texture for a specific pixel/fragment. So the vertex shader isn't involved in any way. Instead you just use the current fragment's screen space position as texture coordinate, that can be read in the fragment shader using gl_FragCoord. But keep in mind that this coordinate is in [0,w]x[0,h] and textures are accessed by normalized texture coordinates in [0,1]. So you have to divide the fragment's coordinate by the screen size:
uniform vec2 screenSize;
...
... texture2D(ShadowMap, gl_FragCoord.st/screenSize) ...
But you actually don't need two passes for this effect anyway, as you can just use the fragment's depth directly, without writing it into a texture. Instead of
texture2D(ShadowMap, gl_FragCoord.st/screenSize).x
you can just use
gl_FragCoord.z
which is nothing else than the fragment's depth value, that would have been written into the texture in the first pass. This way you completely spare the first depth-writing pass and the texture access in the second pass.
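Putting the advice above together, the whole effect collapses to a few lines of fragment shader; `Color` and `FragColor` are the names already used in the question:

```glsl
// Single-pass sketch: scale the interpolated vertex color by the
// fragment's own depth, so nearer fragments (smaller gl_FragCoord.z)
// come out darker. No FBO, depth texture, or second pass needed.
in vec3 Color;
out vec4 FragColor;

void main() {
    FragColor = vec4(gl_FragCoord.z * Color, 1.0);
}
```

One caveat: gl_FragCoord.z is non-linear with a perspective projection (most precision sits near the near plane), so you may want to linearize it if the darkening looks too compressed in the distance.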