I am trying to shade a sphere a single color, except for a circle centered on the y/z origin, which I want to dilate and constrict over time (I already have a time variable based on the runtime that repeats every 5 s, so I'm thinking I'll just drive the radius with that and a sin function). However, I don't know how to change the color within that circle on the sphere.
Here's my main function within my frag shader:
vec3 myColor = vec3( 0, 1., 0. );
float uSize = 0.2;
if( vST.t > .4 && vST.t < .6 && vST.s > .4 && vST.s < .6)
{
myColor = vec3( 1., 0., 0. );
}
gl_FragColor = vec4( myColor, 1. );
Right now it's just a green sphere, except for a box between s/t 0.4 and 0.6, which is red. vST holds the vertex s/t coordinates.
Replace if( ... ) with if(distance(vST, center)<radius).
Better yet, make it antialiased and replace the if with mix. Like so:
float dist = distance(uv, center);
float aa = fwidth(dist);
float inside = smoothstep(radius - aa, radius + aa, dist);
myColor = mix(color0, color1, inside);
Demo at https://www.shadertoy.com/view/tsGXWh
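Putting the answer together with the repeating time variable mentioned in the question, a minimal sketch of the full fragment shader might look like the following. The uTime uniform name, the circle centre at s/t = (0.5, 0.5) and the 0.10 to 0.20 radius range are illustrative assumptions, not taken from the original code:
varying vec2 vST;      // s/t coordinates from the vertex shader
uniform float uTime;   // runtime in seconds, repeating every 5 s (as described in the question)
void main( )
{
    vec3 sphereColor = vec3( 0., 1., 0. );
    vec3 circleColor = vec3( 1., 0., 0. );
    vec2 center = vec2( 0.5, 0.5 );
    // radius pulses between 0.10 and 0.20 over the 5-second cycle
    float radius = 0.15 + 0.05 * sin( uTime * 2. * 3.14159265 / 5. );
    float dist = distance( vST, center );
    float aa = fwidth( dist );                                      // antialiasing width
    float outside = smoothstep( radius - aa, radius + aa, dist );   // 0 inside the circle, 1 outside
    vec3 myColor = mix( circleColor, sphereColor, outside );
    gl_FragColor = vec4( myColor, 1. );
}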
glLineWidth is only guaranteed to support a width of 1. On Windows it is limited to a width of 10. To overcome this limitation, the common suggestion is to "simply" render a rectangle instead.
Since this seems like a basic requirement (render 2D/3D lines of arbitrary width, mesh wireframe, etc.), I was wondering if anyone has a code snippet for it.
It would work similarly to what legacy OpenGL offers.
Input: two 3D points and width.
Output: It would render a 3D line that faces the camera with width in pixels.
Emphasis:
It needs to face the camera.
The width is in screen pixels.
Since it's a 3D (flat) line, these properties aren't strictly well defined, so I guess the goal is something like "faces the camera as much as possible" and "has the requested width on average" (whatever that means exactly). This is probably why glLineWidth is so limited.
Here is something basic that ignores those nuances but is enough for me at the moment (for now, only 2D lines with a given world-space thickness):
GLUquadricObj *pQuadric = gluNewQuadric();
glPushMatrix();
// flatten y to make a rectangle
glm::dmat4 S = glm::scale( glm::dvec3(1., 0.001 / radius, 1.) );
// translate
glm::dmat4 T = glm::translate( toPoint<glm::dvec3>(p0) );
// rotate
glm::dvec3 xaxis(1, 0, 0);
glm::dmat4 R1 = glm::rotate( -M_PI / 2, xaxis );
glm::dvec3 u( toPoint<glm::dvec3>(p1 - p0) );
u = glm::normalize( u );
glm::dvec3 yaxis(0, 1, 0);
glm::dmat4 R2 = glm::orientation(u, yaxis);
// combine transforms
glm::dmat4 A = T * R2 * R1 * S;
glMultMatrixd( (double*)&A[0] );
glGetDoublev(GL_MODELVIEW_MATRIX, (double*)&A[0]);
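// NOTE: 'height' is presumably the segment length, i.e. the distance between p0 and p1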
gluCylinder(pQuadric, radius, radius, height, 4, 1);
glPopMatrix();
gluDeleteQuadric(pQuadric);
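For the camera-facing, pixel-width case, the usual approach is to expand each segment into a quad in the vertex shader using screen-space offsets. Below is a minimal sketch of that idea; every name in it (uViewProj, uViewportSize, uLineWidthPx, aThis, aOther, aOffsetSign) is illustrative rather than taken from existing code, and end caps and joins are ignored:
#version 330 core
// One quad (4 vertices, drawn as a triangle strip) per line segment. Each vertex
// carries both segment endpoints plus a sign saying which side of the line it lies on.
in vec3  aThis;        // the endpoint this vertex belongs to (world space)
in vec3  aOther;       // the other endpoint of the segment
in float aOffsetSign;  // +1 or -1: which side of the line to offset towards
uniform mat4  uViewProj;
uniform vec2  uViewportSize;  // viewport size in pixels
uniform float uLineWidthPx;   // desired line width in pixels
void main()
{
    vec4 clipThis  = uViewProj * vec4( aThis,  1.0 );
    vec4 clipOther = uViewProj * vec4( aOther, 1.0 );
    // segment direction in screen space
    vec2 ndcThis  = clipThis.xy  / clipThis.w;
    vec2 ndcOther = clipOther.xy / clipOther.w;
    vec2 dir      = normalize( ( ndcOther - ndcThis ) * uViewportSize );
    vec2 normal   = vec2( -dir.y, dir.x );
    // offset by half the width: (uLineWidthPx / 2) pixels * (2 / viewport) NDC units per pixel
    vec2 offsetNdc = normal * aOffsetSign * uLineWidthPx / uViewportSize;
    gl_Position = vec4( ( ndcThis + offsetNdc ) * clipThis.w, clipThis.zw );
}
This keeps the quad facing the camera by construction, and the width stays constant in screen pixels regardless of depth.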
NOTE: I've edited my code; see the edit further down.
I'm implementing refraction in my (fairly basic) ray tracer, written in C++. I've been following (1) and (2).
I get the result below. Why is the center of the sphere black?
The center sphere has a transmission coefficient of 0.9 and a reflective coefficient of 0.1. Its index of refraction is 1.5 and it is placed 1.5 units away from the camera. The other two spheres just use diffuse lighting, with no reflective/refractive component. I placed these two differently coloured spheres behind and in front of the transparent sphere to ensure that I don't see a reflection instead of a transmission.
I've made the background colour (the colour achieved when a ray from the camera does not intersect with any object) a colour other than black, so the center of the sphere is not just the background colour.
I have not implemented the Fresnel effect yet.
My trace function looks like this (verbatim copy, with some parts omitted for brevity):
bool isInside(Vec3f rayDirection, Vec3f intersectionNormal) {
return dot(rayDirection, intersectionNormal) > 0;
}
Vec3f trace(Vec3f origin, Vec3f ray, int depth) {
// (1) Find object intersection
std::shared_ptr<SceneObject> intersectionObject = ...;
// (2) Compute diffuse and ambient color contribution
Vec3f color = ...;
bool isTotalInternalReflection = false;
if (intersectionObject->mTransmission > 0 && depth < MAX_DEPTH) {
Vec3f transmissionDirection = refractionDir(
ray,
normal,
1.5f,
isTotalInternalReflection
);
if (!isTotalInternalReflection) {
float bias = 1e-4 * (isInside(ray, normal) ? -1 : 1);
Vec3f transmissionColor = trace(
add(intersection, multiply(normal, bias)),
transmissionDirection,
depth + 1
);
color = add(
color,
multiply(transmissionColor, intersectionObject->mTransmission)
);
}
}
if (intersectionObject->mSpecular > 0 && depth < MAX_DEPTH) {
Vec3f reflectionDirection = computeReflectionDirection(ray, normal);
Vec3f reflectionColor = trace(
add(intersection, multiply(normal, 1e-5)),
reflectionDirection,
depth + 1
);
float intensity = intersectionObject->mSpecular;
if (isTotalInternalReflection) {
intensity += intersectionObject->mTransmission;
}
color = add(
color,
multiply(reflectionColor, intensity)
);
}
return truncate(color, 1);
}
If the object is transparent then it computes the direction of the transmission ray and recursively traces it, unless the refraction causes total internal reflection. In that case, the transmission component is added to the reflection component and thus the color will be 100% of the traced reflection color.
I add a little bias to the intersection point in the direction of the normal (inverted if inside) when recursively tracing the transmission ray. If I don't do that, then I get this result:
The computation of the direction of the transmission ray is performed in refractionDir. This function assumes that we will not have a transparent object inside another, and that the outside material is air, with an index of refraction of 1.
Vec3f refractionDir(Vec3f ray, Vec3f normal, float refractionIndex, bool &isTotalInternalReflection) {
float relativeIndexOfRefraction = 1.0f / refractionIndex;
float cosi = -dot(ray, normal);
if (isInside(ray, normal)) {
// We should be reflecting across a normal inside the object, so
// re-orient the normal to be inside.
normal = multiply(normal, -1);
relativeIndexOfRefraction = refractionIndex;
cosi *= -1;
}
assert(cosi > 0);
float base = (
1 - (relativeIndexOfRefraction * relativeIndexOfRefraction) *
(1 - cosi * cosi)
);
if (base < 0) {
isTotalInternalReflection = true;
return ray;
}
return add(
multiply(ray, relativeIndexOfRefraction),
multiply(normal, relativeIndexOfRefraction * cosi - sqrtf(base))
);
}
Here's the result when the spheres are further away from the camera:
And closer to the camera:
Edit: I noticed a couple bugs in my code.
When I add bias to the intersection point, it should be in the same direction as the transmission ray. I was adding it in the wrong direction by applying a negative bias along the normal when inside the sphere. That doesn't make sense: when the ray comes from inside the sphere, it will transmit to the outside of the sphere (when TIR is avoided).
Old code:
add(intersection, multiply(normal, bias))
New code:
add(intersection, multiply(transmissionDirection, 1e-4))
Similarly, the normal that refractionDir receives is the surface normal pointing away from the center of the sphere. The normal I want to use when computing the transmission direction should point outward if the transmission ray is going to exit the object, and inward if it is going to enter the object. Thus the surface normal pointing out of the sphere should be inverted when we're entering the sphere, that is, when the ray comes from outside.
New code:
Vec3f refractionDir(Vec3f ray, Vec3f normal, float refractionIndex, bool &isTotalInternalReflection) {
float relativeIndexOfRefraction;
float cosi = -dot(ray, normal);
if (isInside(ray, normal)) {
relativeIndexOfRefraction = refractionIndex;
cosi *= -1;
} else {
relativeIndexOfRefraction = 1.0f / refractionIndex;
normal = multiply(normal, -1);
}
assert(cosi > 0);
float base = (
1 - (relativeIndexOfRefraction * relativeIndexOfRefraction) * (1 - cosi * cosi)
);
if (base < 0) {
isTotalInternalReflection = true;
return ray;
}
return add(
multiply(ray, relativeIndexOfRefraction),
multiply(normal, sqrtf(base) - relativeIndexOfRefraction * cosi)
);
}
However, this all still gives me an unexpected result:
I've also added some unit tests. The following cases pass:
A ray entering the center of the sphere parallel to the normal will transmit through the sphere without being bent (this tests two refractionDir calls, one from outside and one from inside).
Refraction at 45 degrees from the normal through a glass slab will bend inside the slab by 15 degrees towards the normal, away from the original ray direction. Its direction when it exits the slab will be the original ray direction.
Similar test at 75 degrees.
Ensuring that total internal reflection happens when a ray is coming from inside the object and is at 45 degrees or wider.
I'll include one of the unit tests here and you can find the rest at this gist.
TEST_CASE("Refraction at 75 degrees from normal through glass slab") {
Vec3f rayDirection = normalize(Vec3f({ 0, -sinf(5.0f * M_PI / 12.0f), -cosf(5.0f * M_PI / 12.0f) }));
Vec3f normal({ 0, 0, 1 });
bool isTotalInternalReflection;
Vec3f refraction = refractionDir(rayDirection, normal, 1.5f, isTotalInternalReflection);
REQUIRE(refraction[0] == 0);
REQUIRE(refraction[1] == Approx(-sinf(40.0f * M_PI / 180.0f)).margin(0.03f));
REQUIRE(refraction[2] == Approx(-cosf(40.0f * M_PI / 180.0f)).margin(0.03f));
REQUIRE(!isTotalInternalReflection);
refraction = refractionDir(refraction, multiply(normal, -1), 1.5f, isTotalInternalReflection);
REQUIRE(refraction[0] == Approx(rayDirection[0]));
REQUIRE(refraction[1] == Approx(rayDirection[1]));
REQUIRE(refraction[2] == Approx(rayDirection[2]));
REQUIRE(!isTotalInternalReflection);
}
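As a sanity check on the 75-degree expectation, Snell's law gives sin(theta_t) = sin(75°) / 1.5 ≈ 0.966 / 1.5 ≈ 0.644, so theta_t ≈ 40.1°, which is what the Approx(...) assertions around 40 degrees (with a 0.03 margin) encode.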
I'm experimenting with implementing "as simple as possible" SSR (screen-space reflections) in GLSL. Any chance someone could help me set up an extremely basic SSR shader?
I do not need (for now) any roughness/metalness calculations, no Fresnel, no fading in-out effect, nothing fancy - I just want the most simple setup that I can understand, learn and maybe later improve upon.
I have 4 source textures: Color, Position, Normal, and a copy of the previous final frame's image (Reflection)
attributes.position is the position texture (FLOAT16F) - WorldSpace
attributes.normal is the normal texture (FLOAT16F) - WorldSpace
sys_CameraPosition is the eye's position in WorldSpace
rd is supposed to be the reflection direction in WorldSpace
texture(SOURCE_2, projectedCoord.xy) is the position texture at the current reflection-dir endpoint
vec3 rd = normalize(reflect(attributes.position - sys_CameraPosition, attributes.normal));
vec2 uvs;
vec4 projectedCoord;
for (int i = 0; i < 10; i++)
{
// Calculate screen space position from ray's current world position:
projectedCoord = sys_ProjectionMatrix * sys_ViewMatrix * vec4(attributes.position + rd, 1.0);
projectedCoord.xy /= projectedCoord.w;
projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
// this bit is tripping me up
if (distance(texture(SOURCE_2, projectedCoord.xy).xyz, (attributes.position + rd)) > 0.1)
rd += rd;
else
uvs = projectedCoord.xy;
break;
}
out_color += (texture(SOURCE_REFL, uvs).rgb);
Is this even possible using world-space coordinates? When I multiply the first pass' outputs by the view matrix as well as the model matrix, my lighting calculations fall apart, because they are also done in world space...
Unfortunately there are no basic SSR tutorials on the internet that explain just the SSR bit and nothing else, so I thought I'd give it a shot here; I really can't seem to get my head around this...
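For what it's worth, a minimal world-space ray march along these lines might look like the sketch below. It assumes the same inputs as above (SOURCE_2 holding world-space positions, SOURCE_REFL holding the previous frame's color, attributes.position / attributes.normal, sys_CameraPosition and the view/projection matrices); the step size, step count and the 0.1 thickness threshold are arbitrary illustration values:
vec3 ro = attributes.position;                                      // ray origin (world space)
vec3 rd = normalize( reflect( ro - sys_CameraPosition, attributes.normal ) );
vec3 reflectedColor = vec3( 0.0 );
const float stepSize = 0.25;
for ( int i = 1; i <= 64; i++ )
{
    // advance the ray in world space
    vec3 samplePos = ro + rd * stepSize * float( i );
    // project the marched point to screen space
    vec4 clip = sys_ProjectionMatrix * sys_ViewMatrix * vec4( samplePos, 1.0 );
    vec2 uv   = ( clip.xy / clip.w ) * 0.5 + 0.5;
    if ( clip.w <= 0.0 || any( lessThan( uv, vec2( 0.0 ) ) ) || any( greaterThan( uv, vec2( 1.0 ) ) ) )
        break;                                                      // ray left the screen
    // compare the marched point against the geometry stored at that pixel
    vec3 scenePos = texture( SOURCE_2, uv ).xyz;
    if ( distance( scenePos, samplePos ) < 0.1 )                    // crude "thickness" test
    {
        reflectedColor = texture( SOURCE_REFL, uv ).rgb;
        break;
    }
}
out_color += reflectedColor;
So yes, marching in world space is possible; the only place screen space enters is when projecting each sample to look up the position buffer. The usual refinements (binary-search refinement near the hit, fading at screen edges, roughness) can be layered on later.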
I have been provided with a framework in which a simple path tracer is implemented. So far I am trying to understand the whole code, because I'll need to work on it. Unfortunately I've arrived at a step where I don't really get what's happening, and since I'm a newbie in the advanced graphics field I can't manage to "decrypt" this part. According to the comments, the developer is setting up the coordinates of the screen corners. What I need to understand is the math behind it, and therefore some of the variables that are used. Here is the code:
// setup virtual screen plane
vec3 E( 2, 8, -26 ), V( 0, 0, 1 );
static float r = 1.85f;
mat4 M = rotate( mat4( 1 ), r, vec3( 0, 1, 0 ) );
float d = 0.5f, ratio = SCRWIDTH / SCRHEIGHT, focal = 18.0f;
vec3 p1( E + V * focal + vec3( -d * ratio * focal, d * focal, 0 ) ); // top-left screen corner
vec3 p2( E + V * focal + vec3( d * ratio * focal, d * focal, 0 ) ); // top-right screen corner
vec3 p3( E + V * focal + vec3( -d * ratio * focal, -d * focal, 0 ) ); // bottom-left screen corner
p1 = vec3( M * vec4( p1, 1.0f ) );
p2 = vec3( M * vec4( p2, 1.0f ) );
p3 = vec3( M * vec4( p3, 1.0f ) );
For example:
what is the "d" variable, and why are both "d" and "focal" fixed?
is "focal" the focal length?
What do you think the "E" and "V" vectors are?
is the matrix "M" the CameraToWorldCoordinates matrix?
If possible, I need to understand every step of those formulas, the variables, and the math used in those few lines of code. Thanks in advance.
My guesses:
E: eye position—position of the eye/camera in world space
V: view direction—the direction the camera is looking, in world coordinates
d: named constant for one half—corners are half the screen size away from the centre (where the camera is looking)
focal: distance of the image plane from the camera. Given its use in screen corner offsets, it also seems to be the height of the image plane in world coordinates.
M: I'd say this is the WorldToCamera matrix. It's used to transform a point which is based on E
How the points are computed:
Start at the camera: E
Move focal distance along the view direction, effectively moving to the centre of the image plane: + V * focal
Add offsets on X & Y which will move half a screen distance: + vec3( ... )
Given that V does not figure in the vec3() arguments (nor does any up or right vector), this seems to hard-code the idea that V is collinear with the Z axis.
Finally, the points are transformed as points (as opposed to directions, since their homogeneous coordinate is 1) by M.
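To show how those corners are typically used afterwards, here is a guess at the ray generation for a pixel (x, y); the bilinear interpolation and the half-pixel offset are assumptions about the framework, not taken from the provided code:
// u, v in [0,1] across the virtual screen plane; p1 = top-left, p2 = top-right, p3 = bottom-left
float u = ( x + 0.5f ) / SCRWIDTH;
float v = ( y + 0.5f ) / SCRHEIGHT;
vec3 pointOnPlane = p1 + u * ( p2 - p1 ) + v * ( p3 - p1 );
vec3 rayOrigin = E;    // the camera position (the framework may also rotate E by M)
vec3 rayDirection = normalize( pointOnPlane - rayOrigin );
This is also why only three corners are needed: the fourth follows from the other three, since the screen plane is a parallelogram.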
I am doing some really basic experiments with 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it from geometry that looks like four quads, one on each side of the frame, tapered in to make trapezoids that effectively have miter joins.
The vertex coordinates are shared where the trapezoids meet on the "inner" and "outer" rectangles, and the colors are the same for all inner vertices and all outer vertices as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: code added below. Sorry, it's kind of verbose. I'm using two-triangle fans for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0,   // first two vertices (outer edge): opaque blue
    0.0, 0.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0,   // last two vertices (inner edge): opaque white
    1.0, 1.0, 1.0, 1.0
};
// Draw the trapezoidal frame areas. Each one is two triangle fans.
// Fan of 2 triangles = 4 verts = 8 values
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As #Newbie posted in the comments,
#quixoto: open your image in Paint program, click with fill tool somewhere in the seam, and you see it makes 90 degree angle line there... means theres only 1 color, no brighter anywhere in the "seam". its just an illusion.
True. While I'm not intimately familiar with this part of the math behind OpenGL, I believe this is an implicit result of how colors are interpolated across each triangle from its vertex colors (barycentric interpolation).
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5
void main() {
vec3 borderColor = vec3(0,0,1);
vec3 backgroundColor = vec3(1,1,1);
// x and y inset, 0..1, 1 means border, 0 means centre
vec2 insets = max(-coord + insetWidth, vec2(0,0)) / insetWidth;
If I'm correct so far, then for every pixel insets.x now holds a value in the range [0..1]
determining how deep that point is into the border horizontally,
and insets.y holds the corresponding value for vertical depth.
The left vertical bar has insets.y == 0,
the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which for a given (x,y) pair will give us ONE value [0..1] determining how to mix background and foreground color. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Assume a function like this:
float bias = max(insets.x,insets.y);
It satisfies those requirements. Actually, I'm pretty sure that this function would give you the same "sharp" edge you have above. Try calculating it on paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
float bias = min( length(insets) , 1 );
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important is that this translates to: "the deeper (in terms of Euclidean distance) we are into the border region, the more visible the border should be", and the min() is just there to keep the result from exceeding 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that using, for example, Manhattan metric instead, you'd have a third possible miter shape! It would be defined like this:
float bias = min(insets.x+insets.y, 1);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
OK, so for the rest of the code, when we have the bias [0..1], we just need to mix the background and foreground color:
vec3 finalColor = mix(backgroundColor, borderColor, bias); // bias == 1 means 100% border color
gl_FragColor = vec4(finalColor, 1); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
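For completeness, here is the above assembled into one compilable fragment shader. Note that the snippets only computed the inset for the left/bottom edges; the mirrored upper-right term below is an addition so the border goes all the way around. Pick whichever bias line you prefer:
varying vec2 coord;        // (0,0) at the bottom-left corner, (1,1) at the top-right
uniform vec2 insetWidth;   // border width as a fraction of the quad, max 0.5
void main()
{
    vec3 borderColor     = vec3( 0., 0., 1. );
    vec3 backgroundColor = vec3( 1., 1., 1. );
    // depth into the border, 0..1, measured from the lower-left edges...
    vec2 insetsLow  = max( -coord + insetWidth, vec2( 0. ) ) / insetWidth;
    // ...and mirrored for the upper-right edges, so all four sides get a border
    vec2 insetsHigh = max( coord - ( vec2( 1. ) - insetWidth ), vec2( 0. ) ) / insetWidth;
    vec2 insets = max( insetsLow, insetsHigh );
    // pick one: round miter (Euclidean), sharp miter (Chebyshev), or reversed diagonal (Manhattan)
    float bias = min( length( insets ), 1. );        // round
    // float bias = max( insets.x, insets.y );       // sharp
    // float bias = min( insets.x + insets.y, 1. );  // Manhattan
    gl_FragColor = vec4( mix( backgroundColor, borderColor, bias ), 1. );
}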
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I've run into this in the past, and it's very sensitive to the geometry. For example, if you draw the triangles separately, in separate draw calls, instead of as a triangle fan, the problem is less severe (or at least it was in my case, which was similar but slightly different).
One thing I also tried was drawing the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth the effort.
I'm sorry, but I have no idea what the root cause of this effect is :(