I'm trying to port GLSL code to HLSL, but I'm getting a type mismatch error at the end of this operation:
float2 pos = p;
float a = time * 100. + y * 31.;
float2 lineCenter = vec2(0.5, y);
pos -= lineCenter;
pos *- float2x2(cos(a), -sin(a), sin(a), cos(a));
The *- operator confuses me a lot; how can it be converted properly to HLSL?
The line does nothing and can be removed.
Basically, *- is not a single operator: the line multiplies the left operand pos by the negated right operand float2x2(cos(a), -sin(a), sin(a), cos(a)), but since the line contains no assignment, the result of that expression is simply discarded and never used.
You can test this in your reference: removing lines 23-28 changes nothing in the resulting picture.
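Here is a plain C++ analogy of how that line parses (x and m are just throwaway variables for this sketch):

#include <cstdio>

int main() {
    float x = 2.0f;
    float m = 3.0f;

    // "*-" is not a single operator: this parses as x * (-m), an expression
    // statement whose result is never stored anywhere.
    x *- m;
    std::printf("%g\n", x);  // still 2, the line above had no effect

    // An assignment would have to be written explicitly, for example:
    x *= -m;
    std::printf("%g\n", x);  // now -6
}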
I'm trying to wrap my head around calculating motion vectors (also called a velocity buffer). I found this tutorial, but I'm not satisfied with its explanation of how motion vectors are calculated. Here is the code:
vec2 a = (vPosition.xy / vPosition.w) * 0.5 + 0.5;
vec2 b = (vPrevPosition.xy / vPrevPosition.w) * 0.5 + 0.5;
oVelocity = a - b;
Why are we multiplying our position vectors by 0.5 and then adding 0.5? I'm guessing that we're trying to get from clip space to NDC, but why? I completely don't understand that.
This is a mapping from the [-1, 1] clip space range onto the [0, 1] texture space. Since lookups in the blur shader have to read from a texture at a position offset by the velocity vector, it's necessary to perform this conversion.
Note that the + 0.5 part is actually unnecessary, since it cancels out in a - b anyway. So the same result would have been achieved with something like
vec2 a = (vPosition.xy / vPosition.w);
vec2 b = (vPrevPosition.xy / vPrevPosition.w);
oVelocity = (a - b) * 0.5;
I don't know if there is any reason to prefer the first over the second, but my guess is that the code is written the way it is because it builds on a previous tutorial where the calculation was the same.
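To see the cancellation concretely, here is a tiny C++ check (with a hypothetical Vec2 struct and helpers standing in for vec2) showing that both versions produce the same velocity:

#include <cstdio>

struct Vec2 { float x, y; };

Vec2 sub(Vec2 a, Vec2 b)           { return { a.x - b.x, a.y - b.y }; }
Vec2 scale(Vec2 a, float s)        { return { a.x * s, a.y * s }; }
Vec2 mad(Vec2 a, float s, float o) { return { a.x * s + o, a.y * s + o }; }  // a * s + o

int main() {
    Vec2 ndcCur  = { 0.25f, -0.50f };   // current position after the divide by w
    Vec2 ndcPrev = { 0.10f, -0.40f };   // previous position after the divide by w

    // Tutorial version: remap both points to [0, 1] first, then subtract.
    Vec2 v1 = sub(mad(ndcCur, 0.5f, 0.5f), mad(ndcPrev, 0.5f, 0.5f));

    // Simplified version: subtract first; the + 0.5 offsets cancel.
    Vec2 v2 = scale(sub(ndcCur, ndcPrev), 0.5f);

    std::printf("v1 = (%g, %g)\nv2 = (%g, %g)\n", v1.x, v1.y, v2.x, v2.y);  // identical
}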
I recently added support for Phong shading in a CPU ray tracer and the colors are all messed up:
If I just do the diffuse term:
Some relevant code:
Vec3f camera_dir (xx_array[x],py,scene->getCameraDirZ());
camera_dir.normalize ();
Vec3f hit_normal = (1 - u - v) * vert1 + u * vert2 + v * vert3;
hit_normal.normalize ();
Vec3f light_ray_dir (current.pos - intersection);
float squared_length = light_ray_dir.normalize_return_squared_lenght ();
Vec3f reflected = light_ray_dir - 2 * (light_ray_dir * hit_normal) * hit_normal;
reflected.normalize();
specular_color += light_intensity * std::pow (std::max(0.f,reflected*camera_dir),mat.ns);
final_color = diffuse_color * mat.ks; + specular_color * mat.kd; (if it is only diffuse * mat.ks or mat.kd, it produces the second image)
Some possibly relevant information:
The last four lines are consecutive, although the last one happens outside the for loop responsible for doing the shading calculations for each light.
The two "hit_normal" lines happen earlier, before the aforementioned loop starts.
The first two happen at the beginning of the ray tracer, in earlier lines of the two for loops responsible for the image pixels.
If I swap the reflected calculation from
Vec3f reflected = light_ray_dir - 2 * (light_ray_dir * hit_normal) * hit_normal;
to:
Vec3f reflected = 2 * (light_ray_dir * hit_normal) * hit_normal - light_ray_dir;
the image only changes slightly:
As the code shows, all the vectors involved (hit_normal, reflected, light_ray_dir, camera_dir) are normalized.
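For reference, the standard reflection I am comparing against looks like this (a small standalone C++ sketch with a hypothetical Vec3 type and helpers; it assumes, as in my code, that light_ray_dir points from the surface toward the light and that both vectors are normalized):

#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Mirror reflection of l about the normal n: r = 2 (n . l) n - l
Vec3 reflectAboutNormal(Vec3 l, Vec3 n) {
    return sub(scale(n, 2.0f * dot(n, l)), l);
}

int main() {
    Vec3 n = { 0.0f, 1.0f, 0.0f };
    Vec3 l = { 0.707107f, 0.707107f, 0.0f };  // 45 degrees above the surface
    Vec3 r = reflectAboutNormal(l, n);
    std::printf("r = (%g, %g, %g)\n", r.x, r.y, r.z);  // (-0.707107, 0.707107, 0)
}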
So I'm asking for suggestions on how to debug the issue and on what could be wrong. Thanks for your attention.
I've been trying for some time now to convert a screen-space pixel (provided by a deferred HLSL shader) to light space. The results have been surprising to me, as my light rendering seems to be tiling the depth buffer.
Importantly, the scene camera (or eye) and the light being rendered from start at the same position.
First, I extract the world position of the pixel using the code below:
float3 eye = Eye;

// Rebuild the pixel's NDC position from the screen UV and the sampled depth
float4 position = {
    IN.texCoord.x * 2 - 1,        // [0, 1] -> [-1, 1]
    (1 - IN.texCoord.y) * 2 - 1,  // flip Y, then [0, 1] -> [-1, 1]
    zbuffer.r,
    1
};

// Unproject to world space with the inverse view-projection and divide by w
float4 hposition = mul(position, EyeViewProjectionInverse);
position = float4(hposition.xyz / hposition.w, hposition.w);

float3 eyeDirection = normalize(eye - position.xyz);
The result seems to be correct as rendering the XYZ position as RGB respectively yields this (apparently correct) result:
The red component seems to be correctly outputting X as it moves to the right, and blue shows Z moving forward. The Y factor also looks correct as the ground is slightly below the Y axis.
Next (and to be sure I'm not going crazy), I decided to output the original depth buffer. Normally I keep the depth buffer in a Texture2D called DepthMap passed to the shader as input. In this case, however, I try to undo the pixel transformation by offsetting it back into the proper position and multiplying it by the eye's view-projection matrix:
// Re-project the reconstructed world position with the eye's view-projection
float4 cpos = mul(position, EyeViewProjection);
cpos.xyz = cpos.xyz / cpos.w;

// NDC [-1, 1] -> texture UV [0, 1], with the Y flip
cpos.x = cpos.x * 0.5f + 0.5f;
cpos.y = 1 - (cpos.y * 0.5f + 0.5f);

float camera_depth = pow(DepthMap.Sample(Sampler, cpos.xy).r, 100); // Power 100 just to visualize the map since scales are really tiny
return float4(camera_depth, camera_depth, camera_depth, 1);
This yields a correct looking result as well (though I'm not 100% sure about the Z value). Also note that I've made the results exponential to better visualize the depth information (this is not done when attempting live comparisons):
So theoretically, I can use the same code to convert that pixel world position to light space by multiplying by the light's view-projection matrix. Correct? Here's what I tried:
// Same transform as above, but with the light's view-projection instead of the eye's
float4 lpos = mul(position, ShadowLightViewProjection[0]);
lpos.xyz = lpos.xyz / lpos.w;
lpos.x = lpos.x * 0.5f + 0.5f;
lpos.y = 1 - (lpos.y * 0.5f + 0.5f);
float shadow_map_depth = pow(ShadowLightMap[0].Sample(Sampler, lpos.xy).r, 100); // Power 100 just to visualize the map since scales are really tiny
return float4(shadow_map_depth, shadow_map_depth, shadow_map_depth, 1);
And here's the result:
And another to show better how it's mapping to the world:
I don't understand what is going on here. It seems it might have something to do with the projection matrix, but I'm not good enough with the math to know for sure what is happening. It's definitely not the width/height of the light map, as I've tried multiple map sizes, and the projection matrix is calculated from the FOV and aspect ratio, never from the width/height.
Finally, here's some C++ code showing how my perspective matrix (used for both eye and light) is calculated:
const auto ys = std::tan((T)1.57079632679f - (fov / (T)2.0)); // cot(fov / 2): vertical scale
const auto xs = ys / aspect;                                  // horizontal scale
const auto& zf = view_far;
const auto& zn = view_near;
const auto zfn = zf - zn;
row1(xs, 0, 0, 0);
row2(0, ys, 0, 0);
row3(0, 0, zf / zfn, 1);       // z scale, with w = view-space z (row-vector convention)
row4(0, 0, -zn * zf / zfn, 0); // z translation; together these map [zn, zf] to [0, 1] depth
return *this;
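For what it's worth, pushing the near and far planes through that z mapping does give the expected [0, 1] range, so the depth encoding itself doesn't look like the culprit (a tiny standalone C++ check with hypothetical near/far values):

#include <cstdio>

// Depth produced by the matrix above for a view-space depth z (row-vector convention):
//   clip.z = z * zf / (zf - zn) - zn * zf / (zf - zn),  clip.w = z,  depth = clip.z / clip.w
float projectedDepth(float z, float zn, float zf) {
    float zfn = zf - zn;
    return (z * zf / zfn - zn * zf / zfn) / z;
}

int main() {
    const float zn = 0.1f, zf = 1000.0f;  // hypothetical near/far planes
    std::printf("near -> %g\n", projectedDepth(zn, zn, zf));    // 0
    std::printf("far  -> %g\n", projectedDepth(zf, zn, zf));    // 1
    std::printf("z=10 -> %g\n", projectedDepth(10.0f, zn, zf)); // ~0.99, depth is non-linear
}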
I'm completely at a loss here. Any guidance or recommendations would be greatly appreciated!
EDIT - I also forgot to mention that the tiled image is upside down, as if the Y flip broke it. That's strange to me, as the flip is required to get back to eye texture space correctly.
I did some tweaking and fixed things here and there. Ultimately, my biggest issue was an unexpectedly transposed matrix. It's a bit complicated as to how the matrix got transposed, but that's why things were flipped. I also changed to D32 depth buffers (though I'm not sure that helped any) and made sure that any position divided by its W affected all components (including W).
So code like this: hposition.xyz = hposition.xyz / hposition.w
became this: hposition = hposition / hposition.w
After all this tweaking, it's starting to look more like a shadow map.
Oh and the transposed matrix was the ViewProjection of the light.
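For anyone who hits the same thing, here is a tiny standalone C++ sketch (a hypothetical 2x2 case, just to keep it short) of why an accidentally transposed matrix silently changes the result of a row-vector multiply:

#include <cstdio>

// Row vector times a 2x2 matrix: out = v * M
void mulRowVec(const float v[2], const float M[2][2], float out[2]) {
    out[0] = v[0] * M[0][0] + v[1] * M[1][0];
    out[1] = v[0] * M[0][1] + v[1] * M[1][1];
}

int main() {
    float v[2]     = { 1.0f, 2.0f };
    float M[2][2]  = { { 1.0f, 2.0f }, { 3.0f, 4.0f } };
    float Mt[2][2] = { { 1.0f, 3.0f }, { 2.0f, 4.0f } };  // transpose of M

    float a[2], b[2];
    mulRowVec(v, M,  a);  // v * M   -> (7, 10)
    mulRowVec(v, Mt, b);  // v * M^T -> (5, 11), a completely different point

    std::printf("v*M  = (%g, %g)\n", a[0], a[1]);
    std::printf("v*Mt = (%g, %g)\n", b[0], b[1]);
}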
I am writing a shader according to the Phong Model. I am trying to implement this equation:
I = k_d (l · n) i_d + k_s (r · v)^α i_s (summed over the lights, plus an ambient term)
where n is the normal, l is the direction to the light, v is the direction to the camera, and r is the light reflection. The equations are described in more detail in the Wikipedia article.
As of right now, I am only testing with directional light sources, so there is no 1/r^2 falloff. The ambient term is added outside the function below and it works well. The function maxDot3 returns 0 if the dot product is negative, as is usually done in the Phong model.
Here's my code implementing the above equation:
#include "PhongMaterial.h"
PhongMaterial::PhongMaterial(const Vec3f &diffuseColor, const Vec3f &specularColor,
float exponent,const Vec3f &transparentColor,
const Vec3f &reflectiveColor,float indexOfRefraction){
_diffuseColor = diffuseColor;
_specularColor = specularColor;
_exponent = exponent;
_reflectiveColor = reflectiveColor;
_transparentColor = transparentColor;
}
Vec3f PhongMaterial::Shade(const Ray &ray, const Hit &hit,
const Vec3f &dirToLight, const Vec3f &lightColor) const{
Vec3f n,l,v,r;
float nl;
l = dirToLight;
n = hit.getNormal();
v = -1.0*(hit.getIntersectionPoint() - ray.getOrigin());
l.Normalize();
n.Normalize();
v.Normalize();
nl = n.maxDot3(l);
r = 2*nl*(n-l);
r.Normalize();
return (_diffuseColor*nl + _specularColor*powf(v.maxDot3(r),_exponent))*lightColor;
}
Unfortunately, the specular term seems to disappear for some reason. My output:
Correct output:
The first sphere only has diffuse and ambient shading. It looks right. The rest have specular terms and produce incorrect results. What is wrong with my implementation?
This line looks wrong:
r = 2*nl*(n-l);
2*nl is a scalar, so this is in the direction of n - l, which is clearly the wrong direction (you also normalize the result, so multiplying by 2*nl does nothing). Consider the case when n and l point in the same direction: the result r should also point in that same direction, but this formula produces the zero vector.
I think your parentheses are misplaced. I believe it should be:
r = (2*nl*n) - l;
We can check this formula easily on two boundary cases. When n and l point in the same direction, nl is 1, so the result is the same vector, which is correct. When l is tangent to the surface, nl is zero and the result is -l, which is also correct.
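A quick standalone check of both boundary cases (plain C++, with a throwaway Vec3 and a reflectDir helper defined just for this sketch):

#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// r = (2 * (n . l) * n) - l
Vec3 reflectDir(Vec3 n, Vec3 l) {
    float nl = dot(n, l);
    return { 2 * nl * n.x - l.x, 2 * nl * n.y - l.y, 2 * nl * n.z - l.z };
}

int main() {
    Vec3 n = { 0, 1, 0 };

    Vec3 r1 = reflectDir(n, { 0, 1, 0 });  // l == n         -> r = (0, 1, 0)
    Vec3 r2 = reflectDir(n, { 1, 0, 0 });  // l tangent to n -> r = (-1, 0, 0) == -l

    std::printf("r1 = (%g, %g, %g)\n", r1.x, r1.y, r1.z);
    std::printf("r2 = (%g, %g, %g)\n", r2.x, r2.y, r2.z);
}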
I have a really interesting problem. I have been working on it for 3 hours and I just can't figure out what is going on and why it isn't working. I tried googling it, but with no results.
I am writing a program in CUDA. I have this really simple piece of code:
__global__ void calcErrorOutputLayer_kernel(/* arguments... */)
{
int idx = blockIdx.x * blockDim.x + threadIdx.x;
float gradient;
float derivation;
derivation = pow((2/(pow(euler, neuron_device[startIndex + idx].outputValue) +
pow(euler, -neuron_device[startIndex + idx].outputValue))), 2);
gradient = (backVector_device[idx] - neuron_device[startIndex + idx].outputValue);
gradient = gradient * derivation; //this line doesn't work
gradient = gradient * 2.0; //this line works
OK, so gradient is calculated correctly, and so is derivation. But when it comes to the line where these two variables should be multiplied with each other, nothing happens (the value of gradient isn't changed), and on the next line the CUDA debugger tells me: "'derivation' has no value at the target location".
gradient * 2.0 works correctly and changes the value of gradient by a factor of 2.
Can anyone help me please?
Try splitting the expression into intermediate values first, so the debugger can show you each step:
float a = pow(euler, neuron_device[startIndex + idx].outputValue);
float b = pow(euler, -neuron_device[startIndex + idx].outputValue);
derivation = pow(2 / (a + b), 2);
pow gives an error when:
the base is negative and the exponent is not an integral value, or
the base is zero and the exponent is negative.
In either case a domain error occurs, setting the global variable errno to the value EDOM.
I guess that you are facing precision problems and both a and b are 0. You are probably getting derivation = 0 or inf.
Can you change floats to doubles?
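If moving everything to double is not possible, another option is to sidestep pow entirely: 2/(e^x + e^-x) is just 1/cosh(x), so the whole derivative term is sech(x)^2. A minimal host-side C++ sketch of the same math (x here stands in for neuron_device[startIndex + idx].outputValue):

#include <cmath>
#include <cstdio>

int main() {
    float x = 3.0f;  // stand-in for neuron_device[startIndex + idx].outputValue

    // Formulation from the question, spelled out step by step
    // (std::exp(x) is the idiomatic equivalent of pow(euler, x)).
    float a = std::exp(x);
    float b = std::exp(-x);
    float derivOriginal = std::pow(2.0f / (a + b), 2.0f);

    // Mathematically equivalent: 2 / (e^x + e^-x) == 1 / cosh(x),
    // so the derivative is simply sech(x)^2 -- no pow call needed.
    float s = 1.0f / std::cosh(x);
    float derivStable = s * s;

    std::printf("original: %g\nstable:   %g\n", derivOriginal, derivStable);
}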