I have a 1024x1 gradient texture that I want to map onto a quad. The gradient should be aligned along the line (p1,p2) inside that quad. The texture uses GL_CLAMP_TO_EDGE, so it will fill the entire quad.
I now need to figure out the texture coordinates for the four corners (A,B,C,D) of the quad, but I can't wrap my head around the required math.
I tried to calculate the angle of (p1,p2) and then rotate the corner points around the center of the line between (p1,p2), but I couldn't get this to work right. It seems a bit excessive anyway - is there an easier solution?
Are you using shaders? If yes, then assign your quad just the default UVs from 0 to 1. Then, based on the slope of the p1-p2 segment, calculate the rotation angle (don't forget to convert degrees to radians). Then in the vertex shader construct a 2x2 rotation matrix and rotate the UVs by the amount defined by the segment. At the end, pass the rotated UVs into the fragment shader and use them with your gradient texture sampler.
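Not the poster's code - just a minimal CPU-side sketch of that idea to make the math concrete (the Vec2 type and function names are assumptions). In practice you would pass the angle, or the precomputed 2x2 matrix, as a uniform and apply it to the UVs in the vertex shader; atan2 already returns radians, so no degree conversion is needed here.
#include <cmath>

struct Vec2 { float x, y; };

// Angle of the (p1,p2) segment, in radians.
float GradientAngle(Vec2 p1, Vec2 p2)
{
    return std::atan2(p2.y - p1.y, p2.x - p1.x);
}

// Rotate a UV around the quad center (0.5, 0.5) by the segment angle,
// i.e. apply the 2x2 rotation matrix [c -s; s c] to the centered UV.
Vec2 RotateUV(Vec2 uv, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    float u = uv.x - 0.5f, v = uv.y - 0.5f;
    return { c * u - s * v + 0.5f,
             s * u + c * v + 0.5f };
}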
I found another approach that actually works as I want it to, using only additions, multiplications and one division, sparing the expensive sqrt.
I first calculate the slope of the (p1,p2) line and the one orthogonal to it. I then work out the intersection points of the line through p1 along the slope with the orthogonal line starting at each corner.
Vector2 slope = {p2.x - p1.x, p2.y - p1.y};
Vector2 ortho = {-slope.y, slope.x};
float div = 1 / (slope.y * ortho.x - slope.x * ortho.y);
Vector2 A = {
    (ortho.x * -p1.y + ortho.y * p1.x) * div,
    (slope.x * -p1.y + slope.y * p1.x) * div
};
Vector2 B = {
    (ortho.x * -p1.y + ortho.y * (p1.x - 1)) * div,
    (slope.x * -p1.y + slope.y * (p1.x - 1)) * div
};
Vector2 C = {
    (ortho.x * (1 - p1.y) + ortho.y * p1.x) * div,
    (slope.x * (1 - p1.y) + slope.y * p1.x) * div
};
Vector2 D = {
    (ortho.x * (1 - p1.y) + ortho.y * (p1.x - 1)) * div,
    (slope.x * (1 - p1.y) + slope.y * (p1.x - 1)) * div
};
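As a quick sanity check with assumed values (not from the original post): for p1 = (0,0) and p2 = (1,0), with A, B, C, D being the quad corners (0,0), (1,0), (0,1), (1,1), the formulas give A.x = 0, B.x = 1, C.x = 0 and D.x = 1, so the u coordinate runs from 0 at p1 to 1 at p2 and stays constant perpendicular to the gradient; the second component is irrelevant for a 1024x1 texture with GL_CLAMP_TO_EDGE.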
I am a bit confused and need help. I tried implementing an arcball camera. The theory I followed is here:
https://www.khronos.org/opengl/wiki/Object_Mouse_Trackball
It "works" except it doesn;t behave like the arcball camera in Renderdoc:
In mine, when you try to rotate too far away from the screen center, the rotation seems to go in the opposite direction of where it should.
vec3 ScreenToArcSurface(vec2 pos)
{
    const float radius = 0.9f; // Controls the speed
    if (pos.x * pos.x + pos.y * pos.y >= (radius * radius) / 2.f - 0.00001)
    {
        // This is equal to (r^2 / 2) / sqrt(x^2 + y^2), since the magnitude of the
        // vector is sqrt(x^2 + y^2)
        return {pos, (radius * radius / 2.f) / (length(pos))};
    }
    return {pos.x, pos.y, sqrt(radius * radius - (pos.x * pos.x + pos.y * pos.y))};
}
void ArcballCamera::UpdateCameraAngles(void* ptr, glm::vec2 position, glm::vec2 offset)
{
    auto camera = reinterpret_cast<ArcballCamera*>(ptr);
    vec3 vb = ScreenToArcSurface(position);
    vec3 va = ScreenToArcSurface(position - offset);
    float angle = acos(glm::min(1.f, dot(vb, va)));
    vec3 axis = cross(va, vb);
    camera->rotation *= quat(cos(angle) / 2.f, sin(angle) * axis);
    camera->rotation = normalize(camera->rotation);
}
glm::mat4 ArcballCamera::GetViewMatrix()
{
    return glm::lookAt(
        look_at_point + rotation * (position - look_at_point),
        look_at_point,
        rotation * up);
}
I don't understand what difference there is between what I implemented and what the Khronos link describes.
I fixed it by multiplying the position by -1.
I don't understand why this fixes the math. The input coordinates are what I expect: the position is normalized from -1 to 1, the top left is (-,-), the top right is (+,-), the bottom left is (-,+) and the bottom right is (+,+).
So I don't know why I need to work in a negated coordinate system for this to work.
The problem is that the website I took this from defines the formula for the OpenGL coordinate system.
However, I am working with Vulkan, where the Y coordinate is flipped, which changes the handedness of the system. Because of this, using the formula as-is picks the wrong half of the sphere.
The correct implementation for Vulkan just needs to negate the z component, i.e.:
vec3 ScreenToArcSurface(vec2 pos)
{
    const float radius = 0.9f; // Controls the speed
    if (pos.x * pos.x + pos.y * pos.y >= (radius * radius) / 2.f - 0.00001)
    {
        // This is equal to (r^2 / 2) / sqrt(x^2 + y^2), since the magnitude of the
        // vector is sqrt(x^2 + y^2)
        return {pos, -(radius * radius / 2.f) / (length(pos))};
    }
    return {pos.x, pos.y, -sqrt(radius * radius - (pos.x * pos.x + pos.y * pos.y))};
}
I'm trying to implement terrain collision for my height map terrain, and I'm following this. The tutorial is for Java but I'm using C++; the principles are the same, so it shouldn't be a problem.
To start off, we need a function to get the height of the terrain based on the camera's position. worldX and worldZ are the camera's (x, z) position, and heights is a 2D array containing all the heights of the vertices.
float HeightMap::getHeightOfTerrain(float worldX, float worldZ, float heights[][256])
{
    //Image is (256 x 256)
    float gridLength = 256;
    float terrainLength = 256;
    float terrainX = worldX;
    float terrainZ = worldZ;
    float gridSquareLength = terrainLength / ((float)gridLength - 1);
    int gridX = (int)std::floor(terrainX / gridSquareLength);
    int gridZ = (int)std::floor(terrainZ / gridSquareLength);

    //Check if position is on the terrain
    if (gridX >= gridLength - 1 || gridZ >= gridLength - 1 || gridX < 0 || gridZ < 0)
    {
        return 0;
    }

    //Find out where the player is on the grid square
    float xCoord = std::fmod(terrainX, gridSquareLength) / gridSquareLength;
    float zCoord = std::fmod(terrainZ, gridSquareLength) / gridSquareLength;
    float answer = 0.0;

    //Top triangle of a square else the bottom
    if (xCoord <= (1 - zCoord))
    {
        answer = barryCentric(glm::vec3(0, heights[gridX][gridZ], 0),
            glm::vec3(1, heights[gridX + 1][gridZ], 0), glm::vec3(0, heights[gridX][gridZ + 1], 1),
            glm::vec2(xCoord, zCoord));
    }
    else
    {
        answer = barryCentric(glm::vec3(1, heights[gridX + 1][gridZ], 0),
            glm::vec3(1, heights[gridX + 1][gridZ + 1], 1), glm::vec3(0, heights[gridX][gridZ + 1], 1),
            glm::vec2(xCoord, zCoord));
    }
    return answer;
}
To find the height within the triangle the camera is currently standing on, we use the barryCentric interpolation function.
float HeightMap::barryCentric(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3, glm::vec2 pos)
{
    float det = (p2.z - p3.z) * (p1.x - p3.x) + (p3.x - p2.x) * (p1.z - p3.z);
    float l1 = ((p2.z - p3.z) * (pos.x - p3.x) + (p3.x - p2.x) * (pos.y - p3.z)) / det;
    float l2 = ((p3.z - p1.z) * (pos.x - p3.x) + (p1.x - p3.x) * (pos.y - p3.z)) / det;
    float l3 = 1.0f - l1 - l2;
    return l1 * p1.y + l2 * p2.y + l3 * p3.y;
}
Then we just have to use the height we have calculated to check for collision during the game:
float terrainHeight = heightMap.getHeightOfTerrain(camera.Position.x, camera.Position.z, heights);
if (camera.Position.y < terrainHeight)
{
    camera.Position.y = terrainHeight;
}
Now, according to the tutorial, this should work perfectly fine, but the height is rather off and in some places it doesn't work at all. I figured it might have something to do with the translation and scaling of the terrain:
glm::mat4 model;
model = glm::translate(model, glm::vec3(0.0f, -0.3f, -15.0f));
model = glm::scale(model, glm::vec3(0.1f, 0.1f, 0.1f));
and that I should multiply the values of the heights array by 0.1, as the scaling does that part for the terrain on the GPU side, but that didn't do the trick.
Note
In the tutorial, the first lines of the getHeightOfTerrain function are
float terrainX = worldX - x;
float terrainZ = worldZ - z;
where x and z are the world position of the terrain. This is done to get the player position relative to the terrain's position. I tried it with the values from the translation part, but that doesn't work either. I removed these lines because they didn't seem necessary.
float terrainX = worldX - x;
float terrainZ = worldZ - z;
Those lines are, in fact, very necessary, unless your terrain is always at the origin.
Your code resource (tutorial) assumes that you haven't scaled or rotated the terrain in any way. The x and z variables are the XZ position of the terrain which take care of cases where the terrain is translated.
Ideally, you should transform the world position vector from world space to object space (using the inverse of the model matrix you use for the terrain), something like
vec3 localPosition = vec3(inverse(model) * vec4(worldPosition, 1));
And then use localPosition.x and localPosition.z instead of terrainX and terrainZ.
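Not from the tutorial - just a sketch of what that could look like with GLM, assuming model is the same matrix used to render the terrain. Note that the sampled height is then in the terrain's object space, so it has to be brought back into world space before comparing it with the camera:
glm::vec3 worldPos = camera.Position;
glm::vec3 localPos = glm::vec3(glm::inverse(model) * glm::vec4(worldPos, 1.0f));

// Sample the height map in the terrain's object space.
float localHeight = heightMap.getHeightOfTerrain(localPos.x, localPos.z, heights);

// Bring the terrain point back into world space (this applies the 0.1 scale and the translation).
glm::vec3 worldOnTerrain = glm::vec3(model * glm::vec4(localPos.x, localHeight, localPos.z, 1.0f));

if (camera.Position.y < worldOnTerrain.y)
    camera.Position.y = worldOnTerrain.y;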
My Question
Can someone please link a good article/tutorial/anything or maybe even explain how to correctly cast a ray from the mouse coordinates to pick objects in 3D?
I already have the Ray class, and intersection works; now I only need to create the ray from the mouse click.
I would just like to have something that I know should actually work - that's why I'm asking the professionals here - not something where I am unsure whether it is even correct in the first place.
State right now
I have a ray class, which actually works and detects intersection if I set the origin and direction to be the same as the camera, so when I move the camera it actually selects the right thing.
Now I would like to actually have 3D picking with the mouse, not camera movement.
I have read so many other questions about this, 2 tutorials, and especially so much different math stuff, since I am really not good at it.
But that didn't help me much, because the people there often use some "unproject" functions, which seem to actually be deprecated, which I have no idea how to use, and which I don't have access to.
Right now I set the ray origin to the camera position and then try to get the direction of the ray from the calculations in this tutorial.
And it works a little bit: the selection works when the camera is pointed at the object, and sometimes also along the whole y-axis; I have no idea what is happening.
If someone wants to take a look at my code right now:
public Ray2(Camera cam, float mouseX, float mouseY) {
    origin = cam.getEye();
    float height = 600;
    float width = 600;
    float aspect = (float) width / (float) height;

    float x = (2.0f * mouseX) / width - 1.0f;
    float y = 1.0f - (2.0f * mouseY) / height;
    float z = 1.0f;
    Vector ray_nds = vecmath.vector(x, y, z);
    Vector4f clip = new Vector4f(ray_nds.x(), ray_nds.y(), -1.0f, 1.0f);

    Matrix proj = vecmath.perspectiveMatrix(60f, aspect, 0.1f, 100f);
    proj = proj.invertRigid();
    float tempX = proj.get(0, 0) * clip.x + proj.get(1, 0) * clip.y
            + proj.get(2, 0) * clip.z + proj.get(3, 0) * clip.w;
    float tempY = proj.get(0, 1) * clip.x + proj.get(1, 1) * clip.y
            + proj.get(2, 1) * clip.z + proj.get(3, 1) * clip.w;
    float tempZ = proj.get(0, 2) * clip.x + proj.get(1, 2) * clip.y
            + proj.get(2, 2) * clip.z + proj.get(3, 2) * clip.w;
    float tempW = proj.get(0, 3) * clip.x + proj.get(1, 3) * clip.y
            + proj.get(2, 3) * clip.z + proj.get(3, 3) * clip.w;
    Vector4f ray_eye = new Vector4f(tempX, tempY, tempZ, tempW);
    ray_eye = new Vector4f(ray_eye.x, ray_eye.y, -1.0f, 0.0f);

    Matrix view = cam.getTransformation();
    view = view.invertRigid();
    tempX = view.get(0, 0) * ray_eye.x + view.get(1, 0) * ray_eye.y
            + view.get(2, 0) * ray_eye.z + view.get(3, 0) * ray_eye.w;
    tempY = view.get(0, 1) * ray_eye.x + view.get(1, 1) * ray_eye.y
            + view.get(2, 1) * ray_eye.z + view.get(3, 1) * ray_eye.w;
    tempZ = view.get(0, 2) * ray_eye.x + view.get(1, 2) * ray_eye.y
            + view.get(2, 2) * ray_eye.z + view.get(3, 2) * ray_eye.w;
    tempW = view.get(0, 3) * ray_eye.x + view.get(1, 3) * ray_eye.y
            + view.get(2, 3) * ray_eye.z + view.get(3, 3) * ray_eye.w;
    Vector ray_wor = vecmath.vector(tempX, tempY, tempZ);

    // don't forget to normalise the vector at some point
    ray_wor = ray_wor.normalize();
    direction = ray_wor;
}
First, the unproject() method is the way to go. It is not deprecated at all. You can find it implemented in the GLM math library, for example. Here is my implementation of ray-based 3D picking:
// let's check if this renderable's AABB is clicked:
const glm::ivec2& mCoords = _inputManager->GetMouseCoords();
int mouseY = _viewportHeight - mCoords.y;

// unproject twice to build a ray from the near plane to the far plane:
glm::vec3 v0 = glm::unProject(glm::vec3(float(mCoords.x), float(mouseY), 0.0f),
    _camera->Transform().GetView(), _camera->Transform().GetProjection(), _viewport);
glm::vec3 v1 = glm::unProject(glm::vec3(float(mCoords.x), float(mouseY), 1.0f),
    _camera->Transform().GetView(), _camera->Transform().GetProjection(), _viewport);

glm::vec3 dir = (v1 - v0);
Ray r(_camera->Transform().GetPosition(), dir);
float ishit;

// construct the AABB:
glm::mat4 aabbMatr = glm::translate(glm::mat4(1.0), renderable->Transform().GetPosition());
aabbMatr = glm::scale(aabbMatr, renderable->Transform().GetScale());

// transform the AABB vertices (needed if the original bbox is not axis aligned, as in this case)
renderable->GetBoundBox()->RecalcVertices(aabbMatr);

// this method performs a typical Ray-AABB intersection test:
if (r.CheckIntersectAABB(*renderable->GetBoundBox().get(), &ishit)) {
    printf("HIT!\n");
}
But I would also suggest taking a look at color-based 3D picking, which is pixel-perfect and even easier to implement.
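For completeness, here is a rough sketch of that color-based picking, assuming the GL/GLM headers already used above. DrawForPicking() is a hypothetical helper that renders an object with a flat color uniform and no lighting, and Renderable stands in for whatever object type is being drawn - those names are mine, not from the code above. In practice you would render this pass into an offscreen framebuffer so the user never sees it.
#include <vector>

// Encode a 24-bit object ID into an RGB color.
glm::vec3 IdToColor(unsigned id)
{
    return glm::vec3(((id      ) & 0xFF) / 255.0f,
                     ((id >>  8) & 0xFF) / 255.0f,
                     ((id >> 16) & 0xFF) / 255.0f);
}

// Returns the index of the object under the mouse, or 0xFFFFFF if nothing was hit.
unsigned PickObjectAt(int mouseX, int mouseY, int viewportHeight,
                      const std::vector<Renderable*>& objects)
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // white background = "no object"
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (unsigned i = 0; i < objects.size(); ++i)
        objects[i]->DrawForPicking(IdToColor(i)); // hypothetical flat-color draw

    unsigned char pixel[3];
    glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel); // window Y runs bottom-up

    return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
}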
Let's say there is a grid terrain for a game composed of tiles made of two triangles - made from four vertices. How would we find the Y (up) position of a point between the four vertices?
I have tried this:
float diffZ1 = lerp(heights[0], heights[2], zOffset);
float diffZ2 = lerp(heights[1], heights[3], zOffset);
float yPosition = lerp(diffZ1, diffZ2, xOffset);
Here xOffset/zOffset is the x/z offset from the first vertex of the tile, as a fraction from 0 to 1. This works for flat surfaces but not so well on bumpy terrain.
I expect this has something to do with the terrain being made from triangles, whereas the above may work on flat planes. I'm not sure, but does anybody know what's going wrong?
To better explain what's going on: in the code above, heights[] is an array of the Y coordinates of the surrounding vertices v0-v3.
Triangle 1 is made of vertex 0, 2 and 1.
Triangle 2 is made of vertex 1, 2 and 3.
I wish to find the Y coordinate of p1 when its x, z coordinates lie between v0-v3.
So I have tried determining which triangle the point is in with this function:
bool PointInTriangle(float3 pt, float3 pa, float3 pb, float3 pc)
{
    // Compute vectors
    float2 v0 = pc.xz - pa.xz;
    float2 v1 = pb.xz - pa.xz;
    float2 v2 = pt.xz - pa.xz;

    // Compute dot products
    float dot00 = dot(v0, v0);
    float dot01 = dot(v0, v1);
    float dot02 = dot(v0, v2);
    float dot11 = dot(v1, v1);
    float dot12 = dot(v1, v2);

    // Compute barycentric coordinates
    float invDenom = 1.0f / (dot00 * dot11 - dot01 * dot01);
    float u = (dot11 * dot02 - dot01 * dot12) * invDenom;
    float v = (dot00 * dot12 - dot01 * dot02) * invDenom;

    // Check if point is in triangle
    return (u >= 0.0f) && (v >= 0.0f) && (u + v <= 1.0f);
}
This isn't giving me the results I expected.
I am then trying to find the y coordinate of point p1 inside each triangle:
// Position of point p1
float3 pos = input[0].PosI;

// Calculate point and normal for triangles
float3 p1 = tile[0];
float3 n1 = (tile[2] - p1) * (tile[1] - p1); // <-- Error, cross needed
//        = cross(tile[2] - p1, tile[1] - p1);
float3 p2 = tile[3];
float3 n2 = (tile[2] - p2) * (tile[1] - p2); // <-- Error
//        = cross(tile[2] - p2, tile[1] - p2);

float newY = 0.0f;

// Determine triangle & get y coordinate inside correct triangle
if (PointInTriangle(pos, tile[0], tile[1], tile[2]))
{
    newY = p1.y - ((pos.x - p1.x) * n1.x + (pos.z - p1.z) * n1.z) / n1.y;
}
else if (PointInTriangle(input[0].PosI, tile[3], tile[2], tile[1]))
{
    newY = p2.y - ((pos.x - p2.x) * n2.x + (pos.z - p2.z) * n2.z) / n2.y;
}
Using the following to find the correct triangle:
if ((1.0f - xOffset) <= zOffset)
    inTri1 = true;
And correcting the code above to use the correct cross function seems to have solved the problem.
Because your 4 vertices may not be on a plane, you should consider each triangle separately. First find the triangle that the point resides in, and then use the following StackOverflow discussion to solve for the Z value (note the different naming of the axes). I personally like DanielKO's answer much better, but the accepted answer should work too:
Linear interpolation of three 3D points in 3D space
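As a rough illustration of that interpolation (my own sketch using GLM, not code from the linked answers): build the triangle's plane normal with a cross product and solve the plane equation for the height, which is also what the corrected code in the question does.
// Height of the point (x, z) on the plane of triangle (a, b, c).
// Assumes the triangle is not vertical (n.y != 0).
float HeightOnTriangle(glm::vec3 a, glm::vec3 b, glm::vec3 c, float x, float z)
{
    glm::vec3 n = glm::cross(b - a, c - a); // plane normal
    // Plane equation: n.x*(x - a.x) + n.y*(y - a.y) + n.z*(z - a.z) = 0, solved for y.
    return a.y - ((x - a.x) * n.x + (z - a.z) * n.z) / n.y;
}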
EDIT: For the 2nd part of your problem (finding the triangle that the point is in):
Because the projections of your tiles onto the xz plane (as you define your coordinates) are perfect squares, finding the triangle that the point resides in is a very simple operation. Here I'll use the terms left-right to refer to the x axis (from lower to higher values of x) and bottom-top to refer to the z axis (from lower to higher values of z).
Each tile can only be split in one of two ways. Either (A) via a diagonal line from the bottom-left corner to the top-right corner, or (B) via a diagonal line from the bottom-right corner to the top-left corner.
For any tile that's split as A:
Check if x' > z', where x' is the distance from the left edge of the tile to the point, and z' is the distance from the bottom edge of the tile to the point. If x' > z' then your point is in the bottom-right triangle; otherwise it's in the upper-left triangle.
For any tile that's split as B: Check if x" > z', where x" is the distance from the right edge of your tile to the point, and z' is the distance from the bottom edge of the tile to the point. If x" > z' then your point is in the lower-left triangle; otherwise it's in the upper-right triangle.
(Minor note: Above I assume your tiles aren't rotated in the xz plane; i.e. that they are aligned with the axes. If that's not correct, simply rotate them to align them with the axes before doing the above checks.)
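A small sketch of those two checks (my own illustration; fracX and fracZ are assumed names for the point's fractional offsets from the tile's left and bottom edges, in the 0 to 1 range, like the xOffset/zOffset used earlier in the question):
enum class Tri { LowerLeft, LowerRight, UpperLeft, UpperRight };

// Split A: diagonal from the bottom-left corner to the top-right corner.
Tri PickTriangleSplitA(float fracX, float fracZ)
{
    return (fracX > fracZ) ? Tri::LowerRight : Tri::UpperLeft;
}

// Split B: diagonal from the bottom-right corner to the top-left corner.
Tri PickTriangleSplitB(float fracX, float fracZ)
{
    return ((1.0f - fracX) > fracZ) ? Tri::LowerLeft : Tri::UpperRight;
}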
I need to write a function which shall take a sub-rectangle from a 2D texture (non power-of-2) and copy it to a destination sub-rectangle of an output 2D texture, using a shader (no glSubImage or similar).
Also the source and the destination may not have the same size, so I need to use linear filtering (or even mipmap).
void CopyToTex(GLuint dest_tex, GLuint src_tex,
               GLuint src_width, GLuint src_height,
               GLuint dest_width, GLuint dest_height,
               float srcRect[4],
               GLuint destRect[4]);
Here srcRect is in normalized 0-1 coordinates; that is, the rectangle [0,1]x[0,1] touches the center of every border pixel of the input texture.
To achieve a good result when the input and source dimensions don't match, I want to use GL_LINEAR filtering.
I want this function to behave in a coherent manner, i.e. calling it multiple times with many subrects shall produce the same result as one invocation with the union of the subrects; that is, the linear sampler should sample the exact centers of the input pixels.
Moreover, if the input rectangle fits the destination rectangle exactly, an exact copy should occur.
This seems to be particularly hard.
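For reference, the texel-center convention this relies on (standard OpenGL normalized texture coordinates, written out as a small helper of my own):
// Normalized coordinate of the center of texel i (0-based) in a texture of `size` texels.
// Sampling exactly at this coordinate with GL_LINEAR returns texel i with no filtering blend.
float TexelCenter(int i, int size)
{
    return (i + 0.5f) / (float)size;
}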
What I've got now is something like this:
//Setup RTT, filtering and program
float vertices[4] = {
    float(destRect[0]) / dest_width * 2.0 - 1.0,
    float(destRect[1]) / dest_height * 2.0 - 1.0,
    //etc..
};
float texcoords[4] = {
    (srcRect[0] * (src_width - 1) + 0.5) / src_width - 0.5 / dest_width,
    (srcRect[1] * (src_height - 1) + 0.5) / src_height - 0.5 / dest_height,
    (srcRect[2] * (src_width - 1) + 0.5) / src_width + 0.5 / dest_width,
    (srcRect[3] * (src_height - 1) + 0.5) / src_height + 0.5 / dest_height,
};

glBegin(GL_QUADS);
glTexCoord2f(texcoords[0], texcoords[1]);
glVertex2f(vertices[0], vertices[1]);
glTexCoord2f(texcoords[2], texcoords[1]);
glVertex2f(vertices[2], vertices[1]);
//etc...
glEnd();
To write this code I followed the information from this page.
This seems to work as intended in some corner cases (exact copy, copying a row or a column of one pixel).
My hardest test case is to perform an exact copy of a 2xN rectangle when both the input and output textures are bigger than 2xN.
I probably have some problem with offsets and scaling (the trivial ones don't work).
Solution:
The 0.5 / dest_width and 0.5 / dest_height terms in the definition of the texcoords were wrong.
An easy workaround is to remove that part completely.
float texcoords[4] = {
    (srcRect[0] * (src_width - 1) + 0.5) / src_width,
    (srcRect[1] * (src_height - 1) + 0.5) / src_height,
    (srcRect[2] * (src_width - 1) + 0.5) / src_width,
    (srcRect[3] * (src_height - 1) + 0.5) / src_height
};
Instead, we draw a smaller quad, by offsetting the vertices by:
float dx = 1.0 / (dest_rect[2] - dest_rect[0]) - epsilon;
float dy = 1.0 / (dest_rect[3] - dest_rect[1]) - epsilon;
// assume glTexCoord for every vertex
glVertex2f(vertices[0] + dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[3] - dy);
glVertex2f(vertices[0] + dx, vertices[3] - dy);
In this way we draw a quad which passes through the exact center of every border pixel.
Since OpenGL may or may not draw the border pixels in this case, we need the epsilons.
I believe that my original solution (don't offset the vertex coords) can still work, but it needs a bit of extra math to compute the right offsets for the texcoords.