Realtime object painting - C++

I am trying to paint onto an object's texture in realtime. I am using Irrlicht for now, but that does not really matter.
So far, I've obtained the right UV coordinates using this algorithm:
1. Find out which of the object's triangles the user selected (raycasting, nothing really difficult).
2. Find out the barycentric coordinates of the intersection point on that triangle.
3. Find out the UV (texture) coordinates of each triangle vertex.
4. Find out the UV (texture) coordinates of the intersection point.
5. Calculate the texture image coordinates for the intersection point.
But somehow, when I draw at the point obtained in the 5th step on the texture image, I get totally wrong results. When drawing a rectangle at the cursor point, its X (or Z) coordinate is inverted:
Here's the code I am using to fetch the texture coordinates:
core::vector2df getPointUV(core::triangle3df tri, core::vector3df p)
{
    core::vector3df
        v0 = tri.pointC - tri.pointA,
        v1 = tri.pointB - tri.pointA,
        v2 = p - tri.pointA;

    float dot00 = v0.dotProduct(v0),
          dot01 = v0.dotProduct(v1),
          dot02 = v0.dotProduct(v2),
          dot11 = v1.dotProduct(v1),
          dot12 = v1.dotProduct(v2);

    float invDenom = 1.f / ((dot00 * dot11) - (dot01 * dot01)),
          u = (dot11 * dot02 - dot01 * dot12) * invDenom,
          v = (dot00 * dot12 - dot01 * dot02) * invDenom;

    scene::IMesh* m = Mesh->getMesh(((scene::IAnimatedMeshSceneNode*)Model)->getFrameNr());

    core::array<video::S3DVertex> VA, VB, VC;
    video::SMaterial Material;

    for (unsigned int i = 0; i < m->getMeshBufferCount(); i++)
    {
        scene::IMeshBuffer* mb = m->getMeshBuffer(i);
        video::S3DVertex* vertices = (video::S3DVertex*) mb->getVertices();

        for (unsigned long long v = 0; v < mb->getVertexCount(); v++)
        {
            if (vertices[v].Pos == tri.pointA)
                VA.push_back(vertices[v]);
            else if (vertices[v].Pos == tri.pointB)
                VB.push_back(vertices[v]);
            else if (vertices[v].Pos == tri.pointC)
                VC.push_back(vertices[v]);

            if (vertices[v].Pos == tri.pointA || vertices[v].Pos == tri.pointB || vertices[v].Pos == tri.pointC)
                Material = mb->getMaterial();

            if (VA.size() > 0 && VB.size() > 0 && VC.size() > 0)
                break;
        }

        if (VA.size() > 0 && VB.size() > 0 && VC.size() > 0)
            break;
    }

    core::vector2df
        A = VA[0].TCoords,
        B = VB[0].TCoords,
        C = VC[0].TCoords;

    core::vector2df P(A + (u * (C - A)) + (v * (B - A)));
    core::dimension2du Size = Material.getTexture(0)->getSize();
    CursorOnModel = core::vector2di(Size.Width * P.X, Size.Height * P.Y);
    int X = Size.Width * P.X, Y = Size.Height * P.Y;

    // DRAWING SOME RECTANGLE
    Material.getTexture(0)->lock(true);
    Device->getVideoDriver()->setRenderTarget(Material.getTexture(0), true, true, 0);
    Device->getVideoDriver()->draw2DRectangle(video::SColor(255, 0, 100, 75),
        core::rect<s32>((X - 10), (Y - 10), (X + 10), (Y + 10)));
    Device->getVideoDriver()->setRenderTarget(0, true, true, 0);
    Material.getTexture(0)->unlock();

    return core::vector2df(X, Y);
}
I just want to make my object paintable in realtime. My current problems are: wrong texture coordinate calculation, and non-unique vertex UV coordinates (so drawing something on one side of the dwarf's axe draws the same thing on the other side of that axe).
How should I do this?

I was able to use your codebase and get it to work for me.
Re your second problem "non-unique vertex UV coordinates":
Well, you are absolutely right: you need unique vertex UVs to get this working, which means that you have to unwrap your models and not make use of shared UV space for mirrored elements and the like (e.g. left/right boot - if they use the same UV space, you'll automatically paint on both, where you want one to be red and the other to be green). You can check out "uvlayout" (tool) or the UV-unwrap modifier in 3ds Max.
Re the first and more important problem, "wrong texture coordinate calculation":
The calculation of your barycentric coordinates is correct, but I suppose your input data is wrong. I assume you get the triangle and the collision point by using Irrlicht's CollisionManager and TriangleSelector. The problem is that the positions of the triangle's vertices (which you get as a return value from the collision test) are in world coordinates. But you need them in model coordinates for the calculation, so here's what you need to do:
Pseudocode:
1. Add the node which contains the mesh of the hit triangle as a parameter to getPointUV().
2. Get the inverse absolute transformation matrix by calling node->getAbsoluteTransformation() [inverse].
3. Transform the vertices of the triangle by this inverse matrix and use those values for the rest of the method.
Below you'll find my optimized method, which does it for a very simple mesh (one mesh, only one meshbuffer).
Code:
irr::core::vector2df getPointUV(irr::core::triangle3df tri, irr::core::vector3df p,
                                irr::scene::IMeshSceneNode* pMeshNode, irr::video::IVideoDriver* pDriver)
{
    irr::core::matrix4 inverseTransform(
        pMeshNode->getAbsoluteTransformation(),
        irr::core::matrix4::EM4CONST_INVERSE);

    inverseTransform.transformVect(tri.pointA);
    inverseTransform.transformVect(tri.pointB);
    inverseTransform.transformVect(tri.pointC);

    irr::core::vector3df
        v0 = tri.pointC - tri.pointA,
        v1 = tri.pointB - tri.pointA,
        v2 = p - tri.pointA;

    float dot00 = v0.dotProduct(v0),
          dot01 = v0.dotProduct(v1),
          dot02 = v0.dotProduct(v2),
          dot11 = v1.dotProduct(v1),
          dot12 = v1.dotProduct(v2);

    float invDenom = 1.f / ((dot00 * dot11) - (dot01 * dot01)),
          u = (dot11 * dot02 - dot01 * dot12) * invDenom,
          v = (dot00 * dot12 - dot01 * dot02) * invDenom;

    irr::video::S3DVertex A, B, C;
    irr::video::S3DVertex* vertices = static_cast<irr::video::S3DVertex*>(
        pMeshNode->getMesh()->getMeshBuffer(0)->getVertices());

    for (unsigned int i = 0; i < pMeshNode->getMesh()->getMeshBuffer(0)->getVertexCount(); ++i)
    {
        if (vertices[i].Pos == tri.pointA)
        {
            A = vertices[i];
        }
        else if (vertices[i].Pos == tri.pointB)
        {
            B = vertices[i];
        }
        else if (vertices[i].Pos == tri.pointC)
        {
            C = vertices[i];
        }
    }

    irr::core::vector2df t2 = B.TCoords - A.TCoords;
    irr::core::vector2df t1 = C.TCoords - A.TCoords;
    irr::core::vector2df uvCoords = A.TCoords + t1 * u + t2 * v;

    return uvCoords;
}
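To close the loop with the painting part of the question, here is a rough, untested sketch of how the returned UV could be converted to pixel coordinates and painted, reusing the drawing calls from the question. The helper name paintAtHit and the assumption that the node's first material holds the texture being painted are mine, not part of the original answer:
// Rough sketch only: paint a small rectangle at the hit point, reusing the calls from the question.
// Assumes 'Device' is the IrrlichtDevice and that the node's first material holds the target texture,
// which usually needs to be a render-target texture (e.g. created via addRenderTargetTexture).
// Call this between beginScene() and endScene().
void paintAtHit(irr::IrrlichtDevice* Device, irr::scene::IMeshSceneNode* node,
                irr::core::triangle3df tri, irr::core::vector3df hitPoint)
{
    irr::video::IVideoDriver* driver = Device->getVideoDriver();
    irr::core::vector2df uv = getPointUV(tri, hitPoint, node, driver);

    irr::video::ITexture* tex = node->getMaterial(0).getTexture(0);
    irr::core::dimension2du size = tex->getSize();

    // Convert the normalized UV into pixel coordinates on the texture.
    irr::s32 X = (irr::s32)(size.Width  * uv.X);
    irr::s32 Y = (irr::s32)(size.Height * uv.Y);

    // Draw into the texture; pass 'false' so earlier strokes are kept
    // (the question cleared the target, which erases previous paint).
    driver->setRenderTarget(tex, false, false, irr::video::SColor(0, 0, 0, 0));
    driver->draw2DRectangle(irr::video::SColor(255, 0, 100, 75),
        irr::core::rect<irr::s32>(X - 10, Y - 10, X + 10, Y + 10));
    driver->setRenderTarget(0, false, false, irr::video::SColor(0, 0, 0, 0));
}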

Related

Checking if vector passes through vertices

I have been struggling with this problem for over a month, so I really need your help.
To further elaborate on the question:
The question is whether a vector called 'direction' that starts at a vertex called 'start' passes through the 'target'.
Both the direction and the distance need to be confirmed.
After enough debugging, I decided that using the dot product alone was not workable.
The result is good when calculated directly, so why is the result different when executed in the shader?
The same thickness should be drawn regardless of the distance, so why does the line become thin when the distance is far?
Do you have any good ideas, even if they don't use the rotation matrix the way I do?
Those are my three questions.
First of all, my situation is:
- I am drawing a full-screen quad (FSQ).
- I want to check whether the direction from start passes through the target.
- The computation is done in the pixel shader.
- 1 unit corresponds to one pixel.
- The screen size is 1920*1080.
bool intersect(float2 target, float2 direction, float2 start)
{
    bool intersecting = false;
    static const float thresholdX = 0.5 / SCREENWIDTH;
    static const float thresholdY = 0.5 / SCREENHEIGHT;

    if (direction.x == 0 && direction.y == 0)
        ; // zero direction: nothing to test
    else
    {
        float2 startToTarget = target - start;

        // Rotate startToTarget into the (unnormalized) frame of 'direction'.
        float changedTargetPositionX = startToTarget.x * direction.x    + startToTarget.y * direction.y;
        float changedTargetPositionY = startToTarget.x * (-direction.y) + startToTarget.y * direction.x;

        float rangeOfX = (direction.x * direction.x) + (direction.y * direction.y);

        if (changedTargetPositionX <= rangeOfX + thresholdX && changedTargetPositionX >= -thresholdX &&
            changedTargetPositionY <= thresholdY && changedTargetPositionY >= -thresholdY)
        {
            intersecting = true;
        }
    }
    return intersecting;
}
We use a rotation matrix to rotate a vector and then check the difference between the two vectors. This works in most cases, but fails for very small pixel differences.
For example:
start = (15,0), direction = (10,0), target = (10,0)
In this case, the intersect function should return false, but it returns true.
But if the pixel difference is bigger, it works fine.
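For reference, here is a minimal, stand-alone C++ re-creation of the same test that can be run on the CPU (a sketch only; SCREENWIDTH and SCREENHEIGHT are assumed to be 1920 and 1080 as stated above). For the example values it prints "false", which matches the result of calculating by hand:
// Minimal CPU re-creation of the shader test (sketch; 1920x1080 screen as above).
#include <cstdio>

struct float2 { float x, y; };

static const float SCREENWIDTH  = 1920.0f;
static const float SCREENHEIGHT = 1080.0f;

bool intersect(float2 target, float2 direction, float2 start)
{
    bool intersecting = false;
    const float thresholdX = 0.5f / SCREENWIDTH;
    const float thresholdY = 0.5f / SCREENHEIGHT;

    if (direction.x == 0 && direction.y == 0)
        return false; // zero direction: nothing to test

    float2 startToTarget = { target.x - start.x, target.y - start.y };

    // Rotate startToTarget into the (unnormalized) frame of 'direction'.
    float changedX = startToTarget.x * direction.x    + startToTarget.y * direction.y;
    float changedY = startToTarget.x * (-direction.y) + startToTarget.y * direction.x;
    float rangeOfX = direction.x * direction.x + direction.y * direction.y;

    if (changedX <= rangeOfX + thresholdX && changedX >= -thresholdX &&
        changedY <= thresholdY && changedY >= -thresholdY)
        intersecting = true;

    return intersecting;
}

int main()
{
    float2 start = { 15, 0 }, direction = { 10, 0 }, target = { 10, 0 };
    std::printf("%s\n", intersect(target, direction, start) ? "true" : "false"); // prints "false"
    return 0;
}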
And here is the debugging code:
#define MAX 5

float2 points[MAX * MAX];

for (float fi = 1; fi < MAX; fi++)
    for (float fj = 1; fj < MAX; fj++)
        points[(int)(fi * MAX + fj)] = float2(fi / MAX, fj / MAX);

for (uint ni = 0; ni < MAX * MAX; ni++)
    for (uint nj = 3; nj < MAX * MAX; nj++)
        if (intersect(uv, points[nj] - points[ni], points[ni]))
        {
            color = float4(1, 0, 0, 1);
            return color;
        }

return float4(0, 0, 0, 1);
When debugging like this, the line becomes thinner as the distance increases.
All the lines should have the same thickness, but I don't know why they don't.
This is the result of running the debugging code:
We look forward to your reply.
Thank you.

Centering the object into the 3D space with Direct3D

The idea is to present a drawn 3D object "centered" in the screen. After loading the object with WaveFrontReader I got an array of vertices:
float bmin[3], bmax[3];
bmin[0] = bmin[1] = bmin[2] = std::numeric_limits<float>::max();
bmax[0] = bmax[1] = bmax[2] = -std::numeric_limits<float>::max();

for (int k = 0; k < 3; k++)
{
    for (auto& v : objx->wfr.vertices)
    {
        if (k == 0)
        {
            bmin[k] = std::min(v.position.x, bmin[k]);
            bmax[k] = std::max(v.position.x, bmax[k]);
        }
        if (k == 1)
        {
            bmin[k] = std::min(v.position.y, bmin[k]);
            bmax[k] = std::max(v.position.y, bmax[k]);
        }
        if (k == 2)
        {
            bmin[k] = std::min(v.position.z, bmin[k]);
            bmax[k] = std::max(v.position.z, bmax[k]);
        }
    }
}
I got the idea from the Viewer in TinyObjLoader (which uses OpenGL though), and then:
float maxExtent = 0.5f * (bmax[0] - bmin[0]);
if (maxExtent < 0.5f * (bmax[1] - bmin[1])) {
    maxExtent = 0.5f * (bmax[1] - bmin[1]);
}
if (maxExtent < 0.5f * (bmax[2] - bmin[2])) {
    maxExtent = 0.5f * (bmax[2] - bmin[2]);
}

_3dp.scale[0] = maxExtent;
_3dp.scale[1] = maxExtent;
_3dp.scale[2] = maxExtent;
_3dp.translation[0] = -0.5 * (bmax[0] + bmin[0]);
_3dp.translation[1] = -0.5 * (bmax[1] + bmin[1]);
_3dp.translation[2] = -0.5 * (bmax[2] + bmin[2]);
However, this doesn't work. With an object like this spider, whose vertex coordinates do not extend beyond +/-100, the above formula gives a scale of about 100x, and yet, with the current view set to (0,0,0), the object is too close: I have to set the Z translation manually to something like 50000 to see it fully inside a box with a D3D11_VIEWPORT viewport = { 0.0f, 0.0f, w, h, 0.0f, 1.0f };. Not to mention that the Y is not centered either.
Is there a proper algorithm to center the object into view?
Thanks a lot
You can actually change the position of the camera itself instead of the objects.
Editing the camera position is what OpenGL tutorials usually recommend.
In games, the camera (which captures the viewpoint from which the rendered objects are viewed) is not in the middle of the scene but actually farther away, so you can see everything going on in the view/scene.
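As a rough sketch of that idea (untested; it uses DirectXMath, the bmin/bmax values computed in the question, and an assumed vertical field-of-view parameter fovY matching your projection matrix; MakeFramingView is just an illustrative name), the camera can be pulled back along Z until the whole bounding box fits into view:
// Sketch: build a view matrix that frames the bounding box [bmin, bmax].
// 'fovY' is assumed to be the vertical field of view of your projection matrix (radians).
#include <DirectXMath.h>
#include <cmath>

DirectX::XMMATRIX MakeFramingView(const float bmin[3], const float bmax[3], float fovY)
{
    using namespace DirectX;

    // Center and bounding-sphere radius of the box.
    float cx = 0.5f * (bmin[0] + bmax[0]);
    float cy = 0.5f * (bmin[1] + bmax[1]);
    float cz = 0.5f * (bmin[2] + bmax[2]);
    float dx = bmax[0] - bmin[0], dy = bmax[1] - bmin[1], dz = bmax[2] - bmin[2];
    float radius = 0.5f * std::sqrt(dx * dx + dy * dy + dz * dz);

    // Distance at which a sphere of this radius fills the vertical FOV
    // (assumes the vertical FOV is the limiting one).
    float distance = radius / std::tan(0.5f * fovY);

    // Look at the box center from down the -Z side (left-handed, as in D3D).
    XMVECTOR eye   = XMVectorSet(cx, cy, cz - distance, 1.0f);
    XMVECTOR focus = XMVectorSet(cx, cy, cz, 1.0f);
    XMVECTOR up    = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
    return XMMatrixLookAtLH(eye, focus, up);
}
The projection's near and far planes then need to bracket roughly distance - radius to distance + radius, otherwise the object gets clipped even though the framing is correct.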

Ray tracing obj files diffuse shading issue

The repository (GitHub)
I'm having issues with my diffuse shading for my models (the issue does not arise when rendering primitives).
What is interesting to note here is that when you look at the left reflective sphere, the shading appears normal (I might be wrong on this; it is based on observation).
Low poly bunny and a triangle
Low poly bunny and 2 reflective spheres
Cube and 2 reflective spheres
I'm not sure what I'm doing wrong, as the normals are calculated each time I create a triangle in the constructor. I am using tinyobjloader to load my models, here is the TriangleMesh intersection algorithm.
FPType TriangleMesh::GetIntersection(const Ray &ray)
{
    for (auto &shape : shapes)
    {
        size_t index_offset = 0;
        for (size_t f = 0; f < shape.mesh.num_face_vertices.size(); ++f) // faces (triangles)
        {
            int fv = shape.mesh.num_face_vertices[f];

            tinyobj::index_t &idx0 = shape.mesh.indices[index_offset + 0]; // v0
            tinyobj::index_t &idx1 = shape.mesh.indices[index_offset + 1]; // v1
            tinyobj::index_t &idx2 = shape.mesh.indices[index_offset + 2]; // v2

            Vec3d &v0 = Vec3d(attrib.vertices[3 * idx0.vertex_index + 0], attrib.vertices[3 * idx0.vertex_index + 1], attrib.vertices[3 * idx0.vertex_index + 2]);
            Vec3d &v1 = Vec3d(attrib.vertices[3 * idx1.vertex_index + 0], attrib.vertices[3 * idx1.vertex_index + 1], attrib.vertices[3 * idx1.vertex_index + 2]);
            Vec3d &v2 = Vec3d(attrib.vertices[3 * idx2.vertex_index + 0], attrib.vertices[3 * idx2.vertex_index + 1], attrib.vertices[3 * idx2.vertex_index + 2]);

            Triangle tri(v0, v1, v2);
            if (tri.GetIntersection(ray))
                return tri.GetIntersection(ray);

            index_offset += fv;
        }
    }
}
The Triangle Intersection algorithm.
FPType Triangle::GetIntersection(const Ray &ray)
{
    Vector3d v0v1 = v1 - v0;
    Vector3d v0v2 = v2 - v0;
    Vector3d pvec = ray.GetDirection().Cross(v0v2);
    FPType det = v0v1.Dot(pvec);

    // ray and triangle are parallel if det is close to 0
    if (abs(det) < BIAS)
        return false;

    FPType invDet = 1 / det;

    FPType u, v;
    Vector3d tvec = ray.GetOrigin() - v0;
    u = tvec.Dot(pvec) * invDet;
    if (u < 0 || u > 1)
        return false;

    Vector3d qvec = tvec.Cross(v0v1);
    v = ray.GetDirection().Dot(qvec) * invDet;
    if (v < 0 || u + v > 1)
        return false;

    FPType t = v0v2.Dot(qvec) * invDet;
    if (t < BIAS)
        return false;

    return t;
}
I think this is because, when I'm handling all my object intersections, the triangle mesh is regarded as only one object, so it returns only one normal when I'm trying to get the object normals: code
Color Trace(const Vector3d &origin, const Vector3d &direction, const std::vector<std::shared_ptr<Object>> &sceneObjects, const int indexOfClosestObject,
            const std::vector<std::shared_ptr<Light>> &lightSources, const int &depth = 0)
{
    if (indexOfClosestObject != -1 && depth <= DEPTH) // not checking depth for infinite mirror effect (not a lot of overhead)
    {
        std::shared_ptr<Object> sceneObject = sceneObjects[indexOfClosestObject];
        Vector3d normal = sceneObject->GetNormalAt(origin);
(screenshot of debug)
EDIT: I have solved the issue and now shading works properly: https://github.com/MrCappuccino/Tracey/blob/testing/src/TriangleMesh.cpp#L35-L48
If you iterate over all your faces and return on the first face you hit, it can happen that you hit a face which is behind other faces and is therefore not really the face you want to hit; you have to measure the length of your ray and return the intersection with the shortest ray.
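A sketch of what that could look like, reusing the names from the question's TriangleMesh::GetIntersection (untested, and not necessarily what the poster's linked fix does): keep the smallest positive t across all faces and return that instead of returning on the first hit.
// Sketch: return the nearest hit over all faces instead of the first one found.
// Names (shapes, attrib, Vec3d, Triangle, FPType, Ray) are taken from the question's code;
// Triangle::GetIntersection returns 0 (false) on a miss and the distance t on a hit.
FPType TriangleMesh::GetIntersection(const Ray &ray)
{
    FPType nearest = 0; // 0 means "no hit"

    for (auto &shape : shapes)
    {
        size_t index_offset = 0;
        for (size_t f = 0; f < shape.mesh.num_face_vertices.size(); ++f)
        {
            int fv = shape.mesh.num_face_vertices[f];

            tinyobj::index_t &idx0 = shape.mesh.indices[index_offset + 0];
            tinyobj::index_t &idx1 = shape.mesh.indices[index_offset + 1];
            tinyobj::index_t &idx2 = shape.mesh.indices[index_offset + 2];

            Vec3d v0(attrib.vertices[3 * idx0.vertex_index + 0], attrib.vertices[3 * idx0.vertex_index + 1], attrib.vertices[3 * idx0.vertex_index + 2]);
            Vec3d v1(attrib.vertices[3 * idx1.vertex_index + 0], attrib.vertices[3 * idx1.vertex_index + 1], attrib.vertices[3 * idx1.vertex_index + 2]);
            Vec3d v2(attrib.vertices[3 * idx2.vertex_index + 0], attrib.vertices[3 * idx2.vertex_index + 1], attrib.vertices[3 * idx2.vertex_index + 2]);

            Triangle tri(v0, v1, v2);
            FPType t = tri.GetIntersection(ray);

            // Keep the closest positive intersection found so far.
            if (t > 0 && (nearest == 0 || t < nearest))
                nearest = t;

            index_offset += fv;
        }
    }
    return nearest;
}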

OpenGL trackball

I am trying to rotate an OpenGL scene using a trackball. The problem I am having is that I get rotations opposite to the direction of my swipe on screen. Here is the snippet of code:
prevPoint.y = viewPortHeight - prevPoint.y;
currentPoint.y = viewPortHeight - currentPoint.y;

prevPoint.x = prevPoint.x - centerx;
prevPoint.y = prevPoint.y - centery;
currentPoint.x = currentPoint.x - centerx;
currentPoint.y = currentPoint.y - centery;

double angle = 0;

if (prevPoint.x == currentPoint.x && prevPoint.y == currentPoint.y) {
    return;
}

double d, z, radius = viewPortHeight * 0.5;
if (viewPortWidth > viewPortHeight) {
    radius = viewPortHeight * 0.5f;
} else {
    radius = viewPortWidth * 0.5f;
}

d = (prevPoint.x * prevPoint.x + prevPoint.y * prevPoint.y);
if (d <= radius * radius * 0.5) {   /* Inside sphere */
    z = sqrt(radius * radius - d);
} else {                            /* On hyperbola */
    z = (radius * radius * 0.5) / sqrt(d);
}

Vector refVector1(prevPoint.x, prevPoint.y, z);
refVector1.normalize();

d = (currentPoint.x * currentPoint.x + currentPoint.y * currentPoint.y);
if (d <= radius * radius * 0.5) {   /* Inside sphere */
    z = sqrt(radius * radius - d);
} else {                            /* On hyperbola */
    z = (radius * radius * 0.5) / sqrt(d);
}

Vector refVector2(currentPoint.x, currentPoint.y, z);
refVector2.normalize();

Vector axisOfRotation = refVector1.cross(refVector2);
axisOfRotation.normalize();

angle = acos(refVector1 * refVector2);
I recommend artificially setting prevPoint and currentPoint to (0,0) and (0,1) and then stepping through the code (with a debugger or with your eyes) to see if each part makes sense to you, and whether the angle of rotation and axis at the end of the block are what you expect.
If they are what you expect, then I'm guessing the error is in the logic that occurs after that, i.e. where you take the angle and axis and convert them to a matrix which gets multiplied in to move the model. A number of convention choices happen in this pipeline which, if swapped, can lead to the type of bug you're having (see the sketch after this list):
- Whether the formula assumes the angle is winding left- or right-handedly around the axis.
- Whether the transformation is meant to rotate an object in the world or meant to rotate the camera.
- Whether the matrix is meant to operate by multiplication on the left or right.
- Whether rows or columns of matrices are contiguous in memory.
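To illustrate how those choices show up in code, here is a minimal sketch of applying the result with legacy fixed-function OpenGL (it assumes the Vector type from the question exposes x/y/z members, and applyTrackballRotation is just an illustrative name). Note also that acos() returns radians while glRotated() expects degrees, and that negating either the angle or the axis flips the perceived rotation direction:
// Sketch: apply the trackball axis/angle to the modelview matrix before drawing the scene.
#include <GL/gl.h>

void applyTrackballRotation(const Vector &axisOfRotation, double angleRadians)
{
    const double PI = 3.14159265358979323846;
    double angleDegrees = angleRadians * 180.0 / PI; // glRotated expects degrees

    // Rotating the object by +angle about +axis appears as the opposite swipe if the
    // convention in use is "rotate the camera"; negating the angle (or the axis) flips it.
    glMatrixMode(GL_MODELVIEW);
    glRotated(angleDegrees, axisOfRotation.x, axisOfRotation.y, axisOfRotation.z);
}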

How do I use texture-mapping in a simple ray tracer?

I am attempting to add features to a ray tracer in C++. Namely, I am trying to add texture mapping to the spheres. For simplicity, I am using an array to store the texture data. I obtained the texture data by using a hex editor and copying the correct byte values into an array in my code. This was just for my testing purposes. When the values of this array correspond to an image that is simply red, it appears to work close to what is expected except there is no shading.
first image http://dl.dropbox.com/u/367232/Texture.jpg
The bottom right of the image shows what a correct sphere should look like. That sphere's colour comes from one set colour, not a texture map.
Another problem is that when the texture map contains something other than a single colour of pixels, it turns white. My test image is a picture of water, and when it is mapped, it shows only one ring of bluish pixels surrounding the white colour.
bmp http://dl.dropbox.com/u/367232/vPoolWater.bmp
When this is done, it simply appears as this:
second image http://dl.dropbox.com/u/367232/texture2.jpg
Here are a few code snippets:
Color getColor(const Object *object, const Ray *ray, float *t)
{
    if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
        float distance = *t;
        Point pnt = ray->origin + ray->direction * distance;
        Point oc = object->center;

        Vector ve = Point(oc.x, oc.y, oc.z + 1) - oc;
        Normalize(&ve);
        Vector vn = Point(oc.x, oc.y + 1, oc.z) - oc;
        Normalize(&vn);
        Vector vp = pnt - oc;
        Normalize(&vp);

        double phi = acos(-vn.dot(vp));
        float v = phi / M_PI;

        float u;
        float num1 = (float)acos(vp.dot(ve));
        float num = (num1 / (float)sin(phi));
        float theta = num / (float)(2 * M_PI);
        if (theta < 0 || theta == NAN) { theta = 0; }
        if (vn.cross(ve).dot(vp) > 0) {
            u = theta;
        }
        else {
            u = 1 - theta;
        }

        int x = (u * IMAGE_WIDTH) - 1;
        int y = (v * IMAGE_WIDTH) - 1;
        int p = (y * IMAGE_WIDTH + x) * 3;
        return Color(TEXT_DATA[p + 2], TEXT_DATA[p + 1], TEXT_DATA[p]);
    }
    else {
        return object->color;
    }
}
I call the colour code here in Trace:
if (object->materialType == MATTE)
    return getColor(object, ray, &t);

Ray shadowRay;
int isInShadow = 0;
shadowRay.origin.x = pHit.x + nHit.x * bias;
shadowRay.origin.y = pHit.y + nHit.y * bias;
shadowRay.origin.z = pHit.z + nHit.z * bias;
shadowRay.direction = light->object->center - pHit;
float len = shadowRay.direction.length();
Normalize(&shadowRay.direction);

float LdotN = shadowRay.direction.dot(nHit);
if (LdotN < 0)
    return 0;

Color lightColor = light->object->color;
for (int k = 0; k < numObjects; k++) {
    if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
        if (objects[k]->materialType == GLASS)
            lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
        else
            isInShadow = 1;
        break;
    }
}
lightColor *= 1.f / (len * len);
return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;
}
I left out the rest of the code so as not to bog down the post, but it can be seen here. Any help is greatly appreciated. The only portion not included in the code is where I define the texture data, which, as I said, is simply taken straight from a bitmap file of the above image.
Thanks.
It could be that the texture is just washed out because the light is so bright and so close. Notice how in the solid red case, there doesn't seem to be any gradation around the sphere. The red looks like it's saturated.
Your u,v mapping looks right, but there could be a mistake there. I'd add some assert statements to make sure u and v are really between 0 and 1 and that the p index into your TEXT_DATA array is also within range.
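Something along these lines could be dropped into getColor() just before the return Color(...) line (a sketch; checkTextureLookup is an illustrative name and TEXT_DATA_SIZE is an assumed constant for the number of bytes in TEXT_DATA):
// Sketch: range checks for the texture lookup in getColor().
#include <cassert>

inline void checkTextureLookup(float u, float v, int x, int y, int p, int imageWidth, int textDataSize)
{
    assert(u >= 0.0f && u <= 1.0f);
    assert(v >= 0.0f && v <= 1.0f);
    assert(x >= 0 && x < imageWidth);
    assert(y >= 0 && y < imageWidth);          // the question's code uses IMAGE_WIDTH for both axes
    assert(p >= 0 && p + 2 < textDataSize);    // all three bytes read must be in range
}
It would be called as checkTextureLookup(u, v, x, y, p, IMAGE_WIDTH, TEXT_DATA_SIZE); whichever assert fires points to where the u/v values or the index calculation go out of range.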
If you're debugging your textures, you should use a constant material whose color is determined only by the texture and not the lights. That way you can make sure you are correctly mapping your texture to your primitive and filtering it properly before doing any lighting on it. Then you know that part isn't the problem.
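For instance, mirroring the existing MATTE early-return in the Trace snippet above, the textured materials could temporarily take the same path so the raw texture colour comes back with no lighting applied (a sketch against the question's code, not a permanent change):
// Temporary debug path in Trace: show the raw texture colour, no shadows or attenuation.
if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE)
    return getColor(object, ray, &t);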