My raw primitives cannot be displayed with raymarching - GLSL

I encountered a very strange problem while learning raymarching: my equation is not rendered correctly.
It displays fine in MATLAB, but it cannot be shown on Shadertoy at all.
My equation:
f(x,y,z) = (x^2+y^2+z^2)^2+2*y*(x^2+y^2+z^2)+2*(x^2+z^2);
Code in MATLAB:
f = @(x,y,z) (x.^2+y.^2+z.^2).^2 + 2*y.*(x.^2+y.^2+z.^2) + 2*(x.^2+z.^2);
fimplicit3(f)
MATLAB displays a normal picture.
Code in Shadertoy:
float sdRound(vec3 p)
{
    float lengthXYZ = (p.x * p.x + p.y * p.y + p.z * p.z);
    return lengthXYZ * lengthXYZ + 2.0 * p.y * lengthXYZ + 2.0 * (p.x * p.x + p.z * p.z);
}
I learned from, practiced with, and modified iq's code (https://www.shadertoy.com/view/Xds3zN), but my shape cannot be displayed normally.
Shadertoy displays an abnormal picture.
I don't know where the problem is. Please help me; this has troubled me for a long time.
Forgive my poor English, I am using Google Translate.

Your problem seems to lie in the very understanding of what raymarching is.
Your formula mathematically describes a surface (that is why it is displayed nicely in MATLAB), but that is not how raymarching works.
Defining an object for raymarching means defining a distance function for the object, which is different from a function that mathematically describes its surface. The two functions agree wherever they return zero (which means the coordinates are on the surface), but they differ everywhere else.
To create something similar to your object, I would suggest exploring the sdRoundCone function from iq's raymarching primitives, as it seems the closest to what you want to achieve.
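To make the difference concrete, here is a minimal sketch contrasting the two kinds of functions for a sphere. It is written as plain C++ for readability, but the same expressions work in GLSL on a vec3; the names are mine, not iq's:

#include <cmath>

// True signed distance to a sphere of radius r centered at the origin:
// negative inside, zero on the surface, positive outside, and the magnitude
// is the actual Euclidean distance a ray may safely march.
float sdSphere(float x, float y, float z, float r)
{
    return std::sqrt(x * x + y * y + z * z) - r;
}

// Implicit-surface form of the same sphere. Its zero set is identical, but
// away from the surface its value is not a distance, so a raymarcher that
// treats it as one will overshoot or stall.
float implicitSphere(float x, float y, float z, float r)
{
    return x * x + y * y + z * z - r * r;
}

The usual fix is either to use a primitive with a known distance function (such as sdRoundCone) or to divide the implicit value by the length of its gradient to approximate a distance.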

Related

Project point from point cloud to Image in OpenCV

I'm trying to project a point from 3D to 2D in OpenCV with C++. At the moment, I'm using cv::projectPoints(), but it's just not working out.
But first things first. I'm trying to write a program that finds an intersection between a point cloud and a line in space. So I calibrated two cameras, did rectification and matching using SGBM. Finally I projected the disparity map to 3D using reprojectImageTo3D(). That all works very well, and in MeshLab I can visualize my point cloud.
After that I wrote an algorithm to find the intersection between the point cloud and a line which I coded manually. That works fine, too. I found a point in the point cloud about 1.5 mm away from the line, which is good enough for the beginning. So I took this point and tried to project it back to the image, so I could mark it. But here is the problem.
Now the point is not inside the image anymore. Since I took an intersection near the middle of the image, this should not be possible. I think the problem could be in the coordinate systems, as I don't know in which coordinate system the point cloud is expressed (left camera, right camera, or something else).
My projectPoints function looks like:
projectPoints(intersectionPoint3D, R, T, cameraMatrixLeft, distortionCoeffsLeft, intersectionPoint2D, noArray(), 0);
R and T are the rotation and translation from one camera to the other (obtained from stereoCalibrate). This might be my mistake, but how can I fix it? I also tried setting these to (0,0,0), but that doesn't work either. I also tried converting the R matrix to a vector using Rodrigues. Still the same problem.
I'm sorry if this has been asked before, but I'm not sure how to search for this problem. I hope my text is clear enough... if you need more information, I will gladly provide it.
Many thanks in advance.
You have a 3D point and you want to get its corresponding 2D location, right? If you have the camera calibration matrix (a 3x3 matrix), you can project the point onto the image:
cv::Point2d get2DFrom3D(cv::Point3d p, cv::Mat1d CameraMat)
{
    cv::Point2d pix;
    pix.x = (p.x * CameraMat(0, 0)) / p.z + CameraMat(0, 2);
    pix.y = (p.y * CameraMat(1, 1)) / p.z + CameraMat(1, 2);
    return pix;
}
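A minimal usage sketch, assuming the intersection point is already expressed in the left camera's coordinate frame and that cameraMatrixLeft is available as a cv::Mat1d; the 3D coordinates below are made up:

#include <opencv2/core.hpp>
#include <iostream>

// Assumes get2DFrom3D() from above and a 3x3 cv::Mat1d cameraMatrixLeft.
void projectIntersection(const cv::Mat1d &cameraMatrixLeft)
{
    // Placeholder point in metres, expressed in the left camera's frame.
    cv::Point3d intersectionPoint3D(0.10, -0.05, 1.20);
    cv::Point2d pixel = get2DFrom3D(intersectionPoint3D, cameraMatrixLeft);
    std::cout << "projected pixel: " << pixel << std::endl;
}

Note that this only gives a sensible result if the point really is expressed in the same camera's frame as the calibration matrix you pass in; if it came from reprojectImageTo3D, that should be the rectified left camera's frame, so no extra R/T is applied.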

Multiple instances of btDefaultMotionState, all ignored, but one

To summarize the problem(s):
I have two bodies in my world so far, one being the ground, the other one being a falling box called "fallingStar".
1) I do not understand why my bullet world is not aligned with my drawn world unless I set an offset of btVector3(2,2,2) to the (btDefault)MotionState.
There is no fancy magic going on anywhere in the code that would explain the offset. Or at least I could not find any reason, not in the shaders, not anywhere.
2) I expected to be able to use multiple instances of btDefaultMotionState, to be precise, I wanted to use one instance for the falling entity and place it somewhere above the ground and then create another instance for the ground that should simply be aligned with my graphics-ground, ever unmoving.
What I am experiencing with regard to 2) is that, for whatever reason, the btDefaultMotionState instance for the falling entity always also influences the one for the ground, without any reference between them.
Now to the code:
Creation of the fallingBox:
btCollisionShape *fallingBoxShape = new btBoxShape(btVector3(1,1,1));
btScalar fallingBoxMass = 1;
btVector3 fallingBoxInertia(0,0,0);
fallingBoxShape->calculateLocalInertia(fallingBoxMass, fallingBoxInertia);
// TODO this state somehow defines where exactly _ALL_ of the physicsWorld is...
btDefaultMotionState *fallMotionState = new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1), btVector3(2,2,2)));
//btDefaultMotionState *fallMotionState = new btDefaultMotionState();
btRigidBody::btRigidBodyConstructionInfo fallingBoxBodyCI(fallingBoxMass, fallMotionState, fallingBoxShape, fallingBoxInertia);
/*btTransform initialTransform;
initialTransform.setOrigin(btVector3(0,5,0));*/
this->fallingBoxBody = new btRigidBody(fallingBoxBodyCI);
/*fallMotionState->setWorldTransform(initialTransform);
this->fallingBoxBody->setWorldTransform(initialTransform);*/
this->physicsWorld->addBody(*fallingBoxBody);
Now the interesting parts to me are the necessary offset of btVector3(2,2,2) to align it with my drawn world and this:
btTransform initialTransform;
initialTransform.setOrigin(btVector3(0,5,0));
this->fallingStarBody = new btRigidBody(fallingStarBodyCI);
fallMotionState->setWorldTransform(initialTransform);
If I re-enable this part of the code, ALL the bodies again show an offset, but NOT just 5 up, which I could somehow comprehend if for whatever reason the worldTransform affected every entity. Instead they are about (2,2,2) off... which I cannot grasp at all.
I guess that this line is useless:
fallMotionState->setWorldTransform(initialTransform);
as it does not change anything whether it's there or not.
Now to the code of the ground creation:
btCompoundShape *shape = new btCompoundShape();
// ... just some logic, nothing to do with Bullet
btTransform transform;
transform.setIdentity();
transform.setOrigin(btVector3(x + (this->x * Ground::width),
                              y + (this->y * Ground::height),
                              z + (this->z * Ground::depth)));
btBoxShape *boxShape = new btBoxShape(btVector3(1,0,1)); // flat surface, no box
shape->addChildShape(transform, boxShape);
(this portion just creates a compound shape for each surface tile :)
btRigidBody::btRigidBodyConstructionInfo info(0, nullptr, shape);
return new btRigidBody(info);
Here I purposely set the motion state to nullptr, but this doesn't change anything either.
Now I am really curious... I thought maybe the implementation of btDefaultMotionState was a singleton, but it doesn't look like it, so... why the hell is setting the motionState of one body affecting the whole world?
Bullet is a good library, but few people dedicate time to writing good documentation.
To set the position of a btRigidBody, try this:
btTransform transform = body->getCenterOfMassTransform();
transform.setOrigin(aNewPosition); // <- set the orientation / position that you like
body->setCenterOfMassTransform(transform);
If your code is wrong only in the set-transformation part (which is what I guess from skimming your code), this should solve it.
Note that this snippet works only for a dynamic body, not a static body.
About compound bodies:
If it is a compound body, e.g. shape B contains shape C:
Setting the transformation of B works (it sets the body of B), but it does not work for C
(because C is just a shape; transformations are only supported on bodies).
If I want to change the relative transformation of C with respect to B, I create a whole new compound shape and a new rigid body. Don't forget to remove the old body & shape.
That is a library limitation.
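A rough sketch of that rebuild, assuming a btDiscreteDynamicsWorld and that the child shape itself is reused; the function and parameter names are mine:

#include <btBulletDynamicsCommon.h>

// Replace a compound body so that its child shape gets a new relative
// transform. All parameters are the caller's existing objects; the old body
// and old compound shape are destroyed here.
btRigidBody *rebuildCompoundBody(btDiscreteDynamicsWorld *world,
                                 btRigidBody *oldBody,
                                 btCompoundShape *oldCompound,
                                 btCollisionShape *childShape,
                                 const btTransform &newChildTransform)
{
    world->removeRigidBody(oldBody);
    delete oldBody;
    delete oldCompound;

    btCompoundShape *newCompound = new btCompoundShape();
    newCompound->addChildShape(newChildTransform, childShape);

    // Mass 0 / null motion state = static body, like the ground in the question.
    btRigidBody::btRigidBodyConstructionInfo info(0, nullptr, newCompound);
    btRigidBody *newBody = new btRigidBody(info);
    world->addRigidBody(newBody);
    return newBody;
}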
P.S.
I can't answer some of your questions; this information is what I gathered after lurking in the Bullet forum for a while and testing things myself.
(I am also coding a game + game library from scratch, using Bullet and other open-source libraries.)
Edit: (about the new problem)
it just slowly falls down (along with the ground itself, which should
not move as I gave it a mass of 0)
I would try to solve it in this order.
Idea A
Set the mass of the compound body to 0 instead, because setting a child shape's mass has no meaning.
Idea B
First, check getCenterOfMassTransform() every time step: is it really falling?
If it is actually falling, to be sure, try dynamicsWorld->setGravity(btVector3(0,0,0));.
If that still doesn't work, try a very simple world (one simple object, no compound shape) and see.
Idea C (now I am starting to get desperate)
Ensure your camera position is constant.
If the problem is still there, I think you can now create a simple test case and post it in the Bullet forum without too much effort.
Fewer lines of code = better feedback.
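For what it's worth, a minimal self-contained test case can look roughly like the sketch below; it assumes a plain btDiscreteDynamicsWorld with default components, so adapt it to your own setup:

#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main()
{
    // Minimal Bullet world: one dynamic box falling under gravity.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -10, 0));

    btBoxShape boxShape(btVector3(1, 1, 1));
    btScalar mass = 1;
    btVector3 inertia(0, 0, 0);
    boxShape.calculateLocalInertia(mass, inertia);
    btDefaultMotionState motionState(
        btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 5, 0)));
    btRigidBody::btRigidBodyConstructionInfo info(mass, &motionState, &boxShape, inertia);
    btRigidBody body(info);
    world.addRigidBody(&body);

    // The box should fall; a static body (mass 0) added the same way should not.
    for (int i = 0; i < 60; ++i) {
        world.stepSimulation(1.f / 60.f);
        btTransform t;
        body.getMotionState()->getWorldTransform(t);
        printf("y = %f\n", t.getOrigin().getY());
    }

    world.removeRigidBody(&body);
    return 0;
}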
What you are describing is not normal Bullet behavior. Your understanding of the library is correct.
What you are most likely dealing with is either a buffer overrun or a dangling pointer. The code you have posted does not have an obvious one of either, so it would be coming from somewhere else in your codebase. You might be able to track that down using a well-placed memory breakpoint.
You "might" be dealing with a header/binary version inconsistency issue, but that's less likely as you would probably be seeing other major issues.
I just had the exact same type of behavior, with the debug drawer floating above the world. I solved it by passing Bullet Physics the projection-view matrix alone, without the model matrix, which Bullet already has and multiplies in itself:
glUseProgram(shaderID);
m_MVP = m_camera->getProjectionViewMatrix();
glUniformMatrix4fv(shaderIDMVP, 1, GL_FALSE, &m_MVP[0][0]);
if (m_dynamicWorld) m_dynamicWorld->debugDrawWorld();

Scalable Ambient Obscurance rendering issue

I am trying to implement this SAO algorithm.
I am getting the following result :
I can't figure out why the nose appears on top of the walls; it seems to be a z-buffer issue.
Here are my input values :
const float projScale = 100.0;
const float radius = 0.9;
const float bias = 0.0005;
const float intensityDivR6 = pow(radius, 6);
I am using the original shader without modifications, except that I disabled the use of the depth-buffer mipmaps.
My depth buffer (on a different scene, sorry):
It is probably an issue with the z-buffer linearization, or the depth not being mapped between -1 and 1.
Thank you Bruno, I finally figured out what the issues were.
The first was that I didn't transform my Z correctly: they use a specific pre-pass to make Z linear and put it between -1 and 1, and I was using an incompatible method to do it.
I also had to negate my near and far plane values directly in the projection matrix to compute some uniforms correctly.
Result :
I had a similar problem, with visually wrong occlusion linked to the near/far planes, so I decided to share what I did to fix it.
The problem I had is described in a previous comment: I was getting self-occlusion when the camera was close to an object or when the radius was really too big.
If you take a closer look at the conversion from a depth-buffer value to a camera-space value (the reconstructCSZ function from the G3D engine), you will see that replacing the depth by 0 gives you the near plane if you work with positive near/far values. This means that every time a tap lands outside the model, you get a z component equal to near, which produces wrong occlusion for fragments whose z is close to 0.
You basically have to discard every tap that lands on the near plane, so it is not taken into account when computing the full contribution.
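To make that concrete, here is a small sketch, in plain C++ mirroring the shader math, of reconstructing camera-space Z from a [0,1] depth-buffer value under a standard OpenGL perspective projection and rejecting taps that land on the near plane. The function names and the epsilon are mine, not the SAO shader's:

#include <cmath>
#include <cstdio>

// Convert a depth-buffer value d in [0, 1] to positive camera-space Z,
// assuming a standard OpenGL perspective projection with the given planes.
float linearizeDepth(float d, float zNear, float zFar)
{
    float zNdc = 2.0f * d - 1.0f; // back to NDC in [-1, 1]
    return 2.0f * zNear * zFar / (zFar + zNear - zNdc * (zFar - zNear));
}

// A tap whose depth reconstructs to the near plane almost certainly sampled
// outside the rendered geometry; skip it so it contributes no occlusion.
bool tapIsValid(float d, float zNear, float zFar)
{
    const float eps = 1e-4f;
    return linearizeDepth(d, zNear, zFar) > zNear + eps;
}

int main()
{
    printf("%f\n", linearizeDepth(0.5f, 0.1f, 100.0f)); // mid-range sample
    printf("%d\n", tapIsValid(0.0f, 0.1f, 100.0f));     // near-plane tap -> 0
    return 0;
}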

2d rotation on set of points causes skewing distortion

I'm writing an application in OpenGL (though I don't think this problem is related to that). I have some 2d point set data that I need to rotate. It later gets projected into 3d.
I apply my rotation using this formula:
x' = x cos f - y sin f
y' = y cos f + x sin f
Where 'f' is the angle. When I rotate the point set, the result is skewed. The severity of the effect varies with the angle.
It's hard to describe, so I have pictures:
The red things are some simple geometry. The 2D point sets are the vertices of the white polylines you see around them. The first picture shows the undistorted point sets, and the second picture shows them after rotation. It's not just skew that occurs with the rotation; sometimes displacement seems to occur as well.
The code itself is trivial:
double cosTheta = cos(2.4);
double sinTheta = sin(2.4);
CalcSimplePolyCentroid(listHullVx, xlate);
for (size_t j = 0; j < listHullVx.size(); j++) {
    // translate
    listHullVx[j] = listHullVx[j] - xlate;
    // rotate
    double xPrev = listHullVx[j].x;
    double yPrev = listHullVx[j].y;
    listHullVx[j].x = (xPrev * cosTheta) - (yPrev * sinTheta);
    listHullVx[j].y = (yPrev * cosTheta) + (xPrev * sinTheta);
    // translate
    listHullVx[j] = listHullVx[j] + xlate;
}
If I comment out the code under '// rotate' above, the output of the application is the first image, and adding it back in gives the second image. There's literally nothing else going on (afaik).
The data types used are all doubles, so I don't think it's a precision issue. Does anyone have any idea why the rotation would cause skewing like the pictures above show?
EDIT
filipe's comment below was correct. This probably has nothing to do with the rotation, and I hadn't provided enough information about the problem:
The geometry shown in the pictures represents buildings. They're generated from lon/lat map coordinates. In the point data I use to do the transform, I forgot to apply an actual projection to Cartesian coordinate space and just mapped x -> lon, y -> lat, and I think this is the reason I'm seeing the distortion. I'm going to request that this question be deleted since I don't think it'll be useful to anyone else.
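For completeness, here is a sketch of the kind of local projection that avoids this, using a simple equirectangular approximation around a reference point; the function and constant names are illustrative, not from my code:

#include <cmath>

// Very rough local lon/lat -> planar x/y projection (equirectangular
// approximation), good enough for small areas such as a single building.
void lonLatToLocalXY(double lonDeg, double latDeg,
                     double refLonDeg, double refLatDeg,
                     double &x, double &y)
{
    const double kPi = 3.14159265358979323846;
    const double kEarthRadiusM = 6371000.0;
    const double degToRad = kPi / 180.0;
    // Scale longitude by cos(latitude) so one unit of x and one unit of y
    // cover roughly the same ground distance; plain x = lon, y = lat skips
    // this, which is what produces the skew-like distortion after rotation.
    x = (lonDeg - refLonDeg) * degToRad * std::cos(refLatDeg * degToRad) * kEarthRadiusM;
    y = (latDeg - refLatDeg) * degToRad * kEarthRadiusM;
}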
Update:
As a result of your comments, it turned out that the bug is unlikely to be in the presented code.
One final hint: the standard transform formulas are only valid if the coordinate system is Cartesian;
on iOS you sometimes have an inverted y axis.

Kinect 3D to 2D bias

I am struggling with the interpretation of Kinect depth data.
In order to obtain real-world distances from the Kinect, I used the following formula:
if (i < 2047) {
    depthToMeterTable[i] = i * -0.0030711016 + 3.3309495161;
} else {
    depthToMeterTable[i] = 0;
}
This formula gives something pretty good as a distance estimator.
However, I obtain strange output when visualizing a 90° wall corner.
The following image shows two different pieces of information: the violet lines represent the wall as I SHOULD see it (a 90° corner), and the red dots represent the wall as seen by the Kinect. As you can see, the angle between the two planes is now larger.
http://img843.imageshack.us/img843/4061/kinectbias.jpg
Do you have any idea where this bias comes from and how to correct it?
Thank you for reading,
Al_th
I'm not familiar with that conversion formula (I'm also not sure how your depthToMeterTable gets filled, i.e. what formula is used there).
There's a built-in function in libfreenect for that though: freenect_camera_to_world
Before that utility function was added, I used Matt Fischer's conversion functions (RawDepthToMeters and DepthToWorld).
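For reference, the idea behind DepthToWorld is a plain pinhole back-projection. A rough sketch with placeholder intrinsics (not Fischer's actual code; real values come from the Kinect calibration) looks like this:

// Back-project a pixel (u, v) with metric depth into a 3D camera-space point
// using a simple pinhole model.
struct Vec3 { double x, y, z; };

Vec3 depthToWorld(int u, int v, double depthMeters)
{
    const double fx = 594.2, fy = 591.0;  // placeholder focal lengths (pixels)
    const double cx = 339.5, cy = 242.7;  // placeholder principal point (pixels)
    Vec3 p;
    p.x = (u - cx) * depthMeters / fx;
    p.y = (v - cy) * depthMeters / fy;
    p.z = depthMeters;
    return p;
}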
HTH