3D alternative to D3DXSPRITE for billboarding - C++

I am looking to billboard a sun image in my 3D world (DirectX 9).
Creating a D3DXSPRITE is great in some cases, but it is only a 2D object and cannot exist in my "world" as a 3D object. What is an alternative method for billboarding, similar to D3DXSPRITE? How can I implement it?
The only alternative I have currently found is this link: http://www.two-kings.de/tutorials/dxgraphics/dxgraphics17.html which does not seem to work.

Take the center of your object, vCenter. The object has a width and height of (w, h).
First you need your camera-to-billboard vector. This is calculated as vCamToCen = normalise( vCamera - vCenter ).
You then need an appropriate rough up vector. This can be extracted from the view matrix (handily described here, i.e. the second column). You can then calculate the side vector as vSide = vCamToCen x vUp, and then the REAL up vector as vUp = vCamToCen x vSide, where 'x' is the cross product.
You now have all the info you need to do your billboarding.
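For reference, a minimal sketch of that vector math with D3DX (assuming vCamera, vCenter and a rough vUp already extracted from the view matrix; the extra normalizations are just for safety):
D3DXVECTOR3 vCamToCen = vCamera - vCenter;
D3DXVec3Normalize(&vCamToCen, &vCamToCen);
D3DXVECTOR3 vSide, vRealUp;
D3DXVec3Cross(&vSide, &vCamToCen, &vUp);     // vSide = vCamToCen x (rough) vUp
D3DXVec3Normalize(&vSide, &vSide);
D3DXVec3Cross(&vRealUp, &vCamToCen, &vSide); // real up = vCamToCen x vSide
D3DXVec3Normalize(&vRealUp, &vRealUp);
vRealUp here plays the role of the vUp used in the quad construction below.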
You can then form your 4 verts as follows.
const float halfW = w / 2.0f;
const float halfH = h / 2.0f;
const D3DXVECTOR3 vHalfSide = vSide * halfW;
const D3DXVECTOR3 vHalfUp = vUp * halfH;
vertex[0].pos = vCenter - vHalfSide - vHalfUp;
vertex[1].pos = vCenter + vHalfSide - vHalfUp;
vertex[2].pos = vCenter + vHalfSide + vHalfUp;
vertex[3].pos = vCenter - vHalfSide + vHalfUp;
Build your two triangles out of those verts and pass them through your pipeline as normal (i.e. with your normal view and projection matrices).
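For example, a minimal sketch of submitting the quad in Direct3D 9 (device, sunTexture and a vertex format that also carries texture coordinates are assumptions, and you may need to flip the winding or disable culling depending on which way vSide and vUp end up facing):
// the corners were built as (-side,-up), (+side,-up), (+side,+up), (-side,+up),
// so a two-triangle fan (0,1,2) and (0,2,3) covers the quad
device->SetTexture(0, sunTexture);
device->DrawPrimitiveUP(D3DPT_TRIANGLEFAN, 2, vertex, sizeof(vertex[0]));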

Related

Minko - camera and rotation angle

Using Minko version 3.0, I am creating a camera as in the samples:
auto camera = scene::Node::create("camera")
->addComponent(Renderer::create(0x000000ff))
->addComponent(Transform::create(
//test
Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.f, 0.f, 3.f)) //ori
//Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.f, 0.f, 30.f))
))
->addComponent(PerspectiveCamera::create(canvas->aspectRatio()));
Then I load my obj using a similar method:
void RotateMyobj(const char *objName, float rotX, float rotY, float rotZ)
{
...
auto myObjModel = sceneMan->assets()->symbol(objName);
auto clonedobj = myObjModel->clone(CloneOption::DEEP);
...
clonedobj->component<Transform>()->matrix()->prependRotationX(rotX); //test - ok
clonedobj->component<Transform>()->matrix()->prependRotationY(rotY);
clonedobj->component<Transform>()->matrix()->prependRotationZ(rotZ);
...
//include adding child to rootnode
}
Calling it from the asset complete callback:
auto _ = sceneManager->assets()->loader()->complete()->connect([=](file::Loader::Ptr loader)
{
...
RotateMyobj(0,0,0);
...
});
The obj does load, however it is rotated "to the left" (compared to when it is loaded in Blender, for example).
If I call my method using RotateMyobj(0, 1.5, 0); the obj is displayed at the right angle, however I think this shouldn't be needed.
PS: tested with many obj files, all giving the same result.
PS2: commenting / turning off Matrix4x4::create()->lookAt leads to the same result.
PS3: shouldn't creating the cam with a position of 30 (Z axis) feel like looking at the ground from the top of a building?
Any idea if this comes from the camera creation code or the obj loading one?
Thanks.
Update:
I found the source of my problem: it is caused by calling this method inside the enterFrame callback: UpdateSceneOnMouse( camera );
void UpdateSceneOnMouse( std::shared_ptr<scene::Node> &cam )
{
    yaw += cameraRotationYSpeed;
    cameraRotationYSpeed *= 0.9f;
    pitch += cameraRotationXSpeed;
    cameraRotationXSpeed *= 0.9f;
    if (pitch > maxPitch)
    {
        pitch = maxPitch;
    }
    else if (pitch < minPitch)
    {
        pitch = minPitch;
    }
    cam->component<Transform>()->matrix()->lookAt(
        lookAt,
        Vector3::create(
            lookAt->x() + distance * cosf(yaw) * sinf(pitch),
            lookAt->y() + distance * cosf(pitch),
            lookAt->z() + distance * sinf(yaw) * sinf(pitch)
        )
    );
}
with the following initialization parameters:
float CallbackManager::yaw = 0.f;
float CallbackManager::pitch = (float)M_PI * 0.5f;
float CallbackManager::minPitch = 0.f + 1e-5;
float CallbackManager::maxPitch = (float)M_PI - 1e-5;
std::shared_ptr<Vector3> CallbackManager::lookAt = Vector3::create(0.f, .8f, 0.f);
float CallbackManager::distance = 10.f;
float CallbackManager::cameraRotationXSpeed = 0.f;
float CallbackManager::cameraRotationYSpeed = 0.f;
If I turn off the call (inspired by the clone example), the object loads more or less correctly (still a bit rotated to the left, but better than previously). I am no math guru; can anyone suggest better default parameters so the object / camera aren't rotated at startup?
Thanks.
Many 3D/CAD tools will export files (including OBJ) with a coordinate system different from the one used by Minko:
Minko 3 beta 2 uses a left-handed coordinate system
Minko 3 beta 3 uses the OpenGL coordinate system, which is right-handed.
For example, in a right-handed coordinate system x+ goes "right", y+ goes "up" and z+ goes "out of the screen".
Your Minko app likely loads 3D files using the ASSIMP library through the Minko/ASSIMP plugin. If the 3D file (format) provides information about the coordinate system that was used upon export, then ASSIMP will convert the coordinates to the right-handed system. But the OBJ file format standard does not include such information.
Solution 1: Try exporting your 3D models in a more versatile format such as Collada (*.dae).
Commenting/turning off Matrix4x4::create()->lookAt() does not affect the orientation of your 3D model because there is no reason for it to do so. Changing the position of the camera has no reason to affect the orientation of a mesh. If you want to change the up axis of the camera you have to use the 3rd parameter of the Matrix4x4::lookAt() method.
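For example, a minimal sketch only (it assumes the lookAt(target, eye, up) argument order used in the snippets above; check it against the Minko beta you are using):
camera->component<Transform>()->matrix()->lookAt(
    Vector3::zero(),                  // target
    Vector3::create(0.f, 0.f, 3.f),   // camera position
    Vector3::create(0.f, 1.f, 0.f)    // 3rd parameter: the up axis
);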
Solution 2: The best thing to do is to properly rotate your mesh using RotateMyobj(0, PI / 2, 0).
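A minimal sketch of that fix, reusing the prependRotationY call from the question (the exact angle and sign depend on the exporter's axis convention, so treat PI / 2 as a starting point rather than a definitive value):
auto clonedobj = sceneMan->assets()->symbol(objName)->clone(CloneOption::DEEP);
clonedobj->component<Transform>()->matrix()->prependRotationY(float(M_PI) * 0.5f); // quarter turn around Y
// ... then add clonedobj to the root node as before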
I also recommend using the dev branch (Minko beta 3 instead of beta 2), because there are some API changes (especially math-wise) and a tremendous performance boost.
Solution 3: Try adding the symbol without cloning it and see if it makes any difference.

OpenGL ray casting (picking): account for object's transform

For picking objects, I've implemented a ray casting algorithm similar to what's described here. After converting the mouse click to a ray (with origin and direction) the next task is to intersect this ray with all triangles in the scene to determine hit points for each mesh.
I have also implemented the triangle intersection test algorithm based on the one described here. My question is, how should we account for the objects' transforms when performing the intersection? Obviously, I don't want to apply the transformation matrix to all vertices and then do the intersection test (too slow).
EDIT:
Here is the UnProject implementation I'm using (I'm using OpenTK, by the way). I compared the results; they match what GluUnProject gives me:
private Vector3d UnProject(Vector3d screen)
{
int[] viewport = new int[4];
OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);
Vector4d pos = new Vector4d();
// Map x and y from window coordinates, map to range -1 to 1
pos.X = (screen.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
pos.Y = 1 - (screen.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
pos.Z = screen.Z * 2.0f - 1.0f;
pos.W = 1.0f;
Vector4d pos2 = Vector4d.Transform(pos, Matrix4d.Invert(GetModelViewMatrix() * GetProjectionMatrix()));
Vector3d pos_out = new Vector3d(pos2.X, pos2.Y, pos2.Z);
return pos_out / pos2.W;
}
Then I'm using this function to create a ray (with origin and direction):
private Ray ScreenPointToRay(Point mouseLocation)
{
Vector3d near = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 0));
Vector3d far = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 1));
Vector3d origin = near;
Vector3d direction = (far - near).Normalized();
return new Ray(origin, direction);
}
You can apply the reverse transformation of each object to the ray instead.
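A minimal sketch of that idea in C++ with GLM (the question uses OpenTK, but the math is the same; modelMatrix and a simple Ray struct are assumptions; the point is transformed with w = 1 and the direction with w = 0 so translation only affects the origin):
glm::mat4 invModel = glm::inverse(modelMatrix);
glm::vec3 localOrigin = glm::vec3(invModel * glm::vec4(ray.origin, 1.0f));
glm::vec3 localDir    = glm::normalize(glm::vec3(invModel * glm::vec4(ray.direction, 0.0f)));
// run the triangle test against the untransformed mesh with (localOrigin, localDir);
// note the hit distances are in object space if the model matrix contains scaling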
I don't know if this is the best/most efficient approach, but I recently implemented something similar like this:
In world space, the origin of the ray is the camera position. In order to get the direction of the ray, I assumed the user had clicked on the near plane of the camera and thus applied the 'reverse transformation' (from screen space to world space) to the screen space position ( mouseClick.x, viewportHeight - mouseClick.y, 0 ), and then subtracted the origin of the ray, i.e. the camera position, from the now transformed mouse click position.
In my case, there was no object-specific transformation, meaning I was done once I had my ray in world space. However, transforming origin & direction with the inverse model matrix would have been easy enough after that.
You mentioned that you tried to apply the reverse transformation, but that it didn't work; maybe there's a bug in there? I used GLM (i.e. glm::unProject) for this.
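For what it's worth, a sketch of building the world-space ray that way with glm::unProject (from glm/gtc/matrix_transform.hpp; viewMatrix, projectionMatrix, cameraPosition, viewportWidth/Height and mouseClick are assumptions; passing only the view matrix as the 'model' argument yields world-space coordinates):
glm::vec4 viewport(0.0f, 0.0f, (float)viewportWidth, (float)viewportHeight);
glm::vec3 winPos((float)mouseClick.x, (float)(viewportHeight - mouseClick.y), 0.0f); // point on the near plane, y flipped
glm::vec3 worldPos = glm::unProject(winPos, viewMatrix, projectionMatrix, viewport);
glm::vec3 origin = cameraPosition;
glm::vec3 direction = glm::normalize(worldPos - cameraPosition);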

Why do I have to divide by Z?

I needed to implement 'choosing an object' in a 3D environment. So instead of going with a robust, accurate approach such as ray casting, I decided to take the easy way out. First, I transform the object's world position into screen coordinates:
glm::mat4 modelView, projection, accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
glm::vec4 transformed = accum * glm::vec4(objectLocation, 1);
This is followed by some trivial code to transform from the OpenGL coordinate system to normal window coordinates, and a simple distance-from-the-mouse check. BUT that doesn't quite work. In order to translate from world space to screen space, I need one more calculation added on to the end of the function shown above:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
I don't understand why I have to do this. I was under the impression that, once you multiply your vertex by the accumulated modelViewProjection matrix, you have your screen coordinates. But I have to divide by Z to get it to work properly. In my OpenGL 3.3 shaders, I never have to divide by Z. Why is this?
EDIT: The code to transform from the OpenGL coordinate system to screen coordinates is this:
int screenX = (int)((trans.x + 1.f)*640.f); //640 = 1280/2
int screenY = (int)((-trans.y + 1.f)*360.f); //360 = 720/2
And then I test if the mouse is near that point by doing:
float length = glm::distance(glm::vec2(screenX, screenY), glm::vec2(mouseX, mouseY));
if(length < 50) {//you can guess the rest
EDIT #2
This method is called upon a mouse click event:
glm::mat4 modelView;
glm::mat4 projection;
glm::mat4 accum;
glGetFloatv(GL_PROJECTION_MATRIX, (GLfloat*)&projection);
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*)&modelView);
accum = projection * modelView;
float nearestDistance = 1000.f;
gameObject* nearest = NULL;
for(uint i = 0; i < objects.size(); i++) {
gameObject* o = objects[i];
o->selected = false;
glm::vec4 trans = accum * glm::vec4(o->location,1);
trans.x /= trans.z;
trans.y /= trans.z;
int clipX = (int)((trans.x+1.f)*640.f);
int clipY = (int)((-trans.y+1.f)*360.f);
float length = glm::distance(glm::vec2(clipX,clipY), glm::vec2(mouseX, mouseY));
if(length<50) {
nearestDistance = trans.z;
nearest = o;
}
}
if(nearest) {
nearest->selected = true;
}
mouseRightPressed = true;
The code as a whole is incomplete, but the parts relevant to my question works fine. The 'objects' vector contains only one element for my tests, so the loop doesn't get in the way at all.
I've figured it out. As Mr David Lively pointed out,
Typically in this case you'd divide by .w instead of .z to get something useful, though.
My .w values were very close to my .z values, so in my code I changed the statement:
transformed.x /= transformed.z;
transformed.y /= transformed.z;
to:
transformed.x /= transformed.w;
transformed.y /= transformed.w;
And it still worked just as before.
https://stackoverflow.com/a/10354368/2159051 explains that the division by w will be done later in the pipeline. Obviously, because my code simply multiplies the matrices together, there is no 'later pipeline'. I was just getting lucky in a sense: because my .z value was so close to my .w value, there was the illusion that it was working.
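Putting it together, here is a sketch of the corrected world-to-pixel conversion (the 640/360 half-viewport values are taken from the snippets above; the important part is that the divide uses .w, not .z):
glm::vec4 clip = projection * modelView * glm::vec4(objectLocation, 1.f);
glm::vec3 ndc = glm::vec3(clip) / clip.w;     // perspective divide: clip space -> normalized device coordinates
int screenX = (int)(( ndc.x + 1.f) * 640.f);  // 640 = 1280/2
int screenY = (int)((-ndc.y + 1.f) * 360.f);  // 360 = 720/2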
The divide-by-Z step effectively applies the perspective transformation. Without it, you'd have an iso (orthographic-looking) view. Imagine two view-space vertices: A(-1,0,1) and B(-1,0,100).
Without the divide-by-Z step, the screen coordinates are equal: (-1,0).
With the divide-by-Z, they are different: A(-1,0) and B(-0.01,0). So, things farther away from the view-space origin (the camera) are smaller in screen space than things that are closer, i.e. perspective.
That said: if your projection matrix (and matrix multiplication code) is correct, this should already be happening, as the projection matrix will contain 1/Z scaling components which do this. So, some questions:
Are you really using the output of a projection transform, or just the view transform?
Are you doing this in a pixel/fragment shader? Screen coordinates there are normalized (-1,-1) to (+1,+1), not pixel coordinates, with the origin at the middle of the viewport. Typically in this case you'd divide by .w instead of .z to get something useful, though.
If you're doing this on the CPU, how are you getting this information back to the host?
I guess it is because you are going from 3 dimensions to 2 dimensions, so you are projecting the 3-dimensional world onto 2-dimensional coordinates.
A point P = (X, Y, Z) in 3D becomes q = (x, y) in 2D, where x = X/Z and y = Y/Z.
So a circle in 3D will not be a circle in 2D.
You can check this video out:
https://www.youtube.com/watch?v=fVJeJMWZcq8
I hope I understand your question correctly.

Get 3D model coordinate with 2D screen coordinates gluUnproject

I am trying to get the 3D coordinates of my OpenGL model. I found this code in the forum, but I don't understand how the collision is detected.
-(void)receivePoint:(CGPoint)loke
{
GLfloat projectionF[16];
GLfloat modelViewF[16];
GLint viewportI[4];
glGetFloatv(GL_MODELVIEW_MATRIX, modelViewF);
glGetFloatv(GL_PROJECTION_MATRIX, projectionF);
glGetIntegerv(GL_VIEWPORT, viewportI);
loke.y = (float) viewportI[3] - loke.y;
float nearPlanex, nearPlaney, nearPlanez, farPlanex, farPlaney, farPlanez;
gluUnProject(loke.x, loke.y, 0, modelViewF, projectionF, viewportI, &nearPlanex, &nearPlaney, &nearPlanez);
gluUnProject(loke.x, loke.y, 1, modelViewF, projectionF, viewportI, &farPlanex, &farPlaney, &farPlanez);
float rayx = farPlanex - nearPlanex;
float rayy = farPlaney - nearPlaney;
float rayz = farPlanez - nearPlanez;
float rayLength = sqrtf((rayx*rayx)+(rayy*rayy)+(rayz*rayz));
//normalizing rayVector
rayx /= rayLength;
rayy /= rayLength;
rayz /= rayLength;
float collisionPointx, collisionPointy, collisionPointz;
for (int i = 0; i < 50; i++)
{
collisionPointx = rayx * rayLength/i*50;
collisionPointy = rayy * rayLength/i*50;
collisionPointz = rayz * rayLength/i*50;
}
}
In my opinion there is a break condition missing. When do I find the collisionPoint?
Another question is:
How do I manipulate the texture at this collision point? I think that I need the corresponding vertex!?
Best regards
That code takes the ray from your near clipping plane to your far clipping plane at the position of your loke, then partitions it into 50 steps and interpolates all the possible locations of your point in 3D along this ray. At the exit of the loop, in the original code you posted, collisionPointx, y and z hold the value of the farthest point. There is no "collision" test in that code. You actually need to test your 3D coordinates against a 3D object you want to collide with.
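As a rough sketch of what such a test could look like (the sphere center and radius variables are assumptions; it reuses the near-plane origin and normalized ray direction from the question and adds the missing break):
bool hit = false;
float hitX = 0.0f, hitY = 0.0f, hitZ = 0.0f;
for (int i = 0; i <= 50; i++)
{
    float t = rayLength * (float)i / 50.0f;                  // distance along the ray
    float px = nearPlanex + rayx * t;
    float py = nearPlaney + rayy * t;
    float pz = nearPlanez + rayz * t;
    float dx = px - sphereCenterX, dy = py - sphereCenterY, dz = pz - sphereCenterZ;
    if (dx*dx + dy*dy + dz*dz <= sphereRadius*sphereRadius)  // sample lies inside the sphere
    {
        hit = true;
        hitX = px; hitY = py; hitZ = pz;                     // first sample inside = collision point
        break;
    }
}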

OpenGL picking - Ray/sphere intersection wrong

I have a problem with my ray picking code. I am using this code for the picking calculation:
/*-----------------------------------------------------------
Function: GetViewportSystem
Returns:
viewportCoordSystem
Get viewport coordinate system (only for reading)
Forward ray goes through origin
-------------------------------------------------------------*/
ViewportCoordSystem Camera::GetViewportSystem() const
{
ViewportCoordSystem viewportCoord;
viewportCoord.w = this->cameraPos;
viewportCoord.w -= this->lookAt;
viewportCoord.w.Normalize();
viewportCoord.u = MyMath::Vector3::Cross(MyMath::Vector3::UnitY(), viewportCoord.w);
viewportCoord.v = MyMath::Vector3::Cross(viewportCoord.w, viewportCoord.u);
float d = (this->viewport.Height / 2.0f) * (1.0f / tanf(this->viewport.fov / 2.0f));
viewportCoord.origin = this->cameraPos;
viewportCoord.origin -= d * viewportCoord.w;
return viewportCoord;
}
/*-----------------------------------------------------------
Function: MapViewport2Dto3D
Parameters:
[in] viewportSystem - camera's viewport coordinate system
[in] point - 2D point on image
Returns:
3D mapped point in space
Map 2D image point to 3D space
Info about mapping 2D to 3D: http://meatfighter.com/juggler/
-------------------------------------------------------------*/
MyMath::Vector3 Camera::MapViewport2Dto3D(const ViewportCoordSystem & viewportSystem, const MyMath::Vector2 & point) const
{
MyMath::Vector3 res = viewportSystem.origin;
res += (point.X - this->viewport.Width * 0.5f) * viewportSystem.u;
res += (this->viewport.Height * 0.5f - point.Y) * viewportSystem.v;
return res;
}
Picking itself
ViewportCoordSystem vpSystem = this->camera->GetViewportSystem();
MyMath::Vector3 pos = this->camera->MapViewport2Dto3D(vpSystem, MyMath::Vector2(mouseX, mouseY));
this->ray.dir = pos - this->camera->GetPosition();
this->ray.dir.Normalize();
this->ray.origin = this->camera->GetPosition();
With this ray, I perform a ray-sphere intersection test.
bool BoundingSphere::RayIntersection(const MyMath::Ray & ray) const
{
MyMath::Vector3 Q = this->sphereCenter - ray.origin;
double c = Q.LengthSquared();
double v = MyMath::Vector3::Dot(Q, ray.dir);
double d = this->sphereRadius * this->sphereRadius - (c - v * v);
if (d < 0.0) return false;
return true;
}
The problem is that my code works incorrectly. If I visualize my spheres and click inside them, I get a correct answer only for half of the sphere. When I move the camera, it's all messed up and picking reacts outside the spheres.
My world is not transformed (all world matrices are identity). Only the camera is moving. I calculate the mouse position within the OpenGL window correctly (the upper left corner is [0, 0] and it goes to [width, height]).
PS: I am using this code successfully in DirectX for ray casting / ray tracing, and I can't see anything wrong with it. My OpenGL renderer uses a left-handed system (it's not natural for OpenGL, but I want it that way).
Edit:
After visualizing the ray, the problem appears when I move the camera left / right. The center of the ray is not coherent with the mouse position.
OK.. found the problem... for anybody else who might be interested:
these two lines are incorrect
viewportCoord.u = MyMath::Vector3::Cross(MyMath::Vector3::UnitY(), viewportCoord.w);
viewportCoord.v = MyMath::Vector3::Cross(viewportCoord.w, viewportCoord.u);
Working solution is
viewportCoord.u = MyMath::Vector3::Cross(viewportCoord.w, MyMath::Vector3::UnitY());
viewportCoord.u.Normalize();
viewportCoord.v = MyMath::Vector3::Cross(viewportCoord.u, viewportCoord.w);
viewportCoord.v.Normalize();