Ogre: simulating a perspective view by scaling an object - C++

For my Ogre project in C++, I want to create an animation of an object using Ogre's SimpleSpline.
Everything works perfectly: the object is animated correctly along the sequence of points in the path.
  
Since I need a scene with an orthographic view (so no perspective), I would still like to simulate a depth effect by "playing" with the scale of the object.
So, for each frame, I update the position and scale of the object this way:
const Vector3 position = this->getPoint(index_, time_); // current point on the spline
const float scale = 1 / (1 + position.z);               // scale inversely with depth (z)
node_->setScale(scale, scale, scale);
node_->setPosition(position);
It works quite well. Is there a way to make the depth effect more realistic?

You can try using a DeflectorPlane affector in the script of your particle system.
Here you can find the documentation and usage.
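For illustration only (not part of the original answer, and assuming Ogre 1.x with a particle template already defined, here called "MyParticles"), the same affector can also be attached and configured from C++ with the parameters a particle script would use:
// Hypothetical setup: sceneMgr is your Ogre::SceneManager*
Ogre::ParticleSystem* ps = sceneMgr->createParticleSystem("Smoke", "MyParticles");
// Attach a DeflectorPlane affector and set its script parameters as strings
Ogre::ParticleAffector* deflector = ps->addAffector("DeflectorPlane");
deflector->setParameter("plane_point", "0 0 0");   // a point on the deflecting plane
deflector->setParameter("plane_normal", "0 1 0");  // plane normal (here: facing up)
deflector->setParameter("bounce", "1.0");          // 1.0 = full bounce, 0 = no bounce
sceneMgr->getRootSceneNode()->createChildSceneNode()->attachObject(ps);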

Related

Getting 3D world coordinate from (x,y) pixel coordinates

I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some processing, I get a point of interest in the RGB image in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortions given by the CameraInfo msg. I have tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases, the coordinates are not correct (I have placed a small sphere at the coordinates given by the program, to check whether they are correct or not).
Any help would be very much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coordinates of the point by doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x,y,z coordinates I get for the point do not seem to be correct.
The camera has a pitch angle of pi/2, so it points at the ground.
I am clearly missing something.
I assume you've seen the Gazebo examples for the Kinect (brief, full). You can get, as topics, the raw image, the raw depth, and the computed point cloud (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own processing on the image_raw of the rgb and depth frames (e.g. running ML over the rgb frame and finding the corresponding depth point via the camera_infos), the pointcloud topic should be sufficient - it's the same as the pcl point cloud in C++, if you include the right headers.
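For example, here is a minimal sketch (my own, not from the original answer) of a ROS1 node that converts that topic into a pcl cloud and indexes it by pixel; the topic name matches the config above, while the node name and pixel coordinates are placeholder assumptions:
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
    // Convert the ROS message into a pcl cloud; the Kinect cloud is organized (width x height)
    pcl::PointCloud<pcl::PointXYZ> cloud;
    pcl::fromROSMsg(*msg, cloud);

    // Index with (column, row) pixel coordinates of your point of interest
    const int xInPixel = 320, yInPixel = 240;  // placeholder values
    const pcl::PointXYZ p = cloud.at(xInPixel, yInPixel);
    ROS_INFO("Point in camera frame: %f %f %f", p.x, p.y, p.z);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "cloud_lookup");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
    ros::spin();
    return 0;
}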
Edit (in response):
There's a magical thing in ROS called tf/tf2. If you look at your point cloud's msg.header.frame_id, it says something like "camera", indicating the data is in the camera frame. tf, like the messaging system in ROS, happens in the background: it listens for transformations from one frame of reference to another, and you can then transform or query the data in any of those frames. For example, if the camera is mounted at a rotation relative to your robot, you can specify a static transform in your launch file. It seems like you're trying to do the transformation yourself, but you can make tf do it for you; this lets you easily figure out where points are in the world/map frame, versus the robot/base_link frame, or the actuator/camera/etc. frame.
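A minimal sketch of letting tf2 do that transform (my addition; it assumes ROS1, a "camera" source frame and a "world" target frame - adjust the frame names to whatever your setup actually publishes):
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "point_to_world");
    ros::NodeHandle nh;

    // The listener fills the buffer with transforms in the background
    tf2_ros::Buffer tfBuffer;
    tf2_ros::TransformListener tfListener(tfBuffer);

    geometry_msgs::PointStamped inCamera;
    inCamera.header.frame_id = "camera";   // use your cloud's msg.header.frame_id here
    inCamera.header.stamp = ros::Time(0);  // 0 = use the latest available transform
    inCamera.point.x = 1.0;                // placeholder: the point you got from the cloud
    inCamera.point.y = 0.0;
    inCamera.point.z = 2.0;

    // Transform into the target frame, waiting up to 1 second for the transform
    geometry_msgs::PointStamped inWorld =
        tfBuffer.transform(inCamera, "world", ros::Duration(1.0));
    ROS_INFO("Point in world frame: %f %f %f",
             inWorld.point.x, inWorld.point.y, inWorld.point.z);
    return 0;
}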
I would also look at these ROS wiki questions, which demonstrate a few different ways to do this, depending on what you want: ex1, ex2, ex3

Using a MotionController Component in Unreal with C++ instead of Blueprint

After iterating through an array of FMotionControllerSource of an OculusInputDevice IMotionController, I found the connected Oculus right and left Touch controllers based on their ETrackingStatus. With the left and right controllers, I can get the location and rotation using the IMotionController API, which returns the calibration-space orientation of the requested controller's hand.
Here's a reference to the IMotionController API:
https://docs.unrealengine.com/en-US/API/Runtime/HeadMountedDisplay/IMotionController/index.html
I want to apply the location/rotation to a PosableMesh, so that the mesh is shown where the Oculus controller is in reality. Currently, with the code below, the 3D model is displayed below the camera, so the mapping scale is off. I think WorldToMetersScale might be off. When I use a small number, the controller doesn't move the 3D model much, but this might be messing it up.
FVector position;
FRotator rotation;
int id = tracker.deviceIndex;
FName srcName = tracker.motionControllerSource;
bool success = tracker.motionController->GetControllerOrientationAndPosition(id, srcName, rotation, position, 250.0f);
if (success)
{
    poseMesh->SetWorldLocationAndRotation(position, rotation);
}
Adding the camera position to the controller position seemed to fix the issue:
// get camera reference during BeginPlay:
camManager = GetWorld()->GetFirstPlayerController()->PlayerCameraManager;
// TickComponent
poseMesh->SetWorldLocationAndRotation(camManager->GetCameraLocation() + position, rotation);

Transforming Bounding Boxes referenced to an object?

I'm trying to implement AABBs/OOBBs with MathGeoLib, because of how easy it makes operating with bounding boxes (and because I wanted to test some things with that library).
The problem is that the engine's object transformations are based on glm, since we started with glm (and they work properly), but when it comes to transforming the OOBBs according to an object, it doesn't work very well.
What I basically do is pass the game object's translation, orientation and scale to a function (I tried passing a global matrix, but it doesn't work; it seems to 'add' the transformation instead of setting it, and I can't access the OOBB's matrix). That function does the following:
glm::vec3 pos = passedPosition - OOBBPreviousPos;
glm::mat4 Transformation = glm::translate(glm::mat4(1.0f), pos) *
                           glm::mat4_cast(passedRot) *
                           glm::scale(glm::mat4(1.0f), passedScale);
glm::mat4 resMat = glm::transpose(Transformation);
math::float4x4 mat = math::float4x4::identity;
mat.Set(glm::value_ptr(resMat));
This basically transposes the glm matrix (I have seen that that's the way of 'translating' them), passes it as a float*, and then constructs the MathGeoLib matrix from it. I have debugged it and the values seem right for the object, so the next thing I do is actually transform the OOBB and then enclose the AABB around it, like this:
m_OBB.Transform(mat);
m_AABB.SetNegativeInfinity(); //Sets AABB to "null"
m_AABB.Enclose(m_OBB);
The final behaviour is pretty strange; believe me when I say this is the closest I've been to having it right. I've spent days testing different things and nothing works better (passing global/local matrices directly, trying different ways of passing/constructing the transformation data, checking if the glm-to-MathGeoLib conversion is correct...). It rotates, but not around its own axis, and the scaling goes crazy (although translation works). Its current behaviour can be seen here: https://gfycat.com/quarrelsomefineduck (blue cubes are AABBs, green ones are OOBBs).
Am I doing something wrong with the math or the data transfer?
I kept looking into that, but then a friend pointed me in another direction, so I finally solved it (or rather, worked around it) by storing the object's initial AABB and passing the game object's global matrix to the mentioned function. Then, inside the function, I used another MathGeoLib function to transform the OOBB.
That function finally looks like:
glm::mat4 resMat = glm::transpose(GlobalMatrixPassed);
math::float4x4 mat = math::float4x4::identity;
mat.Set(glm::value_ptr(resMat)); // "Translate" the passed glm matrix into a MathGeoLib one
m_OOBB.SetFrom(m_InitialAABB);   // Set the OOBB from the initial AABB
m_OOBB.Transform(mat);           // Transform it
m_AABB.SetFrom(m_OOBB);          // Set the AABB from the transformed OOBB

Unreal Engine: Instanced Static Mesh doesn't rotate on instantiation

I'm currently doing real-time growing trees, and I'm using an Instanced Static Mesh component for the foliage, since every leaf uses the same mesh. When I add a leaf instance to my component, I give it a random rotation. But for some reason, this rotation is not set: all my leaves have a ZeroRotator. The scale is set, the location too, but not the rotation.
Here is the code :
// Instanced static mesh component instantiation, as a component of the tree
foliage = NewObject<UInstancedStaticMeshComponent>(this);
foliage->SetWorldLocation(GetActorLocation());
foliage->RegisterComponent();
foliage->SetStaticMesh(data->leaves[treeType]);
foliage->SetFlags(RF_Transactional);
this->AddInstanceComponent(foliage);

// Adding an instance of foliage
const FTransform leafTrans = FTransform(
    FMath::VRandCone(normals[branches[i].segments[j].firstVertice + 2], 2.0f).Rotation(),
    vertices[branches[i].segments[j].firstVertice + 2], FVector::ZeroVector);
foliage->AddInstance(leafTrans);
I recently changed the Instanced Static Mesh component: at first I had put it on a child actor, and the rotation worked. But I had to remove that because of another issue.
I'm sure it's a small thing I'm missing, but I'm losing too much time searching on the internet, and there is not much documentation on this subject...
Thanks :)
Ok, the problem was the scale. For some reason, if it is set to zero, it also sets the rotation to zero...
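In other words (a sketch I'm adding, not from the original answer), passing a non-zero scale to the FTransform keeps the random rotation intact:
// Same instance creation as above, but with unit scale instead of FVector::ZeroVector
const FTransform leafTrans = FTransform(
    FMath::VRandCone(normals[branches[i].segments[j].firstVertice + 2], 2.0f).Rotation(),
    vertices[branches[i].segments[j].firstVertice + 2],
    FVector::OneVector); // non-zero scale, so the rotation is preserved
foliage->AddInstance(leafTrans);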

How to rotate object using the 3D graphics pipeline ( Direct3D/GL )?

I have some problems trying to animate the rotation of mesh objects.
If the rotation is performed only once, everything is fine: the meshes are rotated normally and the final image from the WebGL buffer looks fine.
http://s22.postimg.org/nmzvt9zzl/311.png
But if the rotation is applied in a loop (re-applied on each new frame), then the meshes start to look very weird; see the next screenshot:
http://s22.postimg.org/j2dpecga9/312.png
I won't provide the code here, because the issue comes from incorrect 3D graphics handling.
I think some OpenGL/Direct3D developers may be able to advise how to fix it, because this question relates to 3D programming in general and to some specific GL or D3D function/method. Also, I think rotation works the same way in both OpenGL and Direct3D, because of linear algebra and affine transformations.
If you are really interested in what I'm using, the answer is WebGL.
Let me describe how I rotate an object.
The simple rotation is made using quaternions. Every mesh object I define has a quaternion property.
When the object is rotated, the rotate() method does the following:
// Some kind of pseudo-code
function rotateMesh( vector, angle ) {
    var tempQuaternion = math.convertRotationToQuaternion( vector, angle );
    this.quaternion = math.multiplyQuaternions( this.quaternion, tempQuaternion );
}
I use the following piece of code in the Renderer class to handle the mesh translation and rotation:
// Some kind of pseudo-code
// for each mesh added to the scene
modelViewMatrix = new IdentityMatrix()
translateMatrixByVector( modelViewMatrix, mesh.position )
modelViewMatrix.multiplyByMatrix( mesh.quaternion.toMatrix() )
So... I want to ask: is the logic above correct? If it is, I will provide the source of the math functions used for quaternions, rotations, etc.
If the logic above is incorrect, then I think it makes no sense to provide anything else, because the main logic needs to be fixed first.
Quaternion multiplication is not commutative, i.e., if A and B are quaternions then in general A * B != B * A. If you want to rotate quaternion A by quaternion B, you need to do A = B * A, so this:
this.quaternion = math.multiplyQuaternions( this.quaternion, tempQuaternion );
should have its arguments reversed.
In addition, as mentioned by @ratchet-freak in the comments, you should make sure your quaternions are always normalized; otherwise, transformations other than rotation may happen.
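A minimal C++ sketch of the corrected per-frame update, using glm as a stand-in quaternion library (my choice for illustration; any library with the same conventions works):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct Mesh {
    // Current orientation, starting as the identity rotation (w, x, y, z)
    glm::quat quaternion = glm::quat(1.0f, 0.0f, 0.0f, 0.0f);

    void rotate(const glm::vec3& axis, float angleRadians) {
        glm::quat delta = glm::angleAxis(angleRadians, glm::normalize(axis));
        // Apply the new rotation on top of the existing one: delta * current, not current * delta.
        // Re-normalize so repeated per-frame updates don't drift into a non-unit quaternion.
        quaternion = glm::normalize(delta * quaternion);
    }
};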