After iterating through an array of FMotionControllerSource of an OculusInputDevice IMotionController, I found the connected Oculus right and left Touch controllers based on their ETrackingStatus. With the left and right controllers, I can get the location and rotation using the IMotionController API, which returns the calibration-space orientation of the requested controller's hand.
Here's a reference to the IMotionController API:
https://docs.unrealengine.com/en-US/API/Runtime/HeadMountedDisplay/IMotionController/index.html
I want to apply the location/rotation to a PosableMesh, so that the mesh is shown where the Oculus controller is in reality. Currently, with the code below, the 3D model is displayed below the camera, so the mapping scale is off. I think WorldToMetersScale might be wrong: when I pass a small number the controller barely moves the 3D model, so this might be what is throwing it off.
FVector position;
FRotator rotation;
int id = tracker.deviceIndex;
FName srcName = tracker.motionControllerSource;
// The last argument is the WorldToMetersScale passed to the motion controller query.
bool success = tracker.motionController->GetControllerOrientationAndPosition(id, srcName, rotation, position, 250.0f);
if (success)
{
    poseMesh->SetWorldLocationAndRotation(position, rotation);
}
Adding the camera position to the controller position seemed to fix the issue:
// get camera reference during BeginPlay:
camManager = GetWorld()->GetFirstPlayerController()->PlayerCameraManager;
// TickComponent
poseMesh->SetWorldLocationAndRotation(camManager->GetCameraLocation() + position, rotation);
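For reference, rather than hard-coding 250.0f, the scale the world actually uses can be queried from AWorldSettings. A minimal sketch, assuming this runs in a component with a valid GetWorld() and otherwise mirrors the code above:
// Query the engine's world-to-meters scale instead of hard-coding 250.0f.
const float worldToMeters = GetWorld()->GetWorldSettings()->WorldToMeters;

FVector position;
FRotator rotation;
bool success = tracker.motionController->GetControllerOrientationAndPosition(
    tracker.deviceIndex, tracker.motionControllerSource, rotation, position, worldToMeters);
if (success)
{
    // The pose is reported relative to the tracking origin, so offset it
    // into world space the same way as above (adding the camera location).
    poseMesh->SetWorldLocationAndRotation(camManager->GetCameraLocation() + position, rotation);
}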
1. Goal
My colleague and I have been trying to render rotated ellipses in Qt. The typical solution approach, as we understand it, consists of shifting the center of the ellipse to the origin of the coordinate system, doing the rotation there, and shifting back:
http://qt-project.org/doc/qt-4.8/qml-rotation.html
2. Sample Code
Based on the solution outlined in the link above, we came up with the following sample code:
// Constructors and destructors
RIEllipse(QRect rect, RIShape* parent, bool isFilled = false)
    : RIShape(parent, isFilled), _rect(rect), _angle(30)
{}
// Main functionality
virtual Status draw(QPainter& painter)
{
    const QPen& prevPen = painter.pen();
    painter.setPen(getContColor());
    const QBrush& prevBrush = painter.brush();
    painter.setBrush(getFillBrush(Qt::SolidPattern));
    // Get rectangle center
    QPoint center = _rect.center();
    // Center the ellipse at the origin (0,0)
    painter.translate(-center.x(), -center.y());
    // Rotate the ellipse around its center
    painter.rotate(_angle);
    // Move the rotated ellipse back to its initial location
    painter.translate(center.x(), center.y());
    // Draw the ellipse rotated around its center
    painter.drawEllipse(_rect);
    painter.setBrush(prevBrush);
    painter.setPen(prevPen);
    return IL_SUCCESS;
}
As you can see, we have hard coded the rotation angle to 30 degrees in this test sample.
3. Observations
The ellipses come out at wrong positions, oftentimes outside the canvas area.
4. Question
What is wrong about the sample code above?
Best regards,
Baldur
P.S. Thanks in advance for any constructive response.
P.P.S. Prior to posting this message, we searched around quite a bit on stackoverflow.com.
Qt image move/rotation seemed to reflect a solution approach similar to the link above.
In painter.translate(center.x(), center.y()); you shift your object by the amount of its current coordinate, which results in (2*center.x(), 2*center.y()). You may need:
painter.translate(- center.x(), - center.y());
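For reference, a common QPainter pattern for rotating a shape about its centre is to translate to the centre, rotate, then translate back; a minimal sketch (the save()/restore() pair just keeps the transform from leaking into later drawing):
painter.save();
painter.translate(center);    // move the origin to the ellipse centre
painter.rotate(_angle);       // rotate about that centre
painter.translate(-center);   // move the origin back
painter.drawEllipse(_rect);
painter.restore();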
The theory of moving an object back to its origin, rotating, and then restoring the object's position is correct. However, the code you've presented is not translating and rotating the object at all, but translating and rotating the painter. In the example question that you've referred to, they want to rotate the whole image about an object, which is why they move the painter to the object's centre before rotating.
The easiest way to do rotations about a QGraphicsItem is to initially define the item with its centre at the centre of the object, rather than in its top-left corner. That way, any rotation will automatically be about the object's centre, without any need to translate the object.
To do this, you'd define the item with a bounding rect whose (x, y, width, height) is (-width/2, -height/2, width, height).
Alternatively, assuming your item is inherited from QGraphicsItem or QGraphicsObject, you can use the function setTransformOriginPoint before any rotation.
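A rough sketch of both suggestions, with made-up names (MyItem, item, width and height are placeholders for your own types and members):
// Option 1: define the shape centred on the item's local origin,
// so any rotation is automatically about the centre.
QRectF MyItem::boundingRect() const
{
    return QRectF(-width / 2.0, -height / 2.0, width, height);
}

// Option 2: keep the geometry as-is and just move the transform origin.
item->setTransformOriginPoint(item->boundingRect().center());
item->setRotation(30);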
I'm trying to apply an osg::AnimationPath to the camera of my osgViewer::Viewer instance by using an osgGA::AnimationPathManipulator. My problem is that the AnimationPathManipulator only applies the change of rotation to the camera, not the change of position, so the camera rotates but doesn't translate.
I am using OpenSceneGraph Library 3.0.1.
For a better insight, this is my current code:
void CameraFlyTest::animateCamera(osgViewer::Viewer* viewer) {
    osg::AnimationPath* path = new osg::AnimationPath();
    path->setLoopMode(osg::AnimationPath::SWING);

    osg::AnimationPath::ControlPoint cp1;
    cp1.setPosition(osg::Vec3d(-200,-450,60));
    cp1.setRotation(osg::Quat(M_PI_2, osg::Vec3(1,0,0)));

    osg::AnimationPath::ControlPoint cp2;
    cp2.setPosition(osg::Vec3d(2000,-500,60));
    cp2.setRotation(osg::Quat(M_PI_4, osg::Vec3(1,0,0)));

    path->insert(1.0f,cp1);
    path->insert(3.0f,cp2);

    osgGA::AnimationPathManipulator* apm = new osgGA::AnimationPathManipulator(path);
    viewer->setCameraManipulator(apm);
}
The problem was that another active camera manipulator was also updating the position of the camera. The osgGA::AnimationPathManipulator itself works as it should.
I haven't been able to find this after scouring the forums. I would like to implement something like this: the main character always moves in the direction it's facing. When the player touches the screen, the character turns to face that touch location, which should cause the body to move in a different direction.
I can get the character to face a touch location as follows:
CGPoint diff = ccpSub(location, self.position);
CGFloat targetAngle = atan2f(diff.y, diff.x);
self.body->a = targetAngle;
I want something along these lines: get the current angle the character is facing, turn that angle into a unit vector, multiply that unit vector by a max velocity, and apply it to the character. This should (theoretically) move the character in the direction it is facing at a constant velocity, right?
This seems to give me what I want:
cpVect rotatedVel = cpvmult(ccpForAngle(self.body->a), MAX_VELOCITY);
self.body->v = cpvlerpconst(self.body->v, rotatedVel, ACCELERATION * dt);
Now all I need is a way to rotate the character's direction slowly over time. How might I do that?
Sounds like you want to do something like this from Chipmunk's Tank demo:
// turn the control body based on the angle relative to the actual body
cpVect mouseDelta = cpvsub(ChipmunkDemoMouse, cpBodyGetPos(tankBody));
cpFloat turn = cpvtoangle(cpvunrotate(cpBodyGetRot(tankBody), mouseDelta));
cpBodySetAngle(tankControlBody, cpBodyGetAngle(tankBody) - turn);
'turn' is calculated by transforming the direction vector into the body's local frame, so it is relative to the body's current rotation. The demo smooths out the rotation using constraints (which you might want to consider here too), but you could also just get away with using cpflerpconst() on 'turn' to enforce a maximum angular velocity.
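Building on that, a rough sketch of the cpflerpconst() approach applied directly to the character's body, in the same style as the question (touchPoint, MAX_TURN_RATE and dt come from your own setup, not from Chipmunk):
// Angle to the touch expressed in the body's local frame; cpvtoangle()
// returns a value in [-pi, pi], so wrap-around is handled for us.
cpVect touchDelta = cpvsub(touchPoint, cpBodyGetPos(self.body));
cpFloat turn = cpvtoangle(cpvunrotate(cpBodyGetRot(self.body), touchDelta));
// Clamp the per-frame correction to a maximum turn rate, then apply it
// directly to the body's angle (unlike the demo, which drives a separate
// control body through constraints).
cpFloat step = cpflerpconst(0.0f, turn, MAX_TURN_RATE * dt);
cpBodySetAngle(self.body, cpBodyGetAngle(self.body) + step);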
What about using cpBodySetTorque to set the body's torque to make it spin/rotate?
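If you go that route, a very rough sketch would be a proportional torque with some damping on the angular velocity so it doesn't oscillate ('turn' is the relative angle computed above; TURN_GAIN and TURN_DAMPING are made-up tuning constants):
// Spin the body toward the touch with torque instead of setting the angle.
cpFloat torque = TURN_GAIN * turn - TURN_DAMPING * cpBodyGetAngVel(self.body);
cpBodySetTorque(self.body, torque);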
For my Ogre project in C++, I want to create an animation of an object using Ogre's SimpleSpline.
Everything works perfectly: the object is animated correctly along the sequence of points in the path.
Since I need to use a scene with an orthographic view (so no perspective), I would still like to simulate a depth effect by "playing" with the scale of the object.
Thus, for each frame I update the position and scale of the object in this way:
const Vector3 position = this->getPoint(index_, time_);
const float scale = 1 / (1 + position.z);
node_->setScale(scale, scale, scale);
node_->setPosition(position);
It works quite well. Is there a way to make the depth effect more realistic?
You can try using DeflectorPlane in the script of your particle system.
Here you can find the documentation and the usage.
I am creating a 3D flying game and using DXUTCamera for my view.
I can get the camera to take on the character's position, but I would like to view my character in the third person.
Here is my code for the first-person view:
//Put the camera on the object.
D3DXVECTOR3 viewerPos;
D3DXVECTOR3 lookAtThis;
D3DXVECTOR3 up ( 5.0f, 1.0f, 0.0f );
D3DXVECTOR3 newUp;
D3DXMATRIX matView;
//Set the viewer's position to the position of the thing.
viewerPos.x = character->x; viewerPos.y = character->y;
viewerPos.z = character->z;
// Create a new vector for the direction for the viewer to look
character->setUpWorldMatrix();
D3DXVECTOR3 newDir, lookAtPoint;
D3DXVec3TransformCoord(&newDir, &character->initVecDir,
&character->matAllRotations);
// set lookatpoint
D3DXVec3Normalize(&lookAtPoint, &newDir);
lookAtPoint.x += viewerPos.x;
lookAtPoint.y += viewerPos.y;
lookAtPoint.z += viewerPos.z;
g_Camera.SetViewParams(&viewerPos, &lookAtPoint);
So does anyone have any ideas how I can move the camera to a third-person view? Preferably timed, so there is a smooth action in the camera movement. (I'm hoping I can just edit this code instead of bringing in another camera class.)
Well, I believe I can help you, in theory, to change from a first-person view to a third-person one. I'm sorry I can't give you the actual code, but I'm typing from a phone. You will have to put the point where your view starts slightly behind the player and make the lookAtPoint look at the player. Also, be sure to make the x, y and z move according to the third-person logic. Hope that helps; I'm sorry if it doesn't, but typing from a phone is hard for me and I can't explain it very well.
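To illustrate that idea with a sketch built on the question's code (untested; CAMERA_DISTANCE, CAMERA_HEIGHT, CAMERA_SPEED, fElapsedTime and the persistent smoothedEye variable are assumptions, not DXUT API):
// Facing direction of the character, computed the same way as above.
D3DXVECTOR3 forward;
D3DXVec3TransformCoord(&forward, &character->initVecDir, &character->matAllRotations);
D3DXVec3Normalize(&forward, &forward);

D3DXVECTOR3 charPos(character->x, character->y, character->z);

// Desired eye position: a fixed distance behind the character, raised a little.
D3DXVECTOR3 desiredEye = charPos - forward * CAMERA_DISTANCE;
desiredEye.y += CAMERA_HEIGHT;

// Ease the camera toward the desired position each frame for smooth, timed motion.
float t = CAMERA_SPEED * fElapsedTime;
if (t > 1.0f) t = 1.0f;
D3DXVec3Lerp(&smoothedEye, &smoothedEye, &desiredEye, t);

g_Camera.SetViewParams(&smoothedEye, &charPos);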