OSG: Camera flight with AnimationPathManipulator - C++

I'm trying to apply an osg::AnimationPath to the camera of my osgViewer::Viewer instance by using an osgGA::AnimationPathManipulator. My problem is that the AnimationPathManipulator only applies the rotation changes to the camera, not the position changes: the camera rotates but never translates.
I am using OpenSceneGraph Library 3.0.1.
For better insight, this is my current code:
void CameraFlyTest::animateCamera(osgViewer::Viewer* viewer) {
    osg::AnimationPath* path = new osg::AnimationPath();
    path->setLoopMode(osg::AnimationPath::SWING);

    osg::AnimationPath::ControlPoint cp1;
    cp1.setPosition(osg::Vec3d(-200, -450, 60));
    cp1.setRotation(osg::Quat(M_PI_2, osg::Vec3(1, 0, 0)));

    osg::AnimationPath::ControlPoint cp2;
    cp2.setPosition(osg::Vec3d(2000, -500, 60));
    cp2.setRotation(osg::Quat(M_PI_4, osg::Vec3(1, 0, 0)));

    path->insert(1.0, cp1); // control point times in seconds
    path->insert(3.0, cp2);

    osgGA::AnimationPathManipulator* apm = new osgGA::AnimationPathManipulator(path);
    viewer->setCameraManipulator(apm);
}

The problem was that another active camera manipulator was also updating the position of the camera. The osgGA::AnimationPathManipulator itself works as it should.
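For readers hitting the same symptom, a minimal sketch of ruling out a second writer (not from the original post): osgViewer::View::setCameraManipulator() replaces any previously set manipulator, so a lingering conflict usually comes from other code driving the camera each frame.
// setCameraManipulator() replaces the previous manipulator; the second
// parameter (default true) also jumps the camera to the new home position.
viewer->setCameraManipulator(apm, /*resetPosition=*/true);

// If the camera position is still wrong, look for other code that writes
// the view matrix every frame, e.g. an update callback calling
// viewer->getCamera()->setViewMatrix(...) - it will fight the manipulator.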

Getting 3D world coordinate from (x,y) pixel coordinates

I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some processing, I get a point of interest in the RGB image, in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortion given in the CameraInfo msg. I have tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases the coordinates are not correct (I placed a small sphere at the coordinates produced by the program to check whether they are right).
Any help would be very much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coordinates of the point by doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x, y, z coordinates I get in point do not seem to be correct.
The camera has a pitch angle of pi/2, so it points at the ground.
I am clearly missing something.
I assume you've seen the Gazebo examples for the Kinect (brief, full). You can get, as topics, the raw image, the raw depth, and the calculated pointcloud (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own thing with the image_raw of the RGB and depth frames (e.g. run ML over the RGB frame and find the corresponding depth point via the camera_infos), the pointcloud topic should be sufficient - it's the same as the pcl pointcloud in C++, if you include the right headers.
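If you do go the per-pixel route, a rough sketch of back-projecting a pixel using the depth camera_info (image_geometry is my suggestion, not from the original post; assumes a 32FC1 depth image in meters):
#include <image_geometry/pinhole_camera_model.h>
#include <opencv2/core.hpp>

// cameraInfoMsg: the depth camera's CameraInfo; depthImage: cv::Mat (32FC1, meters)
image_geometry::PinholeCameraModel model;
model.fromCameraInfo(cameraInfoMsg);

cv::Point2d uv(xInPixel, yInPixel);
cv::Point3d ray = model.projectPixelTo3dRay(model.rectifyPoint(uv)); // ray in the camera frame
float depth = depthImage.at<float>(yInPixel, xInPixel);              // z distance in meters
cv::Point3d p = ray * (depth / ray.z);                               // scale so p.z == depth
This still leaves you in the camera frame, which is where tf comes in (see the edit below).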
Edit (in response):
There's a magical thing in ROS called tf/tf2. Your pointcloud, if you look at msg.header.frame_id, says something like "camera", indicating it's in the camera frame. tf, like the messaging system in ROS, works in the background: it listens for transformations from one frame of reference to another, and lets you transform or query data in another frame. For example, if the camera is mounted at a rotation relative to your robot, you can specify a static transformation in your launch file. It seems like you're trying to do the transformation yourself; let tf do it for you instead. It makes it easy to figure out where points are in the world/map frame, versus the robot/base_link frame, or the actuator/camera/etc. frame.
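A minimal sketch of letting tf2 do the transform (the frame names "camera" and "world" are assumptions - use your cloud's msg.header.frame_id and whatever your fixed frame is called):
#include <ros/ros.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
#include <geometry_msgs/PointStamped.h>

tf2_ros::Buffer tfBuffer;                        // keep these alive for the node's lifetime
tf2_ros::TransformListener tfListener(tfBuffer);

geometry_msgs::PointStamped cameraPoint, worldPoint;
cameraPoint.header.frame_id = "camera";          // your cloud's frame_id
cameraPoint.header.stamp = ros::Time(0);         // 0 = latest available transform
cameraPoint.point.x = point.x;                   // the point from cloud.at(x, y)
cameraPoint.point.y = point.y;
cameraPoint.point.z = point.z;

try {
    worldPoint = tfBuffer.transform(cameraPoint, "world"); // target frame
} catch (tf2::TransformException& ex) {
    ROS_WARN("%s", ex.what());
}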
I would also look at these ROS wiki questions, which demo a few different ways to do this depending on what you want: ex1, ex2, ex3

(UE4) How to properly render after transforming the bones of UPoseableMeshComponent

I am trying to transform bones within UE4 (4.25) using UPoseableMeshComponent. (image of initial state)
However, after I transform the bones using SetBoneTransformByName, the rendering gets into a weird state; what's shown is not motion blur, just the pose after SetBoneTransformByName was applied (image of blurred rendering after transform). Unlit rendering, though, looks just fine.
After I call AActor::SetActorHiddenInGame(true) to hide the actor and then AActor::SetActorHiddenInGame(false) to show it again, the rendering is fixed. (image after hide/show)
The code is purely in C++ (no Blueprint). I first create a custom Character with a SkeletalMesh and add a UPoseableMeshComponent in code, something like below:
void AMyCharacter::CreatePoseableMesh() {
    USkeletalMeshComponent* skeletalMesh = GetMesh();
    UPoseableMeshComponent* poseMesh =
        NewObject<UPoseableMeshComponent>(this, UPoseableMeshComponent::StaticClass());
    if (poseMesh) {
        poseMesh->RegisterComponent();
        poseMesh->SetWorldLocation(location); // location/rotation: given spawn transform
        poseMesh->SetWorldRotation(rotation);
        poseMesh->AttachToComponent(GetRootComponent(), FAttachmentTransformRules::KeepRelativeTransform);
        poseMesh->SetSkeletalMesh(skeletalMesh->SkeletalMesh);
        poseMesh->SetVisibility(true);
        skeletalMesh->SetVisibility(false); // hide the original skeletal mesh
    }
}
Is there something I'm missing to set in UPoseableMeshComponent?
I might be wrong, but I think this happens because setting bone transforms manually doesn't write to the velocity buffer, so temporal AA doesn't know that something moved, causing the ugly blur.
If you switch to FXAA and the problem disappears, there's your hint.
There is a material node called Previous Frame Switch - you can control the velocity buffer through it using a custom parameter.
Self-solved (sort of). I tried with BP first, and even in BP you need to SetVisibility(false) and then SetVisibility(true) on the PoseableMeshComponent for it to render properly. Maybe a minor bug within UE4.
// transforms: a given TMap<FString, FTransform> mapping bone names to target transforms
poseMesh->SetVisibility(false); // hide the PoseableMeshComponent once
for (auto& x : transforms) {
    poseMesh->SetBoneTransformByName(FName(*x.Key), x.Value, EBoneSpaces::WorldSpace);
}
poseMesh->SetVisibility(true); // show it again so the new pose renders correctly
This seems to be the workaround for now.

Using a MotionController Component in Unreal with C++ instead of Blueprint

After iterating through an array of FMotionControllerSource of an OculusInputDevice IMotionController, I found a connected Oculus Right and Left Touch Controller based on its ETrackingStatus. With the left and right controllers, I can get the location and rotation using the IMotionController API, which returns the calibration-space orientation of the requested controller's hand.
Here's a reference to the IMotionController API:
https://docs.unrealengine.com/en-US/API/Runtime/HeadMountedDisplay/IMotionController/index.html
I want to apply the location/rotation to a PoseableMesh, so that the mesh is shown where the Oculus controller is in reality. Currently, with the code below, the 3D model is displayed down from the camera, so the mapping scale is off. I think WorldToMetersScale might be off. When I use a small number, the controller doesn't move the 3D model much, but this might be messing it up.
FVector position;
FRotator rotation;
int id = tracker.deviceIndex;
FName srcName = tracker.motionControllerSource;
bool success = tracker.motionController->GetControllerOrientationAndPosition(id, srcName, rotation, position, 250.0f);
if (success)
{
    poseMesh->SetWorldLocationAndRotation(position, rotation);
}
Adding the camera position to the controller position seemed to fix the issue:
// get camera reference during BeginPlay:
camManager = GetWorld()->GetFirstPlayerController()->PlayerCameraManager;
// TickComponent
poseMesh->SetWorldLocationAndRotation(camManager->GetCameraLocation() + position, rotation);
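Putting it together, a minimal TickComponent sketch (the component class name is hypothetical; tracker, poseMesh and camManager are assumed to be set up as above, and using the world's WorldToMetersScale from AWorldSettings instead of the hard-coded 250.0f is my assumption):
void UMyTrackerComponent::TickComponent(float DeltaTime, ELevelTick TickType,
                                        FActorComponentTickFunction* ThisTickFunction)
{
    Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

    FVector position;
    FRotator rotation;
    const float worldToMeters = GetWorld()->GetWorldSettings()->WorldToMetersScale;
    if (tracker.motionController->GetControllerOrientationAndPosition(
            tracker.deviceIndex, tracker.motionControllerSource,
            rotation, position, worldToMeters))
    {
        // Positions are reported relative to the tracking origin, so offset
        // by the camera position as in the fix above.
        poseMesh->SetWorldLocationAndRotation(camManager->GetCameraLocation() + position, rotation);
    }
}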

VTK - how to flip/mirror an image

I'm using vtkResliceImageViewer to display an image (multi-planar reconstruction). How can I flip/mirror that image vertically and horizontally? Operating on the camera is not working as expected, since the flip also has to take the camera rotation angle into consideration, so it gets very complicated. It would be great if there were a way to change the image's texture coordinates. Is this possible?
// Create an image
vtkSmartPointer<vtkImageMandelbrotSource> source =
    vtkSmartPointer<vtkImageMandelbrotSource>::New();
source->Update();

// Flip the image
vtkSmartPointer<vtkImageFlip> flipYFilter =
    vtkSmartPointer<vtkImageFlip>::New();
flipYFilter->SetFilteredAxis(1); // flip the y axis
flipYFilter->SetInputConnection(source->GetOutputPort());
flipYFilter->Update();

// Create the viewer
vtkSmartPointer<vtkResliceImageViewer> viewer =
    vtkSmartPointer<vtkResliceImageViewer>::New();
viewer->SetInputData(flipYFilter->GetOutput());
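To mirror horizontally as well, a second vtkImageFlip can be chained in front of the viewer; SetFilteredAxis takes 0 for x, 1 for y, 2 for z:
vtkSmartPointer<vtkImageFlip> flipXFilter =
    vtkSmartPointer<vtkImageFlip>::New();
flipXFilter->SetFilteredAxis(0); // flip the x axis
flipXFilter->SetInputConnection(flipYFilter->GetOutputPort());
flipXFilter->Update();
viewer->SetInputData(flipXFilter->GetOutput());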

How can I change the underlying geometry of a TopoDS_Shape in OpenCASCADE

I am trying to change the geometry of an existing TopoDS_Shape in OpenCASCADE. A possible application is modifying an edge of a body without reconstructing the whole body (e.g. changing the radius of one cap of a cylinder, or shifting a control point of a B-spline curve/surface).
Is there a standard approach to do this in OpenCASCADE?
Is it possible to update geometry without creating a new shape?
I already tried to use BRepAdaptor_HCurve instead, but this did not really help.
Handle(Geom_Circle) aCircle = new Geom_Circle(gp_Ax2(gp_Pnt(0, 0, 0), gp_Dir(0, 0, 1)), 5); // circle in the xy plane, origin (0,0,0), radius 5
TopoDS_Edge circ = BRepBuilderAPI_MakeEdge(aCircle); // switch to the topological description

STEPControl_Writer writer;
writer.Transfer(circ, STEPControl_AsIs); // access topology for output

BRepAdaptor_Curve theAdaptor = BRepAdaptor_Curve(circ); // create an adaptor
gp_Circ mod_circ = theAdaptor.Circle();
mod_circ.SetRadius(1); // change the radius to 1

// I don't want to create a new circle, but reuse the old one with the updated geometry:
// writer.Transfer(circ, STEPControl_AsIs); // access topology for output

// in order to output the updated geometry, we also have to create a new edge
TopoDS_Edge another_circ = BRepBuilderAPI_MakeEdge(mod_circ);
writer.Transfer(another_circ, STEPControl_AsIs); // access topology for output
writer.Write("debug.stp");
(Image: original and modified geometry, created by writing circ and another_circ.)
As I understood from the OpenCASCADE forum and the documentation, you cannot change the sub-shapes of a shape directly. But you can create a new sub-shape and replace the old one; see the sketch after the list below.
See the OpenCASCADE forum topics below. Hope it helps.
How to modify sub-shapes of a given shape without copy
Modify shape
Replacing a face with X faces
Edit topology
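A minimal sketch of the replace step using BRepTools_ReShape (parentShape is hypothetical - e.g. a wire or face that contains circ; this follows the pattern discussed in the linked topics):
#include <BRepTools_ReShape.hxx>

// Substitute the old edge with the re-made one inside a parent shape.
Handle(BRepTools_ReShape) reshape = new BRepTools_ReShape();
reshape->Replace(circ, another_circ);
TopoDS_Shape updated = reshape->Apply(parentShape);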