Transform static object to map frame - c++

I have a very simple and maybe stupid question; sorry, I am new to this kind of stuff.
I had a file of points, published them to a topic as a pointcloud, and subscribed to the topic from another node to modify the pointcloud and publish it again. Everything is fine so far. The problem is that the pointcloud (the object) is not at the origin when I visualize it in RViz; it is somewhere at the edge of the platform. How can I move the pointcloud to the origin in an easy way?
So far I have tried some solutions with the tf2 package: I tried to transform the frame of the object to the map frame, where I want to have the object in the end, but it doesn't seem to work, so I am missing something. The question is: is this the right and easiest approach, or is there a better way? If it is the right approach, what am I missing?
I put this code in my callback function:
geometry_msgs::msg::TransformStamped transform;
transform.header.stamp = this->get_clock()->now();
transform.header.frame_id = "object_frame";
transform.child_frame_id = "map";
// Set the position of the object relative to the Rviz frame
transform.transform.translation.x = 0.0;
transform.transform.translation.y = 0.0;
transform.transform.translation.z = 0.0;
// Set the orientation of the object relative to the Rviz frame
transform.transform.rotation.x = 0.0;
transform.transform.rotation.y = 0.0;
transform.transform.rotation.z = 0.0;
transform.transform.rotation.w = 1.0;
tf_broadcaster->sendTransform(transform);
And I added the following to my publisher function:
void Preprocessor::publish_pointcloud_supervoxel()
{
    // Convert the PointCloud to a PointCloud2 message
    auto pcl_msg_supervoxel = std::make_shared<sensor_msgs::msg::PointCloud2>();
    //sensor_msgs::msg::PointCloud2 pcl_msg_supervoxel;
    pcl::toROSMsg(*colored_supervoxel_cloud, *pcl_msg_supervoxel);
    //pcl_msg_supervoxel->width = adjacent_supervoxel_centers.size();
    pcl_msg_supervoxel->header.frame_id = "map";
    pcl_msg_supervoxel->header.stamp = this->get_clock()->now();
    // Publish the message
    supervoxel_publisher->publish(*pcl_msg_supervoxel);
}

Not knowing your RViz setup: have you checked whether the Global Options > Fixed Frame in RViz is set to the correct frame?
You can try a static transform and see if that is reflected in RViz. This would go in the launch file.
static_transform_node = Node(
    package="tf2_ros",
    executable="static_transform_publisher",
    arguments=["x", "y", "z", "yaw", "pitch", "roll", "parent_frame_id", "child_frame_id"],
    output="screen",
)
Note: the quotes in the arguments are required
ref: https://wiki.ros.org/tf2_ros#static_transform_publisher
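If you would rather do it in C++ inside your node, here is a minimal sketch using tf2_ros::StaticTransformBroadcaster (the member name static_broadcaster_ is an assumption, not from your code); note that the parent frame goes in header.frame_id and the child frame in child_frame_id:
#include <tf2_ros/static_transform_broadcaster.h>
#include <geometry_msgs/msg/transform_stamped.hpp>

// e.g. in the node's constructor:
static_broadcaster_ = std::make_shared<tf2_ros::StaticTransformBroadcaster>(this);

geometry_msgs::msg::TransformStamped t;
t.header.stamp = this->get_clock()->now();
t.header.frame_id = "map";          // parent frame
t.child_frame_id = "object_frame";  // child frame
t.transform.translation.x = 0.0;    // place the object frame at the map origin
t.transform.rotation.w = 1.0;       // identity orientation
static_broadcaster_->sendTransform(t);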

Related

FBXSDK, using Quaternions to set rotation keys?

I am trying to write a file save application using the Autodesk FBXSDK. I have this working fine using Euler rotations, but I need to update it to use quaternions.
The relevant function is:
bool CreateScene(FbxScene* pScene, double lFocalLength, int startFrame)
{
//Create Camera
FbxNode* lMyCameraNode = FbxNode::Create(pScene, "p_camera");
//connect camera node to root node
FbxNode* lRootNode = pScene->GetRootNode();
lRootNode->ConnectSrcObject(lMyCameraNode);
FbxCamera* lMyCamera = FbxCamera::Create(pScene, "Root_camera");
lMyCameraNode->SetNodeAttribute(lMyCamera);
// Create an animation stack
FbxAnimStack* myAnimStack = FbxAnimStack::Create(pScene, "My stack");
// Create the base layer (this is mandatory)
FbxAnimLayer* pAnimLayer = FbxAnimLayer::Create(pScene, "Layer0");
myAnimStack->AddMember(pAnimLayer);
// Get the camera's curve node for local rotation.
FbxAnimCurveNode* myAnimCurveNodeRot = lMyCameraNode->LclRotation.GetCurveNode(pAnimLayer, true);
//create curve nodes
FbxAnimCurve* myRotXCurve = NULL;
FbxAnimCurve* myRotYCurve = NULL;
FbxAnimCurve* myRotZCurve = NULL;
FbxTime lTime; // For the start and stop keys.
int lKeyIndex = 0; // Index for the keys that define the curve
// Get the animation curve for local rotation of the camera.
myRotXCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_X, true);
myRotYCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Y, true);
myRotZCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Z, true);
//This to add keys, per frame.
float frameNumber = startFrame;
for (int i = 0; i < rec.size(); i++)
{
lTime.SetFrame(frameNumber); //frame number
//rx
lKeyIndex = myRotXCurve->KeyAdd(lTime);
myRotXCurve->KeySet(lKeyIndex, lTime, recRotX[i], FbxAnimCurveDef::eInterpolationLinear);
//ry
lKeyIndex = myRotYCurve->KeyAdd(lTime);
myRotYCurve->KeySet(lKeyIndex, lTime, recRotY[i], FbxAnimCurveDef::eInterpolationLinear);
//rz
lKeyIndex = myRotZCurve->KeyAdd(lTime);
myRotZCurve->KeySet(lKeyIndex, lTime, recRotZ[i], FbxAnimCurveDef::eInterpolationLinear);
frameNumber += 1;
}
return true;
}
I would ideally like to pass in quaternion data here, instead of the Euler x, y, z values. Is this possible with the FBX SDK, or do I need to convert my quaternion data first and continue to pass in Eulers?
Thank you.
You always need to go back to Euler angles, as you can only get animation curves for the XYZ rotation. The only thing you have control over is the rotation order.
However, you can use FbxQuaternion for your calculations, then use .DecomposeSphericalXYZ() to get XYZ Euler angles.
The accepted answer does not work. Although the documentation definitely implies it should ("Create an Euler XYZ equivalent to the current quaternion."), an Autodesk employee claims that it does not ("DecomposeSphericalXYZ does not convert to Euler angles"), and this is borne out by my testing. In the current FBX SDK, there are at least two relatively easy ways you can convert a quat to what they call an euler, or to something suitable for LclRotation. First is via FbxAMatrix:
FbxQuaternion fq = ...;
FbxAMatrix fa;
fa.SetQ(fq);
FbxVector4 fe = fa.GetR();
Second is via FbxVector4::SetXYZ:
FbxVector4 fe2;
fe2.SetXYZ(fq);
I've successfully gone from an XYZ rotation sequence → quaternion → euler from both methods, and retrieved the same rotation sequence. When I use DecomposeSphericalXYZ I get a slightly different FbxVector4. I haven't tried to figure out what they mean by "euler in spherical coordinates".
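Putting the two together, here is a minimal sketch of the key loop from the question driven by quaternion samples (the container name recRotQ is an assumption), converting each sample via FbxAMatrix before setting the XYZ keys:
for (size_t i = 0; i < recRotQ.size(); i++)
{
    lTime.SetFrame(frameNumber);
    FbxAMatrix m;
    m.SetQ(recRotQ[i]);
    FbxVector4 euler = m.GetR();   // XYZ Euler angles, in degrees
    lKeyIndex = myRotXCurve->KeyAdd(lTime);
    myRotXCurve->KeySet(lKeyIndex, lTime, (float)euler[0], FbxAnimCurveDef::eInterpolationLinear);
    lKeyIndex = myRotYCurve->KeyAdd(lTime);
    myRotYCurve->KeySet(lKeyIndex, lTime, (float)euler[1], FbxAnimCurveDef::eInterpolationLinear);
    lKeyIndex = myRotZCurve->KeyAdd(lTime);
    myRotZCurve->KeySet(lKeyIndex, lTime, (float)euler[2], FbxAnimCurveDef::eInterpolationLinear);
    frameNumber += 1;
}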
Years later, I hit this issue again, and found a simple answer.
After you have set all the keys that you need, just use this filter:
FbxAnimCurveFilterUnroll filter;
filter.Apply(*myAnimCurveNodeRot);
This seems to function the same as the 'Euler Filter' in Maya, or the 'Gimbal Killer' filter in Motionbuilder.

cocos2d-x v3 c++ Drop shadow cocos2d::Sprite

As far as I've found out, cocos2d-x doesn't offer simple filter handling like AS3 does, for example.
My situation:
I want to add a realtime shadow to a cocos2d::Sprite.
For example I would like to do something like this (similar to AS3):
auto mySprite = Sprite::createWithSpriteFrameName("myCharacter.png");
DropShadowFilter* dropShadow = new DropShadowFilter();
dropShadow->distance = 0;
dropShadow->angle = 45;
dropShadow->color = 0x333333;
dropShadow->alpha = 1;
dropShadow->blurX = 10;
dropShadow->blurY = 10;
dropShadow->strength = 1;
dropShadow->quality = 15;
mySprite->addFilter(dropShadow);
This should add a shadow to my Sprite to achieve a result like this:
Adobe Drop Shadow Example
Could you help me please?
There isn't any built in support for shadows on Sprites in Cocos2D-X.
The best option, performance-wise, would be to place your shadows in your sprite images already, instead of calculating and drawing them in the code.
Another option is to sub-class Sprite and override the draw method so that you duplicate the sprite and apply your effects and draw it below the original.
One possible way to achieve that is with this snippet from this thread on the Cocos forum. I can't say that I completely follow what this code does with the GL transforms, but you can use this as a starting point to experiment.
void CMySprite::draw()
{
    // is_shadow is true if this sprite is to be considered like a shadow sprite, false otherwise.
    if (is_shadow)
    {
        ccBlendFunc blend;
        // Change the default blending factors to this one.
        blend.src = GL_SRC_ALPHA;
        blend.dst = GL_ONE;
        setBlendFunc( blend );
        // Change the blending equation to this in order to subtract from the values
        // already written in the frame buffer the ones of the sprite.
        glBlendEquationOES(GL_FUNC_REVERSE_SUBTRACT_OES);
    }
    CCSprite::draw();
    if (is_shadow)
    {
        // The default blending function of cocos2d-x is GL_FUNC_ADD.
        glBlendEquationOES(GL_FUNC_ADD_OES);
    }
}
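If you don't need the subtractive-blending trick, here is a minimal sketch of the duplicate-sprite idea mentioned above using only the public cocos2d-x v3 API (the layer name, offset, opacity and z-orders are assumptions); note this gives a hard-edged shadow, with no blur:
auto mySprite = Sprite::createWithSpriteFrameName("myCharacter.png");

auto shadow = Sprite::createWithSpriteFrameName("myCharacter.png");
shadow->setColor(Color3B::BLACK);       // tint the duplicate dark
shadow->setOpacity(110);                // make it translucent
shadow->setAnchorPoint(mySprite->getAnchorPoint());
shadow->setPosition(mySprite->getPosition() + Vec2(6.0f, -6.0f));  // drop-shadow offset

someLayer->addChild(shadow, 1);         // shadow below...
someLayer->addChild(mySprite, 2);       // ...the character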

Projection Mapping with Kinect and OpenGL

I'm currently using JavaCV software called procamcalib to calibrate a Kinect-projector setup, which has the Kinect RGB camera as origin. This setup consists solely of a Kinect RGB camera (I'm roughly using the Kinect just as an ordinary camera at the moment) and one projector. This calibration software uses LibFreenect (OpenKinect) as the Kinect driver.
Once the software completes its process, it gives me the intrinsic and extrinsic parameters of both the camera and the projector, which are fed to an OpenGL program to validate the calibration, and this is where a few problems begin. Once the projection and modelview matrices are correctly set, I should be able to fit what is seen by the Kinect with what is being projected, but in order to achieve this I have to do a manual translation on all 3 axes, and this last part isn't making any sense to me! Could you guys please help me sort this out?
The SDK used to retrieve Kinect data is OpenNI (not the latest 2.x version, it should be 1.5.x)
I'll explain exactly what I'm doing to reproduce this error. The calibration parameters are used as follows:
The projection matrix is set as follows (based on http://sightations.wordpress.com/2010/08/03/simulating-calibrated-cameras-in-opengl/):
r = width/2.0f; l = -width/2.0f;
t = height/2.0f; b = -height/2.0f;
alpha = fx; beta = fy;
xo = cx; yo = cy;
X = kinectCalibration.c_near + kinectCalibration.c_far;
Y = kinectCalibration.c_near*kinectCalibration.c_far;
d = kinectCalibration.c_near - kinectCalibration.c_far;
float* glOrthoMatrix = (float*)malloc(16*sizeof(float));
glOrthoMatrix[0] = 2/(r-l); glOrthoMatrix[4] = 0.0f; glOrthoMatrix[8] = 0.0f; glOrthoMatrix[12] = (r+l)/(l-r);
glOrthoMatrix[1] = 0.0f; glOrthoMatrix[5] = 2/(t-b); glOrthoMatrix[9] = 0.0f; glOrthoMatrix[13] = (t+b)/(b-t);
glOrthoMatrix[2] = 0.0f; glOrthoMatrix[6] = 0.0f; glOrthoMatrix[10] = 2/d; glOrthoMatrix[14] = X/d;
glOrthoMatrix[3] = 0.0f; glOrthoMatrix[7] = 0.0f; glOrthoMatrix[11] = 0.0f; glOrthoMatrix[15] = 1;
printM( glOrthoMatrix, 4, 4, true, "glOrthoMatrix" );
float* glCameraMatrix = (float*)malloc(16*sizeof(float));
glCameraMatrix[0] = alpha; glCameraMatrix[4] = skew; glCameraMatrix[8] = -xo; glCameraMatrix[12] = 0.0f;
glCameraMatrix[1] = 0.0f; glCameraMatrix[5] = beta; glCameraMatrix[9] = -yo; glCameraMatrix[13] = 0.0f;
glCameraMatrix[2] = 0.0f; glCameraMatrix[6] = 0.0f; glCameraMatrix[10] = X; glCameraMatrix[14] = Y;
glCameraMatrix[3] = 0.0f; glCameraMatrix[7] = 0.0f; glCameraMatrix[11] = -1; glCameraMatrix[15] = 0.0f;
float* glProjectionMatrix = algMult( glOrthoMatrix, glCameraMatrix );
And the Modelview matrix is set as:
proj_loc = new Vec3f( proj_RT[12], proj_RT[13], proj_RT[14] );
proj_fwd = new Vec3f( proj_RT[8], proj_RT[9], proj_RT[10] );
proj_up = new Vec3f( proj_RT[4], proj_RT[5], proj_RT[6] );
proj_trg = new Vec3f( proj_RT[12] + proj_RT[8],
proj_RT[13] + proj_RT[9],
proj_RT[14] + proj_RT[10] );
gluLookAt( proj_loc[0], proj_loc[1], proj_loc[2],
proj_trg[0], proj_trg[1], proj_trg[2],
proj_up[0], proj_up[1], proj_up[2] );
And finally the camera is displayed and moved around with:
glPushMatrix();
glTranslatef(translateX, translateY, translateZ);
drawRGBCamera();
glPopMatrix();
where the translation values are manually adjusted with the keyboard until I have a visual match (I'm projecting on the calibration board what the Kinect RGB camera is seeing, so I manually adjust the OpenGL camera until the projected pattern matches the printed pattern).
My question here is WHY do I have to make this manual adjustment? The modelview and projection setup should take care of it.
I was also wondering if there are any problems when switching drivers like that, since OpenKinect is used for calibration and OpenNI for validation. This came to mind when researching another popular calibration tool called RGBDemo, which says that if using the LibFreenect backend, a Kinect calibration is needed.
So, will a calibration go wrong if made with one driver and displayed with another?
Does anyone think it'll be easier to achieve success if this is done with OpenCV rather than OpenGL?
JavaCV Reference: https://code.google.com/p/javacv/
Procamcalib "short paper": http://www.ok.ctrl.titech.ac.jp/~saudet/research/procamcalib/
Procamcalib source code: https://code.google.com/p/javacv/source/browse?repo=procamcalib
RGBDemo calibration Reference: http://labs.manctl.com/rgbdemo/index.php/Documentation/Calibration
I can upload more things if necessary, just let me know what you guys need to be able to help me out :)
I'm the author of the article you linked to, and I think I can help.
The problem is in how you're setting your modelview matrix. You're using the third column of proj_RT as the camera's position when you call gluLookAt(), but it isn't the camera's position, it's the position of the world origin in camera coordinates. I wrote an article for my new blog that might help clear this up. It describes three different (equivalent) ways of interpreting the extrinsic matrix, with WebGL demos of each:
http://ksimek.github.io/2012/08/22/extrinsic/
If you must use gluLookAt, this article will show you how, but it's much simpler to just call glLoadMatrix(proj_RT).
tl;dr: replace gluLookAt() with glLoadMatrix(proj_RT)
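For reference, a minimal sketch of that replacement, assuming proj_RT is a column-major float[16] world-to-camera (extrinsic) matrix:
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(proj_RT);   // instead of building the view with gluLookAt()
// ...after this, drawing the scene should not need the manual glTranslatef() fudge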
For Kinect calibration, take a look at the latest 0.7 release of RGBDemo http://labs.manctl.com/rgbdemo and corresponding Freenect calibration source.
From the v0.7.0 ChangeLogs:
New features since v0.6.1:
New demo to acquire object models using markers
Simple calibration mode for rgbd-multikinect
Much faster grabbing in rgbd-multikinect
Add timestamps and camera serials when saving to disk
Compatibility with PCL 1.4
Various bug fixes
A very good book to follow is Jason McKesson's Learning Modern 3D Graphics Programming. You may also read the Kinect's ROS page and Nicolas' Kinect Calibration Page.

MiniMap/PIP for OpenSceneGraph

So I am trying to create a mini-map/PIP. I have an existing program with a scene that runs inside a Qt widget. I have a class, NetworkViewer, which extends CompositeViewer. In NetworkViewer's constructor I call the following function. Notice that root is the scene, which is populated elsewhere.
void NetworkViewer::init() {
    root = new osg::Group();
    viewer = new osgViewer::View();
    viewer->setSceneData( root );
    osg::Camera* camera;
    camera = createCamera( 0, 0, 100, 100 );
    viewer->setCamera( camera );
    viewer->addEventHandler( new NetworkGUIHandler( (GUI*)view ) );
    viewer->setCameraManipulator( new osgGA::TrackballManipulator );
    viewer->getCamera()->setClearColor(
        osg::Vec4( LIGHT_CLOUD_BLUE_F, 0.0f ) );
    addView( viewer );
    osgQt::GraphicsWindowQt* gw =
        dynamic_cast<osgQt::GraphicsWindowQt*>( camera->getGraphicsContext() );
    QWidget* widget = gw ? gw->getGLWidget() : NULL;
    QGridLayout* grid = new QGridLayout();
    grid->addWidget( widget );
    grid->setSpacing( 1 );
    grid->setMargin( 1 );
    setLayout( grid );
    initHUD();
}
The create camera function is as follows:
osg::Camera* createCamera( int x, int y, int w, int h ) {
    osg::DisplaySettings* ds = osg::DisplaySettings::instance().get();
    osg::ref_ptr<osg::GraphicsContext::Traits> traits
        = new osg::GraphicsContext::Traits;
    traits->windowName = "";
    traits->windowDecoration = false;
    traits->x = x;
    traits->y = y;
    traits->width = w;
    traits->height = h;
    traits->doubleBuffer = true;
    traits->alpha = ds->getMinimumNumAlphaBits();
    traits->stencil = ds->getMinimumNumStencilBits();
    traits->sampleBuffers = ds->getMultiSamples();
    traits->samples = ds->getNumMultiSamples();
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setGraphicsContext( new osgQt::GraphicsWindowQt( traits.get() ) );
    camera->setViewport( new osg::Viewport( 0, 0, traits->width, traits->height ) );
    camera->setViewMatrix( osg::Matrix::translate( -10.0f, -10.0f, -30.0f ) );
    camera->setProjectionMatrixAsPerspective(
        20.0f,
        static_cast<double>(traits->width) / static_cast<double>(traits->height),
        1.0f, 10000.0f );
    return camera.release();
}
I have been looking at several camera examples and searching for a solution for a while, to no avail. What I am really looking for is for the background to be my main camera, which takes up most of the screen and displays the scene graph, while my mini-map appears in the bottom right. It has the same scene as the main camera but is overlaid on top of it and has its own set of controls for selection etc., since it will have different functionality.
I was thinking that perhaps by adding another camera as a slave I would be able to do this:
camera = createCamera(40,40,50,50) ;
viewer->addSlave(camera) ;
But this doesn't seem to work. If I disable the other camera I do see a clear area that this camera was apparently supposed to be rendering in (its viewport), but that doesn't help. I've played around with rendering order, thinking it could be that, to no avail.
Any ideas? What is the best way to do such a minimap? What am I doing wrong? Also, is there any way to make the rendering of the minimap circular instead of rectangular?
I am not personally using OpenSceneGraph, so I can't advise you on your code. I think the best thing is to ask in the official forums. But I have some ideas on the minimap:
First, you have to specify the basic features of your minimap. Do you want it to emphasize some elements of the scene, or just show the scene (i.e., an RTS-like minimap vs. a simple top-down view of the scene)?
I assume you do not want to emphasize some elements of the scene. I also assume your main camera is not really 'minimap-friendly'. So, the simplest approach is to create a camera with the following properties:
direction = (0,-1,0) (Y is the vertical axis)
mode = orthographic
position = controlled by what you want, for example your main camera
Now, for the integration of the image. You can:
set the viewport of the minimap camera to what you want, if your minimap is rectangular.
render the minimap to a texture (RTT) and then blend it through an extra rendering pass.
You can also search other engines forums (like Irrlicht and Ogre) to see how they're doing minimaps.
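For the first option, here is a minimal sketch in OpenSceneGraph of an inset camera that renders the same scene into a small viewport on top of the main view (the viewport numbers, orthographic extents and the extra top-level group are assumptions, not from your code):
osg::ref_ptr<osg::Camera> miniMap = new osg::Camera;
miniMap->setViewport( new osg::Viewport( 700, 10, 256, 256 ) );          // bottom-right corner
miniMap->setRenderOrder( osg::Camera::POST_RENDER );                     // draw after the main view
miniMap->setClearMask( GL_DEPTH_BUFFER_BIT );                            // keep the main view's colour buffer
miniMap->setReferenceFrame( osg::Transform::ABSOLUTE_RF );               // ignore the manipulator
miniMap->setAllowEventFocus( false );                                    // main view keeps the input
miniMap->setProjectionMatrixAsOrtho( -50.0, 50.0, -50.0, 50.0, 1.0, 1000.0 );
miniMap->setViewMatrixAsLookAt( osg::Vec3d( 0.0, 500.0, 0.0 ),           // eye above the scene
                                osg::Vec3d( 0.0, 0.0, 0.0 ),             // looking straight down -Y
                                osg::Vec3d( 0.0, 0.0, 1.0 ) );           // up vector
miniMap->addChild( root );                                               // render the same scene

osg::ref_ptr<osg::Group> top = new osg::Group;
top->addChild( root );       // main view's scene
top->addChild( miniMap );    // inset camera
viewer->setSceneData( top );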

Box2D - When creating a body at runtime, the body does not collide

I've been working on a game with destructible environments and I've come up with a solution where I check for possible destruction within my ContactListener object. Obviously because this is taking place within Step(), I postpone processing the destruction until the moment after the step. I do this by pooling "destruction events" that need to be processed within the contact listener, and then immediately after Step() calling something like contactListener->processDestructionEvents();.
The way I do this is by capturing the colliding fixtures within the beginContact event and then determining the angle of impact, then using that angle to raycast on the fixture itself. I then grab the vertices from the b2PolygonShape of the fixture, then generate two new shapes which are split based on the impact and exit points of the ray. The original fixture is destroyed on the body, and then a new fixture is generated for the first new shape and added to the original body. For the second shape, a new body is generated and that shape is added to this new body.
Anyway, everything works great: in the debug view I can see that the new shapes have been generated and are all in place, as they should be. However, I get really screwed-up behavior at this point. As soon as this process is complete, neither the original nor the newly generated body will collide with anything. If I enable continuous physics, SOMETIMES a dynamic object will collide with one of the edges of these bodies/fixtures, but not always. I'm wondering if it's something I'm doing wrong in my approach to rebuilding bodies/fixtures at runtime. Here is the code for generating the new objects; any help would be greatly appreciated.
void PhysicsContactListener::processDestructionEvents() {
if(!hasDestructionEvents) {return;}
for(destructionEventsIterator = destructionEvents.begin(); destructionEventsIterator != destructionEvents.end(); ++destructionEventsIterator) {
b2Filter f1, f2;
f1.groupIndex = destructionEventsIterator->originalFixture->GetFilterData().groupIndex;
f1.categoryBits = destructionEventsIterator->originalFixture->GetFilterData().categoryBits;
f1.maskBits = destructionEventsIterator->originalFixture->GetFilterData().maskBits;
f2.groupIndex = destructionEventsIterator->originalFixture->GetFilterData().groupIndex;
f2.categoryBits = destructionEventsIterator->originalFixture->GetFilterData().categoryBits;
f2.maskBits = destructionEventsIterator->originalFixture->GetFilterData().maskBits;
b2PolygonShape newShape0 = destructionEventsIterator->newFixtures[0];
b2FixtureDef fixture0Def;
fixture0Def.shape = &newShape0;
fixture0Def.density = 1.0f;
fixture0Def.restitution = 0.2f;
b2Fixture* fixture1 = destructionEventsIterator->hostBody->CreateFixture(&fixture0Def);
fixture1->SetFilterData(f1);
//destructionEventsIterator->hostBody->SetAwake(true);
destructionEventsIterator->hostBody->ResetMassData();
//destructionEventsIterator->hostBody->SetActive(true);
destructionEventsIterator->hostBody->SetTransform(destructionEventsIterator->hostBody->GetPosition(), destructionEventsIterator->hostBody->GetAngle());
b2BodyDef bd;
bd.position = destructionEventsIterator->hostBody->GetPosition();
bd.angle = destructionEventsIterator->hostBody->GetAngle();
bd.type = destructionEventsIterator->hostBody->GetType();
b2Body* newBody = destructionEventsIterator->hostBody->GetWorld()->CreateBody(&bd);
b2PolygonShape* newShape1 = (b2PolygonShape*)(&destructionEventsIterator->newFixtures[1]);
b2Fixture* fixture2 = newBody->CreateFixture(newShape1, destructionEventsIterator->hostBodyDensity);
fixture2->SetFilterData(f2);
newBody->SetAngularVelocity(destructionEventsIterator->hostBody->GetAngularVelocity());
newBody->SetLinearVelocity(destructionEventsIterator->hostBody->GetLinearVelocity());
//newBody->SetAwake(true);
newBody->ResetMassData();
//newBody->SetActive(true);
newBody->SetTransform(destructionEventsIterator->hostBody->GetPosition(), destructionEventsIterator->hostBody->GetAngle());
destructionEventsIterator->hostBody->DestroyFixture(destructionEventsIterator->originalFixture);
}
}
The two pieces don't collide with each other? Take a look at the categoryBits and maskBits values that each fixture ends up with - looks like each piece is given the same values for these. My guess is you are just overlooking the fact that these masks are checked against each other both ways, eg. from the Box2D source code:
bool collide =
(filterA.maskBits & filterB.categoryBits) != 0 &&
(filterA.categoryBits & filterB.maskBits) != 0;
On the other hand if you mean the pieces collide with nothing at all and simply fall through the ground and down forever except for SOMETIMES, then I might suspect an incorrect polygon winding.
btw a b2Filter holds only primitives so you could assign those directly:
b2Filter f1 = destructionEventsIterator->originalFixture->GetFilterData();
...also, the first SetTransform and the second ResetMassData are redundant.
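If winding is the culprit, here is a minimal sketch of a check you could run on each split polygon before creating the fixture (verts and count are stand-ins for your splitting code's output, and it needs <algorithm> for std::reverse); Box2D expects counter-clockwise vertex order:
float signedArea = 0.0f;
for (int i = 0; i < count; ++i)
{
    const b2Vec2& a = verts[i];
    const b2Vec2& b = verts[(i + 1) % count];
    signedArea += a.x * b.y - b.x * a.y;
}
if (signedArea < 0.0f)                    // clockwise -> flip to CCW
    std::reverse(verts, verts + count);

b2PolygonShape shape;
shape.Set(verts, count);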