I want to get the x, y, z coordinates of the base frame from TangoPoseData on a Tango device.
if (pose.baseFrame == TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION
        && pose.targetFrame == TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE) {
    if (pose.statusCode == TangoPoseData.POSE_VALID) {
        // get base frame coordinate
    }
}
A pose represents a transformation between two frames; you can't get a pose using only one frame (the base frame, in your case).
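If you are on the C API, the usual pattern looks roughly like the sketch below (treat the exact identifiers as assumptions to verify against tango_client_api.h); the key point is that the frame pair is part of the query, and pose.translation is the position of the target frame expressed in the base frame:
#include <tango_client_api.h> // assumed header for the Tango C API

void readPose()
{
    // A pose is always requested for a base/target frame pair.
    TangoCoordinateFramePair pair;
    pair.base   = TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
    pair.target = TANGO_COORDINATE_FRAME_START_OF_SERVICE;

    TangoPoseData pose;
    if (TangoService_getPoseAtTime(0.0 /* 0.0 = most recent */, pair, &pose) == TANGO_SUCCESS
            && pose.status_code == TANGO_POSE_VALID)
    {
        // Position of the target frame expressed in the base frame.
        double x = pose.translation[0];
        double y = pose.translation[1];
        double z = pose.translation[2];
        (void)x; (void)y; (void)z; // use the coordinates here
    }
}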
I have two positions, p1 and p2. p2 is attached to p1, not only to p1's position but also to its rotation, so q1 is a quaternion that represents p1's rotation.
If q1 rotates, then p2's position must also rotate around p1 accordingly.
I only need to calculate p2's position, not its rotation; I have worked the rotation out already.
So basically it's a spaceship docked to a station: I need to move and rotate the station around with the ship docked to it.
How do I do it?
The code I wrote for it works as long as the station is not rotated at the time of docking:
bool docked[100];
Quaternion quatTarget[100];
double distance_dock[100];

vector3 docking_position(int ship, int station)
{
    if (!docked[ship])
    {
        // On first call, store the docking distance and the ship-to-station
        // orientation at the moment of docking.
        docked[ship] = true;
        distance_dock[ship] = distances(position[ship], position[station]);
        vector3 direcc = normalized(position[station] - position[ship]);
        quatTarget[ship] = vecToVecRotation(direcc, { 0, 0, 1 });
        QuaternionNormalize(&quatTarget[ship], &quatTarget[ship]);
    }
    // Combine the station's current rotation with the stored docking orientation.
    Quaternion orientation = total_rotation[station] * quatTarget[ship];
    Matrix docking_place;
    MatrixRotationQuaternion(&docking_place, &orientation);
    // The third column of the rotation matrix is the orientation's local z axis.
    vector3 axis_z = { docking_place(0, 2), docking_place(1, 2), docking_place(2, 2) };
    return position[station] + -axis_z * distance_dock[ship];
}
What I do here is take an orientation quaternion from the ship to the station at the time of docking, and then translate the ship distance_dock units along the negative z axis of that orientation, so the ship always moves with the station. But if I dock the ship when the station is already rotated, the initial docking position comes out wrong, even though the ship still rotates perfectly along with the station afterwards.
If I understand you correctly, you have two objects that have a rigid transformation between them. The problem is that you want to calculate the pose (position + orientation) of one, given the pose of the other.
Let's say you have three frames: the Station frame "S", the Vehicle frame "V", and the Global frame "G" (I assume your graphics environment has a global 3D Cartesian frame).
The transformation between frames S and V is fully known (translation and orientation) and constant. It is denoted S_p_SV (the position of the Vehicle w.r.t. the Station, expressed in the Station frame) and S_V_q (the quaternion orientation of the Vehicle, expressed in the Station frame).
This will be confusing if you have not had experience with rigid-body mechanics, in which case you should read some introductory notes or slides on rigid-body mechanics, which are plentiful on Google.
I have written the expression in LaTeX but unfortunately StackOverflow does not support it, so I have attached it as an image. The original LaTeX can be found here.
In my notation below, for example on the first line, S_p_SV is the position of the Vehicle w.r.t. the Station, expressed in the Station frame (of rotation).
The prefixed superscript indicates the rotation frame. The quaternion G_S_q, for example, represents the orientation of the Station frame relative to the Global frame.
In terms of implementing this in C++, I am unsure of what library you are using for quaternions, but you will need the following functions (a concrete sketch follows the list):
Convert Euler to Quaternions - If you are going to manually specify the rotation S_V_q (rotation of the Vehicle w.r.t. the Station)
Convert Quaternion to DCM - For the first method in the LaTeX
Quaternion Multiply - For the second method in the LaTeX
Quaternion Conjugate - For the second method in the LaTeX
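To make that concrete, here is a minimal, self-contained C++ sketch using toy Vec3/Quat types rather than any particular library. It implements the second method (quaternion multiply and conjugate) for rotating the position offset; converting G_S_q to a DCM and multiplying would give the same result (the first method). The rotation convention v' = q * v * q^-1 is an assumption you should match to your engine:
#include <cstdio>

// Toy types for illustration only; substitute your engine's math library.
struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; }; // assumed unit-length

Quat multiply(const Quat& a, const Quat& b)
{
    return { a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
             a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
             a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
             a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w };
}

Quat conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

// Rotate vector v by unit quaternion q: v' = q * (0, v) * q^-1
// (the quaternion multiply/conjugate route).
Vec3 rotate(const Quat& q, const Vec3& v)
{
    Quat p = { 0.0, v.x, v.y, v.z };
    Quat r = multiply(multiply(q, p), conjugate(q));
    return { r.x, r.y, r.z };
}

int main()
{
    // Known: the Station's pose in the Global frame.
    Vec3 G_p_GS = { 10.0, 0.0, 5.0 };                 // Station position
    Quat G_S_q  = { 0.7071068, 0.0, 0.7071068, 0.0 }; // 90 deg about y

    // Known and constant: the Vehicle's pose relative to the Station.
    Vec3 S_p_SV = { 0.0, 0.0, 3.0 };      // docked 3 units along Station z
    Quat S_V_q  = { 1.0, 0.0, 0.0, 0.0 }; // same orientation as the Station

    // Vehicle position in the Global frame: G_p_GV = G_p_GS + rotate(G_S_q, S_p_SV).
    Vec3 off    = rotate(G_S_q, S_p_SV);
    Vec3 G_p_GV = { G_p_GS.x + off.x, G_p_GS.y + off.y, G_p_GS.z + off.z };

    // Vehicle orientation in the Global frame: G_V_q = G_S_q * S_V_q.
    Quat G_V_q = multiply(G_S_q, S_V_q);

    std::printf("vehicle position    = (%f, %f, %f)\n", G_p_GV.x, G_p_GV.y, G_p_GV.z);
    std::printf("vehicle orientation = (%f, %f, %f, %f)\n", G_V_q.w, G_V_q.x, G_V_q.y, G_V_q.z);
    return 0;
}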
I am new to Ogre and have read the basic tutorials, but I am unable to understand how to create an orbit camera with mouse-wheel zooming.
Here is my camera code:
// Create the scene node(orbit camera)
node = mSceneMgr->getRootSceneNode()->createChildSceneNode("orbit", Ogre::Vector3(0, 100, -150));
node->attachObject(mCamera);
// create the second camera node(freecam)
node = mSceneMgr->getRootSceneNode()->createChildSceneNode("free", Ogre::Vector3(0, 100, 400));
// create the third camera node (3rd person robot cam)
node = mSceneMgr->getRootSceneNode()->createChildSceneNode("robocam", Ogre::Vector3(0, 100, -80));
And here is my keypress function:
bool BasicTutorial05::processUnbufferedInput(const Ogre::FrameEvent& evt)
{
    Ogre::Vector3 transVector1 = Ogre::Vector3::ZERO;

    if (cam1) // when cam 1 is selected, bool cam1 will be true
    {
        if (mKeyboard->isKeyDown(OIS::KC_S))
            mSceneMgr->getSceneNode("orbit")->pitch(Ogre::Radian(-0.012f));
        if (mKeyboard->isKeyDown(OIS::KC_W))
            mSceneMgr->getSceneNode("orbit")->pitch(Ogre::Radian(0.012f));
        if (mKeyboard->isKeyDown(OIS::KC_A))
            mSceneMgr->getSceneNode("orbit")->yaw(Ogre::Radian(0.012f));
        if (mKeyboard->isKeyDown(OIS::KC_D))
            mSceneMgr->getSceneNode("orbit")->yaw(Ogre::Radian(-0.012f));
    }

    mSceneMgr->getSceneNode("orbit")->translate(transVector1 * evt.timeSinceLastFrame, Ogre::Node::TS_LOCAL);
    return true; // keep the rendering loop running
}
And the mouse-wheel zooming:
//zooming for orbit camera
Ogre::Vector3 transVector2 = Ogre::Vector3::ZERO;
if (mMouse->getMouseState().Z.rel != 0){
transVector2.z = -mMouse->getMouseState().Z.rel;
}
I am sort of able to orbit, but only around the point where the camera is, and only when I use the wheel-scroll zoom; instead of rotating around a target point, it rotates about the camera's own position.
How do I change it so that it rotates around a point?
Create two nodes for your camera. The first one is the target; it is placed at the point you want to rotate around.
The second node should be created at some distance from the first one. Attach it as a child of the target node, and attach your camera to this second node. Finally, point your camera at the target node (the first one).
With this setup you just need to put the target node at the point of interest and rotate it as you like. The camera's position will follow the target, because its node is a child of the target node. And by moving the camera node closer to the target node you can change your zoom level.
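A minimal sketch of that setup, reusing names from the question (mSceneMgr, mCamera, mMouse) and assuming the Ogre 1.x / OIS API:
// Target node: the point the camera orbits around.
Ogre::SceneNode* targetNode = mSceneMgr->getRootSceneNode()
    ->createChildSceneNode("orbitTarget", Ogre::Vector3(0, 100, 0));

// Camera node: a child of the target, offset by the orbit radius.
Ogre::SceneNode* cameraNode = targetNode
    ->createChildSceneNode("orbitCamera", Ogre::Vector3(0, 0, 200));
cameraNode->attachObject(mCamera);
mCamera->lookAt(Ogre::Vector3(0, 100, 0)); // aim at the target point

// Orbiting: rotate the target node and the camera follows, staying aimed
// at the target because it moves rigidly with it.
targetNode->yaw(Ogre::Radian(0.012f));
targetNode->pitch(Ogre::Radian(0.012f));

// Zooming: slide the camera node along its local z, toward or away from
// the target, by the mouse-wheel delta.
int wheel = mMouse->getMouseState().Z.rel;
if (wheel != 0)
    cameraNode->translate(Ogre::Vector3(0, 0, -wheel * 0.1f), Ogre::Node::TS_LOCAL);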
I am making a top-down isometric game using SDL 2.0 and C++ and have come across a glitch.
When a texture is rendered to the screen using the SDL_RenderCopy function, the moment the top of the texture hits the top of the screen it gets pushed down by one pixel, causing the missing borders seen in the following pictures:
[Image: pre-edit, with no annotations]
[Image: post-edit, with annotations]
The following is my render function specific to the world itself. The world renders differently from everything else in the game because I simply copy from a "source" texture instead of loading a texture for every single tile in the game, which would be absurdly inefficient.
//-----------------------------------------------------------------------------
// Rendering
DSDataTypes::Sint32 World::Render()
{
//TODO: Change from indexing to using an iterator (pointer) for efficiency
for(int index = 0; index < static_cast<int>(mWorldSize.mX * mWorldSize.mY); ++index)
{
const int kTileType = static_cast<int>(mpTilesList[index].GetType());
//Translate the world so that when camera panning occurs the objects in the world will all be in the accurate position
I am also incorporating camera panning, as follows (paraphrased, with some snippets of code included, since my camera-panning logic spans multiple files due to the object-oriented design of my game):
(code from above immediately continued below)
mpTilesList[index].SetRenderOffset(Window::GetPanOffset());
//position (dstRect)
SDL_Rect position;
position.x = static_cast<int>(mpTilesList[index].GetPositionCurrent().mX + Window::GetPanOffset().mX);
position.y = static_cast<int>(mpTilesList[index].GetPositionCurrent().mY + Window::GetPanOffset().mY);
position.w = static_cast<int>(mpTilesList[index].GetSize().mX);
position.h = static_cast<int>(mpTilesList[index].GetSize().mY);
//clip (frame)
SDL_Rect clip;
clip.x = static_cast<int>(mpSourceList[kTileType].GetFramePos().mX);
clip.y = static_cast<int>(mpSourceList[kTileType].GetFramePos().mY);
clip.w = static_cast<int>(mpSourceList[kTileType].GetFrameSize().mX);
clip.h = static_cast<int>(mpSourceList[kTileType].GetFrameSize().mY);
I am confused as to why this is happening, since the same result occurs regardless of whether I include my simple culling algorithm (shown below).
(code from above immediately continued below)
//Check to ensure tile is being drawn within the screen size. If so, rendercopy it, else simply skip over and do not render it.
//If the tile's position.x is greater than the left border of the screen
if(position.x > (-mpSourceList[kTileType].GetRenderSize().mX))
{
//If the tile's position.y is greater than the top border of the screen
if(position.y > (-mpSourceList[kTileType].GetRenderSize().mY))
{
//If the tile's position.x is less than the right border of the screen
if(position.x < Window::msWindowSize.w)
{
//If the tile's position.y is less than the bottom border of the screen
if(position.y < Window::msWindowSize.h)
{
SDL_RenderCopy(Window::mspRenderer.get(), mpSourceList[kTileType].GetTexture(), &clip, &position);
}
}
}
}
}
return 0;//TODO
}
You may have a rounding error when you cast the positions to ints. Perhaps you should round to the nearest integer instead of truncating toward zero, which is what your cast does. A tile at position (0.8, 0.8) will be rendered at pixel (0, 0) when it should probably be rendered at pixel (1, 1).
Alternatively, you could ensure that the size of your tiles is always an integer; then errors shouldn't accumulate.
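For example, a minimal sketch of the difference (std::lround rounds to the nearest integer, whereas the cast in the question truncates):
#include <cmath>
#include <cstdio>

int main()
{
    double worldX = 0.8;
    int truncated = static_cast<int>(worldX);              // 0: the cast truncates
    int rounded   = static_cast<int>(std::lround(worldX)); // 1: nearest pixel
    std::printf("truncated=%d rounded=%d\n", truncated, rounded);
    return 0;
}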
Short version of the answer (restated at the bottom):
Fixing my data-type issue allowed me to fix my math library, which removed the rounding of partial pixels; there is no such thing as "less than 1 but greater than 0" pixels on the screen when rendering.
Long Answer:
I believe the issue was caused by a rounding error in the offset logic used when rotating a 2D grid to a diagonal isometric perspective, with the rounding error occurring only for screen coordinates between -1 and +1.
Since I based the conversion from an orthogonal grid to a diagonal grid on the y axis (rows), this explains why the single-pixel offset occurred only at the top border of the screen and not at the bottom.
Even though every row had implicit rounding occurring without any safety checks, only the conversion from world coordinates to screen coordinates dealt with rounding between a positive and a negative number.
The root cause was that my math library, which is templatized, had an issue: a lot of my code is based on typedef'd user types such as:
typedef unsigned int Uint32;
typedef signed int Sint32;
So I had simply used DSMathematics::Vector2<float> instead of the proper DSMathematics::Vector2<int>.
This is a problem because there is no such thing as "half a pixel" on the screen, so integers must be used instead of floating-point values.
To restate the short version: fixing my data-type issue allowed me to fix my math library, which removed the rounding of partial pixels; there is no such thing as "less than 1 but greater than 0" pixels on the screen when rendering.
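A minimal sketch of the fix, with a stand-in for DSMathematics::Vector2 since its definition isn't shown: keep floats for world-space math, use integer vectors for screen space, and round explicitly at the boundary:
#include <cmath>
#include <cstdio>

// Stand-in for the question's DSMathematics::Vector2<T>.
template <typename T>
struct Vector2 { T mX, mY; };

int main()
{
    Vector2<float> world{ 0.8f, 12.3f }; // world space stays floating point
    Vector2<int> screen{                 // screen space is whole pixels
        static_cast<int>(std::lround(world.mX)),
        static_cast<int>(std::lround(world.mY)) };
    std::printf("screen = (%d, %d)\n", screen.mX, screen.mY);
    return 0;
}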
I'm writing a raytracer in C++ and I'm having quite a bit of trouble understanding why my output images don't contain all of the objects that should be there. Namely, I'm working with spheres and planes, and I can't draw more than one instance of each.
The object values are read in from an ASCII file (such as radius, location, normals, etc). Here's my intersect test code.
//check primary ray against each object
for (int i = 0; i < objList.size(); ++i)
{
    //if intersect
    if (objList[i]->intersect(ray, origin, &t))
    {
        if (t < minDist)   //check depth
        {
            minDist = t;   //update depth
            bestObj = i;   //update closest object
        }
    }
}
vec3 intersection = origin + minDist * ray;

//figure out what to draw, if anything
color_t shadeColor;
if (bestObj != -1) { //valid object
    //get base color
    //using rgb color
    if (objList[bestObj]->rgbColor != vec3(-1)) {
        shadeColor.r = objList[bestObj]->rgbColor.x;
        shadeColor.g = objList[bestObj]->rgbColor.y;
        shadeColor.b = objList[bestObj]->rgbColor.z;
    }
    //else using rgbf color
    else if (objList[bestObj]->rgbfColor != vec4(-1)) {
        shadeColor.r = objList[bestObj]->rgbfColor.x;
        shadeColor.g = objList[bestObj]->rgbfColor.y;
        shadeColor.b = objList[bestObj]->rgbfColor.z;
        //need to do something with alpha value
    }
    //else invalid color
    else {
        cout << "Invalid color." << endl;
    }
    //...the rest is just shadow and reflection tests. There are bugs there as well, but those are for another post
The above code is inside a loop that runs for every pixel. 'ray' is the direction of the ray, and 'origin' is its origin. 'objList' is a std::vector that holds every object in the scene. I've tested to make sure that each object actually gets put into the vector.
I know that my intersection tests are working, at least for the one object of each type that renders. I've had the program print to a file every value that 'bestObj' takes, but it never registers any object other than the last one as the 'bestObj'. I realize that this is the problem, that no other object gets set as the 'bestObj', but I can't figure out why!
Any help would be appreciated :)
I figured out the problem, thanks to didierc. I'm not sure what he was really getting at, but it made me think about how I was handling my pointers. Indeed, though my vector was pushing back every object, I wasn't creating a new object each time I pushed one back. This led to every sphere in the std::vector pointing to the same one (i.e., the last one read in from the file)!
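For illustration, a minimal sketch of that bug and its fix, using a hypothetical Sphere type:
#include <memory>
#include <vector>

struct Sphere { double radius = 0.0; };

int main()
{
    // Buggy pattern: one object is reused, so every element of the vector
    // points at the same Sphere and ends up holding the last values read.
    std::vector<Sphere*> objList;
    Sphere* shared = new Sphere();
    for (int i = 0; i < 3; ++i)
    {
        shared->radius = i;        // pretend this was read from the file
        objList.push_back(shared); // same pointer pushed every time
    }
    // Here objList[0], objList[1], and objList[2] all have radius == 2.
    delete shared;

    // Fixed pattern: allocate a fresh object per record read.
    std::vector<std::unique_ptr<Sphere>> fixedList;
    for (int i = 0; i < 3; ++i)
    {
        auto s = std::make_unique<Sphere>();
        s->radius = i;
        fixedList.push_back(std::move(s)); // each element owns its own Sphere
    }
    return 0;
}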
I'm playing around a little with DirectX 9.0 and want an object to bounce back when it hits the screen edges (top, bottom, right, and left). The sprite is an image that is 128x128 pixels.
I can make it bounce back and forth, but the bounce happens either after the image is already halfway outside the screen, or too early. This is because the object itself sits in the middle of the image. Is there any way to "remove" the background part so the program doesn't bounce the sprite until the image part itself collides with the screen edge?
Do I have to modify the image manually, like cropping it or something?
Here is some of the code I'm working with:
if(this->Textures[i].posision.x >= this->_screenWidth)
{
this->Textures[i].right = false;
}
else if(this->Textures[i].posision.x <= 0)
{
this->Textures[i].right = true;
}
if(!this->Textures[i].right)
this->Textures[i].posision.x -= 0.3f;
else
this->Textures[i].posision.x += 0.3f;
Thanks for any help!
Well, if you're travelling with a leftward vector, a collision would be qualified as:
if (this->Textures[i].posision.x - 128 / 2 <= 0) // 128/2 = half the sprite width
{
    this->Textures[i].right = true;
}
If your position.x and position.y refer to the centre of the image, then all you have to do is add/subtract half the image size to get the bounds of your image.
If your sprite isn't filling up the whole image, then you should probably crop some of it out.
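For illustration, a standalone sketch of that idea (hypothetical names and screen width; the x/right fields mirror the question's posision.x and right flag), bouncing a centred 128x128 sprite off the left and right edges:
// Bounce a sprite whose position is its centre, testing against half its size.
struct Sprite { float x; bool right; };

const float kScreenWidth = 800.0f; // assumed screen width
const float kHalfSprite  = 128.0f / 2.0f;

void update(Sprite& s, float speed)
{
    if (s.x + kHalfSprite >= kScreenWidth) s.right = false; // hit right edge
    if (s.x - kHalfSprite <= 0.0f)         s.right = true;  // hit left edge
    s.x += s.right ? speed : -speed;
}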