How do I access shape position in SFML? - c++

I'm using SFML to draw in C++. It was going well until I tried accessing the position of a circle I drew on the screen. Code:
sf::Shape RootCircle = sf::Shape::Circle(300, 30, 30, sf::Color::Blue);
App.Draw(RootCircle);
cout << "X: " << RootCircle.GetPosition().x << endl;
cout << "Y: " << RootCircle.GetPosition().y << endl;
It consistently tells me that the x and y positions are set to 0. What am I missing?

The sf::Shape::Circle() call only sets the circle's geometry: the (300, 30) you pass is the center's offset relative to the shape's position, and the position itself stays at its default of (0, 0). To actually set the position of the circle, you need to call:
RootCircle.SetPosition(300.0f, 30.0f);
Note that once the position is set to (300, 30), whatever offset you specify in the Circle() constructor is applied relative to that position.
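Putting it together, a minimal sketch using the same SFML 1.x API as the question: build the circle with a zero offset, then place it with SetPosition(), and GetPosition() returns what you expect.
// Geometry with zero offset; the shape's position does the placement
sf::Shape RootCircle = sf::Shape::Circle(0.f, 0.f, 30.f, sf::Color::Blue);
RootCircle.SetPosition(300.f, 30.f); // the circle's center now sits at (300, 30)
App.Draw(RootCircle);
std::cout << "X: " << RootCircle.GetPosition().x << std::endl; // prints 300
std::cout << "Y: " << RootCircle.GetPosition().y << std::endl; // prints 30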

Related

Fbx SDK up axis import issues

I have some problems when trying to import FBX files into a 3D application using the Autodesk FBX SDK.
When I'm exporting a mesh from 3ds Max and choose Y as the up axis in the exporter options, the vertices aren't transformed and the Z axis is still used as the up axis for the points coordinates in the file. This is expected, as I'm supposed to transform the scene to my defined axis system afterwards.
In the importer code, I'm verifying the axis system and converting the scene to one with Y as the up axis:
FbxAxisSystem axisSystem(FbxAxisSystem::eYAxis, FbxAxisSystem::eParityOdd, FbxAxisSystem::eRightHanded);
FbxAxisSystem sceneAxisSystem = fbxScene->GetGlobalSettings().GetAxisSystem();
if (sceneAxisSystem != axisSystem)
axisSystem.ConvertScene(fbxScene);
However, the exported file already uses an axis system equivalent to the one I'm converting to (the up axis is Y), so no conversion takes place.
If I export the same mesh from Blender or Maya, the axis system is the same too.
The only different attribute in the file exported from 3ds Max is the OriginalUpAxis attribute, which is 2 (Z), compared to 1, as it would be when exported from Maya.
I also tried exporting the mesh with Z as the up axis. The vertices end up in the same positions as before, and this time the scene conversion does take place (or at least the if statement fires). But when I try to transform the vertex positions, I get an identity matrix, which makes me believe that axisSystem.ConvertScene(fbxScene) does nothing:
FbxMesh *fbxMesh = meshNode->GetMesh();
FbxAMatrix& transform = meshNode->EvaluateGlobalTransform();
unsigned numFbxVertices = fbxMesh->GetControlPointsCount();
FbxVector4* lControlPoints = fbxMesh->GetControlPoints();
/* I'm getting an identity matrix here:
   1 0 0 0
   0 1 0 0
   0 0 1 0
   0 0 0 1 */
for (int col = 0; col < 4; ++col)
{
    for (int row = 0; row < 4; ++row)
        std::cout << transform.GetColumn(col).Buffer()[row] << " ";
    std::cout << std::endl;
}
Is this an SDK bug? Any advice?
EDIT: It turns out that calling node->ResetPivotSetAndConvertAnimation() for each node in the scene resets the transformation matrices too; that's why I was getting the identity matrix. Now it works perfectly.
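For anyone hitting the same thing, a sketch of an import order that avoids the problem, assuming the fbxScene and meshNode variables from the question: read the transforms you need before any pivot reset, since (per the edit above) ResetPivotSetAndConvertAnimation() resets them.
// Convert the axis system first (a no-op if it already matches)
FbxAxisSystem axisSystem(FbxAxisSystem::eYAxis, FbxAxisSystem::eParityOdd, FbxAxisSystem::eRightHanded);
if (fbxScene->GetGlobalSettings().GetAxisSystem() != axisSystem)
    axisSystem.ConvertScene(fbxScene);
// Capture the global transform *before* resetting pivots
FbxAMatrix transform = meshNode->EvaluateGlobalTransform();
// Only afterwards, if needed, bake the pivots into the animation:
// meshNode->ResetPivotSetAndConvertAnimation();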

Vector.push_back(pair<int,int>(x1, x2)); does not work

I used a DLIB parallel_for loop to do some processing and add coordinates to a std::vector<std::pair<int,int>> that was declared outside the loop. But I cannot get the vector's push_back() to work from within the loop.
I verified that there are no declaration issues, and I tried passing a pointer to the vector into the parallel_for lambda.
//Store coordinates of the respective face_image
std::vector<pair<int,int>> xy_coords;
//Create a dlib image window
window.clear_overlay();
window.set_image(dlib_frame);
auto detections = f_detector(dlib_frame);
dlib::parallel_for(0, detections.size(), [&,detections,xy_coords](long i)
{
auto det_face = detections[i];
//Display Face data to the user
cout << "Face Found! " << "Area: " << det_face.area() << "X: " <<det_face.left() << "Y: " << det_face.bottom() << endl;
//Get the Shape details from the face
auto shape = sp(dlib_frame, det_face);
//Extract Face Image from frame
matrix<rgb_pixel> face_img;
extract_image_chip(dlib_frame, get_face_chip_details(shape, 150, 0.25), face_img);
faces.push_back(face_img);
//Add the coordinates to the coordinates vector
xy_coords.push_back(std::pair<int,int>((int)det_face.left(),(int)det_face.bottom()));
//Add face to dlib image window
window.add_overlay(det_face);
});
Your lambda captures xy_coords by copy, so the vector you push into inside the lambda is not the same one outside it. Capture it by reference instead, e.g. [&xy_coords, detections], or rely on the capture-default with just [&, detections]. (Note that [&, &xy_coords, detections] is ill-formed: with a capture-default of &, explicit captures must not repeat the &.)
See this for more info:
https://en.cppreference.com/w/cpp/language/lambda#Lambda_capture
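Putting it together, a sketch of the fixed loop using the question's variables. One extra caveat not raised above: parallel_for may run iterations concurrently, so concurrent push_back calls on the shared vectors would be a data race; the sketch assumes a std::mutex (from <mutex>) to serialize them.
std::mutex coords_mutex; // protects xy_coords and faces across threads
dlib::parallel_for(0, detections.size(), [&, detections](long i)
{
    auto det_face = detections[i];
    auto shape = sp(dlib_frame, det_face);
    matrix<rgb_pixel> face_img;
    extract_image_chip(dlib_frame, get_face_chip_details(shape, 150, 0.25), face_img);
    std::lock_guard<std::mutex> lock(coords_mutex);
    faces.push_back(face_img);
    // xy_coords is captured by reference now, so this modifies the outer vector
    xy_coords.push_back(std::pair<int,int>((int)det_face.left(), (int)det_face.bottom()));
});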

opengl glRasterPos*() changes arguments

This is a part of my code and it's result in opengl/c++(using visual studio 2013):
GLint raspos[4]; // GL_CURRENT_RASTER_POSITION fills four integers
glRasterPos2i(56, 56);
glGetIntegerv(GL_CURRENT_RASTER_POSITION, raspos);
cout << "X : " << raspos[0] << " and " << "Y : " << raspos[1];
result
X : 125 and Y : 125
I can't understand what's going on. Why does glRasterPos2i change the arguments?
The coordinates passed to glRasterPos are subject to the transformation pipeline: the raster position is transformed by the current modelview and projection matrices just like an ordinary vertex. Querying GL_CURRENT_RASTER_POSITION then retrieves the resulting raster position in window coordinates.
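To see it concretely: with a projection that maps coordinates 1:1 to window pixels, the queried raster position matches what was passed in (windowWidth and windowHeight here are assumed to equal the current viewport size).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, windowWidth, 0, windowHeight); // 1:1 mapping onto the viewport
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRasterPos2i(56, 56);
GLint raspos[4];
glGetIntegerv(GL_CURRENT_RASTER_POSITION, raspos);
// raspos[0] == 56 and raspos[1] == 56 now (window coordinates)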

stereoCalibrate() changes focal lengths even when it was not supposed to

I noticed that OpenCV's stereoCalibrate() changes the focal lengths in the camera matrices even though I've set the appropriate flag (i.e. CV_CALIB_FIX_FOCAL_LENGTH). I'm using two identical cameras with the same focal length set mechanically on the lens, and furthermore I know the sensor size, so I can compute the intrinsic camera matrix manually, which is what I actually do.
Here is some output from the stereo calibration program: the camera matrices before and after stereoCalibrate().
std::cout << "Before calibration: " << std::endl;
std::cout << "C1: " << _cameraMatrixA << std::endl;
std::cout << "C2: " << _cameraMatrixB << std::endl;
double error = cv::stereoCalibrate(objectPoints, imagePointsA, imagePointsB, _cameraMatrixA, _distCoeffsA, _cameraMatrixB, _distCoeffsB, _imageSize,
R, T, E, F,
cv::TermCriteria((cv::TermCriteria::COUNT + cv::TermCriteria::EPS), 30, 9.999999999999e-7), CV_CALIB_FIX_FOCAL_LENGTH | CV_CALIB_FIX_PRINCIPAL_POINT);
std::cout << "After calibration: " << std::endl;
std::cout << "C1: " << _cameraMatrixA << std::endl;
std::cout << "C2: " << _cameraMatrixB << std::endl;
Before calibration:
C1: [6203.076923076923, 0, 1280; 0, 6203.076923076923, 960; 0, 0, 1]
C2: [6203.076923076923, 0, 1280; 0, 6203.076923076923, 960; 0, 0, 1]
After calibration:
C1: [6311.77650416514, 0, 1279.5; 0, 6331.34531760757, 959.5; 0, 0, 1]
C2: [6152.655897294907, 0, 1279.5; 0, 6206.591406832492, 959.5; 0, 0, 1]
This seems like weird OpenCV behavior. Has anyone faced a similar problem? I know it is easy to work around: I can just restore the focal lengths in the camera matrices after stereo calibration.
In order to do what you want, you have to call stereoCalibrate with flags:
CV_CALIB_USE_INTRINSIC_GUESS | CV_CALIB_FIX_FOCAL_LENGTH | CV_CALIB_FIX_PRINCIPAL_POINT
If you do not use the CV_CALIB_USE_INTRINSIC_GUESS flag, stereoCalibrate will first initialize the camera matrices and distortion coefficients itself and only then fix parts of them in the subsequent optimization. This is stated in the documentation, although rather unclearly and without mentioning that critical flag:
Besides the stereo-related information, the function can also perform a full calibration of each of two cameras. However, due to the high dimensionality of the parameter space and noise in the input data, the function can diverge from the correct solution. If the intrinsic parameters can be estimated with high accuracy for each of the cameras individually (for example, using calibrateCamera() ), you are recommended to do so [...].
Using CV_CALIB_USE_INTRINSIC_GUESS in addition to any of the CV_CALIB_FIX_* flags tells the function to use what you are passing as input, otherwise, this input is simply ignored and overwritten.
The CV_CALIB_FIX_FOCAL_LENGTH flag then causes the optimization routine to just use the fx and fy that were passed in the intrinsic matrix.
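Concretely, the call from the question with the extra flag added (keeping the question's OpenCV 2.x argument order, where the TermCriteria comes before the flags):
double error = cv::stereoCalibrate(objectPoints, imagePointsA, imagePointsB,
    _cameraMatrixA, _distCoeffsA, _cameraMatrixB, _distCoeffsB, _imageSize,
    R, T, E, F,
    cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 9.999999999999e-7),
    // USE_INTRINSIC_GUESS makes the FIX_* flags pin down what you passed in
    CV_CALIB_USE_INTRINSIC_GUESS | CV_CALIB_FIX_FOCAL_LENGTH | CV_CALIB_FIX_PRINCIPAL_POINT);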

Finding screen width and height dynamically in OpenGL GLUT program

In my OpenGL program, in order to run it in fullscreen mode, I'm using the GLUT function glutGameModeString(const char *string), where 'string' specifies the screen width, height, pixel depth, and refresh rate. To make it work on any system, I need to dynamically determine the screen width and height of the host system before making the call to glutGameModeString(). How do I do that?
You can query this dynamically; GLUT provides the following constants for glutGet():
GLUT_SCREEN_HEIGHT
GLUT_SCREEN_WIDTH
I've tested it, it works.
Just tested on Linux Mint 18
std::cout << "glut screen width(DEFUALT): " << GLUT_SCREEN_WIDTH << std::endl;
std::cout << "glut screen height(DEFAULT): " << GLUT_SCREEN_HEIGHT << std::endl;
std::cout << "glut screen width: " << glutGet(GLUT_SCREEN_WIDTH) << std::endl;
std::cout << "glut screen height: " << glutGet(GLUT_SCREEN_HEIGHT) << std::endl;
outputs:
glut screen width (raw constant): 200
glut screen height (raw constant): 201
glut screen width: 1366
glut screen height: 768
Setting your window size to glutGet(GLUT_SCREEN_WIDTH) and glutGet(GLUT_SCREEN_HEIGHT) will maximize your window, but it still has a top bar and border.
Just calling glutFullScreen() after initializing your window will make it fullscreen with no borders or anything, with no need for the above steps of getting the screen size.
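If you do want game mode with a dynamically determined size, here is a minimal sketch (the :32@60 depth and refresh rate are assumptions; adjust them for your display):
#include <GL/glut.h>
#include <cstdio>

static void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    // Query the real screen size at runtime
    int w = glutGet(GLUT_SCREEN_WIDTH);
    int h = glutGet(GLUT_SCREEN_HEIGHT);
    // Build the "WxH:depth@refresh" string glutGameModeString() expects
    char mode[64];
    std::snprintf(mode, sizeof(mode), "%dx%d:32@60", w, h);
    glutGameModeString(mode);
    if (glutGameModeGet(GLUT_GAME_MODE_POSSIBLE))
        glutEnterGameMode();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}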