I am new to VTK and I am writing an app to draw multiple shapes. The shapes are drawn using the point-picking event as follows:
void Visualizer::pointPickingEventOccurred (const pcl::visualization::PointPickingEvent &event)
{
  std::cout << "[INFO] Point picking event occurred." << std::endl;

  float x, y, z;
  if (event.getPointIndex () == -1)
  {
    return; // nothing was picked
  }
  event.getPoint(x, y, z);
  std::cout << "[INFO] Point coordinate ( " << x << ", " << y << ", " << z << ")" << std::endl;
  points->InsertNextPoint(x, y, z);
}
points here is a VTK point list:
vtkNew<vtkPoints> points;
I am trying to clear points so I can resize it and collect new points for each shape, but I have not found a way to clear the list.
A simple call to points->Reset() will do the trick.
It will make the object look empty without actually releasing the memory.
Does Initialize() do the trick?
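From the VTK docs, yes: Initialize() also empties the list, and unlike Reset() it releases the allocated memory. A minimal sketch of clearing the list between shapes (startNewShape is a hypothetical helper; points is the vtkNew<vtkPoints> member shown above):

void Visualizer::startNewShape()
{
  // Reset() empties the list but keeps the allocation for reuse;
  // swap in Initialize() if you also want the memory released.
  points->Reset();
  points->Modified(); // notify the VTK pipeline that the data changed
}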
I am trying to use the PtInRect function in Win32, which takes a POINT object. I have x and y coordinates, but I don't know how to create a POINT from those. I tried looking it up, but could only find this documentation, which does not contain the information I want.
Thank you for your help.
Problem:
You're creating trouble for yourself by attempting to dynamically allocate the POINT object with malloc. You don't always need dynamic allocation, and as you noted in the comments, you don't need malloc to initialize a struct; you can just create the object directly.
Solution:
As tkausl pointed out in the comments, a POINT is created in the same way as any other struct.
Some examples of how to create a POINT:
#include <windows.h>
#include <iostream>

int main()
{
    int someX = 100;
    int someY = 100;

    RECT testRect1 = {0, 0, 200, 200};
    RECT testRect2 = {150, 150, 350, 350};

    // Three equivalent ways to create a POINT:
    POINT p1{someX, someY};     // brace initialization
    POINT p2 = {someX, someY};  // aggregate initialization
    POINT p3;                   // member-by-member assignment
    p3.x = someX;
    p3.y = someY;

    std::cout << (bool) PtInRect(&testRect1, p1) << std::endl;
    std::cout << (bool) PtInRect(&testRect1, p2) << std::endl;
    std::cout << (bool) PtInRect(&testRect1, p3) << std::endl;
    std::cout << (bool) PtInRect(&testRect1, {someX, someY}) << std::endl; // temporary POINT
    std::cout << (bool) PtInRect(&testRect2, p1) << std::endl;
    std::cout << (bool) PtInRect(&testRect2, p2) << std::endl;
    std::cout << (bool) PtInRect(&testRect2, p3) << std::endl;
    std::cout << (bool) PtInRect(&testRect2, {someX, someY}) << std::endl;
}
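With these values the first four lines print 1 (the point {100,100} lies inside testRect1) and the last four print 0 (it lies outside testRect2). Note that PtInRect treats the right and bottom edges as exclusive: a point exactly on either of those edges is not considered inside the rectangle.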
I've been trying to make a simple collision system for my 3D game. I'm creating a bounding sphere like this:
struct ACollSphr
{
    glm::vec3* pos;
    float radius;
};
And in the while loop that renders my game, in main, I give them a position like this:
for (unsigned int i = 0; i < meshModelMatrices2.size(); i++)
{
    Ackerfe::ACollSphr tempSphr;
    glm::vec3 *temporary = new glm::vec3(meshRenderer2.getBoundingSpherePos(*meshRenderer2.getMesh())
                                         * glm::vec3(vec[i][12], vec[i][13], vec[i][14]));
    tempSphr.pos = temporary;
    radius = meshRenderer2.getBoundingSphereRadius(*meshRenderer2.getMesh(), *tempSphr.pos);
    tempSphr.radius = radius;
    meshSphr.push_back(tempSphr);
    //std::cout << pos.x << " " << pos.y << " " << pos.z << std::endl;
    //std::cout << vec[i][12] << " " << vec[i][13] << " " << vec[i][14] << std::endl;
}
where meshSphr is a vector of spheres and meshRenderer2 is the renderer I use for the meshes I'm loading in. Basically, I get the mesh, pass it to getBoundingSpherePos, and get the position of that mesh; then I multiply it by a glm::vec3 made up of the translation values inside each mesh's model matrix. After that I get the radius and put my newly created sphere into the meshSphr vector (which I clear after using it for the collision checks, so it can be repopulated in the next iteration).
My collision check looks like this:
for (unsigned int i = 0; i < meshSphr.size(); i++)
{
    if (Ackerfe::sphrSphrColl(camera3D.getSphr(), &meshSphr[i]))
    {
        camera3D.changePosition(camera3D.getPosition() + glm::vec3(-5.0f));
    }
}
and my sphrSphrColl function looks like this:
bool sphrSphrColl(ACollSphr *first, ACollSphr *second)
{
    if (fabs((first->pos->x - second->pos->x) * (first->pos->x - second->pos->x) +
             (first->pos->y - second->pos->y) * (first->pos->y - second->pos->y) +
             (first->pos->z - second->pos->z) * (first->pos->z - second->pos->z) < (first->radius + second->radius) * (first->radius + second->radius)))
    {
        //std::cout << "DISTANCE: " << std::endl << glm::length(*first->pos - *second->pos) << std::endl << std::endl << std::endl << std::endl;
        return true;
    }
    return false;
}
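(Aside on the condition above: the fabs ends up wrapping the entire comparison, so it is applied to the boolean result rather than to the squared distance. Since a squared distance is never negative, the fabs can simply be dropped. A glm-based equivalent, for reference:)

bool sphrSphrColl(ACollSphr *first, ACollSphr *second)
{
    // Spheres overlap when the squared distance between centers is
    // smaller than the squared sum of the radii.
    glm::vec3 diff = *first->pos - *second->pos;
    float radii = first->radius + second->radius;
    return glm::dot(diff, diff) < radii * radii;
}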
I'm checking the positions of the bounding spheres with a cout in my main while loop, and the positions are registered correctly, but when I pass them to the sphrSphrColl function it only seems to put a bounding sphere at the origin and that's it.
My question: any idea why it's doing that? While I do have a mesh at the origin, why does it only put the bounding sphere there? It's almost as if the pointers aren't getting updated when I pass them to the function.
So I'm working on a school project in C++. The basic idea is that I have an area containing tiles (200x200 tiles).
Every Tile object contains a vector of shared pointers to my Creature class, used to display the creatures in a GUI. The project is a simulation, so in order to simulate every creature, we are supposed to use a vector containing every creature. That's why I'm using shared pointers.
The vector that is specific to each tile works without a problem, but the one with every creature in it is always empty. That's why I can't continue working on the simulation.
This is how I place a Creature and try to insert it into the vector:
void Presenter::placeCreature(const CreatureType& type) const {
    int x = getModel().getTerrain()->getCursorPosX();
    int y = getModel().getTerrain()->getCursorPosY();
    std::shared_ptr<Creature> creature(std::make_shared<Creature>(type));
    TileLand::Climate climate(getModel().getTerrain()
        ->getTileMap()[x + y * Reference::Terrain::terrainWidth].getClimate());
    if (climate == TileLand::deepSea || climate == TileLand::shallowWater) {
        if (!creature->isLandCreature()) {
            getModel().getCreatures().push_back(creature);
            getModel().getTerrain()->getTileMap()[x + y * Reference::Terrain::terrainWidth]
                .getCreaturesOnTile().push_back(creature);
            logger << INFO << "Placed Creature: " << type.getName() << " on Tile (" << x << "|" << y << ")" << ENDL;
            return;
        }
    } else {
        if (creature->isLandCreature()) {
            getModel().getCreatures().push_back(creature);
            getModel().getTerrain()->getTileMap()[x + y * Reference::Terrain::terrainWidth]
                .getCreaturesOnTile().push_back(creature);
            logger << INFO << "Placed Creature: " << type.getName() << " on Tile (" << x << "|" << y << ")" << ENDL;
            return;
        }
    }
    // German: "Wrong creature" / "This creature cannot be placed on this tile!"
    ui.warn("Falsche Kreatur", R"(Diese Kreatur kann auf diesem Feld nicht platziert werden!)");
    creature.reset();
}
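(Aside: the final creature.reset() is redundant; creature is a local std::shared_ptr, so its reference is released automatically when the function returns.)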
That's how I did the vector for creatures per tile. Works fine.
class TileLand {
public:
    ...
private:
    const Climate climate;
    std::vector<std::shared_ptr<Creature>> creaturesOnTile;
};
This is where the vector containing every creature is defined.
class Model {
public:
    explicit Model(const CreatureList& creatureList);
    TerrainModel* getTerrain() const;
    CreatureList& getCreatureList();
    std::vector<std::shared_ptr<Creature>>& getCreatures();
    QPixmap calculateView(int sizeX, int sizeY);
    void simStep();
private:
    CreatureList creatureList;
    std::vector<std::shared_ptr<Creature>> creatures;
    TerrainModel* terrain;
    int stepCounter;
};
The constructor of the Model class. This class contains the global creature vector, as you can see above.
Model::Model(const CreatureList& creatureList) : creatureList(creatureList),
                                                 terrain(new TerrainModel()),
                                                 stepCounter(0) {
    for (CreatureType ty : creatureList) {
        ty.getInfoInOneLine();
    }
}

std::vector<std::shared_ptr<Creature>>& Model::getCreatures() {
    return creatures;
}
That's where I try to do a simulation step for each creature, but I don't know what the problem is: "creatures" is always empty.
void Model::simStep() {
    qDebug("Creature");
    for (std::shared_ptr<biosim::Creature> cr : creatures) {
        qDebug("BIG");
        simPlants(cr);
    }
}
I hope someone can at least understand what my problem is; I have no idea how to describe it better :/
Since Presenter::getModel() returns a copy of the Model instance, the getModel().getCreatures().push_back(creature) call adds a creature into the creatures vector of that temporary copy (not the original object). This change is then immediately lost when the temporary copy goes out of scope at the end of the statement.
Either have Presenter::getModel() return a reference (like you did for Model::getCreatures()), or just use:
model.getCreatures().push_back(creature);
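For illustration, a minimal sketch of the reference-returning accessor (assuming Presenter stores the Model in a member named model):

class Presenter {
public:
    Model& getModel() { return model; } // return a reference, not a copy
private:
    Model model;
};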
I'm trying to run the example in the apriltags library, and I keep getting this error:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == 1 && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/Vijin/PersInq/opencv-3.2.0/modules/core/src/matrix.cpp, line 2559
I narrowed it down to this function call:
detection.getRelativeTranslationRotation(m_tagSize, m_fx, m_fy, m_px, m_py,
                                         translation, rotation);
I'm not an expert in OpenCV, so I'd appreciate some help with this. The exception is thrown the moment a marker is detected; otherwise, it runs fine. Here's the whole function:
void print_detection(AprilTags::TagDetection& detection) const {
    cout << " Id: " << detection.id
         << " (Hamming: " << detection.hammingDistance << ")";

    // recovering the relative pose of a tag:
    // NOTE: for this to be accurate, it is necessary to use the
    // actual camera parameters here as well as the actual tag size
    // (m_fx, m_fy, m_px, m_py, m_tagSize)
    Eigen::Vector3d translation;
    Eigen::Matrix3d rotation;
    try {
        detection.getRelativeTranslationRotation(m_tagSize, m_fx, m_fy, m_px, m_py,
                                                 translation, rotation);
    }
    catch (const std::exception& e) {
        cout << "print_detection failing";
    }

    Eigen::Matrix3d F;
    F <<
        1,  0, 0,
        0, -1, 0,
        0,  0, 1;
    Eigen::Matrix3d fixed_rot = F * rotation;
    double yaw, pitch, roll;
    wRo_to_euler(fixed_rot, yaw, pitch, roll);

    cout << " distance=" << translation.norm()
         << "m, x=" << translation(0)
         << ", y=" << translation(1)
         << ", z=" << translation(2)
         << ", yaw=" << yaw
         << ", pitch=" << pitch
         << ", roll=" << roll
         << endl;

    // Also note that for SLAM/multi-view application it is better to
    // use reprojection error of corner points, because the noise in
    // this relative pose is very non-Gaussian; see iSAM source code
    // for suitable factors.
}
This issue persists with the newer versions of OpenCV. It can be easily fixed by changing line 95 of src/TagDetection.cc from cv::Matx33f cameraMatrix( to cv::Matx33d cameraMatrix(.
Note, this is simply converting from float to double. Alternatively, you can use this library (https://github.com/PrieureDeSion/apriltags-cpp), which I have made changes to and tested with Ubuntu 16 and OpenCV.
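For reference, a sketch of the patched line (the arguments follow the usual pinhole camera-matrix layout; the exact formatting in TagDetection.cc may differ):

// Build the camera matrix in double precision so its type matches the
// CV_64F data OpenCV expects in the subsequent pose computation.
cv::Matx33d cameraMatrix(
    fx,  0, px,
     0, fy, py,
     0,  0,  1);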
I've used OpenCV to calibrate my camera from different views, and I obtained the intrinsics, rvec and tvec with a reprojection error of 0.03 px (I thus think the calibration is fine).
Now, given one view of my scene, I want to be able to click on a point and find its projection in the other views.
To do so, I use the following functions:
void Camera::project(const vector<cv::Point2f> &pts_2d, vector<cv::Point3f> &pts_3d) {
    std::cout << "Start proj 2d -> 3d" << std::endl;
    cv::Mat pts_2d_homo;
    convertPointsToHomogeneous(pts_2d, pts_2d_homo);
    std::cout << "Cartesian to Homogeneous done!" << std::endl;

    // Project point to camera normalized coordinates
    cv::Mat unproj;
    cv::transform(pts_2d_homo, unproj, intrinsics().inv());
    std::cout << "Point unprojected: " << unproj.at<cv::Point3f>(0) << std::endl;

    // Undo model view transform
    unproj -= transVec();
    cv::Mat rot;
    cv::Rodrigues(rotVec(), rot);
    cv::transform(unproj, unproj, rot.t());
    unproj *= 1.f / cv::norm(unproj);
    std::cout << "Model view undone: " << unproj.at<cv::Point3f>(0) << std::endl;

    for (int i = 0; i < unproj.rows; ++i) {
        std::cout << "Inside for :" << unproj.at<cv::Point3f>(i, 0) << std::endl;
        pts_3d.push_back(unproj.at<cv::Point3f>(i, 0));
    }
}
void Camera::project(const vector<cv::Point3f> &pts_3d, vector<cv::Point2f> &pts_2d) {
    cv::projectPoints(pts_3d, rotVec(), transVec(), intrinsics(), dist_coeffs(), pts_2d);
}
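For context, a hypothetical usage sketch of how the two overloads chain together (viewA and viewB are assumed Camera instances for two calibrated views; the click coordinates are made up):

// Back-project a click from view A into a (unit) world-space direction,
// then forward-project that result into view B.
std::vector<cv::Point2f> clicked{cv::Point2f(320.f, 240.f)};
std::vector<cv::Point3f> world_pts;
viewA.project(clicked, world_pts);   // 2D pixel -> 3D (normalized) world point
std::vector<cv::Point2f> in_view_b;
viewB.project(world_pts, in_view_b); // 3D world point -> 2D pixel in view B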
Now I have mixed feelings about what I get as output. When I draw the projected point on each view, they all correspond, BUT no matter where I click first in the "canonical view", the projected point is always the same.