I am trying to render views of a 3D mesh in VTK. I am doing the following:
vtkSmartPointer<vtkRenderWindow> render_win = vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
render_win->AddRenderer(renderer);
render_win->SetSize(640, 480);
vtkSmartPointer<vtkCamera> cam = vtkSmartPointer<vtkCamera>::New();
cam->SetPosition(50, 50, 50);
cam->SetFocalPoint(0, 0, 0);
cam->SetViewUp(0, 1, 0);
cam->Modified();
vtkSmartPointer<vtkActor> actor_view = vtkSmartPointer<vtkActor>::New();
actor_view->SetMapper(mapper);
renderer->SetActiveCamera(cam);
renderer->AddActor(actor_view);
render_win->Render();
I am trying to simulate a rendering from a calibrated Kinect, for which I know the intrinsic parameters. How can I set the intrinsic parameters (focal length and principal point) on the vtkCamera?
I want the mapping between 2D pixels and 3D camera coordinates to be the same as if the image had been taken by a Kinect.
Hopefully this will help others trying to convert standard pinhole camera parameters to a vtkCamera: I created a gist showing how to do the full conversion. I verified that the world points project to the correct location in the rendered image. The key code from the gist is pasted below.
gist: https://gist.github.com/decrispell/fc4b69f6bedf07a3425b
// apply the transform to scene objects
camera->SetModelTransformMatrix( camera_RT );
// the camera can stay at the origin because we are transforming the scene objects
camera->SetPosition(0, 0, 0);
// look in the +Z direction of the camera coordinate system
camera->SetFocalPoint(0, 0, 1);
// the camera Y axis points down
camera->SetViewUp(0,-1,0);
// ensure the relevant range of depths are rendered
camera->SetClippingRange(depth_min, depth_max);
// convert the principal point to window center (normalized coordinate system) and set it
double wcx = -2*(principal_pt.x() - double(nx)/2) / nx;
double wcy = 2*(principal_pt.y() - double(ny)/2) / ny;
camera->SetWindowCenter(wcx, wcy);
// convert the focal length to view angle and set it
double view_angle = vnl_math::deg_per_rad * (2.0 * std::atan2( ny/2.0, focal_len ));
std::cout << "view_angle = " << view_angle << std::endl;
camera->SetViewAngle( view_angle );
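For context, camera_RT used at the top of this snippet is the camera extrinsics packed into a vtkMatrix4x4 (the gist builds it as well). Below is a minimal sketch of constructing it, assuming R (a row-major 3x3 rotation) and t (a 3-vector) are the usual world-to-camera extrinsics from your own calibration:
// Sketch only: pack assumed extrinsics R (double[3][3], row-major) and t (double[3])
// into the matrix passed to SetModelTransformMatrix above.
vtkSmartPointer<vtkMatrix4x4> camera_RT = vtkSmartPointer<vtkMatrix4x4>::New();
camera_RT->Identity();
for (int row = 0; row < 3; ++row) {
  for (int col = 0; col < 3; ++col) {
    camera_RT->SetElement(row, col, R[row][col]);
  }
  camera_RT->SetElement(row, 3, t[row]);
}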
I too am using VTK to simulate the view from a Kinect sensor. I am using VTK 6.1.0. I know this question is old, but hopefully my answer may help someone else.
The question is how we can set a projection matrix that maps world coordinates to clip coordinates. For more info on that, see this OpenGL explanation.
I use a perspective projection matrix to simulate the Kinect sensor. To control the intrinsic parameters you can use the following member functions of vtkCamera.
double fov = 60.0, np = 0.5, fp = 10; // the values I use
cam->SetViewAngle( fov ); // vertical field of view angle
cam->SetClippingRange( np, fp ); // near and far clipping planes
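To connect those values to actual Kinect intrinsics, the vertical view angle can be derived from the vertical focal length in pixels and the image height, just like in the answer above. A minimal sketch; the fy and height values here are placeholders for your own calibration:
// Sketch: derive the vertical view angle from a focal length given in pixels.
#include <cmath>
const double kPi = 3.14159265358979323846;
double fy = 525.0;      // vertical focal length in pixels (placeholder)
double height = 480.0;  // image height in pixels (placeholder)
double fov_deg = 2.0 * std::atan2(height / 2.0, fy) * 180.0 / kPi;
cam->SetViewAngle(fov_deg);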
In order to give you a sense of what that may look like, I have an old project that I did completely in C++ and OpenGL in which I set the perspective projection matrix as described above, grabbed the z-buffer, and then reprojected the points back out into a scene that I viewed from a different camera. (The visualized point cloud looks noisy because I also simulated noise.)
If you need your own custom projection matrix that isn't of the perspective flavor, I believe the call is:
cam->SetUserTransform( transform ); // transform is a pointer to type vtkHomogeneousTransform
However, I have not used the SetUserTransform method.
This thread was super useful to me for setting camera intrinsics in VTK, especially decrispell's answer. To be complete, however, one case is missing: when the focal lengths in the x and y directions are not equal. This can easily be handled using the SetUserTransform method. Below is sample code in Python:
import numpy as np
import vtk

cam = self.renderer.GetActiveCamera()
m = np.eye(4)
m[0,0] = 1.0*fx/fy
t = vtk.vtkTransform()
t.SetMatrix(m.flatten())
cam.SetUserTransform(t)
where fx and fy are the x and y focal lengths in pixels, i.e. the first two diagonal elements of the intrinsic camera matrix, and np is an alias for the numpy import.
Here is a gist showing the full solution in python (without extrinsics for simplicity). It places a sphere at a given 3D position, renders the scene into an image after setting the camera intrinsics, and then displays a red circle at the projection of the sphere center on the image plane: https://gist.github.com/benoitrosa/ffdb96eae376503dba5ee56f28fa0943
I've implemented an FPS camera based on the up, right and view vectors from this.
Right now I want to be able to interact with the world by placing cubes in a minecraft style.
My lookAt vector is the sum of the view vector and the camera position, so my first attempt was to draw a cube at lookAt, but this is causing a strange behaviour.
I compute every vector as in the page I mentioned (such that lookAt = camera_position + view_direction), but the cube drawn is always around me. I've tried several things like actually placing it (rounding the lookAt), and it appears near the desired position but not at the place I'm looking at.
Given these vectors, how can I draw a cube that is centered at the position my camera is looking at, but a little bit further away (exactly like Minecraft)?
but the cube drawn is always around me.
Yeah, and that's expected: you are placing cubes on the surface of a sphere whose radius is the length of view_direction, centered at camera_position.
Given these vectors, how can I draw a cube that is centered at the position my camera is looking at, but a little bit further away (exactly like Minecraft)?
You need to place cubes at the intersection of the view ray with the scene geometry. In the simplest case that geometry is just a "ground" plane, so you intersect the view ray with the ground plane. Then you round the intersection coordinates to the nearest grid node: xyz = round(xyz / cubexyz) * cubexyz, where cubexyz is the cube size.
Approximate code:
// Intersect a ray (rayPoint + t * rayVector) with the plane defined by planePoint and planeNormal.
Vector3D intersectPoint(Vector3D rayVector, Vector3D rayPoint, Vector3D planeNormal, Vector3D planePoint) {
    Vector3D diff = rayPoint - planePoint;
    double prod1 = diff.dot(planeNormal);      // signed distance of rayPoint from the plane (times |planeNormal|)
    double prod2 = rayVector.dot(planeNormal); // how fast the ray moves along the normal
    double prod3 = prod1 / prod2;              // ray parameter of the intersection
    return rayPoint - rayVector * prod3;       // the intersection point
}
.......
Vector3D cubePos = intersectPoint(view_direction, camera_position, Vector3D(0, 1, 0), Vector3D(0, 0, 0));
cubePos = round(cubePos / cubeSize) * cubeSize;
AddCube(cubePos);
It's hard to tell without having images to look at, but lookAt is most likely your normalized forward vector? If I understood you correctly, you'd want to do something like objectpos = camerapos + forward * 10f (where 10f is the distance, in 3D-space units, at which you want to place the object in front of you) to make sure it's placed a few units in front of your FPS controller.
Actually, if view_direction is your normalized forward vector and your lookAt is camera_pos + view_direction, then you'd end up with something very close to your camera position, which would explain why the cube spawns inside you. Either way, my suggestion should still work :)
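For illustration, a minimal GLM sketch of that suggestion, reusing the names from the question and the grid snap from the other answer:
// Sketch: place the cube a fixed distance along the normalized view direction.
#include <glm/glm.hpp>
glm::vec3 forward = glm::normalize(view_direction);
float distance = 10.0f;                                   // world units in front of the camera
glm::vec3 objectPos = camera_position + forward * distance;
// Optional: snap to the cube grid, as in the other answer.
objectPos = glm::round(objectPos / cubeSize) * cubeSize;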
I am trying to implement logic where, on mouse click, a shot is fired at an object. To do so, I did the following:
I first considered the .obj file of my model and found the region (list of coordinates) that the shot works on (a particular weak point of the body).
I then took the smallest and largest x, y and z values present in the file for that particular region (xmin,ymin,zmin and xmax,ymax,zmax).
To figure out whether the shot has landed on the weak point, I assumed that a shot lands on the weak point if the coordinates of the shot lie between (xmin,ymin,zmin) and (xmax,ymax,zmax).
I assumed the coordinates from the .obj file to be the actual coordinates of the model, since the assimp code I have directly loads in the coordinates of the model. Considering (xmin,ymin,zmin) and (xmax,ymax,zmax), I converted the coordinates to window coordinates via gluProject().
I then considered the current cursor position and checked if the cursor position lies between (xmin,ymin,zmin) and (xmax,ymax,zmax).
The problems I now face are:
The object coordinates provided in the .obj file range between -4 and 4, which then end up around 1.0 after gluProject(), whereas the cursor position lies between (0,0) and (1280,720).
After gluProject(), (xmin,ymin) and (xmax,ymax) are either (0,1) or (1,0), whereas the zmin and zmax values seem fine.
How can I get my logic working?
Here is the code:
// Call shader to draw and acquire necessary information for gluProject()
modelShader.use();
modelShader.setMat4("projection", projection);
modelShader.setMat4("view", view);
glm::mat4 model_dragon;
double time=glfwGetTime();
model_dragon=glm::translate(model_dragon, glm::vec3(cos((360.0-time)/2.0)*60.0,cos(((360.0-time)/2.0))*(-2.5),sin((360-time)/1.0)*60.0));
model_dragon=glm::rotate(model_dragon,(float)(glm::radians(30.0)),glm::vec3(0.0,0.0,1.0));
model_dragon=glm::scale(model_dragon,glm::vec3(1.4,1.4,1.4));
modelShader.setMat4("model", model_dragon);
collision_model=model_dragon;collision_view=view;collision_proj=projection; //so that I can provide the view,model and projection required for gluProject()
ourModel.Draw(modelShader);
Mouse button callback
// Note: dragon_min and dragon_max variables hold the constant position of the min and max coordinates.
void mouse_button_callback(GLFWwindow* window,int button,int action,int mods){
if(button==GLFW_MOUSE_BUTTON_LEFT && action==GLFW_PRESS){
Mix_PlayChannel( -1, shot, 0 ); //Play sound
GLdouble x,y,xmin,ymin,zmin,xmax,ymax,zmax,dmodel[16],dproj[16];
GLint dview[16];
float *model = (float*)glm::value_ptr(collision_model);
float *proj = (float*)glm::value_ptr(collision_proj);
float *view = (float*)glm::value_ptr(collision_view);
for (int i = 0; i < 16; ++i){dmodel[i]=model[i];dproj[i]=proj[i];dview[i]=(int)view[i];} // Convert mat4 to double array
glfwGetCursorPos(window,&x,&y);
gluProject(dragon_min_x,dragon_min_y,dragon_min_z,dmodel,dproj,dview,&xmin,&ymin,&zmin);
gluProject(dragon_max_x,dragon_max_y,dragon_max_z,dmodel,dproj,dview,&xmax,&ymax,&zmax);
if((x>=xmin && x<=xmax) && (y>=ymin && y<=ymax)){printf("Hit\n");defense--;}
The .obj coordinates have values such as:
0.032046 1.533727 4.398055
You are confusing the parameters of gluProject, especially the view parameter. This parameter should contain four integers describing the viewport (x, y, width, height), not the view matrix.
gluProject (and a lot of other glu functions) is tailored towards the fixed-function pipeline and its matrix stacks. Because of this, you have to pass the following information:
model: The modelview matrix, as returned by glGetDoublev(GL_MODELVIEW_MATRIX, ...).
proj: The projection matrix, as returned by glGetDoublev(GL_PROJECTION_MATRIX, ...).
view: The current viewport, as returned by glGetIntegerv(GL_VIEWPORT, ...).
As you see, the view matrix is packed together with the model matrix and view contains the viewport.
I'd strongly advise against using the glu functions at all when working with modern OpenGL. Especially when the matrices are already stored in glm, it is better to use glm::project.
Note1: Converting a floating point matrix to an integer matrix by casting each element almost never results in anything meaningful.
Note2: When projecting a bounding rectangle to screen space, the result will in general not be a rectangle anymore. Projection does not preserve angles, so the result is a general four-cornered polygon. The same goes for bounding boxes: you can't even guarantee that the projected box is contained in the screen-space rectangle defined by projecting [x_min, y_min, z_min] and [x_max, y_max, z_max].
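Putting these points together, here is a minimal sketch using glm::project with the matrices saved in the question and the 1280x720 window mentioned there; it projects all eight corners of the box instead of only the two extremes:
// Sketch: project every corner of the bounding box and build a 2D screen-space
// rectangle from the results before testing the cursor position.
#include <limits>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::project
glm::vec4 viewport(0.0f, 0.0f, 1280.0f, 720.0f);
glm::vec2 scrMin( std::numeric_limits<float>::max());
glm::vec2 scrMax(-std::numeric_limits<float>::max());
for (int i = 0; i < 8; ++i) {
    glm::vec3 corner((i & 1) ? dragon_max_x : dragon_min_x,
                     (i & 2) ? dragon_max_y : dragon_min_y,
                     (i & 4) ? dragon_max_z : dragon_min_z);
    glm::vec3 win = glm::project(corner, collision_view * collision_model,
                                 collision_proj, viewport);
    scrMin = glm::min(scrMin, glm::vec2(win));
    scrMax = glm::max(scrMax, glm::vec2(win));
}
// glm::project measures y from the bottom of the window while glfwGetCursorPos
// measures it from the top, so flip the cursor's y before comparing:
bool hit = (x >= scrMin.x && x <= scrMax.x &&
            (720.0 - y) >= scrMin.y && (720.0 - y) <= scrMax.y);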
I'm currently trying to get the relative position of two Kinect v2s by getting the position of a tracking pattern both cameras can see. Unfortunately I can't seem to get the correct position of the patterns origin.
This is my current code to get the position of the pattern relative to the camera:
std::vector<cv::Point2f> centers;
cv::findCirclesGrid( registeredColor, m_patternSize, centers, cv::CALIB_CB_ASYMMETRIC_GRID );
cv::solvePnPRansac( m_corners, centers, m_camMat, m_distCoeffs, m_rvec, m_tvec, true );
// calculate the rotation matrix
cv::Matx33d rotMat;
cv::Rodrigues( m_rvec, rotMat );
// and put it in the 4x4 transformation matrix
transformMat = matx3ToMatx4(rotMat);
for( int i = 0; i < 3; ++i )
transformMat(i,3) = m_tvec.at<double>(i);
transformMat = transformMat.inv();
cv::Vec3f originPosition( transformMat(0,3), transformMat(1,3), transformMat(2,3) );
Unfortunately, when I compare originPosition to the point in the point cloud that corresponds to the origin found in screen space (saved in centers.at(0) above), I get a very different result.
The screenshot below shows the point cloud from the Kinect, with the point at the screen-space position of the pattern's origin in red (inside the red circle) and the point at originPosition in light blue (inside the light blue circle). The screenshot was taken from directly in front of the pattern; originPosition also lies a bit more to the front.
As you can see, the red dot sits perfectly in the first circle of the pattern, while the blue dot corresponding to originPosition is not even close. In particular, it is definitely not just a scaling issue of the vector from camera to origin. Also, findCirclesGrid is run on the registered color image, and the intrinsic parameters are taken from the camera itself, to ensure there is no difference between the image and the computation of the point cloud.
You have a transformation P -> P' given by R|T. To get the inverse transformation P' -> P, given by R'|T', just do:
R' = R.t();
T' = -R'* T;
And then
P = R' * P' + T'
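As a minimal OpenCV sketch of those formulas, using the m_rvec and m_tvec produced by solvePnPRansac in the code above (P_prime is just an example point expressed in the primed frame):
// Sketch: build the inverse transform R'|T' and apply it to a point.
cv::Matx33d R;
cv::Rodrigues(m_rvec, R);                        // R of the forward transform P -> P'
cv::Vec3d T(m_tvec.at<double>(0), m_tvec.at<double>(1), m_tvec.at<double>(2));
cv::Matx33d R_inv = R.t();                       // R' = R^T
cv::Vec3d T_inv = (R_inv * T) * -1.0;            // T' = -R' * T
cv::Vec3d P_prime(0.0, 0.0, 1.0);                // example point in the primed frame
cv::Vec3d P = R_inv * P_prime + T_inv;           // P = R' * P' + T'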
I have a 3D vtk scene representing a point cloud, displayed through a QVTKWidget.
VTK 7.1, Qt 5.8.
I want to be able to rotate the scene around specific coordinates, but I don't know how to proceed.
I like the trackball interaction. I just need to set the center of rotation, but I'm a bit lost in the VTK API.
I think I can do this by changing the rotation matrix: InvTranslation + Rotation + Translation should do the trick. I see two ways of doing it:
1)
Get the Rotation Matrix computed by VTK
Generate a new matrix
Apply the matrix.
2)
Set a transform for VTK to apply before the process
Set a transform for VTK to apply after the process
Am I heading in the right direction? If yes, how can I implement one of these solutions?
Thanks in advance,
Etienne.
Problem solved. Changing the focal point alone would also change the view, so I applied a few geometric transforms, and there it is.
// vtk Element /////////////////////////////////////////////////////////
vtkRenderWindowInteractor *rwi = widget->GetInteractor();
vtkRenderer *renderer = widget->GetRenderWindow()->GetRenderers()->GetFirstRenderer();
vtkCamera *camera = renderer->GetActiveCamera();
// Camera Parameters ///////////////////////////////////////////////////
double *focalPoint = camera->GetFocalPoint();
double *viewUp = camera->GetViewUp();
double *position = camera->GetPosition();
double axis[3];
axis[0] = -camera->GetViewTransformMatrix()->GetElement(0,0);
axis[1] = -camera->GetViewTransformMatrix()->GetElement(0,1);
axis[2] = -camera->GetViewTransformMatrix()->GetElement(0,2);
// Build the transformation /////////////////////////////////////////////////
vtkSmartPointer<vtkTransform> transform = vtkSmartPointer<vtkTransform>::New();
transform->Identity();
transform->Translate(d->center[0], d->center[1], d->center[2]);
transform->RotateWXYZ(rxf, viewUp); // Azimuth
transform->RotateWXYZ(ryf, axis); // Elevation
transform->Translate(-d->center[0], -d->center[1], -d->center[2]);
double newPosition[3];
transform->TransformPoint(position,newPosition); // Transform Position
double newFocalPoint[3];
transform->TransformPoint(focalPoint, newFocalPoint); // Transform Focal Point
camera->SetPosition(newPosition);
camera->SetFocalPoint(newFocalPoint);
// Orthogonalize view up //////////////////////////////////////////////////
camera->OrthogonalizeViewUp();
renderer->ResetCameraClippingRange();
rwi->Render();
You just have to change the focal point of your vtkCamera
vtkSmartPointer<vtkCamera> camera =
vtkSmartPointer<vtkCamera>::New();
camera->SetPosition(0, 0, 20);
camera->SetFocalPoint(0, 0, 10); // The center point is now (0, 0, 10)
I have managed to rotate a rectangle in OpenGL (C++) just fine. I am making a program that tests two rectangles for collision using the separating axis theorem. To use the theorem, I need the x and y coordinates of each corner of the rectangle, but my problem is that although I call glRotatef(...), the stored corner coordinates are never updated to their rotated values, even though the rectangle is drawn rotated as it should be. How can I update the coordinates of my rectangle after glRotatef is called?
Code:
// float r1.x[4] and r1.y[4] hold the x and y position of each of the 4 corners, starting with the upper left (r1.x[0], r1.y[0])
glLoadIdentity();
glTranslatef((r1.x[0] + r1.x[2]) / 2, (r1.y[1] + r1.y[3]) / 2, 0); // Translates matrix to center of rectangle
glRotatef(r1.angle, 0, 0, 1);
glTranslatef(-((r1.x[0] + r1.x[2]) / 2), -((r1.y[1] + r1.y[3]) / 2), 0); // Translates back
r1.angle++;
glBegin(GL_QUADS);
glVertex2f(r1.x[0], r1.y[0]);
glVertex2f(r1.x[1], r1.y[1]);
glVertex2f(r1.x[2], r1.y[2]);
glVertex2f(r1.x[3], r1.y[3]);
glEnd();
Transformation calls in OpenGL (like glTranslatef and glRotatef) update an internal transformation matrix that gets applied to the vertices you provide before they are drawn on the screen. OpenGL does not touch your data at all.
In general, that is what you want. You have a model that is constant in time, but it gets transformed around.
If, however, you do need to update your data, you need to build your own transformation matrix, multiply your vertices by it manually, and then draw with a clean transformation matrix (loaded with glLoadIdentity).
You could get a little help from OpenGL, though, as you can read the current transformation matrix back from OpenGL, but I don't recommend this. The math is not that hard, and you'd appreciate learning how to do it.
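For illustration, here is a minimal sketch of doing that update on the CPU, assuming a hypothetical Rect struct holding the x[4]/y[4] arrays and angle from the question; the rotated corners can then be drawn with an identity modelview matrix and fed straight into the separating-axis test:
// Sketch: rotate the stored corners around the rectangle's center, matching what
// glRotatef(angle, 0, 0, 1) does to the drawn quad. Call it with the per-frame
// increment (1 degree in the question's loop), since it updates the corners in place.
#include <cmath>
void rotateRectangle(Rect& r1, float angleDegrees) {
    const float rad = angleDegrees * 3.14159265f / 180.0f;
    const float c = std::cos(rad), s = std::sin(rad);
    const float cx = (r1.x[0] + r1.x[2]) / 2.0f;   // center, computed as in the question
    const float cy = (r1.y[1] + r1.y[3]) / 2.0f;
    for (int i = 0; i < 4; ++i) {
        const float dx = r1.x[i] - cx;             // move the corner to the origin
        const float dy = r1.y[i] - cy;
        r1.x[i] = cx + dx * c - dy * s;            // rotate counter-clockwise and move back
        r1.y[i] = cy + dx * s + dy * c;
    }
}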