Movement of a surgical robot's arm in OpenGL

I have a question concerning a surgical robot arm's movements in OpenGL.
Our arm consists of 7 pieces that are supposed to be the arm's joints; they are responsible for bending and twisting the arm. We draw the arm this way: first we create the element responsible for moving the shoulder "up and down", then we "move" (using glTranslatef) to the point at which we draw the next element, responsible for twisting the shoulder (we control the movement using glRotatef), and so on with the next joints (elbow, wrist).
The point is to create an arm that can make human-like movements. Our mechanism works, but now our tutor wants us to draw a line strip with the end of the arm. We put the code responsible for drawing and moving the arm between glPushMatrix and glPopMatrix, so it behaves like a real arm; I mean, when we move the shoulder, all the other elements of the arm move too.
There are a lot of elements moving and rotating; we have a couple of rotation matrices attached to different elements that we can control, and now we have no idea how to precisely find the new location of the end of the arm in space, so that we can add a new point to the line strip. Can anyone help?
glGetFloatv(GL_MODELVIEW_MATRIX, mvm2);  /* read back the current modelview matrix */
x = mvm2[12];  /* translation is the 4th column (column-major layout) */
y = mvm2[13];
z = mvm2[14];
glPointSize(5.0f);
glColor3f(1.0f, 0.0f, 0.0f);
glBegin(GL_POINTS);
glVertex3f(x, y, z);
glEnd();
When I checked the x, y, z values in the watch window, I got (0, -1.16e-12, 17.222222), which can't be right, as my arm is only about 9.0 units long (along the z-axis). I think only the last column of the modelview matrix is important, and I don't have to multiply it by the local coordinates of the vertex, as they are (0,0,0) since I finish my drawing here.

We have no idea how to precisely find the new location of the end of the arm in space, to be able to add a new point to the line strip.
You do this by performing the matrix math and transformations yourself.
(from comment)
To do this we are supposed to multiply the matrices and get some information out of glGetFloatv
Please don't do this. Especially not if you're supposed to build a pretransformed line strip geometry yourself. OpenGL is not a matrix math library, and there's absolutely no benefit to using the fixed-function pipeline's matrix functions, but there are a lot of drawbacks. Better to use a real matrix math library.
Your robot arm technically consists of a number of connected segments, where each segment is transformed by the composition of the transformations of the segments above it in the transformation hierarchy.
M_i = M_{i-1} · (R_i · T_i)
where R_i and T_i are the respective rotation and translation of each segment. So for each segment you need the individual transform matrix to retrieve the point of the line segment.
Since you'll place each segment's origin at the tip of the previous segment, you'd transform the homogeneous point (0,0,0,1) with the segment's transformation matrix, which has the nice property of being just the 4th column of that matrix.
This leaves you with the task of creating the transformation matrix chain. Doing this with OpenGL is tedious; use a real math library for it. If your tutor insists on you using the OpenGL fixed-function pipeline, please ask him to show you the reference for those functions in the specification of a current OpenGL version (OpenGL 3 and later); he won't find them, because all the matrix math functions have been removed entirely from modern OpenGL.
For math libraries I can recommend GLM, Eigen (with the OpenGL extra module) and linmath.h (self-advertisement). With each of these libraries, building transformation hierarchies is simple, because you can create copies of each intermediary matrix without much effort.
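As an illustration of the chain M_i = M_{i-1} · (R_i · T_i), here is a minimal, library-free sketch (plain Python used only for the math; with GLM you'd use glm::rotate and glm::translate instead of these hand-rolled helpers). Each segment post-multiplies its R·T onto the running matrix, and the tip position is read straight from the 4th column:

```python
import math

def mat_mul(A, B):
    # 4x4 row-major matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_z(deg):
    # rotation about the z-axis
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def joint_positions(segments):
    """segments: list of (bend_angle_deg, length) pairs.
    Returns the world-space tip of each segment: transforming (0,0,0,1)
    by M_i is the same as reading off the 4th column of M_i."""
    M = translate(0, 0, 0)  # identity
    tips = []
    for angle, length in segments:
        M = mat_mul(M, mat_mul(rot_z(angle), translate(length, 0, 0)))
        tips.append((M[0][3], M[1][3], M[2][3]))
    return tips

# shoulder bent up 90 degrees, elbow bent back -90 degrees, both length 1
tips = joint_positions([(90.0, 1.0), (-90.0, 1.0)])
```

The last entry of tips is the point you would append to the line strip each frame.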

If you're supposed to use glGetFloatv, then this refers to calling it with the GL_MODELVIEW_MATRIX argument, which returns the current modelview matrix. You can then use this matrix to transform a point from the hand's coordinate system to the world-space coordinate system.
However, calling glGetFloatv is bad practice, as the readback can stall the pipeline and reduce rendering performance. I think you should talk to your tutor about teaching outdated and even deprecated functionality; maybe he can get the professor to update the materials.
Edit: Your code for retrieving the translation is correct. However, you can't draw the point with that same modelview matrix still applied. Before drawing it, you have to reset the modelview matrix with glLoadIdentity or by popping the matrix you pushed earlier.


Another Perspective Camera issue

- SOLVED -
Warning: I'm not a native English speaker.
Hi,
I'm currently trying to make a 3D camera; surely because of some mistakes or math basics that I lack, I think I will definitely go insane if I don't ask for someone's help.
OK, let's go.
First, I have a custom game engine that only allows dealing with the camera by setting up:
the projection parameters (according to an orthographic or perspective mode)
the view: with a vector3 for the position and a quaternion for the orientation
(and no, we will not discuss this design right now)
Now I'm writing a camera in my gameplay code (which uses the functionality of the engine above).
My camera's environment has the following specs:
up_vector = (0, 1, 0)
forward_vector = (0, 0, 1)
angles are in degrees
glm as math lib
In my camera code I handle the player input and convert it into data that I send to my engine.
In the engine I only do:
glm::mat4 projection_view = glm::perspective(...parameters...) * glm::inverse(view_matrix)
And voila I have my matrix for the rendering step.
And now a little scenario with simple geometry.
In a 3D space we have 7 circles, drawn from z = -300 to z = 300.
The circle at z = -300 is red and the one at z = 300 is blue.
There are decorative shapes (triangles/boxes); they are there to make it easier to identify up and right.
When I run the scenario I get the following disco result, which is not what I want.
As you can see in my example of a colorful potatoid above, the blue circle is the biggest, but it is set up to be the farthest along z. According to the perspective it should be the smallest. What happened?
On the other hand, when I use an orthographic camera everything works well.
Any ideas ?
About the Perspective matrix
I generate my perspective matrix with the function glm::perspective(). After a quick check, I have confirmed that my parameters' values are always good, so I can easily imagine that my issue doesn't come from there.
About the View matrix
First, I think my problem must be around here, maybe... So, I have a vector3 for the position of the camera and three floats describing its rotation about each axis.
And here is the experimental part, where I don't know what I'm doing!
I copy those three floats into a vector3 that I use as Euler angles, and use a glm quaternion constructor that can create a quat from Euler angles, like this:
glm::quat q(glm::radians(euler_angles));
Finally, I send the quaternion into the engine like that, without having used my up and forward vectors (anyway, I don't currently see how to use them).
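To sanity-check that step outside the engine, the same orientation can be built from explicit axis-angle quaternions (hand-rolled here for clarity; glm::angleAxis would do the same). Note this sketch fixes one particular composition order, yaw about the up vector and then pitch, which may or may not match what glm::quat(glm::radians(euler_angles)) does with your angles; that convention mismatch is exactly the kind of thing to check:

```python
import math

def quat_axis_angle(axis, deg):
    # unit quaternion (w, x, y, z) for a rotation of deg degrees about axis
    half = math.radians(deg) / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_rotate(q, v):
    # rotate vector v by quaternion q: q * (0, v) * conjugate(q)
    w, x, y, z = q
    p = quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))
    return p[1:]

up, forward = (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
yaw, pitch = 90.0, 0.0
q = quat_mul(quat_axis_angle(up, yaw), quat_axis_angle((1.0, 0.0, 0.0), pitch))
new_forward = quat_rotate(q, forward)  # a 90 degree yaw turns +z into +x
```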
I've been working on this for too long and I think my head will explode; the saddest part is that I think I'm really close.
PS0: Those who help me have my eternal gratitude.
PS1: Please do not give me theory links: I no longer have any neurons left, and I have already read two interesting but unhelpful books. Maybe because I haven't understood everything yet.
(3D Math Primer for Graphics and Game Development / Mathematics for 3D Game Programming and Computer Graphics, Third Edition)
SOLUTION
It was a dumb mistake... at the very end of my rendering pipeline, I forgot to sort the graphical objects by their "z" according to the camera orientation.
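The missing sort can be sketched like this (the object tuples are hypothetical, just to show the ordering step; for opaque geometry, enabling the depth buffer with glEnable(GL_DEPTH_TEST) usually makes the sort unnecessary):

```python
def sort_back_to_front(objects, cam_pos, cam_forward):
    """Painter's-algorithm ordering: farthest along the view direction first.
    objects: list of (name, position) tuples."""
    def view_depth(item):
        _, p = item
        # signed distance of the object along the camera's forward vector
        return sum((p[i] - cam_pos[i]) * cam_forward[i] for i in range(3))
    return sorted(objects, key=view_depth, reverse=True)

# camera behind the red circle, looking down +z
circles = [("red", (0, 0, -300)), ("mid", (0, 0, 0)), ("blue", (0, 0, 300))]
draw_order = sort_back_to_front(circles, (0, 0, -600), (0, 0, 1))
# blue (farthest) is drawn first, red (nearest) last
```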
You said:
In my camera code I handle the player input, convert it into data that I send to my engine. In the engine I only do:
glm::mat4 projection_view = glm::perspective(...parameters...) * glm::inverse(view_matrix)
And voila, I have my matrix for the rendering step.
Are you using the projection matrix when you render the coloured circles?
Should you be using an identity matrix to draw the circle, so the model is then viewed according to the view/perspective matrices?
The triangles and squares look correct - do you have a different transform in effect when you render the circles?
Hi TonyWilk and thanks
Are you using the projection matrix when you render the coloured circles?
Yes, I generate my projection matrix from the glm::perspective() function and then use my projection_view matrix on my vertices when rendering, as indicated in the first block of code.
Should you be using an identity matrix to draw the circle, so the model is then viewed according to the view/perspective matrices?
I don't know if I have understood this question correctly, but here is an answer.
Theoretically, I do not apply the perspective matrix directly to the vertices. In pseudocode, I use:
rendering_matrix = projection_matrix * inverse_camera_view_matrix
The triangles and squares look correct - do you have a different transform in effect when you render the circles?
In the end, I always use the same matrix. And if the triangles and squares seem to be good, that is only due to an "optical effect": the biggest box is actually associated with the blue circle, and the smallest one with the red.

Return to the original OpenGL origin coordinates

I'm currently trying to solve a problem regarding the display of an arm avatar.
I'm using a 3D tracker that's sending me coordinates and angles through my serial port. It works quite well as long as I only want to show a "hand" or a block of wood in its place in 3D space.
The problem is: when I want to draw an entire arm (let's say the wrist is stiff, so the only degree of freedom is the elbow), I'm using the given coordinates (to which I've glTranslatef'd and glMultMatrixf'd), but I want to draw another quad primitive with two vertices that are relative to the tracker coordinates (part of the "elbow") and two vertices that are always fixed next to the camera (part of the "shoulder"). However, I can't get out of my translated coordinate system.
Is my question clear?
My code is something like this:
cubeStretch = 0.15;
computeRotationMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glTranslatef(handX, handY, handZ);
glMultMatrixf(*rotationMatrix);
glBegin(GL_QUADS);
/*some vertices for the "block of wood"*/
/*then a vertex which is relative to handX-handZ*/
glVertex3f(-cubeStretch, -cubeStretch+0.1, 5+cubeStretch);
/*and here I want to go back to the origin*/
glTranslatef(-handX, -handY, -handZ); /* note: matrix calls are invalid between glBegin/glEnd */
/*so the next vertex should preferably be next to the camera; the shoulder, so to say*/
glVertex3f(+0.5,-0.5,+0.5);
I already know the last three lines don't work; it's just one of the ways I've tried.
I realize it might be hard to understand what I'm trying to do. Does anyone have any idea how to get back to the "un-glTranslatef'd" coordinate origin?
(I'd rather avoid having to implement a whole bone/joint system for this.)
Edit: https://imagizer.imageshack.us/v2/699x439q90/202/uefw.png
In the picture you can see what I have so far. As you can see, the emphasis so far has not been on beauty, but rather on using the tracker coordinates to correctly display something on the screen.
The white cubes are target points which turn red when the arm avatar "touches" them ("arm avatar" used here as a word for the hideous brown contraption to the right, but I think you know what I mean). I now want to have a connection from the back end of the "lower arm" (the broad end of the avatar is supposed to be the hand) to just the right of the screen. Maybe it's clearer now?
a) The fixed-function matrix stack is deprecated and you shouldn't use it. Use a proper matrix math library (like GLM), and make copies of the branching nodes in your transformation hierarchy so that you can use those as starting points for different branches.
b) You can reset the matrix state to identity at any time using glLoadIdentity. Using glPushMatrix and glPopMatrix you can create a stack. You know how stacks work, don't you? Pushing makes a copy and adds it to the top; all following operations happen on that copy. Popping removes the element at the top and gives you back the state from before the previous push.
Update
Regarding transformation trees you may be interested in the following:
https://stackoverflow.com/a/8953078/524368
https://stackoverflow.com/a/15566740/524368
(I'd rather avoid having to implement a whole bone/joint system for this.)
It's actually the easiest way to do this. In terms of fixed-function OpenGL, a bone-joint is just a combination of glTranslate(…); glRotate(…).

Same Marker position, Different Rotation and Translation matrices - OpenCV

I'm working on an Augmented Reality marker detection program, using OpenCV and I'm getting two different rotation and translation values for the same marker.
The 3D model switches between these states automatically, without my control, when the camera is moved slightly. Screenshots of the two situations are added below. I want Image #1 to be the correct one. How and where do I correct this?
I have followed "How to use an OpenCV rotation and translation vector with OpenGL ES in Android?" to create the projection matrix for OpenGL.
ex:
// code to convert rotation, translation vector
glLoadMatrixf(ConvertedProjMatrix);
glColor3f(0,1,1) ;
glutSolidTeapot(50.0f);
Image #1
Image #2
Additional
I'd be glad if someone could suggest a way to make the teapot sit on the marker plane. I know I have to edit the rotation matrix, but what's the best way of doing that?
To rotate the teapot you can use glRotatef(). If you want to rotate your current matrix, for example, by 125° around the y-axis, you can call:
glRotatef(125, 0, 1, 0);
I can't make out the current orientation of your teapot, but I guess you would need to rotate it by 90° around the x-axis.
I have no idea about your first problem, OpenCV seems unable to decide which of the shown positions is the "correct" one. It depends on what kind of features OpenCV is looking for (edges, high contrast, unique points...) and how you implemented it.
Have you tried swapping the pose algorithm (ITERATIVE, EPNP, P3P)? Or possibly use the values from the previous calculation - remember that it's just giving you its 'best guess'.
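That last idea, reusing the previous calculation, can be sketched as keeping whichever candidate pose lies closest to the last accepted one (the function and data below are hypothetical, only showing the selection step; newer OpenCV versions can also return several candidate solutions, e.g. via solvePnPGeneric):

```python
import math

def closest_pose(prev_rvec, candidates):
    """Pick the candidate rotation vector nearest the previous frame's pose.
    A cheap damper for the plane-pose ambiguity of marker PnP."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(candidates, key=lambda rvec: dist(rvec, prev_rvec))

prev = (0.10, 0.00, 0.00)      # pose accepted last frame
flipped = (3.00, 0.10, 0.20)   # the spurious "mirrored" solution
steady = (0.12, 0.01, 0.00)    # the consistent solution
chosen = closest_pose(prev, [flipped, steady])
```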

Modern OpenGL Question

In my OpenGL research (the OpenGL Red Book, I think) I came across an example of a model of an articulating robot arm consisting of an "upper arm", a "lower arm", a "hand", and five or more "fingers". Each of the sections should be able to move independently, but constrained by the "joints" (the upper and lower "arms" are always connected at the "elbow").
In immediate mode (glBegin/glEnd), they use one mesh of a cube, called "member", and use scaled copies of this single mesh for each of the parts of the arm, hand, etc. "Movements" were accomplished by pushing rotations onto the transformation matrix stack for each of the following joints: shoulder, elbow, wrist, knuckle - you get the picture.
Now, this solves the problem, but since it uses the old, deprecated immediate mode, I don't yet understand the solution in a modern OpenGL context. My question is: how should I approach this problem using modern OpenGL? In particular, should each individual "member" keep track of its own current transformation matrix, since matrix stacks are no longer kosher?
Pretty much. If you really need it, implementing your own stack-like interface is pretty simple. You would literally just store a stack, implement whatever matrix operations you need using your preferred math library, and have some way to initialize your matrix uniform from the top element of the stack.
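A minimal sketch of such a stack (Python for brevity; a C++ version holding glm::mat4 copies would mirror glPushMatrix/glPopMatrix one-to-one):

```python
import copy

def identity():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

class MatrixStack:
    """Replacement for the removed fixed-function matrix stack."""
    def __init__(self):
        self._stack = [identity()]

    def top(self):
        # the matrix you'd upload as the model matrix uniform
        return self._stack[-1]

    def push(self):
        # save a copy; later edits won't disturb the saved state
        self._stack.append(copy.deepcopy(self._stack[-1]))

    def pop(self):
        self._stack.pop()

    def translate(self, x, y, z):
        # post-multiply the top matrix by a translation (column-vector convention)
        m = self._stack[-1]
        for i in range(3):
            m[i][3] += m[i][0] * x + m[i][1] * y + m[i][2] * z

stack = MatrixStack()
stack.push()
stack.translate(0.0, -2.0, 0.0)        # move to the elbow
elbow = copy.deepcopy(stack.top())     # draw the lower arm with this matrix
stack.pop()                            # back to the shoulder frame
```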
In your robot arm example, suppose that the linkage is represented as a tree (or even a graph, if you prefer), with relative transformations specified between connected bodies. To draw the robot arm, you just traverse this data structure and set the transformation of each child body to be the parent body's transformation composed with its own. For example:
def draw_linkage(body, view):
    # draw the body using the view matrix
    for child, relative_xform in body.edges:
        if child in visited:
            continue
        visited.add(child)
        draw_linkage(child, view * relative_xform)
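Here is a runnable version of that traversal, with the transforms abbreviated to pure translations so that composing is just vector addition (a real version would multiply 4x4 matrices; the body names and offsets are made up for illustration):

```python
def compose(parent, rel):
    # stand-in for a matrix product: pure translations compose by addition
    return tuple(p + r for p, r in zip(parent, rel))

class Body:
    def __init__(self, name):
        self.name = name
        self.edges = []  # list of (child, relative_xform) pairs

def draw_linkage(body, xform, world):
    world[body.name] = xform          # here you'd set the uniform and draw
    for child, rel in body.edges:
        draw_linkage(child, compose(xform, rel), world)

upper, lower, hand = Body("upper_arm"), Body("lower_arm"), Body("hand")
upper.edges.append((lower, (0.0, -2.0, 0.0)))  # elbow offset
lower.edges.append((hand, (0.0, -2.0, 0.0)))   # wrist offset

world = {}
draw_linkage(upper, (0.0, 0.0, 0.0), world)
# world["hand"] is now (0.0, -4.0, 0.0)
```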
In the case of rigid parts connected by joints, one usually treats each part as an individual submesh, loading the appropriate matrix before drawing it.
In the case of "connected"/"continuous" meshes, like a face, animation usually happens through bones and deformation targets. Each of those defines a deformation, and every vertex in the mesh is assigned a weight for how strongly it is affected by each deformer. Technically this can be applied to a rigid limb model too, by giving each limb a single deformer with a nonzero weighting.
Any decent animation system keeps track of transformations (matrices) itself anyway; the OpenGL matrix stack functions have seldom been used in serious applications, practically since OpenGL was invented. Usually the transformations are stored in a hierarchy.
You generally do this at a level above OpenGL, using a scenegraph.
The matrix transforms at each node in the scenegraph tree map directly onto the OpenGL matrices, so it's pretty efficient.

Finding Rotation Angles between 3d points

I am writing a program that will draw a solid along the curve of a spline. I am using Visual Studio 2005 and writing in C++ for OpenGL. I'm using FLTK (the Fast Light Toolkit) to open my windows.
I currently have an algorithm that will draw a cardinal cubic spline, given a set of control points, by breaking the intervals between the points into subintervals and drawing line segments between these subpoints. The number of subintervals is variable.
The line-drawing code works wonderfully, and basically works as follows: I generate a set of points along the spline curve using the spline equation and store them in an array (as a special data structure called Pnt3f, where the coordinates are 3 floats and there are handy functions such as distance, length, dot and cross product). Then I have a single loop that iterates through the array of points and draws them like so:
glBegin(GL_LINE_STRIP);
for (int pt = 0; pt <= numsubsegments; ++pt) {
    glVertex3fv(points[pt].v());
}
glEnd();
As stated, this code works great. Now what I want to do is, instead of drawing a line, extrude a solid. My current exploration is using a "cylinder" quadric to create a tube along the line. This is a bit trickier, as I have to orient OpenGL in the direction I want to draw the cylinder. My idea is to do this:
Pseudocode:
Push the current matrix,
translate to the first control point
rotate to face the next point
draw a cylinder (length = distance between the points)
Pop the matrix
repeat
My problem is getting the angles between the points. I only need yaw and pitch; roll isn't important. I know that the arc-cosine of the dot product of the two points, divided by the product of their magnitudes, returns the angle between them, but this is not something I can feed to OpenGL to rotate with. I've tried doing this in 2D, using the XZ plane to get the x rotation, and making the points vectors from the origin, but it does not return the correct angle.
My current approach is much simpler. For each plane of rotation (X and Y), I find the angle by:
arc-cosine( (difference in 'x' values) / distance between the points )
The 'x' value depends on how you set your plane up, though for my calculations I always use world x.
Barring a few issues with making it draw in the correct quadrant that I haven't worked out yet, I'd like advice on whether this is a good implementation, or whether someone knows a better way.
You are correct in forming two vectors from the three points of two adjacent line segments and then using the arc-cosine of the dot product to get the angle between them. To make use of this angle, you need to determine the axis around which the rotation should occur: take the cross product of the same two vectors to get this axis. You can then build a transformation matrix using this axis-angle pair, or pass it as parameters to glRotatef.
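A sketch of that computation (plain Python for the math; using atan2 of the cross product's length and the dot product is slightly more numerically robust than acos alone, and needs no normalization of the inputs). The result is exactly the axis-angle pair glRotatef takes:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axis_angle_between(v1, v2):
    """Rotation taking direction v1 to direction v2.
    Returns (axis, angle_in_degrees), usable as glRotatef(angle, *axis)."""
    axis = cross(v1, v2)
    n = math.sqrt(dot(axis, axis))
    # |cross| = |v1||v2| sin(t), dot = |v1||v2| cos(t), so atan2 recovers t
    angle = math.degrees(math.atan2(n, dot(v1, v2)))
    if n < 1e-9:
        # vectors (anti-)parallel: any perpendicular axis will do
        return (1.0, 0.0, 0.0), angle
    return tuple(c / n for c in axis), angle

# orient a cylinder drawn along +z toward the +x direction
axis, angle = axis_angle_between((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```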
A few notes:
first of all, this:
for(pt = 0; pt<=numsubsegements ; ++pt) {
glBegin(GL_LINE_STRIP);
glVertex3fv(pt.v());
}
glEnd();
is not a good way to draw anything. You MUST have one glEnd() for every single glBegin(); you probably want to move the glBegin() out of the loop. The fact that this works at all is pure luck.
second thing
My current exploration is using a "cylinder" quadric to create a tube along the line
This will not work as you expect. The "cylinder" quadric has a flat top base and a flat bottom base. Even if you succeed in making the correct rotations according to the spline, the edges of the flat tops are going to poke out of the volume of your intended tube, and it will not be smooth. You can try it in 2D with just pen and paper: try to draw a smooth tube using only shorter tubes with flat bases. It is impossible.
Third, to your actual question: the definitive tool for such rotations is quaternions. It's a bit complex to explain in this scope, but you can find plentiful information anywhere you look.
If you had used Qt instead of FLTK, you could also have used libQGLViewer. It has an integrated Quaternion class which would save you the implementation. If you still have a choice, I strongly recommend moving to Qt.
Have you considered gluLookAt? Put your control point as the eye point, the next point as the reference point, and make the up vector perpendicular to the difference between the two.