How to use and set axes in a 3D scene

I'm creating a simulator coded in python and based on ODE (Open Dynamics Engine). For visualization I chose VTK.
For every object in the simulation, I create a corresponding source (e.g. vtkCubeSource), mapper and actor. I am able to show objects correctly and update them as the simulation runs.
I want to add axes to have a point of reference and to show the direction of each axis. Doing that I realized that, by default, X and Z are in the plane of the screen and Y points outwards. In my program I have a different convention.
I've been able to display axes in 2 ways:
1) Using vtkAxes:
axes = vtk.vtkAxes()
axesMapper = vtk.vtkPolyDataMapper()
axesMapper.SetInputConnection(axes.GetOutputPort())
axesActor = vtk.vtkActor()
axesActor.SetMapper(axesMapper)
axesActor.GetProperty().SetLineWidth(4)
2) Using vtkAxesActor (the colors do not match the first case):
axesActor = vtk.vtkAxesActor()
axesActor.AxisLabelsOn()
axesActor.SetShaftTypeToCylinder()
axesActor.SetCylinderRadius(0.05)
In the second approach, the user can set many parameters controlling how the axes are displayed. In the first, I only managed to set the line width and nothing else.
So, my questions are:
Which is the correct way to define and display axes in a 3D scene? I just want them in a fixed position and orientation.
How can I set a different convention for the axes orientation, both for their display and the general visualization?

Well, if you do not mess with the objects' transformation matrices for display purposes, it is probably sufficient to just put your camera into a different position while using axes approach 2. The easy methods to adjust your camera position are Pitch(), Azimuth() and Roll().
If you do mess with object transforms, then apply the same transform to the axes.
Dženan Zukić kindly answered this question on the vtkusers@vtk.org mailing list.
http://www.vtk.org/pipermail/vtkusers/2011-November/119990.html
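For illustration, here is a minimal sketch of that suggestion in Python, assuming a standard renderer/interactor setup; the rotation angles and the RotateX(-90) transform are placeholders to adjust to your own axis convention:

import vtk

# Approach 2: a vtkAxesActor in a fixed position and orientation.
axesActor = vtk.vtkAxesActor()
axesActor.AxisLabelsOn()
axesActor.SetShaftTypeToCylinder()
axesActor.SetCylinderRadius(0.05)

renderer = vtk.vtkRenderer()
renderer.AddActor(axesActor)

# Rotate the view instead of the data: adjust the camera until the axes
# match your own convention (the angles below are placeholders).
camera = renderer.GetActiveCamera()
camera.Azimuth(30)    # rotate the camera about the view-up vector
camera.Pitch(-60)     # rotate about the camera's horizontal axis
camera.Roll(0)        # rotate about the view direction
renderer.ResetCamera()

# If the simulation objects are drawn through a transform, apply the same
# transform to the axes so everything stays consistent (example transform).
transform = vtk.vtkTransform()
transform.RotateX(-90)
axesActor.SetUserTransform(transform)

renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(renderWindow)
renderWindow.Render()
interactor.Start()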

Related

Qt 3D scatter graph: how can I adjust the scale of an axis?

I'm currently developing a Qt desktop application using the Q3DScatter class. I'm inspecting Qt's 3D Scatter example project and I tried to modify the data item set to plot my own data. The data is plotted except that one axis is not well scaled and my 3D plot looks really messy. I'm looking for a way to adjust this axis. I've tried to change the range and the segment count of the axis, I even tried to set the "AutoAdjustRange" of the axis to true, but nothing seemed to solve the problem.
Would really appreciate some help.
PS: Here's a screen capture of what my 3D scatter graph looks like (the "messy" axis is shown with the red arrow)
I figured this out by creating a CustomFormatter class, subclassing QValue3DAxisFormatter and reimplementing some of its functions (I followed this tutorial). Then I set the axis formatter to my custom formatter (m_graph->axisZ()->setFormatter(cf);).
Subclassing QValue3DAxisFormatter will not work: it determines where ticks and labels are placed, but not how large the axes actually are.
To do that, you can set the (horizontal) aspect ratio, which is a property of Q3DScatter. The following settings will make the data fit into a cube volume:
plot->setAspectRatio(1.0);
plot->setHorizontalAspectRatio(1.0);

Matplotlib axis text coordinates inconsistency?

I'm working on a piece of code to automatically align x-axis labels for a variable number of subplots. When I started having trouble setting label positions manually, I checked to be sure I could just transform from one set of coordinates to the other without changing anything, with a code snippet like this:
# xaxes is a list of Axes objects
textCoords = [ax.xaxis.get_label().get_position() for ax in xaxes]
newCoords = [ax.transAxes.inverted().transform(
                 ax.xaxis.get_label().get_transform().transform(c))
             for ax, c in zip(xaxes, textCoords)]
for ax, c in zip(xaxes, newCoords):
    ax.xaxis.set_label_coords(*c)
In theory, this code doesn't change any coordinates; it just gets the coordinates of each label, maps them to Axes coordinates using the Text object's internally stored transform, and then sets the position. Yet running this code removes my labels entirely, and a little experimentation shows that they go off the bottom edge of the plot.
Have I just misunderstood the transforms involved here?
You're understanding the transforms correctly, but there's a caveat to using display coordinates before the plot has been displayed.
The short answer is that putting in a call to plt.draw() before your code snippet above will fix your immediate problem.
You're trying to link the different axes through display coordinates. However, before the plot has been drawn for the first time, the renderer isn't fully initialized yet, and display coordinates don't have much meaning.
Can you elaborate a bit more on what you're trying to do? There may be an easier way.
Alternatively, if you want to avoid the extra draw, you can link things through figure coordinates before the plot has been drawn. (They're well defined regardless.)
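As a concrete sketch of the first fix (the extra draw), assuming a simple two-subplot figure; ax1, ax2 and xaxes are illustrative names, and the only change to the snippet from the question is the plt.draw() call before the round trip:

import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.set_xlabel("x1")
ax2.set_xlabel("x2")
xaxes = [ax1, ax2]

# Force a draw so the renderer exists and display coordinates are meaningful.
plt.draw()

# Round-trip each label position through display coordinates, as in the
# question; after the draw() above, the labels stay where they were.
textCoords = [ax.xaxis.get_label().get_position() for ax in xaxes]
newCoords = [ax.transAxes.inverted().transform(
                 ax.xaxis.get_label().get_transform().transform(c))
             for ax, c in zip(xaxes, textCoords)]
for ax, c in zip(xaxes, newCoords):
    ax.xaxis.set_label_coords(*c)

plt.show()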

Applying a transformation to a set in Raphael.js

Using Raphael 2.0, I am trying to apply a transform to a set of objects in a way that is relative to all of the objects in the set. However, the effect I am getting is that the transform is applied to each item individually, irrespective of the other objects in the set.
For example: http://jsfiddle.net/tim_iles/VCca9/8/ - if you now uncomment the last line and run the code, each circle is scaled to 0.5x. The actual effect I am trying to achieve would be to scale the whole set of circles, so their relative distances are also scaled, which should put all four of them inside the bounding box of the white square.
Is there a way to achieve this using Raphael's built in tools?
When you scale, the first parameter is the X-scale. If you provide no other parameters, it will use that as the Y-scale, and scale around the center of the object.
When you scaled the rectangle, it scaled around the center of the rectangle. If you want the circles to scale around that point as well, rather than their centers, you should provide that point.
So the last line could be set.transform("s0.5,0.5,100,100"); (100,100 being the center of the rectangle you scaled)
At least, I think this is what you're asking for.

Plotting a graph with given double coordinates

I receive an array of coordinates (double coordinates with -infinity < x < +infinity and 0 <= y <= 10) and want to draw a polyline using those points. I want the graph to always begin on the left border of my image, and end at the right. The bottom border of my image always represents a 0 y-value, and the top border always a 10 y-value. The width and height of the image that is created are decided by the user at runtime.
I want to realize this using Qt, and QImage in combination with QPainter seem to be my primary weapons of choice. The problem I am currently trying to solve is:
How to convert my coordinates to pixels in my image?
The y-values seem to be fairly simple, since I know the minimum and maximum of the graph beforehand, but I am struggling with the x-values. My approach so far is to find the min- and max-x-value and scale each point respectively.
Is there a more native approach?
Since one set of coordinates serves for several images with different widths and heights, I wondered whether a vector graphic (svg) may be a more suitable approach, but I couldn't find material on creating svg-files within Qt yet, just working with existing files. I would be looking for something comparable to the Windows metafiles.
Is there a close match to metafiles in Qt?
QGraphicsScene may help in this case. You plot the graph with either addPolygon() or addPath(), then render the scene into a bitmap with QGraphicsScene::render().
The sceneRect will automatically grow as you add items to it. At the end of the "plotting" you will have the final size/bounds of the graph. Create a QImage and use it as the painter backing store to render the scene.
QGraphicsScene also allows you to manipulate the transformation matrix to fit the orientation and scale to your needs.
Another alternative is to use QtOpenGL to render your 2D graph into an OpenGL context. No conversion/scaling of coordinates is required. Once you get past the OpenGL basics, you can pick appropriate viewport/eye parameters to achieve any zoom/pan level.
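Below is a rough sketch of the QGraphicsScene route, written with PyQt5 for brevity (the C++ calls are one-to-one); the sample points, image size, and output file name are placeholders:

from PyQt5 import QtCore, QtGui, QtWidgets

app = QtWidgets.QApplication([])          # required before using Qt's GUI classes

# Placeholder data: arbitrary x, 0 <= y <= 10.
points = [(-3.0, 0.0), (-1.0, 7.5), (2.0, 2.0), (5.5, 10.0)]

# Build the polyline in data coordinates.  Qt's y axis grows downward, so
# plot -y to make larger y values appear nearer the top of the image.
scene = QtWidgets.QGraphicsScene()
path = QtGui.QPainterPath(QtCore.QPointF(points[0][0], -points[0][1]))
for x, y in points[1:]:
    path.lineTo(x, -y)
scene.addPath(path, QtGui.QPen())         # default pen: black, cosmetic width

# Source rect: full x range of the data, y from 10 (top) down to 0 (bottom).
xs = [x for x, _ in points]
source = QtCore.QRectF(min(xs), -10.0, max(xs) - min(xs), 10.0)

# Render the scene into a user-sized image; render() maps the source rect
# onto the target rect, so no manual per-point scaling is needed.
width, height = 800, 400                  # chosen by the user at runtime
image = QtGui.QImage(width, height, QtGui.QImage.Format_ARGB32)
image.fill(QtCore.Qt.white)
painter = QtGui.QPainter(image)
painter.setRenderHint(QtGui.QPainter.Antialiasing)
scene.render(painter, QtCore.QRectF(0, 0, width, height), source,
             QtCore.Qt.IgnoreAspectRatio)
painter.end()
image.save("graph.png")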

Coordinate Transformation C++

I have a webcam pointed at a table at a slant and with it I track markers.
I have a transformationMatrix in OpenSceneGraph, and its translation part contains the relative coordinates from the tracked object to the camera.
Because the camera is pointed at a slant, when I move the marker across the table both the Y and Z axes are updated, although all I want to be updated is the Z axis, because the height of the marker doesn't change, only its distance to the camera.
This has the effect that when I project a model onto the marker in OpenSceneGraph, the model is slightly off, and when I move the marker around, the Y and Z values are updated incorrectly.
So my guess is that I need a transformation matrix with which I multiply each point so that I have a new coordinate system which lies orthogonal on the table surface.
Something like this: A * v1 = v2, with v1 being the camera coordinates and v2 being my "table coordinates".
So what I did was choose 4 points to "calibrate" my system. I placed the marker at the top left corner of the screen and defined v1 as the current camera coordinates and v2 as (0,0,0), and I did that for 4 different points.
Then, taking the linear equations I get from having an unknown matrix and two known vectors, I solved for the matrix.
I thought the values I would get for the matrix would be the values I needed to multiply the camera coordinates with so that the model would be updated correctly on the marker.
But when I multiply the known camera coordinates I gathered before with the matrix, I didn't get anything close to what my "table coordinates" were supposed to be.
Is my approach completely wrong, or did I just mess something up in the equations? (I solved them with the help of wolframalpha.com.) Is there an easier or better way of doing this?
Any help would be greatly appreciated, as I am kind of lost and under some time pressure :-/
Thanks,
David
when I move the marker across the table both the Y and Z axes are updated, although all I want to be updated is the Z axis, because the height of the marker doesn't change, only its distance to the camera
Only true when your camera's view direction is aligned with your Y axis (or Z axis). If the camera is not aligned with Y, it means the transform will apply a rotation around the X axis, hence modifying both the Y and Z coordinates of the marker.
So my guess is that I need a transformation matrix with which I multiply each point so that I have a new coordinate system which lies orthogonal on the table surface.
Yes, it is. After that, you will have two transforms:
T_table to express marker's coordinates in the table referential,
T_camera to express table coordinates in the camera referential.
Finding T_camera from a single 2d image is hard because there's no depth information.
This is known as the pose problem; it has been studied by, among others, Daniel DeMenthon. He developed a fast and robust algorithm to find the pose of an object:
articles available on his research homepage, section 4 "Model Based Object Pose" (and particularly "Model-Based Object Pose in 25 Lines of Code", 1995);
code at the same place, section "POSIT (C and Matlab)".
Note that the OpenCV library offers an implementation of DeMenthon's algorithm. This library also offers a convenient and easy-to-use interface to grab images from a webcam. It's worth a try: OpenCV homepage
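The POSIT routine was part of OpenCV's older C API; in current OpenCV, pose estimation is usually done with cv2.solvePnP. A rough sketch of recovering the table-to-camera pose from four known markers, where the marker layout, pixel positions, and camera intrinsics are all placeholders (in practice the intrinsics come from a camera calibration):

import numpy as np
import cv2

# Known marker positions on the table, in table coordinates (placeholders).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.5, 0.0, 0.0],
                          [0.5, 0.3, 0.0],
                          [0.0, 0.3, 0.0]], dtype=np.float64)

# Where those markers were observed in the image, in pixels (placeholders).
image_points = np.array([[102.0, 240.0],
                         [410.0, 228.0],
                         [395.0,  85.0],
                         [120.0,  93.0]], dtype=np.float64)

# Camera intrinsics from a prior calibration (placeholders), no distortion.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

# Recover the rotation and translation of the table relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)                # 3x3 rotation matrix

# T_camera expresses table coordinates in the camera referential;
# its inverse maps camera coordinates back into table coordinates.
T_camera = np.eye(4)
T_camera[:3, :3] = R
T_camera[:3, 3] = tvec.ravel()
T_table_from_camera = np.linalg.inv(T_camera)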
If you know the location in the physical world of your four markers and you've recorded the positions as they appear on the camera, you ought to be able to derive some sort of transform.
When you do the calibration, surely you'd want to put the marker at the four corners of the table, not the screen? If you're just doing the corners of the screen, I imagine you're probably not taking into account the slant of the table.
Is the table literally just slanted relative to the camera or is it also rotated at all?
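To illustrate deriving "some sort of transform": if the tracker reports full 3D coordinates in the camera referential and the table coordinates of the calibration points are known, a least-squares affine fit is one option. A sketch with made-up numbers; note that with four roughly coplanar calibration points the fit is only well constrained on the table plane:

import numpy as np

# Placeholder calibration data: 3D positions reported by the tracker
# (camera referential) and the corresponding known table coordinates.
camera_pts = np.array([[0.10, 0.42, 1.30],
                       [0.55, 0.40, 1.28],
                       [0.54, 0.15, 1.62],
                       [0.09, 0.17, 1.65]])
table_pts = np.array([[0.0, 0.0, 0.0],
                      [0.5, 0.0, 0.0],
                      [0.5, 0.3, 0.0],
                      [0.0, 0.3, 0.0]])

# Solve A * [x y z 1]^T = [x' y' z']^T for A in the least-squares sense.
ones = np.ones((len(camera_pts), 1))
X = np.hstack([camera_pts, ones])         # homogeneous camera coordinates
A, residuals, rank, _ = np.linalg.lstsq(X, table_pts, rcond=None)
A = A.T                                   # 3x4 affine transform

def to_table(p):
    """Map a point from camera coordinates to table coordinates."""
    return A @ np.append(p, 1.0)

print(to_table(camera_pts[0]))            # should be close to table_pts[0]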