Creating deformable map in root TCanvas - c++

I have some data that I'm plotting on a TH2F through the command-line interface of ROOT. I have a TTree* called goodtree, and I plot the XY positions of events in a detector as follows:
root [1] TCanvas *can = new TCanvas("can","can",800,800)
root [2] goodtree->Draw("y:x>>h1(400,-200,200,400,-200,200)","r<200","colz")
I also want to make normalized area plots, by looking at r^2 versus theta:
root [3] goodtree->Draw("r*r:t>>h2(400,-3.14,3.14,400,0,41000)","r<200","colz")
This part is fine. What I want to do next is overlay a map onto the XY plot, and have it automatically deform to the correct positions on the R^2T plot. What I mean is, this is a particle detector and uses photomultiplier tubes (PMTs) which have a circular cross-section in XY. I want to be able to overlay a map onto h1 which shows the outlines of these PMTs (which are in a honeycomb pattern). This I can also do very quickly with a script.
The tough bit is that I want to be able to define this map in XY and plot it on top of the R^2T data points. Is there a way to do this easily, without having to calculate the positions, widths, and heights of all of these deformed ellipses by hand?
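One way to sidestep computing the deformed ellipses analytically is to sample each PMT outline densely in XY and push every sample through the same (x, y) → (theta, r²) mapping the data uses, then draw the transformed points as a TPolyLine over h2. Below is a minimal, library-free sketch of just the point mapping; `pmtX`, `pmtY`, `pmtR` are hypothetical PMT parameters, and the assumption that `t` in the tree equals `atan2(y, x)` is mine (the h2 x-range of -3.14..3.14 suggests it):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Map one XY point into (theta, r^2) space, matching Draw("r*r:t>>h2...").
// Assumes t in the tree is atan2(y, x), i.e. theta in [-pi, pi].
inline void toR2Theta(double x, double y, double &theta, double &r2)
{
    theta = std::atan2(y, x);
    r2    = x * x + y * y;
}

// Sample a circular PMT outline (hypothetical parameters pmtX, pmtY, pmtR)
// and transform every sample point into (theta, r^2) space.
inline void pmtOutlineR2Theta(double pmtX, double pmtY, double pmtR, int n,
                              std::vector<double> &theta,
                              std::vector<double> &r2)
{
    const double twoPi = 2.0 * std::acos(-1.0);
    theta.resize(n);
    r2.resize(n);
    for (int i = 0; i < n; ++i) {
        double phi = twoPi * i / n;
        toR2Theta(pmtX + pmtR * std::cos(phi),
                  pmtY + pmtR * std::sin(phi),
                  theta[i], r2[i]);
    }
    // In ROOT you would now do: new TPolyLine(n, theta.data(), r2.data())
    // and draw it with "same" on top of h2; with enough samples the
    // polyline approximates the deformed ellipse as closely as you like.
}
```

The point-sampling approach deforms any XY map correctly by construction, so the honeycomb of circles never needs closed-form ellipse parameters.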

Related

How to access min and max coordinates of a 3D object in C++?

I am creating a game in Qt Creator using C++ and OpenGL, and am attempting to add bounding boxes to my scene in order to implement collision detection. I am using objects imported from Maya as .obj files, so their dimensions are not set in the code, only their position, rotation and scale. I am able to create a bounding box around each object which matches its position, but am struggling to find a way to access the min and max x, y and z values of the objects in order to match the box to the size of the object.
Does anyone have any ideas on how I could access the min and max coordinates? I know how to implement the rest of the code if I could access these values.
The problem you face is that each object geometry has different means of internal storage and of determining a bounding box.
Let's try some examples to illustrate this:
Suppose we have a circle, whose drawing parameters stored internally are the center coordinates x_center and y_center and the radius radius. If you try to determine the bounding box for this object, you'll see that it extends from (x_center - radius, y_center - radius) to (x_center + radius, y_center + radius).
In case you have an unrotated rectangle, given by the two endpoints of its principal diagonal, the bounding box just coincides with its shape, so you only have to give the coordinates of those same two points.
If, on the other hand, we have a polygon, the bounding box will be determined by the minimum and maximum coordinates over all the polygon vertices. If you allow the polygon to rotate, you'll need to rotate all the vertex coordinates before determining their minimum and maximum values to get the bounding box.
If, for another example, we have a cubic spline determined by the coordinates of its four control points, you'll be determining the maximum and minimum values of two cubic polynomials, which in the general case means solving two quadratic equations (after differentiation).
To cope with all this, a geometric shape normally provides some polymorphic means of constructing its bounding box via an instance method (the box is usually even cached, so it only has to be recalculated after rotations or changes in position or scale).
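A minimal sketch of that polymorphic approach, using simplified 2D shapes and hypothetical class names (no caching shown):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

struct BBox { double xmin, ymin, xmax, ymax; };

// Each shape knows how to compute its own bounding box.
struct Shape {
    virtual BBox boundingBox() const = 0;
    virtual ~Shape() = default;
};

struct Circle : Shape {
    double cx, cy, r;
    Circle(double x, double y, double rad) : cx(x), cy(y), r(rad) {}
    BBox boundingBox() const override {
        // Extends radius units from the center in every direction.
        return { cx - r, cy - r, cx + r, cy + r };
    }
};

struct Polygon : Shape {
    std::vector<std::pair<double, double>> v;  // vertices
    BBox boundingBox() const override {
        // Min/max over all vertices, as described above.
        BBox b { v[0].first, v[0].second, v[0].first, v[0].second };
        for (const auto &p : v) {
            b.xmin = std::min(b.xmin, p.first);
            b.xmax = std::max(b.xmax, p.first);
            b.ymin = std::min(b.ymin, p.second);
            b.ymax = std::max(b.ymax, p.second);
        }
        return b;
    }
};
```

Calling code then works through `Shape*` without caring which rule applies, which is the whole point of putting the computation behind a virtual method.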
Of course, all of this depends on how the shapes are implemented. Perhaps your case is simpler than what I describe here, but you don't say. You also don't show any code or input/output data, as requested in the How to create a Minimal, Complete, and Verifiable example page. You should edit your question and add your sample code; that will give more information about your exact problem.
If you have an OBJ loader, then you have an array of vertex data.
float t[2100];          // x,y,z triplets, so 700 vertices
int x = 2100 - 3;       // index of the last x component (x = 2100 would read past the end)
float xmax = -FLT_MAX;  // needs <cfloat>; safer than a magic number
while(x>=0)
{
if(xmax<t[x]) xmax=t[x];
x-=3;
}
So here is the maximum x of the object; the other five bounds follow the same pattern (offsets 1 and 2 for y and z, and min comparisons for the minima).

Get HU values along a trajectory volume

So, what I am trying to do is to calculate the density profile (HU) along a trajectory (represented by target x,y,z and tangent to it) in a CT. At the moment, I am able to get the profile along a line passing through the target and at a certain distance from the target (entrance). What I would like to do is to get the density profile for a volume (cylinder in this case) of width 1mm or so.
I guess I have to do interpolation of some sort across voxels since, depending on the spacing between successive coordinates, several coordinates can point to the same index. For example, this is what I am talking about.
Additionally, I would like to get the density profile for different shapes of the tip of the trajectory, for example:
My idea is that I make a 3 by 3 matrix, representing the shapes of the tip, and convolve this with the voxel values to get HU values corresponding to the tip. How can I do this using ITK/VTK?
Kindly let me know if you need some more information. (I hope the images are clear enough).
If you want to calculate the density the drill tip will encounter, it is probably easiest to create a mask of the tip's cutting surface at a resolution higher than your image's. Define a transform matrix M which puts your drill into the desired position in the CT image.
Then iterate through all the non-zero voxels in the mask, transform their indices to physical points, apply transform M to them, sample (evaluate) the value of the CT image at that point using an interpolator, multiply it by the mask's opacity (in case of a non-binary mask), and add the value to a running sum.
At the end your running sum will represent the total encountered density. This density sum will be dependent on the resolution of your mask of the tip's cutting surface. I don't know how you will relate it to some physical quantity (like resisting force in Newtons).
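The key piece of the mask-sum procedure above is sampling the CT volume at non-integer physical points. Here is a library-free sketch of trilinear interpolation on a flattened volume (in a real pipeline you would use ITK's `itk::LinearInterpolateImageFunction` instead; the indexing convention `vol[(k*ny + j)*nx + i]` is my assumption):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Trilinear interpolation in a volume stored flattened as vol[(k*ny + j)*nx + i].
// (x, y, z) are continuous index-space coordinates inside the volume.
inline double sampleTrilinear(const std::vector<double> &vol,
                              int nx, int ny, int nz,
                              double x, double y, double z)
{
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    int x1 = std::min(x0 + 1, nx - 1);
    int y1 = std::min(y0 + 1, ny - 1);
    int z1 = std::min(z0 + 1, nz - 1);
    double fx = x - x0, fy = y - y0, fz = z - z0;

    auto at = [&](int i, int j, int k) { return vol[(k * ny + j) * nx + i]; };

    // Interpolate along x on the four edges, then along y, then along z.
    double c00 = at(x0, y0, z0) * (1 - fx) + at(x1, y0, z0) * fx;
    double c10 = at(x0, y1, z0) * (1 - fx) + at(x1, y1, z0) * fx;
    double c01 = at(x0, y0, z1) * (1 - fx) + at(x1, y0, z1) * fx;
    double c11 = at(x0, y1, z1) * (1 - fx) + at(x1, y1, z1) * fx;
    double c0  = c00 * (1 - fy) + c10 * fy;
    double c1  = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}
// The running sum is then: for each non-zero mask voxel p, transform p by M,
// convert to index space, and add opacity(p) * sampleTrilinear(...) to the total.
```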
To get a profile along some path, you would use a resample filter. Set up a transform matrix which maps your starting point to (0,0,0) and your end point to (x,0,0). Set the size of the target image to x×1×1 and the spacing the same as in the source image.
I don't understand your second question. To get the HU value at the tip, you would sample that point using a high-quality interpolator (for example a linear interpolator). I don't see why the shape of the tip would matter.

Qt: how do I make a field of 2d-interpolated colors?

I'm quite a beginner with C++, especially with anything graphics-related.
I would like to make an animated background for my graphicsview which looks kind of like this:
Gradient Field Airflow
The picture represents the turbulence of an airflow over an object.
The colors must be based on a matrix of values.
I can only find how to do single-direction gradients in Qt.
How do I set this up? How do I get two-directional gradients?
/*edit
It has been pointed out well that technically speaking this is not a gradient, but an color interpolation on a 2d array of nodes.
*/
Well, you have not provided the input data, so no one knows what you really want to achieve!
1. If you have the flow trajectories and mass
Then you can use a particle system plus heavy blur/smooth filtering to achieve this. For every known point along a trajectory, plot a dithered circle whose color depends on the mass/temperature/velocity and your color scale. It should be solid in the middle and transparent at the edges. After rendering, just blur/smooth the image a few times and that should be it. The fewer points you use, the bigger the circles must be to cover the area nicely; you can also do it in multiple passes, randomly jittering the point coordinates to improve the randomness of the image.
2. If you have field strength/speed/temperature or whatever grid values
Then it is similar to #1, but instead of a particle system you can render via quads/squares. The 2D linear gradient is called bilinear filtering:
c00 --- x ---> c01
 |
 |
 y     c(x,y)
 |
 |
 V
c10            c11
where:
c00,c01,c10,c11 are corner colors
c(x,y) is color on x,y position inside square
x,y are in the range <0,1> for simplicity (but you can use any range with appropriate scaling in the equations)
Bilinear interpolation is 3x linear interpolation:
c0=c(x,0)=c00+((c01-c00)*x)
c1=c(x,1)=c10+((c11-c10)*x)
c(x,y) =c0 +((c1 -c0 )*y)
so render all the pixels of the square with the colors computed above, and that is what you seek. This kind of filtering usually produces artifacts on the edges between squares or on their diagonals; to avoid that, use non-linear filtering or blur/smooth the final image.
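The three linear interpolations above translate directly into code. A sketch using scalar values for brevity (for RGB colors, apply the same formula to each channel separately):

```cpp
#include <cassert>

// Bilinear interpolation of four corner values; x, y in [0, 1].
// Corner layout matches the diagram: c00 top-left, c01 top-right,
// c10 bottom-left, c11 bottom-right.
inline double bilerp(double c00, double c01, double c10, double c11,
                     double x, double y)
{
    double c0 = c00 + (c01 - c00) * x;   // c(x,0): along the top edge
    double c1 = c10 + (c11 - c10) * x;   // c(x,1): along the bottom edge
    return c0 + (c1 - c0) * y;           // blend the two edges vertically
}
```

To fill one square of the grid, evaluate `bilerp` at `(px/(w-1), py/(h-1))` for every pixel `(px, py)` of the square; at x=0, y=0 it reproduces c00 exactly, so adjacent squares sharing corner values meet without seams (though the derivative is still discontinuous across edges, which is the artifact mentioned above).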
There is a tutorial on gradients in Qt: http://qt-project.org/doc/qt-4.8/demos-gradients.html and a class reference: http://harmattan-dev.nokia.com/docs/library/html/qt4/qgradient.html I have never used anything other than linear gradients, and according to the docs there are only three basic types of gradient available in Qt: linear, radial and conical. If you cannot compose your desired gradient from these three types, then I am afraid you will need to compute your image pixels yourself. It might also be worth exploring whether OpenGL could help; Qt has some classes using OpenGL, but I am not familiar enough with them to give more advice.

Plotting a graph with given double coordinates

I receive an array of coordinates (double coordinates with -infinity < x < +infinity and 0 <= y <= 10) and want to draw a polyline using those points. I want the graph to always begin on the left border of my image, and end at the right. The bottom border of my image always represents a 0 y-value, and the top border always a 10 y-value. The width and height of the image that is created are decided by the user at runtime.
I want to realize this using Qt, and QImage in combination with QPainter seem to be my primary weapons of choice. The problem I am currently trying to solve is:
How to convert my coordinates to pixels in my image?
The y-values seem fairly simple, since I know the minimum and maximum of the graph beforehand, but I am struggling with the x-values. My approach so far is to find the minimum and maximum x-values and scale each point accordingly.
Is there a more native approach?
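For what it's worth, the scaling described above is a two-line affine map. A sketch (`w` and `h` are the user-chosen image size; the y-axis is flipped because QImage's origin is in the top-left corner; the y-range 0..10 comes from the question):

```cpp
#include <cassert>

struct Pt { double x, y; };

// Map a data point to pixel coordinates for a w x h image:
// x is scaled from [xmin, xmax] onto [0, w-1],
// y is scaled from [0, 10] onto [h-1, 0] (flipped: y=10 is the top border).
inline Pt toPixel(Pt p, double xmin, double xmax, int w, int h)
{
    double px = (p.x - xmin) / (xmax - xmin) * (w - 1);
    double py = (h - 1) - p.y / 10.0 * (h - 1);
    return { px, py };
}
```

With QPainter you can avoid doing this by hand: `QPainter::setWindow` / `setViewport` (or a `QTransform` via `setTransform`) let you paint in data coordinates and have Qt apply the same mapping for you.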
Since one set of coordinates serves for several images with different widths and heights, I wondered whether a vector graphic (svg) may be a more suitable approach, but I couldn't find material on creating svg-files within Qt yet, just working with existing files. I would be looking for something comparable to the Windows metafiles.
Is there a close match to metafiles in Qt?
QGraphicsScene may help in this case. Plot the graph with either addPolygon() or addPath(), then render the scene into a bitmap with QGraphicsScene::render().
The sceneRect will automatically grow as you add items to the scene. At the end of the "plotting" you will have the final size/bounds of the graph. Create a QImage, open a QPainter on it, and render the scene into it.
QGraphicsScene also allows you to manipulate the transformation matrix to fit the orientation and scale to your need.
Another alternative is to use QtOpenGL to render your 2D graph into an OpenGL context. No conversion/scaling of coordinates is required. Once you get past the OpenGL basics, you can pick appropriate viewport/eye parameters to achieve any zoom/pan level.

How to use and set axes in a 3D scene

I'm creating a simulator coded in python and based on ODE (Open Dynamics Engine). For visualization I chose VTK.
For every object in the simulation, I create a corresponding source (e.g. vtkCubeSource), mapper and actor. I am able to show objects correctly and update them as the simulation runs.
I want to add axes to have a point of reference and to show the direction of each axis. Doing that I realized that, by default, X and Z are in the plane of the screen and Y points outwards. In my program I have a different convention.
I've been able to display axes in 2 ways:
1) Image
axes = vtk.vtkAxes()
axesMapper = vtk.vtkPolyDataMapper()
axesMapper.SetInputConnection(axes.GetOutputPort())
axesActor = vtk.vtkActor()
axesActor.SetMapper(axesMapper)
axesActor.GetProperty().SetLineWidth(4)
2) Image (colors do not match with the first case)
axesActor = vtk.vtkAxesActor()
axesActor.AxisLabelsOn()
axesActor.SetShaftTypeToCylinder()
axesActor.SetCylinderRadius(0.05)
In the second one, the user can set many parameters controlling how the axes are displayed. In the first one, I only managed to set the line width, nothing else.
So, my questions are:
Which is the correct way to define and display axes in a 3D scene? I just want them in a fixed position and orientation.
How can I set a different convention for the axes orientation, both for their display and the general visualization?
Well, if you do not mess with the objects' transformation matrices for display purposes, it is probably sufficient to just put your camera into a different position while using axes approach 2. The easy methods to adjust your camera position are Pitch(), Azimuth() and Roll(). If you do mess with the object transforms, then apply the same transform to the axes.
Dženan Zukić kindly answered this question on the vtkusers@vtk.org mailing list: http://www.vtk.org/pipermail/vtkusers/2011-November/119990.html