Link interaction of multiple volumes in xtk

If I have two 3D volumes rendered, how would I link the two volumes so that interaction with one will result in the same reaction in the other?

You can propagate renderer1.camera.view to renderer2.camera.view. Both are Float32Arrays holding the camera matrices.
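A minimal sketch of what that could look like, assuming renderer1 and renderer2 are X.renderer3D instances and that a per-frame onRender hook is available (the onRender name is an assumption; camera.view is the Float32Array mentioned above):

// Keep the second renderer's camera in sync with the first one on every frame.
// renderer1 / renderer2 are assumed to be X.renderer3D instances; onRender is
// assumed to fire once per rendered frame.
renderer1.onRender = function() {
  // copy the 4x4 view matrix (a Float32Array) from renderer1 to renderer2
  renderer2.camera.view = new Float32Array(renderer1.camera.view);
};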

Related

Tensorboard projector animated transition between multiple datasets

I would like to use the stand-alone tensorboard projector and have multiple datasets (same data points, different times).
I want the projector to load multiple datasets, apply PCA, and then animate the points through the "checkpoints" over time (i.e. change the point coordinates in the 2D/3D space in an animated fashion).
Where should I start digging into the projector code in order to load multiple datasets and change the rendered point coordinates after the initial render?
With still nearly zero experience with Tensorboard, I recently searched for a similar feature.
Loading multiple datasets in TensorboardX is no problem: call add_embedding multiple times with a different global_step parameter each time. For transitions, however, there seems to be no built-in solution.
One alternative I suggest is hypertools. It supports multiple dimension reduction methods, as well as animations.
I would appreciate any update on this topic as I am still searching for alternatives.

Can't understand the concept of merge-instancing

I was reading slides from a presentation that talks about "merge-instancing" (the presentation is by Emil Persson: www.humus.name/Articles/Persson_GraphicsGemsForGames.pptx, starting at slide 19).
I can't understand what's going on. I know instancing only from OpenGL, and I thought it could only draw the same mesh multiple times. Can somebody explain? Does it work differently in DirectX?
Instancing: You upload a mesh to the GPU and activate its buffers whenever you want to render it. Data is not duplicated.
Merging: You want to create a mesh from multiple smaller meshes (like the building complex in the example), so you either:
Draw each complex using instancing, which means multiple draw calls per complex, or
Merge the instances into a single mesh, which replicates the vertices and other data for each complex but lets you render the whole complex with a single draw call.
Instance-merging: You create the complex by referencing the vertices of the instances that take part in it, and each vertex is then used to look up where to fetch the data for its instance. This way you get the advantage of instancing (each mesh is uploaded to the GPU only once) and the benefit of merging (you draw the whole complex with a single draw call), as the shader sketch below illustrates.
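A hypothetical GLSL sketch of that last idea (names and the data layout are made up for illustration): every vertex of the merged complex carries an instance index, which the vertex shader uses to fetch that instance's transform, so one draw call covers the whole complex.

const mergeInstancingVS = `#version 300 es
in vec3 position;
in float instanceId;            // which sub-mesh instance this vertex belongs to
uniform sampler2D instanceData; // per-instance model matrices packed into a texture
uniform mat4 viewProjection;

void main() {
  // fetch the four columns of this instance's model matrix
  int base = int(instanceId) * 4;
  mat4 model = mat4(
      texelFetch(instanceData, ivec2(base + 0, 0), 0),
      texelFetch(instanceData, ivec2(base + 1, 0), 0),
      texelFetch(instanceData, ivec2(base + 2, 0), 0),
      texelFetch(instanceData, ivec2(base + 3, 0), 0));
  gl_Position = viewProjection * model * vec4(position, 1.0);
}`;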

3D Object Detection & Tracking using PCL

I have a task where I am asked to track parcels (carton boxes) of different dimensions moving on a conveyor.
I am using an Asus Xtion Pro camera, and I have been asked to use point cloud data to detect and track the objects on the conveyor.
I have read about many model-based methods where we need to create a model of the object to be detected and then perform keypoint extraction, feature matching and various other steps between the scene and the model.
Since I am using boxes of different dimensions, I would apparently need a model of each of them to match against the scene.
My question: can I have one common point cloud model of a box of some dimension and compare it with any box that comes into the camera's view? In other words, can I have one scalable model to compare against a box of any dimension? Is that possible?
My chief doesn't want the project to depend on many models for detection and tracking. One common model that is scalable or parameterized should do the trick, it seems.
Thanks in advance

Spatial transformation of volumes

I have two NIfTI image volumes that I would like to display together using XTK. Both of them have already been converted to NRRD using MeVisLab's relevant ITK modules. Both volumes were acquired in the same MR session, but they differ in spatial resolution and field of view (and even in orientation), so it's important that XTK takes the "space directions" and "space origin" fields of the NRRD into account to display them correctly in terms of relative spatial position. This does not seem to happen in practice, though.
I already read a question and answer on fixing the "space origin" part, but I'm still running into problems with the "space directions". My latest attempt was to modify the transform of the volumes after loading them, but this doesn't seem to have any effect on the displayed volumes. I could, however, successfully do this with a TRK fiber file; changing a volume's transform doesn't yield any effect. So the question is: how do I correctly load a (NRRD) volume while taking into account its full spatial transformation to patient/scanner space, so that multiple loaded volumes are matched up correctly?
Thanks in advance for any help!
Sadly, the scan orientation is not taken into account at the moment. This should happen soon though, since it is very important.
As a workaround, it is possible to transform each slice of a volume using the transform methods. You basically loop through the volume's children and apply the transform to each one, as sketched below.
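A rough sketch of that loop, assuming the volume exposes its slices through a children array and each slice has the usual X.transform methods (both accessor names are assumptions based on the description above):

// Apply the same spatial transform to every slice of a loaded volume.
// volume.children and slice.transform.translateX are assumed accessors.
var slices = volume.children;
for (var i = 0; i < slices.length; i++) {
  var slice = slices[i];
  // example: shift every slice 10 units along X
  slice.transform.translateX(10);
}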
If you are willing to contribute the features to the parser, it would be great! Also, the NII format is now supported so you don't have to convert to (the much slower) NRRD.

Ideas: how to interactively render large image series using GPU-based direct volume rendering

I'm looking for ideas on how to convert a 30+ GB series of 2000+ colored TIFF images into a dataset that can be visualized in real time (at interactive frame rates) using GPU-based volume rendering (OpenCL / OpenGL / GLSL). I want to use a direct volume visualization approach instead of surface fitting (i.e. raycasting instead of marching cubes).
The problem is two-fold. First, I need to convert my images into a 3D dataset. The first thing that came to mind is to treat all images as 2D textures and simply stack them to create a 3D texture (see the sketch below).
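In WebGL2 terms (the same glTexStorage3D / glTexSubImage3D calls exist in desktop OpenGL), the stacking idea could look roughly like this, assuming slices is an array of already-decoded slice images of identical width and height:

// Stack decoded 2D slices into one 3D texture, one depth layer per slice.
const gl = canvas.getContext('webgl2');
const volumeTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_3D, volumeTexture);
gl.texStorage3D(gl.TEXTURE_3D, 1, gl.RGBA8, width, height, slices.length);
slices.forEach((slice, z) => {
  // upload slice z into depth layer z of the 3D texture
  gl.texSubImage3D(gl.TEXTURE_3D, 0, 0, 0, z, width, height, 1,
                   gl.RGBA, gl.UNSIGNED_BYTE, slice);
});

For a 30+ GB series this would of course have to be combined with downsampling or bricking rather than one monolithic texture.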
The second problem is achieving interactive frame rates. For this I will probably need some sort of downsampling in combination with "details-on-demand" loading of the high-res dataset when zooming in.
A first step-by-step approach I found is:
polygonization of the complete volume data through layer-by-layer processing and generating corresponding image texture;
carrying out all essential transformations through vertex processor operations;
dividing polygonal slices into smaller fragments, where the corresponding depth and texture coordinates are recorded;
in fragment processing, using shader programming techniques to enhance the rendering of the fragments.
But I have no concrete ideas of how to start implementing this approach.
I would love to hear some fresh ideas, or suggestions on how to start implementing the approach outlined above.
If anyone has any fresh ideas in this area, they're probably going to be trying to develop and publish them. It's an ongoing area of research.
In your "point-wise approach", it seems like you have outlined the basic method of slice-based volume rendering. This can give good results, but many people are switching to a hardware raycasting method. There is an example of this in the CUDA SDK if you are interested.
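Not the CUDA SDK sample itself, but to give a feel for what the raycasting method boils down to, here is a hypothetical GLSL fragment-shader sketch of the core ray-marching loop; all names and the trivial transfer function are illustrative only:

const raycastFS = `#version 300 es
precision highp float;
precision highp sampler3D;

uniform sampler3D volume;  // the stacked 3D texture
uniform float stepSize;    // sampling step in texture space
in vec3 entryPoint;        // ray entry on the volume bounding box, in [0,1]^3
in vec3 rayDirection;      // interpolated view ray direction
out vec4 fragColor;

void main() {
  vec3 pos = entryPoint;
  vec3 rayStep = normalize(rayDirection) * stepSize;
  vec4 accum = vec4(0.0);
  for (int i = 0; i < 512; ++i) {                     // fixed max step count
    float density = texture(volume, pos).r;
    vec4 src = vec4(vec3(density), density * 0.05);   // trivial transfer function
    accum.rgb += (1.0 - accum.a) * src.a * src.rgb;   // front-to-back compositing
    accum.a   += (1.0 - accum.a) * src.a;
    if (accum.a > 0.95) break;                        // early ray termination
    pos += rayStep;
    if (any(lessThan(pos, vec3(0.0))) || any(greaterThan(pos, vec3(1.0)))) break;
  }
  fragColor = accum;
}`;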
A good method for hierarchical volume rendering was detailed by Crassin et al. in their paper GigaVoxels. It uses an octree-based approach which only loads bricks into memory when they are needed.
A very good introductory book in this area is Real-Time Volume Graphics.
I've done a bit of volume rendering, though my code generated an isosurface using marching cubes and displayed that. However, in my modest self-education in volume rendering I did come across an interesting short paper, Volume Rendering on Common Computer Hardware, which comes with example source too. I never got around to checking it out, but it seemed promising. It is in DirectX though, not OpenGL. Maybe it can give you some ideas and a place to start.