I have read here:
Is there a way to use a Custom cross-sectional slicer of 3d image data?
... that the NRRD parser stores the image data as a 3D array. I want to be able to access this array in my scripts. How can this be done? I would like to use this data to compute image statistics, and subsets of it for region-of-interest statistics. I believe the data is a private variable which is only used by the slice function to create the volume slices; is that correct? If so, how can I save it for later use as a public variable, or as a property of the volume object?
Please explain as simply as possible how to proceed, as I am quite a novice at JavaScript.
Many thanks,
We don't store the array for all volume parsers yet, in order to keep memory usage down. This can certainly be added, since the infrastructure is there under the hood.
I assigned the issue to myself:
https://github.com/xtk/X/issues/84
Related
My PhD project revolves around simulating the paths of photons through objects of different optical properties. My code has classes which create CCD images etc., but it would be much more useful to be able to create a simple rendering of the 3D objects and the paths the photons take through them.
I've written an OpenGL system for viewing such a scene, but it would be much better if I could use something much more lightweight where I could simply specify the vertices of an object, and then a photon path as a list of connected vertices.
Exporting all the data and then visualising it in another program isn't ideal, as things like mesh transformations need to be taken into account, and I'd rather avoid exporting several new mesh objects just to import them all into another program.
What I essentially need is to be able to create the three-dimensional equivalent of an SVG image. Does such a '3D scene' file format with a simple visualiser exist?
I write in C++ on macOS, though I'd prefer to avoid using a visualisation library. I appreciate that what I'm asking for is rather niche and picky, but that's why I'm asking the internet, as someone might have come across a similar need for such a tool.
Please bear with me in this rather long question.
I am currently building a voxel engine (nothing like Minecraft, don't worry). I want the engine to be modular. For example, I want the voxel 'engine' to be a separate component from the 'renderer'.
The engine currently builds up a database of voxel objects that are each composed of a 3-dimensional array. However, the renderer makes use of meshes extracted from this data using the marching cubes algorithm. I need to store both the voxel data and the mesh data for each voxel object. However, since both components are modular they cannot be stored in the same place.
My current solution has been to build a secondary data structure inside the renderer that mirrors the engine. It does this by duplicating the objects in the engine structure, giving each duplicate a pointer to the object it mirrors. However, this quickly becomes complex, since every modification to the original data structure has to be propagated to the mirrored version.
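To make the mirroring approach above concrete, here is a minimal sketch (plain Python for brevity; all names are hypothetical, and the mesh extraction is a trivial stand-in for marching cubes):

```python
def extract_mesh(voxels):
    # Stand-in for marching cubes: here we just count filled voxels.
    return sum(v for plane in voxels for row in plane for v in row)

class VoxelObject:
    """Engine-side object: raw voxel data only."""
    def __init__(self, size):
        self.voxels = [[[0] * size for _ in range(size)] for _ in range(size)]

class RenderObject:
    """Renderer-side mirror: mesh data plus a pointer back to its source."""
    def __init__(self, source):
        self.source = source   # the engine object this mirror duplicates
        self.mesh = None       # extracted from the voxel data
        self.dirty = True      # set whenever the source voxels change

class Renderer:
    def __init__(self, engine_objects):
        # Duplicate the engine structure: one mirror per engine object.
        self.mirrors = [RenderObject(obj) for obj in engine_objects]

    def update(self):
        # Every engine-side modification must be propagated here, which is
        # exactly the bookkeeping burden described in the question.
        for m in self.mirrors:
            if m.dirty:
                m.mesh = extract_mesh(m.source.voxels)
                m.dirty = False
```

The `dirty` flag is what keeps the two structures in sync; the pain point is that something must remember to set it on every change.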
As you can see, the problem is more theoretical than technical. Could anyone help me in suggesting an OOP structure that my program may take to solve this issue better? I'm open to all suggestions, however radical.
Thanks.
I want to draw a histogram for 1 GB of data using MapReduce. I haven't been able to find anything useful by googling. Please suggest a specific library in Python or Java.
ValueHistogram in Hadoop can be used for building a histogram:
https://hadoop.apache.org/docs/stable2/api/org/apache/hadoop/mapreduce/lib/aggregate/ValueHistogram.html
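If you prefer Python, the same idea can be run via Hadoop Streaming. Here is a minimal sketch of the mapper/reducer logic, simulated in-process (the bucket width and data are arbitrary; in a real Streaming job the mapper would read stdin and print tab-separated key/value pairs):

```python
from itertools import groupby

BUCKET_WIDTH = 10  # assumption: fixed-width histogram buckets

def mapper(lines):
    # Map phase: emit (bucket, 1) for each numeric value.
    for line in lines:
        value = float(line)
        bucket = int(value // BUCKET_WIDTH) * BUCKET_WIDTH
        yield bucket, 1

def reducer(pairs):
    # Hadoop sorts by key between the phases; groupby over the sorted
    # pairs then sums the counts per bucket.
    for bucket, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield bucket, sum(count for _, count in group)

if __name__ == "__main__":
    data = ["3", "7", "12", "18", "25"]
    print(dict(reducer(mapper(data))))  # prints {0: 2, 10: 2, 20: 1}
```

At 1 GB the data fits a single Streaming job comfortably; the mapper and reducer above would simply be split into two scripts passed to `hadoop jar hadoop-streaming.jar`.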
I have 2 NIfTI image volumes that I would like to display together using XTK. Both of them are already converted to NRRD using MeVisLab's relevant ITK modules. Both volumes were acquired in the same MR session, but they differ in spatial resolution and field of view (even in orientation), so it's important that XTK takes into account the "space directions" and "space origin" fields of the NRRD to display them correctly in terms of relative spatial position. This seems not to happen in practice, though.
I already read a question and answer on fixing the "space origin" part, but I'm still running into problems with the "space directions". My latest attempts were to modify the transformation of the volumes after loading them, but this doesn't seem to have any effect on the displayed volumes. I could successfully do this with a TRK fiber file, but changing a volume's transformation doesn't yield any effect. So the question is: how do I correctly load a (NRRD) volume while taking into account its full spatial transformation to patient/scanner space, so that multiple loaded volumes get matched up correctly?
Thanks in advance for any help!
Sadly, the scan orientation is not taken into account at the moment. This should happen soon, since it is very important.
As a workaround, it is possible to transform each slice of a volume using the transform methods: you basically loop through the volume's children and apply the transform to each one.
If you are willing to contribute this feature to the parser, that would be great! Also, the NII format is now supported, so you don't have to convert to the (much slower) NRRD.
I am developing a 3D facial animation application in Java using OpenGL (JOGL). I have done the animation with morph targets, and now I am trying to do parametrized facial animation.
I can't attach the vertices of the face to the appropriate parameter. For example, I have the parameter "eyebrow length": how can I know which vertices belong to the eyebrows (face features)? Please, could anybody help me? I'm using an OBJ file to read the face model, and FaceGen to create it.
I appreciate your help .
For each morph target you assign each vertex a weight that determines how strongly it is influenced by that morph target. There's no algorithmic way to do this; it must be done manually.
The Lightwave OBJ file format may not be ideal for storing the geometry in this case, since it lacks support for such auxiliary data. You could extend the file format, but this will probably clash with programs expecting a "normal" OBJ.
I strongly suggest using a format that has been designed to support any number of additional vertex attributes. You may want to look at OpenCTM.