Can you apply colour maps to volume renders? - xtk

Every example I have seen only renders in greyscale.
Is there a way to map scalar voxel values to colours (similar to vtkColorTransferFunction in VTK, for example)?
If there is no support for this out of the box, how difficult would it be for us to add that capability?

So far, only min and max colors are supported (lesson 15).
...
r.onShowtime = function() {
  v.minColor = [0, 0.06666666666666667, 1];
  v.maxColor = [0.5843137254901961, 1, 0];
};
...
You could also use a custom labelmap/color table as a hack for now (lesson 11).
There is no easy mechanism to support it right now but this is on the TODO list and should not be too hard to implement.
Please feel free to contribute to XTK on github, this would be a very useful addition!
https://github.com/xtk/X
Thanks

Related

Suggested algorithm for diverging color mapping visualization

I am attempting to write a piece of code that is supposed to map data to RGB values, and one of the types of visualizations I am attempting to use is a diverging color map.
I am not exactly sure what the best way is to go about applying the colors. The current algorithm I am using is:
// F is the data point being checked, in the range [0, 1]
if (F <= .5) {
    RGB[0] = F * 510;
    RGB[1] = F * 510;
    RGB[2] = F * 254 + 128;
} else {
    RGB[0] = 255 - (F - .5) * 254;
    RGB[1] = 255 - (F - .5) * 510;
    RGB[2] = 255 - (F - .5) * 510;
}
Where the key points for the curve are:
F=0: (0,0,128)
F=0.5: (255,255,255)
F=1: (128, 0, 0)
Are there any suggested algorithms out there to use instead of this, or is this hacked-together piecewise function alright?
This is the image generated by this current algorithm.
I think you should use a gradient bar to test your function, as it makes it easier to see the transition 'speed' over linear data.
Here is a really good article for using the diverging colour maps: http://www.sandia.gov/~kmorel/documents/ColorMaps/
It describes the mathematics behind it. I know it seems overkill to go through the Lab and MSH colour spaces for such a simple task, but if you want good-quality colour maps it's really worth it.
Other than that, I don't know of any 'manual' implementation of the function (i.e. one not relying on the already-existing functions in MATLAB or R).
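As a quick illustration of the gradient-bar test, here is a minimal Python sketch (assuming NumPy and Pillow are installed; the mapping is the questioner's piecewise function, and the image size and output file name are arbitrary choices of mine):
# Render a horizontal gradient bar with the piecewise diverging mapping above.
import numpy as np
from PIL import Image

def diverging_rgb(f):
    # f in [0, 1]; blue (0,0,128) -> white (255,255,255) -> red (128,0,0)
    if f <= 0.5:
        return (int(f * 510), int(f * 510), int(f * 254 + 128))
    return (int(255 - (f - 0.5) * 254),
            int(255 - (f - 0.5) * 510),
            int(255 - (f - 0.5) * 510))

width, height = 512, 32
bar = np.zeros((height, width, 3), dtype=np.uint8)
for x in range(width):
    bar[:, x] = diverging_rgb(x / (width - 1))
Image.fromarray(bar).save("diverging_bar.png")  # inspect the transition 'speed'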
I think it may be more useful to use HSV color space as opposed to RGB, and show your data using the Hue component. This way all the values of your function will map to a nice rainbow color and will be evenly saturated.
From the links provided you should be able to derive the formula for converting a hue value to RGB.
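For illustration, here is a minimal Python sketch of the HSV idea using the standard colorsys module (the hue range and the blue-to-red endpoints are my own choices, not from the original answer):
# Map a scalar F in [0, 1] to a rainbow colour via HSV, fully saturated.
import colorsys

def scalar_to_rgb(f):
    hue = (1.0 - f) * (2.0 / 3.0)            # f = 0 -> blue, f = 1 -> red
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation and value
    return (int(r * 255), int(g * 255), int(b * 255))

print(scalar_to_rgb(0.0))   # (0, 0, 255)
print(scalar_to_rgb(0.5))   # (0, 255, 0), green midpoint
print(scalar_to_rgb(1.0))   # (255, 0, 0)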

Sets in raphaeljs not real groups? Transform order

I'm having an issue with sets and how transforms are applied. I'm coming from a graphics background, so I'm familiar with scene graphs as well as the normal SVG group syntax, but Raphael is confusing me. Say I have a circle and a set, on which I want to apply a transform.
circle = paper.circle(0, 0, 0.5)
set = paper.set()
If I add the circle first, and then transform, it works.
set.push circle
set.transform("s100,100")
To make a 50 radius circle. If I reverse the order, however,
set.transform("s100,100")
set.push circle
The transform is not applied.
This seems as though it will break many, many rendering and animation type algorithms, where your groups/transforms hold your articulation state, and you add or remove objects to them instead of recreating the entire transform every time. Is there an option somewhere in the documentation that I am not seeing that addresses this, or was this functionality discarded in favor of simplicity? It seems very odd to be missing, given that it is supported directly and easily in the group hierarchy of SVG itself... do I need to manually apply the transform from the set to any children added after the set is transformed?
Sets in Raphael are just simple Arrays.
When you perform an action on a set, Raphael iterates over all its members in a for(...){} loop.
Raphael doesn't support SVG groups (<g></g>).
UPDATE: Raphael's source code:
// Set
var Set = function (items) {
    this.items = [];
    this.length = 0;
    this.type = "set";
    if (items) {
        for (var i = 0, ii = items.length; i < ii; i++) {
            if (items[i] && (items[i].constructor == elproto.constructor || items[i].constructor == Set)) {
                this[this.items.length] = this.items[this.items.length] = items[i];
                this.length++;
            }
        }
    }
},
As you can see, all items are stored in this.items, which is a plain array.
Raphaël's sets are merely intended to provide a convenient way of managing groups of shapes as unified objects, by aggregating element-related actions and delegating them (by proxying the corresponding methods at the set level) to each shape sequentially.
It seems very odd to be missing, given that it is supported directly
and easily in the group hierarchy of SVG itself...
Well, Raphaël is not an attempt to elevate the SVG specs to a JavaScript based API, but rather to offer an abstraction for vector graphics manipulation regardless of the underlying implementation (be it SVG in modern browsers, or VML in IE<9). Thus, sets are by no means representations of SVG groups.
do I need to manually apply the transform from the set to any
children added after the set is transformed?
Absolutely not, you only need to make sure to add any shapes to the set before applying transformations.

OpenCV Label connected and Compute feature measurements for image regions

I need help related to the following MATLAB code:
[labelMap_1,num] = bwlabel(labelMap == 1);
labelMap1Stat = imfeature(labelMap_1,'Area','Centroid');
In OpenCV I found a few threads saying that I must use a blob library (bloblib) for it.
But suppose I don't want to use it, because I need to port this code to Android and I am concerned about the size. How can I achieve the same thing without the blob-library overhead?
If there is no solution, then which methods inside bloblib will produce the same results as these two functions?
Thanks in advance.
Try using contour-related functions like cvFindContours().
This article provides some insights on how to use OpenCV for blobs.
You can calculate centroid information by using the cvMoments() function.
The center of mass is then given by xc = m10 / m00 and yc = m01 / m00, where m10, m01 and m00 are fields of the moments structure returned by that call.
Use cvContourArea() to find the area.
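To tie these together, here is a rough sketch of the bwlabel + area/centroid equivalent using the modern Python bindings (cv2.findContours / cv2.moments / cv2.contourArea; the input file name and the OpenCV 4.x return signature are assumptions on my part):
# Extract per-blob area and centroid for the region where labelMap == 1.
import cv2
import numpy as np

label_map = cv2.imread("labelmap.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
mask = np.uint8(label_map == 1) * 255          # like (labelMap == 1) in MATLAB

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:                             # one contour per connected blob
    m = cv2.moments(c)
    if m["m00"] == 0:                          # skip degenerate (zero-area) blobs
        continue
    area = cv2.contourArea(c)                  # 'Area'
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # 'Centroid'
    print(area, cx, cy)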

Perlin's Noise with OpenGL

I was studying Perlin's Noise through some examples at http://dindinx.net/OpenGL/index.php?menu=exemples&submenu=shaders and couldn't help noticing that his make3DNoiseTexture() in perlin.c uses noise3(ni) instead of PerlinNoise3D(...).
Now why is that? Isn't Perlin's Noise supposed to be a summation of different noise frequencies and amplitudes?
Question 2: what do ni, inci, incj, inck stand for? Why use ni instead of x, y coordinates? Why is ni incremented with
ni[0]+=inci;
inci = 1.0 / (Noise3DTexSize / frequency);
I see Hugo Elias created his Perlin2D with x,y coordinates, and so does PerlinNoise3D(...).
Thanks in advance :)
I now understand why and am going to answer my own question in hopes that it helps other people.
Perlin's Noise is actually a synthesis of gradient noises. In its production process, for each lattice corner surrounding (flooring) the input point, we must compute the dot product of the randomly generated gradient vector at that corner with the vector pointing from the corner to the input point itself.
Now if the input point were a whole number, such as the xyz coordinates of a texture you want to create, the dot product would always return 0, which would give you a flat noise. So instead, we use inci, incj, inck as an alternative index. Yep, just an index, nothing else.
Now returning to question 1, there are two methods to implement Perlin's Noise:
1. Calculate the noise values separately and store them in the RGBA slots of the texture.
2. Sum the noise octaves up beforehand and store the result in one of the RGBA slots of the texture.
noise3(ni) is the actual implementation of method 1, while PerlinNoise3D(...) corresponds to method 2.
In my personal opinion, method 1 is much better because you have much more flexibility over how you use each octave in your shaders.
My guess on the reason for using noise3(ni) in make3DNoiseTexture() instead of PerlinNoise3D(...) is that when you use that noise texture in your shader, you want to be able to replicate and modify the functionality of PerlinNoise3D(...) directly in the shader.
My guess for the reasoning behind ni, inci, incj, inck is that using the x, y, z of the volume directly doesn't give a good result, so by scaling the noise with the frequency instead it is possible to adjust the resolution of the noise independently of the volume size.
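For what it's worth, here is a minimal sketch of the octave summation that a PerlinNoise3D-style function performs (method 2 above), in Python; noise3 is just a placeholder argument for whatever single-octave 3D noise function you already have, and the octave/lacunarity/gain values are illustrative defaults, not anything from the linked code:
# Fractal (octave-summed) noise: higher frequencies with smaller amplitudes.
def fbm_noise3(p, noise3, octaves=4, lacunarity=2.0, gain=0.5):
    x, y, z = p
    amplitude, frequency = 1.0, 1.0
    total, norm = 0.0, 0.0
    for _ in range(octaves):
        total += amplitude * noise3((x * frequency, y * frequency, z * frequency))
        norm += amplitude
        frequency *= lacunarity   # each octave raises the frequency...
        amplitude *= gain         # ...and lowers the amplitude
    return total / norm           # normalise back into the base noise range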

What does a Gaussian blur algorithm look like? Are there examples of implementations?

I have a bitmap image context and want to make it appear blurry, so the best thing I can think of is a Gaussian blur algorithm, but I have no real idea what such an algorithm looks like. Do you know good tutorials or examples on this? The language does not matter so much, as long as it's done by hand without relying too much on language-specific APIs. For example, in Cocoa the lucky guys don't need to think about it; they just use an image filter that's already there. But I don't have something like that in Cocoa Touch (Objective-C, iPhone OS).
This is actually quite simple. You have a filter pattern (also known as a filter kernel) - a small rectangular array of coefficients - and just calculate the convolution of the image with the pattern.
for y = 1 to ImageHeight
    for x = 1 to ImageWidth
        newValue = 0
        for j = 1 to PatternHeight
            for i = 1 to PatternWidth
                newValue += OldImage[x - PatternWidth/2 + i, y - PatternHeight/2 + j] * Pattern[i, j]
        NewImage[x, y] = newValue
The pattern is just a Gauss curve in two dimensions, or any other filter pattern you like. You have to take care at the edges of the image because the filter pattern will be partially outside of the image. You can just assume that these pixels are black, or use a mirrored version of the image, or whatever seems reasonable.
As a final note, there are faster ways to calculate a convolution using Fourier transforms, but this simple version should be sufficient for a first test.
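For a concrete (if naive) starting point, here is a small Python/NumPy sketch of the scheme described above: it builds a normalised Gaussian kernel and applies the convolution with mirrored edges; the radius and sigma values are illustrative, and the image is assumed to be a 2D greyscale array:
# Build a 2D Gaussian kernel and convolve it with a greyscale image.
import numpy as np

def gaussian_kernel(radius=2, sigma=1.0):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()                               # preserve overall brightness

def blur(image, radius=2, sigma=1.0):
    k = gaussian_kernel(radius, sigma)
    padded = np.pad(image, radius, mode="reflect")   # mirror the edges
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + 2*radius + 1, x:x + 2*radius + 1] * k)
    return out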
The Wikipedia article has a sample matrix in addition to some standard information on the subject.
The best place for image processing is THIS. You can get MATLAB code there.
And this Wolfram demo should clear any doubts about doing it by hand.
And if you don't want to learn too many things, learn PIL (Python Imaging Library).
"Here" is exactly what you need.
Code copied from above link:
from PIL import Image, ImageFilter

def filterBlur(im, ext=".png"):
    im1 = im.filter(ImageFilter.BLUR)
    im1.save("BLUR" + ext)

filterBlur(Image.open("input.png"))  # input file name is illustrative