I am working on VTK volume rendering of mammography. I have 50 DICOM slices in a folder from which to construct the volume. Here I need to provide a vtkColorTransferFunction and a vtkPiecewiseFunction to set the RGB color and scalar opacity.
I am not getting the right color and opacity values for mammo (breast X-ray) images. I need color and opacity values suited to breast X-ray images.
Any suggestions will be helpful.
vtkGPUVolumeRayCastMapper *volumeGPUmapper = vtkGPUVolumeRayCastMapper::New();
volumeGPUmapper->SetInputConnection(clip->GetOutputPort());
// RGB and alpha functions
double skinOnBlueMap[28][5] =
{
{0, 0.987853825092316, 1.0, 1.0, 0.9},
{10000, 0.987853825092316, 1.0, 1.0, 0.9},
{20000, 0.987853825092316, 1.0, 1.0, 1.0},
{30000, 0.987853825092316, 1.0, 1.0, 1.0},
{40000, 0.0, 0.0, 0.0, 1.0},
{50000, 1.0, 0.0, 0.0, 1.0},
{60000, 1.0, 0.999206542968750, 0.0, 1.0},
{70000, 1.0, 1.0, 1.0, 1.0}
};
vtkSmartPointer<vtkPiecewiseFunction> alphaChannelFunc = vtkSmartPointer<vtkPiecewiseFunction>::New();
vtkSmartPointer<vtkColorTransferFunction> colorFunc = vtkSmartPointer<vtkColorTransferFunction>::New();
for(int i = 0; i < sizeof(skinOnBlueMap)/(5*sizeof(double)); i++)
{
colorFunc->AddRGBPoint(skinOnBlueMap[i][0], skinOnBlueMap[i][1], skinOnBlueMap[i][2], skinOnBlueMap[i][3]);
alphaChannelFunc->AddPoint(skinOnBlueMap[i][0], skinOnBlueMap[i][4]);
}
vtkSmartPointer<vtkVolumeProperty> volumeProperty = vtkSmartPointer<vtkVolumeProperty>::New();
volumeProperty->SetColor(colorFunc);
volumeProperty->SetInterpolationTypeToLinear();
volumeProperty->SetScalarOpacity(alphaChannelFunc);
vtkSmartPointer<vtkVolume> VTKvolume = vtkSmartPointer<vtkVolume>::New();
VTKvolume->SetMapper(volumeGPUmapper);
VTKvolume->SetProperty(volumeProperty);
Those scalar values look strange to me. That could be the reason your results don't look correct, but to be sure we would need the image stack and the rendered result.
The scalar values are not even inside the range of the unsigned short type that is generally used for the voxels.
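For a mammography volume the control points have to lie inside the data's actual scalar range, typically 12-bit (0..4095) or 16-bit unsigned. As a hedged starting point, here is a sketch of a grayscale-to-white ramp; the breakpoints (500, 1500, 2500, 3500) and the tissue labels are assumptions and must be tuned against your data's histogram:

```cpp
#include <cstddef>

// Hypothetical control points for a 12-bit mammography volume (scalars 0..4095).
// Each row is {scalar, R, G, B, opacity}. The breakpoints and tissue labels are
// assumptions; tune them against the actual histogram of your DICOM data (the
// rescale slope/intercept shifts the range). Unlike the 0..70000 values in the
// question, every scalar here fits the voxel type.
double mammoMap[6][5] = {
    {   0.0, 0.0, 0.0, 0.0, 0.00},  // background/air: black, fully transparent
    { 500.0, 0.3, 0.2, 0.2, 0.02},  // fatty tissue: faint, nearly transparent
    {1500.0, 0.8, 0.6, 0.5, 0.10},  // fibroglandular tissue
    {2500.0, 1.0, 0.9, 0.8, 0.40},  // dense tissue
    {3500.0, 1.0, 1.0, 1.0, 0.80},  // calcifications: bright, nearly opaque
    {4095.0, 1.0, 1.0, 1.0, 1.00}
};

// Sanity check: scalars within unsigned short range, RGB/opacity within [0,1].
bool tableIsValid()
{
    for (std::size_t i = 0; i < 6; ++i) {
        if (mammoMap[i][0] < 0.0 || mammoMap[i][0] > 65535.0) return false;
        for (std::size_t j = 1; j < 5; ++j)
            if (mammoMap[i][j] < 0.0 || mammoMap[i][j] > 1.0) return false;
    }
    return true;
}
```

Each row feeds the same loop as in the question: colorFunc->AddRGBPoint(row[0], row[1], row[2], row[3]) and alphaChannelFunc->AddPoint(row[0], row[4]).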
I want to transition smoothly between different colors (rather than just toggling them) by pressing the 't' key.
Below is my code, which toggles the colors at once, but I want a smooth color transition:
case 't':
    // code for color transition
    changeColor += 1;
    if(changeColor>8) // toggling between 9 different colors
        changeColor=0;
    break;
Color storing code:
GLfloat diffColors[9][4] = { {0.3, 0.8, 0.9, 1.0},
{1, 0, 0, 1},
{0, 1, 0, 1},
{0, 0, 1, 1},
{0.5, 0.5, 0.9, 1},
{0.2, 0.5, 0.5, 1},
{0.5, 0.5, 0.9, 1},
{0.9, 0.5, 0.5, 1},
{0.3, 0.8, 0.5, 1} };
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, diffColors[changeColor]);
Change the changeColor parameter to float and, instead of incrementing by 1, add some small value like 0.1 or smaller, depending on how quickly you want the colors to change and how often your event fires.
case 't':
    // code for color transition
    changeColor += 0.025;
    break;
Use linear interpolation to compute the color based on parameter changeColor.
//---------------------------------------------------------------------------
GLfloat diffColors[9][4] =
{
{0.3, 0.8, 0.9, 1.0},
{1.0, 0.0, 0.0, 1.0},
{0.0, 1.0, 0.0, 1.0},
{0.0, 0.0, 1.0, 1.0},
{0.5, 0.5, 0.9, 1.0},
{0.2, 0.5, 0.5, 1.0},
{0.5, 0.5, 0.9, 1.0},
{0.9, 0.5, 0.5, 1.0},
{0.3, 0.8, 0.5, 1.0}
};
GLfloat changeColor=0.0; // This must be float !!!
//---------------------------------------------------------------------------
void set_color()
{
    int i;
    const int N=9;       // number of colors in the table
    const GLfloat n=N;   // wrap range of the parameter (N+1 here would read past the table)
    float *c0,*c1,c[4],t;
    // bound the parameter to <0..N)
    t=changeColor;       // renamed so there is less to type
    while (t>=n) t-=n;
    while (t<0.0) t+=n;
    changeColor=t;       // store the bounded parameter back
    i=floor(t);          // integer part: index of the first color
    t-=i;                // keep just the fractional part <0..1>
    // get the two neighboring colors for t
    c0=diffColors[i]; i++; if (i>=N) i=0; // wrap so color 8 blends back to color 0
    c1=diffColors[i];
    // interpolate
    for (i=0;i<4;i++) c[i]=c0[i]+((c1[i]-c0[i])*t);
    //glColor4fv(c);
    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, c);
}
//---------------------------------------------------------------------------
So the idea is to split changeColor into its integer and fractional/decimal parts. The integer part tells you between which 2 colors in your table you are, and the fractional part <0..1> tells you how far you are from the one color to the other.
Linear interpolation of value x between 2 values x0,x1 and parameter t=<0..1> is like this:
x = x0 + (x1-x0)*t
If you look at the code above, it does the same for c, c0, c1, t... For this to work, the first chunk of code (the case 't': handler that adds to the parameter) must be executed repeatedly, for example from a timer, and it must also trigger rendering. If it is in a key handler that is called only once per key hit (no autorepeat), it will not work; move the addition into a timer, or into the redraw event if you are continuously redrawing the screen. If not even that is happening, you need to change the architecture of your app.
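A minimal sketch of that timer-driven architecture, with the update logic pulled into a plain struct so it is framework-agnostic; the GLUT names in the comments (glutTimerFunc, glutPostRedisplay) are just one assumed way to drive it:

```cpp
// Sketch of the timer-driven update described above: the 't' key handler only
// sets a flag, and a periodic tick advances the interpolation parameter.
struct ColorAnim {
    bool  transitioning = false;  // toggled in the 't' key handler
    float changeColor   = 0.0f;   // interpolation parameter, wraps at N
    int   N             = 9;      // number of colors in the table

    // Call this from a timer (e.g. glutTimerFunc every ~16 ms), NOT from the
    // key handler, so the parameter keeps advancing after the key is released.
    void tick(float step = 0.025f) {
        if (!transitioning) return;
        changeColor += step;
        if (changeColor >= static_cast<float>(N))
            changeColor -= static_cast<float>(N);  // wrap around the table
        // glutPostRedisplay();  // request a redraw each tick
    }
};
```

set_color() then reads changeColor each frame and interpolates as shown above.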
So this is how I solved it.
case 't':
    // code for color transition
    changeColor=8; // I am doing the color transition on the 9th color
    if(initialValue>=1.0)
        initialValue=0.1;
    initialValue+=0.01;
    break;
Color storing code:
GLfloat diffColors[9][4] = { {initialValue, 0.5, 0.9, 1.0},
{initialValue, 1.0, 0.0, 0.0},
{initialValue, 0.0, 1.0, 0.0},
{initialValue, 0.8, 0.5, 0.8},
{initialValue, 0.5, 0.5, 0.9},
{initialValue, 0.9, 0.9, 0.5},
{initialValue, 0.5, 0.7, 0.9},
{initialValue, 0.9, 0.5, 0.5},
{initialValue, 0.7, 0.3, 0.5}};
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, diffColors[changeColor]);
I added two .eval() calls just in case. I get no compilation error and no runtime warning, just a segfault.
Thanks for helping me fix this.
Test:
#include <Eigen/Eigen>
#include <iostream>
using namespace Eigen;
int main() {
Matrix<float, Dynamic, Dynamic> mat_b;
Matrix<float, Dynamic, Dynamic> mat_c;
mat_b << 1.0, 0.0, 0.5, 0.5,
0.0, 1.0, 0.5, 0.5,
1.0, 0.0, 1.0, 0.0,
0.0, 1.0, 0.0, 1.0;
mat_c << 0.0, 0.0, 0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 1.0,
1.0, 0.0, 1.0, 0.0, 0.0, 0.0,
1.0, 0.0, 0.0, 1.0, 0.0, 0.0;
std::cout << (mat_b.transpose().eval() * mat_c).eval() << "\n";
}
Result:
Segmentation fault (core dumped)
As stated in the documentation:
The comma initializer
Eigen offers a comma initializer syntax which allows the user to easily set all the coefficients of a matrix, vector or array. Simply list the coefficients, starting at the top-left corner and moving from left to right and from the top to the bottom. The size of the object needs to be specified beforehand. If you list too few or too many coefficients, Eigen will complain.
Emphasis is mine. If you expect the Matrix constructor to deduce the size from your formatting, that is simply not possible in C++. The default-constructed dynamic matrices here are 0x0, so the comma initializer writes past the end of an empty matrix, which is undefined behavior (a debug build with Eigen's assertions enabled would have caught it). You have to size the matrices before using <<.
I render 3 images (left view, center view, right view) with a 90° horizontal FoV each and map them onto 3 grids that form an overall image (basically like the left, front and right faces of a cubemap texture). Therefore the 3 individual images have to fit together somehow.
Everything works fine if I define the projection matrix for each image like this:
gluPerspective(90, 1, 0.1, 500)
However, since I'm trying to create an image with a 210° (horizontal) by 60° (vertical) field of view, I would like to define it like this:
gluPerspective(60, 1.5, 0.1, 500)
But with this, the 3 images don't fit together in terms of their image content, FoV, frustum, or whatever.
So my question is: do I have to use an aspect ratio of 1 if I want the images to fit together? And if so, why?
Some additional information:
I render the images into an FBO whose resolution has the same aspect ratio as my horizontal/vertical FoV.
The viewport is defined like this: glViewport(0, 0, width, height);
The modelview definition for the 3 views:
left-view: gluLookAt(0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0);
center-view: gluLookAt(0.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 1.0);
right-view: gluLookAt(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0);
This was a sufficient explanation for me:
http://ledin.cs.sonoma.edu/CS375_OpenGL_Slides/Perspective_gluFrustum.pdf
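The gist of those slides, as a small sketch: gluPerspective takes the vertical FoV, and the horizontal half-angle satisfies tan(fovx/2) = aspect * tan(fovy/2). Views rotated 90° apart (which is what the axis-aligned gluLookAt calls give you) share a frustum side plane only when that half-angle is exactly 45°, i.e. when the three image planes are faces of a cube:

```cpp
#include <cmath>

// Half-width of the image plane at distance nearz for gluPerspective(fovy,
// aspect, nearz, ...). Adjacent views rotated 90 deg apart tile seamlessly
// only when this equals nearz (i.e. a 45 deg half-angle on each side).
double half_width(double fovy_deg, double aspect, double nearz)
{
    const double PI = 3.14159265358979323846;
    return nearz * aspect * std::tan(fovy_deg * 0.5 * PI / 180.0);
}
```

half_width(90, 1, 0.1) gives 0.1 == near (45° per side, the three planes form a cube and the edges coincide), while half_width(60, 1.5, 0.1) gives about 0.0866, so the side planes of adjacent views rotated 90° apart no longer coincide and the image content cannot match at the seams.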
I'm trying to use evaluators to create a plane:
void Plane::draw(float texS, float texT)
{
float div = v.at(0);
GLfloat ctrlpoints[4][3] = {
{-0.5, 0.0, 0.5}, {-0.5, 0.0 ,-0.5},
{0.5, 0.0, 0.5}, {0.5, 0.0, -0.5}};
GLfloat texturepoints[4][2] = {
{0.0, 0.0}, {0.0, 1.0/texT},
{1.0/texS, 0.0}, {1.0/texS, 1.0/texT}};
glMap2f(GL_MAP2_VERTEX_3, 0.0, 1.0, 3, 2, 0.0, 1.0, 2 * 3, 2, &ctrlpoints[0][0]);
glMap2f(GL_MAP2_TEXTURE_COORD_2, 0.0, 1.0, 2, 2, 0.0, 1.0, 2 * 2, 2, &texturepoints[0][0]);
glEnable(GL_MAP2_VERTEX_3);
glEnable(GL_MAP2_TEXTURE_COORD_2);
glEnable(GL_AUTO_NORMAL);
glMapGrid2f(div, 0.0, 1.0, div, 0.0, 1.0);
glEvalMesh2(GL_FILL,0, div, 0, div);
}
It displays the plane correctly: it gives me a 50*50 grid, for example, and the texture I apply to it is also displayed properly. However, when I try to apply a golden appearance to it, I just get a dull brown color.
I know I could get what I want by creating a rectangle with a quad or triangle strip, but the point here is to use evaluators.
One answer I found said that evaluators compute normals automatically once GL_AUTO_NORMAL is enabled, and that this was the only necessary instruction. But even then, the author of that question couldn't do what he wanted.
And I do have GL_NORMALIZE enabled in the initialization.
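For what it's worth, a dull brown is the classic symptom of a missing specular component and shininess, not of bad normals. A sketch, assuming fixed-function lighting is enabled; these are the commonly cited gold coefficients (e.g. from the old teapots.c demo), applied with glMaterialfv/glMaterialf before the evaluator mesh is drawn:

```cpp
// Commonly cited gold material coefficients. Without GL_SPECULAR and
// GL_SHININESS the diffuse term alone looks brown. Intended usage:
//   glMaterialfv(GL_FRONT, GL_AMBIENT,   goldAmbient);
//   glMaterialfv(GL_FRONT, GL_DIFFUSE,   goldDiffuse);
//   glMaterialfv(GL_FRONT, GL_SPECULAR,  goldSpecular);
//   glMaterialf (GL_FRONT, GL_SHININESS, goldShininess);
float goldAmbient[4]  = {0.24725f,  0.1995f,   0.0745f,   1.0f};
float goldDiffuse[4]  = {0.75164f,  0.60648f,  0.22648f,  1.0f};
float goldSpecular[4] = {0.628281f, 0.555802f, 0.366065f, 1.0f};
float goldShininess   = 51.2f;  // 0..128 in fixed-function OpenGL

// Sanity check: all components in [0,1], shininess in the legal range.
bool materialValid()
{
    for (int i = 0; i < 4; ++i) {
        if (goldAmbient[i]  < 0.0f || goldAmbient[i]  > 1.0f) return false;
        if (goldDiffuse[i]  < 0.0f || goldDiffuse[i]  > 1.0f) return false;
        if (goldSpecular[i] < 0.0f || goldSpecular[i] > 1.0f) return false;
    }
    return goldShininess >= 0.0f && goldShininess <= 128.0f;
}
```

The light source also needs a specular component (GL_SPECULAR on the light) for the highlight to appear at all.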
Ok, so I have a model class that contains a pointer to (what will be) an array of point3 objects:
point3* _vertices_colors;
point3 is defined by the following typedef:
typedef GLfloat point3[3];
This essentially makes an array of point3 objects an array of arrays. Then, in a derived class's constructor, I allocate memory for the number of vertices and colors I want to store as follows:
_vertices_colors = new point3[16];
This means my object stores 8 vertices, each with its own color. I then define the following array on the stack, ready to copy to the pointer:
point3 verticesColors[] = {
{1.0, 1.0, 1.0}, {1.0, 0.0, 0.0},
{-1.0, 1.0, 1.0}, {1.0, 0.0, 0.0},
{-1.0, -1.0, 1.0},{1.0, 0.0, 0.0},
{1.0, -1.0, 1.0},{1.0, 0.0, 0.0},
{1.0, 1.0, -1.0}, {1.0, 0.0, 0.0},
{-1.0, 1.0, -1.0}, {1.0, 0.0, 0.0},
{-1.0, -1.0, -1.0},{1.0, 0.0, 0.0},
{1.0, -1.0, -1.0},{1.0, 0.0, 0.0}
};
Then, I use a for loop to copy to the array on heap:
for(int i = 0; i < 16; i++)
{
*_vertices_colors[i,0] = *verticesColors[i, 0];
*_vertices_colors[i,1] = *verticesColors[i, 1];
*_vertices_colors[i,2] = *verticesColors[i, 2];
printf("%15f", *_vertices_colors[i,0]);
printf("\t");
printf("%15f", *_vertices_colors[i,1]);
printf("\t");
printf("%15f", *_vertices_colors[i,2]);
printf("\n");
}
However, this appears to assign 1.0, 1.0, -1.0 to each of the 16 rows of the array. I've tried other ways of assigning the pointer to the array, for example the line:
_vertices_colors = verticesColors;
Since verticesColors decays to a pointer to its first element, I thought this would work, but it produces the same results. I also tried memcpy:
memcpy(_vertices_colors, verticesColors, sizeof(_vertices_colors));
But this produces some uncontrollable results: it assigns 1.0 to each first column and very large negative values to the rest. Can anyone see why my first method doesn't work?
This
*_vertices_colors[i,0] = *verticesColors[i, 0];
*_vertices_colors[i,1] = *verticesColors[i, 1];
*_vertices_colors[i,2] = *verticesColors[i, 2];
is equivalent to
*_vertices_colors[0] = *verticesColors[0];
*_vertices_colors[1] = *verticesColors[1];
*_vertices_colors[2] = *verticesColors[2];
You used the sequence operator , in the array subscript, which yields the last value of the sequence: in this case 0, 1 and 2.
Multi-dimensional arrays are accessed as
_vertices_colors[i][0] = verticesColors[i][0];
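A corrected version of the copy, as a sketch (plain float stands in for GLfloat so it is self-contained): index with [i][j], and if you prefer memcpy, the size must be count * sizeof(point3), not sizeof of the pointer, which is what broke the memcpy attempt.

```cpp
#include <cstddef>
#include <cstring>

typedef float point3[3];  // stand-in for the GLfloat typedef in the question

// Element-wise copy with proper [i][j] indexing, not the comma operator.
void copyVertices(point3* dst, const point3* src, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        for (std::size_t j = 0; j < 3; ++j)
            dst[i][j] = src[i][j];
    // Equivalent one-liner. Note the size: count * sizeof(point3).
    // sizeof(dst) would be the size of the POINTER (8 bytes on most systems),
    // which is why the memcpy in the question copied almost nothing:
    // std::memcpy(dst, src, count * sizeof(point3));
}
```

With _vertices_colors = new point3[16], the call is copyVertices(_vertices_colors, verticesColors, 16).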