Rendering Multiline Text with NVPath Extension and Pango

I'm using Pango to lay out my text and NV Path to render the glyphs.
I'm having difficulty finding the correct methods for getting per-glyph positions. As you can see, at the moment I'm calculating these values from the line and glyph indexes.
But Pango has better methods for this, like per-glyph, per-line, and extent queries. My problem is that these methods have no documentation, and I wasn't able to find any samples.
How can I get correct glyph positions from Pango for this type of application?
std::vector<uint32_t> glyphs;
std::vector<GLfloat> positions;
int lineCount = pango_layout_get_line_count( pangoLayout );
for ( int l = 0; l < lineCount; ++l )
{
    PangoLayoutLine* line = pango_layout_get_line_readonly( pangoLayout, l );
    GSList* runs = line->runs;
    float xOffset = 0.0f;
    while( runs )
    {
        PangoLayoutRun* run = static_cast<PangoLayoutRun*>( runs->data );
        glyphs.resize( run->glyphs->num_glyphs, 0 );
        positions.resize( run->glyphs->num_glyphs * 2, 0 );
        for( int g = 0; g < run->glyphs->num_glyphs; ++g )
        {
            glyphs[g] = run->glyphs->glyphs[g].glyph;
            // Need Correct Values Here
            positions[ g * 2 + 0 ] = xOffset * NVPATH_DEFUALT_EMSCALE;
            positions[ g * 2 + 1 ] = (float)l * NVPATH_DEFUALT_EMSCALE;
            xOffset += PANGO_PIXELS( run->glyphs->glyphs[g].geometry.width ) / getFontSize();
        }
        const Font::RefT font = getFont( pango_font_description_get_family( pango_font_describe( run->item->analysis.font ) ) );
        glEnable( GL_STENCIL_TEST );
        glStencilFillPathInstancedNV( run->glyphs->num_glyphs,
                                      GL_UNSIGNED_INT,
                                      &glyphs[0],
                                      font->nvPath,
                                      GL_PATH_FILL_MODE_NV,
                                      0xFF,
                                      GL_TRANSLATE_2D_NV,
                                      &positions[0] );
        glStencilFunc( GL_NOTEQUAL, 0, 0xFF );
        glStencilOp( GL_KEEP, GL_KEEP, GL_ZERO );
        glColor3f( 0.0, 0.0, 0.0 );
        glCoverFillPathInstancedNV( run->glyphs->num_glyphs,
                                    GL_UNSIGNED_INT,
                                    &glyphs[0],
                                    font->nvPath,
                                    GL_BOUNDING_BOX_OF_BOUNDING_BOXES_NV,
                                    GL_TRANSLATE_2D_NV,
                                    &positions[0] );
        glDisable( GL_STENCIL_TEST );
        runs = runs->next;
    }
}
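One option, if it helps, is to let Pango do the position bookkeeping through PangoLayoutIter, which exposes a per-line baseline and per-run extents; the per-glyph advances and offsets then come from PangoGlyphGeometry. A hedged sketch, assuming the same pangoLayout as above (note that Pango's y-axis points down, so the values may need a sign flip for NV_path_rendering's em space):
PangoLayoutIter* iter = pango_layout_get_iter( pangoLayout );
do
{
    PangoLayoutRun* run = pango_layout_iter_get_run_readonly( iter );
    if ( !run )
        continue; // NULL marks the end of a line
    // Baseline of the current line, in Pango units from the layout top.
    int baseline = pango_layout_iter_get_baseline( iter );
    // The run's logical extents give the pen's starting x within the layout.
    PangoRectangle runExtents;
    pango_layout_iter_get_run_extents( iter, NULL, &runExtents );
    int penX = runExtents.x;
    for ( int g = 0; g < run->glyphs->num_glyphs; ++g )
    {
        const PangoGlyphGeometry& geom = run->glyphs->glyphs[g].geometry;
        // Glyph origin = pen position plus the glyph's own offsets.
        double x = pango_units_to_double( penX + geom.x_offset );
        double y = pango_units_to_double( baseline + geom.y_offset );
        // ... store x and y (suitably scaled) as this glyph's translation ...
        penX += geom.width; // advance by the glyph's advance width
    }
}
while ( pango_layout_iter_next_run( iter ) );
pango_layout_iter_free( iter );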


How to get the position of a 2D DICOM image slice in 3D surface-rendered output in VTK

I am writing a VTK program in which, when I enter a 2D DICOM image position (patient) (kindly refer to the given image for better understanding), I need to get that particular slice in the 3D surface-rendered output.
For a volume-rendered 3D image this can be achieved by using vtkImageData, vtkImageMapToColors, and vtkImageActor.
My question is how to do it in the surface-rendered output. Does anyone know the concept for that? If anybody knows, please answer. If my question is incorrect or not understandable, kindly share your opinion.
For clear understanding I am showing a sample picture: consider that my 3D output will be like the first picture below, and when I enter the Image Position (Patient) in a text box and click the OK button, the corresponding slice should be shown in the 3D image, as in the second picture.
Please inform me if my question is not understandable.
I got stuck here; I don't even know whether my code is right. Here is my code:
ExtractVOI->SetInput(reader1->GetOutput());//VOI extractor
ExtractVOI->SetVOI(1,0,0,0,1,0); // I have given the Image Orientation (Patient) as the SetVOI value
////====CREATE LookUpTable
tableax1->SetTableRange(0.0, 4096.0);
tableax1->SetValueRange(0.0, 1.0);
tableax1->SetSaturationRange(0.0, 0.0);
tableax1->SetRampToSCurve();
tableax1->SetAlphaRange(0.0, 0.08);
tableax1->Build();
//====CREATE TEXTURE
planesourceax1->SetXResolution(1);
planesourceax1->SetYResolution(1);
planesourceax1->SetOrigin(0,0,0);
planesourceax1->SetPoint1(xg, yg, zg); // I have given the value of Image Position (Patient), taken from a textbox, as the point
planesourceax1->Update();
vtkSmartPointer<vtkPolyDataMapper> mapax1 = vtkSmartPointer<vtkPolyDataMapper>::New();
mapax1->SetInputConnection(planesourceax1->GetOutputPort());
mapax1->UpdateWholeExtent();
textureax1->SetInputConnection(ExtractVOI->GetOutputPort());
textureax1->InterpolateOn();
textureax1->SetLookupTable(tableax1);
textureax1->UpdateWholeExtent();
//===PASS TO ACTOR
actorax1->SetMapper(mapax1);
actorax1->GetMapper()->SetResolveCoincidentTopologyToPolygonOffset();
actorax1->GetMapper()->SetResolveCoincidentTopologyPolygonOffsetParameters(0.1, -1.0);
actorax1->SetTexture(textureax1);
renderer->AddActor(actorax1);
renderWindow->Render();
but I am not getting any output.
I also tried:
static double axialElements[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1 };
resliceax1->SetInputConnection(reader->GetOutputPort());
resliceax1->SetOutputDimensionality(2);
vtkSmartPointer<vtkMatrix4x4> reslicematrixax1 = vtkSmartPointer<vtkMatrix4x4>::New();
reslicematrixax1->DeepCopy(axialElements);
resliceax1->SetResliceAxes(reslicematrixax1);
resliceax1->SetResliceAxesOrigin(0.0, 0.0, 0.0);
resliceax1->Update();
extractaxpos1->RemoveAllInputs();
extractaxpos1->SetInputConnection(resliceax1->GetOutputPort());
////====CREATE LUT
tableax1->SetTableRange(0.0, 4096.0);
tableax1->SetValueRange(0.0, 1.0);
tableax1->SetSaturationRange(0.0, 0.0);
tableax1->SetRampToSCurve();
tableax1->SetAlphaRange(0.0, 0.08);
tableax1->Build();
//====CREATE TEXTURE
planesourceax1->SetXResolution(1);
planesourceax1->SetYResolution(1);
planesourceax1->SetOrigin(0,0,0);
planesourceax1->SetPoint1((xval/20 + xval/32),(yval/20 + yval/32),(zval/20 + zval/32)); // this is where I put the values taken from tag (0020,0032), divided as shown
//planesourceax1->SetPoint2(fBounds[0] , fBounds[3], fBounds[4]);
planesourceax1->Update();
vtkSmartPointer<vtkPolyDataMapper> mapax1 = vtkSmartPointer<vtkPolyDataMapper>::New();
mapax1->SetInputConnection(planesourceax1->GetOutputPort());
mapax1->UpdateWholeExtent();
textureax1->SetInputConnection(extractaxpos1->GetOutputPort());
textureax1->InterpolateOn();
textureax1->SetLookupTable(tableax1);
textureax1->UpdateWholeExtent();
//===PASS TO ACTOR
actorax1->SetMapper(mapax1);
actorax1->GetMapper()->SetResolveCoincidentTopologyToPolygonOffset();
actorax1->GetMapper()->SetResolveCoincidentTopologyPolygonOffsetParameters(0.1, -1.0);
actorax1->SetTexture(textureax1);
resliceax1->SetResliceAxesOrigin(0.0, 0.0, 0.0);
actorax1->SetPosition((xval/20 + xval/32),(yval/20 + yval/32),(zval/20 + zval/32));//I made the same changes here also
planesourceax1->SetOrigin(fBoundsUpdated[0], fBoundsUpdated[2], pDoc->fBounds[4]);
planesourceax1->SetPoint1(fBoundsUpdated[1] , fBoundsUpdated[2], pDoc->fBounds[4]);
planesourceax1->SetPoint2(fBoundsUpdated[0] , fBoundsUpdated[3], pDoc->fBounds[4]);
planesourceax1->Update();
but it is not cutting at the position where the slice is; it is cutting at a different position.
If you want to know which element decides the cutting direction, please look at this link:
https://github.com/Kitware/VTK/blob/master/Examples/ImageProcessing/Cxx/ImageSlicing.cxx
static double axialElements[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1 };
...
vtkSmartPointer<vtkMatrix4x4> resliceAxes = vtkSmartPointer<vtkMatrix4x4>::New();
resliceAxes->DeepCopy(axialElements);
// Set the point through which to slice
resliceAxes->SetElement(0, 3, 0);
resliceAxes->SetElement(1, 3, 0);
resliceAxes->SetElement(2, 3, 70);
The 4x4 matrix sets the reslice direction, and SetElement sets the reslice origin. With the settings in this code, the resliced XY image starts at Z = 70.
vtkSmartPointer<vtkImageReslice> reslice = vtkSmartPointer<vtkImageReslice>::New();
reslice->SetInputData(m_vtkVolumeImageData);
reslice->SetOutputDimensionality(2);
reslice->SetResliceAxes(resliceAxes);
reslice->SetInterpolationModeToLinear();
reslice->Update();
This code makes the reslice data; the result dimensionality is 2D (a plane).
Once you have set up this sequence, connect the reslice data to vtkImageMapToColors to paint the reslice image, and finally connect a mapper and an actor to show it together with the volume.
You need to set the actor's location yourself, because the reslice data may not carry any location info.
If the plane location does not change, use vtkDataSetMapper and vtkActor, not vtkImageActor.
Or you can just use a widget; I recommend vtkImagePlaneWidget. It is easy and very powerful (see the sketch below).
I hope this helps. If you need the full code, please tell me.
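A hedged sketch of the vtkImagePlaneWidget suggestion, assuming image is your vtkImageData and interactor is your render window interactor:
vtkSmartPointer<vtkImagePlaneWidget> planeWidget = vtkSmartPointer<vtkImagePlaneWidget>::New();
planeWidget->SetInteractor( interactor );
planeWidget->SetInputData( image );        // VTK 6+; use SetInput on VTK 5
planeWidget->SetPlaneOrientationToZAxes(); // axial slicing
planeWidget->SetSliceIndex( 70 );          // choose the slice to show
planeWidget->On();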
Given a DICOM slice, the position goes directly into SetResliceAxesOrigin, whereas the texture primitive is shifted slightly so that VTK's center-of-pixel units correspond to OpenGL bounds. Then the image dimensions and spacing along the X and Y axes are used to calculate the extent of the primitive (track the prints of the DICOM tags). Directions can currently be omitted, since vtkImageData doesn't have them (https://discourse.vtk.org/t/proposal-to-add-orientation-to-vtkimagedata-feedback-wanted/120/2), but they are most likely (hopefully) baked into the image data during reading.
You can verify the rendering by comparing with volume or surface models.
For handling medical image data, I recommend ITK (https://itk.org/). If you are not using ITK, just read the image with VTK and treat the reader's output as the variable image in the code below.
Sample print:
0020|0037 1\-2.58172e-10\3.63164e-11\-1.26711e-11\-0.187259\-0.982311
Warning, direction will be lost
0018|0050 1
0028|0030 1\1
0020|0032 -252.69\-118.273\305.807
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "vtkInteractorStyleTrackballCamera.h"
#include "vtkImplicitPlaneWidget2.h"
#include "vtkImplicitPlaneRepresentation.h"
#include "vtkCommand.h"
#include "vtkPlane.h"
#include "vtkImageMapToColors.h"
#include "vtkSmartPointer.h"
#include "vtkPolyData.h"
#include "vtkPoints.h"
#include "vtkCellArray.h"
#include "vtkFloatArray.h"
#include "vtkImageReslice.h"
#include "vtkPiecewiseFunction.h"
#include "vtkColorTransferFunction.h"
#include "vtkGPUVolumeRayCastMapper.h"
#include "vtkVolumeProperty.h"
#include "vtkTexture.h"
#include "vtkLookupTable.h"
#include "vtkImageMarchingCubes.h"
vtkSmartPointer<vtkPolyData> MyCreateSimplePlane( const double * corners )
{
    vtkSmartPointer<vtkPolyData> ret = vtkSmartPointer<vtkPolyData>::New();
    vtkSmartPointer<vtkFloatArray> tcoords = vtkSmartPointer<vtkFloatArray>::New();
    tcoords->SetNumberOfComponents( 2 );
    tcoords->InsertNextTuple2( 0, 0 );
    tcoords->InsertNextTuple2( 1, 0 );
    tcoords->InsertNextTuple2( 1, 1 );
    tcoords->InsertNextTuple2( 0, 1 );
    ret->GetPointData()->SetTCoords( tcoords );
    vtkSmartPointer<vtkFloatArray> floatArray = vtkSmartPointer<vtkFloatArray>::New();
    floatArray->SetNumberOfComponents( 3 );
    for ( int i = 0; i < 4; ++i )
    {
        floatArray->InsertNextTuple3( corners[3*i], corners[3*i+1], corners[3*i+2] );
    }
    vtkSmartPointer<vtkPoints> points = vtkSmartPointer<vtkPoints>::New();
    points->SetData( floatArray );
    ret->SetPoints( points );
    vtkSmartPointer<vtkCellArray> cells = vtkSmartPointer<vtkCellArray>::New();
    cells->InsertNextCell( 4 );
    cells->InsertCellPoint( 0 );
    cells->InsertCellPoint( 1 );
    cells->InsertCellPoint( 2 );
    cells->InsertCellPoint( 3 );
    ret->SetPolys( cells );
    return ret;
}
void mainactual( void )
{
    bool doVolumeRender = true;
    bool doSurfaceRender = false;
    typedef itk::Image<short,3> Image;
    typedef itk::VTKImageExport< Image > Export;
    typedef itk::ImageFileReader<Image> Reader;
    // read image using ITK (this is essentially an nrrd-converted Visible Human CT "Male")
    // https://mri.radiology.uiowa.edu/VHDicom/
    // prefer using ITK for medical data
    Reader::Pointer reader = Reader::New();
    reader->SetFileName("D:/tmp/vhp_male.nrrd");
    reader->Update();
    // if input image orientation is not (1,0,0\0,1,0), user should handle
    // the respective transform by using, e.g., vtkVolume::SetOrientation
    Image::DirectionType dir = reader->GetOutput()->GetDirection();
    std::cout << "0020|0037" << " ";
    std::cout << dir(0,0) << "\\" << dir(1,0) << "\\" << dir(2,0) << "\\";
    std::cout << dir(0,1) << "\\" << dir(1,1) << "\\" << dir(2,1);
    std::cout << std::endl;
    if ( !dir.GetVnlMatrix().is_identity( 0.0001 ) )
        std::cout << "Warning, direction will be lost" << std::endl;
    // import ITK image into VTK
    vtkSmartPointer<vtkImageImport> imp = vtkSmartPointer<vtkImageImport>::New();
    Export::Pointer exp = Export::New();
    exp->SetInput( reader->GetOutput() );
    imp->SetUpdateInformationCallback(exp->GetUpdateInformationCallback());
    imp->SetPipelineModifiedCallback(exp->GetPipelineModifiedCallback());
    imp->SetWholeExtentCallback(exp->GetWholeExtentCallback());
    imp->SetSpacingCallback(exp->GetSpacingCallback());
    imp->SetOriginCallback(exp->GetOriginCallback());
    imp->SetScalarTypeCallback(exp->GetScalarTypeCallback());
    imp->SetNumberOfComponentsCallback(exp->GetNumberOfComponentsCallback());
    imp->SetPropagateUpdateExtentCallback(exp->GetPropagateUpdateExtentCallback());
    imp->SetUpdateDataCallback(exp->GetUpdateDataCallback());
    imp->SetDataExtentCallback(exp->GetDataExtentCallback());
    imp->SetBufferPointerCallback(exp->GetBufferPointerCallback());
    imp->SetCallbackUserData(exp->GetCallbackUserData());
    imp->Update();
    // this is our vtkImageData counterpart as it was read using VTK
    vtkImageData * image = imp->GetOutput();
    // create render window
    vtkSmartPointer< vtkRenderer > ren = vtkSmartPointer< vtkRenderer >::New();
    vtkSmartPointer< vtkRenderWindow > rw = vtkSmartPointer< vtkRenderWindow >::New();
    rw->AddRenderer( ren );
    rw->SetSize( 1024, 1024 );
    // create interactor used later
    vtkSmartPointer< vtkRenderWindowInteractor > ia = vtkSmartPointer<vtkRenderWindowInteractor>::New();
    ia->SetRenderWindow( rw );
    ia->SetInteractorStyle( vtkSmartPointer< vtkInteractorStyleTrackballCamera >::New() );
    ia->Initialize();
    // define cutplane early on
    vtkSmartPointer<vtkPlane> cutplane = vtkSmartPointer<vtkPlane>::New();
    cutplane->SetNormal( 0, 0, -1 );
    if ( doVolumeRender )
    {
        // set up some fancy volume rendering transfer function
        vtkSmartPointer<vtkPiecewiseFunction> pw = vtkSmartPointer<vtkPiecewiseFunction>::New();
        pw->AddPoint(-761.61130742049477, 0);
        pw->AddPoint(-40.042402826855096, 0);
        pw->AddPoint(353.54063604240287, 0.28817733990147787);
        pw->AddPoint(1091.5088339222616, 0.69458128078817727);
        pw->AddPoint(1763.8798586572439, 1);
        vtkSmartPointer<vtkPiecewiseFunction> gf = vtkSmartPointer<vtkPiecewiseFunction>::New();
        gf->AddPoint(-1024, 0);
        gf->AddPoint(-1007.6007067137809, 1);
        gf->AddPoint(-220.43462897526501, 0.78244274809160308);
        gf->AddPoint(697.92579505300364, 0.9007633587786259);
        gf->AddPoint(1157.1060070671379, 0.53435114503816794);
        vtkSmartPointer<vtkColorTransferFunction> cf = vtkSmartPointer<vtkColorTransferFunction>::New();
        cf->AddRGBPoint(-105.63957597173146, 0, 0, 0, 0.5, 0);
        cf->AddRGBPoint(255.14487632508826, 0.93333299999999997, 0, 0, 0.5, 0);
        cf->AddRGBPoint(353.54063604240287, 1, 0.90588199999999997, 0.66666700000000001, 0.5, 0);
        cf->AddRGBPoint(632.32862190812716, 1, 0.66666666666666663, 0, 0.5, 0);
        cf->AddRGBPoint(779.92226148409895, 1, 1, 1, 0.5, 0);
        // and make GPUVolumeRayCastMapper to render them
        typedef vtkGPUVolumeRayCastMapper Mapper;
        vtkSmartPointer< Mapper > mapper = vtkSmartPointer< Mapper >::New();
        mapper->SetInputData( image );
        vtkSmartPointer< vtkVolumeProperty > prop = vtkSmartPointer< vtkVolumeProperty >::New();
        prop->SetColor( cf );
        prop->SetScalarOpacity( pw );
        prop->SetGradientOpacity( gf );
        prop->SetInterpolationTypeToLinear();
        prop->SetDiffuse( 0 );
        prop->SetSpecular( 0.0 );
        prop->SetAmbient( 1 );
        mapper->SetUseDepthPass( 1 );
        mapper->SetUseJittering( 1 );
        mapper->SetAutoAdjustSampleDistances( 0 );
        vtkSmartPointer< vtkVolume > volume = vtkSmartPointer< vtkVolume >::New();
        volume->SetMapper( mapper );
        volume->SetProperty( prop );
        // clip the volume
        mapper->AddClippingPlane( cutplane );
        ren->AddVolume( volume );
    }
    if ( doSurfaceRender )
    {
        // do marching cubes polygons
        vtkSmartPointer<vtkImageMarchingCubes> mc = vtkSmartPointer<vtkImageMarchingCubes>::New();
        mc->SetComputeGradients( 0 );
        mc->SetComputeNormals( 0 );
        mc->SetComputeScalars( 0 );
        mc->SetInputData( image );
        mc->SetNumberOfContours( 1 );
        mc->SetValue( 0, 100.0 );
        mc->Update();
        vtkSmartPointer< vtkPolyDataMapper > surfmapper = vtkSmartPointer< vtkPolyDataMapper >::New();
        vtkSmartPointer< vtkActor > surf = vtkSmartPointer< vtkActor >::New();
        surf->SetMapper( surfmapper );
        surfmapper->SetInputData( mc->GetOutput() );
        surf->GetProperty()->SetAmbient( 0 );
        surf->GetProperty()->SetDiffuse( 1 );
        surf->GetProperty()->SetSpecular( 0 );
        surf->GetProperty()->SetColor( 0.5, 0.9, 0.1 );
        surfmapper->AddClippingPlane( cutplane );
        ren->AddActor( surf );
    }
    // do the image slice plane
    vtkSmartPointer< vtkImageReslice > slicer = vtkSmartPointer< vtkImageReslice >::New();
    slicer->SetInputData( image );
    vtkSmartPointer<vtkMatrix4x4> identity = vtkSmartPointer<vtkMatrix4x4>::New();
    //identity->Identity(); // not needed as the orientation is identity anyways
    slicer->SetResliceAxes( identity );
    slicer->SetOutputDimensionality( 2 );
    slicer->SetInterpolationModeToLinear();
    vtkSmartPointer< vtkTexture > texture = vtkSmartPointer< vtkTexture >::New();
    vtkSmartPointer< vtkLookupTable > table = vtkSmartPointer< vtkLookupTable >::New();
    table->SetNumberOfColors( 256 );
    table->SetHueRange( 0.0, 0.0 );
    table->SetSaturationRange( 0, 0 );
    table->SetValueRange( 0.0, 1.0 );
    table->SetAlphaRange( 1.0, 1.0 );
    table->SetUseBelowRangeColor( 1 );
    table->SetBelowRangeColor( 0, 0, 0, 0 );
    table->SetRange( -200, 200 );
    table->Build();
    texture->SetInputConnection( slicer->GetOutputPort() );
    texture->SetLookupTable( table );
    texture->SetColorModeToMapScalars();
    vtkSmartPointer< vtkPolyDataMapper > polymapper = vtkSmartPointer< vtkPolyDataMapper >::New();
    vtkSmartPointer< vtkActor > plane = vtkSmartPointer< vtkActor >::New();
    plane->SetMapper( polymapper );
    plane->GetProperty()->SetTexture( 0, texture );
    plane->GetProperty()->SetAmbient( 1 );
    plane->GetProperty()->SetDiffuse( 0 );
    plane->GetProperty()->SetSpecular( 0 );
    plane->GetProperty()->SetAmbientColor( 1, 1, 1 );
    plane->GetProperty()->SetEdgeColor( 1, 0, 0 );
    plane->GetProperty()->SetEdgeVisibility( 1 );
    ren->AddActor( plane );
    int extent[6];
    double origin[3];
    double spacing[3];
    image->GetOrigin( origin );
    image->GetSpacing( spacing );
    image->GetExtent( extent );
    {
        std::stringstream s;
        std::cout << "0018|0050" << " ";
        std::cout << spacing[2];
        std::cout << std::endl;
    }
    {
        std::cout << "0028|0030" << " ";
        std::cout << spacing[0] << "\\" << spacing[1];
        std::cout << std::endl;
    }
    ren->ResetCamera(); // before looping, hope there is either surface or volume
    int z = (extent[5]-extent[4])/2;
    // for ( ... )
    {
        // DICOM pixel corner
        double corner[3] = { origin[0], origin[1], origin[2] + (double)z * spacing[2] };
        std::cout << "0020|0032" << " ";
        std::cout << corner[0] << "\\" << corner[1] << "\\" << corner[2];
        std::cout << std::endl;
        slicer->SetResliceAxesOrigin( corner );
        double corners[12];
        // remove the half spacing to make DICOM / ITK / VTK origin to OpenGL origin
        corners[0] = corner[0] - 0.5*spacing[0];
        corners[1] = corner[1] - 0.5*spacing[1];
        corners[2] = corner[2]; // the slicing direction
        // +1 are to convert from DICOM pixel coordinates to OpenGL bounds
        const double vx[3] = { (double)(extent[1]-extent[0]+1)*spacing[0], 0.0, 0.0 };
        const double vy[3] = { 0.0, (double)(extent[3]-extent[2]+1)*spacing[1], 0.0 };
        for ( unsigned int u = 0; u < 3; ++u )
        {
            corners[ 3 + u ] = corners[u] + vx[u];
            corners[ 6 + u ] = corners[u] + vx[u] + vy[u];
            corners[ 9 + u ] = corners[u] + vy[u];
        }
        cutplane->SetOrigin( corner );
        vtkSmartPointer< vtkPolyData > polys = MyCreateSimplePlane( corners );
        polymapper->SetInputData( polys );
        ia->Render();
    }
    ia->Start();
}
* UPDATE *
Note: the direction must be processed separately, e.g., via SetUserTransform for both the plane and the volume, or by resampling the image data. The two images shown below are sliced similarly along the outermost image dimension, not along the world-space z-axis.
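A hedged sketch of that separate direction handling via SetUserTransform; here dir is a hypothetical 3x3 array holding the direction cosines from DICOM tag 0020|0037, and plane / volume are the slice actor and volume from the sample above:
vtkSmartPointer<vtkMatrix4x4> world = vtkSmartPointer<vtkMatrix4x4>::New();
world->Identity();
for ( int r = 0; r < 3; ++r )
    for ( int c = 0; c < 3; ++c )
        world->SetElement( r, c, dir[r][c] ); // direction cosines (assumed layout)
vtkSmartPointer<vtkTransform> transform = vtkSmartPointer<vtkTransform>::New();
transform->SetMatrix( world );
plane->SetUserTransform( transform );  // the textured slice actor
volume->SetUserTransform( transform ); // the volume from the same study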
void CMFCApplication1Dlg::setSliceImage()
{
    int extent[6];
    double spacing[3];
    double origin[3];
    // m_vtkVolumeImageData is the vtkImageData holding the slice images,
    // same as dicomReader->GetOutput();
    m_vtkVolumeImageData->GetExtent(extent);
    m_vtkVolumeImageData->GetSpacing(spacing);
    m_vtkVolumeImageData->GetOrigin(origin);
    double center[3];
    center[0] = origin[0] + spacing[0] * 0.5 * (extent[0] + extent[1]);
    center[1] = origin[1] + spacing[1] * 0.5 * (extent[2] + extent[3]);
    center[2] = origin[2] + spacing[2] * 0.5 * (extent[4] + extent[5]);
    // Matrices for axial, coronal, sagittal, oblique view orientations
    static double axialElements[16] = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1 };
    static double coronalElements[16] = {
        1, 0, 0, 0,
        0, 0, 1, 0,
        0,-1, 0, 0,
        0, 0, 0, 1 };
    static double sagittalElements[16] = {
        0, 0,-1, 0,
        1, 0, 0, 0,
        0,-1, 0, 0,
        0, 0, 0, 1 };
    static double obliqueElements[16] = {
        1, 0, 0, 0,
        0, 0.866025, -0.5, 0,
        0, 0.5, 0.866025, 0,
        0, 0, 0, 1 };
    // Set the slice orientation
    vtkSmartPointer<vtkMatrix4x4> resliceAxes = vtkSmartPointer<vtkMatrix4x4>::New();
    resliceAxes->DeepCopy(axialElements);
    // Set the point through which to slice
    resliceAxes->SetElement(0, 3, 0);
    resliceAxes->SetElement(1, 3, 0);
    resliceAxes->SetElement(2, 3, 70);
    // Extract a slice in the desired orientation
    vtkSmartPointer<vtkImageReslice> reslice = vtkSmartPointer<vtkImageReslice>::New();
    reslice->SetInputData(m_vtkVolumeImageData);
    reslice->SetOutputDimensionality(2);
    reslice->SetResliceAxes(resliceAxes);
    reslice->SetInterpolationModeToLinear();
    reslice->Update();
    // Create a lookup table covering the image intensity range;
    // my image is USHORT type, so I set 0 ~ 65535
    table = vtkSmartPointer<vtkLookupTable>::New();
    table->SetRange(0, 65535);
    table->SetNumberOfTableValues(65536);
    double red = 0, green = 0, blue = 0;
    for (int i = 0; i < table->GetNumberOfTableValues(); i++)
    {
        // values 0 ~ 19999 ramp from black to red
        if (i < 20000)
            red += 1.0 / 20000;
        // values 20000 ~ 39999 ramp from red to yellow
        else if (i < 40000)
            green += 1.0 / 20000;
        // values 40000 ~ 59999 ramp from yellow to white; 60000+ stay white
        else if (i < 60000)
            blue += 1.0 / 20000;
        table->SetTableValue(i, red, green, blue, 1);
    }
    table->Build();
    // Map the image through the lookup table
    vtkSmartPointer<vtkImageMapToColors> color = vtkSmartPointer<vtkImageMapToColors>::New();
    color->SetLookupTable(table);
    color->SetInputData(reslice->GetOutput());
    color->Update();
    vtkSmartPointer<vtkDataSetMapper> mapper = vtkSmartPointer<vtkDataSetMapper>::New();
    mapper->SetInputData(color->GetOutput());
    // Set up the reslice image plane actor:
    // location, opacity, connect mapper and actor
    double position[3] = { 0, 0, 70 };
    vtkSmartPointer<vtkActor> nomal_actor = vtkSmartPointer<vtkActor>::New();
    nomal_actor->SetMapper(mapper);
    nomal_actor->GetProperty()->SetOpacity(0.7);
    nomal_actor->SetPosition(position);
    m_vtkRenderer->AddActor(nomal_actor);
    // Set up the reslice plane outline actor
    vtkSmartPointer<vtkOutlineFilter> sliceOutlineFilter = vtkSmartPointer<vtkOutlineFilter>::New();
    sliceOutlineFilter->SetInputData(color->GetOutput());
    sliceOutlineFilter->Update();
    vtkSmartPointer<vtkPolyDataMapper> sliceOutlineMapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    sliceOutlineMapper->SetInputData(sliceOutlineFilter->GetOutput());
    vtkSmartPointer<vtkActor> vtksliceOutlineActor = vtkSmartPointer<vtkActor>::New();
    vtksliceOutlineActor->SetMapper(sliceOutlineMapper);
    vtksliceOutlineActor->GetProperty()->SetColor(1, 0, 0);
    vtksliceOutlineActor->SetPosition(position);
    m_vtkRenderer->AddActor(vtksliceOutlineActor);
    // Set up the scalar bar actor (fine to skip)
    vtkSmartPointer<vtkScalarBarActor> scalarBar = vtkSmartPointer<vtkScalarBarActor>::New();
    scalarBar->SetLookupTable(table);
    scalarBar->SetTitle("value");
    scalarBar->SetNumberOfLabels(10);
    scalarBar->SetLabelFormat("%10.2f");
    scalarBar->SetHeight(.2);
    scalarBar->SetWidth(.2);
    m_vtkRenderer->AddActor2D(scalarBar);
}
This is my full code. Check it, and if you don't understand some parts, please post a comment :)

Writing correct bitmap file results

Story:
I have been creating a font renderer for DirectX 9 to draw text. The actual problem was caused by another problem: I was wondering why the texture didn't draw anything (my wrong bitmap), so I tried to copy the bitmap into a file and realized the current problem. Yay.
Question:
What exactly am I doing wrong? I simply copy my current pixel array in my bitmap wrapper to a file, together with some other content (the bitmap info headers); I have seen in a hex editor that there are colors after the bitmap headers.
Pictures:
This is the result of the bitmap that I have written to the file system:
Code:
CFont::DrawGlyphToBitmap
This code copies from the bitmap of a FreeType glyph (which, by the way, has a pixel format of FT_PIXEL_MODE_BGRA) to the font bitmap wrapper class instance.
void CFont::DrawGlyphToBitmap ( unsigned char * buffer, int rows, int pitch, int destx, int desty, int format )
{
    CColor color = CColor ( 0 );
    for ( int row = 0; row < rows; row++ )
    {
        int x = 0;
        for ( int left = 0; left < pitch * 3; left += 3, x++ )
        {
            int y = row;
            unsigned char* cursor = &buffer [ ( row * pitch ) + left ];
            color.SetAlphab ( 255 );
            color.SetBlueb  ( cursor [ 0 ] );
            color.SetGreenb ( cursor [ 1 ] );
            color.SetRedb   ( cursor [ 2 ] );
            m_pBitmap->SetPixelColor ( color, destx + x, desty + y );
        }
    }
}
CBitmap::SetPixelColor
This code sets a single "pixel" / color in its local pixel storage.
void CBitmap::SetPixelColor ( const CColor & color, int left, int top )
{
    unsigned char* cursor = &m_pBuffer [ ( m_iPitch * top ) + ( left * bytes_per_px ) ];
    cursor [ px_red ]   = color.GetRedb ( );
    cursor [ px_green ] = color.GetGreenb ( );
    cursor [ px_blue ]  = color.GetBlueb ( );
    if ( px_alpha != 0xFFFFFFFF )
        cursor [ px_alpha ] = color.GetAlphab ( );
}
CBitmap::Save
Here's an excerpt from the function that writes the bitmap to the file system; it shows how I initialize the bitmap info container (file header & "DIB" header).
void CBitmap::Save ( const std::wstring & path )
{
    BITMAPFILEHEADER bitmap_header;
    BITMAPV5HEADER bitmap_info;
    memset ( &bitmap_header, 0, sizeof ( BITMAPFILEHEADER ) );
    memset ( &bitmap_info,   0, sizeof ( BITMAPV5HEADER ) );
    bitmap_header.bfType = 'B' + ( 'M' << 8 ); // 0x424D
    bitmap_header.bfSize = bitmap_header.bfOffBits + ( m_iRows * m_iPitch ) * 3;
    bitmap_header.bfOffBits = sizeof ( BITMAPFILEHEADER ) + sizeof ( BITMAPV5HEADER );
    double _1px_p_m = 0.0002645833333333f;
    bitmap_info.bV5Size = sizeof ( BITMAPV5HEADER );
    bitmap_info.bV5Width = m_iPitch;
    bitmap_info.bV5Height = m_iRows;
    bitmap_info.bV5Planes = 1;
    bitmap_info.bV5BitCount = bytes_per_px * 8;
    bitmap_info.bV5Compression = BI_BITFIELDS;
    bitmap_info.bV5SizeImage = ( m_iPitch * m_iRows ) * 3;
    bitmap_info.bV5XPelsPerMeter = m_iPitch * _1px_p_m;
    bitmap_info.bV5YPelsPerMeter = m_iRows * _1px_p_m;
    bitmap_info.bV5ClrUsed = 0;
    bitmap_info.bV5ClrImportant = 0;
    bitmap_info.bV5RedMask   = 0xFF000000;
    bitmap_info.bV5GreenMask = 0x00FF0000;
    bitmap_info.bV5BlueMask  = 0x0000FF00;
    bitmap_info.bV5AlphaMask = 0x000000FF;
    bitmap_info.bV5CSType = LCS_WINDOWS_COLOR_SPACE;
    ...
    // the remaining part just writes those structures & my pixel array to the file
}
CBitmap "useful" macros
i made macros for the pixel array because ive "changed" the
pixel "format" many times -> to make it "easier" i have made those macros which make it easier todo that
#define bytes_per_px 4
#define px_red 0
#define px_green 1
#define px_blue 2
#define px_alpha 3
Notes
My bitmap has a color order of RGBA
This calculation is wrong:
&m_pBuffer [ ( m_iPitch * top ) + ( left * bytes_per_px ) ]
It should be:
&m_pBuffer [ ( ( m_iPitch * top ) + left ) * bytes_per_px ]
Each row is m_iPitch * bytes_per_px bytes wide. If you just multiply by m_iPitch, then your "rows" are overlapping each other.
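To see the overlap with concrete numbers (hypothetical values, not from the question):
// Hypothetical bitmap: m_iPitch = 100 pixels per row, bytes_per_px = 4.
// Pixel (left = 0, top = 1) is the first pixel of the second row.
int wrongIndex   = ( 100 * 1 ) + ( 0 * 4 );  // = 100 -> still inside row 0
int correctIndex = ( ( 100 * 1 ) + 0 ) * 4;  // = 400 -> true start of row 1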

How can I set an RGB color?

How can I set an RGB color on any component? FireMonkey, C++Builder XE8.
I have used this code, but it's useless...
Rectangle1->Fill->Color = RGB(255, 50, 103);
Rectangle1->Fill->Color = (TColor)RGB(255, 50, 103);
Maybe I must use RGBA? But I don't know how to do it.
I did it:
UnicodeString s ;
s = "0xFF" ;
s += IntToHex ( 255 , 2 );
s += IntToHex ( 50 , 2 );
s += IntToHex ( 103 , 2 );
Rectangle1 -> Fill -> Color = StringToColor ( s );
This function will convert integer RGB values to a TAlphaColor, which is what FireMonkey uses.
TAlphaColor GetAlphaColor (int R, int G, int B)
{
    TAlphaColorRec acr;
    acr.R = R;
    acr.G = G;
    acr.B = B;
    acr.A = 255;
    return acr.Color;
}
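Usage, e.g. with the rectangle from the question:
Rectangle1->Fill->Color = GetAlphaColor( 255, 50, 103 );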

Creating a Sphere (using osg::Geometry) in OpenSceneGraph

I spent quite some time to get this working, but my Sphere just won't display.
Used the following code to make my function:
Creating a 3D sphere in Opengl using Visual C++
And the rest is simple OSG with osg::Geometry.
(Note: Not ShapeDrawable, as you can't implement custom shapes using that.)
Added the vertices, normals, texcoords into VecArrays.
For one, I suspect something misbehaving, as my saved object is half empty.
Is there a way to convert the existing description into OSG?
Reason? I want to understand how to create objects later on.
Indeed, it is linked with a later assignment, but currently I'm just preparing beforehand.
Sidenote: Since I have to make it without indices, I left them out.
But my cylinder displays just fine without them.
Caveat: I'm not an OSG expert. But, I did do some research.
OSG requires all of the faces to be defined in counter-clockwise order, so that backface culling can reject faces that are "facing away". The code you're using to generate the sphere does not generate all the faces in counter-clockwise order.
You can approach this a couple ways:
Adjust how the code generates the faces, by inserting the faces in CCW order.
Double up your model and insert each face twice, once with the vertices on each face in their current order and once with the vertices in reverse order.
Option 1 above will limit your total polygon count to what's needed. Option 2 will give you a sphere that's visible from outside the sphere as well as within.
To implement Option 2, you merely need to modify this loop from the code you linked to:
indices.resize(rings * sectors * 4);
std::vector<GLushort>::iterator i = indices.begin();
for(r = 0; r < rings-1; r++)
    for(s = 0; s < sectors-1; s++) {
        *i++ = r * sectors + s;
        *i++ = r * sectors + (s+1);
        *i++ = (r+1) * sectors + (s+1);
        *i++ = (r+1) * sectors + s;
    }
Double up the set of quads like so:
indices.resize(rings * sectors * 8);
std::vector<GLushort>::iterator i = indices.begin();
for(r = 0; r < rings-1; r++)
    for(s = 0; s < sectors-1; s++) {
        *i++ = r * sectors + s;
        *i++ = r * sectors + (s+1);
        *i++ = (r+1) * sectors + (s+1);
        *i++ = (r+1) * sectors + s;
        *i++ = (r+1) * sectors + s;
        *i++ = (r+1) * sectors + (s+1);
        *i++ = r * sectors + (s+1);
        *i++ = r * sectors + s;
    }
That really is the "bigger hammer" solution, though.
Personally, I'm having a hard time figuring out why the original loop isn't sufficient; intuiting my way through the geometry, it feels like it's already generating CCW faces, because each successive ring is above the previous, and each successive sector is CCW around the surface of the sphere from the previous. So, the original order itself should be CCW with respect to the face nearest the viewer.
EDIT Using the OpenGL code you linked before and the OSG tutorial you linked today, I put together what I think is a correct program to generate the osg::Geometry / osg::Geode for the sphere. I have no way to test the following code, but desk-checking it, it looks correct or at least largely correct.
#include <vector>

class SolidSphere
{
protected:
    osg::Geode sphereGeode;
    osg::Geometry sphereGeometry;
    osg::Vec3Array sphereVertices;
    osg::Vec3Array sphereNormals;
    osg::Vec2Array sphereTexCoords;
    std::vector<osg::DrawElementsUInt> spherePrimitiveSets;

public:
    SolidSphere(float radius, unsigned int rings, unsigned int sectors)
    {
        float const R = 1./(float)(rings-1);
        float const S = 1./(float)(sectors-1);
        int r, s;

        sphereGeode.addDrawable( &sphereGeometry );

        // Establish texture coordinates, vertex list, and normals
        for(r = 0; r < rings; r++)
            for(s = 0; s < sectors; s++)
            {
                float const y = sin( -M_PI_2 + M_PI * r * R );
                float const x = cos(2*M_PI * s * S) * sin( M_PI * r * R );
                float const z = sin(2*M_PI * s * S) * sin( M_PI * r * R );

                sphereTexCoords.push_back( osg::Vec2(s*R, r*R) );
                sphereVertices.push_back ( osg::Vec3(x * radius,
                                                     y * radius,
                                                     z * radius) );
                sphereNormals.push_back  ( osg::Vec3(x, y, z) );
            }

        sphereGeometry.setVertexArray  ( &sphereVertices );
        sphereGeometry.setTexCoordArray( 0, &sphereTexCoords );

        // Generate quads for each face.
        for(r = 0; r < rings-1; r++)
            for(s = 0; s < sectors-1; s++)
            {
                spherePrimitiveSets.push_back(
                    osg::DrawElementsUInt( osg::PrimitiveSet::QUADS, 0 )
                );
                osg::DrawElementsUInt& face = spherePrimitiveSets.back();
                // Corners of quads should be in CCW order.
                face.push_back( (r + 0) * sectors + (s + 0) );
                face.push_back( (r + 0) * sectors + (s + 1) );
                face.push_back( (r + 1) * sectors + (s + 1) );
                face.push_back( (r + 1) * sectors + (s + 0) );
                sphereGeometry.addPrimitiveSet( &face );
            }
    }

    osg::Geode     *getGeode()     { return &sphereGeode; }
    osg::Geometry  *getGeometry()  { return &sphereGeometry; }
    osg::Vec3Array *getVertices()  { return &sphereVertices; }
    osg::Vec3Array *getNormals()   { return &sphereNormals; }
    osg::Vec2Array *getTexCoords() { return &sphereTexCoords; }
};
You can use the getXXX methods to get the various pieces. I didn't see how to hook the surface normals to anything, but I do store them in a Vec3Array. If you have a use for them, they're computed and stored and waiting to be hooked to something.
That code calls glutSolidSphere() to draw a sphere, but it doesn't make sense to call it if your application is not using GLUT to display a window with a 3D context.
There is another way to draw a sphere easily, which is by invoking gluSphere() (you probably have GLU installed):
void gluSphere(GLUquadric* quad,
               GLdouble radius,
               GLint slices,
               GLint stacks);
Parameters
quad - Specifies the quadrics object (created with gluNewQuadric).
radius - Specifies the radius of the sphere.
slices - Specifies the number of subdivisions around the z axis (similar to lines of longitude).
stacks - Specifies the number of subdivisions along the z axis (similar to lines of latitude).
Usage:
// If you also need to include glew.h, do it before glu.h
#include <glu.h>
GLUquadric* _quadratic = gluNewQuadric();
if (_quadratic == NULL)
{
    std::cerr << "!!! Failed gluNewQuadric" << std::endl;
    return;
}
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0, 0.0, -5.0);
glColor3ub(255, 97, 3);
gluSphere(_quadratic, 1.4f, 64, 64);
glFlush();
gluDeleteQuadric(_quadratic);
It's probably wiser to move the gluNewQuadric() call to the constructor of your class since it needs to be allocated only once, and move the call to gluDeleteQuadric() to the destructor of the class.
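A minimal sketch of that suggestion (the class and member names are mine, not from the answer):
class QuadricSphere
{
public:
    QuadricSphere() : _quadric( gluNewQuadric() ) {} // allocate once
    ~QuadricSphere() { if ( _quadric ) gluDeleteQuadric( _quadric ); }
    void draw( GLdouble radius, GLint slices, GLint stacks ) const
    {
        if ( _quadric )
            gluSphere( _quadric, radius, slices, stacks );
    }
private:
    GLUquadric* _quadric;
};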
@JoeZ's answer is excellent, but the OSG code has some errors/bad practices. Here's the updated code. It's been tested and it shows a very nice sphere.
osg::ref_ptr<osg::Geode> buildSphere( const double radius,
                                      const unsigned int rings,
                                      const unsigned int sectors )
{
    osg::ref_ptr<osg::Geode>     sphereGeode     = new osg::Geode;
    osg::ref_ptr<osg::Geometry>  sphereGeometry  = new osg::Geometry;
    osg::ref_ptr<osg::Vec3Array> sphereVertices  = new osg::Vec3Array;
    osg::ref_ptr<osg::Vec3Array> sphereNormals   = new osg::Vec3Array;
    osg::ref_ptr<osg::Vec2Array> sphereTexCoords = new osg::Vec2Array;

    float const R = 1. / static_cast<float>( rings - 1 );
    float const S = 1. / static_cast<float>( sectors - 1 );

    sphereGeode->addDrawable( sphereGeometry );

    // Establish texture coordinates, vertex list, and normals
    for( unsigned int r( 0 ); r < rings; ++r ) {
        for( unsigned int s( 0 ); s < sectors; ++s ) {
            float const y = sin( -M_PI_2 + M_PI * r * R );
            float const x = cos( 2 * M_PI * s * S ) * sin( M_PI * r * R );
            float const z = sin( 2 * M_PI * s * S ) * sin( M_PI * r * R );

            sphereTexCoords->push_back( osg::Vec2( s * R, r * R ) );
            sphereVertices->push_back ( osg::Vec3( x * radius,
                                                   y * radius,
                                                   z * radius ) );
            sphereNormals->push_back  ( osg::Vec3( x, y, z ) );
        }
    }

    sphereGeometry->setVertexArray  ( sphereVertices );
    sphereGeometry->setTexCoordArray( 0, sphereTexCoords );

    // Generate quads for each face.
    for( unsigned int r( 0 ); r < rings - 1; ++r ) {
        for( unsigned int s( 0 ); s < sectors - 1; ++s ) {
            osg::ref_ptr<osg::DrawElementsUInt> face =
                new osg::DrawElementsUInt( osg::PrimitiveSet::QUADS, 4 );
            // Corners of quads should be in CCW order.
            face->push_back( ( r + 0 ) * sectors + ( s + 0 ) );
            face->push_back( ( r + 0 ) * sectors + ( s + 1 ) );
            face->push_back( ( r + 1 ) * sectors + ( s + 1 ) );
            face->push_back( ( r + 1 ) * sectors + ( s + 0 ) );
            sphereGeometry->addPrimitiveSet( face );
        }
    }

    return sphereGeode;
}
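For completeness, a hedged usage sketch (assuming an osgViewer::Viewer named viewer):
osg::ref_ptr<osg::Group> root = new osg::Group;
root->addChild( buildSphere( 1.0, 32, 32 ) );
viewer.setSceneData( root.get() );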
Changes:
The OSG elements used in the code now are smart pointers [1]. Moreover, classes like Geode and Geometry have protected destructors, so the only way to instantiate them is via dynamic allocation.
Removed spherePrimitiveSets as it isn't needed in the current version of the code.
I put the code in a free function, as I don't need a Sphere class in my code. I omitted the getters and the protected attributes. They aren't needed: if you need to access, say, the geometry, you can get it via: sphereGeode->getDrawable(...). The same goes for the rest of the attributes.
[1] See Rule of thumb #1 here. It's a bit old, but the advice still holds.

Constant buffer members access the same memory

I'm using a constant buffer to pass data to my shaders at every frame, and I'm running into an issue where the values of some of the members of the buffer point to the same memory.
When I use the Visual Studio 2012 debugging tools, it looks like the data is being set in the buffer more or less correctly:
0 [0x00000000-0x00000003] | +0
1 [0x00000004-0x00000007] | +1
2 [0x00000008-0x0000000b] | +1
3 [0x0000000c-0x0000000f] | +1
4 [0x00000010-0x00000013] | +0.78539819
5 [0x00000014-0x00000017] | +1.1760513
6 [0x00000018-0x0000001b] | +0
7 [0x0000001c-0x0000001f] | +1
The problem is that when I debug the shader, sunAngle and phaseFunctionResult both have the same value, specifically 0.78539819, which should be the value of sunAngle only. It does change to 1.1760513 if I swap the order of the two floats, but both will still be the same. I thought I'd packed everything together correctly, but am I missing how to define exactly which constants are in each part of the buffer?
Here's the C++ structure I'm using:
struct SunData {
    DirectX::XMFLOAT4 sunPosition;
    float sunAngle;
    float phaseFunctionResult;
};
And the shader buffer looks like this:
// updated as the sun moves through the sky
cbuffer sunDependent : register( b1 )
{
    float4 sunPosition;
    float  sunAngle;            // theta
    float  phaseFunctionResult; // F( theta, g )
}
Here's the code I'm using to initialize the buffer:
XMVECTOR pos = XMVectorSet( 0, 1, 1, 1 );
XMStoreFloat3( &_sunPosition, pos );
XMStoreFloat4( &_sun.sunPosition, pos );
_sun.sunAngle = XMVectorGetX(
XMVector3AngleBetweenVectors( pos, XMVectorSet( 0, 1, 0, 0 ) )
);
_sun.phaseFunctionResult = _planet.phaseFunction( _sun.sunAngle );
// Fill in a buffer description.
D3D11_BUFFER_DESC cbDesc;
cbDesc.ByteWidth = sizeof( SunData ) + 8;
cbDesc.Usage = D3D11_USAGE_DYNAMIC;
cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
cbDesc.MiscFlags = 0;
cbDesc.StructureByteStride = 0;
// Fill in the subresource data.
D3D11_SUBRESOURCE_DATA data;
data.pSysMem = &_sun;
data.SysMemPitch = 0;
data.SysMemSlicePitch = 0;
// Create the buffer.
ID3D11Buffer *constantBuffer = nullptr;
HRESULT hr = _d3dDevice->CreateBuffer(
&cbDesc,
&data,
&constantBuffer
);
assert( SUCCEEDED( hr ) );
// Set the buffer.
_d3dDeviceContext->VSSetConstantBuffers( 1, 1, &constantBuffer );
_d3dDeviceContext->PSSetConstantBuffers( 1, 1, &constantBuffer );
Release( constantBuffer );
And here's the pixel shader that's using the values:
float4 main( in ATMOS_PS_INPUT input ) : SV_TARGET
{
    float R = sunAngle * sunPosition.x * sunIntensity.x
            * attenuationCoefficient.x
            * phaseFunctionResult;
    return float4( R, 1, 1, 1 );
}
It looks like a padding issue, as in this question: Question
All constant buffers should be sized to be divisible by sizeof(four-component vector) (doc).
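To make that rule concrete, one option (a sketch of my own, not the poster's confirmed fix) is to pad the CPU-side struct to a 16-byte multiple and use its size directly:
// Sketch: explicit padding so sizeof(SunData) is a multiple of 16 bytes,
// matching HLSL's float4-based cbuffer packing. The padding member is an
// illustration added here, not part of the original code.
struct SunData {
    DirectX::XMFLOAT4 sunPosition;  // maps to float4 sunPosition
    float sunAngle;                 // starts the next 16-byte register...
    float phaseFunctionResult;      // ...and shares it with sunAngle
    float padding[2];               // fills out that register
};
static_assert( sizeof( SunData ) % 16 == 0,
               "constant buffer size must be a multiple of 16" );
// The buffer description can then use the struct size directly:
// cbDesc.ByteWidth = sizeof( SunData );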