Transformation and CSG operations on grids in OpenVDB - C++

OpenVDB seems really amazing, and the addressing of the nodes is really smart. There are some operations that I don't understand, though, in particular the CSG operations. Below is an example program. It takes two arguments as input:
a VDB input file containing a single grid, representing a level set created from a triangular mesh,
a VDB output file that stores the results of the operations.
The algorithm should take the input,
create a deep copy in gridA,
create a deep copy in gridB,
rotate gridB around the Y axis by M_PI/4.0f,
perform the csgUnion between gridA and gridB,
save all grids in a VDB output file.
I'm trying to use VDB grids as a data container in place of a classical octree algorithm, for physical simulations that need a high level of detail in collisions.
I understand the concept of the transformation between world coordinates and grid coordinates; what I cannot understand is how to perform a transformation of the data inside the tree, i.e. how to translate or rotate the level set like a rigid object. In the example, I think I'm only changing the transformation between world space and the lattice.
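To make that distinction concrete, here is a minimal sketch (an editorial illustration, not part of the original question) showing that postRotate() only changes the index-to-world map, while the stored voxels stay untouched:

#include <cmath>
#include <iostream>
#include <openvdb/openvdb.h>

// Sketch: a grid's transform maps index (lattice) space to world space.
// postRotate() changes only this map; no voxel data is moved or resampled.
int main()
{
    openvdb::initialize();
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create();
    const openvdb::Coord ijk(10, 0, 0);
    std::cout << "before: " << grid->transform().indexToWorld(ijk) << std::endl;
    grid->transform().postRotate(M_PI / 4.0, openvdb::math::Y_AXIS);
    std::cout << "after:  " << grid->transform().indexToWorld(ijk) << std::endl;
    // The printed world positions differ, but the tree (the stored voxels)
    // is unchanged -- csgUnion, which combines trees voxel-by-voxel, will
    // therefore not see any rotation until the data is resampled.
}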
This is the result (the same for level set and volume), showing the initial grid and the transformed grid: the rotation seems to be performed... but no final result?
Do you have suggestions?
Attached: one example and a link to the LINK REMOVED that I'm using (sorry, it is 133MB...)
#include <cmath>
#include "openvdb/openvdb.h"
#include "openvdb/util/Util.h"
#include "openvdb/io/Stream.h"
#include "openvdb/tools/Composite.h"

using namespace openvdb;

int main(int argc, char** argv) {
    openvdb::initialize();

    // Read the single grid from the input file.
    openvdb::io::File file(argv[1]);
    file.open();
    GridBase::Ptr baseGrid;
    for (openvdb::io::File::NameIterator nameIter = file.beginName();
         nameIter != file.endName(); ++nameIter)
    {
        baseGrid = file.readGrid(nameIter.gridName());
    }
    file.close();

    FloatGrid::Ptr gridA = gridPtrCast<FloatGrid>(baseGrid);
    FloatGrid::Ptr gridB = gridA->deepCopy();
    FloatGrid::Ptr result = gridA->deepCopy();

    // Rotate gridB's index-to-world transform by pi/4 around the Y axis.
    gridB->transform().postRotate(M_PI / 4.0f, math::Y_AXIS);

    // Union the rotated copy into the result grid.
    tools::csgUnion(*result, *gridB);

    // Write all three grids to the output file.
    openvdb::io::File file_out(argv[2]);
    GridPtrVec grids;
    grids.push_back(gridA);
    grids.push_back(gridB);
    grids.push_back(result);
    file_out.write(grids);
    file_out.close();

    return 0;
}

The solution to my problem, thanks to the VDB support in the OpenVDB forum, is:
Make a simple metadata copy of the initial grid.
Apply the transformation (like the rotation in my code) to the copy's transform.
Reinterpolate the data from the initial grid into the new transformed grid using tools::resampleToMatch, choosing one of the available interpolators (in my case tools::BoxSampler).
Continue with the CSG operations.
FYI, there is an extreme difference in execution time when compiling with the -O3 optimization flag (a 4x time reduction).
#include "openvdb/io/Stream.h"
#include "openvdb/openvdb.h"
#include "openvdb/tools/Composite.h"
#include "openvdb/tools/GridTransformer.h"
#include "openvdb/tools/Interpolation.h"
#include "openvdb/util/Util.h"
#include <cmath>
using namespace openvdb;
int main(int argc, char **argv) {
openvdb::initialize();
openvdb::io::File file(argv[1]);
file.open();
GridBase::Ptr baseGrid;
for (openvdb::io::File::NameIterator nameIter = file.beginName();
nameIter != file.endName(); ++nameIter) {
baseGrid = file.readGrid(nameIter.gridName());
}
file.close();
FloatGrid::Ptr gridA = gridPtrCast<FloatGrid>(baseGrid);
FloatGrid::Ptr gridB = gridA->copy(CP_NEW);
gridB->setTransform(gridA->transform().copy());
gridB->transform().postRotate(M_PI / 4.0f, math::Y_AXIS);
tools::resampleToMatch<tools::BoxSampler>(*gridA, *gridB);
FloatGrid::Ptr result = gridA->deepCopy();
FloatGrid::Ptr gridB2 = gridB->deepCopy();
tools::csgUnion(*result, *gridB);
openvdb::io::File file_out(argv[2]);
GridPtrVec grids;
grids.push_back(gridA);
grids.push_back(gridB2);
grids.push_back(result);
file_out.write(grids);
file_out.close();
return 0;
}
Reference: OpenVDB Forum

Related

Shrink/Expand the outline of a polygon with holes

I want to expand/shrink a polygon with holes using boost::polygon. So to clarify that a bit, I have a single data structure
boost::polygon::polygon_with_holes_data<int> inPoly
where inPoly contains data that describes a rectangular outline and a triangle which forms the hole within this rectangle (in the picture below this is the left, black drawing).
Now I want to
a) expand the whole thing so that the rectangle becomes bigger and the hole becomes smaller (resulting in the red polygon in the image below), or
b) shrink it so that the rectangle becomes smaller and the hole bigger (resulting in the green image below).
The corners don't necessarily need to be straight; they can also be rounded or somehow "rough".
My question: how can this be done using boost::polygon?
Thanks!
I answered this in Expand polygons with boost::geometry?
And yes you can teach Boost Geometry to act on Boost Polygon types:
#include <boost/geometry/geometries/adapted/boost_polygon.hpp>
I came up with a test polygon like you described:
boost::polygon::polygon_with_holes_data<int> inPoly;
bg::read_wkt("POLYGON ((0 0,0 1000,1000 1000,1000 0,0 0),(100 100,900 100,500 700,100 100))", inPoly);
Now, apparently we can't just buffer on the adapted polygon, nor can we bg::assign or bg::convert directly. So, I came up with an ugly workaround of converting to WKT and back. Then you can do the buffer, and convert back similarly.
It's not very elegant, but it does work:
poly in;
bg::read_wkt(boost::lexical_cast<std::string>(bg::wkt(inPoly)), in);
Full demo, including SVG output:
Live On Coliru
#include <boost/polygon/polygon.hpp>
#include <boost/polygon/polygon_set_data.hpp>
#include <boost/polygon/polygon_with_holes_data.hpp>
#include <boost/geometry.hpp>
#include <boost/geometry/strategies/buffer.hpp>
#include <boost/geometry/algorithms/buffer.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/geometry/geometries/multi_polygon.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/adapted/boost_polygon.hpp>
#include <cassert>
#include <fstream>

namespace bp = boost::polygon;
namespace bg = boost::geometry;

using P  = bp::polygon_with_holes_data<int>;
using PS = bp::polygon_set_data<int>;
using coordinate_type = bg::coordinate_type<P>::type;

int main() {
    P inPoly, grow, shrink;
    bg::read_wkt("POLYGON ((0 0,0 1000,1000 1000,1000 0,0 0),(100 100,900 100,500 700,100 100))", inPoly);

    {
        // define our boost geometry types
        namespace bs = bg::strategy::buffer;
        namespace bgm = bg::model;
        using pt    = bgm::d2::point_xy<coordinate_type>;
        using poly  = bgm::polygon<pt>;
        using mpoly = bgm::multi_polygon<poly>;

        // define our buffering strategies
        using dist = bs::distance_symmetric<coordinate_type>;
        bs::side_straight side_strategy;
        const int points_per_circle = 12;
        bs::join_round   join_strategy(points_per_circle);
        bs::end_round    end_strategy(points_per_circle);
        bs::point_circle point_strategy(points_per_circle);

        // convert the adapted polygon to a native Boost.Geometry polygon
        // via the WKT round-trip described above
        poly in;
        bg::read_wkt(boost::lexical_cast<std::string>(bg::wkt(inPoly)), in);

        for (auto [offset, output_p] : { std::tuple(+15, &grow), std::tuple(-15, &shrink) }) {
            mpoly out;
            bg::buffer(in, out, dist(offset), side_strategy, join_strategy, end_strategy, point_strategy);
            assert(out.size() == 1);

            // convert back to the Boost.Polygon type, again via WKT
            bg::read_wkt(boost::lexical_cast<std::string>(bg::wkt(out.front())), *output_p);
        }
    }

    {
        // write an SVG visualization of all three polygons
        std::ofstream svg("output.svg");
        using pt = bg::model::d2::point_xy<coordinate_type>;
        boost::geometry::svg_mapper<pt> mapper(svg, 400, 400);
        mapper.add(inPoly);
        mapper.add(grow);
        mapper.add(shrink);
        mapper.map(inPoly, "fill-opacity:0.3;fill:rgb(153,204,0);stroke:rgb(153,204,0);stroke-width:2");
        mapper.map(grow,   "fill-opacity:0.05;fill:rgb(255,0,0);stroke:rgb(255,0,0);stroke-width:2");
        mapper.map(shrink, "fill-opacity:0.05;fill:rgb(0,0,255);stroke:rgb(0,0,255);stroke-width:2");
    }
}
The output.svg written:
More or less accidentally I found that boost::polygon also provides a single function for this which is quite easy to use: boost::polygon::polygon_set_data offers a resize() function which does exactly what is described above. Using the additional parameters corner_fill_arc and num_segments, rounded corners can be created.
No idea why this function is located in boost::polygon::polygon_set_data and not in boost::polygon::polygon_with_holes_data, which in my opinion would be the more logical place for such a function...
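For reference, a minimal sketch of that approach (my own code; the resize() signature with (offset, corner_fill_arc, num_circle_segments) is assumed from the description above, so verify it against your Boost version):

#include <vector>
#include <boost/polygon/polygon.hpp>

namespace bp = boost::polygon;
using P  = bp::polygon_with_holes_data<int>;
using PS = bp::polygon_set_data<int>;

// Grow (positive offset) or shrink (negative offset) a polygon with holes.
std::vector<P> resized(const P& inPoly, int offset)
{
    PS ps;
    ps.insert(inPoly);           // polygon sets accept polygons with holes
    ps.resize(offset, true, 8);  // assumed: (offset, corner_fill_arc, num_circle_segments)
    std::vector<P> out;
    ps.get(out);                 // read the result back as polygons with holes
    return out;
}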

Zero padding for Savitzky-Golay filter not working for C++ Numerical Recipes

My problem is straightforward:
I want to smooth some data using the Savitzky-Golay filter. I use C++.
The code is taken from the book [1] and can be split into two parts:
Calculate the Savitzky-Golay coefficients and store them in a vector C.
Smooth the signal data S by convolving it with C.
The problem is the boundaries. Since the signal S is not periodic, boundary effects have to be taken into consideration. This is done with so-called zero-padding, meaning that some extra zeros are added to the signal at the end. The procedure is described exactly in chapter 13.1.1 of [1].
However, I cannot find a complete example of this procedure, and my own implementation does not seem to work, although I absolutely cannot understand why. Below is a well-commented example. Can somebody spot what is going wrong at the boundaries?
[1] Press, William H., et al. "Numerical Recipes: The Art of Scientific Computing." (1987)
#include <iostream>
#include <math.h>
#include <stdlib.h>
#include <cstdlib>
#include <string>
#include <fstream>
#include <vector>
#include "./numerical_recipes/other/nr.h"
#include "./numerical_recipes/recipes/savgol.cpp"
#include "./numerical_recipes/recipes/lubksb.cpp"
#include "./numerical_recipes/recipes/ludcmp.cpp"
#include "./numerical_recipes/recipes/convlv.cpp"
#include "./numerical_recipes/recipes/realft.cpp"
#include "./numerical_recipes/recipes/four1.cpp"

using namespace std;

int main()
{
    // set savgol parameters
    int nl = 6; // left window length
    int nr = 6; // right window length
    int m = 3;  // order of interpolation polynomial

    // calculate Savitzky-Golay coefficients
    int np = nl + nr + 1;                 // number of coefficients
    Vec_DP coefs(np);                     // vector that stores the coefficients
    NR::savgol(coefs, np, nl, nr, 0, m);  // calculate the coefficients

    // as example input data, generate sinh datapoints between -1 and 1
    int nvals = int(pow(2, 7)) - nl; // number of datapoints to analyze (equal to 2^7 including zero-padding)
    Vec_DP args(nvals);              // stores arguments
    Vec_DP vals(nvals);              // stores signal
    double next_arg;                 // help variable
    for (int i = 0; i < nvals; i++)
    {
        next_arg = i * 2. / (nvals - 1) - 1; // next argument
        args[i] = next_arg;                  // store argument point
        vals[i] = sinh(next_arg);            // evaluate next value
    }

    // for zero-padding, we have to add nl datapoints to the right; the signal is then of length 2^7
    // see also chapter 13.1.1 in [1]
    // [1] Press, William H., et al. "Numerical Recipes: The Art of Scientific Computing." (1987)
    Vec_DP input_signal(int(pow(2, 7)));                                // create vector of length 2^7
    for (int i = 0; i < nvals; i++) input_signal[i] = vals[i];          // overwrite with actual signal
    for (int i = nvals; i < int(pow(2, 7)); i++) input_signal[i] = 0.0; // add zeros for zero-padding

    // perform the convolution
    Vec_DP ans(int(pow(2, 7)));              // stores the smoothed signal
    NR::convlv(input_signal, coefs, 1, ans); // smooth the data

    // write data to the output for visual inspection
    string filename = "test.csv"; // output filename
    string write_line;
    ofstream wto(filename, ios::app);
    for (int i = 0; i < nvals; i++) // write result to output, drop the values from zero-padding
    {
        write_line = to_string(args[i]) + ", " + to_string(vals[i]) + ", " + to_string(ans[i]);
        wto << write_line << endl;
    }
    wto.close();
    return 0;
}
Here is a visualization of the output. We can clearly see that the fit fails at the boundaries, although zero-padding was taken into consideration.
The problem is the boundaries. Since the signal S is not periodic, boundary effects have to be taken into consideration. This is done with so-called zero-padding, meaning that some extra zeros are added to the signal at the end. The procedure is described exactly in chapter 13.1.1 of [1].
In my edition of Numerical Recipes, chapter 13 is "Fourier and spectral applications". While zero-padding the signal is perfectly fine for the Fourier transform, it's not a good idea for Savitzky-Golay.
I see a couple of ways to apply Savitzky-Golay smoothing at signal boundaries:
Exclude the missing bits of the signal from convolution. Set the coefficients corresponding to the missing bits to zero and re-normalize the rest of them to sum to 1.
Compute a special Savitzky-Golay kernel for each signal point with an incomplete neighborhood. That's actually not hard to do. Conceptually, convolving with a Savitzky-Golay kernel is equivalent to fitting a polynomial to a neighborhood of a signal point and then taking that signal point from the polynomial. Nothing prevents you from having a one-sided or an asymmetric neighborhood. Building a Savitzky-Golay kernel for an arbitrary neighborhood is a matter of fitting a polynomial to a signal whose value at the origin is 1 and zero everywhere else. The origin doesn't have to be at the center of the neighborhood. The Savitzky-Golay kernel coefficients are then the values of the fitted polynomial function at the corresponding signal points. A sketch of this second approach follows below.
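Here is a minimal, self-contained sketch of that second approach (my own code, not from Numerical Recipes): it builds the kernel for an arbitrary window by solving the normal equations of the least-squares polynomial fit and evaluating the fit at the current sample. At the left boundary you would call savgolWeights(i, nr, m) for sample i < nl, and symmetrically at the right boundary.

#include <cmath>
#include <utility>
#include <vector>

// Savitzky-Golay weights for a window of nl points to the left and nr points
// to the right of the current sample, fitting a polynomial of order m and
// evaluating the fit at the current sample (offset 0). For boundary samples,
// simply shrink nl (left edge) or nr (right edge).
std::vector<double> savgolWeights(int nl, int nr, int m)
{
    // Normal-equation matrix ATA[j][k] = sum over offsets i of i^(j+k).
    std::vector<std::vector<double>> ata(m + 1, std::vector<double>(m + 1, 0.0));
    for (int i = -nl; i <= nr; ++i)
        for (int j = 0; j <= m; ++j)
            for (int k = 0; k <= m; ++k)
                ata[j][k] += std::pow(double(i), j + k);

    // Solve ATA * c = e0 by Gauss-Jordan elimination with partial pivoting;
    // c is then the first column of the inverse of ATA.
    std::vector<double> c(m + 1, 0.0);
    c[0] = 1.0;
    for (int col = 0; col <= m; ++col) {
        int piv = col;
        for (int r = col + 1; r <= m; ++r)
            if (std::fabs(ata[r][col]) > std::fabs(ata[piv][col])) piv = r;
        std::swap(ata[col], ata[piv]);
        std::swap(c[col], c[piv]);
        for (int r = 0; r <= m; ++r) {
            if (r == col) continue;
            double f = ata[r][col] / ata[col][col];
            for (int k = col; k <= m; ++k) ata[r][k] -= f * ata[col][k];
            c[r] -= f * c[col];
        }
    }
    for (int r = 0; r <= m; ++r) c[r] /= ata[r][r];

    // The weight for offset i is the polynomial basis (1, i, i^2, ...) dotted
    // with c; for nl == nr this reproduces the classic symmetric SG kernels,
    // e.g. (-3, 12, 17, 12, -3)/35 for nl = nr = 2, m = 2.
    std::vector<double> w(nl + nr + 1);
    for (int i = -nl; i <= nr; ++i) {
        double wi = 0.0;
        for (int k = 0; k <= m; ++k) wi += c[k] * std::pow(double(i), k);
        w[i + nl] = wi;
    }
    return w;
}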
I have solved the problem now, similar to Tulon's second suggestion. Specifically, I take care of the left and right boundary by fitting an extra polynomial on each side. This is motivated by the implementation of the Savitzky-Golay filter in the scipy.signal library for Python.

Removing geometrical objects from geomview window when used in CGAL

I am interested in implementing my computational geometry algorithms using the CGAL library.
Ideally, I am also interested in being able to animate my algorithms. CGAL has a built-in interface to Geomview which I am interested in using for illustrating these algorithms.
Based on what I little I understand of the CGAL geomview interface (from this example), below is a very simple code I wrote, which inserts 5 random points, and segments between some of the points.
However, once I render the objects to the screen, I don't know how to unrender them or delete them from the Geomview window when they need to be deleted at the next iteration (say) of my algorithm. So how would I modify the code below to do just that?
If someone knows of a better way than using geomview to animate geometry algorithms with CGAL that would also be helpful.
#include <iostream>
#include <vector>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <unistd.h>
#include <CGAL/IO/Geomview_stream.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_2 Point_2;
typedef K::Segment_2 Segment_2;

using namespace std;

int main(int argc, char *argv[])
{
    Point_2 points[5] = { Point_2(0., 0.), Point_2(10., 0.), Point_2(10., 10.),
                          Point_2(6., 5.), Point_2(4., 1.) };

    CGAL::Geomview_stream gv(CGAL::Bbox_3(-12, -12, -0.1, 12, 12, 0.1));

    gv << CGAL::RED; // red points
    for (int i = 0; i <= 2; ++i)
    {
        gv << points[i];
    }

    gv << CGAL::BLUE; // blue points
    for (int i = 3; i <= 4; ++i)
    {
        gv << points[i];
    }

    // segments between some points
    gv << CGAL::BLACK;
    Segment_2 AB = Segment_2(points[0], points[1]);
    gv << CGAL::YELLOW << AB;
    Segment_2 CD = Segment_2(points[1], points[2]);
    gv << CGAL::BLUE << CD;

    sleep(300);
    return 0;
}
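Regarding the original question of erasing objects: if I recall the interface correctly, CGAL::Geomview_stream provides a clear() method that erases everything drawn so far (treat this as an assumption to verify against your CGAL version). A crude animation loop over the code above could then look like this:

// Hypothetical sketch -- assumes Geomview_stream::clear() erases all objects
// drawn so far, so each iteration can redraw the scene from scratch.
for (int step = 0; step < 10; ++step) {
    gv.clear();                           // wipe the previous frame (assumed API)
    gv << CGAL::RED << points[step % 5];  // re-emit whatever the algorithm holds now
    sleep(1);                             // crude frame pacing
}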
The current trend among CGAL developers is to use the Qt framework and associated visualisation tools such as QGLViewer rather than Geomview; these are more recent, fully portable, and allow you to do much more (especially if you want to make a demo of your algorithms with user interaction).
If you want to do 3D visualisation with CGAL, I advise you to use QGLViewer, as there are already a lot of demos in CGAL that use that library. As an entry point, I suggest you have a look at the Alpha_shape_3 demo. The code of this demo is quite light and straightforward, so you can easily add new features without understanding the whole Qt framework first (you may have to eventually, but that way the learning curve will be less steep and you can quickly start implementing stuff).
If you want to do 2D visualisation, you may have a look at the Alpha_shape_2 demo and use QPainter from Qt (note that you can combine both 3D and 2D viewers in QGLViewer, as shown in this example).

How to compute 2D log-chromaticity?

My goal is to remove shadows from an image. I use C++ and OpenCV. I surely lack enough math background, and not being a native English speaker makes everything harder to understand.
After reading different approaches to removing shadows, I found a method which should work for me, but it relies on something they call "2D chromaticity" and "2D log-chromaticity space", and even this term seems to be inconsistent across different sources. There are many papers on the topic; a few are listed here:
http://www.cs.cmu.edu/~efros/courses/LBMV09/Papers/finlayson-eccv-04.pdf
http://www2.cmp.uea.ac.uk/Research/compvis/Papers/DrewFinHor_ICCV03.pdf
http://www.cvc.uab.es/adas/publications/alvarez_2008.pdf
http://ivrgwww.epfl.ch/alumni/fredemba/papers/FFICPR06.pdf
I tore Google into strips searching for the right words and explanations. The best I found is Illumination invariant image, which did not help me much.
I tried to reproduce the formula log(G/R), log(B/R) described in the first paper, page 3, to get figures similar to Fig. 2b.
As input I used http://en.wikipedia.org/wiki/File:Gretag-Macbeth_ColorChecker.jpg
The output I get is:
My source code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main( int argc, char** argv ) {
Mat src;
src = imread( argv[1], 1 );
if( !src.data )
{ return -1; }
Mat image( 600, 600, CV_8UC3, Scalar(127,127,127) );
int cn = src.channels();
uint8_t* pixelPtr = (uint8_t*)src.data;
for(int i=0 ; i< src.rows;i++) {
for(int j=0 ; j< src.cols;j++) {
Scalar_<uint8_t> bgrPixel;
bgrPixel.val[0] = pixelPtr[i*src.cols*cn + j*cn + 0]; // B
bgrPixel.val[1] = pixelPtr[i*src.cols*cn + j*cn + 1]; // G
bgrPixel.val[2] = pixelPtr[i*src.cols*cn + j*cn + 2]; // R
if(bgrPixel.val[2] !=0 ) { // avoid division by zero
float a= image.cols/2+50*(log((float)bgrPixel.val[0] / (float)bgrPixel.val[2])) ;
float b= image.rows/2+50*(log((float)bgrPixel.val[1] / (float)bgrPixel.val[2])) ;
if(!isinf(a) && !isinf(b))
image.at<Vec3b>(a,b)=Vec3b(255,2,3);
}
}
}
imshow("log-chroma", image );
imwrite("log-chroma.png", image );
waitKey(0);
}
What I am missing or misunderstand?
By reading the paper Recovery of Chromaticity Image Free from Shadows via Illumination Invariance that you've posted, and your code, I guess the problem is that your coordinate axes are linear, while in the paper the coordinates are log(R/G) and log(B/G).
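For illustration, here is a small sketch (my own, hypothetical helper, not from the paper) of computing those log-ratio coordinates per pixel with OpenCV's vectorized operations, assuming a BGR input image:

#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

using namespace cv;

// Sketch: per-pixel log-chromaticity coordinates log(R/G) and log(B/G)
// for an 8-bit BGR input image.
Mat logChromaticity(const Mat& bgr8)
{
    Mat f;
    bgr8.convertTo(f, CV_32FC3, 1.0 / 255.0, 1e-4); // small offset avoids log(0)
    std::vector<Mat> ch;
    split(f, ch); // ch[0] = B, ch[1] = G, ch[2] = R
    Mat rg = ch[2] / ch[1]; // per-element division
    Mat bg = ch[0] / ch[1];
    Mat logRG, logBG;
    log(rg, logRG); // log(R/G): first coordinate
    log(bg, logBG); // log(B/G): second coordinate
    std::vector<Mat> chroma = { logRG, logBG };
    Mat out;
    merge(chroma, out); // two-channel float image of chromaticity coordinates
    return out;
}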
This is the closest I can figure. Reading through this:
http://www2.cmp.uea.ac.uk/Research/compvis/Papers/DrewFinHor_ICCV03.pdf
I came across the sentence:
"Fig. 2(a) shows log-chromaticities for the 24 surfaces of a Macbeth ColorChecker Chart, (the six neutral patches all belong to the same
cluster). If we now vary the lighting and plot median values
for each patch, we see the curves in Fig. 2(b)."
If you look closely at the log-chromaticity plot, you see 19 blobs: one for each of the 18 colors in the Macbeth chart, plus a single blob shared by all 6 grayscale targets in the bottom row:
Explanation of Log Chromaticities
With one picture, we can only get one point of each blob: we take the median value inside each target and plot it. To get the plot from the paper, we would have to create multiple images under different lighting. We might be able to do this by varying the color temperature of the image in an image editor.
For now, I just looked at the color patches in the original image and plotted the points:
Input:
Color Patches Used
Output:
Log Chromaticity
The graph dots are not all in the same place as in the paper, but I figure it's fairly close. Would someone please check my work to see if this makes sense?
In that OpenCV code I got an "undefined identifier" error for the function isinf() and I solved it by replacing it with _finite(). That might be an issue with the Visual Studio version.
if(!isinf(a) && !isinf(b)) ----> if(_finite(a) && _finite(b))
Include this header:
#include <float.h>
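As a portable alternative (my suggestion, not from the original answers), C++11's <cmath> provides std::isfinite, which rejects both infinities and NaNs:

#include <cmath>

// Portable C++11 replacement for the platform-specific _finite() check:
inline bool plottable(float a, float b)
{
    return std::isfinite(a) && std::isfinite(b); // false for inf and NaN
}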

VTK Toolkit - vtkCutter Performance

I use the VTK Toolkit to load an OBJ file, and a vtkCutter to cut through the data set with a plane and then draw the outline of the cut. For large objects this can become quite slow, as another user pointed out in the VTK Users Forum.
Is there a way to make the cutter use a hierarchical data structure to gain better performance?
This is the code:
#include <vtkSmartPointer.h>
#include <vtkCubeSource.h>
#include <vtkPolyDataMapper.h>
#include <vtkPlane.h>
#include <vtkCutter.h>
#include <vtkProperty.h>
#include <vtkActor.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkOBJReader.h>
#include <iostream>
#include <string>

int main(int argc, char *argv[])
{
    // Parse command line arguments
    if (argc != 2) {
        std::cout << "Usage: " << argv[0] << " Filename(.obj)" << std::endl;
        return EXIT_FAILURE;
    }
    std::string filename = argv[1];

    vtkSmartPointer<vtkOBJReader> obj = vtkSmartPointer<vtkOBJReader>::New();
    obj->SetFileName(filename.c_str());
    obj->Update();

    vtkSmartPointer<vtkPolyDataMapper> mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    mapper->SetInputConnection(obj->GetOutputPort());

    // Create a plane to cut; here it cuts in the XZ direction (XZ normal = (1,0,0); XY = (0,0,1); YZ = (0,1,0))
    vtkSmartPointer<vtkPlane> plane = vtkSmartPointer<vtkPlane>::New();
    plane->SetOrigin(0, 0, 0);
    plane->SetNormal(1, 0, 0);

    // Create cutter
    vtkSmartPointer<vtkCutter> cutter = vtkSmartPointer<vtkCutter>::New();
    cutter->SetCutFunction(plane);
    cutter->SetInputConnection(obj->GetOutputPort());
    cutter->Update();

    vtkSmartPointer<vtkPolyDataMapper> cutterMapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    cutterMapper->SetInputConnection(cutter->GetOutputPort());

    // Create plane actor
    vtkSmartPointer<vtkActor> planeActor = vtkSmartPointer<vtkActor>::New();
    planeActor->GetProperty()->SetColor(1.0, 1, 0);
    planeActor->GetProperty()->SetLineWidth(2);
    planeActor->SetMapper(cutterMapper);

    // Create cube actor
    vtkSmartPointer<vtkActor> cubeActor = vtkSmartPointer<vtkActor>::New();
    cubeActor->GetProperty()->SetColor(0.5, 1, 0.5);
    cubeActor->GetProperty()->SetOpacity(0.5);
    cubeActor->SetMapper(mapper);

    // Create renderer and add the actors for the plane and the cube
    vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
    renderer->AddActor(planeActor); // display the outline resulting from the cut
    renderer->AddActor(cubeActor);  // display the mesh

    // Add renderer to render window and render
    vtkSmartPointer<vtkRenderWindow> renderWindow = vtkSmartPointer<vtkRenderWindow>::New();
    renderWindow->AddRenderer(renderer);
    renderWindow->SetSize(600, 600);

    vtkSmartPointer<vtkRenderWindowInteractor> interactor =
        vtkSmartPointer<vtkRenderWindowInteractor>::New();
    interactor->SetRenderWindow(renderWindow);

    renderer->SetBackground(0, 0, 0);
    renderWindow->Render();
    interactor->Start();
    return EXIT_SUCCESS;
}
vtkCutter slices meshes using an arbitrarily complex func(x,y,z) and is used here with a simple plane to describe that function, which is a common and well-covered special case, as the cut contour lies on a simple plane and will hence be a simple (flat) polygon.
These generic implementations usually cost a lot of CPU time, because all special cases of poly cutting are expected to occur in case of vtkCutter.
There's also a slowdown coming from calling virtual functions in the vast class hierarchy of VTK. Without special hacks, it depends solely on the compiler to optimize the virtual function pointer lookup out of a loop, while VTK calls virtual functions (the filter function, for example) many times in one or more nested loops.
See this for related info: about the cost of virtual functions
VTK uses doubles almost everywhere, even if one could live with floats. Conversion and high precision also add quite a bit of computation and memory overhead.
VTK (5.8) does not explicitly use SIMD operations like SSE, AFAIK.
...
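One concrete option, though only on far newer VTK releases than the 5.8 discussed here (treat both the class and its output handling as assumptions to verify): vtkPlaneCutter specializes plane cuts and builds an internal spatial search structure that can be reused across cuts, e.g.:

// Hypothetical sketch for newer VTK (9.x) -- not available in VTK 5.8.
#include <vtkPlaneCutter.h>

vtkSmartPointer<vtkPlaneCutter> fastCutter = vtkSmartPointer<vtkPlaneCutter>::New();
fastCutter->SetInputConnection(obj->GetOutputPort()); // same reader as above
fastCutter->SetPlane(plane);                          // same vtkPlane as above
fastCutter->Update();
// In recent VTK the output for poly-data input is a vtkPolyData that can be
// fed to cutterMapper just like the vtkCutter output; older releases produced
// composite datasets, so check the documentation of your version.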
Search for topics like these:
Algorithm or software for slicing a mesh
Generate 2D cross-section polygon from 3D mesh.
Instead of doing this on the CPU, one could also use an OpenGL geometry shader in a transform feedback pass to extract the cut contour determined by a cut plane. Doing this in OpenCL is also possible; however, if no GPU-based compute device is available, it might be slower than a C or C++ implementation.
To render the meshes, one could use any OpenGL 3+ capable Renderer:
Ogre3D
Unity3D
Irrlicht
OSG
a simple, self-made OpenGL 3 renderer.
...
more: What is the best way to have realtime 3D rendering in an engineering application?