Transfer Functions in xtk

How would I go about plugging a transfer function into xtk to determine coloring and opacity based on pixel scalar intensity and/or gradient magnitude? This could be a separable 1D transfer function or a 2D transfer function. In addition to VTK, ImageVis3D also supports this.

There is no built-in support for that in XTK, and it is most likely not going to be added.
We recommend using ami.js. There is no built-in support for 2D transfer functions in ami.js yet either, but it is under active development and should provide most of the functionality required to support this feature.
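For readers unfamiliar with the term, here is a minimal, illustrative sketch (not xtk or ami.js API) of what a 1D transfer function does: it maps a scalar intensity through a lookup table to a color and an opacity. The two-control-point ramp is a made-up example.

```cpp
// Toy 1D transfer function: piecewise-linear RGBA lookup table.
#include <algorithm>
#include <array>

struct Rgba { float r, g, b, a; };

// Build a 256-entry LUT fading from transparent blue (low intensity)
// to opaque red (high intensity).
std::array<Rgba, 256> makeLut() {
    std::array<Rgba, 256> lut;
    for (int i = 0; i < 256; ++i) {
        float t = i / 255.0f;
        lut[i] = { t, 0.0f, 1.0f - t, t };  // color ramp + opacity ramp
    }
    return lut;
}

// Classify one voxel: normalize its intensity into the LUT's range.
Rgba classify(const std::array<Rgba, 256>& lut, float intensity,
              float minI, float maxI) {
    float t = std::clamp((intensity - minI) / (maxI - minI), 0.0f, 1.0f);
    return lut[static_cast<int>(t * 255.0f)];
}
```

A 2D transfer function works the same way but indexes the table by (intensity, gradient magnitude) pairs, which is what makes boundary surfaces between tissues easy to isolate.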
Hope that helps -

Related

Surface approximation based on RBF

I am looking for a way of approximating a surface based on a set of 3D data points. For this purpose I would like to use a method based on radial basis functions, but I cannot find a free implementation in C++.
I looked in ITK, VTK, and OpenCV, but I did not find anything...
Does anyone know of a free implementation of such an algorithm?
Any suggestions about reconstructing a surface from a set of 3D data points are also more than welcome! :)
3D surface reconstruction can be challenging. I would first recommend taking a look at PCL. The Point Cloud Library has grown into a nice set of tools for 3D point management and interpretation, and its license and API sound compatible with your needs. The surface reconstruction features of the library appear to be most applicable. In fact, RBF reconstruction is supported.
If PCL doesn't work, there are other options:
MeshLab,
This SO post provides a nice summary, and
of course, Wikipedia provides some links
Finally, you might search CiteSeerX, Google Scholar, etc. for papers like this one. As an example, a search for "3D Surface Reconstruction" at CiteSeerX yields many hits. RBF-based reconstruction is just one of many methods: is your application truly limited to radial basis functions? If not, there are many choices (e.g., the Ball Pivoting Algorithm). See this survey paper for some comparisons.
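To give a feel for the underlying math, here is a toy sketch of RBF interpolation using Eigen (assumed available) with the triharmonic kernel phi(r) = r^3 that is common in implicit surface reconstruction. Real systems add a low-degree polynomial term and regularization, which this sketch omits for brevity.

```cpp
#include <Eigen/Dense>
#include <vector>

using Vec3 = Eigen::Vector3d;

// Fit weights w so that f(x) = sum_i w_i * phi(|x - c_i|) interpolates
// the given values at the centers (e.g. 0 on the surface and +d/-d at
// points offset along the point normals).
Eigen::VectorXd fitRbf(const std::vector<Vec3>& centers,
                       const Eigen::VectorXd& values) {
    const int n = static_cast<int>(centers.size());
    Eigen::MatrixXd A(n, n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double r = (centers[i] - centers[j]).norm();
            A(i, j) = r * r * r;          // phi(r) = r^3
        }
    return A.fullPivLu().solve(values);   // dense solve; fine for small n
}

// Evaluate the fitted implicit function; the surface is f(x) = 0,
// extractable with marching cubes or similar.
double evalRbf(const std::vector<Vec3>& centers,
               const Eigen::VectorXd& w, const Vec3& x) {
    double f = 0.0;
    for (size_t i = 0; i < centers.size(); ++i) {
        double r = (centers[i] - x).norm();
        f += w[i] * r * r * r;
    }
    return f;
}
```

The dense O(n^3) solve is the reason production implementations (such as PCL's) use compactly supported kernels or fast multipole methods for large point clouds.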

(rendering particles) Should I learn shader or OpenCL?

I am trying to run 100,000 or more particles.
I've been watching many tutorials and other examples that demonstrate the power of shaders and OpenCL.
In one example that I watched, each particle's position was calculated from the position of the mouse pointer.
The position of each particle was stored as an RGB value, R being x, G being y, and B being z, and passed to the pixel shader; each color was then read back and drawn as a particle's position.
However, this approach felt absurd to me.
Isn't this an approach or coding style that should be avoided?
Shouldn't I learn how to use OpenCL and use the power of the GPU's multithreading to express my intended computation directly?
Isn't this an approach or coding style that should be avoided?
Why?
The entire point of shaders is for you to be able to do what you want, to more effectively express what you want to do, and to allow yourself greater control over the hardware.
You should never, ever be afraid of re-purposing something for a different functionality. Textures do not store colors; they store data, which can be color, but it can also be other stuff. The sooner you stop thinking of textures as pictures, the better off you will be as a graphics programmer.
The GPU and API exist to be used. Use them as you see fit; do not let how you think the API should be used limit you.
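As a concrete illustration of that point, here is a minimal sketch (assuming an existing OpenGL context and a loader such as GLEW) of uploading particle positions into a floating-point texture, so each texel carries x/y/z data rather than a color:

```cpp
#include <GL/glew.h>
#include <vector>

// xyz holds width*height positions packed as x,y,z triples.
GLuint makePositionTexture(const std::vector<float>& xyz,
                           int width, int height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // GL_RGB32F keeps full float precision: R = x, G = y, B = z.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
                 GL_RGB, GL_FLOAT, xyz.data());
    // No filtering: each texel is one particle, not an image to sample.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}
```

A shader that reads this texture fetches a position, not a color; the "picture" interpretation never enters into it.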
Shouldn't I learn how to use OpenCL and use the power of the GPU's multithreading to express my intended computation directly?
Yesterday, I would have said "yes". However, today this was released: OpenGL compute shaders.
The fact that the OpenGL ARB and Khronos created this shader type is a tacit admission that OpenCL/OpenGL interop is not the most efficient way to generate data for rendering purposes. After all, if it were, there would be no need for OpenGL to have generalized compute functionality. Three versions of GL 4.x shipped without it. The fact that it's here now is basically the ARB saying, "Yeah, OK, we need this."
If the ARB, staffed by many people who make the hardware, think that CL/GL interop is not the fastest way to go, then it's pretty clear that you should use compute shaders.
Of course, if you're trying to do something right now, that won't help; only NVIDIA has compute shader support. And even that's only in beta drivers. It will take many months before AMD gets support for them, and many more before that support becomes solid and stable enough to use.
Even so, you don't need compute shaders to generate data. People have used transform feedback and geometry shaders to do LOD and frustum culling for instanced rendering. Do not be afraid to think outside of the "OpenGL draws stuff" box.
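As a rough illustration of what such a compute shader looks like, here is a minimal GLSL 4.3 particle-update kernel, embedded as a C++ string. The buffer layout and gravity-only integration are illustrative assumptions, not any particular engine's scheme.

```cpp
// Requires OpenGL 4.3; particles live in shader storage buffers that
// the render pass can read directly as vertex data.
const char* kParticleCS = R"(
#version 430
layout(local_size_x = 256) in;
layout(std430, binding = 0) buffer Particles {
    vec4 posLife[];   // xyz = position, w = remaining lifetime
};
layout(std430, binding = 1) buffer Velocities {
    vec4 vel[];
};
uniform float dt;
void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(posLife.length())) return;  // guard the last group
    vel[i].y -= 9.81 * dt;                    // gravity
    posLife[i].xyz += vel[i].xyz * dt;        // integrate position
    posLife[i].w -= dt;                       // age the particle
}
)";
// Dispatch with: glDispatchCompute((numParticles + 255) / 256, 1, 1);
```

The whole simulation stays on the GPU; no readback or CL/GL interop handoff is needed before drawing.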
To simulate particles in OpenCL, you should try out "Yet Another Shader Editor" / http://yase.chnk.us/ - it takes away all the tricky parts and lets you get down to the meat of coding the particle control algorithms. IN YOUR BROWSER. Nothing to download, no accounts to create, just alter whatever examples you find. It's a blast.
https://lotsacode.wordpress.com/2013/04/16/fun-with-particles-yet-another-shader-editor/
I'm not affiliated with yase in any way.

Vector text rendering system in Direct3d

Does anyone know of an implementation of vector fonts in DirectX?
If not, does anyone have a good starting place for this?
Or even any examples of a reader written in DirectX with basic zoom support?
Direct vector fonts don't work too well in D3D, as they require an intermediary texture to hold the rasterized data (verts or pixels) and a lot of extra work, so you need to approach them a little differently to get them working easily and efficiently (if you are performance constrained/care about performance).

You should use signed distance fields for this, à la Valve's "Improved Alpha-Tested Magnification for Vector Textures and Special Effects" (which incidentally references a paper on vector fonts, if you do want to go that way). Distance fields up-scale VERY well, but are horrid for down-scaling if your fonts are complex. Hard edges also don't store too well, though this can be fixed by using two channels to store the data. Distance fields also allow easy smoothing, bolding, outlining, glow, and drop shadows.

The technique is heavily shader-reliant (it can be done in the FFP via alpha testing, but using smoothstep in the pixel shader provides a far better result with minimal overhead), and one doesn't need anything beyond ps v1. See http://www.valvesoftware.com/publications.html for the paper, and see the shaders in Valve's Source SDK for a complete implementation reference. (I incidentally just built a DX11-based text renderer using this; it works wonderfully, though I use a tool to brute-force my SDFs so I don't need to create them at runtime.)
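For orientation, here is a hedged sketch of the core of such a pixel shader in HLSL, embedded as a C++ string. The 0.5 threshold and the use of fwidth() are common choices, not taken from Valve's shaders, and fwidth() needs a newer shader model than the ps v1 floor mentioned above.

```cpp
const char* kSdfPixelShader = R"(
Texture2D    SdfTex : register(t0);
SamplerState Linear : register(s0);

float4 main(float2 uv : TEXCOORD0) : SV_Target
{
    // The texture stores distance-to-glyph-edge remapped to [0,1],
    // with 0.5 sitting exactly on the outline.
    float dist = SdfTex.Sample(Linear, uv).r;

    // Smooth the transition over roughly one screen pixel for clean,
    // resolution-independent anti-aliasing, whatever the zoom level.
    float width = fwidth(dist);
    float alpha = smoothstep(0.5 - width, 0.5 + width, dist);

    return float4(1.0, 1.0, 1.0, alpha);   // white text, soft edge
}
)";
```

Shifting the 0.5 threshold is what gives you bolding for free, and sampling a second offset threshold gives outlines and drop shadows.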

Scaling Rotation Shearing Reflection algorithms

I use the EasyBMP library and I want to know the most effective algorithms to scale, rotate, shear, and reflect an image. I want the most optimized way to do it.
The most effective way to scale, rotate, shear and reflect is to use the power of your graphics card - for example through OpenGL.
If you still want to do bitmap pixel operations yourself, typically you do this using linear algebra. This is not super easy to figure out, so I recommend finding some good study material, for example this book.
Instead of telling you how to use EASYBMP (which looks like crap), I'm going to suggest you use Magick++ instead. It supports BMP and has all the methods you ask for built-in.
Please refer to the Wikipedia link for the transformation matrices.
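For the do-it-yourself route, here is a minimal sketch of applying such a 2x2 transformation matrix to pixels with inverse mapping and nearest-neighbor sampling. The raw int-buffer image representation is a stand-in for whatever container you use (EasyBMP pixels, for instance).

```cpp
#include <cmath>
#include <vector>

struct Mat2 { double a, b, c, d; };                    // row-major 2x2

Mat2 rotation(double rad)  { return { std::cos(rad), -std::sin(rad),
                                      std::sin(rad),  std::cos(rad) }; }
Mat2 scaling(double sx, double sy) { return { sx, 0, 0, sy }; }
Mat2 shearX(double k)      { return { 1, k, 0, 1 }; }
Mat2 reflectX()            { return { 1, 0, 0, -1 }; } // mirror across x-axis

// Walk the *destination* pixels and pull from the source through the
// inverse matrix; forward mapping would leave holes in the output.
std::vector<int> transform(const std::vector<int>& src, int w, int h,
                           const Mat2& inv) {
    std::vector<int> dst(w * h, 0);
    double cx = w / 2.0, cy = h / 2.0;                 // transform about center
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double sx = inv.a * (x - cx) + inv.b * (y - cy) + cx;
            double sy = inv.c * (x - cx) + inv.d * (y - cy) + cy;
            int ix = static_cast<int>(std::lround(sx));
            int iy = static_cast<int>(std::lround(sy));
            if (ix >= 0 && ix < w && iy >= 0 && iy < h)
                dst[y * w + x] = src[iy * w + ix];
        }
    return dst;
}
```

Swapping the nearest-neighbor lookup for bilinear interpolation is the usual next step for quality; the matrix machinery stays the same.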

Recommendations for C++ 3d vector/matrix library with targeted camera

I am writing a very simple 3D particle software rendering system, but I am only really interested in coding the particle system and the rasterizer. What I am looking for is the easiest way to go from 3D particle coordinates, through a camera, to screen coordinates, but with features like a variable FOV and a targeted (look-at) camera.
Any additional features such as distance from point to point, bounding volumes etc. would be a bonus, but ease of use is more important to me than features.
The only license requirement is that it's free (as in beer).
You probably want a scene graph library. Many C++ scene graph libraries exist. OpenSceneGraph is popular, Coin3D (free for non-commercial use) is an implementation of the Open Inventor spec, and any of them would probably fit your needs, as it doesn't sound like you need any cutting-edge support. There is also Panda3D, which I've heard is good if you're into Python.
You could do all of this in a straight low-level toolkit like OpenGL, but without prior experience it takes a lot longer to get a handle on OpenGL than on any of the scene graph libraries.
When choosing a scenegraph library it will probably just come down to personal preference as to which API you prefer.
Viewing is done with elementary transformations, the same way that model transformations are done. If you want convenience functions like GLU's gluLookAt(), it is really easy to make your own.
If you do want to make your own look-at function and the like, I can recommend Eigen, a really easy-to-use linear algebra library for C++.
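As a hedged sketch of what that looks like with Eigen, here are a gluLookAt-style view matrix, a perspective projection with a variable FOV, and a helper that projects a world-space point to normalized device coordinates:

```cpp
#include <Eigen/Dense>
#include <cmath>

using namespace Eigen;

// Build a view matrix that places the camera at eye, looking at target.
Matrix4d lookAt(const Vector3d& eye, const Vector3d& target,
                const Vector3d& up) {
    Vector3d f = (target - eye).normalized();   // forward
    Vector3d s = f.cross(up).normalized();      // right
    Vector3d u = s.cross(f);                    // true up
    Matrix4d m = Matrix4d::Identity();
    m.block<1, 3>(0, 0) = s.transpose();
    m.block<1, 3>(1, 0) = u.transpose();
    m.block<1, 3>(2, 0) = -f.transpose();
    m(0, 3) = -s.dot(eye);
    m(1, 3) = -u.dot(eye);
    m(2, 3) =  f.dot(eye);
    return m;
}

// OpenGL-style perspective projection; fovYRad is the vertical FOV.
Matrix4d perspective(double fovYRad, double aspect,
                     double zNear, double zFar) {
    double t = 1.0 / std::tan(fovYRad / 2.0);
    Matrix4d m = Matrix4d::Zero();
    m(0, 0) = t / aspect;
    m(1, 1) = t;
    m(2, 2) = (zFar + zNear) / (zNear - zFar);
    m(2, 3) = 2.0 * zFar * zNear / (zNear - zFar);
    m(3, 2) = -1.0;
    return m;
}

// World-space point -> normalized device coordinates ([-1,1] cube);
// scale x and y by the viewport to get pixel coordinates.
Vector3d project(const Matrix4d& proj, const Matrix4d& view,
                 const Vector3d& p) {
    Vector4d clip = proj * view * Vector4d(p.x(), p.y(), p.z(), 1.0);
    return clip.head<3>() / clip.w();           // perspective divide
}
```

That is essentially all the camera math a software particle rasterizer needs; everything else is per-particle sorting and blending.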
If you're trying to just focus on the rendering portion of a particle system, I'd go with an established 3D rendering library.
Given your description, you could consider trying to just add your particle rasterization to one or both of the software renderers in Irrlicht. One other advantage of this would be that you could compare your results to the DX/OGL particle effect renderers that already exist. All of the plumbing/camera management/etc would be done for you.
You may also want to have a look at the Armadillo C++ library.