TensorBoard projector: animated transition between multiple datasets

I would like to use the standalone TensorBoard projector with multiple datasets (the same data points at different times).
I want the projector to load multiple datasets, apply PCA, and then animate the points through the "checkpoints" over time (changing the point coordinates in the 2D/3D space in an animated fashion).
Where can I start digging into the projector code in order to load multiple datasets and change the rendered point coordinates after the initial render?

Still having nearly zero experience with TensorBoard, I recently searched for a similar feature.
Loading multiple datasets in tensorboardX is no problem: call add_embedding multiple times, each time with a different global_step parameter. For the transitions themselves there seems to be no built-in solution.
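For example, a minimal sketch (random data, placeholder tag and log directory; assuming tensorboardX and numpy are installed):

```python
import numpy as np
from tensorboardX import SummaryWriter

writer = SummaryWriter('runs/projector_demo')        # hypothetical log directory
labels = [f'point_{i}' for i in range(100)]

# One (N, D) embedding matrix per time step; each call shows up in the
# projector as a separate checkpoint selectable via the step slider.
for step, embeddings in enumerate(np.random.rand(5, 100, 32)):
    writer.add_embedding(embeddings, metadata=labels,
                         global_step=step, tag='my_points')
writer.close()
```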
One alternative I suggest is hypertools. It supports multiple dimensionality reduction methods as well as animations.
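Roughly like this (hypothetical data; check the hypertools docs for the exact animation options):

```python
import numpy as np
import hypertools as hyp

# One array of point coordinates per time step (placeholder random data).
snapshots = [np.random.rand(100, 32) for _ in range(5)]

# hypertools reduces the data (PCA here) and can animate across the list.
hyp.plot(snapshots, animate=True, reduce='PCA', ndims=3)
```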
I would appreciate any update on this topic as I am still searching for alternatives.

Related

Can't understand the concept of merge-instancing

I was reading slides from a presentation about "merge-instancing" (the presentation is by Emil Persson, link: www.humus.name/Articles/Persson_GraphicsGemsForGames.pptx, from slide 19).
I can't understand what's going on. I know instancing only from OpenGL, and I thought it could only draw the same mesh multiple times. Can somebody explain? Does it work differently with DirectX?
Instancing: You upload a mesh to the GPU and activate its buffers whenever you want to render it. Data is not duplicated.
Merging: You want to create a mesh from multiple smaller meshes (such as the building complex in the example), so you either:
draw each complex using instancing, which means multiple draw calls per complex, or
merge the instances into a single mesh, which replicates the vertices and other data for each complex, but lets you render the whole complex with a single draw call.
Instance-merging: You create the complex by referencing the vertices of the instances that take part in it, and use the vertex data to know where to fetch the data for each instance. This way you get the advantage of instancing (each mesh is uploaded to the GPU once) and the benefit of merging (you draw the whole complex with a single draw call).
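A rough CPU-side sketch of the idea (names and buffer layout are illustrative, not Persson's exact format): instead of duplicating full vertex data, you emit one tiny (local index, instance id) pair per drawn vertex, and the vertex shader fetches the real vertex data and the per-instance transform from shared buffers.

```python
import numpy as np

def build_merge_instanced_stream(mesh_index_buffers, placements):
    # mesh_index_buffers: list of index arrays, one per source mesh,
    #                     all referencing the shared (non-duplicated) vertex data
    # placements: list of (mesh_id, instance_id) pairs making up the complex
    stream = []
    for mesh_id, instance_id in placements:
        local = mesh_index_buffers[mesh_id].astype(np.uint32)
        ids = np.full_like(local, instance_id)          # which transform to fetch
        stream.append(np.stack([local, ids], axis=1))   # (local index, instance id)
    # One concatenated stream -> one draw call for the whole complex.
    return np.concatenate(stream, axis=0)
```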

OpenGL Primitive Ordering vs. Primitive Batching

I've been reading up on how some OpenGL-based architectures manage their objects in an effort to create my own lightweight engine based on my application's specific needs (please no "why don't you just use this existing product" responses). One architecture I've been studying is Qt's Quick Scene Graph, and while it makes a lot of sense, I'm confused about something.
According to their documentation, opaque primitives are ordered front-to-back and non-opaque primitives are ordered back-to-front. The ordering enables early z-kill, ideally eliminating the need to process pixels that end up behind others. This seems to be a fairly common practice and I get it. It makes sense.
Their documentation also talks about how items that use the same Material can be batched together to reduce the number of state changes. That is, a shared shader program can be bound once and then multiple items rendered using the same shader. This also makes sense and I'm good with it.
What I don't get is how these two techniques work together. Say I have 3 different materials (let's just say they are all opaque, for simplicity) and 100 items that each use one of the 3 materials; then I could theoretically create 3 batches based on the materials. But what if my 100 items are at different depths in the scene? Would I then need to create more than 3 batches so that I can properly sort the items and render them front-to-back?
Based on what I've read about other engines, like Ogre 3D, both techniques seem to be used pretty regularly; I just don't understand how they are used together.
If you really have 3 distinct materials, you can only batch objects that end up next to each other in the sorted order. Sometimes the sorting can be relaxed for objects that do not overlap each other on screen, to minimize material switches.
The real "trick" behind all of this, however, is to combine the materials. If the engine is able to create one single material out of the 3 source materials and use the shaders to properly apply the material settings to the different objects (mostly that means transforming the texture coordinates), then everything can be batched and ordered at the same time. But if that is not possible, the engine can't optimize further and has to switch the material every now and then.
You don't have to group every material in your scene together. But if it's possible to group the materials that often switch with each other, that alone can already improve performance a lot.
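As a rough sketch of how the two interact (names are illustrative only): sort the opaque items front-to-back first, then batch consecutive items that happen to share a material.

```python
def build_opaque_batches(items):
    # items: list of (depth_from_camera, material_id, drawable)
    ordered = sorted(items, key=lambda it: it[0])       # front-to-back for early-z
    batches = []                                        # list of (material_id, [drawables])
    for depth, material, drawable in ordered:
        if batches and batches[-1][0] == material:
            batches[-1][1].append(drawable)             # same state -> extend current batch
        else:
            batches.append((material, [drawable]))      # material switch -> new batch
    return batches
```

If overlap between items is rare, the sort key could instead be (material, depth), trading some of the early-z benefit for fewer state switches.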

What algorithm would blend multiple images of the same scene where one object is in a different position in each image?

I want to blend multiple photo shots of the same scene where only one object is in a different position in each shot. I want to know what kind of algorithm would give the desired result. Here is an example.
Well, what you are looking for is called image fusion. There are many methods that do this, but it is still a fairly active research area. Based on the images you have, you should select the method that performs best. Because your images will have imperfections and differences in lighting and shadowing, this goes well beyond a simple cut and paste.
Here is a little more information and some algorithm explanations: Image Fusion by Image Blending.
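As a baseline for comparison, here is a minimal mask-based blending sketch (assuming the shots are already aligned and you have a rough binary mask of the object; real fusion methods handle the lighting and shadow differences far better than this):

```python
import numpy as np
from scipy.ndimage import gaussian_filter   # assumption: SciPy is available

def feather_blend(base, insert, mask, radius=15):
    # Soften the binary mask so the seam fades instead of cutting hard.
    alpha = gaussian_filter(mask.astype(np.float64), sigma=radius)[..., None]
    blended = alpha * insert.astype(np.float64) + (1.0 - alpha) * base.astype(np.float64)
    return np.clip(blended, 0, 255).astype(base.dtype)
```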

Need fast C++ Qt/Qwt scatter plot

I have a huge array of 2D points (about 3 million pairs), which I need to render at reasonable speed in a Qt-based application.
I've tried using QGraphicsScene, but it's very slow even with 400,000 primitives, so I was looking into the Qwt library instead.
It has a scatter plot example screenshot on its SourceForge page, which looks exactly like what I need, but I can find neither any actual code that could be used for this kind of data nor a corresponding API in the Qwt docs - they mention only different types of curves.
So it would be good to get some pointers to scatter plot examples and some advice on performance.
Suggestions for other C++ Qt-compatible plotting libraries which can cope with this amount of data are also welcome.
The scatter plot is contained in the "realtime" example: what you want is the IncrementalPlot class.
I'd also suggest that drawing all 3 million points isn't reasonable, since modern screens have only about 2 million pixels :) So it seems better to simplify the plot beforehand by merging adjacent points into one, with a threshold that depends on the zoom factor.
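For example, a simple sketch of that simplification (the cell size would be derived from the current zoom factor):

```python
import numpy as np

def decimate(points, cell_size):
    # Collapse all points that fall into the same screen-space cell into one.
    cells = np.floor(points / cell_size).astype(np.int64)
    _, keep = np.unique(cells, axis=0, return_index=True)
    return points[keep]                 # at most one point survives per cell
```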
As viens pointed out, generating scatter plots with 3 million points is probably not a good idea.
I have achieved good performance generating 3D scatter plots with 30,000 points using OpenGL.
OpenGL is fast and integrates well with Qt. However, it is a low level API that forces you to do a lot of tedious coding.
VTK may be another option.
MathGL is a free (GPL) cross-platform plotting library. It is written in C++ and has a Qt widget. It is also rather fast, but 3 million points ... it takes about 30 seconds to plot on my laptop.
I'd also suggest using OpenGL, as @vines said, and in particular exploiting display lists (glGenLists) or vertex buffers. A few million points as primitive vertices shouldn't be that difficult.
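A minimal sketch of the vertex-buffer path (PyOpenGL is used here just for brevity; the equivalent C++ calls are the same, and the function names are illustrative):

```python
import numpy as np
from OpenGL.GL import (GL_ARRAY_BUFFER, GL_FLOAT, GL_POINTS, GL_STATIC_DRAW,
                       GL_VERTEX_ARRAY, glBindBuffer, glBufferData, glDrawArrays,
                       glEnableClientState, glGenBuffers, glVertexPointer)

def upload_points(points_xy):
    # Upload all points once into a VBO (points_xy: float32 array of shape (N, 2)).
    pts = np.ascontiguousarray(points_xy, dtype=np.float32)
    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, pts.nbytes, pts, GL_STATIC_DRAW)
    return vbo, len(pts)

def draw_points(vbo, count):
    # Draw the whole cloud with a single call each frame.
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(2, GL_FLOAT, 0, None)   # vertex data comes from the bound VBO
    glDrawArrays(GL_POINTS, 0, count)
```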

IDEAs: how to interactively render large image series using GPU-based direct volume rendering

I'm looking for ideas on how to convert a 30+ GB series of 2000+ color TIFF images into a dataset that can be visualized at interactive frame rates using GPU-based volume rendering (OpenCL / OpenGL / GLSL). I want to use a direct volume visualization approach instead of surface fitting (i.e. raycasting instead of marching cubes).
The problem is twofold. First, I need to convert my images into a 3D dataset. The first thing that came to mind is to treat all images as 2D textures and simply stack them to create a 3D texture.
The second problem is the interactive frame rates. For this I will probably need some sort of downsampling in combination with "details-on-demand", loading the high-resolution dataset when zooming in.
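A minimal sketch of the stacking idea, assuming the slices can be read with tifffile; the stride-based downsampling and the step value are just placeholders to get a first version of the volume into GPU memory:

```python
import numpy as np
import tifffile   # assumption: the TIFF series is readable with tifffile

def load_volume(slice_paths, step=4):
    # Read every slice, crudely downsample it, and stack into one 3D array.
    slices = [tifffile.imread(p)[::step, ::step] for p in sorted(slice_paths)]
    volume = np.stack(slices, axis=0)          # shape: (depth, height, width[, channels])
    return np.ascontiguousarray(volume)        # contiguous memory, ready for glTexImage3D
```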
A first point-wise approach I found is:
polygonization of the complete volume data through layer-by-layer processing and generating corresponding image texture;
carrying out all essential transformations through vertex processor operations;
dividing polygonal slices into smaller fragments, where the corresponding depth and texture coordinates are recorded;
in fragment processing, deploying the vertex shader programming technique to enhance the rendering of fragments.
But I have no concrete idea of how to start implementing this approach.
I would love to see some fresh ideas, or suggestions on how to start implementing the approach shown above.
If anyone has any fresh ideas in this area, they're probably going to be trying to develop and publish them. It's an ongoing area of research.
In your "point-wise approach", it seems like you have outlined the basic method of slice-based volume rendering. This can give good results, but many people are switching to a hardware raycasting method. There is an example of this in the CUDA SDK if you are interested.
A good method for hierarchical volume rendering was detailed by Crassin et al. in their paper called GigaVoxels. It uses an octree-based approach, loading bricks into memory only when they are needed.
A very good introductory book in this area is Real-Time Volume Graphics.
I've done a bit of volume rendering, though my code generated an isosurface using marching cubes and displayed that. However, in my modest self-education in volume rendering I did come across an interesting short paper: Volume Rendering on Common Computer Hardware. It comes with example source too. I never got around to checking it out, but it seemed promising. It is in DirectX though, not OpenGL. Maybe it can give you some ideas and a place to start.