Add error bars to a VTK 2D scatter plot - C++

Is there a way to add error bars to scatter plot data using VTK? I am currently plotting point data using the C++ API; the data I am plotting has an associated uncertainty which I would also like to visualise.
I can't find any obvious reference to error bars in the documentation; the only mention I have found is in this Kitware presentation from 2011, which refers to a function that does not seem to exist.
Sample code snippet:
// Chart source data is populated etc...
vtkPlot* sampleScatter = chartXY->AddPlot(vtkChart::POINTS);
sampleScatter->SetInputData(chartDataTable, 0, 1);
// Here is where I would like to add the error bars -
// below method is from the link, and does not work
vtkPlotPoints::SafeDownCast(sampleScatter)->SetErrorArray(errorData.GetPointer());
// Chart is rendered...
where chartXY is a vtkChartXY object and chartDataTable is a vtkTable containing the x and y data in columns 0 and 1.
Is there a way to populate error data for visualisation in a similar fashion to the above, or will I have to roll my own chart type?

It turns out that this is not a capability that exists in VTK at the moment.
I have developed a basic capability to do this, which is currently the subject of a merge request in the VTK repository. I will update if/when this has been merged in and the capability is available.

Related

Create raster from XYZ

I have a data set consisting of XYZ data. The dimensions are 5587 rows by 3 columns.
I try to use rasterFromXYZ from the raster package but I get the following error:
Error in rasterFromXYZ(DATA) : x cell sizes are not regular
Any help would be appreciated.
You are not providing example data, which makes it hard to help you out. What the message means is that your data does not appear to be regularly spaced.
Instead of rasterFromXYZ you can use rasterize, in which case you specify the required geometry and then transfer the values to it.
Depending on your goals, you may also use interpolate.
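Conceptually, what rasterize does is bin each irregular point into a cell of a target grid and aggregate the values that land there. A minimal stdlib-Python sketch of that idea (the grid extent, resolution, and mean aggregation below are arbitrary illustrative choices, not the raster package's API):

```python
def rasterize_xyz(xyz, xmin, xmax, ymin, ymax, nx, ny):
    """Bin irregularly spaced (x, y, z) points into an ny-by-nx grid,
    averaging z per cell; cells that receive no points stay None."""
    sums = [[0.0] * nx for _ in range(ny)]
    counts = [[0] * nx for _ in range(ny)]
    for x, y, z in xyz:
        col = int((x - xmin) / (xmax - xmin) * nx)
        row = int((y - ymin) / (ymax - ymin) * ny)
        if 0 <= col < nx and 0 <= row < ny:
            sums[row][col] += z
            counts[row][col] += 1
    return [[sums[r][c] / counts[r][c] if counts[r][c] else None
             for c in range(nx)]
            for r in range(ny)]

# Three irregularly spaced points averaged onto a 2x2 raster
points = [(0.1, 0.10, 5.0), (0.2, 0.15, 3.0), (0.9, 0.90, 7.0)]
raster = rasterize_xyz(points, 0.0, 1.0, 0.0, 1.0, 2, 2)
```

The first two points fall in the same cell and are averaged; the empty cells are where rasterFromXYZ would otherwise complain, since it demands a value spacing that already forms a regular grid.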

How can I use different colors for different labels in TensorBoard embedding visualization?

I am visualizing sentence embeddings using TensorBoard. I have a label for each sentence embedding. How can I set a color for each label?
For example
Embedding vector              Label
[0.2342 0.2342 0.234 0.8453] A
[0.5342 0.9342 0.234 0.1453] B
[0.7342 0.0342 0.124 0.8453] C
[0.8342 0.5342 0.834 0.5453] A
I am able to visualize the embedding vectors, where each row is tagged with its label. I also want to set colors, so that points with the same label share the same color: all "A" red, all "B" green, all "C" blue, and so on.
I searched on Google but didn't find any sample.
Could anyone please share some code to get it done?
Thank you in advance.
There should be a "Color by" drop-down that you can use.
In case it is not showing up, one possible reason is that you have more than 50 unique labels, which is the hardcoded limit in the current TensorFlow code.
Refer to this thread for details.
https://github.com/tensorflow/tensorboard/issues/61
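For the drop-down to offer your labels in the first place, the embedding needs to be associated with a metadata file: a TSV with one row per embedding vector, and no header row when there is only a single label column. A minimal sketch of writing that file (the log directory and filename are your choice; wiring it into the projector config is done separately):

```python
import os

def write_projector_metadata(labels, log_dir, filename="metadata.tsv"):
    """Write one label per line. For a single-column metadata file the
    TensorBoard projector expects no header row, so rows align 1:1
    with the embedding vectors."""
    os.makedirs(log_dir, exist_ok=True)
    path = os.path.join(log_dir, filename)
    with open(path, "w") as f:
        for label in labels:
            f.write(str(label) + "\n")
    return path

# One label per embedding row, matching the table in the question
path = write_projector_metadata(["A", "B", "C", "A"], "logs")
```

Once the projector config points at this file, the label column appears under "Color by" and same-label points are drawn in the same color.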

ArrayFire - rendering a heatmap as an image/array with the available colormaps

I'm using ArrayFire to make a 2D heat transfer simulation. My dataset is a matrix of temperatures and I want to visualize it as a heatmap. I need to produce frames of the colored dataset and save them as images on disk. So each temperature in my dataset has to be mapped to a color according to a certain color scheme.
I found that you can render the dataset in a window with a colormap using fig():
http://blog.accelereyes.com/blog/2013/07/03/arrayfire-examples-part-7-of-8-pde/
I also found the available colormaps:
http://arrayfire.org/docs/defines_8h.htm#a553ceda8a1d8946efac3b08e642574ae
My plan so far has been to render the colored dataset using window.image() in a hidden window and then extract an array/image from the result so I can save this result using saveImage(). But I cannot find a way to extract the image rendered by the window.
Is there a better way to do this using the image processing functions? I would like to avoid defining my own color scheme. (i.e. making my own function that maps a temperature to a color)

How to create a depth map from PointGrey BumbleBee2 stereo camera using Triclops and FlyCapture SDKs?

I've got the BumbleBee 2 stereo camera and two mentioned SDKs.
I've managed to capture video from it in my program, rectify the stereo images and get a disparity map. The next thing I'd like is a depth map similar to the one the Kinect gives.
The Triclops documentation is rather short: it only references functions, without a typical workflow description. The workflow is described in the examples.
Up to now I've found 2 relevant functions: family of triclopsRCDxxToXYZ() functions and triclopsExtractImage3d() function.
Functions from the first family calculate the x, y and z coordinates for a single pixel. The z coordinate corresponds perfectly to the depth in meters. However, to use this function I have to create two nested loops, as shown in the stereo3dpoints example. That adds overhead, because each call also computes two coordinates I don't need.
The second function, triclopsExtractImage3d(), always returns the error TriclopsErrorInvalidParameter. The documentation says only that "there is a geometry mismatch between the context and the TriclopsImage3d", which is not clear to me.
Examples of Triclops 3.3.1 SDK do not show how to use it. Google brings example from Triclops SDK 3.2, which is absent in 3.3.1.
I've tried adding lines 253-273 from the link above to the current stereo3dpoints example - I got that error.
Does anyone have an experience with it?
Is it valid to use triclopsExtractImage3d() or is it obsolete?
I also tried plotting values of disparity vs. z, obtained from triclopsRCDxxToXYZ().
The plot shows an almost exact inverse proportionality.
That is z = k / disparity. But k is not constant across the image, it varies from approximately 2.5e-5 to 1.4e-3, that is two orders of magnitude. Therefore, it is incorrect to calculate this value once and use forever.
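For context, in an ideally rectified stereo pair the standard pinhole model gives z = f·B/d, where f is the focal length in pixels, B the baseline in metres, and d the disparity in pixels, so k = f·B would be constant; a k that varies across the image, as observed above, suggests the RCD functions are applying per-pixel calibration corrections beyond this simple model. A sketch of the idealised relation (the focal length and baseline numbers below are made up, not BumbleBee calibration values):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Idealised rectified-stereo pinhole model: z = f * B / d.
    f in pixels, B in metres, d in pixels -> z in metres."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: f = 800 px, B = 0.12 m, d = 48 px
z = depth_from_disparity(48.0, 800.0, 0.12)  # 2.0 m
```

Under this model, halving the disparity doubles the depth, which matches the inverse-proportional shape of the plot even though the single-constant form does not hold exactly for the real camera.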
Maybe it is a bit too late and you have figured it out by yourself, but:
To use triclopsExtractImage3d you have to create a TriclopsImage3d first.
TriclopsImage3d *depthImage;
// Allocate the 3D image to match the current context geometry
triclopsCreateImage3d(triclopsContext, &depthImage);
// Fill it with the computed 3D point data
triclopsExtractImage3d(triclopsContext, depthImage);
// ... use depthImage ...
// Free it when done
triclopsDestroyImage3d(&depthImage);

ESRI ArcGIS Client match map to WKID (Silverlight)

I am using the map service at http://services.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer, which gives me a world map.
I have a shape file (.prj) that looks like this:
PROJCS["UTM:10N",GEOGCS["GCS_North_American_1927",DATUM["D_North_American_1927",SPHEROID["CLARKE 1866",6378206.4,294.9786982]],PRIMEM["GREENWICH",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Transverse_Mercator"],PARAMETER["Central_Meridian",-123.0],PARAMETER["Latitude_Of_Origin",0.0],PARAMETER["Scale_Factor",0.9996],PARAMETER["False_Easting",500000.0],PARAMETER["False_Northing",0.0],UNIT["METER",1.0]]
The locations relevant to the shape file are in western Canada (UTM:10N). Research seems to indicate that this is WKID 26710.
If I create the map layer and set the SpatialReference to 26710, no map shows. If I set SpatialReference to 102100, I get a map, but my points are in eastern France. This tells me that my reference is off.
I am processing the shape files, but I do not create or own them. How would you go about getting them to position themselves correctly in Canada? It seems that the answer would be to "get the right Spatial Reference", but all the searching I have done says that that is 26710.
The map service you're using only plots geometries supplied in the 102100 projection. If you have access to an ArcGIS Geometry server, you can convert your data points from the source projection to the one required by the map service. See http://resources.esri.com/help/9.3/arcgisserver/apis/rest/project.html
For example, if you have a point whose coordinates in the 26710 wkid are (491800, 5456280), you could do something like
http://sampleserver1.arcgisonline.com/ArcGIS/rest/services/Geometry/GeometryServer/project?inSR=26710&outSR=102100&geometries=%7B%22geometryType%22%3A%22esriGeometryPoint%22%2C%22geometries%22%3A%5B%7B%22x%22%3A491800%2C%22y%22%3A5456280%7D%5D%7D&f=pjson
The x and y coordinates in that result should show up somewhere around Vancouver on the map service you linked.
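The long request URL above can also be assembled programmatically, which helps when projecting many points. A Python sketch using only the standard library (the server URL and coordinates are the ones from the answer; parameter names match the Geometry Service REST "project" operation):

```python
import json
from urllib.parse import urlencode

def build_project_url(server, in_sr, out_sr, points):
    """Build a GeometryServer 'project' request URL that converts point
    coordinates between spatial references (e.g. 26710 -> 102100)."""
    geometries = {
        "geometryType": "esriGeometryPoint",
        "geometries": [{"x": x, "y": y} for x, y in points],
    }
    params = {
        "inSR": in_sr,
        "outSR": out_sr,
        # The REST API takes the geometry payload as URL-encoded JSON
        "geometries": json.dumps(geometries, separators=(",", ":")),
        "f": "pjson",
    }
    return server + "/project?" + urlencode(params)

url = build_project_url(
    "http://sampleserver1.arcgisonline.com/ArcGIS/rest/services"
    "/Geometry/GeometryServer",
    26710, 102100, [(491800, 5456280)])
```

Fetching that URL returns the projected coordinates in JSON, which you can then hand to the 102100 map layer directly.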