I am trying to transform a vtkPolyData object by using vtkTransform.
However, the tutorials I found use the pipeline, for example: http://www.vtk.org/Wiki/VTK/Examples/Cxx/Filters/TransformPolyData
I am using VTK 6.1, which has removed the GetOutputPort method for stand-alone data objects, as mentioned here:
http://www.vtk.org/Wiki/VTK/VTK_6_Migration/Replacement_of_SetInput
I have tried to replace the line:
transformFilter->SetInputConnection()
with
transformFilter->SetInputData(polydata_object);
Unfortunately, the data was not read properly (perhaps because the pipeline was not set up correctly?).
Do you know how to correctly transform a stand-alone vtkPolyData without using pipeline in VTK6?
Thank you!
GetOutputPort was never a method on a data object. It was always a method on vtkAlgorithm, and it is still present on vtkAlgorithm (and its subclasses). Where is polydata_object coming from? If it's the output of a reader, you have two options:
// Update the reader to ensure it executes and reads the data.
reader->Update();
// now you can get access to the data object.
vtkSmartPointer<vtkPolyData> data = vtkPolyData::SafeDownCast(reader->GetOutputDataObject(0));
// pass that to the transform filter.
transformFilter->SetInputData(data.GetPointer());
transformFilter->Update();
The second option is to simply connect the pipeline:
transformFilter->SetInputConnection(reader->GetOutputPort());
The key, when not using the pipeline, is to ensure that the data has been updated/read before passing it to the transform filter.
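For completeness, here is a minimal sketch of the non-pipeline route (assuming polydata_object is already a fully populated vtkPolyData; the translation is just an example transform):
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>
#include <vtkTransform.h>
#include <vtkTransformPolyDataFilter.h>

vtkSmartPointer<vtkTransform> transform = vtkSmartPointer<vtkTransform>::New();
transform->Translate(1.0, 2.0, 3.0); // example transform

vtkSmartPointer<vtkTransformPolyDataFilter> transformFilter =
  vtkSmartPointer<vtkTransformPolyDataFilter>::New();
transformFilter->SetTransform(transform);
transformFilter->SetInputData(polydata_object); // VTK 6: pass the data object directly
transformFilter->Update();                      // execute the filter

vtkPolyData * transformed = transformFilter->GetOutput();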
I am trying to use ITK's OtsuMultipleThresholdsImageFilter in a project, but I get no output.
My aim is to make a simple interface between OpenCV and ITK.
To convert my data from OpenCV's Mat container to itk::Image I use ITK's bridge to OpenCV, and I was able to check that the data is sent to ITK properly.
I am even able to display it thanks to QuickView.
But when I set up the filter, inspired by this example, the object returned by the GetThresholds() method is empty.
Here is the code I wrote:
typedef itk::Image<uchar,2> image_type;
typedef itk::OtsuMultipleThresholdsImageFilter<image_type, image_type> filter_type;
image_type::Pointer img = itk::OpenCVImageBridge::CVMatToITKImage<image_type>(src);
image_type::SizeType size = img->GetLargestPossibleRegion().GetSize();
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter_type::ThresholdVectorType tmp = filter->GetThresholds();
std::cout<<"CHECK: "<<tmp.size()<<std::endl;
src is an OpenCV Mat of type CV_8U (single channel, CV_8UC1).
A fundamental concept in ITK is its pipeline architecture: you must connect the inputs and outputs, then update the pipeline.
You have connected the pipeline, but you have not executed it. You must call filter->Update().
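For example, reusing the typedefs and variables from the question, the corrected sequence would look roughly like this:
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter->Update(); // execute the pipeline; without this, GetThresholds() stays empty

filter_type::ThresholdVectorType thresholds = filter->GetThresholds();
std::cout << "CHECK: " << thresholds.size() << std::endl;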
Please read the ITK Software Guide to understand the fundamentals of ITK:
https://itk.org/ItkSoftwareGuide.pdf
I am using the nokiatech HEIF API (github.com/nokiatech/heif) to process HEIC files produced by the iOS betas.
I am able to get the tiles and metadata like rotation and dimensions, but I am unable to locate the capture date of the image. I found some timestamp functions, but they complain about "Forced FPS not set for meta context", which leads me to think these functions are related to tracks and not items.
Any help would be appreciated.
EDIT:
So there is a typo in the documentation for getReferencedToItemListByType (and getReferencedFromItemListByType): it says it takes "cdcs" as the referenceType parameter. It is of course "cdsc" (content describes).
So to get the metadata blob from a still image, as of now you can do the following:
ImageFileReaderInterface::IdVector gridItemIds;
reader.getItemListByType(contextId, "grid", gridItemIds);
ImageFileReaderInterface::IdVector cdscItemIds;
reader.getReferencedToItemListByType(contextId, gridItemIds.at(0), "cdsc", cdscItemIds);
ImageFileReaderInterface::DataVector data;
reader.getItemData(contextId, cdscItemIds.at(0), data);
Then you need to decode the Exif data. You can easily use the ExifTool CLI or an API like Exiv2.
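As a rough sketch only (assuming the blob is, or contains, a plain Exif/TIFF payload; the HEIF item data may start with a small header offset that you have to skip first), reading the capture date with Exiv2 could look something like this:
#include <exiv2/exiv2.hpp>
#include <iostream>

// 'data' is the ImageFileReaderInterface::DataVector filled above.
auto image = Exiv2::ImageFactory::open(
    reinterpret_cast<const Exiv2::byte *>(data.data()),
    static_cast<long>(data.size()));
image->readMetadata();

Exiv2::ExifData & exif = image->exifData();
auto pos = exif.findKey(Exiv2::ExifKey("Exif.Photo.DateTimeOriginal"));
if (pos != exif.end())
    std::cout << "Capture date: " << pos->toString() << std::endl;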
I am using a sklearn.pipeline.Pipeline object for my clustering.
pipe = sklearn.pipeline.Pipeline([('transformer1', transformer1),
                                  ('transformer2', transformer2),
                                  ('clusterer', clusterer)])
Then I am evaluating the result by using the silhouette score.
sil = sklearn.metrics.silhouette_score(X, y)
I'm wondering how I can get X, or the transformed data, from the pipeline, as it only returns clusterer.fit_predict(X).
I understand that I can do this by just splitting the pipeline, as in
pipe = sklearn.pipeline.Pipeline([('transformer1', transformer1),
                                  ('transformer2', transformer2)])
X = pipe.fit_transform(data)
res = clusterer.fit_predict(X)
sil = sklearn.metrics.silhouette_score(X, res)
but I would like to just do it all in one pipeline.
If you want to both fit and transform the data on the intermediate steps of the pipeline, then it makes no sense to reuse the same pipeline; it is better to use a new one as you specified, because calling fit() will forget all previously learnt data.
However, if you only want to transform() and see the intermediate data on an already fitted pipeline, then it is possible by accessing the named_steps attribute.
new_pipe = sklearn.pipeline.Pipeline([('transformer1', old_pipe.named_steps['transformer1']),
                                      ('transformer2', old_pipe.named_steps['transformer2'])])
Or directly using the inner variable steps, like:
transformer_steps = old_pipe.steps
new_pipe = sklearn.pipeline.Pipeline([('transformer1', transformer_steps[0][1]),
                                      ('transformer2', transformer_steps[1][1])])
And then calling new_pipe.transform().
Update:
If you have version 0.18 or above, then you can set the non-required estimator inside the pipeline to None to get the result from the same pipeline. It's discussed in this issue on the scikit-learn GitHub.
Usage of the above in your case:
pipe.set_params(clusterer=None)
pipe.transform(df)
But be aware that you may want to store the fitted clusterer somewhere else before doing so; otherwise you will need to fit the whole pipeline again when you want to use that functionality.
This question is addressed to developers using C++ and the Nuke NDK.
Context: Assume a custom Op which implements the interfaces of DD::Image::NoIop and
DD::Image::Executable. The node iterates over a range of frames, extracting information at
each frame, which is stored in a custom data structure. A custom knob, which is a member
variable of the above Op (but invisible in the UI), handles the loading and saving
(serialization) of the data structure.
Now I want to exchange that data structure between Ops.
So far I have come up with the following ideas:
Expression linking
Knobs can share information (matrices, etc.) using expression linking.
Can this feature be exploited for custom data as well?
Serialization to image data
The custom data would be serialized and written into a (new) channel. A
node further down the processing tree could grab that and de-serialize
again. Of course, the channel must not be altered between serialization
and de-serialization or else ... this is a hack, I know, but, hey, any port
in a storm!
GeoOp + renderer
In cases where the custom data is purely point-based (which, unfortunately,
it isn't in my case), I could turn the above node into a 3D node and pass
point data to other 3D nodes. At some point a render node would be required
to come back to 2D.
Am I going in the correct direction with this? If not, what is a sensible
approach to make this data structure available to other nodes which rely on the
information contained in it?
This question has been answered on the Nuke-dev mailing list:
If you know the actual class of your Op's input, it's possible to cast the
input to that class type and access it directly. A simple example could be
this snippet below:
//! @file DownstreamOp.cpp
#include "UpstreamOp.h" // The Op that contains your custom data.
// ...
UpstreamOp * upstreamOp = dynamic_cast< UpstreamOp * >( input( 0 ) );
if ( upstreamOp )
{
YourCustomData * data = upstreamOp->getData();
// ...
}
// ...
UPDATE
Update with reference to a question that I received via email:
I am trying to do this exact same thing, pass custom data from one Iop
plugin to another.
But these two plugins are defined in different dso/dll files.
How did you get this to work ?
Short answer:
Compile your Ops into a single shared object.
Long answer:
Say
UpstreamOp.cpp
DownstreamOp.cpp
define the two Ops (with DownstreamOp depending on UpstreamOp).
In a first attempt I compiled the first plugin using only UpstreamOp.cpp,
as usual. For the second plugin I compiled both DownstreamOp.cpp and
UpstreamOp.cpp into that plugin.
Strangely enough that worked (on Linux; didn't test Windows).
However, by overriding
bool Op::test_input( int input, Op * op ) const;
things will break. Creating and saving a Comp using the above plugins still
works. But loading that same Comp again breaks the connection in the node graph
between UpstreamOp and DownstreamOp and it is no longer possible to connect
them again.
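For context, such an override typically relies on a dynamic_cast of the candidate input; a minimal sketch (assuming DownstreamOp derives from DD::Image::NoIop as described above) could be:
bool DownstreamOp::test_input( int input, DD::Image::Op * op ) const
{
  if ( input == 0 )
  {
    // Only accept an input that really is an instance of UpstreamOp.
    return dynamic_cast< UpstreamOp * >( op ) != nullptr;
  }
  return DD::Image::NoIop::test_input( input, op );
}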
My hypothesis is this: since both plugins contain symbols for UpstreamOp, it
depends on the load order of the plugins whether a node uses instances of UpstreamOp
from the first or from the second plugin. So, if UpstreamOp from the first plugin
is used, then any dynamic_cast in Op::test_input() will fail and the two Ops cannot
be connected anymore.
It is still surprising that Nuke would even bother to start at all with the above
configuration, since it can be rather picky about symbols from plugins, e.g. if they
are missing.
Anyway, to get around this problem I did the following:
compile both Ops into a single shared object, e.g. myplugins.so, and
add a TCL or Python script (init.py/menu.py) which instructs Nuke how to load
the Ops correctly.
An example of a TCL script can be found in the dev guide, and the instructions
for your menu.py could be something like this:
menu = nuke.menu( 'Nodes' ).addMenu( 'my-plugins' )
menu.addCommand('UpstreamOp', lambda: nuke.createNode('UpstreamOp'))
menu.addCommand('DownstreamOp', lambda: nuke.createNode('DownstreamOp'))
nuke.load('myplugins')
So far, this works reliably for us (on Linux & Windows, haven't tested Mac).
I'm trying to do the following trick:
I have an IDataObject* to be set into the clipboard, so I'm using OleSetClipboard() to set it into the clipboard.
I have another CLIPFORMAT I want to add to the clipboard, but I can't do it through OleSetClipboard() because the IDataObject* I receive does not implement the SetData() method. So, to overcome this limitation, I call OpenClipboard() with GetClipboardOwner(); this way, I can SetClipboardData() to the clipboard without calling EmptyClipboard() first.
Now, it all works well, but what happens is that OleGetClipboard() does not return the data I placed in the clipboard using SetClipboardData(), although I can retrieve it using GetClipboardData().
I can imagine why this happens (it just returns the IDataObject*), so I tried OleFlushClipboard() to delete the IDataObject* and then OleGetClipboard() again to let the OS rebuild a new IDataObject*, but it still didn't contain the CLIPFORMAT added by SetClipboardData().
Does anyone have any idea how to overcome this issue, or a different trick, or can even explain why it works this way? Thanks!
I just tried this (on Windows 7) and it appears to work but only cross-process:
In a different process to the clipboard owner, OleGetClipboard returns a data object that contains all of the formats (i.e. the original formats from the data object and the extra ones added to the clipboard).
In the same process, OleGetClipboard always returns a data object that does not contain the extra clipboard formats.
In both cases, calling OleFlushClipboard makes no difference.
Anyway, this doesn't seem like a terribly robust solution. What you can do instead is create your own data object that responds to the formats it knows about and delegates other formats to the original data object. The EnumFormatEtc method would combine formats from both objects, and so on. This article has the skeleton of a simple data object you could extend.
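To illustrate the delegation idea, here is a rough sketch only (not the article's code): a wrapper that serves one extra HGLOBAL-based format itself and forwards everything else to the original data object. The member names are illustrative, error handling is omitted, and a complete implementation would also make EnumFormatEtc return a merged enumerator instead of just delegating.
#include <windows.h>
#include <ole2.h>

class WrappingDataObject : public IDataObject
{
    LONG         m_refs;
    IDataObject *m_inner;       // the original data object (AddRef'd)
    CLIPFORMAT   m_extraFormat; // the extra format we want to expose
    HGLOBAL      m_extraData;   // its HGLOBAL payload, owned by this object

public:
    WrappingDataObject(IDataObject *inner, CLIPFORMAT cf, HGLOBAL data)
        : m_refs(1), m_inner(inner), m_extraFormat(cf), m_extraData(data)
    {
        m_inner->AddRef();
    }

    // --- IUnknown ---
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_IDataObject)
        {
            *ppv = static_cast<IDataObject *>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_refs); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG n = InterlockedDecrement(&m_refs);
        if (n == 0) { m_inner->Release(); delete this; }
        return n;
    }

    // --- IDataObject ---
    STDMETHODIMP GetData(FORMATETC *pfe, STGMEDIUM *pmed)
    {
        if (pfe->cfFormat == m_extraFormat && (pfe->tymed & TYMED_HGLOBAL))
        {
            // Serve our extra format; keep ownership of the HGLOBAL by
            // handing out ourselves as pUnkForRelease.
            pmed->tymed = TYMED_HGLOBAL;
            pmed->hGlobal = m_extraData;
            pmed->pUnkForRelease = static_cast<IDataObject *>(this);
            AddRef();
            return S_OK;
        }
        return m_inner->GetData(pfe, pmed); // delegate everything else
    }
    STDMETHODIMP QueryGetData(FORMATETC *pfe)
    {
        if (pfe->cfFormat == m_extraFormat && (pfe->tymed & TYMED_HGLOBAL))
            return S_OK;
        return m_inner->QueryGetData(pfe);
    }
    // The remaining methods simply delegate to the original object.
    STDMETHODIMP GetDataHere(FORMATETC *pfe, STGMEDIUM *pmed) { return m_inner->GetDataHere(pfe, pmed); }
    STDMETHODIMP GetCanonicalFormatEtc(FORMATETC *pin, FORMATETC *pout) { return m_inner->GetCanonicalFormatEtc(pin, pout); }
    STDMETHODIMP SetData(FORMATETC *pfe, STGMEDIUM *pmed, BOOL fRelease) { return m_inner->SetData(pfe, pmed, fRelease); }
    STDMETHODIMP EnumFormatEtc(DWORD dir, IEnumFORMATETC **ppEnum) { return m_inner->EnumFormatEtc(dir, ppEnum); }
    STDMETHODIMP DAdvise(FORMATETC *pfe, DWORD adv, IAdviseSink *sink, DWORD *pConn) { return m_inner->DAdvise(pfe, adv, sink, pConn); }
    STDMETHODIMP DUnadvise(DWORD conn) { return m_inner->DUnadvise(conn); }
    STDMETHODIMP EnumDAdvise(IEnumSTATDATA **ppEnum) { return m_inner->EnumDAdvise(ppEnum); }
};
You would then construct an instance of this wrapper around the original IDataObject plus your extra format and pass that to OleSetClipboard() instead of the original object.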