I am using the Nokia HEIF API (github.com/nokiatech/heif) to process HEIC files produced by the iOS betas.
I am able to get the tiles and metadata such as rotation and dimensions, but I cannot locate the capture date of the image. I found some timestamp functions, but they fail with "Forced FPS not set for meta context", which leads me to think these functions relate to tracks rather than items.
Any help would be appreciated.
EDIT:
So there is a typo in the documentation for getReferencedToItemListByType (and getReferencedFromItemListByType): it says the referenceType parameter is "cdcs". It is of course "cdsc" (content describes).
So to get the metadata blob from a still image, as of now you can do the following:
ImageFileReaderInterface::IdVector gridItemIds;
reader.getItemListByType(contextId, "grid", gridItemIds);
ImageFileReaderInterface::IdVector cdscItemIds;
reader.getReferencedToItemListByType(contextId, gridItemIds.at(0), "cdsc", cdscItemIds);
ImageFileReaderInterface::DataVector data;
reader.getItemData(contextId, cdscItemIds.at(0), data);
Then you need to decode the Exif data. You can use the ExifTool CLI or a library such as Exiv2.
I am new to OMNeT++ and I'm trying to implement a drone network whose nodes communicate with each other using direct messages.
I want to visualize my drone network with the 3D visualization in OMNeT++ using the OsgVisualizer in the inet.visualizer.scene package.
In the dronenetwork.ned file, I have used the IntegratedVisualizer and the OsgGeographicCoordinateSystem. Then in the omnetpp.ini file, the map file to be used is defined, so the map loading and the mobility of the drones work fine in the 3D visualization of the simulation run.
However, the message transmissions between the drones are not visualized in 3D, even though they are properly visualized in the 2D canvas mode.
I tried adding both NetworkNodeOsgVisualizer and NetworkConnectionOsgVisualizer to my drone module as visualization simple modules, and I have also marked the drones with the @networkNode and @networkConnection properties. But it still does not visualize the message transmissions.
Any help or hint regarding this would be highly appreciated.
The code used for the visualizations in the simple module drone is as follows:
import inet.visualizer.scene.NetworkNodeOsgVisualizer;
import inet.visualizer.scene.NetworkConnectionOsgVisualizer;

module drone
{
    parameters:
        @networkNode;
        @networkConnection;
    submodules:
        networkNodeOsgVisualizer: NetworkNodeOsgVisualizer {
            @display("p=207,50");
            displayModuleName = true;
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
        }
        networkConnectionOsgVisualizer: NetworkConnectionOsgVisualizer {
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
            displayNetworkConnections = true;
        }
}
Thank you
Message passing and direct message sending visualizations are special cases implemented automatically by Qtenv for the 2D (default) visualization only. You can add custom 2D message visualization (like the one in the aloha example). OMNeT++ does not provide any 3D visualization by default; all such code must be provided by the model (INET in this case). This is also true for any transient visualization. There is an example of this in the osg-earth OMNeT++ example, where communication between cows is visualized by inflating bubbles.
So you have to implement your own visualization effect. There is something in INET that is pretty close to what you want: DataLinkOsgVisualizer and PhysicalLinkOsgVisualizer, which flash an arrow when communication occurs on the data link or physical layer. This is not the same as message passing, but it is close enough. Or you can implement your own animation using these visualizers as a sample.
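If you go the DataLinkOsgVisualizer route, enabling it through the IntegratedVisualizer in omnetpp.ini might look like the sketch below. The `visualizer` module name and the parameter names are assumptions based on recent INET versions and may differ in yours:

```ini
# Hypothetical omnetpp.ini fragment -- names may vary between INET versions.
# Flash arrows in the OSG scene whenever a data-link-level frame is exchanged.
*.visualizer.osgVisualizer.dataLinkVisualizer.displayLinks = true
# Restrict the visualization to the packets you care about (all of them, here).
*.visualizer.osgVisualizer.dataLinkVisualizer.packetFilter = "*"
```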
I have some code that changes an image's colorspace from RGB to a generic CMYK profile. I would like to be able to use an ICC profile to convert the image to a CMYK colorspace. There is a way to do this in Photoshop, but the process takes too much of a user's time when dealing with hundreds of images, so I am trying to create an AppleScript droplet that will do it for them.
I have already looked at the page for NSColorSpace, and it looks like there is a way to do this. I just have no idea how to convert the Objective-C into AppleScriptObjC. Here are the two references to ICC profiles in NSColorSpace:
init?(iccProfileData: Data)
Initializes and returns an NSColorSpace object given an ICC profile.
var iccProfileData: Data?
The ICC profile data from which the receiver was created.
Here is the code I have, thanks to Shane at MacScripter.net:
set theImage to (current application's NSImage's alloc()'s initWithContentsOfURL:theInput)
set imageRep to (theImage's representations()'s objectAtIndex:0)
set targetSpace to current application's NSColorSpace's genericCMYKColorSpace()
set bitmapRep to (imageRep's bitmapImageRepByConvertingToColorSpace:targetSpace renderingIntent:(current application's NSColorRenderingIntentPerceptual))
set theProps to (current application's NSDictionary's dictionaryWithObjects:{1.0, true} forKeys:{current application's NSImageCompressionFactor, current application's NSImageProgressive})
set jpegData to (bitmapRep's representationUsingType:(current application's NSJPEGFileType) |properties|:theProps)
set colorSpace to bitmapRep's colorSpaceName() as text
(jpegData's writeToURL:theOutput atomically:true)
I just need to figure out how to include the ICC profile as part of the CMYK conversion process, which I believe happens on this line of code:
set targetSpace to current application's NSColorSpace's genericCMYKColorSpace()
Can anyone give me some guidance on this?
Thanks!
You are looking at the NSColorSpace documentation in Swift. AppleScriptObjC is an Objective-C bridge, so it makes more sense to view the documentation in Objective-C (the language can be set in the page header), where your snippet becomes
set iccData to (current application's NSData's dataWithContentsOfURL:iccProfileURL)
set targetSpace to (current application's NSColorSpace's alloc()'s initWithICCProfileData:iccData)
where iccProfileURL is a file URL pointing to the ICC profile you are going to use.
I am trying to use ITK's OtsuMultipleThresholdsImageFilter in a project, but I get no output.
My aim is to make a simple interface between OpenCV and ITK.
To convert my data from OpenCV's Mat container to itk::Image, I use ITK's bridge to OpenCV, and I could check that the data is properly sent to ITK.
I am even able to display it thanks to QuickView.
But when I set up the filter, inspired by this example, the object returned by the GetThresholds() method is empty.
Here is the code I wrote:
typedef itk::Image<uchar,2> image_type;
typedef itk::OtsuMultipleThresholdsImageFilter<image_type, image_type> filter_type;
image_type::Pointer img = itk::OpenCVImageBridge::CVMatToITKImage<image_type>(src);
image_type::SizeType size = img->GetLargestPossibleRegion().GetSize();
filter_type::Pointer filter = filter_type::New();
filter->SetInput(img);
filter->SetNumberOfHistogramBins(256);
filter->SetNumberOfThresholds(K);
filter_type::ThresholdVectorType tmp = filter->GetThresholds();
std::cout<<"CHECK: "<<tmp.size()<<std::endl;
src is an OpenCV Mat of type CV_8UC1.
A fundamental concept in ITK is its pipeline architecture. You must connect the inputs and outputs and then update the pipeline.
You have connected the pipeline, but you have not executed it. You must call filter->Update() before calling filter->GetThresholds().
Please read the ITK Software Guide to understand the fundamentals of ITK:
https://itk.org/ItkSoftwareGuide.pdf
Code:
var bg = CCSprite(imageNamed:"Background.png")
Images used in Resource:
Background-hd.png
Background-ipad.png
Background-ipadhd.png
Background-iphone5hd.png
Background.png
On every device, Background.png (320x480) is used and the other images are ignored. How do I fix this?
Retina display is not enabled. How do I enable the Retina display in v3?
Here is a working sample that reproduces the problem.
I'm not a really advanced user of SpriteBuilder, but I have some thoughts about your issue.
Since you are using SpriteBuilder to start the project, it configures your CCFileUtils to distinguish between different devices' resources by looking at folders, not filename suffixes. In your app delegate you can see this line:
[CCBReader configureCCFileUtils];
Going into this method, you can see that the search mode for file utils is set to CCFileUtilsSearchModeDirectory:
sharedFileUtils.searchMode = CCFileUtilsSearchModeDirectory;
So you need to use the publish folders and copy your background image into each of them with the same name (Background.png) but a different resolution for each device.
You don't need to use image suffixes in SpriteBuilder at all.
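For reference, the per-device resource folders SpriteBuilder publishes into typically look something like the sketch below. The folder names are assumptions based on SpriteBuilder's default iOS publishing setup and may differ in your version:

```
Published-iOS/
  resources-phone/    Background.png   (non-retina iPhone, 320x480)
  resources-phonehd/  Background.png   (retina iPhone)
  resources-tablet/   Background.png   (iPad)
  resources-tablethd/ Background.png   (retina iPad)
```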
Finally, I updated Cocos2d, and now the -hd and -ipad files are used. I just changed one line:
sharedFileUtils.searchMode = CCFileUtilsSearchModeSuffix;
I am trying to transform a vtkPolyData object using vtkTransform.
However, the tutorials I found use the pipeline, for example: http://www.vtk.org/Wiki/VTK/Examples/Cxx/Filters/TransformPolyData
However, I am using VTK 6.1, which has removed the GetOutputPort method from stand-alone data objects, as mentioned here:
http://www.vtk.org/Wiki/VTK/VTK_6_Migration/Replacement_of_SetInput
I have tried to replace the line:
transformFilter->SetInputConnection()
with
transformFilter->SetInputData(polydata_object);
Unfortunately, the data was not read properly (because the pipeline was not set up correctly?).
Do you know how to correctly transform a stand-alone vtkPolyData without using the pipeline in VTK 6?
Thank you!
GetOutputPort was never a method on a data object. It has always been a method on vtkAlgorithm, and it is still present on vtkAlgorithm (and its subclasses). Where is polydata_object coming from? If it is the output of a reader, you have two options:
// Update the reader to ensure it executes and reads the data.
reader->UpdatePipeline();
// now you can get access to the data object.
vtkSmartPointer<vtkPolyData> data = vtkPolyData::SafeDownCast(reader->GetOutputDataObject(0));
// pass that to the transform filter.
transformFilter->SetInputData(data.GetPointer());
transformFilter->Update();
Second option is to simply connect the pipeline:
transformFilter->SetInputConnection(reader->GetOutputPort());
The key, when not using the pipeline, is to ensure that the data has been updated/read before passing it to the transform filter.