Loading a .dae file as a scene element
This code works, loading the file as the scene:
let scene = SCNScene(named: "art.scnassets/base-wall-tile_sample.dae")!
This code, loading the file as SCNGeometry, doesn't:
let url = Bundle.main.url(forResource: "art.scnassets/base-wall-tile_sample", withExtension: "dae")
let source = SCNSceneSource(url: url! )
let geo = source!.entryWithIdentifier("Geo", withClass: SCNGeometry.self)!
url and source come out fine, but it crashes trying to produce geo with a "bad instruction" error.
This code, like several examples offered on the web (e.g. "load a collada (dae) file into SCNNode (Swift - SceneKit)"), was written in Swift 2. I had to juggle it into Swift 3, and something seems to have been lost in translation. Can someone tell me how to do this correctly?
A .dae file is always loaded as an SCNScene. You need to name the node that contains the geometry you want to add. (The crash above is most likely the force-unwrap of entryWithIdentifier returning nil, because the file contains no entry named "Geo".)
Then you can load the scene, filter it for the node with that name, and add it to your scene:
func addNode(named nodeName: String, fromSceneNamed sceneName: String, to scene: SCNScene) {
    if let loadedScene = SCNScene(named: sceneName),
       let node = loadedScene.rootNode.childNode(withName: nodeName, recursively: true) {
        scene.rootNode.addChildNode(node)
    }
}
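A call might then look like this (the file and node names here are just placeholders for whatever your .dae actually contains):
addNode(named: "shipMesh", fromSceneNamed: "art.scnassets/ship.dae", to: scene)
Alternatively, if you want everything from the loaded scene rather than one named node, you can re-parent all of its child nodes: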
guard let shipScene = SCNScene(named: "ship.dae") else { return }
let shipNode = SCNNode()
let shipSceneChildNodes = shipScene.rootNode.childNodes
for childNode in shipSceneChildNodes {
    shipNode.addChildNode(childNode)
}
node.addChildNode(shipNode) // "node" is whatever node in your own scene the ship should be attached to
I am working on an off-screen rendering program and I use the crate glium to do this. I followed the example of screenshot.rs and it worked well.
Then I made some changes.
The original code was:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let wb = glutin::WindowBuilder::new().with_visible(true);
    let cb = glutin::ContextBuilder::new();
    let display = glium::Display::new(wb, cb, &event_loop).unwrap();
    // building the vertex buffer, which contains all the vertices that we will draw
I grouped this code into a function:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let display = build_display((128,128), &event_loop);
    // building the vertex buffer, which contains all the vertices that we will draw

pub fn build_display(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::Display {
    let version = parse_version(); // this will return `OpenGL 3.3`
    let wb = glutin::WindowBuilder::new()
        .with_visibility(false)
        .with_dimensions(glutin::dpi::LogicalSize::from(size));
    let cb = glutin::ContextBuilder::new()
        .with_gl(version);
    glium::Display::new(wb, cb, &event_loop).unwrap()
}
After this modification, the program still worked well, so I went on to add a headless context:
fn main() {
    // building the display, ie. the main object
    let event_loop = glutin::EventsLoop::new();
    let display = build_display_headless((128,128), &event_loop);
    // building the vertex buffer, which contains all the vertices that we will draw

pub fn build_display_headless(size: (u32, u32), event_loop: &glutin::EventsLoop) -> glium::HeadlessRenderer {
    let version = parse_version(); // this will return `OpenGL 3.3`
    let ctx = glutin::ContextBuilder::new()
        .with_gl(version)
        .build_headless(&event_loop, glutin::dpi::PhysicalSize::from(size))
        .expect("1");
    //let ctx = unsafe { ctx.make_current().expect("3") };
    glium::HeadlessRenderer::new(ctx).expect("4")
}
But this time the program did not work. There was no panic while it ran, but the output image was completely black, and its size was not 128x128 but 800x600.
I have tried removing libEGL.dll so that, according to the documentation of the glutin crate, .build_headless falls back to building a hidden window, just as my build_display function does. However, that failed too. What can cause this?
I'm trying to save data in an image's metadata in iOS/Swift 3. It does not appear that Core Graphics will let you save custom tags (is that true?), so I JSON-encoded my dictionary and put the result as a string into the TIFF tag's ImageDescription. When I load the image and get the metadata back...
if let data = NSData(contentsOfFile: oneURL.path), let imgSrc = CGImageSourceCreateWithData(data, options as CFDictionary) {
    let allmeta = CGImageSourceCopyPropertiesAtIndex(imgSrc, 0, options as CFDictionary) as? [String : AnyObject]
allmeta contains (among other things):
▿ 0 : 2 elements
- key : ImageDescription
- value : {
"CameraOrientationW" : 0.1061191,
"CameraOrientationZ" : -0.01305595,
"CameraOrientationX" : 0.01319851,
"CameraOrientationY" : 0.9941801
}
Which has the JSON data, yay! So now I simply have to get the TIFF metadata, get the ImageDescription from that, and de-JSON it...
let tiffmeta = allmeta?["{TIFF}"]
if let tiffMeta = tiffmeta {
    let descmeta = tiffMeta["ImageDescription"]
    var descdata = descmeta?.data(usingEncoding: NSUTF8StringEncoding)!
    let descdict = try? JSONSerialization.jsonObject(with: descdata, options: [])
But this will not compile. Xcode puts an error on the descdata line:
Value of type 'MDLMaterialProperty??' has no member 'data'
I tried casting it to String on the line above, at which point it complains I didn't unwrap the optional MDLMaterialProperty.
Am I missing something obvious here?
So, just to close this one: this appears to be a problem in the compiler. I made a number of minor changes to the syntax, nothing that had any actual effect on the code, and suddenly it decided the object was indeed a String.
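For reference, a minimal sketch of the kind of explicit casts that work around the inference problem (assuming the description really is a UTF-8 JSON string, and reusing the allmeta dictionary from above) looks like this:
if let tiffMeta = allmeta?["{TIFF}"] as? [String: Any],
   let desc = tiffMeta["ImageDescription"] as? String,
   let descData = desc.data(using: .utf8),
   let descDict = (try? JSONSerialization.jsonObject(with: descData, options: [])) as? [String: Any] {
    // descDict now holds the CameraOrientation values that were JSON-encoded into ImageDescription
    print(descDict["CameraOrientationW"] ?? "missing")
}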
I'm new to video processing, using Swift 3. I am trying to merge multiple videos with AVAssetExportSession, using AVVideoCompositionCoreAnimationTool to add an overlay to the final video.
The problem is that sometimes the final video is perfect, but sometimes it just gives me a black video with sound only, even though I didn't change anything :(
If anybody has run into the same problem, please give me an idea. Thanks!
let mixComposition: AVMutableComposition = AVMutableComposition()
//Add assets here
let mainComposition: AVMutableVideoComposition = AVMutableVideoComposition(propertiesOf: mixComposition)
mainComposition.frameDuration = CMTimeMake(1, 30)
mainComposition.renderSize = renderSize
mainComposition.renderScale = 1.0
mainComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
mainComposition.instructions = instructions
let exportSession: AVAssetExportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
exportSession.videoComposition = mainComposition
exportSession.audioMix = audioMix
exportSession.outputURL = outputURL
exportSession.outputFileType = AVFileTypeMPEG4
exportSession.shouldOptimizeForNetworkUse = true
exportSession.exportAsynchronously {
    // Ended here
}
I have a shape file (Sample.shp) along with two other files (Sample.shx and Sample.dbf), which has geometry (polygons) defined for 15 pincodes of Bombay.
I am able to view the .shp file using the Quickstart tutorial.
File file = JFileDataStoreChooser.showOpenFile("shp", null);
if (file == null) {
    return;
}
FileDataStore store = FileDataStoreFinder.getDataStore(file);
SimpleFeatureSource featureSource = store.getFeatureSource();
// Create a map content and add our shapefile to it
MapContent map = new MapContent();
map.setTitle("Quickstart");
Style style = SLD.createSimpleStyle(featureSource.getSchema());
Layer layer = new FeatureLayer(featureSource, style);
map.addLayer(layer);
// Now display the map
JMapFrame.showMap(map);
Now I want to convert the geometry of these 15 pincodes to 15 Geometry/Polygon objects so that I can use Geometry.contains() to find if a point falls in a particular Geometry/Polygon.
I tried:
ShapefileReader r = new ShapefileReader(new ShpFiles(file), true, false, geometryFactory);
System.out.println(r.getCount(0)); // returns 51
System.out.println(r.hasNext()); // returns false
Any help is really appreciated
In fact you don't need to extract the geometries yourself - just create a filter and iterate through the filtered collection. In your case there will probably be only one feature returned.
Filter pointInPolygon = CQL.toFilter("CONTAINS(the_geom, POINT(1 2))");
SimpleFeatureCollection features = source.getFeatures(pointInPolygon);
SimpleFeatureIterator iterator = features.features();
try {
    while (iterator.hasNext()) {
        SimpleFeature feature = iterator.next();
        Geometry geom = (Geometry) feature.getDefaultGeometry();
        /* ... do something here */
    }
} finally {
    iterator.close(); // IMPORTANT
}
For a full discussion of querying datastores see the Query Lab.
I used the above solution and tried a few combinations. I just changed "THE_GEOM" to lower case, and the POINT coordinates are in (lon lat) order:
Filter filter = CQL.toFilter("CONTAINS(the_geom, POINT(72.82916 18.942883))");
SimpleFeatureCollection collection = featureSource.getFeatures(filter);
SimpleFeatureIterator iterator = collection.features();
try {
    while (iterator.hasNext()) {
        SimpleFeature feature = iterator.next();
        // ...
    }
} finally {
    iterator.close(); // IMPORTANT
}
I am trying to implement screenshot functionality in a WinRT app that shows video via a MediaElement. I have the following code; it saves a screenshot that's the size of the MediaElement, but the image is empty (completely black). I tried with various types of media files. If I do Win Key + Vol Down on a Surface RT, the screenshot includes the media frame content, but if I use the following code, it's blackness all around :(
private async Task SaveCurrentFrame()
{
    RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap();
    await renderTargetBitmap.RenderAsync(Player);
    var pixelBuffer = await renderTargetBitmap.GetPixelsAsync();

    MultimediaItem currentItem = (MultimediaItem)this.DefaultViewModel["Group"];
    StorageFolder currentFolder = Windows.Storage.ApplicationData.Current.LocalFolder;
    var saveFile = await currentFolder.CreateFileAsync(currentItem.UniqueId + ".png", CreationCollisionOption.ReplaceExisting);
    if (saveFile == null)
        return;

    // Encode the image to the selected file on disk
    using (var fileStream = await saveFile.OpenAsync(FileAccessMode.ReadWrite))
    {
        var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, fileStream);
        encoder.SetPixelData(
            BitmapPixelFormat.Bgra8,
            BitmapAlphaMode.Ignore,
            (uint)renderTargetBitmap.PixelWidth,
            (uint)renderTargetBitmap.PixelHeight,
            DisplayInformation.GetForCurrentView().LogicalDpi,
            DisplayInformation.GetForCurrentView().LogicalDpi,
            pixelBuffer.ToArray());
        await encoder.FlushAsync();
    }
}
Here MultimediaItem is my view model class that, among other things, has a UniqueId property that's a string. 'Player' is the name of the MediaElement.
Is there anything wrong with the code, or is this approach wrong and I have to get into the trenches with C++?
P.S. I am interested in the WinRT API only.
Update 1: It looks like RenderTargetBitmap doesn't support this; the MSDN documentation clarifies it: http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.xaml.media.imaging.rendertargetbitmap
I'll appreciate any pointers on how to do it using DirectX and C++. This is a major task for me, so I'll crack this one way or the other and report back with the solution.
Yes, it is possible. It's a little bit tricky, but it works well.
You don't use the MediaElement, but the StorageFile itself.
You need to create a WriteableBitmap with the help of the Windows.Media.Editing namespace.
This works in UWP (Windows 10).
Here is a complete example with file picking, getting the video resolution, and saving the image to the Pictures Library:
TimeSpan timeOfFrame = new TimeSpan(0, 0, 1); // one second in

//pick mp4 file
var picker = new Windows.Storage.Pickers.FileOpenPicker();
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
picker.FileTypeFilter.Add(".mp4");
StorageFile pickedFile = await picker.PickSingleFileAsync();
if (pickedFile == null)
{
    return;
}

//Get video resolution
List<string> encodingPropertiesToRetrieve = new List<string>();
encodingPropertiesToRetrieve.Add("System.Video.FrameHeight");
encodingPropertiesToRetrieve.Add("System.Video.FrameWidth");
IDictionary<string, object> encodingProperties = await pickedFile.Properties.RetrievePropertiesAsync(encodingPropertiesToRetrieve);
uint frameHeight = (uint)encodingProperties["System.Video.FrameHeight"];
uint frameWidth = (uint)encodingProperties["System.Video.FrameWidth"];

//Use Windows.Media.Editing to get ImageStream
var clip = await MediaClip.CreateFromFileAsync(pickedFile);
var composition = new MediaComposition();
composition.Clips.Add(clip);
var imageStream = await composition.GetThumbnailAsync(timeOfFrame, (int)frameWidth, (int)frameHeight, VideoFramePrecision.NearestFrame);

//generate bitmap
var writableBitmap = new WriteableBitmap((int)frameWidth, (int)frameHeight);
writableBitmap.SetSource(imageStream);

//generate some random name for file in PicturesLibrary
var saveAsTarget = await KnownFolders.PicturesLibrary.CreateFileAsync("IMG" + Guid.NewGuid().ToString().Substring(0, 4) + ".jpg");

//get stream from bitmap
Stream stream = writableBitmap.PixelBuffer.AsStream();
byte[] pixels = new byte[(uint)stream.Length];
await stream.ReadAsync(pixels, 0, pixels.Length);

using (var writeStream = await saveAsTarget.OpenAsync(FileAccessMode.ReadWrite))
{
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, writeStream);
    encoder.SetPixelData(
        BitmapPixelFormat.Bgra8,
        BitmapAlphaMode.Premultiplied,
        (uint)writableBitmap.PixelWidth,
        (uint)writableBitmap.PixelHeight,
        96,
        96,
        pixels);
    await encoder.FlushAsync();
    using (var outputStream = writeStream.GetOutputStreamAt(0))
    {
        await outputStream.FlushAsync();
    }
}
Yeah... I spent a lot of hours on this.
OK, I have managed to get taking a snapshot from a MediaElement on a button press to work.
I am passing a MediaStreamSource object to the MediaElement using the SetMediaStreamSource method. MediaStreamSource has a SampleRequested event which is fired basically every time a new frame is drawn. Then I use a boolean to control when to create the bitmap:
private async void MediaStream_SampleRequested(MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args)
{
    if (!takeSnapshot)
    {
        return;
    }
    takeSnapshot = false;
    Task.Run(() => DecodeAndSaveVideoFrame(args.Request.Sample));
}
After that, what is left is to decode the compressed image and convert it to a WriteableBitmap. The image is (or at least was in my case) in a YUV format. You can get the byte array using
byte[] yvuArray = sample.Buffer.ToArray();
and then take the data from this array and convert it to RGB. Unfortunately I cannot post the entire code, but I'm going to give you a few more hints:
YUV to RGB wiki - here you have a wiki page describing how YUV to RGB conversion works (the basic per-pixel formulas are also sketched after these hints).
Here I found a Python project whose solution I adapted (and it works perfectly). To be more precise, you have to analyze how the NV12Converter method works.
The last thing is to set the takeSnapshot boolean to true after pressing a button or doing some other activity :).
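As a rough guide: in NV12 the buffer starts with a full-resolution Y plane, followed by an interleaved UV plane at half resolution, so each 2x2 block of Y samples shares one U and one V value. One common full-range BT.601 mapping from YUV to RGB is then:
R = Y + 1.402 * (V - 128)
G = Y - 0.344 * (U - 128) - 0.714 * (V - 128)
B = Y + 1.772 * (U - 128)
The exact coefficients and the 128 offset depend on the color matrix and range your stream actually uses, so treat these as a starting point and clamp the results to the 0-255 range.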