How to find attachments in a layout board using Cadence SKILL

Is there any way, using SKILL, to find all attachments in the current layout board with axlGetAttachment, display the results to the user, and label each one's data type?

Related

How to distinguish data points with the same label by specifying a marker/symbol per data point when visualising with the Embedding Projector

I have a trained network that gives me features of 2048 dimensions. I want to visualize them with a t-SNE plot through the TensorBoard Embedding Projector. Each of the data points belongs to one of 10 different labels, and I can make that distinction by assigning a different color to each label. The catch is that, within one label, the features belong to 2 categories: one is the RGB image feature and the other is the infrared image feature. So I need to show a different marker, like 'x' or 'o' (as is available in a scatter plot), for RGB versus infrared features.
However, I couldn't find any way to specify a marker in the Embedding Projector documentation. Currently I am using PyTorch's embedding projector via SummaryWriter, but I don't mind switching to TensorFlow if it has such functionality.
Kindly help me if anybody knows whether this can be done in the TensorBoard Embedding Projector.
Thanks in advance

SceneKit: How to get all of a node's materials?

Ok, from what I understand, materials can be created for a .dae or any 3D model used as an SCNNode in Xcode's model editor.
The topmost material gets applied automatically and all is well. My problem is that I want to programmatically switch between these materials throughout my game.
I tried to get an array of these materials by doing:
node.geometry?.materials
However, this only returns that first material. I've tried everything but can't find a way to get the other materials and switch to them. Right now I am trying:
childNode.geometry?.materials = [(childNode.geometry?.material(named: "test"))!]
//childNode is the node
where "test" was that second material, but it finds it as nil. How can I programmatically switch between multiple materials?
If the material is not actually assigned to one of the material slots (such as diffuse) it’s not part of the geometry either.
You could assign the second material to another slot and then reset its color in code after you read the material into a property to be used later.
Another option, which I used myself, is assigning multiple materials to different faces of the same model (in third-party 3D software). Export it as .dae and add it to Xcode, which then automatically divides the geometry into separate elements, each with its own material. You can then adjust those in Xcode and iterate over them in the same way you are trying to do.
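A minimal Swift sketch of that last approach, assuming the exported geometry ends up with one element per material and that a material really is named "test" in Xcode's editor (both are assumptions about your asset):

import SceneKit

// Sketch only: assumes the .dae was exported with a material per face group,
// so Xcode splits the geometry into elements and `materials` contains one
// entry per element. Material names here are placeholders for your own.
final class MaterialSwitcher {
    private var cached: [SCNMaterial] = []

    // Grab every material the geometry actually carries, once, at load time.
    func cacheMaterials(of node: SCNNode) {
        cached = node.geometry?.materials ?? []
    }

    // Re-apply a cached material by name, e.g. apply(named: "test", to: childNode).
    func apply(named name: String, to node: SCNNode) {
        guard let material = cached.first(where: { $0.name == name }) else { return }
        node.geometry?.materials = [material]
    }
}

Caching the array up front also means you can still reach the other materials after you start overwriting geometry?.materials.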

Vision Framework with ARKit and CoreML

While I have been researching best practices and experimenting with multiple options for an ongoing project (e.g. a Unity3D iOS project using Vuforia with native integration, or extracting frames with AVFoundation and then passing the images through cloud-based image recognition), I have come to the conclusion that I would like to use ARKit, the Vision framework, and CoreML; let me explain.
I am wondering how I would be able to capture ARFrames and use the Vision framework to detect and track a given object using a CoreML model.
Additionally, it would be nice to have a bounding box once the object is recognized with the ability to add an AR object upon a gesture touch but this is something that could be implemented after getting the solid project down.
This is undoubtedly possible, but I am unsure of how to pass the ARFrames to CoreML via Vision for processing.
Any ideas?
Update: Apple now has a sample code project that does some of these steps. Read on for those you still need to figure out yourself...
Just about all of the pieces are there for what you want to do... you mostly just need to put them together.
You obtain ARFrames either by periodically polling the ARSession for its currentFrame or by having them pushed to your session delegate. (If you're building your own renderer, that's ARSessionDelegate; if you're working with ARSCNView or ARSKView, their delegate callbacks refer to the view, so you can work back from there to the session to get the currentFrame that led to the callback.)
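For instance, a minimal delegate-based sketch (everything except the ARKit API names is a placeholder):

import ARKit

// Sketch: receive each new ARFrame as it is produced instead of polling currentFrame.
final class FrameReceiver: NSObject, ARSessionDelegate {
    func start(_ session: ARSession) {
        session.delegate = self
        session.run(ARWorldTrackingConfiguration())
    }

    // Called once per captured frame; frame.capturedImage is the CVPixelBuffer
    // you hand to Vision in the next step.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer = frame.capturedImage
        _ = pixelBuffer // pass this to your Vision request handler
    }
}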
ARFrame provides the current capturedImage in the form of a CVPixelBuffer.
You pass images to Vision for processing using either the VNImageRequestHandler or VNSequenceRequestHandler class, both of which have methods that take a CVPixelBuffer as an input image to process.
You use the image request handler if you want to perform a request that uses a single image — like finding rectangles or QR codes or faces, or using a Core ML model to identify the image.
You use the sequence request handler to perform requests that involve analyzing changes between multiple images, like tracking an object's movement after you've identified it.
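A rough sketch of both paths; the seed observation for tracking is a placeholder you would obtain from an earlier detection request:

import Vision
import CoreVideo

// Sketch of the two handler styles described above.
final class VisionBridge {
    // Keep one sequence handler alive across frames for tracking.
    private let sequenceHandler = VNSequenceRequestHandler()

    // Single-image path: one handler per frame (rectangles, QR codes, faces,
    // or a Core ML request all work the same way).
    func runSingleImageRequest(on pixelBuffer: CVPixelBuffer) {
        let request = VNDetectRectanglesRequest { request, _ in
            _ = request.results as? [VNRectangleObservation]
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }

    // Sequence path: track an observation identified in an earlier frame.
    func track(_ seed: VNDetectedObjectObservation, in pixelBuffer: CVPixelBuffer) {
        let tracking = VNTrackObjectRequest(detectedObjectObservation: seed)
        try? sequenceHandler.perform([tracking], on: pixelBuffer)
    }
}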
You can find general code for passing images to Vision + Core ML attached to the WWDC17 session on Vision, and if you watch that session the live demos also include passing CVPixelBuffers to Vision. (They get pixel buffers from AVCapture in that demo, but if you're getting buffers from ARKit the Vision part is the same.)
One sticking point you're likely to have is identifying/locating objects. Most "object recognition" models people use with Core ML + Vision (including those that Apple provides pre-converted versions of on their ML developer page) are scene classifiers. That is, they look at an image and say, "this is a picture of a (thing)," not something like "there is a (thing) in this picture, located at (bounding box)".
Vision provides easy API for dealing with classifiers — your request's results array is filled in with VNClassificationObservation objects that tell you what the scene is (or "probably is", with a confidence rating).
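For example, a hedged sketch of running such a classifier on one frame, where MyClassifier stands in for whatever .mlmodel class Xcode generates for your model:

import Vision
import CoreML
import CoreVideo

// Sketch: classify one frame with a Core ML model and read the top labels.
// MyClassifier is a placeholder for your own generated model class.
func classify(pixelBuffer: CVPixelBuffer) throws {
    let model = try VNCoreMLModel(for: MyClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let observations = request.results as? [VNClassificationObservation] else { return }
        for top in observations.prefix(3) {
            print("\(top.identifier): \(top.confidence)")
        }
    }
    try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}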
If you find or train a model that both identifies and locates objects — and for that part, I must stress, the ball is in your court — using Vision with it will result in VNCoreMLFeatureValueObservation objects. Those are sort of like arbitrary key-value pairs, so exactly how you identify an object from those depends on how you structure and label the outputs from your model.
If you're dealing with something that Vision already knows how to recognize, instead of using your own model — stuff like faces and QR codes — you can get the locations of those in the image frame with Vision's API.
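A short sketch of that built-in path, using barcode detection (which covers QR codes) and reading the normalized bounding boxes Vision returns:

import Vision
import CoreVideo

// Sketch: detect QR/barcodes with Vision's built-in request and read each
// hit's bounding box (normalized 0...1 coordinates, lower-left origin).
func detectCodes(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectBarcodesRequest { request, _ in
        guard let codes = request.results as? [VNBarcodeObservation] else { return }
        for code in codes {
            print(code.payloadStringValue ?? "<no payload>", code.boundingBox)
        }
    }
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}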
If after locating an object in the 2D image, you want to display 3D content associated with it in AR (or display 2D content, but with said content positioned in 3D with ARKit), you'll need to hit test those 2D image points against the 3D world.
Once you get to this step, placing AR content with a hit test is something that's already pretty well covered elsewhere, both by Apple and the community.
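As a sketch of that last step, assuming an ARSCNView and a Vision bounding box in normalized image coordinates; the coordinate conversion below is deliberately simplified:

import ARKit
import UIKit

// Sketch: take the center of a Vision bounding box (normalized, lower-left
// origin), convert it to a view point, hit test against ARKit's world, and
// drop an anchor there. For exact mapping between image and view coordinates,
// use ARFrame.displayTransform(for:viewportSize:) instead of this shortcut.
func placeAnchor(for boundingBox: CGRect, in sceneView: ARSCNView) {
    let viewSize = sceneView.bounds.size
    let center = CGPoint(x: boundingBox.midX * viewSize.width,
                         y: (1 - boundingBox.midY) * viewSize.height) // flip Y
    guard let result = sceneView.hitTest(center, types: [.featurePoint]).first else { return }
    sceneView.session.add(anchor: ARAnchor(transform: result.worldTransform))
}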

Directx11 How to manage multiple vertex/index buffers?

I'm Working on a small game framework as I am learning DirectX11.
What would be the best way to design a BufferManager class (maybe static) to handle all the vertex and index data of the models created, both in real time and beforehand? The class should be responsible for creating the buffers, dynamic or static depending on the model info, and then drawing them.
Should I have one vertex and one index vector list, append all new models to them, recreate the buffers whenever new data is appended, and set the new buffers before drawing?
Or should I have separate vertex and index buffers per model, and bind the respective model's buffer with IASetVertexBuffers(model[i].getVertBuff()) before each draw call?
Also some models could be dynamic and others static, how can I do batching here?
Not showing any code here but the construct that you are requesting would be as follows:
Create a file loader for, textures, model meshes, vertex data, normal, audio, etc.
Have a reusable structure that stores all of this data for a particular mesh of a model.
When creating this you will also want a separate texture class to hold information about different textures. This way the same texture can be referenced for different models or meshes and you won't have to load them into memory each time.
The same can be done about different meshes; you can reference a mesh that may be a part of different model objects.
To do this you would need an Asset Storage class that will manage all of your assets. This way if an asset is already in memory it will not load it again; such as a font, a texture, a model, an audio file, etc.
Then the next part you will need is a Batch class and a Batch Manager class
The Batch class will define the container for a batch based on a few parameters: primitive type, whether or not it has transparency (priority queue), etc.
The Batch Manager class does the organization and sends the batches to the rendering stage. This class also states how many vertices a batch can hold and how many batches (buckets) you have; the ratio depends on the game content. A good ratio for a basic 2D sprite type application would be approximately 10 batches, where each batch contains not less than 10,000 vertices. The next step is to populate a bucket with data of a similar type, based on primitive type and priority (alpha channel, for Z depth); if a bucket cannot hold the data, the manager looks for another bucket to fill. If no buckets are available to hold the data, the Batch Manager finds the bucket that is most filled with the highest priority, sends that batch to the video card to be rendered, and then reuses that bucket.
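As a language-neutral illustration of that bucket logic (sketched here in Swift purely for readability, with no graphics API calls; every type, name, and limit is a placeholder you would replace with your DirectX11 equivalents):

// Sketch of the flow described above: find a bucket of the same type/priority
// with room, otherwise create one, otherwise flush the fullest highest-priority
// bucket and reuse it.
struct BatchKey: Equatable {
    let primitiveType: Int   // e.g. triangle list vs. line list
    let priority: Int        // e.g. opaque before transparent
}

final class Batch {
    var key: BatchKey
    let capacity = 10_000    // "not less than 10,000 vertices"
    var vertexCount = 0
    init(key: BatchKey) { self.key = key }
    func hasRoom(for count: Int) -> Bool { vertexCount + count <= capacity }
}

final class BatchManager {
    private let maxBuckets = 10          // "approximately 10 batches"
    private var buckets: [Batch] = []

    func submit(count: Int, key: BatchKey) {
        // 1. A bucket of the same type/priority with room left?
        if let bucket = buckets.first(where: { $0.key == key && $0.hasRoom(for: count) }) {
            bucket.vertexCount += count
            return
        }
        // 2. Still room for a brand-new bucket?
        if buckets.count < maxBuckets {
            let bucket = Batch(key: key)
            bucket.vertexCount = count
            buckets.append(bucket)
            return
        }
        // 3. Otherwise flush the fullest, highest-priority bucket and reuse it.
        guard let victim = buckets.max(by: {
            ($0.key.priority, $0.vertexCount) < ($1.key.priority, $1.vertexCount)
        }) else { return }
        flush(victim)                    // stands in for the real draw submission
        victim.key = key
        victim.vertexCount = count
    }

    private func flush(_ batch: Batch) {
        // In the real engine this is where the batch's vertex data goes to the GPU.
        batch.vertexCount = 0
    }
}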
The last part would be a ShaderManager class to manage different types of shaders that your program will use. So all of these classes or structures will be tied together.
If you design this appropriately, you can abstract all of the engine's behaviors and responsibilities away from the actual game content, properties, and logic or set of rules. This way your game engine can be reused for multiple games: the engine has no dependencies on a particular game, and when you are ready to reuse it, all you have to do is create a main project that links against this static or dynamic library, and all of the engine components will be incorporated into the next game. This separation of code is an excellent approach for generic, reusable code.
For an excellent representation of this approach, I would suggest checking out www.MarekKnows.com and following the Shader Engine series of video tutorials. Albeit that site focuses on Win32 in C++ and uses OpenGL instead of DirectX, the overall design pattern has the same concept; the only difference would be to strip out the OpenGL parts and replace them with the DirectX API.
EDIT - Additional References:
Geometric Tools
Rastertek
3D Buzz
Learn OpenGL
GPU Gems by NVidia
Batches PDF by NVidia
Hieroglyph3 # Codeplex
I also found this write-up on the batch rendering process by Marek Krzeminski, taken from his video tutorials: Batch Rendering by Marek at Gamedev.

How to set classification colors in GDAL output files

I am using the GDAL C++ library to reclassify raster map images and then create an output image of the new data. However, when I create the new image and open it, the classification values don't seem to have a color defined, so I just get a black image. I can fix this by going into the image properties and setting a color for each of the 10 classification values I'm using, but that is extremely time-consuming for the number of maps and trials I am doing.
My question is, is there a way to set metadata info through the GDAL API to define a color for each classification value? Just the name of the right function would be great, I can figure it out from there.
I have tried this using ArcGIS and QuantumGIS, and both have the same problem. Also the file type I am using is Erdas Imagine (called "HFA" in GDAL).
You can use the SetColorTable() method on your raster band. The easiest approach is to fetch the color table from a pre-existing raster with GetColorTable() and pass it to your new raster's band.