I am using ML.NET to consume a TensorFlow object detection model exported from Azure Custom Vision. As prediction output I am supposed to get a detected class, a detected score and a bounding box. I am getting one class and one score, but the bounding box is an array of 256 floats. I am supposed to get only 4 values as the bounding box (x, y, h, w), yet I am getting 256 values against a single detected class. I am using ML.NET 1.5.4 from the NuGet package manager in Visual Studio.
I believe the float array output is flattened (256 / 4 = 64), so there are 64 detections in total. A list of four-element float arrays would be easier to understand and interrogate, but that is not what the model returns. You therefore need to split the 256-float array into chunks of 4, as in the sketch below.
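A minimal sketch of that chunking (the variable names here are illustrative, not part of any ML.NET API):

// detectedBoxes is the raw flattened output (256 floats in this case);
// each detection occupies 4 consecutive values.
var boxes = new List<float[]>();
for (int i = 0; i + 4 <= detectedBoxes.Length; i += 4)
{
    var box = new float[4];
    Array.Copy(detectedBoxes, i, box, 0, 4);
    boxes.Add(box);
}
// boxes[n] now holds the 4 coordinates of the n-th detection.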
Correctly capturing the outputs of a Custom Vision ONNX-exported model requires the following class:
public class ONNXOutput
{
    [ColumnName("detected_boxes")]
    public float[] DetectedBoxesCoords { get; set; }

    [ColumnName("detected_classes")]
    public long[] DetectedClasses { get; set; }

    [ColumnName("detected_scores")]
    public float[] DetectedScores { get; set; }
}
DetectedClasses and DetectedScores are separate outputs from the model and need to be retrieved independently of DetectedBoxesCoords, as in the pipeline sketch below.
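For completeness, a minimal sketch of wiring those three output columns through ML.NET's OnnxTransformer; the input column name "image_tensor" and the model file name are assumptions here — check the actual names in your exported model, e.g. with Netron:

// Requires the Microsoft.ML.OnnxTransformer and Microsoft.ML.OnnxRuntime packages.
var mlContext = new MLContext();
var pipeline = mlContext.Transforms.ApplyOnnxModel(
    outputColumnNames: new[] { "detected_boxes", "detected_classes", "detected_scores" },
    inputColumnNames: new[] { "image_tensor" }, // assumed input name; verify in your model
    modelFile: "model.onnx");                   // placeholder path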
This website provides more information https://mlnet-workshop.azurewebsites.net/category/ONNX
I am using Veins 5.0 and I am trying to calculate the distance between vehicles and set their speed. I want to calculate it every second by sending WSM messages. My goal is to have, for example, 5 vehicles, where each vehicle communicates with the vehicle in front of it and gets its position, in order to calculate their distance and keep it constant. I am new to this and I don't know how to approach it.
I tried to do something like this in handlePositionUpdate:
void TraCIDemo11p::handlePositionUpdate(cObject* obj)
{
    DemoBaseApplLayer::handlePositionUpdate(obj);
    // stopped for at least 10s?
    if (x < simTime()) {
        TraCIDemo11pMessage* wsm1 = new TraCIDemo11pMessage();
        populateWSM(wsm1);
        wsm1->setPosition(mobility->getPositionAt(simTime()));
        wsm1->setSpeed(mobility->getSpeed());
        if (dataOnSch) {
            startService(Channel::sch2, 42, "Traffic Information Service");
            // schedule message to self to send later
            scheduleAt(computeAsynchronousSendingTime(1, ChannelType::service), wsm1);
        }
        else {
            sendDown(wsm1);
        }
    }
}
You are describing what is, effectively, a platooning application. You might want to base your source code on Plexe, the platooning extension to Veins. It already comes with state-of-the-art distance controllers such as PATH and Ploeg. More information can be found at http://plexe.car2x.org/
I have a functioning tf.estimator pipeline built in TF 1, but now I have decided to move to TF 2.0, and I have problems at the end of my pipeline when I want to save the model in the .pb format.
I'm using this high level estimator export_saved_model method:
https://www.tensorflow.org/api_docs/python/tf/estimator/BoostedTreesRegressor#export_saved_model
I have two numeric features, 'age' and 'time_spent'
They're defined using tf.feature_column as such:
age = tf.feature_column.numeric_column('age')
time_spent = tf.feature_column.numeric_column('time_spent')
features = [age,time_spent]
After the model has been trained, I turn the list of features into a dict using the method tf.feature_column.make_parse_example_spec() and feed it to another method, build_parsing_serving_input_receiver_fn(), exactly as outlined on TensorFlow's webpage, https://www.tensorflow.org/guide/saved_model, under Estimators.
columns_dict = tf.feature_column.make_parse_example_spec(features)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(columns_dict)
model.export_saved_model(export_dir,input_receiver_fn)
I then inspect the output using the CLI tools
saved_model_cli show --dir mydir --all:
Resulting in the following:
(screenshot of the saved_model_cli output: the signature shows a single DT_STRING input named "inputs")
Somehow TensorFlow squashes my two useful numeric features into a useless string input called "inputs".
In TF 1 this could be circumvented by creating a custom input_receiver_fn() using tf.placeholder, and I'd get the correct output with two distinct numeric features. But tf.placeholder doesn't exist in TF 2, so that approach is no longer available.
Sorry about the raging, but TensorFlow is horribly documented, and I'm working with high-level APIs that should just work out of the box, but they don't.
I'd really appreciate any help :)
TensorFlow squashes my two useful numeric features into a useless string input called "inputs"
is not exactly true: the exported model expects a serialized tf.Example proto. So you can wrap your age and time_spent into two features, which will look like this:
features {
  feature {
    key: "age"
    value {
      float_list {
        value: 10.2
      }
    }
  }
  feature {
    key: "time_spent"
    value {
      float_list {
        value: 40.3
      }
    }
  }
}
You can then call your regress function with the serialized string.
I created a custom date field on the opportunity screen. The field shows up just as expected. I enter a date, save the record, leave the screen, go back in, and the date is no longer there. However, when I query the database, the column is there and there is data in it. So even though the date is being stored in the database, it is not displayed on the screen, and when I added that field to a generic inquiry it was also blank. I am not a programmer, but it seems like this should be an easy thing to do. I found some references on the web to this type of problem, but I was hoping for a more straightforward fix than those appeared to be, as they involved more programming than I am comfortable with at this point.
I found many web pages explaining how to do this seemingly simple thing using the customization tools. Not sure what I'm missing.
I am running a fairly recent version of 2019R1.
Any help would be appreciated!
Response from an Acumatica support person. Evidently they changed some things in 2018R1. See the comments/answers below from support. An image of the change I made in the customization tool is also below. After making this change, the custom field worked as desired.
After reviewing it in more detail, it sounds like it could be related to the new implementation of PXProjection on that DAC.
In ver. 2017 R2, DACs like PX.Objects.CR.CROpportunity were implemented as regular Data Access Classes:
[System.SerializableAttribute()]
[PXCacheName(Messages.Opportunity)]
[PXPrimaryGraph(typeof(OpportunityMaint))]
[CREmailContactsView(typeof(Select2<Contact,
    LeftJoin<BAccount, On<BAccount.bAccountID, Equal<Contact.bAccountID>>>,
    Where2<Where<Optional<CROpportunity.bAccountID>, IsNull, And<Contact.contactID, Equal<Optional<CROpportunity.contactID>>>>,
        Or2<Where<Optional<CROpportunity.bAccountID>, IsNotNull, And<Contact.bAccountID, Equal<Optional<CROpportunity.bAccountID>>>>,
        Or<Contact.contactType, Equal<ContactTypesAttribute.employee>>>>>))]
[PXEMailSource] // NOTE: for assignment map
public partial class CROpportunity : PX.Data.IBqlTable, IAssign, IPXSelectable
{
    ...
}
In version 2018 R1 (and later) the PX.Objects.CR.CROpportunity DAC is a projection over the PX.Objects.CR.Standalone.CROpportunity and PX.Objects.CR.Standalone.CROpportunityRevision DACs:
[System.SerializableAttribute()]
[PXCacheName(Messages.Opportunity)]
[PXPrimaryGraph(typeof(OpportunityMaint))]
[CREmailContactsView(typeof(Select2<Contact,
    LeftJoin<BAccount, On<BAccount.bAccountID, Equal<Contact.bAccountID>>>,
    Where2<Where<Optional<CROpportunity.bAccountID>, IsNull, And<Contact.contactID, Equal<Optional<CROpportunity.contactID>>>>,
        Or2<Where<Optional<CROpportunity.bAccountID>, IsNotNull, And<Contact.bAccountID, Equal<Optional<CROpportunity.bAccountID>>>>,
        Or<Contact.contactType, Equal<ContactTypesAttribute.employee>>>>>))]
[PXEMailSource] // NOTE: for assignment map
[PXProjection(typeof(Select2<Standalone.CROpportunity,
    InnerJoin<Standalone.CROpportunityRevision,
        On<Standalone.CROpportunityRevision.opportunityID, Equal<Standalone.CROpportunity.opportunityID>,
        And<Standalone.CROpportunityRevision.revisionID, Equal<Standalone.CROpportunity.defRevisionID>>>>>), Persistent = true)]
public partial class CROpportunity : IBqlTable, IAssign, IPXSelectable
{
    ...
}
Because of that change, it is now required to declare 2 extension classes: one for Standalone.CROpportunity (the normal DAC) and one for CROpportunity (the PXProjection).
On the PXProjection DAC extension, please remember to set BqlField to the corresponding field of the Standalone DAC, e.g. BqlField = typeof(CROpportunityStandaloneExt.usrTest):
public class CROpportunityExt : PXCacheExtension<PX.Objects.CR.CROpportunity>
{
    #region UsrTest
    [PXDBDecimal(BqlField = typeof(CROpportunityStandaloneExt.usrTest))]
    [PXUIField(DisplayName = "Test Field")]
    public virtual Decimal? UsrTest { get; set; }
    public abstract class usrTest : IBqlField { }
    #endregion
}
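The matching extension on the Standalone DAC is the one that actually binds the column in the database; a minimal sketch mirroring the field above (the class and field names follow the example and are illustrative):

public class CROpportunityStandaloneExt : PXCacheExtension<PX.Objects.CR.Standalone.CROpportunity>
{
    #region UsrTest
    [PXDBDecimal]
    [PXUIField(DisplayName = "Test Field")]
    public virtual Decimal? UsrTest { get; set; }
    public abstract class usrTest : IBqlField { }
    #endregion
}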
Please find more information in the article below:
Custom field on CROpportunity doesn't display saved value since upgrading from 6.10 or 2017R2 to 2018R1
(screenshot: the change made in the customization tool)
I have a trained Fast R-CNN model on a custom set of images. I want to evaluate a new image using the model and the C++ Eval API. I flattened the image into a 1D vector and acquired ROIs to input into the eval function.
GetEvalF(&model);
// Load model with desired outputs
std::string networkConfiguration;
//networkConfiguration += "outputNodeNames=\"h1.z:ol.z\"\n";
networkConfiguration += "modelPath=\"" + modelFilePath + "\"";
model->CreateNetwork(networkConfiguration);
// inputs are features of image: 1000:1000:3 & rois for image: 100
std::unordered_map<string, vector<float>> inputs = { { "features", imgVector },{ "rois", roisVector } };
//outputs are roiLabels and prediction values for each one: 500
std::unordered_map<string, vector<float>*> outputs = { { "roiLabels", &labelsVector }};
but when I try to evaluate with
model->Evaluate(inputs, outputs);
I get a 'no instance of overloaded function' error.
Does somebody know where my formatting is wrong?
Did you train your model using Python or BrainScript? If using Python, you should use the CNTKLibrary API for evaluation, not the EvalDll API (which works only for models trained with BrainScript). You can find more information about the difference between these two APIs in our Wiki page here. You can check this page about how to use the CNTKLibrary API for model evaluation, and the example code. Instructions on how to build the examples are described in this page.
You can also use our Nuget packages to build your application.
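For illustration, a minimal sketch of evaluating a model through the CNTKLibrary managed (C#) API from those NuGet packages; the model path, input names and shapes below are placeholders taken from the question, not verified values:

// Load the model on the CPU (CNTK.CPUOnly NuGet package).
var device = CNTK.DeviceDescriptor.CPUDevice;
var model = CNTK.Function.Load("model.dnn", device); // placeholder path

// A Fast R-CNN model has two inputs; bind them by name.
var featuresVar = model.Arguments.First(a => a.Name == "features");
var roisVar = model.Arguments.First(a => a.Name == "rois");
var inputs = new Dictionary<CNTK.Variable, CNTK.Value>
{
    { featuresVar, CNTK.Value.CreateBatch<float>(featuresVar.Shape, imgVector, device) },
    { roisVar, CNTK.Value.CreateBatch<float>(roisVar.Shape, roisVector, device) }
};

// Ask for the single output and read it back as dense floats.
var outputVar = model.Output;
var outputs = new Dictionary<CNTK.Variable, CNTK.Value> { { outputVar, null } };
model.Evaluate(inputs, outputs, device);
var roiLabels = outputs[outputVar].GetDenseData<float>(outputVar);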
Thanks!
Is it possible to automatically generate Sitecore templates just by coding models? I'm using Sitecore 8.0 and I saw the Glass Mapper code-first approach, but I can't find more information about it.
Not sure why there isn't much info about it, but you can definitely model/code first! I do it a lot using the attribute configuration approach, like so:
[SitecoreType(true, "{generated guid}")]
public class ExampleModel
{
    [SitecoreField("{generated guid}", SitecoreFieldType.SingleLineText)]
    public virtual string Title { get; set; }
}
Now, how this works: the 'true' value for the first parameter of SitecoreType indicates that it may be used for code first. There is a GlassCodeFirstDataprovider which has an Initialize method, executed in Sitecore's Initialize pipeline. This method collects all configurations marked for code first and creates them in the SQL data provider. The sections and fields are stored in memory. It also takes inheritance into account (base templates).
I think you first need to uncomment some code in the GlassMapperScCustom class you get when you install the project via NuGet. The PostLoad method contains the few lines that execute the Initialise method of each code-first data provider:
var dbs = global::Sitecore.Configuration.Factory.GetDatabases();
foreach (var db in dbs)
{
    var provider = db.GetDataProviders().FirstOrDefault(x => x is GlassDataProvider) as GlassDataProvider;
    if (provider != null)
    {
        using (new SecurityDisabler())
        {
            provider.Initialise(db);
        }
    }
}
Furthermore, I would advise using code first in development only. You can create packages or serialize the templates as usual and deploy them to other environments, so you don't need the data provider (and its potential risks) there.
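Once the template has been generated, the model maps items like any other Glass model; a minimal usage sketch, assuming Glass Mapper's SitecoreContext (verify the exact context API against your Glass version):

// Hypothetical usage of the code-first model above.
var context = new Glass.Mapper.Sc.SitecoreContext();
var model = context.GetCurrentItem<ExampleModel>();
var title = model.Title; // mapped from the generated template field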
You can. But it's not going to be Glass related.
Code first is exactly what Sitecore.Pathfinder is looking to achieve. There's not a lot of info publicly available on this yet, however.
Get started here: https://github.com/JakobChristensen/Sitecore.Pathfinder