List object properties from an instance in Jena

How can I list all the object properties associated with an instance in Jena?
For example:
A Person has an object property called "hasVehicle" which links it to the class Vehicle.

The appropriate Jena method is OntClass.listDeclaredProperties. There are some nuances to be aware of; the Jena RDF frames how-to explains them in detail.
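For example, a minimal sketch of that call (the ontology file name and the Person URI are placeholders, not taken from the question):

import com.hp.hpl.jena.ontology.*;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;

public class ListDeclared
{
    public static void main( String[] args ) {
        OntModel m = ModelFactory.createOntologyModel( OntModelSpec.OWL_MEM, null );
        m.read( "file:vehicles.owl" ); // placeholder ontology file
        OntClass person = m.getOntClass( "http://example.com/ns#Person" ); // placeholder URI

        // lists the properties whose domain includes Person, e.g. hasVehicle
        for (ExtendedIterator<OntProperty> it = person.listDeclaredProperties(); it.hasNext(); ) {
            System.out.println( it.next() );
        }
    }
}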
Update
OK, I've looked at your code sample and read your description, and I'm afraid I don't understand what you want to do. What I've done is rewrite your code sample so that it does something I guess you might want, based on your description in the comment:
package test;

import com.hp.hpl.jena.ontology.*;
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.util.FileManager;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;

public class LeandroTest
{
    public static String NS = "http://www.owl-ontologies.com/TestProject.owl#";

    public static void main( String[] args ) {
        OntModel m = ModelFactory.createOntologyModel( OntModelSpec.OWL_MEM, null );
        FileManager.get().readModel( m, "./src/main/resources/project-test.owl" );

        OntClass equipe = m.getOntClass( NS + "Equipe" );
        OntProperty nome = m.getOntProperty( NS + "nome" );

        for (ExtendedIterator<? extends OntResource> instances = equipe.listInstances(); instances.hasNext(); ) {
            OntResource equipeInstance = instances.next();
            System.out.println( "Equipe instance: " + equipeInstance.getProperty( nome ).getString() );

            // find out the resources that link to the instance
            for (StmtIterator stmts = m.listStatements( null, null, equipeInstance ); stmts.hasNext(); ) {
                Individual ind = stmts.next().getSubject().as( Individual.class );

                // show the properties of this individual
                System.out.println( "  " + ind.getURI() );
                for (StmtIterator j = ind.listProperties(); j.hasNext(); ) {
                    Statement s = j.next();
                    System.out.print( "    " + s.getPredicate().getLocalName() + " -> " );

                    if (s.getObject().isLiteral()) {
                        System.out.println( s.getLiteral().getLexicalForm() );
                    }
                    else {
                        System.out.println( s.getObject() );
                    }
                }
            }
        }
    }
}
This gives the following output: it first lists all resources of rdf:type #Equipe, then for each of those it lists the resources in the model that link to that Equipe, then for each of those linked resources it lists all of its RDF properties. I don't think that's a particularly useful thing to do, but hopefully it will show you some patterns for traversing RDF graphs in Jena.
Equipe instance: Erica
Equipe instance: Etiene
http://www.owl-ontologies.com/TestProject.owl#EtapaExecucao_01
EtapaExecucao_DataModificao -> 2010-03-29T10:54:05
caso_de_teste -> http://www.owl-ontologies.com/TestProject.owl#CasoDeTeste_01
EtapaExecucao_StatusTeste -> Passou
EtapaExecucao_Reprodutibilidade -> Sempre
type -> http://www.owl-ontologies.com/TestProject.owl#EtapaExecucao
EtapaExecucao_VersaoDefeitoSurgiu -> Release ICAMMH_01.00
EtapaExecucao_Severidade -> Minimo
EtapaExecucao_VersaoDefeitoCorrigiu -> Release ICAMMH_02.00
DataExecucao -> 2009-07-10T09:42:02
EtapaExecucao_StatusDoDefeito -> Nao sera corrigido
EtapaExecucao_DataSubmissao -> 2009-06-30T09:43:01
Tipos_Fases -> http://www.owl-ontologies.com/TestProject.owl#FaseTesteExecucao
EtapaExecucao_Resolucao -> Fechado
executor_do_teste -> http://www.owl-ontologies.com/TestProject.owl#Etiene
EtapaExecucao_PrioridadeCorrecao -> Normal
Equipe instance: Fabio
Equipe instance: Melis
Some general suggestions, particularly if you have any follow-up questions:
ask specific questions; it's very hard to answer a vague, unclear question;
provide runnable code if possible: you can take my code above, drop it into a development environment like Eclipse, and try it out;
provide the code and data in the question itself, not linked off on pastebin;
take some time to reduce the code and data to the minimum needed to show the problem: your Protégé file was over 600 lines long.

Related

How to write a custom ppx decorator for ReScript?

I need to generate a value whose type differs from the type passed in. This is my first time writing in an OCaml-like language; in Haskell, which I am more familiar with, I would use Data.Generics.
As far as I understand it, I need to use a decorator and a ppx. I wrote a simple example:
let recordHandler = (loc: Location.t, _recFlag: rec_flag, _t: type_declaration, fields: list(label_declaration)) => {
  let (module Builder) = Ast_builder.make(loc);
  let test = [%str
    let schema: Schema = { name: "", _type: String, properties: [] }
  ]
  let moduleExpr = Builder.pmod_structure(test);
  [%str
    module S = [%m moduleExpr]
  ]
}

let str_gen = (~loc, ~path as _, (_rec: rec_flag, t: list(type_declaration))) => {
  let t = List.hd(t)
  switch t.ptype_kind {
  | Ptype_record(fields) => recordHandler(loc, _rec, t, fields);
  | _ => Location.raise_errorf(~loc, "schema is used only for records.");
  };
};

let name = "my_schema";

let () = {
  let str_type_decl = Deriving.Generator.make_noarg(str_gen);
  Deriving.add(name, ~str_type_decl) |> Deriving.ignore;
};
And
open Ppxlib;
let _ = Driver.run_as_ppx_rewriter()
But when using it in ReScript code:
module User = {
  #deriving(my_schema)
  type my_typ = {
    foo: int,
  };
};
I got this error:
schema is not supported
I made sure I had wired it up correctly: when I changed #deriving(my_schema) to #deriving(abcd) and #deriving(sschema), I got a different error:
Ppxlib.Deriving: 'abcd' is not a supported type deriving generator.
My last experiment was to copy-paste an existing deriving library,
ppx_accessor
I copied it, renamed it to accessors_2, and got the same error as in my first experiment:
accessors_2 is not supported
I also haven't found any examples of "ppx rescript". Can you please help me?
What am I doing wrong? (Everything, I know.)
I found the answer in an article:
Dropping support for custom PPXes such as ppx_deriving (the deriving
attribute is now exclusively interpreted as bs.deriving)

How to use IsolationForest in Weka?

I am trying to use IsolationForest in Weka, but I cannot find an easy example that shows how to use it. Who can help me? Thanks in advance.
import weka.classifiers.misc.IsolationForest;

public class Test2 {
    public static void main(String[] args) {
        IsolationForest isolationForest = new IsolationForest();
        // .....................................................
    }
}
I strongly suggest you study the implementation of IsolationForest a little.
The following code works by loading a CSV file whose first column is the class. (Note: a single class value will produce only (1 - anomaly score); if the class is binary you will get the anomaly score too; otherwise it just returns an error.) Note that I skip the second column, which in my case is a UUID that is not needed for anomaly detection.
// imports needed for this method:
import java.io.File;
import java.io.FileWriter;
import java.util.Enumeration;
import weka.classifiers.misc.IsolationForest;
import weka.core.Attribute;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.CSVLoader;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

private static void findOutlier(File in, File out) throws Exception {
    CSVLoader loader = new CSVLoader();
    loader.setSource(new File(in.getAbsolutePath()));
    Instances data = loader.getDataSet();

    // setting class attribute if the data format does not provide this information
    // (for example, the XRFF format saves the class attribute information as well)
    if (data.classIndex() == -1)
        data.setClassIndex(0);

    // remove the second attribute (in my case a uuid, not needed for anomaly detection)
    String[] options = new String[2];
    options[0] = "-R"; // "range"
    options[1] = "2";  // second attribute
    Remove remove = new Remove();   // new instance of filter
    remove.setOptions(options);     // set options
    remove.setInputFormat(data);    // inform filter about dataset **AFTER** setting options
    Instances newData = Filter.useFilter(data, remove); // apply filter

    IsolationForest isolationForest = new IsolationForest();
    isolationForest.buildClassifier(newData);
    // System.out.println(isolationForest);

    // write the original attribute names, then the score columns
    FileWriter fw = new FileWriter(out);
    for (Enumeration<Attribute> e = data.enumerateAttributes(); e.hasMoreElements(); ) {
        fw.write(e.nextElement().name());
        fw.write(",");
    }
    fw.write("(1 - anomaly score),anomaly score\n");

    // append the scores to each instance
    for (int i = 0; i < data.size(); ++i) {
        Instance inst = data.get(i);
        final double[] distributionForInstance = isolationForest.distributionForInstance(inst);
        fw.write(inst + "," + distributionForInstance[0] + "," + (1 - distributionForInstance[0]));
        fw.write(",\n");
    }
    fw.flush();
    fw.close();
}
The previous function appends the anomaly values as the last columns of the CSV. Please note that I'm using a single class, so to get the corresponding anomaly score I compute 1 - distributionForInstance[0]; with a binary class you can simply use distributionForInstance[1].
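As a rough sketch of that indexing (reusing isolationForest and inst from the function above):

double[] dist = isolationForest.distributionForInstance(inst);
// single class value: only (1 - anomaly score) is available, at index 0
// binary class: the anomaly score itself is at index 1
double anomalyScore = dist.length > 1 ? dist[1] : 1 - dist[0];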
A sample input.csv for getting (1 - anomaly score):
Class,ignore, feature_0, feature_1, feature_2
A,1,21,31,31
A,2,41,61,81
A,3,61,37,34
A sample input.csv for getting both (1 - anomaly score) and the anomaly score:
Class,ignore, feature_0, feature_1, feature_2
A,1,21,31,31
B,2,41,61,81
A,3,61,37,34

GetSymbolInfo().Symbol returning null on AttributeSyntax

I have a custom attribute I'm using to test a Roslyn code analyzer I'm writing:
[AttributeUsage( validOn: AttributeTargets.Class | AttributeTargets.Interface, Inherited = false, AllowMultiple = true )]
public class DummyAttribute : Attribute
{
    public DummyAttribute( string arg1, Type arg2 )
    {
    }

    public int TestField;
}
which decorates another test class:
[Dummy( "", typeof( string ) )]
[Dummy( "test", typeof( int ) )]
[Dummy( "test", typeof( int ) )]
public class J4JLogger<TCalling> : IJ4JLogger<TCalling>
{
But when I call GetSymbolInfo() on it with the semantic model it's defined in:
model.GetSymbolInfo( attrNode ).Symbol
the value of Symbol is null.
What's odd is that the GetSymbolInfo() call works perfectly well for attribute classes defined in the .NET Core library (e.g., AttributeUsage).
Both Dummy and J4JLogger are defined in the same project. I create my compilation unit by parsing the files individually (e.g., J4JLogger is parsed and analyzed separately from Dummy) so when I'm parsing J4JLogger there is no reference to the assembly containing both J4JLogger and Dummy.
Could the problem be that model doesn't actually contain the Dummy class I think it does? Is there a way to check what's in the semantic model? Do I have to include a reference to the assembly whose source file I'm analyzing in the semantic model?
Corrected Parsing Logic
My original parsing logic parsed each file into a syntax tree independent of all its sister source files. The correct way to parse source files -- at least when they depend on each other -- is something like this:
protected virtual (CompilationUnitSyntax root, SemanticModel model) ParseMultiple( string primaryPath, params string[] auxPaths )
{
    if( !IsValid )
        return (null, null);

    CSharpCompilation compilation;
    SyntaxTree primaryTree;

    var auxFiles = auxPaths == null || auxPaths.Length == 0
        ? new List<string>()
        : auxPaths.Distinct().Where( p => !p.Equals( primaryPath, StringComparison.OrdinalIgnoreCase ) );

    try
    {
        var auxTrees = new List<SyntaxTree>();

        primaryTree = CSharpSyntaxTree.ParseText( File.ReadAllText( primaryPath ) );
        auxTrees.Add( primaryTree );

        foreach( var auxFile in auxFiles )
        {
            var auxTree = CSharpSyntaxTree.ParseText( File.ReadAllText( auxFile ) );
            auxTrees.Add( auxTree );
        }

        compilation = CSharpCompilation.Create( ProjectDocument.AssemblyName )
            .AddReferences( GetReferences().ToArray() )
            .AddSyntaxTrees( auxTrees );
    }
    catch( Exception e )
    {
        Logger.Error<string>( "Configuration failed, exception message was {0}", e.Message );
        return (null, null);
    }

    return (primaryTree.GetCompilationUnitRoot(), compilation.GetSemanticModel( primaryTree ));
}
A minor gotcha is that AddSyntaxTrees() does not appear to be incremental; you need to add all the relevant syntax trees in one call to AddSyntaxTrees().
Turns out the problem was that you have to include all the source files that reference each other (e.g., Dummy and J4JLogger in my case) in the compilation unit because otherwise the "internal" references (e.g., decorating J4JLogger with Dummy) won't resolve. I've annotated the question with how I rewrote my parsing logic.

Different output with Mono on Linux than with Visual Studio on Win7 when calling a web service

I use the Exchange web services to extract attachments from an Exchange mail server.
When I call the code on Linux with Mono, a certain text attachment contains some mixed-up strings,
like so:
"sam winglin vz" becomes "sainglin vz" -> so it is missing "m w".
I see this about 3 times in a 150 kB file; 3 bytes are missing in the Linux output vs the Windows output.
When I extract it from Visual Studio, the text attachment is perfect.
It is like this example:
Save attachments from exchange inbox
Any idea in which direction I should look to fix this?
Code:
#r "Microsoft.Exchange.WebServices.dll"
open Microsoft
open Microsoft.Exchange.WebServices.Data
open System
open System.Net
type PgzExchangeService(url,user,password) =
let service = new ExchangeService(ExchangeVersion.Exchange2007_SP1,
TimeZoneInfo.CreateCustomTimeZone("Central Standard Time",new TimeSpan(-6, 0, 0),"(GMT-06:00) Central Time (US & Canada)","Central Standard Time"))
do
ServicePointManager.ServerCertificateValidationCallback <- ( fun _ _ _ _ -> true )
service.Url <- new Uri(url)
service.Credentials <- new WebCredentials(user, password, "domain")
member this.Service with get() = service
member this.InboxItems = this.Service.FindItems(WellKnownFolderName.Inbox, new ItemView(10))
member this.GetFileAttachments ( item : Item ) =
let emailMessage =
EmailMessage.Bind( this.Service,
item.Id,
new PropertySet(BasePropertySet.IdOnly, ItemSchema.Attachments))
item, emailMessage.Attachments |> Seq.choose (fun attachment -> match box attachment with
| :? FileAttachment as x -> Some(x) | _ -> None)
let mailAtdomain = new PgzExchangeService("https://xx.xx.XX.XX/EWS/Exchange.asmx", "user", "passw")
let printsave (item : Item ,att : seq<FileAttachment>) =
if (Seq.length att) > 0 then
printfn "%A - saving %i attachments" item.Subject (Seq.length att)
att |> Seq.iter ( fun attachment -> printfn "%A" attachment.Name
attachment.Load(#"/tmp/test/" + attachment.Name ) )
// filter so we only have items with attachements and ...
let itemsWithAttachments = mailAtdomain.InboxItems
|> Seq.map mailAtdomain.GetFileAttachments
|> Seq.iter printsave
This code doesn't run on Windows with Mono due to a bug in TimeZoneInfo: the sample runs on Linux but not on Windows because of that bug. Still, this is the code that works on Linux to extract attachments.
Try CSV attachments and see if the result is the same: I lose data, about 3 bytes every so many lines.
Mail me if you need the sample CSV attachment that triggers the problem.
Here is a C# version that I used for testing. Running from VS2010 it works perfectly, but on Linux with Mono the attachment has the wrong size; some bytes are missing.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Exchange.WebServices.Data;
using System.Net;

namespace Exchange_SDP_Attachment_Extracter
{
    public class PgzExchangeService
    {
        public void Extract()
        {
            ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1, TimeZoneInfo.Local);

            service.Credentials = new NetworkCredential("user", "pass", "domain");
            service.Url = new Uri("https://xx.xx.xx.xx/EWS/Exchange.asmx");

            ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

            FindItemsResults<Item> findResults = service.FindItems(WellKnownFolderName.Inbox, new ItemView(10));

            foreach (Item item in findResults.Items)
            {
                EmailMessage e = EmailMessage.Bind
                    (service,
                     item.Id,
                     new PropertySet(BasePropertySet.IdOnly, ItemSchema.Attachments));

                foreach (Attachment att in e.Attachments)
                {
                    if (att is FileAttachment)
                    {
                        FileAttachment fileAttachment = (FileAttachment)att;
                        fileAttachment.Load(@"/tmp/testsdp/" + fileAttachment.Name);
                    }
                }
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            PgzExchangeService pgz = new PgzExchangeService();
            pgz.Extract();
        }
    }
}
My suggestion would be to try examining the text attachment with a hex editor. There has to be something unusual about those three occurrences. You need to find out what those three lines have in common before any of us can recommend a course of action to you.
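If it helps, here is a minimal Java sketch (the file names are placeholders) that compares the two extracted copies byte by byte and reports where they diverge, assuming bytes were dropped on the Linux side:

import java.nio.file.Files;
import java.nio.file.Paths;

public class DiffAttachments {
    public static void main(String[] args) throws Exception {
        // placeholder paths: point these at the Windows and Linux copies of the attachment
        byte[] win = Files.readAllBytes(Paths.get("attachment-windows.txt"));
        byte[] lin = Files.readAllBytes(Paths.get("attachment-linux.txt"));
        System.out.println("sizes: " + win.length + " vs " + lin.length);
        for (int i = 0, j = 0; i < win.length && j < lin.length; i++, j++) {
            if (win[i] != lin[j]) {
                System.out.printf("divergence at offset %d: %02x vs %02x%n",
                        i, win[i] & 0xff, lin[j] & 0xff);
                j--; // assume this byte was dropped on the Linux side and resynchronize
            }
        }
    }
}

The offsets and the dropped byte values should show whether the three missing runs have anything in common (e.g. a particular character sequence or block boundary).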

Using Conversion Studio by To-Increase to import Notes into Microsoft Dynamics AX 2009

Currently, I'm using Conversion Studio to bring in a CSV file and store the contents in an AX table. This part is working. I have a block defined and the fields are correctly mapped.
The CSV file contains several comments columns, such as Comments-1, Comments-2, etc. There are a fixed number of these. The public comments are labeled as Comments-1...5, and the private comments are labeled as Private-Comment-1...5.
The desired result would be to bring the data into the AX table (as is currently working) and either concatenate the comment fields or store them as separate comments into the DocuRef table as internal or external notes.
Wouldn't it just require setting up a new block in the Conversion Studio project that I already have set up? Can you point me to a resource that shows a similar procedure, or how to do this?
Thanks in advance!
After chasing the rabbit down the deepest of rabbit holes, I discovered that the easiest way to do this is like so:
Override the onEntityCommit method of your Document Handler (which extends AppDataDocumentHandler), like so:
AppEntityAction onEntityCommit(AppDocumentBlock documentBlock, AppBlock fromBlock, AppEntity toEntity)
{
    AppEntityAction ret;
    int64           recId; // should point to the record currently being imported into CMCTRS
    ;

    ret = super(documentBlock, fromBlock, toEntity);

    recId = toEntity.getRecord().recId;
    // do whatever you need to do with the recId now

    return ret;
}
Here is my method to insert the notes, in case you need that too:
private static boolean insertNote(RefTableId _tableId, int64 _docuRefId, str _note, str _name, boolean _isPublic)
{
    DocuRef docuRef;
    boolean insertResult = false;
    ;

    if (_docuRefId)
    {
        try
        {
            docuRef.clear();

            ttsbegin;

            docuRef.RefCompanyId = curext();
            docuRef.RefTableId   = _tableId;
            docuRef.RefRecId     = _docuRefId;
            docuRef.TypeId       = 'Note';
            docuRef.Name         = _name;
            docuRef.Notes        = _note;
            docuRef.Restriction  = (_isPublic) ? DocuRestriction::External : DocuRestriction::Internal;

            docuRef.insert();

            ttscommit;

            insertResult = true;
        }
        catch
        {
            ttsabort;
            error("Could not insert " + ((_isPublic) ? "public" : "private") + " comment:\n\n\t\"" + _note + "\"");
        }
    }

    return insertResult;
}