Defining a property whose range is an ordered list

Given two classes Container and Element, I would like to define a property contains to describe the contents of a Container.
However, the order of Elements is important, so I can't simply write
_:container :contains _:element1, _:element2, _:element3 .
How can I define the contains property correctly?
I've looked at rdf:List and rdf:Seq but I don't know how to translate that into my ontology.

You can define your property in many ways depending on what your requirements and your use cases are.
First, it is sometimes possible to avoid using lists or sequences completely, and yet keep the ordering of things. This can be done if the elements that you are ordering have an intrinsic order. For instance, if you want to name the children of someone from the oldest to the youngest, you can just use a hasChild relation:
ex:someone onto:hasChild ex:child3, ex:child1, ex:child2 .
ex:child1 onto:birthDay "1995-10-25"^^xsd:date .
ex:child2 onto:birthDay "1997-03-10"^^xsd:date .
ex:child3 onto:birthDay "2003-01-14"^^xsd:date .
If you do not have the exact dates, it is also possible to use a relation isOlderThan to make the order explicit. However, this cannot work in many cases. If you want to say in which order the participants in a race arrived at the finish line, you cannot say:
ex:runner1 onto:arrivedBefore ex:runner2 .
# etc.
because this only applies to this particular race. One solution is to use an RDF list like so:
ex:race42 onto:arrival (ex:runner1 ex:runner2 ex:runner3) .
However, RDF lists cannot be used like this in OWL DL. The typical way of dealing with such lists in OWL is what is described in the document that AKSW links to in his comment. That is, you define a class and properties that mimic the RDF list constructs:
ex:container42 onto:contains [
    a listonto:OWLList;
    listonto:hasElement ex:element1;
    listonto:hasNext [
        a listonto:OWLList;
        listonto:hasElement ex:element2;
        listonto:hasNext [
            a listonto:OWLList;
            listonto:hasElement ex:element3;
            listonto:hasNext listonto:emptylist
        ]
    ]
] .
This is not the only solution. Using rdf:Seq is also an option (though usually discouraged by many people). Again, this cannot be used in OWL DL. However, one can introduce an ontology that partially mimics the way rdf:Seq works:
ex:container42 onto:contains [
    a seqonto:Sequence;
    seqonto:hasSlots
        [ a seqonto:Slot; seqonto:content ex:element1; seqonto:position 1 ],
        [ a seqonto:Slot; seqonto:content ex:element2; seqonto:position 2 ],
        [ a seqonto:Slot; seqonto:content ex:element3; seqonto:position 3 ];
    seqonto:numberOfElements 3
] .
The property seqonto:position with its number is used to mimic the properties rdf:_1, rdf:_2, etc. from the RDF vocabulary. Another way of identifying the last slot could be a special seqonto:lastSlot property; note that this is what the Ordered List Ontology does.
There may be other options, probably as involved as the ones discussed here, but I think this covers most possibilities well enough.

Perhaps you can find a partial answer in this paper:
P. Ciccarese and S. Peroni, "The Collections Ontology: creating and handling collections in OWL 2 DL frameworks", 2013.

What are the ways of Key-Value extraction from unstructured text?

I'm trying to figure out what the ways are (and which of them is the best one) of extracting Values for predefined Keys from unstructured text.
Input:
The doctor prescribed me a drug called favipiravir.
His name is Yury.
Ilya has already told me about that.
The weather is cold today.
I am taking a medicine called nazivin.
Key list: ['drug', 'name', 'weather']
Output:
['drug=favipiravir', 'drug=nazivin', 'name=Yury', 'weather=cold']
So, as you can see, in the 3rd sentence there is no explicit key 'name' and therefore no value is extracted (I think this is where it differs from NER). At the same time, 'drug' and 'medicine' are synonyms, so we should treat 'medicine' as the 'drug' key and extract its value as well.
And the next question: what if the key set is mutable?
Should I use a regexp-based approach because the Keys are predefined, or is there a way to implement this with supervised learning / neural networks? (But in that case, how do I deal with mutable keys?)
You can use a parser to tag words. Your problem is similar to Named Entity Recognition (NER). A lot of libraries, like NLTK in Python, have POS taggers available. You can try those. They are generally trained to identify names, locations, etc. Depending on the type of words you need, you may need to train the tagger yourself, so you will also need some labeled data. Check out this link:
https://nlp.stanford.edu/software/CRF-NER.html

How to reduce semantically similar words?

I have a large corpus of words extracted from the documents. In the corpus are words which might mean the same.
For example, "command" and "order" mean the same thing, while "apple" and "apply" do not.
I would like to merge the similar words, say "command" and "order" to "command".
I have tried to use word2vec, but it doesn't check for semantic similarity of words (it outputs a good similarity for "apple" and "apply", since four characters in the words are the same). And when I try using WUP (Wu-Palmer) similarity, it gives a good similarity score if the words have matching synsets, but the results are not that impressive.
What could be the best approach to reduce semantically similar words to get rid of redundant data and merge similar data?
I believe one of the options here is using WordNet. It gives you a list of synonyms for the word, so you may merge them together given you know its part of speech.
However, I'd like to point out that "order" and "command" are not always the same, e.g. you do not command food in restaurants, and such ambiguity holds for many, many words.
Also, I'd like to point out that for word2vec spelling is irrelevant and is not taken into consideration at all; the algorithm only considers co-occurrence. I suppose you might be mixing it up with FastText.
That said, there is probably some problem with your model, because in a standard set of embeddings the distance between these concepts should be large. The MUSE FastText similarity between "apple" and "apply" is only 0.15, which is quite low.
I use Gensim's function
model.similarity("apply", "apple")
So you might need to fix learning parameters or just use a pretrained model.

CSV-like format with C library supporting multiple "tables" and "named references"

I have some data to feed to a C/C++ program, and I could easily convert it to CSV format. However, I would need a couple of extensions to the CSV standard, or at least to the parts of it I know about.
The data is heterogeneous: there are different parameters of different sizes. They could be single values, vectors or multidimensional arrays. My ideal format would be like this one:
--+ Size1
2
--+ Size2
4
--+Table1
1;2;3;4
5;6;7;8
--+Table2
1;2
"--+" is some sort of separator. I have two 1-valued parameters named symbolically Size1 and Size2 and two other multidimensional parameters Table1 and Table2. In this case the dimensions of Table1 and Table2 are given by the other two parameters.
Also rows and columns could be named, i.e. there could be a table like
--+Table3
A;B
X;1;2
Y;4;5
where element ("A","X") is 1, ("B","X") is 2, and so forth.
In other words, it's like a series of appended CSV files with names for tables, rows and columns.
The parser should be able to exploit the structure of the file, allowing me to write code like this:
parse(my_parser, "Size1", &foo->S1); // read the Size1 value and store it in foo->S1
parse(my_parser, "Size2", &foo->S2); // read the Size2 value and store it in foo->S2
foo->T2 = malloc(sizeof(int) * (foo->S1));
parse(my_parser, "Table2", foo->T2); // read Table2
If it were able to store row and column names, that would be a bonus.
I don't think it would take much to write such a library, but I have more important things to do ATM.
Is there an already defined format like this one? With open-source libraries for C++? Do you have other suggestions for my problem?
Thanks in advance.
A.
I would use JSON, which Boost will readily handle. A scalar is a simple case of an array:
[ 2 ]
The array is easy
[ 1, 2]
Multidimensional
[ [1,2,3,4], [5,6,7,8] ]
It's been a while since I've done this sort of thing, so I'm not sure how the code will break down for you. By expanding on this you could definitely add row/column names. The code will be very nice, perhaps not quite as brainless as in Python, but it should be simple.
Here's a link for the JSON format: http://json.org
Here's a stackoverflow link for reading JSON with boost: Reading json file with boost
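If it helps, here is a minimal, hedged sketch of reading such data with Boost.PropertyTree; the file name and the wrapping of each parameter in a top-level JSON object keyed by name are my own assumptions (property_tree wants an object at the root and represents JSON arrays as children with empty keys).
// Hypothetical minimal reader using Boost.PropertyTree. Assumes "data.json"
// contains {"Table1": [[1,2,3,4],[5,6,7,8]]}.
#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/json_parser.hpp>
#include <iostream>
#include <vector>

int main() {
    boost::property_tree::ptree pt;
    boost::property_tree::read_json("data.json", pt);

    std::vector<std::vector<double>> table;
    for (const auto& row : pt.get_child("Table1")) {        // each row is an unnamed child
        std::vector<double> values;
        for (const auto& cell : row.second)                  // each cell is an unnamed child
            values.push_back(cell.second.get_value<double>());
        table.push_back(values);
    }
    std::cout << "rows: " << table.size() << "\n";
}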
A good option could be YAML.
It's a well-known, human-friendly data serialization standard for programming languages.
It fits your needs quite well: YAML syntax is designed to be easily mapped to data types common to most high-level languages: vectors, associative arrays and scalars:
Size1: 123
---
Table1: [[1.0,2.0,3.0,4.0], [5.0,6.0,7.0,8.0]]
There are good libraries for C, C++ and many other languages.
To get a feel for how it can be used see the C++ tutorial.
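As a rough illustration rather than a drop-in solution, a reader based on yaml-cpp might look like the sketch below; it assumes both parameters sit in a single YAML document named data.yaml (i.e. without the --- document separator shown above).
// Hypothetical minimal reader using yaml-cpp. Assumes "data.yaml" contains:
//   Size1: 123
//   Table1: [[1.0,2.0,3.0,4.0], [5.0,6.0,7.0,8.0]]
#include <yaml-cpp/yaml.h>
#include <iostream>
#include <vector>

int main() {
    YAML::Node doc = YAML::LoadFile("data.yaml");

    int size1 = doc["Size1"].as<int>();                     // scalar parameter

    std::vector<std::vector<double>> table1;
    for (const auto& row : doc["Table1"])                   // sequence of sequences
        table1.push_back(row.as<std::vector<double>>());

    std::cout << "Size1 = " << size1
              << ", Table1 rows = " << table1.size() << "\n";
}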
For interoperability you could also consider the way OpenCV uses YAML format:
%YAML:1.0
frameCount: 5
calibrationDate: "Fri Jun 17 14:09:29 2011\n"
cameraMatrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1000., 0., 320., 0., 1000., 240., 0., 0., 1. ]
Since JSON and YAML have many similarities, you could also take a look at: What is the difference between YAML and JSON? When to prefer one over the other
Thanks everyone for the suggestions.
The data is primarily numeric, with lots of dimensions, and given its size it could be slow to parse with those text formats. I found that the quickest and cleanest way, for now, is to use a database.
I still think it may be overkill but there are no clearly better alternatives now IMHO.

DDD Aggregate root design

Trying to model a 'manufacturing plant' software system...
At the core of the entire system is the "work order" -- almost every entity (many of those are not shown here or are not part of the AR in question) is somehow connected to it. Primarily, however, it looks like:
+ WorkOrder_Root
  + TrackingID: Property (UID)
  + DateReceived: Property
  + DateApproved: Property
  + PartName: Property
  + PartNumber: Property
  + Rework: Collection (1:m)
  + SerialLog: Collection (1:m)
  + CeriLog: Collection (1:m)
  + Sequences: Collection (1:m)
  + Dimensions: Collection (1:m)
  + Consumables: Collection (1:m)
  + Quoting: Single
  + Invoice: Single
  + Warranty: Single
  + Certification: Single
This is a massive AR (incomplete -- there are more properties/collections). Having read several more articles and mini-books in the last few days, I am seriously wondering whether I should try to decompose it into more ARs.
http://www.sapiensworks.com/blog/post/2013/05/13/7-Biggest-Pitfalls-When-Doing-Domain-Driven-Design.aspx
http://www.sapiensworks.com/blog/post/2012/04/18/DDD-Aggregates-And-Aggregates-Root-Explained.aspx
All the above collections are collections of entities, none of which makes sense outside the context of the work order.
You cannot invoice without a work order, you cannot quote without a work order; everything relies on a work order.
My primary concern is what I understand to be potential concurrency issues. For example, if someone is working on W/O 66354 changing the quote while someone else is adding a rework, there exists something of a race condition.
Reworks can change the price, so quoting before a rework has completed makes me think that perhaps Rework should be its own AR -- but all reworks belong to a work order; you cannot construct a rework without first opening/loading a WorkOrder.
All my other ARs in the model are relatively simple, with at most 3 child entities and few properties, but the work order is a beast, and I'm wondering what type of issues I might expect from having this "God" object.
EDIT: I just read through the following
(http://practical-ddd.blogspot.ca/2012/07/designing-aggregates.html)
Invariants made me think twice. If a sequence can be updated or changed without needing to inform the work order with which it is associated, then Sequences is a candidate for an AR? Sequences may be a bad example, as changes to the sequences do need to be reflected in the WorkOrder_Root... but still, am I on the right path here, letting the business rules (rather than logical or data-centric organization) guide the path?
Regards,
Alex
Aggregates should certainly not be as big as the one you showed. The objects contained in an aggregate should be highly coupled and share some invariants that must hold at all times. Invariants between aggregates are only eventually consistent; you cannot rely on them being valid all the time. I could imagine that you can safely guard against the inconsistencies you described with a two-step process: first check the preconditions, then make the changes, then check again that everything is fine; if not, undo your changes and start over. If you have changes that should trigger something else, use domain events. With them you can loosely couple such processes.
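To make the check / change / re-check idea and the domain event a bit more concrete, here is a heavily simplified sketch; all names (WorkOrder, ReworkAdded, the cost invariant) are hypothetical stand-ins for your real rules.
#include <stdexcept>
#include <string>
#include <vector>

struct ReworkAdded {                          // domain event published by the aggregate
    std::string work_order_id;
    double extra_cost;
};

class WorkOrder {                             // aggregate root guarding its own invariants
public:
    explicit WorkOrder(std::string id) : id_(std::move(id)) {}

    ReworkAdded add_rework(double extra_cost) {
        if (closed_)                          // 1. check preconditions
            throw std::domain_error("cannot add rework to a closed work order");
        rework_costs_.push_back(extra_cost);  // 2. apply the change
        if (total_rework_cost() < 0.0) {      // 3. re-check the invariant, undo on failure
            rework_costs_.pop_back();
            throw std::domain_error("rework would violate the cost invariant");
        }
        return ReworkAdded{id_, extra_cost};  // let a handler update quoting eventually
    }

private:
    double total_rework_cost() const {
        double sum = 0.0;
        for (double c : rework_costs_) sum += c;
        return sum;
    }

    std::string id_;
    bool closed_ = false;
    std::vector<double> rework_costs_;
};
A handler subscribed to ReworkAdded could then load the Quoting aggregate and reprice it in its own transaction, which is where the eventual consistency between aggregates comes in.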

Create and use HTML full text search index (C++)

I need to create a search index for a collection of HTML pages.
I have no experience in implementing a search index at all, so any general information on how to build one, what information to store, and how to implement advanced searches such as "entire phrase", ranking of results, etc. is welcome.
I'm not afraid to build it myself, though I'd be happy to reuse an existing component (or use one to get started with a prototype). I am looking for a solution accessible from C++, preferably without requiring additional installations at runtime. The content is static (so it makes sense to aggregate search information), but a search might have to accumulate results from multiple such repositories.
I can make a few educated guesses, though: create a map word ==> pages for all (relevant) words; a rank can be assigned to the mapping by prominence (h1 > h2 > ... > <p>) and proximity to the top. Advanced searches could be built on top of that: searching for the phrase "homo sapiens" could list all pages that contain "homo" and "sapiens", then scan all pages returned for locations where they occur together. However, there are a lot of problematic scenarios and unanswered questions, so I am looking for references to what should be a huge amount of existing work that somehow escapes my google-fu.
[edit for bounty]
The best resource I found until now is this and the links from there.
I do have an implementation roadmap for an experimental system; however, I am still looking for:
Reference material regarding index creation and individual steps
available implementations of individual steps
reusable implementations (within the above environment restrictions)
This process is generally known as information retrieval. You'll probably find this online book helpful.
Existing libraries
Here are two existing solutions that can be fully integrated into an application without requiring a separate process (I believe both will compile with VC++).
Xapian is mature and may do much of what you need, from indexing to ranked retrieval. Separate HTML parsing would be required because, AFAIK, it does not parse html (it has a companion program Omega, which is a front end for indexing web sites).
Lucene is an indexing/searching Apache library in Java, with an official pre-release C port, Lucy, and an unofficial C++ port, CLucene.
Implementing information retrieval
If the above options are not viable for some reason, here's some info on the individual steps of building and using an index. Custom solutions can range from simple to very sophisticated, depending on what you need for your application. I've broken the process into 5 steps:
HTML processing
Text processing
Indexing
Retrieval
Ranking
HTML Processing
There are two approaches here:
Stripping The page you referred to discusses a technique generally known as stripping, which involves removing all the html elements that won't be displayed and translating others to their display form. Personally, I'd preprocess using perl and index the resulting text files. But for an integrated solution, particularly one where you want to record significance tags (e.g. <h1>, <h2>), you probably want to roll your own. Here is a partial implementation of a C++ stripping routine (it appears in Thinking in C++, final version of the book here) that you could build from.
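For a sense of scale, a naive stripping routine is only a few lines. The sketch below is my own simplification (not the Thinking in C++ code): it drops everything between '<' and '>' and keeps the visible text, leaving entity decoding and <script>/<style> handling aside.
#include <string>

// Minimal sketch: drop tag content and keep the visible text.
std::string strip_tags(const std::string& html) {
    std::string text;
    bool in_tag = false;
    for (char c : html) {
        if (c == '<')      in_tag = true;
        else if (c == '>') in_tag = false;
        else if (!in_tag)  text += c;
    }
    return text;
}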
Parsing A level up in complexity from stripping is html parsing, which would help in your case for recording significance tags. However, a good C++ HTML parser is hard to find. Some options might be htmlcxx (never used it, but active and looks promising) or hubbub (C library, part of NetSurf, but claims to be portable).
If you are dealing with XHTML or are willing to use an HTML-to-XML converter, you can use one of the many available XML parsers. But again, HTML-to-XML converters are hard to find; the only one I know of is HTML Tidy. In addition to conversion to XHTML, its primary purpose is to fix missing/broken tags, and it has an API that could possibly be used to integrate it into an application. Given XHTML documents, there are many good XML parsers, e.g. Xerces-C++ and tinyXML.
Text Processing
For English at least, processing text into words is pretty straightforward. There are a couple of complications when search is involved, though.
Stop words are words known a priori not to provide a useful distinction between documents in the set, such as articles and prepositions. Often these words are not indexed and are filtered from query streams. There are many stop word lists available on the web, such as this one.
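Filtering against such a list is trivial once the text is tokenized; a sketch (the word list here is just a tiny illustrative sample):
#include <string>
#include <unordered_set>
#include <vector>

// Remove stop words from a token stream; the set would normally be loaded from a file.
std::vector<std::string> remove_stop_words(const std::vector<std::string>& tokens) {
    static const std::unordered_set<std::string> stop_words = {
        "a", "an", "the", "and", "or", "of", "in", "to", "is"
    };
    std::vector<std::string> kept;
    for (const auto& t : tokens)
        if (stop_words.count(t) == 0)
            kept.push_back(t);
    return kept;
}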
Stemming involves preprocessing documents and queries to identify the root of each word in order to better generalize a search. E.g. searching for "foobarred" should yield "foobarred", "foobarring", and "foobar". The index can be built and searched on roots alone. The two general approaches to stemming are dictionary based (lookups from word ==> root) and algorithm based. The Porter algorithm is very common and several implementations are available, e.g. C++ here or C here. The Snowball C library supports stemming in several languages.
Soundex encoding One method to make search more robust to spelling errors is to encode words with a phonetic encoding. Then when queries contain phonetic errors, they will still map directly to the indexed words. There are a lot of implementations around; here's one.
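To give an idea of what such an encoding involves, here is a sketch of a simplified Soundex variant (the special handling of 'h' and 'w' is omitted): keep the first letter, map the remaining consonants to digits, skip vowels and repeated codes, and pad to four characters.
#include <cctype>
#include <cstddef>
#include <string>

// Map a letter to its Soundex digit; '0' marks letters that are skipped.
char soundex_digit(char c) {
    switch (std::tolower(static_cast<unsigned char>(c))) {
        case 'b': case 'f': case 'p': case 'v':              return '1';
        case 'c': case 'g': case 'j': case 'k':
        case 'q': case 's': case 'x': case 'z':              return '2';
        case 'd': case 't':                                  return '3';
        case 'l':                                            return '4';
        case 'm': case 'n':                                  return '5';
        case 'r':                                            return '6';
        default:                                             return '0'; // vowels, h, w, y
    }
}

std::string soundex(const std::string& word) {
    if (word.empty()) return "";
    std::string code(1, static_cast<char>(std::toupper(static_cast<unsigned char>(word[0]))));
    char prev = soundex_digit(word[0]);
    for (std::size_t i = 1; i < word.size() && code.size() < 4; ++i) {
        char d = soundex_digit(word[i]);
        if (d != '0' && d != prev) code += d;   // skip vowels and repeated codes
        prev = d;
    }
    code.resize(4, '0');                        // pad with zeros
    return code;
}
With this, soundex("Robert") and soundex("Robirt") both come out as "R163", so the misspelled query still hits the indexed word.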
Indexing
The map word ==> page data structure is known as an inverted index. It's inverted because it's often generated from a forward index of page ==> words. Inverted indexes generally come in two flavors: an inverted file index, which maps words to each document they occur in, and a full inverted index, which maps words to each position in each document they occur in.
The important decision is what backend to use for the index, some possibilities are, in order of ease of implementation:
SQLite or Berkeley DB - both of these are database engines with C++ APIs that can be integrated into a project without requiring a separate server process. Persistent databases are essentially files, so multiple index sets can be searched by just changing the associated file. Using a DBMS as a backend simplifies index creation, updating and searching.
In-memory data structure - if you're using an inverted file index that is not prohibitively large (memory consumption and time to load), this could be implemented as a std::map<std::string,word_data_class>, using boost::serialization for persistence.
On-disk data structure - I've heard of blazingly fast results using memory-mapped files for this sort of thing, YMMV. Having an inverted file index would involve having two index files, one representing words with something like struct {char word[n]; unsigned int offset; unsigned int count; };, and the second representing (word, document) tuples with just unsigned ints (words implicit in the file offset). The offset is the file offset of the first document id for the word in the second file; count is the number of document ids associated with that word (the number of ids to read from the second file). Searching would then reduce to a binary search through the first file with a pointer into a memory-mapped file. The downside is the need to pad/truncate words to get a constant record size.
The procedure for indexing depends on which backend you use. The classic algorithm for generating an inverted file index (detailed here) begins with reading through each document and extending a list of (page id, word) tuples, ignoring duplicate words in each document. After all documents are processed, sort the list by word, then collapse it into (word, (page id1, page id2, ...)).
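A sketch of that procedure over already-tokenized documents, with std::sort doing the sort-by-word step, might look like this (the helper names are mine):
#include <algorithm>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

using InvertedIndex = std::map<std::string, std::vector<int>>;  // word -> sorted page ids

// documents[page_id] holds the stripped, tokenized (and optionally stemmed) text of one page.
InvertedIndex build_index(const std::vector<std::vector<std::string>>& documents) {
    std::vector<std::pair<std::string, int>> postings;
    for (int page_id = 0; page_id < static_cast<int>(documents.size()); ++page_id) {
        std::set<std::string> unique_words(documents[page_id].begin(),
                                           documents[page_id].end());
        for (const auto& w : unique_words)          // duplicates within a page already removed
            postings.emplace_back(w, page_id);
    }
    std::sort(postings.begin(), postings.end());    // sort by word, then by page id

    InvertedIndex index;
    for (const auto& [word, page_id] : postings)    // collapse into word -> (id1, id2, ...)
        index[word].push_back(page_id);
    return index;
}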
The mifluz GNU library implements inverted indexes with storage, but without document or query parsing. It is GPL, so it may not be a viable option, but it will give you an idea of the complexities involved in an inverted index that supports a large number of documents.
Retrieval
A very common method is boolean retrieval, which is simply the union/intersection of the documents indexed for each of the query words joined with or/and, respectively. These operations are efficient if the document ids are stored in sorted order for each term, so that algorithms like std::set_union or std::set_intersection can be applied directly.
There are variations on retrieval, and wikipedia has an overview, but standard boolean retrieval is good for many/most applications.
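As an illustration of the AND case over the in-memory index sketched above, intersecting the sorted posting lists term by term:
#include <algorithm>
#include <iterator>
#include <map>
#include <string>
#include <vector>

using InvertedIndex = std::map<std::string, std::vector<int>>;  // word -> sorted page ids

// Boolean AND retrieval: intersect the posting lists of all query terms.
std::vector<int> and_query(const InvertedIndex& index,
                           const std::vector<std::string>& terms) {
    std::vector<int> result;
    bool first = true;
    for (const auto& term : terms) {
        auto it = index.find(term);
        if (it == index.end()) return {};           // a missing term empties an AND query
        if (first) { result = it->second; first = false; continue; }
        std::vector<int> narrowed;
        std::set_intersection(result.begin(), result.end(),
                              it->second.begin(), it->second.end(),
                              std::back_inserter(narrowed));
        result = std::move(narrowed);               // lists are sorted, so each pass is linear
    }
    return result;
}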
Ranking
There are many methods for ranking the documents returned by boolean retrieval. Common methods are based on the bag of words model, which just means that the relative position of words is ignored. The general approach is to score each retrieved document relative to the query, and rank documents based on their calculated score. There are many scoring methods, but a good starting place is the term frequency-inverse document frequency formula.
The idea behind this formula is that if a query word occurs frequently in a document, that document should score higher, but a word that occurs in many documents is less informative so this word should be down weighted. The formula is, over query terms i=1..N and document j
score[j] = sum_over_i(word_freq[i,j] * inv_doc_freq[i])
where the word_freq[i,j] is the number of occurrences of word i in document j, and
inv_doc_freq[i] = log(M/doc_freq[i])
where M is the number of documents and doc_freq[i] is the number of documents containing word i. Notice that words that occur in all documents will not contribute to the score. A more complex scoring model that is widely used is BM25, which is included in both Lucene and Xapian.
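A direct translation of that formula into code, assuming you also keep per-document term counts and a global document-frequency table (both assumptions of this sketch, not something the index above provides for free), could look like:
#include <cmath>
#include <map>
#include <string>
#include <vector>

using TermCounts = std::map<std::string, int>;   // word -> occurrences in one document

// tf-idf score of one retrieved document against the query; rank by descending score.
double tfidf_score(const TermCounts& doc,
                   const std::vector<std::string>& query,
                   const std::map<std::string, int>& doc_freq,  // word -> number of docs containing it
                   int num_documents /* M */) {
    double score = 0.0;
    for (const auto& term : query) {
        auto tf = doc.find(term);
        auto df = doc_freq.find(term);
        if (tf == doc.end() || df == doc_freq.end()) continue;
        score += tf->second * std::log(static_cast<double>(num_documents) / df->second);
    }
    return score;
}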
Often, effective ranking for a particular domain is obtained by adjusting by trial and error. A starting place for adjusting rankings by heading/paragraph context could be inflating word_freq for a word based on heading/paragraph context, e.g. 1 for a paragraph, 10 for a top level heading. For some other ideas, you might find this paper interesting, where the authors adjusted BM25 ranking for positional scoring (the idea being that words closer to the beginning of the document are more relevant than words toward the end).
Objective quantification of ranking performance is obtained by precision-recall curves or mean average precision, detailed here. Evaluation requires an ideal set of queries paired with all the relevant documents in the set.
Depending on the size and number of the static pages, you might want to look at an existing search solution.
"How do you implement full-text search for that 10+ million row table, keep up with the load, and stay relevant? Sphinx is good at those kinds of riddles."
I would choose the Sphinx engine for full-text searching. The licence is GPL, but they also have a commercial version available. It is meant to be run stand-alone [2], but it can also be embedded into applications by extracting the needed functionality (be it indexing [1], searching [3], stemming, etc.).
The data should be obtained by parsing the input HTML files and transforming them to plain-text by using a parser like libxml2's HTMLparser (I haven't used it, but they say it can parse even malformed HTML). If you aren't bound to C/C++ you could take a look at Beautiful Soup.
After obtaining the plain-texts, you could store them in a database like MySQL or PostgreSQL. If you want to keep everything embedded you should go with sqlite.
Note that Sphinx doesn't work out-of-the-box with sqlite, but there is an attempt to add support (sphinx-sqlite3).
I would attack this with a little sqlite database. You could have tables for 'page', 'term' and 'page term'. 'Page' would have columns like id, text, title and url. 'Term' would have a column containing a word, as well as the primary ID. 'Page term' would have foreign keys to a page ID and a term ID, and could also store the weight, calculated from the distance from the top and the number of occurrences (or whatever you want).
Perhaps a more efficient way would be to only have two tables - 'page' as before, and 'page term' which would have the page ID, the weight, and a hash of the term word.
An example query - you want to search for "foo". You hash "foo", then query all page term rows that have that term hash. Sort by descending weight and show the top ten results.
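A sketch of that query against the two-table variant, using the SQLite C API directly; the table and column names are only illustrative, std::hash stands in for whatever hash you choose, and error checking is omitted.
#include <sqlite3.h>
#include <cstdio>
#include <functional>
#include <string>

int main() {
    sqlite3* db = nullptr;
    sqlite3_open("index.db", &db);

    // Two-table variant: 'page' plus 'page_term' keyed by a hash of the word.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS page (id INTEGER PRIMARY KEY, url TEXT, title TEXT);"
        "CREATE TABLE IF NOT EXISTS page_term (page_id INTEGER, term_hash INTEGER, weight REAL);"
        "CREATE INDEX IF NOT EXISTS idx_term ON page_term(term_hash);",
        nullptr, nullptr, nullptr);

    // Query: hash the search word, fetch the ten highest-weighted pages.
    const long long h = static_cast<long long>(std::hash<std::string>{}("foo"));
    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db,
        "SELECT p.url, t.weight FROM page_term t JOIN page p ON p.id = t.page_id "
        "WHERE t.term_hash = ? ORDER BY t.weight DESC LIMIT 10;",
        -1, &stmt, nullptr);
    sqlite3_bind_int64(stmt, 1, h);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        std::printf("%s (weight %f)\n",
                    reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0)),
                    sqlite3_column_double(stmt, 1));
    sqlite3_finalize(stmt);
    sqlite3_close(db);
}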
I think this should query reasonably quickly, though it obviously depends on the number and size of the pages in question. Sqlite isn't difficult to bundle and shouldn't need an additional installation.
Ranking pages is the really tricky bit here. With a large sample of pages you can use links quite a lot in working out ranks. Otherwise you need to check how words seem to be placed, and also make sure your engine doesn't get fooled by 'dictionary' pages.
Good luck!