Why is this ATL helper wrong? - ocl

I'm new to ATL and OCL and I'm trying to transform this metamodel:
[source metamodel diagram]
into this one:
[target metamodel diagram]
The helper is meant to take all the tests created by the user admin and then sum the ids of the Actions of those tests.
I've done this helper:
helper def: actionsId: Integer = Test!Test.allInstances()->select(i | i.md.user='admin')->collect(n | n.act.id.toInteger())->sum();
But when I run the transformation I'm having this error:
org.eclipse.m2m.atl.engine.emfvm.VMException: Collections do not have properties, use ->collect()
This error is in the collect(n | n.act.id.toInteger()) part of the helper.
The rest of my code is this:
rule Testset2Testcase {
    from s: Test!Test
    to r: Testcase!Testcase(
        ident <- thisModule.actionsId.toString(),
        date <- s.md.date,
        act <- thisModule.resolveTemp(s.act, 'a')
    )
    do {
        'Bukatuta'.println();
    }
}
rule Action2Activity {
    from s: Test!Action
    to a: Testcase!Activity(
        ident <- s.id
    )
}
Sorry for my bad English.

My teacher helped me with this.
The problem was in the helper. Doing this:
helper def: actionsId: Integer = Test!Test.allInstances()->select(i | i.md.user='admin')->collect(n | n.act.id.toInteger())->sum();
I was trying to take the id of a collection of collections of Actions instead of taking the id of each object.
That helper produced a collection of collections, so by using flatten() the collection of collections becomes a flat collection of Actions.
The helper written in a correct way looks like this:
helper def: actionsId: Integer = Test!Test.allInstances()->select(i | i.md.user='admin')->collect(n | n.act)->flatten()->collect(x | x.id.toInteger())->sum();
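The shape problem is easier to see outside OCL. Here is a rough Python analogy (the data is entirely made up for illustration): collecting the act reference over the selected tests yields one list of actions per test, i.e. a list of lists, which must be flattened before the ids can be summed.

```python
# Hypothetical stand-in for the Test/Action model: each test has
# metadata (md) with a user, and a list of actions (act) with ids.
tests = [
    {"md": {"user": "admin"}, "act": [{"id": "1"}, {"id": "2"}]},
    {"md": {"user": "bob"},   "act": [{"id": "9"}]},
    {"md": {"user": "admin"}, "act": [{"id": "3"}]},
]

# Equivalent of select(i | i.md.user = 'admin')
admin_tests = [t for t in tests if t["md"]["user"] == "admin"]

# Equivalent of collect(n | n.act): one list of actions *per test*,
# i.e. a collection of collections -- asking for .id here fails.
nested = [t["act"] for t in admin_tests]

# Equivalent of ->flatten(): a single flat collection of actions.
flat = [a for acts in nested for a in acts]

# Equivalent of collect(x | x.id.toInteger())->sum()
actions_id = sum(int(a["id"]) for a in flat)
print(actions_id)  # 1 + 2 + 3 = 6
```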

Your expression looks plausible, but without your metamodel it is difficult to see where ATL is unhappy about the use of a Collection property. If Test::md is a collection, the expression would just be wrong, though not for the reason given.
If ATL's hovertext doesn't help you understand your types, you might enter the same expression into the OCL Xtext Console and carefully hover over "." and "md" to get its accurate type analysis.
But beware: ATL has an independently developed embedded OCL that is not as rich as Eclipse OCL. Perhaps your expression is too complex for ATL; try breaking it up with let expressions.


Selection / Filtering by kind

I would like to select or filter scenarios by kind in my Capella project. When I use:
ownedScenarios.kind
It returns:
FUNCTIONAL
DATA_FLOW
FUNCTIONAL
DATA_FLOW
The first request I tried returns an empty set:
ownedScenarios->select(myScenario | myScenario.kind='DATA_FLOW')
The second one returns "ERROR: Couldn't find the 'filter(Set(EClassifier=Scenario),EClassifier=ScenarioKind)' service (78, 124)"
ownedScenarios->filter(interaction::ScenarioKind::DATA_FLOW)
Any idea why?
Thanks
interaction::ScenarioKind is an EEnum (an enumeration) and interaction::ScenarioKind::DATA_FLOW is an EEnumLiteral (a value from that enumeration), but the filter() service takes an EClass as its parameter. In order to filter on an EEnumLiteral, you can use the select() service as in your first attempt:
ownedScenarios->select(myScenario | myScenario.kind = interaction::ScenarioKind::DATA_FLOW)
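The distinction carries over to most languages: filter() narrows by *type*, while select() compares an *attribute* against an enumeration literal (not against its string name). A small Python sketch of the same idea, with hypothetical class and scenario names:

```python
from enum import Enum

class ScenarioKind(Enum):   # analogue of the EEnum
    FUNCTIONAL = "FUNCTIONAL"
    DATA_FLOW = "DATA_FLOW"

class Scenario:             # analogue of the Scenario EClass
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind

owned_scenarios = [
    Scenario("s1", ScenarioKind.FUNCTIONAL),
    Scenario("s2", ScenarioKind.DATA_FLOW),
    Scenario("s3", ScenarioKind.DATA_FLOW),
]

# select() analogue: compare the attribute against the enum literal,
# not against the plain string 'DATA_FLOW'.
data_flow = [s for s in owned_scenarios if s.kind is ScenarioKind.DATA_FLOW]
print([s.name for s in data_flow])  # ['s2', 's3']
```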

SPARQL query to get subjects linked to object 2 that are not linked to object 1, using IF or NOT EXISTS or any other way?

I hope you are doing well.
Here is the basic structure of my graph database. Components have estimation methods, estimation methods have parameters and parameters have data sources.
c -> em -> p -> ds
Where,
c stands for components
em stands for estimation methods
p stands for parameters
ds stands for data sources
I am able to query individuals in the structured format like this:
SELECT ?c ?em ?p ?ds WHERE {
  ?c wb:hasEstimationMethod ?em .
  OPTIONAL {
    ?em wb:hasParameter ?p .
    OPTIONAL {
      ?p wb:hasDataSource ?ds .
    }
  }
}
I use the OPTIONAL clause because an estimation method might not have any parameters, and similarly a parameter might not have any data sources.
However, there are a few cases where, for example, the estimation method is unknown but we know the parameter. In that case the component is linked directly to a parameter, and I would prefer to have a blank for the estimation method. Here is the output I would like to have:
c            em                   p            ds
component-1  estimation method-1  parameter-1  data source-1
component-2                       parameter-2  data source-2
component-3                       parameter-3
Notice that the last two rows have missing info, which is exactly what I want in my output in those cases. In other words, I want to skip a step in the hierarchical structure.
So my question is: how can I first query ?c wb:hasEstimationMethod ?em, but if it yields no value, tell SPARQL to query ?c wb:hasParameter ?p instead, and similarly, if that has no value either, ?c wb:hasDataSource ?ds?
Any help will be greatly appreciated! Please let me know if I am not using the right terminology. Have a wonderful day :)

Get Taxonomy Term ID by Node in Drupal 8

I'm trying to get Taxonomy data by particular node.
How can I get Taxonomy Term Id by using Node object ?
Drupal ver. 8.3.6
You could do something like this:
$termId = $node->get('field_yourfield')->target_id;
Then you can load the term with
Term::load($termId);
Hope this helps.
If you want to get Taxonomy Term data you can use this code:
$node->get('field_yourfield')->referencedEntities();
Hope it will be useful for you.
PS: If you need just Term's id you can use this:
$node->get('field_yourfield')->getValue();
You will get something like this:
[0 => ['target_id' => 23], 1 => ['target_id' => 25]]
In this example my field has 2 referenced taxonomy terms.
Thanks!
@Kevin Wenger's comment helped me. I'm totally basing this answer on his comment.
In your code, when you have access to a fully loaded \Drupal\node\Entity\Node you can access all the (deeply) nested properties.
In this example, I've got a node which has a taxonomy term field "field_site". The "field_site" term itself has a plain text field "field_site_url_base". In order to get the value of the "field_site_url_base", I can use the following:
$site_base_url = $node->get('field_site')->entity->field_site_url_base->value;
How to extract multiple term IDs easily if you know a little Laravel (specifically Collections):
Setup: composer require tightenco/collect to make Collections available in Drupal.
// see @Wau's answer for this first bit...
// remember: if you want the whole Term object, use ->referencedEntities()
$field_value = $node->get('field_yourfield')->getValue();
// then use collections to avoid loops etc.
$targets = collect($field_value)->pluck('target_id')->toArray();
// $targets = [1,2,3...]
or maybe you'd like the term IDs comma-separated? (I used this for programmatically passing contextual filter arguments to a view, which requires , (OR) or + (AND) to specify multiple values.)
$targets = collect($field_value)->implode('target_id', ',');
// $targets = "1,2,3"
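For readers without Laravel: pluck() and implode() are just a comprehension and a join. A language-neutral Python sketch of the same two steps, with made-up field data shaped like the getValue() output above:

```python
# Hypothetical field value, shaped like ->getValue() output:
# a list of rows, each holding a target_id.
field_value = [{"target_id": 23}, {"target_id": 25}]

# pluck('target_id') equivalent: extract one key from every row.
targets = [row["target_id"] for row in field_value]
print(targets)  # [23, 25]

# implode('target_id', ',') equivalent, e.g. for a views contextual
# filter that wants comma-separated ids.
joined = ",".join(str(t) for t in targets)
print(joined)   # 23,25
```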

Dynamic Context free grammar NLTK

I'm trying to generate sentences with an NLTK CFG. Is it possible to connect a SQL database to feed the nouns and verbs into the program below?
In the example below, door, window, open, and close are hardcoded. How can I dynamically make NLTK pull the nouns and verbs from, say, an Excel or database column in this particular context?
import nltk
from nltk.parse.generate import generate
from nltk import CFG
grammar = CFG.fromstring("""
    S -> VP NP
    NP -> Det N
    VP -> V
    Det -> 'the '
    N -> 'door' | 'window'
    V -> 'Open' | 'Close'
""")
print(grammar)
for sentence in generate(grammar, n=100):
    print(' '.join(sentence))
It seems that you can't dynamically change an NLTK CFG – once it is instantiated, it stays put. You need to define all of the vocabulary immediately when constructing the CFG.
As far as I can see, you have two options to include comprehensive vocabulary from an external resource:
Build up a grammar string as in the example you posted, and use CFG.fromstring() to parse it. You might have to take care of some escaping issues (e.g. quotes/apostrophes in the terminal symbols).
Use the CFG constructor directly, providing it a list of productions, e.g.:
from nltk import CFG, Production, Nonterminal
prods = [Production(Nonterminal('S'), (Nonterminal('PN'), Nonterminal('V'))),
         Production(Nonterminal('PN'), ('Sam',)),
         Production(Nonterminal('PN'), ('Fred',)),
         Production(Nonterminal('V'), ('sleeps',))]
g = CFG(Nonterminal('S'), prods)
This looks somewhat verbose, but it's probably easier and faster to construct this nested structure of Python datatypes rather than writing a bug-free serialiser for the (more concise) grammar string format.
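For option 1, the grammar string can be assembled from whatever rows your database or spreadsheet query returns. A sketch with hard-coded lists standing in for the query results (the actual DB access is omitted; the word lists are made up):

```python
# Pretend these lists came from a SQL query or a spreadsheet column.
nouns = ["door", "window", "drawer"]
verbs = ["Open", "Close"]

def quote(words):
    # Terminal symbols must be quoted; real data may need escaping
    # of apostrophes, as noted above.
    return " | ".join("'{}'".format(w) for w in words)

grammar_string = """
S -> VP NP
NP -> Det N
VP -> V
Det -> 'the'
N -> {nouns}
V -> {verbs}
""".format(nouns=quote(nouns), verbs=quote(verbs))

print(grammar_string)
# The result can then be handed to nltk.CFG.fromstring(grammar_string).
```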

Webscraping (potentially) ill-formated HTML in R with xpath or regex

I'm trying to extract the abstract from this link. However, I'm unable to extract only the content of the abstract. Here's what I accomplished so far:
url <- "http://www.scielo.br/scielo.php?script=sci_abstract&pid=S1981-38212013000100001&lng=en&nrm=iso&tlng=en"
textList <- readLines(url)
text <- textList[grep("Abstract[^\\:]", textList)] # get the correct element
text1 <- gsub("\\b(.*?)\\bISSN", "" , text)
Up to this point I got almost what I want, but then I couldn't get rid of the rest of the string that isn't of interest to me.
I even tried another approach, with XPath, but unsuccessfully. I tried something like the code below, to no effect whatsoever.
library(XML)
arg.xpath <- "//p/@xmlns"
doc <- htmlParse(url)  # parse the URL
linksAux <- xpathSApply(doc, arg.xpath)
free(doc)
How can I accomplish what I want, either with regex or XPath, or maybe both?
ps.: my general aim is webscraping several similar pages like the one I provided. I can already extract the links; I only need to get the abstract now.
I would strongly recommend the XML approach because regular expressions with HTML can be quite a headache. I think your xpath expression was just a bit off. Try
doc <- htmlParse(url)
xpathSApply(doc, "//p[@xmlns]", xmlValue)
This returns (clipped for length)
[1] "HOLLANDA, Cristina Buarque de. Human rights ..."
[2] "This article is dedicated to recounting the main ..."
[3] "Keywords\n\t\t:\n\t\tHuman rights; transitional ..."
[4] ""
Someone better could give you a better answer, but this kinda works:
reg <- regexpr("<p xmlns=\"\">(.*?)</p>", text1)
begin <- reg[[1]] + 12
end <- attr(reg, which = "match.length") + begin - 17
substr(text1, begin, end)
Here is another approach, which is clunky as written, but offers the technique of keeping the right parts after splitting at tag tokens:
text2 <- sapply(strsplit(x = text1, ">"), "[", 3)
text2
[1] "This article is dedicated to recounting the main initiative of Nelson Mandela's government to manage the social resentment inherited from the segregationist regime. I conducted interviews with South African intellectuals committed to the theme of transitional justice and with key personalities who played a critical role in this process. The Truth and Reconciliation Commission is presented as the primary institutional mechanism envisioned for the delicate exercise of redefining social relations inherited from the apartheid regime in South Africa. Its founders declared grandiose political intentions to the detriment of localized more palpable objectives. Thus, there was a marked disparity between the ambitious mandate and the political discourse about the commission, and its actual achievements.</p"
text3 <- sapply(strsplit(text2, "<"), "[", 1)
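If R is not a hard requirement, the same regex idea can be written with a capture group, which avoids the offset arithmetic of the earlier answer. A Python sketch on a stand-in snippet (the html string is invented for illustration; regex on HTML stays fragile, as noted, so a real parser is the safer choice):

```python
import re

# Stand-in for the line pulled out of the SciELO page.
html = '<h4>Abstract</h4><p xmlns="">This article is dedicated to ...</p>'

# Capture everything between the opening and closing tags, non-greedily.
match = re.search(r'<p xmlns="">(.*?)</p>', html, re.DOTALL)
abstract = match.group(1) if match else None
print(abstract)  # This article is dedicated to ...
```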