I have a UML model whose OpaqueActions contain text conforming to an Xtext grammar/metamodel. I am turning the UML model into text by means of an Acceleo transformation. I'd like to invoke from the Acceleo script a Java service which takes as input the text in the opaque actions within the model and returns as output the root element of the corresponding model, so that I can use it seamlessly from Acceleo.
To this end I need to define a Java class with a method which takes a String as parameter, invokes Xtext, parses the text and, if it is correct, produces the related EMF model. Suppose the text is OCL (it isn't, but I guess the procedure is the same): how would you do that?
You could try to load the OpaqueAction's text as the content of a resource in the resource set that holds the currently processed model. That will give you the AST for that string.
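A minimal sketch of such a service, assuming your Xtext language's resource factory is registered in the resource set (inside Eclipse this happens via the language plug-in; in standalone code you would run the generated standalone setup first) and that its file extension is "mydsl" (a placeholder for your grammar's extension):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;

public class OpaqueActionParser {

    /**
     * Parses the given text with the parser registered for the "mydsl" file
     * extension and returns the root element of the resulting model, or
     * throws if the text has syntax errors.
     */
    public EObject parse(String text, ResourceSet resourceSet) throws IOException {
        // The URI only needs to be unique and carry the right file extension
        // so that the Xtext resource factory is picked up.
        Resource resource = resourceSet.createResource(
                URI.createURI("synthetic:/opaqueAction" + System.nanoTime() + ".mydsl"));
        resource.load(new ByteArrayInputStream(text.getBytes(StandardCharsets.UTF_8)), null);
        if (!resource.getErrors().isEmpty()) {
            throw new IOException("Parse errors: " + resource.getErrors());
        }
        return resource.getContents().isEmpty() ? null : resource.getContents().get(0);
    }
}

From Acceleo you would expose parse(...) as a service and pass it the OpaqueAction's body string together with the resource set of the model being processed.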
I've applied the guidance on programmatic usage of M2Doc (also with this help) to successfully generate a document via the API, which was previously prepared by using the M2Doc GUI (configured .docx plus a .genconf file). It seems to also work with a configured .docx, but without a .genconf file.
Now I would like to go a step further and ease the user interface in our application. The user should come with a .docx, include the {m:...} fields there, especially for variable definition, and then in our Eclipse application just assign model elements to the list of variables. Finally press "generate". The rest I would like to handle via the M2Doc API:
Get list of variables from the .docx
Tell M2Doc the variable objects (and their types and other required information, if that is separately necessary)
Provide M2Doc with sufficient information to handle AQL expressions like projectmodel::PJDiagram.allInstances() in the Word fields
I tried to analyse the M2Doc source code for this, but still have some questions about how to achieve the goal:
The parse/generate API does not write any config information to the .docx or .genconf files, right? What would be the API to at least generate the .docx config information?
The source code mentions "if you are using a Generation" - what is meant with that? The use of a .genconf file (which seems to be optional for the generate API)?
Where can I get the list of variables which M2Doc found in a .docx (during parse?), so that I can present it to the user for object (model element) assignment?
Do I have to tell M2Doc the types of the variables, and in which resource file they are located, besides handing over the variable objects? My guess is no, as using a blank .docx file without any M2Doc information stored also worked for the variables themselves (not for any additional AQL expressions using other types, or .oclAsType() type castings).
How can I provide M2Doc with the types information for the AQL expressions mentioned above, which I normally tell it via the nsURI configuration? I handed over the complete resourceSet of my application, but that doesn't seem to be enough.
Any help would be very much appreciated!
To give you an impression of my code so far, see below - note that it's actually Javascript instead of Java, as our application has a built-in JS-Java interface.
//=================== PARSING OF THE DOCUMENT ==============================
var templateURIString = "file:///.../templateReqs.docx";
var templateURI = URI.createURI(templateURIString);
// canNOT be empty, as we get nullpointer exceptions otherwise
var options = {"TemplateURI":templateURIString};
var exceptions = new java.util.ArrayList();
var resourceSetForModels = ...; //here our application's resource set for the whole model is used, instead of M2Doc "createResourceSetForModels" - works for the moment, but not sure if some services linking is not working
var queryEnvironment = m2doc.M2DocUtils.getQueryEnvironment(resourceSetForModels, templateURI, options);
var classProvider = m2doc.M2DocPlugin.getClassProvider();
// empty Monitor for the moment
var monitor = new BasicMonitor();
var template = m2doc.M2DocUtils.parse(resourceSetForModels.getURIConverter(), templateURI, queryEnvironment, classProvider, monitor);
// =================== GENERATION OF THE DOCUMENT ==============================
var outputURIString = "file:///.../templateReqs.autogenerated.docx";
var outputURI = URI.createURI(outputURIString);
variables["myVar1"] = ...; // assigment of objects...
m2doc.M2DocUtils.generate(template, queryEnvironment, variables, resourceSetForModels, outputURI, monitor);
Thanks!
No, the API used to parse and generate doesn't modify the template file or the .genconf file. To modify the configuration of the template you will need to use the TemplateCustomProperties class. That will allow you to register your metamodels and service classes. This information is then used to configure the IQueryEnvironment, so you might also want to directly configure the IQueryEnvironment in your code.
The "Generation" in this context refers to the .genconf file. Note that the .genconf file is also an EMF model, so you can also craft one in memory to launch your generation if that's easier for you. But yes, the use of a .genconf file is optional, as in your code example.
To get the list of variables in the template you can use the class TemplateCustomProperties:
TemplateCustomProperties.getVariables() lists the variables that are declared, along with their type
TemplateCustomProperties.getMissingVariables() lists the variables that are used in the template but not declared
You can also find the list of used metamodels (EPackage nsURIs) and imported service classes.
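For instance, reading those properties could look roughly like the sketch below. It assumes TemplateCustomProperties can be constructed from the template's XWPFDocument (check the constructor/factory and package name in the M2Doc version you use); the .docx itself is opened with plain Apache POI.

import java.io.FileInputStream;

import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.obeonetwork.m2doc.properties.TemplateCustomProperties;

public class TemplateInspector {

    public void printTemplateProperties(String templatePath) throws Exception {
        try (FileInputStream in = new FileInputStream(templatePath);
                XWPFDocument document = new XWPFDocument(in)) {
            // Assumption: a constructor taking the XWPFDocument; adjust to the
            // API of your M2Doc version.
            TemplateCustomProperties properties = new TemplateCustomProperties(document);

            // Variables declared in the template (with their types) and
            // variables used in m: fields but not declared.
            System.out.println("declared: " + properties.getVariables());
            System.out.println("missing: " + properties.getMissingVariables());
        }
    }
}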
The types of the variables are not needed at generation time; they are only needed if you want to validate your template. At generation time you need to pass a map from each variable name to its value, as you did in your example. The value of a variable can be any object from your model (an EObject), a String, an Integer, ... If you want to use something like oclIsKindOf(pkg::MyEClass) you will need to register the nsURI of pkg first, see the next point.
The code you provided should let you use something like projectmodel::PJDiagram.allInstances(). This service needs a ResourceSetRootEObjectProvider() that is initialized in M2DocUtils.getQueryEnvironment(). But you need to declare the nsURI of your metamodel in your template (see TemplateCustomProperties). This will register it in the IQueryEnvironment. You can also register it yourself using IQueryEnvironment.registerEPackage().
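If you prefer to register the metamodel programmatically rather than declaring it in the template, a minimal sketch (pass your generated EMF package singleton, e.g. a hypothetical ProjectmodelPackage.eINSTANCE):

import org.eclipse.acceleo.query.runtime.IQueryEnvironment;
import org.eclipse.emf.ecore.EPackage;

public class QueryEnvironmentSetup {

    /**
     * Makes the metamodel's types visible to AQL so that expressions like
     * projectmodel::PJDiagram.allInstances() can resolve the type literal.
     */
    public void registerMetamodel(IQueryEnvironment queryEnvironment, EPackage ePackage) {
        queryEnvironment.registerEPackage(ePackage);
    }
}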
This should help you find the missing parts in the configuration of the AQL environment. Your code looks good and should work once you add the configuration part.
Is there a C++ library I can use to load an XSD schema model?
The goal is to load the actual XSD schema model (possibly from multiple files) in a way that lets me inspect the model elements (i.e., types, cardinality, attributes, even comments if possible). I don't want to use it to process XML content, but to manipulate/inspect the schema model itself.
I know that in Java it can be done, for example, with Xerces2 (http://xerces.apache.org/xerces2-j/xml-schema.html), but I looked for something similar in C++ and could not find it.
You could look at the C++ implementation of EMF:
http://modeling-languages.com/emf4cpp-c-implementation-emf/
Then you could use the EMF XSD model:
http://www.eclipse.org/modeling/mdt/?project=xsd
The EMF XSD model is very well engineered, so the only question is around the maturity of the C++ port of EMF.
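To give an idea of what the EMF XSD model offers, here is a minimal sketch in its original Java form (which is what a C++ port would have to mirror); "schema.xsd" is a placeholder path, and schemas referenced via include/import are typically resolved into the same resource set on demand.

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.xsd.XSDElementDeclaration;
import org.eclipse.xsd.XSDSchema;
import org.eclipse.xsd.XSDTypeDefinition;
import org.eclipse.xsd.util.XSDResourceFactoryImpl;

public class SchemaInspector {

    public static void main(String[] args) {
        // Register the XSD resource factory for standalone (non-Eclipse) use.
        ResourceSet resourceSet = new ResourceSetImpl();
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("xsd", new XSDResourceFactoryImpl());

        Resource resource = resourceSet.getResource(URI.createFileURI("schema.xsd"), true);
        XSDSchema schema = (XSDSchema) resource.getContents().get(0);

        // Walk the global element declarations and named type definitions.
        for (XSDElementDeclaration element : schema.getElementDeclarations()) {
            System.out.println("element: " + element.getName());
        }
        for (XSDTypeDefinition type : schema.getTypeDefinitions()) {
            System.out.println("type: " + type.getName());
        }
    }
}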
I'm trying to make a new doc type in MFC that reads data from another document type when needed. My question is: is this possible, and how should it be done?
You can use CWinApp::GetFirstDocTemplatePosition() and CWinApp::GetNextDocTemplate() to iterate through the doc templates.
Then, for each doc template, use CDocTemplate::GetFirstDocPosition() and CDocTemplate::GetNextDoc() to iterate through the documents.
You will need to make the document data public or provide getters/setters.
How can I extract the data the user entered into my custom material's fields when I'm parsing an .mb with the C++ Maya API in my importing app? (I suspect I already have access to an MObject that contains the user's input, but don't know how to extract it)
Here's the situation in greater detail:
I defined a custom material with the C++ Maya API (I created an .mll that defines a custom MPxNode, which in turn defines some float and enum fields for the user)
in Maya I can "assign new material" to an object with the custom material, and then modify the custom material's datafields and save the .mb
in my C++ Maya importer I traverse the DAG and DG and, as expected, note one occurrence of the custom material (so noted by identifying the material MObject as the only one for which the call MObject::hasFn(MFn::kPluginDependNode) returns true)
I can extract each of my custom shader's datafields by name using MFnDependencyNode::attribute("datafieldName") -- trying to extract a nonexistent datafield fails as expected
...but these extracted datafields are MObject's, and I don't know how to extract the data the user entered into the custom material instance in Maya.
What's the right approach here?
Here's the missing link I was looking for:
MFnDependencyNode::findPlug("datafieldName") returns an MPlug, which then provides access to the user-entered data.
(I was searching for names like "attribute" and "datafield" -- it didn't occur to me to look for anything called a "plug")
I am trying to create a REST service that accepts a list of objects from a client and gives back a zip file.
I understand how to give back the zip file alright.
But I am currently trying to figure out a way to pass a list of objects from a REST client/browser to the REST service, and how to accept that list in the REST service.
Should this be done via XML input?
Or maybe the @Consumes annotation could help?
Much thanks.
Som
You need to think out more clearly what you wish to do. There's no really good reason for taking a list of objects and returning a ZIP of them; you might as well use a local zip program (which just about all computers already have). Instead we need to be looking at something sensible: for example, taking a list of names of objects and returning a ZIP of those objects makes a lot of sense. There are other sensible things you could be doing too, but you have to work out in your mind what you want to happen.
Because you mention the @Consumes annotation, I'm going to assume you're using JAX-RS (i.e., Java). That's nice, because it's entirely possible to do on-the-fly ZIP generation with that; the content type you want to produce is application/zip. The easiest way I've found to handle the specification of the list of descriptions of things to return is as a wrapped list, where you use something like JAXB to do the mapping (which gives you XML support; some frameworks also support JSON off the same data models). To do a wrapped list, you use something like this:
@XmlRootElement
public class Wrapper {
    @XmlElement
    public List<String> item;
}
That then produces/handles XML documents like this (a three item list):
<wrapper>
    <item>foo</item>
    <item>...</item>
    <item>bar</item>
</wrapper>
You'll need to set up the @Consumes annotation so that the content type accepted is application/xml (at least), and also consider what type of operation is involved and on what resource.
[EDIT]: In order to create a REST service that takes a list of strings as arguments, the easiest method is indeed to use a wrapper object, much as above. (You can't take a raw list; it needs to be a well-formed XML document when it's on the wire.) We then set up the annotated service method like this:
@POST
@Path("somewhere/{id}")
@Consumes("application/xml")
@Produces("application/zip")
public Response getSomeBytesForList(@PathParam("id") String id, Wrapper req) {
    List<String> items = req.item;            // For example...
    byte[] zip = generateZipBytes(id, items); // or however
    return Response.ok(zip).type("application/zip").build();
}
The key is that the req argument (the name is arbitrary, of course) is the only argument that is not annotated, that it is of a type that is JAXB-enabled, and there is an overall @Consumes("application/xml") annotation to enable the JAXB processing of the request body. (I handle the returning of a ZIP by generating the Response directly rather than relying on the framework to do the processing for me; this lets me control the content type handling a little more precisely.)
Also note that some frameworks can also transfer JAXB-annotated objects as JSON documents, just by having a bit of extra annotation; you just state that the method can accept both "application/xml" and "application/json" in the @Consumes annotation. I do not know whether this applies to the framework you are using (I've only tested it with Apache CXF).
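As a rough sketch of that dual content-type variant (reusing the Wrapper class from above; the resource path and the ZIP helper are made-up placeholders, and whether JSON binding of JAXB types works out of the box depends on your framework):

import java.util.List;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;

@Path("archives")
public class ArchiveResource {

    @POST
    @Path("somewhere/{id}")
    @Consumes({"application/xml", "application/json"}) // accept either representation
    @Produces("application/zip")
    public Response getSomeBytesForList(@PathParam("id") String id, Wrapper req) {
        List<String> items = req.item;
        byte[] zip = generateZipBytes(id, items); // placeholder, as in the answer above
        return Response.ok(zip).type("application/zip").build();
    }

    private byte[] generateZipBytes(String id, List<String> items) {
        // Build the ZIP bytes however suits your application.
        return new byte[0];
    }
}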