Is there a C++ library I can use to load the XSD schema model?
The goal is to load the actual XSD schema model (eventually from multiple files) in a way that lets me then inspect the model elements (e.g., types, cardinality, attributes, even comments if possible). I don't want to use it to process XML content, but to manipulate/inspect the actual model.
I know that in Java it can be done, for example, with Xerces2 (http://xerces.apache.org/xerces2-j/xml-schema.html), but I looked for something similar in C++ and could not find it.
You could look at the C++ implementation of EMF:
http://modeling-languages.com/emf4cpp-c-implementation-emf/
Then you could use the EMF XSD model:
http://www.eclipse.org/modeling/mdt/?project=xsd
The EMF XSD model is very well engineered, so the only question is around the maturity of the C++ port of EMF.
How do I create a model dynamically upon uploading a CSV file? I have done the part where it can read the CSV file.
This doc explains very well how to dynamically create models at runtime in Django. It also links to an example of doing so.
However, as you will see after looking at the document, doing this is quite complex and cumbersome. I would not recommend it, and believe it is quite likely you can determine a model ahead of time that is flexible enough to handle the CSV. That would be much better practice, since dynamically changing the schema of your database while your application is running is a recipe for a ton of bugs in your code.
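For illustration only, here is a minimal sketch of the runtime-model mechanism that document describes, using Python's type() to build a Django model class on the fly. The app name, model name, and fields here are invented, and on a modern Django you would still have to create the table yourself (e.g. with the schema editor), which is part of why this gets cumbersome:

from django.db import connection, models

# Build a model class at runtime; in practice the fields would come from the CSV header.
attrs = {
    '__module__': 'myapp.models',  # assumes an app called "myapp" in INSTALLED_APPS
    'header_name': models.CharField(max_length=255),
    'cell_value': models.TextField(),
}
CsvRow = type('CsvRow', (models.Model,), attrs)

# The class alone is not enough -- the table has to be created too,
# which is where this approach starts to hurt in a running application.
with connection.schema_editor() as schema_editor:
    schema_editor.create_model(CsvRow)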
I understand that you want to create new schemas on the fly based on the fields in a CSV. While that's a valid use case and could be absolutely the right call, I doubt it. It lends itself to a data model for a single-tenant SaaS application that could have goofy performance and migration issues.
I'd try Mongo or some other NoSQL solution, as others have mentioned. But a simpler approach may be a modified star schema implemented in SQL. In this case you create a dimension table that stores each header, then create an instance of each data element that has a foreign key to the dimension and records the value of that dimension.
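As a rough sketch, the two tables could be plain Django models along these lines (the names match the pseudocode below; the exact field types are assumptions):

from django.db import models

class Dimension(models.Model):
    # One row per CSV header (column name)
    name = models.CharField(max_length=255, unique=True)

class DimensionRecord(models.Model):
    # One row per cell, pointing back at the column it came from
    dimension = models.ForeignKey(Dimension, on_delete=models.CASCADE)
    value = models.TextField()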
If you read the CSV, the pseudocode would look something like this:
from csv import DictReader

for row in DictReader(file):
    for k in row.keys():
        try:
            dim = Dimension.objects.get(name=k)
        except Dimension.DoesNotExist:
            # Create the dimension the first time this header is seen
            dim = Dimension(name=k)
            dim.save()
        DimensionRecord(dimension=dim, value=row[k]).save()
Obviously you could handle reading the headers and error trapping better when dimensions already exist, but this is an example of how you could dynamically load CSVs with variable headers into a SQL database.
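If it helps, the try/except lookup can also be collapsed with get_or_create, which covers the "dimension already exists" case in one call (this assumes the hypothetical Dimension/DimensionRecord models sketched above, living in a hypothetical app called myapp):

import csv
from myapp.models import Dimension, DimensionRecord  # hypothetical app/models from the sketch above

def load_csv(path):
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            for header, cell in row.items():
                # Returns the existing Dimension for this header, or creates it
                dim, _created = Dimension.objects.get_or_create(name=header)
                DimensionRecord.objects.create(dimension=dim, value=cell)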
Newbie here: I'm trying to better understand how LoopBack's model definition relates (if at all) to json-schema. Will I be able to just write a json-schema file and use it as the LoopBack model definition?
While there are definitely some close similarities between the two, json-schema and LoopBack's model definition are not the same.
If you have existing JSON schemas, you can probably convert those to LoopBack model definitions, but how complex/advanced your schemas are will determine how easy that conversion is.
I have a UML model with OpaqueActions containing text that conforms to an Xtext grammar/metamodel. I am turning the UML model into text by means of an Acceleo transformation. From the Acceleo script I'd like to invoke a Java service which takes the text in the opaque actions of the model as input and returns the root element of the related model as output, so that I can use it seamlessly from Acceleo.
To this end I need to define a Java class with a method which takes a String as parameter, invokes Xtext, parses the text and, if it is correct, produces the related EMF model. Suppose the text is OCL (it isn't, but I guess the procedure is the same): how would you do that?
You could try to load the OpaqueActions as the content of a resource in the resource set that holds the currently processed model. That will return the AST for that string.
I would like to have a "realtime"-like map.
My main question is:
How do I use django-olwidget with OpenLayers.Strategy.Refresh?
Do I need to start over "from scratch" and use OpenLayers manually?
With django-olwidget, the data is on the web page, so there are no args defining a data source or protocol.
My "second" question is about which format I should choose...
geoJSON? KML? Something else?
Can those formats contain OpenLayers point-specific "style" specifications like:
{'graphic_name': 'square', 'point_radius': 10, 'fill_color': '#ABBAAB', 'stroke_color': '#BAABBA'}
I have already overridden the default map template olwidget/multi_layer_map.html to access my map object in JS. I think it should be rather simple to apply a JS function to each data layer before passing it to the map.
Thanks in advance.
PS: I'm a French speaker.
PS2: I asked this question as a feature request on github: https://github.com/yourcelf/olwidget/issues/89
If you're going to use regularly-refreshing data (without refreshing the page) and serialization formats like geoJSON and KML, django-olwidget won't help you very much out of the box. You might find it easier just to use OpenLayers from scratch.
But if you really wanted to use django-olwidget, here's what I would do:
Subclass olwidget.InfoLayer to create a new vector layer type that uses a network-native format like geoJSON or KML to acquire its data.
Add a corresponding Python subclass to be able to use it with Django forms or whatever the use case is. You'll probably need to specify things like the URL from which the map will poll its data.
This is a lot of work compared to writing for OpenLayers directly. The advantage is that you would get easy Django form integration with the same map.
As to which serialization format to use: I'm partial to JSON flavors over XML flavors such as KML, but it really doesn't matter much -- Django and OpenLayers both speak both fluently.
About the styling, you should take a look at StyleMap [1], where you can set style properties according to feature attributes.
For the main question, I’m sorry I don’t know django-olwidget…
1 - http://openlayers.org/dev/examples/stylemap.html
I am trying to use NDbUnit. I have created a separate XSD for each table instead of one large XSD for the complete database.
My tests run fine when I use only a single XSD and a single XML read. However, for a particular test, I need to have data in two or three different (but related) tables. If I try to read more than one XSD and XML, it throws an exception.
Here is my code
[ClassInitialize()]
public static void MyClassInitialize(TestContext testContext)
{
    IDbConnection connection = DbConnection.GetCurrentDbConnection();
    _mySqlDatabase = new NDbUnit.Core.SqlClient.SqlDbUnitTest(connection);
    _mySqlDatabase.ReadXmlSchema(@"Data\CompanyMaster.xsd");
    _mySqlDatabase.ReadXml(@"Data\CompanyMaster.xml");
    _mySqlDatabase.ReadXmlSchema(@"Data\License.xsd");
    _mySqlDatabase.ReadXml(@"Data\License.xml");
    _mySqlDatabase.ReadXmlSchema(@"Data\LicenseDetails.xsd");
    _mySqlDatabase.ReadXml(@"Data\LicenseDetails.xml");
    _mySqlDatabase.ReadXmlSchema(@"RelatedLicense.xsd");
    _mySqlDatabase.ReadXml(@"Data\RelatedLicense.xml");
}
Here is the exception I get at the point where I try to read License.xsd as shown above:
Class Initialization method ESMS.UnitTest.CompanyManagerTest.MyClassInitialize threw exception.
System.ArgumentException: System.ArgumentException: Item has already been added. Key in dictionary: 'EnableTableAdapterManager' Key being added: 'EnableTableAdapterManager'.
I am not sure if this is the correct way of reading multiple XML/XSD files with NDbUnit. I googled and Overflowed (i.e., searched Stack Overflow), but could not get any sensible direction. Could someone explain what is going wrong and how to correct it?
This isn't how NDbUnit is intended to be used. There is no support for reading multiple XSD or XML files into a single test scope. NDbUnit uses the information in the single XSD to analyze relationships (FKs, etc.) between your tables in order to properly manipulate them during its CRUD operations, so the requirement is that the single XSD describe the entire scope of the tables that you want NDbUnit to manipulate during a test run.
It might be possible to load multiple XML files (containing your test data) but this is not a tested/supported scenario. I'd be interested in understanding what usage scenario you have that would preclude having just one XML file with your needed test data.
But it's definitely the case that only a single XSD file (containing the schema of one or more tables and their relationships, etc.) can be loaded at a time.
Hope this clears this up a bit.
Sbohlen showed me the way.
It's true that, as things stand right now, loading multiple XSDs is not supported.
Fortunately, however, loading multiple XML files against a single XSD is possible.
So what I did was create a single XSD and pull all the related tables into it. Then I used the AppendXml call available alongside ReadXml. This way I could load the required test data into multiple tables, and my tests started passing.
This link sheds more light on AppendXml: http://code.google.com/p/ndbunit/issues/detail?id=27