How to check for an internal edge via TraCI? - veins

I am trying to figure out whether an edge/lane is internal. When SUMO creates internal edges/lanes, it prefixes their IDs with a colon [1]. Currently I am exploiting this naming convention; however, it seems that you can also annotate arbitrary other edges as internal using the function attribute, which is likewise set for the internal edges SUMO creates [1]. Therefore, I want to retrieve this information via TraCI.
To my knowledge, there is no TraCI command to retrieve this information (i.e. either the value of function or whether the edge/lane is internal).
The classes MSEdge and MSLane in the microsim directory have methods to retrieve both of those values; however, the Edge and Lane classes in libsumo do not.
I also checked whether the value of the function tag might get added to the parameter map during initialization, which I could access via TraCI's getParameter. This also does not seem to be the case. I checked some files from the netimport directory but could not find anything satisfactory.
Is there any other way to retrieve the function/isInternal information via TraCI without adding a new TraCI command (and the aforementioned missing methods in libsumo)?

This is a static property of the network, so the easiest way of retrieving the information is to parse the network. In Python you can use sumolib for that:
import sumolib
net = sumolib.net.readNet("my.net.xml")
function = {}
for e in net.getEdges():
    function[e.getID()] = e.getFunction()
There is currently no TraCI call for that, but the colon prefix is a very good indicator. The main developers are also a bit reluctant to add every kind of static information retrieval to the TraCI API, so as not to overload it.
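If you drive the simulation from Python anyway, the colon convention can be checked directly on IDs returned by TraCI. Below is a minimal sketch; the config file name, the vehicle loop, and the helper function are placeholders for illustration, not part of the original answer:
import traci

traci.start(["sumo", "-c", "my.sumocfg"])  # hypothetical config file

def is_internal(edge_or_lane_id):
    # SUMO prefixes the IDs of internal edges/lanes with ":"
    return edge_or_lane_id.startswith(":")

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    for veh in traci.vehicle.getIDList():
        if is_internal(traci.vehicle.getRoadID(veh)):
            pass  # the vehicle is currently on an internal (junction) edge

traci.close()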

Related

How to automate M2Doc generation including the settings?

I've applied the guidance on programmatic usage of M2Doc (also with this help) to successfully generate a document via the API; the document was previously prepared using the M2Doc GUI (a configured .docx plus a .genconf file). It also seems to work with a configured .docx but without a .genconf file.
Now I would like to go a step further and simplify the user interface in our application. The user should come with a .docx, include the {m:...} fields there, especially for variable definition, and then, in our Eclipse application, just assign model elements to the list of variables and finally press "generate". The rest I would like to handle via the M2Doc API:
Get list of variables from the .docx
Tell M2Doc the variable objects (and their types and other required information, if that is separately necessary)
Provide M2Doc with sufficient information to handle AQL expressions like projectmodel::PJDiagram.allInstances() in the Word fields
I tried to analyse the M2Doc source code for this, but still have some questions about how to achieve the goal:
The parse/generate API does not write any config information into the .docx or .genconf files, right? What would be the API to at least generate the .docx config information?
The source code mentions "if you are using a Generation" - what is meant by that? The use of a .genconf file (which seems to be optional for the generate API)?
Where can I get the list of variables that M2Doc found in a .docx (during parse?), so that I can present it to the user for object (model element) assignment?
Do I have to tell M2Doc the types of the variables, and in which resource file they are located, besides handing over the variable objects? My guess is no, as using a blank .docx file without any M2Doc information stored also worked for the variables themselves (not for any additional AQL expressions using other types, or .oclAsType() type castings).
How can I provide M2Doc with the types information for the AQL expressions mentioned above, which I normally tell it via the nsURI configuration? I handed over the complete resourceSet of my application, but that doesn't seem to be enough.
Any help would be very much appreciated!
To give you an impression of my code so far, see below - note that it's actually JavaScript instead of Java, as our application has a built-in JS-Java interface.
//=================== PARSING OF THE DOCUMENT ==============================
var templateURIString = "file:///.../templateReqs.docx";
var templateURI = URI.createURI(templateURIString);
// canNOT be empty, as we get nullpointer exceptions otherwise
var options = {"TemplateURI":templateURIString};
var exceptions = new java.util.ArrayList();
var resourceSetForModels = ...; // here our application's resource set for the whole model is used, instead of M2Doc's "createResourceSetForModels" - works for the moment, but I am not sure whether some service linking is broken
var queryEnvironment = m2doc.M2DocUtils.getQueryEnvironment(resourceSetForModels, templateURI, options);
var classProvider = m2doc.M2DocPlugin.getClassProvider();
// empty Monitor for the moment
var monitor = new BasicMonitor();
var template = m2doc.M2DocUtils.parse(resourceSetForModels.getURIConverter(), templateURI, queryEnvironment, classProvider, monitor);
// =================== GENERATION OF THE DOCUMENT ==============================
var outputURIString = "file:///.../templateReqs.autogenerated.docx";
var outputURI = URI.createURI(outputURIString);
variables["myVar1"] = ...; // assigment of objects...
m2doc.M2DocUtils.generate(template, queryEnvironment, variables, resourceSetForModels, outputURI, monitor);
Thanks!
No, the API used to parse and generate modifies neither the template file nor the .genconf file. To modify the configuration of the template you will need to use the TemplateCustomProperties class. That will allow you to register your metamodels and service classes. This information is then used to configure the IQueryEnvironment, so you might also want to directly configure the IQueryEnvironment in your code.
The "Generation" in this context refers to the .genconf file. Note that the genconf file is also an EMF model, so you can also craft one in memory to launch your generation if that's easier for you. But yes, the use of a .genconf file is optional, as in your code example.
To get the list of variables in the template you can use the TemplateCustomProperties class:
TemplateCustomProperties.getVariables() lists the variables that are declared, along with their type
TemplateCustomProperties.getMissingVariables() lists the variables that are used in the template but not declared
You can also find the list of used metamodels (EPackage nsURIs) and imported service classes.
The type of the variables is not needed at generation time; it is only needed if you want to validate your template. At generation time you need to pass a map from each variable name to its value, as you did in your example. The value of a variable can be any object from your model (an EObject), a String, an Integer, ... If you want to use something like oclIsKindOf(pkg::MyEClass) you will need to register the nsURI of pkg first; see the next point.
The code you provided should let you use something like projectmodel::PJDiagram.allInstances(). This service needs a ResourceSetRootEObjectProvider, which is initialized in M2DocUtils.getQueryEnvironment(). But you need to declare the nsURI of your metamodel in your template (see TemplateCustomProperties); this will register it in the IQueryEnvironment. You can also register it yourself using IQueryEnvironment.registerEPackage().
This should help you find the missing parts in the configuration of the AQL environment. Your code looks good and should work once you add the configuration part.

Keeping preset (non-default) values when calling boost parseOptions

In my project I've got some internal config structures whose options are registered with default values (let's say Config.x=0, Config.y=0); those values are not modifiable by a client.
Sometimes users of my application want to modify the default values of those fields before parsing command-line arguments, so before parsing they just change those values manually (let's say Config.x=3, Config.y=4) and then fetch the command-line/.ini file options and parse them using parseOptions.
If those external arguments contain only a part of those options, e.g. Config.x=9, the values of the other options will be those registered with boost::program_options, not those currently assigned, so the result would be Config.x=9, Config.y=0 instead of Config.x=9, Config.y=4. So basically it seems that boost::program_options::parseOptions clears all options before parsing.
Is there any way to prevent boost from clearing already assigned options when they do not appear in the command-line arguments?
This can't be done. However, you should be able to create parsed_options either manually¹, or you can supply the options as a "faux" configfile so you can actually use the configfile parser on it.
Once you have the parsed_options you can store/notify them as usual.
¹ though this isn't supported/documented; see the comment at boost::program_options::basic_parsed_options<>

Clojure - component and names

I am trying to understand Stuart Sierra's component, specifically the naming convention for the components in order to structure a Clojure application.
If I look into system, for instance, I see several components mapped to the :server key:
aleph
immutant
Since both use the same key :server, does that mean that I can only use one of them if I use this library?
Similarly, I use onyx. Several components are already defined inside onyx's system.clj.
Does that mean that some keys are effectively reserved by onyx ?
What will happen to the :port parameter, which seems to be used by many components in the wild ?
Questions
What is the difference between the keys used when assoc-ing in the start method and the keys used in component/system-map?
Is there a naming convention for those keys, and how do we avoid collisions between them?
In which cases (if any) does it make sense to have several systems, and can they run at the same time?
Keys in the system map identify specific components (instances) in that system. You can use whatever key you like for whatever component you need.
Keys in a specific component record can be one of three things:
a configuration value set up at creation time
some internal value that is irrelevant to the user of the component
a dependency (which will refer to another component when the system is started)
1 and 2 are generally set up by the component constructor, and users do not need to care about the actual key used in the record.
Dependencies are configured by setting a mapping on the depending component from the dependency key (3) to the key in the system map that refers to the dependency component. This is done with the component/using function, giving it a map of component keys to system-map keys as the second argument. That way you can always map any expected key to any actually used key. You can use the short-hand form of component/using with a vector of keys, but only if the keys in the system map are the same as the keys in the component you're configuring.
I hope that answers the first two questions.
As for the third question, I think I'd like to see an example of what you're looking for as a separate post.
The last question: yes you can have multiple systems running at the same time. That may or may not make sense depending on what you want to do, but running a test system as well as a development system seems like a fairly obvious setup.

TMG SF_NOTIFY_POLICY_CHECK_COMPLETED Event

According to http://msdn.microsoft.com/en-us/library/ff823993%28v=VS.85%29.aspx, during this event the web filter can request the GUID of the matching rule. I am assuming that is done by calling GetServerVariable with a type of SELECTED_RULE_GUID, since I could find no other readily identifiable means of doing so.
My problem comes from the fact that I want to see whether the rule is allowing or blocking the request. If it's being blocked then my filter doesn't have to take any action, but if it's being allowed I need to do some work. SF_NOTIFY_POLICY_CHECK_COMPLETED seems to be the best event to watch, since it occurs late enough that authentication and various ms_auth traffic has been handled, but just before the request either gets routed or fetched from the cache.
I had thought that perhaps I needed to use COM and the IFPC interfaces (following along with the example code for registering web filters with TMG) to get details on the rule. However, going down via FPC -> FPCArray -> FPCArrayPolicy -> FPCPolicyRules, the only element-returning function takes either an index or a name.
Which is problematic, given that I only have a GUID.
The FPCPolicyRule object (singular) doesn't seem to have any field related to the GUID either, which rules out just iterating over the collection for it.
So my question boils down to, from the SF_NOTIFY_POLICY_CHECK_COMPLETED event, how would a web filter determine if the request has been allowed or denied?
After more investigation and testing: the GUID is accessible via the PersistentName of the FPCPolicyRule object. Since the FPCPolicyRules->Item member only works with either a Name or an Index, I had to iterate through its items, comparing each PersistentName against the GUID.
Apologies if this was obvious; it took me a good day to work out :)

Detail question on REST URLs

This is one of those little detail (and possibly religious) questions. Let's assume we're constructing a REST architecture, and for definiteness let's assume the service needs three parameters, x, y, and z. Reading the various works about REST, it would seem that this should be expressed as a URI like
http://myservice.example.com/service/x/y/z
Having written a lot of CGIs in the past, it seems about as natural to express this as
http://myservice.example.com/service?x=val&y=val&z=val
Is there any particular reason to prefer the all-slashes form?
The reason is small but here it is.
Cool URIs Don't Change.
The http://myservice.example.com/resource/x/y/z/ form makes a claim in front of God and everybody that this is the path to a specific resource.
Note that I changed the name. There may be a service involved, but the REST principle is that you're describing a specific web resource, named /x/y/z/.
The http://myservice.example.com/service?x=val&y=val&z=val form doesn't make as strong a claim. It says there's a piece of code named service that will try to do some sort of query. No guarantees.
Query parameters are rarely "cool". Take a look at the Google Chart API. Should that use a /full/path/notation for all of the fields? Would each URL be cool if it did?
Query parameters are useful. Optional fields can be omitted. New keys can be added to support new functionality. Over time, old fields can be deprecated and removed. Doing this is clumsier with a /path/notation.
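As an illustration of that flexibility, here is a small Python sketch; the parameter names are made up for the example, and fields the caller leaves unset are simply omitted:
from urllib.parse import urlencode

def build_query(**params):
    # Optional fields left as None are omitted; new keyword arguments
    # pass straight through, so the format can grow over time without
    # breaking old callers.
    return urlencode({k: v for k, v in params.items() if v is not None})

print(build_query(width=250, height=100, title=None))
# -> width=250&height=100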
Quoting from http://www.xml.com/pub/a/2004/08/11/rest.html
URI Opacity [BP]
The creator of a URI decides the encoding of the URI, and users should not derive metadata from the URI itself. URI opacity only applies to the path of a URI. The query string and fragment have special meaning that can be understood by users. There must be a shared vocabulary between a service and its consumers.
This sounds like query strings are what you want.
One downside to query strings is that they are unordered. A GET ending with "?x=1&y=2" is different from one ending with "?y=2&x=1". This means the browser and any other intermediate systems won't be able to cache them as the same request, because caching is done based on the full URL. If this is a concern, then generate the query string in a well-defined order.
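For instance, a Python sketch of emitting the parameters in a fixed (sorted) order, so that semantically identical requests share one cache key; the base URL is the hypothetical one from the question:
from urllib.parse import urlencode

def canonical_url(base, params):
    # Sorting by parameter name means the same set of values always
    # produces the same URL, and therefore the same cache key.
    return base + "?" + urlencode(sorted(params.items()))

print(canonical_url("http://myservice.example.com/service", {"y": 2, "x": 1}))
# -> http://myservice.example.com/service?x=1&y=2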
While constructing URIs, this is the principle I follow. I don't know whether it is perfectly acceptable in all cases.
Say, for instance, that I have to get the details of an employee; then the URI will be of the form:
GET /employees/1/ and not GET /employees?id=1, since I treat every employee as a resource and the whole URI "employees/{id}" identifies that resource.
On the other hand, if I have algorithmic operations that do not identify a specific resource as such, but merely require inputs to the algorithm which in turn identify the resource, then I use query strings.
For instance, GET /employees?empname='%Bob%'&maxResults=100 might give me all employees whose names contain the word Bob, with the maximum number of results returned by the query limited to 100.
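To make that concrete, here is a sketch using Python's requests library; the host, the JSON response shape, and the parameter values are assumptions for the example:
import requests

# Query-style access: the parameters feed a search algorithm,
# they do not name one specific resource.
response = requests.get(
    "http://myservice.example.com/employees",
    params={"empname": "%Bob%", "maxResults": 100},
)
employees = response.json()  # assuming the service returns JSON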
Hope this answers your question
URIs are strictly split into a hierarchical part (the path) and a non-hierarchical part (the query), and both serve to identify the resource.
The URI spec itself (RFC 3986) clearly treats the path and the query portions of a URI as equals.
Section 3.3:
The path component contains data [...] that along with [the] query component serves to identify a resource.
Section 3.4:
The query component contains [...] data that, along with [...] the path component serves to identify a resource
So your choice between x/y/z and x=val&y=val&z=val has mainly to do with whether x, y, and z are hierarchical in nature or non-hierarchical, whether you can expect them to stay that way for the foreseeable future, and any technical limitations you might have that favor one form over the other.
But to answer your question, as others have noted: Neither is more RESTful than the other, since they both end up identifying a resource.
If the resource is the service, independent of parameters, it should be
http://myservice.example.com/service?x=val&y=val&z=val
This is a GET query. One of the principles behind REST is that you GET to read (but not modify!) the resource; you can POST to modify a resource & get a response; you can PUT to write to a resource; and you can DELETE to remove a resource.
If the resource specified by those parameters is a persistent resource, it needs a name. You could (if you organized your web service this way) POST to http://myservice.example.com/service?x=val&y=val&z=val to create a particular instance of the service and have it return an ID to name this instance, e.g.
http://myservice.example.com/service/12312549
then use GET/POST/PUT/DELETE to interact with that instance.
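A rough sketch of that flow with Python's requests; the endpoint, the JSON field holding the ID, and the parameter values are assumptions, not part of any real service:
import requests

base = "http://myservice.example.com/service"

# POST creates a persistent instance for these parameters and
# (by assumption) returns the new instance's ID in a JSON body.
created = requests.post(base, params={"x": 1, "y": 2, "z": 3})
instance_id = created.json()["id"]

# From now on the instance has its own stable URI.
instance_url = f"{base}/{instance_id}"
print(requests.get(instance_url).status_code)  # read it
requests.delete(instance_url)                  # remove it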
First of all, defining URIs as part of your API violates a constraint of the REST architecture. You cannot do that and call your API RESTful.
Secondly, the reason query parameters are bad for non-query resource access is that they are generally not cached. It is also a violation of HTTP standards.
A URL with slashes like /x/y/z/ would impose a hierarchy and is not suited for the exact case of just passing three parameters.
If, as you said, x, y, and z are indeed just parameters and their order is not important, it would be more RESTful to use semicolons:
http://myservice.example.com/service/x;y;z/
If your "service" however is just an algorithm that works the same with different parameters, there would also be nothing unRESTful with using ?x=val format.