I am building these examples: https://github.com/GStreamer/gstreamer/tree/main/subprojects/gst-plugins-base/tests/examples/decodebin_next
It appears that decodebin3 only exposes the representation it has already chosen.
What should I do to list the available representations so that I can select one myself?
Then I have another question:
Is there a guide/example for using the new dashdemux2 without relying on decodebin3? Once I set the flag "GST_BIN_FLAG_STREAMS_AWARE" on my pipeline, is there anything else to do to make it work?
Regarding the class weka.attributeSelection.InfoGainAttributeEval:
The Javadoc says it has 2 options, which are expressed as command-line parameters:
-M
treat missing values as a separate value.
-B
just binarize numeric attributes instead of properly discretizing them.
The Weka GUI, on the other hand, says it has 3 options with symbolic names:
OPTIONS
missingMerge -- Distribute counts for missing values. Counts are distributed across other values in proportion to their frequency. Otherwise, missing is treated as a separate value.
binarizeNumericAttributes -- Just binarize numeric attributes instead of properly discretizing them.
doNotCheckCapabilities -- If set, evaluator capabilities are not checked before evaluator is built (Use with caution to reduce runtime).
Which option set is correct, and how can I work out the correspondence between them?
There is no single correct setting. It depends on the dataset and on how you wish to apply your attribute selection. The values in the GUI are the defaults, and of course it is not guaranteed that they are the right settings for your case.
In case you mean how to check the options: as you can see in the Javadoc, the options shown in the GUI are fields of the class, not just symbolic names, and they have getters and setters. Using the Weka API you can set them via the setOptions() method or via the individual setters, and you can read their current values via the getters. The correspondence can be read off the option handling in the Javadoc/source: -B toggles binarizeNumericAttributes, and -M switches missingMerge off (i.e. missing is treated as a separate value); doNotCheckCapabilities simply does not appear in the two-option list you quoted.
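For illustration, here is a minimal sketch using the Weka Java API; the class name InfoGainOptionsDemo and the chosen option values are just placeholders, while setOptions() and the setters/getters are the ones documented for InfoGainAttributeEval.

import weka.attributeSelection.InfoGainAttributeEval;

public class InfoGainOptionsDemo {
    public static void main(String[] args) throws Exception {
        InfoGainAttributeEval eval = new InfoGainAttributeEval();

        // Command-line style: -B corresponds to binarizeNumericAttributes,
        // -M switches missingMerge off (missing becomes a separate value).
        eval.setOptions(new String[] {"-B", "-M"});

        // Equivalent bean-style setters behind the GUI names:
        eval.setBinarizeNumericAttributes(true);
        eval.setMissingMerge(false);

        // The getters report the current values.
        System.out.println(eval.getBinarizeNumericAttributes()); // true
        System.out.println(eval.getMissingMerge());              // false
    }
}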
I am trying to figure out whether an edge/lane is internal. When SUMO creates internal edges/lanes, it prefixes their ids with a colon [1]. Currently, I am exploiting this information; however, it seems that you can also annotate arbitrary other edges as internal using the function attribute. This attribute is also set for internal edges created by SUMO [1]. Therefore, I want to retrieve the information via TraCI.
To my knowledge, there is no TraCI command to retrieve this information (i.e. either the value of function or whether the edge/lane is internal).
The classes MSEdge and MSLane in the microsim directory have methods to retrieve both of those values, however, the classes Edge and Lane from libsumo do not.
I also checked whether the value of the function tag might get added to the parameter map during initialization, which I could access via TraCI's getParameter. This also does not seem to be the case. I checked some files from the netimport directory but could not find anything satisfactory.
Is there any other way to retrieve the function/isInternal information via TraCI without adding a new TraCI command (and the aforementioned missing methods in libsumo)?
This is a static property of the network, so the easiest way of retrieving the information is to parse the network. In Python you can use sumolib for that:
import sumolib
net = sumolib.net.readNet("my.net.xml")
function = {}
for e in net.getEdges():
    function[e.getID()] = e.getFunction()
There is currently no TraCI call for that, but the colon prefix is a very good indicator. The main developers are also a bit reluctant to add every kind of static information retrieval to the TraCI API, in order not to overload it.
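For completeness, a minimal sketch of the colon check via TraCI (the configuration file name "my.sumocfg" and the vehicle id "veh0" are placeholders):

import traci

traci.start(["sumo", "-c", "my.sumocfg"])

# internal edges/lanes created by SUMO have ids starting with ":"
internal_lanes = [lane for lane in traci.lane.getIDList() if lane.startswith(":")]

# the same test works for the edge a vehicle is currently on
on_internal = traci.vehicle.getRoadID("veh0").startswith(":")

traci.close()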
In my project I've got some internal config structures containing option registrations with default values (let's say Config.x=0, Config.y=0); those values are not modifiable by a client.
Sometimes users of my application want to modify the default values of those fields before parsing command-line arguments, so before parsing they just change those values manually (let's say Config.x=3, Config.y=4) and then fetch the command-line/.ini file options and parse them using parseOptions.
If those external arguments contain only a part of those options, e.g. Config.x=9, the values of the other options will be those registered with boost::program_options, not those currently assigned, so the result would be Config.x=9, Config.y=0 instead of Config.x=9, Config.y=4. So basically it seems that boost::program_options::parseOptions clears all options before parsing.
Is there any way to prevent boost from clearing already assigned options when they do not appear in the command-line arguments?
This can't be done. However, you should be able to create the parsed_options either manually¹, or you can supply the options as a "faux" config file so you can actually use the config-file parser on it.
Once you have the parsed_options you can store/notify them as usual.
¹ though this isn't supported/documented; see the comment at boost::program_options::basic_parsed_options<>
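For illustration, a minimal sketch of the "faux" config-file idea (the option names x/y, the Config struct and the defaults are placeholders): the real command line is stored first, and because store() does not overwrite values that are already present in the variables_map, the faux config file only fills in the options that were missing from the command line.

#include <boost/program_options.hpp>
#include <iostream>
#include <sstream>

namespace po = boost::program_options;

struct Config { int x = 0; int y = 0; };

int main(int argc, char* argv[]) {
    Config cfg;
    cfg.x = 3;   // values assigned manually before parsing
    cfg.y = 4;

    po::options_description desc("options");
    desc.add_options()
        ("x", po::value<int>(&cfg.x), "x value")
        ("y", po::value<int>(&cfg.y), "y value");

    po::variables_map vm;

    // 1) store the real command line first; its values take precedence
    po::store(po::parse_command_line(argc, argv, desc), vm);

    // 2) feed the currently assigned values through the config-file parser,
    //    so options missing from the command line keep their manual values
    std::stringstream faux;
    faux << "x=" << cfg.x << "\n" << "y=" << cfg.y << "\n";
    po::store(po::parse_config_file<char>(faux, desc), vm);

    po::notify(vm);   // writes the final values back into cfg.x / cfg.y

    std::cout << cfg.x << " " << cfg.y << "\n";   // e.g. with --x=9 this prints "9 4"
}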
I am trying to understand Stuart Sierra's component, specifically the naming convention for the components in order to structure a Clojure application.
If I look into system, for instance, I see several components mapped to the :server key:
aleph
immutant
Since both use the same key :server, does that mean that I can only use one of them if I use this library?
Similarly, I use onyx. Several components are already defined inside onyx's system.clj.
Does that mean that some keys are effectively reserved by onyx?
What will happen to the :port parameter, which seems to be used by many components in the wild?
Questions
What is the difference between the keys used when assoc-ing in the start method and the keys used in component/system-map?
Is there a naming convention for those keys, and how do we avoid collisions between them?
In which cases (if any) does it make sense to have several systems, and can they run at the same time?
Keys in the system map identify specific components (instances) in that system. You can use whatever key you like for whatever component you need.
Keys in a specific component record can be one of three things:
a configuration value set up at creation time
some internal value that is irrelevant to the user of the component
a dependency (which will refer to another component when the system is started)
1 and 2 are generally set up by the component constructor, and users do not need to care about the actual key used in the record.
Dependencies are configured by setting a mapping on the depending component from the dependency key (3) to the key in the system map that refers to the dependency component. This is done with the component/using function, giving it a map of component keys to system-map keys as the second argument. That way you can always map any expected key to any actually used key. You can use the short-hand form of component/using with a vector of keys, but only if the keys in the system map are the same as the keys in the component you're configuring.
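For illustration, a minimal sketch of that mapping; the record names, the :db dependency key and the :uri/:port values are made up for the example, only component/system-map and component/using come from the library.

(ns example.system
  (:require [com.stuartsierra.component :as component]))

(defrecord Database [uri]
  component/Lifecycle
  (start [this] (assoc this :connection {:uri uri}))  ; pretend connection
  (stop [this] (dissoc this :connection)))

(defrecord WebServer [port db]
  component/Lifecycle
  (start [this] (assoc this :server {:port port :db (:connection db)}))
  (stop [this] (dissoc this :server)))

(defn new-system []
  (component/system-map
   :database (map->Database {:uri "jdbc:..."})
   :server   (component/using
              (map->WebServer {:port 8080})
              ;; component key :db -> system-map key :database
              {:db :database})))

Calling (component/start (new-system)) would then start :database before :server and assoc the started Database into the WebServer's :db field.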
I hope that answers the first two questions.
For the third question, I think I'd like to see an example of what you're looking for, as a separate post.
The last question: yes you can have multiple systems running at the same time. That may or may not make sense depending on what you want to do, but running a test system as well as a development system seems like a fairly obvious setup.
I'm trying to trigger printing of a document from my C++ (Xcode) application on Mac. I'm currently using the Launch Services framework, but I didn't notice where the "print to" option (flag) is. Is this supported by Launch Services at all? Is there some other way to do this?
Thanks,
Marko
According to Technical Note TN2082: The Enhanced Print Apple Event (in the legacy docs), you should be able to specify a keyAEPropData parameter of type kPMPrinterAEType whose value is a PMPrinter reference.
That TechNote is a bit unclear, though. It seems as though the keyAEPropData parameter carries both the print settings and the printer. The receiver can retrieve both pieces of information by coercing the parameter's "actual" value to two different types. That raises the question of whether you can specify the parameter value with just kPMPrinterAEType and have it work, or if it needs to be some other type.
Anyway, you can construct an AEDesc for the parameter and pass it in to LSOpenFromRefSpec() in the passThruParams field of the LSLaunchFSRefSpec structure.
It may help to use Script Editor to send an enhanced print Apple Event to a test application that then dumps that event. That may clear up how exactly the parameter is constructed, so you can construct it the same way.
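For what it's worth, a minimal sketch of the Launch Services side might look like the following. It only shows the LSLaunchFSRefSpec wiring with kLSLaunchAndPrint; building the printer/print-settings AEDesc itself (the unclear part of the TechNote) is left to the caller, and the function name and parameters are made up for the example.

#include <CoreServices/CoreServices.h>

OSStatus printDocumentWith(const FSRef *appRef,       // application that should do the printing
                           const FSRef *docRef,       // document to print
                           const AEDesc *printParams) // optional keyAEPropData descriptor, may be NULL
{
    LSLaunchFSRefSpec spec;
    spec.appRef         = appRef;
    spec.numDocs        = 1;
    spec.itemRefs       = docRef;
    spec.passThruParams = printParams;                           // extra Apple Event parameters
    spec.launchFlags    = kLSLaunchDefaults | kLSLaunchAndPrint; // "print" rather than "open"
    spec.asyncRefCon    = NULL;

    return LSOpenFromRefSpec(&spec, NULL);
}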