GStreamer PyGObject Flag and Enum usage

I am using PyGObject to run GStreamer pipelines from Python.
I need to set properties on some of my GStreamer elements, for example the profiles property of the rtspclientsink element.
I am using <element>.set_property(name, val), the PyGObject equivalent of GObject's set_property function.
However, in this particular case val is a flag, GstRTSPProfile. How can I import and instantiate this flag as a Python type, so that I can set the property without resorting to a bare integer?
This is how I am importing Gstreamer:
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstBase', '1.0')
from gi.repository import GObject, Gst, GstBase
Importing GstRTSPProfile from gi.repository does not work. I'm not able to access GstRTSPProfile from Gst or GstBase either.

To set a property from its string value, as you would with gst-launch-1.0, you can use Gst.util_set_object_arg, as outlined in this documentation: https://lazka.github.io/pgi-docs/#Gst-1.0/functions.html#Gst.util_set_object_arg. Note that Gst.util_set_object_arg silently does nothing on failure (the property doesn't exist, or the string is not a valid value), so you may want to check the result yourself.
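For example, a minimal sketch for the rtspclientsink case (assuming the element's plugin is installed and that "avp" is a valid string nick for the profiles flags):
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

sink = Gst.ElementFactory.make("rtspclientsink", None)
# Parse the string exactly as gst-launch-1.0 would; flag nicks can be
# combined with '+', e.g. "avp+savp"
Gst.util_set_object_arg(sink, "profiles", "avp")
print(sink.get_property("profiles"))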

You can get the gobject.GEnum by asking the element that uses it (thanks to https://stackoverflow.com/a/62139101/104446). Here's an example using the queue element and leaky property.
enum_class = Gst.ElementFactory.make("queue").find_property("leaky").enum_class
I couldn't figure out how to use this class directly in a useful way; it needs to be instantiated with the integer value, e.g.
>>> enum_class(1)
<enum Leaky on upstream (new buffers) of type __main__.GstQueueLeaky>
But you can create a Python enum from it:
>>> from enum import IntEnum
>>> GstQueueLeaky = IntEnum(
...     enum_class.__name__,
...     {v.value_nick.upper(): k for k, v in enum_class.__enum_values__.items()}
... )
>>> GstQueueLeaky.DOWNSTREAM
<GstQueueLeaky.DOWNSTREAM: 2>
>>> GstQueueLeaky.__members__
mappingproxy({
'NO': <GstQueueLeaky.NO: 0>,
'UPSTREAM': <GstQueueLeaky.UPSTREAM: 1>,
'DOWNSTREAM': <GstQueueLeaky.DOWNSTREAM: 2>
})
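The resulting members can then be passed straight to set_property, since an IntEnum carries its integer value (a small usage sketch):
>>> queue = Gst.ElementFactory.make("queue")
>>> queue.set_property("leaky", GstQueueLeaky.DOWNSTREAM)
>>> queue.get_property("leaky") == GstQueueLeaky.DOWNSTREAM
True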

Related

Separate the dependency list (requirements) from conanfile.py

We are planning to set up Conan repositories for our C++ code. We want to expose only the list of dependencies (lib/version@user/channel) to the developers, not the logic we put in conanfile.py. We are planning this because we are creating a wrapper around Conan that contains extra logic and checks. This wrapper will be exposed to the users; they do not need to know the detailed logic and build steps.
Is there a way to implement the requirements (the dependency list) outside conanfile.py and make the list available to the users, so that they can choose which version of a library they want to use, somewhat similar (though not identical) to pom.xml in the Maven world?
The above answer from @amit-jaim is quite good. I would like to point out a couple of further details:
It is necessary to export the .list file, as it will also be used when the conanfile runs from the cache.
The conanfile can be made a bit more pythonic.
The code could be like:
from conans import ConanFile, load

class HelloConan(ConanFile):
    name = "Hello"
    version = "0.1"
    exports = "deps.list"

    def requirements(self):
        for r in load("deps.list").splitlines():
            self.requires(r)
If you want to be able to run conan create from directories other than the one containing the conanfile, you need to locate the conanfile first, something like (note the added os import):
import os

def requirements(self):
    f = os.path.join(os.path.dirname(__file__), "deps.list")
    for r in load(f).splitlines():
        self.requires(r)
I found two solutions:
Create a list of libraries to be used, and read it from the requirements method:
localhost$ cat dependencies.list
lib1/0.0.1@user/stable
lib2/1.6.0@user/stable
lib3/1.5.0@user/stable
Remember not to put quotes around the values; they are passed to the self.requires() method as plain strings. Now define the requirements method in conanfile.py as follows:
def requirements(self):
    try:
        with open("/path/to/dependencies.list") as c:
            for line in c:
                line = line.strip()  # drop the trailing newline
                if line:
                    self.requires(line)
    except Exception as ex:
        print(ex)
Define the requirements method outside conanfile.py. Use this approach if the library dependencies are conditional.
localhost$ cat requires.py
def requires(self):
    self.requires("lib1/0.0.1@user/stable")
    self.requires("lib2/2.6.0@user/stable")
    if self.options.shared:
        self.requires("lib3/1.5.0@user/stable")
    else:
        self.requires("lib3/1.5.1@user/stable")
Then import the requires function and assign it to the requirements method of the Conan class, as follows:
from conans import ConanFile, CMake, tools
from requires import requires

class HelloConan(ConanFile):
    name = "Hello"
    version = "0.0.1"
    license = "LICENSE"
    url = "URL"
    description = "libHello, Version 0.0.1"
    settings = "os", "compiler", "build_type", "arch"
    ....
    ....
Now, instead of defining the requirements method with def requirements(self), do this:
    requirements = requires
    ....
    ....
That's it! conan install will read the dependency list, and any libraries found in the registry will be installed.

How to pass parameter to dataflow template for pipeline construction

I am trying to make an ancestor query like this example and turn it into a template version.
The problem is that the parameter ancestor_id is used by the function make_query during pipeline construction.
If I don't pass it when creating and staging the template, I get RuntimeValueProviderError: RuntimeValueProvider(option: ancestor_id, type: int).get() not called from a runtime context. But if I pass it at template creation, it behaves like a StaticValueProvider that never changes when I execute the template.
What is the correct way to pass parameter to template for pipeline construction?
import apache_beam as beam
from apache_beam.io.gcp.datastore.v1.datastoreio import ReadFromDatastore
from apache_beam.options.pipeline_options import PipelineOptions
from google.cloud.proto.datastore.v1 import entity_pb2
from google.cloud.proto.datastore.v1 import query_pb2
from googledatastore import helper as datastore_helper
from googledatastore import PropertyFilter
class TestOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_value_provider_argument('--ancestor_id', type=int)

def make_query(ancestor_id):
    ancestor = entity_pb2.Key()
    datastore_helper.add_key_path(ancestor, KIND, ancestor_id)
    query = query_pb2.Query()
    datastore_helper.set_kind(query, KIND)
    datastore_helper.set_property_filter(query.filter, '__key__', PropertyFilter.HAS_ANCESTOR, ancestor)
    return query

pipeline_options = PipelineOptions()
test_options = pipeline_options.view_as(TestOptions)
with beam.Pipeline(options=pipeline_options) as p:
    entities = p | ReadFromDatastore(PROJECT_ID, make_query(test_options.ancestor_id.get()))
Two problems.
First, ValueProvider.get() can only be called in a runtime method such as ParDo.process(). See example.
Second, your challenge is that you are using the Google Cloud Datastore IO (a query from Datastore). As of today (May 2018), the official documentation indicates that Datastore IO does NOT accept runtime template parameters yet.
For Python in particular, the following connectors accept runtime parameters:
File-based IOs: textio, avroio, tfrecordio
A workaround: you could first run a query without any templated parameters to get a PCollection of entities; since transforms can accept templated parameters, you may then be able to apply the ancestor as a runtime filter, as in the sketch below. Whether this works depends on your use case, and it may not be applicable to you.
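A hedged sketch of that workaround (has_ancestor is a hypothetical helper you would have to write; the point is only that .get() is legal inside process()):
import apache_beam as beam

class FilterByAncestor(beam.DoFn):
    """Keep only entities under the given ancestor id (illustrative)."""
    def __init__(self, ancestor_id):
        self.ancestor_id = ancestor_id  # a ValueProvider, not a plain int

    def process(self, entity):
        # .get() may be called here because process() runs at pipeline
        # execution time, not at template construction time.
        if has_ancestor(entity, self.ancestor_id.get()):  # hypothetical helper
            yield entity

# entities = p | ReadFromDatastore(PROJECT_ID, make_query_without_ancestor())
# filtered = entities | beam.ParDo(FilterByAncestor(test_options.ancestor_id))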

Manually pass commands into argparse? | Python 2.7

I am making a terminal game using Python's wonderful cmd library, but I was curious whether I could somehow put argparse code into it, i.e. use argparse to handle the 'args' from my cmd.Cmd() class.
To do this, I was really hoping that argparse had a way to manually pass args into it. I skimmed the docs but didn't notice anything like that.
parse_args() takes an optional argument args: a list (or tuple) of strings to parse. Calling parse_args() without arguments is equivalent to parse_args(sys.argv[1:]):
In a script, parse_args() will typically be called with no arguments, and the ArgumentParser will automatically determine the command-line arguments from sys.argv.
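A minimal sketch of passing an explicit list (the option name is illustrative):
import argparse

parser = argparse.ArgumentParser(prog='game')
parser.add_argument('--level', type=int, default=1)

# Parse an explicit list instead of reading sys.argv
args = parser.parse_args(['--level', '3'])
print(args.level)  # prints 3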
If you do not have a list, but a single string, shell-like argument splitting can be accomplished with shlex.split():
>>> import shlex
>>> shlex.split('"A" B C\\ D')
['A', 'B', 'C D']
Note that argparse will print usage and help messages, and will exit() on fatal errors. You can override .error() to handle errors yourself:
import argparse

class ArgumentParserNoExit(argparse.ArgumentParser):
    def error(self, message):
        raise ValueError(message)  # or whatever you like
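Putting the pieces together for the cmd.Cmd use case, a rough sketch (the command name and options are illustrative; ArgumentParserNoExit is the subclass defined above):
import cmd
import shlex

class GameShell(cmd.Cmd):
    attack_parser = ArgumentParserNoExit(prog='attack')
    attack_parser.add_argument('target')
    attack_parser.add_argument('--power', type=int, default=1)

    def do_attack(self, line):
        try:
            # split the raw line like a shell, then hand it to argparse
            args = self.attack_parser.parse_args(shlex.split(line))
        except ValueError as err:
            print(err)
            return
        print('Attacking %s with power %d' % (args.target, args.power))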
You could also use a namedtuple to manually provide the input arguments:
from collections import namedtuple
ManualInput = namedtuple("ManualInput", ["arg1", "arg2"])
args = ManualInput(1, 4)
you will get
>>> print(args.arg2)
4

unbound method <method> must be called with <class> instance as first argument

I would like to provide default behaviour for a class as illustrated below.
import numpy as np

class Test:
    def __init__(self, my_method=None):
        self.my_method = my_method or np.min

Test().my_method([1, 2, 3])  # >>> 1
The code works as expected. To keep all the default values together for easier code maintenance I wanted to change the code to
import numpy as np

class Test:
    default_method = np.min

    def __init__(self, my_method=None):
        self.my_method = my_method or Test.default_method

Test().my_method([1, 2, 3])  # >>> TypeError
but the call to my_method fails with the error message unbound method amin() must be called with Test instance as first argument (got list instance instead). Oddly, the code works if I use the builtin min rather than np.min, i.e. the following runs as expected.
import numpy as np

class Test:
    default_method = min  # no np.

    def __init__(self, my_method=None):
        self.my_method = my_method or Test.default_method

Test().my_method([1, 2, 3])  # >>> 1
What am I missing?
Any function stored as an attribute on a class object is treated as a method by Python. On Python 2, that means it requires the first argument to be an instance of the class (which will be passed automatically if the attribute is requested via an instance). On Python 3, unbound methods no longer check their arguments in that way (so your code would work as written).
To work around the issue on Python 2, try wrapping the default_method value with staticmethod:
class Test(object):
    default_method = staticmethod(np.min)
    # ...
This might not be a bad idea even on Python 3, since you'll also be able to use self.default_method rather than explicitly naming the class.
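For instance, a sketch of that variant (assumes numpy is imported as np, as in the question):
class Test(object):
    default_method = staticmethod(np.min)

    def __init__(self, my_method=None):
        # staticmethod prevents binding, so the lookup via self is safe
        self.my_method = my_method or self.default_method

Test().my_method([1, 2, 3])  # 1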
As for why the code worked with min but not np.min, that's because they are implemented differently. You can see that from their types:
>>> type(min)
<class 'builtin_function_or_method'>
>>> type(np.min)
<class 'function'>
Regular functions (like np.min) act as descriptors when they're attributes of a class (thus getting the "binding" behavior that was causing your issue). Builtin functions like min don't support the descriptor protocol, so the issue doesn't come up.
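You can verify this directly: plain Python functions implement the descriptor protocol (they have a __get__ method), while builtin functions do not:
>>> import numpy as np
>>> hasattr(np.min, '__get__')
True
>>> hasattr(min, '__get__')
False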

Call overloaded methods from IronPython

I would like to call the Curve.Trim(CurveEnd, Double) method from the RhinoCommon API via IronPython. How can I make sure I don't get the overload for Curve.Trim(Double, Double)?
crv.Trim(geo.CurveEnd.End, 8.8)
#raised: Message: expected float, got CurveEnd
Note: If you want to try it yourself you will need to install a trial version of Rhino. It includes a Python editor.
Edit / Addition: the .Overloads property as mentioned by Jeff does not work here either. A snippet for testing:
import rhinoscriptsyntax as rs
import Rhino.Geometry as geo
import System
# first draw a curve longer than 8.8 units
crvO = rs.GetObject() # to pick that curve on the 3d GUI screen
crv = rs.coercecurve(crvO) # to get Rhino.Geometry.Curve
# these both don't work:
crv.Trim(geo.CurveEnd.End, 8.8)
#Message: expected float, got CurveEnd
crv.Trim.Overloads[geo.CurveEnd, System.Double](geo.CurveEnd.End, 8.8)
#Message: Trim() takes at least 2147483647 arguments (2 given)
Note: rhinoscriptsyntax is a library built on the Rhino namespace from RhinoCommon.
Use the .Overloads property to access a method's overloads:
crv.Trim.Overloads[geo.CurveEnd, float](geo.CurveEnd.End, 8.8)
The docs.
I'm late to the party, but this code works (without using .Overloads):
import rhinoscriptsyntax as rs
import Rhino
import scriptcontext as sc
import System

crv_id = rs.GetObject()  # pick a curve in the viewport
crv = rs.coercecurve(crv_id)  # get the Rhino.Geometry.Curve
trimmed = crv.Trim(Rhino.Geometry.CurveEnd.End, 4)
sc.doc.Objects.Replace(crv_id, trimmed)
sc.doc.Views.Redraw()