Push GstBuffer list to next downstream element - gstreamer

I have a GstBufferList *list inside the upstream element and want to pass it to the downstream element for further processing.
Is there any way in GStreamer to pass this GstBufferList to the next element?

https://gstreamer.freedesktop.org/documentation/gstreamer/gstpad.html?gi-language=c#gst_pad_push_list
gst_pad_push_list() pushes a GstBufferList downstream. If the downstream pad has a chain_list implementation, it will receive the complete list; if not, the chain function is called with each GstBuffer individually.

Related

(OMNeT++) Where do packets go?

I'm trying to do a project described also here: PacketQueue is 0
I've modified UdpBasicApp.cc to suit my needs, and in the handleMessage() function I added a piece of code to retrieve the length of the Ethernet queue of a router. However, the value returned is always 0.
My .ned file regarding the queue of routers is this:
**.router*.eth[*].mac.queue.typename = "DropTailQueue"
**.router*.eth[*].mac.queue.packetCapacity = 51
The code added in the UdpBasicApp.cc file is this:
cModule *mod = getModuleByPath("router3.eth[*].mac.queue.");
queueing::PacketQueue *queue = check_and_cast<queueing::PacketQueue*>(mod);
int c = queue->getNumPackets();
So my question is this: is this the right way to create a queue in a router linked to other nodes with an ethernet link?
My doubt is that maybe packets don't pass through the specified interface, i.e. I've set the ini parameters for the wrong queue.
You are not creating that queue; it was already instantiated by the OMNeT++ kernel. You are just getting a reference to an already existing module with the getModuleByPath() call.
The router3.eth[*].mac.queue. module path in that call is rather suspicious. It is hard-coded in every instance of your application, so each app reads the queue length from router3 even when the app is installed in router1, i.e. you may be looking at the queue of a completely different node. Next, the eth[*] is wrong. A router obviously contains more than one Ethernet interface (otherwise it would not be a router), so you must explicitly specify which interface you mean: patterns are not allowed in a module path, an exact index such as eth[0] must be given. At that point you have to answer the question of which Ethernet interface you are actually interested in, and specify that index. Finally, the trailing . at the end is also invalid, so I believe your code never executes; otherwise the check_and_cast part would already have raised an error.
If you want to reach the first Ethernet interface from a UDP app in the same node, use a relative path, something like this: ^.eth[0].mac.queue
Finally, if you are unsure whether your model works correctly, why not start the model in Qtenv and check whether the given module receives any packets? That is, drill down in the model until the given queue is opened as a simple module (i.e. you see the empty inside of the queue module), then use "run/fast run until next event in this module". If the simulation does not stop, then that module indeed never received any packets and your configuration is wrong.

Is it possible to assign a task to a specific worker in Ray?

Specifically, I'd like my parameter store worker to always be invoked on the HEAD node, and not on any of the workers; this way I can optimize the resource configuration. Currently the parameter store task seems to get started on a random server, even if it is called first, and even if it is followed by a ray.get().
Maybe it's possible to do something like:
ps = ParameterStore.remote(onHead=True)?
You can start the "head" node with an extra custom resource and then you can make the parameter store actor require that custom resource. For example, start the head node with:
ray start --head --resources='{"PSResource": 1}'
Then you can declare the parameter store actor class with
@ray.remote(resources={"PSResource": 1})
class ParameterStore(object):
    pass

ps = ParameterStore.remote()
You can also declare the parameter store actor regularly and change the way you invoke it. E.g.,
@ray.remote
class ParameterStore(object):
    pass

ps = ParameterStore._remote(args=[], resources={"PSResource": 1})
You can read more about resources in Ray at https://ray.readthedocs.io/en/latest/resources.html.

How can I get the current queue in Ember.js?

Tell me please, how can I debug queues in Ember.js and get the current queue with "debugger;"?
You can get the current queue by inspecting:
Ember.run.currentRunLoop.queues
You'll notice you have many queues there:
Object {sync: Queue, actions: Queue, routerTransitions: Queue, render: Queue, afterRender: Queue…}
You have to expand each property that is a Queue (for example, actions) and see whether it has a _queueBeingFlushed property defined. If it does, that is the current Queue.
Example of _queueBeingFlushed for actions Queue:
_queueBeingFlushed: Array[4]
0: null
1: ()
2: undefined
3: undefined
length: 4
Once you know that, you can also filter Ember.run.currentRunLoop.queues and get the current Queue programmatically.

Data structure for circuit switching?

I would like to create something like this:
I have a module that does something like 'circuit switching' for a stream of messages. That is, it has a single inport and multiple outports. When a message arrives at the inport, an outport is selected based on some logic (the logic is not important in the context of this question). It is then checked whether there is any ongoing message transfer on that outport (for the first message there won't be). If there is no transfer, the message is sent to that outport; otherwise, it is kept in a queue for that particular outport. I need to decide on a data structure for this communication. Please advise.
My idea is to have a map of outports and corresponding queues.
queue<message> m_incoming_queue;
typedef map<outport*,m_incoming_queue> transaction_map
If this is a good solution, I want to know how to create a queue at runtime, since I don't know in advance how many outports there will be; I create outports based on requirements.
Maybe something like:
// At the beginning
typedef queue<message> MessageQueue;
typedef map<outport*, MessageQueue> transaction_map;

transaction_map tm; // Create the transaction map

// On receipt of each message
// (some logic determines outport *op and message m)
if (tm.count(op) == 0)
{
    // There is no queue for this outport yet; create one and insert it
    tm.insert(transaction_map::value_type(op, MessageQueue()));
}
// A queue now exists for this outport, so add the message to it
tm[op].push(m);
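For completeness, here is a self-contained sketch of the map-of-queues idea; the Outport and Message types below are stand-ins (assumptions for illustration), since the real classes are not shown in the question:

```cpp
#include <map>
#include <queue>
#include <string>

// Stand-ins for the questioner's types (assumptions for illustration).
using Outport = int;
using Message = std::string;
using TransactionMap = std::map<Outport, std::queue<Message>>;

// Queue a message for the selected outport. std::map::operator[]
// default-constructs an empty queue on first access, so a new queue
// is created at runtime the first time an outport is seen.
void enqueue(TransactionMap& tm, Outport op, const Message& m) {
    tm[op].push(m);
}

// When a transfer on an outport completes, fetch the next queued
// message for it, if any.
bool dequeue(TransactionMap& tm, Outport op, Message& out) {
    TransactionMap::iterator it = tm.find(op);
    if (it == tm.end() || it->second.empty())
        return false;
    out = it->second.front();
    it->second.pop();
    return true;
}
```

Note that tm[op].push(m) alone already covers the "create the queue at runtime" requirement, since operator[] inserts a default-constructed queue for an unseen key; the explicit count/insert above just makes that step visible.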

Publisher/Subscriber with changing subscriptions during loop

This is more of a general design query. I have implemented a publish/subscribe pattern by maintaining a list of subscribers. When an event to publish occurs, I loop through the subscribers and push the event to each one of them in turn.
My problem occurs when, due to that publication, somewhere in the depths of the software, another component (or even the notified component itself) decides to unsubscribe. By doing so, it invalidates my iterator and causes crashes.
What is the best way to solve this? I have been thinking of wrapping the whole publication loop in a try/catch block, but that means some subscribers miss the particular publication during which someone unsubscribed, and it seems a bit over the top. Then I tried feeding it back: I turned the void publish call into a bool publish call that returns true when the subscriber wants to be deleted, which works for that case, but not if another subscriber unsubscribes. Then I thought of "caching" unsubscription requests somewhere and releasing them when the loop is done, but that seems a bit overkill. Then I thought of storing the iterator as a class member so that I can manipulate it from outside, but that gets messy (say you unsubscribe subscriber 1 while the iterator points at 2, and the container is a vector; then the iterator would have to be decremented). I might prefer one of the latter two solutions, but neither seems ideal.
Is this a common problem? Is there a more elegant solution?
You could either disallow subscription operations during publication, or you could use an appropriate data structure to hold your subscription list, or both.
Assuming that you keep your subscribers in a std::list, you could run your loop thus:
for(iterator_type it = subs.begin(); it != subs.end(); ) {
iterator_type next = it;
++next;
it->notifier();
it = next;
}
That way, if the current item is removed, you still have a valid iterator in next. Of course, you still can't allow arbitrary removal (what if next is removed?) during publication.
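To make that behaviour concrete, here is a runnable version of the loop; the Subscriber type and the counting callback are assumptions for illustration, not part of the original answer:

```cpp
#include <list>

struct Subscriber;
using SubList = std::list<Subscriber*>;

// Hypothetical subscriber: it counts notifications and may
// unsubscribe itself during the callback.
struct Subscriber {
    int notified = 0;
    bool unsubscribeOnNotify = false;
    SubList* subs = nullptr;

    void notify() {
        ++notified;
        if (unsubscribeOnNotify)
            subs->remove(this);  // erases only this element's node
    }
};

// The loop from the answer: save `next` before notifying, so removal
// of the *current* element during the callback cannot break the loop.
void publish(SubList& subs) {
    for (SubList::iterator it = subs.begin(); it != subs.end();) {
        SubList::iterator next = it;
        ++next;
        (*it)->notify();
        it = next;
    }
}
```

Here a subscriber that removes itself mid-callback is still followed by the rest of the list, because next was captured before the callback ran; std::list erasure invalidates only the erased node's iterator.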
To allow arbitrary removal, mark an item as invalid and defer its list removal until it is safe to do so:
... publication loop ...
dontRemoveItems = true;
for(iterator_type it = subs.begin(); it != subs.end(); ++it) {
    if(it->valid)
        it->notifier();
}
subs.remove_if(IsNotValid); // purge the entries marked invalid above
dontRemoveItems = false;
elsewhere,
... removal code:
if(dontRemoveItems) item->valid = false;
else subs.erase(item);
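Putting both pieces together, here is a minimal self-contained sketch of the defer-and-purge scheme; the class and member names are illustrative, not from the original:

```cpp
#include <list>

// Illustrative subscriber: `valid` is cleared instead of erasing the
// entry while a publication loop is in progress.
struct Sub {
    bool valid = true;
    int notified = 0;
};

struct Pub {
    std::list<Sub*> subs;
    bool dontRemoveItems = false;

    void subscribe(Sub* s) { subs.push_back(s); }

    // Safe to call at any time: defers the erase if publishing.
    void unsubscribe(Sub* s) {
        if (dontRemoveItems) s->valid = false;
        else subs.remove(s);
    }

    void publish() {
        dontRemoveItems = true;
        for (std::list<Sub*>::iterator it = subs.begin(); it != subs.end(); ++it) {
            if ((*it)->valid)
                (*it)->notified++;  // stand-in for it->notifier()
        }
        // Now it is safe to purge everything marked invalid.
        subs.remove_if([](Sub* s) { return !s->valid; });
        dontRemoveItems = false;
    }
};
```

During publish() no iterators are ever invalidated, because unsubscription only flips a flag; the actual erasure happens in one place, after the loop.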