In the context of a dynamic pipeline, I read that one should do the following when unlinking two elements or pads:
1. Block the source element via a blocking probe.
2. Flush the sink element via EOS to ensure the sink element is empty.
3. Unlink.
https://gstreamer.freedesktop.org/documentation/application-development/advanced/pipeline-manipulation.html?gi-language=javascript#changing-elements-in-a-pipeline
Should one always do this? And if so, out of curiosity: is there a reason it's not part of the core library?
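For reference, here is roughly what those three steps look like in code, as far as I understand them (begin_unlink, blocked_cb, and the element variables are my own names; the full example on the linked page also installs an event probe to catch the EOS before actually unlinking):

    #include <gst/gst.h>

    /* Fires once the upstream src pad is blocked: no more data flows. */
    static GstPadProbeReturn
    blocked_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      GstElement *to_drain = GST_ELEMENT (user_data);

      /* Step 2: push EOS into the downstream element so it flushes out
         whatever it still holds. When the EOS shows up on that element's
         src pad (watch for it with an event probe, not shown here), the
         element is empty and step 3, unlinking/removing, is safe. */
      GstPad *sinkpad = gst_element_get_static_pad (to_drain, "sink");
      gst_pad_send_event (sinkpad, gst_event_new_eos ());
      gst_object_unref (sinkpad);

      /* Keep the pad blocked; remove the probe only after relinking. */
      return GST_PAD_PROBE_OK;
    }

    /* Step 1: install a blocking probe on the upstream src pad. */
    static void
    begin_unlink (GstElement *upstream, GstElement *to_drain)
    {
      GstPad *srcpad = gst_element_get_static_pad (upstream, "src");
      gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
                         blocked_cb, to_drain, NULL);
      gst_object_unref (srcpad);
    }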
I am trying to understand the following code. If I have 50 connections to this server and I send data through one of these sockets, the select block with the inner loop will capture what I send and echo it back. But what happens if, within a very short time-frame of the first message, I send another one? So fast that the inner loop (after select, the loop iterating over all active client sockets) doesn't finish? Will that data be thrown away? Will it be what the next select will be triggered with? What happens if I send two messages before the inner loop finishes? Will I ever face the scenario where, inside the loop iterating over all the active sockets, I get more than one that has "activity", i.e. can two FD_ISSET(sd, &readfds) be true within a single iteration of the loop?
Yes, multiple descriptors can be ready to read in a single iteration. The return value of select() is the number of descriptors that are ready, and it can be more than 1. As you loop through the descriptors, you should increment a counter when FD_ISSET(sd, &readfds) is true, and continue until the counter reaches this number.
But even if you only process one descriptor, nothing will be thrown away. select() is not triggered by changes; it returns whenever any of the descriptors is ready to read (or write, if you also use writefds). If a descriptor is ready to read but you don't read from it, it will still be ready to read the next time you call select(), so select() will return immediately.
However, if you only process the first descriptor you find in the loop, later descriptors could be "starved" if an earlier descriptor is always ready to read, and you never process the later ones. So it's generally best to always process all the ready descriptors.
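A minimal sketch of that pattern, assuming the same kind of bookkeeping as the code in the question (client_socket[] and MAX_CLIENTS are placeholders for whatever names the real code uses):

    #include <sys/select.h>
    #include <unistd.h>

    #define MAX_CLIENTS 50
    extern int client_socket[MAX_CLIENTS];   /* 0 = unused slot */

    void serve_once(void)
    {
        fd_set readfds;
        int maxfd = -1;

        FD_ZERO(&readfds);
        for (int i = 0; i < MAX_CLIENTS; ++i) {
            int sd = client_socket[i];
            if (sd > 0) {
                FD_SET(sd, &readfds);
                if (sd > maxfd) maxfd = sd;
            }
        }

        /* nready is how many descriptors are readable right now. */
        int nready = select(maxfd + 1, &readfds, NULL, NULL, NULL);

        /* Service *all* ready descriptors, not just the first one found,
           so no client gets starved. */
        for (int i = 0; i < MAX_CLIENTS && nready > 0; ++i) {
            int sd = client_socket[i];
            if (sd > 0 && FD_ISSET(sd, &readfds)) {
                --nready;
                char buf[1024];
                ssize_t n = read(sd, buf, sizeof buf);
                if (n > 0)
                    write(sd, buf, (size_t)n);   /* echo back */
            }
        }
    }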
select() is a level-triggered API, which means that it answers the question "are any of these file descriptors readable/writable now?", not "have these file descriptors become readable/writable?". That should answer most of your questions:
But what happens if, within a very short time-frame of the first message, I send another one? [...] Will it be what the next select will be triggered with?
It will be what the next select() will be triggered with.
What happens if I send two messages before the inner loop finishes?
That depends on how long the messages are - TCP doesn't work in terms of messages, but in terms of a stream of bytes. The server might well read both messages in a single read(). And if it doesn't, the socket will remain readable, and it will pick them up immediately on the next select().
Will I ever face the scenario where, inside the loop iterating over all the active sockets, I get more than one that has "activity", i.e. can two FD_ISSET(sd, &readfds) be true within a single iteration of the loop?
Yes, if two clients send data at the same time (while you are out of select()), select() will report two readable file descriptors.
To add to the already excellent answers:
The select function in this case isn't grabbing packets directly from the wire, it's actually going to the packet buffer, usually a part of the NIC, to grab packets/frames that are available to be read. The packet buffer is normally a ring buffer: it has a fixed size, new packets come in at the "top", and when the buffer gets full, the oldest packets drop out of the "bottom".
Just as @sam-varshavchik mentioned in the comments, as long as select is implemented correctly and the packet buffer doesn't clog up while you are going through the select loop, you will be fine.
Here's an interesting article on how to implement a packet ring buffer for a socket.
I am using Boost.Log to build the logging system for my program.
I understand the Boost.Log mechanism like this:
The core singleton registers the sink, which raises the sink's shared pointer use count by 1; then adding the backend raises the count to 2, on top of the main shared pointer of the sink.
In my code I remove the sink from the core, and I expect the use count of this frontend sink to drop to 1; I then test whether this shared pointer is unique and, if so, reset it.
I use multiple threads and a mutex to protect the Boost.Log code working with this specific sink (I also have a cout sink, which I do not protect).
The problem is: sometimes I find that the frontend sink's shared pointer count is not 2, but 3.
I do not know why this is happening, as every sink is registered with the core once, making its count 1; then, after adding the backend, we should have a count of 2 only.
Is there any way I can verify that the core has removed the frontend sink?
Is there any way to know where each instance of the shared pointer lives in the code?
Thanks a lot.
Update:
If core.remove_sink is executed on one thread while, at the same time, the core logs to cout on another thread (the cout sink is not protected by the mutex), I can see on the console that the message is written in the wrong position, where a message that comes after core.remove_sink ought to be. BUT in this case the frontend sink's shared pointer count is not reduced!
Did the core discard the remove_sink that arrived at the same time as logging to another sink?
Is there any way I can verify that the core has removed the frontend sink?
The sink is considered removed when remove_sink returns. That is, it will not receive any future log records.
It may not be released by the library at that point because there may be log records in progress at the time of the remove_sink call, and remove_sink may return before those log records are fully processed. Log record processing will continue and may involve the sink that is being removed. Eventually, when all log records are processed and remove_sink has returned, the sink will have been released by the core and, if no more references are left, destroyed.
You can detect when the sink is no longer present by using a weak_ptr, which you can construct from a shared_ptr referencing the sink. When the last shared_ptr referencing the sink object is destroyed or reset, the weak_ptr::lock method will return a null shared_ptr. Note that this includes any shared_ptrs to the sink that you may be holding in your own code.
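A minimal sketch of that technique, assuming a synchronous sink with a text_ostream_backend (the sink type and setup here are placeholders for whatever your code actually uses):

    #include <boost/core/null_deleter.hpp>
    #include <boost/log/core.hpp>
    #include <boost/log/sinks/sync_frontend.hpp>
    #include <boost/log/sinks/text_ostream_backend.hpp>
    #include <boost/make_shared.hpp>
    #include <boost/smart_ptr.hpp>
    #include <iostream>

    namespace logging = boost::log;
    namespace sinks = boost::log::sinks;

    typedef sinks::synchronous_sink<sinks::text_ostream_backend> sink_t;

    int main()
    {
        boost::shared_ptr<sink_t> sink = boost::make_shared<sink_t>();
        sink->locked_backend()->add_stream(
            boost::shared_ptr<std::ostream>(&std::clog, boost::null_deleter()));
        logging::core::get()->add_sink(sink);

        boost::weak_ptr<sink_t> weak_sink(sink); // non-owning observer

        logging::core::get()->remove_sink(sink); // no future records delivered
        sink.reset();                            // drop our own reference too

        // lock() yields a null shared_ptr once every reference is gone.
        if (!weak_sink.lock())
            std::cout << "sink released and destroyed\n";
    }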
Is there any way to know where each instance of the shared pointer lives in the code?
Generally, no. You will have to manually track where you pass and save pointers to objects.
I have a ring buffer in which I want to place bytes received over a serial port. These received bytes consist of a command followed by data bytes, and each command-and-data combination can be of a different length. I want to implement a method with which I can copy one command out of this buffer and execute it, then the next command, and so on. What would be the best (and simplest) way of doing it?
A simple byte stream will do. Actually, as a first step you can even use a wrapper over std::stringstream as the storage for your byte stream: fill it from the routine that communicates with the serial port and read from it with the instruction decoder.
When you talk about a ring buffer, it can be something as simple as a char[ring_size_in_bytes] plus an int indicating the current location. Reading from the port should be byte by byte, and when you reach the end of the buffer you start again from zero.
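A minimal sketch of such a buffer (the names and the fixed size are mine):

    #include <cstdint>

    const unsigned RING_SIZE = 256;          // must fit the longest command

    struct RingBuffer {
        std::uint8_t data[RING_SIZE];
        unsigned head = 0;                   // next write position
        unsigned tail = 0;                   // next read position

        // Called from the receive path: store one byte, wrap at the end.
        bool put(std::uint8_t b) {
            unsigned next = (head + 1) % RING_SIZE;
            if (next == tail)
                return false;                // full; caller decides what to do
            data[head] = b;
            head = next;
            return true;
        }

        // Called from the command interpreter: fetch one byte if available.
        bool get(std::uint8_t &b) {
            if (tail == head)
                return false;                // empty
            b = data[tail];
            tail = (tail + 1) % RING_SIZE;
            return true;
        }
    };

The interpreter described below would then drain the buffer with get() until it has assembled one complete command.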
I usually use a simple buffer for the command and for the data read from the port. After copying data from the port into a small buffer, I interpret the data on the fly, trying to find the beginning of a command, and then keep putting bytes into the buffer until I detect a new beginning. Then I enqueue the command and start over. This constitutes the top half (the fast one) of the service routine.
The serial port is very slow, so there is no risk that you will fail to read data from it fast enough. Each iteration (interrupt) will give you a couple of bytes...
I would use a queue of a type that encapsulates your commands to store the list of received commands.
How complicated is your serial protocol?
I am having trouble using the general_work function for a block which takes a vector as input and outputs a message.
The block is a kind of demodulator. In fact it works great if I keep sending data periodically.
But I need to create only one piece of data (a frame) of a predefined size and send it to this block. And I want this block to handle all of the items in its buffer without waiting for more data.
As I understand it, this is about the buffering and scheduler structure of GNU Radio, but I couldn't figure out how to give this block the ability to handle all the symbols of the frame that I've sent without waiting for another frame.
For example, let's say my frame has 150 symbols. The scheduler calls my general_work function two, three, or four times (I don't know how it decides the number of calls to my general_work).
However, it stops at, let's say, symbol #141 or #143. Every time I run it, it stops at a different symbol number. If I send another frame, it finishes handling the remaining items (symbols) in its buffer.
Does anybody know how I can tell the scheduler not to wait for another frame before processing the remaining items in its buffer from the previously sent data?
First of all, thank you for your advice. I am working on a link-layer protocol and its implementation using SDR for my graduate thesis. Because I'm not a DSP expert, I need a WiFi PHY layer (transceiver), so I decided to use an OOT module, the "802.11 a/g/p Transceiver" project developed by Bastian Bloessl, which is available at https://github.com/bastibl/gr-ieee802-11.git. He provides an example flow-graph (wifi_loopback.grc) to simulate the transceiver. Besides the transceiver (the DSP part) itself, he has also implemented some data-link-layer functionality for 802.11, such as framing and error control.

In the example flow-graph, the "Message Strobe" block serves as a kind of application layer, producing data periodically and sending it to a block called "OFDM MAC", which has 4 message ports (app_in, app_out, phy_in, and phy_out). In this block, the raw data coming from the "Message Strobe" is encapsulated by adding a header and FCS information. The encapsulated data is then sent (phy_out) to a hierarchical block called "Wifi PHY Hier", which does the DSP work: scrambling, coding, interleaving, symbol mapping, modulation, and so on. The data is converted to a signal, received by the same block ("Wifi PHY Hier"), and the reverse process is applied: descrambling, decoding, etc. The decoded frame is then handed to the "OFDM MAC" block (phy_in). If you run this flow-graph, everything is normal: the data sent by "Message Strobe" is received correctly.
However, because I am trying to implement a kind of link-layer protocol, I need some feedback from destination to source, such as an ACK message. So I decided to start by implementing a simple stop-and-wait protocol in which the source sends a message and waits for an ACK from the destination: DATA -> ACK -> DATA -> ACK... and so on. To do that, I created a simple source block which sends only one piece of data and waits for an ACK message before sending another. The data I produce with my source block is the same as the data produced by "Message Strobe". When I replaced the "Message Strobe" block with my source block, I realized something was wrong, because I couldn't receive my data. So I followed my data in order to find which step caused this situation. There is no problem with the transmission process. In the receive process, I found the problematic block, which lives inside the "Wifi PHY Hier" block and is the last block before this hierarchical block hands its data to the "OFDM MAC" block. This problematic block, called "OFDM Decode MAC", has two ports: the output port is a message port, and the input port takes a complex vector. So I reviewed the code of this block, in particular its general_work() function. For my particular test data, in order to complete its job correctly, it should consume 177 items to produce an output for "OFDM MAC". However, it stops consuming items after 172 items have been consumed. I overrode the forecast() method and set ninput_items_required[0] = 177, but nothing happened, because, as I understand it, the scheduler never sees 177 items in the input buffer. As you said, this is because the block ("OFDM Decode Signal") that writes into this block's input buffer produces only 172 items.
I have not dug deeper yet, but the interesting point is that when I send a second piece of data (at runtime) after a while, without waiting for an ACK, the remaining 5 items of the first data I sent are somehow consumed and received correctly by the "OFDM MAC" block. And now the second piece of data is in the same problematic situation the previous one was in. If I send a third, the second one is also received correctly. I'm really confused. How can this be?
I'll comment quickly on your text, and then advise below:
I am having trouble using the general_work function for a block which takes a vector as input and outputs a message.
That block is, from a sample-stream perspective, a sink. You will find that when using sink as the block type in gr_modtool, you get a sync_block, which means you only have to implement work, not general_work and forecast.
The block is a kind of demodulator. In fact it works great if I keep sending data periodically.
So that's great!
But I need to create only one piece of data (a frame) of a predefined size and send it to this block. And I want this block to handle all of the items in its buffer without waiting for more data.
That sounds like your block doesn't actually take streams of samples, but blocks of samples. That is a job for either
message passing (so your block would have no input stream, just a message port) or
tagged stream blocks.
Sounds like the second to me.
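For the tagged-stream route, a minimal sketch might look like this (the block name, the "out" message port, and the "packet_len" tag key are placeholders; the upstream block has to tag the first item of each frame with the frame length):

    #include <gnuradio/tagged_stream_block.h>
    #include <gnuradio/io_signature.h>
    #include <gnuradio/gr_complex.h>
    #include <pmt/pmt.h>
    #include <cstdint>
    #include <vector>

    class frame_demod : public gr::tagged_stream_block
    {
    public:
        frame_demod()
          : gr::tagged_stream_block(
                "frame_demod",
                gr::io_signature::make(1, 1, sizeof(gr_complex)),
                gr::io_signature::make(0, 0, 0),   // no stream output
                "packet_len")                      // length tag key
        {
            message_port_register_out(pmt::mp("out"));
        }

    protected:
        int work(int noutput_items,
                 gr_vector_int &ninput_items,
                 gr_vector_const_void_star &input_items,
                 gr_vector_void_star &output_items)
        {
            const gr_complex *in =
                static_cast<const gr_complex *>(input_items[0]);
            // The scheduler guarantees the *whole* tagged frame is here.
            int frame_len = ninput_items[0];

            std::vector<std::uint8_t> bytes;
            // ... demodulate frame_len items from 'in' into 'bytes' ...

            message_port_pub(pmt::mp("out"),
                             pmt::make_blob(bytes.data(), bytes.size()));
            return 0;  // no stream items produced
        }
    };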
As I understand it, this is about the buffering and scheduler structure of GNU Radio, but I couldn't figure out how to give this block the ability to handle all the symbols of the frame that I've sent without waiting for another frame.
Frame is what you make of this – to GNU Radio, your samples are just items that get written to and read from a buffer.
For example, let's say my frame has 150 symbols. The scheduler calls my general_work function two, three, or four times (I don't know how it decides the number of calls to my general_work).
It doesn't decide – that simply reflects the chunks in which the symbols get written into the input buffer of your block. You don't have to consume all of these (or any of them) if your block isn't able to produce output with the input given. Just let GNU Radio know how many items were consumed (in the sync block case, it's implicitly done with the return value; in the general_work case, you might have to manually call consume – another reason to change your block type!).
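For illustration, a sketch of that accounting in a message-output general_work (my_block_impl and the demod_one_symbol helper are hypothetical stand-ins for your block and algorithm):

    // Hypothetical helper: demodulate one symbol, return false if it
    // cannot make progress with the input at hand.
    static bool demod_one_symbol(const gr_complex *sym);

    int my_block_impl::general_work(int noutput_items,
                                    gr_vector_int &ninput_items,
                                    gr_vector_const_void_star &input_items,
                                    gr_vector_void_star &output_items)
    {
        const gr_complex *in =
            static_cast<const gr_complex *>(input_items[0]);
        int navail = ninput_items[0];  // whatever happens to be buffered now
        int nused = 0;

        // Process as much as the available input allows; it is fine to
        // stop early and pick up the rest on the next call.
        while (nused < navail && demod_one_symbol(in + nused))
            ++nused;

        consume(0, nused);  // tell the scheduler exactly what we used
        return 0;           // message-only block: no stream output
    }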
However, it stops at, let's say, symbol #141 or #143. Every time I run it, it stops at a different symbol number. If I send another frame, it finishes handling the remaining items (symbols) in its buffer.
That sounds like a bug in your algorithm, not in GNU Radio. Maybe your input buffer is simply full, or maybe the block that writes into it simply doesn't provide more data?
Does anybody know how I can tell the scheduler not to wait for another frame before processing the remaining items in its buffer from the previously sent data?
The scheduler doesn't wait; as soon as there is data to be processed, it instantly "wakes" your block, and asks it to process the items.
I've reached out to Bastian, the developer of this OOT module. He said the cause of the problem was a kind of padding issue. If a block called "Packet Padding2", which can be found in another OOT module also developed by him, is placed after "Wifi PHY Hier" and its Pad Tail parameter is set to an appropriate value, the problem is solved.
I'm trying to spread data across multiple workers using Open MPI; however, I'm dividing the data in a fairly custom way that is not amenable to MPI_Scatter or MPI_Bcast. What I would like to do is give each processor some work in a queue (or some other async mechanism), such that it can do its work on the first chunk of data, take the next chunk, and repeat until there are no more chunks.
I know of MPI_Isend; however, if I send data with MPI_Isend I can't modify it until the send has finished, forcing me to use MPI_Wait and thus having to wait until the other end has finished receiving the data anyway!
Is there a standard solution to this problem, or must I rethink my approach?
Using MPI_ISEND doesn't necessarily mean that the message is received on the remote end. It just means that the buffer is available for reuse. It could be that the message has been buffered internally by Open MPI or that the message actually has been received on the other end. It depends on your message size.
Another option would be to have your workers ask the master process for work when they need it, instead of having it pushed to them. Then the master hands out work only as needed. You could do an MPI_SCATTER for the first message, since everyone will be receiving some data. After that, have the master do an MPI_RECV(MPI_ANY_SOURCE) to get a message from one of the worker processes.
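A minimal sketch of that pull model (the tag values and the one-int "chunk" payload are my own simplifications):

    #include <mpi.h>

    const int TAG_WORK  = 1;  // master -> worker: here is a chunk
    const int TAG_DONE  = 2;  // master -> worker: no more chunks
    const int TAG_READY = 3;  // worker -> master: send me more

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                     // master
            int next_chunk = 0, total_chunks = 100, active = size - 1;
            while (active > 0) {
                MPI_Status st;
                int dummy;
                // Wait for *any* worker to ask for work.
                MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_READY,
                         MPI_COMM_WORLD, &st);
                if (next_chunk < total_chunks) {
                    MPI_Send(&next_chunk, 1, MPI_INT, st.MPI_SOURCE,
                             TAG_WORK, MPI_COMM_WORLD);
                    ++next_chunk;
                } else {
                    MPI_Send(&next_chunk, 1, MPI_INT, st.MPI_SOURCE,
                             TAG_DONE, MPI_COMM_WORLD);
                    --active;
                }
            }
        } else {                             // worker
            for (;;) {
                int dummy = 0, chunk;
                MPI_Status st;
                MPI_Send(&dummy, 1, MPI_INT, 0, TAG_READY, MPI_COMM_WORLD);
                MPI_Recv(&chunk, 1, MPI_INT, 0, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_DONE)
                    break;
                // ... process 'chunk' ...
            }
        }
        MPI_Finalize();
        return 0;
    }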