I am confused by the concept of sink and source elements.
Intuitively, I would expect a source to be something from which we take data, and a sink to be something where we put the results of our process.
In Gstreamer tutorial 2, this seems to be right: https://gstreamer.freedesktop.org/documentation/tutorials/basic/images/figure-1.png
But when I was reading tutorial 3, the definition seems to be the opposite: https://gstreamer.freedesktop.org/documentation/tutorials/basic/images/filter-element.png
I am not sure why the filter element in the second image has the sink element on the left and the src element on the right, and not the opposite. I mean, I would expect the output to be sent into a sink, not a source. Is it because we are looking at it from the "outside"? That is, the output of the filter is a source for us (and for the next element in the pipeline), even though, in non-gstreamer terminology, it could be seen internally as a sink for the filter. Is this correct?
The second picture is not about elements in a pipeline, but pads on a single element. On the first one you look at a complete pipeline, but you have figured that one out already.
On the second one you look at a single element and its pads. The element has two pads. The one on the left is the sink pad; data goes in there and is consumed by the element. On the right side you have a source pad; the element will generate data and push it to that pad (so it is, in a sense, a data source).
Treat it as vocabulary. The naming can certainly be argued about, but this is what has been agreed upon.
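As a concrete illustration (a minimal sketch using the GStreamer C API; the element choices are arbitrary and error handling is omitted), linking elements in code means connecting the "src" pad (output) of one element to the "sink" pad (input) of the next:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_pipeline_new("example-pipeline");
    GstElement *source   = gst_element_factory_make("videotestsrc", "source");
    GstElement *filter   = gst_element_factory_make("vertigotv", "filter");
    GstElement *sink     = gst_element_factory_make("autovideosink", "sink");

    gst_bin_add_many(GST_BIN(pipeline), source, filter, sink, NULL);

    /* Equivalent to gst_element_link(source, filter) etc.: the "src" pad of the
     * upstream element is connected to the "sink" pad of the downstream one. */
    gst_element_link_pads(source, "src", filter, "sink");
    gst_element_link_pads(filter, "src", sink, "sink");

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    /* ... run a main loop / wait for EOS, then clean up ... */
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}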
I created two streams: one for the input video file and one for the output. But even if I want to change only one (the first) video frame, I have to call av_read_frame and av_write_frame for every frame in a loop just to copy them from one file to the other.
How can I change only one frame and then somehow flush the other frames to the output without the av_read_frame/av_write_frame loop?
What you ask seems a logical question, if you think of the frames in a video as a sort of linked list or array - you simply want to replace one element in the array or list without having to change everything else, I think?
The problem is that video streams are typically not as simply arranged as this.
For instance, the encoding often uses previous and even following frames as a reference for the current frame. As an example, every 10th frame might be encoded fully and the frames in between be simply encoded as deltas to these reference frames.
Also, the video containers, e.g. mp4, often use offsets to point to the various elements within them. Hence, if you dive inside and change the size of a particular frame by replacing it with another one, then the offsets will not be correct anymore.
If you are doing this for a single frame in a large video file, and you know roughly where the frame is, then you may find it more effective to split the video file, into a long section before the bit you want to change, a short section containing the frame you want to change, and a long section afterwards. You can apply your change just on the short section and then concatenate the videos at the end.
If you do something like this, you have to use a proper utility to split and concatenate the video, for the same reasons as above - i.e. to make sure the split videos have the right header info, offsets etc.
ffmpeg supports these kinds of operations - for example, you can see the concatenation documentation here:
https://trac.ffmpeg.org/wiki/Concatenate
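For example, a rough sketch of that split/edit/concat workflow with ffmpeg (timestamps and file names are placeholders, and note that stream-copied cuts snap to keyframes):

# 1) split the input around the frame you want to edit
ffmpeg -i input.mp4 -to 00:09:55 -c copy part1.mp4
ffmpeg -ss 00:09:55 -i input.mp4 -t 10 part2.mp4        # only this short piece is re-encoded after the edit
ffmpeg -ss 00:10:05 -i input.mp4 -c copy part3.mp4

# 2) concatenate with the concat demuxer; list.txt contains:
#      file 'part1.mp4'
#      file 'part2_edited.mp4'
#      file 'part3.mp4'
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4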
I am new to TBB, so my apologies if this question is obvious... but how do I set up an aggregating node with TBB? Out of all the pre-made nodes, I cannot find the right type for it.
Imagine I have a stream of incoming images. I want a node that keeps accepting images (with a FIFO buffer), does some calculation on them (i.e. it needs an internal state) and whenever it has received N images (fixed parameter), it emits a single result.
I think there is no single pre-made node in the TBB flow graph that accumulates inputs with some sort of preprocessing and then, when the accumulation is done, forwards the result to its successor.
However, I believe the effect could be achieved by combining several nodes. For example, consider a queue_node as a starting point in the graph. It will serve as a buffer with FIFO semantics. After it comes a multifunction_node with N outputs. This node will do the actual image preprocessing and send the result to the output port that corresponds to the image number. Then comes a join_node whose N inputs are connected to the corresponding outputs of the multifunction_node. Finally, a successor of the join_node receives all N images as its input. Since the join_node aggregates its inputs into a tuple, the drawback of this design becomes apparent when N is relatively large.
The other variant might be to have the same queue_node connected to a function_node with unlimited concurrency as its successor (the function_node is supposed to do the image preprocessing), and then a multifunction_node with serial concurrency (meaning that only a single instance of its body can run at a time) that accumulates the images and calls try_put on its successor from inside the body once the number N is reached.
Of course, there could be other ways to implement the desired behavior using other flow graph topologies. By the way, to expose such a graph as a single node, one could use composite_node, which represents a subgraph as a single node.
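A minimal sketch of the second variant (Image, Result, and the preprocessing/accumulation logic are placeholders):

#include <tbb/flow_graph.h>
#include <cstddef>
#include <vector>

struct Image  { /* ... pixel data ... */ };
struct Result { /* ... aggregated value ... */ };

int main() {
    using namespace tbb::flow;
    const std::size_t N = 10;                      // emit one Result per N images

    graph g;
    queue_node<Image> buffer(g);                   // FIFO buffer for incoming images

    // Per-image preprocessing; may run on many images concurrently.
    function_node<Image, Image> preprocess(g, unlimited,
        [](const Image& img) { /* ... preprocess img ... */ return img; });

    // Serial accumulator: keeps internal state and pushes one Result to its
    // single output port every time N images have arrived.
    using accumulator_t = multifunction_node<Image, std::tuple<Result>>;
    accumulator_t accumulate(g, serial,
        [N, batch = std::vector<Image>()](const Image& img,
                                          accumulator_t::output_ports_type& ports) mutable {
            batch.push_back(img);                  // safe: the body runs serially
            if (batch.size() == N) {
                Result r; /* ... combine the N buffered images ... */
                std::get<0>(ports).try_put(r);
                batch.clear();
            }
        });

    function_node<Result, continue_msg> consume(g, serial,
        [](const Result&) { /* ... use the aggregated result ... */ return continue_msg(); });

    make_edge(buffer, preprocess);
    make_edge(preprocess, accumulate);
    make_edge(output_port<0>(accumulate), consume);

    for (int i = 0; i < 100; ++i) buffer.try_put(Image{});
    g.wait_for_all();
    return 0;
}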
In the AWS API for Kinesis, shard iterators have the parameter ShardIteratorType which accepts one of the following values:
AT_SEQUENCE_NUMBER, AFTER_SEQUENCE_NUMBER, AT_TIMESTAMP, TRIM_HORIZON, LATEST
However, the AWS documentation for this is very poorly written, and hence my question, which is critical for understanding how Kinesis works:
When using each of the above listed iterator types, what is the direction in which data is read?
Is the data read from the record at the current pointer toward records that were inserted before it, or toward records that were inserted after it?
Reading the shard iterator API docs:
Data is always read in the same direction: forward, from older records toward newer ones, i.e. in order of increasing sequence number. The iterator type only controls where in the shard you start reading.
LATEST starts just after the most recent record in the shard, so you only receive records that are added after the iterator is obtained.
TRIM_HORIZON starts at the oldest untrimmed record in the shard, so you get the oldest available data first and proceed toward the newest.
AT_SEQUENCE_NUMBER, AFTER_SEQUENCE_NUMBER, and AT_TIMESTAMP start at (or just after) a known position in the stream and likewise move sequentially from lower to higher sequence numbers.
I hope some of that helps. I agree the AWS wording around AT/AFTER_SEQUENCE_NUMBER is a bit confusing; if anyone has good edits on that, please comment.
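For what it's worth, here is a rough sketch with the AWS SDK for C++ (stream and shard names are placeholders, and the exact header paths may differ between SDK versions). Whatever ShardIteratorType you start from, each GetRecords call returns records in increasing sequence-number order and NextShardIterator points further forward in time:

#include <aws/core/Aws.h>
#include <aws/kinesis/KinesisClient.h>
#include <aws/kinesis/model/GetShardIteratorRequest.h>
#include <aws/kinesis/model/GetRecordsRequest.h>
#include <iostream>

int main() {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::Kinesis::KinesisClient client;

        // Ask for an iterator positioned at the oldest untrimmed record.
        Aws::Kinesis::Model::GetShardIteratorRequest itReq;
        itReq.SetStreamName("my-stream");                  // placeholder stream name
        itReq.SetShardId("shardId-000000000000");          // placeholder shard id
        itReq.SetShardIteratorType(Aws::Kinesis::Model::ShardIteratorType::TRIM_HORIZON);

        Aws::String iterator =
            client.GetShardIterator(itReq).GetResult().GetShardIterator();

        // Each GetRecords call moves forward: records come back oldest-first.
        for (int i = 0; i < 3 && !iterator.empty(); ++i) {
            Aws::Kinesis::Model::GetRecordsRequest recReq;
            recReq.SetShardIterator(iterator);
            auto outcome = client.GetRecords(recReq);
            for (const auto& rec : outcome.GetResult().GetRecords())
                std::cout << rec.GetSequenceNumber() << "\n";
            iterator = outcome.GetResult().GetNextShardIterator();
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}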
The quicktime documentation recommends the following approach to finding a keyframe:
Finding a Key Frame
Finding a key frame for a specified time in a movie is slightly more complicated than finding a sample for a specified time. The media handler must use the sync sample atom and the time-to-sample atom together in order to find a key frame.
The media handler performs the following steps:
1. Examines the time-to-sample atom to determine the sample number that contains the data for the specified time.
2. Scans the sync sample atom to find the key frame that precedes the sample number chosen in step 1.
3. Scans the sample-to-chunk atom to discover which chunk contains the key frame.
4. Extracts the offset to the chunk from the chunk offset atom.
5. Finds the offset within the chunk and the sample’s size by using the sample size atom.
source: https://developer.apple.com/library/mac/documentation/QuickTime/qtff/QTFFChap2/qtff2.html
This is quite confusing, since multiple tracks ("trak" atom) will yield different offsets. For example, the keyframe-sample-chunk-offset value for the video trak will be one value, and the audio will be another.
How does one translate the instructions above into a location in the file (or mdat atom)?
That's not restricted to key frames. You can't in general guarantee that samples for different tracks are close to each other in the file. You hope that audio and video will be interleaved so you can play back a movie without excessive seeking but that's up to the software that created the file. Each track has its own sample table and chunk atoms that tell you where the samples are in the file and they could be anywhere. (They could even be in a different file, though reference movies are deprecated nowadays so you can probably ignore them.)
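To make the quoted steps concrete, here is a rough per-track sketch, assuming the relevant sample-table atoms (stts, stss, stsc, stco/co64, stsz) have already been parsed into the in-memory structures below; all type and field names are made up for illustration and error handling is omitted:

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical, already-parsed sample tables for ONE track.
struct TimeToSampleEntry  { uint32_t sampleCount; uint32_t sampleDelta; };     // stts
struct SampleToChunkEntry { uint32_t firstChunk;  uint32_t samplesPerChunk; }; // stsc (description index omitted)

struct SampleTable {
    std::vector<TimeToSampleEntry>  timeToSample;   // stts
    std::vector<uint32_t>           syncSamples;    // stss: 1-based key-frame sample numbers, ascending
    std::vector<SampleToChunkEntry> sampleToChunk;  // stsc
    std::vector<uint64_t>           chunkOffsets;   // stco/co64: absolute file offset of each chunk
    std::vector<uint32_t>           sampleSizes;    // stsz: one size per sample
};

// Step 1: media time (in the track's time scale) -> 1-based sample number, via stts.
uint32_t sampleForTime(const SampleTable& t, uint64_t mediaTime) {
    uint64_t time = 0;
    uint32_t sample = 1;
    for (const auto& e : t.timeToSample) {
        uint64_t span = uint64_t(e.sampleCount) * e.sampleDelta;
        if (mediaTime < time + span)
            return sample + uint32_t((mediaTime - time) / e.sampleDelta);
        time   += span;
        sample += e.sampleCount;
    }
    return sample - 1;                              // past the end: clamp to the last sample
}

// Step 2: nearest key frame at or before the given sample, via stss.
uint32_t keyFrameAtOrBefore(const SampleTable& t, uint32_t sample) {
    uint32_t key = 1;
    for (uint32_t s : t.syncSamples) {
        if (s > sample) break;
        key = s;
    }
    return key;
}

// Steps 3-5: 1-based sample number -> absolute file offset, via stsc, stco and stsz.
uint64_t offsetOfSample(const SampleTable& t, uint32_t sample) {
    uint32_t chunk = 1;
    uint32_t firstSampleInChunk = 1;
    for (std::size_t i = 0; i < t.sampleToChunk.size(); ++i) {
        uint32_t nextFirstChunk = (i + 1 < t.sampleToChunk.size())
                                      ? t.sampleToChunk[i + 1].firstChunk
                                      : uint32_t(t.chunkOffsets.size()) + 1;
        uint32_t spc = t.sampleToChunk[i].samplesPerChunk;
        uint32_t samplesInRun = (nextFirstChunk - t.sampleToChunk[i].firstChunk) * spc;
        if (sample < firstSampleInChunk + samplesInRun) {
            uint32_t chunksToSkip = (sample - firstSampleInChunk) / spc;
            chunk = t.sampleToChunk[i].firstChunk + chunksToSkip;
            firstSampleInChunk += chunksToSkip * spc;
            break;
        }
        firstSampleInChunk += samplesInRun;
    }
    uint64_t offset = t.chunkOffsets[chunk - 1];
    for (uint32_t s = firstSampleInChunk; s < sample; ++s)   // sizes of earlier samples in the same chunk
        offset += t.sampleSizes[s - 1];
    return offset;
}

Running these three steps over the video track's tables gives the key frame's offset for that track; the audio track has its own tables and will generally point somewhere else in the mdat.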
I've written a Thrift definition and used this definition to serialize multiple records into one file (I've added the size of the whole record at the beginning of each record). That is, in short, what I have done.
// Serialize one record into an in-memory buffer using the binary protocol.
boost::shared_ptr<apache::thrift::transport::TMemoryBuffer> transport(new apache::thrift::transport::TMemoryBuffer);
boost::shared_ptr<apache::thrift::protocol::TBinaryProtocol> protocol(new apache::thrift::protocol::TBinaryProtocol(transport));
myClass->write(protocol.get());
// The serialized bytes, ready to be written to the file with a length prefix.
const std::string & data(transport->getBufferAsString());
Afterwards I just write the string data to the file in binary mode. Now I want to deserialize this file again. I wouldn't have any problem if there were only one record in the file; unfortunately, the file contains multiple records, so I guess I have to work with offsets based on the sizes I saved in the file along with the records themselves. However, I can't seem to find any example I can use to achieve my goal, and the official documentation is quite lacking. Does anyone have any tips for me? If I'm missing some information, just ask.
Further information:
Of course I want to use Thrift to deserialize. However, one file can contain multiple records. For example: imagine I have defined a struct in a Thrift definition file that contains car information. Now I serialize multiple car structs into one output file. Serializing is no problem, as I just append the data. If I want to deserialize, however, I have to know where one record starts and the next begins. That is my problem. I don't know how to tell Thrift where one record begins and ends. I've searched the internet but can't seem to find an example for C++ (I got one for Python so far, but am not able to translate it to C++). The structure of one file can be described as follows: [lengthofrecord1][record1][lengthofrecord2][record2][...]
Thanks in Advance
Michael
How about having a list<records> that you de/serialize as a whole? Or is it an absolute requirement to read them independently and randomly? If yes, I see 1.5 (one and a half) possible solutions:
Have a second file as an index. This holds a map<recordNumber, offset>, or simply a sorted list of integer pairs, to quickly locate records. Since these data are much smaller than the records, you can probably cache them in memory the whole time.
The half solution: if (and only if) the record size is fixed, any record's position can be calculated easily by multiplying recordSize * (recordNr - 1). This way you don't even need the size prefix. If you have strings in the record or other variable-sized entities, this will not work, unless you force a fixed record size by reserving a buffer for each record with a predefined (maximum) size. It's a little ugly, thus the "half" solution, but you don't need the index file.
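A rough sketch of writing length-prefixed records while building such an index on the side (the struct name Car stands for whatever your .thrift definition generates; file handling is simplified):

#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TBufferTransports.h>
#include <boost/shared_ptr.hpp>
#include <cstdint>
#include <fstream>
#include <map>
#include <string>
#include <vector>

void writeRecords(const std::vector<Car>& cars, const std::string& dataFile) {
    std::ofstream out(dataFile.c_str(), std::ios::binary);
    std::map<uint32_t, uint64_t> index;              // record number -> offset in the data file

    for (uint32_t i = 0; i < cars.size(); ++i) {
        // Serialize one record into memory (same as in the question).
        boost::shared_ptr<apache::thrift::transport::TMemoryBuffer> transport(
            new apache::thrift::transport::TMemoryBuffer);
        boost::shared_ptr<apache::thrift::protocol::TBinaryProtocol> protocol(
            new apache::thrift::protocol::TBinaryProtocol(transport));
        cars[i].write(protocol.get());
        const std::string data = transport->getBufferAsString();

        index[i] = static_cast<uint64_t>(out.tellp());                      // where this record starts
        uint32_t length = static_cast<uint32_t>(data.size());
        out.write(reinterpret_cast<const char*>(&length), sizeof(length)); // [lengthofrecordN]
        out.write(data.data(), data.size());                               // [recordN]
    }
    // The index map can now be written to a second (index) file.
}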
Although maybe not the perfect solution, this seems to work for me:
boost::shared_ptr<apache::thrift::transport::TMemoryBuffer> transport(new apache::thrift::transport::TMemoryBuffer);
boost::shared_ptr<apache::thrift::protocol::TBinaryProtocol> protocol(new apache::thrift::protocol::TBinaryProtocol(transport));
transport->resetBuffer((uint8_t*) buffer, sizeOfEntry);
Buffer is a char array containing the desired record (I used seekg for the offset) and sizeOfEntry is the record's size. Afterwards I can go on with the automatically generated read method of my Thrift-generated class. In fact, I had this solution earlier; I had just messed up my offset, which is why it didn't work.
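For completeness, the read side might look roughly like this: read the length prefix, read that many bytes, point a TMemoryBuffer at them, and call the generated read method (Car again stands for the generated struct):

#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TBufferTransports.h>
#include <boost/shared_ptr.hpp>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

std::vector<Car> readRecords(const std::string& dataFile) {
    std::vector<Car> cars;
    std::ifstream in(dataFile.c_str(), std::ios::binary);

    uint32_t sizeOfEntry = 0;
    while (in.read(reinterpret_cast<char*>(&sizeOfEntry), sizeof(sizeOfEntry))) {
        std::vector<uint8_t> buffer(sizeOfEntry);
        in.read(reinterpret_cast<char*>(buffer.data()), sizeOfEntry);

        boost::shared_ptr<apache::thrift::transport::TMemoryBuffer> transport(
            new apache::thrift::transport::TMemoryBuffer);
        boost::shared_ptr<apache::thrift::protocol::TBinaryProtocol> protocol(
            new apache::thrift::protocol::TBinaryProtocol(transport));
        transport->resetBuffer(buffer.data(), sizeOfEntry);   // wrap the raw bytes of one record

        Car car;
        car.read(protocol.get());                             // generated deserialization method
        cars.push_back(car);
    }
    return cars;
}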