I have an LTTng trace which I am parsing using the Babeltrace API. I was wondering if I could count all events in a trace (or stream) without iterating over them. Which functions from the public API can I use to do that?
The very nature of CTF makes it impossible to count the event records of a given packet in constant time. The packet's context could include an event record count field somehow, but it's not specified, so generic tools would not use it.
Thus the only way to count events is to iterate the event records, unfortunately. The easiest way is to count the number of lines that the text format of the babeltrace(1) tool prints:
babeltrace /path/to/ctf/trace/directory | wc --lines
This works as long as there's one line per printed event record, which is the case unless an event record contains a string field which has a newline (currently not escaped in the text output).
You may also wish to consider discarded event records. They are not printed to the standard output by babeltrace(1), but the tool prints a message including the count to the standard error when they are detected.
There's no way with the current babeltrace(1) tool to only print the event records which belong to the packets of a given data stream. If you need this, what I suggest is that you remove all the data stream files except the one for which you need an event record count, and run the command above again.
Also consider the Babeltrace Python bindings, for example (not tested):
import babeltrace


def count_ctf_event_records(path):
    # create a trace collection and add the CTF trace found at `path`
    trace_collection = babeltrace.TraceCollection()
    trace_collection.add_trace(path, 'ctf')

    # iterate over all event records, counting them
    return sum(1 for event in trace_collection.events)


if __name__ == '__main__':
    import sys
    print(count_ctf_event_records(sys.argv[1]))
Saved as count.py, you can try this:
python3 count.py /path/to/ctf/trace/directory
Counting the event records of a specific data stream with the Python bindings is left as an exercise for the reader.
Having said this, I don't know if the Python bindings approach is faster than the babeltrace(1) one.
A tee operation is expected to take an input and return two copy outputs.
I noticed that bonobo-etl features Tee nodes, but it's not clear how they are intended to be used.
Can they be used to fork the running graph into two directions?
Or are they intended for a Load-type persistent action, to use without stopping the data flow in that particular node?
The bonobo.Tee(f: Callable) operation just applies a function and passes the stream input unmodified to the stream output.
Although the name obviously comes from the unix tool tee (as you pointed out), it's not a perfect match: in the bonobo version, one output is the stream output and the other "output" is just the callable you provide. This callable may or may not send data to a stream (and sending data to a stream is kind of hackish for now).
As an example, if you use Tee(print), the stream is passed both to the output and to print.
As another, more realistic example, you should be able to do the following:
import bonobo
import queue

# queue that the Tee callable will feed, next to the normal stream output
output_queue = queue.Queue()


def get_graph():
    graph = bonobo.Graph()
    # every item goes both to output_queue.put (via Tee) and to print
    graph >> range(100) >> bonobo.Tee(output_queue.put) >> print
    return graph


if __name__ == "__main__":
    with bonobo.parse_args() as options:
        bonobo.run(get_graph())

    # drain whatever the Tee callable collected during the run
    while True:
        try:
            print("out:", output_queue.get_nowait())
        except queue.Empty:
            break
Hope that helps.
For example, this is the data:
1,1470732420000,0
2,1470732421000,0
3,1470732422000,0
4,1470732423000,86
5,1470732424000,87
6,1470732425000,88
7,1470732426000,84
8,1470732427000,0
9,1470732428000,0
10,1470732429000,0
11,1470732430000,89
12,1470732431000,89
13,1470732432000,87
14,1470732433000,89
15,1470732434000,85
16,1470732435000,89
17,1470732436000,89
18,1470732437000,87
19,1470732438000,86
20,1470732439000,88
21,1470732440000,0
22,1470732441000,0
23,1470732442000,0
24,1470732443000,87
25,1470732444000,85
26,1470732445000,86
27,1470732446000,0
28,1470732447000,0
29,1470732448000,0
30,1470732449000,0
Column one is id, column two is timestamp, column three is value; there is a 1 second interval between the timestamps.
I want to monitor the value of each event. If I find a value >= 85 (e.g. id=4), I start counting; if the next two consecutive values are also >= 85 (e.g. id=5 and id=6), then I put the third event into the OutputStream (e.g. id=6, value=88, timestamp=1470732425000).
At the same time I clear the count and wait for a value lower than 85 (e.g. id=7, value=84); then I start monitoring again. When I find a value >= 85 (e.g. id=11, value=89) I start counting, and if the next two consecutive values are also >= 85 (e.g. id=12 and id=13), then I put the third event into the OutputStream (e.g. id=13, value=87, timestamp=1470732432000), and so on.
That is all I want to do. Before posting this question I got an answer in this post, and I tried this code:
from every a1=InputStream[value>=85], a2=InputStream[value>=85]+, a3=InputStream[value<85]
select a2[1].id, a2[1].value
having (not (a2[1] is null))
insert into OutPutStream;
It works, but I found out that it only inserts the value into the OutputStream after a value below 85 arrives, and what I want is to insert the value immediately once there are three consecutive values >= 85 (I don't want to keep waiting while the following values stay >= 85).
In fact, I just want to record the value of the third second in any run of three consecutive seconds with values >= 85.
I'm using wso2das-3.1.0-SNAPSHOT.
Though DAS (Siddhi) supports sequence/pattern processing, for your requirement you might need to write a custom extension. I have written a sample window processor extension to cater to your requirement (source code). Download and place siddhi-extension-condition-window-1.0.jar in the <das_home>/repository/components/lib/ directory and restart the server. Refer to the test case to get an idea of how to use the extension; the sketch below illustrates the logic it implements.
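For clarity, this is a rough, plain C++ sketch (not Siddhi, and with a hypothetical Event struct plus a hard-coded slice of the sample data above) of that detection logic: emit the third event of a run of three consecutive values >= 85 immediately, then stay quiet until a value drops below 85 again.
#include <iostream>
#include <vector>

// hypothetical representation of one input row: id, timestamp, value
struct Event { int id; long long timestamp; int value; };

int main() {
    const int threshold = 85;
    // a slice of the sample data above; the full data set would also emit id=13 and id=26
    std::vector<Event> input = {
        {3, 1470732422000LL, 0},  {4, 1470732423000LL, 86},
        {5, 1470732424000LL, 87}, {6, 1470732425000LL, 88},
        {7, 1470732426000LL, 84},
    };

    int count = 0;
    bool armed = true;                    // stays true until something was emitted
    for (const Event& e : input) {
        if (e.value < threshold) {        // low value: reset the count and re-arm
            count = 0;
            armed = true;
        } else if (armed && ++count == 3) {
            // third consecutive high value: emit it right away
            std::cout << "emit id=" << e.id << " value=" << e.value
                      << " timestamp=" << e.timestamp << '\n';
            count = 0;
            armed = false;                // ignore further high values until a low one
        }
    }
    return 0;
}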
When I make an OOT block in GNU Radio:
class mod(gr.sync_block):
    """
    docstring for block mod
    """
    def __init__(self):
        gr.sync_block.__init__(self,
            name="mod",
            in_sig=[np.byte],
            out_sig=[np.complex64])

    def work(self, input_items, output_items):
        in0 = input_items[0]
        out = output_items[0]
        result = do(....)
        out[:] = result
        return len(output_items[0])
I get:
ValueError: could not broadcast input array from shape (122879) into shape (4096)
How can I solve it?
The GRC flow graph is as below:
selector: the input index and output index are controlled by a WX GUI Chooser block
FSK4 MOD: modulates the FSK4 signal and writes the data to raw.bin
FSK4 DEMOD: reads the data from raw.bin and demodulates it
file source -> /////// -> FSK4 MOD -> FSK4 DEMOD -> NULL SINK
            selector
file source -> ////// -> GMSK MOD -> GMSK DEMOD -> NULL SINK
When the input index or output index is changed, the whole flow graph stops responding.
There are two things:
You have a bug somewhere, and the solution is not to change something at random, but to fix that bug. The full Python error message (the traceback) will tell you exactly which line the error is in.
noutput_items is a variable that GNU Radio sets at runtime to let you know how much output you might produce in this call to work. Hence, it's not something you can set, but it's something your work method must respect.
I think it's fair to assume that you're not very aware of how GNU Radio works:
GNU Radio is based on calling your block's work function when there is enough output space available and enough input items to process. The amount of output space that your block can use is passed to your work as a parameter, and will change between calls to work.
I very strongly recommend going through chapters 1-3 of the official Guided Tutorials if you haven't already. We always try to keep these tutorials up-to-date.
EDIT: Your comment shows that you have not really understood what I meant, sorry. So: GNU Radio calls your work method over and over again while the flow graph is executing.
For example, it might call work with 4000 input items and 4000 output items' worth of space (you have a sync block, therefore number of input items == number of output items). Your work processes the first 1000 of those, and therefore returns 1000. So there are 3000 items left.
Now the upstream block does something, so there are 100 new items. Because the 3000 from before are still there, your block's work will get called with 3100 items.
Your work processes any number of items and returns that number. GNU Radio makes sure that the "remaining" items stay available and will call your work again when there is enough input or output space.
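To make that contract concrete, here is a tiny self-contained C++ toy model (not actual GNU Radio code; the buffer sizes and the doubling "DSP" are made up) of a scheduler repeatedly offering work some space and expecting back the number of items actually produced:
#include <algorithm>
#include <cstdio>
#include <vector>

// toy "work": it is told how much output space there is (noutput_items)
// and must not produce more than that; it returns how many items it
// actually produced, and anything unconsumed stays in the input buffer.
static int work(int noutput_items, std::vector<int>& input, std::vector<int>& output) {
    int n = std::min(noutput_items, static_cast<int>(input.size()));
    for (int i = 0; i < n; ++i)
        output.push_back(input[i] * 2);              // placeholder "DSP"
    input.erase(input.begin(), input.begin() + n);   // consume n items
    return n;
}

int main() {
    std::vector<int> input(7000, 1);   // pretend upstream produced 7000 items
    std::vector<int> output;

    // the "scheduler" keeps calling work with whatever space is available
    for (int space : {4000, 3100, 2000}) {
        int produced = work(space, input, output);
        std::printf("offered %d, produced %d, %zu input items left\n",
                    space, produced, input.size());
    }
    return 0;
}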
I'm trying to use libPd, the wrapper for PureData.
But the documentation is poor and I'm not very into C++.
Do you know how I can simply send a floating-point value to a Pd patch?
Do I need to install libPd, or can I just include the files?
First of all, check out ofxpd. It has an excellent libpd implementation with OpenFrameworks. If you are starting with C++ you may want to start with OpenFrameworks, since it has some great documentation and nice integration with Pd via the ofxpd extension.
There are two good references for getting started with libpd (though neither cover C++ in too much detail): the original article and Peter Brinkmann's book.
On the libpd wiki there is a page for getting started with libpd. The linked project at the bottom has some code snippets in main.cpp that demonstrate how to send floats to your Pd patch.
pd.sendBang("fromCPP");
pd.sendFloat("fromCPP", 100);
pd.sendSymbol("fromCPP", "test string");
In your Pd patch you'll set up a [receive fromCPP] and then these messages will register in your patch.
In order to get the print output you have to use the receivers from libpd in order to receive the strings and then do something with them. libpd comes with PdBase, which is a great class for getting libpd up and running. PdBase has sendBang, sendFloat, sendMessage, and also has the receivers set up so that you can get output from your Pd patch.
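To answer the "how do I simply send a float" part concretely, here is a rough, untested C++ sketch built on PdBase (the patch name test.pd, the channel counts and the sample rate are placeholder assumptions; check PdBase.hpp in your libpd checkout for the exact signatures):
#include "PdBase.hpp"

int main() {
    pd::PdBase pd;

    // 0 input channels, 2 output channels, 44.1 kHz (placeholder values)
    if (!pd.init(0, 2, 44100))
        return 1;
    pd.computeAudio(true);                // same as sending "pd dsp 1"

    // assumes a patch named test.pd next to the executable
    pd::Patch patch = pd.openPatch("test.pd", ".");
    if (!patch.isValid())
        return 1;

    // lands in [receive fromCPP] inside the patch
    pd.sendFloat("fromCPP", 100);

    pd.closePatch(patch);
    return 0;
}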
If you want to send a value to a running instance of Pd (the standalone application), you could do so via Pd's networking facilities.
e.g.
[netreceive 65432 1]
|
[route value]
|
[print]
will receive data sent from the cmdline via:
echo "value 1.234567;" | pdsend 65432 localhost udp
You can also send multiple values at once, e.g.
echo "value 1.234567 3.141592;" | pdsend 65432 localhost udp
If you find pdsend too slow for your purposes (e.g. if you launch the executable for each message you want to send, you have considerable overhead!), you could construct the message directly in your application and use an ordinary UDP socket to send FUDI messages to Pd.
FUDI-messages really are simple text strings, with atoms separated by whitespace and a terminating semicolon, e.g.
accelerator 1.23 3.14 2.97; button 1;
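If you go that route, a minimal sketch using plain POSIX UDP sockets could look roughly like this (it assumes Pd is listening locally with [netreceive 65432 1], i.e. UDP on port 65432):
#include <string>

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    // a FUDI message is plain text: atoms separated by whitespace,
    // terminated by a semicolon
    const std::string msg = "value 1.234567 3.141592;\n";

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        return 1;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(65432);                     // [netreceive 65432 1]
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  // local Pd instance

    sendto(sock, msg.c_str(), msg.size(), 0,
           reinterpret_cast<const sockaddr*>(&addr), sizeof(addr));
    close(sock);
    return 0;
}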
You might also consider using OSC, but for this you will need some externals (OSC by mrpeach; net by mrpeach (or iemnet)) on the Pd side.
As for performance, I've been using the latter with complex tracking data (hundreds of values per frame at 125 fps) and for streaming multichannel audio, so I don't think this is a problem.
If you are already using libPd and only want to communicate from the host application, use Adam's solution (but your question is a bit vague about that, so I'm including this answer just in case).
I'm stuck on a problem where I would like to ask for some help:
I have the task of printing some files of different types using ShellExecuteEx with the "print" verb and need to guarantee the print order of all files. Therefore I use FindFirstPrinterChangeNotification and FindNextPrinterChangeNotification to monitor the events PRINTER_CHANGE_ADD_JOB and PRINTER_CHANGE_DELETE_JOB using two different threads in the background, which I start before calling ShellExecuteEx, as I don't know anything about the application which will print the files etc. The only thing I know is that I'm the only one printing and which file I print. My solution seems to work well: my program successfully recognizes the event PRINTER_CHANGE_ADD_JOB for my file, and I even verify that this event is issued for my file by checking what is given to me as additional info by specifying JOB_NOTIFY_FIELD_DOCUMENT.
The problem now is with the event PRINTER_CHANGE_DELETE_JOB, where I don't get any additional info about the print job, though my logic is exactly the same for both events: I've written one generic thread function which simply gets executed with the event it is used for. My thread recognizes the PRINTER_CHANGE_DELETE_JOB event, but on each call to FindNextPrinterChangeNotification when this event occurred I don't get any additional data in ppPrinterNotifyInfo. This works for the start event, though; I verified that using my logs and the debugger. But with PRINTER_CHANGE_DELETE_JOB the only thing I get is NULL.
I already searched the web and there are some similar questions, but they are mostly related to VB or simply unanswered. I'm using a C++ project, and as my code works for the ADD_JOB event I don't think I'm doing something completely wrong. But even MSDN doesn't mention this behavior, and I would really like to make sure that the DELETE_JOB event is the one for my document, which I can't without any information about the print job. After I get the DELETE_JOB event my code doesn't even recognize other events anymore, which is OK because the print job is done afterwards.
The following is what I think is the relevant notification code:
WORD jobNotifyFields[1] = {JOB_NOTIFY_FIELD_DOCUMENT};
PRINTER_NOTIFY_OPTIONS_TYPE pnot[1] = {JOB_NOTIFY_TYPE, 0, 0, 0, 1, jobNotifyFields};
PRINTER_NOTIFY_OPTIONS pno = {2, 0, 1, pnot};
HANDLE defaultPrinter = PrintWaiter::openDefaultPrinter();
HANDLE changeNotification = FindFirstPrinterChangeNotification(defaultPrinter,
                                                               threadArgs->event,
                                                               0, &pno);
[...]
DWORD waitResult = WAIT_FAILED;
while ((waitResult = WaitForSingleObject(changeNotification, threadArgs->wfsoTimeout)) == WAIT_OBJECT_0)
{
    LOG4CXX_DEBUG(logger, L"Some printer event detected in the thread waiting for event " << LogStringConv(threadArgs->event) << L".");
    [...]
    PPRINTER_NOTIFY_INFO notifyInfo = NULL;
    DWORD events = 0;
    FindNextPrinterChangeNotification(changeNotification, &events, NULL, (LPVOID*) &notifyInfo);
    if (!(events & threadArgs->event) || !notifyInfo || !notifyInfo->Count)
    {
        LOG4CXX_DEBUG(logger, L"Ignored non-matching event " << LogStringConv(events) << L".");
        FreePrinterNotifyInfo(notifyInfo);
        continue;
    }
    [...]
I would really appreciate it if anyone could give me some hints on why I don't get any data regarding the print job. Thanks!
https://forums.embarcadero.com/thread.jspa?threadID=86657&stqc=true
Here's what I think is going on:
I observe two events in two different threads for the start and the end of each print job. With some debugging and logging I recognized that FindNextPrinterChangeNotification doesn't always return only the two distinct events I registered for, but also some 0-events. In those cases FindNextPrinterChangeNotification returns 0 as the events in pdwChange. If I print a simple text file using notepad.exe I only get one event for the creation of the print job, with the value 256 for pdwChange and the data I need in notifyInfo to compare against my printed file name, and the comparison succeeds. If I print a PDF file using the current Acrobat Reader 11 I get two events: one has pdwChange as 256, but gives something like "local printdatafile" as the name of the started print job, which is obviously not the file I printed. The second event has a pdwChange of 0, but the name of the print job provided in notifyInfo is the file name I used to print. As I use FreePDF for testing purposes, I think the first printer event is something internal to my special setup.
The notifications for the deletion of a print job create 0-events, too. This time they are sent before FindNextPrinterChangeNotification returns 1024 in pdwChange, and very shortly after the start of the print job. In this case exactly one generated 0-event contains notifyInfo with a document name which equals the file name I started printing. After the 0-event there's exactly one additional event with a pdwChange of 1024, but without any data in notifyInfo.
I think Windows is using some mechanism which provides additional notifications for the same event as 0-events after the initial event has been fired with its real value the user registered for, e.g. 256 for PRINTER_CHANGE_ADD_JOB. On the other hand it seems that some 0-events are simply fired to provide data for an upcoming event which then gets the real value of e.g. 1024 for PRINTER_CHANGE_DELETE_JOB, but without any more data, because that has already been delivered to the event consumer with a very early 0-event. Something like "Look, there's more for the last event." and "Look, something is going to happen with the data I provide now." With such an approach implemented, my prints now seem to work as expected.
Of course what I wrote doesn't fit what is documented for FindNextPrinterChangeNotification, but it makes a bit of sense to me. ;-)
You're not checking for overflows or errors.
The documentation for FindNextPrinterChangeNotification says this:
If the PRINTER_NOTIFY_INFO_DISCARDED bit is set in the Flags member of the PRINTER_NOTIFY_INFO structure, an overflow or error occurred, and notifications may have been lost. In this case, no additional notifications will be sent until you make a second FindNextPrinterChangeNotification call that specifies PRINTER_NOTIFY_OPTIONS_REFRESH.
You need to check for that flag and do as described above, and you should also be checking the return code from FindNextPrinterChangeNotification.
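For illustration, here is a rough sketch of that check, reusing the variable names from the code in the question (changeNotification, pno, threadArgs); treat it as a starting point rather than drop-in code:
PPRINTER_NOTIFY_INFO notifyInfo = NULL;
DWORD events = 0;
if (!FindNextPrinterChangeNotification(changeNotification, &events,
                                       &pno, (LPVOID*) &notifyInfo))
{
    // the call itself failed; log GetLastError() and bail out or retry
}
else if (notifyInfo && (notifyInfo->Flags & PRINTER_NOTIFY_INFO_DISCARDED))
{
    // notifications were discarded: ask for a refreshed snapshot
    FreePrinterNotifyInfo(notifyInfo);
    notifyInfo = NULL;
    pno.Flags = PRINTER_NOTIFY_OPTIONS_REFRESH;
    FindNextPrinterChangeNotification(changeNotification, &events,
                                      &pno, (LPVOID*) &notifyInfo);
    pno.Flags = 0;   // reset for subsequent calls
}
// ... then check (events & threadArgs->event) and notifyInfo as before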