SystemC - measure and include file parsing time in a SystemC simulation - C++

I have a simple C++ function which parses a CSV file (10-10k rows) line by line and inserts each field into a defined structure, an array of structures to be more specific.
Now I want to measure the parsing time using SystemC methods (without C++ utilities, e.g. clock()), include it in the simulation like any other process, and generate a trace file - is that possible at all?
For a couple of days I have been struggling to do that in every possible way. I have realised that sc_time_stamp() is useless in this particular case, as it only shows the simulation time elapsed since the declared sc_start().
I thought it would be as simple as:
wait();            // until positive edge
parseFile();
signal.write(1);
doOtherStuff();
but apparently it's not...
The Internet is full of examples with adders, counters and other logic, but I did not find anything related to my problem.
Thanks in advance!

Your question ("measure parsing time using SystemC") and the pseudo-code example you gave appear to indicate that you want to measure real (wall-clock) time. The pseudo-code
wait some-event
parseFile()
signal.write(1)
...
will take exactly zero simulation time, since (unless parseFile() itself contains wait statements) parseFile() is carried out in a single delta cycle. If parseFile() does have wait statements (wait 10 ms, for example), then in simulation time it will take 10 ms. The only way SystemC has of knowing how long a process takes to complete is if you tell it.

SystemC is a modeling library, so you can only measure simulation time with sc_time_stamp().
If you want to measure physical (wall-clock) time, you will need to measure it with some other C/C++ facility (see e.g. "Easily measure elapsed time").
Then, if you want to add this time to your simulation time, you can write
wait( measured_parsing_time );
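Putting both answers together, a minimal sketch of that idea could look like the following. This is not a complete design - the module, the port names and the parseFile() body are placeholders for your own code - but it shows a wall-clock measurement with std::chrono feeding a wait() so that the parsing shows up in the trace:

#include <systemc.h>
#include <chrono>

SC_MODULE(Parser) {
    sc_in<bool>  clk;
    sc_out<bool> done;

    void parseFile() {
        // ... your CSV parsing into the array of structures ...
    }

    void run() {
        wait();  // block until the first positive clock edge

        // Measure the real (wall-clock) time the parser takes.
        auto t0 = std::chrono::steady_clock::now();
        parseFile();
        auto t1 = std::chrono::steady_clock::now();
        double us = std::chrono::duration<double, std::micro>(t1 - t0).count();

        // Tell SystemC how long the parsing "took" by advancing simulation time.
        wait(sc_time(us, SC_US));

        done.write(true);   // appears in the trace at the measured offset
    }

    SC_CTOR(Parser) {
        SC_THREAD(run);
        sensitive << clk.pos();
        done.initialize(false);
    }
};

With sc_trace() on done you will then see the transition at the measured parsing time after the clock edge, instead of in the same delta cycle. Note that the measured value varies from host to host and run to run, so this is a profiling aid rather than a deterministic model.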

Related

Measure actual runtime of a procedure using Qt

I want to measure the runtime of a procedure that will last multiple days. This means the executing process will be interrupted many times since the Linux PC will be suspended to standby repeatedly. I don't want to include those sleep phases in my time measurement so I can't use simple time stamps.
Is there a better way to measure the net runtime of a procedure other than firing a QTimer every second and counting those timeouts?
As G.M. proposed, I used boost::timer::cpu_timer to solve my issue. It returns both the summed-up computation (CPU) time and the run time of the entire process, which is fine for my purpose. Boost is not Qt, but it is cross-platform anyway.
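For reference, a short sketch of that approach (assuming Boost.Timer is available; longRunningProcedure() is a placeholder):

#include <boost/timer/timer.hpp>
#include <iostream>

void longRunningProcedure() {
    // work that may span several days and standby cycles
}

int main() {
    boost::timer::cpu_timer timer;

    longRunningProcedure();

    boost::timer::cpu_times t = timer.elapsed();   // values are in nanoseconds
    // t.user + t.system is the CPU time the process actually consumed;
    // t.wall is the ordinary elapsed time.
    std::cout << "net CPU time: " << (t.user + t.system) / 1e9 << " s\n"
              << "wall time:    " << t.wall / 1e9 << " s\n";
}

Note that Boost.Timer is a compiled (non-header-only) library, so the program has to be linked against it.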

What is the proper way to calculate latency in omnet++?

I have written a simulation module. For measuring latency, I am using this:
simTime().dbl() - tempLinkLayerFrame->getCreationTime().dbl();
Is this the proper way? If not, please suggest one; sample code would be very helpful.
Also, is the simTime() latency the actual latency in terms of microseconds, which I can report in my research paper, or do I need to scale it?
Also, I found that the channel data rate and channel delay have no impact on the link latency; instead, if I vary the trigger duration, the latency varies. For example:
timer = new cMessage("SelfTimer");
scheduleAt(simTime() + 0.000000000249, timer);
If this is not the proper way to trigger a simple module recursively, please suggest one.
Assuming both simTime() and getCreationTime() use the OMNeT++ class for representing simulation time (simtime_t), you can operate on them directly, because that class overloads the relevant operators. Going by what the manual says, I'd recommend using a signal for the measurements, e.g. emit(latencySignal, simTime() - tempLinkLayerFrame->getCreationTime());.
simTime() is in seconds, not microseconds.
Regarding your last question, this code will have problems if you use it for all nodes, and you start all those nodes at the same time in the simulation. In that case you'll have perfect synchronization of all nodes, meaning you'll only see collisions in the first transmission. Therefore, it's probably a good idea to add a random jitter to every newly scheduled message at the start of your simulation.
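A hedged sketch of both suggestions follows; the module name, the latency signal and the interval are placeholders, and the APIs assumed are those of OMNeT++ 5.x:

#include <omnetpp.h>
using namespace omnetpp;

class LinkLayerNode : public cSimpleModule
{
    simsignal_t latencySignal;
    cMessage *timer = nullptr;
    simtime_t sendInterval = SimTime(249, SIMTIME_PS);   // the 0.000000000249 s from above

  protected:
    virtual void initialize() override {
        latencySignal = registerSignal("latency");        // also declare @signal/@statistic in the NED file
        timer = new cMessage("SelfTimer");
        // random jitter so that all nodes do not start in perfect lockstep
        scheduleAt(simTime() + uniform(0, sendInterval.dbl()), timer);
    }

    virtual void handleMessage(cMessage *msg) override {
        if (msg == timer) {
            // ... create and send a frame ...
            scheduleAt(simTime() + sendInterval, timer);   // periodic re-trigger
        } else {
            // latency recorded as a simtime_t (in seconds, not microseconds)
            emit(latencySignal, simTime() - msg->getCreationTime());
            delete msg;
        }
    }
};

Define_Module(LinkLayerNode);

The recorded values can then be collected as a vector or histogram through the @statistic declaration and scaled to microseconds only when presenting the results.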

How to use the QNX Momentics Application Profiler?

I'd like to profile my (multi-threaded) application in terms of timing. Certain threads are supposed to be re-activated frequently, i.e. a thread executes its main job once every fixed time interval. In other words, there's a fixed time slice in which all the threads are getting re-activated.
More precisely, I expect certain threads to get activated every 2ms (since this is the cycle period). I made some simplified measurements which confirmed the 2ms to be indeed effective.
For the purpose of profiling my app more accurately it seemed suitable to use Momentics' tool "Application Profiler".
However, when I do so, I fail to interpret the timing figures for the items I selected. I would be interested in the average as well as the min and max time it takes before a certain thread is re-activated. So far it seems the idea is only to be able to monitor the times certain functions occupy. However, even that does not really seem to be the case. E.g. I've got 2 lines of code that are placed literally next to each other:
if (var1 && var2 && var3) var5=1; takes 1ms (avg)
if (var4) var5=0; takes 5ms (avg)
What is that supposed to tell me?
Another thing confuses me - the parent thread "takes up" 33 ms on avg, 2 ms on max and 1 ms on min. Aside from the fact that the avg shouldn't be bigger than the max (and I would even expect the avg to be no bigger than 2 ms, since that is the cycle time), it actually increases the longer I run the profiling tool. So if I ran the tool for half an hour, the 33 ms would actually be something like 120 s. It therefore seems that "avg" is actually the total amount of time the thread occupies the CPU.
If that were the case, I would expect to be able to offset it against the total time using the count figure, but that doesn't work either - mostly because the figure is almost never available; i.e. there is only a separate list entry (for every parent thread) which does not represent a specific process scope.
So I read the QNX community wiki about the "Application Profiler", including the manual about the "New IDE Application Profiler Enhancements", as well as the official manual articles about how to use the profiler tool, but I couldn't figure out how to use the tool to serve my interest.
Bottom line: I'm pretty sure I'm misinterpreting the tool and misusing it for something it was not intended for. Hence my question: how would I interpret the numbers, or use the tool's feedback properly, to get my 2 ms cycle time confirmed?
Additional information
CPU: single core
QNX SDP 6.5 / Momentics 4.7.0
Profiling Method: Sampling and Call Count Instrumentation
Profiling Scope: Single Application
I enabled "Build for Profiling (Sampling and Call Count Instrumentation)" in the Build Options1
The System Profiler should give you what you are looking for. It hooks into the microkernel and lets you see the state of all threads on the system. I used it in a similar setup to find out why our system was getting unexpected time-outs. (The cause turned out to be page waits on critical threads.)
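Independently of the IDE tooling, a quick in-code cross-check of the re-activation period is also possible with plain POSIX calls (available on QNX). The sketch below only assumes some periodic thread body; it logs the min/avg/max interval between activations so you can compare against the expected 2 ms:

#include <time.h>
#include <cstdio>

static double now_ms() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

void periodic_thread_body() {
    double last = now_ms();
    double min = 1e9, max = 0.0, sum = 0.0;
    long   n = 0;

    for (;;) {
        // ... wait for the 2 ms trigger (pulse, timer, condvar, ...) ...
        // ... do the thread's main job ...

        double t  = now_ms();
        double dt = t - last;            // time since previous activation
        last = t;

        if (dt < min) min = dt;
        if (dt > max) max = dt;
        sum += dt;
        ++n;

        if (n % 500 == 0)                // roughly once per second at a 2 ms period
            printf("re-activation: min %.3f  avg %.3f  max %.3f ms\n",
                   min, sum / n, max);
    }
}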

DirectShow IReferenceClock implementation

How exactly are you meant to implement an IReferenceClock that can be set via IMediaFilter::SetSyncSource?
I have a system that implements GetTime and AdviseTime, UnadviseTime. When a stream starts playing it sets a base time via AdviseTime and then increases Stream Time for each subsequent advise.
However, how am I supposed to know when a new graph has been run? I need to set a zero point for a given reference clock; otherwise, if I create a reference clock and then start the graph 10 seconds later, I don't know whether I should be 10 seconds into the playback or starting from 0. Obviously the base time will say that I am starting from 0, but have I just stalled for 10 seconds, and do I need to drop a bunch of frames?
I really can't seem to figure out how to write a proper IReferenceClock so any hints or ideas would be hugely appreciated.
Edit: One example of a problem I am having is that I have 2 graphs and 2 videos. The audio from both videos goes to a null renderer, the video to a standard CLSID_VideoRenderer. Now, if I set the same reference clock on both and then run graph 1, all seems to be fine. However, if 10 seconds down the line I run graph 2, it will run as though SetSyncSource were NULL for the first 10 seconds or so, until it has caught up with the other video.
Obviously, if the graphs called GetTime to get their "base time" this would solve the problem, but this is not what I'm seeing happening. Both videos end up with a base time of 0 because that's the point I run them from.
It's worth noting that if I set no clock at all (or call SetDefaultSyncSource) then both graphs run as fast as they can. I assume this is due to the lack of an audio renderer...
However how am I supposed to know when a new graph has run?
The clock runs on its own; it is the graph that aligns its operation against the clock, not the other way around. The graph receives the outer Run call, then checks the current clock time and assigns a base time, which is distributed among the filters, as "current clock time + some time for things to take off". The clock itself doesn't have to have the faintest idea about all this; its task is to keep running and keep incrementing time.
In particular, clock time does not have to reset to zero at any time.
From the documentation:
The clock's baseline—the time from which it starts counting—depends on the implementation, so the value returned by GetTime is not inherently meaningful. What matters is the delta from when the graph started running.
When an application calls IMediaControl::Run to run the filter graph, the Filter Graph Manager calls IMediaFilter::Run on each filter. To compensate for the slight amount of time it takes for the filters to start running, the Filter Graph Manager specifies a start time slightly in the future.
The BaseClasses offer the CBaseReferenceClock class, which you can use as a reference implementation (in refclock.*).
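A hedged sketch of what such a clock can look like when built on that helper (the class name and the timeGetTime()-based time source are illustrative choices, not the only ones): CBaseReferenceClock already implements GetTime/AdviseTime/UnadviseTime, and you only supply a monotonically increasing private time.

#include <streams.h>   // DirectShow BaseClasses

class CMyReferenceClock : public CBaseReferenceClock
{
public:
    CMyReferenceClock(HRESULT *phr)
        : CBaseReferenceClock(NAME("My reference clock"), NULL, phr)
    {
    }

    // Called by CBaseReferenceClock::GetTime(); must never go backwards and
    // never needs to reset to zero - the graph derives its own base time.
    REFERENCE_TIME GetPrivateTime()
    {
        // timeGetTime() returns milliseconds; REFERENCE_TIME is in 100 ns units.
        // (A real implementation must handle the ~49-day wrap of timeGetTime.)
        return REFERENCE_TIME(timeGetTime()) * 10000;
    }
};

Hand an instance of this to IMediaFilter::SetSyncSource on both graphs; each graph calls GetTime at its own Run call and computes its own base time from whatever value the clock happens to return at that moment.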
Comment to your edit:
You are obviously not describing the case in full and are omitting important details. There is a simple test: you can instantiate the standard clock (CLSID_SystemClock) and use it on two regular graphs - they WILL run fine, even with time-separated Run calls.
I suspect that you are doing some syncing or matching between the graphs and are time-stamping the samples, also using the clock. Presumably you are doing something wrong at that point and then have a hard time fixing it through the clock.

Methodology for debugging serial poll

I'm reading values from a sensor via serial (accelerometer values), in a loop similar to this:
while (1) {
    vector values = getAccelerometerValues();
    // Calculate velocity
    // Calculate total displacement
    if (displacement == 0)
        print("Back at origin");
}
I know the time that each sample takes, which is taken care of in getAccelerometerValues(), so I have a time period for calculating velocity, displacement, etc. I sample at approximately 120 samples / second.
This works, but there are bugs (non-precise accelerometer values, floating-point errors, etc), and calibrating and compensating to get reasonably accurate displacement values is proving difficult.
I'm having a great deal of trouble finding a process to debug the loop. If I use a debugger (my code happens to be written in C++, and I'm slowly learning to use gdb rather than print statements), I have issues stepping through while pushing my sensor around to get an accelerometer reading at the point in time that the debugger executes the line. That is, it's very difficult to get the timing of "continue to next line" and "pushing sensor so it's accelerating" right.
I can use lots of print statements, which tend to fly past on-screen, but due to the number of samples this gets tedious, and it is difficult to work out where the problems are, particularly if there's more than one print statement per loop tick.
I can decrease the number of samples, which improves the readability of the program's output, but drastically decreases the reliability of the acceleration values I poll from the sensor; if I poll at 1 Hz, the chances of polling the accelerometer value while it's undergoing acceleration drop considerably.
Overall, I'm having issues stepping through the code and using realistic data; I can step through it with non-useful data, or I can not really step through it with better data.
I'm assuming print statements aren't the best debugging method for this scenario. What's a better option? Are there any kinds of resources that I would find useful (am I missing something with gdb, or are there other tools that I could use)? I'm struggling to develop a methodology to debug this.
A sensible approach would be to use some interface for getAccelerometerValues() that you can substitute at either runtime or build-time, such as passing in a base-class pointer with a virtual method to override in the concrete derived class.
I can describe the mechanism in more detail if you need, but the ideal is to be able to run the same loop against:
real live data
real live data (and save it to file)
canned real data saved from a previous run
fake data you cooked up as a test case
Note in particular that the 'replay' version(s) should be easy to debug, if each call just returns the next data from the file.
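A minimal sketch of that mechanism (all names here - AccelerometerSource, Vector3, and so on - are made up for illustration):

#include <fstream>

struct Vector3 { double x, y, z; };

class AccelerometerSource {
public:
    virtual ~AccelerometerSource() = default;
    virtual Vector3 getValues() = 0;            // one sample per call
};

// 1) real live data (wraps your existing serial poll)
class LiveSource : public AccelerometerSource {
public:
    Vector3 getValues() override {
        // ... read one sample from the serial port here ...
        return Vector3{0, 0, 0};
    }
};

// 2) live data, saved to a file for later replay
class RecordingSource : public AccelerometerSource {
public:
    RecordingSource(AccelerometerSource& inner, const char* path)
        : inner_(inner), out_(path) {}
    Vector3 getValues() override {
        Vector3 v = inner_.getValues();
        out_ << v.x << ' ' << v.y << ' ' << v.z << '\n';
        return v;
    }
private:
    AccelerometerSource& inner_;
    std::ofstream out_;
};

// 3) canned data from a previous run - trivial to single-step in gdb
class ReplaySource : public AccelerometerSource {
public:
    explicit ReplaySource(const char* path) : in_(path) {}
    Vector3 getValues() override {
        Vector3 v{0, 0, 0};
        in_ >> v.x >> v.y >> v.z;               // returns zeros at end of file
        return v;
    }
private:
    std::ifstream in_;
};

// The main loop takes the source by reference and never changes:
void run(AccelerometerSource& source) {
    while (true) {
        Vector3 values = source.getValues();
        // calculate velocity, displacement, ...
        (void)values;
    }
}

A FakeSource for cooked-up test cases follows the same pattern, and the replay variant is the one to single-step in gdb, since each call just returns the next line of the file at whatever pace the debugger dictates.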
Create if blocks for the exact conditions you want to debug. For example, if you only care about when the accelerometer reads that you are moving left:
if (movingLeft(values)) {
    print("left");
}
The usual solution to this problem is recording. You capture sample sequences from your sensor in real time and store them to files. Then you train your system, debug your code, etc. using the recorded data. Finally, you connect the working code to the real data stream flowing directly from the sensor.
I would debug your code with fake (i.e. random) values before anything else.
If the computations work as expected, then I would use the values read from the port.
Also, isn't there a way to read those values in a callback/push fashion, that is, to get your function called only when there's new, reliable data?
Edit: I don't know what libraries you are using, but in the .NET Framework you can use the SerialPort class with its DataReceived event. That way you are sure to use the most recent and reliable data.