I am working with a CAN application and am having some timing issues. It seems there is some time delta between when my CAN message write function completes and when the CAN message is actually transmitted. So I want to measure the time between the two. The write function is in C++, so it's a simple call to GetTickCount to know when the write function completes. It's knowing when the actual transmission happens that's the problem.
I am using Vector's CANalyzer to monitor my CAN bus, and I've heard it has a programming interface (CAPL). What I would like to do is grab the PC clock time at which a message has actually been transmitted. Is there any system-CAPL interface that I could use to do this?
It would be easier to measure the time in your C++ program. The CAN driver should provide some kind of "TX confirmation" callback, which the driver calls as soon as the message has been successfully transmitted. You would need to register the callback and measure the time between your CAN write operation and this callback.
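A minimal sketch of that measurement, assuming a hypothetical driver interface (canRegisterTxCallback and canWrite are stand-ins for whatever your driver actually exposes); QueryPerformanceCounter is used instead of GetTickCount for sub-millisecond resolution:

    #include <windows.h>
    #include <cstdio>

    static LARGE_INTEGER g_freq, g_writeTime;

    // Hypothetical stand-ins for your CAN driver's API:
    typedef void (*TxCallback)(unsigned int msgId);
    static TxCallback g_txCallback = nullptr;
    void canRegisterTxCallback(TxCallback cb) { g_txCallback = cb; }
    void canWrite(unsigned int msgId)
    {
        // A real driver queues the frame and fires the callback from its own
        // context once the frame is on the bus; simulated here for the sketch.
        if (g_txCallback) g_txCallback(msgId);
    }

    // Called by the driver once the frame has actually been transmitted.
    void onTxConfirmed(unsigned int msgId)
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        double ms = (now.QuadPart - g_writeTime.QuadPart) * 1000.0 / g_freq.QuadPart;
        printf("Message 0x%X confirmed %.3f ms after write\n", msgId, ms);
    }

    void sendAndMeasure()
    {
        QueryPerformanceFrequency(&g_freq);
        canRegisterTxCallback(onTxConfirmed);
        QueryPerformanceCounter(&g_writeTime);  // timestamp taken right at the write
        canWrite(0x123);
    }

This keeps both timestamps on the same PC clock, which avoids having to correlate CANalyzer's hardware timestamps with the PC time at all.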
I'm planning to write a small program that connects to the JACK Audio Connection Kit on Linux. Whenever there's new audio available to my program, JACK will call the registered callback. According to the documentation, the callback function must be suitable for "real-time execution".
However, I'd like my program to do some analysis over the data flowing through and displaying this in a window. This is inherently not real-time.
To circumvent this issue, in an exploratory Python script, I used a queue to transfer data into another thread. But although the callback is called, the thread doesn't receive anything. I suspect this is due to the callback being called in another process. The debugger also doesn't stop inside the callback.
I've seen that there are several JACK applications that do non-real-time things like UI and the like (e.g., the volume meter in Cadence). How is it possible to implement this callback without losing the real-time execution property?
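The usual pattern is what the exploratory script attempted, done with a lock-free ring buffer: the real-time callback only copies samples out and returns, and an ordinary thread drains the buffer and does the slow analysis/UI work. A sketch using JACK's C API (client and port names are made up; error handling is omitted):

    #include <jack/jack.h>
    #include <jack/ringbuffer.h>
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    static jack_port_t*       g_in = nullptr;
    static jack_ringbuffer_t* g_rb = nullptr;
    static std::atomic<bool>  g_run{true};

    // Real-time callback: no locks, no allocation, no I/O -- just copy out.
    int process(jack_nframes_t nframes, void*)
    {
        auto* buf = static_cast<jack_default_audio_sample_t*>(
            jack_port_get_buffer(g_in, nframes));
        jack_ringbuffer_write(g_rb, reinterpret_cast<const char*>(buf),
                              nframes * sizeof(jack_default_audio_sample_t));
        return 0;
    }

    // Ordinary thread: free to block, analyze, and update a window.
    void analysisThread()
    {
        std::vector<char> chunk(4096);
        while (g_run.load()) {
            size_t n = jack_ringbuffer_read(g_rb, chunk.data(), chunk.size());
            if (n == 0)
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
            // ... analyze / display the n bytes of samples just read ...
        }
    }

    int main()
    {
        jack_client_t* client = jack_client_open("analyzer", JackNullOption, nullptr);
        if (!client) return 1;
        g_rb = jack_ringbuffer_create(1 << 20);  // ~1 MiB of audio backlog
        g_in = jack_port_register(client, "in", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput, 0);
        jack_set_process_callback(client, process, nullptr);
        std::thread t(analysisThread);
        jack_activate(client);
        std::getchar();  // run until Enter is pressed
        g_run = false;
        t.join();
        jack_deactivate(client);
        jack_client_close(client);
        jack_ringbuffer_free(g_rb);
        return 0;
    }

jack_ringbuffer is single-producer/single-consumer and lock-free, which is why it is safe to use from the real-time callback where a mutex-protected queue is not.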
I have a small program that needs to perform the steps below sequentially. Right now I do everything in one function, which doesn't seem good. :)
init the user interface: clear the EDIT box and ListView, set the initial button states, and so on;
power up my devices;
wait 20 seconds (sleep or timer), because the device needs time to start up;
connect to the device and read some data from it;
wait 3 seconds for feedback from the device;
when the reply from the device arrives, decode the data and show it on the user interface;
...
For now, I just use Sleep() in my program and do the above steps one by one.
I learned from Stack Overflow that my current approach is not good: the feedback and user-interface updates are very slow, and sometimes the program even freezes.
Some senior guys told me I should use a timer instead of sleep.
So, my question is:
How do I use a timer in my current program? (Just do it the way MSDN says?)
How can I improve the program based on the above requirements?
Do I need multithreading for it?
Sorry for so many questions :)
I really want to make everything better.
Thank you very much in advance.
You don't specify whether you are using a specific framework, so I am going to assume a Windows application using the native Windows API directly:
Call SetTimer, passing it your window handle (HWND), a timer ID, the desired timer interval, and NULL for the TimerProc.
Your window procedure will now periodically be posted WM_TIMER messages. You can use the ID parameter you passed to SetTimer to tell timers apart in case you have started multiple ones, and eventually call KillTimer when you no longer need the timer.
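A minimal sketch of how the startup sequence above could be driven by WM_TIMER (PowerUpDevice, ConnectAndRead and DecodeAndShow are hypothetical stand-ins for your own code; only the SetTimer/KillTimer plumbing is real API):

    #include <windows.h>

    enum class Step { WaitingForStartup, WaitingForReply, Done };
    static Step g_step = Step::WaitingForStartup;
    const UINT_PTR IDT_SEQUENCE = 1;

    // Hypothetical stand-ins for your own routines:
    void PowerUpDevice()  {}
    void ConnectAndRead() {}
    void DecodeAndShow()  {}

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_CREATE:
            PowerUpDevice();
            // Device needs ~20 s to start: arm the timer instead of sleeping.
            SetTimer(hWnd, IDT_SEQUENCE, 20000, NULL);
            return 0;
        case WM_TIMER:
            if (wParam == IDT_SEQUENCE) {
                if (g_step == Step::WaitingForStartup) {
                    ConnectAndRead();
                    // Now wait ~3 s for the device's reply, again via the timer.
                    SetTimer(hWnd, IDT_SEQUENCE, 3000, NULL);  // re-arms the same ID
                    g_step = Step::WaitingForReply;
                } else if (g_step == Step::WaitingForReply) {
                    KillTimer(hWnd, IDT_SEQUENCE);
                    DecodeAndShow();
                    g_step = Step::Done;
                }
            }
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }

Because the waiting happens in the timer rather than in Sleep(), the message loop keeps pumping messages and the UI stays responsive; no extra thread is needed for this sequence.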
If you've ever used XNA Game Studio 4, you are familiar with the Update method. By default, the code within it runs 60 times per second. I have been struggling to recreate such an effect in C++.
I would like to create a method that runs its code only a fixed number of times per second. Every way I've tried, the code runs all at once, as loops do. I've tried for loops, while loops, and goto, and everything runs all at once.
If anyone could tell me how, and whether, I can achieve such a thing in C++, it would be much appreciated.
With your current level of knowledge this is as specific as I can get:
You can't do what you want with loops, fors, ifs and gotos, because we are no longer in the MS-DOS era.
You also can't have code running at precisely 60 frames per second.
On Windows, a GUI application runs within something called an "event loop".
Typically, from within the event loop, most GUI frameworks fire an "onIdle" event, which happens when the application is doing nothing.
You call update from within the onIdle event.
Your onIdle() function will look like this:
void onIdle() {
    currentFrameTime = getCurrentFrameTime();
    if ((currentFrameTime - lastFrameTime) < minUpdateDelay) {
        // Sleep for a small amount of time (much smaller than minUpdateDelay,
        // e.g. Sleep(1)); doing this reduces CPU load.
        sleepForSmallAmountOfTime();
        return;
    }
    update(currentFrameTime - lastFrameTime);
    lastFrameTime = currentFrameTime;
}
You will need to write your own update function; it should take the amount of time passed since the last frame. You also need to write the getCurrentFrameTime() function using GetTickCount, QueryPerformanceCounter, or some similar function.
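A possible implementation of those timing pieces on Windows, using QueryPerformanceCounter (the 60-updates-per-second target is taken from the question):

    #include <windows.h>

    double lastFrameTime = 0.0;
    const double minUpdateDelay = 1.0 / 60.0;  // target ~60 updates per second

    // Seconds elapsed since the first call, with sub-millisecond resolution.
    double getCurrentFrameTime() {
        static LARGE_INTEGER freq = {};
        static LARGE_INTEGER start = {};
        if (freq.QuadPart == 0) {
            QueryPerformanceFrequency(&freq);
            QueryPerformanceCounter(&start);
        }
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return double(now.QuadPart - start.QuadPart) / double(freq.QuadPart);
    }

GetTickCount would also work, but its roughly 15 ms granularity is coarse next to a 16.7 ms frame.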
Alternatively you could use system timers, but compared to the onIdle() approach that is a bad idea if your app runs too slowly to keep up with the timer.
In short, there's a long road ahead of you.
You need to learn some (preferably cross-platform) GUI framework, learn how to create a window and the concept of an event loop (you can't do anything without one today), and then write your own update() and get a basic idea of multithreaded programming and system events.
Good luck.
As you are familiar with XNA, I assume you are also familiar with Input and Draw. What you could do is assign independent threads to these three functions and use a timer to see if it's time to run a thread.
E.g., input would probably trigger draw, and both draw and input would trigger the update method.
Another way to handle this is with message events. If you're using Windows, look into the Windows message loop. This will make your input, update, and draw logic easier by executing on events triggered by the OS.
I have a file of dumped data containing records with different timestamps. I compute the delay from the timestamps and sleep my C thread for that time. But the problem is that while the actual time difference is 10 seconds, the data arrives at the receiving end with almost a 14-15 second delay. I am using the Windows OS. Kindly guide me.
Sorry for my weak English.
The sleep function will sleep for at least as long as the time you specify, but there is no guarantee that it won't sleep for longer. If you need an accurate interval, you will need to use some other mechanism.
If I understand correctly:
you have a thread that sends data (over a network? what is the source of the data?)
you slow down the sending rhythm using sleep
the received data (at the other end of the network) is delayed much more (15 s instead of 10 s)
If the above describe what you are doing, your design has several flaws:
sleep is very imprecise: it will wait at least n seconds, but it may be more (especially if your system is loaded by other running apps).
networks introduce a buffering delay; you have no guarantee that your data will be sent immediately on the wire (usually it is not).
the trip itself introduces some delay (latency); if your protocol waits for an ACK from the receiving end, you should take that into account.
you should also consider the time necessary to read/build/retrieve the data to send and actually send it over the wire. Depending on what you are doing, it can be negligible or take several seconds...
If you give some more details, it will be easier to diagnose the source of the problem: sleep, as you believe (it is indeed a really poor timer), or some other part of your system.
If your dump is large, I would bet that the additional time comes from reading the data and sending it over the wire. You should measure the time consumed in the sending process (read the clock before and after you finish sending).
If this is indeed the source of the additional time, you just have to subtract that time from the next wait.
Example: sending the previous block of data took 4 s, and the next block is due 10 s later; as you have already used up 4 s, you just wait for 6 s.
sleep is still a quite imprecise timer, and obviously the above mechanism won't work if the sending time is larger than the delay between sendings, but you get the idea (see the sketch below).
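One way to get that compensation for free is to wait until absolute deadlines instead of sleeping for fixed durations. A C++ sketch (the question mentions C, but the idea works with any clock and wait primitive; sendRecord is a hypothetical stand-in for whatever sends one block):

    #include <chrono>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Hypothetical stand-in for the routine that sends one block of data.
    void sendRecord(std::size_t /*index*/) {}

    void replayDump(const std::vector<long long>& timestampsMs)
    {
        using clock = std::chrono::steady_clock;
        const auto start = clock::now();
        for (std::size_t i = 0; i < timestampsMs.size(); ++i) {
            // Wait until an absolute deadline rather than for a fixed duration:
            // time already spent reading/sending earlier records is deducted
            // automatically, so delays don't accumulate from record to record.
            std::this_thread::sleep_until(
                start + std::chrono::milliseconds(timestampsMs[i]));
            sendRecord(i);
        }
    }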
Correction: sleep is not as bad in a Windows environment as it is in Unixes. The accuracy of the Windows Sleep is a millisecond; the accuracy of the Unix sleep is a second. If you do not need high-precision timing (and if a network is involved, high-precision timing is out of reach anyway), sleep should be OK.
No modern multitasking OS's scheduler will guarantee exact timings to user apps.
You can try to assign 'realtime' priority to your app in some way, from the Windows Task Manager for instance, and see if it helps.
Another solution is to implement a 'controlled' sleep, i.e. sleep in a series of 500 ms steps, checking the current timestamp between them. That way, if at some step your app sleeps 1 s instead of 500 ms, you will notice it and skip the next sleep(500 ms), as sketched below.
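A small sketch of that controlled sleep on Windows (the 500 ms chunk size comes from the suggestion above):

    #include <windows.h>

    // Sleep in 500 ms chunks, re-checking the clock between chunks, so a
    // single oversleep does not silently push the whole schedule back.
    void controlledSleep(DWORD totalMs)
    {
        const DWORD start = GetTickCount();
        for (;;) {
            DWORD elapsed = GetTickCount() - start;
            if (elapsed >= totalMs)
                break;
            DWORD remaining = totalMs - elapsed;
            Sleep(remaining > 500 ? 500 : remaining);
        }
    }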
Try out a Multimedia Timer. It is about as accurate as you can get on a Windows system. There is a good article on CodeProject about them.
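For reference, a minimal sketch of a periodic multimedia timer via timeSetEvent from winmm (the 10 ms period and 1 ms resolution are arbitrary example values):

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    // Callback signature required by timeSetEvent; it runs on a thread
    // owned by the timer, roughly every 10 ms here.
    void CALLBACK onTick(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
    {
        // ... do the periodic work ...
    }

    void startPeriodicTimer()
    {
        timeBeginPeriod(1);  // request 1 ms timer resolution
        MMRESULT timerId = timeSetEvent(10, 1, onTick, 0, TIME_PERIODIC);
        // ... when finished: timeKillEvent(timerId); timeEndPeriod(1);
        (void)timerId;
    }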
The Sleep function can take longer than requested, but never less. Use the WinAPI timer functions to get a function called back after a given interval from now.
You could also use the Windows Task Scheduler, but that goes outside programmatic, standalone options.
I'm looking to build a program that works within soft real-time schedules; to do this, I need to generate a timing event at an interval significantly shorter than a second.
Is there an API that exposes fine-grain timers in WebOS?
You can use the DOM API setTimeout() to have a function called back in the future. The timing is specified in milliseconds. Your callback will be called at least that many milliseconds after the call to setTimeout, but it could be longer if other JS code is running, since the JavaScript engine won't interrupt running code to call your function.