I have a small number of actors (written in Java, using the Akka Typed APIs) that use TimerScheduler to schedule messages to themselves. I'd like to write tests that check their interactions.
The ActorTestKit recommended in the documentation runs them with real timers, which means that when an actor schedules a message to itself 10 seconds in the future, the test has to last 10 seconds.
In order to speed up tests, I'd like to use virtual time for the test, where the scheduler in the test does not actually wait but advances a virtual clock instead. Unfortunately, ActorTestKit does not seem to support this concept.
Is there any established pattern for running such tests?
The ActorTestKit includes ManualTime (in the Java API: akka.actor.testkit.typed.javadsl.ManualTime), which lets you explicitly advance the test clock by a given duration, e.g. between assertions. See "Controlling the scheduler" in the ActorTestKit docs.
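A minimal sketch of a test using it (JUnit 4; timedBehavior is a hypothetical actor that schedules itself a "tick" after 10 seconds and forwards whatever it receives to a probe):

```java
import java.time.Duration;
import akka.actor.testkit.typed.javadsl.ManualTime;
import akka.actor.testkit.typed.javadsl.TestKitJunitResource;
import akka.actor.testkit.typed.javadsl.TestProbe;
import akka.actor.typed.ActorRef;
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;
import org.junit.ClassRule;
import org.junit.Test;

public class TimerVirtualTimeTest {

  // Boot the ActorTestKit with the manual-time config so timers run on a
  // virtual clock instead of the wall clock.
  @ClassRule
  public static final TestKitJunitResource testKit =
      new TestKitJunitResource(ManualTime.config());

  private final ManualTime manualTime = ManualTime.get(testKit.system());

  // Hypothetical actor under test: schedules itself a message in 10 seconds
  // and forwards whatever it receives to the probe.
  private static Behavior<String> timedBehavior(ActorRef<String> probe) {
    return Behaviors.withTimers(
        timers -> {
          timers.startSingleTimer("timer-key", "tick", Duration.ofSeconds(10));
          return Behaviors.receiveMessage(
              msg -> {
                probe.tell(msg);
                return Behaviors.same();
              });
        });
  }

  @Test
  public void timerFiresOnVirtualClock() {
    TestProbe<String> probe = testKit.createTestProbe();
    testKit.spawn(timedBehavior(probe.getRef()));

    // Nine virtual seconds pass: the timer must not have fired yet...
    manualTime.expectNoMessageFor(Duration.ofSeconds(9), probe);
    // ...one more virtual second and it fires, with no real waiting.
    manualTime.timePasses(Duration.ofSeconds(1));
    probe.expectMessage("tick");
  }
}
```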
I am developing an application that is responsible for moving and managing robots over a UDP connection.
The application needs to:
Read joystick/user input using SDL.
Generate and send a control packet to the robot every 20 milliseconds (UDP).
Receive and decode response packets from the robot (one arrives roughly every 20 ms). This was implemented with the signal/slot mechanism and does not require a timer.
Receive and process robot messages for debugging reasons. This is not time-regulated.
Update the UI regularly to keep the user notified about the status of the robot (e.g. battery voltage). For most cases, I have also used Qt's signal/slot mechanism.
Use a watchdog that disables the robot if no response is received for 1 second. The watchdog is reset whenever the application receives a robot packet (roughly every 20 ms).
For the moment, I have implemented all of the above. However, the application fails to send the packets regularly when the watchdog is activated or when two or more QTimer objects are used. The application generally works, but I would not consider it "production ready". I have tried the timer accuracy flags (Qt::PreciseTimer, Qt::CoarseTimer and Qt::VeryCoarseTimer), but I still experienced problems.
Notes:
The code is generally well organized; there are no "god objects" in the code base (most source files are less than 150 lines long and create only the necessary dependencies).
Most of the time, I use QTimer::singleShot() (e.g. I will only schedule the next packet once the current packet has been sent).
Where we use timers:
To read joystick input (~50 msecs, precise timer)
To send robot packets (~20 msecs, precise timer)
To update some aspects of the UI (~500 msecs, coarse timer)
To update the elapsed time since the robot was enabled (~100 msecs, precise timer)
To implement a watchdog (put the application and robot in safe state if 1000 msecs have passed without a robot response)
Note: the watchdog is fed when we receive a response packet from the robot (roughly every 20 ms)
Do you have any recommendations for using QTimer objects in performance-critical code? (Any idea is welcome.) Note that I have also tried using separate threads, but that caused me more problems, since the application would fall out of sync and thus fail to effectively control the robots we have tested.
Actually, I seem to have underestimated Qt's timer and event loop performance. On my system an event loop cycle takes around 20,000 nanoseconds on average, plus the overhead of scheduling a queued function call, and a timer with a 1 millisecond interval is rarely late; most of the timeouts come in a few thousand nanoseconds short of a millisecond. But this is a high-end system; on embedded hardware it may be a lot worse.
You should take the time to profile your target system and Qt build to determine whether it can indeed run snappily enough, and, based on those measurements, adjust your timings to compensate for the system delays so that your events are scheduled closer to on time.
You should definitely keep the timer thread as free as possible, because if you block it with IO or heavy computation, your timers will not be accurate. Use a dedicated thread to schedule work and extra worker threads to do the actual work. You may also try playing with thread priorities a bit.
Worst case, look for third-party high-performance event loop implementations, or create your own, and potentially a faster signaling mechanism as well. As I already mentioned in the comments, Qt's inter-thread queued signals are very slow, at least compared to something like indirect function calls.
Last but not least, if you want to do task X every N units of time, that is only possible if task X takes N units of time or less on your system. You need to make this consideration for each task, and for all tasks running concurrently. And to get accurate scheduling, measure how long task X actually took: if less than its period, schedule the next execution in the time remaining; otherwise execute immediately.
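To illustrate that last point: reschedule each run relative to a fixed deadline rather than relative to "now", so per-cycle latency does not accumulate as drift. A sketch (in Java with a ScheduledExecutorService so it is self-contained; in Qt the same idea is a chain of QTimer::singleShot calls with a recomputed interval, and sendControlPacket is a hypothetical stand-in for the periodic work):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SelfCorrectingTicker {

  public static void main(String[] args) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    long periodNanos = TimeUnit.MILLISECONDS.toNanos(20);
    long[] nextDeadline = { System.nanoTime() + periodNanos };

    Runnable tick = new Runnable() {
      @Override
      public void run() {
        sendControlPacket(); // must take less than the 20 ms period
        // Compute the delay from the fixed deadline, not from "now": a late
        // wake-up shortens the next delay instead of shifting all later ticks.
        nextDeadline[0] += periodNanos;
        long delay = nextDeadline[0] - System.nanoTime();
        scheduler.schedule(this, Math.max(0, delay), TimeUnit.NANOSECONDS);
      }
    };
    scheduler.schedule(tick, periodNanos, TimeUnit.NANOSECONDS);
  }

  // Hypothetical stand-in for the real periodic work.
  static void sendControlPacket() {}
}
```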
I have a class which measures the time between calling Start and Stop. I have created a unit test that sleeps (using boost::this_thread::sleep) between Start and Stop and checks that the result is close to the time slept.
However, this test fails on our build agent but not on our development machines. The problem is: how do I know whether this is an actual problem with the stopwatch, or just a "problem" with the build agent (which runs other processes and is a virtual machine) sleeping longer than I told it to?
So the question: Is there a robust way to write something like "Do something that takes exactly x seconds?"
Thanks a lot!
There is no way to test something like this reliably on a non-realtime system. The way to go is to wrap the APIs your stopwatch uses to read the system time, and mock them in the tests.
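A sketch of that wrapping idea (shown in Java for brevity; the same shape works in C++ by injecting a time-source functor into the stopwatch; all names here are hypothetical):

```java
import java.util.function.LongSupplier;

// The stopwatch reads time only through an injected source, never directly,
// so tests can substitute a deterministic fake clock.
class Stopwatch {
  private final LongSupplier nanoSource;
  private long startNanos;

  Stopwatch(LongSupplier nanoSource) { this.nanoSource = nanoSource; }

  void start() { startNanos = nanoSource.getAsLong(); }
  long stopNanos() { return nanoSource.getAsLong() - startNanos; }
}

class StopwatchTest {
  public static void main(String[] args) {
    long[] fakeNow = {0L};                    // the test owns the clock
    Stopwatch sw = new Stopwatch(() -> fakeNow[0]);

    sw.start();
    fakeNow[0] += 3_000_000_000L;             // "3 seconds" pass, instantly
    assert sw.stopNanos() == 3_000_000_000L;  // exact, regardless of machine load
    // Production code would pass the real clock: new Stopwatch(System::nanoTime)
  }
}
```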
Take the system time before and after the sleep, and compare against the difference between those two readings rather than the time you asked to sleep.
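Combined with the stopwatch shape sketched above, that looks like this (Java; the 50 ms slack is illustrative):

```java
void stopwatchTracksRealElapsedTime() throws InterruptedException {
  Stopwatch sw = new Stopwatch(System::nanoTime);

  long before = System.nanoTime();
  sw.start();
  Thread.sleep(3000);          // may oversleep on a loaded build agent; that's fine
  long measured = sw.stopNanos();
  long actualElapsed = System.nanoTime() - before;

  // Assert against what really elapsed, not against the nominal 3000 ms.
  // The stopwatch interval is nested inside the measured one, so it can only
  // be shorter; allow a little slack for the surrounding calls themselves.
  assert measured <= actualElapsed && actualElapsed - measured < 50_000_000L;
}
```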
What is the resolution of your stopwatch? If you only need to be accurate to the second, then sleeping for 3 seconds and checking that you are between 2.9 and 3.1 will work for you. If you need milli/nanosecond accuracy, you should just use timestamp mocks, as suggested in the first reply.
That depends on the operating system you're using. On systems that support multiprogramming, the running time of your thread is not deterministic. On some real-time systems, however, it is almost exact if your thread has top priority. You may even be able to disable interrupts to simulate that case, because your thread will then not be preempted by the OS scheduler.
I'm trying to test some actors using Scala specs. I run the test in IDEA or Maven (as JUnit) and it does not exit. Looking at the code, my test finishes, but some internal threads (the scheduler) hang around. How can I make the test finish?
Currently this is only possible by causing the actor framework's scheduler to forcibly shut down:
```scala
scala.actors.Scheduler.impl.shutdown
```
However, the underlying implementation of the scheduler has been changing in recent patch releases, so this may differ, or not quite work, on the version you are on. In 2.7.7 the default scheduler appears to be an instance of scala.actors.FJTaskScheduler2, for which this approach should work; however, if you end up with a SingleThreadedScheduler it will not, as its shutdown method is a no-op.
This will only work if your actors are not waiting on a react at that time.
I'm looking to build a program that works within soft real-time schedules; to do this, I need to generate a timing event at an interval significantly less than a second.
Is there an API that exposes fine-grained timers in WebOS?
You can use the DOM API setTimeout() to have a function called back in the future. The timing is specified in milliseconds. Your callback will be called at least that many milliseconds after the call to setTimeout, but it could be later if other JS code is running, since the JavaScript engine won't interrupt running code to call your function.
I'm building my first web application after many years of desktop application development. (I'm using Django/Python, but maybe this is a completely generic question; I'm not sure.) So please beware: this may be an ultra-newbie question...
One of my user flows involves heavy processing on the server (i.e. the user inputs something, and the server needs ~10 minutes to process it). In a desktop application, what I would do is throw the user input into a queue protected by a mutex, and have a dedicated low-priority background thread blocking on that queue.
In a web application, however, everything seems to be oriented around synchronous handling of HTTP requests.
Assuming I will use the database as my queue, what is the best-practice architecture for running a background process?
There are two schools of thought on this (at least).
Throw the work on a queue and have something else outside your web-stack handle it.
Throw the work on a queue and have something else in your web-stack handle it.
In either case, you create work units in a queue somewhere (e.g. a database table) and let some process take care of them.
I typically go with number 1, using a dedicated Windows service that takes care of these things. You could also do this with SQL jobs or something similar.
The advantage of item 2 is that you can more easily keep all your code in one place--the web tier. You'd still need something to trigger the execution (e.g. loading the web page that processes work units, with a sufficiently high timeout), but that can easily be accomplished with various mechanisms.
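Whichever school you pick, the worker side has the same shape. A minimal sketch (Java here, though the pattern is language-agnostic; claimNextPending, process, markDone and markFailed are hypothetical stand-ins for operations on your queue table):

```java
public class QueueWorker {

  public static void main(String[] args) throws InterruptedException {
    // Runs outside the web stack (school #1), e.g. under a service manager,
    // and polls the shared queue table for work units.
    while (true) {
      WorkUnit unit = claimNextPending(); // e.g. atomically flip a row from PENDING to RUNNING
      if (unit == null) {
        Thread.sleep(5_000);              // nothing to do; idle-poll
        continue;
      }
      try {
        process(unit);                    // the ~10 minute job
        markDone(unit);
      } catch (Exception e) {
        markFailed(unit, e);              // record the failure so a retry/alerting policy can act
      }
    }
  }

  record WorkUnit(long id, String payload) {}

  // Stubs: in a real system these are queries against the queue table.
  static WorkUnit claimNextPending() { return null; }
  static void process(WorkUnit unit) {}
  static void markDone(WorkUnit unit) {}
  static void markFailed(WorkUnit unit, Exception e) {}
}
```

The single atomic "claim" step is what later lets you run several workers in parallel without two of them grabbing the same job.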
Since (1) this is a common problem and (2) you're new to your platform, I suggest that you look in the contributed libraries for your platform for a solution that handles the task. In addition to queuing and processing the jobs, you'll also want to consider:
1) Status communication between the worker and the web stack. This enables web pages that show a percent-complete figure for the job, reassure the user that the job is progressing, etc. (see the sketch after this list).
2) How to ensure that the worker process does not die.
3) If a job has an error, will the worker process automatically retry it periodically? Will you or an operations person be notified if a job fails?
4) As the number of jobs increases, can additional workers be added to gain parallelism? Or, even better, can workers be added on other servers?
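The checklist above implies some per-job bookkeeping. A minimal sketch of the fields involved (Java here; all names are illustrative, and in practice these would be columns on the queue table):

```java
// One row per job. Each field serves an item in the checklist: status and
// percentComplete feed the progress page (item 1), attempts and nextRetryAt
// drive periodic retries (item 3), and lastError is what an operator gets
// notified with when a job fails.
enum JobStatus { PENDING, RUNNING, DONE, FAILED }

record Job(
    long id,
    JobStatus status,
    int percentComplete,
    int attempts,
    java.time.Instant nextRetryAt,
    String lastError) {}
```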
If you can't find a good solution in Django/Python, you can also consider porting a solution from another platform to yours. I use delayed_job for Ruby on Rails. The worker process is managed by runit.
Regards,
Larry
Speaking generally, I'd look at running background processes on a different server, especially if your web server is under any kind of load.
Running long processes in Django: http://iraniweb.com/blog/?p=56