I would like to use java.util.concurrent.Executors to fire a runner. The problem is that I want to fire it a fixed number of times, and it is important that the runners are called at a fixed rate.
For me it would be perfectly sufficient to use Executors#newSingleThreadScheduledExecutor, so I wonder why this does not seem to be possible.
Thanks
I tried to approach this with TimerTask, but with the same effect.
The problem is how to shut down a TimerTask or ScheduledExecutorService effectively, elegantly, and thread-safely after a certain number of calls.
I have an executable program that performs latency measurements. C++ pseudo-code below:
int main() {
    lock_priority();
    start_measurements();
    work();
    end_measurements();
}
The work() creates multiple threads and takes a long time to complete, so ideally I'd like to minimize the executable's console window while the process is running, just to save screen space. This, however, degrades the measured latency by around 50% compared to when the window is not minimized.
I'd like to implement the lock_priority() function so that even when minimized, the process does not go into PROCESS_MODE_BACKGROUND_BEGIN mode.
What I've tried so far
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS); - did not work
Created a thread that every few seconds calls the function above - it did work, but, scientifically speaking, "it looks ugly"
I have tried to find a way to attach a callback to SetPriorityClass() so that, whenever the priority class changes to anything but REALTIME_PRIORITY_CLASS, it would be re-set (or at least PROCESS_MODE_BACKGROUND_END would be applied). This sounds like a perfect solution, but I could not find anything about it in the docs.
I discovered there is a way to set the processor to prefer foreground/background tasks (reference) - however even if this was possible to be configured through code, I still need a way to bind this theoretical function to the priority change.
Any help would be very appreciated!
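As far as I can tell there is no documented callback for priority-class changes, so the pragmatic variant of the re-set approach is to have lock_priority() explicitly leave background mode and then re-assert the class, calling it again right before each measurement. A sketch, assuming Windows (the non-Windows branch is just a stub so the snippet is self-contained):

```cpp
#ifdef _WIN32
#include <windows.h>
#endif

// Sketch: leave background mode (if minimizing put us there), then
// re-assert a high priority class. Call this before each measurement.
bool lock_priority() {
#ifdef _WIN32
    // Ignore the return value: this call fails harmlessly with
    // ERROR_PROCESS_MODE_NOT_BACKGROUND when the process is not
    // currently in background mode.
    SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
    // HIGH_PRIORITY_CLASS sidesteps REALTIME_PRIORITY_CLASS's
    // requirement for the "increase scheduling priority" privilege,
    // without which REALTIME silently degrades to HIGH anyway.
    return SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS) != 0;
#else
    return true;  // no-op stub on non-Windows platforms
#endif
}
```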
How about redirecting the program output from the console to a file, or just buffering it, as described here:
Redirect both cout and stdout to a string in C++ for Unit Testing
This way, you don't have any console latency at all - if this is alright for your testing.
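A minimal sketch of that suggestion: swap std::cout's buffer for an in-memory one during the measurement, then restore it (captureOutput is my own helper name):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Redirect std::cout into a string for the duration of `work`,
// then restore the original buffer and return what was printed.
std::string captureOutput(void (*work)()) {
    std::ostringstream sink;
    std::streambuf* old = std::cout.rdbuf(sink.rdbuf());  // redirect
    work();
    std::cout.rdbuf(old);                                 // restore
    return sink.str();
}
```

With this, the console never repaints during work(), so window state should no longer affect the timings.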
I have FreeRTOS currently working on my MicroZed board. I am using the Xilinx SDK as the software platform, and so far I have been able to create tasks and assign priorities.
I was just curious to know whether it is possible to assign a fixed time slot to each of my tasks, so that, for example, after 100 milliseconds the scheduler would switch to the next task. So is it possible to set a fixed execution time for each of my tasks? As far as I checked I could not find a method for this; if there is any way to implement it using the utilities of FreeRTOS, kindly let me know.
By default FreeRTOS will time slice tasks of equal priority, see http://www.freertos.org/a00110.html#configUSE_TIME_SLICING, but there is nothing to guarantee that each task gets an equal share of the CPU. For example, interrupts use an unknown amount of processing time during each time slice, and higher priority tasks can use part or all of a time slice.
A question for you though - why do you want the behaviour you requested? If you said what you were trying to achieve, rather than asking whether a feature exists, people would be able to make helpful suggestions.
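For illustration, the time-slicing behaviour mentioned above is controlled from FreeRTOSConfig.h; the values below are only an example, not a recommendation:

```c
/* FreeRTOSConfig.h (fragment) - example values only */
#define configUSE_PREEMPTION    1    /* higher-priority ready tasks run first   */
#define configUSE_TIME_SLICING  1    /* round-robin between equal priorities    */
#define configTICK_RATE_HZ      1000 /* 1 ms tick: switches between equal-
                                        priority tasks can occur on each tick   */
```

Note that if what you actually need is a task that *runs* every 100 ms (rather than a guaranteed execution-time budget), vTaskDelayUntil() is the standard FreeRTOS way to get a fixed period.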
I've narrowed down the area of the problem I'm facing and it turned out that MessageProducer.send() is too slow when it is created for a particular topic "replyfeserver":
auto producer = context.CreateProducerFromTopic("replyfeserver");
producer->send(textMessage); //it is slow
Here the call to send() blocks for 5-15 seconds in general, and occasionally for 55-65 seconds (almost every 4-5 calls).
However, if I use some other topic, say "feserver.action.status":
auto producer = context.CreateProducerFromTopic("feserver.action.status");
producer->send(textMessage); //it is fast!
Now the call to send() returns immediately, within a fraction of a second. I've tried send() with several other topics and all of them work fast enough.
What could be wrong with this particular topic, "replyfeserver"? What should I look at in order to diagnose the issue? I have been using this topic for the last 2 months.
I'm using the XMS C++ API; please assume that the context object is an abstraction that creates the session, destination, consumer, producer, and so on.
I'd also like to know if there is any difference between these two approaches:
xms::Destination dest("topic://replyfeserver");
vs
xms::Destination dest = session.createTopic("replyfeserver");
I tried both, and it doesn't make any difference; at least I didn't notice one.
There shouldn't be any difference. Personally, I like to have my topics in a hierarchy. i.e. A.B.C
I would run an MQ trace, then open a PMR with IBM, give them the trace, and ask them to explain the delay.
If you've ever used XNA Game Studio 4 you are familiar with the update method. By default the code within it is processed 60 times per second. I have been struggling to recreate such an effect in C++.
I would like to create a method that processes its code only x times per second. Every way I've tried, the code processes all at once, as loops do. I've tried for loops, while loops, goto, and everything runs all at once.
If anyone could tell me how, and whether, I can achieve such a thing in C++, it would be much appreciated.
With your current level of knowledge this is as specific as I can get:
You can't do what you want with loops, fors, ifs and gotos, because we are no longer in the MS-DOS era.
You also can't have code running at precisely 60 frames per second.
On Windows a system application runs within something called an "event loop".
Typically, from within the event loop, most GUI frameworks call the "onIdle" event, which happens when an application is doing nothing.
You call update from within the onIdle event.
Your onIdle() function will look like this:
void onIdle() {
    currentFrameTime = getCurrentFrameTime();
    if ((currentFrameTime - lastFrameTime) < minUpdateDelay) {
        // Sleep for a small amount of time (using Sleep or similar).
        // The delay should be much smaller than minUpdateDelay;
        // doing this reduces CPU load.
        sleepForSmallAmountOfTime();
        return;
    }
    update(currentFrameTime - lastFrameTime);
    lastFrameTime = currentFrameTime;
}
You will need to write your own update() function; it should take the amount of time passed since the last frame. You also need to write the getCurrentFrameTime() function using GetTickCount, QueryPerformanceCounter, or some similar API.
Alternatively you could use system timers, but compared to the onIdle() event that is a bad idea if your app runs too slowly.
In short, there's a long road ahead of you.
You need to learn some (preferably cross-platform) GUI framework, learn how to create a window, the concept of an event loop (can't do anything without it today), and then write your own "update()" and get a basic idea of multithreading programming and system events.
Good luck.
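That said, if all you need for now is "call update() x times per second" without a GUI framework, a plain fixed-timestep loop built on std::chrono already does it. A self-contained sketch (all names here are illustrative, not part of any framework):

```cpp
#include <chrono>
#include <thread>

// Classic fixed-timestep loop: call update() `fps` times per second,
// for `totalFrames` frames, sleeping instead of busy-waiting.
template <typename Update>
void runFixedTimestep(Update update, int fps, int totalFrames) {
    using clock = std::chrono::steady_clock;
    const auto step = std::chrono::nanoseconds(1'000'000'000 / fps);
    auto next = clock::now();
    for (int frame = 0; frame < totalFrames; ++frame) {
        update();                             // your per-frame logic
        next += step;                         // schedule the next tick
        std::this_thread::sleep_until(next);  // sleeps, doesn't spin
    }
}
```

Scheduling against an absolute deadline (sleep_until) rather than a relative one (sleep_for) keeps the average rate at fps even when individual sleeps overshoot slightly.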
As you are familiar with XNA, I assume you are also familiar with "input" and "draw". What you could do is assign independent threads to these three functions and use a timer to see if it's time to run a thread.
E.g. the input would probably trigger draw, and both draw and input would trigger the update method.
Another way to handle this is via message events. If you're using Windows, look into the Windows message loop. This will make your input, update, and draw logic easier by executing on events triggered by the OS.
I have a class which measures the time between calling Start and Stop. I have created a unit test that sleeps using boost::this_thread::sleep between Start and Stop and I test that the result is near the time slept.
However, this test fails on our build agent but not on our development machines. The problem is: how do I know whether this is an actual problem with the stopwatch, or whether the build agent (running other processes, being a virtual machine) might simply sleep longer than I told it to?
So the question: Is there a robust way to write something like "Do something that takes exactly x seconds?"
Thanks a lot!
There is no way to test something like this reliably on a non-real-time system. The way to go is to wrap the APIs your stopwatch uses for getting the system time and mock them in the tests.
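A sketch of that mocking approach: make the stopwatch take its clock as a template parameter, so the test drives time deterministically and no sleeping is involved (Stopwatch and FakeClock here are illustrative names, not the asker's class):

```cpp
#include <cstdint>

// A clock whose "now" is set directly by the test.
struct FakeClock {
    static std::int64_t nowMs;
    static std::int64_t now() { return nowMs; }
};
std::int64_t FakeClock::nowMs = 0;

// The stopwatch under test, parameterized on its time source.
// In production you would instantiate it with a real clock wrapper.
template <typename Clock>
class Stopwatch {
    std::int64_t start_ = 0, elapsed_ = 0;
public:
    void start() { start_ = Clock::now(); }
    void stop()  { elapsed_ = Clock::now() - start_; }
    std::int64_t elapsedMs() const { return elapsed_; }
};
```

The unit test then asserts an exact elapsed value, so it passes identically on a loaded build agent and a quiet development machine.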
Take the system time before and after the sleep and refer only to the difference between these times and not to the time you slept.
What is the resolution of your stopwatch? If you need to be accurate to seconds, then sleeping for 3 seconds and checking that you are between 2.9 and 3.1 will work for you. If you need milli- or nanosecond accuracy, you should use timestamp mocks as suggested in the first reply.
That depends on the operating system you're using. On systems that support multiprogramming, the running time of your thread is not deterministic. On some real-time systems, however, it is almost exact if your thread has top priority. You should be able to disable interrupts to simulate that case, because then your thread will not be preempted by the OS scheduler.