I currently have this code
#include <iostream>
#include <curl/curl.h>
#include <windows.h>
#include "boost/timer.hpp"

int main(void)
{
    CURL *curl;
    CURLcode res;
    boost::timer t;
    int number = 1;
    while (number == 1)
    {
        if (t.elapsed() > 10)
        {
            curl = curl_easy_init();
            if (curl)
            {
                curl_easy_setopt(curl, CURLOPT_URL, "http://google.com");
                res = curl_easy_perform(curl);
                /* always cleanup */
                curl_easy_cleanup(curl);
            }
            t.restart();
        }
    }
}
What I'd like it to do is keep running and never end until someone closes the window.
I tried the code above, but CPU usage spiked to 25% on my quad-core CPU.
So how do I keep the program running and loop the code inside the while without using so much CPU?
P.S.
25% on a quad-core corresponds to 100% usage of a single-core CPU.
You can use Sleep(10000) to pause program execution for approximately 10 seconds. You can drop the boost::timer and just sleep for 10 seconds in each loop iteration (Sleep is not perfectly accurate, but over 10 seconds the inaccuracy is negligible).
Your code is what is called a 'busy loop': to the CPU it makes no difference whether you spin in a tight loop without much work or do heavy computations. Both will use 100% of a CPU core, because there is a never-ending stream of instructions coming in. To use less, you need to relinquish execution for a while so the OS can run other processes.
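For example, the loop from the question could be rewritten roughly like this (a sketch keeping the libcurl calls from the question; error handling omitted):
#include <curl/curl.h>
#include <windows.h>

int main(void)
{
    for (;;)
    {
        CURL *curl = curl_easy_init();
        if (curl)
        {
            curl_easy_setopt(curl, CURLOPT_URL, "http://google.com");
            CURLcode res = curl_easy_perform(curl);
            (void)res; // error handling omitted for brevity
            curl_easy_cleanup(curl);
        }
        Sleep(10000); // relinquish the CPU for ~10 seconds between requests
    }
}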
What you're currently doing is busy waiting. That is, even though your program doesn't need to do anything, it's still keeping that loop spinning, waiting for the timer. What you need to do is to execute a true sleep, which tells your operating system that the process doesn't need to do anything for the next 10 seconds.
One way to do a true sleep in boost is the boost::this_thread::sleep function.
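For illustration, the question's loop with a true sleep might look like this (a sketch, assuming Boost.Thread is linked; the 10-second interval comes from the question):
#include <boost/thread/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    for (;;)
    {
        // perform the curl request here
        boost::this_thread::sleep(boost::posix_time::seconds(10)); // a true sleep, not a busy wait
    }
}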
You need to slow it down with some sleep(). Basically you need to put your thread to sleep to allow other processes to execute.
What you've implemented is called a busy wait and is considered very bad style. Use sleep to suspend program execution for a short time, and write an eternal loop as:
for (;;)
or
while (true)
Looks like you want to do a sleep after each operation.
You can use boost::thread to run it in its own thread and then call join() on it in the main thread to wait for it. If the other thread never ends, because of a while(true), then your program will run until you close the window.
Call SwitchToThread() inside of your while loop. (Sleep is less than ideal, as it forfeits the current time slice even if no other thread needs it.)
Consider using a timer instead of doing such a tight loop.
Alternatively you can put a System.Threading.Thread.Sleep(300) in between.
Someone would have to kill your process to close it; I cannot discern any window in your code.
If you are polling a website (it looks like you are doing something with Google there), then I would advise you to use a much larger interval! Not many webmasters would be happy to see such activity; it's more likely to be seen as a DoS attack!
Anyway, if there is a window, rather put this code in a timer delegate; otherwise start a timer and allow the user to exit your program somehow (maybe with Console.ReadKey() or so).
Related
I have a process that does something and needs to be repeated every 1 ms. How can I set the period of a process on Linux?
I am using Linux 3.2.0-4-rt-amd64 (with the RT-Preempt patch) on an Intel i7-2600 CPU (8 cores total) @ 3.40 GHz.
Basically I have about 6 threads in the while loop shown below, and I want the threads to be executed every 1 ms. At the end I want to measure the latency of each thread.
So how do I set the period to 1 ms?
For example, in the following code, how can I repeat Task1 every 1 ms?
while (1) {
    // Task1 (having threads)
}
Thank you.
A call to usleep(1000) inside the while loop will do the job, i.e.:
while (1) {
    // Task1
    usleep(1000); // 1000 microseconds = 1 millisecond
}
EDIT
Since usleep() is already deprecated in favor of nanosleep(), let's use the latter instead:
struct timespec timer;
timer.tv_sec = 0;
timer.tv_nsec = 1000000L;

while (1) {
    // Task1
    nanosleep(&timer, NULL);
}
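If drift across iterations matters, one variation (my own sketch, not part of the answer above) is to sleep until an absolute deadline with clock_nanosleep, so the period stays close to 1 ms even when Task1 takes some time:
#include <time.h>

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    while (1) {
        // Task1
        next.tv_nsec += 1000000L;           // advance the deadline by 1 ms
        if (next.tv_nsec >= 1000000000L) {  // normalize the timespec
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}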
Read time(7).
One millisecond is really a small period of time. (Can't you live with, e.g., a ten-millisecond delay?) I'm not sure regular processes on regular Linux on common laptop hardware are able to deal reliably with such a small period. Maybe you need RTLinux, or at least real-time scheduling (see sched_setscheduler(2) and this question), and perhaps a specially configured recent 3.x kernel.
You can't be sure that your processing (inside your loop) is smaller than a millisecond.
You should explain what your application is doing and what happens inside the loop.
You might have some event loop; consider using ppoll(2), timer_create(2) (see also timer_getoverrun(2)...) and/or timerfd_create(2) and clock_nanosleep(2).
(I would try something using ppoll and timerfd_create but I would accept some millisecond ticks to be skipped)
You should tell us more about your hardware and your kernel. I'm not even sure my desktop i3770K processor, Asus P8Z77V motherboard (3.13.3 PREEMPT Linux kernel) is able to deal reliably with a one-millisecond delay.
(Of course, a plain loop simply calling clock_nanosleep, or better yet, using timerfd_create with ppoll, will usually do the job. But that is not reliable...)
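For what it's worth, a rough sketch of the timerfd approach mentioned above (my own illustration, error checking omitted):
#include <sys/timerfd.h>
#include <poll.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);

    struct itimerspec its;
    memset(&its, 0, sizeof its);
    its.it_value.tv_nsec = 1000000L;    /* first expiration after 1 ms */
    its.it_interval.tv_nsec = 1000000L; /* then every 1 ms */
    timerfd_settime(tfd, 0, &its, NULL);

    struct pollfd pfd;
    pfd.fd = tfd;
    pfd.events = POLLIN;

    while (1) {
        poll(&pfd, 1, -1);                           /* block until the timer fires */
        uint64_t expirations;
        read(tfd, &expirations, sizeof expirations); /* a value > 1 means ticks were missed */
        /* Task1 */
    }
}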
I'm having a bit of an issue with a game I'm making using OpenGL. The game will sometimes run at half speed and sometimes run normally.
I don't think it is OpenGL causing the problem, since it runs at literally 14,000 fps on my computer (even when it's running at half speed).
This has led me to believe that it is the "game timer" that's causing the problem. The game timer runs on a separate thread and is programmed to pause at the end of its "loop" with a Sleep(5) call. If I remove the Sleep(5) call, it runs so fast that I can barely see the sprites on the screen (predictable behavior).
I tried throwing a Sleep(16) at the end of the Render() thread (also on its own thread). This should limit the fps to around 62. Remember that the app sometimes runs at its intended speed and sometimes at half speed (I have tried on both of the computers that I own and it persists).
When it runs at its intended speed, the fps is 62 (good), and sometimes it is 31-ish (bad). It never switches between half speed and full speed mid-execution, and the problem persists even after a reboot.
So it's not the rendering that's causing the slowness; it's the Sleep() function.
I guess what I'm saying is that the Sleep() function is inconsistent in how long it actually sleeps. Is this a known thing? Is there a better Sleep() function that I could use?
A waitable timer (CreateWaitableTimer and WaitForSingleObject or friends) is much better for periodic wakeup.
However, in your case you probably should just enable VSYNC.
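If you do go the waitable-timer route, a minimal sketch might look like this (the 16 ms period is only an assumption, matching the Sleep(16) mentioned in the question):
#include <windows.h>

int main()
{
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL); // auto-reset timer
    LARGE_INTEGER due;
    due.QuadPart = -160000LL; // first wakeup after 16 ms (100-ns units, negative = relative)
    SetWaitableTimer(timer, &due, 16, NULL, NULL, FALSE); // then every 16 ms

    for (;;)
    {
        WaitForSingleObject(timer, INFINITE); // sleeps until the next tick
        // run one game-timer / render step here
    }
}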
See the following discussion of the Sleep function, focusing on the bit about scheduling priorities:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx
Yes, the Sleep function is inconsistent; it is only useful when coarse timing is good enough.
If you want consistent timing, use QueryPerformanceFrequency to get the timer frequency and QueryPerformanceCounter twice, once at the start and once at the end; then (end - start) / frequency gives you the elapsed time. Be aware that on a multi-core CPU the start and end readings may come from different cores, so use SetThreadAffinityMask to pin your worker thread to a single core.
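A small sketch of that measurement (thread affinity omitted for brevity):
#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);  // counts per second
    QueryPerformanceCounter(&start);

    Sleep(16);                         // the interval being measured

    QueryPerformanceCounter(&end);
    double elapsed = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("Sleep(16) actually took %.3f ms\n", elapsed * 1000.0);
    return 0;
}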
I had the same problem, so I just wrote my own sleep logic, and it worked for me.
#include <chrono>
using namespace std::chrono;
high_resolution_clock::time_point sleep_start_time = high_resolution_clock::now();
while (duration_cast<duration<double>>(high_resolution_clock::now() - sleep_start_time).count() < must_sleep_duration) {}
I'm writing a checkpoint. Right now I check on every iteration of a loop, which I think wastes a lot of CPU time. How can I check against the system time only every 10 seconds?
time_t start = clock();
while (forever)
{
    if (difftime(clock(), start) / CLOCKS_PER_SEC >= timeLimit)
    {
        break;
    }
}
The very short answer is that this is very difficult, if you're a novice programmer.
Now a few possibilities:
Sleep for ten seconds. That means your program is basically pointless.
Use alarm() and signal handlers. This is difficult to get right, because you mustn't do anything fancy inside the signal handler.
Use a timerfd and integrate timing logic into your I/O loop.
Set up a dedicated thread for the timer (which can then sleep); this is exceedingly difficult because you need to think about synchronising all shared data access.
The point to take home here is that your problem doesn't have a simple solution. You need to integrate the timing logic deeply into your already existing program flow. This flow should be some sort of "main loop" (e.g. an I/O multiplexing loop like epoll_wait or select), possibly multi-threaded, and that loop should pick up the fact that the timer has fired.
It's not that easy.
Here's a tangent, possibly instructive. There are basically two kinds of computer program (apart from all the other kinds):
One kind is programs that perform one specific task, as efficiently as possible, and are then done. This is for example something like "generate an SSL key pair", or "find all lines in a file that match X". Those are the sort of programs that are easy to write and understand as far as the program flow is concerned.
The other kind is programs that interact with the user. Those programs stay up indefinitely and respond to user input. (Basically any kind of UI or game, but also a web server.) From a control flow perspective, these programs spend the vast majority of their time doing... nothing. They're just idle waiting for user input. So when you think about how to program this, how do you make a program do nothing? This is the heart of the "main loop": It's a loop that tells the OS to keep the process asleep until something interesting happens, then processes the interesting event, and then goes back to sleep.
It isn't until you understand how to do nothing that you'll be able to design programs of the second kind.
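To make the "do nothing" idea concrete, here is a rough sketch (my own, not part of the answer) of such a main loop using select: the process sleeps until either input arrives on stdin or 10 seconds pass:
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

int main(void)
{
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);   // "something interesting": input on stdin
        struct timeval timeout = {10, 0}; // the 10-second checkpoint interval

        int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &timeout);
        if (ready > 0) {
            char buf[256];
            read(STDIN_FILENO, buf, sizeof buf); // handle the interesting event
        } else if (ready == 0) {
            printf("10 seconds elapsed - run the checkpoint\n");
        }
    }
}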
If you need precision, you can place a call to select() with null file-descriptor sets but with a timeout. This is accurate to the millisecond.
struct timeval timeout= {10, 0};
select(1,NULL,NULL,NULL, &timeout);
If you don't, just use sleep():
sleep(10);
Just add a call to sleep to yield CPU time to the system:
time_t start = clock();
while (forever)
{
    if (difftime(clock(), start) / CLOCKS_PER_SEC >= timeLimit)
    {
        break;
    }
    sleep(1); // <<< put process to sleep for 1s
}
You can use an event loop in your program and schedule a timer to run a callback. For example, you can use libev to create an event loop and add a timer.
ev_timer_init (timer, callback, 0., 5.);
ev_timer_again (loop, timer);
...
timer->again = 17.;
ev_timer_again (loop, timer);
...
timer->again = 10.;
ev_timer_again (loop, timer);
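For completeness, here is a fuller sketch of the same idea, assuming a 10-second check interval (the callback and watcher names are made up):
#include <ev.h>
#include <stdio.h>

static void timeout_cb(EV_P_ ev_timer *w, int revents)
{
    printf("10 seconds passed - run the check here\n");
}

int main(void)
{
    struct ev_loop *loop = EV_DEFAULT;
    ev_timer check_timer;
    ev_timer_init(&check_timer, timeout_cb, 10., 10.); // first after 10 s, then every 10 s
    ev_timer_start(loop, &check_timer);
    ev_run(loop, 0);
    return 0;
}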
If you are using a specific toolkit, you can use its event loop instead; GTK, Qt, and GLib each have their own event loops.
The simplest approach (in a single-threaded environment) is to sleep for some time and repeatedly check whether the total waiting time has expired.
int sleepPeriodMs = 500;
time_t start = clock();
while (forever)
{
    while (difftime(clock(), start) / CLOCKS_PER_SEC < timeLimit) // NOTE: Change of logic here!
    {
        usleep(sleepPeriodMs * 1000); // sleep in short chunks instead of spinning
    }
}
Please note that sleep() is not very accurate. If you need higher-accuracy timing (i.e. better than 10 ms resolution) you might need to dig deeper. Also, with C++11 there is the <chrono> header, which offers a lot more functionality.
#include <chrono>
#include <thread>
using namespace std::chrono;

while (forever)
{
    auto start = system_clock::now();
    // do some stuff that takes between 0 and 10 seconds
    std::this_thread::sleep_until(start + seconds(10));
}
I have discovered that SwapBuffers in OpenGL will busy-wait as long as the graphics card isn't done with its rendering or if it's waiting on V-Sync.
This is a problem for me because I don't want to waste 100% of a CPU core just waiting for the card to be finished. I'm not writing a game, so I cannot use the CPU cycles for anything more productive; I just want to yield them to some other process in the operating system.
I've found callback-functions such as glutTimerFunc and glutIdleFunc that could work for me, but I don't want to use glut. Still, glut must in some way use the normal gl functions to do this, right?
Is there any function such as "glReadyToSwap" or similar? In that case I could check that every millisecond or so and determine if I should wait a while longer or do the swap. I could also imagine perhaps skip SwapBuffers and write my own similar function that doesn't busy-wait if someone could point me in the right direction.
SwapBuffers is not busy waiting; it just blocks your thread in the driver context, which makes Windows calculate the CPU usage wrongly: Windows calculates CPU usage from how much CPU time the idle process gets plus how much time programs spend outside driver context. SwapBuffers blocks in driver context, and your program obviously takes that CPU time away from the idle process. But your CPU is doing literally nothing during that time; the scheduler is happily waiting to pass the time to other processes. The idle process, OTOH, does nothing other than immediately yield its time to the rest of the system, so the scheduler jumps right back into your process, which blocks in the driver, and that is what Windows counts as "clogging the CPU". If you measured the actual power consumption or heat output, for a simple OpenGL program it would stay rather low.
This irritating behaviour is actually an OpenGL FAQ!
Just create additional threads for parallel data processing. Keep OpenGL in one thread, the data processing in the other. If you want to get the reported CPU usage down, adding a Sleep(0) or Sleep(1) after SwapBuffers will do the trick. The Sleep(1) makes your process spend a little time blocked in user context, so the idle process gets more time, which evens out the numbers. If you don't want to sleep, you may do the following:
const float time_margin = ...; // some margin
float display_refresh_period;  // something like 1./60. or so

void render()
{
    float rendertime_start = get_time();

    render_scene();
    glFinish();

    float rendertime_finish = get_time();
    float time_to_finish = rendertime_finish - rendertime_start;

    float time_rest = fmod(time_to_finish - time_margin, display_refresh_period);
    sleep(time_rest);
    SwapBuffers();
}
In my programs I use this kind of timing, but for another reason: I let SwapBuffers block without any helper Sleeps; however, I give some other worker threads about that much time to do stuff on the GPU through a shared context (like updating textures), and I have the garbage collector running. It's not really necessary to time it exactly, but having the worker threads finish just before SwapBuffers returns allows one to start rendering the next frame almost immediately, since most mutexes are already unlocked then.
Though eglSwapBuffers does not busy wait, a legitimate use for a non-blocking eglSwapBuffers is to have a more responsive GUI thread that can listen for user input or exit signals instead of waiting for OpenGL to finish swapping buffers. I have a solution to half of this problem. First, in your main loop you buffer up your OpenGL commands to execute on your swapped-out buffer. Then you poll on a sync object to see if your commands have finished executing on your swapped-out buffer. Then you can swap buffers if the commands have finished executing. Unfortunately, this solution only asynchronously waits for commands to finish executing on your swapped-out buffer and does not asynchronously wait for vsync. Here is the code:
void process_gpu_stuff(struct gpu_context *gpu_context)
{
    int errnum = 0;

    switch (gpu_context->state) {
    case BUFFER_COMMANDS:
        glDeleteSync(gpu_context->sync_object);
        gpu_context->sync_object = 0;

        real_draw(gpu_context);
        glFlush();

        gpu_context->sync_object = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        if (0 == gpu_context->sync_object) {
            errnum = get_gl_error();
            break;
        }
        gpu_context->state = SWAP_BUFFERS;
        break;

    case SWAP_BUFFERS:
        /* Poll to see if the buffer is ready for swapping; if it
         * is not ready we can listen for updates in the meanwhile. */
        switch (glClientWaitSync(gpu_context->sync_object, 0, 1000U)) {
        case GL_ALREADY_SIGNALED:
        case GL_CONDITION_SATISFIED:
            if (EGL_FALSE == eglSwapBuffers(display, surface)) {
                errnum = get_egl_error();
                break;
            }
            gpu_context->state = BUFFER_COMMANDS;
            break;

        case GL_TIMEOUT_EXPIRED:
            /* Do nothing. */
            break;

        case GL_WAIT_FAILED:
            errnum = get_gl_error();
            break;
        }
        break;
    }
}
The popular answer here is wrong; Windows is not reporting the CPU usage "wrongly". OpenGL with vsync on, even while rendering a blank screen, is actually burning 100% of one thread of your CPU (you can check your CPU temperatures).
But the solution is simple: just call DwmFlush() before or after SwapBuffers.
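A sketch of where that call might go, assuming an existing window device context hdc and linking against dwmapi.lib:
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// inside the existing render loop:
void present_frame(HDC hdc)
{
    SwapBuffers(hdc); // queue the swap as before
    DwmFlush();       // block (without spinning) until the compositor has presented
}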
What is the best way to exit a loop as close to 30 ms as possible in C++? Polling boost::microsec_clock? Polling QTime? Something else?
Something like:
A = now;
for (blah; blah; blah) {
    Blah();
    if (now - A > 30000)
        break;
}
It should work on Linux, OS X, and Windows.
The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.
The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.
Have you considered using threads? What you describe seems the perfect example of why you should use threads instead of timers.
The main process thread keeps taking care of the UI, and have a QTimer set to 30ms to update it. It locks a QMutex to have access to the data, performs the update, and releases the mutex.
The second thread (see QThread) does the simulation. For each cycle, it locks the QMutex, does the calculations and releases the mutex when the data is in a stable state (suitable for the UI update).
With the increasing trend toward multi-core processors, you should think more and more about using threads rather than timers. Your application automatically benefits from the increased power (multiple cores) of new processors.
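A very rough sketch of that split (class and member names are made up for illustration; Q_OBJECT, signals, and error handling omitted):
#include <QThread>
#include <QMutex>
#include <QMutexLocker>

class SimulationThread : public QThread
{
public:
    QMutex mutex;       // guards the simulation state
    double state = 0.0; // placeholder for the real simulation data

protected:
    void run() override
    {
        forever {
            QMutexLocker lock(&mutex);
            state += 0.1; // one simulation step; the UI reads 'state' under the same mutex
        }
    }
};

// In the main thread: a QTimer with a 30 ms interval locks the same mutex,
// copies the state, and updates the viewport.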
While this does not answer the question, it might give another look at the solution. What about placing the simulation code and user interface in different threads? If you use Qt, periodic update can be realized using a timer or even QThread::msleep(). You can adapt the threaded Mandelbrot example to suit your need.
The code snippet example in this link pretty much does what you want:
http://www.cplusplus.com/reference/clibrary/ctime/clock/
Adapted from their example:
void runwait(int seconds)
{
    clock_t endwait;
    endwait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < endwait)
    {
        /* Do stuff while waiting */
    }
}
If you need to do work until a certain time has elapsed, then docflabby's answer is spot-on. However, if you just need to wait, doing nothing, until a specified time has elapsed, then you should use usleep()
Short answer is: you can't in general, but you can if you are running on the right OS or on the right hardware.
You can get CLOSE to 30ms on all the OS's using an assembly call on Intel systems and something else on other architectures. I'll dig up the reference and edit the answer to include the code when I find it.
The problem is the time-slicing algorithm and how close to the end of your time slice you are on a multi-tasking OS.
On some real-time OS's, there's a system call in a system library you can make, but I'm not sure what that call would be.
Edit: LOL! Someone already posted a similar snippet on SO: Timer function to provide time in nano seconds using C++
VonC has got the comment with the CPU timer assembly code in it.
According to your question, every 30ms you'd like to update the viewport. I wrote a similar app once that probed hardware every 500ms for similar stuff. While this doesn't directly answer your question, I have the following followups:
Are you sure that Blah(), for updating the viewport, can execute in less than 30ms in every instance?
Seems more like running Blah() would be done better by a timer callback.
It's very hard to find a library timer object that will push on a 30ms interval to do updates in a graphical framework. On Windows XP I found that the standard Win32 API timer that pushes window messages upon timer interval expiration, even on a 2GHz P4, couldn't do updates any faster than a 300ms interval, no matter how low I set the timing interval to on the timer. While there were high performance timers available in the Win32 API, they have many restrictions, namely, that you can't do any IPC (like update UI widgets) in a loop like the one you cited above.
Basically, the upshot is you have to plan very carefully how you want to have updates occur. You may need to use threads, and look at how you want to update the viewport.
Just some things to think about. They caught me by surprise when I worked on my project. If you've thought these things through already, please disregard my answer :0).
You might consider just updating the viewport every N simulation steps rather than every K milliseconds. If this is (say) a serious commercial app, then you're probably going to want to go the multi-thread route suggested elsewhere, but if (say) it's for personal or limited-audience use and what you're really interested in is the details of whatever it is you're simulating, then every-N-steps is simple, portable and may well be good enough to be getting on with.
See QueryPerformanceCounter and QueryPerformanceFrequency
If you are using Qt, here is a simple way to do this:
QTimer* t = new QTimer( parent ) ;
t->setInterval( 30 ) ; // in msec
t->setSingleShot( false ) ;
connect( t, SIGNAL( timeout() ), viewPort, SLOT( redraw() ) ) ;
You'll need to specify viewPort and redraw(). Then start the timer with t->start().