I am trying to write a program in C++ that waits two minutes and 25 seconds. I use the Sleep() function like this:
Sleep(145000);
Now, my laptop heats up every time I run this, and the fan starts working.
Which brings me to the question: is this function known for being wasteful? Should I even use it? Do I have a better option?
The Windows Sleep() function puts the current thread to sleep. It doesn't run a busy-waiting while loop or anything like that; it simply re-schedules the thread to run again after the sleep period specified as the function parameter. If your fan is spinning up, I suggest looking at the currently running processes in Task Manager.
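As a side note, if a C++11 compiler is available, the portable equivalent is std::this_thread::sleep_for, which likewise just blocks the calling thread rather than spinning. A minimal sketch:

#include <chrono>
#include <thread>

int main()
{
    // Blocks the calling thread for 2 minutes 25 seconds; no busy-waiting.
    std::this_thread::sleep_for(std::chrono::minutes(2) + std::chrono::seconds(25));
    return 0;
}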
If you've ever used XNA Game Studio 4 you are familiar with the update method. By default, the code within it runs 60 times per second. I have been struggling to recreate such an effect in C++.
I would like to create a method that only runs its code x times per second. Every way I've tried, it all runs at once, as loops do. I've tried for loops, while loops, and goto, and everything runs at once.
If anyone could please tell me how, and whether, I can achieve such a thing in C++, it would be much appreciated.
With your current level of knowledge this is as specific as I can get:
You can't do what you want with loops, fors, ifs and gotos, because we are no longer in the MS-DOS era.
You also can't have code running at precisely 60 frames per second.
On Windows, a GUI application runs within something called an "event loop".
Typically, from within the event loop, most GUI frameworks call the "onIdle" event, which happens when an application is doing nothing.
You call update from within the onIdle event.
Your onIdle() function will look like this:
void onIdle() {
    currentFrameTime = getCurrentFrameTime();
    if ((currentFrameTime - lastFrameTime) < minUpdateDelay) {
        sleepForSmallAmountOfTime(); // using Sleep or anything similar.
        // The delay should be much smaller than minUpdateDelay.
        // Doing this will reduce CPU load.
        return;
    }
    update(currentFrameTime - lastFrameTime);
    lastFrameTime = currentFrameTime;
}
You will need to write your own update function; it should take the amount of time that has passed since the last frame. You also need to write the getCurrentFrameTime() function using GetTickCount, QueryPerformanceCounter, or some similar function.
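For example, a getCurrentFrameTime() based on QueryPerformanceCounter might look roughly like this (only a sketch; the name mirrors the pseudocode above, and the result is in milliseconds):

#include <windows.h>

// Sketch: high-resolution frame time in milliseconds via QueryPerformanceCounter.
double getCurrentFrameTime()
{
    static LARGE_INTEGER frequency = { 0 };
    if (frequency.QuadPart == 0)
        QueryPerformanceFrequency(&frequency); // ticks per second, constant at runtime

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return 1000.0 * (double)now.QuadPart / (double)frequency.QuadPart;
}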
Alternatively you could use system timers, but compared to the onIdle() event that is a bad idea if your app runs too slowly.
In short, there's a long road ahead of you.
You need to learn some (preferably cross-platform) GUI framework, learn how to create a window and the concept of an event loop (you can't do anything without it today), then write your own update(), and get a basic idea of multithreaded programming and system events.
Good luck.
As you are familiar with XNA, I assume you are also familiar with "input" and "draw". What you could do is assign independent threads to these three functions (input, update, and draw) and have a timer to check whether it's time to run a thread.
E.g. input would probably trigger draw, and both draw and input would trigger the update method.
Another way to handle this is via message events. If you're using Windows, look into the Windows message loop, sketched below. This will make your input, update, and draw handling easier by executing on events triggered by the OS.
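For reference, the skeleton of a Windows message loop looks roughly like this (only a sketch; creating the window and the window procedure, where input/update/draw would actually be handled, are omitted):

#include <windows.h>

// Sketch of the classic message loop: the OS posts input/paint/timer events
// as messages, and DispatchMessage routes them to your window procedure.
int runMessageLoop()
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam; // exit code carried by WM_QUIT
}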
I have built my first application using glibmm. I'm using a lot of threads as it does heavy processing. I have tried to follow the guidelines concerning multithreading, i.e. not doing any GUI updates from other threads than the one where g_main_loop is running.
I do a lot of graphics rendering in worker threads, but I always only update a Pixbuf which is later drawn by the widget's on_draw() from the main loop.
All was fine as long as the data I render was read from files. When I started streaming data from a server which I render at regular intervals then the problems started.
Every now and then, especially when executing multiple instances of my application simultaneously, I see that the main thread takes 100% CPU time. Running strace on the process shows that g_main_loop has ended up in an infinite loop calling poll:
poll([{fd=3, events=POLLIN}, {fd=4, events=POLLIN}, {fd=10, events=POLLIN}, {fd=8, events=POLLIN}], 4, 100) = 1 ([{fd=10, revents=POLLIN}])
In /proc I get this for file descriptor 10: 10 -> socket:[1132750]
The poll always returns immediately because file descriptor 10 has something to offer. This goes on forever, so I assume that the file descriptor is never read. The odd thing is that running 5 instances will almost always lead to all 5 ending up in the infinite poll loop after just a couple of minutes, while running only one instance seems to work for more than 30 minutes most of the times I try.
Why is this happening and is there any way to debug this?
My mistake was that I called queue_draw() from one of my worker threads. Given that the function is called "queue", I assumed it would queue a redraw which would later be executed by the g_main_loop. As it turned out, this was what broke the g_main_loop. I wish libgtkmm would have a little more detail about these multithreading restrictions in its reference manual.
My solution to the problem was adding a Glib::Dispatcher member, queueRedraw, to my widget and connecting it to the queue_draw() function:
queueRedraw.connect(sigc::mem_fun(*this, &MyWidgetClass::queue_draw));
Calling queueRedraw() signals the main thread to call the queue_draw() function.
I don't know if this is the best approach, but it solves the problem.
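For reference, the whole pattern looks roughly like this (a sketch with illustrative class and member names, assuming gtkmm):

#include <gtkmm.h>

class MyWidgetClass : public Gtk::DrawingArea
{
public:
    MyWidgetClass()
    {
        // Construct/connect the dispatcher in the main thread; the connected
        // slot is always invoked in the thread that owns the main loop.
        queueRedraw.connect(sigc::mem_fun(*this, &MyWidgetClass::queue_draw));
    }

    // Safe to call from a worker thread once new pixel data is ready.
    void notifyRedraw()
    {
        queueRedraw.emit(); // wakes the main loop, which then calls queue_draw()
    }

private:
    Glib::Dispatcher queueRedraw;
};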
In my C++ program, I will start other programs with exec. However, I want to be able to specify a maximum amount of time that the programs can run. How can that be done?
Is setrlimit the right thing to use?
Bit of a brute-force version, but... save/get the handle of the started program/process, start a timer, and kill the other process after the timer has expired?
Two solutions come to mind:
1- Send the duration to the second program via the command line and manage the duration internally in the 2nd exe.
2- Create a timer in the first exe and, when the timer is triggered, kill the 2nd process.
Max.
In general, it can't be done using standard C++ - you will have to use whatever scheduling functions your operating system (which you haven't specified) provides.
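For example, on a POSIX system (the question mentions exec) one way to implement the "timer plus kill" idea is to fork/exec the child and poll it against a deadline. This is only a sketch; runWithTimeout and its crude once-per-second polling are illustrative:

#include <cstdio>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Sketch: run a program and kill it if it is still alive after timeoutSeconds.
int runWithTimeout(const char* path, char* const argv[], int timeoutSeconds)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                        // fork failed
    if (pid == 0)
    {
        execv(path, argv);                // only returns if exec itself failed
        _exit(127);
    }

    for (int elapsed = 0; elapsed < timeoutSeconds; ++elapsed)
    {
        int status = 0;
        if (waitpid(pid, &status, WNOHANG) == pid)
            return status;                // child finished on its own
        sleep(1);                         // crude once-per-second poll
    }

    std::fprintf(stderr, "time limit exceeded, killing child\n");
    kill(pid, SIGKILL);
    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}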
Is there a way in Qt to get the uptime of the application as well as the uptime of the system?
Thanks in advance.
You can use the QElapsedTimer class from Qt 4.7 to get uptime for your app. This class will use monotonic clocks if it can.
Just create an instance, and call start on it at the start of your program. From then on, you can get the number of milliseconds your program has been running (or more precisely, since the call to start) by calling
myElapsedTimer.elapsed()
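Putting that together, a minimal sketch (assuming Qt 4.7 or later) could look like this:

#include <QElapsedTimer>
#include <QDebug>

QElapsedTimer appTimer; // one instance for the whole application

int main(int argc, char* argv[])
{
    appTimer.start();   // call this once, as early as possible

    // ... run the rest of the application ...

    qDebug() << "Application uptime:" << appTimer.elapsed() << "ms";
    return 0;
}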
On Windows you can simply calculate it by calling a WinAPI function that returns the process start date/time.
You can find more information at http://www.codeproject.com/KB/threads/ProcessTime.aspx
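One WinAPI route is GetProcessTimes, which exposes the process creation time; a hedged sketch of computing process uptime from it (in milliseconds) might look like this:

#include <windows.h>

// Sketch: process uptime = current time - process creation time, in milliseconds.
ULONGLONG processUptimeMs()
{
    FILETIME creationTime, exitTime, kernelTime, userTime;
    GetProcessTimes(GetCurrentProcess(), &creationTime, &exitTime, &kernelTime, &userTime);

    FILETIME nowTime;
    GetSystemTimeAsFileTime(&nowTime); // current UTC time as a FILETIME

    ULARGE_INTEGER start, now;
    start.LowPart  = creationTime.dwLowDateTime;
    start.HighPart = creationTime.dwHighDateTime;
    now.LowPart    = nowTime.dwLowDateTime;
    now.HighPart   = nowTime.dwHighDateTime;

    return (now.QuadPart - start.QuadPart) / 10000ULL; // FILETIME is in 100 ns units
}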
On Linux, you can use the times system call to tell you elapsed processor time. This will not count the time your program has been idle or blocked waiting for input, or the time that it has been preempted by other programs also running on the system. (Therefore, this makes it very good for benchmarks.)
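A small sketch of reading those values with times(), converting clock ticks to seconds via sysconf(_SC_CLK_TCK):

#include <sys/times.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    struct tms t;
    times(&t); // user/system CPU time consumed by this process so far

    long ticksPerSecond = sysconf(_SC_CLK_TCK);
    std::printf("user: %.3f s, system: %.3f s\n",
                (double)t.tms_utime / ticksPerSecond,
                (double)t.tms_stime / ticksPerSecond);
    return 0;
}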
I am facing a strange issue on Windows CE:
Running 3 EXEs:
1) The first exe does some work every 8 minutes unless the exit event is signaled.
2) The second exe does some work every 5 minutes unless the exit event is signaled.
3) The third exe runs a while loop and, inside the while loop, it does some work at random times.
This while loop continues until the exit event is signaled.
Now, this exit event is a global event and can be signaled by any process.
The problem is:
When I run the first exe, it works fine.
Run the second exe, it works fine.
Run the third exe, it works fine.
But when I run all the exes, only the third exe runs and no instructions get executed in the first and second.
As soon as the third exe is terminated, the first and second start processing again.
Can it be the case that the while loop in the third exe is taking all the CPU cycles?
I haven't tried putting in a Sleep, but I think that might do the trick.
But shouldn't the OS give CPU time to all processes...?
Any thoughts?
Put the while loop in the third EXE to Sleep each time through the loop and see what happens. Even if it doesn't fix this particular problem, it isn't ever good practice to poll with a while loop, and even using Sleep inside a loop is a poor substitute for a proper timer.
On MSDN, I also read that CE allows for (less than) 32 processes simultaneously. (However, the context switches are lightning fast.) Some of those slots are already taken by system services.
(From Memory) Processes in Windows CE run until completion if there are no higher priority processes running, or they run for their time slice (100ms) if there are other processes of equal priority running. I'm not sure if Windows CE gives the process with the active/foreground window a small priority boost (just like desktop Windows), or not.
In your situation the first two processes are starved of processor time so they never run until the third process exits. Some ways to solve this are:
Make the third process wait/block on some cross-process primitive (mutex, semaphore, etc.) with a short timeout, using WaitForMultipleObjects/WaitForSingleObject etc., as sketched after this list.
Make the third process wait using a call to Sleep every time around the processing loop.
Boost the priority of the other processes so when they need to run they will interrupt the third process and actually run. I would probably make the least often called process have the highest priority of the three processes.
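A rough sketch of the first option for the third exe (the event name and the 100 ms wait are illustrative; replace the placeholder comment with the real work):

#include <windows.h>

// Sketch: block on the global exit event with a timeout instead of spinning,
// so the other two processes get CPU time.
void workerLoop()
{
    HANDLE exitEvent = CreateEvent(NULL, TRUE, FALSE, TEXT("MyGlobalExitEvent"));

    for (;;)
    {
        // Wait up to 100 ms; the thread is blocked (and uses no CPU) meanwhile.
        if (WaitForSingleObject(exitEvent, 100) == WAIT_OBJECT_0)
            break; // the exit event was signaled by some process

        // ... do the work that needs to happen at random times ...
    }

    CloseHandle(exitEvent);
}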
The other thing to check is that the third process does actually complete its tasks in time, and does not peg the CPU trying to do its thing normally.
Yeah, I think that is not a good solution. I may try to use a timer and see the results.