C++ make sprite appear every 3 seconds

I'm making a whack-a-mole game for class and I'm trying to make my mole1 sprite appear every 3 seconds, but I can't figure out how to get it to work. Right now I have the game run for just 5 seconds; in the end it will be 60. Here is the main for the project. I need to change mole1.visible to true so he shows up. After I get this working I will add the 5 other moles, one for each hole.
EDIT
For some reason I can't get chrono to compile, but I figured out how to make the mole appear the first time; I just can't get him to disappear afterwards. I used modulo to toggle the visibility, and I thought checking the opposite condition would make him disappear, but it doesn't:
if ((60-now)%4==3) {
    mole1.visible=true;
    mole1.paint_sprite(myscreen);
}
if ((60-now)%4!=3) {
    mole1.visible=false;
    mole1.paint_sprite(myscreen);
}
Rest of code:
#include <iostream>      // standard C++ include
#include <curses.h>      // required to use the Unix curses libraries
#include "screen.cpp"    // screen class
#include "sprite2.cpp"   // generic sprite class
#include "nonblocking.h" // facilitates non-blocking keyboard events
#include <unistd.h>      // used by sleep
#include <time.h>
using namespace std;     // allows us to avoid std::cout (must come after the includes that declare namespace std)
long start_time, now;
int i;
int main() // main function
{
    char c;                       // used to get character input from the keyboard
    screen myscreen;              // screen data structure declaration
    char aimage[80][24] = {' '};  // first element is a space; the rest are zero-initialized
    long start_time, now;         // note: these shadow the globals above
    int i;                        // used for counters
    int loop = 0;
    // hammer image
    aimage[1][0] = '_';
    aimage[2][0] = '_';
    aimage[0][1] = '(';
    aimage[1][1] = '_';
    aimage[2][1] = '(';
    aimage[3][1] = ')';
    aimage[1][2] = '|';
    aimage[2][2] = '|';
    char bgimage[80][24] = {' '}; // hole image; first element is a space, the rest zero
    bgimage[3][0] = '"';
    bgimage[4][0] = '"';
    bgimage[5][0] = '"';
    bgimage[2][0] = '-';
    bgimage[6][0] = '-';
    bgimage[1][0] = '.';
    bgimage[7][0] = '.';
    bgimage[0][1] = '/';
    bgimage[8][1] = '\\';
    bgimage[0][2] = '|';
    bgimage[8][2] = '|';
    bgimage[0][3] = '\\';
    bgimage[8][3] = '/';
    bgimage[1][4] = '"';
    bgimage[2][4] = '-';
    bgimage[3][4] = '.';
    bgimage[4][4] = '.';
    bgimage[5][4] = '.';
    bgimage[6][4] = '-';
    bgimage[7][4] = '"';
    char cimage[80][24] = {' '};  // mole image
    cimage[1][0] = 'c';
    cimage[2][0] = '.';
    cimage[3][0] = '_';
    cimage[4][1] = '\'';
    cimage[5][1] = '-';
    cimage[6][1] = '.';
    cimage[0][1] = 'C';
    cimage[3][1] = 'o';
    cimage[5][2] = '\'';
    cimage[4][2] = '.';
    cimage[3][2] = '.';
    cimage[2][3] = '-';
    cimage[1][3] = '-';
    cimage[0][2] = '(';
    char dimage[80][24] = {' '};  // timer image ("60")
    dimage[0][0] = '6';
    dimage[1][0] = '0';
    sprite hammer(22,10,3,4,aimage,&myscreen);
    sprite hole1(20,3,5,9,bgimage,&myscreen);
    sprite hole2(40,3,5,9,bgimage,&myscreen);
    sprite hole3(60,3,5,9,bgimage,&myscreen);
    sprite hole4(20,15,5,9,bgimage,&myscreen);
    sprite hole5(40,15,5,9,bgimage,&myscreen);
    sprite hole6(60,15,5,9,bgimage,&myscreen);
    sprite mole1(21,4,4,7,cimage,&myscreen);
    sprite timer(5,10,1,2,dimage,&myscreen);
    mole1.visible = false;        // mole stays hidden until it pops up
    hole1.paint_sprite(myscreen);
    hole2.paint_sprite(myscreen);
    hole3.paint_sprite(myscreen);
    hole4.paint_sprite(myscreen);
    hole5.paint_sprite(myscreen);
    hole6.paint_sprite(myscreen);
    hammer.paint_sprite(myscreen);
    mole1.paint_sprite(myscreen);
    timer.paint_sprite(myscreen);
    myscreen.display();           // cause the screen to paint for the first time
    start_time = (unsigned)time(NULL);
    for (;;) // infinite loop
    {
        now = (unsigned)time(NULL) - start_time;
        if ((5 - now) <= 0) // ends the game after 5 seconds (60 in the final version)
        {
            endwin();  // clean up curses
            return 1;
        }
        loop++;
        if (kbhit())
        {
            c = getchar();        // get one character from the keyboard
            tcflush(0, TCIFLUSH); // system call to flush the keyboard buffer
            if (c == 'a') // if a, move hammer left
            {
                hammer.move_sprite(-20, 0, myscreen);
            }
            if (c == 'd') // if d, move hammer right
            {
                hammer.move_sprite(20, 0, myscreen);
            }
            if (c == 's') // if s, move hammer down
            {
                hammer.move_sprite(0, 10, myscreen);
            }
            if (c == 'w') // if w, move hammer up
            {
                hammer.move_sprite(0, -10, myscreen);
            }
        }
        myscreen.display(); // refresh the screen
    }
    endwin();  // clean up curses (never reached)
    return 1;  // end program (never reached)
}

You can use a global loop to calculate the time difference and then set visible=true; after 3.0 seconds have passed.
Like here:
#include <iostream>
#include <chrono>
#include <cstdint>
#include <unistd.h>
const float TIME_TO_SHOW = 3.0f;
// Function to update all objects
void Update(float dt)
{
    static float DeltaCounter = 0.0f;
    DeltaCounter += dt;
    if (DeltaCounter > TIME_TO_SHOW)
    {
        DeltaCounter -= TIME_TO_SHOW; // keep the overflow
        // Set the object visible here. For example your mole1.visible=true;
    }
}
int main()
{
    typedef std::chrono::duration<float> FloatSeconds;
    auto OldMs = std::chrono::system_clock::now().time_since_epoch();
    const uint32_t SleepMicroseconds = 100;
    // Global loop
    while (true)
    {
        auto CurMs = std::chrono::system_clock::now().time_since_epoch();
        auto DeltaMs = CurMs - OldMs;
        OldMs = CurMs;
        // Cast the delta time to float seconds
        auto DeltaFloat = std::chrono::duration_cast<FloatSeconds>(DeltaMs);
        std::cout << "Seconds passed since last update: " << DeltaFloat.count() << " seconds" << std::endl;
        // Update all objects with the elapsed time in seconds
        Update(DeltaFloat.count());
        // Sleep to give the system time for interaction
        usleep(SleepMicroseconds);
        // Any other actions to calculate can go here
        //...
    }
    return 0;
}
If the behavior is constant you can use a simple loop with the sleep function, which suspends your process for the given number of seconds:
const int32_t CountObjectToShow = 10;
const unsigned int TIME_TO_SHOW = 3;
for (int32_t i = 0; i < CountObjectToShow; i++)
{
    sleep(TIME_TO_SHOW);
    // Set the object visible here. For example your mole1.visible=true;
    std::cout << "Object showed" << std::endl;
}
The version with the global loop is more flexible and lets you do many other useful things.
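To also make the mole disappear again (the problem in your edit), the same delta counter can drive a show/hide cycle. Here is a minimal sketch; it assumes the sprite and screen classes from your code, and the 3-second and 1-second windows are just illustrative values:
const float SHOW_EVERY = 3.0f; // seconds the mole stays hidden (illustrative)
const float SHOW_FOR   = 1.0f; // seconds the mole stays visible (illustrative)
// Call once per frame with the elapsed seconds since the last frame.
void UpdateMole(float dt, sprite &mole, screen &scr)
{
    static float counter = 0.0f;
    counter += dt;
    if (mole.visible && counter >= SHOW_FOR)
    {
        mole.visible = false;   // hide after it has been up long enough
        counter = 0.0f;
        mole.paint_sprite(scr); // repaint so the empty hole shows again
    }
    else if (!mole.visible && counter >= SHOW_EVERY)
    {
        mole.visible = true;    // pop the mole up
        counter = 0.0f;
        mole.paint_sprite(scr);
    }
}
Calling UpdateMole(DeltaFloat.count(), mole1, myscreen) from the global loop above replaces both modulo checks from your edit.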

Well, in order to show something every certain number of seconds you need a variable that stores the start time. Then you check whether the delta between the current time and the stored time is greater than a certain amount.
A good tool for this kind of task is a clock class.
Clock.h
#ifndef CLOCK_H
#define CLOCK_H
#include <chrono>
template<typename Clock_t = std::chrono::steady_clock>
class Clock
{
public:
    using TimePoint = decltype(Clock_t::now());
private:
    TimePoint m_start;
public:
    Clock() : m_start(Clock_t::now()) {
    }
    ~Clock() {
    }
    void reset() {
        m_start = Clock_t::now();
    }
    float getSeconds() const {
        return std::chrono::duration_cast<std::chrono::duration<float>>(Clock_t::now() - m_start).count();
    }
    long long getMilliseconds() const {
        return std::chrono::duration_cast<std::chrono::milliseconds>(Clock_t::now() - m_start).count();
    }
};
#endif
Example
#include <iostream>
#include "Clock.h"
int main() {
    Clock<> clock;
    constexpr long long spawnRate = 3000;
    while (true) {
        if (clock.getMilliseconds() >= spawnRate) {
            std::cout << "SPAWN\n";
            clock.reset();
        }
    }
}
Thus, for your case you would have a clock for the game, a clock for the mole spawner, and so on.
During the game you simply check whether a clock's elapsed time is greater than a certain delta; if it is, act on it.
Also, make sure to reset the clocks at the right moments, such as when resetting the mole spawn timer and when starting the game.
This should handle the timing of things. If you have other problems, ask about those separately.
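Applied to your game loop, the wiring looks roughly like this sketch. It assumes the Clock class above plus the sprite and screen classes from your code; the 3-second spawn interval and 1-second visibility window are illustrative:
Clock<> gameClock;   // total game time
Clock<> spawnClock;  // time since the mole last changed state
const float GAME_LENGTH = 60.0f; // seconds (5.0f while testing)
while (gameClock.getSeconds() < GAME_LENGTH)
{
    if (!mole1.visible && spawnClock.getSeconds() >= 3.0f)
    {
        mole1.visible = true;   // pop up every 3 seconds
        spawnClock.reset();
    }
    else if (mole1.visible && spawnClock.getSeconds() >= 1.0f)
    {
        mole1.visible = false;  // hide again after 1 second
        spawnClock.reset();
    }
    mole1.paint_sprite(myscreen);
    // ... keyboard handling and the rest of the loop from your main ...
    myscreen.display();
}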

Related

SetLocalTime causing PC to lag, how can I optimize it?

I want to create a program that will slow down Windows' time using SetLocalTime(). However, when I open the program, my PC starts to micro-stutter and game performance drops even though the process is barely using any CPU.
#include <iostream>
#include "Windows.h"
#include <thread>
#include <chrono>
using namespace std;
SYSTEMTIME st;
WORD hour;
WORD minute;
WORD second = 0;
int main()
{
    GetLocalTime(&st);
    hour = st.wHour;
    minute = st.wMinute;
    second = st.wSecond;
    for (;;)
    {
        for (int i = 0; i < 4; i++)
        {
            this_thread::sleep_for(chrono::milliseconds(500));
            st.wHour = hour;
            st.wMinute = minute;
            st.wSecond = second;
            SetLocalTime(&st);
        }
        second++;
        if (second == 60)
        {
            second = 0;
            minute++;
        }
        if (minute == 60)
        {
            minute = 0;
            hour++;
        }
    }
}
If you change the system clock, all the programs that use it for timing will also slow down.
From your comments, I gather that you wish to time-scale an application that uses the system time. Since you didn't get more specific, I can only suggest a general approach.
Create a time manager class that, when your application starts, gets the current system time and stores it as the application's base time. Instead of calling GetLocalTime() or GetSystemTime(), call a method of your class that returns the current time adjusted by a time dilatation factor.
class TimeManager
{
private:
    SYSTEMTIME _BaseTime;
    double _TimeDilatation;
public:
    TimeManager();
    void SetTimeDilatation(double timeDilatation);
    void GetTime(LPSYSTEMTIME lpSystemTime);
};
// Constructor gets the current local time and defaults to the
// normal flow of time, so GetTime() is safe to call right away.
TimeManager::TimeManager()
{
    GetLocalTime(&_BaseTime);
    _TimeDilatation = 1.0;
}
// Sets the time dilatation factor.
// 0.0 to 0.9: time will slow down
// 1.0: normal flow of time
// 1.1 to max double: time will go faster
void TimeManager::SetTimeDilatation(double timeDilatation)
{
    _TimeDilatation = timeDilatation;
}
// Get the current time, taking time dilatation into account
void TimeManager::GetTime(LPSYSTEMTIME lpSystemTime)
{
    SYSTEMTIME resultingTime;
    SYSTEMTIME realTime;
    FILETIME ftime;
    ULARGE_INTEGER uliTime;
    __int64 lowerValue, higherValue, result;
    // Get the current local time
    GetLocalTime(&realTime);
    // Translate the base time into a large integer for subtraction
    SystemTimeToFileTime(&_BaseTime, &ftime);
    uliTime.LowPart = ftime.dwLowDateTime;
    uliTime.HighPart = ftime.dwHighDateTime;
    lowerValue = uliTime.QuadPart;
    // Translate the current time into a large integer for subtraction
    SystemTimeToFileTime(&realTime, &ftime);
    uliTime.LowPart = ftime.dwLowDateTime;
    uliTime.HighPart = ftime.dwHighDateTime;
    higherValue = uliTime.QuadPart;
    // Scale the elapsed time by the dilatation factor
    // (cast back to integer ticks explicitly)
    result = (__int64)((higherValue - lowerValue) * _TimeDilatation);
    // Apply the scaled difference to the base time value
    result = lowerValue + result;
    // Convert the new time back into a SYSTEMTIME value
    uliTime.QuadPart = result;
    ftime.dwLowDateTime = uliTime.LowPart;
    ftime.dwHighDateTime = uliTime.HighPart;
    FileTimeToSystemTime(&ftime, &resultingTime);
    // Assign it to the pointer passed in as a parameter, and feel like a Time Lord.
    *lpSystemTime = resultingTime;
}
int main()
{
    TimeManager TM;
    TM.SetTimeDilatation(0.75f); // time passes at 75% of its normal speed
    for (;;)
    {
        SYSTEMTIME before, after;
        TM.GetTime(&before);
        // Do something that should take exactly one minute to process.
        TM.GetTime(&after);
        // Inspect the values of before and after: you'll see
        // that only 45 seconds have passed
    }
}
Note that this is a general idea to push you in the right direction. I haven't compiled that code, so there may be an error or five; feel free to point them out and I'll fix my post. I just didn't want to be too specific since your question is a bit broad, so this code may or may not help you depending on your use case. But that's generally how you slow down time for your own application without affecting the system time.

C++ get period of an std::chrono::duration

I was playing around with std::chrono.
While doing some testing I wondered whether I can get the ratio that was used to construct a std::chrono::duration, because I want to print it.
Here is some code to show what exactly I want to do:
You should be able to compile this on Windows and Linux (g++) by adding the -std=c++11 flag.
This small sample program measures the time your machine needs to count to the max int value.
main.cpp
#include <iostream>
#include "stopchrono.hpp"
#include <chrono>
#include <limits>
int main() {
    stopchrono<> main_timer(true);
    stopchrono<unsigned long long int, std::ratio<1,1000000000>, std::chrono::high_resolution_clock> m_timer(true); // <unsigned long long int stores the ticks, each tick is (1/1000000000) second, time points come from std::chrono::high_resolution_clock>
    stopchrono<unsigned long long int, std::ratio<1,1000000000>> mtimer(true);
    std::cout << "count to max of int ..." << std::endl;
    for (int i = 0; i < std::numeric_limits<int>::max(); i++) {}
    std::cout << "finished." << std::endl;
    main_timer.stop();
    m_timer.stop();
    mtimer.stop();
    std::cout << std::endl << "It took me " << (main_timer.elapsed()).count() << " Seconds." << std::endl;
    std::cout << " " << (m_timer.elapsed()).count() << std::endl; // print the number of elapsed ticks via std::chrono::duration::count()
    std::cout << " " << (mtimer.elapsed()).count() << std::endl;
    std::cin.ignore();
    return 0;
}
stopchrono.hpp
#ifndef STOPCHRONO_DEFINED
#define STOPCHRONO_DEFINED
#include <chrono>
// The first two template parameters determine the duration type that will be
// returned; the third defines which clock the time points are obtained from.
template<class rep = double, class period = std::ratio<1>, class clock = std::chrono::steady_clock>
class stopchrono { // measures how long parts of a program run
    typename clock::time_point start_point;
    std::chrono::duration<rep, period> elapsed_time;
    bool running;
public:
    stopchrono() :
        start_point(clock::now()),
        elapsed_time(elapsed_time.zero()),
        running(false)
    {}
    stopchrono(bool runnit) : // construct an already started object
        start_point(clock::now()),
        elapsed_time(elapsed_time.zero()),
        running(runnit)
    {}
    void start() { // set start_point to the current clock::now() if not running
        if (!running) {
            start_point = clock::now();
            running = true;
        }
    }
    void stop() { // add the current duration to elapsed_time
        if (running) {
            elapsed_time += std::chrono::duration_cast<std::chrono::duration<rep, period>>(clock::now() - start_point);
            running = false;
        }
    }
    void reset() { // set elapsed_time to 0 and running to false
        elapsed_time = elapsed_time.zero();
        running = false;
    }
    std::chrono::duration<rep, period> elapsed() { // return elapsed_time
        if (running) {
            return std::chrono::duration_cast<std::chrono::duration<rep, period>>(elapsed_time + (clock::now() - start_point));
        } else {
            return elapsed_time;
        }
    }
    bool is_running() const { // determine whether the timer is running
        return running;
    }
};
#endif
actual sample output
count to max of int ...
finished.
It took me 81.6503 Seconds.
81650329344
81650331344
target sample output
count to max of int ...
finished.
It took me 81.6503 Seconds.
81650329344 (1/1000000000)seconds
81650331344
How can I obtain the used period std::ratio<1,1000000000> from the returned duration, even if I don't know which one I used to create the stopchrono object?
Is that even possible?
The std::chrono::duration class has a member typedef period which is what you are looking for. You can access it via decltype(your_variable)::period. Something like the following should do:
auto elapsed = main_timer.elapsed();
cout << elapsed.count() << " " << decltype(elapsed)::period::num << "/"
     << decltype(elapsed)::period::den << endl;
See also this working example which prints the elapsed time and the ratio of seconds.
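If you want the exact target output from the question, the ratio can be folded into a small helper; printDuration is a hypothetical name, and this sketch assumes C++11:
#include <iostream>
#include <chrono>
// Prints a duration's tick count followed by its period as "(num/den)seconds".
template<class Rep, class Period>
void printDuration(const std::chrono::duration<Rep, Period> &d)
{
    std::cout << d.count() << " (" << Period::num << "/" << Period::den << ")seconds\n";
}
int main()
{
    std::chrono::nanoseconds n(81650329344LL);
    printDuration(n); // prints: 81650329344 (1/1000000000)seconds
}
With the stopchrono class from the question, printDuration(m_timer.elapsed()) would print the second line of the target output.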

Should pthread program take longer

Maybe I’m confusing myself with threads, but different parts of my understanding of threading seem to conflict with each other.
I’ve created a program which uses POSIX pthreads. Without these threads the program takes 0.061723 seconds to run; with them it takes 0.081061 seconds.
At first I thought this is what should happen, as threads allow something to happen while other things are going on, i.e. processing a lot of data on one thread while keeping a responsive UI on another. That would mean the data processing takes longer, since the CPU divides its time between the UI and the data.
However, surely the point of multithreading is to make the program take advantage of multiple CPUs/cores?
As you can tell, I’m something of an intermediate, so excuse me if it’s a simple question.
But what should I expect the program to do?
I’m running this on a mid-2012 MacBook Pro 13” base model. The CPU is a 22 nm "Ivy Bridge" 2.5 GHz Intel "Core i5" processor (3210M), with two independent processor cores on a single silicon chip.
UPDATED WITH CODE
This is in the main function. I didn’t include the variable declarations, for brevity, but I’m sure you can work out what each one does from its name:
// Loop through all items we need to process
//
while (totalNumberOfItemsToProcess > 0 && numberOfItemsToProcessOnEachIteration > 0 && startingIndex <= totalNumberOfItemsToProcess)
{
    // As long as we have items to process...
    //
    // Align the index with the number of items to process per iteration
    //
    const uint endIndex = startingIndex + (numberOfItemsToProcessOnEachIteration - 1);
    // Create range
    //
    Range range = RangeMake(startingIndex, endIndex);
    rangesProcessed[i] = range;
    // Create a thread identifier, 'newThread'
    //
    pthread_t newThread;
    // Create thread with range
    //
    int threadStatus = pthread_create(&newThread, NULL, processCoordinatesInRangePointer, &rangesProcessed[i]);
    if (threadStatus != 0)
    {
        std::cout << "Failed to create thread" << std::endl;
        exit(1);
    }
    // Add thread to threads
    //
    threadIDs.push_back(newThread);
    // Set up the next iteration: realign the starting index with the
    // number of items to process per iteration
    //
    startingIndex = (endIndex + 1);
    if (startingIndex > (totalNumberOfItemsToProcess - numberOfItemsToProcessOnEachIteration))
    {
        // Fewer items remain than the per-iteration count
        //
        numberOfItemsToProcessOnEachIteration = totalNumberOfItemsToProcess - startingIndex;
    }
    // Increment index
    //
    i++;
}
std::cout << "Number of threads: " << threadIDs.size() << std::endl;
// Loop through all threads, rejoining them back up
//
for (size_t i = 0; i < threadIDs.size(); i++)
{
    // Wait for each thread to finish before returning
    //
    pthread_t currentThreadID = threadIDs[i];
    int joinStatus = pthread_join(currentThreadID, NULL);
    if (joinStatus != 0)
    {
        std::cout << "Thread join failed" << std::endl;
        exit(1);
    }
}
The processing functions:
void processCoordinatesAtIndex(uint index)
{
    const int previousIndex = (index - 1);
    // Get coordinates from the terrain
    //
    Coordinate3D previousCoordinate = terrain[previousIndex];
    Coordinate3D currentCoordinate = terrain[index];
    // Calculate...
    //
    // Euclidean distance
    //
    double euclideanDistance = Coordinate3DEuclideanDistanceBetweenPoints(previousCoordinate, currentCoordinate);
    euclideanDistances[index] = euclideanDistance;
    // Angle of slope
    //
    double slopeAngle = Coordinate3DAngleOfSlopeBetweenPoints(previousCoordinate, currentCoordinate, false);
    slopeAngles[index] = slopeAngle;
}
void processCoordinatesInRange(Range range)
{
    for (uint i = range.min; i <= range.max; i++)
    {
        processCoordinatesAtIndex(i);
    }
}
void *processCoordinatesInRangePointer(void *threadID)
{
    // Cast the pointer to the right type
    //
    struct Range *range = (struct Range *)threadID;
    processCoordinatesInRange(*range);
    return NULL;
}
UPDATE:
Here are my global variables, which are only global for simplicity - don’t have a go!
std::vector<Coordinate3D> terrain;
std::vector<double> euclideanDistances;
std::vector<double> slopeAngles;
std::vector<Range> rangesProcessed;
std::vector<pthread_t> threadIDs;
Correct me if I’m wrong, but I think the issue was with how the elapsed time was measured: clock() returns CPU time summed across all of a process's threads, so a multithreaded run looks slower even when its wall-clock time is shorter. Instead of clock_t I’ve moved to gettimeofday(), which measures wall-clock time, and that reports a shorter time: from a non-threaded 22.629000 ms down to a threaded 8.599000 ms.
Does this seem right to people?
Of course, my original question was based on whether or not a multithreaded program SHOULD be faster, so I won’t mark this answer as the correct one for that reason.
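For reference, a minimal sketch contrasting the two measurements; it uses clock_gettime(CLOCK_MONOTONIC) for wall-clock time, which serves the same purpose here as gettimeofday():
#include <stdio.h>
#include <time.h>
int main()
{
    struct timespec wallStart, wallEnd;
    clock_gettime(CLOCK_MONOTONIC, &wallStart); // wall-clock time
    clock_t cpuStart = clock();                 // CPU time, summed over all threads
    // ... run the threaded workload here ...
    clock_t cpuEnd = clock();
    clock_gettime(CLOCK_MONOTONIC, &wallEnd);
    double wallMs = (wallEnd.tv_sec - wallStart.tv_sec) * 1000.0
                  + (wallEnd.tv_nsec - wallStart.tv_nsec) / 1e6;
    double cpuMs = 1000.0 * (cpuEnd - cpuStart) / CLOCKS_PER_SEC;
    // With an effective parallel speedup, wallMs shrinks while cpuMs does not.
    printf("wall: %f ms, cpu: %f ms\n", wallMs, cpuMs);
    return 0;
}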

Delay execution 1 second

So I am trying to program a simple tick-based game. I write in C++ on a linux machine. The code below illustrates what I'm trying to accomplish.
for (unsigned int i = 0; i < 40; ++i)
{
    functioncall();
    sleep(1000); // wait 1 second for the next function call
}
Well, this doesn't work. It seems that it sleeps for 40 seconds, then prints out whatever the result is from the function call.
I also tried creating a new function called delay, and it looked like this:
void delay(int seconds)
{
    time_t start, current;
    time(&start);
    do
    {
        time(&current);
    } while ((current - start) < seconds);
}
Same result here. Anybody?
To reiterate what has already been stated by others, with a concrete example:
Assuming you're using std::cout for output, you should call std::cout.flush(); right before the sleep command. See this MS knowledgebase article.
sleep(n) waits for n seconds, not n milliseconds.
Also, as mentioned by Bart, if you're writing to stdout, you should flush the stream after each write - otherwise, you won't see anything until the buffer is flushed.
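Putting both fixes together, the loop from the question becomes something like the sketch below; here functioncall() is just a stand-in for the asker's per-tick work:
#include <iostream>
#include <unistd.h> // sleep()
// Stand-in for the per-tick work from the question.
void functioncall()
{
    std::cout << "tick";
}
int main()
{
    for (unsigned int i = 0; i < 40; ++i)
    {
        functioncall();
        std::cout.flush(); // make buffered output visible immediately
        sleep(1);          // sleep() takes seconds, not milliseconds
    }
}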
So I am trying to program a simple tick-based game. I write in C++ on a linux machine.
If functioncall() can take a considerable time, then your ticks won't be evenly spaced if you always sleep the same amount of time.
You might be trying to do this:
while 1: // mainloop
    functioncall()
    tick() # wait for the next tick
Here tick() sleeps approximately delay - time_it_takes_for(functioncall), i.e., the longer functioncall() takes, the less time tick() sleeps.
sleep() sleeps an integer number of seconds. You might need a finer time resolution; you could use clock_nanosleep() for that.
Example Clock::tick() implementation
// $ g++ *.cpp -lrt && time ./a.out
#include <iostream>
#include <stdio.h>  // perror()
#include <stdlib.h> // ldiv()
#include <time.h>   // clock_nanosleep()
namespace {
class Clock {
    const long delay_nanoseconds;
    bool running;
    struct timespec time;
    const clockid_t clock_id;
public:
    explicit Clock(unsigned fps) : // specify frames per second
        delay_nanoseconds(1e9/fps), running(false), time(),
        clock_id(CLOCK_MONOTONIC) {}
    void tick() {
        if (clock_nanosleep(clock_id, TIMER_ABSTIME, nexttick(), 0)) {
            // interrupted by a signal handler, or an error
            perror("clock_nanosleep");
            exit(EXIT_FAILURE);
        }
    }
private:
    struct timespec* nexttick() {
        if (not running) { // initialize `time`
            running = true;
            if (clock_gettime(clock_id, &time)) {
                // process errors
                perror("clock_gettime");
                exit(EXIT_FAILURE);
            }
        }
        // increment `time` by delay_nanoseconds, carrying into seconds
        ldiv_t q = ldiv(time.tv_nsec + delay_nanoseconds, 1000000000);
        time.tv_sec += q.quot;
        time.tv_nsec = q.rem;
        return &time;
    }
};
}
int main() {
    Clock clock(20);
    char arrows[] = "\\|/-";
    for (int nframe = 0; nframe < 100; ++nframe) { // mainloop
        // process a single frame
        std::cout << arrows[nframe % (sizeof(arrows)-1)] << '\r' << std::flush;
        clock.tick(); // wait for the next tick
    }
}
Note: I've used the std::flush manipulator to update the output immediately.
If you run the program it should take about 5 seconds (100 frames at 20 frames per second).
On Linux you can use usleep(), which is declared in <unistd.h>.
On Windows you can use Sleep() from <windows.h>.
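For what it's worth, C++11 offers a portable alternative to both in <thread>; this minimal sketch sleeps for one second on Linux and Windows alike:
#include <thread>
#include <chrono>
int main()
{
    // Sleep for one second, portably (C++11).
    std::this_thread::sleep_for(std::chrono::seconds(1));
}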

Making a Timer in c++?

I am developing a simple chase-the-dot style game in C++, where you must click a drawn circle on the display, which then jumps to another random location with every click. I want to make the game end after 60 seconds or so, write the score to a text file, and then, upon launching the program, read the text file back, store the scores in an array, and rearrange them into a high-score table.
I think I can figure out the high score and the mouse clicking in a certain area myself, but I am completely stuck on creating the timer.
Any help appreciated, cheers!
In C++11 there is easy access to timers. For example:
#include <chrono>
#include <iostream>
int main()
{
    std::cout << "begin\n";
    std::chrono::steady_clock::time_point tend = std::chrono::steady_clock::now()
                                               + std::chrono::minutes(1);
    while (std::chrono::steady_clock::now() < tend)
    {
        // do your game
    }
    std::cout << "end\n";
}
Your platform may or may not support <chrono> yet. There is a boost implementation of <chrono>.
Without reference to a particular framework or even the OS, this is unanswerable.
In SDL there is SDL_GetTicks(), which suits the purpose.
On Linux, there are the general-purpose clock_gettime() and gettimeofday(), which should work pretty much everywhere (but beware of the details).
The Win32 API has several function calls related to this, including timer callback mechanisms, such as GetTickCount, timers, etc. (article)
Using timers is usually closely related to the meme of 'idle' processing, so you'd want to search for that topic as well (and this is where the message pump comes in, because the message pump decides when (e.g.) WM_IDLE messages get sent; Gtk has a similar concept of idle hooks, and I reckon pretty much every UI framework does).
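For the 60-second game in the question, the clock_gettime() route mentioned above looks roughly like the sketch below (Linux; older glibc needs -lrt when linking):
#include <stdio.h>
#include <time.h>
// Seconds elapsed since `start`, using a monotonic clock that is
// immune to system time changes.
static double secondsSince(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec) + (now.tv_nsec - start->tv_nsec) / 1e9;
}
int main()
{
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);
    while (secondsSince(&start) < 60.0)
    {
        // handle clicks, redraw the dot, etc.
    }
    // time is up: write the score to the high-score file here
    return 0;
}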
Usually a GUI program has a so-called "message pump" loop. The check of that timer should be part of your loop:
while (running)
{
    if (current_time() > end_time)
    {
        // time is over ...
        break;
    }
    if (next_ui_message(msg))
        dispatch(msg);
}
Try this one out:
// Creating a digital watch in C++
#include <iostream>
#include <Windows.h>
using namespace std;
struct time {
    int hr, min, sec;
};
int main()
{
    time a;
    a.hr = 0;
    a.min = 0;
    a.sec = 0;
    for (int i = 0; i < 24; i++)
    {
        if (a.hr == 23)
        {
            a.hr = 0;
        }
        for (int j = 0; j < 60; j++)
        {
            if (a.min == 59)
            {
                a.min = 0;
            }
            for (int k = 0; k < 60; k++)
            {
                if (a.sec == 59)
                {
                    a.sec = 0;
                }
                cout << a.hr << " : " << a.min << " : " << a.sec << endl;
                a.sec++;
                Sleep(1000);
                system("Cls");
            }
            a.min++;
        }
        a.hr++;
    }
}
See the details at: http://www.programmingtunes.com/creating-timer-c/