I want to create a C++ program for a bank queue, where every 3 minutes a new customer enters the queue and every customer needs 5 minutes of service. After the first 30 minutes, the program should print out:
The arrival time of each customer
The departure time of each customer
How many customers are in the line
Which customer is currently being served
This is the code I have written so far:
#include <queue>
#include <ctime>
#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    queue<int> inq;
    queue<int> inservice;
    char hold_time[10];               // buffer for _strtime_s
    int z = 1;
    for (int i = 0; i <= 9; i++)
    {
        inq.push(z);
        Sleep(180000);                // a customer arrives every 3 minutes
        _strtime_s(hold_time);
        cout << "Time of arriving for customer number " << z << " is: " << hold_time << endl;
        z++;
    }
    do
    {
        inservice.push(inq.front());
        Sleep(300000);                // each service takes 5 minutes
        _strtime_s(hold_time);
        cout << "Time of leaving for customer number " << inq.front() << " is: " << hold_time << endl;
        inq.pop();
    } while (!inq.empty());
    cout << "Number of customers waiting: " << inq.size() << endl;
    cout << "Customer number " << inservice.front() << " is currently being served" << endl;
    return 0;
}
The current code executes line by line: customers will not be moved to service until the arrival loop is done.
To fix the timing I would have to run the two loops at the same time, so that customers enter the queue while other customers are being served.
Any suggestions?
To run two loops at the same time, you will need to put them in different threads; a minimal sketch follows. Maybe start here to understand threading.
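To make that concrete, here is a minimal two-thread sketch of my own, assuming C++11 <thread> and <mutex>; the intervals are scaled down to seconds so a test run finishes quickly, and all names are mine, not the original poster's:

#include <chrono>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> line;
std::mutex m;

void arrivals()
{
    for (int c = 1; c <= 10; ++c) {
        std::this_thread::sleep_for(std::chrono::seconds(3)); // "3 minutes", scaled down
        std::lock_guard<std::mutex> lock(m);
        std::cout << "Customer " << c << " arrives\n";
        line.push(c);
    }
}

void service()
{
    for (int served = 0; served < 10; ) {
        int customer = 0;
        {
            std::lock_guard<std::mutex> lock(m);
            if (!line.empty()) {
                customer = line.front();
                line.pop();
            }
        }
        if (customer == 0) {
            // nobody waiting yet; poll again shortly
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            continue;
        }
        std::this_thread::sleep_for(std::chrono::seconds(5)); // "5 minutes" of service
        std::lock_guard<std::mutex> lock(m);
        std::cout << "Customer " << customer << " leaves\n";
        ++served;
    }
}

int main()
{
    std::thread a(arrivals);
    std::thread s(service);
    a.join();
    s.join();
    return 0;
}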
If you want a simulator of bank queuing, you can consider implementing it in Python using SimPy. A bank queue is even one of the examples:
SimPy Bank Example
You need a more sophisticated code architecture for this. Either naive threading, or an event-dispatch framework that has one busy-loop in main and periodically (based on timers and on events) calls out to other functions as needed.
The most basic yet robust solution would be one loop that iterates very quickly (with a very small sleep), repeatedly checking the time to see whether it's time to perform any actions. Those actions will include customers arriving and customers leaving. A skeleton of that loop is sketched below.
Using sleep in any other context is, IMO, usually a sign of bad design.
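A bare-bones skeleton of that fast-polling loop (my own sketch, using std::chrono; the action checks are placeholders):

#include <chrono>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    while (true) {
        const auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(
            clock::now() - start).count();
        // if elapsed matches an arrival time:   handle the arrival
        // if elapsed matches a departure time:  handle the departure
        if (elapsed >= 30 * 60)   // stop after the 30-minute window
            break;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    return 0;
}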
A sophisticated way to approach this would indeed use threading, but if you don't want to do that you should instead loop over time, i.e.
for (int minutes = 1; minutes <= 30; minutes++) {
    // if (minutes % 3 == 0) handle an arrival
    // if (minutes % 5 == 0) handle a departure
}
You could loop over seconds, microseconds, whatnot.
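Fleshing that out, here is my own sketch of the simulated-time approach, with no real waiting at all; the variable names are assumptions, not the asker's:

#include <queue>
#include <iostream>

int main()
{
    std::queue<int> line;
    int nextCustomer = 1;
    int serving = 0;          // 0 means the teller is idle
    int serviceEndsAt = 0;

    for (int minute = 1; minute <= 30; ++minute) {
        if (minute % 3 == 0) {                          // arrival every 3 minutes
            line.push(nextCustomer);
            std::cout << "Customer " << nextCustomer << " arrives at minute " << minute << "\n";
            ++nextCustomer;
        }
        if (serving != 0 && minute == serviceEndsAt) {  // service done
            std::cout << "Customer " << serving << " leaves at minute " << minute << "\n";
            serving = 0;
        }
        if (serving == 0 && !line.empty()) {            // start the next service
            serving = line.front();
            line.pop();
            serviceEndsAt = minute + 5;                 // 5 minutes of service
        }
    }
    std::cout << "Waiting in line: " << line.size() << "\n";
    if (serving != 0)
        std::cout << "Currently serving customer " << serving << "\n";
    return 0;
}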
I've got a simple C++ task - I need to create an alarm app, which triggers alarms a few times.
For example, there is a text file with lines of times in the format: hour minutes
I read these into an array.
My idea is to create an infinite loop that checks what time it is every 60 seconds. Inside this loop, it checks if time == time_array_element_1 or time == time_array_element_2, etc.
Could you guys help me decide - maybe there is a more optimal way to do it?
"optimal" strongly depends on what you want to achieve:
If you just want to have an alarm: use an existing app.
If you need to implement it in your own program, use a library that provides timers (e.g., Qt, Boost, ...)
If you can't use 3rd-party libraries because you're not allowed to (homework?): build your own.
If you don't want to or cannot build your own timer library: use that loop approach.
If you want to run the alarm at a particular time every day, you could write an infinite loop that checks whether that time has come. Pseudo-Code:
const int alarm_time;   // seconds since midnight at which to fire
const int sleep_time;   // polling interval in seconds

while (true) {
    const int current = get_seconds_since_midnight();
    if (current >= alarm_time && current - alarm_time < sleep_time) {
        alarm();
    }
    sleep(sleep_time);
}
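In real C++ that might look like this (a runnable sketch, assuming C++11; the 07:30 alarm time and the helper name are placeholders of mine):

#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

// Hypothetical helper matching the pseudo-code above.
int get_seconds_since_midnight()
{
    std::time_t now = std::time(nullptr);
    std::tm local = *std::localtime(&now);
    return local.tm_hour * 3600 + local.tm_min * 60 + local.tm_sec;
}

int main()
{
    const int alarm_time = 7 * 3600 + 30 * 60;  // 07:30:00, a placeholder
    const int sleep_time = 60;                  // poll once per minute

    while (true) {
        const int current = get_seconds_since_midnight();
        if (current >= alarm_time && current - alarm_time < sleep_time)
            std::cout << "Alarm!\n";
        std::this_thread::sleep_for(std::chrono::seconds(sleep_time));
    }
}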
However, you'd still need to keep this program running all the time.
That's fine if you just want to learn.
But for any other use-case, this task should be handled by the OS (e.g., cron on unix).
Using functions available in the WinAPI, is it possible to ensure that a specific function is called according to a millisecond-precise timestamp? And if so, what would be the correct implementation?
I'm trying to write tool assisted speedrun software. This type of software sends user input commands at very exact moments after the script is launched to perform humanly impossible inputs that allow faster completion of videogames. A typical sequence looks something like this:
At 0 milliseconds send right key down event
At 5450 milliseconds send right key up, and up key down event
At 5460 milliseconds send left key down event
etc..
What I've tried so far is listed below. As I'm not experienced in the low-level nuances of high-precision timers, I have some results, but no understanding of why they are this way:
Using Sleep in combination with timeBeginPeriod set to 1 between inputs gave the worst results. Out of 20 executions, 0 met the timing requirement. I believe this is well explained in the documentation for Sleep: "Note that a ready thread is not guaranteed to run immediately. Consequently, the thread may not run until some time after the sleep interval elapses." My understanding is that Sleep isn't up to this task.
Using a busy-wait loop checking GetTickCount64 with timeBeginPeriod set to 1 produced slightly better results. Out of 20 executions, 2 met the timing requirement, but apparently that was just a fortunate circumstance. I've looked up some info on this timing function, and my suspicion is that it doesn't update often enough to allow 1-millisecond accuracy.
Replacing GetTickCount64 with QueryPerformanceCounter improved the situation slightly. Out of 20 executions, 8 succeeded. I wrote a logger that stores the QPC timestamps right before each input is sent and dumps the values to a file after the sequence is finished. I even went as far as to preallocate space for all variables in my code to make sure that time isn't wasted on needless explicit memory allocations. The logged values diverge from the timestamps I supply the program by anything from 1 to 40 milliseconds. General-purpose programming can live with that, but in my case a single frame of the game is 16.7 ms, so in the worst case with delays like these I can be 3 frames late, which defeats the purpose of the whole experiment.
Setting the process priority to high didn't make any difference.
At this point I'm not sure where to look next. My two guesses are that maybe the time it takes to iterate the busy loop and check the time using (QPCNow - QPCStart) / QPF is itself somehow long enough to introduce the mentioned delay, or that the process is interrupted by the OS scheduler somewhere along the execution of the loop and control returns too late.
The game is 100% deterministic and locked at 60 fps. I am convinced that if I manage to make the input be timed accurately, the result will always be 20 out of 20, but at this point I'm beginning to suspect that this may not be possible.
EDIT: As requested, here is a stripped-down testing version. Breakpoint after the second call to ExecuteAtTime and view the TimeBeforeInput variables. For me it reads 1029 and 6017 (I've omitted the decimals), meaning that the code executed 29 and 17 milliseconds after it should have.
Disclaimer: the code is not written to demonstrate good programming practices.
#include "stdafx.h"
#include <windows.h>
__int64 g_TimeStart = 0;
double g_Frequency = 0.0;
double g_TimeBeforeFirstInput = 0.0;
double g_TimeBeforeSecondInput = 0.0;

// Milliseconds elapsed since g_TimeStart, measured with QueryPerformanceCounter.
double GetMSSinceStart(double& debugOutput)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    debugOutput = double(now.QuadPart - g_TimeStart) / g_Frequency;
    return debugOutput;
}

// Busy-wait until the target timestamp, then send the prepared input events.
void ExecuteAtTime(double ms, INPUT* keys, double& debugOutput)
{
    while (GetMSSinceStart(debugOutput) < ms)
    {
    }
    SendInput(2, keys, sizeof(INPUT));
}

// Build a key-down/key-up pair for the left Windows key.
INPUT* InitKeys()
{
    INPUT* result = new INPUT[2];
    ZeroMemory(result, 2 * sizeof(INPUT));

    INPUT winKey;
    winKey.type = INPUT_KEYBOARD;
    winKey.ki.wScan = 0;
    winKey.ki.time = 0;
    winKey.ki.dwExtraInfo = 0;
    winKey.ki.wVk = VK_LWIN;

    winKey.ki.dwFlags = 0;                 // key down
    result[0] = winKey;
    winKey.ki.dwFlags = KEYEVENTF_KEYUP;   // key up
    result[1] = winKey;
    return result;
}

int _tmain(int argc, _TCHAR* argv[])
{
    INPUT* keys = InitKeys();

    LARGE_INTEGER qpf;
    QueryPerformanceFrequency(&qpf);
    g_Frequency = double(qpf.QuadPart) / 1000.0;   // ticks per millisecond

    LARGE_INTEGER qpcStart;
    QueryPerformanceCounter(&qpcStart);
    g_TimeStart = qpcStart.QuadPart;

    // Opens the Windows start panel one second after launch
    ExecuteAtTime(1000.0, keys, g_TimeBeforeFirstInput);
    // Closes the Windows start panel 5 seconds later
    ExecuteAtTime(6000.0, keys, g_TimeBeforeSecondInput);

    delete[] keys;
    Sleep(1000);
    return 0;
}
I am using OpenMP to perform a time-consuming operation. I am unable to update a GTK+ ProgressBar from within the time-consuming loop while the operations are carried out. The code I have updates the ProgressBar, but it does so after everything is done, not as the code progresses.
This is my dummy code, which doesn't update the ProgressBar until everything is done:
void largeTimeConsumingFunction(GtkProgressBar** progressBar) {
    int extensiveOperationSize = 1000000;
    #pragma omp parallel for ordered schedule(dynamic)
    for (int i = 0; i < extensiveOperationSize; i++) {
        // Do something that will take a lot of time with the data
        #pragma omp ordered
        {
            // Update the progress bar
            gtk_progress_bar_set_fraction(*progressBar, i / (double)extensiveOperationSize);
        }
    }
}
When I do the same without OpenMP, the same thing happens: it doesn't get updated until the end.
How can I get the GTK+ widget to update while the loop is working?
Edit: This is just dummy code to keep it short and readable. It has the same structure as my actual code, but in my actual code I don't know beforehand the number of items I will be processing. It could be 10 or more than 1 million items, and I will have to perform some action for each of them.
There are two potential issues here:
First, if you are performing long-running computations that might block the main thread, you have to call
while (gtk_events_pending ())
gtk_main_iteration ();
every now and then to keep the UI responsive (which includes redrawing itself). For example:
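Here is a sketch for the single-threaded case (the poster's test without OpenMP); the throttling interval is my own choice:

#include <gtk/gtk.h>

// Single-threaded variant of the question's function: flushing pending GTK+
// events inside the loop lets the progress bar repaint as work progresses.
void largeTimeConsumingFunction(GtkProgressBar* progressBar)
{
    const int extensiveOperationSize = 1000000;
    for (int i = 0; i < extensiveOperationSize; i++) {
        // ... heavy per-item work ...
        if (i % 1000 == 0) {   // throttle the redraws a little
            gtk_progress_bar_set_fraction(progressBar, i / (double)extensiveOperationSize);
            while (gtk_events_pending())
                gtk_main_iteration();
        }
    }
}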
Second, you should only call GTK+ functions from the main thread. With OpenMP the loop body runs on several threads, so the gtk_progress_bar_set_fraction() call itself is a problem.
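If you keep the OpenMP version, one common pattern (my sketch, not the poster's code; the ProgressUpdate struct and the every-1000-iterations throttle are my own choices) is to have the worker threads hand updates to the main loop with g_idle_add(), which is safe to call from any thread:

#include <gtk/gtk.h>
#include <atomic>

struct ProgressUpdate {
    GtkProgressBar* bar;
    double fraction;
};

// Runs on the main thread via the GLib main loop.
static gboolean applyUpdate(gpointer data)
{
    ProgressUpdate* u = static_cast<ProgressUpdate*>(data);
    gtk_progress_bar_set_fraction(u->bar, u->fraction);
    delete u;
    return G_SOURCE_REMOVE;   // one-shot callback
}

void largeTimeConsumingFunction(GtkProgressBar* progressBar)
{
    const int extensiveOperationSize = 1000000;
    std::atomic<int> done(0);
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < extensiveOperationSize; i++) {
        // ... heavy per-item work ...
        int d = ++done;
        if (d % 1000 == 0) {   // throttle so the main loop isn't flooded
            g_idle_add(applyUpdate, new ProgressUpdate{
                progressBar, d / (double)extensiveOperationSize });
        }
    }
}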
#include <iostream>
#include <ctime>
using namespace std;

int main()
{
    clock_t t;
    t = clock();
    for (int i = 0; i < 1000000; i++)
        ;
    t = clock() - t;
    cout << (float)t / CLOCKS_PER_SEC << endl;
    return 0;
}
I wrote a sample C++ program to measure running time. Every time I run this code I get a different output. How is this happening? Shouldn't the time required by this program be the same every time I run it?
I think the running times you measured are genuine. On a multitasking operating system many threads share the CPU, so while your program is running, other programs may request the CPU and your program gets delayed.
You should read:
Easily measure elapsed time
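For reference, a minimal std::chrono version of the measurement (my sketch; the volatile accumulator just keeps the compiler from optimizing the loop away):

#include <chrono>
#include <iostream>

int main()
{
    const auto start = std::chrono::steady_clock::now();
    volatile long long sum = 0;   // volatile prevents the loop being optimized away
    for (int i = 0; i < 1000000; i++)
        sum = sum + i;
    const auto end = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration<double>(end - start).count() << " s\n";
    return 0;
}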
If you are curious about game timers, you can use a game loop. Follow this:
How to make timer for a game loop?
I'm constructing a data visualisation system that visualises over 100,000 data points (visits to a website) across a time period. The time period (say 1 week) is then converted into simulation time (1 week = 2 minutes in simulation), and a task is performed on each and every piece of data at the specific time it happens in simulation time (the time each visit occurred during the week in real time). With me? =p
In other programming languages (eg. Java) I would simply set a timer for each datapoint. After each timer is complete it triggers a callback that allows me to display that datapoint in my app. I'm new to C++ and unfortunately it seems that timers with callbacks aren't built-in. Another method I would have done in ActionScript, for example, would be using custom events that are triggered after a specific timeframe. But then again I don't think C++ has support for custom events either.
In a nutshell: say I have 1000 pieces of data that span a 60-second period. Each piece of data has its own time in relation to that 60-second period. For example, one needs to trigger something at 1 second, another at 5 seconds, etc.
Am I going about this the right way, or is there a much easier way to do this?
Ps. I'm using Mac OS X, not Windows
I would not use timers to do that. Sounds like you have too many events and they may lie too close to each other. Performance and accuracy may be bad with timers.
A simulation is normally done like this:
You simply run a loop (iterations), and on every iteration you add either a measured (for real time) or a constant (for non-real time) amount to your simulation time.
Then you manually check all your events and execute them if they are due.
In your case it would help to have them sorted by execution time, so you would not have to loop through all of them every iteration.
Time measuring can be done with the C function gettimeofday() for low accuracy, or with better functions for higher accuracy, e.g. QueryPerformanceCounter() on Windows - I don't know the equivalent for Mac.
Just make a "timer" mechanism yourself, that's the best, fastest and most flexible way.
-> make an array of events (linked to each object the event happens to) (std::vector in C++/STL)
-> sort the array on time (std::sort in C++/STL)
-> then just loop over the array and trigger each object's action/method when its time falls inside the current range.
Roughly that gives in C++:
// action upon data + data itself
class Object {
public:
    explicit Object(Data d) : data(d) {}
    void Action() { display(data); }
    Data data;
};

// event time + the object the event acts upon
class Event {
public:
    Event(double t, Object o) : time(t), object(o) {}
    // useful for std::sort
    bool operator<(const Event& e) const { return time < e.time; }
    double time;
    Object object;
};
//init
std::vector<Event> myEvents;
myEvents.push_back(Event(1.0, Object(data0)));
//...
myEvents.push_back(Event(54.0, Object(data10000)));
// could be removed if push_back() is guaranteed to be in the correct order
std::sort(myEvents.begin(), myEvents.end());

// the way you handle time... the step size controls the tick granularity
const double step = 0.1;
const double endTime = 60;
std::vector<Event>::iterator itNextEvent = myEvents.begin();
for (double currtime = 0.0; currtime < endTime; currtime += step)
{
    // fire every event whose time has come; since the vector is sorted,
    // we can stop at the first event that is still in the future
    while (itNextEvent != myEvents.end() && itNextEvent->time <= currtime)
    {
        itNextEvent->object.Action();   // action speaks louder than words
        ++itNextEvent;
    }
}
P.S.: About custom events, you might want to read up on delegates in C++ and function/method pointers.
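To illustrate that pointer, here is a minimal sketch of my own, assuming C++11 <functional>; none of these names come from the answers above:

#include <functional>
#include <iostream>
#include <vector>

struct TimedCallback {
    double time;
    std::function<void()> action;   // any callable: lambda, free function, bound method
};

int main()
{
    std::vector<TimedCallback> events;
    events.push_back({1.0, [] { std::cout << "event at 1s\n"; }});
    events.push_back({5.0, [] { std::cout << "event at 5s\n"; }});

    // In the real simulation loop each action would fire once its time is reached.
    for (const auto& e : events)
        e.action();
    return 0;
}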
If you are using native C++, you should look at the Timers section of the Windows API on the MSDN website. They should tell you exactly what you need to know.