How to call a method/function 50 times in a second - C++

How do I call a method/function 50 times in a second, then calculate the time spent? If the time spent is less than one second, sleep for (1 - timespent) seconds.
Below is the pseudocode:
while (true)
{
    start_time = // find current time
    for (int msg_count = 0; msg_count < 50; msg_count++)
        send_msg();
    // Check time after sending 50 messages
    curr_time = // find current time
    time_spent = curr_time - start_time;
    // If the batch took less than one second, sleep for the remainder
    waiting_time = (time_spent < 1 sec) ? (1 sec - time_spent) : 0;
    wait for waiting_time;
}
I am new to timer APIs. Can anyone tell me which timer APIs I should use to achieve this? I want portable code.

First, read the time(7) man page.
Then you may want to call timer_create(2) to set up a timer. To query the current time, use clock_gettime(2).
You may also want to wait on and multiplex several inputs and outputs; poll(2) is useful for this. To sleep for a small amount of time without burning CPU, consider nanosleep(2).
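As a minimal sketch of that combination (POSIX, compiled as C++; send_msg() stands in for the asker's routine): time a batch of 50 sends with clock_gettime(2), then hand the remainder of the second to nanosleep(2).

#include <time.h>

void send_msg(); // the asker's send routine

void send_loop()
{
    for (;;) {
        timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start); // batch start time
        for (int i = 0; i < 50; i++)
            send_msg();
        clock_gettime(CLOCK_MONOTONIC, &now);
        long spent_ns = (now.tv_sec - start.tv_sec) * 1000000000L
                      + (now.tv_nsec - start.tv_nsec);
        if (spent_ns < 1000000000L) {           // finished early:
            timespec rest = { 0, 1000000000L - spent_ns };
            nanosleep(&rest, nullptr);          // sleep away the rest of the second
        }
    }
}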
If your timer delivers signals, read signal(7) and be careful: signal handlers are restricted to async-signal-safe functions (consider a handler that just sets some global volatile sig_atomic_t flag). You may also be interested in the Linux-specific timerfd_create(2) (which you could poll or pass to your event loop).
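For instance, a hedged sketch of the timerfd approach (Linux-only; a 20 ms interval gives 50 expirations per second):

#include <sys/timerfd.h>
#include <poll.h>
#include <unistd.h>
#include <cstdint>

int main()
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    itimerspec its = {};
    its.it_value.tv_nsec    = 20000000; // first expiry after 20 ms
    its.it_interval.tv_nsec = 20000000; // then every 20 ms (50 per second)
    timerfd_settime(tfd, 0, &its, nullptr);

    pollfd pfd = { tfd, POLLIN, 0 };
    for (;;) {
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
            uint64_t expirations;
            read(tfd, &expirations, sizeof expirations); // acknowledge the tick(s)
            // call send_msg() here, once per expiration
        }
    }
}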
You might want to use some existing event loop library, like libevent or libev (or those from GTK/GLib, Qt, etc.), which often use poll (or fancier mechanisms) underneath. The Linux-specific eventfd(2) and signalfd(2) might be very helpful.
Advanced Linux Programming is also useful to read.
If send_msg is doing network I/O, you probably need to redesign your program around some event loop (perhaps your own, based on poll): you'll need to multiplex (i.e. poll) both network sends and network receives. Continuation-passing style is then a useful paradigm.

Related

How to call a function every x seconds but be able to do stuff in the meantime

I'm building a tetris game and I need the pieces to fall every x seconds; something like:
while (true) {
    moveDown();
    sleep(x);
}
The problem is, I need to be able to move the pieces left and right in the meantime, i.e., call a function while it's sleeping.
How can I do that in C++?
Both time and key presses can be events which can be waited on. On UNIXes you'd use something like poll() with a suitable timeout and the input device used to recognize key presses. On other systems there are similar facilities (I'm a UNIX person and have never worked on Windows-specific stuff, although it seems the Windows facilities are actually more flexible). Depending on the result of poll() (timeout or activity on the I/O device), you'd take the appropriate action.
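A minimal sketch of that approach on a UNIX terminal (assuming stdin delivers key presses, e.g. with the terminal in raw mode; moveDown() is from the question):

#include <poll.h>
#include <unistd.h>
#include <chrono>

void moveDown();

void game_loop()
{
    using clock = std::chrono::steady_clock;
    const auto fall_period = std::chrono::milliseconds(500); // x = 0.5 s, illustrative
    auto next_fall = clock::now() + fall_period;

    for (;;) {
        auto left = std::chrono::duration_cast<std::chrono::milliseconds>(
            next_fall - clock::now()).count();
        pollfd pfd = { 0 /* stdin */, POLLIN, 0 };
        int rc = poll(&pfd, 1, left > 0 ? (int)left : 0);
        if (rc > 0 && (pfd.revents & POLLIN)) {
            char key;
            read(0, &key, 1);       // a key arrived: move the piece left/right
        } else {
            moveDown();             // timeout: the piece falls
            next_fall += fall_period;
        }
    }
}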
This problem is solvable in multiple ways (another idea that comes to mind is multithreading, but that seems overkill). One approach would be to keep track of the number of "game cycles" and execute some function every n-th cycle like this:
for (int32_t count{1}; ; count++)
{
    if (count % 5 == 0)
    {
        // do something every 5th cycle
    }
    // do something every cycle
    sleep(x);
}
You can measure how much time has passed since the last fall, move the piece down after the given amount, and then reset the counter. In pseudo-code it could look like this:
while (true)
{
    counter.update();
    if (counter.value() >= fall_period)
    {
        move_piece_down();
        counter.reset();
    }
    // rotate pieces
}
If you are using a typical game-loop implementation, your counter can just accumulate the elapsed time since the last frame.
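For example, a sketch of that accumulator idea (move_piece_down() as in the pseudocode above; the 500 ms fall period is illustrative):

#include <chrono>

void move_piece_down();

void run()
{
    using clock = std::chrono::steady_clock;
    const auto fall_period = std::chrono::milliseconds(500);
    auto last = clock::now();
    auto accumulated = clock::duration::zero();

    for (;;) {
        auto now = clock::now();
        accumulated += now - last;      // elapsed time since last frame
        last = now;
        if (accumulated >= fall_period) {
            move_piece_down();
            accumulated -= fall_period; // reset, keeping any overshoot
        }
        // handle input, rotate pieces, render...
    }
}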

ZeroMQ: how to reduce multithread-communication latency with inproc?

I'm using inproc and PAIR to achieve inter-thread communication and trying to solve a latency problem due to polling. Correct me if I'm wrong: Polling is inevitable, because a plain recv() call will usually block and cannot take a specific timeout.
In my current case, among N threads, each of the N-1 worker threads has a main while-loop. The N-th thread is a controller thread which may notify all the worker threads to quit at any time. However, worker threads have to use polling with a timeout to get that quit message. This introduces a latency; the timeout parameter is usually 1000 ms.
Here is an example:
while (true) {
    const std::chrono::milliseconds nTimeoutMs(1000);
    std::vector<zmq::poller_event<std::size_t>> events(n);
    size_t nEvents = m_poller.wait_all(events, nTimeoutMs);
    bool isToQuit = false;
    for (auto& evt : events) {
        zmq::message_t out_recved;
        try {
            evt.socket.recv(out_recved, zmq::recv_flags::dontwait);
        }
        catch (std::exception& e) {
            trace("{}: Caught exception while polling: {}. Skipped.", GetLogTitle(), e.what());
            continue;
        }
        if (!out_recved.empty()) {
            if (IsToQuit(out_recved))
                isToQuit = true;
            break;
        }
    }
    if (isToQuit)
        break;
    //
    // main business
    //
    ...
}
To make things worse, when the main loop contains nested loops, the worker threads need to include more polling code in each layer of the nested loops. Very ugly.
The reason I chose ZMQ for multithread communication is its elegance and the potential to get rid of thread locking. But I never realized there would be this polling overhead.
Am I able to achieve the typical latency when using a regular mutex or an std::atomic data operation? Should I understand that the inproc is in fact a network communication pattern in disguise so that some latency is inevitable?
A statement posted above (a hypothesis):
"...a plain recv() call will usually block and cannot take a specific timeout."
is not correct:
a plain .recv( ZMQ_NOBLOCK )-call will never "block",
a plain .recv( ZMQ_NOBLOCK )-call can be decorated so as to mimic "a specific timeout"
A statement posted above (a hypothesis):
"...have to use polling with a timeout ... introduces a latency, the latency parameter is usually 1000ms."
is not correct:
- one need not use polling with a timeout,
- much less does one need a 1000 ms code-"injected" latency, spent obviously only in the no-new-message state
Q : "Am I able to achieve the typical latency when using a regular mutex or an std::atomic data operation?"
Yes.
Q : "Should I understand that the inproc is in fact a network communication pattern in disguise so that some latency is inevitable?"
No. The inproc transport class is the fastest of all these kinds, as it is principally protocol-less / stack-less and has more to do with ultimately fast pointer mechanics, as in dual-end ring-buffer pointer management.
The Best Next Step:
1) Re-factor your code so as to always harness only the zero-wait { .poll() | .recv() } methods, properly decorated for both the { event- | no-event- }-specific looping (a sketch follows after this list).
2) If you are then willing to shave the last few [us] off the smart-loop-detection turn-around time, you may focus on an improved Context() instance, setting it to work with a larger number of nIOthreads > N "under the hood".
optionally 3) For the design of almost hard-real-time systems, one may finally harness a deterministically driven mapping of the Context() threads and socket-specific execution vehicles onto specific, non-overlapping CPU cores (using a carefully crafted affinity map).
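A minimal sketch of step 1's zero-wait pattern, assuming cppzmq as in the question (IsToQuit() is the asker's helper):

#include <zmq.hpp>

bool IsToQuit(const zmq::message_t& msg); // the asker's helper

void worker_loop(zmq::socket_t& control)
{
    for (;;) {
        zmq::message_t msg;
        // dontwait returns immediately: std::nullopt when nothing is pending
        if (control.recv(msg, zmq::recv_flags::dontwait)) {
            if (IsToQuit(msg))
                break;              // quit seen, with no injected 1000 ms wait
        }
        //
        // main business: runs at full speed in the no-event branch
        //
    }
}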
Having set 1000 [ms] in code, no one can fairly complain about spending those very 1000 [ms] waiting in a timeout they coded themselves. There is no excuse for doing this.
Do not blame ZeroMQ for behaviour that was coded from the application side of the API.
Never.

Light event in WinAPI / C++

Is there some light (thus fast) event in WinAPI / C++? Particularly, I'm interested in minimizing the time spent waiting for the event (as in WaitForSingleObject()) when the event is set. Here is a code example to clarify what I mean:
#include <Windows.h>
#include <chrono>
#include <stdio.h>

int main()
{
    const int64_t nIterations = 10 * 1000 * 1000;
    HANDLE hEvent = CreateEvent(nullptr, true, true, nullptr);
    auto start = std::chrono::high_resolution_clock::now();
    for (int64_t i = 0; i < nIterations; i++) {
        WaitForSingleObject(hEvent, INFINITE);
    }
    auto elapsed = std::chrono::high_resolution_clock::now() - start;
    double nSec = 1e-6 * std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
    printf("%.3lf Ops/sec\n", nIterations / nSec);
    return 0;
}
On a 3.85 GHz Ryzen 1800X I'm getting 7209623.405 operations per second, meaning 534 CPU clocks (or 138.7 nanoseconds) are spent on average on a check whether the event is set.
However, I want to use the event in performance-critical code where most of the time the event is actually set, so it's just a check for a special case and in that case the control flow goes to code which is not performance-critical (because this situation is seldom).
WinAPI events which I know (created with CreateEvent) are heavy-weight because of security attributes and names. They are intended for inter-process communication. Perhaps WaitForSingleObject() is so slow because it switches from user to kernel mode and back, even when the event is set. Furthermore, this function has to behave differently for manual- and auto-reset events, and a check for the type of the event takes time too.
I know that a fast user-mode mutex (spin lock) can be implemented with atomic_flag. Its spinning loop can be extended with std::this_thread::yield() in order to let other threads run while spinning.
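For reference, a minimal sketch of such a yielding spin lock (standard C++11, illustrative only):

#include <atomic>
#include <thread>

class SpinLock
{
    std::atomic_flag _flag = ATOMIC_FLAG_INIT;
public:
    void lock()
    {
        // spin until we win the flag; yield so other threads can run meanwhile
        while (_flag.test_and_set(std::memory_order_acquire))
            std::this_thread::yield();
    }
    void unlock()
    {
        _flag.clear(std::memory_order_release);
    }
};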
With the event I wouldn't like a complete equivalent of a spin lock, because when the event is not set, it may take substantial time till it becomes set again. If every thread that needs the event set starts spinning till it's set again, that would be an epic waste of CPU electricity (though it shouldn't affect system performance if they call std::this_thread::yield).
So I would rather like an analogy of a critical section, which usually just does the work in user mode and when it realizes it needs to wait (out of spins), it switches to kernel mode and waits on a heavy synchronization object like a mutex.
UPDATE1: I've found that .NET has ManualResetEventSlim , but couldn't find an equivalent in WinAPI / C++.
UPDATE2: because there were details of event usage requested, here they are. I'm implementing a knowledge base that can be switched between regular and maintenance mode. Some operations are maintenance-only, some operations are regular-only, some can work in both modes, but of them some are faster in maintenance and some are faster in regular mode. Upon its start each operation needs to know whether it is in maintenance or regular mode, as the logic changes (or the operation refuses to execute at all). From time to time user can request a switch between maintenance and regular mode. This is rare. When this request arrives, no new operations in the old mode can start (a request to do so fails) and the app waits for the current operations in the old mode to finish, then it switches mode. So light event is a part of this data structure: the operations except mode switching have to be fast, so they need to set/reset/wait event quickly.
Beginning with Windows 8, the best solution for you is WaitOnAddress (in place of WaitForSingleObject), with WakeByAddressAll (works like SetEvent for a notification event) and WakeByAddressSingle (works like a synchronization event). Read more: WaitOnAddress lets you create a synchronization object.
The implementation can look like this:
class LightEvent
{
    BOOLEAN _Signaled;
public:
    LightEvent(BOOLEAN Signaled)
    {
        _Signaled = Signaled;
    }

    void Reset()
    {
        _Signaled = FALSE;
    }

    void Set(BOOLEAN bWakeAll)
    {
        _Signaled = TRUE;
        (bWakeAll ? WakeByAddressAll : WakeByAddressSingle)(&_Signaled);
    }

    BOOL Wait(DWORD dwMilliseconds = INFINITE)
    {
        BOOLEAN Signaled = FALSE;
        while (!_Signaled)
        {
            if (!WaitOnAddress(&_Signaled, &Signaled, sizeof(BOOLEAN), dwMilliseconds))
            {
                return FALSE;
            }
        }
        return TRUE;
    }
};
Don't forget to add Synchronization.lib to the linker input.
The code behind this new API is very efficient: it does not create internal kernel objects to wait on (like an event), but uses the new ZwAlertThreadByThreadId / ZwWaitForAlertByThreadId calls, specially designed for this purpose.
How would you implement this yourself, before Windows 8? At first glance it looks trivial: a boolean variable plus an event handle. It might look like this:
void Set()
{
    SetEvent(_hEvent);
    // Sleep(1000); // simulate the thread being interrupted here
    _Signaled = true;
}

void Reset()
{
    _Signaled = false;
    // Sleep(1000); // simulate the thread being interrupted here
    ResetEvent(_hEvent);
}

void Wait(DWORD dwMilliseconds = INFINITE)
{
    if (!_Signaled) WaitForSingleObject(_hEvent, dwMilliseconds);
}
But this code is really incorrect. The problem is that we perform two operations in Set (and in Reset): we change the state of _Signaled and of _hEvent, and there is no way to do this from user mode as a single atomic/interlocked operation. That means a thread can be interrupted between the two operations. Assume two different threads call Set and Reset concurrently. In most cases the operations will execute in the following order, for example:
SetEvent(_hEvent);
_Signaled = true;
_Signaled = false;
ResetEvent(_hEvent);
Here everything is OK. But the following order is also possible (uncomment one of the Sleep calls to test this):
SetEvent(_hEvent);
_Signaled = false;
ResetEvent(_hEvent);
_Signaled = true;
As a result, _hEvent will be in the reset state while _Signaled is true.
Implementing this atomically yourself, without OS support, is not simple, though it is possible. But I would first ask about the usage: what is this for? Is event-like behaviour exactly what your task needs?
The other answer is very good if you can drop support of Windows 7.
However on Win7, if you set/reset the event many times from multiple threads, but only need to sleep rarely, the proposed method is quite slow.
Instead, I use a boolean guarded by a critical section, with condition variable to wake / sleep.
The wait method will go to the kernel to sleep in the SleepConditionVariableCS API; that's expected and what you want.
However, the set and reset methods will work entirely in user mode: setting a single boolean variable is very fast, i.e. in 99% of cases the critical section will do its user-mode lock-free magic.
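A hedged sketch of that approach (Win32 condition variables, available since Vista; the class name and layout are mine, not from the answer):

#include <Windows.h>

class Win7Event
{
    CRITICAL_SECTION _cs;
    CONDITION_VARIABLE _cv;
    bool _signaled = false;
public:
    Win7Event()  { InitializeCriticalSection(&_cs); InitializeConditionVariable(&_cv); }
    ~Win7Event() { DeleteCriticalSection(&_cs); }

    void Set()   // stays in user mode unless a waiter is actually asleep
    {
        EnterCriticalSection(&_cs);
        _signaled = true;
        LeaveCriticalSection(&_cs);
        WakeAllConditionVariable(&_cv);
    }
    void Reset() // user mode only
    {
        EnterCriticalSection(&_cs);
        _signaled = false;
        LeaveCriticalSection(&_cs);
    }
    bool Wait(DWORD dwMilliseconds = INFINITE)
    {
        EnterCriticalSection(&_cs);
        BOOL ok = TRUE;
        while (!_signaled && ok) // note: the timeout re-arms after spurious wakes
            ok = SleepConditionVariableCS(&_cv, &_cs, dwMilliseconds);
        bool signaled = _signaled;
        LeaveCriticalSection(&_cs);
        return signaled;
    }
};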

C++ Timer control

I want to create a timer so that after the time completes (suppose 10 sec), control comes out of the function. Please note that I am starting the timer inside the function; the code is given below. I want to give the function a certain time limit so that it must complete its execution within that period. Suppose the function is waiting for input: even then, after the time limit, control should come out, indicating that "time has expired". Once control is out of the function, it should continue with the next function's execution. Is this possible in C++?
Begin();
// here I would like to add timer.
v_CallId = v_CallId1;
call_setup_ind();
call_alert_ind();
dir_read_search_cnf();
dir_save_cnf();
END();
If the code is linear and the functions called cannot be chopped into smaller pieces, you're stuck with letting an external process/thread do the timing and abort the worker thread when the timeout is exceeded.
When you can chop the worker into smaller pieces, you could do something like this:
timeout.Start(5000);
while ((timeout.TimeOut() == false) && (completed == false))
{
    completed = WorkToDo();
}
This is a pattern we frequently use in our embedded applications. The timeout class was developed in-house; it just reads the tick counter and checks whether the time has passed. A framework like Qt or MFC should have such a class itself.
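The in-house class isn't shown, but a minimal sketch of it in standard C++ could look like this:

#include <chrono>

class Timeout
{
    std::chrono::steady_clock::time_point _deadline;
public:
    void Start(int milliseconds)
    {
        _deadline = std::chrono::steady_clock::now()
                  + std::chrono::milliseconds(milliseconds);
    }
    bool TimeOut() const // has the time passed?
    {
        return std::chrono::steady_clock::now() >= _deadline;
    }
};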

Limit iterations per time unit

Is there a way to limit iterations per time unit? For example, I have a loop like this:
for (int i = 0; i < 100000; i++)
{
// do stuff
}
I want to limit the loop above so there will be maximum of 30 iterations per second.
I would also like the iterations to be evenly spaced along the timeline, so not something like 30 iterations in the first 0.4 s and then a 0.6 s wait.
Is that possible? It does not have to be completely precise (though the more precise it will be the better).
@FredOverflow My program is running very fast. It is sending data over wifi to another program which is not fast enough to handle them at the current rate. – Richard Knop
Then you should probably have the program you're sending data to send an acknowledgment when it has finished receiving the last chunk of data, and only then send the next chunk. Anything else will just cause you frustration down the line as circumstances change.
Suppose you have a good Now() function (GetTickCount() is a bad example: it's OS-specific and has poor precision):
for (int i = 0; i < 1000; i++) {
    DWORD have_to_sleep_until = GetTickCount() + EXPECTED_ITERATION_TIME_MS;
    // do stuff
    int remaining = (int)(have_to_sleep_until - GetTickCount()); // signed, so a late iteration gives a negative value instead of a huge DWORD
    if (remaining > 0)
        Sleep(remaining);
}
You can check the elapsed time inside the loop, but that may not be the usual solution, because computation time depends entirely on the performance of the machine and the algorithm; people tune it during development (e.g., many game programmers require at least 25-30 frames per second for properly smooth animation).
The easiest way (for Windows) is to use QueryPerformanceCounter(). Some code below:
#include <Windows.h>

LARGE_INTEGER freq, c1, c2, c3, c4;
QueryPerformanceFrequency(&freq);
const double timeWanted = 1000.0 / 30.0; // ms per iteration if 30 iterations/sec

for (int i = 0; i < 100000; i++)
{
    QueryPerformanceCounter(&c1);
    // do stuff
    QueryPerformanceCounter(&c2);
    double timeElapsed = (double)(c2.QuadPart - c1.QuadPart) * 1e3 / (double)freq.QuadPart; // time in milliseconds
    double timeDiff = timeWanted - timeElapsed;
    if (timeDiff > 0)
    {
        // busy-wait until the remaining time has passed
        QueryPerformanceCounter(&c3);
        do {
            QueryPerformanceCounter(&c4);
        } while ((double)(c4.QuadPart - c3.QuadPart) * 1e3 / (double)freq.QuadPart < timeDiff);
    }
}
EDIT: You must make sure that the 'do stuff' part takes less time than your frame period, or else it doesn't matter. Also, instead of 1e3 for milliseconds you can go all the way to nanoseconds with 1e9 (if you want that much accuracy).
WARNING: this will eat your CPU but give you good 'software' timing. Do it in a separate thread (and only if you have more than one processor) so that any GUIs won't lock up. You can also put a condition in there to stop the loop if this is a multi-threaded app.
@FredOverflow My program is running very fast. It is sending data over wifi to another program which is not fast enough to handle them at the current rate. – Richard Knop
What you might need is a buffer or queue at the receiver side. The thread that receives the messages from the client (e.g., through a socket) gets each message and puts it in the queue. The actual consumer of the messages reads/pops from the queue. Of course, you need concurrency control for your queue.
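A sketch of such a guarded queue (standard C++; names are illustrative):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class MessageQueue
{
    std::queue<T> _q;
    std::mutex _m;
    std::condition_variable _cv;
public:
    void push(T msg) // called by the receiving thread
    {
        { std::lock_guard<std::mutex> lk(_m); _q.push(std::move(msg)); }
        _cv.notify_one();
    }
    T pop() // called by the consumer; blocks until a message is available
    {
        std::unique_lock<std::mutex> lk(_m);
        _cv.wait(lk, [this] { return !_q.empty(); });
        T msg = std::move(_q.front());
        _q.pop();
        return msg;
    }
};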
Besides the flow-control methods mentioned, you may also need to maintain an accurate, specific data-sending rate on the sender side. Usually this can be done like this.
E.g., if you want to send at 10 Mbps, create a timer with a 1 ms interval so it calls a predefined function every 1 ms. Then, in the timer handler function, by keeping track of two static variables, (1) the time elapsed since the beginning of sending and (2) how much data in bytes has been sent up to the last call, you can easily calculate how much data needs to be sent in the current call (or just sleep and wait for the next call).
This way you can "stream" data very stably with very little jitter, which is the approach usually adopted in video streaming. Of course it also depends on how accurate the timer is.
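A hedged sketch of that handler (standard C++; send_some_bytes() is a hypothetical stand-in for the real send call, and the 10 Mbps target is the example rate from above):

#include <chrono>
#include <cstdint>

int64_t send_some_bytes(int64_t count); // hypothetical: returns bytes actually sent

void on_timer_tick() // assume some timer invokes this every 1 ms
{
    using clock = std::chrono::steady_clock;
    static const auto start = clock::now();   // beginning of sending
    static int64_t bytes_sent = 0;            // sent up to the last call
    const double rate = 10e6 / 8.0;           // 10 Mbps in bytes per second

    double elapsed = std::chrono::duration<double>(clock::now() - start).count();
    int64_t allowed = (int64_t)(elapsed * rate); // byte budget so far
    if (allowed > bytes_sent)
        bytes_sent += send_some_bytes(allowed - bytes_sent);
}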