How to insert a timer in a popen() loop in C++?

I have this function; the input is the variable "cmd", for example a dmesg command.
int i = 0;
char *bufferf = (char *) calloc(200000, sizeof(char)); // calloc already zero-fills
char buffer[1000][1280];
memset(buffer, 0, 1000 * 1280);
FILE *pipe = popen(cmd, "r");
if (!pipe) {
    send(client_fd, "EXCEPTION", 9, 0);
    return; // don't fall through and use a NULL pipe
}
while (!feof(pipe)) {
    if (fgets(buffer[i], 128, pipe) != NULL) {
        strcat(bufferf, buffer[i]);
    }
    i++;
}
pclose(pipe);
std::cout << bufferf;
send(client_fd, bufferf, strlen(bufferf), 0);
My goal is to measure the time between the start and the end of the while loop, by accumulating the elapsed time of each iteration into a variable.
For example, dmesg produces ~700 lines of output, so the while loop runs ~700 times and I have to add up 700 per-iteration times to get the total.
How can I do that?
I've tried difftime, but it doesn't work very well.
Any other solutions?
Thank you.

You could make an extremely basic class that uses clock() to measure the time:
#include <ctime>

class Timer
{
private:
    clock_t _start, _duration;
public:
    Timer() : _start(0), _duration(0) { }
    void start() { _start = clock(); }
    void stop() { _duration = clock() - _start; }
    float getTime() { return (float)_duration / CLOCKS_PER_SEC; }
};
Obviously multiply by 1000 if you want to display the time in milliseconds.
And then just:
Timer t;
t.start();
// do something
t.stop();
cout << "Duration: " << t.getTime() << endl;
Also, take note of what sarnold said: buffer is huge.
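One caveat with clock(): on POSIX systems it measures CPU time, and a loop that reads from a pipe spends most of its time blocked on I/O, so the measured value can come out far smaller than the wall-clock time the loop actually took. As a minimal sketch (assuming C++11 and a POSIX system), here is the same measurement with std::chrono::steady_clock wrapped around a simplified version of the question's read loop:

#include <chrono>
#include <cstdio>
#include <iostream>

int main() {
    FILE *pipe = popen("dmesg", "r");
    if (!pipe) return 1;

    char line[1280];
    auto start = std::chrono::steady_clock::now();   // monotonic wall clock
    while (fgets(line, sizeof(line), pipe) != NULL) {
        // process the line; testing the fgets return value also avoids
        // the classic while(!feof(pipe)) extra-iteration pitfall
    }
    auto end = std::chrono::steady_clock::now();
    pclose(pipe);

    double seconds = std::chrono::duration<double>(end - start).count();
    std::cout << "Read loop took " << seconds << " s" << std::endl;
}

If the per-iteration times are really wanted, as the question describes, the two now() calls can simply be moved inside the loop and the differences accumulated into a duration variable.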

Related

High accuracy timer in C++

I am working on high-accuracy timers in C++ on Windows. My application requires timer accuracy in the millisecond range; up to 5 ms of error is acceptable. I have shared the code below, and the timer's repeatability is very good: 2 runs out of 20 had errors, but those errors are quite big, around 75 ms. What could be causing these errors? Whenever I run the program, the error level is the same, around 75 ms. I usually choose a delay of 10 ms. Is it possible to obtain millisecond accuracy on Windows? Thank you in advance.
#include <string>
#include <chrono>
#include <iostream>
using namespace std;

// Get the time stamp
time_t getTimeStamp()
{
    std::chrono::time_point<std::chrono::system_clock, std::chrono::milliseconds> tp =
        std::chrono::time_point_cast<std::chrono::milliseconds>(std::chrono::system_clock::now());
    auto tmp = std::chrono::duration_cast<std::chrono::milliseconds>(tp.time_since_epoch());
    time_t timestamp = tmp.count();
    return timestamp;
}

// Get the time year-month-day hour-minute-second millisecond
std::string gettm(__int64 timestamp)
{
    __int64 milli = timestamp + (__int64)8 * 60 * 60 * 1000;
    auto mTime = std::chrono::milliseconds(milli);
    auto tp = std::chrono::time_point<std::chrono::system_clock, std::chrono::milliseconds>(mTime);
    auto tt = std::chrono::system_clock::to_time_t(tp);
    std::tm now;
    ::gmtime_s(&now, &tt);
    char res[64] = { 0 };
    sprintf_s(res, _countof(res), "%03d", static_cast<int>(milli % 100));
    return std::string(res);
}

int main(void)
{
    bool state = false;
    int delay;
    cout << "Delay (ms): " << endl;
    cin >> delay;
Again:
    string now = gettm(getTimeStamp());
    int now_int = stoi(now);
    int sss = now_int + delay;
    if (sss >= 100)
        sss = sss % 100;
    while (1)
    {
        string change = gettm(getTimeStamp());
        int now_change = stoi(change);
        if (now_change >= 100)
            now_change = now_change % 100;
        state = true;
        cout << "High O4 and now (ms): " << sss << " and change (ms): " << now_change << endl;
        if (now_change == sss)
        {
            state = false;
            cout << "Low O4" << endl;
            goto Again;
        }
    }
    return 0;
}
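Two things stand out in this code: the loop compares two-digit millisecond values for exact equality, so a single missed scheduler tick (the default Windows timer granularity is around 15.6 ms) makes it wait for the next wrap-around, and the "%03d" format is paired with milli % 100, which only ever yields two digits. As a minimal sketch, not from the original post, the delay can be measured directly with std::chrono::steady_clock and a >= comparison that cannot be missed:

#include <chrono>
#include <iostream>

int main()
{
    int delay;
    std::cout << "Delay (ms): " << std::endl;
    std::cin >> delay;

    while (true)
    {
        auto start = std::chrono::steady_clock::now();
        // Busy-wait until the requested delay has elapsed; a >= comparison
        // cannot "miss" the target the way == on a wrapped two-digit
        // millisecond value can.
        while (std::chrono::steady_clock::now() - start < std::chrono::milliseconds(delay))
            ;
        auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start);
        std::cout << "Elapsed: " << elapsed.count() / 1000.0 << " ms" << std::endl;
    }
    return 0;
}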

How to make a timer that counts down from 30 by 1 every second?

I want to make a timer that displays 30, 29, etc., going down every second, and then stops when there is an input. I know you can do this:
for (int i = 60; i > 0; i--)
{
    cout << i << endl;
    Sleep(1000);
}
This will output 60, 59, etc., but it doesn't allow for any input while the program is running. How do I make it so the user can input things while the countdown is running?
Context
This is not a homework assignment. I am making a text adventure game, and there is a section where an enemy rushes at you and you have 30 seconds to decide what to do. I don't know how to let the user input things while the timer is running.
Your game runs at about 1 frame per second, so user input is a problem. Games normally run at a much higher frame rate, like this:
#include <Windows.h>
#include <iostream>

int main() {
    // Initialization
    ULARGE_INTEGER initialTime;
    ULARGE_INTEGER currentTime;
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    initialTime.LowPart = ft.dwLowDateTime;
    initialTime.HighPart = ft.dwHighDateTime;
    LONGLONG countdownStartTime = 300000000; // 30 seconds, in 100-nanosecond units
    LONGLONG displayedNumber = 31; // prevent 31 from being displayed

    // Game loop
    while (true) {
        GetSystemTimeAsFileTime(&ft); // FILETIME is in 100-nanosecond units
        currentTime.LowPart = ft.dwLowDateTime;
        currentTime.HighPart = ft.dwHighDateTime;

        //// Read Input ////
        bool stop = false;
        SHORT key = GetKeyState('S');
        if (key & 0x8000)
            stop = true;

        //// Game Logic ////
        LONGLONG elapsedTime = currentTime.QuadPart - initialTime.QuadPart;
        LONGLONG currentNumber_100ns = countdownStartTime - elapsedTime;
        if (currentNumber_100ns <= 0) {
            std::cout << "Boom!" << std::endl;
            break;
        }
        if (stop) {
            std::wcout << "Stopped" << std::endl;
            break;
        }

        //// Render ////
        LONGLONG currentNumber_s = currentNumber_100ns / 10000000 + 1;
        if (currentNumber_s != displayedNumber) {
            std::cout << currentNumber_s << std::endl;
            displayedNumber = currentNumber_s;
        }
    }
    system("pause");
}
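If all that's needed is a console countdown rather than a full game loop, here is a minimal sketch of the same idea (assuming a Windows console, where <conio.h> provides _kbhit()/_getch() for non-blocking keyboard polling):

#include <conio.h>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    auto deadline = clock::now() + std::chrono::seconds(30);
    long long lastShown = 31;                      // prevent 31 from being displayed

    while (clock::now() < deadline) {
        long long remaining = std::chrono::duration_cast<std::chrono::seconds>(
            deadline - clock::now()).count() + 1;
        if (remaining != lastShown) {              // only print when the number changes
            std::cout << remaining << std::endl;
            lastShown = remaining;
        }
        if (_kbhit()) {                            // a key is waiting in the console buffer
            char c = _getch();                     // consume it without waiting for Enter
            std::cout << "Stopped on '" << c << "'" << std::endl;
            return 0;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // don't burn a core
    }
    std::cout << "Boom!" << std::endl;
    return 0;
}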
If you're running this on Linux, you can use the classic select() call. When used in a while-loop, you can wait for input on one or more file descriptors, while also providing a timeout after which the select() call must return. Wrap it all in a loop and you'll have both your countdown and your handling of standard input.
https://linux.die.net/man/2/select
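A minimal sketch of that approach (Linux; note that terminal input is line-buffered by default, so the user has to press Enter):

#include <sys/select.h>
#include <unistd.h>
#include <iostream>
#include <string>

int main() {
    for (int i = 30; i > 0; --i) {
        std::cout << i << std::endl;

        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);

        struct timeval timeout;
        timeout.tv_sec = 1;                        // wait at most one second per tick
        timeout.tv_usec = 0;

        int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &timeout);
        if (ready > 0 && FD_ISSET(STDIN_FILENO, &readfds)) {
            std::string line;
            std::getline(std::cin, line);          // consume the pending input
            std::cout << "You chose: " << line << std::endl;
            return 0;                              // stop the countdown on input
        }
        // ready == 0: the one-second timeout expired, keep counting down
    }
    std::cout << "Time is up!" << std::endl;
    return 0;
}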

How to calculate time in C++?

I'm trying to figure out how to calculate time in C++. I'm making a program where an event happens every 3 seconds, for example printing out "hello".
Here's an example using two threads, so your program won't freeze, and C++11's this_thread::sleep_for():
#include <iostream>
#include <chrono>
#include <thread>
using namespace std;

void hello()
{
    while (1)
    {
        cout << "Hello" << endl;
        chrono::milliseconds duration(3000);
        this_thread::sleep_for(duration);
    }
}

int main()
{
    // start the hello thread
    thread help1(hello);

    // do other stuff in the main thread
    for (int i = 0; i < 10; i++)
    {
        cout << "Hello2" << endl;
        chrono::milliseconds duration(3000);
        this_thread::sleep_for(duration);
    }

    // wait for the other thread to finish; in this case it waits forever (while(1))
    help1.join();
}
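One caveat: with sleep_for the interval slowly drifts, because any time spent doing the work is added on top of the 3 seconds. If the schedule has to stay accurate, a sketch of the same loop using sleep_until keeps each tick anchored to a fixed timeline:

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    auto next = std::chrono::steady_clock::now();
    for (int i = 0; i < 10; i++)
    {
        std::cout << "Hello" << std::endl;         // the event
        next += std::chrono::seconds(3);           // schedule the next tick
        std::this_thread::sleep_until(next);       // immune to drift from the event's duration
    }
}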
You can use boost::timer to calculate time in C++:

using boost::timer::cpu_timer;
using boost::timer::cpu_times;
using boost::timer::nanosecond_type;
...
nanosecond_type const three_seconds(3 * 1000000000LL);
cpu_timer timer;
cpu_times const elapsed_times(timer.elapsed());
nanosecond_type const elapsed(elapsed_times.system + elapsed_times.user);
if (elapsed >= three_seconds)
{
    // more than 3 seconds elapsed
}
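One detail worth noting: elapsed_times.system + elapsed_times.user is CPU time, not wall-clock time; cpu_times also has a wall member, which is the one to compare against if the event should fire every 3 seconds of real time.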
It is dependent on your OS/Compiler.
Case 1:
If you have C++11 then you can use as suggested by Chris:
std::this_thread::sleep_for() // you have to include the <thread> header
Case 2:
If you are on the Windows platform, then you can also use something like:
#include <windows.h>
int main()
{
    // event 1
    Sleep(1000); // the argument is in milliseconds: 1 s = 1000 ms
    // event 2
    return 0;
}
Case 3:
On the Linux platform you can simply use:
sleep(seconds); // from <unistd.h>, the argument is in seconds

Single-threaded and multi-threaded code taking the same time

I've been using pthreads, but I have realized that my code takes the same amount of time whether I use 1 thread or split the task into 1/N pieces for N threads. To illustrate, I reduced my code to this example:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <boost/progress.hpp>

#define SIZEEXEC 200000000

using namespace boost;
using std::cout;
using std::endl;

typedef struct t_d {
    int intArg;
} Thread_data;

void* function(void *threadarg)
{
    Thread_data *my_data = (Thread_data *) threadarg;
    int size = my_data->intArg;
    int i = 0;
    unsigned rand_state = 0;
    for (i = 0; i < size; i++) rand_r(&rand_state);
    return 0;
}

void withOutThreads(void)
{
    Thread_data* t1 = new Thread_data();
    t1->intArg = SIZEEXEC / 3;
    function((void *) t1);
    Thread_data* t2 = new Thread_data();
    t2->intArg = SIZEEXEC / 3;
    function((void *) t2);
    Thread_data* t3 = new Thread_data();
    t3->intArg = SIZEEXEC / 3;
    function((void *) t3);
}

void withThreads(void)
{
    pthread_t* h1 = new pthread_t;
    pthread_t* h2 = new pthread_t;
    pthread_t* h3 = new pthread_t;
    pthread_attr_t* atr = new pthread_attr_t;
    pthread_attr_init(atr);
    pthread_attr_setscope(atr, PTHREAD_SCOPE_SYSTEM);

    Thread_data* t1 = new Thread_data();
    t1->intArg = SIZEEXEC / 3;
    pthread_create(h1, atr, function, (void *) t1);
    Thread_data* t2 = new Thread_data();
    t2->intArg = SIZEEXEC / 3;
    pthread_create(h2, atr, function, (void *) t2);
    Thread_data* t3 = new Thread_data();
    t3->intArg = SIZEEXEC / 3;
    pthread_create(h3, atr, function, (void *) t3);

    pthread_join(*h1, 0);
    pthread_join(*h2, 0);
    pthread_join(*h3, 0);
    pthread_attr_destroy(atr);
    delete h1;
    delete h2;
    delete h3;
    delete atr;
}

int main(int argc, char *argv[])
{
    bool multThread = bool(atoi(argv[1]));
    if (!multThread) {
        cout << "NO THREADS" << endl;
        progress_timer timer;
        withOutThreads();
    }
    else {
        cout << "WITH THREADS" << endl;
        progress_timer timer;
        withThreads();
    }
    return 0;
}
Either the code is wrong or there is something on my system not allowing for parallel processing. I'm running on Ubuntu 11.10 x86_64-linux-gnu, gcc 4.6, Intel® Xeon(R) CPU E5620 @ 2.40GHz × 4
Thanks for any advice!
EDIT:
Given the answers, I have realized that (1) progress_timer did not let me measure differences in "real" time, and (2) the task I run in "function" does not seem to be heavy enough for my machine to show different times with 1 or 3 threads (which is odd; I get around 10 seconds in both cases...). I tried allocating memory to make the task heavier, and yes, I see a difference. Although my other code is more complex, there is a good chance it still runs about the same time with 1 or 3 threads. Thanks!
This is expected. You are measuring CPU time, not wall time.
time ./test 1
WITH THREADS
2.55 s
real 0m1.387s
user 0m2.556s
sys 0m0.008s
Real time is less than user time, which is identical to your measured time. Real time is what your wall clock shows, user and sys are CPU time spent in user and kernel mode
by all CPUs combined.
time ./test 0
NO THREADS
2.56 s
real 0m2.578s
user 0m2.560s
sys 0m0.008s
Your measured time, real time and user time are all virtually the same.
The culprit seems to be progress_timer, or rather the understanding of it.
Try replacing main() with the version below. It shows that the program does not take as much wall time as progress_timer reports; presumably progress_timer reports total CPU time?
#include <sys/time.h>

void PrintTime() {
    struct timeval tv;
    if (!gettimeofday(&tv, NULL))
        cout << "Sec=" << tv.tv_sec << " usec=" << tv.tv_usec << endl;
}

int main(int argc, char *argv[])
{
    bool multThread = bool(atoi(argv[1]));
    PrintTime();
    if (!multThread) {
        cout << "NO THREADS" << endl;
        progress_timer timer;
        withOutThreads();
    }
    else {
        cout << "WITH THREADS" << endl;
        progress_timer timer;
        withThreads();
    }
    PrintTime();
    return 0;
}
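For comparison, C++11 can make the wall-time check portable without gettimeofday. A minimal sketch (assuming the withThreads()/withOutThreads() functions from the question are available):

#include <chrono>
#include <iostream>

int main()
{
    auto t0 = std::chrono::steady_clock::now();
    // ... call withThreads() or withOutThreads() here, as in the question ...
    auto t1 = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
              << " ms wall time" << std::endl;
    return 0;
}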

How to reduce the latency from one boost strand to another boost strand

Suppose several boost strand shared_ptrs are stored in a vector m_poStrands, and tJobType is an enum indicating the different types of job.
I found that the time from posting a job on one strand (JOBA) to the call of onJob on another strand (JOBB) is around 50 milliseconds.
I want to know if there is any way to reduce this time difference.
void postJob(tJobType oType, UINT8* pcBuffer, size_t iSize)
{
    //...
    m_poStrands[oType]->post(boost::bind(&onJob, this, oType, pcDestBuffer, iSize));
}

void onJob(tJobType oType, UINT8* pcBuffer, size_t iSize)
{
    if (oType == JOBA)
    {
        //....
        struct timeval sTV;
        gettimeofday(&sTV, 0);
        memcpy(pcDestBuffer, &sTV, sizeof(sTV));
        pcDestBuffer += sizeof(sTV);
        iSize += sizeof(sTV);
        memcpy(pcDestBuffer, pcBuffer, iSize);
        m_poStrands[JOBB]->post(boost::bind(&onJob, this, JOBB, pcDestBuffer, iSize));
    }
    else if (oType == JOBB)
    {
        // get the time from the buffer
        // and calculate the time diff
        struct timeval eTV;
        gettimeofday(&eTV, 0);
    }
}
Your latency is probably coming from the memcpys between your gettimeofday calls. Here's an example program I ran on my machine (2 GHz Core 2 Duo). I'm getting thousands of nanoseconds, so a few microseconds; I doubt your system is running 4 orders of magnitude slower than mine. The worst I ever saw was 100 microseconds for one of the two tests. I tried to keep the code as close to the posted code as possible.
#include <boost/asio.hpp>
#include <boost/chrono.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>

struct Test {
    boost::shared_ptr<boost::asio::strand>* strands;
    boost::chrono::high_resolution_clock::time_point start;
    int id;

    Test(int i, boost::shared_ptr<boost::asio::strand>* strnds)
        : id(i),
          strands(strnds)
    {
        strands[0]->post(boost::bind(&Test::callback, this, 0));
    }

    void callback(int i) {
        if (i == 0) {
            start = boost::chrono::high_resolution_clock::now();
            strands[1]->post(boost::bind(&Test::callback, this, 1));
        } else {
            boost::chrono::nanoseconds sec = boost::chrono::high_resolution_clock::now() - start;
            std::cout << "test " << id << " took " << sec.count() << " ns" << std::endl;
        }
    }
};

int main() {
    boost::asio::io_service io_service_;
    boost::shared_ptr<boost::asio::strand> strands[2];
    strands[0] = boost::shared_ptr<boost::asio::strand>(new boost::asio::strand(io_service_));
    strands[1] = boost::shared_ptr<boost::asio::strand>(new boost::asio::strand(io_service_));
    boost::thread t1(boost::bind(&boost::asio::io_service::run, &io_service_));
    boost::thread t2(boost::bind(&boost::asio::io_service::run, &io_service_));
    Test test1(1, strands);
    Test test2(2, strands);
    t1.join();
    t2.join();
}
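For anyone trying the example above: it typically needs to be linked against the Boost libraries it uses, for instance g++ test.cpp -lboost_system -lboost_thread -lboost_chrono -lpthread (exact library names vary by platform and Boost version).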