How does computer-driven ignition timing on gas engines work? - c++

I have been dabbling with writing a C++ program that would control spark timing on a gas engine and have been running into some trouble. My code is very simple. It starts by creating a second thread that emulates the output signal of a Hall effect sensor triggered once per engine revolution. My main code processes the fake sensor output, recalculates engine RPM, and then determines how long to wait for the crankshaft to rotate to the correct angle before sending spark to the engine. The problem I'm running into is that I am using a sleep function with millisecond resolution, and at higher RPMs I am losing a significant amount of data.
My question is: how are real automotive ECUs programmed to control spark accurately at high RPMs?
My code is as follows:
#include <iostream>
#include <Windows.h>
#include <process.h>
#include <fstream>
#include "GetTimeMs64.cpp"
using namespace std;

void HEEmulator(void *);

int HE_Sensor1;
int *sensor;
HANDLE handles[1];
bool run;
bool *areRun;

int main( void )
{
    int sentRpm = 4000;
    areRun = &run;
    sensor = &HE_Sensor1;
    *sensor = 1;
    run = TRUE;

    int rpm, advance, dwell, oHE_Sensor1, spark;
    oHE_Sensor1 = 1;
    advance = 20;

    uint64 rtime1, rtime2, intTime, curTime, sparkon, sparkoff;
    handles[0] = (HANDLE)_beginthread(HEEmulator, 0, &sentRpm);

    ofstream myfile;
    myfile.open("output.out");

    intTime = GetTimeMs64();
    rtime1 = intTime;
    rpm = 0;
    spark = 0;
    dwell = 10000;
    sparkoff = 0;

    while(run == TRUE)
    {
        rtime2 = GetTimeMs64();
        curTime = rtime2 - intTime;
        myfile << "Current Time = " << curTime << " ";
        myfile << "HE_Sensor1 = " << HE_Sensor1 << " ";
        myfile << "RPM = " << rpm << " ";
        myfile << "Spark = " << spark << " ";

        if(oHE_Sensor1 != HE_Sensor1)
        {
            if(HE_Sensor1 > 0)
            {
                rpm = (1/(double)(rtime2-rtime1))*60000;
                dwell = (1-((double)advance/360))*(rtime2-rtime1);
                rtime1 = rtime2;
            }
            oHE_Sensor1 = HE_Sensor1;
        }

        if(rtime2 >= (rtime1 + dwell))
        {
            spark = 1;
            sparkoff = rtime2 + 2;
        }
        if(rtime2 >= sparkoff)
        {
            spark = 0;
        }
        myfile << "\n";
        Sleep(1);
    }
    myfile.close();
    return 0;
}

void HEEmulator(void *arg)
{
    int *rpmAd = (int*)arg;
    int rpm = *rpmAd;
    int milliseconds = (1/(double)rpm)*60000;
    for(int i = 0; i < 10; i++)
    {
        *sensor = 1;
        Sleep(milliseconds * 0.2);
        *sensor = 0;
        Sleep(milliseconds * 0.8);
    }
    *areRun = FALSE;
}

A desktop PC is not a real-time processing system.
When you use Sleep to pause a thread, you don't have any guarantees that it will wake up exactly after the specified amount of time has elapsed. The thread will be marked as ready to resume execution, but it may still have to wait for the OS to actually schedule it. From the documentation of the Sleep function:
Note that a ready thread is not guaranteed to run immediately. Consequently, the thread may not run until some time after the sleep interval elapses.
Also, the resolution of the system clock ticks is limited.
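A quick way to see both effects is to time a batch of Sleep(1) calls. A rough sketch (Windows assumed; the loop count is arbitrary):

#include <Windows.h>
#include <iostream>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // high-resolution counter for the measurement
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < 100; ++i)
        Sleep(1);                       // requests 1 ms; usually gets the timer granularity
    QueryPerformanceCounter(&t1);
    double ms = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;
    std::cout << "100 x Sleep(1) took " << ms << " ms\n";
}

On a typical desktop the total comes out well above 100 ms, because the default timer granularity is on the order of 10-16 ms per tick.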
To more accurately simulate an ECU and the attached sensors, you should not use threads. Your simulation should not even depend on the passage of real time. Instead, use a single loop that updates the state of your simulation (both ECU and sensors) with each tick. This also means that your simulation should include the clock of the ECU.
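For example, a minimal sketch of that idea (all names and constants here are illustrative, not taken from the question):

#include <cstdint>
#include <iostream>

// Single-loop, tick-driven simulation: the simulated clock advances a fixed
// amount per iteration, so timing resolution does not depend on Sleep() or
// the OS scheduler.
int main() {
    const int rpm = 4000;
    const int64_t tick_us = 10;                  // 10 us of simulated time per tick
    const int64_t period_us = 60000000LL / rpm;  // one crank revolution
    const int advance_deg = 20;
    const int64_t dwell_us =
        static_cast<int64_t>((1.0 - advance_deg / 360.0) * period_us);

    int64_t now_us = 0, last_pulse_us = 0;
    int sparks = 0;
    bool prev_spark = false;
    while (now_us < 10 * period_us) {            // simulate 10 revolutions
        now_us += tick_us;                       // advance the simulated clock
        // Sensor model: one pulse per revolution.
        if (now_us - last_pulse_us >= period_us)
            last_pulse_us += period_us;
        // ECU model: fire the spark dwell_us after the pulse, hold it 2 ms.
        int64_t since_pulse = now_us - last_pulse_us;
        bool spark = since_pulse >= dwell_us && since_pulse < dwell_us + 2000;
        if (spark && !prev_spark)
            ++sparks;
        prev_spark = spark;
    }
    std::cout << "fired " << sparks << " sparks in 10 revolutions\n";
}

Because every quantity is derived from the simulated clock, you can run the same simulation at any RPM, or step it far faster than real time, and the results stay exact.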

Related

How to make a timer that counts down from 30 by 1 every second?

I want to make a timer that displays 30, 29, and so on, counting down every second, and stops when there is an input. I know you can do this:
for (int i = 60; i > 0; i--)
{
    cout << i << endl;
    Sleep(1000);
}
This will output 60, 59, etc., but it doesn't allow for any input while the program is running. How do I make it so you can input things while the countdown is running?
Context
This is not a homework assignment. I am making a text adventure game, and there is a section where an enemy rushes at you and you have 30 seconds to decide what you are going to do. I don't know how to let the user input things while the timer is running.
Your game runs at about one frame per second, so user input is a problem. Games normally run at a much higher frame rate, like this:
#include <Windows.h>
#include <iostream>

int main() {
    // Initialization
    ULARGE_INTEGER initialTime;
    ULARGE_INTEGER currentTime;
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    initialTime.LowPart = ft.dwLowDateTime;
    initialTime.HighPart = ft.dwHighDateTime;
    LONGLONG countdownStartTime = 300000000; // 30 seconds, in 100 ns units
    LONGLONG displayedNumber = 31; // prevent 31 from being displayed

    // Game loop
    while (true) {
        GetSystemTimeAsFileTime(&ft); // FILETIME is in 100 ns units
        currentTime.LowPart = ft.dwLowDateTime;
        currentTime.HighPart = ft.dwHighDateTime;

        //// Read Input ////
        bool stop = false;
        SHORT key = GetKeyState('S');
        if (key & 0x8000)
            stop = true;

        //// Game Logic ////
        LONGLONG elapsedTime = currentTime.QuadPart - initialTime.QuadPart;
        LONGLONG currentNumber_100ns = countdownStartTime - elapsedTime;
        if (currentNumber_100ns <= 0) {
            std::cout << "Boom!" << std::endl;
            break;
        }
        if (stop) {
            std::wcout << "Stopped" << std::endl;
            break;
        }

        //// Render ////
        LONGLONG currentNumber_s = currentNumber_100ns / 10000000 + 1;
        if (currentNumber_s != displayedNumber) {
            std::cout << currentNumber_s << std::endl;
            displayedNumber = currentNumber_s;
        }
    }
    system("pause");
}
If you're running this on Linux, you can use the classic select() call. When used in a while-loop, you can wait for input on one or more file descriptors, while also providing a timeout after which the select() call must return. Wrap it all in a loop and you'll have both your countdown and your handling of standard input.
https://linux.die.net/man/2/select
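A minimal sketch of that approach (Linux assumed; a 30-second countdown, where pressing Enter counts as input since the terminal is line-buffered):

#include <cstdio>
#include <sys/select.h>
#include <unistd.h>

int main() {
    for (int i = 30; i > 0; ) {
        printf("%d\n", i);
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        struct timeval tv = {1, 0};                          // 1-second timeout
        int ready = select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv);
        if (ready > 0) {                                     // input arrived: stop the countdown
            printf("Stopped\n");
            return 0;
        }
        --i;                                                 // timeout: one second passed
    }
    printf("Boom!\n");
    return 0;
}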

Buffer underruns on x310 when transmitting and receiving from same channel

I'm running an x310 across dual 10 Gigabit Ethernet, outfitted with twin Basic TX/RX daughterboards, on UHD version 3.11.0. Ideally, I would like two simultaneous transmit and receive streams, utilizing both channels to transmit and receive. I don't want to have to use two x310s to get two receive and transmit streams.
When I transmit and receive at the same time on the same channel, I get a lot of U's printed to the console signaling underflows, no matter what the rate. However, if I transmit and receive on separate channels (tx_streamer has stream_args at channel 1 and rx_streamer has stream_args at channel 0), it works just fine.
I've attached the source code of a complete but simple program that will hopefully demonstrate my problem. In this program, two threads are created: a transmit thread and a receive thread. The receive thread is constantly receiving data to a buffer and overwriting that buffer with new data. The transmit thread is constantly transmitting 0's from a prefilled buffer.
If anyone has an x310 running across 10 Gbps Ethernet, could you compile and run my program to check whether this problem occurs for you as well?
Here's what we have already tested:
I'm running on a server system with two 12-core Intel Xeon processors (https://ark.intel.com/products/91767/Intel-Xeon-Processor-E5-2650-v4-30M-Cache-2_20-GHz). My network card is the recommended X520-DA2. Someone had previously suggested NUMA as the issue, but I don't think this is the case, as the program works when we switch to transmitting and receiving on separate channels.
Since the program works just fine when we are transmitting and receiving on separate channels, I'm led to believe this is not a CPU power issue.
I've tested only transmitting and only receiving. We can transmit at 200MS/s across both channels and we can receive at 200MS/s across both channels, but we cannot transmit and receive from the same channel. This suggests our network card is working properly and we can handle the high rate.
I've tried my program on UHD 3.10.2 and the problem still occurs.
I've tried setting the tx_metadata to wait 2 seconds before transmitting. The problem still occurs.
I've tried running the example program txrx_loopback_from_file and that works for simultaneous receive and transmit, but I have no idea why.
From the last point, I'm led to believe that I am somehow calling the UHD API wrong, but I have no idea where the error is. Any help would be greatly appreciated.
Thanks,
Jason
#include <iostream>
#include <iomanip>
#include <stdlib.h>
#include <vector>
#include <complex>
#include <csignal>
#include <thread>
#include <uhd/utils/thread_priority.hpp>
#include <uhd/utils/safe_main.hpp>
#include <uhd/usrp/multi_usrp.hpp>
#include <uhd/types/tune_request.hpp>

typedef std::complex<short> Complex;

// Constants and signal variables
static bool stop_signal_called = false;
const int NUM_CHANNELS = 1;
const int BUFF_SIZE = 64000;

//function prototypes here
void recvTask(Complex *buff, uhd::rx_streamer::sptr rx_stream);
void txTask(Complex *buff, uhd::tx_streamer::sptr tx_stream, uhd::tx_metadata_t md);

void sig_int_handler(int){
    std::cout << "Interrupt Signal Received" << std::endl;
    stop_signal_called = true;
}

int UHD_SAFE_MAIN(int argc, char *argv[]) {
    uhd::set_thread_priority_safe();

    //type=x300,addr=192.168.30.2,second_addr=192.168.40.2
    std::cout << std::endl;
    std::cout << boost::format("Creating the usrp device") << std::endl;
    uhd::usrp::multi_usrp::sptr usrp = uhd::usrp::multi_usrp::make(std::string("type=x300,addr=192.168.30.2"));
    std::cout << std::endl;

    //set stream args
    uhd::stream_args_t stream_args("sc16");
    double samp_rate_tx = 10e6;
    double samp_rate_rx = 10e6;
    uhd::tune_request_t tune_request(0);

    //Lock mboard clocks
    usrp->set_clock_source(std::string("internal"));

    //set rx parameters
    usrp->set_rx_rate(samp_rate_rx);
    usrp->set_rx_freq(tune_request);
    usrp->set_rx_gain(0);

    //set tx parameters
    usrp->set_tx_rate(samp_rate_tx);
    usrp->set_tx_freq(tune_request);
    usrp->set_tx_gain(0);

    std::signal(SIGINT, &sig_int_handler);
    std::cout << "Press Ctrl + C to stop streaming..." << std::endl;

    //create buffers, 2 per channel (1 for tx, 1 for rx)
    // transmitting complex shorts -> typedef as Complex
    Complex *rx_buffs[NUM_CHANNELS];
    Complex *tx_buffs[NUM_CHANNELS];
    for (int i = 0; i < NUM_CHANNELS; i++){
        rx_buffs[i] = new Complex[BUFF_SIZE];
        tx_buffs[i] = new Complex[BUFF_SIZE];
        // only transmitting 0's
        std::fill(tx_buffs[i], tx_buffs[i]+BUFF_SIZE, 0);
    }

    //////////////////////////////////////////////////////////////////////////////
    ////////////////START RECEIVE AND TRANSMIT THREADS////////////////////////////
    //////////////////////////////////////////////////////////////////////////////
    printf("setting up threading\n");

    //reset usrp time
    usrp->set_time_now(uhd::time_spec_t(0.0));

    // set up RX streams and threads
    std::thread rx_threads[NUM_CHANNELS];
    uhd::rx_streamer::sptr rx_streams[NUM_CHANNELS];
    for (int i = 0; i < NUM_CHANNELS; i++){
        stream_args.channels = std::vector<size_t>(1,i);
        rx_streams[i] = usrp->get_rx_stream(stream_args);

        //setup streaming
        auto stream_mode = uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS;
        uhd::stream_cmd_t stream_cmd(stream_mode);
        stream_cmd.num_samps = 0;
        stream_cmd.stream_now = true;
        stream_cmd.time_spec = uhd::time_spec_t();
        rx_streams[i]->issue_stream_cmd(stream_cmd);

        //start rx thread
        std::cout << "Starting rx thread " << i << std::endl;
        rx_threads[i] = std::thread(recvTask, rx_buffs[i], rx_streams[i]);
    }

    // set up TX streams and threads
    std::thread tx_threads[NUM_CHANNELS];
    uhd::tx_streamer::sptr tx_streams[NUM_CHANNELS];

    // set up TX metadata
    uhd::tx_metadata_t md;
    md.start_of_burst = true;
    md.end_of_burst = false;
    md.has_time_spec = true;
    // start transmitting 2 seconds later
    md.time_spec = uhd::time_spec_t(2);

    for (int i = 0; i < NUM_CHANNELS; i++){
        //does not work when we transmit and receive on same channel,
        //if we change to stream_args.channels = std::vector<size_t>(1,1), this works for 1 channel.
        stream_args.channels = std::vector<size_t>(1,i);
        tx_streams[i] = usrp->get_tx_stream(stream_args);

        //start the thread
        std::cout << "Starting tx thread " << i << std::endl;
        tx_threads[i] = std::thread(txTask, tx_buffs[i], tx_streams[i], md);
    }

    printf("Waiting to join threads\n");
    for (int i = 0; i < NUM_CHANNELS; i++){
        //join threads
        tx_threads[i].join();
        rx_threads[i].join();
    }
    return EXIT_SUCCESS;
}

//////////////////////////////////////////////////////////////////////////////
////////////////RECEIVE AND TRANSMIT THREAD FUNCTIONS/////////////////////////
//////////////////////////////////////////////////////////////////////////////
void recvTask(Complex *buff, uhd::rx_streamer::sptr rx_stream){
    uhd::rx_metadata_t md;
    unsigned overflows = 0;

    //receive loop
    while(!stop_signal_called){
        size_t amount_received = rx_stream->recv(buff, BUFF_SIZE, md, 3.0);
        if (amount_received != BUFF_SIZE){ printf("receive not equal\n"); }

        //handle the error codes
        switch(md.error_code){
        case uhd::rx_metadata_t::ERROR_CODE_NONE:
            break;
        case uhd::rx_metadata_t::ERROR_CODE_TIMEOUT:
            std::cerr << "T";
            continue;
        case uhd::rx_metadata_t::ERROR_CODE_OVERFLOW:
            overflows++;
            std::cerr << "Got an Overflow Indication" << std::endl;
            continue;
        default:
            std::cout << boost::format(
                "Got error code 0x%x, exiting loop..."
            ) % md.error_code << std::endl;
            goto done_loop;
        }
    } done_loop:

    // tell receive to stop streaming
    auto stream_cmd = uhd::stream_cmd_t(uhd::stream_cmd_t::STREAM_MODE_STOP_CONTINUOUS);
    rx_stream->issue_stream_cmd(stream_cmd);

    //finished
    std::cout << "Overflows=" << overflows << std::endl << std::endl;
}

void txTask(Complex *buff, uhd::tx_streamer::sptr tx_stream, uhd::tx_metadata_t md){
    //transmit loop
    while(!stop_signal_called){
        size_t samples_sent = tx_stream->send(buff, BUFF_SIZE, md);
        md.start_of_burst = false;
        md.has_time_spec = false;
    }

    //send a mini EOB packet
    md.end_of_burst = true;
    tx_stream->send("", 0, md);
    printf("End transmit \n");
}

Cassandra INSERT fails after a lot of inserts with "Operation timed out"

I use the Cassandra C++ driver to write 100,000 rows to a 100-column table like this:
#include <cstdlib>
#include <stdio.h>
#include <cassandra.h>
#include <string>
#include <iostream>
#include <random>
#include <chrono>
#include <unistd.h>
#include <thread>

CassFuture *connect_future = NULL;
CassCluster *cluster = NULL;
CassSession *session = NULL;

std::random_device rd;
std::mt19937_64 gen(rd());
std::uniform_int_distribution<unsigned long long> dis;

int COLUMNS_COUNT = 100;

using namespace std;

void insertQ() {
    auto t1 = std::chrono::high_resolution_clock::now();
    for (int row = 0; row < 10000; ++row) {
        string columns;
        for (int i = 0; i < COLUMNS_COUNT; ++i) {
            columns += "name" + to_string(i) + " , ";
        }
        string result = "INSERT INTO mykeyspace.users2 (user_id,";
        result += columns;
        result += "lname) VALUES (";
        string values = to_string(dis(gen) % 50000000) + ",";
        for (int i = 0; i < COLUMNS_COUNT; ++i) {
            values += "'name" + to_string(dis(gen)) + "' , ";
        }
        values += " 'lname" + to_string(dis(gen) % 20) + "'";
        result += values;
        result += ");";

        CassStatement *statement = cass_statement_new(result.c_str(), 0);
        CassFuture *result_future = cass_session_execute(session, statement);
        cass_future_wait(result_future);
        if (cass_future_error_code(result_future) == CASS_OK) {
            // cout << "insert ok" << endl;
        }
        else {
            const char *message;
            size_t message_length;
            cass_future_error_message(result_future, &message, &message_length);
            fprintf(stderr, "Unable to run query: '%.*s'\n", (int) message_length,
                    message);
            cerr << "index : " << row << endl;
        }
        cass_statement_free(statement);
        cass_future_free(result_future);

        if (row % 1000 == 0)
        {
            // usleep(1000000);
            // std::this_thread::sleep_for(std::chrono::seconds(1));
            // cass_se
        }
    }
    auto t2 = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1);
    cout << "duration: " << duration.count() << endl;
}

int main() {
    /* Setup and connect to cluster */
    connect_future = NULL;
    cluster = cass_cluster_new();
    session = cass_session_new();

    /* Add contact points */
    // cass_cluster_set_contact_points(cluster, "127.0.0.1,127.0.0.2,127.0.0.3");
    cass_cluster_set_contact_points(cluster, "127.0.0.1");

    /* Provide the cluster object as configuration to connect the session */
    connect_future = cass_session_connect(session, cluster);
    if (cass_future_error_code(connect_future) == CASS_OK) {
        CassFuture *close_future = NULL;
        insertQ();
        /* Close the session */
        close_future = cass_session_close(session);
        cass_future_wait(close_future);
        cass_future_free(close_future);
    } else {
        /* Handle error */
        const char *message;
        size_t message_length;
        cass_future_error_message(connect_future, &message, &message_length);
        fprintf(stderr, "Unable to connect: '%.*s'\n", (int) message_length,
                message);
    }

    cass_future_free(connect_future);
    cass_cluster_free(cluster);
    cass_session_free(session);
    return 0;
}
It works and writes about 90,000 rows, then fails with this error:
index : 91627
Unable to run query: 'Operation timed out - received only 0 responses.'
..
After this the INSERTs keep failing, although I can still execute SELECT queries, until I restart the Cassandra service.
What's the problem?
My system: Ubuntu 14.04 x64, 8 GB RAM, Cassandra 2.1.4 (from the Cassandra Debian repositories with the default configuration).
Thanks.
This error is coming back from Cassandra. It indicates that fewer than the required number of replicas responded to your read/write request within the period of time configured in Cassandra. Since you are not specifying a consistency level, only one node needs to respond, and even that isn't happening within the write timeout. The most relevant configurations to look at in cassandra.yaml are:
write_request_timeout_in_ms (default: 2000 ms)
read_request_timeout_in_ms (default: 5000 ms)
range_request_timeout_in_ms (default: 10000 ms)
Since you are doing inserts, write_request_timeout_in_ms is probably the most relevant configuration.
What's likely happening is that you are overwhelming your Cassandra cluster. Have you looked at CPU utilization, disk I/O, and memory utilization on the server while running your test?
The interesting thing is that your code only ever does one INSERT at a time. Is this correct? I would expect that to be fine, but maybe it is putting intense pressure on the memory heap in Cassandra, which can't flush data fast enough and becomes backed up while writing to disk. You should take a look at your Cassandra system.log (usually in /var/log/cassandra on Linux) and see if there are any suspicious messages about long garbage collections (look for GCInspector) or memtable pressure.
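If you want to make the relevant knobs explicit on the client side rather than relying on defaults, a small sketch like this may help (assuming the DataStax C++ driver; both calls below are part of its API, but the 12-second value is just an illustration):

#include <cassandra.h>

// Hedged sketch: set an explicit consistency level and a longer client-side
// request timeout. This does not fix an overloaded node, but it removes any
// ambiguity about which defaults apply. The server-side
// write_request_timeout_in_ms still applies independently.
void configure(CassCluster *cluster, CassStatement *statement) {
    cass_cluster_set_request_timeout(cluster, 12000);                // milliseconds
    cass_statement_set_consistency(statement, CASS_CONSISTENCY_ONE); // the default, made explicit
}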

Check process every 30 sec in C++ [duplicate]

This question already has answers here:
How to check if a process is running or not using C++
(3 answers)
Closed 9 years ago.
Hi, I use this code to check processes after my app "piko.exe" runs: if any of the programs "non.exe", "firefox.exe", or "lol.exe" is running, it closes my app and returns an error.
But I need this check to run every 30 seconds. I used a while loop, but then my main program (this code is one part of my project) stopped working. So please, if possible, could someone edit my code? Thank you.
#include "StdInc.h"
#include <windows.h>
#include <tlhelp32.h>
#include <tchar.h>
#include <stdio.h>
void find_Proc(){
HANDLE proc_Snap;
HANDLE proc_pik;
HANDLE proc_pikterm;
PROCESSENTRY32 pe32;
PROCESSENTRY32 pe32pik;
int i;
char* chos[3] = {"non.exe","firefox.exe","lol.exe"};
char* piko = "piko.exe";
proc_pik = CreateToolhelp32Snapshot( TH32CS_SNAPPROCESS, 0 );
proc_Snap = CreateToolhelp32Snapshot( TH32CS_SNAPPROCESS, 0 );
pe32.dwSize = sizeof(PROCESSENTRY32);
pe32pik.dwSize = sizeof(PROCESSENTRY32);
for(i = 0; i < 3 ; i++){
Process32First(proc_Snap , &pe32);
do{
if(!strcmp(chos[i],pe32.szExeFile)){
MessageBox(NULL,"CHEAT DETECTED","ERROR",NULL);
Process32First(proc_pik,&pe32pik);
do{
if(!strcmp(iw4m,pe32pik.szExeFile)){
proc_pikterm = OpenProcess(PROCESS_ALL_ACCESS, TRUE, pe32pik.th32ProcessID);
if(proc_pikterm != NULL)
TerminateProcess(proc_pikterm, 0);
CloseHandle(proc_pikterm);
}
} while(Process32Next(proc_pik, &pe32pik));
}
} while(Process32Next(proc_Snap, &pe32));
}
CloseHandle(proc_Snap);
CloseHandle(proc_pik);
}
Based on what OS you're using, you can poll the system time and check whether 30 seconds have expired: take the time at the beginning of your loop, take the time at the end, and subtract them. Then sleep for the interval you want minus the time your routine took to run, as sketched below.
Also, if you don't need EXACTLY 30 seconds, you could just add sleep(30) to your loop.
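A sketch of that polling approach (Windows assumed, since the question uses the Win32 API; checkEvery30s is an illustrative name):

#include <windows.h>

void find_Proc(); // the check function from the question, assumed visible here

// Measure how long the check itself took, then sleep only for the rest of
// the 30-second interval.
void checkEvery30s() {
    for (;;) {
        DWORD start = GetTickCount();           // milliseconds since boot
        find_Proc();                            // run the process check
        DWORD elapsed = GetTickCount() - start;
        if (elapsed < 30000)
            Sleep(30000 - elapsed);             // sleep out the remainder
    }
}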
Can you explain to me why this method wouldn't work for you? The code below is designed to count up one value each second. Make "checkMyProcess" do whatever you need it to do within that while loop before the sleep call.
#include <iostream>
#include <unistd.h>
using namespace std;

int someGlobal = 5; //Added in a global so you can see what fork does, with respect to not sharing memory!

bool checkMyProcess(const int MAX) {
    int counter = 0;
    while(counter < MAX) {
        cout << "CHECKING: " << counter++ << " Global: " << someGlobal++ << endl;
        sleep(1);
    }
    return true;
}

void doOtherWork(const int MIN) {
    int counter = 100;
    while(counter > MIN) {
        cout << "OTHER WORK:" << counter-- << " Global: " << someGlobal << endl;
        sleep(1);
    }
}

int main() {
    int pid = fork();
    if(pid == 0) {
        checkMyProcess(5);
    } else {
        doOtherWork(90);
    }
}
Realize, of course, that if you want to do work outside of the while loop within this same program, you will have to use threading or fork a pair of processes.
EDIT:
I added a call to "fork" so you can see the two processes doing work at the same time. Note: if the "checkMyProcess" function needs to know something about the memory being used in the "doOtherWork" function, threading will be a much easier solution for you, as the sketch below shows.
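Here is a hedged sketch of the threaded variant using std::thread (std::atomic keeps the shared counter safe to touch from both threads; the names mirror the fork example but are otherwise illustrative):

#include <atomic>
#include <iostream>
#include <thread>
#include <unistd.h>

std::atomic<int> someGlobal{5}; // shared between both threads, unlike with fork()

int main() {
    std::thread checker([] {
        for (int i = 0; i < 5; ++i) {
            std::cout << "CHECKING, Global: " << someGlobal++ << std::endl;
            sleep(1);
        }
    });
    std::thread worker([] {
        for (int i = 0; i < 5; ++i) {
            std::cout << "OTHER WORK, Global: " << someGlobal << std::endl;
            sleep(1);
        }
    });
    checker.join();
    worker.join();
}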

Single-threaded and multi-threaded code taking the same time

I've been using pthreads, but I've realized that my code takes the same amount of time whether I use one thread or split the task into 1/N chunks across N threads. To illustrate, I reduced my code to this example:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <boost/progress.hpp>

#define SIZEEXEC 200000000

using namespace boost;
using std::cout;
using std::endl;

typedef struct t_d{
    int intArg;
} Thread_data;

void* function(void *threadarg)
{
    Thread_data *my_data = (Thread_data *) threadarg;
    int size = my_data->intArg;
    int i = 0;
    unsigned rand_state = 0;
    for(i = 0; i < size; i++) rand_r(&rand_state);
    return 0;
}

void withOutThreads(void)
{
    Thread_data* t1 = new Thread_data();
    t1->intArg = SIZEEXEC/3;
    function((void *) t1);

    Thread_data* t2 = new Thread_data();
    t2->intArg = SIZEEXEC/3;
    function((void *) t2);

    Thread_data* t3 = new Thread_data();
    t3->intArg = SIZEEXEC/3;
    function((void *) t3);
}

void withThreads(void)
{
    pthread_t* h1 = new pthread_t;
    pthread_t* h2 = new pthread_t;
    pthread_t* h3 = new pthread_t;
    pthread_attr_t* atr = new pthread_attr_t;
    pthread_attr_init(atr);
    pthread_attr_setscope(atr, PTHREAD_SCOPE_SYSTEM);

    Thread_data* t1 = new Thread_data();
    t1->intArg = SIZEEXEC/3;
    pthread_create(h1, atr, function, (void *) t1);

    Thread_data* t2 = new Thread_data();
    t2->intArg = SIZEEXEC/3;
    pthread_create(h2, atr, function, (void *) t2);

    Thread_data* t3 = new Thread_data();
    t3->intArg = SIZEEXEC/3;
    pthread_create(h3, atr, function, (void *) t3);

    pthread_join(*h1, 0);
    pthread_join(*h2, 0);
    pthread_join(*h3, 0);

    pthread_attr_destroy(atr);
    delete h1;
    delete h2;
    delete h3;
    delete atr;
}

int main(int argc, char *argv[])
{
    bool multThread = bool(atoi(argv[1]));
    if(!multThread){
        cout << "NO THREADS" << endl;
        progress_timer timer;
        withOutThreads();
    }
    else {
        cout << "WITH THREADS" << endl;
        progress_timer timer;
        withThreads();
    }
    return 0;
}
Either the code is wrong or there is something on my system preventing parallel processing. I'm running Ubuntu 11.10 x86_64-linux-gnu, gcc 4.6, on an Intel Xeon E5620 @ 2.40 GHz × 4.
Thanks for any advice!
EDIT:
Given the answers, I have realized that (1) progress_timer did not let me measure differences in "real" (wall-clock) time, and (2) the task I am giving "function" does not seem heavy enough for my machine to show different times with 1 or 3 threads (which is odd; I get around 10 seconds in both cases...). I have tried allocating memory to make the task heavier, and yes, I see a difference. Although my other code is more complex, there is a good chance it still runs about the same time with 1 or 3 threads. Thanks!
This is expected. You are measuring CPU time, not wall time.
time ./test 1
WITH THREADS
2.55 s
real 0m1.387s
user 0m2.556s
sys 0m0.008s
Real time is less than user time, which is identical to your measured time. Real time is what your wall clock shows; user and sys are CPU time spent in user and kernel mode by all CPUs combined.
time ./test 0
NO THREADS
2.56 s
real 0m2.578s
user 0m2.560s
sys 0m0.008s
Your measured time, real time and user time are all virtually the same.
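If you want to measure wall time inside the program itself, a hedged sketch with std::chrono works too (steady_clock measures wall-clock time, unlike the std::clock-based boost::progress_timer; wallSeconds is an illustrative helper, not an existing API):

#include <chrono>
#include <iostream>

// Time a callable by the wall clock and return elapsed seconds.
template <typename F>
double wallSeconds(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

// Usage, e.g.: std::cout << wallSeconds(withThreads) << " s\n";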
The culprit seems to be progress_timer, or rather the understanding of it.
Try replacing main() with this. It shows that the program doesn't take as long (in wall-clock time) as progress_timer reports; maybe progress_timer reports total CPU time?
#include <sys/time.h>

void PrintTime() {
    struct timeval tv;
    if(!gettimeofday(&tv, NULL))
        cout << "Sec=" << tv.tv_sec << " usec=" << tv.tv_usec << endl;
}

int main(int argc, char *argv[])
{
    bool multThread = bool(atoi(argv[1]));
    PrintTime();
    if(!multThread){
        cout << "NO THREADS" << endl;
        progress_timer timer;
        withOutThreads();
    }
    else {
        cout << "WITH THREADS" << endl;
        progress_timer timer;
        withThreads();
    }
    PrintTime();
    return 0;
}