It seems to be platform related (it works with Ubuntu 12.04 on my laptop, but doesn't work with another Ubuntu 12.04 install on my workstation).
Here is a code sample showing what I am doing with two threads.
#include <iostream>
#include <thread>
#include <chrono>
#include <atomic>
#include <GL/glfw.h>
using namespace std;
int main() {
atomic_bool g_run(true);
string s;
thread t([&]() {
cout << "init" << endl;
if (!glfwInit()) {
cerr << "Failed to initialize GLFW." << endl;
abort();
}
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 1);
if(!glfwOpenWindow(640, 480, 8, 8, 8, 0, 24, 0, GLFW_WINDOW)) {
glfwTerminate();
cerr << "Cannot open OpenGL 2.1 render context." << endl;
abort();
}
cout << "inited" << endl;
while (g_run) {
// rendering something
cout << "render" << endl;
this_thread::sleep_for(chrono::seconds(1));
}
// unload glfw
glfwTerminate();
cout << "quit" << endl;
});
__sync_synchronize(); // a barrier added as ildjarn suggested.
while (g_run) {
cin >> s;
cout << "user input: " << s << endl;
if (s == "q") {
g_run = false;
cout << "user interrupt" << endl;
cout.flush();
}
}
__sync_synchronize(); // another barrier
t.join();
}
Here is my compile command:
g++ -std=c++0x -o main main.cc -lpthread -lglfw
My laptop runs this program like this:
init
inited
render
render
q
user input: q
user interrupt
quit
And the workstation just outputs:
init
inited
render
render
q
render
q
render
q
render
^C
It simply ignores my input (another program with the same procedure, using GLEW and GLFW, just jumps out of the while loop in the main thread without reading my input). BUT it works normally under gdb!
Any idea what's going on?
Update
After more tests on other machines, it turns out NVIDIA's driver causes this. The same thing happens on other machines with NVIDIA graphics cards.
I used this code to close my program and catch the q key while it's running:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <termios.h>
static struct termios old, _new;
static void * breakonret(void *instance);
/* Initialize _new terminal i/o settings */
void initTermios(int echo)
{
tcgetattr(0, &old); /* grab old terminal i/o settings */
_new = old; /* make _new settings same as old settings */
_new.c_lflag &= ~ICANON; /* disable buffered i/o */
_new.c_lflag = echo ? (_new.c_lflag | ECHO) : (_new.c_lflag & ~ECHO); /* set echo mode without clobbering other flags */
tcsetattr(0, TCSANOW, &_new); /* use these _new terminal i/o settings now */
}
/* Read 1 character with echo */
char getche(void)
{
char ch;
initTermios(1);
ch = getchar();
tcsetattr(0, TCSANOW, &old);
return ch;
}
int main(){
pthread_t mthread;
pthread_create(&mthread, NULL, breakonret, NULL); //initialize break on return
while(1){
printf("Data on screen\n");
sleep(1);
}
pthread_join(mthread, NULL);
}
static void * breakonret(void *instance){// you need to press q and return to close it
char c;
c = getche();
printf("\nyou pressed %c \n", c);
if(c=='q')exit(0);
fflush(stdout);
return NULL;
}
With this you have a thread reading input from your keyboard.
After more tests on other machines, it turns out NVIDIA's driver causes this. The same thing happens on other machines with NVIDIA graphics cards.
To fix this problem, the initialization order matters. On NVIDIA machines, GLFW has to be initialized before anything else (e.g., before creating any thread, even if you are not using GLFW's threading routines). The initialization has to be complete, i.e., the output window must be created right after glfwInit(); otherwise the problem persists.
Here is the fixed code.
#include <iostream>
#include <thread>
#include <chrono>
#include <atomic>
#include <GL/glfw.h>
using namespace std;
int main() {
atomic_bool g_run(true);
string s;
cout << "init" << endl;
if (!glfwInit()) {
cerr << "Failed to initialize GLFW." << endl;
abort();
}
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 1);
if(!glfwOpenWindow(640, 480, 8, 8, 8, 0, 24, 0, GLFW_WINDOW)) {
glfwTerminate();
cerr << "Cannot open OpenGL 2.1 render context." << endl;
abort();
}
cout << "inited" << endl;
thread t([&]() {
while (g_run) {
cin >> s;
cout << "user input: " << s << endl;
if (s == "q") {
g_run = false;
cout << "user interrupt" << endl;
cout.flush();
}
}
});
while (g_run) {
// rendering something
cout << "render" << endl;
this_thread::sleep_for(chrono::seconds(1));
}
t.join();
// unload glfw
glfwTerminate();
cout << "quit" << endl;
}
Thanks for all your help.
Related
I need your help. Program A executes program B with fork(). Every 5 seconds the process belonging to program B is interrupted. If the user enters any key within a certain time, the process is continued and interrupted again after the same interval. If no key is entered, both program A and program B are terminated prematurely. I have tried the following code, but it does not work. Any suggestions or tips that would help me?
#include <iostream>
#include <chrono>
#include <unistd.h>
#include <sys/wait.h>
#include <signal.h>
using namespace std;
using namespace chrono;
int pid;
void signal_handler(int signum) {
cout << "Programm B is interrupted. Please enter any key within 5 or the programm will be terminated" << endl;
kill(pid,SIGSTOP);
alarm(5);
pause();
alarm(5);
}
int main(int argc, char* argv[]) {
//Usage
if(string(argv[1]) == "h" || string(argv[1]) == "help"){
cout << "usage" << endl;
return 0;
}
signal(SIGALRM, signal_handler);
pid = fork();
if (pid == 0) {
cout << "Name of programm B: " << argv[1] << endl;
cout << "PID of programm B: " << getpid() << endl;
execvp(argv[1], &argv[1]);
} else if (pid > 0) {
cout << "PID of programm A: " << getpid() << endl;
high_resolution_clock::time_point t1 = high_resolution_clock::now();
waitpid(pid, nullptr, 0);
high_resolution_clock::time_point t2 = high_resolution_clock::now();
auto duration = duration_cast<milliseconds>(t2 - t1).count();
cout << "Computing time: " << duration << "ms" << endl;
} else {
cerr << "error << endl;
return 1;
}
return 0;
}
Any help or solution would be appreciated. I am a beginner in C++, by the way.
Signals can get tricky and there are lots of issues with your approach.
You should:
kick off the timer (alarm(5)) in main
do the sighandler registration and timer kick-off after you've spawned the child (or you somewhat risk running the signal handler in the child in between fork and execvp)
use sigaction rather than signal to register the signal, as the former has clear portable semantics unlike the latter
loop on EINTR around waitpid (as signal interruptions will cause waitpid to fail with EINTR)
As for the handler, it'll need to
use only async-signal-safe functions
register another alarm() around read
unblock SIGALRM for the alarm around read, but only after you somehow mark yourself as already being in your SIGALRM handler, so that a potential recursive entry of the handler can do something different (kill the child and exit)
(For the last point, you could do without signal-unblocking if you register the handler with .sa_flags = SA_NODEFER, but that has the downside of opening up your application to stack-overflow caused by many externally sent (via kill) SIGALRMs. If you wanted to handle externally sent SIGALRMs precisely, you could register the handler with .sa_flags=SA_SIGINFO and use info->si_code to differentiate between user-sends and alarm-sends of SIGALRM, presumably aborting on externally-sent ones)
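A minimal sketch of that si_code check (not used in the full example that follows; the handler name is illustrative, and the SI_USER/SI_KERNEL semantics are as documented for Linux) might look like:
#include <signal.h>
#include <unistd.h>
void alrm_siginfo_handler(int sig, siginfo_t *info, void *ctx) {
(void)sig; (void)ctx;
// kill() from user space arrives with si_code == SI_USER (sigqueue: SI_QUEUE);
// an alarm()-generated SIGALRM is kernel-generated (SI_KERNEL on Linux).
if (info->si_code == SI_USER || info->si_code == SI_QUEUE) {
_exit(1); // externally sent: bail out
}
// ... otherwise proceed as in the regular handler shown further down ...
}
// registered with:
//   struct sigaction sa = {}; sa.sa_sigaction = alrm_siginfo_handler;
//   sigemptyset(&sa.sa_mask); sa.sa_flags = SA_SIGINFO;
//   sigaction(SIGALRM, &sa, 0);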
It could look something like this (based on your code):
#include <iostream>
#include <chrono>
#include <unistd.h>
#include <sys/wait.h>
#include <signal.h>
#include <string.h>
#include <errno.h> /* for errno / EINTR in the waitpid loop */
//AS-safe raw io helper functions
ssize_t /* Write "n" bytes to a descriptor */
writen(int fd, const char *ptr, size_t n)
{
size_t nleft;
ssize_t nwritten;
nleft = n;
while (nleft > 0) {
if ((nwritten = write(fd, ptr, nleft)) < 0) {
if (nleft == n)
return(-1); /* error, return -1 */
else
break; /* error, return amount written so far */
} else if (nwritten == 0) {
break;
}
nleft -= nwritten;
ptr += nwritten;
}
return(n - nleft); /* return >= 0 */
}
ssize_t writes(int fd, char const *str0) { return writen(fd,str0,strlen(str0)); }
ssize_t writes2(char const *str0) { return writes(2,str0); }
//AS-safe sigprocmask helpers (they're in libc too, but not specified as AS-safe)
int sigrelse(int sig){
sigset_t set; sigemptyset(&set); sigaddset(&set,sig);
return sigprocmask(SIG_UNBLOCK,&set,0);
}
int sighold(int sig){
sigset_t set; sigemptyset(&set); sigaddset(&set,sig);
return sigprocmask(SIG_BLOCK,&set,0);
}
#define INTERRUPT_TIME 5
using namespace std;
using namespace chrono;
int pid;
volatile sig_atomic_t recursing_handler_eh; //to differentiate recursive executions of signal_handler
void signal_handler(int signum) {
char ch;
if(!recursing_handler_eh){
kill(pid,SIGSTOP);
writes2("Programm B is interrupted. Please type enter within 5 seconds or the programm will be terminated\n");
alarm(5);
recursing_handler_eh = 1;
sigrelse(SIGALRM);
if (1!=read(0,&ch,1)) signal_handler(signum);
alarm(0);
sighold(SIGALRM);
writes2("Continuing");
kill(pid,SIGCONT);
recursing_handler_eh=0;
alarm(INTERRUPT_TIME);
return;
}
kill(pid,SIGTERM);
_exit(1);
}
int main(int argc, char* argv[]) {
//Usage
if(string(argv[1]) == "h" || string(argv[1]) == "help"){
cout << "usage" << endl;
return 0;
}
pid = fork();
if (pid == 0) {
cout << "Name of programm B: " << argv[1] << endl;
cout << "PID of programm B: " << getpid() << endl;
execvp(argv[1], &argv[1]);
} else if (pid < 0) { cerr << "error" <<endl; return 1; }
struct sigaction sa; sa.sa_handler = signal_handler; sigemptyset(&sa.sa_mask); sa.sa_flags=0; sigaction(SIGALRM, &sa,0);
//signal(SIGALRM, signal_handler);
alarm(INTERRUPT_TIME);
cout << "PID of programm A: " << getpid() << endl;
high_resolution_clock::time_point t1 = high_resolution_clock::now();
int r;
do r = waitpid(pid, nullptr, 0); while(r==-1 && errno==EINTR);
high_resolution_clock::time_point t2 = high_resolution_clock::now();
auto duration = duration_cast<milliseconds>(t2 - t1).count();
cout << "Computing time: " << duration << "ms" << endl;
return 0;
}
Note that the above will wait only for an enter key. To wait for any key, you'll need to put your terminal in raw/cbreak mode and restore the previous settings on exit (ideally on signal deaths too).
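A minimal cbreak-mode sketch (illustrative names; it restores the settings only on normal exit via atexit, so you would additionally restore them in your signal handlers before _exit) could look like:
#include <termios.h>
#include <unistd.h>
#include <stdlib.h>
static struct termios saved_tio;
static void restore_tty(void) { tcsetattr(0, TCSANOW, &saved_tio); }
static void enable_cbreak(void) {
struct termios raw;
tcgetattr(0, &saved_tio);          /* remember the current settings */
raw = saved_tio;
raw.c_lflag &= ~(ICANON | ECHO);   /* no line buffering, no echo */
raw.c_cc[VMIN] = 1;                /* read() returns after a single byte */
raw.c_cc[VTIME] = 0;
tcsetattr(0, TCSANOW, &raw);
atexit(restore_tty);               /* restore on normal exit */
}
After calling enable_cbreak(), the read(0, &ch, 1) in the handler returns on the first keypress instead of waiting for enter.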
The following code works just fine when linking with the Oracle 11g client, but if I compile the same code linking with the Oracle 12c libraries, I receive the error EINVAL Invalid argument (errno 22).
#include <sys/types.h>
#include <aio.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h> // for memset
#include <unistd.h> // for close
#include <iostream>
using namespace std;
const int SIZE_TO_READ = 100;
int main()
{
// open the file
int file = open("blah.txt", O_RDONLY, 0);
if (file == -1)
{
cout << "Unable to open file!" << endl;
return 1;
}
// create the buffer
char* buffer = new char[SIZE_TO_READ];
// create the control block structure
aiocb cb;
memset(&cb, 0, sizeof(aiocb));
cb.aio_nbytes = SIZE_TO_READ;
cb.aio_fildes = file;
cb.aio_offset = 0;
cb.aio_buf = buffer;
// read!
if (aio_read(&cb) == -1)
{
cout << "Unable to create request!" << endl;
close(file);
}
cout << "Request enqueued!" << endl;
// wait until the request has finished
while(aio_error(&cb) == EINPROGRESS)
{
cout << "Working..." << endl;
}
// success?
int numBytes = aio_return(&cb);
if (numBytes != -1)
cout << "Success!" << endl;
else
cout << "Error!" << endl;
// now clean up
delete[] buffer;
close(file);
return 0;
}
To compile, I first set the environment variables:
export ICLIBHOME=/u01/oracle/product/Linux/2.6/x86_64/clients/12.1.0.2/64bit/client/lib
export LD_LIBRARY_PATH=${ICLIBHOME}
Afterwards I compile and execute the binary, but I get this error:
g++ myaio.cpp -o myaio -lrt -L${ICLIBHOME} -lclntsh
./myaio
Unable to create request 22
I think it might be something related to the way my pointers are used/initialized in my struct, but I'm not completely sure. I use 'g++ -lpthread main.cpp' to compile. The program just hangs in Linux, while executing properly in Windows. The program doesn't even print a cout I put at the beginning of the code for debugging purposes.
#include "pthread.h"
#include "semaphore.h"
#include "time_functions.h"
#include <iostream>
#include <fstream>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
using namespace std;
struct vars {
char buffer[10][1000];
int put;
int take;
sem_t openSlot;
sem_t slotInactive;
sem_t newData;
ifstream readFile;
ofstream writeFile;
};
void *write(void *in) //consumer, writes data to file and deletes data from buffer
{
vars* writeVars = (vars*)in;
while (true)
{
sem_wait(&(*writeVars).newData);
sem_wait(&(*writeVars).slotInactive);
if ((*writeVars).buffer[(*writeVars).take % 10][0] != '$')
{
(*writeVars).writeFile << (*writeVars).buffer[(*writeVars).take % 10];
if ((*writeVars).readFile.eof() != true)
{
(*writeVars).writeFile << endl;
}
else
{
break;
}
}
(*writeVars).take++;
sem_post(&(*writeVars).openSlot);
sem_post(&(*writeVars).slotInactive);
}
pthread_exit(0);
return 0;
}
void *read(void *in) //producer, reads data into buffer
{
vars* readVars = (vars*)in;
char read_line[1000];
while ((*readVars).readFile.getline(read_line, 1000))
{
sem_wait(&(*readVars).openSlot);
sem_wait(&(*readVars).slotInactive);
strcpy((*readVars).buffer[(*readVars).put % 10], read_line);
(*readVars).put++;
sem_post(&(*readVars).slotInactive);
sem_post(&(*readVars).newData);
}
sem_wait(&(*readVars).openSlot);
sem_wait(&(*readVars).slotInactive);
(*readVars).buffer[(*readVars).put % 10][0] = '$';
sem_post(&(*readVars).slotInactive);
sem_post(&(*readVars).newData);
pthread_exit(0);
return 0;
}
int main(int argc, char* argv[])
{
char pause[10];
vars *varsPointer, var;
varsPointer = &var;
var.take = 0;
var.put = 0;
var.writeFile.open(argv[2], ios::out);
var.readFile.open(argv[1], ios::in);
start_timing();
sem_init(&var.openSlot, 0, 10);
sem_init(&var.slotInactive, 0, 1);
sem_init(&var.newData, 0, 0);
pthread_t read_Thread, write_Thread;
pthread_create(&read_Thread, NULL, read, varsPointer);
pthread_create(&write_Thread, NULL, write, varsPointer);
pthread_join(read_Thread, NULL);
pthread_join(write_Thread, NULL);
sem_destroy(&var.openSlot);
sem_destroy(&var.slotInactive);
sem_destroy(&var.newData);
stop_timing();
var.readFile.close();
var.writeFile.close();
//Display timer
cout << "wall clock time (ms):" << get_wall_clock_diff() * 1000 << '\n';
cout << "cpu time (ms):" << get_CPU_time_diff() * 1000 << '\n';
cout << "Type Something and Press Enter To Continue";
cin >> pause; //Just used to keep cmd promt open in Windows after program execution
return 0;
}
I'm trying to make a timer which counts down from the amount of time the user gives it to zero.
Now I'm trying to add a pause function to it, which requires my program to accept and read input while the timer ticks.
This is the code I have so far:
#include <iostream>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <dos.h>
#include <windows.h>
using namespace std;
// sleep(5000);
int seconds;
int hoursLeft;
int minutesLeft;
int secondsCount=0;
void timeLeft ()
{
hoursLeft = seconds/3600;
minutesLeft = seconds/60 - hoursLeft*60;
}
void timer ()
{
if (secondsCount == 60)
{
timeLeft();
cout << "The Amount of time left is: " << hoursLeft << " hours and " << minutesLeft << " minutes left." << endl;
secondsCount=0;
}
secondsCount++;
seconds--;
Sleep(1000);
timer();
}
int main()
{
// introduction and time picking
cout << "Welcome to my Timer - Please set the amount of hours and than minutes you want the timer to run" << endl;
double requestedHours, requestedMinutes;
cin >> requestedHours;
cin >> requestedMinutes;
double requestedSeconds = requestedHours*3600 + requestedMinutes*60;
seconds = requestedSeconds;
cout << "Timer Started";
timer();
}
#include <stdio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h> /* for read() and usleep() */
int main()
{
char buffer[16];
int flags;
int fd;
int r;
fd = 0; //stdin
flags = fcntl(fd, F_GETFL, 0);
fcntl(fd, F_SETFL, flags | O_NONBLOCK);
while (1)
{
memset(buffer, 0, sizeof(buffer));
r = read(0, buffer, sizeof(buffer)); //return the number of bytes it reads
if (r > 0) //something was read
{
printf("read: %d\n", buffer[0]);
fflush(stdin);
}
else //nothing has been read
{
puts("update timer here");
}
usleep(50000);
}
return (0);
}
Using a non-blocking read on the file descriptor can also work.
Sorry, I only have this solution in C.
PS: Your program isn't supposed to recurse infinitely; you should use a loop instead of infinite recursion (timer() calls itself), or your stack will overflow.
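Putting the two pieces together, a loop-based countdown with the non-blocking read (a rough sketch; the 10-second duration and the 'p'/'q' keys are just examples, and with the terminal still in canonical mode you have to press enter after the key) could look like:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
int main(void)
{
int seconds = 10;  /* example duration */
int paused = 0;
/* make stdin non-blocking, as in the snippet above */
fcntl(0, F_SETFL, fcntl(0, F_GETFL, 0) | O_NONBLOCK);
while (seconds > 0)
{
char c;
if (read(0, &c, 1) == 1)
{
if (c == 'p') paused = !paused; /* toggle pause */
if (c == 'q') break;            /* quit early */
}
if (!paused)
{
printf("%d seconds left\n", seconds);
seconds--;                      /* a loop, not recursion: the stack stays flat */
}
sleep(1);
}
printf("time is up\n");
return 0;
}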
I am trying to understand how semaphores work in C++, but I am having some trouble.
Here is my code:
#include <iostream>
#include <pthread.h>
#include <fcntl.h> /* For O_* constants */
#include <sys/stat.h> /* For mode constants */
#include <semaphore.h>
using namespace std;
static sem_t *sem_thread;
static pthread_t thread_id;
void * threadFunc(void *) {
cout << "threadFunc\n";
cout << "threadFunc\n";
cout << "threadFunc\n";
cout << "threadFunc\n";
cout << "threadFunc\n";
sem_post(sem_thread);
return 0;
}
int main()
{
// Init semaphores
sem_thread = sem_open("./semaphores/sem_thread", O_TRUNC, 0777, 0);
// Init thread
int rc = pthread_create(&thread_id, NULL, threadFunc, NULL);
if (rc != 0)
{
cerr << "Pthread couldn't be created. rc=" << rc << endl;
abort();
}
sem_wait(sem_thread);
cout << "Main thread\n";
cout << "Main thread\n";
cout << "Main thread\n";
cout << "Main thread\n";
cout << "Main thread\n";
sem_close(sem_thread);
sem_unlink("./semaphores/sem_thread");
return 0;
}
So I expect the program to print threadFunc first and then Main thread. However, this is what I get:
Main thread
tMhariena dtFhurneca
dt
hMraeiand Ftuhnrce
atdh
rMeaaidnF utnhcr
etahdr
eMaadiFnu ntch
rteharde
adFunc
Any idea what's happening?
You're not creating the semaphore, nor checking whether it was created.
There are two problems with your call to sem_open:
you need O_CREAT, not O_TRUNC, to create it.
the name isn't valid. Named semaphores aren't kept in the filesystem.
Looking at man sem_overview, the naming convention is specified thusly:
A named semaphore is identified by a name of the form /somename;
that is, a null-terminated string of up to NAME_MAX-4 (i.e.,
251) characters consisting of an initial slash, followed by one
or more characters, none of which are slashes.
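So a corrected call would use a slash-prefixed name and O_CREAT, and check the result. A minimal sketch (the name "/sem_thread" is just an example; initial value 0 as in the question):
#include <fcntl.h>      /* O_CREAT */
#include <sys/stat.h>   /* mode constants */
#include <semaphore.h>
#include <cstdio>       /* perror */
int main() {
sem_t *sem_thread = sem_open("/sem_thread", O_CREAT, 0644, 0);
if (sem_thread == SEM_FAILED) {
perror("sem_open");
return 1;
}
// ... create the thread, sem_post() in it, sem_wait() here, as before ...
sem_close(sem_thread);
sem_unlink("/sem_thread");
return 0;
}
With the semaphore actually created (and the call checked), the sem_wait in main blocks until the thread's sem_post, so the threadFunc lines print before the Main thread ones.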