PseudoTerminal - not reading from stdin - C++

I am creating a class where I can create multiple threads that are pseudo-terminals. To talk to each pseudo-terminal slave I have to create multiple files/FIFOs, because talking to stdin makes every created pseudo-terminal listen. The problem is that when I use a FIFO for input it does not work.
Here is the code
void * Terminal::tTerminal(void * pvParameters)
{
Terminal (*self) = reinterpret_cast<Terminal*>(pvParameters);
fd_set inFds;
//dup2(self->in, STDIN_FILENO);
for (;;)
{
FD_ZERO(&inFds);
FD_SET(self->in, &inFds);
FD_SET(self->masterFd, &inFds);
if (select(self->masterFd + 1, &inFds, &inFds, NULL, NULL) == -1)
{
printf("select");
}
if (FD_ISSET(self->in, &inFds))
{
self->numRead = read(self->in, self->buf, BUF_SIZE);
if (self->numRead <= 0)
exit(EXIT_SUCCESS);
if (write(self->masterFd, self->buf, self->numRead) != self->numRead)
printf("partial/failed write (masterFd)");
}
else
{
printf("partial/failed write (masterFd)");
fflush(stdout);
}
if (FD_ISSET(self->masterFd, &inFds))
{
self->numRead = read(self->masterFd, self->buf, BUF_SIZE);
if (self->numRead <= 0)
exit(EXIT_SUCCESS);
if (write(self->out, self->buf, self->numRead) != self->numRead)
printf("partial/failed write (STDOUT_FILENO)");
}
else
{
printf("partial/failed write (STDOUT_FILENO)");
fflush(stdout);
}
}
}
For further notice: the FIFOs are created correctly, the file descriptors are not 0, and both the master and the slave are running. The only problem is that
FD_ISSET(self->in, &inFds)
is never set.
Thanks

You should make sure the first argument to select() is the highest of all possible file descriptors plus one, so:
select(std::max(self->masterFd, self->in) + 1, &inFds, &inFds, NULL, NULL)

Related

How to make two processes complete one task, where whichever finishes first terminates the other

I am looking for a good way to create two equal child processes which will complete one task: each sorts the same array of numbers separately, but with a different algorithm, for example merge sort and quicksort.
The first process to complete the task should terminate the other one (and print the result).
Also, what would be the best way to transfer data to the processes (pipes?)
Is there anything else I haven't taken into account?
I should use system functions and POSIX/Linux-based code.
int main() {
pid_t child_a, child_b;
int unsortedData[MAX] = getNumericDataFromFile(/*filename*/);
// Open two pipes for communication
// The descriptors will be available to both
// parent and child.
int in_fd[2];
int out_fd[2];
if (pipe(in_fd) == -1) { // For child's stdin
fprintf(stderr, "Pipe In Failed");
return 1;
}
if (pipe(out_fd) == -1) { // For child's stdout
fprintf(stderr, "Pipe Out Failed");
return 1;
}
if ((child_a = fork()) < 0) { // create first child process
fprintf(stderr, "Fork of first child process Failed");
return 1;
}
if (child_a == 0) {
/* Child A code, execute first algorithm */
int sortedData[MAX] = firstChildProcess(/*unsortedData*/);
/* terminate the other process properly if it's first? */
} else {
if ((child_b = fork()) < 0) { // create second child process
fprintf(stderr, "Fork of second child process Failed");
return 1;
}
if (child_b == 0) {
/* Child B code, execute second algorithm */
int sortedData[MAX] = ChildProcess(/*unsortedData*/);
/* terminate the other process properly if it's first? */
} else {
parentCode(..);/* Parent Code */
}
}
return 0;
}
Input : 6, 5, 4, 3, 2, 1
Output: Second algorithm finished first sorting task with result:
1, 2, 3, 4, 5, 6
Thank you in advance

Program terminates when master terminal is closed

In my program, when I try to close the master file descriptor, my program suddenly crashes and I don't see any cores. Could someone help me with this? I am providing the code I used. This is code I copied from the internet (http://www.rkoucha.fr/tech_corner/pty_pdip.html); the only difference is that instead of fork I spawn a thread. I know I'm missing some small detail. Could someone please shed some light?
Thanks in advance!!!
int ScalingCommandReceiver::execute_ptcoi_commands_sequence(const char * bc_name, std::vector<cmd_output_pair>& cmd_seq, std::string& output_str)
{
int fdm, fds;
int rc;
output_str.clear();
fdm = posix_openpt(O_RDWR);
if (fdm < 0)
{
output_str.append("Error on posix_openpt() \n");
return -1;
}
rc = grantpt(fdm);
if (rc != 0)
{
output_str.append("Error on grantpt() \n");
close(fdm);
return -1;
}
rc = unlockpt(fdm);
if (rc != 0)
{
output_str.append("Error on unlockpt() \n");
close(fdm);
return -1;
}
// Open the slave side of the PTY
fds = open(ptsname(fdm), O_RDWR);
if (fds < 0)
{
output_str.append("Error on open(ptsname()) \n");
close(fdm);
return -1;
}
std::string cp_name ("bc3");
pt_session_struct *file_refs = NULL;
file_refs = (pt_session_struct*) ::malloc(sizeof(pt_session_struct));
if (file_refs == NULL) {
output_str.append("ERROR: Failed to create the struct info for the thread! \n");
close(fdm);
close(fds);
return -1;
}
file_refs->fds = fds;
file_refs->cp_name = (char*)bc_name;
//Spawn a thread
if (ACE_Thread::spawn(ptcoi_command_thread, file_refs, THR_DETACHED) < 0) {
output_str.append("ERROR: Failed to start ptcoi_command_thread thread! \n");
close(fdm);
close(fds);
::free(file_refs);
return -1;
}
int i = 0;
while (i <= cmd_seq_dim)
{
char buffer[4096] = {'\0'};
ssize_t bytes_read = 0;
int read_res = 0;
do
{
// get the output in buffer
if((read_res = read(fdm, (buffer + bytes_read), sizeof(buffer) - bytes_read - 1)) > 0)
{
// The number of bytes read is returned and the file position is advanced by this number.
// Let's advance also buffer position.
bytes_read += read_res;
}
}
while((read_res > 0) && !strchr(buffer, cpt_prompt) && (std::string(buffer).find(ptcoi_warning) == std::string::npos));
if (bytes_read > 0) // No error
{
// Send data on standard output or wherever you want
//Do some operations here
}
else
{
output_str.append("\nFailed to read from master PTY \n");
}
if(i < cmd_seq_dim)
{
// Send data on the master side of PTY
write(fdm, cmd_seq[i].first.c_str(), cmd_seq[i].first.length());
}
++i;
} // End while
if(/*have some internal condition*/)
{
close(fdm); //Here I observe the crash :-(
return 0; // OK
}
else
{
output_str.append ("\nCPT printouts not expected.\n");
close(fdm);
return -1; // Failure
}
close(fdm);
return 0; // OK
}
ACE_THR_FUNC_RETURN ScalingCommandReceiver::ptcoi_command_thread(void* ptrParam)
{
pt_session_struct* fd_list = (pt_session_struct*) ptrParam;
struct termios slave_orig_term_settings; // Saved terminal settings
struct termios new_term_settings; // Current terminal settings
int fds = fd_list->fds;
char* cp_name = fd_list->cp_name;
::free (fd_list);
// Save the defaults parameters of the slave side of the PTY
tcgetattr(fds, &slave_orig_term_settings);
// Set RAW mode on slave side of PTY
new_term_settings = slave_orig_term_settings;
cfmakeraw (&new_term_settings);
tcsetattr (fds, TCSANOW, &new_term_settings);
int stdinCopy, stdoutCopy, stdErr;
stdinCopy = dup (0);
stdoutCopy = dup (1);
stdErr = dup (2);
// The slave side of the PTY becomes the standard input and outputs of the child process
close(0); // Close standard input (current terminal)
close(1); // Close standard output (current terminal)
close(2); // Close standard error (current terminal)
dup(fds); // PTY becomes standard input (0)
dup(fds); // PTY becomes standard output (1)
dup(fds); // PTY becomes standard error (2)
// Now the original file descriptor is useless
close(fds);
// Make the current process a new session leader
//setsid();
// As the child is a session leader, set the controlling terminal to be the slave side of the PTY
// (Mandatory for programs like the shell to make them manage correctly their outputs)
ioctl(0, TIOCSCTTY, 1);
// Execution of the program
char PTCOI [64] = {0};
snprintf(PTCOI, sizeof(PTCOI), "/opt/ap/mas/bin/mas_cptaspmml PTCOI -cp %s -echo 7", cp_name);
system(PTCOI); //my command
close(0); // Close standard input (current terminal)
close(1); // Close standard output (current terminal)
close(2); // Close standard error (current terminal)
dup2 (stdinCopy, 0);
dup2 (stdoutCopy, 1);
dup2 (stdErr, 2);
close (stdinCopy);
close (stdoutCopy);
close (stdErr);
return 0;
}
ptcoi_command_thread contains the steps necessary to daemonize your process:
// The slave side of the PTY becomes the standard input and outputs of the child process
close(0); // Close standard input (current terminal)
close(1); // Close standard output (current terminal)
close(2); // Close standard error (current terminal)
. . .
Which means the fork and setsid were there to detach from the controlling terminal, so that your process can survive beyond your terminal session.
After you removed the fork your process remains associated with the controlling terminal and probably terminates when the terminal sends a SIGHUP on close.

server and multiple clients using pthreads and select()

Consider the next piece of code:
int get_ready_connection(int s) {
/* socket of connection */
int caller;
if ((caller = accept(s,NULL,NULL)) < SUCCESS)
{
server_log->write_to_log(sys_call_error(SERVER, "accept"));
return FAILURE;
}
return caller;
}
int establish_connection(sockaddr_in& connection_info)
{
// Create socket
if ((server_sock = socket(AF_INET, SOCK_STREAM, 0)) < SUCCESS)
{
server_log->write_to_log(sys_call_error(SERVER, "socket"));
return FAILURE;
}
// Bind `sock` with given addresses
if (bind(server_sock, (struct sockaddr *) &connection_info, \
sizeof(struct sockaddr_in)) < SUCCESS)
{
close(server_sock);
server_log->write_to_log(sys_call_error(SERVER, "bind"));
return FAILURE;
}
// Max # of queued connects
if (listen(server_sock, MAX_PENDING_CONNECTIONS) < SUCCESS)
{
server_log->write_to_log(sys_call_error(SERVER, "listen"));
return FAILURE;
}
// Create a set of file descriptors and empty it.
fd_set set;
bool is_inside;
int ret_val;
while(true)
{
FD_ZERO(&set);
FD_SET(STDIN_FILENO, &set);
FD_SET(server_sock, &set);
struct timeval tv = {2, 0};
ret_val = select(server_sock + 1, &set, NULL, NULL, &tv); // TODO ret_val
is_inside = FD_ISSET(STDIN_FILENO, &set);
if(is_inside)
{
// get user input
string user_input;
getline(cin, user_input);
if ((strcasecmp(user_input.c_str(), EXIT_TEXT) == 0))
{
return SUCCESS;
}
}
is_inside = FD_ISSET(server_sock, &set);
if(is_inside)
{
// get the first connection request
int current_connection = get_ready_connection(server_sock);
if (current_connection == FAILURE) {
free_allocated_memory();
exit_write_close(server_log, sys_call_error(SERVER, "accept"),
ERROR);
}
// if exit was not typed by the server's stdin, process the request
pthread_t thread;
// create thread
if (pthread_create(&thread, NULL, command_thread_func, &current_connection) != 0)
{
free_allocated_memory();
exit_write_close(server_log, sys_call_error(SERVER, "pthread_create"), ERROR);
}
}
}
}
All I'm trying to do is to "listen" to STDIN for the user to type 'EXIT' in the server's shell, and to wait for clients to pass commands from their shells (every time a command is received by the server from a user, the server parses it and creates a thread that handles execution of the command).
To do it simultaneously, I used select().
When I work with a single thread, everything's perfect. But when I connect another client I get a segfault. I suspect the problem is right here. Any suggestions?
Hard to know if this is your exact problem, but this is definitely a problem:
You can't call pthread_create and provide a pointer to a stack variable (&current_connection) as your thread function's argument. For one thing, it's subject to immediate destruction as soon as the parent exits that scope.
Secondly, it will be overwritten on the next call to get_ready_connection.

How can I tell if there is new data on a pipe?

I'm working on Windows, and I'm trying to learn pipes, and how they work.
One thing I haven't found is how I can tell whether there is new data on a pipe (from the child/receiver end of the pipe)?
The usual method is to have a thread which reads the data, and sends it to be processed:
void GetDataThread()
{
while(notDone)
{
BOOL result = ReadFile (pipe_handle, buffer, buffer_size, &bytes_read, NULL);
if (result) DoSomethingWithTheData(buffer, bytes_read);
else Fail();
}
}
The problem is that the ReadFile() function waits for data, and then it reads it. Is there a method of telling if there is new data, without actually waiting for new data, like this:
void GetDataThread()
{
while(notDone)
{
BOOL result = IsThereNewData (pipe_handle);
if (result) {
result = ReadFile (pipe_handle, buffer, buffer_size, &bytes_read, NULL);
if (result) DoSomethingWithTheData(buffer, bytes_read);
else Fail();
}
DoSomethingInterestingInsteadOfHangingTheThreadSinceWeHaveLimitedNumberOfThreads();
}
}
Use PeekNamedPipe():
DWORD total_available_bytes;
if (FALSE == PeekNamedPipe(pipe_handle, 0, 0, 0, &total_available_bytes, 0))
{
// Handle failure.
}
else if (total_available_bytes > 0)
{
// Read data from pipe ...
}
One more way is to use IPC synchronization primitives such as events (CreateEvent()). For interprocess communication with complex logic, you should consider them as well.

Linux: Executing child process with piped stdin/stdout

Using Linux and C++, I would like a function that does the following:
string f(string s)
{
string r = system("foo < s");
return r;
}
Obviously the above doesn't work, but you get the idea. I have a string s that I would like to pass as the standard input of a child process execution of application "foo", and then I would like to record its standard output to string r and then return it.
What combination of Linux syscalls or POSIX functions should I use?
I'm using Linux 3.0 and do not need the solution to work with older systems.
The code provided by eerpini does not work as written. Note, for example, that the pipe ends that are closed in the parent are used afterwards. Look at
close(wpipefd[1]);
and the subsequent write to that closed descriptor. This is just transposition, but it shows this code has never been used. Below is a version that I have tested. Unfortunately, I changed the code style, so this was not accepted as an edit of eerpini's code.
The only structural change is that I only redirect the I/O in the child (note the dup2 calls are only in the child path.) This is very important, because otherwise the parent's I/O gets messed up. Thanks to eerpini for the initial answer, which I used in developing this one.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#define PIPE_READ 0
#define PIPE_WRITE 1
int createChild(const char* szCommand, char* const aArguments[], char* const aEnvironment[], const char* szMessage) {
int aStdinPipe[2];
int aStdoutPipe[2];
int nChild;
char nChar;
int nResult;
if (pipe(aStdinPipe) < 0) {
perror("allocating pipe for child input redirect");
return -1;
}
if (pipe(aStdoutPipe) < 0) {
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
perror("allocating pipe for child output redirect");
return -1;
}
nChild = fork();
if (0 == nChild) {
// child continues here
// redirect stdin
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1) {
exit(errno);
}
// redirect stdout
if (dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1) {
exit(errno);
}
// redirect stderr
if (dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1) {
exit(errno);
}
// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
// run child process image
// replace this with any exec* function you find easier to use ("man exec")
nResult = execve(szCommand, aArguments, aEnvironment);
// if we get here at all, an error occurred, but we are in the child
// process, so just exit
exit(nResult);
} else if (nChild > 0) {
// parent continues here
// close unused file descriptors, these are for child only
close(aStdinPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
// Include error check here
if (NULL != szMessage) {
write(aStdinPipe[PIPE_WRITE], szMessage, strlen(szMessage));
}
// Just a char by char read here, you can change it accordingly
while (read(aStdoutPipe[PIPE_READ], &nChar, 1) == 1) {
write(STDOUT_FILENO, &nChar, 1);
}
// done with these in this example program, you would normally keep these
// open of course as long as you want to talk to the child
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
} else {
// failed to create child
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
}
return nChild;
}
Since you want bidirectional access to the process, you would have to do what popen does behind the scenes explicitly with pipes. I am not sure if any of this changes in C++, but here is a pure C example:
void piped(char *str){
int wpipefd[2];
int rpipefd[2];
int defout, defin;
defout = dup(stdout);
defin = dup (stdin);
if(pipe(wpipefd) < 0){
perror("Pipe");
exit(EXIT_FAILURE);
}
if(pipe(rpipefd) < 0){
perror("Pipe");
exit(EXIT_FAILURE);
}
if(dup2(wpipefd[0], 0) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(dup2(rpipefd[1], 1) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(fork() == 0){
close(defout);
close(defin);
close(wpipefd[0]);
close(wpipefd[1]);
close(rpipefd[0]);
close(rpipefd[1]);
//Call exec here. Use the exec* family of functions according to your need
}
else{
if(dup2(defin, 0) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
if(dup2(defout, 1) == -1){
perror("dup2");
exit(EXIT_FAILURE);
}
close(defout);
close(defin);
close(wpipefd[1]);
close(rpipefd[0]);
//Include error check here
write(wpipefd[1], str, strlen(str));
//Just a char by char read here, you can change it accordingly
while(read(rpipefd[0], &ch, 1) != -1){
write(stdout, &ch, 1);
}
}
}
Effectively you do this :
Create pipes and redirect the stdout and stdin to the ends of the two pipes (note that in linux, pipe() creates unidirectional pipes, so you need to use two pipes for your purpose).
Exec will now start a new process which has the ends of the pipes for stdin and stdout.
Close the unused descriptors, write the string to the pipe and then start reading whatever the process might dump to the other pipe.
dup() is used to create a duplicate entry in the file descriptor table, while dup2() changes what an existing descriptor points to.
Note: As mentioned by Ammo in his solution, what I provided above is more or less a template; it will not run if you just try to execute it, since there is clearly an exec* call (from the exec family of functions) missing, so the child will terminate almost immediately after the fork().
Ammo's code has some error handling bugs. The child process is returning after dup failure instead of exiting. Perhaps the child dups can be replaced with:
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1 ||
dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1 ||
dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1
)
{
exit(errno);
}
// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);