C++ to call Python script to handle emailing

On a server I have a C++ program which takes some input and writes some output to a file. After the file is generated, I want to send an email to a person with the corresponding link to the file.
I'd rather avoid dealing with SMTP from C++ itself, so I thought about having the C++ program use a system() call to execute a Python script, which would in turn handle the emailing process.
In C++:
system("python emailer.py foo#bar.com filetodownload.txt");
In Python:
import sys
email = sys.argv[1]
file = sys.argv[2]
# handle SMTP emailing...
I have a question about this simple approach. The C++ program is multithreaded, so there may be more than one thread wanting to call the python script to send an email. Is this a concern? Would one (again simple) solution be having a mutex variable in the C++ program which allows only one thread to call the python script at a time? Also, if there are better ways to go about accomplishing this task please let me know.

From what you've shown, I don't see any shared resource that would require multi-threaded synchronisation. Each system() call to Python will result in a separate process.
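For illustration, a minimal sketch of what that looks like with std::thread (the addresses and file names are placeholders taken from the question); each thread builds its own command string, so nothing is shared between them. If the arguments ever come from untrusted input, validate or escape them before handing them to the shell.

#include <cstdlib>
#include <string>
#include <thread>
#include <vector>

// Each system() call spawns its own shell and python process,
// so no mutex is needed around the call itself.
void send_email(const std::string& address, const std::string& file) {
    std::string cmd = "python emailer.py " + address + " " + file;
    std::system(cmd.c_str());
}

int main() {
    std::vector<std::thread> workers;
    workers.emplace_back(send_email, "foo@bar.com", "filetodownload.txt");
    workers.emplace_back(send_email, "baz@bar.com", "otherfile.txt");
    for (auto& t : workers) t.join();
}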

Related

C++ execute many commands in shell

I have a C++ program from which I want to execute multiple commands in a shell.
My current solution use the system() function and looks like this:
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_1");
... do_something_else ...
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_2");
... do_something_else ...
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_3");
...
It works, but SETUP_ENVIRONMENT takes a few seconds, which makes the program really slow. I have to run it every time, though, because system() starts a new shell for each call.
I want to be able to setup my shell once and then run all commands in it.
execute_in_shell("SETUP_ENVIRONMENT");
return_value = execute_in_shell("RUN_USEFUL_APP_1");
... do_something_else ...
return_value = execute_in_shell("RUN_USEFUL_APP_2");
... do_something_else ...
return_value = execute_in_shell("RUN_USEFUL_APP_3");
...
How do I do that?
I'm on Linux.
As an alternative to answer 1, you could also have your program generate a shell script that runs all your useful programs, and then execute that script once. That way a new shell is not started for each individual program.
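A minimal sketch of that idea (the setup command and program names are placeholders): the C++ program writes one script and pays the environment-setup cost a single time. Note that this only fits when the commands form one batch with no "do_something_else" in between, a limitation the next answer addresses.

#include <cstdlib>
#include <fstream>

int main() {
    // Placeholder commands; substitute your real environment setup and apps.
    {
        std::ofstream script("run_all.sh");
        script << "#!/bin/sh\n"
                  ". ./setup_environment.sh\n"   // SETUP_ENVIRONMENT, run once
                  "./useful_app_1\n"
                  "./useful_app_2\n"
                  "./useful_app_3\n";
    }                                            // close the file before running it
    return std::system("sh run_all.sh");
}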
You have three reasonable options for doing this, depending on your specific need.
If the various calls you make to external tools are part of a coherent routine, then you can (and probably should) follow @dmi's advice and write a short shell script that you can call from your C++ program.
If you instead need to start procedures here and there, you might be interested in running the shell as an inferior process and attaching your program to it, so that instead of talking to your terminal, the shell process talks to your C++ program.
This method is not very difficult but has a few gotchas (for instance, some programs like ssh, sudo or docker may expect to be attached to a tty). It is very well covered in most introductions to system programming (look for inter-process communication and subprocesses) for any Unix variant. Let me outline that procedure:
use the pipe system call to create pipes (stdin_r, stdin_w)
use the pipe system call to create pipes (stdout_r, stdout_w)
use the pipe system call to create pipes (stderr_r, stderr_w)
use the fork system call to duplicate your program
In the child, close stdin_w, stdout_r and stderr_r, duplicate stdin_r, stdout_w and stderr_w onto the standard descriptors with dup2, and use the exec system call to run the shell.
In the parent, close stdin_r, stdout_w and stderr_w; you can now write commands into stdin_w and read the command output from stdout_r and stderr_r.
(This is intentionally very sketchy; I included the outline only so that you can be sure you have found the right place in your favourite textbook.)
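To make the outline concrete, here is a hedged, minimal sketch (POSIX, C++11; error handling and the stderr pipe from the outline are omitted for brevity). It feeds a couple of placeholder commands to /bin/sh and reads the combined output back.

#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int in_pipe[2], out_pipe[2];           // in_pipe: parent -> shell, out_pipe: shell -> parent
    if (pipe(in_pipe) == -1 || pipe(out_pipe) == -1) return 1;

    pid_t pid = fork();
    if (pid == 0) {                        // child: become the shell
        dup2(in_pipe[0], STDIN_FILENO);    // read commands from the parent
        dup2(out_pipe[1], STDOUT_FILENO);  // send output back to the parent
        close(in_pipe[0]);  close(in_pipe[1]);
        close(out_pipe[0]); close(out_pipe[1]);
        execl("/bin/sh", "sh", (char*)nullptr);
        _exit(127);                        // only reached if exec fails
    }

    // parent: write commands, then read the output
    close(in_pipe[0]);
    close(out_pipe[1]);
    const char* cmds = "echo setting up environment; echo running useful_app_1\n";
    write(in_pipe[1], cmds, strlen(cmds));
    close(in_pipe[1]);                     // EOF tells the shell we are done

    char buf[256];
    ssize_t n;
    while ((n = read(out_pipe[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);

    close(out_pipe[0]);
    waitpid(pid, nullptr, 0);
    return 0;
}

In a real program you would keep in_pipe[1] open and keep writing commands as they come up, which is exactly what avoids paying the environment-setup cost more than once.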
There are third-party libraries implementing all that low-level stuff for you. You can use boost::process (not yet an official part of Boost at the time of writing), whose usage is illustrated with a full tutorial. There are plenty of alternatives, such as pstreams.
The third option would be to avoid the shell altogether and execute the commands directly. This is the approach followed by Rashell, an OCaml library that defines primitives for reliably composing sub-processes, which you can use for your own inspiration.
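For the third option, a minimal C++ sketch of the same idea: run one program directly with fork/exec and wait for it, with no shell involved. The "ls -l" command is just a placeholder.

#include <sys/wait.h>
#include <unistd.h>

// Run a single program directly, bypassing the shell entirely.
int run_program(char* const argv[]) {
    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[0], argv);   // search PATH for the program
        _exit(127);              // only reached if exec fails
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main() {
    char* const argv[] = { const_cast<char*>("ls"),
                           const_cast<char*>("-l"),
                           nullptr };
    return run_program(argv);
}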

Attach interactive console to embedded python script

I have a Python script that I run from a C++ GUI application.
I want to get the output of that script into a Python console and have the ability to manipulate it before calling another Python function from C++.
My question: is that possible by just redirecting stdin & stdout to files?
Is there a better way using pure Python?
Please note that I don't want to spawn the console from the C++ program but from outside the C++ program.
You should be able to adapt the approach in this answer to your needs. The example it links to uses UDP sockets for transferring commands to/from the interactive interpreter, but you could easily change that to pull data from stdin (or wherever) instead.
The key thing to take away from the example is the use of the builtin InteractiveConsole's push() method to determine whether the input is:
A well-formed Python snippet that may be evaluated as-is,
A syntactically invalid snippet, or
A snippet that may become valid once more input is provided.

Communication with a script from a C++ program

I have a c++ program (very complicated, and lengthy both in code and execution time).
Once in a while this program stops and calls a user-specified shell script.
Before calling the script, my program creates a .out file with the current data. I call the script via the system() command. The script then reads the .out file, creates its own script.out file and exits.
Then the system() function call ends, and my program reads and parses the script.out file.
Question: is there a better way to handle communication between my C++ program and an arbitrary shell script?
My intent is to have full communication between the two. The script could virtually "ask" the program "What data do you have right now?" and the program would reply following some strict convention. Then the script could say "Add this data..." or "Delete all your previous data", etc.
The reason I need this is that the shell script tells the program to modify its data, the exact data that was put in the original .out file. So after the modification is done, the actual data held by the program no longer corresponds to the data written in the .out file.
Thanks!
P.S.
I swear I've searched around, but everyone suggests an intermediate file.
There are certainly ways to do that without intermediate files. The most common approach is to use command line arguments for input and pipes for standard output; others also use pipes for input. The most straightforward alternative to system() is then popen.
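As a sketch of the popen route (POSIX; the script name and argument are placeholders), the C++ program passes the current data as an argument and reads the script's standard output directly, instead of going through script.out:

#include <cstdio>
#include <string>

// Run a command and capture everything it writes to standard output.
std::string run_and_capture(const char* command) {
    std::string output;
    FILE* p = popen(command, "r");
    if (!p) return output;
    char buf[256];
    while (fgets(buf, sizeof buf, p))
        output += buf;
    pclose(p);
    return output;
}

int main() {
    // Hypothetical user script; its printed output replaces the script.out file.
    std::string result = run_and_capture("./user_script.sh 'current data'");
    return result.empty() ? 1 : 0;
}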
On a unix-like system? Perhaps pipe (2) will work for you?
From the man page (Mac OS X 10.5 version):
SYNOPSIS
#include <unistd.h>
int pipe(int fildes[2]);
DESCRIPTION
The pipe() function creates a pipe (an object that allows unidirectional data flow) and allocates a pair of file descriptors. The first descriptor connects to the read end of the pipe; the second connects to the write end.
You will, of course, have to follow the creation of the pipes with a fork and exec pair. Probably this has already been answered in detail, and now you know what to search on...
It's been a while since I did this, but:
In the main process, before forking the sub-process you call pipe twice. Now you have two pipes and control both ends of both of them.
You fork.
The main process will read from one pipe and write from the other. It doesn't matter which is which, but you need to be clear about this.
The child process will call one of the exec family of functions to replace its image with that of the shell you want to run. But first it will use dup2 to replace its standard input and output with the ends of the two pipes (again, this is where you need to be clear about which pipe is which).
At this point you have two processes: the main process can send things into one pipe and they will be received on the standard input of the script, and anything the script writes to its standard output will be sent up the other pipe to the controlling process. So they take turns, just like interacting with the shell.
You can use pipes or (maybe more conveniently) sockets; for example, frontends to gdb, or expect, do that. It would require changes to your shell scripts and switching from system() to the more low-level fork() and exec().
It's rather complicated, so please be more specific about your environment and what you need to clarify.
You are asking a question about inter-process communication (IPC).
There are a lot of ways to do that. A simple search on the Internet will return most of the answers.
If I am not wrong, Google Chrome uses a technique called named pipes.
Anyway, I think the most portable way is probably a file. But if you know which operating system you are working on, you can definitely use most of the IPC techniques.
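As a hedged example of the named-pipe variant on a POSIX system (the path is a made-up rendezvous name that both sides must agree on), the C++ side can read whatever the script writes into the FIFO:

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char* path = "/tmp/myprog_fifo";   // hypothetical, agreed-upon name
    mkfifo(path, 0600);                      // create the FIFO; harmless if it already exists

    int fd = open(path, O_RDONLY);           // blocks until the script opens it for writing
    if (fd < 0) return 1;

    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);           // echo whatever the script sent
    close(fd);
    return 0;
}

The script side then simply writes to /tmp/myprog_fifo as if it were a regular file.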

Run a C++ Program from Django Framework

I need to run a C++ program from the Django framework. That is, I get inputs from the UI in views.py. Once I have these inputs, I need to process them using my C++ program and use the results. Is that possible?
Compile that C++ program to an executable and call it with the subprocess module from Python.
You can use SWIG to create a C++ module that can be imported in Python.
An alternative is boost::python (but personally, I prefer SWIG).
One way of doing this would be to use os.popen. Assuming your C++ executable is in the system wide path and is named mycpp, you would do something like:
results = os.popen('mycpp %s' % user_input).read()
However, this could get computationally expensive quite fast if you're invoking this command often, because os.popen basically forks off a subprocess. Also, as noted in the docs, it has been deprecated since Python 2.6, so proceed with caution.
Assuming you are on *nix, compile your C++ program and store it somewhere on your system, say /home/rishabh/myexe.
Now, from your Django app, call the executable using the commands module:
import commands
status, res = commands.getstatusoutput("/home/rishabh/myexe")
# status contains the process status (0 for success, non-zero for unsuccessful termination) and res contains the output of the process

How do I do os.getpid() in C++?

Newb here. I am trying to make a C++ program that will read from a named pipe created by Python. My problem is, the named pipe created by Python uses os.getpid() as part of the pipe name. When I try calling the pipe from C++, I use getpid(), and I am not getting the same value from C++. Is there an equivalent in C++ for os.getpid()?
thanks!
edit:
Sorry, I am actually using os.getpid() to get the session ID via ProcessIDtoSessionID(). I then use the session ID as part of the pipe name.
You don't get the same process IDs because your Python program and your C++ program run in different processes and thus have different process IDs. So, generally, use different logic to name your FIFO files.
You won't get the same value if you're running as a separate process, as each process has its own process ID. Find some other way to identify the pipe.
The standard library does not give you anything other than files. You will need to use some other OS specific API.
You cannot easily retrieve the Python interpreter's PID from your C++ program.
Either assign the named pipe a constant name, or, if you really need multiple pipes for the same Python program, create a temporary file to which the Python programs write their PIDs (use file locking!); then you can read the PIDs from the C++ program.
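A minimal sketch of the temporary-file variant on the C++ side (the file path and pipe-name pattern are assumptions; they only have to match whatever the Python side actually writes):

#include <fstream>
#include <string>

int main() {
    // The Python script is assumed to have written its os.getpid() here.
    std::ifstream pid_file("/tmp/emailer.pid");
    long pid = 0;
    if (!(pid_file >> pid))
        return 1;                            // the Python side has not written its PID yet

    // Rebuild the same pipe name the Python side used.
    std::string pipe_name = "/tmp/pipe_" + std::to_string(pid);
    // ... open pipe_name and read from it as usual ...
    return 0;
}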