I want to embed Python 3.x in our C++ application to allow scripting of maintenance and other tasks. So far it does everything we need, including file manipulation.
The problem is that, to meet some specifications (like PCI), we aren't allowed to run arbitrary shell commands, such as with subprocess.call or popen.
Is there a way to prevent these and similar calls from working in embedded Python?
One option is to remove all of the modules that allow running arbitrary shell commands (subprocess.py*, os.py*, ...) and include only the modules that the end users are allowed to have immediate access to.
Unless your application is really locked down, I don't think you can prevent someone from loading their own Python module from an arbitrary directory (with import), so I don't think you can prevent execution of arbitrary code as long as you have Python embedded.
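That said, if you do strip the modules, one way to also block re-imports from inside the interpreter is an import hook. The following is only a sketch, not a hardened sandbox; it assumes the C++ host runs it at startup (for example via PyRun_SimpleString), the list of blocked names is purely illustrative, and blocking os outright would break much of the standard library.

```python
# Minimal sketch: refuse imports of modules that can spawn shell commands.
# This is not a real sandbox; a determined user can still work around it.
import importlib.abc
import sys

BLOCKED = {"subprocess", "ctypes"}  # illustrative; blocking "os" breaks much of the stdlib

class BlockShellModules(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path=None, target=None):
        # Reject the blocked module itself and any of its submodules.
        if fullname.split(".")[0] in BLOCKED:
            raise ImportError(f"import of {fullname!r} is not permitted")
        return None  # let the normal finders handle everything else

# Drop any copies that were imported before the hook was installed.
for name in list(sys.modules):
    if name.split(".")[0] in BLOCKED:
        del sys.modules[name]

sys.meta_path.insert(0, BlockShellModules())
```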
I want to make a script that can run in the background but work like a command line tool at the same time. I have no experience with making daemons, so I have no idea whether a daemon would be a better fit for what I am describing.
I would like a loop that uses some values, and I want to be able to change those values through the Linux terminal.
E.g. I want it to run continuously, and I want to be able to adjust some variables from the terminal if necessary, without restarting it.
Sorry for the pretty bad question
You'll need two programs -- one that runs in the background, and a second one that communicates with it to tell it to update its values.
There are various options for how to do this, depending on your requirements. One possibility is to have the background program accept TCP connections and take commands over them. Another would be to have a configuration file that it re-reads each time it does something. More exotic options useful in some circumstances are shared memory blocks and named pipes.
The general keyword here is "inter-process communication (IPC)."
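As a concrete illustration of the configuration-file option mentioned above, here is a minimal Python sketch; the file name, the keys, and the key=value format are all invented for the example.

```python
# A background loop that re-reads a simple key=value config file on each pass.
import time

CONFIG_PATH = "loop.conf"   # hypothetical file you edit from the terminal
settings = {"interval": 5.0, "threshold": 10.0}

def reload_settings():
    try:
        with open(CONFIG_PATH) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                key, sep, value = line.partition("=")
                if sep and key.strip() in settings:
                    try:
                        settings[key.strip()] = float(value)
                    except ValueError:
                        pass  # ignore malformed values
    except FileNotFoundError:
        pass  # keep the current values until the file appears

while True:
    reload_settings()       # pick up any edits before each pass of the loop
    print("working with", settings)
    time.sleep(settings["interval"])
```

While it runs, you adjust it from the terminal with any editor, or with something like `echo "interval=1.0" >> loop.conf`; the next pass of the loop picks the change up.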
I have a Python module that defines some paths when imported. I want those paths to be readable by scripts in C++ and shell.
One option would be to make a text file with those paths, write a reader in Python, C++ and shell, and then start all my scripts with the respective reader, but I dislike that solution. I would be constantly opening the text file, and it could give me issues if more than one script tried to open it at the same time.
Another solution would be to have the Python file define the paths as environment variables instead. The usual way to handle environment variables in Python, os.environ, only lasts until I close Python. Could anyone tell me how to do this?
(Defining these variables in my .bashrc would be quite the headache, because the module will also be used by other people, and the paths have to be changed from time to time.)
Any other way to solve this problem would also be appreciated.
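For reference, here is a small Python illustration of the os.environ limitation mentioned above; the variable name and path are made up.

```python
# Changes to os.environ are visible to this process and to the processes it
# launches, but they never propagate back to the parent shell.
import os
import subprocess

os.environ["PROJECT_DATA_DIR"] = "/srv/data"   # hypothetical path variable

# A child process inherits the value...
subprocess.run(["sh", "-c", 'echo "child sees: $PROJECT_DATA_DIR"'])

# ...but once this Python process exits, the variable is gone; the shell that
# started Python never sees PROJECT_DATA_DIR at all.
```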
Currently I am calling an R script from C++ in the following way:
system("PATH C:\\Program Files\\R\\R-3.0.1\\bin\\x64");
system("RScript CommandTest.R");
Where CommandTest.R is my script.
This works, but is slow, since I need a particular package and this method makes the package load on every call.
Is there a way to load the package once and then keep that instance of Rscript open so that I can continue to make calls to it without having to reload the package every time?
PS: I know that the 'better' method is probably to go with Rcpp/Rinside, and I will go down that route if necessary, but I thought it'd be worth asking if there's an easy way to do what I need without it.
It seems like the Rserve package is what you seek. Basically it keeps open a "server" which can be asked to evaluate expressions.
It has options for Java, C++ and communication between one R session and another.
In the documentation, you might want to look at run.Rserve and self.ctrlEval.
I'm not aware of a solution to keeping R permanently open, but you can speed up startup by calling R with the --vanilla option. (See Appendix B.1 of Intro to R for more options.)
You could also try accessing the functions using :: to avoid loading the package completely. (Try profiling to see whether that really saves you much time; is the package load actually the slow part of your analysis?)
I am new to Python and wxPython. I am making an automated tool using Python, with wxPython for the user interface, and I also use shell scripts, which can be called from Python. Now I am facing a problem with a SpinCtrl value. Whenever the SpinCtrl value changes, I have to send that value to a txt.exe file which is written in BASH. (If we run txt.exe on the command line, it asks for a number and accepts the value when we press Enter.) I am not able to work out how to send the value from the SpinCtrl to txt.exe when I press the "OK" button in the GUI. Please share your thoughts and knowledge.
Thank you
To call an executable in Python, you need to look at Python's subprocess module: http://docs.python.org/2/library/subprocess.html
If your exe doesn't accept arguments, then it won't work. If you created the exe yourself using something like PyInstaller or py2exe, then you need to make it so your app can accept arguments. The simplest way is using sys.argv, but there are also the optparse and argparse libraries in Python.
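For the specific case in the question, where txt.exe reads the number from a prompt rather than from arguments, a rough wxPython sketch might look like the following; all widget names and the ./txt.exe path are assumptions.

```python
# Read the SpinCtrl value when OK is pressed and feed it to txt.exe on stdin.
import subprocess
import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Send value to txt.exe")
        panel = wx.Panel(self)
        self.spin = wx.SpinCtrl(panel, min=0, max=100, initial=1)
        ok_button = wx.Button(panel, label="OK")
        ok_button.Bind(wx.EVT_BUTTON, self.on_ok)
        sizer = wx.BoxSizer(wx.HORIZONTAL)
        sizer.Add(self.spin, 0, wx.ALL, 5)
        sizer.Add(ok_button, 0, wx.ALL, 5)
        panel.SetSizer(sizer)

    def on_ok(self, event):
        value = self.spin.GetValue()
        # Pipe the number to the executable's stdin, followed by a newline,
        # just as if it had been typed at the prompt.
        subprocess.run(["./txt.exe"], input=f"{value}\n", text=True)

if __name__ == "__main__":
    app = wx.App()
    MainFrame().Show()
    app.MainLoop()
```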
I do not get a good feeling about your concept. It is hard to say, but I suspect you would benefit from talking with someone experienced about what you are trying to achieve and how it could best be done.
However, to get you started down your current path, I think you should take a look at wxExecute ( http://docs.wxwidgets.org/2.8/wx_processfunctions.html#wxexecute ). This will allow you to run your txt.exe utility from inside your GUI.
wxExecute is a wxWidgets utility. If you are working in Python, there will be a more direct way to do this using a Python utility; someone else will have to advise on that. I suggest you edit your question for clarity so a Python expert can help you.
I have an app that needs to import a .sql file. I can import the file from the command line with mysql -u my_user -pMyPassword db_name < import.sql, but I'd like to move this into my app. I have some things that need to be done before the import and others after; right now I have to break it into 3 steps. The closest thing to a solution I've found was to get the connection (Doctrine\DBAL\Connection) and use exec(), but it throws syntax errors even though my source file is correct. I'm guessing it's trying to escape things and double-escaping the SQL. The file was generated with mysqldump.
With Symfony using Doctrine, you can do it with:
php app/dev_console doctrine:database:import import.sql
You can use the DBAL "import" command to get the SQL executed. This is in any case less performant than using the mysql command directly, since it loads the entire file into memory.
Otherwise, I'd suggest you write your own Symfony Console Command.
In my case it was:
php bin/console doctrine:database:import my_sql_file.sql
Status September 2021:
I would be careful with the code in Doctrine\Bundle\DoctrineBundle\Command\Proxy\ImportDoctrineCommand: it triggers a deprecation warning in its execute function, and it is not good programming practice to ignore deprecation warnings. Calling dbal:run-sql would not be efficient enough because of the overhead.
Alternatively, you can also call e.g. mysql at the operating-system level. This causes problems in multi-user environments, for example because the database password has to be specified on the command line. In addition, the server must have this function enabled; exec() is switched off for security reasons in many environments, especially with low-cost providers. Furthermore, this approach would not be database-agnostic, and that abstraction is one of Doctrine's most outstanding features.
Therefore, I recommend reading in the SQL data yourself and executing it line by line (see the entityManager->createNativeQuery() function). You then have better options for reacting to possible errors.