Mkdir over SSH with Python does not work - python-2.7

I'm trying to create a new directory via SSH with a Python script. When I run my commands interactively at the Python command line, it just works. But when I run the same commands from a script, it does not create the new 'test' folder (I even copied and pasted the commands from the script into the Python prompt to verify they are right, and there they work). Any ideas why it does not work from a script?
The code I used:
child = pexpect.spawn('ssh 192.168.56.101 -oStrictHostKeyChecking=no')
child.expect('password:')
child.sendline('MyPwd')
child.sendline('mkdir /home/myUser/Desktop/test')

It seems to work when I just add another line, for example
child.sendline('\n')
so the entire script is:
child = pexpect.spawn('ssh 192.168.56.101 -oStrictHostKeyChecking=no')
child.expect('password:')
child.sendline('MyPwd')
child.sendline('mkdir /home/myUser/Desktop/test')
child.sendline('\n')

What I usually do to solve this issue is syncing with the host machine: after I send something to the machine, I expect an answer, which usually translates to the machine's prompt. So, in your case, I would go for something like this:
child = pexpect.spawn('ssh 192.168.56.101 -oStrictHostKeyChecking=no')
child.expect('password:')
child.sendline('MyPwd')
child.expect('YourPromptHere')
child.sendline('mkdir /home/myUser/Desktop/test')
child.expect('YourPromptHere')
You can just replace YourPromptHere with the prompt of the machine, if you are running the script against a single target, or with a regular expression (e.g. "(\$ )|(# )|(> )").
tl;dr: To summarize what I said, you need to wait until the previous action has finished before sending a new one.
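For completeness, here is a minimal sketch of the whole exchange, assuming a placeholder prompt regex and password, and waiting both for the remote command to finish and for the session to close:
import pexpect

prompt = r'[$#>] '  # placeholder; adjust to the target machine's prompt

child = pexpect.spawn('ssh 192.168.56.101 -oStrictHostKeyChecking=no')
child.expect('password:')
child.sendline('MyPwd')
child.expect(prompt)
child.sendline('mkdir /home/myUser/Desktop/test')
child.expect(prompt)        # wait until mkdir has actually run
child.sendline('exit')
child.expect(pexpect.EOF)   # wait for the session to close cleanly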

Related

Receiving back string of length 0 from os.popen('cmd').read()

I am working with a command-line tool called 'ideviceinfo' (see https://github.com/libimobiledevice) to quickly get back the serial number, IMEI and battery health information from the iOS devices I work with daily. It executes much more quickly than Apple's own 'cfgutil' tools.
Up to now I have been able to develop a more complicated script than the one shown below in PyCharm (my main IDE) to assign specific values to individual variables and then use something like pyclip and pyautogui to automatically paste these into the fields of the database app we work with. I have also been able to run the simplified version of the script both in the Mac OS X Terminal and in the Python shell without any hiccups.
I am looking to use AppleScript to help make running the script as easy as possible.
When I try to use AppleScript's "do shell script 'python script.py'", I just get back a string of length zero when I call 'ideviceinfo'. The exact same thing happens when I try to build an Automator app with a 'Run Shell Script' component for "python script.py".
I have tried my best to isolate the problem. When other, more basic commands such as 'date' are called within the script, they return valid strings.
#!/usr/bin/python
import os

# 'ideviceinfo' comes back empty when the script is run via AppleScript/Automator
ideviceinfoOutput = os.popen('ideviceinfo').read()
print ideviceinfoOutput
print len(ideviceinfoOutput)

# a basic command such as 'date' works fine either way
boringExample = os.popen('date').read()
print boringExample
print len(boringExample)
I am running Mac OS X 10.11 and am on Python 2.7
Thanks.
I think I've managed to fix it on my own. I just needed to be far more explicit about where the 'ideviceinfo' binary (I hope that's the correct term) is stored on the computer.
I changed one line of code to
ideviceinfoOutput = os.popen('/usr/local/bin/ideviceinfo').read()
and all seems to be OK again.
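For what it's worth, AppleScript's do shell script (and Automator's Run Shell Script) run with a minimal PATH that normally does not include /usr/local/bin, which is why the bare command name resolves in Terminal but not from the script. A small sketch of the same call using subprocess with the absolute path from the fix above (adjust the path if your install differs):
import subprocess

# call the binary by absolute path so resolution does not depend on PATH
try:
    output = subprocess.check_output(['/usr/local/bin/ideviceinfo'])
except (OSError, subprocess.CalledProcessError):
    output = ''
print output
print len(output)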

Permissions issue calling bash script from C++ code that Apache is running

The goal of this code is to create a stack trace whenever a SIGTERM/SIGINT/SIGSEGV/etc. is caught. In order not to rely on memory management inside the C++ code in the case of a SIGSEGV, I have decided to write a bash script that will receive the PID and the memory addresses in the trace array.
The signal events are being caught.
Below is where I generate the call to the bash script
trace_size = backtrace(trace, 16);
trace[1] = (void *)ctx->rsi;
messages = backtrace_symbols(trace, trace_size);
char syscom[356] = {0};
sprintf(syscom, "bash_crash_supp.sh %d", getpid());
for (i = 1; i < (trace_size - 1) && i < 10; ++i)
{
    sprintf(syscom, "%s %p", syscom, trace[i]);
}
Below is where my issue arises. The command in syscom is generated correctly: I can stop the code before the following popen, run the command in a terminal, and it functions correctly.
However, running the script directly from the C++ code does not seem to work.
setuid(0);
FILE *bashCommand = popen(syscom, "r");
char buf[256] = {0};
while (fgets(buf, sizeof(buf), bashCommand) != 0) {
    LogMessage(LOG_WARNING, "%s\n", buf);
}
pclose(bashCommand);
exit(sig);
The purpose of the bash script is to get the offset from /proc/pid/maps, and then use that to run addr2line to get the file name/line number.
strResult=$(sudo cat /proc/"$1"/maps | grep "target_file" | grep -m 1 '[0-9a-fA-F]')
offset=$( cut -d '-' -f 1 <<< "$strResult");
However, offset is getting 0 when I run it from the C++ code, whereas when I run the exact same command (the one stored in syscom in the C++ code) in a terminal, I get the expected output.
I have been trying to fix this for a while. Permissions are most likely the issue, but I've tried to work around them in every way I know of or have seen via Google. The user trying to run the script (the one currently running the C++ code) is apache.
The fix does not need to worry about the security of the box. If something as simple as "chmod 777 /proc -r" worked, that would have been the solution (sadly the OS doesn't let me do such things to /proc).
Things I've already tried:
Adding backticks around the command stored in syscom in the C++ code
chown the script to apache
chmod 4755 on the bash_crash_supp.sh script, allowing it to always run as root
I have added apache to sudoers (visudo), allowing it to run sudo without a password
I have added a sub file to sudoers.d (just in case) that does the same as above
I have looked into objdump; however, it does not give me either the offset or the file/line number for an address (from what I can see)
I have setuid(0) in the C++ code to set the current user to root
Command generated in C++
bash_crash_supp.sh 25817 0x7f4bfe600ec8 0x7f4bf28f7400 0x7f4bf28f83c6 0x7f4bf2904f02 0x7f4bfdf0fbb0 0x7f4bfdf1346e 0x7f4bfdf1eb30 0x7f4bfdf1b9a8 0x7f4bfdf176b8
Params in bash:
25817 0x7f4bfe600ec8 0x7f4bf28f7400 0x7f4bf28f83c6 0x7f4bf2904f02 0x7f4bfdf0fbb0 0x7f4bfdf1346e 0x7f4bfdf1eb30 0x7f4bfdf1b9a8 0x7f4bfdf176b8
Can anyone think of any other ways to solve this?
Long story short, almost all Unix-based systems ignore setuid on any interpreted script (anything with a shebang #!) as a security precaution.
You may use setuid on actual executables, but not the shell scripts themselves. If you're willing to take a massive security risk, you can make a wrapper executable to run the shell script and give the executable setuid.
For more information, see this question on the Unix StackExchange: https://unix.stackexchange.com/a/2910

zmq ventilator/worker/sink paradigm not working w/ subprocess

I am trying to replicate the ventilator/workers/sink paradigm described in the ZMQ guide. I have the same Python ventilator, the same C++ worker, and the same Python sink as described in the ZMQ examples. I want to launch the ventilator, workers, and sink from one main Python script, so I created class wrappers around the ventilator and sink, and both of those classes subclass Python's multiprocessing.Process. Since the C++ worker is a binary, I launch it with Python's subprocess.Popen call.
The order of starting all of this up is as follows:
h = subprocess.Popen('test') # test is the name of the binary
time.sleep(1)
s = sinkObj.start()
time.sleep(1)
v = ventObj.start()
What I am finding is that no data is getting through the system when I start up the components like this. However, if I start the C++ binary in its own shell, and only start the sinkObj and ventObj from the main python script, it works fine.
I apologize in advance if this is more of a Python question than a ZMQ question, but I haven't run into issues like this with Python's subprocess. I have also tried using os.system() instead of subprocess, but I get the same issue. I put all the code on this website: https://github.com/kkarrancsu/zmqtest if anybody is curious to test it out. There is a readme on that git repo which tells you what the files are.
Any ideas on why this could be happening?
------------------------- UPDATE --------------------
I found that if I create a shell script which simply launches the C++ binary, and call that shell script with os.system('run_the_shell_script'), it works! So this means that there is something wrong with the way I am using subprocess.Popen(...), but I can't seem to pinpoint what the issue is. I tried with the shell=True flag, but it still hangs with that...
It's the name of the worker binary file that causes the problem.
There are two solutions:
Change the name of the binary file from test to test_new and do the same in your All.py file, and then it will work as you desire.
Replace subprocess.Popen('test', shell=True) with subprocess.Popen('./test', shell=True).
test is a standard Linux command. If you type the following in your shell:
$ echo $PATH
you may see that . is in the last position. That means the shell only searches the current directory . after it has failed to find the executable in the other directories listed in $PATH.
When you execute subprocess.Popen('test', shell=True), the shell finds the system test command before it ever tries the . directory, so it never executes your worker binary.
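Given that explanation, a minimal sketch of the fix in the launcher script (sinkObj and ventObj are the wrapper objects from the question; only the Popen line changes):
import time
import subprocess

# use an explicit ./ path (or rename the binary) so the shell resolves the
# worker binary instead of the system `test` command
h = subprocess.Popen('./test', shell=True)
time.sleep(1)
s = sinkObj.start()
time.sleep(1)
v = ventObj.start()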
As far as I can see, the ventilator and sink bind() to ports 6557 and 6558, and the C++ app connect()s to these ports. In this case, if you start the C++ app first, it will try to connect() to the endpoints, but since nothing is bound there yet, the messages will be dropped silently.
In ZeroMQ the basic principle is "first bind, then connect". So you should not connect() before you bind() something on the socket. Think of bind() as the 'server' and connect() as the client: you cannot connect a client to a non-existent server. Also, in ZeroMQ every socket can be the 'server', but you should have only one bind()-ing socket per URL, and you can have multiple connect()s.
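To make the direction concrete, here is a small pyzmq sketch of the bind/connect split assumed above (ports 6557/6558 as in the question; names are illustrative):
import zmq

ctx = zmq.Context()

# ventilator side: bind the PUSH socket that hands out work
sender = ctx.socket(zmq.PUSH)
sender.bind("tcp://*:6557")

# sink side: bind the PULL socket that collects results
receiver = ctx.socket(zmq.PULL)
receiver.bind("tcp://*:6558")

# a worker (the C++ binary here) would then connect to both ends:
#   work_in  = ctx.socket(zmq.PULL); work_in.connect("tcp://localhost:6557")
#   work_out = ctx.socket(zmq.PUSH); work_out.connect("tcp://localhost:6558")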

Python not calling an external program part 3

I have been having problems trying to run an external program from a Python function that is fired by a trigger in a PostgreSQL 9.2 database. The trigger works: it writes to a file. I had tried just running the external program, but the permissions would not allow it to run. I was able to create a folder (using os.system("mkdir")). The owner of the folder is NETWORK SERVICE.
I need to run a program called sdktest. When I try to run it, nothing happens, so I think that means the Python program does not have enough permissions (with an owner of NETWORK SERVICE) to run it.
I have been having my program copy the files it needs into that directory so they have the correct permissions, and that has worked to some degree, but the program I need to run is the last one, and it is not running because it does not have enough permissions.
My Python program runs a C++ program called PG_QB_Connector which calls sdktest.
Is there any way I can change the owner of the process to be a "normal" owner? Is there a better way to do this? Basically I just need this C++ program to have enough permissions to run correctly.
BTW, when I run the C++ program by hand, the line that runs the sdktest program runs correctly; however, when I run it from postgres/Python it does not do anything...
I have Windows 7 and Python 3.2. The other 2 questions that I asked about this are located here and here.
The Python program:
CREATE or replace FUNCTION scalesmyone (thename text)
RETURNS int
AS $$
a=5
f = open('C:\\JUNK\\frompython.txt','w')
f.write(thename)
f.close()
import os
os.system('"mkdir C:\\TEMPWITHOWNER"')
os.system('"mkdir C:\\TEMPWITHOWNER\\addcustomer"')
os.system('"copy C:\\JUNK\\junk.txt C:\\TEMPWITHOWNER\\addcustomer"')
os.system('"copy C:\\BATfiles\\junk6.txt C:\\TEMPWITHOWNER\\addcustomer"')
os.system('"copy C:\\BATfiles\\run_addcust.bat C:\\TEMPWITHOWNER\\addcustomer"')
os.system('"copy C:\\Workfiles\\PG_QB_Connector.exe C:\\TEMPWITHOWNER\\addcustomer"')
os.system('"copy C:\\Workfiles\\sdktest.exe C:\\TEMPWITHOWNER\\addcustomer"')
import subprocess
return_code = subprocess.call(["C:\\TEMPWITHOWNER\\addcustomer\\PG_QB_Connector.exe", '"hello"'])
$$ LANGUAGE plpython3u;
The C++ program that is called from the Python program and calls sdktest.exe is below:
command = "copy C:\\Workfiles\\AddCustomerFROMWEB.xml C:\\TEMPWITHOWNER\\addcustomer\\AddCustomerFROMWEB.xml";
system(command.c_str());
//everything except for the qb file is in my local folder
command = "C:\\TEMPWITHOWNER\\addcustomer\\sdktest.exe \"C:\\Users\\Public\\Documents\\Intuit\\QuickBooks\\Company Files\\Shain Software.qbw\" C:\\TEMPWITHOWNER\\addcustomer\\AddCustomerFROMWEB.xml C:\\TEMPWITHOWNER\\addcustomer\\outputfromsdktestofaddcust.xml";
system(command.c_str());
It sounds like you want to invoke a command-line program from within a PostgreSQL trigger or function.
A usually-better alternative is to have the trigger send a NOTIFY and have a process with a PostgreSQL connection LISTENing for notifications. When a notification comes in, the process can start your program. This is the approach I would recommend; it's a lot cleaner and it means your program doesn't have to run under PostgreSQL's user ID. See NOTIFY and LISTEN.
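For illustration, a minimal sketch of the listening side, assuming psycopg2 and a hypothetical channel name run_sdktest (the trigger would simply execute NOTIFY run_sdktest with an optional payload):
import select
import subprocess
import psycopg2
import psycopg2.extensions

# stand-alone listener process, running as a normal user outside PostgreSQL
conn = psycopg2.connect("dbname=mydb user=myuser")  # hypothetical connection string
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN run_sdktest;")

while True:
    # wait up to 60 seconds for a notification to arrive
    if select.select([conn], [], [], 60) == ([], [], []):
        continue
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop(0)
        # launch the external program with this user's permissions
        subprocess.call(['C:\\TEMPWITHOWNER\\addcustomer\\PG_QB_Connector.exe',
                         notify.payload])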
If you really need to run commands from inside Pg:
You can use PL/Pythonu with os.system or subprocess.check_call; PL/Perlu with system(); etc. All of these can run commands from inside Pg if you need to. You can't invoke programs directly from PostgreSQL; you need to use one of the 'untrusted' (meaning fully privileged, not sandboxed) procedural languages to invoke external executables. PL/TCL can probably do it too.
Update:
Your Python code as shown above has several problems:
Using os.system in Python to copy files is just wrong. Use the shutil library: http://docs.python.org/3/library/shutil.html to copy files, and the simple os.mkdir command to create directories.
The double-layered quoting looks wrong; didn't you mean to quote only each argument not the whole command? You should be using subprocess.call instead of os.system anyway.
Your final subprocess.call invocation appears OK, but fails to check the error code so you'll never know if it went wrong; you should use subprocess.check_call instead.
The C++ code also appears to fail to check for errors from the system() invocations so you'll never know if the command it runs fails.
Like the Python code, copying files in C++ by using the copy shell command is generally wrong. Microsoft Windows provides the CopyFile function for this; equivalents or alternatives exist on other platforms and you can use portable-but-less-efficient stream copying too.
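Putting those points together, a hedged sketch of what the PL/Python function body might look like (same paths and names as in the question; the shutil/os.makedirs/subprocess usage is illustrative, not tested against the poster's setup):
CREATE or replace FUNCTION scalesmyone (thename text)
RETURNS int
AS $$
import os
import shutil
import subprocess

with open('C:\\JUNK\\frompython.txt', 'w') as f:
    f.write(thename)

# create the target directory instead of shelling out to mkdir
os.makedirs('C:\\TEMPWITHOWNER\\addcustomer', exist_ok=True)

# copy the needed files with shutil instead of the `copy` shell command
for src in ('C:\\JUNK\\junk.txt',
            'C:\\BATfiles\\junk6.txt',
            'C:\\BATfiles\\run_addcust.bat',
            'C:\\Workfiles\\PG_QB_Connector.exe',
            'C:\\Workfiles\\sdktest.exe'):
    shutil.copy(src, 'C:\\TEMPWITHOWNER\\addcustomer')

# check_call raises an exception if the program exits with a nonzero status
subprocess.check_call(['C:\\TEMPWITHOWNER\\addcustomer\\PG_QB_Connector.exe',
                       'hello'])
return 0
$$ LANGUAGE plpython3u;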

How to start a Shell Script with QProcess?

How can I start a Shell Script using QProcess?
The shell script has eight different commands in it, some with arguments, others without.
I tried to start the shell script with the following (on Ubuntu 11.10):
QProcess *Prozess = new QProcess();
Prozess->setWorkingDirectory(MainDirectory);
Prozess->start("/bin/sh", QStringList() << "Shell.sh");
But this doesn't work; nothing happens at all. How can I make it work?
The code is fine. The problem is at run time.
Either your program can't run /bin/sh for some reason (test whether you can run gedit instead), or the MainDirectory variable holds the wrong directory path (debug it), or Shell.sh does not exist in that directory (capitalization mistakes? what about "./Shell.sh"?), or you don't have enough privileges to run or modify the target directory/files (are they owned by you?).
The process you have started is running in the background. If you want to see any output from the running script, you have to connect to the readyReadStandardOutput() and/or readyReadStandardError() signals and read from the process explicitly. For example:
void onReadyRead() {
    QByteArray processOutput = Prozess->readAllStandardOutput();
}
To see why the process failed to start, check its error state:
QProcess::ProcessError error = myProcess->error();
return error;
QProcess().execute("/bin/sh " + MainDirectory + "/Shell.sh");
will do the job.