I'm trying to run a batch script remotely. The batch file contains a single line:
python C:\FtpServer.py
When I start it manually it works fine, but when I use the remote script the Python process won't start, or terminates immediately.
My Python code is:
import wmi
import win32con

# connect to the remote machine over WMI
connection = wmi.WMI("10.60.2.244", user="XY", password="XY")
# ask for a normal, visible window so the process is not started hidden
startup = connection.Win32_ProcessStartup.new(ShowWindow=win32con.SW_SHOWNORMAL)
process_id, return_value = connection.Win32_Process.Create(
    CommandLine="C:\\startFtpServer.bat",
    ProcessStartupInformation=startup,
)
I get a PID back and the return value is 0.
When I run tasklist in cmd on the remote machine, the process is not listed. Starting python.exe directly instead of the batch file that wraps the script works fine, though.
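One thing worth checking (an assumption on my part, not something established in the question): Win32_Process.Create behaves like a bare CreateProcess call, and .bat files generally have to be launched through the command interpreter. Passing the batch file to cmd.exe /c may behave differently:

process_id, return_value = connection.Win32_Process.Create(
    CommandLine="cmd.exe /c C:\\startFtpServer.bat",
    ProcessStartupInformation=startup,
)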
Looking for recommendations on writing a Python script that edits the crontab so a program runs automatically at boot. I already have it working from the command line, but I'm trying to automate the process to simplify re-programming multiple Pis.
Currently I'm using os.system() to open the file, but I now need to append "@reboot sh /home/pi/epaperHat/RaspberryPi/machine/launcher.sh >/home/pi/logs/cronlog 2>&1" to the end of the crontab.
import os
from time import sleep

# make the launcher executable
os.system("chmod 755 launcher.sh")
sleep(0.5)
# note: each os.system() call runs in its own shell, so this cd has
# no effect on the commands that follow
os.system("cd")
os.system("mkdir logs")
sleep(0.1)
# opens an interactive editor, which is exactly what I want to avoid
os.system("sudo crontab -e")
sleep(0.1)
Ideally, I run this file, then the Raspberry Pi reboots and runs the launcher.sh shell script.
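Since crontab -e insists on an interactive editor, one non-interactive approach is to read the current crontab with crontab -l, append the line, and write the result back through crontab -. A minimal sketch (assuming Python 3, and assuming the entry belongs in root's crontab, as the sudo in the question suggests):

import subprocess

LINE = "@reboot sh /home/pi/epaperHat/RaspberryPi/machine/launcher.sh >/home/pi/logs/cronlog 2>&1"

# read the existing crontab; `crontab -l` exits non-zero if none exists yet,
# in which case we start from an empty one
result = subprocess.run(["sudo", "crontab", "-l"], capture_output=True, text=True)
current = result.stdout if result.returncode == 0 else ""

if LINE not in current:
    # write the old entries plus the new @reboot line back via `crontab -`
    subprocess.run(["sudo", "crontab", "-"], input=current + LINE + "\n", text=True)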
I'm working on a tool that lets users run a jupyter-notebook with pyspark on an AWS server and forward the port to their localhost to connect to the notebook.
I've been using subprocess.Popen to ssh into the remote server and kick off the pyspark shell/notebook, but I'm unable to stop it printing everything to the terminal. I want to perform an action per line to retrieve the port number.
For example, running this (following the most popular answer on Read streaming input from subprocess.communicate()):
command = "jupyter-notebook"
con = subprocess.Popen(['ssh', node, command], stdout=subprocess.PIPE, bufsize=1)
with con.stdout:
    for line in iter(con.stdout.readline, b''):
        print(line),
con.wait()
the context manager is effectively bypassed: the output never passes through the loop body, and the subprocess prints straight to the terminal, so this appears immediately:
[I 16:13:20.783 NotebookApp] [nb_conda_kernels] enabled, 0 kernels found
[I 16:13:21.031 NotebookApp] JupyterLab extension loaded from /home/*****/miniconda3/envs/aws/lib/python3.7/site-packages/jupyterlab
[I 16:13:21.031 NotebookApp] JupyterLab application directory is /data/data0/home/*****/miniconda3/envs/aws/share/jupyter/lab
[I 16:13:21.035 NotebookApp] [nb_conda] enabled
...
...
...
I can get the context manager to work when I call a simple script like the one below instead of "jupyter-notebook" (i.e. command = "bash random_script.sh"):
# random_script.sh
for i in $(seq 1 100)
do
    echo "some output: $i"
    sleep 2
done
This behaves as expected, and I can actually perform an action per line inside the with statement. Is there something fundamentally different about the jupyter version that prevents it from behaving the same way?
The issue turned out to be that the console output produced by jupyter goes to stderr rather than stdout. I'm not sure why. Regardless, this change completely fixed the issue:
con = subprocess.Popen(['ssh', node, command],
                       stdout=subprocess.PIPE,
                       stderr=subprocess.STDOUT,  # <-- redirect stderr to stdout
                       bufsize=1)
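With stderr merged into stdout, the per-line action from the question can pull the port out of jupyter's startup banner. A sketch, reusing node and command from the question (the regex, and the assumption that jupyter prints an http://host:port/ URL on startup, are mine, not from the original answer):

import re
import subprocess

con = subprocess.Popen(['ssh', node, command],
                       stdout=subprocess.PIPE,
                       stderr=subprocess.STDOUT,
                       bufsize=1)
with con.stdout:
    for line in iter(con.stdout.readline, b''):
        # look for the URL in lines like "... http://localhost:8888/?token=..."
        match = re.search(rb'https?://[^\s:/]+:(\d+)/', line)
        if match:
            port = int(match.group(1))
            print('notebook is listening on port', port)
            # hand the port off to the ssh port-forwarding step here
con.wait()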
I currently have an instance of RStudio running on a private AWS server (which I built using this AMI: http://www.louisaslett.com/RStudio_AMI/).
I am currently trying to schedule a script to run, using the taskscheduleR package.
The script I am using to schedule is:
myscript <- system.file("extdata", "EG_pricedropAPI.R", package = "taskscheduleR")
cat(readLines(myscript), sep = "\n")
## Run script once at a specific timepoint (within 62 seconds)
runon <- format(Sys.time() + 62, "%H:%M")
taskscheduler_create(taskname = "testScript", rscript = myscript,
schedule = "ONCE", starttime = runon)
Where 'EG_pricedropAPI.R' is a script I have written, located in 'extdata', which runs successfully when I run it without taskscheduleR.
However, every time I run this script, or a similar taskscheduler_create() call, I get the following error:
sh: 1: schtasks: not found
Error in system(cmd, intern = TRUE) : error in running command
Does anyone know what the fix for this is?
The taskscheduleR package only works with the Windows Task Scheduler, and the EC2 machine is running Linux, so you won't be able to use it to schedule tasks on your RStudio Server.
Fortunately, it is pretty straightforward to set up an R script as a scheduled task on your EC2 Linux instance using cron. These links should get you started:
https://tgmstat.wordpress.com/2013/09/11/schedule-rscript-with-cron/
Schedule R script using cron
How to run an R script in crontab
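For example, a crontab entry along these lines (the path and the daily 06:00 schedule are placeholders, not from the question) would run the script with Rscript:

0 6 * * * Rscript /home/ubuntu/EG_pricedropAPI.R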
I have never run into this before, because I could always run the dev server, open a new terminal tab, and curl from there. I can't do that now: I am running the Django development server from a Docker container, so if I open a new tab I end up in the local shell, not in the container.
How can I leave the development server running and still be able to curl or run other commands?
When I run the development server I'm left with this message:
Django version 1.10.3, using settings 'test.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
and am therefore unable to type any commands.
You can use & to run the server as a background job in the current shell:
$ python manage.py runserver &
[1] <pid>
$
You can use the fg command to bring the runserver process back to the foreground, where you can stop it as usual with Ctrl+C.
To turn a foreground process into a background job, pause it with Ctrl+Z and then run the bg command. You can list the background jobs running in the current shell with the jobs command.
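A hypothetical session tying these commands together:

$ python manage.py runserver
^Z
[1]+  Stopped                 python manage.py runserver
$ bg
[1]+ python manage.py runserver &
$ jobs
[1]+  Running                 python manage.py runserver &
$ fg
python manage.py runserver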
The difference from screen is that this runs the server in the current shell: if you exit the shell, the server stops as well, whereas screen uses a separate process that continues after you exit the current shell.
In a development environment you can also do the following.
Let the server run in one terminal window.
Open a new terminal window/tab and run
docker exec -it <Container ID/Name> /bin/bash
This gives you interactive access to your container, i.e. you can execute commands inside the container rather than in your local shell.
Type exit to leave the container shell and return to your local shell.
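For instance, once inside the container you can hit the dev server directly (assuming it is listening on the default address and port shown in the question):

curl http://127.0.0.1:8000/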
I have an HTML file with one button. When the button is clicked, a JavaScript function run() is called:
function run() {
    window.open("http://localhost/cgi-bin/run.py", "_self");
}
run.py simply tries to run a helloworld.exe program that prints the string "helloworld" to a terminal, but nothing happens, and the browser keeps "waiting for localhost" indefinitely.
#!python
import sys, string, os, cgitb
cgitb.enable()
# runs the program, but only its exit code comes back; stdout is not captured
os.system("helloworld.exe")
I have tried helloworld.exe alone and it works; I have run run.py in the terminal and it worked; and I have also tested http://localhost/cgi-bin/helloworld.py in the browser, which worked fine (helloworld.py is another script, just to check that my Apache is configured correctly).
I am using WAMP.
What I'm ultimately building is a larger program that lets a client connect to a server and interact with a program on the server side. The program is already written in C++ and won't be translated into PHP or JavaScript.
EDIT: I have been trying the functions subprocess.Popen, subprocess.call, and os.system. I have also tested the code with .exe files I created in the apache/cgi-bin folder, and with executables such as WordPad in c:\windows. It always succeeds when the Python script runs from the terminal, and it never works from the browser. Could the server I am using be the cause? I use the Apache that ships with WAMP, and have added the line "AddHandler cgi-script .exe" to httpd.conf.
I am fairly sure it doesn't work locally the way you think either: os.system returns the command's exit code, not its output.
You will need subprocess.Popen with a pipe in order to read the output:
import subprocess
# capture the program's stdout through a pipe instead of discarding it
p = subprocess.Popen("whatever.exe", stdout=subprocess.PIPE)
print p.stdout.read()
p.wait()
Also, the output of your CGI script is not valid HTTP, and depending on your server that could be causing problems (some servers sanitize the output of scripts, others expect them to be written properly).
The code example below works for me (on GNU/Linux using weborf as the server, but it should be the same elsewhere). Try setting the content type and sending the final \r\n\r\n sequence yourself; servers are not expected to send it for you, because the CGI script might want to add more HTTP headers.
#!/usr/bin/env python
import cgitb
import subprocess
import sys

cgitb.enable()

# emit a minimal CGI/HTTP header before any other output
sys.stdout.write("Content-type: text/plain\r\n\r\n")

p = subprocess.Popen("/usr/games/fortune", stdout=subprocess.PIPE)
data = p.stdout.read()
p.wait()
print data
In general, my experience has been that if something works locally but not when you install the same code on a different server, you are looking at a permissions problem. Check what permissions visitors to your site have, and make sure the user your web server runs as has the rights to execute both helloworld.exe and run.py.