How to launch an external program in a python script - python-2.7

I am creating a Python script that does a bunch of tasks, and one of those tasks is to launch Google Chrome. What is the ideal way of accomplishing that in my script?

You can launch anything from Python as a command-line call. This is done with the subprocess module.
For example:
import subprocess
# use a raw string (or doubled backslashes) for Windows paths
sp = subprocess.Popen(r"PATH_TO_CHROME\chrome")
# if you want your script to wait for Chrome to "complete"
ret_val = sp.wait()
# or
sp_out, sp_error = sp.communicate()

Maybe you can try with subprocess.call:
import subprocess
subprocess.call([r'C:\Users\username\AppData\Local\Google\Chrome\Application\chrome.exe'])
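If you also want Chrome to open a specific page, you can pass the URL as an extra list element. A minimal sketch, assuming the same (machine-specific) chrome.exe path:
import subprocess
# adjust the path to wherever chrome.exe lives on your machine
chrome = r'C:\Users\username\AppData\Local\Google\Chrome\Application\chrome.exe'
# Chrome accepts a URL as a plain command-line argument
subprocess.call([chrome, 'https://www.google.com'])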

Maybe with the commands module (Python 2 only; it was removed in Python 3 in favour of subprocess):
import commands
a = commands.getoutput("./PATH_TO_CHROME/chrome")

Related

How to handle GCP preemptive shutdown for jupyter notebooks

In preemptible VM instances on Google Cloud Platform, a forced shutdown can be called at any time. They allow you to run a shutdown script to avoid file loss. But how do I use that script to cause a specific interrupt in my Jupyter notebook?
I have come up with a solution.
from os import getpid, kill
from time import sleep
import signal
import ipykernel
import psutil

def get_active_kernels():
    active_kernels = []
    pids = psutil.pids()
    my_pid = getpid()
    for pid in pids:
        if pid == my_pid:
            continue
        try:
            p = psutil.Process(pid)
            cmd = p.cmdline()
            for arg in cmd:
                if arg.count('ipykernel'):
                    active_kernels.append(pid)
        except psutil.AccessDenied:
            continue
    return active_kernels

if __name__ == '__main__':
    kernels = get_active_kernels()
    for kernel in kernels:
        kill(kernel, signal.SIGINT)
One can use this code as a shutdown script. It sends a keyboard interrupt to all existing Jupyter kernels. So a simple try-except block that catches KeyboardInterrupt can be used inside the Jupyter notebook.
try:
    ...  # regular code
except KeyboardInterrupt:
    ...  # code to save the progress
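For example, a long-running cell might save its progress when that interrupt arrives; a minimal sketch (the loop and checkpoint path are illustrative):
import json

progress = {"step": 0}
try:
    for step in range(1000000):
        progress["step"] = step
        # ...the regular work for this step goes here...
except KeyboardInterrupt:
    # the shutdown script sent SIGINT; save whatever we have before the VM goes away
    with open("checkpoint.json", "w") as f:
        json.dump(progress, f)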
Jupyter notebooks run on a VM instance on the backend, and you can access that instance over SSH like any other Compute Engine instance. This means that any script that works on a Compute Engine instance should also work for a Jupyter notebook.
From your description I understand you are referring to this shutdown script. Such a script saves a checkpoint while your instance is being shut down; it doesn't trigger the shutdown itself.
There are many ways to shut down an instance, either from inside the instance (a script) or from outside (Cloud Shell, the console UI, ...).
Could you explain your specific purpose so I can help you further?

Python script to automatically change crontab startup on raspberry pi

Looking for some recommendations for coding a Python script that will edit the crontab file so that a file runs automatically at boot. I already have it working through the command line, but I'm trying to automate the process to simplify re-programming multiple Pis.
Currently I'm using os.system() to open the file, but now need to add "@reboot sh /home/pi/epaperHat/RaspberryPi/machine/launcher.sh >/home/pi/logs/cronlog 2>&1" at the end.
import os
from time import sleep
os.system("chmod 755 launcher.sh")
sleep(0.5)
os.system("cd")
os.system("mkdir logs")
sleep(0.1)
os.system("sudo crontab -e")
sleep(0.1)
Ideally, I run this file, then the Raspberry Pi reboots and runs the launcher.sh shell script.
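One way to append that @reboot line without opening an interactive editor is to read the current crontab with crontab -l and install an updated one via crontab -; a minimal sketch, assuming it runs as the pi user whose crontab you want to change:
import subprocess

CRON_LINE = "@reboot sh /home/pi/epaperHat/RaspberryPi/machine/launcher.sh >/home/pi/logs/cronlog 2>&1"

# read the current crontab ("crontab -l" exits non-zero if there is none yet)
proc = subprocess.Popen(["crontab", "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
current, _ = proc.communicate()
current = current.decode() if proc.returncode == 0 else ""

# append the @reboot entry only if it is not already there
if CRON_LINE not in current:
    new_tab = current.rstrip("\n") + "\n" + CRON_LINE + "\n"
    # "crontab -" installs the crontab read from stdin
    installer = subprocess.Popen(["crontab", "-"], stdin=subprocess.PIPE)
    installer.communicate(new_tab.encode())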

Running batch script remote

I'm trying to run a batch script remotely. The batch file runs:
python C:\FtpServer.py
When I start it manually it works fine, but when I use the remote script the Python process won't start or terminates immediately.
My Python code is:
import wmi
import win32con
connection = wmi.WMI("10.60.2.244", user="XY", password="XY")
startup = connection.Win32_ProcessStartup.new(ShowWindow=win32con.SW_SHOWNORMAL)
process_id, return_value = connection.Win32_Process.Create(CommandLine="C:\\startFtpServer.bat", ProcessStartupInformation=startup)
I get a PID and the return value is 0.
When I run tasklist in cmd on the remote machine, the process is not listed. Just starting python.exe with that script, instead of the batch file, works fine though.
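One common cause is that Win32_Process.Create does not run a .bat file through the command interpreter on its own. A sketch of one thing to try (an assumption, not a confirmed fix): launch the batch file via cmd.exe /c:
import wmi
import win32con

connection = wmi.WMI("10.60.2.244", user="XY", password="XY")
startup = connection.Win32_ProcessStartup.new(ShowWindow=win32con.SW_SHOWNORMAL)

# run the batch file through cmd.exe so the .bat is actually interpreted
process_id, return_value = connection.Win32_Process.Create(
    CommandLine='cmd.exe /c "C:\\startFtpServer.bat"',
    ProcessStartupInformation=startup,
)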

manage.py: cannot connect to X server

I have used PyQt4.QtWebKit to crawl web pages in my Django application. In the production environment that module doesn't work for crawling; it throws the error "manage.py: cannot connect to X server".
My Qt class:
import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebPage

class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._loadFinished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()

    def _loadFinished(self, result):
        self.frame = self.mainFrame()
        self.app.quit()
Calling from the Django shell:
r = Render(url)
When I call this Render class through the Django shell (python manage.py shell), it throws the error above.
Could you please help me with this?
The reason is Xvfb.
I need to run my Python program in a bash shell with xvfb (X virtual frame buffer), like this:
ubuntu@localhost$ xvfb-run python webpage_scrapper.py http://www.google.ca/search?q=navaspot
It gives the result.
Now my requirement is to execute this shell command from Python, wait for it to finish, and collect the result, which I then have to process.
Could you please suggest how to execute this command from Python effectively?
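One way to do that is with subprocess, which lets you wait for the command and capture its output; a minimal sketch, assuming xvfb-run is on the PATH and reusing the script name and URL from the question:
import subprocess

cmd = [
    "xvfb-run",
    "python", "webpage_scrapper.py",
    "http://www.google.ca/search?q=navaspot",
]

# communicate() waits for the command to finish and captures its output
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()

if proc.returncode == 0:
    result = out.decode()
    # ...process the scraped result here...
else:
    print("xvfb-run failed:", err.decode())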
It seems the environment variable for the X display is not set, and that's the reason you get this error. It can occur because you're running the script from an environment that isn't bound to an X display (for example, over SSH to the server).
Try adding the DISPLAY variable:
DISPLAY=:0.0 python manage.py script
It is also possible to set the DISPLAY environment variable from Python. Set it before initializing PyQt4 (os.environ is preferable to os.putenv, which does not update os.environ):
import os
os.environ['DISPLAY'] = ':0.0'
It may also not be possible to run PyQt4.QtWebKit at all if your production environment doesn't have an X server running.
Generally, on headless machines the DISPLAY variable is absent or misconfigured. To work on such machines, you can use the following approach (using Ubuntu 14.04 LTS machines as an example):
First install X server:
sudo apt-get install xserver-xorg
Now start the X server (say at :0):
sudo /usr/bin/X :0&
You can use process managers like supervisor to handle the above process.
Now just set the DISPLAY environment variable and make sure it is available to any processes you run that depend on it:
DISPLAY=:0 python manage.py
The way you provide the environment variable to your application is up to you.

Django custom command and cron

I want my custom Django command to be executed every minute. However, python /path/to/project/myapp/manage.py mycommand doesn't seem to work, while python manage.py mycommand run from the project directory works perfectly.
How can I achieve this? I use /etc/crontab with:
* * * * * root python /path/to/project/myapp/manage.py mycommand
I think the problem is that cron is going to run your scripts in a "bare" environment, so your DJANGO_SETTINGS_MODULE is likely undefined. You may want to wrap this up in a shell script that first defines DJANGO_SETTINGS_MODULE.
Something like this:
#!/bin/bash
export DJANGO_SETTINGS_MODULE=myproject.settings
./manage.py mycommand
Make it executable (chmod +x) and then set up cron to run the script instead.
Edit
I also wanted to say that you can "modularize" this concept a little bit and make it such that your script accepts the manage commands as arguments.
#!/bin/bash
export DJANGO_SETTINGS_MODULE=myproject.settings
./manage.py "$@"
Now, your cron job can simply pass "mycommand" or any other manage.py command you want to run from a cron job.
cd /path/to/project/myapp && python manage.py mycommand
By chaining your commands like this, python will not be executed unless cd correctly changes the directory.
If you want to make your Django life a lot simpler, use django-command-extensions in your project:
http://code.google.com/p/django-command-extensions/
You'll find a command named "runscript", so you can simply add it to your crontab line:
* * * * * root python /path/to/project/myapp/manage.py runscript mycommand
And such a script will execute with the Django context environment.
This is what I have done recently in one of my projects (I maintain venvs for every project I work on, so I am assuming you have venvs):
* * * * * /path/to/venvs/bin/python /path/to/app/manage.py command_name
This worked perfectly for me.
How to schedule Django custom commands on an AWS EC2 instance?
Step 1
First, you need to write a .cron file.
Step 2
Write your cron entry in the .cron file.
MyScript.cron
* * * * * /home/ubuntu/kuzo1/venv/bin/python3 /home/ubuntu/Myproject/manage.py transfer_funds >> /home/ubuntu/Myproject/cron.log 2>&1
Where * * * * * means that the script will run every minute; you can change it according to your needs (https://crontab.guru/#*_*_*_*_*). /home/ubuntu/kuzo1/venv/bin/python3 is the Python virtual environment path, /home/ubuntu/Myproject/manage.py transfer_funds is the Django custom command, and /home/ubuntu/Myproject/cron.log 2>&1 redirects output to a log file where you can check the running cron's log.
Step 3
Install the cron file:
$ crontab MyScript.cron
Step 4
Some useful commands:
1. $ crontab -l (list the currently installed cron jobs)
2. $ crontab -r (remove the installed cron jobs)
The runscript extension isn't well documented. Unlike a Django management command, the script can go anywhere in your project, but it must live in a scripts folder, and the .py file must define a run() function; a minimal sketch follows.
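For example, a minimal runscript script might look like this (the app and script names are illustrative):
# myapp/scripts/mycommand.py  (hypothetical app; any "scripts" folder works)
def run():
    # django-extensions' runscript imports this module and calls run():
    #   python manage.py runscript mycommand
    print("running my scheduled task")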
If it's a standalone script, you need to do this:
from django.conf import settings
from django.core.management import setup_environ
setup_environ(settings)
# your code here which uses Django code, like a Django model
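Note that setup_environ was removed in newer Django versions. On modern Django the equivalent standalone-script setup is roughly the following (a sketch, assuming your settings module is called myproject.settings):
import os
import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # assumed settings module
django.setup()

# your code here which uses Django code, like a Django model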
If it's a Django command, it's easier: https://coderwall.com/p/k5p6ag
In management/commands/exporter.py:
from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    args = ''
    help = 'Export data to remote server'

    def handle(self, *args, **options):
        # do something here
        pass
And then, in the command line:
$ python manage.py exporter
Now, it's easy to add a new cron task to a Linux system, using crontab:
$ crontab -e
or $ sudo crontab -e if you need root privileges.
In the crontab file, to run this command every 15 minutes for example, add something like this:
# m h dom mon dow command
*/15 * * * * python /var/www/myapp/manage.py exporter