Returning data to the original process that invoked a subprocess - python-2.7

Someone told me to post this as a new question. This is a follow-up to
Instantiating a new WX Python GUI from spawn thread
I implemented the following code in a script that gets called from a spawned thread (Thread2):
# Function that gets invoked by Thread #2
def scriptFunction():
    # Code to instantiate GUI2; GUI2 contains wx.TextCtrl fields and a 'Done' button
    p = subprocess.Popen("python secondGui.py", bufsize=2048, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # Wait for a response
    p.wait()
    # Read response
    response = p.stdout.read()
    # Process entered data
    processData()
In the new process running GUI2, I want the 'Done' button event handler to return 4 data sets to Thread2 and then destroy itself (GUI2):
def onDone(self, event):
    # This is the part I need help with: trying to return data back to the main process that instantiated this GUI (GUI2)
    process = subprocess.Popen(['python', 'MainGui.py'], shell=False, stdout=subprocess.PIPE)
    print process.communicate('input1', 'input2', 'input3', 'input4')
    # kill GUI
    self.Close()
Currently, this implementation spawns another Main GUI in a new process. What I want to do is return data back to the original process. Thanks.

Do the two scripts have to be separate? I mean, you can have multiple frames running on one main loop and transfer information between the two using pubsub: http://www.blog.pythonlibrary.org/2010/06/27/wxpython-and-pubsub-a-simple-tutorial/
Theoretically, what you're doing should work too. Other methods I've heard of involve using Python's socket server library to create a really simple server that the two programs can post to and read data from. Or a database, or watching a directory for file updates.
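For reference, a minimal sketch of that single-process pubsub approach, assuming the newer wx.lib.pubsub "pub" API (older wxPython versions use the Publisher class instead); the frame classes and the "data.entered" topic name are invented for illustration:
import wx
from wx.lib.pubsub import pub

class EntryFrame(wx.Frame):
    """Stand-in for GUI2: collects four values and publishes them."""
    def __init__(self):
        wx.Frame.__init__(self, None, title="GUI2")
        self.fields = [wx.TextCtrl(self) for _ in range(4)]
        done = wx.Button(self, label="Done")
        done.Bind(wx.EVT_BUTTON, self.onDone)
        sizer = wx.BoxSizer(wx.VERTICAL)
        for f in self.fields:
            sizer.Add(f, 0, wx.EXPAND)
        sizer.Add(done)
        self.SetSizer(sizer)

    def onDone(self, event):
        values = [f.GetValue() for f in self.fields]
        pub.sendMessage("data.entered", data=values)  # hand the data back
        self.Close()

class MainFrame(wx.Frame):
    """Stand-in for the main GUI: listens for the data."""
    def __init__(self):
        wx.Frame.__init__(self, None, title="Main GUI")
        pub.subscribe(self.onData, "data.entered")

    def onData(self, data):
        print "Received:", data  # process the 4 values here

if __name__ == "__main__":
    app = wx.App(False)
    main = MainFrame()
    main.Show()
    EntryFrame().Show()
    app.MainLoop()
Both frames live on one MainLoop, so no subprocess or stdout parsing is needed.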

The function that gets invoked by Thread #2:
def scriptFunction():
    # Code to instantiate GUI2; GUI2 contains wx.TextCtrl fields and a 'Done' button
    p = subprocess.Popen("python secondGui.py", bufsize=2048, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # Wait for a response
    p.wait()
    # Read the response and split the returned string, which contains 4 words separated by commas
    responseArray = p.stdout.read().split(",")
    # Process entered data
    processData(responseArray)
The 'Done' button event handler that gets invoked when the 'Done' button is clicked on GUI2:
def onDone(self, event):
    # Package the 4 word inputs into a string to return to the main process (Thread2)
    sys.stdout.write("%s,%s,%s,%s" % (dataInput1, dataInput2, dataInput3, dataInput4))
    # kill GUI2
    self.Close()
Thanks for your help, Mike!

Related

Not able to execute a task in the background using APScheduler

I used BlockingScheduler before, but I am facing a problem with BackgroundScheduler.
I need to run a scheduled task after returning a value, but the scheduled task is never executed.
from datetime import datetime, timedelta
from apscheduler.schedulers.background import BackgroundScheduler

def my_job(text):
    print(text)

def job1():
    now = datetime.now()
    sched = BackgroundScheduler()
    sched.add_job(my_job, 'date', run_date=now + timedelta(seconds=20), args=['text'])
    sched.start()

def fun1():
    try:
        return "hello"
    finally:
        job1()

print fun1()
I get only "hello" as output and then the code exits. The expected output is "hello" followed by "text", which should be printed once after 20 seconds. Please let me know what I messed up!
You may find this FAQ entry enlightening.
To summarize, a Python script will exit once it reaches the end, unless non-daemonic threads are still active. The scheduler thread is daemonic by default.
Furthermore, it is bad practice to create a new scheduler inside a function without saving the instance in a variable that can be used to schedule further jobs or to shut the scheduler down. As your code works now, it keeps creating new schedulers without shutting down the previous ones.
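A minimal sketch of that advice, assuming APScheduler 3.x: keep one module-level scheduler and keep the (non-daemonic) main thread alive long enough for the delayed job to fire. The 25-second sleep is only for demonstration.
import time
from datetime import datetime, timedelta
from apscheduler.schedulers.background import BackgroundScheduler

sched = BackgroundScheduler()   # one shared scheduler instance
sched.start()

def my_job(text):
    print(text)

def fun1():
    try:
        return "hello"
    finally:
        sched.add_job(my_job, 'date',
                      run_date=datetime.now() + timedelta(seconds=20),
                      args=['text'])

print fun1()
time.sleep(25)      # block the main thread so the daemonic scheduler thread can run the job
sched.shutdown()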

How can I access/split/assign the (content) of 'body' from a rabbitmq message from within the pika callback function in python 2.7

I am new to python and I am having a hard time getting my head around the following concept...
I have created some python code which imports Pika to connect and consume messages from a rabbitmq queue.
The basic code is shown below:
receive.py
#!/usr/bin/env python
import pika

# RABBITMQ CONNECTION VARIABLES
MqHostName = 'centosserver'
MqUserName = 'guest'
MqPassWord = 'guest'
QueueName = 'Q1'

# PIKA CODE TO CONNECT TO RABBITMQ
credentials = pika.PlainCredentials(MqUserName, MqPassWord)
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host=MqHostName, credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue=QueueName)

# CALLBACK ROUTINE TO RECEIVE MESSAGES FROM RABBITMQ
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

# DEFINE CALLBACK QUEUE
channel.basic_consume(callback,
                      queue=QueueName,
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
The above works, in as much as the MQ messages are printed as they are received. However...
I would like to consume the 'body' of the MQ messages in my Python application. I have managed to do this by assigning the message body to a str variable, using split to break the message into its component values on a delimiter contained in each message (;), and assigning each one to a variable in my Python code, but I only seem to be able to do this within the callback function of the code above...
If I then try to access or work with these variables from another Python script, by importing from the receive.py script, I just get raw messages (the body) from the pika connection, and I can't understand why this is...
The code I have developed so far is shown below:
import pika

# RABBITMQ CONNECTION VARIABLES
MqHostName = 'centosserver'
MqUserName = 'guest'
MqPassWord = 'guest'
QueueName = 'Q1'

# PIKA CODE TO CONNECT TO RABBITMQ
credentials = pika.PlainCredentials(MqUserName, MqPassWord)
connection = pika.BlockingConnection(pika.ConnectionParameters(host=MqHostName, credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue=QueueName)

def callback(ch, method, properties, body):
    print(body)  # this just prints what has been received from the queue with no formatting
    # create a string variable to hold the whole rabbit message body
    string_body = str(body)
    # split the string created above based on the message separator (';')
    message_vars = string_body.split(';')
    # assign names to the variables separated out of string_body in the step above
    mv1, mv2, mv3, mv4, mv5, mv6, mv7 = \
        message_vars[0], message_vars[1], message_vars[2], message_vars[3], \
        message_vars[4], message_vars[5], message_vars[6]

# set the queue
channel.basic_consume(callback,
                      queue=QueueName,
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
This allows me to print or perform operations on the mv1, mv2, etc. variables if I put the code inside the 'def callback(ch, method, properties, body):' block, but I want the receive script to keep running, splitting the messages and assigning their content to global variables that I can access from another script via an import statement.
Please can you point me in the right direction? Basically, I would like a process.py script that imports just the values of mv1, mv2, etc., which would be updated each time a new MQ message is received:
process.py
from receive import mv1, mv2, mv3, mv4, mv5, mv6, mv7
new_var = (mv1 * mv2)  # this is an example of what I would like to be able to do
I know I don't yet understand enough about Python programming, so there is likely an obvious answer to this, but any pointers will be much appreciated.
Thank you!
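(A purely illustrative sketch, not from the original thread: instead of importing module-level variables, one common pattern is to have the consumer accept a handler function from the calling script and pass the parsed fields to it on every message. All names below are assumptions.)
# receive.py -- illustrative sketch only
import pika

def start_consuming(handler, host='centosserver', queue='Q1',
                    user='guest', password='guest'):
    """Consume messages and pass the split fields to `handler` for each message."""
    credentials = pika.PlainCredentials(user, password)
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host=host, credentials=credentials))
    channel = connection.channel()
    channel.queue_declare(queue=queue)

    def callback(ch, method, properties, body):
        fields = str(body).split(';')   # mv1 .. mv7
        handler(fields)                 # hand the parsed values to the caller

    channel.basic_consume(callback, queue=queue, no_ack=True)
    channel.start_consuming()

# process.py -- the "other script" supplies the handler instead of importing variables
# from receive import start_consuming
#
# def on_message(fields):
#     mv1, mv2 = fields[0], fields[1]
#     print mv1, mv2                    # work with the values here
#
# start_consuming(on_message)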

How can I unblock a Gtk button event

When I click on a button, the entire application stays locked, waiting for the method to return something.
So I have one Gtk.Button, and I connected it to a function, for example on_button_clicked:
button = Gtk.Button()
button.connect('clicked', on_button_clicked)
The function on_button_clicked looks like this:
def on_button_clicked(widget):
    func1()
    func2()
    func3()
While the functions (func1, func2, func3) are running, the entire application stops, waiting for a result from the main function (on_button_clicked). The OS says 'The Application is not responding'.
Basically, func1 encodes a URL and requests it using urllib; that request returns a JSON response. func2 then loads that JSON, builds a dict with information from it, and iterates over the dict printing the information.
import urllib
import urllib2
from urllib2 import Request, urlopen
from collections import OrderedDict

def func1(term):
    url = 'https://api.flickr.com/services/rest/?'
    values = OrderedDict([
        ('url', url),
        ('method', 'flickr.photos.search'),
        ('api_key', '47a28953049fe88b32522c8997e712bb'),
        ('text', term.replace(' ', '+')),
        ('format', 'json'),
        ('nojsoncallback', 1)
    ])
    url_encoded = urllib.urlencode(values)
    url_encoded = urllib.unquote(url_encoded)
    request = Request(url_encoded[4:])
    try:
        response = urlopen(request, timeout=1)
    except urllib2.URLError, e:
        print 'There was an error: %r' % e
During this time I can't click on or edit any other widget.
func1(), func2() and func3() are blocking the gtk main loop. In this case, it is probably the network request. Therefore, you have to use threads.
Probably something like this:
from threading import Thread
...
def on_button_clicked(widget):
    Thread(target=func1).start()
However, you should note that you have to use glib.idle_add() if you want to modify GTK widgets from a thread. To hide a widget from a thread, for example, you would do glib.idle_add(widget.set_visible, False).
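Putting the two pieces together, a rough self-contained sketch, assuming PyGObject (gi.repository), where GLib.idle_add is the modern spelling of the glib.idle_add mentioned above; the widget names and the fake slow request are invented for illustration:
from threading import Thread
import time
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib

def do_slow_network_request():
    time.sleep(2)                       # stand-in for the urllib call in func1()
    return "done"

def fetch_and_show(label):
    # the slow work (func1/func2/func3 equivalent) happens in this worker thread
    result = do_slow_network_request()
    # never touch widgets directly from the thread; schedule it on the main loop
    GLib.idle_add(label.set_text, result)

def on_button_clicked(widget, label):
    Thread(target=fetch_and_show, args=(label,)).start()

win = Gtk.Window()
box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
label = Gtk.Label(label="waiting...")
button = Gtk.Button(label="Fetch")
button.connect('clicked', on_button_clicked, label)
box.pack_start(button, False, False, 0)
box.pack_start(label, False, False, 0)
win.add(box)
win.connect('destroy', Gtk.main_quit)
win.show_all()
Gtk.main()
The button handler returns immediately, so the main loop keeps processing events while the worker thread runs.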

Python Tk(): no window appears when used in a script (the console works)

I have the following problem with this easy script:
from Tkinter import *
root = Tk()
while 1:
    pass
I think everyone would expect a Tkinter window to appear after the 2nd line. But it does not!
If I put this line into the Python console (without the endless while loop), it works.
[I wanted to add an image here, but as I'm new I'm not allowed to :-(]
But running the script (double-clicking on the *.py file in the Windows Explorer) results only in an empty Python console!
Background:
Actually I want to use Snack for Python, which is based on Tkinter. That means I have to create a Tk() instance first. Everything works fine in the Python console, but I want to write a bigger program with at least one Python script, so I cannot type the whole program into the console every time :-)
I have installed Python 2.7 and Tcl/Tk 8.5 (remember: it works in the console)
EDIT: So here's my solution:
First, I create a class CSoundPlayer:
from Tkinter import *
import tkSnack

class CSoundPlayer:
    def __init__(self, callbackFunction):
        self.__activated = False
        self.__callbackFunction = callbackFunction
        self.__sounds = []
        self.__numberOfSounds = 0
        self.__root = Tk()
        self.__root.title("SoundPlayer")
        tkSnack.initializeSnack(self.__root)

    def __mainFunction(self):
        self.__callbackFunction()
        self.__root.after(1, self.__mainFunction)

    def activate(self):
        self.__activated = True
        self.__root.after(1, self.__mainFunction)
        self.__root.mainloop()

    def loadFile(self, fileName):
        if self.__activated:
            self.__sounds.append(tkSnack.Sound(load=fileName))
            self.__numberOfSounds += 1
            # return the index of the new sound
            return self.__numberOfSounds - 1
        else:
            return -1

    def play(self, soundIndex):
        if self.__activated:
            self.__sounds[soundIndex].play()
        else:
            return -1
Then the application itself must be implemented in a class, so that main() is defined when it is handed over to the CSoundPlayer() constructor:
class CApplication:
    def __init__(self):
        self.__programCounter = -1
        self.__SoundPlayer = CSoundPlayer(self.main)
        self.__SoundPlayer.activate()

    def main(self):
        self.__programCounter += 1
        if self.__programCounter == 0:
            self.__sound1 = self.__SoundPlayer.loadFile("../mysong.mp3")
            self.__SoundPlayer.play(self.__sound1)
        # here the cyclic code starts:
        print self.__programCounter

CApplication()
As you can see, mainloop() is called not in the constructor but in the activate() method. This is because CApplication would otherwise never get the reference to the CSoundPlayer object, since the constructor would be stuck in the main loop.
The CApplication class itself involves quite a bit of overhead. The actual "application code" is placed inside CApplication.main(); code which shall be executed only once is controlled by means of the program counter.
Now I am taking it to the next level and placing a polling routine for the MIDI device in CApplication.main(), so I can use MIDI commands as triggers for playing sound files. I hope the performance is sufficient for acceptable latency.
Do you have any suggestions for optimization?
You must start the event loop. Without the event loop, tkinter has no way to actually draw the window. Remove the while loop and replace it with mainloop:
from Tkinter import *
root = Tk()
root.mainloop()
If you need to do polling (as mentioned in the comments to the question), write a function that polls, and have that function run periodically with after:
def poll():
    # do the polling here
    # in 100ms, call the poll function again
    root.after(100, poll)
The reason you don't need mainloop in the console depends on what you mean by "the console". In IDLE, and perhaps some other interactive interpreters, tkinter has a special mode when running interactively that doesn't require you to call mainloop. In essence, the mainloop is the console input loop.
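To tie the two pieces together, a complete minimal sketch of the pattern, assuming Python 2 / Tkinter as in the question; the body of poll() is only a placeholder:
from Tkinter import *

root = Tk()

def poll():
    # do the actual polling work here (e.g. check a MIDI device or a queue)
    print "polling..."
    # schedule the next poll in 100 ms
    root.after(100, poll)

root.after(100, poll)   # kick off the polling cycle
root.mainloop()         # the event loop draws the window and runs the callbacks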

Show information about the file that is being transferred using Django Celery?

I have a task like this in Django celery:
@task
def file(password, source12, destination):
    subprocess.Popen(['sshpass', '-p', password, 'rsync', '-avz', '--info=progress2', source12, destination],
                     stderr=subprocess.PIPE, stdout=subprocess.PIPE).communicate()[0]
I have a function that executes the above task:
@celery.task
@login_required(login_url='/login_backend/')
def sync(request):
    # user_id = request.session['user_id']
    """Sync the files into the server with the progress bar"""
    if request.method == 'POST':
        choices = request.POST.getlist('choice')
        for i in choices:
            new_source = source + "/" + i
            # b = result.successful()
            # result.get()  # Poll the database to get the progress
            start_date1 = datetime.datetime.utcnow().replace(tzinfo=utc)
            source12 = new_source.replace(' ', '')  # Remove whitespaces
            file.delay(password, source12, destination)
        return HttpResponseRedirect('/uploaded_files/')
    else:
        return HttpResponseRedirect('/uploaded_files/')
I want to show the user the file transfer information with its progress: the file name, the remaining time, and the size of the file that is being transferred. How can I do that?
Here you have an example of how to pass the information through Celery itself. Simply start the upload process with an AJAX call, return the task's id, and later poll the task's state via AJAX.
To actually know how the transfer is doing, you need to track the progress inside the file task itself. I doubt you can achieve that by calling sshpass via subprocess in any other way than by parsing the PIPE output.
Mind that Celery already spawns a process for you when you .delay a task.
Try to read the file in chunks, rsync the chunks, and then merge the file at the destination. That way you know how many chunks you have already transferred.
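For what it's worth, a rough sketch of the "pass the information through Celery itself" idea, assuming a bound Celery task: the task parses rsync's --info=progress2 output and publishes it via update_state(), and a view can poll the task's AsyncResult. The task name, regex, and chunk size are assumptions, not part of the original answer.
import re
import subprocess
from celery import shared_task

@shared_task(bind=True)
def sync_file(self, password, source, destination):
    proc = subprocess.Popen(
        ['sshpass', '-p', password, 'rsync', '-avz', '--info=progress2',
         source, destination],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    # rsync separates progress updates with carriage returns, so read small raw chunks
    buf = ''
    while True:
        chunk = proc.stdout.read(256)
        if not chunk:
            break
        buf += chunk
        match = re.search(r'(\d+)%\s+(\S+/s)\s+(\d+:\d+:\d+)', buf)
        if match:
            self.update_state(state='PROGRESS', meta={
                'file': source,
                'percent': int(match.group(1)),
                'speed': match.group(2),
                'eta': match.group(3),
            })
            buf = ''
    proc.wait()
    return {'file': source, 'percent': 100}

# In a view, poll the state by the task id returned from sync_file.delay(...):
#   result = sync_file.AsyncResult(task_id)
#   result.state   -> 'PROGRESS' / 'SUCCESS'
#   result.info    -> the meta dict above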