How can I unlock a Gtk button event - python-2.7

When I click on a button, the entire application stays locked, waiting for the connected method to return something.
So, I have one Gtk.Button, and I connected it to a function, for example on_button_clicked:
button = Gtk.Button()
button.connect('clicked', on_button_clicked)
The function on_button_clicked looks like this:
def on_button_clicked(widget):
    func1()
    func2()
    func3()
While the functions (func1, func2, func3) are running, the entire application stops, waiting for the handler (on_button_clicked) to return. The OS says 'The Application is not responding'.
Basically, func1 encodes a URL and requests it using urllib; the request returns a JSON response. func2 then loads that JSON, builds a dict with information from it, and iterates over that dict printing the information.
import urllib
import urllib2
from urllib2 import Request, urlopen
from collections import OrderedDict

def func1(term):
    url = 'https://api.flickr.com/services/rest/?'
    values = OrderedDict([
        ('url', url),
        ('method', 'flickr.photos.search'),
        ('api_key', '47a28953049fe88b32522c8997e712bb'),
        ('text', term.replace(' ', '+')),
        ('format', 'json'),
        ('nojsoncallback', 1)
    ])
    url_encoded = urllib.urlencode(values)
    url_encoded = urllib.unquote(url_encoded)
    request = Request(url_encoded[4:])
    try:
        response = urlopen(request, timeout=1)
    except urllib2.URLError, e:
        print 'There was an error: %r' % e
During this time I can't click or edit any other widget.

func1(), func2() and func3() are blocking the gtk main loop. In this case, it is probably the network request. Therefore, you have to use threads.
Probably something like this:
from threading import Thread
...
def on_button_clicked(widget):
    Thread(target=func1).start()
However, note that you have to use glib.idle_add() if you want to modify GTK widgets from a thread. To hide a widget from a thread, for example, you would call glib.idle_add(widget.set_visible, False).
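A minimal sketch of that pattern, assuming the gi.repository bindings (there GLib.idle_add plays the role of glib.idle_add); the label widget, the URL, and the result handling below are placeholders, not part of the original code:
import urllib2
from threading import Thread
from gi.repository import GLib

def fetch(url, label):
    # Worker thread: the blocking urlopen() call runs here, not in the main loop
    try:
        data = urllib2.urlopen(url, timeout=5).read()
    except urllib2.URLError, e:
        data = 'There was an error: %r' % e
    # Widgets must only be touched from the main loop, so schedule the update
    GLib.idle_add(label.set_text, data[:200])

def on_button_clicked(widget, label):
    Thread(target=fetch, args=('https://api.flickr.com/services/rest/', label)).start()

# connected with the label passed as user data:
# button.connect('clicked', on_button_clicked, label)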

Related

Concurrency issue or something else? .save() method + DB timing

So the situation is this:
I have an endpoint A that creates data and calls .save() on it (call this functionA); it also sends a POST request to an external third-party API, which in turn calls my endpoint B (call this functionB):
def functionA():
    try:
        with transaction.atomic():
            newData = Blog(title="new blog")
            newData.save()
            # findSavedBlog = Blog.objects.get(title="new blog")
            # print(findSavedBlog)
            # this post request will trigger the third party to send a post
            # request to the endpoint calling functionB
            r = requests.post('www.thirdpartyapi.com/confirm_blog_creation/', some_data)
            return HttpResponse("Result was: " + r.status)
def functionB(request):
    blogTitle = request.POST.get('blog_title')  # assume this evaluates to 'new blog'
    # sleep(20)
    try:
        # again this will be the same as Blog.objects.get(title="new blog")
        findBlog = Blog.objects.get(title=blogTitle)
    except ObjectDoesNotExist as e:
        print("Blog not found!")
If I uncomment the findSavedBlog portion of functionA, it prints the saved blog, but functionB still fails.
If I add a sleep to functionB to wait for the DB to finish writing before fetching the newly created data, it still fails anyway.
Can anyone with knowledge of Django's .save() method and/or some concurrency knowledge help me out here? Much appreciated. Thanks!
EDIT:
The issue was that I was wrapping all of functionA in an atomic block (I forgot to include that part of functionA originally), which means the transaction doesn't commit until after functionA returns!
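For reference, a hedged sketch of one way to avoid that race once the atomic block is identified as the culprit: assuming Django 1.9 or later, transaction.on_commit() defers the third-party call until the row is actually committed and visible to other requests (the payload and import paths below are illustrative, not from the original code):
import requests
from django.db import transaction
from django.http import HttpResponse
# from myapp.models import Blog  # wherever the Blog model actually lives

def functionA(request):
    with transaction.atomic():
        new_blog = Blog(title="new blog")
        new_blog.save()
        # Runs only after the surrounding atomic block commits, so the
        # third party's callback to functionB can actually see the new row.
        transaction.on_commit(
            lambda: requests.post(
                'http://www.thirdpartyapi.com/confirm_blog_creation/',
                data={'blog_title': new_blog.title},
            )
        )
    return HttpResponse("blog queued for confirmation")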

Twisted - how to make lots of Python code non-blocking

I've been trying to get this script to perform the code in hub() in written order.
hub() contains a mix of standard Python code and requests to carry out I/O using Twisted and Crossbar.
However, because the Python code is blocking, the reactor doesn't get a chance to carry out those 'publish' tasks; my frontend receives all the published messages at the end.
This code is a massively simplified version of what I'm actually dealing with. The real script (hub() and the other methods it calls) is over 1500 lines long. Modifying all those functions to make them non-blocking is not ideal; I'd rather isolate the changes to a few methods like publish(), if that can fix this problem.
I have played around with terms like async, await, deferLater, loopingCall, and others. I have not found an example that helped yet in my situation.
Is there a way to modify publish() (or hub()) so they send out the messages in order?
from autobahn.twisted.component import Component, run
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.internet import reactor, defer

component = Component(
    transports=[
        {
            u"type": u"websocket",
            u"url": u"ws://127.0.0.1:8080/ws",
            u"endpoint": {
                u"type": u"tcp",
                u"host": u"localhost",
                u"port": 8080,
            },
            u"options": {
                u"open_handshake_timeout": 100,
            }
        },
    ],
    realm=u"realm1",
)

@component.on_join
@inlineCallbacks
def join(session, details):
    print("joined {}: {}".format(session, details))

    def publish(context='output', value='default'):
        """ Publish a message. """
        print('publish', value)
        session.publish(u'com.myapp.universal_feedback', {"id": context, "value": value})

    def hub(thing):
        """ Main script. """
        do_things
        publish('output', 'some data for you')
        do_more_things
        publish('status', 'a progress message')
        do_even_more_things
        publish('status', 'some more data')
        do_all_the_things
        publish('other', 'something else')

    try:
        yield session.register(hub, u'com.myapp.hello')
        print("procedure registered")
    except Exception as e:
        print("could not register procedure: {0}".format(e))

if __name__ == "__main__":
    run([component])
    reactor.run()
Your join() function is async (decorated with @inlineCallbacks and containing at least one yield in its body).
Internally it registers the function hub() as a WAMP RPC; hub(), however, is not async.
Also, the calls to session.publish() are not yielded, as async calls should be.
Result: you add a bunch of events to the event loop but don't await them until the event loop is flushed on application shutdown.
You need to make your hub and publish functions async:
@inlineCallbacks
def publish(context='output', value='default'):
    """ Publish a message. """
    print('publish', value)
    yield session.publish(u'com.myapp.universal_feedback', {"id": context, "value": value})

@inlineCallbacks
def hub(thing):
    """ Main script. """
    do_things
    yield publish('output', 'some data for you')
    do_more_things
    yield publish('status', 'a progress message')
    do_even_more_things
    yield publish('status', 'some more data')
    do_all_the_things
    yield publish('other', 'something else')

Pepper: pass variable from Python to web JS

I'm programming an app for Aldebaran's Pepper robot. I'm using Choregraphe and I made an HTML page to display on the robot's tablet. I have made the boxes for displaying the web page, and I need to pass a variable from Python to the page's JavaScript.
Is there any way to do it?
The Python code is the same as the default Raise Event box; it receives the string "IMAGE" on its onStart input:
class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)
        pass

    def onLoad(self):
        self.memory = ALProxy("ALMemory")

    def onUnload(self):
        self.memory = None

    def onInput_onStart(self, p):
        self.memory.raiseEvent(self.getParameter("key"), p)
        self.onStopped(p)

    def onInput_onStop(self):
        # it is recommended to call onUnload of this box in an onStop method,
        # as the code written in onUnload is used to stop the box as well
        self.onUnload()
        pass
And the Javascript code is this:
$('document').ready(function(){
    var session = new QiSession();
    session.service("ALMemory").done(function (ALMemory) {
        ALMemory.subscriber("PepperQiMessaging/totablet").done(function(subscriber) {
            $("#log").text("AAA");
            subscriber.signal.connect(toTabletHandler);
        });
    });

    function toTabletHandler(value) {
        $("#log").text("-> ");
    }
});
The JavaScript reaches the first #log update ("AAA") but never the second one inside toTabletHandler.
Yes, I think the web page is loading too late to catch your event. One quick solution would be to send an event from JavaScript when the page is ready, and wait for this event in your Python script. Once that event is received, you know the web page is ready and you can send the "sendImage" event.
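A rough sketch of that idea on the Python side of the box, assuming the page raises a memory key such as PepperQiMessaging/pageReady from JavaScript once $('document').ready() fires; the key name and polling interval are made up for illustration:
import time

class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onLoad(self):
        self.memory = ALProxy("ALMemory")

    def onInput_onStart(self, p):
        # Poll ALMemory until the web page reports it is ready, or give up after 10 s
        deadline = time.time() + 10
        while time.time() < deadline:
            try:
                if self.memory.getData("PepperQiMessaging/pageReady"):
                    break
            except Exception:
                pass  # the key has not been inserted yet
            time.sleep(0.2)
        # The page should be listening now, so raise the real event
        self.memory.raiseEvent(self.getParameter("key"), p)
        self.onStopped(p)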
I solved the problem. I put a delay box of 2 seconds between the Show HTML box and the sendImage box, like in the image below:
I think the problem was that the string was sent to the tablet before the web page was ready to receive it; with the 2-second delay (with 1 second it doesn't work) the page has time to prepare for receiving the data.

How to start a new request after the item_scraped scrapy signal is called?

I need to scrape the data of each item from a website using Scrapy (http://example.com/itemview). I have a list of itemIDs and I need to pass each one to a form on example.com.
There is no URL change for each item, so for each request in my spider the URL will always be the same; only the content will be different.
I don't want a for loop for handling each request, so I followed the steps below:
started spider with the above url
added item_scraped and spider_closed signals
passed through several functions
passed the scraped data to pipeline
triggered the item_scraped signal
After this it automatically calls the spider_closed signal. But I want the above steps to repeat until all the itemIDs are finished.
class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    itemIDs = [11111, 22222, 33333]
    current_item_num = 0

    def __init__(self, itemids=None, *args, **kwargs):
        super(ExampleSpider, self).__init__(*args, **kwargs)
        dispatcher.connect(self.item_scraped, signals.item_scraped)
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider):
        self.driver.quit()

    def start_requests(self):
        request = self.make_requests_from_url('http://example.com/itemview')
        yield request

    def parse(self, response):
        self.driver = webdriver.PhantomJS()
        self.driver.get(response.url)
        first_data = self.driver.find_element_by_xpath('//div[@id="itemview"]').text.strip()
        yield Request(response.url, meta={'first_data': first_data},
                      callback=self.processDetails, dont_filter=True)

    def processDetails(self, response):
        itemID = self.itemIDs[self.current_item_num]
        # ...form submission with the current itemID goes here...
        # ...the content of the page is updated with the given itemID...
        yield Request(response.url, meta={'first_data': response.meta['first_data']},
                      callback=self.processData, dont_filter=True)

    def processData(self, response):
        # ...some more scraping goes here...
        item = ExamplecrawlerItem()
        item['first_data'] = response.meta['first_data']
        yield item

    def item_scraped(self, item, response, spider):
        self.current_item_num += 1
        # I need to call the processDetails function here for the next itemID
        # and the process needs to continue until the itemIDs are finished
        self.parse(response)
My pipeline:
class ExampleDBPipeline(object):
    def process_item(self, item, spider):
        MYCOLLECTION.insert(dict(item))
        return item
I wish I had an elegant solution to this, but instead here is a hackish way of calling the underlying classes:
self.crawler.engine.slot.scheduler.enqueue_request(scrapy.Request(url,self.yourCallBack))
However, you can also yield a request after you yield the item and have it call back to self.processDetails. Simply add this to your processData function:
yield item
self.counter += 1
yield scrapy.Request(response.url, callback=self.processDetails, dont_filter=True,
                     meta={"your": "Dictionary"})
Also, PhantomJS can be nice and make your life easy, but it is slower than plain connections. If possible, find the request for the JSON data (or whatever it is that makes the page unparseable without JS). To do so, open Chrome, right click, click Inspect, go to the Network tab, enter the ID into the form, then look at the XHR or JS tabs for a JSON response that contains the data or the next URL you want. Most of the time there will be some URL built by adding the ID; if you can find it, you can just concatenate your URLs and call them directly without the cost of JS rendering. Sometimes it is randomized or simply not there, but I've had fair success with this. You can then also use it to yield many requests at the same time without worrying about PhantomJS trying to do two things at once, or having to initialize many instances of it. You could use tabs, but that is a pain.
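To illustrate the idea, here is a hedged sketch of what that looks like if such an endpoint exists; the URL pattern and the first_data field in the JSON are assumptions for illustration, not something the site is known to expose:
import json
import scrapy

class ExampleJsonSpider(scrapy.Spider):
    name = "example_json"
    allowed_domains = ["example.com"]
    itemIDs = [11111, 22222, 33333]

    def start_requests(self):
        for item_id in self.itemIDs:
            # hypothetical endpoint discovered via the Network tab
            url = 'http://example.com/api/itemview?id=%d' % item_id
            yield scrapy.Request(url, callback=self.parse_item,
                                 meta={'itemID': item_id})

    def parse_item(self, response):
        data = json.loads(response.body)
        # populate ExamplecrawlerItem (or a plain dict) with whatever fields you need
        yield {'itemID': response.meta['itemID'],
               'first_data': data.get('first_data')}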
Also, I would use a Queue of your IDs to ensure thread safety. Otherwise, you could have processDetails called twice on the same ID. In the logic of your program everything seems to run linearly, which means you aren't using Scrapy's concurrency capabilities and your program will run more slowly. To use a Queue, add:
import Queue

# inside the class definition, add
itemIDQueue = Queue.Queue()

# within __init__, add
[self.itemIDQueue.put(ID) for ID in self.itemIDs]

# within processDetails, replace itemID = self.itemIDs[self.current_item_num] with
itemID = self.itemIDQueue.get()
And then there is no need to increment the counter and your program is thread safe.

Returning data to the original process that invoke a subprocess

Someone told me to post this as a new question. This is a follow up to
Instantiating a new WX Python GUI from spawn thread
I implemented the following code in a script that gets called from a spawned thread (Thread2):
# Function that gets invoked by Thread #2
def scriptFunction():
    # Code to instantiate GUI2; GUI2 contains wx.TextCtrl fields and a 'Done' button
    p = subprocess.Popen("python secondGui.py", bufsize=2048, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # Wait for a response
    p.wait()
    # Read response
    response = p.stdout.read()
    # Process entered data
    processData()
In the new process running GUI2, I want the 'Done' button event handler to return four data sets to Thread2 and then destroy itself (GUI2):
def onDone(self, event):
    # This is the part I need help with; trying to return data back to the
    # main process that instantiated this GUI (GUI2)
    process = subprocess.Popen(['python', 'MainGui.py'], shell=False, stdout=subprocess.PIPE)
    print process.communicate('input1', 'input2', 'input3', 'input4')
    # kill GUI
    self.Close()
Currently, this implementation spawns another Main GUI in a new process. What I want to do is return data back to the original process. Thanks.
Do the two scripts have to be separate? I mean, you can have multiple frames running on one main loop and transfer information between the two using pubsub: http://www.blog.pythonlibrary.org/2010/06/27/wxpython-and-pubsub-a-simple-tutorial/
Theoretically, what you're doing should work too. Other methods I've heard of involve using Python's socket server library to create a really simple server that the two programs can post to and read data from, or a database, or watching a directory for file updates.
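For illustration, a minimal single-process sketch of the pubsub idea; it assumes the kwargs-style pub API (wx.lib.pubsub in wxPython 2.9+ or the standalone pypubsub package), and the frame classes, widget layout, and topic name are made up for this example:
import wx
from wx.lib.pubsub import pub  # on newer installs: from pubsub import pub

class MainFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="Main GUI")
        self.label = wx.StaticText(self, label="waiting for GUI2...")
        # Listen for the data published by the second frame
        pub.subscribe(self.onData, "data.entered")

    def onData(self, values):
        # Runs in the same main loop; no pipes or subprocesses involved
        self.label.SetLabel(", ".join(values))

class SecondFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="GUI2")
        panel = wx.Panel(self)
        self.field = wx.TextCtrl(panel, pos=(10, 10))
        button = wx.Button(panel, label="Done", pos=(10, 40))
        button.Bind(wx.EVT_BUTTON, self.onDone)

    def onDone(self, event):
        # Publish the collected values back to MainFrame, then close this frame
        pub.sendMessage("data.entered", values=[self.field.GetValue()] * 4)
        self.Close()

if __name__ == "__main__":
    app = wx.App(False)
    MainFrame().Show()
    SecondFrame().Show()
    app.MainLoop()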
The function that gets invoked by Thread #2:
def scriptFunction():
    # Code to instantiate GUI2; GUI2 contains wx.TextCtrl fields and a 'Done' button
    p = subprocess.Popen("python secondGui.py", bufsize=2048, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # Wait for a response
    p.wait()
    # Read the response and split the returned string, which contains 4 words separated by commas
    responseArray = string.split(p.stdout.read(), ",")
    # Process entered data
    processData(responseArray)
The 'Done' button event handler that gets invoked when the 'Done' button is clicked on GUI2:
def onDone(self, event):
    # Package the 4 word inputs into a string to return back to the main process (Thread2)
    sys.stdout.write("%s,%s,%s,%s" % (dataInput1, dataInput2, dataInput3, dataInput4))
    # kill GUI2
    self.Close()
Thanks for your help Mike!