ROS python setting global variables from subscriber callbacks - python-2.7

I'm using the code below in an attempt to read laser data and determine which side of my robot is closest to a wall.
The laser data is successfully being printed in the left and right callbacks, but when I try to assign both values to global variables and use those variables in a third function, the variables DO NOT print in my run() function:
#!/usr/bin/env python
import rospy
import math
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry
from tf import transformations
from sensor_msgs.msg import LaserScan

subL_value = ''
subR_value = 'test'

def callbackLeft(msg):
    subL_value = min(msg.ranges[0:49])

def callbackRight(msg):
    global subR_value
    subR_value = min(msg.ranges[0:49])

def run():
    # rospy.Subscriber('/scan_left', LaserScan, callbackLeft)
    rospy.Subscriber('/scan_right', LaserScan, callbackRight)
    # print('\tLeft: '),
    # print(subL_value)
    print('\tRight: '),
    global subR_value
    print(subR_value)

if __name__ == "__main__":
    rospy.init_node('wall_detector')
    run()
    rospy.spin()
In the code above, I am only using subR_value: initializing it to the value test, resetting it in the callbackRight subscriber callback, and attempting to read the new value in the run() function.
However, running the script, I have 2 problems:
The printed value is test and hence is not overwritten as expected.
The script does not loop, only outputting a single Right: test line.
I found this post, which describes a similar problem, and also this one, but I seem to be satisfying the necessary global variable labelling.
Am I missing something?

In your code you are only calling run() once; it is not called periodically and therefore also does not print periodically.
The node is initialised inside your __main__, enters the run() routine once, registers the callback for /scan_right, executes the print(...) statements and exits the routine. Back in the main block, the program is prevented from exiting by rospy.spin(). As long as ROS is alive, every time a new message is received on the /scan_right topic, callbackRight is called, which updates the global variable (but does not print it).
If you wanted to print the variable every time an update occurs, you would have to call print(...) inside the callback. If you wanted to print it periodically (at a fixed rate, not on every update), you would have to replace rospy.spin() with something like
rate = rospy.Rate(10)  # fixed update frequency of 10 Hz
while not rospy.is_shutdown():
    # Call your print function here
    rate.sleep()
Instead of using global variables I would move the code inside a class. Furthermore, do not use print with ROS; instead use rospy.loginfo(), rospy.logwarn() and rospy.logerr(). They have several advantages, as already discussed at the bottom of this post.
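As a rough illustration of the class-based approach, a minimal sketch might look like this (the topic names, the msg.ranges[0:49] slice and the 10 Hz rate come from the question; everything else, including the WallDetector class name, is an assumption):

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan

class WallDetector:
    def __init__(self):
        # store the latest minimum range per side as instance attributes
        self.left = None
        self.right = None
        rospy.Subscriber('/scan_left', LaserScan, self.callback_left)
        rospy.Subscriber('/scan_right', LaserScan, self.callback_right)

    def callback_left(self, msg):
        self.left = min(msg.ranges[0:49])

    def callback_right(self, msg):
        self.right = min(msg.ranges[0:49])

    def run(self):
        rate = rospy.Rate(10)  # print at a fixed 10 Hz, independent of the scan rate
        while not rospy.is_shutdown():
            rospy.loginfo('Left: %s  Right: %s', self.left, self.right)
            rate.sleep()

if __name__ == '__main__':
    rospy.init_node('wall_detector')
    WallDetector().run()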

Related

Running multiple functions

I think I have confused myself on how I should approach this.
I have a number of functions that I use to interact with an API, for example get product ID, update product detail, update inventory. These calls need to be done one after another, and are all wrapped up in one function, api.push().
Let's say I need to run api.push() 100 times, once for each of 100 product IDs.
What I want to do is run many api.push() calls at the same time so that I can speed up the processing. For example, let's say I want to run 5 at a time.
I am confused as to whether this is multiprocessing or threading, or neither. I tried both but they didn't seem to work; for example, I have this:
jobs = []
for n in range(0, 4):
    print "adding a job %s" % n
    p = multiprocessing.Process(target=api.push())
    jobs.append(p)

# Starts threads
for job in jobs:
    job.start()

for job in jobs:
    job.join()
Any guidance would be appreciated.
Thanks
Please read the Python docs and do some research on the global interpreter lock to see whether you should use threading or multiprocessing in your situation.
I do not know the inner workings of api.push, but please note that you should pass a function reference to multiprocessing.Process.
Using p = multiprocessing.Process(target=api.push()) will pass whatever api.push() returns as the function to be called in the subprocess.
If api.push is the function to be called in the subprocess, you should use p = multiprocessing.Process(target=api.push) instead, as that passes a reference to the function rather than a reference to its result.
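For instance, a minimal sketch of running the pushes five at a time with a process pool (assuming api.push is a picklable, top-level function that takes a product ID; the import of api is hypothetical):

import multiprocessing
from myproject import api  # hypothetical import; replace with wherever api.push lives

def run_pushes(product_ids):
    # a pool of 5 worker processes; each api.push call runs in its own process
    pool = multiprocessing.Pool(processes=5)
    pool.map(api.push, product_ids)  # note: api.push, not api.push()
    pool.close()
    pool.join()

if __name__ == '__main__':
    run_pushes(range(100))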

Is it possible to put a function in a timed loop using django-background-task

Say I want to execute a function every 5 minutes without using a cron job.
What I think of doing is creating a django background task that actually calls that function, and at the end of that function I create the task again with schedule = 60*5.
This effectively puts the function in a time-based loop.
I tried a few iterations, but I am getting import errors. Is it possible to do this or not?
No, it's not possible in any case, as it will effectively create cyclic import problems in django: in the tasks you will have to import that function, and in the file for that function you will have to import the tasks.
So whatever strategy you take, you are going to land in the same problem.
I made something like this. Is this what you are looking for?
import threading
import time

def worker():
    """do your stuff"""
    return

threads = list()
while True:
    time.sleep(300)
    t = threading.Thread(target=worker)
    threads.append(t)
    t.start()
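A variation on the same idea that avoids the blocking while loop, offered only as a sketch (not django-background-task specific), is a threading.Timer that re-arms itself every 5 minutes:

import threading

def worker():
    """do your stuff"""
    # re-arm the timer so worker runs again in 5 minutes
    threading.Timer(300, worker).start()

# kick off the first run 5 minutes from now
threading.Timer(300, worker).start()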

twisted self.transport.write not working inside loop

I have the following client code, which sends some data to the server every 8 seconds:
class EchoClient(LineReceiver):
    def connectionMade(self):
        makeByteList()
        self.transport.write(binascii.unhexlify("7777"))
        while 1:
            print "hello"
            lep = random.randint(0,4)
            print lep
            print binascii.unhexlify(sendHexBytes(lep))
            try:
                self.transport.write("Hello")
                self.transport.write(binascii.unhexlify(sendHexBytes(lep)))
            except Exception, ex1:
                print "Failed to send"
            time.sleep(8)

    def lineReceived(self, line):
        pass

    def dataReceived(self, data):
        print "receive:", data
Every statement inside the while loop executes except self.transport.write: the server doesn't receive any data. The self.transport.write outside the while loop doesn't execute either. In both cases no exception is raised, but if I remove the while loop, the statement outside the loop executes correctly. Why is this happening? Please correct me where I am making a mistake.
All methods in Twisted are asynchronous. All of the methods such as connectionMade and lineReceived happen on the same thread. The Twisted reactor runs a loop (called an event loop) and it calls methods such as connectionMade and lineReceived when these events happen.
You have an infinite loop in connectionMade. Once Python gets into that loop, it can never get out. Twisted calls connectionMade when the connection is established, and your code stays there forever. Twisted has no opportunity to actually write the data to the transport, or receive data; it is stuck in connectionMade!
When you write Twisted code, the important point that you must understand is that you may not block on the Twisted thread. For example, let's say I want to send a "Hello" 4 seconds after a client connects. I might write this:
class EchoClient(LineReceiver):
    def connectionMade(self):
        time.sleep(4)
        self.transport.write("Hello")
but this would be wrong. What happens if 2 clients connect at the same time? The first client will go into connectionMade, and my program will hang for 4 seconds until the "Hello" is sent.
The Twisted way to do this would be like this:
class EchoClient(LineReceiver):
    def connectionMade(self):
        reactor.callLater(4, self.sendHello)

    def sendHello(self):
        self.transport.write("Hello")
Now, when Twisted enters connectionMade, it calls reactor.callLater to schedule an event 4 seconds in the future. Then it exits connectionMade and continues doing all the other stuff it needs to do. Until you grasp the concept of async programming you can't continue in Twisted. I suggest you read through the Twisted docs here.
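For the original goal of sending data every 8 seconds, the non-blocking equivalent of the while/sleep loop is a repeating call scheduled on the reactor, for example with twisted.internet.task.LoopingCall. A rough sketch (sendHexBytes and the "7777" handshake are taken from the question; the rest is an assumption, not a drop-in replacement):

import binascii
import random

from twisted.internet.task import LoopingCall
from twisted.protocols.basic import LineReceiver

class EchoClient(LineReceiver):
    def connectionMade(self):
        self.transport.write(binascii.unhexlify("7777"))
        # fire sendData immediately and then every 8 seconds, without blocking the reactor
        self.loop = LoopingCall(self.sendData)
        self.loop.start(8.0)

    def sendData(self):
        lep = random.randint(0, 4)
        # sendHexBytes comes from the question's code
        self.transport.write(binascii.unhexlify(sendHexBytes(lep)))

    def lineReceived(self, line):
        pass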
Finally, an unrelated note: if you are using a LineReceiver, you should not implement your own dataReceived; doing so will prevent lineReceived from being called. LineReceiver is a protocol that implements its own dataReceived, which buffers the data, breaks it up into lines and calls lineReceived.

How do I embed an IPython Interpreter into an application running in an IPython Qt Console

There are a few topics on this, but none with a satisfactory answer.
I have a python application running in an IPython qt console
http://ipython.org/ipython-doc/dev/interactive/qtconsole.html
When I encounter an error, I'd like to be able to interact with the code at that point.
try:
    raise Exception()
except Exception as e:
    try:  # use exception trick to pick up the current frame
        raise None
    except:
        frame = sys.exc_info()[2].tb_frame.f_back
    namespace = frame.f_globals.copy()
    namespace.update(frame.f_locals)

    import IPython
    IPython.embed_kernel(local_ns=namespace)
I would think this would work, but I get an error:
RuntimeError: threads can only be started once
I just use this:
from IPython import embed; embed()
It works better than anything else for me :)
Update:
In celebration of this answer receiving 50 upvotes, here are the updates I've made to this snippet in the intervening six years since it was posted.
First, I now like to import and execute in a single statement, as I use black for all my python code these days and it reformats the original snippet in a way that doesn't make sense in this specific and unusual context. So:
__import__("IPython").embed()
Given that I often use this inside a loop or a thread, it can be helpful to include a snippet that allows terminating the parent process (partly for convenience, and partly to remind myself of the best way to do it). os._exit is the best choice here, so my snippet includes this (same logic w/r/t using a single statement):
q = __import__("functools").partial(__import__("os")._exit, 0)
Then I can simply use q() if/when I want to exit the master process.
My full snippet (with # FIXME in case I would ever be likely to forget to remove it!) looks like this:
q = __import__("functools").partial(__import__("os")._exit, 0) # FIXME
__import__("IPython").embed() # FIXME
You can use the following recipe to embed an IPython session into your program:
try:
    get_ipython
except NameError:
    banner = exit_msg = ''
else:
    banner = '*** Nested interpreter ***'
    exit_msg = '*** Back in main IPython ***'

# First import the embed function
from IPython.frontend.terminal.embed import InteractiveShellEmbed

# Now create the IPython shell instance. Put ipshell() anywhere in your code
# where you want it to open.
ipshell = InteractiveShellEmbed(banner1=banner, exit_msg=exit_msg)
Then use ipshell() whenever you want to be dropped into an IPython shell. This will allow you to embed (and even nest) IPython interpreters in your code and inspect objects or the state of the program.
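For instance, assuming the ipshell instance from the recipe above is in scope (compute_something is just a placeholder name):

def compute_something(data):
    result = [x * 2 for x in data]
    # drop into the nested IPython shell; 'data' and 'result' are inspectable here
    ipshell()
    return result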

Calling a non-returning python function from a python script

I want to call a wrapped C++ function from a python script which does not return immediately (in detail: it is a function that starts a QApplication window, and the last line of that function is QApplication->exec()). After that function call I want to move on to the next line in my python script, but when executing the script it hangs forever on that line.
In contrast, when I type my script line by line into the python command line, I can go on to the next line after pressing enter a second time on the non-returning function call.
So how do I solve this issue when executing the script?
Thanks!!
Edit:
My python interpreter is embedded in an application. I want to write an extension for this application as a separate Qt4 window. All the python stuff is only there to make my graphical plugin accessible from a script (via boost.python wrapping).
My python script:
import imp
import os
Plugin = imp.load_dynamic('Plugin', os.getcwd() + 'Plugin.dll')
qt = Plugin.StartQt4() # it hangs here when executing as script
pl = PluginCPP.PluginCPP() # Creates a QMainWindow
pl.ShowWindow() # shows the window
The C++ code for the Qt start function looks like this:
class StartQt4
{
public:
StartQt4()
{
int i = 0;
QApplication* qapp = new QApplication(i, NULL);
qapp->exec();
}
};
Use a thread (longer example here):
from threading import Thread

class WindowThread(Thread):
    def run(self):
        callCppFunctionHere()

WindowThread().start()
QApplication::exec() starts the main loop of the application and will only return after the application quits. If you want to run code after the application has been started, you should resort to Qt's event handling mechanism.
From http://doc.trolltech.com/4.5/qapplication.html#exec :
To make your application perform idle processing, i.e. executing a special function whenever there are no pending events, use a QTimer with 0 timeout. More advanced idle processing schemes can be achieved using processEvents().
I assume you're already using PyQt?
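If so, a rough PyQt4 sketch of that idle-processing idea (do_idle_work is just a placeholder) might look like this; a zero-timeout QTimer runs your function whenever the event loop has nothing else pending, so you never have to block after exec_():

import sys
from PyQt4.QtCore import QTimer
from PyQt4.QtGui import QApplication

def do_idle_work():
    # called by the event loop whenever there are no pending events
    pass

app = QApplication(sys.argv)
timer = QTimer()
timer.timeout.connect(do_idle_work)
timer.start(0)  # 0 ms timeout = fire on every idle pass of the event loop
sys.exit(app.exec_())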