ROS: saving object in a file when a ros node is killed - python-2.7

I am running a ROS node with a Kalman filter in it. The Kalman filter is an object whose states get updated as time plays out. Conventionally, a ROS node has a run(self) method that runs at a specified frequency using a while loop:
while not rospy.is_shutdown():
    # do this
Going through each loop, my Kalman filter object updates. I just want to be able to save the Kalman filter object when the node is shut down, either by some external condition or when the user presses Ctrl+C. I am not able to do this. In the run(self) method, I tried:
while not rospy.is_shutdown():
    # do this
# save in file
output = pathlib.Path('path/to/location')
results_path = output.with_suffix('.npz')
with open(results_path, 'xb') as results_file:
    np.savez(results_file, kfObj=kf_list)
But it has not worked. Is it not executing the save command? If Ctrl+C is pressed, does it stop short of executing it? What's the way to do it?

Check out the atexit module:
http://docs.python.org/library/atexit.html
import atexit

def exit_handler():
    output = pathlib.Path('path/to/location')
    results_path = output.with_suffix('.npz')
    with open(results_path, 'xb') as results_file:
        np.savez(results_file, kfObj=kf_list)

atexit.register(exit_handler)
Just be aware that this works great for normal termination of the script, but it won't get called in all cases (e.g. fatal internal errors).
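If you also need to cover an external kill (SIGTERM), one option is to turn the signal into a normal interpreter exit so the atexit handler still fires; a minimal sketch, assuming exit_handler above is already registered:
import signal
import sys

def handle_sigterm(signum, frame):
    # sys.exit() raises SystemExit, so atexit-registered handlers run;
    # the default SIGTERM action would kill the process without them
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)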

Why not try the following Python example class structure engaging shutdown hooks:
import rospy

class Hardware_Interface:
    def __init__(self, selectedBoard):
        ...
        # Housekeeping, cleanup at the end
        rospy.on_shutdown(self.shutdown)
        # Get the connection settings from the parameter server
        self.port = rospy.get_param("~"+self.board+"-port", "/dev/ttyACM0")
        # Get the prefix
        self.prefix = rospy.get_param("~"+self.board+"-prefix", "travel")
        # Overall loop rate
        self.rate = int(rospy.get_param("~rate", 5))
        self.period = rospy.Duration(1/float(self.rate))
        ...

    def shutdown(self):
        rospy.loginfo("Shutting down Hardware Interface Node...")
        try:
            rospy.loginfo("Stopping the robot...")
            self.controller.send(0, 0, 0, 0)
            #self.cmd_vel_pub.publish(Twist())
            rospy.sleep(2)
        except:
            rospy.loginfo("Cannot stop!")
        try:
            self.controller.close()
        except:
            pass
        finally:
            rospy.loginfo("Serial port closed.")
            os._exit(0)
This is just an extract from a personal script, please modify it to your needs. I imagine that on_shutdown will do the trick. Another, similar approach comes from my friends in the Robot Ignite Academy in The Construct and looks like this:
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

class my_class():
    def __init__(self):
        ...
        self.cmd = Twist()
        self.ctrl_c = False
        self.rate = rospy.Rate(10) # 10hz
        rospy.on_shutdown(self.shutdownhook)

    def publish_once_in_cmd_vel(self):
        while not self.ctrl_c:
            ...

    def shutdownhook(self):
        # works better than the rospy.is_shutdown()
        self.ctrl_c = True

    def move_something(self, linear_speed=0.2, angular_speed=0.2):
        self.cmd.linear.x = linear_speed
        self.cmd.angular.z = angular_speed
        self.publish_once_in_cmd_vel()

if __name__ == '__main__':
    rospy.init_node('class_test', anonymous=True)
    ...
This is obviously a sample of their code (for more, please join the academy).
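Tying this back to the original question, a minimal sketch of the saving node might look like the following (assuming kf_list holds your filter state; the path and node layout are placeholders):
import pathlib

import numpy as np
import rospy

class KalmanNode:
    def __init__(self):
        self.kf_list = []  # hypothetical: your Kalman filter object(s)
        # the registered hook runs on Ctrl+C, rosnode kill, or ROS shutdown
        rospy.on_shutdown(self.save_filter)

    def save_filter(self):
        results_path = pathlib.Path('path/to/location').with_suffix('.npz')
        # 'wb' instead of 'xb' so a leftover file from an earlier run
        # cannot make the save fail at the one moment it matters
        with open(str(results_path), 'wb') as results_file:
            np.savez(results_file, kfObj=self.kf_list)

    def run(self):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            # update the filter here
            rate.sleep()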

Related

Python watchdog Watch For the Creation of New File By Pattern Match

I'm still learning Python so I apologize up front. I am trying to write a watcher that watches for the creation of a file matching a specific pattern. I am using watchdog with some code that is out there to get it to watch a directory.
I don't quite understand how to have the watcher watch for a filename that matches a pattern. I've edited the regexes=[] field with what I think might work, but I have not had luck.
Here is an example of a file that might come in: Hires For Monthly TM_06Jan_0946.CSV
I can get the watcher to tell me when a .CSV is created in the directory, I just can't get it to only tell me when essentially Hires.*\.zip is created.
I've checked this link but have not had luck:
How to run the RegexMatchingEventHandler of Watchdog correctly?
Here is my code:
import time

import watchdog.events
import watchdog.observers

class Handler(watchdog.events.RegexMatchingEventHandler):
    def __init__(self):
        watchdog.events.RegexMatchingEventHandler.__init__(
            self, regexes=['.*'], ignore_regexes=[],
            ignore_directories=False, case_sensitive=False)

    def on_created(self, event):
        print("Watchdog received created event - %s." % event.src_path)

    def on_modified(self, event):
        print("Watchdog received modified event - %s." % event.src_path)

if __name__ == "__main__":
    src_path = r"C:\Users\Downloads"
    event_handler = Handler()
    observer = watchdog.observers.Observer()
    observer.schedule(event_handler, path=src_path, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
The reason the '.*' regex works while the Hires.*\.zip regex doesn't is that watchdog tries to match the regex against the full path. In regex terms: r'Hires.*\.zip' will not give you a full match against "C:\Users\Downloads\Hires blah.zip".
Instead, try:
def __init__(self):
    watchdog.events.RegexMatchingEventHandler.__init__(
        self, regexes=[r'^.*?\.csv$', r'.*?Hires.*?\.zip'],
        ignore_regexes=[], ignore_directories=False, case_sensitive=False)
This will match all the .csv files and the .zip files whose names start with "Hires". The '.*?' matches the "C:\Users\Downloads" prefix and the rest constrains the file name and extension.
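As a quick sanity check of the full-path behaviour (using the sample file name from the question):
import re

path = r"C:\Users\Downloads\Hires For Monthly TM_06Jan_0946.CSV"
# re.match anchors at the start of the string, so without a leading .*?
# the directory prefix prevents any match
print(re.match(r'Hires.*\.CSV', path))      # None
print(re.match(r'.*?Hires.*?\.CSV', path))  # a match object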

Load topic file to NAO robot 2.1

Hello, I want to know how to load a Dialog Topic file using Python.
I made sure that the file path is right, but it keeps saying that it isn't. I have also used the tutorials in NAO 2.1's documentation, ALDialog and ALModule.
Please send me code that works or tell me the error. I tried using the following code:
from optparse import OptionParser
import sys
import time

from naoqi import ALBroker, ALModule, ALProxy

NAO_IP = "nao.local"
dialog_p = None
ModuleInstance = None

class NaoFalanteModule(ALModule):
    def __init__(self, name):
        ALModule.__init__(self, name)
        self.tts = ALProxy("ALTextToSpeech")
        self.tts.setLanguage("Brazilian")
        global dialog_p
        try:
            dialog_p = ALProxy("ALDialog")
        except Exception, e:
            print "Error dialog"
            print str(e)
            exit(1)
        dialog_p.setLanguage("Brazilian")
        self.naoAlc()

    def naoAlc(self):
        topf_path = "/simpleTestes/diaSimples/testeSimples_ptb.top"
        topf_path = topf_path.decode("utf-8")
        topic = dialog_p.loadTopic(topf_path.encode("utf-8"))
        # Start dialog
        dialog_p.subscribe("NaoFalanteModule")
        dialog_p.activateTopic(topic)
        raw_input(u"Press 'Enter' to exit.")
        dialog_p.unload(topic)
        dialog_p.unsubscribe("NaoFalanteModule")

def main():
    parser = OptionParser()
    parser.add_option("--pip",
                      help="Parent broker IP. The IP address of your robot",
                      dest="pip")
    parser.add_option("--pport",
                      help="Parent broker port. The port NAOqi is listening to",
                      dest="pport",
                      type="int")
    parser.set_defaults(
        pip=NAO_IP,
        pport=9559)
    (opts, args_) = parser.parse_args()
    pip = opts.pip
    pport = opts.pport

    myBroker = ALBroker("myBroker",
                        "0.0.0.0",
                        0,
                        pip,
                        pport)

    global ModuleInstance
    ModuleInstance = NaoFalanteModule("ModuleInstance")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        print "Interrupted by user, shutting down"
        myBroker.shutdown()
        sys.exit(0)

if __name__ == "__main__":
    main()
The path to the topic needs to be the absolute path to that file, whereas you're passing a path relative to your current execution directory. The reason is that ALDialog is a separate service running in its own process and knows nothing about the execution context of whoever is calling it.
And the .top file must be uploaded to the robot using Choregraphe.
So, your absolute path in this case might be something like
topf_path = "/home/nao/simpleTestes/diaSimples/testeSimples_ptb.top"
... or if you want to be a bit cleaner, if you know your script is being executed at the root of your application package, use os.path:
topf_path = os.path.abspath("diaSimples/testeSimples_ptb.top")
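Note that os.path.abspath resolves against the current working directory, which can change depending on how the script is launched. A sketch that pins the path to the script's own location instead (directory names taken from the question):
import os

# resolve the topic file relative to this script's directory,
# independent of where the script was launched from
here = os.path.dirname(os.path.abspath(__file__))
topf_path = os.path.join(here, "diaSimples", "testeSimples_ptb.top")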

How to get the running time in the Pylint of each file

I am using pylint in my project and it runs for over 1 min, which is too long for me. How can I get the specific running time of each file in my project?
Here is my research:
The issue on the github
How to speed up pylint
Can you give me some advice about the issue and how to speed up pylint?
Thanks in advance!
I created a new checker class and added print statements to get the timings. I don't think this is the best way, and I will do further research.
import datetime

from pylint.checkers import BaseChecker
from pylint.interfaces import IAstroidChecker

class CustomTimeChecker(BaseChecker):
    """
    Find the available check types in the following url:
    https://github.com/PyCQA/pylint/blob/63eb8c4663a77d0caf2a842b716e4161f9763a16/pylint/checkers/typecheck.py
    """
    __implements__ = IAstroidChecker

    name = 'import-time-checker'
    priority = -1

    def __init__(self, linter):
        super().__init__(linter)
        self.begin = datetime.datetime.now()  # reset again per module

    def visit_module(self, node):
        self.begin = datetime.datetime.now()  # restart the clock for each file
        print('Entering the module ' + str(node.name))

    def leave_module(self, node):
        end = datetime.datetime.now()
        print('Leaving the module ' + str(node.name) +
              ' after ' + str(end - self.begin))
        print('*' * 40)

    def visit_import(self, node):
        end = datetime.datetime.now()
        print('import ' + str(node.names) + ' reached after ' + str(end - self.begin))

    def visit_importfrom(self, node):
        end = datetime.datetime.now()
        print('from-import ' + str(node.modname) + ' reached after ' + str(end - self.begin))

    def visit_attribute(self, node):
        end = datetime.datetime.now()
        print('attribute ' + str(node.attrname) + ' visited after ' + str(end - self.begin))

    def leave_functiondef(self, node):
        end = datetime.datetime.now()
        print('function ' + str(node.name) + ' finished after ' + str(end - self.begin))

def register(linter):
    linter.register_checker(CustomTimeChecker(linter))
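To try it, the checker module has to be importable (e.g. on PYTHONPATH); assuming the file above is saved as time_checker.py, it can then be loaded via pylint's plugin option:
pylint --load-plugins=time_checker yourpackage/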
You can speed up pylint by spawning multiple processes and checking files in parallel. This functionality is exposed via the -j command-line parameter. If the provided number is 0, then the total number of CPUs will be autodetected and used. From the output of pylint --help:
-j <n-processes>, --jobs=<n-processes>
Use multiple processes to speed up Pylint. Specifying
0 will auto-detect the number of processors available
to use. [current: 1]
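For example, to let pylint auto-detect the number of worker processes (myproject/ is a placeholder for your source tree):
pylint -j 0 myproject/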
There are some limitations to running checks in parallel in the current implementation. It is not possible to use custom plugins (i.e. the --load-plugins option), nor is it possible to use initialization hooks (i.e. the --init-hook option).

ReactorNotRestartable error

I have a tool where I am implementing UPnP discovery of devices connected to the network.
For that I have written a script that uses a datagram class.
Implementation:
Whenever the scan button is pressed on the tool, it runs that UPnP script and lists the devices in the box created in the tool.
This was working fine.
But when I press the scan button again, it gives me the following error:
Traceback (most recent call last):
  File "tool\ui\main.py", line 508, in updateDevices
    upnp_script.main("server", localHostAddress)
  File "tool\ui\upnp_script.py", line 90, in main
    reactor.run()
  File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1191, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)
  File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1171, in startRunning
    ReactorBase.startRunning(self)
  File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 683, in startRunning
    raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable
Main function of upnp script:
def main(mode, iface):
    klass = Server if mode == 'server' else Client
    obj = klass
    obj(iface)
    reactor.run()
There is a server class which sends the M-SEARCH command (UPnP) for discovering devices:
MS = 'M-SEARCH * HTTP/1.1\r\nHOST: %s:%d\r\nMAN: "ssdp:discover"\r\nMX: 2\r\nST: ssdp:all\r\n\r\n' % (SSDP_ADDR, SSDP_PORT)
In the server class constructor, after sending the M-SEARCH, I am stopping the reactor:
reactor.callLater(10, reactor.stop)
From Google I found that we cannot restart a reactor because it is a limitation:
http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#WhycanttheTwistedsreactorberestarted
Please guide me on how to modify my code so that I can scan devices more than once and don't get this "reactor not restartable" error.
In response to "Please guide me how can I modify my code...": you haven't provided enough code that I would know how to specifically guide you; I would need to understand the (Twisted part of the) logic around your scan/search.
If I were to offer a generic design/pattern/mental-model for the "twisted reactor" though, I would say think of it as your program's main loop. (Thinking about the reactor that way is what makes the problem obvious to me, anyway.)
I.e. most long-running programs have a form something like:
def main():
    while True:
        check_and_update_some_stuff()
        time.sleep(10)
That same code in twisted is more like:
def main():
    # the LoopingCall adds the given function to the reactor loop
    l = task.LoopingCall(check_and_update_some_stuff)
    l.start(10.0)
    reactor.run() # <--- this is the endless while loop
If you think of the reactor as "the endless loop that makes up the main() of my program" then you'll understand why no-one is bothering to add support for "restarting" the reactor. Why would you want to restart an endless loop? Instead of stopping the core of your program, you should instead only surgically stop the task inside that is complete, leaving the main loop untouched.
You seem to be implying that the current code will keep sending M-SEARCHes endlessly while the reactor is running. So change your sending code so that it stops repeating the send (I can't tell you exactly how to do this because you didn't provide that code, but for instance, a LoopingCall can be turned off by calling its .stop method).
A runnable example follows:
#!/usr/bin/python
from twisted.internet import task
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ServerFactory

class PollingIOThingy(object):
    def __init__(self):
        self.sendingcallback = None # Note I'm pushing sendToAll into here in main()
        self.l = None               # Also being pushed in from main()
        self.iotries = 0

    def pollingtry(self):
        self.iotries += 1
        if self.iotries > 5:
            print "stopping this task"
            self.l.stop()
            return
        print "Polling runs: " + str(self.iotries)
        if self.sendingcallback:
            self.sendingcallback("Polling runs: " + str(self.iotries) + "\n")

class MyClientConnections(Protocol):
    def connectionMade(self):
        print "Got new client!"
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        print "Lost a client!"
        self.factory.clients.remove(self)

class MyServerFactory(ServerFactory):
    protocol = MyClientConnections

    def __init__(self):
        self.clients = []

    def sendToAll(self, message):
        for c in self.clients:
            c.transport.write(message)

# Normally I would define a class of ServerFactory here but I'm going to
# hack it into main() as they do in the twisted chat, to make things shorter
def main():
    client_connection_factory = MyServerFactory()
    polling_stuff = PollingIOThingy()
    # the following line is what this example is all about:
    polling_stuff.sendingcallback = client_connection_factory.sendToAll
    # push the client connections send def into my polling class
    # if you want to run something every second (instead of 1 second after
    # the end of your last code run, which could vary) do:
    l = task.LoopingCall(polling_stuff.pollingtry)
    polling_stuff.l = l
    l.start(1.0)
    # from: https://twistedmatrix.com/documents/12.3.0/core/howto/time.html
    reactor.listenTCP(5000, client_connection_factory)
    reactor.run()

if __name__ == '__main__':
    main()
This script has extra cruft in it that you might not care about, so just focus on the self.l.stop() in PollingIOThingy's pollingtry method and the l-related stuff in main() to illustrate the point.
(This code comes from SO: Persistent connection in twisted; check that question if you want to know what the extra bits are about.)
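Translated back to the scan button in the question (a hedged sketch, since the original Server/Client classes weren't posted): start the reactor once for the life of the tool, and have the button only trigger another M-SEARCH send rather than another reactor.run():
from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
MS = ('M-SEARCH * HTTP/1.1\r\nHOST: %s:%d\r\nMAN: "ssdp:discover"\r\n'
      'MX: 2\r\nST: ssdp:all\r\n\r\n' % (SSDP_ADDR, SSDP_PORT))

class Scanner(DatagramProtocol):
    def scan(self):
        # one M-SEARCH per call; wire this to the scan button instead of
        # stopping and restarting the reactor
        self.transport.write(MS, (SSDP_ADDR, SSDP_PORT))

    def datagramReceived(self, datagram, address):
        print "response from %r" % (address,)

scanner = Scanner()
reactor.listenUDP(0, scanner)
reactor.callWhenRunning(scanner.scan)  # first scan once the loop is up
reactor.run()  # started once, never restarted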

using topic exchange to send message from one method to another

Recently I have been going through the celery & kombu documentation, as I need them integrated in one of my projects. I have a basic understanding of how this should work, but documentation examples using different brokers have me confused.
Here is the scenario:
Within my application I have two views, ViewA and ViewB. Both of them do some expensive processing, so I wanted to have them use celery tasks for the processing. So this is what I did.
views.py
def ViewA(request):
    tasks.do_task_a.apply_async(args=[a, b])

def ViewB(request):
    tasks.do_task_b.apply_async(args=[a, b])
tasks.py
@task()
def do_task_a(a, b):
    # Do something Expensive
    pass

@task()
def do_task_b(a, b):
    # Do something Expensive here too
    pass
Until now, everything is working fine. The problem is that do_task_a creates a txt file on the system which I need to use in do_task_b. Now, in the do_task_b method I can check for the file's existence and call the task's retry method [which is what I am doing right now] if the file does not exist.
Here, I would rather take a different approach (i.e. where messaging comes in). I would want do_task_a to send a message to do_task_b once the file has been created, instead of looping the retry method until the file is created.
I read through the documentation of celery and kombu and updated my settings as follows.
BROKER_URL = "django://"
CELERY_RESULT_BACKEND = "database"
CELERY_RESULT_DBURI = "sqlite:///celery"
TASK_RETRY_DELAY = 30 #Define Time in Seconds
DATABASE_ROUTERS = ['portal.db_routers.CeleryRouter']
CELERY_QUEUES = (
    Queue('filecreation', exchange=exchanges.genex, routing_key='file.create'),
)
CELERY_ROUTES = ('celeryconf.routers.CeleryTaskRouter',)
And I am stuck here; I don't know where to go from here.
What should I do next to make do_task_a broadcast a message to do_task_b on file creation? And what should I do to make do_task_b receive (consume) the message and process the code further?
Any ideas and suggestions are welcome.
This is a good example for using Celery's callback/linking functionality.
Celery supports linking tasks together so that one task follows another.
You can read more about it here.
apply_async() has two optional arguments:
link: execute the linked task on success
link_error: execute the linked task on an error
@task
def add(a, b):
    return a + b

@task
def total(numbers):
    return sum(numbers)

@task
def error_handler(uuid):
    result = AsyncResult(uuid)
    exc = result.get(propagate=False)
    print('Task %r raised exception: %r\n%r' % (uuid, exc, result.traceback))
Now in your calling function do something like:
def main():
    # for error handling
    add.apply_async((2, 2), link_error=error_handler.subtask())

    # for linking 2 tasks
    add.apply_async((2, 2), link=add.subtask((8, )))
    # output 12

    # what you can do in your case is something like this:
    if user_requires:
        add.apply_async((2, 2), link=add.subtask((8, )))
    else:
        add.apply_async((2, 2))
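Mapped onto the question (a sketch, assuming do_task_a returns the path of the file it created; link prepends the parent task's return value to the callback's arguments, and create_txt_file/process_file are hypothetical helpers):
@task()
def do_task_a(a, b):
    file_path = create_txt_file(a, b)  # hypothetical helper
    return file_path

@task()
def do_task_b(file_path, a, b):
    # runs only after do_task_a succeeded, so the file already exists
    process_file(file_path, a, b)      # hypothetical helper

# in the view: fire do_task_a and chain do_task_b onto its success
tasks.do_task_a.apply_async(args=[a, b],
                            link=tasks.do_task_b.subtask((a, b)))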
Hope this is helpful