How to get the running time in the Pylint of each file - django

I am using pylint in my project and it takes over 1 minute to run, which is too long for me. How can I get the specific running time of each file in my project?
Here is my research:
The issue on GitHub
How to speed up pylint
Can you give me some advice about this issue and how to speed up pylint?
Thanks in advance!

I created a new checker class and added print statements to get the time. I don't think this is the best way, and I will do further research.
import datetime

from pylint.checkers import BaseChecker
from pylint.interfaces import IAstroidChecker

class CustomTimeChecker(BaseChecker):
    """Print how much time has elapsed when each node type is visited.

    Find the check types in the following file:
    https://github.com/PyCQA/pylint/blob/63eb8c4663a77d0caf2a842b716e4161f9763a16/pylint/checkers/typecheck.py
    """
    __implements__ = IAstroidChecker
    name = 'import-time-checker'
    priority = -1

    def __init__(self, linter):
        super().__init__(linter)
        self.begin = datetime.datetime.now()

    def visit_importfrom(self, node):
        end = datetime.datetime.now()
        print('import-from visited after ' + str(end - self.begin))

    def visit_import(self, node):
        end = datetime.datetime.now()
        print('import visited after ' + str(end - self.begin))

    def visit_attribute(self, node):
        end = datetime.datetime.now()
        print('Attribute ' + str(node.attrname) + ' visited after ' + str(end - self.begin))

    def leave_functiondef(self, node):
        end = datetime.datetime.now()
        print('Function ' + str(node.name) + ' checked after ' + str(end - self.begin))

    def leave_module(self, node):
        """The per-module timing is reported here."""
        end = datetime.datetime.now()
        print('Leaving the module ' + str(node.name) + ' after ' + str(end - self.begin))
        print('*' * 40)

    def visit_module(self, node):
        # reset the clock so each module's time is measured separately
        self.begin = datetime.datetime.now()
        print('Entering the module ' + str(node.name))

def register(linter):
    linter.register_checker(CustomTimeChecker(linter))

You can speed up pylint by spawning multiple processes and checking files in parallel. This functionality is exposed via the -j command-line parameter. If the provided number is 0, then the total number of CPUs will be autodetected and used. From the output of pylint --help:
-j <n-processes>, --jobs=<n-processes>
    Use multiple processes to speed up Pylint. Specifying
    0 will auto-detect the number of processors available
    to use. [current: 1]
There are some limitations on running checks in parallel in the current implementation. It is not possible to use custom plugins (the --load-plugins option), nor is it possible to use initialization hooks (the --init-hook option).
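If per-file timings are all you need, you could also time pylint from the outside rather than patching a checker. A minimal sketch of mine (assuming a hypothetical myproject directory, pylint on PATH, and Python 3.7+ for capture_output):

import pathlib
import subprocess
import time

for path in sorted(pathlib.Path('myproject').rglob('*.py')):
    start = time.monotonic()
    # run pylint on one file and discard its report; we only want the elapsed time
    subprocess.run(['pylint', str(path)], capture_output=True)
    print('{}: {:.2f}s'.format(path, time.monotonic() - start))

Each run pays pylint's startup cost, so the absolute numbers are inflated, but the relative ordering still shows which files are slow.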

Related

ROS: saving object in a file when a ros node is killed

I am running a ROS node with a Kalman filter. The Kalman filter is an object with states that get updated as time plays out. Conventionally, a ROS node has a run(self) method that runs at a specified frequency using a while loop:
while not rospy.is_shutdown():
    do this
Going through each loop, my Kalman filter object updates. I just want to be able to save the Kalman filter object when the node is shut down, either by some external condition or when the user presses Ctrl+C. I have not been able to do this. In the run(self) method, I tried:
while not rospy.is_shutdown():
    do this
    # save in file
    output = pathlib.Path('path/to/location')
    results_path = output.with_suffix('.npz')
    with open(results_path, 'xb') as results_file:
        np.savez(results_file, kfObj=kf_list)
But it has not worked. Is it not executing the save command? If Ctrl+C is pressed, does it stop short of executing it? What's the way to do it?
Check out the atexit module:
http://docs.python.org/library/atexit.html
import atexit
import pathlib

import numpy as np

def exit_handler():
    output = pathlib.Path('path/to/location')
    results_path = output.with_suffix('.npz')
    with open(results_path, 'xb') as results_file:
        np.savez(results_file, kfObj=kf_list)

atexit.register(exit_handler)
Just be aware that this works great for normal termination of the script, but it won't get called in all cases (e.g. fatal internal errors).
Why not try the following Python example class structure, using shutdown hooks:
import rospy

class Hardware_Interface:
    def __init__(self, selectedBoard):
        ...
        # Housekeeping, cleanup at the end
        rospy.on_shutdown(self.shutdown)
        # Get the connection settings from the parameter server
        self.port = rospy.get_param("~"+self.board+"-port", "/dev/ttyACM0")
        # Get the prefix
        self.prefix = rospy.get_param("~"+self.board+"-prefix", "travel")
        # Overall loop rate
        self.rate = int(rospy.get_param("~rate", 5))
        self.period = rospy.Duration(1/float(self.rate))
        ...

    def shutdown(self):
        rospy.loginfo("Shutting down Hardware Interface Node...")
        try:
            rospy.loginfo("Stopping the robot...")
            self.controller.send(0, 0, 0, 0)
            #self.cmd_vel_pub.publish(Twist())
            rospy.sleep(2)
        except:
            rospy.loginfo("Cannot stop!")
        try:
            self.controller.close()
        except:
            pass
        finally:
            rospy.loginfo("Serial port closed.")
            os._exit(0)
This is just an extract from a personal script; please modify it to your needs. I imagine that on_shutdown will do the trick. Another similar approach comes from my friends at the Robot Ignite Academy in The Construct and looks like this:
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

class my_class():
    def __init__(self):
        ...
        self.cmd = Twist()
        self.ctrl_c = False
        self.rate = rospy.Rate(10)  # 10hz
        rospy.on_shutdown(self.shutdownhook)

    def publish_once_in_cmd_vel(self):
        while not self.ctrl_c:
            ...

    def shutdownhook(self):
        # works better than the rospy.is_shutdown()
        self.ctrl_c = True

    def move_something(self, linear_speed=0.2, angular_speed=0.2):
        self.cmd.linear.x = linear_speed
        self.cmd.angular.z = angular_speed
        self.publish_once_in_cmd_vel()

if __name__ == '__main__':
    rospy.init_node('class_test', anonymous=True)
    ...
This is obviously a sample of their code (for more please join the academy)
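Tying this back to the original question, a minimal sketch (my own, with a hypothetical node name and the placeholder save path from the question) that registers a shutdown hook to save the Kalman filter object:

import pathlib

import numpy as np
import rospy

kf_list = []  # stand-in for the Kalman filter states from the question

def save_results():
    # called on Ctrl+C, rosnode kill, or any other shutdown
    results_path = pathlib.Path('path/to/location').with_suffix('.npz')
    with open(results_path, 'xb') as results_file:
        np.savez(results_file, kfObj=kf_list)

rospy.init_node('kalman_node')  # hypothetical node name
rospy.on_shutdown(save_results)
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    # update the filter here, then sleep
    rate.sleep()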

Audio Timeout error in Speech to text API of Google Cloud

I aim to make my own Jarvis, which listens all the time and activates when I say "hello". I learned that the Google Cloud Speech-to-Text API doesn't listen for more than 60 seconds, but then I found this not-so-famous link, where it listens for an infinite duration. The author of the GitHub script says that he played a trick: the script refreshes after 60 seconds, so the program doesn't crash.
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/speech/cloud-client/transcribe_streaming_indefinite.py
Following is the modified version, since I wanted it to answer my questions after hearing "hello", and not answer me all the time. Now if I ask my Jarvis a question whose answer takes more than 60 seconds, so the stream doesn't get time to refresh, the program crashes :(
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Google Cloud Speech API sample application using the streaming API.
NOTE: This module requires the additional dependency `pyaudio`. To install
using pip:
pip install pyaudio
Example usage:
python transcribe_streaming_indefinite.py
"""
# [START speech_transcribe_infinite_streaming]
from __future__ import division
import time
import re
import sys
import os
from google.cloud import speech
from pygame.mixer import *
from googletrans import Translator
# running=True
translator = Translator()
init()
import pyaudio
from six.moves import queue
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:\\Users\\mnauf\\Desktop\\rehandevice\\key.json"
from commands2 import commander
cmd=commander()
# Audio recording parameters
STREAMING_LIMIT = 55000
SAMPLE_RATE = 16000
CHUNK_SIZE = int(SAMPLE_RATE / 10) # 100ms
def get_current_time():
    return int(round(time.time() * 1000))

def duration_to_secs(duration):
    return duration.seconds + (duration.nanos / float(1e9))

class ResumableMicrophoneStream:
    """Opens a recording stream as a generator yielding the audio chunks."""
    def __init__(self, rate, chunk_size):
        self._rate = rate
        self._chunk_size = chunk_size
        self._num_channels = 1
        self._max_replay_secs = 5
        # Create a thread-safe buffer of audio data
        self._buff = queue.Queue()
        self.closed = True
        self.start_time = get_current_time()
        # 2 bytes in 16 bit samples
        self._bytes_per_sample = 2 * self._num_channels
        self._bytes_per_second = self._rate * self._bytes_per_sample
        self._bytes_per_chunk = (self._chunk_size * self._bytes_per_sample)
        self._chunks_per_second = (
            self._bytes_per_second // self._bytes_per_chunk)

    def __enter__(self):
        self.closed = False
        self._audio_interface = pyaudio.PyAudio()
        self._audio_stream = self._audio_interface.open(
            format=pyaudio.paInt16,
            channels=self._num_channels,
            rate=self._rate,
            input=True,
            frames_per_buffer=self._chunk_size,
            # Run the audio stream asynchronously to fill the buffer object.
            # This is necessary so that the input device's buffer doesn't
            # overflow while the calling thread makes network requests, etc.
            stream_callback=self._fill_buffer,
        )
        return self

    def __exit__(self, type, value, traceback):
        self._audio_stream.stop_stream()
        self._audio_stream.close()
        self.closed = True
        # Signal the generator to terminate so that the client's
        # streaming_recognize method will not block the process termination.
        self._buff.put(None)
        self._audio_interface.terminate()

    def _fill_buffer(self, in_data, *args, **kwargs):
        """Continuously collect data from the audio stream, into the buffer."""
        self._buff.put(in_data)
        return None, pyaudio.paContinue

    def generator(self):
        while not self.closed:
            if get_current_time() - self.start_time > STREAMING_LIMIT:
                self.start_time = get_current_time()
                break
            # Use a blocking get() to ensure there's at least one chunk of
            # data, and stop iteration if the chunk is None, indicating the
            # end of the audio stream.
            chunk = self._buff.get()
            if chunk is None:
                return
            data = [chunk]
            # Now consume whatever other data's still buffered.
            while True:
                try:
                    chunk = self._buff.get(block=False)
                    if chunk is None:
                        return
                    data.append(chunk)
                except queue.Empty:
                    break
            yield b''.join(data)
def search(responses, stream, code):
    responses = (r for r in responses if (
        r.results and r.results[0].alternatives))
    num_chars_printed = 0
    for response in responses:
        if not response.results:
            continue
        # The `results` list is consecutive. For streaming, we only care about
        # the first result being considered, since once it's `is_final`, it
        # moves on to considering the next utterance.
        result = response.results[0]
        if not result.alternatives:
            continue
        # Display the transcription of the top alternative.
        top_alternative = result.alternatives[0]
        transcript = top_alternative.transcript
        # music.load("/home/pi/Desktop/rehandevice/end.mp3")
        # music.play()
        # Display interim results, but with a carriage return at the end of the
        # line, so subsequent lines will overwrite them.
        # If the previous result was longer than this one, we need to print
        # some extra spaces to overwrite the previous result
        overwrite_chars = ' ' * (num_chars_printed - len(transcript))
        if not result.is_final:
            sys.stdout.write(transcript + overwrite_chars + '\r')
            sys.stdout.flush()
            num_chars_printed = len(transcript)
        else:
            # print(transcript + overwrite_chars)
            # Exit recognition if any of the transcribed phrases could be
            # one of our keywords.
            if code == 'ur-PK':
                transcript = translator.translate(transcript).text
            print("Your command: ", transcript + overwrite_chars)
            if "hindi assistant" in (transcript + overwrite_chars).lower():
                cmd.respond("Alright. Talk to me in urdu", code=code)
                main('ur-PK')
            elif "english assistant" in (transcript + overwrite_chars).lower():
                cmd.respond("Alright. Talk to me in English", code=code)
                main('en-US')
            cmd.discover(text=transcript + overwrite_chars, code=code)
            for i in range(10):
                print("Hello world")
            break
        num_chars_printed = 0
def listen_print_loop(responses, stream, code):
    """Iterates through server responses and prints them.

    The responses passed is a generator that will block until a response
    is provided by the server.

    Each response may contain multiple results, and each result may contain
    multiple alternatives; for details, see
    https://cloud.google.com/speech-to-text/docs/reference/rpc/google.cloud.speech.v1#streamingrecognizeresponse.
    Here we print only the transcription for the top alternative of the top
    result.

    In this case, responses are provided for interim results as well. If the
    response is an interim one, print a line feed at the end of it, to allow
    the next result to overwrite it, until the response is a final one. For the
    final one, print a newline to preserve the finalized transcription.
    """
    responses = (r for r in responses if (
        r.results and r.results[0].alternatives))
    music.load(r"C:\\Users\\mnauf\\Desktop\\rehandevice\\coins.mp3")
    num_chars_printed = 0
    for response in responses:
        if not response.results:
            continue
        # The `results` list is consecutive. For streaming, we only care about
        # the first result being considered, since once it's `is_final`, it
        # moves on to considering the next utterance.
        result = response.results[0]
        if not result.alternatives:
            continue
        # Display the transcription of the top alternative.
        top_alternative = result.alternatives[0]
        transcript = top_alternative.transcript
        # Display interim results, but with a carriage return at the end of the
        # line, so subsequent lines will overwrite them.
        #
        # If the previous result was longer than this one, we need to print
        # some extra spaces to overwrite the previous result
        overwrite_chars = ' ' * (num_chars_printed - len(transcript))
        if not result.is_final:
            sys.stdout.write(transcript + overwrite_chars + '\r')
            sys.stdout.flush()
            num_chars_printed = len(transcript)
        else:
            print("Listen print loop", transcript + overwrite_chars)
            # Exit recognition if any of the transcribed phrases could be
            # one of our keywords.
            if re.search(r'\b(hello)\b', transcript.lower(), re.I):
                # print("Give me order")
                music.play()
                search(responses, stream, code)
                break
            elif re.search(r'\b(ہیلو)\b', transcript, re.I):
                music.play()
                search(responses, stream, code)
                break
            num_chars_printed = 0
def main(code):
    cmd.respond("I am Rayhaan dot A Eye. How can I help you?", code=code)
    client = speech.SpeechClient()
    config = speech.types.RecognitionConfig(
        encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=SAMPLE_RATE,
        language_code='en-US',
        max_alternatives=1,
        enable_word_time_offsets=True)
    streaming_config = speech.types.StreamingRecognitionConfig(
        config=config,
        interim_results=True)
    mic_manager = ResumableMicrophoneStream(SAMPLE_RATE, CHUNK_SIZE)
    print('Say "Quit" or "Exit" to terminate the program.')
    with mic_manager as stream:
        while not stream.closed:
            audio_generator = stream.generator()
            requests = (speech.types.StreamingRecognizeRequest(
                audio_content=content)
                for content in audio_generator)
            responses = client.streaming_recognize(streaming_config,
                                                   requests)
            # Now, put the transcription responses to use.
            try:
                listen_print_loop(responses, stream, code)
            except Exception:
                pass  # ignore the error and reopen the stream

if __name__ == '__main__':
    main('en-US')
# [END speech_transcribe_infinite_streaming]
You can call your functions after recognition in a different thread. Example:
from threading import Thread

new_thread = Thread(target=music.play)
new_thread.daemon = True  # not always needed; read more about the daemon property
new_thread.start()
Or if you just want to prevent the exception, you can always use try/except. Example:
with mic_manager as stream:
    while not stream.closed:
        try:
            audio_generator = stream.generator()
            requests = (speech.types.StreamingRecognizeRequest(
                audio_content=content)
                for content in audio_generator)
            responses = client.streaming_recognize(streaming_config,
                                                   requests)
            # Now, put the transcription responses to use.
            listen_print_loop(responses, stream, code)
        except BaseException as e:
            print("Exception occurred - {}".format(str(e)))

Does GDB have a "step-to-next-call" instruction?

WinDBG and the related windows kernel debuggers support a "pc" command which runs the target until reaching the next call statement (in assembly). In other words, it breaks just prior to creating a new stack frame, sort of the opposite of "finish". "Start" in GDB runs until main starts, but in essence I want 'start' but with a wildcard of "any next frame".
I'm trying to locate a similar functionality in GDB, but have not found it.
Is this possible?
Example WinDBG doc: http://windbg.info/doc/1-common-cmds.html#4_expr_and_cmds
Simple answer: no, step-to-next-call is not part of GDB commands.
GDB/Python-aware answer: no, it's not part of GDB commands, but it's easy to implement!
I'm not sure whether you want to stop before or after the call instruction has executed.
To stop before, you need to stepi/nexti (next assembly instruction) until you see call in the current instruction:
import gdb

class StepBeforeNextCall (gdb.Command):
    def __init__ (self):
        super (StepBeforeNextCall, self).__init__ ("step-before-next-call",
                                                   gdb.COMMAND_OBSCURE)

    def invoke (self, arg, from_tty):
        arch = gdb.selected_frame().architecture()
        while True:
            current_pc = addr2num(gdb.selected_frame().read_register("pc"))
            disa = arch.disassemble(current_pc)[0]
            if "call" in disa["asm"]:  # or startswith ?
                break
            SILENT = True
            gdb.execute("stepi", to_string=SILENT)
        print("step-before-next-call: next instruction is a call.")
        print("{}: {}".format(hex(int(disa["addr"])), disa["asm"]))

def addr2num(addr):
    try:
        return int(addr)  # Python 3
    except:
        return long(addr)  # Python 2

StepBeforeNextCall()
To stop after the call, you compute the current stack depth, then step until it's deeper:
import gdb

def callstack_depth():
    depth = 1
    frame = gdb.newest_frame()
    while frame is not None:
        frame = frame.older()
        depth += 1
    return depth

class StepToNextCall (gdb.Command):
    def __init__ (self):
        super (StepToNextCall, self).__init__ ("step-to-next-call",
                                               gdb.COMMAND_OBSCURE)

    def invoke (self, arg, from_tty):
        start_depth = current_depth = callstack_depth()
        # step until we're one step deeper
        while current_depth == start_depth:
            SILENT = True
            gdb.execute("step", to_string=SILENT)
            current_depth = callstack_depth()
        # display information about the new frame
        gdb.execute("frame 0")

StepToNextCall()
Just put that in a file, source it with GDB (or from your .gdbinit), and that will provide you with the new commands step-before-next-call and step-to-next-call.
Relevant documentation is here:
Python API table of content
Basic Python
Python representation of architectures
Accessing inferior stack frames from Python.

Wait until a Jenkins build is complete

I am using Python 2.7 and Jenkins.
I am writing some code in Python that will perform a check-in and wait/poll for a Jenkins job to be complete. I would like some thoughts on how to achieve it.
Python function to create a check-in in Perforce -> this can be easily done, as P4 has a CLI.
Python code to detect when a build gets triggered -> I have the changelist and the job number. How do I poll the Jenkins API for the build log to check whether it has the appropriate changelists? The output of this step is a build URL which is carrying out the job.
How do I wait till the Jenkins job is complete?
Can I use snippets from the Jenkins REST API or from the Python Jenkins module?
If you need to know if the job is finished, the buildNumber and buildTimestamp are not enough.
This is the gist of how I find out if a job is complete. I have it in Ruby rather than Python, so perhaps someone could update this into real code (a Python translation follows the pseudocode):
lastBuild = get jenkins/job/myJob/lastBuild/buildNumber
get jenkins/job/myJob/lastBuild/build?token=gogogo
currentBuild = get jenkins/job/myJob/lastBuild/buildNumber
while currentBuild == lastBuild
    sleep 1
    currentBuild = get jenkins/job/myJob/lastBuild/buildNumber
buildInfo = get jenkins/job/myJob/[currentBuild]/api/xml?depth=0
while buildInfo["freeStyleBuild/building"] == true
    buildInfo = get jenkins/job/myJob/[currentBuild]/api/xml?depth=0
    sleep 1
i.e. I found I needed to (A) wait until the build starts (a new build number appears), and (B) wait until the building finishes (building is false).
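A rough Python translation of the pseudocode above (my own sketch, not the answerer's code; it uses the requests library, the JSON flavor of the same API, and a hypothetical Jenkins URL and job name):

import time

import requests

JENKINS = 'http://jenkins.example.com'  # hypothetical
JOB = 'myJob'

def get_json(path):
    return requests.get('{}/job/{}/{}'.format(JENKINS, JOB, path)).json()

last_build = get_json('lastBuild/api/json')['number']
requests.post('{}/job/{}/build?token=gogogo'.format(JENKINS, JOB))

# (A) wait until a new build number appears
current_build = last_build
while current_build == last_build:
    time.sleep(1)
    current_build = get_json('lastBuild/api/json')['number']

# (B) wait until that build stops building
while get_json('{}/api/json?depth=0'.format(current_build))['building']:
    time.sleep(1)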
You can query the last build timestamp to determine if the build finished. Compare it to what it was just before you triggered the build, and see when it changes. To get the timestamp, add /lastBuild/buildTimestamp to your job URL.
As a matter of fact, in your Jenkins, add /lastBuild/api/ to any job and you will see a lot of API information. It even has a Python API, but I am not familiar with that, so I can't help you further.
However, if you are using XML, you can add /lastBuild/api/xml?depth=0, and inside the XML you can see the <changeSet> element with the list of revisions/commit messages that triggered the build.
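For example, a short sketch (mine, not the answerer's, with a hypothetical job URL) that watches the timestamp and then reads the <changeSet> commit messages with requests and ElementTree:

import time
import xml.etree.ElementTree as ET

import requests

job_url = 'http://jenkins.example.com/job/myJob'  # hypothetical
stamp_before = requests.get(job_url + '/lastBuild/buildTimestamp').text

# ... submit the Perforce change that triggers the build here ...

# poll until the timestamp changes, i.e. a new build has run
while requests.get(job_url + '/lastBuild/buildTimestamp').text == stamp_before:
    time.sleep(5)

# then check which changelists the new build picked up
root = ET.fromstring(requests.get(job_url + '/lastBuild/api/xml?depth=0').text)
for msg in root.iter('msg'):  # commit messages inside <changeSet>
    print(msg.text)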
Simple solution using invoke and block_until_complete methods (tested with Python 3.7)
import jenkinsapi
from jenkinsapi.jenkins import Jenkins
...
server = Jenkins(jenkinsUrl, username=jenkinsUser,
                 password=jenkinsToken, ssl_verify=sslVerifyFlag)
job = server.create_job(jobName, None)
queue = job.invoke()
queue.block_until_complete()
Inspired by a test method in pycontribs.
This snippet starts a build job and waits until the job is done.
It is easy to start the job, but we need some logic to know when the job is done. First we need to wait for a new build number to be assigned, and then we can query the job for details:
import time

from jenkinsapi import jenkins

server = jenkins.Jenkins(jenkinsurl, username=username, password='******')
job = server.get_job(j_name)
prev_id = job.get_last_buildnumber()
server.build_job(j_name)
while True:
    print('Waiting for build to start...')
    if prev_id != job.get_last_buildnumber():
        break
    time.sleep(3)
print('Running...')
last_build = job.get_last_build()
while last_build.is_running():
    time.sleep(1)
print(str(last_build.get_status()))
Don't know if this was available at the time of the question, but the jenkinsapi module's Job.invoke() and/or Jenkins.build_job() return a QueueItem object, which can block_until_building() or block_until_complete():
jobq = server.build_job(job_name, job_params)
jobq.block_until_building()
print("Job %s (%s) is building." % (jobq.get_job_name(), jobq.get_build_number()))
jobq.block_until_complete(5) # check every 5s instead of the default 15
print("Job complete, %s" % jobq.get_build().get_status())
I was going through the same problem, and this worked for me, using Python 3 and python-jenkins:
while "".join([d['color'] for d in j.get_jobs() if d['name'] == "job_name"]) == 'blue_anime':
    print('Job is Running')
    time.sleep(1)
print('Job Over!!')
Working Github Script: Link
This is working for me:
#!/usr/bin/env python
import jenkins
import time

server = jenkins.Jenkins('https://jenkinsurl/', username='xxxxx', password='xxxxxx')
j_name = 'test'
server.build_job(j_name, {'testparam1': 'test', 'testparam2': 'test'})
while True:
    print('Running....')
    if server.get_job_info(j_name)['lastCompletedBuild']['number'] == server.get_job_info(j_name)['lastBuild']['number']:
        print('Last ID %s, Current ID %s' % (server.get_job_info(j_name)['lastCompletedBuild']['number'], server.get_job_info(j_name)['lastBuild']['number']))
        break
    time.sleep(3)
print('Stop....')
console_output = server.get_build_console_output(j_name, server.get_job_info(j_name)['lastBuild']['number'])
print(console_output)
The main issue is that build_job doesn't return the number of the job; it returns the number of a queue item (which only lasts 5 minutes). So the trick is:
build_job
get the queue number
with the queue number, get the job number
now we know the name of the job and the job number
get_job_info and loop over the builds till we find the one with our job number
check the status
So I made a function for it, with a time_out:
import time
from datetime import datetime, timedelta

import jenkins

def launch_job(jenkins_connection, job_name, parameters={}, wait=False, interval=30, time_out=7200):
    """
    Create a jenkins job and wait for the job to finish
    :param jenkins_connection: Jenkins server (jenkins object)
    :param job_name: the name of the job we want to build and monitor (string)
    :param parameters: the parameters of the job to build (dictionary)
    :param wait: whether we want to wait for the job to finish (bool)
    :param interval: how often we want to poll, in seconds (int)
    :param time_out: break the loop after this many seconds (int)
    :return: build job number (int)
    """
    # we launch the job and it returns a queue_id
    job_id = jenkins_connection.build_job(job_name, parameters)
    # from the queue_id we get the job number that was created
    queue_job = jenkins_connection.get_queue_item(job_id, depth=0)
    build_number = queue_job["executable"]["number"]
    print(f"job_name: {job_name} build_number: {build_number}")
    if wait is True:
        now = datetime.now()
        later = now + timedelta(seconds=time_out)
        while True:
            # we check current time vs the timeout (later)
            if datetime.now() > later:
                raise ValueError(f"Job: {job_name}:{build_number} has been running for more than {time_out}s; we "
                                 f"stop monitoring the job, you can check it in Jenkins")
            b = jenkins_connection.get_job_info(job_name, depth=1, fetch_all_builds=False)
            for i in b["builds"]:
                loop_id = i["id"]
                if int(loop_id) == build_number:
                    result = i["result"]
                    print(f"result: {result}")  # in the json it looks like null
                    if result is not None:
                        return i
                    # break
            time.sleep(interval)
            # return result
    return build_number
After we ask Jenkins to build the job, we get the queue number, from the queue number we get the job number, and then we loop over the build info until the status changes from None to something else.
If it works, it will return the dictionary with the information of that job. (I hope the jenkins library implements something like this.)

using topic exchange to send message from one method to another

Recently, I have been going through the Celery and Kombu documentation, as I need them integrated into one of my projects. I have a basic understanding of how this should work, but documentation examples using different brokers have me confused.
Here is the scenario:
Within my application I have two views, ViewA and ViewB. Both of them do some expensive processing, so I wanted to have them use celery tasks for the processing. So this is what I did.
views.py
def ViewA(request):
    tasks.do_task_a.apply_async(args=[a, b])

def ViewB(request):
    tasks.do_task_b.apply_async(args=[a, b])
tasks.py
@task()
def do_task_a(a, b):
    # Do something expensive
    ...

@task()
def do_task_b(a, b):
    # Do something expensive here too
    ...
Until now, everything is working fine. The problem is that do_task_a creates a txt file on the system which I need to use in do_task_b. Now, in do_task_b I can check for the file's existence and call the task's retry method (which is what I am doing right now) if the file does not exist.
Here, I would rather take a different approach (i.e. where messaging comes in). I would want do_task_a to send a message to do_task_b once the file has been created, instead of looping the retry method until the file is created.
I read through the documentation of celery and kombu and updated my settings as follows.
BROKER_URL = "django://"
CELERY_RESULT_BACKEND = "database"
CELERY_RESULT_DBURI = "sqlite:///celery"
TASK_RETRY_DELAY = 30  # define time in seconds
DATABASE_ROUTERS = ['portal.db_routers.CeleryRouter']
CELERY_QUEUES = (
    Queue('filecreation', exchange=exchanges.genex, routing_key='file.create'),
)
CELERY_ROUTES = ('celeryconf.routers.CeleryTaskRouter',)
And I am stuck here; I don't know where to go from here.
What should I do next to make do_task_a broadcast a message to do_task_b on file creation? And what should I do to make do_task_b receive (consume) the message and process the code further?
Any ideas and suggestions are welcome.
This is a good example for using Celery's callback/linking feature.
Celery supports linking tasks together so that one task follows another.
You can read more about it here.
apply_async() has two optional arguments:
link: execute the linked function on success
link_error: execute the linked function on an error
from celery.result import AsyncResult

@task
def add(a, b):
    return a + b

@task
def total(numbers):
    return sum(numbers)

@task
def error_handler(uuid):
    result = AsyncResult(uuid)
    exc = result.get(propagate=False)
    print('Task %r raised exception: %r\n%r' % (uuid, exc, result.traceback))
Now in your calling function, do something like this:
def main():
    # for error handling
    add.apply_async((2, 2), link_error=error_handler.subtask())

    # for linking 2 tasks
    add.apply_async((2, 2), link=add.subtask((8,)))
    # output: 12

    # what you can do in your case is something like this
    if user_requires:
        add.apply_async((2, 2), link=add.subtask((8,)))
    else:
        add.apply_async((2, 2))
Hope this is helpful
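Applied to the scenario in the question, a minimal sketch (my own adaptation of the linking idea above, assuming the same @task decorator as tasks.py and a hypothetical file path): have do_task_a return the path of the file it creates, and link do_task_b so it receives that path once do_task_a succeeds.

@task()
def do_task_a(a, b):
    path = '/tmp/task_a_output.txt'  # hypothetical output file
    with open(path, 'w') as f:
        f.write('expensive result')
    return path

@task()
def do_task_b(path):  # receives do_task_a's return value
    with open(path) as f:
        data = f.read()
    # Do something expensive here too

# in the view:
def ViewA(request):
    tasks.do_task_a.apply_async(args=[a, b], link=tasks.do_task_b.subtask())

Note that do_task_b's signature changes here: with link, the parent task's return value is prepended to the subtask's arguments, so do_task_b no longer has to poll the disk for the file.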