I am having an issue with Celery; I will explain with the code:
def samplefunction(request):
    print("This is a samplefunction")
    a, b = 5, 6
    myceleryfunction.delay(a, b)
    return Response({"msg": "process execution started"})

@celery_app.task(name="sample celery", base=something)
def myceleryfunction(a, b):
    c = a + b
    my_obj = MyModel()
    my_obj.value = c
    my_obj.save()
In my case, when one person triggers the Celery task it works perfectly.
If many people send requests, they are processed one by one.
So imagine that my Celery function "myceleryfunction" takes 3 minutes to complete the background task.
If 10 requests come in at the same time, the last one is delayed about 30 minutes before its output completes.
How do I solve this issue, or is there any alternative?
Thank you
I'm assuming you are running a single worker with default settings for the worker.
This will have the worker running with worker_pool=prefork and worker_concurrency=<nr of CPUs>
If the machine it runs on only has a single CPU, you won't get any parallel running tasks.
To get parallelisation you can:
set worker_concurrency to something > 1; this will use multiple processes in the same worker.
start additional workers
use celery multi to start multiple workers
when running the worker in a Docker container, add replicas of the container
See Concurrency for more info.
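As a minimal sketch of the first option, you can raise the concurrency either on the command line (celery -A myproject worker --concurrency=4) or in the app configuration; the project name and broker URL below are placeholders, so adjust them to your setup:

from celery import Celery

# "myproject" and the Redis broker URL are assumptions for this sketch.
celery_app = Celery("myproject", broker="redis://localhost:6379/0")

# Let a single worker run up to 4 tasks in parallel with the prefork pool,
# instead of the default of one process per CPU.
celery_app.conf.worker_concurrency = 4

With 4 processes, the 10 tasks from the example above finish in roughly 9 minutes instead of 30, since up to 4 of the 3-minute tasks run side by side.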
I'm trying to use multiprocessing inside a Docker container. However, I'm facing two issues.
(I'm using Python 2.7.)
Creating ProcessingPool()/Pool() (I tried both) takes an abnormally long time, maybe over a minute or two.
After it processes the function, it hangs.
I'm basically trying to run a very simple case inside my container. Here's what I have:
from pathos.multiprocessing import ProcessingPool
import multiprocessing

class MultiprocessClassExample():
    .
    .
    .
    def worker(self, number):
        return "Printing number %s" %(number)
    .
    .
    def generateNumber(self):
        PROCESSES = multiprocessing.cpu_count() - 1
        NUMBER = ['One', 'Two', 'Three', 'Four', 'Five']
        result = ProcessingPool(PROCESSES).map(self.worker, NUMBER)
        print("Finished processing.")
        print(result)
and I call it using the following code:
MultiprocessClassExample().generateNumber()
Now, this seems straightforward enough. I ran this in a Jupyter notebook and it ran without an issue. I also tried running Python inside my Docker container and running the above code there, and it went fine. So I'm assuming it has to do with the complete code that I have. Obviously I didn't write out all the code, but that's the main section I'm trying to handle right now.
I would expect the above code to work here as well. However, the first thing I notice is that when I call ProcessingPool(), it takes a long time. I tried regular multiprocessing.Pool() before and had the same effect, whereas in the notebook it ran quickly and smoothly.
After waiting several minutes, it prints:
Printing number One
Printing number Two
Printing number Three
Printing number Four
Printing number Five
and that's it. It never prints out "Finished processing." and it just hangs there.
But when the print statements appear, I notice that several debug messages appear at the same time. They say:
[CRITICAL] WORKER TIMEOUT
[WARNING] Worker graceful timeout
[INFO] Worker exiting
[INFO] Booting worker with pid:
Any suggestions would be greatly appreciated.
These are the tasks in tasks.py:
@shared_task
def add(x, y):
    return x * y

@shared_task
def verify_external_video(video_id, media_id, video_type):
    return True
I am calling verify_external_video 1000+ times from a custom Django command that I run from the CLI:
verify_external_video.delay("1", "2", "3")
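For reference, a minimal sketch of what such a management command looks like; the module path and the get_video_rows() helper are placeholders for illustration:

# Hypothetical path: lstv_api_v1/management/commands/verify_videos.py
from django.core.management.base import BaseCommand

from lstv_api_v1.tasks import verify_external_video

def get_video_rows():
    # Placeholder: yield (video_id, media_id, video_type) tuples from
    # wherever the real command reads its 1000+ rows.
    return [("1", "2", "3")]

class Command(BaseCommand):
    help = "Queue a verify_external_video task for every external video"

    def handle(self, *args, **options):
        for video_id, media_id, video_type in get_video_rows():
            verify_external_video.delay(video_id, media_id, video_type)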
In Flower, I am then monitoring the success or failure of the jobs. A random number of jobs fail, others succeed...
Those that fail do so for two reasons that I just cannot understand:
NotRegistered('lstv_api_v1.tasks.verify_external_video')
If it's not registered, why are 371 succeeding?
and...
TypeError: verify_external_video() takes 1 positional argument but 3 were given
Again, a mystery, as I quit Celery and Flower and ran them AGAIN from scratch before running my CLI Django command. There is no code living anywhere where verify_external_video() takes 1 parameter. And if this is the case... why are SOME of the calls successful?
This type of failure isn't sequential. I can have 3 successful jobs, followed by one that does not succeed, followed by success again, so it's not a timing issue.
I'm at a loss here.
In short: I had a number of rogue Celery worker processes hanging around from previous "violent" Ctrl-C's, which had prevented graceful termination of what was running.
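If you suspect the same thing, a quick way to check from Python is to ask the broker-connected workers to identify themselves; the import path of the Celery app below is an assumption, so adjust it to your project:

from lstv_api_v1.celery import app  # assumed location of the Celery app instance

# Every worker node currently consuming from the broker answers these calls;
# a stale worker running old code shows up with an outdated registered-task list.
insp = app.control.inspect()
print(insp.stats())
print(insp.registered())

Any extra worker you don't recognise in that output is a candidate for the NotRegistered and wrong-signature errors, since it may be running an older version of tasks.py.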
import multiprocessing

def get_url(url):
    # conditions
    ...

threads = []
thread = multiprocessing.Process(target=get_url, args=(url,))  # args must be a tuple
threads.append(thread)
for st in threads:
    st.start()
Now I want to execute 10 requests at a time; once those 10 are completed, pick another 10, and so on. I was going through the documentation but I haven't found a use case for this. I am using this module for the first time. Any help would be appreciated.
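A minimal sketch of one way to do this (assuming Python 3 and that urls is your list of URLs; get_url here is a stand-in for the real function): a multiprocessing.Pool with 10 processes runs at most 10 calls at a time and hands the remaining URLs to workers as they become free, which gives the batch-of-10 behaviour without managing Process objects by hand.

import multiprocessing

def get_url(url):
    # placeholder for the real per-URL work
    return url

if __name__ == "__main__":
    urls = ["http://example.com/%d" % i for i in range(100)]  # assumed input list

    # 10 worker processes: at most 10 get_url calls run concurrently,
    # and the pool feeds in the rest as workers finish.
    with multiprocessing.Pool(processes=10) as pool:
        results = pool.map(get_url, urls)

    print(results)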
I have a method which updates all DNS records for an account, with one delayed job for each record. There are a lot of workers and queues, which is great for getting other jobs done quickly, but this particular job completes quickly and overwhelms the database. Because each job requires DNS to resolve, it's difficult to move this to a process which collects the information and then writes once, so I'm instead looking for a way to stagger delayed jobs.
As far as I know, just using sleep(0.1) in the after method should do the trick. I wanted to see if anyone else has specifically dealt with this situation and solved it.
I've created a custom job to test out a few different ideas. Here's some example code:
def update_dns
  Account.active.find_each do |account|
    account.domains.where('processed IS NULL').find_each do |domain|
      begin
        Delayed::Job.enqueue StaggerJob.new(self.id)
      rescue Exception => e
        self.domain_logger.error "Unable to update DNS for #{domain.name} (id=#{domain.id})"
        self.domain_logger.error e.message
        self.domain_logger.error e.backtrace
      end
    end
  end
end
When a cron job calls Domain.update_dns, the delayed job table floods with tens of thousands of jobs and the workers start working through them. There are so many workers and queues that even setting the lowest priority overwhelms the database, and other requests suffer.
Here's the StaggerJob class:
class StaggerJob < Struct.new(:domain_id)
  def perform
    domain.fetch_dns_job
  end

  def enqueue(job)
    job.account_id = domain.account_id
    job.owner = domain
    job.priority = 10 # lowest
    job.save
  end

  def after(job)
    # Sleep to avoid overwhelming the DB
    sleep(0.1)
  end

  private

  def domain
    @domain ||= Domain.find self.domain_id
  end
end
This may entirely do the trick, but I wanted to verify if this technique was sensible.
It turned out the priority for these jobs was set to 0 (highest). Setting it to 10 (lowest) helped. Sleeping in the job's after method would work, but there's a better way:
Delayed::Job.enqueue StaggerJob.new(domain.id, :fetch_dns!), run_at: (Time.now + (0.2*counter).seconds) # stagger by 0.2 seconds
This ends up pausing outside the job instead of inside. Better!
When multiprocessing.pool.apply_async() is called, is it possible to know which worker process is assigned the task?
I have the following code:
self.workers = multiprocessing.Pool(MAX_WORKERS,
                                    init_worker,
                                    maxtasksperchild=MAX_TASKS_PER_WORKER)
self.pending = []

tasks = get_tasks()  # keeps on sending tasks that need to be executed
for task in tasks:
    self.pending.append((task, self.workers.apply_async(run_task, task)))

# monitor AsyncResults
for task, result in self.pending:
    if result.ready():
        try:
            result.get(timeout=1)
        except:
            self.pending.append((task, self.workers.apply_async(run_task, task)))
I found that some of the worker processes terminate while executing the run_task() method, and from the AsyncResult object there is no way to know that the worker process has terminated. So in my case the self.pending list always holds the task and result whose process terminated in between, and no work is actually getting done.
A task can take any amount of time, so I don't want to use a timeout to assign a new process to the task, as it is possible that the first process is still running.
Is there a way to know which process is associated with an AsyncResult, so that we can check whether that process is still alive, or a way to get a notification in case the process terminates?
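A minimal sketch of a partial workaround, written as a standalone script rather than the class-based code above, with run_task standing in for the real task: the task itself reports which pool worker ran it via multiprocessing.current_process(), and the parent cross-checks that pid against multiprocessing.active_children().

import multiprocessing

def run_task(task):
    # Return the worker's pid alongside a placeholder result, so the parent
    # can see which process handled which task.
    worker = multiprocessing.current_process()
    return worker.pid, task * 2

if __name__ == "__main__":
    pool = multiprocessing.Pool(4)
    pending = [(t, pool.apply_async(run_task, (t,))) for t in range(10)]

    for task, result in pending:
        pid, value = result.get(timeout=10)
        # active_children() lists this process's live children, which
        # includes the pool workers, so liveness can be checked by pid.
        alive = {p.pid for p in multiprocessing.active_children()}
        print("task %r ran in pid %d (alive: %s) -> %r"
              % (task, pid, pid in alive, value))

    pool.close()
    pool.join()

This only identifies the worker after the result arrives, so it does not by itself detect a worker that died mid-task, but it does tie each AsyncResult to a concrete pid that can be monitored.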