How to execute a PostgreSQL query asynchronously in Django

In the current scenario, result2 is not executed until result1 has finished. I want to execute select * from Test1 and select * from Test2 asynchronously: result2 should not wait until result1 completes its execution, and finally both result sets should be sent to the client at once.
def fetchQueryData(request):
    cur = connection.cursor()
    cur.execute("select * from Test1")
    result1 = cur.fetchall()
    json1 = result1[0][0]
    cur.execute("select * from Test2")
    result2 = cur.fetchall()
    json2 = result2[0][0]
    cur.close()
    # safe=False is required to serialize a non-dict (here, a list)
    return JsonResponse([json1, json2], safe=False)

Celery can help you with that. You can start asynchronous tasks and wait for them to complete in a while loop. It requires some setup and additional software on your server, though.
To me, it seems that your queries are quite long-running. I don't think waiting for them in a single request is the best solution in terms of user experience, but that is probably another topic.
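Without bringing in Celery, you could also run the two queries concurrently in threads and join the results inside the view; Django opens a separate database connection per thread, so this is safe for plain reads. The sketch below uses a stand-in run_query helper backed by sqlite3 (an assumption, so the example is self-contained) where the real code would use a Django cursor:

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor


def run_query(sql):
    # Stand-in for a Django cursor: each thread opens its own connection,
    # mirroring Django's per-thread connection handling.
    conn = sqlite3.connect(":memory:")
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()


# Submit both queries at once; neither waits for the other to finish.
with ThreadPoolExecutor(max_workers=2) as pool:
    future1 = pool.submit(run_query, "select 'from Test1'")
    future2 = pool.submit(run_query, "select 'from Test2'")
    result1, result2 = future1.result(), future2.result()

print(result1[0][0], result2[0][0])  # from Test1 from Test2
```

In the actual view you would build the JsonResponse from both results after the two futures resolve.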

Related

Why does .save() still take up time when using transaction.atomic()?

In Django, I read that transaction.atomic() should leave all the queries to be executed until the end of the code segment in a single transaction. However, it doesn't seem to be working as expected:
import time
from django.db import transaction

my_objs = Obj.objects.all()[:100]
count = avg1 = 0
with transaction.atomic():
    for obj in my_objs:
        start = time.time()
        obj.save()
        end = time.time()
        avg1 += end - start
        count += 1
print("total:", avg1, "average:", avg1 / count)
Why, when wrapping the .save() call in start/end timings to check how long it takes, is it not instantaneous?
The result of the code above was:
total: 3.5636022090911865 average: 0.035636022090911865
When logging the SQL queries with the debugger, it also displays an UPDATE statement for each time .save() is called.
Any ideas why it's not working as expected?
PS. I am using Postgres.
There is probably just a misunderstanding here about what transaction.atomic actually does. It doesn't wait to execute all the queries -- the ORM is still talking to the database as you execute your code in an atomic block. It simply waits to commit (SQL COMMIT;) changes until the [successful] end of the block. If there is an exception before the end of the transaction block, none of the modifications in the transaction are committed; they are rolled back.
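You can see the same semantics outside Django with the plain sqlite3 module (a stand-in here, but it uses the same BEGIN/COMMIT/ROLLBACK mechanics the ORM relies on): each statement runs immediately, and only the commit/rollback decision is deferred to the end of the transaction block.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table obj (id integer primary key, value integer)")
conn.execute("insert into obj (value) values (1)")
conn.commit()

try:
    # The connection context manager commits on success, rolls back on error,
    # much like transaction.atomic().
    with conn:
        conn.execute("update obj set value = 2")  # UPDATE is sent immediately...
        raise RuntimeError("boom")                # ...but never committed
except RuntimeError:
    pass

value = conn.execute("select value from obj").fetchone()[0]
print(value)  # 1 -- the update was rolled back
```

The UPDATE statement executes (and takes time) inside the block; only the commit is withheld, which is why each .save() in the question still shows a measurable cost.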

Why is Celery not executing in parallel in Django?

I am having an issue with Celery; I will explain with the code:
def samplefunction(request):
    print("This is a samplefunction")
    a, b = 5, 6
    myceleryfunction.delay(a, b)
    return Response({"msg": "process execution started"})

@celery_app.task(name="sample celery", base=something)
def myceleryfunction(a, b):
    c = a + b
    my_obj = MyModel()
    my_obj.value = c
    my_obj.save()
In my case, when one person calls the Celery task, it works perfectly.
If many people send requests, they are processed one by one.
So imagine that my Celery function "myceleryfunction" takes 3 minutes to complete the background task.
If 10 requests come in at the same time, the last one completes with a 30-minute delay.
How can I solve this issue, or is there any other alternative?
Thank you
I'm assuming you are running a single worker with default settings for the worker.
This will have the worker running with worker_pool=prefork and worker_concurrency=<nr of CPUs>
If the machine it runs on only has a single CPU, you won't get any parallel running tasks.
To get parallelisation you can:
set worker_concurrency to something > 1; this will use multiple processes in the same worker.
start additional workers
use celery multi to start multiple workers
when running the worker in a Docker container, add replicas of the container
See Concurrency in the Celery docs for more info.
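For example (assuming a project module named proj -- substitute your own), the first two options above look like this on the command line:

```shell
# One worker with 4 prefork child processes (up to 4 tasks in parallel):
celery -A proj worker --concurrency=4

# Or start two separate workers with celery multi:
celery multi start w1 w2 -A proj --concurrency=2
```

With either setup, ten 3-minute tasks arriving together are spread across the available processes instead of queuing behind a single one.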

How to launch a long-running SQLite query in Go?

I have code that, when needed, interrupts an SQLite query using a context with a deadline. My problem is writing a unit test for it: I need to launch a query that I know will run for a long time, ideally an infinite loop, and check that it is interrupted. I use https://github.com/mattn/go-sqlite3 to access SQLite 3 from Go.
For example, this:
with recursive rec as
(select 1 as n union all select n + 1 from rec)
select n from rec;
returns 1 immediately instead of looping, as it does in the SQLite console (is there something I need to do to enable CTEs?). I also found no sleep function or anything similar.
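One detail worth knowing: SQLite produces recursive-CTE rows lazily, so a plain select n from rec hands back the first row immediately; wrapping it in an aggregate like count(*) forces SQLite to evaluate the whole (infinite) recursion and genuinely blocks. The same pattern can be sketched with Python's sqlite3 module, where Connection.interrupt() plays the role of the Go context deadline (a Python analogue, not the go-sqlite3 API):

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:")

# interrupt() is documented as safe to call from another thread; fire it
# shortly after the query starts running.
timer = threading.Timer(0.2, conn.interrupt)
timer.start()

try:
    # count(*) forces full evaluation of the infinite recursion, so this
    # blocks until the interrupt arrives.
    conn.execute(
        "with recursive rec(n) as (select 1 union all select n + 1 from rec) "
        "select count(*) from rec"
    ).fetchone()
    interrupted = False
except sqlite3.OperationalError:
    interrupted = True
finally:
    timer.cancel()

print(interrupted)  # True
```

In the Go test, the equivalent aggregate query should likewise block until the context deadline cancels it.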

Delayed Job Overwhelming DB

I have a method which updates all DNS records for an account with one delayed job for each record. There are a lot of workers and queues, which is great for getting other jobs done quickly, but this particular job completes quickly and overwhelms the database. Because each job requires DNS to resolve, it's difficult to move this to a process that collects the information and then writes once. So I'm instead looking for a way to stagger the delayed jobs.
As far as I know, just using sleep(0.1) in the after method should do the trick. I wanted to see if anyone else has specifically dealt with this situation and solved it.
I've created a custom job to test out a few different ideas. Here's some example code:
def update_dns
  Account.active.find_each do |account|
    account.domains.where('processed IS NULL').find_each do |domain|
      begin
        Delayed::Job.enqueue StaggerJob.new(domain.id)
      rescue Exception => e
        self.domain_logger.error "Unable to update DNS for #{domain.name} (id=#{domain.id})"
        self.domain_logger.error e.message
        self.domain_logger.error e.backtrace
      end
    end
  end
end
When a cron job calls Domain.update_dns, the delayed job table floods with tens of thousands of jobs, and the workers start working through them. There are so many workers and queues that even setting the lowest priority overwhelms the database, and other requests suffer.
Here's the StaggerJob class:
class StaggerJob < Struct.new(:domain_id)
  def perform
    domain.fetch_dns_job
  end

  def enqueue(job)
    job.account_id = domain.account_id
    job.owner = domain
    job.priority = 10 # lowest
    job.save
  end

  def after(job)
    # Sleep to avoid overwhelming the DB
    sleep(0.1)
  end

  private

  def domain
    @domain ||= Domain.find(domain_id)
  end
end
This may entirely do the trick, but I wanted to verify if this technique was sensible.
It turned out the priority for these jobs was set to 0 (highest). Setting it to 10 (lowest) helped. Sleeping in the after method would work, but there's a better way.
Delayed::Job.enqueue StaggerJob.new(domain.id, :fetch_dns!), run_at: (Time.now + (0.2*counter).seconds) # stagger by 0.2 seconds
This ends up pausing outside the job instead of inside. Better!
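The run_at arithmetic above is simple to sketch on its own (a Python illustration of the same 0.2-second stagger; the helper name is hypothetical):

```python
from datetime import datetime, timedelta


def staggered_run_ats(count, step_seconds=0.2, start=None):
    # Each job's run_at is pushed back step_seconds further than the last,
    # so the queue drains at a bounded rate instead of all at once.
    start = start or datetime.now()
    return [start + timedelta(seconds=step_seconds * i) for i in range(count)]


run_ats = staggered_run_ats(10, start=datetime(2020, 1, 1))
print((run_ats[-1] - run_ats[0]).total_seconds())  # 1.8
```

Ten jobs spread over 1.8 seconds keeps the workers busy without a thundering herd against the database.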

Django: Gracefully restart nginx + fastcgi sites to reflect code changes?

Common situation: I have a client on my server who may update some of the code in his python project. He can ssh into his shell and pull from his repository and all is fine -- but the code is stored in memory (as far as I know) so I need to actually kill the fastcgi process and restart it to have the code change.
I know I can gracefully restart fcgi but I don't want to have to manually do this. I want my client to update the code, and within 5 minutes or whatever, to have the new code running under the fcgi process.
Thanks
First off, if uptime is important to you, I'd suggest making the client do it. It can be as simple as giving him a command called deploy-code. With your method, if there is an error in their code, fixing it requires up to a 10-minute turnaround (read: downtime), assuming he gets it right.
That said, if you actually want to do this, you should create a daemon which will look for files modified within the last 5 minutes. If it detects one, it will execute the reboot command.
Code might look something like:
import os, time

CODE_DIR = '/tmp/foo'

restarted = False
while True:
    if restarted:
        restarted = False
    time.sleep(5 * 60)
    for root, dirs, files in os.walk(CODE_DIR):
        if restarted:
            break
        for filename in files:
            if restarted:
                break
            updated_on = os.path.getmtime(os.path.join(root, filename))
            current_time = time.time()
            if current_time - updated_on <= 6 * 60:  # 6 min
                # 6 min could offer false negatives, but that's better
                # than false positives
                restarted = True
                print("We should execute the restart command here.")