Async call in API made in Django

I am using DRF with Twilio's SMS sending service. I have added this code to run on object save, which happens in some of my API calls. But as far as I can see, Django waits for the Twilio code to finish (it presumably blocks on the HTTP response), and it takes around 1-2 seconds to get a response from the Twilio server.
I would like to optimize my API, but I am not sure how to send the request for the Twilio SMS asynchronously. This is my code:
import time

from django.db.models.signals import post_save
from twilio.rest import TwilioRestClient

def send_sms_registration(sender, instance, **kwargs):
    # Crude timing to show how long the Twilio call blocks the request
    start = int(round(time.time() * 1000))
    if not instance.ignore_sms:
        client = TwilioRestClient(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
        activation_code = instance.activation_code
        client.messages.create(
            to=instance.phone_number,
            from_=DEFAULT_SMS_NAME,
            body=SMS_REGISTRATION_TEXT + activation_code,
        )
    end = int(round(time.time() * 1000))
    print("send_sms_registration")
    print(end - start)

post_save.connect(send_sms_registration, sender=Person, dispatch_uid="send_sms_registration")
Thanks for suggestions!

The API call is not asynchronous; you need another mechanism to make the SMS sending async. You can use any of the following (a minimal celery sketch follows this list):
django-background-tasks: simple and doesn't require a worker
python-rq: great for simple async tasks
celery: a more complete solution
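For illustration, here is a minimal celery sketch (an assumption, not from the original answer): move the Twilio call into a task and only enqueue it from the signal handler, so the API request returns immediately. TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, DEFAULT_SMS_NAME and SMS_REGISTRATION_TEXT are the asker's settings constants.

# tasks.py -- sketch only; assumes a Celery app is already configured in the project
from celery import shared_task
from twilio.rest import TwilioRestClient

@shared_task
def send_sms_task(phone_number, body):
    # Runs in the worker process, so the web request never waits on Twilio
    client = TwilioRestClient(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
    client.messages.create(to=phone_number, from_=DEFAULT_SMS_NAME, body=body)

# signals.py -- enqueue the task instead of sending the SMS inline
def send_sms_registration(sender, instance, **kwargs):
    if not instance.ignore_sms:
        send_sms_task.delay(instance.phone_number,
                            SMS_REGISTRATION_TEXT + instance.activation_code)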

Related

AWS SQS python sqs.receive_message returning only one message

I am using the Amazon SQS Python SDK (boto3) to see how many messages there are in the queue for a given queue URL. In the Amazon GUI console I can see there are 3 messages in the queue for that queue URL. However, I never get more than 1 message as output when I run the command. Below is my code:
import boto3
import json
from botocore.exceptions import ClientError

def GetSecretKeyAndAccesskey():
    # code to pull secret key and access key
    return (aws_access_key, aws_secret_key)

# Create SQS client
aws_access_key_id, aws_secret_access_key = GetSecretKeyAndAccesskey()
sqs = boto3.client(
    'sqs',
    aws_access_key_id=str(aws_access_key_id),
    aws_secret_access_key=str(aws_secret_access_key),
    region_name='eu-west-1',
)
response = sqs.receive_message(
    QueueUrl='my_queue_url',
    AttributeNames=[
        'All',
    ],
    MaxNumberOfMessages=10,
)
print(response["Messages"][0])
Every time I run the code I get a different message ID, and if I change my print statement to check the next list index, I get "list index out of range", meaning that only one message was returned:
print(response["Messages"][1])
C:\>python testing.py
d4e57e1d-db62-4fc5-8233-c5576cb2603d
C:\>python testing.py
857858e9-55dc-4d23-aead-3c6622feccc5
First, you need to add "WaitTimeSeconds" to turn on long polling and collect more messages during a single connection.
The other issue is that if you only put 3 messages on the queue, they get spread across the backend systems as part of the redundancy of the AWS SQS service. So when you call SQS, it connects you to one of those systems and delivers only the messages that happen to be available there. If you increase the total number of messages, you'll get more messages per request.
I wrote this code to demonstrate the functionality of SQS and to let you play around with the concept and test it.
import boto3
import json

session = boto3.Session(region_name="us-east-2", profile_name="dev")
sqs = session.client('sqs')

def get_message():
    # Long polling (WaitTimeSeconds) lets a single call gather more messages
    response = sqs.receive_message(QueueUrl='test-queue', MaxNumberOfMessages=10, WaitTimeSeconds=10)
    return len(response["Messages"])

def put_messages(seed):
    for message_number in range(seed):
        body = {"test": "message {}".format(message_number)}
        sqs.send_message(QueueUrl='test-queue', MessageBody=json.dumps(body))

if __name__ == '__main__':
    put_messages(2)
    print(get_message())

How to create a queue for python-requests in Django?

A REST API service has a request limit (say a maximum of 100 requests per minute). In Django, I am trying to let users access such an API and retrieve data in real time to update SQL tables. The problem is that if multiple users access the API at once, the request limit is likely to be exceeded.
Here is a code snippet as an example of how I currently perform requests: each user adds a list of objects he wants to request and runs request_engine().start(object_list) to access the API. I use multithreading to speed up the requests, and I allow retrying failed API requests by setting a limit (upper_limit) on the number of attempts per request object.
As I understand it, there should be some queue for the API requests. I expect there is a more elegant solution for this, but I could not find any similar examples. How can one implement/rewrite this for multi-user usage with Django?
import requests
from multiprocessing.dummy import Pool as ThreadPool

N = 50           # number of threads
upper_limit = 1  # limit on the number of requests for a single object

class request_engine():
    def __init__(self):
        pass

    def start(self, objs):
        self.objs = {obj: {'status': 0, 'data': None} for obj in objs}
        done = False
        while not done:
            self.parallel_requests()
            done = all(_['status'] > upper_limit or _['status'] == -1 for obj, _ in self.objs.items())
        return dict(self.objs)

    def single_request(self, request_obj):
        URL = f"https://reqres.in/api/users?page={request_obj}"
        r = requests.get(url=URL)
        if r.ok:
            res = r.json()
            self.objs[request_obj]['status'] = -1
            self.objs[request_obj]['data'] = res
        else:
            self.objs[request_obj]['status'] += 1

    def parallel_requests(self):
        objs = [obj for obj, _ in self.objs.items() if _['status'] != -1 and _['status'] <= upper_limit]
        pool = ThreadPool(N)
        pool.map(self.single_request, objs)
        pool.close()
        pool.join()

objs = [1, 2, 3, 4, 5, 6, 7, 7, 8, 234, 124, 24, 535, 6, 234, 24, 4, 1, 3, 4, 5, 4, 3, 5, 3, 1, 5, 2, 3, 5, 3]
result = request_engine().start(objs)
print([_['status'] for obj, _ in result.items()])
# status corresponds to the number of unsuccessful requests
# status=-1 implies success of the request
Thanks in advance.
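One common way to serialize everyone's calls behind a single limit, offered here as a sketch under assumptions rather than a definitive design, is a process-wide token bucket that every thread must pass through before making a request. TokenBucket and rate_limited_get are illustrative names, not from the question.

# Sketch only: a token bucket shared by all user threads so that combined
# traffic stays under the provider's limit (here, 100 requests per minute).
import threading
import time

import requests

class TokenBucket:
    def __init__(self, rate_per_minute):
        self.capacity = rate_per_minute
        self.tokens = float(rate_per_minute)
        self.last_refill = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        # Block until a token is available, refilling in proportion to elapsed time
        while True:
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last_refill) * self.capacity / 60.0)
                self.last_refill = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
            time.sleep(0.05)  # back off briefly before retrying

bucket = TokenBucket(rate_per_minute=100)

def rate_limited_get(url):
    bucket.acquire()  # every thread passes through the same bucket
    return requests.get(url)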

time.sleep blocks Flask request

I am implementing server-sent events using Flask. If I use time.sleep inside my function, the SSE doesn't return anything and the request stays pending in the browser. If I don't sleep, the browser is flooded with responses, so I need some delay. Why is time.sleep blocking the request? Is there another way to add a time delay here?
import time

from flask import Flask, Response

app = Flask(__name__)

def get_message():
    time.sleep(1.0)
    s = "xyz"  # some function here for our business logic
    return s

@app.route('/stream')
def stream():
    def eventStream():
        while True:
            yield 'data: {}\n\n'.format(get_message())
    return Response(eventStream(), mimetype="text/event-stream")
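One plausible explanation, offered here as an assumption: the Flask development server handles one request at a time by default, so a connection sleeping inside eventStream() can block every other request to the same server. Enabling threaded mode is a quick way to test this:

# Sketch: give each request its own thread in the development server,
# so a sleeping event stream no longer blocks other requests.
if __name__ == '__main__':
    app.run(threaded=True)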

Simulate a synchronous request on top of background async job with Flask

I'll first explain the architecture of my system and then move to the question:
I have a REST API which is used as my API gateway. This server is built using Flask. I also have a RabbitMQ cluster, and a client I wrote that listens to a specific queue and executes the tasks it receives.
Until now, all of my requests were asynchronous: each request reaching the API gateway carried a callback_uri field with a URL to POST the results to, the API gateway was only responsible for sending the task to RabbitMQ, and the worker processed the task and at the end POSTed the results back to the callback URL.
My question is:
I want a new endpoint to be synchronous, in the sense that the processing will still be done by the same worker as before, but I want the results to come back to the API gateway, which returns them to the user and releases the connection.
My current solution:
I'm sending a unique callback_uri as part of the request to the worker as before, but now that specific endpoint is implemented by my API gateway and allows both POST and GET methods, so the worker can POST the results once it has finished, and my API gateway keeps polling the callback URL until a result is available and then returns it to the client.
Is there any option preferable to a busy-waiting HTTP worker polling its own endpoint for the results, while still being synchronous so the connection is released only when the results become available?
Code for illustration only:
@app.route('/long_task', methods=['POST'])
@sync_request
def long_task():
    try:
        if request.get_json() is None:
            return ERROR_MSG_NO_JSON, 400
        create_and_send_request_to_rabbitmq()
        return '', 200
    except Exception as ex:
        return ERROR_MSG_NO_DATA, 400

def sync_request(func):
    def call(*args, **kwargs):
        create_callback_uri()
        result = func(*args, **kwargs)
        status_code = result[1]
        if status_code == 200:
            result = get_callback_result()
        return result
    return call

def get_callback_result():
    callback_uri = request.get_json()['callback_uri']
    has_answer = False
    headers = {'content-type': 'application/json'}
    empty_response = {}
    content = json.dumps(empty_response)
    try:
        with Timeout(seconds=SYNC_REQUEST_TIMEOUT_SECONDS):
            while not has_answer:
                response = requests.get(callback_uri, headers=headers)
                if response.status_code == 200:
                    has_answer = True
                    content = response.content
                else:
                    time.sleep(0.2)
    except TimeoutException:
        log.debug('Timed out on sync request for request %s ' % request)
    return content, 200
So if I understand you correctly, you want your backend to wait for the response from some worker (via RabbitMQ). You can achieve that by implementing RPC over RabbitMQ; the key idea is to use a correlation ID (see the sketch below).
But the most efficient way would definitely be to serve the client over WebSockets (or a raw TCP socket if it is not a browser) and notify it directly when the job is done. That way you don't lock resources (client connection, RabbitMQ queues) and you avoid the performance hit of RPC.
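For illustration, a minimal sketch of RPC over RabbitMQ using pika; the queue name 'task_queue' and the payload are assumptions. It presumes a worker that consumes 'task_queue' and publishes its result to the reply_to queue with the same correlation_id.

import uuid

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Exclusive, auto-named queue that receives this caller's reply only
callback_queue = channel.queue_declare(queue='', exclusive=True).method.queue

corr_id = str(uuid.uuid4())
response = None

def on_reply(ch, method, props, body):
    global response
    if props.correlation_id == corr_id:  # ignore replies meant for other callers
        response = body

channel.basic_consume(queue=callback_queue, on_message_callback=on_reply, auto_ack=True)

channel.basic_publish(
    exchange='',
    routing_key='task_queue',
    properties=pika.BasicProperties(reply_to=callback_queue, correlation_id=corr_id),
    body='{"payload": "..."}',
)

# Drive I/O until the matching reply arrives; the HTTP handler returns only then
while response is None:
    connection.process_data_events(time_limit=1)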

Weird Behavior When Using Python requests.put() with Flask

Background
I have a service A accessible with HTTP requests, and other services that want to invoke its APIs.
Problem
When I test service A's APIs with POSTMAN, every request works fine. But when I use Python's requests library to make these requests, there is one PUT method that just won't work: for some reason, the called PUT handler cannot receive the data (HTTP body) at all, though it does receive the headers. Meanwhile, a POST method called in the same manner receives the data perfectly.
I managed to achieve my goal simply by using the httplib library instead, but I am still quite baffled by what exactly happened here.
The Crime Scene
Route 1:
#app.route("/private/serviceA", methods = ['POST'])
#app.route("/private/serviceA/", methods = ['POST'])
def A_create():
# request.data contains correct data that can be read with request.get_json()
Route 2:
#app.route("/private/serviceA/<id>", methods = ['PUT'])
#app.route("/private/serviceA/<id>/", methods = ['PUT'])
def A_update(id):
# request.data is empty, though request.headers contains headers I passed in
# This happens when sending the request with Python requests library, but not when sending with httplib library or with POSTMAN
# Also, data comes in fine when all other routes are commented out
# Unless all other routes are commented out, this happens even when the function body has only one line printing request.data
Route 3:
#app.route("/private/serviceA/schema", methods = ['PUT'])
def schema_update_column():
# This one again works perfectly fine
Using POSTMAN: [screenshot omitted]
Using requests library from another service:
#app.route("/public/serviceA/<id>", methods = ['PUT'])
def A_update(id):
content = request.get_json()
headers = {'content-type': 'application/json'}
response = requests.put('%s:%s' % (router_config.HOST, serviceA_instance_id) + '/private/serviceA/' + str(id), data=json.dumps(content), headers = headers)
return Response(response.content, mimetype='application/json', status=response.status_code)
Using httplib library from another service:
#app.route('/public/serviceA/<id>', methods=['PUT'])
def update_course(id):
content= request.get_json()
headers = {'content-type': 'application/json'}
conn = httplib.HTTPConnection('%s:%s' % (router_config.HOST, serviceA_instance_id))
conn.request("PUT", "/private/serviceA/%s/" % id, json.dumps(content), headers)
return str(conn.getresponse().read())
Questions
1. What am I doing wrong with route 2?
2. For route 2, the handler doesn't seem to be executed unless the other handlers are commented out, which also confuses me. Is there something important about Flask that I'm not aware of?
Code Repo
Just in case some nice people are interested enough to look at the messy, undocumented code...
https://github.com/fantastic4ever/project1
Service A corresponds to the course service (course_flask.py), and the service calling it corresponds to the router service (router.py).
The version that was still using the requests library is 747e69a11ed746c9e8400a8c1e86048322f4ec39.
In your use of the requests library, you are calling requests.post, which sends a POST request. If you use requests.put instead, you will send a PUT request. That could be the issue.
Requests documentation
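For reference, a minimal sketch of the difference (the URL and payload are placeholders, not from the question):

import requests

payload = {'name': 'test'}
# requests.post sends an HTTP POST; requests.put sends an HTTP PUT.
# The json= keyword serializes the payload and sets the Content-Type header.
requests.post('http://localhost:5000/private/serviceA/', json=payload)
requests.put('http://localhost:5000/private/serviceA/1/', json=payload)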