Objective: to provide the end user with the number of 'notifications' in (almost) real time.
To keep it simple: a notification is generated when userX submits form XYZ, and all other users should see the notification count increment by 1 (if userY sees the number 50, that means there are 50 NEW XYZ forms).
Question #1: given my Django Channels websocket, where should I iterate to get the result? At the moment I placed it under websocket_connect with an endless loop, like so:
import asyncio
import random

from channels.consumer import AsyncConsumer

class EchoDiscussionNotificationConsumer(AsyncConsumer):
    async def websocket_connect(self, event):
        await self.send({
            "type": "websocket.accept",
        })
        # NOT SURE THIS IS A GOOD DESIGN!
        while True:
            await asyncio.sleep(2)
            rand = random.randint(1, 100)
            mesg = "#" + str(rand)
            await self.send({
                'type': 'websocket.send',
                'text': mesg,
            })
This works great, but I don't think it is a good design.
Question #2: I don't want to query the DB every 2 seconds. What I had in mind is to query only when (1) the user logs in and (2) another user submits form XYZ. Once I have a 'table of notifications' from the DB, where should I store it (in memory) for faster access? (The session?)
As you already suggested, you should have a Notification table. A notification should be created each time a form is submitted; you can then use the notification's post_save signal to push the new count to the websocket.
That way you won't have to long-poll the DB, which would defeat the purpose of websockets.
As for where to store the notifications, the DB is quite enough in this case unless you have a very high load.
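To illustrate the signal idea, here is a minimal sketch. It assumes a Notification model with a read flag and consumers that join a shared "notifications" group; both are assumptions, not part of the original code:

# signals.py -- a sketch of the post_save approach described above
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import Notification  # the table suggested above (hypothetical)

@receiver(post_save, sender=Notification)
def notification_created(sender, instance, created, **kwargs):
    if created:
        # the 'read' flag is an assumption; count whatever defines "new"
        count = Notification.objects.filter(read=False).count()
        async_to_sync(get_channel_layer().group_send)(
            "notifications",  # every connected consumer joins this group
            {"type": "notification.count", "text": str(count)},
        )

The consumer then calls group_add("notifications", self.channel_name) in websocket_connect and handles the event in a notification_count method, so the endless loop from Question #1 disappears entirely.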
A REST API service has a request limit (say a maximum of 100 requests per minute). In Django, I am trying to let USERs access such an API and retrieve data in real time to update SQL tables. The problem is that if multiple users access the API at once, the request limit is likely to be exceeded.
Here is a code snippet as an example of how I currently perform requests: each user adds a list of objects he wants to request and runs request_engine().start(object_list) to access the API. I use multithreading to speed up requests. I also allow retrying failed API requests by setting upper_limit, a cap on the number of requests for each request object.
As I understand it, there should be some queue for API requests. I anticipate there must be a more elegant solution for this; however, I could not find any similar examples. How can one implement/rewrite this for multi-user usage with Django?
import requests
from multiprocessing.dummy import Pool as ThreadPool

N = 50           # number of threads
upper_limit = 1  # limit on the number of requests for a single object

class request_engine():
    def __init__(self):
        pass

    def start(self, objs):
        self.objs = {obj: {'status': 0, 'data': None} for obj in objs}
        done = False
        while not done:
            self.parallel_requests()
            done = all(_['status'] > upper_limit or _['status'] == -1 for obj, _ in self.objs.items())
        return dict(self.objs)

    def single_request(self, request_obj):
        URL = f"https://reqres.in/api/users?page={request_obj}"
        r = requests.get(url=URL)
        if r.ok:
            res = r.json()
            self.objs[request_obj]['status'] = -1
            self.objs[request_obj]['data'] = res
        else:
            self.objs[request_obj]['status'] += 1

    def parallel_requests(self):
        objs = [obj for obj, _ in self.objs.items() if _['status'] != -1 and _['status'] <= upper_limit]
        pool = ThreadPool(N)
        pool.map(self.single_request, objs)
        pool.close()
        pool.join()

objs = [1, 2, 3, 4, 5, 6, 7, 7, 8, 234, 124, 24, 535, 6, 234, 24, 4, 1, 3, 4, 5, 4, 3, 5, 3, 1, 5, 2, 3, 5, 3]
result = request_engine().start(objs)
print([_['status'] for obj, _ in result.items()])
# status corresponds to the number of unsuccessful requests
# status=-1 implies success of the request
Thanks in advance.
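For what it's worth, one way to realize the "queue for API requests" idea is to push every user's request through a single rate-limited Celery queue, so the per-minute cap is enforced globally rather than per user. A sketch, assuming a configured Celery app; the task name and option values are illustrative, and note that rate_limit is enforced per worker, so a single worker should consume this queue:

# tasks.py -- a sketch; all requests from all users share one throttled queue
import requests
from celery import shared_task

@shared_task(bind=True, max_retries=1, rate_limit="100/m")  # max_retries mirrors upper_limit
def fetch_object(self, request_obj):
    url = f"https://reqres.in/api/users?page={request_obj}"
    r = requests.get(url=url)
    if r.ok:
        return r.json()
    # failed: re-queue the request, spacing retries out by a minute
    raise self.retry(countdown=60)

Each user's view then enqueues fetch_object.delay(obj) for every object instead of spawning threads, and the workers drain the shared queue at the allowed pace.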
I'm writing a Django app and want to send out tokens using Web3 once Coinpayments sends me a callback about a successful payment. The problem is that Coinpayments sends multiple callbacks at once, and only in one case are the tokens sent; the other callbacks get a "replacement transaction underpriced" error. I've already tried solutions like adding +1 to the nonce or removing the parameter, but that doesn't help, because the transactions are still built with the same nonce. How can that be fixed, or what am I doing wrong?
class CoinpaymentsIPNPaymentView(BaseCoinpaymentsIPNView):
    def post(self, request, order_id, *args, **kwargs):
        status = int(request.POST.get('status'))
        order = Order.objects.get(id=order_id)
        order.status = request.POST.get("status_text")
        if not status >= 100:
            order.save()
            return JsonResponse({"status": status})
        amount = Decimal(request.POST.get('amount1'))
        record = Record.objects.create(
            user=order.user,
            type='i',
            amount=amount,
        )
        order.record = record
        order.save()
        gold_record = GoldRecord.objects.get(from_record=record)
        contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI_JSON)
        transaction = contract.functions.transfer(order.user.wallet.address, int(gold_record.amount * 10 ** 18)).buildTransaction({
            'chainId': 1,
            'gas': 70000,
            'nonce': w3.eth.getTransactionCount(WALLET_ADDRESS)  # address where all tokens are stored
        })
        signed_tx = w3.eth.account.signTransaction(transaction, WALLET_PRIVATE_KEY)  # signing with the above wallet's private key
        tx_hash = w3.eth.sendRawTransaction(signed_tx.rawTransaction)
        print(tx_hash.hex())
        tx_receipt = w3.eth.waitForTransactionReceipt(tx_hash)
        return JsonResponse({"status": status})
P.S. I've already asked this on Ethereum StackExchange, but nobody answered or commented: https://ethereum.stackexchange.com/questions/80961/sending-tokens-out-on-coinpayments-success-payment-using-web3py
OK, let the web know the answer and solution that I found out by myself.
Each transaction should have a unique nonce. I noticed that if I send transactions in a loop and set the nonce to w3.eth.getTransactionCount(WALLET_ADDRESS) + index, all transactions are sent without any errors. So I removed the instant coin sending (and even removed waitForTransactionReceipt to speed things up) and made a management command that processes all payouts; if a payout is sent successfully, I assign its tx_hash. I run the command every 10 minutes with Heroku Scheduler.
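For illustration, a minimal sketch of such a management command. The w3, contract, WALLET_ADDRESS and WALLET_PRIVATE_KEY objects are the ones from the question; the Payout model with address, amount and tx_hash fields and the import paths are hypothetical:

# management/commands/process_payouts.py -- a sketch of the nonce + index approach
from django.core.management.base import BaseCommand

from myapp.models import Payout  # hypothetical bookkeeping model
from myapp.web3_setup import w3, contract, WALLET_ADDRESS, WALLET_PRIVATE_KEY  # as in the question

class Command(BaseCommand):
    help = "Send pending token payouts with strictly increasing nonces"

    def handle(self, *args, **options):
        base_nonce = w3.eth.getTransactionCount(WALLET_ADDRESS)
        for index, payout in enumerate(Payout.objects.filter(tx_hash='')):
            transaction = contract.functions.transfer(
                payout.address,
                int(payout.amount * 10 ** 18),
            ).buildTransaction({
                'chainId': 1,
                'gas': 70000,
                'nonce': base_nonce + index,  # unique nonce per transaction
            })
            signed_tx = w3.eth.account.signTransaction(transaction, WALLET_PRIVATE_KEY)
            tx_hash = w3.eth.sendRawTransaction(signed_tx.rawTransaction)
            payout.tx_hash = tx_hash.hex()  # recorded on success; no waitForTransactionReceipt
            payout.save()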
I have a non-profit website for which I need to handle newsletter emails to probably a thousand people (let's be realistic and give an upper bound of at most 2000-2500 registered users).
I have implemented email this way:
@login_required
def SendEmail(request):
    users = Users.objects.all()
    receivers = [user.Email for user in users]  # a list of addresses, not a generator
    emailTypeSelected = request.POST.get('email_type', -1)
    email_factory = EmailFactory()
    emailManager = email_factory.create_email(emailTypeSelected)
    emailManager.prepare("Some Title")
    emailManager.send_email_to(receivers)
    return render(request, 'new_user_email.html')
And here is the "abstract" class.
class Email(object):
    title = ""
    plain_message = ""
    html_message = ""

    def send_email_to(self, receivers):
        send_mail(
            self.title,
            self.plain_message,
            SENDER,
            receivers,
            html_message=self.html_message
        )
I have tested this code, and it takes a while to send even 1 email to 1 user. My concern is that a thousand emails will put a big overhead on the server.
I was thinking of doing the following:
Break the users into groups of 100 and send email to each group every 30 minutes.
But I am not sure how this can be implemented. It seems I will need some sort of independently triggered threads that handle the emailing for me.
Is there any design pattern you are aware of for solving this problem?
Now, I know the best way to do this is to use an external service that handles newsletter email and frees my server from the job, but as a non-profit website I am trying to minimise expenses, as I already have to pay for the server. So at the moment I am trying to implement this in-house, unless a big problem arises which forces me to go to third-party services.
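For what it's worth, the batching idea can be implemented without threads: a management command that sends one batch per run, scheduled with cron every 30 minutes. A sketch, reusing the EmailFactory above; the newsletter_sent flag on Users, the email type name, and the import paths are hypothetical:

# management/commands/send_newsletter_batch.py -- a sketch of the batching idea
from django.core.management.base import BaseCommand

from myapp.models import Users         # the model from the view above
from myapp.emails import EmailFactory  # the factory from the view above

BATCH_SIZE = 100  # the 100-users-per-run figure from the question

class Command(BaseCommand):
    help = "Email the next batch of users; schedule via cron every 30 minutes"

    def handle(self, *args, **options):
        batch = list(Users.objects.filter(newsletter_sent=False)[:BATCH_SIZE])
        if not batch:
            return  # everyone has been emailed
        email_manager = EmailFactory().create_email('newsletter')  # hypothetical type
        email_manager.prepare("Some Title")
        email_manager.send_email_to([user.Email for user in batch])
        Users.objects.filter(pk__in=[u.pk for u in batch]).update(newsletter_sent=True)

Each run is independent and short-lived, so there is no long-running thread to manage.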
I send a private link to the user via email:
= link_to user_profile_url(user_token: @user.token, user_id: @user.uid), method: :post
To avoid making the user log in again from a mobile mail app, I decided to log the user in if the request has user_token and user_id params.
My concern is that my current method can be brute-forced in a short time.
What's the best practice to avoid brute-force attacks? Thanks.
def get_user_by_token(user_id = nil, token = nil)
  if user_id and token
    User.where({id: user_id, token: token}).first
  else
    nil
  end
end
You have essentially two complementary options:
1. Make the token complex enough that guessing it requires a prohibitively large computational effort. You can use SecureRandom.hex, or even SecureRandom.urlsafe_base64, which includes not only letters and digits but other characters as well.
2. Throttle the action. Keep track of HTTP requests to that specific action and block the user by IP if the number of requests per minute is higher than, for example, 5.
I have a long-running celery task which iterates over an array of items and performs some actions.
The task should somehow report back which item it is currently processing, so the end user is aware of the task's progress.
At the moment my Django app and celery sit together on one server, so I am able to use Django's models to report the status, but I am planning to add more workers which are away from Django, so they can't reach the DB.
Right now I see a few solutions:
1. Store intermediate results manually in some storage, like redis or mongodb, making them available over the network. This worries me a little because, for example, if I use redis, I have to keep the status-reading code on the Django side in sync with the status-writing code in the Celery task, so that they use the same keys.
2. Report status back to Django from celery using REST calls, like PUT http://django.com/api/task/123/items_processed.
3. Maybe use the Celery event system and create events like "Item processed", on which Django updates the counter.
4. Create a separate worker which runs on the server with Django and holds a task which only increases the items-processed count, so when the main task is done with an item it issues increase_messages_proceeded_count.delay(task_id).
Are there any other solutions, or hidden problems with the ones I mentioned?
There are probably many ways to achieve your goal, but here is how I would do it.
Inside your long-running celery task, set the progress using Django's caching framework:
from django.core.cache import cache

@app.task(bind=True)  # bind=True so the task can access self.request
def long_running_task(self, *args, **kwargs):
    key = "my_task:%s" % self.request.id
    ...
    # do whatever you need to do and set the progress
    # using cache:
    cache.set(key, progress, timeout=None)  # pick a timeout that works for you
    ...
Then all you have to do is make a recurring AJAX GET request with that key and retrieve the progress from the cache. Something along these lines:
import json

from django.core.cache import cache
from django.http import HttpResponse

def task_progress_view(request, *args, **kwargs):
    key = request.GET.get('task_key')
    progress = cache.get(key)
    return HttpResponse(content=json.dumps({'progress': progress}),
                        content_type="application/json; charset=utf-8")
One caveat though: if you are running your server as multiple processes, make sure you are using a shared cache backend like memcached, because Django's default local-memory cache is per-process and will be inconsistent among them. Also, I probably wouldn't use celery's task_id as a key, but it is sufficient for demonstration purposes.
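For example, pointing Django at a shared memcached instance looks like this (Django 3.2+ with pymemcache; older versions use a different backend class):

# settings.py -- a shared cache so every web process sees the same progress
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}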
Take a look at flower - a real-time monitor and web admin for Celery distributed task queue:
https://github.com/mher/flower#api
http://flower.readthedocs.org/en/latest/api.html#get--api-tasks
You need it for presentation, right? Flower works with websockets.
For instance, to receive task completion events in real time (taken from the official docs):
var ws = new WebSocket('ws://localhost:5555/api/task/events/task-succeeded/');
ws.onmessage = function (event) {
    console.log(event.data);
};
You would likely need to work with tasks ('ws://localhost:5555/api/tasks/').
I hope this helps.
Simplest:
Your tasks and your Django app already share access to one or two data stores: the broker and the results backend (if you're using one that is different from the broker).
You can simply put some data into one or other of these stores to indicate which item the task is currently processing.
e.g. if using redis, simply have a key 'task-currently-processing' and store the data relevant to the item currently being processed there.
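A sketch of that idea, assuming the broker is Redis; the key name is arbitrary, but it must match on both sides:

import redis

r = redis.Redis()  # the same Redis instance the broker already uses

def report_progress(item_id):
    # called from inside the celery task after each item
    r.set('task-currently-processing', item_id)

def current_item():
    # called from the Django view that reports progress
    value = r.get('task-currently-processing')
    return value.decode() if value else None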
You can use something like Swampdragon to reach the user from the Celery instance (you have to be able to reach it from the client though; take care not to run afoul of CORS). It can be latched onto the counter rather than the model itself.
lehins' solution looks good if you don't mind your clients repeatedly polling your backend. That may be fine but it gets expensive as the number of clients grows.
Artur Barseghyan's solution is suitable if you only need the task lifecycle events generated by Celery's internal machinery.
Alternatively, you can use Django Channels and WebSockets to push updates to clients in real-time. Setup is pretty straightforward.
Add channels to your INSTALLED_APPS and set up a channel layer. E.g., using a Redis backend:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis", 6379)],
        },
    },
}
Create an event consumer. This will receive events from Channels and push them over WebSockets to the client. For instance:
import json

from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer

class TaskConsumer(WebsocketConsumer):
    def connect(self):
        self.task_id = self.scope['url_route']['kwargs']['task_id']  # your task's identifier
        async_to_sync(self.channel_layer.group_add)(f"tasks-{self.task_id}", self.channel_name)
        self.accept()

    def disconnect(self, code):
        async_to_sync(self.channel_layer.group_discard)(f"tasks-{self.task_id}", self.channel_name)

    def item_processed(self, event):
        item = event['item']
        self.send(text_data=json.dumps(item))
Push events from your Celery tasks like this:
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

...

async_to_sync(get_channel_layer().group_send)(f"tasks-{task.task_id}", {
    'type': 'item_processed',
    'item': item,
})
You can also write an async consumer and/or invoke group_send asynchronously. In either case you no longer need the async_to_sync wrapper.
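For reference, the same consumer written asynchronously might look like this (a sketch):

import json

from channels.generic.websocket import AsyncWebsocketConsumer

class TaskConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.task_id = self.scope['url_route']['kwargs']['task_id']
        await self.channel_layer.group_add(f"tasks-{self.task_id}", self.channel_name)
        await self.accept()

    async def disconnect(self, code):
        await self.channel_layer.group_discard(f"tasks-{self.task_id}", self.channel_name)

    async def item_processed(self, event):
        await self.send(text_data=json.dumps(event['item']))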
Add websocket_urlpatterns to your urls.py:
from django.urls import path

websocket_urlpatterns = [
    path('ws/tasks/<task_id>/', TaskConsumer.as_asgi()),
]
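Note that with Channels these patterns are usually wired into the ASGI application rather than the regular HTTP URLconf; a sketch for Channels 3 (the myproject and myapp module names are placeholders):

# asgi.py
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
django_asgi_app = get_asgi_application()

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter

from myapp.routing import websocket_urlpatterns  # wherever you defined them

application = ProtocolTypeRouter({
    'http': django_asgi_app,
    'websocket': AuthMiddlewareStack(URLRouter(websocket_urlpatterns)),
})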
Finally, to consume events from JavaScript in your client, you can do something like this:
let task_id = 123;
let protocol = location.protocol === 'https:' ? 'wss://' : 'ws://';
let socket = new WebSocket(`${protocol}${window.location.host}/ws/tasks/${task_id}/`);
socket.onmessage = function(event) {
    let data = JSON.parse(event.data);
    let item = data.item;
    // do something with the item (e.g., push it into your state container)
};