AsyncWebsocketConsumer VS AsyncConsumer - django

I'm trying to use Channels 2 in my project. It's the first time I've used Channels in Django :)
I have two main useful and almost complete sources: 1) a video on YouTube, DJANGO CHANNELS 2 Tutorial (V2) - Real Time, and 2) the Channels documentation on Read the Docs.
As I don't know what will happen in the future of my code, I need help choosing between AsyncConsumer, as used in source #1, and AsyncWebsocketConsumer, as used in source #2, for starting a Django Channels app. They are imported like this:
from channels.generic.websocket import AsyncWebsocketConsumer
from channels.consumer import AsyncConsumer
explanation:
class AsyncConsumer:
    """
    Base consumer class. Implements the ASGI application spec, and adds on
    channel layer management and routing of events to named methods based
    on their type.
    """

class AsyncWebsocketConsumer(AsyncConsumer):
    """
    Base WebSocket consumer, async version. Provides a general encapsulation
    for the WebSocket handling model that other applications can build on.
    """
My goal in using Channels: integrating real-time chat and notifications/alerts/data transfer to clients in specific situations. (Right now the app works without WebSockets, using DRF.)
If you have any suggestions, ideas or notes, I'll be very happy to hear them. Thank you so much.

Channels is a project meant to be used to work with different protocols including but not limited to HTTP and WebSockets as explained on the docs page.
The AsyncConsumer is the base generic consumer class from which other protocol-specific consumer classes are derived. One of those classes is the AsyncWebsocketConsumer which you mentioned. As the name suggests, it's used to work with WebSockets, so if you want to use WebSockets for your real-time app, then that is the class you should use. There is also the AsyncHttpConsumer for working with HTTP. You most likely want to work with WebSockets, so go with the AsyncWebsocketConsumer or its derivative, AsyncJsonWebsocketConsumer.
I would also advise that you read the documentation to understand in better detail the supported protocols and how and when to use them.

Related

Rest API instead of table in Django Proxy unmanaged models

In my Django architecture task I must connect multiple projects. Some can only communicate through a REST API and don't have common database access. However, the task is to provide seamless communication at the model level, i.e. my model should serve as a wrapper for read-only requests in such a way that the business layer does not see the difference between local models and models that wrap remote data. Assume that the inherited create/update/delete methods will do nothing, and that network communication and authentication are handled by some method:
read_remote_records(**kwargs)
This method returns any data that I would need for my model. All I need is to wrap it in query set api for seamless use in views.
I have read these:
Proxy models documentation
Unmanaged models documentation
Also documentation on creation of custom managers and querysets.
However, none of them gives a good example of what I am trying to achieve.
Perhaps you can provide a solid Django example of how to modify the manager and queryset in this way?

ember-data save and find with postgrest

ember-data understands JSON API natively. If we have to integrate ember-data and its save() and find() methods with PostgREST-style REST calls, where do we need to make changes?
Do we need to modify the client side in Ember, or add some server-side logic to map to ember-data's requirements?
PostgREST REST API calls look like these (to get films with their title, and competition.name from a related table):
http://localhost:3001/film?select=title,competition{name}
http://localhost:3001/users?select=email::text&id=eq.2&sign_in_count=eq.16
There are two questions here:
Is it better to transform to/from JSON API at the server or the client?
If transforming in the client, where do transforms to/from JSON API occur?
Is it better to transform to/from JSON API at the server or the client?
The first question is really a matter of preference. I personally prefer having the server emit and accept JSON API format because it allows you to ship fewer lines of JavaScript to the client and there's a tendency for multiple clients to communicate with the same server, so standardizing that makes for faster client application development.
For example, you might have two Ember clients (one general user-facing, one admin-facing), an iOS client, and perhaps another server all requesting to your PostgREST server.
That said, you can also think of the format that PostgREST uses as its own spec and have all the clients adhere to that.
If transforming in the client, where do transforms to/from JSON API occur?
Which brings us to question 2: How do you make Ember Data communicate with a server that does not use the JSON API standard?
This occurs in two places: The Adapter and the Serializer.
The Adapter translates requests for data into the appropriate URL where the data can be found (or added) and initiates requests.
So, "give me the photo with the ID of 2" (store.find('photo', 2)), will ask the adapter (assuming Photo #2 isn't already loaded), "hey, someone wants Photo #2, go fetch it please".
The Adapter will make the request and hand the response over to its Serializer.
The Serializer is responsible for translating the data that comes back into a format that Ember Data understands: JSON API.
Both Adapter and Serializer have methods you can implement to customize their behaviors.
In your case with PostgREST, the best places to start would be implementing your own findRecord on the Adapter and implementing your own normalizeResponse on the Serializer.
The docs for both explain the actions you need to take and what type of value you should return from each method.
These are two of the most basic interfaces. They don't provide a lot of functionality out of the box, but will help you become familiar with how these two objects interact.
Once you're comfortable with this basic interaction, check out the RESTAdapter and RESTSerializer for ideas on how to rely on some of the conventions Adapters and Serializers have to offer to clean up any custom code you've written.

Delegate from different Django app

Say I have two Django apps, abc and xyz. The abc app is a package that should be installed using pip3, and xyz is some custom app using features from abc.
How can xyz provide a delegate method to be used in a view in abc? Say abc needs to send a message; it doesn't care whether it is sent via SMS, email or avian carrier, so if xyz could provide a delegate method, this method could be called from abc without abc caring about the implementation.
Python handles delegates smoothly, but how do I wire it up in a Django view?
I know I can use a message queue or a callback URL, but it seems a bit strange to me that there isn't an easier way to do this.
Signals can help you achieve what you're looking for.
From Django's documentation on the subject:
Django includes a “signal dispatcher” which helps allow decoupled applications get notified when actions occur elsewhere in the framework. In a nutshell, signals allow certain senders to notify a set of receivers that some action has taken place. They’re especially useful when many pieces of code may be interested in the same events.
Here are a few other resources on the subject:
Using signals in your Django app
A simplified explanation

REST app architecture

I am working on a RESTful application whose goal is to allow a user to track his results, progress and performance in sport activities.
I want to create two or more clients for this app:
1) web client
2) mobile client ( iOS / Android )
I am writing it in Django using the tastypie app, and I wonder whether I should build the web client into the same application that provides the RESTful API, or leave it as a pure REST service and build a separate web client that contacts it via the API.
As of now I don't see any drawbacks to combining both in one, but I am not an expert in applications with such an architecture, so I am looking for some advice with the argumentation behind it.
It is not easy to answer this, as it depends a lot on what kind of service you are building.
Approach 1: Traditional Django app + API
Here your Django app and the tastypie API share common data models but present them differently: one using Django's templates and views, the other using tastypie.
Pros:
Building a traditional web service is a relatively easy and well-understood problem
Django provides a lot of tools for this
Cons:
There is no guarantee that the API presents the same functionality as the web service
You have to maintain two different ways to interact with your data.
Approach 2: API only + JavaScript webapp that uses the API
There is only one interface to the service via the tastypie API. The web client is built separately using javascript tools like backbone.js and backbone-tastypie.
Pros:
You guarantee that 3rd-party developers can build a service with the same functionality as your web service (see dogfooding).
Works pretty nicely if your service is more of an application than a collection of pages.
Cons:
Client-side JavaScript tools are not as good as Django's (for example, templating).
Client side rendering of templates only happens after most of the resources are loaded.
First pageload is slow
Pre-IE9 browsers won't work without tricks, and IE9 may need tricks too
You really need to mind browser caches
SEO is not as straightforward as with traditional web service.
Approach 3: API only + call the API from Django views
Pretty much same as Approach 1 but instead of using your models directly you call the tastypie resources internally.
Pros:
You can still use most of the Django tools.
You mostly use the same API as potential 3rd party developers
Cons:
I do not know how much overhead this incurs.
There is a fourth way to do this, which builds on Approach 1 and Approach 2 from #seppo-erviälä's answer:
Approach 4: API View + Django View via Handler
Create a handler that returns the RESTful resource just as RESTful API view normally would. But this handler is callable from anywhere. It gets the same request dictionary that the view gets and it returns the same JSON the view returns. So now, the architecture is:
              Handler
             /   |   \
            /    |    \
           /     |     \
          /      |      \
RESTfulView      |      Normal Django View
                 |
           Anything Else
The handler:
class ResourceHandler:

    def create_resource(self, data):
        # code

    def fetch_resource(self, rId):
        # code

    # and so on
and you call it from the views like so:
# /views/restfulview.py
# using django-rest-framework
from rest_framework.response import Response
from rest_framework.views import APIView

class RESTCallView(APIView):
    h = ResourceHandler()

    def get(self, request, rId):
        return Response(self.h.fetch_resource(rId))

# /views/normalview.py
from django.http import HttpResponse
from django.views.generic.base import TemplateView

class SomeDjangoView(TemplateView):
    h = ResourceHandler()

    def get(self, request, rId):
        return HttpResponse(self.h.fetch_resource(rId))
Of course, this is just example code and not really Pythonic, but you get the idea.
I work in a large e-commerce company and some of my teammates have used this approach to good success.
Some other advantages are:
Migration of old data becomes a lot easier now, since you just need to create the dict and send it to the Handler. Similarly, syncing also becomes a lot easier.
Changes to APIs are in one place, and you can cut off the app without killing access to the data.
Hope this helps you too.. :)
Better to create a pure REST service and consume it from both clients. It will provide a clean layered architecture, since you are not mixing the service with a client in one app. By having a common, separate service you get: separation of concerns, a clean architecture, proper layering, readability and better maintainability.

django app consuming rest api - where to put the code

I have a Django app with a model that stores data entered by a user via a web interface.
I need to consume a third-party REST API when viewing/saving a model instance. I know how to do this, but what I am unsure about is where this code should live within the Django app.
My gut is to put this code in the model class, but then you could also use a view... I am just not sure.
How has this been done before? There are lots of posts asking how to do this, but none stating the best place to put the code.
Any guidance would be gratefully received.
Cheers
This is a subjective question, so here is a subjective answer.
First of all, ensure that any code that interacts with this external REST API resides in a separate module. E.g, if you're grabbing word definitions from a dictionary API, all the code that talks to this API should ideally be in a separate dictionary module, which you can then import into your view.
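Using that dictionary example, the separate module could look like the following sketch. The module name, API_URL and the response shape are all hypothetical, and the opener parameter is an illustrative choice so that views stay thin and tests can substitute a stub instead of hitting the network:

```python
# dictionary.py -- everything that talks to the external API lives here.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_URL = "https://api.example.com/define"  # hypothetical endpoint


def get_definition(word, opener=urlopen):
    """Fetch the definition of `word` from the external API as a dict."""
    url = f"{API_URL}?{urlencode({'word': word})}"
    with opener(url) as resp:
        return json.loads(resp.read().decode())
```

A view then just does `definition = dictionary.get_definition(word)` and passes the result to its template or model, without containing any HTTP plumbing itself.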
Secondly, your models.py should merely declare your application's data model and define the operations on this model, and little else. They should not be concerned with request/response cycles, reading files, rendering templates, making HTTP calls, or anything else. By this logic, you should be making these REST API calls from your views and, if required, passing the returned data into your models.
And finally, think twice about making REST calls from your Django app. Python does synchronous (blocking) I/O by default, which means that as long as the app is waiting for the REST call to finish, it can't service any incoming HTTP requests. This is not an issue if you don't have too many users, but it's something to keep in mind for apps that need to scale. You might want to look into async I/O libraries for Python.