For example, I have a main app, let's say for user handling, and a secondary app where I create logs, model change history, statistics, etc.
Generally, most of the CRUD activity in the main app triggers CREATE operations in the secondary app via signals, to create logs and similar records.
What I want to achieve is to stop exceptions raised by the secondary app from being propagated and shown to the user via the DRF response, i.e. to make them 'fail silently' in a way. For instance, if a user updated his account and the history log subsequently created in the secondary model raised an IntegrityError, it would be better to just continue and do nothing rather than notify the user about it.
There are two main types of exceptions: IntegrityError and ValidationError.
I could try/except all the validation errors, and maybe use a custom exception handler to intercept IntegrityErrors if I know the constraint names, but:
a) I still can't intercept them all, as some of them originate from Django source code;
b) it means a lot of hardcoding.
The question is: is it possible to somehow intercept all exceptions coming from a certain app and suppress them all?
Thank you.
I think you're well aware that suppressing all exceptions is bad practice in its own right. However, considering your situation, perhaps you can try something like serializer.is_valid(raise_exception=False) in your APIs.
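If you do go the suppression route, one contained way to do it is to wrap only the secondary app's signal receivers, so failures there are logged instead of propagating into the request that triggered them. This is a minimal, framework-free sketch; fail_silently and create_history_log are illustrative names, and in practice you'd pass a narrow tuple such as (IntegrityError, ValidationError) rather than Exception:

```python
import functools
import logging

logger = logging.getLogger("secondary_app")

def fail_silently(*exceptions):
    """Decorator factory: swallow the given exceptions, logging them instead."""
    def decorator(receiver):
        @functools.wraps(receiver)
        def wrapper(*args, **kwargs):
            try:
                return receiver(*args, **kwargs)
            except exceptions:
                logger.exception("suppressed error in %s", receiver.__name__)
        return wrapper
    return decorator

# usage on a secondary-app post_save receiver (signature assumed):
@fail_silently(Exception)  # narrower in practice, e.g. (IntegrityError, ValidationError)
def create_history_log(sender, instance, **kwargs):
    ...  # write the history row; any failure here is now logged, not raised
```

This keeps the suppression at the boundary you actually care about (the secondary app's receivers) instead of blanket-catching everything in the views.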
I have a Python/Django project and want to implement the following functionality: on a certain trigger (at a certain time, or by a manual admin action), show a message to all active users (i.e. to all sessions).
Webpush seems unnecessary here, since Django has a nice built-in messages framework.
I guess I could develop such functionality manually: create an SQL table session_messages, take a snapshot of all sessions each time and show the message, perform some checks to ensure that the message is shown only once and only to active users, etc.
Question: maybe there is some nice little library that already does this? Or can the Django messages framework even do it out of the box?
I've googled a bit but found only packages for webpush integrations.
Thanks :)
You should implement an architecture based on Django Channels, Redis or RabbitMQ, and signals.
Basically, you open a socket when the user logs in, add the authenticated user to a group, and when the event fires (via signals) send a message to the group.
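The group mechanics described above can be sketched framework-free to show the flow end to end. In a real deployment, connect/group_send would be a Channels consumer plus a Redis channel layer; here an in-memory dict of queues stands in for the channel layer, and all names are illustrative:

```python
import queue

_groups = {}  # group name -> {channel name: message queue}

def connect(group, channel_name):
    """On socket open for an authenticated user: join the shared group."""
    _groups.setdefault(group, {})[channel_name] = queue.Queue()

def disconnect(group, channel_name):
    """On socket close: leave the group."""
    _groups.get(group, {}).pop(channel_name, None)

def group_send(group, message):
    """From a signal handler or admin action: fan the message out to everyone."""
    for q in _groups.get(group, {}).values():
        q.put(message)

def receive(group, channel_name):
    """What each connected socket would deliver to its browser."""
    return _groups[group][channel_name].get_nowait()
```

With Channels, connect/disconnect map onto a consumer's connect()/disconnect() calling channel_layer.group_add/group_discard, and group_send maps onto channel_layer.group_send from the signal handler.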
I have a Django application that handles the model logic and data management through the admin.
In the same project I also have a Python file (scriptcl.py) that uses the model data to perform heavy calculations that take some time to process, for example 5 seconds.
I have migrated the project to the cloud, and now I need an API to call this file (scriptcl.py), passing parameters, process the computation according to the parameters and the data in the DB (maintained in the admin), and then respond back.
All the Django DRF examples I've seen so far only cover authentication and data handling (Create, Read, Update, Delete).
Could anyone suggest an idea to approach this?
In my opinion, the correct approach would be to use Celery to perform these calculations asynchronously.
Write a class that inherits from the DRF APIView, handle authentication there, run whatever logic you want or call whichever function, get the final result and send back a JsonResponse. But, as you mentioned, if the API takes too long to respond, then you might have to think of something else, like returning a request_id and then hitting the server with that request_id every 5 seconds to get the data.
Just to give feedback on this: the approach I took was to build another API using Flask and plain Python scripts.
I also used SQLAlchemy to access the database and retrieve the necessary data.
My app is a dashboard that makes several ajax requests for data on a single dashboard URL. I'm looking for best practices for handling and presenting errors to the user. A few strategies have crossed my mind:
Use a generic error handler in my route/controller that handles errors from every service and figures out what to do
Create error handler methods for each service and use those to set properties on the dashboard controller
I started down this road and it got pretty messy and didn't feel very Ember-ey. Is there a preferred way to do this?
Also, $.ajax's .fail() method doesn't always get called, because $.ajax() can succeed with a 200 while there's an application error in the payload. So I have to manually check all return values for errors.
I'm looking for general guidance, not necessarily specific code.
Here's a screencap to give you an idea of what the app looks like: screencap
Thanks!
Let's say I have django.contrib.sessions.middleware.SessionMiddleware installed in Django and I'm using the SessionAuthentication class for API authentication in Tastypie. Within a session I make some changes to models through my API, and after that I want to roll them back. Can I do it through Tastypie? If so, what method should I execute? I can't find such a method in the Tastypie docs. Do you have any working example of that?
Django supports database transactions, which will commit multiple state changes atomically. (Documentation...)
It is unclear in your question how you want to trigger the rollback. One option is to use request transactions, which will rollback if an unhandled exception is issued by the view function. If you want more fine grained control, familiarize yourself with the options in the linked-to documentation. For example, you may explicitly create a transaction and then roll it back inside your view.
With respect to Tastypie, you may need to place your transaction management inside the appropriate method on the Resource interface.
I hope this gives you some pointers. Please update your question with more details if necessary.
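To illustrate the rollback behavior with something runnable, here is the same idea using the stdlib sqlite3 module; in a Django view you would get the same effect by wrapping the statements in `with transaction.atomic():` and raising to abort. The table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account (balance) VALUES (100)")
conn.commit()

try:
    # the connection as a context manager commits on success
    # and rolls back if an exception escapes the block
    with conn:
        conn.execute("UPDATE account SET balance = balance - 40")
        raise RuntimeError("something went wrong mid-request")
except RuntimeError:
    pass  # the UPDATE above was rolled back

balance = conn.execute("SELECT balance FROM account").fetchone()[0]
# the balance is unchanged because the transaction was rolled back
```

Note this only works within a single request: once the transaction commits, the changes are permanent as far as the database is concerned, which is why the cross-request case needs a different tool (see the next answer).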
So you want to commit changes to your models to the database, and then roll them back on a future request? That's not something that Tastypie supports (or, for that matter, Django or SQL). It's not really possible to do a clean rollback like that, considering other requests could have interacted with, changed, or built relationships with those objects in the meantime.
The best solution would probably be to integrate something like Reversion that would allow you to restore objects to a previous state.
If you want to be able to roll back all of the operations during a session, you'd probably need to keep track of the session start time and the list of objects that had been changed. If you wanted to do a rollback, you'd just have to iterate over that list and invoke reversion's revert method, like
reversion.get_for_date(your_model, session_start_datetime).revert()
However, again, that will also roll back any changes other users have made in the same time frame; that will be the case for any solution to this requirement.
I want to trace user's actions in my web site by logging their requests to database as plain text in Django.
I am considering writing a custom decorator and placing it on every view that I want to trace.
However, I have some trouble with my design.
First of all, is such a logging mechanism reasonable, or will it cause performance problems because my log table will grow rapidly?
Secondly, how should my log table be designed?
I want to keep the keywords if the user calls the search view, or the item's id if the user calls the item detail view.
Besides, users' IP addresses should be kept, but how can I separate users if they connect via a single IP address, as in many companies?
I am glad to explain in detail if you think my question is unclear.
Thanks
I wouldn't do that. If this is a production service then you've got a proper web server running in front of it, right? Apache, or nginx or something. That can do logging, and can do it well, and can write to a form that won't bloat your database, and there's a wealth of analytical tools for log analysis.
You are going to have to duplicate a lot of that functionality in your decorator, such as when you want to switch it on or off, or change the log level. The only thing you'll get by doing it all in Django is the possibility of ultra-fine control, such as only logging views of blog posts with id numbers greater than X. But generally you wouldn't want that level of detail: you'd log everything and do any stripping at the analysis phase. You haven't given any reason so far why you need to do this from Django.
If you really want the data in an RDBMS, reading an Apache log file into Postgres or MySQL (or one of those expensive ones) is fairly trivial.
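As a sketch of how trivial that import can be, assuming Apache's Common Log Format: a short regex turns each line into a row ready for an INSERT. The regex and column names are illustrative, not a complete CLF parser:

```python
import re

# matches Common Log Format: ip ident user [time] "method path proto" status size
CLF = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) (?P<size>\S+)'
)

def parse_line(line):
    """Turn one access-log line into a dict of columns, or None if malformed."""
    m = CLF.match(line)
    if not m:
        return None
    row = m.groupdict()
    row["status"] = int(row["status"])
    return row

# each parsed dict maps directly onto the columns of a log table, e.g.
# INSERT INTO access_log (ip, time, method, path, status, size) VALUES (...)
```

Loop parse_line over the file and batch the INSERTs, and you have the log in your RDBMS without touching the Django request path at all.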
One thing you should keep in mind is that SQL databases don't offer very good write performance (compared with read performance), so if you are experiencing heavy load you should probably look for a better in-memory solution (e.g. a key-value store like redis).
But keep in mind that, especially if you use a non-SQL solution, you should be aware of what you want to do with the collected data (just display something like a 'log', or do some more in-depth searching/querying on the data).
If you want to identify different users coming from the same IP address, you should probably look for a cookie-based solution (if you are using Django's session framework, sessions are by default identified through a cookie, so you could simply use sessions). Another option is doing the logging 'asynchronously' via JavaScript after the page has loaded in the browser (which could give you more possibilities for identifying the user and avoids additional load when generating the page).
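Putting the question's decorator idea together with the session-based identification suggested here, a minimal sketch might look like this. log_action is a placeholder sink (in practice it would write to your log model or to redis), and the decorator only assumes the standard Django request attributes (META, session, GET):

```python
import functools

ACTION_LOG = []  # stand-in sink; in practice a log model or redis

def log_action(**fields):
    ACTION_LOG.append(fields)

def trace(view):
    """Decorator placed on each view to be traced."""
    @functools.wraps(view)
    def wrapper(request, *args, **kwargs):
        log_action(
            view=view.__name__,
            ip=request.META.get("REMOTE_ADDR"),
            # the session key distinguishes users behind a shared company IP
            session=request.session.session_key,
            params=dict(request.GET),
        )
        return view(request, *args, **kwargs)
    return wrapper
```

Storing the session key alongside the IP gives you the per-user separation the question asks about, and the params field covers the "keep the search keywords / item id" requirement without a per-view schema.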