Django Axes Log Turnover Time

Could not find the answer to this rather trivial question within the documentation... I am using Django Axes to track and log access to my Django applications. In the admin interface I see login entries that are more than two years old. Is there a setting that tells Axes to automatically delete log entries older than a given age?
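One workaround, since there doesn't seem to be a built-in retention setting: Axes stores its entries as ordinary model rows, so a small management command run from cron can prune them. A hedged sketch, assuming axes.models.AccessLog and its attempt_time field; the command name and the --days default are placeholders:

```python
# yourapp/management/commands/prune_axes_logs.py — hypothetical command name.
from datetime import timedelta

from django.core.management.base import BaseCommand
from django.utils import timezone

from axes.models import AccessLog  # django-axes' access log model


class Command(BaseCommand):
    help = "Delete Axes access log entries older than a given number of days."

    def add_arguments(self, parser):
        parser.add_argument("--days", type=int, default=730)  # ~2 years

    def handle(self, *args, **options):
        cutoff = timezone.now() - timedelta(days=options["days"])
        deleted, _ = AccessLog.objects.filter(attempt_time__lt=cutoff).delete()
        self.stdout.write(f"Deleted {deleted} entries older than {cutoff:%Y-%m-%d}.")
```

Scheduled daily (cron, Celery beat, etc.) as, say, python manage.py prune_axes_logs --days 365, this keeps the admin list to entries newer than a year.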

Related

How to add notifications in django admin backend with audio

I found a blog post that addresses the task I want to accomplish, which is being able to get real-time notifications in the Django admin (backend alerts) every time a new order is placed on an e-commerce website. However, the post is dated 2014, and the SwampDragon repository it references hasn't been maintained for at least 4 years.
Would you recommend I go ahead and deploy using this library? I believe a lot has changed since then, and I don't want to get into loops of fixing errors that have no updated and maintained documentation. Or do you know of any other library I could use for such a task? The ultimate objective would be to have an audio sound/notification play every time a new order is placed or a cart is emptied.
Thanks.
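Absent a maintained real-time library, one low-tech approach is polling: expose a small staff-only JSON endpoint and have a script in an overridden admin template poll it, playing a sound when the latest order ID changes. A minimal sketch of the server side only, with Order as a placeholder model; the polling script and audio element would live in your admin template override:

```python
# views.py — hypothetical staff-only polling endpoint; Order is a placeholder.
from django.contrib.admin.views.decorators import staff_member_required
from django.http import JsonResponse

from shop.models import Order  # placeholder app/model


@staff_member_required
def latest_order(request):
    """Return the newest order ID so admin-side JS can detect new orders."""
    latest_id = Order.objects.order_by("-id").values_list("id", flat=True).first()
    return JsonResponse({"latest_id": latest_id or 0})
```

The client side compares latest_id against the last value it saw every few seconds and plays an audio element on change; for true push rather than polling, Django Channels is the currently maintained route.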

Google static maps api issue - won't display all my markers

I have a project where I need to automatically generate maps with markers. I have successfully generated these maps with the built-in markers and now want to switch to custom markers to more accurately describe the items being marked.
So far this has worked fine for 2-3 icons, but as soon as I add more (say 5 or 6), some of them are simply omitted from the map. Currently these images are all hosted on the same machine running the code and are served up via my Django website.
My first thought was that the issue has to do with my server being too slow to serve all 6 icons in the time Google takes to render the static map, but I would think Google's code waits for the marker icons to load before rendering...
Any suggestions? I would post my request here, but I don't want to publish my API key. If you think it would be helpful, I could post an obfuscated version.
After additional research, it appears there is a limit of 5 custom markers per Static Maps API request.
To get around this, make multiple requests and merge the resulting maps. For maps 2 and up, set the maptype to roadmap and add style=feature:all|visibility:off so only the markers are drawn.
More details can be found here: Anyway to overcome the 5 custom icon urls per request?
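A rough sketch of that split-and-merge workaround, assuming the standard Static Maps endpoint; the center, icon URLs, and the background-keying heuristic are placeholders to adapt:

```python
# Split markers across requests (<= 5 custom icons each), composite locally.
from io import BytesIO

import requests
from PIL import Image

BASE = "https://maps.googleapis.com/maps/api/staticmap"
COMMON = {"center": "40.7128,-74.0060", "zoom": "12",
          "size": "640x640", "key": "YOUR_API_KEY"}  # placeholders


def fetch_layer(markers, overlay=False):
    """Fetch one map image; overlay layers hide the base map features."""
    params = dict(COMMON, markers=markers)  # list of "icon:URL|lat,lng" strings
    if overlay:
        params["maptype"] = "roadmap"
        params["style"] = "feature:all|visibility:off"
    resp = requests.get(BASE, params=params)
    resp.raise_for_status()
    return Image.open(BytesIO(resp.content)).convert("RGBA")


def key_out_background(img):
    """Make the overlay's plain background transparent.

    Heuristic: treat the top-left pixel's color as the background color.
    """
    bg = img.getpixel((0, 0))
    img.putdata([(r, g, b, 0) if (r, g, b, a) == bg else (r, g, b, a)
                 for (r, g, b, a) in img.getdata()])
    return img


base = fetch_layer(["icon:https://example.com/i1.png|40.71,-74.00"])
extra = fetch_layer(["icon:https://example.com/i6.png|40.72,-74.01"], overlay=True)
Image.alpha_composite(base, key_out_background(extra)).save("merged.png")
```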

how to automatically identify n+1 queries in a django app?

Is there any tool, plugin or technique that I can use to help identify n+1 queries in a Django application? There is a gem called bullet for Rails that identifies n+1 queries and logs or pops up warnings in a number of ways, but I haven't been able to find anything similar for Django. I'd be open to guidance on how to write my own plugin if no one knows of an existing solution.
nplusone does this: "Auto-detecting the n+1 queries problem in Python"
https://github.com/jmcarp/nplusone
It comes with proper Django support, and also integrates with Flask, vanilla WSGI, and more.
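The Django integration amounts to a few settings; a sketch following the project's README (double-check it against the version you install):

```python
# settings.py
import logging

INSTALLED_APPS = [
    # ...
    "nplusone.ext.django",
]

MIDDLEWARE = [
    "nplusone.ext.django.NPlusOneMiddleware",
    # ...
]

# Log detected N+1 queries instead of raising an error
NPLUSONE_LOGGER = logging.getLogger("nplusone")
NPLUSONE_LOG_LEVEL = logging.WARN
```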
I don't know of any plugin that would find them automatically and warn you.
I personally use the Django Debug Toolbar:
https://github.com/django-debug-toolbar/django-debug-toolbar
It shows the number of queries run on a page, and you can view each of them.
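Setup is short; a sketch of the commonly documented configuration (verify against the docs for your toolbar version):

```python
# settings.py — the usual django-debug-toolbar setup.
INSTALLED_APPS = [
    # ...
    "debug_toolbar",
]

MIDDLEWARE = [
    "debug_toolbar.middleware.DebugToolbarMiddleware",
    # ...
]

INTERNAL_IPS = ["127.0.0.1"]  # the toolbar only renders for these addresses

# urls.py
from django.urls import include, path

urlpatterns = [
    path("__debug__/", include("debug_toolbar.urls")),
    # ... your other URL patterns
]
```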
Scout, an APM product that supports Django apps, identifies expensive N+1 queries in production.
Here's how to use it:
Install the scout-apm Python package (MIT license) and provide your Scout API key. The API key is found in the Scout web UI.
Deploy your app, confirm Scout is receiving data, then check back in an hour or so. Scout analyzes every web request, checking for N+1s, and then displays the worst performers on a dashboard (screenshot).
Select an N+1 you're interested in to reveal a transaction trace of the request that triggered the N+1. This includes the raw SQL of the query and a backtrace to the line of code that triggers it (screenshot).
An advantage of Scout over a development tool like Bullet: most development databases hold a small amount of data, so the true impact of an N+1 is frequently unknown. Scout identifies just those N+1s that are consuming significant time, which can help focus your efforts.
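For reference, the settings side of the install step looks roughly like this; the key and app name are placeholders (see Scout's install docs):

```python
# settings.py — scout-apm's Django configuration; values are placeholders.
INSTALLED_APPS = [
    "scout_apm.django",
    # ...
]

SCOUT_MONITOR = True
SCOUT_KEY = "YOUR_SCOUT_API_KEY"  # from the Scout web UI
SCOUT_NAME = "my-django-app"
```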

Using memcached with a dynamic django backend

My Django backend is always dynamic. It serves an iOS app similar to Instagram and Vine, where users upload photos/videos and their followers can comment on and like the content. Just for the sake of this question, imagine my backend serves an iOS app that is exactly like Instagram.
Many sources claim that using memcached can improve performance because it decreases the amount of hits that are made to the database.
My question is: for a backend that is already dynamic in nature (always changing, since users are uploading new pictures, commenting, liking, following new users, etc.), what can I possibly cache?
It's a problem I've been thinking about for quite some time. I could cache the user profile data, but other than that I don't know where else memcached would be useful.
Other sources mention using it everywhere a 'GET' call is made, but then I would need to set a suitable expiry time on the cache, since the app is always dynamic. What are your solutions and suggestions for getting around this problem?
You would cache whatever is most frequently read from your database. Make a list of the most frequent requests for data and cache them in that priority order:
Cache the most frequent requests based on the category of the pictures
Cache based on users: power users (those who access a lot of data) go into the cache
Cache the most recent inserts (in case you have a page which shows recently added posts/pictures)
I am sure you can come up with more scenarios. I am positive memcached (or any other caching layer) will help, even though your app is very 'dynamic'.
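As a concrete example of the pattern (cache the hot read with a short TTL, invalidate on write), a sketch using Django's cache framework over memcached; Profile is a placeholder model:

```python
# Hot-read caching with write-through invalidation; Profile is a placeholder.
from django.core.cache import cache
from django.db.models.signals import post_save
from django.dispatch import receiver

from profiles.models import Profile  # placeholder app/model

CACHE_TTL = 60 * 5  # short TTL as a safety net, since the data changes often


def get_profile(user_id):
    """Serve profile data from the cache, falling back to the database."""
    key = f"profile:{user_id}"
    data = cache.get(key)
    if data is None:
        data = Profile.objects.values().get(user_id=user_id)
        cache.set(key, data, CACHE_TTL)
    return data


@receiver(post_save, sender=Profile)
def invalidate_profile(sender, instance, **kwargs):
    # Deleting on save keeps the cache from serving stale profiles.
    cache.delete(f"profile:{instance.user_id}")
```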

pymongo + new relic

We have a Django application server monitored by New Relic. We use MySQL and MongoDB for data storage in our app. In rpm.newrelic we see the transaction details for MySQL; we want the transaction details for MongoDB too.
We are using the pymongo module to interact with Mongo. I read here that they have included support for pymongo in their latest Python agent, but I am not able to find the documentation for it. Can anyone point me to some docs?
At one point we had the same question, and so we built this: https://github.com/Livefyre/pymongo-newrelic
This has some rough edges, but you'll see queries (roughly mapped to SQL terms), and time spent in granular detail.
And while the newer New Relic agents support pymongo directly: https://docs.newrelic.com/docs/python/instrumented-python-packages#nosql-database-clients
They do include this caveat (as of this writing):
Note that MongoDB and Redis calls are currently only recorded as transaction breakdown metrics. That is, no rollup metrics are produced and so they will still be shown on the overview dashboard as Python time and not as a separate segment or even as database calls. Also, no specific details of MongoDB queries are captured at this time and so no information will be displayed on the databases page in the UI corresponding to these queries.
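If you're not launching through newrelic-admin run-program, the manual wiring looks roughly like this; with a sufficiently recent agent, pymongo calls are then instrumented automatically, subject to the caveat above. The settings module path is a placeholder:

```python
# wsgi.py — manual New Relic agent initialization.
import os

import newrelic.agent

newrelic.agent.initialize("newrelic.ini")  # config generated by newrelic-admin

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder

from django.core.wsgi import get_wsgi_application

# Wrapping the WSGI app records web transactions, including pymongo time.
application = newrelic.agent.WSGIApplicationWrapper(get_wsgi_application())
```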