In my Django app I want to show how many times the user has tried logging in with a wrong password (e.g. three times).
I am using django-brutebuster.
After migrating I can see a table in my Postgres database named BruteBuster_failedattempt, which has the following columns:
id
username
IP
failures
timestamp
settings.py:

INSTALLED_APPS = [
    ...
    'BruteBuster',
]

MIDDLEWARE = [
    ...
    'BruteBuster.middleware.RequestMiddleware',
]

# Block threshold
BB_MAX_FAILURES = 3
BB_BLOCK_INTERVAL = 3
I want to display the count of failed attempts on my django template.
FailedAttempt is just a regular Django model, so you can query it like any other:

from BruteBuster.models import FailedAttempt
from django.db.models import Sum

total_failed = FailedAttempt.objects.filter(
    username=request.user.username
).aggregate(
    total_failures=Sum('failures')
)['total_failures']
There can, however, be some problems here:

BruteBuster tracks the number of failures per username and per IP, so we need to sum these rows up;
if the user changes their username, the old FailedAttempt records no longer match, so this might not work perfectly for such scenarios; and
if there are no failed attempts, the aggregate will return None, not 0, but you can add or 0 at the end to convert None to 0.
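The `or 0` point is worth a standalone look, since it works on the plain dict that aggregate() returns; no Django needed for a quick check:

```python
# aggregate() yields {'total_failures': None} when the filter matches no rows;
# "or 0" coerces that None (and any falsy value) to 0.
def failures_from_aggregate(agg):
    return agg['total_failures'] or 0

print(failures_from_aggregate({'total_failures': None}))  # 0
print(failures_from_aggregate({'total_failures': 7}))     # 7
```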
Case: a paginated page with an overview of users who sent the 'receiver' a message. Once clicked, the receiver goes to the page with the actual conversation.
The 'overview page' is paginated (works). However, after implementation we noticed that if user X received 2 messages from Y, the front-end displays two rows for Y. So we were looking for a way to group the paginated objects, which led to the code below. This gives us the next challenge: since the pagination only retrieves the first X objects, if the next 'paginated page' also contains a user from page 1 (user Y), user Y will occur on page 2 as well. I tried to create a pagination object after building 'unique_group', but that is just a list and cannot be paginated.
Retrieving .all() also seems inefficient, especially as the application grows.
def group_paginated_msgs(user, Message):
    # [Y, Y, Z] = simple example
    '''grab user messages'''
    page = request.args.get('page', 1, type=int)
    messages = user.messages_received.order_by(
        Message.timestamp.desc()
    ).paginate(
        page,
        5,  # from config in prod.
        False
    )
    # group user messages: keep only the first message per author
    unique_authors = []
    unique_group = []
    for m in messages.items:
        if m.author in unique_authors:
            continue
        unique_authors.append(m.author)
        unique_group.append(m)
    # print(unique_group) -> [Y, Z]
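The grouping loop above boils down to order-preserving de-duplication by author. A minimal standalone sketch of the same idea, with tuples standing in for Message objects:

```python
# Keep only the first message per author, preserving arrival order.
def first_message_per_author(messages, author_of):
    seen = set()
    grouped = []
    for m in messages:
        a = author_of(m)
        if a not in seen:
            seen.add(a)
            grouped.append(m)
    return grouped

msgs = [('Y', 'hi'), ('Y', 'again'), ('Z', 'hello')]
print(first_message_per_author(msgs, author_of=lambda m: m[0]))
# [('Y', 'hi'), ('Z', 'hello')]
```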
Could you use the .distinct() method to get the unique authors from your query before you paginate?
Something like this:

messages = (
    Message.query.filter(Message.receiver == user)
    .distinct(Message.author)
    .order_by(Message.timestamp.desc())
    .paginate(
        page,
        5,  # from config in prod.
        False
    )
)
This example works by querying the messages that have been received by your user and then filtering for distinct message authors.
I'm not sure exactly what I have written will work, but I think using the distinct() method is a good place to start. (Note that on PostgreSQL, DISTINCT ON requires the ORDER BY list to start with the same expression, so you may need order_by(Message.author, Message.timestamp.desc()).) You can find more info about the method in the SQLAlchemy documentation.
I have a script that loops through the rows of an external csv file (about 12,000 rows) and executes a single Model.objects.get() query to retrieve each item from the database (final product will be much more complicated but right now it's stripped down to the barest functionality possible to try to figure this out).
For right now the path to the local csv file is hardcoded into the script. When I run the script through the shell using py manage.py runscript update_products_from_csv it runs in about 6 seconds.
The ultimate goal is to be able to upload the csv through the admin and then have the script run from there. I've already been able to accomplish that, but the runtime when I do it that way takes more like 160 seconds. The view for that in the admin looks like...
from django import forms
from django.contrib import admin, messages
from django.http import HttpResponseRedirect
from django.urls import path, reverse

from .models import Product
from .scripts import update_products_from_csv

class CsvUploadForm(forms.Form):
    csv_file = forms.FileField(label='Upload CSV')

@admin.register(Product)
class ProductAdmin(admin.ModelAdmin):
    # list_display, list_filter, fieldsets, etc

    def changelist_view(self, request, extra_context=None):
        extra_context = extra_context or {}
        extra_context['csv_upload_form'] = CsvUploadForm()
        return super().changelist_view(request, extra_context=extra_context)

    def get_urls(self):
        urls = super().get_urls()
        new_urls = [path('upload-csv/', self.upload_csv)]
        return new_urls + urls

    def upload_csv(self, request):
        if request.method == 'POST':
            # csv_file = request.FILES['csv_file'].file
            # result_string = update_products_from_csv.run(csv_file)
            # I commented out the above two lines and added the below line to rule out
            # the possibility that the csv upload itself was the problem. Whether I execute
            # the script using the uploaded file or let it use the hardcoded local path,
            # the results are the same. It works, but takes more than 20 times longer
            # than executing the same script from the shell.
            result_string = update_products_from_csv.run()
            print(result_string)
            messages.success(request, result_string)
        return HttpResponseRedirect(reverse('admin:products_product_changelist'))
Right now the actual running parts of the script are about as simple as this...
import csv
from time import time

from apps.products.models import Product

CSV_PATH = 'path/to/local/csv_file.csv'

def run():
    csv_data = get_csv_data()
    update_data = build_update_data(csv_data)
    update_handler(update_data)
    return 'Finished'

def get_csv_data():
    with open(CSV_PATH, 'r') as f:
        return [d for d in csv.DictReader(f)]

def build_update_data(csv_data):
    update_data = []
    # Code that loops through csv data, applies some custom logic, and builds a list of
    # dicts with the data cleaned and formatted as needed
    return update_data

def update_handler(update_data):
    query_times = []
    for upd in update_data:
        iter_start = time()
        product_obj = Product.objects.get(external_id=upd['external_id'])
        # external_id is not the primary key but is an indexed field in the Product model
        query_times.append(time() - iter_start)
    # Code to export query_times to an external file for analysis
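Independent of the timing mystery, the per-row .get() can be collapsed into a single fetch plus dictionary lookups. A framework-free sketch of the idea, with plain dicts standing in for Product instances (in real Django, QuerySet.in_bulk(field_name='external_id') does the bulk fetch, assuming external_id is unique):

```python
# One query for all rows up front, then O(1) dict lookups per CSV row,
# instead of one database round-trip per row.
rows_from_db = [
    {'external_id': 10, 'name': 'widget'},
    {'external_id': 11, 'name': 'gadget'},
]
by_external_id = {r['external_id']: r for r in rows_from_db}

csv_updates = [{'external_id': 10, 'name': 'widget v2'}]
for upd in csv_updates:
    row = by_external_id.get(upd['external_id'])
    if row is not None:
        row.update(upd)  # a real version would compare fields and save changes

print(by_external_id[10]['name'])  # widget v2
```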
update_handler() has a bunch of other code checking field values to see if anything needs to be changed, and building the objects when a match does not exist, but that's all commented out right now. As you can see, I'm also timing each query and logging those values. (I've been dropping time() calls in various places all day and have determined that the query is the only part that's noticeably different.)
When I run it from the shell, the average query time is 0.0005 seconds and the total of all query times comes out to about 6.8 seconds every single time.
When I run it through the admin view and then check the queries in Django Debug Toolbar it shows the 12,000+ queries as expected, and shows a total query time of only about 3900ms. But when I look at the log of query times gathered by the time() calls, the average query time is 0.013 seconds (26 times longer than when I run it through the shell), and the total of all query times always comes out at 156-157 seconds.
The queries in Django Debug Toolbar when I run it through the admin all look like SELECT ••• FROM "products_product" WHERE "products_product"."external_id" = 10 LIMIT 21, and according to the toolbar they are mostly all 0-1ms. I'm not sure how I would check what the queries look like when running it from the shell, but I can't imagine they'd be different? I couldn't find anything in django-extensions runscript docs about it doing query optimizations or anything like that.
One additional interesting facet is that when running it from the admin, from the time I see result_string print in the terminal, it's another solid 1-3 minutes before the success message appears in the browser window.
I don't know what else to check. I'm obviously missing something fundamental, but I don't know what.
Somebody on Reddit suggested that running the script from the shell might be automatically spinning up a new thread where the logic can run unencumbered by the other Django server processes, and this seems to be the answer. If I run the script in a new thread from the admin view, it runs just as fast as it does when I run it from the shell.
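The thread-based fix can be sketched with the standard library alone: hand the long-running job to a worker thread so the request handler returns immediately. A framework-free sketch (in the real view, the target would be update_products_from_csv.run):

```python
import threading
import time

def long_running_import(results):
    # stands in for the slow CSV import script
    time.sleep(0.1)
    results.append('Finished')

results = []
worker = threading.Thread(target=long_running_import, args=(results,), daemon=True)
worker.start()
# a real request handler would return a redirect here, without waiting;
# we only join so this demo can show the outcome
worker.join()
print(results)  # ['Finished']
```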
I have a resolver, and I gave it a key to save into django-redis. I can see the key and value inside Redis, but somehow the loading time is still the same.
If I do a regular REST request it works fine, but somehow GraphQL doesn't seem to be working properly?
def resolve_search(self, info, **kwargs):
    redis_key = 'graphl_search'
    if cache.keys(redis_key):  # check if key exists; if so, return the cached value
        print('using redis')  # this runs the second time, since the key exists for 3600 seconds
        return cache.get(redis_key)
    # set redis
    print('set redis')  # this runs the first time, since the key does not exist yet
    my_model = Model.objects.filter(slug=kwargs.get('slug')).first()
    cache.set(redis_key, my_model, timeout=3600)
    return my_model
The if statement works properly and doesn't go into the rest of the block when the key exists, but when I checked the response time it was the same:
1st time (5xx ms)
2nd time (5xx ms)
Am I doing something wrong, or is that not how to use Redis with GraphQL?
Thanks in advance for any help or suggestions.
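For what it's worth, the keys()-then-get() check in the resolver costs two Redis round-trips and can race; the usual pattern is a single get() with a None check. A framework-free sketch, with a plain dict standing in for the django-redis cache:

```python
# One lookup; compute and store only on a cache miss.
def get_or_set(cache, key, compute):
    value = cache.get(key)
    if value is None:
        value = compute()
        cache[key] = value
    return value

calls = []
def expensive_query():
    calls.append(1)  # counts how often the "database" is actually hit
    return 'search results'

store = {}
print(get_or_set(store, 'graphl_search', expensive_query))  # search results (computed)
print(get_or_set(store, 'graphl_search', expensive_query))  # search results (cached)
print(len(calls))  # 1
```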
I am creating a Flask app with two panels: one for the admin and the other for users. In the app scheme I have a utilities file where I keep most of the redundant variables besides other functions (by redundant I mean I use them in many different parts of the application).
utilities.py
# ...
opening_hour = db_session.query(Table.column).one()[0] # 10:00 AM
# ...
The Table.column, or let's say the opening_hour variable's value above, is entered into the database by the admin through his/her web panel. This value prevents users from accessing certain functionality of the application before the specified hour.
The problem is:
If the admin changes that value through the web panel, let's say to 11:00 AM, the change is not reflected directly in the users' panel, "even though it was entered into the database!".
If I want the new opening_hour value to take effect, I have to manually shut down the app and restart it, "sometimes even this doesn't work".
I have tried adding gc.collect()... it did nothing. There must be a way around this other than shutting down and restarting the app manually. First, I doubt the admin will be able to do that; second, even if he/she can, it would be really frustrating.
If someone can relate to this, please explain why this occurs and how to get around it. Thanks in advance :)
You are trying to add advanced logic to a simple variable: you want to query the DB only once and then periodically force the variable to update by re-loading the module. That's not how modules and the import mechanism are supposed to be used.
If you want to access a possibly changing value from the database, you have to read it again each time.
The solution is, instead of a variable, to define a function opening_hours that executes the DB query every time you check the value:
def opening_hours():
    return (
        db_session.query(Table.column).one()[0],  # 10:00 AM
        db_session.query(Table.column).one()[1]   # 5:00 PM
    )
Now you may not want to query the database every time you check the value, but rather cache it for a few minutes. The easiest way is to use cachetools for that:

import cachetools

cache = cachetools.TTLCache(maxsize=10, ttl=60)  # cache for 60 seconds

@cachetools.cached(cache)
def opening_hours():
    return (
        db_session.query(Table.column).one()[0],  # 10:00 AM
        db_session.query(Table.column).one()[1]   # 5:00 PM
    )
Also, since you are using Flask, you can create a decorator that controls access to your views depending on the time of day:

from datetime import datetime
from functools import wraps
from flask import render_template

def only_within_office_hours(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        start_time, stop_time = opening_hours()
        if not (start_time <= datetime.now().time() <= stop_time):
            return render_template('office_hours_error.html')
        return f(*args, **kwargs)
    return decorated_function
that you can use like:

@app.route('/secret_page')
@login_required
@only_within_office_hours
def secret_page():
    pass
I have a bit of code that is causing my page to load pretty slow (49 queries in 128 ms). This is the landing page for my site -- so it needs to load snappily.
The following is my views.py that creates a feed of latest updates on the site and is causing the slowest load times from what I can see in the Debug toolbar:
def product_feed(request):
    """Return all site activity from friends, etc."""
    latestparts = Part.objects.all().prefetch_related('uniparts').order_by('-added')
    latestdesigns = Design.objects.all().order_by('-added')
    latest = list(latestparts) + list(latestdesigns)
    latestupdates = sorted(latest, key=lambda x: x.added, reverse=True)
    latestupdates = latestupdates[0:8]
    # only get the unique avatars that we need to put on the page so we're not pinging for images for each update
    uniqueusers = User.objects.filter(id__in=Part.objects.values_list('adder', flat=True))
    return render_to_response("homepage.html", {
        "uniqueusers": uniqueusers,
        "latestupdates": latestupdates
    }, context_instance=RequestContext(request))
The line that takes the most time seems to be:
latest = list(latestparts) + list(latestdesigns) (25ms)
There are two others, at 17ms (sitewide announcements) and 25ms (adding tagged items to each product feed item) respectively, that I am also investigating.
Does anyone see any ways in which I can optimize the loading of my activity feed?
You never need more than 8 items, so limit your queries. And don't forget to make sure that added in both models is indexed.
latestparts = Part.objects.all().prefetch_related('uniparts').order_by('-added')[:8]
latestdesigns = Design.objects.all().order_by('-added')[:8]
For bonus marks, eliminate the magic number.
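With both querysets capped, merging them and keeping the newest 8 overall is cheap. A sketch with plain numbers standing in for `added` timestamps (heapq.merge assumes each input is already sorted, which the order_by calls guarantee):

```python
import heapq
from itertools import islice

FEED_SIZE = 8  # the magic number, named

parts   = [9, 7, 4, 2]  # stand-ins for Part rows, newest (largest) first
designs = [8, 6, 5, 1]  # stand-ins for Design rows, newest first

# merge two descending-sorted lists and keep the first FEED_SIZE items
latest = list(islice(heapq.merge(parts, designs, reverse=True), FEED_SIZE))
print(latest)  # [9, 8, 7, 6, 5, 4, 2, 1]
```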
After making those queries a bit faster, you might want to check out memcache to store the most common query results.
Moreover, I believe adder is a ForeignKey to the User model.

Part.objects.distinct().values_list('adder', flat=True)

The above line is a QuerySet of unique adder values, which I believe is exactly what you meant. It saves you performing a subquery.