Here's my view (simplified):
@login_required(login_url='/try_again')
def change_bar(request):
    foo_id = request.POST['fid']
    bar_id = request.POST['bid']
    foo = models.Foo.objects.get(id=foo_id)
    if foo.value > 42:
        bar = models.Bar.objects.get(id=bar_id)
        bar.value = foo.value
        bar.save()
    return other_view(request)
Now I'd like to check whether this view works properly (in this simplified model, whether the Bar instance changes its value when it should). How do I go about it?
I'm going to assume you mean automated testing rather than just checking that the post request seems to work. If you do mean the latter, just check by executing the request and checking the values of the relevant Foo and Bar in a shell or in the admin.
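For a quick manual check in the Django shell, something along these lines works (the URL path, app name, credentials and ids below are placeholders I'm assuming, and you need to log in first because of @login_required):

from django.test import Client
from myapp import models  # placeholder app name

c = Client()
c.login(username='someuser', password='somepass')  # change_bar is behind @login_required
c.post('/change_bar/', data={'fid': 1, 'bid': 2})  # placeholder URL and ids
print(models.Bar.objects.get(id=2).value)  # did it pick up foo.value?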
The best way to go about sending POST requests is using a Client. Assuming the name of the view is my_view:
from django.test import Client
from django.urls import reverse
c = Client()
c.post(reverse('my_view'), data={'fid':43, 'bid':20})
But you still need some initial data in the database, and you need to check that the changes you expected were actually made. This is where you could use a TestCase:
from django.test import TestCase, Client
from django.urls import reverse

class FooBarTestCase(TestCase):
    def setUp(self):
        # create some Foo and Bar data, using models.Foo.objects.create etc.
        # setUp runs before each test - the database is rolled back between tests
        ...

    def test_bar_not_changed(self):
        # write a post request which you expect not to change the value
        # of a Bar instance, then check that the value didn't change
        self.assertEqual(bar.value, old_bar.value)

    def test_bar_changes(self):
        # write a post request which you expect to change the value of
        # a Bar instance, then assert that it changed as expected
        self.assertEqual(foo.value, bar.value)
A library I find useful for making it easier to set up test data is factory_boy (FactoryBoy). It reduces the boilerplate of creating new Foo or Bar instances for testing. Another option is to write fixtures, but I find those less flexible if your models change.
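For illustration, a minimal factory_boy sketch might look like this (assuming Foo and Bar each just have an integer value field, and "myapp" stands in for your app name):

import factory
from myapp import models  # placeholder app name

class FooFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Foo

    value = 50  # above the view's threshold of 42

class BarFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Bar

    value = 0

In a test's setUp you can then write foo = FooFactory() or bar = BarFactory(value=7) instead of spelling out objects.create calls everywhere.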
I'd also recommend this book if you want to know more about testing in Python. It's Django-oriented, but the principles apply to other frameworks and contexts.
You can try putting print statements between the lines to see whether the correct value is saved. Also, for the update, instead of querying with get() and then saving (bar.save()), you can use the filter() and update() methods:
@login_required(login_url='/try_again')
def change_bar(request):
    foo_id = request.POST['fid']
    bar_id = request.POST['bid']
    foo = models.Foo.objects.get(id=foo_id)
    if foo.value > 42:
        models.Bar.objects.filter(id=bar_id).update(value=foo.value)
        # bar.value = foo.value
        # bar.save()
    return other_view(request)
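One thing worth noting about the filter()/update() route: update() issues a single SQL UPDATE and bypasses the model's save() method and any signals. Also, if you check the result on an instance you already hold in a shell or test, refresh it first, roughly like so:

bar = models.Bar.objects.get(id=bar_id)
# ... the view runs Bar.objects.filter(id=bar_id).update(...) ...
bar.refresh_from_db()  # the in-memory instance is stale after update()
print(bar.value)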
I'm trying to make a custom Plotly graphic on a Wagtail homepage.
I got this far. I'm overriding the Wagtail Page model by altering the context returned to the template. Am I doing this the right way? Is this possible in models.py?
Thanks in advance.
from django.db import models
from wagtail.models import Page
from wagtail.fields import RichTextField
from wagtail.admin.panels import FieldPanel

import psycopg2
from psycopg2 import sql
import pandas as pd
import plotly.graph_objs as go
from plotly.offline import plot


class CasPage(Page):
    body = RichTextField(blank=True)
    content_panels = Page.content_panels + [
        FieldPanel('body'),
    ]

    def get_connection(self):
        try:
            return psycopg2.connect(
                database="xxxx",
                user="xxxx",
                password="xxxx",
                host="xxxxxxxxxxxxx",
                port=xxxxx,
            )
        except:
            return False

    conn = get_connection()
    cursor = conn.cursor()

    strquery = (f'''SELECT t.datum, t.grwaarde - LAG(t.grwaarde,1) OVER (ORDER BY datum) AS
                    gebruiktgas
                    FROM XXX
                ''')

    data = pd.read_sql(strquery, conn)

    fig1 = go.Figure(
        data=data,
        layout=go.Layout(
            title="Gas-verbruik",
            yaxis_title="aantal M3")
    )

    output = plot(fig1, output_type='div', include_plotlyjs=False)

    # https://stackoverflow.com/questions/32626815/wagtail-views-extra-context
    def get_context(self, request):
        context = super(CasPage, self).get_context(request)
        context['output'] = output
        return context
Kind of the right track. You should move all the plot code into its own method though. At the moment, it runs the plot code when the site initialises and then stays stored in memory.
There are three usual ways to get the plot into the rendered page:
1. As you've done, with context
2. As a property or method of the page class
3. As a template tag called from the template
The first two have more or less the same effect, except that the 2nd makes the property available anywhere, not just in the template. The context method runs before the page starts rendering; the other two happen during that process. The only real difference is that if you're using template caching, the context method will still run each time the page is loaded, while the other two only run when the cache is invalid, or if the code is escaped out of the cache (for fragment caching).
To call the plot as a property of your page class, you'd just pull the code out into a def with the @property decorator:
class CasPage(Page):
    ....

    @property
    def plot(self):
        try:
            conn = psycopg2.connect(
                database="xxxx",
                user="xxxx",
                password="xxxx",
                host="xxxxxxxxxxxxx",
                port=xxxxx,
            )
            cursor = conn.cursor()
            strquery = (f'''SELECT t.datum, t.grwaarde - LAG(t.grwaarde,1) OVER (ORDER BY datum) AS
                            gebruiktgas FROM XXX''')
            data = pd.read_sql(strquery, conn)
            fig1 = go.Figure(
                data=data,
                layout=go.Layout(
                    title="Gas-verbruik",
                    yaxis_title="aantal M3")
            )
            return plot(fig1, output_type='div', include_plotlyjs=False)
        except Exception as e:
            print(f"{type(e).__name__} at line {e.__traceback__.tb_lineno} of {__file__}: {e}")
            return None
^ I haven't tried this code ... it should work as is, but no guarantees I didn't make a typo ;)
Now you can access your plot with {{ self.plot }} in the template.
If you want to stick with context, then you'd stay with the def above but just amend your output line to
context['output'] = self.plot
Template tags are more useful when they're being used in StructBlocks and not part of a page class like this, or where you have code that you want to re-use in multiple templates.
Then you'd move all that plot code into a template tag file, register it and call it in the template with {% plot %}. Wagtail template tags work the same as Django: https://docs.djangoproject.com/en/4.1/howto/custom-template-tags/
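For illustration, a bare-bones version of such a tag could look like this (the file name and tag name are placeholders of mine, not something prescribed by Wagtail):

# yourapp/templatetags/plot_tags.py -- hypothetical file name
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.simple_tag
def plot():
    # build the plotly <div> here, same code as in the plot property above
    html = "..."  # placeholder for plot(fig1, output_type='div', include_plotlyjs=False)
    return mark_safe(html)  # the output is HTML, so mark it safe (or use the |safe filter)

Then {% load plot_tags %} and {% plot %} in the template.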
Is the plot data outside of the site database? If not, you could probably get the data via the ORM if it was defined as a model. If so, it's probably worth writing a view (or stored procedure if you want to pass parameters) on the db server and calling that rather than hard coding the SQL into your python.
The other consideration is the page load time - if the dataset is big, this could take a while and prevent the page from loading. You'd probably want a front-end solution in that case.
I have a Django application that executes a full-text search on a database. The view that executes this query is my search_view (I'm omitting some parts for the sake of simplicity). It just retrieves the results of the search on my Post model and sends them to the template:
from django.contrib.postgres.search import SearchQuery, SearchRank, TrigramSimilarity
from django.db.models import F
from django.shortcuts import render

from core.models import Post  # assuming the Post model lives in the "core" app

def search_view(request):
    query = request.GET.get('q')
    search_query = SearchQuery(query, config='english')
    qs = Post.objects.annotate(
        rank=SearchRank(F('vector_column'), search_query) + TrigramSimilarity('post_title', query)
    ).filter(rank__gte=0.15).order_by('-rank')
    context = {
        'results': qs
    }
    return render(request, 'core/search.html', context)
The application is working just fine. The problem is with a test I created. Here is my tests.py:
from django.test import TestCase
from django.urls import reverse

class SearchViewTests(TestCase):
    def test_search_without_results(self):
        """
        If the user's query did not retrieve anything,
        show them a message informing them of that.
        """
        response = self.client.get(reverse('core:search') + '?q=eksjeispowjskdjies')
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "We didn't find anything on our database. We're sorry")
This test raises a ProgrammingError exception:
django.db.utils.ProgrammingError: function similarity(character varying, unknown) does not exist
LINE 1: ...plainto_tsquery('english'::regconfig, 'eksjeispowjskdjies')) + SIMILARITY...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I understand this exception well, because I have run into it before. The SIMILARITY function in Postgres accepts two arguments, and both need to be of type TEXT. The exception is raised because the second argument (my query term) is of type UNKNOWN, so the function won't work and Django raises the exception. What I don't understand is why it happens here, because the actual search is working! Even in the shell it works perfectly:
In [1]: from django.test import Client
In [2]: c = Client()
In [3]: response = c.get(reverse('core:search') + '?page=1&q=eksjeispowjskdjies')
In [4]: response
Out[4]: <HttpResponse status_code=200, "text/html; charset=utf-8">
Any ideas why the test doesn't work, while the actual execution of the app works and the console test works too?
I had the same problem, and this is how I solved it in my case:
First of all, the problem was that when Django creates the test database it is going to use for the tests, it does not necessarily run all of your migrations; in my case it simply created the tables based on the models.
This means that migrations that create some extension in your database, like pg_trgm, do not run when the test database is created.
One way to overcome this is to use a fixture in your conftest.py file which creates said extensions before any tests run.
So, in your conftest.py file add the following:
# the following fixture is used to add the pg_trgm extension to the test database
import pytest
from django.db import connection

@pytest.fixture(scope="session", autouse=True)
def django_db_setup(django_db_setup, django_db_blocker):
    """Test session DB setup."""
    with django_db_blocker.unblock():
        with connection.cursor() as cursor:
            cursor.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
You can of course replace pg_trgm with any other extension you require.
PS: You must make sure the extension you are trying to use works for the test database you have chosen. In order to change the database used by Django you can change the value of
DATABASES = {'default': env.db('your_database_connection_uri')}
in your application's settings.py.
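Since the question above uses Django's own TestCase rather than pytest, another route that should work is creating the extension in a migration, so it is applied whenever the test database is built. Django ships an operation for this; a minimal sketch, assuming a Postgres database and a database user allowed to create extensions:

# e.g. core/migrations/0002_enable_pg_trgm.py -- hypothetical file name
from django.contrib.postgres.operations import TrigramExtension
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [
        # ('core', '0001_initial'),  # adjust to your app's latest migration
    ]

    operations = [
        TrigramExtension(),  # runs CREATE EXTENSION IF NOT EXISTS pg_trgm
    ]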
One of my methods in a project I'm working on looks like this:
from django.core.cache import cache
from app import models

def _get_active_children(parent_id, timestamp):
    children = cache.get(f"active_children_{parent_id}")
    if children is None:
        children = models.Children.objects.filter(parent_id=parent_id).active(
            dt=timestamp
        )
        cache.set(
            f"active_children_{parent_id}",
            children,
            60 * 10,
        )
    return children
The issue is that I don't want caching to occur when this method is called via the command line (it's inside a task). So I'm wondering if there's a way to disable caching in that case.
Ideally I want to use a context manager, so that any cache calls inside the context are ignored (or pushed to a DummyCache/LocalMem cache which wouldn't affect my main Redis cache).
I've considered passing skip_cache=True through the methods, but this is pretty brittle and I'm sure there's a more elegant solution. Additionally, I've tried using mock.patch, but I'm not sure that works outside of test classes.
My ideal solution would look something like:
def task():
    ...
    _get_active_children(parent_id, timestamp)

with no_cache:
    task()
I have a solution (but I think there's a better one out there):
from unittest.mock import patch

from django.core.cache.backends.dummy import DummyCache
from django.utils.module_loading import import_string

def no_cache(module_str, cache_object_str='cache'):
    """Example usage: with no_cache('app.tasks', 'cache'):"""
    module_ = import_string(module_str)
    return patch.object(module_, cache_object_str, DummyCache('mock', {}))
Inspired by this.
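Usage then looks roughly like this (the module paths and names are placeholders, assuming _get_active_children lives in app.services):

from app.utils import no_cache      # wherever you keep the helper above
from app import services            # the module that does `from django.core.cache import cache`

def task(parent_id, timestamp):
    with no_cache('app.services', 'cache'):
        # inside this block, cache.get/cache.set in app.services hit a DummyCache,
        # so nothing is read from or written to the real Redis cache
        return services._get_active_children(parent_id, timestamp)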
I'm scraping a page successfully, and it returns a single item. I don't want to save the scraped item to the database or to a file. I need to get it inside a Django view.
My view is as follows:
from scrapy import signals
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from scrapy.signalmanager import dispatcher
# plus the spider itself, e.g. from myproject.spiders import MySpider

def start_crawl(process_number, court):
    """
    Starts the crawler.

    Args:
        process_number (str): Process number to be found.
        court (str): Court of the process.
    """
    runner = CrawlerRunner(get_project_settings())
    results = list()

    def crawler_results(sender, parse_result, **kwargs):
        results.append(parse_result)

    dispatcher.connect(crawler_results, signal=signals.item_passed)
    process_info = runner.crawl(MySpider, process_number=process_number, court=court)
    return results
I followed this solution, but the results list is always empty.
I read something about creating a custom middleware and getting the results in its process_spider_output method.
How can I get the desired result?
Thanks!
I managed to implement something like that in one of my projects. It is a mini-project and I was looking for a quick solution. You might need to modify it or add multi-threading support etc. if you put it in a production environment.
Overview
I created an ItemPipeline that just add the items into a InMemoryItemStore helper. Then, in my __main__ code I wait for the crawler to finish, and pop all the items out of the InMemoryItemStore. Then I can manipulate the items as I wish.
Code
items_store.py
Hacky in-memory store. It is not very elegant, but it got the job done for me. Modify and improve it if you wish. I've implemented it as a simple class object so I can import it anywhere in the project and use it without passing an instance around.
class InMemoryItemStore(object):
    __ITEM_STORE = None

    @classmethod
    def pop_items(cls):
        items = cls.__ITEM_STORE or []
        cls.__ITEM_STORE = None
        return items

    @classmethod
    def add_item(cls, item):
        if not cls.__ITEM_STORE:
            cls.__ITEM_STORE = []
        cls.__ITEM_STORE.append(item)
pipelines.py
This pipeline will store the objects in the in-memory store from the snippet above. All items are simply returned to keep the regular pipeline flow intact. If you don't want to pass some items down to the other pipelines, simply change process_item to not return those items.
from <your-project>.items_store import InMemoryItemStore


class StoreInMemoryPipeline(object):
    """Add items to the in-memory item store."""

    def process_item(self, item, spider):
        InMemoryItemStore.add_item(item)
        return item
settings.py
Now add StoreInMemoryPipeline to the scraper settings. If you changed the process_item method above, make sure you set the proper priority here (adjusting the 100 below).
ITEM_PIPELINES = {
    ...
    '<your-project-name>.pipelines.StoreInMemoryPipeline': 100,
    ...
}
main.py
This is where I tie all these things together. I clean the in-memory store, run the crawler, and fetch all the items.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from <your-project>.items_store import InMemoryItemStore
from <your-project>.spiders.your_spider import YourSpider


def get_crawler_items(**kwargs):
    InMemoryItemStore.pop_items()

    process = CrawlerProcess(get_project_settings())
    process.crawl(YourSpider, **kwargs)
    process.start()  # the script will block here until the crawling is finished
    process.stop()
    return InMemoryItemStore.pop_items()


if __name__ == "__main__":
    items = get_crawler_items()
If you really want to collect all the data in a "special" object: store the data in a separate pipeline, like the duplicates-filter example (https://doc.scrapy.org/en/latest/topics/item-pipeline.html#duplicates-filter), and in close_spider (https://doc.scrapy.org/en/latest/topics/item-pipeline.html?highlight=close_spider#close_spider) hand the collected data over to your Django object.
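A rough sketch of that idea (the class and attribute names are mine, not from the docs): the pipeline collects the items while the spider runs and hands them off in close_spider, for example by stashing them on the spider so the calling code can read them afterwards.

class CollectItemsPipeline:
    def open_spider(self, spider):
        self.items = []

    def process_item(self, item, spider):
        self.items.append(item)
        return item

    def close_spider(self, spider):
        # hand the collected items to your "special" Django-side object;
        # here they are simply attached to the spider instance
        spider.collected_items = self.items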
I'm finally setting up testing for my Django app, but I'm having difficulties getting started. I'm using model_mommy to create dynamic data for my tests, but have the following problem:
The view I'm testing is supposed to show me all the assignments a particular user has to complete. To test this, I want to create 500 assignments, log into the app and check if they are shown. So far I have the following test cases:
class TestLogin(TestCase):
    def setUp(self):
        self.client = Client()
        user = User.objects.create(username='sam')
        user.set_password('samspassword')
        user.save()

    def test_login(self):
        self.client.login(username='sam', password='samspassword')
        response = self.client.get('/')
        print(response.content)
        self.assertEqual(response.status_code, 200)
and
from model_mommy.recipe import Recipe, related

class TestShowAssignments(TestCase):
    def setUp(self):
        user_recipe = Recipe(User, username='sam', password='samspassword')
        self.assignment = Recipe(Assignment,
                                 coders=related(user_recipe))
        self.assignments = self.assignment.make(_quantity=500)

    def test_assignments(self):
        self.assertIsInstance(self.assignments[0], Assignment)
        self.assertEqual(len(self.assignments), 500)
The first test passes fine and does what it should: TestLogin logs the user in and shows his account page.
The trouble starts with TestShowAssignments, which creates 500 assignments, but if I look at them with print(self.assignments[0].coders), I get auth.User.None. So it doesn't add the user I defined as a relation to the assignments. What might be important here is that the coders field in the model is an M2M field, which I tried to address by using related, but that doesn't seem to work.
What also doesn't work is logging in: if I use the same code I use for logging in during TestLogin in TestShowAssignments, I can't log in and see the user page.
So, my question: How do I use model_mommy to create Assignments and add them to a specific user, so that I can log in as that user and see if the assignments are displayed properly?
Do you want 500 Assignments that all have User "sam" as a single entry in the 'coders' field? If so, try:
from model_mommy import mommy
...

class TestShowAssignments(TestCase):
    def setUp(self):
        self.user = mommy.make(User, username='sam', password='samspassword')
        self.assignments = mommy.make(Assignment, coders=[self.user], _quantity=500)
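One caveat from my side (not part of the original answer): passing password='samspassword' to mommy.make stores that raw string in the password field rather than a hash, so self.client.login() won't authenticate with it. If you also want to cover the "log in and see the assignments" part, create the user with a usable password, roughly like this (the URL and the final assertion are assumptions about your view):

class TestShowAssignments(TestCase):
    def setUp(self):
        self.user = User.objects.create_user(username='sam', password='samspassword')
        self.assignments = mommy.make(Assignment, coders=[self.user], _quantity=500)

    def test_assignments_displayed(self):
        self.client.login(username='sam', password='samspassword')
        response = self.client.get('/')  # assumption: the assignments are shown on the account page at '/'
        self.assertEqual(response.status_code, 200)
        self.assertEqual(Assignment.objects.filter(coders=self.user).count(), 500)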