I'm trying to find something that will raise an exception upon finding anything that even remotely looks like HTML or JavaScript. I've figured out how to do it for individual views, but that isn't a scalable solution, and ultimately I need to prevent code from being saved to the database no matter which view is targeted by the injection attack.
Here is the functionality I'm looking for.
ILLEGAL_CHARS = list('<>[]{}():;,')  # list() yields single characters; '...'.split() would return the whole string as one item
# bunch of code in between
for value in [company_name, url, status, information, lt_type, company_source]:
    if any(char in value for char in ILLEGAL_CHARS):
        raise Exception(f"You passed one of several illegal characters: {ILLEGAL_CHARS}")
I'm using Django REST Framework, so I have to handle it on the backend. Thanks.
Actually, you don't need to sanitize user input just for display purposes: when you render it in a template, {{ object }} is auto-escaped, so no HTML or JavaScript will be executed unless you explicitly mark it as safe with {{ object|safe }}. But if you also want to keep it out of the database, this might help: Sanitizing HTML in submitted form data
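If you do want to reject such input before it reaches the database, one scalable approach in Django REST Framework is a reusable validator attached to serializer fields, so every view that goes through a serializer is covered. A minimal sketch - the serializer and field names here are hypothetical, and DRF's ValidationError replaces the bare Exception:

from rest_framework import serializers

ILLEGAL_CHARS = list('<>[]{}():;,')

def no_markup(value):
    # Reject any value containing a character commonly used in HTML/JS
    if any(char in value for char in ILLEGAL_CHARS):
        raise serializers.ValidationError(
            f"You passed one of several illegal characters: {ILLEGAL_CHARS}"
        )

class CompanySerializer(serializers.Serializer):  # hypothetical serializer
    company_name = serializers.CharField(validators=[no_markup])
    url = serializers.CharField(validators=[no_markup])
    information = serializers.CharField(validators=[no_markup])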
I am very new to writing unit tests in Python and need some help.
I have a Flask-based application which internally calls another URL over REST.
Sample code:
import requests
from flask import Flask

sample = Flask(__name__)  # assuming "sample" is the Flask application

@sample.route('/testing', methods=["GET"])
def testing():
    resp = requests.get("some url")
    data = resp.json()
    resp1 = requests.post("another url", data)
    return str(resp1.status_code)  # status_code is an attribute, not a method
Since the other module isn't available yet, I need to write unit tests for this module.
That means I need to mock these REST requests and return custom data and a status code for each request.
Can anyone please advise on how to proceed with this?
I tried various links online, but none of them worked as I expected.
You can mock, that is, create a custom stand-in for external APIs in a number of ways. In general, it involves a fake API that returns pre-set values when a call is made.
Also, please consider the tight coupling between the two APIs; that might cause problems in the future. Try to reformulate your architecture so that it uses looser coupling.
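For a concrete starting point, here is a minimal sketch using the standard library's unittest.mock, assuming the view above lives in a module named sample.py (the module name and the canned data are assumptions):

from unittest import mock

import sample  # hypothetical module containing the testing() view

@mock.patch("sample.requests.post")
@mock.patch("sample.requests.get")  # the bottom-most decorator maps to the first mock argument
def test_testing(mock_get, mock_post):
    # requests.get(...) returns an object whose .json() yields canned data
    mock_get.return_value.json.return_value = {"id": 1}
    # requests.post(...) returns an object with a canned status code
    mock_post.return_value.status_code = 201

    assert sample.testing() == "201"
    mock_get.assert_called_once()
    mock_post.assert_called_once_with("another url", {"id": 1})

Patching "sample.requests.get" follows the usual mock guidance: patch the name where it is looked up, i.e. in the module under test.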
This is an embarrassingly simple question, but working in Django 1.8, what's the best way to write a test that checks that a web page has rendered and contains some text?
I already have simple tests for my views that check the status code and the template used:
def test_call_view_bnf_section(self):
    response = self.client.get('/bnf/0202')
    self.assertEqual(response.status_code, 200)
    self.assertTemplateUsed(response, 'bnf_section.html')
Now I'd like to check, for example, that the text 'BNF Section' is present inside an <h1> tag.
How can I do this? And are these view tests the best place to do that, or should I be writing functional tests using Selenium instead?
For this particular test you can add:
self.assertContains(response, '<h1>BNF Section</h1>')
More in the Django documentation: Writing and running tests and SimpleTestCase.assertContains.
This is an easy test, and there's no need to get involved with Selenium, at least for this.
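If the surrounding markup may vary (extra attributes or whitespace), assertContains also accepts html=True, which parses the needle and the response as HTML and compares them structurally instead of as raw strings:

self.assertContains(response, '<h1>BNF Section</h1>', html=True)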
I am very new to unit testing and am probably doing something wrong, but when I simulate a POST to update a model via the admin backend, it seems like the save_model method on my ModelAdmin isn't being called. I am trying to test this method - what am I doing wrong?
My second, less relevant question: in general, how can I make sure a method is being called during a unit test? Is there some way to list all the methods that were hit?
Below is the code my test is running. In the save_model method on my ModelAdmin for this model, I set the model's foobar attribute to the username of the currently signed-in user. Here is my test:
self.client = Client()
self.client.login(username='username',password='password')
# self.dict is a dictionary of field names and values for mymodel to be updated
response = self.client.post('/admin/myapp/mymodel/%d/' % self.mymodel.id, self.dict)
self.assertEqual(response.status_code,200) # passes
self.assertEqual(self.mymodel.foobar,'username') # fails
self.client.logout()
It fails because self.mymodel.foobar is an empty string - which is what it was before the update. No value for foobar is passed in self.dict, but my save_model method is designed to set it on its own when the update happens. It is also worth noting that my code works correctly and save_model seems to work fine; only my test is failing. Since I am a total noob at TDD, I'm sure the issue is with my test and not my code. Thoughts?
From the code it looks like the problem is that, after posting the form, you don't reload self.mymodel from the database. If you hold a reference to a model object stored in the database, and one or more of the fields on that object is changed in the database, then you will need to reload the object from the database to see the updated values. As detailed in this question, you can do this with something like:
self.mymodel = MyModelClass.objects.get(id=self.mymodel.id)
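On Django 1.8 and later there is also a built-in helper that reloads the instance in place:

self.mymodel.refresh_from_db()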
To answer your second question: probably the most useful way to see what is happening is to use logging to trace what happens inside your save_model method - this will not only help you debug the issue during testing, but also if you encounter any issues in this method when running your application. The Django guide to logging gives an excellent introduction:
https://docs.djangoproject.com/en/dev/topics/logging/
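To illustrate, a minimal sketch of what such a save_model could look like with logging added - the admin class name is made up, and the foobar assignment mirrors what the question describes:

import logging

from django.contrib import admin

logger = logging.getLogger(__name__)

class MyModelAdmin(admin.ModelAdmin):  # hypothetical admin class
    def save_model(self, request, obj, form, change):
        # Log every call so tests and production issues are easy to trace
        logger.debug("save_model called (change=%s) by %s", change, request.user.username)
        obj.foobar = request.user.username
        obj.save()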
I'm NOT new to testing, but I got really confused by the mess of recommendations for testing the different layers in Django.
Some recommend (and they are right) avoiding doctests in models, as they are not maintainable...
Others say don't use fixtures, as they are less flexible than helper functions, for instance...
There are also two camps on Mock objects: one believes in using Mock and isolating the rest of the system, while the other prefers to 'stop mocking and start testing'...
All of the above was mostly about testing models. Functional testing is another story (using test.Client() vs. WebTest vs. etc.).
Is there ANY maintainable, extensible, and proper way to test the different layers?
UPDATE
I am aware of Carl Meyer's talk at PyCon 2012.
I can tell you the unit-testing practices that are working pretty well for my own ends, and I'll give you my reasons:
1.- Use fixtures only for information that is necessary for testing but is not going to change. For example, you need a user for every test you run, so use a base fixture to create users.
2.- Use a factory to create your objects. I personally love FactoryBoy (which comes from FactoryGirl, a Ruby library). I create a separate file called factories.py for every app, where I keep all these objects. This way I keep the objects I need out of the test files, which makes the tests a lot more readable and easier to maintain. A nice thing about this approach is that you create a base object that can be modified if you want to test something else based on some object from the factory. Also, it doesn't depend on Django, so when I started using MongoDB and needed to test those objects, the migration was smooth. Now, having read about factories, it's common to ask "why would I want to use fixtures at all, then?". Since those fixtures should never change, all the extra goodies from factories are sort of useless for them, and Django supports fixtures very well out of the box.
3.- I mock calls to external services, because those calls make my tests very slow and they depend on things that have nothing to do with my code being right or wrong. For example, if my code tweets during a test, I test the real call once to make sure it tweets correctly, copy the response, and mock that object so it returns the exact same response every time without making the actual call. Mocking is also great for testing what happens when things go wrong.
4.- I use a continuous integration server (Jenkins is my recommendation here) which runs the tests every time I push to my staging server and emails me if they fail. This is just great, because it often happens that my last change breaks something else and I forget to run the tests. It also gives you other goodies like a coverage report, pylint/jslint/pep8 checks, and there are many plugins for tracking different statistics.
About your question on testing the front end: Django comes with some helper functions to handle this in a basic way.
This is what I personally use: you can fire GETs and POSTs, log the user in, etc. That's enough for me. I don't tend to use a complete front-end testing engine like Selenium, since I feel it's overkill to test much beyond the business layer. I am sure some will differ, and it always depends on what you are working on.
Besides my opinion, Django 1.4 comes with very handy integration for in-browser testing frameworks.
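For reference, a minimal sketch of that in-browser integration (LiveServerTestCase plus Selenium; the page title check is illustrative):

from django.test import LiveServerTestCase
from selenium.webdriver.firefox.webdriver import WebDriver

class FrontendTest(LiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        cls.selenium = WebDriver()  # real browser driven by Selenium
        super(FrontendTest, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(FrontendTest, cls).tearDownClass()
        cls.selenium.quit()

    def test_homepage_renders(self):
        self.selenium.get(self.live_server_url + '/')
        self.assertIn('Blog', self.selenium.title)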
I'll set up an example app where I can apply these practices, to make them easier to follow. Let's create a very basic blog app:
structure
blogger/
    __init__.py
    models.py
    fixtures/base.json
    factories.py
    tests.py
models.py
from datetime import datetime
from django.contrib.auth.models import User
from django.db import models

class Blog(models.Model):
    user = models.ForeignKey(User)
    text = models.TextField()
    created_on = models.DateTimeField(default=datetime.now)  # pass the callable, not datetime.now(), or the default is frozen at import time
fixtures/base.json
[
    {
        "pk": 1,
        "model": "auth.user",
        "fields": {
            "username": "fragilistic_test",
            "first_name": "demo",
            "last_name": "user",
            "is_active": true,
            "is_superuser": true,
            "is_staff": true,
            "last_login": "2011-08-16 15:59:56",
            "groups": [],
            "user_permissions": [],
            "password": "IAmCrypted!",
            "email": "test@email.com",
            "date_joined": "1923-08-16 13:26:03"
        }
    }
]
factories.py
import factory

from blogger.models import Blog

class BlogFactory(factory.Factory):
    FACTORY_FOR = Blog

    user_id = 1  # FK to the user loaded from the base fixture
    text = "My test text blog of fun"
tests.py
import datetime

from django.test import TestCase
from mocker import Mocker

from blogger.factories import BlogFactory
from blogger.models import Blog

class BlogTest(TestCase):
    fixtures = ['base']  # loads the user fixture

    def setUp(self):
        self.blog = BlogFactory()
        self.blog2 = BlogFactory(text="Another test based on the last one")

    def test_blog_text(self):
        self.assertEqual(Blog.objects.filter(user__id=1).count(), 2)

    def test_post_blog(self):
        # Let's suppose we wrote some views
        self.client.login(username='fragilistic_test', password='IAmCrypted!')
        response = self.client.post('/blogs', {'text': "test text", 'user': '1'})
        self.assertEqual(response.status_code, 200)
        self.assertEqual(Blog.objects.filter(text='test text').count(), 1)

    def test_mocker(self):
        # Mock datetime so the blog post is created on the date we want
        mocker = Mocker()
        co = mocker.replace('datetime.datetime')
        co.now()
        mocker.result(datetime.datetime(2012, 6, 12))
        with mocker:
            res = Blog.objects.create(user_id=1, text='test')
        self.assertEqual(res.created_on, datetime.datetime(2012, 6, 12))

    def tearDown(self):
        # Django rolls the test database back for us, but to be strict:
        Blog.objects.all().delete()
Notice I am using some specific technologies for the sake of the example (which, by the way, I haven't tested).
I have to insist: this may not be the standard best practice (and I doubt one exists), but it is working pretty well for me.
I really like the suggestions from @Hassek and want to stress what an excellent point he makes about the obvious lack of standard practices. This holds true for many aspects of Django, not just testing, since all of us approach the framework with different concerns in mind. Add to that the great degree of flexibility we have in designing our applications, and we often end up with drastically different solutions to the same problem.
Having said that, though, most of us still strive for many of the same goals when testing our applications, mainly:
Keeping our test modules neatly organized
Creating reusable assertion helpers and utility functions that reduce the LOC of test methods, making them more compact and readable
Showing that there is an obvious, systematic approach to how the application components are tested
Like @Hassek's, these are my preferences, which may directly conflict with practices you already apply, but I feel it's nice to share things we've proven to work, if only in our own case.
No test case fixtures
Application fixtures work great in cases where you have certain constant model data you'd like to guarantee is present in the database, say a collection of towns with their names and post office numbers.
However, I see this as an inflexible solution for providing test case data. Test fixtures are very verbose; model mutations force you either to go through a lengthy process of reproducing the fixture data or to perform tedious manual changes, and maintaining referential integrity by hand is difficult.
Additionally, you'll most likely use many kinds of fixtures in your tests, not just for models: you might want to store the response body from API requests, create fixtures that target NoSQL database backends, have fixtures that are used to populate form data, etc.
In the end, utilizing APIs to create data is concise, readable and it makes it much easier to spot relations, so most of us resort to using factories for dynamically creating fixtures.
Make extensive use of factories
Factory functions and methods are preferable to stamping out your test data by hand. You can create helper factory module-level functions or test case methods that you may want to reuse across application tests or throughout the whole project. In particular, factory_boy, which @Hassek mentions, gives you the ability to inherit/extend fixture data and do automatic sequencing, both of which might look a bit clumsy if you did them by hand.
The ultimate goal of utilizing factories is to cut down on code-duplication and streamline how you create test data. I cannot give you exact metrics, but I'm sure if you go through your test methods with a discerning eye you will notice that a large portion of your test code is mainly preparing the data that you'll need to drive your tests.
When this is done incorrectly, reading and maintaining tests becomes an exhausting activity. This tends to escalate when data mutations lead to not-so-obvious test failures across the board, at which point you'll not be able to apply systematic refactoring efforts.
My personal approach to this problem is to start with a myproject.factory module that creates easy-to-access references to QuerySet.create methods for my models and also for any objects I might regularly use in most of my application tests:
from django.contrib.auth.models import User, AnonymousUser
from django.test import RequestFactory

from myproject.cars.models import Manufacturer, Car
from myproject.stores.models import Store

create_user = User.objects.create_user
create_manufacturer = Manufacturer.objects.create
create_car = Car.objects.create
create_store = Store.objects.create

_factory = RequestFactory()

def get(path='/', data={}, user=AnonymousUser(), **extra):
    request = _factory.get(path, data, **extra)
    request.user = user
    return request

def post(path='/', data={}, user=AnonymousUser(), **extra):
    request = _factory.post(path, data, **extra)
    request.user = user
    return request
This in turn allows me to do something like this:
from myproject import factory as f  # terse alias

# A verbose, albeit readable, approach to creating instances
manufacturer = f.create_manufacturer(name='Foomobiles')
car1 = f.create_car(manufacturer=manufacturer, name='Foo')
car2 = f.create_car(manufacturer=manufacturer, name='Bar')

# Reduce the crud for creating some common objects
manufacturer = f.create_manufacturer(name='Foomobiles')
data = {'name': 'Foo', 'manufacturer': manufacturer.id}
request = f.post(data=data)
view = CarCreateView()
response = view.post(request)
Most people are rigorous about reducing code duplication, but I actually introduce some intentionally whenever I feel it contributes to test comprehensibility. Again, whichever approach you take to factories, the goal is to minimize the amount of brainfuck you introduce into the header of each test method.
Use mocks, but use them wisely
I'm a fan of mock, as I've developed an appreciation for the author's solution to the problem he set out to address. The tools the package provides allow you to form test assertions by injecting expected outcomes.
# Creating mocks to simplify tests
from django.test import RequestFactory
from mock import Mock

factory = RequestFactory()
request = factory.get('/')
request.user = Mock(is_authenticated=lambda: True)  # a mock of an authenticated user
view = DispatchForAuthenticatedOnlyView.as_view()
response = view(request)
# Patching objects to return expected data
from mock import patch

@patch.object(CurrencyApi, 'get_currency_list', return_value={'foo': 1.00, 'bar': 15.00})
def test_converts_between_two_currencies(self, currency_list_mock):
    converter = Converter()  # uses CurrencyApi under the hood
    result = converter.convert(from_currency='bar', to_currency='foo', amount=45)
    self.assertEqual(3, result)  # 15 bar to the base unit vs. 1 foo: 45 bar == 3 foo
As you can see, mocks are really helpful, but they have a nasty side effect: your mocks clearly show that you're making assumptions about how your application behaves, which introduces coupling. If Converter is refactored to use something other than CurrencyApi, someone may not immediately understand why the test method is suddenly failing.
So with great power comes great responsibility--if you're going to be a smartass and use mocks to avoid deeply rooted test obstacles, you may completely obfuscate the true nature of your test failures.
Above all, be consistent. Very very consistent
This is the most important point to be made. Be consistent with absolutely everything:
how you organize code in each of your test modules
how you introduce test cases for your application components
how you introduce test methods for asserting the behavior of those components
how you structure test methods
how you approach testing common components (class-based views, models, forms, etc.)
how you apply reuse
For most projects, the bit about how you're collaboratively going to approach testing is often overlooked. While the application code itself looks perfect--adhering to style guides, using Python idioms, reapplying Django's own approach to solving related problems, textbook use of framework components, etc.--no one really makes an effort to figure out how to turn test code into a valid, useful communication tool, and it's a shame, because perhaps clear guidelines for test code are all it takes.
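To make that concrete, here is a minimal sketch of one such convention applied to a test module--the app, URLs, and model names are made up, and it reuses the factory module from earlier; the point is only that every test module in the project follows the same shape:

# myapp/tests/test_views.py
# Layout convention: imports, then one TestCase per component,
# with one test method per behavior.
from django.test import TestCase

from myproject import factory as f  # the shared factory module shown above

class CarListViewTest(TestCase):

    def setUp(self):
        self.manufacturer = f.create_manufacturer(name='Foomobiles')
        self.car = f.create_car(manufacturer=self.manufacturer, name='Foo')

    def test_lists_existing_cars(self):
        response = self.client.get('/cars/')
        self.assertContains(response, 'Foo')

    def test_renders_with_no_cars(self):
        self.car.delete()
        response = self.client.get('/cars/')
        self.assertEqual(response.status_code, 200)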