Django 1.5 Timezone.now() - django

I am a newbie trying to make my unit test pass, but I'm having problems with a DateTimeField.
In my settings I have USE_TZ = True and TIME_ZONE set.
I am using MongoDB.
At first the test gave me an error complaining about comparing offset-naive and offset-aware datetimes, so I changed auto_now_add=True to datetime.datetime.utcnow().replace(tzinfo=utc).
I still couldn't get the right date and time for my TIME_ZONE.
Then I put this in my DATABASES setting (settings.py):
'OPTIONS': {
    'tz_aware': True,
}
Now I can change my TIME_ZONE and the date and time show my local time, not UTC.
But when I run a test on the model:
nf.data_emissao = timezone.now()
...
# check if the nf is in the database
lista_nfse = Nfse.objects.all()
self.assertEquals(lista_nfse.count(), 1)
nfse_no_banco = lista_nfse[0]
...
self.assertEquals( nfse_no_banco.data_emissao, nf.data_emissao)
My test fails:
AssertionError: datetime.datetime(2013, 8, 10, 2, 49, 59, 391000, tzinfo=
<bson.tz_util.FixedOffset object at 0x2bdd1d0>) != datetime.datetime(2013, 8, 10, 2, 49, 59,
391122, tzinfo=<UTC>)
I see the difference between 391000 and 391122 but don't know how to fix it.

The problem looks to be that you are comparing two values that were each assigned the time 'now' at two different points in time.
Writing unit tests that work with automatically generated dates is always tricky when asserting exact date values, because time keeps moving. However, there are a few techniques that can help create reliable tests in these scenarios:
If you are trying to assert that nfse_no_banco.data_emissao contains the time 'now', instead of asserting an exact value you could assert that the field's value falls within the last x milliseconds. This gives a fair level of confidence that the value was 'now' when it was assigned. The downsides are that (a) the test can become unreliable if it happens to take longer than x milliseconds to run, and (b) it would return a false positive if, due to a programming error, the value was incorrectly assigned a time very close to now (which is highly unlikely).
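For example, a minimal sketch of that check inside the existing test method (the 500 ms tolerance is an arbitrary assumption):
import datetime
from django.utils import timezone

delta = timezone.now() - nfse_no_banco.data_emissao
self.assertTrue(datetime.timedelta(0) <= delta <= datetime.timedelta(milliseconds=500))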
You can monkey-patch the source of 'now' so that it returns a pre-set value for testing purposes, and then assert that this value was assigned to nfse_no_banco.data_emissao. (Note that datetime.datetime.utcnow itself cannot be reassigned, because datetime.datetime is a C-implemented type; patch the function your code actually calls instead, e.g. django.utils.timezone.now.) The downside is that it adds a little complexity to your test setup and teardown, but it results in a good test if the goal of your assertion is to verify that the field was assigned the time 'now'.
You can simply assert that the value of the field is not None (e.g. self.assertIsNotNone(nfse_no_banco.data_emissao)). Although this is a much weaker assertion, it often suffices when a framework is doing the assignment for you (such as auto_now_add=True in Django), and the major upside is that the test is very simple and reliable.
The best approach depends on what you are trying to assert. From your question it appears that you really want to assert that nfse_no_banco.data_emissao was assigned the time 'now', and that you are assigning it yourself (rather than relying on a framework to do it for you), so the second approach makes the most sense.
Below is pseudo-code showing how you could do this in your test:
from django.test import TestCase
from django.utils import timezone

# Create a constant holding a fixed, timezone-aware value for 'now'
NOW = timezone.now()

# Define a test replacement for timezone.now
def fixed_now():
    return NOW

class MyTest(TestCase):
    def setUp(self):
        # Replace the real version of timezone.now with our test version
        self.real_now = timezone.now
        timezone.now = fixed_now

    def tearDown(self):
        # Undo the monkey patch and restore the real version of timezone.now
        timezone.now = self.real_now

    def test_value_is_now(self):
        lista_nfse = Nfse.objects.all()
        self.assertEquals(lista_nfse.count(), 1)
        nfse_no_banco = lista_nfse[0]
        ...
        self.assertEquals(NOW, nfse_no_banco.data_emissao)

Related

Django: Query gives zero results after first match in for loop

The following code works fine until the filter gets its first match(es). After that, on the following runs of the for loop, the query always returns 0. Even if that result object would also be a match for the next row, it doesn't see it, so I don't think this is a caching issue (which would have been far-fetched anyhow).
for row in self._ordr.OrderRows.SalesOrderRow:
    available = row.Row_amount - Slot.objects.filter(rows_ids=row.Row_id).count()
    if available > 0 or row.Row_id in self.instance.rows_ids:
        # some code
Any ideas what I am doing wrong here?
This is the model code for that rows_ids field.
from django.db import models
from django_mysql.models import ListCharField

class Slot(models.Model):
    rows_ids = ListCharField(
        base_field=models.IntegerField(),
        size=10,
        max_length=(10 * 21),
        null=True,
    )
Went through the documentation once more, and apparently it's a ListCharField issue. To match against the individual items in the list, you shouldn't compare directly against the field but use the field__contains lookup instead. So the correct code is:
for row in self._ordr.OrderRows.SalesOrderRow:
    available = row.Row_amount - Slot.objects.filter(rows_ids__contains=row.Row_id).count()
    if available > 0 or row.Row_id in self.instance.rows_ids:
        # some code
Finally got it to work with this.
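As background (not from the original answer): ListCharField stores the list as a comma-separated string in a single CHAR column, so a plain equality filter only matches when the whole stored list equals the value. A quick illustration, assuming the Slot model above:
Slot.objects.create(rows_ids=[21, 22, 23])

Slot.objects.filter(rows_ids=21).count()            # 0 -- exact match against the whole stored list
Slot.objects.filter(rows_ids__contains=21).count()  # 1 -- membership lookup on the list items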

Custom Django count filtering

A lot of websites will display:
"1.8K pages" instead of "1,830 pages"
or
"43.2M pages" instead of "43,200,123 pages"
Is there a way to do this in Django?
For example, the following code returns the number of objects in the queryset (e.g. 3,123):
Books.objects.all().count()
Is there a way to add a custom count filter to return "3.1K pages" instead of "3,123 pages"?
Thank you in advance!
First off, I wouldn't do anything that alters the way the ORM portion of Django works. There are two places this could be done, and if you are only planning on using it in one place, do it on the frontend. That said, there are many ways to achieve this result. To spout off a few ideas: you could write a property or classmethod on your model that calls count() and converts the result to something a little more human-readable on the backend, or, if you want to do it on the frontend, find a JavaScript library that does the conversion.
I will edit this later from my computer and add an example.
Edit: To answer your comment, which one is easier to implement depends on your skills in Python versus JavaScript. I prefer Python, so I would probably do it somewhere on the model.
Edit 2: I have written an example showing how I would do it as a classmethod on a base model, or on the model for which you need these numbers. I found a Python package called humanize, took its function that converts numbers to a readable form, and modified it a bit to handle thousands while dropping some of the very-large-number conversion.
from django.db import models

def readable_number(value, short=False):
    # Modified from the package `humanize` on PyPI.
    powers = [10 ** x for x in (3, 6, 9, 12, 15, 18)]
    human_powers = ('thousand', 'million', 'billion', 'trillion', 'quadrillion')
    human_powers_short = ('K', 'M', 'B', 'T', 'QD')
    try:
        value = int(value)
    except (TypeError, ValueError):
        return value
    if value < powers[0]:
        return str(value)
    for ordinal, power in enumerate(powers[1:], 1):
        if value < power:
            chopped = value / float(powers[ordinal - 1])
            chopped = format(chopped, '.1f')
            if not short:
                return '{} {}'.format(chopped, human_powers[ordinal - 1])
            return '{}{}'.format(chopped, human_powers_short[ordinal - 1])
    return str(value)  # values of 10**18 and above fall through unchanged

class MyModel(models.Model):
    @classmethod
    def readable_count(cls, short=True):
        count = cls.objects.all().count()
        return readable_number(count, short=short)

print(readable_number(62220, True))  # Returns '62.2K'
print(readable_number(6555500))      # Returns '6.6 million'
I would stick that readable_number function in some sort of utils module and just import it in your models file. Once you have that, you can put the resulting string wherever you like on your frontend.
You would use MyModel.readable_count() to get that value. If you want it under MyModel.objects.readable_count() you will need to write a custom object manager for your model, but that is a bit more advanced.
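For completeness, a minimal sketch of that custom-manager variant (the manager name here is an assumption, not from the original answer):
from django.db import models

class ReadableCountManager(models.Manager):
    def readable_count(self, short=True):
        return readable_number(self.count(), short=short)

class MyModel(models.Model):
    objects = ReadableCountManager()

# Usage: MyModel.objects.readable_count()  -> e.g. '62.2K'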

Python - null object pattern with generators

It is apparently Pythonic to return values that can be treated as 'False' versions of the successful return type, such that if MyIterableObject: do_things() is a simple way to deal with the output whether or not it is actually there.
With generators, bool(MyGenerator) is always True even if it would have a len of 0 or something equally empty. So while I could write something like the following:
result = list(get_generator(*my_variables))
if result:
    do_stuff(result)
It seems like it defeats the benefit of having a generator in the first place.
Perhaps I'm just missing a language feature or something, but what is the pythonic language construct for explicitly indicating that work is not to be done with empty generators?
To be clear, I'd like to be able to give the user some insight as to how much work the script actually did (if any) - contextual snippet as follows:
# Python 2.7
templates = files_from_folder(path_to_folder)
result = list(get_same_sections(templates))  # get_same_sections returns a generator

if not result:
    msg("No data to sync.")
    sys.exit()

for data in result:
    for i, tpl in zip(data, templates):
        tpl['sections'][i]['uuid'] = data[-1]

msg("{} sections found to sync up.".format(len(result)))
It works, but I think that ultimately it's a waste to change the generator into a list just to see if there's any work to do, so I assume there's a better way, yes?
EDIT: I get the sense that generators just aren't supposed to be used in this way, but I will add an example to show my reasoning.
There's a semi-popular 'helper function' in Python that you see now and again when you need to traverse a structure like a nested dict or what-have-you. Usually called getnode or getn, whenever I see it, it reads something like this:
def get_node(seq, path):
    for p in path:
        if p in seq:
            seq = seq[p]
        else:
            return ()
    return seq
So in this way, you can make it easier to deal with the results of a complicated path to data in a nested structure without always checking for None or try/except when you're not actually dealing with 'something exceptional'.
mydata = get_node(my_container, ('path', 2, 'some', 'data'))
if mydata:  # could also be "for x in mydata", etc.
    do_work(mydata)
else:
    something_else()
It's looking less like this kind of syntax would (or could) exist with generators, without writing a class that handles generators in this way as has been suggested.
A generator does not have a length until you've exhausted its iterations; the only way to find out whether it yields anything is to consume it:
items = list(myGenerator)
if items:
    # do something
Unless you write a class that implements __nonzero__ (__bool__ in Python 3) and internally looks at your items list:
class MyGenerator(object):
    def __init__(self, items):
        self.items = items

    def __iter__(self):
        for i in self.items:
            yield i

    def __nonzero__(self):
        return bool(self.items)
>>> bool(MyGenerator([]))
False
>>> bool(MyGenerator([1]))
True
>>>
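A different way to avoid materialising the whole generator, not covered in the answer above, is to peek at the first item with a sentinel and chain it back in front if there is work to do. A rough sketch against the question's snippet (function names taken from the question; if None is a legitimate item, use a unique sentinel object instead):
from itertools import chain

def sync_sections(templates):
    gen = get_same_sections(templates)  # the generator from the question
    first = next(gen, None)             # sentinel avoids StopIteration
    if first is None:
        msg("No data to sync.")
        return

    count = 0
    for data in chain([first], gen):    # put the peeked item back in front
        count += 1
        for i, tpl in zip(data, templates):
            tpl['sections'][i]['uuid'] = data[-1]
    msg("{} sections found to sync up.".format(count))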

Django schedule: Difference between event.get_occurrence() and event.get_occurrences()?

Based on django-schedule. I can't find the guy who made it.
Pardon me if I'm missing something, but I've been trying to get an event's occurrences, preferably for a given day.
When I use event.get_occurrence(date), it always returns nothing. But when I use event.get_occurrences(before_date, after_date), suddenly the occurrences on the previously attempted date show up.
Why won't this work with just one datetime object?
This difference is probably down to the actual design of these two methods. Frankly, get_occurrence is rather flawed in general. A method like this should always return something, even if it's just None, but there are scenarios where it doesn't return at all: namely, if your event doesn't have an rrule and the date you pass to get_occurrence isn't the same as your event's start, then no value is returned.
There's not really anything that can be done about that. It's just flawed code.
Based on the above comment, especially for the case where the event doesn't return an occurrence, the snippet below can force the retrieval of an occurrence when you are sure that it exists:
from dateutil.relativedelta import relativedelta

def custom_get_occurrence(event, start_date):
    occurrence = event.get_occurrence(start_date)
    if occurrence is None:
        # Look ahead a few months and pick the occurrence that starts exactly on start_date.
        occurrences = event.get_occurrences(
            start_date, start_date + relativedelta(months=3))
        matches = [o for o in occurrences if o.start == start_date]
        occurrence = matches[0] if matches else None
    return occurrence
The above code resolves the issue that can occur when the default get_occurrence doesn't return a result.
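A quick usage sketch (the event and datetime names here are assumptions):
occurrence = custom_get_occurrence(my_event, my_start_datetime)
if occurrence is not None:
    print(occurrence.start)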

How to make this django attribute name search better?

lcount = Open_Layers.objects.all()
form = SearchForm()
if request.method == 'POST':
    form = SearchForm(request.POST)
    if form.is_valid():
        data = form.cleaned_data
        val = form.cleaned_data['LayerName']
        a = Open_Layers()
        data = []
        for e in lcount:
            if e.Layer_name == val:
                data = val
        return render_to_response('searchresult.html', {'data': data})
    else:
        form = SearchForm()
else:
    return render_to_response('mapsearch.html', {'form': form})
This just returns a result if a particular name matches exactly. How do I change it so that a search for "Park" returns Park1, Park2, Parking, Parkin, i.e. all occurrences of "park"?
You can improve your searching logic by using a list to accumulate the results and the re module to match a larger set of words.
However, this is still pretty limited, error-prone and hard to maintain, and even harder to evolve. Plus you'll never get results as nice as with a proper search engine.
So instead of trying to manually reinvent the wheel, the car and the highway, you should spend some time setting up Haystack. This is now the de facto standard for search in Django.
Use Whoosh as the backend at first; it's going to be easier. If your search gets slow, replace it with Solr.
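For reference (not from the original answer; the paths and app names are assumptions), a minimal Haystack-with-Whoosh setup looks roughly like this:
# settings.py: add 'haystack' to INSTALLED_APPS, then configure the backend
import os

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.whoosh_backend.WhooshEngine',
        'PATH': os.path.join(os.path.dirname(__file__), 'whoosh_index'),
    },
}

# search_indexes.py in your app
from haystack import indexes
from myapp.models import Open_Layers  # assumed import path

class OpenLayersIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, model_attr='Layer_name')

    def get_model(self):
        return Open_Layers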
EDIT:
A simple, clean alternative:
Open_Layers.objects.filter(Layer_name__icontains=val)
This will perform a SQL LIKE, adding the % wildcards for you.
It is going to kill your database if used too often, but I guess this is probably not going to be an issue with your current project.
BTW, you probably want to rename Open_Layers to OpenLayers, as that is the Python PEP 8 naming convention.
Instead of
if e.Layer_name == val:
    data = val
use
if val in e.Layer_name:
    data.append(e.Layer_name)
(and you don't need the line data = form.cleaned_data)
I realise this is an old post, but anyway:
There's a fuzzy string comparison already in the Python standard library:
import difflib
Mainly have a look at:
difflib.SequenceMatcher(None, a='string1', b='string2', autojunk=True).ratio()
More info here:
http://docs.python.org/library/difflib.html#sequencematcher-objects
It returns a ratio of how close the two strings are, between 0 and 1, so instead of testing whether they're equal you choose a similarity ratio.
Things to watch out for: you may want to convert both strings to lower case first (string1.lower()).
Also note you may want to implement your favourite method of splitting the string, i.e. .split() or something using re, so that a search for 'David' against 'David Brent' ranks higher.
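Putting that together with the question's model, a rough sketch (the cutoff value is an arbitrary assumption):
import difflib

def fuzzy_layer_search(val, cutoff=0.5):
    val = val.lower()
    matches = []
    for layer in Open_Layers.objects.all():
        name = layer.Layer_name.lower()
        ratio = difflib.SequenceMatcher(None, val, name).ratio()
        if val in name or ratio >= cutoff:
            matches.append((ratio, layer.Layer_name))
    # best matches first
    return [name for ratio, name in sorted(matches, reverse=True)]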