How to use username as a string in model in django?

I want to use the username of the currently logged-in user as a string, so that I can load the model fields specific to that user. I have created a file 'survey.py' which returns a dictionary, and I want its keys to become the fields.
How can I get the username as a string?
from django.db import models
from django.contrib.auth.models import User
from multiselectfield import MultiSelectField
from survey_a0_duplicate import details, analysis
import ast
class HomeForm1(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    details.loadData(survey_name=user)  # <===== This loads the data for specific user <======
    global f1
    f1 = analysis.getQuestion(in_json=False)  # <==== We get the dictionary here <========
    d = list(f1.keys())
    # ################### assign the filters ###################
    for k in d:
        q = list(f1[k].keys())
        q.sort()
        choices = tuple(map(lambda f: (f, f), q))
        locals()[k] = MultiSelectField(max_length=1000, choices=choices, blank=True)

    def save(self, *args, **kwargs):
        if self.pk is None:
            self.user = self.user.username
        super(HomeForm1, self).save(*args, **kwargs)

    def __str__(self):
        return self.title

This is not how you write Django code. Global variables are a bad idea anyway, but you must never use them in a multi-user, multi-process environment like Django; you will immediately run into thread-safety issues.
Not only is there an explicit global in the code you have shown, there is clearly one inside survey_a0_duplicate - since details.loadData() does not actually return anything but you then "get the dictionary" from analysis.getQuestion. You must remove the globals from both locations.
Also, your save method is totally wrong. You have the user relationship; why would you overwrite it with the username? That not only makes no sense, it specifically destroys the type of the field that you have set. Just don't do it. Remove the entire save method.
But you need to stop messing about with choices at class level. That is never going to work. If you need to set choices dynamically, do it in a form, where you can customise the __init__ method to accept the current user and build up the choices based on that.

Related

Adding a common validation to all text fields on all serializers

I am looking for a way (or several ways if needed) to add a common/shared validation function to all text fields in a DRF API. I hope to be able to do this in the least intrusive way possible, since there are already so many serializers throughout the API.
This is a horrible thing, and wrong, but it's a requirement. Saying "don't do that" or "you shouldn't do this" is not helpful. I know. It's not up to me.
Given a serializer like this:
class MySerializer(ModelSerializer):
    description = CharField()

    class Meta:
        model = SomeModel
        fields = ["name", "description"]
... both of these would somehow run a validation function. For example, in the base CharField the framework adds two validators, and essentially I'd like to add a third.
class CharField(Field):  # site-packages/rest_framework/fields.py
    def __init__(self):
        ...
        self.validators.append(ProhibitNullCharactersValidator())
        self.validators.append(ProhibitSurrogateCharactersValidator())
Is there some clever way to do this? I don't want to resort to literally hacking the source code, or replacing CharField throughout the application.
The solution I ended up going with is below. It runs at Django startup: my settings module imports a z_patches.py where other patches like this live (replacing the default filter classes, etc.).
import functools

def wrap_init(old_init):
    @functools.wraps(old_init)
    def __new_init__(self, **kwargs):
        old_init(self, **kwargs)
        self.validators.append(MyCustomValidator())
    return __new_init__

CharField.__init__ = wrap_init(CharField.__init__)
If you absolutely know the risks, then you could do something like this in one of your apps.py:
from django.apps import AppConfig
from rest_framework.fields import Field
from rest_framework.serializers import CharField

def _init(self, **kwargs):
    ...
    Field.__init__(self, **kwargs)
    ...
    self.validators.append(YourCustomValidator())

class MyAppConfig(AppConfig):
    ...
    def ready(self):
        CharField.__init__ = _init
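The wrap-and-replace trick itself can be sanity-checked in isolation, without DRF installed. Here is a self-contained sketch using a stand-in field class (`FakeField` and `no_tabs` are made up for the demo):

```python
import functools

class FakeField:
    """Stand-in for rest_framework.fields.CharField."""
    def __init__(self, **kwargs):
        self.validators = []  # the framework's built-in validators would go here

def no_tabs(value):
    """A hypothetical custom validator."""
    if "\t" in value:
        raise ValueError("tabs are not allowed")

def wrap_init(old_init):
    @functools.wraps(old_init)  # preserves the original __name__/__doc__
    def new_init(self, **kwargs):
        old_init(self, **kwargs)
        self.validators.append(no_tabs)
    return new_init

FakeField.__init__ = wrap_init(FakeField.__init__)

field = FakeField()
assert field.validators == [no_tabs]  # the patched-in validator is present
```

Because the wrapper calls the original `__init__` first, every built-in validator is still registered; your validator is simply appended, exactly as if the framework had added a third one itself.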

Django custom validation before the data is saved (Enforce at the database level)

This is an extension of my post here: preventing crud operations on django model
A short intro to the problem: I'm currently using a package called django-river to implement a workflow system in my application. The issue is that it does not have predefined 'start', 'dropped' and 'completed' states. Its states are stored as Django model instances. This means that my application is unable to programmatically differentiate between the states, so the labels of these states have to be hardcoded into my program (or do they? Maybe someone has a solution to this?).
Supposing there is no solution other than hardcoding the states into my application, I would have to prevent users from updating or deleting the states that I pre-created initially.
My idea is to have a validation check within the Django model's save() method. This check would verify that the first three instances of the State model are always 'start', 'deactivated' and 'completed', in that order. The check would fail whenever a user tries to change these items at the ORM level.
However, it would seem that there are two issues with this:
I believe the Django admin doesn't run the model class's save() method.
Someone is still able to change the states as long as the change does not pass through the save() method, e.g. via raw SQL commands against the DB.
Although it is unlikely to happen, changing the names would 'break' my application, and therefore I wish to be very sure that no one can edit these three predefined states.
Is there a foolproof way to do this?
My idea is to have a form of validation check within the django model's save method.
If I understand your description correctly, maybe you can just override the save() method of your model like so:
class MyModel(models.Model):
    [..]

    def save(self, *args, **kwargs):
        # Put your logic here ..
        super(MyModel, self).save(*args, **kwargs)
I got the answer from the Django documentation:
from django.core.exceptions import ValidationError
from django.utils.translation import gettext_lazy as _

def validate_even(value):
    if value % 2 != 0:
        raise ValidationError(
            _('%(value)s is not an even number'),
            params={'value': value},
        )
You can add this to a model field via the field’s validators argument:
from django.db import models

class MyModel(models.Model):
    even_field = models.IntegerField(validators=[validate_even])
FYI: it is not mandatory to use gettext_lazy; you can use a plain message as follows:
from django.core.exceptions import ValidationError

def validate_even(value):
    if value % 2 != 0:
        raise ValidationError(
            '%(value)s is not an even number',
            params={'value': value},
        )

How to plug in a specific validator for all cases of a built-in type?

I recently noticed that some of the user-submitted entries in a database contain incorrectly encoded strings, such as ó where ó was clearly meant. It comes from copy-pasting from other websites that aren't properly encoded, which is beyond my control. I discovered that I can add this validator to catch such cases and raise an exception - here's an example with an attached model:
from django.db import models
from django.utils.translation import gettext_lazy as _
from django.core.exceptions import ValidationError
import ftfy

def validate_ftfy(value):
    value_ftfy = ftfy.ftfy(value)
    if value_ftfy != value:
        raise ValidationError(
            _('Potential UTF-8 encoding error: %(value)r'
              ' decoded to %(value_ftfy)r.'),
            params={'value': value, 'value_ftfy': value_ftfy},
        )

class Message(models.Model):
    content = models.CharField(max_length=1000, validators=[validate_ftfy])

    def save(self, *args, **kwargs):
        self.full_clean()
        return super(Message, self).save(*args, **kwargs)
The problem is that now that I discovered it, I see no point skipping it in any of my instances of CharField, TextField and the like. Is there a way to plug in this validator to all data types, so that if anything non-binary has invalid UTF-8, I can count on it not making it to the database?
There is no hook to add additional validators to built-in fields and I'm not sure it's a good idea as they are used elsewhere in the Django core.
I think the best option for you is to define a custom field with the validation already applied, and use it as an alternative to CharField, e.g.:
class FtfyCharField(CharField):
    default_validators = [validate_ftfy]

class Message(models.Model):
    content = FtfyCharField(max_length=1000)
If you wanted to apply it to other types of field you could implement it as a mixin, eg:
class FtfyFieldMixin(models.Field):
    default_validators = [validate_ftfy]

class FtfyCharField(models.CharField, FtfyFieldMixin):
    pass

class FtfyTextField(models.TextField, FtfyFieldMixin):
    pass

Correct Way to Validate Django Model Objects?

I'm still trying to understand the correct way to validate a Django model object using a custom validator at the model level. I know that validation is usually done within a form or model form. However, I want to ensure the integrity of my data at the model level if I'm interacting with it via the ORM in the Python shell. Here's my current approach:
from django.db import models
from django.core import validators
from django.core.exceptions import ValidationError

def validate_gender(value):
    """ Custom validator """
    if value not in ('m', 'f', 'M', 'F'):
        raise ValidationError(u'%s is not a valid value for gender.' % value)

class Person(models.Model):
    name = models.CharField(max_length=128)
    age = models.IntegerField()
    gender = models.CharField(max_length=1, validators=[validate_gender])

    def save(self, *args, **kwargs):
        """ Override Person's save """
        self.full_clean(exclude=None)
        super(Person, self).save(*args, **kwargs)
Here are my questions:
Should I create a custom validation function, designate it as a validator, and then override the Person's save() function as I've done above? (By the way, I know I could validate my gender choices using the 'choices' field option but I created 'validate_gender' for the purpose of illustration).
If I really want to ensure the integrity of my data, should I not only write Django unit tests for testing at the model layer but also equivalent database-level unit tests using Python/Psycopg? I've noticed that Django unit tests, which raise ValidationErrors, only test the model's understanding of the database schema using a copy of the database. Even if I were to use South for migrations, any database-level constraints are limited to what Django can understand and translate into a Postgres constraint. If I need a custom constraint that Django can't replicate, I could potentially enter data into my database that violates that constraint if I'm interacting with the database directly via the psql terminal.
Thanks!
I had a similar misunderstanding of the ORM when I first started with Django.
No, don't put self.full_clean() inside of save. Either
A) use a ModelForm (which will cause all the same validation to occur - note: ModelForm.is_valid() won't call Model.full_clean explicitly, but will perform the exact same checks as Model.full_clean). Example:
class PersonForm(forms.ModelForm):
    class Meta:
        model = Person
        fields = '__all__'  # required on Django 1.8+

def add_person(request):
    if request.method == 'POST':
        form = PersonForm(request.POST, request.FILES)
        if form.is_valid():  # Performs your validation, including ``validate_gender``
            person = form.save()
            return redirect('some-other-view')
    else:
        form = PersonForm()
    # ... return response with ``form`` in the context for rendering in a template
Also note, forms aren't for use only in views that render them in templates - they're great for any sort of use, including an API, etc. After running form.is_valid() and getting errors, you'll have form.errors which is a dictionary containing all the errors in the form, including a key called '__all__' which will contain non-field errors.
B) Simply use model_instance.full_clean() in your view (or other logical application layer), instead of using a form, but forms are a nice abstraction for this.
I don't really have an answer to your second question, but I've never run into such a problem, even in large projects (the current project I work on with my company has 146 tables), and I don't suspect it'll be a concern in your case either.

django model - on_delete=models.PROTECT()

I'm trying to use on_delete with my models but my IDE is asking me for: collector, fields, sub_objs, using (i.e. ..., on_delete=models.PROTECT(collector, fields, sub_objs, using)).
Can someone please tell me what these are and give me a quick example, because I can't find them documented anywhere :(
Ignore your IDE. It is trying to get you to call the models.PROTECT function, which does indeed take those arguments. But you actually want to pass the function itself:
my_field = models.ForeignKey(..., on_delete=models.PROTECT)
ie without the parentheses that would call the function.
(Insert rant about using an IDE with a dynamic language here...)
Import it like this (Python 2.7):
from django.db.models.deletion import PROTECT
Then you can use it directly:
category = ForeignKey(TCategory, PROTECT, null=False, blank=False)
models.PROTECT blocks the deletion by raising django.db.models.ProtectedError (a subclass of IntegrityError).
If you would rather raise a custom exception of your own, you can connect a pre_delete signal handler:
from django.db import IntegrityError
from django.db.models.signals import pre_delete

class ModelIsProtectedError(IntegrityError):
    pass

def prevent_deletions(sender, instance, *args, **kwargs):
    raise ModelIsProtectedError("This model can not be deleted")

# in your models.py:
pre_delete.connect(prevent_deletions, sender=<your model>)