I have a model with a custom JSON serializer that performs some processing before dumping to JSON.
Now, when fetching a single object I want to use the custom serializer from the model to fetch the entire object (with the processing mentioned above). When fetching a list I want to use the default serializer to fetch only the headers (render only the model fields).
I looked into three options:
Overriding obj_get
def obj_get(self, bundle, **kwargs):
    obj = ComplexModel.objects.get(pk=kwargs['pk'])
    return obj.to_serializable()
I got this error:
{"error": "The object LONG JSON DUMP has an empty attribute 'description' and doesn't allow a default or null value."}
I'm not sure why this is happening - the description field is nullable. Plus, why is Tastypie running validation on objects that are already in the database, and... while fetching??
Using dehydrate
def dehydrate(self, bundle):
    return bundle.obj.to_serializable()
This is great, but the method is called for each object, so I can't tell whether I'm fetching a list or a single object. The result here is the fully serialized objects whether it's a list or a single entry.
Creating a custom serializer
class CustomComplexSerializer(Serializer):
    def to_json(self, data, options=None):
        if isinstance(data, ComplexModel):
            data = data.to_serializable()
        return super(CustomComplexSerializer, self).to_json(data)
Same problem here: when fetching one entry the serializer receives the object in data, but when fetching a list it receives a dict (odd...). I could check whether the data is an instance of dict as well, but testing for the type of ComplexModel felt awkward enough.
So what is the best way to implement custom serialization for fetching only a single entry?
Just for future reference: I think I found the right way to do this, and it's by overriding full_dehydrate.
def full_dehydrate(self, bundle, for_list=False):
    if not for_list:
        return bundle.obj.to_serializable()
    return super(ReportResource, self).full_dehydrate(bundle, for_list)
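The to_serializable() method itself isn't shown in the question. As a minimal sketch of what such a method might look like (plain Python stand-in, not real Django; the field names and the derived summary field are invented for illustration):

```python
class ComplexModel:
    """Plain-Python stand-in for the Django model; only the method shape matters."""

    def __init__(self, pk, name, description=None):
        self.pk = pk
        self.name = name
        self.description = description

    def to_serializable(self):
        # Hypothetical: flatten model fields plus computed values into a dict
        # that is ready for json.dumps().
        return {
            "id": self.pk,
            "name": self.name,
            "description": self.description or "",
            "summary": "%s (#%d)" % (self.name, self.pk),  # derived field
        }

payload = ComplexModel(1, "report").to_serializable()
```

Because full_dehydrate returns this dict directly for detail requests, whatever extra processing lives in to_serializable() ends up in the single-object response only.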
Related
How to use the PUT method for creating an object at a particular id if no object is available at that id in Django Rest Framework?
You can try update_or_create(). For example:
class YourAPIView(APIView):
    def put(self, request, **kwargs):
        serializer = YourSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        obj, created = YourModel.objects.update_or_create(
            id=kwargs['id'],
            defaults=serializer.validated_data)
        return Response()
A RESTful API should error out on a PUT request for an object that doesn't exist. The idea is that if it had existed at some point (to create the id), it has since been deleted, and it makes more sense to keep it deleted than to re-create it.
This is especially true if the id is auto-generated, and even more so if it's an auto-incrementing integer id like the default id of Django models. If you were to support this functionality in that case, a user could create a row with an id that the table's sequence hasn't reached yet, potentially leading to duplicate-key errors later.
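To illustrate that hazard with a toy example (FakeTable is a made-up stand-in, not a real database API): a PUT-as-create with a client-chosen id bypasses the auto-increment sequence, so a later plain insert hands out the same id and collides.

```python
from itertools import count

class FakeTable:
    """Toy stand-in for a table with an auto-incrementing primary key."""

    def __init__(self):
        self._next_id = count(1)  # mimics the database sequence
        self.rows = {}

    def insert(self, **fields):
        row_id = next(self._next_id)
        if row_id in self.rows:
            # what the real database would reject with a duplicate-key error
            raise RuntimeError("duplicate key %d" % row_id)
        self.rows[row_id] = fields
        return row_id

    def put(self, row_id, **fields):
        # PUT-as-create with a client-chosen id skips the sequence entirely
        self.rows[row_id] = fields

table = FakeTable()
table.put(1, name="created via PUT")  # client picks id=1; the sequence is unaware
try:
    table.insert(name="created via POST")  # the sequence also hands out id=1
    collided = False
except RuntimeError:
    collided = True
```

Real databases behave differently in the details (Postgres sequences, for instance, don't advance on explicit-id inserts), but the collision pattern is the same.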
I have a POST endpoint that accepts a JSON payload, parses values from it, creates dictionaries from those values, and then feeds those dictionaries to model serializers to create objects.
I don't believe DRF is meant for what I'm trying to do here, but it is a requirement.
My main question using the example below is this:
Currently, if an instance already exists, Django throws a unique-constraint error (exactly as it should). However, since this is kind of a weird endpoint, I need those errors ignored: if, for example, a product already exists, instead of raising the unique-constraint error it should just continue on and create the platform.
The rest of the app will require that error to be thrown; it is only in this function that I wish to ignore it. Serializers are shared throughout the app, so I don't really want to touch or override the Serializer in this case.
def job_start(request, platform_name="other", script_version="1"):
    # load in data
    json_data = json.loads(request.body.decode('utf-8'))
    jenkins_vars = jenkins_lib(json_data)

    # create dictionary of key/values required to create Product
    product_dict = {}
    product_dict['name'] = jenkins_vars.product
    product_dict['product_age'] = jenkins_vars.age

    # create object via DRF
    serializer = ProductSerializer(data=product_dict)
    if serializer.is_valid():
        serializer.save()

    # create dictionary of key/values required to create Platform
    platform_dict = {}
    platform_dict['name'] = jenkins_vars.platform_name
    platform_dict['platform'] = jenkins_vars.platform_id

    # create object via DRF
    serializer = PlatformSerializer(data=platform_dict)
    if serializer.is_valid():
        serializer.save()
Also any advice on how to better accomplish above would be appreciated.
Using Django Rest Framework 3, Function Based Views, and the ModelSerializer (more specifically the HyperlinkedModelSerializer).
When a user submits a form from the client, I have a view that takes the request data, uses it to call to an external API, then uses the data from the external API to populate data for a model serializer.
I believe I have this part working properly, and from what I read, you are supposed to use context and validate().
In my model serializer, I have so far just this one overridden function:
from django.core.validators import URLValidator

def validate(self, data):
    if 'foo_url' in self.context:
        data['foo_url'] = self.context['foo_url']
        URLValidator(data['foo_url'])
    if 'bar_url' in self.context:
        data['bar_url'] = self.context['bar_url']
        URLValidator(data['bar_url'])
    return super(SomeSerializer, self).validate(data)
Just in case, the relevant view code is like so:
context = {'request': request}
...
context['foo_url'] = foo_url
context['bar_url'] = bar_url

s = SomeSerializer(data=request.data, context=context)
if s.is_valid():
    s.save(user=request.user)
    return Response(s.data, status=status.HTTP_201_CREATED)
Now assuming I have the right idea going (my model does populate its foo_url and bar_url fields from the corresponding context data), where I get confused is how the validation is not working. If I give it bad data, the model serializer does not reject it.
I assumed that in validate(), by adding the context data to the data, the data would be checked for validity when is_valid() was called. Maybe that's not the case; in particular, when I print out s (after constructing the serializer but before calling is_valid()) there is no indication that the request data has been populated with the context data from validate() (I don't know if it should be).
So I tried calling the URL validators directly in the validate() method, but it still doesn't seem to work: no errors despite giving it invalid data like 'asdf' or an empty Python dict ({}). My test assertions show that the field indeed contains invalid data like '{}'.
What would be the proper way to do this?
You're not calling the validator.
By doing URLValidator(data['bar_url']) you're actually building a URL validator with custom schemes (see the docs), and that's it. The proper code should be:
URLValidator()(data['bar_url'])
Where you build a default url validator and then validate the value.
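The distinction is construct-then-call. A toy stand-in (made up for illustration, mirroring only the shape of django.core.validators.URLValidator) makes the failure mode visible:

```python
class UrlValidatorSketch:
    """Made-up stand-in for URLValidator's shape: the constructor only
    configures the validator; calling the *instance* does the validating."""

    def __init__(self, schemes=None):
        # Constructing never raises -- this is why URLValidator(value)
        # silently does nothing useful: the value is consumed as configuration.
        self.schemes = schemes or ["http", "https"]

    def __call__(self, value):
        if not any(value.startswith(s + "://") for s in self.schemes):
            raise ValueError("%r is not a valid URL" % value)

UrlValidatorSketch("asdf")            # builds a (misconfigured) validator, no error
UrlValidatorSketch()("https://x.io")  # correct pattern: construct, then call
try:
    UrlValidatorSketch()("asdf")
    rejected = False
except ValueError:
    rejected = True
```

The real URLValidator works the same way: the one-argument constructor call in the question builds a validator and throws it away without ever validating anything.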
But anyway, I would not use this approach. What I would do instead is add the extra data directly (not using the context) and let DRF do the validation by declaring the right fields:
# Somewhere in your view
request.data['bar_url'] = 'some_url'

# In serializer:
class MySerializer(serializers.ModelSerializer):
    bar_url = serializers.URLField()

    class Meta:
        fields = ('bar_url', ...)
To answer your comment
I also don't understand how this also manages to make it past Django's model validation
See this answer:
Why doesn't django's model.save() call full_clean()?
By default Django does not automatically call the .full_clean() method, so you can save a model instance with invalid values (unless the constraints are enforced at the database level).
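A toy illustration of that split (plain Python, not actual Django classes; the URL check stands in for field validation):

```python
class ModelSketch:
    """Stand-in showing Django's split between save() and full_clean()."""

    def __init__(self, url):
        self.url = url

    def full_clean(self):
        # Field/model validation lives here...
        if not self.url.startswith(("http://", "https://")):
            raise ValueError("invalid url: %r" % self.url)

    def save(self):
        # ...but save() deliberately does not call it, mirroring Django.
        return "saved"

m = ModelSketch("asdf")
result = m.save()   # succeeds: the invalid value would reach the database
try:
    m.full_clean()  # explicit validation does catch it
    validated = True
except ValueError:
    validated = False
```

With a real model you would call instance.full_clean() yourself (e.g. before save) when you want model-level validation to run.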
Complete DRF beginner here... I'm confused about the following concepts:
Let's say I POST some data, including a complex JSON blob for one of the fields, in order to create an object. Where should I actually create this object? Looking at the 3.1 docs, it seems like two places are equally valid for this: Serializer.create() and ViewSet.create(). How do I decide where to create my object and which way is considered "canonical"?
I understand that I need to run Serializer.is_valid() in order to validate the POSTed data. However, what is the difference between .data and .validated_data? They appear to be the same.
Finally, what is the "canonical" way to use a JSONField (e.g. django-jsonfield, but I'm not married to this package/implementation)? I have a model with several JSONFields and would like to use it "correctly" in DRF. I am aware of https://stackoverflow.com/a/28200902/585783, but it doesn't seem enough.
EDIT: My use case is an API POST that includes a complex JSON blob in one of the fields. I need to parse the JSON field, validate it, get/create several objects based on it, link new and existing objects, and finally store the JSON field in one of the new objects. So, I need to do custom validation for this JSON field by parsing it to python:
from django.utils.six import BytesIO
from rest_framework.parsers import JSONParser

class MySerializer(serializers.ModelSerializer):
    my_json_field = JSONSerializerField()

    def validate_my_json_field(self, value):
        stream = BytesIO(value)
        list_of_dicts = JSONParser().parse(stream)
        # do lots of validation to list_of_dicts
        # render list_of_dicts back to a JSON string
        return validated_list_of_dicts_as_json
Now, depending on which way I choose in Concept 1, I have to parse the validated JSON again to create my objects in create(), which doesn't feel right.
Thanks in advance!
The contents of HTTP requests (POST, GET, PUT, DELETE) will always be processed by the views (View, APIView, generic views, viewsets). The serializers are just part of how these views process the requests; they are the "means" of connecting the view layer with the model layer. For what serializers do specifically, please read the first paragraph of this page of the official docs.
To answer #1: you almost always do not need to touch either unless you have a very specific use case. In those extraordinary cases:
You override Serializer.create() if you have to customize how model instances are created from the validated data (e.g. create multiple objects).
You override ViewSet.create() if you need to customize how the actual request itself will be processed. (e.g. if there is an additional query parameter in the request, add some response headers)
To answer #2: you almost never need to call is_valid() yourself when using generic views or ViewSets; they already do it under the hood for you. The serializer's .data and .validated_data are a bit tricky to explain. The former contains the Python-datatype representation of the queryset/model instances you want to serialize, while the latter is the result of validating incoming data against that representation, which in turn can be converted into a model instance. If that did not make sense, refer to Serializing objects and Deserializing objects.
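A toy sketch of that flow (plain Python, not real DRF classes; the field handling is invented for illustration): raw input goes in as initial_data, is_valid() produces cleaned validated_data, and .data is the outgoing representation, which may include derived or read-only fields.

```python
class SerializerSketch:
    """Stand-in for the DRF flow: initial_data -> is_valid() -> validated_data -> data."""

    def __init__(self, data):
        self.initial_data = data      # raw incoming payload
        self.validated_data = None    # populated only after is_valid()

    def is_valid(self):
        # Validation may clean/coerce values on the way in.
        if "name" not in self.initial_data:
            return False
        self.validated_data = {"name": str(self.initial_data["name"]).strip()}
        return True

    @property
    def data(self):
        # The outgoing representation; may add read-only/derived fields.
        return dict(self.validated_data,
                    name_length=len(self.validated_data["name"]))

s = SerializerSketch({"name": "  alice  "})
ok = s.is_valid()
```

So the two attributes can look identical for simple serializers, but they sit on opposite sides of the validation step.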
As for #3, what do you mean by JSON field? As far as I know, Django does not have a model field called JSONField. Is this from a third party package or your own custom written model field? If so, then you will probably have to find or write a package that will let you integrate it with DRF smoothly and "correctly" whatever that means.
EDIT
Your use case is too complicated. I can only give you rough code for this one.
class MyModelSerializer(serializers.ModelSerializer):
    my_json_field = JSONSerializerField()

    class Meta:
        model = MyModel

    def validate(self, data):
        # Get JSON blob
        json_blob = data['my_json_field']
        # Implement your own JSON blob cleanup method
        # Return None if invalid
        json_blob = clean_json_blob(json_blob)
        # Raise HTTP 400 with your custom error message if not valid
        if not json_blob:
            raise serializers.ValidationError({'error': 'Your error message'})
        # Reassign if you made changes and return
        data['my_json_field'] = json_blob
        return data

    def create(self, validated_data):
        json_blob = validated_data['my_json_field']
        # Implement your object creation here
        create_all_other_objects_from_json(json_blob)
        # ...
        # Then return a MyModel instance
        return my_model
I have an instance of a Django (1.6) model (let's take User for example). I would like to get the field values for that model, like I can do for a QuerySet, by calling QuerySet().values('first_name', 'username'). Is that possible, or should I just create a dictionary with the required fields?
Edit: A bit more insight into why I need this (maybe there are other workarounds). I want to return a Django model as a JSON response (by using json.dumps, not Django's JSON serializer), and so far, I can do that by extending the default Python JSON encoder, and treating Django models specially, by converting them to dictionaries using model_to_dict. The problem is that this doesn't get me the related objects, which I need.
Here's my code, for reference:
class JsonEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, models.Model):
            return model_to_dict(obj)  # here I'd like to pull some related values
        return json.JSONEncoder.default(self, obj)
If you want to pull all related values by default, you can do the following:
def default(self, obj):
    if isinstance(obj, models.Model):
        d = model_to_dict(obj)
        for field in obj._meta.fields:
            if field.rel:  # single related object
                rel = getattr(obj, field.name)
                d[field.name] = model_to_dict(rel) if rel is not None else None
        return d
    return json.JSONEncoder.default(self, obj)
This will go one level deep for single related objects, but not for many-to-many relations or reverse foreign keys. Both are possible, but you'll have to find out which methods/attributes on obj._meta return the specific fields.
If you only want to retrieve specific fields, you'll have to manually specify and fetch these fields.