How to pass context when testing deletion of an instance? - django

I've got a couple models, Car and CertifyRequest. When a Car instance is created, modified or deleted I need to create a CertifyRequest, which in turn needs to be manually approved. The CertifyRequest instance contains the time it was created.
I've tested creating and modifying by injecting context={"now": …} into a CarSerializer instance, but I can't figure out how to do the equivalent when deleting:
Delete requests are never passed to the serializer, so I can't access the context in the same way.
I can override destroy in the ModelViewSet and use get_serializer_context within it, but
I can't seem to pass the serializer to the ModelViewSet instance and
the implementation returns a completely different context anyway.
I do not want to use a horrible hack like an optional query parameter or testing that the time is "close to" the current test client time.
The hack I'm using currently is to set an extra now property on the Request which I pass to the view, and to look for that inside destroy.

If you're using Django's timezone.now() in your view to get the current time, you can mock that method to return a specific time in your tests and assert against that.
import datetime
from unittest import mock

from rest_framework import status
from rest_framework.test import APIClient

def test_destroy_car():
    client = APIClient()
    # Patch timezone.now as imported in the views module so the view records a fixed time.
    with mock.patch("application.views.timezone.now") as now:
        destroy_time = datetime.datetime(2019, 4, 23, 11, 2, 0)
        now.return_value = destroy_time
        response = client.delete("/api/car/12345/")
    assert response.status_code == status.HTTP_204_NO_CONTENT, "The request to delete did not return a 204 response"
    certify_request = CertifyRequest.objects.order_by("id").last()
    assert certify_request.created_at == destroy_time, "CertifyRequest destroy time is incorrect"

django.db.transaction.TransactionManagementError: cannot perform saving of other object in model within transaction

Can't seem to find much info about this. This is NOT happening in a Django test. I'm using DATABASES = { "ATOMIC_REQUESTS": True }. Within a method (in a mixin I created) called by the view, I'm trying to perform something like this:
def process_valid(self, view):
    old_id = view.object.id
    view.object.id = None  # need a new instance in db
    view.object.save()
    old_fac = Entfac.objects.get(id=old_id)
    new_fac = view.object
    old_dets = Detfac.objects.filter(fk_ent__id__exact=old_fac.id)
    new_formset = view.DetFormsetClass(view.request.POST, instance=view.object, save_as_new=True)
    if new_formset.is_valid():
        new_dets = new_formset.save()
        new_fac.fk_cancel = old_fac  # need a fk reference to initial fac in new one
        old_fac.fk_cancel = new_fac  # need a fk reference to new in old fac
        # any save() action after this crashes with TransactionManagementError
        new_fac.save()
I do not understand this error. I already created & saved a new object in the db (when I set object.id to None & saved that). Why would creating other objects create an issue for further saves?
I have tried not instantiating the new_dets objects with the formset, but instead explicitly defining them:
new_det = Detfac(...)
new_det.save()
But then again, any further save after that raises the error.
Further details:
Essentially, I have an Entfac model, and a Detfac model that has a ForeignKey to Entfac. I need to instantiate a new Entfac (distinct in db), as well as corresponding new Detfac records for the new Entfac. Then I need to change some values in some of the fields for both the new & old objects, and save all that to the db.
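For reference, here is a minimal sketch of how those two models might relate, using only the field names that appear in the snippets above (fk_cancel, fk_ent, fk_produit, qte); everything else, including the Produit model and the on_delete choices, is an assumption:
from django.db import models

class Entfac(models.Model):
    # Self-reference so a new fac and the one it cancels can point at each other (assumed nullable).
    fk_cancel = models.ForeignKey("self", null=True, blank=True, on_delete=models.SET_NULL)

class Detfac(models.Model):
    fk_ent = models.ForeignKey(Entfac, on_delete=models.CASCADE)          # detail line belongs to an Entfac
    fk_produit = models.ForeignKey("Produit", on_delete=models.PROTECT)   # assumed product model
    qte = models.IntegerField()                                           # quantity, per the signal handler below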
Ah. The code above is fine.
But it turns out signals can be the problem. I had forgotten that upon saving a Detfac, a signal fires in another class and, depending on the circumstances, adds a record to another table (a sort of history table).
That signal handler is just a single operation, something like this:
@receiver(post_save, sender=Detfac)
def quantity_adjust_detfac(sender, **kwargs):
    try:
        detfac_qty = kwargs["instance"].qte
        product = kwargs["instance"].fk_produit
        if kwargs["created"]:
            initial = {}  # bunch of values
            adjustment = HistoQuantity(**initial)
            adjustment.save()
        else:
            pass  # update path elided in the original
    except TypeError as ex:
        logger.error(f"....")
    except AttributeError as ex:
        logger.error(f"....")
In itself, the fact that THIS wasn't marked as atomic isn't problematic. BUT if one of those exceptions is thrown, THEN I get the TransactionManagementError. I am still not 100% sure why, though the Django docs do mention that when wrapping a whole view in atomic (or any chunk of code, for that matter), a try/except inside that block can yield unexpected results, because Django relies on exceptions to decide whether or not to commit the transaction as a whole. And the data I was testing with actually threw the exception (a TypeError when creating the HistoQuantity object).
Wrapping that try/except in a transaction.atomic block worked, however. I'm guessing that this contained the failure, so the outer atomic block could still do its work.
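For illustration, here is a minimal sketch of the pattern the Django docs recommend for this situation: give the risky database work its own nested transaction.atomic() block so a failure only rolls back to that savepoint instead of poisoning the outer ATOMIC_REQUESTS transaction. The handler and field names follow the snippet above; the details are assumptions, not the exact fix that was used.
import logging

from django.db import transaction
from django.db.models.signals import post_save
from django.dispatch import receiver

logger = logging.getLogger(__name__)

@receiver(post_save, sender=Detfac)
def quantity_adjust_detfac(sender, **kwargs):
    try:
        # Nested atomic block: if the insert fails, only this savepoint is rolled
        # back and the surrounding request transaction can still commit.
        with transaction.atomic():
            adjustment = HistoQuantity(qte=kwargs["instance"].qte)
            adjustment.save()
    except (TypeError, AttributeError):
        logger.error("could not record quantity adjustment for %s", kwargs["instance"].pk)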

How to call a function from views.py to tasks.py?

I'm trying this, but it's throwing a TypeError: auto_sms() missing 1 required positional argument: 'request' error.
Now I'm thinking of taking the function from views.py instead and calling it in tasks.py, since the request isn't available in tasks.py. How can I do it? Thanks!
@shared_task
def auto_sms(request):
    responses = Rainfall.objects.filter(
        level='Torrential' or 'Intense',  # note: this expression evaluates to just 'Torrential'
        timestamp__gt=now() - timedelta(days=1),
    )
    count = responses.count()
    if not (count % 10) and count > 0:
        send_sms(request)
    return
Passing the entire request is probably not a good idea, since it can include Django model objects such as a user object. The problem you will face is that if there is an object that is not serializable, you'll get an error when calling the task. So instead of passing the whole request, just send the data that you actually need.
For example, I'm guessing you need the user here to send an SMS to. So instead of passing the whole request with the user object included, just send the user_id and fetch the user inside the task. Basically, you have to make sure that the data you're passing is serializable.
It's generally a good idea to pass the IDs of Django model instances, since the data might change while your task is waiting to run, and you would be working with stale data if you passed the objects themselves.
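As a rough sketch of that idea (the Rainfall query and send_sms helper come from the question; the exact signature of send_sms and the call site in views.py are assumptions):
from datetime import timedelta

from celery import shared_task
from django.contrib.auth import get_user_model
from django.utils.timezone import now

@shared_task
def auto_sms(user_id):
    # Re-fetch the user inside the task instead of serializing the whole request.
    user = get_user_model().objects.get(pk=user_id)
    responses = Rainfall.objects.filter(
        level__in=['Torrential', 'Intense'],
        timestamp__gt=now() - timedelta(days=1),
    )
    count = responses.count()
    if count and count % 10 == 0:
        send_sms(user)  # assumes send_sms can work from a user instead of a request

# In views.py, enqueue the task with only the primitive value:
# auto_sms.delay(request.user.id)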

Choosing correct Django delete view approach

I'm working on a Django website and I'm having trouble figuring out the correct/good way to handle a delete view. From what I found out, there are two ways to approach it:
Option 1:
class ObjectDeleteView(DeleteView):
    model = Object

    def get_success_url(self):
        objectid = self.kwargs['object_id']
        object = Object.objects.get(id=objectid)
        container = object.container
        containerid = container.id
        url = reverse('Containers:showContainerContent', args=[containerid])
        return url

    def get_object(self):
        return get_object_or_404(Object, pk=self.kwargs['object_id'])
Option 2:
def objectDelete(request, object_id):
    object = Object.objects.get(id=object_id)
    container = object.container
    containerid = container.id
    object.delete()
    url = reverse('Containers:showContainerContent', args=[containerid])
    return HttpResponseRedirect(url)
From what I can tell, both do exactly the same thing - once the object is deleted, the user is shown the page under Containers:showContainerContent.
The big difference I'm experiencing is the error I get when running a unit test for this (a simple call to the page and a check of the response code). With option 1 I end up getting the error
django.template.exceptions.TemplateDoesNotExist: ContainerApp/object_confirm_delete.html
Which I understand - I don't have this template; it's the default template for DeleteView, so the error is correct. The thing is, I don't want any extra page. Just redirect the user and that's it.
Also, I tried changing return url to return HttpResponseRedirect(url) in option 1, but the result is the same.
What should I do here? Should I just continue with option 2? What are, or might be, the drawbacks of that approach?
There is a major difference between the two: a class-based delete view and a function-based view (the way you declared it).
The CBV accepts the GET, POST and DELETE HTTP methods. When you send a GET request to a class-based view, it does not delete the object. Instead it renders a template with the object to be deleted in the context. This is basically there for confirmation: a GET request renders a template with text like "Do you really want to delete?" or "Please confirm blah blah..", and a POST or DELETE request actually deletes the object and redirects to the next page.
A FBV, on the other hand, gives you full control over what you want to do. As you declared it, it will accept any request type, delete the object and redirect to the next page, because you have not done any request-type check in your view, which is not a great idea IMHO. You should not allow deletion on GET requests; they should be idempotent. There are plenty of other things the CBV provides: for example, if the object does not exist, your FBV will crash, while the CBV will return a proper 404 response.
So I think there is nothing wrong with using an FBV, but make it strong and secure enough that it handles every case (what if the object does not exist? what about confirmation? GET should be idempotent, so only allow deletion via POST, etc.). Or simply use the CBV. A hardened version of the FBV is sketched below.
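As an illustration of those points, a minimal sketch of a more defensive function-based version, reusing the model and URL names from the question (the require_POST decorator and the exact imports are my additions, not something from the original post):
from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404
from django.urls import reverse
from django.views.decorators.http import require_POST

@require_POST  # refuse GET so merely browsing to the URL can never delete anything
def objectDelete(request, object_id):
    obj = get_object_or_404(Object, pk=object_id)  # proper 404 instead of a crash
    containerid = obj.container.id
    obj.delete()
    return HttpResponseRedirect(reverse('Containers:showContainerContent', args=[containerid]))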

Grails 2.1.x Controller Unit Testing with services

Attempting to unit test a Grails 2.1.x controller that calls a template to show a list of history items with a status. This controller works fine in manual testing, but we're attempting to automate things and I'm getting stuck on this part. Part of our issue may be that the domain object is over-engineered.
The setup for this test may be more integration than unit testing, but I'm not sure I can test the function without it.
The controller action generates a list of history items via a createCriteria query. This list is passed to the template to be rendered.
def loadHistValues() {
    def histDomainObject = new HistoryDom()
    def elements = histDomainObject.createCriteria().list(max: params.max, offset: params.offset)
    render(template: 'searchResults', model: [elements: elements])
}
The template has code that iterates through the list putting values in each column. One of these items is getStatus(). This calls a utility service to return the values.
def getStatus() {
    return historyUtilityService.getStatus(this)
}
The service gets the latest history event and returns the value.
def getStatus(HistoryDom hist) {
    def histStatus = HistoryEvent.createCriteria().get {
        maxResults(1)
        order('id', 'desc')
        eq('historyDom', hist)
    }
    if (histStatus == null) {
        return 0
    } else {
        return histStatus.status
    }
}
I'm getting a null pointer when the getStatus() is called.
I've setup both mock domain object and mock services but I'm not sure that these are getting down to this level or maybe I'm calling them wrong.
@TestFor(HistoryController)
@Mock([HistoryDom, HistoryEventDom])
class HistoryControllerTests {

    def util = new UnitTestUtil()

    void testLoadHistValues() {
        def mockHistoryUtilityService = mockFor(HistoryUtilityService)
        mockHistoryUtilityService.demand.getStatus { -> Status.QUEUED }
        def histObj1 = util.initMockHistObj(1)
        def histObj2 = util.initMockHistObj(2)
        histObj1.save()
        histObj2.save()
        def mockHistEvent = new HistEvent(
            histDate: histObj1.getHistDate(),
            histObj: histObj1,
            histStatus: Status.QUEUED
        )
        mockHistEvent.save()
        controller.loadHistValues()
        assert response.text.contains("Something worth testing")
    }
}
I tried setting a mock view before the call to the controller and checking the response text for that, but it never gets past the call to the controller since it's still trying to process the template. I'm at a loss at this point as to how to test this controller function - or is it that the object wasn't architected properly? This seems a bit overly complicated.
The answer was to mock things for constraints tests before they would get fully saved by mock GORM. I guess mockForConstraintsTests doesn't quite function as I expected.

Django: Passing a request directly (inline) to a second view

I'm trying to call a view directly from another (if this is at all possible). I have a view:
def product_add(request, order_id=None):
    # Works. Handles a normal POST check and form submission and redirects
    # to another page if the form is properly validated.
    ...
Then I have a second view that queries the DB for the product data and should call the first one.
def product_copy_from_history(request, order_id=None, product_id=None):
    product = Product.objects.get(owner=request.user, pk=product_id)
    # I need to somehow set up a form with the product data so that the first
    # view thinks it gets a POST request.
    second_response = product_add(request, order_id)
    return second_response
Since the second one needs to add the product the same way the first view does it, I was wondering if I could just call the first view from the second one.
What I'm aiming for is just passing the request object through to the second view and returning the obtained response object back to the client.
Any help greatly appreciated, criticism as well if this is a bad way to do it - but then some pointers, please, to keep things DRY.
Thanx!
Gerard.
My god, what was I thinking. This would be the cleanest solution, of course:
def product_add_from_history(request, order_id=None, product_id=None):
    """Add existing product to current order."""
    order = get_object_or_404(Order, pk=order_id, owner=request.user)
    product = Product.objects.get(owner=request.user, pk=product_id)
    newproduct = Product(
        owner=request.user,
        order=order,
        name=product.name,
        amount=product.amount,
        unit_price=product.unit_price,
    )
    newproduct.save()
    return HttpResponseRedirect(reverse('order-detail', args=[order_id]))
A view is a regular Python function; you can of course call one from another, provided you pass the proper arguments and handle the result correctly (like 404...). Whether it is good practice, I don't know. I would myself extract a utility function and call it from both views, roughly as sketched below.
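A minimal sketch of that idea, with a hypothetical _add_product_to_order helper shared by both views (the helper name is mine; the fields are taken from the accepted snippet above):
from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404
from django.urls import reverse

def _add_product_to_order(user, order, product):
    # Shared logic: copy an existing product onto an order.
    new_product = Product(
        owner=user,
        order=order,
        name=product.name,
        amount=product.amount,
        unit_price=product.unit_price,
    )
    new_product.save()
    return new_product

def product_copy_from_history(request, order_id=None, product_id=None):
    order = get_object_or_404(Order, pk=order_id, owner=request.user)
    product = Product.objects.get(owner=request.user, pk=product_id)
    _add_product_to_order(request.user, order, product)
    return HttpResponseRedirect(reverse('order-detail', args=[order_id]))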
If you are fine with the overhead of calling your API through HTTP, you can use urllib to POST a request to your product_add handler.
As far as I know, this can cause trouble if you develop with the dev server that comes with Django, as it only handles one request at a time and will block indefinitely (see trac, google groups).
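For completeness, a rough sketch of what that HTTP round trip could look like; the URL and form fields are placeholders, not taken from the question:
from urllib.parse import urlencode
from urllib.request import urlopen

# POST form data to the existing product_add endpoint (placeholder URL and fields).
data = urlencode({"name": "Widget", "amount": 2, "unit_price": "9.99"}).encode()
with urlopen("http://localhost:8000/orders/123/products/add/", data=data) as resp:
    print(resp.status, resp.read()[:200])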