I am calling a Tastypie API from normal Django views.
from django.core.urlresolvers import resolve
from django.http import HttpResponse

def test(request):
    view = resolve("/api/v1/albumimage/like/user/%d/" % 2)
    accept = request.META.get("HTTP_ACCEPT", "")
    accept += ",application/json"
    request.META["HTTP_ACCEPT"] = accept
    res = view.func(request, **view.kwargs)
    return HttpResponse(res._container)
Using tastypie resource in view and Call an API on my server from another view achieve the same thing, but they seem harder.
Is my way of calling the API acceptable?
Besides, it would be awesome if I could get the result as a Python dictionary instead of JSON. Is that possible?
If you need a dictionary, it means you should design your application better. Don't do the important work in your views, nor in the Tastypie methods; refactor the common functionality out.
As a general rule, views must be small. No more than 15 lines. That makes the code readable, reusable and easy to test.
I'll provide an example to make it clearer. Suppose that in that Tastypie method you are creating a Like object and maybe sending a signal:
class AlbumImageResource(ModelResource):
    def like_method(self, request, **kwargs):
        # Do some method checking
        Like.objects.create(
            user=request.user,
            object=request.data.get("object")
        )
        signals.liked_object(request.user, request.data.get("object"))
        # Something more
But if you need to reuse that behaviour in a view, the proper thing is to factor it out into a separate function:
# myapp.utils
def like_object(user, object):
    like = Like.objects.create(
        user=user,
        object=object
    )
    signals.liked_object(user, object)
    return like
Now you can call it from your API method and your view:
class AlbumImageResource(ModelResource):
    def like_method(self, request, **kwargs):
        # Do some method checking
        like_object(request.user, request.data.get("object"))  # Here!
And in your view...
# Your view
def test(request, object_id):
    obj = get_object_or_404(Object, id=object_id)
    like_object(request.user, obj)
    return HttpResponse()
Hope it helps.
Related
My web application makes API calls to Spotify. In one of my Flask views I use the same method with different endpoints. Specifically:
sh = SpotifyHelper()
...

@bp.route('/profile', methods=['GET', 'POST'])
@login_required
def profile():
    ...
    profile = sh.get_data(header, 'profile_endpoint')
    ...
    playlist = sh.get_data(header, 'playlist_endpoint')
    ...
    # There are 3 more like this to different endpoints -- history, top_artists, top_tracks
    ...
    return render_template(
        'profile.html',
        playlists=playlists['items'],
        history=history['items'],
        ...
    )
I do not want to make an API call during testing so I wrote a mock.json that replaces the JSON response from the API. I have done this successfully when the method is only used once per view:
class MockResponse:
    @staticmethod
    def profile_response():
        with open(path + '/music_app/static/JSON/mock.json') as f:
            response = json.load(f)
        return response


@pytest.fixture
def mock_profile(monkeypatch):
    def mock_json(*args, **kwargs):
        return MockResponse.profile_response()

    monkeypatch.setattr(sh, "get_data", mock_json)
My problem is that I need to call get_data against different endpoints and get different responses. My mock.json is written like this:
{"playlists": {"items": [# List of playlist data]},
 "history": {"items": [# List of playlist data]},
 ...
So for each API endpoint I need something like
playlists = mock_json['playlists']
history = mock_json['history']
I can write mock_playlists(), mock_history(), etc., but how do I write a monkeypatch for each? Is there some way to pass the endpoint argument to monkeypatch.setattr(sh, "get_data", mock_???)?
from unittest.mock import MagicMock

# other code...

mocked_response = MagicMock(side_effect=[
    # write the responses in the order of the calls you expect
    profile_response_1, profile_response_2, ..., profile_response_n
])
monkeypatch.setattr(sh, "get_data", mocked_response)
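If the responses need to match specific endpoints rather than rely on call order, another option is a side-effect callable that dispatches on the endpoint argument. A rough sketch, reusing the question's sh, path and mock.json layout; the endpoint_keys mapping here is hypothetical and should mirror your real endpoint names:

import json

import pytest


@pytest.fixture
def mock_get_data(monkeypatch):
    # Load the whole mock.json once; its top-level keys mirror the question's layout.
    with open(path + '/music_app/static/JSON/mock.json') as f:
        mock_data = json.load(f)

    # Hypothetical mapping from endpoint argument to the matching mock.json key.
    endpoint_keys = {
        'profile_endpoint': 'profile',
        'playlist_endpoint': 'playlists',
        'history_endpoint': 'history',
    }

    def fake_get_data(header, endpoint, *args, **kwargs):
        # Return the section of mock.json that corresponds to the requested endpoint.
        return mock_data[endpoint_keys[endpoint]]

    monkeypatch.setattr(sh, "get_data", fake_get_data)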
I'm making a school records webapp. I want staff users to be able to view the user data pages for any pupil by going to the correct URL, but without allowing pupils access to each other's pages. However, I'm using the same view function for both URLs.
I have a working @user_is_staff decorator based on the existence of a user.staff object. Pupil users have a user.pupil object instead. These are discrete, naturally, as no user can have both a .staff and a .pupil entry.
urls.py
(r'^home/(?P<subject>[^/]+)/$', 'myproject.myapp.views.display_pupil')
(r'^admin/user/(?P<pupil>\d+)/(?P<subject>[^/]+)/$', 'myproject.myapp.views.display_pupil')
views.py
@login_required
def display_pupil(request, subject, pupil=None):
    if pupil:
        try:
            thepupil = get_object_or_404(Pupil, id=pupil, cohort__school=request.user.staff.school)
        except Staff.DoesNotExist:
            return HttpResponseForbidden()
    else:
        thepupil = request.user.pupil
    thesubject = get_object_or_404(Subject, shortname=subject)
    # do lots more stuff here
    return render_to_response('pupilpage.html', locals(), context_instance=RequestContext(request))
Doing it this way works, but feels very hacky, particularly as my @user_is_staff decorator has a more elegant redirect to a login page than the 403 error here.
What I don't know is how to apply the @user_is_staff decorator to the function only when it has been accessed with the pupil kwarg. There's a lot more code in the real view function, so I don't want to write a second one as that would be severely non-DRY.
Sounds like you want two separate views - one for a specific pupil and one for the current user - and a utility function containing the shared logic.
@login_required
def display_current_pupil(request, subject):
    thepupil = request.user.pupil
    return display_pupil_info(request, subject, thepupil)

@user_is_staff
def display_pupil(request, subject, pupil):
    thepupil = get_object_or_404(Pupil, id=pupil, cohort__school=request.user.staff.school)
    return display_pupil_info(request, subject, thepupil)

def display_pupil_info(request, subject, thepupil):
    thesubject = get_object_or_404(Subject, shortname=subject)
    # do lots more stuff here
    return render_to_response('pupilpage.html', locals(), context_instance=RequestContext(request))
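For reference, a minimal sketch of what a user_is_staff decorator along the lines the question describes could look like; the question's actual decorator may differ, and the redirect target is just whatever your login URL is:

from functools import wraps

from django.conf import settings
from django.shortcuts import redirect


def user_is_staff(view_func):
    @wraps(view_func)
    def _wrapped(request, *args, **kwargs):
        # Staff users have a related .staff object; everyone else is sent to the login page.
        if request.user.is_authenticated() and hasattr(request.user, 'staff'):
            return view_func(request, *args, **kwargs)
        return redirect(settings.LOGIN_URL)
    return _wrapped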
In my Django Piston API, I want to yield/return an HTTP response to the client before calling another function that will take quite some time. How do I make the yield give an HTTP response containing the desired JSON rather than a string describing a generator object?
My piston handler method looks like so:
def create(self, request):
    data = request.data
    # ... other operations ...
    incident.save()
    response = rc.CREATED
    response.content = {"id": str(incident.id)}
    yield response
    manage_incident(incident)
Instead of the response I want, like:
{"id":"13"}
The client gets a string like this:
"<generator object create at 0x102c50050>"
EDIT:
I realise that using yield was the wrong way to go about this. In essence, what I am trying to achieve is for the client to receive a response right away, before the server moves on to the time-costly manage_incident() call.
This doesn't have anything to do with generators or yielding, but I've used the following code and decorator to have things run in the background while returning the client an HTTP response immediately.
Usage:
@postpone
def long_process():
    pass  # do things...

def some_view(request):
    long_process()
    return HttpResponse(...)
And here's the code to make it work:
import atexit
import Queue
import threading

from django.core.mail import mail_admins


def _worker():
    while True:
        func, args, kwargs = _queue.get()
        try:
            func(*args, **kwargs)
        except:
            import traceback
            details = traceback.format_exc()
            mail_admins('Background process exception', details)
        finally:
            _queue.task_done()  # so we can join at exit


def postpone(func):
    def decorator(*args, **kwargs):
        _queue.put((func, args, kwargs))
    return decorator


_queue = Queue.Queue()
_thread = threading.Thread(target=_worker)
_thread.daemon = True
_thread.start()


def _cleanup():
    _queue.join()  # so we don't exit too soon


atexit.register(_cleanup)
Perhaps you could do something like this (be careful though):
import threading

def create(self, request):
    data = request.data
    # do stuff...
    t = threading.Thread(target=manage_incident, args=(incident,))
    t.setDaemon(True)
    t.start()
    return response
Has anyone tried this? Is it safe? My guess is it's not, mostly because of concurrency issues, but also because if you get a lot of requests you might end up with a lot of threads (since they might be running for a while). Still, it might be worth a shot.
Otherwise, you could just add the incident that needs to be managed to your database and handle it later via a cron job or something like that.
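A rough sketch of that cron-job approach, as a custom management command run periodically with python manage.py process_incidents; the handled flag on Incident and the import paths are hypothetical:

# myapp/management/commands/process_incidents.py
from django.core.management.base import BaseCommand

from myapp.models import Incident  # hypothetical import path
from myapp.handlers import manage_incident  # wherever manage_incident lives


class Command(BaseCommand):
    help = "Handle incidents that have been saved but not yet processed"

    def handle(self, *args, **options):
        for incident in Incident.objects.filter(handled=False):
            manage_incident(incident)
            incident.handled = True
            incident.save()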
I don't think Django is built for concurrency or for very time-consuming operations.
Edit
Someone has tried it; it seems to work.
Edit 2
These kinds of things are often better handled by background jobs. The Django Background Tasks library is nice, but there are others, of course.
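For example, with django-background-tasks the handler could queue the slow work and return immediately. A minimal sketch, assuming the library is installed, 'background_task' is in INSTALLED_APPS, and a worker is running via python manage.py process_tasks:

from background_task import background


@background(schedule=0)  # run as soon as a worker picks it up
def manage_incident_later(incident_id):
    # Task arguments must be serializable, so pass the id and reload the object.
    incident = Incident.objects.get(id=incident_id)
    manage_incident(incident)


def create(self, request):
    data = request.data
    # ... other operations ...
    incident.save()
    manage_incident_later(incident.id)  # queues the task and returns right away
    response = rc.CREATED
    response.content = {"id": str(incident.id)}
    return response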
You've turned your view into a generator thinking that Django will pick up on that fact and handle it appropriately. Well, it won't.
def create(self, request):
    return HttpResponse(real_create(request))
EDIT:
Since you seem to be having trouble... visualizing it...
def stuff():
    print 1
    yield 'foo'
    print 2

for i in stuff():
    print i
output:
1
foo
2
I want to check if a user has any new messages each time they load the page. Up until now, I have been doing this inside of my views but it's getting fairly hard to maintain since I have a fair number of views now.
I assume this is the kind of thing middleware is good for: a check that happens on every single page load. What I need it to do is this:
Check if the user is logged in
If they are, check if they have any messages
Store the result so I can reference the information in my templates
Has anyone ever had to write any middleware like this? I've never used middleware before so any help would be greatly appreciated.
You could use middleware for this purpose, but perhaps context processors are more in line with what you want to do.
With middleware, you are attaching data to the request object. You could query the database and find a way to jam the messages into the request. But context processors allow you to make available extra entries into the context dictionary for use in your templates.
I think of middleware as providing extra information to your views, while context processors provide extra information to your templates. This is in no way a rule, but in the beginning it can help to think this way (I believe).
def messages_processor(request):
    return {'new_messages': Message.objects.filter(unread=True, user=request.user)}
Include that processor in your settings.py under context processors. Then simply reference new_messages in your templates.
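For example, the registration could look like this; the myapp.context_processors path is hypothetical, and on newer Django versions the list lives under TEMPLATES[0]['OPTIONS']['context_processors'] instead:

# settings.py (older Django style)
TEMPLATE_CONTEXT_PROCESSORS = (
    'django.contrib.auth.context_processors.auth',
    'django.core.context_processors.request',
    # ... the other defaults ...
    'myapp.context_processors.messages_processor',
)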
I have written this middleware on my site for rendering messages. It checks a cookie; if the cookie is not present, it appends the message to the request and saves the cookie. Maybe you can do something similar:
class MyMiddleware:
    def __init__(self):
        # print 'Initialized my Middleware'
        pass

    def process_request(self, request):
        user_id = False
        if request.user.is_active:
            user_id = str(request.user.id)
        self.process_update_messages(request, user_id)

    def process_response(self, request, response):
        self.process_update_messages_response(request, response)
        return response

    def process_update_messages(self, request, user_id=False):
        update_messages = UpdateMessage.objects.exclude(expired=True)
        render_message = False
        request.session['update_messages'] = []
        for message in update_messages:
            if message.expire_time < datetime.datetime.now():
                message.expired = True
                message.save()
            else:
                if request.COOKIES.get(message.cookie(), True) == True:
                    render_message = True
            if render_message:
                request.session['update_messages'].append({
                    'cookie': message.cookie(),
                    'cookie_max_age': message.cookie_max_age,
                })
                messages.add_message(request, message.level, message)
                break

    def process_update_messages_response(self, request, response):
        try:
            update_messages = request.session['update_messages']
        except KeyError:
            update_messages = False
        if update_messages:
            for message in update_messages:
                response.set_cookie(message['cookie'], value=False,
                                    max_age=message['cookie_max_age'],
                                    expires=None, path='/', domain=None, secure=None)
        return response
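To enable it, the class has to be added to the middleware setting. A minimal sketch using the old-style MIDDLEWARE_CLASSES API that process_request/process_response belong to; the module path for MyMiddleware is hypothetical, and it should come after the session, auth and messages middleware it relies on:

# settings.py
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    # ... other middleware ...
    'myproject.middleware.MyMiddleware',
)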
I am writing an application in Django which uses [year]/[month]/[title-text] in the URL to identify news items. To manage the items I have defined a number of URLs, each starting with the above prefix.
urlpatterns = patterns('msite.views',
    (r'^(?P<year>[\d]{4})/(?P<month>[\d]{1,2})/(?P<slug>[\w]+)/edit/$', 'edit'),
    (r'^(?P<year>[\d]{4})/(?P<month>[\d]{1,2})/(?P<slug>[\w]+)/$', 'show'),
    (r'^(?P<year>[\d]{4})/(?P<month>[\d]{1,2})/(?P<slug>[\w]+)/save$', 'save'),
)
I was wondering if there is a mechanism in Django which allows me to preprocess a given request to the views edit, show and save. It could parse the parameters, e.g. year=2010, month=11, slug='this-is-a-title', and extract a model object from them.
The benefit would be that I could define my views as
def show(news_item):
    '''does some stuff with the news item; doesn't have to care
    about how to extract the item from the request data'''
    ...
instead of
def show(year, month, slug):
    '''extract the model instance manually inside this method'''
    ...
What is the Django way of solving this?
Or, more generally, is there a mechanism to implement request filters / preprocessors, as in Java EE and Ruby on Rails?
Maybe you need date-based generic views and the create/update/delete generic views?
One way of doing this is to write a custom decorator. I tested this in one of my projects and it worked.
First, a custom decorator. This one will have to accept other arguments besides the function, so we declare another decorator to make that possible.
decorator_with_arguments = lambda decorator: lambda *args, **kwargs: lambda func: decorator(func, *args, **kwargs)
Now the actual decorator:
@decorator_with_arguments
def parse_args_and_create_instance(function, klass, attr_names):
    def _function(request, *args, **kwargs):
        model_attributes_and_values = dict()
        for name in attr_names:
            value = kwargs.get(name, None)
            if value:
                model_attributes_and_values[name] = value
        model_instance = klass.objects.get(**model_attributes_and_values)
        return function(model_instance)
    return _function
This decorator expects two additional arguments besides the function it is decorating: the model class whose instance is to be prepared and injected, and the names of the attributes used to look that instance up. In this case the decorator uses the attributes to get the instance from the database.
And now, a "generic" view making use of a show function.
def show(model_instance):
    return HttpResponse(model_instance.some_attribute)

show_order = parse_args_and_create_instance(Order, ['order_id'])(show)
And another:
show_customer = parse_args_and_create_instance(Customer, ['id'])(show)
For this to work, the URL configuration parameters must use the same keywords as the attributes. Of course, you can customize this by tweaking the decorator.
# urls.py
...
url(r'^order/(?P<order_id>\d+)/$', 'show_order', {}, name='show_order'),
url(r'^customer/(?P<id>\d+)/$', 'show_customer', {}, name='show_customer'),
...
Update
As @rebus correctly pointed out, you also need to investigate Django's generic views.
Django is Python, after all, so you can easily do this:
def get_item(*args, **kwargs):
    year = kwargs['year']
    month = kwargs['month']
    slug = kwargs['slug']
    # return item based on year, month, slug...

def show(request, *args, **kwargs):
    item = get_item(request, *args, **kwargs)
    # rest of your logic using item
    # return HttpResponse...
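For instance, get_item could be filled in like this, assuming a hypothetical NewsItem model with a published date field and a slug field; adjust the names to your actual model:

from django.shortcuts import get_object_or_404

from myapp.models import NewsItem  # hypothetical model


def get_item(request, year, month, slug, *args, **kwargs):
    # Look the item up by its publication year/month and slug.
    return get_object_or_404(
        NewsItem,
        published__year=int(year),
        published__month=int(month),
        slug=slug,
    )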