django-piston appears to create a data attribute on the request object before it gets to the Handler phase. This data is available, for example, in the PUT and POST handlers by accessing request.data.
However, in the DELETE handler, the data is not available.
I would like to modify django-piston to make this data available, but I have no real idea where to start. Any ideas? Where does the data attribute originate from?
I solved this for myself. The short hacky answer is that the method
translate_mime(request)
from piston.utils needs to be run on the request to make the data attribute available.
The overall fix would be to change the Piston source itself, in resource.py, so that it executes the translate_mime method for DELETE actions as well; currently it only does so automatically for PUT and POST.
But, as I said, you can just call translate_mime manually in the handler method and it works fine.
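For illustration, here is a minimal sketch of that manual call (the handler name and return value are hypothetical; translate_mime and rc come from piston.utils as noted above):
from piston.handler import BaseHandler
from piston.utils import rc, translate_mime

class WidgetHandler(BaseHandler):
    allowed_methods = ('GET', 'POST', 'PUT', 'DELETE')

    def delete(self, request, *args, **kwargs):
        # Piston only does this automatically for PUT and POST,
        # so call it ourselves to populate request.data.
        translate_mime(request)
        payload = request.data  # now available, just as in the PUT/POST handlers
        # ... use payload to decide what to delete ...
        return rc.DELETED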
Our callback system worked such that during a request where you needed more user input you would run the following:
def view(req):
    # do checks, maybe get a variable.
    bar = req.bar()

    def doit():
        foo = req.user
        do_the_things(foo, bar)

    req.confirm(doit, "Are you sure you want to do it")
From this, the server would store the function object in a dictionary, with a UID as the key, which would be sent to the client, where a confirmation dialog would be shown. When OK is pressed, another request is sent to a second view, which looks up the stored function object and runs it.
This works in a single-process deployment. However, with nginx, if the process pool is greater than 1, a different process gets the confirmation request, and thus doesn't have the stored function and cannot run it.
We've looked into ways to force nginx to use a certain process for certain requests, but haven't found a solution.
We've also looked into multiprocessing libraries and celery, however there doesn't seem to be a way to send a predefined function into another process.
Can anyone suggest a method that will allow us to store a function to run later when the request for continuing might come from a separate process?
There doesn't seem to be a good reason to use a callback defined as an inline function here.
The web is a stateless environment. You can never be certain of getting the same server process for two requests in a row, so your code should never rely on storing state in a process's memory between requests.
Instead you need to put data into a data store of some kind. In this case, the session is the ideal place; you can store the IDs there, then redirect the user to a view that pops that key from the session and runs the process on the relevant IDs. Again, no need for an inline function at all.
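A minimal sketch of that approach; the session key, URL names and template are illustrative placeholders, and req.bar() / do_the_things() are the names from the question:
from django.shortcuts import redirect, render

def view(request):
    bar = request.bar()                       # whatever check produced `bar` before
    request.session['pending_bar'] = bar      # must be serializable; any worker process can read it
    return redirect('confirm')                # send the user to the confirmation page

def confirm(request):
    if request.method == 'POST':              # user pressed OK
        bar = request.session.pop('pending_bar')
        do_the_things(request.user, bar)
        return redirect('done')
    return render(request, 'confirm.html', {'message': 'Are you sure you want to do it?'})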
I'm trying to set up the python-telegram-bot library in webhook mode with Django. It should work as follows: on Django startup, I do some initial setup of python-telegram-bot and get a dispatcher object as a result. Django listens on the /telegram_hook URL and receives updates from Telegram servers. What I want to do next is pass the updates to the process_update method of the dispatcher created at startup. It contains all the parsing logic and invokes the callbacks specified during setup.
The problem is that the dispatcher object needs to be saved globally. I know that global state is evil, but this isn't really global state because the dispatcher is immutable. However, I still don't know where to put it and how to ensure that it is visible to all threads after the setup phase has finished. So the question is: how do I properly save the dispatcher after setup so that I can invoke it from Django's viewset?
P.S. I know that I could use the built-in web server, use polling, or whatever. However, I have reasons to use Django, and in any case I would like to know how to deal with cases like this, because it's not the only situation I can imagine where I need to globally store an immutable object created at startup.
It looks like you need a thread-safe singleton, like this one: https://gist.github.com/werediver/4396488 or http://alacret.blogspot.ru/2015/04/python-thread-safe-singleton-pattern.html
import threading

# Based on the tornado.ioloop.IOLoop.instance() approach.
# See https://github.com/facebook/tornado
class SingletonMixin(object):
    __singleton_lock = threading.Lock()
    __singleton_instance = None

    @classmethod
    def instance(cls):
        if not cls.__singleton_instance:
            with cls.__singleton_lock:
                if not cls.__singleton_instance:
                    cls.__singleton_instance = super(SingletonMixin, cls).__new__(cls)
        return cls.__singleton_instance
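A hypothetical sketch of how the dispatcher could then be exposed through that mixin; the holder class and setup() method are assumptions, and only process_update is python-telegram-bot API:
class BotDispatcherHolder(SingletonMixin):
    dispatcher = None  # set exactly once during Django startup

    def setup(self, dispatcher):
        self.dispatcher = dispatcher

# At startup:
#     BotDispatcherHolder.instance().setup(dispatcher)
# In the /telegram_hook view:
#     BotDispatcherHolder.instance().dispatcher.process_update(update)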
I need to create a REST data source in two cases:
when there is nothing in the local browser cache - the standard way, through ds.dataBind()
when there is something - by preloading the previously cached JSON result into it.
The browser cache may be any browser cache; I prefer localForage.
Is there any way, through the public API, to push a JSON array into the REST ds after its creation and before dataBind(), in order to prevent any initial GET/dataBind call?
There is nothing built into the igDataSource for handling cached data across requests. My suggestion would be to utilize jQuery.ajaxSetup to intercept the requests and use cached data if it exists in localStorage or anywhere else:
$.ajaxSetup({
    beforeSend: function (jqXHR, settings) {
        // return from local storage instead of initiating the request
    }
});
I have come to the conclusion that it is not possible to do it this way (at the level of $.ajax), because returning from beforeSend does not cancel the request. On the other hand, aborting the request (jqXHR.abort()) aborts the whole request pipeline, which cancels the execution of all the other $.ajax callbacks. That is a dead end: the whole dataSource pipeline is then aborted, which stops me from getting any result.
The only solution for the moment is to create a different type of dataSource (a JSON ds) during the creation of the grid (in my case these are the data sources for the combos).
Update
It is not at all impossible, but the pipeline consisting of the methods _remoteData -> _processRequest -> _successCallBack -> CompleteCallBack must be abstracted away into a state-machine-like class. The problem comes from the fact that the state machine is implemented via $.ajax, which is not really conceived with that kind of scenario in mind, and hacking it is not a good idea.
If there is a lightweight JS state-machine library, it can be done.
I can't find any documentation on why this happens, but according to the docs, bulk operations should not trigger model signals.
Now the issue is this: if I do somequeryset.delete(), a signal is triggered for each deleted object, even though it is a bulk operation!
On the other hand, somequeryset.update(someField=5) will NOT trigger any signal!
This is an unexpected result; I would expect both to behave the same.
Django 1.7.7
Any ideas? I want deletes to emit a signal, but triggering it for every object in a bulk delete is not acceptable.
As explained here, it really doesn't call the delete() method on each item, but it DOES call the signals. I don't know if it is possible, but I also agree that there should at least be some option on queryset.delete() to skip signal execution.
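One possible workaround (a sketch, not a built-in option): temporarily disconnect your receiver around the bulk call. The receiver, model and filter used here are placeholders; if you also listen to pre_delete, disconnect it the same way.
from django.db.models.signals import post_delete

post_delete.disconnect(my_receiver, sender=MyModel)
try:
    MyModel.objects.filter(active=False).delete()  # bulk delete, no per-object signal fires now
finally:
    post_delete.connect(my_receiver, sender=MyModel)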
My web app is based on (embedded) Jetty 9. The code that runs inside Jetty (i.e. from the *.war file) needs, at times, to execute an HTTP request back into Jetty and itself, completely asynchronously to "real" HTTP requests coming from the network.
I know what you will say, but this is the situation I ended up with after merging multiple disparate products into one container, and I presently cannot avoid it. A stop-gap measure is in place - it actually sends a network HTTP request back to itself (presently using the Jetty client, but that does not matter). However, not only does that add more overhead, it also does not allow us to pass actual object references we'd like to be able to pass via, say, request attributes.
The desire is to be able to do something like constructing a new HttpServletRequest and HttpServletResponse pair and using a dispatcher to "include" (or similar) the other servlet we presently can only access via the network. We've built "dummy" implementations of those, but this fails in Jetty's dispatcher at line 120 with a NullPointerException:
public void include(ServletRequest request, ServletResponse response) throws ServletException, IOException
{
    Request baseRequest = (request instanceof Request) ? ((Request) request) : HttpChannel.getCurrentHttpChannel().getRequest();
... because the request is not an instance of Jetty's Request class, and getCurrentHttpChannel() returns null because the thread is a worker thread, not an HTTP-serving one, and does not have Jetty's thread locals set up.
I am contemplating options, but would like some guidance if anyone can offer it. Some things I am thinking of:
Actually use Jetty's Request class as a base. It is currently not visible to the web app (it is a container class), so we would perhaps have to play with the classpath and class loaders. It may still be impossible (I don't know what to expect there).
Play with Jetty's thread locals and attempt to tell Jetty to set up the current thread as necessary. I don't know where to begin. UPDATE: Tried setServerClasses([]) and then setting the current HttpChannel to one I 'stole' from another thread. Failed miserably: java.lang.IllegalAccessError: tried to access method org.eclipse.jetty.server.HttpChannel.setCurrentHttpChannel(Lorg/eclipse/jetty/server/HttpChannel;)V from class ...
Ideally, find a better/proper way of feeding a "top" request in without going via the network. Ideally it would execute on the same thread, but I am less concerned with that.
Remember that, unfortunately, I cannot avoid this at this time. I would much rather invoke the code directly, but I cannot, as the code I had to add into mine is too big to handle at this time and too dependent on some third-party filters I can't even modify (and which only work as filters, on real requests).
Please help!