I have been experimenting with deploying Django apps to AWS Lambda with Zappa.
In some of my other (EC2/EBS hosted) Django projects, if there is a need to perform some heavier calculation that can take some time (such as sending a lot of emails, or just some processing that takes over a minute), Celery is used. It is a task queue system where the tasks are sent to a queue, a response can be returned immediately and workers can process the tasks later.
What would be the best way to implement a Celery-like task queuing system for a Zappa-Django app running in Lambda?
Zappa/Lambda supports scheduled tasks, and the app's models could be designed so that the processing is done later by scheduled functions and the results are saved to the DB. But I do not think polling for tasks once a minute is robust enough; there is often a need to start the delayed task immediately.
Is there an easy way to return a response from a Django view immediately and have a function (from inside the Django app) with arbitrary parameters queued to be executed later?
You can do it using SNS: subscribe a Lambda function to an SNS topic and publish messages to it with a JSON payload.
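A minimal sketch of that approach, assuming the topic ARN lives in an environment variable and the handler is wired up in the Zappa/Lambda configuration; the names below are illustrative, not part of the answer above:

```python
import json
import os

import boto3

sns = boto3.client("sns")

def enqueue_task(task_name, **kwargs):
    """Publish a task description to SNS from inside a Django view."""
    sns.publish(
        TopicArn=os.environ["TASK_TOPIC_ARN"],  # hypothetical environment variable
        Message=json.dumps({"task": task_name, "kwargs": kwargs}),
    )

def sns_handler(event, context):
    """Lambda function subscribed to the topic; runs the heavy work later."""
    for record in event["Records"]:
        payload = json.loads(record["Sns"]["Message"])
        # Dispatch to the matching function inside the Django app here.
        print("processing", payload["task"], payload["kwargs"])
```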
I made a DB-driven task queue for Zappa: https://github.com/andytwoods/zappa-call-later . It is early days, but we are using it in production.
Every X minutes, a Zappa scheduled event pings a function that checks for due tasks. Tasks can be delayed, repeated, etc.
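As a rough illustration of that pattern, here is a sketch of such a periodic check function; the model and field names are hypothetical and not taken from zappa-call-later:

```python
from django.utils import timezone

def check_pending_tasks(event, context):
    """Hypothetical function wired to a Zappa schedule (e.g. every minute)."""
    from myapp.models import DelayedTask  # hypothetical model storing queued work

    due = DelayedTask.objects.filter(run_at__lte=timezone.now(), done=False)
    for task in due:
        task.execute()  # whatever work the task record encapsulates
        task.done = True
        task.save(update_fields=["done"])
```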
Suppose we have a web service written in Python that does some time-consuming file processing. It definitely should not run inside the HTTP handler, as it takes up to 10 minutes to complete. Instead, the processing should be done asynchronously by some sort of workers, and it would also be nice to report the progress of the task execution to the user.
Would it be a good idea to set up Google Cloud Tasks with some Cloud Run or Cloud Functions service as an HTTP target to do this work?
Is Google Cloud Tasks suitable for handling this type of async task, where the user is sitting and waiting for the result?
If not, are there any other options to achieve this with Google Cloud? (Or should I use custom task services for this purpose, for instance Celery and Redis?) It also seems that Cloud Run Jobs offers somewhat similar functionality, but it does not provide a queue system to manage workers.
Google Cloud Tasks is simply a tool for queuing tasks. It is useful for ensuring that tasks actually run, as it has built-in retry mechanisms. How you use it in the context of an application depends on a lot of other details of the application itself, but it is definitely possible to use it for background/asynchronous processing of tasks.
We use Google Cloud Tasks to implement long-running processes that report their progress via Datastore records. Some of these processes run longer than the standard 10-minute timeout and trigger a new Cloud Task to complete the processing. We then have a simple, lightweight handler that retrieves the status record from Datastore and reports it to the user. We poll that handler from the client, but you could also implement something like WebSockets.
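A minimal sketch of enqueuing such a follow-up HTTP task with the google-cloud-tasks client; the project, location, queue and URL values are placeholders:

```python
import json

from google.cloud import tasks_v2

def enqueue_continuation(payload: dict) -> None:
    """Queue an HTTP task that continues processing where the last run stopped."""
    client = tasks_v2.CloudTasksClient()
    parent = client.queue_path("my-project", "us-central1", "processing-queue")  # placeholders

    task = {
        "http_request": {
            "http_method": tasks_v2.HttpMethod.POST,
            "url": "https://worker.example.com/continue",  # placeholder Cloud Run/Functions endpoint
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(payload).encode(),
        }
    }
    client.create_task(parent=parent, task=task)
```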
GCP can handle asynchronous tasks; asynchronous execution is a well-established way to reduce request latency and make your application more responsive.
We can use Cloud Run or Cloud Functions for this type of task, because the time limit for HTTP task handlers in Cloud Tasks can be increased up to 30 minutes.
For more information refer to this document.
My problem: every 20 minutes I want to execute around 25,000 (or more) curl requests and save each response in a database. PHP does not handle this well. Which are the best AWS services I can use, other than Lambda?
A common technique for processing a large number of similar calls is:
Create an Amazon Simple Queue Service (SQS) queue and push each request into the queue as a separate message. In your case, the message would contain the URL that you wish to retrieve.
Create an AWS Lambda function that performs the download and stores the data in the database.
Configure the Lambda function to trigger off the SQS queue.
This way, the SQS queue can trigger hundreds of Lambda functions running in parallel. The default concurrency limit is 1,000 concurrent Lambda functions, but you can request for this to be increased.
You would then need a separate process that, every 20 minutes, queries the database for the URLs and pushes the messages into the SQS queue.
The complete process is:
Schedule -> Lambda pusher -> messages into SQS -> Lambda workers -> database
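A rough sketch of the pusher and worker pieces, assuming the queue URL is in an environment variable and that fetch_urls() and save_response() are placeholder helpers for the database side:

```python
import json
import os
import urllib.request

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # placeholder environment variable

def pusher_handler(event, context):
    """Scheduled every 20 minutes: push one SQS message per URL."""
    for url in fetch_urls():  # placeholder: read the URLs from the database
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"url": url}))

def worker_handler(event, context):
    """Triggered by SQS: download each URL and store the response."""
    for record in event["Records"]:
        url = json.loads(record["body"])["url"]
        body = urllib.request.urlopen(url, timeout=10).read()
        save_response(url, body)  # placeholder: write the response to the database
```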
The beauty of this design is that it can scale to handle large workloads and operates in parallel, rather than each curl request having to wait. If a message cannot be processed, Lambda will automatically try again. Repeated failures will send the message to a Dead Letter Queue for later analysis and reprocessing.
If you wish to perform 25,000 queries every 20 minutes (1200 seconds), this would need a query to complete every 0.05 seconds. That's why it is important to work in parallel.
By the way, if you are attempting to scrape this information from a single website, I suggest you investigate whether they provide an API; otherwise you might be violating the website's Terms & Conditions, which I strongly advise against.
So I'm struggling to figure out the optimal way to schedule some events to happen at some point in the future using celery. An example of this is when a new user has registered, we want to send them an email the next day.
We have Celery set up, and some of you may allude to the eta parameter when calling apply_async. However, that won't work for us, as we use SQS, which has a visibility timeout that would conflict, and in general the eta param shouldn't be used for lengthy periods.
One solution we've implemented at this point is to create events and store them in the database with a 'to-process' timestamp (indicating when to process the event). We use the Celery beat scheduler with a task that runs literally every second to see if there are any new events that are ready to process. If there are, we carry out the subsequent tasks.
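For reference, a minimal sketch of that per-second beat schedule; the project name and task path are placeholders:

```python
from celery import Celery

app = Celery("myproject")  # placeholder project name

# Run the DB-polling task every second, as described above.
app.conf.beat_schedule = {
    "process-due-events": {
        "task": "events.tasks.process_due_events",  # placeholder task path
        "schedule": 1.0,  # seconds
    },
}
```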
This solution works, although it doesn't feel great, since we're queueing a task every second on SQS. Any thoughts or ideas on this would be great.
So I have these two applications connected with a REST API (JSON messages), one written in Django and the other in PHP. I have an exact database replica on both sides (using MySQL).
When I press "submit" on one of them, I want that data to be saved to the current app's database, and a Celery/Redis job started to update the remote database of the other app over REST.
My question is: how do I assign the same worker to my tasks in order to keep a FIFO order?
I need my data to be consistent, and FIFO is really important.
OK, I am going to detail what I want to do a little further:
So I have this Django app, and when I press submit after filling in the form, my Celery worker wakes up and takes care of posting the submitted data to a remote server. This I can do without problems.
Now, imagine that my internet goes down at that exact time: my Celery worker keeps retrying until it succeeds. But if I do another submit before the previous data has been sent, my data won't be consistent on the remote server.
That is my problem. I am not able to make these requests FIFO with the retry option given by Celery, so that's where I need some help figuring it out.
This is the answer I got from another forum:
Use named queues with Celery:
http://docs.celeryproject.org/en/latest/userguide/workers.html#queues
Start a worker process with a single worker:
http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html#starting-the-worker-process
Set this worker to consume from the appropriate queue:
http://docs.celeryproject.org/en/latest/userguide/workers.html#queues-adding-consumers
For the FIFO part, I can sort the tasks in my Celery broker into FIFO order before sending my requests.
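A minimal sketch of the single-consumer setup those links describe, which keeps ordering by having exactly one worker own the queue; the broker URL, endpoint, and names are illustrative:

```python
import requests
from celery import Celery

app = Celery("sync", broker="redis://localhost:6379/0")  # placeholder broker URL

# Route this task to a dedicated queue so a single worker can own it.
app.conf.task_routes = {"sync.push_to_remote": {"queue": "remote_sync"}}

@app.task(bind=True, name="sync.push_to_remote", max_retries=None, default_retry_delay=30)
def push_to_remote(self, payload):
    """Send one record to the remote PHP app; retry until it succeeds."""
    try:
        resp = requests.post("https://remote.example.com/api/records",  # placeholder endpoint
                             json=payload, timeout=10)
        resp.raise_for_status()
    except requests.RequestException as exc:
        raise self.retry(exc=exc)
```

Starting one worker with a single process, e.g. celery -A sync worker -Q remote_sync --concurrency=1, means only one task is in flight at a time, so tasks are sent in the order they were queued.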
One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then posts to that URL when the task queue is ready to execute the task.
This structure means that the tasks always execute the most current version of the code. Conversely, my Gearman workers all run code within my Django project, so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code.
My goal is to have the task queue be independent of the code base so that I can push a new live version without restarting any workers. So I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue?
The process would work like this:
User request comes in and triggers a few tasks that shouldn't be blocking.
Each task has a unique URL, so I enqueue a Gearman task to POST to the specified URL.
The Gearman server finds a worker and passes the URL and POST data to it.
The worker simply POSTs the data to the URL, thus executing the task.
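A rough sketch of such a worker using the python-gearman client and the requests library; the server address, task name, and payload fields are placeholders:

```python
import json

import gearman  # python-gearman client
import requests

def post_to_url(gearman_worker, gearman_job):
    """Worker callback: POST the payload to the task's URL and return the status code."""
    data = json.loads(gearman_job.data)
    resp = requests.post(
        data["url"],                                       # the task's unique URL
        data=data["body"],                                 # the POST payload
        headers={"X-Task-Signature": data["signature"]},   # hypothetical signing header
        timeout=10,
    )
    return str(resp.status_code)

worker = gearman.GearmanWorker(["localhost:4730"])  # placeholder Gearman server address
worker.register_task("post_task", post_to_url)
worker.work()
```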
Assume the following:
Each request from a Gearman worker is signed somehow, so that we know it's coming from a Gearman server and not a malicious request.
Tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out).
What are the potential pitfalls of such an approach? Here's one that worries me:
The server can potentially get hammered with many requests all at once that are triggered by a previous request. So one user request might entail 10 concurrent http requests. I suppose I could have a single worker with a sleep before every request to rate-limit.
Any thoughts?
As a user of both Django and Google App Engine, I can certainly appreciate what you're getting at. At work I'm currently working on the exact same scenario using some pretty cool open-source tools.
Take a look at Celery. It's a distributed task queue built with Python that exposes three concepts: a queue, a set of workers, and a result store. Each part is pluggable with different tools.
The queue should be battle-hardened, and fast. Check out RabbitMQ for a great queue implementation in Erlang, using the AMQP protocol.
The workers can ultimately be Python functions. You can trigger workers using either queue messages or, perhaps more pertinent to what you're describing, webhooks.
Check out the Celery webhook documentation. Using all these tools you can build a production ready distributed task queue that implements your requirements above.
I should also mention that, regarding your first pitfall, Celery implements rate limiting of tasks using a token bucket algorithm.
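For example, a sketch of a webhook-style task with a per-worker rate limit; the broker URL and the limit value are placeholders:

```python
import requests
from celery import Celery

app = Celery("tasks", broker="amqp://localhost")  # placeholder RabbitMQ broker URL

@app.task(rate_limit="10/m")  # token-bucket limit: at most 10 executions per minute per worker
def call_webhook(url, payload):
    """POST the payload to the given URL, mirroring the URL-driven execution described above."""
    resp = requests.post(url, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.status_code
```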