Idling a Worker in Heroku, with Django

I am using Django on Heroku, and my site sends batch emails every month through Celery. Since I only use this worker once a month, I don't want to pay for it all the time. I can stop the worker with heroku scale workers=0 and scale it back up with heroku scale workers=1, run manually before and after I send my emails.
However, since other non-technical staff will be sending email from Django as well, they cannot run these commands. Can I stop a worker by executing a command from Python in my Heroku web process? I could execute any commands before sending email.

There is a bug with heroku.py; see these issues: https://github.com/heroku/heroku.py/issues/10 and https://github.com/heroku/heroku.py/issues/4
I made a quick workaround, which uses the HTTP resource directly:
cloud = heroku.from_key(settings.HEROKU_APIKEY)
# Scale the process up
cloud._http_resource(method='POST',
                     resource=('apps', 'appname', 'ps', 'scale'),
                     data={'type': 'processname', 'qty': 1})
# Scale it back down
cloud._http_resource(method='POST',
                     resource=('apps', 'appname', 'ps', 'scale'),
                     data={'type': 'processname', 'qty': 0})
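For illustration, here is a sketch of how those two calls could be wrapped into a helper the web process calls around the batch send. This is my own illustration, not part of the original workaround; "worker" must match the process type in your Procfile and "myapp" stands in for the real app name:
import heroku
from django.conf import settings

def scale_worker(qty):
    # Scale the Celery worker dyno up or down from Python
    cloud = heroku.from_key(settings.HEROKU_APIKEY)
    cloud._http_resource(method='POST',
                         resource=('apps', 'myapp', 'ps', 'scale'),
                         data={'type': 'worker', 'qty': qty})

# Before queueing the monthly batch of emails:
scale_worker(1)
# ... queue the Celery email tasks ...
# Scale back down only once the queue has drained (e.g. from the last task
# or a scheduled check); scaling down immediately would cut off in-flight emails:
scale_worker(0)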

You could do this with heroku.py, the Python API client. It's available on PyPI, with source at https://github.com/heroku/heroku.py
You could also use the Scheduler add-on and have a management command that is scheduled to run once a month to send out your emails, without having to scale up a process.
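A minimal sketch of such a management command follows; the command name, module path, and send_batch() helper are hypothetical, and the Scheduler add-on (or cron) would then run python manage.py send_monthly_emails:
# yourapp/management/commands/send_monthly_emails.py
from django.core.management.base import BaseCommand
from yourapp.emails import send_batch  # hypothetical helper that builds and sends the batch

class Command(BaseCommand):
    help = "Send the monthly batch emails"

    def handle(self, *args, **options):
        send_batch()
        self.stdout.write("Monthly emails sent")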

Related

Background tasks with Django on Heroku

So I'm building a Django app on Heroku. The tasks that the app performs frequently run longer than 30 seconds, so I'm running into Heroku's 30-second request timeout. I first tried solving it by submitting the task from my Django view to AWS Lambda, but in that case the view waits for the AWS Lambda function to finish, so it doesn't solve my problem.
I have already read Heroku's tutorials on handling background tasks with Django. I'm now faced with a few different options on how to proceed, and would love to get outside input on which one makes the most sense:
1. Use Celery & Redis to manage the background tasks, and let the tasks be executed on AWS Lambda.
2. Use Celery & Redis to manage the background tasks, but let the tasks be executed in a Python script on Heroku (a rough sketch of this option appears after the question).
3. Try to solve it with asyncio in order to keep it leaner (not sure whether this specific case can be solved with asyncio, though).
4. Maybe there's an even better solution that I don't see?
Looking forward to any input/suggestions!
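For reference, a stripped-down sketch of option 2 (Celery & Redis, with the work done by a Heroku worker dyno) might look like this; the task and view names are placeholders, and the broker URL comes from the REDIS_URL config var set by the Heroku Redis add-on:
# tasks.py - the long-running work lives in a Celery task executed by the worker dyno
import os
from celery import Celery

app = Celery('myapp', broker=os.environ.get('REDIS_URL', 'redis://localhost:6379/0'))

@app.task
def long_running_job(item_id):
    ...  # the work that takes longer than 30 seconds happens here

# views.py - the view only enqueues the job, so it returns well under the 30 s limit
from django.http import JsonResponse
from myapp.tasks import long_running_job

def start_job(request, item_id):
    long_running_job.delay(item_id)
    return JsonResponse({'status': 'queued'})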

Django post-office setup

Perhaps it is just because I've never set up an e-mail system in Django before, or maybe I'm missing it... but does anyone have any insight into how to properly configure django-post_office for sending queued e-mails?
I've got a mailing list of 1500+ people, and am hosting my app on Heroku. Using the standard email system doesn't work because I need to send customized emails to each user, and connecting to the server one by one leads to a timeout.
I've installed django-post_office via pip, added the app to INSTALLED_APPS in settings.py, and I've even been able to get an email to send by running:
mail.send(['recipient'], 'sender', subject='test', message='hi there', priority='now')
However, if I try to schedule it for, let's say, 30 seconds from now:
nowtime = datetime.datetime.now()
sendtime = nowtime + datetime.timedelta(seconds=30)
and then
mail.send(['recipient'], 'sender', subject='test', message='hi there', scheduled_time=sendtime)
Nothing happens... time passes, the e-mail is still listed as queued, and I don't receive any emails.
I have a feeling it's because I ALSO need to have Celery / RQ / cron set up??? But the documentation seems to suggest that it should work out of the box. What am I missing?
Thanks folks
Actually, you can find this in the documentation (at the time I'm writing this comment):
Usage
If you use post_office’s EmailBackend, it will automatically queue emails sent using django’s send_mail in the database.
To actually send them out, run python manage.py send_queued_mail. You can schedule this to run regularly via cron:
* * * * * (/usr/bin/python manage.py send_queued_mail >> send_mail.log 2>&1)
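For completeness, the backend mentioned above is the standard django-post_office setting shown below; on Heroku, where cron isn't available, the same send_queued_mail command can be run from the Scheduler add-on instead.
# settings.py - route Django's send_mail() through django-post_office's queue
EMAIL_BACKEND = 'post_office.EmailBackend'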

Why does apscheduler not work in uwsgi mode?

I have a Flask application. This application mimics the routing of vehicles in a city, and when a vehicle reaches a designated point, I have to wait 30-180 seconds before starting it again. I am trying to use APScheduler for this.
When the vehicle arrives, I start an APScheduler job (with the 'date' trigger set X seconds in the future). When the job fires, I do my processing.
This works well on my dev machine when I am running the Flask app standalone. But when I try to run it on my production server (where the app runs under uWSGI), the job never fires. I have already set --enable-threads=true for the app, so that doesn't seem to be the problem.
My relevant code looks like this.
At app initialization:
scheduler = BackgroundScheduler()
scheduler.start()
Whenever the trigger happens:
scheduler.add_job(func=myfunc, trigger='date',
                  run_date=datetime.datetime.now() + datetime.timedelta(seconds=value))
Is there anything I am missing in using APScheduler under uWSGI? Or are there other options in Flask to achieve what I want?

Background Job and Scheduling with Resque

I have a Ruby on Rails 4.0 and PostgreSQL app hosted on an Ubuntu VPS. In this application I want to send email based on data in the database. For example, a background job checks a table's contents every hour and, depending on the content, either sends email to a user or not. I decided to do this work with Resque.
How can I do that?
Should I do it in the Rails app or in an independent service?
And how can I schedule this job?
There are a couple of other options I advise you to try:
1. Cron: one of the most common approaches for any Unix developer to run a task at a given interval.
FYI: if you are facing problems understanding cron settings, there is a gem that handles this for you, called whenever.
2. Resque-Scheduler: surely you missed one of the Resque plugins that provides exactly the feature you need; it's called resque-scheduler. It too provides cron-like settings for you to work with.
Please check the above projects for more info.
Hope this helps.
In the end I did not use Resque, because I want a process on the Ubuntu server that runs on a schedule (every hour). For example, every hour it checks the table contents and sends an alarm to the users by email.
I made a process with Process.daemon and rufus-scheduler for the scheduling:
require 'rufus-scheduler'

class TaskTest
  def task
    scheduler = Rufus::Scheduler.new
    # Every hour, check the table and mail the alarm
    scheduler.every '1h' do
      msg = "Message"
      mailer = MailerProcess.new
      mailer.send_mail('email-address', 'password', 'to-email', 'Subject', msg)
      puts Time.now
    end
    # Block so the process keeps the scheduler alive
    scheduler.join
  end
end

# Detach from the terminal, then run the scheduler in a forked child process
Process.daemon(true)
task_test = TaskTest.new
pid = Process.fork do
  task_test.task
end

django celery rabbitmq execute delay

I use django-celery + RabbitMQ to execute some async tasks. I defined a queue 'sendmail' to execute the send-email task; sending mail is triggered by a specific task (which has its own queue). But now I have a problem: after the specific task finishes, the mail is sometimes sent at once and sometimes takes 5-20 minutes. I want to know what causes this.
django-celery packages the task name and parameters into a message for RabbitMQ when task.delay() is called.
I want to know when the message reaches RabbitMQ, but the web management tool only shows the total number of messages; it can't show each message's details, in particular the time the message arrived. The django-celery log only shows the time the worker got the task from the broker and the time it executed the task. I want to know all the related time points so I can tell which step consumes most of the time.
Django-Celery does (I believe) report task data on a per-task basis. When you sync your database, it creates a bunch of monitoring tables which are accessible via the admin. However, in order for tasks to be recorded in those tables, you need to run the celerycam program in the Django context (python ./manage.py celerycam). The celerycam program takes "snapshots" of your tasks every second or so (by default) and records information about them. Another useful tool for monitoring is the celerymon program (which also has to run in the Django context); it is a command-line ncurses program that reports real-time information about tasks as they occur. Finally, rabbitmqctl has a bunch of options that might help with monitoring.
This is a particularly useful page in the docs:
http://celery.github.com/celery/userguide/monitoring.html
Anyway, this is what I use to monitor my tasks when using celery.
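If you mainly need coarse timestamps for each step, Celery's signal hooks can also log them without extra tooling. Below is a minimal sketch (signal names as in Celery 3.x; older Celery/django-celery versions expose task_sent instead of after_task_publish); the gap between the "published" and "started" lines is where a 5-20 minute queueing delay would show up:
import logging
import time
from celery.signals import after_task_publish, task_prerun, task_postrun

logger = logging.getLogger('task_timing')

@after_task_publish.connect
def log_publish(sender=None, **kwargs):
    # Fires in the publishing process right after the message is handed to RabbitMQ
    logger.info('published %s at %s', sender, time.time())

@task_prerun.connect
def log_prerun(task_id=None, task=None, **kwargs):
    # Fires in the worker just before the task body runs
    logger.info('started %s (%s) at %s', task.name, task_id, time.time())

@task_postrun.connect
def log_postrun(task_id=None, task=None, **kwargs):
    # Fires in the worker right after the task body returns
    logger.info('finished %s (%s) at %s', task.name, task_id, time.time())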