Delayed Job on Heroku does not work - ruby-on-rails-4

My app runs fine on my local machine (which has 16 GB of RAM) when I use the 'heroku local' command to start both the dyno and the workers from the Procfile. The background jobs queued in Delayed Job are processed one by one, and then the table is emptied. When I run on Heroku, it fails to execute the background processing at all. It gets stuck with the following out-of-memory messages in my log file:
2016-04-03T23:48:06.382070+00:00 app[web.1]: Using rack adapter
2016-04-03T23:48:06.382149+00:00 app[web.1]: Thin web server (v1.6.4 codename Gob Bluth)
2016-04-03T23:48:06.382154+00:00 app[web.1]: Maximum connections set to 1024
2016-04-03T23:48:06.382155+00:00 app[web.1]: Listening on 0.0.0.0:7557, CTRL+C to stop
2016-04-03T23:48:06.711418+00:00 heroku[web.1]: State changed from starting to up
2016-04-03T23:48:37.519962+00:00 heroku[worker.1]: Process running mem=541M(105.8%)
2016-04-03T23:48:37.519962+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2016-04-03T23:48:59.317063+00:00 heroku[worker.1]: Process running mem=708M(138.3%)
2016-04-03T23:48:59.317063+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2016-04-03T23:49:21.449475+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2016-04-03T23:49:21.449325+00:00 heroku[worker.1]: Process running mem=829M(161.9%)
2016-04-03T23:49:24.273557+00:00 app[worker.1]: rake aborted!
2016-04-03T23:49:24.273587+00:00 app[worker.1]: Can't modify frozen hash
2016-04-03T23:49:24.274764+00:00 app[worker.1]: /app/vendor/bundle/ruby/2.2.0/gems/activerecord-4.2.6/lib/active_record/attribute_set/builder.rb:45:in `[]='
2016-04-03T23:49:24.274771+00:00 app[worker.1]: /app/vendor/bundle/ruby/2.2.0/gems/activerecord-4.2.6/lib/active_record/attribute_set.rb:39:in `write_from_user'
I know that R14 is an out-of-memory error, so I have two questions:
Is there any way Delayed Job can be tuned to use less memory? Some disk swapping would be involved, but at least it would run.
Why do I keep getting the rake aborted! Can't modify frozen hash error (the last four lines of the log shown above)? I do not get it in my local environment. What does it mean? Is it memory related?
Thanks in advance for your time. I am running Rails 4.2.6 and delayed_job 4.1.1, as shown below:
→ gem list | grep delayed
delayed_job (4.1.1)
delayed_job_active_record (4.1.0)
delayed_job_web (1.2.10)
Bharat

I found the problem. I am posting my solution here for those who may run into similar problems.
I bumped the Heroku worker up to a standard-2X dyno, giving it 1 GB of memory, so as to remove the memory quota problem. That made R14 go away, but I still continued to get
rake aborted!
Can't modify frozen hash
error, and the program would then crash, so the problem was clearly here. After much research, I found that the previous programmer had used the 'workless' gem to reduce Heroku charges. The workless gem puts Heroku workers to sleep when they are not in use, so no charges are incurred while nothing is running.
What I did not mention in my original question is that I had upgraded the app from Rails 3.2.9 to Rails 4.2.6. My research also showed that the workless gem had not been updated in the last three years, and there was no mention of Rails 4 on its site. So chances were that it would not work well with Rails 4.2.6 and Heroku.
I saw some lines in my stack trace that were related to the workless gem. That was my clue to see what would happen if I removed the gem from production. So I removed it and redeployed.
The frozen hash error went away, and my delayed_job worker ran successfully to completion on Heroku.
The lesson for me: read the log carefully and check all the dependencies :)
Hope this helps.
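On the original question 1 (tuning Delayed Job to use less memory), there are also a few worker settings worth knowing about. This is only a sketch using settings documented by delayed_job; the values here are illustrative, not recommendations:

```ruby
# config/initializers/delayed_job.rb -- illustrative values only
Delayed::Worker.read_ahead = 1       # fetch 1 job per query instead of the default 5
Delayed::Worker.sleep_delay = 10     # seconds to wait before polling an empty queue
Delayed::Worker.max_attempts = 3     # retry failed jobs fewer times than the default 25
Delayed::Worker.destroy_failed_jobs = false  # keep failed rows around for inspection
```

Reducing read_ahead lowers how many job rows are loaded into memory at once; the other settings mostly reduce churn rather than peak memory.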

Related

Celery beat process allocating large amount of memory at startup

I operate a Django 1.9 website on Heroku, with Celery 3.1.23. RabbitMQ is used as the broker.
After restarting the beat worker, memory usage is always around 497 MB. This results in frequent Error R14 (Memory quota exceeded), as it quickly reaches the 512 MB limit.
How can I analyze what is in memory at startup? I.e., how can I get a breakdown of what is in memory right after a restart?
Here is a detail of memory consumption obtained with the beta Heroku log-runtime-metrics:
heroku/beat.1:
source=beat.1 dyno=heroku.52346831.1ea92181-ab6d-461c-90fa-61fa8fef2c18
sample#memory_total=497.66MB
sample#memory_rss=443.91MB
sample#memory_cache=20.43MB
sample#memory_swap=33.33MB
sample#memory_pgpgin=282965pages
sample#memory_pgpgout=164606pages
sample#memory_quota=512.00MB
I had the same problem. Searching around, I followed "How many CPU cores has a heroku dyno?" and "Celery immediately exceeds memory on Heroku".
So I ran:
heroku run grep -c processor /proc/cpuinfo -a <app_name>
It returned 8, so I added --concurrency=4 to my Procfile line:
worker: celery -A <app> worker -l info -O fair --without-gossip --without-mingle --without-heartbeat --concurrency=4
And memory usage was almost halved:
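Concurrency is not the only knob: Celery 3.1 can also recycle its worker processes after a fixed number of tasks, which caps slow per-child memory growth. A sketch of the same Procfile line with --maxtasksperchild added (the value 100 is arbitrary; <app> is the placeholder from the question):

```shell
worker: celery -A <app> worker -l info -O fair --without-gossip --without-mingle --without-heartbeat --concurrency=4 --maxtasksperchild=100
```

This trades a little task throughput (fork overhead on recycle) for a bounded resident set per child process.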

GC overhead limit exceeded for assets:precompile in Rails

When I run rake assets:precompile RAILS_ENV=production, I get the error below:
Java::JavaLang::OutOfMemoryError: GC overhead limit exceeded
(in /home/avijit/railswork/tracksynqv2/app/assets/javascripts/application.js)
org.mozilla.javascript.Interpreter.interpretLoop(org/mozilla/javascript/Interpreter.java:1382)
org.mozilla.javascript.Interpreter.interpret(org/mozilla/javascript/Interpreter.java:815)
org.mozilla.javascript.InterpretedFunction.call(org/mozilla/javascript/InterpretedFunction.java:109)
org.mozilla.javascript.ContextFactory.doTopCall(org/mozilla/javascript/ContextFactory.java:393)
org.mozilla.javascript.ScriptRuntime.doTopCall(org/mozilla/javascript/ScriptRuntime.java:3280)
org.mozilla.javascript.InterpretedFunction.call(org/mozilla/javascript/InterpretedFunction.java:107)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
RUBY.call(/home/avijit/.rvm/gems/jruby-1.7.16/gems/therubyrhino-2.0.4/lib/rhino/rhino_ext.rb:193)
Tasks: TOP => assets:precompile
(See full trace by running task with --trace)
I updated my production.rb file with config.assets.compile = true and config.serve_static_assets = true. I deploy my Rails app using Passenger and Apache2.
assets:precompile consumes a lot of memory when you run it; check your system monitor while it is running, and increase the memory on the server where you execute the task.
By the way, config.serve_static_assets should be set to false in production; the server software (e.g. NGINX or Apache) in front of the application should serve static assets instead. Also, as I recall, this setting has since been renamed to config.serve_static_files.
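Since the stack trace comes from therubyrhino running under JRuby, the memory that matters here is the JVM heap, which can be raised for a single run via JRUBY_OPTS. A sketch; the 1 GB figure is an assumption, so size it to the machine actually running the task:

```shell
# Raise the JVM max heap for the JRuby process running the rake task;
# -J-Xmx passes -Xmx straight through to the underlying JVM.
JRUBY_OPTS="-J-Xmx1024m" RAILS_ENV=production bundle exec rake assets:precompile
```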

500 Internal Server Error - ActionView::Template::Error in Rails Production second pass

Loved the previous question at:
500 Internal Server Error - ActionView::Template::Error in Rails Production
I get the same error when browsing the git tree via the web (internal 500), but the answer there said I should run
bundle exec rake assets:precompile
and referred me to
http://guides.rubyonrails.org/asset_pipeline.html#in-production
I am running GitLab 7.6.1 0286222 on Ubuntu 14.04 LTS, fully up to date. It lets me push and pull from local git machines fine, and I can look around via the web service as well. I ran the revised assets:precompile as suggested there, but the problem continues for me.
So, on to my specific error. In the production log I get:
git#git01:~/gitlab/log$ tail -n 20 production.log
Started GET "/chef/cheftest/tree/master/cookbooks" for 127.0.0.1 at 2014-12-24 16:03:25 -0500
Processing by Projects::TreeController#show as HTML
Parameters: {"project_id"=>"chef/cheftest", "id"=>"master/cookbooks"}
Completed 500 Internal Server Error in 490ms
ActionView::Template::Error (undefined method `[]' for nil:NilClass):
1: - tree, commit = submodule_links(submodule_item)
2: %tr{ class: "tree-item" }
3: %td.tree-item-file-name
4: %i.fa.fa-archive
app/models/repository.rb:162:in `method_missing'
app/models/repository.rb:228:in `submodule_url_for'
app/helpers/submodule_helper.rb:6:in `submodule_links'
app/views/projects/tree/_submodule_item.html.haml:1:in `_app_views_projects_tree__submodule_item_html_haml___742655240099390426_69818877669240'
app/helpers/tree_helper.rb:19:in `render_tree'
app/views/projects/tree/_tree.html.haml:42:in `_app_views_projects_tree__tree_html_haml__47884322835133800_69818822684460'
app/views/projects/tree/show.html.haml:9:in `_app_views_projects_tree_show_html_haml__1575471590709486656_69818822138660'
app/controllers/projects/tree_controller.rb:13:in `show'
I would be happy to run any commands and edit any configuration files as needed, but please let me know where the files are and how to run the commands. Thanks for your help with this.

error: failed to push some refs to 'git#heroku.com:dry-plains-3718.git'

Here is what I am getting:
Counting objects: 10, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (8/8), done.
Writing objects: 100% (10/10), 3.60 KiB, done.
Total 10 (delta 0), reused 0 (delta 0)
-----> Heroku receiving push
! Heroku push rejected, no Cedar-supported app detected
To git#heroku.com:dry-plains-3718.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git#heroku.com:dry-plains-3718.git'
Earlier I thought it was a problem with the .gitignore file, but that is also working fine. I have ignored my virtualenv and *.pyc files as given in the documentation.
I tried :
heroku create --stack cedar
I also had to add my public key to Heroku:
heroku keys:add ~/.ssh/id_rsa.pub
but this did not solve my problem either.
I don't know much about the Heroku implementation. Is there anything specific I should check or try?
Please help; I have referred to many documents but am still getting the same error. Thanks in advance :)
I believe Cedar recognizes Django apps by the existence of a requirements.txt file.
Please check to be sure you have created 'requirements.txt' and 'Procfile' in the root of the source tree being pushed. The names are case-sensitive.
This tutorial includes instructions on creating them:
https://devcenter.heroku.com/articles/django
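A sketch of creating the two files at the repo root. The runserver line mirrors the style of the old Heroku Django tutorial and is a placeholder; in a real deployment you would more likely point a WSGI server such as gunicorn at your project module:

```shell
# Create the files Heroku's Python/Django detection looks for, at the repo root.
pip freeze > requirements.txt   # pin your dependencies so Heroku detects a Python app
printf 'web: python manage.py runserver 0.0.0.0:$PORT --noreload\n' > Procfile
```

Commit both files and push again; without requirements.txt, the Cedar stack has no way to tell this is a Python app.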

heroku outputs "error fetching custom buildpack", but only sometimes

I have a Django project hosted on Heroku with a buildpack forked from cirlabs/heroku-buildpack-geodjango. Sometimes when I push to Heroku it responds with
Counting objects: 16, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 790 bytes, done.
Total 9 (delta 7), reused 0 (delta 0)
-----> Heroku receiving push
-----> Fetching custom buildpack... failed
! Heroku push rejected, error fetching custom buildpack
To git#heroku.com:taplister-staging.git
! [remote rejected] dev -> master (pre-receive hook declined)
error: failed to push some refs to 'git#heroku.com:heroku-app.git'
I'm wondering if this may be an error with the buildpack itself, or something about how Heroku interacts with GitHub?
Oh, also, among my Heroku config vars is the buildpack URL:
BUILDPACK_URL: https://github.com/taplister/heroku-buildpack-geodjango
Any insights are greatly appreciated.
This occasionally happens. Since you're using a custom buildpack, each time you push, Heroku downloads the buildpack over Git and then uses it to process your build.
Sometimes, depending on conditions (network latency, temporary downtime, and so on), Heroku just won't be able to finish the Git clone, and it will fail with the above error.
This is a known issue, and the only way around it is to retry the push.
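Since the failure is transient, the retry can be scripted rather than done by hand. A sketch; the remote name and dev:master refspec match the question and should be adjusted to your setup:

```shell
# Retry the push up to 3 times with a pause between attempts,
# since the buildpack fetch failure is usually transient.
n=1
until git push heroku dev:master; do
  if [ "$n" -ge 3 ]; then
    echo "giving up after $n attempts" >&2
    break
  fi
  n=$((n+1))
  sleep 10
done
```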