When deploying a Rails project, we sometimes forget to run rake db:migrate, rake db:seed, or some other rake task after the deploy. I am trying to add a script to the Rails project structure that will run these rake tasks automatically after every deployment.
Something like this, run in sequence:
rake db:migrate
rake db:seed
rake (here standing for a task that combines all the remaining rake jobs)
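As a rough sketch, that chain could be expressed as a single rake task that invokes the others in order (the deploy:after_deploy name and the list of tasks here are my own assumptions, not an established convention):

# lib/tasks/after_deploy.rake (hypothetical file)
namespace :deploy do
  desc 'Run the usual post-deployment rake tasks in order'
  task :after_deploy => :environment do
    %w[db:migrate db:seed].each do |task_name|
      puts "Running #{task_name}..."
      Rake::Task[task_name].invoke
    end
    # Invoke any other project-specific tasks here, e.g.
    # Rake::Task['my_app:rebuild_cache'].invoke
  end
end

You would then run rake deploy:after_deploy as the last step of every deployment, or hook it into whatever deploy tool you use.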
I normally deploy to Heroku. I am not sure where you are deploying to, but here is a copy of my deploy script:
# vim lib/tasks/deploy.rake
namespace :deploy do
  # rake deploy:production <sha>   # Deploy to production
  desc "Deploy to production"
  task :production do
    puts "THIS IS DEPLOYING TO PRODUCTION!!!"
    puts "********* CAREFUL! *********"
    sleep 10
    backup_environment_db("heroku-prod")
    deploy_sha_to_environment(ARGV[1], "heroku-prod")
    system("open http://www.example.com/")
  end

  def deploy_sha_to_environment(sha, environment)
    if sha
      puts "About to push #{sha} to #{environment}."
      message = `git log --format=%B -n 1 #{sha}`
      puts "#{sha} -- #{message}"
      sleep 2
      # Run the push and migration outside the app's bundler environment
      Bundler.with_clean_env do
        system "git push --force git@heroku.com:#{environment}.git #{sha}:master"
        system "heroku run rake db:migrate --app #{environment}"
        system "heroku restart --app #{environment}"
      end
    else
      puts
      puts "*** sha required! (pass it as an argument) ***"
      puts
      Rake.application.invoke_task("codeship:statuses")
      exit
    end
  end
end
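The backup_environment_db helper called above isn't part of the excerpt; purely as an illustration of what such a helper might do (this is my own guess, not the original code), it could capture a database snapshot through the Heroku CLI before the push:

def backup_environment_db(environment)
  # Hypothetical helper: snapshot the database before deploying new code.
  Bundler.with_clean_env do
    system "heroku pg:backups:capture --app #{environment}"
  end
end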
I hope that this helps
Related
When I run rake assets:precompile RAILS_ENV=production, I get the error below:
Java::JavaLang::OutOfMemoryError: GC overhead limit exceeded
(in /home/avijit/railswork/tracksynqv2/app/assets/javascripts/application.js)
org.mozilla.javascript.Interpreter.interpretLoop(org/mozilla/javascript/Interpreter.java:1382)
org.mozilla.javascript.Interpreter.interpret(org/mozilla/javascript/Interpreter.java:815)
org.mozilla.javascript.InterpretedFunction.call(org/mozilla/javascript/InterpretedFunction.java:109)
org.mozilla.javascript.ContextFactory.doTopCall(org/mozilla/javascript/ContextFactory.java:393)
org.mozilla.javascript.ScriptRuntime.doTopCall(org/mozilla/javascript/ScriptRuntime.java:3280)
org.mozilla.javascript.InterpretedFunction.call(org/mozilla/javascript/InterpretedFunction.java:107)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
RUBY.call(/home/avijit/.rvm/gems/jruby-1.7.16/gems/therubyrhino-2.0.4/lib/rhino/rhino_ext.rb:193)
Tasks: TOP => assets:precompile
(See full trace by running task with --trace)
I updated my production.rb file with config.assets.compile = true and config.serve_static_assets = true. I deploy my Rails app using Passenger and Apache 2.
assets:precompile consumes a lot of memory when you run it; check your system monitor while it is running and increase the memory on the server where you execute the task.
By the way, config.serve_static_assets should be false in production; the server software (e.g. Nginx or Apache) in front of the application should serve static assets instead. Also, as I recall, this option has since been renamed to config.serve_static_files.
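As a sketch, the relevant part of config/environments/production.rb would then look roughly like this (assuming assets are precompiled during deployment and Apache/Nginx serves them):

# config/environments/production.rb
# Don't compile assets on demand; they are precompiled at deploy time.
config.assets.compile = false
# Let the front-end web server (Apache/Nginx) serve files from public/assets.
config.serve_static_assets = false  # renamed to config.serve_static_files in later Rails versions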
I'm trying to deploy my Rails 4 app using Capistrano 3. I'm getting errors when running the migrations (I've been sloppy, sorry). Is there a way to have Capistrano deploy the app (at least the first time) using db:schema:load?
An excerpt of my deploy.rb:
namespace :deploy do
  %w[start stop restart].each do |command|
    desc 'Manage Unicorn'
    task command do
      on roles(:app), in: :sequence, wait: 1 do
        execute "/etc/init.d/unicorn_#{fetch(:application)} #{command}"
      end
    end
  end
I'm not sure how to override Capistrano 3's default behaviour. Can someone tell me how to add this to my script?
For first-time deploys, I generally hack around it by logging into the server, cd-ing into the release directory (which contains the deployed code at that point), and then manually running RAILS_ENV=yourenv bundle exec rake db:setup.
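If you prefer to script that instead of doing it by hand, here is a rough sketch of a custom Capistrano 3 task (the task name and file location are my own; adjust roles and environment to your setup):

# lib/capistrano/tasks/schema_load.rake (hypothetical location)
namespace :deploy do
  desc 'Load the schema instead of running migrations (only safe on a fresh database)'
  task :schema_load do
    on roles(:db) do
      within release_path do
        with rails_env: fetch(:rails_env) do
          execute :rake, 'db:schema:load'
        end
      end
    end
  end
end

# Run it manually when needed:
#   cap production deploy:schema_load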
In Capistrano 3.10.1 with a Rails 5.1.6 application,
~/Documents/p.rails/perla-uy[staging]$ bundle exec cap staging deploy:updating
gives me enough to shell in and run the db:structure:load or db:schema:load task manually. In the secure shell session to the host, switch to the newly created release directory and:
dclo#localhost:~/perla-uy/releases/20180412133715$ bundle install --without development test --deployment
dclo#localhost:~/perla-uy/releases/20180412133715$ bundle exec rails db:schema:load
Shelling into a (successful or failed) deploy that has tried deploy:migrate isn't quite the same.
Note: I have RAILS_ENV=production and RAILS_MASTER_KEY=... set up by the shell login.
I'm pretty new to Chef deployments, and I'm trying to deploy a Rails app with OpsWorks. The trouble is with asset precompilation.
I have this recipe to perform the precompilation:
execute "rake assets:precompile" do
cwd release_path
command "bundle exec rake assets:precompile --trace"
environment "RAILS_ENV" => "production"
end
When I deploy with Chef, I get the following error:
ERROR: undefined method `release_path' for Chef::Resource::Execute
What's weird is that every example recipe I can find makes use of the release_path helper. How could it not be defined here?
Here is how I do asset precompilation for a Rails application on OpsWorks:
This code goes in your application's deploy folder, in a file called "before_migrate.rb", i.e. /approot/deploy/before_migrate.rb.
The environment variables are defined on the application in OpsWorks.
rails_env = new_resource.environment["RAILS_ENV"]
secret_key_base = new_resource.environment["SECRET_KEY_BASE"]
devise_secret_key = new_resource.environment["DEVISE_SECRET_KEY"]
Chef::Log.info("Precompiling assets for RAILS_ENV=#{rails_env}...")
Chef::Log.info("SECRET_KEY_BASE=#{secret_key_base}, DEVISE_SECRET_KEY=#{devise_secret_key}")
execute "rake assets:precompile" do
  cwd release_path
  command "RAILS_ENV=#{rails_env} bundle exec rake assets:precompile"
  # Pass all variables in a single hash; repeated `environment` calls would overwrite each other.
  environment "RAILS_ENV" => rails_env,
              "SECRET_KEY_BASE" => secret_key_base,
              "DEVISE_SECRET_KEY" => devise_secret_key
end
I fixed this by using node[:deploy]['appshortname'][:deploy_to]. My full recipe is below:
node[:deploy].each do |application, deploy|
  execute "rake assets:precompile" do
    cwd "#{deploy[:deploy_to]}/current"
    command "bundle exec rake assets:precompile --trace"
    environment deploy[:environment_variables].merge(
      "RAILS_ENV" => deploy[:rails_env]
    )
  end
end
Let's say I have this Rake task:
namespace :db do
  namespace :dump do
    desc 'Backup database dump to s3'
    task :backup => :environment do
      cmd = ['backup', 'perform', '-t project_backup', "-c #{Rails.root.join 'lib', 'backup', 'config.rb'}"]
      system(*cmd) # ...I've tried backticks and exec() as well, same thing
    end
  end
end
The Backup gem is a standalone Ruby application whose dependencies need to be isolated from the application's bundle; in other words, it cannot be part of the Gemfile. The gem is simply installed with gem install backup.
When I run the backup command from a bash console, it runs successfully:
$ backup perform -t validations_backup -c /home/equivalent/my_project/lib/backup/config.rb
When I execute rake db:dump:backup I get:
backup is not part of the bundle. Add it to Gemfile. (Gem::LoadError)
...which is the same thing I get when I run the backup command with bundle exec from bash:
$ bundle exec backup perform -t validations_backup -c /home/equivalent/my_project/lib/backup/config.rb
...meaning that the backup command is executed under Bundler when run as part of the rake task.
My question: how can I run rake db:dump:backup outside the bundle scope, so that the backup command is not executed under Bundler?
Thank you
I found a workaround for this problem here:
namespace :db do
  namespace :dump do
    desc 'Backup database dump to s3'
    task :backup do
      Bundler.with_clean_env do
        sh "backup perform -t project_backup -c #{Rails.root.join 'lib', 'backup', 'config.rb'}"
      end
    end
  end
end
The key here is to enclose the code that must not run under bundler's environment in a block like this:
Bundler.with_clean_env do
  # Code that needs to run without the bundler environment loaded
end
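Note that on newer Bundler versions (2.1 and later) Bundler.with_clean_env is deprecated in favor of Bundler.with_unbundled_env, so the same workaround would look like:

Bundler.with_unbundled_env do
  # Code that needs to run without the bundler environment loaded
end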
Here is the Capistrano solution I mentioned, for those who need it while we figure out how to fix the Rake task.
class BackupDatabaseCmd
  def self.cmd
    # some logic to calculate:
    'RAILS_ENV=production backup perform -t name_of_backup_task -c /home/deploy/apps/my_project/current/lib/backup/config.rb'
    # in the configuration file I'm loading `config/database.yml`
    # and passing the values to the backup gem configuration
  end
end

namespace :backup do
  namespace :database do
    task :to_s3 do
      on roles(:web) do
        within release_path do
          with rails_env: fetch(:rails_env) do
            execute(BackupDatabaseCmd.cmd)
          end
        end
      end
    end
  end
end

# cap production backup:database:to_s3
At present, I have it set up so that Capistrano pulls the latest code from git onto the production servers, then runs bundle install and asset precompilation individually on each web server.
The problem I am running into is that occasionally this takes a long time and uses a lot of resources, which impacts performance on the production servers.
I am looking for guidelines on how best to do this.
If anyone has experience with this and can share their opinions, I would really appreciate it.
I am looking to see if this is a good or bad idea, and what common pitfalls I should watch out for.
I would also appreciate any link to blog post/tutorial/documentation that could help with this.
Thanks for reading.
Ankit.
Here is my workaround. Try adding it inside namespace :deploy:
namespace :assets do
  desc 'Run the precompile task locally and rsync with shared'
  task :precompile, :roles => :web, :except => { :no_release => true } do
    unless skip_assets
      %x{bundle exec rake assets:clean RAILS_ENV=#{rails_env}}
      run_local "bundle exec rake assets:precompile RAILS_ENV=#{rails_env}"
      servers = find_servers_for_task(current_task)
      port_option = port ? "-e 'ssh -p #{port}'" : ''
      servers.each do |server|
        %x{rsync --recursive --times --rsh=ssh --compress --human-readable --progress #{port_option} public/assets #{user}@#{server}:#{shared_path}}
      end
      %x{bundle exec rake assets:clean RAILS_ENV=#{rails_env}}
    end
  end
end
def run_local(cmd)
  system cmd
  if $?.exitstatus != 0
    puts 'exit code: ' + $?.exitstatus.to_s
    exit
  end
end