Let's say I have this Rake task:
namespace :db do
  namespace :dump do
    desc 'Backup database dump to s3'
    task :backup => :environment do
      cmd = ['backup', 'perform', '-t', 'project_backup', '-c', Rails.root.join('lib', 'backup', 'config.rb').to_s]
      system(*cmd) # ...I've tried `` & exec() as well, same thing
    end
  end
end
The Backup gem is a standalone Ruby application whose dependencies need to be isolated from the application's bundle. In other words, it cannot be part of the Gemfile. The gem is simply installed with gem install backup.
When I run the backup command from a bash console, it runs successfully:
$ backup perform -t validations_backup -c /home/equivalent/my_project/lib/backup/config.rb
When I execute rake db:dump:backup I get:
backup is not part of the bundle. Add it to Gemfile. (Gem::LoadError)
...which is the same thing that happens when I run the backup command with bundle exec from bash:
$ bundle exec backup perform -t validations_backup -c /home/equivalent/my_project/lib/backup/config.rb
...meaning that the backup command is executed under bundler when run as part of the rake task.
My question: how can I run rake db:dump:backup outside the bundle scope, so that the backup command won't be executed under bundler?
Thank you
I found a workaround for this problem:
namespace :db do
  namespace :dump do
    desc 'Backup database dump to s3'
    task :backup do
      Bundler.with_clean_env do
        sh "backup perform -t project_backup -c #{Rails.root.join 'lib', 'backup', 'config.rb'}"
      end
    end
  end
end
The key here is to enclose the code that must not run under bundler's environment in a block like this:
Bundler.with_clean_env do
  # Code that needs to run without the bundler environment loaded
end
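Note that on newer Bundler versions (2.1 and later), with_clean_env is deprecated in favor of with_unbundled_env. Assuming you are on such a version, the same workaround would look roughly like this:
Bundler.with_unbundled_env do
  sh "backup perform -t project_backup -c #{Rails.root.join 'lib', 'backup', 'config.rb'}"
end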
Here is the Capistrano solution I was mentioning for those who need it while we figure out how to fix Rake.
class BackupDatabaseCmd
  def self.cmd
    # some logic to calculate:
    'RAILS_ENV=production backup perform -t name_of_backup_task -c /home/deploy/apps/my_project/current/lib/backup/config.rb'
    # in the configuration file I'm loading `config/database.yml`
    # and passing them to the backup gem configuration
  end
end
namespace :backup do
  namespace :database do
    task :to_s3 do
      on roles(:web) do
        within release_path do
          with rails_env: fetch(:rails_env) do
            execute(BackupDatabaseCmd.cmd)
          end
        end
      end
    end
  end
end
# cap production backup:database:to_s3
Related
I'm trying to send a message automatically using the whenever gem. I am at the initial stage. I installed the 'whenever' gem and did the following steps:
1. Add "gem 'whenever', :require => false" to the Gemfile.
2. bundle install.
3. wheneverize .
4. In schedule.rb, add the following code:
set :output, "#{path}/log/cron.log"
# every 1.day, :at => '4:30 am' do
every 5.minutes do
  runner "Payment.sendMessage", :environment => "development"
end
5. And the model looks like:
class Payment < ActiveRecord::Base
  def sendMessage
    puts "Hello"
  end
end
6. When I run bundle exec whenever, I get the following output:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /bin/bash -l -c 'cd /home/prabha/rails_job && bundle exec bin/rails runner -e development '\''Payment.sendMessage'\'' >> /home/prabha/rails_job/log/cron.log 2>&1'
## [message] Above is your schedule file converted to cron syntax; your crontab file was not updated.
## [message] Run `whenever --help' for more options.
I am stuck at this step. What do I need to do to proceed further? Can anyone guide me?
Thanks.
You need to update your crontab file.
Do the following:
whenever --update-crontab
For more information, please check the whenever gem's GitHub README page.
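To confirm that the entry actually landed in cron (assuming a standard crontab setup), you can list it afterwards:
$ whenever --update-crontab
$ crontab -l    # the generated entry should now show up here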
1) sendMessage should be a class method, as sketched below.
2) You can use the whenever command in your project directory to see the cron configuration and then copy it into your crontab.
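For point 1, a minimal sketch of the model with sendMessage defined as a class method, so that the runner call Payment.sendMessage can resolve it:
class Payment < ActiveRecord::Base
  # class method, callable as Payment.sendMessage from the cron runner
  def self.sendMessage
    puts "Hello"
  end
end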
I'm trying to deploy my Rails 4 app using Capistrano 3. I'm getting error messages when running the db migrations (I've been sloppy, sorry). Is there a way to have Capistrano deploy the app (at least the first time) using db:schema:load instead?
An excerpt of my deploy.rb:
namespace :deploy do
  %w[start stop restart].each do |command|
    desc 'Manage Unicorn'
    task command do
      on roles(:app), in: :sequence, wait: 1 do
        execute "/etc/init.d/unicorn_#{fetch(:application)} #{command}"
      end
    end
  end
I'm not sure how to override Capistrano 3's default behaviour. Can someone tell me how to add this to my script?
For first time deploys, I generally hack around it by logging into the server, cding into the release directory (which will have the deployed code at this point), and then manually running RAILS_ENV=yourenv bundle exec rake db:setup.
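A rough example of that sequence (the host name and paths are placeholders; adjust them to your deploy_to setting):
$ ssh deploy@your-server
$ cd /var/www/your_app/releases/20240101000000   # the release Capistrano just created
$ RAILS_ENV=production bundle exec rake db:setup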
In Capistrano 3.10.1 with a Rails 5.1.6 application,
~/Documents/p.rails/perla-uy[staging]$ bundle exec cap staging deploy:updating
gives me enough to shell in and run the db:structure:load or db:schema:load task manually. In the SSH session on the host, switch to the newly created release directory and:
dclo#localhost:~/perla-uy/releases/20180412133715$ bundle install --without development test --deployment
dclo#localhost:~/perla-uy/releases/20180412133715$ bundle exec rails db:schema:load
Shelling into a (successful or failed) deploy that has tried deploy:migrate isn't quite the same.
Note: I have RAILS_ENV=production and RAILS_MASTER_KEY=... set up by the shell login.
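If you would rather script it than shell in, here is a sketch of a custom Capistrano 3 task that mirrors the on/within/with structure shown earlier on this page and loads the schema instead of migrating; the file path and task name are illustrative, not part of the original answer:
# lib/capistrano/tasks/schema_load.rake (hypothetical location)
namespace :deploy do
  desc 'Load db/schema.rb instead of running migrations (first deploy only)'
  task :schema_load do
    on roles(:db) do
      within release_path do
        with rails_env: fetch(:rails_env) do
          execute :rake, 'db:schema:load'
        end
      end
    end
  end
end
# cap production deploy:schema_load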
In Rails deployment, we sometimes forget to execute rake db:migrate, rake db:seed or some other rake job after deploying the project. I am trying to add a script to the Rails project structure that will call these rake jobs automatically after each deployment.
Just like this, in sequence:
rake db:migrate
rake db:seed
rake (here "rake" stands for the combination of all the remaining rake jobs)
I normally deploy to Heroku. I am not sure where you are deploying to, but here is a copy of my deploy script:
#vim lib/tasks/deploy.rake
namespace :deploy do
  # rake deploy:production # Deploy to production
  desc "Deploy to production"
  task :production do
    puts "THIS IS DEPLOYING TO PRODUCTION!!!"
    puts "********* CAREFUL! *********"
    sleep 10
    backup_environment_db("heroku-prod")
    deploy_sha_to_environment(ARGV[1], "heroku-prod")
    system("open http://www.example.com/")
  end

  def deploy_sha_to_environment(sha, environment)
    if sha = ARGV[1]
      puts "About to push #{sha} to #{environment}."
      message = `git log --format=%B -n 1 #{sha}`
      puts "#{sha} -- #{message}"
      sleep 2
      Bundler.with_clean_env do
        system "git push --force git@heroku.com:#{environment}.git #{sha}:master"
        system "heroku run rake db:migrate --app #{environment}"
        system "heroku restart --app #{environment}"
      end
    else
      puts
      puts "*** sha required! (pass it as an argument) ***"
      puts
      Rake.application.invoke_task("codeship:statuses")
      exit
    end
  end
end
I hope that this helps
I'm pretty new to Chef deployments, and I'm trying to deploy a Rails app with OpsWorks. The trouble is with asset precompilation.
I have this recipe to perform the precompilation:
execute "rake assets:precompile" do
  cwd release_path
  command "bundle exec rake assets:precompile --trace"
  environment "RAILS_ENV" => "production"
end
When I deploy with Chef, I get the following error:
ERROR: undefined method `release_path' for Chef::Resource::Execute
What's weird is that every example recipe I can find makes use of the release_path helper. How could it not be defined here?
Here is how I do asset precompilation in a Rails application on OpsWorks:
This code is placed in your application's deploy folder, in a file called "before_migrate.rb", i.e. /approot/deploy/before_migrate.rb.
The environment variables are created on the application defined in OpsWorks.
rails_env = new_resource.environment["RAILS_ENV"]
secret_key_base = new_resource.environment["SECRET_KEY_BASE"]
devise_secret_key = new_resource.environment["DEVISE_SECRET_KEY"]

Chef::Log.info("Precompiling assets for RAILS_ENV=#{rails_env}...")
Chef::Log.info("SECRET_KEY_BASE=#{secret_key_base}, DEVISE_SECRET_KEY=#{devise_secret_key}")

execute "rake assets:precompile" do
  cwd release_path
  command "RAILS_ENV=#{rails_env} bundle exec rake assets:precompile"
  # pass everything in a single hash; repeated `environment` calls would overwrite each other
  environment "RAILS_ENV" => rails_env,
              "SECRET_KEY_BASE" => secret_key_base,
              "DEVISE_SECRET_KEY" => devise_secret_key
end
I fixed this by using node[:deploy]['appshortname'][:deploy_to]. My full recipe is below:
node[:deploy].each do |application, deploy|
  execute "rake assets:precompile" do
    cwd "#{deploy[:deploy_to]}/current"
    command "bundle exec rake assets:precompile --trace"
    environment deploy[:environment_variables].merge(
      "RAILS_ENV" => deploy[:rails_env]
    )
  end
end
At present, I have it set up so that Capistrano git-pulls the latest code onto the production servers, then bundle installs and asset precompiles it individually on each web server.
The problem I am running into is that occasionally it takes a long time and uses a lot of resources, which impacts performance on the production servers.
I am looking for guidelines on how best to do this.
If anyone has experience with this and can share their opinions, I would really appreciate it.
I am looking to see if this is a good or bad idea and what common pitfalls I should watch out for.
I would also appreciate any links to blog posts/tutorials/documentation that could help with this.
Thanks for reading.
Ankit.
Here is my workaround. Try adding it inside namespace :deploy:
namespace :assets do
  desc 'Run the precompile task locally and rsync with shared'
  task :precompile, :roles => :web, :except => { :no_release => true } do
    unless skip_assets
      %x{bundle exec rake assets:clean RAILS_ENV=#{rails_env}}
      run_local "bundle exec rake assets:precompile RAILS_ENV=#{rails_env}"
      servers = find_servers_for_task(current_task)
      port_option = port ? "-e 'ssh -p #{port}'" : ''
      servers.each do |server|
        %x{rsync --recursive --times --rsh=ssh --compress --human-readable --progress #{port_option} public/assets #{user}@#{server}:#{shared_path}}
      end
      %x{bundle exec rake assets:clean RAILS_ENV=#{rails_env}}
    end
  end
end

def run_local(cmd)
  system cmd
  if $?.exitstatus != 0
    puts 'exit code: ' + $?.exitstatus.to_s
    exit
  end
end