Updating a value in Rails - ruby-on-rails-4

I am trying to update one value in Rails from within a Sidekiq worker. The first time after I restart the worker it works, but after that it no longer updates the value. I also tried moving the update into the model and calling that method from the worker, but the same thing happened.
def update_value
  self.update :compressing => false
end
name.update_value

OR

name.update_attribute(:compressing, 0)

OR

name.update_attribute(:compressing, false)
Nothing seems to work after the first time, but there is no error. Any hint would be really helpful.

Is this in a controller? You need to get the name instance first, e.g.
def update_value
  @name = Name.find(params[:id])
  @name.update_attributes(compressing: false)
end
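If the update happens inside the Sidekiq worker itself, a minimal sketch could look like the following, assuming the record's id is passed to the job (the worker name here is only illustrative):
# app/workers/compression_worker.rb (illustrative)
class CompressionWorker
  include Sidekiq::Worker

  def perform(name_id)
    # Re-fetch the record inside the job so the update always runs against
    # a fresh object rather than one held over from an earlier run.
    name = Name.find(name_id)
    name.update(compressing: false)
  end
end
Passing the id (rather than the object itself) avoids acting on a stale copy when the worker process keeps running between jobs.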

Related

Rails 4 Action Mailer Previews and Factory Girl issues

I've been running into quite an annoying issue when dealing with Rails 4 action mailer previews and factory girl. Here's an example of some of my code:
class TransactionMailerPreview < ActionMailer::Preview
  def purchase_receipt
    account = FactoryGirl.build_stubbed(:account)
    user = account.owner
    transaction = FactoryGirl.build_stubbed(:transaction, account: account, user: user)
    TransactionMailer.purchase_receipt(transaction)
  end
end
This could really be any Action Mailer preview. Let's say I get something wrong (happens every time) and there's an error. I fix the error and refresh the page. Every time this happens I get a:
"ArgumentError in Rails::MailersController#preview
A copy of User has been removed from the module tree but is still active!"
Then my only way out is to restart my server.
Am I missing something here? Any clue as to what is causing this and how it could be avoided? I've restarted my server 100 times over the past week because of this.
EDIT: It may actually be happening any time I edit my code and refresh the preview?
This answers my question:
https://stackoverflow.com/a/29710188/2202674
I used approach #3: Just put a :: in front of the offending module.
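Applied to the preview above, approach #3 looks roughly like this (which constant actually needs the :: prefix depends on which class the error message names):
class TransactionMailerPreview < ActionMailer::Preview
  def purchase_receipt
    account = FactoryGirl.build_stubbed(:account)
    transaction = FactoryGirl.build_stubbed(:transaction, account: account, user: account.owner)
    # Referencing the constant from the top-level namespace sidesteps the
    # stale autoloaded copy that triggers the "removed from the module tree
    # but is still active" error.
    ::TransactionMailer.purchase_receipt(transaction)
  end
end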
Though this is not exactly an answer (but perhaps a clue), I've had this problem too.
Do your factories cause any records to actually be persisted?
I ended up using Factory.build where I could, and stubbing out everything else with private methods and OpenStructs to be sure all objects were being created fresh on every reload, and nothing was persisting to be reloaded.
I'm wondering if whatever FactoryGirl.build_stubbed uses to trick the system into thinking the objects are persisted is causing the system to try to reload them (after they are gone).
Here's a snippet of what is working for me:
class SiteMailerPreview < ActionMailer::Preview
  def add_comment_to_page
    page = FactoryGirl.build :page, id: 30, site: cool_site
    user = FactoryGirl.build :user
    comment = FactoryGirl.build :comment, commentable: page, user: user
    SiteMailer.comment_added(comment)
  end

  private

  # this works across reloads where `Factory.build :site` would throw the error:
  # A copy of Site has been removed from the module tree but is still active!
  def cool_site
    site = FactoryGirl.build :site, name: 'Super cool site'
    def site.users
      user = OpenStruct.new(email: 'recipient@example.com')
      def user.settings(sym)
        OpenStruct.new(comments: true)
      end
      [user]
    end
    site
  end
end
Though I am not totally satisfied with this approach, I don't get those errors anymore.
I would be interested to hear if anyone else has a better solution.

Rufus-scheduler not working on nginx/passenger in production environment

I'm having problems with rufus-scheduler not working in the production environment. I've tried adding:
passenger_spawn_method direct;
passenger_min_instances 1;
rails_app_spawner_idle_time 0;
to the nginx config, but it still doesn't solve the problem.
My code using rufus-scheduler:
def expired_at=(datetime)
  datetime = Time.zone.parse(datetime) if datetime.class == String && !datetime.empty?
  if expired_at
    expired_at
  else
    if datetime > Time.zone.now
      scheduler = Rufus::Scheduler.new
      begin
        scheduler.at datetime.strftime("%Y/%m/%d %H:%M") do
          self.update_attributes(:finished => true)
        end
      rescue => ex
        Rails.logger.info ex.message
        Rails.logger.info ex.backtrace
      end
    else
      self[:finished] = true
    end
    self[:expired_at] = datetime
  end
end
I'm stuck on this problem. Your help will be appreciated, thank you in advance.
I'm using:
nginx: 1.8.0
Phusion Passenger: 5.0.10
rufus-scheduler: 3.1.3
I cannot solve your problem, because you're not describing what actually happens (or doesn't happen).
But, I can tell you that your code is poorly thought out.
Your expired_at= method, I guess, is a model method. So, do you realize that you are initializing a new rufus-scheduler instance each time expired_at= is called?
You'd better try:
def expired_at=(datetime)
  datetime = Time.zone.parse(datetime) \
    if datetime.class == String && !datetime.empty?
  if expired_at
    expired_at
  else
    if datetime > Time.zone.now
      begin
        Rufus::Scheduler.singleton.at datetime.strftime("%Y/%m/%d %H:%M") do
          self.update_attributes(:finished => true)
        end
      rescue => ex
        Rails.logger.info ex.message
        Rails.logger.info ex.backtrace
      end
    else
      self[:finished] = true
    end
    self[:expired_at] = datetime
  end
end
It relies on Rufus::Scheduler.singleton so you'll use a single rufus-scheduler instance. Another alternative would be to start the scheduler in an initializer and then leverage it from your model. Perhaps you'll be forced to do just that since Passenger probably won't keep a thread created by an incoming request around. You'll have to carefully test (and thus learn).
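A minimal sketch of the initializer alternative, assuming a single shared scheduler started at boot (the file name is only illustrative):
# config/initializers/rufus_scheduler.rb (illustrative)
require 'rufus-scheduler'

# One scheduler for the whole process, created when the app boots and
# reused everywhere instead of instantiating a new one per request.
SCHEDULER = Rufus::Scheduler.singleton
The model would then schedule against SCHEDULER (or keep using Rufus::Scheduler.singleton), so the scheduler thread is created once at boot rather than inside a request.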
But you could also change your way of thinking and have a single rufus-scheduler job that wakes up, say, twice a day, queries for models with an expired_at value, and sets finished to true where necessary. If you go that way, you could use Whenever: wrap your query+update in a Rake task and let Whenever (in fact your system's crond) call it twice a day for you (advantage: you don't have to tweak your Passenger settings).
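A rough sketch of that approach, assuming the model is called Item and has expired_at and finished columns (all names here are illustrative):
# lib/tasks/expire.rake (illustrative)
namespace :items do
  desc 'Mark items whose expired_at has passed as finished'
  task expire: :environment do
    Item.where(finished: false)
        .where('expired_at <= ?', Time.zone.now)
        .update_all(finished: true)
  end
end

# config/schedule.rb (Whenever)
every 12.hours do
  rake 'items:expire'
end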
Good luck!

Overriding Devise::RegistrationsController

So I am trying to override Devise::RegistrationsController, which they do have a wiki page for, plus tons of tutorials out there. The one thing I cannot find is the best way to override the controller while also implementing the "require admin approval" feature.
I think I've got the hang of it, but before I go any further (from all the reading of Devise's source code) I want to know: in the registrations controller there's a line that does:
resource.active_for_authentication?
However, on the Sessions controller it's just this:
def create
  self.resource = warden.authenticate!(auth_options)
  set_flash_message(:notice, :signed_in) if is_flashing_format?
  sign_in(resource_name, resource)
  yield resource if block_given?
  respond_with resource, location: after_sign_in_path_for(resource)
end
What I want to know is: if the user is not confirmed, or active_for_authentication? returns false, where or how does the sessions controller check this? I tried tracing back through the source code but had no luck.
So, anyone who's very familiar with Devise, perhaps you could answer my question? Thank you.
After authenticating a user and in each request, Devise checks if your model is active by calling model.active_for_authentication?. This method is overwritten by other devise modules. For instance, :confirmable overwrites .active_for_authentication? to only return true if your model was confirmed.
You can overwrite this method yourself, but if you do, don't forget to call super:
def active_for_authentication?
  super && special_condition_is_valid?
end
Whenever active_for_authentication? returns false, Devise asks the reason why your model is inactive using the inactive_message method. You can overwrite it as well:
def inactive_message
  special_condition_is_valid? ? super : :special_condition_is_not_valid
end

Sidekiq job execution context

I want to perform some tasks in the background, but with something like "run as". In other words, as if the task had been launched by the user from the context of his session.
Something like
def perform
  env['warden'].set_user(@task_owner_user)
  MyService::current_user_dependent_method
end
but I'm not sure it won't collide with other tasks. I'm not very familiar with Sidekiq.
Can I safely perform separate tasks, each with a different user context, somehow?
I'm not sure what you're shooting for with the "run as" context, but I've always set up Sidekiq jobs that need a particular object by passing its id to perform. This way the worker always knows which object it is working on. Maybe this is what you're looking for?
def perform(id)
  user = User.find(id)
  user.current_user_dependent_method
end
Then set up a route in a controller for triggering this worker, something like:
def custom_route_for_performing_job
  @users = User.where(your_conditions)
  @users.each do |user|
    YourWorker.perform_async user.id
  end
  redirect_to :back, notice: "Starting background job for users_dependent_method"
end
The proper design is to use a server-side middleware + a thread local variable to set the current user context per job.
class MyServerMiddleware
  def call(worker, message, queue)
    Thread.current[:current_user] = message['uid'] if message['uid']
    yield
  ensure
    Thread.current[:current_user] = nil
  end
end
You'd create a client-side middleware to capture the current uid and put it in the message. In this way, the logic is encapsulated distinctly from any one type of Worker. Read more:
https://github.com/mperham/sidekiq/wiki/Middleware
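A rough sketch of the client-side half, assuming the enqueuing code keeps the current user's id in a thread-local (all names here are illustrative):
class MyClientMiddleware
  def call(worker_class, message, queue, redis_pool)
    # Stamp every enqueued job with the current user's id so the server
    # middleware above can restore the context when the job runs.
    message['uid'] ||= Thread.current[:current_user_id]
    yield
  end
end

# e.g. in config/initializers/sidekiq.rb
Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add MyClientMiddleware
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add MyServerMiddleware
  end
end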

Django: Getting Django-cron Running

I am trying to get Django-cron running, but it seems to only run after hitting the site once. I am using Virtualenv.
Any ideas why it only runs once?
On my PATH, I added the location of django_cron: '/Users/emilepetrone/Workspace/zapgeo/zapgeo/django_cron'
My cron.py file within my Django app:
from django_cron import cronScheduler, Job
from products.views import index

class GetProducts(Job):
    run_every = 60

    def job(self):
        index()

cronScheduler.register(GetProducts)

class GetLocation(Job):
    run_every = 60

    def job(self):
        index()

cronScheduler.register(GetLocation)
The first possible reason
There is a variable in django_cron/base.py:
# how often to check if jobs are ready to be run (in seconds)
# in reality if you have a multithreaded server, it may get checked
# more often that this number suggests, so keep an eye on it...
# default value: 300 seconds == 5 min
polling_frequency = getattr(settings, "CRON_POLLING_FREQUENCY", 300)
So, the minimal interval of checking for time to start your task is polling_frequency. You can change it by setting in settings.py of your project:
CRON_POLLING_FREQUENCY = 100 # use your custom value in seconds here
To start a job, hit your server at least once after starting the Django web server.
The second possible reason
Your job has an error and it is not queued (the queued flag is set to 'f' if your job raises an exception). In this case the string value 'f' is stored in the 'queued' field of the 'django_cron_job' table. You can test this by making the query:
select queued from django_cron_job;
If you change the code of your job, the field may stay 'f'. So, once you have corrected the error in your job, you should manually set the queued field back to 't'. Alternatively, the executing flag in the django_cron_cron table may be 't'; this means your app server was stopped while your task was in progress. In this case you should manually set it back to 'f'.