How to force Mongoid to refresh query - ruby-on-rails-4

A background job polls the devices table in a Mongoid-mapped database in a Rails 4.0.2 application like this:
while go
  Device.all.each do |device|
    # do something
  end
end
I can verify that it loads the first set correctly. However, it doesn't refresh the set if new devices are added to the database.
The identity map is off (it was removed in the current Mongoid version).

This does the trick:
Mongoid::QueryCache.enabled = false
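To show where that line would sit in the polling job above, here is a minimal sketch; the sleep interval is just an illustrative assumption, and go stands for whatever loop condition the job already uses:
# disable Mongoid's query cache so each iteration re-reads the devices collection
Mongoid::QueryCache.enabled = false

while go
  Device.all.each do |device|
    # do something with each device
  end
  sleep 5 # hypothetical polling interval
end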

Related

PostgreSQL gets updated by SQL script, get notification in backend

I'm using django-rest-framework as the backend and PostgreSQL as the database. The database might be changed by a raw SQL script, and I want to get notified in the backend when those changes happen so that I can notify different users about the change.
I've checked posts like https://gist.github.com/pkese/2790749 for receiving notifications in Python, and some SQL scripts such as:
CREATE TRIGGER rec_notify_trig AFTER INSERT OR UPDATE OR DELETE ON rec
  FOR EACH ROW EXECUTE PROCEDURE rec_notify_func();
My question is that I don't know how to hook them together in django-rest-framework: where should I put the SQL script, and where should the Python setup go so that I can connect them? Any advice will be appreciated.
I would create an endpoint on the django-rest-framework side to accept a notification.
Then, in your rec_notify_func() you can call out and hit your endpoint, where you can perform any end-user notification necessary.
CREATE EXTENSION plpython3u;

-- Calls the backend notification endpoint and returns the response body.
-- (A function used directly as the trigger procedure must take no arguments
--  and be declared RETURNS trigger; it would perform this same call-out.)
CREATE FUNCTION rec_notify_func(notification_endpoint_uri text) RETURNS text AS $$
    from urllib.request import urlopen
    data = urlopen(notification_endpoint_uri)
    return data.read()
$$ LANGUAGE plpython3u;
NOTE:
You need to have plpython installed on the system in order to enable the extension.
On Ubuntu, something like this:
sudo apt-get install postgresql-plpython3-9.6

WSO2 Siddhi RDBMS Store Extension - how to set batchEnable to false

I'm using Siddhi to create an app which also interacts with a PostgreSQL DB. Although I'm not sure, I believe there is a bug with making multiple updates on the same PG table within a single event (i.e. upon receiving an event, update a record in the table and create another one again in the same table); it seems the batch updates are causing some problems. So, I just want to give it a try after disabling batchEnable (it is enabled by default). I just don't know how to configure it using the siddhi-sdk (via the IntelliJ plugin). There are two related tickets:
https://github.com/wso2-extensions/siddhi-store-rdbms/issues/43
https://github.com/wso2/product-sp/issues/472
Until these are documented, I'd like to get a quick response on how to set these fields.
Best regards...
When batchEnable is set to true, the insert/update operations are performed on a batch of events instead of on each and every single event. Simply, this has been introduced to improve performance.
The default value of this parameter is currently set to "true".
However, the batchEnable configuration is done through a system parameter called "{{RDBMS-Name}}.batchEnable", which has to be configured in the WSO2 Stream Processor's deployment.yaml.
If you want to override this property in Product-SP, please find the steps below.
Open the deployment.yaml file located in {Product-SP-Home}/conf/editor/
Insert the following lines in the file.
siddhi:
  extensions:
    extension:
      name: store
      namespace: rdbms
      properties:
        PostgreSQL.batchEnable: true
But currently there is no way to overwrite those system configurations from the Siddhi app level. Since you are using the SDK, what you can do is change the default value of the above parameter to "false".
Please find the steps below to do it.
Find the siddhi-store-rdbms-4.x.xx.jar file in the siddhi-sdk. It is located in {siddhi-sdk-home}/lib/.
Open the jar file using an archive manager and open the rdbms-table-config.xml file located inside it with a text editor.
Set false in the <batchEnable>true</batchEnable> element under the <database name="PostgreSQL"> tag and save it.
Thanks Raveen. With a simple dash (-) before "extension" I was able to set the config:
siddhi:
  extensions:
    - extension:
        name: store
        namespace: rdbms
        properties:
          PostgreSQL.batchEnable: false

Refresh Sitecore index to include CDs

I've written some code to refresh an index when an item is programmatically added to Sitecore. Now, as the live system is made up of 1 CM and 2 CD servers, I need my code to also trigger the index to be refreshed on the CD servers (unfortunately my dev machine is just a single box, so I can't test this fully). I've looked online but can't find anything about this when triggering a re-index programmatically.
So the question is: do I need to write code for this, or does Sitecore do it by default? And if I do need to write code, does anyone have ideas on how I would go about it? My current code is below.
ISearchIndex index = ContentSearchManager.GetIndex("GeorgeDrexler_web_index");
Sitecore.Data.Database database = Sitecore.Configuration.Factory.GetDatabase("web");
Item item = database.GetItem("/sitecore/content/GeorgeDrexler/Global/Applications");
index.Refresh(new SitecoreIndexableItem(item));
My config for the index has the remoteRebuild strategy enabled:
<strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/remoteRebuild" />
As @Hishaam Namooya pointed out in his comment, publishing from master to web should trigger the web index updates out of the box, unless you've disabled something in the configuration.
Note that items won't publish unless they are in a final workflow state, so if you want a completely automated process that creates the item, updates the local index, and then immediately updates the web index, you will also need to update the workflow state to your final approved state and then trigger a publish of the item.

Elasticsearch self.published?

I am using the elasticsearch-rails gem. For my site I need to create custom callbacks: https://github.com/elastic/elasticsearch-rails/tree/master/elasticsearch-model#custom-callbacks
But I'm really confused by one thing. What does if self.published? mean in this code?
I tried to use this for my models:
after_commit on: [:update] do
  place.__elasticsearch__.update_document if self.published?
end
but for my model in the console I see self.published? => false, and I don't know what this means.
From the documentation of elasticsearch-rails:
For ActiveRecord-based models, use the after_commit callback to protect your data against inconsistencies caused by transaction rollbacks:
I think it is used to make sure everything has been updated successfully in the database before we sync to the Elasticsearch server.
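To make the callback concrete: published? is not provided by the gem; it is simply a predicate on your own model (the README example assumes the record has a published flag). A minimal sketch, assuming a Place model backed by a published boolean column (the model and column names are assumptions for illustration):
class Place < ActiveRecord::Base
  include Elasticsearch::Model

  # ActiveRecord generates published? for the boolean `published` column;
  # it returns false until the record is actually marked as published.
  after_commit on: [:update] do
    __elasticsearch__.update_document if self.published?
  end
end
So seeing self.published? => false in the console just means that particular record has not been marked as published, and therefore the update is not pushed to Elasticsearch.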

Rails 4: run migrations as separate DB user

The situation I have is that our normal Rails DB user has full ownership in order to run migrations.
However, we use a shared DB for development, so we can't run "destructive" DB tasks against the development DB, such as rake db:drop/reset/etc....
My thought is to create 2 DB users:
rails-service
rails-migrator
The service user is the "normal" web app user that connects to the DB when the app is live. This DB user would only have standard CRUD privileges but no dropping rights.
The migrator user is the "admin" user that is only used for running migrations. This DB user would have normal "full" access to the DB such that it "could" drop the DB if that command were executed.
Question: Is there a clean way to tell Rails migrations to run as the rails-migrator user? I'm not sure how I would accomplish this aside from somehow altering the connection strings for every rails migration file, which seems like a bad idea.
In tandem with the above, I'm going to "delete" the destructive rake tasks so that a developer can't even run them.
# lib/tasks/db.rake
# See: https://coderwall.com/p/jt4e1q/disable-destructive-rake-tasks-by-environment
tasks = Rake.application.instance_variable_get '@tasks'
tasks.delete 'db:reset'
tasks.delete 'db:drop'

namespace :db do
  desc 'db:reset not available in this environment'
  task :reset do
    puts 'db:reset has been disabled'
  end

  desc 'db:drop not available in this environment'
  task :drop do
    puts 'db:drop has been disabled'
  end
end
I refer you to the answer of Matthew Rudy Jacobs from 2007 (!): https://www.ruby-forum.com/topic/123618
Luckily it still works now :)
I just changed the defined? check and the rest to ENV['AS_DB_ADMIN'] and used it to separate migration access to another user.
On migration I used:
set :default_env, { as_db_admin: true }
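For completeness, here is a minimal sketch of how that environment flag can drive a separate migration task. The rake task name is hypothetical, and it assumes config/database.yml selects its credentials with ERB along the lines of username: <%= ENV['AS_DB_ADMIN'] ? 'rails-migrator' : 'rails-service' %>, mirroring the linked answer:
# lib/tasks/migrate_as_admin.rake (hypothetical file and task name)
namespace :db do
  desc 'Run migrations as the privileged rails-migrator DB user'
  task :migrate_as_admin do
    # Set the flag before the Rails environment boots so database.yml
    # picks the rails-migrator credentials for this process only.
    ENV['AS_DB_ADMIN'] = 'true'
    Rake::Task['db:migrate'].invoke
  end
end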