iron.io / IronWorker scheduled task run_every is ignored (using the Ruby gem)

I'm using iron.io within a Rails app. Their IronCasts have been really good and it's pretty simple.
The weird behaviour is that I've set up a scheduled task during Rails startup (application.rb) with run_every set to 10 minutes (600 seconds), but the tasks are being kicked off every minute or so, almost as if they run as often as possible.
The tasks create and run. The first one connects to my Heroku DB, but subsequent tasks often fail (a connection issue, but I think this is because they are firing so quickly).
schedule = @iwclient.schedules.create("schedule", { :database => dbsettings }, { :run_every => 600 })
Have I missed something?
I have tried different timings; it does not seem to make a difference.
I've got debug output displaying the result of the task creation, e.g. puts schedule.inspect.to_s.
I'd welcome any guidance if anyone has had similar issues.
Ben
The DB settings below are confirmed as correctly populated via the IronWorker task's debug output; indeed, the task does execute and exit OK.
dbsettings = {
  :adapter  => "postgresql",
  :database => "xxx",
  :host     => "ec2-107-20-224-35.compute-1.amazonaws.com",
  :port     => 5432,
  :username => "xxxx",
  :password => "xxx",
  :sslmode  => "require"
}

Related

Firebase function connection with GCP Redis instance in the same VPC keeps on disconnecting

I am working on multiple Firebase Cloud Functions (all hosted in the same region) that connect to a GCP-hosted Redis instance in the same region, using a VPC connector and version 3.0.2 of the Node.js Redis library. In the Cloud Functions' debug logs I see frequent connection resets, triggered for each function with no fixed pattern in their timing, and each time the error captured in the error event handler is ECONNRESET. While creating the Redis client, I provided a retry_strategy to reconnect after 5 ms with a maximum of 10 attempts, along with retry_unfulfilled_commands set to true, expecting that any command unfulfilled at the time of a connection reset would be retried automatically (see the code below).
const redisLib = require('redis');

const client = redisLib.createClient(REDIS_PORT, REDIS_HOST, {
  enable_offline_queue: true,
  retry_unfulfilled_commands: true,
  retry_strategy: function(options) {
    if (options.error && options.error.code === "ECONNREFUSED") {
      // End reconnecting on a specific error and flush all commands
      // with an individual error
      return new Error("The server refused the connection");
    }
    if (options.attempt > REDIS_CONNECTION_RETRY_ATTEMPTS) {
      // End reconnecting with built-in error
      console.log('Connection retry count exceeded 10');
      return undefined;
    }
    // Reconnect after 5 ms
    console.log('Retrying connection after 5 ms');
    return 5;
  },
});
client.on('connect', () => {
  console.log('Redis instance connected');
});

client.on('error', (err) => {
  console.error(`Error connecting to Redis instance - ${err}`);
});

exports.getUserDataForId = (userId) => {
  console.log('getUserDataForId invoked');
  return new Promise((resolve, reject) => {
    if (!client.connected) {
      console.log('Redis instance not yet connected');
    }
    client.get(userId, (err, reply) => {
      if (err) {
        console.error(JSON.stringify(err));
        reject(err);
      } else {
        resolve(reply);
      }
    });
  });
};

// more such exports for different operations
Following are the questions / issues I am facing:
1. Why is the connection getting reset intermittently?
2. I have seen logs where, even while the cloud function is executing, the connection to the Redis server is lost, resulting in failure of the command.
3. With retry_unfulfilled_commands set to true, I hoped it would handle the scenario in point 2, but per the debug logs the cloud function times out instead. This is what I observed in the logs in that case:
getUserDataForId invoked
Retrying connection after 5 ms
Redis instance connected
Function execution took 60002 ms, finished with status: 'timeout' --> coming from the wrapper cloud function
4. Should I, instead of keeping a Redis connection instance at the global level, create a connection during each such Redis operation? That would likely have performance issues, as well as issues around the number of concurrent Redis connections (since I have multiple cloud functions, each simultaneous invocation would create its own connection), right?
So, how do I best handle this? I am facing all these issues during development itself, so I'm not sure whether it is a code issue or an infrastructure configuration issue.
This behavior could be caused by background activity. "Background activity is anything that happens after your function has terminated."
When background activity interferes with subsequent invocations in Cloud Functions, unexpected behavior and errors that are hard to diagnose may occur. Accessing the network after a function terminates usually leads to ECONNRESET errors.
To troubleshoot this, make sure there is no background activity by searching the logs for entries after the line saying that the invocation finished. Background activity can sometimes be buried deeper in the code, especially when asynchronous operations such as callbacks or timers are present. Review your code to make sure all asynchronous operations finish before the function terminates.
Source

This website is under heavy load (queue full)

I am having problems with my site on AWS; it shows me the following message: "This website is under heavy load (queue full)". From what I've read, this apparently happens when multiple users access the site at the same time. I tried to modify the queue size but could not find the right file. In the end I modified the file located at /usr/lib/ruby/vendor_ruby/phusion_passenger/nginx/config_options.rb, but I'm not sure that is where I should have made the change. Modified from this:
{
  :name => 'passenger_max_request_queue_size',
  :scope => :application,
  :type => :uinteger,
  :default => DEFAULT_MAX_REQUEST_QUEUE_SIZE
},
to this:
{
  :name => 'passenger_max_request_queue_size',
  :scope => :application,
  :type => :uinteger,
  :default => 2000
},
I restarted nginx with sudo service nginx restart, but when I try to access the site it loads for a long time and fails to show the content. My application is developed with Rails 5 and is hosted on AWS (on the basic plan), with Passenger and nginx.
My application was working correctly until today, when this problem appeared.
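For what it's worth, patching config_options.rb inside the gem will be overwritten by any gem update; the option is meant to be set in the nginx configuration itself. A sketch, assuming a typical sites-enabled virtual host (paths and server_name are placeholders):

```nginx
# /etc/nginx/sites-enabled/myapp -- adjust paths to your setup
server {
    listen 80;
    server_name example.com;
    root /var/www/myapp/public;
    passenger_enabled on;
    # Raise the request queue (Passenger's default is 100)
    passenger_max_request_queue_size 250;
}
```

Bear in mind that a full queue usually means the app cannot keep up with incoming traffic; raising the limit only delays the error, so it is also worth looking at passenger_max_pool_size and the instance's sizing.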

Chef service restart_command not running on AWS OpsWorks instance

We are facing a strange issue with a restart_command in a Chef service definition not being executed. We have a task in AWS OpsWorks that is executed with the following service definition:
service "celery-worker-1" do
  action [ :nothing ]
  supports :restart => true, :status => true
  retries 3
  restart_command 'sv force-stop celery-worker-1 ; sv start celery-worker-1'
  if node[:opsworks][:instance][:layers].include?('celery-worker')
    subscribes :restart, "deploy_revision[testapp]", :delayed
  end
end
This is then triggered at the end of the file via notifies:
elsif node[:opsworks][:instance][:layers].include?('celery-worker')
  notifies :restart, resources(:service => "celery-worker-0", :service => "celery-worker-1")
When this task is executed from OpsWorks, the logs show no errors or issues:
[2018-02-09T08:33:34+00:00] INFO: Processing service[celery-worker-0] action nothing (testapp::configure line 17)
[2018-02-09T08:33:34+00:00] INFO: Processing service[celery-worker-1] action nothing (testapp::configure line 27)
But when we check on the server itself, the celery workers were not restarted. Executing the command from restart_command manually on the server works without any issues. So it seems Chef is not executing this restart_command for some reason:
sv force-stop celery-worker-1 ; sv start celery-worker-1
Thanks in advance for the help.
That would mean that either node[:opsworks][:instance][:layers].include?('celery-worker') is false or deploy_revision[testapp] is not updating. For the latter, you can see it in the output: if you get something like deploy_revision[testapp] (up to date), then it isn't updating, so no notification fires. For the layers data, you would have to check manually.
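One more thing worth checking: resources(:service => "celery-worker-0", :service => "celery-worker-1") is a Ruby hash literal with a duplicate :service key, so only the last entry survives and celery-worker-0 is never looked up at all. A sketch of notifying each service separately (Chef recipe DSL, not standalone Ruby; names kept from the question):

```ruby
# One notifies per resource; a single resources() hash cannot hold
# two :service keys because the duplicate key collapses to the last.
elsif node[:opsworks][:instance][:layers].include?('celery-worker')
  notifies :restart, "service[celery-worker-0]", :delayed
  notifies :restart, "service[celery-worker-1]", :delayed
```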

Restarting an EC2 instance on the instance that is to be restarted

I'm running a loop in a PHP script on an AWS instance. From my experience with AWS, as soon as the instance is stopped, all of the code in the process of being executed stops too. What I have is this:
<?php
require("vendor/autoload.php");

use Aws\Ec2\Ec2Client;

$instance_id = 'instance_id';
$creds = array('key'    => 'key',
               'secret' => 'secret',
               'region' => 'us-west-2');
$client = Ec2Client::factory($creds);
$instance = array('InstanceIds' => array($instance_id), 'DryRun' => false);

for ($i = 0; $i < 10; $i++) {
    // Execute irrelevant code
    // .....
    $result = $client->stopInstances($instance);
    sleep(300);
    $result = $client->startInstances($instance);
}
?>
So, my question is this: Once the instance is stopped, everything that is written after that will not be executed since the instance will be stopped, right? The loop will not continue on to the next iteration, right? If so, then how could I get around that?
When you call the StopInstances API, EC2 will start shutting down your instance (and the OS inside the instance will kill running processes as part of that).
There's no guarantee exactly how long this will take, although in my experience you'll rarely get more than a couple of seconds, so that sleep(300) pretty much guarantees that the call to stopInstances is the last thing your code will do.
There's nothing you can do about this other than not stopping the instance you are running on. To that end, you can query the instance metadata service to find the ID of the instance running your code, by making a request to http://169.254.169.254/latest/meta-data/instance-id
You cannot start a stopped instance from that same instance. You can keep an additional (external) server, on EC2 or elsewhere, to control automatic shutdowns/startups.
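Since the answers converge on driving the stop/start from somewhere else, here is a minimal sketch of that external controller in Ruby (the thread's code is PHP, but the idea is the same; the client calls mirror the aws-sdk-ec2 gem, and the region and instance ID are placeholders):

```ruby
# Sketch of the external-controller approach. For real use you'd
# `require 'aws-sdk-ec2'` and pass an Aws::EC2::Client; here the
# client is just a parameter so the logic stands on its own.
def bounce_instance(client, instance_id, pause: 300)
  client.stop_instances(instance_ids: [instance_id])
  client.wait_until(:instance_stopped, instance_ids: [instance_id])
  sleep pause
  client.start_instances(instance_ids: [instance_id])
  client.wait_until(:instance_running, instance_ids: [instance_id])
end

# From a machine other than the target instance:
#   client = Aws::EC2::Client.new(region: 'us-west-2')
#   bounce_instance(client, 'i-0123456789abcdef0')
```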
To follow on from @TJ-'s answer...
You can wait until the instance is stopped and then continue with your code:
$client->waitUntil('InstanceStopped', array('InstanceIds' => array($instance_id)));
But you have to run this from a different instance than the one being stopped.

Background Job and Scheduling with Resque

I have a Ruby on Rails 4.0 and PostgreSQL app hosted on an Ubuntu VPS. In this application I want to send email based on data in the database: for example, a background job checks a table's contents every hour and, depending on the content, sends an email to the user or not. I decided to do this with Resque.
How can I do that?
Should I do it in the Rails app or in an independent service?
And how can I schedule this job?
There are a couple more options I'd advise you to try:
1. Cron: one of the most common approaches for any Unix developer to run a task on an interval.
FYI: if you have trouble understanding cron settings, there is a gem that handles them for you, called whenever.
2. Resque-Scheduler: you may have missed the Resque plugin that provides exactly the feature you need; it's called resque-scheduler. It too provides cron-like settings for you to work with.
Please check the links above for more info.
Hope this helps.
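As a sketch of the resque-scheduler route, a schedule file might look like the following (the job class, queue name, and file name are placeholders, not from the question):

```yaml
# resque_schedule.yml, loaded via Resque.schedule = YAML.load_file(...)
check_table_and_email:
  cron: "0 * * * *"          # top of every hour
  class: "CheckTableJob"     # hypothetical Resque job with a perform method
  queue: mailers
  description: "Hourly check of the table; emails users if needed"
```

The job class itself is a plain Resque worker; resque-scheduler only takes care of enqueuing it on the cron schedule.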
I did not end up using Resque, because all I want is a process on the Ubuntu server that runs on a schedule (hourly): for example, check the table contents every hour and send an alarm email to the users.
I built the process with Process.daemon and rufus-scheduler for the scheduling.
require 'rufus-scheduler'

class TaskTest
  def task
    scheduler = Rufus::Scheduler.new
    scheduler.every '1h' do
      msg = "Message"
      mailer = MailerProcess.new
      mailer.send_mail('email-address', 'password', 'to-email', 'Subject', msg)
      puts Time.now
    end
    scheduler.join
  end
end

Process.daemon(true)

task_test = TaskTest.new
pid = Process.fork do
  task_test.task
end