Laravel 4.2 AWS SQS queue setup using EB worker environment

I'm trying to set up a Laravel 4.2 queue using AWS SQS and an Elastic Beanstalk worker environment. I push jobs onto the queue from another server and want the worker environment to execute them, but so far it looks like the worker tries to execute each job and, for some reason, gets a 405 error in the access log...
I'm testing with very simple code. The worker environment runs a pretty much clean Laravel installation, just with the queue config and this class:
class TestQueue {
    public function fire($job, $data)
    {
        // Record the pushed date, then remove the job from the queue.
        File::append(storage_path().'/sqs_push.txt', $data['date']);
        $job->delete();
    }
}
Now on the main server, from where I want to push, I have this:
public function getTestQueue(){
    $data = ['date' => date('Y-m-d H:i:s')];
    $queue = \Queue::push('TestQueue', $data);
    var_dump($queue);
}
On the worker I have launched:
php artisan queue:listen
When I run that method, the job is added to the SQS queue (I can see it in the SQS console) and the worker tries to execute it, but all I see are 405 errors in the access logs...
Am I doing something wrong in my queue setup? Can anyone help me, please?

Error 405 stands for "Method Not Allowed": the endpoint rejects the HTTP method used against it. In an Elastic Beanstalk worker environment, the SQS daemon (sqsd) pulls messages off the queue and POSTs them to your application over HTTP, so a 405 typically means the path it posts to does not accept POST requests. Since you have mentioned that the main server successfully sends the messages to SQS (you have verified it via the console), I will provide a solution that implements a polling worker instead. It was taken from this repository on GitHub; have a look at the worker.php file.
$queue = new Queue(QUEUE_NAME, unserialize(AWS_CREDENTIALS));

// Continuously poll the queue for new messages and process them.
while (true) {
    $message = $queue->receive();
    if ($message) {
        try {
            $message->process();
            $queue->delete($message);
        } catch (Exception $e) {
            // Return the message to the queue so it can be retried.
            $queue->release($message);
            echo $e->getMessage();
        }
    } else {
        // Wait 20 seconds if no jobs are in the queue to minimise requests to the AWS API.
        sleep(20);
    }
}
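For completeness: the push model that the worker tier itself provides can also work, but the endpoint sqsd posts to must accept POST. A minimal sketch for the Laravel 4.2 app above, assuming the default sqsd HTTP path of "/" and the payload shape produced by Queue::push (the route body is illustrative, not taken from the question):

Route::post('/', function () {
    // sqsd delivers the raw SQS message body as the HTTP request content.
    $payload = json_decode(Request::getContent(), true);

    // Queue::push('TestQueue', $data) wraps the data as {"job": ..., "data": {...}}.
    File::append(storage_path().'/sqs_push.txt', $payload['data']['date']);

    // A 200 response tells sqsd the message was processed and can be deleted.
    return Response::make('OK', 200);
});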

Related

AWS SQS pause consumer

Let's say that for some reason I want to pause consuming messages from my SQS queue, e.g. a service on the client side will be down for maintenance. Can I pause from my SQS listener?
@SqsListener(value = "${aws.sqs.listener.myqueue}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void processMessage(MyObj myObj) {
    // do something with myObj
    // if the db is down, throw an error and pause reading from SQS
}
I have run across some posts that suggest a kill switch, but that is not ideal for production, as it keeps consuming the messages and reposting them to the queue.
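One approach, sketched below under the assumption that Spring Cloud AWS is in use: @SqsListener methods run on a SimpleMessageListenerContainer, which can be stopped and restarted per logical queue name, pausing consumption without the consume-and-repost churn of a kill switch (the bean wiring and the literal queue name here are assumptions):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer;
import org.springframework.stereotype.Service;

@Service
public class QueuePauseService {

    // Container that Spring Cloud AWS uses to run @SqsListener methods.
    @Autowired
    private SimpleMessageListenerContainer listenerContainer;

    // Stop polling the queue, e.g. before client-side maintenance.
    public void pause() {
        listenerContainer.stop("myqueue");
    }

    // Resume polling once the downstream service is healthy again.
    public void resume() {
        listenerContainer.start("myqueue");
    }
}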

Google GCP cloud run redis client loses connection to the instance

I am running my Node.js application on Google Cloud Run. My application connects to Google Memorystore Redis. Every few minutes I get the following error:
Error: read ECONNRESET
followed by:
AbortError: Redis connection lost and command aborted. It might have been processed.
Please help, what am I missing?
My Node.js code:
const redis = require('redis');
const redisClient = redis.createClient({host: 'xxx', port: 6379});
redisClient.on('error', function (err) {
    console.log(err);
});
const data = await redisClient.getExAsync('key');
Use "setInterval" function in order to invoke Redis operation every minute.
function RedisKA() {
    // A cheap periodic read keeps the connection active.
    redisClient.get("key2", (err, reply) => {
        console.log('redis keep-alive reply:', reply);
    });
}
let updateIntervalId = setInterval(RedisKA, 60000); // once per minute
If you want to avoid the request timeout on the Cloud Run side, which is 5 minutes by default, then choose the interval based on your requirement.
The issue may be caused by a socket timeout, which is expected when there is no activity on the connection for a period of time.
It can be avoided by periodically executing any command on the connection, for example one command per minute, so that the socket stays alive and the connection is not aborted.
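Possibly also worth a try (an assumption about node_redis client options, not something verified against your setup): the client exposes a TCP-level socket_keepalive option that may reduce idle-socket drops in addition to the periodic command above:

const redis = require('redis');
const redisClient = redis.createClient({
    host: 'xxx',            // placeholder host, as in the question
    port: 6379,
    socket_keepalive: true  // ask the OS to send TCP keepalive probes on the idle socket
});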

Firebase function connection with GCP Redis instance in the same VPC keeps on disconnecting

I am working on multiple Firebase Cloud Functions (all hosted in the same region) that connect to a GCP-hosted Redis instance in the same region, using a VPC connector. I am using version 3.0.2 of the Node.js library for Redis. In the cloud functions' debug logs, I am seeing frequent connection-reset logs, triggered for each cloud function with no fixed pattern in the timing of the resets, and each time the error captured in the error event handler is ECONNRESET. While creating the Redis client, I have provided a retry_strategy to reconnect after 5 ms with a maximum of 10 such attempts, along with retry_unfulfilled_commands set to true, expecting that any command unfulfilled at the time of a connection reset will be retried automatically (refer to the code below).
const redisLib = require('redis');

const client = redisLib.createClient(REDIS_PORT, REDIS_HOST, {
    enable_offline_queue: true,
    retry_unfulfilled_commands: true,
    retry_strategy: function(options) {
        if (options.error && options.error.code === "ECONNREFUSED") {
            // End reconnecting on a specific error and flush all commands
            // with an individual error.
            return new Error("The server refused the connection");
        }
        if (options.attempt > REDIS_CONNECTION_RETRY_ATTEMPTS) {
            // End reconnecting with the built-in error.
            console.log('Connection retry count exceeded 10');
            return undefined;
        }
        // Reconnect after 5 ms.
        console.log('Retrying connection after 5 ms');
        return 5;
    },
});
client.on('connect', () => {
    console.log('Redis instance connected');
});
client.on('error', (err) => {
    console.error(`Error connecting to Redis instance - ${err}`);
});

exports.getUserDataForId = (userId) => {
    console.log('getUserDataForId invoked');
    return new Promise((resolve, reject) => {
        if (!client.connected) {
            console.log('Redis instance not yet connected');
        }
        client.get(userId, (err, reply) => {
            if (err) {
                console.error(JSON.stringify(err));
                reject(err);
            } else {
                resolve(reply);
            }
        });
    });
}
// more such exports for different operations
Following are the questions / issues I am facing:
1. Why is the connection getting reset intermittently?
2. I have seen in the logs that even while a cloud function is executing, the connection to the Redis server is lost, resulting in failure of the command.
3. With retry_unfulfilled_commands set to true, I hoped it would handle the scenario in point 2 above, but as per the debug logs, the cloud function times out in that scenario. This is what I observed in the logs in that case:
getUserDataForId invoked
Retrying connection after 5 ms
Redis instance connected
Function execution took 60002 ms, finished with status: 'timeout' --> coming from the wrapper cloud function
4. Should I, instead of having a Redis connection instance at the global level, create a connection during each Redis operation? That might have performance issues, as well as issues around the number of concurrent Redis connections (since I have multiple cloud functions, all of which will be creating Redis connections for each simultaneous invocation), right?
So, how do I best handle this? I am facing all these issues during development itself, so I am not sure whether it is a code issue or an infrastructure configuration issue.
This behavior could be caused by background activity.
"Background activity is anything that happens after your function has terminated."
When background activity interferes with subsequent invocations in Cloud Functions, unexpected behavior and errors that are hard to diagnose may occur. Accessing the network after a function terminates usually leads to "ECONNRESET" errors.
To troubleshoot this, make sure there is no background activity by searching the logs for entries after the line saying that the invocation finished. Background activity can sometimes be buried deeper in the code, especially where asynchronous operations such as callbacks or timers are present. Review your code to make sure all asynchronous operations finish before you terminate the function; a sketch of what that looks like follows below.
Source
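As an illustration of that last point, a minimal sketch of an HTTP function that returns only after the Redis call has resolved, so no network activity outlives the invocation (the firebase-functions wiring and the handler name are assumptions, not code from the question):

const functions = require('firebase-functions');

// Hypothetical HTTP entry point wrapping the question's getUserDataForId helper.
exports.getUserData = functions.https.onRequest(async (req, res) => {
    try {
        // Awaiting here keeps the Redis read inside the invocation, so no
        // network activity happens after the function terminates.
        const userData = await exports.getUserDataForId(req.query.userId);
        res.json({ userData });
    } catch (err) {
        res.status(500).send(String(err));
    }
});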

The Azure continuous WebJob is running, but sometimes it is stopped/restarted unexpectedly

Our application uses a WebJob to generate data. At the moment we are facing a problem: the WebJob is sometimes stopped/restarted unexpectedly while it is processing the message queue. The WebJob doesn't know when it is being forcibly restarted/stopped, so it cannot mark which data has been processed before the restart/stop happens.
Is there any way to get a stopping/restarting notification so we can synchronize the data?
Many thanks!
If you're using queues, a restarting WebJob shouldn't cause any data loss: since the message will not be completed, it will be put back on the queue for (re)processing.
As far as the restarting goes: make sure you don't have any scenarios in code that break the WebJob completely.
Add Application Insights and add an alert for the specific case you're looking for. See Set Alerts in Application Insights.
Sometimes WebJobs get killed by scale-in procedures. You can make sure they have a graceful death by listening for the shutdown notification using the class Microsoft.Azure.WebJobs.WebJobsShutdownWatcher from the NuGet package Microsoft.Azure.WebJobs.
As of version 1.1.2 of the NuGet package:
public sealed class WebJobsShutdownWatcher : IDisposable
{
    // Begin watching for a shutdown notification from Antares.
    public WebJobsShutdownWatcher();

    // Get a CancellationToken that is signaled when the shutdown notification is detected.
    public CancellationToken Token { get; }

    // Stop watching for the shutdown notification.
    public void Dispose();
}
A way to use this: in your WebJob's Program.cs you get a cancellation token and register the code you want executed when shutdown happens.
private static void Main()
{
    ...
    var cancellationToken = new WebJobsShutdownWatcher().Token;
    ...
    cancellationToken.Register(() =>
    {
        // Your data operations here
    });
    ...
}
Thanks Diana for your information. I tried this approach, but it did not work very well: the WebJob just waits for 5 seconds before restarting/stopping, although I set 60 seconds in the settings.job file. Here is my code:
static void Main()
{
    var config = new JobHostConfiguration();
    var host = new JobHost(config); // pass the configuration to the host

    var cancellationToken = new WebJobsShutdownWatcher().Token;
    cancellationToken.Register(() =>
    {
        // Raise the signal
    });

    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
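For reference, the graceful-shutdown window being discussed lives in a settings.job file deployed next to the WebJob binaries; a minimal sketch with the 60-second value mentioned above (stopping_wait_time is in seconds):

{
    "stopping_wait_time": 60
}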

How to apply Singleton Attribute for NonTriggered method in Azure Webjobs

If I apply the [Singleton] and [NoAutomaticTrigger] attributes and publish the WebJob, it goes into the pending restart state.
We want to solve a multiple-instance issue that occurs in one method.
Please help.
it goes to pending restart state.
In your case, you need to check the reason why the WebJob goes into the pending restart state. There are many possible reasons: it may be due to an issue, or the WebJob thread finished and needs to restart. You can check it in the WebJob log.
Before publishing to Azure, make sure the job works correctly locally, and add the app settings AzureWebJobsDashboard and AzureWebJobsStorage with a storage connection string; then you can get the WebJob log from the WebJob dashboard.
If you publish it as a continuous WebJob and the method runs to completion, the status will become pending restart. That is normal behavior.
The [Singleton] and [NoAutomaticTrigger] attributes work correctly; please refer to the following demo code.
static void Main()
{
    JobHost host = new JobHost();
    host.Call(typeof(Functions).GetMethod("CreateQueueMessage"), new { value = "Hello world!" + Guid.NewGuid() });
}

[Singleton]
[NoAutomaticTrigger]
public static void CreateQueueMessage(TextWriter logger, string value, [Queue("outputqueue")] out string message)
{
    message = value;
    logger.WriteLine("Creating queue message: {0}", message);
    Console.WriteLine(message);
}