GCP Pub/Sub Push Subscription

I am working with GCP Pub/Sub from within a Cloud Run service; per the documentation, I am required to use a push subscription.
The push endpoint is configured and working. I was testing the error case on the push endpoint to make sure that Pub/Sub would not retry delivery in an infinite loop.
After following the documentation, I have been unable to get dead-letter queues working with the push subscription.
How can I configure a fixed number of retries with a push subscription? Code below.
const { PubSub } = require('@google-cloud/pubsub');

const pubSubClient = new PubSub();

async function pubsubInit() {
  // Create the main topic and the dead-letter topic.
  await pubSubClient.createTopic(config.asyncTasks.topic);
  await pubSubClient.createTopic(config.asyncTasks.deadletter);

  // Push subscription on the main topic, with a dead-letter policy
  // intended to stop delivery after 5 attempts.
  await pubSubClient.createSubscription(config.asyncTasks.topic, config.asyncTasks.sub, {
    deadLetterPolicy: {
      deadLetterTopic: config.asyncTasks.deadletter,
      maxDeliveryAttempts: 5,
    },
    pushEndpoint: config.asyncTasks.endpoint,
  });

  // Push subscription that drains the dead-letter topic.
  await pubSubClient.createSubscription(config.asyncTasks.deadletter, config.asyncTasks.deadletterSub, {
    pushEndpoint: config.asyncTasks.deadEndpoint,
  });
}
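One likely culprit (an assumption on my part, not something stated above): dead lettering needs setup beyond the subscription options. The API expects deadLetterTopic as a fully qualified name (projects/PROJECT/topics/TOPIC), and the Pub/Sub service agent needs roles/pubsub.publisher on the dead-letter topic plus roles/pubsub.subscriber on the source subscription; without those grants, the dead-letter policy is not applied and retries continue. A minimal sketch of granting the roles with the Node.js client, where projectNumber is a hypothetical placeholder for your numeric project number and config mirrors the question's code:

const { PubSub } = require('@google-cloud/pubsub');

// Hypothetical numeric project number; substitute your own.
const projectNumber = '123456789012';
const pubsubServiceAgent =
  `serviceAccount:service-${projectNumber}@gcp-sa-pubsub.iam.gserviceaccount.com`;

async function grantDeadLetterRoles(pubSubClient) {
  // The service agent must be allowed to publish to the dead-letter topic.
  const topicIam = pubSubClient.topic(config.asyncTasks.deadletter).iam;
  const [topicPolicy] = await topicIam.getPolicy();
  topicPolicy.bindings = topicPolicy.bindings || [];
  topicPolicy.bindings.push({
    role: 'roles/pubsub.publisher',
    members: [pubsubServiceAgent],
  });
  await topicIam.setPolicy(topicPolicy);

  // ...and to subscribe on the source subscription so it can forward
  // messages whose delivery attempts are exhausted.
  const subIam = pubSubClient.subscription(config.asyncTasks.sub).iam;
  const [subPolicy] = await subIam.getPolicy();
  subPolicy.bindings = subPolicy.bindings || [];
  subPolicy.bindings.push({
    role: 'roles/pubsub.subscriber',
    members: [pubsubServiceAgent],
  });
  await subIam.setPolicy(subPolicy);
}

With those grants in place, maxDeliveryAttempts: 5 caps the retries, after which the message lands on the dead-letter topic and is pushed to deadEndpoint.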

Related

Do Google Cloud Tasks Delete themselves once they have executed?

In my application, I have implemented Google Cloud Tasks so that my users can receive notifications when their ToDo item is due.
My main issue is that when my Cloud Task fires, I still see it in my Cloud Tasks console. So, do they delete themselves once they are fired? For my application, I want the cloud tasks to delete themselves once they are done.
I noticed this line in the documentation: "you can also fine-tune the configuration for the task, like scheduling a time in the future when it should be executed or limiting the number of times you want the task to be retried if it fails." The thing is, my task is not failing, and yet I see the number of retries at 4.
Firebase Cloud Functions:
exports.firestoreTtlCallback = functions.https.onRequest(async (req, res) => {
  try {
    const payload = req.body;
    const entry = (await admin.firestore().doc(payload.docPath).get()).data();
    const tokens = (await admin.firestore().doc(`/users/${payload.uid}`).get()).get('tokens');
    await admin.messaging().sendMulticast({
      tokens,
      notification: {
        title: "App",
        body: entry['text']
      }
    }).then((response) => {
      log('Successfully sent message:');
      log(response);
    }).catch((error) => {
      log('Error in sending Message');
      log(error);
    });
    const taskClient = new CloudTasksClient();
    // Note: the original was missing await and .data() here, so
    // expirationTask would have been undefined.
    const { expirationTask } = (await admin.firestore().doc(payload.docPath).get()).data();
    await taskClient.deleteTask({ name: expirationTask });
    await admin.firestore().doc(payload.docPath).update({ expirationTask: admin.firestore.FieldValue.delete() });
    // res.status(200) alone never sends the response; without send()/end(),
    // Cloud Tasks sees a timeout instead of a 2xx and retries the task.
    res.status(200).send();
  } catch (err) {
    log(err);
    res.status(500).send(err);
  }
});
A task can be deleted if it is scheduled or dispatched. A task cannot be deleted if it has completed successfully or permanently failed, according to this documentation.
The task attempt has succeeded if the app's request handler returns an HTTP response code in the range [200 - 299].
The task attempt has failed if the app's handler returns a non-2xx response code, or if Cloud Tasks does not receive a response before the deadline, which is:
For HTTP tasks, 10 minutes. The deadline must be in the interval [15 seconds, 30 minutes].
For App Engine tasks, 0 indicates that the request has the default deadline. The default deadline depends on the scaling type of the service: 10 minutes for standard apps with automatic scaling, 24 hours for standard apps with manual and basic scaling, and 60 minutes for flex apps.
Failed tasks will be retried according to the retry configuration. Check your queue.yaml file for the retry configuration that is set; if you want to choose these values yourself, follow this.
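If the queue is created programmatically rather than via queue.yaml, the same retry limits can be set at queue-creation time with the Node.js Cloud Tasks client. A sketch, with the project, location, and queue names as hypothetical placeholders:

const { CloudTasksClient } = require('@google-cloud/tasks');

async function createQueueWithRetryLimits() {
  const client = new CloudTasksClient();
  // Hypothetical project/location/queue names; substitute your own.
  const parent = client.locationPath('my-project', 'us-central1');
  const [queue] = await client.createQueue({
    parent,
    queue: {
      name: client.queuePath('my-project', 'us-central1', 'todo-reminders'),
      retryConfig: {
        maxAttempts: 3,                      // first attempt plus 2 retries
        maxRetryDuration: { seconds: 3600 }, // give up entirely after an hour
        minBackoff: { seconds: 10 },
        maxBackoff: { seconds: 300 },
      },
    },
  });
  console.log(`Created queue ${queue.name}`);
}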
The task will be pushed to the worker as an HTTP request. If the worker or the redirected worker acknowledges the task by returning a successful HTTP response code ([200 - 299]), the task will be removed from the queue, as per this documentation. If any other HTTP response code is returned, or no response is received, the task will be retried according to the following:
User-specified throttling: retry configuration, rate limits, and the queue's state.
System throttling: to prevent the worker from overloading, Cloud Tasks may temporarily reduce the queue's effective rate. User-specified settings will not be changed.

Using AWS SDK functions in Lambda

I would like to use AWS Lambda to restart an app server in an Elastic Beanstalk instance.
After extensive googling and not finding anything, I finally found an article (which I have since lost) describing how to do what I want. I used it to set up a Lambda function and an EventBridge cron schedule to run the function. Looking at the CloudWatch logs for the EventBridge rule, I can see the function is running successfully. However, it is definitely not restarting my app server. When I use the console button to restart, the data I pulled in with another cron job gets updated and I can see the date on my main page change. This does not happen when the Lambda function runs, though it should.
Here is my function:
const AWS = require('aws-sdk');
const eb = new AWS.ElasticBeanstalk({ apiVersion: '2010-12-01' });

exports.handler = async (event) => {
  const params = {
    EnvironmentName: 'my-prod-environment'
  };
  eb.restartAppServer(params, function(err, data) {
    if (err) {
      console.log(err, err.stack);
      return err;
    } else {
      console.log(data);
      return data;
    }
  });
};
The logs, unfortunately, tell me nothing. Despite the console.log statement, no error or data appears in my logs, so I don't know if the function is even completing. Does anyone have any idea why this won't work?
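One plausible explanation, offered as an assumption rather than a confirmed answer: the handler is declared async but never awaits the SDK call, so the invocation can complete (and the execution environment be frozen) before restartAppServer runs its callback, which would also explain why nothing reaches the logs. A sketch of the same function using the SDK v2 .promise() form so the call is awaited:

const AWS = require('aws-sdk');
const eb = new AWS.ElasticBeanstalk({ apiVersion: '2010-12-01' });

exports.handler = async (event) => {
  const params = { EnvironmentName: 'my-prod-environment' };
  // Awaiting the promise keeps the invocation alive until the API call
  // finishes, so errors and data actually reach CloudWatch Logs.
  const data = await eb.restartAppServer(params).promise();
  console.log(data);
  return data;
};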

Google Cloud Storage function sending already-delivered messages when subscriber connects

I have a bucket in Google Cloud Storage. I also have a storage function that gets triggered every time a new file/folder is created in this bucket. The idea of this function is to publish to a Google Pub/Sub topic the files that were created under the "monitoring" folder: it is triggered whenever there is a new file, but only sends the message to Pub/Sub if the file was created under the mentioned folder. Besides this, I have a Java application subscribed to the Pub/Sub topic receiving these messages. It is able to receive messages without any issues, but when I shut the application down and launch it again, after some minutes the messages that were delivered previously come again. I checked the logs to see if the storage function was triggered, but that is not the case, and it seems that no message was sent to Pub/Sub again. All messages were acked and Pub/Sub was empty. Am I missing something related to the storage function or Pub/Sub?
This is my storage function definition:
const { PubSub } = require('@google-cloud/pubsub');

const topicName = 'test-topic-1';
const monitoringFolder = 'monitoring/';

exports.handler = (event, context) => {
  console.log(event);
  // Only forward objects created under the monitoring folder.
  if (isMonitoringFolder(event.name)) {
    publishEvent(event);
  }
};

const publishEvent = (event) => {
  const pubSub = new PubSub();
  const payload = {
    bucket: event.bucket,
    filePath: event.name,
    timeCreated: event.timeCreated
  };
  const data = Buffer.from(JSON.stringify(payload));
  pubSub
    .topic(topicName)
    .publish(data)
    .then(id => console.log(`${payload.filePath} was added to pubSub with id: ${id}`))
    .catch(err => console.log(err));
};

const isMonitoringFolder = filePath => filePath.search(monitoringFolder) !== -1;
I would really appreciate any advice.

Pub/Sub doesn't guarantee a single occurrence of the message.
Google Cloud Pub/Sub has an at-least-once delivery policy: it delivers each published message at least once for every subscription, but a message can be delivered multiple times.
https://cloud.google.com/pubsub/docs/subscriber
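Given at-least-once delivery, the usual remedy is to make the subscriber idempotent, i.e. tolerant of redeliveries. A minimal sketch in Node.js (the original subscriber is a Java application, and the subscription name here is a hypothetical placeholder); a production version would record processed message IDs in a durable store rather than an in-memory Set:

const { PubSub } = require('@google-cloud/pubsub');

const seen = new Set(); // in-memory only; use Firestore/Redis/etc. in production
const subscription = new PubSub().subscription('my-sub'); // hypothetical name

subscription.on('message', (message) => {
  if (seen.has(message.id)) {
    message.ack(); // duplicate redelivery; acknowledge and skip
    return;
  }
  seen.add(message.id);
  // ... process message.data here ...
  message.ack();
});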

How to subscribe AWS Lambda to Salesforce Platform Events

We want to integrate Salesforce into our microservice architecture in AWS.
There is an article about this here.
So we want to subscribe a Lambda to certain platform events in Salesforce.
But I found no code examples for this. I gave it a try using Node.js (without Lambda). This works great:
var jsforce = require('jsforce');

var username = 'xxxxxxxx';
var password = 'xxxxxxxxxxx';
var conn = new jsforce.Connection({ loginUrl: 'https://test.salesforce.com' });

conn.login(username, password, function(err, userInfo) {
  if (err) { return console.error(err); }
  console.error('Connected ' + userInfo);
  conn.streaming.topic("/event/Contact_Change__e").subscribe(function(message) {
    console.dir(message);
  });
});
But I am not sure if this is the right way to do it in Lambda.

My understanding of Salesforce Platform Events is that they use CometD under the hood. CometD allows the HTTP client (your code) to subscribe to events published by the HTTP server.
This means your client code needs to be running, and be in a state where it is subscribed and listening for server events, for the duration of time that you expect to be receiving events. In most cases this duration is indefinite, i.e. your client code expects to wait forever in a subscribed state, ready to receive events.
This is at odds with AWS Lambda functions, which are expected to complete execution in a relatively short amount of time (max 15 minutes, last time I checked).
I would suggest you need a long-running process, such as a Node.js application running in Elastic Beanstalk, or in a container. The Node.js application can stay running indefinitely in a subscribed state. Each time it receives an event, it could call your AWS Lambda function in order to implement the required actions.
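A minimal sketch of that suggestion, reusing the jsforce snippet from the question as the long-running process and forwarding each event to Lambda; the function name, region, and environment variables are hypothetical:

const jsforce = require('jsforce');
const AWS = require('aws-sdk');

const lambda = new AWS.Lambda({ region: 'us-east-1' }); // hypothetical region
const conn = new jsforce.Connection({ loginUrl: 'https://test.salesforce.com' });

conn.login(process.env.SF_USER, process.env.SF_PASS, (err) => {
  if (err) { return console.error(err); }
  conn.streaming.topic('/event/Contact_Change__e').subscribe((message) => {
    // Forward each platform event to the Lambda function asynchronously.
    lambda.invoke({
      FunctionName: 'handleContactChange', // hypothetical function name
      InvocationType: 'Event',             // fire-and-forget invocation
      Payload: JSON.stringify(message),
    }, (invokeErr) => {
      if (invokeErr) console.error(invokeErr);
    });
  });
});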

How to check AWS IoT Device connection status on the web console?

I just started playing with AWS IoT. I created a thing and used mqtt-spy to connect to the AWS server. All is OK.
Now I'd like to check the status of each thing in the web console; however, I couldn't find any such information near the device.
By enabling the AWS IoT Fleet Indexing Service, you can get the connectivity status of a thing. You can also query for currently connected/disconnected devices.
First, you have to enable indexing (thingConnectivityIndexingMode) via the AWS CLI or through the console:
aws iot update-indexing-configuration --thing-indexing-configuration thingIndexingMode=REGISTRY_AND_SHADOW,thingConnectivityIndexingMode=STATUS
Then you can query a thing's connectivity status like the following:
aws iot search-index --index-name "AWS_Things" --query-string "thingName:mything1"
{
  "things": [{
    "thingName": "mything1",
    "thingGroupNames": [
      "mygroup1"
    ],
    "thingId": "a4b9f759-b0f2-4857-8a4b-967745ed9f4e",
    "attributes": {
      "attribute1": "abc"
    },
    "connectivity": {
      "connected": false,
      "timestamp": 1641508937
    }
  }]
}
Note: the Fleet Indexing Service indexes connectivity data using device lifecycle events ($aws/events/presence/connected/). In some cases it may take a minute or so for the service to update the index after a connect or disconnect event occurs.
EDIT: The JavaScript version of this:

var iot = new AWS.Iot({
  apiVersion: "2015-05-28"
});
...
var params = {
  queryString: "thingName:" + data.Item.thingName, // using result from DynamoDB
  indexName: 'AWS_Things'
  // maxResults: 'NUMBER_VALUE',
  // nextToken: 'STRING_VALUE',
  // queryVersion: 'STRING_VALUE'
};
iot.searchIndex(params, function(err, data) {
  if (err) {
    console.log("error from iot.searchIndex");
    console.log(err, err.stack); // an error occurred
  } else {
    console.log("success from iot.searchIndex");
    console.log(data.things[0].connectivity.connected); // t/f
  }
});
You need to subscribe to the lifecycle topic in the AWS IoT console (the Test section of AWS IoT Core). For example, to subscribe to this topic, replace <Your_clientId> with your client ID:
$aws/events/presence/connected/<Your_clientId>
If you have more than one thing, then you have to subscribe once per client ID.
For reference, check this link: https://docs.aws.amazon.com/iot/latest/developerguide/life-cycle-events.html
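The same lifecycle topics can also be watched from code instead of the console. A sketch using aws-iot-device-sdk, where the endpoint, certificate paths, and client ID are hypothetical placeholders:

const awsIot = require('aws-iot-device-sdk');

const device = awsIot.device({
  keyPath: './private.pem.key',              // hypothetical cert paths
  certPath: './certificate.pem.crt',
  caPath: './AmazonRootCA1.pem',
  clientId: 'lifecycle-monitor',             // hypothetical client ID
  host: 'xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com', // your IoT endpoint
});

device.on('connect', () => {
  // '+' matches any client ID; use a specific ID to watch a single thing.
  device.subscribe('$aws/events/presence/connected/+');
  device.subscribe('$aws/events/presence/disconnected/+');
});

device.on('message', (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);
});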