Do push notifications (https://developers.google.com/admin-sdk/reports/v1/guides/push) have the same lag time as querying for login events?
In other words, if I set up a push notification for login events, will I wait 1-2 days to receive POST requests from Google?
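For context, the watch channel I'm referring to is set up roughly like this; a minimal sketch with google-api-python-client, where the channel id, address, and credentials are placeholders (creds needs the admin.reports.audit.readonly scope):

```python
from googleapiclient.discovery import build

def watch_login_events(creds):
    service = build("admin", "reports_v1", credentials=creds)
    channel = {
        "id": "my-unique-channel-id",                    # any unique string/UUID
        "type": "web_hook",
        "address": "https://example.com/notifications",  # your HTTPS receiver
    }
    # Google will POST matching login activities to the address above.
    return service.activities().watch(
        userKey="all", applicationName="login", body=channel
    ).execute()
```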
We ran into this issue a few times and hope to find a workaround.
[screenshot of the Cloud Pub/Sub web console]
As shown in the screenshot, the Pub/Sub topic has a push subscription on it. When 'VIEW MESSAGES' was clicked, the side panel appeared to let the user choose a subscription, but clicking it did not show the subscription, so the user was unable to view messages.
Is the type of subscription related to this issue, or is this functionality simply not available? If it is related, is there a way to see messages for a topic that has only a push subscription?
A push subscription doesn't stack up messages. Each time a message arrives, the push subscription sends it to the HTTP endpoint, so the subscription always looks "empty". (This is not exactly true: unacknowledged messages are kept in memory and retried until an HTTP 2XX response is received or the TTL expires (7 days by default), but nothing is really stored at rest.)
By contrast, a pull subscription stacks up messages until a client polls for them. That is why you can see the stacked messages there.
When I debug a push subscription, especially to inspect the structure and type of the messages and to validate them, I create an additional pull subscription and look through it at the messages published to the topic.
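For example, a minimal sketch of that debugging setup with the google-cloud-pubsub Python client (project, topic, and subscription names are placeholders):

```python
from google.cloud import pubsub_v1

project = "my-project"
subscriber = pubsub_v1.SubscriberClient()
topic_path = f"projects/{project}/topics/my-topic"
sub_path = subscriber.subscription_path(project, "debug-pull")

# No push_config here, so this is a pull subscription; messages stack up in it.
subscriber.create_subscription(name=sub_path, topic=topic_path)

response = subscriber.pull(subscription=sub_path, max_messages=10)
for received in response.received_messages:
    print(received.message.data, dict(received.message.attributes))
    # Not acking, so the messages stay available for further inspection.
```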
I'm working on an event-based email notification service. At the moment an email is sent on every single event. The problem with this approach is that sometimes there are too many events in a short period of time, so the user gets too many emails at once. Instead, I'd like to throttle and group these emails by user.
I'm looking at AWS SQS to pipe these events into and somehow consume them with a Lambda that picks and groups the ones that are ready (ready = have been there for at least 3 minutes) to be sent. Is there a built-in solution? Can I tag events with a user ID, and on the consumer side pick the latest one, check if it's been there for 3 minutes, and if so pick all the remaining ones? Just an idea...
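To make the idea concrete, here is a minimal boto3 sketch of that tagging approach (the queue URL and attribute name are my own placeholders, not a built-in feature): DelaySeconds hides each message for 3 minutes, and the consumer groups whatever has become visible by user.

```python
import boto3
from collections import defaultdict

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/email-events"  # placeholder

def publish_event(user_id: str, body: str) -> None:
    # DelaySeconds keeps the message invisible for 3 minutes (max is 900s),
    # which approximates the "been there for at least 3 minutes" rule.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        DelaySeconds=180,
        MessageAttributes={"userId": {"DataType": "String", "StringValue": user_id}},
    )

def drain_and_group() -> dict:
    # Consumer side (e.g. a scheduled Lambda): everything visible is "ready".
    grouped = defaultdict(list)
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        MessageAttributeNames=["userId"],
        WaitTimeSeconds=5,
    )
    for msg in resp.get("Messages", []):
        user = msg["MessageAttributes"]["userId"]["StringValue"]
        grouped[user].append(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return grouped  # one aggregated email can now be sent per user
```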
I'm troubleshooting an issue with a Node application I've inherited serving as a webhook callback endpoint.
To debug, I'm posting messages to a page that the Facebook app associated with the endpoint is subscribed to, and following my Node app's log.
After several hours, I still see no update requests from Facebook for my page posts.
Comparing timestamps on posts with my app's logs for the last update requests it received (several days ago), it appears there was roughly an 8-hour lag between a post and its update request.
I've searched the documentation for help but could only find this:
Update notifications are aggregated and sent in a batch of up to 1000 updates.
If any update sent to your server fails, we will retry immediately, then try a few more times with decreasing frequency over the next 24 hours. Your server should handle deduplication in these cases. Updates unaccepted for 24 hours will be dropped.
This gives me the impression that updates are not instantaneous. But are several hour delays the norm?
Can anybody with more experience with Graph API webhooks provide a ballpark for normal lag?
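(For what it's worth, the deduplication the quoted docs ask for could be as simple as this sketch; it assumes each update carries a stable id field, and a persistent store would replace the in-memory set in production:)

```python
seen_ids = set()  # in production, use a persistent store such as Redis

def handle_update(update: dict) -> None:
    uid = update.get("id")  # assumes a stable identifier on each update
    if uid in seen_ids:
        return  # a retried delivery we've already processed
    seen_ids.add(uid)
    process(update)  # placeholder for the real handler
```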
For various reasons we have run into scenarios where we would like to "pause" push notifications from a Google Cloud Platform (GCP) Pubsub subscription and just allow them to queue up, and then eventually "unpause" and allow pushes to continue without losing any messages.
Is this a built in feature?
Can you suggest a workaround?
Good news. I stumbled upon the answer at https://cloud.google.com/pubsub/docs/subscriber#receive_push
Stopping/pausing and resuming push delivery
To pause receiving messages for a subscription, send a modifyPushConfig request to set the push endpoint to an empty string. The messages will accumulate, but will not be delivered. To resume receiving messages, send another modifyPushConfig request with a populated push endpoint.
To permanently stop delivery, you should delete the subscription.
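A minimal sketch of that pause/resume with the google-cloud-pubsub Python client (project, subscription, and endpoint names are placeholders):

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "my-subscription")

# Pause: an empty push endpoint stops pushes, and messages accumulate.
subscriber.modify_push_config(
    subscription=subscription,
    push_config=pubsub_v1.types.PushConfig(push_endpoint=""),
)

# Resume: repopulate the endpoint and delivery continues, with nothing lost
# (within the retention window).
subscriber.modify_push_config(
    subscription=subscription,
    push_config=pubsub_v1.types.PushConfig(push_endpoint="https://example.com/push"),
)
```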
There is no "pause" feature with push subscriptions. If you can, you might consider switching to a pull subscription. Then you can control exactly when you request messages.
If you can't switch to a pull subscription, you could just return an error response when you receive messages, or make your endpoint unavailable. Google Cloud Pub/Sub will back off redelivery of messages, waiting up to 10 seconds between attempts. It will try to redeliver messages for 7 days. Depending on how long you need to pause your message consumption, this might be a viable option.
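For example, a minimal sketch of that approach, assuming a Flask push endpoint and a pause flag you control:

```python
from flask import Flask, request

app = Flask(__name__)
PAUSED = False  # flip to True to pause consumption

@app.route("/push", methods=["POST"])
def push_handler():
    if PAUSED:
        # Any non-2XX response makes Pub/Sub back off and redeliver later.
        return "paused", 503
    handle_message(request.get_json())  # placeholder for the real processing
    return "", 204  # a 2XX response acknowledges the message
```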
If you're not going to need to switch between "paused" and "unpaused" frequently (less than once per minute), you can get this behavior by switching your subscriber to a pull subscription (and not pulling) to pause, and then switching back to a push subscription to start receiving messages again.
I don't think there's such a pause feature. Instead, you can use polling consumers and stop polling when you need to pause. That's all I can think of.
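A rough sketch of that polling approach with the google-cloud-pubsub client (the names and the pause flag are placeholders):

```python
import time
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("my-project", "my-pull-subscription")
paused = False  # set to True to pause; messages then accumulate unpolled

while True:
    if paused:
        time.sleep(5)
        continue
    response = subscriber.pull(subscription=sub_path, max_messages=10)
    ack_ids = []
    for received in response.received_messages:
        handle(received.message.data)  # placeholder for the real handler
        ack_ids.append(received.ack_id)
    if ack_ids:
        subscriber.acknowledge(subscription=sub_path, ack_ids=ack_ids)
```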
I'm having issues where my SQS Messages are never deleted from the SQS Queue. They are only removed when the lifetime ends, which is 4 days.
So to summarize the app:
Send URL to SQS Queue to wait to be crawled
Send message to Elastic Beanstalk App that crawls the data and stores it in the database
The script seems to be working, in the sense that it does receive the message, crawls it successfully, and stores the data in the database. The only issue is that the messages remain in the queue, stuck at "Message Available".
So if I, for example, load the queue with 800 messages, it will be stuck at ~800 messages for 4 days and then they will all be deleted at once because of the lifetime value. A few messages do seem to get deleted, because the number changes slightly, but the large majority is never removed from the queue.
So question:
Isn't SQS supposed to remove the message as soon as it has been sent and received by the script?
Is there a manual way for me to delete the current message from within the script itself? As far as I know, the message only travels one way, from SQS -> App, so I cannot do SQS <-> App.
Any ideas?
A web application in a worker environment tier should only listen on the local host. When the web application in the worker environment tier returns a 200 OK response to acknowledge that it has received and successfully processed the request, the daemon sends a DeleteMessage call to the SQS queue so that the message will be deleted from the queue. (SQS automatically deletes messages that have been in a queue for longer than the configured RetentionPeriod.) If the application returns any response other than 200 OK or there is no response within the configured InactivityTimeout period, SQS once again makes the message visible in the queue and available for another attempt at processing.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
So I guess that answers my question: some messages do not return HTTP 200, and then they are stuck in an infinite redelivery loop.
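In other words, the worker endpoint has to report success explicitly. A minimal sketch, assuming a Flask app behind the worker daemon (crawl_and_store stands in for the real handler):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_message():
    try:
        crawl_and_store(request.get_data(as_text=True))  # placeholder
        return "", 200  # daemon sends DeleteMessage for this queue item
    except Exception:
        return "", 500  # message becomes visible again and is retried
```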
No, the messages won't get deleted when you read a queue item; they are only hidden for a specific amount of time, called the visibility timeout. The idea behind the visibility timeout is to ensure that if there are multiple consumers for a single queue, no two consumers pick the same item and start processing it.
This is the change you need to make to your app to get the expected behavior:
Send URL to SQS Queue to wait to be crawled
Send message to Elastic Beanstalk App that crawls the data and stores it in the database
On a successful crawl, use the receipt handle (not the message ID) to delete the queue item from the queue, as in the sketch below.
AWS Documentation - DeleteMessage
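A minimal boto3 sketch of that receive/delete cycle (the queue URL and crawl() are placeholders):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/crawl-queue"  # placeholder

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    crawl(msg["Body"])  # placeholder: crawl the URL and store the result
    # Delete with the ReceiptHandle; otherwise the message reappears after
    # the visibility timeout and is only dropped at the retention limit.
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```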