I have a large backlog of undelivered messages for one Google Cloud Pub/Sub subscription. I would rather not have to process every message to get caught up, and I cannot delete the subscription manually because it was created using Cloud Deployment Manager.
The gcloud seek command appears to be what I need (https://cloud.google.com/sdk/gcloud/reference/alpha/pubsub/subscriptions/seek). However, upon running this command in the Google Cloud Shell, I receive a "method not found" exception:
gcloud alpha pubsub subscriptions seek my_subscription__name --time=2016-11-11T06:20:57
ERROR: (gcloud.alpha.pubsub.subscriptions.seek) Subscription [my_subscription__name:seek] not found: Method not found.
The subscription type is "Pull".
The API for this method is whitelist-only at the moment, but stay tuned. We'll find a way to clarify this in the CLI documentation or output.
In the GCP Console you have the option to purge messages in a subscription.
It looks like gcloud pubsub subscriptions does not support a "PURGE MESSAGES" command.
What are the best alternatives? Are there any other libraries or APIs?
With the gcloud tool, this functionality is provided by the seek method. It effectively resets a subscription's backlog to a point in time or to a given snapshot. So, to purge all messages from a subscription, you'd pass the current time via the --time flag, for example:
gcloud pubsub subscriptions seek my_subscription__name --time=2022-02-10T11:00:00Z
Messages in the subscription that were published before this time are marked as acknowledged, and messages retained in the subscription that were published after this time are marked as unacknowledged.
The equivalent API endpoint for it is projects.subscriptions.seek.
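For example, to purge everything up to the present moment from the shell, you could compute "now" inline; a minimal sketch:
# Seek to "now" so every retained message is marked as acknowledged.
gcloud pubsub subscriptions seek my_subscription__name \
    --time="$(date -u +%Y-%m-%dT%H:%M:%SZ)"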
I want to generate an alert in Monitoring and Logging, and when that alert is triggered I want my script (which I already have in my Cloud Source Repository) to be executed. How could I do that?
This is possible with Cloud Functions: a trigger is the declaration of which occurrence should cause your function to execute.
You can use Google Cloud Pub/Sub triggers: when an event occurs in the system, a message is published to the Pub/Sub topic that was specified when the function was deployed. Every message published to this topic triggers a function execution, with the message contents passed as input data.
The guide "Alert-based event" walks through the steps to implement this solution.
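As a rough sketch (all names here are hypothetical placeholders), the wiring could look like this from the CLI: create the topic that the alerting policy's Pub/Sub notification channel publishes to, then deploy a function from your Cloud Source Repository that runs your script on every message:
# Topic that the alerting policy's notification channel publishes to.
gcloud pubsub topics create my-alerts-topic
# Deploy the script from a Cloud Source Repository; every alert notification
# published to the topic triggers one execution of the handle_alert entry point.
gcloud functions deploy run-my-script \
    --runtime=python310 \
    --trigger-topic=my-alerts-topic \
    --source=https://source.developers.google.com/projects/MY_PROJECT/repos/MY_REPO \
    --entry-point=handle_alert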
I have a streaming Dataflow job that sinks its output data to a Pub/Sub topic, but randomly the job logs throw an error:
There were errors attempting to publish to topic projects/my_project/topics/my_topics. Recent publish calls completed successfully: 574, recent publish calls completed with error: 1.
There is no stack trace provided by Dataflow, and according to the job metrics the error type is "unavailable". After some time the errors stop and the pipeline keeps running as usual. Does this error occur because of an internal error in the GCP service, or because of a quota issue? The output request rate peaked at 10 req/s.
I found a similar issue; it was resolved by adding the "Pub/Sub Admin" role to the Compute Engine default service account under IAM permissions.
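If you prefer the CLI, a hedged sketch of the same fix (project and service account are placeholders; note that the narrower roles/pubsub.publisher may be sufficient if the job only publishes):
# Grant Pub/Sub Admin to the Compute Engine default service account.
gcloud projects add-iam-policy-binding MY_PROJECT \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/pubsub.admin"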
I'd like to get notifications for all standard error logs sent to Google Cloud Logging. Preferably, I'd like to get the notifications through Google Cloud Error Reporting, so I can easily get notifications on my phone through the GCP mobile app.
I've deployed applications to Google Kubernetes Engine that are writing logs to standard error, and GKE is nicely forwarding all the stderr logs to Google Cloud Logging with logName: "projects/projectName/logs/stderr"
I see the logs show up in Google Cloud Logging, but Error Reporting does not pick up on them.
I've tried troubleshooting as described here: https://cloud.google.com/error-reporting/docs/troubleshooting. But the proposed solutions revolve around formatting the logs in a certain way. What if I've deployed applications for which I can't control the log messages?
A (totally ridiculous) option could be to create a "logs-based metric" based on any log sent to stderr, then get notified whenever that metric exceeds 1.
What's the recommended way to get notified for stderr logs?
If Error Reporting is not recognizing the stderr logs from your container, it means they are not in the correct format for the API to detect them.
Take a look at this guide on how to set up Error Reporting for GKE.
There are other ways to do this with third-party products like gSlack, where you basically export the Stackdriver logs to Pub/Sub and then send them into a Slack channel with Cloud Functions.
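For the export step, a sketch with hypothetical names:
# Route the GKE stderr logs to a Pub/Sub topic that the forwarding function consumes.
gcloud logging sinks create stderr-to-pubsub \
    pubsub.googleapis.com/projects/MY_PROJECT/topics/stderr-logs \
    --log-filter='logName="projects/MY_PROJECT/logs/stderr"'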
You can also try doing it with Cloud Build, integrating it with the GKE container logs.
Still, I think the best and easiest option is to use a Monitoring alert.
You can force the error by setting the @type in the context, as shown in the docs. For some reason, even though this is the Google library and it has code to detect that an exception was thrown, with its stack trace, it won't recognize it as an error worth reporting.
I also added the serviceContext field to be able to identify my service in Error Reporting.
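For illustration, a minimal sketch (log name and service values are hypothetical) of a structured entry that Error Reporting will pick up, written via gcloud logging write:
# The @type marks the entry as a ReportedErrorEvent even without a parseable stack trace.
gcloud logging write my-log '{
  "@type": "type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent",
  "serviceContext": {"service": "my-service", "version": "1.0"},
  "message": "RuntimeError: something failed\n    at handler (app.py:42)"
}' --payload-type=json --severity=ERROR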
I have scheduled an HTTP-call-type job using Google Cloud Scheduler. How do I send out an email alert if the job fails?
I have read the Cloud Scheduler documentation and Googled around, but the answer is not obvious. I also attempted a Stackdriver alerting policy, but I can't find the corresponding metric for the failed log entry.
I expect that an email notification can be configured to be sent if the scheduled job fails.
One way to handle this is to create a new Log-Based Metric with this filter:
resource.type="cloud_scheduler_job" severity != INFO
Then you can create an alert based on this new metric.
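From the CLI, that could look like this (metric name hypothetical):
# Create the log-based metric; an alerting policy can then be attached to it in Monitoring.
gcloud logging metrics create scheduler-job-errors \
    --description="Cloud Scheduler runs that logged a severity other than INFO" \
    --log-filter='resource.type="cloud_scheduler_job" severity!=INFO'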
I use a workaround to solve my own problem.
Since my Cloud Scheduler job makes an HTTP call to my Cloud Function, I used Stackdriver to create an alert that monitors my function executions with status code != ok. Any time the function execution fails, an email alert is sent to my inbox.
For the time being, this solves my problem.
Nevertheless, perhaps Cloud Scheduler could provide such an enhancement, sending alerts as part of the job configuration.
Thank you.
You can use log-based metrics in Stackdriver together with an email notification channel to get notified when your job fails.
October 2022: You no longer need to create a metric for this; you can skip that step and create an alert directly from Logs Explorer after entering the query already described:
resource.type="cloud_scheduler_job" severity != INFO
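If you want to sanity-check the query from the CLI before wiring up the alert, something like:
# List recent Cloud Scheduler log entries that would trip the alert.
gcloud logging read 'resource.type="cloud_scheduler_job" severity!=INFO' --limit=10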