Recently, GCP Error Reporting has started showing this error: "Deployment limit reached. You are only seeing data for the most recent deployments."
Unfortunately, I cannot find any documentation that states which deployments this message refers to, how high the limit is, or whether there is a way to raise it.
Can anyone shed light on this? Maybe referencing hidden documentation?
This deployment limit is documented here. It mentions a limit of "10,000 deployment-error group pairs, where a deployment is a combination of service and version".
So this isn't about a specific service that reached a deployment limit, but about the sum of all the service-version pairs (App Engine, Cloud Functions, Cloud Run, ...) that you've ever deployed and enabled error reporting on.
This also means that the oldest service-version pairs are most likely no longer serving.
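As an illustration of what counts as one deployment: for App Engine, for example, you can enumerate the service-version pairs in a project with gcloud (the project and service names below are placeholders):

    # Each (SERVICE, VERSION) row corresponds to one "deployment" in the
    # sense of Error Reporting's deployment-error group pair limit.
    gcloud app versions list --project=my-project

    # Narrow the listing to a single service, e.g. the default one:
    gcloud app versions list --service=default --project=my-project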
Related
In Google Cloud's Error Reporting, the "Seen in" section doesn't show anything useful for my GKE deployments. It's either empty or says gke_instance, which is pretty useless. I have set the serviceContext correctly in my logs, and the container name is also set in the labels of the log entries, yet it's not showing up. Is this a bug, or am I missing something obvious here?
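For reference, a structured log entry of the kind described above can be produced for testing with gcloud; the log name, service, and version here are placeholders, not anything from the question:

    # Write a JSON log entry carrying a serviceContext, the field Error
    # Reporting uses to attribute errors to a service/version ("Seen in").
    # Error Reporting generally also needs a stack trace in the message, or
    # the ReportedErrorEvent @type, before it will pick the entry up.
    gcloud logging write my-log \
        '{"message": "Example error: ...stack trace...", "serviceContext": {"service": "my-service", "version": "v1"}}' \
        --payload-type=json --severity=ERROR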
To resolve your issue, try the three solutions below:
Solution 1: If you're using Legacy Stackdriver, disable it and enable Stackdriver Kubernetes Engine Monitoring; for more information, refer to this similar Stack Overflow question.
Solution 2: As stated in these release notes, the Stackdriver agent is actually disabled by default starting with GKE 1.15. To activate it again, you need to edit the cluster following these instructions (see the sketch after the note below). Also refer to this Stack Overflow question.
Solution 3: If a new GKE cluster has Cloud Operations for GKE set to System and workload logging and monitoring, but no application logs are showing up, refer to this Stack Overflow question.
Note: The issue turned out to be the node pool using the default service account (which no longer existed). Creating a new node pool following the documentation fixed it.
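As a rough sketch of Solution 2 and the note above (cluster, zone, node pool, and service account names are placeholders):

    # Re-enable Stackdriver Kubernetes Engine Monitoring on an existing
    # cluster (legacy flag; newer GKE versions use --logging/--monitoring).
    gcloud container clusters update my-cluster --zone=us-central1-a \
        --enable-stackdriver-kubernetes

    # Create a replacement node pool bound to a service account that exists.
    gcloud container node-pools create new-pool --cluster=my-cluster \
        --zone=us-central1-a \
        --service-account=my-sa@my-project.iam.gserviceaccount.com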
When accessing Google Cloud Platform services, I'm asked to retry due to an unknown error. It affects all services I want to access.
The screenshot below shows the issue with Google Cloud Build.
Here's the same error with Google Cloud Storage.
This first occurred a month ago but resolved itself without any further action on my side. This second occurrence has now lasted four days.
Looking at Google Cloud Status, I can't link any incident to this behaviour.
Options I've been exploring with no success are:
Logging back in
Checking credential access
Important notes:
I have access to the global project dashboard: https://console.cloud.google.com/home/dashboard?project=<project-name>
Other teammates do not face this. I'm now left with few further options, since my access has been verified.
I tried disabling all my browser extensions, then narrowed it down and found that the Apollo Client DevTools extension was the culprit.
I resolved the same error by disabling the "Keepa - Amazon Price Tracker" extension.
Some time ago a warning started showing up in the logs. As far as I can see, it does not affect the project's workflow, but it clutters the logs.
The warning is:
[Warning] User 'mysql.session'@'localhost' was assigned access 0x8000 but was allowed to have only 0x0.
What is it about?
Thanks!
I'm going to list the most common root causes for that error message you are getting:
Permissions: There are user permissions for the database, Cloud Functions, and the IAM roles. You can make it work by granting allUsers complete access. This is not secure, of course, but it works.
Prod vs Dev (emulator): Recall that access is less strict within the Google ecosystem, so running your functions on the local emulator is a poor way to troubleshoot access issues. There are other options (such as SSL or a VM). In the worst case, you may have to deploy on every change.
ORM: If you are using TypeORM for the first time, be aware that many of the tutorials are outdated.
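If you want to inspect what the reserved mysql.session account from the warning is actually granted, a quick check from a shell might look like this (admin credentials assumed; on Cloud SQL you cannot run mysql_upgrade yourself, and the warning is generally harmless):

    # Show the privileges currently granted to the reserved internal account.
    mysql -u root -p -e "SHOW GRANTS FOR 'mysql.session'@'localhost';"

    # On self-managed MySQL (pre-8.0.16), mismatched system-account
    # privileges after an upgrade are often repaired by running:
    mysql_upgrade -u root -p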
I have been trying to find a way to save on the costs of Airflow by disabling it when not in use. I have discovered that if we disable the composer.googleapis.com service while it is not in use, Google does not charge for the service while it is disabled, although it does continue to charge for other resources that remain active. Unfortunately, if the service is disabled for more than an hour or so, it is not usable after re-enabling it. After the service has been disabled for an extended period of time, the Composer Environment Details page shows
An error occurred with retrieving the last operation on this environment
and
This environment cannot be edited due to the errors that occurred during environment creation/update. Please investigate the logs to determine the cause, or create a new environment.
And gcloud composer environments describe shows state: ERROR
The one error that I did see in the logs was a duplicate key when the airflow_monitoring DAG was rescheduled after a little over an hour. Therefore, I created a new Composer environment, disabled all DAGs, disabled the Composer service, waited a while, then enabled it again. The environment was once again in an error state.
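For reference, the toggling described above amounts to roughly the following commands (the environment name and location are placeholders):

    # Disable the Cloud Composer API, then re-enable it later.
    gcloud services disable composer.googleapis.com
    gcloud services enable composer.googleapis.com

    # Check the environment's state after re-enabling.
    gcloud composer environments describe my-env --location=us-central1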
The Cloud Composer documentation states:
If you disable the Cloud Composer API, environments become unusable within an hour of service deactivation unless you re-enable the API. If you re-enable the API, you are billed for the service usage that occurs while the Cloud Composer service is deactivating.
Maybe this is poorly worded, but to me it sounds like the environment becomes unusable within an hour if you disable the API, but that if you re-enable it any time later, it will become usable again. I am wondering if it really means that if you disable it, you must re-enable it within one hour or the environment will become permanently unusable.
Is there a way to disable the composer.googleapis.com service for longer than an hour and then get it working again after the service has been re-enabled? Is there something I can restart, or some way to clear the error state? Is there more I should do before disabling it?
I am using composer-1.10.4-airflow-1.10.6 with Python 3.
Thanks.
No, there is no way to disable the composer.googleapis.com service for more than an hour and then have environments be functional after re-enablement.
GCP services are not meant to be enabled and disabled on the fly in this manner; disabling a service is intended to be a long-term action. If a service stays disabled for long enough, the Google-managed components created for it (specifically for your project) are decommissioned, and in Composer's case this renders your environments permanently unusable.
The error state in the environment cannot be cleared. If you want to save on costs, you should delete Composer environments as opposed to deactivating the service entirely. The "service" is not cluster-like and isn't meant to be toggled on and off.
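A sketch of the delete-and-recreate approach suggested above (the environment name and location are placeholders; the image version mirrors the one in the question):

    # Delete the environment while it is not needed...
    gcloud composer environments delete my-env --location=us-central1

    # ...and recreate it later. Note that DAGs, connections, and variables
    # must be backed up and restored separately.
    gcloud composer environments create my-env --location=us-central1 \
        --image-version=composer-1.10.4-airflow-1.10.6 --python-version=3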
I'm a newbie in AWS. With my free tier account, I'm trying to build my Node.js project with AWS CodeBuild, but I get this error:
Build failed to start The build failed to start. The following error occured: Cannot have more than 0 builds in queue for the account
I followed the simple AWS tutorial, leaving all the default settings so AWS would create all the services, images, etc. for me.
I also stored the source code in an AWS CodeCommit repository.
Could anybody help me?
In my case, there was a security vulnerability in my account, and AWS automatically raised a support ticket and suspended all resources that were linked to it. I had to fix it, and then, in a chat with AWS support, they resumed my service.
I've seen a lot of answers around the web suggesting to call support, which is a great idea, but I was actually able to get around this on my own.
As the root user, I went in and put in a current credit card; the one on file was expired. I then deleted my CodeBuild project and created a new one. Now my builds work! It makes sense that AWS just needed a valid payment method before it allowed me to use premium services.
My solution may not work for you, but I sure hope it does!
My error was Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1 when I tried to increase the concurrent build limit under the checkbox Restrict number of concurrent builds this project can start in the CodeBuild project configuration. I resolved it by writing to support to ask for a limit increase. They raised it to 20, and it now works as expected. They increased it even though I'm on the Basic plan on AWS, if anyone's wondering.
My solution was to add a new service role name and set the concurrent build limit to 1. This worked.
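If you prefer the CLI over the console, the same setting can be adjusted roughly like this (the project name is a placeholder, and the account-level cap still applies):

    # Set the project-level concurrent build limit to 1; it cannot exceed
    # the account-level limit, which support must raise.
    aws codebuild update-project --name my-project --concurrent-build-limit 1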
I think your issue is resolved by now; anyway, I faced the same issue. In my case, I had a CodeBuild project connecting to a GitHub repository, and I had hard-coded an AWS Access Key and Secret in the buildspec.yml file. AWS identified this as an unauthorized credential exposure, so they added security restrictions to the resources and opened a support issue. In such a case, look for the emails from AWS in which they explain the reason for this behavior and the steps to get it corrected.
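If you find yourself in that situation, the exposed key should be deactivated and rotated; a minimal sketch with the AWS CLI, where the user name and key ID are placeholders:

    # List the access keys on the affected user to find the exposed one.
    aws iam list-access-keys --user-name my-user

    # Deactivate the exposed key (then rotate to a new one).
    aws iam update-access-key --user-name my-user \
        --access-key-id AKIAEXAMPLEKEYID --status Inactive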