Warning in GCP Cloud SQL Database - google-cloud-platform

Some time ago a warning started showing up in the logs. As far as I can see it does not affect the project's workflow, but it clutters the logs.
The warning is:
[Warning] User 'mysql.session'@'localhost' was assigned access 0x8000 but was allowed to have only 0x0.
What is it about?
Thanks!

I'll list the most common root causes for the warning you are getting:
Permissions: there are separate permission layers for the database users, Cloud Functions, and IAM roles. You can make the warning go away by granting allUsers complete access (see the grant-inspection sketch after this list). This is not secure, of course, but it works.
Prod vs. Dev (emulator): recall that access is less strict within the Google ecosystem, so running your functions on the local emulator is a poor way to troubleshoot this. There are other options (such as connecting over SSL or from a VM). As a last resort, you may have to deploy on every change.
ORM: if you are using TypeORM for the first time, you will find that many of the tutorials are outdated.
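If you want to see what that reserved account actually holds, a quick check is to compare its grants with your application user's. This is only a sketch (the host flag is a placeholder), and on Cloud SQL the reserved mysql.session account can only be inspected, not modified:

    # Show the privileges of the reserved internal account named in the warning.
    mysql -h <instance-ip> -u root -p \
        -e "SHOW GRANTS FOR 'mysql.session'@'localhost';"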


What's behind GCP `There was an error while loading xxx. Please try again.` UI error?

When accessing Google Cloud Platform Services, I'm requested to retry due to an unknown error. It affects all services I want to access.
The snapshot below showcases the issue with Google Cloud Build.
Here's the same error with Google Cloud Storage.
This first occurred a month ago and resolved itself without any further action on my side. This second occurrence has now lasted 4 days.
Looking at Google Cloud Status, I can't link any incident to this behaviour.
Options I've been exploring with no success are:
Logging back in
Checking credential access
Important notes:
I have access to the global project dashboard: https://console.cloud.google.com/home/dashboard?project=<project-name>
Other teammates do not face this. I'm now left with few further options, since my access has already been verified.
I tried disabling all the browser extensions, then narrowed it down to find that the Apollo Client DevTools extension was the culprit.
I had the same error and resolved it by disabling the "Keepa - Amazon Price Tracker" extension.

Can't use Composer environment after re-enabling service

I have been trying to find a way to save on the costs of Airflow by disabling it when not in use. I have discovered that if we disable the composer.googleapis.com service while it is not in use, Google does not charge for the service while it is disabled, although it does continue to charge for other resources that are still active. Unfortunately, if the service is disabled for more than an hour or so, it is not usable after re-enabling it. After the service has been disabled for an extended period of time, the Composer Environment Details page shows
An error occurred with retrieving the last operation on this environment
and
This environment cannot be edited due to the errors that occurred during environment creation/update. Please investigate the logs to determine the cause, or create a new environment.
And gcloud composer environments describe shows state: ERROR
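For context, the check looks roughly like this (the environment name and location are placeholders for my own):

    # Print only the environment state; on a healthy environment this is RUNNING.
    gcloud composer environments describe my-environment \
        --location us-central1 \
        --format="value(state)"
    # Output here: ERROR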
The one error that I did see in the logs was a duplicate-key error when the airflow_monitoring DAG was rescheduled after a little over an hour. I therefore created a new Composer environment, disabled all DAGs, disabled the Composer service, waited a while, then enabled it again. The environment was once again in an error state.
The Cloud Composer documentation states:
If you disable the Cloud Composer API, environments become unusable within an hour of service deactivation unless you re-enable the API. If you re-enable the API, you are billed for the service usage that occurs while the Cloud Composer service is deactivating.
Maybe this is poorly worded, but to me it sounds like the environment becomes unusable within an hour of disabling the API, yet becomes usable again if you re-enable the API at any later time. I am wondering if it really means that once you disable it, you must re-enable it within an hour or the environment becomes permanently unusable.
Is there a way to disable the composer.googleapis.com service for longer than an hour and then get it working again after the service has been re-enabled? Is there something I can restart, or some way to clear the error state? Is there more I should do before disabling it?
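For reference, I'm toggling the service with the standard commands, nothing exotic:

    # Stop Composer charges by disabling the service.
    gcloud services disable composer.googleapis.com
    # Re-enable it later (this is the point where the environment stays broken).
    gcloud services enable composer.googleapis.com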
I am using composer-1.10.4-airflow-1.10.6 with Python 3.
Thanks.
No, there is no way to disable the composer.googleapis.com service for more than an hour and then have environments be functional after re-enablement.
GCP services are not meant to be enabled/disabled on the fly in this manner, and disablement of a service is meant to be performed with the intention of disabling it for the long term. Keeping a service disabled for long enough means Google-managed components created for the service (specifically for your project) will be decommissioned, and in Composer's case, this will render your environments permanently unusable.
The error state in the environment cannot be cleared. If you want to save on costs, you should delete Composer environments as opposed to deactivating the service entirely. The "service" is not cluster-like and isn't meant to be toggled on and off.
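A sketch of that approach (the environment name, location, and creation options are placeholders; recreation will usually need more flags to match your old setup):

    # Delete the environment to stop Composer charges.
    gcloud composer environments delete my-environment --location us-central1
    # Recreate it later; most creation options are omitted here for brevity.
    gcloud composer environments create my-environment --location us-central1

Note that deleting an environment does not delete its Cloud Storage bucket, so DAGs and logs can be kept around between environments.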

Cannot access GCP projects anymore

My development team has been sparingly trying out Google Cloud Platform for about 10 months. Every member was using the same account to access GCP, say team@example.com. We created three projects under this account.
Since about July, we cannot see these projects in the GCP console anymore. Instead, there is one project named My First Project, which we never created.
However, our original GCP projects still seem to exist, as we can still access for example some of the Google Cloud Functions via HTTP.
Therefore, I have the impression that either the connection between our account and the projects has been lost, or a second account with the same name has been accidentally created.
Additional curiosities:
Yesterday I tried to create a Google Cloud Identity account, using team@example.com. It did not work; when entering that address the input field showed an error like "Please use another email address. This is a private Google account." (It was actually in German, so I'm guessing the translation.)
When I go to accounts.google.com, the account selection screen offers team@example.com twice. No matter which entry I choose, I always end up in the GCP console with My First Project.
How can I recover my team's GCP projects?
Which Google support site may I consult to check on the account(s)?
Usually, there is a 1:1 mapping between a certain email address and a Google Account. However, this can be broken under certain situations - for example when creating / deleting / migrating G Suite or Cloud Identity accounts under the domain the email address uses.
If you hit such an edge case, there's not much you can do yourself. Reach out to GCP Support who should be able to resolve the issue for you.
Keep in mind that orphaned resources have a timer on them before they are deleted, so act quickly, and do not take the fact that apps still respond as a sign that they will continue to do so indefinitely.
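One quick sanity check from the CLI (assuming you can still authenticate as the affected account) is to compare what each sign-in can actually see:

    # Show which accounts gcloud knows about, and which one is active.
    gcloud auth list
    # List the projects visible to the active account.
    gcloud projects list

If the original three projects show up here but not in the console, that points at a duplicate-account mix-up rather than deleted projects.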

Cannot have more than 0 builds in queue for the account

I'm a newbie to AWS. With my free-tier account I'm trying to build my Node.js project with AWS CodeBuild, but I get this error:
Build failed to start: The build failed to start. The following error occurred: Cannot have more than 0 builds in queue for the account
I followed the basic AWS tutorial, leaving all the default settings so that AWS created every service, image, etc. for me.
I also stored the source code in an AWS CodeCommit repository.
Could anybody help me?
In my case, there was a security vulnerability in my account and AWS automatically raised a support ticket and suspended all resources that were linked to it. I had to fix the issue, and then AWS support resumed my service over chat.
I've seen a lot of answers around the web suggesting to call support, which is a great idea, but I was actually able to get around this on my own.
As the root user I went in and put in a current credit card; the one that was on file had expired. I then deleted my CodeBuild project and created a new one. Now my builds work! It makes sense that AWS just needed a valid payment method before it allowed me to use premium services.
My solution may not work for you, but I sure hope it does!
My error was Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1 when I tried to increase the concurrent build limit under the checkbox Restrict number of concurrent builds this project can start in the CodeBuild project configuration. I resolved it by writing to support to increase the limit. They increased it to 20 and it now works as expected. They did this even though I'm on the Basic support plan on AWS, if anyone's wondering.
My solution was to add a new service role name and set the concurrent build limit to 1. This worked.
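If you prefer the CLI over the console checkbox, the same project-level limit can be set like this (a sketch; the project name is a placeholder, and the value must not exceed your account-level limit):

    # Set the project-level concurrent build limit to 1.
    aws codebuild update-project \
        --name my-project \
        --concurrent-build-limit 1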
I think your issue is resolved by now, but I faced the same issue anyway. In my case I had a CodeBuild project connecting to a GitHub repository, and I had hard-coded an AWS access key and secret in the buildspec.yml file. AWS identified this as an unauthorized credential leak, so they placed security restrictions on the resources and opened a support issue. In such a case, look for the emails from AWS in which they explain the reason for this behavior and the steps to get it corrected.
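A safer pattern than hard-coding keys is to keep them out of buildspec.yml entirely, for example in SSM Parameter Store (a sketch; the parameter name is made up):

    # Store the secret once, outside of source control.
    aws ssm put-parameter \
        --name /codebuild/github-token \
        --type SecureString \
        --value "REPLACE_ME"

The buildspec can then reference it under its env / parameter-store section, and the CodeBuild service role needs ssm:GetParameters permission on that path.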

Missing SQL tab when using AD provider

I've deployed a copy of opserver, and it is working perfectly when using alladmin as the security setting. However, once I switch it to ad and configure the groups, the SQL tab goes away and I get an access-denied message if I try browsing directly to it. The dashboard still displays all SolarWinds data as expected.
The build I'm using is actually from November. I tried a more recent build, but I lose the network information from SolarWinds (the CPU and Mem graphs show, but Net is all blank).
Is there a separate place to configure the SQL permissions that I'm missing?
I think perhaps there was some caching going on for the hub that wasn't happening for the provider, because both are working now. Since it was a new security group, perhaps it hadn't replicated yet (causing the SQL auth to fail) while the dashboard provider was still using the previous authentication?
I also discovered a neat option while researching this, though: the GitHub page mentions that you can also specify security at the provider level in the JSON, using the AdminGroups and ViewGroups properties!
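For anyone curious, the provider-level override looks roughly like this (a sketch based only on the properties named above; the group and instance names are made up, and exact casing and file layout depend on your opserver version, so check the repo's example settings files):

    {
      "ViewGroups": "SQL-Viewers",
      "AdminGroups": "SQL-Admins",
      "instances": [
        { "name": "sql01.example.local" }
      ]
    }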