GCP Vertex AI "Enable necessary APIs" when already enabled - google-cloud-platform

I am new to GCP's Vertex AI and suspect I am running into an error from my lack of experience, but Googling the answer has brought me no fruitful information.
I created a Jupyter Notebook in AI Platform but wanted to schedule it to run at a set period of time, so I was hoping to use Vertex AI's Execute function. At first when I tried accessing Vertex I was unable to do so because the API had not been enabled in GCP. My IT team then enabled the Vertex AI API and I can now utilize Vertex. Here is a picture showing it is enabled: [Enabled API picture]
I uploaded my notebook to a JupyterLab instance in Vertex, and when I click on the Execute button, I get an error message saying I need to "Enable necessary APIs", specifically for Vertex AI API. I'm not sure why this is considering it's already been enabled. I try to click Enable, but it just spins and spins, and then I can only get out of it by closing or reloading the tab.
One other thing I want to call out in case it's a settings issue is that currently my Managed Notebooks tab says "PREVIEW" in the Workbench. I started thinking maybe this was an indicator that there was a separate feature that needed to be enabled to use Managed Notebooks (which is where I can access the Execute button from). When I click on the User-Managed Notebooks and open JupyterLab from there, I don't have the Execute button.
The GCP account I'm using does have billing enabled.
Can anyone point me in the right direction to getting the Execute button to work?

Based on #JamesS comments, the issue was solved by adding the necessary permissions to his individual account, since that is the account configured on OP's Managed Notebook instance, which has an access mode of Single user only.
Based on my testing when I tried to replicate the scenario, the "Enable necessary APIs" message box keeps showing when the user has no "Vertex AI User" role assigned. In conclusion, below are the minimum roles required to create a scheduled run on a Managed Notebook instance (a gcloud sketch for granting them follows the list).
Notebook Admin - grants access to the notebook instance and lets the user open it through JupyterLab and run code in the notebook.
Vertex AI User - lets the user create a scheduled run on the notebook instance, since scheduled runs are created through the Vertex AI API itself.
Storage Admin - creating a scheduled run requires a Google Cloud Storage bucket location where the job will be saved.
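If the roles are granted at the project level, something like the following gcloud commands should work; this is only a sketch, and the project ID and user email are placeholders to replace with your own:

# Grant the three minimum roles above to the account that runs the notebook
gcloud projects add-iam-policy-binding MY_PROJECT_ID --member="user:notebook-user@example.com" --role="roles/notebooks.admin"
gcloud projects add-iam-policy-binding MY_PROJECT_ID --member="user:notebook-user@example.com" --role="roles/aiplatform.user"
gcloud projects add-iam-policy-binding MY_PROJECT_ID --member="user:notebook-user@example.com" --role="roles/storage.admin"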
Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.
Feel free to edit this answer for additional information.

Related

How can I change the security setting and enable terminal for a Vertex AI managed notebook?

I created a notebook using Vertex AI without enabling terminal first, but I want to enable terminal now so that I can run a Python file from a terminal. Is there any way I can change the setting retrospectively?
As of now, when you create a Notebook instance with the "Enable terminal" option unchecked, you cannot re-enable this option once the Notebook instance is already created.
The only workaround is to recreate the Notebook instance with the terminal enabled.
Right now, there is already a Feature Request for this. You can star the public issue tracker feature request and add ‘Me too’ in the thread. This will bring more attention to the request as more users request support for it.

Error when trying to connect to a Cloud SQL instance using the Cloud Shell

I've had a Cloud SQL instance for about a year now.
I always accessed it the same way:
I would go to my project on the Cloud Console.
Click on the Cloud Shell icon at the top right (a small right pointing arrow).
A black shell screen would pop up where I would type
gcloud sql connect <my instance> --user=root.
Enter my password.
Now, all of a sudden, I am getting an error message saying:
There was no instance found at projects//instances/ or you are not authorized to connect to it.
I am the owner of the project, and also have Admin rights to the Cloud SQL instance. The project and instance are still there, and my app that accesses the data stored in the instances' database is working fine - therefore I know the database is also present, otherwise my app wouldn't work.
I didn't touch or change anything in the Cloud SQL instance. Suddenly, I simply can't access my database using the exact same procedure I have been using almost every day over the past year now.
I am able to access the database using a local Python script on my laptop and the Cloud SQL Proxy, but I would like to access it from the Cloud Shell again.
Any ideas on what could the problem be?
gcloud components update - updates all of your installed components to the latest version
gcloud init - reinitializes the gcloud configuration. It performs the following setup steps:
Authorizes gcloud and other SDK tools to access Google Cloud Platform using your user account credentials, or from an account of your choosing whose credentials are already available.
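A minimal example session, assuming your instance is named my-instance and MY_PROJECT_ID is a placeholder for your project; the empty segments in the error path (projects//instances/) suggest the active project may also have been lost, so setting it explicitly is worth a try:

gcloud components update
gcloud init
# Or, if you only need to restore the active project rather than redo the full setup:
gcloud config set project MY_PROJECT_ID
# Then retry the connection
gcloud sql connect my-instance --user=root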
It seems like there was a problem with the GCP Cloud Shell (even though there was no mention of it on the GCP error tracking page). When I logged back in today and followed the same process as above, everything worked well.
Looks like GCP Cloud Shell could occasionally go rogue and start producing errors. Word of advice: don't panic when this happens (like I did) and start resetting, rebooting, and messing things up. Just wait a day and check back again.

Can not remove a container image from the Google Container Registry from Console

I have the project OWNER role, but I cannot remove an image from the console.
Delete is disabled and there is a tooltip saying "you do not have permission to delete this image".
With gcloud everything works.
I removed the buckets _cloudbuild and artifacts.appspot.com, but they were restored after the first image build.
How can I resolve this situation?
If everything works correctly using gcloud, that means the GCP API is not the issue; maybe it is a glitch in the console.
First ensure that your Cloud Console session corresponds to the user with owner permissions (it could be that you've signed in to gcloud with a different account).
If that is correct, I'd file a support request or create an issue tracker case.
This seems like a case that needs to be addressed by the GCP team and StackOverflow is not the best channel to get support for that.
Hope that helps.
According to Google's notice:
Project owners and editors are currently unable to edit tags or delete images in the Container Registry UI. The workarounds are to either use the command line or grant the Storage Object Viewer IAM role. A fix is expected by December 5th.
I'll put the link to the docs, so that other people know what to do until the issue is fixed:
I use this command to delete images while the Console does not work:
gcloud container images delete [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG] --force-delete-tags
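For the other workaround from the notice above, granting the Storage Object Viewer IAM role, something like this should work; it is only a sketch, and the project ID and user email are placeholders:

# Grant Storage Object Viewer, the workaround suggested in the notice, to the affected user
gcloud projects add-iam-policy-binding MY_PROJECT_ID --member="user:owner@example.com" --role="roles/storage.objectViewer"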

Google Cloud service stopped and never restarting

I have been using the Google Cloud speech recognition service for some time, through a python application.
Due to accidentally copying my Google Cloud json key file to a shared GitHub location (I was doing a backup), I suddenly got a warning from Google Cloud that I was violating the rules, since the json key is private. I promptly removed the file, but nevertheless I got an email saying that the resources for my project "santo1" were suspended, citing "cryptocurrency mining" as the reason, which I know nothing about.
I applied to reactivate, and my appeal was accepted promptly, saying that my resources for santo1 were reinstated.
Unfortunately, the speech recognition still didn't work.
Launching it from Python, it records from the microphone but gets no answer from the service, and there are no error messages at all.
Then I attempted the following:
regenerate the API key
create a new json key
create a new project, with its own json key, under my same Google account
as suggested by the Google Cloud chat operator, manually click Play on the VM resource that appeared stopped
create a new Gmail account with another new project, set up with billing and everything (also reconfigured through "gcloud init")
None of these attempts worked.
I need assistance on this, as the chat operator didn't seem capable of telling me more.
Thank you in advance
Best regards
I would recommend contacting GCP support for this case, as your cloud service could still be in suspended status even though your access is OK (a quick way to check is sketched below).
Apparently, the access key was stolen and used by hackers to do crypto mining on your GCP account, hence your service account was banned.
If it's a testing account/project, you should consider creating a new project rather than continuing with it; the hacker could have created other services which you may not notice until it's too late.
Worst case, it's your PROD service; then you'd better review the bill and transaction report thoroughly.
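As a way to check whether the project is actually active again, a minimal sketch; the key path and audio file below are made-up placeholders:

# Point the Python client libraries at the regenerated key file
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/santo1-speech.json"
# Confirm the Speech-to-Text API is still enabled on the project
gcloud services list --enabled | grep speech
# Run a quick recognition test against a short local audio file
gcloud ml speech recognize ./test.flac --language-code=en-US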

How to set up ray project autoscaling on GCP

I am having real difficulty setting up ray auto-scaling on google cloud compute. I can get it to work on AWS no problem, but I keep running into the following error when running ray up:
googleapiclient.errors.HttpError: https://cloudresourcemanager.googleapis.com/v1/projects?alt=json returned "Service accounts cannot create projects without a parent.">
My project is part of an organization, so I don't understand where this is coming from, or why it would need to create a project in the first place. I have entered my project id in the yaml file like I normally do for AWS.
Thank you very much. I appreciate any help I can get!!
The error message referring to a service account, together with the fact that the project already exists, suggests that the googlecloudapiclient used by the Ray Autoscaler is authenticated for a service account that doesn't have access to the project.
If this is true, then here's what I believe happens. Typically, when running Ray GCP Autoscaler, it will first check if the project with the given id exists. In your case, this request returns "not found" because there's no project with the given id associated with the service account. Now, because the project did not exist, Ray will automatically try to create one for you. Typically, if we created a new GCP project with a user account (i.e. non-service account), the newly created project would be associated with the user account's default organization. Service accounts, however, must specify a parent organization explicitly when creating a new project. If we look at the ray.autoscaler.config._create_project function, we see that the arguments passed to the projects.create method omit the 'parent' argument, which explains why you see the error.
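For comparison, this is roughly what naming the parent explicitly looks like when creating a project from the command line; the project and organization IDs below are made up:

# The --organization flag supplies the 'parent' that the autoscaler's projects.create call omits
gcloud projects create my-ray-project --organization=123456789012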
To verify if this is true (and hopefully fix the problem), you could change the account used for authenticating with the googlecloudapiclient. I believe that the credentials used for the googlecloudapiclient requests are the same as used by the Google Cloud SDK, so you should be able to configure the accounts using the gcloud auth login command.
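Concretely, something along these lines; this is a sketch, the project ID is a placeholder, and whether Ray reads the SDK credentials or the application default credentials may depend on your environment, so switching both is the safer option:

# Switch the account used by gcloud and by the Google API client libraries
gcloud auth login
gcloud auth application-default login
# Make sure the active project matches the project_id in your cluster yaml
gcloud config set project my-ray-project
gcloud projects describe my-ray-project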
I think the Ray Autoscaler could be improved by either allowing user to explicitly specify the parent organization when creating a new project, or at least by providing a more elaborate error message for this particular case.
I hope this fixes your problem. If it doesn't, and you believe that it's a problem with the Autoscaler, don't hesitate to open an issue or feature request on the Ray issues page!