I would like to access a list of all assets inside an organization. However, I am unable to run gcloud asset at the organization level from the SDK. I can do it without any issues from the console, but I need to use the SDK to create a script.
I have set the project ID to the organization ID, but I get the following error:
The value of ``core/project'' property is set to project number. To use this command, set ``--project'' flag to PROJECT ID or set ``core/project'' property to PROJECT ID
There is no project ID to be set other than the organization ID.
This is what I see in the console.
Name          ID
sample-org    123456789
project-1     project-1-34d3
project-2     project-2-ds2f
...           ...
Project and Organization represent different resources, and setting one to the value of the other is meaningless.
Typing gcloud asset list --help or Googling "gcloud asset list" provides an explanation:
To list assets at the organization level, use the --organization flag:
ORG="..." # your organization ID
gcloud asset list --organization=${ORG}
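If you don't know the organization ID, you can look it up first; a minimal sketch (this assumes the caller has permission to view the organization, and 123456789 is just the example ID from the question):
gcloud organizations list
ORG=123456789   # the ID column from the listing above
gcloud asset list --organization=${ORG} --format=json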
Related
When listing projects, we can filter on name using: gcloud projects list --filter='name:xxxx*'
How can we store this filter in the gcloud configuration so that every time we run gcloud projects list we get the filtered projects?
Note: I don't want a command alias, I need a "per gcloud configuration" filter.
Thanks
@guillaume-blaquiere is correct: you cannot.
However, users are only able to list projects to which they have access.
So, if you're trying to limit the list of projects that your users (or their credentials) can list, a more robust solution would be to limit the projects to which those users have access.
Depending on whether you're using an organization, you can do this either at the organization/folder level or directly at the project level.
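As an illustrative sketch (the folder ID, member, and role below are placeholders, not values from the question), the binding can be granted at the folder level or per project:
gcloud resource-manager folders add-iam-policy-binding 123456789012 \
  --member="user:alice@example.com" --role="roles/viewer"
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:alice@example.com" --role="roles/viewer"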
I'm struggling to execute a query with the BigQuery Python client from inside a custom training job on Vertex AI (Google Cloud Platform).
I have built a Docker image that contains this Python code, then pushed it to Container Registry (eu.gcr.io).
I am using this command to deploy:
gcloud beta ai custom-jobs create --region=europe-west1 --display-name="$job_name" \
--config=config_custom_container.yaml \
--worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri="$docker_img_path" \
--args="${model_type},${env},${now}"
I have even tried the --service-account option to specify a service account with the BigQuery Admin role, but it did not work.
According to this link
https://cloud.google.com/vertex-ai/docs/general/access-control?hl=th#granting_service_agents_access_to_other_resources
the Google-managed service accounts for the AI Platform Custom Code Service Agent (Vertex AI) already have the right to access BigQuery, so I do not understand why my job fails with this error:
google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/*******/jobs?prettyPrint=false:
Access Denied: Project *******:
User does not have bigquery.jobs.create permission in project *******.
I have replaced the project ID with *******.
Edit:
I have tried several configurations; my last config YAML file only contains this:
baseOutputDirectory:
  outputUriPrefix:
Using the serviceAccount field does not seem to change the actual configuration, unlike the --service-account option.
Edit 14-06-2021: Quick fix
As @Ricco.D said:
try explicitly defining the project_id in your bigquery code if you
have not done this yet.
bigquery.Client(project=[your-project])
This fixed my problem. I still do not know the cause.
To fix the issue, you need to explicitly specify the project ID in the BigQuery code.
Example:
bigquery.Client(project=[your-project], credentials=credentials)
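As a side note (not the author's fix, just an alternative sketch assuming the container runs a shell entrypoint; the project ID and script name are placeholders): the client library can also resolve the default project from the GOOGLE_CLOUD_PROJECT environment variable, which avoids hard-coding the project in the code:
export GOOGLE_CLOUD_PROJECT=my-project-id
python train.py  # bigquery.Client() then infers the default project from the environment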
I need to list all the instances, containers, functions, notebooks, buckets, Dataproc clusters, and Composer environments running under a project in all regions/locations.
Is it possible to list resources across all regions/locations? Either gcloud or a Python script would work for me.
My ultimate goal after listing is to tag each resource according to its name.
Thanks
You can use the Cloud Asset Inventory feature and query your project like this:
gcloud asset search-all-resources --scope=projects/<PROJECT_ID> --page-size=500 --format=json
There is more detail in the documentation about the query format.
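For example, a name-based query might look like this (the project ID and resource name are placeholders):
gcloud asset search-all-resources --scope=projects/<PROJECT_ID> --query="name:my-instance" --format=json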
Not all resources are supported. You can find the full list here (for example, Cloud Run isn't supported yet, but it's coming soon!).
If you want to access it through the console, go to the IAM & Admin menu, then select Asset Inventory.
There you can see the list of assets.
Click the Resource tab if you want to download all the details in CSV format.
In the asset search you will get an abundance of irrelevant data. It's better to restrict the query to the asset types you consider relevant, like:
compute.googleapis.com/Instance
storage.googleapis.com/Bucket
dataproc.googleapis.com/Cluster
container.googleapis.com/Cluster
cloudfunctions.googleapis.com/CloudFunction
dataflow.googleapis.com/Job //Notebook
gcloud asset search-all-resources --asset-types='compute.googleapis.com/Instance,storage.googleapis.com/Bucket' --query='labels.name:*' --format='table(name, assetType, labels)'
Is there any command to list all GCP project quotas in a single Excel file with only one header row at the top? I tried applying a FOR loop over quota management, however it outputs the header again for every new project that is appended.
gcloud compute project-info describe --flatten=quotas --format='csv(quotas.metric,quotas.limit,quotas.usage)' will provide this for one project. However, I need it for all projects at the org level and folder level in a single Excel file.
I crafted this bash code that can help you iterate over all the projects associated with the account used with gcloud. Feel free to modify this code according to your use case.
#!/bin/bash
# Single header row
echo "ProjectId,Metric,Quota,Usage"
gcloud projects list --format="csv[no-heading](projectId,name)" |\
while IFS="," read -r ID NAME
do
  RESULT=$(\
    gcloud compute project-info describe --project ${ID} \
      --flatten=quotas \
      --format="csv[no-heading](quotas.metric,quotas.limit,quotas.usage)")
  # Prefix ${ID} to each line in the result
  for LINE in ${RESULT}
  do
    echo ${ID},${LINE}
  done
done
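To get the single file asked for, the script can be saved and its output redirected; the file names here are just examples:
bash list_quotas.sh > all_project_quotas.csv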
It is important that the authenticated account has the Project Viewer role on all the associated projects; also, the Compute Engine API must be enabled in those projects.
Having said that, you can create a service account at the organization or folder level in order to get all the necessary information.
I am trying to connect BigQuery in ProjectA to Data Fusion in ProjectB, and it's asking me to enter a service key file. I have tried uploading the service key file to Cloud Storage in ProjectB and providing the link, but it's asking me for a local file path.
Can someone help me on this?
Thanks in advance.
Can you try this: grant BigQuery permissions in project A to the Data Fusion service accounts of project B:
service-project_number@gcp-sa-datafusion.iam.gserviceaccount.com
project_number-compute@developer.gserviceaccount.com
Steps:
1. Navigate to the customer project that contains the CDF instance and copy the project number (this is found on the Home page in the Project Info card).
2. Navigate to the project that contains the resources you would like to interact with.
3. In the sidebar, click on 'IAM & Admin'.
4. Click on 'Add' at the top of the page.
5. Provide the first service account name from the list above; be sure to replace project_number with the actual number you obtained in step 1.
6. Grant the Admin role for the resource you would like to interact with, e.g. BigQuery Admin for reading/writing to BigQuery. For BigQuery, you will also need to grant the BigQuery Data Owner role (a gcloud equivalent is sketched after these steps).
7. Repeat steps 5 and 6 for the second service account in the list above.
8. In your pipeline, ensure you define the correct project ID for the sources/sinks. Using 'auto-detect' will default to the customer project that contains the CDF instance.
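If you prefer the CLI, a rough sketch of steps 4-6 might look like this (PROJECT_A_ID and PROJECT_B_NUMBER are placeholders, not values from the question):
gcloud projects add-iam-policy-binding PROJECT_A_ID \
  --member="serviceAccount:service-PROJECT_B_NUMBER@gcp-sa-datafusion.iam.gserviceaccount.com" \
  --role="roles/bigquery.admin"
gcloud projects add-iam-policy-binding PROJECT_A_ID \
  --member="serviceAccount:service-PROJECT_B_NUMBER@gcp-sa-datafusion.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataOwner"
# Repeat both bindings for PROJECT_B_NUMBER-compute@developer.gserviceaccount.com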
Can you try downloading the service key JSON file locally, i.e. to your local computer? Then put the file into some folder and provide the full path to that service key file in the BigQuery properties.