GCP: Is it possible to have access to a resource if you don't have project access? - google-cloud-platform

This is my first experience with Google Cloud Platform and I'm confused.
I've been granted access to a resource:
xxx@gmail.com has granted you the following roles for resource resource_name (projects/project_name/datasets/ClientsExport/tables/resource_name): BigQuery Data Editor
But when I open the BigQuery console, I don't see project_name or resource_name. Searching for resource_name also returns no results.
Is this the only access I have in the project? (I didn't receive any other access grants or emails.)
Could you please help me with this? Do I need some additional access before resource_name becomes visible, or is there another way to find the resource?
Thank you in advance!

The message means you have access to BigQuery data inside a table. You can query it from your own project; you are authorised to read it (and to write as well, because you are a Data Editor).
However, the table isn't in your project, it's in another project, which is why you don't see it directly in the BigQuery console. In addition, you don't have the right to read the metadata (roles/bigquery.metadataViewer) on the dataset of the other project. As a result, you also can't view the table schema in the console, but the bq CLI allows you to view it.
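As a minimal sketch (not from the original answer, and assuming the project, dataset and table names from the notification message), this is how the shared table could be queried and its schema read through the API even though the console hides it:

# Requires the google-cloud-bigquery package; names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="your_own_project")  # your billing/quota project

# Query the table you were granted access to in the other project.
query = "SELECT * FROM `project_name.ClientsExport.resource_name` LIMIT 10"
for row in client.query(query).result():
    print(dict(row))

# Roughly what `bq show project_name:ClientsExport.resource_name` does:
table = client.get_table("project_name.ClientsExport.resource_name")
print([(field.name, field.field_type) for field in table.schema])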
I had some discussions with the Google BigQuery team about this (because I ran into the same issue in my company), and updates should land by the end of the year (or early in 2022) to fix this "view" issue in the console.

It looks like you have an IAM permission on a specific resource in BigQuery but cannot see it from the GUI.
Some reasons you may not see it in the GUI:
You have permission to interact with BigQuery but don't have access to any of the data.
You aren't a member of the organization that owns the resources, and it has higher-level (org-level) permissions which prevent sharing resources outside of the org.
Your access is restricted to the command-line/app level. (If your account is a service account, then this is likely the case.)

Related

Connect BigQuery as a source to Data Fusion in another GCP project

I am trying to connect BigQuery in ProjectA to Data Fusion in ProjectB, and it's asking me to enter a service key file. I have tried uploading the service key file to Cloud Storage in ProjectB and providing the link, but it's asking me for a local file path.
Can someone help me with this?
Thanks in advance.
Can you try this: grant BigQuery permissions in project A to the Data Fusion service accounts of project B:
service-project_number@gcp-sa-datafusion.iam.gserviceaccount.com
project_number-compute@developer.gserviceaccount.com
Steps (a scripted sketch of these steps follows the list):
Navigate to the customer project that contains the CDF instance and copy the project number (this is found on the Home Page in the Project Info card)
Navigate to the project that contains the resources you would like to interact with.
In the sidebar, click on ‘IAM & Admin’
Click on ‘Add’ at the top of the page.
Provide the first service account name from the list above; be sure to replace project_number with the actual number you obtained in step 1.
Grant the Admin role for the resource you would like to interact with, e.g. BigQuery Admin for reading/writing to BigQuery. For BigQuery, you will also need to grant the BigQuery Data Owner role.
Repeat steps 5 & 6 for the second service account in the list above.
In your pipeline, ensure you define the correct Project Id for the sources/sinks. Using ‘auto-detect’ will default to the customer project that contains the CDF instance.
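Not part of the original answer: a rough scripted equivalent of the console steps above, using the Cloud Resource Manager API through google-api-python-client. The project ID, project number and role choices are placeholders and assumptions you would need to adapt.

from googleapiclient import discovery

resource_project = "project-a"        # project that owns the BigQuery data (project A)
cdf_project_number = "123456789012"   # project number of the project with the CDF instance (project B)

members = [
    f"serviceAccount:service-{cdf_project_number}@gcp-sa-datafusion.iam.gserviceaccount.com",
    f"serviceAccount:{cdf_project_number}-compute@developer.gserviceaccount.com",
]
roles = ["roles/bigquery.admin", "roles/bigquery.dataOwner"]

# Read the current IAM policy of project A, add the bindings, and write it back.
crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=resource_project, body={}).execute()
bindings = policy.setdefault("bindings", [])

for role in roles:
    binding = next((b for b in bindings if b["role"] == role), None)
    if binding is None:
        binding = {"role": role, "members": []}
        bindings.append(binding)
    for member in members:
        if member not in binding["members"]:
            binding["members"].append(member)

crm.projects().setIamPolicy(resource=resource_project, body={"policy": policy}).execute()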
Can you try downloading the service key JSON file to your local computer? Then put the file in some folder and provide the full path to that service key file in the BigQuery properties.

GCP Bigquery create authorized views by terraform

I have a specific question regarding authorized views in BigQuery and Terraform.
Situation: I have already created a simple Terraform script that creates some BigQuery datasets, tables, views and IAM entries. Specifically, I create two datasets (source_dataset and target_dataset), some tables in source_dataset and views in target_dataset that are based on source_dataset. The idea is to use BigQuery authorized views to separate permissions: the views should be accessible to a group of viewers who don't have access to the original source_dataset but are still able to query the views.
Question: Is it possible to authorize the views from the Terraform code? When I try, a chicken-and-egg issue emerges. I know it's possible to split the build configuration - write some of it in Terraform and authorize the views afterwards with Python code - but ideally it would be 100% Terraform.
Thanks.
It seems the chicken-and-egg issue has been resolved in an upcoming release:
bigquery dataset view access creates circular dependency #2686
The problem as defined:
The view can't be created because it depends on another dataset,
That other dataset can't be created because it depends on the view
A circular dependency exists.
The resolution:
Once it gets released (it should appear in version 3.17.0) it'll be usable as a google_bigquery_dataset_access resource. Here's a preview of what the docs page will look like: https://github.com/terraform-providers/terraform-provider-google/blob/master/website/docs/r/bigquery_dataset_access.html.markdown
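Until that resource is released, the workaround mentioned in the question (authorizing the views with a bit of Python after terraform apply) could look roughly like this sketch; the project, dataset and view names are placeholders:

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

source_dataset = client.get_dataset("my-project.source_dataset")
view_ref = bigquery.TableReference.from_string("my-project.target_dataset.my_view")

# Append an access entry that authorizes the view against the source dataset.
entries = list(source_dataset.access_entries)
entries.append(
    bigquery.AccessEntry(role=None, entity_type="view", entity_id=view_ref.to_api_repr())
)
source_dataset.access_entries = entries
client.update_dataset(source_dataset, ["access_entries"])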

Is there a way to create Quicksight analysis purely through code (boto3)?

What I currently have in my QuickSight account is a data source (Redshift), some datasets (some Redshift views) and an analysis (graphs and charts that use the datasets). I can view all of these in the AWS QuickSight console. But when I use boto3 to create a data source and datasets, nothing shows up in the console. They do, however, show up when I use the list_data_sources and list_data_sets calls.
After this, I need to create through code all the graphs that I created manually. I can't currently find an option to do this through code. There is a create_template API call which is supposed to create a template from an existing QuickSight analysis, but it requires the ARN of the analysis, which I can't find.
Any suggestions on what to do?
Note: this only answers why the data sets/sources do not appear in the console. As for the other question, I assume mjgpy3 was of some help.
Summary
Add the permissions at the bottom of this post to your data set and data source in order for them to appear in the console. Make sure to fill in the principal ARN with your details. (A boto3 sketch of granting these permissions follows the listings below.)
Details
In order for data sets and data sources to appear in the console when created via the API, you must ensure that the correct permissions have been added to them. Without adding the correct permissions, it is true that the CLI lists them whereas the console does not.
If you have created data sets/sources via the console, you can use the CLI (aws quicksight describe-data-set-permissions and aws quicksight describe-data-source-permissions) to view what permissions AWS gives them so that your account can interact with them.
I've tested this and these are what AWS assigns them as of 25/03/2020.
Data Set permissions:
"permissions": [
{
"Principal": "arn:aws:quicksight:<region>:<aws_account_id>:user/default/{IAM user name}",
"Actions": [
"quicksight:UpdateDataSetPermissions",
"quicksight:DescribeDataSet",
"quicksight:DescribeDataSetPermissions",
"quicksight:PassDataSet",
"quicksight:DescribeIngestion",
"quicksight:ListIngestions",
"quicksight:UpdateDataSet",
"quicksight:DeleteDataSet",
"quicksight:CreateIngestion",
"quicksight:CancelIngestion"
]
}
]
Data Source permissions:
"permissions": [
{
"Principal": "arn:aws:quicksight:<region>:<aws_account_id>:user/default/{IAM user name}",
"Actions": [
"quicksight:UpdateDataSourcePermissions",
"quicksight:DescribeDataSource",
"quicksight:DescribeDataSourcePermissions",
"quicksight:PassDataSource",
"quicksight:UpdateDataSource",
"quicksight:DeleteDataSource"
]
}
]
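As a sketch of how those permissions can be granted from code (account ID, region, user name and data set ID below are placeholders), you can grant them on an existing data set, or pass them up-front at creation time:

import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

principal = "arn:aws:quicksight:us-east-1:123456789012:user/default/my-iam-user"
dataset_actions = [
    "quicksight:UpdateDataSetPermissions",
    "quicksight:DescribeDataSet",
    "quicksight:DescribeDataSetPermissions",
    "quicksight:PassDataSet",
    "quicksight:DescribeIngestion",
    "quicksight:ListIngestions",
    "quicksight:UpdateDataSet",
    "quicksight:DeleteDataSet",
    "quicksight:CreateIngestion",
    "quicksight:CancelIngestion",
]

# Grant the permissions on an already-created data set so it appears in the console.
quicksight.update_data_set_permissions(
    AwsAccountId="123456789012",
    DataSetId="my-data-set-id",
    GrantPermissions=[{"Principal": principal, "Actions": dataset_actions}],
)

# The same idea applies to the data source via update_data_source_permissions,
# or by passing Permissions=[{"Principal": ..., "Actions": [...]}] to
# create_data_set / create_data_source in the first place.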
It sounds like your smaller question is regarding the ARN of the analysis.
The format of analysis ARNs is
arn:aws:quicksight:$AWS_REGION:$AWS_ACCOUNT_ID:analysis/$ANALYSIS_ID
Where
$AWS_REGION is replaced with the region in which the analysis lives
$AWS_ACCOUNT_ID is replaced with your AWS account ID
$ANALYSIS_ID is replaced with the analysis ID
If you're looking for the $ANALYSIS_ID it's the GUID-looking thing on the end of the URL for the analysis in the QuickSight URL
So, if you were on an analysis at the URL
https://quicksight.aws.amazon.com/sn/analyses/018ef6393-2c71-4842-9798-1aa2f0902804
the analysis ID would be 018ef6393-2c71-4842-9798-1aa2f0902804 (this is a fake ID I injected for this example).
Your larger question seems to be whether you can use the create_template API to duplicate your analysis. The answer at this moment (12/16/19) is, unfortunately, no.
You can use the create_dashboard API to publish a Dashboard from a template made with create_template but you can't create an Analysis from a template.
I'm answering this bit just to clarify since you may actually be okay with creating a dashboard (basically the published version of an analysis) rather than another analysis.
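For completeness, a rough sketch of the create_template -> create_dashboard path described above (every ID, ARN and name is a placeholder; note that template creation is asynchronous, so the dashboard call may need to wait until the template finishes creating):

import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")
account_id = "123456789012"

analysis_arn = f"arn:aws:quicksight:us-east-1:{account_id}:analysis/018ef6393-2c71-4842-9798-1aa2f0902804"
dataset_arn = f"arn:aws:quicksight:us-east-1:{account_id}:dataset/my-data-set-id"

# Capture the existing analysis as a template.
template = quicksight.create_template(
    AwsAccountId=account_id,
    TemplateId="my-template",
    SourceEntity={
        "SourceAnalysis": {
            "Arn": analysis_arn,
            "DataSetReferences": [
                {"DataSetPlaceholder": "main", "DataSetArn": dataset_arn}
            ],
        }
    },
)

# Publish a dashboard from that template (an analysis cannot be created this way).
quicksight.create_dashboard(
    AwsAccountId=account_id,
    DashboardId="my-dashboard",
    Name="My dashboard",
    SourceEntity={
        "SourceTemplate": {
            "Arn": template["Arn"],
            "DataSetReferences": [
                {"DataSetPlaceholder": "main", "DataSetArn": dataset_arn}
            ],
        }
    },
)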
There are multiple ways to find the associated analysis ID. Use any of the following (a boto3 sketch of the API options follows this list):
A dashboard URL has the dashboard ID included. Use this ID to execute the describe-dashboard API call and you will see the analysis ARN in the source entity.
Click on the "Save as" option on the dashboard and it will take you to the associated analysis. [One might not see this option if the dashboard was created from a template.]
A dashboard ID can also be found using the list_dashboards API call. Print all the dashboard IDs and names and match the ID with the given dashboard name. Look at the whole list, because a dashboard ID is unique but the dashboard name is not; one can have multiple dashboards with the same name.
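A sketch of those API options (account ID, region and dashboard ID are placeholders):

import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")
account_id = "123456789012"

# Print every dashboard ID and name (IDs are unique, names are not).
for summary in quicksight.list_dashboards(AwsAccountId=account_id)["DashboardSummaryList"]:
    print(summary["DashboardId"], summary["Name"])

# The analysis ARN is reported as the dashboard version's source entity.
dashboard = quicksight.describe_dashboard(
    AwsAccountId=account_id, DashboardId="my-dashboard-id"
)["Dashboard"]
print(dashboard["Version"]["SourceEntityArn"])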
Yes, you can create a Lambda function and trigger it with a scheduled (cron) rule:
import boto3

quicksight = boto3.client('quicksight')

# Kick off a SPICE refresh (ingestion) for the data set.
response = quicksight.create_ingestion(AwsAccountId=XXXXXXX,
                                       DataSetId=YYYY,
                                       IngestionId=ZZZZ)
https://aws.amazon.com/blogs/big-data/automate-dataset-monitoring-in-amazon-quicksight/
https://aws.amazon.com/blogs/big-data/event-driven-refresh-of-spice-datasets-in-amazon-quicksight/
I've been playing with this as well and ran into the same issue. Make sure that your permissions are set up properly for the data source and the data set by referencing the QuickSight user as follows:
arn:aws:quicksight:{region}:xxxxxxxxxx:user/default/{user}
I would include all the QuickSight permissions found in the docs to start with and shave them down from there. If nothing else, create the data source/set from the console, and then use the describe-* CLI calls to see what permissions they get.
It's kind of wonky.

Spotify spark-bigquery connector issue while using BigQuerySelect method from Dataproc cluster

I am new to BigQuery and GCP. To access BigQuery data we are using the Spotify spark-bigquery connector as provided here.
We are able to use sqlContext.bigQueryTable("project_id:dataset.table") and it works.
When we use sqlContext.bigQuerySelect("SELECT * FROM [project_id:dataset.table]") it gives this error:
The user xyz-compute@developer.gserviceaccount.com does not have permission to query table.
We have done the necessary settings with respect to the JSON key file and its location, but we have no clue where it is taking this user account from.
Please provide help regarding its cause and how to fix it in code.
This error indicates that the service account you are using (xyz-compute@developer.gserviceaccount.com) doesn't have enough IAM permissions; it is most likely the Compute Engine default service account that your Dataproc cluster runs as, which is why it shows up even though you configured a JSON key file. Go to your IAM settings and make sure it has at least the BigQuery Data Viewer role.

Sitecore allow role to publish content in specific areas only

I am trying to create a role within Sitecore which can publish content, but only within a specific area(s) of the site. I've added the standard Sitecore\Client Publishing role to my role, but I can't see how to prevent the role from being able to publish all areas of the site. I've looked at the Security editor and the Access viewer, but setting the write access of the sections only seems to affect the ability to edit those sections and has no effect on the ability to publish on those sections.
Workflow is the typical way this is handled. Giving roles access to approve (this could be called 'publish') content in certain sections of the content tree is the best way to achieve what you are describing. Combine this with an auto-publish action to make it more user friendly.
One thing to keep in mind when using this method is referenced items (for example, images from the media library that the content may be using). Take a look at the 'Publishing Spider' module on the shared source library: http://trac.sitecore.net/PublishingSpider
EDIT: Update
I recently discovered this setting in the web.config: "Publishing.CheckSecurity". If set to true, this setting will only publish items if the user has read + write on the item and will only remove items from the web DB if the user has delete permissions.
I had a similar situation once. I created roles per section which only had read and write access to that section and nowhere else (say 'editor section 1'), and another role which only had publishing permission for that section (say 'publisher section 1'). Then I added the 'editor section 1' role to the 'publisher section 1' role, which gives you a role that can publish only that specific section.
You do not need multiple workflows, same workflow with multiple roles can also achieve this goal
The answer to this is to set Publishing.CheckSecurity to true.
You need to find this section inside the web.config file:
<!-- PUBLISHING SECURITY
Check security rights when publishing?
When CheckSecurity=true, Read rights are required for all source items. When it is
determined that an item should be updated or created in the target database,
Write right is required on the source item. If it is determined that the item
should be deleted from target database, Delete right is required on the target item.
In summary, only the Read, Write and Delete rights are used. All other rights are ignored.
Default value: false
-->
<setting name="Publishing.CheckSecurity" value="false" />
Set the value to "true".
But again, you have to govern the security tightly and assign user roles properly. Failing to do so, you will experience buggy publishing.
Hope that helps.