I want to remove multiple dataset permissions via the CLI, ideally in one go. Is there a way to do this?
For example:
abc@gmail.com has roles/bigquery.dataOwner on dataset demo_aa
xyzGroupSA@gmail.com has roles/bigquery.user on dataset demo_bb
I want to remove these email IDs' permissions from the respective datasets via the CLI with "bq".
[I went through the reference https://cloud.google.com/bigquery/docs/dataset-access-controls#bq_1 , but it relies on a local file and is quite lengthy. What about when you are on a jump server in a production environment and need to do this by running commands?]
You can delegate this responsibility to CI/CD, for example.
Solution 1:
You can create a project structure like this:
my-project
------dataset_accesses.json
Run the script in Cloud Shell on your production project, or in a Docker container based on the gcloud-sdk image.
Use a service account that has permission to update dataset access.
Run bq update, passing the access file with --source:
bq update --source dataset_accesses.json mydataset
dataset_accesses.json :
{
  "access": [
    {
      "role": "READER",
      "specialGroup": "projectReaders"
    },
    {
      "role": "WRITER",
      "specialGroup": "projectWriters"
    },
    {
      "role": "OWNER",
      "specialGroup": "projectOwners"
    },
    {
      "role": "READER",
      "specialGroup": "allAuthenticatedUsers"
    },
    {
      "role": "READER",
      "domain": "domain_name"
    },
    {
      "role": "WRITER",
      "userByEmail": "user_email"
    },
    {
      "role": "WRITER",
      "userByEmail": "service_account_email"
    },
    {
      "role": "READER",
      "groupByEmail": "group_email"
    }
  ],
  ...
}
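To come back to the original question (removing specific users from several datasets in one go, from a jump server with no prepared local file), here is a rough sketch using only bq and jq; the dataset names, email addresses, and the availability of jq are assumptions for illustration:
#!/bin/bash
# Sketch: drop a user's access entries from each dataset (placeholders below).
declare -A removals=(
  ["demo_aa"]="abc@gmail.com"
  ["demo_bb"]="xyzGroupSA@gmail.com"
)

for ds in "${!removals[@]}"; do
  email="${removals[$ds]}"
  # Export the current dataset definition, filter out every access entry
  # that references the e-mail, then push the trimmed definition back.
  bq show --format=prettyjson "${ds}" \
    | jq --arg email "$email" \
        '.access |= map(select((.userByEmail // .groupByEmail // "") != $email))' \
    > "/tmp/access_${ds}.json"
  bq update --source "/tmp/access_${ds}.json" "${ds}"
done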
Solution 2
Use Terraform to update the permissions on your dataset:
resource "google_bigquery_dataset_access" "access" {
dataset_id = google_bigquery_dataset.dataset.dataset_id
role = "OWNER"
user_by_email = google_service_account.bqowner.email
}
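For this snippet to stand alone, the referenced dataset and service account also need to exist in the configuration; a minimal sketch of those two supporting resources (the IDs are illustrative):
resource "google_bigquery_dataset" "dataset" {
  dataset_id = "demo_aa"
}

resource "google_service_account" "bqowner" {
  account_id = "bqowner"
}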
With Terraform, it is also easy to pass a map of accesses and create one resource per entry.
JSON file:
{
  "datasets_members": {
    "dataset_your_group1": {
      "dataset_id": "your_dataset",
      "member": "your_group@loreal.com",
      "role": "roles/bigquery.dataViewer"
    },
    "dataset_your_group2": {
      "dataset_id": "your_dataset",
      "member": "your_group2@loreal.com",
      "role": "roles/bigquery.dataViewer"
    }
  }
}
locals.tf :
locals {
  datasets_members = jsondecode(file("${path.module}/resource/datasets_members.json"))["datasets_members"]
}
resource "google_bigquery_dataset_access" "accesses" {
  for_each       = local.datasets_members
  dataset_id     = each.value["dataset_id"]
  role           = each.value["role"]
  group_by_email = each.value["member"]
}
This also works with google_bigquery_dataset_iam_binding, where members use the IAM member format (e.g. group:your_group@loreal.com).
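For reference, a minimal sketch of the google_bigquery_dataset_iam_binding variant (the group address and role are illustrative); note that a binding is authoritative for its role, so any member not listed for that role is removed:
resource "google_bigquery_dataset_iam_binding" "viewer" {
  dataset_id = google_bigquery_dataset.dataset.dataset_id
  role       = "roles/bigquery.dataViewer"

  members = [
    "group:your_group@loreal.com",
  ]
}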
Related
AWS maintains a secret versioning system: a new version is created when the secret value is updated or when the secret is rotated.
I am in the process of bringing existing secrets in AWS under the purview of Terraform. As step 1, I declared all the Terraform resources I needed:
resource "aws_secretsmanager_secret" "secret" {
name = var.secret_name
description = var.secret_description
kms_key_id = aws_kms_key.main.id
recovery_window_in_days = var.recovery_window_in_days
tags = var.secret_tags
policy = data.aws_iam_policy_document.secret_access_policy.json
}
// AWS secrets manager secret version
resource "aws_secretsmanager_secret_version" "secret" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = jsonencode(var.secret_name_in_secrets_file)
}
Next I imported the resources.
Import the secret into state:
terraform import module.<module_name>.aws_secretsmanager_secret.secret arn:aws:secretsmanager:<region>:<account_id>:secret:<secret_name>-<hash_value>
Import the secret version into state (the ID after the | needs quoting in a shell):
terraform import module.<module_name>.aws_secretsmanager_secret_version.secret 'arn:aws:secretsmanager:<region>:<account_id>:secret:<secret_name>-<hash_value>|<unique_secret_id aka AWSCURRENT>'
After this, I expected the Terraform plan to only involve changes to the resource policy, but Terraform tried to destroy and recreate the secret version, which did not make sense to me.
After going ahead with the plan, the secret version that was initially associated with the AWSCURRENT staging label (the one I used in the import above) moved to the AWSPREVIOUS staging label, and a new AWSCURRENT version was created.
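For reference, listings like the ones below can be produced with the Secrets Manager CLI (an assumption about how they were captured):
aws secretsmanager list-secret-version-ids --secret-id <secret_name>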
Before terraform import:
{
  "Versions": [
    {
      "VersionId": "initial-current",
      "VersionStages": [
        "AWSCURRENT"
      ],
      "LastAccessedDate": "xxxx",
      "CreatedDate": "xxx"
    },
    {
      "VersionId": "initial-previous",
      "VersionStages": [
        "AWSPREVIOUS"
      ],
      "LastAccessedDate": "xxxx",
      "CreatedDate": "xxxx"
    }
  ],
  "ARN": "xxxx",
  "Name": "xxxx"
}
After TF import and apply:
{
  "Versions": [
    {
      "VersionId": "post-import-current",
      "VersionStages": [
        "AWSCURRENT"
      ],
      "LastAccessedDate": "xxxx",
      "CreatedDate": "xxx"
    },
    {
      "VersionId": "initial-current",
      "VersionStages": [
        "AWSPREVIOUS"
      ],
      "LastAccessedDate": "xxxx",
      "CreatedDate": "xxxx"
    }
  ],
  "ARN": "xxxx",
  "Name": "xxxx"
}
I was expecting initial-current to remain under the AWSCURRENT stage. Why did AWS move the initial AWSCURRENT version ID that I imported using TF to AWSPREVIOUS and create a new one, since nothing changed in terms of value or rotation? I expected no changes on that front since TF imported the version.
I have a few Terraform state files with empty attributes and a few with some values:
{
  "version": 3,
  "terraform_version": "0.11.14",
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {},
      "resources": {},
      "depends_on": []
    }
  ]
}
I want to pick a default value, say "Null", if the attribute is not present. My remote state config is:
data "terraform_remote_state" "name" {
backend = "s3"
config {
.....
}
}
If I use "${lookup(data.terraform_remote_state.name.outputs, "attribute", "Null")}", it throws the error 'data.terraform_remote_state.name' does not have attribute 'outputs'.
Can someone guide me to solve this?
I have added an EMR cluster to a stack. After updating the stack successfully (CloudFormation), I can see the master and slave nodes in the EC2 console and I can SSH into the master node. But the EMR console does not show the new cluster. Even aws emr list-clusters doesn't show the cluster. I have triple-checked the region and I am certain I'm looking at the right one.
Relevant CloudFormation JSON:
"Spark01EmrCluster": {
"Type": "AWS::EMR::Cluster",
"Properties": {
"Name": "Spark01EmrCluster",
"Applications": [
{
"Name": "Spark"
},
{
"Name": "Ganglia"
},
{
"Name": "Zeppelin"
}
],
"Instances": {
"Ec2KeyName": {"Ref": "KeyName"},
"Ec2SubnetId": {"Ref": "PublicSubnetId"},
"MasterInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Master"
},
"CoreInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Core"
}
},
"Configurations": [
{
"Classification": "spark-env",
"Configurations": [
{
"Classification": "export",
"ConfigurationProperties": {
"PYSPARK_PYTHON": "/usr/bin/python3"
}
}
]
}
],
"BootstrapActions": [
{
"Name": "InstallPipPackages",
"ScriptBootstrapAction": {
"Path": "[S3 PATH]"
}
}
],
"JobFlowRole": {"Ref": "Spark01InstanceProfile"},
"ServiceRole": "MyStackEmrDefaultRole",
"ReleaseLabel": "emr-5.13.0"
}
}
The reason is the missing VisibleToAllUsers property, which defaults to false. Since I'm using AWS Vault (i.e. authenticating via the STS AssumeRole API), I'm effectively a different user every time, so I couldn't see the cluster. I couldn't update the stack to add VisibleToAllUsers either, as I was getting Job flow ID does not exist.
The solution was to log in as the root user and fix things from there (I had to delete the cluster manually, but removing it from the stack template JSON and updating the stack would probably have worked if I hadn't messed things up already).
I then added the cluster back to the template (with VisibleToAllUsers set to true) and updated the stack as usual (AWS Vault).
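For completeness, a trimmed sketch of where the property sits in the template (all other properties from the cluster above are omitted here):
"Spark01EmrCluster": {
  "Type": "AWS::EMR::Cluster",
  "Properties": {
    "Name": "Spark01EmrCluster",
    "VisibleToAllUsers": true,
    "ReleaseLabel": "emr-5.13.0"
  }
}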
I'm trying to learn how custom resources work in CloudFormation templates. I created a simple template to create an S3 bucket, but on creating the stack it stays in CREATE_IN_PROGRESS for a long time and no bucket is created.
Is there anything I'm missing in the validated template below?
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Building A bucket With customeResources in CloudFormation",
  "Parameters": {
    "NewBucket": {
      "Default": "",
      "Description": "S3 bucket containing customer assets",
      "Type": "String"
    }
  },
  "Conditions": {
    "NewBucket": {
      "Fn::Not": [
        {
          "Fn::Equals": [
            {
              "Ref": "NewBucket"
            },
            ""
          ]
        }
      ]
    }
  },
  "Resources": {
    "CustomResource": {
      "Properties": {
        "S3Bucket": {
          "Ref": "NewBucket"
        },
        "ServiceToken": "SNS topic ARN"
      },
      "Type": "AWS::CloudFormation::CustomResource"
    }
  },
  "Outputs": {
    "BucketName": {
      "Value": {
        "Fn::GetAtt": [ "CustomResource", {"Ref": "NewBucket"} ]
      }
    }
  }
}
It would appear that your SNS-backed custom resource is not sending a response back to CloudFormation, and it is stuck waiting for that response.
From Amazon Simple Notification Service-backed Custom Resources:
The custom resource provider processes the data sent by the template developer and determines whether the Create request was successful. The resource provider then uses the S3 URL sent by AWS CloudFormation to send a response of either SUCCESS or FAILED.
When the request is made to the SNS service provider, it includes the following object:
{
"RequestType": "Create",
"ServiceToken": "arn:aws:sns:us-west-2:2342342342:Critical-Alerts-development",
"ResponseURL": "https:\/\/cloudformation-custom-resource-response-uswest2.s3-us-west-2.amazonaws.com\/arn%3Aaws%3Acloudformation%3Aus-west-2%3A497903502641%3Astack\/custom-resource\/6bf07a80-d44a-11e7-84df-503aca41a029%7CCustomResource%7C5a695f41-61d7-475b-9110-cdbaec04ee55?AWSAccessKeyId=AKIAI4KYMPPRGIACET5Q&Expires=1511887381&Signature=WmHQVqIDCBwQSfcBMpzTfiWHz9I%3D",
"StackId": "arn:aws:cloudformation:us-west-2:asdasdasd:stack\/custom-resource\/6bf07a80-d44a-11e7-84df-503aca41a029",
"RequestId": "5a695f41-61d7-475b-9110-cdbaec04ee55",
"LogicalResourceId": "CustomResource",
"ResourceType": "AWS::CloudFormation::CustomResource",
"ResourceProperties": {
"ServiceToken": "arn:aws:sns:us-west-2:234234234:Critical-Alerts-development",
"S3Bucket": "test-example-com"
}
}
You will need to send a SUCCESS or FAILED response to the ResponseURL provided in the event for CloudFormation to continue processing.
I would also note that the bucket will not be created unless your custom resource provider creates it. The custom resource in the template only sends the request to the provider.
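As a rough sketch (one of several ways to do it, and assuming curl is available on the provider side), whatever subscribes to the SNS topic must, after actually creating the bucket, HTTP PUT a response document to that ResponseURL, echoing the IDs from the request event:
response.json:
{
  "Status": "SUCCESS",
  "Reason": "Bucket created",
  "PhysicalResourceId": "test-example-com",
  "StackId": "<StackId from the event>",
  "RequestId": "<RequestId from the event>",
  "LogicalResourceId": "<LogicalResourceId from the event>",
  "Data": {}
}
Then send it without a Content-Type header (the pre-signed URL is typically generated without one):
curl -X PUT -H 'Content-Type:' --data-binary @response.json '<ResponseURL from the event>'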
I am trying to set up a workflow via the ask-cli for developing an Alexa skill with an AWS Lambda backend. I have created a skill and it works fine when using "ask api ..." commands, but when I use an "ask lambda ..." command, such as "ask lambda download -f MySkill", it uses the wrong region setting. I get the error:
ResourceNotFoundException: Function not found: arn:aws:lambda:us-east-1:123456789:function:MySkill
As you can see, it is looking for the Lambda function in us-east-1, but my Lambda function is in eu-west-1, as specified in my skill.json file below. This question is pretty much a duplicate of https://forums.developer.amazon.com/questions/87922/ask-cli-does-not-use-region-setting-from-aws-confi.html. The answer to that question implies that you can add a region field somewhere in one of the JSON files, but I can't figure out where. Any help would be appreciated.
This is my ~/.ask/cli_config
{
  "profiles": {
    "default": {
      "aws_profile": "default",
      "token": {
        "access_token": "My_access_token",
        "refresh_token": "My_refresh_token",
        "token_type": "bearer",
        "expires_in": 3600,
        "expires_at": "2017-10-06T14:12:26.171Z"
      },
      "vendor_id": "My_vendor_id"
    }
  }
}
This is my ~/.aws/config:
[default]
output = text
region = eu-west-1
This is my skill.json, which I get when I call "ask api get-skill -s skill_id > skill.json":
{
  "skillManifest": {
    "publishingInformation": {
      "locales": {
        "en-GB": {
          "name": "My Skill"
        }
      },
      "isAvailableWorldwide": true,
      "category": "PUBLIC_TRANSPORTATION",
      "distributionCountries": []
    },
    "apis": {
      "custom": {
        "endpoint": {
          "uri": "arn:aws:lambda:eu-west-1:123456789:function:MySkill"
        },
        "interfaces": []
      }
    },
    "manifestVersion": "1.0"
  }
}
For me it works if I edit the following file:
~/.aws/credentials (Linux, macOS, or Unix)
C:\Users\USERNAME\.aws\credentials (Windows)
[ask_cli_default]
aws_access_key_id=YOUR_AWS_ACCESS_KEY
aws_secret_access_key=YOUR_AWS_SECRET_KEY
region=eu-west-1
The region is specified in the lambda section of .ask/config. Example:
"lambda": [
{
"alexaUsage": [
"custom/default"
],
"arn": "arn:aws:lambda:eu-west-1:XXXXXXXXX:function:ask-premium-hello-world",
"awsRegion": "eu-west-1",
"codeUri": "lambda/custom",
"functionName": "ask-premium-hello-world",
"handler": "index.handler",
"revisionId": "XXXXXXXXXXXXXXXXXX",
"runtime": "nodejs8.10"
}
]