Is there a way to apply policy tags to columns in BigQuery via Terraform? - google-cloud-platform

Is there a way to apply policy tags to columns in BigQuery (GCP) via Terraform? Any pointers would be appreciated. I believe we can create policy tags and taxonomies like this.
But how do I map/apply them to a column in a table? For example, in the snippet below I'm creating a table using Terraform and want to apply a policy tag to one of its columns. If this is possible via Terraform, any tutorials, code samples, guides, etc. would be highly appreciated:
resource "google_bigquery_table" "table_with_pii" {
provider = google-beta
dataset_id = google_bigquery_dataset.integration_testing_dataset.dataset_id
table_id = "table_with_pii"
schema = <<EOF
[
{
"name": "col1",
"type": "STRING",
"mode": "NULLABLE",
"description": "This is col1. It's a PII column."
},
{
"name": "col2",
"type": "BOOLEAN",
"mode": "NULLABLE",
"description": "This is col2"
}
]
EOF
}
I've scanned through the relevant resources on the Terraform registry but I haven't come across such options yet. I'm not sure if the code block mentioned in this thread is a rolled-out feature or just pseudo-code, because whenever I run terraform validate after adding such a mapping, I get an error that policy_tags is not a valid option. Am I missing something?
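For reference, a minimal sketch of one possible approach, assuming the taxonomy and policy tag are also managed in Terraform (the taxonomy/tag resource names below are illustrative): the policy tag is referenced from inside the schema JSON via a policyTags field, rather than through a separate argument on google_bigquery_table.

resource "google_data_catalog_taxonomy" "pii_taxonomy" {
  provider               = google-beta
  display_name           = "pii-taxonomy"
  region                 = "us"
  activated_policy_types = ["FINE_GRAINED_ACCESS_CONTROL"]
}

resource "google_data_catalog_policy_tag" "pii_tag" {
  provider     = google-beta
  taxonomy     = google_data_catalog_taxonomy.pii_taxonomy.id
  display_name = "pii"
}

resource "google_bigquery_table" "table_with_pii" {
  provider   = google-beta
  dataset_id = google_bigquery_dataset.integration_testing_dataset.dataset_id
  table_id   = "table_with_pii"

  # The policy tag is attached per column, inside the schema JSON itself.
  schema = <<EOF
[
  {
    "name": "col1",
    "type": "STRING",
    "mode": "NULLABLE",
    "description": "This is col1. It's a PII column.",
    "policyTags": {
      "names": ["${google_data_catalog_policy_tag.pii_tag.name}"]
    }
  },
  {
    "name": "col2",
    "type": "BOOLEAN",
    "mode": "NULLABLE",
    "description": "This is col2"
  }
]
EOF
}

If that's the case, it would also explain the terraform validate error: policy_tags is not an argument of google_bigquery_table, so the mapping has to live in the schema JSON.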

Related

Credential should be scoped to a valid region when reaching DynamoDB from ECS

I'm setting up an ECS instance for my backend that interacts with DynamoDB tables.
The tasks are running, the health check has passed, and the tasks have been assigned a role that should grant access to those tables.
But when I call the API to interact with the database, it shows me this error:
InvalidSignatureException: Credential should be scoped to a valid region.
The role contains these policies:
https://i.stack.imgur.com/h1Q14.png
And these are the environment variables for the task definition:
"environment": [
{
"name": "AWS_REGION",
"value": "eu-west-2"
},
{
"name": "DATABASE_URL",
"value": "http://dynamodb.eu-west-2.amazonaws.com"
},
{
"name": "PORT",
"value": "3000"
},
{
"name": "REFERRAL_CHARS",
"value": "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
},
{
"name": "USERS_TABLE",
"value": "SparadoxUsers"
}
],
This is not an issue with the roles, but with the request itself. Typically you see this error when you sign a request for one region (eu-west-1) and then submit that request to a second region (eu-west-2).
My suggestion is to take a close look at how you make your API call and how you define the region and endpoint in your DynamoDB client.
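For example, with boto3 (shown purely for illustration; the same idea applies to whichever SDK your backend actually uses), the client's region and endpoint should both point at eu-west-2:

import os

import boto3

# Region and endpoint must agree, otherwise the SigV4 signature is scoped to
# the wrong region and DynamoDB rejects the request.
dynamodb = boto3.resource(
    "dynamodb",
    region_name=os.environ["AWS_REGION"],     # eu-west-2
    endpoint_url=os.environ["DATABASE_URL"],  # e.g. https://dynamodb.eu-west-2.amazonaws.com
)

table = dynamodb.Table(os.environ["USERS_TABLE"])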

gcloud alpha monitoring policies create --policy-from-file throws error "must specify a restriction on "resource.type" in the filter"

I've created a couple of alert policies using the Cloud Console, but after exporting them and changing the name (via Download JSON or the gcloud CLI), I can't import them back.
Details below:
Payload (name fields are removed after export):
{
  "displayName": "somename",
  "conditions": [
    {
      "displayName": "somename",
      "conditionAbsent": {
        "aggregations": [
          {
            "alignmentPeriod": "300s",
            "crossSeriesReducer": "REDUCE_MEAN",
            "perSeriesAligner": "ALIGN_DELTA"
          }
        ],
        "duration": "300s",
        "filter": "metric.type=\"logging.googleapis.com/user/some-metric\""
      }
    }
  ],
  "combiner": "OR",
  "enabled": true,
  "notificationChannels": [
    "projects/my-prod-dod/notificationChannels/1962880049684990238",
    "projects/my-prod-dod/notificationChannels/9131919367771592634"
  ]
}
Command:
gcloud alpha monitoring policies create --policy-from-file alert.json
Error:
Field alert_policy.conditions[0].condition_absent.filter had an invalid value of "metric.type="logging.googleapis.com/user/some-metric"": must specify a restriction on "resource.type" in the filter
(Screenshots of the metric type and the alert policy omitted.)
Adding an additional resource.type restriction to the filter, as shown below, solved the problem:
"filter": "metric.type=\"logging.googleapis.com/user/celery-person\" resource.type=\"k8s_container\"",
Similar question:
Use a Stackdriver resource group's ID in a GCP Deployment Manager configuration

Using ETL for AWS Glue on Lambda instead of EC2

I have a workflow where I need an ETL step to be able to read a JSON file (stored in S3) through AWS Athena; however, as the JSON file contains nested arrays, Athena refuses to continue with the query. I know we can use Zeppelin on an EC2 machine to run the ETL, but I do not wish to run an EC2 machine and was wondering if it's possible to use Lambda instead. Has anyone tried this before?
The nested JSON that I am using:
{
  "version": "0.1.0",
  "generated": "Wed, 1 May 2021 02:11:23",
  "site": [
    {
      "name": "Alaska",
      "host": "example.com",
      "port": "443",
      "ssl": "true",
      "details": [
        {
          "name": "alaska",
          "record": "0000100",
          "type": "example",
          "count": 11,
          "description": [
            {
              "meta": "meta",
              "method": "GET",
              "key": "abc"
            }
          ]
        }
      ]
    }
  ]
}
Example of a query that I would need: how can I query all the types and their counts?
This is the error when I try to query:
Your query has the following error(s):
HIVE_PARTITION_SCHEMA_MISMATCH: There is a mismatch between the table and partition schemas. The types are incompatible and cannot be coerced. The column 'site' in table 'sampledb.codebuildprojectname' is declared as type array<struct<...
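Lambda can generally handle this kind of ETL as long as the files fit within Lambda's memory and 15-minute limits. A minimal sketch of a flattening step (the bucket and key names are hypothetical): it reads the nested JSON from S3, emits one flat record per details entry, and writes newline-delimited JSON that Athena can then query with a plain SELECT type, count.

import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical bucket/key names; adjust to the real locations.
    obj = s3.get_object(Bucket="my-source-bucket", Key="input/report.json")
    doc = json.loads(obj["Body"].read())

    # Flatten: one output row per entry in site[].details[].
    rows = []
    for site in doc.get("site", []):
        for detail in site.get("details", []):
            rows.append({
                "site_name": site.get("name"),
                "type": detail.get("type"),
                "count": detail.get("count"),
            })

    # Newline-delimited JSON is straightforward for Athena/Glue crawlers to read.
    body = "\n".join(json.dumps(row) for row in rows)
    s3.put_object(Bucket="my-flattened-bucket", Key="output/report.json", Body=body)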

How do you insert values into dynamodb through cloudformation?

I'm creating a table in CloudFormation:
"MyStuffTable": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "MyStuff"
"AttributeDefinitions": [{
"AttributeName": "identifier",
"AttributeType": "S"
]},
"KeySchema": [{
"AttributeName": "identifier",
"KeyType": "HASH",
}],
"ProvisionedThroughput": {
"ReadCapacityUnits": "5",
"WriteCapacityUnits": "1"
}
}
}
Then later on in the CloudFormation template, I want to insert records into that table, something like this:
identifier: Stuff1
data: {My list of stuff here}
And insert that into the Values array in the code below. I had seen an example somewhere that used Custom::Install, but I can't find it now, nor any documentation on it.
So this is what I have:
"MyStuff": {
  "Type": "Custom::Install",
  "DependsOn": [
    "MyStuffTable"
  ],
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": ["MyStuffTable", "Arn"]
    },
    "Action": "fields",
    "Values": [{<insert records into this array}]
  }
}
When I run that, I'm getting an "Invalid service token" error.
So I'm not doing something right when trying to reference the table to insert the records into. I can't seem to find any documentation on Custom::Install, so I don't know for sure that it's the right way to go about inserting records through CloudFormation. I also can't seem to find documentation on inserting records through CloudFormation at all, though I know it can be done. I'm probably missing something very simple. Any ideas?
Custom::Install is a Custom Resource in CloudFormation.
This is a special type of resource which you have to develop yourself. It is mostly backed by a Lambda function (it can also be an SNS topic).
So, to answer your question: to add data to your table, you would have to write your own custom resource backed by a Lambda function. The Lambda would put the records into the table.
"Action": "fields" and "Values" are custom parameters which CloudFormation passes to the Lambda in the Custom::Install example. The parameters can be anything you want, as you are designing the custom resource tailored to your requirements.
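That also explains the "Invalid service token" error: the ServiceToken must be the ARN of the Lambda function (or SNS topic) backing the custom resource, not the ARN of the DynamoDB table. A minimal sketch of what such a Lambda might look like (assuming its code is defined inline in the template, so the cfnresponse helper is available; the table and property names are taken from the question):

import boto3
import cfnresponse

dynamodb = boto3.resource("dynamodb")

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            table = dynamodb.Table("MyStuff")
            # "Values" is the custom property passed from the Custom::Install resource.
            for item in event["ResourceProperties"].get("Values", []):
                table.put_item(Item=item)
        # Always signal back to CloudFormation, or the stack will hang until timeout.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})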

AWS CloudFormation condition to only execute when running on the master org (root) account?

All,
I am creating a CloudFormation template. I would like to conditionally add an IAM policy only if the template is being run in the root organization's master account.
I searched around but wasn't able to find an example.
This is what I am doing now: I simply ask, via a parameter, whether the template should include the policy during creation.
"Parameters": {
"IncludeOrganizationPolicy": {
"Description": "Only set to true for the root org",
"Type": "String",
"Default": "false",
"AllowedValues": [
"true",
"false"
]
},
}
Ideally, I'd like to do this without having to ask for an input parameter. Something like what's shown below, but where the value compared against AWS::AccountId is resolved to the master (root) account automatically.
"Conditions": {
"CreateSPOrganizationPolicy": {
"Fn::Equals": [
{
"Ref": "AWS::AccountId"
},
"<the root account id>"
]
}
}
Also, I am unable to hard-code the root account id. These scripts are going to be given to customers to run in their AWS environment.
Thanks!
Pink
This doesn't answer the question, but this question came up on a related search so I thought I'd post what I did.
I wanted a condition that is true for a single AWS account, so I could create a resource in that one account only. I didn't want to have to use a parameter, as I already have a bunch of them and would then have to run the stack set / template again.
Here's the condition that worked:
Conditions:
  Account123Only: !Equals [ !Ref AWS::AccountId, "123123123123"]
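For completeness, the condition is then attached to whichever resource should exist only in that account (the resource name and policy body below are just illustrative):

Resources:
  OrgOnlyPolicy:
    Type: AWS::IAM::ManagedPolicy
    Condition: Account123Only
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: "organizations:Describe*"
            Resource: "*"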