How to read the second block in a tfstate with Terraform?

I have a remote state on S3 and I don't know how to access the second block of the tfstate JSON file.
My tfstate looks like this:
{
  "version": 3,
  "terraform_version": "0.11.7",
  "serial": 1,
  "lineage": "79b7840d-5998-1ea8-2b63-ca49c289ec13",
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {},
      "resources": {},
      "depends_on": []
    },
    {
      "path": [
        "root",
        "vpc"
      ],
      "outputs": {
        "database_subnet_group": {
          all my resources are listed here...
        }
and I access it with this code:
data "terraform_remote_state" "network" {
backend = "s3"
config = {
bucket = "bakka-tfstate"
key = "global/network.tfstate"
region = "eu-west-1"
}
}
but the output
output "tfstate" {
value = data.terraform_remote_state.network.outputs
}
shows me nothing:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
tfstate = {}
I believe it's because there are two JSON blocks in this tfstate and Terraform reads the first one, which is empty, so I'm asking how to read the second one.

Only root outputs are accessible through remote state. This limitation is documented here. The documentation also suggests a solution: in the project that produces the state you're referring to, you will need to thread the output through to a root output.
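For example, a minimal sketch based on the state shown above (assuming the database_subnet_group output currently lives only in the vpc module of the producing project):
# In the root module of the project that writes global/network.tfstate
output "database_subnet_group" {
  # Re-export the module output at the root so remote state consumers can see it (Terraform 0.11 syntax)
  value = "${module.vpc.database_subnet_group}"
}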

Related

How to remove bigquery dataset permission via CLI

I want to remove multiple dataset permissions via the CLI (if possible in one go). Is there a way to do this?
For example:
abc#gmail.com roles/bigquery.dataowner to dataset/demo_aa
xyzGroupSA#gmail.com role/bigquery.user to dataset/demo_bb
I want to remove the email IDs' permissions from the respective datasets via the CLI with "bq".
[I went through the reference https://cloud.google.com/bigquery/docs/dataset-access-controls#bq_1; it relies on a local file and is very lengthy. But what about when you have a jump server in a production environment and need to do this by running commands?]
You can delegate this responsibility to CI/CD, for example.
Solution 1:
You can create a project:
my-project
------dataset_accesses.json
Run your script in Cloud Shell on your production project, or in a Docker image based on the gcloud-sdk image.
Use a service account that has the permissions to update dataset accesses.
Run bq update with the access control file:
bq update --source dataset_accesses.json mydataset
dataset_accesses.json:
{
  "access": [
    {
      "role": "READER",
      "specialGroup": "projectReaders"
    },
    {
      "role": "WRITER",
      "specialGroup": "projectWriters"
    },
    {
      "role": "OWNER",
      "specialGroup": "projectOwners"
    },
    {
      "role": "READER",
      "specialGroup": "allAuthenticatedUsers"
    },
    {
      "role": "READER",
      "domain": "domain_name"
    },
    {
      "role": "WRITER",
      "userByEmail": "user_email"
    },
    {
      "role": "WRITER",
      "userByEmail": "service_account_email"
    },
    {
      "role": "READER",
      "groupByEmail": "group_email"
    }
  ],
  ...
}
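To remove a specific entry with this approach, one possible flow (a sketch; mydataset is a placeholder dataset name) is to dump the current access list, delete the unwanted userByEmail or groupByEmail entry from the JSON, and push the file back:
bq show --format=prettyjson mydataset > dataset_accesses.json
# edit dataset_accesses.json and delete the access entry you want to revoke
bq update --source dataset_accesses.json mydataset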
Solution 2:
Use Terraform to update the permissions on your dataset:
resource "google_bigquery_dataset_access" "access" {
dataset_id = google_bigquery_dataset.dataset.dataset_id
role = "OWNER"
user_by_email = google_service_account.bqowner.email
}
With Terraform, it is also easy to pass a list and apply the resource over it:
JSON file:
{
  "datasets_members": {
    "dataset_your_group1": {
      "dataset_id": "your_dataset",
      "member": "group:your_group#loreal.com",
      "role": "bigquery.dataViewer"
    },
    "dataset_your_group2": {
      "dataset_id": "your_dataset",
      "member": "group:your_group2#loreal.com",
      "role": "bigquery.dataViewer"
    }
  }
}
locals.tf:
locals {
  datasets_members = jsondecode(file("${path.module}/resource/datasets_members.json"))["datasets_members"]
}

resource "google_bigquery_dataset_access" "accesses" {
  for_each       = local.datasets_members
  dataset_id     = each.value["dataset_id"]
  role           = each.value["role"]
  group_by_email = each.value["member"]
}
This also works with google_bigquery_dataset_iam_binding.
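For reference, a minimal sketch of the iam_binding variant (the dataset ID, group email, and role below are placeholders):
resource "google_bigquery_dataset_iam_binding" "viewers" {
  dataset_id = "your_dataset"
  role       = "roles/bigquery.dataViewer"

  # All members holding this role on the dataset are managed by this binding
  members = [
    "group:your_group@loreal.com",
  ]
}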

Terraform v0.11 accessing remote state empty outputs

I have a few Terraform state files with empty attributes in them and a few with some values:
{
  "version": 3,
  "terraform_version": "0.11.14",
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {},
      "resources": {},
      "depends_on": []
    }
  ]
}
I want to pick a default value, say "Null", if I don't have the attribute. My remote state config is:
data "terraform_remote_state" "name" {
backend = "s3"
config {
.....
}
}
If I use "${lookup(data.terraform_remote_state.name.outputs, "attribute", "Null")}", it throws the error 'data.terraform_remote_state.name' does not have attribute 'outputs'.
Can someone guide me to solve this?

CloudFormation template: CloudFront DomainName WebsiteURL returns error

I have created a CloudFormation template for an S3 static site deployment.
In the CloudFront part, the origin domain name must be specified. First I created the template like this and tried it out:
...
"DomainName": {
  "Fn::GetAtt": [
    "S3BucketRoot",
    "DomainName"
  ]
},
...
So far everything was set up correctly, except that the origin domain was not taken correctly: the plain Amazon S3 bucket domain name was returned instead of the endpoint of the S3 bucket configured as a website, which caused requests to the domain not to load correctly.
I then manually entered the website endpoint of the S3 bucket as the origin domain name in CloudFront.
After that worked correctly, I wanted to change it in the template, so I adjusted it:
...
"DomainName": {
  "Fn::GetAtt": [
    "S3BucketRoot",
    "WebsiteURL"
  ]
},
...
When updating the stack I now get an error saying the domain name must not contain a colon. This comes from the http:// prefix.
So my question is: how can I either get the right DomainName, or how can I remove the http:// from the WebsiteURL?
My S3 configuration looks like this:
"S3BucketRoot": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Delete",
"Properties": {
"AccessControl": "PublicRead",
"BucketName": {
"Fn::Sub": "${AWS::StackName}-root"
},
"WebsiteConfiguration": {
"ErrorDocument": "404.html",
"IndexDocument": "index.html"
}
}
},
Edit:
Origin Domain Name Documentation
I solved it with a little hack:
...
"DomainName": {
  "Fn::Select": [
    "1",
    {
      "Fn::Split": [
        "http://",
        {
          "Fn::GetAtt": [
            "S3BucketRoot",
            "WebsiteURL"
          ]
        }
      ]
    }
  ]
},
...
If someone finds a better way, please let me know. For now, this works.
Resource
Don't make it too complicated:
!Sub ${S3BucketRoot}.s3-website-${AWS::Region}.amazonaws.com
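In the JSON template syntax used in the question, the same idea would look roughly like this (a sketch; note that some regions use s3-website.&lt;region&gt; with a dot instead of a dash):
"DomainName": {
  "Fn::Sub": "${S3BucketRoot}.s3-website-${AWS::Region}.amazonaws.com"
},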

Terraform import ECS task definition from another project

I have multiple projects, each with their own Terraform configuration to manage the AWS infrastructure specific to that project. Shared infrastructure (a VPC, for example) I import into the projects that need it.
I want to glue together a number of different tasks from across different services using step functions, but some of them are Fargate ECS tasks. This means I need to specify the task definition ARN in the step function.
I can import a task definition but if I later update the project that manages that task definition, the revision will change while the step function will continue to point at the old task definition revision.
At this point I might as well hard-code the task ARN into the step function and just have to remember to update it in the future.
Anyone know a way around this?
You can use the aws_ecs_task_definition data source to look up the latest revision of a task definition family:
data "aws_ecs_task_definition" "example" {
task_definition = "example"
}
output "example" {
value = data.aws_ecs_task_definition.example
}
Applying this gives the following output (assuming you have an example task definition family in your AWS account):
example = {
  "family" = "example"
  "id" = "arn:aws:ecs:eu-west-1:1234567890:task-definition/example:333"
  "network_mode" = "bridge"
  "revision" = 333
  "status" = "ACTIVE"
  "task_definition" = "example"
  "task_role_arn" = "arn:aws:iam::1234567890:role/example"
}
So you could do something like this:
data "aws_ecs_task_definition" "example" {
task_definition = "example"
}
data "aws_ecs_cluster" "example" {
cluster_name = "example"
}
resource "aws_sfn_state_machine" "sfn_state_machine" {
name = "my-state-machine"
role_arn = aws_iam_role.iam_for_sfn.arn
definition = <<EOF
{
"StartAt": "Manage ECS task",
"States": {
"Manage ECS task": {
"Type": "Task",
"Resource": "arn:aws:states:::ecs:runTask.waitForTaskToken",
"Parameters": {
"LaunchType": "FARGATE",
"Cluster": ${data.aws_ecs_cluster.example.arn},
"TaskDefinition": ${data.aws_ecs_task_definition.example.id},
"Overrides": {
"ContainerOverrides": [
{
"Name": "example",
"Environment": [
{
"Name": "TASK_TOKEN_ENV_VARIABLE",
"Value.$": "$$.Task.Token"
}
]
}
]
}
},
"End": true
}
}
}
EOF
}

EMR cluster created with CloudFormation not shown

I have added an EMR cluster to a stack. After updating the stack successfully (CloudFormation), I can see the master and slave nodes in the EC2 console and I can SSH into the master node. But the AWS console does not show the new cluster. Even aws emr list-clusters doesn't show the cluster. I have triple-checked the region and I am certain I'm looking at the right one.
Relevant CloudFormation JSON:
"Spark01EmrCluster": {
"Type": "AWS::EMR::Cluster",
"Properties": {
"Name": "Spark01EmrCluster",
"Applications": [
{
"Name": "Spark"
},
{
"Name": "Ganglia"
},
{
"Name": "Zeppelin"
}
],
"Instances": {
"Ec2KeyName": {"Ref": "KeyName"},
"Ec2SubnetId": {"Ref": "PublicSubnetId"},
"MasterInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Master"
},
"CoreInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Core"
}
},
"Configurations": [
{
"Classification": "spark-env",
"Configurations": [
{
"Classification": "export",
"ConfigurationProperties": {
"PYSPARK_PYTHON": "/usr/bin/python3"
}
}
]
}
],
"BootstrapActions": [
{
"Name": "InstallPipPackages",
"ScriptBootstrapAction": {
"Path": "[S3 PATH]"
}
}
],
"JobFlowRole": {"Ref": "Spark01InstanceProfile"},
"ServiceRole": "MyStackEmrDefaultRole",
"ReleaseLabel": "emr-5.13.0"
}
}
The reason is the missing VisibleToAllUsers property, which defaults to false. Since I'm using AWS Vault (i.e. using the STS AssumeRole API to authenticate), I'm effectively a different user every time, so I couldn't see the cluster. I couldn't update the stack to add VisibleToAllUsers either, as I was getting Job flow ID does not exist.
The solution was to log in as the root user and fix things from there (I had to delete the cluster manually, but removing it from the stack template JSON and updating the stack would probably have worked if I hadn't already messed things up).
I then added the cluster back to the template (with VisibleToAllUsers set to true) and updated the stack as usual (with AWS Vault).
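For reference, a minimal sketch of the relevant change in the template (only the added property is shown; the rest of the resource stays as above):
"Spark01EmrCluster": {
  "Type": "AWS::EMR::Cluster",
  "Properties": {
    "VisibleToAllUsers": true,
    ...
  }
}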