I have a Terraform structure like:
prod
nonprod
applications
+-- continuous/common/iam/iam.tf <-- create the role
+-- dataflow/firehose/firehose.tf <-- want to refer to the role created above
I don't know how to do this. In iam.tf I have:
resource "aws_iam_role" "policy_daily_process_role" {
...
}
output "svc_daily_process_role_arn" {
value = "${aws_iam_role.policy_daily_process_role.arn}"
}
I am not sure how (or if) I can then refer to svc_daily_process_role_arn from firehose.tf.
My understanding is that you already use modules to manage your Terraform code.
So in your case, there should be at least two modules:
continuous/common
dataflow/firehose
In the continuous/common module, you have defined output.tf:
output "svc_daily_process_role_arn" {
value = "${aws_iam_role.policy_daily_process_role.arn}"
}
So you create the resources with the common module first:
module "common" {
source = "./continuous/common"
...
}
Now you can refer to the output from module common with the code below:
module "firehost" {
source = "./dataflow/firehose"
some_variable = "${module.common.svc_daily_process_role_arn}"
...
}
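For completeness, here is a minimal sketch of what the receiving side could look like inside dataflow/firehose. The variable name daily_process_role_arn and the delivery-stream details are assumptions for illustration, not taken from your layout; the placeholder some_variable above would be replaced by whatever name you declare here.

# dataflow/firehose/variables.tf (hypothetical variable names)
variable "daily_process_role_arn" {
  description = "ARN of the IAM role created in continuous/common"
  type        = string
}

variable "destination_bucket_arn" {
  description = "ARN of the S3 bucket the stream delivers to (assumed)"
  type        = string
}

# dataflow/firehose/firehose.tf (assumed delivery-stream configuration)
resource "aws_kinesis_firehose_delivery_stream" "daily_process" {
  name        = "daily-process" # assumed name
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn   = var.daily_process_role_arn
    bucket_arn = var.destination_bucket_arn
  }
}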
Please go through the documents below for a better understanding.
https://www.terraform.io/docs/modules/usage.html#outputs
Using Terraform Modules.
https://www.terraform.io/docs/modules/usage.html
From the top level, call the two subdirectories as modules.
In module 1 (your IAM role) add an output like you have, but make sure it is actually exported from module 1.
In module 2, reference it via ${module.<module1_name>.<output_name>}.
If you're not using modules (or even if you are), you can use remote state. This means you will save your state in S3 or Consul and then refer to it from anywhere in your code.
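As a hedged sketch of the remote-state approach, assuming an S3 backend (the bucket name and state key below are placeholders, not real values from your setup):

# In the configuration that creates the role, store the state in S3
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "continuous/common/terraform.tfstate"
    region = "us-east-1"
  }
}

# In the firehose configuration, read that state and use its output
data "terraform_remote_state" "common" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "continuous/common/terraform.tfstate"
    region = "us-east-1"
  }
}

# Terraform 0.12+ exposes the other state's outputs under .outputs, e.g.:
# role_arn = data.terraform_remote_state.common.outputs.svc_daily_process_role_arn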
Please read the complete problem first instead of downvoting.
Hi, I am new to Terraform.
I have 3 modules in Terraform.
/module1/eip/main.tf
/module1/eip/output.tf
/module2/eip/main.tf
/module2/eip/output.tf
/module3/eip/main.tf
/module3/eip/output.tf
All 3 of these modules create an EIP and expose it in their outputs.
From main.tf at the root level I am calling these modules like this:
module "module-one-eip" {
source = "./modules/module1/eip/"
instance = module.ec2-for-one-module.ec2-one-id
}
module "module-two-eip" {
source = "./modules/module2/eip/"
instance = module.ec2-for-two-module.ec2-two-id
}
module "module-three-eip" {
source = "./modules/module3/eip/"
instance = module.ec2-for-three-module.ec2-three-id
}
Now I want to remove the repetitive files and use a single module for all three, so that all of the code from these 3 modules resides in one file and all of the outputs mentioned above live in the same file. The main problem is how to handle the different instance variable data being passed in and how it will be matched with the right code section in the same file.
/module/generic-eip/main.tf
/module/generic-eip/outputs.tf
and from main.tf I want to call it like this.
module "module-generic-eip" {
source = "./modules/generic-eip/"
instance = (how to manage different instance names for same file)
}
I know Terraform has for_each and count, but the problem is that the internal configurations are different; I mean, how can I make the naming dynamic as well?
Inside ./modules/eip/main.tf:
resource "aws_eip" "(how to manage different names here)" {
instance = var.instance
vpc = true
}
Assuming you keep your instance modules separate and only want to consolidate the EIP module, it would be as follows:
locals {
  instance_ids = [
    module.ec2-for-one-module.ec2-one-id,
    module.ec2-for-two-module.ec2-two-id,
    module.ec2-for-three-module.ec2-three-id,
  ]
}

module "module-generic-eip" {
  source   = "./modules/generic-eip/"
  count    = length(local.instance_ids)
  instance = local.instance_ids[count.index]
}
The code inside ./modules/eip/main.tf does not change: for each iteration of count, var.instance will be a single element of local.instance_ids.
Then you access the individual EIPs using the indices 0, 1, 2:
module.module-generic-eip[0]
module.module-generic-eip[1]
module.module-generic-eip[2]
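If you also want more meaningful addresses than [0], [1], [2], a for_each variant is an option on Terraform 0.13 or newer (the map keys "one", "two", "three" below are just assumed labels):

locals {
  instances = {
    one   = module.ec2-for-one-module.ec2-one-id
    two   = module.ec2-for-two-module.ec2-two-id
    three = module.ec2-for-three-module.ec2-three-id
  }
}

module "module-generic-eip" {
  source   = "./modules/generic-eip/"
  for_each = local.instances
  instance = each.value
}

# Access them by key instead of index:
# module.module-generic-eip["one"]
# module.module-generic-eip["two"]
# module.module-generic-eip["three"]

With for_each, each copy of the module (and the aws_eip inside it) is tracked in state under its own key rather than a numeric index, which partly addresses the dynamic-naming concern without changing the module's internals.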
module "s3_bucket" {source = "./module/s3_bucket"}
module "s3_bucket_2" {source = "./module/s3_bucket_2"}
These are the two modules I am calling in the main.tf file, but I want to use some condition so that I can call whichever module I want at any point in time, and only that module gets executed. Is there any way to do that?
I didn't quite understand your question, but I guess the following is what you want, or at least will be helpful in answering it.
You can create a variable in variables.tf, called for example create, and then pass it to the module.
# Set a variable to know if the resources inside the module should be created
module "s3_bucket" {
  source = "./module/s3_bucket"
  create = var.create
}

# For every resource inside the module, use count to decide whether to create it
resource "resource_type" "resource_name" {
  count = var.create ? 1 : 0
  # ... other resource attributes
}
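A minimal sketch of how the pieces could fit together, with assumed variable names (each module would also need to declare its own create variable):

# variables.tf at the root (assumed names)
variable "create_s3_bucket" {
  type    = bool
  default = false
}

variable "create_s3_bucket_2" {
  type    = bool
  default = false
}

# main.tf at the root: each module gets its own flag
module "s3_bucket" {
  source = "./module/s3_bucket"
  create = var.create_s3_bucket
}

module "s3_bucket_2" {
  source = "./module/s3_bucket_2"
  create = var.create_s3_bucket_2
}

You can then pick which module is active at apply time, for example:

terraform apply -var="create_s3_bucket=true" -var="create_s3_bucket_2=false"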
I have a parent directory from which I am calling a local module, but the terraform.tfvars file present in the parent directory is not considered when calling the local module. It's taking values from the variables file present in the local module.
My code is available on GitHub. It is working fine except that it's not considering the terraform.tfvars file. Can anyone let me know what the issue is in this code?
https://github.com/sammouse/terraform-code.git
It's taking values from the variables file present in the local module.
That's how it works. There is no inheritance of variables in Terraform. You have to explicitly pass all the variables from the parent module to the child module. Otherwise, you have to copy all the variables from the parent's tfvars file into a tfvars file in the child module.
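A minimal sketch of the explicit passing described above, with assumed names (not taken from the linked repository):

# terraform.tfvars in the parent directory
instance_type = "t3.micro"

# variables.tf in the parent directory
variable "instance_type" {
  type = string
}

# main.tf in the parent directory: pass the value down explicitly
module "ec2" {
  source        = "./modules/ec2"
  instance_type = var.instance_type
}

# variables.tf inside ./modules/ec2 (the child module's own declaration)
variable "instance_type" {
  type = string
}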
My application uses log4j, but OkHttpClient uses java.util.logging. So apart from log4j.properties, I created a logging.properties file with the following contents:
handlers=java.util.logging.FileHandler
.level=FINE
okhttp3.internal.http2.level=FINE
java.util.logging.FileHandler.pattern = logs/%hjava%u.log
java.util.logging.FileHandler.limit = 50000
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
I then added this to the JVM params used for starting the application: -Djava.util.logging.config.file="file://${BASE_DIR}/logging.properties"
But I don't see any new folders being created as indicated by the FileHandler. Anyone know why?
But I don't see any new folders being created as indicated by the FileHandler. Anyone know why?
The FileHandler will not create any new folders. A directory must be created before the FileHandler will create a file.
The system property requires a path to a file that is located on the filesystem. It will not expand system properties or environment variables using the dollar-sign syntax.
You can use a relative path based off of the working directory, or you have to use an absolute path to the logging.properties. The logging properties cannot be packaged inside of an archive.
If you want to work around this limitation, create a custom config class and use the java.util.logging.config.class property in conjunction with the java.util.logging.config.file property. You then write a class that reads file://${BASE_DIR}/logging.properties and performs the needed transformation into a path to a file. Then update the configuration if you are using JDK 9 or newer. On older versions you need to use readConfiguration and add code to work around the limitations of the LogManager.
I'm trying to use the bazel restricted_to attribute for a test.
I want the test to only run on a specific cpu = build.
To make this somewhat more complicated, the cpu type is defined in our
/tools/cpp/CROSSTOOL file (cpu=armhf-debian).
I've had no luck with guessing the syntax of the restricted_to parameter
(my first guess was //cpu:armhf-debian, which just looked for a cpu package)
Any suggestions?
There's not a lot of documentation on restricted_to and the other rules it works with, environment and environment_group. Mostly this is because their use case is very specific to Google's repository setup, and we're in the process of replacing them with a more flexible system.
To use restricted_to, you would need to define several environment rules, and an environment_group to contain them, and then specify which environment the test is restricted to, and finally always use the "--target_environment" flag to specify the current environment group. That would look something like this:
environment(name = "x86")
environment(name = "ppc")
environment_group(
name = "cpus",
defaults = [":x86"],
environments = [
":x86",
":ppc",
])
cc_test(
    name = "test",
    # ... other config ...
    restricted_to = [":ppc"],
)
You could then run the test like so:
bazel test --target_environment=//:ppc //:test
to get the environment checking.
This isn't terribly useful, as whoever is running the test has to also remember to set "--target_environment" properly.
A better way to disable the test, using currently supported code, is to use config_setting and select, like this:
config_setting(
    name = "k8",
    values = {"cpu": "k8"},
)

config_setting(
    name = "ppc",
    values = {"cpu": "ppc"},
)

cc_test(
    name = "test",
    # ... other config ...
    srcs = [
        # ... other sources ...
    ] + select({
        "//:k8": ["x86_test_src.cpp"],
        "//:ppc": ["ppc_test_src.cpp"],
        "//conditions:default": ["default_test_src.cpp"],
    }),
)
config_setting will take a value based on the current "--cpu" flag. By changing the files included in the select, you can control what files are included in the test for each cpu setting.
Obviously, these don't have to be in the same package, and the usual Bazel visibility rules apply. See Bazel's src/BUILD for an example of config_setting, and src/test/cpp/BUILD for an example of using it in select.
We're working hard on platforms, which is a better way to describe and query Bazel's execution environment, and we'll make sure to post documentation and a blog post when that's ready for people to test.