How can I reference a value from a different module?

I want to deploy an RDS database to AWS with a secret from AWS Secrets Manager. I have:
├── environments
│   └── myenv
│       ├── main.tf
│       ├── locals.tf
│       └── variables.tf
└── modules
    ├── db
    │   ├── main.tf
    │   └── variables.tf
    └── secrets
        └── main.tf
In myenv/main.tf I define a module mydb that uses modules/db/main.tf as its source, where a database resource is defined. Apart from the password it all works: I specify values in blocks in myenv and the values "trickle down".
But for the credentials, I don't want to hard-code them in myenv, of course.
Instead in modules/secrets I define
data "aws_secretsmanager_secret_version" "my_credentials" {
  # Fill in the name you gave to your secret
  secret_id = "my-secret-id"
}
and with another block:
locals {
  decoded_secrets = jsondecode(data.aws_secretsmanager_secret_version.my_credentials.secret_string)
}
I decode the secrets and now I want to reference them as e.g. local.decoded_secrets.username in myenv/main. That is my interpretation of the tutorials, but it doesn't work: if I put the locals block in myenv it cannot reference the data source, and when I put it in modules/secrets then myenv cannot reference the locals.
How can I combine the values of these two modules in my myenv/main?

Define an output in the secrets module. Define an input in the db module. Pass the output value from secrets to the input property in db.
For example, if you defined an output named "password" in secrets and an input named "password" in db, then in your module declarations you would pass the value like this:
module "secrets" {
  source = "../modules/secrets"
}

module "db" {
  source   = "../modules/db"
  password = module.secrets.password
}

You have a few options available to you here to pass the secret to the database module.
The smallest change from your existing setup would be to call both modules in the same configuration and pass an output from the secrets module to the database module, like this:
.
├── environments
│   └── myenv
│       ├── locals.tf
│       ├── main.tf
│       └── variables.tf
└── modules
    ├── db
    │   ├── main.tf
    │   └── variables.tf
    └── secrets
        ├── main.tf
        └── outputs.tf
modules/secrets/outputs.tf
output "secret_id" {
  value = aws_secretsmanager_secret_version.secret.secret_id
}
environments/myenv/main.tf
module "secrets" {
  source = "../../modules/secrets"
  # ...
}

module "db" {
  source = "../../modules/db"
  # ...

  secret_id = module.secrets.secret_id
}
A better approach, however, might be to have the database module create and manage its own secret, so that the secret doesn't need to be passed into the database module as a parameter at all. If you wanted to reuse the secrets module with other modules, you could make it a child module of the database module; if this is the only place you currently use the secrets module, then unnesting it makes things simpler.
Nesting modules
modules/db/main.tf
module "database_password_secret_id" {
  source = "../secrets"
  # ...
}

data "aws_secretsmanager_secret_version" "database_password" {
  secret_id = module.database_password_secret_id.secret_id
}
Unnesting the modules
.
├── environments
│   └── myenv
│       ├── locals.tf
│       ├── main.tf
│       └── variables.tf
└── modules
    └── db
        ├── main.tf
        ├── secrets.tf
        └── variables.tf
modules/db/secrets.tf
resource "aws_secretsmanager_secret" "database_password" {
  name = "database-password"
}

resource "random_password" "database_password" {
  length = 32
}

resource "aws_secretsmanager_secret_version" "database_password" {
  secret_id     = aws_secretsmanager_secret.database_password.id
  secret_string = random_password.database_password.result
}
modules/db/main.tf
resource "aws_db_instance" "database" {
  # ...
  password = aws_secretsmanager_secret_version.database_password.secret_string
}

Related

Terraform "Unsupported attribute" error when access output variable of sub-module

Gurus!
I'm developing Terraform modules to provision NAT resources for production and non-production environments. There are two repositories: one for the Terraform modules, another for the live environment of each account (e.g. dev, stage, prod).
I have a problem accessing an output variable of the network/nat module.
Please refer below.
for Terraform module (sre-iac-module repo)
❯ tree sre-iac-modules/network/nat/
sre-iac-modules/network/nat/
├── main.tf
├── non_production
│   └── main.tf
├── outputs.tf
├── production
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── variables.tf
for live environment (sre-iac-modules repo)
❯ tree sre-iac-modules/network/nat/
sre-iac-modules/network/nat/
├── main.tf
├── non_production
│   └── main.tf
├── outputs.tf
├── production
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── variables.tf
In the main code snippet, sre-iac-live/dev/services/wink/network/main.tf, I cannot access the output variable named module.wink_nat.eip_ids.
When I run terraform plan or terraform console, I always get the following error:
│ Error: Unsupported attribute
│
│ on ../../../../../sre-iac-modules/network/nat/outputs.tf line 2, in output "eip_ids":
│ 2: value = module.production.eip_ids
│ ├────────────────
│ │ module.production is tuple with 1 element
│
│ This value does not have any attributes.
╵
Here is the ../../../../../sre-iac-modules/network/nat/outputs.tf and main.tf
output "eip_ids" {
  value = module.production.eip_ids
  # value = ["a", "b", "c"]
}
----
main.tf
module "production" {
  source              = "./production"
  count               = var.is_production ? 1 : 0
  env                 = ""
  region_id           = ""
  service_code        = ""
  target_route_tables = []
  target_subnets      = var.target_subnets
}

module "non_production" {
  source = "./non_production"
  count  = var.is_production ? 0 : 1
}
However, if I use value = ["a", "b", "c"] then it works well!
I couldn't figure out what the problem is.
Below is the code snippet of ./sre-iac-modules/network/nat/production/outputs.tf
output "eip_ids" {
  value = aws_eip.for_nat[*].id
  # value = [aws_eip.nat-gw-eip.*.id]
  # value = aws_eip.for_nat.id
  # value = ["a", "b", "c"]
}
Below is the code snippet of ./sre-iac-modules/network/nat/production/main.tf
resource "aws_eip" "for_nat" {
  count = length(var.target_subnets)
  vpc   = true
}
And finally, here is the main.tf code snippet. (sre-iac-live/dev/services/wink/network/main.tf)
module "wink_vpc" {
  # .... skip ....
}

module "wink_nat" {
  # Relative path references
  source = "../../../../../sre-iac-modules/network/nat"

  region_id      = var.region_id
  env            = var.env
  service_code   = var.service_code
  target_subnets = module.wink_vpc.protected_subnet_ids
  is_production  = true

  depends_on = [module.wink_vpc]
}
I've been stuck on this issue for a day and need a Terraform guru's help.
Please give me your advice.
Thank you so much in advance.
Cheers!
Your production module has a count meta-argument. To reference the module's output you have to use an index, like:
value = module.production[0].eip_ids
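Because count makes module.production a list, the [0] index will itself fail when is_production is false and the module has zero instances. Terraform 0.15+ has one(), which returns the single element of a list or null when the list is empty; a count-safe sketch:

```hcl
output "eip_ids" {
  # null when the production module wasn't created (is_production = false)
  value = one(module.production[*].eip_ids)
}
```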

Can't export fromIni from @aws-sdk/credential-providers

I'm working on a React/Node.js app and I'm trying to read my IAM user credentials from the ~/.aws/credentials file. I am trying to use fromIni from the @aws-sdk/credential-providers node package. According to the AWS SDK v3 documentation, I can do the following:
import { fromIni } from "@aws-sdk/credential-providers"; // ES6 import
// const { fromIni } = require("@aws-sdk/credential-providers"); // CommonJS import
const client = new FooClient({
  credentials: fromIni({
    // Optional. The configuration profile to use. If not specified, the provider will use the value
    // in the `AWS_PROFILE` environment variable or a default of `default`.
    profile: "profile",
    // Optional. The path to the shared credentials file. If not specified, the provider will use
    // the value in the `AWS_SHARED_CREDENTIALS_FILE` environment variable or a default of
    // `~/.aws/credentials`.
    filepath: "~/.aws/credentials",
    // Optional. The path to the shared config file. If not specified, the provider will use the
    // value in the `AWS_CONFIG_FILE` environment variable or a default of `~/.aws/config`.
    configFilepath: "~/.aws/config",
    // Optional. A function that returns a promise fulfilled with an MFA token code for the
    // provided MFA serial code. If a profile requires an MFA code and `mfaCodeProvider` is not a
    // valid function, the credential provider promise will be rejected.
    mfaCodeProvider: async (mfaSerial) => {
      return "token";
    },
    // Optional. Custom STS client configurations overriding the default ones.
    clientConfig: { region },
  }),
});
But when I try this in my index.js file:
import { fromIni } from '@aws-sdk/credential-providers';

const createLink = {
  url: config.aws_appsync_graphqlEndpoint,
  region: config.aws_appsync_region,
  auth: {
    type: config.aws_appsync_authenticationType,
    credentials: fromIni()
  }
};
and then run npm start, I get the following error:
export 'fromIni' (imported as 'fromIni') was not found in '@aws-sdk/credential-providers' (possible exports: fromCognitoIdentity, fromCognitoIdentityPool, fromTemporaryCredentials, fromWebToken)
It seems like the function I want isn't exported from the package but the documentation says otherwise.
Edit:
The output of npm ls for @aws-sdk/credential-providers / @aws-sdk/credential-provider-ini:
port-dashboard@0.1.0 C:\Users\kshang\Documents\pov-ui
├─┬ @aws-sdk/client-cognito-identity-provider@3.79.0
│ ├─┬ @aws-sdk/client-sts@3.79.0
│ │ └─┬ @aws-sdk/credential-provider-node@3.79.0
│ │   └── @aws-sdk/credential-provider-ini@3.79.0
│ └─┬ @aws-sdk/credential-provider-node@3.79.0
│   └── @aws-sdk/credential-provider-ini@3.79.0
├─┬ @aws-sdk/credential-providers@3.79.0
│ ├─┬ @aws-sdk/client-cognito-identity@3.79.0
│ │ └─┬ @aws-sdk/credential-provider-node@3.79.0
│ │   └── @aws-sdk/credential-provider-ini@3.79.0 deduped
│ └── @aws-sdk/credential-provider-ini@3.79.0
└─┬ aws-amplify@4.3.20
  ├─┬ @aws-amplify/analytics@5.2.5
  │ └─┬ @aws-sdk/client-firehose@3.6.1
  │   └─┬ @aws-sdk/credential-provider-node@3.6.1
  │     ├── @aws-sdk/credential-provider-ini@3.6.1
  │     └─┬ @aws-sdk/credential-provider-process@3.6.1
  │       └── @aws-sdk/credential-provider-ini@3.6.1 deduped
  └─┬ @aws-amplify/geo@1.3.1
    └─┬ @aws-sdk/client-location@3.48.0
      └─┬ @aws-sdk/credential-provider-node@3.48.0
        └── @aws-sdk/credential-provider-ini@3.48.0
Update: After doing more research and talking to some AWS experts, it turns out we'll need to use Amazon Cognito in order to get credentials for our browser-based app.
I get the same error trying to use fromIni() for a simple CreateInvalidationCommand using the CloudFrontClient.
Trying to configure the client without the credentials key results in an Error: Credential is missing.
My current workaround for developing locally is using a .env.local file, and using the accessKeyId and secretAccessKey properties for my config:
const cloudfront = new CloudFrontClient({
  credentials: {
    accessKeyId: process.env.REACT_APP_AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.REACT_APP_SECRET_ACCESS_KEY,
  },
  region: "us-east-1",
});
Which is obviously not ideal, but it works locally for testing and development.
I also wanted to note that fromEnv() is also missing from the exports, which causes issues with this workaround on EC2 instances.
Either way, hoping to see this resolved in an update soon.
cache-invalidator@0.1.0 /Users/miguelmaldonado/Desktop/projects/prototypes/cache-invalidator
├─┬ @aws-sdk/client-cloudfront@3.85.0
│ └─┬ @aws-sdk/credential-provider-node@3.85.0
│   └── @aws-sdk/credential-provider-ini@3.85.0 deduped
└─┬ @aws-sdk/credential-providers@3.85.0
  └── @aws-sdk/credential-provider-ini@3.85.0

Convert relative imports into absolute imports in Typescript

Setting the baseUrl property to src in tsconfig.json has allowed me to import modules relative to the baseUrl like this:
import { ArrowRightIcon } from 'Shared/icons/navigation'
instead of :
import { ArrowRightIcon } from '../../../Shared/icons/navigation'
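For reference, the tsconfig.json fragment that enables this is just the baseUrl setting:

```json
{
  "compilerOptions": {
    "baseUrl": "src"
  }
}
```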
Question
How can I efficiently transform the thousands of such relative imports into absolute imports in my application, given that the Find All functionality of any IDE does not understand the resolution context?
Currently, I would have to replace all the occurrences of ../../../Shared/, ../../../../Shared/, and so on with Shared/, and do the same for all the directories in src. Even if I do that with the help of regex, it won't help in cases like example no. 2 below.
Other examples:
import { getActiveTabIdx } from 'Shared/utils/shared.helpers'
instead of import { getActiveTabIdx } from '../../../../../../../Shared/utils/shared.helpers'
import { TabWithLink } from 'Shared/components/Header'
instead of import { TabWithLink } from '../../components/Header'
import { layoutReducer } from 'Layout/state/layout.reducer'
instead of import { layoutReducer } from '../../Layout/state/layout.reducer'
My directory structure is:
.
├── src
│   ├── Shared
│   │   ├── components
│   │   ...
│   ├── Notification
│   │   ├── components
│   │   ...
│   ├── Planning
│   │   ├── components
│   │   ...
│   ├── Behaviour
│   │   ├── components
│   │   ...
│   └── Layout
│       ├── components
│       ├── state
│       ...
...
Currently installed versions:
typescript: 4.2.4
vscode: 1.57.1
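Since no find-and-replace understands where each file sits relative to src, one option is a small script that resolves each relative specifier against the importing file's directory, mimicking the baseUrl lookup. A rough sketch (SRC_ROOT and the import regex are my assumptions; it ignores multiline imports, require() calls, and path aliases):

```python
import os
import re

SRC_ROOT = "src"  # must match the tsconfig baseUrl (assumption)

# Matches: from '<relative specifier>'  — single or double quotes
IMPORT_RE = re.compile(r"""(from\s+['"])(\.\.?(?:/[^'"]*)?)(['"])""")

def absolutize(file_path: str, source: str) -> str:
    """Rewrite relative import specifiers in `source` (the contents of
    `file_path`, given relative to the project root) into specifiers
    relative to SRC_ROOT, the way baseUrl resolution would see them."""
    file_dir = os.path.dirname(file_path)

    def repl(m):
        prefix, spec, quote = m.groups()
        # Resolve the specifier against the importing file's directory
        resolved = os.path.normpath(os.path.join(file_dir, spec))
        # Only rewrite imports that land inside SRC_ROOT
        if not resolved.startswith(SRC_ROOT + os.sep):
            return m.group(0)
        absolute = os.path.relpath(resolved, SRC_ROOT).replace(os.sep, "/")
        return prefix + absolute + quote

    return IMPORT_RE.sub(repl, source)
```

Running this over each .ts/.tsx file (and writing the result back) would cover examples like no. 2, because the rewrite depends on the importing file's own location rather than on a fixed number of ../ segments.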

How to properly serve static files from a Flask server?

What is the proper way of serving static files (images, PDFs, Docs etc) from a flask server?
I have used the send_from_directory method before and it works fine. Here is my implementation:
@app.route('/public/assignments/<path:filename>')
def file(filename):
    return send_from_directory("./public/assignments/", filename, as_attachment=True)
However, if I have multiple different folders, it can get a bit hectic and repetitive, because you are essentially writing the same code for different file locations - meaning if I wanted to display files for a user instead of an assignment, I'd have to change it to /public/users/<path:filename> instead of /public/assignments/<path:filename>.
The way I thought of solving this is essentially making a /file/<path:filepath> route, where the filepath is the entire path to the destination folder + the file name and extension, instead of just the file name and extension. Then I did some formatting and separated the parent directory from the file itself and used that data when calling the send_from_directory function:
@app.route('/file/<path:filepath>', methods=["GET"])
def general_static_files(filepath):
    filepath = filepath.split("/")
    _dir = ""
    for i, p in enumerate(filepath):
        if i < len(filepath) - 1:
            _dir = _dir + (p + "/")
    return send_from_directory(("./" + _dir), filepath[len(filepath) - 1], as_attachment=True)
If we simulate the following request to this route:
curl http://127.0.0.1:5000/file/public/files/jobs/images/job_43_image_1.jpg
the _dir variable will hold the ./public/files/jobs/images/ value, and then filepath[len(filepath) - 1] holds the job_43_image_1.jpg value.
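As an aside, the directory/filename split that the loop computes can be expressed more directly with os.path.split; a minimal sketch (the function name is mine, not from the question):

```python
import os.path

def split_filepath(filepath):
    """Split 'public/files/jobs/images/job_43_image_1.jpg' into the
    './'-prefixed directory part and the bare filename, matching what
    the manual loop in the route computes."""
    directory, filename = os.path.split(filepath)
    prefix = "./" + directory + "/" if directory else "./"
    return prefix, filename
```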
If I hit this route, I get a 404 - Not Found response, but all the code in the route body is being executed.
I suspect that the send_from_directory function is the reason why I'm getting a 404 - Not Found. However, I do have the image job_43_image_1.jpg stored inside the /public/files/jobs/images/ directory.
I'm afraid I don't see a lot I can do here except hope that someone has encountered the same issue/problem and found a way to fix it.
Here is the folder tree:
├── [2050] app.py
├── [2050] public
│   ├── [2050] etc
│   └── [2050] files
│       ├── [2050] jobs
│       │   ├── [2050] files
│       │   └── [2050] images
│       │       ├── [2050] job_41_image_decline_1.jpg
│       │       ├── [2050] job_41_image_decline_2554.jpg
│       │       ├── [2050] ...
│       ├── [2050] shop
│       └── [2050] videos
└── [2050] server_crash.log
Edit 1: I have set up the static_url_path. I have no reason to believe that that could be the cause of my problem.
Edit 2: Added tree
Pass these arguments when you initialise the app:
app = Flask(__name__, static_folder='public',
            static_url_path='frontend_public')
This would make the file public/blah.txt available at http://example.com/frontend_public/blah.txt.
static_folder sets the folder on the filesystem
static_url_path sets the path used within the URL
If neither variable is set, both default to 'static'.
Hopefully this is what you're asking.

AWS Glue Crawler adding tables for every partition?

I have several thousand files in an S3 bucket in this form:
├── bucket
│   ├── somedata
│   │   ├── year=2016
│   │   ├── year=2017
│   │   │   ├── month=11
│   │   │   │   ├── sometype-2017-11-01.parquet
│   │   │   │   ├── sometype-2017-11-02.parquet
│   │   │   │   ├── ...
│   │   │   ├── month=12
│   │   │   │   ├── sometype-2017-12-01.parquet
│   │   │   │   ├── sometype-2017-12-02.parquet
│   │   │   │   ├── ...
│   │   ├── year=2018
│   │   │   ├── month=01
│   │   │   │   ├── sometype-2018-01-01.parquet
│   │   │   │   ├── sometype-2018-01-02.parquet
│   │   │   │   ├── ...
│   ├── moredata
│   │   ├── year=2017
│   │   │   ├── month=11
│   │   │   │   ├── moretype-2017-11-01.parquet
│   │   │   │   ├── moretype-2017-11-02.parquet
│   │   │   │   ├── ...
│   │   ├── year=...
etc
Expected behavior:
The AWS Glue Crawler creates one table for each of somedata, moredata, etc., and creates partitions for each table based on the children's path names.
Actual Behavior:
The AWS Glue Crawler performs the behavior above, but ALSO creates a separate table for every partition of the data, resulting in several hundred extraneous tables (and more extraneous tables with every data add and new crawl).
I see no place to be able to set something or otherwise prevent this from happening... Does anyone have advice on the best way to prevent these unnecessary tables from being created?
Adding to the excludes
**_SUCCESS
**crc
worked for me (see the AWS page glue/add-crawler). Double stars match files at all folder (i.e. partition) depths. I had an _SUCCESS living a few levels up.
Make sure you set up logging for glue, which quickly points out permission errors etc.
Use the Create a Single Schema for Each Amazon S3 Include Path option to avoid the AWS Glue Crawler adding all these extra tables.
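If you manage the crawler via the CLI/API rather than the console, this option corresponds (as far as I know — worth verifying against the current Glue docs) to the crawler's Configuration JSON:

```json
{
  "Version": 1.0,
  "Grouping": {
    "TableGroupingPolicy": "CombineCompatibleSchemas"
  }
}
```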
I had this problem and ended up with ~7k tables 😅 so I wrote the following script to remove them. It requires jq.
#!/bin/sh
aws glue get-tables --region <YOUR AWS REGION> --database-name <YOUR AWS GLUE DATABASE> | jq '.TableList[] | .Name' | grep <A PATTERN THAT MATCHES YOUR TABLENAMEs> > /tmp/table-names.json
cd /tmp
mkdir table-names
cd table-names
split -l 50 ../table-names.json
for f in *; do
  cat "$f" | tr '\r\n' ' ' | xargs aws glue batch-delete-table --region <YOUR AWS REGION> --database-name <YOUR AWS GLUE DATABASE> --tables-to-delete
done
Check if you have empty folders inside. When Spark writes to S3, sometimes the _temporary folder is not deleted, which will make the Glue crawler create a table for each partition.
I was having the same problem.
I added *crc* as an exclude pattern to the AWS Glue crawler and it worked.
Or, if you crawl entire directories, add */*crc*.
So, my case was a little bit different, but I was seeing the same behaviour.
I got a data structure like this:
├── bucket
│   ├── somedata
│   │   ├── event_date=2016-01-01
│   │   ├── event_date=2016-01-02
So when I started the AWS Glue Crawler, instead of updating the tables, this pipeline was creating one table per date. After digging into the problem, I found that someone had added a column as a bug in the JSON file: instead of id it was ID. Because my data is parquet, the pipeline was working well to store the data and retrieve it inside EMR. But Glue was crashing pretty badly, probably because Glue converts everything to lowercase. After removing the uppercase column, Glue started to work like a charm.
You need to have separate crawlers for each table / file type. So create one crawler that looks at s3://bucket/somedata/ and a second crawler that looks at s3://bucket/moredata/.