I'm working on a React/Node.js app and I'm trying to read my IAM User credentials from the ~/.aws/credentials file. I am trying to use fromIni from the @aws-sdk/credential-providers node package. According to the AWS SDK v3 documentation, I can do the following:
import { fromIni } from "@aws-sdk/credential-providers"; // ES6 import
// const { fromIni } = require("@aws-sdk/credential-providers"); // CommonJS import

const client = new FooClient({
  credentials: fromIni({
    // Optional. The configuration profile to use. If not specified, the provider will use the value
    // in the `AWS_PROFILE` environment variable or a default of `default`.
    profile: "profile",
    // Optional. The path to the shared credentials file. If not specified, the provider will use
    // the value in the `AWS_SHARED_CREDENTIALS_FILE` environment variable or a default of
    // `~/.aws/credentials`.
    filepath: "~/.aws/credentials",
    // Optional. The path to the shared config file. If not specified, the provider will use the
    // value in the `AWS_CONFIG_FILE` environment variable or a default of `~/.aws/config`.
    configFilepath: "~/.aws/config",
    // Optional. A function that returns a promise fulfilled with an MFA token code for the
    // provided MFA serial code. If a profile requires an MFA code and `mfaCodeProvider` is not a
    // valid function, the credential provider promise will be rejected.
    mfaCodeProvider: async (mfaSerial) => {
      return "token";
    },
    // Optional. Custom STS client configurations overriding the default ones.
    clientConfig: { region },
  }),
});
But when I try this in my index.js file:
import { fromIni } from '@aws-sdk/credential-providers';
const createLink = {
url: config.aws_appsync_graphqlEndpoint,
region: config.aws_appsync_region,
auth: {
type: config.aws_appsync_authenticationType,
credentials: fromIni()
}
};
and then run npm start, I get the following error:
export 'fromIni' (imported as 'fromIni') was not found in '@aws-sdk/credential-providers' (possible exports: fromCognitoIdentity, fromCognitoIdentityPool, fromTemporaryCredentials, fromWebToken)
It seems like the function I want isn't exported from the package but the documentation says otherwise.
Edit:
The output of npm ls @aws-sdk/credential-providers @aws-sdk/credential-provider-ini:
port-dashboard@0.1.0 C:\Users\kshang\Documents\pov-ui
├─┬ @aws-sdk/client-cognito-identity-provider@3.79.0
│ ├─┬ @aws-sdk/client-sts@3.79.0
│ │ └─┬ @aws-sdk/credential-provider-node@3.79.0
│ │   └── @aws-sdk/credential-provider-ini@3.79.0
│ └─┬ @aws-sdk/credential-provider-node@3.79.0
│   └── @aws-sdk/credential-provider-ini@3.79.0
├─┬ @aws-sdk/credential-providers@3.79.0
│ ├─┬ @aws-sdk/client-cognito-identity@3.79.0
│ │ └─┬ @aws-sdk/credential-provider-node@3.79.0
│ │   └── @aws-sdk/credential-provider-ini@3.79.0 deduped
│ └── @aws-sdk/credential-provider-ini@3.79.0
└─┬ aws-amplify@4.3.20
  ├─┬ @aws-amplify/analytics@5.2.5
  │ └─┬ @aws-sdk/client-firehose@3.6.1
  │   └─┬ @aws-sdk/credential-provider-node@3.6.1
  │     ├── @aws-sdk/credential-provider-ini@3.6.1
  │     └─┬ @aws-sdk/credential-provider-process@3.6.1
  │       └── @aws-sdk/credential-provider-ini@3.6.1 deduped
  └─┬ @aws-amplify/geo@1.3.1
    └─┬ @aws-sdk/client-location@3.48.0
      └─┬ @aws-sdk/credential-provider-node@3.48.0
        └── @aws-sdk/credential-provider-ini@3.48.0
Update: After doing more research and talking to some AWS experts, it turns out we'll need to use Amazon Cognito in order to get credentials for our browser-based app.
I get the same error trying to use fromIni() for a simple CreateInvalidationCommand using the CloudFrontClient.
Trying to configure the client without the credentials key results in an Error: Credential is missing.
My current workaround for developing locally is using a .env.local file, and using the accessKeyId and secretAccessKey properties for my config:
const cloudfront = new CloudFrontClient({
credentials: {
accessKeyId: process.env.REACT_APP_AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.REACT_APP_SECRET_ACCESS_KEY,
},
region: "us-east-1",
});
Which is obviously not ideal, but it works locally for testing and development.
I also wanted to note that fromEnv() is missing from the exports as well, which causes issues with this workaround on EC2 instances.
Either way, hoping to see this resolved in an update soon.
cache-invalidator@0.1.0 /Users/miguelmaldonado/Desktop/projects/prototypes/cache-invalidator
├─┬ @aws-sdk/client-cloudfront@3.85.0
│ └─┬ @aws-sdk/credential-provider-node@3.85.0
│   └── @aws-sdk/credential-provider-ini@3.85.0 deduped
└─┬ @aws-sdk/credential-providers@3.85.0
  └── @aws-sdk/credential-provider-ini@3.85.0
I have a module, let's say 'redshift_dw' that contains the aws_iam_service_linked_role for redshift.amazonaws.com.
The service role already exists in AWS, so I import it:
terraform import aws_iam_service_linked_role.redshift "arn:aws:iam::${AWSACCOUNTID}:role/aws-service-role/redshift.amazonaws.com/AWSServiceRoleForRedshift"
In my config I call the module multiple times (via locals, yamldecode(config.yml) and for_each, similar to this blog post).
My set up:
.
├── environments
│ ├── prod
│ └── test
│ ├── config.yml
│ ├── locals.tf
│ ├── iam.tf
│ ├── modules.tf
│ ├── providers.tf
│ ├── terraform.tf
└── modules
└── redshift_dw
├── iam.tf
├── outputs.tf
├── providers.tf
├── redshift.tf
└── variables.tf
environment folder
Input values:
# environments/test/config.yml
clients:
  - code: XYZ
    some_var: 123
Reading the input values as locals:
# environments/test/locals.tf
locals {
config = yamldecode(file("${path.root}/config.yml"))
}
# environments/test/iam.tf
resource "aws_iam_service_linked_role" "redshift" {
aws_service_name = "redshift.amazonaws.com"
}
Create module instances for each client:
# environments/test/modules.tf
module "my_module_call_for_redshift_dw" {
for_each = { for x in local.config.clients : x.code => x }
source = "../../modules/redshift_dw"
some_var = each.value.some_var
# further configuration
}
modules folder
Creating the resource in the module:
# modules/redshift_dw/iam.tf
resource "aws_iam_service_linked_role" "redshift" {
aws_service_name = var.rs_service_linked_role
}
EDIT: Consuming the linked role in aws_redshift_cluster.iam_roles:
# modules/redshift_dw/redshift.tf
resource "aws_redshift_cluster" "default" {
cluster_identifier = "cluster-XYZ"
# ...
iam_roles = [
aws_iam_role.client.arn,
aws_iam_service_linked_role.redshift.arn # <---
]
/EDIT
Input value as a default value:
# modules/redshift_dw/variables.tf
variable "rs_service_linked_role" {
type = string
default = "redshift.amazonaws.com"
}
What happens
If I do a terraform plan now, it wants to create a resource for each module, instead of "re-using" the only one that I initially imported:
Terraform will perform the following actions:
# module.redshift_dw["XYZ"].aws_iam_service_linked_role.redshift will be created
+ resource "aws_iam_service_linked_role" "redshift" {
+ arn = (known after apply)
+ aws_service_name = "redshift.amazonaws.com"
+ create_date = (known after apply)
+ id = (known after apply)
+ name = (known after apply)
+ path = (known after apply)
+ tags_all = (known after apply)
+ unique_id = (known after apply)
}
But I would expect it to READ the existing resource. As I understand I cannot create a service linked role for each instance/client but need to re-use the only existing one. How is that possible?
Based on the requirements defined in the question, I would say you need to either:
Use the service linked role IAM resource attribute reference (a better way)
Hardcode the IAM service linked role
To achieve that, in the test environment, when calling the Redshift module, you would just specify the attribute exported by the resource as the input for a module variable and remove the resource for the service linked role completely:
# NOT NEEDED in modules/redshift_dw/iam.tf
resource "aws_iam_service_linked_role" "redshift" {
aws_service_name = var.rs_service_linked_role
}
Since iam.tf and the Redshift module call are in the same directory, the change you would have to make is:
module "my_module_call_for_redshift_dw" {
for_each = { for x in local.config.clients : x.code => x }
source = "../../modules/redshift_dw"
# further configuration
service_linked_role_arn = aws_iam_service_linked_role.redshift.arn
}
You would also have to have the variable defined on the Redshift module level:
variable "service_linked_role_arn" {
description = "Service linked role ARN for Redshift."
type = string
}
The module code for the Redshift cluster would then look like:
resource "aws_redshift_cluster" "default" {
cluster_identifier = "cluster-XYZ"
# ...
iam_roles = [
aws_iam_role.client.arn,
var.service_linked_role_arn
]
...
}
Adding resource blocks in modules (and anywhere else for that matter) will always try to create a new resource. In this case, it would fail because that service linked role already exists, but plan shows what it would have tried to do.
Usually, there is another way to get information about what was created already: data sources. There is no dedicated data source for service linked roles, but you could fetch the role with the generic IAM role data source [1] if you do not want to update the module code.
[1] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_role
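Sketched out, the data-source route mentioned above would look something like this (a hypothetical snippet; the role name matches the one in the question's terraform import command):

```hcl
# Look up the existing service linked role by name instead of
# declaring an aws_iam_service_linked_role resource:
data "aws_iam_role" "redshift_slr" {
  name = "AWSServiceRoleForRedshift"
}

# Its ARN can then be fed to the module input, e.g.:
# service_linked_role_arn = data.aws_iam_role.redshift_slr.arn
```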
Gurus!
I'm developing Terraform modules to provision NAT resources for production and non-production environments. There are two repositories: one for the Terraform modules and another for the live environment of each account (e.g. dev, stage, prod, ...).
I have a problem when accessing an output variable of the network/nat module, and it has been wearing me out. Please see below.
For the Terraform modules (sre-iac-module repo):
❯ tree sre-iac-modules/network/nat/
sre-iac-modules/network/nat/
├── main.tf
├── non_production
│ └── main.tf
├── outputs.tf
├── production
│ ├── main.tf
│ ├── outputs.tf
│ └── variables.tf
└── variables.tf
In the main code snippet, sre-iac-live/dev/services/wink/network/main.tf, I cannot access the output variable named module.wink_nat.eip_ids.
When I run terraform plan or terraform console, I always get the following error:
│ Error: Unsupported attribute
│
│ on ../../../../../sre-iac-modules/network/nat/outputs.tf line 2, in output "eip_ids":
│ 2: value = module.production.eip_ids
│ ├────────────────
│ │ module.production is tuple with 1 element
│
│ This value does not have any attributes.
╵
Here are the ../../../../../sre-iac-modules/network/nat/outputs.tf and main.tf:
output "eip_ids" {
value = module.production.eip_ids
# value = ["a", "b", "c"]
}
----
main.tf
module "production" {
source = "./production"
count = var.is_production ? 1 : 0
env = ""
region_id = ""
service_code = ""
target_route_tables = []
target_subnets = var.target_subnets
}
module "non_production" {
source = "./non_production"
count = var.is_production ? 0 : 1
}
However, if I use value = ["a", "b", "c"], then it works fine!
I couldn't figure out what the problem is.
Below is the code snippet of ./sre-iac-modules/network/nat/production/outputs.tf
output "eip_ids" {
value = aws_eip.for_nat[*].id
# value = [aws_eip.nat-gw-eip.*.id]
# value = aws_eip.for_nat.id
# value = ["a", "b", "c"]
}
Below is the code snippet of ./sre-iac-modules/network/nat/production/main.tf
resource "aws_eip" "for_nat" {
count = length(var.target_subnets)
vpc = true
}
And finally, here is the main.tf code snippet. (sre-iac-live/dev/services/wink/network/main.tf)
module "wink_vpc" {
.... skip ....
}
module "wink_nat" {
# Relative path references
source = "../../../../../sre-iac-modules/network/nat"
region_id = var.region_id
env = var.env
service_code = var.service_code
target_subnets = module.wink_vpc.protected_subnet_ids
is_production = true
depends_on = [module.wink_vpc]
}
I've been stuck on this issue for a day and I need a Terraform guru's help.
Please give me your advice.
Thank you so much in advance.
Cheers!
Your production module has a count meta-argument, so module.production is a tuple of module instances (that is what the error message is telling you). To reference the module you have to use an index, like:
value = module.production[0].eip_ids
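Building on that, a sketch of an outputs.tf that covers both environments (assuming the non_production module exports eip_ids too):

```hcl
output "eip_ids" {
  # Both nested modules use count, so exactly one instance exists;
  # pick it by index depending on the environment flag.
  value = var.is_production ? module.production[0].eip_ids : module.non_production[0].eip_ids
}
```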
Setting the baseUrl property to src in tsconfig.json has allowed me to import modules relative to the baseUrl like this:
import { ArrowRightIcon } from 'Shared/icons/navigation'
instead of :
import { ArrowRightIcon } from '../../../Shared/icons/navigation'
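For reference, a minimal sketch of the tsconfig.json setting being described (assuming sources live under src):

```json
{
  "compilerOptions": {
    "baseUrl": "src"
  }
}
```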
Question
How can I efficiently transform the thousands of such relative imports into absolute imports in my application since Find All functionality of any IDE does not understand the context?
Currently, I have to replace all occurrences of ../../../Shared/, ../../../../Shared/, and so on with Shared/, and likewise for all the other directories in src. Even with the help of regex, that won't cover cases like example no. 2 below.
Other examples:
import { getActiveTabIdx } from 'Shared/utils/shared.helpers'
instead of import { getActiveTabIdx } from '../../../../../../../Shared/utils/shared.helpers'
import { TabWithLink } from 'Shared/components/Header'
instead of import { TabWithLink } from '../../components/Header'
import { layoutReducer } from 'Layout/state/layout.reducer'
instead of import { layoutReducer } from '../../Layout/state/layout.reducer'
My directory structure is:
.
├── src
│   ├── Shared
│   │   ├── components
│   │   ...
│   ├── Notification
│   │   ├── components
│   │   ...
│   ├── Planning
│   │   ├── components
│   │   ...
│   ├── Behaviour
│   │   ├── components
│   │   ...
│   └── Layout
│       ├── components
│       ├── state
│       ...
...
Currently installed versions:
typescript: 4.2.4
vscode: 1.57.1
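One approach to the rewrite described above is a small codemod script. Here is a sketch in Python (the absolutize and rewrite_tree helper names are made up; it assumes imports are single-line and should be resolved against the src baseUrl):

```python
import os
import re

# Matches import specifiers that start with one or more "../" segments.
IMPORT_RE = re.compile(r"""(from\s+['"])((?:\.\./)+)([^'"]+)(['"])""")

def absolutize(src_root, file_path, line):
    """Rewrite relative import specifiers in one source line so they are
    expressed relative to src_root (the tsconfig baseUrl)."""
    def repl(match):
        prefix, dots, rest, quote = match.groups()
        # Resolve the relative path against the importing file's directory.
        target = os.path.normpath(
            os.path.join(os.path.dirname(file_path), dots + rest))
        rel = os.path.relpath(target, src_root)
        if rel.startswith(".."):
            return match.group(0)  # target lies outside baseUrl; leave as-is
        return prefix + rel.replace(os.sep, "/") + quote
    return IMPORT_RE.sub(repl, line)

def rewrite_tree(src_root):
    """Apply the rewrite to every .ts/.tsx file under src_root."""
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            if not name.endswith((".ts", ".tsx")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as fh:
                lines = fh.readlines()
            rewritten = [absolutize(src_root, path, l) for l in lines]
            if rewritten != lines:
                with open(path, "w", encoding="utf-8") as fh:
                    fh.writelines(rewritten)
```

Running rewrite_tree("src") once (after a commit, so it is easy to diff) would convert imports like '../../../Shared/icons/navigation' into 'Shared/icons/navigation' regardless of how deep the importing file sits.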
I want to deploy an RDS database to AWS with a secret from AWS Secrets Manager. I have:
├─ environments
│ └─ myenv
│ ├── main.tf
│ ├── locals.tf
│ └── variables.tf
└─ modules
├─ db
│ ├── main.tf
│ └── variables.tf
└─ secrets
└── main.tf
In myenv/main.tf I define a module mydb that has modules/db/main.tf as its source, where a database resource is defined. Save for the password, it all works: I specify values in blocks in myenv and the values "trickle down".
But for the credentials, I don't want to hard code them in myenv of course.
Instead in modules/secrets I define
data "aws_secretsmanager_secret_version" "my_credentials" {
# Fill in the name you gave to your secret
secret_id = "my-secret-id"
}
and with another block:
locals {
decoded_secrets = jsondecode(data.aws_secretsmanager_secret_version.my_credentials.secret_string)
}
I decode the secrets, and now I want to reference them as e.g. local.decoded_secrets.username in myenv/main. That is my interpretation of the tutorials. But it doesn't work: if I put the locals block in myenv, it cannot reference data, and when I put it in modules/secrets, then myenv cannot reference locals.
How can I combine the values of these two modules in my myenv/main?
Define an output in the secrets module. Define an input in the db module. Pass the output value from secrets to the input property in db.
For example if you defined an output named "password" in secrets and an input named "password" in db, then in your db module declaration you would pass the value like this:
module "secrets" {
  source = "../../modules/secrets"
}
module "db" {
  source = "../../modules/db"
  password = module.secrets.password
}
You have a few options available to you here to pass the secret to the database module.
The smallest thing you need to do from your existing setup would be to call both modules at the same time and pass an output from the secrets module to the database module like this:
.
├── environments
│ └── myenv
│ ├── locals.tf
│ ├── main.tf
│ └── variables.tf
└── modules
├── db
│ ├── main.tf
│ └── variables.tf
└── secrets
├── main.tf
└── outputs.tf
modules/secrets/outputs.tf
output "secret_id" {
value = aws_secretsmanager_secret_version.secret.secret_id
}
environments/myenv/main.tf
module "secrets" {
source = "../../modules/secrets"
# ...
}
module "db" {
source = "../../modules/db"
# ...
secret_id = module.secrets.secret_id
}
A better approach, however, might be to have the database module create and manage its own secret, rather than requiring the secret to be passed in as a parameter at all. If you want to reuse the secrets module with other modules, you could make it a child module of the database module; if this is the only place you currently use the secrets module, then unnesting it makes things simpler.
Nesting modules
modules/db/main.tf
module "database_password_secret_id" {
source = "../secrets"
# ...
}
data "aws_secretsmanager_secret_version" "database_password" {
secret_id = module.database_password_secret_id.secret_id
}
Unnesting the modules
.
├── environments
│ └── myenv
│ ├── locals.tf
│ ├── main.tf
│ └── variables.tf
└── modules
└── db
├── main.tf
├── secrets.tf
└── variables.tf
modules/db/secrets.tf
resource "aws_secretsmanager_secret" "database_password" {
name = "database-password"
}
resource "random_password" "database_password" {
length = 32
}
resource "aws_secretsmanager_secret_version" "database_password" {
secret_id = aws_secretsmanager_secret.database_password.id
secret_string = random_password.database_password.result
}
modules/db/main.tf
resource "aws_db_instance" "database" {
# ...
password = aws_secretsmanager_secret_version.database_password.secret_string
}
I have a website I am currently working on in clojure. The local version of the site works perfectly, but when I push it to heroku, it is unable to find any of my static resources. I've checked, and git is including the resources/public folder, so it should be getting uploaded to heroku.
Here is my handler code:
(defroutes app
(GET "/" [] (layout/application "Home" (content/index)))
(GET "/commission" [] (layout/application "Commission Calculator" (content/commissionForm)))
(POST "/commission" [report] (layout/application "Commission Calculator" (content/commissionResults report)))
(route/resources "/"))
The resources directory is at the same level as the src folder, and the assets are named correctly. The project is located here.
Navigating directly to https://lit-falls-39572.herokuapp.com/css/master.css results in a 404 error, but navigating to localhost:5000/css/master.css correctly returns the CSS content when I host the site locally. (Navigating to /resources/public/css/master.css or /public/css/master.css does not work either.) Does anyone have any idea why it would behave differently on Heroku?
Edit: Here is the project structure:
├───.idea
│ ├───copyright
│ ├───inspectionProfiles
│ └───libraries
├───dev-resources
├───lib
├───resources
│ └───public
│ ├───CSS
│ ├───images
│ │ └───home
│ └───material-design
│ ├───css
│ ├───img
│ │ ├───examples
│ │ └───flags
│ ├───js
│ └───sass
│ └───material-kit
│ └───plugins
├───src
│ └───kandj
│ └───views
├───target
│ ├───classes
│ │ └───META-INF
│ │ └───maven
│ │ └───kandj
│ │ └───kandj
│ └───stale
└───test
The issue is that your CSS directory has an upper-case name (resources/public/CSS) while your URLs use lower-case css. Your local filesystem is most likely case-insensitive, so the lookup succeeds there, but Heroku's Linux filesystem is case-sensitive, so the same request 404s. Rename the directory to lower case and redeploy.
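A case-only rename can silently fail in git on case-insensitive filesystems (macOS/Windows), so a common workaround is a two-step move. Here is a sketch, demonstrated in a throwaway temp repo whose paths mirror the project (in the real project you would run only the two git mv lines from the project root, then commit and push):

```shell
# Set up a disposable repo that mimics resources/public/CSS:
tmp=$(mktemp -d) && cd "$tmp"
git init -q
mkdir -p resources/public/CSS
echo "body {}" > resources/public/CSS/master.css
git add -A
git -c user.email=demo@example.com -c user.name=demo commit -qm "init"

# The case-only rename goes through a temporary name so git records it
# even on case-insensitive filesystems:
git mv resources/public/CSS resources/public/css-tmp
git mv resources/public/css-tmp resources/public/css
git -c user.email=demo@example.com -c user.name=demo commit -qm "lowercase css dir"

ls resources/public/css/master.css
```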