How do you reference a dynamic terraform output in application code? - amazon-web-services

I'm creating a dynamodb table using terraform and the name attribute of the table looks something like this...
name = "${var.service}-${var.environment}-Item-table"
Depending on the environment, the name could be items-service-dev-Item-table or items-service-prod-Item-table. In my application code (JS) I obviously need to know the name of the table in order to interact with it, but the dynamic nature of the name makes that trickier.
I've considered going down the route of environment variables that are referenced by both the terraform and application code, but it seems messy. What's the best practice approach for handling something like this?

Is Terraform also deploying your application code? Usually you would have Terraform inject that value as an environment variable into the application it deploys.
If that's not possible, store the value in AWS Parameter Store.
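For example, here is a minimal sketch of both suggestions, assuming the application happens to be a Lambda function managed by the same Terraform configuration (the Lambda resource, the IAM role reference, and the parameter path are illustrative assumptions, not from the question):

resource "aws_dynamodb_table" "items" {
  name         = "${var.service}-${var.environment}-Item-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

# Option 1: inject the resolved table name into the application as an env var
resource "aws_lambda_function" "items_service" {
  function_name = "${var.service}-${var.environment}"
  role          = aws_iam_role.lambda_exec.arn # assumed to be defined elsewhere
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = "lambda.zip"

  environment {
    variables = {
      ITEM_TABLE_NAME = aws_dynamodb_table.items.name
    }
  }
}

# Option 2: publish the name to SSM Parameter Store for other consumers
resource "aws_ssm_parameter" "item_table_name" {
  name  = "/${var.service}/${var.environment}/item-table-name"
  type  = "String"
  value = aws_dynamodb_table.items.name
}

The JS code then either reads process.env.ITEM_TABLE_NAME, or fetches the parameter at startup with the AWS SDK's SSM GetParameter call.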

Related

AWS CDK - Exclude stage name from logical ID of resource

I have a CDK project where initially it was deployed via CLI. I am now wrapping it in a pipelines construct.
Old:
Project
|
Stacks
|
Resources
New:
Project
|
Pipeline
|
Stage
|
Stacks
|
Resources
The issue I'm running into is that there are resources in the application I would rather not have deleted; however, adding the stage causes the logical IDs to change from Stack-Resource to Stage-Stack-Resource. I found this article that claims you can provide an id of 'Default' to a resource and cause it to go unused when the logical ID is built. However, for some reason, when I pass an id of 'Default' to the stage, it simply uses that "Default" literal value instead of omitting it.
End goal is that I can keep my existing cloudformation resources, but have them deployed via this pipeline.
You can override the logical id manually like this:
S3 example:
const cfnBucket = s3Bucket.node.defaultChild as aws_s3.CfnBucket;
cfnBucket.overrideLogicalId('CUSTOMLOGICALID');
However, if you did not specify a logical id initially and do it now, CloudFormation will delete the original resource and create a new one with the new custom logical id because CloudFormation identifies resources by their logical ID.
Stage is something you define and it is not related to CloudFormation. You are probably using it in your Stack name or in your Resource names and that's why it gets included in the logical id.
Based on your project description, the only option to not have any resources deleted is: make one of the pipeline stages use the exact same stack name and resource names (without stage) as the CLI deployed version.
I ended up doing a full redeploy of the application. Luckily this was a development environment where trashing our data stores isn't a huge loss, but it would be much more of a concern in a production environment.

Access current environment name for a NextJS app running on amplify

I have added a few tables on DynamoDB using the amplify add storage command.
But the tables have a suffix that is the environment name (dev, prod, etc.).
How can I access the environment name in my NextJS backend so I can suffix the DynamoDB table names in my code?
Or is there another way to achieve what I want?
Amplify automatically creates DynamoDB tables (and also AppSync queries, etc.) to match your current Amplify environment. When you create a new environment (e.g. 'prod'), Amplify will automatically create duplicate 'prod' tables that perform the same as your 'dev' tables. I'm guessing that in your case you won't need to access environment variables.
If you are using AppSync/GraphQL to make calls, then you can use Amplify's built in dynamic env features here: https://docs.amplify.aws/cli-legacy/graphql-transformer/function/#usage
For example, you could set up a custom Lambda function to update your DynamoDB. You could then set up an AppSync call to that Lambda in your schema.graphql file.
There are some cases where you may need to access your environment variables. You can either set them up manually in .env.local, or, possibly easier, run a check in your NextJS JavaScript to determine the current domain:
const origin =
  typeof window !== "undefined" && window.location.origin
    ? window.location.origin
    : "";
console.log(origin); // "https://dev.<>.amplifyapp.com"
A better solution would be to follow this Amplify documentation, except I've tried it and it doesn't work.
I've explored each item in the left nav panel and there's no sign of the described Environment Variables section.
The docs describe accessing/updating env vars here, but apparently you can only find/use this feature if you've connected your Amplify app to GitHub first. (It would have been nice if the docs had clarified this!)

Terraform `name` vs `self_link` in GCP

In GCP, when using Terraform, I see that I can use the name attribute as well as self_link. So I am wondering if there are cases where I must use one of them.
For example:
resource "google_compute_ssl_policy" "custom_ssl_policy" {
name = "my-ssl-policy"
profile = "MODERN"
min_tls_version = "TLS_1_1"
}
This object can then be referred to as:
ssl_policy = google_compute_ssl_policy.custom_ssl_policy.name
and
ssl_policy = google_compute_ssl_policy.custom_ssl_policy.self_link
I know that object.name returns the resource's name, and object.self_link returns the GCP resource's URI.
I have tried this with several objects, and it works with both attributes, so I want to know whether this is trivial or whether there are situations where I should use one of them.
Here is the definition from the official documentation:
Nearly every GCP resource will have a name field. They are used as a
short way to identify resources, and a resource's display name in the
Cloud Console will be the one defined in the name field.
When linking resources in a Terraform config though, you'll primarily
want to use a different field, the self_link of a resource. Like name,
nearly every resource has a self_link. They look like:
https://www.googleapis.com/compute/v1/projects/foo/zones/us-central1-c/instances/terraform-instance
A resource's self_link is a unique reference to that resource. When
linking two resources in Terraform, you can use Terraform
interpolation to avoid typing out the self link!
Reference: https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started
One example: I can deploy two cloud functions with the same name in the same project but in different regions. In this case, if you had to reference both resources in Terraform code, you would be better off using the self_link, since it's a unique URI.
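As a concrete illustration, linking the SSL policy from the question into another resource by its self_link might look like this (the proxy, URL map, and certificate here are hypothetical and assumed to be defined elsewhere):

resource "google_compute_target_https_proxy" "default" {
  name             = "my-https-proxy" # hypothetical
  url_map          = google_compute_url_map.default.self_link
  ssl_certificates = [google_compute_ssl_certificate.default.self_link]
  ssl_policy       = google_compute_ssl_policy.custom_ssl_policy.self_link
}

Using self_link leaves no ambiguity about which project or region the referenced resource lives in, whereas name alone does not carry that information.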

How to conditionally update a resource in Terraform

Seems it's common practice to make use of count on a resource to conditionally create it in Terraform using a ternary statement.
I'd like to conditionally update an AWS Route 53 entry based on a push_to_prod variable. Meaning I don't want to delete the resource if I'm not pushing to production; I only want to update it, or leave the CNAME value as it is.
Has anyone done something like this before in Terraform?
Currently, as it stands, interpolation syntax isn't supported in lifecycle blocks (you can read more here), which makes this harder because otherwise you could use prevent_destroy. However, without more specifics I am going to take my best guess on how to get you there.
I would use the allow_overwrite property on the Route 53 record and set it based on your flag. That way, if you are pushing to prod, you can set it to false, which should trigger creating a new one. I haven't tested that.
Also note that if you don't make any changes to the Route 53 resource, Terraform shouldn't apply any changes to it; updating some part of the record is what will trigger the deployment.
You may want to combine this with some lifecycle events, but I don't have enough time to dig into that specific resource and how it behaves.
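A rough sketch of that allow_overwrite idea, with placeholder zone and record values and the push_to_prod flag from the question (untested, as noted above):

resource "aws_route53_record" "app" {
  zone_id         = var.zone_id
  name            = "app.example.com"
  type            = "CNAME"
  ttl             = 300
  records         = [var.cname_target]
  allow_overwrite = var.push_to_prod == "true" ? false : true # false when pushing to prod, per the suggestion above
}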
Two examples I can think of are:
type = "${var.push_to_prod == "true" ? "CNAME" : var.other_value}" - this will have a fixed other_value, there is no way to have terraform "ignore" the resource once it's being managed by terraform.
or
type = "${var.aws_route53_record_type}" and you can have dev.tfvars and prod.tfvars, with aws_route53_record_type defined as whatever you want for dev and CNAME for prod.
The thing is, with what you're trying to do ("I only want to update it, or leave the CNAME value as it is"), that's not how Terraform works. Terraform either manages the resource for you or it doesn't. If it's managing it, it'll update the resource based on the config you've defined in your .tf file. If it's not managing the resource, it won't modify it. It sounds like what you're really after is the second solution, where you pass two different configs from your .tfvars files into your .tf file and, based on the different configs, different resources are created. You can couple this with count to determine whether a resource should be created or not.
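For completeness, a sketch of that count pattern (the zone, record name, and target are placeholders):

resource "aws_route53_record" "prod_cname" {
  count   = var.push_to_prod == "true" ? 1 : 0
  zone_id = var.zone_id
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [var.cname_target]
}

Flipping push_to_prod back to "false" would destroy the record on the next apply, so this only controls whether the resource exists; it doesn't give you the "update it or leave it alone" behaviour the question asks for.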

Can I parameterize AWS lambda functions differently for staging and release resources?

I have a Lambda function invoked by S3 put events, which in turn needs to process the objects and write to a database on RDS. I want to test things out in my staging stack, which means I have a separate bucket, different database endpoint on RDS, and separate IAM roles.
I know how to configure the lambda function's event source and IAM stuff manually (in the Console), and I've read about lambda aliases and versions, but I don't see any support for providing operational parameters (like the name of the destination database) on a per-alias basis. So when I make a change to the function, right now it looks like I need a separate copy of the function for staging and production, and I would have to keep them in sync manually. All of the logic in the code would be the same, and while I get the source bucket and key as a parameter to the function when it's invoked, I don't currently have a way to pass in the destination stuff.
For the destination DB information, I could have a switch statement in the function body that checks the originating S3 bucket and makes a decision, but I hate making every function have to keep that mapping internally. That wouldn't work for the DB credentials or IAM policies, though.
I suppose I could automate all or most of this with the SDK. Has anyone set something like this up for a continuous integration-style deployment with Lambda, or is there a simpler way to do it that I've missed?
I found a workaround using Lambda function aliases. Given the context object, I can get the invoked_function_arn property, which has the alias (if any) at the end.
arn_string = context.invoked_function_arn
alias = arn_string.split(':')[-1]
Then I just use the alias as an index into a dict in my config.py module, and I'm good to go.
config[alias].host
config[alias].database
One thing I'm not crazy about is that I have to invoke my function from an alias every time, and now I can't use aliases for any other purpose without affecting this scheme. It would be nice to have explicit support for user parameters in the context object.