I have an AWS CodeBuild project connected to my GitHub account. Within my GitHub repository I have separate branches for each environment.
I have four environments in total (and therefore four GitHub branches) currently: dev, qa, customer1-poc, customer2-prod.
I use a multitude of environment variables within my project, and initially I was setting these up within the CodeBuild project under the Environment > Environment variables section. So each variable is stored four times, once per environment, distinguished by the environment name.
For example, if there is an env var called apiKey, it is saved in CodeBuild four times under the names:
apiKey_dev
apiKey_qa
apiKey_customer1poc
apiKey_customer2prod
You get the idea. The same goes for every other env var that needs to differ across environments.
These env vars are read in the buildspec file and passed on to the serverless.yml file.
The issue is that as I keep creating new environments (more POC and prod environments), I have to replicate the whole set of env vars for each one, and it's getting tedious.
Is there some way I can store these env vars outside the CodeBuild project so they can still be passed on to the Lambda function upon a successful build?
CodeBuild has native integration with Systems Manager Parameter Store:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.env.parameter-store
In Parameter Store, you can keep your per-environment configuration as a single JSON value under a name like "/config/prod", then retrieve it in CodeBuild and parse it with jq. This way, all the environment-specific variables are in one place. If you go this route, make sure to encrypt the Parameter Store value with a KMS key if it contains secrets. Also have a look at AWS Secrets Manager.
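As a rough sketch of that approach (the /config/prod parameter name and the apiKey/dbHost JSON keys are just illustrative placeholders), the buildspec could look like:

version: 0.2
env:
  parameter-store:
    # the whole JSON config for this environment lands in one variable
    ENV_CONFIG: /config/prod
phases:
  build:
    commands:
      # pull individual values out of the JSON with jq
      - export API_KEY=$(echo "$ENV_CONFIG" | jq -r '.apiKey')
      - export DB_HOST=$(echo "$ENV_CONFIG" | jq -r '.dbHost')
      # ...then pass them on to your serverless deploy step as before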
I wish to deploy infrastructure that is written as a Terraform module. The module is as follows:
module "my-module" {
count = var.env == "prod" ? 1 : 0
source = "s3::https://s3-us-east-1.amazonaws.com/my-bucket/my-directory/"
env = var.env
deployment = var.deployment
}
Right now this is in a my-module.tf file, and I am deploying it by running the usual terraform init, plan and apply commands (and passing in the relevant variables).
However, for my specific requirements, I want to deploy this by running only the terraform init, plan and apply commands (passing in the relevant variables), without having to store the module in a file on my own machine. I would rather have the module file stored remotely (e.g. in an S3 bucket) so that other teams/users do not need to keep the file on their own machines. Is there any way this Terraform could be deployed such that the module file is stored remotely and, for example, passed as an option when running the terraform plan and apply commands?
It's not possible. As explained in the Terraform docs, source must be a literal string, which means it cannot come from a variable or any other dynamic value.
You would have to develop your own wrapper around Terraform that does a simple find-and-replace of source placeholders with the actual values before you run terraform.
I have added a few DynamoDB tables using the amplify add storage command.
But each table name has a suffix that is the environment name (dev, prod, etc.).
How can I access the environment name in my NextJS backend so I can suffix the DynamoDB table names in my code?
Or is there another way to achieve what I want?
Amplify automatically creates DynamoDB tables (and also AppSync queries, etc.) to match your current Amplify environment. When you create a new environment (e.g. 'prod'), Amplify will automatically create duplicate tables that behave the same as your 'dev' tables. I'm guessing that in your case you won't need to access environment variables at all.
If you are using AppSync/GraphQL to make calls, then you can use Amplify's built-in dynamic environment features here: https://docs.amplify.aws/cli-legacy/graphql-transformer/function/#usage
For example, you could set up a custom Lambda function to update your DynamoDB. You could then set up an AppSync call to that Lambda in your schema.graphql file.
There are some cases where you may need to access your environment variables. You can either set them up manually in .env.local, or, possibly easier, determine the current domain from your NextJS JavaScript:
const origin =
  typeof window !== "undefined" && window.location.origin
    ? window.location.origin
    : "";
console.log(origin); // "https://dev.<>.amplifyapp.com"
A better solution would be to follow this Amplify documentation, except I've tried it and it doesn't work.
I've explored each item in the left nav panel and there is no sign of the described Environment Variables section.
The documentation describes accessing/updating env vars there, but apparently you can only find/use this feature if you've connected your Amplify app to GitHub first. (It would have been nice if the docs had clarified this!)
I'm working on creating a CodeBuild project that is integrated with SonarQube, so I pass values and Sonar credentials directly in my buildspec.yaml.
Instead of hardcoding them directly, I tried to retrieve them from Secrets Manager using the command below, as mentioned in the linked documentation. But it is not resolving the correct values; it throws an error.
Command: '{{resolve:secretsmanager:MyRDSSecret:SecretString:username}}'
Link: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-secretsmanager
Error: [ERROR] SonarQube server [{{resolve:secretsmanager:arn:aws:secretsmanager:us-east-1:********:secret:**********:SecretString:SonarURL}}] can not be reached
How I used it: echo '{{resolve:secretsmanager:arn:aws:secretsmanager:us-east-1:***:secret:**************:SecretString:*******}}'
Note: all the * characters in my command stand for the secret name and secret URL.
CodeBuild just launched this today - https://aws.amazon.com/about-aws/whats-new/2019/11/aws-codebuild-adds-support-for-aws-secrets-manager/
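For reference, a minimal buildspec sketch using that new secrets-manager support (the sonar/credentials secret name, its token/host_url JSON keys, and the scanner command are hypothetical placeholders):

version: 0.2
env:
  secrets-manager:
    # format is ENV_VAR: secret-id:json-key
    SONAR_TOKEN: sonar/credentials:token
    SONAR_HOST_URL: sonar/credentials:host_url
phases:
  build:
    commands:
      # hypothetical scanner invocation using the resolved values
      - sonar-scanner -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN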
If you wish to retrieve secrets in your buildspec file, I would recommend using Systems Manager Parameter Store, which is natively integrated with CodeBuild. Systems Manager is a service in itself; search for it from the AWS Console homepage, then Parameter Store is in the bottom left of the Systems Manager console page.
Let's assume you want to include an access key and secret key in your buildspec.yml file:
- Create an access key / secret key pair for an IAM user
- Save the keys in SSM Parameter Store as SecureString parameters (e.g. '/CodeBuild/AWS_ACCESS_KEY_ID' and '/CodeBuild/AWS_SECRET_ACCESS_KEY')
- Export the two values into your build environment using the following buildspec directives:
version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID_PARAM: /CodeBuild/AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: /CodeBuild/AWS_SECRET_ACCESS_KEY

phases:
  build:
    commands:
      - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_PARAM
      - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_PARAM
      # Your Ansible commands below
      - ansible-playbook -i hosts ec2-key.yml
[1] Build Specification Reference for CodeBuild - Build Spec Syntax - https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
The dynamic reference syntax you are trying to use only works with the CloudFormation (CFN) service. In some cases, CFN restricts where these dynamic references to secrets will expand. Specifically, they do not expand in places where the secrets might be visible in the console, such as in EC2 metadata.
If you are trying to set up CodeBuild via CFN, this may be what you are seeing. However, as shariqmaws mentioned, you can use Parameter Store and either store your secret there or use Parameter Store as a pass-through to Secrets Manager (in case you want Secrets Manager to rotate your secrets, or for other reasons).
version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID: /terraform-cicd/AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: /terraform-cicd/AWS_SECRET_ACCESS_KEY
    AWS_CODECOMMIT_SSH_ID: /terraform-cicd/AWS_CODECOMMIT_SSH_ID
  secrets-manager:
    AWS_CODECOMMIT_SSH_PRIVATE: /terraform-cicd/AWS_CODECOMMIT_SSH_PRIVATE
Is there a way to have global environment variables in an AWS CloudFormation YAML file for Lambdas?
Currently we are using the SSM Parameter Store for global variables, but we don't want to use that anymore.
I am looking to have something like this:
Environment:
  Variables:
    variable1: xxx // local variables
    variable2: xxx
    ...
    ${file(./globalvariables.yml)} // global variables
Or, even better: every Lambda would include the global environment variables by default without explicitly referencing them.
Is this possible? Or what approach would you suggest? Thanks in advance!
Sadly, I'm not aware of a way to define default environment variables for Lambdas through CloudFormation. One possible option, instead of using env variables in CloudFormation, is to add a Lambda layer with all the config and pull the values from there.
The benefit is that if a value changes you only have to update the layer once and then point your Lambdas at the new layer version (which could be a single parameter), instead of manually updating every single function.
Docs here: https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
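A minimal CloudFormation sketch of that layer idea; the bucket/key names, runtime, and role are placeholders, and the functions are assumed to read the shared config from /opt at runtime:

Resources:
  SharedConfigLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      LayerName: shared-config
      Content:
        S3Bucket: my-config-bucket   # hypothetical bucket holding the zipped config.json
        S3Key: shared-config.zip
      CompatibleRuntimes:
        - nodejs18.x

  ExampleFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      Role: !GetAtt ExampleFunctionRole.Arn   # IAM role assumed to be defined elsewhere
      Code:
        S3Bucket: my-code-bucket              # hypothetical deployment artifact
        S3Key: example-function.zip
      Layers:
        - !Ref SharedConfigLayer              # layer contents are extracted under /opt at runtime

Each function that attaches the layer can then read /opt/config.json at runtime, so a config change only means publishing a new layer version and updating the Layers reference.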
Another option would be to use AWS Secrets Manager or SSM Parameter Store, as ServerMonkey suggested.
I am trying to set up some automation around AWS infrastructure and just bumped into an issue with module dependencies. Since there is no "include"-type option in Terraform, it's becoming a little difficult to achieve my goal.
Here is the short description of scenario:
In my root directory I have a main.tf file which consists of multiple module blocks, e.g.:
module "mytest1" {
  source = "./mymod/dev"
}

module "mytest2" {
  source = "./mymod2/prod"
}
Each of the dev and prod directories contains lots of .tf files.
A few of the .tf files inside the prod directory need some outputs from resources that exist inside the dev directory.
Since modules have no explicit dependencies, I was wondering if there is any way to run the modules in sequence, or any other ideas?
I'm not entirely sure about your use case for having prod and dev interact in the way you've stated.
I would expect you to maybe have something like the below folder structure:
Folder 1: Dev (Contains modules for dev)
Folder 2: Prod (Contains modules for prod)
Folder 3: Resources (Contains generic resource blocks that both the dev and prod modules utilise)
Then when you run terraform apply for Folder 1, it will create your dev infrastructure by passing the variables from your modules to the resources (in Folder 3).
And when you run terraform apply for Folder 2, it will create your prod infrastructure by passing the variables from your modules to the resources (in Folder 3).
If you can't do that for some reason, then Output Variables or Data Sources can potentially help you retrieve the information you need.
There is no reason for you to have different modules for different environments. Usually, the difference between lower environments and prod is the number and tier of each resource, and you can just use variables to pass that into the modules.
To deal with this, you can use Terraform workspaces and create one workspace for each environment, e.g.:
terraform workspace new staging
This will create a completely new workspace with its own state. If you need to define the number of resources to be created, you can use variables or the Terraform workspace name itself, e.g.:
# Your EC2 module
resource "aws_instance" "example" {
  count = "${terraform.workspace == "prod" ? 3 : 1}"
}

# or
resource "aws_instance" "example" {
  count = "${length(var.subnets)}" # you are likely to have more subnets for prod
}

# Your module call
module "instances" {
  source  = "./modules/ec2"
  subnets = "my subnets list"
}
And that is it: you can have all your modules working for any environment just by creating workspaces, changing the variables for each one in your pipeline, and applying the plan each time.
You can read more about workspaces here
I'm not too sure about your requirement of having the production environment depend on the development environment, but putting the specifics aside, the idiomatic way to create sequencing between resources and between modules in Terraform is to use reference expressions.
You didn't say what aspect of the development environment is consumed by the production environment, but for the sake of example let's say that the production environment needs the id of a VPC created in the development environment. In that case, the development module would export that VPC id as an output value:
# (this goes within a file in your mymod/dev directory)
output "vpc_id" {
  value = "${aws_vpc.example.id}"
}
Then your production module conversely would have an input variable to specify this:
# (this goes within a file in your mymod2/prod directory)
variable "vpc_id" {
  type = "string"
}
With these in place, your parent module can then pass the value between the two to establish the dependency you are looking for:
module "dev" {
source = "./mymod/dev"
}
module "prod" {
source = "./mymod2/prod"
vpc_id = "${module.dev.vpc_id}"
}
This works because it creates the following dependency chain:
module.prod's input variable vpc_id depends on
module.dev's output value vpc_id, which depends on
module.dev's aws_vpc.example resource
You can then use var.vpc_id anywhere inside your production module to obtain that VPC id, which creates another link in that dependency chain, telling Terraform that it must wait until the VPC is created before taking any action that depends on the VPC to exist.
In particular, notice that it's the individual variables and outputs that participate in the dependency chain, not the module as a whole. This means that if you have any resources in the prod module that don't need the VPC to exist then Terraform can get started on creating them immediately, without waiting for the development module to be fully completed first, while still ensuring that the VPC creation completes before taking any actions that do need it.
There is some more information on this pattern in the documentation section Module Composition. It's written with Terraform v0.12 syntax and features in mind, but the general pattern is still applicable to earlier versions if you express it instead using the v0.11 syntax and capabilities, as I did in the examples above.