How to set up AWS Secrets with static credentials using Terraform

I have a requirement.
I am deploying an application into AWS using Terraform.
Part of this involves creating a secrets resource, "aws_secretsmanager_secret". For this secret I have to add the user ID/password of an external system, which is static and will never change.
While deploying this I have to declare the values for the user ID/password. Since this Terraform code will also be stored in the Git repository, storing the credentials in plain-text form is not allowed.
How can I solve this problem?
Thanks,
Abhi
I have stored the credentials in variables.tf, which eventually creates the secret from those variables, but this is not allowed.

Instead of storing your credentials in your variables.tf file, you can store them in environment variables and have your Terraform code read those variables. Terraform automatically picks up any environment variable named TF_VAR_<name> as the value of the input variable <name>.
This way, when you commit your .tf files to Git, the credentials will not be pushed in plain text.
Medium article explaining how to do it
Official documentation
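As a sketch of that approach (the variable and resource names here are illustrative, not from the question): declare the credentials as sensitive input variables with no defaults, and supply them at plan/apply time via TF_VAR_-prefixed environment variables so nothing lands in Git.

```terraform
# variables.tf - no defaults, so the values never appear in the repo
variable "external_user_id" {
  type      = string
  sensitive = true
}

variable "external_password" {
  type      = string
  sensitive = true
}

# main.tf
resource "aws_secretsmanager_secret" "external_system" {
  name = "external-system-credentials"
}

resource "aws_secretsmanager_secret_version" "external_system" {
  secret_id = aws_secretsmanager_secret.external_system.id
  secret_string = jsonencode({
    username = var.external_user_id
    password = var.external_password
  })
}
```

Before running Terraform, export TF_VAR_external_user_id and TF_VAR_external_password in your shell (or inject them from your CI system's secret store). Note that the values will still end up in the Terraform state file, so the state itself must also be protected.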

Related

Is it possible to mount (S3) file into AWS::ECS::TaskDefinition ContainerDefinition using cloudformation?

I have an ECS cluster that is running task definitions with a single container inside each group. I'm trying to add some fancy observability to my application by introducing OpenTelemetry. Following the AWS docs I found https://github.com/aws-observability/aws-otel-collector which is the AWS version of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies stuff like receivers, exporters, etc. I need to be able to create my own config file with a 3rd-party exporter (I also need to add my secret API key somewhere inside there - maybe it can go to Secrets Manager and get mounted as an env var :shrug:).
I'm wondering if this is doable without having to build my own image with the config baked in somewhere, purely using CloudFormation (which is what I use to deploy my app) and other Amazon services.
The plan is to add this container beside each app container (inside the task definition) [and yeah, I know this is overkill, but for now simple > perfect].
Building an additional image would require some cardinal changes to the CI/CD, so if I can go without those it will be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system, it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your docker image to download the file from S3.
I would recommend checking the docs for AWS ADOT. You will find that it supports the config variable AOT_CONFIG_CONTENT (doc), so you don't need a config file, only a config env variable. That plays very well with the AWS ecosystem, because you can use AWS Systems Manager Parameter Store and/or AWS Secrets Manager to store the OTel collector configuration (doc).
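A minimal sketch of that suggestion in CloudFormation, assuming the collector config has been stored as an SSM parameter named /otel/collector-config (the parameter name and container name are placeholders):

```yaml
# Fragment of a container definition inside AWS::ECS::TaskDefinition.
# The ADOT collector reads its entire configuration from the
# AOT_CONFIG_CONTENT variable, injected here from Parameter Store.
- Name: aws-otel-collector
  Image: public.ecr.aws/aws-observability/aws-otel-collector:latest
  Essential: true
  Secrets:
    - Name: AOT_CONFIG_CONTENT
      ValueFrom: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/otel/collector-config
```

The task execution role needs permission to read that parameter (ssm:GetParameters) for the injection to work.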

How to set a DynamoDB environment variable in AWS?

I have a Golang application which connects to DynamoDB.
The name of the DB table was hard-coded; now it is read from an environment variable using os.LookupEnv().
I unit tested locally with the variable read from a secrets.env file, but how do I make this work when deployed in production?
I suppose that I need to set this in the AWS config somehow?
No, you would not store it in the AWS config; that is for configuring access keys, which I also recommend not doing. Using an IAM role attached to whichever service you are running on is best practice.
As for the DynamoDB table name as an env variable, it's entirely up to you how you wish to do it. You can store it as an env variable on the OS in EC2, for example, or if you're using Lambda you can use its Environment Variables.
You can also use Parameter Store to hold environment variables, which is common practice.

Retrieve AWS SM secrets and export to container environment variable

Can anyone suggest the best way to retrieve AWS Secrets Manager secrets from a Dockerfile and pass the secret values to the Docker container as environment variables once the container has started?
The reason I am asking: I am trying to remove all sensitive password information hard-coded in different places of a Git code repository and move the passwords to AWS Secrets Manager.
If you are using ECS, there is built-in support for this.
Take a look at ssm-env, which populates ENV vars from Parameter Store. There is an example of using it with Docker.
https://github.com/s12v/secure-exec is a similar tool, which supports Secrets Manager (including JSON). As mentioned above, with ECS there's no need for such tools.
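For the ECS route, the built-in support means declaring the secret in the task definition; ECS injects it as an environment variable at container start, so nothing secret lives in the image or the Dockerfile. A sketch (the container name, image, variable name, and ARN below are placeholders):

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:prod/db-password"
        }
      ]
    }
  ]
}
```

The task execution role needs secretsmanager:GetSecretValue on that secret for the container to start.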

Terraform to create multiple vpc by re-executing same main.tf

I am trying to create a Terraform script which will create a VPC and other resources, passing the parameters for the script from a .tfvars file. I successfully created the VPC and resources by executing the script. Now I want to create another VPC with the same set of resources but a different set of parameter values, so I created a new .tfvars file and tried to execute it with the old main.tf file. When I execute 'terraform plan', it shows that it will delete the VPC and resources created during my first run and create a new VPC with the new values.
Is there any method to create resources using the same main.tf file by only changing the .tfvars file?
You are running into a state-based issue. When you define a resource you give it a name; those names are used in the state file, and that is what makes Terraform think you are trying to alter an existing resource. You have a couple of ways to address this, and which is right depends on what you are really doing.
Terraform Workspaces
You could use workspaces in Terraform for each VPC you are creating; this would keep the state separated. However, workspaces are really intended to separate environments, not multiple resources in the same environment. You can read more here.
Terraform Modules
What it sounds like to me is that you really want to create a Terraform module for your VPC configuration, then create each VPC using your module in the same main.tf. That way you will have uniquely named resources which will not confuse the state management. You can read more about modules here. A good resource for more information can be found in this blog post.
The way to do this is by creating a module. You should be able to pretty much cut/paste your current code into your module; you may only need to remove the provider definition from it. Then, in your new main code (the root module), reference the module for each set of resources you want to create.
Ah, the reason TF is trying to remove the resources you already created is that they have been captured in its state.
When you create the module, add the resources you already created back in. TF will always try to configure things as per the code; if resources are removed from it, TF will try to destroy them.
Create a module in terraform
This is because you are working on the same tfstate file. You could do the following:
1. If you are working with local state: copy the whole code into a different directory, use the new tfvars file, and work there. This will start a new, clean tfstate.
2. If you are working with remote state, either:
a. configure a different remote state and then use the new tfvars file, or
b. create a different directory, symlink your code into this directory, and replace the old backend config and tfvars file with the new ones.
I have sample code for working with multiple environments at https://github.com/pradeepbhadani/tf-course/tree/master/Lesson5
Create a Terraform module of your VPC code and then call it from a separate directory.
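A sketch of the module approach (the paths, variable names, and CIDR blocks are illustrative): move the VPC resources into a module, then instantiate it once per VPC with different parameters, so every resource gets a unique state address and nothing gets destroyed.

```terraform
# modules/vpc/variables.tf
variable "cidr_block" { type = string }
variable "name"       { type = string }

# modules/vpc/main.tf
resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
  tags       = { Name = var.name }
}

# root main.tf - two VPCs from the same code, in one state
module "vpc_a" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
  name       = "vpc-a"
}

module "vpc_b" {
  source     = "./modules/vpc"
  cidr_block = "10.1.0.0/16"
  name       = "vpc-b"
}
```

In the state these become module.vpc_a.aws_vpc.this and module.vpc_b.aws_vpc.this, so a second VPC no longer collides with the first.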

Amazon Elastic Beanstalk Configuration Questions

Alright, so I get that you can define environment variables within an environment.config file in the .ebextensions folder.
But let's say I want the AWS Account ID to be available as an environment variable here. Is there a way to dynamically retrieve that value given the context in which the Elastic BeanStalk application is deployed?
Also is there a way to refer to other environment variables within the config file? For example, given we're working within .ebextensions/environment.config:
aws:elasticbeanstalk:application:environment:
    foo: '123'
    bar: hello-${foo}
Any pointers are greatly appreciated here, thanks!
Per the documentation, the aws sts get-caller-identity command can be used to retrieve the AWS Account ID.
Regarding your second question, I believe .config files are in YAML format, which would mean that you cannot reuse a value as part of another value. See this thread for more information.
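For the first question, the CLI call looks like this (it requires AWS credentials to be configured; the shell variable name is illustrative):

```
# Query STS for the account ID behind the current credentials
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
echo "$ACCOUNT_ID"
```

You could run this at deploy time and feed the result into the environment configuration, since the .config file itself cannot compute the value dynamically.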