Amazon Elastic Beanstalk Configuration Questions - amazon-web-services

Alright, so I get that you can define environment variables in an environment.config file in the .ebextensions folder.
But let's say I want the AWS Account ID to be available as an environment variable there. Is there a way to dynamically retrieve that value from the context in which the Elastic Beanstalk application is deployed?
Also, is there a way to refer to other environment variables within the config file? For example, given we're working within .ebextensions/environment.config:
aws:elasticbeanstalk:application:environment:
  foo: '123'
  bar: hello-${foo}
Any pointers are greatly appreciated here, thanks!

Per the documentation, the aws sts get-caller-identity command can be used to retrieve the AWS Account ID.
Regarding your second question, I believe .config files are in YAML format, and plain YAML has no variable interpolation, so you cannot reuse one value as part of another. See this thread for more information.
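For the first part, a hedged sketch of what retrieving the account ID on the instance could look like (assuming the AWS CLI is available and the instance profile allows sts:GetCallerIdentity):
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)   # account ID of the current credentials
echo "$ACCOUNT_ID"
Something like this could be run from a container_commands entry in an .ebextensions file; how you then expose the value to the application as an environment variable depends on the platform version.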

Related

Is it possible to mount (S3) file into AWS::ECS::TaskDefinition ContainerDefinition using cloudformation?

I have this ECS cluster that is running task definitions with a single container inside each group. I'm trying to add some fancy observability to my application by introducing OpenTelemetry. Following the AWS docs I found https://github.com/aws-observability/aws-otel-collector, which is the AWS version of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies things like receivers, exporters, etc. I need to be able to create my own config file with a 3rd-party exporter (I also need to add my secret API key somewhere inside it - maybe it can go to Secrets Manager and get mounted as an env var :shrug:).
I'm wondering if this is doable purely with CloudFormation (which is what I use to deploy my app) and other Amazon services, without having to build my own image with the config baked in somewhere?
The plan is to add this container alongside each app container (inside the task definition) [and yeah, I know this is overkill, but for now simple > perfect].
Building an additional image would require some significant changes to the CI/CD, so if I can go without those it will be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system, it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your docker image to download the file from S3.
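If you go the startup-script route, a hedged sketch of a wrapper entrypoint (assuming the image has the AWS CLI and the task role allows s3:GetObject; bucket, key, and paths are placeholders):
#!/bin/sh
# entrypoint-wrapper.sh (hypothetical): pull the collector config from S3, then hand off to the image's original command
aws s3 cp s3://my-config-bucket/otel-collector-config.yaml /etc/otel/config.yaml
exec "$@"
The container command would then need to point the collector at /etc/otel/config.yaml.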
I would recommend checking the docs for AWS ADOT. You will find that it supports the config variable AOT_CONFIG_CONTENT (doc), so you don't need a config file, only a config env variable. That plays very well with the AWS ecosystem, because you can store the otel collector configuration in AWS Systems Manager Parameter Store and/or AWS Secrets Manager (doc).
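As a rough sketch of that approach (the parameter name and file path are placeholders, and the task's execution role needs permission to read the parameter):
# Store the collector configuration in SSM Parameter Store
aws ssm put-parameter \
  --name /otel/collector-config \
  --type String \
  --value file://my-otel-collector-config.yaml
In the CloudFormation task definition, the container definition's Secrets list can then map that parameter to the AOT_CONFIG_CONTENT environment variable via ValueFrom, so no custom image or baked-in config file is needed.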

how to set up AWS Secrets with static credentials using terraform

I have a requirement.
I am deploying an application into AWS using Terraform.
Part of this involves creating a secrets resource, "aws_secretsmanager_secret". For this secret I have to add the userid/password of an external system, which is static and will never change.
Now, while deploying this I have to declare the values for the userid/password. Since this Terraform code will also be stored in the git repository, storing the credentials in plain text is not allowed.
How do I solve this problem?
Thanks,
Abhi
I have stored the credentials in variables.tf, which eventually creates the secret from those variables, but this is not allowed.
Instead of storing your credentials in your variables.tf file, you can store them in environment variables and have your Terraform code read those variables.
This way, when you commit your tf files to git, the values will not be pushed in plain text.
Medium article explaining how to do it
Official documentation
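A minimal sketch of that approach, assuming an input variable named db_password (ideally declared with sensitive = true) feeds the aws_secretsmanager_secret_version resource; the variable name and value are placeholders:
# Terraform maps TF_VAR_<name> environment variables to the input variable <name>
export TF_VAR_db_password='example-value-not-committed-to-git'
terraform plan
terraform apply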

Trying to write dry code in Terraform, using Amazon S3 as backend but local terraform state is preventing success

I have really simplified everything down to the basics to demonstrate the following: create two VPC structures, one for test and one for development, then try to use exactly the same code (from the same folder) to place a security group into each environment (test-vpc and dev-vpc).
Each VPC deployment uses a unique Amazon S3 backend - a unique key within the S3 bucket stores each remote state file.
The security_group.tf is using a variable to point at a different S3 key for the Terraform remote state file (key = var.vpc_choice), where vpc_choice equals the key value for the S3 backend.
Then I execute the terraform apply command twice from the same folder: "terraform apply -var-file=test.tfvars" and then again with a different variable file, "terraform apply -var-file=dev.tfvars".
My expectation is that the security group is provisioned into a different VPC because the variable is pointing to a different backend state.
However, the local Terraform state in that folder is getting in my way. It doesn't matter that I'm pointing at a remote state; the local state file knows the security group was already provisioned and wants to destroy that security group and create it in the other VPC.
It works if I copy the code to another folder like "groups2". The first terraform apply provisions into test-vpc and the second (as long as the code is in a different folder) provisions into dev-vpc. So while the code is exactly the same and does provision into two different VPCs because of the variable answered by a .tfvars file, I have not achieved the ability to provision from the same folder.
The BIG question is: is that possible? Have I missed something, like a way to ignore the local state file, so I can provision to different VPCs by using a variable?
You will find a copy of my code at https://github.com/surfingjoe/Proposed_Terraform_Modules
Mark B commented on my question, but in fact, answered the question. Thank you Mark!
Using Terraform Workspaces works perfectly!
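For reference, a hedged sketch of how the workspace flow might look from a single folder (workspace and var-file names are illustrative):
terraform workspace new test         # create and switch to the "test" workspace
terraform workspace new dev          # create and switch to the "dev" workspace
terraform workspace select test
terraform apply -var-file=test.tfvars
terraform workspace select dev
terraform apply -var-file=dev.tfvars
Each workspace keeps its own state in the S3 backend, so the two environments no longer fight over a single state file.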
One environment = one remote backend (one tfstate file)
So, if you have two environments, you have to open each folder, set a unique name for the tfstate in the remote backend, and run terraform apply.
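A hedged sketch of that layout (folder, bucket, and key names are illustrative): each environment folder declares its own backend key, and you run Terraform from inside it.
cd environments/test && terraform init && terraform apply -var-file=test.tfvars
cd ../dev && terraform init && terraform apply -var-file=dev.tfvars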

Deploying jhipster registry on Amazon ECS

I am developing a microservice-based app with JHipster (but the question applies to Spring Cloud Config in general). For development purposes I was using docker-compose, and now I'm creating a staging environment on Amazon Elastic Container Service.
I'm facing a problem with connecting the registry to Bitbucket to download the Spring Cloud Config files. With docker-compose I was mounting a volume which contained the SSH key that is required to access Bitbucket:
services:
  jhipster-registry:
    image: jhipster/jhipster-registry:v3.2.3
    volumes:
      - /home/ubuntu/bb-key:/root/.ssh
How can I pass this key to a container running in ECS?
I can't put it directly on EC2 - I don't know on which instance in the cluster the registry will start. Maybe I should put it on S3 and change the registry image to download it from S3? But that somehow doesn't sound right.
I know this is a bit late, but you can add user environment variables. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
Much like export commands within Linux, you can use ECS to pass those variables to the Docker containers in much the same way you would with the -e switch. This allows you to pass secrets. There might be a better way, but since you can restrict access to those variables, this may be an OK workaround. You just need to adapt any scripts within the Docker image to use those environment variables; since the variables can change over time but the image does not, I normally make my scripts accept/look for environment variables and document those.
In your case, you can export the SSH key from the RSA key file as a single-line string, and have a script write that value into a file in the .ssh directory.
echo $SSH_KEY > ~/.ssh/some_key
Just have this line of code in an entry.sh script or something similar and you should be good. So whenever the container starts, it will write the key into the .ssh file.
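A slightly fuller hedged sketch of such an entry script (the SSH_KEY variable and file names are placeholders, and the key is assumed to arrive as a single value):
#!/bin/sh
# entry.sh (hypothetical): write the key passed via the SSH_KEY env variable into ~/.ssh, then hand off to the original command
mkdir -p ~/.ssh
echo "$SSH_KEY" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
exec "$@"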
The other way is, as you described, to use an S3 bucket and leave the key/value pairs - or in this case the SSH key - in there, and have ECS load them through the task scripts or through AWS CLI commands in the Docker container. However, that means you need to add the AWS CLI to your image, which may not be an option depending on what you need the image for, and it requires a small script to run at startup, i.e. an entry script.
If this doesn't solve your issue, let me know, and I'll rework this answer to better suit the issue you are having. But from what I read, this should get you in the ballpark of what you need.
One more way is to create an API key that allows you to access the Bitbucket repo (or another repo, depending on ever-changing needs), feed that key in the same way you were thinking of doing with the SSH key, and use the variable in the git command to pull the repo over HTTP(S), if that is an option for your setup.
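If you go the HTTPS route, a hedged sketch (assuming a Bitbucket username and app password/API token are passed in as GIT_USER and GIT_TOKEN; the names and URL are placeholders):
git clone "https://${GIT_USER}:${GIT_TOKEN}@bitbucket.org/myteam/central-config.git"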

`eb config save`: The specified key does not exist

Why is saving the configuration giving me an S3 error?
The configuration is actually saved. I can see it in the AWS console and in the S3 bucket, but not locally in .elasticbeanstalk/. What am I missing?
Double-check which application the EB CLI is using. The EB CLI allows you to operate on environments which are not part of the application currently selected. To fix it, you can run eb init again, or change directory to one which has been initialized with the right application.
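A hedged sketch of how you might check and fix this from the project folder (the saved configuration name is a placeholder):
cat .elasticbeanstalk/config.yml    # shows which application/region the EB CLI is bound to
eb init                             # re-initialize against the correct application if needed
eb config get my-saved-config       # pull the saved configuration into .elasticbeanstalk/saved_configs/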
I realize I'm answering far too late to help with the original question, but I arrived here by a Google search, so I'm hoping to help someone further down the timestream.