Secret info & EC2 CodeDeploy? - amazon-web-services

With secret data such as the MongoDB password and Firebase admin password in my NodeJS server code, I am wondering how I should go about deploying this to EC2 (and to multiple EC2 instances with CodeDeploy / AutoScaling in the future).
Is there a common way to go about this, keeping your credentials secure? You could argue that the security layer is at the instance: make sure that there is no unwanted access to your instance(s) and you should be good. But is this really the way to go?

Given a service that has a secret password in its config file called config.json, create a software config file called config-development.json:
password=[PASSWORD]
During CodeDeploy, there are scripts or hooks that run during the deployment cycle, e.g. BeforeInstall, Install, AfterInstall. During the AfterInstall script execution, get the secret from the parameter store via the CLI, store it in a variable, and then replace the [PASSWORD] value in the json file using sed or any search-and-replace command line tool.
Rename the resulting file to config.json, and restart the service.
This approach will allow you to keep secrets out of your repo, and use only the value from the parameter store.
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-hooks-list
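An AfterInstall hook implementing the steps above, added here as an example (it is not the answer's own script). It assumes a SecureString parameter named /myapp/db_password, an application directory of /opt/myapp and a systemd unit called myapp, and it uses awk rather than sed so that a password containing '/' or '&' cannot corrupt the substitution:

```shell
#!/bin/sh
# CodeDeploy AfterInstall hook (added example, not the answer's own script).
# Renders config.json from the config-development.json template with a secret
# read from SSM Parameter Store. Assumed names: parameter /myapp/db_password,
# application directory /opt/myapp, systemd unit myapp.
set -eu

APP_DIR="${APP_DIR:-/opt/myapp}"
PARAM_NAME="${PARAM_NAME:-/myapp/db_password}"
SERVICE_NAME="${SERVICE_NAME:-myapp}"

# Read the decrypted value of a SecureString parameter. The instance profile
# needs ssm:GetParameter on the parameter and kms:Decrypt on its key, so no
# AWS credentials ever live on the instance or in the repo.
fetch_secret() {
  aws ssm get-parameter --name "$1" --with-decryption \
      --query 'Parameter.Value' --output text
}

# Replace every [PASSWORD] placeholder in the template and write the result.
# The secret reaches awk through the environment rather than the command
# line, so it never shows up in `ps` output; index()/substr() do a literal
# replacement, so '/', '&' or '\' inside the password cannot break the
# substitution the way they would in an unescaped sed expression.
render_config() {
  template="$1"; output="$2"
  SECRET="$3" awk -v ph='[PASSWORD]' '
    BEGIN { s = ENVIRON["SECRET"]; n = length(ph) }
    {
      line = $0; out = ""
      while ((i = index(line, ph)) > 0) {
        out = out substr(line, 1, i - 1) s
        line = substr(line, i + n)
      }
      print out line
    }' "$template" > "$output"
  chmod 600 "$output"   # the rendered file now holds the secret
}

main() {
  secret="$(fetch_secret "$PARAM_NAME")"
  render_config "$APP_DIR/config-development.json" "$APP_DIR/config.json" "$secret"
  systemctl restart "$SERVICE_NAME"
}

# The CodeDeploy agent exports LIFECYCLE_EVENT to hook scripts; run the
# deployment flow only when invoked for AfterInstall.
if [ "${LIFECYCLE_EVENT:-}" = "AfterInstall" ]; then main; fi
```

Reference the script from the AfterInstall hook in appspec.yml; the rendered config.json should be listed in .gitignore so it is never committed.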

Related

Is it possible to mount (S3) file into AWS::ECS::TaskDefinition ContainerDefinition using cloudformation?

I have this ECS cluster that is running task definitions with a single container inside each group. I'm trying to add some fancy observability to my application by introducing OpenTelemetry. Following the AWS docs I found https://github.com/aws-observability/aws-otel-collector, which is the AWS version of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies things like receivers, exporters, etc. I need to be able to create my own config file with a 3rd party exporter (I also need to add my secret API key somewhere inside there; maybe it can go to Secrets Manager and get mounted as an env var :shrug:).
I'm wondering if this is doable, without having to build my own image with the config baked somewhere inside, purely using CloudFormation (which is what I use to deploy my app) and other Amazon services?
The plan is to add this container beside each app container (inside the task definition) [and yeah, I know this is overkill, but for now simple > perfect].
Building an additional image will require some cardinal changes to the CI/CD, so if I can go without those it will be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system, it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your docker image to download the file from S3.
I would recommend checking the docs for AWS ADOT. You will find that it supports the config variable AOT_CONFIG_CONTENT (doc). So you don't need a config file, only a config env variable. That plays very well with the AWS ecosystem, because you can use AWS Systems Manager Parameter Store and/or AWS Secrets Manager, where you can store the otel collector configuration (doc).

How to add some new code to an existing EC2 instance

Bear with me, what I am requesting may be impossible. I am an AWS noob.
So I am going to describe to you the situation I am in...
I am doing a freelance gig and was essentially handed the keys to AWS. That is, I was handed the root user login credentials for the AWS account that powers this website.
Now there are 3 EC2 instances. One of the instances is a Linux box that, from what I am being told, is running a Django Python backend.
My new "service" if you will must exist within this instance.
How do I introduce new source code into this instance? Is there a way to pull down the existing source code that lives within it?
I am not being helped by any existing/previous developers, so I was kind of just handed the AWS credentials and have no idea where to start.
Is this even possible? That is, is it possible to pull the source code from an EC2 instance and/or modify the code? How do I do this?
EC2 instances are just virtual machines, so you can use SSH/SCP/SFTP to move files to and from them. You can use the AWS CLI tools to copy stuff from S3. Dealer's choice...
Now, to get into this instance... If you look in the web console you can find its IP(s), the security groups (firewall rules), and the key pair name. Hopefully they gave you the keys. You need these to SSH in.
You'll also want to check to make sure there's a security group applied that has SSH open. Hopefully only to your IP :)
If you don't have the keys, you'll have to create an AMI image of the instance so you can create a new one with a key pair you do have.
Amazon has a set of tools for you in Amazon CodeSuite.
The tool used for "deploying" the code is Amazon CodeDeploy. By using this service you install an agent onto your host; then, when triggered, it will pull down an artifact of a code base and install it on matching hosts. You can even specify additional commands through the hook system.
But you also want to trigger this to happen, maybe even automatically? CodeDeploy can be orchestrated using the CodePipeline tool.

Deploying jhipster registry on Amazon ECS

I am developing a microservice-based app with JHipster (but the question is about Spring Cloud Config in general). For development purposes I was using docker-compose, and now I'm creating a staging environment on Amazon Elastic Container Service.
I'm facing a problem with connecting the registry to Bitbucket to download the Spring Cloud Config files. With docker-compose I was mounting a volume which contained the SSH key that is required to access Bitbucket:
services:
  jhipster-registry:
    image: jhipster/jhipster-registry:v3.2.3
    volumes:
      - /home/ubuntu/bb-key:/root/.ssh
I don't know how I can pass this key to a container running in ECS.
I can't put it directly on EC2, as I don't know on which instance in the cluster the registry will start. Maybe I must put it on S3 and change the registry image to download it from S3? But that sounds somehow not right.
I know this is a bit late, but you can add user environment variables. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
Much like export commands within Linux, you can use ECS to pass those variables to the docker instances much the same way you would using the -e switch. This allows you to pass secrets. There might be a better way, but since you can restrict access to those variables this may be an OK workaround. You just need to make any scripts within the docker image use those environment variables. Since the variables can change over time but not the image, I normally make my scripts accept/look for environment variables and document those.
In your case, you can write a script to export the SSH key, found in the RSA key file, and export the string since it is all one line, and have a script to output that export into a file in the .ssh directory.
echo $SSH_KEY > ~/.ssh/some_key
Just have this line of code in an entry.sh script or something similar and you should be good. So whenever the container starts it will output the key into the .ssh file.
The other way is, as you described, to use an S3 bucket and leave the key value pairs in there, or in this case an SSH key, and ECS can load those through the task scripts, or through AWS CLI commands in the docker container. However, the last part means you need to add the AWS CLI to your image, which may not be an option depending on what you need the image for, and it requires a small script to run at startup, i.e. an entry script.
If this doesn't solve your issue, let me know, and I'll rework this answer to better suit the issue you are having. But from what I read, this should get you in the ballpark of what you need.
One more way is to make an API key that will allow you to access the Bitbucket repo, or another repo depending on ever-changing needs, and feed that key in the same way you were thinking of doing with the SSH key, and just use the variable in the git command to pull the image, using http(s) if that is an option for your setup.
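An entry script for the approach above, added here as an example (it is neither the answer's script nor part of the JHipster image). It assumes two task-definition variables: SSH_KEY_B64, the private key base64-encoded so that a multi-line PEM survives being passed as a single environment variable, and BITBUCKET_HOST_KEY, the known_hosts line for bitbucket.org obtained from a trusted source. It also fixes the two weaknesses of a bare `echo $SSH_KEY > ~/.ssh/some_key`: the file is created with mode 0644, and an unquoted multi-line key is flattened.

```shell
#!/bin/sh
# Container entrypoint (added example, not the answer's script or the JHipster
# image's own). Materialises a Bitbucket deploy key from the environment into
# ~/.ssh before handing off to the image's command. Assumed variables:
# SSH_KEY_B64 (base64 of the PEM private key, so a multi-line key survives the
# ECS env-var round trip) and BITBUCKET_HOST_KEY (the known_hosts line for
# bitbucket.org, obtained from a trusted source, not from the first connect).
set -eu

# Decode the key into $1/id_rsa and write a pinned known_hosts file. umask 077
# makes each file 0600 before any byte lands in it, so the key is never
# world-readable even briefly; `echo $SSH_KEY > file` would create it 0644
# and, unquoted, collapse a multi-line key onto one line.
install_ssh_key() {
  ssh_dir="$1"; key_b64="$2"; host_key="$3"
  ( umask 077
    mkdir -p "$ssh_dir"
    printf '%s' "$key_b64" | base64 -d > "$ssh_dir/id_rsa"
    printf '%s\n' "$host_key" > "$ssh_dir/known_hosts"
  )
  # Fail at startup with a clear message rather than later inside git clone.
  grep -q 'PRIVATE KEY-----' "$ssh_dir/id_rsa" || {
    echo "SSH_KEY_B64 does not decode to a PEM private key" >&2; return 1
  }
}

if [ -n "${SSH_KEY_B64:-}" ]; then
  install_ssh_key "$HOME/.ssh" "$SSH_KEY_B64" "${BITBUCKET_HOST_KEY:?must be set}"
  # Drop the secret from the environment: once it is on disk, nothing that
  # inherits this environment (or reads /proc/<pid>/environ) needs it.
  unset SSH_KEY_B64
  # Use only the pinned host key; never accept whatever host answers first.
  export GIT_SSH_COMMAND="ssh -i $HOME/.ssh/id_rsa -o UserKnownHostsFile=$HOME/.ssh/known_hosts -o StrictHostKeyChecking=yes"
fi
# Run the image's CMD (the registry) with the key in place.
if [ $# -gt 0 ]; then exec "$@"; fi
```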

Automating Certificates Installation automatically using config files in .ebextensions on AWS

My application is deployed on Elastic Beanstalk on AWS. It is accessing an API that needs an SSL certificate to be installed on the instance. I have to manually run the keytool command to import the certificate file every time the instance rebuilds. And whenever EBS rebuilds the EC2 instance, the installed certificates are lost and I have to transfer the certificate file and install the certificates again.
I think ebextensions can be a solution to this problem, but I am not able to understand the exact way to use it.
Please help me with some directions here.
First you need to create the file in question, then put it into an S3 bucket. I'd recommend you have it encrypted, and that there are no public permissions on the file, for security purposes. From there, you'll create a .ebextensions folder in your application source root. In there you'll create a .config file named however you want.
This file will need to spell out where to grab the cert you need from and where to put it. The AWS documents spell out how to grab a file from S3 and put it somewhere. The instance profile it's talking about is described here. It's basically a way to allow your instance to talk to S3 without needing to store credentials in a file somewhere. You'll need to make sure it has at least read permissions on the bucket to pull the file.
Once this is all set up, Beanstalk should have the file on the instance when all is said and done. Another option is to generate a custom AMI with the key already on the file system. Just be aware of the performance considerations mentioned in the document.

Access one environment from another in Engine Yard

We have a couple of environments in Engine Yard. Each of them runs the same application, but on different stages: production, staging, etc. In total about 10 environments. Now, we want to dump the production database every night, and restore it on the rest of the environments to have the latest data.
The problem is, an instance from one environment can't access instances in other environments. There are two ways to connect that are suitable for us:
SSH.
Specify the RDS host as the --host parameter to mysqldump. The RDS host is of the form environment.random_string.region.rds.amazonaws.com as opposed to a regular EC2 host name.
Neither of them works out of the box. The straightforward solution would be to generate RSA keys on all the servers that want access, and add them to authorized_hosts on all the servers that should allow access. However, this solution isn't scalable: once we add or recreate an environment we'd need to repeat the process.
Is there any better solution?
There is a way to set up a special backup configuration file on your other instances that would allow you to directly access the Production S3 bucket from another environment within the same account. There is some risk involved with this, since it would also technically allow your non-production environment the ability to edit the contents of the production bucket.
There may be some other options depending on the specifics of your configuration. Your best option would be to open a ticket with the Engine Yard Support team so we can discuss your needs further.
Is it possible to set up a separate HUB server with an FTP or SFTP service only?
Open inbound port 21/22 from all environments to that HUB server, so all clients can download the database dump.
Open inbound port 3306 or another database port from the Hub server to RDS/Database.
Run a cron job on the Hub server to get the DB dump, push the dump to the other environments, and so on.
Back up your production to an S3 bucket created for this purpose.
Use IAM roles to control how your other environments can connect to the same bucket.
Since the server of your Production environment should be known, you can use a script to mysqldump that one server to the shared S3 bucket.
Once complete, your other servers can collect the data from that S3 bucket using a properly authorized IAM role.
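A script for the dump-and-collect flow described above, added here as an example (not Engine Yard's tooling). It assumes a bucket named my-app-db-dumps, a DB_HOST variable, MySQL credentials kept in ~/.my.cnf rather than on the command line, and IAM roles split so that only the production instance role can write to the bucket while the other environments' roles can only read; that split is what stops a staging box from altering production's backups, the risk noted in the first answer.

```shell
#!/bin/sh
# Nightly production dump to a shared S3 bucket and restore elsewhere (added
# example, not Engine Yard's tooling). Assumed: bucket my-app-db-dumps, a
# DB_HOST variable, MySQL credentials in ~/.my.cnf rather than on the command
# line, and IAM roles split so that only the production instance role has
# s3:PutObject on the bucket while every other environment's role has
# s3:GetObject only; a staging box can then read dumps but never alter them.
set -eu
# pipefail is not POSIX; turn it on where the shell supports it so a failed
# mysqldump cannot yield a truncated object that the upload reports as success.
(set -o pipefail) 2>/dev/null && set -o pipefail

BUCKET="${BUCKET:-my-app-db-dumps}"

# Object key for a database and a UTC date, e.g. prod/appdb/2024-01-31.sql.gz
dump_key() { printf 'prod/%s/%s.sql.gz' "$1" "$2"; }

# Production side: stream the dump straight to S3 so nothing touches local
# disk, and ask S3 to encrypt the object at rest with KMS.
dump_to_s3() {
  db="$1"; key="$(dump_key "$db" "$(date -u +%F)")"
  mysqldump --host="$DB_HOST" --single-transaction "$db" \
    | gzip -c \
    | aws s3 cp - "s3://$BUCKET/$key" --sse aws:kms
}

# Other environments: fetch today's dump and load it. With a read-only role
# this side cannot overwrite or delete production's backups.
restore_from_s3() {
  db="$1"; key="$(dump_key "$db" "$(date -u +%F)")"
  aws s3 cp "s3://$BUCKET/$key" - | gunzip -c | mysql --host="$DB_HOST" "$db"
}

case "${1:-}" in
  dump)    dump_to_s3 "$2" ;;      # run from cron on the production server
  restore) restore_from_s3 "$2" ;; # run on staging etc. after the dump lands
esac
```

Because new environments only need the read-only role attached, adding or recreating one no longer requires touching any other server's keys.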