How to bootstrap droplets using Terraform? - digital-ocean

When creating droplets on DigitalOcean using Terraform, the created machines' passwords are sent via email. If I read the documentation on the DigitalOcean provider right, you can also specify the IDs of SSH keys to use.
If I am bootstrapping a data center using Terraform, which option should I choose?
Somehow it feels wrong to have a different password for every machine (using passwords at all feels wrong), but it also feels wrong to link every machine to my personal SSH key.
How do you do that? Is there an approach that can be considered good (best?) practice here? Should I create an SSH key pair just for this and commit it to Git along with the Terraform files? …?

As you mentioned, using passwords on instances is an absolute pain once you have an appreciable number of them. It's also less secure than SSH keys that are properly managed (kept secret). And you are obviously going to have trouble linking the rest of your automation to credentials that are delivered out of band to your tooling, so if you need to actually configure these servers to do something, the password-by-email option is pretty much out.
I tend to use a different SSH key for each application and deployment stage (e.g. dev, testing/staging, production), but then everything inside that combination gets the same public key put on it for ease of management. Separating things that way means that if one key is compromised you don't need to replace the public key everywhere, which minimises the blast radius of the event. It also means you can rotate keys independently, especially as some environments may move faster than others.
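For illustration, here is a minimal Python sketch of generating one key pair per stage (it assumes the third-party cryptography package, and the stage names and file names are my own, not anything from the question). You would register the public halves with your provider and keep the private halves out of version control:

# Hypothetical sketch: one SSH key pair per environment.
# Requires the third-party "cryptography" package.
from pathlib import Path
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

for env in ("dev", "staging", "production"):  # assumed stage names
    private_key = ed25519.Ed25519PrivateKey.generate()
    # OpenSSH-formatted private key; never commit this to Git.
    private_bytes = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.OpenSSH,
        encryption_algorithm=serialization.NoEncryption(),
    )
    # OpenSSH-formatted public key; this is what goes onto the machines.
    public_bytes = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.OpenSSH,
        format=serialization.PublicFormat.OpenSSH,
    )
    Path(f"id_ed25519_{env}").write_bytes(private_bytes)
    Path(f"id_ed25519_{env}.pub").write_bytes(public_bytes + b"\n")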
As a final word of warning, do not put your private SSH key into the same Git repo as the rest of your code, and definitely do not publish the private SSH key to a public repo. You will probably want to look into secrets management such as HashiCorp's Vault if you are in a large team, or at least distribute these shared private keys out of band if they need to be used by multiple people.

Related

How to protect AWS secret access key

I am a beginner to AWS.
The AWS keys stored on my PC in the folder "C:\Users\USERNAME\.aws\credentials" are in a simple text file.
Q1) Why are they stored in an easily readable format?
Q2) How can I protect these credentials? Is there any way to store them encrypted?
Q2) One option is to create environment variables within your code's operating environment to store the keys. Environment variables are managed by the operating system and can be accessed via system libraries or the AWS SDK for your preferred programming language.
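As a sketch of that option (assuming Python and boto3; every official AWS SDK behaves similarly), export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the shell first, and the SDK picks them up without any credentials file on disk:

# Minimal sketch: boto3 reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# from the environment automatically; no plain-text credentials file needed.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # region is an assumption

# Any call now authenticates with the env-var credentials.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])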
Q1) That's the way the AWS CLI stores them when you run aws configure after installing it.
Some ways to secure them further:
Follow the access key rotation practice, so that even if your access key falls into the wrong hands, rotating it and creating a new access key protects you from any mishaps.
Use AWS Secrets Manager to store your secrets; it also gives you the option to rotate secret values.
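For example, a minimal retrieval sketch with Python and boto3 (the secret name is made up):

# Sketch: fetch a secret from AWS Secrets Manager.
# "my-app/db-credentials" is a hypothetical secret name.
import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")
response = client.get_secret_value(SecretId="my-app/db-credentials")
secret = response["SecretString"]  # already decrypted via KMS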
The user folder is generally considered private to the user. In a typical OS setup, the permissions are set so that only the user has access to their home directory. Of course, anyone who gains access to that folder also has access to the keys, but that's no different from other common scenarios, like storing SSH keys in the so-called 'hidden' .ssh/ folder.
In any case, if you are not comfortable with that, the other option is to store them wherever you feel safe, then retrieve them and temporarily add them to your user environment profile.
The environment variables are documented here: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html

Where should private service account key for Google be stored on Mac

I've created a public/private key pair as described here for the Google Cloud Platform.
The problem: I can't find a shred of documentation describing where to put it. This thing is not the typical SSH key pair, but rather a JSON file.
Where should it be stored on a Mac to allow the gcloud command to authenticate and push to GCP?
If you are authenticating locally with a service account to build/push with gcloud, you should set an environment variable in your Mac terminal pointing to the JSON key file.
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Once this environment variable is defined, all requests will be authenticated as that service account using the key info from the JSON file.
Please consider looking at the doc below for reference:
https://cloud.google.com/docs/authentication/production
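For instance, with the google-auth Python package, Application Default Credentials find the key file through that variable (a sketch, not the only way to authenticate):

# Sketch: Application Default Credentials pick up the JSON key file from
# GOOGLE_APPLICATION_CREDENTIALS automatically (assumes google-auth installed).
import google.auth

credentials, project_id = google.auth.default()
print(project_id)  # project inferred from the key file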
The CaioT answer is the right one if you want to use a service account key file locally.
However, the question ideally shouldn't need to be asked, because relying on service account key files is bad practice. They should be used in only a few cases; otherwise, they are a security weakness in your projects.
Take a closer look at such a key file. In the end, it's only a file, stored on your Mac (or elsewhere) with no special security protections. You can copy it without any problem, edit it, or copy its contents. You can send it by email, or push it to a Git repository (which might be public!)...
If several developers work on the same project, it quickly becomes a mess to know who manages the keys. And when you have a leak, it's hard to know which key has been used and needs to be revoked...
So have a closer look at this part of the documentation. I have also written some articles proposing alternatives to key files. Let me know if you are interested.

AWS Parameter Store vs .env on nginx

I'm trying to understand why having parameters stored in AWS Parameter Store (SSM), encrypted using KMS, is a better solution than saving parameters in a .env file on an nginx server.
As far as the nginx server setup goes, the .env file is not exposed to the public, hence it can't be viewed unless someone breaks into the server.
The nginx config has the public folder set as the root:
root /home/user/app/public;
The consensus seemed to be that if someone manages to break into the server, they will be able to see all the parameters stored in the .env file in plain text, hence it is less secure than Parameter Store.
But isn't that the same for AWS Parameter Store? (Main question)
In the PHP file, I load parameters from the Parameter Store using the SSM client, e.g.:
<?php
// Assumes the AWS SDK for PHP is installed via Composer.
require 'vendor/autoload.php';

use Aws\Ssm\SsmClient;

$client = new SsmClient([
    'version' => 'latest',
    'region'  => 'us-west-2',
]);

// Fetch both parameters and decrypt them in one call.
$credentials = $client->getParameters([
    'Names'          => ['MY_KEY', 'MY_SECRET'],
    'WithDecryption' => true,
]);

$key    = $credentials['Parameters'][0]['Value'];
$secret = $credentials['Parameters'][1]['Value'];
If someone breaks into the server, they will be able to make these same calls and retrieve any parameters.
So what makes SSM more secure than .env?
SSM makes it easy to coordinate values across multiple machines. How does that .env file get onto your servers? What happens if you have to change a value in it? SSM helps make that process easier. And when it's easy to replace secrets, it's more likely you will rotate them on a regular basis. AWS Secrets Manager makes that process even simpler by automating rotation. It runs a Lambda function that modifies both the secret value and what uses it (for example it can change the database password for you). So even if your secret does get leaked somehow, it's only good for a limited time.
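To make the rotation part concrete: a rotation function is a Lambda handler that Secrets Manager invokes with a step name. The skeleton below is a hedged Python sketch following the documented four-step rotation contract, with the actual bodies left as stubs:

# Skeleton of a Secrets Manager rotation Lambda (Python sketch).
def lambda_handler(event, context):
    secret_arn = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]  # Secrets Manager drives these four steps in order

    if step == "createSecret":
        pass  # generate a new value and stage it as AWSPENDING
    elif step == "setSecret":
        pass  # apply the staged value, e.g. change the database password
    elif step == "testSecret":
        pass  # verify the staged value actually works
    elif step == "finishSecret":
        pass  # promote AWSPENDING to AWSCURRENT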
Another reason having secrets on a separate server can be more secure is that breaking into a server doesn't always mean full control. Sometimes hackers can merely access files through a misconfigured service. If there are no files containing secret information, there is nothing for them to see. Other times hackers can reach other servers in your network through a misconfigured proxy. If all your internal servers require some sort of authentication, the hackers won't be able to hack those too.
This is where the concept of defense in depth comes in. You need multiple layers of security. If you just assume "once they're in, they're in", you are actually making it easier for hackers: they can exploit any small opening and get complete control of your system. This becomes even more important when you factor in the weakest link in your security -- people. You, your developers, your operators, and everyone in your company will make mistakes eventually. If every little mistake gave complete access to the system, you'd be in pretty bad shape.
Adding to the above answer:
With KMS, the encryption keys are stored in AWS-managed HSM devices, which is surely more secure than storing secrets in plain text on regular servers we manage ourselves.
Storing values in SSM Parameter Store with KMS encryption also gives us fine-grained access control along with CloudTrail audit logs.

Why do people use .env file on server?

Why do people put a .env file on a server to store all their secrets? If someone hacks the server, isn't the .env file just as accessible as all the other files? Thanks!
You are correct that storing environmental secrets in a .env file poses the risk of plain-text secrets being exposed to a third party who gains access to the raw code.
Just like in other areas handling sensitive material, there are ways around this; the general approach is to use a secrets management system, which replaces the secret values in a .env file with values accessed via an authenticated request.
AWS supports a couple of official services that can do this:
Secrets Manager - This service is specifically built for this purpose: you define a secret and give it either a string or JSON value, which is then retrieved via a call using the SDK. All values are encrypted using a KMS key.
Systems Manager Parameter Store - Similar to Secrets Manager: you provide a key name and give it a value. It supports both unencrypted and encrypted values (use the SecureString type).
In addition, there are other services such as HashiCorp Vault that provide similar functionality.
For environment configuration a .env file can still be appropriate (e.g. to enable a feature flag), but if you want to reduce the blast radius of your application, storing secrets outside a plain-text file will help reduce that risk; a short Parameter Store sketch follows below.
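Here is what the Parameter Store option can look like from application code, as a Python/boto3 sketch (the parameter name is hypothetical):

# Sketch: read a SecureString parameter instead of a .env entry.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
response = ssm.get_parameter(Name="/my-app/db-password", WithDecryption=True)
db_password = response["Parameter"]["Value"]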
That is not the main reason for using environment variables. However, they are secure enough for storing secret values too, especially when combined with hashing methods.
Environment variables are most useful at the production level of programming. Your application has different environments to run in.
Development: your host is local, and as a developer you need to test your code and set the debug variable to true to get stateful errors, which is not something you want in production.
Production: your host is your domain or server IP, and you need different middleware than in the development stage.
There are also staging and test environments for bigger projects. A lot of things may need to be handled differently in different environments; the database is a great example. Besides, environment variables are useful when more than one person works on the code base, since people can configure the project for their own machine/OS using environment variables.
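As a small illustration (a Python sketch; the variable names are my own), the same code base can switch behaviour based on a single environment variable:

# Illustrative sketch: one code base, different behaviour per environment.
import os

APP_ENV = os.environ.get("APP_ENV", "development")  # assumed variable name
DEBUG = APP_ENV != "production"  # verbose errors only outside production
DATABASE_URL = os.environ.get(
    "DATABASE_URL",
    "sqlite:///dev.db" if DEBUG else "",  # no implicit default in production
)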

Persist ASP.NET Core auth cookies between docker image launches

Each time a Docker image containing a .NET Core MVC web application starts up, all authentication cookies are invalidated, presumably due to a fresh machine key (which is used when signing the cookies) being generated.
This could traditionally be set via the <machineKey/> element in the web.config of a .NET app.
This link suggests that the DataProtection package would fit the bill, but the package seems to require the full .NET Framework.
What would be the correct way to ensure that every time a Docker image restarts it doesn't invalidate existing auth cookies?
You want to put the keys for data protection into a persistent and shareable location.
If you're on AWS, AspNetCore.DataProtection.Aws allows you to put the keyring on S3 with just a few lines of configuration code. Additionally, you can leverage AWS KMS to encrypt the keys, which is especially useful for achieving consistent encryption algorithms, allowing you to reuse the same key across different operating systems that have different default encryption algorithms. The KMS option is also part of the same library.
If you're on a platform other than AWS, you'll need another library, or you can mount a shared drive. But the concept of sharing the same location for the keys remains the same.