Changing the auto-generated kops kubernetes admin password - amazon-web-services

I have been using kops to build Kubernetes clusters, and it is a really easy tool to work with; however, I am unable to find a way to change the admin password that is auto-generated while the cluster is being created.

As it is currently not possible to modify or delete-and-recreate secrets of type "Secret" with the CLI, you have to modify them directly in the kops S3 bucket.
They are stored under /clustername/secrets/ and contain the secret as a base64-encoded string. To change the secret, base64-encode it with:
echo -n 'MY_SECRET' | base64
and replace it in the "Data" field of the file. Verify your change with kops get secrets and perform a rolling update of the cluster.
Seen in the kops documentation on managing secrets: Workaround for changing secrets with type "Secret"
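The workaround can be sketched end to end. The state-store bucket and cluster name below are hypothetical, and the aws/kops commands are left as comments for illustration; only the base64 step runs as-is:

```shell
# 1. Base64-encode the new password. The -n flag matters: without it,
#    a trailing newline becomes part of the encoded secret.
NEW_SECRET=$(echo -n 'MY_SECRET' | base64)
echo "$NEW_SECRET"   # TVlfU0VDUkVU

# 2. Fetch the secret file from the kops state store (bucket and cluster
#    name are placeholders), swap the "Data" field for $NEW_SECRET, and
#    upload it back:
# aws s3 cp s3://my-kops-state-store/mycluster.example.com/secrets/kube /tmp/kube
#   ...edit the "Data" field in /tmp/kube to hold $NEW_SECRET...
# aws s3 cp /tmp/kube s3://my-kops-state-store/mycluster.example.com/secrets/kube

# 3. Verify and roll the cluster so the change takes effect:
# kops get secrets
# kops rolling-update cluster --yes
```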

Related

Not able to read the Kubernetes secret from a nested

I am very new to Kubernetes. My task is to move an existing application from Kubernetes to EKS. I am using CDK EKS Blueprints to create the cluster in AWS, and AWS Secrets Manager to create the Kubernetes secret. I followed the steps given here: https://aws-quickstart.github.io/cdk-eks-blueprints/addons/secrets-store/
As described on that page, I got the service account created, along with a role for the service account to access the secret, and the secret itself.
Though I have a volume block, a mount path for the secret, and env variables referring to the secret, I am not able to get my pod up and running. Instead it complains that the key is not found in the secret.
The reason may be that when I create a secret manually using the create command, Kubernetes creates the secret as below.
[screenshot of the manually created secret]
But when the Kubernetes secret is created by EKS Blueprints by looking up the existing AWS secret, like
secretProvider: new blueprints.LookupSecretsManagerSecretByName('test-aws-secret'),
the value is created as an encoded (nested) object.
[screenshot of the secret created by EKS Blueprints]
Now I am not sure how to reference the nested object in the YAML. I tried many variations [screenshot of one attempt], but no luck. Any help is much appreciated.
Thanks.
The value of the key field should be key1:
- name: key1-value
  valueFrom:
    secretKeyRef:
      name: secret-test
      key: key1
Including data/secret-test/ before the key name is unnecessary because Kubernetes already knows the secret name from the name field and knows to look for keys under the data field of secrets.
See Secrets for more information.
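When the pod reports that a key is not found, it also helps to inspect which key names the secret actually contains, so the key: field can be matched exactly. A small sketch; the kubectl lines are illustrative and reuse the secret-test name from above, and only the local decode step runs as-is:

```shell
# List the key/value pairs stored under .data of the secret:
# kubectl get secret secret-test -o jsonpath='{.data}'

# Pull out one key and decode it:
# kubectl get secret secret-test -o jsonpath='{.data.key1}' | base64 -d

# Values under .data are base64-encoded strings, so decoding works like:
echo 'c29tZS12YWx1ZQ==' | base64 -d   # some-value
```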

How to use Terraform to store a new secret in AWS Secrets Manager using already KMS-encrypted string?

I need to write Git-revisioned Terraform code to put a secret string into AWS Secrets Manager. Given a secret string in a text file:
% cat /tmp/plaintext-password
my-super-secret-password
I am able to make an encrypted version of it using a KMS key:
# Prints base64-encoded, encrypted string.
aws kms encrypt --key-id my_kms_uuid --plaintext fileb:///tmp/plaintext-password --output text --query CiphertextBlob
# abcdef...123456789/==
What Terraform code can be written to get that base64 string in AWS Secrets Manager, such that AWS knows it was encrypted with my_kms_uuid? I have tried the following:
resource "aws_secretsmanager_secret" "testing-secrets-secret" {
  name       = "secret-for-testing"
  kms_key_id = "<my_kms_uuid>"
}

resource "aws_secretsmanager_secret_version" "testing-secrets-version" {
  secret_id     = aws_secretsmanager_secret.testing-secrets-secret.id
  secret_string = "abcdef...123456789/=="
}
The problem is I can't figure out a way to tell AWS Secrets Manager that the string is already encrypted by KMS so it doesn't have to encrypt it again. Can this be done?
If your goal is to keep secret values out of the statefile, then you have two choices:
Encrypt the secret outside of Terraform, then store the encrypted value in Secrets Manager.
This will force all consumers of the secret to decrypt it before use. Since an encrypted secret includes the CMK used to encrypt it, there's no need for you to separately track the key ID.
There are several drawbacks to this approach. For one thing, you have to perform two steps to use any secret: retrieve it and decrypt it. If you use ECS, you can't just provide the name of the secret and let ECS provide the decrypted value to your container.
A bigger drawback is that it can be very easy to forget which CMK is used for which secret, and accidentally delete the CMK (at which point the secret becomes unusable). Related is knowing which permissions to grant to the consumers, especially if you have a lot of CMKs.
Create the secret inside Terraform, and set its value manually.
This keeps the actual value in Secrets Manager, so you don't need to use two steps to decrypt it.
It is possible to use local-exec to generate the secret within the Terraform configuration: write a script that generates random data and then invokes the AWS CLI to store the value. However, this technique is more frequently used for things like SSH private keys that are created outside of the Terraform provisioning process.
Better than either of these solutions is to store your statefile somewhere that it isn't generally accessible. There are a bunch of backends that can do this for you.
When encrypting a secret string with KMS, the key that was used is actually encoded within the resulting ciphertext. This means we can do the following:
Encrypt the secret string with an existing KMS key in AWS:
aws kms encrypt --key-id arn:aws:kms:us-east-1:<account_id>:key/mrk-blahblahblah --plaintext fileb://<(printf 'SOME_SECRET_TEXT') --output text --query CiphertextBlob --region us-east-1
Then use the aws_kms_secrets data source to decrypt it within Terraform.
Then you push the decrypted secret up into AWS Secrets Manager using aws_secretsmanager_secret_version.
The net result is that only the encrypted secret is kept in version control, but the value that's actually stored in Secrets Manager is the decrypted string.
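The decrypt-then-push steps can be sketched in Terraform. This is illustrative only: it reuses the placeholder ciphertext and key id from the question, the resource labels are made up, and the decrypted plaintext will still end up in the statefile (the trade-off discussed in the other answer):

```hcl
# Decrypt the pre-encrypted ciphertext; KMS infers the CMK from the blob.
data "aws_kms_secrets" "example" {
  secret {
    name    = "password"                # label used to index plaintext below
    payload = "abcdef...123456789/=="   # output of `aws kms encrypt`
  }
}

resource "aws_secretsmanager_secret" "testing-secrets-secret" {
  name       = "secret-for-testing"
  kms_key_id = "<my_kms_uuid>"
}

resource "aws_secretsmanager_secret_version" "testing-secrets-version" {
  secret_id     = aws_secretsmanager_secret.testing-secrets-secret.id
  secret_string = data.aws_kms_secrets.example.plaintext["password"]
}
```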

AWS EC2 userdata encryption

We have a use case of taking input, which includes a password, from the user and passing it to an EC2 instance. From within the EC2 instance we hit the URL http://169.254.169.254/latest/user-data/, get the user data, and set the appropriate passwords.
The issue is that the user data is visible via the AWS CLI tool:
aws ec2 describe-instance-attribute --instance-id <instance-id> --attribute userData --output text --query "UserData.Value" | base64 --decode
This poses a huge security risk.
What's the best way to send sensitive/secret data?
I tried creating a key pair, which creates the private key on the local machine and the public key on the EC2 instance. What would be the right way to encrypt/decrypt using PowerShell and fetch it back on EC2?
The suggested approach would be to store any secrets in an external source.
AWS has a service for storing secrets, Secrets Manager. By using this service you would create a secret containing the secrets that your instance will need to access in its user data. Then give your instance an IAM role with privileges to get the secret value, via the AWS CLI.
Alternatively you could also make use of the AWS SSM Parameter Store service, storing the secrets as a SecureString type. This would work similarly to Secrets Manager, with you retrieving the secret via the AWS CLI and then using it in your script.
There are also third party solutions such as Hashicorp Vault that provide similar functionality if you do not want to store your secrets in an AWS solution.

What is the best possible way to pass API key for AWS EC2 user data script

I have a bash script to run as a user data script when launching an EC2 instance. For that I need to pass an external API access key ID and secret key. I don't want to store these keys in my user data script as they are visible in plaintext. Is there any way I can store these keys somewhere such as AWS Secrets Manager and use them in the user data script?
I would suggest either storing it in Secrets Manager or SSM Parameter Store.
You would need to use the CLI in your userdata script to retrieve the value.
For SSM you would retrieve the secret by using the get-parameter function.
secret=$(aws ssm get-parameter --name "MyStringParameter" --with-decryption --query "Parameter.Value" --output text)
For Secrets Manager you would retrieve the secret using the get-secret-value function.
secret=$(aws secretsmanager get-secret-value --secret-id MyTestDatabaseSecret --query SecretString --output text)
Then in your bash script, wherever you need the secret, you can just reference the variable $secret.
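One caveat worth noting: Secrets Manager secrets are frequently stored as JSON key/value pairs, so $secret may itself be a JSON document whose fields still need extracting. A sketch, using a stand-in value in place of the real get-secret-value output (the username/password field names are illustrative):

```shell
# Stand-in for: secret=$(aws secretsmanager get-secret-value ...)
secret='{"username":"admin","password":"s3cr3t"}'

# Extract a single field with python3 (jq works too, if installed):
db_password=$(echo "$secret" | python3 -c 'import json,sys; print(json.load(sys.stdin)["password"])')
echo "$db_password"   # s3cr3t
```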
If you decide to use either of these, you will need to ensure the EC2 instance has an IAM role attached with a policy granting the permissions you require.
Alternatively if this is a process that happens frequently (autoscaled instance for example) then you should take a look at configuring the base server image (AMI) ahead of time and then referencing this as the source AMI.
With tools such as Ansible, Chef and Puppet you could provision the base image with your secret which would replace any need to do anything in the UserData as it would be available ahead of time.
Usually you can store such secrets in AWS Systems Manager Parameter Store, which is free, unlike AWS Secrets Manager:
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values.
To use that in your UserData, the instance role has to be set up with permissions to access the Parameter Store. Then in your UserData you can use the AWS CLI's ssm get-parameter command to get the value of your secrets.
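The instance-role permission mentioned above can be expressed as an IAM policy along these lines (region, account id, and parameter path are placeholders; if the parameter is a SecureString encrypted with a customer-managed KMS key, kms:Decrypt on that key is needed as well):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParameter",
      "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/myapp/*"
    }
  ]
}
```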

Passing new keys into the AWS CLI

I installed the AWS CLI and supplied the AWS access key ID and secret access key. Everything worked perfectly! I then deleted the user as I had no need for it anymore. I then created a new user (which has a different access key ID and secret access key).
The issue:
When I type
aws configure
I get:
AWS Access Key ID [****…]
AWS Secret Access Key [****...]
So the prompts are showing the previous keys.
How do I enter the new keys into the command prompt?
Just ignore the old key shown in the prompt and input your new key; it will be overwritten.
Just want to add one more way to do it. This is particularly useful if you do not want to overwrite your current credentials but add another set instead.
You can use the profile option to add more credentials:
aws configure --profile <my-new-profile-name> [1]
If you do not use the profile option, you are implicitly configuring the default profile's credentials.
If you want to use a profile afterwards, each aws cli command provides the profile option, e.g.: aws s3 ls --profile <my-new-profile-name> [2]
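What `aws configure --profile <my-new-profile-name>` writes is a new section in ~/.aws/credentials next to the default one, roughly like this (key values elided):

```ini
[default]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...

[my-new-profile-name]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...
```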
References
[1] https://docs.aws.amazon.com/cli/latest/reference/configure/
[2] https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-options.html