AWS Elastic Beanstalk and Secrets Manager - amazon-web-services

Does anyone know whether it is possible to pass a secret value as an environment variable in Elastic Beanstalk?
The obvious alternative is to use the SDK in our codebase, but I want to explore the environment-variable approach first.
Cheers
Damien

Per @Ali's answer, it is not built-in at this point. However, it is relatively easy to do with .ebextensions and the AWS CLI. Here is an example that extracts a secret to a file, keyed off a MY_ENV environment variable. The value could then be exported as an environment variable, but keep in mind environment variables are specific to the shell; you'd need to pass them to anything you launch.
10-extract-htpasswd:
  env:
    MY_ENV:
      "Fn::GetOptionSetting":
        Namespace: "aws:elasticbeanstalk:application:environment"
        OptionName: MY_ENV
  command: |
    aws secretsmanager get-secret-value --secret-id myproj/$MY_ENV/htpasswd --region=us-east-1 --query=SecretString --output text > /etc/nginx/.htpasswd
    chmod o-rwx /etc/nginx/.htpasswd
    chgrp nginx /etc/nginx/.htpasswd
This also requires giving the EB service role IAM permissions to the secrets, i.e. a policy like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "xxxxxxxxxx",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:xxxxxxxxxxxx:secret:myproj*"
    }
  ]
}

As the above answers mention, there is still no built-in solution if you want to do this in Elastic Beanstalk. However, a workaround is to use a platform hook. Unfortunately, platform hooks are poorly documented at this point.
To store your secret, the best solution is to create a custom secret in AWS Secrets Manager. In Secrets Manager you can create a new secret by clicking "Store a new secret", then selecting "Other type of secret" and entering your secret key/value pairs. At the next step you need to provide a secret name (say "your_secret_name"); you can leave everything else at its default settings.
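If you prefer the CLI, the same secret can be created without the console. A minimal sketch, assuming the name your_secret_name and a single hypothetical key/value pair:
aws secretsmanager create-secret \
  --name your_secret_name \
  --secret-string '{"your_secret_key":"your_secret_value"}' \
  --region us-east-1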
Then, you need to allow Elastic Beanstalk to get this secret. You can do it by creating a new IAM policy, for instance with this content:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Getsecretvalue",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": "your-secret-arn"
    }
  ]
}
You need to replace "your-secret-arn" with your secret's ARN, which you can find in the AWS Secrets Manager console. Then attach the policy you created to the EB role (either "aws-elasticbeanstalk-ec2-role" or "aws-elasticbeanstalk-service-role").
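If you prefer the CLI over the console, attaching it looks something like this (the policy ARN is a hypothetical placeholder for the one you just created):
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::123456789012:policy/your-get-secret-policy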
Finally, you need to add a hook file to your application. From the root of your application, the location should be ".platform/hooks/prebuild/your_hook.sh". The content of the file can be something like this:
#!/bin/sh
export your_secret_key=$(aws secretsmanager get-secret-value --secret-id your_secret_name --region us-east-1 | jq -r '.SecretString' | jq -r '.your_secret_key')
touch .env
{
  printf "SECRET_KEY=%s\n" "$your_secret_key"
  # printf whatever other variables you want to pass
} >> .env
Obviously you need to replace "your_secret_name" and the other variables with your own values, and set the region to the one where your secret is stored (if it is not us-east-1). And don't forget to make the hook executable ("chmod +x your_hook.sh").
This assumes that your application can load its env from a .env file (which works fine with docker / docker-compose for example).
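For instance, a minimal docker-compose sketch (service and image names are hypothetical) that picks up the generated file via env_file:
version: "3.8"
services:
  app:
    image: your-app-image   # hypothetical image
    env_file:
      - .env                # generated by the prebuild hook above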
Another option is to store the variable in an ".ebextensions" config file, but unfortunately that doesn't seem to work on the new Amazon Linux 2 platform. What's more, you should not store sensitive information such as credentials directly in your application build: builds can be accessed by anyone with Elastic Beanstalk read access, and they are also stored unencrypted on S3.
With the hook approach, the secret is only stored locally on your Elastic Beanstalk underlying EC2 instances, and you can (should!) restrict direct SSH access to them.

Unfortunately, EB doesn't support secrets at this point; this might be added down the road. You can use them in your environment variables as the documentation suggests, but they will appear in plain text in the console. Another, and IMO better, approach is to use ebextensions and AWS CLI commands to grab secrets from Secrets Manager, which needs some setup (e.g. having the AWS CLI installed and having your secrets stored in SM). You can set these as environment variables in the same EB configuration. Hope this helps!

I'm just adding to @kaliatech's answer because, while very helpful, it had a few gaps that left me unable to get this working for a few days. Basically you need to add a config file to the .ebextensions directory of your EB app, which uses a container_commands section to retrieve your secret (in JSON format) and output it as a .env file into the /var/app/current directory of the EC2 instances where your app's code lives:
# .ebextensions/setup-env.config
container_commands:
  01-extract-env:
    env:
      AWS_SECRET_ID:
        "Fn::GetOptionSetting":
          Namespace: "aws:elasticbeanstalk:application:environment"
          OptionName: AWS_SECRET_ID
      AWS_REGION: {"Ref" : "AWS::Region"}
      ENVFILE: .env
    command: >
      aws secretsmanager get-secret-value --secret-id $AWS_SECRET_ID --region $AWS_REGION |
      jq -r '.SecretString' |
      jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > $ENVFILE
Note: this assumes AWS_SECRET_ID is configured in the app environment, but it could easily be hardcoded here as well.
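If you use the EB CLI, setting that variable is a one-liner (my-secret-name is a hypothetical secret id):
eb setenv AWS_SECRET_ID=my-secret-name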
All the utilities needed for this script are already baked into the EC2 Linux image, but you'll need to grant permissions to the IamInstanceProfile role (usually named aws-elasticbeanstalk-ec2-role), which is assumed by EC2, to allow it to access Secrets Manager:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SecretManagerAccess",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:ap-southeast-2:xxxxxxxxxxxx:secret:my-secret-name*"
    }
  ]
}
Finally, to debug any issues encountered during EC2 instance bootstrap, download the EB logs and check the EC2 log files /var/log/cfn-init.log and /var/log/cfn-init-cmd.log.
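If you have the EB CLI handy, it can pull the full log bundle, including those cfn-init files:
eb logs --all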

This answer only applies if you're using CodePipeline.
I think you can now add a secret in the environment variables section.

If you use AWS CodeBuild, add the following commands to the pre_build phase of your project's buildspec.yml. They retrieve your environment variables from AWS Secrets Manager, use sed to do some substituting/formatting, and append them to the aws:elasticbeanstalk:application:environment namespace in .ebextensions/options.config:
phases:
  pre_build:
    commands:
      - secret=$(aws secretsmanager get-secret-value --secret-id foo-123 --region=bar-xyz --query=SecretString --output text)
      - regex=$(cat ./sed_substitute)
      - echo $secret | sed "${regex}" >> .ebextensions/options.config
It's a bit of a hack, but the sed_substitute used in the commands above produces the indentation/formatting that .ebextensions/options.config demands:
s/",/\n /g; s/":/": /g; s/{"/ /g; s/"}//g; s/"//g;
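To illustrate with hypothetical keys: a SecretString like {"DB_USER":"admin","DB_PASS":"hunter2"} run through those substitutions comes out as lines ready for the environment namespace:
 DB_USER: admin
 DB_PASS: hunter2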

Related

AWS CDK deploy using role

You can find the CDK app that you can use to replicate my issue here: varvay/issue-replication.git. The usage instructions are explained in the README.
I need to deploy a CDK app using a role by issuing this command:
cdk -r arn:aws:iam::000000000000:role/fooRole deploy
but then this error is thrown:
Assuming role failed: User: arn:aws:iam::000000000000:user/fooUser is not authorized to
perform: sts:AssumeRole on resource: arn:aws:iam::000000000000:role/barRole
To be sure, I tried to simulate it by assuming the arn:aws:iam::000000000000:role/barRole role using arn:aws:iam::000000000000:role/fooRole in the AWS IAM Policy Simulator, and it works just fine. One thing that bothers me is that the error says a User tried to assume the role, not a Role.
Why is that? Or should I assume fooRole, update the AWS-related environment variables, and then deploy? If so, what's the point of having the -r option on cdk?
As additional information, here's the trust relationship of the barRole:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::000000000000:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I also tried attaching the AdministratorAccess AWS managed policy to the fooRole used to deploy.
I managed to fulfill my needs by creating a bash script that switches to the destination role and uses the resulting credentials to perform the CDK command, as written below:
#!/bin/bash
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
AWS_CREDENTIAL=$(aws sts assume-role \
  --role-arn <destination role ARN> \
  --role-session-name <role session name> \
  --duration-seconds 3600)
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIAL" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIAL" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIAL" | jq -r '.Credentials.SessionToken')
cdk deploy
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
So there are two places you might be running the cdk deploy command from:
1- You're running it from your local computer's CLI using IAM keys. In this case, the role must be assumable by the IAM user being used.
2- You're running it from an AWS service (e.g. a CI/CD agent on an EC2 instance); then the role attached to the instance should be allowed to assume the deployment role.
Mention how you're running this command and you might get a better answer.
UPDATE:
Based on the updated question:
Add the assume-role permission to your IAM user, not your deployment role. The IAM user from which you're trying to deploy should be allowed to assume the role through which the CDK will be deployed.
To diagram it a bit:
(IAM-USER -> Assume -> Role) -> cdk deploy
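A minimal sketch of that user policy, reusing the account id and role name from the question:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::000000000000:role/barRole"
    }
  ]
}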
The error is in the cross-account role access process, as your error message indicates.
I assume that you start with an AWS configuration for one account, let's call it "Provisioning", and then you need to assume a role in a different account (dev or prod) depending on branches or something?
I suspect an error in the setup of the cross-account roles.
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
One possibility: the role you want to assume does not have your provisioning account as a trusted entity.
Another: the user trying to assume the role does not have the policy for that.
Just follow the tutorial from AWS and see what is missing in your setup :)

Pass AWS CodeBuild IAM Role inside Docker container [unable to locate credentials]

The role configured on the CodeBuild project works fine in the runtime environment, but when we run a command from inside the container it fails with "unable to locate credentials".
Let me know how we can use the role out of the box inside the container.
You can make use of the credential source "EcsContainer" to assume the role seamlessly, without having to export new credentials in your buildspec.yml.
credential_source - The credential provider to use to get credentials for the initial assume-role call. This parameter cannot be provided alongside source_profile. Valid values are:
Environment to pull source credentials from environment variables.
Ec2InstanceMetadata to use the EC2 instance role as source credentials.
EcsContainer to use the ECS container credentials as the source credentials.
From: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html
Steps:
Step-0: Create a new role 'arn:aws:iam::0000000000:role/RoleToBeAssumed' and attach the policies that provide the permissions required for the commands you run during the build.
Step-1: Add sts:AssumeRole permissions to your CodeBuild service role. Here is a sample policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "sts:*",
      "Resource": "arn:aws:iam::0000000000:role/RoleToBeAssumed"
    }
  ]
}
Step-2: Configure your build container to use the credential metadata as the source for assuming the role. Here is a buildspec example:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 8
    commands:
      - aws sts get-caller-identity
      - mkdir ~/.aws/ && touch ~/.aws/config
      - echo "[profile buildprofile]" > ~/.aws/config
      - echo "role_arn = arn:aws:iam::0000000000:role/RoleToBeAssumed" >> ~/.aws/config
      - echo "credential_source = EcsContainer" >> ~/.aws/config
      - aws sts get-caller-identity --profile buildprofile
If you need to run a Docker container in a build environment and the container requires AWS credentials, you must pass through the credentials from the build environment to the container.
docker run -e AWS_DEFAULT_REGION -e AWS_CONTAINER_CREDENTIALS_RELATIVE_URI your-image-tag aws s3 ls
https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-versions
Another way is to assume the role manually and export the auth tokens. Make sure you have ASSUME_ROLE_ARN available as an environment variable:
commands:
  - TEMP_ROLE=$(aws sts assume-role --role-arn $ASSUME_ROLE_ARN --role-session-name temp)
  - export TEMP_ROLE
  - export AWS_ACCESS_KEY_ID=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.AccessKeyId')
  - export AWS_SECRET_ACCESS_KEY=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.SecretAccessKey')
  - export AWS_SESSION_TOKEN=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.SessionToken')
  - docker push $ECR_IMAGE_URL:$IMAGE_TAG

Spinnaker + ECR access

I'm having trouble setting up Spinnaker with ECR access.
Background: I installed spinnaker using helm on an EKS cluster and I've confirmed that the cluster has the necessary ECR permissions (by manually running ECR commands from within the clouddriver pod). I am following the instructions here to get Spinnaker+ECR set up: https://www.spinnaker.io/setup/install/providers/docker-registry/
Issue: When I run:
hal config provider docker-registry account add my-ecr-registry \
--address $ADDRESS \
--username AWS \
--password-command "aws --region us-west-2 ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'"
I get the following output:
+ Get current deployment
Success
- Add the some-ecr-registry account
Failure
Problems in default.provider.dockerRegistry.some-ecr-registry:
- WARNING Resolved Password was empty, missing dependencies for
running password command?
- WARNING You have a supplied a username but no password.
! ERROR Unable to fetch tags from the docker repository: code, 400
Bad Request
? Can the provided user access this repository?
- WARNING None of your supplied repositories contain any tags.
Spinnaker will not be able to deploy any docker images.
? Push some images to your registry.
Problems in halconfig:
- WARNING There is a newer version of Halyard available (1.28.0),
please update when possible
? Run 'sudo apt-get update && sudo apt-get install
spinnaker-halyard -y' to upgrade
- Failed to add account some-ecr-registry for provider
dockerRegistry.
I have confirmed that the AWS CLI is installed on the clouddriver pod, and that I can run the password-command directly from the clouddriver pod and it successfully returns a token.
I've also confirmed that if I manually generate an ECR token and run hal config provider docker-registry account add my-ecr-registry --address $ADDRESS --username AWS --password-command "echo $MANUALLY_GENERATED_TOKEN" everything works fine. So there is something specific to the password-command that is going wrong and I'm not sure how to debug this.
One other odd behavior: if I simplify the password command to hal config provider docker-registry account add some-ecr-registry --address $ADDRESS --username AWS --repositories code --password-command "aws --region us-west-2 ecr get-authorization-token", I get an additional piece of output that says "- WARNING Password command returned non 0 return code stderr/stdout was: bash: aws: command not found". This output only appears for this simplified command.
Any advice on how to debug this would be much appreciated.
If, like me, your ECR registry is in another account, then you have to forcibly assume the role for the target account where your registry resides:
passwordCommand: >-
  read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<<
  `aws sts assume-role --role-arn arn:aws:iam::<AWS_ACCOUNT>:role/<SPINNAKER ROLE_NAME>
  --query "[Credentials.AccessKeyId, Credentials.SecretAccessKey, Credentials.SessionToken]"
  --output text --role-session-name spinnakerManaged-w2`;
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN;
  aws ecr get-authorization-token --region us-west-2 --output text
  --query 'authorizationData[].authorizationToken'
  --registry-ids <AWS_ACCOUNT> | base64 -d | sed 's/^AWS://'
Credits to https://github.com/spinnaker/spinnaker/issues/5374#issuecomment-607468678
I also installed Spinnaker (on AKS), and all I did was use an AWS managing user with the correct AWS IAM policy (ecr:*); with that I have access to the ECR repositories directly.
I don't think that hal, being Java-based, will execute the bash command in --password-command.
Set up the AWS ECS provider in your Spinnaker deployment.
Use the following AWS IAM policy (SpinnakerManagingPolicy), attached to the AWS managing user, to give access to ECR. Please replace the AWS account IDs based on your needs.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "cloudformation:*",
        "ecr:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::123456789012:role/SpinnakerManagedRoleAccount1",
        "arn:aws:iam::101121314157:role/SpinnakerManagedRoleAccount2",
        "arn:aws:iam::202122232425:role/SpinnakerManagedRoleAccount3"
      ],
      "Effect": "Allow"
    }
  ]
}

Problems mounting a S3 bucket with s3fs

I am trying to mount an S3 bucket on an AWS EC2 instance following this instruction. I was able to install the dependencies via yum, clone the git repository, and then make and install the s3fs tool.
Furthermore, I ensured my AWSACCESSKEYID and AWSSECRETACCESSKEY values were present in several locations (because I could not get the tool to work, and the answers I found suggested placing the file in different locations):
~/.passwd-s3fs
/etc/.passwd-s3fs
~/.bash_profile
For the .passwd-s3fs files I have set the permissions as follows.
chmod 600 ~/.passwd-s3fs
chmod 640 /etc/.passwd-s3fs
Additionally, the .passwd-s3fs files contain the credentials in the suggested format: AWSACCESSKEYID:AWSSECRETACCESSKEY.
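For illustration, the whole file is a single line, access key and secret key separated by a colon (these are AWS's documented example values, not real credentials):
AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY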
I have also logged out and back in just to make sure the changes take effect. When I execute /usr/bin/s3fs bucketname /mnt, I get the following response.
s3fs: MOUNTPOINT: /mnt permission denied.
When I run the same command with sudo, e.g. sudo /usr/bin/s3fs mybucket /mnt, I get the following message.
s3fs: could not determine how to establish security credentials.
I am using s3fs v1.84 on the following AMI ami-0ff8a91507f77f867 (Amazon Linux AMI 2018.03.0.20180811 x86_64 HVM GP2). From the AWS Console for S3, my bucket's name is NOT mybucket but something just as simple (I am wondering if there's anything special I have to do with naming).
Additionally, my AWS access and secret key pair is generated from the IAM web interface and placed into the admin group (having AdministratorAccess policy) defined below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
Any ideas on what's going on? Did I miss a step?
After tinkering a bit, I found the following helps.
/usr/bin/s3fs mybucket /mnt -o passwd_file=.passwd-s3fs -o allow_other
Note that I specify the .passwd-s3fs file's location, and also that I allow others to view the mount. Additionally, I had to modify /etc/fuse.conf to enable user_allow_other.
# mount_max = 1000
user_allow_other
To test, I typed in touch /mnt/README.md and then observed the file in my S3 bucket (web UI).
I am a little disappointed that this problem is not better documented. I would have expected the default home location or /etc to be where the tool looks for the .passwd-s3fs file, but that's not the case. Additionally, sudo (as suggested by a link I did not bookmark) forces the tool to look in ~/home/root, which does not exist.
For me it was a mismatch between the IAM role specified while mounting and the IAM role of the EC2 server.
The EC2 instance was launched with role2, and I was mounting with
/usr/local/bin/s3fs -o allow_other mybucket /mnt/s3fs/mybucketfolder -o iam_role='role1'
which did not throw any error, but did not mount.
PS: I do not have any access keys or s3fs password file on the EC2 server.
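If the instance role is the intended credential source, the fix is to pass the role the instance was actually launched with, or let s3fs discover it from instance metadata. A sketch using the role name from above:
# match the attached role explicitly...
/usr/local/bin/s3fs -o allow_other -o iam_role='role2' mybucket /mnt/s3fs/mybucketfolder
# ...or let s3fs detect it automatically
/usr/local/bin/s3fs -o allow_other -o iam_role=auto mybucket /mnt/s3fs/mybucketfolder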

Ec2 calling CLI from user data

When launching an ec2 instance, how does one go about using CLI commands from within a user data shell script?
When I SSH into the instance I can run CLI commands and everything works as expected.
I'm assuming the issue is that user data is executed as root, whereas when I SSH into the instance and run the CLI commands I do so as ec2-user.
Considering I have to launch an instance every time I want to test my new user data script (this takes 3 minutes per try), I'd really appreciate not having to guess and check my way through this one.
Any help is appreciated. Thank you.
Your newly launched instance needs to have access to the command that you're trying to use. I suggest setting up an IAM role and adding it to the instance; this will save you the setup of credentials etc. Example IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeTags",
        "ec2:CreateTags"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    }
  ]
}
Ubuntu Example userdata
#!/bin/bash -x
apt-get update
apt-get install -y awscli # yum install awscli on CentOS based OS
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed s/.$//g)
I_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws_p="$(which aws) --region ${REGION} --output text"
$aws_p ec2 create-tags --resources $I_ID --tags Key=Name,Value=my-test-server --region $REGION
# ............ more stuff related to your deployment ..... #
This will install the AWS CLI on the system, and the instance will tag itself with the test name.
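One caveat: on instances that enforce IMDSv2, the plain metadata curl calls above will fail, and the script needs a session token first. A minimal sketch:
# IMDSv2: fetch a session token, then pass it with each metadata request
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
I_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id)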
See how to add proper IAM roles