When launching an EC2 instance, how does one go about using AWS CLI commands from within a user data shell script?
When I SSH into the instance I can run CLI commands and everything works as expected.
I'm assuming the issue is that user data is executed as root. When I SSH into the instance and run the CLI commands I do so as ec2-user.
Considering I have to launch an instance every time I want to test my new user data script (this takes 3 minutes per try), I'd really appreciate not having to guess and check my way through this one.
Any help is appreciated. Thank you!
Your newly launched instance needs to have access to the commands you're trying to use. I suggest setting up an IAM role and attaching it to the instance; this saves you from having to configure credentials, etc. Example IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeTags",
        "ec2:CreateTags"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    }
  ]
}
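If you prefer to script the role setup as well, a rough AWS CLI sketch might look like this (the role and profile names are made up, and the policy above is assumed saved as tag-policy.json):
#!/bin/bash
# Trust policy allowing EC2 to assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name userdata-tagger \
  --assume-role-policy-document file://trust.json
aws iam put-role-policy --role-name userdata-tagger \
  --policy-name tag-self --policy-document file://tag-policy.json
aws iam create-instance-profile --instance-profile-name userdata-tagger
aws iam add-role-to-instance-profile \
  --instance-profile-name userdata-tagger --role-name userdata-tagger

# Launch the instance with the profile attached
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro \
  --iam-instance-profile Name=userdata-tagger --user-data file://userdata.sh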
Example user data (Ubuntu):
#!/bin/bash -x
apt-get update
apt-get install -y awscli   # yum install -y awscli on CentOS-based distros
# Derive the region from the availability zone (strip the trailing letter)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
I_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws_p="$(which aws) --region ${REGION} --output text"
# --region is already part of $aws_p, so it isn't repeated here
$aws_p ec2 create-tags --resources "$I_ID" --tags Key=Name,Value=my-test-server
# ............ more stuff related to your deployment ..... #
This installs the AWS CLI on the system, and the instance then tags itself with a test Name tag.
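One caveat (an assumption about your setup): if the instance enforces IMDSv2, the plain curl metadata calls above return nothing, and you'd need the token-based variant:
# IMDSv2 variant of the metadata lookups (only needed if IMDSv2 is enforced)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
I_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)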
See the AWS documentation for how to add the proper IAM roles.
Does anyone know if it is possible to pass a secret value as an environment variable in Elastic Beanstalk?
The alternative, obviously, is to use the SDK in our codebase, but I want to explore the environment variable approach first.
Cheers,
Damien
Per @Ali's answer, it is not built-in at this point. However, it is relatively easy to do with .ebextensions and the AWS CLI. Here is an example that extracts a secret to a file, according to a MY_ENV environment variable. This value could then be set as an environment variable, but keep in mind environment variables are specific to the shell; you'd need to pass them to anything you are launching.
10-extract-htpasswd:
  env:
    MY_ENV:
      "Fn::GetOptionSetting":
        Namespace: "aws:elasticbeanstalk:application:environment"
        OptionName: MY_ENV
  command: |
    aws secretsmanager get-secret-value --secret-id myproj/$MY_ENV/htpasswd --region=us-east-1 --query=SecretString --output text > /etc/nginx/.htpasswd
    chmod o-rwx /etc/nginx/.htpasswd
    chgrp nginx /etc/nginx/.htpasswd
This also requires giving the EB service role IAM permissions to read the secrets, i.e. a policy like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "xxxxxxxxxx",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:xxxxxxxxxxxx:secret:myproj*"
    }
  ]
}
As the above answers mention, there is still no built-in solution if you want to do this in Elastic Beanstalk. However, a workaround is to use a "platform hook". Unfortunately it is poorly documented at this point.
To store your secret, the best solution is to create a custom secret in AWS Secrets Manager. In Secrets Manager you can create a new secret by clicking "Store a new secret", then selecting "Other type of secret" and entering your secret key/value. At the next step you need to provide a secret name (say "your_secret_name"); you can leave everything else at its default settings.
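If you'd rather script it, the same secret can be created from the CLI; the name and key below just mirror the placeholders above:
# Creates a key/value secret equivalent to the console steps above
aws secretsmanager create-secret \
  --name your_secret_name \
  --secret-string '{"your_secret_key":"your_secret_value"}' \
  --region us-east-1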
Then you need to allow Elastic Beanstalk to read this secret. You can do so by creating a new IAM policy, for instance with this content:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Getsecretvalue",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": "your-secret-arn"
    }
  ]
}
You need to replace "your-secret-arn" with your secret's ARN, which you can get from the AWS Secrets Manager console. Then you need to attach the policy you created to the EB roles (either "aws-elasticbeanstalk-ec2-role" or "aws-elasticbeanstalk-service-role").
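For reference, attaching the policy from the CLI could look like this (the policy name is made up; the role name is the standard EB EC2 role):
# Create the managed policy from the JSON above and attach it to the EB EC2 role
aws iam create-policy --policy-name eb-read-secret \
  --policy-document file://eb-read-secret.json
aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::<your-account-id>:policy/eb-read-secret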
Finally, you need to add a hook file to your application. From the root of your application, the location should be ".platform/hooks/prebuild/your_hook.sh". The content of the file can be something like this:
#!/bin/sh
# Pull the secret from Secrets Manager and extract the key we need
export your_secret_key=$(aws secretsmanager get-secret-value --secret-id your-secret-name --region us-east-1 | jq -r '.SecretString' | jq -r '.your_secret_key')
touch .env
{
  printf "SECRET_KEY=%s\n" "$your_secret_key"
  # printf whatever other variable you want to pass
} >> .env
Obviously you need to replace "your_secret_name" and the other variables with your own values, and set the region to the one where your secret is stored (if it is not us-east-1). And don't forget to make the hook executable ("chmod +x your_hook.sh").
This assumes that your application can load its environment from a .env file (which works fine with docker / docker-compose, for example).
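If the application itself doesn't parse .env files, a shell wrapper is one way to load them; a minimal sketch (the path and start command are placeholders):
# Export every KEY=value line from the .env written by the hook, then start the app
set -a                      # auto-export all variables assigned below
. /var/app/current/.env     # path is an assumption; use wherever your .env lands
set +a
exec your-app-start-command # placeholder for however you launch the app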
Another option is to store the variable in an ".ebextensions" config file, but unfortunately that doesn't seem to work with the new Amazon Linux 2 platform. What's more, you should not store sensitive information such as credentials directly in your application build: builds of the application can be accessed by anyone with Elastic Beanstalk read access, and they are also stored unencrypted on S3.
With the hook approach, the secret is only stored locally on your Elastic Beanstalk underlying EC2 instances, and you can (should!) restrict direct SSH access to them.
Unfortunately, EB doesn't support secrets at this point; this might be added down the road. You can use them in your environment variables as the documentation suggests, but they will appear in plain text in the console. Another, and IMO better, approach would be to use .ebextensions and AWS CLI commands to grab secrets from Secrets Manager, which needs some setup (e.g. having the AWS CLI installed and your secrets stored in SM). You can set these as environment variables in the same EB configuration. Hope this helps!
I'm just adding to @kaliatech's answer because, while very helpful, it had a few gaps that left me unable to get this working for a few days. Basically you need to add a config file to the .ebextensions directory of your EB app, which uses a container_commands section to retrieve your secret (in JSON format) and output it as a .env file into the /var/app/current directory of the EC2 instances where your app's code lives:
# .ebextensions/setup-env.config
container_commands:
  01-extract-env:
    env:
      AWS_SECRET_ID:
        "Fn::GetOptionSetting":
          Namespace: "aws:elasticbeanstalk:application:environment"
          OptionName: AWS_SECRET_ID
      AWS_REGION: {"Ref" : "AWS::Region"}
      ENVFILE: .env
    command: >
      aws secretsmanager get-secret-value --secret-id $AWS_SECRET_ID --region $AWS_REGION |
      jq -r '.SecretString' |
      jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > $ENVFILE
Note: this assumes AWS_SECRET_ID is configured in the app environment, but it could just as easily be hardcoded here.
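For completeness, one way to set that variable (assuming the EB CLI is installed and the environment is initialized):
# Set AWS_SECRET_ID in the EB environment; the secret id value is illustrative
eb setenv AWS_SECRET_ID=myproj/prod/secrets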
All the utilities needed for this script are already baked into the EC2 Linux image, but you'll need to grant permissions to the IamInstanceProfile role (usually named aws-elasticbeanstalk-ec2-role), which is assumed by EC2, to allow it to access Secrets Manager:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SecretManagerAccess",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:ap-southeast-2:xxxxxxxxxxxx:secret:my-secret-name*"
    }
  ]
}
Finally, to debug any issues encountered during EC2 instance bootstrap, download the EB logs and check the EC2 log files /var/log/cfn-init.log and /var/log/cfn-init-cmd.log.
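If you use the EB CLI, one convenient way to pull that whole bundle:
# Retrieves the full log bundle, which includes the cfn-init logs mentioned above
eb logs --all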
This answer only applies if you're using CodePipeline.
I think you can add a secret in the environment variables section now.
If you use AWS CodeBuild, add the following commands to the pre_build phase of your project's buildspec.yml. They retrieve your environment variables from AWS Secrets Manager, use sed to do some substituting/formatting, and append them to the aws:elasticbeanstalk:application:environment namespace in .ebextensions/options.config:
phases:
  pre_build:
    commands:
      - secret=$(aws secretsmanager get-secret-value --secret-id foo-123 --region=bar-xyz --query=SecretString --output text)
      - regex=$(cat ./sed_substitute)
      - echo $secret | sed "${regex}" >> .ebextensions/options.config
It's a bit of a hack, but the sed_substitute used in the commands above to get the indentation/formatting that .ebextensions/options.config demands was:
s/",/\n /g; s/":/": /g; s/{"/ /g; s/"}//g; s/"//g;
I'm having trouble setting up Spinnaker with ECR access.
Background: I installed spinnaker using helm on an EKS cluster and I've confirmed that the cluster has the necessary ECR permissions (by manually running ECR commands from within the clouddriver pod). I am following the instructions here to get Spinnaker+ECR set up: https://www.spinnaker.io/setup/install/providers/docker-registry/
Issue: When I run:
hal config provider docker-registry account add my-ecr-registry \
--address $ADDRESS \
--username AWS \
--password-command "aws --region us-west-2 ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'"
I get the following output:
+ Get current deployment
Success
- Add the some-ecr-registry account
Failure
Problems in default.provider.dockerRegistry.some-ecr-registry:
- WARNING Resolved Password was empty, missing dependencies for
running password command?
- WARNING You have a supplied a username but no password.
! ERROR Unable to fetch tags from the docker repository: code, 400
Bad Request
? Can the provided user access this repository?
- WARNING None of your supplied repositories contain any tags.
Spinnaker will not be able to deploy any docker images.
? Push some images to your registry.
Problems in halconfig:
- WARNING There is a newer version of Halyard available (1.28.0),
please update when possible
? Run 'sudo apt-get update && sudo apt-get install
spinnaker-halyard -y' to upgrade
- Failed to add account some-ecr-registry for provider
dockerRegistry.
I have confirmed that the AWS CLI is installed on the clouddriver pod, and I've confirmed that I can run the password-command directly from the clouddriver pod and it successfully returns a token.
I've also confirmed that if I manually generate an ECR token and run hal config provider docker-registry account add my-ecr-registry --address $ADDRESS --username AWS --password-command "echo $MANUALLY_GENERATED_TOKEN" everything works fine. So there is something specific to the password-command that is going wrong and I'm not sure how to debug this.
One other odd behavior: if I simplify the password command to hal config provider docker-registry account add some-ecr-registry --address $ADDRESS --username AWS --repositories code --password-command "aws --region us-west-2 ecr get-authorization-token", I get an additional piece of output: "- WARNING Password command returned non 0 return code stderr/stdout was:bash: aws: command not found". This output only appears for this simplified command.
Any advice on how to debug this would be much appreciated.
If, like me, your ECR registry is in another account, then you have to forcibly assume the role for the target account where your registry resides:
passwordCommand: read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< `aws sts assume-role --role-arn arn:aws:iam::<AWS_ACCOUNT>:role/<SPINNAKER ROLE_NAME> --query "[Credentials.AccessKeyId, Credentials.SecretAccessKey, Credentials.SessionToken]" --output text --role-session-name spinnakerManaged-w2`; export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; aws ecr get-authorization-token --region us-west-2 --output text --query 'authorizationData[].authorizationToken' --registry-ids <AWS_ACCOUNT> | base64 -d | sed 's/^AWS://'
Credits to https://github.com/spinnaker/spinnaker/issues/5374#issuecomment-607468678
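The same passwordCommand broken across lines for readability (halyard expects it as a single line; the angle-bracket placeholders are yours to fill in):
# Readability-only split of the one-liner above
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< \
  "$(aws sts assume-role \
       --role-arn arn:aws:iam::<AWS_ACCOUNT>:role/<SPINNAKER_ROLE_NAME> \
       --query "[Credentials.AccessKeyId, Credentials.SecretAccessKey, Credentials.SessionToken]" \
       --output text --role-session-name spinnakerManaged-w2)"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws ecr get-authorization-token --region us-west-2 --output text \
  --query 'authorizationData[].authorizationToken' \
  --registry-ids <AWS_ACCOUNT> | base64 -d | sed 's/^AWS://'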
I also installed Spinnaker on AKS, and all I did was use an AWS managing user with the correct AWS IAM policy (ecr:*); with it I have access to the ECR repositories directly.
I don't think that hal, being Java-based, will execute the Bash command in --password-command.
Set the AWS ECS provider in your Spinnaker deployment.
Use the following AWS IAM policy (SpinnakerManagingPolicy), attached to the AWS managing user, to give access to ECR. Please replace the AWS account IDs based on your needs.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "cloudformation:*",
        "ecr:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::123456789012:role/SpinnakerManagedRoleAccount1",
        "arn:aws:iam::101121314157:role/SpinnakerManagedRoleAccount2",
        "arn:aws:iam::202122232425:role/SpinnakerManagedRoleAccount3"
      ],
      "Effect": "Allow"
    }
  ]
}
I am using this template to create the stack:
https://aws-blockchain-templates-us-east-1.s3.us-east-1.amazonaws.com/hyperledger/fabric/templates/simplenetwork/latest/hyperledger.template.yaml
While following this blog post from AWS, I am getting an error.
Blog post link:
https://aws.amazon.com/blockchain/templates/getting-started/
Region : us-east-1
Error Message :
The following resource(s) failed to create: [FabricEC2CommonStack]. . Rollback requested by user.
CREATE_FAILED AWS::CloudFormation::Stack FabricEC2CommonStack Embedded stack arn:aws:cloudformation:us-east-1:>:stack/FabricStack-FabricEC2CommonStack-NNFUD6RJCZB1/<> was not successfully created: The following resource(s) failed to create: [EC2InstanceForDev].
I have met all the prerequisites.
What could be the reason for this error, and how can I rectify it?
After this, I get ROLLBACK_IN_PROGRESS and ROLLBACK_COMPLETE.
The official AWS Blockchain CloudFormation template for Hyperledger Fabric is a nested template (the base template calls another template, which does all the setup on an EC2 instance that it creates itself).
But the problem is that it does everything on the EC2 instance except installing docker-compose, and it throws a "docker-compose: command not found" error at the end, which causes the CloudFormation template to break (EC2InstanceForDev) and roll back. So instead of using the CloudFormation template, we can run the same script manually on the EC2 instance with one small change: install docker-compose beforehand. The rest of the setup remains the same, i.e.:
1. Create a VPC.
2. Create public subnets.
3. Create an EIP if you want to attach it later.
4. Create a key pair for SSH.
5. Create an IAM role and policy.
6. Create a security group with inbound 8080 (TCP) and 22 (SSH).
7. Launch an EC2 instance with the resources created in steps 1-6.
The preferred AMIs are:
ami-1853ac65 for us-east-1
ami-25615740 for us-east-2
ami-dff017b8 for us-west-2
Docker image repository account IDs:
354658284331 for us-east-1
763976151875 for us-east-2
712425161857 for us-west-2
SCRIPT TO RUN ON EC2 (make the script executable, e.g. chmod +x) -
#!/bin/bash -x
# Install docker-compose first (the step the CloudFormation template misses)
sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version
res=$?
echo $res
mkdir /tmp/fabric-install/
cd /tmp/fabric-install/
wget https://aws-blockchain-templates-us-east-1.s3.us-east-1.amazonaws.com/hyperledger/fabric/templates/simplenetwork/latest/HyperLedger-BasicNetwork.tgz -O /home/ec2-user/HyperLedger-BasicNetwork.tgz
cd /home/ec2-user
tar xzvf HyperLedger-BasicNetwork.tgz
rm /home/ec2-user/HyperLedger-BasicNetwork.tgz
chown -R ec2-user:ec2-user HyperLedger-BasicNetwork
chmod +x /home/ec2-user/HyperLedger-BasicNetwork/artifacts/first-run-standalone.sh
# Args: region root-domain org1 org2 org3 channel ECR-registry-URL ECR-account-id
/home/ec2-user/HyperLedger-BasicNetwork/artifacts/first-run-standalone.sh us-east-1 example.com org1 org2 org3 mychannel 354658284331.dkr.ecr.us-east-1.amazonaws.com/ 354658284331
res=$?
echo $res
The IAM policy which I attached to the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}
NOTE -
Please replace the appropriate AWS ECR account number and AWS region for your region in the above script. The script also contains (example.com org1 org2 org3 mychannel); please change these as per your requirements. They are the same RootDomain, Org1SubDomain, Org2SubDomain, Org3SubDomain, and ChannelName as entered in the CF template.
This whole process was tested in the us-east-1 region; the script can be deployed straight in us-east-1. To access the Hyperledger web monitor interface, browse to http://EC2-DNS-or-EIP:8080.
Check your IAM role; that fixed the issue for me.
I am setting up an AWS RDS cluster and researching how to connect to it with credentials. The options seem to be either username/password as usual, or IAM with a 15-minute token.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
The IAM instance role supplied to EC2 can also specify that it is allowed to connect to the cluster, so this seems pretty nice; I guess in that case no tokens are needed.
Is anyone using IAM in this case, or is the usual user/password simpler? The documentation states that you should constrain connections to 20 per second or lower when using IAM. It's difficult for me to assess whether this is low or not. Does anyone know the performance impact of IAM authentication on AWS RDS?
Prepare the EC2 Instance
Install the following packages and run these commands:
yum install curl mysql -y
service mysqld start
chkconfig mysqld on
Set up the Database to use IAM
# Connect to DB
RDS_HOST="db-with-iam-support.ct5b4uz1gops.eu-central-1.rds.amazonaws.com"
REGION="eu-central-1"
# mysql -h {database or cluster endpoint} -P {port number database is listening on} -u {master db username} -p
mysql -h ${RDS_HOST} -P 3306 -u dbuser -p
Run this command to create a database user account that will use an AWS authentication token instead of a password:
CREATE USER 'db_iam_user' IDENTIFIED WITH AWSAuthenticationPlugin as 'RDS';
Optionally, run this command to require the user to connect to the database using SSL (see the AWS docs for details):
GRANT USAGE ON *.* TO 'db_iam_user'@'%' REQUIRE SSL;
Run the "exit" command to close the MySQL session.
IAM Inline Policy
Create an inline policy to allow DB access for the user; change the DB ARN accordingly (a CLI sketch for attaching it follows the JSON):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rds-db:connect"
      ],
      "Resource": [
        "arn:aws:rds-db:eu-central-1:111111111111:dbuser:db-RWXD2T7YIWZU4VI2FBHSM2GE24/db_iam_user"
      ]
    }
  ]
}
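Attaching it from the CLI could look like this (the role and policy names are placeholders; the JSON above is assumed saved as rds-connect.json):
# Attach the JSON above as an inline policy on the instance role
aws iam put-role-policy --role-name my-ec2-role \
  --policy-name rds-iam-connect --policy-document file://rds-connect.json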
Download SSL Certificates
Download the AWS RDS certificate PEM bundle:
mkdir -p /var/mysql-certs/
cd /var/mysql-certs/
curl -O https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Generate an AWS authentication token
The authentication token consists of several hundred characters and can be unwieldy on the command line. One way to work around this is to save the token to an environment variable and use that variable when you connect.
TOKEN="$(aws rds generate-db-auth-token --hostname ${RDS_HOST} --port 3306 --region ${REGION} --username db_iam_user)"
Connect to Database
mysql --host="${RDS_HOST}" \
--port=3306 \
--user=db_iam_user \
--ssl-ca=/var/mysql-certs/rds-combined-ca-bundle.pem \
--ssl-verify-server-cert \
--enable-cleartext-plugin \
--password="$TOKEN
Reference : https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html
https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/
I am new to AWS and I am trying to deploy using AWS CodeDeploy from GitHub.
For that, I created an instance named CodeDeployDemo and attached a role and policy to the instance.
Policy ARN: arn:aws:iam::378939197253:policy/CE2CodeDeploy9
My policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
I also attached the policy named AmazonEC2RoleforAWSCodeDeploy.
I also installed the CodeDeploy agent on my Ubuntu instance, step by step, as follows:
$chmod 400 Code1.pem
$ssh -i "Code1.pem" ubuntu@54.183.22.255
$sudo apt-get update
$sudo apt-get install awscli
$sudo apt-get install ruby2.0
$cd /home/ubuntu
$sudo aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
$sudo chmod +x ./install
$sudo ./install auto
and then I create my application and deploy from GitHub to CodeDeploy using CodeDeployDefault.OneAtATime.
But at the final stage it shows the following error:
Deployment failed: Because too many individual instances failed deployment,
too few healthy instances are available for deployment,
or some instances in your deployment group are experiencing problems.
(Error code: HEALTH_CONSTRAINTS)
NOTE: Only one instance is running when my deployment runs; I stopped the other instances.
Please help me find a solution for this. Thanks in advance!
This happens because CodeDeploy checks the health of the EC2 instances by hitting them. Before deployment, you need to run the bash script below on the instances and check that it works. The Apache (httpd) service must be started. Then reboot the instance.
#!/bin/bash
sudo su
# Install Apache, Ruby, and the AWS CLI (CodeDeploy agent prerequisites)
apt-get update -y
apt-get install apache2 -y
apt-get install ruby2.0 -y
apt-get install awscli -y
cd ~
# Download and run the CodeDeploy agent installer for this region
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto
# Publish a trivial page so health checks have something to hit
echo 'hello world' > /var/www/html/index.html
hostname >> /var/www/html/index.html
update-rc.d apache2 defaults
service apache2 start
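After the script finishes, a quick sanity check on the agent helps before retrying the deployment (assuming the standard service name installed by the script above):
# Verify the CodeDeploy agent is running; start it if it isn't
sudo service codedeploy-agent status || sudo service codedeploy-agent start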