I am setting up an AWS RDS cluster and researching how to connect to it with credentials. The options seem to be either the usual username/password or IAM authentication with a 15-minute token.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
The IAM instance role supplied to EC2 can also specify that it is allowed to connect to the cluster, so this seems pretty nice; I guess in that case no tokens are needed.
Is anyone using IAM for this, or is the usual user/password simpler? The documentation states that you should constrain connections to 20 per second or lower when using IAM. It's difficult for me to assess whether this is low or not. Does anyone know the performance impact IAM authentication has on AWS RDS?
Prepare EC2 Instance
Install the MySQL client and curl (a local MySQL server is not needed just to connect to RDS):
yum install -y curl mysql
Setup Database to use IAM
# Connect to DB
RDS_HOST="db-with-iam-support.ct5b4uz1gops.eu-central-1.rds.amazonaws.com"
REGION="eu-central-1"
# mysql -h {database or cluster endpoint} -P {port number database is listening on} -u {master db username} -p
mysql -h ${RDS_HOST} -P 3306 -u dbuser -p
Run this command to create a database user account that will use an AWS authentication token instead of a password:
CREATE USER 'db_iam_user' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
Optionally, run this command to require the user to connect to the database using SSL:
GRANT USAGE ON *.* TO 'db_iam_user'@'%' REQUIRE SSL;
Run the “exit” command to close MySQL
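As a quick sanity check before exiting (a hypothetical verification query, assuming you are still connected as the master user), you can confirm which authentication plugin the new user is mapped to:

```sql
-- The plugin column should read AWSAuthenticationPlugin for the IAM user
SELECT user, host, plugin FROM mysql.user WHERE user = 'db_iam_user';
```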
IAM Inline Policy
Inline policy that allows database access for the user. Change the DB ARN accordingly:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds-db:eu-central-1:111111111111:dbuser:db-RWXD2T7YIWZU4VI2FBHSM2GE24/db_iam_user"
]
}
]
}
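The Resource ARN above has the shape arn:aws:rds-db:{region}:{account-id}:dbuser:{DbiResourceId}/{db-user-name}. A small sketch of how it is assembled (the helper function name is mine; the resource ID is the value you look up from the instance):

```shell
#!/bin/sh
# Assemble an rds-db connect ARN.
# Arguments: region, account id, DbiResourceId, database user name
make_rds_db_arn() {
  echo "arn:aws:rds-db:$1:$2:dbuser:$3/$4"
}

# The DbiResourceId can be fetched with the AWS CLI, e.g.:
#   aws rds describe-db-instances --query 'DBInstances[0].DbiResourceId' --output text
# (for an Aurora cluster, use describe-db-clusters and DbClusterResourceId instead)
make_rds_db_arn eu-central-1 111111111111 db-RWXD2T7YIWZU4VI2FBHSM2GE24 db_iam_user
```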
Download SSL Certificates
Download the AWS RDS certificate bundle (PEM file):
mkdir -p /var/mysql-certs/
cd /var/mysql-certs/
curl -O https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Generate an AWS authentication token
The authentication token consists of several hundred characters. It can be unwieldy on the command line. One way to work around this is to save the token to an environment variable, and then use that variable when you connect.
TOKEN="$(aws rds generate-db-auth-token --hostname ${RDS_HOST} --port 3306 --region ${REGION} --username db_iam_user)"
Connect to Database
mysql --host="${RDS_HOST}" \
--port=3306 \
--user=db_iam_user \
--ssl-ca=/var/mysql-certs/rds-combined-ca-bundle.pem \
--ssl-verify-server-cert \
--enable-cleartext-plugin \
--password="$TOKEN"
Reference : https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html
https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/
With the explosion of multi-account AWS configuration, and ssh being snuffed out in favor of Session Manager, I need ssh functionality and a multi-profile ProxyCommand.
From the AWS docs it's simple enough, but I can see no way to add extra args to specify a profile. All I can think of is essentially concatenating the profile to the instance ID and creating dedicated commands.
The question:
How can I support multiple profiles using aws ssm when the proxycommand doesn't seem to offer me extra args?
Example of what I would like: ssh ec2-user@i-18274659f843 --profile dev
because the i-* alone doesn't indicate which account profile to use.
Assuming you're using the example below in your ~/.ssh/config, you can just define the AWS_PROFILE environment variable before connecting to the desired instance:
host i-* mi-*
ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
terminal:
$ export AWS_PROFILE=bernard
$ ssh i-12345
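If you'd rather not export AWS_PROFILE each time, another option is to bake the profile into dedicated Host aliases in ~/.ssh/config. This is a sketch: the dev- prefix and the dev profile name are made up, and the alias prefix is stripped before it is passed to SSM as the target:

```
# "ssh ec2-user@dev-i-12345" strips the "dev-" prefix and uses --profile dev
Host dev-i-* dev-mi-*
    ProxyCommand sh -c "aws ssm start-session --target $(echo %h | cut -c 5-) --document-name AWS-StartSSHSession --parameters 'portNumber=%p' --profile dev"
```

You would add one such block per profile; ssh substitutes %h with the full alias, and the cut removes the first four characters (the prefix) to recover the instance ID.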
I'm having trouble pulling images from AWS ECR, running Docker Swarm. It's been working ok for years, but my swarm manager nodes were changed to new EC2 instances. Now my services fail to deploy:
~ $ docker stack deploy -c dkr_compose_geo_site:3.2.0 --with-registry-auth geo_stack
The manager node log shows "no basic auth credentials":
May 19 21:21:12 ip-172-31-3-108 root: time="2020-05-19T21:21:12.857007050Z" level=error msg="pulling image failed" error="Get https://445523.dkr.ecr.us-west-2.amazonaws.com/v2/geo_site/manifests/sha256:da5820742cd0ecd52e3a2c61179a039ce80996564604b70465e3966087380a09: no basic auth credentials" module=node/agent/taskmanager node.id=eix8c6orbunemismg03ib1rih service.id=smilb788pets7y5rgbu3aze9l task.id=zd3ozdpr9exphwlz318pa9lpe
May 19 21:21:12 ip-172-31-3-108 root: time="2020-05-19T21:21:12.857701347Z" level=error msg="fatal task error" error="No such image: 445523.dkr.ecr.us-west-2.amazonaws.com/geo_site@sha256:da5820742cd0ecd52e3a2c61179a039ce80996564604b70465e3966087380a09" module=node/agent/taskmanager node.id=eix8c6orbunemismg03ib1rih service.id=smilb788pets7y5rgbu3aze9l task.id=zd3ozdpr9exphwlz318pa9lpe
This manager node is running on an EC2 Instance with an IAM Role; the IAM Role has an ECR policy that appears to grant permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:DescribeImages",
"ecr:BatchGetImage"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
From reading the AWS/Docker docs, I thought docker commands run on a manager node should adopt the Instance IAM Role and access the ECR repo using the associated policy permissions. It's always seemed to work that way, but now it's looking like there might have been some config file hidden on the old manager node; I'm on a new instance and it doesn't work. I don't run an AWS-CLI on these manager nodes, so there's no aws ecr get-login to login manually. How do I get this new manager node to authenticate with ECR?
Thanks!
My solution, based on comment by Luigi Lopez and amazon-ecr-credential-helper:
The AWS IAM Role allows authentication, but the docker cli must still present credentials to the ECR, as Luigi pointed out in his comment.
This is a Docker Swarm implementation, with nodes running the Alpine OS. There is an aws-cli package available for Alpine, but the installation took a lot of fussing around and in the end the binary crashed anyway.
The Amazon ECR Credential Helper is a better long-term solution in any case because you don't need to get new tokens every 12 hours or set up a proxy server, etc. It uses the recommended IAM Role authentication, with no credentials stored on the machine or leaking into log files.
So under Alpine I followed the instructions in the link above to build from sources.
I installed go, git, and make, and then built the credential-helper as described. I set up the PATH as described, created a config file, and then my deployment worked. There's no docker login required.
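For reference, the wiring that makes this work is a credHelpers entry in ~/.docker/config.json that points your registry at the docker-credential-ecr-login binary (the registry host below is taken from the log output above; substitute your own):

```json
{
  "credHelpers": {
    "445523.dkr.ecr.us-west-2.amazonaws.com": "ecr-login"
  }
}
```

With that in place, docker asks the helper for credentials on every pull, and the helper fetches them via the instance's IAM role.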
I'm having trouble setting up Spinnaker with ECR access.
Background: I installed spinnaker using helm on an EKS cluster and I've confirmed that the cluster has the necessary ECR permissions (by manually running ECR commands from within the clouddriver pod). I am following the instructions here to get Spinnaker+ECR set up: https://www.spinnaker.io/setup/install/providers/docker-registry/
Issue: When I run:
hal config provider docker-registry account add my-ecr-registry \
--address $ADDRESS \
--username AWS \
--password-command "aws --region us-west-2 ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 -d | sed 's/^AWS://'"
I get the following output:
+ Get current deployment
Success
- Add the some-ecr-registry account
Failure
Problems in default.provider.dockerRegistry.some-ecr-registry:
- WARNING Resolved Password was empty, missing dependencies for
running password command?
- WARNING You have a supplied a username but no password.
! ERROR Unable to fetch tags from the docker repository: code, 400
Bad Request
? Can the provided user access this repository?
- WARNING None of your supplied repositories contain any tags.
Spinnaker will not be able to deploy any docker images.
? Push some images to your registry.
Problems in halconfig:
- WARNING There is a newer version of Halyard available (1.28.0),
please update when possible
? Run 'sudo apt-get update && sudo apt-get install
spinnaker-halyard -y' to upgrade
- Failed to add account some-ecr-registry for provider
dockerRegistry.
I have confirmed that the aws-cli is installed on the clouddriver pod, and I've confirmed that I can run the password-command directly from the clouddriver pod and it successfully returns a token.
I've also confirmed that if I manually generate an ECR token and run hal config provider docker-registry account add my-ecr-registry --address $ADDRESS --username AWS --password-command "echo $MANUALLY_GENERATED_TOKEN" everything works fine. So there is something specific to the password-command that is going wrong and I'm not sure how to debug this.
One other odd behavior: if I simplify the password command to be: hal config provider docker-registry account add some-ecr-registry --address $ADDRESS --username AWS --repositories code --password-command "aws --region us-west-2 ecr get-authorization-token" , I get an additional piece of output that says "- WARNING Password command returned non 0 return code stderr/stdout was:bash: aws: command not found". This output only appears for this simplified command.
Any advice on how to debug this would be much appreciated.
If, like me, your ECR registry is in another account, then you have to forcibly assume the role for the target account where your registry resides:
passwordCommand: read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< `aws sts assume-role --role-arn arn:aws:iam::<AWS_ACCOUNT>:role/<SPINNAKER ROLE_NAME> --query "[Credentials.AccessKeyId, Credentials.SecretAccessKey, Credentials.SessionToken]" --output text --role-session-name spinnakerManaged-w2`; export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; aws ecr get-authorization-token --region us-west-2 --output text --query 'authorizationData[].authorizationToken' --registry-ids <AWS_ACCOUNT> | base64 -d | sed 's/^AWS://'
Credits to https://github.com/spinnaker/spinnaker/issues/5374#issuecomment-607468678
I also installed Spinnaker, on AKS, and all I did was use an AWS managing user with an IAM policy granting ecr:*; with that I had access to the ECR repositories directly.
I don't think that hal, being Java-based, will execute the bash command in --password-command.
Set up the AWS ECS provider in your Spinnaker deployment.
Use the following AWS IAM policy (SpinnakerManagingPolicy), attached to the AWS managing user, to give access to ECR. Replace the AWS account IDs based on your needs.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*",
"cloudformation:*",
"ecr:*"
],
"Resource": [
"*"
]
},
{
"Action": "sts:AssumeRole",
"Resource": [
"arn:aws:iam::123456789012:role/SpinnakerManagedRoleAccount1",
"arn:aws:iam::101121314157:role/SpinnakerManagedRoleAccount2",
"arn:aws:iam::202122232425:role/SpinnakerManagedRoleAccount3"
],
"Effect": "Allow"
}
]
}
I am trying to mount an S3 bucket on an AWS EC2 instance following these instructions. I was able to install the dependencies via yum, clone the git repository, and then make and install the s3fs tool.
Furthermore, I ensured my AWSACCESSKEYID and AWSSECRETACCESSKEY values were in several locations (because I could not get the tool to work, and searches for an answer suggested placing the file in different locations):
~/.passwd-s3fs
/etc/.passwd-s3fs
~/.bash_profile
For the .passwd-s3fs I have set the permissions as follows.
chmod 600 ~/.passwd-s3fs
chmod 640 /etc/.passwd-s3fs
Additionally, the .passwd-s3fs files have the content as suggested in this format: AWSACCESSKEYID:AWSSECRETACCESSKEY.
I have also logged out and in just to make sure the changes take effect. When I execute this command /usr/bin/s3fs bucketname /mnt, I get the following response.
s3fs: MOUNTPOINT: /mnt permission denied.
When I run the same command with sudo, e.g. sudo /usr/bin/s3fs mybucket /mnt, I get the following message.
s3fs: could not determine how to establish security credentials.
I am using s3fs v1.84 on the following AMI ami-0ff8a91507f77f867 (Amazon Linux AMI 2018.03.0.20180811 x86_64 HVM GP2). From the AWS Console for S3, my bucket's name is NOT mybucket but something just as simple (I am wondering if there's anything special I have to do with naming).
Additionally, my AWS access and secret key pair is generated from the IAM web interface and placed into the admin group (having AdministratorAccess policy) defined below.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
Any ideas on what's going on? Did I miss a step?
After tinkering a bit, I found the following helps.
/usr/bin/s3fs mybucket /mnt -o passwd_file=.passwd-s3fs -o allow_other
Note that I specify the .passwd-s3fs file's location. And also note that I allow others to view the mount. Additionally, I had to modify /etc/fuse.conf to enable user_allow_other.
# mount_max = 1000
user_allow_other
To test, I typed in touch /mnt/README.md and then observed the file in my S3 bucket (web UI).
I am a little disappointed that this problem is not better documented. I would have expected the default home location or /etc to be where the tool looks for the .passwd-s3fs file, but that's not the case. Additionally, sudo (as suggested by a link I did not bookmark) forces the tool to look in ~/home/root, which does not exist.
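To make the mount survive reboots, an /etc/fstab entry along these lines can be used (a sketch: the options mirror the command above, and the bucket name and mount point are placeholders):

```
# bucket  mountpoint  type      options                                          dump pass
mybucket  /mnt        fuse.s3fs _netdev,allow_other,passwd_file=/etc/.passwd-s3fs 0    0
```

The _netdev option delays the mount until networking is up, which matters for a network filesystem like s3fs.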
For me it was a mismatch between the IAM role specified while mounting and the IAM role attached to the EC2 server.
EC2 was launched with role2 and I was mounting with
/usr/local/bin/s3fs -o allow_other mybucket /mnt/s3fs/mybucketfolder -o iam_role='role1'
which did not throw any error but did not mount.
PS: I do not have any access keys or an s3 password file on the EC2 server.
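One way to avoid this kind of mismatch is to let s3fs discover whatever role is attached to the instance rather than naming one. A sketch (iam_role=auto is an s3fs-fuse mount option; the bucket and mount point are placeholders):

```
# Pick up the instance's attached IAM role automatically instead of hard-coding it
/usr/local/bin/s3fs mybucket /mnt/s3fs/mybucketfolder -o iam_role=auto -o allow_other
```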
When launching an EC2 instance, how does one go about using CLI commands from within a user data shell script?
When I SSH into the instance I can run CLI commands and everything works as expected.
I'm assuming the issue is that user data is executed as root, whereas when I SSH into the instance and run the CLI commands I do so as ec2-user.
Considering I have to launch an instance every time I want to test my new user data script (this takes 3 minutes per try), I'd really appreciate not having to guess and check my way through this one.
Any help is appreciated. Thank you.
Your newly launched instance needs access to the commands you're trying to use. I suggest setting up an IAM role and attaching it to the instance; this will save you the setup of credentials etc. Example IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:DescribeTags",
"ec2:CreateTags"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
]
}
Ubuntu Example userdata
#!/bin/bash -x
apt-get update
apt-get install -y awscli # yum install awscli on CentOS based OS
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed s/.$//g)
I_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws_p="$(which aws) --region ${REGION} --output text"
$aws_p ec2 create-tags --resources $I_ID --tags Key=Name,Value=my-test-server --region $REGION
# ............ more stuff related to your deployment ..... #
This will install the awscli on the system, and the instance will tag itself with a test Name tag.
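The region derivation in the script above (trimming the trailing AZ letter with sed) can also be done with plain shell parameter expansion, which avoids the extra process:

```shell
#!/bin/sh
# Drop the final character of an availability-zone name to get the region,
# e.g. us-west-2a -> us-west-2 (equivalent to the sed s/.$// in the user data)
az_to_region() {
  echo "${1%?}"
}

az_to_region us-west-2a   # prints us-west-2
```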
See how to add proper IAM roles