I want to set up an IAM policy that grants permission to change only the RDS master password. I followed this document: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html but I didn't find any action scoped specifically to the RDS master password.
The closest action I see is rds:ModifyDBInstance, but this action grants much more than the requirement. Can anyone suggest an approach?
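For context, the closest I could get is scoping rds:ModifyDBInstance to a single instance; a sketch with placeholder region, account, and instance values (this limits which instance can be modified, not which attributes):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowModifyOneInstance",
      "Effect": "Allow",
      "Action": "rds:ModifyDBInstance",
      "Resource": "arn:aws:rds:us-east-1:123456789012:db:my-db-instance"
    }
  ]
}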
I'm trying to set up the AWS CLI. I downloaded it and everything worked.
Now I want to log in from my PowerShell script:
Set-AWSCredentials -AccessKey key-name -SecretKey key-name
Because I don't have any users at the moment, I had to create one. I gave the user admin rights.
When creating the user, AWS throws this error:
User: arn:aws:sts::37197122623409:assumed-role/voclabs/user2135080=.... is not authorized to perform: iam:CreateUser on resource: arn:aws:iam::371237422423709:user/.... because no identity-based policy allows the iam:CreateUser action
My first thought was that there is a problem with my education account, but I didn't find anything about that. Thanks for your help in advance.
The "voclab" part of the error suggests you are not logged as the account root user but instead assuming a role in an account used for teaching purposes.
This role is probably designed to disallow IAM actions, in order to prevent privilege escalation.
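To confirm which principal you are actually running as, you can ask STS (this assumes the AWS CLI is configured with the same credentials):

aws sts get-caller-identity

The Arn field in the output will show the assumed-role path rather than a root or IAM user ARN.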
Read https://docs.aws.amazon.com/singlesignon/latest/userguide/howtogetcredentials.html to get role credentials for the role you're assuming.
You can't create any IAM roles, policies, or users as a student using a voclabs account. AWS Academy does not allow it, and it's a hard limit that neither you nor your educator can change.
I have set up an RDS proxy for an Aurora DB. I am able to connect to the RDS proxy endpoint but not able to perform any operations.
For example, if I run show processlist; I get the error below:
ERROR 1045 (28000): Database Access denied for user 'admin'@'ip-address' (using password: YES)
Note: I am able to access the RDS endpoint directly and perform all operations.
Thanks in advance!
I encountered this same issue. Turns out it was related to the auto-generated IAM role permissions.
The secrets manager had 2 user accounts added to it (with verified correct credentials), and both were added to the RDS proxy. However, only the first user account worked. The second user account would get a permission denied error.
Checking the CloudWatch logs, I saw a message similar to:
Credentials couldn't be retrieved. The IAM role "arn:aws:iam::ACCOUNT:role/service-role/rds-proxy-role-TIMESTAMP" is not authorized to read the AWS Secrets Manager secret with the ARN "arn:aws:secretsmanager:REGION:ACCOUNT:secret:SECRET_NAME"
When I looked at the IAM policy for the rds-proxy-role-TIMESTAMP role, it had only been granted access to the secret for the first user. This appears to be an issue with the creation of the IAM role when the proxy is set up.
To resolve it, I modified the policy for the rds-proxy-role-TIMESTAMP role to give it access to the ARN for the second user's secret as well. After a few minutes, I was able to log in as the second user.
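For reference, the corrected policy statement ends up looking something like this sketch, with placeholder secret ARNs (the auto-generated policy may also include kms:Decrypt for the KMS key that encrypts the secrets):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": [
        "arn:aws:secretsmanager:us-east-1:123456789012:secret:first-user-secret-AbC123",
        "arn:aws:secretsmanager:us-east-1:123456789012:secret:second-user-secret-DeF456"
      ]
    }
  ]
}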
If you are getting a "Database access denied" error, first check the user's permissions in RDS itself.
If you can connect to RDS directly with those credentials, check that the credentials stored in Secrets Manager are the same.
Then check that your RDS Proxy policy has permission to access all of your Secrets Manager records, as I mention here: https://stackoverflow.com/a/73649818/4642536
Following this article to set up Cognito auth for AWS Elasticsearch.
https://aws.amazon.com/blogs/database/get-started-with-amazon-elasticsearch-service-use-amazon-cognito-for-kibana-access-control/
Getting an error:
Open Distro for Elasticsearch
Missing Role
No roles available for this user, please contact your system administrator.
Does anybody know why I might be getting this?
The crucial missing part was the following:
Navigate to the Elasticsearch domain on your AWS Elasticsearch console page.
Click the "Actions" button -> "Modify master user".
Select "Set IAM ARN as master user" and, in the "IAM ARN" field, add the IAM role ARN arn:aws:iam::<aws_account_id>:role/<my_cognito_auth_role_assigned_to_the_cognito_user_group>.
Click Submit. (An equivalent CLI call is sketched below.)
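If you prefer the CLI, the same change should be possible with update-elasticsearch-domain-config; this is a sketch, assuming the legacy es namespace and placeholder domain name and role ARN:

aws es update-elasticsearch-domain-config \
  --domain-name my-domain \
  --advanced-security-options 'MasterUserOptions={MasterUserARN=arn:aws:iam::<aws_account_id>:role/<my_cognito_auth_role>}'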
If you have enabled Fine-Grained Access Control on your Elasticsearch domain, one of the roles assumed from the Amazon Cognito identity pool must match the IAM role that you specified for the Master User. Considering you have at least two existing IAM roles (one for the Master User and one for more limited users), this guide may help you.
Alternatively, you can configure the master user to be the same as the Cognito authenticated role ARN.
I'm trying to replicate this lab: https://github.com/aws-samples/ec2-spot-montecarlo-workshop, but I keep getting the error "The provided credentials do not have permission to create the service-linked role for EC2 Spot Instances." It seems to fail when it tries to create an instance. Does anyone have an idea why? I made sure to give it a role with all permissions, but that didn't work.
It seems that the credentials you use (IAM user or role) do not have permission to execute the action iam:CreateServiceLinkedRole. The action:
Grants permission to create an IAM role that allows an AWS service to perform actions on your behalf
Please double-check the IAM user and the credentials you use.
When lodging a spot request, there is a service-linked role called AWSServiceRoleForEC2Spot that needs to be created in IAM (if it does not already exist).
Check that the IAM user has the permission (a sketch policy follows the docs link below):
iam:CreateServiceLinkedRole
More in the docs:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#service-linked-roles-spot-instance-requests
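A minimal policy statement granting just this, scoped to the Spot service with the iam:AWSServiceName condition key (the statement name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSpotServiceLinkedRole",
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "arn:aws:iam::*:role/aws-service-role/spot.amazonaws.com/*",
      "Condition": {
        "StringEquals": { "iam:AWSServiceName": "spot.amazonaws.com" }
      }
    }
  ]
}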
I have an HDP cluster on AWS and an S3 bucket in another account; my Hadoop version is Hadoop 3.1.1.3.0.1.0-187.
I want to read from the S3 bucket in the other account, process the data, and then write the result to my own S3 bucket (in the same account as the cluster).
But as per the HDP guide (Here), I can configure keys for only one account, either mine or the other one.
In my case I need to configure keys for two accounts, so how do I do that?
For security reasons, the other account cannot change its bucket policy to add an IAM role created in my account, so I tried to access it like this:
Configured the keys of the other account
Added the IAM role (which has an access policy for my bucket) from my account
But I still got the error below when I tried to write to my account's S3 from Spark:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3
What you need is an EC2 instance profile role. It is an IAM role that is attached to your instance: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
First create a role with permissions that allow S3 access (see the sketch below), then attach that role to your HDP cluster (EC2 Auto Scaling groups and EMR can both do this). No IAM access key configuration is needed on your side, although AWS still does it for you in the background. This is the S3 "outbound" access part.
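A sketch of such a role policy, with a placeholder bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}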
The second step is to set up a bucket policy to allow cross-account access: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
You will need to do this for each bucket in your different accounts. This is basically the "inbound" S3 access permission part; a sketch follows.
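For example, a bucket policy on the external bucket granting the cluster's role read access might look like this (account ID, role name, and bucket name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/hdp-instance-profile-role" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::external-bucket",
        "arn:aws:s3:::external-bucket/*"
      ]
    }
  ]
}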
You will get a 400 if any part of your access (your instance profile role's permission, the S3 bucket ACL, the bucket policy, the public access block setting, etc.) is denied anywhere in the permission chain. There are many more layers on the "inbound" side. So to get things working, if you are not an IAM expert, start with a very open policy (use the '*' wildcard) and then narrow things down.
If I've understood right:
you want your EC2 VMs to access an S3 bucket to which the IAM role doesn't have access
you have a set of AWS credentials for the external S3 bucket (access key and secret key)
HDP3 has a default auth chain of, in order:
per-bucket secrets: fs.s3a.bucket.NAME.access.key, fs.s3a.bucket.NAME.secret.key
config-wide secrets: fs.s3a.access.key, fs.s3a.secret.key
env vars AWS_ACCESS_KEY and AWS_SECRET_KEY
the IAM role (it does an HTTP GET to the 169.something server, which serves up a new set of IAM role credentials at least once an hour)
What you need to try here is to set up per-bucket secrets for only the external source (either in a JCEKS file referenced on all nodes in core-site.xml, or in the spark defaults). For example, if the external bucket was s3a://external, you'd have:
spark.hadoop.fs.s3a.bucket.external.access.key AKAISOMETHING
spark.hadoop.fs.s3a.bucket.external.secret.key SECRETSOMETHING
HDP3/Hadoop 3 can handle more than one secret in the same JCEKS file without problems (HADOOP-14507; my code). Older versions let you put username:secret in the URI, but that was such a security trouble spot (everything logged those URIs because they weren't viewed as sensitive) that the feature has been cut from Hadoop. Stick to a JCEKS file with per-bucket secrets, falling back to the IAM role for your own data.
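For reference, the JCEKS entries can be created with the hadoop credential CLI; this is a sketch with a placeholder provider path and key values:

hadoop credential create fs.s3a.bucket.external.access.key \
  -value AKAISOMETHING \
  -provider jceks://hdfs@namenode/user/admin/s3.jceks
hadoop credential create fs.s3a.bucket.external.secret.key \
  -value SECRETSOMETHING \
  -provider jceks://hdfs@namenode/user/admin/s3.jceks

You would then point hadoop.security.credential.provider.path at that file so the S3A connector can find the secrets.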
Note you can fiddle with the authentication list for ordering and behaviour: if you add the TemporaryAWSCredentialsProvider then it'll support session keys as well, which is often handy.
<property>
<name>fs.s3a.aws.credentials.provider</name>
<value>
org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
</value>
</property>