I have set up an RDS Proxy for an Aurora DB. I can connect to the RDS Proxy endpoint but am not able to perform any operations.
For example, if I run show processlist; I get the error below:
ERROR 1045 (28000): Database Access denied for user 'admin'@'ip-address' (using password: YES)
Note: I am able to access the RDS endpoint directly and perform all operations.
Thanks in advance!
I encountered this same issue. Turns out it was related to the auto-generated IAM role permissions.
Two user accounts had been added to Secrets Manager (with verified correct credentials), and both were added to the RDS Proxy. However, only the first user account worked; the second user account would get a permission denied error.
Checking the CloudWatch logs, I saw a message similar to:
Credentials couldn't be retrieved. The IAM role "arn:aws:iam::ACCOUNT:role/service-role/rds-proxy-role-TIMESTAMP" is not authorized to read the AWS Secrets Manager secret with the ARN "arn:aws:secretsmanager:REGION:ACCOUNT:secret:SECRET_NAME"
When I looked at the IAM policy for the rds-proxy-role-TIMESTAMP role, it had only been granted access to the secret for the first user. This appears to be an issue with the creation of the IAM role when the proxy is set up.
To resolve it, I modified the policy for the rds-proxy-role-TIMESTAMP role to give it access to the ARN for the second user's secret as well. After a few minutes, I was able to log in as the second user.
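For reference, this is roughly what the corrected inline policy update could look like from the CLI. This is only a sketch: the role name, policy name, and secret ARNs are placeholders matching the log message above, and the auto-generated policy may contain additional statements (e.g. kms:Decrypt) that you should keep.
aws iam put-role-policy \
    --role-name rds-proxy-role-TIMESTAMP \
    --policy-name rds-proxy-secrets-access \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": [
          "arn:aws:secretsmanager:REGION:ACCOUNT:secret:FIRST_USER_SECRET",
          "arn:aws:secretsmanager:REGION:ACCOUNT:secret:SECOND_USER_SECRET"
        ]
      }]
    }'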
If you are getting a Database access denied error, first check the user's permissions in RDS itself.
If you can connect to RDS directly with these credentials, check that the credentials stored in Secrets Manager are the same.
Then check whether your RDS Proxy role's policy has permission to access all of your Secrets Manager secrets, as I mention here: https://stackoverflow.com/a/73649818/4642536
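A quick way to confirm what the proxy will actually read from Secrets Manager (the secret name below is a placeholder) is:
aws secretsmanager get-secret-value \
    --secret-id YOUR_PROXY_SECRET_NAME \
    --query SecretString --output text
The returned JSON should contain the same username and password that work against the RDS endpoint directly.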
Related
I'm trying to create an index management policy in OpenSearch 1.3 on AWS using Terraform and the elasticsearch provider from phillbaker, but I always get a 403 Forbidden exception when using an IAM master user. After several tries, I changed to an internal database user and it worked straight away once the domain access policy was open to any user.
These are the things I've tried so far:
Creating an IAM user with programmatic credentials, adding this user to the domain access policy and as a master user for the cluster and using the credentials in the provider (using aws_access_key and aws_secret_access_key parameters, not username and password).
Creating an IAM role with administrator access and adding this role as a master user. Configuring a Cognito user pool and identity pool as the identity provider for the cluster and configuring authenticated users to use the role created before. Configuring the domain access policy to allow anyone to use the cluster.
Creating an internal user from the dashboard and adding this user to the all_access role. Configuring the domain access policy to allow anyone to use the cluster.
In all these cases, it didn't work. I tried the last case after changing the configuration to use an internal database user as master, and I verified both users had the same role mapping configuration. But only the credentials of the one I assigned through the AWS console worked.
I also tried changing the cluster security configuration on AWS so the domain access policy gets replaced with fine-grained access control. But every time I save the changes and go back to the security tab, the domain access policy is still active.
I am following the instructions to get AWS SSO working: https://www.gitpod.io/guides/integrate-aws-cli-ecr
I'm not sure what the AWS_ROLE_NAME Gitpod variable should be. I feel like I'm getting this wrong, because after signing in with:
aws sso login --no-browser
and then aws sts get-caller-identity
I get: An error occurred (ForbiddenException) when calling the GetRoleCredentials operation: No access
I've set it to an IAM role name which should have admin access.
Resolved: AWS_ROLE_NAME needed to be set to AWSPowerUserAccess or another permission set name, which you can find at https://us-east-1.console.aws.amazon.com/iamv2/#/organization/permission-sets
Also, don't forget to go to https://us-east-1.console.aws.amazon.com/iamv2/home#/organization/accounts, click on an account, and assign the SSO user to the account with an appropriate permission set.
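As a sketch, assuming you use the gp CLI inside the workspace, persisting the variable and re-running the login flow looks like this (AWSPowerUserAccess is just an example permission set name):
gp env AWS_ROLE_NAME=AWSPowerUserAccess
aws sso login --no-browser
aws sts get-caller-identity
If get-caller-identity returns your account and assumed-role ARN instead of ForbiddenException, the variable is correct.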
Stupidly enough, I deleted my default AWS IAM user by mistake!
I used it, for example, to do aws s3 sync ...
Now the error I get is:
$ aws s3 sync build/ s3://mybucket.mydomain.com
fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records.
Is there a way to recover?
I think I need instructions on how to create a new user with sufficient permissions so that my local AWS CLI can do aws s3 sync ...
UPDATE: I just created a new user in the AWS console and added a policy (to start with) to list my bucket. The problem is I don't know how to attach my AWS CLI to that new user... :-(
If you are the only person using this AWS Account, then add the AdministratorAccess Policy to your IAM User. That will grant complete access.
Then, in the Security credentials tab of the IAM User click Create access key. Copy the Access Key and Secret Access Key.
On the command line, run aws configure and provide those keys to configure the user.
Test with: aws s3 ls
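The session looks roughly like this (keys redacted; the region is just an example):
$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json
$ aws s3 ls
After that, aws s3 sync build/ s3://mybucket.mydomain.com should run under the new user.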
I am trying to apply the role binding below to grant the Storage Admin Role to a GCP roleset in Vault.
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
roles = [
"roles/storage.admin"
]
}
I want to grant access at the project level, not to a specific bucket, so that the GCP roleset can access and read/write to the Google Container Registry.
When I try to create this roleset in Vault, I get this error:
Error writing data to gcp/roleset/my-roleset: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/gcp/roleset/my-roleset
Code: 400. Errors:
* unable to set policy: googleapi: Error 403: The caller does not have permission
My Vault cluster is running in a GKE cluster which has OAuth Scopes for all Cloud APIs, I am the project owner, and the service account Vault is using has the following permissions:
Cloud KMS CryptoKey Encrypter/Decrypter
Service Account Actor
Service Account Admin
Service Account Key Admin
Service Account Token Creator
Logs Writer
Storage Admin
Storage Object Admin
I have tried giving the service account both Editor and Owner roles, and I still get the same error.
Firstly, am I using the correct resource to create a roleset for the Storage Admin Role at the project level?
Secondly, if so, what could be causing this permission error?
I had previously recreated the cluster and skipped this step:
vault write gcp/config credentials=@credentials.json
Adding the key file fixed this.
There is also a chance that following the steps to create a custom role here and adding that custom role played a part.
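For completeness, the sequence that ended up working was roughly the following; the project name and credentials file are placeholders, and the bindings heredoc is the same binding from the question:
vault write gcp/config credentials=@credentials.json
vault write gcp/roleset/my-roleset \
    project="my-project" \
    secret_type="service_account_key" \
    bindings=-<<EOF
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
  roles = ["roles/storage.admin"]
}
EOF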
I have been given an access key and secret key through IAM, but I am restricted from opening IAM in my AWS console.
After setting the environment variables for the access key, secret key, and region, I executed ./ec2.py --list, which gives a 403 Forbidden error. What could the problem be?
I have seen my policy; its structure is:
Statements:
Effect: allow
Resource: ec2:*
Sorry, I cannot copy my full policy structure. I also run behind a corporate proxy, but I don't think the proxy is the problem, because I am getting a response.
The AWS console can only be reached through a remote desktop gateway and a server. Could this be a problem? I do have my access key ID and secret access key.
It sounds like you either do not have your environment set up correctly, or have incorrect permissions to list metadata about your EC2 instances. If it's the former, you need to export your AWS access key and secret key, e.g.:
export AWS_SECRET_ACCESS_KEY=your-aws-secret-key
export AWS_ACCESS_KEY_ID=your-aws-access-key
If you are referring to permissions on the remote host for making calls to EC2, you can handle this by creating IAM roles that delegate the required rights to instances that belong to the role.
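As a side note on the policy quoted in the question: in a valid IAM policy, ec2:* belongs in the Action field, while Resource takes ARNs or "*". A minimal sketch of a policy that lets the inventory script describe instances (the user and policy names here are hypothetical):
aws iam put-user-policy \
    --user-name inventory-user \
    --policy-name ec2-describe-only \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:Describe*",
        "Resource": "*"
      }]
    }'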
The mistake turned out to be the role assignment and the corporate proxy.