I am running Keycloak in an ECS cluster. I need to give access to an S3 bucket when a user gets registered in Keycloak. Is there any way we can do this?
I'm trying to create an index management policy in Opensearch 1.3 on AWS using Terraform and the elasticsearch provider from phillbaker but I'm always getting a 403 forbidden exception when using an IAM master user. After several tries, I've changed to an internal database user and it worked straightaway once the domain access policy was open for any user.
These are the things I've tried so far:
Creating an IAM user with programmatic credentials, adding this user to the domain access policy and as a master user for the cluster and using the credentials in the provider (using aws_access_key and aws_secret_access_key parameters, not username and password).
Creating an IAM role with administrator access, adding this role as a master user. Configuring a Cognito user pool and identity pool as the identity provider for the cluster and configuring authenticated users to use the role created before. Configuring the domain access policy to allow anyone to use the cluster.
Creating an internal user from the dashboard and adding this user to the all_access role. Configuring the domain access policy to allow anyone to use the cluster.
None of these cases worked. I tried the last one after changing the configuration to use an internal database user as master, and I verified that both users had the same role mapping configuration, but only the credentials of the user I assigned through the AWS console worked.
I also tried changing the cluster security configuration on AWS so that the domain access policy gets replaced with fine-grained access control, but every time I save the changes and go back to the security tab, the domain access policy is still enabled.
I have set up an RDS Proxy for an Aurora DB. I am able to connect to the RDS Proxy endpoint, but I am not able to perform any operations.
For example, if I run show processlist; I get the error below:
ERROR 1045 (28000): Database Access denied for user 'admin'#'ip-address' (using password: YES)
Note: I am able to access the RDS endpoint directly and perform all the operations.
Thanks in advance!
I encountered this same issue. Turns out it was related to the auto-generated IAM role permissions.
The secrets manager had 2 user accounts added to it (with verified correct credentials), and both were added to the RDS proxy. However, only the first user account worked. The second user account would get a permission denied error.
Checking the CloudWatch logs, I saw a message similar to:
Credentials couldn't be retrieved. The IAM role "arn:aws:iam::ACCOUNT:role/service-role/rds-proxy-role-TIMESTAMP" is not authorized to read the AWS Secrets Manager secret with the ARN "arn:aws:secretsmanager:REGION:ACCOUNT:secret:SECRET_NAME"
When I looked at the IAM policy for the rds-proxy-role-TIMESTAMP role, it had only been granted access to the secret for the first user. This appears to be an issue with the creation of the IAM role when the proxy is set up.
To resolve it, I modified the policy for the rds-proxy-role-TIMESTAMP role to give it access to the ARN for the second user's secret as well. After a few minutes, I was able to log in as the second user.
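For reference, here is a minimal boto3 sketch of that fix, assuming placeholder names for the proxy role and the two secrets (the important part is that the inline policy's Resource list covers every secret attached to the proxy):

import json
import boto3

iam = boto3.client("iam")

# Placeholders -- use the auto-generated proxy role and the ARNs of the secrets
# attached to your proxy.
ROLE_NAME = "rds-proxy-role-TIMESTAMP"
SECRET_ARNS = [
    "arn:aws:secretsmanager:REGION:ACCOUNT:secret:first-user-secret",
    "arn:aws:secretsmanager:REGION:ACCOUNT:secret:second-user-secret",
]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # RDS Proxy reads the database credentials from Secrets Manager at connect time.
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": SECRET_ARNS,
        }
    ],
}

# Attach the statement as an inline policy on the proxy role.
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="rds-proxy-read-all-secrets",
    PolicyDocument=json.dumps(policy),
)

If the secrets are encrypted with a customer managed KMS key, the role also needs kms:Decrypt on that key.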
If you are getting a Database access denied error, please check the user's permissions in RDS first.
If you can connect to RDS directly with these credentials, check that the credentials in Secrets Manager are the same.
Then check that your RDS Proxy policy has permission to access all of your Secrets Manager records, as I mention here: https://stackoverflow.com/a/73649818/4642536
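As a quick sanity check, you can read the secret back with boto3 and compare it with the credentials that work against the RDS endpoint directly (the secret name below is a placeholder):

import json
import boto3

secrets = boto3.client("secretsmanager")

# Placeholder -- use the secret that is attached to your RDS Proxy.
resp = secrets.get_secret_value(SecretId="my-rds-proxy-db-credentials")
creds = json.loads(resp["SecretString"])

# Compare this (and the password key) with the credentials you use against the RDS endpoint.
print(creds["username"])

This assumes the secret stores a JSON document with username and password keys, as the secrets generated for RDS do.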
I'm trying to create a botocore session that does not use my local AWS credentials in ~/.aws/credentials. In other words, I want to use a set of "burner" AWS credentials. With that burner session, I want to set up an STS client and, with that client, assume a role in order to access a DynamoDB database. Can someone provide some example code which accomplishes exactly this?
Because if I want my system to go into a production environment, I CANNOT store the AWS credentials on GitHub because AWS will scan for them. I'm trying to implement a workaround so that we don't have to store the ~/.aws/credentials file on GitHub.
When running a task in Amazon ECS, simply assign an IAM Role to the task.
Amazon ECS will then generate temporary credentials for that IAM Role. Any code that uses an AWS SDK (such as boto3 for Python) knows how to access those credentials via the metadata service.
The result is that your code using boto3 will automatically receive credentials that have the permissions associated with the IAM Role assigned to the task.
See: IAM roles for tasks - Amazon Elastic Container Service
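For the DynamoDB example you asked about, a minimal sketch of what the code looks like inside the task (the role ARN below is a placeholder and is only needed if you want to assume a second role on top of the task role):

import boto3

# With an IAM Role assigned to the ECS task, boto3 picks up temporary credentials
# automatically from the container credentials endpoint -- nothing is read from
# ~/.aws/credentials, so there is nothing to commit to GitHub.
dynamodb = boto3.client("dynamodb")
print(dynamodb.list_tables())

# If you additionally need to assume another role (for example, in a different
# account), the task role's credentials are used to call STS.
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dynamodb-access-role",  # placeholder
    RoleSessionName="ecs-task-session",
)
creds = assumed["Credentials"]
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("dynamodb").list_tables())

The same code works locally if you export temporary credentials or configure a profile; the point is that no long-lived keys ever end up in the repository.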
I have created a GCE instance, which by default runs as the default service account. When I checked the default permissions, I found that the service account has all of these access scopes:
https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/trace.append
Now I tried to list all the objects inside the bucket using the command:
gsutil ls gs://mybucketname
It returned this error:
AccessDeniedException: 403 XXXX#developer.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket.
Why am I getting this error even though my service account has devstorage.read_only?
I am very new to GCP, so please let me know what I am missing.
Please read the official documentation regarding the difference between setting the service account level of access with IAM roles and setting the GCE instance's access scopes:
Service account permissions
When you set up an instance to run as a service account, you determine the level of access the service account has by the IAM roles that you grant to the service account. If the service account has no IAM roles, then no API methods can be run by the service account on that instance.
Furthermore, an instance's access scopes determine the default OAuth scopes for requests made through the gcloud tool and client libraries on the instance. As a result, access scopes potentially further limit access to API methods when authenticating through OAuth. However, they do not extend to other authentication protocols like gRPC.
Essentially:
IAM restricts access to APIs based on the IAM roles that are granted to the service account.
Access scopes potentially further limit access to API methods when authenticating through OAuth.
Therefore I would recommend adding an IAM role that includes the storage.objects.list permission to your instance's service account (for example, roles/storage.legacyBucketReader).
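A minimal sketch of that grant using the google-cloud-storage client, with the bucket name and service account address from your question as placeholders. Note that it has to be run with credentials that are allowed to change the bucket's IAM policy (for example, your own user account), not with the restricted instance service account:

from google.cloud import storage

BUCKET = "mybucketname"
MEMBER = "serviceAccount:XXXX@developer.gserviceaccount.com"  # placeholder

client = storage.Client()
bucket = client.bucket(BUCKET)

# Grant a role that includes storage.objects.list on this bucket to the
# instance's default service account.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.legacyBucketReader", "members": {MEMBER}}
)
bucket.set_iam_policy(policy)

# With the IAM role in place (and the devstorage.read_only scope already on the
# instance), listing the bucket should now succeed.
for blob in client.list_blobs(BUCKET):
    print(blob.name)

After the grant takes effect, gsutil ls gs://mybucketname from the instance should work as well.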
I have set up the Codedeploy Agent, however when I run it, I get the error:
Error: HEALT_CONSTRAINTS
By going further , this is the entry in the code deploy log from the EC2 instance:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Cannot reach InstanceService: Aws::S3::Errors::AccessDenied - Access Denied
I ran a simple wget against the bucket, and this is the result:
Connecting to s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)|xxxxxxxxx|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
In contrast, if I use the AWS CLI, I can reach the S3 bucket correctly.
The EC2 instance is in a VPC, it has a role attached with full permissions on S3, and the inbound and outbound firewall settings seem correct. So it is obviously something related to permissions when accessing over HTTPS.
The questions:
Under which credentials does the CodeDeploy agent run?
What permissions or roles have to be set on the S3 bucket?
The EC2 instance's credentials (the instance role) will be used when pulling from S3.
To be clear, the Service Role that CodeDeploy needs does not need S3 permissions. The Service Role allows CodeDeploy to call the Auto Scaling and EC2 APIs to describe the instances, so CodeDeploy knows which instances to deploy to.
That being said, for your AccessDenied issue with S3, there are 2 things you need to check:
The role attached to the EC2 instance(s) has s3:Get* and s3:List* (or more specific) permissions.
The S3 bucket you want to deploy from has a policy attached that allows the EC2 instance role to get the objects (a sketch follows the documentation link below).
Documentation for permissions: http://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-configure.html#instances-ec2-configure-2-verify-instance-profile-permissions
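As a sketch of that second check, a bucket policy that lets the instance role read the deployment bundle could look like this (the bucket name and role ARN are placeholders):

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "my-codedeploy-bucket"                                            # placeholder
INSTANCE_ROLE_ARN = "arn:aws:iam::123456789012:role/my-ec2-instance-role"  # placeholder

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow the instance role to list the bucket and download revisions.
            "Effect": "Allow",
            "Principal": {"AWS": INSTANCE_ROLE_ARN},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::" + BUCKET,
                "arn:aws:s3:::" + BUCKET + "/*",
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))

If the bucket and the instance are in the same account, the instance role's own s3:Get*/s3:List* permissions are usually enough, and the bucket policy mainly matters for cross-account setups or when an existing bucket policy denies access.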
CodeDeploy uses "Service Roles" to access AWS resoures. In the AWS console for CodeDeploy, look for "Service role". Assign the IAM role that you created for CodeDeploy in your application settings.
If you have not created a IAM role for CodeDeploy, do so and then assign it to your CodeDeploy application.