Question
Please help me understand the cause of and solution to the following problem.
Problem
SageMaker Studio domain creation fails due to KMS permissions. The IAM role passed to SageMaker, arn:aws:iam::316725000538:role/SageMaker, has the KMS permissions required per https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html.
Domain creation failed
Unable to create Amazon EFS for domain 'd-1dq5c9rpkswy' because you don't have permissions to use the KMS key 'arn:aws:kms:us-east-2:316725000538:key/1e2dbf9d-daa0-408d-a290-1633b615c54f'. See https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html for required permissions for CreateDomain action.
The error links to the page Amazon SageMaker API Permissions: Actions, Permissions, and Resources Reference, which lists the IAM permissions required for the CreateDomain action. Those permissions have already been attached to the IAM role.
I had the same problem when trying to use the aws/s3 key. I created my own Customer Managed Key (CMK) and it worked just fine.
I think it's related to the AWS-assigned key policy on the aws/s3 key.
This part:
"Condition": {
"StringEquals": {
"kms:CallerAccount": "120455730103",
"kms:ViaService": "s3.us-east-1.amazonaws.com"
}
I don't think SageMaker meets the kms:ViaService condition.
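If you go the customer managed key route, here is a minimal boto3 sketch of creating the domain with a CMK (all names, IDs, and ARNs below are placeholders, not values from the question; the CMK's key policy still has to allow the calling principal to use the key):

import boto3

# Placeholder sketch -- substitute your own names, subnet/VPC IDs, and ARNs.
sm = boto3.client("sagemaker", region_name="us-east-2")

response = sm.create_domain(
    DomainName="my-studio-domain",
    AuthMode="IAM",
    DefaultUserSettings={
        "ExecutionRole": "arn:aws:iam::111122223333:role/SageMakerExecutionRole"
    },
    SubnetIds=["subnet-0abc1234def567890"],
    VpcId="vpc-0abc1234def567890",
    # A customer managed key you own, instead of the AWS managed aws/s3 key.
    KmsKeyId="arn:aws:kms:us-east-2:111122223333:key/your-cmk-id",
)
print(response["DomainArn"])

Because the CMK is a key you own, you control its key policy directly, and the kms:ViaService condition baked into the aws/s3 key policy no longer gets in the way.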
Apart from SageMakerFullAccess, you need to create a new policy and attach it to your user.
Create a new policy with the JSON below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateUserProfile",
                "sagemaker:CreateModel",
                "sagemaker:CreateLabelingJob",
                "sagemaker:CreateFlowDefinition",
                "sagemaker:CreateDomain",
                "sagemaker:CreateAutoMLJob",
                "sagemaker:CreateProcessingJob",
                "sagemaker:CreateTrainingJob",
                "sagemaker:CreateNotebookInstance",
                "sagemaker:CreateCompilationJob",
                "sagemaker:CreateImage",
                "sagemaker:CreateMonitoringSchedule",
                "sagemaker:RenderUiTemplate",
                "sagemaker:UpdateImage",
                "sagemaker:CreateHyperParameterTuningJob"
            ],
            "Resource": "*"
        }
    ]
}
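If you prefer to create and attach this policy from code rather than the console, a rough boto3 sketch would be (the policy name, file name, and user name are hypothetical):

import boto3

iam = boto3.client("iam")

# Assumes the JSON above has been saved locally as sagemaker-create-policy.json.
with open("sagemaker-create-policy.json") as f:
    policy_document = f.read()

created = iam.create_policy(
    PolicyName="SageMakerCreateAccess",
    PolicyDocument=policy_document,
)
iam.attach_user_policy(
    UserName="my-user",
    PolicyArn=created["Policy"]["Arn"],
)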
Related
I am trying to set up CI/CD with AWS + EC2 and am stuck when creating the Deployment Group. The CodeDeploy role has the AWSCodeDeployRole and AWSCodeDeployRoleForECS policies attached, but it still throws an error. I even tried giving it admin rights, but that is still not enough. Am I missing something? Thanks for any help!
You have a role with the permissions CodeDeploy needs to perform the deployment. What you are missing is a trust policy on the role that allows CodeDeploy to assume it.
Go to the IAM console and select the role from the Roles section.
Click Trust relationships.
Click Edit trust relationship.
Add the following trust policy to allow the CodeDeploy service to assume this role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "codedeploy.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
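If you would rather script this than click through the console, the same trust policy can be applied with a small boto3 sketch (the role name below is a placeholder for your CodeDeploy service role):

import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": ["codedeploy.amazonaws.com"]},
            "Action": "sts:AssumeRole",
        }
    ],
}

# "MyCodeDeployServiceRole" is a placeholder -- use the name of your role.
iam.update_assume_role_policy(
    RoleName="MyCodeDeployServiceRole",
    PolicyDocument=json.dumps(trust_policy),
)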
Reference: Create a service role for CodeDeploy
Error message: User "arn:aws:redshift:us-west-2:123456789012:dbuser:my-cluster/user2" is not authorized to assume IAM Role "roleArn"
On the role, I've updated the trust policy to the following, which should allow the assume-role call. What am I messing up here?
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "redshift.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
(The code is valid JSON; I had to cut off the rest.)
I'm interning and new to IAM roles. If the Redshift account also needs a permission update, how do I give it that? I've been stuck on this issue for a while, so thanks for any help you can give.
To be able to use an IAM role with LOAD or UNLOAD operations, one has to:
create an IAM role with a trust relationship with the Redshift service
attach the role to the cluster
You described doing the first step. Have you also attached the role? You can see the attached roles in the AWS console or list them with the CLI:
aws redshift describe-clusters --cluster-identifier my-cluster --query 'Clusters[].IamRoles'
[
    [
        {
            "IamRoleArn": "arn:aws:iam::123456789012:role/my-redshift-role",
            "ApplyStatus": "in-sync"
        }
    ]
]
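If the role does not show up in that list, attaching it can also be scripted; roughly (the cluster identifier and role ARN follow the examples above -- substitute your own):

import boto3

redshift = boto3.client("redshift", region_name="us-west-2")

# Attach the IAM role to the cluster so Redshift can assume it for COPY/UNLOAD.
redshift.modify_cluster_iam_roles(
    ClusterIdentifier="my-cluster",
    AddIamRoles=["arn:aws:iam::123456789012:role/my-redshift-role"],
)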
Looking at the error you're getting,
Error message: User "arn:aws:redshift:us-west-2:123456789012:dbuser:my-cluster/user2" is not authorized to assume IAM Role "roleArn"
it looks like the role in the operation you're issuing is configured incorrectly. To me the error suggests that you're instructing Redshift to assume a role literally called roleArn, which probably does not exist. You should put your actual role ARN there.
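For illustration, here is a sketch of issuing an UNLOAD through the Redshift Data API with the IAM_ROLE clause carrying the full ARN of the attached role (cluster, database, user, bucket, and ARN are all placeholders):

import boto3

rsd = boto3.client("redshift-data", region_name="us-west-2")

rsd.execute_statement(
    ClusterIdentifier="my-cluster",
    Database="dev",
    DbUser="user2",
    Sql=(
        "UNLOAD ('select * from my_table') "
        "TO 's3://my-bucket/unload/' "
        # The IAM_ROLE clause takes the full ARN of the attached role,
        # not the literal string "roleArn" or a bare role name.
        "IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'"
    ),
)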
I am trying to create an IAM user and I want to give that user full S3 access using an IAM role (via console access). I know I can do that using a group or by attaching the S3 full-access policy directly to the user, but I am unable to do it with a role and could not find any help regarding this. The articles I come across describe how to attach IAM policies to EC2 instances, etc.
I managed to create a role and attached a trust policy as below. I also attached the policy "AmazonS3FullAccess" to the role.
But it never works when I log in using the AWS Management Console (browser); the user is still denied all S3 access. The trust policy looks like the one below. The IAM username I am trying to use is s3AdminUserWithRole, and the AWS account ID is 6XXXXXXXXXXX0.
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::6XXXXXXXXXXX0:user/s3AdminUserWithRole",
"arn:aws:iam::6XXXXXXXXXXX0:root"
]
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
Is it not possible to do it this way in the AWS Management Console for a user? Do we have to use only groups, managed policies, or inline policies, and NOT roles, for this? I'm confused about the AWS documentation then.
Based on the comments, the solution is to use the STS service and its AssumeRole API.
For the console, there is the Switch Role option.
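For programmatic access, the flow looks roughly like this sketch (the role ARN and session name are placeholders; the role is assumed to have AmazonS3FullAccess attached and to trust the user, as in the policy above):

import boto3

sts = boto3.client("sts")

# Assume the role and get temporary credentials.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::6XXXXXXXXXXX0:role/s3FullAccessRole",
    RoleSessionName="s3-admin-session",
)["Credentials"]

# Use the temporary credentials to call S3 with the role's permissions.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])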
I suspect this has more to do with IAM roles than SageMaker.
I'm following the example here
Specifically, when it makes this call
tf_estimator.fit('s3://bucket/path/to/training/data')
I get this error
ClientError: An error occurred (AccessDenied) when calling the GetRole operation: User: arn:aws:sts::013772784144:assumed-role/AmazonSageMaker-ExecutionRole-20181022T195630/SageMaker is not authorized to perform: iam:GetRole on resource: role SageMakerRole
My notebook instance has an IAM role attached to it.
That role has the AmazonSageMakerFullAccess policy. It also has a custom policy that looks like this
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
My input files and .py script are in an S3 bucket with the phrase "sagemaker" in its name.
What else am I missing?
If you're running the example code on a SageMaker notebook instance, you can use the execution role, which has AmazonSageMakerFullAccess attached.
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
role = get_execution_role()  # full ARN of the notebook instance's execution role
And you can pass this role when initializing tf_estimator.
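For example, a rough sketch of passing it to the estimator (the entry point, instance type, and framework/Python versions are illustrative only, and constructor argument names differ between SDK versions):

import sagemaker
from sagemaker import get_execution_role
from sagemaker.tensorflow import TensorFlow

role = get_execution_role()

tf_estimator = TensorFlow(
    entry_point="train.py",
    # Passing the full ARN means the SDK does not need iam:GetRole to resolve a bare role name.
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    framework_version="2.11",
    py_version="py39",
)
tf_estimator.fit("s3://bucket/path/to/training/data")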
You can check out the example here for using execution_role with S3 on notebook instance.
This is not an issue with the S3 bucket policy but with IAM. The role that you're using has a policy attached that doesn't give it permission to manage other IAM roles. You'll need to make sure the role you're using can manage (create, read, update) IAM roles.
Hope this helps!
Try using aws configure and make sure you are the expected user. If not, change/update your credentials. This worked for me.
In our environment, all IAM user accounts are assigned a customer-managed policy that grants read-only access to a lot of AWS services. Here's what I want to do:
Migrate a SQL Server 2012 Express database from on-prem to an RDS instance
Limit access to the S3 bucket containing the database files
Here are the requirements according to AWS:
An S3 bucket to store the .bak database file
A role with access to the bucket
The SQLSERVER_BACKUP_RESTORE option attached to the RDS instance
So far, I've done the following:
Created a bucket under the name "test-bucket" (and uploaded the .bak file here)
Created a role under the name "rds-s3-role"
Created a policy under the name "rds-s3-policy" with these settings:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::test-bucket/"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectMetaData",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::test-bucket/*"
        }
    ]
}
Assigned the policy to the role
Gave the AssumeRole permissions to the RDS service to assume the role created above
Created a new option group in RDS with the SQLSERVER_BACKUP_RESTORE option and linked it to my RDS instance
With no restrictions on my S3 bucket, I can perform the restore just fine; however, I can't find a solid way of restricting access to the bucket without hindering the RDS service from doing the restore.
In terms of my attempts to restrict access to the S3 bucket, I found a few posts online recommending using an explicit Deny statement to deny access to all types of principals and grant access based on some conditional statements.
Here's the contents of my bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1486769843194",
    "Statement": [
        {
            "Sid": "Stmt1486769841856",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test-bucket",
                "arn:aws:s3:::test-bucket/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "<root_id>",
                        "<user1_userid>",
                        "<user2_userid>",
                        "<user3_userid>",
                        "<role_roleid>:*"
                    ]
                }
            }
        }
    ]
}
I can confirm the bucket policy does restrict access to only the IAM users I specified, but I am not sure how it treats IAM roles. I used the :* syntax above per a document I found on the AWS forums, where the author stated that ":*" is a catch-all for every principal that assumes the specified role.
The only thing I'm having a problem with is, with this bucket policy in place, when I attempt to do the database restore, I get an access denied error. Has anyone ever done something like this? I've been going at it all day and haven't been able to find a working solution.
The following, admittedly, is guesswork... but reading between the lines of the somewhat difficult to navigate IAM documentation and elsewhere, and taking into account the way I originally interpreted it (incorrectly), I suspect that you are using the role's name rather than the role's ID in the policy.
Role IDs look similar to AWSAccessKeyIds except that they begin with AROA....
For the given role, find RoleId in the output from this:
$ aws iam get-role --role-name ROLE-NAME
Reference: https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
Use caution when creating a broad Deny policy. You can end up denying s3:PutBucketPolicy to yourself, which leaves you in a situation where your policy prevents you from changing the policy... in which case, your only recourse is presumably to persuade AWS support to remove the bucket policy. A safer configuration would be to use this to deny only the object-level permissions.
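As a rough illustration of that safer shape, here is a sketch of a bucket policy that denies only object-level actions (the AIDA.../AROA... identifiers are placeholders for your users' unique IDs and the role ID from aws iam get-role; test-bucket is the bucket from the question):

import json
import boto3

s3 = boto3.client("s3")

narrow_deny = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyObjectAccessExceptAllowedPrincipals",
            "Effect": "Deny",
            "Principal": "*",
            # Deny only object-level actions so s3:PutBucketPolicy stays usable
            # and you cannot lock yourself out of editing the policy.
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::test-bucket/*",
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "AIDAEXAMPLEUSERID1",
                        "AIDAEXAMPLEUSERID2",
                        "AROAEXAMPLEROLEID:*",
                    ]
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="test-bucket", Policy=json.dumps(narrow_deny))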