I am getting an error when I call get_execution_role() from SageMaker in Python.
I have attached the error.
I have added the AmazonSageMakerFullAccess policy to both the role and the user.
get_execution_role() is a helper function used in the Amazon SageMaker Examples GitHub repository.
These examples were made to be executed from the fully managed Jupyter notebooks that Amazon SageMaker provides.
From inside these notebooks, get_execution_role() returns the IAM role that was passed in as part of the notebook creation. That allows the notebook examples to be executed without code changes.
From outside these notebooks, get_execution_role() raises an exception because it does not know which role SageMaker requires.
To solve this issue, pass the IAM role name explicitly instead of calling get_execution_role().
Instead of:
role = get_execution_role()
kmeans = KMeans(role=role,
                train_instance_count=2,
                train_instance_type='ml.c4.8xlarge',
                output_path=output_location,
                k=10,
                data_location=data_location)
you need to do:
role = 'role_name_with_sagemaker_permissions'
kmeans = KMeans(role=role,
                train_instance_count=2,
                train_instance_type='ml.c4.8xlarge',
                output_path=output_location,
                k=10,
                data_location=data_location)
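If you want to resolve the full role ARN programmatically from outside a notebook, here is a minimal sketch using boto3 (the role name 'MySageMakerRole' is a placeholder; use a role of yours that has SageMaker permissions):
import boto3

# Look up the ARN of an existing IAM role by name ('MySageMakerRole' is assumed)
iam = boto3.client('iam')
role = iam.get_role(RoleName='MySageMakerRole')['Role']['Arn']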
I struggled with this for a while, and there are a few different pieces, but I believe these are the steps to solve it (according to this doc).
You must add a role to your AWS config file. Open it in a terminal:
~/.aws/config
Add your own profile:
[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadmin
source_profile = default
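Once the profile exists, a minimal sketch of using it from Python (the profile name is taken from the example above; boto3 will assume the role via STS):
import boto3

# Point boto3 at the assume-role profile defined in ~/.aws/config
session = boto3.Session(profile_name='marketingadmin')
sagemaker_client = session.client('sagemaker')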
Then edit the role's Trust Relationships in the AWS console. Add this and update:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "sagemaker.amazonaws.com",
                "AWS": "arn:aws:iam::XXXXXXX:user/YOURUSERNAME"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Lastly, I clicked the link that says:
Give this link to users who can switch roles in the console
After adding my credentials, it worked.
Thanks for trying out SageMaker!
The exception you are seeing already suggests the reason. The credentials you are using are not role credentials but most likely user credentials.
'User' credentials have the format:
'arn:aws:iam::accid:user/name' as opposed to a role's:
'arn:aws:iam::accid:role/name'
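A quick way to confirm which kind of credentials you are running under (plain boto3, no extra assumptions):
import boto3

# ':user/' in the ARN means IAM user credentials; ':assumed-role/' means role credentials
print(boto3.client('sts').get_caller_identity()['Arn'])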
Hope this helps!
I am trying to use the AWS SageMaker Studio > Get Started > Quick Start, as an IAM user with the AmazonSageMakerFullAccess policy attached, but I am getting the following error:
User: arn:aws:iam::<user-id>:user/<username> is not authorized to perform: sagemaker:CreateDomain on resource: arn:aws:sagemaker:us-west-1:<user-id>:domain/d-<domain-id>
I looked up some documentation on the CreateDomain command, and it looks like it involves EFS storage and VPC configuration, so I have also added the FullAccess policies for these services to my IAM user, but am still getting the same error.
I also tried adding a custom policy as shown here: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createdomain-perms which also seemed to have no effect.
What am I doing wrong here?
The AmazonSageMakerFullAccess policy gives the user access to perform actions such as starting training jobs and deploying endpoints, along with limited access to other services such as ECR, Glue, etc. It is generally attached to a SageMaker notebook instance or Studio.
The user creating the SageMaker domain needs the sagemaker:CreateDomain permission, i.e., add this to your IAM user:
{
    "Sid": "AllowCreateDomain",
    "Effect": "Allow",
    "Action": "sagemaker:CreateDomain",
    "Resource": "*"
}
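If you prefer to do this from code, here is a minimal sketch of attaching that statement as an inline policy (the user name and policy name are assumptions; note the statement must be wrapped in a full policy document):
import boto3
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCreateDomain",
        "Effect": "Allow",
        "Action": "sagemaker:CreateDomain",
        "Resource": "*"
    }]
}
# 'my-user' and the policy name are placeholders
boto3.client('iam').put_user_policy(UserName='my-user',
                                    PolicyName='AllowCreateDomain',
                                    PolicyDocument=json.dumps(policy))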
I work at AWS and my opinions are my own.
I am experimenting with the AWS SDK for Python to access Timestream. I tried the in-house example code from the repository, and I wrote my own code to create a database:
import boto3
from botocore.config import Config
client = boto3.client('timestream-write')
response = client.create_database(DatabaseName='test')
Both the sample code and my own code got the following error:
AccessDeniedException: An error occurred (AccessDeniedException) when
calling the DescribeEndpoints operation: This operation is not
allowed.
I googled a bit, but I could not find any information about it. Thanks!
Timestream is currently only available in a handful of regions. Make sure the boto3 region configuration is set to one of those eligible regions.
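For example, a minimal sketch of pinning the client to a supported region (us-east-1 here is an assumption; check the Timestream documentation for the current list):
import boto3

# Pass the region explicitly instead of relying on the default configuration
client = boto3.client('timestream-write', region_name='us-east-1')
response = client.create_database(DatabaseName='test')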
The credentials that you are using to interact with Timestream should use an IAM role that has either an AWS managed policy or a custom policy that allows you to call timestream:DescribeEndpoints. See this page for an example: https://docs.aws.amazon.com/timestream/latest/developerguide/security_iam_id-based-policy-examples.html
Assuming you configured your environment to use the AWS CLI and ran aws configure, the IAM User that is tied to those credentials should be granted timestream:DescribeEndpoints. https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
You may have gotten this permissions error because you are missing TableName, which is a required parameter.
https://docs.aws.amazon.com/timestream/latest/developerguide/API_CreateTable.html
In your IAM role, add this permission policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "timestream:DescribeEndpoints"
            ],
            "Resource": "*"
        }
    ]
}
DescribeEndpoints is called by the SDK in case you defined a VPC interface endpoint like this:
query-cell2.timestream..amazonaws.com.
I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the AWS CLI on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance it returns: An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to grant permissions to the role or my user properly, so that I can list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS configuration set in your bash_profile or bashrc, the AWS CLI will default to using it instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
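A small sketch for spotting environment variables that may be shadowing your profile (values are truncated to avoid printing secrets):
import os

# Credentials set in the environment take precedence over ~/.aws/credentials
for key, value in os.environ.items():
    if key.startswith('AWS_'):
        print(key, '=', value[:4] + '...')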
It turns out I forgot that I had to do MFA to get an access token to be able to operate on S3. Thank you, everyone, for the responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
If you are wanting to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file.
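A small sketch to check which source boto3 actually resolved credentials from (it works both on EC2 and locally):
import boto3

creds = boto3.Session().get_credentials()
if creds is not None:
    # e.g. 'iam-role' for an instance profile, 'shared-credentials-file' for ~/.aws/credentials
    print(creds.method)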
Create an IAM user with this permission:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketName/*"
        }
    ]
}
Save the Access key ID and Secret access key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only with the CLI but also when calling the S3 API, for example.
The reason for this error can be a wrong configuration of the access permissions to the bucket.
For example, with the setup below you're giving full privileges to perform actions on the bucket's internal objects, BUT not specifying any action on the bucket itself:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
In order to fix this, you should allow the application to access the bucket (first statement item) and to edit all objects inside the bucket (second statement item):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
There are shorter configurations that might solve the problem, but the one specified above also tries to keep the permissions fine-grained.
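A sketch for verifying the fix from Python (the bucket name is a placeholder):
import boto3

# With s3:ListBucket granted on the bucket ARN itself, listing its contents should succeed
s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket='name-of-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'])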
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the AWS environment changed, or perhaps I had done something that was able to bypass this before. Back in September I set the profile with:
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some searching of the documentation, I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked.
I have a Windows machine with CyberDuck, from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command, aws s3 ls, from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for the machine/IP.
I suspect this has more to do with IAM roles than SageMaker.
I'm following the example here
Specifically, when it makes this call
tf_estimator.fit('s3://bucket/path/to/training/data')
I get this error
ClientError: An error occurred (AccessDenied) when calling the GetRole operation: User: arn:aws:sts::013772784144:assumed-role/AmazonSageMaker-ExecutionRole-20181022T195630/SageMaker is not authorized to perform: iam:GetRole on resource: role SageMakerRole
My notebook instance has an IAM role attached to it.
That role has the AmazonSageMakerFullAccess policy. It also has a custom policy that looks like this
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
My input files and my .py script are in an S3 bucket with the phrase "sagemaker" in its name.
What else am I missing?
If you're running the example code on a SageMaker notebook instance, you can use the execution role, which has AmazonSageMakerFullAccess attached.
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
role = get_execution_role()
And you can pass this role when initializing tf_estimator.
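For instance, a hedged sketch of passing the role into the estimator (the entry point, instance settings, and framework version are assumptions based on the v1-era SDK used in the question):
from sagemaker.tensorflow import TensorFlow

# All arguments except role are placeholders
tf_estimator = TensorFlow(entry_point='train.py',
                          role=role,
                          train_instance_count=1,
                          train_instance_type='ml.c4.xlarge',
                          framework_version='1.12.0',
                          py_version='py3')
tf_estimator.fit('s3://bucket/path/to/training/data')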
You can check out the example here for using the execution role with S3 on a notebook instance.
This is not an issue with the S3 bucket policy but with IAM. The role that you're using has a policy attached that doesn't give it permission to manage other IAM roles. You'll need to make sure the role you're using can manage (create, read, update) IAM roles.
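For example, a minimal sketch of granting iam:GetRole to the notebook's execution role with an inline policy (the role name is taken from the error message above; the policy name is an assumption):
import boto3
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:GetRole",
        "Resource": "*"
    }]
}
boto3.client('iam').put_role_policy(
    RoleName='AmazonSageMaker-ExecutionRole-20181022T195630',
    PolicyName='AllowGetRole',  # assumed name
    PolicyDocument=json.dumps(policy))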
Hope this helps!
Try using aws configure and make sure you are the expected user. If not, change or update your credentials. This worked for me.
I'm getting the following error when I try to create a development endpoint for AWS Glue.
{ "service":"AWSGlue",
"statusCode":400,
"errorCode":"ValidationException",
"requestId":"<here goes an UUID>",
"errorMessage":"Role arn:aws:iam::<IAM ID>:role/AWSGlueServiceRole-DefaultRole
should be given assume role permissions for Glue Service.\n",
"type":"AwsServiceError" }
And my role has the following permissions.
AmazonS3FullAccess
AWSGlueServiceNotebookRole
AmazonAthenaFullAccess
AWSGlueServiceRole
CloudWatchLogsReadOnlyAccess
AWSGlueConsoleFullAccess
AWSCloudFormationReadOnlyAccess
Any clues on what I am missing?
In your trust relationship, the trust should be established with glue.amazonaws.com. Your role (AWSGlueServiceRole-DefaultRole) may not have this. To confirm, go to the IAM roles console, select the IAM role AWSGlueServiceRole-DefaultRole, and click on the Trust Relationships tab.
The JSON for this should look like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "glue.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
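If you'd rather fix the trust policy from code, a minimal sketch using boto3 (the role name is the one from the error; adjust as needed):
import boto3
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "glue.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}
# Replace the role's assume-role (trust) policy document
boto3.client('iam').update_assume_role_policy(
    RoleName='AWSGlueServiceRole-DefaultRole',
    PolicyDocument=json.dumps(trust_policy))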
I was tripped up by this as well; the problem is that when you use the console to create a default Glue service role, it ends up creating the IAM role like this:
arn:aws:iam:::role/service-role/AWSGlueServiceRole-DefaultRole
Note the "service-role" in the path.
But then, when you choose that role in the console wizard for setting up a new dev endpoint, it doesn't include "service-role" in the path and looks for a role named like this:
arn:aws:iam:::role/AWSGlueServiceRole-DefaultRole
I think this is just a bug in the console wizard for creating dev endpoints. I got around it by creating a new role that doesn't have "service-role" in the path and then chose that role in the console wizard and was able to successfully create a dev endpoint.
The problem was somehow related to an old role that I had already messed with. I created a brand-new role just for development, following this link and this link, and it worked like a charm.