I'm using AWS DataPipeline to run an aws-cli command that creates an EMR Cluster, but I'm getting the following error when the command runs:
user ... is not authorized to perform: elasticmapreduce:RunJobFlow
I want to attach the right policy to authorize this, but how do I know which policy is needed?
Select a User > Add Permissions > Attach existing policies directly > AmazonElasticMapReduceFullAccess
I suspect you didn't use the roles that are created by default in your account to run pipelines (DataPipelineDefaultResourceRole and DataPipelineDefaultRole). If that's the case, just use those roles and it should work.
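For example, attaching the managed policy from the CLI might look like this (assuming the pipeline runs as an IAM user; the user name below is a placeholder):

# "my-pipeline-user" is a placeholder; substitute the user from your error message
aws iam attach-user-policy \
    --user-name my-pipeline-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonElasticMapReduceFullAccess

If the pipeline runs under a role rather than a user, use aws iam attach-role-policy with --role-name instead.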
I am new to AWS. I am trying to import an OVA to an AMI and use it for an EC2 instance as described here:
One of the commands it asks you to run is
aws ec2 describe-import-image-tasks --import-task-ids import-ami-1234567890abcdef0
When I do this I get
An error occurred (UnauthorizedOperation) when calling the DescribeImportImageTasks operation: You are not authorized to perform this operation.
I believe this means I need to add the appropriate role (with a policy that allows describe-import-image-tasks) to my CLI user.
In the IAM console, I see a search feature to filter policies for a role which I will assign to my user. However, it doesn't seem to have any results for describe-import-image-tasks.
Is there an easy way to determine which policies are needed to run an AWS Cli command?
There is not an easy way. The CLI commands usually (but not always) map to a single IAM action that you need permission to perform. In your case, it appears you need the ec2:DescribeImportImageTasks permission, as listed here.
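If you'd rather not attach a broad managed policy, a minimal inline policy granting just that action could be a sketch like this (EC2 Describe* calls generally don't support resource-level restrictions, hence the wildcard resource):

# "my-cli-user" is a placeholder for your CLI user's name
aws iam put-user-policy \
    --user-name my-cli-user \
    --policy-name AllowDescribeImportImageTasks \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:DescribeImportImageTasks",
        "Resource": "*"
      }]
    }'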
I'm working on an AWS account managed by another team, which uses it only for S3 storage. We have authorization to use SageMaker, and the administrator said “AmazonSageMakerFullAccess” has been given to me.
I'm trying to access SageMaker Studio; for that, AWS asks me to "Setup SageMaker Domain".
I then need a "Default execution role"
If I try to create one, I get the error "User ... is not authorized to perform: iam:CreateRole on resource: ..."
There is an option to use a custom existing one with the format
"arn:aws:iam::YourAccountID:role/yourRole"
but while I have an account ID, I don't know which role to use.
I don't have permission to create roles, and the ones I see in the IAM service don't seem to be related to SageMaker (and I don't have permission to see the details of those roles).
Should the SageMaker setup be done by the administrator, who can create a new role? Or is there a way for me to do it, and if so, where can I find the role I need?
If you don't attach a role, then the very first time you create a SageMaker resource, SageMaker tries to create a default execution role for the service, which is exactly the iam:CreateRole call you aren't authorized to make. Either get permission to create a role, or ask your administrator to create an execution role for SageMaker so that the next time you create a resource you can select that role.
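If the administrator creates it for you, the role needs a trust policy that lets SageMaker assume it, plus the permissions policy. A minimal sketch of what they might run (the role name is illustrative):

# Create a role that the SageMaker service is trusted to assume
aws iam create-role \
    --role-name my-sagemaker-execution-role \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sagemaker.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }]
    }'

# Attach the managed policy mentioned in the question
aws iam attach-role-policy \
    --role-name my-sagemaker-execution-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess

The resulting arn:aws:iam::YourAccountID:role/my-sagemaker-execution-role is what you would paste into the custom existing role field.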
I'm setting up a new skill with ASK CLI v2 for Alexa. I would like to specify an existing role when deploying the new skill instead of letting the command create a new one.
Some background: I created a new skill using the new command and used the hello world template. Then, I ran the deploy command. I am using a corporate account and I don't have permissions to create a new role. I have to use an existing one.
AccessDenied: User: [...] is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::845692260290:role/ask-lambda-skill-sample-nodejs-hello-world
I'm afraid you need to ask your organization to grant your user permission to create Lambda execution roles, as per the documentation here:
AWS permissions
When ASK CLI creates a new Lambda function, it associates the AWSLambdaBasicExecutionRole with the function. For more information, see Manage Permissions: Using an IAM Role (Execution Role) in the AWS Lambda documentation. Make sure the AWS credentials that you configured for use with ASK CLI have permission to create IAM roles and associate permissions.
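As a sketch, the grant your organization's admin could give your user might look like the following; the exact action list and resource scoping are my assumption (creating and wiring up the execution role typically involves iam:CreateRole, iam:AttachRolePolicy, and iam:PassRole), and the account ID comes from your error message:

# "your-cli-user" is a placeholder for your IAM user name
aws iam put-user-policy \
    --user-name your-cli-user \
    --policy-name AllowAskCliRoleCreation \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["iam:CreateRole", "iam:AttachRolePolicy", "iam:PassRole"],
        "Resource": "arn:aws:iam::845692260290:role/ask-*"
      }]
    }'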
I hope this helps.
I am following these instructions in order to send our EKS cluster logs to CloudWatch:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs.html
Since it wasn't working, I ran the suggested command to tail the logs for one of the fluentd pods:
kubectl logs fluentd-cloudwatch-fc7vx -n amazon-cloudwatch
I am seeing this error:
error_class=Aws::CloudWatchLogs::Errors::AccessDeniedException
error="User:
arn:aws:sts::913xxxxx71:assumed-role/eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-1ESBFXHSI966X/i-0937e3xxxx07ea6
is not authorized to perform: logs:DescribeLogGroups on resource:
arn:aws:logs:us-west-2:913617820371:log-group::log-stream:"
I have a role that has the right permissions, but how can I give the role to the arn:aws:sts::913xxxxx71:assumed-role/eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-1ESBFXHSI966X/i-0937e3xxxx07ea6 user?
You need to perform the step that attaches the CloudWatchAgentServerPolicy policy to the cluster worker node role, documented here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-prerequisites.html
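Attaching it from the CLI could look like this, using the node instance role name that appears in your error message:

aws iam attach-role-policy \
    --role-name eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-1ESBFXHSI966X \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy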
To do this, you need to assume the role. This can be done a few different ways:
You can set up an AWS profile and use it to execute commands as a different role (see the config sketch after this list).
You can use a tool like awsudo
One caveat is that the role you are assuming must have a trust relationship set up so that it permits others to assume it. There is an example of this trust relationship setup in the link for (1) above.
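For option (1), a minimal ~/.aws/config sketch (the profile name, role name, and account ID are placeholders):

# ~/.aws/config
[profile logs-admin]
role_arn = arn:aws:iam::123456789012:role/my-logs-role
source_profile = default

# Then run commands as that role, e.g.:
# aws logs describe-log-groups --profile logs-admin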
That being said, you probably shouldn't be doing any of this for your use case.
If your other role is in a state where it needs to be updated to allow assumption, it is going to be much easier and more secure for you to just update the eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-1ESBFXHSI966X role directly with the permissions you need.
Ideally, you can attach the same policy that grants the desired permissions on the other role directly to this one.
I am attempting to launch a Docker container stored in ECR as an AWS batch job. The entrypoint python script of this container attempts to connect to S3 and download a file.
I have attached a role with AmazonS3FullAccess to both the AWSBatchServiceRole in the compute environment and to the compute resources.
This is the error being logged: botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://s3.amazonaws.com/"
There is a chance that these instances are being launched in a custom VPC, not the default VPC. I'm not sure this makes a difference, but maybe that is part of the problem. I do not have appropriate access to check. I have tested this Docker image on an EC2 instance launched in the same VPC and everything works as expected.
You mentioned compute environment and compute resources. Did you add this S3 policy to the Job Role as mentioned here?
After you have created a role and attached a policy to that role, you can run tasks that assume the role. You have several options to do this:
Specify an IAM role for your tasks in the task definition. You can create a new task definition or a new revision of an existing task definition and specify the role you created previously. If you use the console to create your task definition, choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter. For more information, see Creating a Task Definition.
Specify an IAM task role override when running a task. You can specify an IAM task role override when running a task. If you use the console to run your task, choose Advanced Options and then choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter in the overrides JSON object. For more information, see Running Tasks.
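Since this is an AWS Batch job rather than a plain ECS task, the equivalent setting is the job role in the job definition's container properties. A sketch with placeholder names, ARNs, and image URI:

# Every name, ARN, and the image URI below are placeholders
aws batch register-job-definition \
    --job-definition-name my-s3-job \
    --type container \
    --container-properties '{
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
      "resourceRequirements": [
        {"type": "VCPU", "value": "1"},
        {"type": "MEMORY", "value": "2048"}
      ],
      "jobRoleArn": "arn:aws:iam::123456789012:role/my-batch-s3-role"
    }'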