I would like to know how to dig up details when an AWS error is vague. In the example below, I would like to know which permission is missing or which operation was attempted.
Terminated with errors
Service role bundle-release-import-AWSDataPipelineRole has insufficient EC2 permissions.
EC2 Message: AmazonEC2Exception: You are not authorized to perform this operation.
(Service: AmazonEC2; Status Code: 403; Error Code: UnauthorizedOperation;
Request ID: e2614d7b-ef8f-467d-81cf-14ee9c4671c8; Proxy: null)
You can use:
Option 1: Use Athena queries to troubleshoot IAM permission API call failures by searching AWS CloudTrail logs
Option 2: Use the AWS CLI to troubleshoot IAM permission API call failures
For more details on how to implement each option, you can refer to the article below:
https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-iam-permission-errors/
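As a minimal sketch of the CloudTrail approach (the two-hour window and region/credentials setup are my assumptions, not from the article), the following boto3 snippet lists recent EC2 events that failed authorization, which exposes the exact API call and the role that made it:
# Sketch: list recent CloudTrail events against EC2 that failed with an
# authorization error, so the denied API call (e.g. RunInstances) and the
# calling role become visible. The two-hour window is arbitrary; CloudTrail
# events can take ~15 minutes to appear.
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=2)

paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventSource", "AttributeValue": "ec2.amazonaws.com"}],
    StartTime=start,
):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        if detail.get("errorCode") in ("UnauthorizedOperation", "AccessDenied"):
            print(
                detail["eventName"],                 # the EC2 API call that was denied
                detail["userIdentity"].get("arn"),   # who made the call
                detail.get("errorMessage"),
            )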
Related
I'm trying to create an ECS Fargate deployment using a CloudFormation script, but the script fails while creating the ECS cluster with an error saying it is unable to assume the service role. I'm not able to figure out what I'm missing in the script; I have tried many things and none of them seem to work.
Here is the link to the CloudFormation script, as I'm not able to post it here due to the character limit:
ECS CloudFormation script
Here is the error where resource creation fails:
Resource handler returned message: "Invalid request provided: CreateCluster Invalid Request: Unable to assume the service linked role. Please verify that the ECS service linked role exists. (Service: AmazonECS; Status Code: 400; Error Code: InvalidParameterException; Request ID: e08ab312-4bd8-4c21-852f-ae5d49cc5932; Proxy: null)" (RequestToken: a686f226-e1d3-7b4c-13f1-66fa0a516c51, HandlerErrorCode: InvalidRequest
I'm able to get it working if I create an ECS cluster from the AWS console, as that creates the service-linked role. But I want to avoid creating the cluster manually from the console, with everything built up from CloudFormation. I looked over the AWS docs and dug through the internet but couldn't get it working. Can anyone please help me out?
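One hedged sketch of a workaround (role and service names below are standard, but this is not from the original post): create the ECS service-linked role once per account before the stack runs, either with boto3 as below or with an AWS::IAM::ServiceLinkedRole resource in the template itself.
# Sketch: create the ECS service-linked role once per account, before the
# CloudFormation stack tries to create the cluster. Assumes credentials with
# iam:CreateServiceLinkedRole are available.
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")

try:
    iam.create_service_linked_role(AWSServiceName="ecs.amazonaws.com")
    print("Created AWSServiceRoleForECS")
except ClientError as err:
    # IAM rejects the call (InvalidInput, in my experience) if the role already exists,
    # which is fine for this purpose.
    if err.response["Error"]["Code"] != "InvalidInput":
        raise
    print("AWSServiceRoleForECS already exists")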
We have a CodePipeline process set up, and all stages work except the CodeDeploy stage.
Our pipeline stage is as follows:
GenerateChangeSet for CloudFormation
ExecuteChangeSet for CloudFormation
Deploy for CodeDeploy
These stages were set up and configured by CodeStar.
Our GenerateChangeSet stage tries to access S3 to get our BuildArtifact, but fails with the following error:
Action execution failed
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 40P7HSHQGWXSRA72; S3 Extended Request ID: I6hiCC7xx+YmnQMLfUnMzZziLDz/5b8uJWzOqWNZwSiVRCS14Q6UyVfss6q80teO5MAGuR9Xft4=; Proxy: null)
This suggests that CloudFormation cannot access S3, but I've checked and rechecked the policy it uses, and it definitely has the correct permissions for accessing S3.
I'm not quite sure why this error is happening, given that the role policy does indeed grant access to S3. I even went with the nuclear option of granting the role full control over S3 (with a view to reverting once I solved the issue), but to no avail; the error still occurs.
Has anyone encountered this before? Anyone know why it might be happening?
I discovered the issue. The pipeline was reading the CloudFormation template files (template.yml and template-configuration.yml) from the repo, but those had been removed at some point prior, so I was getting Access Denied errors for that missing resource.
I wish the error message were more explicit; it would have saved hours.
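For anyone hitting the same thing: S3 returns 403 rather than 404 for a missing object when the caller lacks s3:ListBucket, so a quick check of whether the object the pipeline is asking for even exists can save time. The bucket and key below are placeholders, not values from this pipeline.
# Sketch: distinguish "object missing" from "genuinely no permission".
# Bucket/key are placeholders; use the artifact bucket and key from the
# pipeline's action configuration. Without s3:ListBucket, a missing key
# also surfaces as 403 Access Denied.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def probe(bucket, key):
    try:
        s3.head_object(Bucket=bucket, Key=key)
        print("object exists and is readable")
    except ClientError as err:
        status = err.response["ResponseMetadata"]["HTTPStatusCode"]
        if status == 404:
            print("object does not exist")
        elif status == 403:
            print("access denied (or the object is missing and ListBucket is not allowed)")
        else:
            raise

probe("my-codepipeline-artifact-bucket", "MyApp/BuildArtif/example-object")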
I am using an AWS Educate account provided by my college instructor to learn about serverless application development in AWS. I am trying to use CloudFront for content delivery network services, but I get the following error. How can this be resolved?
com.amazonaws.services.cloudfront.model.AccessDeniedException: User: arn:aws:sts::127746452845:assumed-role/vocstartsoft/user616202=riwaj.chalise#deerwalk.edu.np is not authorized to perform: cloudfront:ListDistributions with an explicit deny (Service: AmazonCloudFront; Status Code: 403; Error Code: AccessDenied; Request ID: 50ae6438-3196-452a-bcf9-80aaa5cf5e7c; Proxy: null)
How can I resolve this issue? Can my educator provide me access to this service (CloudFront)?
This is because your user doesn't have the privilege to access AWS CloudFront. You can ask your educator to grant it.
There is something called AWS Identity and Access Management (IAM), which helps create users and securely manage each user's or group's access to AWS services and resources.
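If you want to see concretely what IAM decides for a given action, a small sketch using the policy simulator API shows whether cloudfront:ListDistributions is allowed, implicitly denied, or explicitly denied for a role. The role ARN below is a guess derived from the error message's assumed-role ARN, and the call itself requires iam:SimulatePrincipalPolicy, which an Educate account may also lack.
# Sketch: ask the IAM policy simulator what it decides for cloudfront:ListDistributions.
# The role ARN is a placeholder derived from the assumed-role ARN in the error.
import boto3

iam = boto3.client("iam")

response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::127746452845:role/vocstartsoft",  # placeholder role ARN
    ActionNames=["cloudfront:ListDistributions"],
)

for result in response["EvaluationResults"]:
    # EvalDecision is one of: allowed, explicitDeny, implicitDeny
    print(result["EvalActionName"], "->", result["EvalDecision"])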
While using a CloudWatch Events rule with an AWS Lambda function in my AWS Educate Starter account,
I get this error:
User: arn:aws:sts::****:assumed-role/vocstartsoft/*** is not authorized to perform: events:PutRule on resource: arn:aws:event*****:rule/onemin with an explicit deny (Service: AmazonCloudWatchEvents; Status Code: 400; Error Code: AccessDeniedException; Request ID: *)
I have seen a lot of solutions related to this, like adding an IAM permission role and so on,
but none of them work.
Please help me.
An AWS Educate account is very limited. You have an explicit deny, which means that the AWS Educate admins explicitly denied that action.
While working with AWS Educate you will encounter such messages very often.
The only thing you could try is to contact their support, hoping they would relax the restrictions for you. Alternatively, you can get a regular AWS account where you are the admin, and get AWS Educate credits for your own use.
A general list of their restrictions is here. It is important to note that:
All services may have additional restrictions not listed below [in the link provided].
All my calls to spark.sql("") fail with the error in stack trace (1) below.
Update - 2
I have zeroed in on the problem: it is AccessDenied for sts:AssumeRole. Any leads appreciated.
User: arn:aws:sts::00000000000:assumed-role/EMR_EC2_XXXXX_XXXXXX_POLICY/i-3232131232131232 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::00000000000:role/EMR_XXXXXX_XXXXXX_POLICY
When the same location is accessed with
spark.read.parquet("s3a://xxx.xxx-xxx-xx.xxxxx-xxxxx/xxx/")
I am able to read the records.
But the same stack trace (1) resurfaces when the location is accessed with s3: instead of the s3a: scheme:
spark.read.parquet("s3://xxx.xxx-xxx-xx.xxxxx-xxxxx/xxx/")
So how can I configure Spark on EMR to use s3a:, or get s3: working without the access denied error, which I presume happens because it may not be using the appropriate credential chain?
(1)
Caused by: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException: Access denied (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied; Request ID: xxxxx-xxxx-xxxx-xxxx-xxxxxxxx)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1658)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1322)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1072)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:745)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.doInvoke(AWSSecurityTokenServiceClient.java:1369)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1338)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1327)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.executeAssumeRole(AWSSecurityTokenServiceClient.java:488)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.assumeRole(AWSSecurityTokenServiceClient.java:460)
Update - 1
I tried setting the access key and secret key, but it doesn't work:
spark.sparkContext.hadoopConfiguration.set("fs.s3.awsAccessKeyId", "")
spark.sparkContext.hadoopConfiguration.set("fs.s3.awsSecretAccessKey", "")
This stack trace is from the Amazon EMR S3 client, not the Apache ASF one, so it has different settings and error messages.
That error message about "assumed role" hints that you are running in an EC2 VM (yes?), and that the "assumed role" is actually the IAM role the EC2 VM is deployed with. In which case (a) no other credentials are being picked up and (b) that VM doesn't have permission to assume the target role. Fixes: work out the setting to pass the credentials in, increase the EC2 IAM role's rights, or create the VMs with a different role.
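As a hedged sketch of the "increase the EC2 IAM role's rights" fix (the account ID, role names, and policy name below are placeholders lifted from the redacted error message), the instance-profile role needs sts:AssumeRole on the target role, and the target role's trust policy has to allow it back. Note that update_assume_role_policy replaces the whole trust policy, so in practice you would merge with the existing statements rather than overwrite them.
# Sketch: let the EMR EC2 instance-profile role assume the EMRFS target role.
# All names/ARNs are placeholders taken from the redacted error message.
import json

import boto3

iam = boto3.client("iam")

account_id = "000000000000"                    # placeholder account ID
instance_role = "EMR_EC2_XXXXX_XXXXXX_POLICY"  # role the cluster nodes run as
target_role = "EMR_XXXXXX_XXXXXX_POLICY"       # role EMRFS is trying to assume

# 1) The instance-profile role needs permission to call sts:AssumeRole on the target role.
iam.put_role_policy(
    RoleName=instance_role,
    PolicyName="AllowAssumeEmrFsRole",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": f"arn:aws:iam::{account_id}:role/{target_role}",
        }],
    }),
)

# 2) The target role's trust policy must allow the instance-profile role to assume it.
#    Caution: this call replaces the existing trust policy; merge statements in practice.
iam.update_assume_role_policy(
    RoleName=target_role,
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/{instance_role}"},
            "Action": "sts:AssumeRole",
        }],
    }),
)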