We have a CodePipeline process set up, and all stages work except the CodeDeploy stage.
Our pipeline stages are as follows:
GenerateChangeSet for CloudFormation
ExecuteChangeSet for CloudFormation
Deploy for CodeDeploy
These stages were set up and configured by CodeStar.
Our GenerateChangeSet stage tries to access S3 to get our BuildArtifact, but fails with the following error:
Action execution failed
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 40P7HSHQGWXSRA72; S3 Extended Request ID: I6hiCC7xx+YmnQMLfUnMzZziLDz/5b8uJWzOqWNZwSiVRCS14Q6UyVfss6q80teO5MAGuR9Xft4=; Proxy: null)
This suggests that CloudFormation cannot access S3, but I've checked and rechecked the policy it uses, and it definitely has the correct permissions for accessing S3.
I'm not quite sure why this error is happening, given that the role policy does indeed grant access to S3. I even went with the nuclear option of granting the role full control over S3 (with a view to reverting once I'd solved the issue), but to no avail; the error still occurs.
Has anyone encountered this before? Anyone know why it might be happening?
I discovered the issue. The GenerateChangeSet action was reading the CloudFormation template files (template.yml and template-configuration.yml) from the repo, but those had been removed at some point prior, so I was getting Access Denied errors for that resource.
I wish the error message were more explicit; it would have saved hours.
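For anyone hitting the same thing: the GenerateChangeSet action resolves the template paths inside the BuildArtifact zip in S3, so a file that never made it into the artifact surfaces as a generic S3 AccessDenied rather than a "file not found". A minimal sketch of the kind of action configuration CodeStar generates (stack and change-set names here are placeholders, not from the actual project):

# Hypothetical excerpt of the pipeline's deploy stage; only the
# TemplatePath/TemplateConfiguration lines matter for this error.
- Name: GenerateChangeSet
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Provider: CloudFormation
    Version: "1"
  InputArtifacts:
    - Name: BuildArtifact
  Configuration:
    ActionMode: CHANGE_SET_REPLACE
    StackName: my-app-stack            # placeholder
    ChangeSetName: pipeline-changeset  # placeholder
    # Both files are read out of the BuildArtifact zip in S3. If they
    # were deleted from the repo, they are missing from the artifact,
    # and CloudFormation reports the missing S3 object as AccessDenied.
    TemplatePath: BuildArtifact::template.yml
    TemplateConfiguration: BuildArtifact::template-configuration.yml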
Related
I'm trying to create an ECS Fargate deployment using a CloudFormation template, but the template fails during creation of the ECS cluster with an error saying it is unable to assume the service role. I'm not able to figure out what I'm missing; I have tried many things and none of them seem to work.
Here is a link to the CloudFormation template, as I'm not able to post it here due to the character limit:
ECS Cloudformation script
Here is the error where the resource creation fails:
Resource handler returned message: "Invalid request provided: CreateCluster Invalid Request: Unable to assume the service linked role. Please verify that the ECS service linked role exists. (Service: AmazonECS; Status Code: 400; Error Code: InvalidParameterException; Request ID: e08ab312-4bd8-4c21-852f-ae5d49cc5932; Proxy: null)" (RequestToken: a686f226-e1d3-7b4c-13f1-66fa0a516c51, HandlerErrorCode: InvalidRequest
I'm able to get it working if I create an ECS cluster from the AWS console, as that creates the service-linked role. But I want everything built up from CloudFormation, without creating the cluster manually from the console. I looked over the AWS docs and dug around the Internet but couldn't get it working. Can anyone please help me out?
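If it helps, the usual fix is to create the service-linked role as part of the stack and have the cluster depend on it. A minimal sketch (the cluster name is a placeholder; note the role is account-wide, so this resource can only be created once per account):

Resources:
  ECSServiceLinkedRole:
    Type: AWS::IAM::ServiceLinkedRole
    Properties:
      AWSServiceName: ecs.amazonaws.com

  ECSCluster:
    Type: AWS::ECS::Cluster
    DependsOn: ECSServiceLinkedRole
    Properties:
      ClusterName: my-fargate-cluster  # placeholder name

Alternatively, the role can be created once out of band (for example via the IAM CreateServiceLinkedRole API) and left out of the template entirely.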
I encountered the following error when launching an AWS VPC from the command line interface following a quickstart guide here.
Commands used:
git clone https://github.com/aws-quickstart/quickstart-aws-biotech-blueprint-cdk.git
cd quickstart-aws-biotech-blueprint-cdk
npm install
npm run build
cdk bootstrap
npm run build && cdk deploy
Error message:
AwsBiotechBlueprint: creating CloudFormation changeset...
11:38:13 AM | CREATE_FAILED | AWS::IAM::Role | ConfigEnabledPromi...corderRoleFC6F886B
Policy arn:aws:iam::aws:policy/service-role/AWSConfigRole does not exist or is not attachable. (Service: AmazonIdentityManagement; Status Code: 404; Error Code: NoSuchEntity; Request ID: f03b794e-7aa5-4f24-899e-2aefaa6e8fb3; Proxy: null)
I am using an IAM user (not root), and the error appears to indicate that the "AWSConfigRole" policy is not associated with my user. To correct this, I added the "AWSConfigRole" permissions to my user through the IAM management console in my web browser.
Unfortunately, when I rerun the steps in the quickstart, I still encounter the exact same error.
How can I ensure the updated permissions from the IAM management console are properly picked up by the command line interface?
That's because this policy should be set up on an IAM role for the Config service to assume, not on your IAM user. Also, the AWSConfigRole managed policy has long been deprecated, which explains why it can't be attached anymore; you should be using AWS_ConfigRole instead.
It seems the template you are deploying is old and out of date. It's better to open an issue with the template's developers, as they should update it.
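For illustration, a hedged sketch of what the corrected recorder role could look like, with the deprecated AWSConfigRole ARN swapped for AWS_ConfigRole and the policy attached to a role that the Config service assumes (the logical name is a placeholder, not the one from the quickstart):

Resources:
  ConfigRecorderRole:
    Type: AWS::IAM::Role
    Properties:
      # Config assumes this role; the policy is not attached to a user.
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: config.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWS_ConfigRole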
I have an AWS Glue Spark job that fails with the following error:
An error occurred while calling o362.cache. com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: ...; S3 Extended Request ID: ...; Proxy: null), S3 Extended Request ID: ...
I believe the error is thrown at the line where the Spark persist() method is called on a DataFrame. The Glue job is assigned an IAM role that has full S3 access (all locations/operations allowed), yet I'm still getting the S3 exception. I tried setting the "Temporary path" for the Glue job on the AWS Console to a specific S3 bucket with full access. I also tried setting the Spark temporary directory to a specific S3 bucket with full access via:
import pyspark
from pyspark import SparkContext

conf = pyspark.SparkConf()
conf.set('spark.local.dir', 's3://...')  # attempt to point the scratch dir at S3
self.sc = SparkContext(conf=conf)
which didn't help. It's very strange that the job is failing even with full S3 access. I'm not sure what to try next; any help would be really appreciated. Thank you!
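Two things worth noting. First, spark.local.dir is scratch space on local disk, so pointing it at an s3:// URI has no effect here. Second, roles that look like "full S3 access" can still fail in practice when the effective policy misses one of the two ARN forms, or when the objects are SSE-KMS encrypted and the role lacks kms:Decrypt. A hedged sketch of an inline policy for the Glue job role (bucket and role names are placeholders):

GlueS3AccessPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: glue-s3-access
    Roles:
      - !Ref GlueJobRole               # placeholder role resource
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        # ListBucket applies to the bucket ARN itself...
        - Effect: Allow
          Action:
            - s3:ListBucket
          Resource: arn:aws:s3:::my-glue-data-bucket
        # ...while object-level actions need the /* form.
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:PutObject
            - s3:DeleteObject
          Resource: arn:aws:s3:::my-glue-data-bucket/*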
I am trying to create a cross-account deployment using CodePipeline and Terraform. My CodeCommit repo is in account A and the CodePipeline is in account B. I want to create a trigger so that whenever I merge a branch, it starts the pipeline in account B.
I tried using EventBridge, but it only sends a notification, and I also need the source artifacts for the CodeBuild project. So I tried a couple of articles from Medium, such as this one, but I am getting the error below. Currently it doesn't even get to the build stage; it fails before that:
The service role or action role doesn’t have the permissions required to access the Amazon S3 bucket named artifacts-bucket-dev. Update the IAM role permissions, and then try again. Error: Amazon S3:AccessDenied:Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: K86ED6QM; S3 Extended Request ID: BsVDy7vYRyL2mavM+XbZNrWxR+y8Do=; Proxy: null)
I tried updating the role and even gave it administrator permissions, as I just wanted it to work.
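In cross-account setups, admin permissions on the pipeline role alone aren't enough: the artifact bucket must also allow the other account's roles via its bucket policy, and the artifact KMS key needs a matching grant. A hedged sketch of one common missing piece, written here as CloudFormation YAML for consistency with the rest of this page (the same policy can be expressed in Terraform); the account ID is a placeholder:

ArtifactBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: artifacts-bucket-dev
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::111111111111:root  # account A (placeholder)
          Action:
            - s3:GetObject
            - s3:PutObject
            - s3:GetBucketLocation
            - s3:ListBucket
          Resource:
            - arn:aws:s3:::artifacts-bucket-dev
            - arn:aws:s3:::artifacts-bucket-dev/*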
I want to access S3 from a Spring Boot application using Spring Cloud AWS. Access to S3 works fine from my desktop, but when I bundle the app up as a WAR file and deploy it to an EC2 Tomcat container, I get a 403 exception:
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 4F0EBE3A853C6D99)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1078) ~[aws-java-sdk-core-1.9.27.jar:na]
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:726) ~[aws-java-sdk-core-1.9.27.jar:na]
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:461) ~[aws-java-sdk-core-1.9.27.jar:na]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:296) ~[aws-java-sdk-core-1.9.27.jar:na]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3737) ~[aws-java-sdk-s3-1.9.27.jar:na]
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1028) ~[aws-java-sdk-s3-1.9.27.jar:na]
at org.springframework.cloud.aws.core.io.s3.SimpleStorageResource.getObjectMetadata(SimpleStorageResource.java:182) ~[spring-cloud-aws-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.cloud.aws.core.io.s3.SimpleStorageResource.exists(SimpleStorageResource.java:112) ~[spring-cloud-aws-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
I have an application.yml where I define access to AWS:
cloud:
  aws:
    region:
      static: eu-west-1
      auto: false
    credentials:
      accessKey: myaccesskey
      secretKey: somereallylongkeyhere
      instanceProfile: true
This works fine from my desktop. What do I need to do to make this work on EC2? I have tried turning on every permission I can see within S3, but I can't seem to get around this.
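One thing worth checking, as a hedged sketch rather than a definitive fix: with static accessKey/secretKey set alongside instanceProfile, the baked-in keys can take precedence on EC2. Assuming the Tomcat instance has an instance profile role with S3 permissions, the EC2 deployment could drop the static keys and rely on the profile alone:

cloud:
  aws:
    region:
      static: eu-west-1
      auto: false
    credentials:
      # No accessKey/secretKey here: let the SDK pull temporary
      # credentials from the instance metadata service instead.
      instanceProfile: true

The instance profile role then needs s3:GetObject (and related actions) on the bucket in question.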
I had a similar problem where the culprit was an outdated system clock. EC2 instance clocks can sometimes drift, and the IAM API is very sensitive to it. Relevant information can be found here: https://github.com/boto/boto/issues/2885.
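For completeness, a small illustrative snippet (not from the original answer) showing one way to keep an instance's clock in sync via cloud-init user data on Amazon Linux 2, since large clock skew makes signed AWS requests fail:

#cloud-config
packages:
  - chrony
runcmd:
  # chrony on Amazon Linux 2 is preconfigured to use the
  # Amazon Time Sync Service (169.254.169.123).
  - systemctl enable --now chronyd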