I have a build pipeline whose source is AWS CodeCommit. When there is a commit, it runs a build script in AWS CodeBuild that builds the project, builds a Docker image, and pushes it to ECR. The final stage deploys the Docker image to an ECS cluster in a different region, and this fails with the following error:
Replication of artifact 'BuildArtifact' failed: Failed replicating artifact from bucket 1 in region 1 to bucket 2 in region 2: Check source and destination artifact buckets exist and pipeline role has permission to access it.
Bucket 1 does have the artifact in it, but bucket 2 is empty. I have tried giving the CodePipeline role full access to S3, but that didn't change anything, and there is nothing in CloudTrail regarding the error. This question discusses a similar issue, but I believe it is no longer relevant, as the way cross-region deployments work has changed since then. I have tried re-creating the pipeline (with the same parameters), but it still gives the same error. Perhaps there is some additional permission it needs that AWS didn't create.
If anybody could tell me how to fix or debug this issue, it would be appreciated.
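For context, a cross-region pipeline declares one artifact store per region in its definition, and the pipeline role must be able to read from and write to both buckets for replication to work. A minimal sketch of what that `artifactStores` fragment looks like (the pipeline name, bucket names, and regions below are placeholders, not my actual resources):

```python
import json

# Hypothetical fragment of a CodePipeline definition. For cross-region
# actions, CodePipeline expects an "artifactStores" map keyed by region,
# one S3 bucket per region where actions run; the pipeline role needs
# access to every bucket listed here.
pipeline_fragment = {
    "pipeline": {
        "name": "my-cross-region-pipeline",  # placeholder name
        "artifactStores": {
            "us-east-1": {"type": "S3", "location": "codepipeline-us-east-1-artifacts"},
            "eu-west-1": {"type": "S3", "location": "codepipeline-eu-west-1-artifacts"},
        },
    }
}

print(json.dumps(pipeline_fragment, indent=2))
```

If the region of the deploy action has no entry in this map, or the role cannot write to that region's bucket, replication of the artifact fails.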
Thanks,
Adam
Related
I'm having some trouble with CDK Pipelines / CodePipeline in AWS. When I run the pipeline (on a git commit), the Assets section always runs, even if I don't change the files it is building, and every pipeline execution creates an S3 bucket with pipeline assets, so we have loads of S3 buckets. This behaviour, while odd, does seem to work, but it takes a long time to run and doesn't seem right. Is this to be expected, and if not, what may be the issue?
Update
We sometimes see the error message below in the build logs, which may be related, but it doesn't cause a failure:
Failed to store notices in the cache: Error: ENOENT: no such file or directory, open '/root/.cdk/cache/notices.json'
If you create an S3 bucket and then reference that bucket in your CodePipeline, the output will always go to that S3 bucket, and the artifacts will be subdirectories of that specific bucket. That way you still get new build assets, but they are placed inside the same bucket, and you only have one S3 bucket.
I am trying to create an AWS CodePipeline that triggers and pulls files from my GitHub repo whenever there is a commit, then builds and deploys to my ECS using CodeBuild.
I managed to create a CodeBuild project that takes the files, builds a Docker image, and tags and pushes it to ECR, and it works perfectly fine.
BUT - when I try to use this CodeBuild project (which definitely works when run manually) in my CodePipeline, I receive an error. CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request id: MRKXFJDHM0ZJF1F6, host id: C6ds+Gg//r7hxFtBuwwpOPfPPcLbywL5AEWkXixCqfdNbjuFOo4zKEqRx6immShnCNK4VgIyJTs= for primary source and source version arn:aws:s3:::codepipeline-us-east-1-805870671912/segev/SourceArti/Qm4QUD8
I understand it has some connection to the S3 bucket, but I cannot make sense of this error. Policies/roles are fine, I guess.
Any idea why building manually works OK, but when the pipeline triggers the build I get this error?
Make sure the role associated with your CodePipeline has read and write permissions on your artifact S3 bucket, which from the error I can tell is arn:aws:s3:::codepipeline-us-east-1-805870671912
Check the docs about artifacts in CodePipeline:
https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome-introducing-artifacts.html
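As a sketch of those read and write permissions: the policy below is a minimal guess at the S3 statements the pipeline role needs, not the full managed policy AWS generates. The bucket ARN is the one from the error message; adjust it for your account.

```python
import json

# Hypothetical minimal IAM policy for the CodePipeline service role so it
# can read and write objects in its artifact bucket. The ARN comes from
# the error in the question; substitute your own bucket.
artifact_bucket_arn = "arn:aws:s3:::codepipeline-us-east-1-805870671912"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ArtifactBucketReadWrite",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",         # read source/build artifacts
                "s3:GetObjectVersion",
                "s3:GetBucketLocation",
                "s3:PutObject",         # write stage outputs
            ],
            # Bucket-level ARN for GetBucketLocation, object-level for the rest
            "Resource": [artifact_bucket_arn, artifact_bucket_arn + "/*"],
        }
    ],
}

print(json.dumps(policy, indent=2))
```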
Greetings.
I have migrated existing AWS resources from one CloudFormation (CFT) stack to another CFT stack following the link below:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-new-stack.html
After the migration, my new CFT stack's status was "IMPORT_COMPLETE". Then I created an AWS CodePipeline whose source is AWS CodeCommit, and I am trying to deploy to the CloudFormation stack using CodePipeline.
In my CodePipeline I am using the new CFT stack into which I migrated my existing AWS resources; in the same template I updated the code by adding an SQS queue policy, and I uploaded the code to CodeCommit.
So when my AWS CodePipeline is triggered, it fails with an "InternalFailure" error, and it does not give any specific reason for the failure.
I have also checked the CloudTrail logs, and there I can see my pipeline failing after the "UploadArchive" event, which belongs to CodeCommit; it does not move any further. I also tried giving administrator permission to my pipeline service role as well as the CloudFormation role, but the error stays the same.
Later, I observed that when I update my new CloudFormation stack through the AWS CloudFormation console, the stack's status changes to "UPDATE_COMPLETE". After that, if I push updated code to CodeCommit, my pipeline completes successfully.
So I am not sure why my pipeline fails with "InternalFailure" when my stack's status is "IMPORT_COMPLETE". Could you please help me understand whether I am missing any specific step due to which my pipeline fails with this error while my CFT stack is in "IMPORT_COMPLETE" status?
It's a bug in CodePipeline. I'd recommend submitting a ticket to them in the hope that they fix it. I only found this out via support myself.
I have a 3 stage CodePipeline on AWS.
Source: Checks out a specific branch of CodeCommit upon commit (success)
Build: Runs some tests on a Docker image via CodeBuild (success)
Deploy: Performs a deployment to a deployment group (a.k.a. some specifically tagged EC2 instances) via CodeDeploy (failure).
Step 3 fails with
Unable to access the artifact with Amazon S3 object key
'someitem-/BuildArtif/5zyjxoZ' located in the Amazon S3
artifact bucket 'codepipeline-eu-west-1-somerandomnumber'. The provided
role does not have sufficient permissions.
Which role is the latter referring to?
The service role of CodePipeline or the service role of CodeDeploy?
I am almost certain I have attached the appropriate policies to both though ...
Here is a snippet of my CodePipeline service role
Try giving it the "CodeDeploy" policy with full access; it should work.
This could also be due to the actual BuildArtifact not existing. Check the specified path in your S3 bucket to see whether the object actually exists. CodePipeline just gives CodeDeploy a reference to an artifact it thinks has been built and uploaded, but it doesn't really know.
This issue is not related to the roles assigned to either CodePipeline or CodeBuild. If you investigate, you will find that in the S3 bucket 'codepipeline-eu-west-1-somerandomnumber' there is no folder "BuildArtif" and certainly no file "5zyjxoZ".
The issue is that CodeBuild is not sending any artifact to CodeDeploy. Change the 'Input artifacts' for CodeBuild to the output of the Source stage of the pipeline, and the issue will be resolved.
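To illustrate the wiring being described: the artifact names declared between stages have to line up, or a later action is handed a reference to an artifact that was never produced. A sketch of the relevant fragment of a pipeline definition (action and artifact names are illustrative):

```python
# Hypothetical action fragments from a pipeline definition. The Build
# action consumes the Source stage's output and declares its own output;
# the Deploy action's input must reference that same output name.
build_action = {
    "name": "Build",
    "inputArtifacts": [{"name": "SourceArtifact"}],  # output of the Source stage
    "outputArtifacts": [{"name": "BuildArtifact"}],
}
deploy_action = {
    "name": "Deploy",
    "inputArtifacts": [{"name": "BuildArtifact"}],   # must match Build's output
}

# If these names don't match, CodeDeploy gets a reference to an S3 object
# that doesn't exist, producing the "Unable to access the artifact" error.
assert build_action["outputArtifacts"][0]["name"] == deploy_action["inputArtifacts"][0]["name"]
print("artifact names line up")
```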
The error message should be referring to the CodeDeploy role. The CodeDeploy action passes the S3 artifact by reference to CodeDeploy, so the CodeDeploy role needs to have read access to the CodePipeline artifact.
I am trying to set up a Continuous Integration pipeline for my simple AWS Lambda function. To confess, this is my very first time using AWS CodePipeline. I am having trouble setting up the pipeline: the Deploy stage in the pipeline is failing.
I created a CodeBuild project.
Then I created an application in CodeDeploy
Then I created a CodePipeline, choosing GitHub as the source. I selected a repository and branch from GitHub, then linked the pipeline with the CodeDeploy application and the CodeBuild project I had previously created.
After I saved the pipeline and it ran, I got this error.
When I check the error details, it says this
Unable to access the artifact with Amazon S3 object key 'the-goodyard-pipelin/BuildArtif/G12YurC' located in the Amazon S3 artifact bucket 'codepipeline-us-east-1-820116794245'. The provided role does not have sufficient permissions.
Basically, that bucket does not exist either. Isn't the bucket created automatically? What went wrong with my setup?
Update: the bucket does exist after all; the pipeline is just throwing the error. In the bucket, I can see the zip file as well.
Well, the error message looks self-explanatory: the role you assigned to CodeBuild doesn't have enough access to S3.
Go to CodeBuild -> Build projects -> choose your project -> click the 'Build Details' tab.
You will see a 'Service Role' ARN; clicking it will take you to that IAM role. (If you are not an admin for that account, you may not have enough permissions to see IAM, as it is a security-critical service, so check this with the admin.)
Check the policies for that role, and check whether they allow the action s3:GetObject on the resource: your bucket.
If they don't, you need to add it. Use the visual editor, select S3 as the service, add Get* as the action, and add your S3 bucket as the resource.
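The check described above can be sketched in code. The helper below is hypothetical (it is not an AWS API, and its wildcard matching is deliberately naive, handling only trailing-`*` patterns like `Get*`), but it shows what "does this policy allow s3:GetObject on my bucket" means, using the bucket and object key from the error in the question:

```python
# Hypothetical helper, not an AWS API: checks whether an IAM policy
# document has an Allow statement covering a given action and resource.
# Matching is naive (exact match or trailing-* prefix wildcard only).
def allows(policy, action, resource):
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        action_ok = any(a == action or (a.endswith("*") and action.startswith(a[:-1]))
                        for a in actions)
        resource_ok = any(r == resource or (r.endswith("*") and resource.startswith(r[:-1]))
                          for r in resources)
        if action_ok and resource_ok:
            return True
    return False

# A policy like the one built in the visual editor: Get* on the bucket's objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:Get*"],
        "Resource": ["arn:aws:s3:::codepipeline-us-east-1-820116794245/*"],
    }],
}

print(allows(policy, "s3:GetObject",
             "arn:aws:s3:::codepipeline-us-east-1-820116794245/the-goodyard-pipelin/BuildArtif/G12YurC"))
# → True
```

Real IAM evaluation also considers Deny statements, conditions, and full wildcard semantics, so treat this only as an illustration of the Get*-on-your-bucket rule.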