I have a cross-account pipeline running in a CI account that deploys resources via CloudFormation into another account, DEV.
After deploying I save the artifact outputs as a JSON file and want to access it in another pipeline action via CodeBuild.
CodeBuild fails in the DOWNLOAD_SOURCE phase with the following message:
CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request
id: 123456789, host id: xxxxx/yyyy/zzzz/xxxx= for primary source and
source version arn:aws:s3:::my-bucket/my-pipeline/DeployArti/XcUNqOP
The problem is likely that CloudFormation, when executed in a different account, encrypts the artifacts with a different key than the pipeline itself.
Is it possible to give CloudFormation an explicit KMS key to encrypt the artifacts with, or is there any other way to access those artifacts back in the pipeline?
Everything works when executed from within a single account.
Here is my code snippet (deployed in the CI account):
MyCodeBuild:
  Type: AWS::CodeBuild::Project
  Properties:
    Artifacts:
      Type: CODEPIPELINE
    Environment: ...
    Name: !Sub "my-codebuild"
    ServiceRole: !Ref CodeBuildRole
    EncryptionKey: !GetAtt KMSKey.Arn
    Source:
      Type: CODEPIPELINE
      BuildSpec: ...
CrossAccountCodePipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: "my-pipeline"
    RoleArn: !GetAtt CodePipelineRole.Arn
    Stages:
      - Name: Source
        ...
      - Name: StagingDev
        Actions:
          - Name: create-stack-in-DEV-account
            InputArtifacts:
              - Name: SourceArtifact
            OutputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Version: "1"
              Provider: CloudFormation
            Configuration:
              StackName: "my-dev-stack"
              ChangeSetName: !Sub "my-changeset"
              ActionMode: CREATE_UPDATE
              Capabilities: CAPABILITY_NAMED_IAM
              # this is the artifact I want to access from the next action
              # within this CI account pipeline
              OutputFileName: "my-DEV-output.json"
              TemplatePath: !Sub "SourceArtifact::stack/my-stack.yml"
              RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
            RunOrder: 1
          - Name: process-DEV-outputs
            InputArtifacts:
              - Name: DeployArtifact
            ActionTypeId:
              Category: Build
              Owner: AWS
              Version: "1"
              Provider: CodeBuild
            Configuration:
              ProjectName: !Ref MyCodeBuild
            RunOrder: 2
    ArtifactStore:
      Type: S3
      Location: !Ref S3ArtifactBucket
      EncryptionKey:
        Id: !GetAtt KMSKey.Arn
        Type: KMS
CloudFormation generates the output artifact, zips it, and then uploads the file to S3.
It does not add an ACL granting access to the bucket owner, so you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.
The workaround is to have one more action in your pipeline immediately after the CloudFormation action, for example a Lambda function that can assume the target account role and update the object ACL, e.g. to bucket-owner-full-control.
mockora's answer is correct. Here is an example Lambda function in Python that fixes the issue, which you can configure as an Invoke action immediately after your cross-account CloudFormation deployment.
In this example, you set the Lambda Invoke action's user parameters to the ARN of the role you want the Lambda function to assume in the remote account to fix the S3 object ACLs. Obviously your Lambda function will need sts:AssumeRole permissions for that role, and the remote account role will need s3:PutObjectAcl permissions on the pipeline bucket artifact(s).
import os
import logging, datetime, json
import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# X-Ray
patch_all()

# Configure logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(os.environ.get('LOG_LEVEL', 'INFO'))

def format_json(data):
    return json.dumps(data, default=lambda d: d.isoformat() if isinstance(d, datetime.datetime) else str(d))

# Boto3 clients
client = boto3.client
codepipeline = client('codepipeline')
sts = client('sts')

# S3 Object ACLs Handler
def s3_acl_handler(event, context):
    log.info(f'Received event: {format_json(event)}')
    # Get job
    jobId = event['CodePipeline.job']['id']
    jobData = event['CodePipeline.job']['data']
    # Ensure we return a success or failure result
    try:
        # Assume IAM role from user parameters
        credentials = sts.assume_role(
            RoleArn=jobData['actionConfiguration']['configuration']['UserParameters'],
            RoleSessionName='codepipeline',
            DurationSeconds=900
        )['Credentials']
        # Create S3 client from assumed role credentials
        s3 = client('s3',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
        # Set S3 object ACL for each input artifact
        for inputArtifact in jobData['inputArtifacts']:
            s3.put_object_acl(
                ACL='bucket-owner-full-control',
                Bucket=inputArtifact['location']['s3Location']['bucketName'],
                Key=inputArtifact['location']['s3Location']['objectKey']
            )
        codepipeline.put_job_success_result(jobId=jobId)
    except Exception as e:
        logging.exception('An exception occurred')
        codepipeline.put_job_failure_result(
            jobId=jobId,
            failureDetails={'type': 'JobFailed', 'message': getattr(e, 'message', repr(e))}
        )
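For completeness, here is a rough sketch of how such an Invoke action could be wired into the pipeline above; the function name and RunOrder placement are assumptions, not part of the original answer:

- Name: fix-artifact-acl
  InputArtifacts:
    - Name: DeployArtifact
  ActionTypeId:
    Category: Invoke
    Owner: AWS
    Version: "1"
    Provider: Lambda
  Configuration:
    # Hypothetical name of the Lambda function shown above
    FunctionName: s3-acl-fixer
    # Role in the DEV account that the function assumes to call s3:PutObjectAcl
    UserParameters: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
  RunOrder: 2

With this in place, the process-DEV-outputs CodeBuild action would move to RunOrder: 3 so it only runs after the ACLs have been fixed.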
I've been using CodePipeline for cross account deployments for a couple of years now. I even have a GitHub project around simplifying the process using organizations. There are a couple of key elements to it.
Make sure your S3 bucket is using a CMK, not the default encryption key.
Make sure you grant access to that key to the accounts to which you are deploying. When you have a CloudFormation template, for example, that runs on a different account than where the template lives, the role that is being used on that account needs to have permissions to access the key (and the S3 bucket).
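As a minimal sketch of that grant (the DEV account ID is a placeholder, and a real key policy also needs the usual statement giving the key-owning account admin access), the CMK's key policy would contain something like:

KMSKey:
  Type: AWS::KMS::Key
  Properties:
    KeyPolicy:
      Version: "2012-10-17"
      Statement:
        # ...plus the standard statement granting the key-owning account admin access
        - Sid: AllowDeployAccountUseOfTheKey
          Effect: Allow
          Principal:
            AWS: arn:aws:iam::222222222222:root   # DEV account ID (placeholder)
          Action:
            - kms:Encrypt
            - kms:Decrypt
            - kms:ReEncrypt*
            - kms:GenerateDataKey*
            - kms:DescribeKey
          Resource: "*"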
It's certainly more complex than that, but at no point do I run a Lambda to change the object owner of the artifacts. The AWS guide "Create a pipeline in CodePipeline that uses resources from another AWS account" has detailed information on what you need to do to make it work.
CloudFormation should use the KMS encryption key provided in the artifact store definition of your pipeline: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactStore.html#CodePipeline-Type-ArtifactStore-encryptionKey
Therefore, so long as you give it a custom key there and allow the other account to use that key too, it should work.
This is mostly covered in this doc: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
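The other half lives in the DEV account: the role used over there must be allowed to read and write the encrypted artifacts. A minimal sketch (role, bucket, and key ARNs are placeholders, not from the question) of an inline policy on that role:

DevCrossAccountRolePolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: pipeline-artifact-access
    Roles:
      - !Ref DevCrossAccountRole   # hypothetical role resource in the DEV account
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        # Access to the pipeline's artifact bucket (placeholder name)
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:PutObject
            - s3:GetBucketLocation
          Resource:
            - arn:aws:s3:::my-artifact-bucket
            - arn:aws:s3:::my-artifact-bucket/*
        # Use of the pipeline's CMK (placeholder ARN)
        - Effect: Allow
          Action:
            - kms:Decrypt
            - kms:DescribeKey
            - kms:GenerateDataKey*
          Resource: arn:aws:kms:us-east-1:111111111111:key/placeholder-key-id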
Related
I am trying to deploy a Lambda function with the Serverless Framework.
I've added my admin credentials to the AWS CLI,
and I am getting this error message every time I try to deploy:
Warning: Not authorized to perform: lambda:GetFunction for at least one of the lambda functions. Deployment will not be skipped even if service files did not change.
Error:
CREATE_FAILED: HelloLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "null (Service: Lambda, Status Code: 403, Request ID: ********)" (RequestToken: ********, HandlerErrorCode: GeneralServiceException)
I've also removed everything from my project and from the YML file and nothing worked. Here is my serverless.yml:
service: test
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs12.x
  iam:
    role:
      statements:
        - Effect: "Allow"
          Action:
            - lambda:*
            - lambda:InvokeFunction
            - lambda:GetFunction
            - lambda:GetFunctionConfiguration
          Resource: "*"

functions:
  hello:
    handler: handler.hello
Deployments default to the us-east-1 region and use the default profile set on the machine where the serverless command is run. Perhaps you don't have permission to deploy in that region, or Serverless is using a different profile than intended (e.g. if I run serverless from an EC2 instance and log in separately, it would still use the default profile, i.e. the EC2 instance profile).
Can you update your serverless.yml file to include the region as well?
provider:
  name: aws
  runtime: nodejs12.x
  region: <region_id>
  profile: <profile name, if not default>
When I tried to create a Lambda function manually from the AWS console, I found that I had no permission to view or create any Lambda function.
After that I found that my account had been suspended due to behaviour of mine that is not acceptable under AWS policy.
I followed the steps the support team sent me, and then my account was back and everything worked fine.
I am using AWS CDK to deploy a CodePipeline. It also has a notification rule which notifies when the pipeline fails. I need to put the CodePipeline job URL in the notification message so that people can open the pipeline easily.
In CloudFormation, I have to use the configuration below to compute the URL:
Targets:
  - Arn: !Ref SNSTopicNotification
    Id: piplineID
    InputTransformer:
      InputPathsMap:
        pipeline: "$.detail.pipeline"
        executionId: "$.detail.execution-id"
        region: "$.region"
      InputTemplate: !Sub |
        "Pipeline <pipeline> failed"
        "https://<region>.console.aws.amazon.com/codesuite/codepipeline/pipelines/<pipeline>/executions/<executionId>/timeline?region=<region>"
The key is using $.detail.xxx to reference the value at runtime. How can I achieve this in CDK?
Have been trying to setup an AWS pipeline following the tutorial here: https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
But the pipeline continuously fails with the below error logs:
Here are some of the actions I have already tried:
Granted full S3 access to the "cfn-lambda-pipeline" role associated with CloudFormation and to the CodePipeline service role.
Allowed public ACL access to the S3 bucket.
Below is my buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
  build:
    commands:
      - npm install
      - export BUCKET=xx-test
      - aws cloudformation package --template-file template.yaml --s3-bucket $BUCKET --output-template-file outputtemplate.yml
artifacts:
  type: zip
  files:
    - template.yml
    - outputtemplate.yml
Below is my template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  helloWorld
  DZ Bank API Gateway connectivity helloWorld

Globals:
  Function:
    Timeout: 3

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
The error is actually related to CodeBuild, not CodePipeline. It seems like the service role attached to CodeBuild does not have the required permissions.
From the console you can find the attached service role by performing the following:
Go to the CodeBuild console
Click "Build Projects" from the menu on the left hand side
Click the radio button next to the build project you're using, then on the top menu click "Edit" and select the "Edit Source" option.
At the bottom of the page will be a section titled "Service role permissions" with the ARN below it.
This IAM role will need to be granted the permissions it requires (in your case "s3:PutObject") if they are not already there.
AWS provides a full policy in the Create a CodeBuild service role documentation.
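As a rough illustration (the bucket name is taken from the buildspec above; everything else is an assumption, not the full AWS-provided policy), the statement to add to that service role could look like:

# Hypothetical statement for the CodeBuild service role; grants access to the
# bucket used by `aws cloudformation package` in the buildspec above.
- Effect: Allow
  Action:
    - s3:PutObject
    - s3:GetObject
    - s3:GetBucketLocation
  Resource:
    - arn:aws:s3:::xx-test
    - arn:aws:s3:::xx-test/*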
"cfn-lambda-pipeline" role associated with Cloud Formation and Code Pipeline Service Role.
The S3 permissions should be associated with CodeBuild (CB), because CB is what runs buildspec.yml; thus CB needs to be able to access the S3 bucket.
According to the Update the build stage role section of the linked tutorial, AmazonS3FullAccess should be added to the codebuild-lamba-pipeline-build-service-role role, not to cfn-lambda-pipeline nor to CodePipeline's role.
I am trying to deploy an AWS Lambda function that gets triggered when an AVRO file is written to an existing S3 bucket.
My serverless.yml configuration is as follows:
service: braze-lambdas

provider:
  name: aws
  runtime: python3.7
  region: us-west-1
  role: arn:aws:iam::<account_id>:role/<role_name>
  stage: dev
  deploymentBucket:
    name: serverless-framework-dev-us-west-1
    serverSideEncryption: AES256

functions:
  hello:
    handler: handler.hello
    events:
      - s3:
          bucket: <company>-dev-ec2-us-west-2
          existing: true
          events: s3:ObjectCreated:*
          rules:
            - prefix: gaurav/lambdas/123/
            - suffix: .avro
When I run serverless deploy, I get the following error:
ServerlessError: An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::<account_id>:assumed-role/serverless-framework-dev/jenkins_braze_lambdas_deploy is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH.
I see some mentions of Serverless needing iam:CreateRole because of how CloudFormation works, but can anyone confirm whether that is the only solution if I want to use existing: true? Is there another way around it, other than using the old Serverless plugin that was used before the framework added support for the existing: true configuration?
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that Serverless will try to create a new IAM role every time I try to deploy the Lambda function?
I've just encountered this and overcome it.
I also have a Lambda for which I want to attach an S3 event to an already existing bucket.
My place of work has recently tightened up AWS account security through the use of permission boundaries.
So I've encountered a very similar error during deployment:
Serverless Error ---------------------------------------
An error occurred: IamRoleCustomResourcesLambdaExecution - API: iam:CreateRole User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/xx-crossaccount-xx/aws-sdk-js-1600789080576 is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/my-existing-bucket-IamRoleCustomResourcesLambdaExec-LS075CH394GN.
If you read Using existing buckets on the serverless site, it says
NOTE: Using the existing config will add an additional Lambda function and IAM Role to your stack. The Lambda function backs-up the Custom S3 Resource which is used to support existing S3 buckets.
In my case I needed to further customise this extra role that serverless creates so that it is also assigned the permission boundary my employer has defined should exist on all roles. This happens in the resources: section.
If your employer is using permission boundaries, you'll obviously need to know the correct ARN to use.
resources:
  Resources:
    IamRoleCustomResourcesLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        PermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
Some info on the serverless Resources config
Have a look at your own serverless.yaml; you may already have a permission boundary defined in the provider section. If so, you'll find it under rolePermissionsBoundary, which was added in (I think) version 1.64 of Serverless:
provider:
  rolePermissionsBoundary: arn:aws:iam::XXXXXXXXXXXX:policy/xxxxxxxxxxxx-global-boundary
If so, you should be able to use that ARN in the resources: sample I've posted here.
For testing purposes we can use:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action: "*"
      Resource: "*"
For running sls deploy, I would suggest you use a role/user/policy with Administrator privileges.
If you're restricted due to your InfoSec team or the like, then I suggest you have your InfoSec team have a look at docs for "AWS IAM Permission Requirements for Serverless Framework Deploy." Here's a good link discussing it: https://github.com/serverless/serverless/issues/1439. At the very least, they should add iam:CreateRole and that can get you unblocked for today.
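As an illustrative sketch (the actions beyond iam:CreateRole and the resource scoping are assumptions about what a typical Serverless deployment needs, not taken from that issue), the addition to the deploying user's or role's policy could look like:

# Hypothetical statement for the principal that runs `sls deploy`;
# iam:CreateRole alone addresses the error above, the rest is commonly
# needed later for stack updates and removal.
- Effect: Allow
  Action:
    - iam:CreateRole
    - iam:GetRole
    - iam:PutRolePolicy
    - iam:DeleteRolePolicy
    - iam:DeleteRole
    - iam:PassRole
  Resource: arn:aws:iam::<account_id>:role/braze-lambdas-*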
Now I will address your individual questions:
can anyone confirm if that is the only solution if I want to use existing: true
Apples and oranges. Your S3 configuration has nothing to do with your error message. iam:CreateRole must be added to the policy of whatever/whoever is doing sls deploy.
Also, what is 1M5QQI6P2ZYUH in arn:aws:iam::<account_id>:role/braze-lambdas-dev-IamRoleCustomResourcesLambdaExec-1M5QQI6P2ZYUH? Is it a random identifier? Does this mean that serverless will try to create a new role every time I try to deploy the function?
Yes, it is a random identifier
No, sls will not create a new role every time. This unique ID is cached and re-used for updates to an existing stack.
If a stack is destroyed and recreated, it will generate a new unique ID.
I want to attach an existing role to my serverless.yml file. I have created a role in the AWS console, and my code works fine when I test it in the console, but when I try to test it with the HTTP endpoint it gives me the following:
{"message": "Internal server error"}
I think this is because I did not specify any role in the serverless.yml file, for the simple reason that I don't know how to do it.
Here is my serverless.yml file:
Resources:
  ec2-dev-instance-status:
    Properties:
      Path: "arn:aws:iam::119906431229:role/lambda-ec2-describe-status"
      RoleName: lambda-ec2-describe-status
    Type: "AWS::IAM::Role"

functions:
  instance-status:
    description: "Status ec2 instances"
    events:
      - http:
          method: get
          path: users/create
    handler: handler.instance_status
    role: "arn:aws:iam::119906431229:role/lambda-ec2-describe-status"

provider:
  name: aws
  region: us-east-1
  runtime: python2.7
  stage: dev

resources: ~

service: ec2
Please help.
Thank you.
According to the documentation, there are a few ways to attach existing roles to a function (or the entire stack).
Role defined as a Serverless resource
resources:
  Resources:
    myCustRole0:
      Type: AWS::IAM::Role
      # etc etc

functions:
  func0:
    role: myCustRole0
Role defined outside of the Serverless stack
functions:
  func0:
    role: arn:aws:iam::0123456789:role//my/default/path/roleInMyAccount
Note that the role you use must have additional permissions to log to CloudWatch, etc., otherwise you won't get any logging; a rough sketch of those permissions follows.
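A minimal sketch of those logging permissions (the resource scoping is an assumption; many setups simply attach the managed AWSLambdaBasicExecutionRole policy instead):

# Hypothetical statement allowing the function to write its CloudWatch logs
- Effect: Allow
  Action:
    - logs:CreateLogGroup
    - logs:CreateLogStream
    - logs:PutLogEvents
  Resource: arn:aws:logs:*:*:log-group:/aws/lambda/*:*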