I have two policies, one for S3 and one for the Kinesis stream; the Kinesis policy includes DescribeStream. The S3 policy works fine, but I am getting the error below with the Kinesis policy.
Resources:
S3
KinesisStream
Firehose
Role:
FirehoseRole
Policies:
S3 policy with the following permissions:
- 's3:AbortMultipartUpload'
- 's3:GetBucketLocation'
- 's3:GetObject'
- 's3:ListBucket'
- 's3:ListBucketMultipartUploads'
- 's3:PutObject'
Kinesis Policy with the following permissions:
- 'kinesis:PutRecord'
- 'kinesis:DescribeStreamSummary'
- 'kinesis:PutRecords'
- 'kinesis:GetShardIterator'
- 'kinesis:GetRecords'
- 'kinesis:DescribeStream'
Error:
The role (firehoseRole) is not authorized to perform DescribeStream on MyKinesisStream.
CloudFormation template:
Resources:
S3Bucket:
Type: AWS::S3::Bucket
Properties:
VersioningConfiguration:
Status: Enabled
firehoseRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Sid: ''
Effect: Allow
Principal:
Service: firehose.amazonaws.com
Action: 'sts:AssumeRole'
Condition:
StringEquals:
'sts:ExternalId': !Ref 'AWS::AccountId'
DeliveryPolicy:
Type: AWS::IAM::Policy
Properties:
PolicyName: firehose_delivery_policy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- 's3:AbortMultipartUpload'
- 's3:GetBucketLocation'
- 's3:GetObject'
- 's3:ListBucket'
- 's3:ListBucketMultipartUploads'
- 's3:PutObject'
Resource:
- !Sub 'arn:aws:s3:::${S3Bucket}'
- !Sub 'arn:aws:s3:::${S3Bucket}*'
Roles:
- !Ref firehoseRole
KinesisPolicy:
Type: AWS::IAM::Policy
Properties:
PolicyName: kinesis_policy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- 'kinesis:PutRecord'
- 'kinesis:DescribeStreamSummary'
- 'kinesis:PutRecords'
- 'kinesis:GetShardIterator'
- 'kinesis:GetRecords'
- 'kinesis:DescribeStream'
Resource:
- !GetAtt MyKinesisStream.Arn
Roles:
- !Ref firehoseRole
MyKinesisStream:
Type: AWS::Kinesis::Stream
Properties:
ShardCount: 1
DeliveryStream:
Type: AWS::KinesisFirehose::DeliveryStream
Properties:
DeliveryStreamType: KinesisStreamAsSource
KinesisStreamSourceConfiguration:
KinesisStreamARN: !GetAtt MyKinesisStream.Arn
RoleARN: !GetAtt firehoseRole.Arn
S3DestinationConfiguration:
BucketARN: !GetAtt S3Bucket.Arn
BufferingHints:
IntervalInSeconds: 60
SizeInMBs: 50
CompressionFormat: UNCOMPRESSED
Prefix: firehose/
RoleARN: !GetAtt firehoseRole.Arn
I was able to resolve the error. I had to add a DependsOn attribute to the DeliveryStream resource and include both policies in it.
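For reference, a minimal sketch of that change, using the logical resource names from the template above (DeliveryPolicy, KinesisPolicy, DeliveryStream); it only illustrates where DependsOn goes, the rest of the resource stays as shown above:

  DeliveryStream:
    Type: AWS::KinesisFirehose::DeliveryStream
    DependsOn:
      - DeliveryPolicy
      - KinesisPolicy
    Properties:
      DeliveryStreamType: KinesisStreamAsSource
      KinesisStreamSourceConfiguration:
        KinesisStreamARN: !GetAtt MyKinesisStream.Arn
        RoleARN: !GetAtt firehoseRole.Arn
      # ...S3DestinationConfiguration as in the template above

Without the explicit DependsOn, CloudFormation may create the delivery stream before the Kinesis policy is attached to the role, which is what produces the DescribeStream authorization error.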
Related
I am trying to deploy the stack below using a SAM template. It is supposed to deploy a Lambda function and add an S3 trigger, but I am getting the following error:
Getting "ValidationError when calling the CreateChangeSet operation: Template error: instance of Fn::GetAtt references undefined resource"
I am not sure what went wrong here to cause this error.
YAML template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
Environment:
Type: String
S3:
Type: String
Key:
Type: String
SecretMgr:
Type: String
Resources:
LambdaS3ToKinesis:
Type: AWS::Serverless::Function
Properties:
Handler: lambda_function.lambda_handler
Runtime: python3.7
Timeout: 60
FunctionName: !Sub "my_s3_to_kinesis"
CodeUri: ./test/src
Role: !GetAtt testKinesisRole.Arn
Description: "My lambda"
Environment:
Variables:
KINESIS_STREAM: !Sub "test_post_kinesis"
DDB_TRACKER_TABLE: my_tracker_table
ENVIRONMENT: !Sub "${Environment}"
BUCKET_NAME: !Sub "${S3}"
Events:
FileUpload:
Type: S3
Properties:
Bucket: !Sub "${S3}"
Events: s3:ObjectCreated:*
Filter:
S3Key:
Rules:
- Name: prefix
Value: "${Environment}/test1/INPUT/"
- Name: suffix
Value: ".json"
- Name: prefix
Value: "${Environment}/test2/INPUT/"
- Name: suffix
Value: ".json"
LambdaTest1KinesisToDDB:
Type: AWS::Serverless::Function
Properties:
Handler: lambda_function.lambda_handler
Runtime: python3.7
Timeout: 60
FunctionName: !Sub "${Environment}_test1_to_ddb"
CodeUri: test1_kinesis_to_ddb/src/
Role: !GetAtt testKinesisToDDBRole.Arn
Description: "test post kinesis"
Layers:
- !Ref LambdaLayertest1
Environment:
Variables:
BUCKET_NAME: !Sub "${S3}"
DDB_ACC_PLCY_TABLE:test1
DDB_TRACKER_TABLE: test_tracker
ENVIRONMENT: !Sub "${Environment}"
S3_INVALID_FOLDER_PATH: invalid_payload/
S3_RAW_FOLDER_PATH: raw_payload/
S3_UPLOAD_FLAG: false
Events:
KinesisEvent:
Type: Kinesis
Properties:
Stream: !GetAtt Kinesistest1.Arn
StartingPosition: LATEST
BatchSize: 1
Enabled: true
MaximumRetryAttempts: 0
LambdaLayerTest1KinesisToDDB:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: !Sub "${Environment}_test1_kinesis_to_ddb_layer"
ContentUri: test1_kinesis_to_ddb/dependencies/
CompatibleRuntimes:
- python3.7
Metadata:
BuildMethod: python3.7
testKinesisRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub "${Environment}_s3_to_kinesis_role"
Description: Role for first lambda
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- s3.amazonaws.com
- lambda.amazonaws.com
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Sub "${Environment}_s3_to_kinesis_policy"
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
Resource:
- !Sub "arn:aws:s3:::${S3}/*"
- !Sub "arn:aws:s3:::${S3}"
- Effect: Allow
Action:
- kinesis:PutRecord
Resource:
- !Sub "arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:mystream1/${Environment}_test1"
- !Sub "arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:mystream2/${Environment}_test2"
- Effect: Allow
Action:
- lambda:*
- cloudwatch:*
Resource: "*"
- Effect: Allow
Action:
- dynamodb:Put*
- dynamodb:Get*
- dynamodb:Update*
- dynamodb:Query
Resource:
- !GetAtt Dynamomytracker.Arn
- Effect: Allow
Action:
- kms:*
Resource:
- !Sub "${Key}"
testKinesisToDDBRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub "${Environment}_test1_to_ddb_role"
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- kinesis.amazonaws.com
- lambda.amazonaws.com
Action:
- "sts:AssumeRole"
ManagedPolicyArns:
- "arn:aws:iam::aws:test/service-role/AWSLambdaBasicExecutionRole"
Policies:
- PolicyName: !Sub "${Environment}_test1_kinesis_to_ddb_policy"
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
Resource:
- !Sub "arn:aws:s3:::${S3}/*"
- !Sub "arn:aws:s3:::${S3}"
- Effect: Allow
Action:
- kinesis:Get*
- kinesis:List*
- kinesis:Describe*
Resource:
- !GetAtt KinesisTest1.Arn
- !GetAtt KinesisTest2.Arn
- Effect: Allow
Action:
- dynamodb:Put*
- dynamodb:Get*
- dynamodb:Describe*
- dynamodb:List*
- dynamodb:Update*
- dynamodb:Query
- dynamodb:DeleteItem
- dynamodb:BatchGetItem
- dynamodb:BatchWriteItem
- dynamodb:Scan
Resource:
- !Sub
- "${Table}*"
- { Table: !GetAtt "Dynamotest.Arn" }
- !Sub
- "${Table}*"
- { Table: !GetAtt "Dynamotest.Arn" }
- Effect: Allow
Action:
- kms:*
Resource:
- !Sub "${Key}"
######################################
# Update for TEst2
######################################
KinesisTest2:
Type: AWS::Kinesis::Stream
Properties:
Name: !Sub ${Environment}_test2_kinesis
StreamEncryption:
EncryptionType: KMS
KeyId: !Sub "${Key}"
RetentionPeriodHours: 24
ShardCount: 1
LambdaLayerTest2KinesisToDDB:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: !Sub "${Environment}_test2_kinesis_to_ddb_layer"
ContentUri: test2_kinesis_to_ddb/dependencies/
CompatibleRuntimes:
- python3.7
Metadata:
BuildMethod: python3.7
LambdaTest2KinesisToDDB:
Type: AWS::Serverless::Function
Properties:
Handler: lambda_function.lambda_handler
Runtime: python3.7
Timeout: 60
FunctionName: !Sub "${Environment}_Test2_kinesis_to_ddb"
CodeUri: Test2_kinesis_to_ddb/src/
Role: !GetAtt testKinesisToDDBRole.Arn
Description: "Test2"
Layers:
- !Ref LambdaLayerTest2KinesisToDDB
Environment:
Variables:
BUCKET_NAME: !Sub "${S3}"
DDB_ACC_PLCY_TABLE: my_table2
DDB_TRACKER_TABLE: my_log
ENVIRONMENT: !Sub "${Environment}"
S3_INVALID_FOLDER_PATH: invalid_payload/
S3_RAW_FOLDER_PATH: raw_payload/
S3_UPLOAD_FLAG: false
Events:
KinesisEvent:
Type: Kinesis
Properties:
Stream: !GetAtt KinesisTest2.Arn
StartingPosition: LATEST
BatchSize: 1
Enabled: true
MaximumRetryAttempts: 0
Can anybody help me resolve this? I am not sure what exactly is missing in the template or how to fix this error.
You are using the AWS Serverless Application Model (SAM), and your template does not conform to its format. For example, it is missing the required Transform statement:
Transform: AWS::Serverless-2016-10-31
There could be many other things wrong, as your template is neither plain CloudFormation nor valid SAM at this point.
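As a minimal sketch, the top of the template would look like this with the Transform in place (only the header changes; the parameters and resources stay exactly as in the question):

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Parameters:
  Environment:
    Type: String
  # ...remaining parameters unchanged
Resources:
  LambdaS3ToKinesis:
    Type: AWS::Serverless::Function
    # ...properties unchanged

The Transform is what tells CloudFormation to expand AWS::Serverless::Function and AWS::Serverless::LayerVersion into plain CloudFormation resources before the rest of the template is validated.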
I am trying to create a Lambda function with an S3 trigger. While executing the template, I get an "S3 bucket already exists" error. There are no buckets with that name in my account, and in this code I create the bucket only once, but somehow it seems to be created twice.
Below is my CloudFormation template.
AWSTemplateFormatVersion : 2010-09-09
Parameters:
BucketName:
Type: String
Resources:
Bucket:
Type: AWS::S3::Bucket
DependsOn:
- ProcessingLambdaPermission
Properties:
BucketName: !Ref BucketName
NotificationConfiguration:
LambdaConfigurations:
- Event: s3:PutObject:*
Function: !GetAtt ProcessingLambdaFunction.Arn
Filter:
S3Key:
Rules:
- Name: suffix
Value: .txt
ProcessingLambdaPermission:
Type: AWS::Lambda::Permission
Properties:
Action: 'lambda:InvokeFunction'
FunctionName: !Ref ProcessingLambdaFunction
Principal: s3.amazonaws.com
SourceArn: 'arn:aws:s3:::hope'
SourceAccount: !Ref AWS::AccountId
ProcessingLambdaExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
Action:
- sts:AssumeRole
Policies:
- PolicyName: allowLogging
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- logs:*
Resource: arn:aws:logs:*:*:*
- PolicyName: getAndDeleteObjects
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- s3:GetObject
- s3:DeleteObject
Resource: !Sub 'arn:aws:s3:::${BucketName}/*'
ProcessingLambdaFunction:
Type: AWS::Lambda::Function
Properties:
Code:
ZipFile: !Sub |
import json
import boto3
s3 = boto3.client("s3")
def lambda_handler(event,context):
print("hello")
Handler: index.handler
Role: !GetAtt ProcessingLambdaExecutionRole.Arn
Runtime: python2.7
MemorySize: 512
Timeout: 120
I am trying to create a KMS key using CloudFormation, but unfortunately I am not able to create it. In the console I am getting the following error:
null (Service: Kms, Status Code: 400, Request ID: 156b452d-8ffb-5517-9jbc-a6yh6e3a79, Extended Request ID: null)
I am not able to understand the root cause of the issue. Please refer to the template below, which I am using to create the KMS key:
AWSTemplateFormatVersion: 2010-09-09
Description: Testing KMS Using CloudFormation
Resources:
KMSEncryption:
Type: AWS::KMS::Key
Properties:
Description: KMS-Key
KeyPolicy:
Version: '2012-10-17'
Id: encryption-key
EnableKeyRotation: 'True'
PendingWindowInDays: 7
Statement:
- Sid: Enable IAM User Permissions
Effect: Allow
Principal:
AWS:
Fn::Join:
- ''
- - 'arn:aws:iam::'
- Ref: AWS::AccountId
- :root
Action: kms:*
Resource: '*'
- Sid: Allow use of the key
Effect: Allow
Principal:
AWS:
Fn::Join:
- ''
- - 'arn:aws:iam::'
- Ref: AWS::AccountId
- :role/
- !Ref KMSLambdaRole
Action:
- kms:DescribeKey
- kms:Encrypt
- kms:Decrypt
- kms:ReEncrypt*
- kms:GenerateDataKey
- kms:GenerateDataKeyWithoutPlaintext
Resource: '*'
- Sid: Allow administration of the key
Effect: Allow
Principal:
AWS: arn:aws:iam::xxxxxxxxx:user/Shiv
Action:
- kms:Create*
- kms:Describe*
- kms:Enable*
- kms:List*
- kms:Put*
- kms:Update*
- kms:Revoke*
- kms:Disable*
- kms:Get*
- kms:Delete*
- kms:ScheduleKeyDeletion
- kms:CancelKeyDeletion
EncryptionAlias:
Type: AWS::KMS::Alias
Properties:
AliasName: 'Testing'
TargetKeyId:
Ref: KMSEncryption
KMSLambdaRole:
Type: AWS::IAM::Role
Properties:
RoleName: 'TestingKMSAccess'
AssumeRolePolicyDocument:
Statement:
- Action: ['sts:AssumeRole']
Effect: Allow
Principal:
Service: [lambda.amazonaws.com]
Path: /
ManagedPolicyArns:
- arn:aws:iam::aws:policy/ReadOnlyAccess
Policies:
- PolicyName: AWSLambdaBasicExecutionRole
PolicyDocument:
Version: 2012-10-17
Statement:
- Sid: SQS
Action:
- 'sqs:SendMessage'
- 'sqs:SendMessageBatch'
Effect: Allow
Resource: '*'
Your EnableKeyRotation and PendingWindowInDays should be outside of KeyPolicy:
Resources:
KMSEncryption:
Type: AWS::KMS::Key
Properties:
Description: KMS-Key
EnableKeyRotation: 'True'
PendingWindowInDays: 7
KeyPolicy:
Version: '2012-10-17'
Id: encryption-key
# the rest
Note that there could be other issues which are not yet apparent, e.g. non-existent principals.
Error: 1 validation error detected: Value 'BATS::SAM::CodeS3Bucket' at 'code.s3Bucket' failed to satisfy constraint: Member must satisfy regular expression pattern: ^[0-9A-Za-z.-_]*(?
What is the role of BATS::SAM::CodeS3Bucket?
Conditions:
HasBucketKey:
Fn::Not:
- Fn::Equals:
- {Ref: BucketKey}
- ''
HasBucketName:
Fn::Not:
- Fn::Equals:
- {Ref: BucketName}
- ''
Parameters:
BucketKey: {Default: '', Type: String}
BucketName: {Default: '', Type: String}
Resources:
OriginAccessLambdaRole:
Properties:
AssumeRolePolicyDocument:
Statement:
- Action: ['sts:AssumeRole']
Effect: Allow
Principal:
Service: [lambda.amazonaws.com]
Version: '2012-10-17'
Policies:
- PolicyDocument:
Statement:
- Action: ['logs:CreateLogStream', 'logs:PutLogEvents', 'logs:CreateLogGroup']
Effect: Allow
Resource: '*'
- Action: ['cloudfront:*']
Effect: Allow
Resource: '*'
PolicyName: CloudFrontOAIPolicy
Type: AWS::IAM::Role
OriginAccessLambda:
DependsOn: [OriginAccessLambdaRole]
Properties:
Code:
S3Bucket:
Fn::If:
- HasBucketName
- {Ref: BucketName}
- BATS::SAM::CodeS3Bucket
S3Key:
Fn::If:
- HasBucketKey
- {Ref: BucketKey}
- BATS::SAM::CodeS3Key
Description: Creates an origin access identity
Handler: handlers.oai
MemorySize: 2048
Role:
Fn::GetAtt: [OriginAccessLambdaRole, Arn]
Runtime: python3.6
Timeout: 120
Type: AWS::Lambda::Function
Transform: AWS::Serverless-2016-10-31
This is my CloudFormation template. The IAM role is created successfully, but the above error occurs during Lambda creation.
Thanks.
I have a CloudFormation stack that creates my CodePipeline/CodeBuild resources, etc. When I try to run it, I get:
iam:PutRolePolicy User: arn:aws:sts::0000000000:assumed-role/aaaaaaaaaa/AWSCloudFormation is not authorized to perform: iam:PutRolePolicy on resource: role bbbbbbbbbb
What's wrong? I already have a policy like:
- Effect: Allow
Resource: !Sub 'arn:aws:iam::${AWS::AccountId}:role/*'
Action:
- 'iam:GetRole'
- 'iam:CreateRole'
- 'iam:DeleteRole'
- 'iam:PassRole'
- 'iam:AttachRolePolicy'
- 'iam:DetachRolePolicy'
- 'iam:DeleteRolePolicy'
- 'iam:PutRolePolicy'
My stack YAML:
AWSTemplateFormatVersion : '2010-09-09'
Description: 'Skynet stack for CodePipeline'
Parameters:
PipelineName:
Type: String
Description: Pipeline Name (Lower case only, since S3 bucket names can only have lowercase)
Default: skynet-pipeline
GitHubOwner:
Type: String
Description: GitHub Owner
Default: 2359media
GitHubRepo:
Type: String
Description: GitHub Repo
Default: 'skynet'
GitHubBranch:
Type: String
Description: GitHub Branch
Default: master
GitHubToken:
Type: String
Description: GitHub Token
NoEcho: true
Resources:
Pipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: !Ref PipelineName
RoleArn: !GetAtt [PipelineRole, Arn]
ArtifactStore:
Location: !Ref PipelineArtifactStore
Type: S3
DisableInboundStageTransitions: []
Stages:
- Name: GitHubSource
Actions:
- Name: Source
ActionTypeId:
Category: Source
Owner: ThirdParty
Version: 1
Provider: GitHub
Configuration:
Owner: !Ref GitHubOwner
Repo: !Ref GitHubRepo
Branch: !Ref GitHubBranch
OAuthToken: !Ref GitHubToken
OutputArtifacts:
- Name: SourceCode
- Name: Build
Actions:
- Name: Lambda
InputArtifacts:
- Name: SourceCode
OutputArtifacts:
- Name: LambdaPackage
ActionTypeId:
Category: Build
Owner: AWS
Version: 1
Provider: CodeBuild
Configuration:
ProjectName: !Ref CodeBuildLambda
- Name: CreateChangeSet
Actions:
- Name: Lambda
InputArtifacts:
- Name: LambdaPackage
OutputArtifacts:
- Name: LambdaDeployment
ActionTypeId:
Category: Deploy
Owner: AWS
Version: 1
Provider: CloudFormation
Configuration:
ActionMode: CHANGE_SET_REPLACE
ChangeSetName: !Sub
- '${PipelineName}-lambda'
- {PipelineName: !Ref PipelineName}
RoleArn: !GetAtt [CloudFormationRole, Arn]
StackName: !Sub
- '${PipelineName}-lambda'
- {PipelineName: !Ref PipelineName}
TemplatePath: 'LambdaPackage::SkynetLambdaPackaged.yml'
- Name: ExecuteChangeSet
Actions:
- Name: Lambda
ActionTypeId:
Category: Deploy
Owner: AWS
Version: 1
Provider: CloudFormation
Configuration:
ActionMode: CHANGE_SET_EXECUTE
ChangeSetName: !Sub
- '${PipelineName}-lambda'
- {PipelineName: !Ref PipelineName}
StackName: !Sub
- '${PipelineName}-lambda'
- {PipelineName: !Ref PipelineName}
CodeBuildLambda:
Type: AWS::CodeBuild::Project
Properties:
Name: !Sub '${PipelineName}-lambda'
Artifacts:
Type: CODEPIPELINE
Environment:
ComputeType: BUILD_GENERAL1_SMALL
Image: aws/codebuild/nodejs:7.0.0
Type: LINUX_CONTAINER
EnvironmentVariables:
- Name: S3_BUCKET
Value: !Ref PipelineArtifactStore
ServiceRole: !Ref CodeBuildRole
Source:
BuildSpec: 'lambda/buildspec.yml'
Type: CODEPIPELINE
PipelineArtifactStore:
Type: AWS::S3::Bucket
Properties:
BucketName: !Sub '${PipelineName}-artifacts'
VersioningConfiguration:
Status: Enabled
CodeBuildRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub '${PipelineName}-codebuild'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
Effect: Allow
Principal:
Service: codebuild.amazonaws.com
Action: sts:AssumeRole
Policies:
- PolicyName: !Sub '${PipelineName}-codebuild'
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Resource: 'arn:aws:logs:*:*:*'
Action:
- 'logs:CreateLogGroup'
- 'logs:CreateLogStream'
- 'logs:PutLogEvents'
- Effect: Allow
Resource:
- !Sub 'arn:aws:s3:::codepipeline-${AWS::Region}-*/*'
- !Sub
- '${PipelineArtifactStoreArn}/*'
- {PipelineArtifactStoreArn: !GetAtt [PipelineArtifactStore, Arn]}
Action:
- 's3:GetObject'
- 's3:GetObjectVersion'
- 's3:PutObject'
CloudFormationRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub '${PipelineName}-cloudformation'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: cloudformation.amazonaws.com
Action:
- sts:AssumeRole
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/AWSLambdaExecute'
Policies:
- PolicyName: !Sub '${PipelineName}-cloudformation'
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Resource: '*'
Action:
- 's3:GetObject'
- 's3:GetObjectVersion'
- 's3:GetBucketVersioning'
- Effect: Allow
Resource: 'arn:aws:s3:::codepipeline*'
Action:
- 's3:PutObject'
- Effect: Allow
Resource: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:*'
Action:
- 'lambda:*'
- Effect: Allow
Resource: !Sub 'arn:aws:apigateway:${AWS::Region}::*'
Action:
- 'apigateway:*'
- Effect: Allow
Resource: '*'
Action:
- 'lambda:CreateEventSourceMapping'
- 'lambda:DeleteEventSourceMapping'
- 'lambda:GetEventSourceMapping'
- Effect: Allow
Resource: !Sub 'arn:aws:iam::${AWS::AccountId}:role/*'
Action:
- 'iam:GetRole'
- 'iam:CreateRole'
- 'iam:DeleteRole'
- 'iam:PassRole'
- 'iam:AttachRolePolicy'
- 'iam:DetachRolePolicy'
- 'iam:DeleteRolePolicy'
- 'iam:PutRolePolicy'
- Effect: Allow
Resource: '*'
Action:
- 'iam:PassRole'
- Effect: Allow
Resource: !Sub 'arn:aws:cloudformation:${AWS::Region}:aws:transform/Serverless-2016-10-31'
Action:
- 'cloudformation:CreateChangeSet'
PipelineRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub '${PipelineName}-pipeline'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Action: ['sts:AssumeRole']
Effect: Allow
Principal:
Service: [codepipeline.amazonaws.com]
Path: /
Policies:
- PolicyName: SkynetPipeline
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action:
- 's3:GetObject'
- 's3:GetObjectVersion'
- 's3:GetBucketVersioning'
Effect: 'Allow'
Resource: '*'
- Action:
- 's3:PutObject'
Effect: 'Allow'
Resource:
- !GetAtt [PipelineArtifactStore, Arn]
- Action:
- 'codecommit:CancelUploadArchive'
- 'codecommit:GetBranch'
- 'codecommit:GetCommit'
- 'codecommit:GetUploadArchiveStatus'
- 'codecommit:UploadArchive'
Effect: 'Allow'
Resource: '*'
- Action:
- 'codedeploy:CreateDeployment'
- 'codedeploy:GetApplicationRevision'
- 'codedeploy:GetDeployment'
- 'codedeploy:GetDeploymentConfig'
- 'codedeploy:RegisterApplicationRevision'
Effect: 'Allow'
Resource: '*'
- Action:
- 'elasticbeanstalk:*'
- 'ec2:*'
- 'elasticloadbalancing:*'
- 'autoscaling:*'
- 'cloudwatch:*'
- 's3:*'
- 'sns:*'
- 'cloudformation:*'
- 'rds:*'
- 'sqs:*'
- 'ecs:*'
- 'iam:PassRole'
Effect: 'Allow'
Resource: '*'
- Action:
- 'lambda:InvokeFunction'
- 'lambda:ListFunctions'
Effect: 'Allow'
Resource: '*'
- Action:
- 'opsworks:CreateDeployment'
- 'opsworks:DescribeApps'
- 'opsworks:DescribeCommands'
- 'opsworks:DescribeDeployments'
- 'opsworks:DescribeInstances'
- 'opsworks:DescribeStacks'
- 'opsworks:UpdateApp'
- 'opsworks:UpdateStack'
Effect: 'Allow'
Resource: '*'
- Action:
- 'cloudformation:CreateStack'
- 'cloudformation:DeleteStack'
- 'cloudformation:DescribeStacks'
- 'cloudformation:UpdateStack'
- 'cloudformation:CreateChangeSet'
- 'cloudformation:DeleteChangeSet'
- 'cloudformation:DescribeChangeSet'
- 'cloudformation:ExecuteChangeSet'
- 'cloudformation:SetStackPolicy'
- 'cloudformation:ValidateTemplate'
- 'iam:PassRole'
Effect: 'Allow'
Resource: '*'
- Action:
- 'codebuild:BatchGetBuilds'
- 'codebuild:StartBuild'
Effect: 'Allow'
Resource: '*'
It seems that either manually deleting the stack and re-creating it, or changing the IAM Resource to '*', solves the issue.
In my case I need to use aws sts assume-role first.
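As a sketch of the second workaround above (widening the IAM Resource), the statement from the question would become something like the following; note this grants the CloudFormation role these IAM actions on every role in the account, so scope it back down once the pipeline works:

  - Effect: Allow
    Resource: '*'  # widened from arn:aws:iam::${AWS::AccountId}:role/*
    Action:
      - 'iam:GetRole'
      - 'iam:CreateRole'
      - 'iam:DeleteRole'
      - 'iam:PassRole'
      - 'iam:AttachRolePolicy'
      - 'iam:DetachRolePolicy'
      - 'iam:DeleteRolePolicy'
      - 'iam:PutRolePolicy'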