I have created a CloudFormation template which creates a new repo in CodeCommit. It also needs to pull source.zip from S3 and copy it into the repo, but when I run the template I see a 400 Bad Request.
CF template:
AWSTemplateFormatVersion: 2010-09-09
Description: my First code commit CF template
Parameters:
  DemoBucket:
    Type: String
    Description: Bucket in which you have code
    Default: jaivijaycccf
  DemoKey:
    Type: String
    Description: key of zipped code
    Default: demo.zip
Resources:
  HelloWorld:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: HelloWorldApp
      RepositoryDescription: This is a repository for my project with code from MySourceCodeBucket
      Code:
        BranchName: development
        S3:
          Bucket: !Ref DemoBucket
          Key: !Ref DemoKey
          ObjectVersion: 1
If the S3 bucket that is storing the source code does not use object versioning, then providing the ObjectVersion parameter in the CloudFormation template will cause it to fail with a 400.
Removing the unnecessary parameter will fix the problem.
I'm adding this as an answer after you've confirmed that you aren't using object versioning.
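With ObjectVersion removed, the Code block from your template reduces to:

Code:
  BranchName: development
  S3:
    Bucket: !Ref DemoBucket
    Key: !Ref DemoKey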
I have a CFN template in which I am creating two S3 buckets for image resizing using CloudFront.
The issue is that I want to use an already existing S3 bucket for these functions, but I get an error that the S3 bucket already exists when I provide the resource ARN and other data.
How can I resolve this?
I tried giving the details (ARN, name, etc.) and deploying, but it doesn't work.
Something like this would help you:
AWSTemplateFormatVersion: '2010-09-09'
Description: 'CFN template example for referencing existing S3 bucket to lambda'
Parameters:
  myS3Bucket:
    Type: String
    Description: Provide the S3 bucket you want to reference in your lambda.
Resources:
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Description: A lambda function
      Handler: index.handler
      Runtime: python3.7
      # Code and Role are also required on a real AWS::Lambda::Function; omitted here for brevity
      Environment:
        Variables:
          S3_BUCKET: !Ref myS3Bucket
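Note that passing the bucket name only wires it into the function's configuration; the function's execution role still needs permission to access the existing bucket. A minimal sketch of such a role (the role name, policy name, and read-only action are assumptions, not part of the original template):

MyLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: read-existing-bucket   # hypothetical policy name
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: s3:GetObject
              Resource: !Sub 'arn:aws:s3:::${myS3Bucket}/*'

The function would then reference it with Role: !GetAtt MyLambdaRole.Arn.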
I am trying to create a Workflow object using AWS CloudFormation. This workflow will be used with AWS File Transfer Family so that files get copied to S3 upon uploading.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
SftpToS3Workflow:
Type: AWS::Transfer::Workflow
Properties:
Description: 'Workflow used by AWS File Transfer Family. Copies the files to S3'
Steps:
- Type: COPY
CopyStepDetails:
Name: copt-to-s3-wf-step
DestinationFileLocation:
S3FileLocation:
Bucket: !ImportValue GenesysS3BucketName
Key: "genesys/"
OverwriteExisting: 'TRUE'
Outputs:
SftpToS3WorkflowId:
Description: 'Id of the Workflow'
Value: !GetAtt SftpToS3Workflow.WorkflowId
Export:
Name: SftpToS3WorkflowId
Unfortunately, this script fails with the error below. The error does not say which property is failing validation. Can someone help, please? I could not find a single example on GitHub.
Properties validation failed for resource SftpToS3Workflow with message: #/Description: failed validation constraint for keyword [pattern]
I have used this CloudFormation schema to write the code:
https://github.com/APIs-guru/openapi-directory/blob/0380216a44c364b4517b31a93295089a6f4f23b9/APIs/amazonaws.com/transfer/2018-11-05/openapi.yaml
The Description can only match the pattern
^[\w- ]*$
i.e. letters, digits, underscores, hyphens, and spaces. The period in your Description is what fails validation. So it should be:
Description: 'Workflow used by AWS File Transfer Family - Copies the files to S3'
I have a YAML CloudFormation template which requires a value stored in an SSM parameter. Below is the sample code:
AWSTemplateFormatVersion: 2010-09-09
Description: 'Fully Automated OT Archival Data Migration'
Parameters:
  Environment:
    Description: 'Stage type (for Tags)'
    Type: String
    Default: dev
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: '{{resolve:ssm:/opentext/config/automated-ot-archival-data-migration/migration.bucket.name:1}}-${Environment}'
When I upload the code to CloudFormation in the AWS console, it results in an error. I'm wondering whether the SSM parameter reference is correct or not.
Please let me know if you find any issues here.
Thanks
You are missing the !Sub function for your ${Environment} variable:
BucketName: !Sub '{{resolve:ssm:/opentext/config/automated-ot-archival-data-migration/migration.bucket.name:1}}-${Environment}'
The {{resolve:ssm:...}} dynamic reference works either way, but without !Sub the literal text ${Environment} ends up in the bucket name instead of the parameter value.
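Applied to your template, the corrected resource (unchanged apart from the !Sub) would be:

Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub '{{resolve:ssm:/opentext/config/automated-ot-archival-data-migration/migration.bucket.name:1}}-${Environment}'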
I am attempting to create a CloudFormation template for an AWS lambda service and I'm running into a "chicken or the egg" scenario between the s3 bucket holding my lambda code, and the lambda function calling said bucket.
The intent is for our lambda code to be built to a jar, which will be hosted in an S3 Bucket, and our lambda function will reference that bucket. However when I run the template (using the CLI aws cloudformation create-stack --template-body "file://template.yaml"), I run into the following error creating the lambda function:
CREATE_FAILED Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: ...; Proxy: null)
I believe this is happening because CloudFormation is creating both the bucket and the Lambda function in the same transaction, and I can't pause it in the middle to push content into the brand-new bucket.
I can't be the only one that has this problem, so I'm wondering if there's a common practice for tackling it? I'd like to keep all my configuration in a single template file if possible, but the only solutions I'm coming up with would require splitting the stack creation into multiple steps. (e.g. build the bucket first, deploy my code to it, then create the rest of the stack.) Is there a better way to do this?
template.yaml (the relevant bits)
...
myS3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: !Sub "${AWS::StackName}"
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: AES256
    AccessControl: Private
    PublicAccessBlockConfiguration:
      BlockPublicAcls: true
      BlockPublicPolicy: true
      IgnorePublicAcls: true
      RestrictPublicBuckets: true
    VersioningConfiguration:
      Status: Enabled
myLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: !Sub "${AWS::StackName}-dec"
    Handler: "lambda.Handler"
    Role: !GetAtt myLambdaExecutionRole.Arn
    Code:
      S3Bucket: !Ref myS3Bucket
      S3Key: "emy-lambda-fn.jar"
    Runtime: "java8"
    Timeout: 90
    MemorySize: 384
    Environment:
      Variables:
        stackName: !Sub "${AWS::StackName}"
...
I'm coming up with would require splitting the stack creation into multiple steps. [...] Is there a better way to do this?
Splitting the template into two is the most logical and easiest way of doing what you are trying to do.
There are some alternatives that would allow you to keep everything in one template, but they are more difficult to implement, manage, and use. One alternative would be to develop a custom resource. The resource would take the form of a Lambda function that gets invoked after the bucket creation. The Lambda would wait and check for the existence of your emy-lambda-fn.jar in the bucket, and once the key is uploaded (within 15 minutes max), the function returns and your stack creation continues. This means that your myLambdaFunction would be created only after the custom resource returns, ensuring that emy-lambda-fn.jar exists.
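As an illustration, here is a minimal sketch of that approach (the resource names are assumptions; cfnresponse is the helper module CloudFormation makes available to inline ZipFile Python code):

JarWaiterFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: python3.9
    Timeout: 900                      # 15 minutes, the Lambda maximum
    Role: !GetAtt JarWaiterRole.Arn
    Code:
      ZipFile: |
        import time
        import boto3
        import cfnresponse
        from botocore.exceptions import ClientError

        def handler(event, context):
            if event['RequestType'] == 'Create':
                s3 = boto3.client('s3')
                props = event['ResourceProperties']
                # Poll until the key exists; the function timeout bounds the wait.
                while True:
                    try:
                        s3.head_object(Bucket=props['Bucket'], Key=props['Key'])
                        break
                    except ClientError:
                        time.sleep(10)
            # Signal success back to CloudFormation (including on Delete,
            # so stack deletion doesn't hang waiting for a response).
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
JarWaiterRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    Policies:
      - PolicyName: head-jar
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: s3:GetObject    # HeadObject requires s3:GetObject
              Resource: !Sub '${myS3Bucket.Arn}/*'
WaitForJar:
  Type: Custom::S3ObjectWaiter
  Properties:
    ServiceToken: !GetAtt JarWaiterFunction.Arn
    Bucket: !Ref myS3Bucket
    Key: emy-lambda-fn.jar

Your myLambdaFunction would then declare DependsOn: WaitForJar, so CloudFormation only creates it after the jar is in the bucket.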
I'm new to SAM templates. I have the following snippet of my SAM template, where I pass the name of the bucket as a parameter from outside of the SAM YAML file:
SAM Template:
MyLambda:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: ./functions/test/dist/
    Handler: index.lambdaHandler
    Runtime: nodejs12.x
    Events:
      S3PutObjectEvent:
        Type: S3
        Properties:
          Bucket: !Ref S3BucketName
          Events: s3:ObjectCreated:*
Parameter.YAML:
DeploymentEnvironment:
  default:
    S3BucketName: my-awesome-s3-bucket
Now, I do not create any S3 bucket using the SAM template or infrastructure as code (IaC). Bucket creation is done by the Lambda code itself, hence there is no S3 bucket resource declared in my SAM template.
When I execute sam validate to validate the SAM template, I get this error:
/template.yaml' was invalid SAM Template.
Error: [InvalidResourceException('MyLambda', 'Event with id [S3PutObjectEvent] is invalid. S3 events must reference an S3 bucket in the same template.')] ('MyLambda', 'Event with id [S3PutObjectEvent] is invalid. S3 events must reference an S3 bucket in the same template.')
I really need your help in getting this solved. I read various forums, but I'm not sure whether the bucket name can be passed from outside of the SAM template or not.
Is there any workaround? This is a really critical issue for me. I appreciate your help on this, thanks.
Bucket creation is done by Lambda code itself
I'd recommend against this pattern, as your Lambda event source won't get created if the bucket doesn't already exist.
Try creating the bucket in your SAM template, and pass the bucket name to your function as an environment variable.
Optionally, you can vary the bucket name per environment using Parameters:
Parameters:
  Env:
    Type: String
    AllowedValues:
      - dev
      - qa
      - prod
    Default: dev
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'my-unique-bucket-name-${Env}' # bucket names must be lowercase
  MyLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./functions/test/dist/
      Handler: index.lambdaHandler
      Runtime: nodejs12.x
      Environment:
        Variables:
          BUCKET_NAME: !Ref MyBucket # passed to Lambda as environment variable
      Events:
        S3PutObjectEvent:
          Type: S3
          Properties:
            Bucket: !Ref MyBucket
            Events: s3:ObjectCreated:*
And get the bucket name in your function:
const bucket = process.env.BUCKET_NAME