I am trying to set up a resource pipeline that deploys all my resources using CloudFormation, with a separate pipeline to deploy code.
I am using the CloudFormation template below:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain",
      "Properties": {
        "BucketName": "devanimalhubstorage"
      }
    },
    "HelloLambdaRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "RoleName": "HelloLambdaRole",
        "AssumeRolePolicyDocument": {
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": "lambda.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        }
      }
    },
    "AnimalhubLambdaFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "FunctionName": "AnimalhubLambdaFunction",
        "Role": {
          "Fn::GetAtt": ["HelloLambdaRole", "Arn"]
        },
        "Code": {},
        "Runtime": "dotnetcore2.1",
        "Handler": "myfirstlambda::myfirstlambda.Function::FunctionHandler"
      }
    }
  }
}
Problem:
Resource handler returned message: "Please provide a source for function code. (Service: Lambda, Status Code: 400, Request ID: 377fff66-a06d-495f-823e-34aec70f0e22, Extended Request ID: null)" (RequestToken: 9c9beee7-2c71-4d5d-e4a8-69065e12c5fa, HandlerErrorCode: InvalidRequest)
I want to build a separate pipeline for code build and deployment. Can't we deploy a Lambda function without code?
What is the recommended solution in AWS? (I used to follow this approach in Azure; I am new to AWS.)
First, I would advise you to look into AWS SAM. It is very helpful when creating and deploying serverless applications, and it has a lot of examples to help you with your use case.
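For a sense of what that looks like, here is a minimal SAM template sketch in JSON, reusing the names from your template (the CodeUri path is an assumption about your project layout):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    "AnimalhubLambdaFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "myfirstlambda::myfirstlambda.Function::FunctionHandler",
        "Runtime": "dotnetcore2.1",
        "CodeUri": "./src/myfirstlambda"
      }
    }
  }
}

With SAM, sam deploy packages whatever is at CodeUri, uploads it to S3, and fills in the function code for you, so the template never has to ship an empty Code block.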
Second, using separate pipelines for this purpose is not the recommended way in AWS. Using dummy code, as the other answer suggests, is also quite dangerous, since an update to your CloudFormation stack would override any code that your other pipeline has deployed to the Lambda function.
In a serverless application like this, you could split the resources into two or more CloudFormation stacks. For example, you could create your S3 buckets and other more "stable" infrastructure in one stack, deployed either manually or in a pipeline, and deploy your code in a separate pipeline using another CloudFormation stack. Any values (ARNs etc.) needed from the more stable resources can be injected as parameters into the template, or referenced using CloudFormation's Fn::ImportValue function. I'd personally recommend using the parameter, since it is more flexible for future changes.
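As a sketch of the Fn::ImportValue approach (the export name and environment variable here are made up), the infrastructure stack exports a value in its Outputs:

"Outputs": {
  "BucketName": {
    "Value": { "Ref": "S3Bucket" },
    "Export": { "Name": "animalhub-bucket-name" }
  }
}

and the code stack imports it wherever it is needed, for example as a Lambda environment variable:

"Environment": {
  "Variables": {
    "BUCKET_NAME": { "Fn::ImportValue": "animalhub-bucket-name" }
  }
}

Note that an exported value cannot be changed or removed while another stack imports it, which is exactly why a plain parameter is the more flexible option.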
Hope you're all good.
I keep getting this error when using sls deploy with runtime nodejs16.x:
npx sls deploy --stage dev --alias iteration01
Deploying my-service to stage dev (us-east-1)
Packaging
Adding build information (0.18.0-63aef72)
Adding branch information (features/iteration01)
Excluding development dependencies for service package
Retrieving CloudFormation stack
Preparing alias ...
Processing custom resources
Removing resources:
Processing functions
Processing API
Configuring stage
Processing event source subscriptions
Processing SNS Lambda subscriptions
Adding deployment date information
Uploading
Uploading CloudFormation file to S3
Uploading State file to S3
Uploading service my-service.zip file to S3 (16.59 MB)
✖ Stack my-service-dev failed to deploy (62s)
Environment: darwin, node 16.17.0, framework 3.18.2 (local), plugin 6.2.2, SDK 4.3.2
Credentials: Local, environment variables
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
The CloudFormation template is invalid: Template format error: Unresolved resource dependencies [HelloWorldLambdaPermissionApiGateway] in the Resources block of the template
This is happening to all of our services when we try to upgrade the runtime to nodejs16.x.
Following are my package.json and serverless.yml:
serverless.yml
package.json
If I check the generated CloudFormation template, the resource is there under Resources:
"HelloWorldLambdaPermissionApiGateway": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"FunctionName": {
"Fn::GetAtt": [
"HelloWorldLambdaFunction",
"Arn"
]
},
"Action": "lambda:InvokeFunction",
"Principal": "apigateway.amazonaws.com",
"SourceArn": {
"Fn::Join": [
"",
[
"arn:",
{
"Ref": "AWS::Partition"
},
":execute-api:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "ApiGatewayRestApi"
},
"/*/*"
]
]
}
}
}
Has anyone seen this error before? If I delete the HTTP handler, it works fine.
PS: Upgrading to Serverless 3.22.0 didn't help.
We have an AWS Amplify project that I am in the process of migrating the API from Transformer 1 to 2.
As part of this, we have a number of custom resolvers that previously had their own stack JSON template in the stacks/ folder as generated by the Amplify CLI.
As per the migration instructions, I have created new custom resources using amplify add custom, which allows me to create either a CDK (Cloud Development Kit) resource or a CloudFormation template. I just want a lift-and-shift for now, so I've gone with the template option and moved the content from the stack JSON to the new custom resolver JSON template.
This seems like it should work, but the custom templates no longer have access to the parameters shared from the parent stack:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Metadata": {},
  "Parameters": {
    "AppSyncApiId": {
      "Type": "String",
      "Description": "The id of the AppSync API associated with this project."
    },
    "S3DeploymentBucket": {
      "Type": "String",
      "Description": "The S3 bucket containing all deployment assets for the project."
    },
    "S3DeploymentRootKey": {
      "Type": "String",
      "Description": "An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory."
    }
  },
  ...
}
So these are standard parameters that were used previously, and my challenge now is accessing the deployment bucket and root key, as these values are generated upon deployment.
The exact use case is for the AppSync function configuration when I attempt to locate the request and response mapping template S3 locations:
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/MyCustomResolver.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
The error message I am receiving is:
AWS::CloudFormation::Stack Tue Feb 15 2022 18:20:42 GMT+0000 (Greenwich Mean Time) Parameters: [S3DeploymentBucket, AppSyncApiId, S3DeploymentRootKey] must have values
I feel like I am missing a step to plumb the output values through to the parameters in the JSON, but I can't find any documentation to suggest how to do this using the updated Amplify CLI options.
Let me know if you need any further information and fingers crossed it is something simple for you Amplify/CloudFormation ninjas out there!
Thank you in advance!
I have been trying to use the update functionality of the AWS CLI to update a CodePipeline pipeline with the following command:
aws codepipeline update-pipeline --cli-input-json file://Pipelines/AWS/SomeName.json
And I keep getting the following error:
Unknown parameter in pipeline.stages[0].actions[0]: "region", must be one of: name, actionTypeId, runOrder, configuration, outputArtifacts, inputArtifacts, roleArn
I have checked the AWS documentation and I don't think there's anything wrong with the way the actions are set up; here is the snippet from the JSON:
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "S3",
"version": "1"
},
"runOrder": 1,
"configuration": {
"PollForSourceChanges": "false",
"S3Bucket": "some-bucket-name",
"S3ObjectKey": "someEnvironment/someZip.zip"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"inputArtifacts": [],
"region": "eu-west-1"
},...
]
According to the documentation provided at https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-S3.html, everything seems to be correct. Removing the region parameter updates the pipeline correctly, but I am unsure of the consequences that could have on the updates themselves.
Any help is appreciated.
Cheers
Sky
If you try to create the pipeline through the AWS console and choose S3 as a source, you will notice the region option is not available (as shown in the screenshot below). I would say this is a current limitation of the service rather than a deliberate design choice, and a gap in the documentation (however, happy to be proven wrong).
However, you could try including the full S3 bucket ARN. Or take comfort in the fact that any action deployed without a region specified defaults to the same region the pipeline is in, as per the AWS documentation.
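Concretely, the action from the question should update cleanly once the region key is dropped (the same snippet, trimmed):

{
  "name": "Source",
  "actionTypeId": {
    "category": "Source",
    "owner": "AWS",
    "provider": "S3",
    "version": "1"
  },
  "runOrder": 1,
  "configuration": {
    "PollForSourceChanges": "false",
    "S3Bucket": "some-bucket-name",
    "S3ObjectKey": "someEnvironment/someZip.zip"
  },
  "outputArtifacts": [
    { "name": "SourceArtifact" }
  ],
  "inputArtifacts": []
}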
I had previously created a Jenkins build provider using the CodePipeline console. During creation, it asks for a Jenkins server URL.
Now, I need to change my Jenkins server URL, but when I try to edit, there isn't any option to change the build provider. See snapshot below:
The only solution I see is to add a new one.
I tried to get the pipeline definition using the AWS CLI:
aws codepipeline get-pipeline --name <pipeline-name>
But the JSON response just has a reference to the build provider:
...
},
{
  "name": "Build",
  "actions": [
    {
      "inputArtifacts": [
        {
          "name": "APIServer"
        }
      ],
      "name": "Build",
      "actionTypeId": {
        "category": "Build",
        "owner": "Custom",
        "version": "1",
        "provider": "jenkins-api-server"
      },
      "outputArtifacts": [
        {
          "name": "APIServerTarball"
        }
      ],
      "configuration": {
        "ProjectName": "api-server-build"
      },
      "runOrder": 1
    }
  ]
},
{
I couldn't find any other command to manage the build provider either. So my question is: where and how should I update the existing build provider's configuration in AWS CodePipeline?
The Jenkins action is actually defined as a custom action in your account. If you want to update the action configuration, you can define a new version using the create-custom-action-type API. Your changes will be a new "version" of the action type, so you then update the actionTypeId in your pipeline to point to your new version.
Once you're done, you can also delete the old version to prevent it from appearing in the action list. A sketch of that flow with the AWS CLI is below.
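This sketch reuses the provider name from your pipeline; the URL templates, configuration property, and artifact counts are assumptions you would adapt to match your existing action:

aws codepipeline create-custom-action-type \
  --category Build \
  --provider jenkins-api-server \
  --version 2 \
  --settings entityUrlTemplate=https://new-jenkins.example.com/job/{Config:ProjectName},executionUrlTemplate=https://new-jenkins.example.com/job/{Config:ProjectName}/{ExternalExecutionId} \
  --configuration-properties name=ProjectName,required=true,key=true,secret=false,type=String \
  --input-artifact-details minimumCount=0,maximumCount=5 \
  --output-artifact-details minimumCount=0,maximumCount=5

After switching the pipeline's actionTypeId to "version": "2", remove the old definition:

aws codepipeline delete-custom-action-type --category Build --provider jenkins-api-server --version 1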
Regarding the Jenkins URL changing, one solution to this is to set up a DNS record (e.g. via Route53) pointing to your Jenkins instance and use the DNS hostname in your action configuration. That way you can remap the DNS record in the future without updating your pipeline.
I'm trying to create an S3 bucket in a specific region (us-west-2).
This doesn't seem possible using CloudFormation templates.
Any ideas? I have had no luck explicitly naming it using the service-region-hash convention that I read about.
Template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "PublicReadWrite"
      }
    }
  },
  "Outputs": {
    "BucketName": {
      "Value": {
        "Ref": "S3Bucket"
      },
      "Description": "Name of the sample Amazon S3 bucket with a lifecycle configuration."
    }
  }
}
If you want to create the S3 bucket in a specific region, you need to run the CloudFormation template in that region.
The endpoint URLs for CloudFormation are region-based (https://aws.amazon.com/cloudformation/faqs/?nc1=h_ls#regions and http://docs.aws.amazon.com/general/latest/gr/rande.html#cfn_region) and, as of today, you cannot create resources cross-region.
I have seen this discussed a few times on AWS support threads, so it might be something that will be done in the future.
The bucket is created within one region, and when you need to access it you pass that region to the S3 endpoint.
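For example, to create the stack (and therefore the bucket) in us-west-2, pass the region explicitly to the CLI (the stack name and template file are placeholders):

aws cloudformation create-stack \
  --stack-name my-bucket-stack \
  --template-body file://template.json \
  --region us-west-2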
If you want to enable cross-region replication, you can do that from the CloudFormation template with the ReplicationConfiguration property.
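A minimal sketch of that property (the replica bucket and IAM role are hypothetical, and replication requires versioning to be enabled on the source bucket):

"S3Bucket": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "VersioningConfiguration": { "Status": "Enabled" },
    "ReplicationConfiguration": {
      "Role": { "Fn::GetAtt": ["ReplicationRole", "Arn"] },
      "Rules": [
        {
          "Status": "Enabled",
          "Prefix": "",
          "Destination": { "Bucket": "arn:aws:s3:::my-replica-bucket" }
        }
      ]
    }
  }
}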