After migrating the Amplify CLI and AppSync Transformer to v2, no longer able to access CloudFormation parameters from a custom resource - amazon-web-services

We have an AWS Amplify project whose API I am in the process of migrating from Transformer v1 to v2.
As part of this, we have a number of custom resolvers that previously each had their own stack JSON template in the stacks/ folder, as generated by the Amplify CLI.
As per the migration instructions, I have created new custom resources using amplify add custom, which lets me create either a CDK (Cloud Development Kit) resource or a CloudFormation template. I just want a lift-and-shift for now, so I've gone with the template option and moved the content from the old stack JSON into the new custom resolver JSON template.
This seems like it should work, but the custom templates no longer have access to the parameters shared from the parent stack:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Metadata": {},
  "Parameters": {
    "AppSyncApiId": {
      "Type": "String",
      "Description": "The id of the AppSync API associated with this project."
    },
    "S3DeploymentBucket": {
      "Type": "String",
      "Description": "The S3 bucket containing all deployment assets for the project."
    },
    "S3DeploymentRootKey": {
      "Type": "String",
      "Description": "An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory."
    }
  },
  ...
}
These are the standard parameters that were used previously, and my challenge now is accessing the deployment bucket and root key, since those values are generated at deployment time.
The exact use case is for the AppSync function configuration when I attempt to locate the request and response mapping template S3 locations:
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/MyCustomResolver.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
The error message I am receiving is
AWS::CloudFormation::Stack Tue Feb 15 2022 18:20:42 GMT+0000 (Greenwich Mean Time) Parameters: [S3DeploymentBucket, AppSyncApiId, S3DeploymentRootKey] must have values
I feel like I am missing a step to plumb the output values to the parameters in the JSON but I can't find any documentation to suggest how to do this using the updated Amplify CLI options.
Let me know if you need any further information and fingers crossed it is something simple for you Amplify/CloudFormation ninjas out there!
Thank you in advance!

Related

Not able to create AWS Lambda using CloudFormation without code

I am trying to set up a resource pipeline, where I want to deploy all my resources using CloudFormation. I have a separate pipeline to deploy code.
Using the CloudFormation template below:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain",
      "Properties": {
        "BucketName": "devanimalhubstorage"
      }
    },
    "HelloLambdaRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "RoleName": "HelloLambdaRole",
        "AssumeRolePolicyDocument": {
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": "lambda.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        }
      }
    },
    "AnimalhubLambdaFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "FunctionName": "AnimalhubLambdaFunction",
        "Role": {
          "Fn::GetAtt": ["HelloLambdaRole", "Arn"]
        },
        "Code": {},
        "Runtime": "dotnetcore2.1",
        "Handler": "myfirstlambda::myfirstlambda.Function::FunctionHandler"
      }
    }
  }
}
Problem: Resource handler returned message: "Please provide a source for function code. (Service: Lambda, Status Code: 400, Request ID: 377fff66-a06d-495f-823e-34aec70f0e22, Extended Request ID: null)" (RequestToken: 9c9beee7-2c71-4d5d-e4a8-69065e12c5fa, HandlerErrorCode: InvalidRequest)
I want to build a separate pipeline for code build and deployment. Can't we deploy a Lambda function without code?
What is the recommended solution in AWS? (I used to follow this approach in Azure; I am new to AWS.)
First, I would advise you to look into AWS SAM. It is very helpful when creating and deploying serverless applications and will have a lot of examples to help you with your use case.
Second, using separate pipelines for this purpose is not the recommended way in AWS. Using dummy code, as the other answer suggests, is also quite dangerous, since an update to your CloudFormation stack would overwrite any other code that you have deployed to the Lambda function using your other pipeline.
In a serverless application like this, you could split things into two or more CloudFormation stacks. For example, you could create your S3 buckets and other more "stable" infrastructure in one stack, and deploy it either manually or in a pipeline, then deploy your code in a separate pipeline using another CloudFormation stack. Any values (ARNs etc.) needed from the more stable resources can be injected as parameters into the template, or referenced with CloudFormation's ImportValue function. I'd personally recommend using parameters, since they are more flexible for future changes.
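As a minimal sketch of the parameter approach (the stack split and the names SharedBucketName, AppFunctionRole and ReadSharedBucket are made up for illustration), the stable stack would output the bucket name, and the code stack would accept it as a plain parameter and reference it wherever needed:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative code stack that receives the bucket name from the stable stack as a parameter.",
  "Parameters": {
    "SharedBucketName": {
      "Type": "String",
      "Description": "Name of the S3 bucket created by the stable infrastructure stack."
    }
  },
  "Resources": {
    "AppFunctionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": { "Service": "lambda.amazonaws.com" },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "Policies": [
          {
            "PolicyName": "ReadSharedBucket",
            "PolicyDocument": {
              "Statement": [
                {
                  "Effect": "Allow",
                  "Action": "s3:GetObject",
                  "Resource": { "Fn::Sub": "arn:aws:s3:::${SharedBucketName}/*" }
                }
              ]
            }
          }
        ]
      }
    }
  }
}
The code pipeline then supplies the bucket name at deploy time, for example via --parameter-overrides, the same way the condition parameter is passed in the conditional-bucket example further down this page.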

How to conditionally create s3 bucket in cloudformation?

We have a CloudFormation stack that contains data storage resources like RDS and S3. We want to preserve them even if the stack is deleted, and when upgrading the stack a few parameters can cause the services to be re-created, so we have set the deletion policy to Retain. Now when I re-run the stack it is unable to create the resources, because resources with the same name already exist.
I thought of creating the resources only after checking whether they exist. Is it possible for CloudFormation to do this check by itself? I also realized the retained resources have become rogue resources, as they are no longer part of this CloudFormation stack.
What would be the best approach to get around this?
You have to import these buckets back into your stack.
Sadly, CFN does not have functionality to check whether something exists and perform the import automatically by itself, so you have to do it manually using the AWS console, or programmatically using the SDK or CLI.
This is a complex question in general, and without knowledge of your setup and specific CF template, it's hard to give specific advice. I will address some general points and suggested actions.
"when I re-run the stack it is unable to create the resources because the resources with the same name exist" - This is common problem with named resources. As a general guidance, avoid statically naming those resources, with which no human operators will interact. If you have strict naming convention in place, then always add random suffix to names, which will make them unique to that stack.
If you have storage resources, i.e. S3 buckets and DBs, then consider putting them into separate CF stack and expose relevant attributes (like DB endpoint address) as CF exports. You can reference them in other stacks by using Fn::ImportValue. This way you can separate "stable" resources from "volatile" ones and still have benefits of CF.
You can import some existing resources into CF stacks. They will act as a legitimate part of the stack. But not all resources can be imported like this.
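A rough sketch of the export/import split mentioned above (the stack contents and the export name shared-data-bucket-name are placeholders): the storage stack exports the bucket name as an output, and any other stack in the same account and region can read it with Fn::ImportValue.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative storage stack that exports its bucket name.",
  "Resources": {
    "SharedDataBucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain"
    }
  },
  "Outputs": {
    "SharedDataBucketName": {
      "Value": { "Ref": "SharedDataBucket" },
      "Export": { "Name": "shared-data-bucket-name" }
    }
  }
}
Another stack would then use { "Fn::ImportValue": "shared-data-bucket-name" } wherever it needs the bucket name. CloudFormation will also refuse to delete the export while another stack still imports it, which is a useful safety net for exactly this kind of retained-data scenario.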
I think yes, you can do this by putting a condition in your CloudFormation template, whether in JSON or YAML:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "",
  "Parameters": {
    "ShouldCreateBucketInputParameter": {
      "Type": "String",
      "AllowedValues": [
        "true",
        "false"
      ],
      "Description": "If true then the S3 bucket that will be proxied will be created with the CloudFormation stack."
    }
  },
  "Conditions": {
    "CreateS3Bucket": {
      "Fn::Equals": [
        {
          "Ref": "ShouldCreateBucketInputParameter"
        },
        "true"
      ]
    }
  },
  "Resources": {
    "SerialNumberBucketResource": {
      "Type": "AWS::S3::Bucket",
      "Condition": "CreateS3Bucket",
      "Properties": {
        "AccessControl": "Private"
      }
    }
  }
}
And then using the CLI you just need to set "true" or "false":
aws cloudformation deploy --template-file ./s3BucketWithCondition.json --stack-name bucket-stack --parameter-overrides ShouldCreateBucketInputParameter="true"

Dataflow logs from Stackdriver

The resource.labels.region field for the dataflow_step logs in Stackdriver points to global, even though the specified regional endpoint is europe-west2.
Any idea what exactly it is pointing to?
Once you've supplied the GCP Logs Viewer with the desired filter, the simplest query based on your inputs, targeting the dataflow_step resource type, would be:
resource.type="dataflow_step"
resource.labels.region="europe-west2"
You would then observe query results retrieved from the Cloud Dataflow REST API, consisting of log entries formatted as JSON output for all Dataflow jobs residing within your GCP project in the europe-west2 regional endpoint:
{
  "insertId": "insertId",
  "jsonPayload": {
    ....
    "message": "Message content",
    ....
  },
  "resource": {
    "type": "dataflow_step",
    "labels": {
      "job_id": "job_id",
      "region": "europe-west2",
      "job_name": "job_name",
      "project_id": "project_id",
      "step_id": "step_id"
    }
  },
  "timestamp": "timestamp",
  "severity": "severity_level",
  "labels": {
    "compute.googleapis.com/resource_id": "resource_id",
    "dataflow.googleapis.com/job_id": "job_id",
    "compute.googleapis.com/resource_type": "resource_type",
    "compute.googleapis.com/resource_name": "resource_name",
    "dataflow.googleapis.com/region": "europe-west2",
    "dataflow.googleapis.com/job_name": "job_name"
  },
  "logName": "logName",
  "receiveTimestamp": "receiveTimestamp"
}
According to the GCP logging service documentation, each monitored resource type derives particular labels from the underlying service API; dataflow.googleapis.com corresponds to the Dataflow service.
Therefore, if you run a Dataflow job and define the location for the job's metadata region, the GCP logging service will pick up this regional endpoint from the job description via the dataflow.googleapis.com REST methods.
The resource.labels.region field on Dataflow Step logs should refer to the regional endpoint that the job is using. "Global" is not an expected value there.

How to update a previously created AWS CodePipeline Build Provider?

I had previously created a Jenkins build provider using the CodePipeline console. During creation, it asks for a Jenkins server URL.
Now I need to change my Jenkins server URL, but when I try to edit, there isn't any option to change the build provider. See the snapshot below:
The only solution I see is to add a new one.
I tried to get the pipeline using the AWS CLI:
aws codepipeline get-pipeline --name <pipeline-name>
But the JSON response just has a reference to the build provider:
...
},
{
  "name": "Build",
  "actions": [
    {
      "inputArtifacts": [
        {
          "name": "APIServer"
        }
      ],
      "name": "Build",
      "actionTypeId": {
        "category": "Build",
        "owner": "Custom",
        "version": "1",
        "provider": "jenkins-api-server"
      },
      "outputArtifacts": [
        {
          "name": "APIServerTarball"
        }
      ],
      "configuration": {
        "ProjectName": "api-server-build"
      },
      "runOrder": 1
    }
  ]
},
{
I couldn't find any other command to manage the build provider either. So my question is: where and how should I update the existing build provider's configuration in AWS CodePipeline?
The Jenkins action is actually defined as a custom action in your account. If you want to update the action configuration you can define a new version using the create custom action type API. Your changes will be a new "version" of the action type, so you then update the actionTypeId in your pipeline to point to your new version.
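For example (the version number 2 below is hypothetical; the rest mirrors the fragment above), the updated pipeline JSON would simply reference the new action type version:
"actionTypeId": {
  "category": "Build",
  "owner": "Custom",
  "version": "2",
  "provider": "jenkins-api-server"
},
You would edit the output of aws codepipeline get-pipeline accordingly and push it back with aws codepipeline update-pipeline (the get-pipeline output includes a metadata section that needs to be removed first).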
Once you're done, you can also delete the old version to prevent it from appearing in the action list.
Regarding the Jenkins URL changing, one solution to this is to set up a DNS record (e.g. via Route53) pointing to your Jenkins instance and use the DNS hostname in your action configuration. That way you can remap the DNS record in the future without updating your pipeline.
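If you go that route, a sketch of such a record managed in CloudFormation could look like the following (the hosted zone example.com and the hostnames are made up):
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative CNAME for a Jenkins server, managed via Route53.",
  "Resources": {
    "JenkinsDnsRecord": {
      "Type": "AWS::Route53::RecordSet",
      "Properties": {
        "HostedZoneName": "example.com.",
        "Name": "jenkins.example.com.",
        "Type": "CNAME",
        "TTL": "300",
        "ResourceRecords": ["ec2-203-0-113-10.compute-1.amazonaws.com"]
      }
    }
  }
}
The pipeline action would then be configured with http://jenkins.example.com, and only the DNS record needs to change if the Jenkins server moves.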

AWS Cloudformation Template - Set Region in S3 Bucket

I'm trying to create an s3 bucket in a specific region (us-west-2).
This doesn't seem possible using CloudFormation templates.
Any ideas? I have had no luck explicitly naming it using the service-region-hash convention that I read about.
Template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "PublicReadWrite"
      }
    }
  },
  "Outputs": {
    "BucketName": {
      "Value": {
        "Ref": "S3Bucket"
      },
      "Description": "Name of the sample Amazon S3 bucket with a lifecycle configuration."
    }
  }
}
If you want to create the S3 bucket in a specific region, you need to run the CloudFormation JSON template from that region.
The endpoint URLs for CloudFormation are region-based (https://aws.amazon.com/cloudformation/faqs/?nc1=h_ls#regions and http://docs.aws.amazon.com/general/latest/gr/rande.html#cfn_region), and as of today you cannot create resources across regions.
I saw it has been discussed a few times on AWS support threads, so it might be something that will be supported in the future.
The bucket is created within one region, and when you need to access it you pass that region to the S3 endpoint.
If you want to enable cross-region replication, you can do that from the CloudFormation template with the ReplicationConfiguration property.
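For reference, a rough sketch of what that could look like (the bucket names and the replication role are placeholders; both buckets need versioning enabled, and the destination bucket and IAM role must already exist):
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative source bucket with cross-region replication.",
  "Resources": {
    "SourceBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "VersioningConfiguration": { "Status": "Enabled" },
        "ReplicationConfiguration": {
          "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
          "Rules": [
            {
              "Id": "ReplicateEverything",
              "Status": "Enabled",
              "Prefix": "",
              "Destination": { "Bucket": "arn:aws:s3:::my-replica-bucket-us-east-1" }
            }
          ]
        }
      }
    }
  }
}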