cdk deploy deletes the bootstrap version SSM parameter

Here's how I'm instantiating my stack:
new LambdaStack(new App(), 'LambdaStack', {
  env: { account: AWS_ACCOUNT_ID, region: 'us-east-1' },
  synthesizer: new DefaultStackSynthesizer({
    qualifier: 'lambda-stk',
  }),
  stackName: 'LambdaStack',
});
First I ensure that my ~/.aws/credentials file has the correct credentials. Then I bootstrap:
npx cdk bootstrap --qualifier lambda-stk --toolkit-stack-name LambdaStack aws://ACCOUNT_ID_HERE/us-east-1
Everything looks good in the console. Then, I deploy:
npx cdk deploy --require-approval never
Everything still looks good in the console -- the lambdas have been created as I expected, etc.
Then, I simply run the same deploy command again without changing anything and I get this error:
LambdaStack failed: Error: LambdaStack: SSM parameter /cdk-bootstrap/lambda-stk/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap' (see https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html)
Upon further investigation, it seems that the bootstrap command properly creates the referenced SSM parameter but then the first deploy command deletes that parameter. Why would it do that and how can I fix this problem?

Fixed it by naming the bootstrap stack something different from LambdaStack. I was under the impression that the bootstrap command was spinning up the stack that the "main" stack would use, but it's actually a completely separate stack. Because both stacks shared the name LambdaStack, the first deploy updated the bootstrap stack with the app's template, removing the bootstrap resources, including the /cdk-bootstrap/lambda-stk/version SSM parameter. So I changed the bootstrap command to:
npx cdk bootstrap --qualifier lambda-stk --toolkit-stack-name LambdaStackCDKToolkit aws://ACCOUNT_ID_HERE/us-east-1
And it worked.

How do I best do local environment variables with CDK & SAM?

Thanks in advance for any help/guidance you can provide.
Context
I am currently using CDK in a project to create AWS resources (a few Lambda functions) and SAM to test locally. This works wonderfully, but I'm struggling with how to handle environment variables locally with my CDK + SAM setup.
I run and test the project locally via the command
$ cdk synth --no-staging > template.yaml && sam local start-api
Deployments are done via
$ cdk deploy testStack123 --context secretToken=123
The issue came when I had to include (locally) a sensitive token required by my project, and I couldn't figure out how to separate the two the way you would in a project that only uses AWS SAM, where you can define:
your local environment variables via an env.json file
and the environment variables you want to use for deployment, which you'd pass in via
$ sam deploy --stack-name=testStack123 --secretToken=123
What I tried
SAM's --env-vars option, such as:
$ sam local start-api --env-vars env.json
but since I'm not managing the template.yaml myself and am instead relying on CDK's synth command to output the CloudFormation template, there is no way I can reliably reference the Lambda function names in env.json to pass local environment variables via --env-vars env.json.
// env.json example
{
  "TestLambdaFunction": { // Will fail, as it's referenced in template.yaml as e.g. TestLambdaFunction67CA3BED
    "SECRET_TOKEN": "123",
    "ENVIRONMENT": "test"
  }
}
I tried the runtime context via cdk.json that the AWS team suggests for CDK environments, but I noticed that the cdk.json file also has to be pushed to the repo, so you always run the risk of a dev not noticing that they're accidentally staging and pushing sensitive tokens to the repo. This solution would work for both CI and local dev, but it comes with the risk mentioned above.
Any advice on how best to solve this, so that local environment variables can safely be passed via a (git)ignored file such as env.json, while still correctly referencing the Lambdas emitted in the CloudFormation template synthesised by cdk synth?
The generated Logical IDs like TestLambdaFunction67CA3BED are stable (unless you change the Construct ID or the construct tree), so they are generally fine to use in env.json. Alternatively, you can place all the env vars under a Parameters key:
// env.json
{
  "Parameters": {
    "TABLE_NAME": "localtable",
    "BUCKET_NAME": "testBucket",
    "STAGE": "dev"
  }
}
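If you'd rather not depend on the generated suffix at all, another option (a sketch, not part of the answer above; it assumes aws-cdk-lib v2 and an illustrative function definition) is to pin the logical ID with the overrideLogicalId escape hatch so env.json can use a fixed key:
import * as lambda from 'aws-cdk-lib/aws-lambda';

// Inside your Stack class:
const testFn = new lambda.Function(this, 'TestLambdaFunction', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda'),
});

// Force the CloudFormation logical ID to match the key used in env.json.
(testFn.node.defaultChild as lambda.CfnFunction).overrideLogicalId('TestLambdaFunction');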

Watching TypeScript CDK Builds

I need to be able to watch for my TypeScript lambda function changes within my CDK app. I'm using SAM to locally invoke the API and do not want to deploy to the cloud each time changes happen. So using something such as SAM Accelerate, for example, is not an option.
Currently, I must run cdk build and sam local start-api manually each time I change a single line in my function code, and it takes a painfully long time to start.
Any solutions or workarounds for this?
You need a TypeScript watch feature with a hook to run arbitrary post-compile commands.* TypeScript's tsc --watch can't do it (there's an open issue), but the tsc-watch package can:
tsc-watch --onSuccess "./start-api.sh"
tsc-watch will call start-api.sh after each successful compile, synthesizing a SAM-friendly version of the template and starting the local testing API:
# start-api.sh
STACK_NAME=MyStack
npx cdk synth $STACK_NAME -a 'ts-node ./bin/app.ts' --no-staging --no-validation --quiet --output cdk.local
sam local start-api --template cdk.local/$STACK_NAME.template.json
* cdk watch (an alias of cdk deploy --watch) won't work in your case, because you don't want to deploy on each change.
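To avoid retyping the watch command, it can also live in an npm script (just a sketch, assuming tsc-watch is installed as a dev dependency and start-api.sh is executable; the script name is arbitrary):
// package.json (sketch)
{
  "scripts": {
    "watch": "tsc-watch --onSuccess ./start-api.sh"
  }
}
Then npm run watch recompiles and restarts the local API after every change.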

(@aws-cdk/aws-codepipeline) - Using "ShellScriptAction" equivalent in a Pipeline made with @aws-cdk/aws-codepipeline

Here's my problem. I'm currently struggling to run a basic shell script execution action in my pipeline.
The pipeline was created through the Pipeline construct in @aws-cdk/aws-codepipeline:
import { Artifact, IAction, Pipeline } from "@aws-cdk/aws-codepipeline";
const pipeline = new Pipeline(this, "backend-pipeline", {
  ...
});
Now, I'm running a cross deployment pipeline and would like to invoke a Lambda right after it's been created. Before, a simple ShellScriptAction would've sufficed in the older (@aws-cdk/pipelines) package, but for some reason both packages, pipelines and aws-codepipeline, are maintained at the same time.
What I would like to know is how to run a simple basic command in the new (aws-codepipeline) package, ideally as an Action in a Stage.
Thanks in advance!
You would use a codebuild.PipelineProject in a codepipeline_actions.CodeBuildAction to run arbitrary shell commands in your pipeline. The CDK has several build tool constructs*, used in different places. pipelines.CodePipeline specializes in deploying CDK apps, while the lower-level codepipeline.Pipeline has broader build capabilities:
Build tool construct | CloudFormation Resource | Can use where?
pipelines.ShellStep | AWS::CodeBuild::Project | pipelines.CodePipeline
pipelines.CodeBuildStep | AWS::CodeBuild::Project | pipelines.CodePipeline
codebuild.PipelineProject | AWS::CodeBuild::Project | codepipeline.Pipeline
codebuild.Project | AWS::CodeBuild::Project | codepipeline.Pipeline or standalone
In your case, the setup goes Pipeline > Stage > CodeBuildAction > PipelineProject.
// add to stage actions
new codepipeline_actions.CodeBuildAction({
  actionName: 'CodeBuild',
  project: new codebuild.PipelineProject(this, 'PipelineProject', {
    buildSpec: codebuild.BuildSpec.fromObject({
      version: '0.2',
      phases: {
        build: { commands: ['echo "[project foo] $PROJECT_FOO"'] },
      },
    }),
    environmentVariables: {
      PROJECT_FOO: {
        value: 'Foo',
        type: codebuild.BuildEnvironmentVariableType.PLAINTEXT,
      },
    },
  }),
  input: sourceOutput,
});
* The ShellScriptAction you mentioned is another one, now deprecated in v1 and removed from v2.
@aws-cdk/aws-codepipeline is for AWS CodePipeline. @aws-cdk/pipelines is for utilizing AWS CodePipeline to deploy CDK apps. Read more about the package and its justification here.
Regarding your question, you have some options here.
First of all, if you're looking for a simple CodeBuild action to run arbitrary commands, you can use CodeBuildAction.
There's also a separate action specifically for invoking a Lambda: LambdaInvokeAction.
Both are part of the @aws-cdk/aws-codepipeline-actions module.
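A minimal sketch of the Lambda route (pipeline and invokeTarget are assumed to exist already, and the stage/action names are illustrative):
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';

// Add a stage that invokes the function once the earlier stages have created it.
pipeline.addStage({
  stageName: 'PostDeploy',
  actions: [
    new codepipeline_actions.LambdaInvokeAction({
      actionName: 'InvokeAfterDeploy',
      lambda: invokeTarget, // an existing lambda.IFunction
    }),
  ],
});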

How to use AWS Amplify with Sapper?

My goal is to use AWS Amplify in a Sapper project.
Creating a Sapper project from scratch (using webpack), adding AWS Amplify, and running it in dev works, but running it in production throws a GraphQL error in the console (Uncaught Error: Cannot use e "__Schema" from another module or realm).
Fixing this error throws another one (Uncaught ReferenceError: process is not defined).
A suggested fix is to upgrade GraphQL from 0.13.0 to 14.0.0; unfortunately, GraphQL 0.13.0 is an AWS Amplify API dependency.
Does anyone know what can be done to get AWS Amplify to work with Sapper in production?
The link to the repo containing the source files is located here: https://github.com/ehemmerlin/sapper-aws-amplify
(Apologies for the long post but I want to be explicit)
Detailed steps
1/ Create a Sapper project using webpack (https://sapper.svelte.dev).
npx degit "sveltejs/sapper-template#webpack" my-app
cd my-app
yarn install
2/ Add AWS Amplify (https://serverless-stack.com/chapters/configure-aws-amplify.html) and lodash
yarn add aws-amplify
yarn add lodash
3/ Configure AWS Amplify (https://serverless-stack.com/chapters/configure-aws-amplify.html)
Create a src/config/aws.js config file containing the following (replace the values with yours, though it works as-is for the purposes of this post):
export default {
  s3: {
    REGION: "YOUR_S3_UPLOADS_BUCKET_REGION",
    BUCKET: "YOUR_S3_UPLOADS_BUCKET_NAME"
  },
  apiGateway: {
    REGION: "YOUR_API_GATEWAY_REGION",
    URL: "YOUR_API_GATEWAY_URL"
  },
  cognito: {
    REGION: "YOUR_COGNITO_REGION",
    USER_POOL_ID: "YOUR_COGNITO_USER_POOL_ID",
    APP_CLIENT_ID: "YOUR_COGNITO_APP_CLIENT_ID",
    IDENTITY_POOL_ID: "YOUR_IDENTITY_POOL_ID"
  }
};
Add the following code to the existing code in src/client.js:
import Amplify from 'aws-amplify';
import config from './config/aws';

Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: config.cognito.REGION,
    userPoolId: config.cognito.USER_POOL_ID,
    identityPoolId: config.cognito.IDENTITY_POOL_ID,
    userPoolWebClientId: config.cognito.APP_CLIENT_ID
  },
  Storage: {
    region: config.s3.REGION,
    bucket: config.s3.BUCKET,
    identityPoolId: config.cognito.IDENTITY_POOL_ID
  },
  API: {
    endpoints: [
      {
        name: "notes",
        endpoint: config.apiGateway.URL,
        region: config.apiGateway.REGION
      },
    ]
  }
});
4/ Test it
In dev (yarn run dev): it works
In production (yarn run build; node __sapper__/build): it throws an error.
Uncaught Error: Cannot use e "__Schema" from another module or realm.
Ensure that there is only one instance of "graphql" in the node_modules
directory. If different versions of "graphql" are the dependencies of other
relied on modules, use "resolutions" to ensure only one version is installed.
https://yarnpkg.com/en/docs/selective-version-resolutions
Duplicate "graphql" modules cannot be used at the same time since different
versions may have different capabilities and behavior. The data from one
version used in the function from another could produce confusing and
spurious results.
5/ Fix it
Following the given link (https://yarnpkg.com/en/docs/selective-version-resolutions), I added this to the package.json file:
"resolutions": {
  "aws-amplify/**/graphql": "^0.13.0"
}
6/ Test it
rm -rf node_modules; yarn install
Throws another error in the console (even in dev mode).
Uncaught ReferenceError: process is not defined
at Module../node_modules/graphql/jsutils/instanceOf.mjs (instanceOf.mjs:3)
at __webpack_require__ (bootstrap:63)
at Module../node_modules/graphql/type/definition.mjs (definition.mjs:1)
at __webpack_require__ (bootstrap:63)
at Module../node_modules/graphql/type/validate.mjs (validate.mjs:1)
at __webpack_require__ (bootstrap:63)
at Module../node_modules/graphql/graphql.mjs (graphql.mjs:1)
at __webpack_require__ (bootstrap:63)
at Module../node_modules/graphql/index.mjs (main.js:52896)
at __webpack_require__ (bootstrap:63)
A fix given by this thread (https://github.com/graphql/graphql-js/issues/1536) is to upgrade GraphQL from 0.13.0 to 14.0.0; unfortunately, GraphQL 0.13.0 is an AWS Amplify API dependency.
When building my project (I'm using npm and webpack), I got this warning,
WARNING in configuration
The 'mode' option has not been set, webpack will fallback to 'production' for this value. Set 'mode' option to 'development' or 'production' to enable defaults
for each environment.
You can also set it to 'none' to disable any default behavior. Learn more: https://webpack.js.org/configuration/mode/
which seems to be related to the schema error, as these posts indicate that the quickest fix for the error is to set NODE_ENV to production in your environment (mode is set to the NODE_ENV environment variable in the webpack config):
https://github.com/aws-amplify/amplify-js/issues/1445
https://github.com/aws-amplify/amplify-js/issues/3963
How to do that:
How to set NODE_ENV to production/development in OS X
How can I set NODE_ENV=production on Windows?
Or you can mess with the webpack config directly:
https://gist.github.com/jmannau/8039787e29190f751aa971b4a91a8901
Unfortunately some posts in those GitHub issues point out the environment variable change might not work out for a packaged app, specifically on mobile.
These posts suggest that disabling the mangler might be the next best solution:
https://github.com/graphql/graphql-js/issues/1182
https://github.com/rmosolgo/graphiql-rails/issues/58
For anyone just trying to get the basic Sapper and Amplify setup going, to reproduce this error or otherwise, I built up mine with:
npm install -g @aws-amplify/cli
npx degit "sveltejs/sapper-template#webpack" my-app
npm install
npm install aws-amplify
npm install lodash (Amplify with webpack seems to need this)
amplify configure
npm run build
amplify init (dev environment, VS Code, javascript, no framework, src directory, __sapper__\build distribution directory, default AWS profile). This generates aws-exports.js.
In src/client.js:
import Amplify from 'aws-amplify';
import awsconfig from './aws-exports';
Amplify.configure(awsconfig);
npm run build

How do I run my CDK app?

I created and built a new CDK project:
mkdir myproj
cd myproj
cdk init --language typescript
npm run build
If I try to run the resulting javascript, I see the following:
PS C:\repos\myproj> node .\bin\myproj.js
CloudExecutable/1.0
Usage:
C:\repos\myproj\bin\myproj.js REQUEST
REQUEST is a JSON-encoded request object.
What is the right way to run my app?
You don't need to run your CDK programs directly; use the CDK Toolkit instead.
To synthesize an AWS CloudFormation template from your app:
cdk synth --app "node .\bin\myproj.js"
To avoid re-typing the --app switch every time, you can set up a cdk.json file with:
{ "app": "node .\bin\myproj.js" }
Note: A default cdk.json is created by cdk init, so you should already see it under C:\repos\myproj.
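For reference, the cdk.json generated for a TypeScript project typically points at ts-node rather than the compiled output, so cdk synth and cdk deploy work without running node yourself (the exact content varies by CDK version; this is just an illustrative sketch):
// cdk.json (sketch)
{
  "app": "npx ts-node bin/myproj.ts"
}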
You can also use the toolkit to deploy your app into an AWS environment:
cdk deploy
Or list all the stacks in your app:
cdk ls
The CDK application expects a request to be provided as a positional CLI argument when you're using the low-level API (aka running the app directly), for example:
node .\bin\myproj.js '{"type":"list"}'
It can also be passed as a Base64-encoded blob instead (that can make quoting the JSON less painful in a number of cases) - the Base64 needs to be prefixed with base64: in this case.
node .\bin\myproj.js base64:eyAidHlwZSI6ICJsaXN0IiB9Cg==
To determine which APIs are available and what arguments they expect, you can refer to the @aws-cdk/cx-api specification.