Trigger CDK pipeline via API request (pass params) - amazon-web-services

I have created a CDK app that provisions a custom VPC and a bunch of subnets based on environment variables I pass into it.
My use case is: I want to trigger this stack via an API request (from an admin UI / SPA client) to provision infrastructure. The API request will contain all the necessary parameters required to initiate the CDK stack.
I've also created a CI/CD pipeline (CodePipeline / CodeBuild) for the CDK app, but I'm not sure how to trigger it without changing the actual source repo.
What's the best way to trigger a CDK build and pass the relevant environment variables?

Related

How to export already created (via web console) services to CDK locally and be able to deploy/update them?

Current situation:
I have an AWS API Gateway referencing some AWS Lambdas, and some Lambdas querying a DynamoDB table.
All of the above were created and are handled manually via the AWS web console. There's no CloudFormation template for that.
^ I want to be able to have that locally too, using CDK:
I want to apply some healthy developer procedures and create a CDK file system locally, for testing, managing deployment, and managing versioning via GitHub or whatever AWS has to offer in that field (didn't get to that part yet).
I noticed that there is practically no information on how to do that. Most tutorials follow a situation where:
I am creating a CDK app from scratch locally,
or I already have a CloudFormation structure.
Please help me figure out the best, proper way to do that.
Some things that came up but that I didn't actually do:
Do I just init a CDK app and name services the same as my current services to "take over" them?
Or will they get overwritten (= total disaster)?
Is there a way to export a code sample for each service I currently have and connect them with each other?
You must import the existing resources into CDK.
https://link.medium.com/1RbcEdal4wb
TL;DR Use cdk import for supported resources. Re-create the RestApi from an exported OpenApi Definition.
cdk import
The CDK has experimental import functionality to bring existing console-created resources under CDK management. The cdk import CLI command piggybacks off the related CloudFormation resource import operation.
Not all resources support the import operation. The AWS::Lambda::Function and AWS::DynamoDB::Table resources are supported. You must also consider secondary resources, such as the Lambda's execution role (AWS::IAM::Role is supported for importing).
Resource importing starts with manually configuring a CDK stack that matches the existing cloud-side configuration. To guide your work, say, on recreating a dynamodb.Table in CDK, consider running the DescribeTable API to get a dump of the current configuration. Because of the manual work involved, it's wise to focus your importing energies on stateful resources and consider simply destroying and recreating stateless resources.
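For example, a minimal sketch of a Table construct whose properties mirror an existing console-created table; every name and attribute here is hypothetical and must be replaced with the values DescribeTable actually reports:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { Construct } from 'constructs';

export class ImportedTableStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Property values must match the live table exactly, as reported by DescribeTable.
    new dynamodb.Table(this, 'OrdersTable', {
      tableName: 'orders', // hypothetical name of the existing table
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      removalPolicy: cdk.RemovalPolicy.RETAIN, // keep the data if the stack is ever deleted
    });
  }
}
```

With a stack like this in the app, running cdk import for that stack prompts CloudFormation to adopt the existing table instead of creating a new one.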
Once the app is complete, run the cdk import command. After that, the imported resources can be modified like any other CDK resource.
CDK RestApi from an exported OpenApi Definition
AWS::ApiGateway::RestApi is not on the list of supported resources for import. A Plan B for your API Gateway is to export your API as an OpenAPI definition. Then pass the JSON as the API definition to the CDK RestApi construct with the ApiDefinition.fromAsset method. This will create a new API, not import it per se.
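A minimal sketch of that approach, using the SpecRestApi construct (which accepts an ApiDefinition); the file path and stage name are assumptions:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import { Construct } from 'constructs';

export class RecreatedApiStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // 'openapi/exported-api.json' is an assumed path to the OpenAPI definition
    // exported from the console-created API.
    new apigateway.SpecRestApi(this, 'RecreatedRestApi', {
      apiDefinition: apigateway.ApiDefinition.fromAsset('openapi/exported-api.json'),
      deployOptions: { stageName: 'prod' },
    });
  }
}
```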
Although the RestApi resource itself is not supported for importing, many related AWS::ApiGateway resources are, such as Resource, Stage, Model and Method.
See the AWS guide How do I migrate API Gateway REST APIs between AWS accounts or Regions? for a non-CDK approach.

How to configure AWS Systems Manager Session Manager with CDK

Is there a way to configure the Session Manager via CDK?
I want to change settings like enabling KMS encryption and the max session duration, as well as writing session data to an S3 bucket. The online documentation from AWS (https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-configure-preferences.html) only describes manual steps via the console. However, everything is set up via CDK in my case, and I also want those things configured via CDK, so that if the S3 bucket created via CDK is deleted/renewed I don't have to do any manual steps to configure SSM again.
You can't do that. Those settings are set globally per account. CDK/CloudFormation is a resource provisioning tool.
Session Manager preferences are regional, and since they can be changed via the command line, they can also be changed via a CDK custom resource.
Just create a Lambda that runs
aws ssm update-document --name "SSM-SessionManagerRunShell"
with a JSON config as explained here:
https://docs.aws.amazon.com/systems-manager/latest/userguide/getting-started-configure-preferences-cli.html
If you pass the name of your S3 bucket as a parameter of your custom resource, it will trigger an on_event update every time your bucket changes.
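A minimal sketch of the same idea using the CDK AwsCustomResource construct instead of a hand-written Lambda; the bucket, the preference values, and the inputs shown are assumptions, so adjust them to match the preferences document described in the docs page above:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cr from 'aws-cdk-lib/custom-resources';
import { Construct } from 'constructs';

export class SessionManagerPrefsStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const logBucket = new s3.Bucket(this, 'SessionLogBucket');

    // Assumed preferences content; see the AWS docs linked above for the full schema.
    const preferences = {
      schemaVersion: '1.0',
      description: 'Session Manager preferences',
      sessionType: 'Standard_Stream',
      inputs: {
        s3BucketName: logBucket.bucketName,
        s3EncryptionEnabled: true,
        maxSessionDuration: '60',
      },
    };

    new cr.AwsCustomResource(this, 'SessionManagerPreferences', {
      onUpdate: {
        service: 'SSM',
        action: 'updateDocument',
        parameters: {
          Name: 'SSM-SessionManagerRunShell',
          Content: JSON.stringify(preferences),
          DocumentVersion: '$LATEST',
        },
        // Tying the physical ID to the bucket name makes the call re-run
        // whenever the bucket (and therefore the document content) changes.
        physicalResourceId: cr.PhysicalResourceId.of(logBucket.bucketName),
      },
      policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
        resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
      }),
    });
  }
}
```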

How to migrate AWS Amplify project to another AWS account?

I have a React application with AWS Amplify as its backend. I'm using an AppSync API and a DynamoDB database to save data. The AppSync API is the only category that I provisioned in my project.
| Category | Resource name | Operation | Provider plugin |
|----------|---------------|-----------|-------------------|
| Api | testAPI | No Change | awscloudformation |
I need to clone this same AWS Amplify backend to another AWS account easily.
Yes, I could create another Amplify project and provision resources one by one. But is there any other easy method to move this Amplify backend to another AWS account?
I found a solution through this (https://github.com/aws-amplify/amplify-cli/issues/3350) GitHub issue thread. But I'm not 100% sure whether this is the recommended method to migrate Amplify resources.
These are the steps that I followed.
First, I pushed the project into a GitHub repo. This will push only the relevant files inside the amplify directory. (Amplify automatically populates .gitignore when we initialize our backend using amplify init).
Clone this repo to a new directory.
Next, I removed the amplify/team-provider-info.json file.
Run amplify init and you can choose your new AWS profile, or you can enter the accessKeyId and secretAccessKey for the new AWS account. (Refer to this guide to create and save an IAM user with AWS Amplify access.)
This will create the backend resources locally. Now, to push those resources, execute amplify push.
If you want to export the Amplify backend using a CDK pipeline, you can refer to this guide: https://aws.amazon.com/blogs/mobile/export-amplify-backends-to-cdk-and-use-with-existing-deployment-pipelines/

Is Bitbucket Enterprise Server allowed with AWS CodeBuild?

I am looking to integrate an enterprise Bitbucket server with AWS CI/CD pipeline features.
I have tried creating a project within AWS CodeBuild but do not see any option for Bitbucket Enterprise.
If this is not possible, then what is the long route using API Gateway / webhooks etc.?
AWS CodeBuild only supports Bitbucket Cloud. To integrate with a self-hosted Bitbucket solution, you will need to create an API Gateway + Lambda, and then add this gateway address as a webhook in the Bitbucket repo. The Lambda will then be responsible for processing the incoming events from the Bitbucket server. There are two routes from here.
One way could be to download the zip for the particular commit and upload it to an S3 bucket, then add S3 as the source trigger for the build project. You lose the ability to run any git-specific commands in that case, though, as it's just a zip file containing the specific version of the files.
The second option could be to pass the relevant info to CodeBuild by invoking it directly from the Lambda, passing details like commit_id, event (pr or push), branch, etc. as environment variables. Based on this info, run a git clone in CodeBuild before running the other build steps. This way you have access to git-specific commands.
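For the second option, a rough sketch of such a webhook Lambda (TypeScript, AWS SDK v3); the project name and the payload field names are assumptions and depend on how your Bitbucket Server webhook is shaped:

```typescript
import { CodeBuildClient, StartBuildCommand } from '@aws-sdk/client-codebuild';

const codebuild = new CodeBuildClient({});

// Hypothetical handler: API Gateway passes the Bitbucket Server event body through,
// and the Lambda starts the CodeBuild project with the commit details as env vars.
export const handler = async (event: { body: string }) => {
  const payload = JSON.parse(event.body);

  await codebuild.send(new StartBuildCommand({
    projectName: 'bitbucket-server-build', // assumed CodeBuild project name
    environmentVariablesOverride: [
      { name: 'COMMIT_ID', value: payload.commitId, type: 'PLAINTEXT' },
      { name: 'BRANCH', value: payload.branch, type: 'PLAINTEXT' },
      { name: 'EVENT_TYPE', value: payload.eventType, type: 'PLAINTEXT' },
    ],
  }));

  return { statusCode: 202, body: 'build started' };
};
```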
Here is an example workflow from AWS (it is for CodePipeline, but you can modify it suitably for CodeBuild).

AWS Chalice - CI/CD - Deploy under the same gateway

When we use
chalice deploy
for a component that is to be available as a REST endpoint, Chalice creates the Lambda and API on AWS infrastructure.
Every Chalice project creates a new API with a unique ID.
I want to be able to deploy multiple Chalice projects under the same API ID. We want to be able to configure this API name/ID and use it in the CI/CD pipeline as well.
How do we achieve this?
The reason for the new API IDs is that when you use the chalice deploy command, Chalice creates a file in .chalice/deployed for that stage. That file holds the ID it will re-deploy to.
There are two solutions if you are using a CI/CD pipeline.
The first is to issue the FIRST deploy locally to create the file in your project. From your local machine you can run chalice deploy --stage {YourStageHere}; it will create the proper file, which you can push into your repo to save it. The pipeline will then read the API ID from that file.
The second is much more in depth: it requires setting up the pipeline with changesets. There is a very good starting tutorial in the official documentation:
https://chalice-workshop.readthedocs.io/en/latest/todo-app/part2/02-pipeline.html