When we use
chalice deploy
for a component that is to be available as a REST endpoint, Chalice creates the Lambda function and the API on AWS infrastructure.
Every Chalice project creates a new API with a unique ID.
I want to be able to deploy multiple Chalice projects under the same API ID. We also want to be able to configure this API name/ID and use it in a CI/CD pipeline.
How do we achieve this?
The reason for the new API IDs is that when you run the chalice deploy command, Chalice creates a file in .chalice/deployed for that stage. That file holds the ID of the API it will redeploy to.
There are two solutions if you are using a CI/CD pipeline.
The first is to issue the FIRST deploy locally so the file is created in your project. From your local machine, run chalice deploy --stage {YourStageHere}; this creates the proper file, which you can commit to your repo. The pipeline will then read the API ID from that file, as in the sketch below.
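A minimal sketch of that local-first workflow (the stage name dev is just an example):

    chalice deploy --stage dev
    # Chalice records the API ID in .chalice/deployed/dev.json
    git add .chalice/deployed/dev.json
    git commit -m "Record deployed API ID for stage dev"
    git push

Subsequent pipeline runs of chalice deploy --stage dev will pick up the recorded ID and redeploy to the same API instead of creating a new one.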
The second is much more involved: it would require setting up a changeset step in the pipeline. There is a very good starting tutorial in the official documentation:
https://chalice-workshop.readthedocs.io/en/latest/todo-app/part2/02-pipeline.html
Related
I have created a CDK app that provisions a custom VPC and a bunch of subnets based on env variables I pass into it.
My use case is: I want to trigger this stack via an API request (initiated from an admin UI / SPA client) to provision infra. The API request will contain all the necessary params required to initiate the CDK stack.
I've also created a CI/CD pipeline (CodePipeline/CodeBuild) for the CDK app, but I'm not sure how to trigger it without changing the actual source repo.
What's the best way to trigger a CDK build and pass the relevant env variables?
I have a React application with AWS Amplify as its backend. I'm using an AppSync API and a DynamoDB database to save data. The AppSync API is the only category that I provisioned in my project:
| Category | Resource name | Operation | Provider plugin   |
| -------- | ------------- | --------- | ----------------- |
| Api      | testAPI       | No Change | awscloudformation |
I need to clone this same AWS Amplify backend to another AWS account easily.
Yes, I could create another Amplify project and provision resources one by one. But is there any other easy method to move this Amplify backend to another AWS account?
I found a solution through this GitHub issue thread: https://github.com/aws-amplify/amplify-cli/issues/3350. But I'm not 100% sure whether this is the recommended method to migrate Amplify resources.
These are the steps that I followed.
First, I pushed the project into a GitHub repo. This will push only the relevant files inside the amplify directory. (Amplify automatically populates .gitignore when we initialize our backend using amplify init).
Clone this repo to a new directory.
Next, I removed the amplify/team-provider-info.json file.
Run amplify init; you can choose your new AWS profile, or enter the accessKeyId and secretAccessKey of an IAM user in the new AWS account. (Refer to this guide to create and save an IAM user with AWS Amplify access.)
This will create backend resources locally. Now to push those resources, you can execute amplify push.
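Put together, the steps above look roughly like this (the repo URL is a placeholder for your own):

    git clone https://github.com/your-user/your-amplify-repo.git
    cd your-amplify-repo
    rm amplify/team-provider-info.json
    amplify init    # pick the profile or credentials of the new AWS account
    amplify push    # provisions the backend resources in the new account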
If you want to export the Amplify backend using a CDK pipeline, you can refer to this guide: https://aws.amazon.com/blogs/mobile/export-amplify-backends-to-cdk-and-use-with-existing-deployment-pipelines/
I want to deploy Next.js on AWS using AWS CDK for a POC and was looking at options. The Next.js docs say that we can just create an instance and run npm run build && npm start, and it will start the service for us. However, this is not the most optimised way of deploying.
Vercel deploys this in the most optimized way possible.
How can I do the same with AWS? How can I serve the static assets and pages via the CloudFront CDN, and the server-side rendered pages and APIs via either Lambda or ECS? Is there a step-by-step guide I can follow to split out the build files accordingly?
Other options I explored
AWS Amplify: As it is a premium service, I feel doing all this myself would be a lot cheaper and give me more flexibility in CDK (I am not sure how Amplify works behind the scenes to deploy the Next.js assets on an S3 + CloudFront + Lambda stack).
Serverless Framework: There is a plugin to deploy Next.js, but I want full control over the deployment and don't want to depend on any external framework. I want to do it natively using AWS CDK.
Any pointers to do this natively using AWS CDK would be helpful. Thanks.
Deploying Next.js as a serverless application requires a bunch of services when you don't want to pack the whole Next.js server into a single Lambda.
My current setup of AWS services to achieve this consists of 3 main resources:
CloudFront
This works as a serverless reverse proxy that routes traffic from the Internet to S3 (JavaScript, prerendered pages) or Lambda (server-rendered pages).
When using the image optimization capabilities of Next.js you also need an extra service that provides the API for it.
S3
Since you don't want to invoke Lambdas just to serve static content, you need an S3 bucket where those files are stored and served from.
Lambda
The Lambdas are then used to serve the server-generated pages (SSR & API).
They contain a minimal version of the Next.js server (e.g. without the static files that are served from S3).
I built this setup with Terraform, so there is no native CDK solution available at this time.
But most of it could be simply translated to CDK since the model behind Terraform and CDK is pretty much the same.
Source code of the Terraform module is available on GitHub: https://github.com/milliHQ/terraform-aws-next-js
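For illustration, here is a heavily simplified sketch of how the CloudFront + S3 + Lambda wiring above might look in CDK (TypeScript). This is a rough translation, not part of the Terraform module; the handler/ asset directory is a placeholder, and producing a minimal Next.js server bundle to put there is exactly the hard part the module solves:

    import * as cdk from 'aws-cdk-lib';
    import * as s3 from 'aws-cdk-lib/aws-s3';
    import * as lambda from 'aws-cdk-lib/aws-lambda';
    import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
    import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

    const app = new cdk.App();
    const stack = new cdk.Stack(app, 'NextJsPocStack');

    // S3 bucket for the static build output (JS chunks, prerendered pages)
    const staticBucket = new s3.Bucket(stack, 'StaticAssets');

    // Lambda running a minimal Next.js server for SSR pages and API routes
    const ssrFn = new lambda.Function(stack, 'SsrHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('handler'), // placeholder bundle
    });

    // Function URL so CloudFront can reach the Lambda over HTTPS
    const fnUrl = ssrFn.addFunctionUrl({
      authType: lambda.FunctionUrlAuthType.NONE,
    });

    // CloudFront as the reverse proxy: static paths go to S3,
    // everything else to the SSR Lambda
    new cloudfront.Distribution(stack, 'Cdn', {
      defaultBehavior: {
        origin: new origins.FunctionUrlOrigin(fnUrl),
        cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
      },
      additionalBehaviors: {
        '/_next/static/*': { origin: new origins.S3Origin(staticBucket) },
      },
    });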
I am looking to integrate an enterprise Bitbucket server with AWS CI/CD pipeline features.
I have tried creating a project within AWS CodeBuild but do not see any option for Bitbucket Enterprise.
If this is not possible, what is the long route using API Gateway / webhooks etc.?
AWS CodeBuild only supports Bitbucket Cloud. To integrate with a self-hosted Bitbucket solution, you will need to create an API Gateway + Lambda, then add the gateway address as a webhook in the Bitbucket repo. The Lambda is then responsible for processing the incoming events from the Bitbucket server. There are two routes from here.
One way is to download the zip for the particular commit and upload it to an S3 bucket, then add S3 as a source trigger for the build project. You lose the ability to run any git-specific commands in that case, though, as it's just a zip file containing that specific version of the files.
The second option is to pass the relevant info to CodeBuild by invoking it directly from the Lambda, passing details like commit_id, event (PR or push), branch etc. as environment variables. Based on this info, run a git clone in CodeBuild before the other build steps. This way you keep access to git-specific commands. A sketch of such a Lambda follows below.
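A minimal TypeScript sketch of that second option, assuming the AWS SDK v3 and a Bitbucket Server push (repo:refs_changed) payload; the project name and the payload field paths are assumptions to verify against your own events:

    import { CodeBuildClient, StartBuildCommand } from '@aws-sdk/client-codebuild';
    import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

    const codebuild = new CodeBuildClient({});

    export const handler = async (
      event: APIGatewayProxyEvent,
    ): Promise<APIGatewayProxyResult> => {
      const body = JSON.parse(event.body ?? '{}');

      // Field paths follow Bitbucket Server's repo:refs_changed payload;
      // check them against the events your server actually sends.
      const commitId = body.changes?.[0]?.toHash ?? 'unknown';
      const branch = body.changes?.[0]?.ref?.displayId ?? 'unknown';
      const eventKey = event.headers['X-Event-Key'] ?? 'push';

      await codebuild.send(
        new StartBuildCommand({
          projectName: 'my-bitbucket-build', // placeholder project name
          environmentVariablesOverride: [
            { name: 'COMMIT_ID', value: commitId, type: 'PLAINTEXT' },
            { name: 'BRANCH', value: branch, type: 'PLAINTEXT' },
            { name: 'EVENT', value: eventKey, type: 'PLAINTEXT' },
          ],
        }),
      );

      return { statusCode: 202, body: 'Build started' };
    };

The buildspec can then read COMMIT_ID, BRANCH and EVENT and run the appropriate git clone/checkout before the remaining build steps.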
Here is an example workflow from AWS (it is for CodePipeline, but you can adapt it for CodeBuild).
I am currently looking into AWS Amplify and also reading Serverless Stack. My goal is to create a simple to-do list app. Both "Getting started" guides seem to have the same goal; however, the AWS Amplify guide seems to be way easier to set up.
And that's where I am confused. As far as I understand, AWS Amplify also uses DynamoDB and gets data via GraphQL. So what is the difference between these two documentations?
Serverless Stack is a resource providing guidance on how to create serverless applications with AWS. It was created by a company called Anomaly Innovations.
AWS Amplify is an open source framework maintained by AWS which helps developers integrate their applications with AWS resources.
AWS Amplify is a very confusing service and consists of many components. I would categorize them as follows:
AWS Amplify Console
AWS Amplify CLI
AWS SDK&Libraries to integrate to your mobile or web
AWS AppSync Transformer
The AWS Amplify Console gives you the ability to easily set up continuous deployment for your Amplify project. The Amplify Console is used together with the AWS Amplify CLI to manage different environments.
Let's say you want to start the to-do app. You start locally using the Amplify CLI and create the API Gateway/Lambda/DynamoDB stacks.
The Amplify CLI lets you create the whole stack easily and push it to AWS to deploy it. Then you can create different environments based on the same stacks: say, a dev environment, a QA environment, and a production environment.
The Amplify CLI gives you all the commands necessary to achieve this (see the sketch below). If you then want changes auto-deployed to AWS when someone pushes code to your Git repository, the Amplify Console lets you set up exactly that.
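A rough sketch of that multi-environment flow with the Amplify CLI (the environment names are just examples):

    amplify init            # create the project and a first (e.g. dev) environment
    amplify add api         # e.g. an AppSync GraphQL API
    amplify push            # deploy the stack to AWS
    amplify env add qa      # create a QA environment from the same stacks
    amplify env checkout qa
    amplify push            # deploy the QA environment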
The Amplify Console also integrates with domain management in AWS, so you can easily point your own domain to any of the environments.
On top of these, Amplify also provides the GraphQL Transformer, which lets you define your GraphQL schema in Amplify's format and then transforms and deploys it to AWS AppSync. And there is a mobile SDK with which you can sync data between AppSync and your mobile app; it provides some UI components as well.
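For example, a minimal schema using the transformer's @model directive (the Todo type is illustrative); Amplify expands this into an AppSync API backed by a DynamoDB table, with resolvers generated for you:

    type Todo @model {
      id: ID!
      name: String!
      completed: Boolean
    }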
We used it on one of our web projects and we liked the continuous deployment aspect of Amplify, but we didn't like the AppSync (GraphQL) aspect just because it was not easy to implement layered resolvers.
Also, keep in mind that the Amplify CLI/SDK/Transformer live under one project and it's still very fragile. You can take a look at the version history at https://www.npmjs.com/package/@aws-amplify/cli and you will see quite a few version bumps in a single month. There were many obvious bugs we encountered, even in the AWS Console.
I haven't used Serverless Stack yet, but as far as I know, it provides the equivalents of No. 1 and No. 2 of the Amplify list above with greater stability.