How to structure AWS CDK for product development

Our company is exploring the use of AWS CDK. Our app is composed of an Angular Front End and an ASP.NET Core 3.1 Back End. Our Angular Front End is deployed as a static site in an S3 bucket. Back End is deployed as a Lambda with an API Gateway to allow for public API calls. The database is an Aurora RDS instance.
The app is a core product that will have some client-specific config when deployed into a client-specific AWS environment. Code-wise we are looking to keep the core product the same.
Based on the reading I did, placing CDK code alongside app code would allow us to define Constructs that correspond to app components (e.g. Front End S3 Buckets, Back End Lambda + API Gateway, RDS setup, etc.), all located in the same repo. I was thinking of producing artifacts (e.g. Nugets, binaries, zips) from the core app and cdk builds.
A client-specific setup would then consume the artifacts created by the main repository build to produce a client-specific build, composed of the core app + the base AWS CDK constructs. I can imagine building AWS CDK stacks that reuse the ones created in the core repo and add client-specific configuration. I am still unsure how to combine the core app with client-specific config, so I am wondering if anyone has either solved this problem or has suggestions.

I think you can start with AWS CDK Best Practices, with special attention to the Coding best practices section. Second, you can refer to the AWS Solutions Architect article that describes the recommended AWS CDK project structure for Python applications. Granted, that article targets Python rather than your stack, but the general principles still apply. Third, a better way to structure AWS CDK projects is around Nested Stacks: convert the stacks into libraries and reuse the code.
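To make that last point concrete, here is a minimal sketch of a reusable core construct published as a library from the core repo and consumed by a client-specific stack. All names here (CoreBackend, ClientAppStack, the artifact path, the handler string, the runtime) are made up for illustration and would map onto your own projects:

import { Construct } from 'constructs';
import { Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// Published from the core repo as a library; property names are illustrative.
export interface CoreBackendProps {
  readonly memorySize?: number;                       // client-tunable setting
  readonly environment?: Record<string, string>;      // client-specific config
}

export class CoreBackend extends Construct {
  constructor(scope: Construct, id: string, props: CoreBackendProps = {}) {
    super(scope, id);

    new s3.Bucket(this, 'FrontEndBucket');             // Angular static site bucket

    new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.DOTNET_6,                // assumption: swap for your backend's runtime
      handler: 'MyApp::MyApp.LambdaEntryPoint::FunctionHandlerAsync',  // placeholder handler
      code: lambda.Code.fromAsset('artifacts/backend.zip'),            // artifact from the core build
      memorySize: props.memorySize ?? 512,
      environment: props.environment,
    });
  }
}

// Lives in the client-specific repo and consumes the library above.
export class ClientAppStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    new CoreBackend(this, 'Core', {
      memorySize: 1024,
      environment: { CLIENT_NAME: 'acme' },            // client-specific values only
    });
  }
}

The client repo would then depend on the published package (npm or NuGet, depending on the CDK language) and define only configuration, keeping the core product code identical across clients.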

Related

Is `cdk bootstrap` safe to run on a production AWS system?

I have inherited a small AWS project, and the infra is built in CDK. I am relatively new to CDK.
I have a Bitbucket pipeline that deploys to our preprod environment fine. Since it feels reliable, I am now productionising it.
I detailed on a prior question that there is no context in the project for the production VPCs and subnets. I have been advised there that I can get AWS to generate the context file; I have not had much luck with that, so for now I have hand-generated it.
For safety I have made the deployment command a no-execute one:
cdk deploy --stage=$STAGE --region=eu-west-1 --no-execute --require-approval never
In production I get this error with the prod creds:
current credentials could not be used to assume 'arn:aws:iam::$CDK_DEFAULT_ACCOUNT:role/cdk-xxxxxxxx-lookup-role-$CDK_DEFAULT_ACCOUNT-eu-west-1', but are for the right account. Proceeding anyway.
Bundling asset VoucherSupportStack/VoucherImporterFunction/Code/Stage...
I then get:
❌ VoucherSupportStack failed: Error: VoucherSupportStack: SSM parameter /cdk-bootstrap/xxxxxxxx/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap' (see https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html)
I am minded to run cdk bootstrap in a production pipeline, on a once-off basis, as I think this is all it needs. We have very little CDK knowledge amongst my team, so I am a bit stuck on obtaining the appropriate reassurances - is this safe to run on a production AWS account?
As I understand it, it will just create a harmless "stack" that does nothing (unless we start using cdk deploy ...).
Yes, you need to bootstrap every environment (account/region) that you deploy to, including your production environment(s).
It is definitely safe to do - it's what CDK expects.
You can scope the execution role down if you need (the default policy is AdministratorAccess).
Ideally, though, your pipeline shouldn't be performing lookups during synth - the recommended way is to run cdk synth once with your production credentials, which will perform the lookups and populate the cdk.context.json file. You would then commit this file to VCS, and your pipeline will use these cached values instead of performing the lookups every time.
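For example (the account ID, region and managed policy ARN are placeholders; adjust them to your environment), a scoped-down bootstrap followed by a one-off context lookup could look like:

cdk bootstrap aws://111111111111/eu-west-1 \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/PowerUserAccess
cdk synth                      # run once with production credentials; populates cdk.context.json
git add cdk.context.json       # commit the cached lookups so the pipeline never performs them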
Generally yes, but here is some extension to gshpychka's answer:
You don't have to bootstrap your production environment if you are deploying your application with AWS Service Catalog. The setup in our project looks like the following:
Resources account - for pipelines, secrets, ...
Development account - bootstrapped, the dev pipeline deploys directly to this account
Integration Account and Production Account - not bootstrapped, we are provisioning the releases and the release candidates through the AWS Service Catalog.
Service Catalog provides handy functionality to provision and also update applications in a friendly way. There are stable L2 CDK constructs for building your product stacks.
Of course, this approach has its advantages and disadvantages. I would recommend it if you want full control over when you deploy or update your application. It is also worth using this approach if you are developing an application that will be installed in a client's account.
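As a rough sketch of those Service Catalog L2 constructs (class names like MyProductStack and CatalogStack, and the product/portfolio names, are made up for illustration; check the aws-servicecatalog module docs for your CDK version), wrapping an application stack as a product could look like this:

import { Stack } from 'aws-cdk-lib';
import * as servicecatalog from 'aws-cdk-lib/aws-servicecatalog';
import { Construct } from 'constructs';

// Hypothetical product stack containing the application's resources.
class MyProductStack extends servicecatalog.ProductStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // ...define the application's resources here...
  }
}

export class CatalogStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // The product is a versioned CloudFormation template generated from the product stack.
    const product = new servicecatalog.CloudFormationProduct(this, 'Product', {
      productName: 'my-application',
      owner: 'platform-team',
      productVersions: [{
        productVersionName: 'v1',
        cloudFormationTemplate: servicecatalog.CloudFormationTemplate.fromProductStack(
          new MyProductStack(this, 'MyProduct')),
      }],
    });

    // The portfolio is what gets shared with the accounts that provision releases.
    const portfolio = new servicecatalog.Portfolio(this, 'Portfolio', {
      displayName: 'Applications',
      providerName: 'platform-team',
    });
    portfolio.addProduct(product);
  }
}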

How to deploy nuxt frontend with express backend on AWS?

I have a Nuxt v2 SSR application as the frontend running on port 3000
I have an express API as the backend running on port 8000
I have a python script that loads data from external APIs and needs to run continuously
Currently all of them are separate projects with their own package.json and what not
How do I deploy this to AWS?
The only thing I have figured out so far is that I may have to deploy express API as an Elastic Beanstalk application.
Should I have a separate docker-compose file for each, since they are currently separate projects, or should I merge them into one project with a single docker-compose file?
I saw similar questions asked about React; I would really appreciate some direction here for Nuxt.
None of these similar questions are based on Nuxt
How to deploy separated frontend and backend?
How to deploy a React + NodeJS Express application to AWS?
How to deploy backend and frontend projects if they are separate?
There are a couple of approaches depending on your workload and budget. Let's cover what the options are and which ones apply to the work at hand.
Serverless Approach with Server-side rendering (SSR)
Tutorial
By creating an API Gateway route that invokes NuxtRenderer in a Lambda. The resulting generated HTML/JS/CSS would be pushed to S3 bucket. The S3 bucket acts as a Cloudfront origin. Use the CDN to cache the least updated items and use the API call to update the cache when needed. This is the cheapest way to deploy, but if you don't have traffic some customers may experience brief lag when cache updates hit. Calculator
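A rough sketch of that handler, assuming Nuxt 2's programmatic loadNuxt API and the serverless-http package (neither is prescribed above, and the details will vary with your Nuxt version), could look like this:

import serverless from 'serverless-http';
import express from 'express';
import { loadNuxt } from 'nuxt';          // Nuxt 2 programmatic API

const app = express();
// Load the pre-built Nuxt app in production ("start") mode.
const nuxtReady = loadNuxt('start');

// Every request that reaches the Lambda is rendered server-side by Nuxt.
app.use(async (req, res) => {
  const nuxt = await nuxtReady;
  nuxt.render(req, res);
});

// serverless-http translates API Gateway events into Express requests/responses.
export const handler = serverless(app);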
Serverless Approach via static generation
This is the easiest way to get going. All the routes that do not have dynamic template generation can simply live in an S3 bucket. The simplest approach is to run nuxt generate and then upload the resulting contents of dist to the bucket. There is no node backend here, which is not the requirement in the question, but it's worth mentioning.
Well documented on NuxtJS.org and free tier.
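For example (the bucket name is a placeholder):

nuxt generate
aws s3 sync dist/ s3://your-frontend-bucket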
Serverless w/ Elastic Beanstalk
IMO this approach is unnecessary and slightly dated given what AWS offers in 2022. It still works of course, but the cost-to-benefit ratio isn't attractive.
To do this you need to use the eb command line tool and set NPM_CONFIG_UNSAFE_PERM=true. AFAIK there is nothing else special that needs to happen for eb to know what to do from there. Calculator
Server-ish Approach(es)
Another alternative is to use Lightsail and a NodeJS server. Lightsail offers a low-cost (though not as low as serverless) full-time NodeJS server. In this case you would clone your project to the server, then set up a systemd unit to keep Node.js running.
Yet another way to do this, with some scalability, is to use ECS and Docker. In this case you would create a Dockerfile that builds the container, executes npm start, and exposes the port Nuxt runs on to the host. This example shows how to run it using Fargate, which is essentially a serverless version of an EC2 machine. Calculator
There are a few ways for you to deploy your stack in AWS. I can give you some options, but your best shot if you want to save some costs is to use Lambda functions as your backend, S3 for your front-end, and a batch Lambda job for your python script.
For your backend - https://github.com/vendia/serverless-express
For your nuxt frontend - https://nuxtjs.org/deployments/amazon-web-services
For your python job - https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
None of this is trivial to execute, but the links above should give you an idea of how you might implement your solution.
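For the Express part specifically, a minimal sketch of the Lambda entry point using the serverless-express package linked above (assuming your existing Express app is exported from ./app) could be:

import serverlessExpress from '@vendia/serverless-express';
import app from './app';   // your existing Express application (assumption)

// API Gateway events are translated into HTTP requests against the Express app.
export const handler = serverlessExpress({ app });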

Serverless framework CLI vs GUI. Eg. AWS console

Why would anyone use Serverless framework CLI to write the lambda functions or deploy them when we have AWS console GUI? Are there any specific benefits out of it?
Normally you don't develop and deploy a lambda function in isolation, instead it is one part of your cloud infrastructure. That can include other lambdas, S3 buckets, databases, API Gateways, IAM roles, environment variables and much more.
The Serverless Framework allows you to write your infrastructure as code. For AWS services, it translates serverless.yml config files into AWS CloudFormation templates, and from there deploys any new or updated services you define. Your lambda function is just one part of that.
A major benefit of writing and deploying this way is that you can use your favourite editor locally, and can check your code into version control (i.e. git). This is not just for your lambda code, but also your infrastructure config i.e. serverless.yaml and associated files.
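To make that concrete, here is a minimal sketch of such a configuration, written in the framework's TypeScript flavour of the config file (serverless.ts); the service name, runtime, handler path and region are placeholders, and the same structure can be written as serverless.yml:

import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'my-service',                        // placeholder service name
  frameworkVersion: '3',
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
    region: 'eu-west-1',
  },
  functions: {
    hello: {
      handler: 'src/handlers/hello.handler',    // placeholder handler path
      events: [{ httpApi: { method: 'get', path: '/hello' } }],
    },
  },
};

module.exports = serverlessConfiguration;

Checking this file into version control gives the whole team the same repeatable deployment, which is the point being made above.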
The Serverless Framework is more than just a replacement for the AWS Console (GUI). You can definitely set everything up via the AWS console for a Serverless application but how do you share that with your team? What if you wish to deploy that repeatedly into multiple applications? The Serverless Framework gives you a configuration file, usually called serverless.yml, where you define all the services within AWS (and other vendors, there is support for more than just AWS) and then you use the CLI to perform functions on this configuration file such as deploy, invoke and lot more.
Then there are the Serverless Framework plugins designed by the community around the project to make other tasks even easier such as unit testing, configuration of S3 buckets, CloudFront and domains to make frontend deployment easier and a lot, lot more.
Lastly, but most importantly, there is a professional product provided in addition to the open source framework that you can use to add on monitoring, deployment management, troubleshooting, optimisation, CI/CD and too many other benefits to list here.
Definitely. If you are doing a big project, the Serverless Framework has a lot of benefits; imagine developing an MVC C# project in Notepad. How would that feel?
Frameworks exist to make our lives (as developers) very much easier.

AWS Lambda - Building serverless API using .NET Core

Recently I have been looking into AWS Lambdas and how to build Serverless API using .Net Core. From what I understand, you can do it in 2 different ways.
1) Write multiple separate Lambdas in C# and deploy them to AWS. Requests come in via API gateway and each lambda acts as an endpoint.
2) Build a Serverless Web API using .Net core. When you create the serverless Web API project a Lambda is automatically created which becomes the entry point to the Web API.
Are there any limitations of 1 vs 2, or use cases where one approach might be beneficial over other? Or is it just 2 different ways of achieving the same thing?
I don't think your options are correct. The two options for building a Lambda backed API are:
1- Build lambdas and deploy them independently to AWS in one or more projects. Then manually create API Gateway endpoints that point to your one or more lambdas.
2- Use a Serverless project to combine your lambdas in one project. Define your endpoints in that project and have Cloudformation create the API Gateway endpoints and hook them up to your lambdas on deployment.
As far as pros and cons,
Option 1:
Pros: you have the flexibility of deploying lambdas independently, and you can configure your API Gateway endpoints however you want without having to understand CloudFormation definition syntax, which took some ramp-up time in my experience.
Cons: If you have a lot of lambdas this becomes a management nightmare. Also your endpoint definition is not in the source code and changes to the endpoint configuration will not be tracked.
Option 2:
Pros: If you figure out CloudFormation, or if you want to go with the default configuration, deploying a lambda and hooking it up to an API Gateway endpoint is super easy. AWS will create the endpoint for you and will create dev and prod stages, policies, IAM roles, etc. Because this is deployed by CloudFormation directly from Visual Studio, the whole deployment and all related objects fall under the same "Stack" in AWS CloudFormation, which can be changed, replicated, or deleted very easily. Also, your infrastructure is now code and changes to it are auditable in your git repo.
Cons: The biggest con in my opinion is that the stack doesn't span the VS solution but rather just the project, so all your lambdas have to live in the same project, which means that if you have a lot of them they all end up in one monolithic lambda binary. The resulting large binary will cost you memory at runtime on AWS and cause efficiency problems. The other con is that if you want a specific or out-of-the-ordinary API Gateway configuration, you will need to understand CloudFormation syntax to change your serverless.template file.
Conclusion: My preferred solution was to divide my application into smaller chunks of related lambdas based on the API object and place these lambdas in a few Serverless application projects. For example, I have an Order project which contains all lambdas related to the order API, and a Product project that contains the lambdas related to the product API, etc. Both live in the same solution and get deployed separately. I'm currently working on a way to deploy the entire solution at once.

Deploying an Angular 2 app built with webpack using Bitbucket

I have searched high and low for an answer to this question but have been unable to find one.
I am building an Angular 2 app that I would like hosted on an S3 bucket. There will be an EC2 (possibly) backend, but that's another story. Ideally, I would like to be able to check my code into Bitbucket and, by some magic that eludes me, have S3, or EC2, or whatever notice via a hook, for instance, that the source has changed. Of course the source would have to be built using webpack and the distributables deployed correctly.
Now this seems like a pretty straightforward request, but I can find no solution except something pertaining to WebDeploy, which I shall investigate right now.
Any ideas anyone?
Good news: AWS Lambda was created for exactly this.
You need to set up the following scenario and code to achieve your requirement:
1- Create a Lambda function that does the following steps:
1-1- Clone your latest code from GitHub or Bitbucket.
1-2- Install grunt or another builder for your Angular app.
1-3- Install node modules.
1-4- Build your Angular app.
1-5- Copy the new build to your S3 bucket.
1-6- Finish.
2- Create an AWS API Gateway with one resource and one method pointing to your Lambda function.
3- Go to your GitHub or Bitbucket settings and add a webhook pointing at your API Gateway.
4- Enjoy life with AWS.
;)
Benefits:
1- You are only charged when a new build runs.
2- No need for any machine or server (EC2).
3- You only maintain one Lambda function.
for more info:
https://aws.amazon.com/lambda/
https://aws.amazon.com/api-gateway/
S3 isn't going to listen for Git hooks and fetch, build and deploy your code. BitBucket isn't going to build and deploy your code to S3. What you need is a service that sits in-between BitBucket and S3 that is triggered by a Git hook, fetches from Git, builds, and then deploys your code to S3. You need to search for Continuous Integration/Continuous Deployment services which are designed to do this sort of thing.
AWS has CodePipeline. You could setup your own Jenkins or TeamCity server. Or you could look into a service like CodeShip. Those are just a few of the many services out there that could accomplish this task. I think any of these services will require a bit of scripting on your part in order to get them to perform the actual webpack and copy to S3.
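For instance, the "copy to S3" step often ends up as a small script that the CI service runs after webpack finishes. Here is a rough sketch (the bucket name, region and dist/ path are placeholders, and a real script would also set correct Content-Type headers per file):

import { readdirSync, readFileSync, statSync } from 'fs';
import { join, relative } from 'path';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'eu-west-1' });   // placeholder region
const bucket = 'my-frontend-bucket';                // placeholder bucket name
const distDir = 'dist';                             // webpack output folder

// Recursively collect every file produced by the webpack build.
function listFiles(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    const full = join(dir, name);
    return statSync(full).isDirectory() ? listFiles(full) : [full];
  });
}

async function deploy(): Promise<void> {
  for (const file of listFiles(distDir)) {
    const key = relative(distDir, file).split('\\').join('/'); // S3 keys use forward slashes
    await s3.send(new PutObjectCommand({
      Bucket: bucket,
      Key: key,
      Body: readFileSync(file),
    }));
    console.log(`uploaded ${key}`);
  }
}

deploy().catch((err) => { console.error(err); process.exit(1); });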