Azure DevOps AWS Serverless application deployment - S3 bucket per environment

I have followed this guide on how to separately package and deploy a .NET Core serverless application on AWS.
I'm trying to deploy to different environments, which I can achieve by parameterising the CloudFormation template and creating different stages in the Azure DevOps release pipeline, with each stage passing different parameters to the template. However, there is one aspect that confuses me a little. Following the guide, an AWS Lambda .NET Core Deployment task is added in the CI part to create a deployment package. In that task, an S3 bucket name is specified, which is where the compiled code will be uploaded. This means, however, that a single S3 bucket will contain the uploaded code for all environments. What if I wanted to upload the code to a different S3 bucket, depending on the environment being deployed to? Or is it normal practice to have just one S3 bucket in this scenario?
I thought of creating the bucket in the release pipeline stages, but the task that packages the serverless application (during CI) requires a bucket name to be supplied.
Am I going about this the wrong way?
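For context, the kind of template parameterisation mentioned above looks roughly like this (a minimal sketch in YAML; names, runtime and values are illustrative, and CodeUri is filled in by the packaging task):

# serverless.template (illustrative sketch): Environment is supplied per release stage
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, test, prod]
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub "my-api-${Environment}"
      Handler: MyApp::MyApp.LambdaEntryPoint::FunctionHandlerAsync
      Runtime: dotnetcore3.1
      CodeUri: ''          # replaced with the uploaded S3 location during packaging
      MemorySize: 256
      Timeout: 30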

Related

How to structure AWS CDK for product development

Our company is exploring the use of AWS CDK. Our app is composed of an Angular Front End and an ASP.NET Core 3.1 Back End. The Angular Front End is deployed as a static site in an S3 bucket. The Back End is deployed as a Lambda with an API Gateway to allow public API calls. The database is an Aurora RDS instance.
The app is a core product that will have some client-specific config when deployed into a client-specific AWS environment. Code-wise we are looking to keep the core product the same.
Based on the reading I did, placing CDK code alongside app code would allow us to define Constructs that correspond to app components (e.g. Front End S3 Buckets, Back End Lambda + API Gateway, RDS setup, etc.), all located in the same repo. I was thinking of producing artifacts (e.g. Nugets, binaries, zips) from the core app and cdk builds.
Now, a client-specific setup would consume artifacts created from the main repository build to produce a client-specific build, composed of the core app + base AWS CDK constructs. I can imagine building AWS CDK stacks that use the ones created in the core repo and add client-specific configuration. I'm still unsure how to combine the core app with the client-specific config, but am wondering whether anyone has either solved this problem or has suggestions.
I think you can start with AWS CDK Best Practices, with special attention to the Coding best practices section. Second, you can refer to the AWS Solutions Architect article that describes a Recommended AWS CDK project structure for Python applications; although it targets Python rather than your stack, you can still find the general principles in it. Third, a better way to structure AWS CDK projects is around Nested Stacks: convert the Stacks into libraries and re-use the code.

How to change CodePipeline artifacts behavior? (If possible)

I am designing a CI/CD system and looking for complete isolation between frontend and backend components. They will use separate CodeCommit Repositories, CodeBuild Projects, and deployment mechanisms. I've even created separate S3 buckets to house the artifacts. Furthermore, I'm using lengthy descriptive names for the pipelines, and related services, to ensure there's no confusion.
However, I am hitting an annoying issue where CodePipeline creates a folder inside my artifacts S3 bucket with a truncated version of the pipeline name. I feel this is unnecessary and will only add to the confusion I'm trying to avoid. The entire S3 bucket is dedicated to those pipeline artifacts, so I don't want a folder containing everything within it. I don't see any way to stop CodePipeline from behaving this way.
Example:
Pipeline Name: my-clients-pipeline-for-frontend
S3 Artifacts: my-clients-pipeline-artifacts-for-frontend
----> my-clients-pipeline-f
--------> SourceArtifact
--------> BuildArtifact
Pipeline Name: my-clients-pipeline-for-backend
S3 Artifacts: my-clients-pipeline-artifacts-for-backend
----> my-clients-pipeline-f
--------> SourceArtifact
--------> BuildArtifact
The documentation states, "Every time you use the console to create another pipeline in that Region, CodePipeline creates a folder for that pipeline in the bucket. It uses that folder to store artifacts for your pipeline as the automated release process runs."
I am using CloudFormation to build the pipelines, though, so I'm not sure whether this still applies. The pipelines are working... can I remove this truncated folder somehow?
https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome-introducing-artifacts.html
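For reference, each pipeline points at its own artifact bucket in the template, roughly like this (a trimmed sketch; only the artifact store part is shown):

# Trimmed from the pipeline template: each pipeline gets a dedicated artifact bucket
FrontendPipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    Name: my-clients-pipeline-for-frontend
    ArtifactStore:
      Type: S3
      Location: my-clients-pipeline-artifacts-for-frontend
    # RoleArn and Stages (Source/Build) omitted for brevity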
This response is 19 months late so I hope you were able to find a solution in the meantime. ^-^
I generally try to use the AWS CLI for anything related to modifying the AWS environment while my pipelines are running. So, for example, in your case: if you have set your output artifacts to go into buckets with appropriate names, i.e. "my-clients-pipeline-artifacts-for-backend", then you can add a stage after the deploy stage that calls an AWS Lambda function which uses the AWS CLI to remove any folders created by AWS CodePipeline, for example:
aws s3 rm s3://my-clients-pipeline-artifacts-for-backend/my-clients-pipeline-f --recursive
This command recursively deletes everything under the "my-clients-pipeline-f" folder and then removes the folder itself as well.
I know this isn't a super straightforward approach, but it is a workaround/solution that works for me in most cases. I have not tested it specifically for deleting build-artifact S3 folders, but it should work fine nonetheless.
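Since the pipelines are built with CloudFormation, the cleanup stage could be declared roughly along these lines (a sketch only; "CleanupArtifactFolders" is a hypothetical Lambda that runs the s3 rm above and then reports success back to CodePipeline, see the second reference below):

# Hypothetical extra stage appended to the pipeline's Stages list
- Name: Cleanup
  Actions:
    - Name: RemovePipelineFolder
      ActionTypeId:
        Category: Invoke
        Owner: AWS
        Provider: Lambda
        Version: '1'
      Configuration:
        FunctionName: CleanupArtifactFolders
        UserParameters: my-clients-pipeline-artifacts-for-backend/my-clients-pipeline-f
      RunOrder: 1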
REFERENCES
Integrate AWS CLI in Lambda: https://bezdelev.com/hacking/aws-cli-inside-lambda-layer-aws-s3-sync/
Invoke Lambda in CodePipeline: https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html#actions-invoke-lambda-function-add-action

Deployment job in Jenkins from S3 bucket to AWS CodeDeploy

I'm trying to create a simple deployment job in Jenkins with the AWS CodeDeploy post-build plugin.
The issue I'm facing is that I'm not able to target an S3 zip file as the deployment package. I don't want to upload anything from CodeDeploy; I just want to trigger a deployment from Jenkins with the proper configuration (bucket, region, and of course the package.zip, which already exists in the bucket).
Is there any "easy" way I can do that?
https://aws.amazon.com/blogs/devops/setting-up-the-jenkins-plugin-for-aws-codedeploy/
The current setup works like a charm in that the deployment is triggered on AWS, but with the wrong target file, so the deployment fails at the moment. There is no way to merge the build (and upload to S3) and the deploy job together.
I switched to the AWS CLI to properly target a bucket for the deployment. There is no way to use the plugin for a situation like this.
Instead of having build and deploy as two different stages, you can have both in the same stage, where the Jenkins job checks out the code and the CodeDeploy post-build job automatically zips and stores the revision in S3; this is actually the way I achieved it. But the best way is to use the AWS CLI.

Configure serverless framework not to upload to S3?

I need to deploy a serverless function to AWS Lambda using the Serverless Framework. Serverless uses AWS CloudFormation to build the whole stack and uploads the module to S3. It uploads to S3 by default, but the intended file is less than 10 MB and could be attached to AWS Lambda directly. How can I configure serverless.yml to achieve this scenario?
This is not possible.
You've asked Serverless to create a CloudFormation template that creates some Lambdas. When AWS executes the template, it executes it in the cloud, away from your computer's local files. That's why your code is packaged, uploaded to S3, and made available for CloudFormation to use.
CloudFormation does allow code to be inline in the template, but Serverless does not support this. And there is no way to ask CloudFormation to create a Lambda with no code attached, for manual upload at a later date.
Frankly, the cost of the additional bucket and a few small files is minimal (if any). If the concern is the extra deployment bucket, you can specify a deployment bucket name to be shared across multiple Serverless deployments.
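As a rough sketch (service, runtime and bucket names are illustrative), the shared bucket goes under provider in serverless.yml:

# serverless.yml: reuse one pre-existing bucket for deployment artifacts
service: my-service
provider:
  name: aws
  runtime: nodejs14.x
  deploymentBucket:
    name: my-shared-serverless-deployments
functions:
  hello:
    handler: handler.hello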

CodeDeploy to S3

I have a site in an S3 bucket, configured for web access, for which I run an aws s3 sync command every time I push to a specific git repository (I'm using GitLab at the moment).
So if I push to the stable branch, a GitLab runner runs the npm start build command to build the site, and then aws s3 sync to synchronize it to a specific bucket.
I want to migrate to CodeCommit and use pure AWS tools to do the same.
So far I have been able to successfully set up the repository, create a CodeBuild project for building the artifact, and have the artifact stored (not deployed) in an S3 bucket. The difference is that I can't get it to deploy to the root folder of the bucket instead of a subfolder; it seems the process is not made for that. I need it in the root folder because of how the web access is configured.
For the deployment process I was taking a look at CodeDeploy, but it doesn't actually let me deploy to an S3 bucket; it only uses the bucket as an intermediary for deployment to an EC2 instance. So far I get the feeling CodeDeploy is useful only for deployments involving EC2.
This tutorial, which has a similar requirement to mine, uses CodePipeline and CodeBuild, but the deployment step is actually an aws s3 sync command (the same as I was doing on GitLab), and the actual deployment step on CodePipeline is disabled.
I was looking for a solution that uses AWS features made for this specific purpose, but I can't find one.
I'm also aware of LambCI, but to me it looks like it does what CodePipeline / CodeBuild is doing: storing artifacts (not deploying to the root folder of the bucket). Plus, I'm looking for an option that doesn't require me to learn or deploy new configuration files (outside AWS config files).
Is this possible with the current state of AWS features?
Today AWS announced a new feature: the ability to target S3 in the deployment stage of CodePipeline. The announcement is here, and the documentation contains a tutorial available here.
Using your CodeBuild/CodePipeline approach, you should now be able to choose S3 as the deployment provider in the deployment stage rather than performing the sync in your build script. To configure the phase, you provide an S3 bucket name, specify whether to extract the contents of the artifact zip, and if so provide an optional path for the extraction. This should allow you to deploy your content directly to the root of a bucket by omitting the path.
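If the pipeline happens to be defined in CloudFormation, the deploy stage with the new S3 provider might look roughly like this (a sketch; bucket and artifact names are illustrative):

# Deploy stage using the S3 deploy action; Extract unzips the artifact so the
# files land at the root of the bucket
- Name: Deploy
  Actions:
    - Name: DeployToS3
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: S3
        Version: '1'
      InputArtifacts:
        - Name: BuildArtifact
      Configuration:
        BucketName: my-website-bucket
        Extract: 'true'
      RunOrder: 1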
I was dealing with a similar issue and, as far as I was able to find out, there is no service which is suitable for deploying an app to S3.
AWS CodeDeploy is indeed for deploying code that runs on servers.
My solution was to use CodePipeline with three stages:
Source, which takes the source code from AWS CodeCommit
Build with AWS CodeBuild
A custom Lambda function which, after a successful build, takes the artifact from the S3 artifact store, unzips it, and copies the files to my S3 website host.
I used this AWS Lambda function from SeamusJ: https://github.com/SeamusJ/deploy-build-to-s3
Several changes had to be made; I used node-unzip-2 instead of unzip-stream for unzipping the artifact from S3.
I also had to change the ACLs in the website.ts file.
Uploading from CodeBuild is currently the best solution available.
There are some suggestions on how to orchestrate this deployment via CodePipeline in this answer.