How do I add a description to a serverless deploy call? - amazon-web-services

I have a pretty complex backend project that I deploy to AWS using the Serverless Framework. The problem I'm facing is related to versioning. I have a React app on the FE, which has a version on it, but I didn't add a version to the BE for simplicity (it is the same app, and I'm not exposing any special API, so I didn't want to deal with versioning matrices between the FE and the BE, backward compatibility, etc.). Is this a mistake?
When I deploy my BE code, AWS does keep track of the deploy calls and adds versions in the Versions tab of the Lambda page, and each version has a Description property. I'd like to populate that Description so I at least have an idea of which code is running at any given time.
I was looking at the serverless docs and couldn't find a way to send a Description up to AWS. I'm calling it like so:
serverless deploy -s integration
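What I was hoping for is something along these lines in serverless.yml (a sketch of what I have in mind; description is a documented property on functions, but I haven't verified whether it carries through to the published version's Description):

# serverless.yml (sketch): tag each function with a human-readable build description,
# pulled from an environment variable set at deploy time (e.g. the git SHA)
# deploy with: GIT_SHA=$(git rev-parse --short HEAD) serverless deploy -s integration
service: my-backend                     # hypothetical service name
provider:
  name: aws
  runtime: nodejs18.x
  versionFunctions: true                # keep publishing a new Lambda version per deploy
functions:
  api:
    handler: src/handler.main           # hypothetical handler path
    description: build ${env:GIT_SHA, 'local'}   # shows up as the function's Description in the console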
NOTE: I don't have CI/CD hooked up yet, but the idea would be that only checkins to a specific branch (master or develop) would do a deploy to AWS (as opposed to doing it manually on a feature branch while developing). Is this something anyone is doing?
Any thoughts and/or ideas on versioning serverless backend are appreciated.

Related

Automate Apigee API Proxy Deployment in CI/CD Pipeline

Is there any recommended method to create and deploy the Apigee API Proxy Bundle via a CI/CD pipeline (I'm using Azure DevOps)?
I want to avoid excessive API Proxy Bundles being created and deployed when there are no changes to be made. I've already tested, and I see that identical bundles still create a new revision.
So far, my own solution is to write a PowerShell script to use apigeecli to download the current bundle and compare it against the apiproxy that I have locally in my repo. If it differs, I create and deploy a new API Proxy Bundle.
Has anyone seen anything better?
I have mainly automated with GitLab, but I will share my ideas, as they may help with your specific case.
So we use version control to manage our Apigee repos. I have set up a GitLab pipeline that checks the diff any time we push to our repository, and only if there are changes do we redeploy the proxy to Apigee. Normally when the pipeline is triggered, we check if there are any changes to target servers, proxies and shared flows, and if changes are detected, we check the deployed revision and environments.
Through my deployment script, I am able to get a list of these changes and pass them to the pipeline as a CHANGES variable. This means that only the modified proxies will be deployed.
In my pipeline I can do something like git diff --name-only $CI_COMMIT_SHA..$CI_COMMIT_BEFORE_SHA > /changes.txt and pass the contents of the changes file into CHANGES to decide what gets deployed.
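Roughly, the relevant part of the .gitlab-ci.yml would look like this (a simplified sketch; the job names and deploy.sh script are illustrative, not our exact setup):

# .gitlab-ci.yml (sketch): detect changed files on every push and deploy only those proxies
stages:
  - detect
  - deploy

detect_changes:
  stage: detect
  script:
    # CI_COMMIT_BEFORE_SHA and CI_COMMIT_SHA are predefined GitLab CI variables
    - git diff --name-only $CI_COMMIT_BEFORE_SHA..$CI_COMMIT_SHA > changes.txt
  artifacts:
    paths:
      - changes.txt

deploy_proxies:
  stage: deploy
  script:
    # deploy.sh (illustrative) reads the list and redeploys only the affected proxies/shared flows
    - CHANGES=$(cat changes.txt)
    - ./deploy.sh "$CHANGES"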

Setup serverless local environment for AWS using serverless framework

Hi, I am using the Serverless Framework to develop my application and I need to set it up in a local environment. I am using API Gateway, Lambda, VPC, SNS, SQS, and the DB is connected via VPC peering. Currently I deploy every time I want to test my code, which is a tedious process and takes about 5 minutes per deploy. Is there any way to set up a local environment that has everything in one place?
It should be possible in theory, but it is not an easy thing to do. There are products like LocalStack that offer exactly this.
But, I would not recommend going that route. Ultimately, by design this will always be a huge cat and mouse game. AWS introduces a new feature or changes some minor detail of their implementation and products like LocalStack need to catch up. Furthermore, you will always only get an "approximation" of the "actual cloud". It will never be a 100% match.
I would think there is a lot of work involved in getting products like LocalStack working properly with your setup and keeping them running well.
Therefore, I would propose to invest the same time into proper developer experience within the "actual cloud". That is what we do: every developer deploys their version of the project to AWS.
This is also not trivial, but the end result is not a "fake version" of the cloud that might or might not reflect the "real cloud".
The key to achieving this is infrastructure as code and as much automation as possible. We use Terraform and Makefiles, which work very well for us. If done properly, we only ever build and deploy what we changed. The result is that changes can be deployed to AWS in seconds and the developer can test the result either through the Makefile itself or using the AWS console.
Another upside is that, in theory, you need to do all the same work anyway for your continuous deployment, so ultimately you reduce work by not having to maintain both local deployments and cloud deployments.

How to use AWS CLI to create a stack from scratch?

The problem
I'm approaching AWS, and the first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly, so to me it is like being punched in the face at the first boxing training session.
First attempt
I've installed both the AWS and SAM CLI tools, so what I would expect is to be able to create an empty stack at first and add resources one by one as the specifications are given/outlined. Instead, what I see is that I need to give the tool a template to create the new stack, which means I need to know how to write it beforehand and therefore the template specification for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to get the final stack template, but then I need to test every new or updated resource locally, so I have to copy the template from the online console to my machine and run the CLI tools with it, which is obviously not the desired development flow.
What I expected
Coming from standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate the deployment to the pipeline.
So what?
All this made me understand that "probably" I'm missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not looking for a guide on specific resource types like every single tutorial I've found online, but something at a higher level on how to handle project development on AWS, best practices, and things like that. I can then dig deeper into any resource later when needed.
AWS's Cloud Development Kit ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and 3rd Party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes, rather than 500 lines of YAML as with SAM. The CDK's reference implementation is in TypeScript. JS/Python are also supported. There are step-by-step AWS online workshops for these and the other supported languages.
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
test the related resources locally
The CDK has built-in testing and assertion support.
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support: e.g. a push to the remote main branch can kick off a deploy.
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source wrapper around the aws-cli that allows you to create AWS resources like Lambda in, say, ~10 lines of code vs ~100 lines if you were to use the aws-cli directly. Yes, you'll need to learn SAM-specific things like the SAM template and the SAM CLI, but it is pretty straightforward using this doc.
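For a sense of what those ~10 lines look like, a minimal SAM template for a single Lambda behind an API endpoint is roughly this (illustrative names and runtime):

# template.yaml (sketch): one Lambda function exposed via API Gateway
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:                       # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get

Running sam build and then sam deploy --guided packages this and creates the stack for you.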
Once you get the hang of it, it becomes easier to start looking under the hood at what SAM is doing and how, and to get into the weeds with the aws-cli if you want, which will then allow you to build custom solutions (using the aws-cli) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be a blocker for advanced features/complex use cases.

Separate URL for each git branch in Cloud Run

I am looking into Cloud Run to host my new app, and I am wondering if it is possible to generate a separate URL for each git branch.
I use Netlify to host my other app. When it is connected to GitHub (or other VCS services), it builds the source code in a branch and deploys it to a URL that is specific to the branch.
Can it be done easily or do I have to write some logic?
Or do you think AWS Amplify or some other service is a better fit?
The concept of Cloud Run and URLs is quite simple:
https://<service-name>-<project hash>.<region>.run.app
If your project and region are the same for all the branches, you simply have to deploy a different service for each branch to get a different URL.
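For example, if you build with Cloud Build triggers, you could deploy a branch-named service along these lines (a sketch; the image path and region are placeholders, and branch names containing slashes would need sanitising before being used in a service name):

# cloudbuild.yaml (sketch): build the image and deploy one Cloud Run service per branch
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$SHORT_SHA', '.']
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/myapp:$SHORT_SHA']
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - deploy
      - myapp-$BRANCH_NAME              # service (and therefore URL) named after the branch
      - --image=gcr.io/$PROJECT_ID/myapp:$SHORT_SHA
      - --region=us-central1            # placeholder region
      - --platform=managed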
That was for Cloud Run. Now, I'm not sure whether Netlify can integrate with Cloud Run; I found no documentation on this.
This answer won't be directly useful to you, but I think it's relevant and worth mentioning.
The open source Knative API (and implementation) actually exposes a "tag" feature while splitting the traffic between multiple revisions: https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#traffictarget
This feature is not currently supported on Cloud Run fully managed, but it will be.
By tagging releases this way, you could define tag: v1 and tag: v2 in your traffic configuration, and you would get new URLs like:
https://v1-SERVICE_NAME...run.app
https://v2-SERVICE_NAME...run.app
that directly go to these specific versions.
And the interesting thing is, the revisions you specify in the traffic: block of the Service object do not have to receive any traffic (you can set percent: 0), but a domain name like the ones shown above would still be created for the inactive revisions of your app.
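On Knative, the traffic block that produces those tagged URLs looks roughly like this (a sketch; the revision names are illustrative and spec.template is omitted for brevity):

# Knative Service (sketch): tag two revisions; v2 gets its own URL even with 0% of the traffic
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: SERVICE_NAME
spec:
  traffic:
    - revisionName: SERVICE_NAME-00001   # illustrative revision name
      tag: v1
      percent: 100
    - revisionName: SERVICE_NAME-00002
      tag: v2
      percent: 0                         # inactive, but still gets https://v2-SERVICE_NAME...run.app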
So when Cloud Run fully managed supports tag fields, you can actually achieve this, although it will be less of an out-of-the-box experience than Netlify.

best practice for bitbucket pipeline deployment in AWS to live server

I am on a project which is about to release its first version. I want to set up a Bitbucket pipeline for deploying to AWS. When doing so, I am afraid that users on the website might be affected while we are deploying. What is the best practice for deploying a new feature to the live server without affecting users on the website?
One possible option might be to put a maintenance page on the web and deploy new code when not many users are using the website. Is there another way to deploy?
As mentioned in the comment, it is something that depends on the underlying tools and technology, but I will focus on your last question.
One possible option might be to put a maintenance page on the web and deploy new code when not many users are using the website. Is there another way to deploy?
First, you should not deploy a new feature without proper testing; the pipeline must include automated testing, as such code can sometimes break the complete application.
You should not put the application under maintenance during deployment; that is why we have CI/CD pipelines. You should design your pipeline in such a way that you are confident the latest code and features will work in production as expected. Many AWS services support blue/green deployment, and an interesting part of blue/green deployment is rollback. You can explore further in the links below, and a minimal sketch of a branch-gated Bitbucket pipeline follows them.
AWS_Blue_Green_Deployments
using-bitbucket-pipeline-for-aws-ecs-deployments
deploy-to-ec2-with-aws-codedeploy-from-bitbucket-pipelines
continuous-deployment-pipeline
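As a very rough illustration, gating deploys to the main branch in bitbucket-pipelines.yml looks like this (a sketch; the build image and deploy.sh script are placeholders, and the deploy step would typically call CodeDeploy, ECS, or whichever AWS service you use, ideally with a blue/green strategy):

# bitbucket-pipelines.yml (sketch): tests run on every push, deploys only from master
image: node:18                           # placeholder build image
pipelines:
  default:
    - step:
        name: Test
        script:
          - npm ci
          - npm test
  branches:
    master:
      - step:
          name: Deploy to AWS
          deployment: production
          script:
            - npm ci
            - npm test
            - ./deploy.sh                # placeholder for your actual deploy command or an Atlassian AWS pipe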