AzureDevOps - Parameters question for YAML templates

Alright so I have no idea if this is possible, but I was told so by someone more experienced than me...
I have a pipeline in Azure DevOps that creates a CloudFormation stack. The stack is created from a template, and the template requires some parameters.
Currently I am passing the parameters by hardcoding the values in the template file. This is just for testing purposes.
But I was told that there is a way for Azure DevOps to prompt the customer in a GUI-like way for inputs, which Azure DevOps will then pass to the template?
The GUI bit is what confuses me. I hope this is clear; can anyone help?

Yes, you can.
When you create a YAML pipeline, parameters act as an input source for each pipeline run.
Just declare your parameters with their types and use them in your pipeline tasks as needed.
For example, declare a parameter in the pipeline YAML. When you then run the pipeline manually, Azure DevOps renders each parameter as an input field (or dropdown) in the "Run pipeline" pane; that is the "GUI-like" prompt.
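A minimal sketch of such a pipeline (the parameter name, its values, and the echo step are just examples):

```yaml
parameters:
  - name: environment
    displayName: Target environment
    type: string
    default: dev
    values:      # rendered as a dropdown in the "Run pipeline" pane
      - dev
      - prod

steps:
  - script: echo "Deploying to ${{ parameters.environment }}"
```

The chosen value is expanded at `${{ parameters.environment }}` when the run is queued, so you can forward it to your CloudFormation deployment task, e.g. as a stack parameter override.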

How to use AWS CLI to create a stack from scratch?

The problem
I'm approaching AWS, and my first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly; to me it's like being punched in the face at your first boxing session.
First attempt
I've installed both the AWS and SAM CLI tools, so what I expected was to be able to create an empty stack first and add resources one by one as the specifications are given/outlined. Instead, what I see is that I need to give the tool a template to create the new stack, which means I need to know how to write one beforehand, and therefore the template specification for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to get the final stack template. But then I need to test every new or updated resource locally, so I have to copy the template from the online console to my machine and run the CLI tools against it, which is obviously not the desired development flow.
What I expected
Coming from standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate the deployment to a pipeline.
So what?
All this made me realize that I'm "probably" missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not looking for a guide on specific resource types, like every single tutorial I've found online, but something higher-level on how to handle project development on AWS, best practices, and so on. I can then dig deeper into any resource later when needed.
AWS's Cloud Development Kit ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and 3rd Party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from a standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes, rather than 500 lines of YAML as with SAM. The CDK's reference implementation is in TypeScript; JS and Python are also supported. There are step-by-step AWS online workshops for these and the other supported languages.
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
test the related resources locally
The CDK has built-in testing and assertion support.
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support: e.g. a push to the remote main branch can kick off a deploy.
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source layer on top of CloudFormation (with its own CLI) that lets you create AWS resources like Lambda in, say, ~10 lines of template vs ~100 lines if you were to use raw CloudFormation or the aws-cli directly. Yes, you'll need to learn SAM-specific things like the SAM template and the SAM CLI, but it is pretty straightforward using this doc.
Once you get the hang of it, it is easier to start looking under the hood at what/how SAM is doing and to get into the weeds with the aws-cli if you want to. That will then allow you to build custom solutions (using the aws-cli) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be a blocker for advanced features/complex use cases.
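For scale, a complete SAM template for an API-triggered Lambda can be about this small (the handler path, runtime, and resource name here are assumptions):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # this line makes it a SAM template
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: ./src
      Events:
        Api:
          Type: Api          # SAM expands this into an API Gateway + permissions
          Properties:
            Path: /hello
            Method: get
```

SAM expands this into the full CloudFormation resources (function, API, IAM role, permissions) at deploy time.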

What is the proper way to build many Lambda functions and updated them later?

I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure, and it looks like their Lambda functions are a perfect fit: you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At the start, I thought to create each new Lambda function programmatically, but I think this will bring me problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So, I moved forward and found SAM (AWS Serverless Application Model) and CloudFormation, which should help me, I guess. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and make new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for this in the CloudFormation UI or any related methods in the AWS SDK.
Thanks
But the main problem: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This lets you fully decouple your Lambda versions from your clients: your clients' code (or API Gateway) invokes an alias of your function, and the alias itself is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. So when you publish a new Lambda version, you just point the alias at it. It's fully transparent to your clients, and their alias does not require any change.
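With the AWS CLI, the publish-and-repoint flow looks like this sketch (the function name my-bot and alias live are hypothetical):

```shell
# Freeze the current $LATEST code as an immutable, numbered version.
VERSION=$(aws lambda publish-version --function-name my-bot \
          --query Version --output text)

# Repoint the "live" alias at the new version; clients keep invoking
# ...:function:my-bot:live and never need to change.
aws lambda update-alias --function-name my-bot \
  --name live --function-version "$VERSION"
```

If repointing the alias turns out to be a mistake, running update-alias again with the previous version number is an instant rollback.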
I agree with @Marcin. Also, it would be worth checking out the Serverless Framework. It seems you are still experimenting, so most likely you are deploying with bash scripts around the AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds, and with serverless-offline you can run a local build of your AWS Lambda architecture on your own machine.
All this has saved me hours of grunt work.

How to avoid deployment of all five functions in a server of serverless framework if only one function is changed

I have a Serverless Framework service with (say) five AWS Lambda functions written in Python. Using GitHub, I have created a CodePipeline for CI/CD.
When I push code changes, it deploys all the functions, even if only one function has changed.
I want to avoid redeploying all the functions; CI/CD should determine the changed function and deploy only it. The rest of the functions should not be deployed again.
Moreover, is there any way to deal with this using AWS SAM? At this stage I still have the option to switch to SAM and quit the Serverless Framework.
Unfortunately there is no "native" way to do it. You would need to write a bash script that loops through the changed files and calls sls deploy -s production -f <function> for each one of them.
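A sketch of that script: the helper below derives function names from changed file paths, assuming each function lives under functions/<name>/ (the layout and the stage name are assumptions; adjust them to your repo):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print the unique function names affected by the changed file paths on stdin.
# Assumes the layout functions/<name>/... — adjust the pattern to your service.
changed_functions() {
  sed -n 's#^functions/\([^/]*\)/.*#\1#p' | sort -u
}

# In CI you would feed it the diff and deploy each changed function:
#   git diff --name-only HEAD~1 \
#     | changed_functions \
#     | xargs -I{} sls deploy -s production -f {}
```

Note that sls deploy -f only updates function code; if you change resources or events in serverless.yml you still need a full sls deploy.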
I also faced this issue, and eventually it drove me to create an alternative.
Rocketsam takes advantage of sam local to allow deploying only changed functions instead of the entire microservice.
It also supports other cool features such as:
Fetching live logs for each function
Sharing code between functions
Template per function instead of one big template file
Hope it solves your issue :)

CloudFormation, AWS Lambda: Ignore Parameter from Old Template

I am deploying a .Net Core Web API project to AWS Lambda. It works, but I have the following issue:
Previous Template Contains Parameter No Longer Used
A previous deployment of our Lambda created a CloudFormation template with a defined Parameter. For discussion, let's call it "BadParameter".
Now, we don't want to use that parameter anymore. We've updated our serverless.template so that it does not have that parameter anymore.
Now, all our deployments (using the update template) fail with the message:
Error creating CloudFormation change set: Parameters: [BadParameter]
do not exist in the template
I can fix this by downloading the template from CloudFormation, manually removing the parameter, then re-uploading the template, but that is tedious and error-prone.
Is there some way I can specify in my new template that the old parameter should be deleted?
I ran into this too. I'm going to give details on what I saw and what I did to fix it, because my situation doesn't seem to be exactly the same as SouthShoreAK's, but it is close enough that I'm certain we are experiencing the same issue.
Situation and Error
parent-template.yml is a CloudFormation template which is being deployed as part of the CI/CD process. Inside this template are several AWS::CloudFormation::StackSet resources, child-template-1.yml, child-template-2.yml, etc. Each child-template-*.yml StackSet resource is deploying into multiple accounts.
I was receiving this error on one of the AWS::CloudFormation::StackSet resources:
Parameters: [OldParameter1, OldParameter2] do not exist in the template
Changes Which Caused the Error
Recent changes were:
In child-template-1.yml I removed these two parameters, OldParameter1 and OldParameter2.
In parent-template.yml, in one of the AWS::CloudFormation::StackSet resources, I had been passing in values for OldParameter1 and OldParameter2. I removed these two ParameterKey/ParameterValue pairs. In other words, I stopped providing these parameters as inputs to the child template.
Steps Taken
I went into child-template-1.yml and added the parameters OldParameter1 and OldParameter2 back into the template.
I gave both parameters a default value, "Dummy".
I re-deployed with aws cloudformation deploy.
I went into child-template-1.yml and removed the parameters OldParameter1 and OldParameter2.
I re-deployed.
At that point, the parameters have been removed from the templates. I do not know why this is necessary; it seems like a bug in CloudFormation to me.
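The temporary re-add in the first two steps looks like this in child-template-1.yml (the parameter types are assumed to be String):

```yaml
Parameters:
  OldParameter1:
    Type: String
    Default: Dummy   # placeholder so the change set can be created
  OldParameter2:
    Type: String
    Default: Dummy
```

The Default matters: it lets the intermediate deploy succeed without anything actually supplying a value for the parameters.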
Old question, I know, but I just ran into this myself using CodePipeline. I can't tell from the OP's question whether they were using it, though.
The solution was to remove the old parameters from the json file referenced at TemplateConfiguration in the CodePipeline CHANGE_SET_REPLACE stage.

How Cloud Formation Works

I see that there are a lot of success stories of using CloudFormation, and we're planning to use it to make sure our Prod/Dev environments are identical. I've heard it's a great way to have a single file, in version control, for deploying multiple similar environments.
I have a doubt. Let's say I use CloudFormer to create a template of, say, my DB instance and save it in Git, and over the next 10-15 days I make a couple of changes, like adding new volumes to the instance to store data files, or deleting some volumes. Now, when I use that template in our Dev environment, will it reflect the volumes which I added/deleted? I mean, how does it work behind the scenes?
This is the basic way to use CloudFormation:
Create a JSON template describing your stack. You can write it manually, or write code that creates the JSON for you.
Create one or more stacks based on the template.
Whenever you want to change something, edit your template (always committing changes to version control) and update the stack(s).
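The create-and-update loop above maps onto a single CLI command; a sketch with made-up stack and file names:

```shell
# Creates the stack on the first run, updates it on every later run.
aws cloudformation deploy \
  --stack-name my-app-dev \
  --template-file template.json \
  --parameter-overrides Environment=dev

# Edit template.json, commit to version control, then run the same
# command again to apply the change to the stack.
```

Running the same command against a second stack name (e.g. my-app-prod) gives you the identical-environments workflow from the question.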
You will often have several templates, where stacks based on one template uses resources created by stacks based on other templates. Outputs and parameters are good for coordinating this.
Most importantly: You should never change resources created using CloudFormation in any other way than by changing the stack template and updating the stack.
No, such changes would not be reflected automatically.
A CloudFormation template is a declarative description of AWS resources. When you create a Stack from a template, AWS will provision all resources described in the template. You can also update a stack with new resources or delete entire stacks.
CloudFormer is a separate tool that will scan your account for resources and create a template describing them.
So, if you create two stacks from the same template, they will be identical only right after creation, and will lead totally separate lives thereafter. But you can have resources that are shared between stacks; for example, you can have one database stack that is referenced by two application stacks, if that makes sense for your environment.