I have my stack on OpsWorks and the app (CakePHP) is deploying fine.
Now I have to configure some things like chmod, PHP versions, etc. I'm reading about this but don't know exactly what's the best way to do it.
Question 1 - Should I do this with custom deploy JSON or via custom cookbooks?
Question 2 - What's the correct way to work with custom cookbooks? Fork the original AWS repositories, update the recipes, and then use them in my stack?
It depends on what you would like to achieve; you can implement this in several ways, such as:
- a recipe, which is invoked only once during a chef-client run (see the sketch after this list);
- a lightweight resource provider (LWRP), which supports notifies and can be invoked zero or more times;
- a definition, which is available before resource collection and can be invoked zero or more times.
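For example, a minimal recipe sketch for your chmod case, assuming the OpsWorks Chef 11 deploy attributes (the app path comes from node[:deploy] and may differ in your stack):

node[:deploy].each do |app_name, deploy|
  # Make CakePHP's tmp directory writable; the mode and path are illustrative.
  directory "#{deploy[:deploy_to]}/current/app/tmp" do
    mode '0775'
    recursive true
  end
end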
For your second question, first check out Berkshelf, a cookbook manager.
I would suggest forking a project only if that project is dead; otherwise, consider contributing to the existing project so everybody benefits from it. You can also write your own wrapper cookbook (a minimal sketch below); see also Chef's wrapper cookbook best practices.
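A wrapper cookbook overrides attributes and then includes the upstream recipe instead of forking it. A minimal sketch, assuming a hypothetical wrapper around a community php cookbook (the attribute name is an assumption; check the upstream cookbook):

# my-php/recipes/default.rb
# (metadata.rb must declare: depends 'php')
node.default['php']['version'] = '7.4' # assumed attribute name; verify upstream
include_recipe 'php::default'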
The problem
I'm approaching AWS, and the first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly, so to me it feels like being punched in the face at my first boxing session.
First attempt
I've installed both the AWS and SAM CLI tools. What I would expect is to be able to create an empty stack at first and add resources one by one as the specifications are outlined. Instead, the tool requires a template to create a new stack, which means I need to know how to write one beforehand, and therefore the template specification for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to get the final stack template. But then I need to test every new or updated resource locally, so I have to copy the template from the online console to my machine and run the CLI tools against it, which is obviously not the desired development flow.
What I expected
Coming from standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate the deployment to the pipeline.
So what?
All this made me realise that I'm "probably" missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not looking for a guide on specific resource types like every tutorial I've found online, but something at a higher level about how to handle project development on AWS, best practices, and the like; I can dig deeper into any resource later when needed.
AWS's Cloud Development Kit (CDK) ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and third-party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes rather than 500 lines of YAML as with SAM. The CDK's reference implementation is in TypeScript; JS and Python are also supported, and there are step-by-step AWS online workshops for these and the other supported languages.
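As a taste of what "infrastructure as classes" looks like, a minimal TypeScript sketch (CDK v2; the bucket and all names are purely illustrative):

import { App, Stack } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

const app = new App();
const stack = new Stack(app, 'WebsiteStack');
// A bucket configured for static website hosting; swap in your own resources.
new s3.Bucket(stack, 'SiteBucket', { websiteIndexDocument: 'index.html' });
app.synth(); // emits a CloudFormation template under cdk.out/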
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
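The day-to-day CLI flow looks roughly like this (a sketch; the project language is your choice):

cdk init app --language typescript   # scaffold a new project locally
cdk synth                            # generate the CloudFormation template
cdk deploy                           # bundle artefacts and push to your account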
test the related resources locally
The CDK has built-in testing and assertion support, for example:
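A minimal sketch using the CDK v2 assertions module (the resource and names are illustrative):

import { App, Stack } from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import * as s3 from 'aws-cdk-lib/aws-s3';

const app = new App();
const stack = new Stack(app, 'TestStack');
new s3.Bucket(stack, 'SiteBucket', { versioned: true });

// Assert the synthesised template contains the bucket with versioning enabled.
Template.fromStack(stack).hasResourceProperties('AWS::S3::Bucket', {
  VersioningConfiguration: { Status: 'Enabled' },
});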
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support: e.g. a push to the remote main branch can kick off a deploy.
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source wrapper around the aws-cli which lets you create AWS resources like Lambda in, say, ~10 lines of code versus ~100 lines if you were to use the aws-cli directly (a minimal template sketch below). Yes, you'll need to learn SAM-specific things like the SAM template and the SAM CLI, but it is pretty straightforward using this doc.
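To illustrate the size claim, a minimal SAM template sketch (the handler, runtime, and paths are illustrative):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function   # expands into a Lambda function, IAM role, etc.
    Properties:
      Handler: app.handler
      Runtime: nodejs18.x
      CodeUri: src/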
Once you get the hang of it, it becomes easier to look under the hood at what/how SAM is doing and get into the weeds with the aws-cli if you want, which will then let you build custom solutions (using the aws-cli) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be blockers for advanced features or complex use cases.
Our infra pipeline is set up using Terraform + GitLab CI. I have been given the task of documenting the setup: what's implemented and what's left. I am new to the infra world and am finding it hard to come up with a template to start the documentation from.
So far I have thought of having a table of the needed resources, with details on dependencies, the source of each module, additional notes, etc.
If you have a template, can you share it? Any other suggestions?
For starters, you could try one or both of the approaches below:
a) create a graph of the Terraform resources using its graph command (see the example after this list)
b) group and then list all of your resources for a specific tag using AWS Resource Groups, specifically its Create Resource Group functionality
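For (a), the usual invocation pipes the graph through Graphviz to render an image (assuming dot is installed):

terraform graph | dot -Tsvg > graph.svg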
The way I do documentation is to keep it as simple as possible: explain how it works, how to use it, and provide instructions on how it was set up from scratch, both for reference and as an insurance policy, so that if it's destroyed, someone other than the person who set it all up could recreate it.
Since this is just a pipeline, there is probably not much to diagram. I would structure the documentation something like the outline below, added either to the README.md, in Confluence, or however your team does documentation.
Summary
1-2 sentences about the work and why it was created.
How the Repo is Structured
An explanation of how the repo is structured and the decisions behind that structure.
How To Use
Steps a user can follow to use the pipeline.
How It Was Created
Steps on how it was set up, so anybody can manage it and work on it going forward.
I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot corresponds to one Lambda function, and they all share the same codebase.
At the start, I thought of creating each new Lambda function programmatically, but I think this will bring me problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem is: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved on and found SAM (AWS Serverless Application Model) and CloudFormation, which I guess should help me. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and create a new stack for each new bot programmatically via the AWS SDK from that template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the old ones? I don't see any options for that in the CloudFormation UI or any related methods in the AWS SDK.
Thanks
But the main problem is: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your clients' code (or API Gateway) invokes an alias of your function; the alias is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version, you just point the alias at it. This is fully transparent to your clients, and their alias requires no change, for example:
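A sketch with the AWS CLI (the function and alias names are placeholders):

aws lambda publish-version --function-name my-bot
aws lambda update-alias --function-name my-bot --name live --function-version 2

Clients keep invoking the fixed ARN ending in :live while the version behind it moves.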
I agree with @Marcin. It would also be worth checking out the Serverless Framework. It seems you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds, and you can use serverless-offline to run a local build of your AWS Lambda architecture on your machine, for instance:
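A sketch of the day-to-day commands (serverless offline assumes the serverless-offline plugin is installed):

serverless deploy    # create/update the CloudFormation stack
serverless remove    # tear the stack down
serverless offline   # emulate API Gateway/Lambda locally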
All this has saved me hours of grunt work.
The documentation on the AWS CDK has examples of setting it up as a standalone application, with support for multiple languages.
I have the following questions about this:
Is it possible to use it within a separate app (written in .NET Core or Angular) like a library?
By the above I mean being able to instantiate the construct classes within my app's services and create stacks in my AWS account.
If yes, how does it affect the deployment process? Will invoking the synth() function generate the CloudFormation templates as expected?
Apologies if my question is vague. I am just getting started with this and am willing to provide necessary details if needed.
I appreciate any help in this regard. Thank you.
I've tried using the CDK as a library, but ran into a few issues and ended up calling it from another app via a CLI call.
I was using TypeScript, and basically what I did was call the synth method on the App construct:
import * as cdk from '@aws-cdk/core';

const app = new cdk.App();
// ... define your stacks and constructs here
const cf = app.synth(); // here you get the cloud assembly
console.log(cf.stacks[0].template); // e.g. inspect the synthesised results here
One issue I found was getting errors during synth, as they were not properly bubbled up.
I couldn't find a way to deal with the assets either...
In summary, I didn't explore it much further than that, but I think the CDK might need more development before all of its features are usable when imported as a library.
Recently I came across a situation where I am building AWS infrastructure using Terraform to set up a clustered environment for some applications. The thing is, when I apply the Terraform scripts, it builds all the necessary modules and spins up multiple instances all at once, then finishes. It may be meant to work like this, and there is nothing to blame; Terraform works great for building such infra.
When I try to set up such infra to deploy an application in a clustered way, I use a configuration management tool: while the EC2 instances are being built, the CM scripts get invoked and the instances are configured accordingly. The problem comes when there is a dependency between the modules.
Consider a scenario where two components (A & B) are part of an Auto Scaling group and two components (C & D) are normal EC2 instances. If I wish to build A first and then C, since C depends on A, which has to be fully configured first (or vice versa), how can I control the order in which Terraform builds them?
Can someone please help me achieve this?
Thanks in advance
The other answer is correct in the literal sense, but overall this is something to avoid. Build your CM code so that it keeps retrying to converge until it succeeds. With Chef in particular, you can use the chef-client cookbook to deploy a service that runs Chef converges automatically at a given interval (30 minutes by default, but you might want to make it shorter). Running things in the "right" order sounds nice, but when dealing with byzantine failures you'll thank your past self for ensuring reliable convergence no matter the order.
You can use the depends_on parameter. Resources can be made explicitly dependent on other resources; Terraform will then only build a resource once the resources it depends on have been provisioned successfully (see the sketch below the link).
https://www.terraform.io/intro/getting-started/dependencies.html
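A minimal sketch in current (0.12+) HCL syntax; the resource names and AMI variable are placeholders:

resource "aws_instance" "a" {
  ami           = var.ami_id
  instance_type = "t3.micro"
}

resource "aws_instance" "c" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  # C is only created after A has been provisioned successfully.
  depends_on = [aws_instance.a]
}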
The question is broad in nature, and the other answers are right in their own right. What I would like to add is that using modules to determine the order of logical sub-projects works well too.
In Terraform you can force procedural order with depends_on at the resource level, but you cannot use it for modules. However, for modules you can use the output of one module as an input to another, which helps you manage procedural order at the module level.
So, in your case, I would put A & B in one module, C & D in another, and use output variables from one as inputs to the other to control the order, along these lines:
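A sketch of the wiring (the module paths and the output/variable names are placeholders; group_ab must define an output named a_instance_id):

module "group_ab" {
  source = "./modules/group_ab"
}

module "group_cd" {
  source = "./modules/group_cd"

  # Consuming an output creates an implicit dependency, so group_cd
  # is only built after group_ab's resources exist.
  a_instance_id = module.group_ab.a_instance_id
}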