Simple question: I wrote a lambda function (Golang) which I usually deploy manually, through either the AWS CLI or the console.
Recently I moved the code repository to CodeCommit and I'm thinking about using the other dev services, too.
I guess I should write a buildspec.yml for CodeBuild to create the build zip on my behalf and upload it to S3.
What I can't find enough (and clear) documentation about is CodeDeploy.
The deployment action is extremely simple, as I understand it: deploy to Lambda a zip stored in my S3 bucket. However, the only official documentation I found revolves around CloudFormation, SAM, etc.
I'd prefer to keep my CI/CD as simple as possible. What do you suggest? Am I getting this wrong? Can you please point me in the right direction?
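For context, something like the sketch below is what I have in mind for the buildspec; the Go version, binary name, and bucket name are placeholders I would replace with my own values:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      golang: 1.14          # placeholder; use the Go version you actually build with
  build:
    commands:
      # Cross-compile a Linux binary for Lambda and package it
      - GOOS=linux GOARCH=amd64 go build -o main .
      - zip function.zip main
  post_build:
    commands:
      # Upload the deployment package to the artifact bucket (name is a placeholder)
      - aws s3 cp function.zip s3://my-lambda-artifacts/function.zip

artifacts:
  files:
    - function.zip
```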
Related
Since CodePipeline does not support git tag-based triggers natively, what's the best way to control which commit should be deployed using CodePipeline/CodeBuild, in case we do not want to deploy the head of the branch?
I noticed an article, "Customizing triggers for AWS CodePipeline with AWS Lambda and Amazon CloudWatch Events". Hopefully it can help you.
However, the solution in the article may be a little complex.
If your source code is stored in a GitHub or Bitbucket repository, you may try to create a custom webhook with whatever filters you like; please refer to this link for details about creating a webhook.
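As a rough illustration only (the pipeline resource, source action name, and secret parameter below are assumptions), such a webhook could be declared in CloudFormation so that it only fires for a specific tag:

```yaml
TagWebhook:
  Type: AWS::CodePipeline::Webhook
  Properties:
    Authentication: GITHUB_HMAC
    AuthenticationConfiguration:
      SecretToken: !Ref WebhookSecret          # hypothetical parameter
    RegisterWithThirdParty: true
    TargetPipeline: !Ref MyPipeline            # hypothetical pipeline resource
    TargetPipelineVersion: !GetAtt MyPipeline.Version
    TargetAction: Source                       # name of the GitHub source action
    Filters:
      # Only trigger when the pushed ref matches this exact tag
      - JsonPath: "$.ref"
        MatchEquals: refs/tags/release-1.0
```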
I have created a simple deploy pipeline using Jenkins. I have created the CodeDeploy application, the S3 bucket, the Auto Scaling group, the AMI, everything listed in the docs. But it needs an appspec.yml. I have looked at the documentation for appspec.yml, and it's very confusing.
Is there any way to generate an appspec.yml? I am not even sure what its role is. I thought CodeDeploy would take the zip file out of the S3 bucket and deploy it to the scalable group.
Any help?
appspec.yml is the file that tells the CodeDeploy service what tasks it should perform with the code on your EC2 servers, so it needs to be written according to your workflow. This documentation and the examples will help you achieve what you want.
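For example, a minimal appspec.yml for an EC2/on-premises deployment might look roughly like this; the destination path and hook scripts are placeholders you would adapt to your own application:

```yaml
version: 0.0
os: linux
files:
  # Copy the contents of the revision to the web root
  - source: /
    destination: /var/www/my-app
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
      runas: root
  ValidateService:
    - location: scripts/validate_service.sh
      timeout: 60
```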
Is there any way to generate an appspec.yml?
You can't auto-generate the file. It must be custom designed for your specific application, and only you know what your application is, how it works, how it is configured, what its dependencies are, and so on.
I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is going to be connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use the aws_codepipeline resource, which will create a new pipeline in AWS.
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform.
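As a sketch of the first approach (the IAM role, artifact bucket, CodeBuild project, and GitHub details below are assumptions you would replace with your own resources), an aws_codepipeline definition could look like this; running terraform apply against it is what actually reaches out to AWS and creates the pipeline:

```hcl
resource "aws_codepipeline" "example" {
  name     = "example-pipeline"
  role_arn = aws_iam_role.codepipeline.arn      # assumed IAM role resource

  artifact_store {
    location = aws_s3_bucket.artifacts.bucket   # assumed artifact bucket
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"               # version 1 GitHub source action
      version          = "1"
      output_artifacts = ["source_output"]

      configuration = {
        Owner      = "my-github-org"            # placeholder
        Repo       = "my-repo"                  # placeholder
        Branch     = "main"
        OAuthToken = var.github_token           # assumed variable
      }
    }
  }

  stage {
    name = "Build"

    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"]

      configuration = {
        ProjectName = aws_codebuild_project.example.name   # assumed project
      }
    }
  }
}

# A pipeline created by hand in the console can be adopted later with:
#   terraform import aws_codepipeline.example example-pipeline
```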
I see you submitted this eight months ago, so I am pretty sure you already have your answer, but for those who come across this question while searching, here are my thoughts on it.
As most of you will have researched, Terraform is infrastructure as code (IaC). As IaC it needs to be executed somewhere, which means you either execute it locally or inside a pipeline. A pipeline consists of Docker containers that emulate a local environment and run commands for you to deploy your code. There is more to it than that, but the premise of how Terraform runs remains the same.
So to the magic question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, and more), then you need a code repository to put all that code into; in this case, a repository where you can store your Terraform code so the pipeline can consume it when deploying. There are other reasons to use a code repository, but your question is about Terraform and its usage with the pipeline.
Now the magnificent argument, the chicken or the egg: when to create your pipeline and how to do it. To your original question, you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and run Terraform locally to create your pipeline. This would be ideal for saving time and leveraging automation. Newbies, you will have to research Terraform state files, which are something you need to back up in some form once the pipeline has been deployed for you.
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can easily configure the pipeline to hook into GitHub and run jobs.
In both scenarios you must set up Terraform and AWS locally on your machine or within the pipeline to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS credentials on your local machine. For newbies using a pipeline, you can leverage some of the pipeline links to get started. Remember one thing: within AWS CodePipeline you have to use IAM roles, not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search for Terraform for beginners in AWS; various videos can provide a lot more substance to help you get started.
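On the state-file point above, one common approach is to keep the state remote from the start. Here is a minimal sketch, assuming an S3 bucket and a DynamoDB lock table that you create beforehand (both names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # assumed, must exist already
    key            = "pipelines/codepipeline.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"             # assumed table for state locking
    encrypt        = true
  }
}
```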
I want to achieve continuous delivery for provisioning AWS resources using Bitbucket and AWS. My use case is to create a Kinesis Firehose delivery stream with Elasticsearch as the destination. I want to achieve this using AWS CloudFormation templates (keeping in mind the different stages for dev, uat, prod). Whenever I update my Bitbucket repo, a build should be triggered and the stack should be updated in AWS. Any help will be highly appreciated.
I have searched a lot over the internet but could not find any relevant examples which clearly describes my use case.
In short: a CloudFormation template committed in Bitbucket should provision AWS resources in the cloud.
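Roughly, this is the kind of template I want the pipeline to apply on each push; the IAM role, backup bucket, and Elasticsearch domain referenced below are assumed to be defined elsewhere in the template or passed in as parameters:

```yaml
Resources:
  DeliveryStream:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamType: DirectPut
      ElasticsearchDestinationConfiguration:
        DomainARN: !GetAtt SearchDomain.Arn       # assumed Elasticsearch domain
        IndexName: events
        IndexRotationPeriod: OneDay
        RoleARN: !GetAtt FirehoseRole.Arn         # assumed IAM role
        S3BackupMode: FailedDocumentsOnly
        BufferingHints:
          IntervalInSeconds: 60
          SizeInMBs: 5
        S3Configuration:
          BucketARN: !GetAtt BackupBucket.Arn     # assumed backup bucket
          RoleARN: !GetAtt FirehoseRole.Arn
```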
You can use AWS CodePipeline for this purpose. The only issue is that CodePipeline does not work directly with Bitbucket; it only works with AWS CodeCommit or GitHub as the triggering repo.
But there is a workaround for that. You can set up syncing from Bitbucket to GitHub and then set the GitHub repository as the source repository for the pipeline you create on AWS CodePipeline. You can find many guides for syncing, so I won't explain it here. The pipeline itself can be defined as a template.
I have already explained setting up AWS CodePipeline in another answer here, which you can follow for this purpose. Hope this helps!
I have a site in an S3 bucket, configured for web access, for which I run an aws s3 sync command every time I push to a specific git repository (I'm using GitLab at the moment).
So if I push to the stable branch, a GitLab runner performs the npm start build command to build the site, and then runs aws s3 sync to synchronize to a specific bucket.
I want to migrate to CodeCommit and use pure AWS tools to do the same.
So far I was able to successfully set up the repository and create a CodeBuild project for building the artifact, and the artifact is being stored (not deployed) in an S3 bucket. The difference is that I can't get it to deploy to the root folder of the bucket instead of a subfolder; it seems the process is not made for that. I need it to be in the root folder because of how the web access is configured.
For the deployment process, I was taking a look at CodeDeploy, but it doesn't actually let me deploy to an S3 bucket; it only uses the bucket as an intermediary for deployment to an EC2 instance. So far I get the feeling CodeDeploy is useful only for deployments involving EC2.
This tutorial, which has a similar requirement to mine, uses CodePipeline and CodeBuild, but the deployment step is actually an aws s3 sync command (the same thing I was doing on GitLab), and the actual deployment stage in CodePipeline is disabled.
I was looking for a solution that uses AWS features made for this specific purpose, but I can't find any.
I'm also aware of LambCI, but to me it looks like it does the same thing CodePipeline / CodeBuild is doing: storing artifacts (not deploying to the root folder of the bucket). Plus, I'm looking for an option that doesn't require me to learn or deploy new configuration files (outside AWS config files).
Is this possible with the current state of AWS features?
AWS has today announced, as a new feature, the ability to target S3 in the deployment stage of CodePipeline. The announcement is here, and the documentation contains a tutorial available here.
Using your CodeBuild/CodePipeline approach, you should now be able to choose S3 as the deployment provider in the deployment stage rather than performing the sync in your build script. To configure the phase, you provide an S3 bucket name, specify whether to extract the contents of the artifact zip, and if so provide an optional path for the extraction. This should allow you to deploy your content directly to the root of a bucket by omitting the path.
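As a rough sketch (the bucket and artifact names are placeholders), the deploy stage could look like this in a CloudFormation definition of the pipeline; leaving ObjectKey out drops the extracted files at the bucket root:

```yaml
# One entry under the pipeline's Stages property
- Name: Deploy
  Actions:
    - Name: DeployToS3
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: S3
        Version: "1"
      InputArtifacts:
        - Name: BuildOutput            # artifact produced by the build stage
      Configuration:
        BucketName: my-website-bucket  # placeholder website bucket
        Extract: "true"                # unzip the artifact instead of copying the zip
        # ObjectKey omitted so the extracted files land at the bucket root
      RunOrder: 1
```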
I was dealing with a similar issue, and as far as I was able to find out, there is no service suitable for deploying an app to S3.
AWS CodeDeploy is indeed for deploying code that runs as a server.
My solution was to use CodePipeline with three stages:
a Source stage which takes the source code from AWS CodeCommit
a Build stage with AWS CodeBuild
a custom Lambda function which, after a successful build, takes the artifact from the S3 artifact store, unzips it, and copies the files to my S3 website host.
I used this AWS Lambda function from SeamusJ https://github.com/SeamusJ/deploy-build-to-s3
Several changes had to be made; I used node-unzip-2 instead of unzip-stream for unzipping the artifact from S3.
I also had to change the ACLs in the website.ts file.
Uploading from CodeBuild is currently the best solution available.
There are some suggestions on how to orchestrate this deployment via CodePipeline in this answer.