Is it possible to use AWS CodePipeline with Lightsail?

I've been working on this all day and couldn't find the answer, so I'm asking you: is it possible to use AWS CodePipeline with AWS Lightsail?
My objective is to store the code in CodeCommit and use CodeBuild, CodeDeploy, CodePipeline, and S3 to set up continuous deployment to a Lightsail instance.
These are the steps I think I have to follow to accomplish the task:
[x] setup a Lightsail instance
[x] create an IAM user and set permissions
[x] transfer my repository to CodeCommit
[x] create an S3 bucket to hold the build artifacts
[x] create a CodeBuild project to build the artifacts
[x] create a buildspec.yml file with my build steps (a minimal sketch follows this list)
[ ] create a CodeDeploy project to deploy my application
[ ] create a CodePipeline project to trigger the build when I commit to a certain branch
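For reference, the buildspec.yml for this setup could look something like the sketch below. The build commands and output directory are placeholders, not the asker's actual project:

```yaml
version: 0.2

phases:
  install:
    commands:
      - npm ci            # install dependencies (placeholder tooling)
  build:
    commands:
      - npm run build     # placeholder build step
      - npm test          # run tests before packaging

artifacts:
  files:
    - '**/*'              # hand everything to the deploy stage
  base-directory: dist    # assumes the build outputs to dist/
```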
As you can see, I'm almost there. But I couldn't find any way to use my Lightsail instance with CodeDeploy. So, my question is: is it possible? Is there some limitation? Did I miss something really basic? Is there any other way to do CD with Lightsail? Sorry, I'm going a little crazy over here haha.

As of today, 08/16/2017, it's not possible to integrate them.
I asked the same question on the AWS forums, and they replied that those services are not integrated yet, since they are separate from each other.
Well, I guess I'll have to find another way.

I'm not a total expert here, but I think the way to do it would be with a custom script in CodeBuild, rather than with CodeDeploy.
CodeDeploy has a lot of custom machinery going on to support rollbacks and that sort of advanced stuff (which means you have to install the agent on your target server, etc.).
CodeBuild is just made for running scripts, so I think it'd be reasonable to add a deploy script (that runs after your tests) which connects to your Lightsail instance via SSH and deploys any changed files (similar to how you'd do it in open source using Travis CI, etc.).
Specifically, I've used the dploy package on npm to do the actual SFTP upload before. It's Git-aware, so it only uploads changes since the last revision (but you could just use rsync if you didn't care about that).
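As an illustration, the deploy step could live in the buildspec's post_build phase. A rough sketch, assuming the host and SSH key are injected as environment variables or secrets (all names here are placeholders):

```yaml
version: 0.2

phases:
  build:
    commands:
      - npm run build   # placeholder build step
      - npm test
  post_build:
    commands:
      # LIGHTSAIL_HOST and DEPLOY_KEY are assumed to be provided to the
      # build (e.g. via CodeBuild environment variables or Secrets Manager).
      - echo "$DEPLOY_KEY" > /tmp/key && chmod 600 /tmp/key
      - rsync -az -e "ssh -i /tmp/key -o StrictHostKeyChecking=no" dist/ "ubuntu@$LIGHTSAIL_HOST:/var/www/app/"
```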

I recently had the same challenge and got it working.
It is necessary to register the Lightsail instance as an on-premises instance with CodeDeploy. The CodeDeploy agent needs to be installed and configured on the instance itself.
I have written a post about how to set this up on my blog.

https://scratchpad.blog/howto/how-to-use-codedeploy-with-aws-lightsail/
Following those steps, you can register the Lightsail instance as an on-premises instance and configure CodeDeploy to deploy to it.
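In outline, it comes down to something like the following (instance name, region, and IAM user are placeholders; the exact steps are in the post above and the CodeDeploy on-premises docs):

```sh
# On the Lightsail instance (Ubuntu shown): install the CodeDeploy agent
sudo apt-get update && sudo apt-get install -y ruby-full wget
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install && sudo ./install auto

# From a machine with admin credentials: register and tag the instance
aws deploy register-on-premises-instance \
    --instance-name my-lightsail-instance \
    --iam-user-arn arn:aws:iam::123456789012:user/codedeploy-onprem
aws deploy add-tags-to-on-premises-instances \
    --instance-names my-lightsail-instance \
    --tags Key=Name,Value=lightsail
```

Deployment groups can then target the instance by that tag.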

Related

Creating a simple pipeline (CodeCommit repository)

I have created a simple deploy pipeline using Jenkins: the CodeDeploy application, the S3 bucket, the Auto Scaling group, the AMI, everything listed in the docs. But it needs an appspec.yml. I have looked at the documentation for appspec.yml, and it's very confusing.
Is there any way to generate an appspec.yml? I am not even sure what its role is. I thought CodeDeploy would take the zip file out of the S3 bucket and deploy it to the Auto Scaling group.
Any help?
appspec.yml is the file that tells the CodeDeploy service what tasks it should perform with the code on your EC2 servers, so it needs to be written according to your workflow. The documentation and its examples will help you achieve what you want.
Is there any way to generate an appspec.yml
You can't auto-generate the file. It must be custom-designed for your specific application, and only you know what your application is, how it works, how it is configured, what its dependencies are, and so on.
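To give a sense of its shape, a minimal appspec.yml for an EC2/on-premises deployment might look like this (the destination path and script names are placeholders for your application):

```yaml
version: 0.0
os: linux
files:
  - source: /                  # copy the whole revision bundle
    destination: /var/www/app  # placeholder install location
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh   # placeholder script in your repo
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh           # placeholder script in your repo
      timeout: 300
      runas: root
```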

Setting up CodePipeline with Terraform

I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is going to be connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside the AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use the aws_codepipeline resource, which will create a new pipeline in AWS.
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform.
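A minimal sketch of the resource (the pipeline name and repository are placeholders, and the referenced role, bucket, connection, and CodeBuild project are assumed to be defined elsewhere in your configuration):

```hcl
resource "aws_codepipeline" "pipeline" {
  name     = "example-pipeline"          # placeholder name
  role_arn = aws_iam_role.pipeline.arn   # assumes an IAM role defined elsewhere

  artifact_store {
    location = aws_s3_bucket.artifacts.bucket  # assumes an artifact bucket
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source"]
      configuration = {
        ConnectionArn    = aws_codestarconnections_connection.github.arn
        FullRepositoryId = "my-org/my-repo"  # placeholder repository
        BranchName       = "main"
      }
    }
  }

  stage {
    name = "Build"
    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source"]
      configuration = {
        ProjectName = aws_codebuild_project.build.name  # assumes a CodeBuild project
      }
    }
  }
}
```

For the second approach, terraform import aws_codepipeline.pipeline example-pipeline brings an existing pipeline under Terraform's management, though you still have to write the matching configuration yourself.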
I see you submitted this eight months ago, so I am pretty sure you have your answer, but for anyone who comes across this question while searching, here are my thoughts on it.
As most of you have researched, Terraform is infrastructure as code (IaC). As IaC, it needs to be executed somewhere; this means you either execute it locally or inside a pipeline. A pipeline consists of Docker containers that emulate a local environment and run commands to deploy your code. There is more to it than that, but the premise of how Terraform runs remains the same.
So, to the magic question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, and more), then you need a code repository to put all that code into; in this case, a repository where you can store your Terraform code so a pipeline can consume it when deploying. There are other reasons to use a code repository, but your question is directed at Terraform and its usage with a pipeline.
Now the magnificent argument, the chicken or the egg: when to create your pipeline and how to do it. To your original question, you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and run Terraform locally to create your pipeline. This would be ideal for saving time and leveraging automation. Newbies: you will have to research Terraform state files, which are something you need to back up in some form or shape once the pipeline is deployed for you.
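On the state point, a common pattern is a remote backend so the state file isn't sitting on one laptop. A minimal sketch, assuming an S3 bucket (and, optionally, a DynamoDB table for locking) created beforehand:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # placeholder bucket, created beforehand
    key            = "pipelines/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"      # optional: state locking
    encrypt        = true
  }
}
```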
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can configure it easily to hook your pipeline into Github to run jobs.
In both scenarios, you must set up Terraform and AWS either locally on your machine or within the pipeline to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS on your local machine. For you newbies using a pipeline, you can leverage some of the pipeline links to get you started. Remember one thing: within AWS CodePipeline, you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search for "Terraform for beginners in AWS"; various videos can provide a lot more substance to help you get started.

from gitlab ci/cd to AWS EC2

It's been some time that I've been trying to figure out a really easy way to do this.
I am using GitLab CI/CD and want to move the built data from there to AWS EC2. The problem is that I found two ways, both of which are really bad ideas:
Building the project on GitLab CI/CD, then SSHing into AWS, pulling the project again from there, and running npm scripts. This is really wrong, and I won't go into details why.
I saw the following: How to deploy with Gitlab-Ci to EC2 using AWS CodeDeploy/CodePipeline/S3, but it's so big and complex.
Isn't there any easier way to copy built files from GitLab CI/CD to AWS EC2?
I use GitLab as well, and what has worked for me is configuring my runners on EC2 instances. A few options come to mind:
I'd suggest managing your own runners (vs. shared runners) and giving them permissions to drop built files in S3, and have your instances pick them up from there. You could trigger SSM commands from the runner targeting your instances (preferably by tags) and they'll download the built files.
You could also look into S3 notifications. I've used them to trigger Lambda functions on object uploads: it's pretty fast and offers retry mechanisms. The Lambda could then push SSM commands to instances. https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

How to script AWS VPC with Cloudformation seeded by Docker

I want to confirm that my approach of setting up a VPC using CloudFormation/Sceptre and seeding instances with a Docker container is correct.
Create an AWS EC2 instance.
Create a Docker image on that instance.
Create a CloudFormation VPC template (.yaml).
- Reference the Docker image in the template?
Create a Sceptre project using the template above and run the script from the EC2 instance.
So, as I understand it, the majority of the work will be in the CloudFormation template. Currently I'm stuck on Sceptre errors, but I wanted to make sure I was approaching the problem correctly. Does this look like the right approach?
There are a lot of ways of doing what you want:
Run Sceptre locally on your development machine. This is easier, but not best practice for important environments, as having a build server gives a better trail of what was done when (especially in shared environments).
Use CodeBuild, which saves you having to do steps 1 & 2 yourself (AWS maintains a Docker image with Python installed). It also avoids the chicken-and-egg problem of how you deploy the EC2 instance in the first place. A sketch of this option follows below.
Configure jobs on a build server such as Jenkins. CodeBuild is good for simple setups, but a well-configured build server can have dashboards to track what is deployed where.
As Sceptre is just a way of generating and deploying templates across environments, there are lots of other ways of doing this, including what you outlined.
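For the CodeBuild option, a buildspec along these lines could work (Sceptre 1.x syntax; the environment name is a placeholder for your project):

```yaml
version: 0.2

phases:
  install:
    commands:
      - pip install sceptre       # AWS's Python build image provides pip
  build:
    commands:
      - sceptre launch-env dev    # Sceptre 1.x command; environment name is a placeholder
```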
P.S. Apologies that the getting-started documentation isn't great at the moment; it is something we are focusing on for release 2.0.

Deploying a specific branch using AWS CodeDeploy

I have followed this guide:
https://blogs.aws.amazon.com/application-management/post/Tx33XKAKURCCW83/Automatically-Deploy-from-GitHub-Using-AWS-CodeDeploy
It mentions that it will push the default branch from GitHub.
What about all the other branches one might have in the same repo?
Can I somehow specify which branch to deploy?
Here is how you can accomplish branch-specific deploy scenarios using AWS CodeDeploy and AWS CodePipeline:
Assuming you've already set up an application and deployment group with CodeDeploy, create one group for your "Dev" branch and another deployment group for "qa" or "stage".
Enable CodePipeline in your AWS Console.
Create a new pipeline by authorizing your Github account and providing access to the repository and branches you desire.
In the BETA section of your new pipeline, edit it, authorize GitHub again, and choose the specific branch you wish to deploy when changes are made.
Now your system will automatically deploy based on a specific branch.
After scratching, cursing, researching, and some out-of-the-box thinking... I managed to do it like this.
As long as CodeDeploy plays nicely only with the default branch, let's manipulate that one from the GitHub API (you can also do it from the settings in the GitHub UI).
The call to change/update the default branch of your repo is a PATCH on the repository.
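A sketch of that call (owner, repo, branch, and token are placeholders):

```sh
# Point the repo's default branch at the branch you want deployed
curl -X PATCH \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/my-org/my-repo \
  -d '{"default_branch": "qa"}'
```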
I have confirmed that CodeDeploy had no problem deploying the new branch! :]