How to deploy a code build to an EC2 instance from a GitLab pipeline - amazon-web-services

I have been working on a React web app and I need to deploy it now. I have the codebase in GitLab and I'm using a GitLab pipeline to run the tests, create the build, and deploy. For deployment I need to deploy it to an EC2 instance. My pipeline runs well up until creating the build. Now the problem is how to push that created build to the EC2 instance. Can someone help me here? I tried the following approach.
Gitlab CI deploy AWS EC2
It showed me a connection timeout message instead of connecting to the EC2 instance. After that I allowed all IPs to access the instance over SSH using the security groups. Then it worked fine for me. But the problem is that it's not secure to allow SSH access from all IPs. How can I solve this problem?
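For reference, the deploy job's script roughly looks like this (a sketch; $EC2_HOST, $EC2_USER, and $SSH_PRIVATE_KEY are placeholders stored as GitLab CI/CD variables, and the target path and nginx reload are only illustrative):

# Load the deploy key and trust the host.
eval "$(ssh-agent -s)"
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
mkdir -p ~/.ssh
ssh-keyscan -H "$EC2_HOST" >> ~/.ssh/known_hosts
# Copy the React build to the instance and reload the web server.
scp -r build/ "$EC2_USER@$EC2_HOST:/var/www/my-app"
ssh "$EC2_USER@$EC2_HOST" "sudo systemctl reload nginx"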

Related

How to automatically deploy Docker to an EC2 instance w/o ECS / Is it possible to SSH to an EC2 instance using the post-build commands of a build script?

I am using AWS ECS to automatically deploy my server in a Docker container to my EC2 instance; the only problem is that I have to use an Elastic Load Balancer (ELB). This is for a school project, but it also uses a Telegram bot, so I needed an HTTPS endpoint to receive updates from Telegram. An ELB is completely overkill for this and is also costing me more than I would like, considering everything else is under the free tier that I am using. Does anyone know how to set up automatic deployment of a Docker container to EC2 without an ELB/ECS, OR does anyone know if it is possible to SSH to an EC2 instance during a build, since that could be a way to run a deployment script on the instance automatically from the build? Thanks!
You don't need ECS to run Docker. I have run Docker containers from an EC2 user data script, so that it does a docker run command at launch. Works great.
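A minimal user data sketch of that approach, assuming Amazon Linux 2 (the image name and port mapping are placeholders for your own container):

#!/bin/bash
# EC2 user data: runs once at instance launch (Amazon Linux 2 assumed).
yum update -y
amazon-linux-extras install -y docker
systemctl enable --now docker
# my-org/my-server and the 443:8443 mapping are placeholders for your own image.
docker run -d --restart unless-stopped -p 443:8443 my-org/my-server:latest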

Support For Elastic Beanstalk on LocalStack

I am rather new to LocalStack and I am finding it to be extremely useful. The LocalStack GitHub page does not list Elastic Beanstalk as a supported service. Is there any information on whether this will be rolled out anytime soon?
Elastic Beanstalk automatically creates the infrastructure necessary to deploy and run your application in the cloud. For example, it creates EC2 instances, a load balancer, etc. for you.
If you look at LocalStack, it supports standalone services such as IAM, S3, DynamoDB, etc. Therefore I don't think LocalStack will ever support Elastic Beanstalk as a service.
If you want to simulate running the Elastic Beanstalk application locally, you can try running the following eb CLI command. It will run the application in Docker.
eb local run
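For example, assuming the EB CLI is installed and the application uses a Docker platform (which is what eb local supports); the app name, region, and port here are only placeholders:

# Initialize the EB CLI in the project directory.
eb init my-app --platform docker --region us-east-1
# Build and run the application locally in Docker.
eb local run --port 8080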

How to create a table automatically in AWS Aurora Serverless with the Serverless Framework

I'm trying to create tables automatically with npm migrate whenever we deploy any changes with the Serverless Framework. It worked fine when I used it with an Aurora database. But since I've moved to Aurora Serverless RDS (Sydney region), it's not working at all, because Aurora Serverless RDS runs inside a VPC, so any Lambda function that needs to access it must be in the same VPC.
PS: we're using GitHub Actions as the pipeline to deploy everything to Lambda.
Please let me know how to solve that issue, thanks.
There are only two basic ways that you can approach this: open a tunnel into the VPC or run your updates inside the VPC. Here are some of the approaches to each that I've used in the past:
Tunnel into the VPC:
VPN, such as OpenVPN.
Relatively easy to set up, but designed to connect two networks together and represents an always-on charge for the server. Would work well if you're running the migrations from, say, your corporate network, but not something that you want to try to configure for GitHub Actions (or any third-party build tool).
Bastion host
This is an EC2 instance that runs in a public subnet and exposes SSH to the world. You make an SSH connection to the bastion and then tunnel whatever protocol you want underneath. Typically run as an "always on" instance, but you can start and stop it programmatically.
I think this would add a lot of complexity to your build. Assuming that you just want to run on demand, you'd need a script that would start the instance and wait for it to be ready to accept connections. You would probably also want to adjust the security group ingress rules to only allow traffic from your build machine (whose IP is likely to change for each build). Then you'd have to open the tunnel, by running ssh in the background, and close it again after the build is done.
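A rough sketch of that on-demand flow with the AWS CLI (the instance ID, security group, bastion hostname, Aurora endpoint, and migration command are all placeholders):

# Start the bastion and wait for it to come up.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
# Allow SSH only from this build machine's current public IP.
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr "${MY_IP}/32"
# Open a background tunnel to the Aurora endpoint and run the migration through it.
ssh -f -N -L 3306:my-cluster.cluster-xxxx.ap-southeast-2.rds.amazonaws.com:3306 ec2-user@bastion.example.com
npm run migrate   # placeholder for your migration command, pointed at 127.0.0.1:3306
# Tear everything down again.
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr "${MY_IP}/32"
aws ec2 stop-instances --instance-ids i-0123456789abcdef0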
Running the migration inside the VPC:
Simplest approach (imo) is to just move your build inside the VPC, using CodeBuild. If you do this you'll need to have a NAT so that the build can talk to the outside world. It's also not as easy to configure CodeBuild to talk to GitHub as it should be (there's one manual step where you need to provide an access token).
If you're doing a containerized deployment with ECS, then I recommend packaging your migrations in a container and deploying it onto the same cluster that runs the application. Then you'd trigger the run with aws ecs run-task (I assume there's something similar for EKS, but haven't used it).
If you aren't already working with ECS/EKS, then you can implement the same idea with AWS Batch.
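As a sketch, the aws ecs run-task call for the migration container mentioned above could look like this (the cluster, task definition, subnet, security group, and Fargate launch type are placeholders; yours may differ):

# Run the migration task once, inside the same VPC as the database.
aws ecs run-task \
    --cluster my-app-cluster \
    --task-definition my-app-migrations \
    --launch-type FARGATE \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}'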
Here is an example of how you could approach database schema migration using Amazon API Gateway, AWS Lambda, Amazon Aurora Serverless (MySQL), and the Python CDK.

How to deploy an application on GKE private cluster with terraform?

I have made a bastion host VM (to be used as the master authorized network in the private cluster) and a private cluster with Terraform, which works fine. Now, to deploy an application on the private cluster manually, what we do is SSH into that bastion host VM first, connect to the private cluster, and then run kubectl apply (the deploy command). So how can we do this deployment procedure with a Terraform script in GCP? Can anyone please help, as I couldn't find the right example for doing this in GCP?
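For clarity, the manual steps I described are roughly the following (the VM, zone, cluster, and manifest names here are placeholders):

# SSH into the bastion host.
gcloud compute ssh bastion-vm --zone us-central1-a
# On the bastion: fetch credentials for the private cluster and deploy.
gcloud container clusters get-credentials my-private-cluster --region us-central1 --internal-ip
kubectl apply -f k8s/deployment.yaml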
Instead of SSHing into your master machine, you can, for example, just use Ansible. First you need to configure Ansible to access the machine. Then you can run your Ansible scripts, which contain the kubectl commands for deployment.
Preferably, you should use multiple Ansible roles to split up the deployment of your services; then you can manage everything with a main Ansible playbook.
In addition, Ansible scripts can be hosted in and integrated with a CI/CD server or tool like GitLab CI or Jenkins, and at the end of the day you deploy your services on Kubernetes via your CI/CD pipeline.
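As a rough sketch of that last point, the CI job would then mostly just invoke the playbook (the inventory, playbook name, and variable are hypothetical):

# Run from the GitLab CI or Jenkins job after the application image is built.
# The inventory points at the bastion/master host; the playbook's roles run the
# kubectl commands for each service.
ansible-playbook -i inventories/gcp.ini playbooks/deploy.yml --extra-vars "image_tag=${CI_COMMIT_SHORT_SHA}"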

CodeDeploy Health Constraint Error EC2 & GitHub

I'm trying to launch a single EC2 instance and connect it to my GitHub repository using the CodeDeploy service on AWS. I'm having trouble with the actual deployment of a revision to my EC2 instance. It seems that there is something wrong with the recognition of my EC2 instance's health. The instance, however, is running perfectly, the CodeDeploy agent is running on it, and the IAM roles are configured appropriately. I've been at this for hours and I can't seem to figure out what is wrong.