Gitlab-runner autoscale not registering - amazon-web-services

I am new to GitLab and am following this guide:
https://docs.gitlab.com/runner/configuration/runner_autoscale_aws/
It might be out of date?
There are a couple of issues:
When IdleCount is zero, gitlab-runner with docker-machine does not automatically create an instance when I submit a job. I had to set IdleCount to 1 to get gitlab-runner with docker-machine to create an instance.
When I run gitlab-runner in debug mode, it keeps showing builds=0, even though the GitLab shared runners execute the jobs, so the build count is not actually zero. I'm using group shared runners, by the way.
docker-machine uses Ubuntu 16.04 as the default AMI, and spinning up an instance with it fails completely.
I had to point docker-machine at an Ubuntu 18.04 or 20.04 AMI instead. It then spins up an instance and completes, but it does not register the GitLab runner. I logged into the new instance: gitlab-runner is not installed and no Docker container is running on the machine.
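For context, both knobs live in the runner's config.toml under the docker+machine executor. A minimal sketch of the section in question, with placeholder token and IDs (amazonec2-ami is how I pinned the newer Ubuntu image):

    concurrent = 2

    [[runners]]
      name = "aws-autoscaler"
      url = "https://gitlab.com/"
      token = "RUNNER_TOKEN"
      executor = "docker+machine"
      [runners.docker]
        image = "alpine:latest"
      [runners.machine]
        IdleCount = 1    # 0 should create machines on demand, but see the first issue above
        IdleTime = 1800
        MachineDriver = "amazonec2"
        MachineName = "gitlab-docker-machine-%s"
        MachineOptions = [
          "amazonec2-region=us-east-1",
          "amazonec2-vpc-id=vpc-0123456789abcdef0",
          "amazonec2-instance-type=m5.large",
          "amazonec2-ami=ami-0123456789abcdef0"
        ]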
Questions:
Has anybody used this guide recently?
Is AWS tested, or should we use GCP like the GitLab shared runners do?
docker-machine is no longer maintained upstream, but I understood GitLab would still continue supporting it?
I was thinking of building a solution around Lambda functions that create GitLab runners, but I haven't found a way to view the pending jobs queue in GitLab. Any suggestions?
Thanks in advance!
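On the pending-jobs question: the closest thing I've found is the GitLab Jobs API, which can filter jobs by scope, so a Lambda could poll it to decide whether to spin up a runner (though it is per project, not a global queue). A sketch, with placeholder project ID and token:

    curl --header "PRIVATE-TOKEN: <your_access_token>" \
      "https://gitlab.com/api/v4/projects/<project_id>/jobs?scope=pending"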

Related

How can I automatically install from CodeCommit onto a Raspberry Pi?

I want to be able to use AWS CodeCommit as a repo for my scripts, and then have AWS automatically deploy any new commits to a bunch of Raspberry Pi systems (on-premises instances that I've already set up in Systems Manager). Preferably, it would take a commit, install it on a single staging RPi first, test it, and, if the tests go well, install it on the rest of the fleet of RPi systems.
(The Raspberry Pi systems are running Ubuntu Server 20.04 LTS, so are all compatible as per the requirements of Systems Manager)
Is this possible with AWS? Are there any clear guides on how to do this?
The closest I've come to success was following this: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-codecommit.html, but that tutorial explains how to deploy from CodeCommit to an EC2 instance rather than to an on-premises instance. I tried switching to an on-premises instance instead of EC2 (in step 5) and specified the tags I had already assigned to my on-premises instance (in Systems Manager > Fleet Manager), but when I try to run the deployment, I get an error: "The deployment failed because no instances were found for your deployment group. Check your deployment group settings to make sure the tags for your Amazon EC2 instances or Auto Scaling groups correctly identify the instances you want to deploy to, and then try again." The tags are definitely correct, so I don't know why that's failing.
Thanks in advance for any help.
Essentially, I had skipped a bunch of steps in the user guide without realising it. Going back to the start after a decent night's sleep helped.
PEBCAK is a thing.
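For anyone hitting the same "no instances were found" error: note that CodeDeploy matches on-premises instances by the tags registered through its own API, which are separate from the tags you see in Systems Manager's Fleet Manager. A hedged sketch of that registration step (instance name, account ID, and tag are illustrative):

    # Register the Pi with CodeDeploy and tag it so a deployment group can match it.
    aws deploy register-on-premises-instance \
      --instance-name rpi-staging-01 \
      --iam-user-arn arn:aws:iam::123456789012:user/codedeploy-rpi
    aws deploy add-tags-to-on-premises-instances \
      --instance-names rpi-staging-01 \
      --tags Key=Environment,Value=staging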

Dynamically update AMI

I have a question regarding AWS: I have an AMI with Windows Server installed, IIS installed, and a site up and running.
My Auto Scaling group always maintains two instances created from this AMI.
However, whenever I need to change something on the site, I have to launch a new instance, make the changes, create a new AMI, and update the Auto Scaling group, which is quite time-consuming.
Is there any way to automate this by linking it to a Git repository?
This is more of a CI/CD job than something achieved purely within AWS.
You can set up a CI/CD pipeline that detects any update in SCM (Git) and triggers a build job (in Jenkins or a similar tool), which produces an artifact. You can then deploy the artifact to the respective application server using a CD tool (Ansible, or even Jenkins itself), whichever suits your infrastructure. In the deploy script itself you can call the EC2 service to create a new AMI once deployment is complete, as sketched below.
You need a set of tools to achieve this: an SCM webhook or poll, Jenkins, and Ansible.
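As a rough illustration of that last step, a hedged AWS CLI sketch (instance and template IDs are placeholders, and it assumes the Auto Scaling group's launch template is set to use the $Latest version):

    # Bake a fresh AMI from the updated instance, wait for it to become
    # available, then point the launch template's $Latest version at it.
    AMI_ID=$(aws ec2 create-image \
      --instance-id i-0123456789abcdef0 \
      --name "iis-site-$(date +%Y%m%d%H%M)" \
      --no-reboot \
      --query ImageId --output text)
    aws ec2 wait image-available --image-ids "$AMI_ID"
    aws ec2 create-launch-template-version \
      --launch-template-id lt-0123456789abcdef0 \
      --source-version '$Latest' \
      --launch-template-data "{\"ImageId\":\"$AMI_ID\"}"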

How to deploy a spring boot application jar from Jenkins to an EC2 machine

I'm seeing so many different sources on how to achieve CI with Jenkins and EC2, and strangely none seems to fit my needs.
I have 2 EC2 Ubuntu instances. One is empty and the other has Jenkins installed on it.
I want to perform a build on the Jenkins machine and copy the jar to the other Ubuntu machine. Once the jar is there, I want to run mvn spring-boot:run.
That's it: a very simple flow, and I can't find a good guide for it that doesn't involve slaves, Docker, etc.
AWS CodeDeploy lets you take a Jenkins build and deploy it onto your EC2 instances.
A quick Google search turned up a very detailed walkthrough of setting up a code pipeline with AWS CodeDeploy.
The pipeline uses the GitHub -> Jenkins -> EC2 flow, just as you need.
Set up Jenkins to do the build, then scp the artifact to the other machine.
There's an answer here, "how to setup ssh keys for jenkins to publish via ssh", about setting up the keys for SSH.
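A minimal sketch of that flow as a Jenkins "Execute shell" build step, assuming the SSH key is already in place (key path, user, host, and jar name are all placeholders; note that a packaged jar is started with java -jar, since mvn spring-boot:run expects the source tree):

    # Copy the freshly built jar to the second instance and (re)start it.
    scp -i /var/lib/jenkins/.ssh/deploy_key \
      target/myapp-0.0.1-SNAPSHOT.jar ubuntu@10.0.0.12:/opt/app/app.jar
    ssh -i /var/lib/jenkins/.ssh/deploy_key ubuntu@10.0.0.12 \
      'pkill -f app.jar || true; nohup java -jar /opt/app/app.jar > /opt/app/app.log 2>&1 &'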

Codedeploy with AWS ASG

I have configured an AWS ASG using Ansible to provision new instances and then install the CodeDeploy agent via a user_data script, in a similar fashion to that suggested in this question:
Can I use AWS code Deploy for pulling application code while autoscaling?
CodeDeploy works fine and I can install my application onto the ASG once it has been created. When new instances are launched in the ASG by one of my scaling rules (e.g. high CPU usage), the CodeDeploy agent is installed correctly. The problem is that CodeDeploy does not install the application on these new instances. I suspect it is trying to run before the user_data script has finished. Has anyone else encountered this problem, or does anyone know how to get CodeDeploy to automatically deploy the application to new instances spawned as part of the ASG?
AutoScaling tells CodeDeploy to start the deployment before the user data has run. To get around this, CodeDeploy gives the instance up to an hour to start polling for commands on the first lifecycle event, instead of the usual 5 minutes.
Since you are having problems with automatic deployments but not manual ones, and assuming you didn't make any manual changes to your instances that you've forgotten about, there is most likely a dependency specific to your deployment that isn't available yet at the time the instance launches.
Try listing everything your deployment needs in order to succeed and make sure each of those things is available before you install the host agent. If you can log onto the instance fast enough (before AutoScaling terminates it), try grabbing the host agent logs and your application's logs to find out where the deployment is failing.
If you think the host agent is failing to install entirely, make sure you have Ruby 2.0 installed. It is there by default on Amazon Linux, but on Ubuntu and RHEL it needs to be installed as part of the user data before you can install the host agent (a sketch follows). There is an installer log in /tmp that you can check for problems with the initial install (again, you have to be quick to grab the log before the instance terminates).
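For the Ubuntu case, a user_data sketch along those lines (the install bucket is regional, so swap us-east-1 for your region):

    #!/bin/bash
    # Install the agent's Ruby dependency and anything else the deployment
    # needs BEFORE the CodeDeploy host agent, so the first automatic
    # deployment doesn't run against a half-configured instance.
    apt-get update
    apt-get install -y ruby wget
    cd /home/ubuntu
    wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
    chmod +x ./install
    ./install auto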

Using a custom AMI (with s3cmd) in a Datapipeline

How can I install s3cmd on an AMI that is used in the pipeline?
This should be a fairly basic thing to do, but I can't seem to get it done.
Here's what I've tried:
Started a Pipeline without the Image-id option => everything works fine
Navigated to EC2 and created an image of the running instance, to make sure everything needed to run in the pipeline is installed on my custom AMI
Started this AMI manually as an instance
SSH'd into the machine and installed s3cmd
Created another image of the machine, this time with s3cmd installed
Shut down the instance
Started the Pipeline again, this time with the newly created AMI (with s3cmd installed) as the Image-id
Now the resource starts RUNNING, but my activity (ShellCommandActivity) is stuck in the WAITING_FOR_RUNNER state and the script never gets executed.
What do I have to do to get the pipeline running with a custom image? Or is there an easier way to use s3cmd in a pipeline?
Thank you!
I figured it out: the fix was to use a "clean" Amazon Linux AMI (from the marketplace, for example) and install s3cmd on it, rather than creating an AMI out of a running pipeline resource. I saw that my original image had a different kernel version, so that could have been the problem.
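For reference, the custom AMI goes into the imageId field of the pipeline's Ec2Resource. A minimal sketch with placeholder IDs and roles (the image has to stay compatible with what Task Runner expects, which may be why the clone with the different kernel never picked up work):

    {
      "id": "MyEC2Resource",
      "type": "Ec2Resource",
      "imageId": "ami-0123456789abcdef0",
      "instanceType": "t2.micro",
      "terminateAfter": "30 Minutes",
      "role": "DefaultRole",
      "resourceRole": "DefaultResourceRole"
    }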