A code agent gives us the ability to run builds against local instances, or we can use the standard containers that get spun up when we run a build. I want to know if there is a way to dynamically select between these at run time. For example, is it possible to run my build and check whether my local agent is busy; if it is, spin up a build container, and if not, run the build on the local agent?
I'm using Google Cloud Platform and exploring its CI/CD tools.
I have an app deployed in a VM instance and I'm wondering if I can use GCP tools such as Cloud Build to do CI/CD instead of using Jenkins.
From what I've learned from several resources, Cloud Build seems to be a nice tool for Cloud Run (deploying Docker images) and Cloud Functions.
Can I use it for apps deployed in VM instances?
When you create a job in Cloud Build, you set up a cloudbuild.yaml file in which you specify the build steps. How do you write a step so that it will go into a Linux VM, log in as a particular user, cd into a directory, pull the master branch of the project repo, and start running main.py (say it's a Python project)?
You can do it like this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args:
    - "-c"
    - |
      gcloud compute ssh --zone us-central1-a my_user@oracle --command="whoami; ls -la; echo cool"
However, that's not a cloud-native way to deploy an app. VMs aren't "pets" but "cattle": when you no longer need one, kill it, no emotion!
So, a modern way to use the cloud is to create a new VM with the new version of your app. Optionally, you can keep the previous VM stopped (so you pay nothing for it) in case you need to roll back. To achieve this, you can add a startup script that installs all the required packages and libraries plus your app on the VM, and starts it.
An even easier way is to create a container. That way, all the system dependencies live inside the container and the VM doesn't need any customization: simply download the container and run it.
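For instance, a minimal sketch with the gcloud CLI; the instance name, zone, and image path are placeholders:

# Create a VM that pulls and runs your container image at boot
gcloud compute instances create-with-container my-app-v2 \
    --zone us-central1-a \
    --container-image gcr.io/my-project/my-app:v2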
Cloud Build lets you create a VM with a startup script via the gcloud CLI, and you can stop the previous one as well. Do you have a persistent disk to reuse (for the data shared between versions)? With Cloud Build you can also clone it and attach the clone to the new VM, or detach it from the previous VM and attach it to the new one!
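As a rough sketch, such a Cloud Build step could look like the following; the instance names, zone, and startup script path are assumptions:

- name: gcr.io/cloud-builders/gcloud
  entrypoint: "bash"
  args:
    - "-c"
    - |
      # Create the new VM; the startup script installs dependencies and starts the app
      gcloud compute instances create my-app-v2 \
          --zone us-central1-a \
          --metadata-from-file startup-script=startup.sh
      # Keep the previous VM around, stopped, in case of rollback
      gcloud compute instances stop my-app-v1 --zone us-central1-a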
I'm working on a C#/C++ project that needs a specific hardware architecture; therefore, even for unit testing, I can't run the tests in a standard Docker container; I need to use a specific AWS EC2 instance.
My current build steps are the following:
Build Step 1: TeamCity makes the build in a local Docker container.
Build Step 2: TeamCity uploads the artifacts to S3.
Build Step 3: TeamCity launches the specific AWS EC2 instance.
Now I want to tell TeamCity to run the tests on this specific instance while still following the progress of the tests, to be able to send alerts.
Or, at least, I could manage the test execution at instance startup, but then I need information on the output format to send back to TeamCity.
Regards
I have a question regarding AWS: I have an AMI with Windows Server installed, IIS installed, and a site up and running.
My Auto Scaling group always maintains two instances created from this AMI.
However, whenever I need to change something on the site, I have to launch a new instance, make the changes, update the AMI, and update the Auto Scaling group, which is quite time consuming.
Is there any way to automate this by linking to a Git repository?
This is more of a CI/CD job than something achieved purely within AWS.
You can set up a CI/CD pipeline that detects any update in SCM (Git) and triggers a build job (Jenkins or a similar tool), which produces an artifact. You can then deploy the artifact to the respective application server using CD tools (Ansible, or even Jenkins or similar tools), whichever suits your infra. In the deploy script itself you can call the EC2 service to create a new AMI once the deployment is completed, as sketched below.
You need a set of tools to achieve this: an SCM webhook/poll, Jenkins, and Ansible.
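A minimal sketch of that last step with the AWS CLI; the instance ID and image name are placeholders:

# Bake a fresh AMI from the instance the artifact was just deployed to
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-site-$(date +%Y%m%d%H%M)" \
    --no-reboot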
Currently working on an environment requirement where we need to push the same file out to multiple EC2 instances running Windows on a scheduled interval. As it stands now, I see a few options and have tried each:
Windows Task Scheduler: run a basic task on a set schedule invoking the S3 Sync CLI tool
Cons I can see here include: setting up the task on each EC2 instance (there are many).
Lambda: scheduled lambda job that utilizes SSM to run commands on each server in a resource group
Cons: introducing another layer required to execute this task.
Run Command: using an AWS-RunRemoteScript document, run a script (stored in an S3 bucket) on target instances.
Cons: I'm not positive you can automate these commands on a schedule without adding another layer.
What is the most scalable path forward? Thanks in advance for your help.
Using the Run Command feature of AWS Systems Manager, scheduled either through a Systems Manager Maintenance Window or through a CloudWatch Events rule, should work well here.
If you also tag instances appropriately, you can use the tag targeting feature of Run Command to ensure that all instances run the command (including new instances launched in the future as long as they are tagged).
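For illustration, a sketch of the tag-targeted invocation with the AWS CLI; the tag, bucket path, and script name are placeholders, and a maintenance window or CloudWatch Events rule would trigger this same call on your schedule:

# Run the S3-hosted script on every instance tagged Role=FileSync
aws ssm send-command \
    --document-name "AWS-RunRemoteScript" \
    --targets "Key=tag:Role,Values=FileSync" \
    --parameters '{"sourceType":["S3"],"sourceInfo":["{\"path\":\"https://s3.amazonaws.com/my-bucket/sync.ps1\"}"],"commandLine":["sync.ps1"]}'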
/Mats
I'm trying to set up an Auto Scaling group in combination with CodeDeploy. Everything works fine except for the fact that when a new instance is created, CodeDeploy starts before the user data script (defined in the launch configuration) finishes.
The default user data script downloads and installs the CodeDeploy agent, and I've extended it to install a couple of Windows features, the IIS rewrite module, and msdeploy.
In my appspec.yml I'm using the AfterInstall hook to deploy my IIS website, and this obviously fails when msdeploy is not (yet) installed.
Am I going about this the wrong way, or is there a way to make CodeDeploy wait for the user data script to finish?
Unfortunately, there's no way for CodeDeploy to know anything more than that the instance has loaded its OS. The good thing is that CodeDeploy gives the host agent 1 hour to start polling for commands on automatic deployments. The easiest thing to do is install the host agent after all the required dependencies are installed. The automatic deployment will still be created, but it can't proceed until the host agent is started.
This is explained in detail here - https://aws.amazon.com/blogs/devops/under-the-hood-aws-codedeploy-and-auto-scaling-integration/
Ordering execution of launch scripts – The CodeDeploy agent looks for and executes deployments as soon as it starts. There is no ordering between the deployment execution and launch scripts such as user data, cfn-init, etc. We recommend you install the host agent as part of (and maybe as the last step in) the launch scripts so that you can be sure the deployment won’t be executed until the instance has installed dependencies that are not part of your CodeDeploy deployment. If you prefer baking the agent into the base AMI, we recommend that you keep the agent service in a stopped state and use the launch scripts to start the agent service.
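In user data terms, that ordering looks roughly like the following Linux sketch; the dependency step and region are placeholders, and on a Windows instance the same pattern applies in the PowerShell user data, with msdeploy and the IIS modules installed before the agent:

#!/bin/bash
# 1. Install everything the deployment depends on first
yum install -y ruby wget              # agent prerequisites
yum install -y my-app-dependencies    # placeholder for your real dependencies
# 2. Install and start the CodeDeploy agent last, so the pending
#    deployment can only proceed once the dependencies are in place
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto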