GCP: why does Cloud Build with Cloud Run always create a new service? - google-cloud-platform

Hi, I have a basic CI/CD setup.
Whenever I push new code to my GitHub repository (the trigger), it should be deployed via Cloud Build to Cloud Run.
But currently, whenever the trigger fires, it creates a new service in my cloud account.
I am not sure why this is happening!
What I want is to update the currently running service on Cloud Run,
or
create a new service, migrate 100% of the traffic to it, and delete the old one.
This is what my deployment screen looks like. I have highlighted the section which I believe might be the cause of these duplicate services.

When you deploy a new version of your service (a new version can be a new image, or the same image with different parameters: concurrency, min/max instances, CPU, env vars, ...), it's called a revision. There is practically no limit on the number of revisions (in fact there is a limit of 1000; beyond that, the oldest is deleted when a new one is created).
The pricing model of Cloud Run (and other serverless products) is simple: you pay when your service is running. In your case, you don't have a min-instances parameter, so your revision only runs (and you only start to pay) when a request invokes your service, and you pay only once you go over the free tier.
The same goes for Cloud Build: you start to pay when you use it, and you use it every time you push your code to GitHub (which invokes a Cloud Build trigger and then deploys your code to Cloud Run). Here again, you have a comfortable free tier of 120 build minutes per day for Cloud Build.
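If the goal is to keep a single service and just roll new revisions onto it, the key is that the deploy step reuses the same service name on every build. A rough sketch of such a step, written as the gcloud command a Cloud Build step would run ($PROJECT_ID and $COMMIT_SHA are Cloud Build substitutions; the service name, image path, and region are placeholders for whatever your trigger uses):
# Deploying to an existing service name creates a new revision instead of a new service
gcloud run deploy my-service \
  --image gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA \
  --region us-central1 \
  --platform managed
If the service name varies between builds (for example, because it embeds a commit SHA or a timestamp), each build creates a separate service, which would match the behaviour described in the question.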

Related

Best way to update code on Azure Linux VMSS from Git using JENKINS

I am planning to use Azure VMSS to deploy a set of Spring Boot apps. I plan to create a custom Linux VM image with all the required software/utilities as well as the required directory structure, and configure this image in the VMSS. We use Jenkins as our CI/CD tool and Git as the source code repository. What is the best way to build and deploy these Spring Boot apps on VMSS?
I think one way is to write a Custom Script Extension which downloads the code from the Git repo and then starts these Spring Boot apps. I believe this script will then be executed every time a new VM is provisioned.
But what about cases where multiple VMs are already running on top of the minimum scale instance count? I believe a manual restart will not trigger the CSE script on these already-running VMs, right?
Could anyone advise the best way to handle this?
Also, once a VM is deallocated due to an auto scale-down, what is the best/most cost-effective way to back up the log files from the VM to storage (Blob or file share)?
You could enable Automatically tear down virtual machines after every use under organization settings/project settings >> Agent pools >> VMSS agent pool >> Settings. Then a new VM instance is used for every job. After running a job, the VM goes offline and is reimaged before it picks up another job. The Custom Script Extension is executed on every virtual machine in the scale set immediately after it is created or reimaged. Here is the reference document: Create the scale set agent pool.
To back up the log files from a VM, you could refer to Troubleshoot and support for the related file paths on the target virtual machine.
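For reference, a rough sketch of attaching a Custom Script Extension to an existing scale set with the Azure CLI (the resource group, scale set name, script URL, and deploy.sh are placeholders for your own setup):
# Attach the Linux Custom Script Extension to the scale set model;
# it runs on instances when they are created or reimaged
az vmss extension set \
  --resource-group my-rg \
  --vmss-name my-scaleset \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"fileUris":["https://example.com/deploy.sh"],"commandToExecute":"./deploy.sh"}'
As noted above, the extension does not re-run on instances that are merely restarted; it runs when an instance is created or reimaged.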

How to manage automatic deployment to ECS using Terraform Cloud and CircleCI?

I have an ECS task which has 2 containers using 2 different images, both hosted in ECR. There are 2 GitHub repos for the two images (app and api), and a third repo for my IaC code (infra). I am managing my AWS infrastructure using Terraform Cloud. The ECS task definition is defined there using Cloudposse's ecs-alb-service-task, with the containers defined using ecs-container-definition. Presently I'm using latest as the image tag in the task definition defined in Terraform.
I am using CircleCI to build the Docker containers when I push changes to GitHub. I am tagging each image with latest and the variable ${CIRCLE_SHA1}. Both repos also update the task definition using the aws-ecs orb's deploy-service-update job, setting the tag used by each container image to the SHA1 (not latest). Example:
container-image-name-updates: "container=api,tag=${CIRCLE_SHA1}"
When I push code to the repo for e.g. api, a new version of the task definition is created, the service's version is updated, and the existing task is restarted using the new version. So far so good.
The problem is that when I update the infrastructure with Terraform, the service isn't behaving as I would expect. The ecs-alb-service-task has a boolean called ignore_changes_task_definition, which is true by default.
When I leave it as true, Terraform Cloud successfully creates a new version whenever I apply changes to the task definition. (A recent example was updating environment variables.) BUT it doesn't update the version used by the service, so the service carries on using the old version. Even if I stop a task, it will respawn using the old version. I have to manually go in and use the Update flow, or push changes to one of the code repos. Then CircleCI will create yet another version of the task definition and update the service.
If I instead set this to false, Terraform Cloud will undo the changes to the service performed by CircleCI. It will reset the task definition version to the last version it created itself!
So I have three questions:
How can I get Terraform to play nice with the task definitions created by CircleCI, while also updating the service itself if I ever change it via Terraform?
Is it a problem to be making changes to the task definition from THREE different places?
Is it a problem that the image tag is latest in Terraform (because I don't know what the SHA1 is)?
I'd really appreciate some guidance on how to properly set up this CI flow. I have found next to nothing online about how to use Terraform Cloud with CI products.
I have learned a bit more about this problem. It seems like the right solution is to use a CircleCI workflow to manage Terraform Cloud, instead of having the two services effectively competing with each other. By default Terraform Cloud will expect you to link a repo with it and it will auto-plan every time you push. But you can turn that off and use the terraform orb instead to run plan/apply via CircleCI.
You would still leave ignore_changes_task_definition set to true. Instead, you'd add another step to the workflow after the terraform/apply step has made the change. This would be aws-ecs/run-task, which should relaunch the service using the most recent task definition, which was (possibly) just created by the previous step. (See the task-definition parameter.)
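If you would rather do that from a plain shell step instead of the orb, one hedged alternative (the cluster, service, and task definition family names below are placeholders) is to point the service at the latest active revision of the task definition family; when no revision number is given, ECS uses the newest ACTIVE one:
# Roll the service forward to the latest ACTIVE revision of the family
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --task-definition my-task-family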
I have decided that this isn't worth the effort for me, at least not at this time. The conflict between Terraform Cloud and CircleCI is annoying, but isn't that acute.

GCP Cloud Run: send a request to all running instances

I have a REST API running on Cloud Run that implements a cache, which needs to be cleared maybe once a week when I update a certain property in the database. Is there any way to send an HTTP request to all running instances of my application? Right now my understanding is that even if I send multiple requests and there are 5 instances, they could all go to one instance. So is there a way to do this?
Let's go back to basics:
Cloud Run instances start based on a revision/image.
In your use case, suppose you have 5 instances running and you suddenly need to restart them, because restarting the instances resolves your problem (such as clearing/rebuilding the cache). What you need to do is trigger a change in the service configuration, so that a new revision gets created.
This automatically replaces your instances: all of them are stopped and relaunched on the fly.
You have a couple of options here; choose whichever suits you:
If you have your services defined as YAML files, the easiest is to run the replace service command:
gcloud beta run services replace myservice.yaml
Otherwise, add an environment variable, such as a date that you increase; this will yield a new revision (a change in env vars means a new config, hence a new revision). Read more.
gcloud run services update SERVICE --update-env-vars KEY1=VALUE1,KEY2=VALUE2
As these operations are executed, you will see a new revision created, and your active instances will be replaced on their next request with fresh new instances that will build the new cache.
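For example, a minimal sketch of the second option (the service name my-service and the variable name CACHE_BUST are placeholders), bumping a dummy environment variable to the current timestamp to force a new revision:
gcloud run services update my-service --update-env-vars CACHE_BUST=$(date +%s)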
You can't directly reach all the active instances; that's the magic (and the tradeoff) of serverless: you don't really know what is running! If you implement a cache on Cloud Run, you need a way to invalidate it.
Either based on duration: when it expires, refresh it.
Or by explicit invalidation. But you can't do that on Cloud Run.
The other way to look at this use case is that you have a cache shared between all your instances, and thus you need a shared cache, something like Memorystore. Then only one Cloud Run instance needs to invalidate and recreate it, and all the other instances will use it.
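As a rough sketch of that shared-cache variant (the instance name, size, and region are placeholders), a Memorystore for Redis instance could be created with:
gcloud redis instances create my-shared-cache --size=1 --region=us-central1 --redis-version=redis_6_x
Note that a Cloud Run service typically reaches Memorystore through a Serverless VPC Access connector, since the Redis instance only has a private IP.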

Creating a duplicate of a VM

I'm preparing to get into the world of cloud computing.
My first question is:
Is it possible to programmatically create a new VM, or duplicate an existing one, from my server?
Project Background
I provide a file processing service, and as it's been growing I need to offer a better service.
Project Requirement
Machine specs:
HDD: Min 16 GB
CPU: Min 1 core
RAM: Min 2 GB
GPU: Min CUDA 10.1 compatible
What I'm thinking is the following steps:
User uploads a file
A dedicated VM is created for that specific file inside Google Cloud Compute
The file is sent to the VM
File is processed using an Anaconda environment
Results are downloaded to local server
Dedicated VM is removed
Results are served to user
How is this accomplished?
PS: I'm looking for resources and advice. Not code.
Your question is a perfect formulation of the concept of Google Cloud Run. At the highest level, you create a Docker image (think of it like a VM) and then register that Docker image with GCP Cloud Run. When a trigger occurs, GCP will spin up an instance of that Docker container and pass in information about the cause of the trigger (a file created in GCS, a REST request, or others...). What you do in your container is up to you; you have the full power of the Linux environment (under Docker) to do as you like. When your request ends, the container is spun down. You are only billed for the compute resources you use; if your container (VM) isn't being used, you pay nothing until the next trigger.
An alternative to Cloud Run is Cloud Functions. This is a higher-level abstraction where, instead of providing a Docker container, you provide the body of a function (JavaScript, Java, Python or others) and the request is passed to that function when a trigger occurs. Which one you use is mostly a personal choice (you didn't elaborate on "File is processed").
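As a rough sketch of the Cloud Functions variant (the bucket name my-upload-bucket, the entry point process_file, the region, and the runtime are assumptions for illustration), a function invoked for every file uploaded to a bucket could be deployed like this:
# Deploy a function triggered each time an object is finalized in the bucket
gcloud functions deploy process_file --runtime=python310 --trigger-bucket=my-upload-bucket --entry-point=process_file --region=us-central1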
References:
Cloud Run
Cloud Functions

How "cf push" works?

Applying cf push to an existing running application stops and starts the application instance with the new artifact.
This application has a route name assigned.
1) In order to assess the downtime for a banking app: during a cf push, what are the steps involved, from stopping the existing application instance to starting the new one?
2) Does blue-green deployment decrease the downtime?
The steps done in a cf push are explained in the flowchart shown here. They include:
creation of application
upload
staging
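For illustration, a minimal sketch of such a push (the app name, artifact path, and sizing flags are placeholders):
cf push my-app -p build/libs/app.jar -i 2 -m 512M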
Blue-green deployment eliminates the downtime caused by pushes of new application versions.
The basic workflow is described pretty well in the documentation. The basic idea is to deploy the new version side by side with the old one, assign the application's route to both of them, then remove the route from the old version. This way, the application route is never unavailable.
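A rough sketch of those steps with plain CF CLI commands (the app names, domain, and hostname are placeholders):
# Push the new version alongside the old one, share the route, then retire the old version
cf push my-app-green -p build/libs/app.jar
cf map-route my-app-green example.com --hostname my-app
cf unmap-route my-app-blue example.com --hostname my-app
cf delete my-app-blue -f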
There is at least the CF CLI plugin blue-green-deploy, which helps you automate this workflow so that you do not have to take care of the individual steps.