Deploy a container to Cloud Run after pushing to Artifact Registry automatically - google-cloud-platform

I want to set up a connection (without writing code) between Google Cloud Artifact Registry and Cloud Run, so that on every push a Cloud Run service with the same name is created or updated.
Is this possible?

No, you can't. The system that pushes the image must, right after the push, trigger the deployment of a new Cloud Run revision with the new image (the latest tag, if you want).
Nothing automatic is possible on a push event to Artifact Registry.
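For that post-push step, here is a minimal sketch using the google-cloud-run client; the project, region, service, and image names are placeholders, not from the question:

from google.cloud import run_v2  # pip install google-cloud-run

client = run_v2.ServicesClient()
service = client.get_service(
    name="projects/my-project/locations/europe-west1/services/my-service"
)
# Point the container at the freshly pushed image and redeploy.
service.template.containers[0].image = (
    "europe-west1-docker.pkg.dev/my-project/my-repo/my-image:latest"
)
client.update_service(service=service).result()  # wait for the rollout

The same thing can be done with a single gcloud run deploy command at the end of your CI pipeline.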

I would suggest thinking about Container Registry instead; with it you can hook into automatic push events.

Related

Can Cloud Run automatically use latest digest?

I have a continuous deployment pipeline set up with GitHub via Cloud Build triggers. Each time a push is made to the main branch, the cloudbuild.yaml does its thing and produces a digest with a tag of latest. All of my digests are stored in Artifact Registry. Is there any way to get Cloud Run to always use the digest tagged as latest? During the setup of my Cloud Run service this seems to be the case, but after the service has been created, the image ends up resolving to a SHA value, i.e. a specific digest rather than the tag.
When you deploy a revision on Cloud Run, the LATEST version of the image is used and cached on the Cloud Run infrastructure.
If your CI/CD pipeline generates other container images with the latest tag (this is also true with a fixed tag), Cloud Run DOES NOT reload its cache. You have to deploy a new revision to update that cache.
Therefore, at the end of your pipeline, add a Cloud Run deployment step to update the container version, as sketched below.
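One hedged sketch of that step, which also pins the new revision to whatever digest latest currently points at: it assumes the google-cloud-artifact-registry and google-cloud-run clients, and every project/repo/image/service name here is a placeholder.

from google.cloud import artifactregistry_v1, run_v2

# Resolve the "latest" tag to the digest it currently points at.
ar = artifactregistry_v1.ArtifactRegistryClient()
tag = ar.get_tag(
    name="projects/my-project/locations/us-central1/repositories/my-repo"
         "/packages/my-image/tags/latest"
)
digest = tag.version.split("/")[-1]  # e.g. "sha256:..."

# Deploy a new revision pinned to that digest.
run = run_v2.ServicesClient()
service = run.get_service(
    name="projects/my-project/locations/us-central1/services/my-service"
)
service.template.containers[0].image = (
    f"us-central1-docker.pkg.dev/my-project/my-repo/my-image@{digest}"
)
run.update_service(service=service).result()

Pinning by digest makes the deployed version explicit, so later pushes to latest don't silently change what a revision would run if redeployed.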

How to auto-deploy to Compute Engine from Artifact Registry?

Context
I'm working on an app. The code is in a Cloud Source Repository. I've set up a Build Trigger with Cloud Build so that when I push new commits, the app is automatically built: it's containerized and the image is pushed to the Artifact Registry.
I also have a Compute Engine VM instance with a Container-Optimized OS. It's set up to use my app's container image. So when I start the VM, it pulls the latest image from the Artifact Registry and runs the container.
Issue
So, currently, deploying involves two steps:
Pushing new commits, which updates the container in the Artifact Registry.
Restarting my VM, which pulls the new container from the Artifact Registry.
Is there a way to combine these two steps?
Build triggers detect code changes to trigger builds. Is there a similar way to automatically trigger deployments from Artifact Registry to Compute Engine?
Thank you.
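There is no built-in Artifact Registry-to-Compute Engine trigger, but the audit-log-to-Pub/Sub pattern described in a later answer for GKE applies here too: a small Cloud Function subscribed to the sink can restart the VM so it pulls the new image on boot. A minimal sketch of that restart step with the google-cloud-compute client, with placeholder project, zone, and instance names:

from google.cloud import compute_v1  # pip install google-cloud-compute

client = compute_v1.InstancesClient()
operation = client.reset(
    project="my-project",
    zone="us-central1-a",
    instance="my-app-vm",
)
operation.result()  # the Container-Optimized OS VM pulls the image on boot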

Google Cloud Code's Cloud Run extension stores the Docker image in Cloud Storage instead of Artifact Registry

Why, when I use the Visual Studio Code extension "Cloud Code" to deploy a Cloud Run service, does it seem to store the image contents in Cloud Storage (via Container Registry)?
Can I make it store the image in Google Cloud Artifact Registry instead?
I just tried the scenario and it worked for me! Following these steps should get you going.
Create an Artifact Registry repo at https://console.cloud.google.com/artifacts and set up Docker auth on your client to use gcloud to authenticate to the repo (with gcloud auth configure-docker and your repo's hostname). You can find detailed steps to do this here.
When deploying to Cloud Run in Cloud Code, you'll find that it defaults to a Container Registry repo as the "Container image URL", but you can easily use an Artifact Registry repo here instead: paste the repo name you created in the previous step and append an image name. Here's a screenshot of the example I just tested.

Deploy to GKE when there is new image in Cloud Repository

I have a very interesting task.
I'm using 5 different triggers to build 5 images on every push to a git branch.
After they are built, I push them to 5 different GCP Container Registry repositories.
I want to automate the GKE deployment, so that when there is a new image in the registry it is deployed to GKE automatically.
How can I achieve that?
I propose something very similar. Instead of using Container Registry, use Artifact Registry: create a Docker repository and use it exactly as you use Container Registry.
Then, activate the audit logs on Artifact Registry; you only need the Data Write audit logs in this case.
Finally, create a Cloud Logging sink to Pub/Sub on this filter:
protoPayload.serviceName="artifactregistry.googleapis.com"
protoPayload.methodName="Docker-PutManifest"
protoPayload.authorizationInfo.resource="projects/<PROJECT_ID>/locations/<YOUR_LOCATION>/repositories/<REPOSITORY_NAME>"
Now, you only have to listen for this event with a Cloud Function or Cloud Run service and do what you want, for example deploy a new revision. A sketch of such a listener follows.
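A minimal sketch of that listener as a Pub/Sub-triggered Cloud Function; the field names follow the audit-log format above, and the actual deployment step is left as a stub:

import base64
import json

import functions_framework  # Cloud Functions Python runtime


@functions_framework.cloud_event
def on_image_push(cloud_event):
    # The sink delivers the log entry base64-encoded in the Pub/Sub message.
    entry = json.loads(base64.b64decode(cloud_event.data["message"]["data"]))
    resource = entry["protoPayload"]["resourceName"]
    print(f"New manifest pushed: {resource}")
    # From here, trigger your rollout: patch the GKE Deployment's image
    # via the Kubernetes API, or kick off a deploy job in Cloud Build.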

AWS-ECS Task Services restart automation

Currently we are running a microservices application with a serverless architecture using AWS ECS. Whenever we deploy or update a new artifact in ECR, we need to restart the services by changing the task count from 0 to 1 and back, so the service restarts and picks up the new artifact. This process is very manual and takes several steps, and I want to automate it. Is it possible to use AWS Lambda or CloudWatch, or some other configuration, to skip the manual process? What kind of code, language, and automation example do I need to achieve this?
Take a look at the ecs-deploy script. Basically, it replaces an existing service with the latest (or a specific) image from ECR. So if you have automation that updates ECR with the latest image, this script will deploy that image to ECS.
A setup which could work, if you have a CI/CD pipeline, is to trigger a Lambda upon uploading to ECR that restarts the corresponding service, supplying variables to the Lambda such as the ECR tag to pull or the service name. A sketch of such a function follows.
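A rough sketch of that Lambda, assuming it is wired to ECR push events via EventBridge; the repository-to-service mapping and all names are hypothetical:

import boto3

ecs = boto3.client("ecs")

# Hypothetical mapping from ECR repository name to the ECS service to bounce.
SERVICES = {"my-app-repo": ("my-cluster", "my-app-service")}


def handler(event, context):
    detail = event["detail"]  # EventBridge "ECR Image Action" event payload
    if detail.get("action-type") != "PUSH":
        return
    cluster, service = SERVICES[detail["repository-name"]]
    # Force a new deployment so the tasks pull the freshly pushed image.
    ecs.update_service(cluster=cluster, service=service, forceNewDeployment=True)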
ECS has an option to restart services with ForceNewDeployment. In Python, the call would look like this:

import boto3

client = boto3.client("ecs")
myClusterName = "my-cluster"      # your ECS cluster name
serviceToUpdate = "my-service"    # the ECS service to restart
# Force a new deployment so running tasks re-pull the image.
updateTrigger = client.update_service(
    cluster=myClusterName,
    service=serviceToUpdate,
    desiredCount=1,
    forceNewDeployment=True,
)
From https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecs.html#ECS.Client.update_service