I have an application which is built as a Docker container. The users of this application are different people, each with their own GCP account. I want to continue to build my application and release it by pushing to a public container registry. I want all my users to automatically pull the latest container image and update their GCP Cloud Run service.
In addition, I want to enable the following capabilities:
Only users who have enabled automatic updates get updates automatically. Others can install updates manually, or when they enable automatic updates.
Ability to deploy to a select set of users, if they have automatic updates enabled.
(Basically, some custom logic in the continuous deployment workflow to see whether we can do an automatic update or not)
How to achieve this?
GCP documentation shows I can do continuous deployment if my code is in GitHub and I enable Cloud Build on every change. I want to skip this. Instead, I want all my users to listen for new container images and use them.
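For reference, the per-user update step that would need to be automated might look like the following sketch, assuming a hypothetical service name (my-app) and public image path; redeploying with the same tag makes Cloud Run resolve the image to the newest digest:

# Hypothetical names; each user would run this (or have it run for them)
# in their own project whenever a new image is published.
gcloud run deploy my-app \
  --image=us-docker.pkg.dev/publisher-project/public-repo/my-app:latest \
  --region=us-central1 \
  --project=${USER_PROJECT}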
Related
I need to restrict some users from pushing 'latest' or 'master' tags to a shared GCR repository; only automated processes like Jenkins should be able to push these tags. Is that possible?
Is there a way to do this, like with AWS IAM policies and conditions?
I think not but it's an interesting question.
I wondered whether IAM conditions could be used but neither Container Registry nor Artifact Registry are resources that accept conditional bindings.
Container Registry uses Cloud Storage and Cloud Storage is a resource type that accepts bindings (albeit only on buckets). However, I think tags aren't manifest (no pun intended) at the GCS level.
One way to approach this would be to limit container pushes to your automated processes and then add some process (workflow) in which developers can request restricted tags and have these applied only after approval.
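As a rough sketch of that first approach for Container Registry (the service account, group, and project names below are hypothetical), push access can be narrowed to the automated process by adjusting IAM on the registry's underlying Cloud Storage bucket; note this restricts all pushes, not specific tags:

# Hypothetical identities; only the CI service account can write (push),
# developers keep read-only access.
gsutil iam ch \
  serviceAccount:jenkins@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://artifacts.my-project.appspot.com
gsutil iam ch \
  group:developers@example.com:roles/storage.objectViewer \
  gs://artifacts.my-project.appspot.com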
Another approach would be to audit changes to the registry.
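For auditing, Container Registry can publish change notifications to a Pub/Sub topic named gcr in the registry's project, which some process could watch for unexpected pushes to restricted tags; a minimal sketch (the subscription name is arbitrary):

# Container Registry publishes INSERT/DELETE messages to the "gcr" topic
# if it exists; a subscriber can then flag pushes to 'latest' or 'master'.
gcloud pubsub topics create gcr --project=${PROJECT}
gcloud pubsub subscriptions create gcr-audit --topic=gcr --project=${PROJECT}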
Google Artifact Registry (GAR) is positioned as a "next generation" (eventual replacement?) of GCR. With it, you can have multiple repositories within a project that could be used as a way to provide "free-for-all" and "restricted" repositories. I think (!?) even with GAR, you are unable to limit pushes by tag.
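A sketch of that repository split with Artifact Registry (repository names, location, and principals are hypothetical); permissions are granted per repository, still not per tag:

# "sandbox" repository: everyone on the team may push.
gcloud artifacts repositories create sandbox \
  --repository-format=docker --location=us-central1
gcloud artifacts repositories add-iam-policy-binding sandbox \
  --location=us-central1 \
  --member=group:developers@example.com \
  --role=roles/artifactregistry.writer

# "releases" repository: only the CI service account may push.
gcloud artifacts repositories create releases \
  --repository-format=docker --location=us-central1
gcloud artifacts repositories add-iam-policy-binding releases \
  --location=us-central1 \
  --member=serviceAccount:jenkins@my-project.iam.gserviceaccount.com \
  --role=roles/artifactregistry.writer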
You could submit a feature request on Google's Issue Tracker for registries but, given that Google is adding no new features to GCR, you may be out of luck.
Is there any way to disable google cloud functions versioning?
For a long time I've tried to limit the number of versions kept in the Cloud Functions history, or, if that's impossible, disable it completely...
This is something that, at a low level, any infrastructure manager will let you do, but Google intentionally doesn't.
When using Firebase Cloud Functions, there's a lifecycle of a background function. As stated in the documentation:
When you update the function by deploying updated code, instances for older versions are cleaned up along with build artifacts in Cloud Storage and Container Registry, and replaced by new instances.
When you delete the function, all instances and zip archives are cleaned up, along with related build artifacts in Cloud Storage and Container Registry. The connection between the function and the event provider is removed.
There is no need to manually clean or remove the previous versions, as the Firebase deploy scripts do it automatically.
Based on the Cloud Functions Execution Environment:
Cloud Functions run in a fully-managed, serverless environment where Google handles infrastructure, operating systems, and runtime environments completely on your behalf. Each Cloud Function runs in its own isolated secure execution context, scales automatically, and has a lifecycle independent from other functions.
This means that you should not remove build artifacts, since Cloud Functions scale automatically and new instances are built from these artifacts.
Context
I'm working on an app. The code is in a Cloud Source Repository. I've set up a Build Trigger with Cloud Build so that when I push new commits, the app is automatically built: it's containerized and the image is pushed to the Artifact Registry.
I also have a Compute Engine VM instance with a Container-Optimized OS. It's set up to use my app's container image. So when I start the VM, it pulls the latest image from the Artifact Registry and runs the container.
Issue
So, currently, deploying involves two steps:
Pushing new commits, which updates the container in the Artifact Registry.
Restarting my VM, which pulls the new container from the Artifact Registry.
Is there a way to combine these two steps?
Build Triggers detect code changes to trigger builds. Is there a similar way to automatically trigger deployments from the Artifact Registry to Compute Engine?
Thank you.
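One direction worth sketching (not something the question confirms is in place): append a step to the existing Cloud Build pipeline that points the Container-Optimized OS VM at the image that was just pushed. The VM name, zone, and image path below are placeholders, and the build's service account would need instance-admin permission on the VM:

# Could run as the final Cloud Build step after the image is pushed;
# update-container swaps the VM's declared image and restarts the container.
gcloud compute instances update-container my-app-vm \
  --zone=us-central1-a \
  --container-image=us-central1-docker.pkg.dev/${PROJECT}/my-repo/my-app:latest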
Google Cloud requires an API to be enabled before many things can be done.
Enabling takes just one CLI command and is usually very fast. The CLI even offers to enable the API if I try to do something that requires one that isn't enabled yet. But it still interrupts development.
My question is: why are they not enabled by default? And is it OK if I enable them all right after creating a new project, so I don't have to bother about enabling them later?
I would like to understand the purpose of this design and learn best practices.
Well, they're disabled mainly in order not to incur costs that you weren't intending to incur, for you to be aware of which service you're using at which point, and to track the usage/costs for each of them.
Also, some services like Pub/Sub are dependent on others, and others, such as Container Registry (or Artifact Registry), require a Cloud Storage bucket for artifacts to be stored and will create one automatically if you're pushing a Docker image or using Cloud Build. So these are things for you to be aware of.
Enabling an API takes a bit of time depending on the service, yes, but it's a one-time action per project. I'm not sure exactly what your concerns about the waiting time are, but if you want to keep working while enabling some APIs, you can pass the --async flag to gcloud, which runs the operation in the background so you don't have to wait for it to complete before running another command.
Lastly, sure, you can just enable them all if you know what you're doing, but at your own risk - it's safer to enable just the ones you need and, as you might already be aware, you can enable multiple in a single gcloud command. In the example of Container Registry, it uses Cloud Storage, for which you will still be billed.
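For illustration (the service names here are just common examples, not ones the question mentions):

# Enables several APIs in one call; --async returns immediately instead of
# waiting for each operation to finish.
gcloud services enable \
  run.googleapis.com \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com \
  --project=${PROJECT} --async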
Enabling services enables access to (often billed) resources.
It's considered good practice to keep this "surface" of resources constrained to those that you(r customers) need; the more services you enable, the greater your potential attack surface and potential bills.
Google provides an increasing number of services (accessible through APIs). It is highly unlikely that you would ever want to access them all.
APIs are enabled by Project. The Project creation phase (including enabling services) is generally only a very small slice of the entire lifetime of a Project; even of those Projects created-and-torn-down on demand.
It's possible to enable the APIs asynchronously, permitting you to enable-not-block each service:
for SERVICE in "containerregistry" "container" "cloudbuild" ...
do
  gcloud services enable ${SERVICE}.googleapis.com --project=${PROJECT} --async
done
Following on from this, it is good practice to automate your organization's project provisioning (scripts, Terraform, Deployment Manager etc.). This provides a baseline template for how your projects are created, which services are enabled, default permissions etc. Then your developers simply fire-and-forget a provisioner (hopefully also checked in to your source control), drink a coffee, and wait while these steps are done for them.
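A minimal provisioning sketch along those lines (the project ID, billing account, and service list are placeholders to replace with your organization's defaults):

#!/usr/bin/env bash
# Baseline project provisioner: create the project, attach billing,
# and enable the services every project is expected to have.
set -euo pipefail

PROJECT="my-team-project"               # placeholder
BILLING_ACCOUNT="000000-000000-000000"  # placeholder

gcloud projects create "${PROJECT}"
gcloud beta billing projects link "${PROJECT}" --billing-account="${BILLING_ACCOUNT}"
gcloud services enable \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com \
  --project="${PROJECT}" --async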
I currently have a relatively simple OpsWorks MEAN stack configuration, consisting of two layers.
One layer is the Node.js App Server layer, and the other layer is a Custom MongoDB layer. (As a side note, I hope one day Amazon will provide a Mongo store for OpsWorks, but until then, I had to create my own custom layer.)
I really like the way everything works, with the exception that when I deploy my Application, the Deployment defaults to deploying to my Custom MongoDB layer as well.
Other than remembering to uncheck the boxes just before I click 'Deploy', I can't seem to find any way to specify, in the Deployment, Application, Layer, or Stack configuration, that I don't ever want my Application deployed to my Custom layer.
That's possibly not a huge deal for my MongoDB layer specifically, but it doesn't seem to make sense to have the application code over there in general, and I can most certainly envision application-specific custom chef configuration that I definitely don't want applied to my DB layer.
Can anyone point me at a configuration option or other mechanism for excluding deployment to a custom OpsWorks layer?
Thanks!
-- Tim
Deploying your application to all instances in your Stack is safe; OpsWorks won't install your Node application on your MongoDB layer.
When you do a deployment in OpsWorks, a deployment event gets triggered on the selected instances. Your MongoDB layer, for example, will most probably just discard the deployment for your application unless you explicitly write a recipe for it.
If you still want to save a selection of instances to deploy to, you can create the deployment once and later repeat it; OpsWorks will persist the selected instances in it.
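If you'd rather script that than re-use a saved deployment in the console, the AWS CLI can target only the App Server instances; the stack, app, and instance IDs below are placeholders:

# Deploys only to the listed instances, leaving the MongoDB layer untouched.
aws opsworks create-deployment \
  --stack-id 11111111-2222-3333-4444-555555555555 \
  --app-id aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee \
  --command '{"Name": "deploy"}' \
  --instance-ids i-0123456789abcdef0 i-0fedcba9876543210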