How do you create a GCR repo with Terraform? - google-cloud-platform

https://www.terraform.io/docs/providers/google/d/google_container_registry_repository.html
There is a data source but no resource.
GCR seems to have no direct API. Is there a workaround to create a GCR repo with Terraform? Can I create a folder in that "artifacts" bucket that GCR uses? Is there a way to manually terraform a GCR repo?

First, Terraform's notion of google_container_registry_repository seems incomplete, because it represents only "root repositories":
gcr.io/[PROJECT_ID]
[REGION].gcr.io/[PROJECT_ID], where [REGION] can be us, eu or asia
Whereas "repository" (in GCP terminology) can also refer to :
[REGION].gcr.io/[PROJECT_ID]/my-repo
[REGION].gcr.io/[PROJECT_ID]/my-repo/my-sub-repo
...
There is no notion of these types of repositories in the Terraform data source.
That being said:
"root repositories" cannot be created manually and are managed by Google (if a new region xy appears, then xy.gcr.io will be created by Google)
other repositories used to organize images (for example, a repository per developer or per project) seem to be a rather abstract notion, more like directories in Google Cloud Storage. They are created on the fly when you push an image, and they do not exist if they contain no images. To complete the analogy between GCS directories and GCR repositories, note that there is also no google_storage_bucket_directory resource
For the latter kind of repositories (my-repo, my-repo/my-sub-repo), the underlying storage bucket cannot be configured: it will always be artifacts.[PROJECT-ID].appspot.com or [REGION].artifacts.[PROJECT-ID].appspot.com, depending on the "root repository". There is no way to isolate different repositories in different buckets.
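For illustration, a hedged example (the project and repository names are placeholders): tagging and pushing an image to a path that does not exist yet is enough to make the repository appear:
# Tag a locally built image under a not-yet-existing "repository" path
docker tag my-image eu.gcr.io/my-project/my-repo/my-image:1.0
# Pushing it creates the eu.gcr.io/my-project/my-repo repository on the fly
docker push eu.gcr.io/my-project/my-repo/my-image:1.0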
So, in conclusion: you cannot explicitly create a GCR repository, whether with Terraform, gcloud, the web UI, etc.

You can create GCR registries using Terraform, but you can't destroy them or change their region; the problem lies in how GCR is implemented by Google rather than in a limitation of Terraform.
As norbjd explained (pretty well, IMHO), GCR is like a front-end for buckets that store container images.
GCP doesn't seem to have a concept of deleting GCRs at all; you have to delete the underlying bucket. Terraform can't handle this, because it's like saying "when you apply, use resource A, but when you destroy, use resource B".
To destroy one, you need some other mechanism: either manually deleting the bucket(s) or running a gcloud command, for instance.
This simple code will create a registry; terraform destroy will report it as destroyed successfully, but it will still appear in Container Registry in your project (and may still incur storage costs):
# This creates a global GCR in the current project.
# Though it actually creates the multi-region bucket 'artifacts.[PROJECT_ID].appspot.com'
resource "google_container_registry" "my_registry" {
}
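To actually get rid of the registry, the backing bucket has to be removed outside of Terraform. A minimal sketch, assuming gsutil is available and PROJECT_ID is set (adjust the bucket name for regional registries):
# Delete all stored images and the bucket backing the global registry
gsutil -m rm -r gs://artifacts.${PROJECT_ID}.appspot.com
# For a regional registry the bucket is e.g. eu.artifacts.${PROJECT_ID}.appspot.com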

Related

Google Container Registry: Prevent a group of users from pushing reserved tag names

I need to prevent some users from pushing 'latest' or 'master' tags to a shared GCR repository; only automated processes like Jenkins should be able to push these tags. Is that possible?
Is there a way to do this, like AWS IAM policies and conditions?
I think not but it's an interesting question.
I wondered whether IAM conditions could be used, but neither Container Registry nor Artifact Registry is a resource type that accepts conditional bindings.
Container Registry uses Cloud Storage and Cloud Storage is a resource type that accepts bindings (albeit only on buckets). However, I think tags aren't manifest (no pun intended) at the GCS level.
One way to approach this would be to limit container pushes to your automated processes and then add some process (workflow) in which developers can request restricted tags and have these applied only after approval.
Another approach would be to audit changes to the registry.
Google Artifact Registry (GAR) is positioned as a "next generation" (eventual replacement?) of GCR. With it, you can have multiple repositories within a project that could be used as a way to provide "free-for-all" and "restricted" repositories. I think (!?) even with GAR, you are unable to limit pushes by tag.
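As a rough sketch of that split (the repository names, location, and Jenkins service account below are made up, not part of the original question):
# A "free-for-all" repository the whole team can be granted write access to
gcloud artifacts repositories create sandbox --repository-format=docker --location=us-central1
# A "restricted" repository: only the CI service account gets the writer role
gcloud artifacts repositories create releases --repository-format=docker --location=us-central1
gcloud artifacts repositories add-iam-policy-binding releases \
  --location=us-central1 \
  --member=serviceAccount:jenkins@my-project.iam.gserviceaccount.com \
  --role=roles/artifactregistry.writer
Note this only restricts who can push to the releases repository at all, not which tags they can push.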
You could submit a feature request on Google's Issue Tracker for registries but, given that Google is adding no new features to GCR, you may be out of luck.

Can I change the default Cloud Storage bucket region used to store artifacts while deploying Cloud Functions?

What I'm doing
I'm deploying Cloud Functions using Cloud Source Repositories as source using the gcloud command line like this:
gcloud functions deploy foo \
--region=us-east1 \
--source=<repoUrl> \
--runtime=nodejs12 \
--trigger-http
Behind the scenes, this process triggers Cloud Build, which uses Container Registry to store its images and also creates some buckets in Cloud Storage.
Problem
The problem is one of those buckets, us.artifacts.<projectName>.appspot.com, which uses multi-regional storage; this incurs additional charges compared to regional storage and has no free tier.
The other buckets are created in the same region as the function (us-east1 in my case).
What I'd like to know
If I can change the default region for this artifacts bucket
Or, if it's not possible, what I can change in my deployment process to avoid these charges.
What I've already tried or read
Some users had similar problems and suggested a lifecycle rule to auto-clean this bucket (see the sketch after this list), though in the same post other users recommended against it because it may break the build process
Here there was an answer explaining what happens behind the scenes of an App Engine application deployment, which also creates the same bucket.
This post may solve my problem, but I'd need to set up Cloud Build to trigger a build after a commit to the master branch.
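For reference, the lifecycle-rule workaround mentioned above can be applied with gsutil, roughly like this (the 30-day age is an arbitrary example, and the caveat about breaking builds still applies):
cat > lifecycle.json <<EOF
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 30}}]}
EOF
gsutil lifecycle set lifecycle.json gs://us.artifacts.<projectName>.appspot.com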
Thanks in advance

Backing up each and every resource in an AWS account

I am exploring backing up our AWS services configuration to a backup disk or source control.
Only configs, e.g. IAM policies, users, roles, Lambdas, Route 53 configs, Cognito configs, VPN configs, route tables, security groups, etc.
We have a tactical account where we have created some resources on an ad-hoc basis, and now we have a new official account set up via CloudFormation.
We are also planning, in the near future, to migrate the tactical account's resources to the new account, either manually or using the backed-up configs.
I looked at the AWS CLI, but it is time-consuming. Is there any script which crawls through AWS and backs up the resources?
Thank You.
The "correct" way is not to 'backup' resources. Rather, it is to initially create those resources in a reproducible manner.
For example, creating resources via an AWS CloudFormation template allows the same resources to be deployed in a different region or account. Only the data itself, such as the information stored in a database, would need a 'backup'. Everything else could simply be redeployed.
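As a small sketch of that idea (the template and stack names are placeholders): once the template exists, redeploying it into another region, or into another account by switching credentials, is a single command:
# Deploy the same template into another region; use another profile/credentials for another account
aws cloudformation deploy --template-file infra.yaml --stack-name my-infra --region eu-west-1 --capabilities CAPABILITY_NAMED_IAM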
There is a poorly-maintained service called CloudFormer that attempts to create CloudFormation templates from existing resources, but it only supports limited services and still requires careful editing of the resulting templates before they can be deployed in other locations (due to cross-references to existing resources).
There is also the relatively recent ability to Import Existing Resources into a CloudFormation Stack | AWS News Blog, but it requires that the template already includes the resource definition. The existing resources are simply matched to that definition rather than being recreated.
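A rough sketch of that import flow (the stack, file, and resource names are illustrative only):
# resources-to-import.json lists the existing resources to adopt, e.g.:
# [{"ResourceType": "AWS::S3::Bucket", "LogicalResourceId": "MyBucket",
#   "ResourceIdentifier": {"BucketName": "my-existing-bucket"}}]
aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name import-existing \
  --change-set-type IMPORT \
  --resources-to-import file://resources-to-import.json \
  --template-body file://template.yaml
aws cloudformation execute-change-set --change-set-name import-existing --stack-name my-stack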
So, you now have the choice to 'correctly' deploy resources in your new account (involves work), or just manually recreate the ad-hoc resources that already exist (pushes the real work to the future). Hence the term Technical debt - Wikipedia.

Google Container Registry images lifecycle

I would like to know if there is a way to set up an object lifecycle in GCP Container Registry.
I would like to keep the last n versions of an image, automatically deleting the older ones as new ones are pushed online.
I can't work directly on the Cloud Storage bucket because, with multiple images saved, the storage objects are not individually recognizable.
Seth Vargo, a Google Cloud developer advocate, has released GCRCleaner.
Follow the instructions for setting up Cloud Scheduler and a Cloud Run service for cleaning the GCR.
Unfortunately, there is no managed lifecycle concept for images in GCR like there is in AWS (ECR), which allows creating policies to manage images in the registry.
You have to plan this yourself, i.e. a script which emulates the following behavior and runs periodically:
gcloud container images delete -q --force-delete-tags "${IMAGE}@${digest}"
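A minimal sketch of such a script, assuming you only want to keep the KEEP newest digests of a single image (IMAGE and KEEP below are placeholders; dry-run the list command first):
IMAGE="gcr.io/my-project/my-image"
KEEP=10
# List digests from newest to oldest, skip the first $KEEP, delete the rest
for digest in $(gcloud container images list-tags "${IMAGE}" \
    --sort-by=~TIMESTAMP --format='get(digest)' | tail -n +$((KEEP + 1))); do
  gcloud container images delete -q --force-delete-tags "${IMAGE}@${digest}"
done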
Unfortunately, at this time there's no feature able to do this in GCR; however, there's already a feature request created. You can follow it and write comments.
Also check this example, where image deletion at a specific time was implemented.

How can I create an S3 bucket and object (like uploading a shell script file) programmatically?

I have to do this for almost 100 accounts, so I am planning to create them using some kind of infrastructure as code. CloudFormation does not support creating objects. Can anyone help?
There are several strategies, depending on the client environment.
The aws-cli may be used for shell scripting, the aws-sdk for JavaScript environments, or Boto3 for Python environments.
If you specify the client environment, creating an S3 object is almost a one-liner, S3 bucket security and lifecycle matters aside.
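For instance, with the aws-cli (the bucket name and file below are placeholders; bucket names must be globally unique):
aws s3 mb s3://my-unique-bucket-name
aws s3 cp ./bootstrap.sh s3://my-unique-bucket-name/bootstrap.sh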
As Rich Andrew said, there are several different technologies. If you are trying to do infrastructure as code and attach policies and roles I would suggest you look into Terraform or Serverless.
I frequently combine two of the techniques already mentioned above.
For infrastructure setup: Terraform. This tool is always ahead of the competition (Ansible, etc.) in terms of cloud modules. You can use it to create the bucket, bucket policies, users, their IAM policies for bucket access, upload initial files to the bucket, and much more.
It will keep a state file containing a record of those resources, so you can use the same workflow to destroy everything it created, if necessary, with very few modifications.
It is very easy to get started with, but not very flexible, and you can be caught out if a scope change in the middle of a project suddenly requires a feature that isn't there.
To get started, check out the Terraform module registry: https://registry.terraform.io/.
It has quite a few S3 modules available to get started even quicker.
For interaction with AWS resources: Python Boto3. In your case that would be subsequent file uploads and deletions in the S3 bucket.
You can use Boto3 to set up infrastructure - just like Terraform, but it will require more work on your side (like handling exceptions and errors).