GCP Artifact Registry public google-container images

Since Google is pushing Artifact Registry and has already announced that Container Registry is no longer actively developed, is there a replacement for the public Google container images currently offered at https://console.cloud.google.com/gcr/images/google-containers/GLOBAL ?

Artifact Registry is the evolution of Google Container Registry and extends its capabilities. It adds features over Container Registry such as additional artifact formats, support for both regional and multi-regional registry hosts, repository-level permissions, dedicated Artifact Registry IAM roles, and Google Kubernetes Engine image streaming.
There is also an option for backwards compatibility and co-existence: you can use both Artifact Registry and Container Registry in the same project. You can explore this further in the documentation.
Container Registry is still available and supported as a Google Enterprise API, but new features will only be added to Artifact Registry, which is the recommended service going forward; Container Registry will only receive critical security fixes.
You may also refer to a similar Stack Overflow case.
There are also other alternatives to Google Container Registry for users who want to try different solutions.

For anyone still looking for an answer to this one: Google has official documentation on how to make Artifact Registry repositories public via IAM policies:
https://cloud.google.com/artifact-registry/docs/access-control#public

Related

Why can I not see the images from the Container Registry?

I have created a Docker image and uploaded it to Container Registry.
But when I try to access the image by clicking "Create a Deployment" from my K8s cluster, I get the following error: You don't have permission to list images for this project.
I was looking at this doc and added the following roles: Storage Admin and Storage Object Viewer. Apart from that, I also have the role of an Owner assigned to me.
Could you please advise on what I am missing here?
I resolved it by enabling the Artifact Registry API. There is no need to migrate your existing Container Registry; simply enable the API.
I think it is related to Google's recent recommendation to transition to Artifact Registry.

Spring Boot with KMS

My Spring Boot microservice is running in a Docker container. It requires an encryption key for encrypting the incoming payload. I thought of using AWS KMS for storing the keys, reading them at runtime, and encrypting the payload.
I was trying to find libraries that can be used for accessing AWS KMS from a Spring Boot microservice. Searching on Google turns up the GitHub projects below.
https://github.com/zalando/spring-cloud-config-aws-kms
https://github.com/kinow/spring-boot-aws-kms-configuration
There is an SDK from AWS as well.
https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/java-example-code.html
I am a little confused about which one I should use. The two GitHub projects seem like a more open approach to me than using the AWS SDK. Also, the zalando project was last updated in May 2020, so it appears to be active.
Add the needed dependency.
Create some secrets in AWS as "other type of secret" and name them based on your project:
/secret/application for properties shared across all services
/secret/{spring.application.name} for the properties to look up for this specific service
The defaults above can be changed via Spring configuration properties; see section 3.3.
Then just inject them as you would any other property:
@Value("${verySpecialKey}")
private String verySpecialKey;
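If you would rather call KMS directly, as with the AWS SDK linked in the question, a minimal sketch using the AWS SDK for Java v2 could look like the following (the key ARN is a placeholder you would replace with your own):

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.EncryptRequest;
import software.amazon.awssdk.services.kms.model.EncryptResponse;

public class KmsEncryptExample {
    public static void main(String[] args) {
        // Region and credentials are resolved by the SDK's default provider chains.
        try (KmsClient kms = KmsClient.create()) {
            EncryptRequest request = EncryptRequest.builder()
                    .keyId("arn:aws:kms:us-east-1:123456789012:key/example") // placeholder key ARN
                    .plaintext(SdkBytes.fromUtf8String("incoming payload"))
                    .build();
            EncryptResponse response = kms.encrypt(request);
            // The ciphertext comes back as bytes; store or forward it as needed.
            System.out.println("Ciphertext bytes: " + response.ciphertextBlob().asByteArray().length);
        }
    }
}

Keep in mind that a single KMS Encrypt call only accepts payloads up to 4 KB; for larger payloads the usual pattern is envelope encryption, where KMS generates a data key and the payload is encrypted locally with it.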

Google Cloud Run only has access to a subset of metadata on http://metadata.google.internal

Problem: Google Cloud Run only provides a subset of documented metadata
I have a simple JVM-based application running on Google Cloud Run that queries http://metadata.google.internal for available metadata.
The only metadata available is at the following paths:
http://metadata.google.internal/computeMetadata/v1/instance/service-accounts
http://metadata.google.internal/computeMetadata/v1/instance/zone
http://metadata.google.internal/computeMetadata/v1/project/project-id
http://metadata.google.internal/computeMetadata/v1/project/numeric-project-id
As per the documentation, I was expecting more than this and hoping that I would be able to query the metadata server for the name of the Cloud Run service and the metadata required to configure Stackdriver Monitoring for a generic_node.
One clue I have found: the Server header in responses from the metadata server has the value Metadata Server for Serverless.
Theory: Cloud Run is in beta and the Metadata Server for Serverless is separate from the typical metadata server and is a work in progress.
Question(s):
Is this theory valid?
Is this limitation documented somewhere?
Is there a roadmap for adding additional metadata?
Is there an alternative for determining the metadata needed to configure Stackdriver?
The Compute Metadata service you linked is only available on Compute Engine products (such as GCE and GKE). Many of its endpoints concern VM details, VM metadata/tags, VM startup scripts, and so on.
These concepts don't apply to serverless compute environments. Therefore I don't think a feature request here will succeed.
Serverless products such as App Engine, Cloud Functions and Cloud Run support a minimal version of the metadata service to provide basic functionality to SDKs (such as Google Cloud client libraries, Stackdriver or OpenTelemetry/OpenCensus clients, or gcloud CLI). Using these endpoints, Google’s own client libraries can automatically get auth tokens, discover project IDs etc.
Also, these serverless products don't run on GCE, and don't have the same concepts. That's why a full metadata service isn't available for these products.
The endpoints applicable to serverless environments are the ones you listed in your question.
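For reference, any of the available endpoints can be queried from inside the container with a plain HTTP GET, as long as the mandatory Metadata-Flavor header is set. A minimal JVM-side sketch using the Java 11+ HttpClient:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetadataExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://metadata.google.internal/computeMetadata/v1/project/project-id"))
                .header("Metadata-Flavor", "Google") // required; requests without it are rejected
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Project ID: " + response.body());
    }
}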
I don't think you will find much information to validate your theory, since that concerns the product's internal architecture and Google is unlikely to share it for the moment. However, the theory does seem valid based on the evidence you found.
What you can do is open a feature request with Google so that they work on adding more information to the metadata server to cover your needs. As the product is in beta, they should be open to making some changes.
Hope you find this useful.

Can we use combination of AWS and Google Cloud resources in a single serverless.yml file?

We are using AWS primarily for our application but we also need to use a particular Google service. This service requires us to upload media on Google Cloud Storage.
Like AWS resources, we want to use the serverless framework to create all required GCP resources.
I would like to know the answers to the questions below:
How can we use the same serverless.yml to create required GCP resources as well?
Do we need to use two serverless.yml files, one for AWS and other for Google?
How to manage credentials for creating and accessing GCP resources?
How can we use the same serverless.yml to create required GCP resources as well?
Since YAML is just (from the docs) "a human friendly data serialization standard for all programming languages", there is no proper way to have one file that fits both providers. Looking at examples for both, only a few lines change, so you won't be able to use the same file, but the two files will be very similar.
Do we need to use two serverless.yml files, one for AWS and other for Google?
Yes. Both providers need their own specific configuration to work correctly.
How to manage credentials for creating and accessing GCP resources?
To access GCP resources you will use service accounts. These are managed by Cloud IAM and are made to represent a non-human user, in this case an app, an API, a service, etc.
EXTRA: Some useful links:
App Engine configuration with YAML
AWS serverless.yml example

AWS assume IAM roles vs GCP's JSON files with private keys

One thing I dislike about Google Cloud Platform (GCP) is that its security model around roles/service accounts feels less baked in.
Running locally on my laptop, I need to use the service account's key specified in a JSON file. In AWS, I can just assume a role I have been granted access to assume (without needing to carry around a private key). Is there an analogue to this with GCP?
I am going to try and answer this. I hold the AWS Security Specialty (among 8 AWS certifications) and know AWS very well. I have spent a lot of time this year mastering Google Cloud with a focus on authorization and security. I am also a Security MVP for Alibaba Cloud.
AWS has a focus on security and security features that I both admire and appreciate. However, unless you really spend the time to understand all the little details, it is easy to implement poor/broken security in AWS. I can also say the same about Google security. Google has excellent security built into Google Cloud Platform. Google just does it differently and also requires a lot of time to understand all the little features / details.
In AWS, you cannot just assume a role. You first need an AWS access key or to be authenticated via a service role; then you can call STS to assume a role. Both AWS and Google make this easy with AWS access keys / Google service accounts. Whereas AWS uses roles, Google uses roles and scopes. The end result is good on either platform.
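To make the AWS flow concrete, here is a minimal sketch of assuming a role with the AWS SDK for Java v2 (the role ARN and session name are placeholders):

import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;
import software.amazon.awssdk.services.sts.model.Credentials;

public class AssumeRoleExample {
    public static void main(String[] args) {
        // The base identity (an access key, or an instance/service role) is
        // resolved by the default credential provider chain.
        try (StsClient sts = StsClient.create()) {
            AssumeRoleRequest request = AssumeRoleRequest.builder()
                    .roleArn("arn:aws:iam::123456789012:role/example-role") // placeholder
                    .roleSessionName("example-session")
                    .build();
            Credentials temporary = sts.assumeRole(request).credentials();
            // Temporary credentials: access key, secret key, and session token.
            System.out.println("Temporary access key: " + temporary.accessKeyId());
        }
    }
}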
Google authentication is based upon OAuth 2.0. AWS authentication is based upon Access Key / Secret Key. Both have their strengths and weaknesses. Both can be either easy to implement (if you understand them well) or a pain to get correct.
The major cloud providers (AWS, Azure, Alibaba, Google, IBM) are moving very fast, with a constant stream of new features and services. Each one has strengths and weaknesses, and today no platform offers all the features of the others. AWS is currently ahead in both features and market share; Google also has a vast catalog of services, which I think is often overlooked. The other platforms are catching up quickly, and today you can implement enterprise-class solutions and security on any of them.
Just as we would not choose only Microsoft or only open source for our application and server infrastructure, in 2019 we will not be choosing only AWS or only Google for our cloud infrastructure. We will mix and match the best services from each platform for our needs.
As described in the Getting Started with Authentication page [1], a key file is needed in order to authenticate as a service account.
From [2]: You can authenticate to a Google Cloud Platform (GCP) API using service accounts or user accounts, and for APIs that don't require authentication, you can use API keys.
Service accounts need the key file to authenticate. Taking this information into account, there is no way to authenticate locally as a service account without using a key file.
Links:
[1] https://cloud.google.com/docs/authentication/getting-started
[2] https://cloud.google.com/docs/authentication/
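To illustrate how the key file is consumed, Google's Java auth library resolves credentials through Application Default Credentials: locally it reads the key file named by the GOOGLE_APPLICATION_CREDENTIALS environment variable, while on GCP it asks the metadata server. A minimal sketch, assuming the google-auth-library-oauth2-http dependency:

import com.google.auth.oauth2.AccessToken;
import com.google.auth.oauth2.GoogleCredentials;

public class AdcExample {
    public static void main(String[] args) throws Exception {
        // Locally this reads the service account key file pointed to by
        // GOOGLE_APPLICATION_CREDENTIALS; on GCP it uses the metadata server.
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
                .createScoped("https://www.googleapis.com/auth/cloud-platform");
        credentials.refreshIfExpired();
        AccessToken token = credentials.getAccessToken();
        System.out.println("Access token expires at: " + token.getExpirationTime());
    }
}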