In the SPIFFE specification it is stated that
Since a workload in its early stages may have no prior knowledge of
its identity or whom it should trust, it is very difficult to secure
access to the endpoint. As a result, the SPIFFE Workload Endpoint
SHOULD be exposed through a local endpoint, and implementers SHOULD
NOT expose the same endpoint instance to more than one host.
Can you please explain what is meant by this and how Istio implements it?
Istio mesh services adopt the SPIFFE standard through Istio Security mechanisms, using the same identity document, the SVID. Istio Citadel is the key component for securely provisioning the various identities and providing credential management.
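For example, a workload's SVID in Istio carries a SPIFFE ID derived from its Kubernetes service account, using cluster.local as the default trust domain:

spiffe://cluster.local/ns/<namespace>/sa/<service-account>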
In the near future it will also be feasible to use a Node agent within the Istio mesh to provision workload certificates via the Envoy secret discovery service (SDS) API, an approach very similar to the SPIRE design.
The key concepts of the SPIRE design, as described in the official documentation, are quoted below:
SPIRE consists of two components, an agent and a server.
The server provides a central registry of SPIFFE IDs, and the
attestation policies that describe which workloads are entitled to
assume those identities. Attestation policies describe the properties
that the workload must exhibit in order to be assigned an identity,
and are typically described as a mix of process attributes (such as a
Linux UID) and infrastructure attributes (such as running in a VM that
has a particular EC2 label).
The agent runs on any machine (or, more formally, any kernel) and
exposes the local workload API to any process that needs to retrieve a
SPIFFE ID, key, or trust bundle. On *nix systems, the Workload API is
exposed locally through a Unix Domain Socket. By verifying the
attributes of a calling workload, the workload API avoids requiring
the workload to supply a secret to authenticate.
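As an illustration of that Workload API, a process can ask the local agent for its identity over the Unix Domain Socket using the spire-agent CLI (the socket path below is an assumption; it depends on how the agent is configured):

# fetch the caller's X.509-SVID, private key, and trust bundle from the local agent
spire-agent api fetch x509 -socketPath /tmp/agent.sock

The agent attests the calling process before it answers, so the workload never has to present a secret.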
SPIRE promises to become the main building block for workload authentication mechanisms; however, it is still at the development stage, with production deployments as a desired future goal.
I have a Java application that is deployed on a GKE cluster. Let's call it the "orchestrator".
The application should be able to deploy other applications in the same GCP project where the "orchestrator" app is running (either on the same GKE cluster or a different one), using helm CLI commands.
We were able to do that using Google Service Account authentication, where a JSON key is provided to the "orchestrator", which uses it to generate tokens.
My question is: since both the "orchestrator" and the other apps are running in the same GCP project (sometimes on the same GKE cluster), is there a way to use some default credentials auto-discovered by GCP, instead of generating and providing a Service Account JSON key to the "orchestrator" app?
That way, the customer won't need to expose this key to our system, and the authentication will happen behind the scenes, without our app's intervention.
Is there something a GCP admin can do to make this use case work seamlessly?
I will elaborate on my comment.
When you are using a Service Account, you have to use keys to authenticate, since each service account is associated with a public/private RSA key pair. As you are working on a GKE cluster, did you consider using Workload Identity, as mentioned in Best practices for using and managing service accounts?
According to Best practices for using and managing service accounts, all non-human accounts should be represented by a Service Account:
Service accounts represent non-human users. They're intended for scenarios where a workload, such as a custom application, needs to access resources or perform actions without end-user involvement.
So in general, whenever you want to grant some permissions to applications, you should use a Service Account.
In Types of keys for service accounts you can find the information that every Service Account needs an RSA key pair:
Each service account is associated with a public/private RSA key pair. The Service Account Credentials API uses this internal key pair to create short-lived service account credentials, and to sign blobs and JSON Web Tokens (JWTs). This key pair is known as the Google-managed key pair.
In addition, you can create multiple public/private RSA key pairs, known as user-managed key pairs, and use the private key to authenticate with Google APIs. This private key is known as a service account key.
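For reference, a user-managed key is created roughly like this with the gcloud CLI (the account name is a placeholder):

# create a new user-managed key pair and download the private key
gcloud iam service-accounts keys create key.json \
    --iam-account=orchestrator@my-project.iam.gserviceaccount.com

The downloaded key.json is exactly the kind of secret your customer currently has to hand over.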
You could also think about Workload Identity, but I am not sure if this would fulfill your needs as there are still many unknowns about your environment.
As additional information, there used to be something called Basic Authentication, which could have been an option for you, but for security reasons it has not been supported since GKE 1.19. This was mentioned in another Stack Overflow case: We have discouraged Basic authentication in Google Kubernetes Engine (GKE).
To sum up:
The best practice for granting permissions to non-human accounts is to use a Service Account. Each service account requires an RSA key pair, and you can create multiple user-managed keys.
It is also good practice to use Workload Identity if you have that option, but due to the lack of details it is hard to determine whether it would work in your scenario.
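As a rough sketch of that Workload Identity route (all names below are placeholders, not taken from your environment):

# enable Workload Identity on the existing cluster
gcloud container clusters update my-cluster \
    --workload-pool=my-project.svc.id.goog

# allow the Kubernetes service account to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding \
    orchestrator@my-project.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[my-namespace/orchestrator-ksa]"

# annotate the Kubernetes service account with the Google service account
kubectl annotate serviceaccount orchestrator-ksa \
    --namespace my-namespace \
    iam.gke.io/gcp-service-account=orchestrator@my-project.iam.gserviceaccount.com

Pods running as orchestrator-ksa would then obtain the Google service account's credentials automatically, with no JSON key involved.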
Additional links:
Authenticating to the Kubernetes API server
Use the Default Service Account to access the API server
One way to achieve that is to use the default credentials approach mentioned here: Finding credentials automatically. Instead of exposing the SA key to our app, the GCP admin can attach the same SA to the GKE cluster resource, and the default credentials mechanism will use that SA's credentials to access the APIs and resources (depending on the SA's roles and permissions).
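For illustration (all names are placeholders), attaching the service account at node pool creation could look like this:

# nodes in this pool run as the given service account; Application Default
# Credentials inside the pods will pick it up via the metadata server
gcloud container node-pools create orchestrator-pool \
    --cluster my-cluster \
    --service-account orchestrator@my-project.iam.gserviceaccount.com

The client libraries in the app then find those credentials automatically, so no key file ever has to be handed over.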
The Google documentation says that Workload Identity can be used to authorize GKE pods to consume services provided by Google APIs (and it works fine). It also says that there would be one automatically created identity pool called PROJECT_ID.svc.id.goog.
The documentation about Workload Identity Federation says: "You can use a workload identity pool to organize and manage external identities."
After I configured Workload Identity as described here (and it works fine), I am trying to retrieve the workload identity pools existing in my project, and I expect to see PROJECT_ID.svc.id.goog. I am using this command to list pools: gcloud beta iam workload-identity-pools list --location="global" --show-deleted, and as output I get:
local#local:~/$ gcloud beta iam workload-identity-pools list --location="global"
Listed 0 items.
So are the GKE workload identity pool and the workload identity pools from Workload Identity Federation simply two absolutely separate entities? Or is it just a beta API which is not completely working at the moment? Or is it something else entirely?
Finding the correct name for a product is sometimes difficult. These are two very similar names for two different products, and that is the source of your confusion.
Workload Identity is a GKE add-on. Before going deeper, you have to know that, on Google Cloud Platform, you don't need to use a service account key file, because a service account is automatically loaded on every service (Compute Engine, App Engine, Cloud Run, Cloud Functions, Cloud Build, ...) and is accessible through the metadata server. The Google Cloud client libraries automatically detect the environment and use the metadata server if present.
The problem with GKE is that you run containers on several different Compute Engine instances (the nodes), and your different services (K8s Services) can have different levels of authorization. If you rely on the Compute Engine service account (the default behavior without the Workload Identity add-on), all the pods on the same instance use the same service account (and thus have the same permissions).
To solve that, the Workload Identity add-on creates a proxy that intercepts the metadata server calls and replies with the correct service account bound to that pod/service on GKE.
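You can observe this mechanism from inside a pod; assuming standard access to the metadata server, the following returns the email of the service account the pod is actually using:

curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email

Without the add-on this prints the node's Compute Engine service account; with Workload Identity configured, it prints the service account bound to that pod.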
A workload identity pool is totally different. The principle is to configure third-party identity providers (such as AWS, Okta, or even a custom one) and to define the conditions for accepting a third-party token (email, claims, ...).
When the token is accepted, you can perform a call to impersonate a service account, and thus generate a new token (a Google-compliant one this time) that you will be able to use in subsequent calls.
The principle here is to avoid using a service account key file and to rely on a third-party identity provider to interact with GCP. For example, if you need to call BigQuery from AWS, you can create a token with a workload identity pool and your AWS identity and then call BigQuery, without the need to exchange secrets between the platforms.
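Here is a hedged sketch of that AWS-to-BigQuery flow with the gcloud CLI (pool, provider, project, and account names are all placeholders):

# create a pool for external identities
gcloud iam workload-identity-pools create aws-pool \
    --location="global" --display-name="AWS identities"

# register the AWS account as an identity provider in the pool
gcloud iam workload-identity-pools providers create-aws aws-provider \
    --location="global" --workload-identity-pool="aws-pool" \
    --account-id="123456789012"

# let identities from the pool impersonate a service account that may call BigQuery
gcloud iam service-accounts add-iam-policy-binding \
    bq-reader@my-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="principalSet://iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/aws-pool/*"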
Note: the best way to keep a secret secure is not to have a secret!
My guess is that historically Google started with GKE Workload Identity as a GKE-specific feature and later arrived at the generic approach titled Workload Identity Federation. Both approaches allow you to create access bindings for external identities but use slightly different syntax. Again, I guess they started with GKE and then moved to the more generic and flexible scheme.
Other platforms, like AWS or Azure, do the same Kubernetes magic with the Workload Identity Federation feature.
Which AWS services are GDPR-ready? Can I build and run GDPR-compliant applications on AWS?
All AWS services can be used in compliance with the GDPR.
Many requirements under the GDPR focus on ensuring effective control and protection of personal data. AWS services give you the capability to implement your own security measures in the ways you need in order to enable your compliance with the GDPR, including specific measures such as:
Encryption of personal data
Ability to ensure the ongoing confidentiality, integrity, availability, and resilience of processing systems and services
Ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident
Processes for regularly testing, assessing, and evaluating the effectiveness of technical and organizational measures for ensuring the security of processing
In addition, AWS offers an advanced set of security and compliance services designed specifically to handle the requirements of the GDPR. There are numerous AWS services that have particular significance for customers focusing on GDPR compliance, and AWS has 500+ features and services focused on security and compliance.
For more information, have a look at the AWS GDPR Center.
The AWS Shared Responsibility Model and GDPR
AWS has a shared responsibility model with the customer, and this doesn't change under the GDPR. AWS is responsible for securing the underlying infrastructure that supports the cloud and the services provided, while customers, acting either as data controllers or data processors, are responsible for any personal data they put in the cloud.
You can find more information about the shared responsibility under GDPR in the AWS Security Blog.
Data is a precise representation of the characteristics of any individual. When it comes to security, "the data is secured" is essentially a null-hypothesis statement: we believe it and take it to be true, but how sure can we be that organizational data is secured in the cloud?
Security concerns exist in all three models of cloud services (SaaS vs. PaaS vs. IaaS). How can we keep the data secured with high reliability while remaining resource- and cost-effective?
Google values customers' data and commits to uphold trust to the highest degree; see the terms of service and the security and privacy documentation. While Google puts security at the forefront, the onus is on the user to ensure that they are using the necessary tools (provided by Google) and steps to secure resources. The Google Infrastructure Security Design overview explains what is managed on Google's part, but implementing other means of securing resources should be done on the client's part as well.
I'm building a mobile app that needs a backend that I've chosen to host using Amazon Web Services.
Their mobile SDKs provide APIs to work directly with DynamoDB (making my app a thick client), including user authentication/authorization with their IAM service (which is what I'm going to use to track users). This makes it easy to say "user X wants their information. Here's their temporary access key. Oh, here's the information you requested."
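Under the hood this is typically Amazon Cognito vending temporary IAM credentials; the same exchange, sketched with the AWS CLI (the identity pool ID is a placeholder):

# register or look up this user in the Cognito identity pool
aws cognito-identity get-id \
    --identity-pool-id us-east-1:00000000-0000-0000-0000-000000000000

# trade the identity ID for temporary, role-scoped IAM credentials
aws cognito-identity get-credentials-for-identity \
    --identity-id <IdentityId-from-previous-call>

The mobile SDKs perform these calls for you and hand the temporary credentials to the DynamoDB client.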
However, if I used RDS as a backend database, I'd have to create web services (in PHP or Java, etc.) that my app can talk to. Then I'd also have to implement the authentication/authorization myself within my web service (which I feel could get very messy). I'd also have to host the web service on an EC2 instance, in addition to the RDS instance. So my costs would increase.
The latter seems like it would be a lot of work, something which I could avoid by using DynamoDB (and its API) as my backend.
Am I correct in my reasoning here? Or is there an easy way to authenticate/authorize a PHP web service with an AWS RDS database?
I ask because I've only ever worked with relational databases before, so there would be a learning curve to get the NoSQL DB running. Though hypothetically my plan is to eventually switch to a NoSQL DB at some point in the future anyway, due to my app's increasing demands.
Side note: I already have my database designed in MySQL.
There is no way to use IAM directly with RDS, because fine-grained access control over RDS tables is unavailable. Moreover, IAM policies cannot be enforced dynamically there (i.e., with an Identity Pool).
RDS only manages the database infrastructure for you; the data itself is accessed over the native database protocol, so it is not exposed as a SaaS-style endpoint. DynamoDB, by contrast, is a REST service presented as a distributed key-value store and exposes endpoints to clients (the AWS SDK is just a wrapper around them).
DynamoDB was born as a distributed service and can guarantee fine-grained control over data access, thus allowing safe concurrent access.
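As an illustration of that fine-grained control (the table name and account are hypothetical), an IAM policy attached to the role that authenticated users assume can restrict every user to the items keyed by their own identity:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
    "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
    "Condition": {
      "ForAllValues:StringEquals": {
        "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
      }
    }
  }]
}

Nothing comparable exists for individual rows in an RDS table, which is why a middle tier is needed there.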