Terraform GCE service account stanza

I'm creating an instance on GCP and am running into some issues using the service account stanza. When I do this:
service_account {
  email  = "terraformdeploy-dev@projectxxx.org.com.iam.gserviceaccount.com"
  scopes = []
}
The instance does provision with that service account, but all of the Cloud API access scopes show as disabled in the UI.
If I do this:
service_account {
  email  = "terraformdeploy-dev@projectxxx.org.com.iam.gserviceaccount.com"
  scopes = ["cloud-platform"]
}
the instance provisions with full access to all the APIs. The weird thing is that this service account doesn't actually have access to all of those APIs. I'm confused about how to use the service account stanza here, as the documentation isn't very clear.
Can I just assign the service account, or do I need to specify both the service account and the scopes it has?

GCE offers two methods to limit the API access an instance can perform, and you're getting caught up between the two. The first is access scopes: a GCE instance's scopes limit ANY API request from that machine to the listed services. For example, if your instance's scopes do not allow GCS write operations, then regardless of the service account associated with the instance, you cannot perform GCS writes from it.
You could SSH in, authenticate with the Project Owner account, and try to write to GCS, and it would still fail. This exists as an extra layer of security, and is primarily useful when you know that "instances in this instance group will only ever need GCS read and Stackdriver API access".
The service account associated with the instance, on the other hand, is only used when a client library or gcloud command looks up credentials in the "application default" location. So if your application bundles its own service account JSON key and reads credentials from it, the instance's associated account doesn't matter.
In short, the service account you specify makes all applications default to performing API requests with that account's credentials.
Also keep in mind that there are much more fine-grained scopes than just 'cloud-platform'.
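As a hedged illustration, the stanza below grants only logging and monitoring writes plus read-only GCS, using the short scope aliases the Terraform google provider accepts (the email is the one from the question; the choice of scopes is an assumption, so pick whatever your workload actually needs):

service_account {
  email  = "terraformdeploy-dev@projectxxx.org.com.iam.gserviceaccount.com"
  # API calls outside these scopes fail no matter which IAM roles the
  # account holds; effective access is the intersection of the two.
  scopes = ["logging-write", "monitoring-write", "storage-ro"]
}

Since effective access is the intersection of the instance's scopes and the service account's IAM roles, even granting 'cloud-platform' here would still be bounded by the roles actually bound to the account.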

Related

GCP resource with a service account lacking the logging role still logs

I created a node pool under a GKE cluster using a custom service account. When I created this service account, I did not associate it with any roles.
The resource (node pool) itself was created with the scope required for logging, but the service account used has no policy that allows logging, and it is still able to generate logs!
My understanding was that in order for a resource to have enough permissions, it should satisfy both:
have the required scope (or the cloud-platform scope)
have a service account with the required policy.
Can someone shed some light on this? Am I missing something? I am fairly new to GCP.
I learned that the service agent associated with a GKE cluster has the permissions required to generate logs. Thus, the moment the logging.write scope is associated with the node pool in the cluster, it is good to start logging.
Service agents are Google-managed service accounts that allow Google services to access your resources. They are hidden from the user and can't be seen in the console, but they are evident in places like resource policies. You can read more about them here.
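For reference, a minimal Terraform sketch of that configuration (the pool name and the cluster and service account references are hypothetical):

resource "google_container_node_pool" "logging_pool" {
  name    = "logging-pool"
  cluster = google_container_cluster.primary.id

  node_config {
    # Custom service account with no roles bound to it.
    service_account = google_service_account.custom.email
    # Per the explanation above, this scope is what lets the nodes
    # ship logs even though the account itself holds no logging role.
    oauth_scopes = ["https://www.googleapis.com/auth/logging.write"]
  }
}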

GCP default service accounts best security practices

So, we have a "Compute Engine default service account", and everything is clear with it:
it's a legacy account with excessive permissions
it used to be limited by the "scope" assigned to each GCE instance or instance group
it's recommended to delete this account and use a custom service account for each service, following the least-privilege principle.
The second "default service account" mentioned in the docs is the "App Engine default service account". Presumably it's assigned to App Engine instances, and it's also a legacy thing that needs to be treated similarly to the Compute Engine default service account. Right?
And what about the "Google APIs Service Agent"? It has the "Editor" role. As far as I understand, this account is used internally by GCP and is not accessed by any custom resources I create as a user. Does that mean there is no reason to reduce its permissions for the sake of complying with best security practices?
You don't have to delete your default service account; however, at some point it's best to create accounts that have the minimum permissions required for the job, and to refine those permissions to suit your needs instead of using the default ones.
You have full control over this account, so you can change its permissions at any moment or even delete it:
Google creates the Compute Engine default service account and adds it to your project automatically, but you have full control over the account.
The Compute Engine default service account is created with the IAM basic Editor role, but you can modify your service account's roles to control the service account's access to Google APIs.
You can disable or delete this service account from your project, but doing so might cause any applications that depend on the service account's credentials to fail.
If something stops working, you can recover the account for up to 90 days.
It's also advisable not to use service accounts during development at all, since this may pose a security risk in the future.
As for the Google APIs Service Agent:
This service account is designed specifically to run internal Google processes on your behalf. The account is owned by Google and is not listed in the Service Accounts section of the Cloud Console.
Additionally:
Certain resources rely on this service account and the default editor permissions granted to the service account. For example, managed instance groups and autoscaling use the credentials of this account to create, delete, and manage instances. If you revoke permissions to the service account, or modify the permissions in such a way that it does not grant permissions to create instances, this will cause managed instance groups and autoscaling to stop working.
For these reasons, you should not modify this service account's roles unless a role recommendation explicitly suggests that you modify them.
Having said that, we can conclude that removing either the default service account or the Google APIs Service Agent is risky and requires a lot of preparation (especially the latter).
Have a look at the best practices documentation describing what's recommended and what isn't when managing service accounts.
You can also have a look at securing them against exploitation and at changing the service account and access scopes for an instance.
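If you manage this with Terraform, the google provider offers a resource aimed at exactly this situation; a minimal sketch, assuming a placeholder project ID:

resource "google_project_default_service_accounts" "defaults" {
  project = "my-project-id"   # placeholder
  # DEPRIVILEGE strips the Editor role from the default service
  # accounts without deleting them; DISABLE and DELETE also exist.
  action = "DEPRIVILEGE"
}

Test this on a non-production project first, since (as quoted above) workloads that rely on the default account's credentials will start failing.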
When you talk about security, you especially talk about risk. So, what are the risks with the default service accounts?
If you use them on GCE or Cloud Run (the Compute Engine default service account), you have excessive permissions. If your environment is secured, the risk is low (especially on Cloud Run). On GCE the risk is higher, because you have to keep the VM up to date and control the firewall rules that govern access to it.
Note: by default, Google Cloud creates a VPC with firewall rules open to 0.0.0.0/0 on port 22 (SSH), RDP, and ICMP. That is also a security issue to fix by default.
The App Engine default service account is used by App Engine and Cloud Functions by default. As with Cloud Run, the risk can be considered low.
Another important aspect is the capacity to generate service account key files for those default service accounts. A service account key file is a simple JSON file with a private key in it. This time the risk is very high, because few developers take real care over the security of that file.
Note: in a previous company, the only security issues that we had came from those files, especially with service accounts that had the Editor role.
Most of the time, a user doesn't need a service account key file to develop (I wrote a bunch of articles on that on Medium).
There are two ways to mitigate those risks:
Perform IaC (infrastructure as code, with a product like Terraform) to create and deploy your projects and to enforce all the security best practices that you have defined in your company (VPCs without default firewall rules, no Editor role on service accounts, ...).
Use organisation policies, especially "Disable service account key creation" to prevent service account key creation, and "Disable Automatic IAM Grants for Default Service Accounts" to prevent the Editor grant on the default service accounts (see the Terraform sketch below).
Deletion isn't the solution; a good knowledge of the risks, a good security culture in the team, and some organisation policies are the key.
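A minimal Terraform sketch of those two organisation policies (the organisation ID is a placeholder):

resource "google_organization_policy" "no_sa_keys" {
  org_id     = "123456789012"   # placeholder
  constraint = "iam.disableServiceAccountKeyCreation"
  boolean_policy {
    enforced = true
  }
}

resource "google_organization_policy" "no_default_grants" {
  org_id     = "123456789012"   # placeholder
  constraint = "iam.automaticIamGrantsForDefaultServiceAccounts"
  boolean_policy {
    enforced = true
  }
}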

How can I create a new member in Cloud IAM for external services accessing my resources (e.g. a Cloud Function)?

I have a cloud function that has restricted access by Cloud IAM. I have an external service (Auth0) that launches hooks when something happens. I want that hook to trigger my Cloud Function. However the hook should authorize itself beforehand with Cloud IAM.
What I want to do:
Create a new member auth0-hooks
Give that member the Cloud Function Invoker permission
In the hook's code I want to fetch an IAM token from Google (the metadata server?)
Use that token within the request to the Cloud Function trigger URL
Trigger access through Cloud IAM and the given token
I am currently stuck on the step of creating a new member auth0-hooks. I thought that would be the trivial part, but I quickly figured out that there is no way to simply add a new member. I thought about creating a service account, but was unsure whether a service account can be used from outside (by requesting its access token via the Google metadata server).
That's where I am currently stuck.
The service account is the correct way. A service account is a technical account: like a user account, but for servers.
You can grant permissions to it. When you need to use this service account from outside the GCP environment, you need to create a service account key file, which contains a private key (it's a secret, keep it safe!). With this key file you are able to generate the identity token your hook needs to call the Cloud Function and be authenticated and authorized.
The Google Cloud auth libraries help you with this, in several languages.
Note: the metadata server is an internal service on Google Cloud, not reachable externally.
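A hedged Terraform sketch of that setup (the project, region, and function name are placeholders):

resource "google_service_account" "auth0_hooks" {
  account_id   = "auth0-hooks"
  display_name = "Auth0 hooks invoker"
}

# Grant only this account the right to invoke the function.
resource "google_cloudfunctions_function_iam_member" "invoker" {
  project        = "my-project"    # placeholder
  region         = "us-central1"   # placeholder
  cloud_function = "my-function"   # placeholder
  role           = "roles/cloudfunctions.invoker"
  member         = "serviceAccount:${google_service_account.auth0_hooks.email}"
}

# Key file for use outside GCP; the private key also ends up in the
# Terraform state, so protect the state as a secret too.
resource "google_service_account_key" "auth0_hooks_key" {
  service_account_id = google_service_account.auth0_hooks.name
}

The hook then uses this key with one of the auth libraries mentioned above to mint an identity token whose audience is the function's trigger URL.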

Custom IAM policy binding for a custom service account in GCP

I created a service account mycustomsa@myproject.iam.gserviceaccount.com.
Following the GCP best practices, I would like to use it in order to run a GCE VM named instance-1 (not yet created).
This VM has to be able to write logs and metrics for Stackdriver.
I identified:
roles/monitoring.metricWriter
roles/logging.logWriter
However:
Do you advise any additional roles I should use (e.g. instance admin)?
How should I setup the IAM policy binding at project level to restrict the usage of this service account just for GCE and instance-1?
For writing logs and metrics to Stackdriver, those roles are appropriate; beyond that, you need to define what kinds of activities the instance will be doing. However, as John pointed out in his comment, a conditional role binding might be useful, as conditions can be added to new or existing IAM policies to further control access to Google Cloud resources.
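A minimal Terraform sketch of the two project-level bindings, using the project and account from the question (a condition block could be added to either google_project_iam_member to make the binding conditional):

resource "google_project_iam_member" "log_writer" {
  project = "myproject"
  role    = "roles/logging.logWriter"
  member  = "serviceAccount:mycustomsa@myproject.iam.gserviceaccount.com"
}

resource "google_project_iam_member" "metric_writer" {
  project = "myproject"
  role    = "roles/monitoring.metricWriter"
  member  = "serviceAccount:mycustomsa@myproject.iam.gserviceaccount.com"
}

Restricting who may attach this service account to instance-1 is handled separately, by granting roles/iam.serviceAccountUser on the service account itself rather than at the project level.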
As for best practices on service accounts, I would recommend making the SA as secure as possible with the following:
- Specify who can act as service accounts. Users who are Service Account Users for a service account can indirectly access all the resources the service account has access to. Therefore, be cautious when granting the serviceAccountUser role to a user.
- Grant the service account only the minimum set of permissions required to achieve its goal. Learn about granting roles to all types of members, including service accounts.
- Create service accounts for each service with only the permissions required for that service.
- Use the display name of a service account to keep track of your service accounts. When you create a service account, populate its display name with the purpose of the service account.
- Define a naming convention for your service accounts.
- Implement processes to automate the rotation of user-managed service account keys.
- Take advantage of the IAM service account API to implement key rotation.
- Audit service accounts and keys using either the serviceAccount.keys.list() method or the Logs Viewer page in the console.
- Do not delete service accounts that are in use by running instances on App Engine or Compute Engine unless you want those applications to lose access to the service account.

AWS - how to separate each user's resources for an AWS service

I am opening an AWS Service (say: AWS Rekognition) for my app's users.
The problem is: when one user (e.g. user1) creates a resource (such as a collection), other users (e.g. user2, user3) also see the resource that was created by user1.
I have tried using an Identity Pool, and acquired a token/identity from my backend server for my users, but things are no better (my users can still see each other's resources).
What should I do so that user1 sees only user1's resources?
I have been struggling with this problem for days, but can't seem to figure it out.
Regards
There are two approaches to this architecture:
Option 1: Client/Server
In this architecture, client apps (e.g., on a mobile device or a web-based app) make calls to an API that is hosted by your back-end application. The back-end app then verifies the request and makes calls to AWS on behalf of the user.
The user's app never receives AWS credentials. This is very secure because the back-end app can authenticate all requests and apply business logic.
Option 2: Providing AWS credentials
In this architecture, the client apps receive temporary AWS credentials that enables them to directly call AWS services (which matches the architecture you describe).
The benefit is that the app can directly access AWS services such as Amazon S3. The downside is that you need to very tightly limit the permissions they are given, to ensure they only access the desired resources.
Some services make this easy by allowing Conditions on IAM Permissions that can limit the resources that can be accessed, such as by tag or other identifier.
However, based upon Actions, Resources, and Condition Keys for Amazon Rekognition - AWS Identity and Access Management, there is no such capability for Amazon Rekognition:
Rekognition has no service-specific context keys that can be used in the Condition element of policy statements.
I think you could limit the calls by providing a Resource string in the IAM policy, which can limit the ability to make certain calls (e.g., DeleteFaces) so that they are only permitted against a specific collection.
However, please note that list calls such as ListCollections are either permitted fully or not at all. It is not possible to limit the list of collections returned. (This is the same as most AWS Services, such as listing EC2 instances.)
Thus, when using this method of providing credentials, you should be very careful about the permissions granted to the app.
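A hedged sketch of such a policy in Terraform (the region, account ID, action list, and the per-user collection naming convention are all assumptions):

resource "aws_iam_policy" "user1_rekognition" {
  name = "user1-rekognition-access"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "rekognition:IndexFaces",
        "rekognition:SearchFacesByImage",
        "rekognition:DeleteFaces",
      ]
      # Only collections that follow a per-user naming convention.
      Resource = "arn:aws:rekognition:us-east-1:123456789012:collection/user1-*"
    }]
  })
}

Note that, as stated above, list calls such as ListCollections cannot be scoped to a resource this way; they are either allowed in full or denied.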