I created a node pool under a GKE cluster using a custom service account. When I created this service account, I did not associate it with any roles.
The resource (the node pool) itself was created with the scope required for logging, but the service account used has no logging policy, and yet it is still able to generate logs!
My understanding was that in order for a resource to have enough permissions, it should satisfy both:
have the required scope (or the cloud-platform scope)
have a service account with the required IAM policy.
Can someone shed some light on this? Am I missing something? I am fairly new to GCP.
I learned that the service agent associated with a GKE cluster has the required permission to generate logs. Thus, the moment the logging.write scope is associated with a node pool within the cluster, it is good to start logging.
Service agents are Google-managed service accounts that allow Google services to access your resources. They are hidden from the user and can't be seen in the console, but they are evident in places like resource policies. You can read more about them here.
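For reference, here is a minimal sketch of the setup the question describes, using the google-cloud-container client. The project, location, cluster, pool name, machine type and service account email are all placeholders, not values from the question.

```python
# Sketch: a node pool that runs as a custom service account and carries only
# the logging.write scope. All identifiers below are placeholders.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

client.create_node_pool(
    request={
        "parent": "projects/my-project/locations/us-central1-a/clusters/my-cluster",
        "node_pool": {
            "name": "logging-pool",
            "initial_node_count": 1,
            "config": {
                "machine_type": "e2-medium",
                "service_account": "my-custom-sa@my-project.iam.gserviceaccount.com",
                "oauth_scopes": ["https://www.googleapis.com/auth/logging.write"],
            },
        },
    }
)
```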
So, we have a "Compute Engine default service account", and everything is clear with it:
it's a legacy account with excessive permissions
it used to be limited by the "scope" assigned to each GCE instance or instance group
it's recommended to delete this account and use a custom service account for each service, following the least-privilege principle.
The second "default service account" mentioned in the docs is the "App Engine default service account". Presumably it's assigned to the App Engine instances and it's also a legacy thing that needs to be treated similarly to the Compute Engine default service account. Right?
And what about "Google APIs Service Agent"? It has the "Editor" role. As far as I understand, this account is used internally by GCP and is not accessed by any custom resources I create as a user. Does it mean that there is no reason to reduce its permissions for the sake of complying with the best security practices?
You don't have to delete your default service account; however, at some point it's best to create accounts that have the minimum permissions required for the job and to refine those permissions to suit your needs instead of using the default ones.
You have full control over this account, so you can change its permissions at any moment or even delete it:
Google creates the Compute Engine default service account and adds it to your project automatically but you have full control over the account.
The Compute Engine default service account is created with the IAM basic Editor role, but you can modify your service account's roles to control the service account's access to Google APIs.
You can disable or delete this service account from your project, but doing so might cause any applications that depend on the service account's credentials to fail
If something stops working, you can recover the account for up to 90 days after deletion.
It's also advisable not to use service accounts during development at all, since this may pose a security risk in the future.
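If you do decide to trim the default account's roles rather than delete it, a hedged sketch of removing the basic Editor role via the Cloud Resource Manager API could look like this; the project ID and the account's numeric prefix are placeholders.

```python
# Sketch: strip the basic Editor role from the Compute Engine default service
# account at project level. Project ID and numeric prefix are placeholders.
from googleapiclient import discovery

crm = discovery.build("cloudresourcemanager", "v1")
project_id = "my-project"
member = "serviceAccount:123456789012-compute@developer.gserviceaccount.com"

policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
for binding in policy.get("bindings", []):
    if binding["role"] == "roles/editor" and member in binding.get("members", []):
        binding["members"].remove(member)

crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()
```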
As for the Google APIs Service Agent, the documentation says:
This service account is designed specifically to run internal Google processes on your behalf. The account is owned by Google and is not listed in the Service Accounts section of Cloud Console
Additionally:
Certain resources rely on this service account and the default editor permissions granted to the service account. For example, managed instance groups and autoscaling uses the credentials of this account to create, delete, and manage instances. If you revoke permissions to the service account, or modify the permissions in such a way that it does not grant permissions to create instances, this will cause managed instance groups and autoscaling to stop working.
For these reasons, you should not modify this service account's roles unless a role recommendation explicitly suggests that you modify them.
Having said that, we can conclude that removing either the default service account or the Google APIs Service Agent is risky and requires a lot of preparation (especially the latter).
Have a look at the best practices documentation describing what is and isn't recommended when managing service accounts.
You can also have a look at securing them against any exploitation and at changing the service account and access scopes for an instance.
When you talk about security, you are really talking about risk. So, what are the risks with the default service accounts?
If you use them on GCE or Cloud Run (the Compute Engine default service account), you have excessive permissions. If your environment is secured, the risk is low (especially on Cloud Run). On GCE the risk is higher, because you have to keep the VM up to date and control the firewall rules that allow access to it.
Note: by default, Google Cloud creates a VPC with firewall rules open to 0.0.0.0/0 on port 22 (SSH), RDP and ICMP. That is another security issue to fix from the start.
The App Engine default service account is used by App Engine and Cloud Functions by default. As with Cloud Run, the risk can be considered low.
Another important aspect is the ability to generate service account key files for those default service accounts. A service account key file is a simple JSON file containing a private key. Here the risk is very high, because few developers really take care of the security of that file.
Note: in a previous company, the only security issues we had came from those files, especially with service accounts holding the Editor role.
Most of the time, a developer doesn't need a service account key file to develop (I wrote a bunch of articles on that on Medium).
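As an illustration of developing without key files, here is a minimal sketch using Application Default Credentials (set up locally with `gcloud auth application-default login`); the Storage client is only an example, any client library behaves the same way.

```python
# Sketch: authenticate with Application Default Credentials instead of a
# downloaded service account key file.
import google.auth
from google.cloud import storage  # any client library works the same way

credentials, project_id = google.auth.default()
client = storage.Client(credentials=credentials, project=project_id)
print([bucket.name for bucket in client.list_buckets()])
```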
There are two ways to mitigate those risks:
Perform IaC (infrastructure as code, with a product like Terraform) to create and deploy your projects and to enforce all the best security practices that you have defined in your company (VPC without default firewall rules, no Editor role on service accounts, ...)
Use organisation policies, especially "Disable service account key creation" to prevent service account key creation, and "Disable Automatic IAM Grants for Default Service Accounts" to prevent the Editor role being granted to the default service accounts.
Deletion isn't the solution; a good knowledge of the risks, a good security culture in the team, and some organisation policies are the key.
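As a sketch of the second mitigation, here is one possible way to enforce both constraints programmatically, using the older Cloud Resource Manager v1 org-policy methods (the newer Org Policy API works as well, and the same can be done at folder or organisation level); the project ID is a placeholder.

```python
# Sketch: enforce the two organisation policy constraints mentioned above
# at project level. Project ID is a placeholder.
from googleapiclient import discovery

crm = discovery.build("cloudresourcemanager", "v1")
resource = "projects/my-project"

for constraint in (
    "constraints/iam.disableServiceAccountKeyCreation",
    "constraints/iam.automaticIamGrantsForDefaultServiceAccounts",
):
    crm.projects().setOrgPolicy(
        resource=resource,
        body={"policy": {"constraint": constraint, "booleanPolicy": {"enforced": True}}},
    ).execute()
```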
I created a service account mycustomsa@myproject.iam.gserviceaccount.com.
Following the GCP best practices, I would like to use it in order to run a GCE VM named instance-1 (not yet created).
This VM has to be able to write logs and metrics for Stackdriver.
I identified:
roles/monitoring.metricWriter
roles/logging.logWriter
However:
Do you advise any additional role I should use? (e.g. instance admin)
How should I setup the IAM policy binding at project level to restrict the usage of this service account just for GCE and instance-1?
For writing logs and metrics to Stackdriver, those roles are appropriate; beyond that, you need to define what other kinds of activity the instance will be doing. However, as John pointed out in his comment, using a conditional role binding might be useful, as conditions can be added to new or existing IAM policies to further control access to Google Cloud resources.
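As a sketch, granting exactly those two roles to the service account at project level could look like this; the commented-out condition only illustrates the shape of a conditional role binding (here, time-bounded access), and is an assumption, not something from the question.

```python
# Sketch: grant the two roles to the custom service account at project level.
from googleapiclient import discovery

crm = discovery.build("cloudresourcemanager", "v1")
project_id = "myproject"
member = "serviceAccount:mycustomsa@myproject.iam.gserviceaccount.com"

policy = crm.projects().getIamPolicy(
    resource=project_id,
    body={"options": {"requestedPolicyVersion": 3}},
).execute()

for role in ("roles/logging.logWriter", "roles/monitoring.metricWriter"):
    policy.setdefault("bindings", []).append({
        "role": role,
        "members": [member],
        # Example conditional binding (time-bounded access):
        # "condition": {
        #     "title": "expires-end-of-2025",
        #     "expression": 'request.time < timestamp("2026-01-01T00:00:00Z")',
        # },
    })
policy["version"] = 3

crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()
```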
As for the best practices on SA, I would recommend to make the SA as secure as possible with the following:
- Specify who can act as service accounts. Users who are Service Account Users for a service account can indirectly access all the resources the service account has access to. Therefore, be cautious when granting the serviceAccountUser role to a user.
- Grant the service account only the minimum set of permissions required to achieve its goal. Learn about granting roles to all types of members, including service accounts.
- Create service accounts for each service with only the permissions required for that service.
- Use the display name of a service account to keep track of your service accounts. When you create a service account, populate its display name with the purpose of the service account.
- Define a naming convention for your service accounts.
- Implement processes to automate the rotation of user-managed service account keys.
- Take advantage of the IAM service account API to implement key rotation.
- Audit service accounts and keys using either the serviceAccount.keys.list() method or the Logs Viewer page in the console.
- Do not delete service accounts that are in use by running instances on App Engine or Compute Engine unless you want those applications to lose access to the service account.
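To illustrate the auditing point above, here is a minimal sketch that lists the user-managed keys of the service account through the IAM API.

```python
# Sketch: audit the user-managed keys of a service account via the IAM API.
from googleapiclient import discovery

iam = discovery.build("iam", "v1")
sa_email = "mycustomsa@myproject.iam.gserviceaccount.com"

response = iam.projects().serviceAccounts().keys().list(
    name=f"projects/-/serviceAccounts/{sa_email}",
    keyTypes="USER_MANAGED",
).execute()

for key in response.get("keys", []):
    print(key["name"], key.get("validAfterTime"), key.get("validBeforeTime"))
```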
Recently, one of our GCP IAM service account keys was exposed, and a miner was installed.
In the case of AWS, 2FA can be required when accessing with an access key, or access can be restricted to specific IPs.
If GCP had such a setting, the incident would not have happened immediately even though the key was exposed.
I searched for ACL and 2FA settings in GCP, but there is no such setting for keys; I only found settings for instance access.
Is it possible to set up 2FA for GCP web console access and for access with an IAM key, and to configure an IP ACL?
In addition, we need an IP-based ACL for BigQuery, but according to the other teams we contacted there is no ACL for BigQuery access; it is controlled only by IAM.
If an IAM key is exposed by user error, is there any way GCP can prevent abuse?
You can enforce 2FA and IP controls on the IAM service (with IAM Conditions and context-aware access).
Google helps you as much as it can:
Support contacts you in case of abnormal activity, such as a miner installed on your VM generating suspicious network traffic
Public repositories, such as GitHub, are periodically scanned by Google, and if a service account key file is found, you are notified
The platform also proposes solutions to mitigate the risk:
Context-aware access
IAM Conditions
An organisation policy to disable the ability to generate service account key files: only a small group of users can generate them, after validation of the request, the goal being to limit the number of keys and to generate them only when the use case requires them
Security Command Center (SCC) findings can flag primitive roles on service accounts: too many permissions, use predefined roles instead
The IAM Recommender, which proposes reducing the permission scope based on the last 90 days of activity
So there is a set of tools to be both proactive and reactive to events.
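To make the IAM Conditions / context-aware access point concrete, this is roughly what a conditional binding gated on an access level looks like; the access-policy number, level name, role and member are all placeholders, and the binding would be applied with setIamPolicy on a version 3 policy, as in the earlier sketches.

```python
# Sketch: an IAM binding whose condition requires the caller to satisfy a
# context-aware access level (for example, one restricted to corporate IPs).
# All identifiers are placeholders.
binding = {
    "role": "roles/bigquery.dataViewer",
    "members": ["user:analyst@example.com"],
    "condition": {
        "title": "corp-network-only",
        "expression": (
            '"accessPolicies/123456789/accessLevels/corp_network" '
            "in request.auth.access_levels"
        ),
    },
}
```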
You can set up 2-Step Verification (2SV), which is triggered when trying to access GCP VM instances.
Follow this guide to set up 2SV for your instance.
Also, VPC Service Controls can add an additional security layer for managed services like BigQuery.
You can block specific IP addresses via this service.
This article will help you a lot in using VPC Service Controls.
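As a hedged sketch of the IP-blocking idea: an Access Context Manager access level that matches only a given IP range can be created roughly as below and then referenced from a VPC Service Controls perimeter or a context-aware access policy; the access-policy number, level name and CIDR range are placeholders.

```python
# Sketch: create an Access Context Manager access level limited to an IP range,
# for use with VPC Service Controls or context-aware access. Placeholders only.
from googleapiclient import discovery

acm = discovery.build("accesscontextmanager", "v1")
policy_name = "accessPolicies/123456789"

acm.accessPolicies().accessLevels().create(
    parent=policy_name,
    body={
        "name": f"{policy_name}/accessLevels/office_ips",
        "title": "office_ips",
        "basic": {"conditions": [{"ipSubnetworks": ["203.0.113.0/24"]}]},
    },
).execute()
```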
I am having some trouble doing a CodeDeploy deployment with my AWS Educate account. Initially, when I was setting things up, I was following this article:
https://hackernoon.com/deploy-to-ec2-with-aws-codedeploy-from-bitbucket-pipelines-4f403e96d50c
In it, the author talks about setting up an IAM service account. The problem is that AWS Educate allows you to create the accounts, but it won't generate keys. In order for me to deploy my Spring Boot (and Vue.js) apps to my S3 buckets and EC2 instances from my Bitbucket repo, I need an access key, a secret key, and a CodeDeploy deployment group.
Fine: I was able to click the Account Details button on the labs.vocareum page and get my keys. However, when I attempt to set up a CodeDeploy deployment group, it asks for a service role, and I am unsure where to get this.
Why is the service role necessary?
The service role is used by the CodeDeploy service in order to perform actions outside CodeDeploy (e.g. on another service such as S3).
AWS has a special approach of integrating services. Basically, you have to give each service you are using explicit permission to use another service (even if the access stays in the bounds of the same account). There is no inherent permission given to the CodeDeploy service to change things in S3. In fact, CodeDeploy is not even allowed to read files from S3 without explicitly allowing it.
Here is the official explanation from the docs [1]:
In AWS, service roles are used to grant permissions to an AWS service so it can access AWS resources. The policies that you attach to the service role determine which AWS resources the service can access and what it can do with those resources.
What you are actually doing according to the hackernoon article
1. You need a user account with programmatic access to your AWS account.
2. The user account needs to have a policy attached which grants permission to upload files into S3 and trigger a CodeDeploy deployment. You provide the access key and secret access key of this user to Bitbucket so it can upload the artifacts into S3 and trigger a deployment on behalf of your user identity.
3. Unrelated to steps 1 and 2: create a role in AWS IAM [2] which will be used by both services (NOT Bitbucket): CodeDeploy and EC2. Strictly speaking, the author of the hackernoon article is merging two steps into one here: you are creating one role which is used by both services (as specified by the two different principals in the trust relationship: ec2.amazonaws.com and codedeploy.us-west-2.amazonaws.com). Usually this is not how IAM policies should be configured, because it violates the principle of granting least privilege [4]: the EC2 instance receives permissions from the AWSCodeDeployRole policy which it probably does not need, as far as I can see. But that is just a philosophical note; all the steps mentioned in the hackernoon article should technically work.
So, what you actually do is:
granting CodeDeploy permission to perform various actions inside your account, such as viewing which EC2 instances you have started etc. (this is specified inside the policy AWSCodeDeployRole [3])
granting EC2 permission to read the revision which was uploaded to S3 (this is specified inside the policy AmazonS3FullAccess); a sketch of setting this up follows below
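Putting those two bullets into code, here is a hedged boto3 sketch of the single role from the article; the role name is a placeholder, and the regional CodeDeploy principal follows the article's us-west-2 setup.

```python
# Sketch: create the role trusted by both EC2 and CodeDeploy and attach the
# two managed policies mentioned above. Role name is a placeholder.
import json

import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Service": ["ec2.amazonaws.com", "codedeploy.us-west-2.amazonaws.com"]
        },
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="CodeDeployServiceRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="CodeDeployServiceRole",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole",
)
iam.attach_role_policy(
    RoleName="CodeDeployServiceRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)
```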
To get back to your question...
However when I am attempting to set up a Code Deploy Group it asks for a service role and I am unsure where to get this?
You need to create the service role yourself inside the IAM service (see [2]). I do not know if this is supported by AWS Educate, but I guess it should be. After creating the service role, you MUST assign it to the CodeDeploy deployment group (that is the point where you are stuck right now). Moreover, you must assign that same service role to your EC2 instance profile.
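And a hedged boto3 sketch of those last two steps: referencing the role when creating the deployment group, and exposing the same role to EC2 through an instance profile. The application, group and profile names and the tag filter are placeholders, and it assumes the CodeDeploy application already exists.

```python
# Sketch: create the deployment group with the service role, then make the
# same role available to EC2 via an instance profile. Names are placeholders.
import boto3

iam = boto3.client("iam")
codedeploy = boto3.client("codedeploy")

role_arn = iam.get_role(RoleName="CodeDeployServiceRole")["Role"]["Arn"]

codedeploy.create_deployment_group(
    applicationName="my-spring-boot-app",
    deploymentGroupName="production",
    serviceRoleArn=role_arn,
    ec2TagFilters=[
        {"Key": "Name", "Value": "my-app-server", "Type": "KEY_AND_VALUE"}
    ],
)

# The EC2 instances pick up the same role via an instance profile:
iam.create_instance_profile(InstanceProfileName="CodeDeployServiceRole")
iam.add_role_to_instance_profile(
    InstanceProfileName="CodeDeployServiceRole",
    RoleName="CodeDeployServiceRole",
)
```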
References
[1] https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html
[2] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html#roles-creatingrole-service-console
[3] https://github.com/SummitRoute/aws_managed_policies/blob/master/policies/AWSCodeDeployRole
[4] https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege
When I try to create a job in the GCP Cloud Scheduler I get this error:
{"error":{"code":7,"message":"The principal (user or service account) lacks IAM permission \"iam.serviceAccounts.actAs\" for the resource \"[my service account]\" (or the resource may not exist)."}}
When I enabled the GCP Cloud Scheduler the service account was created (and I can see it in my accounts list). I have verified that it has the "Cloud Scheduler Service Agent" role.
I am logged in as an Owner of our project. It is when I try to create the job that I get this error. I tried adding the "Service Account User" role to my principal account, but to no avail.
Does anyone know if I have to add any additional permissions? Or do I have to allow my principal to act as (impersonate?) this service account in some way?
Many thanks.
Ben
OK, I figured this out. The documentation is (sort of, in my view) clear if you read it in a certain way / know how GCP IAM works.
You actually need two service accounts. You need one that you set up yourself (can be whatever name you like and doesn't require any special permissions) and you also need the one for Cloud Scheduler itself.
Don't confuse the two. And use the one that you created when specifying the service account that generates the OAuth / OIDC tokens.
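For completeness, here is a minimal sketch of granting your own principal the "act as" permission on the service account you created, which is what the iam.serviceAccounts.actAs error refers to; the project, service account and user emails are placeholders.

```python
# Sketch: grant roles/iam.serviceAccountUser on the custom service account to
# the principal that creates the Cloud Scheduler job. Emails are placeholders.
from googleapiclient import discovery

iam = discovery.build("iam", "v1")
sa = "projects/my-project/serviceAccounts/scheduler-invoker@my-project.iam.gserviceaccount.com"

policy = iam.projects().serviceAccounts().getIamPolicy(resource=sa).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/iam.serviceAccountUser",
    "members": ["user:ben@example.com"],
})
iam.projects().serviceAccounts().setIamPolicy(
    resource=sa, body={"policy": policy}
).execute()
```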