Find permissions of service account associated with buckets - google-cloud-platform

I have created a service account using the command
gcloud iam service-accounts create test-sa --display-name "TEST SA"
And then I go ahead and give this service account admin privileges on a GCS bucket.
gsutil iam ch serviceAccount:test-sa@<PROJECT>.iam.gserviceaccount.com:admin gs://<BUCKET>
Now I want a method to check what roles/permissions are granted to a service account.
One way is to do something like:
gcloud projects get-iam-policy <PROJECT> \
--flatten="bindings[].members" \
--format='table(bindings.role)' \
--filter="bindings.members: serviceAccount:test-sa#<PROJECT>.iam.gserviceaccount.com"
But the above command returns an empty result.
However, if I get the IAM policy for the bucket, I can clearly see the members and the roles for the bucket:
gsutil iam get gs://<BUCKET>
{
  "bindings": [
    {
      "members": [
        "serviceAccount:test-sa@<PROJECT>.iam.gserviceaccount.com"
      ],
      "role": "roles/storage.admin"
    },
    {
      "members": [
        "projectEditor:<PROJECT>",
        "projectOwner:<PROJECT>"
      ],
      "role": "roles/storage.legacyBucketOwner"
    },
    {
      "members": [
        "projectViewer:<PROJECT>"
      ],
      "role": "roles/storage.legacyBucketReader"
    }
  ],
  "etag": "CAI="
}
Can someone guide me as to how I can view the buckets/permissions associated with a service account, and not the other way around?

The issue here is that you are mixing project-level roles with bucket-level roles: you assigned the role directly on the bucket (a bucket-level role) and then checked at the project level.
This is why you get different results when checking the project (gcloud projects get-iam-policy) versus the bucket (gsutil iam get gs://).
You should stick to either bucket-level roles or project-level roles and avoid mixing the two; once you start mixing them, it gets tricky to know which roles each user has and where.
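If you do want the binding to appear in gcloud projects get-iam-policy, one option is to grant the role at the project level instead; a minimal sketch, reusing the service account and role from the question:
gcloud projects add-iam-policy-binding <PROJECT> \
    --member='serviceAccount:test-sa@<PROJECT>.iam.gserviceaccount.com' \
    --role='roles/storage.admin'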
Depending on the number of buckets you plan to manage, it may be easier to stick to bucket-level roles and simply iterate over a list of buckets when checking the permissions of a user. You can do this very easily with the Cloud SDK in a small loop such as:
while read -r bucket
do
  gsutil iam get "gs://${bucket}"
done < bucket-list.txt
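To narrow each policy down to a single member, the body of the loop could pipe through jq (an assumption: jq is installed; the member string reuses the service account from the question):
gsutil iam get "gs://${bucket}" | \
  jq --arg m "serviceAccount:test-sa@<PROJECT>.iam.gserviceaccount.com" \
     '.bindings[] | select(.members | index($m)) | .role'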
Hope you find this useful.

As you are granting the role on the bucket itself and not with a project-level IAM binding,
the gcloud projects get-iam-policy command won't return this binding.
You can only get it by querying the bucket's IAM policy (gsutil iam get).

You can assign permissions at the Project/Folder/Organization level and on individual resources such as buckets, objects, Compute Engine instances, KMS keys, etc. There is no single command that checks everything.
Permissions granted at the project level are project-wide; permissions granted on an individual resource, such as an object, affect only that object. You will need to check every level to know exactly what and where an IAM member has permissions.
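A quick sketch of checking each level in turn with the Cloud SDK (the organization and folder IDs below are placeholders):
gcloud organizations get-iam-policy ORGANIZATION_ID
gcloud resource-manager folders get-iam-policy FOLDER_ID
gcloud projects get-iam-policy <PROJECT>
gsutil iam get gs://<BUCKET>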

Related

Please ensure you have OWNER-role access to this resource

I have a bucket with an object inside it.
gsutil acl ch -u AllUsers:R gs://object-path
I get the following error
Please ensure you have OWNER-role access to this resource.
I am set as the owner of the object when I click the three dots to check. Also, when I run
gsutil acl get gs://object-path
I get
{"email":"ci-tool-access#project-name.iam.gserviceaccount.com",
"entity":"ci-tool-access#project-name.iam.gserviceaccount.com",
"role":"OWNER"
}
To repeat myself: my IAM role for the entire project, not just Cloud Storage resources, is set to Owner. Running
gsutil acl get gs://bucket-path
gets me this
[
  {
    "entity": "project-owners.....",
    "projectTeam": {
      "projectNumber": "...",
      "team": "owners"
    },
    "role": "OWNER"
  }
]
running gcloud projects get-iam-policy <PROJECT_ID>
output:
- members:
  - serviceAccount:ci-tool-access@project-name.iam.gserviceaccount.com
  role: roles/owner
and gcloud auth list
ACTIVE ACCOUNT
* ci-tool-access@project-name.iam.gserviceaccount.com
This is doing my head in. Any ideas what might be happening that's preventing me from changing the ACL on the object? Uniform bucket-level access is set to false, so I should have granular access to each object.
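For reference, I can also confirm the uniform bucket-level access setting from the CLI; a quick check (assuming a gsutil version that supports the ubla command):
gsutil ubla get gs://bucket-path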

How to use an EC2 instance role with the AWS CLI

When I run an aws command like aws s3 ls, it uses the default profile. Can I create a new profile to use a role attached to the EC2 instance?
If so, how can I write credentials/config files?
From Credentials — Boto 3 Docs documentation:
The mechanism in which boto3 looks for credentials is to search through a list of possible locations and stop as soon as it finds credentials. The order in which Boto3 searches for credentials is:
1. Passing credentials as parameters in the boto3.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credential file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume Role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured
Since the Shared Credential File is consulted before the Instance Metadata service, it is not possible to use an assigned IAM Role if a credentials file is provided.
One idea to try: You could create another user on the EC2 instance that does not have a credentials file in their ~/.aws/ directory. In this case, later methods will be used. I haven't tried it, but using sudo su might be sufficient to change to this other user and use the IAM Role.
Unfortunately, if you have a credentials file, use environment variables, or specify the IAM key/secret via the SDK, these will always take higher precedence than the role itself.
If the credentials are required infrequently, you could create another role that the EC2 instance's IAM role can assume (using sts:AssumeRole) whenever it needs to perform these interactions. You would then remove the credentials file from disk.
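A minimal sketch of what that could look like in ~/.aws/config (the profile name, account ID, and role name are placeholders):
[profile assumed-role]
role_arn = arn:aws:iam::123456789012:role/some-other-role
credential_source = Ec2InstanceMetadata
The CLI would then assume the second role on demand via aws --profile assumed-role s3 ls.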
If you must have a credentials file on the disk, the suggestion would be to create another user on the server exclusively for using these credentials. As a credentials file is only used by default by the user it belongs to, all other users will not use this file (unless it is explicitly passed as an argument within the SDK/CLI interaction).
Ensure that the local user you create is locked down as much as possible to reduce the chance of unauthorized users gaining access to the user and its credentials.
This is how we solved this problem. I'm writing this answer in case it is valuable for other people looking for an answer.
Add a role "some-role" to an instance with id "i-xxxxxx":
$ aws iam create-instance-profile --instance-profile-name some-profile-name
$ aws iam add-role-to-instance-profile --instance-profile-name some-profile-name --role-name some-role
$ aws ec2 associate-iam-instance-profile --iam-instance-profile Name=some-profile-name --instance-id i-xxxxxx
Attach "sts:AssumeRole"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com"
        ]
      },
      "Action": [
        "sts:AssumeRole"
      ]
    }
  ]
}
$ aws iam update-assume-role-policy --role-name some-role --policy-document file://policy.json
Define a profile on the instance
Add "some-operator-profile" to use the EC2 instance role.
~/.aws/config
[profile some-operator-profile]
credential_source = Ec2InstanceMetadata
Do what you want with the EC2-provided role
$ aws --profile some-operator-profile s3 ls

Identify AWS IAM user that assumed an IAM role

I'm working on a system that receives new findings from Amazon GuardDuty. Most access in our organization is delegated to IAM roles instead of directly to users, so the findings usually result from the actions of assumed roles, and the actor identity of the GuardDuty finding looks something like this:
"resource": {
"accessKeyDetails": {
"accessKeyId": "ASIALXUWSRBXSAQZECAY",
"principalId": "AROACDRML13PHK3X7J1UL:129545928468",
"userName": "my-permitted-role",
"userType": "AssumedRole"
},
"resourceType": "AccessKey"
},
I know that the accessKeyId is created when a security principal performs the sts:AssumeRole action. But I can't tell who assumed the role in the first place! If it was an IAM user, I want to know the username. Is there a way to programmatically map temporary AWS STS keys (starting with ASIA...) back to the original user?
Ideally I'm looking for a method that runs in less than 30 seconds so I can use it as part of my security event pipeline to enrich GuardDuty findings with the missing information.
I've already looked at aws-cli and found aws cloudtrail lookup-events but it lacks the ability to narrow the query to a specific accessKeyId so it takes a loooong time to run. I've explored the CloudTrail console but it's only about as capable as aws-cli here. I tried saving my CloudTrail logs to S3 and running an Athena query, but that was pretty slow too.
This seems like it would be a common requirement. Is there something obvious that I'm missing?
Actually, aws-cli can perform a lookup on the session! Just make sure to specify ResourceName as the attribute key in the lookup attributes.
$ aws cloudtrail lookup-events \
--lookup-attributes 'AttributeKey=ResourceName,AttributeValue=ASIALXUWSRBXSAQZECAY' \
--query 'Events[*].Username'
[
    "the.user@example.com"
]
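If you need more than the username, the full CloudTrail event is available as a JSON string in each event's CloudTrailEvent field; a sketch that extracts the userIdentity block, assuming jq is installed:
$ aws cloudtrail lookup-events \
--lookup-attributes 'AttributeKey=ResourceName,AttributeValue=ASIALXUWSRBXSAQZECAY' \
--query 'Events[0].CloudTrailEvent' --output text | jq '.userIdentity'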

Can I get a list of all resources for which a user has been added to a role?

I'm wondering if there is a way to get a list of all roles to which a user has been added, regardless of which resource the role is applied to?
e.g. I can get a list of all members of roles/storage.admin on a bucket and I can get a list of all members of the same role but on a project:
gsutil iam get $BUCKET | jq '.bindings[] | select(.role == "roles/storage.admin")'
gcloud projects get-iam-policy $PROJECT --format=json | jq '.bindings[] | select(.role == "roles/storage.admin")'
But it seems there is no single command to tell you which roles a user has been added to and which resource the role is applied to. Does anyone know a way of doing this?
Roles are not assigned directly to users. This is why there is no single command that you can use.
IAM members (users, service accounts, groups, etc.) are added to resources with roles attached. A user can have permissions to a project and also have permissions at an individual resource (Compute Engine Instance A, Storage Bucket A/Object B). A user can also have no permissions to a project but have permissions at individual resources in the project.
You will need to run a command against resources (Org, Folder, Project and items like Compute, Storage, KMS, etc).
To further complicate this, there are granted roles and also inherited roles.
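That said, one option that comes close to a single command, assuming the Cloud Asset Inventory API is enabled, is gcloud's IAM policy search, which reports bindings across resources within a scope. A sketch (the member address is a placeholder, and the scope can also be a folder or organization):
gcloud asset search-all-iam-policies \
    --scope=projects/$PROJECT \
    --query='policy:"user@example.com"'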

Can I use the gcloud command to adjust permissions for a service account and enable write access to a storage bucket inside firebase functions?

I have a firebase function which I want to permit write access to cloud storage. I believe I need to set up a service account with those permissions and then grant them programmatically inside my function, but I'm confused about how to do this.
The firebase function writes a file to a bucket on a trigger. The storage settings for the firebase storage are set to the default, which means they require the client to be authenticated:
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
In this document (https://cloud.google.com/functions/docs/concepts/iam), under "Runtime service account", I see this:
At runtime, Cloud Functions uses the service account PROJECT_ID@appspot.gserviceaccount.com, which has the Editor role on the project. You can change the roles of this service account to limit or extend the permissions for your running functions.
When it says "runtime," I'm assuming this means the firebase function runs within the context of that service account and the permissions granted to it. As such, I'm assuming I need to make sure the permissions of that service account include write access, as I see from this link (https://console.cloud.google.com/iam-admin/roles?authuser=0&consoleUI=FIREBASE&project=blahblah-2312312).
I see the permission named storage.objects.create and would assume I need to add this to the service account.
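For what it's worth, a minimal sketch of granting just that capability at the project level; this assumes roles/storage.objectCreator (which contains storage.objects.create) is sufficient, and reuses the project ID from the question:
gcloud projects add-iam-policy-binding blahblah-2312312 \
    --member='serviceAccount:blahblah-2312312@appspot.gserviceaccount.com' \
    --role='roles/storage.objectCreator'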
To investigate the service account current settings, I ran these commands:
$ gcloud iam service-accounts describe blahblah-2312312@appspot.gserviceaccount.com
displayName: App Engine default service account
email: blahblah-2312312@appspot.gserviceaccount.com
etag: BwVwvSpcGy0=
name: projects/blahblah-2312312/serviceAccounts/blahblah-2312312@appspot.gserviceaccount.com
oauth2ClientId: '98989898989898'
projectId: blahblah-2312312
uniqueId: '12312312312312'
$ gcloud iam service-accounts get-iam-policy blahblah-2312312@appspot.gserviceaccount.com
etag: ACAB
I'm not sure if there is a way to get more details from this, and unsure what etag ACAB indicates.
After reviewing this document (https://cloud.google.com/iam/docs/granting-roles-to-service-accounts), I believe that I need to grant the permissions. But I'm not entirely sure how to go from the JSON example, what structure it should have, and how to then associate the policy, or whether that is even the correct path.
{
  "bindings": [
    {
      "role": "roles/iam.serviceAccountUser",
      "members": [
        "user:alice@gmail.com"
      ]
    },
    {
      "role": "roles/owner",
      "members": [
        "user:bob@gmail.com"
      ]
    }
  ],
  "etag": "BwUqLaVeua8="
}
For example, my questions would be:
Do I need to make up my own etag?
What email address do I use inside the members array?
I see this command listed as an example
gcloud iam service-accounts add-iam-policy-binding \
my-sa-123@my-project-123.iam.gserviceaccount.com \
--member='user:jane@gmail.com' --role='roles/editor'
What I don't understand is why I have to specify two quasi-email addresses. One is the service account, and one is the user associated with the service account. Does this mean that user jane@gmail.com can operate under the credentials of the service account? Can I just have the service account on its own have permissions which I use in my cloud function?
Is there a simpler way to do this using only the command line, without manually editing JSON?
And then, once I have my credentials properly established, do I need to use a JSON service account file, as many examples show:
var admin = require('firebase-admin');
var serviceAccount = require('path/to/serviceAccountKey.json');

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: 'https://<DATABASE_NAME>.firebaseio.com'
});
Or, can I just make a call to admin.initializeApp() and, since "... at runtime, Cloud Functions uses the service account PROJECT_ID@appspot.gserviceaccount.com...", the function will automatically get those permissions?
The issue was (as documented here: How to write to a cloud storage bucket with a firebase cloud function triggered from firestore?) that I had incorrectly specified the first parameter to the bucket as a subdirectory inside the bucket rather than just the bucket itself. This meant storage thought I was trying to access a bucket which did not exist, and I got the permissions error.