I have a bucket with an object inside it.
gsutil acl ch -u AllUsers:R gs://object-path
I get the following error
Please ensure you have OWNER-role access to this resource.
I am shown as the owner of the object when I click the three dots to check, and also,
When I run
gsutil acl get gs://object-path
I get
{"email":"ci-tool-access#project-name.iam.gserviceaccount.com",
"entity":"ci-tool-access#project-name.iam.gserviceaccount.com",
"role":"OWNER"
}
To repeat myself: my IAM role for the entire project, not just Cloud Storage resources, is set to Owner. Running
gsutil acl get gs://bucket-path
gets me this
[
  {
    "entity": "project-owners.....",
    "projectTeam": {
      "projectNumber": "...",
      "team": "owners"
    },
    "role": "OWNER"
  }
]
Running gcloud projects get-iam-policy <PROJECT_ID> outputs:
- members:
  - serviceAccount:ci-tool-access@project-name.iam.gserviceaccount.com
  role: roles/owner
and gcloud auth list shows:
ACTIVE  ACCOUNT
*       ci-tool-access@project-name.iam.gserviceaccount.com
This is doing my head in. Any ideas what might be happening that's preventing me from changing the ACL on the object? Uniform bucket-level access is set to false, so I should have granular access to each object.
I have just moved to a multi-account setup using Control Tower and am having a nightmare using Terraform to deploy resources in different accounts.
My (simplified) account structure is:
|--Master
|--management (backends etc)
|--images (s3, ecr)
|--dev
|--test
As a simplified experiment, I am trying to create an ECR repository in the images account. So I think I need to create a policy to enable role switching and provide permissions within the target account. For now I am being heavy-handed and just trying to switch to admin access. The AWSAdministratorAccess role is created by Control Tower on configuration.
provider "aws" {
region = "us-west-2"
version = "~> 3.1"
}
data "aws_iam_group" "admins" { // want to attach policy to admins to switch role
group_name = "administrators"
}
// Images account
resource "aws_iam_policy" "images-admin" {
name = "Assume-Role-Images_Admin"
description = "Allow assuming AWSAdministratorAccess role on Images account"
policy = <<EOP
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": "arn:aws:iam::<Images_Account_ID>:role/AWSAdministratorAccess"
}
]
}
EOP
}
resource "aws_iam_group_policy_attachment" "assume-role-images-admin" {
group = data.aws_iam_group.admins.group_name
policy_arn = aws_iam_policy.images-admin.arn
}
Having deployed this stack I then attempt to deploy another stack which creates a resource in the images account.
provider "aws" {
region = var.region
version = "~>3.1"
}
provider "aws" {
alias = "images"
region = var.region
version = "~> 3.1"
assume_role {
role_arn = "arn:aws:iam::<Images_Account_ID>:role/AWSAdministratorAccess"
}
}
resource "aws_ecr_repository" "boot-images" {
provider = aws.images
name = "boot-images"
}
On deployment I got:
> Error: error configuring Terraform AWS Provider: IAM Role (arn:aws:iam::*********:role/AWSAdministratorAccess) cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
First one: the creds provided are from the master account, which has always worked in a single-account environment.
Second: that's what I think has been achieved by attaching the policy.
Third: I'm less sure on this, but AWSAdministratorAccess definitely exists in the account, I think the ARN format is correct, and while AWS Single Sign-On refers to it as a Permission Set, the console also describes it as a role.
I found "Deploying to multiple AWS accounts with Terraform?", which was helpful, but I am missing something here.
I am also at a loss as to how to extend this idea to deploying an S3 remote backend into my "management" account.
Terraform version 0.12.29
Turns out there were a couple of issues here:
Profile
The credentials profile was incorrect. Setting the correct credentials in environment variables let me run a simple test where just using the credentials file had failed. There is still an issue here I don't understand, as updating the credentials file also failed, but I have a system that works.
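For anyone trying the same thing, the environment-variable route looks roughly like this (the key values are placeholders, not the ones used here):
# Placeholder credentials; env vars take precedence over ~/.aws/credentials
export AWS_ACCESS_KEY_ID="AKIA................"
export AWS_SECRET_ACCESS_KEY="...."
export AWS_DEFAULT_REGION="us-west-2"
# Confirm which identity the CLI/Terraform will actually use
aws sts get-caller-identity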
AWS created roles
While my assumption was correct that the Permission Sets are defined as roles, they have a trust relationship which was not extended to my master admin user (my bad), AND it cannot be amended because it was created automatically by AWS and is locked down.
Manually grant permissions
So while I can programmatically grant a group permission to assume a role via Terraform, I need to manually create a role in the target account which extends trust, and hence permissions, to the master account.
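For anyone landing here, a rough sketch of what that manual step can look like with the AWS CLI, run against the Images (target) account. The role name, trust file name and <Master_Account_ID> placeholder are mine, not something Control Tower creates:
# trust.json - extend trust to the Master account (placeholder account ID)
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<Master_Account_ID>:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role in the target account and give it admin rights
aws iam create-role --role-name TerraformCrossAccountAdmin --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name TerraformCrossAccountAdmin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
The assume_role block in the second stack then points at this role instead of the Control Tower-managed AWSAdministratorAccess.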
In my own experience, and considering you already have a working AWS infrastructure, I'd rule out Control Tower and look into doing the same things with CloudFormation StackSets. They let you target OUs or individual accounts.
Control Tower has been recommended to me several times, but with an AWS ecosystem of more than 25 accounts running production workloads, I am very reluctant to even try it. It's great when starting from scratch, I guess, but not when you already have a decent number of workloads and accounts in AWS.
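For anyone weighing that option, deploying e.g. the ECR repo through a stack set targeted at an OU looks roughly like this (the stack set name, template file and OU ID are all placeholders):
# Create a service-managed stack set from a template (placeholders throughout)
aws cloudformation create-stack-set \
  --stack-set-name boot-images-ecr \
  --template-body file://ecr.yaml \
  --permission-model SERVICE_MANAGED \
  --auto-deployment Enabled=true,RetainStacksOnAccountRemoval=false

# Roll stack instances out to every account under a target OU
aws cloudformation create-stack-instances \
  --stack-set-name boot-images-ecr \
  --deployment-targets OrganizationalUnitIds=ou-xxxx-xxxxxxxx \
  --regions us-west-2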
When I run an aws command like aws s3 ls, it uses the default profile. Can I create a new profile that uses the role attached to the EC2 instance?
If so, how should I write the credentials/config files?
From Credentials — Boto 3 Docs documentation:
The mechanism in which boto3 looks for credentials is to search through a list of possible locations and stop as soon as it finds credentials. The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
Since the Shared Credential File is consulted before the Instance Metadata service, it is not possible to use an assigned IAM Role if a credentials file is provided.
One idea to try: You could create another user on the EC2 instance that does not have a credentials file in their ~/.aws/ directory. In this case, later methods will be used. I haven't tried it, but using sudo su might be sufficient to change to this other user and use the IAM Role.
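Untested sketch of that idea (the user name is made up; the point is just that the new user has no ~/.aws/credentials, so the chain falls through to the instance metadata service):
# Create a user with no credentials file of its own
sudo useradd -m role-only-user
# Run a call as that user; the region is set explicitly since the new user has no config either
sudo -i -u role-only-user env AWS_DEFAULT_REGION=us-east-1 aws sts get-caller-identity
# The reported Arn should be the assumed-role ARN of the instance's IAM role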
Unfortunately, if you have a credentials file, use environment variables, or specify the IAM key/secret via the SDK, these will always take higher precedence than the role itself.
If the credentials are required infrequently, you could create another role that the EC2's IAM role can assume (using sts:AssumeRole; see the sketch below) whenever it needs to perform these interactions. You would then remove the credentials file from disk.
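Roughly, that looks like the following from the instance (the role ARN and session name are placeholders):
# Exchange the instance role's credentials for temporary credentials of the second role
aws sts assume-role \
  --role-arn arn:aws:iam::<ACCOUNT_ID>:role/extra-permissions-role \
  --role-session-name ec2-temp-session
# The response contains AccessKeyId/SecretAccessKey/SessionToken to export for the follow-up calls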
If you must have a credentials file on the disk, the suggestion would be to create another user on the server exclusively for using these credentials. As a credentials file is only used by default for that user, all other users will not use this file (unless it is explicitly referenced within the SDK/CLI interaction as an argument).
Ensure that the local user you create is locked down as much as possible to reduce the chance of unauthorized users gaining access to the user and its credentials.
This is how we solved this problem. I'm writing this answer in case it is valuable for other people looking for an answer.
Add a role "some-role" to an instance with ID "i-xxxxxx":
$ aws iam create-instance-profile --instance-profile-name some-profile-name
$ aws iam add-role-to-instance-profile --instance-profile-name some-profile-name --role-name some-role
$ aws ec2 associate-iam-instance-profile --iam-instance-profile Name=some-profile-name --instance-id i-xxxxxx
Attach "sts:AssumeRole"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com"
        ]
      },
      "Action": [
        "sts:AssumeRole"
      ]
    }
  ]
}
$ aws iam update-assume-role-policy --role-name some-role --policy-document file://policy.json
Define a profile on the instance
Add "some-operator-profile" to use the EC2 instance role.
~/.aws/config
[profile some-operator-profile]
credential_source = Ec2InstanceMetadata
Do what you want with the EC2-provided role:
$ aws --profile some-operator-profile s3 ls
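To double-check which identity that profile resolves to, something like this should print the assumed-role ARN of the instance role:
$ aws --profile some-operator-profile sts get-caller-identity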
I have created a service account using the command
gcloud iam service-accounts create test-sa --display-name "TEST SA"
And then I go ahead and give this service account admin privileges on a GCS bucket.
gsutil iam ch serviceAccount:test-sa@<PROJECT>.iam.gserviceaccount.com:admin gs://<BUCKET>
Now I want a method to check what roles/permissions are granted to a service account.
One way is to do something like:
gcloud projects get-iam-policy <PROJECT> \
  --flatten="bindings[].members" \
  --format='table(bindings.role)' \
  --filter="bindings.members: serviceAccount:test-sa@<PROJECT>.iam.gserviceaccount.com"
But the above command returns empty.
But if I get the ACL for the bucket, I can clearly see the members and the roles for the bucket.
gsutil iam get gs://<BUCKET>
{
  "bindings": [
    {
      "members": [
        "serviceAccount:test-sa@<PROJECT>.iam.gserviceaccount.com"
      ],
      "role": "roles/storage.admin"
    },
    {
      "members": [
        "projectEditor:<PROJECT>",
        "projectOwner:<PROJECT>"
      ],
      "role": "roles/storage.legacyBucketOwner"
    },
    {
      "members": [
        "projectViewer:<PROJECT>"
      ],
      "role": "roles/storage.legacyBucketReader"
    }
  ],
  "etag": "CAI="
}
Can someone guide me as to how I can view the buckets/permissions associated with a service account, and not the other way around?
The issue here is that you are mixing project-level roles with bucket-level roles by assigning the permissions directly to the bucket (a bucket-level role) and then checking at the project level. You can find more information about this over here.
This is why you get different results when checking either the project (gcloud projects get-iam-policy) or the bucket (gsutil iam get gs://).
You should stick to using either bucket-level roles or project-level roles and avoid mixing the two; if you start mixing them, it gets tricky to know what roles each user has and where.
Depending on the number of buckets you plan to manage, it may be easier for you to stick to bucket-level roles and just iterate over a list of buckets when checking the permissions of a user, as you can do this very easily with the Cloud SDK in a little for loop such as:
for i in $(cat bucket-list.txt)
do
  gsutil iam get gs://$i
done
Hope you find this useful.
As you are granting the permission at the bucket level and not using a project-level IAM binding for the service account, the gcloud projects get-iam-policy command won't return this permission.
You can only get it by querying the bucket.
You can assign permissions at the Project/Folder/Organization level and on individual resources such as buckets, objects, Compute Engine instances, KMS keys, etc. There is no single command that checks everything.
Permissions at the project level are project-wide. Permissions at the resource level, such as on an object, only affect that object. You will need to check everything to know exactly what and where an IAM member has permissions.
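Putting that together, a small sketch of checking both levels for the same member; this just combines the two commands already shown in the question (<PROJECT> and <BUCKET> are placeholders):
# Project-level bindings for the service account
gcloud projects get-iam-policy <PROJECT> \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:serviceAccount:test-sa@<PROJECT>.iam.gserviceaccount.com"

# Bucket-level bindings (repeat per bucket, or loop over a bucket list)
gsutil iam get gs://<BUCKET>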
I have a firebase function which I want to permit write access to cloud storage. I believe I need to set up a service account with those permissions and then grant them programmatically inside my function, but I'm confused about how to do this.
The firebase function writes a file to a bucket on a trigger. The storage settings for the firebase storage are set to the default, which means they require the client to be authenticated:
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
In this document (https://cloud.google.com/functions/docs/concepts/iam), under "Runtime service account", I see this:
At runtime, Cloud Functions uses the service account
PROJECT_ID#appspot.gserviceaccount.com, which has the Editor role on
the project. You can change the roles of this service account to limit
or extend the permissions for your running functions.
When it says "runtime," I'm assuming this means the firebase function runs within a context of that service account and the permissions granted to it. As such, I'm assuming I need to make sure the permissions of that service account have write access, as I see from this link (https://console.cloud.google.com/iam-admin/roles?authuser=0&consoleUI=FIREBASE&project=blahblah-2312312).
I see the permission named storage.objects.create and would assume I need to add this to the service account.
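If that's the right route, I'm guessing the grant would look something like this for a single bucket (the bucket name is a placeholder, and roles/storage.objectCreator is just one role that carries storage.objects.create):
# Grant the Cloud Functions runtime service account object-create rights on one bucket
gsutil iam ch serviceAccount:blahblah-2312312@appspot.gserviceaccount.com:roles/storage.objectCreator gs://<BUCKET>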
To investigate the service account current settings, I ran these commands:
$ gcloud iam service-accounts describe blahblah-2312312@appspot.gserviceaccount.com
displayName: App Engine default service account
email: blahblah-2312312@appspot.gserviceaccount.com
etag: BwVwvSpcGy0=
name: projects/blahblah-2312312/serviceAccounts/blahblah-2312312@appspot.gserviceaccount.com
oauth2ClientId: '98989898989898'
projectId: blahblah-2312312
uniqueId: '12312312312312'
$ gcloud iam service-accounts get-iam-policy blahblah-2312312@appspot.gserviceaccount.com
etag: ACAB
I'm not sure if there is a way to get more details from this, and unsure what etag ACAB indicates.
After reviewing this document (https://cloud.google.com/iam/docs/granting-roles-to-service-accounts), I believe I need to grant the permissions, but I'm not entirely sure how to go from the JSON example to the structure it should have and then associate the policy, or whether that is even the correct path.
{
  "bindings": [
    {
      "role": "roles/iam.serviceAccountUser",
      "members": [
        "user:alice@gmail.com"
      ]
    },
    {
      "role": "roles/owner",
      "members": [
        "user:bob@gmail.com"
      ]
    }
  ],
  "etag": "BwUqLaVeua8="
}
For example, my questions would be:
Do I need to make up my own etag?
What email address do I use inside the members array?
I see this command listed as an example
gcloud iam service-accounts add-iam-policy-binding \
  my-sa-123@my-project-123.iam.gserviceaccount.com \
  --member='user:jane@gmail.com' --role='roles/editor'
What I don't understand is why I have to specify two quasi-email addresses. One is the service account, and one is the user associated with the service account. Does this mean that user jane@gmail.com can operate under the credentials of the service account? Can I just have the service account on its own have permissions which I use in my cloud function?
Is there a simpler way to do this using only the command line, without manually editing JSON?
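For context, my current understanding of the two directions those commands go in, sketched with the placeholder names from the example above (I may well have this wrong):
# Grant a role ON the service account itself (who may use/administer this service account)
gcloud iam service-accounts add-iam-policy-binding \
  my-sa-123@my-project-123.iam.gserviceaccount.com \
  --member='user:jane@gmail.com' --role='roles/iam.serviceAccountUser'

# Grant a role TO the service account on the project (what the service account itself can do)
gcloud projects add-iam-policy-binding my-project-123 \
  --member='serviceAccount:my-sa-123@my-project-123.iam.gserviceaccount.com' \
  --role='roles/storage.objectCreator'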
And then, once I have my credentials properly established, do I need to use a JSON service account file as many examples show:
var admin = require('firebase-admin');
var serviceAccount = require('path/to/serviceAccountKey.json');

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: 'https://<DATABASE_NAME>.firebaseio.com'
});
Or, can I just make a call to admin.initializeApp() and, since "... at runtime, Cloud Functions uses the service account PROJECT_ID@appspot.gserviceaccount.com ...", the function will automatically get those permissions?
The issue was (as documented here: How to write to a cloud storage bucket with a firebase cloud function triggered from firestore?) that I had incorrectly passed a subdirectory inside the bucket as the bucket name, rather than just the bucket itself. This meant Storage thought I was trying to access a bucket which did not exist, and I got the permissions error.
I was setting the permission to AllUsers for uploading the files. I used:
gsutil acl ch -u AllUsers:R gs://[mywebsite.com]
gsutil defacl set public-read gs://[mywebsite.com]
but I found the directory was wrong.
So I want to disable the permission on the current directory.
First, I checked the IAM policy for my setting by running
gsutil iam get gs://[mywebsite.com]
and part of the results shows:
{
  "members": [
    "allUsers",
    "projectViewer:[myprojectID]"
  ],
  "role": "roles/storage.legacyBucketReader"
}
I guess this means the permission is granted to all users, so I have to disable it.
Then I tried removing AllUsers from this directory with:
gsutil iam ch -d All gs://[mywebsite.com]
However, it doesn't work. It shows:
CommandException: Incorrect public member type for binding AllUsers:R
Is there any solution to this?
If I delete the bucket, will the permission also be disabled?
(Update) Does "public-read" mean users can read the directory that you set, or is it just the permission to upload files to Storage? (This is what I really want to know.)
Looking at the syntax mentioned in gsutil help iam ch, it says to specify allUsers, not AllUsers. This works when you specify the former, but throws an error for the latter.
Case sensitivity for removing users/roles was fixed for acl ch -d in this GitHub commit, but it looks like it was never fixed for iam ch -d. I've opened a GitHub issue for this.
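So the removal from the question should work once the member is spelled with the lowercase form, e.g.:
gsutil iam ch -d allUsers gs://[mywebsite.com]
# Optionally revert the default object ACL set earlier with "defacl set public-read"
# (assumption: the private canned ACL is the desired default)
gsutil defacl set private gs://[mywebsite.com]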