I have user-owned objects in a Google Cloud Storage bucket, and I control access to them through a webapp backend. Currently, the backend authenticates the user and then generates signed read URLs for the objects. This works great, but it can result in a high volume of URLs being generated in response to a bulk action. The failure rate of these signed URLs is very low, but when enough of them are generated, some fail, and a timeout or connection reset is noticeable to users.
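For context, this is roughly what the backend does today; a minimal sketch using the google-cloud-storage Python client (the bucket name, object paths, and expiry are hypothetical):

```python
import datetime
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("user-objects-bucket")  # hypothetical bucket name

def signed_read_urls(object_names, expires_minutes=15):
    """Generate one V4 signed read URL per object.

    A bulk action may call this with hundreds of names, which is
    where the occasional failures become noticeable.
    """
    urls = {}
    for name in object_names:
        blob = bucket.blob(name)
        urls[name] = blob.generate_signed_url(
            version="v4",
            expiration=datetime.timedelta(minutes=expires_minutes),
            method="GET",
        )
    return urls
```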
Is there any way to give this kind of controlled, time limited access to users at the bucket level, or in bulk in another way, without creating GCP accounts for users?
You are correct: all of these methods require a Google account. After further investigation, there is no way to provide this kind of access without one.
At the bucket level, the options are uniform bucket-level access, Identity and Access Management (IAM), and Access Control Lists (ACLs). If you want to avoid creating GCP accounts for your users, try Access Control Lists (ACLs).
With ACLs you can determine who the readers, writers, and owners will be, and you can grant access to anyone with an external email address, which saves you the time of creating GCP accounts for your users. Here is what each scope covers:
Google account email address:
Every user who has a Google account must have a unique email address associated with that account. You can specify a scope by using any email address that is associated with a Google account, such as a gmail.com address.
Cloud Storage remembers email addresses as they are provided in ACLs until the entries are removed or replaced. If a user changes email addresses, you should update ACL entries to reflect these changes.
Google group email address:
Every Google group has a unique email address that is associated with the group. For example, the Cloud Storage Announce group has the email address gs-announce@googlegroups.com. You can find the email address associated with a Google group by clicking About on the group's homepage.
As with Google account email addresses, Cloud Storage remembers group email addresses as they are provided in ACLs until the entries are removed. You do not need to worry about updating group email addresses, because they are persistent and unlikely to change.
Convenience values for projects:
Convenience values allow you to grant bulk access to your project's viewers, editors, and owners. Convenience values combine a project role and an associated project number. For example, in project 867489160491, editors are identified as editors-867489160491. You can find your project number on the homepage of the Google Cloud Console.
You should generally avoid using convenience values in production environments, because they require granting basic roles, a practice which is discouraged in production environments.
G Suite or Cloud Identity:
G Suite and Cloud Identity customers can associate their email accounts with an Internet domain name. When you do this, each email account takes the form USERNAME@YOUR_DOMAIN.com. You can specify a scope by using any Internet domain name that is associated with G Suite or Cloud Identity.
Special identifier for all Google account holders:
This special scope identifier represents anyone who is authenticated with a Google account. The special scope identifier for all Google account holders is allAuthenticatedUsers. Note that while this identifier is a User entity type, when using the Cloud Console it's labeled as a Public entity type.
Special identifier for all users:
This special scope identifier represents anyone who is on the Internet, with or without a Google account. The special scope identifier for all users is allUsers. Note that while this identifier is a User entity type, when using the Cloud Console it's labeled as a Public entity type.
You have full control over the access you grant. You can learn more about ACLs and what each scope grants in the Cloud Storage access control documentation.
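For example, granting read access to an individual Google account via an object ACL might look like this sketch with the Python client library (the bucket, object, and email address are hypothetical):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")      # hypothetical bucket
blob = bucket.blob("path/to/object.png")      # hypothetical object

# Grant READ to a specific Google account email address via the object's ACL.
acl = blob.acl
acl.user("someone@gmail.com").grant_read()
acl.save()
```

Note that ACLs only apply when uniform bucket-level access is disabled on the bucket.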
Related
IAP allows you to protect apps on App Engine by defining which principals have access using roles/iap.httpsResourceAccessor. If I have a group in IAM called participants and I add external people (personal Gmail accounts & contractors) to that group, will these people have access to my application?
Or do I have to submit the application for verification even though I want to limit the access to our employees and a few dozen customers taking part in a workshop?
So in other words, does IAP define "people in your organisation" as only people who have an @myorg.com email address, or as people who are part of a group that has IAM permission?
"People in your organization" are users who have been granted permission in your Google Cloud Project or Organization. This includes #gmail.com accounts, which can have roles granted and can be added to groups.
For an internal application you do not need to verify the app, but you will need an internal OAuth consent screen.
You can find more information in this documentation.
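As an illustration of granting that role, here is a sketch using the Resource Manager API via the google-api-python-client library (the project ID and member email are hypothetical, and this read-modify-write does not handle concurrent policy changes):

```python
from googleapiclient import discovery

crm = discovery.build("cloudresourcemanager", "v1")
project = "my-iap-project"  # hypothetical project ID

# Read-modify-write the project IAM policy.
policy = crm.projects().getIamPolicy(resource=project, body={}).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/iap.httpsResourceAccessor",
    "members": ["user:participant@gmail.com"],  # hypothetical external user
})
crm.projects().setIamPolicy(
    resource=project, body={"policy": policy}
).execute()
```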
I am trying to understand access security as it relates to Amazon S3. I want to host some files in an S3 bucket, using CloudFront to access it via my domain. I need to limit access to certain companies/individuals. In addition I need to manage that access individually.
A second access model is project based, where I need to make a library of files available to a particular project team, and I need to be able to add and remove team members in an ad hoc manner, and then close access for the whole project at some point. The bucket in question might be the same for both scenarios.
I assume something like this is possible in AWS, but all I can find (and understand) on the AWS site involves using IAM to control access via the AWS console. I don't see any indication that I could create an IAM user, add them to an IAM group, give the group read-only access to the bucket, and then provide the name and password via System.Net.WebClient in PowerShell to actually download the available file. Am I missing something, and this IS possible? Or am I not correct in my assumption that this can be done with AWS?
I did find Amazon CloudFront vs. S3 --> restrict access by domain? - Stack Overflow, which talks about using CloudFront to limit access by domain, but that won't work in a work-from-home (WfH) scenario: those home machines won't be on the corporate domain, yet the corporate BIM Manager needs to manage access to content libraries for the WfH staff. I REALLY hope I am not running into an example of AWS just not being ready for the current reality.
Content stored in Amazon S3 is private by default. There are several ways that access can be granted:
Use a bucket policy to make the entire bucket (or a directory within it) publicly accessible to everyone. This is good for websites where anyone can read the content (see the sketch after this list).
Assign permission to IAM Users to grant access only to users or applications that need access to the bucket. This is typically used within your organization. Never create an IAM User for somebody outside your organization.
Create presigned URLs to grant temporary access to private objects. This is typically used by applications to grant web-based access to content stored in Amazon S3.
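As a sketch of the bucket-policy option, here is how a public-read policy could be applied with boto3 (the bucket name is hypothetical):

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow anyone on the Internet to read objects in the bucket
# (public website-style access).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicRead",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```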
To provide an example for pre-signed URLs, imagine that you have a photo-sharing website. Photos provided by users are private. The flow would be:
A user logs in. The application confirms their identity against a database or an authentication service (eg Login with Google).
When the user wants to view a photo, the application first checks whether they are entitled to view the photo (eg it is their photo). If they are entitled to view the photo, the application generates a pre-signed URL and returns it as a link, or embeds the link in an HTML page (eg in a <img> tag).
When the user accesses the link, the browser sends the URL request to Amazon S3, which verifies the encrypted signature in the signed URL. If it is correct and the link has not yet expired, the photo is returned and is displayed in the web browser.
Users can also share photos with other users. When another user accesses a photo, the application checks the database to confirm that it was shared with the user. If so, it provides a pre-signed URL to access the photo.
This architecture has the application perform all of the logic around Access Permissions. It is very flexible since you can write whatever rules you want, and then the user is sent to Amazon S3 to obtain the file. Think of it like buying theater tickets online -- you just show the ticket at the door and you are allowed to sit in the seat. That's what Amazon S3 is doing -- it is checking the ticket (signed URL) and then giving you access to the file.
See: Amazon S3 pre-signed URLs
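A minimal sketch of the URL-generation step in that flow, with boto3 (the bucket, key, and expiry are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# The URL is valid for 10 minutes; after that, S3 rejects the request.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "photo-share-bucket", "Key": "users/123/photo.jpg"},
    ExpiresIn=600,
)
print(url)  # embed this in an <img> tag or return it as a link
```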
Mobile apps
Another common architecture is to generate temporary credentials using the AWS Security Token Service (STS). This is typically done with mobile apps. The flow is:
A user logs into a mobile app. The app sends the login details to a back-end application, which verifies the user's identity.
The back-end app then uses AWS STS to generate temporary credentials and assigns permissions to the credentials, such as being permitted to access a certain directory within an Amazon S3 bucket. (The permissions can actually be for anything in AWS, such as launching computers or creating databases.)
The back-end app sends these temporary credentials back to the mobile app.
The mobile app then uses those credentials to make calls directly to Amazon S3 to access files.
Amazon S3 checks the credentials being used and, if they have permission for the files being requested, grants access. This can be done for uploads, downloads, listing files, etc.
This architecture takes advantage of the fact that mobile apps are quite powerful and they can communicate directly with AWS services such as Amazon S3. The permissions granted are based upon the user who logs in. These permissions are determined by the back-end application, which you would code. Think of it like a temporary employee who has been granted a building access pass for the day, but they can only access certain areas.
See: IAM Role Archives - Jayendra's Blog
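A sketch of the back-end step with boto3 and STS (the policy, names, and paths are hypothetical; get_federation_token is one way to mint scoped temporary credentials, assuming the back end runs as an IAM user):

```python
import json
import boto3

sts = boto3.client("sts")

# Temporary credentials limited to one user's directory in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/users/123/*",
    }],
}
response = sts.get_federation_token(
    Name="app-user-123",          # hypothetical session name
    Policy=json.dumps(policy),
    DurationSeconds=3600,         # 1 hour
)
creds = response["Credentials"]   # AccessKeyId, SecretAccessKey, SessionToken
# Send creds back to the mobile app, which uses them to call S3 directly.
```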
The above architectures are building blocks for how you wish to develop your applications. Every application is different, just like the two use-cases in your question. You can securely incorporate Amazon S3 in your applications while maintaining full control of how access is granted. Your applications can then concentrate on the business logic of controlling access, without having to actually serve the content (which is left up to Amazon S3). It's like selling the tickets without having to run the theater.
You ask whether Amazon S3 is "ready for the current reality". Many of the popular web sites you use every day run on AWS, and you probably never realize it.
If you are willing to issue IAM User credentials (max 5000 per account), the steps would be:
Create an IAM User for each user and select Programmatic access
This will provide an Access Key and Secret Key that you can provide to each user
Attach permissions to each IAM User, or put the users in an IAM Group and attach permissions to the IAM Group
Each user can run aws configure on their computer (using the AWS Command-Line Interface (CLI)) to store their Access Key and Secret Key
They can then use the AWS CLI to upload/download files
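Since the question mentions downloading programmatically, here is a sketch of how a user could use their issued Access Key and Secret Key outside the AWS CLI, with boto3 (the bucket, key, and credentials are hypothetical):

```python
import boto3

# Credentials issued for this IAM User; normally stored via `aws configure`
# rather than hard-coded.
session = boto3.Session(
    aws_access_key_id="AKIA...",       # hypothetical Access Key
    aws_secret_access_key="...",       # hypothetical Secret Key
)
s3 = session.client("s3")

# Download a file that the attached policy permits this user to read.
s3.download_file("example-bucket", "library/file.rvt", "file.rvt")
```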
If you want the users to be able to access via the Amazon S3 management console, you will need to provide some additional permissions: Grant a User Amazon S3 Console Access to Only a Certain Bucket
Alternatively, users could use a program like Cyberduck for an easy drag-and-drop interface to Amazon S3. Cyberduck will also ask for the Access Key and Secret Key.
Disclaimer: https://console.cloud.google.com/support/community leads here. Google's documentation is horrific, so I'm giving this a whirl on the off chance that I don't get downvoted to the depths of /dev/null.
Out of impending necessity I am migrating a private application that monitors our Gmail accts to OAuth 2, and as part of this process it was necessary to create an OAuth consent screen. Since this application will only be used internally it makes the most sense to choose "Internal" for Application Type - which is described as follows:
Only users with a Google Account in your organization can grant access to the scopes requested by this app.
The users on this Project consist of two "owners": myself, using my personal Gmail acct, and another employee who is part of the company G Suite account.
My question is: who qualifies as a "user in my organization"? Is this based on the project owners? Does my non-G-Suite account (which is an owner of the project) qualify? Does the inclusion of one member of a G Suite account automatically associate the other employee accounts? Is there anywhere to actually see these users or manage them directly?
I'd actually like to add another couple of accounts to the mix while still keeping the application private, but I'm confused about how Google determines which Gmail accounts will be able to authorize the app.
UPDATE: To clarify, when I visit the consent page while logged in as a member of our G Suite on the same domain as the project owner, everything is fine. However, we have other members managed in the same G Suite account who are under a different domain, and for these I get the message:
Error 403: org_internal
This client is restricted to users within its organization.
Furthermore, I am not even able to grant access using my own email, which is the creator and owner of the application. I'd like to know how I can add myself and the other G Suite members so they are able to grant access to the application without making it public. It was suggested below that I add them (or their domain) to Google Cloud IAM, but I'm unclear about how to get this working. My own email already exists in IAM with the role of "owner", and apparently that doesn't satisfy the requirement.
In order for internal apps to be used for OAuth, the project must belong to the organization associated with the same G Suite customer as all the users. Non-G Suite accounts cannot be used with internal apps. There's more information about this here: https://support.google.com/cloud/answer/6158849#public-and-internal.
Who is a member of my organization?
Anyone that you have added to Google Cloud IAM for a project, a folder, or at the organization level. This can include Google accounts (Gmail email addresses), G Suite, and Google Identity. The last two use a domain name (example.com), and anyone with an identity in that domain (someone@example.com) is included.
Google's goal is to tighten up security for Google Cloud Platform. In the past, anyone with a Google account email address could use your project's OAuth to request access. The level of access is controlled by OAuth scopes. Today, granting that access results in a consent screen with an unverified-application warning. Getting beyond (removing) that warning often requires a security audit of your application, with a cost estimated at $75,000 USD.
How do I manage members?
Through Google Cloud IAM: you can add and remove members, and assign and remove IAM roles attached to member IDs. Through G Suite or Google Identity: by adding or removing member accounts. Don't forget that members can be part of a Google Group and part of a domain, each of which is also an identity in Google Cloud Platform.
For G Suite Users:
Cloud IAM only deals with authorisation; you need to handle authentication elsewhere. By default, G Suite integrates with Cloud IAM as the authentication provider.
For Non-G Suite Users:
You can use the Cloud Identity free edition, but users will have to manage a separate set of credentials.
Single Sign-On without G Suite:
If you want a single sign-on option, you can use Google Cloud Directory Sync to sync with your on-premises Active Directory or LDAP server for authentication, so users can keep their login details.
That's how authentication works on GCP. As for authorisation, you have Cloud IAM, where you can manage access through predefined roles, primitive roles, and custom roles.
Cloud IAM and Authorisation
Typically you assign access using Google Groups and the resource hierarchy to make user access easier to manage. But bear in mind that if you grant access to something through an ancestor folder in the resource hierarchy, you can't deny that access downstream, so you need to plan your access hierarchy accordingly.
To answer your question of who qualifies as a "user in my organization": everyone can log in, but by default they cannot access any project, its resources, or its APIs unless they are given access either individually or through a group.
Hope this clarifies things for you a little.
I want to create firewall rules for a particular bucket in the Google Cloud Storage browser. I see that we have an option to create firewall rules, but how can we apply those rules to a specific bucket and not to all the other buckets?
You do not have to create firewall rules for buckets. What you need is to set the permissions on the buckets using Cloud IAM:
1. Open the Cloud Storage browser in the Google Cloud Platform Console.
2. Click the drop-down menu associated with the bucket to which you want to grant a member a role. The drop-down menu appears as three vertical dots to the far right of the bucket's row.
3. Choose Edit bucket permissions.
4. In the Add members field, enter one or more identities that need access to your bucket.
5. Select a role (or roles) from the Select a role drop-down menu. The roles you select appear in the pane with a short description of the permissions they grant.
6. Click Add.
You can add as members individual users, groups, domains, or even the public as a whole. Members are assigned roles, which grant members the ability to perform actions in Cloud Storage as well as GCP more generally.
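The same grant can also be done programmatically; a sketch with the google-cloud-storage Python client (the bucket name, member, and role are hypothetical):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")  # hypothetical bucket

# Read-modify-write the bucket IAM policy.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"user:someone@gmail.com"},  # hypothetical member
})
bucket.set_iam_policy(policy)
```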
You can also make a Cloud Storage bucket accessible only by a certain service account. A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs.
You cannot apply firewall rules to single buckets. Firewall rules are defined at the network level and only apply to the network where they are created.
Your inquiry is a known feature request that has not yet been implemented in Cloud Storage. The request, which is still open, asks for IP whitelisting in bucket policy, just as AWS allows with S3 buckets. You can “star” the FR so that it gets more visibility, and add your email to the “CC” list so that you get updates.
As a workaround, you may request access to use VPC Service Controls. According to official documentation, with VPC Service Controls, administrators can define a security perimeter around resources of Google-managed services to control communication to and between those services.
Cloud Storage is included in the Supported products of these Google-managed services and here you can find its limitations.
You can use access levels to grant controlled access to protected Google Cloud Platform (GCP) resources in service perimeters from outside a perimeter.
Access levels define various attributes that are used to filter requests made to certain resources. Access levels can consider various criteria, such as IP address and user identity. Additionally, they are created and managed using Access Context Manager.
This example describes how to create an access level condition that allows access only from a specified range of IP addresses.
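For illustration, creating such an IP-based access level through the Access Context Manager API might look like the following sketch (the access policy number, level name, and IP range are hypothetical; the google-api-python-client discovery client is assumed):

```python
from googleapiclient import discovery

acm = discovery.build("accesscontextmanager", "v1")

# Access levels live under an organization's access policy.
parent = "accessPolicies/123456789"  # hypothetical access policy

acm.accessPolicies().accessLevels().create(
    parent=parent,
    body={
        "name": f"{parent}/accessLevels/corp_ips",  # hypothetical level name
        "title": "corp_ips",
        "basic": {
            # Only requests from this CIDR range satisfy the access level.
            "conditions": [{"ipSubnetworks": ["203.0.113.0/24"]}],
        },
    },
).execute()
```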
However, bear in mind that VPC Service Controls create a “border” around the project, specifying a “virtual area” where Access Context Manager rules can be applied. An ACM rule specifying an IP address will allow that IP address to access all Cloud Storage objects and all other protected resources owned by that project, which is not the expected result. As stated here, you cannot apply an IP address rule to a single object, only to all objects in a project.
Furthermore, here you can find a useful link to the best practices concerning security and access control for Cloud Storage buckets, including tips on sharing your files while hosting a static website.
In conclusion, another option is Firebase Hosting instead of Cloud Storage, as stated here. Firebase Hosting is a Google hosting service that serves static web content to users in a secure, fast, free, and easy way.
Prior to Google's restructuring of Cloud API access, I had a Gmail account that had access to a bunch of Google Analytics accounts, through which I established API access via OAuth credentials for a large number of sites. They changed their policies and began requiring domains to be verified before they could access credentials. This was the case for "public" applications, but if you switched it to "private" the domain verification no longer mattered. I had to do this because making the project public was a violation of the TOS. However, this coincided with the introduction of GCP's IAM permissions setup that forced me to create an "organization" and a "project" - and also forced me to create a Google Cloud Identity.
The stipulation of a private project was that you can only grant access to accounts under your organization. I added my Gmail account to the organization and gave it administrative permissions.
So, I'm here: I set up new OAuth credentials for a new site, then try to access the API through those credentials. During the initial authorization screen, it asks me to select the appropriate Google account, and then is SUPPOSED to ask me to allow access. Instead, I get this error:
Authorization Error
Error 403: org_internal
This client is restricted to users within its organization.
BUT, the account I selected has been established as an administrator of the organization under which the API project resides! I have tried a billion different things, and the only way I seem to be able to grant access to ANYTHING is if I create the credentials under a different project and then log in with the GCI account. HOWEVER, that's not the account that has access to the Google Analytics accounts, so it doesn't help me one bit.
To top it all off, Google has absolutely no support for this. They send me here, to Stack Overflow, to get support. Can anyone help?
The accepted answer didn't help. What helped were the following steps:
Go to Google Developer console (https://console.cloud.google.com/apis/credentials/consent?project=XXX)
Change User Type to External
Note: This does not make your site publicly accessible. It makes it so users outside your organization can be granted access the normal way via IAM.
Linking an external email address does NOT make that identity part of the organization. Create a new identity inside the organization. If your organization is example.com, create an identity such as john@example.com and use that identity. Your other option is to remove the restriction.