I want to create a button that will open GCP Cloud Shell and run code that creates some resources in the account.
I am trying to use an "Open in Cloud Shell" URL (https://cloud.google.com/shell/docs/open-in-cloud-shell) and add my Git repo to the URL, but the problem is that my code should get different arguments on every run. Is there a way to send arguments with this URL? Or is there another solution for running code with arguments in GCP Cloud Shell via a URL?
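For reference, this is roughly how I build the link today, using the cloudshell_git_repo and cloudshell_git_branch parameters from that page (the repo and branch are placeholders); what I'm missing is some parameter for the per-run arguments:

    from urllib.parse import urlencode

    params = {
        "cloudshell_git_repo": "https://github.com/example/my-setup-scripts",  # placeholder
        "cloudshell_git_branch": "main",
    }
    url = "https://ssh.cloud.google.com/cloudshell/editor?" + urlencode(params)
    print(url)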
This is NOT a direct answer to your original question; however, it might be useful for an overall answer. If you don't find this answer helpful, simply let me know and I'll delete it.
From your clarification in the comments, what I now sense is that you want to create GCP resources that the user can work with. For example, a Pub/Sub topic. We'll use that as an illustration. The first thing I want to do is disavow us of the notion that there is anything "special" about a resource and the identity used to create it, other than that the identity must have authority to create it. For example, if user "john" creates a topic, that doesn't mean that the topic is "owned" by john. A GCP resource "just exists" after it is created. In order for a user to "use" a resource, the resource must authorize that set of users to work with it. This is where GCP IAM comes into play. Separate your goal into two parts:
Upon request, a new GCP topic is created
Once the topic is created, you grant permissions on it so that named identities (users/groups) can work with it
Don't think "The user who creates the topic is immediately the one who can work with it".
For example, you may wish to grant your users the ability to subscribe to a topic but may not want those users to be able to "manipulate" topics such as creation/update/delete.
I am assuming that the solution you are working on is for end users rather than internal developers?
Off the top of my head, I'm tempted to suggest that you review the following very short video:
How to authenticate calls to your Google Cloud Run service
This is just a teaser but it does give us a clue. It alludes to the notion that a request from an authenticated (to Google) user can be received by a Cloud Run instance and Cloud Run can then know who the user is. With that in mind, in the code of your Cloud Run, you can then make a "yes/no" decision as to whether to proceed. If yes to proceed, then Cloud Run (which is indeed running as a single user and we won't change that) creates the topic and then assigns subscription (or publication or other) permissions to the topic on behalf of the identity that came in with the request.
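To make that concrete, here is a rough sketch of what such Cloud Run code could look like: it verifies the incoming Google ID token, makes the "yes/no" decision, creates the topic with Cloud Run's own service account, and then grants the caller the subscriber role on that topic. Flask, the Authorization header format, the email claim, and the PROJECT_ID environment variable are all assumptions, not part of your setup.

    # Sketch only: Flask on Cloud Run, creating a topic on behalf of an
    # authenticated caller and granting that caller subscribe permission.
    import os

    from flask import Flask, abort, request
    from google.auth.transport import requests as google_requests
    from google.cloud import pubsub_v1
    from google.oauth2 import id_token

    app = Flask(__name__)
    publisher = pubsub_v1.PublisherClient()
    PROJECT_ID = os.environ["PROJECT_ID"]  # assumption: set on the service

    @app.route("/topics/<topic_id>", methods=["POST"])
    def create_topic(topic_id):
        # 1. Identify the caller from the bearer token (assumes the token
        #    is Google-signed and carries an email claim).
        auth_header = request.headers.get("Authorization", "")
        if not auth_header.startswith("Bearer "):
            abort(401)
        claims = id_token.verify_oauth2_token(
            auth_header.split(" ", 1)[1], google_requests.Request())
        caller = claims["email"]

        # 2. Your "yes/no" decision goes here (allow-list, quota, ...).

        # 3. Create the topic; Cloud Run's service account does this.
        topic_path = publisher.topic_path(PROJECT_ID, topic_id)
        publisher.create_topic(request={"name": topic_path})

        # 4. Grant the caller permission to subscribe to the new topic.
        policy = publisher.get_iam_policy(request={"resource": topic_path})
        policy.bindings.add(role="roles/pubsub.subscriber",
                            members=[f"user:{caller}"])
        publisher.set_iam_policy(
            request={"resource": topic_path, "policy": policy})

        return {"topic": topic_path, "granted_to": caller}, 201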
Related
If I were to create a client desktop application, I'm trying to find a reliable way to notify client applications of new data that needs to be queried from the server. Would Pub/Sub be a good fit for this? Most of the documentation I see for it seems to be focused on server-to-server communication, and it is a bit ambiguous whether this would work well for server-to-client notifications.
If it should work, would I be able to properly authenticate subscribers to limit the topics they could subscribe to? This application would be potentially downloadable by anyone, and I would need to ensure that information intended for one client couldn't end up in the hands of another client.
Cloud Pub/Sub is not going to be a good choice for this use case. First of all, note that each topic and project is limited to 10,000 subscriptions. Therefore, if you intend to have more than that, you will run out of subscriptions. Secondly, note that a subscription only receives messages published after the subscription is created. If you only need messages to be delivered that were published after the user came to the website, this may be okay. However, with these two issues combined, you'll need to consider the lifetime of your subscriptions. Do they get deleted when a user logs out? If not, when a user comes back, do you expect them to get all of the messages published since the last time they visited?
Additionally, as discussed in the comments, there is the issue of authentication. Your client-side app would have to have the credentials to subscribe. This would require you to essentially leak those credentials into your client-side code, which could be a vulnerability in your application.
The service designed to deliver notifications of this nature is Firebase Cloud Messaging.
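For what it's worth, sending such a notification with the Firebase Admin SDK is short; the registration token below is a placeholder for whatever your client app obtains and hands to your server.

    # Minimal sketch of a server-to-client notification via FCM.
    import firebase_admin
    from firebase_admin import messaging

    firebase_admin.initialize_app()  # uses application default credentials

    message = messaging.Message(
        notification=messaging.Notification(
            title="New data available",
            body="Please refresh from the server.",
        ),
        token="CLIENT_REGISTRATION_TOKEN",  # placeholder
    )
    message_id = messaging.send(message)
    print(message_id)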
If you want to open the application to anyone on the internet, you can't rely on the IAM service, which only works with Google identities. You can't ask your users to have a Google account; the user experience would be bad.
Thus, you can't use the IAM service to secure Pub/Sub access, and therefore you can't use Pub/Sub, because anyone could access it.
In your use case, the first step is to ask the user to register (create an account, validate their email, maybe add a payment method, ...). Then you have an identity, but one managed by you, not by IAM. You know which messages are for this user and which aren't.
If you want to be notified "in real time", I propose you use long polling or streaming to push data to the user. Cloud Run is now capable of this, and I recommend you have a look at it.
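As a rough sketch of what that could look like on Cloud Run, here is a server-sent-events style stream with Flask; get_next_event is a hypothetical placeholder for however you look up pending messages for a user in your own datastore.

    import json
    import time

    from flask import Flask, Response

    app = Flask(__name__)

    def get_next_event(user_id):
        # Hypothetical placeholder: return the next pending message for
        # this user from your own datastore, or None if nothing is pending.
        return None

    @app.route("/stream/<user_id>")
    def stream(user_id):
        def generate():
            while True:
                event = get_next_event(user_id)
                if event is not None:
                    yield f"data: {json.dumps(event)}\n\n"
                else:
                    time.sleep(1)  # simple server-side polling loop
        return Response(generate(), mimetype="text/event-stream")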
I'm studying GCP, and while reading about the different ways to communicate with and manage Cloud Functions, I end up wondering when to use each of the services GCP offers.
So I have been reading about GCP Composer, GCP Workflows, and Cloud Pub/Sub, and I don't clearly see when to use each one, or when to just use simple HTTP calls.
I understand that it depends a lot on the application you are building, but for example, if I'm building a payment gateway and some functions should be fired after the payment is verified (like sending emails, running unrelated business logic, adding the purchase to a sales platform), which one should I use to manage this flow, and in which cases would the others be better? Should I use events to create an async flow with Pub/Sub, use more complex solutions like Composer and Workflows, or just simple HTTP calls?
As always, it depends!! Even in your use case, it depends! OK, after a payment you want to send an email, run some business logic, add the order to your databases, ...
But can all these actions be done in parallel, or do you need to execute them in a certain order, stopping the process if a step fails?
In the first case, you can use Cloud Pub/Sub with one message published ("payment OK") and then a fan-out to several functions in parallel. Otherwise, you can use Workflows to test the response of a function and then decide whether or not to call the following functions. With Composer you can perform many more checks and actions.
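For the fan-out case, the publishing side is just one message on one topic; each function then gets its own subscription on that topic. A minimal sketch (project ID, topic name, and payload are placeholders):

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "payment-validated")

    # One "payment OK" message; every subscribed function reacts in parallel.
    future = publisher.publish(
        topic_path,
        b'{"order_id": "1234", "customer_email": "jane@example.com"}',
        event="payment_ok",  # attributes let subscriptions filter if needed
    )
    print(future.result())  # blocks until the message ID comes back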
You can also imagine sending another email 24 hours later to thank the customer for their order, and use Cloud Tasks to delay an action.
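A rough sketch of that delayed action with Cloud Tasks (project, location, queue name, and target URL are placeholders):

    import datetime

    from google.cloud import tasks_v2
    from google.protobuf import timestamp_pb2

    client = tasks_v2.CloudTasksClient()
    parent = client.queue_path("my-project", "europe-west1", "email-queue")

    # Schedule the task for 24 hours from now.
    schedule_time = timestamp_pb2.Timestamp()
    schedule_time.FromDatetime(
        datetime.datetime.utcnow() + datetime.timedelta(hours=24))

    task = {
        "http_request": {
            "http_method": tasks_v2.HttpMethod.POST,
            "url": "https://example.com/send-thank-you-email",  # placeholder
            "body": b'{"order_id": "1234"}',
        },
        "schedule_time": schedule_time,
    }
    response = client.create_task(request={"parent": parent, "task": task})
    print(response.name)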
You talked about Cloud Functions, but you also have other options for hosting code on GCP: App Engine and Cloud Run. A Cloud Function is, most of the time, single-purpose. Sending an email is perfect for a function.
Now, if you have a "set of functions" to browse your stock, view an object's details, review the price, and book an object (validating an order "books" the order content in your warehouse), the "functions" are all single-purpose but related to the same domain: warehouse management. Thus you can create a web server that exposes different paths to manage the warehouse (a microservice for the warehouse, if you prefer) and host it on Cloud Run or App Engine.
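For illustration, such a warehouse microservice is just one small web server with several paths; the routes and handlers below are placeholders:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/stock")
    def list_stock():
        return jsonify([])                    # browse your stock

    @app.route("/items/<item_id>")
    def item_details(item_id):
        return jsonify({"id": item_id})       # view the object details

    @app.route("/items/<item_id>/book", methods=["POST"])
    def book_item(item_id):
        return jsonify({"booked": item_id})   # "book" the order content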
Each product has its strengths and weaknesses. You will also see this when you learn about storage on GCP. Most of the time, you can achieve things with several products, but if you don't use the right one, it will be slower or cost much more.
I'm looking to get help on GCP billing. I know we can get cost info based on the service and project; however, is it possible to get info based on the access email ID? I'm planning to give access to my colleagues, and I want to know how much each one's access costs and against which service.
Something like: Date, Email ID, Service, Cost
With respect to another project, how can we tell which user's access is costing us so much?
We are running ~30 sandbox projects internally, each allocated to a specific person who can test and run his/her stuff on GCP.
I strongly suggest you create isolated workspaces (projects) for your colleagues so they don't accidentally delete/update services of other people. You will get a separate billing report for each project as well.
I also set up a billing alert for all my colleagues so they get an early notification if they leave something running on their testbench.
There are three ways I think you could do that kind of cost segregation; I will number them in order of complexity.
1. Billing export. For this one, the best practice is to segregate your resources and users by labels. As administrator, you can ask the users to assign a label to any resource they create. For example, if they create a new VM instance, you will then be able to filter the exported table by that field and build the reports you want (your GCP billing dashboard will also show these label segregations). See the sketch after this list.
2. Use the Cloud Billing API to query the information you need directly; in the request you can work with fields such as SKU, user, date, and description.
3. Usage reports. This option is more in the G Suite scope, and I can't vouch that it will work exactly as the documentation says, but you can take a look at it. There is an option to get "usage reports"; these can be generated from G Suite for any resource below it, GCP included, if you already have an organization.
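For option 1, once the billing export lands in BigQuery, a per-label report can be a simple query. A rough sketch (the export table name is a placeholder for your own table, and "owner" is an example label key):

    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT
          l.value AS owner,
          service.description AS service,
          ROUND(SUM(cost), 2) AS total_cost
        FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,
          UNNEST(labels) AS l
        WHERE l.key = 'owner'
        GROUP BY owner, service
        ORDER BY total_cost DESC
    """
    for row in client.query(query).result():
        print(row.owner, row.service, row.total_cost)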
I have just recently started to work with Google Cloud and I am trying to wrap my head around some of its inner workings, mainly the audit logging part.
What I want to do is get the log activity from when my keys are used for anything, and also when someone actually logs into the Google Cloud Console (it could be the Key Vault or the Key Ring, too).
I have been using PowerShell to extract these logs with gcloud logging read, and this is where I start to doubt whether I'm looking in the right place. I will explain:
I have created new keys and I see this action in the Activity panel, and I can already extract it with gcloud logging read "resource.type=cloudkms_cryptokey" (there could be a typo in the command line, since I am writing it from the top of my head, sorry for that!).
Although I have this information, I am rather curious whether this is the correct course of action here. I saw the CreateCryptoKey and SetIamPolicy methods in my logs, alright, but am I going to see all actions related to these keys? Reading the gcloud docs, I feel as though I am only getting some of the actions.
As I have said, I am trying to work my way around the GCloud Documentation, but it is such an overwhelming amount of information that I am not really getting the proper answer I am looking for, this is why I thought about resorting to this community.
So, to summarize: am I getting all the information related to my keys the way I am doing it right now? And what about the people who have access to the Google Cloud Console page? Is there a way to find out who accessed it and which part (the Crypto Keys page or the Crypto Vault page, for example)? That's something I have not understood from the docs either, sadly. Perhaps someone could point me to the proper page for what I am looking for, because the Cloud Audit Logging page doesn't feel totally clear to me on this front (and I assume I could be at fault here, these past weeks have been harsh!).
Thanks to anyone who takes the time to answer my question!
Admin activities such as creating a key or setting IAM policy are logged by default.
Data access activities such as listing Cloud KMS resources (key rings, keys, etc.), or performing cryptographic operations (encryption, decryption, etc.) are not logged by default. You can enable data access logging, via the steps at https://cloud.google.com/kms/docs/logging. I'm not sure if that is the topic you are referring to, or https://cloud.google.com/logging/docs/audit/.
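If it helps, the same thing your gcloud command does can be scripted with the Cloud Logging client library. This sketch narrows to audit log entries for Cloud KMS keys; data access entries will only show up once you enable data access logging as above.

    from google.cloud import logging

    client = logging.Client()
    log_filter = (
        'resource.type="cloudkms_cryptokey" '
        'AND logName:"cloudaudit.googleapis.com"'
    )
    for entry in client.list_entries(filter_=log_filter,
                                     order_by=logging.DESCENDING):
        print(entry.timestamp, entry.payload)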
Currently I log into IAM and edit policies by hand for my S3 bucket. When I change something in the editor, I have no idea what the policy was before unless I cancel out of the editor and then go back and view it, so there's no way to tell exactly what I've changed. Editing is kind of painful, especially considering that I sometimes find myself changing something and then testing the change, with no trivial way to roll back to where I started.
Another problem created by the lack of version control is there's no log of why or when a particular permission was modified. For example, I would really like to know that the reason we need the ListBucket permission on our bucket is because that was required to get file uploads to work. You know, the kind of thing you might put in a git commit message.
Now that you understand and care deeply about my motivations, I would like to know how best to get my policies into git. To the extent possible, I'd like the only way to change the permissions to be through code that is written by me, with the presumption being that any time you make a change, you commit to the repository. This is not perfect security of course, but it does provide an accounting of what changed when, and gives us a single place where we make changes.
Here's my proposal:
Create an IAM user called policy_editor
Revoke policy editing privileges from all users
Give policy_editor policy editing privileges
Do not give policy_editor a password (thus have to use api credentials to change policies)
My questions are:
Is this possible? (Ideally even the root user wouldn't have permission to edit policies, so that wouldn't happen by accident)
Is this a good idea?
Is there a better solution?
Is there a tool that does this already?
Thanks!
Is this possible?
Yes, the API is flexible enough to do that. Writing automation around IAM pays off in spades.
By "root user", do you mean the AWS access keys directly on the account? Step 1 is to delete those creds (directly on the account) and only use IAM users for everything.
http://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPractices.html
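As a rough sketch of the "changes only through code" workflow: keep the policy document as a JSON file in git, and have a script (run with the policy_editor user's API credentials) publish it as a new version of a managed policy. The ARN and file path below are placeholders.

    import boto3

    iam = boto3.client("iam")
    POLICY_ARN = "arn:aws:iam::123456789012:policy/bucket-access"  # placeholder

    with open("policies/bucket-access.json") as f:
        document = f.read()  # the policy JSON tracked in git

    # Managed policies keep at most five versions, so prune the oldest
    # non-default version before publishing a new one.
    versions = iam.list_policy_versions(PolicyArn=POLICY_ARN)["Versions"]
    if len(versions) >= 5:
        oldest = min(
            (v for v in versions if not v["IsDefaultVersion"]),
            key=lambda v: v["CreateDate"],
        )
        iam.delete_policy_version(PolicyArn=POLICY_ARN,
                                  VersionId=oldest["VersionId"])

    iam.create_policy_version(PolicyArn=POLICY_ARN,
                              PolicyDocument=document,
                              SetAsDefault=True)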
Is this a good idea?
Yes, automation is good.
Is there a better solution?
Well, here are some related ideas:
Use CloudTrail to log all IAM changes.
If you disable your IAM-changing privs, create a second user (with MFA enabled) for emergencies.
For some "dangerous" commands, use automation instead (e.g. give them a web form where they can delete a bucket, but your code verifies it's OK to delete beforehand).
Avoid adding privs directly to people. Always use groups to organize permissions. Don't be afraid to spend some time figuring out what logical permission groups would be. For example, you could have a "debugging production" group.
Don't get too fine-grained (at least not at first). There is a trade-off between security and bureaucracy here. If people have to ping you for every little permission, they will start requesting privs "just in case".
Use conditionals: you can say "you can delete any bucket that doesn't have 'production' in the name", or "you can terminate instances, but it requires MFA" (see the sketch after this list).
Review your policies regularly. People move around between teams, so people often end up with permissions they don't need. If your groups are well-named, you can make the managers review the permissions needed for their underlings.
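A rough sketch of those two conditionals as policy statements, built here as a Python dict (the statements are illustrative, not a drop-in policy):

    import json

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Allow deleting buckets...
                "Effect": "Allow",
                "Action": "s3:DeleteBucket",
                "Resource": "arn:aws:s3:::*",
            },
            {   # ...but never ones with "production" in the name.
                "Effect": "Deny",
                "Action": "s3:DeleteBucket",
                "Resource": "arn:aws:s3:::*production*",
            },
            {   # Terminating instances requires MFA.
                "Effect": "Allow",
                "Action": "ec2:TerminateInstances",
                "Resource": "*",
                "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
            },
        ],
    }
    print(json.dumps(policy, indent=2))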
Is there a tool that does this already?
Not that I know of. It's pretty easy via API calls, so someone is going to write it.
(This guy started a project: https://github.com/percolate/iamer )