My company works with IoT devices, and we have a product where each device should have its own service account.
This scenario is impossible for us right now because, following this doc (https://cloud.google.com/iam/docs/understanding-service-accounts) and studying further, we discovered GCP has a default quota of 100 service accounts per project. That makes it impossible for us to work with one service account per device.
At the moment, does GCP have another option besides service accounts?
Is there a way to increase the number of service accounts?
I would suggest checking this article, which describes the authentication strategies you can use with GCP, in particular with the Google Cloud APIs.
If you have decided that you would rather have a service account for each of your IoT devices, instead of using another option such as an OAuth 2.0 client, then you can request a quota increase from the default limit of 100.
The quota increase request is subject to evaluation, so it's best to add a clear note on why you need more than 100 service accounts.
Authenticating as an end user could be a better option, since whenever you need to increase the number of devices you won't need to wait for any type of approval. However, it's not possible to know for sure whether this option is best, as your application flow is not clear from the details you have added to the question so far. As mentioned before, you could take a look at the documentation and select the best option for your use case.
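For reference, here is a minimal sketch of the end-user option using the google-auth-oauthlib library; the client_secret.json file name and the scope are placeholders for whatever OAuth client and scopes you actually configure:

```python
from google_auth_oauthlib.flow import InstalledAppFlow

# Sketch: authenticate as an end user instead of a service account.
# "client_secret.json" is a placeholder for the OAuth 2.0 client you
# create in the Cloud Console; no per-device service account involved.
flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
# Opens a browser, the user consents, and we get their credentials.
credentials = flow.run_local_server(port=0)

# credentials can now be passed to any Google client library.
print("Authenticated as an end user; token expires:", credentials.expiry)
```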
I am following Coursera's Architecting with Google Kubernetes Engine, which covers switching to a Service Account.
It says to create and download a key file and authenticate using the key. Is this the common way in GCP? Many keys would be created by developers and downloaded to many laptops or servers, scattering the keys across many places, which does not seem like a secure practice.
Answering your question: yes, service accounts are the common way to authenticate in GCP.
There are two different service account types, and the recommendation is to use the second one:
User-Managed Service Accounts: to authenticate, you need a “password” that comes in the form of a Service Account Key (a JSON file); if you leak the service account key, the service account can be considered compromised.
Using keys implies that you are in charge of their lifecycle and security, and that's a lot to ask, because:
You need a robust system for secrets distribution.
You need to implement a key rotation policy.
You need to implement safeguards to prevent key leaks.
Google-Managed Service Accounts: SAs for which you don't need to generate keys; your applications can simply assume their identity. No keys are involved: the VM continuously requests short-lived authorization tokens from the metadata service.
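To illustrate what happens under the hood on a GCE VM, here is a minimal Python sketch of the token request the client libraries make for you against the metadata server (this only works from inside a Google Cloud VM):

```python
import requests

# Sketch: fetch a short-lived access token for the VM's attached
# service account from the GCE metadata server. In practice the
# Google client libraries do this automatically.
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
resp.raise_for_status()
token_info = resp.json()

# token_info contains "access_token", "expires_in" and "token_type";
# the token is short-lived, so it is re-requested before it expires.
print(token_info["token_type"], "expires in", token_info["expires_in"], "s")
```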
Documentation
NO, no and no, don't use service account key files. Your instinct is right: they are terrible for security.
Today there are several ways to avoid service account key usage, even if, in some corner cases, you still need them.
I have written a bunch of articles on that topic:
the limits
the service account credential API
and a fight with a Google dev advocate over one of his articles
Because YES, even Google tutorials, courses, and documentation (...) have promoted that bad practice for years and still do. It was my nightmare at my previous company, and I built up my knowledge and skills to prevent key usage and find workarounds. Let me know your use case and I will try to help you as best I can.
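As an example of one keyless workaround, service account impersonation lets an existing ambient identity (gcloud application-default credentials, a VM identity, workload identity, ...) mint short-lived tokens for a target service account. A minimal sketch with the google-auth Python library, where the service account email is a placeholder:

```python
import google.auth
from google.auth import impersonated_credentials

# Sketch: impersonate a service account without any key file.
# Source credentials come from the environment, never from a
# downloaded JSON key.
source_credentials, project = google.auth.default()

target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    # Hypothetical service account; replace with your own.
    target_principal="my-sa@my-project.iam.gserviceaccount.com",
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=3600,  # token validity in seconds
)

# Pass target_credentials to any Google client library, e.g.:
# storage.Client(credentials=target_credentials, project=project)
```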
I am reading about multi-region architecture considerations.
Our reasons for moving to a multi-region architecture are pretty much the same as everyone else's:
Reducing latency for customers that are in different continents (EU, US, Asia, Africa)
Being in compliance with their data storage needs
Enabling regional failover
We will be using Cognito pools and DynamoDB for data storage. Global Cognito pools do not seem to exist the way Global DynamoDB tables do. For a multi-tenant SaaS system with tenants on different continents, should the user pool be created per region or per tenant? In this video, https://www.youtube.com/watch?v=kmVUbngCyOw&feature=emb_logo&ab_channel=AmazonWebServices , it is recommended to have a pool per tenant. I fail to see many advantages to that, though.
Is it instead a good idea to have a user pool per region?
The video also suggests having identity pools in addition to user pools. Why should that be the case in a multi-tenant system?
If I were to ensure data residency in the same region as the tenants in DynamoDB as well, how should that be handled? And how would an active-active architecture work?
We also need to host application URLs like tenant1.companydomain.com for all of the tenants. What's the best way to go about that?
This question is very broad, but here goes.
1. Reducing Latency
Unless you are calling Cognito APIs often, authentication is really the main concern here, but if you are using a long-lived refresh token, users shouldn't have to authenticate all the time, so it won't be a massive problem. A bigger problem is that if you use only one pool, some Cognito integrations need to be in the same region, such as a Cognito authorizer on an API Gateway. You could write your own Lambda authorizer to get around this, as sketched below.
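As a rough sketch of that Lambda authorizer workaround (assuming the PyJWT library with its JWKS client, a TOKEN-type authorizer, and placeholder region/pool/client values), the function validates the Cognito JWT itself, so the API Gateway no longer needs a same-region Cognito authorizer:

```python
import jwt  # PyJWT, installed with the "crypto" extra
from jwt import PyJWKClient

# Hypothetical values; point these at your own pool.
REGION = "eu-west-1"
USER_POOL_ID = "eu-west-1_EXAMPLE"
APP_CLIENT_ID = "example-app-client-id"

ISSUER = f"https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}"
jwks_client = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

def handler(event, context):
    """Sketch of a TOKEN-type Lambda authorizer that validates a Cognito
    JWT directly (assumes the ID token, whose aud is the app client id;
    access tokens carry client_id instead)."""
    token = event["authorizationToken"].removeprefix("Bearer ").strip()
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=APP_CLIENT_ID,
        issuer=ISSUER,
    )  # raises if the token is invalid, which denies the request
    return {
        "principalId": claims["sub"],
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
    }
```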
2. Being in compliance with their data storage needs
This may force your hand in the user pool decision, although you can always keep the identity and the user details in separate storage, making use of Lambda triggers in the user pool to sync the data.
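As an illustration of such a trigger (the DynamoDB table name is a placeholder), a post-confirmation Lambda can mirror the new user's attributes into your own regional store:

```python
import boto3

# Hypothetical table; use whatever regional store fits your residency rules.
table = boto3.resource("dynamodb").Table("TenantUsers")

def handler(event, context):
    """Sketch of a Cognito post-confirmation trigger that copies the new
    user's attributes into a separate store. Cognito requires the event
    to be returned unchanged."""
    attrs = event["request"]["userAttributes"]
    table.put_item(Item={
        "userId": attrs["sub"],
        "email": attrs.get("email", ""),
        "userPoolId": event["userPoolId"],
    })
    return event
```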
3. Regional failover
I don't understand this in regard to your question about how many user pools to use. If you use one in each region, you will need to duplicate each of them into another region if you want to add your own failover capability; with only one user pool, you would have to duplicate only one. I've never heard of anybody duplicating a user pool to another region, and it conflicts with what you wanted in 2. If you've used integrations too, you cannot fail over to another user pool alone; you'd have to fail over to a whole new instance of the website, not just the user pool. You'd also have to create your own triggers to handle this for you.
Should the user pool be generated per region or per tenant?
This honestly is a large question in itself. We build a multi-tenant SaaS platform, and I can honestly say a user pool per tenant would be a nightmare. One user pool is easiest (for example, with the API Gateway integration you cannot select multiple user pools). You can use an app client per tenant, customize the sign-in for that tenant, and give each tenant their own sub-domain.
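For instance, here is a hedged boto3 sketch of provisioning an app client per tenant in a single shared pool (the pool id, tenant name, and OAuth settings are placeholders):

```python
import boto3

cognito = boto3.client("cognito-idp")

def create_tenant_app_client(user_pool_id: str, tenant: str) -> str:
    """Sketch: one shared user pool, one app client per tenant.
    Returns the new app client id for the tenant's sub-domain config."""
    resp = cognito.create_user_pool_client(
        UserPoolId=user_pool_id,
        ClientName=f"{tenant}-client",
        GenerateSecret=False,  # browser-based flows don't use a secret
        AllowedOAuthFlows=["code"],
        AllowedOAuthScopes=["openid", "email"],
        AllowedOAuthFlowsUserPoolClient=True,
        CallbackURLs=[f"https://{tenant}.companydomain.com/callback"],
    )
    return resp["UserPoolClient"]["ClientId"]

# Hypothetical usage:
# client_id = create_tenant_app_client("eu-west-1_EXAMPLE", "tenant1")
```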
4. Is it instead a good idea to have a user pool per region instead?
More meaningful questions are perhaps: do I want users to have the same identity in different regions? Do I want a user to be able to use more than one region? And so on. Think about different websites that have this feature: on Amazon, for example, you have a global identity and you can switch which store you are visiting. You need to pin down your requirements first.
5. The video
Sorry, I'm not going to watch the video, but you can have an identity pool select permissions from the token (i.e. the group permissions for that user in Cognito). That covers 99% of use cases; start from there.
6. if I was to ensure data residency...
Out of the box, this is how the cloud works: unless you pick a global resource, everything is per region, so you don't need to do anything special. Note that you cannot have both strict data residency and regional failover at the same time.
I'm trying to find a way to list all services used in an AWS account. Having thousands of accounts, I can't use CloudTrail. Config only provides data on infrastructure like EC2 instances, Lambdas, and RDS databases, which is a fraction of what I need.
Is there an API or a way to make a call, preferably using an aggregator of some sort?
I would be interested in simple output like: account, which services, when first consumed. Any suggestions?
Why do I need it? We let our app teams consume a number of services, and they are gradually whitelisted to consume more; we would like to understand how quickly these services are being utilized once whitelisted.
Have you had a look at your AWS bill? That would be a good source of all the information as to what services were active on an account.
AWS provides the AWS Cost Explorer API, which permits you to do this programmatically.
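As a rough boto3 sketch (assuming you run it from the payer/management account so the LINKED_ACCOUNT grouping covers all sub-accounts; the dates are placeholders):

```python
import boto3

# Cost Explorer is a global API served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

def services_per_account(start: str, end: str):
    """Sketch: yield (account, service, cost) for the given period,
    e.g. start="2021-01-01", end="2021-06-01"."""
    token = None
    while True:
        kwargs = dict(
            TimePeriod={"Start": start, "End": end},
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
            GroupBy=[
                {"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"},
                {"Type": "DIMENSION", "Key": "SERVICE"},
            ],
        )
        if token:
            kwargs["NextPageToken"] = token
        page = ce.get_cost_and_usage(**kwargs)
        for period in page["ResultsByTime"]:
            for group in period["Groups"]:
                account, service = group["Keys"]
                cost = group["Metrics"]["UnblendedCost"]["Amount"]
                if float(cost) > 0:  # skip services with no usage
                    yield account, service, cost
        token = page.get("NextPageToken")
        if not token:
            break
```

The earliest period in which a service shows a nonzero cost for an account approximates when that account first consumed it, which should cover the "when first consumed" part of your output.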
We are in the process of transferring what we currently have in our on-premises infrastructure to the cloud and taking advantage of what AWS has to offer. We are planning how to make this process as smooth as possible, so one of the first things that came to mind was: what are the best solutions for bringing what we currently have on premises, with users registered in AD, into AWS, and how will we be able to manage them? For example, when we create a new user in AD, we would like that user to automatically appear in our AWS environment, so we don't have to manage users on premises as well as in AWS, and so they stay in sync.
For the next question I think the answer is Control Tower (and that's why I'm posting my question on this topic), but I would like to confirm and see if there are any other options out there that we might be missing.
As I said earlier, we are in the process of transferring our current on-site infrastructure to the cloud, so at this time we have three environments where we manage development: Development, Staging and Production. We thought of having each of them in its own AWS account so we can manage them individually, but we also want a way to easily switch between accounts, ideally get one consolidated bill for all three accounts with per-account detail, and be able to easily make resources in one account communicate with resources in another. What would be the best solution for these challenges in AWS, if someone can suggest best practices?
Thank you so much for your help!
For the AD connection, you can use the AWS AD Connector service. The official AWS blog has a tutorial: https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/
Billing for a multi-account organization is pretty straightforward: all sub-accounts pay through the root account, and the bill is still itemized per account, so you won't have to worry about separation of billing.
Communicating between the environments (accounts), however, requires a bit more legwork. You can use a hub-and-spoke model and reach out to all environments from a single environment, or you can create trust relationships between roles and resources via IAM policies in the different accounts and map them to one another.
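As a rough sketch of the role-trust approach (the role ARN, account id, and session name below are placeholders), code in one account assumes a role that the other account's trust policy allows:

```python
import boto3

sts = boto3.client("sts")

# Sketch: assume a role in another account. The target role's trust
# policy must name this account/principal; the ARN is hypothetical.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/StagingCrossAccountRole",
    RoleSessionName="dev-to-staging",
)

creds = resp["Credentials"]
# Use the temporary credentials against the other account's resources.
staging_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```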
I have tried, but Google will not accept my request to increase the instance quota on my free Google Cloud account. Could someone tell me what to do to get the request accepted and approved? Also, how do I switch to normal paid usage? And if you know of another provider that offers the same kind of VPS service, please let me know. Thanks.
Quota increases are not available for free accounts at the moment. You can upgrade to the paid service by logging into your Google Cloud Console as the Owner.
Then click “Upgrade my account” at the top of any page once logged in to upgrade the account from the free trial.
Once the account is upgraded, quota increases are then requested from the "Quotas" page. You can reference this article for more detailed information and steps on requesting quota increases.
This will file a case with the Compute Engine support team to process the request, and it can take 24-48 hours for them to respond to you.
Before making any requests, I would also suggest reading through this article to ensure you are as informed as possible about any changes to the billing for your account.
OK, I was also working with the GCP compute feature, and I discovered that the rejection e-mail has a note saying the request cannot be granted "until the billing account has additional history". I guess the billing account must build up some history before a GPU is assigned.
This was happening to me too. They would reject all my requests even though I was just asking for an increase from 0 to 1. If you happen to have an edu email, try using that: when I switched to my edu email, my request was approved on the first try.