I've got an ASP.NET Web API that is using AWS Cognito for authentication and resource access control. We've been using user pool groups up until this point to define certain entities users have access to (non-aws resources in a DB).
The problem is, now that our requirements for access control are more detailed, we are hitting the group cap of 25 per pool. I've looked into alternatives within Cognito, such as custom attributes, but there are also limits on the number of custom attributes per pool, and they only support string and number types, not arrays.
Another alternative I've explored is intercepting the token when it hits our API and adding claims based on permissions mapped in the DB. This works reasonably well, but it is only a server-side solution, and I'm not thrilled about intercepting every request and making a DB call to add claims (not great for performance). We need some of these claims client side as well, so this isn't a great solution.
Beyond requesting a service limit increase in the number of groups available per pool, am I missing anything obvious? Groups seem to be the suggested way to do this, based on documentation from AWS. Even if we went for a multi-tenant approach with multiple pools, I think the 25-group cap is still going to be an issue.
https://docs.aws.amazon.com/cognito/latest/developerguide/scenario-backend.html
You can request limit increases for nearly any part of the service, and AWS will consider the request. Sometimes this is more straightforward than building side systems, as you point out. See https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html
Related
the need: I have a team of people who are not particularly tech savvy, but we still want them to be able to access a Jupyter notebook on a hosted EC2 instance (which has access to a variety of resources in AWS). Not all of the people use this instance all the time, but it's reasonable to believe that the instance will be used continuously by one person or another throughout the day (and we do not have budget to scale things up on demand).
the solution: use a single medium-sized instance in a multi-tenant fashion, so that every user can connect to it and, as more users connect, resources are redistributed (meaning the already-connected users will see reduced RAM and CPU capacity, up to a limit, since CPUs are finite in number); as they disconnect, the opposite happens.
the problem & question: I've seen some pretty beefy/complicated infrastructure designs for this. My question is: what's the simplest (= least complicated/expensive to maintain) architecture to cater to this need?
p.s. on security: all users are equal; no user has more rights than another. The instance will be accessible just via a password, and no identity management is required.
p.s. on caching sessions: after a certain period of inactivity (or after closing the browser, in this case), the session will terminate on its own. No session caching needs to be employed.
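For what it's worth, the simplest architecture I know of that matches these constraints is a single JupyterHub running on the instance, with a shared password and an idle culler so capacity is returned to the pool automatically. A minimal sketch of a `jupyterhub_config.py`, assuming the `jupyterhub-dummyauthenticator`, `jupyterhub-systemdspawner`, and `jupyterhub-idle-culler` packages are installed (all values below are illustrative assumptions, not requirements):

```python
# jupyterhub_config.py -- minimal multi-user sketch for one EC2 instance.
c = get_config()  # noqa: F821 -- provided by JupyterHub at startup

# One shared password, no real identity management (all users are equal).
c.JupyterHub.authenticator_class = "dummy"
c.DummyAuthenticator.password = "change-me"

# Cap each single-user server so one user cannot starve the others.
# (Enforcement depends on the spawner; SystemdSpawner honors these on Linux.)
c.JupyterHub.spawner_class = "systemdspawner.SystemdSpawner"
c.SystemdSpawner.mem_limit = "2G"
c.SystemdSpawner.cpu_limit = 1.0

# Shut idle servers down automatically (here: after 30 minutes).
c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "command": ["python3", "-m", "jupyterhub_idle_culler", "--timeout=1800"],
    }
]
```

This keeps everything on the one box: no load balancer, no autoscaling group, and nothing to maintain beyond the Hub process itself.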
I have to implement a Pre Token Generation Lambda in order to add custom attributes into the Access Token. The custom attribute/value is stored in the user settings of each user within the Cognito User Pool and I can retrieve it with the boto3 admin_get_user function.
The question I have is whether it is a good idea, from a performance point of view, to call admin_get_user (or any other function that loads data from Cognito). Does Cognito scale internally and handle a burst of requests well? Or is it better to retrieve the custom attributes from a different place because Cognito is perhaps not meant to be used for such lookups?
My Lambda will be executed on every successful authentication and, more importantly, on every token refresh, which happens every 60 minutes (given that every issued access token expires after at most 60 minutes).
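For reference, the Pre Token Generation trigger event already carries the user's attributes in `event['request']['userAttributes']`, so in many cases no admin_get_user call is needed at all. A minimal sketch of such a handler (the `custom:tenant` attribute name is an assumption; also note the V1 trigger shape shown here customizes the ID token, while access-token customization requires the V2 trigger):

```python
def lambda_handler(event, context):
    """Pre Token Generation trigger: copy a custom attribute into the token
    without any extra Cognito API call -- the attributes arrive in the event."""
    attrs = event["request"]["userAttributes"]

    event["response"] = {
        "claimsOverrideDetails": {
            "claimsToAddOrOverride": {
                # 'custom:tenant' is a hypothetical custom attribute name.
                "tenant": attrs.get("custom:tenant", "none"),
            }
        }
    }
    return event


# Local smoke test with a fake trigger event.
fake_event = {"request": {"userAttributes": {"custom:tenant": "acme"}}}
out = lambda_handler(fake_event, None)
print(out["response"]["claimsOverrideDetails"]["claimsToAddOrOverride"]["tenant"])
```

If the attributes you need live outside Cognito entirely, the same handler is where you'd read them from your own store instead.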
I know the question is old. I recently faced the same issue. So just adding an answer to help others.
The documented quota/limit for AdminGetUser is 5 requests per minute. You can ask AWS to increase the limit, or you can configure the AWS client you are using with a backoff strategy and retry configuration.
You can find the limits and quotas for API calls here: https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html
An interesting article on how backoff strategies work and which one to choose: https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
I would recommend https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/retry/PredefinedBackoffStrategies.FullJitterBackoffStrategy.html
For more info read https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html
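The full-jitter strategy from that article can be sketched in a few lines of plain Python, independent of any SDK (the timing parameters below are illustrative assumptions):

```python
import random
import time


def call_with_backoff(fn, max_attempts=5, base=0.1, cap=5.0):
    """Retry fn() with exponential backoff and full jitter:
    sleep ~ uniform(0, min(cap, base * 2**attempt)) between attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))


# Example: a flaky call that succeeds on the third try.
state = {"calls": 0}

def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("throttled")
    return "ok"

print(call_with_backoff(flaky))  # "ok", after two retries
```

The jitter matters because many Lambdas retrying on the same schedule would otherwise hammer Cognito in synchronized waves.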
Is there a way to issue access tokens that are valid for a single use? My use case is to invoke Lambda functions from browser but want to restrict the number of invocations to one per token.
If a short-lived token is issued, there is still potential for it to be used for multiple invocations.
I am using DeveloperAuthenticatedIdentities to issue the temporary tokens.
There is no such thing with AWS Cognito.
You can implement a custom authorizer with API Gateway to manage your invocation count. If the same URL is accessed more than once, you can deny the request.
More info on Custom Authorizers.
https://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html
Hope it helps.
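If you do build a custom authorizer for this, the core "use once" check is an atomic mark-as-used write. A sketch of that logic with an in-memory store standing in for a real table (with DynamoDB you would get the same effect from a conditional `put_item` guarded by `attribute_not_exists(token_id)`; all names here are assumptions):

```python
class SingleUseTokens:
    """Tracks token IDs that have already been redeemed.

    In production this set would live in DynamoDB, using
    put_item(ConditionExpression="attribute_not_exists(token_id)") so the
    check-and-mark step is atomic across concurrent requests.
    """

    def __init__(self):
        self._used = set()

    def redeem(self, token_id: str) -> bool:
        """Return True the first time a token is presented, False after."""
        if token_id in self._used:
            return False
        self._used.add(token_id)
        return True


tokens = SingleUseTokens()
print(tokens.redeem("abc123"))  # True  -- first use is allowed
print(tokens.redeem("abc123"))  # False -- replay is denied
```

The authorizer would call `redeem()` with the token's unique ID (e.g. its `jti` claim) and return a Deny policy when it comes back False.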
AWS Cognito is not designed for that; however, you could achieve it by throwing undesirably expensive computation at the problem:
1. Your API/app adds the user on behalf of the admin.
2. Your API/app removes the confirmed user after a certain amount of time.
You can see that this approach is not feasible even for a low number of users.
Better approach, if the routes are unique (still using Cognito)
1. Same as above.
2. You keep the list of routes, as bucket names, in S3; each bucket holds a file that contains something like
{
  accessed: false
}
When the user uses the token to access the route, your app checks the flag above, grants access, and sets it to true. You could even drop the file and use just the buckets; they represent the routes and get removed upon being accessed.
Much better approach
1. The application could generate and verify short-expiry JWT tokens to support short-lived authorized users. The downside here is the development time, which might lead to security risks if the application is not thoroughly tested.
2. Same as the above approach (using S3).
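The "short-expiry token" idea from option 1 can be sketched with nothing but the standard library. This is a minimal HMAC-signed token with an embedded expiry, not a full JWT implementation (in practice you would use a maintained library such as PyJWT); the secret and TTL values are illustrative assumptions:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # assumption: shared server-side signing key


def issue(payload, ttl_seconds):
    """Serialize payload with an 'exp' claim and sign it with HMAC-SHA256."""
    payload = {**payload, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def verify(token):
    """Return the payload if the signature is valid and not expired, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or wrongly signed
    payload = json.loads(base64.urlsafe_b64decode(body))
    return payload if payload["exp"] > time.time() else None


token = issue({"sub": "user-1"}, ttl_seconds=60)
print(verify(token)["sub"])   # user-1
print(verify(issue({}, -1)))  # None -- already expired
```

The short expiry bounds the replay window; combining it with the single-use check above closes it entirely.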
For limiting usage, I think the best approach is to use usage plans.
It is not a token's responsibility to restrict usage; API keys exist for that purpose.
Have a look at this AWS page.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
I want to start by saying that I'm a total newcomer to AWS.
I'm investigating using AWS WAF for dynamic rate limiting based on a component of the request URL. The AWS website has a tutorial for doing this by IP address, but I have no idea if it can be modified to do what I need.
So, with that in mind, please tell me what, if any, of the following is actually possible:
Rate limit by a component of the URL (an API key in this case)
Determine limit dynamically (different behaviour for different keys)
Perform some non-blocking action in the first instance of exceeding the limit, then block if the limit is exceeded consistently
Log both of the above actions and do something with the outputted logs (i.e. forward them somewhere)
Again, I'm not looking for detailed how-tos here, as they would probably warrant separate questions - I just want to know if this is possible.
API Gateway is probably the right fit for what you are looking to implement. It has throttling implemented out of the box.
Take a look at API Gateway Usage Plans for implementation details for your specific use case.
I'm fleshing out an idea for a web service that will only allow requests from desktop applications (and desktop applications only) that have been registered with it. I can't really use a "secret key" for authentication because it would be really easy to discover and the applications that use the API would be deployed to many different machines that aren't controlled by the account holder.
How can I uniquely identify an application in a cross-platform way that doesn't make it incredibly easy for anyone to impersonate it?
You can't. As long as you put information in an uncontrolled place, you have to assume that information will be disseminated. Encryption doesn't really apply, because the only encryption-based approaches involve keeping a key on the client side.
The only real solution is to put the value of the service in the service itself, and make the desktop client be a low-value way to access that service. MMORPGs do this: you can download the games for free, but you need to sign up to play. The value is in the service, and the ability to connect to the service is controlled by the service (it authenticates players when they first connect).
Or, you just make it too much of a pain to break the security. For example, by putting a credential check at the start and end of every single method. And, because eventually someone will create a binary that patches out all of those checks, loading pieces of the application from the server. With credentials and timestamp checks in place, and using a different memory layout for each download.
Your comment proposes a much simpler scenario. Companies have a much stronger incentive to protect access to the service, and there will be legal agreements in effect regarding their liability if they fail to protect access.
The simplest approach is what Amazon does: provide a secret key, and require all clients to encrypt with that secret key. Yes, rogue employees within those companies can walk away with the secret. So you give the company the option (or maybe require them) to change the key on a regular basis. Perhaps daily.
You can enhance that with an IP check on all accesses: each customer will provide you with a set of valid IP addresses. If someone walks out with the desktop software, they still can't use it.
Or, you can require that your service be proxied by the company. This is particularly useful if the service is only accessed from inside the corporate firewall.
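The Amazon-style scheme mentioned above, signing each request with the shared secret rather than sending the secret itself, can be sketched with the standard library. The header names and canonicalization below are my own assumptions (this is not AWS SigV4):

```python
import hashlib
import hmac
import time


def sign_request(secret, method, path, body):
    """Client side: produce headers carrying a timestamp and an HMAC-SHA256
    over the request. The timestamp is included in the MAC, so a captured
    signature can't be replayed later or against a different request."""
    ts = str(int(time.time()))
    msg = b"\n".join([method.encode(), path.encode(), ts.encode(), body])
    mac = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Signature": mac}


def verify_request(secret, method, path, body, headers, max_skew=300):
    """Server side: reject stale timestamps, then recompute and compare the MAC."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False
    msg = b"\n".join([method.encode(), path.encode(),
                      headers["X-Timestamp"].encode(), body])
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])


secret = b"rotate-me-daily"
h = sign_request(secret, "POST", "/orders", b'{"qty": 1}')
print(verify_request(secret, "POST", "/orders", b'{"qty": 1}', h))  # True
print(verify_request(secret, "POST", "/other", b'{"qty": 1}', h))   # False
```

Rotating the secret daily, as suggested above, just means the server accepts the current (and perhaps previous) key when verifying.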
Encrypt it (the secret key), hard-code it, and then obfuscate the program. Use HTTPS for the web service so that it is not caught by network sniffers.
Generate the key using hardware-specific IDs - processor ID, MAC address, etc. Think of a deterministic GUID.
You can then encrypt it and send it over the wire.
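A deterministic machine ID along those lines can be sketched with the standard library; `uuid.getnode()` returns the MAC address when one is available (note it can fall back to a random value on some systems, so treat this as best-effort):

```python
import hashlib
import uuid


def machine_key(app_salt=b"my-app-v1"):
    """Derive a stable, deterministic identifier from this machine's MAC
    address. Hashing keeps the raw hardware ID off the wire; the salt
    ('my-app-v1' is an assumed app-specific constant) makes the key
    distinct per application even on the same machine."""
    mac = uuid.getnode().to_bytes(6, "big")  # getnode() is a 48-bit value
    return hashlib.sha256(app_salt + mac).hexdigest()


# Deterministic: the same machine always produces the same key.
print(machine_key() == machine_key())  # True
```

As the accepted answer notes, this only raises the bar: anyone who reverse-engineers the derivation can reproduce the key, so it identifies a machine rather than authenticating it.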