AWS Cognito single use access token - amazon-web-services

Is there a way to issue access tokens that are valid for a single use? My use case is to invoke Lambda functions from the browser, but I want to restrict the number of invocations to one per token.
If a short-lived token is issued, there is still potential for it to be used for multiple invocations.
I am using DeveloperAuthenticatedIdentities to issue the temporary tokens.

There is no such thing with AWS Cognito.
You can implement a custom authorizer with API Gateway to manage your invocation count. If the same URL is accessed more than once, you can deny the request.
More info on Custom Authorizers.
https://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html
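As an illustration, a minimal sketch of such a single-use check inside a custom authorizer (Python; the DynamoDB table name and the idea of keying on the raw token are assumptions, not something from the question):

    import boto3
    from botocore.exceptions import ClientError

    # Hypothetical table whose partition key is the token string.
    used_tokens = boto3.resource("dynamodb").Table("UsedTokens")

    def handler(event, context):
        token = event["authorizationToken"]
        try:
            # Conditional put succeeds only the first time this token is seen.
            used_tokens.put_item(
                Item={"token": token},
                ConditionExpression="attribute_not_exists(#t)",
                ExpressionAttributeNames={"#t": "token"},
            )
            effect = "Allow"
        except ClientError as err:
            if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise
            effect = "Deny"  # token has already been used once
        return {
            "principalId": "single-use-client",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }],
            },
        }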
Hope it helps.

AWS Cognito is not designed for that; however, you could achieve it at the cost of some undesirably expensive extra work:
1. Your API/app adds the user on behalf of the admin.
2. Your API/app removes the confirmed user after a certain amount of time.
You can see that this approach is not feasible even for a low number of users.
Better approach, if the routes are unique (still using Cognito)
1. Same as above.
2. You keep the list of routes as bucket names in S3; each bucket holds a file containing something like
{
  "accessed": false
}
When the user uses the token to access the route, your app checks the flag above, grants access, and sets it to true. You could even drop the file and use just the buckets themselves to represent the routes, removing a bucket once it has been accessed.
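A rough sketch of that flag check (boto3; the bucket layout and key name are made up for illustration, and note the read-then-write is not atomic):

    import json
    import boto3

    s3 = boto3.client("s3")

    def grant_once(route_bucket):
        # Return True and mark the route as accessed, or False if already used.
        flag = json.loads(
            s3.get_object(Bucket=route_bucket, Key="flag.json")["Body"].read()
        )
        if flag.get("accessed"):
            return False
        # Not atomic: two concurrent requests could both pass the check above.
        s3.put_object(
            Bucket=route_bucket,
            Key="flag.json",
            Body=json.dumps({"accessed": True}),
        )
        return True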
Much better approach
1. The application could generate/verify short-expiry JWT tokens to support short-lived authorized users. The downside here is the development time, which might lead to security risks if the application is not thoroughly tested.
2. Same as the above approach (using S3).
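A minimal sketch of issuing and checking such a short-expiry token with the PyJWT library (the secret and claim names are placeholders):

    import time
    import jwt  # PyJWT

    SECRET = "replace-with-a-real-secret"

    def issue_token(user_id, ttl_seconds=60):
        now = int(time.time())
        return jwt.encode(
            {"sub": user_id, "iat": now, "exp": now + ttl_seconds},
            SECRET,
            algorithm="HS256",
        )

    def verify_token(token):
        # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError if not acceptable.
        return jwt.decode(token, SECRET, algorithms=["HS256"])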

For limiting usage, I think the best approach is to use usage plans.
It is not the token's responsibility to restrict usage; API keys exist for that purpose.
Have a look at this AWS page.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
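For illustration, a usage plan with throttling and a quota, tied to an API key, can be created roughly like this (boto3 sketch; the API id, stage and numbers are placeholders):

    import boto3

    apigw = boto3.client("apigateway")

    plan = apigw.create_usage_plan(
        name="per-customer-plan",
        apiStages=[{"apiId": "abc123", "stage": "prod"}],  # placeholder API/stage
        throttle={"rateLimit": 10.0, "burstLimit": 20},
        quota={"limit": 1000, "period": "MONTH"},
    )

    key = apigw.create_api_key(name="customer-1", enabled=True)
    apigw.create_usage_plan_key(
        usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
    )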

Related

Counting AWS lambda calls and segmenting data per api key

Customers (around 1000) sign up to my service and receive a customer-unique api key. They then use the key when calling an AWS Lambda function through AWS API Gateway to access data in DynamoDB.
Requirement 1: The customers get billed by the number of api calls, so I have to be able to count those. AWS only provides metrics for the total number of api calls per Lambda, so I have a few options:
1. At every api hit, increment a counter in DynamoDB.
2. At every api hit, enqueue a message in SQS, receive it in a "hit counter" Lambda and increment a counter in DynamoDB.
3. Deploy a separate Lambda for each customer and use the AWS built-in call counter.
Requirement 2: The data that the lambda can access is unique for each customer and thus dependent on the api key provided.
To enable this I also have a number of options:
1. Store the required api key together with the data that the customer has the right to access.
2. Deploy a separate Lambda for each customer and use API Gateway to protect it with a key.
3. Create a separate endpoint in API Gateway for each customer and protect it with the api key.
None of the options above seem like a good way to design the solution. Is there a canonical way of doing this? If not, which of the options above is the best? Have I missed an obvious solution due to my unfamiliarity with AWS?
I will try to break your problems down with my experience, but maybe Michael - Sqlbot or John Rotenstein may be able to give more appropriate answers.
Requirement 1
1) This sounds like a good approach. I don't see anything critical here.
2) This, IMHO, is the best of the 3. It will decouple data access from the billing service, which is a great thing in a microservices world (see the sketch after this list).
3) This is not scalable. Imagine your system grows and you end up with 10K Lambda functions. Not only will you have to build a very reliable mechanism to automate this process, but you'll also need to monitor 10K different things (imagine CloudWatch logs, API Gateway, etc.), not to mention you'll have 10 thousand functions with exactly the same code (client-specific parameters apart). I wouldn't even think about this one.
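A rough sketch of option 2's "hit counter" Lambda, assuming an SQS trigger whose message body carries the customer's api key (the table and attribute names are illustrative):

    import boto3

    # Hypothetical table keyed by the customer's api key.
    calls = boto3.resource("dynamodb").Table("ApiCallCounts")

    def handler(event, context):
        # Triggered by SQS: each record body is assumed to be the caller's api key.
        for record in event["Records"]:
            calls.update_item(
                Key={"apiKey": record["body"]},
                UpdateExpression="ADD callCount :one",
                ExpressionAttributeValues={":one": 1},
            )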
Requirement 2
1) It could work, and it fits nicely in the DynamoDB model of doing things: store as much data as you can in a single table, so you can fetch everything in one go. From what I see, you could even use this ApiKey as your partition key and, for the sake of simplicity in this answer, store the client's data as JSON in a column named data. Since your query only needs to query by the ApiKey, storing a JSON in DynamoDB won't hurt (do keep in mind, however, that if you need to query by any of its JSON attributes then you're in bad shape, since DynamoDB's query capabilities are very limited). See the sketch after this list.
2) No, because of Requirement 1.3
3) No, because of the above.
If you still need to store the ApiKey in a different table so you can run different analyses and keep finer-grained control over the client's calls, access, billing, etc., that's not a problem either; just make sure you duplicate your ApiKey in your ClientData table instead of creating a FK (DynamoDB doesn't support FKs, so you'd need to manage these constraints yourself). Duplication is just fine in a NoSQL world.
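A minimal sketch of option 1, using the api key as the partition key of a hypothetical ClientData table:

    import boto3
    from boto3.dynamodb.conditions import Key

    client_data = boto3.resource("dynamodb").Table("ClientData")

    def data_for(api_key):
        # Fetch everything this customer may access in one query on the partition key.
        return client_data.query(
            KeyConditionExpression=Key("apiKey").eq(api_key)
        )["Items"]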
Your use case is clearly a Multi-Tenancy one, so I'd also recommend you to read Multi-Tenant Storage with Amazon DynamoDB which will give you some more insights and broaden your options a little bit. Multi-Tenancy is not an easy task and can give you lots of headaches if not implemented correctly. I think this is why AWS has also prepared this nice read for us :)
Happy to continue this on the comments section in case you have more info to share
Hope this helps!

AWS Cognito admin_get_user performance on large(r) scale

I have to implement a Pre Token Generation Lambda in order to add custom attributes into the Access Token. The custom attribute/value is stored in the user settings of each user within the Cognito User Pool and I can retrieve it with the boto3 admin_get_user function.
The question I have is whether it is a good idea to call admin_get_user (or any other function that loads data from Cognito) from a performance point of view. Does Cognito internally scale and handle a burst of requests well? Or is it better to retrieve the custom attributes from a different place because Cognito is perhaps not meant for such lookups?
My Lambda will be executed on every successful authentication and, more importantly, on every token refresh, which happens every 60 minutes (given that every issued access token expires after at most 60 minutes).
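For reference, a minimal sketch of such a Pre Token Generation trigger looking the attribute up with admin_get_user (the custom attribute and claim names are just examples, and whether the added claims land in the ID token or the access token depends on the trigger version configured):

    import boto3

    cognito = boto3.client("cognito-idp")

    def handler(event, context):
        user = cognito.admin_get_user(
            UserPoolId=event["userPoolId"], Username=event["userName"]
        )
        attrs = {a["Name"]: a["Value"] for a in user["UserAttributes"]}
        # claimsOverrideDetails adds/overrides claims on the issued token.
        event["response"]["claimsOverrideDetails"] = {
            "claimsToAddOrOverride": {"custom:tenant": attrs.get("custom:tenant", "")}
        }
        return event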
I know the question is old. I recently faced the same issue. So just adding an answer to help others.
The documented quota/limit for AdminGetUser is 5 requests per minute. You can ask AWS to increase the limit, or you can configure the AWS client you are using with a backoff strategy and retry configuration.
You can find the limits/quotas for API calls here: https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html
An interesting article on how backoff strategies work and which one to choose: https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
I would recommend https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/retry/PredefinedBackoffStrategies.FullJitterBackoffStrategy.html
For more info read https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html
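Those links cover the Java SDK; since the question uses boto3, a comparable retry/backoff setup in Python might look like this (the retry numbers, pool id and username are placeholders):

    import boto3
    from botocore.config import Config

    # "adaptive" retry mode applies client-side rate limiting plus
    # exponential backoff with jitter on throttling errors.
    cfg = Config(retries={"max_attempts": 10, "mode": "adaptive"})
    cognito = boto3.client("cognito-idp", config=cfg)

    user = cognito.admin_get_user(
        UserPoolId="eu-west-1_example",   # placeholder
        Username="alice",                 # placeholder
    )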

Highly granular access control of non-AWS resources in AWS Cognito

I've got an ASP.NET Web API that is using AWS Cognito for authentication and resource access control. We've been using user pool groups up until this point to define certain entities users have access to (non-aws resources in a DB).
The problem is, now that our requirements for access control are more detailed, we are hitting the group cap of 25 per pool. I've looked into alternatives within Cognito, such as using custom attributes, but I've found that there are also limits on the number of custom attributes per pool, and that they only support string & number types, not arrays.
Another alternative I've explored is intercepting the token when it hits our API, and adding claims based on permissions mapped in the DB. This works reasonably well, but this is only a solution server side, and I'm not entirely thrilled with needing to intercept every request to add claims with a DB call (not great for performance). We need some of these claims client side as well, so this isn't a great solution.
Beyond requesting a service limit increase to the number of groups available per pool, am I missing anything obvious? Groups seem to be the suggested way to do this, based on documentation from AWS. Even if we went for a multi-tenant approach with multiple pools, I think the 25-group cap is still going to be an issue.
https://docs.aws.amazon.com/cognito/latest/developerguide/scenario-backend.html
You can request limit increases for nearly any part of the service. They will consider it. Sometimes this is more straightforward than building side systems, as you point out. See https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html

How do I configure an Amazon AWS Lambda function to prevent tailing the log in the response?

Please see this:
http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
LogType
You can set this optional parameter to Tail in the request only if you specify the InvocationType parameter with value RequestResponse. In this case, AWS Lambda returns the base64-encoded last 4 KB of log data produced by your Lambda function in the x-amz-log-result header.
Valid Values: None | Tail
So this means any user with valid credentials for invoking a function can also read the logs this function emits?
If so, this is an obvious vulnerability that can give some attacker useful information regarding processing of invalid input.
How do I configure an Amazon AWS Lambda function to prevent tailing the log in the response?
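To make the concern concrete, any caller with invoke permission can ask for the tail like this (a boto3 sketch; the function name is made up):

    import base64
    import boto3

    lam = boto3.client("lambda")

    resp = lam.invoke(
        FunctionName="my-function",        # placeholder
        InvocationType="RequestResponse",
        LogType="Tail",                    # requests the last 4 KB of logs
    )
    print(base64.b64decode(resp["LogResult"]).decode())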
Update 1
1) Regarding the comment: "If a hacker can call your Lambda function, you have more problems than seeing log files."
Not true: Lambda functions are also meant to be called directly from client code, using the SDK.
As an example, see the picture below from the book "AWS Lambda in Action":
2) Regarding the comment: "How is this a vulnerability exactly? Only someone you have provided AWS IAM credentials would be able to invoke the Lambda function."
Of course, clients do have some credentials, most of the time (for example,
from having signed in to your mobile app with their Facebook account, through Amazon Cognito). Am I supposed to trust all my users?
3) Regarding the comment: "Only if you have put some secure information to be logged."
Logs may contain sensitive information. I'm not talking about secret information like passwords, but simply information that helps the development team debug, or the security team find out about attacks. Applications may log all kinds of information, including why some invalid input failed, which can help an attacker learn what valid input looks like. Also, attackers can see all the information the security team is logging about their attacks. Not good. Even privacy may be at risk depending on what you log.
Update 2
It would also solve my problem if I could somehow detect the Tail parameter in the Lambda code. Then I would just fail with a "Tail not allowed" message. Unfortunately the Context object doesn't seem to contain this information.
I think you can't configure AWS Lambda to prevent tailing the log in the response. However, you could use your own logging component instead of the one provided by AWS Lambda, to avoid the possibility of exposing logs via the LogType parameter.
Otherwise, I see your point about adding complexity, but using API Gateway is the most common solution for letting client applications that you do not trust invoke Lambdas.
You're right: not only is it a bad practice, it obviously (as you already understood) introduces security vulnerabilities.
If you look carefully in the book you will also find this part:
which explains that, in order to be more secure, client requests should hit Amazon API Gateway, which exposes a clean API interface and calls the relevant Lambda function without exposing it to the outer world.
An example of such API is demo'ed in a previous page:
By introducing a middle-layer between the client and AWS-lambda, we take care of authentication, authorization, access and all other points of potential vulnerability.
This should really be a comment, but I am sorry that I do not yet have enough Stack Overflow reputation to post one.
Before commenting on this, please note that Lambda Invoke may result in more than one execution of your Lambda (per the AWS documentation):
Invocations occur at least once in response to an event and functions must be idempotent to handle this.
As LogType is documented as a valid option, I don't think you can prevent it in your backend. However, you need a workaround to handle it. I can think of two:
1. Generate a junk 4 KB tail log (with console.log(), for example). Then the attacker will only get junk. (This incurs cost only when an attacker actually requests the tail; see the sketch below.)
2. Use Step Functions. This is not only to hide the log but also to overcome the problem of 'invocations occur at least once' and have a predictable execution of your backend. It incurs cost though.
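A trivial sketch of option 1 in Python (do_real_work and the padding size are illustrative; the 4 KB figure comes from the documented tail limit):

    def handler(event, context):
        result = do_real_work(event)   # hypothetical application logic
        # Emit more than 4 KB of filler at the very end so the returned
        # x-amz-log-result tail contains nothing useful.
        print("x" * 5000)
        return result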

Pass IAM identity of AWS API-Gateway calls to backend server

We want to set up an existing API as SaaS using AWS.
Our code has been deployed via Elastic Beanstalk, and we created access to the methods via API Gateway to manage permissions.
We're now trying to log the users' activity for billing purposes.
Currently, the best solution we found involves full logging of the calls (enabled CloudWatch Logs + log full requests/responses data), which looks quite heavy and may even end up being expensive.
We reworked the request body in the integration request by adding a mapping template for the body, but this seems heavy and complicated; we hope there is a better solution we missed.
Basically, we replaced the default "passthrough" with a generated basic "passthrough" template and added a value "MyUserArn" : "$context.identity.userArn" to it, which fills the request body with a large mess but looks like "the most reliable way to avoid breaking something".
We'd like to just add the IAM user identifier in a header or query string parameter, but we failed to find out whether this is even possible. Several posts mention an "Invoke with caller credentials" option, but we didn't find this either.
Is this something related to Cognito, or something else?
Are we doing something wrong ?
You have a couple different options for getting this information, both of which have trade offs:
Your current solution: pulling the value from $context.identity in a mapping template and sending it to your Lambda as part of the body. It seems like you are opposed to this given your "large mess" comment, but ultimately you have control over the content passed to your Lambda.
Enable "Invoke with caller credentials" on your method and then use the identity value inside your Lambda. Currently this only works if you've used credentials vended from a Cognito authentication flow, and it does require that Lambda invocation also be part of your role policy, but it doesn't require any modification of the template.
UPDATE Apologies, I somehow missed that you were using Beanstalk and not Lambda. You can definitely just add a header to your integration request and simply pull its value from $context.identity.userArn.
UPDATE 2 Double apologies, when using context variables in headers, you omit the $ so you need to use context.identity.userArn.
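For example, such a header mapping on the integration request could be set up roughly like this (a boto3 sketch; the API id, resource id, header name and backend URI are placeholders):

    import boto3

    apigw = boto3.client("apigateway")

    apigw.put_integration(
        restApiId="abc123",                   # placeholder
        resourceId="res456",                  # placeholder
        httpMethod="GET",
        type="HTTP_PROXY",
        integrationHttpMethod="GET",
        uri="http://my-env.example.elasticbeanstalk.com/resource",  # placeholder backend
        requestParameters={
            # Note: no leading "$" when mapping context variables to headers.
            "integration.request.header.X-User-Arn": "context.identity.userArn",
        },
    )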