Hi, I am trying to implement custom authorization using API Gateway and Lambda. My current understanding is as follows: I have created a simple GET method and deployed it to a Dev environment, and created a Lambda authorizer that returns an IAM policy. I used the Python blueprint api-gateway-authorizer-python. Below is the format of the response the authorizer should return:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "execute-api:Invoke",
      "Effect": "Deny",
      "Resource": "arn:aws:execute-api:us-east-1:{ACCOUNTID}:{APIID}/ESTestInvoke-stage/GET/"
    }
  ]
}
In the above IAM policy, Resource is the ARN of my API Dev stage. What does Action indicate? Also, to test this now, how can I get a token? I want to test it from Postman. I am just confused here. I have my AWS account, and is the authorization simply that my current account has access to this Dev stage? How does it work internally? To store all the permissions, do we need to maintain another DB? Can someone help me understand this? Any help would be appreciated. Thanks
To get a token you need an identity provider. Amazon Cognito is one of those (Google and Facebook work as well). To understand that policy you have to understand the chain of calls.
Suppose a client calls an API endpoint (GET /orders). This first triggers a service Lambda so the token can be verified. If the verification is successful, another Lambda (GetOrder, a business Lambda this time) is invoked by API Gateway.
If your service Lambda (the Lambda authorizer) returns a policy like this:
{
  "principalId": "apigateway.amazonaws.com",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [{
      "Action": "execute-api:Invoke",
      "Effect": "Allow",
      "Resource": "arn:aws:execute-api:{REGION}:{ACCOUNT_ID}:{API_ID}/Prod/GET/"
    }]
  }
}
then the API Gateway service (i.e. the principalId equals apigateway.amazonaws.com) is allowed (i.e. the Effect equals Allow) to invoke (i.e. the Action equals execute-api:Invoke) the given API resource (i.e. the Resource equals arn:aws:execute-api:{REGION}:{ACCOUNT_ID}:{API_ID}/Prod/GET/).
In your case the ARN that you return is the one from the API Gateway test-invoke console, but it should point to your real stage and method.
This article may help.
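For illustration, a minimal sketch in Python of a TOKEN-type Lambda authorizer, loosely following the api-gateway-authorizer-python blueprint mentioned in the question. The token check below is a placeholder, not real validation:

def lambda_handler(event, context):
    # For a TOKEN authorizer, API Gateway passes the caller's token and the
    # ARN of the method being invoked.
    token = event.get("authorizationToken", "")
    method_arn = event["methodArn"]  # e.g. arn:aws:execute-api:us-east-1:{ACCOUNTID}:{APIID}/Dev/GET/

    # Placeholder check: allow one hard-coded test token, deny everything else.
    effect = "Allow" if token == "allow-me" else "Deny"

    return {
        "principalId": "user|test-user",  # an identifier for the calling principal
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": method_arn,
                }
            ],
        },
    }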
OK, so what are custom authorisers for API Gateway? Custom authorisers let you define your own authentication and authorisation logic.
How do you get the token? That's part of your authentication and authorisation logic. If you are deploying your services on AWS, you can use AWS Cognito; API Gateway also supports Cognito authorizers.
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
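As a rough illustration with boto3, assuming a user pool app client with the USER_PASSWORD_AUTH flow enabled (the ids and credentials below are placeholders), you could fetch a test token like this:

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

resp = cognito.initiate_auth(
    ClientId="YOUR_APP_CLIENT_ID",       # placeholder app client id
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={
        "USERNAME": "test-user",         # placeholder
        "PASSWORD": "test-password",     # placeholder
    },
)

tokens = resp["AuthenticationResult"]
print("IdToken:", tokens["IdToken"])     # paste this into Postman's Authorization header
print("AccessToken:", tokens["AccessToken"])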
Using Postman to test? It's handy to use Postman. I use it.
How does the permission work internally? You can use a token to authenticate a user. (If you are using a JWT, you can also verify the user's claims.)
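For illustration, a rough sketch of verifying a Cognito-issued ID token and reading its claims, assuming the PyJWT package (pip install pyjwt[crypto]); the region, pool id and client id are placeholders:

import jwt
from jwt import PyJWKClient

REGION = "us-east-1"                       # placeholder
USER_POOL_ID = "us-east-1_XXXXXXX"         # placeholder
APP_CLIENT_ID = "YOUR_APP_CLIENT_ID"       # placeholder

JWKS_URL = f"https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}/.well-known/jwks.json"

def verify_id_token(token: str) -> dict:
    # Fetch the public signing key matching the token's "kid" header.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # Verify signature, expiry and audience, and return the claims.
    return jwt.decode(token, signing_key.key, algorithms=["RS256"], audience=APP_CLIENT_ID)

# claims = verify_id_token(id_token)
# claims["sub"] or claims.get("cognito:groups") can then drive authorization decisions.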
Do you need an internal DB? This depends entirely on your use case. If your use case is as simple as all users being treated equally, you might not need a DB. Let's say some users can access some additional features; you may still not need a DB (you can use claims). But if your application becomes complicated and you have to manage different access permissions, users, groups, etc., you may well need a DB.
My setup:
- Mobile Hub
- Cognito User Pool
- Api Gateway
- DynamoDB
What I got working so far:
The user can sign up/in with the Cognito User Pool and gets an IdToken and AccessToken.
The IdToken is used with the Api Gateway Cognito Authorizer to access the Api Gateway.
The mapping of the user's sub into the integration message to DynamoDB works:
"userId": {
"S": "$context.authorizer.claims.sub"
}
Restricting access to rows that do not belong to the user in a DynamoDB table does not work.
The DynamoDB tables were created using Mobile Hub's Protected Table feature, which creates the following policy condition:
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${cognito-identity.amazonaws.com:sub}"
]
}
}
But that's not working, because this expression returns the Identity Pool user ID and NOT the User Pool sub. First, I'm not using Identity Pools, and second, I want to use the User Pool sub here.
I found out that ${cognito-idp.<REGION>.amazonaws.com/<POOL-ID>:sub} should do the trick, but that's not working either.
If I hardcode the condition to use the sub of my test user, everything works as expected, so the policy itself is okay; it's only the expression to get the sub of the current user that is not working correctly.
Is it possible to debug IAM policies to see what the values of the expressions are at runtime?
Any Ideas, hints, suggestions?
Thanks in advance.
I got the answer now from an AWS dev. You are only able to use the
${cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXX:sub}
variable if you configure the Cognito User Pool as an OpenID Connect provider directly in IAM.
But there is a big problem: in the OpenID Connect provider configuration you need to update the SSL thumbprint of the service endpoint whenever the certificate changes, and you are not able to tell when the AWS certificate has changed.
I finally figured this out by using principal tags (aws:PrincipalTag).
The prerequisite is making sure that the IAM role that the Cognito user assumes has the sts:TagSession permission in its assume-role (trust) policy. This allows principal tags to be passed after a successful login to Cognito.
On the Cognito Identity Pool, open Authentication providers and find the Cognito provider. Make sure that "Attributes for access control" uses the default mappings, which map username to sub, or that you have a custom rule set that passes the sub claim to a principal tag.
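If you prefer to set that mapping with code rather than the console, a hedged boto3 sketch (the identity pool id and provider name are placeholders):

import boto3

cognito_identity = boto3.client("cognito-identity", region_name="us-east-1")

cognito_identity.set_principal_tag_attribute_map(
    IdentityPoolId="us-east-1:xxxx-xxxx-xxxx-xxxx",  # placeholder identity pool id
    IdentityProviderName="cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXX",  # placeholder user pool provider
    UseDefaults=False,
    # Map the token's "sub" claim to a principal tag addressable as ${aws:PrincipalTag/username}.
    PrincipalTags={"username": "sub"},
)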
Finally, you can use the passed tag in the role policy's condition to only allow access to resources owned by that sub:
{
  "Action": [
    "dynamodb:UpdateItem",
    "dynamodb:Query",
    "dynamodb:PutItem",
    "dynamodb:GetItem"
  ],
  "Condition": {
    "ForAllValues:StringEquals": {
      "dynamodb:LeadingKeys": "${aws:PrincipalTag/username}"
    }
  },
  "Effect": "Allow",
  "Resource": "arn:aws:dynamodb:*:*:table/MyTableName",
  "Sid": ""
}
With the recent release of API Gateway Cognito Custom Authorizers, I'm attempting to use Cognito, API Gateway and S3 together for authenticated access control without Lambdas.
Authorizing with API Gateway works as it should (with trust relationships for the API Gateway execution role set correctly), but I can't seem to get the resource policy to capture the Cognito user ID sub variable for fine-grained access control to S3 resources based on user ID.
Here's the current flow I'm trying to accomplish:
Authenticate with Cognito and get valid token
Send token to API Gateway to gain access to S3 bucket (through AWS Service integration type)
Fine-grained access to only the user's directory
Return S3 object (based on API endpoint)
Here's my current resource policy for the API Gateway execution role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cognito-idp:*",
        "cognito-sync:*",
        "cognito-identity:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/${cognito-identity.amazonaws.com:sub}/*"
      ]
    }
  ]
}
Everything works as it should but this IAM variable (in the policy attached to the API Gateway execution role) doesn't seem to be right.
I came across this StackOverflow article and tried using both formats, us-east-1:xxxx-xxxx-xxxx-xxxx and xxxx-xxxx-xxxx-xxxx, but neither worked. I'm using the sub attribute found in the Cognito User Pool user info. If I hardcode the folder in S3 to the Cognito user ID sub, it works just fine.
How do I get the Cognito variable to work in the API Gateway's execution role policy?
Here are a couple other articles I found related to the question on the AWS forums:
Cognito IAM variables not working for assumed-role policies
What cognito information can we use as IAM Variables?
That's not the sub that the variable expects. There is no way to use Cognito User Pool attributes in a policy. The sub that you want is the Cognito identity ID, which is the ID of the user in the Cognito identity (federated) pool. You can get this ID by using the GetId API call. I would suggest you store this ID as a custom attribute in your Cognito User Pool so you don't have to keep making the call.
You can read more about this identity id here.
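For illustration, a rough boto3 sketch of fetching that identity id (the identity pool id, user pool id and token are placeholders):

import boto3

cognito_identity = boto3.client("cognito-identity", region_name="us-east-1")

resp = cognito_identity.get_id(
    IdentityPoolId="us-east-1:xxxx-xxxx-xxxx-xxxx",  # placeholder identity pool id
    Logins={
        # key: the user pool provider name, value: the user's ID token
        "cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXX": "ID_TOKEN_HERE",
    },
)

identity_id = resp["IdentityId"]  # the value that ${cognito-identity.amazonaws.com:sub} resolves to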
I have a use case where I need a temporary AWS STS token made available for each authenticated user (authenticated using the company IdP). These tokens will be used to push some data to AWS S3. I am able to get this flow working by using the SAML assertion in the IdP response and integrating with AWS as the SP (IdP-initiated sign-on), similar to the one shown here:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html#CreatingSAML-configuring
But as STS allows a token validity of at most 1 hour, I want to refresh those tokens before expiry so that I don't have to prompt the user for credentials again (bad user experience). Also, as these are company login credentials, I can't store them in the application.
I was looking at the AWS IAM trust policy, and one way to do this is adding an 'AssumeRole' entry to the existing SAML trust policy as shown below (the second entry in the policy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::xxxxxxxxxxxx:saml-provider/myidp.com"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:sts::xxxxxxxxxxxx:assumed-role/testapp/testuser"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
So the first time testuser logs in and uses the AssumeRoleWithSAML API/CLI, he will get temporary credentials. Next, he can use the 'AssumeRole' API/CLI with those credentials, so that he can keep refreshing the tokens without requiring IdP credentials.
As can be seen, this works only for the STS user with ARN "arn:aws:sts::xxxxxxxxxxxx:assumed-role/testapp/testuser", as only that user can assume the role to refresh tokens. But I need a generic way where any logged-in user can generate STS tokens.
One way is to use wildcard characters in the trust policy Principal, but it looks like that is not supported. So I am stuck with asking for credentials every time the tokens expire. Is there a way to solve this?
thanks,
Rohan.
I have been able to get this working by specifying a role instead of an assumed-role in the IAM trust policy. Now my users can indefinitely refresh their tokens as long as they have assumed the testapp role:
"Principal": {
"AWS": "arn:aws:sts::xxxxxxxxxxxx:role/testapp"
},
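As a rough sketch of the refresh step (shown here with boto3; the role ARN and session name are placeholders), the client re-assumes the same role using its current temporary credentials before they expire:

import boto3

def refresh(creds: dict) -> dict:
    # Build an STS client from the temporary credentials obtained earlier
    # (initially via AssumeRoleWithSAML).
    sts = boto3.client(
        "sts",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::xxxxxxxxxxxx:role/testapp",  # placeholder role ARN
        RoleSessionName="testuser",                        # placeholder session name
    )
    return resp["Credentials"]  # fresh AccessKeyId / SecretAccessKey / SessionToken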
AWS STS supports longer role sessions (up to 12 hours) for the AssumeRole* APIs. This was launched on 2018-03-28; here is the AWS what's-new link: https://aws.amazon.com/about-aws/whats-new/2018/03/longer-role-sessions/. With that you need not refresh at all, as I assume a typical workday is < 12 hours :-)
Your question is one I was working on solving myself. We have a WPF desktop application that logs into AWS through Okta and then uses the AssumeRoleWithSAML API to get the STS token.
Using this flow invoked the Role Chaining rules and thus our token would expire every hour.
What I did to overcome this is to cache the initial SAMLResponse data from Okta (after the user does MFA) and use that information to ask for a new token every 55 minutes. I then use that new token for any future AWS resource calls.
Once 12 hours have passed, I ask the user to authenticate with Okta again.
For those wondering about implementation for their own WPF apps, we use the AWS Account Federation App in Okta.
The application uses 2 packages:
Okta .NET Authentication SDK
AWS SDK for .NET
After setting up your AWS Account Federation App in Okta, use the AWS Embed Url and SAML Redirect Url in your application to get your SAMLResponse data.
This is driving me totally crazy.
I'm trying to call an API Gateway REST service from an Angular app. I have restricted the service in API Gateway with IAM access, so I need to call it with IAM authentication. I'm using temporary IAM credentials that I have already obtained.
My call to the service fails saying there is no Access-Control-Allow-Origin header. When I try to call my service from Postman I don't see the required header on the response either. On my Postman call I get a 403 status back since my authentication actually failed, but I still expected the header. If I remove the IAM authentication on the method it works: I get back the response string and the header I'm looking for.
What am I missing here? Surely even if my authentication failed I should still get back that header so that I can actually see the message that says the authentication failed.
Any help will be much appreciated
Thanks
From this link on the AWS forum it appears that there is an open issue related to exactly what I'm experiencing here... Not sure if what I'm looking for is currently possible.
Did you allow your Cognito role to access the API endpoints?
You have to use a policy similar to this one for the Cognito role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:invoke"
      ],
      "Resource": [
        "arn:aws:execute-api:<API_REGION>:<ACCOUNT_ID>:<API_ID>/*"
      ]
    }
  ]
}
If you use IAM authentication for your endpoint and you do not allow the Cognito role to access your API, API Gateway will generate a 403 response. This response is not defined by your API, because the request is refused by API Gateway before it ever gets to the point of being processed by your API. That is why your CORS configuration does not apply.
Please ensure you do not have AWS_IAM authentication enabled on your OPTIONS method, otherwise the browser will not be able to make the pre-flight request and your request will fail. You should still have AWS_IAM enabled on your other methods.
I have been struggling to figure out how to communicate with the Amazon ES service from my EC2 instances.
The documentation clearly states that the Amazon ES service supports IAM User & Role based access policies. http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies
However, when I have this access policy for my ES domain:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789:role/my-ec2-role"
},
"Action": "es:*",
"Resource": "arn:aws:es:us-west-2:123456789:domain/myDomain/*"
}
]
}
I can't log into an ec2 instance and run a curl to hit my elasticsearch cluster.
Trying to do a simple curl of the _search API:
curl "http://search-myDomain.es.amazonaws.com/_search"
Produces an authentication error response:
{"Message":"User: anonymous is not authorized to perform: es:ESHttpGet on resource: arn:aws:es:us-west-2:123456789:domain/myDomain/_search"}
Just to be extra safe I put the AmazonESFullAccess policy on my IAM role; it still doesn't work.
I must be missing something, because being able to programmatically interact with Elasticsearch from ec2 instances that use an IAM Role is essential to getting anything accomplished with the Amazon ES Service.
I also see this contradictory statement in the docs.
IAM-based Policy Example: You create IAM-based access policies by using the AWS IAM console rather than the Amazon ES console. For information about creating IAM-based access policies, see the IAM documentation.
That link to the IAM documentation goes to the IAM home page and contains exactly zero information about how to do it. Anyone got a solution for me?
When using IAM with AWS, you must sign your requests. curl doesn't support signed requests (signing involves hashing the request and adding the signature to a request header). You can use one of their SDKs that has the signing algorithm built in, and then submit that request.
See:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/what-is-amazon-elasticsearch-service.html#signing-requests
You can find the SDKs for popular languages here:
http://aws.amazon.com/tools/
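For illustration, a rough Python sketch of a signed _search request using botocore's SigV4 signer and the requests package (the region and domain endpoint are placeholders):

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-west-2"                                                 # placeholder
url = "https://search-myDomain.us-west-2.es.amazonaws.com/_search"   # placeholder endpoint

# On an EC2 instance this picks up the instance role's temporary credentials.
credentials = boto3.Session().get_credentials()

request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "es", region).add_auth(request)               # sign for the "es" service

response = requests.get(url, headers=dict(request.headers.items()))
print(response.status_code, response.text)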
First, you said you can't log into an EC2 instance to curl the ES instance? You can't log in? Or you can't curl it from EC2?
I have my Elasticsearch (Service) instance open to the world (with nothing on it) and am able to curl it just fine, without signing. I changed the access policy to test, but unfortunately it takes forever to come back up after changing it...
My policy looks like this:
{ "Version": "2012-10-17", "Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": "*",
"Action": "es:*",
"Resource": "arn:aws:es:us-east-1:843348267853:domain/myDomain/*"
},
{
"Sid": "",
"Effect": "Allow",
"Principal": "*",
"Action": "es:*",
"Resource": "arn:aws:es:us-east-1:843348267853:domain/myDomain"
}
]
}
I realize this isn't exactly what you want, but start off with this (open to the world), curl it from outside AWS and test it. Then restrict it; that way you're able to isolate the issue.
Also, I think you have an issue with the "Principal" in your access policy. You have your EC2 role there. I understand why you're doing that, but I think the Principal requires a user, not a role.
See below:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies
Principal: Specifies the AWS account or IAM user that is allowed or denied access to a resource. Specifying a wildcard (*) enables anonymous access to the domain, which is not recommended. If you do enable anonymous access, we strongly recommend that you add an IP-based condition to restrict which IP addresses can submit requests to the Amazon ES domain.
EDIT 1
To be clear, you added the AmazonESFullAccess policy to my-ec2-role? If you're going to use IAM access policies, I don't think you can also have a resource-based policy attached to the domain (which is what you're doing).
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html
For some AWS services, you can grant cross-account access to your resources. To do this, you attach a policy directly to the resource that you want to share, instead of using a role as a proxy. The resource that you want to share must support resource-based policies. Unlike a user-based policy, a resource-based policy specifies who (in the form of a list of AWS account ID numbers) can access that resource.
Possibly try removing the access policy altogether?
Why don't you create a proxy with an Elastic IP and allow your proxy to access your ES?
Basically there are three ways you can limit access to your ES:
Allow everyone
An IP whitelist
Signing requests with the access key and secret key provided by AWS
I'm using two of them: in my PHP apps I prefer to use a proxy for the connection to ES, and in my Node.js app I prefer to sign my requests using the http-aws-es node module.
It's useful to create a proxy environment because my users need to access the Kibana interface to see some reports, and that's possible because they have the proxy configured in their browsers =)
I must recommend that you close access to your ES indexes, because it's pretty easy to delete them; anyone can run curl -XDELETE https://your_es_address/index. You might say: "how will other users get my ES address?" and I will answer: "security through obscurity isn't real security".
My access policy is basically something like this:
http://pastebin.com/EUKT1ekX
I encountered this issue recently, and the root problem is that none of the Amazon SDKs yet support calling Elasticsearch operations like search, put, etc.
The only workaround at the moment is to execute requests directly against the endpoint using signed requests:
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
The example there is for calling EC2, but it can be modified to call Elasticsearch instead. Just change the "service" value to "es". From there, you have to fill in values for:
the endpoint (the full URL of your cluster, including the operation, without request parameters)
the host (the part between https:// and your canonical URI, like /_status)
the canonical URI, which is the path from the first / inclusive (like /_status) but without the query string
the request parameters (everything after the ? inclusive)
Note that I've only managed to get this working so far using AWS credentials as the assumption is that you pass in an access key and secret key to the various signing calls (access_key and secret_key in the example). It should be doable using IAM roles but you'll have to call into the security token service first to get temporary credentials that can be used to sign the request. Until you do that, be sure to edit your access policy on the Elasticsearch cluster to allow user creds (user/
You need to sign your request, and unfortunately this is no longer supported by the official Elasticsearch library. Check this GitHub issue: https://github.com/elastic/elasticsearch-js/issues/1182#issuecomment-630641702
They want to enforce their own cloud solution