Using a Cognito Identity Pool, I have granted the Cognito user access to the S3 objects that match that user's attribute.
How can I view images in a bucket from a URL in a browser logged in as the Cognito user?
What I want to do
Split the S3 bucket into tenants so that a Cognito user can view only the image files of the tenant they belong to.
Display the images with an <img src=""> tag.
What I did
S3 object arn: arn:aws:s3:::sample-tenant-bucket/public/1/photos/test.jpg
Create Cognito User with custom attributes
aws cognito-idp admin-create-user \
--user-pool-id "ap-northeast-1_xxxx" \
--username "USER_NAME" \
--user-attributes Name=custom:tenant_id,Value=1
Cognito Identity Pool
Mapping of user attributes to principal tags
"tenantId": "custom:tenant_id"
Added sts:TagSession permission to the authenticated role
Attach policy to the authenticated role
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "*",
            "Resource": "arn:aws:s3:::sample-tenant-bucket/public/${aws:PrincipalTag/tenantId}/*",
            "Effect": "Allow"
        }
    ]
}
Hypothesis
Typing the object URL (https://sample-tenant-bucket.s3.ap-northeast-1.amazonaws.com/public/1/1670773819-1.jpg) into the address bar results in a 403, even when logged in to the AWS console as an administrator rather than as a Cognito user. I therefore concluded that plain object URLs cannot be used for non-public objects.
A presigned URL seems to work, but since presigned URLs work regardless of whether the user is logged in, I was wondering if there is another way to do it.
Maybe this question is the same as asking how to display an image in a bucket by URL in a browser that is already logged in with the object owner's AWS account.
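For reference, generating a presigned URL with the identity pool's temporary credentials is usually enough to satisfy the <img src=""> requirement, because the URL itself carries the signature. A minimal sketch with boto3, assuming the temporary credentials obtained for the Cognito user (all values are placeholders):
import boto3

# Temporary credentials issued by the Cognito Identity Pool for the logged-in
# user (placeholders; obtained e.g. via GetCredentialsForIdentity).
s3 = boto3.client(
    's3',
    region_name='ap-northeast-1',
    aws_access_key_id='<AccessKeyId>',
    aws_secret_access_key='<SecretAccessKey>',
    aws_session_token='<SessionToken>',
)

# The tag-scoped policy only allows public/<tenantId>/*, so the signed URL works
# for the user's own tenant, and requests for other tenants' prefixes get a 403.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'sample-tenant-bucket', 'Key': 'public/1/photos/test.jpg'},
    ExpiresIn=300,  # URL validity in seconds
)
print(url)  # can be used directly as <img src="...">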
Related
I am using Federated Identity with Okta being the IDP. I would like to add an Identity based policy which provides access to resources which are tagged with the user's Okta username. For each resource, I want to set the tag username and give it a value of the user who needs to access it.
"Condition": { "StringEquals": {"aws:ResourceTag/username": "${????}"} }
What should I add to the StringEquals condition so that the Okta username gets used?
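For what it's worth, if Okta passes the username as a session tag when the federated role is assumed (the same mechanism as the principal-tag mapping above), the condition should be able to reference it via aws:PrincipalTag; a hedged sketch, assuming the tag key is username:
"Condition": { "StringEquals": { "aws:ResourceTag/username": "${aws:PrincipalTag/username}" } }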
Is it possible to access two tables within one lambda function while one of the tables is in the same account as the lambda function and the other is in another account?
I've seen articles on cross-account access delegation using IAM roles here and there. But I'm not sure how the code should reflect accessing a resource from another account. This is how I usually access some DynamoDb table:
const AWS = require('aws-sdk');

const dynamodb = new AWS.DynamoDB();
const docClient = new AWS.DynamoDB.DocumentClient({ service: dynamodb });

docClient
    .get({
        TableName: 'SomeTable',
        Key: { id }
    })
    .promise();
Looking at the documentation, there's no mention of account ID in the constructor. So I'm not sure how I can have two connections at the same time, one pointing to one account and the other pointing to another account!?
Yes, it's possible to access two DynamoDB tables (one of which is in another AWS account) from the same Lambda function.
It's straightforward to access the same account's DynamoDB table:
dynamodb_client_of_same_account = boto3.client('dynamodb', region_name='us-east-1')
To access the table of another AWS account, a new DynamoDB client needs to be created. You need to create a cross-account IAM role and use STS to get temporary credentials. Follow the steps below to create the cross-account role (a sketch of the resulting trust policy follows step 1).
AWS Account A has the Lambda -----trying read/write on-----> AWS Account B with DynamoDB
Create an IAM Role in Account B to establish a trust relationship with Account A.
Go to IAM Roles -------> Create Role ------> select "Another AWS account" in the widget -----> type the AWS Account A ID and create the role.
Don't forget to add a DynamoDB access policy to this IAM role.
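For reference, the trust policy the console creates on the Account B role in step 1 would look roughly like this (the account ID is a placeholder):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::<Account-A-ID>:root" },
            "Action": "sts:AssumeRole"
        }
    ]
}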
Attach STS Assume Role policy to your AWS Lambda's Role in Account A
Create a new policy with the JSON below and attach it to your Lambda role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "<ARN of the role created in Step-1>"
        }
    ]
}
Use the below code to create a new client to access DynamoDB table in another account.
import boto3

roleARN = '<ARN of the role created in Step-1>'

# Get temporary credentials for the cross-account role via STS
client = boto3.client('sts')
response = client.assume_role(RoleArn=roleARN,
                              RoleSessionName='RoleSessionName',
                              DurationSeconds=900)

dynamodb_client_for_other_account = boto3.client(
    'dynamodb', region_name='us-east-1',
    aws_access_key_id=response['Credentials']['AccessKeyId'],
    aws_secret_access_key=response['Credentials']['SecretAccessKey'],
    aws_session_token=response['Credentials']['SessionToken'])
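With both clients in place, the Lambda can read each table independently; a quick sketch (the table names and key are placeholders):
# Table in the Lambda's own account
dynamodb_client_of_same_account.get_item(
    TableName='TableInAccountA',
    Key={'id': {'S': '123'}})

# Table in the other account, via the assumed-role credentials
dynamodb_client_for_other_account.get_item(
    TableName='TableInAccountB',
    Key={'id': {'S': '123'}})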
You could create an IAM user in the second account, add DynamoDB permissions, get an access key/secret for it, and then:
const dynamodb1 = new AWS.DynamoDB();
const dynamodb2 = new AWS.DynamoDB({
    accessKeyId: 'x',
    secretAccessKey: 'y',
    region: 'z',
});
dynamodb1 will work with the role permissions from the Lambda, and dynamodb2 with the IAM user from the second account.
I want to embed a QuickSight dashboard into an application. I have gone through the AWS QuickSight documents, but I did not find where to get the secure signed dashboard URL.
In order to generate the QuickSight secure dashboard URL, follow the steps below:
Step 1: Create a new identity pool. Go to https://console.aws.amazon.com/cognito/home?region=us-east-1 and click 'Create new identity pool'.
Give it an appropriate name.
Go to the Authentication providers section and select Cognito.
Enter the User Pool ID (your user pool's ID) and App Client ID (go to App clients in the user pool and copy the ID).
Click 'Create Pool', then click 'Allow' to create the identity pool's roles in IAM.
Step 2: Assign Custom policy to the Identity Pool Role
Create a custom policy with the below JSON.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "quicksight:RegisterUser",
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": "quicksight:GetDashboardEmbedUrl",
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": "sts:AssumeRole",
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
Note: if you want to restrict the user to only one dashboard, replace the * in the quicksight:GetDashboardEmbedUrl statement with the dashboard's ARN.
Then go to Roles in IAM, select the IAM role of the identity pool, and attach the custom policy to the role.
Step 3: Configuration for generating the temporary IAM (STS) user
Log in to your application with the user credentials.
To create the temporary IAM user, we use Cognito credentials.
When a user logs in, Cognito generates three tokens: an ID token, an access token and a refresh token. These tokens will be sent to your application server.
To create the temporary IAM user, we pass the Cognito ID token in the Logins map; the credentials will look like below.
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: "Identity pool ID",
    Logins: {
        // The Logins map expects the Cognito ID token (a JWT), not the access token
        'cognito-idp.us-east-1.amazonaws.com/UserPoolID': IdToken
    }
});
To generate temporary IAM credentials, we call the sts.assumeRole method with the parameters below.
var sts = new AWS.STS();
var params = {
    RoleArn: "Cognito Identity role arn",
    RoleSessionName: "Session name"
};
sts.assumeRole(params, function (err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else {
        console.log(data); // data.Credentials holds AccessKeyId, SecretAccessKey, SessionToken
    }
});
You can add additional parameters like duration (in seconds) for the user.
Now, we will get the AccessKeyId, SecretAccessKey and Session Token of the temporary user.
Step 4: Register the User in Quicksight
With the help of the same Cognito credentials used in Step 3, we register the user in QuickSight by calling the quicksight.registerUser method with the parameters below.
var quicksight = new AWS.QuickSight();
var params = {
    AwsAccountId: 'account id',
    Email: 'email',
    IdentityType: 'IAM',
    Namespace: 'default',
    UserRole: 'READER', // one of ADMIN | AUTHOR | READER | RESTRICTED_AUTHOR | RESTRICTED_READER
    IamArn: 'Cognito Identity role arn',
    SessionName: 'session name given in the assume role creation'
};
quicksight.registerUser(params, function (err, data1) {
    if (err) console.log('err register user'); // an error occurred
    else {
        // console.log('Register User1');
    }
});
Now the user will be registered in quicksight.
Step 5: Update the AWS configuration with the new credentials.
The code below shows how to configure AWS.config with the new credentials generated in Step 3.
AWS.config.update({
    accessKeyId: AccessKeyId, // from the assumeRole response, not the Cognito token
    secretAccessKey: SecretAccessKey,
    sessionToken: SessionToken,
    region: Region
});
Step 6: Generate the embed URL for the dashboard.
Using the credentials generated in Step 3, we call quicksight.getDashboardEmbedUrl with the parameters below.
var params = {
    AwsAccountId: "account ID",
    DashboardId: "dashboard Id",
    IdentityType: "IAM",
    ResetDisabled: true,
    SessionLifetimeInMinutes: 600, // any value between 15 and 600 minutes
    UndoRedoDisabled: false // true or false
};
quicksight.getDashboardEmbedUrl(params,
    function (err, data) {
        if (!err) {
            console.log(data);
        } else {
            console.log(err);
        }
    });
Now, we will get the embed url for the dashboard.
Call QuickSightEmbedding.embedDashboard from the front end with the URL generated above.
The result will be the dashboard embedded in your application with filter controls.
This link will give you what you need from the AWS CLI: https://aws.amazon.com/blogs/big-data/embed-interactive-dashboards-in-your-application-with-amazon-quicksight/
This is the step 3 AWS CLI command to get the embed URL (I was able to execute it):
aws quicksight get-dashboard-embed-url --aws-account-id (your account ID) --dashboard-id (your dashboard ID) --identity-type IAM
There are many other dependencies for enabling the embedded dashboard per the AWS documents; I have not been able to get that fully working. Good luck, and let me know if you make it happen!
PHP implementation
(in addition to Siva Sumanth's answer)
https://gist.github.com/evgalak/d0d1adf099e2d7bff741c16a89bf30ba
What I want to implement:
I have a Cognito User Pool with some users and some groups. I want certain users to have access to all API Gateway functions, some users to have access to only some functions, and others to have no access.
What I did:
I created three groups and assigned the users to the groups. I gave each group an IAM role and gave each role specific policies. The permission for the group for all users looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "execute-api:*",
            "Resource": "*"
        }
    ]
}
I created Lambda functions and API Gateway resources through the Serverless framework. I set the authorizer to a Cognito User Pool authorizer.
(I tried a couple of different things, like using federated identities, but that didn't seem to work either.)
What is my result:
All Users have full access to the API Gateway. The given permissions do not seem to make any difference to the access of each user.
Help:
What did I do wrong?
How can I achieve my goal?
The roles attached to a user pool group only come into the picture when you generate credentials for the user using Cognito Federated Identities. From Adding groups to a user pool:
IAM roles and their permissions are tied to the temporary AWS credentials that Amazon Cognito identity pools provide for authenticated users. Users in a group are automatically assigned the IAM role for the group when AWS credentials are provided by Amazon Cognito Federated Identities using the Choose role from token option.
So basically:
create an identity pool attached to your user pool.
change the authorization for API Gateway to IAM.
after logging in to the user pool, use the id_token to generate the federated identity.
use this identity (secret key + access key + token) for authorization with API Gateway.
Now your roles should be honored. But mind you - you will be required to generate the AWS SigV4 signature on your own, as for some reason this is not provided out of the box. I ended up using aws-sign-web for use in the browser; a server-side sketch follows.
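For a server-side caller, botocore can do the same SigV4 signing; a minimal hedged sketch, assuming the temporary credentials from the federated identity and a placeholder endpoint URL:
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Temporary credentials obtained from the Cognito federated identity (placeholders)
session = boto3.Session(
    aws_access_key_id='<AccessKeyId>',
    aws_secret_access_key='<SecretAccessKey>',
    aws_session_token='<SessionToken>',
    region_name='us-east-2',
)

url = 'https://<API id>.execute-api.us-east-2.amazonaws.com/<stage>/acc/123'  # placeholder
request = AWSRequest(method='GET', url=url)
# 'execute-api' is the signing service name for API Gateway
SigV4Auth(session.get_credentials(), 'execute-api', 'us-east-2').add_auth(request)

response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)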
PS: your role seems to give blanket access to API Gateway. You will need to fix that as well; e.g. a sample role I used to limit access to one API endpoint:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "execute-api:Invoke",
            "Resource": [
                "arn:aws:execute-api:us-east-2:<aws account id>:<API id>/*/*/acc/*"
            ],
            "Effect": "Allow"
        }
    ]
}
Sample code to generate federated identity
function getAccessToken(idToken, idenPoolId, userPool) {
    let region = idenPoolId.split(":")[0];
    let provider = "cognito-idp." + region + ".amazonaws.com/" + userPool;
    let login = {};
    login[provider] = idToken;

    // Add the user's ID token to the Cognito credentials login map.
    let credentials = new AWS.CognitoIdentityCredentials({
        IdentityPoolId: idenPoolId,
        Logins: login
    });

    // credentials.get() is asynchronous, so wrap it in a Promise;
    // returning from inside the callback would otherwise be lost.
    return new Promise((resolve, reject) => {
        // Call the refresh method to authenticate the user and get new temp credentials
        credentials.get((error) => {
            if (error) {
                console.error(error);
                reject(error);
            } else {
                console.log('Successfully logged!');
                resolve(JSON.stringify({
                    'AKI': credentials.accessKeyId,
                    'AKS': credentials.secretAccessKey,
                    'token': credentials.sessionToken
                }));
            }
        });
    });
}
I have a much better solution, and you don't need IAM.
Simply save the pair {username, serviceName} in S3 or a DB. Then every time you get a request for a service:
Check if the user is authorized (from Cognito).
Check if the user is authorized for the service (S3, MySQL, RDS, etc.).
Why I think it is better
Because when adding or removing users from services, you don't need to log in to IAM as an admin. And hopefully later on, you can create a dashboard for management.
Work Flow
UserA sends a request to your securityApi.
SecurityApi checks the token is authorized (user is valid or not).
If UserA is valid, the securityApi sends the user's username (it can be taken from the token's payload) and the service name to a DB, to see if the user has access to the service. For example, for MySQL (use RDS for this):
SELECT username FROM ServiceX WHERE username = 'xyz' LIMIT 1;
If steps 2 and 3 both pass, the user is (1) a valid user and (2) entitled to use the service. If the user fails step 2 or 3, the user is not authorized to use the service.
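A hedged sketch of that two-step check in Python (validate_token_and_get_username is a hypothetical helper standing in for your Cognito token validation; connection is any DB-API connection to the RDS instance):
def is_authorized(token, connection):
    # Step 2: validate the Cognito token and extract the username from its payload.
    username = validate_token_and_get_username(token)  # hypothetical helper
    if username is None:
        return False  # invalid user

    # Step 3: check the enrollment table for this service (ServiceX, as above).
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT username FROM ServiceX WHERE username = %s LIMIT 1",
            (username,),
        )
        return cursor.fetchone() is not None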
I am trying to accomplish the following scenario:
1) Account A uploads a file to an S3 bucket owned by Account B. At upload time I grant full control to the account owner B:
s3_client.upload_file(
    local_file,
    bucket,
    remote_file_name,
    ExtraArgs={'GrantFullControl': 'id=<AccountB_CanonicalID>'}
)
2) Account B defines a bucket policy that limits the access to the objects by IP (see below)
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowIPs",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucketB/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        <CIDR1>,
                        <CIDR2>
                    ]
                }
            }
        }
    ]
}
I get Access Denied if I try to download the file as an anonymous user, even from the allowed IP range. If at upload time I also add public read permission for everyone, then I can download the file from any IP.
s3_client.upload_file(
    local_file, bucket,
    remote_file_name,
    ExtraArgs={
        'GrantFullControl': 'id=<AccountB_CanonicalID>',
        'GrantRead': 'uri="http://acs.amazonaws.com/groups/global/AllUsers"'
    }
)
Question: is it possible to upload the file from Account A to Account B and still restrict public access by an IP range?
This is not possible. According to the documentation:
Bucket Policy – For your bucket, you can add a bucket policy to grant other AWS accounts or IAM users permissions for the bucket and the objects in it. Any object permissions apply only to the objects that the bucket owner creates. Bucket policies supplement, and in many cases, replace ACL-based access policies.
However, there is a workaround for this scenario. The problem is that the owner of the uploaded file is Account A. We need to upload the file in such a way that the owner of the file is Account B. To accomplish this we need to:
In Account B, create a role for a trusted entity (select "Another AWS account" and specify Account A). Add upload permission for the bucket.
In Account A, create a policy that allows the AssumeRole action and, as its resource, specify the ARN of the role created in step 1.
To upload the file with boto3 you can use the following code. Note the use of cachetools to deal with the limited TTL of the temporary credentials.
import logging
import sys

import boto3
from cachetools import cached, TTLCache

CREDENTIALS_TTL = 1800
credentials_cache = TTLCache(1, CREDENTIALS_TTL - 60)

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')
logger = logging.getLogger()


def main():
    local_file = sys.argv[1]
    bucket = '<bucket_from_account_B>'
    client = _get_s3_client_for_another_account()
    client.upload_file(local_file, bucket, local_file)
    logger.info('Uploaded %s to %s' % (local_file, bucket))


@cached(credentials_cache)  # cache the client for slightly less than the credentials' TTL
def _get_s3_client_for_another_account():
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn='<arn_of_role_created_in_step_1>',
        RoleSessionName='cross-account-upload',  # RoleSessionName is required by assume_role
        DurationSeconds=CREDENTIALS_TTL
    )
    credentials = response['Credentials']
    credentials = {
        'aws_access_key_id': credentials['AccessKeyId'],
        'aws_secret_access_key': credentials['SecretAccessKey'],
        'aws_session_token': credentials['SessionToken'],
    }
    return boto3.client('s3', 'eu-central-1', **credentials)


if __name__ == '__main__':
    main()