I need to give temporary access keys to my clients so they can connect to AWS IoT (publish, receive, etc.). To provide this access, I've created a Lambda function that calls sts.assumeRole to create temporary STS keys. The keys are created and look fine.
I'm using assumeRole from Lambda with a role that has the following inline policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"iot:Connect",
"iot:Subscribe",
"iot:Publish",
"iot:Receive"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"ec2:*"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"sts:AssumeRole"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
Note: I've added the EC2 permissions to try a secondary (simplified) test.
This role has an open trust relationship:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "sts:AssumeRole"
}
]
}
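For context, the Lambda side is essentially a single sts.assumeRole call whose credentials are returned to the client. A minimal sketch follows (AWS SDK v2; the role ARN, session name, and response shape are placeholders I am assuming, not the exact code):
// Minimal sketch of the Lambda side (AWS SDK v2). The role ARN, session name and
// response shape are placeholders; the real function also authenticates the caller.
const AWS = require('aws-sdk');
const sts = new AWS.STS();

exports.handler = async () => {
  const { Credentials } = await sts.assumeRole({
    RoleArn: 'arn:aws:iam::<ACCOUNT_ID>:role/<IOT_CLIENT_ROLE>', // placeholder ARN
    RoleSessionName: 'iot-client-session',
    DurationSeconds: 3600
  }).promise();

  // Return all three values; the client needs the session token too (see the answer below).
  return {
    accessKeyId: Credentials.AccessKeyId,
    secretAccessKey: Credentials.SecretAccessKey,
    sessionToken: Credentials.SessionToken,
    region: 'us-east-1'
  };
};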
However, in my client code (browser), I can't connect to IoT and I receive the following error:
WebSocket connection to 'wss://my-endpoint.iot.us-east-1.amazonaws.com/mqtt?X-Amz-Algorithm=AWS4-H…Signature=my-signature' failed: Error during WebSocket handshake: Unexpected response code: 403
As a simplified test I tried EC2 instead, but received another permission error. The code I used follows below (bundled for the browser with browserify).
// jQuery ($) and the AWS SDK are bundled for the browser with browserify
const AWS = require('aws-sdk');
// call the Lambda-backed endpoint to retrieve accessKeyId and secretAccessKey
$.ajax({
  method: 'GET',
  url: 'my-url',
  success: function(res) {
    // connect to EC2 with the returned credentials
    AWS.config.update({accessKeyId: res.accessKeyId, secretAccessKey: res.secretAccessKey, region: res.region});
    const ec2 = new AWS.EC2();
    ec2.describeInstances({}, function(err, data) {
      if (err) console.log(err, err.stack); // an error occurred
      else console.log(data);               // successful response
    });
  }
});
Error:
POST https://ec2.us-east-1.amazonaws.com/ 401 (Unauthorized)
Error: AWS was not able to validate the provided access credentials(…) "AuthFailure: AWS was not able to validate the provided access credentials"
Found the error: when connecting from the client, I need to provide the sessionToken returned by assumeRole.
Client code:
// connect to EC2, this time including the sessionToken from assumeRole
AWS.config.update({accessKeyId: res.accessKeyId, secretAccessKey: res.secretAccessKey, sessionToken: res.sessionToken, region: res.region});
const ec2 = new AWS.EC2();
ec2.describeInstances({}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Related
I am using the AWS JavaScript SDK to access the Mechanical Turk API, but receive the following error:
AccessDeniedException: User: arn:aws:sts::186682853025:assumed-role/Cognito_projectUnauth_Role/CognitoIdentityCredentials is not authorized to perform: mechanicalturk:AssociateQualificationWithWorker because no session policy allows the mechanicalturk:AssociateQualificationWithWorker action
when making the following call:
AWS.config.update({
region: 'us-east-1',
credentials: new AWS.CognitoIdentityCredentials({
IdentityPoolId: 'IDENTITY_POOL_ID',
}),
});
new AWS.MTurk().associateQualificationWithWorker({
QualificationTypeId: 'QUAL_ID',
WorkerId: 'WORKER_ID',
IntegerValue: 1,
SendNotification: false
}, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
The identity pool associated with IDENTITY_POOL_ID has the unauthenticated role Cognito_projectUnauth_Role which has an attached policy that allows full Mechanical Turk permissions.
Here is the trust policy for that role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "cognito-identity.amazonaws.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"cognito-identity.amazonaws.com:aud": "IDENTITY_POOL_ID"
},
"ForAnyValue:StringLike": {
"cognito-identity.amazonaws.com:amr": "unauthenticated"
}
}
}
]
}
What else could be missing from this setup to get the right authorization?
Summary: I am using AWS Amplify Auth class with a pre-configured Cognito User Pool for authentication. After authentication, I am using the Cognito ID token to fetch identity pool credentials (using AWS CredentialProviders SDK) whose assumed role is given access to an S3 access point. I then attempt to fetch a known object from the bucket's access point using the AWS S3 SDK. The problem is that the request returns a response of 403 Forbidden instead of successfully getting the object, despite my role policy and bucket (access point) policy allowing the s3:GetObject action on the resource.
I am assuming something is wrong with the way my policies are set up. Code snippets below.
I am also concerned that I'm not getting the right role back from the credentials provider. I don't allow unauthenticated roles on the identity pool, so I am not sure, and I don't know how to verify which role the returned credentials actually carry (a way to check this is sketched just below).
I also may not be configuring the SDK client objects properly, but I followed the documentation to the best of my understanding (I am using AWS SDK v3, not v2, so the syntax is slightly different and uses modular imports).
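One way to verify which role the returned credentials actually carry is to call STS GetCallerIdentity with those credentials and inspect the Arn in the response. Below is a minimal sketch using the SDK v3 modular STS client; the fromCognitoIdentityPool call mirrors the one further down in this post, and the placeholder values are assumptions, not the real ones:
// Sketch: check which role the Identity Pool credentials actually resolve to.
// Placeholders as in the rest of this post; the ID token comes from the Auth flow below.
import { STSClient, GetCallerIdentityCommand } from '@aws-sdk/client-sts';
import { fromCognitoIdentityPool } from '@aws-sdk/credential-providers';

const credentials = fromCognitoIdentityPool({
  clientConfig: { region: '<REGION>' },
  identityPoolId: '<MY_IDENTITY_POOL_ID>',
  logins: {
    'cognito-idp.<REGION>.amazonaws.com/<MY_USER_POOL_ID>': '<COGNITO_ID_TOKEN_JWT>'
  }
});

const stsClient = new STSClient({ region: '<REGION>', credentials });
const identity = await stsClient.send(new GetCallerIdentityCommand({}));
// identity.Arn has the form arn:aws:sts::<ACCT_ID>:assumed-role/<ROLE_NAME>/<SESSION>,
// so <ROLE_NAME> is the role this session is actually using.
console.log(identity.Arn);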
Backend Configurations - IAM
Identity Pool: Authenticated Role Trust Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "cognito-identity.amazonaws.com"
},
"Action": [
"sts:AssumeRoleWithWebIdentity",
"sts:TagSession"
],
"Condition": {
"StringEquals": {
"cognito-identity.amazonaws.com:aud": "<MY_IDENTITY_POOL_ID>"
},
"ForAnyValue:StringLike": {
"cognito-identity.amazonaws.com:amr": "authenticated"
}
}
}
]
}
Identity Pool: Authenticated Role S3 Access Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::<MY_ACCESS_POINT_NAME>/object/*"
}
]
}
Backend Configurations - S3
S3 Bucket and Access Points: Block All Public Access
S3 Bucket CORS Policy:
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"PUT",
"HEAD"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": [],
"MaxAgeSeconds": 300
}
]
S3 Bucket Policy (Delegates Access Control to Access Points):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DelegateAccessControlToAccessPoints",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "*",
"Resource": [
"arn:aws:s3:::<MY_BUCKET_NAME>",
"arn:aws:s3:::<MY_BUCKET_NAME>/*"
],
"Condition": {
"StringEquals": {
"s3:DataAccessPointAccount": "<MY_ACCT_ID>"
}
}
}
]
}
Access Point Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccessPointToGetObjects",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<ACCT_ID>:role/<MY_IDENTITY_POOL_AUTH_ROLE_NAME>"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:<REGION>:<ACCT_ID>:accesspoint/<MY_ACCESS_POINT_NAME>/object/*"
}
]
}
Front End AuthN & AuthZ
Amplify Configuration of User Pool Auth
Amplify.configure({
Auth: {
region: '<REGION>',
userPoolId: '<MY_USER_POOL_ID>',
userPoolWebClientId: '<MY_USER_POOL_APP_CLIENT_ID>'
}
})
User AuthZ process:
On user login event, call Amplify's Auth.signIn() which returns type CognitoUser:
// Imports assumed for this snippet: Amplify Auth plus the AWS SDK v3 modular packages
import { Auth } from 'aws-amplify';
import { fromCognitoIdentityPool } from '@aws-sdk/credential-providers';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

// Log in user (error checking omitted here for post)
const CognitoUser = await Auth.signIn(email, secret);
// Get ID Token JWT
const CognitoIdToken = CognitoUser.signInUserSession.getIdToken().getJwtToken();
// Use @aws-sdk/credential-providers to get Identity Pool credentials
const credentials = fromCognitoIdentityPool({
clientConfig: { region: '<REGION>' },
identityPoolId: '<MY_IDENTITY_POOL_ID>',
logins: {
'cognito-idp.<REGION>.amazonaws.com/<MY_USER_POOL_ID>': CognitoIdToken
}
})
// Create S3 SDK Client
const client = new S3Client({
region: '<REGION>',
credentials
})
// Format S3 GetObjectCommand parameters for object to get from access point
const s3params = {
Bucket: '<MY_ACCESS_POINT_ARN>',
Key: '<MY_OBJECT_KEY>'
}
// Create S3 client command object
const getObjectCommand = new GetObjectCommand(s3params);
// Get object from access point (execute command)
const response = await client.send(getObjectCommand); // -> 403 FORBIDDEN
I am trying to create a solution where every client that uses my service has its own SQS queue (in my AWS account). So that each client can send and read messages from its queue, I want to use Cognito with a single role that uses policy variables, since there is a limit on the number of roles a single account can have.
I have created a Cognito user pool with an app client, and also created a federated identity pool, a role, and a policy, and linked everything together.
The policy is:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"sqs:DeleteMessage",
"sqs:GetQueueUrl",
"sqs:DeleteMessageBatch",
"sqs:SendMessageBatch"
],
"Resource": [
"arn:aws:sqs:us-east-1:XXXX:test-${cognito-identity.amazonaws.com:sub}",
"arn:aws:sqs:us-east-1:XXXX:test"
]
}
]
}
The test client code is:
// userPool: a CognitoUserPool instance (e.g. from amazon-cognito-identity-js); setup not shown
const cognitoUser = userPool.getCurrentUser();
cognitoUser.getSession((err, session) => {
console.log(`session token: ${session.getIdToken().getJwtToken()}`);
const paramsCredentials = {
IdentityPoolId: 'XXXX',
Logins: {}
};
AWS.config.region = 'XXXX';
paramsCredentials.Logins[
`cognito-idp.${AWS.config.region}.amazonaws.com/XXXX`
] = session.getIdToken().getJwtToken();
AWS.config.credentials = new AWS.CognitoIdentityCredentials(
paramsCredentials
);
AWS.config.credentials.get(err => {
if (err) {
console.log(`got error - getting credentials. error: ${err}`);
}
const id = AWS.config.credentials.identityId;
console.log('Cognito Identity ID ' + id);
const sqs = new AWS.SQS({
region: AWS.config.region
});
const params = {
QueueName: 'test-9ea2b895-2971-4ee2-b372-451bf2b19731'
};
sqs.getQueueUrl(params, (err, data) => {
if (err) {
console.log(`got error getting url for queue, error: ${err}`);
} else {
console.log(`SQS url = ${data.QueueUrl}`);
}
});
});
});
and I am getting this error:
AWS.SimpleQueueService.NonExistentQueue: The specified queue does not exist or you do not have access to it.
But when I change the queue name in the client code to plain test, everything works fine. I have double-checked the sub and it is the correct ID.
What did I do wrong?
The ${cognito-identity.amazonaws.com:sub} IAM policy variable resolves to region:uuid, so the queue would have to be named test-us-east-1:9ea2b895-2971-4ee2-b372-451bf2b19731, which is an invalid SQS queue name (colons are not allowed). So it is not possible to restrict access to a queue named after that identity, but you can create a policy limited to a specific set of users of your application.
Here is a blog post from AWS on this: Understanding Amazon Cognito Authentication Part 3: Roles and Policies.
For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:GetQueueUrl",
        "sqs:DeleteMessageBatch",
        "sqs:SendMessageBatch"
      ],
      "Resource": [
        "arn:aws:sqs:us-east-1:XXXX:test"
      ],
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:sub": [
            "us-east-1:12345678-1234-1234-1234-123456790ab"
          ]
        }
      }
    }
  ]
}
I am trying to connect a Kinesis trigger to a Lambda function, but I keep getting the same error no matter how I configure the IAM role. I've also tried using "*" in most fields of the policy. How can I determine what's preventing the trigger from connecting if the role is already configured the way the error requests?
Here is the error:
There was an error creating the trigger: Cannot access stream [arn]. Please ensure the role can perform the GetRecords, GetShardIterator, DescribeStream, and ListStreams Actions on your stream in IAM.
Here is my IAM role which can perform the GetRecords, GetShardIterator, DescribeStream, and ListStreams PLUS more:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1505763107000",
"Effect": "Allow",
"Action": [
"kinesis:CreateStream",
"kinesis:DescribeStream",
"kinesis:GetShardIterator",
"kinesis:GetRecords",
"kinesis:ListRecords",
"kinesis:PutRecord",
"kinesis:PutRecords"
],
"Resource": [
"arn:aws:kinesis:us-east-2:219021079475:stream/lead"
]
},
{
"Sid": "Stmt1505763184000",
"Effect": "Allow",
"Action": [
"cloudwatch:*"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1505763256000",
"Effect": "Allow",
"Action": [
"lambda:*"
],
"Resource": [
"arn:aws:lambda:us-east-2:219021079475:function:logsKinesisData"
]
}
]
}
I've not written the lambda function beyond set up:
'use strict';
console.log('Loading function');
exports.handler = (event, context, callback) => {
//console.log('Received event:', JSON.stringify(event, null, 2));
event.Records.forEach((record) => {
// Kinesis data is base64 encoded so decode here
const payload = Buffer.from(record.kinesis.data, 'base64').toString('ascii');
console.log('Decoded payload:', payload);
});
callback(null, `Successfully processed ${event.Records.length} records.`);
};
I am having issues using STS temporary credentials to initiate a connection to AWS IoT, while keeping things secure.
I have already successfully connected embedded devices using certificates with policies.
But when I come to try connecting via the browser, using a pre-signed URL, I have hit a stumbling block.
Below is a code snippet from a Lambda function which first authenticates the request (not shown), and then builds the url using STS credentials via assumeRole.
Using my generated URL along with Paho javascript client, I have been successful up to the point of receiving a response of "101 Switching Protocols" in the browser. But the connection is terminated instead of switching to websockets.
Any help or guidance anyone out there can provide me with would be much appreciated.
// Requires assumed for this snippet: aws-sdk, Node's crypto, and a helper exposing
// createPresignedURL (e.g. the aws-signature-v4 npm package)
const AWS = require('aws-sdk');
const crypto = require('crypto');
const v4 = require('aws-signature-v4');
const iot = new AWS.Iot();
const sts = new AWS.STS({region: 'eu-west-1'});
const params = {
DurationSeconds: 3600,
ExternalId: displayId,
Policy: JSON.stringify(
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iot:*"
],
"Resource": [
"*"
]
},
/*{
"Effect": "Allow",
"Action": [
"iot:Connect"
],
"Resource": [
"arn:aws:iot:eu-west-1:ACCID:client/" + display._id
]
},
{
"Effect": "Allow",
"Action": [
"iot:Receive"
],
"Resource": [
"*"
]
}*/
]
}
),
RoleArn: "arn:aws:iam::ACCID:role/iot_websocket_url_role",
RoleSessionName: displayId + '-' + Date.now()
};
sts.assumeRole(params, function(err, stsData) {
if (err) {
fail(err, db);
return;
}
console.log(stsData);
const AWS_IOT_ENDPOINT_HOST = 'REDACTED.iot.eu-west-1.amazonaws.com';
var url = v4.createPresignedURL(
'GET',
AWS_IOT_ENDPOINT_HOST,
'/mqtt',
'iotdata',
crypto.createHash('sha256').update('', 'utf8').digest('hex'),
{
key: stsData.Credentials.AccessKeyId,
secret: stsData.Credentials.SecretAccessKey,
protocol: 'wss',
expires: 3600,
region: 'eu-west-1'
}
);
url += '&X-Amz-Security-Token=' + encodeURIComponent(stsData.Credentials.SessionToken);
console.log(url);
context.succeed({url: url});
});
Edit: If it helps, I just checked inside the "Frames" window in Chrome debugger, after selecting the request which returns a 101 code. It shows a single frame: "Binary Frame (Opcode 2, mask)".
Does this Opcode refer to MQTT control code 2 AKA "CONNACK"? I am not an expert at MQTT (yet!).
I realised my mistake by reading the docs on STS.
If you pass a policy to this operation, the temporary security credentials that are returned by the operation have the permissions that are allowed by both the access policy of the role that is being assumed, and the policy that you pass.
The role referenced by RoleArn must also allow the actions that you request via sts:AssumeRole.
That is, the role itself could allow iot:*, and when you assume the role you can narrow the permissions down to, for instance, iot:Connect on specific resources.
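For instance, here is a sketch of that narrowing (account ID, display ID, and ARNs are placeholders), with the role behind RoleArn allowing iot:* and the session policy restricting the session to the statements that are commented out in the snippet above:
// Sketch only: the role behind RoleArn allows iot:*, and the inline session policy
// narrows the resulting temporary credentials. Account ID, display ID and ARNs are placeholders.
const AWS = require('aws-sdk');
const sts = new AWS.STS({ region: 'eu-west-1' });
const displayId = '<DISPLAY_ID>'; // placeholder client/display identifier

const params = {
  RoleArn: 'arn:aws:iam::ACCID:role/iot_websocket_url_role',
  RoleSessionName: displayId + '-' + Date.now(),
  DurationSeconds: 3600,
  Policy: JSON.stringify({
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Action: ['iot:Connect'],
        Resource: ['arn:aws:iot:eu-west-1:ACCID:client/' + displayId]
      },
      {
        Effect: 'Allow',
        Action: ['iot:Receive'],
        Resource: ['*']
      }
    ]
  })
};

sts.assumeRole(params, function(err, data) {
  // data.Credentials now carries only the intersection of the role's own
  // policy and the session policy above.
});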