The Yodlee documentation about the MFA flow is a bit blurry/outdated.
I am following this flow chart to refresh a site with MFA: http://developer.yodlee.com/Aggregation_API/Aggregation_Services_Guide/API_Flow/Refresh_Site_Account
On the flow chart, after calling getMFAResponseForSite, we are supposed to check whether there is an errorCode field in the response, but I don't see such a field in the documentation of the getMFAResponseForSite method. Without this errorCode field, we cannot go back into the regular flow and wait for the refresh to complete.
Also, what is the difference between retry and isMessageAvailable?
The documentation says to call the stopSiteRefresh method. I don't see it in the flow, and it seems odd to call it, but the documentation says:
Note that this is one of the APIs that is required to refresh MFA accounts.
Can somebody give me a clear flow for dealing with MFA sites? When and how can we go back to the regular process (getSiteRefreshInfo) and wait for the end of the refresh? Thanks in advance.
The "errorCode" field only comes when there is no MFA question available and hence you are not seeing it in the sample of the API documentation as the sample contains the response with MFA question.
If you follow the flow closely, you can see that you have to call getMFAResponseForSite in a loop and check for errorCode, so please call the API as depicted in the API flow documentation.
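As a rough illustration of that loop (a sketch only; the wrapper functions, the UI step, and the polling interval below are my own placeholders, not part of Yodlee's SDK):

import time

POLL_INTERVAL_SECONDS = 5  # assumption: how long to wait between polls

def handle_mfa_refresh(session, mem_site_acc_id):
    while True:
        # Hypothetical wrapper around the getMFAResponseForSite call.
        mfa = get_mfa_response_for_site(session, mem_site_acc_id)

        if "errorCode" in mfa:
            # No (more) MFA questions: drop back into the regular flow and
            # poll getSiteRefreshInfo until the refresh completes.
            return mfa["errorCode"]

        if mfa.get("isMessageAvailable"):
            # An MFA question is available: collect the user's answer and submit it.
            answer = collect_answer_from_user(mfa["fieldInfo"])           # hypothetical UI step
            put_mfa_request_for_site(session, mem_site_acc_id, answer)    # hypothetical wrapper
        else:
            # retry == true means the site has not produced the question yet: poll again shortly.
            time.sleep(POLL_INTERVAL_SECONDS)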
Here is a sample with the errorCode field present, captured after successfully answering the MFA question.
{
    "isMessageAvailable": true,
    "fieldInfo": {
        "questionAndAnswerValues": [],
        "numOfMandatoryQuestions": -1,
        "mfaFieldInfoType": "SECURITY_QUESTION"
    },
    "timeOutTime": 97690,
    "itemId": 0,
    "errorCode": 0,
    "memSiteAccId": xxxxxxxxxx,
    "retry": false
}
Please also ignore the stopSiteRefresh API call; we will rectify the API reference documentation, as that call should not be made in the case of the getMFAResponseForSite API.
I want to use an AWS service or a combination of services to check if the signed-in IAM user used MFA or not.
I tried with CloudWatch Events and Lambda. Using this, I can see the details of the user sign-in in the event object, but I am not able to check whether they signed in using MFA or not.
Any Suggestions?
UPDATE:
I used CloudWatch Events rules -> Source: Build Event by Service (Service Name: AWS Console Sign-in, Event Type: Sign-in) -> Target: Lambda/SNS email. Then I got the below information:
Sample event object which I got:
{
    "version": "0",
    "id": "XXXX-XXX-XXX-XXX-XXX",
    "detail-type": "AWS Console Sign In via CloudTrail",
    "source": "aws.signin",
    "account": "XXXXXXXXX",
    "time": "2020-12-14T12:03:43Z",
    "region": "us-east-2",
    "resources": [],
    "detail": {
        "eventVersion": "1.05",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "XXXXXXXX",
            "arn": "arn:aws:iam::XXXXXXX:user/XXXXX",
            "accountId": "XXXXX",
            "userName": "XXXX"
        },
        "eventTime": "2020-12-14T12:03:43Z",
        "eventSource": "signin.amazonaws.com",
        "eventName": "ConsoleLogin",
        "awsRegion": "us-east-2",
        "sourceIPAddress": "XX.XX.XX.XX",
        "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ",
        "requestParameters": null,
        "responseElements": {
            "ConsoleLogin": "Success"
        },
        "additionalEventData": {
            "LoginTo": "https://XXXXXXXXXXX.XXX.XXXX.XXXXXXXXXXXXXXXXXXXXX"
        },
        "eventID": "XXXXX-XXXX-XXXX-XXXX-XXXXXXX",
        "eventType": "AwsConsoleSignIn"
    }
}
Your example event indicates that the user did not log in with MFA. If they had, the additionalEventData would include an MFAUsed value, like this:
"additionalEventData": {
"LoginTo": "https://console.aws.amazon.com/console/home?region=us-east-1&state=hashArgs%23&isauthcode=true",
"MobileVersion": "No",
"MFAUsed": "Yes"
},
I suspect, however, that this data is less useful than you might think. For example, if you use SAML-based authentication, or even (I believe) AWS Single Sign-On, you won't see this information. The reason is that such authentication methods -- even if they support MFA -- do not convey that information to AWS.
It can be even more complicated if you use assumed roles, where the indication of AWS MFA is in the userIdentity object.
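For assumed-role sessions, the flag is typically nested like this (illustrative shape only; other fields omitted):

"userIdentity": {
    "type": "AssumedRole",
    "sessionContext": {
        "attributes": {
            "creationDate": "2020-12-14T12:03:43Z",
            "mfaAuthenticated": "false"
        }
    }
}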
If you're looking for users that don't have MFA enabled, the best solution, in my opinion, is to go to the IAM console's user list.
I would suggest using AWS CloudTrail combined with the LookupEvents API, using the SDK of your choice (Python, JS, Java, etc.).
The first will capture any IAM calls, even multi-regionally if required, and the second will query the results of that capture. You should be able to see the required MFA info in the event body.
https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html
https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_LookupEvents.html
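For example, a rough boto3 sketch of the LookupEvents approach (assuming CloudTrail is already recording console sign-ins in the region you query; pagination via NextToken is omitted):

import json
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-2")

# Pull the last 24 hours of ConsoleLogin events and report whether MFA was used.
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)
for event in resp["Events"]:
    detail = json.loads(event["CloudTrailEvent"])  # the full event, as a JSON string
    user = detail.get("userIdentity", {}).get("userName", "unknown")
    mfa = detail.get("additionalEventData", {}).get("MFAUsed", "unknown")
    print(f"{user}: MFAUsed={mfa}")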
I am new to Amazon Connect and Lex and have just started creating simple projects. I have already created an entire contact flow which uses Lex and Lambda for routing. The problem is in the "Get Customer Input" stage: it seems to always go to the error output and I could not figure out why. I tried to check if there's any way to find logs for each stage in the contact flow, but could not find any.
Can anyone help me solve this issue? I need to see logs to find out the cause of the error.
EDIT: I got the contact flow logs from CloudWatch. See below. I can't find any significant error in them.
{
    "Results": "Error",
    "ContactId": "<contact-id>",
    "ContactFlowId": "<the contact flow id>",
    "ContactFlowModuleType": "GetUserInput",
    "Timestamp": "2019-07-08T08:27:01.185Z"
}
You might be getting this because Lex itself is returning an error, and that is why the flow is taking the error branch.
You can check the logs for Connect and Lex in Amazon CloudWatch.
You can also provide details from the logs or a screenshot of the exact error you are getting, so that I can help.
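If it helps, here is a small sketch for pulling those contact flow log entries with boto3 (the log group name is an assumption; check where your instance's contact flow logging is configured to write):

import time

import boto3

logs = boto3.client("logs")

# Contact flow logs usually land in a group like "/aws/connect/<instance-alias>"
# once logging is enabled in the flow; adjust the name to your instance.
resp = logs.filter_log_events(
    logGroupName="/aws/connect/my-instance-alias",
    filterPattern='"<contact-id>"',                  # narrow to a single contact
    startTime=int((time.time() - 3600) * 1000),      # last hour, in milliseconds
)
for event in resp["events"]:
    print(event["message"])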
This might be due to a language settings mismatch.
If you're using LexV2, make sure you set the proper language attribute as well. The easiest way is to use the Set Voice block in your contact flow; at the very bottom of the block you can enable "set language attribute".
Original answer: https://repost.aws/questions/QUn9bLLnclQxmD_DMBgfB9_Q/amazon-connect-error-using-lex-as-customer-input
We're using a lambda function to respond to the 'User Migration' trigger in AWS Cognito. When something like a syntax error occurs, you can see it in the CloudWatch logs. However, "Exception during user migration" errors seen on the login page are nowhere to be found in the CloudWatch logs.
Where are we supposed to look for these? I can't find anything in the documentation and assumed it would have gone to CloudWatch.
I can't test it in the Lambda console because one of the parameters passed into the lambda function has a function nested within the object, and I can't create a test JSON setup that includes that. There's also no pre-built test trigger for user migration.
Any ideas as to why I can't see this in CloudWatch, or where the exceptions would be shown, would be greatly appreciated.
Unfortunately Cognito doesn't expose any logs (or metrics, for that matter!).
The closest you can get is to view the lambda's logs in CloudWatch. If you log your response and watch your lambda's error metric, then you should mostly be able to debug issues internal to the lambda.
This does leave a few edge cases:
You won't see anything if the lambda can't be invoked (this would only happen under heavy concurrent loads either on that single lambda, or on all lambdas across your account)
If you return a bad response, the lambda will succeed but the trigger action will fail, and Cognito will give you a fairly generic message. At this point you're at the mercy of AWS' documentation to work out what's wrong (which can be a bit hit and miss, although StackOverflow always helps!).
You can find an example payload for the lambda in the trigger documentation:
{
    "userName": "THE USERNAME",
    "request": {
        "password": "THE PASSWORD"
    },
    "response": {
        // it is your responsibility to fill this bit in and return the completed object back:
        "userAttributes": {
            "string": "string",
            ...
        },
        "finalUserStatus": "string",
        "messageAction": "string",
        "desiredDeliveryMediums": [ "string", ... ],
        "forceAliasCreation": boolean
    }
}
N.B. As an aside (which you might already know), Lambda payloads always have to be JSON, which cannot store functions, so you should always be able to derive a test payload to use in the console.
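For reference, a minimal sketch of a migration handler that logs the incoming event so failures surface in the lambda's own CloudWatch logs (the legacy lookup function here is hypothetical; only the response fields come from the trigger documentation):

import json

def lambda_handler(event, context):
    # Log the event (minus the request, which contains the password) so you can
    # trace migration attempts in CloudWatch.
    safe = {k: v for k, v in event.items() if k != "request"}
    print(json.dumps(safe))

    if event["triggerSource"] == "UserMigration_Authentication":
        # authenticate_against_legacy_store is a placeholder for your own user lookup.
        user = authenticate_against_legacy_store(event["userName"], event["request"]["password"])
        if user is None:
            raise Exception("Bad credentials")  # surfaces as a failed sign-in in Cognito

        event["response"]["userAttributes"] = {
            "email": user["email"],
            "email_verified": "true",
        }
        event["response"]["finalUserStatus"] = "CONFIRMED"
        event["response"]["messageAction"] = "SUPPRESS"

    return event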
As part of some testing that I was doing, I replied STOP to an SMS message that was sent via Amazon's Pinpoint service. I received the acknowledgement that I had been removed from further notifications.
I want to opt back in to receiving those messages, but I can't figure out how to do that. I looked at the Pinpoint documentation and did not see a way to do it. I looked in the Amazon Pinpoint console and did not see a way to remove a number from the blacklist. I have tried the standard terms that other SMS providers use, such as UNSTOP, UNBLOCK, and START, but none of those work either. Does anyone have any suggestions? I do not want to contact Amazon support about this unless I have to.
As described here: https://docs.aws.amazon.com/cli/latest/reference/sns/opt-in-phone-number.html
aws sns opt-in-phone-number --phone-number ###-###-####
You can also use the AWS Console -> Amazon SNS -> Mobile -> Text Messaging (SMS) section to see the list of phone numbers that were opted out through Pinpoint and choose to opt them back in...
AWS Pinpoint does not yet have APIs to check whether a number is opted out or not. You can use the AWS SNS APIs both for checking this and for marking a mobile number as active again.
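For example, a short boto3 sketch of those SNS calls (the phone number is a placeholder):

import boto3

sns = boto3.client("sns")

# Is this number currently on the account's SMS opt-out list?
status = sns.check_if_phone_number_is_opted_out(phoneNumber="+15550123456")

if status["isOptedOut"]:
    # Opt the number back in; note that SNS only allows opting a given number
    # back in once every 30 days.
    sns.opt_in_phone_number(phoneNumber="+15550123456")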
I've been trying to figure this out as well, and I think I have a solution from a set of documentation I found about setting up Pinpoint. Below is Python pseudocode; from my understanding, we just have to update the "OptOut" status for the endpoint (i.e. the phone number that originally opted out).
# Python pseudo code with comments
import datetime

import boto3

# Assumes login_kwargs holds your credentials/region for the session.
pinpoint = boto3.Session(**login_kwargs).client("pinpoint")

opt_in_response = pinpoint.update_endpoint(
    ApplicationId="<App ID from your project>",
    EndpointId="<Endpoint you are updating>",  # often the same as your phone number
    EndpointRequest={
        "Address": "<Phone you are updating>",
        "ChannelType": "SMS",
        "OptOut": "NONE",  # change from "ALL" (which is opt-out) to "NONE", opt-in
        "EffectiveDate": datetime.datetime.utcnow().isoformat(),
        "Attributes": {
            "OptInTimestamp": [datetime.datetime.utcnow().isoformat()]
        },
    },
)
I attempted to follow this documentation https://docs.aws.amazon.com/pinpoint/latest/developerguide/pinpoint-dg.pdf (the relevant stuff starts on page 92), which happens to not be in Python.
I wasn't successful but I'm pretty sure this is how you should be able to opt back in (if anyone who knows node.js can verify this solution that'd be awesome).
Currently, the message specified in the Document field while creating an alerting policy appears in the Document field of the Stackdriver alert email.
I would like to overwrite the entire email message body with my custom content.
How can I overwrite the message body of Stackdriver Alert email with my custom message?
Is there any other workaround to do this?
You should be able to send the notification to a webhook, and this could directly be an HTTP-triggered Cloud Function.
This Cloud Function would receive all the information from the alert, and you can follow this tutorial to use SendGrid to send your alerts.
This is a lot more complex than just setting the email notifications, but it also provides amazing flexibility regarding alerts, as you'll be able not just to write the message however you want, but also to process the data in any way you want:
You have low priority alerts? Then store them and just send a digest once in a while instead of spamming.
Want to change who is sent the alert depending on a calendar rotation? Use the function to look up who should be notified.
And those are just some random quick ideas I got while writing this message.
The information provided in the POST body looks like this (that's just a sample):
{
    "incident": {
        "incident_id": "f2e08c333dc64cb09f75eaab355393bz",
        "resource_id": "i-4a266a2d",
        "resource_name": "webserver-85",
        "state": "open",
        "started_at": 1385085727,
        "ended_at": null,
        "policy_name": "Webserver Health",
        "condition_name": "CPU usage",
        "url": "https://app.google.stackdriver.com/incidents/f333dc64z",
        "summary": "CPU for webserver-85 is above the threshold of 1% with a value of 28.5%"
    },
    "version": 1.1
}
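As a minimal sketch of such a webhook (an HTTP-triggered Cloud Function in Python; the SendGrid usage and the addresses are illustrative, so swap in whatever mail provider you prefer):

import os

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

def handle_alert(request):
    # Parse the incident payload posted by the webhook notification channel.
    payload = request.get_json(silent=True) or {}
    incident = payload.get("incident", {})

    # Build whatever custom message body you want from the incident fields.
    body = (
        f"Policy: {incident.get('policy_name')}\n"
        f"Condition: {incident.get('condition_name')}\n"
        f"State: {incident.get('state')}\n"
        f"Summary: {incident.get('summary')}\n"
        f"Details: {incident.get('url')}"
    )

    message = Mail(
        from_email="alerts@example.com",   # placeholder sender
        to_emails="oncall@example.com",    # placeholder recipient
        subject=f"[ALERT] {incident.get('policy_name')}",
        plain_text_content=body,
    )
    SendGridAPIClient(os.environ["SENDGRID_API_KEY"]).send(message)
    return ("", 204)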
You can create a single webhook that handles all the alerts, or you can create a webhook on a per-policy basis to handle things separately.