Pubnub functions not working on AWS Lambda - amazon-web-services

I'm trying to use the history method provided by PubNub to get the chat history of a channel, running my Node.js code on AWS Lambda. However, my callback function is never called. I'm not sure if I'm doing it correctly, but here's the code snippet:
var publishKey = "pub-c-cfe10ea4-redacted";
var subscribeKey = "sub-c-fedec8ba-redacted";
var channelId = "ChatRoomDemo";
var uuid;
var pubnub = {};

function readMessages(intent, session, callback) {
    pubnub = require("pubnub")({
        publish_key: publishKey,
        subscribe_key: subscribeKey
    });
    pubnub.history({
        channel: 'ChatRoomDemo',
        callback: function (m) {
            console.log(JSON.stringify(m));
        },
        count: 100,
        reverse: false
    });
}
I expect the message history in JSON format to be displayed on the console.
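One thing worth checking in Lambda specifically: an async handler can return (and the execution environment can be frozen) before a callback-style call like history ever fires, so it can help to wrap the call in a Promise and await it. A minimal sketch (the helper name is mine, assuming the classic callback-based PubNub v3 API shown above):

```javascript
// Hypothetical helper: wraps the callback-style history() call in a Promise
// so an async Lambda handler can await it before returning.
function fetchHistory(pubnub, channel) {
    return new Promise((resolve, reject) => {
        pubnub.history({
            channel: channel,
            count: 100,
            reverse: false,
            callback: (m) => resolve(m),
            error: (e) => reject(e)
        });
    });
}

// Usage inside an async handler (not run here):
// const messages = await fetchHistory(pubnub, 'ChatRoomDemo');
// console.log(JSON.stringify(messages));
```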

I had the same problem and finally got it working. What you need to do is allow the CIDR address range for pubnub.com. This was a foreign idea to me until I figured it out! Here's how to do that to publish to a channel:
Copy the CIDR address for pubnub.com, which is 54.246.196.128/26 (Source) [WARNING: do not do this - see comment below]
Log into https://console.aws.amazon.com
Under "Services" go to "VPC"
On the left, under "Security," click "Network ACLs"
Click "Create Network ACL" give it a name tag like "pubnub.com"
Select the VPC for your Lambda function (if you're not sure, click around your Lambda function and you'll see it; you probably only have one listed, like me)
Click "Yes, Create"
Under the "Outbound Rules" tab, click "Edit"
For "Rule #" I just used "1"
For "Type" I used "HTTP (80)"
For "Destination" I pasted in the CIDR from step 1
"Save"
Note, if you're subscribing to a channel, you'll also need to add an "Inbound Rule" too.

GCP terraform - alerts module based on log metrics

As per the subject, I have set up log-based metrics for a platform in GCP, i.e. firewall, audit, and route monitoring.
Now I need to set up alert policies tied to these log-based metrics, which is easy enough to do manually in GCP.
However, I need to do it via Terraform, using this resource:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/monitoring_alert_policy#nested_alert_strategy
I might be missing something very simple, but I'm finding this hard to understand: the alert strategy is apparently required, yet does not seem to be supported?
I am also a bit confused about which kind of condition I should use to match my already-configured log-based metric.
This is my module so far. PS: I have tried using both the same filter I used when setting up the log-based metric and the name of the log-based filter:
resource "google_monitoring_alert_policy" "alert_policy" {
  display_name = var.display_name
  combiner     = "OR"
  conditions {
    display_name = var.display_name
    condition_matched_log {
      filter = var.filter
      # duration        = "600s"
      # comparison      = "COMPARISON_GT"
      # threshold_value = 1
    }
  }
  user_labels = {
    foo = "bar"
  }
}
The value of var.filter is:
resource.type="gce_route" AND (protoPayload.methodName:"compute.routes.delete" OR protoPayload.methodName:"compute.routes.insert")
Got this resolved in the end.
It turns out it's a common issue:
https://issuetracker.google.com/issues/143436657?pli=1
I had to append AND resource.type="global" to the filter parameter in my Terraform module, after the metric name.

AWS Kendra PreHook Lambdas for Data Enrichment

I am working on a POC using Kendra and Salesforce. The connector allows me to connect to my Salesforce Org and index knowledge articles. I have been able to set this up and it is currently working as expected.
There are a few custom fields and data points I want to bring over to help enrich the data even more. One of these is an additional answer/body field that contains key information for searching.
This field in my data source is rich text containing HTML and is often larger than 2048 characters, a limit that seems to be imposed on String data fields within Kendra.
I came across two hooks that are built in for Pre and Post data enrichment. My thought here is that I can use the pre hook to strip HTML tags and truncate the field before it gets stored in the index.
Hook Reference: https://docs.aws.amazon.com/kendra/latest/dg/API_CustomDocumentEnrichmentConfiguration.html
Current Setup:
I have added a new field to the index called sf_answer_preview. I then mapped this field in the data source to the rich text field in the Salesforce org.
If I run this as is, it indexes about 200 of the 1,000 articles and errors on the rest because they exceed the 2048-character limit in that field, which is why I am trying to set up the enrichment.
I set up the above enrichment on my data source. I specified a lambda to use in the pre-extraction, with no additional filtering, so it runs on every article. I am not 100% certain what the S3 bucket is for since I am using a data source, but it appears to be required, so I have added that as well.
For my lambda, I create the following:
exports.handler = async (event) => {
    // Debug
    console.log(JSON.stringify(event));

    // Vars
    const s3Bucket = event.s3Bucket;
    const s3ObjectKey = event.s3ObjectKey;
    const meta = event.metadata;

    // Answer
    const answer = meta.attributes.find(o => o.name === 'sf_answer_preview');

    // Remove HTML tags
    const removeTags = (str) => {
        if ((str === null) || (str === '')) return false;
        return str.toString().replace(/(<([^>]+)>)/ig, '');
    };

    // Truncate
    const truncate = (input) => input.length > 2000 ? `${input.substring(0, 2000)}...` : input;

    let result = truncate(removeTags(answer.value.stringValue));

    // Response
    const response = {
        "version": "v0",
        "s3ObjectKey": s3ObjectKey,
        "metadataUpdates": [
            { "name": "sf_answer_preview", "value": { "stringValue": result } }
        ]
    };

    // Debug
    console.log(response);

    // Response
    return response;
};
Based on the contract for the lambda described here, it appears pretty straightforward. I access the event, find the field in the data called sf_answer_preview (the rich text field from Salesforce), and I strip and truncate the value to 2,000 characters.
For the response, I am telling it to update that field to the new formatted answer so that it complies with the field limits.
When I log the data in the lambda, the pre-extraction event details are as follows:
{
    "s3Bucket": "kendrasfdev",
    "s3ObjectKey": "pre-extraction/********/22736e62-c65e-4334-af60-8c925ef62034/https://*********.my.salesforce.com/ka1d0000000wkgVAAQ",
    "metadata": {
        "attributes": [
            {
                "name": "_document_title",
                "value": {
                    "stringValue": "What majors are under the Exploratory track of Health and Life Sciences?"
                }
            },
            {
                "name": "sf_answer_preview",
                "value": {
                    "stringValue": "A complete list of majors affiliated with the Exploratory Health and Life Sciences track is available online. This track allows you to explore a variety of majors related to the health and life science professions. For more information, please visit the Exploratory program description. "
                }
            },
            {
                "name": "_data_source_sync_job_execution_id",
                "value": {
                    "stringValue": "0fbfb959-7206-4151-a2b7-fce761a46241"
                }
            }
        ]
    }
}
The Problem:
When this runs, I am still getting the same field-limit error that the content exceeds the character limit. When I run the lambda on the raw data, it strips and truncates it as expected. I am thinking that, for some reason, the response from the lambda isn't setting the field value to the new content correctly, and Kendra is still trying to use the data directly from Salesforce, thus throwing the error.
Has anyone set up lambdas for Kendra before that might know what I am doing wrong? This seems pretty common to be able to do things like strip PII information before it gets indexed, so I must be slightly off on my setup somewhere.
Any thoughts?
Since you are still passing the rich text as a metadata field of the document, the character limit still applies: the document fails at the validation step of the API call and never reaches the enrichment step. A workaround is to somehow append those rich-text fields to the body of the document so that your lambda can access them there. But if those fields are auto-generated for your documents by your data source, that might not be easy.
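A sketch of that workaround's merge step as a plain helper (the function name and shape are mine, not part of the Kendra contract): strip the HTML from the oversized attribute, append it to the document body text, and drop the attribute so it never hits the 2048-character validation.

```javascript
// Hypothetical helper: fold an oversized rich-text attribute into the
// document body instead of keeping it as a metadata attribute.
function mergeAttributeIntoBody(bodyText, attributes, attrName) {
    const attr = attributes.find((a) => a.name === attrName);
    if (!attr) return { bodyText, attributes };

    // Strip HTML tags from the rich-text value (same regex as the question's lambda)
    const plain = attr.value.stringValue.toString().replace(/(<([^>]+)>)/ig, '');

    return {
        bodyText: bodyText + '\n' + plain,
        // Remove the attribute so the metadata character limit no longer applies
        attributes: attributes.filter((a) => a.name !== attrName)
    };
}
```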

I am learning to create AWS Lambdas. I want to create a "chain": S3 -> 4 Chained Lambda()'s -> RDS. I can't get the first lambda to call the second

I really tried everything. Surprisingly, Google doesn't have many answers when it comes to this.
When a certain .csv file is uploaded to a S3 bucket I want to parse it and place the data into a RDS database.
My goal is to learn the lambda serverless technology, this is essentially an exercise. Thus, I over-engineered the hell out of it.
Here is how it goes:
S3 Trigger when the .csv is uploaded -> call lambda (this part fully works)
AAA_Thomas_DailyOverframeS3CsvToAnalytics_DownloadCsv downloads the CSV from S3 and finishes with essentially the plaintext of the file. It is then supposed to pass that to the next lambda; the way I am trying to do this is by configuring the second lambda as its destination. The function works, but the second lambda is never called and I don't know why.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv gets the plaintext as input and returns a javascript object with the parsed data.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_DecryptRDSPass only connects to KMS, gets the encrypted RDS password, and passes it, along with the data it received as input, to the last lambda.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_PutDataInRds then finally puts the data in RDS.
I created a custom VPC with custom subnets, route tables, gateways, peering connections, etc. I don't know if this is relevant, but function 2. only has access to the S3 endpoint, 3. does not have any internet access whatsoever, 4. is the only one that has normal internet access (it's the only way to connect to KMS), and 5. only has access to the peered VPC which hosts the RDS.
This is the code of the first lambda:
// dependencies
const AWS = require('aws-sdk');
const util = require('util');
const s3 = new AWS.S3();
let region = process.env;

exports.handler = async (event, context, callback) => {
    const checkDates = process.env.CheckDates == "false" ? false : true;
    const ret = [];

    const checkFileDate = function (actualFileName) {
        if (!checkDates) return true;
        const d = new Date();
        // getUTCMonth() is zero-based, so add 1 to get the calendar month
        const pad = (n) => n.toString().padStart(2, '0');
        const expectedFileName = 'Overframe_-_Analytics_by_Day_Device_' +
            d.getUTCFullYear() + '-' + pad(d.getUTCMonth() + 1) + '-' + pad(d.getUTCDate());
        return expectedFileName == actualFileName.substr(0, expectedFileName.length);
    };

    for (const record of event.Records) {
        try {
            if (record.s3.bucket.name != process.env.S3BucketName) {
                console.error('Unexpected notification, unknown bucket: ' + record.s3.bucket.name);
                continue;
            }
            if (!checkFileDate(record.s3.object.key)) {
                console.error('Unexpected file, or date is not today\'s: ' + record.s3.object.key);
                continue;
            }
            const params = {
                Bucket: record.s3.bucket.name,
                Key: record.s3.object.key
            };
            const csvFile = await s3.getObject(params).promise();
            const allText = csvFile.Body.toString('utf-8');
            console.log('Loaded data:', { Bucket: params.Bucket, Filename: params.Key, Text: allText });
            ret.push(allText);
        } catch (error) {
            console.log("Couldn't download CSV from S3", error);
            return { statusCode: 500, body: error };
        }
    }

    // I've been randomly trying different ways to return the data, none works. The data itself is correct, I checked with console.log()
    const response = {
        statusCode: 200,
        body: { "Records": ret }
    };
    return ret;
};
This is how the lambda was set up, with the second lambda configured as its on-success destination (screenshot omitted).
I haven't posted on Stackoverflow in 7 years. That's how desperate I am. Thanks for the help.
Rather than getting each Lambda to call the next one, take a look at AWS's managed service for state machines, Step Functions, which can handle this workflow for you.
By defining inputs and outputs you can pass each function's output to the next one, with retry logic built in.
If you don't have much experience, AWS has a tutorial on setting up a step function by chaining Lambdas.
By using this you also won't need to account for configuration issues such as Lambda timeouts. In addition, it keeps your code more modular, which makes it easier to test individual functionality and to isolate issues.
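The chain described in the question maps naturally onto an Amazon States Language definition; a minimal sketch (REGION and ACCOUNT in the ARNs are placeholders):

```json
{
  "Comment": "S3 CSV -> RDS pipeline as a chain of Lambda tasks",
  "StartAt": "DownloadCsv",
  "States": {
    "DownloadCsv": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_DownloadCsv",
      "Next": "ParseCsv"
    },
    "ParseCsv": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv",
      "Next": "DecryptRDSPass"
    },
    "DecryptRDSPass": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_DecryptRDSPass",
      "Next": "PutDataInRds"
    },
    "PutDataInRds": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_PutDataInRds",
      "End": true
    }
  }
}
```

Each state's output becomes the next state's input, which replaces the manual hand-off the question struggles with.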
The execution roles of all Lambda functions, whose destinations include other Lambda functions, must have the lambda:InvokeFunction IAM permission in one of their attached IAM policies.
Here's a snippet from Lambda documentation:
To send events to a destination, your function needs additional permissions. Add a policy with the required permissions to your function's execution role. Each destination service requires a different permission, as follows:
Amazon SQS – sqs:SendMessage
Amazon SNS – sns:Publish
Lambda – lambda:InvokeFunction
EventBridge – events:PutEvents
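To illustrate the Lambda case, the execution-role policy statement granting that permission might look like this (the resource ARN is a placeholder; scope it to your destination function):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv"
    }
  ]
}
```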

How to add an inventory host to specific group using ansible tower API? So that it will display on related groups list on UI

I am unable to assign a host to a group in an Ansible Tower inventory using the REST API. If anyone has worked on this, please share the request and body.
I found a solution. For me, the problem was that I was searching in api/v2/inventories/{id}/groups/; turns out you actually have to look in api/v2/groups/{id}/hosts/.
Add host to inventory group
URI: {your host}/api/v2/groups/{id}/hosts/
Method: POST
Payload:
{
    "name": "{hostname}",
    "description": "",
    "enabled": true,
    "instance_id": "",
    "variables": ""
}
This will create a host in the specified group.
In AWX and Ansible Tower, you can navigate to that URL in your browser and scroll all the way down; if you are allowed to POST, there will be a form there for the payload. You can fill it in and POST it right there in the browser.
When you are at the inventory group in the normal GUI, you can find the id of the inventory group in the URL.
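As a sketch, the same request expressed in code (the builder function is mine; authentication is elided, since Tower/AWX accepts either a token or basic auth):

```javascript
// Hypothetical helper: build the POST request that adds a host to a group
// via {baseUrl}/api/v2/groups/{id}/hosts/, matching the payload above.
function buildAddHostRequest(baseUrl, groupId, hostname) {
    return {
        method: "POST",
        url: `${baseUrl}/api/v2/groups/${groupId}/hosts/`,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            name: hostname,
            description: "",
            enabled: true,
            instance_id: "",
            variables: ""
        })
    };
}

// Usage with fetch (not run here):
// fetch(req.url, req)  where req = buildAddHostRequest('https://tower.example.com', 42, 'web01');
```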

How to retrieve password from aws.amazon.com if lost my .pem file?

I want to log in to an EC2 instance using Remote Desktop, which requires the user's password. I've been trying to log in, but some security changes on the AWS side mean it no longer works. To retrieve the password I now need the .pem file, but my client has misplaced it.
I have tried resetting the administrator password for the instance, but step 5-d stops me. I have attached a screenshot with more detail.
I am also wondering whether the fingerprint can be converted into an RSA PRIVATE KEY, since I do have the fingerprint. If I could recover the RSA key, I could decrypt the password. Is there any online method to do that?
I also tried creating a new instance and attaching the old volume, but still could not get the password.
Is anybody facing the same issue? If you have any solution, please let me know.
Thanks in advance.
Ah! That documented process won't work if the AMI used to originally launch the instance has been deprecated (expired).
For step 5, simply select your AMI from the AMIs section of the EC2 management console, then choose the Launch command from the Actions menu. This will let you launch a new machine using the AMI you created. Make sure you choose a new keypair for which you have the .pem file.
Then, just continue from step 6. The general steps are:
Stop your original instance
Detach the boot disk ("Disk A")
Launch another Windows instance (or use one you already have access to)
Attach Disk A to the 2nd instance
Update the \Program Files\Amazon\Ec2ConfigService\Settings\config.xml file on Disk A and update the Ec2SetPassword parameter to Enabled (see Step 9 on that documentation page)
Detach Disk A from the 2nd instance and reattach it to the original instance (from Step 5 on the documentation page)
Start the original instance and try to login
I purchased Developer Support and found the solution, so I think I should share it here, which will save you money as well.
An alternative way to reset the password is listed below.
Use SSM[1]
First, create a new "Recovery" IAM role with the AmazonEC2RoleforSSM policy attached.
Apply that role to your instance by right-clicking the instance, then "Instance Settings", then "Attach/Replace IAM Role". Note: you may need to reboot before the role applies.
Reset the local administrator password:
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the left side of the page, click "Documents", then "Create Document", and paste the following JSON script into "Contents". Then click the blue "Create Document" button on the bottom right of the page.
{
    "schemaVersion": "1.2",
    "description": "Changes the local administrator password.",
    "parameters": {
        "Password": {
            "type": "String",
            "default": "",
            "description": "(Required) The new password for the local administrator account."
        }
    },
    "runtimeConfig": {
        "aws:runPowerShellScript": {
            "properties": [
                {
                    "id": "0.aws:runPowerShellScript",
                    "timeoutSeconds": 60,
                    "runCommand": [
                        "",
                        "Function Reset-AdminPassword {",
                        " Param (",
                        " [Parameter(Mandatory=$true)]",
                        " [string]$Password",
                        " )",
                        "",
                        " $computer = [ADSI]\"WinNT://$($env:computername)\"",
                        " $users = $computer.psbase.Children | Where-Object {$_.psbase.schemaclassname -eq 'user'}",
                        " foreach ($user in $users) {",
                        " $userObject = New-Object System.Security.Principal.NTAccount($user.Name)",
                        " $userSID = $userObject.Translate([System.Security.Principal.SecurityIdentifier])",
                        " if(($userSID.Value.Substring(0,6) -eq 'S-1-5-') -and ($userSID.Value.Substring($userSID.Value.Length-4, 4) -eq '-500')) {",
                        " $user.SetPassword($Password)",
                        " Write-Host \"Successfully changed the password for $($user.Name).\"",
                        " }",
                        " }",
                        "}",
                        "",
                        "Reset-AdminPassword -Password {{ Password }}"
                    ]
                }
            ]
        }
    }
}
On the left side of the EC2 Management Console, select "Run Command" and click the blue "Run a Command" button. Select the document created earlier from the list under "Command Document."
Under "Target Instances", select the instance whose password you would like to reset.
Enter the new password in the "Password" field.
Click the blue "Run" button on the bottom right of the screen.
On the next page you should see a green “Success” box showing a command ID. You can use this Command ID to keep track of the changes made by the command by clicking on it or by searching this command ID in the future under the “Run Command” section.
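If you'd rather script the Run Command step than click through the console, here is a sketch using the AWS SDK for JavaScript; the builder helper is mine, and the document name and instance ID are placeholders:

```javascript
// Hypothetical helper: build the sendCommand parameters for the
// password-reset SSM document created above.
function buildResetCommand(documentName, instanceId, newPassword) {
    return {
        DocumentName: documentName,         // the name you gave the document
        InstanceIds: [instanceId],          // the instance to reset
        Parameters: { Password: [newPassword] }
    };
}

// Usage with the AWS SDK for JavaScript v2 (not run here):
// const AWS = require('aws-sdk');
// new AWS.SSM().sendCommand(
//     buildResetCommand('ResetLocalAdminPassword', 'i-0123456789abcdef0', 'NewP@ssw0rd!')
// ).promise().then((res) => console.log('Command ID:', res.Command.CommandId));
```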