How do I structure Zend_Controller_Router_Route to handle the action and a param key in the same position? - zend-acl

Here is my route in JSON:
"jobs":
{
"type":"Zend_Controller_Router_Route",
"route":"/jobs/:action/:id/*",
"defaults":
{
"module":"api",
"controller":"jobs",
"action":"index",
"id":0
}
}
This allows for URIs like the following and works perfectly well so far:
/jobs/ -> action=index, id=0
/jobs/view/1 -> action=view, id=1
/jobs/edit/1 -> action=edit, id=1
However, I would like the :action position to also allow for URIs like the following:
/jobs/type/volunteer -> action=index, type=volunteer
/jobs/search/php%20developer -> action=index, search=php developer
So far I'm accomplishing this within App_Controller_Action::__call(). It works, but it's messy: until the request is dispatched, the action is technically still listed as "search" or "type", and the value meant for those keys is assigned to id.
This is causing an issue in the Zend_Acl checks I'm doing in a front controller plugin. As a workaround I've added "search" and "type" as permissions to my ACL, but again, this is messy; the ACL should remain clean of those semantics. I'd like the request to be modified before it gets to the ACL plugin.
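For reference, one way to push that remapping back into the router instead of __call() would be to declare more specific routes with a static second segment after the generic jobs route, so that /jobs/type/... and /jobs/search/... already resolve to action=index before the ACL plugin inspects the request. This is only a sketch in the same JSON route-config format: the route names jobs-type and jobs-search are made up, and it relies on ZF1 matching the most recently added route first.
"jobs-type":
{
    "type":"Zend_Controller_Router_Route",
    "route":"/jobs/type/:type/*",
    "defaults":
    {
        "module":"api",
        "controller":"jobs",
        "action":"index"
    }
},
"jobs-search":
{
    "type":"Zend_Controller_Router_Route",
    "route":"/jobs/search/:search/*",
    "defaults":
    {
        "module":"api",
        "controller":"jobs",
        "action":"index"
    }
}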

Related

Creating Batch Operations with AWS Amplify [GraphQL, DataStore, AppSync]

I've currently been handling batch operations with a for loop, but obviously, this is not the best approach, especially as I'm adding an 'upload by CSV' option, which will take 1000+ putItems.
I searched around for the best ways to implement this, specifically this link:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
However, even after following those steps mentioned I'm not able to achieve a batch operation. Below is my code for a 'batch delete' operation.
Here is my schema.graphql file:
type Client @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  company: String
  phone: String
  email: String
}
type Mutation {
  batchDelete(ids: [ID]): [Client]
}
I then create two new files: one request mapping template and one response mapping template.
#set($clientsdata = [])
#foreach($item in ${ctx.args.clients})
  $util.qr($clientsdata.delete($util.dynamodb.toMapValues($item)))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "Clients": $utils.toJson($clientsdata)
  }
}
and then, as per the tutorial, a "simple pass-through" response mapping template:
$util.toJson($ctx.result.data.Posts)
However, now when I run the batchDelete mutation, I keep getting nothing returned.
Would really appreciate guidance on this!
When it comes to performing DynamoDB batch operations in tandem with Amplify, note that the actual table name differs per environment: your "Client" table won't be recognized as "Clients", as you have written in the request mapping template, but rather by the name it is given on Amplify push, per environment.
E.g. Client-<some alphanumeric number>-envName
Add the full name of the table to your request and response mapping templates.
Also your foreach statement should read:
#foreach($item in ${ctx.args.clientsdata}) wherein you iterate through each of the items in the array that is passed as the argument to the context object.
Hope this helps.
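Putting both points together, a corrected request mapping template might look roughly like the following. This is only a sketch: "Client-XXXXXXXXXXXX-dev" is a placeholder for the environment-specific table name amplify push created, and it assumes the resolver is attached to the Client table and that the IDs arrive as the ids argument defined in the schema.
#set($clientsdata = [])
#foreach($id in ${ctx.args.ids})
  #set($key = {})
  $util.qr($key.put("id", $util.dynamodb.toString($id)))
  $util.qr($clientsdata.add($key))
#end
{
  "version": "2018-05-29",
  "operation": "BatchDeleteItem",
  "tables": {
    "Client-XXXXXXXXXXXX-dev": $util.toJson($clientsdata)
  }
}
The pass-through response template would then reference the same environment-specific table name, for example:
$util.toJson($ctx.result.data.get("Client-XXXXXXXXXXXX-dev"))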

AWS Kendra PreHook Lambdas for Data Enrichment

I am working on a POC using Kendra and Salesforce. The connector allows me to connect to my Salesforce Org and index knowledge articles. I have been able to set this up and it is currently working as expected.
There are a few custom fields and data points I want to bring over to help enrich the data even more. One of these is an additional answer / body that will contain key information for the searching.
This field in my data source is rich text containing HTML and is often larger than 2048 characters, a limit that seems to be imposed on String data fields within Kendra.
I came across two hooks that are built in for Pre and Post data enrichment. My thought here is that I can use the pre hook to strip HTML tags and truncate the field before it gets stored in the index.
Hook Reference: https://docs.aws.amazon.com/kendra/latest/dg/API_CustomDocumentEnrichmentConfiguration.html
Current Setup:
I have added a new field to the index called sf_answer_preview. I then mapped this field in the data source to the rich text field in the Salesforce org.
If I run this as is, it indexes about 200 of the 1,000 articles and errors on the rest because they exceed the 2048-character limit for that field, which is why I am trying to set up the enrichment.
I set up the above enrichment on my data source. I specified a Lambda to use in the pre-extraction step, with no additional filtering, so it runs on every article. I am not 100% certain what the S3 bucket is for, since I am using a data source, but it appears to be needed, so I have added that as well.
For my lambda, I create the following:
exports.handler = async (event) => {
  // Debug
  console.log(JSON.stringify(event))
  // Vars
  const s3Bucket = event.s3Bucket;
  const s3ObjectKey = event.s3ObjectKey;
  const meta = event.metadata;
  // Answer
  const answer = meta.attributes.find(o => o.name === 'sf_answer_preview');
  // Remove HTML Tags
  const removeTags = (str) => {
    if ((str===null) || (str===''))
      return false;
    else
      str = str.toString();
    return str.replace( /(<([^>]+)>)/ig, '');
  }
  // Truncate
  const truncate = (input) => input.length > 2000 ? `${input.substring(0, 2000)}...` : input;
  let result = truncate(removeTags(answer.value.stringValue));
  // Response
  const response = {
    "version" : "v0",
    "s3ObjectKey": s3ObjectKey,
    "metadataUpdates": [
      {"name":"sf_answer_preview", "value":{"stringValue":result}}
    ]
  }
  // Debug
  console.log(response)
  // Response
  return response
};
Based on the contract for the Lambda described here, it appears pretty straightforward: I access the event, find the attribute called sf_answer_preview (the rich text field from Salesforce), and strip and truncate the value to 2,000 characters.
For the response, I am telling it to update that field to the new formatted answer so that it complies with the field limits.
When I log the data in the lambda, the pre-extraction event details are as follows:
{
  "s3Bucket": "kendrasfdev",
  "s3ObjectKey": "pre-extraction/********/22736e62-c65e-4334-af60-8c925ef62034/https://*********.my.salesforce.com/ka1d0000000wkgVAAQ",
  "metadata": {
    "attributes": [
      {
        "name": "_document_title",
        "value": {
          "stringValue": "What majors are under the Exploratory track of Health and Life Sciences?"
        }
      },
      {
        "name": "sf_answer_preview",
        "value": {
          "stringValue": "A complete list of majors affiliated with the Exploratory Health and Life Sciences track is available online. This track allows you to explore a variety of majors related to the health and life science professions. For more information, please visit the Exploratory program description. "
        }
      },
      {
        "name": "_data_source_sync_job_execution_id",
        "value": {
          "stringValue": "0fbfb959-7206-4151-a2b7-fce761a46241"
        }
      }
    ]
  }
}
The Problem:
When this runs, I am still getting the same field-limit error saying the content exceeds the character limit. When I run the Lambda on the raw data, it strips and truncates it as expected. My thinking is that, for some reason, the response from the Lambda isn't setting the field value to the new content correctly, and the index is still trying to use the data directly from Salesforce, thus throwing the error.
Has anyone set up Lambdas for Kendra before who might know what I am doing wrong? Being able to do things like strip PII before it gets indexed seems like a pretty common need, so I must be slightly off in my setup somewhere.
Any thoughts?
Since you are still passing the rich text as a metadata field of the document, the character limit still applies, so the document fails at the validation step of the API call and never reaches the enrichment step. A workaround is to somehow append those rich text fields to the body of the document so that your Lambda can access them there. But if those fields are auto-generated for your documents from your data sources, that might not be easy.

AWS Config: Custom rule for required-tags says "no tags" even when "Name" tag exists

I have customized the managed rule required-tags and modified the Lambda function to extend it to up to 9 tags and more.
The code seems to work and gives me the expected result.
This rule checks for the tags given in "Rule Parameters". Resources with no tags or only some of the tags are non-compliant; fully tagged resources are compliant.
The problem I face is that when a new EC2 instance is created, the custom rule triggered by configuration changes reports "no tags" even when the "Name" tag is present.
When I re-evaluate a second time, I get the expected result (the missing tags are reported, except for the Name tag, which is present).
The condition goes like this:
if evaluation["compliance_type"] == "NON_COMPLIANT":
print ("NON_COMPLIANT")
if len(evaluation["current_tags"]) > 0:
print ("Non zero tags")
// evaluation report
else:
print ("Zero tags")
// evaluation report
The EC2 instance was launched around 12:39 PM.
CloudWatch logs after the automatic trigger (configuration changes), around 12:41 PM:
{
  'current_tags': [],
  'compliance_type': 'NON_COMPLIANT',
  'annotation': 'Name, Customer, Environment, etc not present'
}
CloudWatch logs after manual re-evaluation, around 12:43 PM:
{
  'current_tags': [{u'value': u'instance_name', u'key': u'Name'}],
  'compliance_type': 'NON_COMPLIANT',
  'annotation': 'Customer, Environment, etc not present'
}
The event payload passed to the Lambda function (trigger: configuration changes) at instance creation time has current_tags empty, even though the Name tag is added during instance creation and does exist. Is there a way to find out how and when the tags get added to the instance? Or can the trigger be delayed (without resorting to a periodic trigger)?
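One possible workaround, sketched below under assumptions about your setup (that the configuration item exposes "tags" and "resourceId" keys and that the rule's Lambda role has ec2:DescribeTags permission), is to fall back to querying EC2 directly whenever the configuration item arrives with an empty tag set, since your logs suggest the first configuration item for a new instance can be recorded before its tags show up:
import boto3

ec2 = boto3.client("ec2")

def get_current_tags(configuration_item):
    # Tags as recorded in the configuration item (may be empty right after launch)
    tags = configuration_item.get("tags") or {}
    if tags:
        return tags
    # Fall back to asking EC2 directly, in case the configuration item was
    # captured before the instance's tags were attached
    resource_id = configuration_item["resourceId"]
    response = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [resource_id]}]
    )
    return {t["Key"]: t["Value"] for t in response["Tags"]}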

How to add an inventory host to a specific group using the Ansible Tower API, so that it will display in the related groups list in the UI?

I am unable to assign a host to a group in an Ansible Tower inventory using the REST API. If anyone has worked on this, please let me know the request and body.
I found a solution. For me, the problem was that I was searching in api/v2/inventories/{id}/groups/; turns out you actually have to look in api/v2/groups/{id}/hosts/.
Add host to inventory group
URI: {your host}/api/v2/groups/{id}/hosts/
Method: POST
Payload:
{
  "name": "{hostname}",
  "description": "",
  "enabled": true,
  "instance_id": "",
  "variables": ""
}
This will create a host in the specified group.
In AWX and Ansible Tower, you can navigate to the URL in your browser and scroll all the way down; if you are allowed to do a POST, there will be a form there that has the payload. You can fill it in and post it right there in the browser.
When you are at the inventory group in the normal GUI, you can find the id of the inventory group in the URL.
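For example, a minimal sketch of the same POST using Python's requests library; the Tower URL, credentials, group id, and hostname below are placeholders for your own values:
import requests

TOWER_URL = "https://tower.example.com"  # placeholder: your Tower/AWX host
GROUP_ID = 42                            # placeholder: taken from the group's URL in the GUI

payload = {
    "name": "myhost.example.com",  # the host to create in the group
    "description": "",
    "enabled": True,
    "instance_id": "",
    "variables": "",
}

# POST to /api/v2/groups/{id}/hosts/ creates the host inside that group
response = requests.post(
    f"{TOWER_URL}/api/v2/groups/{GROUP_ID}/hosts/",
    json=payload,
    auth=("admin", "password"),  # or pass an OAuth2 token in an Authorization header
)
response.raise_for_status()
print(response.json())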

Pubnub functions not working on AWS Lambda

I'm trying to use the history method provided by PubNub to get the chat history of a channel, running my Node.js code on AWS Lambda. However, my function is not getting called. I'm not sure if I'm doing it correctly, but here's the code snippet:
var publishKey = "pub-c-cfe10ea4-redacted";
var subscribeKey = "sub-c-fedec8ba-redacted";
var channelId = "ChatRoomDemo";
var uuid;
var pubnub = {};
function readMessages(intent, session, callback) {
  pubnub = require("pubnub")({
    publish_key : publishKey,
    subscribe_key: subscribeKey
  });
  pubnub.history({
    channel : 'ChatRoomDemo',
    callback : function(m) {
      console.log(JSON.stringify(m));
    },
    count : 100,
    reverse : false
  });
}
I expect the message history in JSON format to be displayed on the console.
I had the same problem and finally got it working. What you will need to do is allow the CIDR address for pubnub.com. This was a foreign idea to me until I figured it out! Here's how to do that to publish to a channel:
1. Copy the CIDR address for pubnub.com, which is 54.246.196.128/26 (Source) [WARNING: do not do this - see comment below]
2. Log into https://console.aws.amazon.com
3. Under "Services" go to "VPC"
4. On the left, under "Security," click "Network ACLs"
5. Click "Create Network ACL" and give it a name tag like "pubnub.com"
6. Select the VPC for your Lambda skill (if you're not sure, click around your Lambda function, you'll see it; you probably only have one listed, like me)
7. Click "Yes, Create"
8. Under the "Outbound Rules" tab, click "Edit"
9. For "Rule #" I just used "1"
10. For "Type" I used "HTTP (80)"
11. For "Destination" I pasted in the CIDR from step 1
12. "Save"
Note, if you're subscribing to a channel, you'll also need to add an "Inbound Rule" too.
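If you prefer to script it, a rough boto3 equivalent of the console steps above might look like the sketch below. The VPC ID and CIDR are placeholders, and, per the warning in step 1, you should look up PubNub's currently published address range rather than hardcoding one.
import boto3

ec2 = boto3.client("ec2")

# Create a dedicated network ACL in the Lambda function's VPC (placeholder VPC ID)
acl = ec2.create_network_acl(VpcId="vpc-0123456789abcdef0")
acl_id = acl["NetworkAcl"]["NetworkAclId"]
ec2.create_tags(Resources=[acl_id], Tags=[{"Key": "Name", "Value": "pubnub.com"}])

# Outbound rule #1: allow HTTP (80) to the CIDR range you looked up
ec2.create_network_acl_entry(
    NetworkAclId=acl_id,
    RuleNumber=1,
    Protocol="6",                 # TCP
    RuleAction="allow",
    Egress=True,                  # outbound rule
    CidrBlock="203.0.113.0/26",   # placeholder: use the current PubNub range
    PortRange={"From": 80, "To": 80},
)
# If you subscribe to a channel, add a matching inbound rule (Egress=False) as well.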