I'm using the ColdFusion CFPayment library for processing Stripe payments. The payments are working fine, but now I want to pass metadata values along with the transaction.
I went into the stripe.cfc file in the library and added static metadata values, but the Stripe logs show a request like this:
{
    "firstname2": "test",
    "source": "tok_1**************",
    "currency": "usd",
    "description": "test",
    "amount": "3300"
}
The value firstname2 was supposed to be inside metadata, but it is being placed at the top level of the JSON instead of being nested under metadata.
Added later
TransactionData = {
    "metadata[test]" = "1",
    "metadata[FirstName]" = "Abdur",
    "metadata[LastName]" = "Rehman",
    "description" = "Online Donation"
};
gw_response = gw.purchase(money=money, account=account, options=TransactionData);
I modified my code according to your example, but the TransactionData items are not showing up in the request.
I've been successfully using metadata with Stripe using CFPayment. When authorizing, you need to pass an object as the third parameter and name your keys something like this:
TransactionData = {
"statement_descriptor" = "Descriptor Override (5-22 chars)",
"metadata[test]" = "1",
"metadata[donorid]" = DonorID,
"metadata[accountid]" = AccountID,
"description" = "Online Donation"
};
authResponse = gateway.authorize(money, cardAccount, TransactionData);
"description" and "statement-descriptor" are not required, but I like to include them so I can override the Stripe account defaults.
I was able to pass the metadata in the purchase call of the CFPayment library. The purchase function in the stripe.cfc file was only passing pre-defined values to the payment gateway. I added the following code to the purchase function and it worked.
<!--- Copy each options.metadata key into a form-encoded metadata[key] field --->
<cfloop collection="#arguments.options.metadata#" item="option_index">
    <cfset p["metadata[#option_index#]"] = arguments.options.metadata[option_index] />
</cfloop>
And passed the values to the purchase function using the following code:
options["metadata"]["First Name"] = "Abdur";
options["metadata"]["Last Name"] = "Rehman";
gw_response = gw.purchase(money=money, account=account, options=options);
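For reference, a client library that builds the form-encoded request for you produces exactly the metadata[FirstName]-style fields constructed in the loop above. A minimal sketch with Node and the official stripe package (the secret key and token are placeholders; this uses the legacy Charges API, which matches the request shape shown in the question's log):
// Hypothetical Node comparison: the stripe library serializes the nested
// metadata object into the same form-encoded metadata[...] fields.
const stripe = require('stripe')('sk_test_...'); // placeholder secret key

async function donate() {
    return stripe.charges.create({
        amount: 3300,
        currency: 'usd',
        source: 'tok_...', // placeholder card token
        description: 'Online Donation',
        metadata: { FirstName: 'Abdur', LastName: 'Rehman' }
    });
}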
I am currently learning how to use the AWS Pricing SDK. My objective is to get all the prices of AWS virtual machines, as prices can differ from one region to another.
Basically, I am running this code:
AmazonPricingClient client = new(keyId, key, RegionEndpoint.USEast1);

// Development filters to handle a smaller amount of data
GetProductsRequest request = new() {
    ServiceCode = "AmazonEC2",
    Filters = new() {
        new() {
            Field = "vcpu",
            Type = "TERM_MATCH",
            Value = "2"
        },
        new() {
            Field = "currentGeneration",
            Type = "TERM_MATCH",
            Value = "Yes"
        },
        new() {
            Field = "regionCode",
            Type = "TERM_MATCH",
            Value = "eu-west-1"
        },
        new() {
            Field = "operatingSystem",
            Type = "TERM_MATCH",
            Value = "Windows"
        }
    }
};
GetProductsResponse response = await client.GetProductsAsync(request);
Taking the filters into consideration (added to reduce the amount of data while testing the code), I will only get the prices of the matching virtual machines for the eu-west-1 region.
If I delete this dev filter (for production, for example), I will get the prices for every region, but each time I will also get this part of the returned JSON:
"product":{
"productFamily":"Compute Instance",
"attributes":{
"enhancedNetworkingSupported":"Yes",
"intelTurboAvailable":"No",
"memory":"16 GiB",
"dedicatedEbsThroughput":"Up to 3500 Mbps",
[...]
"operation":"RunInstances:000g",
"availabilityzone":"NA"
},
"sku":"2A56CED7V5PFGAH8"
}
And this part would be duplicated for each region.
Is there a way to tell the API that I just want the different prices of a specific virtual machine, using either the request or the response objects?
I may have missed some possibilities offered by the SDK, so feel free to tell me anything I can improve in that snippet, good practices, etc.
Thanks!
I am new to blockchain. I am using Alchemy, and my NFTs and NFT metadata are on Pinata. When I fetch my minted NFTs from the Alchemy API, the response gives me a list of contract addresses and token ids. Is there any way to get a list of meaningful names for my minted NFTs instead of ids (without using loops)? Or is there a way to store a meaningful name upon minting? Any help will be appreciated.
Response upon calling the API:
{"balance": "1", "contract": {"address": "0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}, "id": {"tokenId": "0x0000000000000000000000000000000000000000000000000000000000000000"}}]```
Is there any way to get a list of meaningful names of my minted NFTs instead of ids (without using loops)?
Yes! As of July 12th, 2022 (the time of writing), the getNFTs endpoint includes a withMetadata query param option that defaults to true (see docs).
That means that the response should include the info you might want, including:
title: name of the NFT asset
description: brief human-readable description
media.gateway: public gateway uri for the raw asset
etc.
See full documentation here: https://docs.alchemy.com/alchemy/enhanced-apis/nft-api/getnfts
An example response might look like this:
{
    "ownedNfts": [
        {
            "contract": {
                "address": "0x0beed7099af7514ccedf642cfea435731176fb02"
            },
            "id": {
                "tokenId": "28",
                "tokenMetadata": {
                    "tokenType": "ERC721"
                }
            },
            "title": "DuskBreaker #28",
            "description": "Breakers have the honor of serving humanity through their work on The Dusk. They are part of a select squad of 10,000 recruits who spend their days exploring a mysterious alien spaceship filled with friends, foes, and otherworldly technology.",
            "tokenUri": {
                "raw": "https://duskbreakers.gg/api/breakers/28",
                "gateway": "https://duskbreakers.gg/api/breakers/28"
            },
            "media": [
                {
                    "raw": "https://duskbreakers.gg/breaker_images/28.png",
                    "gateway": "https://duskbreakers.gg/breaker_images/28.png"
                }
            ],
            "metadata": {
                ...
            }
        },
        ...
    ]
    ...
}
You should then be able to do something like this to get your names:
const names = ownedNfts.map((nft) => nft.title);
Use the getNFTMetadata method to get information on each NFT.
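A minimal sketch of that per-token lookup, assuming the v2 REST endpoint from the Alchemy docs linked above (the API key is a placeholder; the contract address and token id are taken from the example response, and the call runs inside an async context):
// Hypothetical example: fetch metadata for a single NFT from Alchemy's
// getNFTMetadata endpoint.
const apiKey = 'demo'; // placeholder API key
const contractAddress = '0x0beed7099af7514ccedf642cfea435731176fb02';
const tokenId = '28';
const url = `https://eth-mainnet.g.alchemy.com/nft/v2/${apiKey}/getNFTMetadata` +
    `?contractAddress=${contractAddress}&tokenId=${tokenId}`;

const nft = await fetch(url).then((res) => res.json());
console.log(nft.title); // e.g. "DuskBreaker #28"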
I am working on a POC using Kendra and Salesforce. The connector allows me to connect to my Salesforce Org and index knowledge articles. I have been able to set this up and it is currently working as expected.
There are a few custom fields and data points I want to bring over to help enrich the data even more. One of these is an additional answer/body field that will contain key information for searching.
This field in my data source is rich text containing HTML and is often larger than 2048 characters, a limit that seems to be imposed on String fields within Kendra.
I came across two hooks that are built in for Pre and Post data enrichment. My thought here is that I can use the pre hook to strip HTML tags and truncate the field before it gets stored in the index.
Hook Reference: https://docs.aws.amazon.com/kendra/latest/dg/API_CustomDocumentEnrichmentConfiguration.html
Current Setup:
I have added a new field to the index called sf_answer_preview. I then mapped this field in the data source to the rich text field in the Salesforce org.
If I run this as is, it will index about 200 of the 1,000 articles and give an error that the remaining articles exceed the 2048-character limit on that field, hence why I am trying to set up the enrichment.
I set up the above enrichment on my data source. I specified a lambda to use in the pre-extraction step and no additional filtering, so it runs on every article. I am not 100% certain what the S3 bucket is for since I am using a data source, but it appears to be needed, so I have added that as well.
For my lambda, I create the following:
exports.handler = async (event) => {
    // Debug
    console.log(JSON.stringify(event));

    // Vars
    const s3Bucket = event.s3Bucket;
    const s3ObjectKey = event.s3ObjectKey;
    const meta = event.metadata;

    // Answer
    const answer = meta.attributes.find(o => o.name === 'sf_answer_preview');

    // Remove HTML tags (return an empty string for null/empty input so truncate can't fail)
    const removeTags = (str) => {
        if (str === null || str === '') return '';
        return str.toString().replace(/(<([^>]+)>)/ig, '');
    };

    // Truncate to stay under the 2048-character field limit
    const truncate = (input) => input.length > 2000 ? `${input.substring(0, 2000)}...` : input;

    let result = truncate(removeTags(answer.value.stringValue));

    // Response
    const response = {
        "version": "v0",
        "s3ObjectKey": s3ObjectKey,
        "metadataUpdates": [
            { "name": "sf_answer_preview", "value": { "stringValue": result } }
        ]
    };

    // Debug
    console.log(response);

    // Response
    return response;
};
Based on the contract for the lambda described here, it appears pretty straightforward. I access the event, find the field in the data called sf_answer_preview (the rich text field from Salesforce), and I strip and truncate the value to 2,000 characters.
For the response, I am telling it to update that field to the new formatted answer so that it complies with the field limits.
When I log the data in the lambda, the pre-extraction event details are as follows:
{
    "s3Bucket": "kendrasfdev",
    "s3ObjectKey": "pre-extraction/********/22736e62-c65e-4334-af60-8c925ef62034/https://*********.my.salesforce.com/ka1d0000000wkgVAAQ",
    "metadata": {
        "attributes": [
            {
                "name": "_document_title",
                "value": {
                    "stringValue": "What majors are under the Exploratory track of Health and Life Sciences?"
                }
            },
            {
                "name": "sf_answer_preview",
                "value": {
                    "stringValue": "A complete list of majors affiliated with the Exploratory Health and Life Sciences track is available online. This track allows you to explore a variety of majors related to the health and life science professions. For more information, please visit the Exploratory program description. "
                }
            },
            {
                "name": "_data_source_sync_job_execution_id",
                "value": {
                    "stringValue": "0fbfb959-7206-4151-a2b7-fce761a46241"
                }
            }
        ]
    }
}
The Problem:
When this runs, I am still getting the same error that the content exceeds the character limit. When I run the lambda on the raw data, it strips and truncates it as expected. I am thinking that the response from the lambda isn't setting the field value to the new content correctly, and Kendra is still trying to use the data directly from Salesforce, thus throwing the error.
Has anyone set up lambdas for Kendra before who might know what I am doing wrong? Being able to do things like strip PII before indexing seems like a common use case, so I must be slightly off in my setup somewhere.
Any thoughts?
Since you are still passing the rich text as a metadata field of the document, the character limit still applies: the document fails at the validation step of the API call and never reaches the enrichment step. A workaround is to append those rich text fields to the body of the document so that your lambda can access them there. But if those fields are auto-generated for your documents from your data sources, that might not be easy.
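A rough sketch of that workaround, assuming (per the data contract linked in the question) that the pre-extraction lambda may write altered content back to the staging bucket and point to it via s3ObjectKey in its response; the -modified key suffix is illustrative:
// Hypothetical pre-extraction handler: move the oversized field into the
// document body instead of returning it as a bounded metadata attribute.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    const { s3Bucket, s3ObjectKey, metadata } = event;
    const answer = metadata.attributes.find((o) => o.name === 'sf_answer_preview');

    // Read the document content staged by Kendra.
    const original = await s3.getObject({ Bucket: s3Bucket, Key: s3ObjectKey }).promise();

    // Append the rich text to the body so it is indexed as document content,
    // which is not subject to the 2048-character string field limit.
    const newBody = original.Body.toString('utf-8') + '\n' +
        (answer ? answer.value.stringValue : '');
    const newKey = `${s3ObjectKey}-modified`; // illustrative key naming
    await s3.putObject({ Bucket: s3Bucket, Key: newKey, Body: newBody }).promise();

    return {
        version: 'v0',
        s3ObjectKey: newKey,
        metadataUpdates: []
    };
};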
I'm trying to set up AWS CloudSearch with a DynamoDB table. My data structure is something like this:
{
    "name": "John Smith",
    "phone": "0123 456 789",
    "business": {
        "name": "Johnny's Cool Co",
        "id": "12345",
        "type": "contractor",
        "suburb": "Sydney"
    },
    "profession": {
        "name": "Plumber",
        "id": "20"
    },
    "email": "johnsmith#gmail.com",
    "id": "354684354-4b32-53e3-8949846-211384"
}
Importing this data from DynamoDB to CloudSearch is a breeze; however, I want to be able to index on some of these nested object parameters (like business.name, profession.name, etc.).
CloudSearch is pulling in some of the nested objects, like suburb, but it seems impossible for it to differentiate between the name at the root of the object and the name within the business and profession objects.
Questions:
How do I make these nested parameters searchable? Can I index on business.name or something?
If #1 is not possible, can I somehow send my data through a transforming function before it gets to CloudSearch? This way I could flatten all of my objects and give the fields unique names like businessName and professionName.
EDIT:
My solution at the moment is to have a separate DynamoDB table which replicates our users table but stores it in a CloudSearch-friendly format. However, I don't like this solution at all, so any other ideas are totally welcome!
You can use DynamoDB Streams and a function that runs in Lambda to capture changes and add documents to CloudSearch, flattening them at that point, instead of keeping an additional DynamoDB table.
For example, within my Lambda function I keep a list of nested fields (within a "body" parent in this case) and flatten them using their field names. In the case of duplicate sub-field names, you can prepend the parent name to create a new key such as "body-name".
import requests  # used for the post to the CloudSearch endpoint below

# ... misc. setup: url (CloudSearch document endpoint) and awsauth are defined here ...
headers = { "Content-Type": "application/json" }
indexed_fields = ['app', 'name', 'activity']  # fields to flatten

def handler(event, context):  # lambda handler, called at each update
    document = {}  # document to be uploaded to cloudsearch
    document['id'] = ...  # your uid, likely from the dynamo update record
    document['type'] = 'add'
    all_fields = {}
    # flatten / pull out the info you want indexed
    for record in event['Records']:
        body = record['dynamodb']['NewImage']['body']['M']
        for key in indexed_fields:
            all_fields[key] = body[key]['S']
    document['fields'] = all_fields
    # post the update to the cloudsearch endpoint
    r = requests.post(url, auth=awsauth, json=document, headers=headers)
What would be the right way to change a JSON response key in an AWS AppSync response mapping template?
The JSON that I get looks like this:
{
    "tenant_id": 1,
    "id": "bd8ce6a8-8532-47ec-8b7f-dcd1f1603320",
    "header": "Header name",
    "visible": true
}
and what I would like to pass forward is
{
    "tenantId": 1,
    "id": "bd8ce6a8-8532-47ec-8b7f-dcd1f1603320",
    "header": "Header name",
    "visible": true
}
The schema wants the tenant id in the form tenantId, and the lambda returns it in the form tenant_id. I could change it in the lambda, but I would like to know how to do it in the response mapping template.
You could do this via the response mapping template for the field you are resolving, in the following manner:
Assuming the JSON response from your lambda is stored in the $response variable, you can return something like this:
#set($result = {
    "tenantId": $response.tenant_id,
    "id": "$response.id",
    "header": "$response.header",
    "visible": $response.visible
})
$util.toJson($result)
Alternatively, you could mutate the response from your lambda by setting a tenantId field, something like #set( $response.tenantId = $response.tenant_id ). Let me know if you still face an issue.
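And if you would rather handle it on the lambda side after all (the option mentioned in the question), the rename is small; a minimal sketch, with getItem standing in as a placeholder for however you fetch the record:
// Hypothetical lambda-side rename: copy tenant_id into tenantId before returning,
// so the response mapping template can pass the result straight through.
exports.handler = async (event) => {
    const item = await getItem(event); // placeholder data-access call
    const { tenant_id, ...rest } = item;
    return { tenantId: tenant_id, ...rest };
};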
Thanks,
Shankar