FilePond duplicates the file when I want to display an image from the server - Laravel Livewire

I know that I have to use the "files" property to display an already-uploaded image from the server in FilePond, but it always duplicates my uploaded file on the server and loads it again, with a percentage progress bar at the top right. I only want to display the already-uploaded image, without duplicating it and without the circular progress indicator filling up. How can I do that?
Here is the code:
const pond = FilePond.create({
    files: [
        {
            // The id of the file
            source: '12345',
            // A 'local' file has been uploaded
            // to the server in a previous session
            options: {
                type: 'local',
            },
        },
    ],
});
Here is the duplicated image: the original file is Screenshot from 2022-08-16 08-36-47.png, and the duplicate shows up as blob. I don't want to load the file again.
I have this issue when I want to edit a post.
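For reference, FilePond normally restores a 'local' file through the server's load endpoint instead of re-uploading it, so the intended setup would look roughly like this sketch (the endpoint paths are placeholders for whatever the Livewire backend exposes):

// Sketch only: with type 'local', FilePond issues a GET to server.load plus
// the source id to restore the file, rather than processing it again.
// The endpoint paths below are placeholders, not actual Livewire routes.
const pond = FilePond.create(document.querySelector('input[type="file"]'), {
    server: {
        process: '/filepond/process', // only used for newly added files (placeholder)
        load: '/filepond/load/',      // restores already-uploaded 'local' files (placeholder)
    },
    files: [
        {
            source: '12345',          // id of the file already on the server
            options: {
                type: 'local',
            },
        },
    ],
});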

Related

What timestamp does Amazon Connect use for recording filenames? Initiation timestamp or Disconnect timestamp?

As we know, Amazon Connect records calls and stores the recordings in an S3 bucket. I am looking for which timestamp I can use to build the filename myself in my code.
There are two candidate timestamps, the Initiation timestamp and the Disconnect timestamp, and the filenames follow the pattern contactId_timestamp_UTC, e.g. 7bb75057-76ae-4e7e-a140-44a50cc5954b_20220418T06:44_UTC.wav.
I have used the callStartTime to create these filenames and then fetched the files from S3 using a signed URL, but in a few cases there is a difference of one second (the file is stored on S3 with the timestamp incremented by one second), so I couldn't get the file from S3.
For example, the filename created by my application is 7bb75057-76ae-4e7e-a140-44a50cc5954b_20220418T06:44_UTC.wav, but the recording stored on S3 has the filename 7bb75057-76ae-4e7e-a140-44a50cc5954b_20220418T06:45_UTC.wav.
One last thing: is this timestamp available in the contact object, so I can use it?
Looks like the timestamp for the file name is based on ConnectedToAgentTimestamp which makes sense as the recording doesn't start until the caller is talking to an agent. ConnectedToAgentTimestamp is under AgentInfo in the contact details...
{
    "Contact": {
        "Arn": "arn:aws:connect:us-west-2:xxxxxxxxxx:instance/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/contact/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "Id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "InitiationMethod": "INBOUND",
        "Channel": "VOICE",
        "QueueInfo": {
            "Id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
            "EnqueueTimestamp": "2022-04-13T15:05:45.334000+12:00"
        },
        "AgentInfo": {
            "Id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
            "ConnectedToAgentTimestamp": "2022-04-13T15:06:25.706000+12:00"
        },
        "InitiationTimestamp": "2022-04-13T15:05:15.869000+12:00",
        "DisconnectTimestamp": "2022-04-13T15:08:08.298000+12:00",
        "LastUpdateTimestamp": "2022-04-13T15:08:08.299000+12:00"
    }
}
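A sketch of how the filename could be derived from that field (the format string matches the example filenames above; the function name and the use of @aws-sdk/client-connect are my own choices, not anything Connect prescribes):

// Sketch: look up ConnectedToAgentTimestamp via DescribeContact and build
// the contactId_YYYYMMDDTHH:MM_UTC.wav filename from it in UTC.
const { ConnectClient, DescribeContactCommand } = require('@aws-sdk/client-connect');
const connect = new ConnectClient({});

async function recordingFileName(instanceId, contactId) {
    const { Contact } = await connect.send(new DescribeContactCommand({
        InstanceId: instanceId,
        ContactId: contactId,
    }));
    const t = new Date(Contact.AgentInfo.ConnectedToAgentTimestamp);
    const pad = (n) => String(n).padStart(2, '0');
    const stamp = `${t.getUTCFullYear()}${pad(t.getUTCMonth() + 1)}${pad(t.getUTCDate())}` +
        `T${pad(t.getUTCHours())}:${pad(t.getUTCMinutes())}`;
    return `${contactId}_${stamp}_UTC.wav`;
}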

Automation in Postman - different sets of input data

Suppose my input body is the following, in JSON format:
[
    {
        "username": "Test User 1",
        "rollNo": 45
    },
    {
        "username": "Test User 2",
        "rollNo": 46,
        "hometown": "XYZ"
    },
    {
        "username": "Test User 3",
        "rollNo": 47,
        "location": "ABC"
    }
]
How do I automate this set of data in Postman? I need help writing a script where the collection runs 3 times, each time taking the input from one of the above values.
You can do this by using each object from the data file as the request body.
Add the following items to the request; these will be used to get and use the data.
In the request body:
{{jsonBody}}
In the Pre-request Script:
pm.variables.set('jsonBody', JSON.stringify(pm.iterationData.toObject()));
Ensure that the request is saved (no orange dot in the request tab), then open the Collection Runner and select the Collection. Select your JSON data file (the iteration count will match the number of objects in the file) and check the Save responses box.
Run the Collection and each request should use the whole data object for that iteration.
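If you also want to assert against the submitted data, a small script in the Tests tab can read the same iteration values (this assumes your API echoes the created object back; the field name matches the sample file above):

// Tests tab: compare the response against this iteration's data row.
pm.test('response echoes the submitted username', function () {
    const expected = pm.iterationData.get('username');
    pm.expect(pm.response.json().username).to.eql(expected);
});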

AWS Kendra PreHook Lambdas for Data Enrichment

I am working on a POC using Kendra and Salesforce. The connector allows me to connect to my Salesforce Org and index knowledge articles. I have been able to set this up and it is currently working as expected.
There are a few custom fields and data points I want to bring over to help enrich the data even more. One of these is an additional answer / body that will contain key information for searching.
This field in my data source is rich text containing HTML and is often larger than 2048 characters, a limit that seems to be imposed in a String data field within Kendra.
I came across two hooks that are built in for Pre and Post data enrichment. My thought here is that I can use the pre hook to strip HTML tags and truncate the field before it gets stored in the index.
Hook Reference: https://docs.aws.amazon.com/kendra/latest/dg/API_CustomDocumentEnrichmentConfiguration.html
Current Setup:
I have added a new field to the index called sf_answer_preview. I then mapped this field in the data source to the rich text field in the Salesforce org.
If I run this as is, it will index about 200 of the 1,000 articles and give an error that the remaining articles exceed the 2048 character limit in that field, hence why I am trying to set up the enrichment.
I set up the above enrichment on my data source. I specified a lambda to use in the pre-extraction, as well as no additional filtering, so run this on every article. I am not 100% certain what the S3 bucket is for since I am using a data source, but it appears to be needed so I have added that as well.
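My understanding from the linked docs is that the enrichment hook on the data source ends up shaped roughly as follows (shown here via the v3 SDK's UpdateDataSource; the ARNs, ids, and bucket name are placeholders), with the S3 bucket acting as the staging area the extraction hooks read from and write to:

// Placeholders throughout; the shape follows CustomDocumentEnrichmentConfiguration.
const { KendraClient, UpdateDataSourceCommand } = require('@aws-sdk/client-kendra');
const kendra = new KendraClient({});

await kendra.send(new UpdateDataSourceCommand({
    Id: '<data-source-id>',
    IndexId: '<index-id>',
    CustomDocumentEnrichmentConfiguration: {
        PreExtractionHookConfiguration: {
            LambdaArn: 'arn:aws:lambda:us-east-1:111122223333:function:strip-and-truncate',
            S3Bucket: 'kendra-cde-staging', // staging area for the hook's input/output
        },
        RoleArn: 'arn:aws:iam::111122223333:role/kendra-cde-role',
    },
}));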
For my lambda, I create the following:
exports.handler = async (event) => {
    // Debug: log the incoming pre-extraction event
    console.log(JSON.stringify(event));

    // Vars
    const s3Bucket = event.s3Bucket;
    const s3ObjectKey = event.s3ObjectKey;
    const meta = event.metadata;

    // Answer: the rich text attribute mapped from Salesforce
    const answer = meta.attributes.find(o => o.name === 'sf_answer_preview');

    // Remove HTML tags
    const removeTags = (str) => {
        if ((str === null) || (str === '')) return false;
        str = str.toString();
        return str.replace(/(<([^>]+)>)/ig, '');
    };

    // Truncate to 2,000 characters
    const truncate = (input) => input.length > 2000 ? `${input.substring(0, 2000)}...` : input;

    let result = truncate(removeTags(answer.value.stringValue));

    // Response: update the field with the stripped, truncated value
    const response = {
        "version": "v0",
        "s3ObjectKey": s3ObjectKey,
        "metadataUpdates": [
            { "name": "sf_answer_preview", "value": { "stringValue": result } }
        ]
    };

    // Debug
    console.log(response);
    return response;
};
Based on the contract for the lambda described here, it appears pretty straightforward. I access the event, find the field in the data called sf_answer_preview (the rich text field from Salesforce), and strip and truncate the value to 2,000 characters.
For the response, I am telling it to update that field to the new formatted answer so that it complies with the field limits.
When I log the data in the lambda, the pre-extraction event details are as follows:
{
    "s3Bucket": "kendrasfdev",
    "s3ObjectKey": "pre-extraction/********/22736e62-c65e-4334-af60-8c925ef62034/https://*********.my.salesforce.com/ka1d0000000wkgVAAQ",
    "metadata": {
        "attributes": [
            {
                "name": "_document_title",
                "value": {
                    "stringValue": "What majors are under the Exploratory track of Health and Life Sciences?"
                }
            },
            {
                "name": "sf_answer_preview",
                "value": {
                    "stringValue": "A complete list of majors affiliated with the Exploratory Health and Life Sciences track is available online. This track allows you to explore a variety of majors related to the health and life science professions. For more information, please visit the Exploratory program description. "
                }
            },
            {
                "name": "_data_source_sync_job_execution_id",
                "value": {
                    "stringValue": "0fbfb959-7206-4151-a2b7-fce761a46241"
                }
            }
        ]
    }
}
The Problem:
When this runs, I am still getting the same field limit error that the content exceeds the character limit. When I run the lambda on the raw data, it strips and truncates it as expected. I am thinking that the response in the lambda for some reason isn't setting the field value to the new content correctly and still trying to use the data directly from Salesforce, thus throwing the error.
Has anyone set up lambdas for Kendra who might know what I am doing wrong? Stripping things like PII before they get indexed seems like a common use case, so I must be slightly off in my setup somewhere.
Any thoughts?
Since you are still passing the rich text as a metadata field of the document, the character limit still applies: the document fails at the validation step of the API call and never reaches the enrichment step. A workaround is to somehow append those rich text fields to the body of the document so that your lambda can access them there. But if those fields are auto-generated for your documents from your data sources, that might not be easy.
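A rough sketch of that workaround, assuming the pre-extraction Lambda is allowed to write an altered copy of the document back to the staging bucket and return its key (the key suffix and field handling here are illustrative):

// Sketch: append the rich text field to the document body in S3, then
// return the new key so Kendra indexes the altered document instead.
const { S3Client, GetObjectCommand, PutObjectCommand } = require('@aws-sdk/client-s3');
const s3 = new S3Client({});

exports.handler = async (event) => {
    const { s3Bucket, s3ObjectKey, metadata } = event;
    const answer = metadata.attributes.find(o => o.name === 'sf_answer_preview');

    // Read the original document body from the staging bucket
    const obj = await s3.send(new GetObjectCommand({ Bucket: s3Bucket, Key: s3ObjectKey }));
    const body = await obj.Body.transformToString();

    // Append the rich text and write the altered document back (key naming is illustrative)
    const newKey = `${s3ObjectKey}-enriched`;
    await s3.send(new PutObjectCommand({
        Bucket: s3Bucket,
        Key: newKey,
        Body: `${body}\n${answer ? answer.value.stringValue : ''}`,
    }));

    // Point Kendra at the altered document
    return { version: 'v0', s3ObjectKey: newKey };
};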

How to add an inventory host to a specific group using the Ansible Tower API, so that it displays in the group's host list in the UI?

I am unable to assign a host to a group in an Ansible Tower inventory using the REST API. If anyone has worked with it, please let me know the request and body.
I found a solution. For me, the problem was that I was searching in api/v2/inventories/{id}/groups/; turns out you actually have to look in api/v2/groups/{id}/hosts/.
Add host to inventory group
URI: {your host}/api/v2/groups/{id}/hosts/
Method: POST
Payload:
{
    "name": "{hostname}",
    "description": "",
    "enabled": true,
    "instance_id": "",
    "variables": ""
}
This will create a host in the specified group.
In AWX and Ansible Tower, you can navigate to the URL in your browser, scroll all the way down, and, if you are allowed to do a POST, there will be a form there with the payload fields. You can fill it in and post it right there in the browser.
When you are at the inventory group in the normal GUI, you can find the id of the inventory group in the URL.
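As a quick illustration, the same POST from Node might look like this (host, group id, hostname, and token are placeholders; Tower/AWX also accepts basic auth if you prefer):

// Placeholders throughout: substitute your Tower host, group id, and token.
const response = await fetch('https://tower.example.com/api/v2/groups/42/hosts/', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer <your-oauth2-token>',
    },
    body: JSON.stringify({
        name: 'web01.example.com',
        description: '',
        enabled: true,
        instance_id: '',
        variables: '',
    }),
});
console.log(response.status); // expect 201 Created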

How to update object with an image field - Django Rest Framework

I have a model called ProductImage that contains a few fields and a Django ImageField. The object already exists, and I want to update the featured boolean on it.
The problem is that when I do a $http.put() (using AngularJS), I get an error back saying:
The submitted data was not a file. Check the encoding type on the form.
My REST API Object looks like this on the GET request:
{
    "id": 15,
    "image": "http://127.0.0.1:8000/media/products/photo_1_5.JPG",
    "alt": "HelloWorld",
    "featured": false,
    "product": 1
}
The HTTP PUT request I send looks like this: (Notice featured has been changed to true)
{
    "id": 15,
    "image": "http://127.0.0.1:8000/media/products/photo_1_5.JPG",
    "alt": "HelloWorld",
    "featured": true,
    "product": 1
}
So... How do I update my object without having to re-submit/re-upload the image file?
If you use PUT to update an object, you have to send a full instance. So in your case you would have to send an actual image file for image, not a URL to the image.
The easiest solution is probably to use PATCH instead of PUT. Then you can do a partial update and send only the updated fields:
{
    "featured": true
}
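In the asker's AngularJS setup that could look something like this (the endpoint path is a guess based on the object above; adjust it to your routes):

// PATCH only the changed field; the URL here is assumed.
$http.patch('http://127.0.0.1:8000/api/product-images/15/', { featured: true })
    .then(function (response) {
        console.log(response.data); // updated object with featured: true
    });

This works out of the box with DRF generic views and ViewSets, which map PATCH to a partial update.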