How to send a multi-line value in Postman? In real time we get this payload of 20-30 lines based on events triggered on the source - postman

I am building an integration where the request payload has key:value pairs, and one of the values is multi-line. It can be a single line or 40 lines. How do I process this in Postman?
I am testing my integration in Postman and would like to process the real payload. The real payload has 30-40 lines as a value, and I want to send this to my endpoint, which here is Camunda; there I process it and send it on to another system. I take care of Camunda and the northbound systems. My question is about how to handle a multi-line value in Postman.
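One simple way to handle this (a sketch, not the only option) is to JSON-escape the value so that every line break becomes \n inside a single JSON string; Postman then sends it as an ordinary raw JSON body with Content-Type: application/json. The field name eventText below is just an illustration; the same JSON.stringify call also works in a Postman pre-request script, which runs JavaScript:

    // Hypothetical multi-line value, e.g. an event log captured from the source system.
    const multiLineValue = [
      "line 1: order created",
      "line 2: payment authorized",
      "line 3: shipment scheduled",
    ].join("\n");

    // JSON.stringify escapes the embedded newlines as \n, producing a single-line
    // body that can be pasted into Postman's raw JSON body as-is.
    const body = JSON.stringify({ eventText: multiLineValue });

    console.log(body);
    // {"eventText":"line 1: order created\nline 2: payment authorized\nline 3: shipment scheduled"}

The escaped string can also be stored in a Postman environment variable and referenced in the body as {{eventText}}, so the same request works whether the value is one line or forty.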

Related

Design issue for a web service callout from Salesforce

For the scenario:
As a user, whenever I try to generate or fetch codes:
If, while generating codes via a PUT callout, the request fails, then the system should recognize that the PUT callout has failed and should not make the subsequent GET callout for codes that were never created in the first place.
If, while generating codes via a PUT callout, the request is successful, the system should wait for a while (30 seconds to 1 minute) and should not poll the service API too frequently.
I have written code that makes the PUT callout and then, after the PUT succeeds, makes the GET callout later to retrieve the codes.
The expected result is:
When the PUT callout succeeds, the system should wait 30 seconds to 1 minute before the GET callout, retrieve all the data, and store it in Salesforce using a scheduler and batch.
You can't schedule in Salesforce on a second-level cadence. The smallest allowable increment for a Schedulable job is fifteen minutes. Salesforce asynchronous jobs are always executed based on server load and are in a queue; you cannot control the time of their execution to the second.
While some approximation of this pattern could potentially be achieved using a Queueable chain, this pattern is not at all suited to the Salesforce architecture and really should be delegated to a middleware platform.
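If the wait-then-poll logic is delegated to middleware as suggested, it can be a simple delayed retry loop. A minimal TypeScript sketch, assuming a hypothetical codes service at https://example.com/codes (the URL, payload shape, and retry counts are illustrative, not part of the original question):

    // Minimal sketch of the PUT-then-delayed-GET pattern in middleware.
    const BASE_URL = "https://example.com/codes";

    const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

    async function generateAndFetchCodes(batchId: string): Promise<unknown | null> {
      // 1. Generate codes via PUT; if it fails, skip the GET entirely.
      const putResponse = await fetch(`${BASE_URL}/${batchId}`, { method: "PUT" });
      if (!putResponse.ok) {
        console.error(`PUT failed with ${putResponse.status}; not polling for codes`);
        return null;
      }

      // 2. Wait 30-60 seconds before the first GET, then poll at a gentle interval.
      await sleep(30_000);
      for (let attempt = 0; attempt < 5; attempt++) {
        const getResponse = await fetch(`${BASE_URL}/${batchId}`);
        if (getResponse.ok) {
          return getResponse.json(); // hand this back to Salesforce, e.g. via a batch upsert
        }
        await sleep(60_000); // back off between polls instead of hammering the API
      }
      return null;
    }

Salesforce then only receives the finished result once the middleware has the codes, instead of trying to time its own asynchronous jobs to the second.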

Postmates - webhook: determining the actual pickup_complete and delivered_complete

So I am looking into the Postmates API and I have been able to create a delivery, which was great. I also set up a webhook URL with ngrok to test the responses from Postmates, but I am totally stumped as to how to determine when the pickup was actually completed and when the dropoff/delivery was actually completed.
I saved all of the responses in a database, and each time I did the test delivery I received exactly 70 calls to the webhook endpoint. Each time, 47 of them were in regard to 'kind': 'event.delivery_status'. Here are the stats:
THIS IS ALL IN TEST MODE WITH THE SANDBOX...
11 of those are 'status':'pickup_complete'
14 of those are 'status':'pickup'
11 of those are 'status':'dropoff'
11 of those are 'status':'delivered'
All of the webhook responses for status=delivered have a 'data.courier_imminent': false value.
I went to the webpage for the 'data.tracking_url', and when the webpage showed that the delivery was complete, I immediately checked the database to see how many records I had saved; I was only at 32 total records. This means that the webhook kept sending me updates after the delivery was supposedly complete.
Lastly, all of these statuses are not in order; they are totally random. In fact, the 6th-to-last record received was a pickup_complete status.
The real question:
How will I know which event actually means pickup complete, delivery complete, etc.?
You'll receive a webhook of type event.delivery_status. One of the fields within the body of the payload will be {status: "delivered"}. This has been accurate so far. Postmates doesn't return a delivered_at timestamp, but you could create your own timestamp and store it along with the delivery for reporting.
As for the number of webhooks, Postmates has a delivery robot (called robo) that moves as if it were a real postmate. You'll receive a lot of webhooks of type event.courier_update with the updated location.
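A minimal sketch of a webhook handler that keys off those fields; the payload shape below is inferred from the question, so check the Postmates docs for the exact schema:

    import express from "express";

    // Assumed payload shape, based on the fields mentioned in the question.
    interface PostmatesWebhook {
      kind: string;               // e.g. "event.delivery_status" or "event.courier_update"
      data: { id: string; status?: string; courier_imminent?: boolean };
    }

    const app = express();
    app.use(express.json());

    // Track which milestones we've already recorded, since status webhooks arrive
    // repeatedly and out of order.
    const completed = new Set<string>();

    app.post("/postmates/webhook", (req, res) => {
      const event = req.body as PostmatesWebhook;

      // Ignore the frequent courier-location updates entirely.
      if (event.kind !== "event.delivery_status") return res.sendStatus(200);

      const { id, status } = event.data;
      if (status === "pickup_complete" || status === "delivered") {
        const key = `${id}:${status}`;
        if (!completed.has(key)) {
          completed.add(key);
          // Record our own timestamp, since no delivered_at is provided.
          console.log(`${status} for delivery ${id} at ${new Date().toISOString()}`);
        }
      }
      return res.sendStatus(200);
    });

    app.listen(3000);

Deduplicating on delivery id plus status means the repeated, out-of-order event.delivery_status webhooks only mark each milestone once.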

Alexa sends multiple requests to AWS Lambda

I'm building an Alexa skill that sends a request to my web server;
the web server then does some processing and uploads a file to Amazon S3.
While the web server is processing, I make the skill keep polling Amazon S3 for the file every 10 seconds until it gets it. The response is based on the file content.
Unfortunately, the web server processing takes more than 1 minute. That means the skill must wait more than 1 minute to get the file for the response.
For now, I use a progressive response with async/await in my code,
and the skill does keep waiting for the file on S3.
But I found that the skill automatically sends a second request to Lambda after 50 seconds. That means that, for the same skill, I get two Lambda functions running at the same time.
The execution result is: after the first response made by the progressive response, 50 seconds later I hear another response, also made by the progressive response, which belongs to the second request.
And nothing happens after that.
I know it is bad to make the skill wait this long, but I still want to figure out a workable approach if the skill needs to wait this long.
There are some points I want to figure out:
Is there any way to prevent the skill from sending the second request to Lambda?
Is there another way I can try to accomplish the goal?
Thanks
Eventually, I found that the second invocation of Lambda is not from Alexa; it is from AWS Lambda itself. Refer to the following article:
https://cloudonaut.io/your-lambda-function-might-execute-twice-deal-with-it/
So you have to handle this kind of situation in your Lambda code. One thing you can use is that both invocations carry the same request ID, so you can tell whether this is the first execution by checking your storage for the request ID you stored during the first execution.
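A minimal sketch of that request-ID check in a Node.js Lambda handler, assuming a hypothetical DynamoDB table named ProcessedRequests with a requestId partition key (any store that supports conditional writes would do):

    import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
    import type { Context } from "aws-lambda";

    const dynamo = new DynamoDBClient({});

    // Table name and key are assumptions for this sketch.
    const TABLE = "ProcessedRequests";

    export const handler = async (event: unknown, context: Context) => {
      try {
        // The conditional put fails if this request id was already recorded,
        // i.e. if this invocation is Lambda's retry of one that already ran.
        await dynamo.send(new PutItemCommand({
          TableName: TABLE,
          Item: { requestId: { S: context.awsRequestId } },
          ConditionExpression: "attribute_not_exists(requestId)",
        }));
      } catch (err: any) {
        if (err.name === "ConditionalCheckFailedException") {
          console.log(`Duplicate invocation ${context.awsRequestId}; skipping`);
          return;
        }
        throw err;
      }

      // ...first-time execution: poll S3 for the file and build the response...
    };

Because the write is conditional, the check is atomic: even two concurrent invocations cannot both pass it.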
Besides, I also found that once the Alexa skill waits for more than 1 minute, it crashes and reports the error by speaking (tested on an Amazon Echo). And there is nothing different in the AWS Lambda log compared to a normal execution, meaning the log looks fine even though the actual execution result is not.
Hope this helps someone else who is struggling with this problem.

Dialogflow: display a "processing" message on intents triggered by followup events

I've been working on a Dialogflow chatbot that calls a webhook which can often take more than the 5-second timeout to process and answer the user's request. So, following this post, my webhook sends a response containing a followup event if the processing is taking too long, and it can then answer the next request sent by the intent triggered by that event (sketched below).
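For reference, the followup-event trick amounts to returning followupEventInput instead of a fulfillment message whenever the work is not done yet. A minimal TypeScript sketch; the event name CONTINUE_PROCESSING and the 4-second budget are assumptions for illustration:

    import express from "express";

    const app = express();
    app.use(express.json());

    app.post("/dialogflow/webhook", async (req, res) => {
      const done = await tryToFinishWithin(4000, req.body); // stay under the 5-second timeout

      if (done) {
        res.json({ fulfillmentText: "Here is your result." });
      } else {
        // Returning followupEventInput makes Dialogflow trigger the intent bound to
        // this event, which calls the webhook again and buys another 5 seconds.
        res.json({
          followupEventInput: {
            name: "CONTINUE_PROCESSING",
            languageCode: "en-US",
          },
        });
      }
    });

    // Placeholder for the real long-running work; always reports "not done" here.
    async function tryToFinishWithin(ms: number, _request: unknown): Promise<boolean> {
      return new Promise((resolve) => setTimeout(() => resolve(false), ms));
    }

    app.listen(3000);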
Now, while this approach is working great, I have two questions:
Is there any way to send a message ("Please wait, I'm processing your request") to the user on every followup event?
Since I'm using the Dialogflow-Messenger integration, is there any way to display the three-dots "typing" animation while the webhook is processing the request?
Thanks!
When developing a chatbot, you should keep in mind that you are trying to duplicate how two humans interact. You are building a conversation, and in a conversation we should not keep the other person waiting. All your requests should complete within 4-5 seconds (to avoid a timeout from the platform) for a better UX.
So there is no way to show either a "Please wait" message or the animated three dots!
Write good backend code to fetch the response faster, or tweak and cache your responses. Dialogflow is currently designed for one-to-one conversation and cannot provide multiple delayed responses. If you need it that way, you will have to develop your own NLP engine.

What is the best way to get response time in OSB

I am doing it like this:
Inside the OSB pipeline's message flow, at the beginning of the request, assign the current time to a variable. Then, in the response, subtract that variable from the current time to calculate the response time. Finally, I have a Report action to report this number.
I know OSB has a built-in monitoring tool that can display the response time for the proxy service, pipeline, and business service. As you can see, my solution only includes the time from the beginning of the pipeline plus the business service, not the time the request and response messages spend going through the proxy service. Besides that, calculating it this way also feels like a non-standard approach.
OSB provides a JMX API that can retrieve this built-in monitoring data, but that would make our project more complicated.
If we want to use the OSB Report action to report the response time, is there a best way to do it?
Just switch WebLogic to use the extended log format, and tell it to add time-taken to the list of tokens it logs on each response.
http://middlewaretechnologies.blogspot.com.au/2012/03/configure-extended-logging-in-http.html
or if you want to read the official docs:
http://docs.oracle.com/cd/E14571_01/web.1111/e13701/web_server.htm#CNFGD207
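For reference, with the extended format enabled, the HTTP access log declares its configured fields in a #Fields directive and time-taken then appears on every response line, along the lines of the following (the field list and values shown are illustrative, not taken from the question):

    #Version: 1.0
    #Fields: date time time-taken cs-method cs-uri sc-status
    2024-01-15 10:32:41 0.523 POST /MyProxyService 200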