Import a standard HTTP request in Postman?

I have a simple HTTP request:
POST /a/b/c HTTP/1.1
Host: localhost:17814
Content-Type: application/json
jwt: x.x.x

{
    "requestId": "E1EC8B9E-A78E-443A-B2A9-8D6F7692B63C"
}
I don't have it in any other format; it's a basic, standard HTTP request structure.
I want to invoke it in Postman.
But when I try to "Import" it, Postman rejects it.
Question:
Is there any way to import standard HTTP requests into Postman? There must be; it's the standard syntax.

The Import feature only supports certain formats:
"Import a Postman Collection, Environment, data dump, curl command, or a RAML / WADL / Open API (1.0/2.0/3.0) / GraphQL Schema / Runscope file"
I guess that would be something like this in curl:
curl -X POST 'localhost:17814/a/b/c' \
-H 'jwt: x.x.x' \
-H 'Content-Type: application/json' \
-d '{
"requestId": "E1EC8B9E-A78E-443A-B2A9-8D6F7692B63C"
}'
You can import that format into the app and it should create the request.

Related

Unable to import Kibana Dashboard using API

I exported a dashboard and have been attempting to import it using the Kibana API.
On making the curl request below
curl -X POST -u <USERNAME>:<PASSWORD> <URL> -H 'kbn-xsrf: true' --form file=@export.ndjson
I'm getting the response as:
{"error":"Content-Type header [multipart/form-data; boundary=------------------------0506088858c35b19] is not supported","status":406}%
Note: I'm using AWS-managed OpenSearch.
Can someone please help me fix this error?
Thanks in advance :)
I was able to fix this issue by adding a few header key-value pairs:
'kbn-xsrf': 'true',
'Accept': '*/*',
'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'osd-xsrf': 'true'
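For reference, here is a minimal sketch of the whole request in Python with requests, keeping the question's <URL>, <USERNAME> and <PASSWORD> placeholders (my assumption is that osd-xsrf is the header AWS OpenSearch Dashboards actually checks, since it is the OpenSearch fork of kbn-xsrf):

import requests

url = "<URL>"  # the saved-objects import endpoint of your Dashboards instance
headers = {
    'kbn-xsrf': 'true',
    'Accept': '*/*',
    'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'osd-xsrf': 'true',  # assumed to be the header OpenSearch Dashboards requires
}

with open('export.ndjson', 'rb') as f:
    # requests builds the multipart/form-data body and boundary itself,
    # so don't set Content-Type manually.
    response = requests.post(url, auth=('<USERNAME>', '<PASSWORD>'),
                             headers=headers, files={'file': f})
print(response.status_code, response.text)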

Multipart upload with presigned URLs - Scaleway S3-compatible object storage

I'm trying to get multipart upload working on Scaleway Object Storage (S3-compatible) with presigned URLs, and I'm getting 403 errors on the preflight request generated by the browser, even though my CORS settings seem correctly set (basically a wildcard on allowed headers and origins).
The error comes with a 403 status code and is as follows:
<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><RequestId>...</RequestId></Error>
I've been stuck on this one for a while now. I tried to copy the preflight request from my browser to reproduce it elsewhere and tweak it a little.
Removing the query params from the URL of the preflight request makes it succeed (it returns a 200 with the Access-Control-Allow-* response headers correctly set), but this is obviously not the browser's behavior...
This doesn't work (secrets, keys and names have been changed):
curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png?AWSAccessKeyId=XXXXXXXXXXXXXXXXXXXX&Expires=1638217988&Signature=NnP1XLlcvPzZnsUgDAzm1Uhxri0%3D&partNumber=1&uploadId=OWI1NWY5ZGrtYzE3MS00MjcyLWI2NDAtNjFkYTM1MTRiZTcx' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
This works (secrets, keys and names have been changed):
curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
The URL comes from the aws-sdk and is generated this way:
const S3Client = new S3({
  credentials: {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
  endpoint: `https://s3.${env.SCW_REGION}.scw.cloud`,
})

S3Client.getSignedUrlPromise('uploadPart', {
  Bucket: bucket,
  Key: key,
  UploadId: multipartUpload.UploadId,
  PartNumber: idx + 1,
})
and used this way in the frontend:
// url being the URL generated in the backend as demonstrated above
const response = await fetch(url, {
  method: 'PUT',
  body: filePart,
  signal: abortController.signal,
})
If anyone can give me a hand with this, that would be great!
As it turns out, Scaleway Object Storage is not fully S3-compatible in this case.
Here is a workaround:
Install the aws4 library to sign the request easily (or follow this Scaleway doc to sign your request manually).
Form your request exactly as stated in this other Scaleway doc. This is where the aws-sdk behavior differs: it generates a URL with AWSAccessKeyId, Expires and Signature query params, which make the Scaleway API fail; the Scaleway API only wants partNumber and uploadId.
Return the generated URL and headers to the frontend.
// Backend code
import aws4 from 'aws4' // the signing library mentioned above

const signedRequest = aws4.sign(
  {
    method: 'PUT',
    path: `/${key}?partNumber=${idx + 1}&uploadId=${multipartUpload.UploadId}`,
    service: 's3',
    region: env.SCW_REGION,
    host: `${bucket}.s3.${env.SCW_REGION}.scw.cloud`,
  },
  {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
)

return {
  url: `https://${signedRequest.host}${signedRequest.path}`,
  headers: Object.keys(signedRequest.headers).map((key) => ({
    key,
    value: signedRequest.headers[key] as string,
  })),
}
And then in the frontend:
// Frontend code
const headers = signedRequest.headers.reduce<Record<string, string>>(
  (acc, h) => ({ ...acc, [h.key]: h.value }),
  {},
)

const response = await fetch(signedRequest.url, {
  method: 'PUT',
  body: filePart,
  headers,
  signal: abortController.signal,
})
Scaleway knows about this issue, as I discussed it directly with their support team, and they are putting in effort to be as compliant with S3 as possible; the issue might be fixed by the time you read this.
Thanks to them for the really quick response time and for taking this seriously.

POST, PUT and PATCH resources not working in WSO2 API Gateway

I am quite new to the WSO2 tools. I recently started using WSO2 API Manager (v3.1.0).
I created an API gateway by importing the httpbin swagger spec: https://github.com/Azure/api-management-samples/blob/master/apis/httpbin.swagger.json. I published the API, subscribed to it, generated the API keys, and started testing.
I imported the spec into Postman, configured the API key for authorization, and changed the server to the local gateway http://localhost:8280/Api_Base/1.0.
All the resources defined with the GET method were accessible, but the POST, PUT and PATCH resources were not reachable via the gateway; I received the error response <faultstring>unknown</faultstring> for these resources. I tried with cURL as well but got the same results. When I tried POST against httpbin directly, it worked just fine:
curl --location --request POST 'http://httpbin.org/post'
{
  "args": {},
  "data": "",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.58.0",
    "X-Amzn-Trace-Id": "Root=1-5e8e0d39-ddf21f1055008f60707cf150"
  },
  "json": null,
  "origin": "95.103.xxx.xxx",
  "url": "http://httpbin.org/post"
}
and via my API gateway (with the API key as well):
curl --location --request POST 'http://localhost:8280/HTTP_Bin_Mock/1.0/post'
<faultstring>unknown</faultstring>
What could have gone wrong?
Please try the cURL command below. A POST with an empty body and no Content-Type header can trip up the gateway's message builders, which is a likely cause of the <faultstring>unknown</faultstring> response; sending an explicit (even empty) JSON payload with a Content-Type avoids it:
curl --location --request POST 'http://localhost:8280/HTTP_Bin_Mock/1.0/post' --data '{}' --header 'Content-Type: application/json'

AWS Application Load Balancer fails to process request body for content encoding gzip and content type application/json

I am trying to send gzipped JSON as the POST request body to an AWS Application Load Balancer, which invokes an AWS Lambda.
When I set the Content-Type request header to application/json, I get a 502 Bad Gateway response and the Lambda is not invoked.
I am using the following curl command:
curl -v -s --data-binary @samples/small-batch.json.gz -H "Content-Encoding: gzip" -H "Content-Type: application/json" -X POST https://sub.domain.com/batch
Am I sending invalid request headers?
My AWS Lambda code:
import json

def lambda_handler(event, context):
    print("event = ", event)
    return {
        'statusCode': 200,
        'body': json.dumps({'success': True}),
        'headers': {
            'Content-Type': 'application/json'
        }
    }
Update
If I send the request with an empty Content-Type, the Lambda is invoked successfully:
curl -v --data-binary @samples/small-batch.json.gz -H "Content-Type: " -H "Content-Encoding: gzip" -X POST https://sub.domain.com/batch
If I make the request with the application/gzip content type, the Lambda is also invoked successfully:
curl -v --data-binary @samples/small-batch.json.gz -H "Content-Type: application/gzip" -H "Content-Encoding: gzip" -X POST https://sub.domain.com/batch
The 502 error occurs only when I send Content-Encoding: gzip together with Content-Type: application/json, yet as far as I understand these are valid headers.
Update 2
From the documentation I found at https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html:
"If the content type is one of the following types, the load balancer sends the body to the Lambda function as is and sets isBase64Encoded to false: text/*, application/json, application/javascript, and application/xml. For all other types, the load balancer Base64 encodes the body and sets isBase64Encoded to true."
I think that because of this, the header Content-Encoding: gzip cannot be coupled with Content-Type: application/json; something seems to go wrong in the ALB while it invokes the Lambda.
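For what it's worth, here is a minimal sketch of how the handler could consume the working application/gzip variant, assuming the ALB delivers the body Base64-encoded with isBase64Encoded set to true for that content type, and that header names arrive lower-cased in the event:

import base64
import gzip
import json

def lambda_handler(event, context):
    body = event.get("body") or ""
    # For content types the ALB treats as binary (e.g. application/gzip),
    # the body arrives Base64-encoded and isBase64Encoded is true.
    raw = base64.b64decode(body) if event.get("isBase64Encoded") else body.encode("utf-8")
    headers = event.get("headers") or {}
    if headers.get("content-encoding") == "gzip":  # assumes lower-cased header names
        raw = gzip.decompress(raw)
    payload = json.loads(raw)
    print("payload = ", payload)
    return {
        "statusCode": 200,
        "body": json.dumps({"success": True}),
        "headers": {"Content-Type": "application/json"},
    }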
It's probably not a problem with the Content-Encoding: gzip and Content-Type: application/json headers; it could instead be a problem with the size of the binary data you are sending (which could be more than 1 MB).
According to the AWS documentation, Lambda has request and response payload limits when used with an ALB:
The maximum size of the request body that you can send to a Lambda function is 1 MB.
The maximum size of the response JSON that the Lambda function can send is 1 MB.
Please refer to:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html

Difference in request body between AWS API Gateway test and curl

I'm trying to add a POST HTTP method to my AWS API Gateway. I'm using the SAM framework with Python.
I find that there is a difference in the "body" of the response depending on whether the request is generated from my desktop (curl or Postman) or from the AWS API Gateway 'TEST' console.
Right now, the POST handler only echoes the 'event' object received by the lambda_handler (I'm using an object to store the event, as you can see below):
def add(self):
    response = {
        "statusCode": 200,
        "body": json.dumps(self._event)
    }
    return response
When I'm using the 'TEST' option of the API Gateway console, with the input:
{"username":"xyz","password":"xyz"}
I receive the following output:
{
    "body": "{\"username\":\"xyz\",\"password\":\"xyz\"}",
    <the rest of the response>
}
However, when I'm sending the curl (or Postman) request:
curl --header "Content-Type: application/json" --request POST --data '{"username":"xyz","password":"xyz"}' <aws api gateway link>
I get the following response:
{
    "body": "eyJ1c2VybmFtZSI6Inh5eiIsInBhc3N3b3JkIjoieHl6In0="
    <the rest of the response>
}
Why do you think there is a difference between the two tests?
The responses carry the same data: the second body is simply the Base64 encoding of the first. When the request comes through the deployed endpoint, API Gateway can hand the body to the Lambda Base64-encoded (the event's isBase64Encoded field is set to true in that case), whereas the console TEST passes it through as plain text.
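You can verify this with a quick decoding sketch:

import base64
import json

token = "eyJ1c2VybmFtZSI6Inh5eiIsInBhc3N3b3JkIjoieHl6In0="
decoded = base64.b64decode(token).decode("utf-8")
print(json.loads(decoded))  # {'username': 'xyz', 'password': 'xyz'}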