fileChange(event) {
  debugger;
  let fileList: FileList = event.target.files;
  if (fileList.length > 0) {
    let file: File = fileList[0];
    let formData: FormData = new FormData();
    formData.append('uploadFile', file, file.name);
    let headers = new Headers();
    headers.append('Authorization', this.token);
    headers.append('Content-Type', 'application/json');
    headers.append('Content-Disposition', 'form-data; filename="' + file.name + '"');
    // Content-Disposition: form-data; name="fieldName"; filename="filename.jpg"
    headers.append('Content-Type', 'multipart/form-data');
    let options = new RequestOptions({ headers: headers });
    // let apiUrl1 = "/api/UploadFileApi";
    this.http.post('http://192.168.1.160:8000/profilepic/', { FormData: formData }, options)
      .map(res => res.json())
      .catch(error => Observable.throw(error))
      .subscribe(
        data => alert('success'),
        error => alert(error)
      );
  }
  // window.location.reload();
}
<input class="uploadfile-style" [(ngModel)]="FilePath" (change)="fileChange($event)" name="CContractorFPath"
size="10" type="file" enctype="multipart/form-data">
Hi,
I have written this code to POST an image in Angular 2, but it fails with the error Unsupported media type "application/json,multipart/form-data" in request.
The same file is accepted when I post it from Postman.
Following are two different curl commands.
1. This one is accepted by the API (exported from Postman):
curl -X POST \
http://192.168.1.223:8010/profilepic/ \
-H 'authorization: Basic YWRtaW46YWRtaW4=' \
-H 'cache-control: no-cache' \
-H 'content-type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW' \
-H 'postman-token: 794107c3-791d-e198-fe36-48f407a3ec8c' \
-F datafile=@/home/ashwini/Pictures/b.jpeg
2. This one is not accepted by the API (sent by the Angular code above):
curl 'http://192.168.1.223:8010/profilepic/' -H 'Authorization: 6df62b2d808c15acbdd8598d0153c8ca7e9ea28a' -H 'Origin: http://localhost:4201' -H 'Accept-Encoding: gzip, deflate' -H 'Accept-Language: en-GB,en-US;q=0.8,en;q=0.6' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36' -H 'Content-Type: application/json,multipart/form-data' -H 'Accept: application/json, text/plain, */*' -H 'Referer: http://localhost:4201/' -H 'Connection: keep-alive' --data-binary $'{\n "FormData": {}\n}' --compressed
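The root cause appears to be in the second curl command: two Content-Type headers were appended manually and merged into the invalid value application/json,multipart/form-data, and the body is a JSON wrapper ({FormData: formData}) rather than the FormData itself. A multipart request needs a boundary string that only the HTTP client can generate, so the usual fix is to drop the Content-Type headers entirely and pass the FormData object directly as the body. The same principle can be demonstrated in Python with requests (the URL and field name below are taken from the question; the file bytes are fake):

```python
import io
import requests

# Sketch: build (without sending) the kind of multipart request the API expects.
# Note that no Content-Type is set by hand -- the client generates
# "multipart/form-data; boundary=..." on its own.
req = requests.Request(
    "POST",
    "http://192.168.1.160:8000/profilepic/",              # URL from the question
    headers={"Authorization": "Basic YWRtaW46YWRtaW4="},  # auth header only
    files={"uploadFile": ("b.jpeg", io.BytesIO(b"fake-image-bytes"), "image/jpeg")},
)
prepared = req.prepare()
print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
```

The Angular equivalent is the same idea: `this.http.post(url, formData, options)` with only the Authorization header set, letting the browser supply the multipart Content-Type and boundary.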
I am playing around with Lambda function URLs, which seem like a perfect fit for my use case.
I have a Lambda configured with its function URL turned on, an auth type of AWS_IAM, and CORS turned on with default settings (Allow origin *, nothing else set). From JavaScript running in Chrome I am sending a signed request to the Lambda with code that looks like:
const url = new URL(rawUrl)
const signer = new SignatureV4({
  credentials: *******,
  sha256: Sha256,
  service: "lambda",
  region: region,
})
const request = new HttpRequest({
  hostname: url.hostname,
  path: url.pathname,
  body: JSON.stringify({}),
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    host: url.hostname,
  }
})
const send = async () => {
  const {headers, body, method} = await signer.sign(request)
  console.log('send.headers', headers)
  const result = await fetch(rawUrl, { headers, body, method })
    .then((res) => res.json())
  return result
}
send()
  .then((data) => {
    console.log('success', data)
  })
  .catch((reason) => {
    console.log('error', reason)
  })
When this code runs it generates a request that fails CORS because the preflight is not validated. The curl equivalents of the requests Chrome is sending are
curl 'https://**********.lambda-url.us-west-2.on.aws/' \
-H 'sec-ch-ua: "Google Chrome";v="105", "Not)A;Brand";v="8", "Chromium";v="105"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'authorization: AWS4-HMAC-SHA256 Credential=*******/*******/us-west-2/lambda/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=***********' \
-H 'Content-Type: application/json' \
-H 'x-amz-content-sha256: ************' \
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' \
-H 'x-amz-security-token: ********' \
-H 'Referer: http://localhost:3000/' \
-H 'x-amz-date: 20220916T035331Z' \
-H 'sec-ch-ua-platform: "macOS"' \
--data-raw '{}' \
--compressed
for the request and
curl 'https://*******.lambda-url.us-west-2.on.aws/' \
-X 'OPTIONS' \
-H 'Accept: */*' \
-H 'Accept-Language: en-US,en;q=0.9' \
-H 'Access-Control-Request-Headers: authorization,content-type,x-amz-content-sha256,x-amz-date,x-amz-security-token' \
-H 'Access-Control-Request-Method: POST' \
-H 'Connection: keep-alive' \
-H 'Origin: http://localhost:3000' \
-H 'Referer: http://localhost:3000/' \
-H 'Sec-Fetch-Dest: empty' \
-H 'Sec-Fetch-Mode: cors' \
-H 'Sec-Fetch-Site: cross-site' \
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' \
-H 'dnt: 1' \
-H 'sec-gpc: 1' \
--compressed
for the preflight.
If I run that preflight in a terminal I get the following response:
HTTP/1.1 200 OK
Date: Xxx, xx Xxx 2022 xx:xx:xx GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
x-amzn-RequestId: xxxxxxxxx
Note that there aren't valid CORS headers being sent back, which results in Chrome not making the actual request. If I run the request itself in a terminal and bypass the preflight, then it runs successfully.
Is there an error in the way that I am making a request or how I've configured the lambda (I've tried various combinations)?
Thanks for your help!
Thanks to the comment from @jub0bs above, I figured out the answer.
All the CORS configuration values default to being locked down. I had tried * for all the values, but did not set the Max age configuration value. With all settings properly filled out for the origins, methods, and headers that I am using, AND Max age set to 0, everything started behaving itself.
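For reference, the working configuration described above can also be applied from the command line. This is a sketch using the AWS CLI; the function name is a placeholder, and the header list mirrors the SignedHeaders from the failing preflight above:

```shell
# Hypothetical sketch: apply the CORS settings the answer describes to a
# Lambda function URL. "my-function" is a placeholder name.
aws lambda update-function-url-config \
  --function-name my-function \
  --cors '{
    "AllowOrigins": ["http://localhost:3000"],
    "AllowMethods": ["POST"],
    "AllowHeaders": ["authorization", "content-type", "x-amz-content-sha256", "x-amz-date", "x-amz-security-token"],
    "MaxAge": 0
  }'
```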
I’m trying to get multipart upload working on Scaleway Object Storage (S3 compatible) with presigned URLs, and I’m getting errors (403) on the preflight request generated by the browser, even though my CORS settings seem correctly set (basically wildcards on allowed headers and origins).
The error comes with a 403 status code and is as follow:
<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><RequestId>...</RequestId></Error>
I’ve been stuck on this one for a while now. I tried to copy the preflight request from my browser to reproduce it elsewhere and tweak it a little.
Removing the query params from the URL of the preflight request makes it successful (it returns a 200 with the Access-Control-Allow-* response headers correctly set), but this is obviously not the browser's behavior...
This doesn’t work (secrets, keys and names have been changed):
curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png?AWSAccessKeyId=XXXXXXXXXXXXXXXXXXXX&Expires=1638217988&Signature=NnP1XLlcvPzZnsUgDAzm1Uhxri0%3D&partNumber=1&uploadId=OWI1NWY5ZGrtYzE3MS00MjcyLWI2NDAtNjFkYTM1MTRiZTcx' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
This works (secrets, keys and names have been changed):
curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
The URL comes from the aws-sdk and is generated this way:
const S3Client = new S3({
  credentials: {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
  endpoint: `https://s3.${env.SCW_REGION}.scw.cloud`,
})

S3Client.getSignedUrlPromise('uploadPart', {
  Bucket: bucket,
  Key: key,
  UploadId: multipartUpload.UploadId,
  PartNumber: idx + 1,
})
and used this way in the frontend:
// url being the url generated in the backend as demonstrated above
const response = await fetch(url, {
  method: 'PUT',
  body: filePart,
  signal: abortController.signal,
})
If anyone can give me a hand with this, that would be great!
As it turns out, Scaleway Object Storage is not fully S3-compatible in this case.
Here is a workaround:
Install the aws4 library to sign the request easily (or follow this Scaleway doc to sign your request manually)
Form your request exactly as stated in this other Scaleway doc (this is where the aws-sdk behavior differs: it generates a URL with AWSAccessKeyId, Expires and Signature query params that cause the Scaleway API to fail; the Scaleway API only wants partNumber and uploadId)
Return the generated URL and headers to the frontend
// Backend code
const signedRequest = aws4.sign(
  {
    method: 'PUT',
    path: `/${key}?partNumber=${idx + 1}&uploadId=${multipartUpload.UploadId}`,
    service: 's3',
    region: env.SCW_REGION,
    host: `${bucket}.s3.${env.SCW_REGION}.scw.cloud`,
  },
  {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
)

return {
  url: `https://${signedRequest.host}${signedRequest.path}`,
  headers: Object.keys(signedRequest.headers).map((key) => ({
    key,
    value: signedRequest.headers[key] as string,
  })),
}
And then in frontend:
// Frontend code
const headers = signedRequest.headers.reduce<Record<string, string>>(
  (acc, h) => ({ ...acc, [h.key]: h.value }),
  {},
)

const response = await fetch(signedRequest.url, {
  method: 'PUT',
  body: filePart,
  headers,
  signal: abortController.signal,
})
Scaleway is aware of this issue, as I discussed it directly with their support team, and they are putting effort into being as S3-compliant as possible. This issue might be fixed by the time you read this.
Thanks to them for the really quick response time and for taking this seriously.
I'm trying to implement PayPal in Django without any SDK or package.
https://developer.paypal.com/docs/business/checkout/server-side-api-calls/create-order/
I want to rewrite this cURL command in Python:
curl -v -X POST https://api-m.sandbox.paypal.com/v2/checkout/orders \
-H "Content-Type: application/json" \
-H "Authorization: Bearer Access-Token" \
-d '{
"intent": "CAPTURE",
"purchase_units": [
{
"amount": {
"currency_code": "USD",
"value": "100.00"
}
}
]
}'
My current code:
t = gettoken()
d = {"intent": "CAPTURE","purchase_units": [{"amount": {"currency_code": "USD","value": "100.00"}}]}
h = {"Content-Type: application/json", "Authorization: Bearer "+t}
r = requests.post('https://api-m.sandbox.paypal.com/v2/checkout/orders', headers=h, data=d).json()
My Error:
Internal Server Error: /createOrder
.....
AttributeError: 'set' object has no attribute 'items'
The Bearer Token is fine.
Any idea? What am I missing?
d = {"intent": "CAPTURE","purchase_units": [{"amount": {"currency_code": "USD","value": "100.00"}}]}
h = {"Content-Type": "application/json", "Authorization": "Bearer "+t}
r = requests.post('https://api-m.sandbox.paypal.com/v2/checkout/orders', headers=h, json=d).json()
Works.
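For completeness, here is why the original code raised AttributeError: 'set' object has no attribute 'items'. Putting the colons inside the strings, as the original h did, creates a Python set literal rather than a dict, and requests internally calls headers.items(). Separately, switching data=d to json=d makes requests serialize the body as JSON and set the Content-Type header for you. A minimal demonstration of the set-vs-dict mistake:

```python
# The original headers put colons *inside* the strings, which makes a set:
h_bad = {"Content-Type: application/json", "Authorization: Bearer token"}
# The corrected version is a dict with separate keys and values:
h_good = {"Content-Type": "application/json", "Authorization": "Bearer token"}

print(type(h_bad).__name__)     # set  -> no .items(), hence the AttributeError
print(type(h_good).__name__)    # dict
print(hasattr(h_bad, "items"))  # False
print(hasattr(h_good, "items")) # True
```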
Does anyone know how to create a virtual dataset through the REST API in Superset? I am able to create a physical dataset using these JSON parameters:
{
  "database": 0,
  "owners": [0],
  "schema": "string",
  "table_name": "string"
}
The problem is that there are no parameter options for passing a virtual table's query, and no details about virtual tables in the Superset Swagger documentation,
while I can create both physical and virtual datasets from the dashboard side.
Thank you
I've solved this by doing the following.
Import this as raw text in Postman:
curl 'http://192.168.10.107:8088/superset/sqllab_viz/' \
-H 'Accept: */*' \
-H 'Accept-Language: en-EN,en;q=0.9,en-US;q=0.8,en;q=0.7' \
-H 'Connection: keep-alive' \
-H 'Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryVmkUTQRE8ZgVIAfU' \
-H 'Cookie: session=.eJyFkd1qwzAMRl9l-Do0lm05dl5llCBb8hpWluC4G6P03eey3Q90Jb5z0M9dLaXKcVFzqzcZ1LKymlUx5ApTBC8sgZ1AKRSTKRkgFA_eiaccHDuDRmKCFCyk3hLSBnUyaCNYRm3dhLmIs8-WC8g2iTElkEPnJ0BK00SeDSbNGrXOOiOIUYPKRy1L297l4zmPoEHKKZkAFLFEy84Xp8vEkEtIxMAWCTt33TJdpTMdHNRGt9Z3u6uXpuZXFXKyXeUIiWw3maQpRBczx8lZ8J1Q58cftux1-1xZapd9bZX3fqejJ3Z6k-WyHm2r30_ppbV9HkeI5gQ-nED3muagQxjrdpVjvPbs2MF_g8JrGwHUeVC3Q-rvLwDV4wcIjHo-.Y1ULLQ.PlsmVk7EAYDhMh4FXWDw-2BuOms' \
-H 'DNT: 1' \
-H 'Origin: http://192.168.10.107:8088' \
-H 'Referer: http://192.168.10.107:8088/superset/sqllab/' \
-H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36' \
-H 'X-CSRFToken: ImZlNTI1YWNiYjI4MWE5NWY5M2Q0NmY0MGY3ZDFjZjhiYWQxZDM1YTUi.Y1ULZg.IdeebPxxB1WPAnzMM_shgAaP1Sk' \
--data-raw $'------WebKitFormBoundaryVmkUTQRE8ZgVIAfU\r\nContent-Disposition: form-data; name="data"\r\n\r\n{"schema":"superset","sql":"SELECT KODE_RS , KELAS_RS , KELAS_RAWAT , ADMISSION_DATE FROM rsa_januari_2021 WHERE ADMISSION_DATE = \'21/01/2021\' LIMIT 10","dbId":3,"datasourceName":"VirtualDatasetTest1","columns":[{"name":"SOME_TEXT1","type":"BLOB","is_date":false},{"name":"SOME_TEXT2","type":"BLOB","is_date":false},{"name":"SOME_TEXT3","type":"LONGLONG","is_date":false},{"name":"SOME_TEXT4","type":"BLOB","is_date":false}]}\r\n------WebKitFormBoundaryVmkUTQRE8ZgVIAfU--\r\n' \
--compressed \
--insecure
After that you can send the POST request in Postman.
Note: you must copy the request as cURL (bash) from your web inspector.
After you run this in Postman, it should create a virtual dataset named "VirtualDatasetTest1".
I hope this helps others.
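The same request can also be sketched in Python with requests instead of curl. The endpoint, form field name ("data"), and payload shape below come from the curl command above; the host, session cookie, and CSRF token are placeholders you must copy from your own browser session:

```python
import json
import requests

# Payload shape taken from the curl command above (column list shortened).
payload = {
    "schema": "superset",
    "sql": "SELECT KODE_RS, KELAS_RS FROM rsa_januari_2021 LIMIT 10",
    "dbId": 3,
    "datasourceName": "VirtualDatasetTest1",
    "columns": [{"name": "SOME_TEXT1", "type": "BLOB", "is_date": False}],
}
req = requests.Request(
    "POST",
    "http://192.168.10.107:8088/superset/sqllab_viz/",
    headers={
        "X-CSRFToken": "<csrf-token-from-browser>",   # placeholder
        "Cookie": "session=<session-cookie>",          # placeholder
    },
    # (None, value) makes a plain multipart form field named "data"
    files={"data": (None, json.dumps(payload))},
)
prepared = req.prepare()
print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
# requests.Session().send(prepared)      # uncomment to actually send it
```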
Is there any way to use the DELETE method with custom fields rather than the object id, something like:
curl -v --dump-header - -H "Content-Type: application/json" -X DELETE --data '{"username":"your_username"}' "http://127.0.0.1:8300/api/v1/group/username=EMAIL&api_key=SECRET"
rather than the following:
curl -v --dump-header - -H "Content-Type: application/json" -X DELETE "http://127.0.0.1:8300/api/v1/group/obj_id/username=EMAIL&api_key=SECRET"