I am trying to call InitiateAuth on an Aws::CognitoIdentityProvider::CognitoIdentityProviderClient.
Here is my code:
Aws::Client::ClientConfiguration clientConfiguration;
clientConfiguration.region = regionId;
clientConfiguration.proxyHost = "<>";
clientConfiguration.proxyPort = <>;
clientConfiguration.proxyScheme = Aws::Http::Scheme::HTTP;
Aws::CognitoIdentityProvider::Model::InitiateAuthRequest authRequest;
authRequest.SetAuthFlow(Aws::CognitoIdentityProvider::Model::AuthFlowType::USER_SRP_AUTH);
authRequest.SetClientId(clientId);
authParameters.insert(std::make_pair("USERNAME", userName));
authParameters.insert(std::make_pair("SRP_A", srp_A));
authRequest.SetAuthParameters(authParameters);
std::shared_ptr<Aws::CognitoIdentityProvider::CognitoIdentityProviderClient> cognitoClient = std::make_shared<Aws::CognitoIdentityProvider::CognitoIdentityProviderClient>(clientConfiguration);
auto authRequestResult = cognitoClient->InitiateAuth(authRequest);
Here is what I get in the Trace logs:
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] content-length: 935
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] content-type: application/x-amz-json-1.1
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] host: cognito-idp.eu-west-1.amazonaws.com
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] user-agent: aws-sdk-cpp/1.6.30 Linux/3.10.0-862.9.1.el7.x86_64 x86_64 GCC/4.8.5
[TRACE] 2018-10-24 14:02:16 CurlHttpClient [140737353963648] x-amz-target: AWSCognitoIdentityProviderService.InitiateAuth
[DEBUG] 2018-10-24 14:02:26 CURL [140737353963648] (HeaderIn) HTTP/1.1 400 Bad Request
I have managed to debug, and I think that behind the scenes the code is executing the equivalent of:
curl -i \
-H "transfer-encoding:" \
-H "content-length: 914" \
-H "content-type: application/x-amz-json-1.1" \
-H "host: cognito-idp.eu-west-1.amazonaws.com" \
-H "user-agent: aws-sdk-cpp/1.6.30 Linux/3.10.0-862.9.1.el7.x86_64 x86_64 GCC/4.8.5" \
-H "x-amz-target: AWSCognitoIdentityProviderService.InitiateAuth" \
-X POST \
-d '{"AuthFlow":"USER_SRP_AUTH","AuthParameters":{"SRP_A":"<>","USERNAME":"<>"},"ClientId":"<>"}' \
https://cognito-idp.eu-west-1.amazonaws.com --verbose --proxy http://proxy:port
I do not know why it does not work. Is it the proxy or a wrong setup in the code?
Please help.
I am playing around with lambda function URLs, which seem like a perfect fit for my use case.
I have a lambda configured with its function URL turned on, an Auth type of AWS_IAM, and CORS turned on with default settings (Allow Origin *, nothing else set). From JavaScript running in Chrome I am sending a signed request to the lambda with code that looks like:
import { SignatureV4 } from '@aws-sdk/signature-v4'
import { Sha256 } from '@aws-crypto/sha256-js'
import { HttpRequest } from '@aws-sdk/protocol-http'

const url = new URL(rawUrl)
const signer = new SignatureV4({
  credentials: *******,
  sha256: Sha256,
  service: "lambda",
  region: region,
})
const request = new HttpRequest({
  hostname: url.hostname,
  path: url.pathname,
  body: JSON.stringify({}),
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    host: url.hostname,
  }
})
const send = async () => {
  const { headers, body, method } = await signer.sign(request)
  console.log('send.headers', headers)
  const result = await fetch(rawUrl, { headers, body, method })
    .then((res) => res.json())
  return result
}
send()
  .then((data) => {
    console.log('success', data)
  })
  .catch((reason) => {
    console.log('error', reason)
  })
When this code runs it generates a request that fails CORS because the preflight is not validated. The curl equivalents of the requests Chrome is sending are
curl 'https://**********.lambda-url.us-west-2.on.aws/' \
-H 'sec-ch-ua: "Google Chrome";v="105", "Not)A;Brand";v="8", "Chromium";v="105"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'authorization: AWS4-HMAC-SHA256 Credential=*******/*******/us-west-2/lambda/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=***********' \
-H 'Content-Type: application/json' \
-H 'x-amz-content-sha256: ************' \
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' \
-H 'x-amz-security-token: ********' \
-H 'Referer: http://localhost:3000/' \
-H 'x-amz-date: 20220916T035331Z' \
-H 'sec-ch-ua-platform: "macOS"' \
--data-raw '{}' \
--compressed
for the request and
curl 'https://*******.lambda-url.us-west-2.on.aws/' \
-X 'OPTIONS' \
-H 'Accept: */*' \
-H 'Accept-Language: en-US,en;q=0.9' \
-H 'Access-Control-Request-Headers: authorization,content-type,x-amz-content-sha256,x-amz-date,x-amz-security-token' \
-H 'Access-Control-Request-Method: POST' \
-H 'Connection: keep-alive' \
-H 'Origin: http://localhost:3000' \
-H 'Referer: http://localhost:3000/' \
-H 'Sec-Fetch-Dest: empty' \
-H 'Sec-Fetch-Mode: cors' \
-H 'Sec-Fetch-Site: cross-site' \
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' \
-H 'dnt: 1' \
-H 'sec-gpc: 1' \
--compressed
for the preflight.
If I run that preflight in a terminal I get the following response:
HTTP/1.1 200 OK
Date: Xxx, xx Xxx 2022 xx:xx:xx GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
x-amzn-RequestId: xxxxxxxxx
Note that there aren't valid CORS headers being sent back, which results in Chrome not making the actual request. If I run the request itself in a terminal and bypass the preflight, then it runs successfully.
Is there an error in the way that I am making a request or how I've configured the lambda (I've tried various combinations)?
Thanks for your help!
Thanks to the comment from #jub0bs above I figured out the answer.
All the CORS configuration values default to being locked down. I had tried * for all the values, but did not set the Max age configuration value. With all settings properly filled out for the origins, methods, and headers that I am using, AND Max age set to 0, everything started behaving.
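For reference, the same CORS settings can also be applied programmatically. Here is a minimal sketch using the AWS SDK for JavaScript v3; the function name, region, origin, and header list below are placeholders, not my actual values:
import { LambdaClient, UpdateFunctionUrlConfigCommand } from '@aws-sdk/client-lambda'

// Minimal sketch with placeholder values: fill out the origins, methods and
// headers explicitly and set MaxAge, instead of leaving the defaults.
const client = new LambdaClient({ region: 'us-west-2' })

const updateCors = async () => {
  await client.send(new UpdateFunctionUrlConfigCommand({
    FunctionName: 'my-function', // placeholder
    Cors: {
      AllowOrigins: ['http://localhost:3000'],
      AllowMethods: ['POST'],
      AllowHeaders: ['authorization', 'content-type', 'x-amz-content-sha256', 'x-amz-date', 'x-amz-security-token'],
      MaxAge: 0,
    },
  }))
}

updateCors().catch((err) => console.log('error', err))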
I'm trying to get multipart upload working on Scaleway Object Storage (S3-compatible) with presigned URLs, and I'm getting errors (403) on the preflight request generated by the browser, even though my CORS settings seem correctly set (basically a wildcard on allowed headers and origins).
The error comes with a 403 status code and is as follows:
<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><RequestId>...</RequestId></Error>
I've been stuck on this one for a while now. I tried to copy the preflight request from my browser to reproduce it elsewhere and tweak it a little bit.
Removing the query params from the URL of the preflight request makes the request successful (it returns a 200 with the Access-Control-Allow-* response headers correctly set), but this is obviously not the browser's behavior...
This doesn't work (secrets, keys and names have been changed):
curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png?AWSAccessKeyId=XXXXXXXXXXXXXXXXXXXX&Expires=1638217988&Signature=NnP1XLlcvPzZnsUgDAzm1Uhxri0%3D&partNumber=1&uploadId=OWI1NWY5ZGrtYzE3MS00MjcyLWI2NDAtNjFkYTM1MTRiZTcx' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
This works (secrets, keys and names have been changed):
curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
The URL comes from the aws-sdk and is generated this way:
import { S3 } from 'aws-sdk'

const S3Client = new S3({
  credentials: {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
  endpoint: `https://s3.${env.SCW_REGION}.scw.cloud`,
})

S3Client.getSignedUrlPromise('uploadPart', {
  Bucket: bucket,
  Key: key,
  UploadId: multipartUpload.UploadId,
  PartNumber: idx + 1,
})
and used this way in the frontend:
// url being the url generated in backend as demonstrated above
const response = await fetch(url, {
  method: 'PUT',
  body: filePart,
  signal: abortController.signal,
})
If anyone can give me a hand with this, that would be great!
As it turns out, Scaleway Object Storage is not fully S3-compatible in this case.
Here is a workaround:
Install the aws4 library to sign the request easily (or follow this Scaleway doc to manually sign your request)
Form your request exactly as stated in this other Scaleway doc (this is where the aws-sdk behavior differs: it generates a URL with AWSAccessKeyId, Expires and Signature query params that cause the Scaleway API to fail; the Scaleway API only wants partNumber and uploadId)
Return the generated URL and headers to the frontend
// Backend code
const signedRequest = aws4.sign(
  {
    method: 'PUT',
    path: `/${key}?partNumber=${idx + 1}&uploadId=${multipartUpload.UploadId}`,
    service: 's3',
    region: env.SCW_REGION,
    host: `${bucket}.s3.${env.SCW_REGION}.scw.cloud`,
  },
  {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
)

return {
  url: `https://${signedRequest.host}${signedRequest.path}`,
  headers: Object.keys(signedRequest.headers).map((key) => ({
    key,
    value: signedRequest.headers[key] as string,
  })),
}
And then in the frontend:
// Frontend code
const headers = signedRequest.headers.reduce<Record<string, string>>(
  (acc, h) => ({ ...acc, [h.key]: h.value }),
  {},
)

const response = await fetch(signedRequest.url, {
  method: 'PUT',
  body: filePart,
  headers,
  signal: abortController.signal,
})
Scaleway knows about this issue, as I discussed it directly with their support team, and they are putting in some effort to be as compliant with S3 as possible. The issue might be fixed by the time you read this.
Thanks to them for the really quick response time and for taking this seriously.
I am getting "403" forbidden when I try to import an API for my distributed APIM instance.
I have 4 VMs running centos 7 and JDK 8:
1st - PostgreSQL
2nd - WSO2 IS with Key manager
3rd - WSO2 APIMMANAGER (2 instances - APIMStore and APIMPublisher)
4th - WSO2 APIMWORKER (2 instances - APIMGateway and APIMTrafficManager)
1- After starting all the servers OK, I create an 'env' for APIMCLI as follows:
apimcli add-env -n apimm_hml --registration https://apimmanager:9444/client-registration/v0.14/register --apim https://apimmanager:9444 --token https://apimmanager:8244/token --import-export https://apimmanager:9444/api-import-export-2.6.0-v2 --admin https://apimmanager:9444/api/am/admin/v0.14 --api_list https://apimmanager:9444/api/am/publisher/v0.14/apis --app_list https://apimmanager:9444/api/am/store/v0.14/applications
2- I added my exported APIs to ~/.wso2apimcli/exported/apis
3- I get the token from APIM OK:
curl -X POST -c cookies http://apimmanager:9764/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=admin&password=admin' -k -v
* About to connect() to apimmanager port 9764 (#0)
* Trying 10.61.1.68...
* Connected to apimmanager (10.61.1.68) port 9764 (#0)
> POST /publisher/site/blocks/user/login/ajax/login.jag HTTP/1.1
> User-Agent: curl/7.29.0
> Host: apimmanager:9764
> Accept: */*
> Content-Length: 42
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 42 out of 42 bytes
< HTTP/1.1 200 OK
< X-Frame-Options: DENY
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-store, no-cache, must-revalidate, private
* Added cookie JSESSIONID="E97A1EC2A610C0985E9149C6AEDB0FC9AAF492239437DB11D6A64F0ADBB3CA2424437A19ED8A51409F453D1E53640A547E186AC3810235AD7761DE58093C432314B3D46DE5B353562FBCFEB3268A6084945840CD1083330A69B8564068B92A39B17714D2F94807129392AB6EDFE10CB19EC4ED87E514B31E09D19991F6D6938A" for domain apimmanager, path /publisher, expire 0
< Set-Cookie: JSESSIONID=E97A1EC2A610C0985E9149C6AEDB0FC9AAF492239437DB11D6A64F0ADBB3CA2424437A19ED8A51409F453D1E53640A547E186AC3810235AD7761DE58093C432314B3D46DE5B353562FBCFEB3268A6084945840CD1083330A69B8564068B92A39B17714D2F94807129392AB6EDFE10CB19EC4ED87E514B31E09D19991F6D6938A; Path=/publisher; HttpOnly
< Content-Type: application/json;charset=UTF-8
< Content-Length: 17
< Date: Mon, 21 Oct 2019 14:32:47 GMT
< Server: WSO2 Carbon Server
<
* Connection #0 to host apimmanager left intact
{"error" : false}
4- I get 403 Forbidden after trying to import an API:
apimcli import-api -f APIM_ABC_v1.0.zip -e apimm_hml -u admin -p admin -k --preserve-provider=false --verbose
[INFO]: Insecure: true
[INFO]: import-api called
[INFO]: Environment: 'apimm_hml'
[INFO]: Import URL: https://apimmanager:9444/api-import-export-2.6.0-v2/import-api?preserveProvider=false
[INFO]: Source Environment: ConsentimentoService_v1.0.zip
ZipFilePath: /home/centos/.wso2apimcli/exported/apis/ConsentimentoService_v1.0.zip
Error importing API.
Status: 403 Forbidden
Error importing API
[ERROR]: 403 Forbidden
From the APIMPublisher log file I get:
WARN {org.owasp.csrfguard.log.JavaLogger} - potential cross-site request forgery (CSRF) attack thwarted (user:<anonymous>, ip:10.61.1.68, method:POST, uri:/api-import-export-2.6.0-v2/import-api, error:required token is missing from the request) {org.owasp.csrfguard.log.JavaLogger}
After the creation of the environment (i.e. step 1), try to log in to the environment using the command below:
apimcli login <environment> -u <username> -p <password>
In your case:
apimcli login apimm_hml -u admin -p admin -k
fileChange(event) {
  debugger;
  let fileList: FileList = event.target.files;
  if (fileList.length > 0) {
    let file: File = fileList[0];
    let formData: FormData = new FormData();
    formData.append('uploadFile', file, file.name);
    let headers = new Headers();
    headers.append('Authorization', this.token);
    headers.append('Content-Type', 'application/json');
    headers.append('Content-Disposition', 'form-data; filename="' + file.name + '"');
    // Content-Disposition: form-data; name="fieldName"; filename="filename.jpg"
    headers.append('Content-Type', 'multipart/form-data');
    let options = new RequestOptions({ headers: headers });
    // let apiUrl1 = "/api/UploadFileApi";
    this.http.post('http://192.168.1.160:8000/profilepic/', { FormData: formData }, options)
      .map(res => res.json())
      .catch(error => Observable.throw(error))
      .subscribe(
        data => alert('success'),
        error => alert(error)
      )
  }
  // window.location.reload();
}
<input class="uploadfile-style" [(ngModel)]="FilePath" (change)="fileChange($event)" name="CContractorFPath"
size="10" type="file" enctype="multipart/form-data">
Hi,
I have written this code to post an image in Angular 2, but it shows the error Unsupported media type \"application/json,multipart/form-data\" in request.
The same file is accepted when I post it from Postman.
Following are the two different curl commands:
1. This is accepted by the API:
curl -X POST \
http://192.168.1.223:8010/profilepic/ \
-H 'authorization: Basic YWRtaW46YWRtaW4=' \
-H 'cache-control: no-cache' \
-H 'content-type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW' \
-H 'postman-token: 794107c3-791d-e198-fe36-48f407a3ec8c' \
-F datafile=#/home/ashwini/Pictures/b.jpeg
2. This is not accepted by the API:
curl 'http://192.168.1.223:8010/profilepic/' -H 'Authorization: 6df62b2d808c15acbdd8598d0153c8ca7e9ea28a' -H 'Origin: http://localhost:4201' -H 'Accept-Encoding: gzip, deflate' -H 'Accept-Language: en-GB,en-US;q=0.8,en;q=0.6' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36' -H 'Content-Type: application/json,multipart/form-data' -H 'Accept: application/json, text/plain, */*' -H 'Referer: http://localhost:4201/' -H 'Connection: keep-alive' --data-binary $'{\n "FormData": {}\n}' --compressed
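For comparison, here is a minimal sketch of a request shaped like the working Postman call, written with plain fetch purely for illustration (the endpoint and field name come from the snippets above; the rest is an assumption): the FormData object is passed directly as the body and no Content-Type is set manually, so the browser adds the multipart boundary itself.
// Hypothetical sketch: pass FormData as the body and let the browser set
// "Content-Type: multipart/form-data; boundary=..." on its own.
const uploadProfilePic = async (file: File, token: string) => {
  const formData = new FormData();
  formData.append('datafile', file, file.name); // same field name as the Postman request

  const response = await fetch('http://192.168.1.160:8000/profilepic/', {
    method: 'POST',
    headers: { Authorization: token }, // only the auth header, no Content-Type
    body: formData,
  });
  return response.json();
};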
In working with the AWS C++ SDK I ran into an issue where trying to execute a PutObjectRequest complains that it is "unable to connect to endpoint" when uploading more than ~400KB.
Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
Aws::S3::S3Client s3Client(clientConfig);
Aws::S3::Model::PutObjectRequest putObjectRequest;
putObjectRequest.SetBucket("mybucket");
putObjectRequest.SetKey("mykey");
typedef boost::iostreams::basic_array_source<char> Device;
boost::iostreams::stream_buffer<Device> stmbuf(compressedData, dataSize);
std::iostream *stm = new std::iostream(&stmbuf);
putObjectRequest.SetBody(std::shared_ptr<Aws::IOStream>(stm));
putObjectRequest.SetContentLength(dataSize);
Aws::S3::Model::PutObjectOutcome outcome = s3Client.PutObject(putObjectRequest);
As long as my data is less than ~400KB it gets uploaded into a file on S3 but beyond that it is unable to connect to endpoint. I should be able to upload up to 5GB in one PutObjectRequest.
Any thoughts?
Edit:
Responding to #JonathanHenson's comment, the AWS log shows this timeout error repeatedly:
[DEBUG] 2016-08-04 13:42:03 AWSClient [0x700000081000] Request Successfully signed
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Making request to https://s3.amazonaws.com/mybucket/myfile
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Including headers:
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] content-length: 3151261
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] content-type: binary/octet-stream
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] host: s3.amazonaws.com
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] user-agent: aws-sdk-cpp/0.13.9 Darwin/15.6.0 x86_64
[DEBUG] 2016-08-04 13:42:03 CurlHandleContainer [0x700000081000] Attempting to acquire curl connection.
[DEBUG] 2016-08-04 13:42:03 CurlHandleContainer [0x700000081000] Returning connection handle 0x10b09cc00
[DEBUG] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] Obtained connection handle 0x10b09cc00
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000] HTTP/1.1 100 Continue
[TRACE] 2016-08-04 13:42:03 CurlHttpClient [0x700000081000]
[ERROR] 2016-08-04 13:42:06 CurlHttpClient [0x700000081000] Curl returned error code 28
[DEBUG] 2016-08-04 13:42:06 CurlHandleContainer [0x700000081000] Releasing curl handle 0x10b09cc00
[DEBUG] 2016-08-04 13:42:06 CurlHandleContainer [0x700000081000] Notifying waiting threads.
[DEBUG] 2016-08-04 13:42:06 AWSClient [0x700000081000] Request returned error. Attempting to generate appropriate error codes from response
[WARN] 2016-08-04 13:42:06 AWSClient [0x700000081000] Request failed, now waiting 12800 ms before attempting again.
[DEBUG] 2016-08-04 13:42:19 InstanceProfileCredentialsProvider [0x700000081000] Checking if latest credential pull has expired.
Ultimately what fixed this for me was setting the request timeout. The request timeout needs to be long enough for your entire transfer to finish. If you are transferring large files on a slow internet connection, make sure the request timeout is long enough to allow those files to transfer.
Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
clientConfig.connectTimeoutMs = 30000;
clientConfig.requestTimeoutMs = 600000;
Tweak your config as below, and it will work:
Aws::Client::ClientConfiguration clientConfig;
clientConfig.scheme = Aws::Http::Scheme::HTTPS;
clientConfig.region = Aws::Region::US_EAST_1;
clientConfig.connectTimeoutMs = 30000;
Aws::S3::S3Client s3Client(clientConfig);
Aws::S3::Model::PutObjectRequest putObjectRequest;
putObjectRequest.SetBucket("mybucket");
putObjectRequest.SetKey("mykey");
typedef boost::iostreams::basic_array_source<char> Device;
boost::iostreams::stream_buffer<Device> stmbuf(compressedData, dataSize);
std::iostream *stm = new std::iostream(&stmbuf);
putObjectRequest.SetBody(std::shared_ptr<Aws::IOStream>(stm));
putObjectRequest.SetContentLength(dataSize);
Aws::S3::Model::PutObjectOutcome outcome = s3Client.PutObject(putObjectRequest);