When I execute
curl --request GET "https://${ES_DOMAIN_ENDPOINT}/my_index_pattern-*/my_type/_mapping" \
--user $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY \
--aws-sigv4 "aws:amz:ap-southeast-2:es"
where $ES_DOMAIN_ENDPOINT is my AWS Elasticsearch endpoint, I'm getting the following response:
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."}
I'm confident that my $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY are correct.
However, when I send the same request from Postman with AWS Authentication and the parameters above, the response comes through. I compared the verbose output of both requests, and they have only very minor differences, such as timestamps and signatures.
I'm wondering, what is wrong with the --aws-sigv4 config?
This issue happens due to the * character in the path. There is a bug report in the curl repository tracking this issue: https://github.com/curl/curl/issues/7559.
Meanwhile, to mitigate the error you should either remove the * from the path or build curl from the branch https://github.com/outscale-mgo/curl-appimage/tree/http_aws_sigv4_encoding.
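For example, requesting the mapping of one concrete index instead of the wildcard pattern keeps the * out of the signed path (the index name below is a hypothetical stand-in for one matched by your pattern):

curl --request GET "https://${ES_DOMAIN_ENDPOINT}/my_index_pattern-2021.09/my_type/_mapping" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  --aws-sigv4 "aws:amz:ap-southeast-2:es"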
I am trying to automate running my collections via the Postman CLI.
I am able to run a folder (with the Postman Runner) without problems, using Basic Authentication to access many endpoints I am calling.
If I try to run the very same folder with the Postman CLI, all the protected endpoints answer with 403 Forbidden.
It seems that the requests are not using the authentication header.
Is it a known problem? Is there a workaround?
Plus, to troubleshoot better, is there a way to inspect the requests when the collection is run with the Postman CLI? I can see a recap, but I cannot see the detailed requests with all the headers, body, etc.
I am running the collection/folder with
postman collection run COLLECTION_UUID -k --verbose -e ENVIRONMENT_UUID -i FOLDER_UUID --env-var "source=X.X.X.X" -d "datafile.json"
I have some Cloud Run services that make HTTP requests between them. The URL is hardcoded in the code; is there a way to resolve the URL from the Cloud Run service name or another attribute?
Another possible solution could be using the method namespaces.services.get.
If the service name is known to you, you can make a GET HTTP request to https://{endpoint}/apis/serving.knative.dev/v1/{name}, where endpoint is one of the supported endpoints and name is the name of the Cloud Run service to retrieve, in the form namespaces/{namespace}/services/{service}. For Cloud Run (fully managed), replace {namespace} with the project ID or number.
Authorization requires the following IAM permission on the specified resource name: run.services.get
For example:
curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/your-project/services/your-service | grep url
Output:
"url": "https://cloud-run-xxxxxxxxxx-uc.a.run.app"
There is a gcloud command to do so. You could, for instance, get the URL during your build and save it into an environment variable. The following command will get the complete URL:
gcloud run services describe YOUR_CLOUDRUN_NAME --region=INSTANCE_REGION --platform=managed --format=yaml | grep -m 1 url | awk '{print $NF}'
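gcloud can also do the extraction itself, which is less brittle than grep/awk; a sketch for the same service (status.url is the top-level URL field in the describe output):

gcloud run services describe YOUR_CLOUDRUN_NAME \
  --region=INSTANCE_REGION \
  --platform=managed \
  --format='value(status.url)'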
There is no easy way for now (but Cloud Next '21 is coming; maybe there will be a great announcement on that. It's a feature requested by many alpha testers like me).
However, you can implement a bunch of API calls to achieve that. I wrote an article where I use that to get the current Cloud Run service URL, but it could be any other service.
It's in Golang. Have a look at it, and let me know if you have trouble translating the calls into your preferred language.
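For the impatient, a rough shell sketch of that sequence of calls, assuming it runs inside the Cloud Run container (where the K_SERVICE environment variable and the metadata server are available; jq is used for JSON extraction):

# service name injected by Cloud Run
SERVICE="${K_SERVICE}"
# project id and region from the metadata server
PROJECT=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/project/project-id)
REGION=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/region | awk -F/ '{print $NF}')
# access token for the default service account
TOKEN=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token | jq -r .access_token)
# ask the Cloud Run API for our own service and read its URL
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://${REGION}-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/${PROJECT}/services/${SERVICE}" \
  | jq -r '.status.url'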
You can:
URL=$(gcloud run services describe ${NAME} \
  --platform=managed \
  --region=${REGION} \
  --project=${PROJECT} \
  --format="value(status.address.url)")
Numerous services accept query string parameters in the URL when a POST request is made with Content-Type: application/x-www-form-urlencoded and other parameters in the body, but it seems AWS API Gateway cannot accept query string parameters under that content type.
When I call the AWS API Gateway with a POST Mapping Template for application/x-www-form-urlencoded and query string URL parameters (with a Lambda function), I get the following error:
{
"message":"When Content-Type:application/x-www-form-urlencoded,
URL cannot include query-string parameters (after '?'):
'/prod/webhook?inputType=wootric&outputType=glip&url=...'"
}
Here is an example cURL:
curl -XPOST 'https://{myid}.execute-api.{myregion}.amazonaws.com/prod/webhook?inputType=wootric&outputType=glip&url=https://hooks.glip.com/webhook/11112222-3333-4444-5555-666677778888' \
  -d "@docs/handlers/wootric/event-example_response-created.txt" \
  -H 'Content-Type: application/x-www-form-urlencoded' -v
The specific goal is to get a Wootric webhook event posted to a Lambda function using a URL with query string parameters.
You can get the code here:
https://github.com/grokify/chathooks
The Wootric event body file is here:
https://raw.githubusercontent.com/grokify/chathooks/master/docs/handlers/wootric/event-example_response-created.txt
The GitHub issue is here:
https://github.com/grokify/chathooks/issues/15
The error message seems pretty definitive but I wanted to ask:
Is there a workaround to configure an API Gateway to support both?
Is there a standards-based reason why AWS would not support this or is this just a design decision / limitation?
If there's no solution to this, is there a good lightweight alternative other than deploying a hosted server solution like Heroku? Also, do other cloud services support this with their API gateway + cloud functions offerings, e.g. Google?
Some examples showing support for both:
jQuery example: jQuery send GET and POST parameters simultaneously at AJAX request
C# example: Accessing query string variables sent as POST in HttpActionContext
Yes, there is a workaround. The key is to set up a mapping template that converts the query string into JSON. A very detailed example is shown in
API Gateway any content type.
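For reference, a minimal sketch of such a mapping template (the queryParams/rawBody field names are illustrative, not prescribed by the linked answer); it forwards both the query string parameters and the raw form-encoded body to the integration as JSON:

#set($qs = $input.params().querystring)
{
  "queryParams": {
    #foreach($key in $qs.keySet())
    "$key": "$util.escapeJavaScript($qs.get($key))"#if($foreach.hasNext),#end
    #end
  },
  "rawBody": "$util.escapeJavaScript($input.body)"
}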
Please set the Content-Type request property to application/json on your HttpURLConnection, like below:
connection.setRequestProperty("Content-Type", "application/json");
I had a similar problem with a 3rd-party provider using webhooks. It turns out that my provider was transforming the URL path from uppercase to lowercase. For example, the endpoint should have been apigateway.com/dev/0bscur3dpathRANDOM but was being called as apigateway.com/dev/0bscur3dpathrandom. You get the point.
I'm not sure if I got the point of the question correctly, but if you want to access the request body that is encoded as application/x-www-form-urlencoded (or anything, actually) in your Lambda function, you should use the LAMBDA_PROXY request integration type (i.e. tick the "Use Lambda Proxy integration" checkbox) when creating a method for your resource. Then you can access the request body as plain text in the event.body field of your Lambda function and parse it manually.
I'm trying to post a request using curl to my ES cluster in AWS using my accessKey and secretKey. I have successfully done this through Postman (details here), where you can specify AWS credentials, but I would like to make this work with curl. Postman can auto-generate your curl request for you, but all I get are errors.
This is the generated curl request, along with the response:
curl -X GET \
https://search-00000000000001.eu-west-1.es.amazonaws.com/_cat/indices \
-H 'Authorization: AWS4-HMAC-SHA256 Credential=11111111111111111111/20181119/eu-west-1/es/aws4_request, SignedHeaders=cache-control;content-type;host;postman-token;x-amz-date, Signature=11111111116401882398f46011f14fdb9d55e012a4fb912706d67c1111111111' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Host: search-00000000000001.eu-west-1.es.amazonaws.com' \
-H 'Postman-Token: 00000000-0000-4001-8006-9291e208a000' \
-H 'X-Amz-Date: 20181119T220000Z' \
-H 'cache-control: no-cache'
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."}%
IDs have been changed to protect the innocent.
I have checked all my keys and region, and like I said, this works through Postman. Is it possible to access this AWS service using my keys through curl?
This is quite a long rabbit hole. Thanks to Adam for the comment that sent me in the correct direction. The link https://docs.aws.amazon.com/apigateway/api-reference/signing-requests/ really helps you understand what you need to do.
I've since found a script that follows the signing-requests method outlined above. It runs in bash, and while it is not written for use with Elasticsearch requests, it can be used for them.
https://github.com/riboseinc/aws-authenticating-secgroup-scripts (many thanks to https://www.ribose.com for putting this on GitHub).
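As an aside, curl 7.75.0 and newer can sign SigV4 requests natively via --aws-sigv4, which may remove the need for an external script; a sketch against the endpoint above:

curl --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  --aws-sigv4 "aws:amz:eu-west-1:es" \
  "https://search-00000000000001.eu-west-1.es.amazonaws.com/_cat/indices"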
If your host contains ':443', remove it and try again.
This worked for me.
"My initial problem: if I access it with Postman using the same URL, I get the same error, but after removing the ':443/' it works fine, so there's nothing wrong with the key and secret I'm using."
As in the title, I can't seem to get it to work. I'm following the high-level guide detailed here, but any images uploaded seem to be blank.
What I've set up:
/images/{object} - PUT
> Integration Request
AWS Region: ap-southeast-2
AWS Service: S3
AWS Subdomain: [bucket name here]
HTTP method: PUT
Path override: /{object}
Execution Role: [I have one set up]
> URL Path Parameters
object -> method.request.path.object
I'm trying to use Postman to send a PUT request with Content-Type: image/png and a body containing a binary upload of a PNG file.
I've also tried using curl:
curl -X PUT -H "Authorization: Bearer [token]" -H "Content-Type: image/gif" --upload-file ~/Pictures/bart.gif https://[api-url]/dev/images/cool.gif
It creates the file on the server, but the size seems to be double whatever was uploaded, and when viewed I just get "image has an error".
When I try with .txt files (Content-Type: text/plain) it seems to work, though.
Any ideas?
After reading a lot and chatting to AWS technical support, the problem seems to be that you can't do binary uploads through API Gateway, as anything that passes through is automatically run through a UTF-8 encode.
There are a few workarounds for this I can think of; my solution will be to base64-encode the files before upload and trigger a Lambda to decode them when they hit the bucket.
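A minimal sketch of that client-side step, reusing the file and URL from the question (the Lambda decode side is not shown):

base64 ~/Pictures/bart.gif > /tmp/bart.gif.b64
curl -X PUT -H "Authorization: Bearer [token]" -H "Content-Type: text/plain" \
  --upload-file /tmp/bart.gif.b64 https://[api-url]/dev/images/cool.gif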
This is an old post, but I have a solution.
AWS now supports binary uploads through API Gateway.
In general, go to your API settings and add a Binary Media Type.
After that, you can handle the file as base64.
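For reference, the same setting can be applied from the AWS CLI instead of the console; a sketch with an illustrative REST API id (the / in the media type is escaped as ~1 in the patch path):

aws apigateway update-rest-api --rest-api-id abc123 \
  --patch-operations op=add,path=/binaryMediaTypes/image~1gif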