I'm trying to curl a GET request - amazon-web-services

I'm quite new to OpenStack and software development, but here goes.
I'm trying to curl a GET request via AWS API Gateway.
The curl command looks like this:
curl -H "Accept: application/json" -H "Content-Type: application/json" -i GET -d 'name=Claus&username=gettest&password=test' https://xy8fbbpvak.execute-api.eu-west-1.amazonaws.com/prod/adduser
but it gives me this response:
curl: (6) Could not resolve host: GET
HTTP/1.1 403 Forbidden
Content-Type: application/json
Content-Length: 43
Connection: keep-alive
Date: Tue, 18 Jul 2017 06:10:08 GMT
x-amzn-RequestId: c049f3e5-6b7f-11e7-a380-d966a8908f27
x-amzn-ErrorType: MissingAuthenticationTokenException
X-Cache: Error from cloudfront
Via: 1.1 dc81da318a4ae20e51ccfd9463219596.cloudfront.net (CloudFront)
X-Amz-Cf-Id: BI3LX_cwBic2EtCleIHd6yT0B1p4GRoqEbqx85L1nO2UUafPKXC2iQ==
{"message":"Missing Authentication Token"}
The method in AWS API Gateway doesn't need authorization or a token.
I'm really not sure what I'm doing wrong. Please tell me if you need more info.

The message {"message":"Missing Authentication Token"} does not necessarily mean that the method needs authorization or a token; you receive the same error if you request a URL that doesn't exist.
You need to make sure you're using the correct HTTP method and resource path for a valid resource.
From your example, you're using a GET and the action is prod/adduser. That does not sound right to me: an addUser action would generally be exposed as PUT or POST when you design your API.
Also make sure to deploy your API changes; when you test from the API Gateway console you are testing against a staging area, but it is not deployed.
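As a side note, the curl: (6) Could not resolve host: GET line appears because GET was passed as a bare argument, so curl treated it as a second URL; the HTTP method is chosen with -X. A hedged sketch of the request once that is fixed, assuming the resource is deployed as a POST and the integration expects a JSON body (the path and field names are taken from the question; adjust them to whatever your API actually exposes):

# Method goes after -X; with Content-Type: application/json, send a JSON body.
curl -i -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{"name": "Claus", "username": "gettest", "password": "test"}' \
  https://xy8fbbpvak.execute-api.eu-west-1.amazonaws.com/prod/adduser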

Related

Google Cloud Run: Webhook POST causes 400 Response

We are catching a BigCommerce webhook event in our Google Cloud Run application. The request looks like:
Headers
host: abc-123-ue.a.run.app
AccountId: ABC
Content-Type: application/json
Password: Goodbye
Platform: BC
User-Agent: akka-http/10.1.10
Username: Hello
Content-Length: 197
Connection: keep-alive
Body
{"created_at":1594914374,"store_id":"1001005173","producer":"stores/gy68868uk5","scope":"store/product/created","hash":"139fab64ded23b3e1b8473ba24ab21bedd3f535b","data":{"type":"product","id":132}}
For some reason, this causes a 400 response from Google Cloud Run. Our application doesn't even seem to be passed the request. All other endpoints work (including other POST requests).
Any ideas?
Edit
In the original post, I had the path in the host header. This was a mistake made in creating this post and not the actual value passed to us. We can only inspect the request via Requestbin (I can't find the request values anywhere in Google logs) so I'm speculating on the host value and made a mistake writing it out here.
Research so far...
So upon further testing, it seems that BigCommerce Webhooks also fail to send to any Google Cloud Function we set up. As a workaround, I'm having Pipedream catch the webhook and send the payload to our application. No problems there. This endpoint also works with mirror payloads from local and Zapier which seems to eliminate authentication errors.
We are running FastAPI on Google Run and the simplest function on Google Cloud Functions. This seems to be an error with how Google Serverless and BigCommerce Webhook Events communicate with each other. I'm just not sure how...
Here are the headers we managed to capture on one of the only times a BigCommerce Webhook Event came through to our Google Cloud Function:
Content-Length: 197
Content-Type: application/json
Host: us-central1-abc-123.cloudfunctions.net
User-Agent: akka-http/10.1.10
Forwarded: for="0.0.0.0";proto=https
Function-Execution-Id: unes7v34vzyo
X-Appengine-Country: ZZ
X-Appengine-Default-Version-Hostname: f696ddc1d56c3fd66p-tp.appspot.com
X-Appengine-Https: on
X-Appengine-Request-Log-Id: 5f10e15c00ff082ecbb02ee3a70001737e6636393664646331643536633366643636702d7470000165653637393633633164376565323033383131366437343031613365613263303a36000100
X-Appengine-Timeout-Ms: 599999
X-Appengine-User-Ip: 0.0.0.0
X-Cloud-Trace-Context: a62207698d141465d0f38488492d088b/9870406606828581415
X-Forwarded-For: 0.0.0.0
X-Forwarded-Proto: https
Accept-Encoding: gzip
Connection: close
> host: abc-123-ue.a.run.app/bigcommerce/webhooks/
This is most likely the issue. Host headers must contain only the hostname, not the request path.
You can clearly see this will fail:
$ curl -IvH 'Host: pdf-2wvlk7vg3a-uc.a.run.app/foo' https://pdf-2wvlk7vg3a-uc.a.run.app
...
HTTP/2 400
However, if you don't craft the Host header yourself, it will work.
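For comparison, a sketch of the same check without a hand-crafted Host header, using the hypothetical service URL from the example above; the path belongs in the URL and curl derives the Host header from it:

# Path stays in the URL; curl sets Host: pdf-2wvlk7vg3a-uc.a.run.app on its own.
$ curl -Iv https://pdf-2wvlk7vg3a-uc.a.run.app/foo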

Django on Gunicorn/Nginx - Stripe Webhooks Always Getting 400

Production Setup: Django v3.0.5 on Nginx / Gunicorn / Supervisor (I followed directions from here)
(I don't think this is the issue, but I am using dj-stripe for the Django/Stripe integration.)
In development (Django's built-in HTTP server), everything seems to work (i.e. Stripe can send webhook events just fine); however, in production, I get emails saying that Stripe can't reach my server.
When I run
curl -D - -d "user=user1&pass=abcd" -X POST https://my.server/stripe/webhook/
I get this response
HTTP/1.1 400 Bad Request
Server: nginx/1.15.9 (Ubuntu)
Date: Thu, 18 Jun 2020 19:44:07 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
Vary: Cookie
However, non-webhook traffic (i.e. visiting the website via a browser) seems to work normally; it's just the webhooks that fail.
Any idea where this is going wrong?
Your request doesn't have the Stripe secret which is needed for authentication.
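To expand on that with a hedged sketch: real Stripe webhook deliveries carry a Stripe-Signature header that dj-stripe verifies against the endpoint's webhook signing secret, so a hand-rolled curl like the one above is expected to come back 400. Everything below is a placeholder, not a real event or signature:

# Placeholder payload and signature: a signature-verifying endpoint such as
# dj-stripe will still reject this with 400 unless v1 is computed from the
# real webhook signing secret.
curl -D - -X POST https://my.server/stripe/webhook/ \
  -H "Content-Type: application/json" \
  -H "Stripe-Signature: t=1600000000,v1=<computed-signature>" \
  -d '{"id": "evt_test", "object": "event", "type": "payment_intent.succeeded"}'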

Get ADFS SAML 2.0 Assertion Response from command line using curl

I would like to know whether Windows Server ADFS really exposes an API to POST a SAML request and get a SAML assertion response.
When I enter this in my browser : https://<myfqdn>/adfs/ls/IdpInitiatedSignOn.aspx
I'm presented with a site selection page, as shown in the following image.
Then I choose AWS and get to the authentication page to provide my AD credentials, after which I get to the AWS console page.
It's a POST request, and if I activate my Google Chrome Developer Tools, for example, I can see the form headers.
However, when I try to reproduce the same thing from my Linux command line using curl, it's not working.
This is what I'm trying :
$ api_body="{\"service\": \"aws\", \"email\": \"myemail\", \"password\": \"mypass\"}"
$ SAML_IDP_ASSERTION_URL=https://<myfqdn>/adfs/ls/IdpInitiatedSignOn.aspx
$ curl -sD - -X POST "$SAML_IDP_ASSERTION_URL" -H "Content-Type: application/json" -d "$api_body"
This gives me the following header :
HTTP/1.1 200 OK
Cache-Control: no-cache,no-store
Pragma: no-cache
Content-Length: 12844
Content-Type: text/html; charset=utf-8
Expires: -1
Server: Microsoft-HTTPAPI/2.0 Microsoft-HTTPAPI/2.0
x-frame-options: DENY
Date: Fri, 22 Mar 2019 10:25:13 GMT
This is followed by the site's HTML content in my terminal, including JavaScript and other assets. I don't get any JSON data back, nor a SAML response.
Do you have an idea of what the right command/request is to get the SAML response from the command line?
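For what it's worth, the 200 response with the sign-in page's HTML suggests the endpoint behaves as a regular HTML form rather than a JSON API, so replaying the browser's form post is probably closer to what is needed. A rough, hedged sketch only; the form field names below are placeholders and have to be replaced with the actual names (and any hidden fields) captured in the developer tools:

# Sketch: replay the browser's form post instead of sending JSON.
# Field names are placeholders; keep cookies between requests so the
# authenticated session is reused.
curl -s -c cookies.txt -b cookies.txt \
  --data-urlencode "UserNameField=myemail" \
  --data-urlencode "PasswordField=mypass" \
  "https://<myfqdn>/adfs/ls/IdpInitiatedSignOn.aspx"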

URL forbidden 403 when using a tool but fine from browser

I have some images for which I need to do an HttpRequestMethod.HEAD request in order to find out some details of the image.
When I go to the image url on a browser it loads without a problem.
When I attempt to get the header info via my code or via online tools, it fails.
An example URL is http://www.adorama.com/images/large/CHHB74P.JPG
As mentioned, I have used the online tool Hurl.It to try to make the HEAD request, but I am getting the same 403 Forbidden message that I get in my code.
I have tried adding various headers to the HEAD request (User-Agent, Accept, Accept-Encoding, Accept-Language, Cache-Control, Connection, Host, Pragma, Upgrade-Insecure-Requests), but none of this seems to work.
It also fails to do a normal GET request via Hurl.it. Same 403 error.
If it is relevant, my code is a C# web service running on the AWS cloud (just in case the Adorama servers have something against AWS that I don't know about). To test this I have also spun up an EC2 instance (Linux box) and run curl, which also returned the 403 error. Running curl locally on my personal computer returns the binary image data.
And just to remove the obvious thoughts: my code works successfully for many other websites; it is just this one where there is an issue.
Any idea what is required for me to download the image headers and not get the 403?
Same problem here.
Locally it works smoothly; doing it from an AWS instance I get the very same problem.
I thought it was a DNS resolution problem (redirecting to a malfunctioning node). I therefore tried to specify the same IP address as resolved by my client, but that didn't fix the problem.
My guess is that Akamai (the service is provided by an Akamai CDN in this case) is blocking AWS. It is somewhat understandable: customers pay for CDN traffic, and by abusing it people can generate huge bills.
Connecting to www.adorama.com (www.adorama.com)|104.86.164.205|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 301
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 09:34:20 GMT
Connection: close
2016-03-23 09:34:20 ERROR 403: Forbidden.
I tried that URL from Amazon EC2 and it didn't work for me; wget did work from other servers that weren't on Amazon EC2, however. Here is the wget output on EC2:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:42:33-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.40.219.79
Connecting to www.adorama.com|23.40.219.79|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 299
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 08:42:33 GMT
Connection: close
2016-03-23 08:42:33 ERROR 403: Forbidden.
But from another Linux host it did work. Here is output
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:43:11-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.45.139.71
Connecting to www.adorama.com|23.45.139.71|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Content-Type: image/jpeg
Last-Modified: Wed, 23 Mar 2016 08:41:57 GMT
Server: Microsoft-IIS/8.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
ServerID: C01
Content-Length: 15131
Cache-Control: private, max-age=604800
Date: Wed, 23 Mar 2016 08:43:11 GMT
Connection: keep-alive
Set-Cookie: 1YDT=CT; expires=Wed, 20-Apr-2016 08:43:11 GMT; path=/; domain=.adorama.com
P3P: CP="NON DSP ADM DEV PSD OUR IND STP PHY PRE NAV UNI"
Length: 15131 (15K) [image/jpeg]
Saving to: “CHHB74P.JPG”
100%[=====================================>] 15,131 --.-K/s in 0s
2016-03-23 08:43:11 (460 MB/s) - “CHHB74P.JPG” saved [15131/15131]
I would guess that the image provider is deliberately blocking requests from EC2 address ranges.
The reason the wget outgoing IP address is different in the two examples is DNS resolution by the CDN provider that Adorama is using.
Web servers may implement ways to check particular fingerprint attributes to prevent automated bots. Here are a few of the things they can check:
GeoIP / IP address
Browser headers
User agents
Plugin info
Browser fonts
You can simulate browser headers and learn about some fingerprinting "attributes" here: https://panopticlick.eff.org
You can try to replicate how a browser behaves and inject similar headers/user agent. Plain curl/wget is not likely to satisfy those conditions; even tools like PhantomJS occasionally get blocked. There is a reason why some prefer tools like Selenium WebDriver that launch an actual browser.
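As a hedged illustration of the "inject similar headers/user-agent" idea, something along these lines; the header values are examples copied from a desktop browser, and whether this particular CDN accepts them is not guaranteed:

# Example only: HEAD request with browser-like headers.
curl -I http://www.adorama.com/images/large/CHHB74P.JPG \
  -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0' \
  -H 'Accept: image/webp,*/*' \
  -H 'Accept-Language: en-US,en;q=0.5' \
  -H 'Referer: http://www.adorama.com/'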
I found that another URL, also protected by AkamaiGHost, was blocking requests due to certain parts of the user agent. In particular, a user agent containing a link with a protocol was blocked:
Using curl -H 'User-Agent: some-user-agent' https://some.website I found the following results for different user agents:
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0: okay
facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php): 403
https ://bar: okay
https://bar: 403
All I could find for now is this (downvoted) answer https://stackoverflow.com/a/48137940/230422 stating that colons (:) are not allowed in header values. That is clearly not the only thing happening here, as the Mozilla example also has a colon, just not as part of a link.
I guess that most web servers don't care and allow Facebook's bot and other bots that have a contact URL in their user agent, but apparently AkamaiGHost does block it.
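A small loop along these lines reproduces the comparison above; https://some.website is the same stand-in as before, so substitute the endpoint you are actually checking:

# Compare HTTP status codes for a few User-Agent strings against one URL.
for ua in \
  'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0' \
  'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)'
do
  code=$(curl -s -o /dev/null -w '%{http_code}' -H "User-Agent: $ua" https://some.website)
  echo "$code  $ua"
done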

querying the graph api with curl

I want to check how to handle the access_token. So, I use curl to perform the following queries:
curl -X POST https://graph.facebook.com/oauth/access_token -d "client_id=<appId>&client_secret=<secret>&grant_type=client_credentials&redirect_uri="
It returns a value for access_token.
Then, I'd like to get the list of my friends:
curl -X POST https://graph.facebook.com/me/friends -d "access_token=<token>"
It returns this error:
{"error":{"message":"An active access token must be used to query information about the current user.","type":"OAuthException","code":2500}}
Any hints?
The easy way is to use the Graph API Explorer: go there and get an access token with your Facebook developer account:
Generate a Basic User Access Token
When you get to building your own app, you'll need to learn about access tokens and how to generate them using Facebook Login, but for now, we can get one really quickly through the Graph API Explorer:
Click on the Get Token button in the top right of the Explorer.
Choose the option Get User Access Token.
In the following dialog don't check any boxes, just click the blue Get Access Token button.
You'll see a Facebook Login dialog; click OK here to proceed.
You can use the Graph API Explorer instead of curl, but if you want to try it with real curl, the syntax is as follows for the latest API version, v2.8:
curl -i -H 'Authorization: Bearer YOUR_ACCESS_TOKEN' -XGET 'https://graph.facebook.com/v2.8/me'
In my case:
toni@MBP-de-Antonio ~ curl -i -H 'Authorization: Bearer MY-ACCESS_TOKEN' -XGET 'https://graph.facebook.com/v2.8/me'
HTTP/1.1 200 OK
x-app-usage: {"call_count":2,"total_cputime":3,"total_time":3}
Expires: Sat, 01 Jan 2000 00:00:00 GMT
x-fb-trace-id: BnLv25AHTjq
facebook-api-version: v2.8
Content-Type: application/json; charset=UTF-8
x-fb-rev: 2929740
Cache-Control: private, no-cache, no-store, must-revalidate
Pragma: no-cache
ETag: "39875e94193dcd62dcbbf583fc0008c110820a6c"
Access-Control-Allow-Origin: *
X-FB-Debug: fvO9W8Bfl+BihEy/3aZyzOiMrXOkrbK8q1I3Xk2wYnI7sSujZNC6vQzR4RoTWK7K3Hx6EdzoE2kZ/aWhsXe4OA==
Date: Sat, 01 Apr 2017 06:55:52 GMT
Connection: keep-alive
Content-Length: 61
{"name":"Antonio Juan Querol Giner","id":"10204458008519686"}%
Have you tried using the debugger? Debug your access token there; it'll tell you about that access token and whether it's tied to a user:
https://developers.facebook.com/tools/debug
Also, why are you using POST to get /me/friends?
I think your issue may be the "/me" identifier you're using when you try to access your own data. (Depending on what your client ID is set as, you may be referring to the application instead of yourself; try it with your profile's ID instead of "/me".)
Also check this out: https://stackoverflow.com/a/6701982/1546542
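Building on the POST vs GET point above, a hedged example of the friends call as a GET with a user access token; the token value is a placeholder, and note that an app token obtained via client_credentials will not resolve /me:

# /me resolves to the user the token belongs to, so a user access token is needed.
curl -X GET "https://graph.facebook.com/me/friends?access_token=<USER_ACCESS_TOKEN>"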
You should enclose the URL ("https://....") in double quotes after curl -X POST.