I have a web service that contains a method DoWork(). This method will be used to retrieve data from a database and pass the data back to the caller in JSON format.
[OperationContract]
[WebInvoke(Method = "GET", UriTemplate = "doWork")]
public Stream DoWork()
{
    return new MemoryStream(Encoding.UTF8.GetBytes("<html><body>WORK DONE</body></html>"));
}
I have composed a Fiddler request just to verify that my method is available.
If I execute this from Fiddler, the method in my web service gets called, but I can't figure out how to construct a cURL command that will do the same thing.
Perhaps the easiest approach is to have Chrome create that curl command line for you, especially when the request involves many headers or complicated POST data.
Open the developer tools by pressing F12 and going to Network. Then run whatever call you want to monitor.
(In my example you can see what happens when you open questions here on Stack Overflow.)
Then right-click the relevant line and select "Copy as cURL (cmd)" if you are on Windows (on Linux, use the other variant).
This will give you a command line similar to this:
curl "http://stackoverflow.com/questions" -H "Accept-Encoding: gzip, deflate, sdch" -H "Accept-Language: de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4" -H "Upgrade-Insecure-Requests: 1" -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" -H "Referer: ..." -H "Cookie: ..." -H "Connection: keep-alive" --compressed
If you experience problems, add -v to see more details; for a detailed explanation of the options, see the curl manual.
Perhaps all you need to add to your existing curl command line are those browser-specific headers (User-Agent, Accept, ...).
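If you'd rather verify the endpoint without curl at all, the same GET can be issued from Python's standard library. This is only a sketch: the host, port, and .svc path below are assumptions, so substitute the address your service actually listens on.

```python
from urllib.request import Request, urlopen

# Hypothetical base address for the WCF service; adjust host/port and
# the .svc path to match your own hosting setup.
url = "http://localhost:8080/Service1.svc/doWork"
req = Request(url, method="GET")
req.add_header("Accept", "application/json")

# urlopen(req) performs the request; guarded so the sketch still runs
# when no service is listening locally.
try:
    with urlopen(req, timeout=2) as resp:
        print(resp.status, resp.read()[:80])
except OSError as exc:
    print("request failed:", exc)
```

If the service is up, the -v equivalent here is simply inspecting `resp.status` and the returned body.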
I have a Django app using the built-in setting ALLOWED_HOSTS, which whitelists request Host headers. This is needed because Django uses the Host header provided by the client to construct URLs in certain cases.
ALLOWED_HOSTS=djangoapp.com,subdomain.djangoapp.com
I made ten requests with a fake host header (let's call it fakehost.com) to the Django endpoint: /example.
curl -i -s -k -X $'GET' \
-H $'Host: fakehost.com' -H $'Accept-Encoding: gzip, deflate' -H $'Accept: */*' -H $'Accept-Language: en' -H $'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36' -H $'Connection: close' \
$'https://subdomain.djangoapp.com/example'
In the application logs I see the django.security.DisallowedHost error was raised ten times. However, according to the logs of fakehost.com, it did receive one request for /example.
As I understand, this is a server-side request forgery (SSRF) vulnerability as the Django server can be made to make requests to an arbitrary URL.
This makes debugging hard, and it is strange that the issue doesn't occur consistently. It is also strange that Django seems to recognise the fake host, yet one request still somehow reached fakehost.com.
Does anyone have any ideas what I could investigate further in order to fix the apparent vulnerability in this Django app? Is the problem potentially on the server level not the application level?
I have requested an API with Postman, but it didn't return the required page; instead it says: Request is missing required HTTP header ''
When I inspect the same call in the website's developer tools (Network tab, XHR), it shows the required output.
Request Headers:
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
Connection: keep-alive
Host: panthera.api.yuppcdn.net
Origin: test.com
Referer: test.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36
How can I resolve this?
Please help.
Request an API contract from the developer whose API you are invoking. This contract must state which headers are required for a successful invocation of the API.
If it is a public API, its contract should be published or documented on the site from which you are referencing the API specs.
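Once the contract (or the browser's Network tab, as in the question) tells you which headers are required, attach them explicitly to the request. A minimal Python sketch, reusing the headers captured above; the URL is a placeholder, so substitute the real endpoint you are calling:

```python
from urllib.request import Request

# Headers taken from the Network-tab dump in the question; the URL is a
# placeholder - substitute the real endpoint you are calling.
url = "http://panthera.api.yuppcdn.net/"
headers = {
    "Accept": "application/json, text/plain, */*",
    "Origin": "test.com",
    "Referer": "test.com",
    "User-Agent": "Mozilla/5.0",
}
req = Request(url, headers=headers)

# urllib.request.urlopen(req) would send it; here we only confirm the
# headers are attached to the request object.
print(req.get_header("Origin"))
```

The same header dictionary works unchanged with Postman's header panel or `requests.get(url, headers=headers)`.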
Specifically, I'm trying to scrape this entire page, but am only getting a portion of it. If I use:
r = requests.get('http://store.nike.com/us/en_us/pw/mens-shoes/7puZoi3?ipp=120')
it only gets the "visible" part of the page, since more items load as you scroll downwards.
I know there are some solutions in PyQT such as this, but is there a way to have python requests continuously scroll to the bottom of a webpage until all items load?
You could monitor the page's network activity with the browser development console (F12 → Network in Chrome) to see what request the page makes when you scroll down, then reproduce that request with requests. As an alternative, you can use selenium to control a browser programmatically: scroll down until the page ends, then save its HTML.
I think I found the right request:
Request URL: http://store.nike.com/html-services/gridwallData?country=US&lang_locale=en_US&gridwallPath=mens-shoes/7puZoi3&pn=3
Request Method: GET
Status Code: 200 OK
Remote Address: 87.245.221.98:80
Request Headers
Accept: application/json, text/javascript, */*; q=0.01
Referer: http://store.nike.com/us/en_us/pw/mens-shoes/7puZoi3?ipp=120
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
X-NewRelic-ID: VQYGVF5SCBAJVlFaAQIH
X-Requested-With: XMLHttpRequest
It seems the query parameter pn means the current "subpage", but you still need to interpret the response correctly.
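Based on the captured request, here is a sketch that builds the per-page URLs by stepping pn; fetching and JSON-decoding each page is then a `requests.get(...).json()` call away:

```python
from urllib.parse import urlencode

# Endpoint and fixed parameters taken from the captured request above.
base = "http://store.nike.com/html-services/gridwallData"
params = {
    "country": "US",
    "lang_locale": "en_US",
    "gridwallPath": "mens-shoes/7puZoi3",
}

# One URL per "subpage"; pn is the page number seen in the capture.
# safe="/" keeps the slash in gridwallPath unescaped, as in the capture.
urls = [
    f"{base}?{urlencode({**params, 'pn': pn}, safe='/')}"
    for pn in range(1, 4)
]
for u in urls:
    print(u)  # fetch with requests.get(u, headers={...}).json()
```

Loop until a page comes back empty to know when you have collected every item.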
I am trying to send form data to a web service, but under "Request Headers" in the Network tab of the Chrome developer tools I see the origin evil.example and the referer localhost:8080.
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate
Accept-Language: nb-NO,nb;q=0.8,no;q=0.6,nn;q=0.4,en-US;q=0.2,en;q=0.2
Connection: keep-alive
Content-Length: 91
Content-Type: application/x-www-form-urlencoded; charset=UTF-8;
Host: office.insoft.net:9091
Origin: http://evil.example/
Referer: http://localhost:8080/
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2230.0 Safari/537.36
I want to change it to another origin, and "localhost:8080" would be the best one.
How do I resolve that problem?
The overwriting of the Origin header is caused by the Allow-Control-Allow-Origin: * Chrome extension.
Link to the extension
Try disabling this extension in order to solve your problem.
If the jupyter_notebook_config.py file is not there, you can create it by running the following from ~/.jupyter:
$ jupyter notebook --generate-config
Then uncomment this line in the generated file:
c.NotebookApp.allow_origin = '*'
How can I create a batch file that can send HTTPS requests?
So far I have used the Fiddler Request Builder, so I can send requests like:
GET https://website.com/index.aspx?typeoflink=**[HERE-VARIABLE-FROM-FILE]**&min=1 HTTP/1.1
Accept: */*
Referer: https://website.com/index.aspx?chknumbertypeoflink&min=1
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
Host: website.com
Connection: Keep-Alive
Cookie: cookieverrylongstringD%FG^&N*MJ( CVV%^B&N*&*(NHN*B*&BH*&H
But I have to manually change the variable, which is NOT GOOD...
So the script should send many requests, changing only the [HERE-VARIABLE-FROM-FILE] variable each time.
The variables are text names in a file (one variable per line).
This could be done in a batch file, VBS, JScript, or anything!
Thanks in advance!
adam
One way would be to download a version of curl for Windows, and then write a batch file that invokes curl.
set TYPEOFLINK=foo
REM Quote the URL: an unquoted & would otherwise split the command in batch.
curl "https://website.com/index.aspx?typeoflink=%TYPEOFLINK%&min=1" > savedfile
I'm going to assume Windows because you mention Fiddler.
You can use curl, which runs under Cygwin.
curl is a command-line tool that allows you to initiate GET (and other HTTP) requests.
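Since the question allows "anything", the same loop is also a few lines of Python. A sketch under these assumptions: the variable file is named vars.txt (one name per line, created here with sample values for illustration), and the URL pattern mirrors the Fiddler request above:

```python
from urllib.parse import quote

# Hypothetical sample variable file, one name per line, as the question
# describes; in practice this file already exists with your real values.
with open("vars.txt", "w") as fh:
    fh.write("foo\nbar\n")

with open("vars.txt") as fh:
    variables = [line.strip() for line in fh if line.strip()]

# Build one request URL per variable; quote() keeps odd characters safe.
urls = [
    f"https://website.com/index.aspx?typeoflink={quote(v)}&min=1"
    for v in variables
]
for u in urls:
    print(u)  # replace print with urllib.request.urlopen(u) to fetch
```

This avoids batch's quoting pitfalls around & entirely, and the same file-driven loop works with `curl "$url"` in a shell script if you prefer to stay with curl.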