Why does Django change request headers to uppercase? - django

I just want to know why Django changes request headers to uppercase.
For example, I send the header
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36"
and at the backend Django changes it to
HTTP_USER_AGENT: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36
What is the need for this?
Any helpful suggestion will be appreciated.

request.META is a dictionary whose keys are Django's own constants, not the original HTTP header names.
I am quoting:
With the exception of CONTENT_LENGTH and CONTENT_TYPE, as given above,
any HTTP headers in the request are converted to META keys by
converting all characters to uppercase, replacing any hyphens with
underscores and adding an HTTP_ prefix to the name. So, for example, a
header called X-Bender would be mapped to the META key HTTP_X_BENDER.

HTTP headers are case insensitive.
According to the Django docs, HTTP headers are converted to upper case, hyphens are converted to underscores, and the HTTP_ prefix is added. This means that you can use request.META['HTTP_USER_AGENT'] in your code, whether the request used User-Agent, USER-AGENT, or something else.
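To make the mapping concrete, here is a minimal sketch of the conversion rule together with reading the header back in a view; header_to_meta_key and my_view are illustrative names, not part of Django's API. (On newer Django versions there is also request.headers, a case-insensitive mapping of the original header names.)

from django.http import HttpResponse

def header_to_meta_key(name):
    # "User-Agent" -> "HTTP_USER_AGENT": uppercase, hyphens to underscores, HTTP_ prefix
    return "HTTP_" + name.upper().replace("-", "_")

def my_view(request):
    # works no matter how the client spelled the header name
    user_agent = request.META.get("HTTP_USER_AGENT", "unknown")
    return HttpResponse(user_agent)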

Related

Can't access the website via Postman but on chrome without any issue

Problem 1: (resolved - Thanks #Ranjith Thangaraju)
I tried to access this website via Postman, but I couldn't because I got an error: https://i.stack.imgur.com/Dmfj8.png
When I try to access it in Chrome there's no restriction at all - I can access it: https://finance.vietstock.vn/
Could someone please explain or help with this?
I'm sorry if someone else has had the same issue and it has already been fixed; if you see a similar question, please point me in that direction.
Problem 2:
When I access this page [https://finance.vietstock.vn/CEO/phan-tich-ky-thuat.htm],
there is an API that I've tried to call from Postman but couldn't; could you please point me to a solution for this?
Chrome: https://i.stack.imgur.com/RTfsM.png
Postman: https://i.stack.imgur.com/2P2Qe.png
Go to Headers -> click on Bulk Edit.
Add the following lines:
Host: finance.vietstock.vn
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36
Then hit Send!
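For reference, roughly the same request can be reproduced outside Postman with Python's requests library; this is only a sketch using the headers from this answer:

import requests

headers = {
    "Host": "finance.vietstock.vn",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36",
}
response = requests.get("https://finance.vietstock.vn/", headers=headers)
print(response.status_code)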

Scrapy: browser not accepting cookies despite settings

This scraper works fine. I only want to get the titles of the items on this page.
walmart scraper
In scrapy shell, using the view(response) function reveals a web page that says "Your web browser is not accepting cookies," even when I add USER_AGENT information to the scrapy shell launch.
As a result, the scraper doesn't manage to scrape any information. Things that I have changed:
COOKIES_ENABLED = True
COOKIES_DEBUG = True
ROBOTSTXT_OBEY = False
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'
DOWNLOADER_MIDDLEWARES = {'walmartscraper.middlewares.WalmartscraperDownloaderMiddleware': 543,}
I have a feeling I need to add/change something in the middlewares section (it is still the default code) and/or implement requests somewhere. This is the first time I've worked with cookies while scraping and the information I've found hasn't helped me figure this out.
Any advice is very much appreciated. Thank you.
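For what it's worth, besides the project-wide settings above, headers can also be set per request inside the spider. A minimal sketch, where the URL and CSS selector are placeholders rather than values from the original scraper:

import scrapy

class TitlesSpider(scrapy.Spider):
    name = "titles"
    # the same settings can be scoped to a single spider
    custom_settings = {
        "COOKIES_ENABLED": True,
        "ROBOTSTXT_OBEY": False,
    }

    def start_requests(self):
        yield scrapy.Request(
            "https://www.walmart.com/browse/placeholder",  # placeholder URL
            headers={
                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                              "AppleWebKit/537.36 (KHTML, like Gecko) "
                              "Chrome/70.0.3538.77 Safari/537.36",
            },
            callback=self.parse,
        )

    def parse(self, response):
        # selector is illustrative; adjust it to the page's real markup
        for title in response.css("a.product-title-link::text").getall():
            yield {"title": title.strip()}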

Request is missing required HTTP header

I have requested an API via Postman, but it didn't return the required page; instead it says: Request is missing required HTTP header ''
When I go to the browser's developer tools, Network tab, under XHR, it shows the required output.
Request Headers:
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
Connection: keep-alive
Host: panthera.api.yuppcdn.net
Origin: test.com
Referer: test.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36
How can I resolve this?
Please help.
Request an API contract from the developer whose API you are invoking. This contract must state which headers are required for a successful invocation of the API.
If it is a public API, its contract should be published or documented on the site where the API specs are hosted.
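In practice the fix usually amounts to sending the same headers the browser sends. A rough sketch with Python's requests library, using the headers listed above; the endpoint path is a placeholder, so copy the real URL from the Network tab:

import requests

headers = {
    "Accept": "application/json, text/plain, */*",
    "Origin": "test.com",
    "Referer": "test.com",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36",
}
# placeholder endpoint; use the URL shown in the browser's Network tab
response = requests.get("https://panthera.api.yuppcdn.net/placeholder", headers=headers)
print(response.status_code)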

Strange Google Favicon queries to API

I have recently created an API for internal use in my company. Only my colleagues and I have the URL.
A few days ago, I noticed that random requests were occurring to a given method of the API (less than once per day), so I logged accesses to that method and this is what I am getting:
2017-06-18 17:10:00,359 INFO (default task-427) 85.52.215.80 - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 Google Favicon
2017-06-20 07:25:42,273 INFO (default task-614) 85.52.215.80 - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 Google Favicon
The request to the API is performed with the full set of parameters (I mean, it's not just to the root of the webservice).
Any idea of what could be going on?
I have several hypotheses:
A team member has a browser tab with the method request URL open, which reloads every time they open the browser. --> This is my favourite, but everybody claims it's not their fault.
A team member has the service URL (with all parameters) in their browser history, and the browser randomly queries it to retrieve the favicon.
A team member has the service URL (with all parameters) in their browser favourites/bookmarks, and the browser randomly queries it to retrieve the favicon.
While the User-Agent (Google Favicon) seems to suggest one of the two latter options, the IP (located near our own city, on the Orange Spain ISP) seems to suggest the first: after a quick search on the Internet, I found that everybody else seeing such requests seems to get them from a Google IP in California.
I know I could just block that User Agent or IP, but I'd really like to get to the bottom of this issue.
Thanks!
Edit:
Now I am getting User Agents as:
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko; Google Web Preview) Chrome/41.0.2272.118 Safari/537.36
as well :/
Both of these User Agents are associated with Google's Fetch and Render tool in Google Search Console. They make requests when someone asks Google to Fetch and Render a given page for SEO validation. That doesn't quite fit here, since you are asking about an API and not a page, but perhaps a page that was submitted to the Fetch and Render service called the API?

Origin evil.example in Request Header

I am trying to send form data to a web service, but under "Request Headers" in the Network tab of Chrome DevTools I see the origin evil.example and the referer localhost:8080.
Accept:application/json, text/plain, */*
Accept-Encoding:gzip, deflate
Accept-Language:nb-NO,nb;q=0.8,no;q=0.6,nn;q=0.4,en-US;q=0.2,en;q=0.2
Connection:keep-alive
Content-Length:91
Content-Type:application/x-www-form-urlencoded; charset=UTF-8;
Host:office.insoft.net:9091
Origin:http://evil.example/
Referer:http://localhost:8080/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2230.0 Safari/537.36
I want to change it to another origin; "localhost:8080" would be the best one.
How do I resolve this problem?
The Origin header is being overwritten by the Allow-Control-Allow-Origin: * Chrome extension.
Link to the extension
Try disabling this extension in order to solve your problem.
To create a jupyter_notebook_config.py file if it is not there, you can run the following command from ~/.jupyter:
$ jupyter notebook --generate-config
Then uncomment this line in the generated config file:
c.NotebookApp.allow_origin = '*'