Test SameSite and Secure cookies in Django Test client response - django

I have a Django 3.1.7 API.
Before Django 3.1, I was adding the SameSite and Secure cookie attributes to responses through a custom middleware, depending on the user agent, and covering that behaviour with automated tests.
Now that Django 3.1 can set those cookie attributes itself, I removed the custom middleware, but I still want to test that SameSite and Secure are present on the cookies in the responses.
So I added the following settings in settings.py, as the Django documentation describes:
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SAMESITE = 'None'
SESSION_COOKIE_SAMESITE = 'None'
But when I look at the responses in my tests, I no longer see any SameSite or Secure attributes on the cookies. I printed the content of the cookies, and they are not there.
Why?
Here are my tests:
agent_string = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.2227.0 Safari/537.36"
from django.test import Client
test_client = Client()
res = test_client.get("/", HTTP_USER_AGENT=agent_string)
print(res.cookies.items())
I also tried with the DRF test client just in case, with the same result:
agent_string = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.2227.0 Safari/537.36"
from rest_framework.test import APIClient
test_client = APIClient()
res = test_client.get("/", HTTP_USER_AGENT=agent_string)
print(res.cookies.items())
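For reference, this is the kind of check I want to end up with; a minimal sketch, assuming a hypothetical /login/ view that actually sends a Set-Cookie header, placeholder credentials, and the default cookie names:
from django.test import Client

agent_string = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.2227.0 Safari/537.36"
test_client = Client()
res = test_client.post(
    "/login/",                                     # hypothetical view that sets cookies
    {"username": "alice", "password": "secret"},   # placeholder credentials
    HTTP_USER_AGENT=agent_string,
)
for name, morsel in res.cookies.items():
    # res.cookies is a http.cookies.SimpleCookie, so each morsel exposes
    # its attributes ('samesite', 'secure', ...) as dictionary keys once set
    print(name, morsel["samesite"], morsel["secure"])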

Related

python 2 - cookie and headers with urllib2

With Python 2, I have two different problems.
With one URL, I get this error:
urllib2.HTTPError: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop.
So I am trying to set up cookielib.
But then I get this error:
urllib2.HTTPError: HTTP Error 403: Forbidden
I tried to combine the two, without success; it is always the same urllib2.HTTPError: HTTP Error 403: Forbidden that is displayed.
import urllib2, sys
from bs4 import BeautifulSoup
import cookielib

hdr = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.9 Safari/537.36',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
       'Accept-Language': 'fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3',
       'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7',
       'Connection': 'close'}
req = urllib2.Request(row['url'], None, hdr)
cookie = cookielib.CookieJar()  # CookieJar object to store cookies
handler = urllib2.HTTPCookieProcessor(cookie)  # create cookie processor
opener = urllib2.build_opener(handler)  # a general opener
page = opener.open(req)
pagedata = BeautifulSoup(page, "html.parser")
Or:
req = urllib2.Request(row['url'],None,headers=hdr)
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
page = opener.open(req)
pagedata = BeautifulSoup(page,"html.parser")
And many other variations ...
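A minimal sketch of how the two pieces are usually attached to a single opener, so that redirects reuse both the cookie jar and the headers; it only rearranges the snippets above:
import urllib2
import cookielib
from bs4 import BeautifulSoup

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
# addheaders is applied to every request made through this opener,
# including the requests issued while following redirects
opener.addheaders = [
    ('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.9 Safari/537.36'),
    ('Accept-Language', 'fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3'),
]
page = opener.open(row['url'])  # same row['url'] as in the snippets above
pagedata = BeautifulSoup(page, "html.parser")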

403 Response when I use requests to make a post request

My core code is as follows:
import requests
url='https://www.xxxx.top' #for example
data=dict()
session = requests.session()
session.get(url)
token = session.cookies.get('csrftoken')
data['csrfmiddlewaretoken'] = token
res = session.post(url=url, data=data, headers=session.headers, cookies=session.cookies)
print(res)
# <Response [403]>
The variable url is my own website, which is based on Django. I know I can use the @csrf_exempt decorator to disable CSRF protection, but I don't want to do that.
However, it returns a 403 response when I use requests to make a POST request. I wish someone could tell me what is wrong with my approach.
I have solved the problem. In this case, just add a Referer to the headers:
import requests
url='https://www.xxxx.top' #for example
data=dict()
session = requests.session()
session.headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                                 'Chrome/51.0.2704.63 Safari/537.36',
                   'Referer': url}
session.get(url)
token = session.cookies.get('csrftoken')
data['csrfmiddlewaretoken'] = token
res = session.post(url=url, data=data, headers=session.headers, cookies=session.cookies)
print(res)
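For completeness, a minimal sketch of the same idea using the X-CSRFToken header instead of the form field (assuming Django's default CSRF_HEADER_NAME); the Referer still matters because Django verifies it on HTTPS requests:
import requests

url = 'https://www.xxxx.top'  # placeholder, as above
session = requests.session()
session.get(url)  # sets the csrftoken cookie on the session
token = session.cookies.get('csrftoken')
res = session.post(
    url,
    data={},
    headers={
        'Referer': url,
        'X-CSRFToken': token,
    },
)
print(res)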

BS4 error 'NoneType' object has no attribute 'find_all'. Cannot parse html data

import requests
from bs4 import BeautifulSoup as bs

session = requests.session()

def get_sizes_in_stock():
    global session
    endpoint = 'https://www.jimmyjazz.com/mens/footwear/nike-air-max-270/AH8050-100?color=White'
    response = session.get(endpoint)
    soup = bs(response.text, 'html.parser')
    div = soup.find('div', {'class': 'box_wrapper'})
    all_sizes = div.find_all('a')
    sizes_in_stock = []
    for size in all_sizes:
        if 'piunavailable' not in size['class']:
            size_id = size['id']
            sizes_in_stock.append(size_id.split('_')[1])
    return sizes_in_stock

print(get_sizes_in_stock())
Try adding a User-Agent in the headers parameter.
Change:
response = session.get(endpoint)
to:
response = session.get(endpoint, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36'})
import requests
from bs4 import BeautifulSoup as bs

session = requests.session()

def get_sizes_in_stock():
    global session
    endpoint = "https://www.sneakers76.com/en/nike/5111-nike-af1-type-ci0054-001-.html"
    response = session.get(endpoint, headers={'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Mobile Safari/537.36'})
    soup = bs(response.text, "html.parser")
    var = soup.find("var", {"blockwishlist_viewwishlist": "View your wishlist"})
    all_sizes = var.find_all("var combinations")
    sizes_in_stock = []
    for size in all_sizes:
        if "0" not in size["quantity"]:
            size_id = size["attributes"]
            sizes_in_stock.append(size_id)
    return sizes_in_stock

print(get_sizes_in_stock())

Python Requests Post "trendmicro.com"

I'm trying to get Python to make a POST request to "global.sitesafety.trendmicro.com". I've tried getting the cookie and adding it to the headers, but I'm not getting the results I should be.
import requests
from bs4 import BeautifulSoup
from requests.packages.urllib3.exceptions import InsecureRequestWarning

requests.packages.urllib3.disable_warnings()

session = requests.Session()
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36',
           'Host': 'global.sitesafety.trendmicro.com'}
response = session.get('https://global.sitesafety.trendmicro.com/', verify=False, headers=headers)
cookies = session.cookies.get_dict()

domain = "http://hsdfsdfam.com"
url = 'https://global.sitesafety.trendmicro.com/result.php'
payload = {'urlname': domain, 'getinfo': 'Check+Now'}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36',
           'Referer': 'https://global.sitesafety.trendmicro.com/result.php',
           'Origin': 'https://global.sitesafety.trendmicro.com',
           'Host': 'global.sitesafety.trendmicro.com'}
result = session.post(url, params=payload, headers=headers, cookies=cookies, verify=False)
if result.status_code == 200:
    soup = BeautifulSoup(result.content, "lxml")
    matching_divs = soup.find_all('div', class_='labeltitleresult')
    for div in matching_divs:
        print(div.text)
else:
    print('failed to get the page somehow, see: {}'.format(result.status_code))

Change user agent with selenium webdriver and python

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
import os

xpaths = {'video': "//video[@id='video']"}

profile = webdriver.FirefoxProfile()
profile.set_preference("general.useragent.override", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36")
driver = webdriver.Firefox(profile)
mydriver = webdriver.Firefox()
baseurl = "XXXX"
mydriver.get(baseurl)
It's not changing the user agent. I want the user agent to be Chrome's. I don't know what's wrong...
And also, here's what I'd like it to do: go to the website; if it redirects to another URL, go back to the main page and keep doing that until it finds the element with id "video".
I have not implemented this yet because I have no idea how to...
The website I'm trying to automate has a video that only appears sometimes. What I'd like the script to do is keep visiting the website until it finds the id "video", then click it and wait.
Help is appreciated :)
You are navigating to your application URL with the wrong Firefox instance, mydriver. Using the instance that was created with the required profile setting (driver in your case) should do the trick.
Below is the corrected code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
import os

xpaths = {'video': "//video[@id='video']"}

profile = webdriver.FirefoxProfile()
profile.set_preference("general.useragent.override", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36")
driver = webdriver.Firefox(profile)
# the below line is not required
# mydriver = webdriver.Firefox()
baseurl = "XXXX"
# navigate to the url with 'driver' instead of 'mydriver'
driver.get(baseurl)
If you change your baseurl to "http://whatsmyuseragent.com/", you will be able to see right away whether the user agent change is reflected correctly.
Hope this helps!
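For the second part of the question (keep loading the page until the video element appears, then click it), a minimal sketch along these lines could work; the retry delay and the element id are assumptions based on the question, and it reuses the profile setup from above:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
import time

profile = webdriver.FirefoxProfile()
profile.set_preference("general.useragent.override", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36")
driver = webdriver.Firefox(profile)

baseurl = "XXXX"  # same placeholder as above
found = False
while not found:
    driver.get(baseurl)  # reload the main page on every attempt
    try:
        video = driver.find_element(By.XPATH, "//video[@id='video']")
        video.click()
        found = True
    except NoSuchElementException:
        time.sleep(5)  # assumed delay before trying again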