Persistence for cookies

I am working with Python mechanize on a login script. I have read that mechanize's Browser() object handles cookies automatically for further requests.
How can I make these cookies persistent, i.e. save them to a file so that I can load them from that file later?
My script currently logs in to the website (using mechanize/HTML forms) with a Browser() object every time it is run.

If you go through the API docs for mechanize at
http://wwwsearch.sourceforge.net/mechanize/doc.html
you will find information on exactly what you're asking about, specifically the CookieJar and LWPCookieJar material.
From the docs:
There are also some CookieJar subclasses which can store cookies in files and databases. FileCookieJar is the abstract class for CookieJars that can store cookies in disk files. LWPCookieJar saves cookies in a format compatible with the libwww-perl library. This class is convenient if you want to store cookies in a human-readable file:
import mechanize
cj = mechanize.LWPCookieJar()
cj.revert("cookie3.txt")  # load previously saved cookies; fails if the file does not exist yet
opener = mechanize.build_opener(mechanize.HTTPCookieProcessor(cj))
r = opener.open("http://foobar.com/")
cj.save("cookie3.txt")  # write the (possibly updated) cookies back to disk
EDIT: Pseudo code for what was asked for in the comments:
Attempt to load your CookieJar from the file.
If successful, set your Browser() cookie jar to the loaded cookie jar and access the page normally.
If unsuccessful, go through the pages until you reach a point where you have all of the cookies, then save the cookies to the file using LWPCookieJar's save().
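The pseudo code above can be sketched with the standard library's http.cookiejar.LWPCookieJar, which uses the same on-disk format as mechanize's LWPCookieJar; the file name is an assumption:

```python
import http.cookiejar  # mechanize's LWPCookieJar uses the same file format

COOKIE_FILE = "cookies.txt"  # hypothetical path for the saved cookies

def load_or_create_jar(path):
    """Try to restore a saved cookie jar; fall back to an empty one."""
    jar = http.cookiejar.LWPCookieJar(path)
    try:
        # raises OSError if the file is missing, LoadError if it is corrupt
        jar.load(ignore_discard=True)
    except (OSError, http.cookiejar.LoadError):
        pass  # empty jar: the caller must log in normally first
    return jar

jar = load_or_create_jar(COOKIE_FILE)
# ... attach `jar` to the Browser()/opener here, and log in if the jar was empty ...
jar.save(ignore_discard=True)  # persist session cookies as well
```

Passing ignore_discard=True matters for login sessions: most session cookies are marked "discard", and without the flag save() would silently drop exactly the cookies you want to keep.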

Related

How to extract a Cookie from the Request Body of View Results Tree in JMeter? I want to save cookies to a CSV file to use in the future

In my application the login page is very slow, so when I perform a load test it breaks even before reaching the main page. With a few login credentials, iterating over them, I need to generate a new cookie every time and store all of them in a .csv file, so that in future I can just use a cookie to log in and the load test won't break.
Request Body has
Cookie Data:
cb=k3fp7s1rnjoil48ep8aeilro64; lang=en_US
Add the following lines to the user.properties file (it lives in the "bin" folder of your JMeter installation):
CookieManager.save.cookies=true
sample_variables=COOKIE_cb
Restart JMeter to pick up the changes.
Add an HTTP Cookie Manager to your Test Plan.
Add a Flexible File Writer to your Test Plan and configure it to write the COOKIE_cb variable.
That's it. The next time you run your test you will see a new file called values.csv holding the cookies for each thread (virtual user), one cookie per line.

How to get Cookies from response header in jMeter

I am trying to get the cookies from a GET request when I first access a website via an HTTP Request sampler. A number of suggestions mention using user.properties files etc., but I do not actually have these available, as I am using the JMeter GUI to build the tests and it doesn't create these files.
Is there a way of getting the cookies from the header without user.properties? If not, could I request some detail on how to create a user.properties file, as I am very new to JMeter?
Thanks in advance
For the simple case you just need to add an HTTP Cookie Manager to your Test Plan, and HTTP cookies will be handled automatically.
user.properties is used for specific cases; it already exists in your JMeter bin folder in case you need to update it.

File Uploads in Django from urllib

I have a small Django app where you can upload PDF files.
In the past only human beings used the web application.
In the future a script should be able to upload files.
Up to now we have used ModelBackend for authentication (settings.AUTHENTICATION_BACKENDS).
Goal
A script should be able to authenticate and upload files
My current strategy
I add a new user remote-system-foo and give it a password.
The script somehow logs in to the Django web application and then uploads PDF files.
I would like to use the requests library for the http client script.
Question
How do I log in to the Django web application?
Is my current strategy the right one, or are there better strategies?
You can use the requests library to log in to any site; you of course need to tailor the POST depending on which parameters your site requires. If things aren't trivial, take a look at the POST data in Chrome's developer tools while you log in to your site. Here is some code I used to log in to a site; it can easily be extended to do whatever you need it to do.
from bs4 import BeautifulSoup as bs
import requests

session = requests.Session()
data = session.get(page)  # `page` is the login page URL
soup = bs(data.text, "lxml")
# Grab the CSRF token from the page, e.g.:
# token = soup.find("input", {"name": "csrfmiddlewaretoken"})["value"]

# The POST data for authorizing; this may or may not have been a Django
# site, so see what your POST needs
data = {
    'user[login]': 'foo',
    'user[password]': 'foofoo',
}
# Act like a browser, and send the token in the headers, not with the data!
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 '
                  'Safari/537.36',
    'X-CSRF-Token': token,
}
session.post('https://www.examplesite.com/users/sign_in', data=data,
             headers=headers)
Now your session is logged in and you should be able to upload your PDF. I've never tried to upload via requests, though, so take a look at the relevant requests documentation.
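As a sketch of that missing upload step: requests builds the multipart/form-data body for you via the files parameter. The form field name "document" and the endpoint are assumptions, not taken from the original app:

```python
import requests

def upload_pdf(session, url, pdf_path):
    """POST one PDF as multipart/form-data; the field name 'document' is hypothetical."""
    with open(pdf_path, "rb") as fh:
        response = session.post(
            url,
            files={"document": (pdf_path, fh, "application/pdf")},
        )
    response.raise_for_status()  # surface 4xx/5xx instead of failing silently
    return response

# usage (hypothetical endpoint):
# upload_pdf(session, "https://www.examplesite.com/upload/", "report.pdf")
```

Note that Django's CSRF protection applies to this POST too, so the same X-CSRF-Token handling as in the login step is needed unless the view is exempt.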
That being said, this feels like a strange solution. You might consider loading the files as fixtures or via a RunSQL migration, or rather storing their location (e.g. an AWS bucket URL) in the database. But this is new territory for me.
Hope it helps.
We use this library now: https://github.com/hirokiky/django-basicauth
This way we use HTTP basic auth for API views and session/cookie auth for interactive human beings.
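With basic auth protecting the API view, the client script no longer needs the session/CSRF dance; a minimal sketch, where the URL, credentials, and field name are placeholders:

```python
import requests

def upload_with_basic_auth(url, user, password, pdf_path):
    """One-shot upload: no session or CSRF handling, just HTTP Basic auth."""
    with open(pdf_path, "rb") as fh:
        return requests.post(
            url,
            auth=(user, password),           # sent as an Authorization: Basic header
            files={"document": fh},          # hypothetical form field name
        )

# usage (hypothetical endpoint):
# upload_with_basic_auth("https://example.com/api/upload/",
#                        "remote-system-foo", "secret", "report.pdf")
```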
Since I found no matching solution, I wrote and published this:
https://pypi.python.org/pypi/tbzuploader/
It is a generic HTTP upload tool.
If the HTTP upload was successful, files get moved to a "done" subdirectory.
The upload is considered successful by tbzuploader if the server replies with HTTP status 201 Created.
Additional feature: it handles pairs of files.
For example, you have four files: a.pdf, a.xml, b.pdf, b.xml.
The first upload should take a.pdf and a.xml, and the second upload b.pdf and b.xml; read the docs for --patterns.

Login using the Python requests module on an ASPX web page

I've been trying to log in to this web page but I fail every time. This is the code I used:
import requests

headers = {'User-Agent': 'Chrome'}
payload = {'_GlobalLoginControl$UserLogin': 'myUser',
           '_GlobalLoginControl$Password': 'myPass'}
s = requests.Session()
r = s.post('https://www.scadalynx.com/GlobalLogin.aspx', headers=headers, data=payload)
r = s.get('https://www.scadalynx.com/Default.aspx')
print(r.url)
The result I get from print(r.url) is this:
https://www.scadalynx.com/GlobalLogin.aspx?Timeout=Y
You can't, not with that payload alone. The main problem is that your payload isn't complete; check Chrome's Network tab. There are many more required payload fields:
ScriptMgr:_GlobalLoginControl$UpdatePanel1|_GlobalLoginControl$LoginBtn
ScriptMgr_HiddenField:;;AjaxControlToolkit, Version=4.1.40412.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e:en-US:2d0688b9-5fe7-418f-aeb1-6ecaa4dca45f:475a4ef5:effe2a26:751cdd15:5546a2b:dfad98a5:1d3ed089:497ef277:a43b07eb:3cf12cf1
__EVENTTARGET:
__EVENTARGUMENT:
__VIEWSTATE:/wEPDwUKMTQxMjQ3NTE5MA9kFgICAQ8WAh4Ib25zdWJtaXQFkgFpZiAoJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9QYXNzd29yZCcpICE9IG51bGwpICRnZXQoJ19HbG9iYWxMb2dpbkNvbnRyb2xfUGFzc3dvcmQnKS52YWx1ZSA9IGVzY2FwZSgkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX1Bhc3N3b3JkJykudmFsdWUpOxYEAgEPZBYCZg9kFgICBQ9kFgICAg9kFgICAQ9kFgJmD2QWCgIND2QWAgIJDw9kFgIeB29uY2xpY2sFugFqYXZhc2NyaXB0OmlmKCRnZXQoJ19HbG9iYWxMb2dpbkNvbnRyb2xfX0ZvcmdvdFBhc3N3b3JkRU1haWxUZXh0Qm94JykgIT0gbnVsbCkkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX19Gb3Jnb3RQYXNzd29yZEVNYWlsVGV4dEJveCcpLnZhbHVlID0gJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9Vc2VyTG9naW4nKS52YWx1ZTtkAg8PZBYCAgUPEGRkFgBkAhEPD2QWAh8BBZUDamF2YXNjcmlwdDppZigkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX1VzZXJMb2dpblZhbGlkYXRvcicpICE9IG51bGwpJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9Vc2VyTG9naW5WYWxpZGF0b3InKS5lbmFibGVkID0gdHJ1ZTtpZigkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX19Vc2VyTG9naW5SZWd1bGFyRXhwcmVzc2lvblZhbGlkYXRvcicpICE9IG51bGwpJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9fVXNlckxvZ2luUmVndWxhckV4cHJlc3Npb25WYWxpZGF0b3InKS5lbmFibGVkID0gdHJ1ZTtpZigkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX1Bhc3N3b3JkVmFsaWRhdG9yJykgIT0gbnVsbCkkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX1Bhc3N3b3JkVmFsaWRhdG9yJykuZW5hYmxlZCA9IHRydWU7ZAITD2QWAgIBDw8WAh4HVmlzaWJsZWhkZAIVD2QWBAIBDw9kFgIfAQUtJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9QYXNzd29yZCcpLmZvY3VzKCk7ZAILDw9kFgIfAQVhamF2YXNjcmlwdDokZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX19Gb3Jnb3RQYXNzd29yZEVNYWlsUmVxdWlyZWRGaWVsZFZhbGlkYXRvcicpLmVuYWJsZWQgPSB0cnVlO2QCAg8PFgIfAmhkFgICAw8WAh4LXyFJdGVtQ291bnRmZBgBBR5fX0NvbnRyb2xzUmVxdWlyZVBvc3RCYWNrS2V5X18WAwUpX0dsb2JhbExvZ2luQ29udHJvbCRSZW1lbWJlckxvZ2luQ2hlY2tCb3gFK19HbG9iYWxMb2dpbkNvbnRyb2wkX0ZvcmdvdFBhc3N3b3JkQ2xvc2VCdG4FEV9FcnJvckltYWdlQnV0dG9uIXu7XOl6z8WoghCWdElD7kNBanI=
__VIEWSTATEGENERATOR:ABDC7715
__SCROLLPOSITIONX:0
__SCROLLPOSITIONY:0
__EVENTVALIDATION:/wEdAA8j+x15hTpBOEjDv1LxVan3AUijrFjxy9PpisoGxfMqnNduSMVw1RChh3aZsdCK82jXRUWkWThaqEhU3Gr5iw98GHoUhEtg6gp73QcFIR1tGEGQHmQGQos+5LR8l78kIyNCGm6wvkKBlG3Z3EngFWzmX3gMRUNTCvY9T8lfFGMsRkvp3s0LtAU9sya5EgaP5MNrqxxx0HTfWwHJy49saUYlPDg6OL5q3VoZ6biOkvIG8l/ujxMESq+8VmX4sGwXcQBJxOm7RbAd1IEojVITrtk4hx8VhfPuqTNrqWHRrUAMgBj1ffXkwiR7kcJxJ3ixy43iLukJszI09WI7xsAFyAKxG82PcA==
_GlobalLoginControl$ScrWidth:1536
_GlobalLoginControl$ScrHeight:864
_GlobalLoginControl$UserLogin:asdsad#asdas.com
_GlobalLoginControl$Password:asdasd
_GlobalLoginControl$PasswordStore:
_GlobalLoginControl$HiddenField1:
_GlobalLoginControl$_HiddenSessionContentID:
_ErrorHiddenField:
__ASYNCPOST:true
_GlobalLoginControl$LoginBtn:Login
You probably can't shortcut this (I don't think it is possible): you have to use Selenium, or fetch the page first and scrape the information.
But check this topic: How to make HTTP POST on website that uses asp.net?
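Scraping the hidden ASP.NET fields before posting can be sketched with the standard library alone. The sample string below is a trimmed stand-in for the real GlobalLogin.aspx form; in practice you would feed in the HTML fetched with the session:

```python
from html.parser import HTMLParser

class HiddenInputParser(HTMLParser):
    """Collect name/value pairs of <input type="hidden"> elements."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("type") == "hidden" and "name" in attrs:
            self.fields[attrs["name"]] = attrs.get("value") or ""

def hidden_fields(html):
    """Return a dict of all hidden form fields (__VIEWSTATE etc.) in `html`."""
    parser = HiddenInputParser()
    parser.feed(html)
    return parser.fields

# demo on a trimmed sample of an ASP.NET login form
sample = ('<input type="hidden" name="__VIEWSTATE" value="/wEPDw..." />'
          '<input type="hidden" name="__VIEWSTATEGENERATOR" value="ABDC7715" />')
payload = hidden_fields(sample)
payload.update({
    '_GlobalLoginControl$UserLogin': 'myUser',
    '_GlobalLoginControl$Password': 'myPass',
})
```

The merged payload (hidden fields plus credentials) is then what gets POSTed with the session; __VIEWSTATE and __EVENTVALIDATION change per page load, so they must be scraped fresh each time rather than copied from the developer tools.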
We decided that the login should be done with PhantomJS/Chrome driven by Selenium; then you pass the cookies and the headers over to requests. Once requests has the required information, you can use it for the further steps.

Format for storing cookies in a txt file and serving it to PhantomJS

I am trying to send cookies to PhantomJS while starting the driver.
phantom.add_cookie({}) works.
I want to keep a list of cookies in a file and pass them as an argument while launching PhantomJS.
I found that webdriver.PhantomJS(service_args=['--cookies-file=/tmp/ph_cook.txt']) will read a given txt file at launch.
The problem is that I do not know what format this txt file should be in. I tried a map with key-value pairs, but to no avail.
The cookie file format is Qt's internal serialization format for cookies. If you need such a cookie file, follow these instructions:
Create a PhantomJS script which sets the cookies that you need, using either phantom.addCookie() or page.addCookie().
Run that script as:
$ phantomjs --cookies-file=cookies.txt script.js
Use the generated cookies file from Python.