Add cookie to the header in SoapUI using a Groovy script

I am pretty new to Groovy scripting.
I trigger a request from SoapUI that logs in to the database and returns a cookie as part of the response header.
I need a Groovy script that can take the value of the EDEV cookie from that header and pass it to all the other requests inside a TestSuite.
Currently I am using the Groovy script below to achieve this, but I am unable to get it working. Can someone help?
import com.eviware.soapui.impl.wsdl.support.http.HttpClientSupport
def myCookieStore = HttpClientSupport.getHttpClient().getCookieStore()
def val = testRunner.testCase.testSteps['Login'].testRequest.response.getResponseHeaders()
def re = /(EDEV=.*,)/
def matcher = ( val =~ re )
def cookie = matcher[0][0]
def map=[:]
testRunner.testCase.testSteps['Login2'].testRequest.requestHeaders=map
def headers=testRunner.testCase.testSteps['Login2'].testRequest.requestHeaders
headers.put('Cookie', cookie)
testRunner.testCase.testSteps['Login2'].testRequest.requestHeaders=headers
Here Login is the test step that performs the login and Login2 is the target test step whose request header the cookie value needs to be added to.
I have checked this answer: http://stackoverflow.com/questions/20640173/how-do-i-get-a-cookie-from-a-soapui-response-using-a-groovy-test-step and edited my script based on it, but I am still unable to see the EDEV cookie in the next request.

You are reading the response headers, not the cookies.
To read your cookie from the shared cookie store:
import com.eviware.soapui.impl.wsdl.support.http.HttpClientSupport

// SoapUI keeps all cookies of the underlying HttpClient in one cookie store
def myCookieStore = HttpClientSupport.getHttpClient().getCookieStore()
def interestingCookie
myCookieStore.cookies.each {
    if (it.name == "EDEV")
        interestingCookie = it
}
There is only one cookie store in the session, so you will have to store your cookie somewhere, like in the properties:
testCase.testSuite.project.setPropertyValue("interestingCookie", interestingCookie.value)
Later, to set this cookie back:
import org.apache.http.impl.cookie.BasicClientCookie
import com.eviware.soapui.impl.wsdl.support.http.HttpClientSupport

def myCookieStore = HttpClientSupport.getHttpClient().getCookieStore()
def interestingCookie = testCase.testSuite.project.getPropertyValue("interestingCookie")

// Rebuild the cookie from the stored value; depending on the service you may
// also need to set the cookie's domain and path before adding it.
def myNewCookie = new BasicClientCookie("EDEV", interestingCookie)
myCookieStore.addCookie(myNewCookie)
You can find additional details on my blog.


Authentication with GitLab to a terminal

I have a terminal that is served in the web browser with wetty. I want to authenticate the user through GitLab before letting them interact with the terminal (it runs inside a Docker container; once the user is authenticated I will allow them to see the container's terminal).
I am trying to do OAuth 2.0 but could not manage to get it working.
This is what I tried:
I created an application on GitLab.
I took the client ID and secret and made an HTTP call from a Python script.
The script directed me to the login and authorization page.
I tried to get the code but failed (there is no mistake in the code, I think).
Now the problem starts here: I need to get the auth code from the redirect URL in order to obtain the access token, but I could not figure out how. I used the Flask library to get the code.
from flask import Flask, abort, request
from uuid import uuid4
import requests
import requests.auth
import urllib

CLIENT_ID = "clientid"
CLIENT_SECRET = "clientsecret"
REDIRECT_URI = "https://UnrelevantFromGitlabLink.com/console"

def user_agent():
    raise NotImplementedError()

def base_headers():
    return {"User-Agent": user_agent()}

app = Flask(__name__)

@app.route('/')
def homepage():
    text = '<a href="%s">Authenticate with gitlab</a>'
    return text % make_authorization_url()

def make_authorization_url():
    # Generate a random string for the state parameter
    # Save it for use later to prevent xsrf attacks
    state = str(uuid4())
    save_created_state(state)
    params = {"client_id": CLIENT_ID,
              "response_type": "code",
              "state": state,
              "redirect_uri": REDIRECT_URI,
              "scope": "api"}
    url = "https://GitlapDomain/oauth/authorize?" + urllib.urlencode(params)
    print(url)
    return url

# Left as an exercise to the reader.
# You may want to store valid states in a database or memcache.
def save_created_state(state):
    pass

def is_valid_state(state):
    return True

@app.route('/console')
def reddit_callback():
    print("-----------------")
    error = request.args.get('error', '')
    if error:
        return "Error: " + error
    state = request.args.get('state', '')
    if not is_valid_state(state):
        # Uh-oh, this request wasn't started by us!
        abort(403)
    code = request.args.get('code')
    print(code)
    access_token = get_token(code)
    # Note: In most cases, you'll want to store the access token, in, say,
    # a session for use in other parts of your web app.
    return "Your gitlab username is: %s" % get_username(access_token)

def get_token(code):
    client_auth = requests.auth.HTTPBasicAuth(CLIENT_ID, CLIENT_SECRET)
    post_data = {"grant_type": "authorization_code",
                 "code": code,
                 "redirect_uri": REDIRECT_URI}
    headers = base_headers()
    response = requests.post("https://MyGitlabDomain/oauth/token",
                             auth=client_auth,
                             headers=headers,
                             data=post_data)
    token_json = response.json()
    return token_json["access_token"]

if __name__ == '__main__':
    app.run(host="0.0.0.0", debug=True, port=65010)
I think my problem is with my redirect URL, because it is just an irrelevant link from GitLab's point of view and there is no endpoint there that I can call.
If I can get the
@app.route('/console')
handler to fire, my problem will probably be solved.
I need to correct my Python script, or find a different angle to solve my problem. Please help.
I had totally misunderstood the concept of OAuth 2.0. The main aim is to obtain the access_token. When I corrected the callback URL to localhost it worked like a charm.
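For anyone hitting the same wall, here is a minimal sketch of the token exchange with a localhost callback; the domain, port and credentials are placeholders taken from the question, not tested values.
import requests

CLIENT_ID = "clientid"
CLIENT_SECRET = "clientsecret"
# Must exactly match the redirect URI registered for the application in GitLab.
REDIRECT_URI = "http://localhost:65010/console"

def get_token(code):
    # Exchange the authorization code received at the callback for an access token.
    response = requests.post(
        "https://MyGitlabDomain/oauth/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
        },
    )
    response.raise_for_status()
    return response.json()["access_token"]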

Beaker session in bottle

While using Beaker sessions, I need to use the same session object across the whole application.
I came across this question: Bottle.py session with Beaker
But I am still getting a KeyError when I try to access, in one function, a session value that was saved in another function.
My rest.py file looks like:
import bottle
from bottle import route, request, default_app
from beaker.middleware import SessionMiddleware

app = bottle.default_app()

@bottle.hook('before_request')
def setup_request():
    request.session = request.environ['beaker.session']

@app.route('/login')
def login():
    request.session['uname'] = 'user'

@app.route('/logout')
def logout():
    print request.session['uname']
    # expecting to print 'user'

session_opts = {
    'session.type': 'file',
    'session.data_dir': '/tmp/',
    'session.cookie_expires': True,
}
app = SessionMiddleware(bottle.default_app(), session_opts)
I have put the SessionMiddleware at the end because I was getting errors otherwise, following this thread: https://groups.google.com/forum/#!topic/bottlepy/m0akSbWRpZg
But when I access request.session in the logout function I get:
KeyError: 'uname'
Can anyone give a clear example of how to adjust the code in order to maintain the same session across the whole application?
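A minimal sketch of how such a setup is usually wired, assuming the file backend from the question; note that without 'session.auto' you have to call request.session.save() yourself after changing it.
import bottle
from bottle import request
from beaker.middleware import SessionMiddleware

session_opts = {
    'session.type': 'file',
    'session.data_dir': '/tmp/',
    'session.auto': True,  # persist the session automatically after each request
}

@bottle.hook('before_request')
def setup_request():
    # Beaker puts the session object into the WSGI environ for every request.
    request.session = request.environ['beaker.session']

@bottle.route('/login')
def login():
    request.session['uname'] = 'user'
    return 'logged in'

@bottle.route('/logout')
def logout():
    # Returns 'user' if /login was visited earlier in the same browser session.
    return request.session.get('uname', 'no user yet')

# Wrap the default app once and run the wrapped app.
app = SessionMiddleware(bottle.default_app(), session_opts)

if __name__ == '__main__':
    bottle.run(app=app, host='localhost', port=8080)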

How to scrape pages after login

I am trying to find a way to scrape and parse more pages in the signed-in area.
These are example links, accessible once signed in, that I want to parse:
#http://example.com/seller/demand/?id=305554
#http://example.com/seller/demand/?id=305553
#http://example.com/seller/demand/?id=305552
#....
I want to create a spider that can open each of these links and parse them.
I have created another spider which can open and parse only one of them.
When I tried a for or while loop to issue more requests with other links, it did not let me, because I cannot put more than one return into a generator; it raises an error. I also tried link extractors, but they didn't work for me.
Here is my code:
#!c:/server/www/scrapy
# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.selector import Selector
from scrapy.http import FormRequest
from scrapy.http.request import Request
from scrapy.spiders import CrawlSpider, Rule
from array import *
from stack.items import StackItem
from scrapy.linkextractors import LinkExtractor

class Spider3(Spider):
    name = "Spider3"
    allowed_domains = ["example.com"]
    start_urls = ["http://example.com/login"]  # this link leads to the login page
When I am signed in, the site returns a page whose URL contains "stat"; that is why I put the first "if" condition here.
Once signed in, I request one link and call the parse_items function.
    def parse(self, response):
        # when "stat" is in the url it means that I just signed in
        if "stat" in response.url:
            return Request("http://example.com/seller/demand/?id=305554", callback=self.parse_items)
        else:
            # a successful login redirects me to a page whose url contains "stat"
            return [FormRequest.from_response(response,
                formdata={'ctl00$ContentPlaceHolder1$lMain$tbLogin': 'my_login',
                          'ctl00$ContentPlaceHolder1$lMain$tbPass': 'my_password'},
                callback=self.parse)]
The parse_items function simply parses the desired content from one page:
    def parse_items(self, response):
        questions = Selector(response).xpath('//*[@id="ctl00_ContentPlaceHolder1_cRequest_divAll"]/table/tr')
        for question in questions:
            item = StackItem()
            item['name'] = question.xpath('th/text()').extract()[0]
            item['value'] = question.xpath('td/text()').extract()[0]
            yield item
Can you please help me update this code to open and parse more than one page per session?
I don't want to sign in over and over again for each request.
The session most likely depends on cookies, and Scrapy manages those by itself, i.e.:
def parse_items(self, response):
    questions = Selector(response).xpath('//*[@id="ctl00_ContentPlaceHolder1_cRequest_divAll"]/table/tr')
    for question in questions:
        item = StackItem()
        item['name'] = question.xpath('th/text()').extract()[0]
        item['value'] = question.xpath('td/text()').extract()[0]
        yield item
    next_url = ''  # find the url to the next page in the current page
    if next_url:
        yield Request(next_url, self.parse_items)
        # scrapy will retain the session for the next page if it's managed by cookies
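How next_url is found depends on the site's markup; one hedged possibility, assuming the page has a pagination link marked with a "next" class:
    # Hypothetical selector; adjust it to the real pagination markup of the site.
    next_href = response.xpath('//a[contains(@class, "next")]/@href').extract_first()
    if next_href:
        # urljoin turns a relative href into an absolute URL.
        yield Request(response.urljoin(next_href), self.parse_items)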
I am currently working on the same problem. I use InitSpider so I can override __init__ and init_request. The first is just for initialisation of custom stuff and the actual magic happens in my init_request:
def init_request(self):
    """This function is called before crawling starts."""
    # Do not start a request on error,
    # simply return nothing and quit scrapy
    if self.abort:
        return
    # Do a login
    if self.login_required:
        # Start with the login first
        return Request(url=self.login_page, callback=self.login)
    else:
        # Start with the parse function
        return Request(url=self.base_url, callback=self.parse)
My login looks like this:
def login(self, response):
    """Generate a login request."""
    self.log('Login called')
    return FormRequest.from_response(
        response,
        formdata=self.login_data,
        method=self.login_method,
        callback=self.check_login_response
    )
self.login_data is a dict with the POST values.
I am still a beginner with Python and Scrapy, so I might be doing it the wrong way. Anyway, so far I have produced a working version that can be viewed on GitHub.
HTH:
https://github.com/cytopia/crawlpy
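check_login_response is referenced above but not shown; a minimal sketch of what it could look like with InitSpider (the "Logout" marker is an assumption, check for whatever only appears on a logged-in page):
def check_login_response(self, response):
    """Verify that the login succeeded before starting the real crawl."""
    if b"Logout" in response.body:
        self.log("Login successful")
        # Tell InitSpider that initialisation is finished so crawling can start.
        return self.initialized()
    self.log("Login failed")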

Python web.py automated testing

I am having an issue with automated testing in the web.py framework.
I am going through the last exercise of Learn Python the Hard Way. In this exercise we make a web application "engine" that runs a map of rooms.
I want to be able to automatically test every single room, but there is one problem: the engine depends on the previous room (and user input) to decide which room to go to next.
if web.config.get("_session") is None:
    store = web.session.DiskStore("sessions")
    session = web.session.Session(app, store, initializer={"room": None})
    web.config._session = session
else:
    session = web.config._session
This class handles GET requests sent to /:
class Index(object):
    def GET(self):
        session.room = map.START
        web.seeother("/game")
This class handles GET and POST requests sent to /game:
class GameEngine(object):
    def GET(self):
        if session.room:
            return render.show_room(room=session.room)
        else:
            return render.you_died()

    def POST(self):
        form = web.input(action=None)
        if session.room and form.action:
            session.room = session.room.go(form.action)
        web.seeother("/game")
In my automated testing I use two things. First, I use the app.request API:
app.request(localpart='/', method='GET', data=None,
            host='0.0.0.0:8080', headers=None, https=False)
to create a response object, something like:
resp = app.request("/game", method="GET")
Second, I pass the resp object to this function to check for certain things:
from nose.tools import *
import re

def assert_response(resp, contains=None, matches=None, headers=None, status="200"):
    assert status in resp.status, "Expected response %r not in %r" % (status, resp.status)
    if status == "200":
        assert resp.data, "Response data is empty"
    if contains:
        assert contains in resp.data, "Response does not contain %r" % contains
    if matches:
        reg = re.compile(matches)
        assert reg.match(resp.data), "Response does not match %r" % matches
    if headers:
        assert_equal(resp.headers, headers)
We can pass variables as a dictionary to the data keyword argument of app.request to populate web.input().
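For example, a POST that simulates submitting the room's form (the value "shoot" is just a placeholder for whatever action the room expects):
# web.input() inside POST() will then see action="shoot".
resp = app.request("/game", method="POST", data={"action": "shoot"})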
My question is: in my automated test module, how do we "pass" a value that overwrites the room value in the initializer dictionary of our session:
session = web.session.Session(app, store, initializer={"room":None})
In the app module it is done by setting:
session.room = map.START
and then session.room updates using:
if session.room and form.action:
    session.room = session.room.go(form.action)
Thanks for taking the time to read this, and any insights would be appreciated!
Alright, I finally found it! The main issue here was that every time I made an HTTP request through app.request it gave me a new session ID.
The trick that I found, thanks to this post:
How to initialize session data in automated test? (python 2.7, webpy, nosetests)
is to record the session ID of the response and reuse that ID in my automated tests by passing it to the headers keyword argument of the request.
Record the session ID using this function (which, as suggested in the post, I placed in tests/tools.py):
def get_session_id(resp):
    cookies_str = resp.headers['Set-Cookie']
    if cookies_str:
        for kv in cookies_str.split(';'):
            if 'webpy_session_id=' in kv:
                return kv
Then, in the automated tests, something like:
def test_session():
    resp = app.request('/')
    session_id = get_session_id(resp)
    resp1 = app.request('/game', headers={'Cookie': session_id})
    assert_response(resp1, status='200', contains='Central Corridor')
I hope this helps other programmers who get stuck on the same issue in the future!

On data.put() I need to display to the user whether the data was submitted successfully or failed

I am using Python and Google App Engine, mostly with Jinja2 templates.
I would like that when a user registers a new account, they get a popup indicating that their registration was successful, or at least some alert on that same page, before moving on to the next registration step.
def post(self):
    user = str(users.get_current_user().email())
    userquery = Users.query(Users.email == user)
    count = userquery.count()
    if count == 0:
        # test if user is admin or employee
        qry = Users.query()
        count = qry.count()
        if count == 0:
            privilage = 'admin'
        db_put = Users(
            f_name=self.request.get("f_name"),
            l_name=self.request.get("l_name"),
            org=self.request.get("org"),
            email=users.get_current_user().email(),
            privilage=privilage
        )
        db_put.put()
How are you calling this POST method? Are you sending a form there directly (use method 1) or is this being done with an AJAX call (use method 2)?
Method 1
You can redirect to a GET page where you render a template with a success or error message for Jinja to use. This would however involve a page change.
import webapp2

class MyHandler(webapp2.RequestHandler):
    def get(self):  # Let's assume /someurl is mapped to this handler.
        template_values = {}
        notification = self.request.get('notification')
        if notification:
            template_values['notification'] = notification
        self.response.set_status(200)
        self.response.headers['Content-Type'] = 'text/html; charset=utf-8'
        # Need to get the template from jinja and set it as the template variable.
        self.response.out.write(template.render(template_values))

    def post(self):
        # Do all your stuff here.
        self.redirect('/someurl?notification=Success')
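This assumes /someurl is mapped to MyHandler in your WSGI application, for example:
app = webapp2.WSGIApplication([('/someurl', MyHandler)], debug=True)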
Alternatively you can set the parameters directly on the request instead of passing them as URI parameters:
def post(self):
    # Do all your stuff here.
    self.redirect('/someurl', params={'notification': 'Success'})
Method 2
In this method you can send back a JSON response with a success or error message. The caller (whatever function in your JavaScript submitted the request to the backend) can use that to render a butterbar message or another popup notification of your choosing:
import json
import webapp2

class MyHandler(webapp2.RequestHandler):
    def post(self):
        # Do all your stuff here.
        self.response.set_status(200)
        self.response.headers['Content-Type'] = 'application/json; charset=utf-8'
        self.response.headers['Content-Disposition'] = 'attachment'
        self.response.out.write(json.JSONEncoder(sort_keys=True).encode('Success'))
For the latter, make sure you think about cross-site scripting (XSS) vulnerabilities and perhaps add a JSON prefix.
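A minimal sketch of what such a prefix could look like; the exact prefix string is only a common convention (not part of webapp2), and the JavaScript caller has to strip it before calling JSON.parse:
import json

XSSI_PREFIX = ")]}',\n"  # conventional anti-XSSI prefix; any non-JSON prefix works

def write_json(handler, payload):
    # Hypothetical helper: the prefix makes the body invalid JSON on its own,
    # so it cannot be consumed directly via a cross-site <script> include.
    handler.response.headers['Content-Type'] = 'application/json; charset=utf-8'
    handler.response.out.write(XSSI_PREFIX + json.dumps(payload))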