I am running the same QTP script in the QA and Staging environments. One test case requires me to click on a PDF document, which opens in a new window. My problem is that even though the document is the same, the domain name is different. What do I do to match it? Can I use a regular expression to do it?
URL of document in QA:
http://qaapp2/InfoLibrary/ViewDocument.aspx?documentid=81b60525-9393-45ac-9c89-2fb1b0cb4701&documentname=ICD10+physician+readiness+survey.pdf
URL of document in Staging:
http://stgapp2:81/InfoLibrary/ViewDocument.aspx?documentid=81b60525-9393-45ac-9c89-2fb1b0cb4701&documentname=ICD10+physician+readiness+survey.pdf
If you look at the URLs, you will notice that everything is the same except the domain name:
QA: qaapp2
STG: stgapp2:81
The only common string sequence is 'app2'.
I am unable to match the URL using a regex. I tried this:
[(stg)|(qa)][app2]
and it is not working. Please help.
Change the regular expression to
((stg)|(qa))app2
I use the site below to verify my regex patterns:
http://www.regular-expressions.info/vbscriptexample.html
Note: it works only in IE, as it is VBScript.
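For the full document URL, a pattern along these lines should match both environments (a sketch only: the port is made optional with (:\d+)? and the regex metacharacters ., ? and + are escaped; plug the value into whatever URL property your QTP object uses):
http://(stg|qa)app2(:\d+)?/InfoLibrary/ViewDocument\.aspx\?documentid=81b60525-9393-45ac-9c89-2fb1b0cb4701&documentname=ICD10\+physician\+readiness\+survey\.pdf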
I'm trying to make a script that activates on Amazon. I use multiple Amazon sites though (.NL, .DE, .CO.UK, etc.), and I would like to use a regex to pick that up, so I can visit any website that starts with https://www.amazon. and have the script activate.
I wrote this regex for the @match rule in the header:
((https?):\/\/)?(\w+)\.(amazon)\.(?P<extension>\w+(\.\w+)?)(\/.*)?
According to regex101, the regex is correct and should pick up on strings like https://www.amazon.de, but the Tampermonkey script (in both Chrome and Safari) is not activating when I visit any Amazon website.
According to the Tampermonkey documentation it should support regex in @match, so why is it not working? Did I make a mistake in my regex after all? Is this regex too complex for the @match rule?
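For what it's worth, here is a minimal userscript header sketch (the script name is made up): Tampermonkey's @match takes Chrome-style match patterns rather than regular expressions, while @include does accept a regex when it is wrapped in slashes.
// ==UserScript==
// @name        Amazon script (hypothetical example)
// @namespace   example
// @version     0.1
// @include     /^https?:\/\/www\.amazon\.\w+(\.\w+)?(\/.*)?$/
// @grant       none
// ==/UserScript==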
I am trying to set up Apache Nutch to crawl only websites with a specified domain using regex. I don't have much experience with regex, and I'm having trouble working out how to express my domain as a regex.
The domain is
https://www.health.gov.au/
and I would like any web page with this domain followed by anything else to be accepted by the regex.
Thanks for your time.
EDIT
For example, I would like https://www.health.gov.au/health-topics to be accepted by the regex.
You can use (https://www.health.gov.au/.*).
This will match all characters after https://www.health.gov.au/
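If this goes into Nutch's regex-urlfilter.txt, a sketch of the entries under the usual conventions of that file (+ accepts, - rejects, the first matching rule wins, and dots are escaped to be safe; the final -. line rejects everything else):
+^https://www\.health\.gov\.au/
-.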
JMeter's URL Patterns to Exclude under WorkBench is not excluding the patterns that are given there.
Can we give direct URLs? I have a list of URLs that need to be excluded from the recorded script.
Example:
safebrowsing.google.com
safebrowsing-cache.google.com
self-repair.mozilla.org
I'm giving these directly under Patterns to Exclude. Or do I need to give them as regular expressions only?
Can someone provide more info on whether to use a regular expression, or whether a direct URL can be provided, under Requests Filtering in WorkBench?
JMeter uses Perl5-style regular expressions for defining URL patterns to include/exclude, so you can filter out all requests to/from Google and Mozilla domains by adding the following regular expression to the URL Patterns to Exclude input of the HTTP(S) Test Script Recorder:
^((?!google|mozilla).)*$
See the Excluding Domains From The Load Test article for more details.
If you want any of these patterns to be excluded from the recorded script, follow the patterns below and add them under "URL Patterns to Exclude"; it should work.
1. For .html : .*\.html.*
2. For .gif : .*\.gif.* etc
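If instead you only want to drop the specific hosts listed in the question, each URL can be turned into its own pattern; the field expects regular expressions matched against the full request URL, so for example:
.*safebrowsing\.google\.com.*
.*safebrowsing-cache\.google\.com.*
.*self-repair\.mozilla\.org.*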
I have a RESTful API endpoint with a Mongo backend. I allow users to filter their queries using GET parameters, e.g. to get analytics for the /about page on the site:
/page-analytics?filter_by=pagePath:/about
I would like to extend this to support regular expression filters, e.g. to get analytics for all pages below /about on the site:
/page-analytics?filter_by=pagePath:/about/.*
What is the safest way to do this? Should I enclose the regex between delimiting markers, and should I get users to pass URL-escaped values?
My requirements are as follows: firstly, I would like to write backend code to check that we have a valid regex before passing it on to my database. Secondly, I don't want the regexes passed in by users to get messed up by URL escaping.
One example of a problem: I initially started by enclosing all my regexes in slashes, like /^about/.*$/, but that doesn't work well because my server removes trailing slashes from URLs.
If anyone has any tips, I'd be very grateful.
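One way to approach the validation described above, as a minimal Python sketch (it assumes the pattern arrives URL-encoded in the filter_by parameter and is later handed to Mongo's $regex operator; the parse_filter name is made up):
import re
from urllib.parse import unquote

def parse_filter(filter_by):
    # Split "pagePath:/about/.*" into a field name and a pattern.
    field, sep, pattern = unquote(filter_by).partition(":")
    if not sep or not field or not pattern:
        raise ValueError("filter_by must look like field:pattern")
    try:
        # Reject syntactically invalid regexes before they reach the database.
        re.compile(pattern)
    except re.error as exc:
        raise ValueError("invalid regex: %s" % exc)
    return field, pattern

# Usage: field, pattern = parse_filter("pagePath:/about/.*")
#        query = {field: {"$regex": pattern}}
Note that Python's re dialect is not identical to MongoDB's PCRE, so compiling the pattern is only a first-pass sanity check, not a guarantee that Mongo will accept it.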
Surprise! I have another Apache Nutch v1.5 question. So in crawling and indexing our site to Solr via Nutch, we need to be able to exclude any content that falls under a certain path.
So say we have our site: http://oursite.com/ and we have a path that we don't want to index at http://oursite.com/private/
I have http://oursite.com/ in the seed.txt file and +^http://www.oursite.com/([a-z0-9\-A-Z]*\/)* in the regex-urlfilter.txt file
I thought that putting: -.*/private/.* also in the regex-urlfilter.txt file would exclude that path and anything under it, but the crawler is still fetching and indexing content under the /private/ path.
Is there some kind of restart I need to do on the server, like Solr? Or is my regex not actually the right way to do this?
Thanks.
My guess is that the URL is accepted by the first regex and the second one is never checked. If you want to deny URLs, put their regexes first in the list.
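For example, in regex-urlfilter.txt the deny rule would go before the accept rule, roughly like this (keeping the accept pattern from the question; the first matching rule wins):
-.*/private/.*
+^http://www.oursite.com/([a-z0-9\-A-Z]*\/)*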