Postman - Can't inspect Find & Replace results

I am searching a collection for requests that contain a specific string, using Find and Replace. After entering a value, I can see the results. I would like to inspect the requests and responses that appear in the search results; specifically, I need to see their headers and, for results that are responses, their entire response bodies.
There is an "Open in builder" button next to each search result that unfortunately does absolutely nothing, and to further complicate things, the URL of the request is truncated in the search results. Because of this, I cannot even find the request in the collection manually and inspect it.
Does anyone have a solution to this problem?

Related

Regex question: search function and broken links on my website

I've been trying to figure out my regex pattern, but it doesn't seem to be working for me.
Here's what I'm trying to do:
I have broken links on my website when someone accidentally ends up on a page like:
https://example.com/catalogsearch/result/?q=
or
https://example.com/catalogsearch/result/
So I'm redirecting them back to my homepage. The problem is that now the search is sending everything back to the homepage. I'm assuming that if there is something after the equals sign, it needs to continue the search, obviously:
https://example.com/catalogsearch/result/?q=person
but currently I can't figure this out.
Here is the regex I've been messing with for quite some time now; it still seems to be wrong, or something else is wrong with my search.
"^/catalogsearch/result((/)|(/\\?)|(/\\?[a-z])|(/\\?[a-z]=))?$"
Please forgive me; I'm horrible with regex.
After a lot of discussion, the conclusion is that routes.yaml treats the URL path as a valid route, but not the query-string part. Hence, of the two examples in the post, you can use
"/catalogsearch/result": { to: "https://example.com/", prefix: false }
and for the other one, change your nginx config to redirect to the homepage; if that is not possible, check with Magento support on how to incorporate the query-string part into the routes.yaml file.
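For the nginx side, a minimal sketch of that redirect could look like the following. The location path and homepage URL are taken from the examples above, and the proxy_pass target is just a stand-in for however the store is normally served:

location /catalogsearch/result/ {
    # $arg_q is nginx's built-in variable for the "q" query argument;
    # it is empty both when q is missing and when the URL ends in "?q=".
    if ($arg_q = "") {
        return 301 https://example.com/;
    }
    # Non-empty q: fall through to whatever normally serves the store.
    proxy_pass http://127.0.0.1:8080;
}

This avoids regex entirely: real searches keep their query string, and only the two broken forms are redirected.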

OpenCart: I can edit an order but cannot delete it (with error log)

I use OpenCart version 2.1.0.1.
Every time I click admin > sales > order, a popup appears saying "error undefined." After closing that popup window, I can still edit orders but cannot delete them (no response).
In my log, there is:
PHP Notice: Undefined variable: order_id in
/var/www/html/opencart2101/system/storage/modification/admin/view/template/sale/order_list.tpl on line 821
Line 821 is:
url: 'index.php?route=extension/openbay/addorderinfo&token=<?php echo $token; ?>&order_id=<?php echo $order_id; ?>&status_id=' + status_id,
However, I haven't installed any OpenBay-related module. Also, line 821 is inside an HTML comment (<!-- -->), so it should have no effect.
Help!
Although this is now an older version of OpenCart, I still see this being reported a lot.
The problem occurs because the storefront adds the http URL, rather than the https URL, to the order. So first you need to fix that. If you don't want to read all of my explanation, just follow the steps below :)
Either way, BACK UP EVERYTHING. Actually, not quite everything: back up the file you are going to edit and back up your whole database.
Open:
catalog/controller/checkout/confirm.php, at around line 100
Find:
$order_data['store_url'] = HTTP_SERVER;
Change to:
$order_data['store_url'] = HTTPS_SERVER;
Now you will want to fix your database, because for reasons I cannot fathom, the domain name is stored in the order along with the store's ID, and when editing orders it is the use of that URL directly within your admin order page that throws the undefined notice. Basically, the browser blocks the request because it is trying to make an insecure (http) request from a secure (https) page.
Crack open phpMyAdmin or whatever database tool you have on hand.
Locate the table; the default is oc_orders.
Browsing the table, look for the column that contains your store URL (I can't remember the name offhand; I think it is just store_url, but it will be obvious anyway). If you run multiple stores, you will need to run the query for each store_id.
I am sure somebody can come up with a clever way to automatically convert just the http into https with a single-use SQL query on that one column, but this works for me (see the sketch after the query below).
Run this SQL, adjusted as appropriate:
UPDATE `oc_orders` SET `store_url` = 'https://example.com' WHERE store_id = 0;
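For the record, a single-use query along these lines should do the automatic conversion mentioned above. This is a sketch that assumes the column really is named store_url; test it against your backup first:

UPDATE `oc_orders`
SET `store_url` = REPLACE(`store_url`, 'http://', 'https://')
WHERE `store_url` LIKE 'http://%';

The WHERE clause limits the update to rows still carrying the insecure URL, so it is safe to re-run.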

SharePoint-Search 2013 Query Transform keeps appending SPSPeople exclusion

I'm trying to get FQL working in an out-of-the-box Enterprise Search Site Collection in on-premises SharePoint 2013, with no success.
Intended query behavior is to:
- Accept and query search terms
- Limit results to the current subdomain (https://teams.domain.com/...)
- Exclude People from results
Our functioning KQL Query Transform is:
{?{searchTerms} {?path:{QueryString.p}} -ContentClass=urn:content-class:SPSPeople}
As instructed on MSDN, I copied the current Result Source (in Site Collection Administration) and modified the Query Transform to:
andnot((and({?{searchTerms}},{?path:{QueryString.p}})),(filter(contentclass:"urn:content-class:SPSPeople*")))
I tried other variations as well, but none of them work.
Even more puzzling to me: when I go from the "Basics" tab to the "Test" tab and click "Show more", the Query text box is ALWAYS appended with
-ContentClass=urn:content-class:SPSPeople
Since it's not FQL-formatted, I figure that's why my template won't work. I've been at this all day now... Any suggestions on what to do next? How do I get rid of that KQL suffix?
Figured it out... I trusted that the FQL Query Transformation was correct and bypassed the "Launch Query Builder" button altogether, entering the FQL directly into the Query Transform text box; apparently it is the Query Builder that keeps appending the KQL suffix.

Select all frames at once in Selenium

This could be a stupid question for some, but it's truly important for me.
I know how to switch frames using selenium webdriver.
However, is there a way to download the page source of the entire page, for all the frames at once, instead of switching between them again and again?
Could someone please let me know the command, if it exists?
If not, please say there is none, and that will answer my question.
Thanks in advance
WebDriver's getPageSource will return some state, in some formatting, of the last page the driver was on.
From the Java docs, but this most probably applies to other languages:
getPageSource
java.lang.String getPageSource()
Get the source of the last loaded page. If the page has been modified
after loading (for example, by Javascript) there is no guarantee that
the returned text is that of the modified page. Please consult the
documentation of the particular driver being used to determine whether
the returned text reflects the current state of the page or the text
last sent by the web server. The page source returned is a
representation of the underlying DOM: do not expect it to be formatted
or escaped in the same way as the response sent from the web server.
Think of it as an artist's impression.
Returns:
The source of the current page
http://selenium.googlecode.com/git/docs/api/java/org/openqa/selenium/WebDriver.html#getPageSource%28%29
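In other words, there is no single command: getPageSource only returns the document the driver is currently focused on, so you still have to switch into each frame. A sketch of how you might collect them in one pass (Java; collectFrameSources is a made-up helper name, and it only visits top-level frames, so nested frames would need recursion):

import java.util.ArrayList;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Gathers the source of the top-level document and of every top-level frame/iframe.
public static List<String> collectFrameSources(WebDriver driver) {
    List<String> sources = new ArrayList<>();
    sources.add(driver.getPageSource()); // the top-level document itself
    List<WebElement> frames = driver.findElements(By.cssSelector("frame, iframe"));
    for (int i = 0; i < frames.size(); i++) {
        driver.switchTo().frame(i);          // enter the i-th frame
        sources.add(driver.getPageSource()); // source of that frame's document
        driver.switchTo().defaultContent();  // back to the top before the next frame
    }
    return sources;
}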

Writing a regular expression for nutch's regex-urlfilter.txt file

I'm having some problems with the regex-urlfilter.txt file.
I want to crawl only links that have numbers before '.html'. This should be easy, but I can't get it right...
Here's an example:
http://www.utiltrucks.com/annonce-occasion-camion-poids-lourd/marque-renault/modele-midliner/ref-71015.html
http://www.utiltrucks.com/annonce-occasion-camion-poids-lourd/dpt-.html
I want to catch the first link.
I've tried the following entry in regex-urlfilter.txt:
# accept anything else
+http://www.utiltrucks.com/annonce-occasion.+?[0-9]+.html
I get a message:
0 records selected for fetching, exiting ...
Anybody got an idea how to pull this off?
Note that your URL filters should also match your seed URLs, or else the seeds themselves will be filtered out, and Nutch won't get a chance to parse them and extract the links you want.
For example, if your seed file contains this url http://www.utiltrucks.com/home then you should also add an entry in your regex-urlfilter file like this:
+http://www.utiltrucks.com/home
The same should also be done for all pages on the path from your seed URLs to the target pages you want to extract links from.
You have to start your URL pattern like:
+^(http|https)://www.example.com
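Putting the two answers together: Nutch applies the rules in regex-urlfilter.txt from top to bottom and the first matching rule wins, so you can reject the digit-less .html pages before a broad accept that still lets seed and listing pages through. A sketch, assuming everything you care about lives under www.utiltrucks.com:

# reject .html pages whose name does not end in a digit (catches dpt-.html)
-[^0-9]\.html$
# accept everything else on the site, so seed and listing pages still get fetched
+^https?://www\.utiltrucks\.com/
# reject everything else
-.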